Xception Model Explained

2024-04-02 19:28
Tags: explainer, model, xception


Introduction

The name Xception comes from "Extreme Inception": it extends and improves on the Inception architecture. Inception is a classic convolutional neural network architecture proposed by a team at Google to curb the growth of computation and parameter counts in deep convolutional networks.

Unlike Inception, Xception's main innovation is replacing conventional convolutions with depthwise separable convolutions, which decompose a convolution into two steps: a depthwise convolution and a pointwise convolution.

The depthwise convolution applies a separate spatial kernel to each input channel, which sharply reduces computation and parameter count. The pointwise convolution then uses 1×1 kernels to form linear combinations across channels, restoring the model's cross-channel expressive power. By stacking depthwise separable convolutions, Xception learns feature representations more efficiently and achieves better performance at a comparable computational cost.
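To make the savings concrete, here is a back-of-the-envelope parameter count for a single 3×3 layer; the 256-channel sizes are illustrative, not taken from the paper:

```python
# Parameters of a standard 3x3 convolution versus its depthwise separable split.
c_in, c_out, k = 256, 256, 3

standard = c_in * c_out * k * k   # every output channel sees every input channel
depthwise = c_in * k * k          # one k x k spatial filter per input channel
pointwise = c_in * c_out          # 1x1 cross-channel mixing
separable = depthwise + pointwise

print(standard)              # 589824
print(separable)             # 67840
print(standard / separable)  # ~8.7x fewer parameters
```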

Xception Network Architecture

Figure: a standard Inception module (Inception V3)

Figure: the simplified Inception module

Figure: an equivalent form of the simplified Inception module

Figure: applying the depthwise-separable idea, the number of 3×3 convolutions is made equal to the number of output channels of the 1×1 convolution
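This "extreme" version of the module can be sketched with PyTorch's grouped convolutions (the channel sizes below are illustrative, not taken from the figures): a 1×1 convolution followed by one independent 3×3 filter per output channel.

```python
import torch
import torch.nn as nn

# 1x1 convolution first, then one independent 3x3 filter per output channel
# (groups=out_channels), mirroring the "extreme" Inception module above.
pointwise = nn.Conv2d(256, 128, kernel_size=1, bias=False)
per_channel_3x3 = nn.Conv2d(128, 128, kernel_size=3, padding=1,
                            groups=128, bias=False)

x = torch.randn(1, 256, 28, 28)
y = per_channel_3x3(pointwise(x))
print(y.shape)  # torch.Size([1, 128, 28, 28])
```

Note the order here is pointwise-then-depthwise; the SeparableConv2d used in the reproduction below applies the depthwise stage first, which the Xception paper argues makes little practical difference in a deep stack.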

The Xception model divides into three flows: the Entry flow, the Middle flow, and the Exit flow.

The Entry and Exit flows share some identical components, while the Middle flow differs from both.

A PyTorch Reproduction of the Xception Model

(1) Depthwise separable convolution

class SeparableConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1,
                 padding=0, dilation=1, bias=False):
        super(SeparableConv2d, self).__init__()
        # Depthwise stage: groups=in_channels applies one spatial filter per channel.
        self.conv = nn.Conv2d(in_channels, in_channels, kernel_size, stride,
                              padding, dilation, groups=in_channels, bias=bias)
        # Pointwise stage: a 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                   stride=1, padding=0, dilation=1, groups=1,
                                   bias=bias)

    def forward(self, x):
        x = self.conv(x)
        x = self.pointwise(x)
        return x
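As a quick standalone sanity check (with the two stages written out inline rather than through the class above), the separable split produces the same output shape as a standard 3×3 convolution at a fraction of the parameters; the 64 → 128 channel sizes are illustrative:

```python
import torch
import torch.nn as nn

# The two stages of a separable 64 -> 128 channel, 3x3 layer, written inline.
depthwise = nn.Conv2d(64, 64, 3, padding=1, groups=64, bias=False)
pointwise = nn.Conv2d(64, 128, 1, bias=False)
standard = nn.Conv2d(64, 128, 3, padding=1, bias=False)

x = torch.randn(2, 64, 56, 56)
y = pointwise(depthwise(x))
print(y.shape)  # torch.Size([2, 128, 56, 56])

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(depthwise) + count(pointwise))  # 8768
print(count(standard))                      # 73728
```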

(2) Building the three flow structures

class EntryFlow(nn.Module):
    def __init__(self):
        super(EntryFlow, self).__init__()
        self.headconv = nn.Sequential(
            nn.Conv2d(3, 32, 3, 2, bias=False),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
        )
        self.residual_block1 = nn.Sequential(
            SeparableConv2d(64, 128, 3, padding=1),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            SeparableConv2d(128, 128, 3, padding=1),
            nn.BatchNorm2d(128),
            nn.MaxPool2d(3, stride=2, padding=1),
        )
        self.residual_block2 = nn.Sequential(
            # Not in-place: the shortcut branch must see the unmodified input.
            nn.ReLU(inplace=False),
            SeparableConv2d(128, 256, 3, padding=1),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            SeparableConv2d(256, 256, 3, padding=1),
            nn.BatchNorm2d(256),
            nn.MaxPool2d(3, stride=2, padding=1),
        )
        self.residual_block3 = nn.Sequential(
            nn.ReLU(inplace=False),
            SeparableConv2d(256, 728, 3, padding=1),
            nn.BatchNorm2d(728),
            nn.ReLU(inplace=True),
            SeparableConv2d(728, 728, 3, padding=1),
            nn.BatchNorm2d(728),
            nn.MaxPool2d(3, stride=2, padding=1),
        )
        # Build the strided 1x1 shortcuts in __init__ so their weights are
        # registered with the module and trained; creating them inside
        # forward() would re-initialize them randomly on every call.
        self.shortcut1 = self._shortcut(64, 128)
        self.shortcut2 = self._shortcut(128, 256)
        self.shortcut3 = self._shortcut(256, 728)

    @staticmethod
    def _shortcut(inp, oup):
        return nn.Sequential(
            nn.Conv2d(inp, oup, 1, 2, bias=False),
            nn.BatchNorm2d(oup),
        )

    def forward(self, x):
        x = self.headconv(x)
        x = self.residual_block1(x) + self.shortcut1(x)
        x = self.residual_block2(x) + self.shortcut2(x)
        x = self.residual_block3(x) + self.shortcut3(x)
        return x


class MiddleFlow(nn.Module):
    def __init__(self):
        super(MiddleFlow, self).__init__()
        self.shortcut = nn.Sequential()  # identity branch
        self.conv1 = nn.Sequential(
            # Not in-place: the identity branch must see the unmodified input.
            nn.ReLU(inplace=False),
            SeparableConv2d(728, 728, 3, padding=1),
            nn.BatchNorm2d(728),
            nn.ReLU(inplace=True),
            SeparableConv2d(728, 728, 3, padding=1),
            nn.BatchNorm2d(728),
            nn.ReLU(inplace=True),
            SeparableConv2d(728, 728, 3, padding=1),
            nn.BatchNorm2d(728),
        )

    def forward(self, x):
        residual = self.conv1(x)
        identity = self.shortcut(x)
        return identity + residual


class ExitFlow(nn.Module):
    def __init__(self):
        super(ExitFlow, self).__init__()
        self.residual_with_exit = nn.Sequential(
            nn.ReLU(inplace=False),
            SeparableConv2d(728, 728, 3, padding=1),
            nn.BatchNorm2d(728),
            nn.ReLU(inplace=True),
            SeparableConv2d(728, 1024, 3, padding=1),
            nn.BatchNorm2d(1024),
            nn.MaxPool2d(3, stride=2, padding=1),
        )
        # Registered in __init__ for the same reason as in EntryFlow.
        self.shortcut = nn.Sequential(
            nn.Conv2d(728, 1024, 1, 2, bias=False),
            nn.BatchNorm2d(1024),
        )
        self.endconv = nn.Sequential(
            SeparableConv2d(1024, 1536, 3, 1, 1),
            nn.BatchNorm2d(1536),
            nn.ReLU(inplace=True),
            SeparableConv2d(1536, 2048, 3, 1, 1),
            nn.BatchNorm2d(2048),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d((1, 1)),
        )

    def forward(self, x):
        output = self.residual_with_exit(x) + self.shortcut(x)
        return self.endconv(output)
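As a rough shape check under the kernel/stride/padding values used in this reproduction, the standard output-size formula out = ⌊(in + 2p − k) / s⌋ + 1 traces a 224×224 input through the Entry flow's strided stages:

```python
# Trace the spatial size of a 224x224 input through EntryFlow's strided layers.
def conv_out(size, k, s=1, p=0):
    return (size + 2 * p - k) // s + 1

s = 224
s = conv_out(s, 3, 2)      # headconv 3x3, stride 2, no padding -> 111
s = conv_out(s, 3)         # headconv 3x3, stride 1             -> 109
s = conv_out(s, 3, 2, 1)   # residual_block1 max-pool           -> 55
s = conv_out(s, 3, 2, 1)   # residual_block2 max-pool           -> 28
s = conv_out(s, 3, 2, 1)   # residual_block3 max-pool           -> 14
print(s)  # 14: the Middle flow then runs at 728 x 14 x 14
```

The 1×1 shortcut convolutions use the same stride of 2, so both branches of each residual block agree on these sizes.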

(3) Building the network (complete code)

"""
Copyright (c) 2023, Auorui.
All rights reserved.

Xception: Deep Learning with Depthwise Separable Convolutions
<https://arxiv.org/pdf/1610.02357.pdf>
"""
import torch
import torch.nn as nn


class SeparableConv2d(nn.Module):
    """Depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1,
                 padding=0, dilation=1, bias=False):
        super(SeparableConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, in_channels, kernel_size, stride,
                              padding, dilation, groups=in_channels, bias=bias)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                   stride=1, padding=0, dilation=1, groups=1,
                                   bias=bias)

    def forward(self, x):
        x = self.conv(x)
        x = self.pointwise(x)
        return x


class EntryFlow(nn.Module):
    def __init__(self):
        super(EntryFlow, self).__init__()
        self.headconv = nn.Sequential(
            nn.Conv2d(3, 32, 3, 2, bias=False),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
        )
        self.residual_block1 = nn.Sequential(
            SeparableConv2d(64, 128, 3, padding=1),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            SeparableConv2d(128, 128, 3, padding=1),
            nn.BatchNorm2d(128),
            nn.MaxPool2d(3, stride=2, padding=1),
        )
        self.residual_block2 = nn.Sequential(
            # Not in-place: the shortcut branch must see the unmodified input.
            nn.ReLU(inplace=False),
            SeparableConv2d(128, 256, 3, padding=1),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            SeparableConv2d(256, 256, 3, padding=1),
            nn.BatchNorm2d(256),
            nn.MaxPool2d(3, stride=2, padding=1),
        )
        self.residual_block3 = nn.Sequential(
            nn.ReLU(inplace=False),
            SeparableConv2d(256, 728, 3, padding=1),
            nn.BatchNorm2d(728),
            nn.ReLU(inplace=True),
            SeparableConv2d(728, 728, 3, padding=1),
            nn.BatchNorm2d(728),
            nn.MaxPool2d(3, stride=2, padding=1),
        )
        # Build the strided 1x1 shortcuts in __init__ so their weights are
        # registered with the module and trained; creating them inside
        # forward() would re-initialize them randomly on every call.
        self.shortcut1 = self._shortcut(64, 128)
        self.shortcut2 = self._shortcut(128, 256)
        self.shortcut3 = self._shortcut(256, 728)

    @staticmethod
    def _shortcut(inp, oup):
        return nn.Sequential(
            nn.Conv2d(inp, oup, 1, 2, bias=False),
            nn.BatchNorm2d(oup),
        )

    def forward(self, x):
        x = self.headconv(x)
        x = self.residual_block1(x) + self.shortcut1(x)
        x = self.residual_block2(x) + self.shortcut2(x)
        x = self.residual_block3(x) + self.shortcut3(x)
        return x


class MiddleFlow(nn.Module):
    def __init__(self):
        super(MiddleFlow, self).__init__()
        self.shortcut = nn.Sequential()  # identity branch
        self.conv1 = nn.Sequential(
            # Not in-place: the identity branch must see the unmodified input.
            nn.ReLU(inplace=False),
            SeparableConv2d(728, 728, 3, padding=1),
            nn.BatchNorm2d(728),
            nn.ReLU(inplace=True),
            SeparableConv2d(728, 728, 3, padding=1),
            nn.BatchNorm2d(728),
            nn.ReLU(inplace=True),
            SeparableConv2d(728, 728, 3, padding=1),
            nn.BatchNorm2d(728),
        )

    def forward(self, x):
        residual = self.conv1(x)
        identity = self.shortcut(x)
        return identity + residual


class ExitFlow(nn.Module):
    def __init__(self):
        super(ExitFlow, self).__init__()
        self.residual_with_exit = nn.Sequential(
            nn.ReLU(inplace=False),
            SeparableConv2d(728, 728, 3, padding=1),
            nn.BatchNorm2d(728),
            nn.ReLU(inplace=True),
            SeparableConv2d(728, 1024, 3, padding=1),
            nn.BatchNorm2d(1024),
            nn.MaxPool2d(3, stride=2, padding=1),
        )
        self.shortcut = nn.Sequential(
            nn.Conv2d(728, 1024, 1, 2, bias=False),
            nn.BatchNorm2d(1024),
        )
        self.endconv = nn.Sequential(
            SeparableConv2d(1024, 1536, 3, 1, 1),
            nn.BatchNorm2d(1536),
            nn.ReLU(inplace=True),
            SeparableConv2d(1536, 2048, 3, 1, 1),
            nn.BatchNorm2d(2048),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d((1, 1)),
        )

    def forward(self, x):
        output = self.residual_with_exit(x) + self.shortcut(x)
        return self.endconv(output)


class Xception(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        self.num_classes = num_classes
        self.entry_flow = EntryFlow()
        # Eight independent middle-flow blocks, as in the paper; reusing one
        # block in a loop would share a single set of weights across all eight.
        self.middle_flow = nn.Sequential(*[MiddleFlow() for _ in range(8)])
        self.exit_flow = ExitFlow()
        self.fc = nn.Linear(2048, num_classes)

    def forward(self, x):
        x = self.entry_flow(x)
        x = self.middle_flow(x)
        x = self.exit_flow(x)
        x = x.view(x.size(0), -1)
        return self.fc(x)


if __name__ == '__main__':
    import torchsummary
    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    input = torch.ones(2, 3, 224, 224).to(device)
    net = Xception(num_classes=4).to(device)
    out = net(input)
    print(out)
    print(out.shape)  # torch.Size([2, 4])
    # Prints per-layer output shapes and the total parameter count.
    torchsummary.summary(net, input_size=(3, 224, 224))





http://www.chinasem.cn/article/870847
