DAMO-YOLO's Neck (Efficient RepGFPN) Explained

2023-11-29 05:30

This post walks through the Neck of DAMO-YOLO, the Efficient RepGFPN, and hopefully serves as a useful reference for developers reading through the code.

The architecture figure in the paper is slightly off: the GiraffeNeckV2 code contains only 5 Fusion Blocks, while the figure shows 6.

https://github.com/tinyvision/DAMO-YOLO/blob/master/damo/base_models/necks/giraffe_fpn_btn.py

In the code there are only 5 CSPStage modules.

So I drew an overall diagram myself and opened an issue on GitHub, which the original author confirmed:

I think the pictures in your paper are not rigorous in several places · Issue #91 · tinyvision/DAMO-YOLO · GitHub

To understand the Neck, it's enough to understand what the Fusion Block does; the rest is not much different from a PAN.

```python
import torch
import torch.nn as nn


class CSPStage(nn.Module):
    """One Fusion Block of Efficient RepGFPN: a CSP-style split-transform-merge stage."""

    def __init__(self,
                 block_fn,
                 ch_in,
                 ch_hidden_ratio,
                 ch_out,
                 n,
                 act='swish',
                 spp=False):
        super(CSPStage, self).__init__()

        split_ratio = 2
        ch_first = int(ch_out // split_ratio)  # channels taken by the bypass branch
        ch_mid = int(ch_out - ch_first)        # channels flowing through the block stack
        self.conv1 = ConvBNAct(ch_in, ch_first, 1, act=act)  # 1x1 bypass branch
        self.conv2 = ConvBNAct(ch_in, ch_mid, 1, act=act)    # 1x1 into the block stack
        self.convs = nn.Sequential()

        next_ch_in = ch_mid
        for i in range(n):
            if block_fn == 'BasicBlock_3x3_Reverse':
                self.convs.add_module(
                    str(i),
                    BasicBlock_3x3_Reverse(next_ch_in,
                                           ch_hidden_ratio,
                                           ch_mid,
                                           act=act,
                                           shortcut=True))
            else:
                raise NotImplementedError
            if i == (n - 1) // 2 and spp:
                self.convs.add_module(
                    'spp', SPP(ch_mid * 4, ch_mid, 1, [5, 9, 13], act=act))
            next_ch_in = ch_mid
        # Every intermediate output is kept and concatenated, hence ch_mid * n + ch_first.
        self.conv3 = ConvBNAct(ch_mid * n + ch_first, ch_out, 1, act=act)

    def forward(self, x):
        y1 = self.conv1(x)
        y2 = self.conv2(x)
        mid_out = [y1]
        for conv in self.convs:
            y2 = conv(y2)
            mid_out.append(y2)          # collect the output of every block
        y = torch.cat(mid_out, axis=1)  # concatenate along the channel dimension
        y = self.conv3(y)
        return y
```
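Before moving on, it helps to check the channel bookkeeping in `__init__` above. Here is a quick arithmetic sketch (the numbers are illustrative, not taken from any repo config): conv1 and conv2 split ch_out roughly in half, every block keeps width ch_mid, and conv3 compresses the concatenation of all intermediate outputs back to ch_out:

```python
# Illustrative channel arithmetic for CSPStage (made-up numbers).
ch_out, n = 128, 3

split_ratio = 2
ch_first = ch_out // split_ratio    # 64: the conv1 bypass branch
ch_mid = ch_out - ch_first          # 64: width of each BasicBlock_3x3_Reverse
concat_ch = ch_mid * n + ch_first   # 64*3 + 64 = 256 channels into conv3
print(ch_first, ch_mid, concat_ch)  # 64 64 256; conv3 then maps 256 -> 128
```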

That is the CSPStage code. To read it in full, we first need to understand the two classes it builds on: ConvBNAct and BasicBlock_3x3_Reverse.

```python
class ConvBNAct(nn.Module):
    """A Conv2d -> Batchnorm -> silu/leaky relu block"""

    def __init__(self,
                 in_channels,
                 out_channels,
                 ksize,
                 stride=1,
                 groups=1,
                 bias=False,
                 act='silu',
                 norm='bn',
                 reparam=False):
        super().__init__()
        # same padding
        pad = (ksize - 1) // 2
        self.conv = nn.Conv2d(in_channels,
                              out_channels,
                              kernel_size=ksize,
                              stride=stride,
                              padding=pad,
                              groups=groups,
                              bias=bias)
        if norm is not None:
            self.bn = get_norm(norm, out_channels, inplace=True)
        if act is not None:
            self.act = get_activation(act, inplace=True)
        self.with_norm = norm is not None
        self.with_act = act is not None

    def forward(self, x):
        x = self.conv(x)
        if self.with_norm:
            x = self.bn(x)
        if self.with_act:
            x = self.act(x)
        return x

    def fuseforward(self, x):
        return self.act(self.conv(x))
```

ConvBNAct is easy to follow: Conv + BN + SiLU, and that's it (other activation functions can be plugged in, but the paper uses SiLU).

If the groups parameter is set, the convolution becomes a grouped convolution.
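For the default arguments (norm='bn', act='silu'), ConvBNAct boils down to the following minimal sketch in plain PyTorch (get_norm and get_activation are just the repo's factory helpers; the sizes here are made up):

```python
import torch
import torch.nn as nn

ksize, stride = 3, 1
block = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=ksize, stride=stride,
              padding=(ksize - 1) // 2,  # "same" padding for odd kernel sizes
              bias=False),               # bias is redundant right before BN
    nn.BatchNorm2d(128),
    nn.SiLU(inplace=True),
)
x = torch.randn(1, 64, 40, 40)
print(block(x).shape)  # torch.Size([1, 128, 40, 40]): spatial size unchanged
```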

```python
class BasicBlock_3x3_Reverse(nn.Module):
    def __init__(self,
                 ch_in,
                 ch_hidden_ratio,
                 ch_out,
                 act='relu',
                 shortcut=True):
        super(BasicBlock_3x3_Reverse, self).__init__()
        assert ch_in == ch_out
        ch_hidden = int(ch_in * ch_hidden_ratio)
        self.conv1 = ConvBNAct(ch_hidden, ch_out, 3, stride=1, act=act)
        self.conv2 = RepConv(ch_in, ch_hidden, 3, stride=1, act=act)
        self.shortcut = shortcut

    def forward(self, x):
        y = self.conv2(x)  # note: conv2 (the RepConv) runs first, then conv1
        y = self.conv1(y)
        if self.shortcut:
            return x + y
        else:
            return y
```
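As a quick sanity check (a sketch that assumes ConvBNAct and RepConv from the DAMO-YOLO repo are importable; the sizes and ch_hidden_ratio=1.0 are made up), the block preserves the spatial size, and because ch_in == ch_out is asserted, the residual add is always well-formed:

```python
import torch

# Hypothetical sizes; requires the repo's ConvBNAct / RepConv definitions.
block = BasicBlock_3x3_Reverse(64, 1.0, 64, act='relu', shortcut=True)
x = torch.randn(1, 64, 40, 40)
print(block(x).shape)  # torch.Size([1, 64, 40, 40])
```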

To understand BasicBlock_3x3_Reverse, we first need the RepConv class, which is adapted from the RepVGGBlock of the RepVGG network. (The "Reverse" in the name presumably refers to the forward order: conv2, the RepConv, runs before conv1, the plain 3x3, the reverse of their definition order.)

```python
class RepConv(nn.Module):
    '''RepConv is a basic rep-style block, including training and deploy status
    Code is based on https://github.com/DingXiaoH/RepVGG/blob/main/repvgg.py
    '''

    def __init__(self,
                 in_channels,
                 out_channels,
                 kernel_size=3,
                 stride=1,
                 padding=1,
                 dilation=1,
                 groups=1,
                 padding_mode='zeros',
                 deploy=False,
                 act='relu',
                 norm=None):
        super(RepConv, self).__init__()
        self.deploy = deploy
        self.groups = groups
        self.in_channels = in_channels
        self.out_channels = out_channels

        assert kernel_size == 3
        assert padding == 1

        padding_11 = padding - kernel_size // 2

        if isinstance(act, str):
            self.nonlinearity = get_activation(act)
        else:
            self.nonlinearity = act

        if deploy:
            self.rbr_reparam = nn.Conv2d(in_channels=in_channels,
                                         out_channels=out_channels,
                                         kernel_size=kernel_size,
                                         stride=stride,
                                         padding=padding,
                                         dilation=dilation,
                                         groups=groups,
                                         bias=True,
                                         padding_mode=padding_mode)
        else:
            self.rbr_identity = None
            self.rbr_dense = conv_bn(in_channels=in_channels,
                                     out_channels=out_channels,
                                     kernel_size=kernel_size,
                                     stride=stride,
                                     padding=padding,
                                     groups=groups)
            self.rbr_1x1 = conv_bn(in_channels=in_channels,
                                   out_channels=out_channels,
                                   kernel_size=1,
                                   stride=stride,
                                   padding=padding_11,
                                   groups=groups)

    def forward(self, inputs):
        '''Forward process'''
        if hasattr(self, 'rbr_reparam'):
            return self.nonlinearity(self.rbr_reparam(inputs))

        if self.rbr_identity is None:
            id_out = 0
        else:
            id_out = self.rbr_identity(inputs)

        return self.nonlinearity(
            self.rbr_dense(inputs) + self.rbr_1x1(inputs) + id_out)

    def get_equivalent_kernel_bias(self):
        kernel3x3, bias3x3 = self._fuse_bn_tensor(self.rbr_dense)
        kernel1x1, bias1x1 = self._fuse_bn_tensor(self.rbr_1x1)
        kernelid, biasid = self._fuse_bn_tensor(self.rbr_identity)
        return kernel3x3 + self._pad_1x1_to_3x3_tensor(
            kernel1x1) + kernelid, bias3x3 + bias1x1 + biasid

    def _pad_1x1_to_3x3_tensor(self, kernel1x1):
        if kernel1x1 is None:
            return 0
        else:
            return torch.nn.functional.pad(kernel1x1, [1, 1, 1, 1])

    def _fuse_bn_tensor(self, branch):
        if branch is None:
            return 0, 0
        if isinstance(branch, nn.Sequential):
            kernel = branch.conv.weight
            running_mean = branch.bn.running_mean
            running_var = branch.bn.running_var
            gamma = branch.bn.weight
            beta = branch.bn.bias
            eps = branch.bn.eps
        else:
            assert isinstance(branch, nn.BatchNorm2d)
            if not hasattr(self, 'id_tensor'):
                input_dim = self.in_channels // self.groups
                kernel_value = np.zeros((self.in_channels, input_dim, 3, 3),
                                        dtype=np.float32)
                for i in range(self.in_channels):
                    kernel_value[i, i % input_dim, 1, 1] = 1
                self.id_tensor = torch.from_numpy(kernel_value).to(
                    branch.weight.device)
            kernel = self.id_tensor
            running_mean = branch.running_mean
            running_var = branch.running_var
            gamma = branch.weight
            beta = branch.bias
            eps = branch.eps
        std = (running_var + eps).sqrt()
        t = (gamma / std).reshape(-1, 1, 1, 1)
        return kernel * t, beta - running_mean * gamma / std

    def switch_to_deploy(self):
        if hasattr(self, 'rbr_reparam'):
            return
        kernel, bias = self.get_equivalent_kernel_bias()
        self.rbr_reparam = nn.Conv2d(
            in_channels=self.rbr_dense.conv.in_channels,
            out_channels=self.rbr_dense.conv.out_channels,
            kernel_size=self.rbr_dense.conv.kernel_size,
            stride=self.rbr_dense.conv.stride,
            padding=self.rbr_dense.conv.padding,
            dilation=self.rbr_dense.conv.dilation,
            groups=self.rbr_dense.conv.groups,
            bias=True)
        self.rbr_reparam.weight.data = kernel
        self.rbr_reparam.bias.data = bias
        for para in self.parameters():
            para.detach_()
        self.__delattr__('rbr_dense')
        self.__delattr__('rbr_1x1')
        if hasattr(self, 'rbr_identity'):
            self.__delattr__('rbr_identity')
        if hasattr(self, 'id_tensor'):
            self.__delattr__('id_tensor')
        self.deploy = True
```

The defining feature of RepConv is structural re-parameterization: during training, the RepVGG-style block runs parallel branches (in the full RepVGGBlock: a 3x3 conv, a 1x1 conv, and optionally an identity branch, each with its own BN), and at inference time these branches are fused into a single 3x3 convolution, which greatly reduces inference time (the RepVGG paper and its walkthrough videos are worth a look).
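To make the fusion concrete, here is a self-contained sketch (my own code, not the repo's) of exactly the math that get_equivalent_kernel_bias, _fuse_bn_tensor, and _pad_1x1_to_3x3_tensor implement: fold each branch's BN into its conv, pad the 1x1 kernel to 3x3, sum kernels and biases, and check that one merged conv reproduces the two-branch output:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fuse_conv_bn(conv, bn):
    """Fold BN's affine transform and running stats into the conv's weight/bias."""
    std = (bn.running_var + bn.eps).sqrt()
    t = (bn.weight / std).reshape(-1, 1, 1, 1)
    return conv.weight * t, bn.bias - bn.running_mean * bn.weight / std

c = 8
conv3, bn3 = nn.Conv2d(c, c, 3, padding=1, bias=False), nn.BatchNorm2d(c)
conv1, bn1 = nn.Conv2d(c, c, 1, padding=0, bias=False), nn.BatchNorm2d(c)
bn3.eval(); bn1.eval()  # fusion is only valid with fixed (running) BN stats

k3, b3 = fuse_conv_bn(conv3, bn3)
k1, b1 = fuse_conv_bn(conv1, bn1)
k = k3 + F.pad(k1, [1, 1, 1, 1])  # pad the 1x1 kernel to 3x3, then merge
b = b3 + b1

x = torch.randn(1, c, 16, 16)
with torch.no_grad():
    two_branch = bn3(conv3(x)) + bn1(conv1(x))
    fused = F.conv2d(x, k, b, padding=1)
print(torch.allclose(two_branch, fused, atol=1e-5))  # True
```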

Note that this RepConv keeps only the two-branch structure: rbr_identity is initialized to None, so the forward pass sums just the dense 3x3 branch and the 1x1 branch.
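In practice, the switch to the fused form happens once before export or inference; a typical loop looks like this (model is a placeholder name for a trained network, not a variable from the repo):

```python
# Hypothetical usage: collapse every RepConv into its deploy-time single conv.
for m in model.modules():
    if isinstance(m, RepConv):
        m.switch_to_deploy()
```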

I'll fill in the other details in a future update. The code isn't hard; read it slowly and you can understand all of it. Apologies for anything I've gotten wrong.
