YOLO11 Improvements – Backbone: EfficientRep, a RepVGG-Style Convolutional Neural Network Architecture Designed for Hardware Efficiency
Preface
This article introduces the EfficientRep convolutional neural network architecture and the integration of its core modules into YOLOv11. Because traditional efficiency metrics for neural networks are insensitive to hardware characteristics, the authors propose a hardware-aware network design method and use it to build the EfficientRep architecture. The architecture adopts RepVGG-style convolutions: a multi-branch structure during training to preserve accuracy, converted to a single branch at inference time. We integrate EfficientRep's RepVGGBlock and RepBlock into YOLOv11; experiments show improved training and inference speed on a single-GPU device, and the modified YOLOv11 achieves good results.
Introduction
Abstract
We propose a hardware-efficient convolutional neural network architecture that follows a RepVGG-like design paradigm. Conventional efficiency metrics such as FLOPs and parameter count are insensitive to hardware characteristics such as compute capability and memory bandwidth, so designing network architectures that make effective use of both has become a key problem. This paper presents a hardware-aware network design method, based on which we develop the EfficientRep family of convolutional networks, heavily optimized for high-performance computing hardware such as GPUs and applied in the YOLOv6 object-detection framework. YOLOv6 has released versions v1 and v2, with model variants at several scales: YOLOv6N, YOLOv6S, YOLOv6M, and YOLOv6L. The implementation is open source at https://github.com/meituan/YOLOv6.
Basic Principle
EfficientRep is an efficient RepVGG-style convolutional network architecture designed to make good use of the hardware's compute capability and memory bandwidth. It is built from RepVGG-style convolutions with 3x3 kernels, which are highly optimized on GPUs and CPUs (for example via the Winograd algorithm). During training, each block combines a 3x3 branch, a 1x1 branch, and an identity mapping to preserve accuracy. At inference time, re-parameterization converts the multi-branch structure into a single 3x3 convolution. EfficientRep consists mainly of the EfficientRep backbone and the Rep-PAN neck, both of which are GPU-friendly and were first applied in the YOLOv6 detection framework (YOLOv6-v1).
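The conversion is possible because a convolution followed by batch normalization can be folded into a single convolution: with running statistics μ and σ², scale γ and shift β, the fused convolution uses weights W' = (γ/σ)·W and bias b' = β − γμ/σ. Below is a minimal standalone sketch (illustrative, not the paper's code) that verifies this fold in PyTorch:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A 3x3 convolution (no bias) followed by batch norm, as in each RepVGG branch
conv = nn.Conv2d(4, 4, kernel_size=3, padding=1, bias=False)
bn = nn.BatchNorm2d(4)

# Give BN non-trivial statistics so the equivalence check is meaningful
bn.running_mean.data = torch.randn(4)
bn.running_var.data = torch.rand(4) + 0.5
bn.weight.data = torch.randn(4)
bn.bias.data = torch.randn(4)
bn.eval()  # inference mode: BN uses its running statistics

# Fold BN into the convolution: W' = (gamma/std) * W, b' = beta - gamma*mean/std
fused = nn.Conv2d(4, 4, kernel_size=3, padding=1, bias=True)
with torch.no_grad():
    std = (bn.running_var + bn.eps).sqrt()
    fused.weight.copy_(conv.weight * (bn.weight / std).reshape(-1, 1, 1, 1))
    fused.bias.copy_(bn.bias - bn.running_mean * bn.weight / std)

# The fused single convolution reproduces conv -> BN exactly
x = torch.randn(1, 4, 8, 8)
with torch.no_grad():
    assert torch.allclose(bn(conv(x)), fused(x), atol=1e-4)
```

This is exactly the transformation `_fuse_bn_tensor` performs in the block implementation shown later in this article.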
In YOLOv6-v1, the EfficientRep backbone and Rep-PAN neck improve training and inference speed on a single-GPU device. However, when YOLOv6-v1 was scaled up to medium size, inference speed dropped off too quickly and accuracy could not compete with CSP-style YOLO models. To achieve a better accuracy-speed trade-off in larger models, the authors therefore explored novel designs such as multi-path structures.
Concretely, EfficientRep replaces the stride-2 convolution layers in the backbone with stride-2 RepConv (RepVGGBlock) layers, and replaces the CSP blocks with RepBlocks.
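For reference, the `yolov11-EfficientRep.yaml` referenced in the training script later in this article could follow the usual Ultralytics config layout. The fragment below is a hypothetical sketch, not the actual file: channel widths and repeat counts are illustrative, and the `RepVGGBlock`/`RepBlock` module names must be registered with the Ultralytics model parser for such a config to load.

```yaml
# Hypothetical backbone sketch: stride-2 downsampling via RepVGGBlock (args: [out_ch, k, s]),
# stages via RepBlock (args: [out_ch, n])
backbone:
  # [from, repeats, module, args]
  - [-1, 1, RepVGGBlock, [64, 3, 2]]    # 0-P1/2  stride-2 RepConv
  - [-1, 1, RepVGGBlock, [128, 3, 2]]   # 1-P2/4
  - [-1, 1, RepBlock, [128, 4]]
  - [-1, 1, RepVGGBlock, [256, 3, 2]]   # 3-P3/8
  - [-1, 1, RepBlock, [256, 8]]
  - [-1, 1, RepVGGBlock, [512, 3, 2]]   # 5-P4/16
  - [-1, 1, RepBlock, [512, 8]]
  - [-1, 1, RepVGGBlock, [1024, 3, 2]]  # 7-P5/32
  - [-1, 1, RepBlock, [1024, 4]]
  - [-1, 1, SPPF, [1024, 5]]            # 9
```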
Core Code
import numpy as np
import torch
import torch.nn as nn


# Helper that stacks a convolution and a batch-normalization layer
def conv_bn(in_channels, out_channels, kernel_size, stride, padding, groups=1):
    result = nn.Sequential()
    result.add_module('conv', nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
                                        kernel_size=kernel_size, stride=stride, padding=padding,
                                        groups=groups, bias=False))
    result.add_module('bn', nn.BatchNorm2d(num_features=out_channels))
    return result


class RepVGGBlock(nn.Module):
    """RepVGG-style block: a multi-branch structure (3x3 + 1x1 + identity) during
    training, re-parameterized into a single 3x3 convolution for inference."""

    def __init__(self, in_channels, out_channels, kernel_size=3,
                 stride=1, padding=1, dilation=1, groups=1, padding_mode='zeros',
                 deploy=False, use_se=False):
        super(RepVGGBlock, self).__init__()
        self.deploy = deploy
        self.groups = groups
        self.in_channels = in_channels
        padding_11 = padding - kernel_size // 2
        # SiLU activation
        self.nonlinearity = nn.SiLU()
        self.se = nn.Identity()
        if deploy:
            # Deploy mode: a single fused convolution
            self.rbr_reparam = nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
                                         kernel_size=kernel_size, stride=stride, padding=padding,
                                         dilation=dilation, groups=groups, bias=True,
                                         padding_mode=padding_mode)
        else:
            # Training mode: 3x3 branch, 1x1 branch, and an optional identity branch
            self.rbr_identity = nn.BatchNorm2d(num_features=in_channels) \
                if out_channels == in_channels and stride == 1 else None
            self.rbr_dense = conv_bn(in_channels=in_channels, out_channels=out_channels,
                                     kernel_size=kernel_size, stride=stride, padding=padding,
                                     groups=groups)
            self.rbr_1x1 = conv_bn(in_channels=in_channels, out_channels=out_channels,
                                   kernel_size=1, stride=stride, padding=padding_11, groups=groups)

    def _fuse_bn_tensor(self, branch):
        """Fold a batch-norm layer into the weights of the preceding convolution."""
        if branch is None:
            return 0, 0
        if isinstance(branch, nn.Sequential):
            kernel = branch.conv.weight
            running_mean = branch.bn.running_mean
            running_var = branch.bn.running_var
            gamma = branch.bn.weight
            beta = branch.bn.bias
            eps = branch.bn.eps
        else:
            # Identity branch: express the bare BN as a 3x3 conv with an identity kernel
            assert isinstance(branch, nn.BatchNorm2d)
            if not hasattr(self, 'id_tensor'):
                input_dim = self.in_channels // self.groups
                kernel_value = np.zeros((self.in_channels, input_dim, 3, 3), dtype=np.float32)
                for i in range(self.in_channels):
                    kernel_value[i, i % input_dim, 1, 1] = 1
                self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device)
            kernel = self.id_tensor
            running_mean = branch.running_mean
            running_var = branch.running_var
            gamma = branch.weight
            beta = branch.bias
            eps = branch.eps
        std = (running_var + eps).sqrt()
        t = (gamma / std).reshape(-1, 1, 1, 1)
        return kernel * t, beta - running_mean * gamma / std

    def get_equivalent_kernel_bias(self):
        """Merge the three branches into a single equivalent 3x3 kernel and bias."""
        kernel3x3, bias3x3 = self._fuse_bn_tensor(self.rbr_dense)
        kernel1x1, bias1x1 = self._fuse_bn_tensor(self.rbr_1x1)
        kernelid, biasid = self._fuse_bn_tensor(self.rbr_identity)
        return kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid, bias3x3 + bias1x1 + biasid

    def _pad_1x1_to_3x3_tensor(self, kernel1x1):
        """Zero-pad a 1x1 kernel to 3x3 so it can be added to the 3x3 kernel."""
        if kernel1x1 is None:
            return 0
        return torch.nn.functional.pad(kernel1x1, [1, 1, 1, 1])

    def switch_to_deploy(self):
        """Re-parameterize in place: replace the multi-branch structure with the
        single fused 3x3 convolution used at inference time."""
        if hasattr(self, 'rbr_reparam'):
            return
        kernel, bias = self.get_equivalent_kernel_bias()
        self.rbr_reparam = nn.Conv2d(in_channels=self.rbr_dense.conv.in_channels,
                                     out_channels=self.rbr_dense.conv.out_channels,
                                     kernel_size=self.rbr_dense.conv.kernel_size,
                                     stride=self.rbr_dense.conv.stride,
                                     padding=self.rbr_dense.conv.padding,
                                     dilation=self.rbr_dense.conv.dilation,
                                     groups=self.rbr_dense.conv.groups, bias=True)
        self.rbr_reparam.weight.data = kernel
        self.rbr_reparam.bias.data = bias
        for para in self.parameters():
            para.detach_()
        self.__delattr__('rbr_dense')
        self.__delattr__('rbr_1x1')
        if hasattr(self, 'rbr_identity'):
            self.__delattr__('rbr_identity')
        self.deploy = True

    def forward(self, inputs):
        if hasattr(self, 'rbr_reparam'):
            # Deploy mode: a single fused convolution
            return self.nonlinearity(self.se(self.rbr_reparam(inputs)))
        # Training mode: sum of the 3x3, 1x1, and identity branches
        id_out = 0 if self.rbr_identity is None else self.rbr_identity(inputs)
        return self.nonlinearity(self.se(self.rbr_dense(inputs) + self.rbr_1x1(inputs) + id_out))


class RepBlock(nn.Module):
    """A stage block built from n stacked RepVGGBlocks."""

    def __init__(self, in_channels, out_channels, n=1, isTrue=None):
        super().__init__()
        self.conv1 = RepVGGBlock(in_channels, out_channels)
        self.block = nn.Sequential(*(RepVGGBlock(out_channels, out_channels)
                                     for _ in range(n - 1))) if n > 1 else None

    def forward(self, x):
        x = self.conv1(x)
        if self.block is not None:
            x = self.block(x)
        return x
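To see that the merge performed by `get_equivalent_kernel_bias` really yields an equivalent single convolution, the kernel arithmetic can be checked numerically. The sketch below is a self-contained miniature (BN is omitted for brevity; raw kernels are used directly): it expresses the identity branch as a 3x3 kernel with a 1 at the center, zero-pads the 1x1 kernel to 3x3, sums the three kernels, and confirms that one convolution with the summed kernel reproduces the three-branch output.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
C = 4                                 # in_channels == out_channels, so the identity branch exists
x = torch.randn(1, C, 8, 8)

k3 = torch.randn(C, C, 3, 3)          # 3x3 branch kernel
k1 = torch.randn(C, C, 1, 1)          # 1x1 branch kernel

# Identity branch as a 3x3 kernel: a 1 at the center of each channel's own filter
kid = torch.zeros(C, C, 3, 3)
for i in range(C):
    kid[i, i, 1, 1] = 1.0

# Multi-branch output: 3x3 conv + 1x1 conv + identity
multi = F.conv2d(x, k3, padding=1) + F.conv2d(x, k1) + x

# Re-parameterized single kernel: zero-pad the 1x1 to 3x3, then sum all three
merged = k3 + F.pad(k1, [1, 1, 1, 1]) + kid
single = F.conv2d(x, merged, padding=1)

assert torch.allclose(multi, single, atol=1e-4)
```

Convolution is linear in its kernel, so summing kernels before convolving equals summing the branch outputs; this is the identity the re-parameterization relies on.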
Experiment
Script
import warnings
warnings.filterwarnings('ignore')
from ultralytics import YOLO

if __name__ == '__main__':
    # Change this to the path of your model config file
    model = YOLO('/root/ultralytics-main/ultralytics/cfg/models/11/yolov11-EfficientRep.yaml')
    # Change this to the path of your dataset config file
    model.train(data='/root/ultralytics-main/ultralytics/cfg/datasets/coco8.yaml',
                cache=False,
                imgsz=640,
                epochs=10,
                single_cls=False,  # whether this is single-class detection
                batch=8,
                close_mosaic=10,
                workers=0,
                optimizer='SGD',
                amp=True,
                project='runs/train',
                name='EfficientRep',
                )