PyTorch Example: CIFAR-10 Dataset Classification (ResNet)

2024-03-02 16:20

This article presents a PyTorch example of classifying the CIFAR-10 dataset with a ResNet; hopefully it serves as a useful reference for developers working on the same task.

Building on the previous post, PyTorch Example: CIFAR-10 Dataset Classification (VGG), this post redesigns the Net() class as a ResNet (with BatchNorm) and applies it to the CIFAR-10 classification task.

Implementation of the ResNet network structure:

#create residual block
class ResidualBlock(nn.Module):
    def __init__(self, inchannel, outchannel, stride=1):
        super(ResidualBlock, self).__init__()
        #define conv2d -> BN -> ReLU -> conv2d -> BN
        self.left = nn.Sequential(
            nn.Conv2d(inchannel, outchannel, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(outchannel),
            nn.ReLU(inplace=True),
            nn.Conv2d(outchannel, outchannel, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(outchannel)
        )
        #define shortcut: project with a 1x1 conv when the shape changes
        self.shortcut = nn.Sequential()
        if stride != 1 or inchannel != outchannel:
            self.shortcut = nn.Sequential(
                nn.Conv2d(inchannel, outchannel, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(outchannel)
            )

    def forward(self, x):
        out = self.left(x)
        out += self.shortcut(x)
        out = F.relu(out)
        return out

class ResNet(nn.Module):
    def __init__(self, ResidualBlock, num_classes=10):
        super(ResNet, self).__init__()
        self.inchannel = 64
        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(),
        )
        #use make_layer to append residual blocks
        self.layer1 = self.make_layer(ResidualBlock, 64,  2, stride=1)
        self.layer2 = self.make_layer(ResidualBlock, 128, 2, stride=2)
        self.layer3 = self.make_layer(ResidualBlock, 256, 2, stride=2)
        self.layer4 = self.make_layer(ResidualBlock, 512, 2, stride=2)
        self.fc = nn.Linear(512, num_classes)

    #use nn.Sequential to create a block (stage)
    def make_layer(self, block, channels, num_blocks, stride):
        strides = [stride] + [1] * (num_blocks - 1)   #e.g. [1, 1] for layer1, [2, 1] for layer2-4
        layers = []
        for stride in strides:
            layers.append(block(self.inchannel, channels, stride))
            self.inchannel = channels
        return nn.Sequential(*layers)

    def forward(self, x):
        out = self.conv1(x)
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.layer4(out)
        out = F.avg_pool2d(out, 4)
        out = out.view(out.size(0), -1)
        out = self.fc(out)
        return out

def ResNet18():
    return ResNet(ResidualBlock)
#instance for ResNet18
#net = ResNet18()
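As a quick sanity check on the architecture above, the spatial size of the feature maps can be tracked with the standard convolution output-size formula. This is a minimal pure-Python sketch (no PyTorch needed), assuming the 32x32 CIFAR-10 input:

```python
# Track feature-map spatial size through the ResNet18 above for a 32x32 CIFAR-10 image.
def conv_out(size, kernel=3, stride=1, padding=1):
    # standard convolution output-size formula
    return (size + 2 * padding - kernel) // stride + 1

size = conv_out(32)               # conv1: stride 1 keeps 32x32
for stride in (1, 2, 2, 2):       # first block of layer1..layer4
    size = conv_out(size, stride=stride)

print(size)  # 4 -> F.avg_pool2d(out, 4) then yields a 1x1x512 tensor before the fc layer
```

This confirms why forward() pools with a window of 4 and why nn.Linear(512, num_classes) matches the flattened feature size.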

Complete code:

import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.autograd import Variable
import torchvision.transforms as transforms
from torchvision import models
import matplotlib.pyplot as plt
import numpy as np

def imshow(img):
    img = img / 2 + 0.5   #undo the Normalize transform below
    np_img = img.numpy()
    plt.imshow(np.transpose(np_img, (1, 2, 0)))
#define Parameter for data
BATCH_SIZE = 4
EPOCH = 4
#define transform
#hint: Normalize(mean, std) normalizes each RGB channel
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5),(0.5, 0.5, 0.5))])
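Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) maps each channel value from [0, 1] (after ToTensor) to [-1, 1] via (v - mean) / std. A tiny sketch of the arithmetic, including the inverse that imshow's img / 2 + 0.5 performs:

```python
def normalize(v, mean=0.5, std=0.5):
    # what transforms.Normalize applies per channel value
    return (v - mean) / std

def denormalize(v, mean=0.5, std=0.5):
    # the inverse, i.e. imshow's img / 2 + 0.5
    return v * std + mean

print(normalize(0.0), normalize(1.0))   # -1.0 1.0
print(denormalize(normalize(0.25)))     # 0.25
```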
#define trainloader
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=False, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=BATCH_SIZE, shuffle=True, num_workers=2)
#define testloader
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=BATCH_SIZE, shuffle=True, num_workers=2)
#define class
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

#create residual block
class ResidualBlock(nn.Module):
    def __init__(self, inchannel, outchannel, stride=1):
        super(ResidualBlock, self).__init__()
        #define conv2d -> BN -> ReLU -> conv2d -> BN
        self.left = nn.Sequential(
            nn.Conv2d(inchannel, outchannel, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(outchannel),
            nn.ReLU(inplace=True),
            nn.Conv2d(outchannel, outchannel, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(outchannel)
        )
        #define shortcut: project with a 1x1 conv when the shape changes
        self.shortcut = nn.Sequential()
        if stride != 1 or inchannel != outchannel:
            self.shortcut = nn.Sequential(
                nn.Conv2d(inchannel, outchannel, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(outchannel)
            )

    def forward(self, x):
        out = self.left(x)
        out += self.shortcut(x)
        out = F.relu(out)
        return out

class ResNet(nn.Module):
    def __init__(self, ResidualBlock, num_classes=10):
        super(ResNet, self).__init__()
        self.inchannel = 64
        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(),
        )
        #use make_layer to append residual blocks
        self.layer1 = self.make_layer(ResidualBlock, 64,  2, stride=1)
        self.layer2 = self.make_layer(ResidualBlock, 128, 2, stride=2)
        self.layer3 = self.make_layer(ResidualBlock, 256, 2, stride=2)
        self.layer4 = self.make_layer(ResidualBlock, 512, 2, stride=2)
        self.fc = nn.Linear(512, num_classes)

    #use nn.Sequential to create a block (stage)
    def make_layer(self, block, channels, num_blocks, stride):
        strides = [stride] + [1] * (num_blocks - 1)   #e.g. [1, 1] for layer1, [2, 1] for layer2-4
        layers = []
        for stride in strides:
            layers.append(block(self.inchannel, channels, stride))
            self.inchannel = channels
        return nn.Sequential(*layers)

    def forward(self, x):
        out = self.conv1(x)
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.layer4(out)
        out = F.avg_pool2d(out, 4)
        out = out.view(out.size(0), -1)
        out = self.fc(out)
        return out

def ResNet18():
    return ResNet(ResidualBlock)

net = ResNet18()
if torch.cuda.is_available():
    net.cuda()
print(net)
#define loss
cost = nn.CrossEntropyLoss()
#define optimizer
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
print('start')
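For reference, the update rule behind optim.SGD(lr=0.001, momentum=0.9) can be sketched for a single scalar weight; the gradient values here are hypothetical:

```python
lr, momentum = 0.001, 0.9
w, velocity = 1.0, 0.0

for g in (0.4, 0.4, 0.4):                  # hypothetical gradients over three steps
    velocity = momentum * velocity + g     # momentum buffer (PyTorch convention, no dampening)
    w -= lr * velocity                     # parameter update

print(round(w, 6))  # 0.997756
```

Because the velocity accumulates past gradients, the effective step grows when gradients keep pointing the same way, which is what speeds up SGD on consistent descent directions.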
#iteration for training
#setting for epoch
for epoch in range(EPOCH):
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data
        inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = cost(outputs, labels)
        loss.backward()
        optimizer.step()
        #print loss result
        running_loss += loss.item()
        if i % 2000 == 1999:
            print('[%d, %5d]  loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0
print('done')

#get random image and label
dataiter = iter(testloader)
images, labels = next(dataiter)
#imshow(torchvision.utils.make_grid(images))
print('groundTruth: ', ''.join('%6s' % classes[labels[j]] for j in range(4)))

#get the predict result
outputs = net(Variable(images.cuda()))
_, pred = torch.max(outputs.data, 1)
print('prediction: ', ''.join('%6s' % classes[pred[j]] for j in range(4)))

#test the whole result
correct = 0.0
total = 0
for data in testloader:
    images, labels = data
    outputs = net(Variable(images.cuda()))
    _, pred = torch.max(outputs.data, 1)
    total += labels.size(0)
    correct += (pred == labels.cuda()).sum()
print('average Accuracy: %d %%' % (100 * correct / total))

#list each class prediction
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
for data in testloader:
    images, labels = data
    outputs = net(Variable(images.cuda()))
    _, pred = torch.max(outputs.data, 1)
    c = (pred == labels.cuda()).squeeze()
    for i in range(4):
        label = labels[i]
        class_correct[label] += float(c[i])
        class_total[label] += 1
print('each class accuracy: \n')
for i in range(10):
    print('Accuracy: %6s %2d %%' % (classes[i], 100 * class_correct[i] / class_total[i]))
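The per-class bookkeeping above reduces to tallying (label, prediction) pairs. A minimal sketch with a hypothetical four-image batch over four classes (names shortened to avoid clashing with the full script):

```python
class_names = ('plane', 'car', 'bird', 'cat')
correct_per_class = [0.0] * len(class_names)
total_per_class = [0] * len(class_names)

# hypothetical (label, prediction) pairs for one batch of 4 images
for label, pred in [(0, 0), (1, 1), (2, 1), (3, 3)]:
    correct_per_class[label] += float(pred == label)
    total_per_class[label] += 1

for i in range(len(class_names)):
    print('Accuracy: %6s %3d %%' % (class_names[i], 100 * correct_per_class[i] / total_per_class[i]))
```

Here the 'bird' image was misclassified as 'car', so its row reports 0 % while the others report 100 %. Note that in the real script a class never sampled into the test loop would leave total_per_class at 0 and make the division fail.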

Experimental results:

[Note]: With more compute available, the model was retrained with larger training EPOCH values; the statistics are as follows:

EPOCH    2               4        8
Loss     0.748 (0.789)   0.455    0.152
Acc      74% (71%)       79%      81%

The values in parentheses are the loss and accuracy of the VGG network at EPOCH = 2. As EPOCH increases, the loss keeps falling and the accuracy keeps rising; at EPOCH = 8, ResNet beats VGG by 10 percentage points. This indicates that passing residual information on to the next stage effectively eases the overfitting and training difficulties of deep networks. In object detection, RetinaNet and other detectors built on ResNet backbones adopt the same idea and achieve strong detection results.

Practice makes perfect!

github source code : https://github.com/GinkgoX/CAFAR10_Classification_Task/blob/master/CAFAR10_ResNet.ipynb




