[python pytorch] Implementing Logistic Regression with PyTorch

2024-09-07 06:18

This article walks through implementing logistic regression with PyTorch, trained on MNIST. Hopefully it serves as a useful reference for developers tackling the same problem.

A PyTorch logistic regression learning demo:

import torch
import torch.nn as nn
import torchvision.datasets as dsets
import torchvision.transforms as transforms
# Hyper-parameters
input_size = 784          # 28*28 pixels per MNIST image
num_classes = 10
num_epochs = 10
batch_size = 50
learning_rate = 0.001

# MNIST dataset (images and labels)
train_dataset = dsets.MNIST(root='./data', train=True,
                            transform=transforms.ToTensor(), download=True)
print(train_dataset)
test_dataset = dsets.MNIST(root='./data', train=False,
                           transform=transforms.ToTensor())

# Dataset loaders (input pipeline)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size, shuffle=False)

# Model: logistic regression is a single linear layer
class LogisticRegression(nn.Module):
    def __init__(self, input_size, num_classes):
        super(LogisticRegression, self).__init__()
        self.linear = nn.Linear(input_size, num_classes)

    def forward(self, x):
        return self.linear(x)

model = LogisticRegression(input_size, num_classes)

# Loss and optimizer
# Softmax is internally computed by CrossEntropyLoss.
# Pass the parameters to be updated to the optimizer.
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

# Train the model
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        images = images.view(-1, 28 * 28)  # flatten each image to a 784-dim vector

        # Forward + backward + optimize
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        if (i + 1) % 100 == 0:
            print('Epoch: [%d/%d], Step: [%d/%d], Loss: %.4f'
                  % (epoch + 1, num_epochs, i + 1,
                     len(train_dataset) // batch_size, loss.item()))

# Test the model
correct = 0
total = 0
with torch.no_grad():
    for images, labels in test_loader:
        images = images.view(-1, 28 * 28)
        outputs = model(images)
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print('Accuracy of the model on the 10000 test images: %d %%'
      % (100 * correct / total))

# Save the model
torch.save(model.state_dict(), 'model.pkl')
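Since the script saves only the state dict, loading it later requires re-instantiating the same model class first. Below is a minimal sketch of reloading the saved weights and running inference; the random tensor standing in for a flattened MNIST image and the 'model.pkl' path mirror the script above, but are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Same architecture as the training script: a single linear layer.
class LogisticRegression(nn.Module):
    def __init__(self, input_size, num_classes):
        super(LogisticRegression, self).__init__()
        self.linear = nn.Linear(input_size, num_classes)

    def forward(self, x):
        return self.linear(x)

model = LogisticRegression(784, 10)
torch.save(model.state_dict(), 'model.pkl')

# Reload the weights into a fresh instance and switch to eval mode.
restored = LogisticRegression(784, 10)
restored.load_state_dict(torch.load('model.pkl'))
restored.eval()

# Inference on a dummy 784-dim vector (stands in for a flattened 28x28 image).
with torch.no_grad():
    dummy = torch.randn(1, 784)
    logits = restored(dummy)
    _, predicted = torch.max(logits, 1)
print('Predicted class:', predicted.item())
```

`load_state_dict` copies tensors by parameter name, so the fresh instance must declare its layers under the same attribute names (here, `linear`) as the saved model.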

Output:

Epoch: [1/10], Step: [100/1200], Loss: 2.2397
Epoch: [1/10], Step: [200/1200], Loss: 2.1378
Epoch: [1/10], Step: [300/1200], Loss: 2.0500
Epoch: [1/10], Step: [400/1200], Loss: 1.9401
Epoch: [1/10], Step: [500/1200], Loss: 1.9175
Epoch: [1/10], Step: [600/1200], Loss: 1.8203
Epoch: [1/10], Step: [700/1200], Loss: 1.7322
Epoch: [1/10], Step: [800/1200], Loss: 1.6910
Epoch: [1/10], Step: [900/1200], Loss: 1.6678
Epoch: [1/10], Step: [1000/1200], Loss: 1.5577
Epoch: [1/10], Step: [1100/1200], Loss: 1.5113
Epoch: [1/10], Step: [1200/1200], Loss: 1.5671
Epoch: [2/10], Step: [100/1200], Loss: 1.4560
Epoch: [2/10], Step: [200/1200], Loss: 1.3170
Epoch: [2/10], Step: [300/1200], Loss: 1.3822
Epoch: [2/10], Step: [400/1200], Loss: 1.2793
Epoch: [2/10], Step: [500/1200], Loss: 1.4281
Epoch: [2/10], Step: [600/1200], Loss: 1.2763
Epoch: [2/10], Step: [700/1200], Loss: 1.1570
Epoch: [2/10], Step: [800/1200], Loss: 1.1050
Epoch: [2/10], Step: [900/1200], Loss: 1.1151
Epoch: [2/10], Step: [1000/1200], Loss: 1.0385
Epoch: [2/10], Step: [1100/1200], Loss: 1.0978
Epoch: [2/10], Step: [1200/1200], Loss: 1.0007
Epoch: [3/10], Step: [100/1200], Loss: 1.1849
Epoch: [3/10], Step: [200/1200], Loss: 1.0002
Epoch: [3/10], Step: [300/1200], Loss: 1.0198
Epoch: [3/10], Step: [400/1200], Loss: 0.9248
Epoch: [3/10], Step: [500/1200], Loss: 0.8974
Epoch: [3/10], Step: [600/1200], Loss: 1.1095
Epoch: [3/10], Step: [700/1200], Loss: 1.0900
Epoch: [3/10], Step: [800/1200], Loss: 1.0178
Epoch: [3/10], Step: [900/1200], Loss: 0.9809
Epoch: [3/10], Step: [1000/1200], Loss: 0.9831
Epoch: [3/10], Step: [1100/1200], Loss: 0.8701
Epoch: [3/10], Step: [1200/1200], Loss: 0.9855
Epoch: [4/10], Step: [100/1200], Loss: 0.9081
Epoch: [4/10], Step: [200/1200], Loss: 0.8791
Epoch: [4/10], Step: [300/1200], Loss: 0.7540
Epoch: [4/10], Step: [400/1200], Loss: 0.9443
Epoch: [4/10], Step: [500/1200], Loss: 0.9346
Epoch: [4/10], Step: [600/1200], Loss: 0.8974
Epoch: [4/10], Step: [700/1200], Loss: 0.8897
Epoch: [4/10], Step: [800/1200], Loss: 0.7797
Epoch: [4/10], Step: [900/1200], Loss: 0.8608
Epoch: [4/10], Step: [1000/1200], Loss: 0.9216
Epoch: [4/10], Step: [1100/1200], Loss: 0.8676
Epoch: [4/10], Step: [1200/1200], Loss: 0.9251
Epoch: [5/10], Step: [100/1200], Loss: 0.7640
Epoch: [5/10], Step: [200/1200], Loss: 0.6955
Epoch: [5/10], Step: [300/1200], Loss: 0.8431
Epoch: [5/10], Step: [400/1200], Loss: 0.8489
Epoch: [5/10], Step: [500/1200], Loss: 0.7191
Epoch: [5/10], Step: [600/1200], Loss: 0.6671
Epoch: [5/10], Step: [700/1200], Loss: 0.6980
Epoch: [5/10], Step: [800/1200], Loss: 0.6837
Epoch: [5/10], Step: [900/1200], Loss: 0.9087
Epoch: [5/10], Step: [1000/1200], Loss: 0.7784
Epoch: [5/10], Step: [1100/1200], Loss: 0.7890
Epoch: [5/10], Step: [1200/1200], Loss: 1.0480
Epoch: [6/10], Step: [100/1200], Loss: 0.5834
Epoch: [6/10], Step: [200/1200], Loss: 0.8300
Epoch: [6/10], Step: [300/1200], Loss: 0.8316
Epoch: [6/10], Step: [400/1200], Loss: 0.7249
Epoch: [6/10], Step: [500/1200], Loss: 0.6184
Epoch: [6/10], Step: [600/1200], Loss: 0.7505
Epoch: [6/10], Step: [700/1200], Loss: 0.6599
Epoch: [6/10], Step: [800/1200], Loss: 0.7170
Epoch: [6/10], Step: [900/1200], Loss: 0.6857
Epoch: [6/10], Step: [1000/1200], Loss: 0.6543
Epoch: [6/10], Step: [1100/1200], Loss: 0.5679
Epoch: [6/10], Step: [1200/1200], Loss: 0.8261
Epoch: [7/10], Step: [100/1200], Loss: 0.7144
Epoch: [7/10], Step: [200/1200], Loss: 0.7573
Epoch: [7/10], Step: [300/1200], Loss: 0.7254
Epoch: [7/10], Step: [400/1200], Loss: 0.5918
Epoch: [7/10], Step: [500/1200], Loss: 0.6959
Epoch: [7/10], Step: [600/1200], Loss: 0.7058
Epoch: [7/10], Step: [700/1200], Loss: 0.7382
Epoch: [7/10], Step: [800/1200], Loss: 0.7282
Epoch: [7/10], Step: [900/1200], Loss: 0.6750
Epoch: [7/10], Step: [1000/1200], Loss: 0.6019
Epoch: [7/10], Step: [1100/1200], Loss: 0.6615
Epoch: [7/10], Step: [1200/1200], Loss: 0.5851
Epoch: [8/10], Step: [100/1200], Loss: 0.6492
Epoch: [8/10], Step: [200/1200], Loss: 0.5439
Epoch: [8/10], Step: [300/1200], Loss: 0.6613
Epoch: [8/10], Step: [400/1200], Loss: 0.6486
Epoch: [8/10], Step: [500/1200], Loss: 0.8281
Epoch: [8/10], Step: [600/1200], Loss: 0.6263
Epoch: [8/10], Step: [700/1200], Loss: 0.6541
Epoch: [8/10], Step: [800/1200], Loss: 0.5080
Epoch: [8/10], Step: [900/1200], Loss: 0.7020
Epoch: [8/10], Step: [1000/1200], Loss: 0.6421
Epoch: [8/10], Step: [1100/1200], Loss: 0.6207
Epoch: [8/10], Step: [1200/1200], Loss: 0.9254
Epoch: [9/10], Step: [100/1200], Loss: 0.7428
Epoch: [9/10], Step: [200/1200], Loss: 0.6815
Epoch: [9/10], Step: [300/1200], Loss: 0.6418
Epoch: [9/10], Step: [400/1200], Loss: 0.7096
Epoch: [9/10], Step: [500/1200], Loss: 0.6846
Epoch: [9/10], Step: [600/1200], Loss: 0.5124
Epoch: [9/10], Step: [700/1200], Loss: 0.6300
Epoch: [9/10], Step: [800/1200], Loss: 0.6340
Epoch: [9/10], Step: [900/1200], Loss: 0.5593
Epoch: [9/10], Step: [1000/1200], Loss: 0.5706
Epoch: [9/10], Step: [1100/1200], Loss: 0.6258
Epoch: [9/10], Step: [1200/1200], Loss: 0.7627
Epoch: [10/10], Step: [100/1200], Loss: 0.5254
Epoch: [10/10], Step: [200/1200], Loss: 0.5318
Epoch: [10/10], Step: [300/1200], Loss: 0.5448
Epoch: [10/10], Step: [400/1200], Loss: 0.5634
Epoch: [10/10], Step: [500/1200], Loss: 0.6398
Epoch: [10/10], Step: [600/1200], Loss: 0.7158
Epoch: [10/10], Step: [700/1200], Loss: 0.6169
Epoch: [10/10], Step: [800/1200], Loss: 0.5641
Epoch: [10/10], Step: [900/1200], Loss: 0.5698
Epoch: [10/10], Step: [1000/1200], Loss: 0.5612
Epoch: [10/10], Step: [1100/1200], Loss: 0.5126
Epoch: [10/10], Step: [1200/1200], Loss: 0.6746
Accuracy of the model on the 10000 test images: 87 %
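As the comment in the code notes, the model outputs raw logits because nn.CrossEntropyLoss applies log-softmax internally before the negative log-likelihood loss. A small sketch (with made-up logits and target) verifying that equivalence:

```python
import torch
import torch.nn as nn

# Hypothetical logits for one sample over 3 classes, true class = 0.
logits = torch.tensor([[2.0, 0.5, -1.0]])
target = torch.tensor([0])

# CrossEntropyLoss on raw logits...
ce = nn.CrossEntropyLoss()(logits, target)
# ...equals NLLLoss applied to explicit log-softmax outputs.
manual = nn.NLLLoss()(torch.log_softmax(logits, dim=1), target)
print(ce.item(), manual.item())
```

This is why the `forward` method above returns `self.linear(x)` directly, with no softmax layer: adding one before CrossEntropyLoss would apply softmax twice.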

That concludes this introduction to implementing logistic regression with PyTorch. We hope it is helpful to fellow programmers!



