PyTorch: Easily Control Multiple CPUs/GPUs/TPUs with Accelerate

2024-03-09 10:30



    • Preface
    • Official Example
    • Controlling Multiple CPUs/GPUs/TPUs Within a Single Program
      • A Quick Note
      • Device Environment
      • Imports
      • Loading the Data: FashionMNIST
      • Creating a Simple CNN Model
      • Training Function: Training Only
      • Training Function: Training and Validation
      • Training
    • Controlling Multiple CPUs/GPUs/TPUs Across Multiple Servers and Processes
    • Reference Links

Preface

  • CPU? GPU? TPU?
    • Too many compute devices, too much confusion?
    • Rewriting piles of code every time you switch environments?
    • Not sure how to use multiple CPUs/GPUs/TPUs, or just want an easier way to do it?
  • OK! OK! OK!
    • The Accelerate library from HuggingFace solves these problems for you: with only a few changed lines of code, your script adapts to the available compute devices automatically (see the quick sketch at the end of this list).
  • Useful links
    • Official docs: https://huggingface.co/docs/accelerate/index
    • GitHub: https://github.com/huggingface/accelerate
    • Install (version >= 0.14 recommended): $ pip install accelerate
  • Let's see how to use it
    • You can also go straight to the complete notebook example I put together on Kaggle
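  • A tiny sketch of what that automation looks like (my addition, assuming Accelerate is installed): Accelerator picks the compute device by itself, so the same script runs unchanged on CPU, GPU, or TPU
from accelerate import Accelerator

accelerator = Accelerator()
# Prints cpu, cuda, or xla:..., depending on what the environment provides
print(accelerator.device)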

Official Example

  • First, a quick look at the overall shape
  • Remove the old .to(device) code, create an Accelerator, and let it handle the model, optimizer, data, and loss.backward(); that is all it takes
import torch
import torch.nn.functional as F
from datasets import load_dataset
from accelerate import Accelerator

# device = 'cpu'
accelerator = Accelerator()

# model = torch.nn.Transformer().to(device)
model = torch.nn.Transformer()
optimizer = torch.optim.Adam(model.parameters())

dataset = load_dataset('my_dataset')
data = torch.utils.data.DataLoader(dataset, shuffle=True)

model, optimizer, data = accelerator.prepare(model, optimizer, data)

model.train()
for epoch in range(10):
    for source, targets in data:
        # source = source.to(device)
        # targets = targets.to(device)
        optimizer.zero_grad()
        output = model(source)
        loss = F.cross_entropy(output, targets)
        # loss.backward()
        accelerator.backward(loss)
        optimizer.step()

Controlling Multiple CPUs/GPUs/TPUs Within a Single Program

  • For full details, see the official Examples

A Quick Note

  • For a single compute device, small changes like those in the example above are all you need
  • With multiple compute devices (e.g. several GPUs) there are a few extra things to handle, so below is a complete PyTorch training example
    • You can compare it with the example I posted earlier: CNN image classification on FashionMNIST
    • Or go straight to the complete notebook example I put together on Kaggle

Device Environment

  • Check the current GPUs (two Tesla T4s) with the command $ nvidia-smi
Thu Apr 27 10:53:26 2023       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.161.03   Driver Version: 470.161.03   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:04.0 Off |                    0 |
| N/A   43C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Tesla T4            Off  | 00000000:00:05.0 Off |                    0 |
| N/A   41C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
  • Install or upgrade Accelerate with the command $ pip install --upgrade accelerate (prefix with ! inside a notebook)

Imports

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision.transforms import ToTensor, Compose
import torchvision.datasets as datasets
from accelerate import Accelerator
from accelerate import notebook_launcher
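
# Optional sanity check (my addition, not in the original notebook):
# confirm that PyTorch actually sees both GPUs before training
print(torch.cuda.is_available())   # expect True on this machine
print(torch.cuda.device_count())   # expect 2 for the two Tesla T4s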

Loading the Data: FashionMNIST

train_data = datasets.FashionMNIST(
    root="./data",
    train=True,
    download=True,
    transform=Compose([ToTensor()])
)
test_data = datasets.FashionMNIST(
    root="./data",
    train=False,
    download=True,
    transform=Compose([ToTensor()])
)
print(train_data.data.shape)
print(test_data.data.shape)
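  • If the download succeeds, the two print calls show the standard FashionMNIST split sizes:
torch.Size([60000, 28, 28])
torch.Size([10000, 28, 28])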

Creating a Simple CNN Model

class CNNModel(nn.Module):
    def __init__(self):
        super(CNNModel, self).__init__()
        self.module1 = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2)
        )
        self.module2 = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2)
        )
        self.flatten = nn.Flatten()
        self.linear1 = nn.Linear(7 * 7 * 64, 64)
        self.linear2 = nn.Linear(64, 10)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.module1(x)
        out = self.module2(out)
        out = self.flatten(out)
        out = self.linear1(out)
        out = self.relu(out)
        out = self.linear2(out)
        return out
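  • A throwaway shape check (my addition, not part of the training code) to confirm the 7 * 7 * 64 flatten size: the two stride-2 max-pools shrink 28x28 to 14x14 and then to 7x7
model = CNNModel()
dummy = torch.randn(8, 1, 28, 28)   # a fake batch of 8 single-channel 28x28 images
print(model(dummy).shape)           # torch.Size([8, 10]), one logit per class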

Training Function: Training Only

  • Pay attention to the accelerator-related code
  • For multi-device training, the synchronization code at the tail end of the for epoch in range(epoch_num): loop body is essential
def training_function():
    # Hyperparameters
    epoch_num = 4
    batch_size = 64
    learning_rate = 0.005
    # device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

    # Data
    train_loader = DataLoader(dataset=train_data, batch_size=batch_size, shuffle=True)
    val_loader = DataLoader(test_data, batch_size=batch_size, shuffle=True)

    # Model / loss function / optimizer
    # model = CNNModel().to(device)
    model = CNNModel()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

    accelerator = Accelerator()
    model, optimizer, train_loader, val_loader = accelerator.prepare(
        model, optimizer, train_loader, val_loader
    )

    # Start training
    for epoch in range(epoch_num):
        # Train
        model.train()
        for i, (X_train, y_train) in enumerate(train_loader):
            # X_train = X_train.to(device)
            # y_train = y_train.to(device)
            out = model(X_train)
            loss = criterion(out, y_train)
            optimizer.zero_grad()
            # loss.backward()
            accelerator.backward(loss)
            optimizer.step()
            if (i + 1) % 100 == 0:
                print(f"{accelerator.device} Train... [epoch {epoch + 1}/{epoch_num}, step {i + 1}/{len(train_loader)}]\t[loss {loss.item()}]")
        # Wait until the model on every GPU has finished the current epoch, then sync
        accelerator.wait_for_everyone()
        # Use a separate name so the prepared (DDP-wrapped) model keeps training
        unwrapped_model = accelerator.unwrap_model(model)
        # All processes now hold the same weights, so the model can be saved
        accelerator.save(unwrapped_model, "model.pth")
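  • Reading the checkpoint back later: accelerator.save is a thin wrapper around torch.save, and since we saved the whole unwrapped module (not just a state_dict), it loads back directly. A minimal sketch (on PyTorch >= 2.6 you may also need weights_only=False for full-module checkpoints):
model = torch.load("model.pth", map_location="cpu")
model.eval()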

Training Function: Training and Validation

  • Compared with the previous version, this adds the validation code
  • Validation is the special part when training on multiple devices: the results computed on each device have to be merged
def training_function():
    # Hyperparameters
    epoch_num = 4
    batch_size = 64
    learning_rate = 0.005

    # Data
    train_loader = DataLoader(dataset=train_data, batch_size=batch_size, shuffle=True)
    val_loader = DataLoader(test_data, batch_size=batch_size, shuffle=True)

    # Model / loss function / optimizer
    model = CNNModel()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

    accelerator = Accelerator()
    model, optimizer, train_loader, val_loader = accelerator.prepare(
        model, optimizer, train_loader, val_loader
    )

    # Start training
    for epoch in range(epoch_num):
        # Train
        model.train()
        for i, (X_train, y_train) in enumerate(train_loader):
            out = model(X_train)
            loss = criterion(out, y_train)
            optimizer.zero_grad()
            accelerator.backward(loss)
            optimizer.step()
            if (i + 1) % 100 == 0:
                print(f"{accelerator.device} Train... [epoch {epoch + 1}/{epoch_num}, step {i + 1}/{len(train_loader)}]\t[loss {loss.item()}]")

        # Validate
        model.eval()
        correct, total = 0, 0
        for X_val, y_val in val_loader:
            with torch.no_grad():
                output = model(X_val)
                _, pred = torch.max(output, 1)
                # Merge the validation results from every GPU
                pred, y_val = accelerator.gather_for_metrics((pred, y_val))
                total += y_val.size(0)
                correct += (pred == y_val).sum()
        # Print the accuracy from the main process only
        accelerator.print(f'epoch {epoch + 1}/{epoch_num}, accuracy = {100 * (correct.item() / total):.2f}')

        # Wait until the model on every GPU has finished the current epoch, then sync
        accelerator.wait_for_everyone()
        # Use a separate name so the prepared (DDP-wrapped) model keeps training
        unwrapped_model = accelerator.unwrap_model(model)
        # All processes now hold the same weights, so the model can be saved
        accelerator.save(unwrapped_model, "model.pth")
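  • Why gather_for_metrics rather than a plain accelerator.gather: when the dataset size is not divisible by the number of processes, the distributed sampler pads the last batch with duplicated samples so every device gets equal work; gather_for_metrics drops that padding again, so correct and total count exactly the 10000 test images, while a plain gather would score the duplicates twice and skew the accuracy slightly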

Training

  • If you are training locally, just call the training_function defined above (a minimal example.py sketch follows below), then start the script from the command line with $ accelerate launch example.py
training_function()
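  • For the accelerate launch route, example.py only needs the usual entry-point guard around that call (a minimal sketch, assuming the imports and definitions above live in the same file):
if __name__ == "__main__":
    training_function()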
  • If you are running on Kaggle/Colab, use notebook_launcher to start training instead
# num_processes=2 means two GPUs, since this session has two Nvidia T4s
notebook_launcher(training_function, num_processes=2)
  • Sample console output when training on 2 GPUs
Launching training on 2 GPUs.
cuda:0 Train... [epoch 1/4, step 100/469]	[loss 0.43843933939933777]
cuda:1 Train... [epoch 1/4, step 100/469]	[loss 0.5267877578735352]
cuda:0 Train... [epoch 1/4, step 200/469]	[loss 0.39918822050094604]
cuda:1 Train... [epoch 1/4, step 200/469]	[loss 0.2748252749443054]
cuda:1 Train... [epoch 1/4, step 300/469]	[loss 0.54105544090271]
cuda:0 Train... [epoch 1/4, step 300/469]	[loss 0.34716445207595825]
cuda:1 Train... [epoch 1/4, step 400/469]	[loss 0.2694844901561737]
cuda:0 Train... [epoch 1/4, step 400/469]	[loss 0.4343942701816559]
epoch 1/4, accuracy = 88.49
cuda:0 Train... [epoch 2/4, step 100/469]	[loss 0.19695354998111725]
cuda:1 Train... [epoch 2/4, step 100/469]	[loss 0.2911057770252228]
cuda:0 Train... [epoch 2/4, step 200/469]	[loss 0.2948791980743408]
cuda:1 Train... [epoch 2/4, step 200/469]	[loss 0.292676717042923]
cuda:0 Train... [epoch 2/4, step 300/469]	[loss 0.222089946269989]
cuda:1 Train... [epoch 2/4, step 300/469]	[loss 0.28814008831977844]
cuda:0 Train... [epoch 2/4, step 400/469]	[loss 0.3431250751018524]
cuda:1 Train... [epoch 2/4, step 400/469]	[loss 0.2546379864215851]
epoch 2/4, accuracy = 87.31
cuda:1 Train... [epoch 3/4, step 100/469]	[loss 0.24118559062480927]
cuda:0 Train... [epoch 3/4, step 100/469]	[loss 0.363821804523468]
cuda:0 Train... [epoch 3/4, step 200/469]	[loss 0.36783623695373535]
cuda:1 Train... [epoch 3/4, step 200/469]	[loss 0.18346744775772095]
cuda:0 Train... [epoch 3/4, step 300/469]	[loss 0.23459288477897644]
cuda:1 Train... [epoch 3/4, step 300/469]	[loss 0.2887689769268036]
cuda:0 Train... [epoch 3/4, step 400/469]	[loss 0.3079166114330292]
cuda:1 Train... [epoch 3/4, step 400/469]	[loss 0.18255220353603363]
epoch 3/4, accuracy = 88.46
cuda:1 Train... [epoch 4/4, step 100/469]	[loss 0.27428603172302246]
cuda:0 Train... [epoch 4/4, step 100/469]	[loss 0.17705145478248596]
cuda:1 Train... [epoch 4/4, step 200/469]	[loss 0.2811894416809082]
cuda:0 Train... [epoch 4/4, step 200/469]	[loss 0.22682836651802063]
cuda:0 Train... [epoch 4/4, step 300/469]	[loss 0.2291710525751114]
cuda:1 Train... [epoch 4/4, step 300/469]	[loss 0.32024848461151123]
cuda:0 Train... [epoch 4/4, step 400/469]	[loss 0.24648766219615936]
cuda:1 Train... [epoch 4/4, step 400/469]	[loss 0.0805584192276001]
epoch 4/4, accuracy = 89.38
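  • A quick note on the step counts: 60000 training images at batch_size = 64 give ceil(60000 / 64) = 938 batches per epoch; prepare() shards the DataLoader across the two processes, so each GPU sees 938 / 2 = 469 steps, matching the step x/469 above (and the step x/938 in the single-device run below)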
  • Sample console output when training on a single TPU
Launching training on CPU.
xla:0 Train... [epoch 1/4, step 100/938]	[loss 0.6051161289215088]
xla:0 Train... [epoch 1/4, step 200/938]	[loss 0.27442359924316406]
xla:0 Train... [epoch 1/4, step 300/938]	[loss 0.557417631149292]
xla:0 Train... [epoch 1/4, step 400/938]	[loss 0.1840067058801651]
xla:0 Train... [epoch 1/4, step 500/938]	[loss 0.5252436399459839]
xla:0 Train... [epoch 1/4, step 600/938]	[loss 0.2718536853790283]
xla:0 Train... [epoch 1/4, step 700/938]	[loss 0.2763175368309021]
xla:0 Train... [epoch 1/4, step 800/938]	[loss 0.39897507429122925]
xla:0 Train... [epoch 1/4, step 900/938]	[loss 0.28720396757125854]
epoch = 0, accuracy = 86.36
xla:0 Train... [epoch 2/4, step 100/938]	[loss 0.24496735632419586]
xla:0 Train... [epoch 2/4, step 200/938]	[loss 0.37713131308555603]
xla:0 Train... [epoch 2/4, step 300/938]	[loss 0.3106330633163452]
xla:0 Train... [epoch 2/4, step 400/938]	[loss 0.40438592433929443]
xla:0 Train... [epoch 2/4, step 500/938]	[loss 0.38303741812705994]
xla:0 Train... [epoch 2/4, step 600/938]	[loss 0.39199298620224]
xla:0 Train... [epoch 2/4, step 700/938]	[loss 0.38932573795318604]
xla:0 Train... [epoch 2/4, step 800/938]	[loss 0.26298171281814575]
xla:0 Train... [epoch 2/4, step 900/938]	[loss 0.21517205238342285]
epoch = 1, accuracy = 90.07
xla:0 Train... [epoch 3/4, step 100/938]	[loss 0.366019606590271]
xla:0 Train... [epoch 3/4, step 200/938]	[loss 0.27360212802886963]
xla:0 Train... [epoch 3/4, step 300/938]	[loss 0.2014923095703125]
xla:0 Train... [epoch 3/4, step 400/938]	[loss 0.21998485922813416]
xla:0 Train... [epoch 3/4, step 500/938]	[loss 0.28129786252975464]
xla:0 Train... [epoch 3/4, step 600/938]	[loss 0.42534705996513367]
xla:0 Train... [epoch 3/4, step 700/938]	[loss 0.22158119082450867]
xla:0 Train... [epoch 3/4, step 800/938]	[loss 0.359947144985199]
xla:0 Train... [epoch 3/4, step 900/938]	[loss 0.3221997022628784]
epoch = 2, accuracy = 90.36
xla:0 Train... [epoch 4/4, step 100/938]	[loss 0.2814193069934845]
xla:0 Train... [epoch 4/4, step 200/938]	[loss 0.16465164721012115]
xla:0 Train... [epoch 4/4, step 300/938]	[loss 0.2897304892539978]
xla:0 Train... [epoch 4/4, step 400/938]	[loss 0.13403896987438202]
xla:0 Train... [epoch 4/4, step 500/938]	[loss 0.1135573536157608]
xla:0 Train... [epoch 4/4, step 600/938]	[loss 0.14964193105697632]
xla:0 Train... [epoch 4/4, step 700/938]	[loss 0.20239461958408356]
xla:0 Train... [epoch 4/4, step 800/938]	[loss 0.23625142872333527]
xla:0 Train... [epoch 4/4, step 900/938]	[loss 0.3418393135070801]
epoch = 3, accuracy = 90.11

Controlling Multiple CPUs/GPUs/TPUs Across Multiple Servers and Processes

  • For full details, see the official Examples
  • This covers
    • Multiple processes controlling multiple compute devices within a single server
    • Multiple processes controlling multiple compute devices across multiple servers
  • After writing your code, first run $ accelerate config on each server to generate the matching config file; below is a sample session
(huggingface) PS C:\Users\alion\temp> accelerate config
------------------------------------------------------------------------------------------------------------------------
In which compute environment are you running?
This machine
------------------------------------------------------------------------------------------------------------------------
Which type of machine are you using?
multi-GPU
How many different machines will you use (use more than 1 for multi-node training)? [1]: 2
------------------------------------------------------------------------------------------------------------------------
What is the rank of this machine?
0
What is the IP address of the machine that will host the main process? 192.168.101
What is the port you will use to communicate with the main process? 12345
Are all the machines on the same local network? Answer `no` if nodes are on the cloud and/or on different network hosts [YES/no]: yes
Do you wish to optimize your script with torch dynamo?[yes/NO]:no
Do you want to use DeepSpeed? [yes/NO]: no
Do you want to use FullyShardedDataParallel? [yes/NO]: no
Do you want to use Megatron-LM ? [yes/NO]: no
How many GPU(s) should be used for distributed training? [1]:2
What GPU(s) (by id) should be used for training on this machine as a comma-seperated list? [all]:0
------------------------------------------------------------------------------------------------------------------------
Do you wish to use FP16 or BF16 (mixed precision)?
fp16
accelerate configuration saved at C:\Users\alion/.cache\huggingface\accelerate\default_config.yaml
  • Finally, start the training script on every server with $ accelerate launch example.py (if it is just multiple processes on a single server, you only need to launch on that one machine)
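  • Optional but handy (standard Accelerate CLI commands, not from the original post): verify the saved configuration with $ accelerate test before the real run, and point the launcher at a specific file with $ accelerate launch --config_file <path> example.py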

Reference Links

  • https://github.com/huggingface/accelerate
  • https://www.kaggle.com/code/muellerzr/multi-gpu-and-accelerate
  • https://github.com/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_nlp_example.ipynb
  • https://github.com/huggingface/accelerate/tree/main/examples
