# From PyTorch DDP to Accelerate to Trainer, mastery of distributed training with ease

## General Overview

This tutorial assumes you have a basic understanding of training a simple model with PyTorch. It walks through training on multiple GPUs with DDP processes, shown at three increasing levels of abstraction:

- Native PyTorch DDP through the `pytorch.distributed` module
- Using 🤗 Accelerate's lightweight wrapper around `pytorch.distributed`, which ensures the code can run on a single GPU or on TPUs with zero or minimal code changes
- Using 🤗 Transformers' high-level Trainer API, which abstracts away all of the boilerplate code and supports a variety of devices and distributed scenarios

## What is distributed training, and why does it matter?

Below is some very basic PyTorch training code that sets up and trains an MNIST model, based on the [official MNIST example](https://github.com/pytorch/examples/blob/main/mnist/main.py):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms

class BasicNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, 1)
        self.conv2 = nn.Conv2d(32, 64, 3, 1)
        self.dropout1 = nn.Dropout(0.25)
        self.dropout2 = nn.Dropout(0.5)
        self.fc1 = nn.Linear(9216, 128)
        self.fc2 = nn.Linear(128, 10)
        self.act = F.relu

    def forward(self, x):
        x = self.act(self.conv1(x))
        x = self.act(self.conv2(x))
        x = F.max_pool2d(x, 2)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.act(self.fc1(x))
        x = self.dropout2(x)
        x = self.fc2(x)
        output = F.log_softmax(x, dim=1)
        return output
```

We define the training device (`cuda`):

```python
device = "cuda"
```

And build some basic PyTorch DataLoaders:

```python
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307), (0.3081))
])

train_dset = datasets.MNIST('data', train=True, download=True, transform=transform)
test_dset = datasets.MNIST('data', train=False, transform=transform)

train_loader = torch.utils.data.DataLoader(train_dset, shuffle=True, batch_size=64)
test_loader = torch.utils.data.DataLoader(test_dset, shuffle=False, batch_size=64)
```

We move the model to the CUDA device:

```python
model = BasicNet().to(device)
```

And build a PyTorch optimizer:

```python
optimizer = optim.AdamW(model.parameters(), lr=1e-3)
```

Finally, we create a simple training and evaluation loop that performs one full pass over the dataset and computes the test accuracy:

```python
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
    data, target = data.to(device), target.to(device)
    output = model(data)
    loss = F.nll_loss(output, target)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
correct = 0
with torch.no_grad():
    for data, target in test_loader:
        # Move the evaluation batch to the same device as the model
        data, target = data.to(device), target.to(device)
        output = model(data)
        pred = output.argmax(dim=1, keepdim=True)
        correct += pred.eq(target.view_as(pred)).sum().item()
print(f'Accuracy: {100. * correct / len(test_loader.dataset)}')
```

Typically, from here you would put all of this into a Python script or run it in a Jupyter Notebook.

However, how would you then take this script and run it on two GPUs, or across multiple machines, to speed up training if those resources are available? Simply running `python myscript.py` will only ever use a single GPU. This is where `torch.distributed` comes into play.

## PyTorch Distributed Data Parallelism

As the name implies, `torch.distributed` is meant for distributed work. This includes multi-node setups, where a number of machines each have a single GPU, multi-GPU setups where a single system has several GPUs, or some combination of both.

To convert the code to a distributed setup, a few initializations must be defined first; see the [DDP tutorial](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html) for the details.

First, `setup` and `cleanup` functions must be declared. These create a process group through which all of the compute processes can communicate.

> Note: for this section of the tutorial, you should assume these are sent in a Python script file. Later on, a launcher using Accelerate will be discussed that removes this necessity.

```python
import os
import torch.distributed as dist

def setup(rank, world_size):
    "Sets up the process group and configuration for PyTorch Distributed Data Parallelism"
    os.environ["MASTER_ADDR"] = 'localhost'
    os.environ["MASTER_PORT"] = "12355"

    # Initialize the process group
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

def cleanup():
    "Cleans up the distributed environment"
    dist.destroy_process_group()
```

The last question then is, how do I send my data and model to another GPU?

This is where the `DistributedDataParallel` module comes into play. It copies your model onto each GPU, and when `loss.backward()` is called, backpropagation is performed and the resulting gradients across all of these model copies are averaged/reduced. This ensures each device has the same weights after the optimizer step.

Below is an example of our training setup, refactored as a function, with this capability:

> Note: the rank here is the overall rank of the current GPU compared to all of the other GPUs available, meaning they have a rank of 0 -> n-1.

```python
from torch.nn.parallel import DistributedDataParallel as DDP

def train(model, rank, world_size):
    setup(rank, world_size)
    model = model.to(rank)
    ddp_model = DDP(model, device_ids=[rank])
    optimizer = optim.AdamW(ddp_model.parameters(), lr=1e-3)
    # Train for one epoch
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        # Each process moves its batch onto its own GPU (identified by its rank)
        data, target = data.to(rank), target.to(rank)
        # Forward through the DDP wrapper so gradients get synchronized across processes
        output = ddp_model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    cleanup()
```

The optimizer needs to be declared based on the model on the specific device (so `ddp_model` and not `model`) for all of the gradients to be calculated properly.
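
Since `train` takes the rank and world size as explicit arguments, one way to invoke it is to spawn one process per GPU from inside the script itself, for example with `torch.multiprocessing.spawn`. This is a minimal sketch and not part of the original example; the `_worker` wrapper only exists to match `spawn`'s calling convention, which passes the process index as the first argument:

```python
import torch.multiprocessing as mp

def _worker(rank, world_size):
    # `mp.spawn` calls this with the process index (0 -> world_size - 1) as `rank`
    train(BasicNet(), rank, world_size)

if __name__ == "__main__":
    world_size = 2  # number of GPUs on this machine
    mp.spawn(_worker, args=(world_size,), nprocs=world_size)
```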

Finally, to run the script, PyTorch has a handy `torchrun` command-line module that can help. Just pass in the number of nodes it should use and the number of processes per node (i.e. the number of GPUs), as well as the script to run:

```bash
torchrun --nproc_per_node=2 --nnodes=1 example_script.py
```

This will run the training script on two GPUs on a single machine, and this is what it looks like to perform distributed training with just PyTorch.
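
One practical detail worth noting: `torchrun` does not pass `rank` and `world_size` as function arguments; instead it sets environment variables such as `RANK`, `LOCAL_RANK` and `WORLD_SIZE` for each process it spawns. A minimal sketch of what the entry point of `example_script.py` might look like (the script name and this wiring are assumptions, not part of the original post):

```python
import os

if __name__ == "__main__":
    # torchrun sets these for every process it launches
    rank = int(os.environ["LOCAL_RANK"])
    world_size = int(os.environ["WORLD_SIZE"])
    train(BasicNet(), rank, world_size)
```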

Now let's talk about Accelerate, a library designed to make this process smoother and to help you reach the best possible performance with only a few adjustments.

## 🤗 Accelerate

[Accelerate](https://huggingface.co/docs/accelerate) is a library designed to let you perform what we just did above, without needing to modify your code greatly. On top of this, the data pipeline inherent to Accelerate can also improve the performance of your code.

First, let's wrap all of the above code we just performed into a single function, to help us visualize the difference:

```python
def train_ddp(rank, world_size):
    setup(rank, world_size)
    # Build DataLoaders
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307), (0.3081))
    ])

    train_dset = datasets.MNIST('data', train=True, download=True, transform=transform)
    test_dset = datasets.MNIST('data', train=False, transform=transform)

    train_loader = torch.utils.data.DataLoader(train_dset, shuffle=True, batch_size=64)
    test_loader = torch.utils.data.DataLoader(test_dset, shuffle=False, batch_size=64)

    # Build model
    model = BasicNet().to(rank)
    ddp_model = DDP(model, device_ids=[rank])

    # Build optimizer
    optimizer = optim.AdamW(ddp_model.parameters(), lr=1e-3)

    # Train for a single epoch
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(rank), target.to(rank)
        output = ddp_model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    # Evaluate
    model.eval()
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(rank), target.to(rank)
            output = model(data)
            pred = output.argmax(dim=1, keepdim=True)
            correct += pred.eq(target.view_as(pred)).sum().item()
    print(f'Accuracy: {100. * correct / len(test_loader.dataset)}')
```

Next, let's talk about how Accelerate can help. The above code has a few issues:

1. It is slightly inefficient, given that `n` dataloaders, one per device, each push a full copy of the data.
2. It will only run on multiple GPUs, so special care would be needed to run it on a single machine or on a TPU again.

Accelerate solves these issues through the [Accelerator](https://huggingface.co/docs/accelerate/v0.12.0/en/package_reference/accelerator#accelerator) class. With it, the code remains largely the same, apart from three lines, whether you compare single-node or multi-node runs, as shown below:

```python
from accelerate import Accelerator

def train_ddp_accelerate():
    accelerator = Accelerator()
    # Build DataLoaders
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307), (0.3081))
    ])

    train_dset = datasets.MNIST('data', train=True, download=True, transform=transform)
    test_dset = datasets.MNIST('data', train=False, transform=transform)

    train_loader = torch.utils.data.DataLoader(train_dset, shuffle=True, batch_size=64)
    test_loader = torch.utils.data.DataLoader(test_dset, shuffle=False, batch_size=64)

    # Build model
    model = BasicNet()

    # Build optimizer
    optimizer = optim.AdamW(model.parameters(), lr=1e-3)

    # Send everything through `accelerator.prepare`
    train_loader, test_loader, model, optimizer = accelerator.prepare(
        train_loader, test_loader, model, optimizer
    )

    # Train for a single epoch
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        output = model(data)
        loss = F.nll_loss(output, target)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()

    # Evaluate
    model.eval()
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            # The prepared dataloader already places each batch on the right device
            output = model(data)
            pred = output.argmax(dim=1, keepdim=True)
            correct += pred.eq(target.view_as(pred)).sum().item()
    print(f'Accuracy: {100. * correct / len(test_loader.dataset)}')
```

With the `Accelerator` object, your PyTorch training loop is now set up to run on any kind of distributed setup. This code can still be launched through the `torchrun` CLI or through Accelerate's own CLI interface, [`accelerate launch`](https://huggingface.co/docs/accelerate/v0.12.0/en/basic_tutorials/launch).
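
For example, launching the script on two GPUs with Accelerate's CLI might look like the following (a sketch; `example_script.py` is a placeholder name, and `accelerate config` can be run once beforehand to set defaults instead of passing flags):

```bash
accelerate launch --multi_gpu --num_processes 2 example_script.py
```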

As a result, it is now trivially easy to perform distributed training with Accelerate while keeping as much of the vanilla PyTorch code the same as possible.

Earlier it was mentioned that Accelerate also makes the DataLoaders more efficient. This works through custom samplers that automatically send only the relevant part of each batch to each device during training, so that only a single copy of the data needs to be live at a time, rather than four copies in memory at once, depending on the configuration. Along with this, only a single full copy of the original dataset is held in memory in total. Subsets of this dataset are split between all of the nodes used for training, which allows much larger datasets to be trained on a single instance without memory usage exploding.
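
To make the effect of this sharding concrete, here is a small illustrative sketch (not from the original post) that prints how many batches each process sees before and after `accelerator.prepare`. When launched with two processes, each one should iterate over roughly half of the batches per epoch:

```python
import torch
from torchvision import datasets, transforms
from accelerate import Accelerator

accelerator = Accelerator()

transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
train_dset = datasets.MNIST('data', train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_dset, shuffle=True, batch_size=64)

print(f"Batches before prepare: {len(train_loader)}")
train_loader = accelerator.prepare(train_loader)
# With 2 processes, each process now only iterates over its own shard of the batches
print(f"Batches on process {accelerator.process_index}: {len(train_loader)}")
```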

### Using the notebook_launcher

Earlier it was mentioned that you can start distributed code directly from a Jupyter Notebook. This comes from Accelerate's `notebook_launcher` utility, which allows launching multi-GPU training based on code inside a Jupyter Notebook.

Using it is as simple as importing the launcher:

```python
from accelerate import notebook_launcher
```

And passing in the training function we declared earlier, any arguments to be passed, and the number of processes to use (such as 8 on a TPU, or 2 for two GPUs). Both of the training functions above can be run, but note that after you start a single launch, the instance needs to be restarted before spawning another:

```python
notebook_launcher(train_ddp, args=(), num_processes=2)
```

Or:

```python
notebook_launcher(train_ddp_accelerate, args=(), num_processes=2)
```

## Using 🤗 Trainer

Finally, we arrive at the highest level of API: the Hugging Face [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer).

It wraps as much of the training as possible while still being able to train on distributed systems, without the user needing to do anything at all.

First we need to import the Trainer:

```python
from transformers import Trainer
```

Then we define some `TrainingArguments` to control all of the usual hyperparameters. The Trainer also works off of dictionaries, so a custom collate function needs to be made.

Finally, we subclass the Trainer and write our own `compute_loss`.

Afterwards, this code will also work on a distributed setup, without any training code needing to be written at all!

```python
from transformers import Trainer, TrainingArguments

model = BasicNet()

training_args = TrainingArguments(
    "basic-trainer",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=1,
    evaluation_strategy="epoch",
    remove_unused_columns=False
)

def collate_fn(examples):
    pixel_values = torch.stack([example[0] for example in examples])
    labels = torch.tensor([example[1] for example in examples])
    return {"x": pixel_values, "labels": labels}

class MyTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        outputs = model(inputs["x"])
        target = inputs["labels"]
        loss = F.nll_loss(outputs, target)
        return (loss, outputs) if return_outputs else loss

trainer = MyTrainer(
    model,
    training_args,
    train_dataset=train_dset,
    eval_dataset=test_dset,
    data_collator=collate_fn,
)
```

```python
trainer.train()
```

```bash
***** Running training *****
  Num examples = 60000
  Num Epochs = 1
  Instantaneous batch size per device = 64
  Total train batch size (w. parallel, distributed & accumulation) = 64
  Gradient Accumulation steps = 1
  Total optimization steps = 938
```

| Epoch | Training Loss | Validation Loss |
|--|--|--|
| 1 | 0.875700 | 0.282633 |

Similarly to the `notebook_launcher` example above, this can be done again by putting it all into a training function:

```python
def train_trainer_ddp():
    model = BasicNet()

    training_args = TrainingArguments(
        "basic-trainer",
        per_device_train_batch_size=64,
        per_device_eval_batch_size=64,
        num_train_epochs=1,
        evaluation_strategy="epoch",
        remove_unused_columns=False
    )

    def collate_fn(examples):
        pixel_values = torch.stack([example[0] for example in examples])
        labels = torch.tensor([example[1] for example in examples])
        return {"x": pixel_values, "labels": labels}

    class MyTrainer(Trainer):
        def compute_loss(self, model, inputs, return_outputs=False):
            outputs = model(inputs["x"])
            target = inputs["labels"]
            loss = F.nll_loss(outputs, target)
            return (loss, outputs) if return_outputs else loss

    trainer = MyTrainer(
        model,
        training_args,
        train_dataset=train_dset,
        eval_dataset=test_dset,
        data_collator=collate_fn,
    )

    trainer.train()

notebook_launcher(train_trainer_ddp, args=(), num_processes=2)
```

## Resources

To learn more about PyTorch Distributed Data Parallelism, check out the [documentation here](https://pytorch.org/docs/stable/distributed.html).

To learn more about 🤗 Accelerate, check out the [documentation here](https://huggingface.co/docs/accelerate).

To learn more about 🤗 Transformers, check out the [documentation here](https://huggingface.co/docs/transformers).

<hr>

> English original: [From PyTorch DDP to Accelerate to Trainer, mastery of distributed training with ease](https://huggingface.co/blog/pytorch-ddp-accelerate-transformers#%F0%9F%A4%97-accelerate)
>
> Translator: innovation64 (李洋)
|