optimize pipeline between device fwd and host bwd
#1 opened by zhaifly
- README.md +5 -1
- gaudi_config.json +2 -2
README.md
CHANGED
```diff
@@ -13,9 +13,13 @@ This model only contains the `GaudiConfig` file for running the [GPT2](https://h
 **This model contains no model weights, only a GaudiConfig.**
 
 This enables to specify:
+- `use_habana_mixed_precision`: whether to use Habana Mixed Precision (HMP)
+- `hmp_opt_level`: optimization level for HMP, see [here](https://docs.habana.ai/en/latest/PyTorch/PyTorch_Mixed_Precision/PT_Mixed_Precision.html#configuration-options) for a detailed explanation
+- `hmp_bf16_ops`: list of operators that should run in bf16
+- `hmp_fp32_ops`: list of operators that should run in fp32
+- `hmp_is_verbose`: verbosity
 - `use_fused_adam`: whether to use Habana's custom AdamW implementation
 - `use_fused_clip_norm`: whether to use Habana's fused gradient norm clipping operator
-- `use_torch_autocast`: whether to use PyTorch's autocast mixed precision
 
 ## Usage
```
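For reference, the options documented in this README map onto attributes of a `GaudiConfig` object. A minimal sketch of reading them, assuming the `optimum-habana` package and a repo id like `Habana/gpt2` (the repo id is an assumption, not stated in this PR):

```python
from optimum.habana import GaudiConfig

# Repo id below is an assumption for illustration; substitute the actual repo.
gaudi_config = GaudiConfig.from_pretrained("Habana/gpt2")

# Each README bullet above corresponds to a plain attribute on the config:
print(gaudi_config.use_fused_adam)       # True in this config
print(gaudi_config.use_fused_clip_norm)  # True in this config
```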
gaudi_config.json
CHANGED
```diff
@@ -1,5 +1,5 @@
 {
+  "use_habana_mixed_precision": false,
   "use_fused_adam": true,
-  "use_fused_clip_norm": true,
-  "use_torch_autocast": true
+  "use_fused_clip_norm": true
 }
```
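To show where such a config typically plugs in, here is a hedged end-to-end sketch using `optimum-habana`'s `GaudiTrainer`; the model id, output directory, and training-argument values are illustrative assumptions, not part of this PR:

```python
from transformers import GPT2LMHeadModel
from optimum.habana import GaudiConfig, GaudiTrainer, GaudiTrainingArguments

# Illustrative names only; none of these values come from the PR itself.
model = GPT2LMHeadModel.from_pretrained("gpt2")
gaudi_config = GaudiConfig.from_pretrained("Habana/gpt2")

args = GaudiTrainingArguments(
    output_dir="./gpt2-gaudi",  # hypothetical output directory
    use_habana=True,            # run on Gaudi (HPU) devices
    use_lazy_mode=True,         # HPU lazy-mode graph execution
)

trainer = GaudiTrainer(
    model=model,
    gaudi_config=gaudi_config,  # fused AdamW / clip-norm toggles come from here
    args=args,
    # train_dataset=... and eval_dataset=... omitted for brevity
)
# trainer.train() would start training once datasets are supplied.
```

With `use_fused_adam` and `use_fused_clip_norm` set to `true`, the trainer uses Habana's fused AdamW and gradient-norm-clipping operators instead of the stock PyTorch ones.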