
Commit ade9e0f

Corrected max number for bf16 in transformer/docs (#33658)
Update perf_train_gpu_one.md per issue huggingface/hub-docs#1425: the largest fp16 number should be 65,504, not 65,535.
1 parent: 196d35c


docs/source/en/perf_train_gpu_one.md

Lines changed: 1 addition & 1 deletion
@@ -186,7 +186,7 @@ If you prefer to use 🤗 Accelerate, find the 🤗 Accelerate example [further
 
 If you have access to an Ampere or newer hardware you can use bf16 for mixed precision training and evaluation. While
 bf16 has a worse precision than fp16, it has a much bigger dynamic range. In fp16 the biggest number you can have
-is `65535` and any number above that will result in an overflow. A bf16 number can be as large as `3.39e+38` (!) which
+is `65504` and any number above that will result in an overflow. A bf16 number can be as large as `3.39e+38` (!) which
 is about the same as fp32 - because both have 8-bits used for the numerical range.
 
 You can enable BF16 in the 🤗 Trainer with:
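As a quick sanity check, the corrected limits can be printed with PyTorch's `torch.finfo`, and bf16 mixed precision is switched on through the `bf16` argument of `TrainingArguments`. This is a minimal sketch assuming PyTorch and 🤗 Transformers are installed; the output directory name is only a placeholder.

```python
import torch
from transformers import TrainingArguments

# Largest finite value each floating-point format can represent.
print(torch.finfo(torch.float16).max)   # 65504.0 -- anything larger overflows in fp16
print(torch.finfo(torch.bfloat16).max)  # ~3.39e+38 -- roughly the same range as fp32
print(torch.finfo(torch.float32).max)   # ~3.40e+38

# Enable bf16 mixed precision in the 🤗 Trainer (requires Ampere or newer GPUs).
# "output/" is a hypothetical directory used only for illustration.
training_args = TrainingArguments(output_dir="output/", bf16=True)
```

bf16 trades precision for range: it keeps fp32's 8 exponent bits but has only 7 mantissa bits, which is why its maximum is on the same scale as fp32's while fp16 tops out at 65,504.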
