Commit 99cf4c1
prune whitespaces for code readability (#1962)
Parent: cb421da

65 files changed: +420 −344 lines

Note: where a removed line and its added replacement appear identical in the diffs below, the only difference is trailing whitespace, which is invisible in rendered text.

.azure/gpu-test.yml
Lines changed: 1 addition & 1 deletion

@@ -96,4 +96,4 @@ jobs:
     env:
       PL_RUN_STANDALONE_TESTS: "1"
       # NUM_PARALLEL_TESTS: "10"
-    timeoutInMinutes: "10"
+    timeoutInMinutes: "10"
.github/ISSUE_TEMPLATE/ask-a-question.md
Lines changed: 1 addition & 1 deletion

@@ -6,4 +6,4 @@ labels: question
 
 ---
 
-Please describe your question here.
+Please describe your question here.

.github/ISSUE_TEMPLATE/bug-report.yaml
Lines changed: 1 addition & 1 deletion

@@ -42,7 +42,7 @@ body:
         ```
         You can simply copy and paste the outputs below.
       value: |
-        ```
+        ```
 
 
 

.github/ISSUE_TEMPLATE/feature-request.md
Lines changed: 1 addition & 1 deletion

@@ -6,4 +6,4 @@ labels: enhancement
 
 ---
 
-Please describe the feature or enhancement along with the intended usecase.
+Please describe the feature or enhancement along with the intended usecase.

.pre-commit-config.yaml
Lines changed: 87 additions & 0 deletions

@@ -0,0 +1,87 @@
+# Copyright The Lightning team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+default_language_version:
+  python: python3
+
+ci:
+  autofix_prs: true
+  autoupdate_commit_msg: "[pre-commit.ci] pre-commit suggestions"
+  autoupdate_schedule: quarterly
+  # submodules: true
+
+repos:
+  - repo: https://github.com/pre-commit/pre-commit-hooks
+    rev: v5.0.0
+    hooks:
+      - id: end-of-file-fixer
+      - id: trailing-whitespace
+      - id: check-yaml
+      - id: check-toml
+      #- id: check-docstring-first
+      #- id: check-executables-have-shebangs
+      - id: check-case-conflict
+      - id: check-added-large-files
+        args: ["--maxkb=250", "--enforce-all"]
+      - id: detect-private-key
+
+  #- repo: https://github.com/codespell-project/codespell
+  #  rev: v2.3.0
+  #  hooks:
+  #    - id: codespell
+  #      additional_dependencies: [tomli]
+  #      args: ["--write-changes"]
+  #      exclude: pyproject.toml
+
+  #- repo: https://github.com/crate-ci/typos
+  #  rev: dictgen-v0.3.1
+  #  hooks:
+  #    - id: typos
+  #      args: [] # empty to do not write fixes
+  #      exclude: pyproject.toml
+
+  #- repo: https://github.com/executablebooks/mdformat
+  #  rev: 0.7.21
+  #  hooks:
+  #    - id: mdformat
+  #      args: ["--number"]
+  #      additional_dependencies:
+  #        - mdformat-gfm
+  #        - mdformat-black
+  #        - mdformat_frontmatter
+
+  #- repo: https://github.com/pre-commit/mirrors-prettier
+  #  rev: v3.1.0
+  #  hooks:
+  #    - id: prettier
+  #      files: \.(json|yml|yaml|toml)
+  #      # https://prettier.io/docs/en/options.html#print-width
+  #      args: ["--print-width=120"]
+
+  #- repo: https://github.com/astral-sh/ruff-pre-commit
+  #  rev: v0.8.6
+  #  hooks:
+  #    # try to fix what is possible
+  #    - id: ruff
+  #      args: ["--fix"]
+
+  - repo: https://github.com/tox-dev/pyproject-fmt
+    rev: v2.5.0
+    hooks:
+      - id: pyproject-fmt
+        additional_dependencies: [tox]
+  - repo: https://github.com/abravalheri/validate-pyproject
+    rev: v0.23
+    hooks:
+      - id: validate-pyproject
CITATION.cff
Lines changed: 1 addition & 1 deletion

@@ -6,4 +6,4 @@ date-released: 2023-03-22
 authors:
 - name: "The Lightning AI team"
 license: "Apache-2.0"
-url: "https://github.com/Lightning-AI/litgpt"
+url: "https://github.com/Lightning-AI/litgpt"

README.md
Lines changed: 55 additions & 63 deletions

@@ -6,9 +6,9 @@
 **20+ high-performance LLMs with recipes to pretrain, finetune, and deploy at scale.**
 
 <pre>
-✅ From scratch implementations ✅ No abstractions ✅ Beginner friendly
-✅ Flash attention ✅ FSDP ✅ LoRA, QLoRA, Adapter
-✅ Reduce GPU memory (fp4/8/16/32) ✅ 1-1000+ GPUs/TPUs ✅ 20+ LLMs
+✅ From scratch implementations ✅ No abstractions ✅ Beginner friendly
+✅ Flash attention ✅ FSDP ✅ LoRA, QLoRA, Adapter
+✅ Reduce GPU memory (fp4/8/16/32) ✅ 1-1000+ GPUs/TPUs ✅ 20+ LLMs
 </pre>
 
 

@@ -21,17 +21,17 @@
 <p align="center">
 <a href="#quick-start">Quick start</a> •
 <a href="#choose-from-20-llms">Models</a> •
-<a href="#finetune-an-llm">Finetune</a> •
-<a href="#deploy-an-llm">Deploy</a> •
-<a href="#all-workflows">All workflows</a> •
+<a href="#finetune-an-llm">Finetune</a> •
+<a href="#deploy-an-llm">Deploy</a> •
+<a href="#all-workflows">All workflows</a> •
 <a href="#state-of-the-art-features">Features</a> •
 <a href="#training-recipes">Recipes (YAML)</a> •
 <a href="https://lightning.ai/">Lightning AI</a> •
 <a href="#tutorials">Tutorials</a>
 </p>
 
 &nbsp;
-
+
 <a target="_blank" href="https://lightning.ai/lightning-ai/studios/litgpt-quick-start">
 <img src="https://pl-bolts-doc-images.s3.us-east-2.amazonaws.com/app-2/get-started-badge.svg" height="36px" alt="Get started"/>
 </a>

@@ -43,10 +43,10 @@
 # Use, finetune, pretrain, and deploy LLMs Lightning fast ⚡⚡
 Every LLM is implemented from scratch with **no abstractions** and **full control**, making them blazing fast, minimal, and performant at enterprise scale.
 
-**Enterprise ready -** Apache 2.0 for unlimited enterprise use.
-**Developer friendly -** Easy debugging with no abstraction layers and single file implementations.
-**Optimized performance -** Models designed to maximize performance, reduce costs, and speed up training.
-**Proven recipes -** Highly-optimized training/finetuning recipes tested at enterprise scale.
+**Enterprise ready -** Apache 2.0 for unlimited enterprise use.</br>
+**Developer friendly -** Easy debugging with no abstraction layers and single file implementations.</br>
+**Optimized performance -** Models designed to maximize performance, reduce costs, and speed up training.</br>
+**Proven recipes -** Highly-optimized training/finetuning recipes tested at enterprise scale.</br>
 
 &nbsp;
 

@@ -56,23 +56,23 @@ Install LitGPT
 pip install 'litgpt[all]'
 ```
 
-Load and use any of the [20+ LLMs](#choose-from-20-llms):
+Load and use any of the [20+ LLMs](#choose-from-20-llms):
 ```python
 from litgpt import LLM
 
 llm = LLM.load("microsoft/phi-2")
 text = llm.generate("Fix the spelling: Every fall, the familly goes to the mountains.")
 print(text)
-# Corrected Sentence: Every fall, the family goes to the mountains.
+# Corrected Sentence: Every fall, the family goes to the mountains.
 ```
 
 &nbsp;
 
-✅ Optimized for fast inference
-✅ Quantization
-✅ Runs on low-memory GPUs
-✅ No layers of internal abstractions
-✅ Optimized for production scale
+✅ Optimized for fast inference</br>
+✅ Quantization</br>
+✅ Runs on low-memory GPUs</br>
+✅ No layers of internal abstractions</br>
+✅ Optimized for production scale</br>
 
 <details>
 <summary>Advanced install options</summary>

@@ -92,7 +92,7 @@ pip install -e '.[all]'
 
 ---
 # Choose from 20+ LLMs
-Every model is written from scratch to maximize performance and remove layers of abstraction:
+Every model is written from scratch to maximize performance and remove layers of abstraction:
 
 | Model | Model size | Author | Reference |
 |----|----|----|----|

@@ -164,20 +164,20 @@ Every model is written from scratch to maximize performance and remove layers of
 # Workflows
 
 <p align="center">
-<a href="#finetune-an-llm">Finetune</a> •
-<a href="#pretrain-an-llm">Pretrain</a> •
-<a href="#continue-pretraining-an-llm">Continued pretraining</a> •
+<a href="#finetune-an-llm">Finetune</a> •
+<a href="#pretrain-an-llm">Pretrain</a> •
+<a href="#continue-pretraining-an-llm">Continued pretraining</a> •
 <a href="#evaluate-an-llm">Evaluate</a> •
 <a href="#deploy-an-llm">Deploy</a> •
 <a href="#test-an-llm">Test</a>
 </p>
 
 &nbsp;
 
-Use the command line interface to run advanced workflows such as pretraining or finetuning on your own data.
+Use the command line interface to run advanced workflows such as pretraining or finetuning on your own data.
 
 
-## All workflows
+## All workflows
 After installing LitGPT, select the model and workflow to run (finetune, pretrain, evaluate, deploy, etc...):
 
 ```bash

@@ -191,7 +191,7 @@ litgpt evaluate meta-llama/Llama-3.2-3B-Instruct
 
 &nbsp;
 
-----
+----
 
 ## Finetune an LLM
 

@@ -230,7 +230,7 @@ litgpt serve out/custom-model/final
 
 &nbsp;
 
-----
+----
 
 ## Deploy an LLM
 

@@ -242,7 +242,7 @@ litgpt serve out/custom-model/final
 
 &nbsp;
 
-Deploy a pretrained or finetune LLM to use it in real-world applications. Deploy, automatically sets up a web server that can be accessed by a website or app.
+Deploy a pretrained or finetune LLM to use it in real-world applications. Deploy, automatically sets up a web server that can be accessed by a website or app.
 
 ```bash
 # deploy an out-of-the-box LLM

@@ -286,7 +286,7 @@ litgpt evaluate microsoft/phi-2 --tasks 'truthfulqa_mc2,mmlu'
 
 &nbsp;
 
-----
+----
 
 ## Test an LLM
 

@@ -297,7 +297,7 @@ litgpt evaluate microsoft/phi-2 --tasks 'truthfulqa_mc2,mmlu'
 </div>
 
 &nbsp;
-
+
 Test how well the model works via an interactive chat. Use the `chat` command to chat, extract embeddings, etc...
 
 Here's an example showing how to use the Phi-2 LLM:

@@ -322,7 +322,7 @@ litgpt chat microsoft/phi-2
 >> Prompt: What do Llamas eat?
 ```
 
-The download of certain models requires an additional access token. You can read more about this in the [download](tutorials/download_model_weights.md#specific-models-and-access-tokens) documentation.
+The download of certain models requires an additional access token. You can read more about this in the [download](tutorials/download_model_weights.md#specific-models-and-access-tokens) documentation.
 
 </details>
 

@@ -375,7 +375,7 @@ litgpt chat out/custom-model/final
 
 &nbsp;
 
-----
+----
 
 ## Continue pretraining an LLM
 

@@ -418,27 +418,19 @@ litgpt chat out/custom-model/final
 
 &nbsp;
 
-----
+----
 
 # State-of-the-art features
 
-&nbsp;State-of-the-art optimizations: Flash Attention v2, multi-GPU support via fully-sharded data parallelism, [optional CPU offloading](tutorials/oom.md#do-sharding-across-multiple-gpus), and [TPU and XLA support](extensions/xla).
-
-&nbsp;[Pretrain](tutorials/pretrain.md), [finetune](tutorials/finetune.md), and [deploy](tutorials/inference.md)
-
-&nbsp;Reduce compute requirements with low-precision settings: FP16, BF16, and FP16/FP32 mixed.
-
-&nbsp;Lower memory requirements with [quantization](tutorials/quantize.md): 4-bit floats, 8-bit integers, and double quantization.
-
-&nbsp;[Configuration files](config_hub) for great out-of-the-box performance.
-
-&nbsp;Parameter-efficient finetuning: [LoRA](tutorials/finetune_lora.md), [QLoRA](tutorials/finetune_lora.md), [Adapter](tutorials/finetune_adapter.md), and [Adapter v2](tutorials/finetune_adapter.md).
-
-&nbsp;[Exporting](tutorials/convert_lit_models.md) to other popular model weight formats.
-
-&nbsp;Many popular datasets for [pretraining](tutorials/pretrain.md) and [finetuning](tutorials/prepare_dataset.md), and [support for custom datasets](tutorials/prepare_dataset.md#preparing-custom-datasets-for-instruction-finetuning).
-
-&nbsp;Readable and easy-to-modify code to experiment with the latest research ideas.
+✅ State-of-the-art optimizations: Flash Attention v2, multi-GPU support via fully-sharded data parallelism, [optional CPU offloading](tutorials/oom.md#do-sharding-across-multiple-gpus), and [TPU and XLA support](extensions/xla).</br>
+✅ [Pretrain](tutorials/pretrain.md), [finetune](tutorials/finetune.md), and [deploy](tutorials/inference.md)</br>
+✅ Reduce compute requirements with low-precision settings: FP16, BF16, and FP16/FP32 mixed.</br>
+✅ Lower memory requirements with [quantization](tutorials/quantize.md): 4-bit floats, 8-bit integers, and double quantization.</br>
+✅ [Configuration files](config_hub) for great out-of-the-box performance.</br>
+✅ Parameter-efficient finetuning: [LoRA](tutorials/finetune_lora.md), [QLoRA](tutorials/finetune_lora.md), [Adapter](tutorials/finetune_adapter.md), and [Adapter v2](tutorials/finetune_adapter.md).</br>
+✅ [Exporting](tutorials/convert_lit_models.md) to other popular model weight formats.</br>
+✅ Many popular datasets for [pretraining](tutorials/pretrain.md) and [finetuning](tutorials/prepare_dataset.md), and [support for custom datasets](tutorials/prepare_dataset.md#preparing-custom-datasets-for-instruction-finetuning).</br>
+✅ Readable and easy-to-modify code to experiment with the latest research ideas.</br>
 
 &nbsp;
 

@@ -458,7 +450,7 @@ litgpt finetune \
 ```
 <details>
 <summary>✅ Use configs to customize training</summary>
-
+
 Configs let you customize training for all granular parameters like:
 
 ```yaml

@@ -625,7 +617,7 @@ litgpt finetune \
 
 # Project highlights
 
-LitGPT powers many great AI projects, initiatives, challenges and of course enterprises. Please submit a pull request to be considered for a feature.
+LitGPT powers many great AI projects, initiatives, challenges and of course enterprises. Please submit a pull request to be considered for a feature.
 
 <details>
 <summary>📊 SAMBA: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling</summary>

@@ -670,22 +662,22 @@ The research paper ["Pre-training Small Base LMs with Fewer Tokens"](https://arx
 
 We welcome all individual contributors, regardless of their level of experience or hardware. Your contributions are valuable, and we are excited to see what you can accomplish in this collaborative and supportive environment.
 
-- [Request a feature](https://github.com/Lightning-AI/litgpt/issues)
-- [Submit your first contribution](https://lightning.ai/pages/community/tutorial/how-to-contribute-to-litgpt/)
-- [Join our Discord](https://discord.gg/VptPCZkGNa)
+- [Request a feature](https://github.com/Lightning-AI/litgpt/issues)
+- [Submit your first contribution](https://lightning.ai/pages/community/tutorial/how-to-contribute-to-litgpt/)
+- [Join our Discord](https://discord.gg/VptPCZkGNa)
 
 &nbsp;
 
-# Tutorials
+# Tutorials
 
-🚀 [Get started](tutorials/0_to_litgpt.md)
-⚡️ [Finetuning, incl. LoRA, QLoRA, and Adapters](tutorials/finetune.md)
-🤖 [Pretraining](tutorials/pretrain.md)
-💬 [Model evaluation](tutorials/evaluation.md)
-📘 [Supported and custom datasets](tutorials/prepare_dataset.md)
-🧹 [Quantization](tutorials/quantize.md)
-🤯 [Tips for dealing with out-of-memory (OOM) errors](tutorials/oom.md)
-🧑🏽‍💻 [Using cloud TPUs](extensions/xla)
+🚀 [Get started](tutorials/0_to_litgpt.md)</br>
+⚡️ [Finetuning, incl. LoRA, QLoRA, and Adapters](tutorials/finetune.md)</br>
+🤖 [Pretraining](tutorials/pretrain.md)</br>
+💬 [Model evaluation](tutorials/evaluation.md)</br>
+📘 [Supported and custom datasets](tutorials/prepare_dataset.md)</br>
+🧹 [Quantization](tutorials/quantize.md)</br>
+🤯 [Tips for dealing with out-of-memory (OOM) errors](tutorials/oom.md)</br>
+🧑🏽‍💻 [Using cloud TPUs](extensions/xla)</br>
 
 &nbsp;
 
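For spot-checking changes like these without pre-commit, a short sketch using standard git commands, with no project-specific tooling assumed:

# Warn about trailing whitespace introduced by uncommitted changes
git diff --check

# List tracked files that still contain lines ending in spaces
git grep -nE ' +$'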