
Commit 0403e60

Update intro.md
1 parent c305e53 commit 0403e60

File tree: 1 file changed (+56 −3 lines)

docs/stable/intro.md

Lines changed: 56 additions & 3 deletions
@@ -2,9 +2,62 @@
sidebar_position: 1
---

<p align="center">
  <img src="./docs/images/serverlessllm.jpg" alt="ServerlessLLM Logo" width="30%">
</p>

# ServerlessLLM

ServerlessLLM is a fast, affordable, and easy-to-use library designed for multi-LLM serving, also known as [Serverless Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/serverless-endpoints.html), [Inference Endpoint](https://huggingface.co/inference-endpoints/dedicated), or [Model Endpoints](https://learn.microsoft.com/en-us/azure/machine-learning/concept-endpoints?view=azureml-api-2). The library is ideal for environments with limited GPU resources (GPU poor), as it enables efficient dynamic loading of models onto GPUs. By supporting high levels of GPU multiplexing, it maximizes GPU utilization without dedicating GPUs to individual models.

## News

- [07/24] We are working towards the first release and getting the documentation ready. Stay tuned!

## About

ServerlessLLM is Fast:

- Supports leading LLM inference libraries, including [vLLM](https://github.com/vllm-project/vllm) and [HuggingFace Transformers](https://huggingface.co/docs/transformers/en/index).
- Achieves 5-10X faster loading speed than [Safetensors](https://github.com/huggingface/safetensors) and the PyTorch checkpoint loader.
- Provides a start-time-optimized model loading scheduler, achieving 5-100X lower LLM start-up latency than [Ray Serve](https://docs.ray.io/en/latest/serve/index.html) and [KServe](https://github.com/kserve/kserve).

ServerlessLLM is Affordable:

- Allows many LLM models to share a few GPUs with low model-switching overhead and seamless live migration of inference.
- Fully utilizes the local storage available on multi-GPU servers, reducing the need for costly storage servers and network bandwidth.

ServerlessLLM is Easy:

- Facilitates easy deployment via [Ray Cluster](https://docs.ray.io/en/latest/cluster/getting-started.html) and [Kubernetes](https://kubernetes.io/) (coming soon).
- Seamlessly deploys [HuggingFace Transformers](https://huggingface.co/docs/transformers/en/index) models and your custom LLM models.
- Integrates seamlessly with the [OpenAI Query API](https://platform.openai.com/docs/introduction); see the sketch after this list.

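For example, once a model is deployed on a running ServerlessLLM cluster, it can be queried with any OpenAI-compatible client. The sketch below uses the official `openai` Python package; the endpoint address (`http://127.0.0.1:8343/v1` here) and the model name are placeholders, not fixed values — substitute the address and model of your own deployment.

```python
from openai import OpenAI

# Placeholder endpoint and model name: use the address your ServerlessLLM
# cluster actually serves on and the model you actually deployed.
client = OpenAI(base_url="http://127.0.0.1:8343/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="facebook/opt-1.3b",
    messages=[{"role": "user", "content": "What is ServerlessLLM?"}],
)
print(response.choices[0].message.content)
```
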
## Getting Started

1. Install ServerlessLLM following the [Installation Guide](https://serverlessllm.github.io/docs/stable/getting_started/installation/).

2. Start a local ServerlessLLM cluster following the [Quick Start Guide](https://serverlessllm.github.io/docs/stable/getting_started/quickstart/).

3. Just want to try out fast checkpoint loading in your own code? Check out the [ServerlessLLM Store Guide](https://serverlessllm.github.io/docs/stable/store/quickstart); a rough sketch follows below.

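For the fast-loading use case in step 3, the store component can be used as a standalone library alongside HuggingFace Transformers. The sketch below is only a rough illustration: the module and function names (`sllm_store.transformers.save_model` / `load_model`), the storage path, and the example model are assumptions here; follow the ServerlessLLM Store Guide for the exact, current API and the required checkpoint-store setup.

```python
import torch
from transformers import AutoModelForCausalLM

# Assumed import path; check the Store Guide for the exact interface.
from sllm_store.transformers import save_model, load_model

# 1. Fetch a model with HuggingFace Transformers and save it in the
#    store's loading-optimized format on local storage.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-1.3b", torch_dtype=torch.float16
)
save_model(model, "./models/facebook/opt-1.3b")

# 2. Later, load the checkpoint back onto the GPU with the fast loader
#    (the store's checkpoint server is assumed to be running; see the guide).
model = load_model(
    "facebook/opt-1.3b",
    device_map="auto",
    torch_dtype=torch.float16,
    storage_path="./models/",
)
```
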
## Performance

A detailed analysis of ServerlessLLM's performance is available [here](./benchmarks/README.md).

## Contributing

ServerlessLLM is actively maintained and developed by these [Contributors](./CONTRIBUTING.md). We welcome new contributors to join us in making ServerlessLLM faster, better, and easier to use. Please check the [Contributing Guide](./CONTRIBUTING.md) for details.

## Citation

If you use ServerlessLLM for your research, please cite our [paper](https://arxiv.org/abs/2401.14351):

```bibtex
@inproceedings{fu2024serverlessllm,
  title={ServerlessLLM: Low-Latency Serverless Inference for Large Language Models},
  author={Fu, Yao and Xue, Leyang and Huang, Yeqi and Brabete, Andrei-Octavian and Ustiugov, Dmitrii and Patel, Yuvraj and Mai, Luo},
  booktitle={USENIX Symposium on Operating Systems Design and Implementation (OSDI'24)},
  year={2024}
}
```
