-
-## Ambassador program
-
-Love AI infrastructure and open-source? Become a `dstack` ambassador!
-
-As an ambassador, you’ll play a key role in growing our community by:
-
-* Sharing your expertise through blogs, talks, and tutorials
-* Organizing meetups and community events
-* Advocating for open-source AI container orchestration
-
-
- Get involved
-
-
-> We support ambassadors with recognition, wider exposure, and cloud GPU credits.
-
-## Contributing to dstack
-
-Join the development of `dstack` by contributing bug fixes,
-new features, and cloud integrations via custom backends.
-
-
diff --git a/docs/docs/concepts/backends.md b/docs/docs/concepts/backends.md
index b809ba265..91ac1f0f9 100644
--- a/docs/docs/concepts/backends.md
+++ b/docs/docs/concepts/backends.md
@@ -4,7 +4,7 @@ To use `dstack` with cloud providers, configure backends
via the [`~/.dstack/server/config.yml`](../reference/server/config.yml.md) file.
The server loads this file on startup.
-Alternatively, you can configure backends on the [project settings page](../guides/administration.md#backends) via UI.
+Alternatively, you can configure backends on the [project settings page](../concepts/projects.md#backends) via UI.
> For using `dstack` with on-prem servers, no backend configuration is required.
> Use [SSH fleets](../concepts/fleets.md#ssh) instead.
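
For reference, a minimal `~/.dstack/server/config.yml` of the kind this page describes might look roughly like the following; the `aws` backend and `default` credentials are illustrative only, and the full schema lives in the server reference linked above:

```yaml
# Sketch of a server config with one project and one backend.
# Backend type and credentials are placeholders; adjust to your cloud.
projects:
  - name: main
    backends:
      - type: aws
        creds:
          type: default
```

The server picks this file up on startup, so changes require a restart unless backends are edited via the project settings UI instead.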
diff --git a/docs/docs/concepts/dev-environments.md b/docs/docs/concepts/dev-environments.md
index 3e963967e..1c4a45571 100644
--- a/docs/docs/concepts/dev-environments.md
+++ b/docs/docs/concepts/dev-environments.md
@@ -339,5 +339,5 @@ retry:
--8<-- "docs/concepts/snippets/manage-runs.ext"
!!! info "What's next?"
- 1. Read about [tasks](tasks.md), [services](services.md), and [repos](../guides/repos.md)
+ 1. Read about [tasks](tasks.md), [services](services.md), and [repos](repos.md)
2. Learn how to manage [fleets](fleets.md)
diff --git a/docs/docs/guides/monitoring.md b/docs/docs/concepts/metrics.md
similarity index 99%
rename from docs/docs/guides/monitoring.md
rename to docs/docs/concepts/metrics.md
index e71604776..fec6a222a 100644
--- a/docs/docs/guides/monitoring.md
+++ b/docs/docs/concepts/metrics.md
@@ -1,4 +1,4 @@
-# Monitoring
+# Metrics
## Prometheus
diff --git a/docs/docs/guides/administration.md b/docs/docs/concepts/projects.md
similarity index 99%
rename from docs/docs/guides/administration.md
rename to docs/docs/concepts/projects.md
index d45abc8ab..bf4d9dc88 100644
--- a/docs/docs/guides/administration.md
+++ b/docs/docs/concepts/projects.md
@@ -1,4 +1,4 @@
-# Administration
+# Projects
Projects enable the isolation of different teams and their resources. Each project can configure its own backends and
control which users have access to it.
diff --git a/docs/docs/guides/repos.md b/docs/docs/concepts/repos.md
similarity index 100%
rename from docs/docs/guides/repos.md
rename to docs/docs/concepts/repos.md
diff --git a/docs/docs/concepts/services.md b/docs/docs/concepts/services.md
index 420b764ad..c6d0d5dfc 100644
--- a/docs/docs/concepts/services.md
+++ b/docs/docs/concepts/services.md
@@ -473,7 +473,7 @@ If you'd like `dstack` to automatically retry, configure the
--8<-- "docs/concepts/snippets/manage-runs.ext"
!!! info "What's next?"
- 1. Read about [dev environments](dev-environments.md), [tasks](tasks.md), and [repos](../guides/repos.md)
+ 1. Read about [dev environments](dev-environments.md), [tasks](tasks.md), and [repos](repos.md)
2. Learn how to manage [fleets](fleets.md)
3. See how to set up [gateways](gateways.md)
4. Check the [TGI :material-arrow-top-right-thin:{ .external }](../../examples/deployment/tgi/index.md){:target="_blank"},
diff --git a/docs/docs/concepts/tasks.md b/docs/docs/concepts/tasks.md
index bec936e88..8d0739d34 100644
--- a/docs/docs/concepts/tasks.md
+++ b/docs/docs/concepts/tasks.md
@@ -418,6 +418,6 @@ retry:
--8<-- "docs/concepts/snippets/manage-runs.ext"
!!! info "What's next?"
- 1. Read about [dev environments](dev-environments.md), [services](services.md), and [repos](../guides/repos.md)
+ 1. Read about [dev environments](dev-environments.md), [services](services.md), and [repos](repos.md)
2. Learn how to manage [fleets](fleets.md)
3. Check the [Axolotl](/examples/fine-tuning/axolotl) example
diff --git a/docs/docs/guides/server-deployment.md b/docs/docs/guides/server-deployment.md
index d1b4e96a6..6b24c0b17 100644
--- a/docs/docs/guides/server-deployment.md
+++ b/docs/docs/guides/server-deployment.md
@@ -61,7 +61,7 @@ To use `dstack` with cloud providers, configure [backends](../concepts/backends.
via the `~/.dstack/server/config.yml` file.
The server loads this file on startup.
-Alternatively, you can configure backends on the [project settings page](../guides/administration.md#backends) via UI.
+Alternatively, you can configure backends on the [project settings page](../concepts/projects.md#backends) via UI.
> For using `dstack` with on-prem servers, no backend configuration is required.
> Use [SSH fleets](../concepts/fleets.md#ssh) instead.
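
As a sketch of the SSH-fleet alternative mentioned above for on-prem servers, a fleet configuration could look roughly like this; the fleet name, user, key path, and host addresses are placeholders:

```yaml
# Hypothetical on-prem fleet: dstack connects to each host over SSH,
# so no cloud backend configuration is needed.
type: fleet
name: on-prem-fleet
ssh_config:
  user: ubuntu
  identity_file: ~/.ssh/id_rsa
  hosts:
    - 192.168.1.10
    - 192.168.1.11
```

See the fleets page referenced above for the authoritative schema.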
diff --git a/docs/docs/guides/troubleshooting.md b/docs/docs/guides/troubleshooting.md
index f43023141..047e55eff 100644
--- a/docs/docs/guides/troubleshooting.md
+++ b/docs/docs/guides/troubleshooting.md
@@ -38,7 +38,7 @@ If you have configured a backend and still can't use it, check the output of `ds
for backend configuration errors.
> **Tip**: You can find a list of successfully configured backends
-> on the [project settings page](../guides/administration.md#backends) in the UI.
+> on the [project settings page](../concepts/projects.md#backends) in the UI.
#### Cause 2: Requirements mismatch
@@ -113,7 +113,7 @@ If you are using
[dstack Sky :material-arrow-top-right-thin:{ .external }](https://sky.dstack.ai){:target="_blank"},
you will not see marketplace offers until you top up your balance.
Alternatively, you can configure your own cloud accounts
-on the [project settings page](../guides/administration.md#backends)
+on the [project settings page](../concepts/projects.md#backends)
or use [SSH fleets](../concepts/fleets.md#ssh).
### Provisioning fails
diff --git a/docs/docs/reference/cli/dstack/init.md b/docs/docs/reference/cli/dstack/init.md
index 7cdf63d93..94aa9c5fc 100644
--- a/docs/docs/reference/cli/dstack/init.md
+++ b/docs/docs/reference/cli/dstack/init.md
@@ -1,6 +1,6 @@
# dstack init
-This command initializes the current directory as a `dstack` [repo](../../../guides/repos.md).
+This command initializes the current directory as a `dstack` [repo](../../../concepts/repos.md).
**Git credentials**
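
A typical initialization flow, assuming the `dstack` CLI is installed and the current directory is (or will become) a Git repo, might look like:

```shell
# Run from the project root; marks this directory as a dstack repo
# so subsequent `dstack apply` commands know what code to sync.
cd my-project
dstack init
```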
diff --git a/docs/overrides/home.html b/docs/overrides/home.html
index 4fbdefdff..bf0fb2601 100644
--- a/docs/overrides/home.html
+++ b/docs/overrides/home.html
@@ -16,90 +16,69 @@
{% endblock %}
@@ -110,26 +89,28 @@
Simplified AI workload orchestration
-
dstack is an open-source alternative to Kubernetes and Slurm, designed
- to simplify AI development for ML engineers. It streamlines AI workloads and GPU orchestration
- across top clouds and on-prem clusters.
+
dstack is an open-source alternative to
+ Kubernetes and Slurm, designed
+ to simplify GPU allocation and AI workload orchestration
+ for ML teams across top clouds, on-prem clusters, and accelerators.
- Designed for ML engineers, it simplifies development, training, cluster management, and
- inference.
+ dstack natively integrates with top GPU clouds, streamlining the
+ provisioning, allocation, and utilization of cloud GPUs and high-performance interconnected
+ clusters.
- dstack integrates natively with top GPU clouds and runs
- seamlessly on private clouds and data centers.
+ dstack provides a unified interface on top of GPU
+ clouds, simplifying development, training, and deployment for ML teams.
- Dev environments allow you to provision a remote machine, set up with your code and favorite
- IDE, with just one command.
-
+
Orchestrating workloads on existing clusters
+
Whether you have an on-prem cluster of GPU-equipped bare-metal machines or a pre-provisioned
+ cluster of GPU-enabled VMs, you just need to list the hostnames and SSH credentials of the hosts
+ to add the cluster as a fleet and run any AI workload on it.
- Dev environments are perfect for interactively running code
- using your favorite IDE or notebook before scheduling a task or deploying a service.
-
- Tasks allow you to schedule jobs or run web apps.
- Tasks can run on single nodes or be distributed across clusters.
- You can configure dependencies, resources, ports, and more.
+ Before running training jobs or deploying model endpoints, ML engineers often experiment with
+ their code in a desktop IDE while using cloud or on-prem GPU machines.
+ Dev environments streamline this process.
-
Tasks are ideal for training and fine-tuning jobs, running apps,
- or executing batch jobs, including those using Spark and Ray.
Services let you deploy models or web apps as private or public auto-scalable endpoints.
- You can configure dependencies, resources, authorization, auto-scaling rules, and more.
+
+
Scheduling jobs on clusters and single instances
-
Once deployed, the endpoint can be accessed by anyone on the team.
+
+ Tasks simplify the process of scheduling jobs on either optimized clusters or individual
+ instances. They can be used for pre-training or fine-tuning models, as well as for running any
+ AI or data workload that requires efficient GPU utilization.
+
Fleets streamline provisioning and management of cloud and on-prem
- clusters, ensuring optimal performance for AI workloads.
+
Deploying auto-scaling model endpoints
-
Once created, a Fleet enable teams to run Dev environments,
- Tasks, and Services.
+
+ With dstack, you can easily deploy any model as a secure,
+ auto-scaling OpenAI-compatible endpoint, all while using your custom code, Docker image, and
+ serving framework.
+
- Thanks to dstack, my team can quickly tap into affordable GPUs and streamline our workflows
+ Thanks to dstack, my team can quickly tap into affordable
+ GPUs and streamline our workflows
from testing and development to full-scale application deployment.
@@ -298,7 +322,8 @@
Andrew Spott
ML Engineer @Stealth
- Thanks to dstack, I get the convenience of having a personal Slurm cluster
+ Thanks to dstack, I get the convenience of having a personal
+ Slurm cluster
and using budget-friendly cloud GPUs, without paying the super-high premiums charged by the
big three.
@@ -313,7 +338,8 @@
Alvaro Bartolome
ML Engineer @Argilla
- With dstack it's incredibly easy to define a configuration within a
+ With dstack it's incredibly easy to define a configuration
+ within a
repository
and run it without worrying about GPU availability. It lets you focus on
data and your research.
@@ -329,7 +355,8 @@
Park Chansung
ML Researcher @ETRI
- Thanks to dstack, I can effortlessly access the top GPU options across
+ Thanks to dstack, I can effortlessly access the top GPU
+ options across
different clouds,
saving me time and money while pushing my AI work forward.
To efficiently support GPU workloads, Kubernetes typically requires custom operators, and it
may not offer the most intuitive interface for ML engineers.
-
dstack takes a different approach, focusing on container orchestration specifically for AI
+
dstack takes a different approach, focusing on container
+ orchestration specifically for AI
workloads, with the goal of making life easier for ML engineers.
-
Designed to be lightweight, dstack provides a simpler, more intuitive interface for
+
Designed to be lightweight, dstack provides a simpler, more
+ intuitive interface for
development,
training, and inference. It also enables more flexible and cost-effective provisioning
and management of clusters.
-
For optimal flexibility, dstack and Kubernetes can complement each other: dstack can handle
+
For optimal flexibility, dstack and Kubernetes can complement
+ each other: dstack can handle
development, while Kubernetes manages production deployments.
@@ -554,9 +638,11 @@
FAQ
Slurm excels at job scheduling across pre-configured clusters.
-
dstack goes beyond scheduling, providing a full suite of features tailored to ML teams,
+
dstack goes beyond scheduling, providing a full suite of
+ features tailored to ML teams,
including cluster management, dynamic compute provisioning, development environments, and
- advanced monitoring. This makes dstack a more comprehensive solution for AI workloads,
+ advanced monitoring. This makes dstack a more comprehensive
+ solution for AI workloads,
whether in the cloud or on-prem.
@@ -581,7 +667,8 @@
FAQ
- For ML teams seeking a more streamlined, AI-native development platform, dstack
+ For ML teams seeking a more streamlined, AI-native development platform, dstack
provides an alternative to Kubernetes and Slurm, removing the need for
MLOps or custom solutions.