Updated step operator docs #2908

Merged · 5 commits · Aug 7, 2024
Changes from all commits
16 changes: 8 additions & 8 deletions docs/book/component-guide/step-operators/vertex.md
@@ -85,7 +85,7 @@ Once you added the step operator to your active stack, you can use it to execute
from zenml import step


-@step(step_operator= <NAME>)
+@step(step_operator=<NAME>)
def trainer(...) -> ...:
"""Train a model."""
# This step will be executed in Vertex.
@@ -115,13 +115,13 @@ For additional configuration of the Vertex step operator, you can pass `VertexStepOperatorSettings`
from zenml import step
from zenml.integrations.gcp.flavors.vertex_step_operator_flavor import VertexStepOperatorSettings

-@step(step_operator= <NAME>, settings={"step_operator.vertex": VertexStepOperatorSettings(
-    accelerator_type = "NVIDIA_TESLA_T4" # see https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec#AcceleratorType
-    accelerator_count = 1
-    machine_type = "n1-standard-2" # see https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types
-    disk_type = "pd-ssd" # see https://cloud.google.com/vertex-ai/docs/training/configure-storage#disk-types
-    disk_size_gb = 100 # see https://cloud.google.com/vertex-ai/docs/training/configure-storage#disk-size
-)})
+@step(step_operator=<STEP_OPERATOR_NAME>, settings={"step_operator.vertex": VertexStepOperatorSettings(
+    accelerator_type="NVIDIA_TESLA_T4",  # see https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec#AcceleratorType
+    accelerator_count=1,
+    machine_type="n1-standard-2",  # see https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types
+    disk_type="pd-ssd",  # see https://cloud.google.com/vertex-ai/docs/training/configure-storage#disk-types
+    disk_size_gb=100,  # see https://cloud.google.com/vertex-ai/docs/training/configure-storage#disk-size
+)})

Contributor comment: Not sure we should fix that to a name here; people might think they can just copy-paste. The settings key is always fixed, which is why we can hard-code it here.
def trainer(...) -> ...:
"""Train a model."""
# This step will be executed in Vertex.
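For context on the diff above: the `<STEP_OPERATOR_NAME>` placeholder refers to a step operator that was registered beforehand. A minimal sketch of that registration, assuming the flag names from the ZenML GCP integration docs (`<GCP_PROJECT>` and `<REGION>` are placeholders, not values from this PR):

```shell
# Register a Vertex AI step operator under a name of your choice (all values are placeholders)
zenml step-operator register <STEP_OPERATOR_NAME> --flavor=vertex \
    --project=<GCP_PROJECT> --region=<REGION>

# Add it to the active stack so @step(step_operator=...) can resolve it by name
zenml stack update -s <STEP_OPERATOR_NAME>
```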
5 changes: 3 additions & 2 deletions docs/book/getting-started/zenml-pro/system-architectures.md
@@ -43,11 +43,12 @@ varying scenarios described below.
![Scenario 1: Full SaaS deployment](../../.gitbook/assets/cloud_architecture_scenario_1.png)


-In this scenario, all services are hosted on infrastructure hosted by the ZenML Team.
+In this scenario, all services are hosted on infrastructure hosted by the ZenML Team,
+except the MLOps stack components.
Customer secrets and credentials required to access customer infrastructure are
stored and managed by the ZenML Pro Control Plane.

-On our infrastructure for ZenML Pro SaaS only ML _metadata_ (e.g. pipeline and
+On the ZenML Pro infrastructure, only ML _metadata_ (e.g. pipeline and
model tracking and versioning information) is stored. All the actual ML data
artifacts (e.g. data produced or consumed by pipeline steps, logs and
visualizations, models) are stored on the customer cloud. This can be set up
2 changes: 1 addition & 1 deletion docs/book/reference/global-settings.md
@@ -92,7 +92,7 @@ In addition to the above, you may also find the following files and folders under
In order to help us better understand how the community uses ZenML, the pip package reports **anonymized** usage statistics. You can always opt out by using the CLI command:

```bash
-zenml config analytics opt-out
+zenml analytics opt-out
```
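Conversely, analytics can be re-enabled later; a sketch assuming the CLI's symmetric `opt-in` subcommand (not part of this PR's diff):

```shell
# Re-enable anonymized usage analytics (counterpart of opt-out)
zenml analytics opt-in
```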

#### Why does ZenML collect analytics? <a href="#motivation" id="motivation"></a>