Allow specifying run tags in pipeline configuration #3130


Merged
schustmi merged 9 commits into develop from feature/allow-tagging-pipeline-runs on Oct 28, 2024

Conversation

@schustmi schustmi (Contributor) commented Oct 23, 2024

Describe changes

This PR adds the ability to specify tags for pipeline runs:

  • in the config file
# config.yaml
tags:
  - tag_in_config_file
  • in code on the pipeline decorator or in pipeline.configure/with_options
@pipeline(tags=["tag_on_decorator"])
def my_pipeline():
    ...

my_pipeline = my_pipeline.with_options(config_file="config.yaml")
run = my_pipeline()
run.tags  # <- will be tag_in_config_file and tag_on_decorator
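
For illustration, a minimal sketch (not taken from this PR's diff) of passing tags through with_options directly, which the description above says is also supported; the tag names here are made up, and the merge with the decorator tag is assumed to behave like the decorator + config example above:

from zenml import pipeline

@pipeline(tags=["tag_on_decorator"])
def my_pipeline():
    ...

# Tags passed here are assumed to be merged with the decorator tags,
# analogous to the config-file example above.
my_pipeline = my_pipeline.with_options(tags=["tag_via_with_options"])
run = my_pipeline()
run.tags  # <- expected to contain tag_on_decorator and tag_via_with_options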

Pre-requisites

Please ensure you have done the following:

  • I have read the CONTRIBUTING.md document.
  • If my change requires a change to docs, I have updated the documentation accordingly.
  • I have added tests to cover my changes.
  • I have based my new branch on develop and the open PR is targeting develop. If your branch wasn't based on develop, read the Contribution guide on rebasing a branch to develop.
  • If my changes require changes to the dashboard, these changes are communicated/requested.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Other (add details above)

@github-actions github-actions bot added the internal (To filter out internal PRs and issues) and enhancement (New feature or request) labels Oct 23, 2024
coderabbitai bot commented Oct 24, 2024

Review skipped: auto reviews are disabled on this repository.

@schustmi schustmi requested review from bcdurak and htahir1 October 24, 2024 10:04
@htahir1 htahir1 (Contributor) left a comment


Good one!

@schustmi schustmi requested a review from htahir1 October 24, 2024 12:15
@bcdurak bcdurak (Contributor) left a comment


This looks great. I have a few follow-up comments though:

  • First, since we are using tags more often now, it would be great to implement a filtering mechanism for these entities based on their tags (not just for pipeline runs, but also for artifact versions, model versions, and so on). What do you think?

  • We have recently added the ability to add tags to Sagemaker pipelines and steps. Structurally, they look a bit different, and it might get a bit confusing to define tags separately for ZenML pipelines and orchestrator pipelines.

pipeline_settings = SagemakerOrchestratorSettings(
    pipeline_tags={
        "project": "my-ml-project",
        "environment": "production",
    }
)

step_settings = SagemakerOrchestratorSettings(
    tags={
        "step": "data-preprocessing",
        "owner": "data-team"
    }
)
  • Do you think there is also value in using the same tags in the future to tag step runs as well (as we do for Sagemaker pipelines)?

@schustmi schustmi (Contributor, Author) commented

(Quoting @bcdurak's review comment above.)
  • Isn't filtering by tags already implemented in the API, and even in the frontend, for all these entities? Or am I missing some other place where you'd want to filter by them?
  • We currently don't display steps anywhere separately (at least in the dashboard, which IMO is the main use case for filtering), so I don't see any value in tagging step runs, at least for now. As for the confusion with Sagemaker tags: if users explicitly define tags for their Sagemaker orchestrator and those tags don't show up on their ZenML pipelines, they'll figure it out. I wouldn't worry about that.

@schustmi schustmi requested a review from bcdurak October 25, 2024 13:42
@bcdurak bcdurak (Contributor) left a comment


(Quoting @schustmi's reply above.)

You are absolutely right. I looked over the filter model, could not see anything related to tags amongst the attributes, got confused, and completely overlooked the fact that it inherits from WorkspaceScopedTaggableFilter 🤦

As for Sagemaker (and any other orchestrator, for that matter), I think the UX could be improved a bit, but we can take another look at that later. I trust your judgement on this one.
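
As a rough sketch of what tag-based run filtering could look like on the client (assuming list_pipeline_runs forwards a tag argument to the run filter model, which inherits it from WorkspaceScopedTaggableFilter; the tag value is reused from the example in the PR description):

from zenml.client import Client

# Sketch only: list runs that carry a tag set via this PR.
# Assumes the `tag` filter attribute from WorkspaceScopedTaggableFilter
# is accepted as a keyword argument here.
runs = Client().list_pipeline_runs(tag="tag_in_config_file")
for run in runs:
    print(run.name, run.tags)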

@schustmi schustmi merged commit 74e7368 into develop Oct 28, 2024
6 of 17 checks passed
@schustmi schustmi deleted the feature/allow-tagging-pipeline-runs branch October 28, 2024 08:29
schustmi added a commit that referenced this pull request Oct 28, 2024
* Add tags to pipeline configurations

* Merge lists in configurations

* Merge tags

* Docs

* Add test

* Improve comment

* Add redirect

* Fix redirect