Atlas C2PA Library is a Rust library for integrating the C2PA (Coalition for Content Provenance and Authenticity) standard into machine learning workflows. It enables cryptographically verifiable provenance for ML assets, helping to establish trust in the AI ecosystem.
- Complete C2PA Compliance: Create, sign, and verify manifests according to C2PA specifications
- ML-Specific Extensions: Specialized types for models, datasets, and training processes
- Framework Support: Works with TensorFlow, PyTorch, ONNX, and other major ML frameworks
- Secure Provenance: Cryptographic verification of asset origin and lineage
- Transparent Lineage: Track datasets and components used to create models
- Governance Controls: DoNotTrain assertions to control asset usage permissions
Add Atlas C2PA Library to your Cargo.toml:
[dependencies]
atlas-c2pa-lib = "0.1.2"
Create and build a manifest for a model:

use atlas_c2pa_lib::ml::types::{ModelInfo, MLFramework, ModelFormat};
use atlas_c2pa_lib::ml::manifest::MLManifestBuilder;

// Define model information
let model_info = ModelInfo {
    name: "bert-base".to_string(),
    version: "1.0.0".to_string(),
    framework: MLFramework::PyTorch,
    format: ModelFormat::TorchScript,
};

// Create and build a manifest
let manifest = MLManifestBuilder::new(
    model_info,
    "0123456789abcdef0123456789abcdef".to_string(), // Model hash
)
.with_title("BERT Language Model".to_string())
.build()
.unwrap();
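If the manifest type implements serde's Serialize (an assumption here, not something this README confirms), the built manifest could be serialized for inspection or logging, for example with serde_json:

// Minimal sketch: serialize the built manifest for inspection.
// Assumes the manifest type implements serde::Serialize and that
// serde_json is added as a dependency; check the crate docs to confirm.
let json = serde_json::to_string_pretty(&manifest).expect("manifest should serialize");
println!("{json}");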
Mark an asset as off-limits for training with a DoNotTrain assertion:

use atlas_c2pa_lib::assertion::{Assertion, DoNotTrainAssertion};

// Create a DoNotTrain assertion
let do_not_train = DoNotTrainAssertion {
    reason: "This dataset contains licensed content".to_string(),
    enforced: true,
};

// Convert to the general assertion type
let assertion = Assertion::DoNotTrain(do_not_train);
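For illustration, the resulting value can be inspected with an ordinary pattern match; this uses only the types and fields shown above:

// Pattern-match the assertion to read the DoNotTrain fields.
if let Assertion::DoNotTrain(dnt) = &assertion {
    println!("Do-not-train (enforced: {}): {}", dnt.enforced, dnt.reason);
}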
Track dataset lineage by adding a dataset ingredient to a model manifest:

use atlas_c2pa_lib::ml::ingredient::create_dataset_ingredient;
use atlas_c2pa_lib::ml::manifest::MLManifestBuilder;
use atlas_c2pa_lib::ml::types::{ModelInfo, MLFramework, ModelFormat};

// Create model info
let model_info = ModelInfo {
    name: "text-classifier".to_string(),
    version: "2.0".to_string(),
    framework: MLFramework::TensorFlow,
    format: ModelFormat::SavedModel,
};

// Create dataset ingredient
let dataset = create_dataset_ingredient(
    "Wikipedia-EN-2023",
    "https://example.com/datasets/wiki2023.zip",
    "0123456789abcdef0123456789abcdef", // Dataset hash
);

// Build model manifest with dataset
let manifest = MLManifestBuilder::new(
    model_info,
    "fedcba9876543210fedcba9876543210".to_string(), // Model hash
)
.with_dataset(dataset)
.build()
.unwrap();
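The hash strings in these examples are placeholders. In practice you would compute a digest of the actual asset file; the sketch below uses the sha2 and hex crates and assumes a lowercase hex-encoded SHA-256 digest, since this README does not state which algorithm the library expects:

use sha2::{Digest, Sha256};
use std::fs;

// Compute a hex-encoded SHA-256 digest of an asset file.
// Assumption: the manifest expects a lowercase hex digest; confirm the
// algorithm and encoding against the crate documentation.
fn asset_digest_hex(path: &str) -> std::io::Result<String> {
    let bytes = fs::read(path)?;
    Ok(hex::encode(Sha256::digest(&bytes)))
}

// Hypothetical path, for illustration only.
let dataset_hash = asset_digest_hex("datasets/wiki2023.zip").expect("failed to read dataset file");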
Typical use cases include:
- Model Cards: Enhance model documentation with verifiable provenance information
- Dataset Verification: Ensure datasets come from trustworthy sources and are unmodified
- Training Transparency: Document training parameters, metrics, and dataset usage
- Supply Chain Security: Verify the origin and integrity of model components
For comprehensive documentation, visit docs.rs/atlas-c2pa-lib.
The repository includes several examples demonstrating common usage patterns:
- Creating manifests for models and datasets
- Adding assertions about model capabilities and limitations
- Signing and validating manifests
- Linking related assets with cross-references
Building the library requires Rust 1.70.0 or higher.
Contributions are welcome! Please see our Contributing Guide for details.
This project is licensed under the Apache License, Version 2.0.
Related resources:
- C2PA Specification: The official C2PA specification
- Content Authenticity Initiative: Industry initiative for content provenance
- Atlas CLI: Related tools for responsible AI
The code in this repository is not yet stable and should not be used in production environments.
Developed by Intel Labs to advance transparency and trust in AI systems.