# AI/ML Systems
TRF’s AI/ML extension captures the lifecycle of machine learning models—from requirements to deployment monitoring—so teams can demonstrate responsible AI practices.
## Core artifacts
| Kind | Description |
|---|---|
| `requirement` | Functional and non-functional expectations (accuracy, latency, fairness). |
| `dataset` | Training, validation, and test datasets with provenance, size, and quality metrics. |
| `model` | Versioned models with architecture details and performance metrics. |
| `experiment` | Training runs with hyperparameters, hardware, duration, and seeds. |
| `runtime_monitor` | Safety cages and runtime metrics (drift detection, fallback behavior). |
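As a concrete illustration, here is a minimal Python sketch of the metadata a `model` artifact record might carry. The class and field names are illustrative assumptions, not the TRF schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelArtifact:
    """Illustrative model record; field names are assumptions, not the TRF schema."""
    artifact_id: str                                     # e.g. "model:churn-classifier"
    version: str                                         # model version (semantic or hash-based)
    architecture: str                                    # short architecture description
    metrics: dict = field(default_factory=dict)          # accuracy, latency, fairness scores
    dataset_ids: list = field(default_factory=list)      # linked dataset artifacts
    experiment_ids: list = field(default_factory=list)   # linked training runs

record = ModelArtifact(
    artifact_id="model:churn-classifier",
    version="2.3.0",
    architecture="gradient-boosted trees",
    metrics={"accuracy": 0.91, "p95_latency_ms": 12},
    dataset_ids=["dataset:crm-2024q1"],
    experiment_ids=["experiment:run-0142"],
)
```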
## Typical workflow
- Define requirements and risk analyses for the AI feature (e.g., bias constraints, safety monitors).
- Capture dataset lineage: source systems, labeling processes, augmentation steps.
- Record training experiments with reproducibility hashes and outcomes (see the hashing sketch after this list).
- Publish model artifacts linked to datasets and experiments.
- Link runtime monitors and deployment metrics back to model versions.
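A minimal sketch of one way to compute the reproducibility hash mentioned above, assuming it covers the dataset contents, hyperparameters, and random seed. The function names and payload fields are hypothetical, not part of the `tw` tooling.

```python
import hashlib
import json
from pathlib import Path

def dataset_fingerprint(path: str) -> str:
    """SHA-256 over the raw dataset file, used as a provenance reference."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def experiment_hash(dataset_path: str, hyperparams: dict, seed: int) -> str:
    """Combine dataset fingerprint, hyperparameters, and seed into one digest."""
    payload = {
        "dataset": dataset_fingerprint(dataset_path),
        "hyperparams": hyperparams,
        "seed": seed,
    }
    canonical = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

if __name__ == "__main__":
    # Example usage; the path, hyperparameters, and seed are placeholders.
    example = Path("data/train.parquet")
    if example.exists():
        print(experiment_hash(str(example), {"lr": 0.01, "epochs": 20}, seed=42))
```

Recording this digest on the experiment artifact lets a later run with the same data, hyperparameters, and seed be recognized as a reproduction attempt.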
## Coverage patterns
```
requirement --> dataset          (validated_by)
requirement --> model            (implemented_by)
model       --> experiment       (trained_with)
model       --> runtime_monitor  (guarded_by)
```
Use `tw coverage --from requirement --to model` to confirm each requirement has an implemented model and corresponding validation.
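The underlying check is easy to picture: walk the requirement-to-model links and flag any requirement with no `implemented_by` edge. A rough Python sketch of that logic, independent of how `tw` actually implements it:

```python
# Links as (source, relation, target) triples; names mirror the patterns above.
links = [
    ("REQ-001", "implemented_by", "model:churn-classifier"),
    ("REQ-001", "validated_by",   "dataset:crm-2024q1"),
    ("REQ-002", "validated_by",   "dataset:support-tickets"),
]
requirements = {"REQ-001", "REQ-002"}

implemented = {src for src, rel, _ in links if rel == "implemented_by"}
uncovered = requirements - implemented
print("Requirements without an implemented model:", sorted(uncovered))
# -> ['REQ-002']
```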
## Compliance alignment
- AI Act / NIST AI RMF – Document risk assessments, mitigations, monitoring plans.
- ISO/IEC 5338 – Capture ML lifecycle management artifacts.
- FDA GMLP – Track dataset provenance, performance testing, post-market monitoring.
## Tips
- Tag datasets with source licenses and consent statements to streamline legal review.
- Attach model cards and evaluation notebooks to `model` artifacts via attachments.
- Use custom link relations (`derived_from`, `evaluated_on`) for rich lineage graphs, as sketched below.
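One way to materialize such a lineage graph outside of TRF is with `networkx`, storing the relation name on each edge; the artifact IDs below are placeholders.

```python
import networkx as nx

# Directed lineage graph; each edge reads "source --relation--> target",
# matching the coverage-pattern notation above. IDs are placeholders.
lineage = nx.DiGraph()
lineage.add_edge("dataset:crm-2024q1", "dataset:crm-raw", relation="derived_from")
lineage.add_edge("model:churn-classifier", "dataset:crm-2024q1", relation="evaluated_on")
lineage.add_edge("model:churn-classifier", "experiment:run-0142", relation="trained_with")

# Full provenance of the model: every artifact reachable from it.
provenance = nx.descendants(lineage, "model:churn-classifier")
print(sorted(provenance))
```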
See real-world patterns in Use Cases and validation techniques in Cryptographic Integrity.