AI/ML Systems

TRF’s AI/ML extension captures the lifecycle of machine learning models—from requirements to deployment monitoring—so teams can demonstrate responsible AI practices.

Core artifacts

  • requirement – Functional and non-functional expectations (accuracy, latency, fairness).
  • dataset – Training, validation, and test datasets with provenance, size, and quality metrics.
  • model – Versioned models, architecture details, performance metrics.
  • experiment – Training runs with hyperparameters, hardware, duration, seeds.
  • runtime_monitor – Safety cages and runtime metrics (drift detection, fallback behavior).
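
To make the kinds concrete, here is a rough sketch of what a model artifact might carry, written as a plain Python dictionary purely for illustration; the field names (id, kind, version, links) and identifiers are assumptions, not the TRF schema.

# Hypothetical model artifact; every field name and value here is illustrative,
# not taken from the TRF specification.
model_artifact = {
    "id": "model-credit-scoring-v3",
    "kind": "model",
    "version": "3.1.0",
    "architecture": "gradient-boosted trees",
    "metrics": {"auc": 0.91, "p95_latency_ms": 42},
    "links": [
        {"rel": "trained_with", "target": "experiment-2024-06-12"},
        {"rel": "validated_by", "target": "dataset-loans-v7"},
        {"rel": "guarded_by", "target": "monitor-credit-drift"},
    ],
}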

Typical workflow

  1. Define requirements and risk analyses for the AI feature (e.g., bias constraints, safety monitors).
  2. Capture dataset lineage: source systems, labeling processes, augmentation steps.
  3. Record training experiments with reproducibility hashes and outcomes (see the hashing sketch after this list).
  4. Publish model artifacts linked to datasets and experiments.
  5. Link runtime monitors and deployment metrics back to model versions.
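
For step 3, one lightweight way to compute a reproducibility hash is to digest the hyperparameters, random seed, and dataset checksums together, so any change to the training inputs changes the hash. A minimal sketch, assuming the dataset is a set of local files (the function names are illustrative, not part of TRF):

import hashlib
import json
from pathlib import Path

def file_sha256(path: Path) -> str:
    # Stream the file so large datasets do not have to fit in memory.
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def reproducibility_hash(hyperparameters: dict, seed: int, dataset_files: list[Path]) -> str:
    # Canonicalize the experiment description so identical inputs always hash identically.
    record = {
        "hyperparameters": hyperparameters,
        "seed": seed,
        "dataset": sorted(file_sha256(p) for p in dataset_files),
    }
    encoded = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(encoded).hexdigest()

Recording this value on the experiment artifact lets a reviewer later confirm that a published model really came from the inputs on record.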

Coverage patterns

requirement --> dataset (validated_by)
requirement --> model (implemented_by)
model --> experiment (trained_with)
model --> runtime_monitor (guarded_by)

Use tw coverage --from requirement --to model to confirm that each requirement is implemented by a model and has corresponding validation.
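
Conceptually, that check is a small graph query: every requirement needs at least one implemented_by edge to a model and at least one validated_by edge to a dataset. The sketch below illustrates the idea in Python over an in-memory link list; it is not the tw implementation, and the artifact identifiers are made up.

# Each link is (source_id, relation, target_id), mirroring the patterns above.
links = [
    ("REQ-001", "implemented_by", "model-credit-scoring-v3"),
    ("REQ-001", "validated_by", "dataset-loans-v7"),
    ("REQ-002", "implemented_by", "model-fraud-v1"),
]
requirements = {"REQ-001", "REQ-002"}

def coverage_gaps(requirements, links):
    # Report requirements missing an implementing model or a validating dataset.
    gaps = {}
    for req in sorted(requirements):
        relations = {rel for src, rel, _ in links if src == req}
        missing = [rel for rel in ("implemented_by", "validated_by") if rel not in relations]
        if missing:
            gaps[req] = missing
    return gaps

print(coverage_gaps(requirements, links))  # {'REQ-002': ['validated_by']}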

Compliance alignment

  • AI Act / NIST AI RMF – Document risk assessments, mitigations, monitoring plans.
  • ISO/IEC 5338 – Capture ML lifecycle management artifacts.
  • FDA GMLP – Track dataset provenance, performance testing, post-market monitoring.

Tips

  • Tag datasets with source licenses and consent statements to streamline legal review.
  • Attach model cards and evaluation notebooks to model artifacts via attachments.
  • Use custom link relations (derived_from, evaluated_on) for rich lineage graphs, as sketched below.
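
For that last tip, custom relations such as derived_from and evaluated_on can be loaded into an ordinary directed graph and queried for lineage. A small sketch using networkx, with made-up artifact names:

import networkx as nx

# Each edge points from an artifact to its upstream source and carries its
# relation as an attribute, so lineage queries can filter on relation type.
graph = nx.DiGraph()
graph.add_edge("dataset-loans-v7", "dataset-loans-raw", relation="derived_from")
graph.add_edge("model-credit-scoring-v3", "dataset-loans-v7", relation="evaluated_on")
graph.add_edge("model-credit-scoring-v3", "experiment-2024-06-12", relation="trained_with")

# Everything a given model ultimately depends on, regardless of relation type.
upstream = nx.descendants(graph, "model-credit-scoring-v3")
print(sorted(upstream))
# ['dataset-loans-raw', 'dataset-loans-v7', 'experiment-2024-06-12']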

See real-world patterns in Use Cases and validation techniques in Cryptographic Integrity.