
383 Sandboxing to limit model exposure to unverified data sources


Summary

Protect the model environment from untrusted or malicious data inputs — whether they are encountered during pre-training, fine-tuning, or when using embeddings data.


Description

To protect the model environment from untrusted data, apply sandboxing measures:

- Run the model in isolated containers or VMs with resource limits and read-only file systems.
- Tightly control network and file access, allowing only whitelisted sources.
- Enforce runtime behavior through policy-as-code tools such as OPA (Open Policy Agent), restricting data loading to approved locations.
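The last point — restricting data loading to approved locations — can be sketched in application code as a simple allowlist gate. This is a minimal illustration, not a substitute for enforcement at the policy-engine or network layer; the host names and function names below are hypothetical assumptions, not part of the original control.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of approved data sources (assumption for illustration).
APPROVED_HOSTS = {
    "data.internal.example.com",
    "models.internal.example.com",
}

def is_approved_source(url: str) -> bool:
    """Return True only if the URL's host is on the allowlist."""
    host = urlparse(url).hostname
    return host in APPROVED_HOSTS

def load_dataset(url: str) -> bytes:
    """Refuse to load training or embedding data from non-whitelisted locations."""
    if not is_approved_source(url):
        raise PermissionError(f"Blocked: {url} is not an approved data source")
    # Placeholder: the actual fetch would happen here, inside the sandbox.
    return b""
```

In a production setup the same allowlist would typically be expressed as an OPA policy and enforced outside the application, so a compromised or misconfigured model process cannot bypass it.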


Supported In

Advanced: True


References


Weaknesses


Last updated

2025/07/08