Sandboxing to limit model exposure to unverified data sources
Summary
Protect your model environment from untrusted or malicious data inputs, whether they are introduced during pre-training, fine-tuning, or the use of embedding data.
Description
To protect the model environment from untrusted data, apply sandboxing measures: run the model in isolated containers or VMs with resource limits and a read-only file system; restrict network and file access to allow only whitelisted sources; and enforce runtime behavior through policy-as-code tools such as OPA (Open Policy Agent) so that data loading is limited to approved locations.
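As a minimal sketch of these measures, the Python snippet below launches a model workload in a locked-down Docker container (read-only root file system, no network, capped resources) and consults a local OPA server before mounting a data source. It assumes the docker and requests packages and an OPA server at localhost:8181; the image name (model-runtime:latest), the mount path (/data/approved), and the policy path (model/data_loading/allow) are illustrative placeholders, not names from this requirement.

```python
import docker
import requests

# Assumptions (not in the requirement text): the mount path and OPA
# policy path below are illustrative placeholders.
APPROVED_DATA_PATH = "/data/approved"
OPA_DECISION_URL = "http://localhost:8181/v1/data/model/data_loading/allow"


def is_source_approved(source: str) -> bool:
    """Ask a local OPA server whether a data source is whitelisted."""
    resp = requests.post(
        OPA_DECISION_URL, json={"input": {"source": source}}, timeout=5
    )
    resp.raise_for_status()
    # OPA's data API returns {"result": true/false}; deny by default
    # if the policy produced no result.
    return resp.json().get("result", False)


def run_model_sandboxed(image: str = "model-runtime:latest"):
    """Run the model in an isolated, resource-limited container with a
    read-only file system and only the policy-approved data mounted."""
    if not is_source_approved(APPROVED_DATA_PATH):
        raise PermissionError(
            f"Data source rejected by policy: {APPROVED_DATA_PATH}"
        )

    client = docker.from_env()
    return client.containers.run(
        image,
        read_only=True,            # read-only root filesystem
        network_mode="none",       # no network access at all
        mem_limit="4g",            # memory cap
        nano_cpus=2_000_000_000,   # limit to 2 CPUs
        pids_limit=256,            # bound the process count
        cap_drop=["ALL"],          # drop every Linux capability
        security_opt=["no-new-privileges"],
        volumes={
            APPROVED_DATA_PATH: {"bind": APPROVED_DATA_PATH, "mode": "ro"}
        },
        detach=True,
    )
```

Disabling the network entirely and mounting only the approved path read-only is the strictest reading of the whitelist approach; in practice, egress allowlisting for specific sources would typically be enforced at the network layer, with the OPA policy (written in Rego) deciding which sources count as approved.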
References
Weaknesses
Supported In
This requirement is verified in the following services:
Essential Plan
Advanced Plan