Improper authorization control for web services in open-webui
Description
Open WebUI's /responses passthrough endpoint lacks per-model access control authorization.
Summary
The /responses endpoint in the OpenAI router accepts any authenticated user and forwards requests directly to upstream LLM providers without enforcing per-model access control. While the primary chat completion endpoint (generate_chat_completion) checks model ownership, group membership, and AccessGrants before allowing a request, the /responses proxy only validates that the user has a valid session via get_verified_user.
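The following is a minimal sketch of the gap, using simplified FastAPI handlers and stub helpers rather than the actual open-webui source (route paths, helper names, and payload shapes are illustrative). It contrasts the unprotected passthrough with the per-model check applied on the primary chat completion path:

```python
# Minimal sketch of the authorization gap; names are simplified
# assumptions, not the actual open-webui source.
from fastapi import APIRouter, Depends, HTTPException

router = APIRouter(prefix="/api/openai")

def get_verified_user():
    # Stub: in open-webui this only validates the session token.
    return {"id": "user-1", "role": "user"}

def user_can_access_model(user: dict, model_id: str) -> bool:
    # Stub standing in for the ownership / group membership /
    # AccessGrants evaluation done by generate_chat_completion.
    return model_id in {"allowed-model"}

async def forward_to_upstream(payload: dict) -> dict:
    # Stand-in for the proxy call to the upstream LLM provider.
    return {"forwarded": payload}

@router.post("/responses")
async def responses_proxy(payload: dict, user=Depends(get_verified_user)):
    # Vulnerable pattern: any verified session reaches the upstream;
    # payload["model"] is never checked against the user's permissions.
    return await forward_to_upstream(payload)

@router.post("/chat/completions")
async def chat_completions(payload: dict, user=Depends(get_verified_user)):
    # Protected pattern: per-model authorization is enforced first.
    if not user_can_access_model(user, payload.get("model", "")):
        raise HTTPException(status_code=403, detail="Model access denied")
    return await forward_to_upstream(payload)
```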
This allows any authenticated user — regardless of role or group assignment — to interact with any model configured on the instance by sending a POST request to /api/openai/responses with an arbitrary model ID.
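A hypothetical proof of concept illustrating the bypass; the base URL, token, and model ID are placeholders:

```python
# Hypothetical PoC: a low-privileged user calls the passthrough
# endpoint with a model that is restricted for them.
import requests

resp = requests.post(
    "https://openwebui.example.com/api/openai/responses",
    headers={"Authorization": "Bearer <any-valid-user-token>"},
    json={"model": "o1-pro", "input": "Hello"},
    timeout=30,
)
# On a vulnerable instance this returns the upstream model's output
# instead of a 403, even though the model is not assigned to the user.
print(resp.status_code, resp.text[:200])
```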
Impact
Mapped to the OWASP Top 10 for LLM Applications:
- Model Denial of Service (OWASP LLM04): An unauthorized user can submit resource-intensive requests to expensive models (e.g., o1-pro, GPT-4o) that were explicitly restricted by the administrator. In shared deployments, this can exhaust API budgets or rate limits, causing total service disruption for all legitimate users.
- Model Theft (OWASP LLM10): If the instance proxies access to fine-tuned or self-hosted models, unauthorized users can freely interact with them, enabling capability extraction or model distillation without authorization.
- Access Policy Bypass: Administrators lose the ability to enforce cost-tier restrictions, team-based model assignments, or compliance boundaries through the existing access control system.
The endpoint is a raw passthrough proxy and does not resolve workspace model configurations (system prompts, knowledge bases, RAG pipelines). Therefore, workspace-specific confidential data is not directly exposed through this vector.
Mitigation
Enforce on the /responses endpoint the same per-model authorization that generate_chat_completion already applies (model ownership, group membership, and AccessGrants) before forwarding any request upstream.
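A hedged sketch of the missing check, reusing the stub helpers from the sketch above; the real fix must hook into open-webui's own access-control helpers:

```python
# Sketch of a fixed handler; helper names are assumptions carried
# over from the earlier sketch, not the actual open-webui API.
from fastapi import Depends, HTTPException

async def responses_proxy(payload: dict, user=Depends(get_verified_user)):
    model_id = payload.get("model", "")
    # Apply the same ownership / group / AccessGrants evaluation that
    # generate_chat_completion performs before proxying upstream.
    if not user_can_access_model(user, model_id):
        raise HTTPException(status_code=403, detail="Model access denied")
    return await forward_to_upstream(payload)
```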
Update Impact
Minimal update, though it may introduce new vulnerabilities or breaking changes.
| Ecosystem | Package | Affected version | Patched versions |
|---|---|---|---|
| pypi | open-webui | 0.9.0 | |