# LLM Proxy
HTTP proxy for LLM APIs with streaming support and chunk processing.
## Usage

```sh
./llm-proxy
```
## Configuration
| Variable | Description | Default |
|---|---|---|
| `UPSTREAM_URL` | Upstream LLM API URL | `https://api.openai.com/v1/chat/completions` |
| `LISTEN_ADDR` | Listen address | `:8080` |
| `API_KEY` | Upstream API key | - |
| `INSECURE` | Skip TLS verification | `false` |
## Example

```sh
UPSTREAM_URL=https://api.openai.com/v1/chat/completions \
API_KEY=sk-... \
LISTEN_ADDR=:8080 \
./llm-proxy
```
## Endpoints

- `GET /health` - Health check
- `/*` - Proxies all requests to upstream
## Streaming

Supports SSE (`text/event-stream`) and NDJSON (`application/x-ndjson`) streaming. Each chunk is processed via `processChunk()` before forwarding.