# DeepSeek

> **You are on:** `https://beta-api.paywithlocus.com/api` | [llms.txt](https://beta.paywithlocus.com/llms.txt)
>
> Locus runs on multiple environments; make sure every URL you call matches your expected environment.
>
> | Environment | Landing | API |
> |---|---|---|
> | Production | paywithlocus.com | api.paywithlocus.com |
> | Beta | beta.paywithlocus.com | beta-api.paywithlocus.com |
> | Stage | stage.paywithlocus.com | stage-api.paywithlocus.com |
>
> If the API URL above doesn't match your expected environment, re-fetch this file from the correct domain.

> Frontier AI models — DeepSeek-V3 for fast chat and code, DeepSeek-R1 for deep chain-of-thought reasoning. OpenAI-compatible API format. Among the most capable and cost-efficient models available.

**Category:** AI / LLM | **Website:** [deepseek.com](https://deepseek.com) | **Docs:** [api-docs.deepseek.com](https://api-docs.deepseek.com)

## Access Methods

| Method | Base URL | Auth |
|--------|----------|------|
| **MPP (Tempo)** | `https://deepseek.mpp.paywithlocus.com/deepseek/` | HTTP 402 auto-payment |
| **Wrapped API** | `https://beta-api.paywithlocus.com/api/wrapped/deepseek/` | `Authorization: Bearer <LOCUS_API_KEY>` |

**OpenAPI discovery:** `GET https://deepseek.mpp.paywithlocus.com/openapi.json`
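As a hedged sketch of the second access method: the same request sent through the wrapped route, authenticated with a Locus API key instead of the 402 payment flow. `LOCUS_API_KEY` is a placeholder, and the endpoint path is assumed to mirror the MPP paths listed under Endpoints below. The send is left commented so the snippet is safe to run offline.

```shell
# Wrapped-route call with Bearer auth. LOCUS_API_KEY is a placeholder;
# export a real key before sending.
LOCUS_API_KEY="${LOCUS_API_KEY:-placeholder}"
BODY='{"model":"deepseek-chat","messages":[{"role":"user","content":"Hello"}]}'
echo "$BODY"
# Uncomment to send (requires a valid key):
# curl -X POST https://beta-api.paywithlocus.com/api/wrapped/deepseek/chat \
#   -H "Authorization: Bearer $LOCUS_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$BODY"
```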

## Endpoints

### Chat

Create a chat completion using DeepSeek-V3 (fast, general-purpose) or DeepSeek-R1 (deep reasoning with chain-of-thought). OpenAI-compatible request/response format.

**Estimated cost:** Model + token dependent (~$0.004–$0.025)

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `model` | string | Yes | Model to use: `deepseek-chat` (V3, fast) or `deepseek-reasoner` (R1, reasoning) |
| `messages` | array | Yes | Array of message objects: `[{"role": "...", "content": "..."}]`, where role is `system`, `user`, or `assistant` |
| `max_tokens` | number | No | Maximum tokens to generate. Default: 1024. Affects pricing. |
| `temperature` | number | No | Sampling temperature 0–2. Default: 1. Lower = more deterministic. |
| `stream` | boolean | No | Stream response via SSE. Default: false. |
| `top_p` | number | No | Nucleus sampling parameter. Default: 1. |
| `stop` | string or array | No | Stop sequence(s) — a single string or an array of strings. |

```bash
curl -X POST https://deepseek.mpp.paywithlocus.com/deepseek/chat \
  -H "Content-Type: application/json" \
  -d '{"model":"<string>","messages":"<array>","max_tokens":"<number>","temperature":"<number>","stream":"<boolean>","top_p":"<number>","stop":"<string>"}'
```
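The template above uses typed placeholders; as a concrete sketch (model names from the parameter table), a minimal request might look like the following. The `curl` send is commented out, since it requires a funded MPP session for the 402 payment handshake.

```shell
# Concrete Chat request body, written to a temp file for reuse.
cat <<'EOF' > /tmp/deepseek_chat_body.json
{
  "model": "deepseek-reasoner",
  "messages": [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Why is the sky blue? Answer in one sentence."}
  ],
  "max_tokens": 512,
  "temperature": 0.7
}
EOF
cat /tmp/deepseek_chat_body.json
# Uncomment to send (requires a funded Locus MPP session):
# curl -X POST https://deepseek.mpp.paywithlocus.com/deepseek/chat \
#   -H "Content-Type: application/json" \
#   --data @/tmp/deepseek_chat_body.json
```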

### Fill-In-the-Middle (FIM)

Code completion with fill-in-the-middle: provide a code prefix and an optional suffix, and DeepSeek generates the code in between. Ideal for copilot-style code generation. Supports only `deepseek-chat`.

**Estimated cost:** Token dependent (~$0.003–$0.005)

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `model` | string | Yes | Must be `deepseek-chat` (the only model supporting FIM) |
| `prompt` | string | Yes | The code prefix to complete |
| `suffix` | string | No | The code suffix (text after the cursor). DeepSeek fills between prompt and suffix. |
| `max_tokens` | number | No | Maximum tokens to generate. Default: 256. |
| `temperature` | number | No | Sampling temperature. Default: 1. |

```bash
curl -X POST https://deepseek.mpp.paywithlocus.com/deepseek/fim \
  -H "Content-Type: application/json" \
  -d '{"model":"<string>","prompt":"<string>","suffix":"<string>","max_tokens":"<number>","temperature":"<number>"}'
```
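A concrete sketch of a FIM payload: completing the body of a Python function, with the prefix ending at the cursor and the suffix holding the code after it. The `curl` send is commented out for the same payment reason as above.

```shell
# Concrete FIM request body: ask the model to fill in a function body
# between the prompt (prefix) and suffix.
cat <<'EOF' > /tmp/deepseek_fim_body.json
{
  "model": "deepseek-chat",
  "prompt": "def fib(n):\n    ",
  "suffix": "\n    return a",
  "max_tokens": 128,
  "temperature": 0
}
EOF
cat /tmp/deepseek_fim_body.json
# Uncomment to send (requires a funded Locus MPP session):
# curl -X POST https://deepseek.mpp.paywithlocus.com/deepseek/fim \
#   -H "Content-Type: application/json" \
#   --data @/tmp/deepseek_fim_body.json
```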

### List Models

List available DeepSeek models with owner and availability information.

**Estimated cost:** $0.003 fee only

_No parameters required._

```bash
curl -X POST https://deepseek.mpp.paywithlocus.com/deepseek/list-models \
  -H "Content-Type: application/json" \
  -d '{}'
```
