claudeMessages

Create Message (Anthropic Format)

post
/messages

Creates a message using Anthropic's native Messages API format. Supports all Claude models through FastRouter with full feature parity including streaming, tool use, extended thinking, and vision.

Available at both:

  • POST https://api.fastrouter.ai/api/v1/messages

  • POST https://api.fastrouter.ai/v1/messages

Authentication: Use your FastRouter API key via either:

  • x-api-key: YOUR_API_KEY (Anthropic style)

  • Authorization: Bearer YOUR_API_KEY (OpenAI style)

Streaming: Set stream: true to receive Server-Sent Events (SSE) with Anthropic's native streaming format (message_start, content_block_start, content_block_delta, message_delta, etc.).
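As a sketch of how a client might consume these events (the event names match Anthropic's streaming format; the sample payloads below are illustrative, not captured from a live response):

```python
import json

def parse_sse_events(raw: str):
    """Parse a raw SSE payload into a list of (event, data) pairs."""
    events = []
    for block in raw.strip().split("\n\n"):
        event, data = None, None
        for line in block.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data = json.loads(line[len("data:"):].strip())
        if event:
            events.append((event, data))
    return events

# Hypothetical sample stream, shaped like Anthropic's streaming events.
sample = (
    'event: message_start\n'
    'data: {"type": "message_start", "message": {"id": "msg_01", "role": "assistant"}}\n\n'
    'event: content_block_delta\n'
    'data: {"type": "content_block_delta", "index": 0, "delta": {"type": "text_delta", "text": "Hello"}}\n\n'
    'event: message_delta\n'
    'data: {"type": "message_delta", "delta": {"stop_reason": "end_turn"}}\n\n'
)

# Accumulate the assistant's text from content_block_delta events.
text = "".join(
    d["delta"]["text"]
    for e, d in parse_sse_events(sample)
    if e == "content_block_delta" and d["delta"].get("type") == "text_delta"
)
print(text)
```

A real client would read the response body incrementally and feed each chunk into a parser like this rather than buffering the whole stream.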

Cost tracking: The response usage object includes a cost field showing credits consumed.

Drop-in replacement: This endpoint is a drop-in replacement for Anthropic's /v1/messages API. Simply change the base URL to https://api.fastrouter.ai and use your FastRouter API key.
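For example, a minimal request using only the Python standard library might look like this (the model ID and message content are illustrative; the network call is left commented out):

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"                  # your FastRouter API key
BASE_URL = "https://api.fastrouter.ai"    # the only change from Anthropic's base URL

body = {
    "model": "anthropic/claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello, Claude"}],
}

req = urllib.request.Request(
    f"{BASE_URL}/v1/messages",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "content-type": "application/json",
        "x-api-key": API_KEY,  # Anthropic-style auth; Authorization: Bearer also works
    },
    method="POST",
)

# resp = urllib.request.urlopen(req)       # uncomment to send the request
# message = json.loads(resp.read())
# print(message["usage"]["cost"])          # credits consumed (FastRouter cost field)
```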

Authorizations
Authorization · string · Required

FastRouter API Key. Get yours at https://fastrouter.ai

Format: Authorization: Bearer YOUR_API_KEY

Body
model · string · Required

Model ID. Use anthropic/ prefix or bare Anthropic model name. Examples: anthropic/claude-sonnet-4-20250514, claude-sonnet-4-20250514, anthropic/claude-opus-4-20250514

Example: anthropic/claude-sonnet-4-20250514
messages · object[] · Required

The input messages. Each message is an object with a role (user or assistant) and content (a string or an array of content blocks).
max_tokens · integer · Required

The maximum number of tokens to generate before stopping. Required for all requests.

Example: 1024
system · one of · Optional

System prompt. Can be a string or an array of content blocks for advanced use cases like prompt caching.

string · Optional
or
content block object[] · Optional
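Both forms of the system field can be sketched as plain JSON payloads. The block form below uses Anthropic's cache_control marker for prompt caching; the prompt text is illustrative:

```python
import json

# Simple form: a plain string.
simple = {"system": "You are a concise assistant."}

# Advanced form: an array of text content blocks. Marking a large,
# reusable prefix with cache_control (as in Anthropic's Messages API)
# lets it be cached across requests.
blocks = {
    "system": [
        {"type": "text", "text": "You are a concise assistant."},
        {
            "type": "text",
            "text": "<long reference document goes here>",
            "cache_control": {"type": "ephemeral"},
        },
    ]
}

print(json.dumps(blocks, indent=2))
```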
temperature · number · max: 1 · Optional

Amount of randomness injected into the response. Ranges from 0.0 to 1.0.

Example: 1
top_p · number · Optional

Nucleus sampling parameter. Use a lower value to ignore less probable options.

Example: 0.999
top_k · integer · Optional

Only sample from the top K options for each subsequent token.

stop_sequences · string[] · Optional

Custom text sequences that will cause the model to stop generating.

stream · boolean · Optional

Whether to stream the response using Server-Sent Events (SSE).

Default: false · Example: false
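Putting the body parameters together, a complete request payload might look like the following (all values are illustrative, including the stop sequence):

```python
import json

payload = {
    "model": "anthropic/claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "system": "You are a helpful assistant.",
    "messages": [{"role": "user", "content": "Write a haiku about routers."}],
    "temperature": 1,
    "top_p": 0.999,
    "top_k": 40,                       # only sample from the 40 most likely tokens
    "stop_sequences": ["\n\nHuman:"],  # illustrative custom stop sequence
    "stream": False,
}

print(json.dumps(payload, indent=2))
```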
Responses