Function Calling
FastRouter supports Function Calling for models capable of planning and invoking tools or functions. This allows LLMs to return structured function calls instead of natural language responses.
Overview
Function calling empowers LLMs to:
Identify when a tool or function is needed based on user input.
Select the appropriate function from a provided set of tools.
Generate structured JSON arguments to invoke that function.
When you provide a list of tools in your API request, compatible models can choose to respond with one or more function calls. You then execute those functions in your application code and feed the results back to the model in a subsequent request. This creates a multi-turn conversation loop for tasks like data retrieval, API integrations, or complex workflows.
FastRouter.ai routes your requests to the best available providers (e.g., Google AI Studio, OpenAI) while supporting OpenAI-compatible formats for tools. This includes parallel function calling for models that support it.
Key Benefits:
Build agents that interact with real-world APIs (e.g., weather services, calendars).
Handle complex queries by breaking them into tool-based steps.
Improve reliability with structured outputs over free-form text.
Supported Models
FastRouter.ai supports function calling on models that natively offer this capability. Here's a partial list (check the Models page for the latest):
Google Models: Gemini 2.5 Pro
OpenAI Models: GPT-4o, GPT-4 Turbo, GPT-3.5 Turbo
Anthropic Models: Claude 4 Opus, Claude 3.7 Sonnet
xAI Models: Grok 3, Grok 4
Use the `provider` field in your request to route to a specific backend if needed.
Usage
To use function calling:

1. Define your tools in the `tools` array of your `/chat/completions` request. Each tool follows the OpenAI-compatible schema (a JSON object with `name`, `description`, and `parameters`).
2. Send the request to FastRouter.ai's endpoint: `https://go.fastrouter.ai/api/v1/chat/completions`.
3. If the model responds with a `tool_calls` array in the response, execute the functions in your code.
4. Append each tool result as a new message (with `role: "tool"`) and send a follow-up request to let the model generate a final response.
Request Parameters:

- `tools`: Array of tool definitions.
- `tool_choice`: Optional; controls how the model uses tools (e.g., `"auto"` for automatic selection, `"none"` to disable, or specify a tool name).
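For example, to force a specific tool, `tool_choice` can also be given an object naming the function, following the OpenAI-compatible schema. This sketch shows the three common forms as payload fragments (the tool name matches the quickstart's `get_current_weather`; verify the exact accepted shapes against the Models page for your chosen model):

```python
import json

# Payload fragments illustrating the common tool_choice forms.
payload_auto = {"tool_choice": "auto"}   # model decides whether to call a tool
payload_none = {"tool_choice": "none"}   # disable tool calls entirely
payload_forced = {
    "tool_choice": {
        "type": "function",
        "function": {"name": "get_current_weather"},  # force this specific tool
    }
}

# Merge the chosen fragment into your request body before sending.
print(json.dumps(payload_forced))
```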
Response Format:

- If a tool is called, the response will include `choices[0].message.tool_calls`: an array of objects with `function.name` and `function.arguments` (a JSON string).
- Execute the function and respond with a message like: `{"role": "tool", "content": "JSON result", "tool_call_id": "call_id_from_response"}`.
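Putting that format together, here is a minimal sketch of turning a `tool_calls` entry into the follow-up `role: "tool"` message. The response object is mocked (no live API call), and the weather result is a stand-in for your real lookup:

```python
import json

# Mocked API response containing a tool call, shaped as described above.
response = {
    "choices": [{
        "message": {
            "role": "assistant",
            "content": "",
            "tool_calls": [{
                "id": "call_0_123",
                "type": "function",
                "function": {
                    "name": "get_current_weather",
                    "arguments": "{\"city\": \"Paris\"}",
                },
            }],
        },
        "finish_reason": "tool_calls",
    }]
}

tool_call = response["choices"][0]["message"]["tool_calls"][0]
args = json.loads(tool_call["function"]["arguments"])  # arguments arrive as a JSON string

# Execute your function, then build the tool message for the follow-up request.
result = {"temperature": 72, "condition": "Sunny"}  # stand-in for a real weather lookup
tool_message = {
    "role": "tool",
    "content": json.dumps(result),
    "tool_call_id": tool_call["id"],
}
print(tool_message)
```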
For authentication, pass your API key in the `Authorization: Bearer YOUR_API_KEY` header.
Quickstart: Single Function Call Example
This example uses two tools: `get_current_weather` and `schedule_meeting`. The model identifies the need for `get_current_weather` based on the user's query.
```shell
curl --location 'https://go.fastrouter.ai/api/v1/chat/completions' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer API-KEY' \
--data '{
  "max_completion_tokens": 1000,
  "messages": [
    {
      "content": "What is the temperature in Paris?",
      "role": "user"
    }
  ],
  "model": "google/gemini-2.5-pro",
  "stream": false,
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Return the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": { "type": "string" }
          },
          "required": ["city"]
        }
      }
    },
    {
      "type": "function",
      "function": {
        "name": "schedule_meeting",
        "description": "Schedule a meeting with specified attendees at a given time and date.",
        "parameters": {
          "type": "object",
          "properties": {
            "attendees": {
              "type": "array",
              "items": { "type": "string" }
            },
            "date": { "type": "string" },
            "time": { "type": "string" },
            "topic": { "type": "string" }
          },
          "required": ["attendees", "date", "time", "topic"]
        }
      }
    }
  ],
  "provider": {
    "only": ["googleaistudio"]
  }
}'
```
Sample Response:

```json
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "",
        "tool_calls": [
          {
            "id": "call_0_123",
            "type": "function",
            "function": {
              "name": "get_current_weather",
              "arguments": "{\"city\":\"Paris\"}"
            }
          }
        ],
        "annotations": null
      },
      "finish_reason": "tool_calls",
      "index": 0
    }
  ]
}
```
Executing Tools: Multi-Turn Example
After receiving a `tool_calls` response, execute the function in your code and send the result back. Here's how to handle the full loop.
Python (Using `requests`)

```python
import requests
import json

API_KEY = "YOUR_API_KEY"
ENDPOINT = "https://go.fastrouter.ai/api/v1/chat/completions"
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# Step 1: Initial request with tools
payload = {
    "model": "openai/gpt-4o",
    "messages": [{"role": "user", "content": "What is the temperature in Paris?"}],
    "tools": [
        # Same tools as above...
    ],
}
response = requests.post(ENDPOINT, json=payload, headers=HEADERS).json()

# Step 2: Check for tool calls and execute
tool_calls = response["choices"][0]["message"].get("tool_calls")
if tool_calls:
    # Step 3: Append the model's message once, then one tool result per call
    payload["messages"].append(response["choices"][0]["message"])
    for tool_call in tool_calls:
        func_name = tool_call["function"]["name"]
        args = json.loads(tool_call["function"]["arguments"])
        # Simulate tool execution (replace with a real API call)
        if func_name == "get_current_weather":
            result = {"temperature": 72, "condition": "Sunny"}  # Mock weather data
        else:
            result = {"error": f"Unknown tool: {func_name}"}
        payload["messages"].append({
            "role": "tool",
            "content": json.dumps(result),
            "tool_call_id": tool_call["id"],
        })

    # Step 4: Send follow-up request for final response
    final_response = requests.post(ENDPOINT, json=payload, headers=HEADERS).json()
    print(final_response["choices"][0]["message"]["content"])
    # e.g., "The temperature in Paris is 72°F and sunny."
```
Node.js (Using `fetch`)

```javascript
const API_KEY = 'YOUR_API_KEY';
const ENDPOINT = 'https://go.fastrouter.ai/api/v1/chat/completions';
const HEADERS = { 'Authorization': `Bearer ${API_KEY}`, 'Content-Type': 'application/json' };

async function main() {
  // Step 1: Initial request
  const payload = {
    model: 'openai/gpt-4o',
    messages: [{ role: 'user', content: 'What is the temperature in Paris?' }],
    tools: [ /* Same tools as above... */ ]
  };
  let response = await fetch(ENDPOINT, {
    method: 'POST',
    headers: HEADERS,
    body: JSON.stringify(payload)
  }).then(res => res.json());

  // Step 2: Handle tool calls
  const toolCalls = response.choices[0].message.tool_calls;
  if (toolCalls) {
    // Step 3: Append the model's message once, then one tool result per call
    payload.messages.push(response.choices[0].message);
    for (const toolCall of toolCalls) {
      const funcName = toolCall.function.name;
      const args = JSON.parse(toolCall.function.arguments);
      // Simulate execution (replace with a real API call)
      let result;
      if (funcName === 'get_current_weather') {
        result = { temperature: 72, condition: 'Sunny' }; // Mock
      } else {
        result = { error: `Unknown tool: ${funcName}` };
      }
      payload.messages.push({
        role: 'tool',
        content: JSON.stringify(result),
        tool_call_id: toolCall.id
      });
    }

    // Step 4: Follow-up request
    response = await fetch(ENDPOINT, {
      method: 'POST',
      headers: HEADERS,
      body: JSON.stringify(payload)
    }).then(res => res.json());
    console.log(response.choices[0].message.content); // Final response
  }
}

main();
```
Best Practices

- Error Handling: If tool execution fails, return an error message in the `content` field.
- Security: Validate arguments before executing tools to prevent injection attacks.
- Streaming: Function calling works with `stream: true`, but tool calls appear in the final chunk.
- Costs: Tool calls count toward token usage; monitor via the response's `usage` field.
- Testing: Start with simple tools and iterate based on model behavior.
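The error-handling and validation practices above can be sketched as a small wrapper around tool execution. This is an illustrative pattern, not part of the FastRouter API: the `get_current_weather` tool and its `city` field come from the quickstart, and the validation checks are examples of the kind of guarding worth doing before touching a real backend:

```python
import json

def run_tool(func_name, raw_arguments):
    """Validate arguments and execute a tool, returning a JSON string
    suitable for the content field of a role: "tool" message."""
    try:
        args = json.loads(raw_arguments)
    except json.JSONDecodeError:
        # Malformed arguments: report the failure instead of raising,
        # so the model can recover in the follow-up turn.
        return json.dumps({"error": "arguments were not valid JSON"})

    if func_name == "get_current_weather":
        city = args.get("city")
        # Validate required fields and types before executing anything.
        if not isinstance(city, str) or not city.strip():
            return json.dumps({"error": "missing or invalid required field: city"})
        result = {"temperature": 72, "condition": "Sunny"}  # mock lookup
        return json.dumps(result)

    return json.dumps({"error": f"unknown tool: {func_name}"})

print(run_tool("get_current_weather", '{"city": "Paris"}'))
print(run_tool("get_current_weather", "not json"))
```

Returning errors as JSON in the `content` field keeps the conversation loop intact: the model sees what went wrong and can retry or explain, rather than your application crashing mid-turn.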