Quickstart Guide
Get up and running with TokenFlux in just a few minutes. This guide will help you make your first API call and understand the basics of our platform.
TokenFlux is fully compatible with OpenAI’s client libraries. If you have existing OpenAI code, you can switch to TokenFlux by simply changing the base URL and API key.
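For example, with the official OpenAI JavaScript SDK, migrating is just a matter of updating the client constructor (the key below is a placeholder):

import OpenAI from 'openai';

// Before: the client pointed at OpenAI's API
// const client = new OpenAI({ apiKey: 'YOUR_OPENAI_API_KEY' });

// After: point the same client at TokenFlux
const client = new OpenAI({
  apiKey: 'YOUR_TOKENFLUX_API_KEY',   // your TokenFlux key
  baseURL: 'https://tokenflux.ai/v1', // the only other change needed
});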
Prerequisites
Before you begin, you’ll need:
- A TokenFlux account (sign up free)
- Credits in your account (purchase after signing up)
- Your API key from the dashboard
Installation
TokenFlux works with any OpenAI-compatible client library. Choose your preferred language:
JavaScript/TypeScript
npm install openai
# or
yarn add openai
# or
bun add openai

Python
pip install openai

cURL
No installation needed - cURL is typically pre-installed on most systems.
Your First API Call
Here’s how to make your first chat completion request:
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_TOKENFLUX_API_KEY',
  baseURL: 'https://tokenflux.ai/v1',
});

async function main() {
  const completion = await client.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      { role: 'user', content: 'Hello, TokenFlux!' }
    ],
  });

  console.log(completion.choices[0].message.content);
}

main();
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_TOKENFLUX_API_KEY",
    base_url="https://tokenflux.ai/v1"
)

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Hello, TokenFlux!"}
    ]
)

print(completion.choices[0].message.content)
curl https://tokenflux.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "X-Api-Key: YOUR_TOKENFLUX_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello, TokenFlux!"}
    ]
  }'
Replace YOUR_TOKENFLUX_API_KEY with your actual API key from the dashboard. You can find it under Settings → API Keys.
Try Different Models
TokenFlux provides access to 200+ models from leading providers. Simply change the model parameter:
// Try different models
const models = [
  'gpt-4o',            // OpenAI GPT-4o
  'claude-3.5-sonnet', // Anthropic Claude
  'gemini-pro',        // Google Gemini
  'mistral-large',     // Mistral AI
  'deepseek-chat',     // DeepSeek
];

for (const model of models) {
  const completion = await client.chat.completions.create({
    model: model,
    messages: [{ role: 'user', content: 'What makes you unique?' }],
  });
  console.log(`${model}: ${completion.choices[0].message.content}`);
}
Use the Models API to discover all available models with their pricing and capabilities.
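As a quick sketch, the model list can be fetched directly from the OpenAI-compatible /v1/models endpoint; fields beyond each model's id (such as pricing and capabilities) depend on TokenFlux's response shape, so check the Models API reference for the exact schema:

// List available models (sketch - response fields beyond `id` are assumptions;
// see the Models API reference for pricing and capability details)
const res = await fetch('https://tokenflux.ai/v1/models', {
  headers: { 'X-Api-Key': 'YOUR_TOKENFLUX_API_KEY' },
});
const { data } = await res.json();
for (const model of data) {
  console.log(model.id);
}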
Streaming Responses
For a better user experience with long responses, enable streaming:
const stream = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Write a short story' }],
  stream: true,
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || '';
  process.stdout.write(content);
}
Generate Embeddings
Create vector embeddings for semantic search and RAG applications:
const embedding = await client.embeddings.create({
  model: 'text-embedding-3-small',
  input: 'TokenFlux makes AI integration simple',
});

console.log(`Embedding dimensions: ${embedding.data[0].embedding.length}`);
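To see how these vectors support semantic search, here is a minimal sketch that embeds two strings in one request and compares them with cosine similarity (the cosineSimilarity helper below is illustrative, not part of any SDK):

// Embed two texts in one request, then measure how similar they are
const { data } = await client.embeddings.create({
  model: 'text-embedding-3-small',
  input: [
    'TokenFlux makes AI integration simple',
    'Integrating AI models is easy with TokenFlux',
  ],
});

// Cosine similarity: values closer to 1 mean more semantically similar
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity(data[0].embedding, data[1].embedding));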
Create Images
Generate images from text descriptions:
// Note: Image generation has a different response format
const response = await fetch('https://tokenflux.ai/v1/images/generations', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-Api-Key': 'YOUR_TOKENFLUX_API_KEY',
  },
  body: JSON.stringify({
    model: 'flux-pro',
    prompt: 'A futuristic city with flying cars at sunset',
    width: 1024,
    height: 1024,
  }),
});

const generation = await response.json();
console.log(`Generation ID: ${generation.id}`);
// Poll for completion or check status
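A polling loop might look like the sketch below; the status endpoint path and the status field name are assumptions here, so confirm them against the image generation API reference:

// Poll until the generation finishes (sketch - the endpoint path and
// `status` field are assumptions, not confirmed API)
async function waitForGeneration(id) {
  while (true) {
    const res = await fetch(`https://tokenflux.ai/v1/images/generations/${id}`, {
      headers: { 'X-Api-Key': 'YOUR_TOKENFLUX_API_KEY' },
    });
    const generation = await res.json();
    if (generation.status === 'succeeded') return generation;
    if (generation.status === 'failed') throw new Error('Generation failed');
    await new Promise((resolve) => setTimeout(resolve, 2000)); // wait 2s between checks
  }
}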
What’s Next?
Now that you’ve made your first API calls, explore these resources:
Remember to keep your API key secure and never commit it to version control. Use environment variables or secure key management services in production.
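For example, load the key from an environment variable instead of hard-coding it (the variable name TOKENFLUX_API_KEY is just a convention):

import OpenAI from 'openai';

// Export the key in your shell, or keep it in a .env file excluded from version control:
//   export TOKENFLUX_API_KEY=your-key-here
const client = new OpenAI({
  apiKey: process.env.TOKENFLUX_API_KEY,
  baseURL: 'https://tokenflux.ai/v1',
});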