Introduction to TokenFlux

TokenFlux is a unified API gateway that provides developers with seamless access to leading AI models and services. Whether you’re building chat applications, generating embeddings, creating images, or deploying AI agents, TokenFlux offers a single, consistent interface to power your applications.

What is TokenFlux?

TokenFlux simplifies AI development by providing:
  • Unified LLM Access: Connect to 200+ language models from providers like OpenAI, Anthropic, Google, and more through one API
  • OpenAI Compatibility: Drop-in replacement for existing OpenAI client code; no refactoring required
  • Image Generation: Access leading image generation models through our simple API
  • MCP Server Network: Deploy and manage AI agents across 600+ pre-configured Model Context Protocol servers
  • Usage-Based Pricing: Pay only for what you use with transparent, per-request pricing
  • Enterprise Ready: Robust authentication, usage tracking, and billing management

Key Features

OpenAI-Compatible API

TokenFlux implements the OpenAI API specification, making integration effortless. Simply change your base URL and API key to start using any supported model.
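
For instance, an existing OpenAI integration can be pointed at TokenFlux by changing only the key and base URL; the environment variable name below is just an illustration, and the full request flow appears in the Quick Start Example later on this page.

import OpenAI from 'openai';

// Same client your existing code already uses; only the key and
// base URL change. TOKENFLUX_API_KEY is an illustrative variable
// name, not one TokenFlux requires.
const client = new OpenAI({
  apiKey: process.env.TOKENFLUX_API_KEY,
  baseURL: 'https://tokenflux.ai/v1',
});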

Multi-Provider Support

Access models from:
  • OpenAI: GPT-4, GPT-3.5, o1, and more
  • Anthropic: Claude 3.5 Sonnet, Claude 3 Haiku, and more
  • Google: Gemini Pro, Gemini Flash, and more
  • Mistral: Mistral Large, Mixtral, and more
  • Many others: Including open-source and specialized models
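
If you want to check which model identifiers your key can use, you can query the model list; this sketch assumes TokenFlux exposes the OpenAI-style GET /v1/models endpoint, which the SDK wraps as client.models.list().

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'your-tokenflux-api-key',
  baseURL: 'https://tokenflux.ai/v1',
});

// Fetch the model catalog (assumes an OpenAI-compatible /v1/models route).
const models = await client.models.list();
for (const model of models.data) {
  console.log(model.id);
}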

Real-Time Streaming

Support for Server-Sent Events (SSE) streaming for real-time chat completions and responses.
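
As a rough sketch, streaming with the OpenAI Node SDK only requires setting stream: true; the model name below is just an example.

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'your-tokenflux-api-key',
  baseURL: 'https://tokenflux.ai/v1',
});

// stream: true switches the response to Server-Sent Events; the SDK
// exposes the stream as an async iterable of completion chunks.
const stream = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Write a haiku about gateways.' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}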

Comprehensive Usage Tracking

Built-in usage tracking and billing with detailed analytics and transparent pricing.

Getting Your API Key

  1. Sign up at tokenflux.ai
  2. Purchase credits based on your usage needs
  3. Generate an API key from your dashboard
  4. Start building with your new API key

API keys can be managed from your dashboard. You can create multiple keys with different permissions and expiration dates.

Authentication

TokenFlux supports multiple authentication methods:
  • API Key: Include X-Api-Key: your_api_key in request headers
  • Bearer Token: Use Authorization: Bearer your_api_key header
  • OAuth: Casdoor-based authentication for web applications
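
For example, either header style can be attached to a plain HTTP request; the /models route used here is assumed to follow the OpenAI API shape, and only one of the two headers is needed per request.

const baseURL = 'https://tokenflux.ai/v1';
const apiKey = 'your-tokenflux-api-key';

// Option 1: API key header
const viaApiKey = await fetch(`${baseURL}/models`, {
  headers: { 'X-Api-Key': apiKey },
});

// Option 2: Bearer token header
const viaBearer = await fetch(`${baseURL}/models`, {
  headers: { Authorization: `Bearer ${apiKey}` },
});

console.log(viaApiKey.status, viaBearer.status);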

Base URL

All API endpoints are available at:
https://tokenflux.ai/v1

Quick Start Example

Here’s a simple example using the JavaScript/TypeScript OpenAI client:

import OpenAI from 'openai';

// Point the standard OpenAI client at TokenFlux.
const client = new OpenAI({
  apiKey: 'your-tokenflux-api-key',
  baseURL: 'https://tokenflux.ai/v1',
});

// Standard Chat Completions request; any supported model ID works here.
const completion = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'user', content: 'Hello, TokenFlux!' }
  ],
});

console.log(completion.choices[0].message.content);

You can use any existing OpenAI client library by simply updating the baseURL and apiKey parameters.

Rate Limits

TokenFlux implements intelligent rate limiting to ensure fair usage:
  • Rate limits vary by model and pricing tier
  • Limits are enforced per API key
  • HTTP 429 responses include retry-after headers
  • Usage tracking helps you monitor consumption
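
If you hit a 429, a small wrapper can honour the retry-after header before retrying; this is a minimal sketch that assumes the header carries a delay in seconds and falls back to a one-second wait when it is absent.

// Retry a request on HTTP 429, waiting for the advertised retry-after
// interval (in seconds) between attempts. Endpoint and payload in the
// usage example are illustrative.
async function fetchWithRetry(url: string, init: RequestInit, maxRetries = 3): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, init);
    if (res.status !== 429 || attempt >= maxRetries) {
      return res;
    }
    const retryAfterSeconds = Number(res.headers.get('retry-after') ?? '1');
    await new Promise((resolve) => setTimeout(resolve, retryAfterSeconds * 1000));
  }
}

const res = await fetchWithRetry('https://tokenflux.ai/v1/chat/completions', {
  method: 'POST',
  headers: {
    'X-Api-Key': 'your-tokenflux-api-key',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello, TokenFlux!' }],
  }),
});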

Error Handling

TokenFlux returns standard HTTP status codes and structured error responses:

{
  "error": {
    "type": "invalid_request_error",
    "code": "model_not_found",
    "message": "The requested model does not exist",
    "param": "model"
  }
}

Common error types:
  • invalid_request_error: Invalid parameters or request format
  • authentication_error: Invalid API key or insufficient permissions
  • rate_limit_error: Too many requests
  • quota_exceeded_error: Insufficient credits
  • server_error: Internal server error
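
The same structure can be branched on when a request fails; this sketch uses a plain fetch call with an intentionally invalid model so the error path runs, and the field names match the example response above.

const res = await fetch('https://tokenflux.ai/v1/chat/completions', {
  method: 'POST',
  headers: {
    'X-Api-Key': 'your-tokenflux-api-key',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'not-a-real-model',
    messages: [{ role: 'user', content: 'Hello' }],
  }),
});

if (!res.ok) {
  // Error bodies follow the structure shown above.
  const { error } = await res.json();
  switch (error.type) {
    case 'rate_limit_error':
      console.warn('Rate limited; retry after the advertised delay.');
      break;
    case 'quota_exceeded_error':
      console.warn('Out of credits; top up before retrying.');
      break;
    default:
      console.error(`${error.type} (${error.code}): ${error.message}`);
  }
} else {
  const completion = await res.json();
  console.log(completion.choices[0].message.content);
}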

Next Steps

Ready to start building? Check out our API reference documentation.

Support

Need help? We’re here to assist.

Start building amazing AI applications with TokenFlux today!