API Documentation
One Endpoint. Full Power.
Everything you need to integrate MatrixAIBase into your application. Simple, fast, and well-documented.
Development Notice
The MatrixAIBase API is currently under development. The documentation below describes the planned API specification. Join the waitlist to get early access.
# Quick Start
Get up and running with MatrixAIBase in under 5 minutes.
Base URL

```
https://api.matrixaibase.com/v1
```
Authentication
All API requests require an API key passed in the Authorization header:
```http
Authorization: Bearer YOUR_API_KEY
```
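In Python, the same header can be assembled once and reused for every request. A minimal sketch, using the placeholder key from above (substitute your real key):

```python
# Headers required by every MatrixAIBase API request.
API_KEY = "YOUR_API_KEY"  # placeholder -- replace with your actual key

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}

print(headers["Authorization"])  # Bearer YOUR_API_KEY
```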
Your First Request
```bash
curl -X POST https://api.matrixaibase.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "matrix-core-1",
    "messages": [
      {
        "role": "user",
        "content": "Hello, MatrixAIBase!"
      }
    ],
    "max_tokens": 256
  }'
```
Response
```json
{
  "id": "mx-abc123",
  "object": "chat.completion",
  "created": 1709500000,
  "model": "matrix-core-1",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm MatrixAIBase. How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 15,
    "total_tokens": 27
  }
}
```
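As a sketch of how a client might consume this response, here is a plain-Python example that pulls the assistant's reply and the token usage out of a decoded response body. The dict literal simply mirrors the JSON example above; the field paths (`choices[0].message.content`, `usage.total_tokens`) are taken directly from it:

```python
# A decoded response body, mirroring the JSON example above.
response = {
    "id": "mx-abc123",
    "object": "chat.completion",
    "created": 1709500000,
    "model": "matrix-core-1",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Hello! I'm MatrixAIBase. How can I help you today?",
            },
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 12, "completion_tokens": 15, "total_tokens": 27},
}

# The reply text lives at choices[0].message.content.
reply = response["choices"][0]["message"]["content"]
total_tokens = response["usage"]["total_tokens"]

print(reply)
print(total_tokens)  # 27
```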
# Models
MatrixAIBase offers three models optimized for different use cases.
| Model | Description | Max Tokens | Price per 1K tokens |
|---|---|---|---|
| `matrix-core-1` | Standard model. Balanced speed and quality. | 4,096 | $0.0003 |
| `matrix-core-1-fast` | Optimized for speed. Faster response times. | 4,096 | $0.0006 |
| `matrix-core-1-deep` | Enhanced reasoning. Best for complex tasks. | 8,192 | $0.0012 |
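The per-1K-token prices above make cost estimation straightforward: cost = (total tokens / 1,000) × price. A small illustrative helper (the price table below is copied from the table above; `estimate_cost` itself is hypothetical, not part of any SDK):

```python
# Price per 1K tokens, copied from the model table above.
PRICE_PER_1K = {
    "matrix-core-1": 0.0003,
    "matrix-core-1-fast": 0.0006,
    "matrix-core-1-deep": 0.0012,
}

def estimate_cost(model: str, total_tokens: int) -> float:
    """Estimated USD cost of a request, from total tokens used."""
    return total_tokens / 1000 * PRICE_PER_1K[model]

# The quick-start example used 27 total tokens on matrix-core-1:
print(f"${estimate_cost('matrix-core-1', 27):.7f}")  # $0.0000081
```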
# SDKs
Official SDKs for popular languages. Install and start building in seconds.
Python
Installation

```bash
pip install matrixaibase
```

Usage

```python
from matrixaibase import MatrixAI

client = MatrixAI(api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="matrix-core-1",
    messages=[
        {"role": "user", "content": "Hello, MatrixAIBase!"}
    ]
)

print(response.choices[0].message.content)
```
Node.js
Installation

```bash
npm install matrixaibase
```

Usage

```javascript
import MatrixAI from 'matrixaibase';

const client = new MatrixAI({ apiKey: 'YOUR_API_KEY' });

const response = await client.chat.completions.create({
  model: 'matrix-core-1',
  messages: [
    { role: 'user', content: 'Hello, MatrixAIBase!' }
  ]
});

console.log(response.choices[0].message.content);
```
PHP
Installation

```bash
composer require matrixaibase/sdk
```

Usage

```php
use MatrixAIBase\Client;

$client = new Client('YOUR_API_KEY');

$response = $client->chat()->completions()->create([
    'model' => 'matrix-core-1',
    'messages' => [
        ['role' => 'user', 'content' => 'Hello, MatrixAIBase!']
    ]
]);

echo $response->choices[0]->message->content;
```
# API Reference
POST /v1/chat/completions
Creates a chat completion for the given messages.
Request Body
| Parameter | Type | Required | Description |
|---|---|---|---|
| `model` | string | Yes | Model ID to use (e.g., `matrix-core-1`) |
| `messages` | array | Yes | Array of message objects with `role` and `content` |
| `max_tokens` | integer | No | Maximum tokens to generate (default: 1024) |
| `temperature` | float | No | Sampling temperature, 0-2 (default: 0.7) |
| `stream` | boolean | No | Enable streaming responses (default: false) |
| `top_p` | float | No | Nucleus sampling parameter (default: 1.0) |
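To show how the optional parameters and their defaults fit together, here is a hypothetical payload builder. The defaults and the 0-2 temperature range come from the table above; the helper itself is illustrative, not part of any SDK:

```python
def build_payload(model, messages, max_tokens=1024,
                  temperature=0.7, stream=False, top_p=1.0):
    """Assemble a /v1/chat/completions request body, applying the
    documented defaults and rejecting out-of-range temperatures."""
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be between 0 and 2")
    return {
        "model": model,
        "messages": messages,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "stream": stream,
        "top_p": top_p,
    }

payload = build_payload("matrix-core-1",
                        [{"role": "user", "content": "Hello!"}])
print(payload["temperature"])  # 0.7 (the documented default)
```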
Error Codes
| Code | Description |
|---|---|
| 400 | Bad request -- invalid parameters |
| 401 | Unauthorized -- invalid or missing API key |
| 429 | Rate limited -- too many requests |
| 500 | Internal server error |
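Of these, 429 is the one usually worth retrying automatically. A sketch of exponential backoff around a request function (`send_request` is a stand-in for whatever HTTP call you use, assumed here to return a `(status, body)` pair; this pattern is not part of any SDK):

```python
import time

def with_backoff(send_request, max_retries=3, base_delay=1.0):
    """Retry a request on 429 (rate limited) with exponential backoff.
    Any other status code is returned to the caller unchanged."""
    for attempt in range(max_retries + 1):
        status, body = send_request()
        if status != 429 or attempt == max_retries:
            return status, body
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Stubbed example: the "server" answers 429 twice, then succeeds.
calls = iter([(429, None), (429, None), (200, {"ok": True})])
status, body = with_backoff(lambda: next(calls), base_delay=0.0)
print(status)  # 200
```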
# Rate Limits
Rate limits vary by plan to ensure fair usage for all users.
| Plan | Requests/min | Tokens/min |
|---|---|---|
| Basic | 20 | 10,000 |
| Standard | 60 | 50,000 |
| Scale | Custom | Custom |
Rate limit headers are included in every response:
```http
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 58
X-RateLimit-Reset: 1709500060
```
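A client can read these headers to pause before the window resets. A minimal sketch, assuming `X-RateLimit-Reset` is a Unix timestamp (as the example value suggests); `seconds_until_reset` is an illustrative helper, not part of any SDK:

```python
def seconds_until_reset(headers, now):
    """If no requests remain in the window, return how long to wait
    until the reset timestamp; otherwise return 0."""
    remaining = int(headers["X-RateLimit-Remaining"])
    reset_at = int(headers["X-RateLimit-Reset"])
    if remaining > 0:
        return 0.0
    return max(0.0, float(reset_at - now))

# Using the example header values shown above:
headers = {
    "X-RateLimit-Limit": "60",
    "X-RateLimit-Remaining": "58",
    "X-RateLimit-Reset": "1709500060",
}
print(seconds_until_reset(headers, now=1709500000))  # 0.0 (capacity left)

exhausted = {**headers, "X-RateLimit-Remaining": "0"}
print(seconds_until_reset(exhausted, now=1709500000))  # 60.0 (wait it out)
```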
# Support
Need help? We're here for you.
- Email: support@matrixaibase.com
- Sales: sales@matrixaibase.com
- Status Page: status.matrixaibase.com