RealmPlay AI API Documentation
Overview
RealmPlay AI provides a cutting-edge AI model specialized in uncensored roleplay, storytelling, and creative writing. Our API allows developers to integrate our powerful AI capabilities into their applications, enabling rich, engaging, and unrestricted content generation.
Access Requirements
Eligibility: The API is exclusively available to Gold and Platinum tier users.
Usage: Gold and Platinum users receive unlimited, unmetered API access.
Getting Started
1. Sign Up and Create an API Key
Create an account at realmplay.ai
Navigate to the "Account" section in the navbar
Click "Create New API Key"
Copy your API key for use in your applications

2. API Base URL
https://model.realmplay.ai/v1
3. Authentication
All API requests require authentication using your API key in the header:
Authorization: Bearer YOUR_API_KEY
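If you are not using an SDK, you can attach this header directly. The following is a minimal sketch using the Python requests library, assuming the Chat Completions endpoint and request format documented below:
import requests

API_KEY = "YOUR_API_KEY"  # Replace with your actual API key
BASE_URL = "https://model.realmplay.ai/v1"

# The Authorization header carries the API key on every request.
response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    json={
        "model": "realmplay-405b",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])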
API Reference
Chat Completions
Generate AI responses based on conversation context.
Endpoint: POST /v1/chat/completions
Request Format
{
  "model": "realmplay-405b",
  "messages": [
    {
      "role": "user",
      "content": "Your message here"
    }
  ],
  "temperature": 0.9,
  "top_p": 1.0,
  "n": 1,
  "stream": false,
  "max_tokens": null,
  "presence_penalty": 0.0,
  "frequency_penalty": 0.0
}
Request Parameters
model (string, required): Must be "realmplay-405b"
messages (array, required): Array of message objects with "role" and "content"
temperature (float, optional): Controls randomness (0.0-2.0, default: 1.0)
top_p (float, optional): Controls diversity via nucleus sampling (0.0-1.0, default: 1.0)
n (integer, optional): Number of completions to generate (default: 1)
stream (boolean, optional): If true, partial message deltas will be sent (default: false)
stop (array, optional): Up to 4 sequences where the API will stop generating
max_tokens (integer, optional): Maximum number of tokens to generate
presence_penalty (float, optional): Penalizes repeated tokens (-2.0 to 2.0, default: 0)
frequency_penalty (float, optional): Penalizes frequent tokens (-2.0 to 2.0, default: 0)
logit_bias (object, optional): Modifies likelihood of specified tokens appearing
user (string, optional): A unique identifier representing your end-user
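As an illustration of the optional parameters, the sketch below caps the completion length and adds a custom stop sequence. It assumes the OpenAI Python SDK client configured as shown under Code Examples; the specific values are arbitrary:
response = client.chat.completions.create(
    model="realmplay-405b",
    messages=[{"role": "user", "content": "List three story hooks set in a haunted forest."}],
    temperature=0.9,
    max_tokens=200,        # cap the length of the completion
    stop=["\n\n"],         # stop generating at the first blank line
    presence_penalty=0.5,  # discourage tokens that have already appeared
)
print(response.choices[0].message.content)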
Message Object
role (string): Either "system", "user", or "assistant"
content (string): The content of the message
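For example, a multi-turn conversation interleaves the three roles. The exchange below is purely illustrative:
messages = [
    {"role": "system", "content": "You are a fantasy storyteller."},
    {"role": "user", "content": "Start a story about a magical forest."},
    {"role": "assistant", "content": "The trees whispered at dusk, and something answered..."},
    {"role": "user", "content": "Continue the story from the whisper."},
]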
Response Format (Non-streaming)
{
  "id": "chatcmpl-123abc...",
  "object": "chat.completion",
  "created": 1684569800,
  "model": "realmplay-405b",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The generated response"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 20,
    "total_tokens": 30
  }
}
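Assuming the OpenAI Python SDK client shown under Code Examples, these fields map directly onto attributes of the response object:
print(response.choices[0].finish_reason)  # e.g. "stop"
print(response.usage.prompt_tokens)       # tokens in the prompt
print(response.usage.completion_tokens)   # tokens generated
print(response.usage.total_tokens)        # prompt + completion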
Streaming Response Format
When stream: true is set, the API returns chunks of data in the following format:
data: {"id": "chatcmpl-123abc...", "object": "chat.completion.chunk", "created": 1684569800, "model": "realmplay-405b", "choices": [{"index": 0, "delta": {"content": "partial"}, "finish_reason": null}]}
data: {"id": "chatcmpl-123abc...", "object": "chat.completion.chunk", "created": 1684569800, "model": "realmplay-405b", "choices": [{"index": 0, "delta": {"content": " content"}, "finish_reason": null}]}
data: {"id": "chatcmpl-123abc...", "object": "chat.completion.chunk", "created": 1684569800, "model": "realmplay-405b", "choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}]}
data: [DONE]
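If you are consuming the stream without an SDK, each line prefixed with data: carries one JSON chunk, and the literal [DONE] marks the end of the stream. A minimal sketch, assuming the Python requests library and the authentication header described above:
import json
import requests

response = requests.post(
    "https://model.realmplay.ai/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY", "Content-Type": "application/json"},
    json={
        "model": "realmplay-405b",
        "messages": [{"role": "user", "content": "Tell a short story."}],
        "stream": True,
    },
    stream=True,  # keep the connection open and read chunks as they arrive
)

for line in response.iter_lines():
    if not line:
        continue                      # skip keep-alive blank lines
    payload = line.decode("utf-8").removeprefix("data: ")
    if payload == "[DONE]":
        break                         # end of stream
    chunk = json.loads(payload)
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        print(delta["content"], end="")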
Code Examples
Python
Using the OpenAI SDK:
from openai import OpenAI

client = OpenAI(
    base_url="https://model.realmplay.ai/v1",
    api_key="YOUR_API_KEY"  # Replace with your actual API key
)

# Basic completion
response = client.chat.completions.create(
    model="realmplay-405b",
    messages=[
        {
            "role": "user",
            "content": "Write a short creative story about a magical forest."
        }
    ],
    temperature=0.9
)
print(response.choices[0].message.content)

# Streaming example
stream = client.chat.completions.create(
    model="realmplay-405b",
    messages=[
        {
            "role": "user",
            "content": "Write a short creative story about a magical forest."
        }
    ],
    stream=True,
    temperature=0.9
)
for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")

JavaScript/TypeScript
Using the OpenAI JS SDK:
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://model.realmplay.ai/v1',
  apiKey: 'YOUR_API_KEY', // Replace with your actual API key
});

// Basic completion
async function generateCompletion() {
  const completion = await openai.chat.completions.create({
    model: 'realmplay-405b',
    messages: [
      {
        role: 'user',
        content: 'Write a short creative story about a magical forest.'
      }
    ],
    temperature: 0.9,
  });
  console.log(completion.choices[0].message.content);
}

// Streaming example
async function streamCompletion() {
  const stream = await openai.chat.completions.create({
    model: 'realmplay-405b',
    messages: [
      {
        role: 'user',
        content: 'Write a short creative story about a magical forest.'
      }
    ],
    stream: true,
    temperature: 0.9,
  });
  let response = '';
  for await (const chunk of stream) {
    if (chunk.choices[0]?.delta?.content) {
      response += chunk.choices[0].delta.content;
      process.stdout.write(chunk.choices[0].delta.content);
    }
  }
}

cURL
curl https://model.realmplay.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "realmplay-405b",
    "messages": [
      {
        "role": "user",
        "content": "Write a short creative story about a magical forest."
      }
    ],
    "temperature": 0.9
  }'

Using System Messages
You can control the AI's behavior by including system messages in your request:
response = client.chat.completions.create(
    model="realmplay-405b",
    messages=[
        {
            "role": "system",
            "content": "You are a fantasy storyteller specializing in dark, mature themes."
        },
        {
            "role": "user",
            "content": "Tell me a story about a haunted castle."
        }
    ],
    temperature=0.9
)

Error Handling
The API returns standard HTTP status codes:
200: OK - Request succeeded
400: Bad Request - Invalid request format or parameters
401: Unauthorized - Invalid API key
403: Forbidden - Valid API key but insufficient permissions (not Gold/Platinum tier)
500: Server Error - Something went wrong on our server
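A minimal sketch of status-code handling using the Python requests library, as in the Authentication example above; the branching and messages are illustrative, not prescribed by the API:
import requests

response = requests.post(
    "https://model.realmplay.ai/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY", "Content-Type": "application/json"},
    json={"model": "realmplay-405b", "messages": [{"role": "user", "content": "Hello!"}]},
)

if response.status_code == 200:
    print(response.json()["choices"][0]["message"]["content"])
elif response.status_code == 401:
    print("Invalid API key - check the Authorization header.")
elif response.status_code == 403:
    print("API key is valid, but the account is not Gold or Platinum tier.")
elif response.status_code == 400:
    print("Bad request:", response.text)
else:
    print("Server error, try again later. Status:", response.status_code)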
Support
If you encounter any issues or have questions about the API, please contact our support team at support@realmplay.ai.
© 2025 RealmPlay.ai - All Rights Reserved