The chat completions API is the primary way to interact with HelpingAI. It generates conversational responses with built-in emotional intelligence and intermediate reasoning capabilities.
POST /v1/chat/completions

Creates a chat completion with emotional intelligence and intermediate reasoning.
Required parameters:

Parameter | Type | Description |
---|---|---|
model | string | ID of the model to use. Currently supports Dhanishtha-2.0-preview |
messages | array | A list of messages comprising the conversation so far |
Optional parameters:

Parameter | Type | Description | Default |
---|---|---|---|
temperature | number | Controls randomness (0-2). Higher values make output more random | 0.7 |
max_tokens | integer | Maximum number of tokens to generate (1-4000) | 150 |
top_p | number | Nucleus sampling parameter (0-1) | 1 |
frequency_penalty | number | Penalizes tokens in proportion to how often they have already appeared (-2 to 2) | 0 |
presence_penalty | number | Penalizes tokens that have already appeared, encouraging new topics (-2 to 2) | 0 |
stream | boolean | Whether to stream back partial progress as Server-Sent Events | false |
hideThink | boolean | Whether to hide the model's intermediate reasoning, emitted in `<think>` tags | true |
tools | array | A list of tools the model may call | null |
tool_choice | string/object | Controls which (if any) tool is called | "auto" |
Each message in the messages array should have the following fields:
Field | Type | Description |
---|---|---|
role | string | The role of the message author (system, user, assistant, or tool) |
content | string | The contents of the message |
name | string | (Optional) The name of the author of this message |
tool_calls | array | (Optional) Tool calls generated by the model |
tool_call_id | string | (Optional) Tool call that this message is responding to |
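As a sketch of how these fields fit together, a multi-turn conversation is simply an ordered list of message objects. The content below is illustrative; only role and content are required:

```python
# A hypothetical multi-turn conversation built from the fields above.
# Only role and content are required; name is optional, and tool_calls /
# tool_call_id appear only in tool-calling flows.
messages = [
    {"role": "system", "content": "You are a supportive assistant."},
    {"role": "user", "content": "I'm feeling overwhelmed today."},
    {"role": "assistant", "content": "That sounds heavy. What's weighing on you most?"},
    {"role": "user", "content": "Mostly deadlines.", "name": "alex"},  # optional author name
]

roles = [m["role"] for m in messages]
print(roles)  # ['system', 'user', 'assistant', 'user']
```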
Basic request using Python with requests:

import requests
url = "https://api.helpingai.co/v1/chat/completions"
headers = {
"Authorization": "Bearer YOUR_API_KEY",
"Content-Type": "application/json"
}
data = {
"model": "Dhanishtha-2.0-preview",
"messages": [
{"role": "user", "content": "I'm feeling overwhelmed with my workload today."}
],
"temperature": 0.7,
"max_tokens": 200
}
response = requests.post(url, headers=headers, json=data)
print(response.json())
Using the OpenAI Python SDK:

from openai import OpenAI
client = OpenAI(
base_url="https://api.helpingai.co/v1",
api_key="YOUR_API_KEY"
)
response = client.chat.completions.create(
model="Dhanishtha-2.0-preview",
messages=[
{"role": "user", "content": "I'm feeling overwhelmed with my workload today."}
],
temperature=0.7,
max_tokens=200
)
print(response.choices[0].message.content)
Using the HelpingAI Python SDK:

from helpingai import HelpingAI
client = HelpingAI(api_key="YOUR_API_KEY")
response = client.chat.completions.create(
model="Dhanishtha-2.0-preview",
messages=[
{"role": "user", "content": "I'm feeling overwhelmed with my workload today."}
],
temperature=0.7,
max_tokens=200
)
print(response.choices[0].message.content)
Using Node.js with axios:

const axios = require("axios");
(async () => {
const response = await axios.post(
"https://api.helpingai.co/v1/chat/completions",
{
model: "Dhanishtha-2.0-preview",
messages: [
{
role: "user",
content: "I'm feeling overwhelmed with my workload today.",
},
],
temperature: 0.7,
max_tokens: 200,
},
{
headers: {
Authorization: "Bearer YOUR_API_KEY",
"Content-Type": "application/json",
},
}
);
console.log(response.data.choices[0].message.content);
})();
Using the OpenAI Node.js SDK:

import OpenAI from "openai";
const openai = new OpenAI({
baseURL: "https://api.helpingai.co/v1",
apiKey: "YOUR_API_KEY",
});
async function main() {
const completion = await openai.chat.completions.create({
model: "Dhanishtha-2.0-preview",
messages: [
{
role: "user",
content: "I'm feeling overwhelmed with my workload today.",
},
],
temperature: 0.7,
max_tokens: 200,
});
console.log(completion.choices[0].message.content);
}
main();
Using the HelpingAI Node.js SDK:

import { HelpingAI } from "helpingai";
const client = new HelpingAI({
apiKey: "YOUR_API_KEY",
});
async function main() {
const completion = await client.chat.completions.create({
model: "Dhanishtha-2.0-preview",
messages: [
{
role: "user",
content: "I'm feeling overwhelmed with my workload today.",
},
],
temperature: 0.7,
max_tokens: 200,
});
console.log(completion.choices[0].message.content);
}
main();
The same request with a system prompt to steer tone, using Python with requests:

import requests
url = "https://api.helpingai.co/v1/chat/completions"
headers = {
"Authorization": "Bearer YOUR_API_KEY",
"Content-Type": "application/json"
}
data = {
"model": "Dhanishtha-2.0-preview",
"messages": [
{"role": "system", "content": "You are a supportive career counselor who provides empathetic guidance."},
{"role": "user", "content": "I'm thinking about changing careers but I'm scared."}
],
"temperature": 0.8,
"max_tokens": 300
}
response = requests.post(url, headers=headers, json=data)
print(response.json())
With the OpenAI Python SDK:

from openai import OpenAI
client = OpenAI(
base_url="https://api.helpingai.co/v1",
api_key="YOUR_API_KEY"
)
response = client.chat.completions.create(
model="Dhanishtha-2.0-preview",
messages=[
{"role": "system", "content": "You are a supportive career counselor who provides empathetic guidance."},
{"role": "user", "content": "I'm thinking about changing careers but I'm scared."}
],
temperature=0.8,
max_tokens=300
)
print(response.choices[0].message.content)
With the HelpingAI Python SDK:

from helpingai import HelpingAI
client = HelpingAI(api_key="YOUR_API_KEY")
response = client.chat.completions.create(
model="Dhanishtha-2.0-preview",
messages=[
{"role": "system", "content": "You are a supportive career counselor who provides empathetic guidance."},
{"role": "user", "content": "I'm thinking about changing careers but I'm scared."}
],
temperature=0.8,
max_tokens=300
)
print(response.choices[0].message.content)
With Node.js and axios:

const axios = require("axios");
(async () => {
const response = await axios.post(
"https://api.helpingai.co/v1/chat/completions",
{
model: "Dhanishtha-2.0-preview",
messages: [
{
role: "system",
content:
"You are a supportive career counselor who provides empathetic guidance.",
},
{
role: "user",
content: "I'm thinking about changing careers but I'm scared.",
},
],
temperature: 0.8,
max_tokens: 300,
},
{
headers: {
Authorization: "Bearer YOUR_API_KEY",
"Content-Type": "application/json",
},
}
);
console.log(response.data.choices[0].message.content);
})();
With the OpenAI Node.js SDK:

import OpenAI from "openai";
const openai = new OpenAI({
baseURL: "https://api.helpingai.co/v1",
apiKey: "YOUR_API_KEY",
});
async function main() {
const completion = await openai.chat.completions.create({
model: "Dhanishtha-2.0-preview",
messages: [
{
role: "system",
content:
"You are a supportive career counselor who provides empathetic guidance.",
},
{
role: "user",
content: "I'm thinking about changing careers but I'm scared.",
},
],
temperature: 0.8,
max_tokens: 300,
});
console.log(completion.choices[0].message.content);
}
main();
With the HelpingAI Node.js SDK:

import { HelpingAI } from "helpingai";
const client = new HelpingAI({
apiKey: "YOUR_API_KEY",
});
async function main() {
const completion = await client.chat.completions.create({
model: "Dhanishtha-2.0-preview",
messages: [
{
role: "system",
content:
"You are a supportive career counselor who provides empathetic guidance.",
},
{
role: "user",
content: "I'm thinking about changing careers but I'm scared.",
},
],
temperature: 0.8,
max_tokens: 300,
});
console.log(completion.choices[0].message.content);
}
main();
To expose the model's intermediate reasoning, set hideThink to false. Using Python with requests:

import requests
url = "https://api.helpingai.co/v1/chat/completions"
headers = {
"Authorization": "Bearer YOUR_API_KEY",
"Content-Type": "application/json"
}
data = {
"model": "Dhanishtha-2.0-preview",
"messages": [
{"role": "user", "content": "What's 15 * 24? Show your work."}
],
"hideThink": False, # Shows reasoning process
"temperature": 0.3,
"max_tokens": 300
}
response = requests.post(url, headers=headers, json=data)
print(response.json())
With the OpenAI Python SDK:

from openai import OpenAI
client = OpenAI(
base_url="https://api.helpingai.co/v1",
api_key="YOUR_API_KEY"
)
response = client.chat.completions.create(
model="Dhanishtha-2.0-preview",
messages=[
{"role": "user", "content": "What's 15 * 24? Show your work."}
],
hideThink=False, # Shows reasoning process
temperature=0.3,
max_tokens=300
)
print(response.choices[0].message.content)
With the HelpingAI Python SDK:

from helpingai import HelpingAI
client = HelpingAI(api_key="YOUR_API_KEY")
response = client.chat.completions.create(
model="Dhanishtha-2.0-preview",
messages=[
{"role": "user", "content": "What's 15 * 24? Show your work."}
],
hideThink=False, # Shows reasoning process
temperature=0.3,
max_tokens=300
)
print(response.choices[0].message.content)
With Node.js and axios:

const axios = require("axios");
(async () => {
const response = await axios.post(
"https://api.helpingai.co/v1/chat/completions",
{
model: "Dhanishtha-2.0-preview",
messages: [{ role: "user", content: "What's 15 * 24? Show your work." }],
hideThink: false, // Shows reasoning process
temperature: 0.3,
max_tokens: 300,
},
{
headers: {
Authorization: "Bearer YOUR_API_KEY",
"Content-Type": "application/json",
},
}
);
console.log(response.data.choices[0].message.content);
})();
With the OpenAI Node.js SDK:

import OpenAI from "openai";
const openai = new OpenAI({
baseURL: "https://api.helpingai.co/v1",
apiKey: "YOUR_API_KEY",
});
async function main() {
const completion = await openai.chat.completions.create({
model: "Dhanishtha-2.0-preview",
messages: [{ role: "user", content: "What's 15 * 24? Show your work." }],
hideThink: false, // Shows reasoning process
temperature: 0.3,
max_tokens: 300,
});
console.log(completion.choices[0].message.content);
}
main();
With the HelpingAI Node.js SDK:

import { HelpingAI } from "helpingai";
const client = new HelpingAI({
apiKey: "YOUR_API_KEY",
});
async function main() {
const completion = await client.chat.completions.create({
model: "Dhanishtha-2.0-preview",
messages: [{ role: "user", content: "What's 15 * 24? Show your work." }],
hideThink: false, // Shows reasoning process
temperature: 0.3,
max_tokens: 300,
});
console.log(completion.choices[0].message.content);
}
main();
A successful response returns a JSON object with the following structure:
{
"id": "chatcmpl-abc123",
"object": "chat.completion",
"created": 1677652288,
"model": "Dhanishtha-2.0-preview",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "I understand you're feeling overwhelmed with your workload today. That's a really common experience, and it's completely valid to feel that way when you have a lot on your plate..."
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 20,
"completion_tokens": 45,
"total_tokens": 65
}
}
Top-level response fields:

Field | Type | Description |
---|---|---|
id | string | Unique identifier for the chat completion |
object | string | Object type, always "chat.completion" |
created | integer | Unix timestamp of when the completion was created |
model | string | The model used for the completion |
choices | array | List of completion choices |
usage | object | Token usage information |
Fields of each entry in choices:

Field | Type | Description |
---|---|---|
index | integer | The index of the choice in the list |
message | object | The generated message |
finish_reason | string | Reason the model stopped generating tokens |
Fields of the message object:

Field | Type | Description |
---|---|---|
role | string | Always "assistant" for generated responses |
content | string | The generated response content |
tool_calls | array | (Optional) Tool calls made by the model |
Fields of the usage object:

Field | Type | Description |
---|---|---|
prompt_tokens | integer | Number of tokens in the prompt |
completion_tokens | integer | Number of tokens in the completion |
total_tokens | integer | Total tokens used (prompt + completion) |
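Putting the field tables together, the reply text lives at choices[0].message.content and the token count under usage. A quick sketch of extracting both from a parsed response body (using the sample payload above; no network call involved):

```python
# Sample response body in the structure documented above.
response_json = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "created": 1677652288,
    "model": "Dhanishtha-2.0-preview",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "I understand you're feeling overwhelmed..."},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 20, "completion_tokens": 45, "total_tokens": 65},
}

# The generated text sits at choices[0].message.content.
reply = response_json["choices"][0]["message"]["content"]
# usage.total_tokens is prompt_tokens + completion_tokens.
tokens_used = response_json["usage"]["total_tokens"]

print(reply)
print(f"Tokens used: {tokens_used}")  # Tokens used: 65
```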
For streaming responses, set stream: true. You'll receive Server-Sent Events with partial responses:
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1677652288,"model":"Dhanishtha-2.0-preview","choices":[{"index":0,"delta":{"content":"I understand"},"finish_reason":null}]}
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1677652288,"model":"Dhanishtha-2.0-preview","choices":[{"index":0,"delta":{"content":" you're"},"finish_reason":null}]}
data: [DONE]
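Reassembling the reply means stripping the "data: " prefix from each event, stopping at the "[DONE]" sentinel, and concatenating each chunk's delta content. A minimal sketch, run here against the sample events above rather than a live connection:

```python
import json

# Sample Server-Sent Events, as shown above (normally read line by
# line from the HTTP response).
sse_lines = [
    'data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"I understand"},"finish_reason":null}]}',
    'data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":" you\'re"},"finish_reason":null}]}',
    "data: [DONE]",
]

text = ""
for line in sse_lines:
    payload = line.removeprefix("data: ")
    if payload == "[DONE]":  # terminal sentinel, not JSON
        break
    chunk = json.loads(payload)
    delta = chunk["choices"][0]["delta"]
    text += delta.get("content", "")  # a delta may omit content

print(text)  # I understand you're
```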
Error responses follow this format:
{
"error": {
"message": "Invalid API key provided",
"type": "authentication_error",
"code": "invalid_api_key"
}
}
Common error codes:

Code | Description |
---|---|
invalid_api_key | The API key is invalid or missing |
insufficient_quota | You've exceeded your usage quota |
model_not_found | The specified model doesn't exist |
invalid_request_error | The request format is invalid |
rate_limit_exceeded | Too many requests in a short time |
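Since every error body shares the same shape, a client can branch on error.code. A hedged sketch of inspecting an error payload; the retry policy below is illustrative, not something this page prescribes:

```python
# Sample error body in the format documented above.
error_response = {
    "error": {
        "message": "Invalid API key provided",
        "type": "authentication_error",
        "code": "invalid_api_key",
    }
}

# Illustrative policy: of the documented codes, only rate limiting is
# sensibly retried; the others need a fix to the request or account.
RETRYABLE = {"rate_limit_exceeded"}

def describe_error(body):
    err = body.get("error", {})
    code = err.get("code", "unknown")
    retry = code in RETRYABLE
    return f"{code}: {err.get('message', '')} (retryable: {retry})"

print(describe_error(error_response))
# invalid_api_key: Invalid API key provided (retryable: False)
```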
Streaming Guide - Learn about real-time responses
Tool Calling - Function calling capabilities
Intermediate Reasoning - Understanding AI thoughts
Models API - Available models and their capabilities