PrimeThink.js - Message Response Handling¶
Overview¶
When sending messages to the AI, you can either wait synchronously for the response or handle it asynchronously via Socket.IO events. This guide covers both approaches and best practices.
Two Approaches¶
1. waitForMessageReceived(taskId) - Promise-based (Recommended)¶
The simplest approach for most use cases. Returns a Promise that resolves when the AI response is complete.
const result = await pt.addMessage('What is 2 + 2?');
const response = await pt.waitForMessageReceived(result.task_id);
console.log('AI says:', response.message);
2. onMessageReceived(taskId, callback) - Callback-based¶
For advanced use cases where you need more control (cancellation, multiple listeners, streaming UI).
const result = await pt.addMessage('What is 2 + 2?');
const unsubscribe = pt.onMessageReceived(result.task_id, (message) => {
console.log('AI says:', message.message);
unsubscribe();
});
Why Use Async Message Handling?¶
When a message is sent without awaiting the response (awaitResponse: false), the API returns immediately with a task_id:
const result = await pt.addMessage('Hello there', { awaitResponse: false });
// Returns immediately:
// {
// success: true,
// user_message: { id: 18695, message: "hello there" },
// details: "Message has been queued",
// task_id: "bc45778e-cbe0-4b5e-9071-a5f89033ca10"
// }
The AI then processes the message in the background and streams the response via Socket.IO.
Socket Events Flow¶
When a message is processed, the following Socket.IO events are emitted:
1. message - User message created (id: 18695)
2. message - AI message placeholder created (id: 18696)
3. stream_reasoning_token - Reasoning/thinking tokens (if the model supports it)
4. stream_partial_token - Response tokens as they're generated
5. stream_completed - Streaming finished for the task
6. message - Final AI message with complete content
Both waitForMessageReceived and onMessageReceived wait for stream_completed and then deliver the final message.
API Reference¶
pt.waitForMessageReceived(taskId, options)¶
Wait for the AI response message for a specific task. Returns a Promise.
Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| taskId | string | required | The task_id returned from addMessage() |
| options.timeout | number | 120000 | Timeout in milliseconds (default: 2 minutes) |
Returns:
Promise<object> - Resolves with the complete AI message object
Throws:
Error - if the timeout is reached before the response arrives
pt.onMessageReceived(taskId, callback)¶
Subscribe to receive the AI response message for a specific task.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| taskId | string | The task_id returned from addMessage() when await_response is false |
| callback | function | Callback function called with the complete AI message object |
Returns:
function - Unsubscribe function to remove the listener
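The subscribe/unsubscribe contract can be sketched with a tiny listener registry. The registry and the deliver helper here are hypothetical stand-ins for illustration; the real bookkeeping lives inside PrimeThink.js.

```javascript
// Sketch of the onMessageReceived contract: subscribing returns a closure
// that removes exactly that callback for that task.
function makeOnMessageReceived() {
  const listeners = new Map(); // taskId -> Set of callbacks

  function onMessageReceived(taskId, callback) {
    if (!listeners.has(taskId)) listeners.set(taskId, new Set());
    listeners.get(taskId).add(callback);
    return function unsubscribe() {
      listeners.get(taskId).delete(callback);
    };
  }

  // Stand-in for the library delivering a completed message.
  function deliver(taskId, message) {
    (listeners.get(taskId) || []).forEach(cb => cb(message));
  }

  return { onMessageReceived, deliver };
}

const bus = makeOnMessageReceived();
let received = null;
const unsubscribe = bus.onMessageReceived('t1', m => { received = m.message; });

bus.deliver('t1', { message: 'hi' }); // callback fires
unsubscribe();
bus.deliver('t1', { message: 'ignored' }); // no longer listening
console.log(received); // "hi"
```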
pt.waitForAllMessagesReceived(taskIds, options)¶
Wait for multiple AI responses to complete. Useful when sending multiple messages in parallel and waiting for all responses.
Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| taskIds | string[] | required | Array of task_ids returned from addMessage() |
| options.timeout | number | 120000 | Timeout in milliseconds for ALL responses (default: 2 minutes) |
| options.failFast | boolean | true | If true, reject immediately on first timeout |
| options.onProgress | function | - | Callback (completed, total, message) called as each response arrives |
Returns:
Promise<object[]> - Resolves with array of message objects in same order as taskIds
Notes:
- Results are returned in the same order as the input taskIds array
- With failFast: false, timed-out tasks return null in the results array
- The onProgress callback is useful for updating progress bars or status indicators
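These semantics (order preservation, null placeholders, progress callbacks) can be sketched on top of any per-task waiter. The waitForAll and waitOne names below are illustrative stubs, not the library's internals; real code would use pt.waitForMessageReceived as the per-task waiter.

```javascript
// Sketch of the batch-wait semantics described above.
async function waitForAll(taskIds, waitOne, { failFast = true, onProgress } = {}) {
  let completed = 0;
  const wrapped = taskIds.map(async (id) => {
    try {
      const msg = await waitOne(id);
      completed += 1;
      if (onProgress) onProgress(completed, taskIds.length, msg);
      return msg;
    } catch (err) {
      if (failFast) throw err; // reject immediately on first timeout
      return null;             // graceful degradation: null placeholder
    }
  });
  // Promise.all preserves input order regardless of completion order.
  return Promise.all(wrapped);
}

// Stub per-task waiter: task 'b' times out, the others succeed.
const waitOne = (id) =>
  id === 'b'
    ? Promise.reject(new Error('Timeout'))
    : Promise.resolve({ task_id: id, message: `answer for ${id}` });

waitForAll(['a', 'b', 'c'], waitOne, { failFast: false }).then((results) => {
  console.log(results.map(r => (r ? r.task_id : null))); // [ 'a', null, 'c' ]
});
```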
Message Object Structure¶
Both methods provide a message object with these properties:
| Property | Type | Description |
|---|---|---|
| id | number | Message ID |
| message | string | Full message text (automatically fetched if truncated) |
| message_is_truncated | boolean | Always false after processing (full text is fetched) |
| user_type | string | 'assistant' for AI responses |
| type | string | Message type (e.g., 'message') |
| created_at | string | ISO timestamp |
| chat_uuid | string | UUID of the chat |
| reasoning_steps | array\|null | Array of reasoning steps (if available) |
| attachments | array | Array of attached documents |
| from_virtual_assistant_id | number | ID of the responding AI agent |
Truncated Message Handling¶
Long AI responses may be truncated during Socket.IO transmission (messages over ~10KB). Both methods automatically:
- Detect if message_is_truncated is true
- Call pt.getMessageText(messageId) to fetch the full text
- Replace the truncated message with the complete text
- Set message_is_truncated to false
This ensures you always receive the complete message content.
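The repair steps above amount to a small fetch-and-replace routine. The resolveTruncation helper below is a sketch with a stubbed fetcher standing in for pt.getMessageText; the message fields follow this document.

```javascript
// Sketch: if the message was truncated in transit, fetch the full body
// and return a repaired copy with the flag cleared.
async function resolveTruncation(message, getMessageText) {
  if (!message.message_is_truncated) return message;
  const fullText = await getMessageText(message.id); // fetch full body by ID
  return { ...message, message: fullText, message_is_truncated: false };
}

// Stub standing in for pt.getMessageText(messageId).
const getMessageText = async (id) => `full text of message ${id}`;

resolveTruncation(
  { id: 18696, message: 'partial...', message_is_truncated: true },
  getMessageText
).then(msg => {
  console.log(msg.message_is_truncated); // false
  console.log(msg.message);              // "full text of message 18696"
});
```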
Examples¶
Basic Usage with waitForMessageReceived (Recommended)¶
// Simple one-liner pattern
const { task_id } = await pt.addMessage('Explain quantum computing');
const { message } = await pt.waitForMessageReceived(task_id);
document.getElementById('answer').textContent = message;
Helper Function Pattern¶
// Create a reusable helper
async function ask(question, options = {}) {
const result = await pt.addMessage(question, options);
return pt.waitForMessageReceived(result.task_id);
}
// Usage becomes very clean
const response = await ask('What is machine learning?');
console.log(response.message);
// With options
const hidden = await ask('Process this silently', { hidden: true });
With Loading State¶
async function askQuestion(question) {
const loadingEl = document.getElementById('loading');
const responseEl = document.getElementById('response');
loadingEl.style.display = 'block';
responseEl.textContent = '';
try {
const result = await pt.addMessage(question);
const response = await pt.waitForMessageReceived(result.task_id);
responseEl.textContent = response.message;
// Show reasoning if available
if (response.reasoning_steps?.length > 0) {
const reasoningEl = document.getElementById('reasoning');
reasoningEl.innerHTML = response.reasoning_steps
.map(step => `<div class="step">${step.label}: ${step.content}</div>`)
.join('');
}
} catch (error) {
responseEl.textContent = 'Error: ' + error.message;
} finally {
loadingEl.style.display = 'none';
}
}
Custom Timeout for Long Responses¶
// For complex requests that may take longer
const result = await pt.addMessage('Write a detailed 5000 word essay about AI history');
const response = await pt.waitForMessageReceived(result.task_id, {
timeout: 300000 // 5 minutes
});
Parallel Questions¶
// Ask multiple questions simultaneously
const questions = [
'What is artificial intelligence?',
'What is machine learning?',
'What is deep learning?'
];
// Send all questions
const results = await Promise.all(
questions.map(q => pt.addMessage(q))
);
// Wait for all responses
const responses = await Promise.all(
results.map(r => pt.waitForMessageReceived(r.task_id))
);
// Display results
responses.forEach((response, i) => {
console.log(`Q: ${questions[i]}`);
console.log(`A: ${response.message}\n`);
});
Batch Questions with waitForAllMessagesReceived¶
// Simpler approach using waitForAllMessagesReceived
const questions = ['What is AI?', 'What is ML?', 'What is DL?'];
const results = await Promise.all(questions.map(q => pt.addMessage(q)));
const taskIds = results.map(r => r.task_id);
const responses = await pt.waitForAllMessagesReceived(taskIds);
responses.forEach((r, i) => {
console.log(`Q: ${questions[i]}`);
console.log(`A: ${r.message}`);
});
Batch Questions with Progress Tracking¶
const questions = ['Question 1?', 'Question 2?', 'Question 3?'];
const results = await Promise.all(questions.map(q => pt.addMessage(q)));
const taskIds = results.map(r => r.task_id);
const responses = await pt.waitForAllMessagesReceived(taskIds, {
timeout: 180000, // 3 minutes for all
onProgress: (completed, total, message) => {
updateProgressBar(completed / total * 100);
console.log(`${completed}/${total} responses received`);
}
});
Batch Helper Function¶
// Create a batch helper
async function askAll(questions, options = {}) {
const results = await Promise.all(questions.map(q => pt.addMessage(q)));
return pt.waitForAllMessagesReceived(
results.map(r => r.task_id),
options
);
}
// Usage
const answers = await askAll(['Q1?', 'Q2?', 'Q3?'], {
onProgress: (done, total) => console.log(`${done}/${total}`)
});
Graceful Degradation (failFast: false)¶
// Continue even if some responses timeout
const responses = await pt.waitForAllMessagesReceived(taskIds, {
failFast: false,
timeout: 60000
});
// responses may contain null for timed-out tasks
const successful = responses.filter(r => r !== null);
const failed = responses.filter(r => r === null).length;
console.log(`${successful.length} succeeded, ${failed} timed out`);
Error Handling¶
async function safeAsk(question) {
try {
const result = await pt.addMessage(question);
const response = await pt.waitForMessageReceived(result.task_id, {
timeout: 60000
});
return { success: true, message: response.message };
} catch (error) {
if (error.message.includes('Timeout')) {
return {
success: false,
error: 'timeout',
message: 'Response is taking too long. Please try again.'
};
}
return {
success: false,
error: 'unknown',
message: error.message
};
}
}
Using onMessageReceived for Cancellation¶
let currentUnsubscribe = null;
async function sendMessage(text) {
// Cancel previous if still pending
if (currentUnsubscribe) {
currentUnsubscribe();
currentUnsubscribe = null;
}
const result = await pt.addMessage(text);
currentUnsubscribe = pt.onMessageReceived(result.task_id, (message) => {
currentUnsubscribe = null;
displayResponse(message);
});
}
// Cancel button handler
document.getElementById('cancelBtn').onclick = () => {
if (currentUnsubscribe) {
currentUnsubscribe();
currentUnsubscribe = null;
showMessage('Response cancelled');
}
};
Combining with Streaming UI¶
// Show streaming tokens while waiting for final message
async function askWithStreaming(question) {
const result = await pt.addMessage(question);
const taskId = result.task_id;
// Show streaming tokens in real-time
const streamUnsubscribe = pt.onSocketEvent((event, data) => {
if (event === 'stream_partial_token' && data.task_id?.startsWith(taskId)) {
appendToDisplay(data.token);
}
});
// Wait for complete response
try {
const response = await pt.waitForMessageReceived(taskId);
// Replace streaming content with final formatted message
setDisplay(response.message);
return response;
} finally {
streamUnsubscribe();
}
}
Comparison: Approaches¶
| Aspect | waitForMessageReceived | waitForAllMessagesReceived | onMessageReceived |
|---|---|---|---|
| Style | Promise/async-await | Promise/async-await | Callback |
| Use case | Single response | Multiple responses | Advanced control |
| Cancellation | Not directly | Not directly | Easy with unsubscribe |
| Progress tracking | ❌ | ✅ onProgress callback | Manual |
| Code simplicity | ✅ Very simple | ✅ Simple | More boilerplate |
| Error handling | try/catch | try/catch + failFast | Manual |
| Best for | Most use cases | Batch operations | Cancellation, streaming UI |
Best Practices¶
- Use waitForMessageReceived for most cases - cleaner code, easier error handling
- Use onMessageReceived when you need cancellation - the user can stop waiting
- Set appropriate timeouts - longer for complex requests
- Handle errors gracefully - timeouts happen, show user-friendly messages
- Combine with streaming - show progress while waiting for the final response
Related Methods¶
- pt.addMessage() - Send messages to the chat
- pt.waitForAllMessagesReceived() - Wait for multiple AI responses
- pt.onSocketEvent() - Subscribe to all socket events (for streaming tokens)
- pt.getMessageText() - Fetch full message text by ID
- pt.stopStreamingMessage() - Stop a streaming response in progress
Last Updated: February 1, 2026 Version: 20260201