Chat Completion
Creates a model response for the given chat conversation.
POST /v1/chat/completions
Example request (cURL)

curl --request POST \
  --url https://api.sambanovacloud.com/v1/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "messages": [
      {
        "role": "system",
        "content": "Answer the question in a couple sentences."
      },
      {
        "role": "user",
        "content": "Share a happy story with me"
      }
    ],
    "max_tokens": 800,
    "stop": ["[INST", "[INST]", "[/INST]", "[/INST]"],
    "model": "Meta-Llama-3.1-8B-Instruct",
    "stream": true,
    "stream_options": { "include_usage": true }
  }'
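The same request can also be issued from Python. The sketch below is a minimal, unofficial example using the requests library; it mirrors the cURL call above, except that streaming is turned off so the reply comes back as the single JSON object shown in the response example, and the SAMBANOVA_API_KEY environment variable name is only an assumption for where the token is stored.

import os

import requests

# Mirrors the cURL example above; streaming is disabled so the full reply
# arrives as one JSON object like the response example below.
url = "https://api.sambanovacloud.com/v1/chat/completions"
headers = {
    # SAMBANOVA_API_KEY is an illustrative variable name for your token.
    "Authorization": f"Bearer {os.environ['SAMBANOVA_API_KEY']}",
    "Content-Type": "application/json",
}
payload = {
    "messages": [
        {"role": "system", "content": "Answer the question in a couple sentences."},
        {"role": "user", "content": "Share a happy story with me"},
    ],
    "max_tokens": 800,
    "stop": ["[INST", "[INST]", "[/INST]"],
    "model": "Meta-Llama-3.1-8B-Instruct",
    "stream": False,
}

response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])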
{ "id": "chatcmpl-123", "object": "chat.completion", "created": 1677652288, "model": "Llama-3-8b-chat", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "\n\nHello there, how may I assist you today?" }, "logprobs": null, "finish_reason": "stop" } ]}
If a request fails, the response body provides a JSON object with details about the error.
For more information on errors, refer to the API Error Codes page.
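As an illustration only, the helper below assumes a failed request returns a non-2xx status with a JSON error body and simply surfaces that body; the actual fields and codes are defined on the API Error Codes page.

import requests


def chat_completion(url: str, headers: dict, payload: dict) -> dict:
    """POST a chat completion request and raise with the API's error body on failure."""
    response = requests.post(url, headers=headers, json=payload)
    if not response.ok:
        # On failure the body is a JSON object describing the error; fall back
        # to raw text if the body is not valid JSON.
        try:
            detail = response.json()
        except ValueError:
            detail = response.text
        raise RuntimeError(f"Chat completion failed ({response.status_code}): {detail}")
    return response.json()

Called with the url, headers, and payload from the earlier sketches, this returns the parsed completion on success and raises with the error details otherwise.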