Generate text completions from a prompt (legacy API)
curl --request POST \
--url https://modelswitch.io/v1/completions \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <token>' \
--data '
{
"model": "<string>",
"prompt": "<string>",
"max_tokens": 123,
"temperature": 123,
"stream": true
}
'

{
"id": "<string>",
"object": "<string>",
"created": 123,
"model": "<string>",
"choices": [
{
"text": "<string>",
"index": 123,
"finish_reason": "<string>"
}
],
"usage": {
"prompt_tokens": 123,
"completion_tokens": 123,
"total_tokens": 123
}
}

Only legacy completion models (e.g. gpt-3.5-turbo-instruct) support this format. Authenticate with the header Authorization: Bearer ms-YOUR_KEY.

Set model to a supported legacy model such as gpt-3.5-turbo-instruct. If stream is true, the response is delivered as server-sent events (SSE), ending with data: [DONE].

Example request body:

{
"model": "gpt-3.5-turbo-instruct",
"prompt": "def bubble_sort(arr):\n",
"max_tokens": 256,
"temperature": 0.3
}
curl https://modelswitch.io/v1/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer ms-YOUR_KEY" \
-d '{
"model": "gpt-3.5-turbo-instruct",
"prompt": "def bubble_sort(arr):\n",
"max_tokens": 256,
"temperature": 0.3
}'
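The same request can be made from Python. This is a minimal sketch using only the standard library; the modelswitch.io URL and Bearer auth scheme are taken from this page, and the helper name build_request is ours.

```python
import json
import urllib.request

# Endpoint as documented on this page (not independently verified).
API_URL = "https://modelswitch.io/v1/completions"

def build_request(api_key, model, prompt, max_tokens=256, temperature=0.3):
    """Build the POST request for the legacy completions endpoint."""
    body = json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# To send it:
# req = build_request("ms-YOUR_KEY", "gpt-3.5-turbo-instruct",
#                     "def bubble_sort(arr):\n")
# with urllib.request.urlopen(req) as resp:
#     completion = json.load(resp)
```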
{
"id": "cmpl-xyz789",
"object": "text_completion",
"created": 1709000000,
"model": "gpt-3.5-turbo-instruct",
"choices": [{
"text": "\n n = len(arr)\n for i in range(n)...",
"index": 0,
"finish_reason": "stop"
}],
"usage": {
"prompt_tokens": 12,
"completion_tokens": 85,
"total_tokens": 97
}
}
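The useful fields of the response above are choices[0].text and the usage block. A small sketch of pulling them out of the parsed JSON (the helper name extract_completion is ours):

```python
def extract_completion(response):
    """Return the generated text, stop reason, and token usage
    from a parsed /v1/completions response."""
    choice = response["choices"][0]
    return {
        "text": choice["text"],
        "finish_reason": choice["finish_reason"],
        "total_tokens": response["usage"]["total_tokens"],
    }
```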
"text_completion".curl --request POST \
--url https://modelswitch.io/v1/completions \
--header 'Content-Type: application/json' \
--data '
{
"model": "<string>",
"prompt": "<string>",
"max_tokens": 123,
"temperature": 123,
"stream": true
}
'{
"id": "<string>",
"object": "<string>",
"created": 123,
"model": "<string>",
"choices": [
{
"text": "<string>",
"index": 123,
"finish_reason": "<string>"
}
],
"usage": {
"prompt_tokens": 123,
"completion_tokens": 123,
"total_tokens": 123
}
}
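When stream is true, each chunk arrives as an SSE line of the form data: {...}, with data: [DONE] marking the end. A sketch of consuming such a stream; this page does not show a streamed chunk, so the assumption that chunks share the non-streaming choices[0].text shape is ours:

```python
import json

def iter_stream_texts(lines):
    """Yield the text of each SSE chunk, stopping at the [DONE] sentinel.

    Assumes each event line looks like 'data: {...}' and that streamed
    chunks carry choices[0].text like the non-streaming response.
    """
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines between events
        payload = line[len("data: "):]
        if payload.strip() == "[DONE]":
            return  # end-of-stream sentinel
        chunk = json.loads(payload)
        yield chunk["choices"][0]["text"]
```

Concatenating the yielded pieces reconstructs the full completion text.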