Model
id: The model identifier
object: The object type, which is always 'model'
created: The Unix timestamp (in seconds) when the model was created
owned_by: The organization that owns the model (extracted from the model ID prefix)
pricing: Pricing information for the model
context_length: Supported context length in tokens
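For illustration, a Model object could be represented client-side by the sketch below. The dataclass layout and the inner fields of `Pricing` are assumptions for the example; the schema above only states that pricing information is attached to each model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pricing:
    # Assumed shape: per-token prices as strings; not specified by the schema above.
    prompt: Optional[str] = None
    completion: Optional[str] = None

@dataclass
class Model:
    id: str                   # the model identifier, e.g. "org/model-name"
    object: str               # always "model"
    created: int              # Unix timestamp in seconds
    owned_by: str             # organization, extracted from the model ID prefix
    pricing: Optional[Pricing] = None
    context_length: Optional[int] = None  # supported context length in tokens
```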
ModelListResponse
object: The object type, which is always 'list'
data: The list of models
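As a sketch, assuming an OpenAI-compatible `GET /v1/models` endpoint and a placeholder base URL (both assumptions, not stated above), a ModelListResponse could be consumed like this:

```python
import requests

BASE_URL = "https://api.example.com/v1"  # placeholder base URL (assumption)
API_KEY = "YOUR_API_KEY"

# Fetch the model list; the body follows ModelListResponse:
# {"object": "list", "data": [ ...Model objects... ]}
resp = requests.get(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
payload = resp.json()

assert payload["object"] == "list"
for model in payload["data"]:
    print(model["id"], model.get("context_length"))
```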
ChatMessage
role: The role of the message author
content: The contents of the message
name: An optional name for the participant
tool_call_id: The tool call that this message is responding to
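To make the fields concrete, here is a sketch of ChatMessage values as plain dictionaries. The role names follow the usual OpenAI-compatible convention (system/user/assistant/tool), which the schema above does not spell out, and the tool call ID is made up.

```python
# System and user messages only need `role` and `content`;
# `name` optionally identifies the participant.
system_msg = {"role": "system", "content": "You are a helpful assistant."}
user_msg = {"role": "user", "content": "What is the capital of France?", "name": "alice"}

# A tool message answers a specific tool call; `tool_call_id` links it
# back to the call it is responding to (the ID value here is illustrative).
tool_msg = {
    "role": "tool",
    "tool_call_id": "call_abc123",
    "content": '{"temperature_c": 21}',
}
```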
ChatCompletionRequest
model: ID of the model to use
messages: A list of messages comprising the conversation so far
temperature: What sampling temperature to use, between 0 and 2
top_p: An alternative to sampling with temperature, called nucleus sampling
n: How many chat completion choices to generate for each input message
stream: If set, partial message deltas will be sent
stop: Up to 4 sequences where the API will stop generating further tokens
max_tokens: The maximum number of tokens to generate in the chat completion
presence_penalty: Number between -2.0 and 2.0; positive values penalize tokens that have already appeared in the text so far
frequency_penalty: Number between -2.0 and 2.0; positive values penalize tokens in proportion to their frequency in the text so far
logit_bias: Modify the likelihood of specified tokens appearing in the completion
user: A unique identifier representing your end-user
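Putting the request fields together, a minimal chat completion call might look like the sketch below. The base URL and the `/chat/completions` path are assumptions based on the OpenAI-compatible shape of the schema, and the model ID is a placeholder.

```python
import requests

BASE_URL = "https://api.example.com/v1"  # placeholder base URL (assumption)
API_KEY = "YOUR_API_KEY"

request_body = {
    "model": "org/model-name",   # placeholder model ID
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the plot of Hamlet in one sentence."},
    ],
    "temperature": 0.7,      # 0-2; higher values give more random output
    "top_p": 1.0,            # nucleus-sampling alternative to temperature
    "n": 1,                  # number of choices to generate
    "stream": False,         # set True to receive partial message deltas
    "stop": ["\n\n"],        # up to 4 stop sequences
    "max_tokens": 128,
    "presence_penalty": 0.0,
    "frequency_penalty": 0.0,
    "user": "user-1234",     # opaque end-user identifier
}

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=request_body,
    timeout=60,
)
resp.raise_for_status()
completion = resp.json()  # a ChatCompletionResponse, described below
```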
ChatCompletionChoice
index: The index of the choice in the list of choices
finish_reason: The reason the model stopped generating tokens
ChatCompletionUsage
prompt_tokens: Number of tokens in the prompt
completion_tokens: Number of tokens in the generated completion
total_tokens: Total number of tokens used in the request
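The usage object is simple token accounting: total_tokens is the sum of prompt_tokens and completion_tokens. The sketch below assumes the parsed response from the previous example exposes this object under a `usage` key, as OpenAI-compatible APIs typically do; that key is not listed in the response schema that follows, so treat it as an assumption.

```python
# `completion` is assumed to be a parsed ChatCompletionResponse dict
# that carries a ChatCompletionUsage object under "usage".
usage = completion["usage"]
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
print(f"prompt={usage['prompt_tokens']} "
      f"completion={usage['completion_tokens']} "
      f"total={usage['total_tokens']}")
```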
ChatCompletionResponse
id: A unique identifier for the chat completion
object: The object type, which is always 'chat.completion'
created: The Unix timestamp (in seconds) of when the chat completion was created
model: The model used for the chat completion
choices: A list of chat completion choices
system_fingerprint: This fingerprint represents the backend configuration that the model runs with
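For reference, a response under this schema would look roughly like the sketch below; the ID, timestamp, fingerprint, and content values are made up. The nested `message` field inside each choice follows the usual OpenAI-compatible shape and is an assumption, since only index and finish_reason are spelled out in ChatCompletionChoice above.

```python
# A hypothetical ChatCompletionResponse, shown as the dict a client would
# get from resp.json(); every concrete value below is illustrative.
completion = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "created": 1700000000,
    "model": "org/model-name",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hamlet, prince of Denmark, ..."},
            "finish_reason": "stop",
        }
    ],
    "system_fingerprint": "fp_0123456789",
}

# Typical client-side handling: take the first choice and check why
# generation stopped (e.g. "stop" vs. a max_tokens cutoff).
choice = completion["choices"][0]
if choice["finish_reason"] == "length":
    print("Output was truncated by max_tokens.")
print(choice["message"]["content"])
```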