Get Model

GET /v1/models/{model_id}

curl --request GET \
  --url https://api.compilelabs.com/v1/models/{model_id} \
  --header 'x-api-key: <x-api-key>'
{
  "id": "<string>",
  "author": "openai",
  "slug": "<string>",
  "name": "<string>",
  "description": "<string>",
  "created": 123,
  "owned_by": "<string>",
  "status": "pending",
  "visibility": "private",
  "pricing": [
    {
      "metric": "input_tokens",
      "rate": 1
    }
  ],
  "context_length": 2,
  "model_type": "text_generation",
  "capabilities": [
    "streaming"
  ]
}
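
A minimal Python sketch of the same request, combining the x-api-key header, the model_id path parameter, and the type query parameter documented below. The requests library, the placeholder key, and the post-fetch checks are assumptions for illustration, not part of this reference.

import requests

API_KEY = "YOUR_API_KEY"                    # assumption: supply your own key
MODEL_ID = "meta/llama-v3-8b-instruct"      # example model id from this page

resp = requests.get(
    f"https://api.compilelabs.com/v1/models/{MODEL_ID}",
    headers={"x-api-key": API_KEY},
    params={"type": "text_generation"},     # optional; defaults to text_generation
    timeout=30,
)
resp.raise_for_status()
model = resp.json()

# Illustrative gating on the returned status and capabilities (see Response below).
if model["status"] == "active" and "streaming" in model.get("capabilities", []):
    print(f"{model['id']} is active and supports streamed responses")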

Headers

x-api-key
string | null

Your API key.

Path Parameters

model_id
string
required

The ID of the model.

Example:

"meta/llama-v3-8b-instruct"

Query Parameters

type
enum<string>
default:text_generation

The type of the model.

Available options:
text_generation,
image_generation,
speech_to_text,
text_to_speech,
moderation
Examples:

"text_generation"

"image_generation"

Response

Successful Response

Text generation model response

id
string
required

The model identifier, which can be referenced in the API endpoints.

Example:

"meta/llama-4-scout"

author
enum<string>
required

The author or creator of the model.

Available options:
openai,
google,
anthropic,
meta,
qwen,
moonshot,
mistral,
black-forest-labs,
x-ai,
stability-ai,
deepseek,
z-ai
Example:

"meta"

slug
string
required

A URL-friendly identifier for the model.

Example:

"llama-4-scout"

name
string
required

The display name of the model.

Example:

"Llama 4 Scout"

description
string
required

A detailed description of the model's capabilities and use cases.

Example:

"Meta's latest large language model optimized for reasoning and code generation"

created
integer
required

The Unix timestamp (in seconds) when the model was created.

Example:

1677652288
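
For reference, the example timestamp converts to a UTC datetime with Python's standard library:

from datetime import datetime, timezone

created = 1677652288
print(datetime.fromtimestamp(created, tz=timezone.utc))  # 2023-03-01 06:31:28+00:00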

owned_by
string
required

The organization that owns the model.

Example:

"compilelabs"

status
enum<string>
required

The current status of the model (active, inactive, pending, or deleted).

Available options:
pending,
active,
inactive,
deleted
Example:

"active"

visibility
enum<string>
required

Whether the model is publicly visible or private.

Available options:
private,
public
Example:

"public"

pricing
PricingModel · object[]
required

The pricing information for different usage metrics.

Example:
[
  { "metric": "input_tokens", "rate": 0.005 },
  { "metric": "output_tokens", "rate": 0.015 }
]
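
As a rough illustration, a client might estimate request cost from these entries. The sketch below assumes each rate is the price per single unit of its metric and uses made-up token counts; neither assumption comes from this page.

pricing = [
    {"metric": "input_tokens", "rate": 0.005},
    {"metric": "output_tokens", "rate": 0.015},
]
usage = {"input_tokens": 1200, "output_tokens": 300}   # hypothetical usage figures

# Assumption: each rate is the price per single unit of its metric.
cost = sum(p["rate"] * usage.get(p["metric"], 0) for p in pricing)
print(f"estimated cost: {cost}")   # 0.005*1200 + 0.015*300 = 10.5
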
context_length
integer
required

The maximum number of tokens that can be processed in a single request (input + output).

Required range: x >= 1
Example:

128000
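
A common client-side use of this field is checking that a planned request fits within the window. A minimal sketch with assumed token counts (tokenization itself is outside this endpoint):

context_length = 128000        # from the model response
prompt_tokens = 120000         # assumed, measured with your own tokenizer
max_output_tokens = 4096       # tokens you plan to generate

if prompt_tokens + max_output_tokens > context_length:
    raise ValueError("request would exceed the model's context window")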

model_type
string
default:text_generation
Allowed value: "text_generation"

The type of the model. For this text generation response, the value is always "text_generation".

capabilities
enum<string>[]

The capabilities supported by the model.

Available options:
streaming,
vision,
function_calling,
structured_output,
reasoning,
image_editing
Examples:

"streaming"

"structured_output"

"function_calling"