# crimson-falcon-4
crimson-falcon-4 is Liutong's flagship chat model. It excels at general-purpose text generation, conversation, code writing, summarization, and instruction following.
## Capabilities
- Natural language conversation and Q&A
- Code generation and debugging across dozens of languages
- Long-form content writing and summarization
- Structured data extraction (JSON, CSV, etc.)
- Multi-turn conversation with context retention
- Function calling and tool use
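Since the model supports function calling, a tool is typically described with a JSON schema and model-issued calls are routed back to local code. The sketch below assumes the OpenAI-style `tools` format; the `get_weather` tool and `dispatch_tool_call` helper are illustrative, not part of the Liutong API.

```python
import json

# Hypothetical tool schema in the OpenAI-style "tools" format;
# whether crimson-falcon-4 uses this exact shape is an assumption.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

def dispatch_tool_call(name: str, arguments: str) -> str:
    """Route a model-issued tool call (name + JSON arguments) to local code."""
    if name == "get_weather":
        args = json.loads(arguments)
        return f"Sunny in {args['city']}"  # stub implementation
    raise ValueError(f"unknown tool: {name}")
```

The `tools` list would be passed to `client.chat.completions.create(...)`, and any tool calls in the response dispatched through the helper before sending the result back as a follow-up message.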
## API usage

### Chat completion
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.liutong.llby.org/v1",
    api_key="lt_your_api_key",
)

response = client.chat.completions.create(
    model="crimson-falcon-4",
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function to check if a number is prime."},
    ],
    temperature=0.7,
    max_tokens=1024,
)

print(response.choices[0].message.content)
```
### Streaming
```python
stream = client.chat.completions.create(
    model="crimson-falcon-4",
    messages=[{"role": "user", "content": "Explain quantum computing simply."}],
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
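When the full reply is needed as a string rather than printed incrementally, the same delta-accumulation loop can be wrapped in a small helper; this is a generic sketch over the chunk shape shown above, not a Liutong-specific API.

```python
def collect_stream(stream) -> str:
    """Accumulate streamed delta chunks into the full reply text."""
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:  # delta.content is None on role/finish chunks
            parts.append(delta)
    return "".join(parts)
```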
## Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `temperature` | float | 1.0 | Controls randomness; lower values are more deterministic. |
| `max_tokens` | int | none | Maximum number of tokens to generate. |
| `top_p` | float | 1.0 | Nucleus sampling threshold. |
| `stream` | bool | false | Enable streaming responses. |
| `stop` | string / array | none | Stop sequences that halt generation. |
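As a usage sketch of the parameters above, the settings below aim for reproducible, bounded output; the specific values (`max_tokens=256`, a blank-line stop sequence) are illustrative choices, not recommended defaults.

```python
# Illustrative parameter choices for near-deterministic, bounded generation.
params = {
    "model": "crimson-falcon-4",
    "temperature": 0.0,   # minimize sampling randomness
    "top_p": 1.0,         # no nucleus truncation
    "max_tokens": 256,    # hard cap on generated tokens
    "stop": ["\n\n"],     # halt at the first blank line
}
# These would be passed as keyword arguments:
# client.chat.completions.create(**params, messages=[...])
```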
## Endpoint
`POST /v1/chat/completions`
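For clients that cannot use the SDK, the same endpoint can be called over raw HTTP. The sketch below builds the request with the standard library; the `Bearer` authorization scheme is assumed to match the OpenAI convention.

```python
import json
import urllib.request

# Raw HTTP version of the chat completion request; the Bearer auth
# header scheme is an assumption mirroring the OpenAI convention.
payload = {
    "model": "crimson-falcon-4",
    "messages": [{"role": "user", "content": "Hello"}],
}
req = urllib.request.Request(
    "https://api.liutong.llby.org/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer lt_your_api_key",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send the request
```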
See the full Chat Completions API reference.