Model configuration
Fine-tune your agent's behavior
When using the @model decorator, you can optionally provide a configuration object to customize the model's behavior:
Configuration options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| maxTokens | number | - | Maximum number of tokens to generate in the response. Use this to control response length. |
| temperature | number | - | Sampling temperature between 0 and 1. Lower values make responses more focused and deterministic, while higher values make them more creative and diverse. |
| maxRetries | number | - | Maximum number of retries for failed API requests before giving up. |
| maxSteps | number | 3 | Maximum number of conversation steps (tool calls) the agent can take to complete a task. |
| toolChoice | 'auto' \| 'none' | - | Controls how the model uses tools. Set to 'none' to disable tool usage, or 'auto' to let the model decide when to use tools. |
Example usage
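The framework's actual decorator implementation isn't shown here, so the sketch below stands in a minimal `model` decorator of its own that simply records the configuration on the class. The option names and the `maxSteps: 3` / `toolChoice: 'auto'` defaults come from the table above; `SupportAgent` and the specific values are illustrative assumptions.

```typescript
// Sketch of the configuration shape described in the table above.
interface ModelConfig {
  maxTokens?: number;
  temperature?: number;
  maxRetries?: number;
  maxSteps?: number;
  toolChoice?: 'auto' | 'none';
}

// Hypothetical stand-in for the framework's @model decorator: it merges
// the supplied options over the documented defaults and attaches the
// result to the class so a runtime could read it later.
function model(config: ModelConfig = {}) {
  return function (target: any) {
    target.modelConfig = { maxSteps: 3, toolChoice: 'auto', ...config };
    return target;
  };
}

// Illustrative agent: a tight token budget, a low temperature for
// factual answers, and a few extra steps for tool round-trips.
@model({ maxTokens: 1024, temperature: 0.2, maxSteps: 5 })
class SupportAgent {}
```

With this sketch, options you omit fall back to their documented defaults, so `SupportAgent` above keeps `toolChoice: 'auto'` while overriding `maxSteps`.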
Best practices

- Set maxTokens based on your expected response length needs to optimize costs.
- Use a lower temperature (0.1-0.4) for tasks requiring precise, factual responses.
- Use a higher temperature (0.6-0.9) for tasks requiring creativity.
- Adjust maxSteps based on task complexity; simple tasks may only need 1-2 steps.
- Consider setting maxRetries for improved reliability in production environments.
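As a rough illustration of these guidelines, here are two hypothetical configurations, one tuned for a precise extraction task and one for creative drafting. The option names come from the table above; the specific values and config names are assumptions, not framework recommendations.

```typescript
// Same shape as the options table above (a sketch, not the SDK's type).
interface ModelConfig {
  maxTokens?: number;
  temperature?: number;
  maxRetries?: number;
  maxSteps?: number;
  toolChoice?: 'auto' | 'none';
}

// Precise, factual task: low temperature, tight token budget,
// retries enabled for production reliability.
const extractionConfig: ModelConfig = {
  maxTokens: 256,
  temperature: 0.2, // within the suggested 0.1-0.4 factual range
  maxSteps: 1,      // simple task: no multi-step tool use needed
  maxRetries: 2,    // tolerate transient API failures
};

// Creative drafting task: higher temperature, looser token budget,
// tools disabled since it is pure text generation.
const draftingConfig: ModelConfig = {
  maxTokens: 2048,
  temperature: 0.8, // within the suggested 0.6-0.9 creative range
  toolChoice: 'none',
};
```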