Configuration
Fine-tune your agent's behavior
When using the @model decorator, you can optionally provide a configuration object to customize the model's behavior:
@model('openai:gpt-4', {
  maxTokens: 100,
  temperature: 0.5,
  maxRetries: 3,
  maxSteps: 3,
  toolChoice: 'auto'
})
class MyAgent extends Agent<string, string> {}
Configuration options
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| maxTokens | number | - | Maximum number of tokens to generate in the response. Use this to control response length. |
| temperature | number | - | Sampling temperature between 0 and 1. Lower values make responses more focused and deterministic, while higher values make them more creative and diverse. |
| maxRetries | number | - | Maximum number of retries for failed API requests before giving up. |
| maxSteps | number | 3 | Maximum number of conversation steps (tool calls) the agent can take to complete a task. |
| toolChoice | 'auto' \| 'none' | - | Controls how the model uses tools. Set to 'none' to disable tool usage, or 'auto' to let the model decide when to use tools. |
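Taken together, these options correspond to a TypeScript shape along the following lines. This is a sketch derived from the table above; the interface name itself is illustrative and may not match an exported type.

// Illustrative shape of the configuration object accepted by @model.
// Field names, types, and the maxSteps default come from the table above;
// the interface name "ModelConfig" is hypothetical.
interface ModelConfig {
  maxTokens?: number;           // cap on generated tokens
  temperature?: number;         // sampling temperature, 0 to 1
  maxRetries?: number;          // retries for failed API requests
  maxSteps?: number;            // conversation steps (tool calls), defaults to 3
  toolChoice?: 'auto' | 'none'; // let the model decide, or disable tools
}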
Example usage
// Basic usage without configuration
@model('openai:gpt-4')
class SimpleAgent extends Agent<string, string> {}
// With partial configuration
@model('openai:gpt-4', {
  maxTokens: 100,
  temperature: 0.7
})
class ConfiguredAgent extends Agent<string, string> {}
// With full configuration
@model('openai:gpt-4', {
  maxTokens: 150,
  temperature: 0.5,
  maxRetries: 3,
  maxSteps: 5,
  toolChoice: 'auto'
})
class FullyConfiguredAgent extends Agent<string, string> {}
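The examples above leave tool usage at 'auto'. Per the options table, setting toolChoice to 'none' disables tool usage entirely; a minimal sketch (the agent name is illustrative):

// Tool usage disabled: the model responds without making any tool calls
@model('openai:gpt-4', {
  temperature: 0.3,
  toolChoice: 'none'
})
class NoToolsAgent extends Agent<string, string> {}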
Best practices
- Set maxTokens based on your expected response length to optimize costs
- Use a lower temperature (0.1-0.4) for tasks requiring precise, factual responses
- Use a higher temperature (0.6-0.9) for tasks requiring creativity
- Adjust maxSteps based on task complexity; simple tasks may only need 1-2 steps
- Consider setting maxRetries for improved reliability in production environments (see the sketch below)
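For instance, a production agent handling a precise, low-complexity task might combine several of these recommendations. This is a sketch; the agent name and exact values are illustrative.

// Factual, low-complexity task: low temperature, tight step budget, retries enabled
@model('openai:gpt-4', {
  maxTokens: 200,
  temperature: 0.2,
  maxRetries: 3,
  maxSteps: 2
})
class FactExtractionAgent extends Agent<string, string> {}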