Configuring CLAi

Model Selection

  • OpenAI API Key: First, enter your OpenAI API key in the designated field. This key enables CLAi to make requests on your behalf.

  • Model Options: CLAi supports various models, including GPT-4-1106-preview, GPT-4-0613, GPT-4-0125-preview, and GPT-4-Turbo-preview. You can select your preferred model from the dropdown list.

  • Recommended Model: Through testing, we have determined that the GPT-4-1106-preview model provides the best results. We recommend using this model for optimal performance.

  • Cost Considerations: Different models have different per-token costs. For detailed pricing information, see OpenAI's pricing page.

  • Model Updates: A refresh button next to the model selection dropdown updates the list with the latest compatible models from OpenAI (see the sketch after this list).
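
The behavior behind the refresh button can be approximated with a short script. The sketch below uses the official openai Python package (v1+) to list the GPT-4 models available to your API key; it illustrates the mechanism only and is not CLAi's actual implementation, and reading the key from the OPENAI_API_KEY environment variable is just one common way to supply it.

    # Sketch: fetch the GPT-4 models available to an API key, roughly what
    # the refresh button does. Assumes the official `openai` package (v1+);
    # CLAi's internal implementation may differ.
    import os
    from openai import OpenAI

    # The same key you enter in CLAi's settings, read here from the environment.
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    # List every model the key can access and keep only the GPT-4 variants.
    gpt4_models = sorted(
        model.id for model in client.models.list() if model.id.startswith("gpt-4")
    )
    print("\n".join(gpt4_models))
    # Typical entries: gpt-4-0613, gpt-4-0125-preview, gpt-4-1106-preview, ...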

Token Management

  • Max Conversation Tokens: This setting limits the number of tokens a single conversation can contain, helping to manage costs. The default limit is set to 100,000 tokens. Conversations exceeding this limit will be automatically terminated to prevent further charges.

  • Max Response Tokens: Similarly, you can limit the number of tokens in a single system response (the output from a command or goal). The default limit is 3,500 tokens. Responses exceeding this amount are truncated, keeping costs under control. A sketch of how both limits operate follows below.
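
As a rough illustration of how these two limits work (not CLAi's actual code), the sketch below counts a conversation's tokens with the tiktoken package and caps each response with the standard max_tokens parameter of the OpenAI chat completions API. The function names and limit-checking logic are assumptions for illustration; only the default limit values mirror the settings described above.

    # Sketch of how the two token limits can be enforced. Assumes the `openai`
    # (v1+) and `tiktoken` packages; the limit values mirror CLAi's defaults,
    # but the function names here are illustrative, not CLAi's actual API.
    import os
    import tiktoken
    from openai import OpenAI

    MAX_CONVERSATION_TOKENS = 100_000   # default Max Conversation Tokens
    MAX_RESPONSE_TOKENS = 3_500         # default Max Response Tokens

    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    encoding = tiktoken.encoding_for_model("gpt-4-1106-preview")

    def conversation_tokens(messages):
        """Approximate token count for a list of chat messages (content only)."""
        return sum(len(encoding.encode(m["content"])) for m in messages)

    def ask(messages):
        # End the conversation once it grows past the conversation-level limit.
        if conversation_tokens(messages) > MAX_CONVERSATION_TOKENS:
            raise RuntimeError("Conversation exceeds the max token limit; ending it.")

        # max_tokens caps this single response; longer output is cut off.
        response = client.chat.completions.create(
            model="gpt-4-1106-preview",
            messages=messages,
            max_tokens=MAX_RESPONSE_TOKENS,
        )
        return response.choices[0].message.content

Note that counting only message content slightly undercounts what OpenAI bills, since each message also carries a few tokens of role and formatting overhead, so a real limiter should leave some headroom.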