This guide walks you through customizing your AI assistants to better meet your needs: selecting different models, setting up custom personas, and attaching specialized tools.

Model Selection

Choose from a variety of AI models from leading providers to power your assistants:

OpenAI

  • GPT-4o: Most capable
  • GPT-4o mini: Fast & affordable
  • o4-mini: Reasoning model
  • GPT-4.1: Enhanced
  • GPT-4.5 Preview: Latest preview

Anthropic

  • Claude 3.7 Sonnet: Latest & most capable
  • Claude 3.5 Sonnet: Balanced performance
  • Claude 3.5 Haiku: Fast & affordable
  • Claude 3 Opus: High capability

Google

  • Gemini 2.5 Pro Preview: Latest preview
  • Gemini 2.0 Flash: Fast & multimodal
  • Gemini 1.5 Pro: Proven performance

Mistral

  • Mistral Small Latest: Efficient & capable
  • Open Mistral Nemo: Open source

DeepSeek

  • DeepSeek Chat: General purpose
  • DeepSeek Reasoner: Advanced reasoning

Groq

  • Llama3 70B: Ultra-fast inference

xAI

  • Grok 3 Beta: Most capable
  • Grok 3 Fast Beta: Fast performance
  • Grok 3 Mini Beta: Compact
  • Grok 3 Mini Fast Beta: Fast & compact

Cohere

  • Command A: Latest model
  • Command R7B: Efficient
  • Command R+: Enhanced
  • Command R: Balanced

Perplexity

  • Sonar: Search-optimized
  • Sonar Pro: Enhanced search
  • Sonar Deep Research: Research specialist

OpenRouter

  • Dolphin Mixtral 8x22B: Uncensored
When selecting a model, consider the trade-off between capability and cost. More powerful models like GPT-4o and Claude 3.7 Sonnet provide superior reasoning and assistance but at a higher price point.
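One way to act on this trade-off is to route each query to an appropriate tier. The heuristic below is purely illustrative: the thresholds and the routing idea are assumptions, though the model names come from the lists above.

```python
def pick_model(prompt: str, needs_reasoning: bool = False) -> str:
    """Pick a model tier based on a rough estimate of query complexity.

    Illustrative sketch only: real routing criteria depend on your workload.
    """
    if needs_reasoning:
        return "o4-mini"            # reasoning-focused model
    if len(prompt) > 2000:
        return "claude-3-7-sonnet"  # high-capability, long-context tier
    return "gpt-4o-mini"            # fast and affordable default


print(pick_model("Summarize this paragraph."))  # gpt-4o-mini
```

In practice you would measure quality and cost per tier on your own traffic before fixing thresholds like these.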

In-Depth Analysis of Major AI Models: Capabilities and Strengths

Updated: May 2025

Each model has evolved to serve specific niches in the AI ecosystem, with trade-offs between capabilities, safety, cost, and availability. The choice of model depends heavily on the specific use case, budget constraints, and ethical considerations of the deployment environment.

OpenAI (ChatGPT)

  • Creative Writing and Content Generation: ChatGPT excels at creative writing, long-form content, complex coding tasks, and AI image generation
  • Conversational Fluidity: Known for natural conversations and accessible, broad-use cases
  • Structured Research Assistance: Effective at educational content and knowledge synthesis

Example Use Cases

  • Content Marketing Agency: Generating blog posts, social media content, and marketing copy at scale
  • Educational Institution: Creating personalized learning materials and interactive tutoring systems
  • Creative Writing Studio: Developing character backstories, plot outlines, and dialogue for novels or screenplays
  • Business Analyst: Generating comprehensive reports, data analysis summaries, and presentation materials

Anthropic (Claude)

  • Coding Excellence: Claude 3.7 Sonnet achieves state-of-the-art performance on SWE-bench Verified, which evaluates AI models’ ability to solve real-world software issues
  • Extended Thinking: In extended thinking mode, it self-reflects before answering, which improves its performance on math, physics, instruction-following, coding, and many other tasks
  • Long Context Understanding: 200K token context window for complex document analysis

Example Use Cases

  • Software Engineering Firm: Solving complex codebase issues with Claude’s SWE-bench capabilities for legacy system modernization
  • Legal Research Team: Analyzing lengthy legal documents and contracts with 200K token context window
  • Scientific Research Lab: Using extended thinking mode for complex problem-solving in physics and mathematics
  • Enterprise Security Team: Building secure AI applications with Claude’s strong safety features
  • Technical Documentation Team: Creating comprehensive API documentation and technical specifications

Google (Gemini)

  • Multimodal Processing: Native multimodality across all Gemini models, handling text, images, audio, and video seamlessly
  • Reasoning Capabilities: Advanced reasoning across the family, with Gemini 2.5 building thinking capabilities directly into the core architecture
  • Long Context Window: Industry-leading context windows, with Gemini 1.5 Pro offering up to 1M tokens, and Gemini 2.5 Pro extending to 2M tokens
  • Language Support: Strong multilingual capabilities across diverse languages with cultural context understanding
  • Integration with Google Ecosystem: Seamless integration with Google Workspace, Cloud, and other Google services

Example Use Cases

  • Medical Research Institute: Processing and analyzing multimodal medical data (text, images, videos) for comprehensive patient assessments
  • Architecture Firm: Using long context windows to analyze entire building specifications, blueprints, and regulatory documents
  • Film Production Company: Processing video content for detailed scene analysis and automated content tagging
  • Global Enterprise: Leveraging Google Workspace integration for AI-powered productivity across multinational teams
  • Educational Platform: Creating adaptive learning systems with multimodal content understanding and generation

Mistral AI

  • Multilingual Support: Advanced capabilities across dozens of languages including European, Asian, and Middle Eastern languages
  • Efficiency: Models optimized for different scales - from lightweight Mistral 7B to powerful Mistral Large, all designed for efficient inference
  • Function Calling: Advanced tool use and API integration capabilities across the model family
  • Open Source Options: Mistral 7B and Mixtral models available as open-source, enabling community contributions and custom deployments

Example Use Cases

  • Startup Development Team: Using Mistral 7B for cost-effective local AI deployment with reasonable performance
  • European Financial Institution: Leveraging Mistral Large for multilingual compliance and regulatory analysis
  • Open Source AI Project: Building custom applications on top of Mixtral’s open architecture
  • Enterprise Software Company: Using Mistral’s function calling for complex workflow automation and API orchestration
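To make function calling concrete, here is a tool definition in the JSON-schema style used by OpenAI-compatible APIs, which Mistral's API also accepts. The `get_weather` function and its fields are invented for illustration.

```python
# Hypothetical tool definition: the model sees this schema and can respond
# with a structured request to call `get_weather` instead of plain text.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}
```

A list of such definitions is passed alongside the conversation; the model then decides when a call is warranted and emits the arguments as JSON.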

DeepSeek

  • Logical Reasoning: On MATH-500, DeepSeek-R1 attains an impressive score of 97.3%, on par with OpenAI-o1-1217 and significantly ahead of other models
  • Cost Efficiency: Significantly lower training costs compared to competitors
  • Code Competition: DeepSeek-R1 demonstrates expert-level performance in code competition tasks, achieving a 2,029 Elo rating on Codeforces and outperforming 96.3% of human participants
  • Open Source Advantage: Accessible under MIT license for research and development

Example Use Cases

  • Mathematics Research Department: Solving complex mathematical proofs with 97.3% accuracy on MATH-500
  • Competitive Programming Team: Training for coding competitions with Codeforces-level problem solving
  • Educational Technology Startup: Creating affordable AI tutoring systems with cost-efficient deployment
  • Algorithm Development Lab: Developing and testing new algorithms with expert-level logical reasoning
  • Academic Research Institution: Using open-source model for reproducible AI research and modifications

Groq

  • Lightning-Fast Inference: Serving the Mixtral 8x7B model at 480 tokens per second, the Groq LPU delivers some of the fastest inference speeds in the industry
  • Energy Efficiency: The LPU architecture is designed for speed, affordability, and energy efficiency at scale
  • Real-time Applications: Ideal for chatbots, autonomous vehicles, and robotics
  • Low Latency: Superior response times compared to traditional GPU solutions

Example Use Cases

  • High-Frequency Trading Firm: Implementing ultra-low latency AI for real-time market analysis and trading decisions
  • Autonomous Vehicle Company: Processing sensor data with 480 tokens/second for instant decision-making
  • Video Game Studio: Creating responsive AI NPCs that react in real-time to player actions
  • Live Customer Service Platform: Providing instant AI responses with minimal latency for customer inquiries
  • Green Tech Company: Deploying energy-efficient AI solutions to minimize carbon footprint

xAI (Grok)

  • Mathematical Reasoning: At its highest level of test-time compute, Grok 3 achieved 93.3% on the AIME math competition
  • Complex Problem Solving: Strong performance on graduate-level reasoning tasks
  • Long Context Processing: A context window of 1 million tokens, eight times larger than previous Grok models
  • Real-time Information: Direct integration with X (Twitter) for current data

Example Use Cases

  • Social Media Analytics Company: Leveraging real-time Twitter/X data integration for trend analysis
  • Quantitative Research Team: Solving complex mathematical problems for financial modeling
  • Education Technology Platform: Creating advanced math tutoring systems for AIME-level competitions
  • Market Research Firm: Analyzing large volumes of social media data with 1M token context
  • AI Research Team: Developing systems requiring graduate-level reasoning capabilities

Cohere

  • Enterprise Applications: Command R+ handles enterprise use cases such as categorization, tool-use automation for workflows, data analysis, and more
  • Multilingual RAG: Strong capabilities for retrieval-augmented generation
  • Business Integration: Excellent for CRM, customer service, and automation
  • Cost Efficiency: command-r-08-2024 is priced at $0.15 per million input tokens and $0.60 per million output tokens

Example Use Cases

  • Enterprise CRM Team: Automating customer data categorization and workflow processes
  • Multinational Corporation: Implementing multilingual customer support with RAG capabilities
  • Business Process Automation Firm: Creating end-to-end workflow automation solutions
  • Data Analysis Consultancy: Building enterprise-scale data analysis pipelines with cost-efficient models
  • HR Department: Automating resume screening and candidate matching with advanced categorization

Perplexity (Sonar)

  • Real-time Search Integration: Sonar models ground responses in live web search, delivering fast, low-cost answers backed by current sources
  • Citation Generation: Automatic source attribution for all information
  • Fact-checking: Superior performance on factual correctness benchmarks
  • Cost Efficiency: Very competitive pricing for search-enabled AI

Example Use Cases

  • News Organization: Building real-time fact-checking systems with automatic citation generation
  • Legal Research Platform: Creating search-enhanced legal research tools with verified sources
  • Academic Research Tool Developer: Building scholarly search applications with citation tracking
  • Market Intelligence Firm: Developing real-time market analysis tools with web-wide research capabilities
  • Healthcare Information Provider: Creating medical information systems with reliable source attribution

Dolphin Models

  • Uncensored Responses: Provides unrestricted responses without safety filters
  • Long Context: A 256K token context window for long-document analysis
  • Customization: Allows full user control over system behavior
  • Specialized Tasks: Useful for research, creative writing, and unfiltered analysis

Example Use Cases

  • Creative Writing Studio: Developing edgy fiction with complex, morally ambiguous characters
  • Security Research Lab: Testing AI vulnerabilities and exploring edge cases without restrictions
  • Academic AI Ethics Researcher: Studying uncensored AI behavior for safety research
  • Private Research Institution: Conducting sensitive research requiring unfiltered AI responses
  • Independent Game Developer: Creating realistic AI characters for mature gaming content

Customizing Name, Description, and System Prompt

Customize your assistant’s persona by setting a name, description, and system prompt:
1. Navigate to Assistant Settings

Open the assistant you want to customize and click the “Settings” button in the top-right corner of the assistant panel.
2. Set a Custom Name

Enter a meaningful name for your assistant that reflects its purpose or persona. For example:
  • “Research Assistant”
  • “Code Review Expert”
  • “Marketing Copywriter”
3. Add a Description

Write a brief description that explains the assistant’s purpose, expertise, and capabilities. This helps users understand when to use this particular assistant. Example:
A specialized research assistant with expertise in academic literature review, 
data analysis, and citation management. Ideal for students and researchers 
working on scholarly projects.
4. Create a System Prompt

The system prompt is the most important element for defining your assistant’s behavior. It provides instructions that guide how the AI responds to queries.
A well-crafted system prompt should define:
  • The assistant’s role and identity
  • Areas of expertise and knowledge
  • Communication style and tone
  • Any limitations or boundaries
  • Specific instructions for how to approach different types of queries
Example system prompt for a research assistant:
You are a highly knowledgeable research assistant with expertise in academic writing, 
research methodology, and data analysis. Your primary goal is to help users conduct 
thorough research, find relevant sources, and organize information effectively.

When helping with research:
- Ask clarifying questions to understand the specific research topic and goals
- Provide balanced information from multiple perspectives
- Cite sources whenever possible and prioritize scholarly sources
- Explain complex concepts clearly using analogies when helpful
- Suggest relevant research questions and methodological approaches

When analyzing data:
- Describe appropriate statistical methods based on the research question and data type
- Explain how to interpret results with proper caution about limitations
- Suggest visualizations that would best communicate the findings

Communicate in a professional but accessible tone. Avoid overly technical jargon 
unless necessary, and always explain specialized terms. Prioritize accuracy over 
certainty, and be transparent about the limitations of your knowledge.
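Once defined, the name, description, and system prompt travel together as one persona configuration. A minimal sketch of that structure; the `AssistantPersona` class and its field names are illustrative, not the platform's actual API.

```python
from dataclasses import dataclass


@dataclass
class AssistantPersona:
    """Hypothetical container for the three persona fields set above."""
    name: str
    description: str
    system_prompt: str


persona = AssistantPersona(
    name="Research Assistant",
    description=(
        "A specialized research assistant for literature review, "
        "data analysis, and citation management."
    ),
    system_prompt=(
        "You are a highly knowledgeable research assistant with expertise "
        "in academic writing, research methodology, and data analysis."
    ),
)
```

Keeping the fields together like this makes it easy to version personas and reuse them across assistants.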

Using the Generate System Prompt Button

For a quick start, you can use the “Generate System Prompt” button to create a baseline system prompt:
1. Click Generate System Prompt

In the assistant settings panel, locate and click the “Generate System Prompt” button.
2. Provide a Brief Description

Enter a short description of the assistant you want to create. For example:
A helpful coding assistant that specializes in React and JavaScript development
3. Review and Edit

The system will generate a comprehensive system prompt based on your description. Review the generated prompt and make any necessary edits to better align with your specific needs.
4. Save Changes

Once you’re satisfied with the system prompt, click “Save” to apply the changes to your assistant.
Even when using the generate feature, always review the system prompt carefully. The generated prompt serves as a starting point that you should customize to ensure it precisely matches your requirements.

Attaching Tools to Assistants

Enhance your assistant’s capabilities by attaching specialized tools that allow it to perform specific tasks:
1. Access Tool Settings

In the assistant settings panel, navigate to the “Tools” section.
2. Browse Available Tools

Browse through the available tools organized by categories such as Web, Images, Data, and Code.
3. Select Relevant Tools

Choose tools that align with your assistant’s purpose. For example:
  • A research assistant might benefit from Web Search and Web Scraper tools
  • A coding assistant would need Code Interpreter and File Analysis tools
  • A creative assistant could use Image Generation tools
4. Configure Tool Settings

Some tools may have configurable settings or require API keys. Complete any necessary setup for each tool you’ve selected.
5. Save Configuration

Click “Save” to apply the tool configuration to your assistant.
The assistant will automatically determine when to use the attached tools based on user queries. You don’t need to explicitly instruct it to use specific tools during a conversation.
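Conceptually, that automatic behavior amounts to mapping each attached tool to a callable and deciding per query whether to invoke one. The sketch below is only illustrative: the keyword check stands in for the model's own decision, and the tool registry and names are assumptions.

```python
# Hypothetical tool registry: each attached tool maps to a callable.
TOOLS = {
    "web_search": lambda query: f"search results for: {query}",
    "code_interpreter": lambda query: f"executed: {query}",
}


def dispatch(query: str) -> str:
    """Decide whether a query needs a tool, then run it.

    In practice the model itself makes this decision; the keyword
    heuristic here only stands in for that step.
    """
    if "search" in query.lower():
        return TOOLS["web_search"](query)
    if "run" in query.lower():
        return TOOLS["code_interpreter"](query)
    return "answered directly by the model"
```

The key point carries over: tool attachment defines what is available, and per-query routing decides what actually runs.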

Best Practices

Follow these best practices to create effective customized assistants:
  1. Focus on Specific Use Cases: Create specialized assistants for distinct purposes rather than one general-purpose assistant.
  2. Test Thoroughly: After customizing an assistant, test it with various queries to ensure it behaves as expected.
  3. Iterative Refinement: Refine your system prompts based on how the assistant performs in real conversations.
  4. Balance Tool Access: Only attach tools that the assistant needs for its specific role to avoid unnecessary complexity.
  5. Stay Current: Regularly review and update your assistants to incorporate new models, tools, and improved system prompts.
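The "Test Thoroughly" practice can be automated with a small probe suite: run a fixed list of queries through the assistant and check each reply for an expected keyword. In this sketch, `ask_assistant` is a hypothetical stand-in for whatever client call your platform exposes.

```python
def ask_assistant(query: str) -> str:
    """Placeholder for the real client call to your assistant."""
    return f"As your research assistant, here is help with: {query}"


# Each probe pairs a test query with a keyword the reply should contain.
PROBES = [
    ("Find sources on climate policy", "research"),
    ("Summarize this study", "assistant"),
]


def run_probes() -> list:
    """Return the queries whose replies missed the expected keyword."""
    failures = []
    for query, expected in PROBES:
        reply = ask_assistant(query)
        if expected not in reply.lower():
            failures.append(query)
    return failures
```

Rerunning the suite after every system-prompt edit catches regressions before they reach real conversations.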