Our app gives you access to a wide range of the best AI bots, each designed to excel at different tasks. Instead of relying on just one, you can choose the best bot for your needs—whether it’s for creative writing, problem-solving, or research. This flexibility allows you to get better results faster. Plus, we’re always adding new bots, so you stay ahead with the latest tools.
DeepSeek-V3 is the latest model from the DeepSeek team, building upon the instruction-following and coding abilities of previous versions. Pre-trained on nearly 15 trillion tokens, it outperforms other open-source models on reported evaluations and rivals leading closed-source models.
Discover compassionate guidance through Bible study and the wisdom of scripture. Receive insightful, non-judgmental answers to your questions, complete with relevant Bible verses to deepen your understanding.
Mistral Large 2 is the new generation of Mistral's flagship model. It is highly capable in code generation, mathematics, and reasoning.
This bot will write stories and scenes. Describe the scene or setting you want, including characters, and the bot will write for you.
Jamba 1.5 Mini is the world's first production-grade Mamba-based model, combining SSM and Transformer architectures for a 256K context window and high efficiency. It supports nine languages and handles a variety of writing and analysis tasks as well as or better than comparably sized models.
Jamba 1.5 Large is part of AI21's new family of open models, offering superior speed, efficiency, and quality. It features a 256K effective context window, the longest among open models, enabling improved performance on tasks like document summarization and analysis. Built on a novel SSM-Transformer architecture, it outperforms larger models on benchmarks while maintaining resource efficiency.
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length.
WizardLM-2 7B is the smaller variant of Microsoft AI's latest Wizard model. It is the fastest in the family and achieves performance comparable to leading open-source models 10x its size. It is a fine-tune of Mistral 7B Instruct, using the same technique as WizardLM-2 8x22B.
Euryale 70B v2.1 is a model focused on creative roleplay. It has improved prompt adherence and spatial awareness, and adapts quickly to custom roleplay and formatting.
One of the highest performing and most popular fine-tunes of Llama 2 13B, with rich descriptions and roleplay.
WizardLM-2 8x22B is Microsoft AI's most advanced Wizard model. It demonstrates highly competitive performance compared to leading proprietary models, and it consistently outperforms existing state-of-the-art open-source models. It is an instruction fine-tune of Mixtral 8x22B.
Qwen2 VL 7B is a multimodal LLM from the Qwen Team with image and video understanding capabilities.
OpenChat 7B is part of a family of open-source language models fine-tuned with C-RLFT (Conditioned Reinforcement Learning Fine-Tuning), a strategy inspired by offline reinforcement learning. It has been trained on mixed-quality data without preference labels.
Grok 2 Mini is xAI's fast, lightweight language model that offers a balance between speed and answer quality.
Liquid Foundation Models (LFMs) are large neural networks built with computational units rooted in dynamical systems. This mixture-of-experts model is built for general-purpose AI and handles sequential data and long contexts especially well.
Qwen2 VL 72B is a multimodal LLM from the Qwen Team with strong multimedia understanding and automation support.
Grok 2 is xAI's frontier language model with state-of-the-art reasoning capabilities, best for complex and multi-step use cases.
Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, and long-context coherence. Hermes 3 405B is a frontier-level, full-parameter fine-tune of the Llama-3.1 405B foundation model, focused on aligning LLMs to the user, with powerful steering capabilities and control given to the end user.
First-generation reasoning model from DeepSeek, open-sourced with fully open reasoning tokens. It has 671B total parameters, with 37B active per inference pass.
A 12B parameter model with a 128K token context length, built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi.