Language Models

Discover the large language models supported by Hyper.


Hyper's platform is engineered to integrate seamlessly with a wide array of large language models (LLMs), giving developers the flexibility to choose the models that best fit their application's requirements. Supported models include GPT-4 Turbo, GPT-3.5, Anthropic's Claude, Llama, and Google Gemini.

Supported Language Models

Hyper supports an extensive list of LLMs, enabling sophisticated natural language processing, content generation, and data analysis capabilities. Here's an overview of the LLMs currently supported:

Model Name          Hyper Slug
GPT-4 Turbo         GPT4_TURBO
GPT-3.5             GPT3_5
Anthropic Claude    CLAUDE
Llama               LLAMA
Google Gemini       GEMINI
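
As a quick illustration, the slug is the value you would supply wherever Hyper expects a model identifier. The request shape and field names below are assumptions for illustration only, not the documented Hyper API; consult the Hyper API reference for the exact format.

```python
# Illustrative only: the request fields ("model", "prompt", "max_tokens") are
# assumptions about how a Hyper model slug might be supplied, not the
# documented Hyper API surface.
import json

# Slugs taken from the table above.
SUPPORTED_SLUGS = {"GPT4_TURBO", "GPT3_5", "CLAUDE", "LLAMA", "GEMINI"}

request = {
    "model": "GPT4_TURBO",  # any slug from the table, e.g. "CLAUDE" or "GEMINI"
    "prompt": "Summarize the quarterly report in three bullet points.",
    "max_tokens": 256,
}

# Catch typos in the slug before sending anything.
assert request["model"] in SUPPORTED_SLUGS, f"Unknown model slug: {request['model']}"

print(json.dumps(request, indent=2))
```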

Compatibility Constraints

When integrating LLMs with embedding models, it's important to consider compatibility to ensure optimal performance. Here are some key points regarding compatibility between Hyper's Embedding Models and Language Models:

  • Model Alignment: For best results, align the choice of embedding and language models with your data type and your application's requirements. For example, pair GPT-based models with text data and CLIP or ViT embeddings with image-related tasks (see the sketch after this list).
  • Size and Complexity: Larger language models, such as GPT-4 Turbo, might require more complex embeddings to fully leverage their capabilities. Smaller models like GPT-3.5 or Llama may be more compatible with a broader range of embedding sizes without significant performance trade-offs.
  • Domain-Specific Needs: Certain applications might benefit from domain-specific models. In such cases, ensuring that the embedding model captures relevant features is crucial for the LLM to generate accurate and contextually relevant outputs.
  • Performance Considerations: The computational requirements of both embedding and language models should be taken into account, especially for applications with real-time processing needs. Balancing model size and performance can help mitigate potential bottlenecks.
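
These constraints can be encoded as a simple allow-list that is checked before a pipeline is deployed. The embedding slugs and pairings below are placeholders for illustration, not an official Hyper compatibility matrix; substitute the embedding models your project actually uses.

```python
# Illustrative compatibility table; the embedding slugs and pairings are
# placeholders, not an official Hyper compatibility matrix.
COMPATIBLE_EMBEDDINGS = {
    "GPT4_TURBO": {"TEXT_EMBEDDING_LARGE", "TEXT_EMBEDDING_SMALL"},
    "GPT3_5":     {"TEXT_EMBEDDING_SMALL"},
    "CLAUDE":     {"TEXT_EMBEDDING_LARGE", "TEXT_EMBEDDING_SMALL"},
    "LLAMA":      {"TEXT_EMBEDDING_SMALL"},
    "GEMINI":     {"TEXT_EMBEDDING_LARGE"},
}

def check_pairing(llm_slug: str, embedding_slug: str) -> None:
    """Raise early if an LLM/embedding pairing is not in the allow-list."""
    allowed = COMPATIBLE_EMBEDDINGS.get(llm_slug)
    if allowed is None:
        raise ValueError(f"Unknown LLM slug: {llm_slug}")
    if embedding_slug not in allowed:
        raise ValueError(
            f"{embedding_slug} is not paired with {llm_slug}; "
            f"expected one of {sorted(allowed)}"
        )

check_pairing("GPT4_TURBO", "TEXT_EMBEDDING_LARGE")  # passes silently
```

Failing fast at configuration time keeps mismatched pairings out of production pipelines.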

Best Practices

To maximize the effectiveness of your application, consider the following best practices:

  • Experimentation: Test different combinations of embedding and language models to identify the most effective pair for your specific use case; a minimal evaluation loop is sketched after this list.
  • Continuous Evaluation: As both embedding and language models evolve, regularly re-evaluate your model choices to take advantage of improvements and new capabilities.
  • User Feedback: Incorporate user feedback to fine-tune the model selection and configuration, ensuring that your application meets the expected performance and quality standards.
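
A minimal sketch of that experimentation loop is shown below. The run_pipeline() and score() helpers, the embedding slugs, and the placeholder metric are assumptions for illustration; none of these names come from the Hyper SDK, and a real setup would call your Hyper pipeline and apply a metric you trust.

```python
from itertools import product

def run_pipeline(llm_slug: str, embedding_slug: str, prompts: list[str]) -> list[str]:
    # Placeholder: in a real setup this would invoke your Hyper pipeline with
    # the given slugs; here it just echoes the configuration for demonstration.
    return [f"[{llm_slug}/{embedding_slug}] {p}" for p in prompts]

def score(outputs: list[str]) -> float:
    # Placeholder metric (prefers shorter outputs); replace with a real quality
    # measure such as task accuracy, human ratings, or an LLM-as-judge score.
    return -sum(len(o) for o in outputs) / max(len(outputs), 1)

LLM_SLUGS = ["GPT4_TURBO", "GPT3_5", "CLAUDE"]          # slugs from the table above
EMBEDDING_SLUGS = ["TEXT_EMBEDDING_LARGE", "TEXT_EMBEDDING_SMALL"]  # placeholders
PROMPTS = ["Summarize the quarterly report.", "List three risks in the contract."]

# Evaluate every LLM/embedding combination on the same prompt set.
results = {}
for llm, emb in product(LLM_SLUGS, EMBEDDING_SLUGS):
    outputs = run_pipeline(llm, emb, PROMPTS)
    results[(llm, emb)] = score(outputs)

best = max(results, key=results.get)
print("Best pairing:", best, "score:", results[best])
```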