InternLM
@InternLM
2 models
An open-source organization dedicated to the research and development of large model toolchains. It provides an efficient and user-friendly open-source platform for all AI developers, making cutting-edge large models and algorithm technologies easily accessible.

Supported Models

InternLM
Maximum Context Length: 32K
Maximum Output Length: 4K
Input Price: --
Output Price: --

Using InternLM in LobeChat


InternLM is a large pre-trained language model jointly launched by the Shanghai Artificial Intelligence Laboratory and SenseTime. Focused on natural language processing, it is designed to understand and generate human language, and offers strong semantic comprehension and text generation capabilities.

This article will guide you on how to use InternLM in LobeChat.

Step 1: Obtain the InternLM API Key

  • Register and log in to InternLM API
  • Create an API token
  • Save the API token in the pop-up window
Save API Token

Please store the API token shown in the pop-up securely; it will only appear once. If you lose it, you will need to create a new API token.
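Before configuring LobeChat, you can sanity-check the token from the command line. The sketch below assumes InternLM exposes an OpenAI-compatible chat completions endpoint; the base URL and model name are placeholders, so substitute the actual values from the InternLM API documentation:

```python
import json
import os
import urllib.request


def build_chat_request(base_url: str, token: str, model: str, prompt: str) -> urllib.request.Request:
    """Assemble an OpenAI-style chat completion request.

    Assumption: InternLM's API follows the OpenAI request format
    (POST <base_url>/chat/completions with a Bearer token).
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # The token saved in Step 1.
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )


if __name__ == "__main__":
    # Placeholder base URL and model name -- replace with the values
    # documented by InternLM's API platform.
    req = build_chat_request(
        base_url="https://example-internlm-endpoint/v1",
        token=os.environ.get("INTERNLM_API_TOKEN", "sk-..."),
        model="internlm-latest",
        prompt="Hello!",
    )
    # Uncomment to send the request once the URL and token are real:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp)["choices"][0]["message"]["content"])
    print(req.full_url)
```

If the request returns an authentication error, the token was likely copied incorrectly or has been revoked, in which case you will need to create a new one.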

Step 2: Configure InternLM in LobeChat

  • Go to the Settings interface in LobeChat
  • Find the settings option for InternLM under Language Models
Enter API Key
  • Enter the API token you obtained
  • Choose an InternLM model for your AI assistant to start a conversation
Select InternLM Model and Start Conversation

You may be charged for API usage during this process; please refer to InternLM's pricing policy for details.

You are now ready to engage in conversations using the models provided by InternLM in LobeChat.

Related Providers

OpenAI
@OpenAI
21 models
OpenAI is a global leader in artificial intelligence research, with models like the GPT series pushing the frontiers of natural language processing. OpenAI is committed to transforming multiple industries through innovative and efficient AI solutions. Their products demonstrate significant performance and cost-effectiveness, widely used in research, business, and innovative applications.
Ollama
@Ollama
46 models
Ollama provides models that cover a wide range of fields, including code generation, mathematical operations, multilingual processing, and conversational interaction, catering to diverse enterprise-level and localized deployment needs.
Anthropic Claude
@Anthropic
8 models
Anthropic is a company focused on AI research and development, offering a range of advanced language models such as Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus, and Claude 3 Haiku. These models achieve an ideal balance between intelligence, speed, and cost, suitable for various applications from enterprise workloads to rapid-response scenarios. Claude 3.5 Sonnet, as their latest model, has excelled in multiple evaluations while maintaining a high cost-performance ratio.
AWS Bedrock
@Bedrock
14 models
Bedrock is a service provided by Amazon AWS, focusing on delivering advanced AI language and visual models for enterprises. Its model family includes Anthropic's Claude series, Meta's Llama 3.1 series, and more, offering a range of options from lightweight to high-performance, supporting tasks such as text generation, conversation, and image processing for businesses of varying scales and needs.
Google Gemini
@Google
15 models
Google's Gemini series represents its most advanced, versatile AI models, developed by Google DeepMind, designed for multimodal capabilities, supporting seamless understanding and processing of text, code, images, audio, and video. Suitable for various environments from data centers to mobile devices, it significantly enhances the efficiency and applicability of AI models.