feat: add new LLM API Provider: Novita AI #5115

Open · wants to merge 21 commits into base: main
1 change: 1 addition & 0 deletions .env.example
@@ -72,6 +72,7 @@ PROXY=
# HUGGINGFACE_TOKEN=
# MISTRAL_API_KEY=
# OPENROUTER_KEY=
# NOVITA_API_KEY=
# PERPLEXITY_API_KEY=
# SHUTTLEAI_API_KEY=
# TOGETHERAI_API_KEY=
2 changes: 1 addition & 1 deletion README.md
@@ -47,7 +47,7 @@
- [Custom Endpoints](https://www.librechat.ai/docs/quick_start/custom_endpoints): Use any OpenAI-compatible API with LibreChat, no proxy required
- Compatible with [Local & Remote AI Providers](https://www.librechat.ai/docs/configuration/librechat_yaml/ai_endpoints):
- Ollama, groq, Cohere, Mistral AI, Apple MLX, koboldcpp, together.ai,
- OpenRouter, Perplexity, ShuttleAI, Deepseek, Qwen, and more
- OpenRouter, Novita AI, Perplexity, ShuttleAI, Deepseek, Qwen, and more

- 🔧 **[Code Interpreter API](https://www.librechat.ai/docs/features/code_interpreter)**:
- Secure, Sandboxed Execution in Python, Node.js (JS/TS), Go, C/C++, Java, PHP, Rust, and Fortran
26 changes: 23 additions & 3 deletions api/server/services/Config/loadConfigModels.spec.js
@@ -28,6 +28,16 @@ const exampleConfig = {
},
dropParams: ['stop'],
},
{
name: 'NovitaAI',
apiKey: '${MY_NOVITA_API_KEY}',
baseURL: 'https://api.novita.ai/v3/openai',
models: {
default: ['meta-llama/llama-3.3-70b-instruct'],
fetch: true,
},
dropParams: ['stop'],
},
{
name: 'groq',
apiKey: 'user_provided',
@@ -208,11 +218,12 @@ describe('loadConfigModels', () => {
it('loads models based on custom endpoint configuration respecting fetch rules', async () => {
process.env.MY_PRECIOUS_MISTRAL_KEY = 'actual_mistral_api_key';
process.env.MY_OPENROUTER_API_KEY = 'actual_openrouter_api_key';
// Setup custom configuration with specific API keys for Mistral and OpenRouter
process.env.MY_NOVITA_API_KEY = 'actual_novita_api_key';
// Set up custom configuration with specific API keys for Mistral, OpenRouter, and Novita AI
// and "user_provided" for groq and Ollama, indicating no fetch for the latter two
getCustomConfig.mockResolvedValue(exampleConfig);

// Assuming fetchModels would be called only for Mistral and OpenRouter
// Assuming fetchModels would be called only for Mistral, OpenRouter and NovitaAI
fetchModels.mockImplementation(({ name }) => {
switch (name) {
case 'Mistral':
@@ -224,14 +235,16 @@
]);
case 'OpenRouter':
return Promise.resolve(['gpt-3.5-turbo']);
case 'NovitaAI':
return Promise.resolve(['meta-llama/llama-3.3-70b-instruct']);
default:
return Promise.resolve([]);
}
});

const result = await loadConfigModels(mockRequest);

// Since fetch is true and apiKey is not "user_provided", fetching occurs for Mistral and OpenRouter
// Since fetch is true and apiKey is not "user_provided", fetching occurs for Mistral, OpenRouter and NovitaAI
expect(result.Mistral).toEqual([
'mistral-tiny',
'mistral-small',
@@ -253,6 +266,13 @@
}),
);

expect(result.NovitaAI).toEqual(['meta-llama/llama-3.3-70b-instruct']);
expect(fetchModels).toHaveBeenCalledWith(
expect.objectContaining({
name: 'NovitaAI',
apiKey: process.env.MY_NOVITA_API_KEY,
}),
);
// For groq and ollama, since the apiKey is "user_provided", models should not be fetched
// Depending on your implementation's behavior regarding "default" models without fetching,
// you may need to adjust the following assertions:
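The assertions above encode the rule this test names: models are fetched for a custom endpoint only when fetch is true and its apiKey resolves to a real value rather than the literal 'user_provided'. A minimal sketch of that decision, assuming the '${VAR}' placeholders resolve against process.env (helper names here are illustrative, not LibreChat's actual internals):

// Illustrative resolver: '${MY_NOVITA_API_KEY}' -> process.env.MY_NOVITA_API_KEY
function resolveApiKey(raw) {
  const match = /^\$\{(.+)\}$/.exec(raw ?? '');
  return match ? process.env[match[1]] : raw;
}

// Fetch only when fetching is enabled and the key is concrete,
// i.e. not supplied by the user at runtime
function shouldFetchModels(endpoint) {
  const apiKey = resolveApiKey(endpoint.apiKey);
  return endpoint.models?.fetch === true && Boolean(apiKey) && apiKey !== 'user_provided';
}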
18 changes: 17 additions & 1 deletion api/server/services/ModelService.spec.js
@@ -162,7 +162,23 @@ describe('getOpenAIModels', () => {
});

it('attempts to use OPENROUTER_API_KEY if set', async () => {
process.env.OPENROUTER_API_KEY = 'test-router-key';
process.env.OPENROUTER_API_KEY = 'test-api-key';
const expectedModels = ['model-router-1', 'model-router-2'];

axios.get.mockResolvedValue({
data: {
data: expectedModels.map((id) => ({ id })),
},
});

const models = await getOpenAIModels({ user: 'user456' });

expect(models).toEqual(expect.arrayContaining(expectedModels));
expect(axios.get).toHaveBeenCalled();
});

it('attempts to use NOVITA_API_KEY if set', async () => {
process.env.NOVITA_API_KEY = 'test-api-key';
const expectedModels = ['model-router-1', 'model-router-2'];

axios.get.mockResolvedValue({
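The remainder of this test is collapsed in the diff; it presumably mirrors the OPENROUTER_API_KEY test directly above. A sketch of the likely continuation:

    // Presumed continuation, mirroring the OpenRouter test above
    data: {
      data: expectedModels.map((id) => ({ id })),
    },
  });

  const models = await getOpenAIModels({ user: 'user456' });

  expect(models).toEqual(expect.arrayContaining(expectedModels));
  expect(axios.get).toHaveBeenCalled();
});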
3 changes: 3 additions & 0 deletions client/public/assets/novita.svg
(SVG logo asset; no preview available in the diff view)
3 changes: 2 additions & 1 deletion client/src/components/Chat/Menus/Endpoints/UnknownIcon.tsx
@@ -15,6 +15,7 @@ const knownEndpointAssets = {
[KnownEndpoints.mlx]: '/assets/mlx.png',
[KnownEndpoints.ollama]: '/assets/ollama.png',
[KnownEndpoints.openrouter]: '/assets/openrouter.png',
[KnownEndpoints.novitaai]: '/assets/novita.svg',
[KnownEndpoints.perplexity]: '/assets/perplexity.png',
[KnownEndpoints.shuttleai]: '/assets/shuttleai.png',
[KnownEndpoints['together.ai']]: '/assets/together.png',
@@ -43,7 +44,7 @@ const getKnownClass = ({
context?: string;
className: string;
}) => {
if (currentEndpoint === KnownEndpoints.openrouter) {
if (currentEndpoint === KnownEndpoints.openrouter || currentEndpoint === KnownEndpoints.novitaai) {
return className;
}

2 changes: 1 addition & 1 deletion librechat.example.yaml
@@ -184,7 +184,7 @@ endpoints:
# Recommended: Drop the stop parameter from the request as Openrouter models use a variety of stop tokens.
dropParams: ['stop']
modelDisplayLabel: 'OpenRouter'

# Portkey AI Example
- name: "Portkey"
apiKey: "dummy"
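For completeness, a Novita AI endpoint entry for librechat.yaml would presumably mirror the OpenRouter example above, using the values this PR exercises in its tests (baseURL, fetch, and dropParams from loadConfigModels.spec.js; NOVITA_API_KEY from .env.example). A sketch, not a snippet from the PR itself:

    # Novita AI Example (sketch; field values taken from this PR's tests)
    - name: "NovitaAI"
      apiKey: "${NOVITA_API_KEY}"
      baseURL: "https://api.novita.ai/v3/openai"
      models:
        default: ["meta-llama/llama-3.3-70b-instruct"]
        fetch: true
      # Recommended: drop the stop parameter, as with OpenRouter
      dropParams: ["stop"]
      modelDisplayLabel: "Novita AI"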
1 change: 1 addition & 0 deletions packages/data-provider/src/config.ts
@@ -537,6 +537,7 @@ export enum KnownEndpoints {
mlx = 'mlx',
ollama = 'ollama',
openrouter = 'openrouter',
novitaai = 'novitaai',
perplexity = 'perplexity',
shuttleai = 'shuttleai',
'together.ai' = 'together.ai',