
Simplifying the OpenAI provider to use multiple model providers #1248


Closed
srdas wants to merge 10 commits

Conversation

@srdas (Collaborator) commented on Feb 17, 2025

Description

The OpenAI model interface has been widely adopted by many model providers (DeepSeek, vLLM, etc.), and this PR enables accessing those models through the OpenAI provider. Existing OpenAI models remain accessible via the same interface.

This PR also updates the related documentation on using these models via the OpenAI provider.
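As a rough sketch of the mechanism (not code from this PR), any OpenAI-compatible server can be reached simply by changing the base URL, API key, and model ID passed to the standard OpenAI client; the DeepSeek endpoint and model name below are illustrative:

```ts
import OpenAI from "openai";

// Illustrative only: the same OpenAI client can target any OpenAI-compatible
// server (OpenAI, DeepSeek, vLLM, ...) by changing baseURL, apiKey, and model.
const client = new OpenAI({
  baseURL: "https://api.deepseek.com/v1", // example: DeepSeek's OpenAI-compatible endpoint
  apiKey: process.env.DEEPSEEK_API_KEY    // example environment variable
});

async function main(): Promise<void> {
  const completion = await client.chat.completions.create({
    model: "deepseek-chat", // example model ID
    messages: [{ role: "user", content: "Hello from an OpenAI-compatible endpoint." }]
  });
  console.log(completion.choices[0].message.content);
}

main();
```

Swapping the base URL for a local vLLM server's address (by default http://localhost:8000/v1) follows the same pattern, which is presumably what the vLLM settings in the demo below configure.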

Demo

See the new model usage and the required settings shown below; note the new "OpenAI::general interface":
[screenshot: chat settings showing the "OpenAI::general interface"]

For any OpenAI model:
[screenshot: openai-chat-openai]

For DeepSeek models:
[screenshot: openai-chat-deepseek]

For models deployed with vLLM:
[screenshot: openai-chat-vllm]

Embedding Models

First, tested to confirm that the existing OpenAI models work as intended with no changes to the code:
[screenshot]

Second, modified the check so that the interface accepts any OpenAI embedding model as input, and verified that it works with OpenAI models as before:

[screenshot]
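For context, a hedged sketch of an OpenAI-compatible embeddings call; the base URL and model ID below are examples rather than values taken from the PR:

```ts
import OpenAI from "openai";

// Illustrative sketch: with this change, the embedding provider accepts whatever
// embedding model ID the configured OpenAI-compatible server exposes.
const client = new OpenAI({
  baseURL: "https://api.openai.com/v1", // or any OpenAI-compatible server
  apiKey: process.env.OPENAI_API_KEY
});

async function embed(text: string): Promise<number[]> {
  const result = await client.embeddings.create({
    model: "text-embedding-3-small", // example embedding model ID
    input: text
  });
  return result.data[0].embedding;
}

embed("Jupyter AI embedding test").then(v => console.log(`dimension: ${v.length}`));
```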

Code Completion

Code completion is also supported now, as shown below:
[screenshot]
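As an illustrative sketch (not the PR's implementation), completion requests can go through the same OpenAI-compatible interface; here the server is assumed to be a local vLLM instance and the model ID is an example:

```ts
import OpenAI from "openai";

// Illustrative sketch: a code-completion request against a local vLLM server,
// which speaks the same OpenAI-compatible API.
const client = new OpenAI({
  baseURL: "http://localhost:8000/v1", // vLLM's default OpenAI-compatible address
  apiKey: "EMPTY"                      // vLLM accepts a placeholder key by default
});

async function completeCode(prefix: string): Promise<string> {
  const response = await client.completions.create({
    model: "codellama/CodeLlama-7b-hf", // example model served by vLLM
    prompt: prefix,
    max_tokens: 64,
    temperature: 0
  });
  return response.choices[0].text;
}

completeCode("def fibonacci(n):\n    ").then(console.log);
```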

@srdas added the enhancement (New feature or request) label on Feb 17, 2025
@srdas marked this pull request as ready for review on February 17, 2025, 23:41
@dlqqq marked this pull request as draft on February 18, 2025, 19:17
@@ -125,19 +128,24 @@ export function ChatSettings(props: ChatSettingsProps): JSX.Element {
}

setLmLocalId(server.chat.lmLocalId);
setEmLocalId(server.config.embeddings_provider_id ?? '');
Review comment (Member):

I think this is the line causing the local model ID to be set to the universal model ID (including the provider prefix). We need to add emLocalId to the server.chat: ProvidersInfo object, which is returned from use-server-info.ts.
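A minimal, hypothetical sketch of what that suggestion could look like, assuming ProvidersInfo is the shape returned by use-server-info.ts; every field name other than lmLocalId and the proposed emLocalId is a placeholder:

```ts
// Hypothetical sketch only (not the actual use-server-info.ts): adding emLocalId
// next to lmLocalId would let the settings panel read the local (unprefixed)
// embedding model ID instead of deriving it from server.config.embeddings_provider_id.

// Placeholder standing in for AiService.ListProvidersEntry from the real codebase.
type ListProvidersEntry = { id: string; name: string };

type ProvidersInfo = {
  lmProvider: ListProvidersEntry | null; // illustrative field
  emProvider: ListProvidersEntry | null; // illustrative field
  lmLocalId: string;                     // already read via server.chat.lmLocalId
  emLocalId: string;                     // field proposed in this comment
};

// The settings component could then call, for example:
//   setEmLocalId(server.chat.emLocalId);
```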

const [clmProvider, setClmProvider] =
useState<AiService.ListProvidersEntry | null>(null);
const [showLmLocalId, setShowLmLocalId] = useState<boolean>(false);
const [showEmLocalId, setShowEmLocalId] = useState<boolean>(false);
Review comment (Member):

This looks like a duplicate of line 60.

@srdas (Collaborator, Author) commented on Feb 27, 2025

Replaced by PR #1264

@srdas closed this on Feb 27, 2025
Successfully merging this pull request may close these issues:

Add support for embedding models served through an OpenAI API