Support HuggingFace Models #6

Merged · 7 commits · May 30, 2023
8 changes: 7 additions & 1 deletion .SAMPLE_env
@@ -1,2 +1,8 @@
+// Set which model provider you want to use, HUGGING_FACE or OPEN_AI
+ENABLED_MODEL_STORE=HUGGING_FACE
+
+// Hugging Face API Key
+HUGGINGFACEHUB_API_KEY=
+
 // OpenAI API Key
-OPENAI_API_KEY=""
+OPENAI_API_KEY=
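
For reference, a filled-in `.env` that selects the Hugging Face provider might look like the sketch below; the token value is a placeholder, not a real key (tokens come from https://huggingface.co/settings/tokens):

```bash
// Set which model provider you want to use, HUGGING_FACE or OPEN_AI
ENABLED_MODEL_STORE=HUGGING_FACE

// Hugging Face API Key (placeholder value — substitute your own token)
HUGGINGFACEHUB_API_KEY=hf_xxxxxxxxxxxxxxxxxxxxxxxx

// OpenAI API Key (can stay empty when HUGGING_FACE is enabled)
OPENAI_API_KEY=
```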
23 changes: 19 additions & 4 deletions README.md
@@ -23,18 +23,22 @@ This template is an example project for a simple Large Language Model (LLM) appl

To get started, follow the steps below:

-1. Create an `.env` file by copying the `SAMPLE_env` file and add API keys for the models you are going to use
+1. Create an `.env` file by copying the `SAMPLE_env` file and add the model store provider you'll be using (e.g. HuggingFace or OpenAI) and the API keys for the models you are going to use
1. Install packages
1. Run the backend server, which starts on port `3100` by default

```bash
yarn start-server
```

1. Run the frontend server, which starts on port `5173` by default.

```bash
yarn start
```

_Note:_ You can use the `--port` flag to specify a port for the frontend server. To do this, you can either run `yarn start` with an additional flag, like so:

```bash
yarn start -- --port 3000
```
@@ -44,15 +48,26 @@ To get started, follow the steps below:
```bash
vite --port 3000
```

Additional scripts are provided to prepare the app for production:

- `yarn build` — This will output a production build of the frontend app in the `dist` directory.
- `yarn preview` — This will run the production build of the frontend app locally, on port `5173` by default (_note_: this will not work if you haven't generated the production build yet).

### Tutorials

👽 If you're looking for more thorough instructions, follow [this tutorial on running an LLM React Node app](https://blog.golivecosmos.com/build-an-llm-app-with-node-react-and-langchain-js/). 📚

-------------

-## Shout out to the ⭐star gazers⭐ supporting the project
+## How to Contribute

Feel free to try out the template, open an issue if there's something you'd like to see added or fixed, or open a pull request to contribute.

### Shout out to the ⭐star gazers⭐ supporting the project

[![Stargazers repo roster for @golivecosmos/llm-react-node-app-template](https://reporoster.com/stars/golivecosmos/llm-react-node-app-template)](https://github.com/golivecosmos/llm-react-node-app-template/stargazers)

### Thanks for the forks🍴

[![Forkers repo roster for @golivecosmos/llm-react-node-app-template](https://reporoster.com/forks/golivecosmos/llm-react-node-app-template)](https://github.com/golivecosmos/llm-react-node-app-template/network/members)
3 changes: 2 additions & 1 deletion package.json
@@ -8,11 +8,12 @@
"start": "vite",
"preview": "vite preview",
"build": "vite build",
"start-server": "node ./server/index.js"
"start-server": "nodemon ./server/index.js"
},
"dependencies": {
"@emotion/react": "^11.11.0",
"@emotion/styled": "^11.11.0",
"@huggingface/inference": "^2.5.0",
"@koa/cors": "^4.0.0",
"@koa/router": "^12.0.0",
"@mui/icons-material": "^5.11.16",
12 changes: 12 additions & 0 deletions server/config/model_store_constants.js
@@ -0,0 +1,12 @@
import { HuggingFaceService } from '../services/hf.js'
import { OpenAiService } from '../services/openai.js'

export const MODEL_STORES = {
  'HUGGING_FACE': HuggingFaceService,
  'OPEN_AI': OpenAiService,
};

export const { ENABLED_MODEL_STORE } = process.env;
export const DEFAULT_ENABLED_MODEL_STORE = 'HUGGING_FACE';

export const enabledModel = ENABLED_MODEL_STORE || DEFAULT_ENABLED_MODEL_STORE;
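
With these constants in place, callers can stay provider-agnostic: they look up a service class by name and instantiate it. A minimal sketch of that lookup, assuming it runs from the repo root (the error guard is illustrative, not part of this diff):

```js
import { MODEL_STORES, enabledModel } from './server/config/model_store_constants.js';

// Resolve the configured provider class; enabledModel falls back to
// HUGGING_FACE when ENABLED_MODEL_STORE is unset in .env.
const ServiceClass = MODEL_STORES[enabledModel];

if (!ServiceClass) {
  // Guards against a typo such as ENABLED_MODEL_STORE=HUGINGFACE (hypothetical)
  throw new Error(`Unknown model store: ${enabledModel}`);
}

const model = new ServiceClass();
```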
25 changes: 3 additions & 22 deletions server/handlers/chat_handler.js
@@ -1,32 +1,13 @@
-import { ConversationChain } from 'langchain/chains';
-import { ChatOpenAI } from 'langchain/chat_models/openai';
-import { ChatPromptTemplate, HumanMessagePromptTemplate, MessagesPlaceholder, SystemMessagePromptTemplate } from 'langchain/prompts';
-import { ConversationSummaryMemory } from 'langchain/memory';
+import { MODEL_STORES, enabledModel } from '../config/model_store_constants.js';
 
 class ChatService {
   constructor () {
-    this.chat = new ChatOpenAI({ temperature: 0, verbose: true });
-    this.chatPrompt = ChatPromptTemplate.fromPromptMessages([
-      SystemMessagePromptTemplate.fromTemplate('The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.'),
-      new MessagesPlaceholder('history'),
-      HumanMessagePromptTemplate.fromTemplate('{input}'),
-    ]);
-
-    this.memory = new ConversationSummaryMemory({ llm: this.chat, returnMessages: true });
+    this.model = new MODEL_STORES[enabledModel]();
   }
 
   async startChat(data) {
     const { body: { userInput } } = data;
 
-    const chain = new ConversationChain({
-      memory: this.memory,
-      prompt: this.chatPrompt,
-      llm: this.chat,
-    });
-
-    const response = await chain.call({
-      input: userInput,
-    });
+    const response = await this.model.call(userInput);
 
     return response;
   }
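For context, a route wired to this handler might look like the sketch below. The `/chat` path, the `ChatService` export, and the body-parsing setup are assumptions for illustration — the repo's actual routing code is collapsed out of view in this diff:

```js
import Router from '@koa/router';
import { ChatService } from '../handlers/chat_handler.js'; // assumed export name

const router = new Router();
const chatService = new ChatService();

// Hypothetical route: forward the request body (which carries `userInput`)
// to whichever model service is enabled, and return its response.
router.post('/chat', async (ctx) => {
  ctx.body = await chatService.startChat({ body: ctx.request.body });
});

export { router };
```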
25 changes: 25 additions & 0 deletions server/services/hf.js
@@ -0,0 +1,25 @@
import { HfInference } from "@huggingface/inference";

const { HUGGINGFACEHUB_API_KEY } = process.env;

class HuggingFaceService {
  constructor () {
    this.modelName = 'microsoft/DialoGPT-large';
    this.model = new HfInference(HUGGINGFACEHUB_API_KEY);
  }

  async call(userInput) {
    // TO DO: pass in past_user_inputs for context
    const response = await this.model.conversational({
      model: this.modelName,
      inputs: {
        text: userInput,
      },
      // sampling options belong under `parameters` in the Inference API payload
      parameters: { temperature: 0 },
    });

    return { response: response && response.generated_text };
  }
}

export { HuggingFaceService }
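
The `TO DO` note above could be addressed by threading earlier turns through the payload, since the conversational endpoint accepts `past_user_inputs` and `generated_responses` arrays alongside `text`. A rough sketch under that assumption — the class name and in-memory history are illustrative, not part of this PR:

```js
import { HfInference } from '@huggingface/inference';

// Illustrative variant (hypothetical class) that keeps per-instance history
// so the model sees prior turns on each call.
class StatefulHuggingFaceService {
  constructor () {
    this.modelName = 'microsoft/DialoGPT-large';
    this.model = new HfInference(process.env.HUGGINGFACEHUB_API_KEY);
    this.pastUserInputs = [];
    this.generatedResponses = [];
  }

  async call(userInput) {
    const response = await this.model.conversational({
      model: this.modelName,
      inputs: {
        text: userInput,
        past_user_inputs: this.pastUserInputs,
        generated_responses: this.generatedResponses,
      },
    });

    // Remember this turn so the next call has context.
    this.pastUserInputs.push(userInput);
    this.generatedResponses.push(response.generated_text);

    return { response: response && response.generated_text };
  }
}

export { StatefulHuggingFaceService };
```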
38 changes: 38 additions & 0 deletions server/services/openai.js
@@ -0,0 +1,38 @@
import { ConversationChain } from 'langchain/chains';
import { ChatOpenAI } from 'langchain/chat_models/openai';
import { ChatPromptTemplate, HumanMessagePromptTemplate, MessagesPlaceholder, SystemMessagePromptTemplate } from 'langchain/prompts';
import { ConversationSummaryMemory } from 'langchain/memory';

class OpenAiService {
  constructor () {
    this.model = new ChatOpenAI({ temperature: 0, verbose: true });

    this.chatPrompt = ChatPromptTemplate.fromPromptMessages([
      SystemMessagePromptTemplate.fromTemplate('The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.'),
      new MessagesPlaceholder('history'),
      HumanMessagePromptTemplate.fromTemplate('{input}'),
    ]);

    this.memory = new ConversationSummaryMemory({ llm: this.model, returnMessages: true });
  }

  assembleChain () {
    const chain = new ConversationChain({
      memory: this.memory,
      prompt: this.chatPrompt,
      llm: this.model,
    });
    return chain;
  }

  call = async (userInput) => {
    const chain = this.assembleChain();

    const response = await chain.call({
      input: userInput,
    });
    return response;
  }
}

export { OpenAiService };
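
Because both services expose the same `call(userInput)` shape, `ChatService` can swap between them without changes. A quick usage sketch, assuming `OPENAI_API_KEY` is set in the environment and the script runs from the repo root:

```js
import { OpenAiService } from './server/services/openai.js';

const service = new OpenAiService();

// ConversationSummaryMemory keeps a running summary, so successive calls
// on the same instance share context.
const { response } = await service.call('What can I build with LangChain?');
console.log(response);
```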
5 changes: 5 additions & 0 deletions yarn.lock
@@ -520,6 +520,11 @@
resolved "https://registry.yarnpkg.com/@fortaine/fetch-event-source/-/fetch-event-source-3.0.6.tgz#b8552a2ca2c5202f5699b93a92be0188d422b06e"
integrity sha512-621GAuLMvKtyZQ3IA6nlDWhV1V/7PGOTNIGLUifxt0KzM+dZIweJ6F3XvQF3QnqeNfS1N7WQ0Kil1Di/lhChEw==

"@huggingface/inference@^2.5.0":
version "2.5.0"
resolved "https://registry.yarnpkg.com/@huggingface/inference/-/inference-2.5.0.tgz#8e14ee6696e91aecb132c90d3b07be8373e70338"
integrity sha512-X3NSdrWAKNTLAsEKabH48Wc+Osys+S7ilRcH1bf9trSDmJlzPVXDseXMRBHCFPCYd5AAAIakhENO4zCqstVg8g==

"@jridgewell/gen-mapping@^0.3.0", "@jridgewell/gen-mapping@^0.3.2":
version "0.3.3"
resolved "https://registry.yarnpkg.com/@jridgewell/gen-mapping/-/gen-mapping-0.3.3.tgz#7e02e6eb5df901aaedb08514203b096614024098"