add official api, remove davinci, organize root dir

Daniel Avila 2023-03-01 21:39:57 -05:00
parent 90ac43c8c5
commit 1f270d349a
12 changed files with 75 additions and 45 deletions

.gitignore (vendored, 2 changes)

@@ -40,5 +40,7 @@ bower_components/
# Environment
.env
cache.json
data/
.eslintrc.js
src/style - official.css


@@ -1,5 +1,5 @@
# ChatGPT Clone #
![chatgpt-clone demo](./demo.gif)
![chatgpt-clone demo](./public/demo.gif)
## Wrap all conversational AIs under one roof. ##
Conversational/utility AIs are the future, and OpenAI revolutionized this movement with ChatGPT. While numerous methods exist to integrate conversational AIs, this app commemorates the original styling of ChatGPT, with the ability to integrate any current or future conversational AI model through user-provided APIs. It also aims for improved client features, such as conversation search and prompt templates. This project was built in anticipation of the official ChatGPT API from OpenAI, though it uses unofficial packages. Through this clone, you can avoid subscription-based models in favor of free or pay-per-call APIs. I will most likely not deploy this app, as it's mainly a learning experience, but feel free to clone or fork it to create your own custom wrapper.
@@ -8,6 +8,11 @@
## Updates
<details open>
<summary><strong>2023-03-01</strong></summary>
The official ChatGPT API is out! Removed davinci, since the official API is extremely fast and 10x less expensive. Since user labeling and prompt prefixing are now officially supported, I will add a view so you can set these within the chat, which gives the UI an added use case. I've kept the BrowserClient, since it's free to use like the official site.
The Messages UI now correctly mirrors ChatGPT's code syntax highlighting. The cursor replication is not 1-to-1 yet, but it's pretty close. Later in the project, I'll implement tests for code edge cases and explore the possibility of running code in-browser. Right now, unrecognized code defaults to JavaScript, but the language is detected as closely as possible.
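The language fallback described above can be sketched as follows; `pickLanguage` is illustrative, not an actual function in this repo, and the exact detection behavior is an assumption:

```javascript
// Illustrative fallback: keep the fenced block's language when it is one
// highlight.js knows, otherwise default to javascript (assumed behavior).
const pickLanguage = (lang, supported) =>
  lang && supported.includes(lang) ? lang : 'javascript';

console.log(pickLanguage('python', ['python', 'rust'])); // 'python'
console.log(pickLanguage('notalang', ['python', 'rust'])); // 'javascript'
```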
<details>
<summary><strong>2023-02-21</strong></summary>
BingAI is integrated (although sadly limited by Microsoft to 5 messages per conversation and 50 messages per day). I will need to handle the case when Bing refuses to give more answers, on top of the other styling features I have in mind. Official ChatGPT use is back with the new BrowserClient. I'm brainstorming how to handle the UI when the AI model changes, since conversations can't be persisted between models (or perhaps I'll build a way to achieve this at some level).
</details>
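The user labeling and prompt prefixing mentioned in the 2023-03-01 update map onto the official chat message format, where a system message carries the prefix and a `name` field labels the user. A minimal sketch; `buildPayload` is a hypothetical helper, not part of this repo:

```javascript
// Hypothetical helper showing how the official gpt-3.5-turbo chat format
// carries a prompt prefix (system role) and a user label (name field).
const buildPayload = (promptPrefix, userLabel, text) => ({
  model: 'gpt-3.5-turbo',
  messages: [
    { role: 'system', content: promptPrefix },
    { role: 'user', name: userLabel, content: text }
  ]
});

const payload = buildPayload('You are a helpful assistant.', 'Dan', 'Hello!');
console.log(payload.messages[1].name); // 'Dan'
```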
@@ -39,9 +44,11 @@ Currently, this project is only functional with the `text-davinci-003` model.
- [x] Remember last selected model
- [x] Highlight.js for code blocks
- [x] Markdown handling
- [ ] Bing AI Styling (for suggested responses, convo end, etc.)
- [ ] 'Copy to clipboard' button for code and messages
- [ ] Set user/model label and prompt prefix view option
- [ ] AI model change handling (whether to pseudo-persist convos or start new convos within existing convo)
- [ ] Server convo pagination (limit fetch and load more with 'show more' button)
- [ ] Bing AI Styling (for suggested responses, convo end, etc.)
- [ ] Prompt Templates
- [ ] Conversation/Prompt Search
- [ ] Refactor/clean up code
@@ -64,7 +71,7 @@ Currently, this project is only functional with the `text-davinci-003` model.
## Use Cases ##
![use case example](./use_case.png "GPT is down! Plus is too expensive!")
![use case example](./public/use_case.png "GPT is down! Plus is too expensive!")
- ChatGPT is down (and you don't want to pay for ChatGPT Plus).
- ChatGPT/Google Bard/Bing AI conversations are lost in space or
cannot be searched past a certain timeframe.


@@ -10,7 +10,7 @@ const askBing = async ({ text, progressCallback, convo }) => {
// If the above doesn't work, provide all your cookies as a string instead
// cookies: '',
debug: false,
cache: new KeyvFile({ filename: 'bingcache.json' })
store: new KeyvFile({ filename: './data/cache.json' })
});
let options = {

app/chatgpt-browser.js (new file, 32 lines)

@@ -0,0 +1,32 @@
require('dotenv').config();
const { KeyvFile } = require('keyv-file');
const clientOptions = {
// Warning: This will expose your access token to a third party. Consider the risks before using this.
reverseProxyUrl: 'https://chatgpt.duti.tech/api/conversation',
// Access token from https://chat.openai.com/api/auth/session
accessToken: process.env.CHATGPT_TOKEN
};
const browserClient = async ({ text, progressCallback, convo }) => {
const { ChatGPTBrowserClient } = await import('@waylaidwanderer/chatgpt-api');
const store = {
store: new KeyvFile({ filename: './data/cache.json' })
};
const client = new ChatGPTBrowserClient(clientOptions, store);
let options = {
onProgress: async (partialRes) => await progressCallback(partialRes)
};
if (!!convo.parentMessageId && !!convo.conversationId) {
options = { ...options, ...convo };
}
const res = await client.sendMessage(text, options);
return res;
};
module.exports = { browserClient };
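The `parentMessageId`/`conversationId` guard in `browserClient` only threads an existing conversation through when both IDs are present. A standalone sketch of that check; `withConvo` is illustrative, not a function in this repo:

```javascript
// Illustrative version of the continuation check above: spread the convo
// IDs into the request options only when both are set, otherwise the
// client starts a fresh conversation.
const withConvo = (options, convo) =>
  !!convo.parentMessageId && !!convo.conversationId
    ? { ...options, ...convo }
    : options;

const base = { onProgress: () => {} };
console.log(Object.keys(withConvo(base, {})).length); // 1
```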


@@ -1,39 +1,20 @@
require('dotenv').config();
const { KeyvFile } = require('keyv-file');
const proxyOptions = {
// Warning: This will expose your access token to a third party. Consider the risks before using this.
reverseProxyUrl: 'https://chatgpt.duti.tech/api/conversation',
// Access token from https://chat.openai.com/api/auth/session
accessToken: process.env.CHATGPT_TOKEN
};
const davinciOptions = {
const clientOptions = {
modelOptions: {
model: 'text-davinci-003'
model: 'gpt-3.5-turbo'
},
debug: false
};
const askClient = async ({ model, text, progressCallback, convo }) => {
const davinciClient = (await import('@waylaidwanderer/chatgpt-api')).default;
const { ChatGPTBrowserClient } = await import('@waylaidwanderer/chatgpt-api');
const clientOptions = model === 'chatgpt' ? proxyOptions : davinciOptions;
const modelClient = model === 'chatgpt' ? ChatGPTBrowserClient : davinciClient;
const askClient = async ({ text, progressCallback, convo }) => {
const ChatGPTClient = (await import('@waylaidwanderer/chatgpt-api')).default;
const store = {
store: new KeyvFile({ filename: 'cache.json' })
store: new KeyvFile({ filename: './data/cache.json' })
};
const params =
model === 'chatgpt'
? [clientOptions, store]
: [
process.env.OPENAI_KEY,
clientOptions,
store
];
const client = new modelClient(...params);
const client = new ChatGPTClient(process.env.OPENAI_KEY, clientOptions, store);
let options = {
onProgress: async (partialRes) => await progressCallback(partialRes)
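Both clients stream tokens through an `onProgress` callback wired to `progressCallback`. A self-contained sketch of that accumulation pattern; `collectStream` and `fakeSend` are illustrative stand-ins, not code from this repo:

```javascript
// Illustrative streaming pattern: accumulate partial responses as they
// arrive via onProgress, mirroring how progressCallback is wired above.
const collectStream = async (send, text) => {
  let acc = '';
  await send(text, { onProgress: async (partial) => { acc += partial; } });
  return acc;
};

// Fake client standing in for client.sendMessage (illustration only).
const fakeSend = async (text, { onProgress }) => {
  for (const token of ['Hel', 'lo', '!']) await onProgress(token);
};

collectStream(fakeSend, 'hi').then((out) => console.log(out)); // 'Hello!'
```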


@@ -1,9 +1,11 @@
const { titleConvo } = require('./chatgpt');
const { askClient } = require('./chatgpt-client');
const { browserClient } = require('./chatgpt-browser');
const { askBing } = require('./bingai');
module.exports = {
titleConvo,
askClient,
askBing,
browserClient,
};


@@ -1,5 +1,5 @@
{
"ignore": [
"cache.json"
"data/"
]
}

package-lock.json (generated, 12 changes)

@@ -4524,9 +4524,9 @@
}
},
"node_modules/@waylaidwanderer/chatgpt-api": {
"version": "1.20.8",
"resolved": "https://registry.npmjs.org/@waylaidwanderer/chatgpt-api/-/chatgpt-api-1.20.8.tgz",
"integrity": "sha512-e2F7mEPdypKL3UgQe9rWlqeknXJwXWarTiWFkFG3M5uWWcHdT8S0zPMgrFem+yj0njjlOSmFIs+oJ6jQQpwy5A==",
"version": "1.22.2",
"resolved": "https://registry.npmjs.org/@waylaidwanderer/chatgpt-api/-/chatgpt-api-1.22.2.tgz",
"integrity": "sha512-OjhRBtczNhPn5xIxOWDqIbAel3sC9FLPxh6iAVYOAlIJ+zqyAXI2mxd8P5/Qm8eD1M7YnrIpcWqlPfN8zsNPkA==",
"dependencies": {
"@fastify/cors": "^8.2.0",
"@waylaidwanderer/fastify-sse-v2": "^3.1.0",
@@ -18362,9 +18362,9 @@
}
},
"@waylaidwanderer/chatgpt-api": {
"version": "1.20.8",
"resolved": "https://registry.npmjs.org/@waylaidwanderer/chatgpt-api/-/chatgpt-api-1.20.8.tgz",
"integrity": "sha512-e2F7mEPdypKL3UgQe9rWlqeknXJwXWarTiWFkFG3M5uWWcHdT8S0zPMgrFem+yj0njjlOSmFIs+oJ6jQQpwy5A==",
"version": "1.22.2",
"resolved": "https://registry.npmjs.org/@waylaidwanderer/chatgpt-api/-/chatgpt-api-1.22.2.tgz",
"integrity": "sha512-OjhRBtczNhPn5xIxOWDqIbAel3sC9FLPxh6iAVYOAlIJ+zqyAXI2mxd8P5/Qm8eD1M7YnrIpcWqlPfN8zsNPkA==",
"requires": {
"@fastify/cors": "^8.2.0",
"@waylaidwanderer/fastify-sse-v2": "^3.1.0",

Binary image files moved (62 KiB, content unchanged).


@@ -2,7 +2,7 @@ const express = require('express');
const crypto = require('crypto');
const router = express.Router();
const askBing = require('./askBing');
const { titleConvo, askClient } = require('../../app/');
const { titleConvo, askClient, browserClient } = require('../../app/');
const { saveMessage, deleteMessages, saveConvo } = require('../../models');
const { handleError, sendMessage } = require('./handlers');
@@ -19,6 +19,8 @@ router.post('/', async (req, res) => {
console.log('ask log', { model, ...userMessage, parentMessageId, conversationId });
const client = model === 'chatgpt' ? askClient : browserClient;
res.writeHead(200, {
Connection: 'keep-alive',
'Content-Type': 'text/event-stream',
@@ -53,8 +55,7 @@ router.post('/', async (req, res) => {
}
};
let gptResponse = await askClient({
model,
let gptResponse = await client({
text,
progressCallback,
convo: {
@@ -63,7 +64,7 @@ router.post('/', async (req, res) => {
}
});
// console.log('CLIENT RESPONSE', gptResponse);
console.log('CLIENT RESPONSE', gptResponse);
if (!gptResponse.parentMessageId) {
gptResponse.text = gptResponse.response;
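The route now dispatches on the `model` value with a ternary over two clients. If more models are added later, that ternary generalizes to a lookup map; `pickClient` is a hypothetical sketch, not code from this commit:

```javascript
// Hypothetical generalization of:
//   const client = model === 'chatgpt' ? askClient : browserClient;
// Look the client up by model key, falling back to a default.
const pickClient = (model, clients, fallback) => clients[model] || fallback;

const clients = { chatgpt: 'askClient', chatgptBrowser: 'browserClient' };
console.log(pickClient('chatgptBrowser', clients, 'askClient')); // 'browserClient'
console.log(pickClient('unknown', clients, 'askClient')); // 'askClient'
```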


@@ -35,15 +35,21 @@ export default function Message({
'w-full border-b border-black/10 dark:border-gray-900/50 text-gray-800 dark:text-gray-100 group dark:bg-gray-800'
};
const bgColors = {
chatgpt: 'rgb(16, 163, 127)',
chatgptBrowser: 'rgb(25, 207, 207)',
bingai: '',
};
let icon = `${sender}:`;
const isGPT = sender === 'chatgpt' || sender === 'davinci' || sender === 'GPT';
let backgroundColor = bgColors[sender];
if (notUser) {
props.className =
'w-full border-b border-black/10 dark:border-gray-900/50 text-gray-800 dark:text-gray-100 group bg-gray-100 dark:bg-[#444654]';
icon = (
<div
style={isGPT ? { backgroundColor: 'rgb(16, 163, 127)' } : {}}
style={{ backgroundColor }}
className="relative flex h-[30px] w-[30px] items-center justify-center rounded-sm p-1 text-white"
>
{sender === 'bingai' ? <BingIcon /> : <GPTIcon />}


@@ -75,10 +75,9 @@ export default function ModelMenu() {
value={model}
onValueChange={onChange}
>
<DropdownMenuRadioItem value="bingai">BingAI</DropdownMenuRadioItem>
<DropdownMenuRadioItem value="chatgpt">ChatGPT</DropdownMenuRadioItem>
<DropdownMenuRadioItem value="davinci">Davinci</DropdownMenuRadioItem>
{/* <DropdownMenuRadioItem value="right">Right</DropdownMenuRadioItem> */}
<DropdownMenuRadioItem value="bingai">BingAI</DropdownMenuRadioItem>
<DropdownMenuRadioItem value="chatgptBrowser">{'ChatGPT (free)'}</DropdownMenuRadioItem>
</DropdownMenuRadioGroup>
</DropdownMenuContent>
</DropdownMenu>