Commit graph

453 commits

Author SHA1 Message Date
Danny Avila
f5f5b2bbdb
fix: Resolve Token Credit Balance Issues for Instruct Models 🛠️ (#1232)
* Fix: balance update error and add environment variable check

* fix(ChatGPTClient): return promptTokens for instruct/davinci models

* chore: remove unnecessary comments
2023-11-26 18:12:27 -05:00
Danny Avila
d7ef4590ea
🔧 Fix: Resolve Anthropic Client Issues 🧠 (#1226)
* fix: correct preset title for Anthropic endpoint

* fix(Settings/Anthropic): show correct default value for LLM temperature

* fix(AnthropicClient): use `getModelMaxTokens` to get the correct LLM max context tokens, correctly set default temperature to 1, use only 2 params for class constructor, use `getResponseSender` to add correct sender to response message

* refactor(/api/ask|edit/anthropic): save messages to database after the final response is sent to the client, and do not save conversation from route controller

* fix(initializeClient/anthropic): correctly pass client options (endpointOption) to class initialization

* feat(ModelService/Anthropic): add claude-1.2
2023-11-26 14:44:57 -05:00
Danny Avila
12209fe0dd
refactor: address potential issues with deploy-compose.yml (#1220)
* chore: remove /config/loader

* chore: remove config/loader steps from Dockerfile.multi

* chore: remove install script
2023-11-25 16:34:51 -05:00
Danny Avila
cc39074e0a
🛠️ refactor: Handle .webp, Improve File Life Cycle 📁 (#1213)
* fix: handle webp images correctly

* refactor: use the userPath from the start of the file lifecycle to avoid handling the blob, whose loading may fail upon user request

* refactor: delete temp files on reload and new chat
2023-11-24 16:45:06 -05:00
Danny Avila
55cdd2eec6
refactor: use fileSize limiter native to multer (#1209) 2023-11-22 18:42:21 -05:00
Danny Avila
5e6f8cbce7
fix: Correct Default Model Name in Response Sender and Update Anthropics 🤖 (#1208)
* feat: add claude-2.1 to default anthropic models

* chore: remove console log in NavLinks

* fix: issue with response sender not using model name, change anthropic default value to Claude

* fix: preset will not be selected on edit
2023-11-22 18:29:09 -05:00
Danny Avila
f3402401f1
feat: Order/disable endpoints with ENDPOINTS env var (#1206)
* fix: endpoint will not be selected if disabled

* feat: order and disable endpoints with ENDPOINTS env var

* chore: remove console.log
2023-11-22 13:56:38 -05:00
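For the ENDPOINTS variable introduced in #1206 above, a minimal sketch (not the PR's actual parsing) of how listed endpoints could be enabled and ordered; the endpoint names and default list shown are illustrative.

```js
// Hypothetical parsing of ENDPOINTS, e.g. ENDPOINTS=openAI,anthropic,gptPlugins
const defaultOrder = ['openAI', 'azureOpenAI', 'bingAI', 'chatGPTBrowser', 'gptPlugins', 'google', 'anthropic'];

function getEnabledEndpoints() {
  const raw = process.env.ENDPOINTS;
  if (!raw) {
    return defaultOrder; // nothing set: all endpoints enabled, default order
  }
  // Only listed endpoints are enabled, in the order given; everything else is disabled.
  return raw
    .split(',')
    .map((e) => e.trim())
    .filter((e) => e.length > 0);
}
```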
Danny Avila
317cdd3f77
feat: Vision Support + New UI (#1203)
* feat: add timer duration to showToast, show toast for preset selection

* refactor: replace old /chat/ route with /c/. e2e tests will fail here

* refactor: move typedefs to root of /api/ and add a few to assistant types in TS

* refactor: reorganize data-provider imports, fix dependency cycle, strategize new plan to separate react dependent packages

* feat: add dataService for uploading images

* feat(data-provider): add mutation keys

* feat: file resizing and upload

* WIP: initial API image handling

* fix: catch JSON.parse of localStorage tools

* chore: experimental: use module-alias for absolute imports

* refactor: change temp_file_id strategy

* fix: updating files state by using Map and defining react query callbacks in a way that keeps them during component unmount, initial delete handling

* feat: properly handle file deletion

* refactor: unexpose complete filepath and resize from server for higher fidelity

* fix: make sure resized height, width is saved, catch bad requests

* refactor: use absolute imports

* fix: prevent setOptions from being called more than once for OpenAIClient, made note to fix for PluginsClient

* refactor: import supportsFiles and models vars from schemas

* fix: correctly replace temp file id

* refactor(BaseClient): use absolute imports, pass message 'opts' to buildMessages method, count tokens for nested objects/arrays

* feat: add validateVisionModel to determine if model has vision capabilities

* chore(checkBalance): update jsdoc

* feat: formatVisionMessage: change message content format dependent on role and image_urls passed

* refactor: add usage to File schema, make create and updateFile, correctly set and remove TTL

* feat: working vision support
TODO: file size, type, amount validations, making sure they are styled right, and making sure you can add images from the clipboard/dragging

* feat: clipboard support for uploading images

* feat: handle files on drop to screen, refactor top level view code to Presentation component so the useDragHelpers hook has ChatContext

* fix(Images): replace uploaded images in place

* feat: add filepath validation to protect sensitive files

* fix: ensure correct file_ids are pushed and not the Map key values

* fix(ToastContext): type issue

* feat: add basic file validation

* fix(useDragHelpers): correct context issue with `files` dependency

* refactor: consolidate setErrors logic to setError

* feat: add dialog Image overlay on image click

* fix: close endpoints menu on click

* chore: set detail to auto, make note for configuration

* fix: react warning (button desc. of button)

* refactor: optimize filepath handling, pass file_ids to images for easier re-use

* refactor: optimize image file handling, allow re-using files in regen, pass more file metadata in messages

* feat: lazy loading images including use of upload preview

* fix: SetKeyDialog closing, stopPropagation on Dialog content click

* style(EndpointMenuItem): tighten up the style, fix dark theme showing in lightmode, make menu more ux friendly

* style: change maxheight of all settings textareas to 138px from 300px

* style: better styling for textarea and enclosing buttons

* refactor(PresetItems): swap back edit and delete icons

* feat: make textarea placeholder dynamic to endpoint

* style: show user hover buttons only on hover when message is streaming

* fix: ordered list not going past 9, fix css

* feat: add User/AI labels; style: hide loading spinner

* feat: add back custom footer, change original footer text

* feat: dynamic landing icons based on endpoint

* chore: comment out assistants route

* fix: autoScroll to newest on /c/ view

* fix: Export Conversation on new UI

* style: match message style of official more closely

* ci: fix api jest unit tests, comment out e2e tests for now as they will fail until addressed

* feat: more file validation and use blob in preview field, not filepath, to fix temp deletion

* feat: filefilter for multer

* feat: better AI labels based on custom name, model, and endpoint instead of `ChatGPT`
2023-11-21 20:12:48 -05:00
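For orientation on the formatVisionMessage and "detail to auto" notes in #1203 above, a sketch of the OpenAI-style vision message shape such a helper would produce; the URL and wording are placeholders.

```js
// Illustrative vision payload: text plus image_url content parts in a user message.
const visionMessage = {
  role: 'user',
  content: [
    { type: 'text', text: 'Describe this image.' },
    {
      type: 'image_url',
      // 'detail' left as 'auto' per the commit note about configuring it later.
      image_url: { url: 'https://example.com/uploads/example.webp', detail: 'auto' },
    },
  ],
};
```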
madonchik123
d043a849a9
Added Reverse Proxy for Anthropic (#1106)
* Update AnthropicClient.js

Added BaseURL

* Update .env.example

Added ANTHROPIC_REVERSE_PROXY ENV

* Update initializeClient.js

Added Reverse_Proxy

* Update .env.example

* Update initializeClient.js

* Update AnthropicClient.js

* Update .env.example

Request

* Update initializeClient.js

Made ANTHROPIC_REVERSE_PROXY let instead of const

* fix: lint errors, refactor(initializeClient)

* chore: change casing of reverseProxy

---------

Co-authored-by: Marco Beretta <81851188+Berry-13@users.noreply.github.com>
Co-authored-by: Danny Avila <110412045+danny-avila@users.noreply.github.com>
2023-11-20 20:12:53 -05:00
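A one-line sketch, under the assumption that the client simply falls back to Anthropic's public API when ANTHROPIC_REVERSE_PROXY (added in #1106 above) is unset; the exact wiring lives in initializeClient.js / AnthropicClient.js.

```js
// Assumed fallback behavior; e.g. ANTHROPIC_REVERSE_PROXY=https://my-proxy.example.com
const baseURL = process.env.ANTHROPIC_REVERSE_PROXY || 'https://api.anthropic.com';
```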
Danny Avila
c64970525b
feat: allow any reverse proxy URLs, add proxy support to model fetching (#1192)
* feat: allow any reverse proxy URLs

* feat: add proxy support to model fetching
2023-11-16 18:56:09 -05:00
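For the proxy support added to model fetching in #1192 above, a hedged sketch using https-proxy-agent with axios; the endpoint path, env var name, and export style are assumptions rather than the PR's exact code.

```js
const axios = require('axios');
const { HttpsProxyAgent } = require('https-proxy-agent'); // older versions export the class directly

// Fetch the model list through PROXY when configured, otherwise connect directly.
async function fetchModels(baseURL, apiKey) {
  const httpsAgent = process.env.PROXY ? new HttpsProxyAgent(process.env.PROXY) : undefined;
  const { data } = await axios.get(`${baseURL}/models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
    httpsAgent,
  });
  return data;
}
```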
Danny Avila
bac1fb67d2
WIP: Update UI to match Official Style; Vision and Assistants 👷🏽 (#1190)
* wip: initial client side code

* wip: initial api code

* refactor: export query keys from own module, export assistant hooks

* refactor(SelectDropDown): more customization via props

* feat: create Assistant and render real Assistants

* refactor: major refactor of UI components to allow multi-chat, working alongside CreationPanel

* refactor: move assistant routes to own directory

* fix(CreationHeader): state issue with assistant select

* refactor: style changes for form, fix setSiblingIdx from useChatHelpers to use latestMessageParentId, fix render issue with ChatView and change location

* feat: parseCompactConvo: begin refactor of slimmer JSON payloads between client/api

* refactor(endpoints): add assistant endpoint, also use EModelEndpoint as much as possible

* refactor(useGetConversationsQuery): use object to access query data easily

* fix(MultiMessage): react warning of bad state set, making use of effect during render (instead of useEffect)

* fix(useNewConvo): use correct atom key (index instead of convoId) for reset latestMessageFamily

* refactor: make routing navigation/conversation change simpler

* chore: add removeNullishValues for smaller payloads, remove unused fields, setup frontend pinging of assistant endpoint

* WIP: initial complete assistant run handling

* fix: CreationPanel form correctly setting internal state

* refactor(api/assistants/chat): revise functions to working run handling strategy

* refactor(UI): initial major refactor of ChatForm and options

* feat: textarea hook

* refactor: useAuthRedirect hook and change directory name

* feat: add ChatRoute (/c/), make optionsBar absolute and change on textarea height, add temp header

* feat: match new toggle Nav open button to ChatGPT's

* feat: add OpenAI custom classnames

* feat: useOriginNavigate

* feat: messages loading view

* fix: conversation navigation and effects

* refactor: make toggle change nav opacity

* WIP: new endpoint menu

* feat: NewEndpointsMenu complete

* fix: ensure set key dialog shows on endpoint change, and new conversation resets messages

* WIP: textarea styling fix, add temp footer, create basic file handling component

* feat: image file handling (UI)

* feat: PopOver and ModelSelect in Header, remove GenButtons

* feat: drop file handling

* refactor: bug fixes
use SSE at route level
add opts to useOriginNavigate
delay render of unfinishedMessage to avoid flickering
pass params (convoId) to chatHelpers to set messages query data based on param when the route is new (fixes can't continue convo on /new/)
style(MessagesView): matches height to official
fix(SSE): pass paramId and invalidate convos
style(Message): make bg uniform

* refactor(useSSE): setStorage within setConversation updates

* feat: conversationKeysAtom, allConversationsSelector, update convos query data on created message (if new), correctly handle convo deletion (individual)

* feat: add popover select dropdowns to allow options in header while allowing horizontal scroll for mobile

* style(pluginsSelect): styling changes

* refactor(NewEndpointsMenu): make UI components modular

* feat: Presets complete

* fix: preset editing, make by index

* fix: conversations not setting on initial navigation, fix getMessages() based on query param

* fix: changing preset no longer resets latestMessage

* feat: useOnClickOutside for OptionsPopover and fix bug that causes selection of preset when deleting

* fix: revert /chat/ switchToConvo, also use NewDeleteButton in Convo

* fix: Popover correctly closes on close Popover button using custom condition for useOnClickOutside

* style: new message and nav styling

* style: hover/sibling buttons and preset menu scrolling

* feat: new convo header button

* style(Textarea): minor style changes to textarea buttons

* feat: stop/continue generating and hide hoverbuttons when submitting

* feat: compact AI Provider schemas to make json payloads and db saves smaller

* style: styling changes for consistency on chat route

* fix: created usePresetIndexOptions to prevent bugs between /c/ and /chat/ routes when editing presets, removed redundant code from the new dialog

* chore: make /chat/ route default for now since we still lack full image support
2023-11-16 10:42:24 -05:00
Danny Avila
adbeb46399
v0.6.1 (#1189) 2023-11-16 08:53:09 -05:00
Danny Avila
e383ecba85
chore: bump langchain (#1174) 2023-11-13 11:17:43 -05:00
Danny Avila
c7205c9bb2
feat: Add DALL-E reverse proxy settings and handle errors in image generation (#1173)
* feat: Add DALL-E reverse proxy settings and handle errors in image generation

* fix(ci): avoid importing extra utilities
2023-11-13 11:05:59 -05:00
Danny Avila
5d95433c83
chore: remove jose as Bun now supports JWT 🍞 (#1167)
* chore: remove jose as Bun now supports JWT

* chore: npm audit
2023-11-12 00:44:46 -05:00
Danny Avila
9ca84edb9a
fix(openai/completions): use old method for instruct/davinci/text gen models (#1166) 2023-11-10 10:33:56 -05:00
Danny Avila
d5259e1525
feat(OpenAIClient): AZURE_USE_MODEL_AS_DEPLOYMENT_NAME, AZURE_OPENAI_DEFAULT_MODEL (#1165)
* feat(OpenAIClient): AZURE_USE_MODEL_AS_DEPLOYMENT_NAME, AZURE_OPENAI_DEFAULT_MODEL

* ci: fix initializeClient test
2023-11-10 09:58:17 -05:00
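A rough sketch of how the two variables named in #1165 above could interact; the fallback variable AZURE_OPENAI_API_DEPLOYMENT_NAME and the exact precedence are assumptions.

```js
// Hypothetical deployment-name resolution:
// AZURE_USE_MODEL_AS_DEPLOYMENT_NAME=TRUE  -> deployments are named after models
// AZURE_OPENAI_DEFAULT_MODEL=gpt-3.5-turbo -> model to assume when none is selected
function getDeploymentName(model) {
  const useModelName = /^true$/i.test(process.env.AZURE_USE_MODEL_AS_DEPLOYMENT_NAME || '');
  if (useModelName) {
    return model || process.env.AZURE_OPENAI_DEFAULT_MODEL;
  }
  return process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME; // fixed deployment name otherwise (assumed)
}
```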
Danny Avila
5ab9802aa9
fix(OpenAIClient): use official SDK to identify client and avoid false Rate Limit Error (#1161)
* chore: add eslint ignore unused var pattern

* feat: add extractBaseURL helper for valid OpenAI reverse proxies, with tests

* feat(OpenAIClient): add new chatCompletion using official OpenAI node SDK

* fix(ci): revert change to FORCE_PROMPT condition
2023-11-09 14:04:36 -05:00
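For the switch to the official SDK in #1161 above, a minimal sketch of identifying the client via the openai package with an optional reverse-proxy baseURL; OPENAI_REVERSE_PROXY is used here as an assumed variable name.

```js
const OpenAI = require('openai');

// Sketch: the official SDK handles auth headers, retries, and rate-limit errors itself.
async function chatCompletion(messages) {
  const openai = new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
    baseURL: process.env.OPENAI_REVERSE_PROXY || undefined, // SDK default: https://api.openai.com/v1
  });
  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    messages,
  });
  return response.choices[0].message;
}
```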
Danny Avila
43d7a751d6
feat: allow config of DALL-E-3 System Prompt via env 🎨 (#1150) 2023-11-07 18:52:23 -05:00
Danny Avila
4f3b66756a
refactor: condense dall-e instructions, add style parameter (#1148) 2023-11-06 20:07:01 -05:00
Danny Avila
3a38b4b842
feat: DALL-E-3 support 🎨 (#1147)
* feat: DALL-E-3 support

* fix(ci): lock-in openai dependency for types used in data-provider
2023-11-06 19:45:59 -05:00
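A hedged sketch of a DALL-E-3 request with the official SDK, in the spirit of the style parameter added in #1148 just above; the DALLE_API_KEY fallback and option values are illustrative.

```js
const OpenAI = require('openai');

// Illustrative image generation call; returns the hosted image URL.
async function generateImage(prompt) {
  const openai = new OpenAI({ apiKey: process.env.DALLE_API_KEY || process.env.OPENAI_API_KEY });
  const result = await openai.images.generate({
    model: 'dall-e-3',
    prompt,
    n: 1,
    size: '1024x1024',
    quality: 'standard', // or 'hd'
    style: 'vivid',      // or 'natural'
  });
  return result.data[0].url;
}
```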
Danny Avila
48c087cc06
chore: add token rate support for 11/06 models (#1146)
* chore: update model rates with 11/06 rates

* chore: add new models to env.example for OPENAI_MODELS

* chore: reference actual maxTokensMap in ci tests
2023-11-06 15:26:16 -05:00
Danny Avila
4b63eb5a2c
fix: correct conditional statement in ModelService.js (#1145) 2023-11-06 14:42:20 -05:00
Danny Avila
5f3ecef575
fix(config/scripts): Enhance User Creation and Ban Handling, Standardize Imports (#1144)
* chore: use relative imports for scripts

* fix(create-user): newUser.save() now properly awaited, double-check user creation, use relative imports, catch exception

* fix(ban-user): catch exception, handle case where IP is undefined, proper check of user ban on login
2023-11-06 09:19:43 -05:00
Danny Avila
0886441461
feat(azureOpenAI): Allow Switching Deployment Name by Model Name (#1137)
* feat(azureOpenAI): allow switching deployment name by model name

* ci: add unit tests and throw error on no api key provided to avoid API call

* fix(gptPlugins/initializeClient): check if azure is enabled; ci: add unit tests for gptPlugins/initializeClient

* fix(ci): fix expected error message for partial regex match: unexpected token
2023-11-04 15:03:31 -04:00
Marco Beretta
a7b5639da1
feat: ban-user command (#1121)
* feat: ban-user command

* clean up code

* added duration

* fix(package-lock) revert commit
2023-11-04 11:38:58 -04:00
Danny Avila
8f328ec6a3
feat(Tx): Add timestamps to transaction schema (#1123) 2023-10-30 09:41:07 -04:00
Danny Avila
af69763103
refactor(addImages): use in functions agent response and assure generated images are included in the response (#1120) 2023-10-29 15:36:00 -04:00
Danny Avila
5c1e44eff7
feat(OpenAIClient): Add HttpsProxyAgent to initializeLLM (#1119)
* feat(OpenAIClient): Add HttpsProxyAgent to initializeLLM

* chore: fix linting error in ModelService
2023-10-29 13:20:30 -04:00
Walber Cardoso
ba5ab86037
Update ModelService.js (#1105)
Models failed to be fetched from the OpenAI API when OPENROUTER_API_KEY was set in the .env file
2023-10-26 21:18:03 -04:00
Danny Avila
05c4c7e551
feat: add CUSTOM_FOOTER env variable (#1098) 2023-10-23 21:08:18 -04:00
Danny Avila
00e0091f7a
Release v0.6.0 (#1089) 2023-10-22 14:42:56 -04:00
Danny Avila
70590251d1
chore: add back BrowserOp, make changes to CI env (#1088)
* chore: add back BrowserOp

* chore: make CI env and not DEV env generate refresh tokens every time

* chore: make 'CI' env var capitalization uniform across the app

* chore: change NODE_ENV for playwright to
2023-10-22 13:50:25 -04:00
Danny Avila
6cb561abcf
fix: getLogStores Property and Handle 401 Error from Refresh Token Request (#1084)
* fix(getLogStores): correct wrong prop passed to keyv opts: duration -> ttl

* fix: edge case where we get a blank screen if the initially intercepted 401 error is from a refresh token request; in this case, make explicit to the server that we are retrying from a refreshToken request
2023-10-21 12:39:08 -04:00
Danny Avila
abbc57a49a
fix(formatMessages): Conform Name Property to OpenAI Expected Regex (#1076)
* fix(formatMessages): conform name property to OpenAI expected regex

* fix(ci): prior test was expecting non-sanitized name input
2023-10-19 10:02:20 -04:00
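For the name sanitization in #1076 above, a sketch of conforming a sender name to the pattern OpenAI enforces on the optional message `name` field (recalled here as ^[a-zA-Z0-9_-]{1,64}$; treat the exact pattern as an assumption).

```js
// Replace disallowed characters and truncate to 64 chars so the API accepts the `name` field.
function sanitizeName(name) {
  if (!name) {
    return undefined;
  }
  return name.replace(/[^a-zA-Z0-9_-]/g, '_').slice(0, 64);
}

// sanitizeName('Danny Avila') === 'Danny_Avila'
```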
Danny Avila
ddf56db316
fix(auth/refresh): send 403 res for invalid token to properly invalidate session (#1068) 2023-10-17 08:34:14 -04:00
Danny Avila
377f2c7c19
refactor: add back getTokenCountForResponse for slightly more accurate mapping of responses token counts (#1067) 2023-10-17 06:42:58 -04:00
Danny Avila
352e01f9d0
fix(BingAI): update convo handling with encryptedConversationSignature (#1063) 2023-10-16 13:36:45 -04:00
Danny Avila
241bc68d0f
chore: switch from @waylaidwanderer/chatgpt-api to nodejs-gpt for latest fixes (#1050) 2023-10-14 13:06:50 -04:00
Marco Beretta
909cbb8529
fix: PluginStoreDialog refactor: plugins (#1047)
* fix(PluginStoreDialog): can't search on page 2/3, and reset to page 1 when installing and uninstalling

* var fix

* removed plugins that aren't working

* remove prompt perfect because it isn't working

* fix(PluginStoreItem): set page 1 and reset search when dialog is closed
2023-10-12 18:53:35 -04:00
Danny Avila
5145121eb7
feat(api): initial Redis support; fix(SearchBar): proper debounce (#1039)
* refactor: use keyv for search caching with 1 min expirations

* feat: keyvRedis; chore: bump keyv, bun.lockb, add jsconfig for vscode file resolution

* feat: api/search redis support

* refactor(redis) use ioredis cluster for keyv
fix(OpenID): when redis is configured, use redis memory store for express-session

* fix: revert using uri for keyvredis

* fix(SearchBar): properly debounce search queries, fix weird render behaviors

* refactor: add authentication to search endpoint and show error messages in results

* feat: redis support for violation logs

* fix(logViolation): ensure a number is always being stored in cache

* feat(concurrentLimiter): uses clearPendingReq, clears pendingReq on abort, redis support

* fix(api/search/enable): query only when authenticated

* feat(ModelService): redis support

* feat(checkBan): redis support

* refactor(api/search): consolidate keyv logic

* fix(ci): add default empty value for REDIS_URI

* refactor(keyvRedis): use condition to initialize keyvRedis assignment

* refactor(connectDb): handle disconnected state (should create a new conn)

* fix(ci/e2e): handle case where cleanUp did not successfully run

* fix(getDefaultEndpoint): return endpoint from localStorage if defined and endpointsConfig is default

* ci(e2e): remove afterAll messages as startup/cleanUp will clear messages

* ci(e2e): remove teardown for CI until further notice

* chore: bump playwright/test

* ci(e2e): reinstate teardown as CI issue is specific to github env

* fix(ci): click settings menu trigger by testid
2023-10-11 17:05:47 -04:00
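For the initial Redis support in #1039 above, a minimal Keyv sketch (the PR itself moves to an ioredis cluster and wires several stores); the namespace and TTL here are illustrative.

```js
const Keyv = require('keyv');
const KeyvRedis = require('@keyv/redis');

// Use Redis when REDIS_URI is set; otherwise Keyv falls back to an in-memory map.
const store = process.env.REDIS_URI ? new KeyvRedis(process.env.REDIS_URI) : undefined;
const searchCache = new Keyv({ store, namespace: 'search', ttl: 60 * 1000 }); // ~1 minute expiration

async function cacheSearch(query, results) {
  await searchCache.set(query, results);
  return searchCache.get(query);
}
```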
Danny Avila
2dd545eaa4
fix(OpenAIClient/PluginsClient): allow non-v1 reverse proxy, handle "v1/completions" reverse proxy (#1029)
* fix(OpenAIClient): handle completions request in reverse proxy, also force prompt by env var

* fix(reverseProxyUrl): allow url without /v1/ but add server warning as it will not be compatible with plugins

* fix(ModelService): handle reverse proxy without v1

* refactor: make changes cleaner

* ci(OpenAIClient): add tests for OPENROUTER_API_KEY, FORCE_PROMPT, and reverseProxyUrl handling in setOptions
2023-10-08 16:57:25 -04:00
Danny Avila
d61e44742d
refactor(OpenAPIPlugin): add plugin prompt inspired by ChatGPT Invocator (#1023) 2023-10-07 12:50:16 -04:00
Danny Avila
e7ca40b5ab
feat: bun api support 🥟 (#1021)
* chore: update bun lockfile

* feat: backend api bun support, jose used in bun runtime

* fix: add missing await for signPayload call
2023-10-07 11:16:06 -04:00
Danny Avila
c0e2c58c03 chore(ci): update test to new rates 2023-10-06 14:01:08 -04:00
Danny Avila
09c03b9df0 refactor(Tx): record rate and use Math.ceil instead of Math.floor 2023-10-06 14:01:08 -04:00
Danny Avila
599d70f1de fix(getMultiplier): correct rate for gpt-4 context 2023-10-06 14:01:08 -04:00
Danny Avila
365c39c405
feat: Accurate Token Usage Tracking & Optional Balance (#1018)
* refactor(Chains/llms): allow passing callbacks

* refactor(BaseClient): accurately count completion tokens as generation only

* refactor(OpenAIClient): remove unused getTokenCountForResponse, pass streaming var and callbacks in initializeLLM

* wip: summary prompt tokens

* refactor(summarizeMessages): new cut-off strategy that generates a better summary by adding context from beginning, truncating the middle, and providing the end
wip: draft out relevant providers and variables for token tracing

* refactor(createLLM): make streaming prop false by default

* chore: remove use of getTokenCountForResponse

* refactor(agents): use BufferMemory, as ConversationSummaryBufferMemory token usage is not easy to trace

* chore: remove passing of streaming prop, also console log useful vars for tracing

* feat: formatFromLangChain helper function to count tokens for ChatModelStart

* refactor(initializeLLM): add role for LLM tracing

* chore(formatFromLangChain): update JSDoc

* feat(formatMessages): formats langChain messages into OpenAI payload format

* chore: install openai-chat-tokens

* refactor(formatMessage): optimize conditional langChain logic
fix(formatFromLangChain): fix destructuring

* feat: accurate prompt tokens for ChatModelStart before generation

* refactor(handleChatModelStart): move to callbacks dir, use factory function

* refactor(initializeLLM): rename 'role' to 'context'

* feat(Balance/Transaction): new schema/models for tracking token spend
refactor(Key): factor out model export to separate file

* refactor(initializeClient): add req,res objects to client options

* feat: add-balance script to add to an existing user's token balance
refactor(Transaction): use multiplier map/function, return balance update

* refactor(Tx): update enum for tokenType, return 1 for multiplier if no map match

* refactor(Tx): add fair fallback value multiplier in case the config result is undefined

* refactor(Balance): rename 'tokens' to 'tokenCredits'

* feat: balance check, add tx.js for new tx-related methods and tests

* chore(summaryPrompts): update prompt token count

* refactor(callbacks): pass req, res
wip: check balance

* refactor(Tx): make convoId a String type, fix(calculateTokenValue)

* refactor(BaseClient): add conversationId as client prop when assigned

* feat(RunManager): track LLM runs with manager, track token spend from LLM,
refactor(OpenAIClient): use RunManager to create callbacks, pass user prop to langchain api calls

* feat(spendTokens): helper to spend prompt/completion tokens

* feat(checkBalance): add helper to check, log, deny request if balance doesn't have enough funds
refactor(Balance): static check method to return object instead of boolean now
wip(OpenAIClient): implement use of checkBalance

* refactor(initializeLLM): add token buffer to assure summary isn't generated when subsequent payload is too large
refactor(OpenAIClient): add checkBalance
refactor(createStartHandler): add checkBalance

* chore: remove prompt and completion token logging from route handler

* chore(spendTokens): add JSDoc

* feat(logTokenCost): record transactions for basic api calls

* chore(ask/edit): invoke getResponseSender only once per API call

* refactor(ask/edit): pass promptTokens to getIds and include in abort data

* refactor(getIds -> getReqData): rename function

* refactor(Tx): increase value if incomplete message

* feat: record tokenUsage when message is aborted

* refactor: subtract tokens when payload includes function_call

* refactor: add namespace for token_balance

* fix(spendTokens): only execute if corresponding token type amounts are defined

* refactor(checkBalance): throws Error if not enough token credits

* refactor(runTitleChain): pass and use signal, spread object props in create helpers, and use 'call' instead of 'run'

* fix(abortMiddleware): circular dependency, and default to empty string for completionTokens

* fix: properly cancel title requests when there isn't enough tokens to generate

* feat(predictNewSummary): custom chain for summaries to allow signal passing
refactor(summaryBuffer): use new custom chain

* feat(RunManager): add getRunByConversationId method, refactor: remove run and throw llm error on handleLLMError

* refactor(createStartHandler): if summary, add error details to runs

* fix(OpenAIClient): support aborting from summarization & showing error to user
refactor(summarizeMessages): remove unnecessary operations counting summaryPromptTokens and note for alternative, pass signal to summaryBuffer

* refactor(logTokenCost -> recordTokenUsage): rename

* refactor(checkBalance): include promptTokens in errorMessage

* refactor(checkBalance/spendTokens): move to models dir

* fix(createLanguageChain): correctly pass config

* refactor(initializeLLM/title): add tokenBuffer of 150 for balance check

* refactor(openAPIPlugin): pass signal and memory, filter functions by the one being called

* refactor(createStartHandler): add error to run if context is plugins as well

* refactor(RunManager/handleLLMError): throw error immediately if plugins, don't remove run

* refactor(PluginsClient): pass memory and signal to tools, cleanup error handling logic

* chore: use absolute equality for addTitle condition

* refactor(checkBalance): move checkBalance to execute after userMessage and tokenCounts are saved, also make conditional

* style: icon changes to match official

* fix(BaseClient): getTokenCountForResponse -> getTokenCount

* fix(formatLangChainMessages): add kwargs as fallback prop from lc_kwargs, update JSDoc

* refactor(Tx.create): does not update balance if CHECK_BALANCE is not enabled

* fix(e2e/cleanUp): cleanup new collections, import all model methods from index

* fix(config/add-balance): add uncaughtException listener

* fix: circular dependency

* refactor(initializeLLM/checkBalance): append new generations to errorMessage if cost exceeds balance

* fix(handleResponseMessage): only record token usage in this method if not error and completion is not skipped

* fix(createStartHandler): correct condition for generations

* chore: bump postcss due to moderate severity vulnerability

* chore: bump zod due to low severity vulnerability

* chore: bump openai & data-provider version

* feat(types): OpenAI Message types

* chore: update bun lockfile

* refactor(CodeBlock): add error block formatting

* refactor(utils/Plugin): factor out formatJSON and cn to separate files (json.ts and cn.ts), add extractJSON

* chore(logViolation): delete user_id after error is logged

* refactor(getMessageError -> Error): change to React.FC, add token_balance handling, use extractJSON to determine JSON instead of regex

* fix(DALL-E): use latest openai SDK

* chore: reorganize imports, fix type issue

* feat(server): add balance route

* fix(api/models): add auth

* feat(data-provider): /api/balance query

* feat: show balance if checking is enabled, refetch on final message or error

* chore: update docs, .env.example with token_usage info, add balance script command

* fix(Balance): fallback to empty obj for balance query

* style: slight adjustment of balance element

* docs(token_usage): add PR notes
2023-10-05 18:34:10 -04:00
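Purely illustrative arithmetic for the token-credit model described in #1018 above; the multiplier values are made up, and only the shape (per-token multipliers, Math.ceil rounding, a fallback of 1 when no rate matches) mirrors the commit notes.

```js
// Hypothetical rates: credits charged per prompt/completion token for each model.
const multipliers = {
  'gpt-3.5-turbo': { prompt: 1.5, completion: 2 },
  'gpt-4': { prompt: 30, completion: 60 },
};

// Cost of one request in token credits; unknown models fall back to a multiplier of 1.
function tokenCost({ model, promptTokens = 0, completionTokens = 0 }) {
  const rate = multipliers[model] || { prompt: 1, completion: 1 };
  return Math.ceil(promptTokens * rate.prompt) + Math.ceil(completionTokens * rate.completion);
}

// tokenCost({ model: 'gpt-4', promptTokens: 1000, completionTokens: 200 }) === 42000
```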
Danny Avila
317a1bd8da
feat: ConversationSummaryBufferMemory (#973)
* refactor: pass model in message edit payload, use encoder in standalone util function

* feat: add summaryBuffer helper

* refactor(api/messages): use new countTokens helper and add auth middleware at top

* wip: ConversationSummaryBufferMemory

* refactor: move pre-generation helpers to prompts dir

* chore: remove console log

* chore: remove test as payload will no longer carry tokenCount

* chore: update getMessagesWithinTokenLimit JSDoc

* refactor: optimize getMessagesForConversation and also break on summary, feat(ci): getMessagesForConversation tests

* refactor(getMessagesForConvo): count '00000000-0000-0000-0000-000000000000' as root message

* chore: add newer model to token map

* fix: condition was pointing to a prop of the array instead of the message prop

* refactor(BaseClient): use object for refineMessages param, rename 'summary' to 'summaryMessage', add previous_summary
refactor(getMessagesWithinTokenLimit): replace text and tokenCount if should summarize, summary, and summaryTokenCount are present
fix/refactor(handleContextStrategy): use the right comparison length for context diff, and replace payload first message when a summary is present

* chore: log previous_summary if debugging

* refactor(formatMessage): assume if role is defined that it's a valid value

* refactor(getMessagesWithinTokenLimit): remove summary logic
refactor(handleContextStrategy): add usePrevSummary logic in case only summary was pruned
refactor(loadHistory): initial message query will return all ordered messages but keep track of the latest summary
refactor(getMessagesForConversation): use object for single param, edit jsdoc, edit all files using the method
refactor(ChatGPTClient): order messages before buildPrompt is called, TODO: add convoSumBuffMemory logic

* fix: undefined handling and summarizing only when shouldRefineContext is true

* chore(BaseClient): fix test results omitting system role for summaries and test edge case

* chore: export summaryBuffer from index file

* refactor(OpenAIClient/BaseClient): move refineMessages to subclass, implement LLM initialization for summaryBuffer

* feat: add OPENAI_SUMMARIZE to enable summarizing, refactor: rename client prop 'shouldRefineContext' to 'shouldSummarize', change contextStrategy value to 'summarize' from 'refine'

* refactor: rename refineMessages method to summarizeMessages for clarity

* chore: clarify summary future intent in .env.example

* refactor(initializeLLM): handle case for either 'model' or 'modelName' being passed

* feat(gptPlugins): enable summarization for plugins

* refactor(gptPlugins): utilize new initializeLLM method and formatting methods for messages, use payload array for currentMessages and assign pastMessages sooner

* refactor(agents): use ConversationSummaryBufferMemory for both agent types

* refactor(formatMessage): optimize original method for langchain, add helper function for langchain messages, add JSDocs and tests

* refactor(summaryBuffer): add helper to createSummaryBufferMemory, and use new formatting helpers

* fix: forgot to spread formatMessages; also took the opportunity to pluralize the filename

* refactor: pass memory to tools, namely openapi specs. not used and may never be used by new method but added for testing

* ci(formatMessages): add more exhaustive checks for langchain messages

* feat: add debug env var for OpenAI

* chore: delete unnecessary comments

* chore: add extra note about summary feature

* fix: remove tokenCount from payload instructions

* fix: test fail

* fix: only pass instructions to payload when defined or not empty object

* refactor: fromPromptMessages is deprecated, use renamed method fromMessages

* refactor: use 'includes' instead of 'startsWith' for extended OpenRouter compatibility

* fix(PluginsClient.buildPromptBody): handle undefined message strings

* chore: log langchain titling error

* feat: getModelMaxTokens helper

* feat: tokenSplit helper

* feat: summary prompts updated

* fix: optimize _CUT_OFF_SUMMARIZER prompt

* refactor(summaryBuffer): use custom summary prompt, allow prompt to be passed, pass humanPrefix and aiPrefix to memory, along with any future variables, rename messagesToRefine to context

* fix(summaryBuffer): handle edge case where messagesToRefine exceeds summary context,
refactor(BaseClient): allow custom maxContextTokens to be passed to getMessagesWithinTokenLimit, add defined check before unshifting summaryMessage, update shouldSummarize based on this
refactor(OpenAIClient): use getModelMaxTokens, use cut-off message method for summary if no messages were left after pruning

* fix(handleContextStrategy): handle case where incoming prompt is bigger than model context

* chore: rename refinedContent to splitText

* chore: remove unnecessary debug log
2023-09-26 21:02:28 -04:00
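A small LangChain sketch of the ConversationSummaryBufferMemory named in #973 above; the import paths reflect the langchain package as of late 2023, and the option values are illustrative.

```js
const { ChatOpenAI } = require('langchain/chat_models/openai');
const { ConversationSummaryBufferMemory } = require('langchain/memory');

// Keeps recent messages verbatim and rolls older ones into an LLM-generated summary
// once the buffer exceeds maxTokenLimit.
const memory = new ConversationSummaryBufferMemory({
  llm: new ChatOpenAI({ modelName: 'gpt-3.5-turbo', temperature: 0 }),
  maxTokenLimit: 1500,
  returnMessages: true,
});
```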
Marco Beretta
1bf6c259b9
feat: Logins log for Fail2Ban (#986)
* login logs and output

* fix(merge)

* fix(winston) uninstall

* fix(winston) installation in api

* fix(logger) new module
2023-09-24 12:18:10 -04:00