* fix(balanceQuery/modelsQuery): request only when authenticated
* style: match new chat capitalization to official
* fix(modelsQuery): update selected model optimistically
* ci: update e2e changes, disable title in ci env
* fix(ci): get new chat button by data-testid and not text
* fix(OpenAIClient): handle completions request in reverse proxy, also force prompt by env var
* fix(reverseProxyUrl): allow URL without /v1/ but add a server warning, as it will not be compatible with plugins
* fix(ModelService): handle reverse proxy without v1
* refactor: make changes cleaner
* ci(OpenAIClient): add tests for OPENROUTER_API_KEY, FORCE_PROMPT, and reverseProxyUrl handling in setOptions
* refactor(Chains/llms): allow passing callbacks
* refactor(BaseClient): accurately count completion tokens as generation only
* refactor(OpenAIClient): remove unused getTokenCountForResponse, pass streaming var and callbacks in initializeLLM
* wip: summary prompt tokens
* refactor(summarizeMessages): new cut-off strategy that generates a better summary by adding context from beginning, truncating the middle, and providing the end
wip: draft out relevant providers and variables for token tracing
* refactor(createLLM): make streaming prop false by default
* chore: remove use of getTokenCountForResponse
* refactor(agents): use BufferMemory as ConversationSummaryBufferMemory token usage not easy to trace
* chore: remove passing of streaming prop, also console log useful vars for tracing
* feat: formatFromLangChain helper function to count tokens for ChatModelStart
* refactor(initializeLLM): add role for LLM tracing
* chore(formatFromLangChain): update JSDoc
* feat(formatMessages): formats langChain messages into OpenAI payload format
* chore: install openai-chat-tokens
* refactor(formatMessage): optimize conditional langChain logic
fix(formatFromLangChain): fix destructuring
* feat: accurate prompt tokens for ChatModelStart before generation
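A minimal sketch of the idea behind the formatting/token-counting entries above: convert langchain messages into the OpenAI chat-payload shape, then estimate prompt tokens with `openai-chat-tokens` before the ChatModelStart generation runs. The role map, helper names, and the assumption that the package's `promptTokensEstimate` export is available via CommonJS are illustrative, not the exact project code:

```js
// Sketch (not the exact project code): estimate prompt tokens for a
// ChatModelStart callback by formatting langchain messages into the
// OpenAI payload format first, then counting with openai-chat-tokens.
const { promptTokensEstimate } = require('openai-chat-tokens');

// Hypothetical formatter: langchain message -> { role, content } payload item.
const roleMap = { human: 'user', ai: 'assistant', system: 'system' };
function formatFromLangChain(message) {
  // langchain messages expose their role via _getType() and text via .content
  const role = roleMap[message._getType()] ?? 'user';
  return { role, content: message.content };
}

// Called from a handleChatModelStart-style callback before generation.
function countPromptTokens(lcMessages, functions = undefined) {
  const messages = lcMessages.map(formatFromLangChain);
  return promptTokensEstimate({ messages, functions });
}

module.exports = { formatFromLangChain, countPromptTokens };
```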
* refactor(handleChatModelStart): move to callbacks dir, use factory function
* refactor(initializeLLM): rename 'role' to 'context'
* feat(Balance/Transaction): new schema/models for tracking token spend
refactor(Key): factor out model export to separate file
* refactor(initializeClient): add req,res objects to client options
* feat: add-balance script to add to an existing user's token balance
refactor(Transaction): use multiplier map/function, return balance update
* refactor(Tx): update enum for tokenType, return 1 for multiplier if no map match
* refactor(Tx): add a fair fallback multiplier value in case the config result is undefined
* refactor(Balance): rename 'tokens' to 'tokenCredits'
* feat: balance check, add tx.js for new tx-related methods and tests
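To illustrate the multiplier-map approach described in the Tx entries above, here is a rough sketch; the model keys, rates, and fallback value are placeholders rather than the project's actual pricing table:

```js
// Rough illustration of a token-value multiplier map with a fallback rate.
// The model keys and rates below are placeholders, not real pricing.
const tokenValues = {
  'gpt-3.5-turbo': { prompt: 1.5, completion: 2 },
  'gpt-4': { prompt: 30, completion: 60 },
};

const defaultRate = 6; // fair fallback when the config lookup is undefined

function getMultiplier({ model, tokenType }) {
  const rates = tokenValues[model];
  if (!rates || rates[tokenType] == null) {
    return defaultRate; // return a sane value instead of breaking the charge
  }
  return rates[tokenType];
}

// tokenType is 'prompt' or 'completion'; rawAmount is a (negative) token count.
function calculateTokenValue({ model, tokenType, rawAmount }) {
  return rawAmount * getMultiplier({ model, tokenType });
}

module.exports = { getMultiplier, calculateTokenValue };
```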
* chore(summaryPrompts): update prompt token count
* refactor(callbacks): pass req, res
wip: check balance
* refactor(Tx): make convoId a String type, fix(calculateTokenValue)
* refactor(BaseClient): add conversationId as client prop when assigned
* feat(RunManager): track LLM runs with manager, track token spend from LLM,
refactor(OpenAIClient): use RunManager to create callbacks, pass user prop to langchain api calls
* feat(spendTokens): helper to spend prompt/completion tokens
* feat(checkBalance): add helper to check, log, deny request if balance doesn't have enough funds
refactor(Balance): static check method to return object instead of boolean now
wip(OpenAIClient): implement use of checkBalance
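A hedged sketch of the checkBalance → spendTokens flow these entries describe; `Balance` and `Transaction` stand in for the real models, and their method names, import path, and error payload shape are hypothetical:

```js
// Hypothetical model imports; the real paths and method names differ.
const { Balance, Transaction } = require('~/models');

async function checkBalance({ user, model, promptTokens }) {
  // Hypothetical static check that returns an object instead of a boolean.
  const { canSpend, balance, tokenCost } = await Balance.check({
    user,
    model,
    amount: promptTokens,
  });

  if (!canSpend) {
    // Deny the request when there aren't enough token credits.
    throw new Error(
      JSON.stringify({ type: 'token_balance', balance, tokenCost, promptTokens }),
    );
  }
  return true;
}

async function spendTokens({ user, model, conversationId }, { promptTokens, completionTokens }) {
  // Only record a transaction when the corresponding token amount is defined.
  if (promptTokens !== undefined) {
    await Transaction.create({ user, model, conversationId, tokenType: 'prompt', rawAmount: -promptTokens });
  }
  if (completionTokens !== undefined) {
    await Transaction.create({ user, model, conversationId, tokenType: 'completion', rawAmount: -completionTokens });
  }
}

module.exports = { checkBalance, spendTokens };
```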
* refactor(initializeLLM): add token buffer to ensure a summary isn't generated when the subsequent payload is too large
refactor(OpenAIClient): add checkBalance
refactor(createStartHandler): add checkBalance
* chore: remove prompt and completion token logging from route handler
* chore(spendTokens): add JSDoc
* feat(logTokenCost): record transactions for basic api calls
* chore(ask/edit): invoke getResponseSender only once per API call
* refactor(ask/edit): pass promptTokens to getIds and include in abort data
* refactor(getIds -> getReqData): rename function
* refactor(Tx): increase value if incomplete message
* feat: record tokenUsage when message is aborted
* refactor: subtract tokens when payload includes function_call
* refactor: add namespace for token_balance
* fix(spendTokens): only execute if corresponding token type amounts are defined
* refactor(checkBalance): throws Error if not enough token credits
* refactor(runTitleChain): pass and use signal, spread object props in create helpers, and use 'call' instead of 'run'
* fix(abortMiddleware): circular dependency, and default to empty string for completionTokens
* fix: properly cancel title requests when there aren't enough tokens to generate
* feat(predictNewSummary): custom chain for summaries to allow signal passing
refactor(summaryBuffer): use new custom chain
* feat(RunManager): add getRunByConversationId method, refactor: remove run and throw llm error on handleLLMError
* refactor(createStartHandler): if summary, add error details to runs
* fix(OpenAIClient): support aborting from summarization & showing error to user
refactor(summarizeMessages): remove unnecessary operations counting summaryPromptTokens and note for alternative, pass signal to summaryBuffer
* refactor(logTokenCost -> recordTokenUsage): rename
* refactor(checkBalance): include promptTokens in errorMessage
* refactor(checkBalance/spendTokens): move to models dir
* fix(createLanguageChain): correctly pass config
* refactor(initializeLLM/title): add tokenBuffer of 150 for balance check
* refactor(openAPIPlugin): pass signal and memory, filter functions by the one being called
* refactor(createStartHandler): add error to run if context is plugins as well
* refactor(RunManager/handleLLMError): throw error immediately if plugins, don't remove run
* refactor(PluginsClient): pass memory and signal to tools, cleanup error handling logic
* chore: use strict equality for addTitle condition
* refactor(checkBalance): move checkBalance to execute after userMessage and tokenCounts are saved, also make conditional
* style: icon changes to match official
* fix(BaseClient): getTokenCountForResponse -> getTokenCount
* fix(formatLangChainMessages): add kwargs as fallback prop from lc_kwargs, update JSDoc
* refactor(Tx.create): does not update balance if CHECK_BALANCE is not enabled
* fix(e2e/cleanUp): cleanup new collections, import all model methods from index
* fix(config/add-balance): add uncaughtException listener
* fix: circular dependency
* refactor(initializeLLM/checkBalance): append new generations to errorMessage if cost exceeds balance
* fix(handleResponseMessage): only record token usage in this method if not error and completion is not skipped
* fix(createStartHandler): correct condition for generations
* chore: bump postcss due to moderate severity vulnerability
* chore: bump zod due to low severity vulnerability
* chore: bump openai & data-provider version
* feat(types): OpenAI Message types
* chore: update bun lockfile
* refactor(CodeBlock): add error block formatting
* refactor(utils/Plugin): factor out formatJSON and cn to separate files (json.ts and cn.ts), add extractJSON
* chore(logViolation): delete user_id after error is logged
* refactor(getMessageError -> Error): change to React.FC, add token_balance handling, use extractJSON to determine JSON instead of regex
* fix(DALL-E): use latest openai SDK
* chore: reorganize imports, fix type issue
* feat(server): add balance route
* fix(api/models): add auth
* feat(data-provider): /api/balance query
* feat: show balance if checking is enabled, refetch on final message or error
* chore: update docs, .env.example with token_usage info, add balance script command
* fix(Balance): fallback to empty obj for balance query
* style: slight adjustment of balance element
* docs(token_usage): add PR notes
* refactor: pass model in message edit payload, use encoder in standalone util function
* feat: add summaryBuffer helper
* refactor(api/messages): use new countTokens helper and add auth middleware at top
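A minimal sketch of a standalone countTokens helper like the one referenced above, using the `tiktoken` package; the default encoding name and the helper's exact signature are assumptions:

```js
// Minimal countTokens helper using tiktoken (the WASM port).
const { get_encoding } = require('tiktoken');

function countTokens(text = '', encodingName = 'cl100k_base') {
  const encoder = get_encoding(encodingName);
  try {
    return encoder.encode(text).length;
  } finally {
    // tiktoken encoders are WASM-backed and should be freed explicitly.
    encoder.free();
  }
}

module.exports = countTokens;
```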
* wip: ConversationSummaryBufferMemory
* refactor: move pre-generation helpers to prompts dir
* chore: remove console log
* chore: remove test as payload will no longer carry tokenCount
* chore: update getMessagesWithinTokenLimit JSDoc
* refactor: optimize getMessagesForConversation and also break on summary, feat(ci): getMessagesForConversation tests
* refactor(getMessagesForConvo): count '00000000-0000-0000-0000-000000000000' as root message
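The two entries above describe how the ordered conversation thread is rebuilt; a simplified sketch of that parent-chain walk follows. The parameter names and the summary-stop behavior are assumptions based on the entries, not a copy of the project method:

```js
// Illustrative sketch: walk a message tree up the parentMessageId chain,
// stopping at the nil-UUID root or at a summary message.
const ROOT_PARENT_ID = '00000000-0000-0000-0000-000000000000';

function getMessagesForConversation({ messages, parentMessageId, summary = false }) {
  const byId = new Map(messages.map((m) => [m.messageId, m]));
  const ordered = [];
  let currentId = parentMessageId;

  while (currentId && currentId !== ROOT_PARENT_ID) {
    const message = byId.get(currentId);
    if (!message) break;
    ordered.unshift(message); // oldest first
    // Optionally stop once a summary message is reached, since it already
    // condenses everything that came before it.
    if (summary && message.summary) break;
    currentId = message.parentMessageId;
  }

  return ordered;
}

module.exports = getMessagesForConversation;
```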
* chore: add newer model to token map
* fix: condition was pointing to a prop of the array instead of the message prop
* refactor(BaseClient): use object for refineMessages param, rename 'summary' to 'summaryMessage', add previous_summary
refactor(getMessagesWithinTokenLimit): replace text and tokenCount if should summarize, summary, and summaryTokenCount are present
fix/refactor(handleContextStrategy): use the right comparison length for context diff, and replace payload first message when a summary is present
* chore: log previous_summary if debugging
* refactor(formatMessage): assume if role is defined that it's a valid value
* refactor(getMessagesWithinTokenLimit): remove summary logic
refactor(handleContextStrategy): add usePrevSummary logic in case only summary was pruned
refactor(loadHistory): initial message query will return all ordered messages but keep track of the latest summary
refactor(getMessagesForConversation): use object for single param, edit jsdoc, edit all files using the method
refactor(ChatGPTClient): order messages before buildPrompt is called, TODO: add convoSumBuffMemory logic
* fix: undefined handling and summarizing only when shouldRefineContext is true
* chore(BaseClient): fix test results omitting system role for summaries and test edge case
* chore: export summaryBuffer from index file
* refactor(OpenAIClient/BaseClient): move refineMessages to subclass, implement LLM initialization for summaryBuffer
* feat: add OPENAI_SUMMARIZE to enable summarizing, refactor: rename client prop 'shouldRefineContext' to 'shouldSummarize', change contextStrategy value to 'summarize' from 'refine'
* refactor: rename refineMessages method to summarizeMessages for clarity
* chore: clarify summary future intent in .env.example
* refactor(initializeLLM): handle case for either 'model' or 'modelName' being passed
* feat(gptPlugins): enable summarization for plugins
* refactor(gptPlugins): utilize new initializeLLM method and formatting methods for messages, use payload array for currentMessages and assign pastMessages sooner
* refactor(agents): use ConversationSummaryBufferMemory for both agent types
* refactor(formatMessage): optimize original method for langchain, add helper function for langchain messages, add JSDocs and tests
* refactor(summaryBuffer): add helper to createSummaryBufferMemory, and use new formatting helpers
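A hedged sketch of a createSummaryBufferMemory helper like the one above, built on langchain's ConversationSummaryBufferMemory (pre-0.1 package layout assumed). The option names passed through here mirror these entries, but the exact wiring, prompt support, and defaults in the installed langchain version may differ:

```js
// Sketch, assuming the pre-0.1 'langchain/memory' entry point.
const { ConversationSummaryBufferMemory, ChatMessageHistory } = require('langchain/memory');

function createSummaryBufferMemory({
  llm,                 // an initialized chat model (see initializeLLM above)
  prompt,              // optional custom summary prompt, if supported
  messages = [],       // prior context, already formatted as langchain messages
  humanPrefix = 'User',
  aiPrefix = 'Assistant',
  maxTokenLimit = 1500,
}) {
  return new ConversationSummaryBufferMemory({
    llm,
    prompt,
    humanPrefix,
    aiPrefix,
    maxTokenLimit,
    returnMessages: true,
    // Seed the buffer with existing history so the summary reflects it.
    chatHistory: new ChatMessageHistory(messages),
  });
}

module.exports = { createSummaryBufferMemory };
```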
* fix: forgot to spread formatMessages; also took the opportunity to pluralize the filename
* refactor: pass memory to tools, namely OpenAPI specs; not used (and may never be used) by the new method, but added for testing
* ci(formatMessages): add more exhaustive checks for langchain messages
* feat: add debug env var for OpenAI
* chore: delete unnecessary comments
* chore: add extra note about summary feature
* fix: remove tokenCount from payload instructions
* fix: failing test
* fix: only pass instructions to payload when defined and not an empty object
* refactor: fromPromptMessages is deprecated, use renamed method fromMessages
* refactor: use 'includes' instead of 'startsWith' for extended OpenRouter compatibility
* fix(PluginsClient.buildPromptBody): handle undefined message strings
* chore: log langchain titling error
* feat: getModelMaxTokens helper
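An illustrative version of a getModelMaxTokens helper: the token map below is a small, possibly outdated sample, and partial matching via `includes` (see the OpenRouter-compatibility entry above) is an assumption about how model ids like 'openai/gpt-4' get resolved:

```js
// Illustrative max-tokens map; values are sample context sizes, not canonical.
const maxTokensMap = {
  'gpt-4-32k': 32767,
  'gpt-4': 8191,
  'gpt-3.5-turbo-16k': 15999,
  'gpt-3.5-turbo': 4095,
};

function getModelMaxTokens(modelName, fallback = 4095) {
  if (!modelName) return fallback;
  // Check longer keys first so 'gpt-4-32k' wins over 'gpt-4'.
  const keys = Object.keys(maxTokensMap).sort((a, b) => b.length - a.length);
  const match = keys.find((key) => modelName.includes(key));
  return match ? maxTokensMap[match] : fallback;
}

module.exports = getModelMaxTokens;
```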
* feat: tokenSplit helper
* feat: summary prompts updated
* fix: optimize _CUT_OFF_SUMMARIZER prompt
* refactor(summaryBuffer): use custom summary prompt, allow prompt to be passed, pass humanPrefix and aiPrefix to memory, along with any future variables, rename messagesToRefine to context
* fix(summaryBuffer): handle edge case where messagesToRefine exceeds summary context,
refactor(BaseClient): allow custom maxContextTokens to be passed to getMessagesWithinTokenLimit, add defined check before unshifting summaryMessage, update shouldSummarize based on this
refactor(OpenAIClient): use getModelMaxTokens, use cut-off message method for summary if no messages were left after pruning
* fix(handleContextStrategy): handle case where incoming prompt is bigger than model context
* chore: rename refinedContent to splitText
* chore: remove unnecessary debug log
* feat(localization): add Korean language support
* feat(Nav): add Korean language option to General Settings (#20)
* feat(localization): add Korean language support
* refactor(localization): remove unused translations in Korean language file
* feat(localization): update Korean translations
* refactor(localization): update Korean translations in Ko.tsx
* fix(Icon/types): pick types from TMessage and TConversation
* refactor: make abortScroll a global recoil state and change props/types for useScrollToRef
* refactor(Message): invoke abort setter onTouchMove and onWheel, refactor(Messages): remove redundancy, reset abortScroll when scroll button is clicked
* feat: add option to disable titling as well as decide what model to use for OpenAI titling
refactor: truncate conversation text so it caps around 200 tokens for titling requests, optimize some of the title prompts
* feat: disable bing titling with TITLE_CONVO as well
* added auto-detect language
* fix(TranslationSelect): now saves the selected language between sessions
* fix(LangSelector.spec)
* fix(conflict)
* fix(Swedish) sv-SE
* Added endpoint picture
* plugin icon fix & new minimalist icon
* changed from BingAIMinimalIcon to BingAIMinimalistIcon
* fix(Conversation) reduced the space between the icon and the title
* refactor(getIcon & getMinimalIcon)
* moved IconProps in ~/common
* refactor(getIcon & getMinimalistIcon) from switch/case to map
* fix(getIcon.tsx) renamed to Icon
* renamed all from Minimalist to Minimal
* feat: add global teardown to remove test data and add registration/log-out to auth flow
* refactor(models/Conversation): index user field and add JSDoc to deleteConvos
* refactor: add user index to message schema and ensure user is saved to each Message
* refactor: add user to each saveMessage call
* fix: handle case where title is null in zod schema
* feat(e2e): ensure messages are deleted on cleanUp
* fix: set last convo for all endpoints on conversation update
* fix: enable registration for CI env
* chore(ChatGPTClient.js): add support for OpenRouter API
chore(OpenAIClient.js): add support for OpenRouter API
* chore: comment out token debugging
* chore: add back streamResult assignment
* chore: remove double condition/assignment from merging
* refactor(routes/endpoints): -> controller/services logic
* feat: add openrouter model fetching
* chore: remove unused endpointsConfig in cleanupPreset function
* refactor: separate models concern from endpointsConfig
* refactor(data-provider): add TModels type and make TEndpointsConfig adaptible to new endpoint keys
* refactor: complete models endpoint service in data-provider
* refactor: onMutate for refreshToken and login, invalidate models query
* feat: complete models endpoint logic for frontend
* chore: remove requireJwtAuth from /api/endpoints and /api/models as not implemented yet
* fix: endpoint will not be overwritten and will instead use the active value
* feat: openrouter support for plugins
* chore(EndpointOptionsDialog): remove unused recoil value
* refactor(schemas/parseConvo): add handling of secondaryModels: use the first of the defined secondary models (with the last selected one first), or default to the convo's secondary model value
* refactor: remove hooks from store and move to hooks
refactor(switchToConversation): make switchToConversation use latest recoil state, which is necessary to get the most up-to-date models list, replace wrapper function
refactor(getDefaultConversation): factor out logic into 3 pieces to reduce complexity.
* fix: backend tests
* feat: optimistic update by calling newConvo when models are fetched
* feat: openrouter support for titling convos
* feat: cache models fetch
* chore: add missing dep to AuthContext useEffect
* chore: fix useTimeout types
* chore: delete old getDefaultConvo file
* chore: remove newConvo logic from Root, remove console log from api models caching
* chore: ensure bun is used for building in b:client script
* fix: default endpoint will not default to null on a completely fresh login (no localStorage/cookies)
* chore: add openrouter docs to free_ai_apis.md and .env.example
* chore: remove openrouter console logs
* feat: add debugging env variable for Plugins
* Language translation: Swedish
* fix: remove unwanted row in Sv translation
remove com_nav_language
---------
Co-authored-by: Marcus Nätteldal <marcus.natteldal@ltu.se>
* chore: clean up client dependencies 🧹
* chore: replace joi with zod and remove unused user validator
* chore: move dep from root to api, cleanup other unused api deps
* chore: remove unused dev dep
* chore: update bun lockfile
* fix: bun scripts
* chore: add bun flag to update script
* chore: remove legacy webpack + babel dev deps
* chore: add back dev deps needed for frontend unit testing
* fix(validators): make schemas as expected and more robust with a full test suite of edge cases
* chore: remove axios from root package, remove path from api, update bun
* refactor: require Auth middleware in route index files
* feat: concurrent message limiter
* feat: complete concurrent message limiter with caching
* refactor: SSE response methods separated from handleText
* fix(abortMiddleware): fix req and res order to standard, use endpointOption in req.body
* chore: minor name changes
* refactor: add isUUID condition to saveMessage
* fix(concurrentLimiter): logic correctly handles the max number of concurrent messages and res closing/finalization
* chore: bump keyv and remove console.log from Message
* fix(concurrentLimiter): ensure messages are only saved in later message children
* refactor(concurrentLimiter): use KeyvFile instead, could make other stores configurable in the future
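A minimal sketch of the KeyvFile-backed store mentioned above; the filename and namespace are placeholders, and other Keyv stores could be swapped in later as the entry suggests:

```js
// File-backed Keyv store for the concurrent-message limiter (sketch).
const Keyv = require('keyv');
const { KeyvFile } = require('keyv-file');

const pendingRequests = new Keyv({
  namespace: 'pendingRequests',
  store: new KeyvFile({ filename: './data/pending_requests.json' }),
});

module.exports = pendingRequests;
```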
* feat: add denyRequest function for error responses
* feat(utils): add isStringTruthy function
Introduce the isStringTruthy function to the utilities module to check if a string value is a case-insensitive match for 'true'
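The helper described above, as a short sketch (it is later renamed to isTrue and then isEnabled in subsequent entries); the env-var usage shown in the comment is illustrative:

```js
// Case-insensitive check that a string value equals 'true'.
function isStringTruthy(value) {
  return typeof value === 'string' && value.trim().toLowerCase() === 'true';
}

// Example: gate an optional feature on an env var.
// if (isStringTruthy(process.env.LIMIT_MESSAGE_IP)) { ... }

module.exports = isStringTruthy;
```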
* feat: add optional message rate limiters by IP and userId
* feat: add optional message rate limiters by IP and userId to edit route
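A hedged sketch of optional per-IP and per-user message rate limiters using `express-rate-limit`; the env var names, window, limits, and error message are illustrative, not the project's configuration:

```js
// Optional message rate limiters keyed by IP or by authenticated user id.
const rateLimit = require('express-rate-limit');

const windowMs = (Number(process.env.MESSAGE_IP_WINDOW) || 1) * 60 * 1000;
const max = Number(process.env.MESSAGE_IP_MAX) || 40;

const messageIpLimiter = rateLimit({
  windowMs,
  max,
  keyGenerator: (req) => req.ip,
  handler: (req, res) =>
    res.status(429).json({ message: 'Too many messages, slow down.' }),
});

const messageUserLimiter = rateLimit({
  windowMs,
  max,
  // Fall back to IP when the request is unauthenticated.
  keyGenerator: (req) => req.user?.id ?? req.ip,
  handler: (req, res) =>
    res.status(429).json({ message: 'Too many messages, slow down.' }),
});

module.exports = { messageIpLimiter, messageUserLimiter };
```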
* refactor: rename isStringTruthy to isTrue for brevity
* refactor(getError): use map to make code cleaner
* refactor: use memory for concurrent rate limiter to prevent clearing on startup/exit, add multiple log files, fix error message for concurrent violation
* feat: check if errorMessage is object, stringify if so
* chore: send object to denyRequest which will stringify it
* feat: log excessive requests
* fix(getError): correctly pluralize messages
* refactor(limiters): make type consistent between logs and errorMessage
* refactor(cache): move files out of lib/db into separate cache dir
* feat: add getLogStores function so Keyv instance is not redundantly created on every violation
feat: separate violation logging to own function with logViolation
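A sketch of the caching idea behind getLogStores/logViolation: Keyv instances are created once per violation type rather than on every violation, and each user accumulates an array of logs. The store names and log payload shape are assumptions:

```js
// Cache one Keyv instance per violation type; log violations per user.
const Keyv = require('keyv');

const stores = new Map();

function getLogStores(type) {
  if (!stores.has(type)) {
    stores.set(type, new Keyv({ namespace: type }));
  }
  return stores.get(type);
}

async function logViolation(req, res, type, errorMessage, score = 1) {
  const userId = req.user?.id;
  const logs = getLogStores(type);

  // Keep an array of violation logs per user, recording the offending IP.
  const userLogs = (await logs.get(userId)) ?? [];
  userLogs.push({
    ip: req.ip,
    type,
    message: errorMessage,
    score,
    createdAt: new Date().toISOString(),
  });
  await logs.set(userId, userLogs);
}

module.exports = { getLogStores, logViolation };
```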
* fix: cache/index.js export, properly record userViolations
* refactor(messageLimiters): use new logging method, add logging to registrations
* refactor(logViolation): make userLogs an array of logs per user
* feat: add logging to login limiter
* refactor: pass req as first param to logViolation and record offending IP
* refactor: rename isTrue helper fn to isEnabled
* feat: add simple non_browser check and log violation
* fix: open handles in unit tests, remove KeyvMongo as not used and properly mock global fetch
* chore: adjust nodemon ignore paths to properly ignore logs
* feat: add math helper function for safe use of eval
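A hedged sketch of the "safe use of eval" idea: only plain arithmetic expressions are evaluated, and anything else falls back to a default (the fallback behavior mentioned again in a later entry). The env var in the usage comment is illustrative:

```js
// Evaluate simple arithmetic strings safely; return a fallback otherwise.
function math(value, fallback = 0) {
  if (typeof value === 'number' && Number.isFinite(value)) return value;
  if (typeof value !== 'string') return fallback;

  // Whitelist digits, whitespace, decimal points, and basic operators.
  if (!/^[\d\s+\-*/().]+$/.test(value)) return fallback;

  try {
    // eslint-disable-next-line no-new-func
    const result = Function(`"use strict"; return (${value});`)();
    return Number.isFinite(result) ? result : fallback;
  } catch {
    return fallback;
  }
}

module.exports = math;

// Example: parse a duration-style env var such as "1000 * 60 * 60 * 2".
// const banDuration = math(process.env.BAN_DURATION, 1000 * 60 * 60 * 2);
```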
* refactor(api/convos): use middleware at top of file to avoid redundancy
* feat: add delete all static method for Sessions
* fix: redirect to login on refresh if user is not found, or the session is not found but hasn't expired (ban case)
* refactor(getLogStores): adjust return type
* feat: add ban violation and check ban logic
refactor(logViolation): pass both req and res objects
* feat: add removePorts helper function
* refactor: rename getError to getMessageError and add getLoginError for displaying different login errors
* fix(AuthContext): fix type issue and remove unused code
* refactor(bans): ban by ip and user id, send response based on origin
* chore: add frontend ban messages
* refactor(routes/oauth): add ban check to handler, also consolidate logic to avoid redundancy
* feat: add ban check to AI messaging routes
* feat: add ban check to login/registration
* fix(ci/api): mock KeyvMongo to avoid tests hanging
* docs: update .env.example
* refactor(banViolation): calculate interval rate crossover, early return if duration is invalid
ci(banViolation): add tests to ensure users are only banned when expected
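An illustrative version of the interval-crossover check described above: a user is banned only when their cumulative violation score crosses a multiple of the configured interval, so repeated small offenses eventually trip it. Parameter names are assumptions:

```js
// Ban only when the running score crosses a multiple of the interval.
function crossedBanInterval({ prevScore, newScore, interval }) {
  // Early return if the configured interval is invalid.
  if (!Number.isFinite(interval) || interval <= 0) return false;
  return Math.floor(prevScore / interval) < Math.floor((prevScore + newScore) / interval);
}

// Example: with interval 20, going from 18 to a total of 22 crosses the threshold.
// crossedBanInterval({ prevScore: 18, newScore: 4, interval: 20 }) === true
```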
* docs: improve wording for mod system
* feat: add configurable env variables for violation scores
* chore: add jsdoc for uaParser.js
* chore: improve ban text log
* chore: update bun test scripts
* refactor(math.js): add fallback values
* fix(KeyvMongo/banLogs): refactor keyv instances to top of files to avoid memory leaks, refactor ban logic to use getLogStores instead
refactor(getLogStores): get a single log store by type
* fix(ci): refactor tests due to banLogs changes, also make sure to clear and revoke sessions even if ban duration is 0
* fix(banViolation.js): getLogStores import
* feat: handle 500 code error at login
* fix(middleware): handle case where user.id is _id and not just id
* ci: add ban secrets for backend unit tests
* refactor: logout user upon ban
* chore: log session delete message only if deletedCount > 0
* refactor: change default ban duration (2h) and make logic more clear in JSDoc
* fix: login and registration limiters will now return rate limiting error
* fix: userId not parsable as non ObjectId string
* feat: add useTimeout hook to properly clear timeouts when invoking functions within them
refactor(AuthContext): cleanup code by using new hook and defining types in ~/common
* fix: login error message for rate limits
* docs: add info for automated mod system and rate limiters, update other docs accordingly
* chore: bump data-provider version
* feat(api): refresh token logic
* feat(client): refresh token logic
* feat(data-provider): refresh token logic
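A hedged sketch of the refresh-token flow across these three entries: verify the refresh token from an httpOnly cookie, then issue a fresh access token. The cookie name, env var names, expiry default, and response shape are assumptions, not the exact project contract (and `req.cookies` presumes cookie-parser is in place):

```js
// Refresh endpoint sketch using jsonwebtoken.
const jwt = require('jsonwebtoken');

async function refreshController(req, res) {
  const refreshToken = req.cookies?.refreshToken;
  if (!refreshToken) {
    return res.status(200).send('Refresh token not provided');
  }

  try {
    const payload = jwt.verify(refreshToken, process.env.JWT_REFRESH_SECRET);
    const token = jwt.sign(
      { id: payload.id },
      process.env.JWT_SECRET,
      { expiresIn: '15m' }, // illustrative default access-token lifetime
    );
    return res.status(200).json({ token });
  } catch {
    return res.status(403).send('Invalid refresh token');
  }
}

module.exports = refreshController;
```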
* fix: SSE uses esm
* chore: add default refresh token expiry to AuthService, add message about env var not set when generating a token
* chore: update scripts to more compatible bun methods, ran bun install again
* chore: update env.example and playwright workflow with JWT_REFRESH_SECRET
* chore: update breaking changes docs
* chore: add timeout to url visit
* chore: add default SESSION_EXPIRY in generateToken logic, add act script for testing github actions
* fix(e2e): refresh automatically in development environment to pass e2e tests