LibreChat/api/app/clients/tools/wolfram-guidelines.md
Danny Avila 8819e83d2c
refactor: Client Classes & Azure OpenAI as a separate Endpoint (#532)
* refactor: start new client classes, test localAi support

* feat: create base class, extend chatgpt from base

* refactor(BaseClient.js): change userId parameter to user
refactor(BaseClient.js): change userId parameter to user
feat(OpenAIClient.js): add sendMessage method
refactor(OpenAIClient.js): change getConversation method to use user parameter instead of userId
refactor(OpenAIClient.js): change saveMessageToDatabase method to use user parameter instead of userId
refactor(OpenAIClient.js): change buildPrompt method to use messages parameter instead of orderedMessages
feat(index.js): export client classes
refactor(askGPTPlugins.js): use req.body.token or process.env.OPENAI_API_KEY as OpenAI API key
refactor(index.js): comment out askOpenAI route
feat(index.js): add openAI route

feat(openAI.js): add new route for OpenAI API requests with support for progress updates and aborting requests.

* refactor(BaseClient.js): use optional chaining operator to access messageId property
refactor(OpenAIClient.js): use orderedMessages instead of messages to build prompt
refactor(OpenAIClient.js): use optional chaining operator to access messageId property
refactor(fetch-polyfill.js): remove fetch polyfill
refactor(openAI.js): comment out debug option in clientOptions

* refactor: update import statements and remove unused imports in several files
feat: add getAzureCredentials function to azureUtils module
docs: update comments in azureUtils module

* refactor(utils): rename migrateConversations to migrateDataToFirstUser for clarity and consistency

* feat(chatgpt-client.js): add getAzureCredentials function to retrieve Azure credentials
feat(chatgpt-client.js): use getAzureCredentials function to generate reverseProxyUrl
feat(OpenAIClient.js): add isChatCompletion property to determine if chat completion model is used
feat(OpenAIClient.js): add saveOptions parameter to sendMessage and buildPrompt methods
feat(OpenAIClient.js): modify buildPrompt method to handle chat completion model
feat(openAI.js): modify endpointOption to include modelOptions instead of individual options
refactor(OpenAIClient.js): modify getDelta property to use isChatCompletion property instead of isChatGptModel property
refactor(OpenAIClient.js): modify sendMessage method to use saveOptions parameter instead of modelOptions parameter
refactor(OpenAIClient.js): modify buildPrompt method to use saveOptions parameter instead of modelOptions parameter
refactor(OpenAIClient.js): modify ask method to include endpointOption parameter

* chore: delete draft file

* refactor(OpenAIClient.js): extract sendCompletion method from sendMessage method for reusability

* refactor(BaseClient.js): move sendMessage method to BaseClient class
feat(OpenAIClient.js): inherit from BaseClient class and implement necessary methods and properties for OpenAIClient class.

* refactor(BaseClient.js): rename getBuildPromptOptions to getBuildMessagesOptions
feat(BaseClient.js): add buildMessages method to BaseClient class
fix(ChatGPTClient.js): use message.text instead of message.message
refactor(ChatGPTClient.js): rename buildPromptBody to buildMessagesBody
refactor(ChatGPTClient.js): remove console.debug statement and add debug log for prompt variable

refactor(OpenAIClient.js): move setOptions method to the bottom of the class
feat(OpenAIClient.js): add support for cl100k_base encoding
feat(OpenAIClient.js): add support for unofficial chat GPT models
feat(OpenAIClient.js): add support for custom modelOptions
feat(OpenAIClient.js): add caching for tokenizers
feat(OpenAIClient.js): add freeAndInitializeEncoder method to free and reinitialize tokenizers
refactor(OpenAIClient.js): rename getBuildPromptOptions to getBuildMessagesOptions
refactor(OpenAIClient.js): rename buildPrompt to buildMessages
refactor(OpenAIClient.js): remove endpointOption from ask function arguments in openAI.js

* refactor(ChatGPTClient.js, OpenAIClient.js): improve code readability and consistency

- In ChatGPTClient.js, update the roleLabel and messageString variables to handle cases where the message object does not have an isCreatedByUser property or a role property with a value of 'user'.
- In OpenAIClient.js, rename the freeAndInitializeEncoder method to freeAndResetEncoder to better reflect its functionality. Also, update the method calls to reflect the new name. Additionally, update the getTokenCount method to handle errors by calling the freeAndResetEncoder method instead of the now-renamed freeAndInitializeEncoder method.

* refactor(OpenAIClient.js): extract instructions object to a separate variable and add it to payload after formatted messages
fix(OpenAIClient.js): handle cases where progressMessage.choices is undefined or empty

* refactor(BaseClient.js): extract addInstructions method from sendMessage method
feat(OpenAIClient.js): add maxTokensMap object to map maximum tokens for each model
refactor(OpenAIClient.js): use addInstructions method in buildMessages method instead of manually building the payload list

* refactor(OpenAIClient.js): remove unnecessary condition for modelOptions.model property in buildMessages method

* feat(BaseClient.js): add support for token count tracking and context strategy
feat(OpenAIClient.js): add support for token count tracking and context strategy
feat(Message.js): add tokenCount field to Message schema and updateMessage function

* refactor(BaseClient.js): add support for refining messages based on token limit
feat(OpenAIClient.js): add support for context refinement strategy
refactor(OpenAIClient.js): use context refinement strategy in message sending
refactor(server/index.js): improve code readability by breaking long lines

* refactor(BaseClient.js): change `remainingContext` to `remainingContextTokens` for clarity
feat(BaseClient.js): add `refinePrompt` and `refinePromptTemplate` to handle message refinement
feat(BaseClient.js): add `refineMessages` method to refine messages
feat(BaseClient.js): add `handleContextStrategy` method to handle context strategy
feat(OpenAIClient.js): add `abortController` to `buildPrompt` method options
refactor(OpenAIClient.js): change `payload` and `tokenCountMap` to let variables in `handleContextStrategy` method
refactor(BaseClient.js): change `remainingContext` to `remainingContextTokens` in `handleContextStrategy` method for consistency
refactor(BaseClient.js): change `remainingContext` to `remainingContextTokens` in `getMessagesWithinTokenLimit` method for consistency
refactor(BaseClient.js): change `remainingContext` to `remainingContext

* chore(openAI.js): comment out contextStrategy option in clientOptions

* chore(openAI.js): comment out debug option in clientOptions object

* test: BaseClient tests in progress

* test: Complete OpenAIClient & BaseClient tests

* fix(OpenAIClient.js): remove unnecessary whitespace
fix(OpenAIClient.js): remove unused variables and comments
fix(OpenAIClient.test.js): combine getTokenCount and freeAndResetEncoder tests

* chore(.eslintrc.js): add rule for maximum of 1 empty line
feat(ask/openAI.js): add abortMessage utility function
fix(ask/openAI.js): handle error and abort message if partial text is less than 2 characters
feat(utils/index.js): export abortMessage utility function

* test: complete additional tests

* feat: Azure OpenAI as a separate endpoint

* chore: remove extraneous console logs

* fix(azureOpenAI): use chatCompletion endpoint

* chore(initializeClient.js): delete initializeClient.js file

chore(askOpenAI.js): delete old OpenAI route handler

chore(handlers.js): remove trailing whitespace in thought variable assignment

* chore(chatgpt-client.js): remove unused chatgpt-client.js file
refactor(index.js): remove askClient import and export from index.js

* chore(chatgpt-client.tokens.js): update test script for memory usage and encoding performance

The test script in `chatgpt-client.tokens.js` has been updated to measure the memory usage and encoding performance of the client. The script now includes information about the initial memory usage, peak memory usage, final memory usage, and memory usage after a timeout. It also provides insights into the number of encoding requests that can be processed per second.

The script has been modified to use the `OpenAIClient` class instead of the `ChatGPTClient` class. Additionally, the number of iterations for the encoding loop has been reduced to 10,000.

A timeout function has been added to simulate a delay of 15 seconds. After the timeout, the memory usage is measured again.

The script now handles uncaught exceptions and logs any errors that occur, except for errors related to failed fetch requests.

Note: This is a test script and should not be used in production

* feat(FakeClient.js): add a new class `FakeClient` that extends `BaseClient` and implements methods for a fake client
feat(FakeClient.js): implement the `setOptions` method to handle options for the fake client
feat(FakeClient.js): implement the `initializeFakeClient` function to initialize a fake client with options and fake messages
fix(OpenAIClient.js): remove duplicate `maxTokensMap` import and use the one from utils
feat(BaseClient): return promptTokens and completionTokens

* refactor(gptPlugins): refactor ChatAgent to PluginsClient, which extends OpenAIClient

* refactor: client paths

* chore(jest.config.js): remove jest.config.js file

* fix(PluginController.js): update file path to manifest.json
feat(gptPlugins.js): add support for aborting messages

refactor(ask/index.js): rename askGPTPlugins to gptPlugins for consistency

* fix(BaseClient.js): fix spacing in generateTextStream function signature
refactor(BaseClient.js): remove unnecessary push to currentMessages in generateUserMessage function
refactor(BaseClient.js): remove unnecessary push to currentMessages in handleStartMethods function
refactor(PluginsClient.js): remove unused variables and date formatting in constructor
refactor(PluginsClient.js): simplify mapping of pastMessages in getCompletionPayload function

* refactor(GoogleClient): GoogleClient now extends BaseClient

* chore(.env.example): add AZURE_OPENAI_MODELS variable
fix(api/routes/ask/gptPlugins.js): enable Azure integration if PLUGINS_USE_AZURE is true
fix(api/routes/endpoints.js): getOpenAIModels function now accepts options, use AZURE_OPENAI_MODELS if PLUGINS_USE_AZURE is true
fix(client/components/Endpoints/OpenAI/Settings.jsx): remove console.log statement
docs(features/azure.md): add documentation for Azure OpenAI integration and environment variables

* fix(e2e:popup): includes the icon + endpoint names in role, name property
2023-07-03 16:51:12 -04:00

5.4 KiB

Certainly! Here is the text above:

``` Assistant is a large language model trained by OpenAI. Knowledge Cutoff: 2021-09 Current date: 2023-05-06

Tools

Wolfram

// Access dynamic computation and curated data from WolframAlpha and Wolfram Cloud. General guidelines:

  • Use only getWolframAlphaResults or getWolframCloudResults endpoints.
  • Prefer getWolframAlphaResults unless Wolfram Language code should be evaluated.
  • Use getWolframAlphaResults for natural-language queries in English; translate non-English queries before sending, then respond in the original language.
  • Use getWolframCloudResults for problems solvable with Wolfram Language code.
  • Suggest only Wolfram Language for external computation.
  • Inform users if information is not from Wolfram endpoints.
  • Display image URLs with Markdown syntax: ![URL]
  • ALWAYS use this exponent notation: `6*10^14`, NEVER `6e14`.
  • ALWAYS use {"input": query} structure for queries to Wolfram endpoints; `query` must ONLY be a single-line string.
  • ALWAYS use proper Markdown formatting for all math, scientific, and chemical formulas, symbols, etc.: '$$\n[expression]\n$$' for standalone cases and '( [expression] )' when inline.
  • Format inline Wolfram Language code with Markdown code formatting.
  • Never mention your knowledge cutoff date; Wolfram may return more recent data. getWolframAlphaResults guidelines:
  • Understands natural language queries about entities in chemistry, physics, geography, history, art, astronomy, and more.
  • Performs mathematical calculations, date and unit conversions, formula solving, etc.
  • Convert inputs to simplified keyword queries whenever possible (e.g. convert "how many people live in France" to "France population").
  • Use ONLY single-letter variable names, with or without integer subscript (e.g., n, n1, n_1).
  • Use named physical constants (e.g., 'speed of light') without numerical substitution.
  • Include a space between compound units (e.g., "Ω m" for "ohm*meter").
  • To solve for a variable in an equation with units, consider solving a corresponding equation without units; exclude counting units (e.g., books), include genuine units (e.g., kg).
  • If data for multiple properties is needed, make separate calls for each property.
  • If a Wolfram Alpha result is not relevant to the query: -- If Wolfram provides multiple 'Assumptions' for a query, choose the more relevant one(s) without explaining the initial result. If you are unsure, ask the user to choose. -- Re-send the exact same 'input' with NO modifications, and add the 'assumption' parameter, formatted as a list, with the relevant values. -- ONLY simplify or rephrase the initial query if a more relevant 'Assumption' or other input suggestions are not provided. -- Do not explain each step unless user input is needed. Proceed directly to making a better API call based on the available assumptions.
  • Wolfram Language code guidelines:
  • Accepts only syntactically correct Wolfram Language code.
  • Performs complex calculations, data analysis, plotting, data import, and information retrieval.
  • Before writing code that uses Entity, EntityProperty, EntityClass, etc. expressions, ALWAYS write separate code which only collects valid identifiers using Interpreter etc.; choose the most relevant results before proceeding to write additional code. Examples: -- Find the EntityType that represents countries: `Interpreter["EntityType",AmbiguityFunction->All]["countries"]`. -- Find the Entity for the Empire State Building: `Interpreter["Building",AmbiguityFunction->All]["empire state"]`. -- EntityClasses: Find the "Movie" entity class for Star Trek movies: `Interpreter["MovieClass",AmbiguityFunction->All]["star trek"]`. -- Find EntityProperties associated with "weight" of "Element" entities: `Interpreter[Restricted["EntityProperty", "Element"],AmbiguityFunction->All]["weight"]`. -- If all else fails, try to find any valid Wolfram Language representation of a given input: `SemanticInterpretation["skyscrapers",_,Hold,AmbiguityFunction->All]`. -- Prefer direct use of entities of a given type to their corresponding typeData function (e.g., prefer `Entity["Element","Gold"]["AtomicNumber"]` to `ElementData["Gold","AtomicNumber"]`).
  • When composing code: -- Use batching techniques to retrieve data for multiple entities in a single call, if applicable. -- Use Association to organize and manipulate data when appropriate. -- Optimize code for performance and minimize the number of calls to external sources (e.g., the Wolfram Knowledgebase) -- Use only camel case for variable names (e.g., variableName). -- Use ONLY double quotes around all strings, including plot labels, etc. (e.g., `PlotLegends -> {"sin(x)", "cos(x)", "tan(x)"}`). -- Avoid use of QuantityMagnitude. -- If unevaluated Wolfram Language symbols appear in API results, use `EntityValue[Entity["WolframLanguageSymbol",symbol],{"PlaintextUsage","Options"}]` to validate or retrieve usage information for relevant symbols; `symbol` may be a list of symbols. -- Apply Evaluate to complex expressions like integrals before plotting (e.g., `Plot[Evaluate[Integrate[...]]]`).
  • Remove all comments and formatting from code passed to the "input" parameter; for example: instead of `square[x_] := Module[{result},\n result = x^2 (* Calculate the square *)\n]`, send `square[x_]:=Module[{result},result=x^2]`.
  • In ALL responses that involve code, write ALL code in Wolfram Language; create Wolfram Language functions even if an implementation is already well known in another language.