LibreChat/api/utils/abortMessage.js

/**
 * Aborts an in-flight ask request identified by `abortKey`.
 *
 * Looks up the matching entry in the `abortControllers` map, removes it,
 * calls its `abortAsk()` method, and returns the aborted message data to
 * the client as JSON. Responds with 404 if no matching request exists.
 */
async function abortMessage(req, res, abortControllers) {
  const { abortKey } = req.body;
  console.log('req.body', req.body);
  if (!abortControllers.has(abortKey)) {
    return res.status(404).send('Request not found');
  }

  const { abortController } = abortControllers.get(abortKey);
  abortControllers.delete(abortKey);
  const ret = await abortController.abortAsk();
  console.log('Aborted request', abortKey);
  console.log('Aborted message:', ret);
  res.send(JSON.stringify(ret));
}

module.exports = abortMessage;
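
// ---------------------------------------------------------------------------
// Usage sketch (illustrative only, not part of the original module): how an
// Express route might register an abortable request and delegate cancellation
// to abortMessage. The route paths, the require path for `../utils`, the use
// of `conversationId` as the abortKey, and the shape of the object returned by
// `abortAsk()` are assumptions for illustration; the real LibreChat routes may
// differ.
//
// const express = require('express');
// const { abortMessage } = require('../utils'); // assumed named export from utils/index.js
//
// const router = express.Router();
// const abortControllers = new Map();
//
// router.post('/ask', (req, res) => {
//   // Assumed: the client identifies the request by conversationId.
//   const abortKey = req.body.conversationId;
//   const abortController = new AbortController();
//
//   // abortMessage expects each map entry to expose an async abortAsk() method
//   // that cancels the generation and returns the partial message data.
//   abortController.abortAsk = async () => {
//     abortController.abort();
//     return { final: true, conversationId: abortKey }; // assumed payload shape
//   };
//
//   abortControllers.set(abortKey, { abortController });
//   // ... stream the model response using abortController.signal, then call
//   // abortControllers.delete(abortKey) once the response finishes normally.
// });
//
// // Cancellation endpoint: the client POSTs { abortKey }; abortMessage looks
// // it up, aborts it, and returns the aborted message as JSON.
// router.post('/abort', (req, res) => abortMessage(req, res, abortControllers));
// ---------------------------------------------------------------------------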