<h1>Large Language Model (“Chat-bot AI”) integration<a class="headerlink" href="#large-language-model-chat-bot-ai-integration" title="Permalink to this headline">¶</a></h1>
<p>Contribution by Griatch 2023</p>
<p>This adds an LLMClient that allows Evennia to send prompts to an LLM server (a Large Language Model, along the lines of ChatGPT). The example uses a local open-source LLM install. Included is an NPC you can chat with using a new <code class="docutils literal notranslate"><span class="pre">talk</span></code> command. The NPC will respond using the AI responses from the LLM server. All calls are asynchronous, so even if the LLM server is slow, Evennia is not affected.</p>
<p>There are many LLM servers, but they can be quite technical to install and set up. This contrib was tested with <a class="reference external" href="https://github.com/oobabooga/text-generation-webui">text-generation-webui</a>, which has a lot of features while also being easy to install.</p>
<ol>
<li><p><a class="reference external" href="https://github.com/oobabooga/text-generation-webui#installation">Go to the Installation section</a> and grab the ‘one-click installer’ for your OS.</p></li>
<li><p>Unzip the files into a folder somewhere on your hard drive (it does not have to be next to your Evennia files if you don’t want it to).</p></li>
<li><p>In a terminal/console, <code class="docutils literal notranslate"><span class="pre">cd</span></code> into the folder and execute the start script in whatever way is appropriate for your OS (like <code class="docutils literal notranslate"><span class="pre">source start_linux.sh</span></code> for Linux, or <code class="docutils literal notranslate"><span class="pre">.\start_windows</span></code> for Windows). This installer will fetch and install everything into a conda virtual environment. When asked, make sure to select your GPU (NVIDIA/AMD etc.) if you have one; otherwise use CPU.</p></li>
<li><p>Once everything is loaded, stop the server with <code class="docutils literal notranslate"><span class="pre">Ctrl-C</span></code> (or <code class="docutils literal notranslate"><span class="pre">Cmd-C</span></code>) and open the file <code class="docutils literal notranslate"><span class="pre">webui.py</span></code> (one of the top-level files in the archive you unzipped). Find the string <code class="docutils literal notranslate"><span class="pre">CMD_FLAGS = ''</span></code> near the top and change it to <code class="docutils literal notranslate"><span class="pre">CMD_FLAGS = '--api'</span></code>, then save and close. This makes the server activate its API automatically.</p></li>
<li><p>Now run the server start script (<code class="docutils literal notranslate"><span class="pre">start_linux.sh</span></code> etc.) again. This is what you’ll use to start the LLM server from now on.</p></li>
<li><p>Once the server is running, point your browser to <a class="reference external" href="http://127.0.0.1:7860">http://127.0.0.1:7860</a> to see the Text generation web UI. If you turned on the API, it is now active on port 5000. This should not collide with default Evennia ports unless you changed something.</p></li>
<li><p>At this point you have the server and API, but no Large Language Model (LLM) is actually running yet. In the web UI, go to the <code class="docutils literal notranslate"><span class="pre">models</span></code> tab and enter a github-style path in the <code class="docutils literal notranslate"><span class="pre">Download custom model or LoRA</span></code> field. To test that things work, enter <code class="docutils literal notranslate"><span class="pre">DeepPavlov/bart-base-en-persona-chat</span></code> and download it. This is a relatively small model (350 million parameters), so it should be possible to run on most machines using only the CPU. Update the model list in the drop-down on the left, select the new model, then load it with the <code class="docutils literal notranslate"><span class="pre">Transformers</span></code> loader. It should load quite quickly. If you want to load this model every time, you can tick the <code class="docutils literal notranslate"><span class="pre">Autoload the model</span></code> checkbox; otherwise you’ll need to select and load the model every time you start the LLM server.</p></li>
<li><p>To experiment, you can find thousands of other open-source text-generation LLM models on <a class="reference external" href="https://huggingface.co/models?pipeline_tag=text-generation&sort=trending">huggingface.co/models</a>. Beware of downloading too huge a model; your machine may not be able to load it! If you try large models, <em>don’t</em> tick the <code class="docutils literal notranslate"><span class="pre">Autoload the model</span></code> checkbox, in case the model crashes your server on startup.</p></li>
</ol>
<p>For troubleshooting, you can look at the terminal output of the <code class="docutils literal notranslate"><span class="pre">text-generation-webui</span></code> server; it will show the requests you make to it and list any errors. See the text-generation-webui homepage for more details.</p>
<p>To be able to talk to NPCs, import and add the <code class="docutils literal notranslate"><span class="pre">evennia.contrib.rpg.llm.llm_npc.CmdLLMTalk</span></code> command to your Character cmdset in <code class="docutils literal notranslate"><span class="pre">mygame/commands/default_cmdsets.py</span></code> (see the basic tutorials if you are unsure how).</p>
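<p>As a minimal sketch (assuming the standard <code class="docutils literal notranslate"><span class="pre">mygame</span></code> game-dir layout created by <code class="docutils literal notranslate"><span class="pre">evennia --init</span></code>), adding the command could look like this:</p>

```python
# mygame/commands/default_cmdsets.py -- sketch of adding the talk command
from evennia import default_cmds
from evennia.contrib.rpg.llm.llm_npc import CmdLLMTalk


class CharacterCmdSet(default_cmds.CharacterCmdSet):
    """The default cmdset available to in-game Characters."""

    key = "DefaultCharacter"

    def at_cmdset_creation(self):
        # keep all default commands, then add the new talk command
        super().at_cmdset_creation()
        self.add(CmdLLMTalk())
```

This follows the normal Evennia pattern of extending the default cmdset; reload the server for the change to take effect.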
<p>The default LLM API config should work with the text-generation-webui LLM server running its API on port 5000. You can also customize it via settings; if a setting is not added, the contrib’s default is used.</p>
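<p>As a hypothetical illustration only (the setting names and defaults below are assumptions; check the contrib’s README for the authoritative names and values), such overrides would go in <code class="docutils literal notranslate"><span class="pre">mygame/server/conf/settings.py</span></code>:</p>

```python
# Hypothetical sketch -- setting names and values are assumptions;
# verify them against evennia/contrib/rpg/llm/README.md before use.
LLM_HOST = "http://127.0.0.1:5000"   # where the LLM server's API listens
LLM_PATH = "/api/v1/generate"        # text-generation-webui generate endpoint
LLM_HEADERS = {"Content-Type": "application/json"}
LLM_PROMPT_KEYNAME = "prompt"        # JSON key the prompt is sent under
LLM_REQUEST_BODY = {"max_new_tokens": 250, "temperature": 0.7}
```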
<p>Don’t forget to reload Evennia if you make any changes.</p>
</section>
</section>
<section id="usage">
<h2>Usage<a class="headerlink" href="#usage" title="Permalink to this headline">¶</a></h2>
<p>With the LLM server running and the new <code class="docutils literal notranslate"><span class="pre">talk</span></code> command added, create a new LLM-connected NPC and talk to it in-game.</p>
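<p>Assuming the NPC typeclass is <code class="docutils literal notranslate"><span class="pre">evennia.contrib.rpg.llm.llm_npc.LLMNPC</span></code> (the same module that holds the command; verify the class name against the contrib), an in-game session could look like this, with the NPC name being just an example:</p>

```
> create/drop villager:evennia.contrib.rpg.llm.llm_npc.LLMNPC
> talk villager Hello there!
```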
<p>Most likely, your first response will <em>not</em> be this nice and short, but will be quite nonsensical, looking like an email. This is because the example model we loaded is not optimized for conversations. But at least you know it works!</p>
<p>The conversation will be echoed to everyone in the room. The NPC will show a thinking/pondering message if the server responds slower than 2 seconds (by default).</p>
<h2>A note on running LLMs locally<a class="headerlink" href="#a-note-on-running-llms-locally" title="Permalink to this headline">¶</a></h2>
<p>Running an LLM locally can be <em>very</em> demanding.</p>
<p>As an example, I tested this on my very beefy work laptop. It has 32GB of RAM but no GPU, so I ran the example (small, 128M-parameter) model on the CPU. It takes about 3-4 seconds to generate a (frankly very bad) response, so keep that in mind.</p>
<p>On <a class="reference external" href="http://huggingface.co">huggingface.co</a> you can find listings of the ‘best performing’ language models right now; this changes all the time. The leading models require 100+ GB of RAM, and while it’s possible to run on a CPU, you should ideally also have a large graphics card (GPU) with a lot of VRAM.</p>
<p>So most likely you’ll have to settle for something smaller. You will need to experiment with different models and also tweak the prompt.</p>
<p>Also be aware that many open-source models are intended for AI research and licensed for non-commercial use only. So be careful if you want to use this in a commercial game. No doubt there will be a lot of changes in this area over the coming years.</p>
<section id="why-not-use-an-ai-cloud-service">
<h3>Why not use an AI cloud service?<a class="headerlink" href="#why-not-use-an-ai-cloud-service" title="Permalink to this headline">¶</a></h3>
<p>You could in principle use this to call out to an external API, like OpenAI (ChatGPT) or Google. Most cloud-hosted services are commercial and cost money, but since they have the hardware to run bigger models (or their own proprietary models), they may give better and faster results.</p>
<p>Calling an external API is not tested, so report any findings. Since the Evennia Server (not the Portal) is doing the calling, it is recommended that you put a proxy between your server and the internet if you call out like this.</p>
<p>Here is an untested example of the Evennia setting for calling <a class="reference external" href="https://platform.openai.com/docs/api-reference/completions">OpenAI’s v1/completions API</a>:</p>
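<p>A hedged sketch of what such settings might look like (the <code class="docutils literal notranslate"><span class="pre">LLM_*</span></code> setting names are assumptions taken from this contrib; the request-body fields follow OpenAI’s legacy completions API; replace the placeholder key with your own):</p>

```python
# Untested sketch -- setting names are assumptions, verify against the
# contrib's README. Never commit a real API key to version control.
LLM_API_TYPE = "openai"
LLM_HOST = "https://api.openai.com"
LLM_PATH = "/v1/completions"
LLM_HEADERS = {
    "Content-Type": "application/json",
    "Authorization": "Bearer <YOUR_OPENAI_API_KEY>",  # placeholder
}
LLM_PROMPT_KEYNAME = "prompt"
LLM_REQUEST_BODY = {
    "model": "gpt-3.5-turbo-instruct",  # a model that works with v1/completions
    "temperature": 0.7,
    "max_tokens": 128,
}
```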
<div><p>TODO: OpenAI’s more modern <a class="reference external" href="https://platform.openai.com/docs/api-reference/chat">v1/chat/completions</a> API does not currently work out of the box, since it is a bit more complex, expecting the prompt to be given as a list of all responses so far.</p>
<p>The LLM-able NPC class has a new method <code class="docutils literal notranslate"><span class="pre">at_talked_to</span></code> which does the connection to the LLM server and responds. This is called by the new <code class="docutils literal notranslate"><span class="pre">talk</span></code> command. Note that all these calls are asynchronous, meaning a slow response will not block Evennia.</p>
<p>The NPC’s AI is controlled with a few extra properties and Attributes, most of which can be customized directly in-game by a builder.</p>
<section id="prompt-prefix">
<h3><code class="docutils literal notranslate"><span class="pre">prompt_prefix</span></code><a class="headerlink" href="#prompt-prefix" title="Permalink to this headline">¶</a></h3>
<p>The <code class="docutils literal notranslate"><span class="pre">prompt_prefix</span></code> is very important. It will be added in front of your prompt and helps the AI know how to respond. Remember that an LLM model is basically an auto-complete mechanism, so by providing examples and instructions in the prefix, you can help it respond in a better way.</p>
<p>The prefix string to use for a given NPC is looked up from one of these locations, in order:</p>
<ol>
<li><p>An Attribute <code class="docutils literal notranslate"><span class="pre">npc.db.chat_prefix</span></code> stored on the NPC (not set by default)</p></li>
<li><p>A property <code class="docutils literal notranslate"><span class="pre">chat_prefix</span></code> on the LLMNPC class (set to <code class="docutils literal notranslate"><span class="pre">None</span></code> by default)</p></li>
<li><p>The <code class="docutils literal notranslate"><span class="pre">LLM_PROMPT_PREFIX</span></code> setting (unset by default)</p></li>
<li><p>If none of the above locations are set, the following default is used:</p>
<div class="highlight-none notranslate"><div class="highlight"><pre><span></span>"You are roleplaying as {name}, a {desc} existing in {location}.
Answer with short sentences. Only respond as {name} would.
From here on, the conversation between {name} and {character} begins."
</pre></div>
</div>
</li>
</ol>
<p>Here, the formatting tag <code class="docutils literal notranslate"><span class="pre">{name}</span></code> is replaced with the NPC’s name, <code class="docutils literal notranslate"><span class="pre">desc</span></code> by its description, <code class="docutils literal notranslate"><span class="pre">location</span></code> by its current location’s name, and <code class="docutils literal notranslate"><span class="pre">character</span></code> by the one talking to it. All character names are given by the <code class="docutils literal notranslate"><span class="pre">get_display_name(looker)</span></code> call, so they may differ from person to person.</p>
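<p>As a standalone illustration of this tag substitution (the real lookup and substitution happen inside the contrib; all values here are example data), the default prefix can be filled in with plain <code class="docutils literal notranslate"><span class="pre">str.format</span></code>:</p>

```python
# Standalone sketch of how {name}/{desc}/{location}/{character} are filled in.
DEFAULT_PREFIX = (
    "You are roleplaying as {name}, a {desc} existing in {location}. "
    "Answer with short sentences. Only respond as {name} would. "
    "From here on, the conversation between {name} and {character} begins."
)

# Example data, not real game objects.
prompt = DEFAULT_PREFIX.format(
    name="Villager",
    desc="weathered farmer",
    location="the town square",
    character="Anna",
)
print(prompt)
```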
<p>Depending on the model, it can be very important to extend the prefix both with more information about the character and with communication examples. A lot of tweaking may be necessary before producing something reminiscent of human speech.</p>
</section>
<section id="response-template">
<h3>Response template<a class="headerlink" href="#response-template" title="Permalink to this headline">¶</a></h3>
<p>The <code class="docutils literal notranslate"><span class="pre">response_template</span></code> AttributeProperty controls how the NPC’s reply is echoed, following common <code class="docutils literal notranslate"><span class="pre">msg_contents</span></code> <a class="reference internal" href="../Components/FuncParser.html"><span class="doc std std-doc">FuncParser</span></a> syntax. The <code class="docutils literal notranslate"><span class="pre">character</span></code> string will be mapped to the one talking to the NPC, and <code class="docutils literal notranslate"><span class="pre">response</span></code> will be what is said by the NPC.</p>
</section>
<section id="memory">
<h3>Memory<a class="headerlink" href="#memory" title="Permalink to this headline">¶</a></h3>
<p>The NPC remembers what each player has said to it. This memory is included with the prompt to the LLM and helps it understand the context of the conversation. The length of this memory is given by the <code class="docutils literal notranslate"><span class="pre">max_chat_memory_size</span></code> AttributeProperty; the default is 25 messages. Once the maximum memory size is reached, older messages are forgotten. Memory is stored separately for each player talking to the NPC.</p>
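<p>The rolling-memory behavior can be sketched with a bounded deque (a standalone illustration, not the contrib’s actual storage; 25 mirrors the default <code class="docutils literal notranslate"><span class="pre">max_chat_memory_size</span></code>):</p>

```python
from collections import deque

# A bounded deque silently drops the oldest entries once full,
# matching the "older messages are forgotten" behavior described above.
MAX_CHAT_MEMORY_SIZE = 25
memory = deque(maxlen=MAX_CHAT_MEMORY_SIZE)

for i in range(30):  # more messages than the memory can hold
    memory.append(f"message {i}")

print(len(memory))  # only the latest 25 remain
print(memory[0])    # the first five messages were forgotten
```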
</section>
<section id="thinking">
<h3>Thinking<a class="headerlink" href="#thinking" title="Permalink to this headline">¶</a></h3>
<p>If the LLM server is slow to respond, the NPC will echo a random ‘thinking message’ to show it has not forgotten about you (something like “The villager ponders your words …”).</p>
<p>They are controlled by two <code class="docutils literal notranslate"><span class="pre">AttributeProperties</span></code>:</p>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">thinking_timeout</span></code>: How long, in seconds, to wait before showing the message. Default is 2 seconds.</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">thinking_messages</span></code>: A list of messages to randomly pick between. Each message string can contain <code class="docutils literal notranslate"><span class="pre">{name}</span></code>, which will be replaced by the NPC’s name.</p></li>
</ul>
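<p>The message selection can be sketched in plain Python (a standalone illustration; the example messages and helper function are hypothetical, mirroring the description above):</p>

```python
import random

# Hypothetical example messages; each may contain a {name} tag.
thinking_messages = [
    "{name} ponders your words ...",
    "{name} scratches their head ...",
]

def get_thinking_message(npc_name):
    """Pick a random thinking message and fill in the NPC's name."""
    return random.choice(thinking_messages).format(name=npc_name)

msg = get_thinking_message("The villager")
print(msg)
```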
<p><small>This document page is generated from <code class="docutils literal notranslate"><span class="pre">evennia/contrib/rpg/llm/README.md</span></code>. Changes to this
file will be overwritten, so edit that file rather than this one.</small></p>