This project was developed in mid-2023, a period of rapid change in the LLM ecosystem. It relies on severely outdated versions of key libraries and APIs, such as langchain and the OpenAI API.
LLM-SlackBot-Channels is a Slack bot developed using the Slack Bolt framework. It allows users to interact with the bot through Slack channels by employing various commands. The bot leverages a Large Language Model (LLM) to generate responses based on both user input and channel-specific configurations. Unique to each channel, the bot can adopt different personalities and follow a customized set of instructions. This includes the ability to use it as an agent, integrating tools and documents.
This repository is built mainly on langchain. It supports open LLMs and embeddings as well as OpenAI's models.
See a video example here:
example_usage.mp4
- Improved the way a file is uploaded to a QA thread
- If you use OpenAI models, you can customize which model to use in each channel
- Use the LLM as an agent with your own tools (a generic sketch follows this list); see the v0.2 release for more details
- You can add files to the channel, which are used by the agent through a doc_retriever tool
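How custom tools are wired into this bot is not shown in this README, so the following is only a generic langchain-style sketch of what such a tool could look like; the `word_count` function and tool name are hypothetical.

```python
from langchain.tools import Tool

def word_count(text: str) -> str:
    """Toy example: count the words in a piece of text."""
    return f"The text contains {len(text.split())} words."

# A generic langchain Tool; how it is registered with this bot's agent is
# project-specific and not shown here.
word_count_tool = Tool(
    name="word_count",
    func=word_count,
    description="Counts the number of words in a piece of text.",
)
```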
The bot responds to the following slash commands:

- `/modify_bot`: Customize the bot's personality, instructions, and temperature within the channel it is operating in. If `!no-notify` is included, no notification is sent to the channel.
- `/bot_info`: Show the initial prompt used by the bot, as well as the default temperature for generating responses.
- `/ask`: Ask questions or make requests; the bot uses the LLM to generate an appropriate response (a usage example follows this list).
  Command syntax: `/ask (<!all>) (<!temp=temp>) <question or request>`. If `!all` is included, the bot sends its response to the entire channel. If `!temp` is included, the response's temperature (randomness of the bot's output) is adjusted.
- `/permissions` (optional): Modify which users can interact with the bot. It requires a password defined in the environment variables.
  Command syntax: `/permissions <PERMISSIONS_PASSWORD>`. If no password was defined in `.env`, this command does nothing.
- `/edit_docs`: Edit the descriptions of the documents that have been uploaded to the channel. These descriptions are used by the doc_retriever tool (see Mentions).
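For example, `/ask !all !temp=0.2 Summarize the main goals of this project` would post a lower-randomness answer to the entire channel; the flags follow the syntax above, and the question text is just an illustration.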
Mentions work as follows:

- When the bot is mentioned in a thread, it can respond based on the context. The context limit is handled using a max token limit, in a similar way as `ConversationTokenBufferMemory` from langchain (a minimal sketch follows this list).
- If the bot is mentioned in a channel together with an uploaded file, it asks whether you want to start a QA thread or upload the file directly to the channel. The user can add some context and new separators to chunk the file(s). The files are downloaded to `data/tmp` to define a persistent VectorStore in `data/db`; after the VectorStore has been generated, all files are deleted.
  - QA Thread: The bot replies to the user's message that contains the uploaded file(s), stating that a QA thread has been created with the uploaded file(s) and the context provided by the user.
    - If the user wants to remove the QA thread, use the flag `!delete-qa` while mentioning the bot.
  - Upload to channel: The file is uploaded to the channel, and the doc_retriever tool appears in the list of tools once at least one file has been added to the channel. This tool takes as context all the files uploaded by users through this method. If the channel is used as a simple LLM chain, a `ConversationalRetrievalChain` is used; otherwise, a tool that retrieves the important information from the documents is created and passed to the agent.
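The exact trimming logic lives in the bot's handlers; below is only a minimal sketch of the idea, in the spirit of langchain's `ConversationTokenBufferMemory`, with a crude word-based count standing in for a real tokenizer.

```python
from typing import List

# Minimal sketch: keep only the newest thread messages that fit a token budget.
# count_tokens is a crude stand-in; the actual bot may use the LLM's own tokenizer.
def count_tokens(text: str) -> int:
    return len(text.split())  # rough approximation: one token per word

def trim_history(messages: List[str], max_tokens_threads: int = 2000) -> List[str]:
    kept, total = [], 0
    for msg in reversed(messages):      # walk from newest to oldest
        tokens = count_tokens(msg)
        if total + tokens > max_tokens_threads:
            break                       # older messages are dropped
        kept.append(msg)
        total += tokens
    return list(reversed(kept))         # restore chronological order
```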
The documents are handled using ChromaDB, saving the database to `data/db/{channel_id}/{timestamp}` for each QA thread, where `channel_id` refers to the channel that contains the thread and `timestamp` to the time when the QA thread was initiated. Note that embeddings from different models are typically not compatible, so if you change the embedding model after creating the database for a given QA thread, that thread will no longer work.
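As a rough illustration (not the repository's actual ingestion code), a per-thread store could be created with langchain and Chroma along these lines, assuming the open-source embedding model from `EMB_MODEL`:

```python
from typing import List

from langchain.embeddings import HuggingFaceEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

def build_qa_store(texts: List[str], channel_id: str, timestamp: str) -> Chroma:
    """Chunk raw texts and persist them under data/db/{channel_id}/{timestamp}."""
    splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
    chunks = splitter.create_documents(texts)
    embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")  # EMB_MODEL
    return Chroma.from_documents(
        chunks,
        embeddings,
        persist_directory=f"data/db/{channel_id}/{timestamp}",
    )
```

Because the persisted vectors depend on the embedding model used at creation time, reopening the same directory with a different embedding model will not work, as noted above.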
To install the necessary requirements, use:

`pip install -r requirements.txt`

For CTransformers or OpenAI functionality, you will need to install these packages separately:

`pip install ctransformers`
`pip install openai`

For open-source embeddings, you will need to install sentence-transformers:

`pip install sentence-transformers`

Duplicate `example.env` to `.env` and adjust as necessary:
OPENAI_API_KEY=  # Your OpenAI API key
CTRANSFORMERS_MODEL=  # Model name from HuggingFace Hub or a local model path
EMB_MODEL=all-MiniLM-L6-v2  # Embedding model
SLACK_BOT_TOKEN=xoxb-...  # Slack API bot token
SLACK_APP_TOKEN=xapp-...  # Slack API app token
PERMISSIONS_PASSWORD=CHANGEME  # Password to activate the /permissions command
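As a reminder of how such variables are typically consumed (the repository may read them differently), a minimal sketch using python-dotenv:

```python
import os

from dotenv import load_dotenv

load_dotenv()  # read .env from the working directory
openai_key = os.environ.get("OPENAI_API_KEY")
emb_model = os.environ.get("EMB_MODEL", "all-MiniLM-L6-v2")
```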
To start the bot, simply run:
`python main.py`

This file contains the basic configuration to run the bot:
from src.slackbot import SlackBot
from src.handlers import create_handlers
# Set model_type
# OpenAI
# Llama (CTransformers)
# FakeLLM (just for testing)
model_type='OpenAI'
bot = SlackBot(name='SlackBot', model_type=model_type)
# Set configuration
config = dict(model_name="gpt-3.5-turbo", temperature=0.8, max_tokens=500)
# Initialize LLM and embeddings
# max_tokens_threads refers to the max tokens to consider in a thread message history
bot.initialize_llm(max_tokens_threads=2000, config=config)
# If you don't want to use OpenAI embeddings, pass the model_type parameter to use the model from the EMB_MODEL env variable
bot.initialize_embeddings() # model_type='llama' for HuggingFaceEmbeddings
# Create handlers for commands /ask, /modify_bot, /bot_info and bot mentions
create_handlers(bot)
### You can create new handlers for other commands as follows
# @bot.app.command("/foo")
# async def handle_foo(say, respond, ack, command):
#     await ack()
#     # do something...
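For instance, a hedged sketch of a complete custom handler extending the main.py above, following the commented stub; the `/echo` command and its behavior are hypothetical and would also need to be created as a slash command in the Slack app.

```python
@bot.app.command("/echo")
async def handle_echo(ack, respond, command):
    await ack()                             # acknowledge the command quickly
    text = command.get("text", "")
    await respond(f"You said: {text}")      # reply visible only to the caller
```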
The bot requires the following permissions:

- Enable Socket Mode
- Activate Incoming Webhooks
- Create Slash Commands:
  - `/ask`: Ask a question or make a request
  - `/modify_bot`: Modify the bot's configuration for the current channel
  - `/bot_info`: Get the prompt and temperature of the bot in the current channel
  - `/permissions`: (optional) Modify which users can interact with the bot
  - `/edit_docs`: Modify documents uploaded to the bot in the current channel
- Enable Events
  - Subscribe to `app_mention`
- Set Scopes
  - `app_mentions:read`
  - `channels:history`
  - `channels:join`
  - `channels:read`
  - `chat:write`
  - `files:read`
  - `im:write` (to notify users about changes in permissions)
  - `users:read` (to get the list of users)

Note that for groups (private channels) you will also require `groups:history`, `groups:join` and `groups:read`.
- Add a command to modify which users can interact with the bot. The command should be initialized using a password, e.g. `/permissions <PASSWORD>`
- An `ingest` method to create a vector database and use a QA retriever
- Add a custom CallbackHandler to update the messages on the go
- A modal view to modify file descriptions
- A method to remove files from the vectorstore
- A way to delete unused QA threads
- Create a doc retriever for each document; currently the same approach as privateGPT is used
- Create tests


