App template: Deploy with GCP | Deploy with Render

Fully private RAG with Pathway

This is the accompanying code for deploying the adaptive RAG technique with Pathway.

To learn more about building and deploying RAG applications with Pathway, including containerization, refer to the demo question answering app.

Introduction

This app relies on modules provided under pathway.xpacks.llm.

BaseRAGQuestionAnswerer is the base class to build RAG applications with Pathway vector store and Pathway xpack components. It is meant to get you started with your RAG application right away.

This example uses the AdaptiveRAGQuestionAnswerer that extends the BaseRAGQuestionAnswerer with the adaptive retrieval technique.

The app then replies to requests at the /v1/pw_ai_answer endpoint.

The pw_ai_query function takes the pw_ai_queries table as input. This table contains the prompt and the other arguments coming from the POST request; see the BaseRAGQuestionAnswerer class and the defined schemas to learn more about getting inputs from POST requests. We use the data in this table to call our adaptive retrieval logic.

To do that, we use the answer_with_geometric_rag_strategy_from_index implementation provided under pathway.xpacks.llm.question_answering. This function takes an index, an LLM, a prompt, and adaptive parameters such as the starting number of documents. It then iteratively asks the question to the LLM with an increasing number of context documents retrieved from the index. We also set strict_prompt=True, which adjusts the prompt with additional instructions and adds rails to parse the response.
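For illustration, the call looks roughly like the sketch below. Here `queries`, `index`, and `chat` stand for the query table, the document index, and the LLM wrapper built elsewhere in the app, and the exact parameter names may differ slightly between Pathway versions.

import pathway as pw
from pathway.xpacks.llm.question_answering import (
    answer_with_geometric_rag_strategy_from_index,
)

# Sketch only: `queries`, `index` and `chat` are assumed to exist already.
results = queries.select(
    *pw.this,
    result=answer_with_geometric_rag_strategy_from_index(
        queries.prompt,           # question column coming from the POST request
        index,                    # document index used for retrieval
        documents_column="text",  # indexed column holding the document text
        llm=chat,
        n_starting_documents=2,   # start with a small context...
        factor=2,                 # ...and geometrically grow it on each retry
        max_iterations=4,
        strict_prompt=True,       # extra instructions/rails for small local models
    ),
)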

We encourage you to check the implementation of answer_with_geometric_rag_strategy_from_index.

Modifying the code

Under the main function, we define:

  • input folders
  • LLM
  • embedder
  • index
  • host and port to run the app
  • run options (caching, cache folder)

By default, we use a locally deployed Mistral 7B. The app is LLM agnostic, so it is possible to use any LLM. You can modify any of the components by checking the options from the imported modules: from pathway.xpacks.llm import embedders, llms, parsers, splitters.
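For example, the default components could be wired up or swapped along these lines. This is a minimal sketch; the class names and arguments are assumptions to be checked against the module documentation for your Pathway version.

from pathway.xpacks.llm import embedders, llms, splitters

# Local Mistral 7B served by Ollama, accessed through the LiteLLM wrapper
chat = llms.LiteLLMChat(
    model="ollama/mistral",
    api_base="http://localhost:11434",  # or take it from LLM_API_BASE
    temperature=0,
)

# Local embedder and a simple splitter; both can be replaced freely
embedder = embedders.SentenceTransformerEmbedder(model="all-MiniLM-L6-v2")
splitter = splitters.TokenCountSplitter(max_tokens=400)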

It is also possible to easily create new components by extending the pw.UDF class and implementing the __wrapped__ function.
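As a minimal sketch, a custom component could look like this (the cleaning logic below is just a placeholder):

import pathway as pw

class CleanText(pw.UDF):
    """Example component: normalizes whitespace and lowercases text."""

    def __wrapped__(self, text: str) -> str:
        return " ".join(text.lower().split())

clean_text = CleanText()
# The instance can then be applied to a column like any other UDF, e.g.:
# documents = documents.select(text=clean_text(pw.this.text))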

Deploying and using a local LLM

Due to its popularity and ease of use, we decided to run Mistral 7B with Ollama.

To run the local LLM, follow these steps:

  • Download Ollama from ollama.com/download
  • In your terminal, run ollama serve
  • In another terminal, run ollama run mistral

You can now test it with the following request:

curl -X POST http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Here is a story about llamas eating grass"
}'

Running the app

First, make sure your local LLM is up and running. By default, the pipeline tries to access the LLM at http://localhost:11434. You can change that by setting the LLM_API_BASE environment variable or by creating a .env file that sets its value.
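For example, a .env file in this folder containing a single line is enough:

LLM_API_BASE=http://localhost:11434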

With Docker

In order to let the pipeline pick up each change in your local files, you need to mount the data folder into the container. The following commands show how to do that.

# Build the image in this folder
docker build -t privaterag .

# Run the image and mount the `data` folder into the container
# -e passes the value of the LLM_API_BASE environment variable
docker run -v ./data:/app/data -e LLM_API_BASE -p 8000:8000 privaterag

Locally

To run locally, you need to install Pathway with the LLM dependencies:

pip install pathway[all]

Then change your directory in the terminal to this folder and run the app:

python app.py

Using the app

Finally, query the application with:

curl -X 'POST' \
  'http://0.0.0.0:8000/v1/pw_ai_answer' \
  -H 'accept: */*' \
  -H 'Content-Type: application/json' \
  -d '{
  "prompt": "What is the start date of the contract?"
}'

The response should look like:

December 21, 2015 [6]

Pathway Team

Tags: LLM, RAG, Adaptive RAG, prompt engineering, explainability, mistral, ollama, private rag, local rag, ollama rag, docker
April 22, 2024