examples/pipelines/demo-question-answering/README.md (+62 -8)
@@ -11,7 +11,7 @@
# Pathway RAG app with always up-to-date knowledge
- This demo shows how to create a RAG application using [Pathway](https://github.com/pathwaycom/pathway) that provides always up-to-date knowledge to your LLM without the need for a separate ETL.
+ This demo shows how to create a real-time RAG application using [Pathway](https://github.com/pathwaycom/pathway) that provides always up-to-date knowledge to your LLM without the need for a separate ETL.
You can have a preview of the demo [here](https://pathway.com/solutions/ai-pipelines).
@@ -20,7 +20,7 @@ This significantly reduces the developer's workload.
This demo allows you to:
- - Create a vector store with real-time document indexing from Google Drive, Microsoft 365 SharePoint, or a local directory;
+ - Create a Document store with real-time document indexing from Google Drive, Microsoft 365 SharePoint, or a local directory;
- Connect an OpenAI LLM model of choice to your knowledge base;
- Get quality, accurate, and precise responses to your questions;
- Ask questions about folders, files or all your documents easily, with the help of filtering options;
@@ -40,7 +40,7 @@ Note: This app relies on [Document Store](https://pathway.com/developers/api-doc
## Summary of available endpoints
- This example spawns a lightweight webserver that accepts queries on six possible endpoints, divided into two categories: document indexing and RAG with LLM.
+ This example spawns a lightweight webserver using Pathway's [`QASummaryRestServer`](https://pathway.com/developers/api-docs/pathway-xpacks-llm/servers#pathway.xpacks.llm.servers.QASummaryRestServer) that accepts queries on five possible endpoints, divided into two categories: document indexing and RAG with LLM.
### Document Indexing capabilities
- `/v1/retrieve` to perform similarity search;
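For illustration, a retrieval request to that endpoint might look like the sketch below. The port `8000` and the JSON field names (`query`, `k`) are assumptions based on Pathway's DocumentStore defaults rather than values shown in this diff; check `app.yaml` for the port actually configured in the demo.

```bash
# Ask the index for the 3 chunks closest to the question.
# Port and field names are assumptions; adjust to match your app.yaml.
curl -X POST http://localhost:8000/v1/retrieve \
  -H "Content-Type: application/json" \
  -d '{"query": "What is the company revenue guidance?", "k": 3}'
```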
@@ -55,11 +55,25 @@ See the [using the app section](###Using-the-app) to learn how to use the provid
## How it works
- This pipeline uses several Pathway connectors to read the data from the local drive, Google Drive, and Microsoft SharePoint sources. It allows you to poll the changes with low latency and to do the modifications tracking. So, if something changes in the tracked files, the corresponding change is reflected in the internal collections. The contents are read into a single Pathway Table as binary objects.
+ 1. **Data Ingestion**
+ We define one or more sources in `app.yaml` (local directories, Google Drive, Microsoft SharePoint, etc.).
+ - The provided demo references a local folder `data/` by default.
+ - The code can poll these sources at configured intervals, so when new documents appear or existing ones change, they are automatically parsed and re-indexed in real time.
- After that, those binary objects are parsed with [unstructured](https://unstructured.io/) library and split into chunks. With the usage of OpenAI API, the pipeline embeds the obtained chunks.
+ 2. **Parsing & Splitting**
+ Using [Unstructured](https://unstructured.io/) (through Pathway's [`ParseUnstructured`](https://pathway.com/developers/api-docs/pathway-xpacks-llm/parsers#pathway.xpacks.llm.parsers.ParseUnstructured)) and [`TokenCountSplitter`](https://pathway.com/developers/api-docs/pathway-xpacks-llm/splitters#pathway.xpacks.llm.splitters.TokenCountSplitter), documents are chunked into smaller parts.
- Finally, the embeddings are indexed with the capabilities of Pathway's machine-learning library. The user can then query the created index with simple HTTP requests to the endpoints mentioned above.
+ 3. **Embedding**
+ Via [`OpenAIEmbedder`](https://pathway.com/developers/api-docs/pathway-xpacks-llm/embedders#pathway.xpacks.llm.embedders.OpenAIEmbedder), the chunks are turned into embeddings. You can substitute your own embedder if you wish.
+
+ 4. **Indexing**
+ Using [`BruteForceKnnFactory`](https://pathway.com/developers/api-docs/pathway-stdlib/indexing#pathway.stdlib.indexing.BruteForceKnnFactory), the embeddings are stored in a vector index. This is all streaming as well, so new embeddings are added or updated automatically.
+
+ 5. **Serving**
+ - We create a [`SummaryQuestionAnswerer`](https://pathway.com/developers/api-docs/pathway-xpacks-llm/question_answering#pathway.xpacks.llm.question_answering.SummaryQuestionAnswerer) (as specified in `app.py`), which can handle both question-answering and summarization requests.
+ - A web server, [`QASummaryRestServer`](https://pathway.com/developers/api-docs/pathway-xpacks-llm/servers#pathway.xpacks.llm.servers.QASummaryRestServer), exposes multiple endpoints for retrieval, Q&A, summarization, and more.
+
+ Because Pathway is fully incremental, any changes to your source files immediately flow through parsing, embedding, indexing, and ultimately get reflected in the answers from the LLM. The user can then query the created index with simple HTTP requests to the endpoints mentioned above.
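Taken together, a condensed sketch of how `app.py` might wire these five steps is shown below. The class names come from the Pathway docs linked above; the constructor arguments (model names, splitter limits, host/port) are illustrative assumptions, not the demo's literal configuration, which lives in `app.yaml`.

```python
import pathway as pw
from pathway.stdlib.indexing import BruteForceKnnFactory
from pathway.xpacks.llm import embedders, llms, parsers, splitters
from pathway.xpacks.llm.document_store import DocumentStore
from pathway.xpacks.llm.question_answering import SummaryQuestionAnswerer
from pathway.xpacks.llm.servers import QASummaryRestServer

# 1. Data ingestion: stream files from the local folder; Google Drive or
#    SharePoint connectors can be appended to this list of sources.
sources = [pw.io.fs.read("./data", format="binary", with_metadata=True)]

# 2. Parsing & splitting.
parser = parsers.ParseUnstructured()
splitter = splitters.TokenCountSplitter(max_tokens=400)  # illustrative limit

# 3. Embedding.
embedder = embedders.OpenAIEmbedder(model="text-embedding-3-small")

# 4. Indexing: brute-force kNN over the embeddings.
retriever_factory = BruteForceKnnFactory(embedder=embedder)
document_store = DocumentStore(
    docs=sources,
    retriever_factory=retriever_factory,
    parser=parser,
    splitter=splitter,
)

# 5. Serving: question answering + summarization behind a REST server.
llm = llms.OpenAIChat(model="gpt-4o-mini")
question_answerer = SummaryQuestionAnswerer(llm=llm, indexer=document_store)
server = QASummaryRestServer("0.0.0.0", 8000, question_answerer)
server.run()  # blocks and runs the streaming Pathway computation
```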
## Pipeline Organization
@@ -81,7 +95,7 @@ You can also use user-defined functions using the [`@pw.udf`](https://pathway.co
- RAG
- Pathway provides all the tools to create a RAG application and query it: a [Pathway vector store](https://pathway.com/developers/api-docs/pathway-xpacks-llm/splitters) and a web server (defined with the [REST connector](https://pathway.com/developers/api-docs/pathway-io/http#pathway.io.http.rest_connector)).
+ Pathway provides all the tools to create a RAG application and query it: a [Pathway Document store](https://pathway.com/developers/api-docs/pathway-xpacks-llm/document_store#pathway.xpacks.llm.document_store.DocumentStore) and a web server (defined with the [REST connector](https://pathway.com/developers/api-docs/pathway-io/http#pathway.io.http.rest_connector)).
They are defined in our demo in the main class `PathwayRAG` along with the different functions and schemas used by the RAG.
For the sake of the demo, we kept the app simple, consisting of the main components you would find in a regular RAG application. It can be further enhanced with query writing methods, re-ranking layer and custom splitting steps.
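The hunk header above also mentions `@pw.udf`, which lets you drop custom Python transformations into the pipeline. A minimal sketch follows; the function and column names are hypothetical, chosen only for illustration.

```python
import pathway as pw

@pw.udf
def normalize(text: str) -> str:
    # Runs row by row on a Pathway column, e.g. to clean chunks before embedding.
    return " ".join(text.split()).lower()

# Hypothetical usage on a table with a `text` column:
# cleaned = documents.select(text=normalize(pw.this.text))
```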
+ The [`DocumentStore`](https://pathway.com/developers/api-docs/pathway-xpacks-llm/document_store#pathway.xpacks.llm.document_store.DocumentStore) makes building and customizing a document indexing pipeline straightforward. It processes documents and enables querying the closest documents to a given query based on your chosen indexing strategy. Here's how you can use a hybrid indexing approach, combining vector-based and text-based retrieval:
+
+ Example: Hybrid Indexing with `BruteForceKNN` and `TantivyBM25`
+
+ The following example demonstrates how to configure and use the [HybridIndex](/developers/api-docs/indexing#pathway.stdlib.indexing.HybridIndex) that combines:
+
+ - **`BruteForceKNN`**: A vector-based index leveraging embeddings for semantic similarity search.
+ - **[`TantivyBM25`](/developers/api-docs/indexing#pathway.stdlib.indexing.TantivyBM25)**: A text-based index using BM25 for keyword matching.
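A sketch of what that hybrid configuration could look like, assuming the factory-style constructors from `pathway.stdlib.indexing` (`BruteForceKnnFactory`, `TantivyBM25Factory`, `HybridIndexFactory`); the argument names are illustrative and may differ in the current Pathway release.

```python
from pathway.stdlib.indexing import (
    BruteForceKnnFactory,
    HybridIndexFactory,
    TantivyBM25Factory,
)
from pathway.xpacks.llm import embedders
from pathway.xpacks.llm.document_store import DocumentStore

embedder = embedders.OpenAIEmbedder(model="text-embedding-3-small")

# Combine semantic retrieval (kNN over embeddings) with lexical retrieval (BM25).
retriever_factory = HybridIndexFactory(
    retriever_factories=[
        BruteForceKnnFactory(embedder=embedder),
        TantivyBM25Factory(),
    ]
)

# `sources` is the same list of connectors used elsewhere in the pipeline:
# document_store = DocumentStore(docs=sources, retriever_factory=retriever_factory)
```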
examples/pipelines/gpt_4o_multimodal_rag/README.md (+2 -1)
@@ -13,7 +13,7 @@
## **Overview**
- This app template showcases how you can build a multimodal RAG application and launch a document processing pipeline that utilizes `GPT-4o` for both parsing and generation tasks. Pathway processes unstructured financial documents within specified directories, extracting and storing the information in a scalable in-memory vector index. This index is optimized for dynamic RAG, ensuring that search results are continuously updated as documents are modified or new files are added.
+ This app template showcases how you can build a multimodal RAG application and launch a document processing pipeline that utilizes a vision language model like `GPT-4o` for parsing. Pathway processes unstructured financial documents within specified directories, extracting and storing the information in a scalable in-memory index. This index is optimized for dynamic RAG, ensuring that search results are continuously updated as documents are modified or new files are added.
Using this approach, you can make your AI application run in permanent connection with your drive, in sync with your documents which include visually formatted elements: tables, charts, images, etc.
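The template's actual parsing is wired through Pathway's LLM xpack, but the underlying idea, handing a rendered page to a vision language model and asking for structured text back, can be sketched with the plain OpenAI client. The file name and prompt below are hypothetical; this is an illustration, not the pipeline's real parser code.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A rendered PDF page containing a table or chart (hypothetical file name).
with open("page_12.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Extract every table on this page as GitHub-flavored markdown."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```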
@@ -155,6 +155,7 @@ By default, the app uses a local data source to read documents from the `data` f
> Note: The recommended way of running Pathway on Windows is Docker; refer to the [Running with Docker section](#with-docker).
First, make sure to install the requirements by running:
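The command block itself falls outside the hunk's context lines shown here; for this template it is presumably the standard:

```bash
pip install -r requirements.txt
```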