Merged
8 changes: 7 additions & 1 deletion docs/getstarted/install.md
@@ -17,4 +17,10 @@ If you're planning to contribute and make modifications to the code, ensure that
```bash
git clone https://github.com/explodinggradients/ragas.git
pip install -e .
```

!!! note "LangChain OpenAI dependency versions"
    If you use `langchain_openai` (e.g., `ChatOpenAI`), install `langchain-core` and `langchain-openai` explicitly to avoid version mismatches. Adjust the bounds to match your environment; installing both explicitly helps prevent strict dependency conflicts.

    ```bash
    pip install -U "langchain-core>=0.2,<0.3" "langchain-openai>=0.1,<0.2" openai
    ```
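To confirm the resulting environment actually satisfies these bounds, a quick check with the standard library can help. This is a minimal sketch: the package names are the ones pinned above, and the helper function is introduced here for illustration only.

```python
from importlib.metadata import PackageNotFoundError, version


def installed_versions(packages):
    """Map each distribution name to its installed version, or None if absent."""
    found = {}
    for name in packages:
        try:
            found[name] = version(name)
        except PackageNotFoundError:
            found[name] = None
    return found


# Report which of the pinned packages are present in this environment.
report = installed_versions(["langchain-core", "langchain-openai", "openai"])
for name, ver in report.items():
    print(f"{name}: {ver or 'not installed'}")
```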
9 changes: 6 additions & 3 deletions docs/getstarted/rag_eval.md
@@ -17,6 +17,9 @@ openai_client = openai.OpenAI()
embeddings = OpenAIEmbeddings(client=openai_client)
```

!!! note "OpenAI Embeddings API"
    `ragas.embeddings.OpenAIEmbeddings` exposes `embed_text` (single text) and `embed_texts` (batch), not the `embed_query`/`embed_documents` methods some LangChain wrappers provide. The example below uses `embed_texts` for documents and `embed_text` for the query. See the [OpenAIEmbeddings reference](https://docs.ragas.io/en/stable/references/embeddings/#ragas.embeddings.OpenAIEmbeddings).

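The interface difference can be illustrated with a stand-in class. `StubEmbeddings` is a hypothetical stub invented for this sketch; the real implementation is `ragas.embeddings.OpenAIEmbeddings`, which calls the OpenAI API instead of returning toy vectors.

```python
class StubEmbeddings:
    """Illustrates the embed_text / embed_texts interface shape only."""

    def embed_text(self, text: str) -> list[float]:
        # One input string -> one embedding vector (toy values here).
        return [float(len(text)), 0.0]

    def embed_texts(self, texts: list[str]) -> list[list[float]]:
        # A batch of strings -> one vector per string.
        return [self.embed_text(t) for t in texts]


embeddings = StubEmbeddings()
doc_vectors = embeddings.embed_texts(["doc one", "doc two"])  # list of vectors
query_vector = embeddings.embed_text("a query")               # single vector
```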
### Build a Simple RAG System

To build a simple RAG system, we need to define the following components:
@@ -43,14 +46,14 @@ To build a simple RAG system, we need to define the following components:
def load_documents(self, documents):
"""Load documents and compute their embeddings."""
self.docs = documents
self.doc_embeddings = self.embeddings.embed_documents(documents)
self.doc_embeddings = self.embeddings.embed_texts(documents)

def get_most_relevant_docs(self, query):
"""Find the most relevant document for a given query."""
if not self.docs or not self.doc_embeddings:
raise ValueError("Documents and their embeddings are not loaded.")

query_embedding = self.embeddings.embed_query(query)
query_embedding = self.embeddings.embed_text(query)
similarities = [
np.dot(query_embedding, doc_emb)
/ (np.linalg.norm(query_embedding) * np.linalg.norm(doc_emb))
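The expression inside that list comprehension is plain cosine similarity: the dot product of the two vectors divided by the product of their norms. Isolated, it looks like this (a minimal NumPy sketch with made-up vectors; the function name is ours, not part of the ragas API):

```python
import numpy as np


def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: dot product over norms."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # same direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```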
@@ -197,4 +200,4 @@ If you want help with improving and scaling up your AI application using evals.

## Up Next

- [Generate test data for evaluating RAG](rag_testset_generation.md)