Merged · 48 commits
- `4d717b1` Merge branch 'development' into release (karthikscale3, Apr 24, 2024)
- `0233826` Merge branch 'main' of github.com:Scale3-Labs/langtrace-python-sdk in… (karthikscale3, Apr 28, 2024)
- `7f4e951` Merge branch 'development' into release (karthikscale3, Apr 28, 2024)
- `81a6ca0` Merge (karthikscale3, Jun 13, 2024)
- `0c19f77` Merge branch 'development' into release (karthikscale3, Jun 13, 2024)
- `c3a6ccf` remove logs (karthikscale3, Jun 13, 2024)
- `a99cf10` remove requirements (karthikscale3, Jun 13, 2024)
- `1379b27` Merge branch 'main' of github.com:Scale3-Labs/langtrace-python-sdk in… (karthikscale3, Jun 17, 2024)
- `dae04e7` Merge branch 'development' into release (karthikscale3, Jun 17, 2024)
- `129e927` Merge branch 'main' of github.com:Scale3-Labs/langtrace-python-sdk in… (karthikscale3, Jun 24, 2024)
- `16e67f9` Merge branch 'development' into release (karthikscale3, Jun 24, 2024)
- `e604e93` Bump version (karthikscale3, Jun 24, 2024)
- `7e00473` Merge branch 'main' of github.com:Scale3-Labs/langtrace-python-sdk in… (karthikscale3, Jun 24, 2024)
- `6ac71aa` Merge branch 'development' into release (karthikscale3, Jun 24, 2024)
- `c39bf01` Merge branch 'main' of github.com:Scale3-Labs/langtrace-python-sdk in… (karthikscale3, Jun 24, 2024)
- `f89e38c` Merge branch 'development' into release (karthikscale3, Jun 24, 2024)
- `e95e743` Merge branch 'main' of github.com:Scale3-Labs/langtrace-python-sdk in… (karthikscale3, Jul 19, 2024)
- `c62e803` Squash (karthikscale3, Jul 25, 2024)
- `d7fd3fb` Merge (karthikscale3, Jul 25, 2024)
- `c4ea507` Merge branch 'development' into release (karthikscale3, Jul 25, 2024)
- `4c74fd8` Merge (karthikscale3, Jul 31, 2024)
- `9a83e20` Merge branch 'development' into release (karthikscale3, Jul 31, 2024)
- `09d5631` Merge (karthikscale3, Aug 3, 2024)
- `508e72b` Merge (karthikscale3, Aug 3, 2024)
- `ad44fa3` Merge branch 'main' of github.com:Scale3-Labs/langtrace-python-sdk in… (karthikscale3, Aug 13, 2024)
- `ad168b3` Merge branch 'development' into release (karthikscale3, Aug 13, 2024)
- `6876f92` Merge branch 'main' of github.com:Scale3-Labs/langtrace-python-sdk in… (karthikscale3, Aug 30, 2024)
- `630169a` Merge branch 'development' into release (karthikscale3, Aug 30, 2024)
- `0e1aae3` Merge branch 'main' of github.com:Scale3-Labs/langtrace-python-sdk in… (karthikscale3, Sep 1, 2024)
- `c266698` Merge branch 'development' into release (karthikscale3, Sep 1, 2024)
- `5b9895f` merge (karthikscale3, Sep 4, 2024)
- `04fd825` Merge branch 'development' into release (karthikscale3, Sep 4, 2024)
- `510e4b8` Merge branch 'main' of github.com:Scale3-Labs/langtrace-python-sdk in… (karthikscale3, Sep 6, 2024)
- `e63bee2` Merge branch 'development' into release (karthikscale3, Sep 6, 2024)
- `9741a3e` Merge branch 'main' of github.com:Scale3-Labs/langtrace-python-sdk in… (karthikscale3, Sep 8, 2024)
- `4f7f3c4` Merge branch 'development' into release (karthikscale3, Sep 8, 2024)
- `01dc2af` add sentry integration (rohit-kadhe, Sep 10, 2024)
- `f45110f` update readme (rohit-kadhe, Sep 10, 2024)
- `587494e` Merge pull request #341 from Scale3-Labs/rohit/S3EN-2793-sentry-integ… (rohit-kadhe, Sep 10, 2024)
- `2a44361` Merge branch 'main' of github.com:Scale3-Labs/langtrace-python-sdk in… (alizenhom, Sep 11, 2024)
- `a9115a0` hotfix for checking package installed (alizenhom, Sep 11, 2024)
- `c31c149` Merge branch 'main' of github.com:Scale3-Labs/langtrace-python-sdk in… (alizenhom, Sep 11, 2024)
- `c9ce384` Merge branch 'main' of github.com:Scale3-Labs/langtrace-python-sdk in… (alizenhom, Sep 11, 2024)
- `77024fb` Support Autogen (#242) (alizenhom, Sep 11, 2024)
- `c514af4` Support genai and also add token reporting and other data points (#345) (rohit-kadhe, Sep 11, 2024)
- `79d51d8` fix: weaviate datetime handling for request and response (#346) (darshit-s3, Sep 11, 2024)
- `32fac05` merge (karthikscale3, Sep 11, 2024)
- `4920395` Merge branch 'development' into release (karthikscale3, Sep 11, 2024)
54 changes: 30 additions & 24 deletions README.md
langtrace.init(custom_remote_exporter=<your_exporter>, batch=<True or False>)
| `api_host` | `Optional[str]` | `https://langtrace.ai/` | The API host for the remote exporter. |
| `disable_instrumentations` | `Optional[DisableInstrumentations]` | `None` | You can pass an object to disable instrumentation for specific vendors ex: `{'only': ['openai']}` or `{'all_except': ['openai']}` |
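The two documented shapes for `disable_instrumentations` can be illustrated with a small sketch. The `resolve_enabled` helper below is hypothetical (not part of the SDK), and its semantics are an assumption read off the option names: `{'only': [...]}` disables only the listed vendors, `{'all_except': [...]}` disables everything except them.

```python
# Hypothetical helper mirroring the documented shapes of
# `disable_instrumentations`; returns the set of vendors left enabled.
def resolve_enabled(all_vendors, disable_instrumentations=None):
    if not disable_instrumentations:
        return set(all_vendors)
    if "only" in disable_instrumentations:
        # Disable only the listed vendors.
        return set(all_vendors) - set(disable_instrumentations["only"])
    if "all_except" in disable_instrumentations:
        # Disable everything except the listed vendors.
        return set(all_vendors) & set(disable_instrumentations["all_except"])
    raise ValueError("expected 'only' or 'all_except'")


vendors = ["openai", "anthropic", "cohere"]
print(sorted(resolve_enabled(vendors, {"only": ["openai"]})))        # ['anthropic', 'cohere']
print(sorted(resolve_enabled(vendors, {"all_except": ["openai"]})))  # ['openai']
```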

### Error Reporting to Langtrace

By default, all SDK errors are reported to Langtrace via Sentry. To disable this, set the environment variable `LANGTRACE_ERROR_REPORTING=False`.
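A plausible sketch of how such a flag is read follows; `error_reporting_enabled` is a hypothetical helper, not the SDK's actual implementation, and the exact string comparison is an assumption.

```python
import os


def error_reporting_enabled() -> bool:
    # Assumed semantics: reporting stays on unless the variable is
    # explicitly set to the string "False", matching the documented default.
    return os.environ.get("LANGTRACE_ERROR_REPORTING", "True") != "False"


os.environ["LANGTRACE_ERROR_REPORTING"] = "False"
print(error_reporting_enabled())  # False
```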

### Additional Customization

- `@with_langtrace_root_span` - this decorator is designed to organize and relate different spans, in a hierarchical manner. When you're performing multiple operations that you want to monitor together as a unit, this function helps by establishing a "parent" (`LangtraceRootSpan` or whatever is passed to `name`) span. Then, any calls to the LLM APIs made within the given function (fn) will be considered "children" of this parent span. This setup is especially useful for tracking the performance or behavior of a group of operations collectively, rather than individually.
prompt = get_prompt_from_registry(<Registry ID>, options={"prompt_version": 1, "
```

### Opt out of tracing prompt and completion data

By default, prompt and completion data are captured. If you would like to opt out, set the following environment variable:

`TRACE_PROMPT_COMPLETION_DATA=false`

Langtrace automatically captures traces from the following vendors:

| Vendor | Type | Typescript SDK | Python SDK |
| ------------- | --------------- | ------------------ | ------------------------------- |
| OpenAI | LLM | :white_check_mark: | :white_check_mark: |
| Anthropic | LLM | :white_check_mark: | :white_check_mark: |
| Azure OpenAI | LLM | :white_check_mark: | :white_check_mark: |
| Cohere | LLM | :white_check_mark: | :white_check_mark: |
| Groq | LLM | :x: | :white_check_mark: |
| Perplexity | LLM | :white_check_mark: | :white_check_mark: |
| Gemini | LLM | :x: | :white_check_mark: |
| Mistral | LLM | :x: | :white_check_mark: |
| Langchain | Framework | :x: | :white_check_mark: |
| LlamaIndex | Framework | :white_check_mark: | :white_check_mark: |
| Langgraph | Framework | :x: | :white_check_mark: |
| DSPy | Framework | :x: | :white_check_mark: |
| CrewAI | Framework | :x: | :white_check_mark: |
| Ollama | Framework | :x: | :white_check_mark: |
| VertexAI | Framework | :x: | :white_check_mark: |
| Vercel AI SDK | Framework | :white_check_mark: | :x: |
| EmbedChain | Framework | :x: | :white_check_mark: |
| Autogen | Framework | :x: | :white_check_mark: |
| Pinecone | Vector Database | :white_check_mark: | :white_check_mark: |
| ChromaDB | Vector Database | :white_check_mark: | :white_check_mark: |
| QDrant | Vector Database | :white_check_mark: | :white_check_mark: |
| Weaviate | Vector Database | :white_check_mark: | :white_check_mark: |
| PGVector | Vector Database | :white_check_mark: | :white_check_mark: (SQLAlchemy) |

---

1 change: 1 addition & 0 deletions pyproject.toml
dependencies = [
    'sqlalchemy',
    'fsspec>=2024.6.0',
    "transformers>=4.11.3",
    "sentry-sdk>=2.14.0",
]

requires-python = ">=3.9"
8 changes: 8 additions & 0 deletions src/examples/autogen_example/__init__.py
from .main import main as autogen_main
from .main import comedy_show


class AutoGenRunner:
    def run(self):
        # autogen_main()
        comedy_show()
72 changes: 72 additions & 0 deletions src/examples/autogen_example/main.py
from langtrace_python_sdk import langtrace
from autogen import ConversableAgent
from dotenv import load_dotenv
from autogen.coding import LocalCommandLineCodeExecutor
import tempfile


load_dotenv()
langtrace.init(write_spans_to_console=False)
# agentops.init(api_key=os.getenv("AGENTOPS_API_KEY"))
# Create a temporary directory to store the code files.
temp_dir = tempfile.TemporaryDirectory()


# Create a local command line code executor.
executor = LocalCommandLineCodeExecutor(
    timeout=10,  # Timeout for each code execution in seconds.
    work_dir=temp_dir.name,  # Use the temporary directory to store the code files.
)


def main():
    agent = ConversableAgent(
        "chatbot",
        llm_config={"config_list": [{"model": "gpt-4"}], "cache_seed": None},
        code_execution_config=False,  # Turn off code execution; by default it is off.
        function_map=None,  # No registered functions; by default it is None.
        human_input_mode="NEVER",  # Never ask for human input.
    )

    reply = agent.generate_reply(
        messages=[{"content": "Tell me a joke.", "role": "user"}]
    )
    return reply


def comedy_show():
    cathy = ConversableAgent(
        name="cathy",
        system_message="Your name is Cathy and you are a part of a duo of comedians.",
        llm_config={
            "config_list": [{"model": "gpt-4o-mini", "temperature": 0.9}],
            "cache_seed": None,
        },
        description="Cathy is a comedian",
        max_consecutive_auto_reply=10,
        code_execution_config={
            "executor": executor
        },  # Use the local command line code executor.
        function_map=None,
        chat_messages=None,
        silent=True,
        default_auto_reply="Sorry, I don't know what to say.",
        human_input_mode="NEVER",  # Never ask for human input.
    )

    joe = ConversableAgent(
        "joe",
        system_message="Your name is Joe and you are a part of a duo of comedians.",
        llm_config={
            "config_list": [{"model": "gpt-4o-mini", "temperature": 0.7}],
            "cache_seed": None,
        },
        human_input_mode="NEVER",  # Never ask for human input.
    )

    result = joe.initiate_chat(
        recipient=cathy, message="Cathy, tell me a joke.", max_turns=2
    )

    return result
2 changes: 2 additions & 0 deletions src/examples/langchain_example/__init__.py
from examples.langchain_example.langchain_google_genai import basic_google_genai
from .basic import basic_app, rag, load_and_split
from langtrace_python_sdk import with_langtrace_root_span

    def run(self):
        rag()
        load_and_split()
        basic_graph_tools()
        basic_google_genai()


class GroqRunner:
29 changes: 29 additions & 0 deletions src/examples/langchain_example/langchain_google_example.py
from langchain_core.messages import HumanMessage
from langchain_google_genai import ChatGoogleGenerativeAI
from langtrace_python_sdk.utils.with_root_span import with_langtrace_root_span
from dotenv import find_dotenv, load_dotenv
from langtrace_python_sdk import langtrace

_ = load_dotenv(find_dotenv())

langtrace.init()


@with_langtrace_root_span("basic_google_genai")
def basic_google_genai():
    llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")
    # example
    message = HumanMessage(
        content=[
            {
                "type": "text",
                "text": "What's in this image?",
            },
        ]
    )
    message_image = HumanMessage(content="https://picsum.photos/seed/picsum/200/300")

    res = llm.invoke([message, message_image])
    # print(res)


basic_google_genai()
1 change: 1 addition & 0 deletions src/langtrace_python_sdk/constants/__init__.py
LANGTRACE_SDK_NAME = "langtrace-python-sdk"
SENTRY_DSN = "https://[email protected]/4507929133056000"
    "GEMINI": "Gemini",
    "MISTRAL": "Mistral",
    "EMBEDCHAIN": "Embedchain",
    "AUTOGEN": "Autogen",
}

LANGTRACE_ADDITIONAL_SPAN_ATTRIBUTES_KEY = "langtrace_additional_attributes"
2 changes: 2 additions & 0 deletions src/langtrace_python_sdk/instrumentation/__init__.py
from .weaviate import WeaviateInstrumentation
from .ollama import OllamaInstrumentor
from .dspy import DspyInstrumentation
from .autogen import AutogenInstrumentation
from .vertexai import VertexAIInstrumentation
from .gemini import GeminiInstrumentation
from .mistral import MistralInstrumentation
    "WeaviateInstrumentation",
    "OllamaInstrumentor",
    "DspyInstrumentation",
    "AutogenInstrumentation",
    "VertexAIInstrumentation",
    "GeminiInstrumentation",
    "MistralInstrumentation",
3 changes: 3 additions & 0 deletions src/langtrace_python_sdk/instrumentation/autogen/__init__.py
from .instrumentation import AutogenInstrumentation

__all__ = ["AutogenInstrumentation"]
from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
from opentelemetry.trace import get_tracer
from wrapt import wrap_function_wrapper as _W
from importlib_metadata import version as v
from .patch import patch_generate_reply, patch_initiate_chat


class AutogenInstrumentation(BaseInstrumentor):
    def instrumentation_dependencies(self):
        return ["autogen >= 0.1.0"]

    def _instrument(self, **kwargs):
        print("Instrumenting autogen")
        tracer_provider = kwargs.get("tracer_provider")
        tracer = get_tracer(__name__, "", tracer_provider)
        version = v("autogen")
        # Candidates for future instrumentation:
        # conversable_agent.initiate_chat
        # conversable_agent.register_function
        # agent.Agent
        # AgentCreation
        # Tools -> register_for_llm, register_for_execution, register_for_function
        try:
            _W(
                module="autogen.agentchat.conversable_agent",
                name="ConversableAgent.initiate_chat",
                wrapper=patch_initiate_chat(
                    "conversable_agent.initiate_chat", version, tracer
                ),
            )

            _W(
                module="autogen.agentchat.conversable_agent",
                name="ConversableAgent.generate_reply",
                wrapper=patch_generate_reply(
                    "conversable_agent.generate_reply", version, tracer
                ),
            )
        except Exception:
            # Never let a patching failure break the host application.
            pass

    def _uninstrument(self, **kwargs):
        pass
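The instrumentation above relies on wrapt-style wrappers: a patch factory closes over an operation name, a version, and a tracer, and returns a `wrapper(wrapped, instance, args, kwargs)` callable. The self-contained toy below illustrates that pattern without autogen or OpenTelemetry; `ToyTracer`, `Agent`, and `patch_method` are all hypothetical stand-ins, and the manual rebinding stands in for `wrap_function_wrapper`:

```python
# Toy stand-ins: a "tracer" that records span names, and a class to patch.
class ToyTracer:
    def __init__(self):
        self.spans = []

    def start_span(self, name):
        self.spans.append(name)


class Agent:
    def generate_reply(self, message):
        return f"reply to {message}"


def patch_method(operation_name, version, tracer):
    # Factory returning a wrapt-style wrapper(wrapped, instance, args, kwargs).
    def wrapper(wrapped, instance, args, kwargs):
        tracer.start_span(f"{operation_name} v{version}")
        return wrapped(*args, **kwargs)
    return wrapper


tracer = ToyTracer()
original = Agent.generate_reply
w = patch_method("agent.generate_reply", "0.1.0", tracer)
# Manual equivalent of wrap_function_wrapper for this sketch: every call
# now flows through w, which records a span and delegates to the original.
Agent.generate_reply = lambda self, *a, **k: w(
    lambda *a2, **k2: original(self, *a2, **k2), self, a, k
)

print(Agent().generate_reply("hi"))  # reply to hi
print(tracer.spans)  # ['agent.generate_reply v0.1.0']
```

Wrapping at the class level like this is why the real instrumentor can trace every `ConversableAgent` instance without touching user code.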