(migrated from here)
Describe the bug
I'm using the latest otel-lgtm and hooked an ASP.NET backend into it using OpenTelemetry.Exporter.OpenTelemetryProtocol. Metrics work fine, but logs do not. If I run a query over a time range that includes some log entries, I get the following error (I enabled Loki logging to see this):
level=info ts=2025-02-07T03:24:55.591285638Z caller=metrics.go:237 component=querier org_id=fake traceID=1bd6f1bd0c08d322 latency=fast query="{service_name=\"backend_web\"} | json | logfmt | drop __error__,__error_details__" query_hash=2839858091 query_type=limited range_type=range length=24m41.15s start_delta=24m55.591261438s end_delta=14.441261538s step=14s duration=1.738757ms status=500 limit=100 returned_lines=1 throughput=5.3MB total_bytes=9.2kB total_bytes_structured_metadata=5.3kB lines_per_second=20129 total_lines=35 post_filter_lines=35 total_entries=1 store_chunks_download_time=0s queue_time=88.203µs splits=0 shards=0 query_referenced_structured_metadata=false pipeline_wrapper_filtered_lines=0 chunk_refs_fetch_time=179.206µs cache_chunk_req=0 cache_chunk_hit=0 cache_chunk_bytes_stored=0 cache_chunk_bytes_fetched=0 cache_chunk_download_time=0s cache_index_req=0 cache_index_hit=0 cache_index_download_time=0s cache_stats_results_req=0 cache_stats_results_hit=0 cache_stats_results_download_time=0s cache_volume_results_req=0 cache_volume_results_hit=0 cache_volume_results_download_time=0s cache_result_req=0 cache_result_hit=0 cache_result_download_time=0s cache_result_query_length_served=0s cardinality_estimate=0 ingester_chunk_refs=0 ingester_chunk_downloaded=0 ingester_chunk_matches=1 ingester_requests=1 ingester_chunk_head_bytes=9.2kB ingester_chunk_compressed_bytes=0B ingester_chunk_decompressed_bytes=0B ingester_post_filter_lines=35 congestion_control_latency=0s index_total_chunks=0 index_post_bloom_filter_chunks=0 index_bloom_filter_ratio=0.00 index_used_bloom_filters=false index_shard_resolver_duration=0s source=grafana-lokiexplore-app disable_pipeline_wrappers=false has_labelfilter_before_parser=false
level=error ts=2025-02-07T03:24:55.591670751Z caller=retry.go:107 org_id=fake traceID=1bd6f1bd0c08d322 msg="error processing request" try=4 type=queryrange.LokiRequest query="{service_name=\"backend_web\"} | json | logfmt | drop __error__,__error_details__" query_hash=2839858091 start=2025-02-07T03:00:00Z end=2025-02-07T03:24:41.15Z start_delta=24m55.591665251s end_delta=14.441665551s length=24m41.15s retry_in=4.406740283s code=Code(500) err="rpc error: code = Code(500) desc = failed to parse series labels to categorize labels: 1:2: parse error: unexpected \"=\" in label set, expected identifier or \"}\""
Maybe it's because I'm new to this stack, but I have no idea what it's complaining about or how to rectify it. Help would be very much appreciated.
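For context, the logging side of the backend is wired up roughly like the sketch below. This is a minimal reconstruction rather than the exact project code; the service name comes from the query above, and the OTLP exporter endpoint is left at its default (gRPC on http://localhost:4317), which is an assumption about how the container is reached.

```csharp
// Minimal sketch of the presumed Program.cs wiring (reconstruction, not the real project).
using OpenTelemetry.Logs;
using OpenTelemetry.Resources;

var builder = WebApplication.CreateBuilder(args);

builder.Logging.AddOpenTelemetry(logging =>
{
    // Service name matches the service_name label seen in the failing query.
    logging.SetResourceBuilder(
        ResourceBuilder.CreateDefault().AddService("backend_web"));

    // From OpenTelemetry.Exporter.OpenTelemetryProtocol; defaults to gRPC on http://localhost:4317.
    logging.AddOtlpExporter();
});

var app = builder.Build();
app.Run();
```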
To Reproduce
Steps to reproduce the behavior:
- Started Loki via otel-lgtm latest (sha256:7b7644781f8f801bb8639872e6a8aef28ee94b59bc82f41bd75725aef872c02d)
- Started an ASP.NET backend that is integrated via <PackageReference Include="OpenTelemetry.Exporter.OpenTelemetryProtocol" Version="1.11.1" />
- Interacted with the ASP.NET backend to add a few logs to the system (a sketch of what such a log call might look like follows the video below)
- Attempted to view the logs per the below video
repro.mp4
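The "add a few logs" step was just hitting an ordinary endpoint whose handler writes through ILogger, something along these lines (the route and message here are made up for illustration; the snippet slots into the Program.cs sketch above, before app.Run()):

```csharp
// Hypothetical endpoint used to generate a few log entries; route and message are illustrative only.
app.MapGet("/hello", (ILogger<Program> logger) =>
{
    logger.LogInformation("Handled /hello at {Timestamp}", DateTimeOffset.UtcNow);
    return Results.Ok("hello");
});
```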
Expected behavior
I was expecting to see the captured logs.
Environment:
- Infrastructure: Docker
- Deployment tool: Docker compose
Screenshots, Promtail config, or terminal output
See above.