[chore] prom rw v2 exporter add support for batching #40051
Conversation
a few initial comments
Co-authored-by: Owen Williams <[email protected]>
(Waiting to review while this is still a draft -- do you need to do more work before it's ready for review?)
As I mentioned in the description, this naive implementation sends the full symbolsTable with each separate request. Is this approach okay as-is, so we can improve it in follow-up PRs, or should I split the symbolsTable as part of this PR so each request only carries the symbols it actually references (much harder to implement)?
That will create a lot of confusion for our users: switching from one version of the protocol to another will suddenly increase their network costs, because 1 sample != 1 byte.
I believe that's okay as a starting point; let's improve over time :)
Talked with @ywwg and @ArthurSens: this PR implements support for batching. Each request, however, contains the full symbolsTable; that will be handled in a follow-up PR.
Please review @ArthurSens @dashpole @ywwg
seems strictly better! thanks
Just one question 😬
Similar to #40494, this is a feat, not a chore.
I believe Juraj chose this title because PRs that start with
Description
This PR implements batching support in RW2. In this naive implementation we send the full symbolsTable with each separate request. Batching that also splits up the symbolsTable should be doable if we push the logic down into the translator package; however, it is not easy to implement batching based on byte size instead of the number of samples. So I wonder whether for RW2 we could switch to batching based on the number of samples, similar to what Prometheus does.
WDYT @ArthurSens @dashpole @ywwg ?
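The naive approach described above (split by series count, repeat the full symbols table in every request) can be sketched as follows. This is a minimal illustration with hypothetical simplified types, not the exporter's actual `writev2` protobuf types or its real batching code:

```go
package main

import "fmt"

// TimeSeries and Request are simplified stand-ins for the Remote Write 2.0
// protobuf messages (hypothetical; the real exporter uses writev2 types).
type TimeSeries struct {
	LabelRefs []uint32 // indices into the symbols table
}

type Request struct {
	Symbols    []string // full symbols table, duplicated in every batch
	TimeSeries []TimeSeries
}

// batchBySeriesCount splits the series into requests of at most maxSeries
// entries each. Every request carries the complete symbols table, which is
// what makes this approach naive: unused symbols are re-sent per batch.
func batchBySeriesCount(symbols []string, series []TimeSeries, maxSeries int) []Request {
	var out []Request
	for start := 0; start < len(series); start += maxSeries {
		end := start + maxSeries
		if end > len(series) {
			end = len(series)
		}
		out = append(out, Request{Symbols: symbols, TimeSeries: series[start:end]})
	}
	return out
}

func main() {
	symbols := []string{"", "__name__", "http_requests_total"}
	series := make([]TimeSeries, 5)
	batches := batchBySeriesCount(symbols, series, 2)
	fmt.Println(len(batches))            // 3 (batches of 2 + 2 + 1 series)
	fmt.Println(len(batches[0].Symbols)) // 3 (full table in each batch)
}
```

A follow-up that splits the symbols table would instead rebuild a per-batch table containing only the symbols referenced by that batch's series, remapping each `LabelRefs` entry accordingly.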
Link to tracking issue
Partially implements #33661 (when merging this PR, please don't close the tracking issue)