feat: Put tokens into Rc
#2780
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Conversation
Codecov Report ✅ All modified and coverable lines are covered by tests.

@@           Coverage Diff           @@
##             main    #2780   +/-   ##
=======================================
  Coverage   94.93%   94.93%
=======================================
  Files         115      115
  Lines       34425    34425
  Branches    34425    34425
=======================================
  Hits        32682    32682
  Misses       1736     1736
  Partials        7        7
Client/server transfer results
Performance differences relative to 6942acc. Transfer of 33554432 bytes over loopback, min. 100 runs. All unit-less numbers are in milliseconds.
Benchmark results
Performance differences relative to 6942acc.

- 1-conn/1-100mb-resp/mtu-1504 (aka. Download)/client: Change within noise threshold.
  time: [205.94 ms 206.37 ms 206.83 ms] thrpt: [483.50 MiB/s 484.57 MiB/s 485.57 MiB/s]
  change: time: [−1.6390% −1.1895% −0.7469%] (p = 0.00 < 0.05) thrpt: [+0.7525% +1.2038% +1.6663%]
- 1-conn/10_000-parallel-1b-resp/mtu-1504 (aka. RPS)/client: No change in performance detected.
  time: [304.77 ms 306.15 ms 307.54 ms] thrpt: [32.516 Kelem/s 32.663 Kelem/s 32.812 Kelem/s]
  change: time: [−1.1612% −0.5519% +0.0908%] (p = 0.07 > 0.05) thrpt: [−0.0908% +0.5550% +1.1748%]
- 1-conn/1-1b-resp/mtu-1504 (aka. HPS)/client: No change in performance detected.
  time: [28.058 ms 28.165 ms 28.284 ms] thrpt: [35.355 B/s 35.505 B/s 35.640 B/s]
  change: time: [−0.8021% −0.1517% +0.5228%] (p = 0.65 > 0.05) thrpt: [−0.5201% +0.1519% +0.8086%]
- 1-conn/1-100mb-req/mtu-1504 (aka. Upload)/client: 💔 Performance has regressed.
  time: [212.10 ms 212.47 ms 212.88 ms] thrpt: [469.76 MiB/s 470.65 MiB/s 471.48 MiB/s]
  change: time: [+1.8940% +2.2714% +2.5722%] (p = 0.00 < 0.05) thrpt: [−2.5077% −2.2209% −1.8588%]
- decode 4096 bytes, mask ff: No change in performance detected.
  time: [11.617 µs 11.648 µs 11.687 µs] change: [−0.4144% +0.1374% +0.7362%] (p = 0.64 > 0.05)
- decode 1048576 bytes, mask ff: No change in performance detected.
  time: [3.0034 ms 3.0109 ms 3.0202 ms] change: [−0.5611% −0.1277% +0.3031%] (p = 0.57 > 0.05)
- decode 4096 bytes, mask 7f: No change in performance detected.
  time: [19.362 µs 19.409 µs 19.462 µs] change: [−0.1582% +0.2703% +0.7067%] (p = 0.25 > 0.05)
- decode 1048576 bytes, mask 7f: No change in performance detected.
  time: [5.0843 ms 5.1037 ms 5.1325 ms] change: [−0.9875% −0.1310% +0.6761%] (p = 0.78 > 0.05)
- decode 4096 bytes, mask 3f: No change in performance detected.
  time: [5.5231 µs 5.5426 µs 5.5699 µs] change: [−0.8346% −0.0983% +0.5372%] (p = 0.80 > 0.05)
- decode 1048576 bytes, mask 3f: No change in performance detected.
  time: [1.7576 ms 1.7577 ms 1.7578 ms] change: [−0.4118% −0.1722% −0.0088%] (p = 0.08 > 0.05)
- coalesce_acked_from_zero 1+1 entries: No change in performance detected.
  time: [88.783 ns 89.099 ns 89.403 ns] change: [−0.2520% +0.1920% +0.6408%] (p = 0.41 > 0.05)
- coalesce_acked_from_zero 3+1 entries: No change in performance detected.
  time: [106.19 ns 106.48 ns 106.80 ns] change: [−2.0334% −0.6022% +0.3383%] (p = 0.42 > 0.05)
- coalesce_acked_from_zero 10+1 entries: No change in performance detected.
  time: [105.53 ns 105.91 ns 106.37 ns] change: [−1.4662% −0.2845% +0.8182%] (p = 0.67 > 0.05)
- coalesce_acked_from_zero 1000+1 entries: No change in performance detected.
  time: [89.544 ns 89.695 ns 89.860 ns] change: [−0.7149% +0.2734% +1.4216%] (p = 0.63 > 0.05)
- RxStreamOrderer::inbound_frame(): Change within noise threshold.
  time: [108.10 ms 108.22 ms 108.37 ms] change: [−0.3707% −0.2537% −0.1185%] (p = 0.00 < 0.05)
- sent::Packets::take_ranges: 💚 Performance has improved.
  time: [4.7108 µs 4.8499 µs 4.9862 µs] change: [−47.901% −46.338% −44.751%] (p = 0.00 < 0.05)
- transfer/pacing-false/varying-seeds: Change within noise threshold.
  time: [36.678 ms 36.769 ms 36.863 ms] change: [−1.9152% −1.5743% −1.2273%] (p = 0.00 < 0.05)
- transfer/pacing-true/varying-seeds: Change within noise threshold.
  time: [37.726 ms 37.855 ms 37.995 ms] change: [−1.2351% −0.7861% −0.3161%] (p = 0.00 < 0.05)
- transfer/pacing-false/same-seed: Change within noise threshold.
  time: [36.299 ms 36.361 ms 36.422 ms] change: [−1.4468% −1.2330% −1.0062%] (p = 0.00 < 0.05)
- transfer/pacing-true/same-seed: Change within noise threshold.
  time: [38.303 ms 38.404 ms 38.509 ms] change: [−1.4529% −1.1248% −0.7951%] (p = 0.00 < 0.05)
Do you need reference counting, or would …
My theory was that …
Seems correct, yes. That said, what is the intention behind this pull request? The following confuses me.
It's a bit of an experiment. In 1f6d802, I put the …
Failed Interop Tests
QUIC Interop Runner, client vs. server, differences relative to 853e4be.
- neqo-latest as client
- neqo-latest as server

Succeeded Interop Tests
QUIC Interop Runner, client vs. server
- neqo-latest as client
- neqo-latest as server

Unsupported Interop Tests
QUIC Interop Runner, client vs. server
- neqo-latest as client
- neqo-latest as server
The theory here is that by putting `Tokens` into an `Rc`, we can avoid copying them when a `Packet` gets cloned.