feat: Optimize neqo-transport #2828

Open · wants to merge 2 commits into main
9 changes: 6 additions & 3 deletions neqo-transport/src/connection/mod.rs
@@ -2578,7 +2578,10 @@ impl Connection {
max_datagrams: NonZeroUsize,
) -> Res<SendOptionBatch> {
let packet_tos = path.borrow().tos();
-let mut send_buffer = Vec::new();
+let mut send_buffer = Vec::with_capacity(min(
+    DatagramBatch::MAX,
+    path.borrow().plpmtu() * max_datagrams.get(),
+));
Comment on lines +2581 to +2584
Member:
This looks pretty clever, until you realize that this is just copied from a few lines down. Also, nope.

Member:
Previous attempt with no sign of impact: #2782
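For context, a standalone sketch of the trade-off such a capacity hint targets: growing a `Vec` in MTU-sized chunks reallocates a handful of times, while a correct hint makes a single up-front allocation. The 1500-byte chunk size and datagram count below are illustrative assumptions, not values from neqo, and whether avoiding those few reallocations is measurable is exactly the open question the reviewer raises.

```rust
// Standalone illustration only; not neqo code.
fn reallocations(mut buf: Vec<u8>, chunk: usize, chunks: usize) -> usize {
    let mut reallocs = 0;
    let mut cap = buf.capacity();
    for _ in 0..chunks {
        // Append one MTU-sized chunk of payload.
        buf.resize(buf.len() + chunk, 0);
        if buf.capacity() != cap {
            reallocs += 1;
            cap = buf.capacity();
        }
    }
    reallocs
}

fn main() {
    let plpmtu = 1500; // assumed MTU
    let max_datagrams = 10; // assumed batch size
    println!(
        "Vec::new():            {} reallocation(s)",
        reallocations(Vec::new(), plpmtu, max_datagrams)
    );
    println!(
        "Vec::with_capacity(n): {} reallocation(s)",
        reallocations(Vec::with_capacity(plpmtu * max_datagrams), plpmtu, max_datagrams)
    );
}
```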


let mut datagram_size = None;
let mut num_datagrams = 0;
@@ -3220,7 +3223,7 @@ impl Connection {
.streams_mut()
.inbound_frame(space, offset, data)?;
if self.crypto.streams().data_ready(space) {
-let mut buf = Vec::new();
+let mut buf = Vec::with_capacity(16384); // Typical handshake message size
Member:
This number I know to be wrong. I know that we have certificates (which as a client we never send...) that can be large-ish, but not that large. In practice, the size we might pre-allocate here would be around 2k, not 16k.

Collaborator (Author):
This is obviously way too big, but I wonder if 1500 makes sense.
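A minimal sketch of the direction this thread points at, using the reviewer's "around 2k" figure rather than 16 KiB; `TYPICAL_CRYPTO_READ` is a hypothetical name, not an existing neqo constant, and the value would need to be confirmed against real handshake traces.

```rust
// Hypothetical constant; 2048 reflects the reviewer's "around 2k" estimate.
const TYPICAL_CRYPTO_READ: usize = 2048;

fn main() {
    let mut buf: Vec<u8> = Vec::with_capacity(TYPICAL_CRYPTO_READ);
    let cap_before = buf.capacity();
    // A ~1.5 KiB flight of handshake data fits in the initial allocation,
    // so the capacity does not change (no reallocation).
    buf.resize(1500, 0);
    assert_eq!(buf.capacity(), cap_before);
    println!("len = {}, capacity = {}", buf.len(), buf.capacity());
}
```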

let read = self.crypto.streams_mut().read_to_end(space, &mut buf);
qdebug!("Read {read:?} bytes");
self.handshake(now, packet_version, space, Some(&buf))?;
@@ -3806,7 +3809,7 @@ impl Connection {
};
let path = self.paths.primary().ok_or(Error::NotAvailable)?;
let mtu = path.borrow().plpmtu();
-let mut buffer = Vec::new();
+let mut buffer = Vec::with_capacity(mtu);
let encoder = Encoder::new_borrowed_vec(&mut buffer);

let (_, mut builder) = Self::build_packet_header(
4 changes: 2 additions & 2 deletions neqo-transport/src/recovery/mod.rs
@@ -630,7 +630,7 @@ impl Loss {
return (Vec::new(), Vec::new());
};
let loss_delay = primary_path.borrow().rtt().loss_delay();
-let mut lost = Vec::new();
+let mut lost = Vec::with_capacity(8); // Typically few packets are lost at once
Member:
Did Claude come up with this number on its own? Because I have questions. It's a reasonable thing to do, but a number as small as 8 is probably low enough that the first allocation will be that large anyway. And if there are NO lost packets, you made things slower.

Collaborator (Author):
Yeah, this is @claude's number.

Collaborator (Author):
This might make sense, but I wish we had some data on loss events rather than a guess.
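Both points raised here can be seen from plain `Vec` behavior. A standalone sketch (the `u64` element type is a stand-in, not the real packet type):

```rust
fn main() {
    // `Vec::new()` does not allocate, so the common "nothing was lost" case
    // costs nothing.
    let lazy: Vec<u64> = Vec::new();
    assert_eq!(lazy.capacity(), 0);

    // `with_capacity(8)` allocates up front even if no packet is ever lost.
    let eager: Vec<u64> = Vec::with_capacity(8);
    assert!(eager.capacity() >= 8);

    // The first push on an empty Vec already grows to a small capacity; the
    // exact number is a std implementation detail, so it is printed rather
    // than asserted on.
    let mut grown: Vec<u64> = Vec::new();
    grown.push(1);
    println!("capacity after first push: {}", grown.capacity());
}
```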

sp.detect_lost_packets(now, loss_delay, cleanup_delay, &mut lost);
self.stats.borrow_mut().lost += lost.len();

@@ -873,7 +873,7 @@ impl Loss {
let loss_delay = primary_path.borrow().rtt().loss_delay();
let confirmed = self.confirmed();

-let mut lost_packets = Vec::new();
+let mut lost_packets = Vec::with_capacity(16); // Pre-allocate for typical PTO scenarios
Member:
Again, no evidence for the number.

Collaborator (Author):
Ditto.
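To get the "data rather than a guess" asked for above, something as small as a histogram of lost packets per loss/PTO event would do. A standalone sketch with hypothetical names; this is not existing neqo instrumentation:

```rust
use std::collections::BTreeMap;

/// Hypothetical helper: counts how many packets each loss/PTO event yields,
/// so a capacity hint (or the decision to skip one) can rest on real numbers.
#[derive(Default)]
struct LossEventStats {
    events_by_size: BTreeMap<usize, u64>,
}

impl LossEventStats {
    fn record(&mut self, lost_in_event: usize) {
        *self.events_by_size.entry(lost_in_event).or_insert(0) += 1;
    }

    fn report(&self) {
        for (lost, events) in &self.events_by_size {
            println!("{lost} lost packet(s): {events} event(s)");
        }
    }
}

fn main() {
    let mut stats = LossEventStats::default();
    // Made-up sample; real data would come from benchmark runs or traces.
    for n in [0, 0, 1, 2, 0, 3, 1] {
        stats.record(n);
    }
    stats.report();
}
```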

for space in self.spaces.iter_mut() {
let first = lost_packets.len(); // The first packet lost in this space.
let pto = Self::pto_period_inner(