Implement an opt-in packet splitter. #4841
Conversation
import net.fabricmc.fabric.impl.networking.FabricPacketsImpl;

public record FabricSplitDataPacketPayload(int splitId, int part, ByteBuf byteBuf) implements FabricSplitPacketPayload {
Lots of redundant data here. A simple split start packet with an integer for the number of payloads to follow is enough. No need for splitId and part anywhere.
This is mostly to make it easier to find what went wrong if packets get reordered for whatever reason. While that might not happen in vanilla, it could be triggered by other mods / proxies.
Proxies should not be reordering, dropping arbitrarily, or otherwise altering packets that they don't recognize. Registry sync has used an order-sensitive packet split for a long time, and as far as I know there has never been an issue.
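For reference, the order-sensitive scheme described here can be sketched as a standalone model (hypothetical helper names, not the actual Fabric implementation): the payload is cut into fixed-size chunks sent in order, and the receiver concatenates them in arrival order with no per-part metadata.

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

public class SplitSketch {
    // Cut `data` into chunks of at most `maxChunkSize` bytes, sent in order.
    // The receiver relies purely on ordering, mirroring the registry sync approach.
    static List<byte[]> split(byte[] data, int maxChunkSize) {
        List<byte[]> parts = new ArrayList<>();
        for (int off = 0; off < data.length; off += maxChunkSize) {
            int len = Math.min(maxChunkSize, data.length - off);
            byte[] part = new byte[len];
            System.arraycopy(data, off, part, 0, len);
            parts.add(part);
        }
        return parts;
    }

    // Reassemble by concatenating chunks in arrival order.
    static byte[] merge(List<byte[]> parts) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte[] part : parts) out.writeBytes(part);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] data = new byte[100];
        for (int i = 0; i < data.length; i++) data[i] = (byte) i;
        List<byte[]> parts = split(data, 32);
        System.out.println(parts.size());                                // 4 chunks: 32+32+32+4
        System.out.println(java.util.Arrays.equals(data, merge(parts))); // true
    }
}
```

No splitId or part counter appears anywhere; correctness depends entirely on the transport preserving packet order, which is the assumption being debated above.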
When porting to 1.21.9, the registry sync module should be changed to use this too. (We can't change it for 1.21.8 because of the protocol change, but on the 1.21.9 update vanilla is changing the protocol anyway so it's fine).
modmuss50 left a comment:
This has a lot of scope for unit tests, see DirectRegistryPacketHandlerTest for where we test the registry sync packet splitting.
# Conflicts: # fabric-networking-api-v1/src/main/resources/fabric-networking-api-v1.mixins.json
…a comments, use correct packet ids in error messages
Hey @Patbox, I've been thinking about this PR; now that 1.21.9 is out of the way it's the next "big" thing I want to get in. I think I had 2/3 concerns/queries with the design:
I'd be happy to help look at this or maybe even just try out some of my ideas in a separate branch.
For point 2:
The impl might be a tiny bit easier, but I think the design will be easier to understand. Might be worth making a note somewhere of the packet format you choose. I'm happy to look at it before you actually implement it; I don't want you to keep going back and forth with this.
I'm not sure I agree with this; in its current form, from an external POV, all packets will be sent over the fabric channel, and without inspecting the packet data there is no way to tell what mod is sending that data. Please correct me if I'm wrong, but I think the main use case for inspecting packet names is profiling, and this makes that harder.
As we said, this might be something we want to look at supporting after this PR has gone in. I would imagine there to be a
I'd keep it in

Both the first and last packet can be identified by a counter and carry part of the data up to their limit. Neither the part number nor the part data size need to be transmitted; the packets are already framed externally. It might be a good idea to disallow undersized parts to avoid potential abuse vectors and encoding variance.

It also appears that the current implementation sends the embedded identifier and size twice. Otherwise I'd expect https://github.com/FabricMC/fabric/pull/4841/files#diff-026fe5703b81c5e0773f9aa169291cae9ef4d824254fdeaf55b319b76d9481d8R120 to use the respective members. It should be easy enough to parse the embedded packet header instead of duplicating the information.

We should decide whether a future protocol revision is to be handled by a different id (other than
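A rough illustration of the size-prefix accounting under discussion (a hand-rolled sketch of Minecraft-style VarInt sizing, assuming the semantics of Netty's VarInts helpers; not the PR's code): the first chunk loses exactly as many bytes of payload capacity as the VarInt length prefix occupies.

```java
public class VarIntSketch {
    // Minecraft-style VarInt: 7 data bits per byte, high bit = continuation.
    static int varIntSize(int value) {
        int size = 1;
        while ((value & ~0x7F) != 0) {
            value >>>= 7;
            size++;
        }
        return size;
    }

    public static void main(String[] args) {
        // Payload capacity of the first chunk shrinks by the prefix size.
        int maxChunkSize = 1024;
        int totalLength = 300_000;                 // needs a 3-byte VarInt
        int firstChunkPayload = maxChunkSize - varIntSize(totalLength);
        System.out.println(varIntSize(127));       // 1
        System.out.println(varIntSize(128));       // 2
        System.out.println(firstChunkPayload);     // 1021
    }
}
```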
Ok, went with all suggestions here. So now:
For versioning, doing it via the packet channel should work just fine, because honestly there won't be much need to add any metadata anyway.
If you implement https://github.com/FabricMC/fabric/pull/4841/files#diff-026fe5703b81c5e0773f9aa169291cae9ef4d824254fdeaf55b319b76d9481d8R71 differently it'd just work (tm) with any packet, no? What in the protocol requires it to be embedding custom payload exclusively?
Mostly as it reads the packet size limit from the payload id. Though yes, it should be possible to change that with an extra bit of logic.
So vanilla packets are [length, id, data]; it seems like EncoderHandler.encode doesn't include the size yet, but everything else. I'd manually prefix the length then to restore the full packet bytes, much like how MC does it later in the pipeline. The length is readily available at https://github.com/FabricMC/fabric/pull/4841/files#diff-351cb844fd0b49a8fc53c0fc3a763d4d6786d089d170000eb5da1f72822b3f85R67. You don't need
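Prefixing the length to restore the full [length, id, data] form could look roughly like this (a standalone sketch with a hand-rolled VarInt writer and hypothetical names, not the PR's code):

```java
import java.io.ByteArrayOutputStream;

public class FramingSketch {
    // Write a Minecraft-style VarInt (7 data bits per byte, high bit = continuation).
    static void writeVarInt(ByteArrayOutputStream out, int value) {
        while ((value & ~0x7F) != 0) {
            out.write((value & 0x7F) | 0x80);
            value >>>= 7;
        }
        out.write(value);
    }

    // Given encoder output [id, data], prepend its length to get the full
    // on-wire form [length, id, data], much like MC does later in the pipeline.
    static byte[] frame(byte[] idAndData) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeVarInt(out, idAndData.length);
        out.writeBytes(idAndData);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] encoded = new byte[200];          // pretend encoder output [id, data]
        byte[] framed = frame(encoded);
        System.out.println(framed.length);       // 202: 2-byte VarInt + 200 bytes
    }
}
```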
…ck for validating packet type
Feels slightly uglier on the sending part, but changed it. Also implemented packet type validation that currently only allows CustomPayload packets to get merged. It should be super simple to tweak that method to allow vanilla packet merging without changing the format (though naturally client/server would still require an update).
ByteBuf firstSplit = Unpooled.buffer(maxChunkSize);
VarInts.write(firstSplit, buf.readableBytes());
// First slice needs to be slightly smaller to accommodate the size prefix
firstSplit.writeBytes(buf.readSlice(maxChunkSize - VarInts.getSizeInBytes(buf.readableBytes())));
nit: You should be able to query firstSplit for its remaining capacity or at least the written bytes instead of computing this manually. Not sure why the loop afterwards isn't something like:

while (buf.readableBytes() > 0) {
    consumer.accept(packetConstructor.apply(new FabricSplitPacketPayload(buf.readSlice(Math.min(buf.readableBytes(), maxChunkSize)))));
}

No mandatory tail packet.
(Oops, wrong PR, hah)
modmuss50 left a comment:
I'm happy with this. Most of this is implementation detail that can be changed at a later date (likely across MC versions where we don't guarantee network compatibility).
I think there is a lot of scope to unit test this kind of code, but I'm not going to hold up the PR any longer for that; it can be done in a follow-up PR.
I don't like how the inner logic potentially sends one packet too many, even if that's normally prevented by the outer logic, which might cause us issues down the line. Changing the code makes it simpler and is easy enough:

// First packet split with added packet size
ByteBuf firstSplit = Unpooled.buffer(maxChunkSize);
VarInts.write(firstSplit, buf.readableBytes());
// First slice needs to be slightly smaller to accommodate the header (by the already written data amount)
firstSplit.writeBytes(buf, maxChunkSize - firstSplit.readableBytes());
consumer.accept(packetConstructor.apply(new FabricSplitPacketPayload(firstSplit)));

// Remaining packets, as needed to send everything
while (buf.isReadable()) {
    consumer.accept(packetConstructor.apply(new FabricSplitPacketPayload(buf.readSlice(Math.min(buf.readableBytes(), maxChunkSize)))));
}

It also uses a few more fitting ByteBuf overloads.
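The receiving side of this scheme can be sketched as a simplified standalone model (hypothetical names, plain byte arrays instead of ByteBuf; assuming the first chunk starts with a VarInt of the total payload length):

```java
import java.io.ByteArrayOutputStream;

public class MergeSketch {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private int expectedLength = -1;

    // Feed one split chunk in order; returns the full payload once complete, else null.
    byte[] accept(byte[] chunk) {
        int offset = 0;
        if (expectedLength < 0) {
            // First chunk: read the VarInt total-length prefix.
            int value = 0, shift = 0;
            while (true) {
                byte b = chunk[offset++];
                value |= (b & 0x7F) << shift;
                if ((b & 0x80) == 0) break;
                shift += 7;
            }
            expectedLength = value;
        }
        buffer.write(chunk, offset, chunk.length - offset);
        if (buffer.size() > expectedLength) {
            throw new IllegalStateException("Received more data than announced");
        }
        return buffer.size() == expectedLength ? buffer.toByteArray() : null;
    }

    public static void main(String[] args) {
        MergeSketch merger = new MergeSketch();
        // First chunk: prefix 0x05 (announced length 5) followed by 3 payload bytes.
        byte[] first = {0x05, 1, 2, 3};
        byte[] second = {4, 5};
        System.out.println(merger.accept(first) == null);  // true, still incomplete
        byte[] result = merger.accept(second);
        System.out.println(result.length);                  // 5
    }
}
```

With the announced length known up front, the merger needs no finished flag or tail packet; completion falls out of comparing accumulated bytes against the prefix.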
if (payload == null) {
    throw new DecoderException("Received '" + packet.getPacketType().id() + "' packet, while expecting 'minecraft:custom_payload'!");
}

if (!(payload instanceof FabricSplitPacketPayload splitPacketPayload)) {
    throw new DecoderException("Expected '" + FabricSplitPacketPayload.ID.id() + "' payload packet, but received '" + payload.getId().id() + "'!");
}
Are there any reasons to reject any non split packets here? This currently breaks existing mods that split packets into multiple subsystems based on packet type and multiplex them with priority scheduling.
Mostly validation that everything works correctly and all data gets received (and no one tries to send them as unintended). Since a lot of mods do depend on packet order being right (from vanilla, Fabric, other libraries, and their own), it just made sense to check it.
For now I think it's fair to expect any custom reordering scheme to keep fabric:split packets together.
Nevertheless I do see a point in relaxing the check/requirement to enable some advanced trickery. As long as no second stream of fabric:split packets intersects the first one, the merging would work fine. Someone could then put e.g. motion updates in the middle of bulk data streams for congested connections and potentially improve responsiveness. This would however complicate handling for proxies as they'd have to support those mixed packets as well.
* Start working on packet splitting
* Fix checkstyle
* Allow for split C2S packets, require setting max packet size (not validated on receiving end yet)
* Remove old init
* Finish up generic packet splitting
* Add fake passthrough packet to avoid encoding splittable packets twice
* Validate maxPacketSize
* Source split size from CustomPayloadX2YPackets
* No more magic numbers, don't send empty SplitData packets, some extra comments, use correct packet ids in error messages
* Simplify packet splitting to single payload type, remove unnecessary metadata
* Replace finished flag with just appended packet length, add extra check for validating packet type
* Correct name of genericPacketSplitter method
* Fix not full rename oops
* Apply player's suggestion

(cherry picked from commit 57cb96d)
Implements opt-in packet splitting logic that can be used for any packet (though the default implementation only handles CustomPayloadS2C and only works for S2C packets).
It's not enabled by default for all packets, to prevent unwanted format breakage.
Functionally it's implemented in a similar way to how vanilla handles Bundle packets, though it works for both the play and configuration stages. Outside the packet pipeline, the game / mods will only deal with their own packets; the entire splitting and merging is handled within the packet pipeline.
A quick test (forcing all payload packets to go through it, with a chunk size of 20 bytes) seemed to work just fine, but more extensive testing would probably be a good idea.
Currently splittable packets always go through the packet splitting process, though ideally I want to get rid of that.
(Also keeping this as a draft until I gather some feedback and do more testing)