Conversation

@Patbox
Member

@Patbox Patbox commented Aug 31, 2025

Implements opt-in packet splitting logic that can be used for any packet (though the default implementation only handles CustomPayloadS2C and only works for S2C packets).

It's not enabled by default for all packets, to prevent unwanted format breakage.
Functionally it's implemented in a similar way to how vanilla handles Bundle packets, though it works for both the play and configuration stages. Outside the packet pipeline, the game / mods will only deal with their own packets; the entire splitting and merging is handled within the packet pipeline.
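The split/merge round trip described above can be sketched like this (a minimal illustration with plain byte arrays and hypothetical names; the real implementation works on Netty ByteBufs inside the packet pipeline):

```java
import java.util.ArrayList;
import java.util.List;

public class SplitSketch {
    // Split a serialized packet into chunks of at most maxChunkSize bytes.
    static List<byte[]> split(byte[] packet, int maxChunkSize) {
        List<byte[]> parts = new ArrayList<>();
        for (int offset = 0; offset < packet.length; offset += maxChunkSize) {
            int len = Math.min(maxChunkSize, packet.length - offset);
            byte[] part = new byte[len];
            System.arraycopy(packet, offset, part, 0, len);
            parts.add(part);
        }
        return parts;
    }

    // Merge the received chunks back into the original packet bytes.
    static byte[] merge(List<byte[]> parts) {
        int total = parts.stream().mapToInt(p -> p.length).sum();
        byte[] merged = new byte[total];
        int offset = 0;
        for (byte[] part : parts) {
            System.arraycopy(part, 0, merged, offset, part.length);
            offset += part.length;
        }
        return merged;
    }
}
```

Since both ends live inside the pipeline, the game only ever sees the merged packet.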

A quick test (forcing all payload packets to go through it, with a chunk size of 20 bytes) seemed to work just fine, but more extensive testing would probably be a good idea.

Currently splittable packets always go through the packet splitting process, though ideally I want to get rid of that.

(Also keeping it as draft until I gather some feedback and more testing)


import net.fabricmc.fabric.impl.networking.FabricPacketsImpl;

public record FabricSplitDataPacketPayload(int splitId, int part, ByteBuf byteBuf) implements FabricSplitPacketPayload {
Contributor

Lots of redundant data here. A simple split start packet with an integer for the number of payloads to follow is enough. No need for splitId and part anywhere.
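The counter-based alternative suggested here could look roughly like this (a hypothetical framing sketch, not the PR's actual wire format): one start payload announcing the number of parts, followed by the raw parts in order, with no splitId or part fields.

```java
import java.util.ArrayList;
import java.util.List;

public class CountFraming {
    // Frame a list of pre-split parts: a one-byte start packet carrying the
    // part count (sketch only; a real format would use a VarInt), then the
    // plain data payloads in order.
    static List<byte[]> frame(List<byte[]> parts) {
        List<byte[]> out = new ArrayList<>();
        out.add(new byte[] { (byte) parts.size() }); // start packet: part count
        out.addAll(parts);                           // unadorned data payloads
        return out;
    }
}
```

The receiver simply counts down from the announced number, relying on in-order delivery.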

Member Author

This is mostly to make it easier to find what went wrong if packets get reordered for whatever reason. While that might not happen in vanilla, it could be triggered by other mods / proxies.

Contributor

@Earthcomputer Earthcomputer Aug 31, 2025


Proxies should not be reordering, dropping arbitrarily, or otherwise altering packets that they don't recognize. Registry sync has sent order-sensitive split packets for a long time and as far as I know there has never been an issue.

@Earthcomputer
Contributor

When porting to 1.21.9, the registry sync module should be changed to use this too. (We can't change it for 1.21.8 because of the protocol change, but on the 1.21.9 update vanilla is changing the protocol anyway so it's fine).

Member

@modmuss50 modmuss50 left a comment


This has a lot of scope for unit tests, see DirectRegistryPacketHandlerTest for where we test the registry sync packet splitting.

@Patbox Patbox marked this pull request as ready for review September 2, 2025 08:30
# Conflicts:
#	fabric-networking-api-v1/src/main/resources/fabric-networking-api-v1.mixins.json
…a comments, use correct packet ids in error messages
@modmuss50
Member

Hey @Patbox, I've been thinking about this PR; now that 1.21.9 is out of the way it's the next "big" thing I want to get in. I really had 2-3 concerns/queries with the design:

  1. Should it be lower level than just building on top of the existing API (as you have today in your PR)? Yes, I think this is a reasonable way to go, as, like you said, it allows other mods to inspect these packets as if they are one. If there is a problem this can be easily changed. Nothing to do here.

  2. Do we really need 3 different payloads to carry this data? I don't see why this cannot just be done with a single payload type. Maybe each payload should have a small header to dictate the type; packets that don't need to be split should still fit into one.

  3. Should large packets use their own channel ids? I think they should, especially if it's changed to only be one payload type; I'm not sure what the benefit of putting them all over a fabric:split or whatever is.
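The "single payload with a small header" idea from point 2 could be sketched like this (hypothetical constants and names, not the PR's final format): one leading byte tells the receiver whether the chunk is a complete packet on its own, the start of a split sequence, or a continuation.

```java
import java.io.ByteArrayOutputStream;

public class HeaderSketch {
    static final byte COMPLETE = 0; // whole packet fits in one payload
    static final byte START = 1;    // first chunk of a split packet
    static final byte CONTINUE = 2; // subsequent chunk of a split packet

    // Prepend the one-byte header to a chunk of serialized packet data.
    static byte[] frame(byte type, byte[] chunk) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(type);
        out.writeBytes(chunk);
        return out.toByteArray();
    }
}
```

With this shape, unsplit packets pay only a single extra byte of overhead.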

I'd be happy to help look at this or maybe even just try out some of my ideas in a separate branch.

@Patbox
Member Author

Patbox commented Sep 30, 2025

For point 2:
It no longer wraps packets that don't require splitting without also duplicating the serialization (via packet passthrough). I feel using 3 separate packet types makes the implementation slightly cleaner, but it should be possible to use only one.
For point 3:
Using a custom payload simplifies the implementation, allows packet inspection to more clearly debug things, and keeps things more compatible. An extra bonus, though probably one that won't be used by Fabric API, is that it also allows this system to split vanilla packets too, which might be something some mods want to enable in the future.

@modmuss50
Member

I feel using 3 separate packet types makes the implementation slightly cleaner, but it should be possible to use only one.

The impl might be a tiny bit easier, but I think the design will be easier to understand. Might be worth making a note somewhere of the packet format you choose. I'm happy to look at it before you actually implement it; I don't want you to keep going back and forth with this.

allows packet inspection to more clearly debug things and keeps things more compatible

I'm not sure I agree with this. In its current form, from an external POV all packets will be sent over the fabric channel, and without inspecting the packet data there is no way to tell what mod is sending that data. Please correct me if I'm wrong, but I think the main use case for inspecting packet names is profiling, and this makes that harder.

An extra bonus, though probably one that won't be used by Fabric API, is that it also allows this system to split vanilla packets too, which might be something some mods want to enable in the future.

As we said, this might be something we want to look at supporting after this PR has gone in. I would imagine there being a fabric:vanilla_large or whatever packet for this. This can still happen with the above change.

@Player3324
Contributor

I'd keep it in fabric:split, but also strip the redundant data.

Both the first and last packet can be identified by a counter and carry part of the data up to their limit. Neither the part number nor the part data size need to be transmitted; the packets are already framed externally. It might be a good idea to disallow undersized parts to avoid potential abuse vectors and encoding variance.

It also appears that the current implementation sends the embedded identifier and size twice. Otherwise I'd expect https://github.com/FabricMC/fabric/pull/4841/files#diff-026fe5703b81c5e0773f9aa169291cae9ef4d824254fdeaf55b319b76d9481d8R120 to use the respective members. It should be easy enough to parse the embedded packet header instead of duplicating the information.

We should decide whether a future protocol revision is to be handled by a different id (other than fabric:split) or by a byte in the first packet of each sequence. Using the channel, together with what I wrote above, makes everything rather simple: we wouldn't add any headers and would merely emit slices of the original packet bytes, including its size+id header, within fabric:split.

@Patbox
Member Author

Patbox commented Oct 5, 2025

OK, went with all the suggestions here. So now:

  • There's only one payload type, fabric:split, which holds a boolean flag marking whether it's the final part, plus the split ByteBuf data.
  • The type / limits are now read from the first split packet's ByteBuf, limiting it to only work with CustomPayload packets.
  • Removed information about the final packet size and part number, as they could mostly be derived from ByteBuf methods or weren't used outside of extra validation.
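A merger for the format at this stage of the discussion (a single payload with a finished flag plus a data slice) might look roughly like this, a sketch with hypothetical names using byte arrays instead of ByteBuf:

```java
import java.io.ByteArrayOutputStream;

public class FinishedFlagMerger {
    private final ByteArrayOutputStream pending = new ByteArrayOutputStream();

    // Accumulate slices; return the merged packet bytes once a payload
    // marked finished arrives, otherwise null while more chunks are expected.
    byte[] accept(boolean finished, byte[] slice) {
        pending.writeBytes(slice);
        if (!finished) return null;
        byte[] merged = pending.toByteArray();
        pending.reset();
        return merged;
    }
}
```

This relies on in-order delivery, matching the vanilla pipeline's guarantees.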

@Patbox
Member Author

Patbox commented Oct 5, 2025

For versioning, doing it via the packet channel should work just fine, because honestly there won't be much need to add any metadata anyway.

@Player3324
Contributor

* The type / limits are now read from the first split packet's bytebuf, limiting it to only work with CustomPayload packets.

If you implement https://github.com/FabricMC/fabric/pull/4841/files#diff-026fe5703b81c5e0773f9aa169291cae9ef4d824254fdeaf55b319b76d9481d8R71 differently it'd just work (tm) with any packet, no? What in the protocol requires it to be embedding custom payload exclusively?

@Patbox
Member Author

Patbox commented Oct 5, 2025

Mostly as it reads the packet size limit from the payload id. Though yes, it should be possible to change that with an extra bit of logic.

@Player3324
Contributor

So vanilla packets are [length, id, data]; it seems like EncoderHandler.encode doesn't include the size yet, but everything else. I'd manually prefix the length then to restore the full packet bytes, much like how MC does it later in the pipeline. The length is readily available at https://github.com/FabricMC/fabric/pull/4841/files#diff-351cb844fd0b49a8fc53c0fc3a763d4d6786d089d170000eb5da1f72822b3f85R67.

You don't need finished if you check the data size against the original length once that's being transmitted. Merger.byteBuf should probably be presized.
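The length-prefixed scheme described here can be sketched as follows (hypothetical names, byte arrays instead of Netty ByteBuf, and a hand-rolled VarInt read): the first chunk starts with a VarInt total length, so no finished flag is needed; the merger presizes its buffer and just counts bytes until the announced length is reached.

```java
public class LengthPrefixedMerger {
    private byte[] merged;  // presized once the total length is known
    private int received;

    // Feed one received chunk; returns the complete packet bytes once all
    // announced bytes have arrived, otherwise null.
    byte[] accept(byte[] chunk) {
        int offset = 0;
        if (merged == null) {
            // Parse the VarInt length prefix from the first chunk
            // (7 data bits per byte, MSB set means "more bytes follow").
            int length = 0, shift = 0;
            while (true) {
                byte b = chunk[offset++];
                length |= (b & 0x7F) << shift;
                if ((b & 0x80) == 0) break;
                shift += 7;
            }
            merged = new byte[length];
        }
        int len = chunk.length - offset;
        System.arraycopy(chunk, offset, merged, received, len);
        received += len;
        if (received < merged.length) return null;
        byte[] result = merged;
        merged = null;
        received = 0;
        return result;
    }
}
```

A real implementation would also validate that `received` never exceeds the announced length.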

@Patbox
Member Author

Patbox commented Oct 5, 2025

Feels slightly uglier on the sending side, but changed it. Also implemented packet type validation that currently only allows CustomPayload packets to be merged. It should be super simple to tweak that method to allow vanilla packet merging without changing the format (though naturally the client/server would still require an update).

ByteBuf firstSplit = Unpooled.buffer(maxChunkSize);
VarInts.write(firstSplit, buf.readableBytes());
// First slice needs to be slightly smaller to accommodate the size prefix
firstSplit.writeBytes(buf.readSlice(maxChunkSize - VarInts.getSizeInBytes(buf.readableBytes())));
Contributor

nit: You should be able to query firstSplit for its remaining capacity, or at least the written bytes, instead of computing this manually. Not sure why the loop afterwards isn't something like while (buf.readableBytes() > 0) consumer.accept(packetConstructor.apply(new FabricSplitPacketPayload(buf.readSlice(Math.min(buf.readableBytes(), maxChunkSize)))));. No mandatory tail packet.

@Patbox Patbox marked this pull request as draft October 13, 2025 11:23
@Patbox Patbox marked this pull request as ready for review October 13, 2025 11:23
@Patbox
Member Author

Patbox commented Oct 13, 2025

(Oops, wrong PR, hah)

Member

@modmuss50 modmuss50 left a comment


I'm happy with this. Most of this is implementation detail that can be changed at a later date (likely across MC versions, where we don't guarantee network compatibility).

I think there is a lot of scope to unit test this kind of code, but I'm not going to hold up the PR any longer for that; it can be done in a follow-up PR.

@modmuss50 modmuss50 added enhancement New feature or request status: last call If you care, make yourself heard right away! module: networking api Pull requests and issues related to the networking api labels Oct 13, 2025
@Player3324
Contributor

I don't like how the inner logic potentially sends one packet too many, even if that's normally prevented by the outer logic; that might cause us issues down the line. Changing the code makes it simpler and is easy enough.
This logic for the tail of genericPacketSplitter should do it:

// First packet split, with the packet size prepended

ByteBuf firstSplit = Unpooled.buffer(maxChunkSize);
VarInts.write(firstSplit, buf.readableBytes());
// First slice needs to be slightly smaller to accommodate the header (by the already written data amount)
firstSplit.writeBytes(buf, maxChunkSize - firstSplit.readableBytes());

consumer.accept(packetConstructor.apply(new FabricSplitPacketPayload(firstSplit)));

// Remaining packets, as many as needed to send everything
while (buf.isReadable()) {
	consumer.accept(packetConstructor.apply(new FabricSplitPacketPayload(buf.readSlice(Math.min(buf.readableBytes(), maxChunkSize)))));
}

It also uses a few more fitting ByteBuf overloads.
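A self-contained version of that tail logic (byte arrays instead of ByteBuf, hypothetical names) makes the chunk boundaries easy to verify: the first chunk carries the VarInt total length plus as much data as fits, and the rest are plain maxChunkSize-sized slices with no mandatory tail packet.

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

public class TailSplitter {
    // Write value as a protocol-style VarInt (7 bits per byte, MSB = continue).
    static void writeVarInt(ByteArrayOutputStream out, int value) {
        while ((value & ~0x7F) != 0) {
            out.write((value & 0x7F) | 0x80);
            value >>>= 7;
        }
        out.write(value);
    }

    // First chunk: VarInt total length plus data up to maxChunkSize total;
    // remaining chunks: plain slices, emitted only while data remains.
    static List<byte[]> split(byte[] buf, int maxChunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        ByteArrayOutputStream first = new ByteArrayOutputStream();
        writeVarInt(first, buf.length);
        int firstData = Math.min(buf.length, maxChunkSize - first.size());
        first.write(buf, 0, firstData);
        chunks.add(first.toByteArray());

        for (int offset = firstData; offset < buf.length; offset += maxChunkSize) {
            int len = Math.min(maxChunkSize, buf.length - offset);
            byte[] chunk = new byte[len];
            System.arraycopy(buf, offset, chunk, 0, len);
            chunks.add(chunk);
        }
        return chunks;
    }
}
```

Because the loop only runs while bytes remain, an exactly-full final slice never produces an extra empty packet.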

Comment on lines +59 to +65
if (payload == null) {
throw new DecoderException("Received '" + packet.getPacketType().id() + "' packet, while expecting 'minecraft:custom_payload'!");
}

if (!(payload instanceof FabricSplitPacketPayload splitPacketPayload)) {
throw new DecoderException("Expected '" + FabricSplitPacketPayload.ID.id() + "' payload packet, but received '" + payload.getId().id() + "'!");
}

Are there any reasons to reject non-split packets here? This currently breaks existing mods that split packets into multiple subsystems based on packet type and multiplex them with priority scheduling.

Member Author

Mostly validation that everything works correctly and all data gets received (and that no one tries to send them in unintended ways). Since a lot of mods do depend on packet order being right (from vanilla, Fabric, other libraries and their own), it just made sense to check it.

Contributor

For now I think it's fair to expect any custom reordering scheme to keep fabric:split packets together.

Nevertheless, I do see a point in relaxing the check/requirement to enable some advanced trickery. As long as no second stream of fabric:split packets intersects the first one, the merging would work fine. Someone could then put e.g. motion updates in the middle of bulk data streams on congested connections and potentially improve responsiveness. This would however complicate handling for proxies, as they'd have to support those mixed packets as well.

@modmuss50 modmuss50 merged commit 57cb96d into FabricMC:1.21.8 Oct 18, 2025
4 checks passed
modmuss50 pushed a commit that referenced this pull request Oct 18, 2025
* Start working on packet splitting

* Fix checkstyle

* Allow for split C2S packets, require setting max packet size (not validated on receiving end yet)

* Remove old init

* Finish up generic packet splitting,

* Add fake passthrough packet to avoid encoding splittable packets twice

* Validate maxPacketSize

* Source split size from CustomPayloadX2YPackets

* No more magic numbers, don't send empty SplitData packets, some extra comments, use correct packet ids in error messages

* Simplify packet splitting to single payload type, remove unnecessary metadata.

* Replace finished flag with just appended packet length, add extra check for validating packet type

* Correct name of genericPacketSplitter method

* Fix not full rename oops

* Apply player's suggestion

(cherry picked from commit 57cb96d)
modmuss50 pushed a commit that referenced this pull request Oct 18, 2025
(cherry picked from commit 57cb96d)
(cherry picked from commit b095d8d)