Limit number of requests for pre-fetching #402
Merged
I noticed some occasional buffer under-runs as a consequence of PR #393, for which I made a fix here.
The issue is that the pre-fetch algorithm can potentially fire off lots of small requests to keep the amount of pending bytes at the desired level. If bandwidth limits are hit, the responses become interleaved: all requests are served simultaneously, so the total data rate is fine, but each individual request sees a low data rate. As a consequence, the first request may not be answered in time and we get a buffer under-run.
I solved this by limiting the number of open requests when pre-fetching: a pre-fetch request is only sent while fewer than 4 requests are open. This results in fewer, larger requests, so individual requests should still get decent download rates.
I experimented with different values. Using a limit of 3 leads to significantly lower overall download rates. Thus, I chose 4.
I also increased the amount of data that is requested ahead of the current read position.
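For illustration, here is a minimal sketch of the throttling idea described above. All names and constants (`MAX_OPEN_REQUESTS`, `READ_AHEAD_BYTES`, `PrefetchThrottler`, `send_request`) are hypothetical and not the actual identifiers in this code base:

```python
# Hypothetical sketch of limiting open requests while pre-fetching.
# Names and constants are illustrative, not taken from the actual code.

MAX_OPEN_REQUESTS = 4            # only pre-fetch while fewer than 4 requests are open
READ_AHEAD_BYTES = 1024 * 1024   # how far ahead of the read position to keep data requested


class PrefetchThrottler:
    def __init__(self):
        self.open_requests = 0   # requests currently in flight
        self.pending_end = 0     # byte offset up to which data has been requested

    def maybe_prefetch(self, read_pos, send_request):
        """Issue at most one pre-fetch request, and only if the number of open
        requests is below the limit. Fewer, larger requests keep the per-request
        download rate high and avoid interleaved responses."""
        if self.open_requests >= MAX_OPEN_REQUESTS:
            return  # too many requests in flight; wait for some to complete

        target_end = read_pos + READ_AHEAD_BYTES
        if self.pending_end >= target_end:
            return  # enough data is already requested ahead of the read position

        # Request the whole missing range in one go instead of many small chunks.
        start, end = self.pending_end, target_end
        self.open_requests += 1
        send_request(start, end, on_done=self._on_request_done)
        self.pending_end = end

    def _on_request_done(self):
        self.open_requests -= 1
```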
So far I haven't had any more buffer under-runs with these changes. Fingers crossed.