docs(docker-compose): Explain NAT overhead in docker-compose #176
…y vs host networks Signed-off-by: Ryan Russell <[email protected]>
The thing is, attaching services to the host network is not a good idea in most cases. Overlay networks in Docker are what gives you isolation. For quick development environments it might work to attach the container to the host network. But in production you don't want that. Especially if you run multiple instances of Dragonfly for different services.
@tamcore That's exactly why it is commented out with an explanation :)
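For reference, a minimal sketch of what the commented-out option could look like in a `docker-compose.yml`. The service name, image tag, and port are illustrative, not taken from the PR:

```yaml
services:
  dragonfly:
    image: docker.dragonflydb.io/dragonflydb/dragonfly
    ports:
      - "6379:6379"
    # Uncomment to bypass Docker's NAT/bridge networking entirely.
    # Trades network isolation for lower latency; note that when host
    # networking is enabled, the `ports:` mapping above is ignored.
    # network_mode: "host"
```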
To help put it in perspective for people wanting to benchmark Dragonfly DB vs (redis|keydb|skytable|whatever), here are a few benchmarks showing how much your throughput depends on the network path. This will be more noticeable when Dragonfly DB is running on a dedicated hypervisor and your redis consumers incur the additional VLAN hit for every round trip. As with any benchmark, YMMV based on your hardware, hyperthreading, kernel, etc., but the pattern should look something like the following. Four usage examples, all using the exact same two machines, with one running Dragonfly DB and the other running the `redis-benchmark` client:
# 1Gb LAN quick test
uname -a
Linux ... 5.13.0-51-generic #58~20.04.1-Ubuntu SMP Tue Jun 14 11:29:12 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
sudo dmidecode --type 17 | grep 'MT/s'
Speed: 2133 MT/s
Configured Memory Speed: 1866 MT/s
Speed: 2133 MT/s
Configured Memory Speed: 1866 MT/s

# Swarm mode using mesh namespace, from a client in the swarm on a separate machine
redis-benchmark -h db_dragonflydb -q
PING_INLINE: 81300.81 requests per second
PING_BULK: 73206.44 requests per second
SET: 78554.59 requests per second
GET: 82508.25 requests per second
INCR: 81566.07 requests per second
LPUSH: 82169.27 requests per second
RPUSH: 82034.45 requests per second
LPOP: 80385.85 requests per second
RPOP: 81566.07 requests per second
SADD: 82101.80 requests per second
HSET: 81300.81 requests per second
SPOP: 81103.00 requests per second
ZADD: 81632.65 requests per second
ZPOPMIN: 83194.67 requests per second
LPUSH (needed to benchmark LRANGE): 79554.50 requests per second
LRANGE_100 (first 100 elements): 46838.41 requests per second
LRANGE_300 (first 300 elements): 17343.05 requests per second
LRANGE_500 (first 450 elements): 12647.02 requests per second
LRANGE_600 (first 600 elements): 10460.25 requests per second
MSET (10 keys): 79176.56 requests per second

# Swarm mode with host port
redis-benchmark -h 192.168.1.[dedicated_dragonfly_hypervisor] -q
PING_INLINE: 136612.02 requests per second
PING_BULK: 142857.14 requests per second
SET: 140056.03 requests per second
GET: 139082.06 requests per second
INCR: 136425.66 requests per second
LPUSH: 141843.97 requests per second
RPUSH: 141843.97 requests per second
LPOP: 142653.36 requests per second
RPOP: 148809.53 requests per second
SADD: 154559.50 requests per second
HSET: 140252.45 requests per second
SPOP: 141043.72 requests per second
LPUSH (needed to benchmark LRANGE): 135135.14 requests per second
LRANGE_100 (first 100 elements): 91911.76 requests per second
LRANGE_300 (first 300 elements): 40241.45 requests per second
LRANGE_500 (first 450 elements): 28710.88 requests per second
LRANGE_600 (first 600 elements): 21663.78 requests per second
MSET (10 keys): 130039.02 requests per second

# docker-compose with port
redis-benchmark -h 192.168.1.[dedicated_dragonfly_hypervisor] -q
PING_INLINE: 165016.50 requests per second
PING_BULK: 168634.06 requests per second
SET: 168067.22 requests per second
GET: 168634.06 requests per second
INCR: 168918.92 requests per second
LPUSH: 169204.73 requests per second
RPUSH: 167504.19 requests per second
LPOP: 169491.53 requests per second
RPOP: 165016.50 requests per second
SADD: 165289.25 requests per second
HSET: 166666.66 requests per second
SPOP: 168350.17 requests per second
LPUSH (needed to benchmark LRANGE): 169204.73 requests per second
LRANGE_100 (first 100 elements): 106382.98 requests per second
LRANGE_300 (first 300 elements): 42844.90 requests per second
LRANGE_500 (first 450 elements): 28818.44 requests per second
LRANGE_600 (first 600 elements): 21654.40 requests per second
MSET (10 keys): 135318.00 requests per second

# docker-compose with `network_mode: "host"`
redis-benchmark -h 192.168.1.[dedicated_dragonfly_hypervisor] -q
PING_INLINE: 169491.53 requests per second
PING_BULK: 169491.53 requests per second
SET: 167785.23 requests per second
GET: 168350.17 requests per second
INCR: 169491.53 requests per second
LPUSH: 168634.06 requests per second
RPUSH: 168918.92 requests per second
LPOP: 168067.22 requests per second
RPOP: 168350.17 requests per second
SADD: 169204.73 requests per second
HSET: 167785.23 requests per second
SPOP: 168634.06 requests per second
LPUSH (needed to benchmark LRANGE): 166112.95 requests per second
LRANGE_100 (first 100 elements): 106723.59 requests per second
LRANGE_300 (first 300 elements): 42918.46 requests per second
LRANGE_500 (first 450 elements): 28876.70 requests per second
LRANGE_600 (first 600 elements): 21645.02 requests per second
MSET (10 keys): 141242.94 requests per second

For low-latency, high-throughput use cases where you want the convenience of Docker, it may be a very good thing indeed to use host mode in production. When this same logic is applied on a 40Gb LAN, the extra NAT translation really starts to rear its head during profiling. Sorry, I don't have those metrics handy. This is the same underlying hardware with zero changes other than how the Docker NAT is utilized (or not). Empirically, the results have been meaningful enough for my use case to revert to lowly host networking. Hopefully this can help others understand WHY their deployment performance may not match claims, and reduce noise in benchmark discussions... Perhaps a tuning guide should be added with more detailed discussion of this and other common pitfalls...
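A quick sanity check on the GET figures reported above (82508.25 rps via the overlay mesh vs 168350.17 rps with host networking) shows the overlay path roughly halving throughput:

```shell
# Ratio of host-networking GET throughput to overlay-mesh GET throughput,
# using the numbers from the benchmark runs above.
overlay=82508.25
host=168350.17
awk -v o="$overlay" -v h="$host" \
  'BEGIN { printf "host networking is %.2fx faster than overlay mesh\n", h/o }'
# → host networking is 2.04x faster than overlay mesh
```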
Thanks for adding the explanation - I think the trade-off is correct. Regarding the benchmark, I believe redis-benchmark runs in a single thread by default, and is probably the major bottleneck itself for whatever case you are checking with Dragonfly. Better to increase the thread count with `--threads`.
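For example, a multi-threaded run of the same benchmark might look like the following (the host address is illustrative; `--threads` requires redis-benchmark from Redis 6.0 or later, and the command needs a live server to run against):

```shell
# Same quiet-mode benchmark as above, but driven by 4 client threads
redis-benchmark --threads 4 -h 192.168.1.10 -q
```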