test(cluster): Test that cluster works with a standard cluster client. #1336
Conversation
tests/dragonfly/cluster_test.py
Outdated
```python
client = redis.RedisCluster(decode_responses=True, host="localhost", port=nodes[0].port)

for i in range(10_000):
    key = 'key' + str(i)
    assert client.set(key, 'value') == True
    assert client.get(key) == 'value'
```
Small note: you're using the sync client inside an async function
Thanks, that's a good catch!
I'm new to Python async (and also to Python in general). My goal here is to use `RedisCluster`, which seems to be a sync library. Is there a way for me to still use the other async tools we have here, like `push_config()` and `get_node_id()`?
Alternatively, I guess I could use `run_in_executor()` (from here), do you think that's advisable?
You can use the async `aioredis.RedisCluster` (which is now just a re-export of the official `redis.asyncio`). It should have the same interface.
Even if there were only the sync version, it wouldn't have caused any trouble either way, as you only perform sequential operations and don't have any other tasks running.
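For illustration, a minimal sketch (assumed names and wiring, not the PR's exact code) of the same loop using the async cluster client:

```python
# Sketch only: assumes `aioredis` is the redis.asyncio re-export used by the
# test suite and that `port` points at one of the cluster masters.
import redis.asyncio as aioredis

async def set_and_get_keys(port: int):
    client = aioredis.RedisCluster(decode_responses=True, host="localhost", port=port)
    for i in range(10_000):
        key = 'key' + str(i)
        assert await client.set(key, 'value')
        assert await client.get(key) == 'value'
```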
That worked, thanks!
@chakaz This looks good. Could you please also add a replica, so we check our client commands with replicas?
In this case, `redis.RedisCluster`. To be doubly sure, I also looked at the actual packets and saw that the client asks for `CLUSTER SLOTS`, and then after the redistribution of slots, following a few `MOVED` replies, it asks for the new slots again.
The Python part looks valid; I left a small performance comment. You could've formatted the config in a more compact way to save some lines and make it easier to read, but it probably doesn't matter in this test.
I don't track the cluster progress in depth, so please don't rely on me in reviews.
tests/dragonfly/cluster_test.py
Outdated
```python
for i in range(10_000):
    key = 'key' + str(i)
    assert await client.set(key, 'value') == True
    assert await client.get(key) == 'value'
```
About the recent timeout in CI that we had. In the future:
- such parts can be split into multiple tasks that run concurrently, each working on its own range or modulo (see the sketch below)
- possibly, ranges that surely belong to a single node can be handled in pipelines?

When more tests are added, it can add up to hundreds of thousands of requests (it's already 40k for this test). GitHub runners in debug mode have very low throughput.
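A rough sketch of the first idea (my assumption of what it could look like, not code from the PR): split the key loop across a few asyncio tasks by modulo.

```python
import asyncio

NUM_TASKS = 8
TOTAL_KEYS = 10_000

async def set_and_check(client, task_id):
    # Each task handles the keys where i % NUM_TASKS == task_id.
    for i in range(task_id, TOTAL_KEYS, NUM_TASKS):
        key = 'key' + str(i)
        assert await client.set(key, 'value')
        assert await client.get(key) == 'value'

async def run_concurrently(client):
    await asyncio.gather(*(set_and_check(client, t) for t in range(NUM_TASKS)))
```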
What's also in question is whether this load is needed at all. Replication tests trigger all kinds of edge cases with different values and higher load (replicating during updates, etc.). Here it seems like that doesn't really matter and you verify only correctness. So you could, for example, check a random 10% of the range: if there's a bug, it's likely to be found in one or a few passes; otherwise, why check the whole range?
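A hypothetical variant of that idea (names are illustrative, not the PR's code), sampling roughly 10% of the range instead of iterating over all of it:

```python
import random

async def check_random_subset(client, total=10_000, fraction=0.1):
    # Check a random sample of the key range; a real bug should still
    # surface within one or a few test runs.
    for i in random.sample(range(total), int(total * fraction)):
        key = 'key' + str(i)
        assert await client.set(key, 'value')
        assert await client.get(key) == 'value'
```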
I did not realize we're so constrained!
Thanks for the comment, I toned down the test.
```python
assert await client.get(key) == 'value'

# Make sure that getting a value from a replica works as well.
replica_response = await client.execute_command(
```
You don't wait here for the replica to finish replicating all the changes.
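One way to do that is to poll replication offsets via `INFO REPLICATION`. This is a hedged sketch: the field names follow standard Redis output and may differ for Dragonfly, and it is not necessarily the helper this test suite actually uses.

```python
import asyncio

async def wait_for_replica_catchup(master, replica, timeout=10.0):
    # Poll until the replica's offset reaches the master's, or time out.
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout
    while loop.time() < deadline:
        master_offset = (await master.info("replication"))["master_repl_offset"]
        replica_offset = (await replica.info("replication")).get("slave_repl_offset", 0)
        if replica_offset >= master_offset:
            return
        await asyncio.sleep(0.1)
    raise TimeoutError("replica did not catch up in time")
```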
tests/dragonfly/cluster_test.py
Outdated
```python
await push_config(config, c_masters_admin + c_replicas_admin)

for i in range(100):
    key = 'key' + str(i)
```
I think that if you want to set only 100 keys, you should generate random keys, so that we cover all slots across the times this test runs.
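For example (a hypothetical helper, not from the PR), a random suffix spreads the keys across different hash slots on each run:

```python
import random
import string

def random_key(length=10):
    # Random suffix so repeated runs land on different cluster slots.
    suffix = ''.join(random.choices(string.ascii_lowercase + string.digits, k=length))
    return 'key-' + suffix
```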
tests/dragonfly/cluster_test.py
Outdated
```python
client = aioredis.RedisCluster(decode_responses=True, host="localhost", port=masters[0].port)

for i in range(100):
    key = 'key' + str(i)
```
Also use a random key here.
Maybe create a nested function; you have the same loop below.
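Roughly what such a refactor could look like (illustrative names; `random_key()` is the hypothetical helper sketched above):

```python
async def assert_random_keys_work(client, count=100):
    # Shared by the checks before and after the config push, instead of
    # duplicating the loop.
    for _ in range(count):
        key = random_key()
        assert await client.set(key, 'value')
        assert await client.get(key) == 'value'
```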
tests/dragonfly/cluster_test.py
Outdated
```python
assert await client.get(key) == 'value'

async def test_random_keys():
    for i in range(100):
        key = 'key' + str(i)
```
You forgot to add the random call.
whoops, nice catch :)
test(cluster): Test that cluster works with a standard cluster client (dragonflydb#1336)

In this case, `redis.RedisCluster`. To be doubly sure, I also looked at the actual packets and saw that the client asks for `CLUSTER SLOTS`, and then after the redistribution of slots, following a few `MOVED` replies, it asks for the new slots again.