[Data] Make test_dataset_throughput deterministic by increasing workload and applying tolerance #58693
base: master
Conversation
Code Review
This pull request aims to make test_dataset_throughput deterministic by increasing the workload and introducing a tolerance for the throughput assertions. The changes look good and should help improve the test's stability. I've added a couple of minor style suggestions to align a new variable name with PEP 8 conventions.
python/ray/data/tests/test_stats.py (Outdated)

    ray.init(num_cpus=2)

-   f = dummy_map_batches_sleep(0.01)
+   f = dummy_map_batches_sleep(0.03)
A shorter sleep time is better because it reduces the execution time. However, we chose 0.03 instead of 0.02 because 0.02 resulted in 1 failure across 20 test runs.
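The tolerance mentioned in this PR could be applied roughly as below. This is a minimal sketch, not the PR's actual code: the helper name `assert_throughput_at_least` and the hard-coded 0.9 rate are illustrative assumptions.

```python
# Hypothetical sketch of a tolerance-based throughput check.
# The helper name and the 0.9 rate are illustrative only.
TOLERANCE_RATE = 0.9


def assert_throughput_at_least(actual_rows_per_s, expected_rows_per_s,
                               tolerance_rate=TOLERANCE_RATE):
    """Pass when the measured throughput reaches at least
    tolerance_rate * the expected throughput."""
    assert actual_rows_per_s >= expected_rows_per_s * tolerance_rate, (
        f"throughput {actual_rows_per_s:.1f} rows/s is below "
        f"{tolerance_rate:.0%} of the expected {expected_rows_per_s:.1f} rows/s"
    )


assert_throughput_at_least(95.0, 100.0)  # passes: 95 >= 0.9 * 100
```

The point of the tolerance is that a run slightly slower than the ideal throughput no longer fails the test, which is what removes most of the timing-dependent flakiness.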
@owenowenisme @bveeramani PTAL, thanks!
This PR decreases the likelihood that this test fails, but ultimately the test still relies on nondeterministic timing. It's also brittle because it uses regexes that can break with minor formatting changes.
Rather than adjusting the parameters, could you rewrite this test as a unit test?
Separately, I realized that the "per node" throughputs actually represent the per-task throughput. Based on this, I think we should:
- Remove the "per node throughput" for the "Dataset throughput" section, because the average per-task throughput across all operators isn't really useful, and
- Rename "per node throughput" to "per task throughput" in the "Operator throughput" sections
The tests could look something like this:

    def test_dataset_throughput_calculation():
        """Test throughput calculations using mock block stats."""
        from ray.data._internal.stats import DatasetStats
        from ray.data.block import BlockExecStats, BlockStats

        # Helper to create minimal block stats with only timing fields
        def create_block_stats(start_time, end_time, num_rows):
            exec_stats = BlockExecStats()
            exec_stats.start_time_s = start_time
            exec_stats.end_time_s = end_time
            exec_stats.wall_time_s = end_time - start_time
            return BlockStats(
                num_rows=num_rows,
                size_bytes=None,
                exec_stats=exec_stats,
            )

        # Simulate 3 overlapping blocks
        blocks = [
            create_block_stats(0.0, 2.0, 100),
            create_block_stats(0.5, 2.5, 100),
            create_block_stats(1.0, 3.0, 100),
        ]
        stats = DatasetStats(metadata={"Map": blocks}, parent=None)
        summary = stats.to_summary()

        # Throughput: total rows / total execution duration
        # Total rows = 300
        # Duration = max end_time - min start_time = 3.0s
        # 300 rows / 3s = 100 rows/s
        # TODO: You'll need to expose this as a property so that it's testable.
        assert summary.num_rows_per_s == 100


    def test_operator_throughput_calculation():
        ...  # A similar unit test. You might need to do some refactoring.
        # summary is an OperatorStatsSummary here, not a DatasetStatsSummary
        # TODO: You'll need to similarly expose this property.
        assert summary.num_rows_per_s == 100
        assert summary.num_rows_per_task_s == 100
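The duration arithmetic in the comments above (the wall-clock span of overlapping blocks, not the sum of their times) can be checked standalone. The numbers mirror the three blocks in the sketch:

```python
# Three overlapping blocks, as in the sketch above: (start_s, end_s) pairs.
intervals = [(0.0, 2.0), (0.5, 2.5), (1.0, 3.0)]
total_rows = 300  # 100 rows per block

# Dataset-level duration is the wall-clock span: earliest start to latest end.
start = min(s for s, _ in intervals)  # 0.0
end = max(e for _, e in intervals)    # 3.0
duration_s = end - start              # 3.0

rows_per_s = total_rows / duration_s
print(rows_per_s)  # 100.0
```

Note that summing the per-block wall times instead (2.0 + 2.0 + 2.0 = 6.0 s) would give the serial single-task estimate, not the dataset throughput.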
@dancingactor lemme know if you have any questions.
Thanks for your detailed feedback! I have two questions:
1. My understanding is that we should remove the original
2. May I confirm that this means we should modify the current
@dancingactor that's right!
Just to confirm, I need to do the following things in this PR:
That sounds right. One note of warning --
Hey @dancingactor, just following up here. Lemme know if I can provide any info or help to move this along!
Hi @bveeramani, since I am new to Ray, I spent some time understanding the context and the codebase. I have almost completed the
Hi @bveeramani, could you please advise on how to correctly test the new
Currently I am trying to directly execute
Ah, this sounds like your Ray Core version is out of date. Are you building Ray Core from source, or using the
Thanks! I will try the
Awesome! Lemme know if you run into any problems. Happy to help you out |
Hi @bveeramani,

    tests % pytest /Users/ryanchen/github/ray/python/ray/data/tests/test_stats.py
    Test session starts (platform: darwin, Python 3.10.19, pytest 7.4.4, pytest-sugar 0.9.5)
    rootdir: /Users/ryanchen/github/ray
    configfile: pytest.ini
    plugins: docker-tools-3.1.3, sphinx-0.5.1.dev0, forked-1.4.0, anyio-4.11.0, asyncio-0.17.2, sugar-0.9.5, timeout-2.1.0, shutil-1.8.1, lazy-fixtures-1.1.2, rerunfailures-11.1.2, pytest_httpserver-1.1.3, virtualenv-1.8.1, mock-3.14.0, aiohttp-1.1.0
    asyncio: mode=auto
    timeout: 180.0s
    timeout method: signal
    timeout func_only: False
    collecting ...
    python/ray/data/tests/test_stats.py ss✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓s✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓✓ 100%
    Results (729.00s):
    44 passed
    3 skipped
…oad and applying tolerance
Signed-off-by: dancingactor <[email protected]>
2. Rename "per node throughput" to "per task throughput" in the "Operator throughput" sections
Signed-off-by: dancingactor <[email protected]>
bveeramani left a comment
Thanks for the PR! Left a few comments
    output_num_rows = self.operators_stats[-1].output_num_rows
    total_num_out_rows = output_num_rows["sum"] if output_num_rows else 0
    wall_time = self.get_total_wall_time()
Is this dead code now?
    out += "Dataset throughput:\n"
    out += (
        "\t* Ray Data throughput:"
        f" {total_num_out_rows / wall_time} "
We should be using num_rows_per_s here, right?
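A minimal sketch of what "use num_rows_per_s here" could mean: a property on the summary object that the string formatting then reuses instead of recomputing the division inline. The class shape and names below are assumptions for illustration, not Ray's actual implementation.

```python
# Simplified stand-in for the stats summary class; names are assumptions.
class FakeDatasetStatsSummary:
    def __init__(self, total_num_out_rows, wall_time_s):
        self.total_num_out_rows = total_num_out_rows
        self.wall_time_s = wall_time_s

    @property
    def num_rows_per_s(self):
        # Guard against zero wall time so formatting never divides by zero.
        if not self.wall_time_s:
            return 0.0
        return self.total_num_out_rows / self.wall_time_s

    def throughput_str(self):
        # The formatted output reuses the property instead of recomputing.
        out = "Dataset throughput:\n"
        out += f"\t* Ray Data throughput: {self.num_rows_per_s} rows/s\n"
        return out


summary = FakeDatasetStatsSummary(total_num_out_rows=300, wall_time_s=3.0)
print(summary.num_rows_per_s)  # 100.0
```

Exposing the value as a property is also what makes the suggested unit tests possible, since they can assert on `summary.num_rows_per_s` directly.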
    # total number of rows produced the sum of the wall times across all blocks
    # of all operators. This assumes that on a single node the work done would
    # be equivalent, with no concurrency.
    output_num_rows = self.operators_stats[-1].output_num_rows
Should we move the comments closer to where the calculations are actually made? (e.g., num_rows_per_s)
    )
    out += (
-       indent + "\t* Estimated single node throughput:"
+       indent + "\t* Estimated single task throughput:"
Here and in the above statement -- I think we want to use num_rows_per_s and num_rows_per_task_s?
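Per the code comment quoted above, the "estimated single task throughput" divides total rows by the summed wall time of all blocks, i.e. the throughput if all the work ran serially with no concurrency. A small sketch of that arithmetic (the numbers are illustrative):

```python
# Illustrative per-block wall times: three blocks of 100 rows each.
block_wall_times_s = [2.0, 2.0, 2.0]
total_rows = 300

# Serial (single-task) estimate: total rows over the summed block wall time.
num_rows_per_task_s = total_rows / sum(block_wall_times_s)
print(num_rows_per_task_s)  # 50.0
```

This is why the per-task figure is always at most the dataset throughput: overlap between blocks shrinks the wall-clock span but not the summed wall time.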
    def test_dataset_throughput(shutdown_only):
        ray.shutdown()
        ray.init(num_cpus=2)
        blocks = [
Nit: Here and for test_dataset_throughput_calculation -- I think `block_stats` might be a clearer name to avoid confusion with the actual blocks
    from ray.data._internal.stats import DatasetStats
    from ray.data.block import BlockExecStats, BlockStats
Nit: Move to top for consistency with Python convention?
    # NOTE: All tests above share a Ray cluster, while the tests below do not. These
    # tests should only be carefully reordered to retain this invariant!
    def test_dataset_throughput_calculation(ray_start_regular_shared):
For this and test_operator_throughput_calculation -- I think (?) we don't need ray_start_regular_shared anymore
Description
dummy_map_batches_sleep(0.01) -> dummy_map_batches_sleep(0.03)

Related issues
Closes #58565
Additional information
Verification
Run the following script to verify that test_dataset_throughput will be deterministic after this modification.

Benchmark
The following statistics show the performance before and after the change. Execution time increases slightly, but the results are now more deterministic.
Before (dummy_map_batches_sleep(0.01), no tolerance)
Operator 1
Operator 2
Dataset

After (dummy_map_batches_sleep(0.03), tolerance rate 0.9)
Operator 1
Operator 2
Dataset