# Develop Auto HDR tests for CI - Basic #14213
## Conversation
## Pull Request Overview
This PR implements Auto HDR tests for CI by creating a comprehensive test suite for High Dynamic Range (HDR) functionality. The changes restructure existing HDR tests and add new performance and configuration tests to ensure HDR features work correctly across different scenarios.
- Refactors existing HDR test code into a reusable helper module
- Creates new basic HDR preset tests with manual and auto configurations
- Adds performance testing to validate FPS under HDR operations
- Implements comprehensive configuration testing across multiple resolutions
## Reviewed Changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| unit-tests/live/hdr/test-hdr-preset.py | Basic HDR sanity tests for manual and auto modes |
| unit-tests/live/hdr/test-hdr-performance.py | Performance validation ensuring HDR maintains acceptable FPS |
| unit-tests/live/hdr/test-hdr-configurations.py | Comprehensive testing across resolutions and HDR disable functionality |
| unit-tests/live/hdr/hdr_helper.py | Shared helper module with HDR configurations and test utilities |
| unit-tests/live/d400/test-hdr-preset.py | Removal of old HDR test implementation |
```python
def test_json_load(config, test_title):
    global batch_size
```
**Copilot AI** · Aug 14, 2025
Using global variables can make code harder to test and maintain. Consider passing batch_size as a parameter or using a class-based approach to encapsulate state.
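A minimal sketch of the parameter-passing alternative the comment suggests. `load_hdr_config` and the config shape are hypothetical stand-ins for the PR's `test_json_load` and HDR preset JSON, not the actual implementation:

```python
def load_hdr_config(config, test_title):
    """Parse the HDR preset and return its batch size,
    instead of mutating a module-level global."""
    items = config.get("hdr-preset", {}).get("items", [])
    print(f"{test_title}: loaded {len(items)} HDR items")
    return len(items)

# Callers receive the value explicitly, so each test owns its own state.
config = {"hdr-preset": {"items": [{"exposure": 100}, {"exposure": 8000}]}}
batch_size = load_hdr_config(config, "Config 1 (Manual, 2 items)")
```

Returning the value keeps the data flow visible at every call site and removes the `global` statement entirely.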
```python
device, ctx = test.find_first_device_or_exit()
am = rs.rs400_advanced_mode(device)
sensor = device.first_depth_sensor()
batch_size = 0  # updated on test_json_load
```
**Copilot AI** · Aug 14, 2025
Global variable initialization at module level can lead to test interdependencies and state leakage between tests. Consider initializing this within test functions or using a test fixture.
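The librealsense unit-test harness is not pytest, but the fixture idea can be sketched in plain Python. `make_hdr_state` and `run_test` are hypothetical helpers illustrating per-test state, not part of the PR:

```python
def make_hdr_state():
    """Build fresh per-test state so nothing leaks between tests."""
    return {"batch_size": 0, "frames_counted": 0}

def run_test(test_fn):
    """Tiny fixture driver: each test receives its own state dict."""
    state = make_hdr_state()
    test_fn(state)
    return state

def sample_test(state):
    state["batch_size"] = 2  # would be set by the JSON-load step

result = run_test(sample_test)
# A second run starts from zero again, showing no state leaked:
fresh = run_test(lambda s: None)
```

Because every test constructs its state through the factory, execution order no longer matters.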
```python
test.check(measured_fps > ACCEPTABLE_FPS, f"Measured FPS {measured_fps:.2f}")

test_hdr_performance()
```
**Copilot AI** · Aug 14, 2025
Using global variables in test functions can cause race conditions and make tests non-deterministic. Consider using a class or passing state through function parameters.
Suggested change (replacing the `test_hdr_performance()` call):

```python
class HDRPerformanceTest:
    EXPECTED_FPS = 30
    ACCEPTABLE_FPS = EXPECTED_FPS * 0.9
    TIME_FOR_STEADY_STATE = 3
    TIME_TO_COUNT_FRAMES = 5

    def __init__(self, sensor):
        self.sensor = sensor
        self.count = 0
        self.count_frames = False

    def frame_counter_callback(self, frame):
        if not self.count_frames:
            return  # Skip counting if not enabled
        self.count += 1
        log.d("Frame callback called, frame number:", frame.get_frame_number())

    def run(self):
        """
        Test HDR performance with various configurations
        """
        for i, config in enumerate(HDR_CONFIGURATIONS):
            config_type = "Auto" if "depth-ae" in json.dumps(config) else "Manual"
            num_items = len(config["hdr-preset"]["items"])
            test_name = f"Config {i + 1} ({config_type}, {num_items} items)"
            test_json_load(config, test_name)
            self.count = 0
            depth_profile = next(p for p in self.sensor.get_stream_profiles() if p.stream_type() == rs.stream.depth)
            self.sensor.open(depth_profile)
            self.sensor.start(lambda frame: self.frame_counter_callback(frame))
            time.sleep(self.TIME_FOR_STEADY_STATE)
            self.count_frames = True  # Start counting frames
            time.sleep(self.TIME_TO_COUNT_FRAMES)
            self.count_frames = False  # Stop counting
            self.sensor.stop()
            self.sensor.close()
            measured_fps = self.count / self.TIME_TO_COUNT_FRAMES
            log.d(f"Test {test_name}: Counted frames = {self.count}, Measured FPS = {measured_fps:.2f}")
            test.check(measured_fps > self.ACCEPTABLE_FPS, f"Measured FPS {measured_fps:.2f}")

hdr_test = HDRPerformanceTest(sensor)
hdr_test.run()
```
```python
measured_fps = count / TIME_TO_COUNT_FRAMES
log.d(f"Test {test_name}: Counted frames = {count}, Measured FPS = {measured_fps:.2f}")
test.check(measured_fps > ACCEPTABLE_FPS, f"Measured FPS {measured_fps:.2f}")
```
**Copilot AI** · Aug 14, 2025
The error message should include the expected threshold for clarity. Consider: f"Measured FPS {measured_fps:.2f} is below acceptable threshold {ACCEPTABLE_FPS}"
Suggested change:

```diff
- test.check(measured_fps > ACCEPTABLE_FPS, f"Measured FPS {measured_fps:.2f}")
+ test.check(measured_fps > ACCEPTABLE_FPS, f"Measured FPS {measured_fps:.2f} is below acceptable threshold {ACCEPTABLE_FPS}")
```
```python
perform_auto_hdr_test(hdr_config, test_title, resolution)

device, ctx = test.find_first_device_or_exit()
```
**Copilot AI** · Aug 14, 2025
Global device and context initialization in a helper module can cause issues when multiple tests run concurrently. Consider moving device initialization into individual test functions or using proper test fixtures.
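One way to avoid import-time device acquisition is lazy initialization. This is a hypothetical sketch using `functools.lru_cache`; a string placeholder stands in for `test.find_first_device_or_exit()` so it runs without hardware:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def get_device():
    """Acquire the device on first use instead of at module import.

    In the real helper this would call test.find_first_device_or_exit().
    Note this still shares one handle per process; truly concurrent test
    runs would need per-process or per-fixture acquisition.
    """
    return "device-handle"  # placeholder for the real device lookup

d1 = get_device()
d2 = get_device()  # cached: same handle, no second device lookup
```

Importing the helper then has no side effects; the device is only touched when a test actually asks for it.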
Tracked on: [LRS-1299]