Description
Before opening a new issue, we wanted to provide you with some useful suggestions (click "Preview" above for a better view):
- Consider checking out SDK examples.
- Have you looked in our documentation?
- Is your question a frequently asked one?
- Try searching our GitHub Issues (open and closed) for a similar issue.

All users are welcome to report bugs, ask questions, suggest or request enhancements, and generally feel free to open a new issue, even if they haven't followed any of the suggestions above :)
Required Info | |
---|---|
Camera Model | D435 |
Firmware Version | 05.11.06.200 |
Operating System & Version | Ubuntu 16.04 |
Kernel Version (Linux Only) | 4.4.0-146-generic |
Platform | PC |
SDK Version | 2.21.0.776 |
Language | Python |
Segment | |
Issue Description
We are using pyrealsense2 to record 6-minute videos for our research. Because the camera takes a second or so to warm up, we pause the recorder for two seconds before recording the full video. However, when we do this and open the resulting video in the RealSense Viewer, the frame numbers are incorrect and we cannot navigate the video as usual (i.e. we can only play the video from the beginning, but cannot drag to a specific frame). The frame number starts at ~63 (the video is at 30 fps, so this is roughly equal to the two-second pause). If we record the videos without the pause, there is no issue. This is our code for collecting the video and pausing the camera:
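As a quick sanity check on the numbers (the 30 fps rate and two-second pause are from this report), the first frame number we see does line up with the pause duration:

```python
fps = 30        # recording frame rate from the table above
pause_s = 2.0   # warm-up pause before resuming the recorder

# Frames the camera delivers (and numbers) while the recorder is paused
frames_during_pause = int(fps * pause_s)
print(frames_during_pause)  # 60 -- close to the ~63 first frame number observed

# Conversely, a first frame number of 63 implies this much elapsed time
print(63 / fps)             # 2.1 seconds
```

This suggests the camera keeps assigning frame numbers while the recorder is paused, so the .bag file simply starts at whatever number the stream has reached when recording resumes.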
```python
import time

import cv2
import numpy as np
import pyrealsense2 as rs

# Turn on emitting camera
pipeline_emit = rs.pipeline()
config_emit = rs.config()
config_emit.enable_device(SN_D415[camera_pair])
pipeline_emit.start(config_emit)

# Fire up recording camera
pipeline_record = rs.pipeline()
config_record = rs.config()
config_record.enable_device(SN_D435[camera_pair])

# Set up streaming and recording
config_record.enable_stream(rs.stream.depth, resolution['x'],
                            resolution['y'], rs.format.z16, fps)
config_record.enable_stream(rs.stream.color, resolution['x'],
                            resolution['y'], rs.format.rgb8, fps)
bag_filename = output_filename + ".bag"
config_record.enable_record_to_file(bag_filename)

# Turn on recording pipeline
pipeline_record_profile = pipeline_record.start(config_record)
device_record = pipeline_record_profile.get_device()
device_recorder = device_record.as_recorder()

# Turn laser on/off on the recording camera
depth_sensor_record = device_record.query_sensors()[0]
if recording_camera_laser.lower() == 'n':
    depth_sensor_record.set_option(rs.option.emitter_enabled, 0)
elif recording_camera_laser.lower() == 'y':
    depth_sensor_record.set_option(rs.option.emitter_enabled, 1)

# Set auto-exposure priority to False to improve frame rate acquisition
color_sensor_record = device_record.query_sensors()[1]
color_sensor_record.set_option(rs.option.auto_exposure_priority, False)

# Start recording!
try:
    # Pause before recording to let camera warm up
    rs.recorder.pause(device_recorder)
    print('Pausing.', end='')
    for t in range(camera_startup_pause * 2):
        print('\r', 'Pausing', '.' * (t + 2), sep='', end='')
        time.sleep(0.5)
    print()
    rs.recorder.resume(device_recorder)

    # For colorizing the depth frame
    colorizer = rs.colorizer()
    start = time.time()
    while time.time() - start < int(video_length):
        frames = pipeline_record.wait_for_frames()
        frames.keep()  # Reduces frame drops (I think!)
        # Get color and depth frames for display
        color_frame = frames.get_color_frame()
        depth_frame = frames.get_depth_frame()
        color_frame = np.asanyarray(color_frame.get_data())
        color_frame = cv2.cvtColor(color_frame, cv2.COLOR_RGB2BGR)
        depth_frame = np.asanyarray(colorizer.colorize(depth_frame).get_data())
        # Stack color and depth frames vertically for display
        camera_images = np.vstack((color_frame, depth_frame))
        # Display images
        cv2.namedWindow('Camera ' + str(camera_pair), cv2.WINDOW_AUTOSIZE)
        cv2.imshow('Camera ' + str(camera_pair), camera_images)
        cv2.waitKey(5)
finally:
    pipeline_record.stop()
    pipeline_emit.stop()
    cv2.destroyAllWindows()
return bag_filename
```
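One possible workaround (a sketch only, not tested against this exact setup): instead of pausing the recorder during warm-up, stream without a record file while the camera settles, then restart the pipeline with `enable_record_to_file` so the .bag's frame numbering begins at the first frame actually recorded. `discard_warmup_frames` and `record_after_warmup` are hypothetical helper names, and `serial`, `fps`, and `warmup_seconds` are placeholders for the values used above.

```python
def discard_warmup_frames(fps, warmup_seconds):
    """Hypothetical helper: number of frames to read and drop during warm-up."""
    return int(fps * warmup_seconds)

def record_after_warmup(serial, bag_filename, fps=30, warmup_seconds=2):
    """Sketch: warm up without recording, then restart with recording enabled."""
    # Deferred import so the pure helper above can be used without the SDK.
    import pyrealsense2 as rs

    # Phase 1: stream with no record file so exposure/warm-up can settle.
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_device(serial)
    pipeline.start(config)
    for _ in range(discard_warmup_frames(fps, warmup_seconds)):
        pipeline.wait_for_frames()
    pipeline.stop()

    # Phase 2: restart the same device with recording enabled; frame
    # numbers in the .bag should then start from the first recorded frame.
    config.enable_record_to_file(bag_filename)
    pipeline.start(config)
    return pipeline
```

The trade-off is a brief gap while the pipeline restarts between the two phases, but it avoids the recorder ever seeing paused (and therefore skipped) frame numbers.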