Camera preview color conversions on GPU #913
base: main
Conversation
Walkthrough

Adds GPU-accelerated camera frame conversion via a new gpu-converters crate and integrates a native camera preview path with diagnostics, Tauri debug commands, and shutdown coordination. Updates window wiring to initialize the preview, expands logging, and adds documentation and benchmarking/tests for the GPU path.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Camera as Camera Feed
    participant App as Tauri Backend
    participant Prev as CameraPreview
    participant GPU as GPUCameraConverter
    participant Fallback as CPU Fallback
    participant Win as Camera Window
    Camera->>Prev: RawCameraFrame (send via internal channel)
    Prev->>Prev: Receive frame
    alt GPU available
        Prev->>GPU: convert_and_scale(input, target, quality)
        GPU-->>Prev: RGBA bytes or error
        opt GPU error
            Prev->>GPU: analyze error
            Prev->>Fallback: convert_with_fallback(...)
            Fallback-->>Prev: RGBA bytes or error
        end
    else GPU unavailable
        Prev->>Fallback: CPU conversion/scale
        Fallback-->>Prev: RGBA bytes
    end
    Prev->>Win: upload/render RGBA
    Note over Prev,Win: Logs counters, texture uploads, render passes
```

```mermaid
sequenceDiagram
    autonumber
    participant FE as Frontend (TS)
    participant Cmd as Tauri Commands
    participant Prev as CameraPreview
    participant Diag as CameraDiagnostics
    participant Win as Camera Window
    FE->>Cmd: diagnose_camera_preview(window)
    Cmd->>Diag: diagnose_camera_preview(&Prev, &Win)
    Diag-->>Cmd: Diagnostic report (String)
    Cmd-->>FE: CameraDebugReport { success, details }
    FE->>Cmd: quick_fix_camera_preview(window)
    Cmd->>Diag: quick_fix_camera_preview(&Prev, &Win)
    Diag-->>Cmd: Vec<String> fixes
    Cmd-->>FE: CameraDebugReport { fixes_applied }
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Actionable comments posted: 25
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
crates/gpu-converters/src/util.rs (1)
`1-15`: **Ensure safe buffer readback: avoid panics, propagate all mapping errors, unmap the buffer, and update call sites**

- In crates/gpu-converters/src/util.rs, refactor `read_buffer_to_vec`:
  - Change its return type from `Result<Vec<u8>, wgpu::PollError>` to `Result<Vec<u8>, ReadbackError>`.
  - Replace `tx.send(result).unwrap()` and `rx.recv().unwrap().unwrap()` with non-panicking handling of both channel and map failures.
  - After copying out the data, explicitly unmap the buffer to avoid leaking the mapping.
  - Add a new `ReadbackError` enum (with variants for `Poll`, `Map`, and `Channel`) and depend on `thiserror` in Cargo.toml.
- Update all call sites of `read_buffer_to_vec` to propagate the new error type (we found two in crates/gpu-converters/src/uyvy_nv12/mod.rs at lines 223–224). You will need to:
  - Import or fully qualify `ReadbackError`.
  - Adjust the enclosing function's return type (and any error-conversion logic) to include `ReadbackError` instead of `wgpu::PollError`.
- Example diff for util.rs (apply as shown):

```diff
-pub fn read_buffer_to_vec(
-    buffer: &wgpu::Buffer,
-    device: &wgpu::Device,
-) -> Result<Vec<u8>, wgpu::PollError> {
+use thiserror::Error;
+
+#[derive(Debug, Error)]
+pub enum ReadbackError {
+    #[error("device poll error: {0}")]
+    Poll(#[from] wgpu::PollError),
+    #[error("buffer map failed: {0}")]
+    Map(#[from] wgpu::BufferAsyncError),
+    #[error("channel receive failed: {0}")]
+    Channel(#[from] std::sync::mpsc::RecvError),
+}
+
+pub fn read_buffer_to_vec(
+    buffer: &wgpu::Buffer,
+    device: &wgpu::Device,
+) -> Result<Vec<u8>, ReadbackError> {
     let buffer_slice = buffer.slice(..);
     let (tx, rx) = std::sync::mpsc::channel();
     buffer_slice.map_async(wgpu::MapMode::Read, move |result| {
-        tx.send(result).unwrap();
+        // Ignore send errors; handle via recv()
+        let _ = tx.send(result);
     });
-    device.poll(wgpu::PollType::Wait)?;
-    rx.recv().unwrap().unwrap();
+    // Device poll errors flow through ReadbackError::Poll
+    device.poll(wgpu::PollType::Wait)?;
+    match rx.recv()? {
+        Ok(()) => {}
+        Err(map_err) => return Err(ReadbackError::Map(map_err)),
+    }
     let data = buffer_slice.get_mapped_range();
-    Ok(data.to_vec())
+    let out = data.to_vec();
+    drop(data);
+    buffer.unmap();
+    Ok(out)
 }
```

- Cargo.toml: add `thiserror = "1.0"`
- In crates/gpu-converters/src/uyvy_nv12/mod.rs, around lines 223–224:

```diff
-            read_buffer_to_vec(&y_read_buffer, &self.device)?,
-            read_buffer_to_vec(&uv_read_buffer, &self.device)?,
+            // These now return ReadbackError; ensure the enclosing function's error type is updated
+            read_buffer_to_vec(&y_read_buffer, &self.device)?,
+            read_buffer_to_vec(&uv_read_buffer, &self.device)?,
```

(and update that function's signature from `-> Result<…, wgpu::PollError>` to `-> Result<…, ReadbackError>` or otherwise convert between error types.)

apps/desktop/src-tauri/src/windows.rs (2)
`656-660`: **Compile-time bug: matching unit variant as struct variant.**

ShowCapWindow::Camera is defined as a unit-like variant (Line 188), but `id()` matches it as `ShowCapWindow::Camera { .. }`. This won't compile.

Apply this diff:

```diff
-            ShowCapWindow::Camera { .. } => CapWindowId::Camera,
+            ShowCapWindow::Camera => CapWindowId::Camera,
```
`353-418`: **Camera window: initialize preview with error handling and lifecycle cleanup.**
- init_preview_window(...).await.unwrap() will crash the process on error. Handle and log errors instead.
- Consider adding a WindowEvent::Destroyed hook to call CameraPreview::shutdown(), preventing leaks if the window is closed independently. This may also help with the “black background” symptom when the surface/device tears down unexpectedly.
```diff
-        camera_preview
-            .init_preview_window(window.clone())
-            .await
-            .unwrap(); // TODO: Error handling
+        if let Err(err) = camera_preview.init_preview_window(window.clone()).await {
+            tracing::error!("camera preview init failed: {}", err);
+            // Optional: show a non-fatal dialog or retry, depending on UX.
+        }
+
+        // Proactive cleanup on window destroy
+        window.on_window_event({
+            let handle = app.clone();
+            move |event| {
+                if matches!(event, tauri::WindowEvent::Destroyed) {
+                    if let Some(preview) = handle.try_state::<CameraPreview>() {
+                        preview.shutdown();
+                    }
+                }
+            }
+        });
```

crates/gpu-converters/src/uyvy_rgba/mod.rs (1)
`13-78`: **Breaking Change: Update UYVYToRGBA Constructor Signature**

The async constructor for `UYVYToRGBA` was changed to

```rust
pub async fn new(
    device: &wgpu::Device,
    queue: &wgpu::Queue,
) -> Result<Self, ConversionError>
```

This is a breaking API change.

Actions Required:

- Update all callers to
  - pass `&device` and `&queue`
  - await the future and handle the `Result` — internally, there are two call-sites in
    - crates/gpu-converters/src/lib.rs:431
    - crates/gpu-converters/src/lib.rs:532
- Bump the crate version to reflect a breaking change in crates/gpu-converters/Cargo.toml (currently `0.1.0`):

```diff
 [package]
-name = "gpu-converters"
-version = "0.1.0"
+name = "gpu-converters"
+version = "0.2.0" # bump for breaking change
```

- Search and update any examples or documentation referencing `UYVYToRGBA` (e.g., README, docs) to match the new signature
- Add a note to CHANGELOG.md describing the new signature and migration steps
♻️ Duplicate comments (3)
crates/gpu-converters/src/yuyv_rgba/mod.rs (1)
`13-13`: **Consider removing unused `async` qualifier**

The constructor doesn't perform any async operations.
crates/gpu-converters/src/yuv420p_rgba/mod.rs (1)
`13-13`: **Consider removing unused `async` qualifier**

The constructor doesn't perform any async operations.
crates/gpu-converters/src/nv12_rgba/mod.rs (1)
`13-16`: **Consider removing unused `async` qualifier**

The constructor doesn't perform any async operations.
🧹 Nitpick comments (69)
crates/gpu-converters/src/nv12_rgba/shader.wgsl (2)
`8-12`: **Use producer-plane dimensions for bounds check to avoid mismatches.**

Using `textureDimensions(output)` assumes the output texture size always matches the Y plane. Using the Y plane's dimensions is slightly more robust and prevents subtle mismatches when the output is preallocated or reused at a different size.

Apply this diff:

```diff
-    let dims = textureDimensions(output);
+    let dims = textureDimensions(y_plane);
```
`21-27`: **Parameterize color range and matrix (BT.601 vs BT.709) to avoid color casts across devices.**

Camera sources may be full range (0–255) or limited/video range (16–235). Also, SD often uses BT.601 while HD/most webcams use BT.709. Hardcoding 0.5-centering and 601 coefficients can produce visible tints. Recommend factoring range/matrix via uniforms.

Minimal, in-place improvement (more precise 601 constants + hook for limited range):

```diff
-    // YUV to RGB conversion
-    let u = uv.r - 0.5;
-    let v = uv.g - 0.5;
-
-    let r = y + 1.402 * v;
-    let g = y - 0.344 * u - 0.714 * v;
-    let b = y + 1.772 * u;
+    // YUV to RGB conversion (default: BT.601 full-range)
+    // TODO: drive limited/full range and matrix via uniforms.
+    let u = uv.r - 0.5;
+    let v = uv.g - 0.5;
+    // If limited-range input is detected, prefer:
+    // let y_lin = max((y - 0.0627451) * 1.1643836, 0.0);
+    let y_lin = y;
+    let r = y_lin + 1.402000 * v;
+    let g = y_lin - 0.344136 * u - 0.714136 * v;
+    let b = y_lin + 1.772000 * u;
```

If helpful, I can draft a small uniform block (range + 3x3 matrix + offsets) shared by all YUV shaders.
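To make the range handling concrete, here is a compilable CPU-side Rust sketch of the same BT.601 math; the function name and the `full_range` flag are illustrative stand-ins for the proposed uniform, not part of this crate's API:

```rust
/// BT.601 YUV -> RGB with selectable full/limited range.
/// CPU reference sketch mirroring the shader constants suggested above.
fn yuv_to_rgb_bt601(y: u8, u: u8, v: u8, full_range: bool) -> (f32, f32, f32) {
    // Luma: full range maps 0..255 -> 0..1; limited (video) range maps 16..235 -> 0..1.
    let y_lin = if full_range {
        y as f32 / 255.0
    } else {
        ((y as f32 - 16.0) / 219.0).max(0.0)
    };
    // Chroma is centered on 128 in both ranges.
    let u_n = (u as f32 - 128.0) / 255.0;
    let v_n = (v as f32 - 128.0) / 255.0;

    let r = y_lin + 1.402000 * v_n;
    let g = y_lin - 0.344136 * u_n - 0.714136 * v_n;
    let b = y_lin + 1.772000 * u_n;
    (r.clamp(0.0, 1.0), g.clamp(0.0, 1.0), b.clamp(0.0, 1.0))
}
```

Neutral gray (128, 128, 128) stays gray in both modes, while limited-range black (Y=16) and white (Y=235) map to exactly 0.0 and 1.0 — a handy sanity check for whichever range flag the shader ends up taking.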
crates/gpu-converters/src/bgra_rgba/shader.wgsl (1)
`6-11`: **Prefer input texture dimensions for bounds check.**

Safer to derive dims from the sampled input texture to avoid writing outside if output got mis-sized.

Apply this diff:

```diff
-    let dims = textureDimensions(output);
+    let dims = textureDimensions(input_texture);
```

crates/gpu-converters/src/uyvy_rgba/shader.wgsl (2)
`30-77`: **Guard cross-word reads or ensure buffer padding to avoid robust-access penalties.**

Branches that read `input_buffer[word_index + 1u]` can overrun at the end of the last row if the buffer is tightly sized. WebGPU may inject robust bounds checks (returns zeros) which is safe but slower. Either:
- Ensure CPU-side padding to at least 3 extra bytes (prefer 4) at the end, or
- Pass total_words as a uniform and guard reads.
If you choose padding, verify at upload sites that buffers are rounded up to a multiple of 4 and zero-filled. If you prefer guarding, I can provide a guarded read helper.
`85-87`: **Use precise BT.601 constants to reduce subtle green/magenta tints.**

Small change but noticeable on skin tones.

Apply this diff:

```diff
-    let g = y_norm - 0.344 * u_norm - 0.714 * v_norm;
+    let g = y_norm - 0.344136 * u_norm - 0.714136 * v_norm;
```

crates/gpu-converters/src/rgb24_rgba/shader.wgsl (1)
`42-54`: **Potential OOB on final pixel of the last row; prefer padding or guarded read.**

Same concern as UYVY: `word1` access can run past the buffer end for tightly sized buffers. Either pad the upload to a 4-byte boundary (recommended) or add a guard using a `total_words` uniform.
Would you like me to add a tiny helper that safely fetches word1 with zero-fill if out-of-range?
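For reference, here is a CPU-side Rust sketch of what such a zero-fill guarded read would do; the helper names are hypothetical, and in the shader a `total_words` uniform would play the role of the slice length:

```rust
/// Fetch the 32-bit word at `index`, returning 0 when out of range.
/// Mirrors a guarded WGSL read driven by a `total_words` uniform.
fn load_word_or_zero(words: &[u32], index: usize) -> u32 {
    words.get(index).copied().unwrap_or(0)
}

/// Read the RGB24 pixel at `pixel_index` from a tightly packed byte stream
/// viewed as little-endian u32 words, tolerating a missing tail word.
fn rgb24_pixel(words: &[u32], pixel_index: usize) -> (u8, u8, u8) {
    let byte_index = pixel_index * 3;
    let word0 = load_word_or_zero(words, byte_index / 4);
    let word1 = load_word_or_zero(words, byte_index / 4 + 1);
    // Stitch both words into 64 bits so a pixel spanning the word
    // boundary can be extracted with a single shift.
    let combined = (word0 as u64) | ((word1 as u64) << 32);
    let shift = (byte_index % 4) * 8;
    let r = (combined >> shift) as u8;
    let g = (combined >> (shift + 8)) as u8;
    let b = (combined >> (shift + 16)) as u8;
    (r, g, b)
}
```

The last pixel of a tightly sized buffer reads zeros from the missing word instead of faulting, which is the same behavior robust buffer access gives you — just made explicit and predictable.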
crates/gpu-converters/src/yuv420p_rgba/shader.wgsl (2)
`8-12`: **Prefer using y_plane dimensions for bounds check.**

As with NV12, basing dims on the producer plane avoids accidental mismatches.

Apply this diff:

```diff
-    let dims = textureDimensions(output);
+    let dims = textureDimensions(y_plane);
```
`23-31`: **Expose color range/matrix selection; improve constants.**

Hardcoding BT.601 full-range can be off for many 420p producers (often BT.709 limited). Suggest making range/matrix uniforms; meanwhile, tightening constants helps.

Apply this diff:

```diff
-    // Convert from YUV to RGB color space
-    // Using ITU-R BT.601 conversion matrix
-    let u_centered = u - 0.5;
-    let v_centered = v - 0.5;
-
-    let r = y + 1.402 * v_centered;
-    let g = y - 0.344 * u_centered - 0.714 * v_centered;
-    let b = y + 1.772 * u_centered;
+    // Convert from YUV to RGB color space (default: BT.601 full-range)
+    // TODO: drive limited/full range and matrix via uniforms.
+    let u_centered = u - 0.5;
+    let v_centered = v - 0.5;
+    // If limited-range input is detected, prefer:
+    // let y_lin = max((y - 0.0627451) * 1.1643836, 0.0);
+    let y_lin = y;
+    let r = y_lin + 1.402000 * v_centered;
+    let g = y_lin - 0.344136 * u_centered - 0.714136 * v_centered;
+    let b = y_lin + 1.772000 * u_centered;
```

crates/gpu-converters/src/yuyv_rgba/shader.wgsl (3)
`20-24`: **Guard against odd widths and row stride assumptions**

Your byte addressing assumes tightly packed rows with width even (YUYV 4:2:2 requires it). If an odd width or padded stride slips through from the host, `word_index + 1u` paths can read past the row/image end. Verify the host enforces: (a) width is even, (b) row stride == width * 2 bytes. If either can vary, add a `row_stride_bytes` uniform and use it instead of `dims.x * 2u`.

I can draft the stride-aware variant if you confirm padding is possible on your capture source(s).
`31-77`: **Simplify alignment branches for the common case (fewer divergent paths, fewer loads)**

For even widths (typical), `byte_offset` is always 0, so only the first branch runs. The other three branches are dead in the common path yet still incur control-flow divergence. Consider specializing for `byte_offset == 0u` and placing the fallback under a rare-path `else`. This reduces instruction pressure and memory reads.

Here's a minimal restructuring that keeps the fallback but fast-paths the aligned case:

```diff
-    // Extract YUYV components based on alignment
-    if (byte_offset == 0u) {
+    // Extract YUYV components based on alignment
+    if (byte_offset == 0u) {
         // YUYV starts at word boundary: [YUYV]
         let word = input_buffer[word_index];
         let y0 = (word >> 0u) & 0xFFu;
         let u_val = (word >> 8u) & 0xFFu;
         let y1 = (word >> 16u) & 0xFFu;
         let v_val = (word >> 24u) & 0xFFu;
         y = select(y0, y1, is_odd_pixel);
         u = u_val;
         v = v_val;
-    } else if (byte_offset == 1u) {
+    } else if (byte_offset == 1u) {
         // YUYV spans boundary: [?YUY][V???]
         ...
```

If you prefer, we can entirely drop the misalignment cases once the host asserts even width and 4-byte-aligned row starts.
`79-88`: **Use canonical BT.601 coefficients; optionally support limited/full-range Y**

The current constants are rounded. For better accuracy use canonical values. Also, some cameras deliver full-range Y (0–255) and others limited (16–235). Consider a mode flag to switch the Y offset.

Apply this tweak for coefficients:

```diff
-    let r = y_norm + 1.402 * v_norm;
-    let g = y_norm - 0.344 * u_norm - 0.714 * v_norm;
-    let b = y_norm + 1.772 * u_norm;
+    let r = y_norm + 1.402000 * v_norm;
+    let g = y_norm - 0.344136 * u_norm - 0.714136 * v_norm;
+    let b = y_norm + 1.772000 * u_norm;
```

If you need limited-range support, we can add:

```wgsl
let y_lin = max((f32(y) - 16.0) / 219.0, 0.0);
```

keep `u_norm = (f32(u) - 128.0) / 255.0;` (and same for `v_norm`) and branch on a uniform flag.

crates/gpu-converters/src/scaler/bilinear.wgsl (2)
`1-3`: **Unused sampler binding**

`input_sampler` isn't used. If you're standardizing bind group layouts across scalers, keeping it is fine. Otherwise, drop it to simplify the pipeline layout.

We can also switch to `textureSampleLevel` for readability and use the sampler consistently across all scalers.
`14-15`: **Workgroup size tuning opportunity**

`@workgroup_size(8, 8)` is a safe default. On discrete GPUs, `16x16` often improves occupancy and memory coalescing for per-pixel compute kernels like this. Worth a quick micro-benchmark.

I can add a tiny harness to compare 8x8 vs 16x16 on your target GPUs.
crates/gpu-converters/src/scaler/bicubic.wgsl (2)
`65-79`: **Reduce recomputation of weights inside the nested loops**

You recompute `cubic_weight(fx - f32(i))` and `cubic_weight(fy - f32(j))` each iteration. Precompute the 4 weights for X and Y outside the loops and use them by index. This reduces ALU and improves cache utilization.

Sketch (WGSL casts use `u32(...)`, not `as`):

```diff
-    for (var j = -1; j <= 2; j++) {
-        for (var i = -1; i <= 2; i++) {
+    var wx = array<f32, 4>(cubic_weight(fx + 1.0), cubic_weight(fx + 0.0), cubic_weight(fx - 1.0), cubic_weight(fx - 2.0));
+    var wy = array<f32, 4>(cubic_weight(fy + 1.0), cubic_weight(fy + 0.0), cubic_weight(fy - 1.0), cubic_weight(fy - 2.0));
+    for (var j = -1; j <= 2; j++) {
+        for (var i = -1; i <= 2; i++) {
             ...
-            let weight_x = cubic_weight(fx - f32(i));
-            let weight_y = cubic_weight(fy - f32(j));
+            let weight_x = wx[u32(i + 1)];
+            let weight_y = wy[u32(j + 1)];
```
`86-89`: **Clamping after normalization is good; watch for haloing on high-contrast edges**

Catmull-Rom can overshoot; your clamp mitigates that. If you notice ringing/halos in camera UI, consider Mitchell-Netravali (B=1/3, C=1/3) weights as an alternative.
Happy to provide a Mitchell-Netravali variant if needed.
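For concreteness, both filters belong to the same two-parameter BC-spline family — (B, C) = (0, 0.5) reproduces Catmull-Rom and (1/3, 1/3) is Mitchell-Netravali — so a WGSL port is just a constant change. A Rust sketch of the generalized weight (illustrative, not the shader's current `cubic_weight`):

```rust
/// Mitchell–Netravali family of cubic filter weights.
/// (b, c) = (0.0, 0.5): Catmull-Rom (sharp, may ring).
/// (b, c) = (1/3, 1/3): Mitchell-Netravali (less ringing).
fn cubic_weight(x: f32, b: f32, c: f32) -> f32 {
    let x = x.abs();
    let (x2, x3) = (x * x, x * x * x);
    if x < 1.0 {
        ((12.0 - 9.0 * b - 6.0 * c) * x3
            + (-18.0 + 12.0 * b + 6.0 * c) * x2
            + (6.0 - 2.0 * b))
            / 6.0
    } else if x < 2.0 {
        ((-b - 6.0 * c) * x3
            + (6.0 * b + 30.0 * c) * x2
            + (-12.0 * b - 48.0 * c) * x
            + (8.0 * b + 24.0 * c))
            / 6.0
    } else {
        0.0
    }
}
```

Both parameter pairs partition unity across the four taps at any sub-pixel offset, so switching filters never changes overall brightness — only the sharpness/ringing trade-off.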
crates/gpu-converters/src/scaler/nearest.wgsl (2)
`27-30`: **Nearest coordinate rounding uses floor; consider center-correct rounding**

For nearest, aligning to centers typically uses rounding rather than floor to avoid a half-pixel shift.

Center-correct rounding:

```diff
-    let input_x = i32(f32(output_coords.x) * scale_x);
-    let input_y = i32(f32(output_coords.y) * scale_y);
+    let input_x = i32((f32(output_coords.x) + 0.5) * scale_x);
+    let input_y = i32((f32(output_coords.y) + 0.5) * scale_y);
```

This matches your bilinear/bicubic center mapping.
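A quick CPU-side check makes the half-pixel shift visible (helper names are illustrative): with a 4→2 downscale (`scale = 2.0`), floor mapping samples sources {0, 2}, while center mapping samples {1, 3} — the pixels whose centers sit nearest the output pixel centers.

```rust
/// Map an output pixel to a source pixel; scale = input_size / output_size.
/// `nearest_floor` mirrors the current shader, `nearest_center` the fix.
fn nearest_floor(out_x: u32, scale: f32) -> u32 {
    (out_x as f32 * scale) as u32
}

fn nearest_center(out_x: u32, scale: f32) -> u32 {
    ((out_x as f32 + 0.5) * scale) as u32
}
```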
`1-3`: **Unused sampler binding**

Same note as in bilinear: if not standardizing the layout, you can drop `input_sampler`.

crates/gpu-converters/src/perf.rs (5)
`110-116`: **Provide Default for PerformanceTracker (Clippy hint)**

Add `Default` per Clippy suggestion to align with `new()` and improve ergonomics.

Apply:

```diff
 impl PerformanceTracker {
     pub fn new() -> Self {
         Self {
             metrics_history: Vec::new(),
             operation_stats: HashMap::new(),
             max_history_size: 1000,
         }
     }
 }
+
+impl Default for PerformanceTracker {
+    fn default() -> Self {
+        Self::new()
+    }
+}
```
`164-177`: **Average throughput/efficiency are unweighted means; consider sum-based aggregates**

Averaging per-op throughput and efficiency equally can skew results toward small/fast ops. Summing bytes and durations yields a truer aggregate throughput; similarly, weight GPU efficiency by total durations.
I can propose a patch to compute:
- total_bytes = sum(input_size)
- total_time = sum(total_duration)
- avg_throughput = total_bytes / total_time
- gpu_efficiency = sum(gpu_duration) / sum(total_duration)
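The bullets above amount to something like the following sketch; the `(input_bytes, total_secs, gpu_secs)` tuple stands in for the real metrics type, which this crate names differently:

```rust
/// Time-weighted aggregates instead of a mean of per-op rates.
/// Each op is (input_bytes, total_secs, gpu_secs).
fn aggregate(ops: &[(usize, f64, f64)]) -> (f64, f64) {
    let total_bytes: usize = ops.iter().map(|o| o.0).sum();
    let total_time: f64 = ops.iter().map(|o| o.1).sum();
    let gpu_time: f64 = ops.iter().map(|o| o.2).sum();
    // Overall bytes/sec: big slow ops dominate, as they should.
    let throughput_bps = total_bytes as f64 / total_time;
    // Fraction of wall time actually spent on the GPU.
    let gpu_efficiency = gpu_time / total_time;
    (throughput_bps, gpu_efficiency)
}
```

The skew is easy to see: one op at 1 MB/s plus one at 10 MB/s averages to 5.5 MB/s unweighted, but if the fast op moved 100x the data over 10x the time, the true aggregate is ~9.2 MB/s.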
`367-378`: **Use usize-suffixed literals for sizes in tests**

`1920 * 1080 * 4` defaults to i32 before widening. It fits, but using `usize` avoids accidental overflow if these constants change.

```diff
-            input_size: 1920 * 1080 * 4,
-            output_size: 1920 * 1080 * 4,
+            input_size: 1920usize * 1080usize * 4usize,
+            output_size: 1920usize * 1080usize * 4usize,
```
`431-434`: **Avoid strict float equality in tests**

Floating-point math can drift across platforms/optimizations. Prefer approx comparisons.

Example:

```diff
-        assert_eq!(metrics.throughput_mbps(), 1.0);
-        assert_eq!(metrics.pixels_per_second(), 1024.0 * 1024.0);
-        assert_eq!(metrics.gpu_efficiency(), 0.8);
+        assert!((metrics.throughput_mbps() - 1.0).abs() < 1e-6);
+        assert!((metrics.pixels_per_second() - (1024.0 * 1024.0)).abs() < 1e-3);
+        assert!((metrics.gpu_efficiency() - 0.8).abs() < 1e-6);
```
123-141
: History maintenance uses Vec::remove(0) (O(n))

For large histories,
remove(0)
shifts elements. If churn is high, consider a ring buffer or `VecDeque`.
I can switch this to a
VecDeque
with `pop_front()`
in a follow-up patch.

crates/gpu-converters/src/util.rs (1)
17-17
: Prefer feature-gating over blanket `#[allow(dead_code)]`
To avoid masking accidental dead code, gate with cfg for debug/tests or a feature flag instead of a global allow.
Example:
```diff
-#[allow(dead_code)]
+#[cfg_attr(not(any(test, debug_assertions, feature = "gpu-readback")), allow(dead_code))]
```

If you prefer not to add a new feature, at least limit the allow to non-test, non-debug builds.
apps/desktop/src-tauri/Cargo.toml (1)
92-92
: Consider making GPU converters opt-in via a Cargo feature to keep prod builds lean

Pulling
cap-gpu-converters
into all targets may increase build times and binary size, and ships debug-only paths. Gate it behind a feature and enable it for dev/debug.

Example:
```diff
-cap-gpu-converters = { path = "../../../crates/gpu-converters" }
+cap-gpu-converters = { path = "../../../crates/gpu-converters", optional = true }
```

Add features:
```toml
[features]
default = []
gpu-preview = ["cap-gpu-converters"]
camera-debug = []
```

Then guard usage with
#[cfg(feature = "gpu-preview")]
.

This also aligns with keeping the new debug commands (see commands/mod.rs) out of release builds.

CAMERA_DEBUG_GUIDE.md (4)
224-242
: Use the crate path and match the compile gate in the integration snippet

The snippet imports from
commands::camera_debug::*;
which may not resolve from the current crate root and should be gated. Prefer the explicit crate path and guard with the feature/debug gate recommended in code.

Apply:
```diff
-// In your main.rs or lib.rs
-use commands::camera_debug::*;
+// In your main.rs or lib.rs
+#[cfg(any(debug_assertions, feature = "camera-debug"))]
+use crate::commands::camera_debug::*;

 fn main() {
     tauri::Builder::default()
-        .invoke_handler(tauri::generate_handler![
+        .invoke_handler(tauri::generate_handler![
             test_camera_feed,
             get_camera_loading_state,
             force_show_camera_window,
             diagnose_camera_preview,
             quick_fix_camera_preview,
             debug_camera_auto_fix,
             get_window_status
         ])
```

Also mention in prose that these commands are intended for debug/dev and should be feature-gated in release builds.
201-209
: Add languages to fenced code blocks to satisfy markdownlint (MD040) and improve readability

Two blocks lack a language hint. Use
text
for log-like blocks.

Apply:
````diff
-**✅ Good signs:**
-```
+**✅ Good signs:**
+```text
 ✓ Camera feed is working
 ✓ GPU camera converter initialized successfully
 ✓ Camera finished loading, received first frame
 ✓ Window forced visible
 Uploading texture #N: 1280x720, stride: 5120, buffer size: 3686400 bytes
 Surface presented #N
````

`211-219`: **Same: add language to the "Problem indicators" fenced block**

Apply:

````diff
-**❌ Problem indicators:**
-```
+**❌ Problem indicators:**
+```text
 ✗ No camera frames received for 5.0s
 ✗ GPU conversion failed, falling back to ffmpeg
 ✗ Failed to force show window
 No texture data provided for render #N
 Buffer too small: X bytes, expected at least Y bytes
````
`275-286`: **Avoid auto-invoking "quick fix" on a timer in production UIs**

Calling `quick_fix_camera_preview` every 10s can cause disruptive side effects (window focus/visibility changes, resource re-inits). Consider logging and surfacing a manual "Apply quick fix" action instead, or gate the timer behind a debug flag.

apps/desktop/src-tauri/src/recording.rs (5)

`530-534`: **Avoid potential panics; don't hard-require CameraPreview state and remove duplicate shutdowns.**

- `app.state::<CameraPreview>()` will panic if the state isn't present. Use `try_state()` or gate the call.
- You also call shutdown here and again in `handle_recording_end` (Lines 661-663). Pick a single, reliable place (preferably `handle_recording_end`) to avoid double-shutdown races.

Apply this diff (and centralize shutdown in handle_recording_end only):

```diff
-    println!("STOP RECORDING COMMAND FIRE");
-
-    // TODO: This should be derived.
-    app.state::<CameraPreview>().shutdown();
+    tracing::debug!("stop_recording: invoked");
+    // CameraPreview shutdown is handled in handle_recording_end to avoid double-shutdown races.
```
If you still want an early shutdown here, guard it:
```diff
-    app.state::<CameraPreview>().shutdown();
+    if let Some(preview) = app.try_state::<CameraPreview>() {
+        preview.shutdown();
+    }
```

Note: try_state::() is available on recent Tauri versions; if unavailable, keep shutdown solely in handle_recording_end.
536-537
: Redundant/odd pattern: `return Err(...)?`

`return Err(...)?` is a clippy "useless_question_mark" and is confusing. Return the error directly.
```diff
-    return Err("Recording not in progress".to_string())?;
+    return Err("Recording not in progress".to_string());
```
539-548
: Use tracing instead of println and drop placeholder logs.

The println! calls ("AA", "BB", "STOP RECORDING COMMAND DONE") should be `tracing::debug!` with context or removed.
```diff
-    println!("AA");
+    tracing::debug!("stop_recording: cleared current recording");
-    println!("BB");
+    tracing::debug!("stop_recording: recording stopped");
-    println!("STOP RECORDING COMMAND DONE");
+    tracing::debug!("stop_recording: completed");
```

Add debug to imports:
```diff
-    use tracing::{error, info};
+    use tracing::{debug, error, info};
```
651-663
: Double shutdown and noisy println; make shutdown idempotent and centralized.
- You already clear camera/mic feeds. Calling CameraPreview::shutdown() here in addition to stop_recording introduces double-shutdown potential and races. Centralize shutdown here, and remove the earlier call in stop_recording.
- Remove the println! and the commented-out window-close block or gate them behind a debug flag.
```diff
-    // if let Some(v) = CapWindowId::Camera.get(&handle) {
-    //     let _ = v.close();
-    // }
-    println!("I WANT YOU TO SHUTDOWN PLZ");
+    // Optionally close the camera window if still present:
+    // if let Some(v) = CapWindowId::Camera.get(&handle) { let _ = v.close(); }
+    tracing::debug!("handle_recording_end: shutting down CameraPreview");

     app.camera_feed.take();
     app.mic_feed.take();

-    // TODO: This shouldn't be required
-    handle.state::<crate::camera::CameraPreview>().shutdown();
+    if let Some(preview) = handle.try_state::<crate::camera::CameraPreview>() {
+        preview.shutdown();
+    }
```

If try_state is not available in your Tauri version, consider making CameraPreview::shutdown idempotent and safe to call multiple times.
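On the idempotency fallback, here is a minimal sketch of a shutdown that is safe to call repeatedly — `AtomicBool::swap` lets only the first caller perform teardown. The `CameraPreview` below is a stand-in struct, not the app's real type:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Stand-in for the real CameraPreview state; only the flag matters here.
struct CameraPreview {
    shut_down: AtomicBool,
}

impl CameraPreview {
    fn new() -> Self {
        Self { shut_down: AtomicBool::new(false) }
    }

    /// Returns true only for the call that actually performed teardown.
    fn shutdown(&self) -> bool {
        // swap returns the previous value; only the first caller sees `false`.
        if self.shut_down.swap(true, Ordering::SeqCst) {
            return false; // already shut down elsewhere
        }
        // ... release GPU surface, stop the frame channel, etc.
        true
    }
}
```

With this in place, calling shutdown from both `stop_recording` and `handle_recording_end` (or a window-destroy hook) is harmless rather than a race.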
42-42
: Import debug for logging.

To use `tracing::debug!` in this file, add it to imports.
```diff
-    use tracing::{error, info};
+    use tracing::{debug, error, info};
```

crates/gpu-converters/tests/integration_test.rs (5)
45-66
: Solid basic GPU init and NV12→RGBA path coverage.

This test validates the happy path well. Consider also asserting a few sample pixels (or checksum) to catch silent format/channel order regressions.
118-143
: Assumes Balanced preset enables perf tracking; verify that invariant or gate the assertion.

If Balanced may disable tracking via configuration/env, this test could be flaky. Consider asserting summary.is_some() only if a feature flag is set, or enforce enabling tracking when constructing the converter for the test.
172-201
: Texture pool test lacks behavioral assertions.

You capture stats but do not assert on expected growth or reuse patterns. Adding minimal expectations (e.g., total_available > 0 after conversions) would catch pool regressions.
```diff
-    let _final_stats = converter.get_texture_pool_stats();
-    // Note: exact behavior depends on implementation details
+    let final_stats = converter.get_texture_pool_stats();
+    assert!(
+        final_stats.total_available >= initial_stats.total_available,
+        "Texture pool should not shrink during conversions"
+    );
```
237-253
: Double-check BGRA.needs_conversion() expectation.

This asserts BGRA needs conversion. If your pipeline supports BGRA passthrough in some presets, this test will fail spuriously. Confirm intended semantics across presets.
275-293
: Stride helpers coverage is good; add odd-dimension format validation.

Consider adding a negative test for formats requiring even width/height (e.g., NV12) to ensure you return InvalidDimensions or similar rather than panicking.
I can add a small NV12 odd-dimension test if you want.
apps/desktop/src-tauri/src/windows.rs (2)
372-381
: Visibility choreography: ensure transparent window is actually transparent.

You rely on transparent(true) and visibility controlled later. To address the TODO "Fix the black background on the camera preview":
- Ensure the WGPU surface uses a compositing alpha mode that blends with the window (e.g., CompositeAlpha::PreMultiplied / wgpu::CompositeAlphaMode::PreMultiplied in SurfaceConfiguration on platforms that support it).
- Clear with transparent color (0,0,0,0) in your render/compute path.
- On macOS, the NSWindow must be non-opaque with a clear background (see set_window_transparent below).
I can draft the surface configuration changes if you share the converter/preview surface setup file.
768-782
: macOS transparency: also clear the NSWindow background color.

`set_window_transparent` flips opacity, but macOS windows may still draw an opaque background. Set background color to clear for the camera and overlay windows.
```diff
 #[cfg(target_os = "macos")]
 pub fn set_window_transparent(_window: tauri::Window, _value: bool) {
     #[cfg(target_os = "macos")]
     {
         let ns_win = _window
             .ns_window()
             .expect("Failed to get native window handle")
             as *const objc2_app_kit::NSWindow;
         unsafe {
             (*ns_win).setOpaque(!_value);
+            if _value {
+                use objc2_app_kit::NSColor;
+                (*ns_win).setBackgroundColor(Some(NSColor::clearColor()));
+            }
         }
     }
 }
```

This change often fixes black backdrops on transparent overlays when using GPU surfaces.

crates/gpu-converters/src/scaler/mod.rs (4)
crates/gpu-converters/src/scaler/mod.rs (4)
31-42
: Constructor is fine; consider caching or lazily building pipelines.

If startup latency matters, you can lazily build only the selected pipeline on first use. Not critical; just a potential optimization.
173-205
: Sampler creation per call; consider reusing per-quality samplers.

Creating a sampler for each scale adds overhead. You could precreate two samplers (Nearest/Linear) in GPUScaler and choose at runtime.
273-311
: copy_texture assumes Rgba8Unorm; document or parameterize the format.

If upstream passes non-RGBA textures, the copy here won't match. Either document that input textures must be RGBA8Unorm or accept a format parameter and validate.
144-172
: Async without awaiting.

`scale_texture` doesn't await anything; consider making it synchronous for clarity unless you want a uniform async API surface.
```diff
-    pub async fn scale_texture(
+    pub fn scale_texture(
         &self,
         input_texture: &wgpu::Texture,
         output_width: u32,
         output_height: u32,
         quality: ScalingQuality,
     ) -> Result<wgpu::Texture, ConversionError> {
```

Update call sites accordingly.
crates/gpu-converters/README.md (6)
18-21
: Quick Start: missing imports for FallbackStrategy and ScalingQuality.

Readers will hit compile errors without these in scope.
```diff
-use cap_gpu_converters::{
-    GPUCameraConverter, CameraInput, CameraFormat, ConversionPreset
-};
+use cap_gpu_converters::{
+    GPUCameraConverter, CameraInput, CameraFormat, ConversionPreset, FallbackStrategy, ScalingQuality
+};
```
105-108
: Memory usage example prints a struct with Display; show fields instead.

As written, `println!("GPU memory usage: {}", usage)` likely won't compile. Print specific fields.
```diff
-if let Some(usage) = converter.get_memory_usage() {
-    println!("GPU memory usage: {}", usage);
-}
+if let Some(usage) = converter.get_memory_usage() {
+    println!(
+        "GPU memory: pool={} bytes, in_pool={}, in_use={}",
+        usage.estimated_pool_memory_bytes, usage.textures_in_pool, usage.textures_in_use
+    );
+}
```
207-224
: Specify language for fenced code block (markdownlint MD040).

Add a language tag for the example output block.
-``` +```text === Benchmark Results === ...--- `158-164`: **API drift: convert_to_rgba_texture not exercised in tests.** Ensure this API exists and remains stable. If it’s private or unimplemented on some backends, consider gating the doc with cfgs or linking to the function signature. --- `258-291`: **Camera example may not match cap_camera API.** The example assumes list_cameras() returns an iterator and CameraInfo supports start_capturing with a callback. Validate against actual cap_camera APIs to avoid misleading users. Share the cap_camera public functions, and I’ll align this example to compile. --- `295-303`: **Benchmark table values should be marked as illustrative.** You already note “Results may vary,” good. Consider adding “Example” in the table caption or an explicit disclaimer above to avoid misinterpretation as guaranteed performance. </blockquote></details> <details> <summary>crates/gpu-converters/src/bgra_rgba/mod.rs (2)</summary><blockquote> `13-13`: **Consider removing unused `async` qualifier** The `new` constructor doesn't perform any async operations and could be a synchronous function instead. ```diff -pub async fn new(device: &wgpu::Device, queue: &wgpu::Queue) -> Result<Self, ConversionError> { +pub fn new(device: &wgpu::Device, queue: &wgpu::Queue) -> Result<Self, ConversionError> {
`216-226`: **Consider using `?` operator for cleaner error handling**

The nested `map_err` calls can be simplified using the `?` operator with intermediate results.

```diff
-rx.recv()
-    .map_err(|e| ConversionError::GPUError(format!("Failed to receive result: {}", e)))?
-    .map_err(|e| ConversionError::GPUError(format!("Failed to map buffer: {:?}", e)))?;
+let map_result = rx.recv()
+    .map_err(|e| ConversionError::GPUError(format!("Failed to receive result: {}", e)))?;
+map_result
+    .map_err(|e| ConversionError::GPUError(format!("Failed to map buffer: {:?}", e)))?;
```

apps/desktop/src-tauri/src/lib.rs (1)
`2238-2242`: **TODO comments need proper tracking**

These TODOs appear related to the camera window lifecycle. They should either be implemented or tracked properly.
Would you like me to help create GitHub issues to track these TODO items for proper camera window cleanup during the Main window destruction and Camera window destruction events?
Also applies to: 2263-2267
crates/gpu-converters/src/rgb24_rgba/mod.rs (1)
`13-13`: **Consider making the constructor synchronous**

The `new` method doesn't perform any actual async operations. All WGPU operations here are synchronous, so this could be a regular synchronous function.

```diff
-    pub async fn new(device: &wgpu::Device, queue: &wgpu::Queue) -> Result<Self, ConversionError> {
+    pub fn new(device: &wgpu::Device, queue: &wgpu::Queue) -> Result<Self, ConversionError> {
```

crates/gpu-converters/examples/benchmark.rs (1)
`441-442`: **Test assertion could benefit from clearer calculation**

The expected size calculation could be more explicit for readability.

```diff
 fn test_generate_test_data() {
     let data = generate_test_data(CameraFormat::NV12, 640, 480);
-    let expected_size = (640 * 480) as f32 * 1.5;
-    assert_eq!(data.len(), expected_size as usize);
+    let expected_size = (640 * 480) + (640 * 480) / 2; // Y plane + UV plane
+    assert_eq!(data.len(), expected_size);
```

crates/gpu-converters/src/texture_pool.rs (3)
`114-114`: **Use or_default() for cleaner code**

As suggested by Clippy, use `or_default()` instead of `or_insert_with(Vec::new)`.

```diff
-        let available = self.available_textures.entry(key).or_insert_with(Vec::new);
+        let available = self.available_textures.entry(key).or_default();
```

`226-226`: **Use or_default() for cleaner code**

Same Clippy suggestion as above: use `or_default()` instead of `or_insert_with(Vec::new)`.

```diff
-        let textures = self.available_textures.entry(key).or_insert_with(Vec::new);
+        let textures = self.available_textures.entry(key).or_default();
```
`190-231`: **Consider texture format validation in pre_warm**

The `pre_warm` method hardcodes texture usage flags which may not match all use cases. Consider taking usage flags as a parameter or using the descriptors provided by the helper methods.

```diff
-    pub fn pre_warm(&mut self, width: u32, height: u32, format: wgpu::TextureFormat, count: usize) {
-        let desc = TextureDescriptor {
-            label: Some("Pre-warmed Texture"),
-            size: wgpu::Extent3d {
-                width,
-                height,
-                depth_or_array_layers: 1,
-            },
-            mip_level_count: 1,
-            sample_count: 1,
-            dimension: wgpu::TextureDimension::D2,
-            format,
-            usage: wgpu::TextureUsages::STORAGE_BINDING
-                | wgpu::TextureUsages::COPY_SRC
-                | wgpu::TextureUsages::TEXTURE_BINDING,
-            view_formats: &[],
-        };
+    pub fn pre_warm(&mut self, desc: &TextureDescriptor, count: usize) {
```

crates/gpu-converters/src/fallback.rs (3)
`14-14`: **Complex type could benefit from type alias**

As suggested by Clippy, the Custom variant's type is complex. Consider creating a type alias for better readability.

```diff
+type FallbackFn = Arc<dyn Fn(&CameraInput) -> Result<Vec<u8>, ConversionError> + Send + Sync>;
+
 pub enum FallbackStrategy {
     /// No fallback - return error immediately
     None,
     /// Use CPU-based conversion as fallback
     CpuConversion,
     /// Try software implementation with different parameters
     SoftwareRetry,
     /// Custom fallback function provided by user
-    Custom(Arc<dyn Fn(&CameraInput) -> Result<Vec<u8>, ConversionError> + Send + Sync>),
+    Custom(FallbackFn),
 }
```
`324-332`: **Potential index out-of-bounds access**

The bounds check happens after calculating the index, which could theoretically overflow before the check. While unlikely with typical image dimensions, it's safer to validate bounds first.

```diff
     for x in 0..dst_width {
         let src_x = (x as f32 * x_ratio) as u32;
         let src_y = (y as f32 * y_ratio) as u32;
+        // Ensure source coordinates are within bounds
+        let src_x = src_x.min(src_width - 1);
+        let src_y = src_y.min(src_height - 1);
+
         let src_idx = ((src_y * src_width + src_x) * 4) as usize;

-        if src_idx + 3 < rgba_data.len() {
-            scaled_data.push(rgba_data[src_idx]);     // R
-            scaled_data.push(rgba_data[src_idx + 1]); // G
-            scaled_data.push(rgba_data[src_idx + 2]); // B
-            scaled_data.push(rgba_data[src_idx + 3]); // A
-        } else {
-            // Fallback to black pixel if out of bounds
-            scaled_data.extend_from_slice(&[0, 0, 0, 255]);
-        }
+        scaled_data.push(rgba_data[src_idx]);     // R
+        scaled_data.push(rgba_data[src_idx + 1]); // G
+        scaled_data.push(rgba_data[src_idx + 2]); // B
+        scaled_data.push(rgba_data[src_idx + 3]); // A
     }
```
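Clamping first also makes the scaler easy to unit-test on the CPU. A self-contained sketch of the clamped nearest-neighbor approach (the function name is illustrative, not the crate's actual API):

```rust
/// Nearest-neighbor scale of a tightly packed RGBA buffer. Source coordinates
/// are clamped up front, so indexing can never run past the input slice.
fn scale_rgba_nearest(src: &[u8], src_w: u32, src_h: u32, dst_w: u32, dst_h: u32) -> Vec<u8> {
    assert_eq!(src.len(), (src_w * src_h * 4) as usize);
    let x_ratio = src_w as f32 / dst_w as f32;
    let y_ratio = src_h as f32 / dst_h as f32;
    let mut out = Vec::with_capacity((dst_w * dst_h * 4) as usize);
    for y in 0..dst_h {
        let src_y = ((y as f32 * y_ratio) as u32).min(src_h - 1);
        for x in 0..dst_w {
            let src_x = ((x as f32 * x_ratio) as u32).min(src_w - 1);
            let i = ((src_y * src_w + src_x) * 4) as usize;
            out.extend_from_slice(&src[i..i + 4]); // copy R, G, B, A
        }
    }
    out
}
```

With the clamp in place, the per-pixel branch and the black-pixel fallback disappear entirely, which is also friendlier to the branch predictor in this hot loop.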
`365-377`: **Async function doesn't actually await anything**

The `check_gpu_health` function is marked async but doesn't perform any async operations. Consider making it synchronous.

```diff
-    pub async fn check_gpu_health(device: &wgpu::Device) -> bool {
+    pub fn check_gpu_health(device: &wgpu::Device) -> bool {
```

apps/desktop/src-tauri/src/commands/camera_debug.rs (2)
`186-219`: **Consider consolidating error handling in get_window_status**

Multiple similar error handling patterns could be simplified using a helper closure or macro.

```rust
pub fn get_window_status(window: WebviewWindow) -> Result<CameraDebugReport, String> {
    let mut details = Vec::new();

    // Helper to add status with error handling
    let add_status = |details: &mut Vec<String>, label: &str, result: Result<String, _>| {
        match result {
            Ok(value) => details.push(format!("{}: {}", label, value)),
            Err(e) => details.push(format!("{} check failed: {}", label, e)),
        }
    };

    add_status(&mut details, "Visible", window.is_visible().map(|v| v.to_string()));
    add_status(&mut details, "Size", window.inner_size().map(|s| format!("{}x{}", s.width, s.height)));
    add_status(&mut details, "Position", window.outer_position().map(|p| format!("{}, {}", p.x, p.y)));
    add_status(&mut details, "Focused", window.is_focused().map(|f| f.to_string()));

    Ok(CameraDebugReport {
        success: true,
        message: "Window status retrieved".to_string(),
        details: Some(details.join("\n")),
        fixes_applied: vec![],
    })
}
```
`277-284`: **Consider handling append operation more safely**

The mutable append operation modifies fixes in place; `extend` avoids the extra `mut` binding.

```diff
 // Step 4: Apply additional quick fixes
 match CameraDiagnostics::quick_fix_camera_preview(&camera_preview, &window).await {
-    Ok(mut fixes) => {
-        all_fixes.append(&mut fixes);
+    Ok(fixes) => {
+        all_fixes.extend(fixes);
     }
     Err(e) => {
         messages.push(format!("⚠ Quick fix failed: {}", e));
     }
 }
```

apps/desktop/src-tauri/src/camera.rs (2)
`109-110`: **TODO comment needs to be addressed.**

The TODO comment suggests that `camera_feed.take()` should be used instead of the cancellation token. This indicates technical debt that should be resolved.

Would you like me to create an issue to track the removal of the `cancel` field and migration to `camera_feed.take()` for shutdown coordination?
`563-564`: **Fix empty lines after doc comments.**

There are empty lines between doc comments and the functions they document, which violates Rust documentation conventions. Remove the empty lines after the closing doc comment blocks:

```diff
 /// ```
-
 /// Debug function to check camera feed status
```

```diff
 /// ```
-
 /// Test camera feed reception with timeout
```

```diff
 /// ```
-
 /// Comprehensive test function for debugging camera preview issues
```
Also applies to: 621-622, 698-699
crates/gpu-converters/src/lib.rs (4)
`183-218`: **Consider making GPU adapter selection configurable.**

The converter hardcodes `PowerPreference::HighPerformance`, which might not be suitable for all use cases (e.g., battery-powered devices). Consider making this configurable through the API or presets, with a builder pattern or configuration option:

```rust
pub struct GPUConverterConfig {
    pub power_preference: wgpu::PowerPreference,
    pub force_fallback_adapter: bool,
}

impl GPUCameraConverter {
    pub async fn with_config(config: GPUConverterConfig) -> Result<Self, ConversionError> {
        // Use config.power_preference instead of the hardcoded value
    }
}
```
`193-193`: **Improve error messages for better debugging.**

The error mapping loses the original error details by using `format!` with a generic message. Consider preserving the original error or using the `source` chain. Instead of:

```rust
.map_err(|e| ConversionError::GPUError(format!("Failed to request adapter: {}", e)))?;
```

Consider using the Display trait directly since the error is already converted to string:

```rust
.ok_or_else(|| ConversionError::GPUError("Failed to request adapter: no suitable adapter found".to_string()))?
```

Note that `request_adapter` returns `Option<Adapter>`, not `Result`, so the current code won't compile as-is.
: Consider reducing code duplication in converter initialization.The
convert_to_texture
andconvert_to_rgba_texture
methods have nearly identical initialization logic for each format converter. This could be refactored to reduce duplication.Consider using a macro or generic helper:
macro_rules! get_or_init_converter { ($self:expr, $field:ident, $converter_type:ty) => { if $self.$field.is_none() { $self.$field = Some(<$converter_type>::new(&$self.device, &$self.queue).await?); } $self.$field.as_ref().unwrap() }; } // Usage: let converter = get_or_init_converter!(self, nv12_converter, NV12ToRGBA);
573-585
: Memory usage estimation could be more accurate.The memory estimation assumes all textures are 1920x1080 RGBA (8MB each), which may not reflect actual usage. Consider tracking actual texture sizes for more accurate reporting.
Track actual texture dimensions in the pool and calculate real memory usage:
pub fn get_memory_usage(&self) -> Option<MemoryUsage> { let pool_stats = self.texture_pool.stats(); // Get actual memory usage from texture pool let actual_memory = self.texture_pool.get_total_memory_bytes(); // ... }
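As an illustration of what per-texture accounting could look like, here is a minimal CPU-side sketch that sums actual footprints instead of assuming 8 MB per texture (the format strings and helper names are hypothetical, not wgpu types):

```rust
/// Bytes per pixel for a small, illustrative set of formats.
fn bytes_per_pixel(format: &str) -> usize {
    match format {
        "rgba8" | "bgra8" => 4,
        "rg8" => 2,
        "r8" => 1,
        _ => 4, // conservative default for unknown formats
    }
}

/// Sum the real footprint of pooled textures from their recorded
/// (width, height, format) entries instead of a fixed per-texture estimate.
fn pool_memory_bytes(textures: &[(u32, u32, &str)]) -> usize {
    textures
        .iter()
        .map(|&(w, h, fmt)| w as usize * h as usize * bytes_per_pixel(fmt))
        .sum()
}
```

For a 1920x1080 RGBA texture this yields the familiar ~8.3 MB figure, but a pool of small preview textures reports a fraction of that instead of being rounded up.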
```rust
// Fallback ffmpeg scaler
info!("Initializing FFmpeg fallback scaler...");
let mut fallback_scaler = match scaling::Context::get(
```
Remove unused mutable qualifier and variable.
The `fallback_scaler` variable is declared as mutable but never mutated, and is actually never used in the code. This appears to be dead code from the FFmpeg fallback implementation that's currently replaced with `todo!()`.
Apply this diff to fix the issues:
```diff
-    let mut fallback_scaler = match scaling::Context::get(
+    let _fallback_scaler = match scaling::Context::get(
```
Or remove the entire block if the fallback is not being used:
```diff
-    // Fallback ffmpeg scaler
-    info!("Initializing FFmpeg fallback scaler...");
-    let mut fallback_scaler = match scaling::Context::get(
-        Pixel::RGBA,
-        1,
-        1,
-        Pixel::RGBA,
-        1,
-        1,
-        scaling::Flags::empty(),
-    ) {
-        Ok(scaler) => {
-            info!("FFmpeg fallback scaler initialized successfully");
-            Some(scaler)
-        }
-        Err(err) => {
-            error!("Error initializing ffmpeg scaler: {err:?}");
-            None
-        }
-    };
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-    let mut fallback_scaler = match scaling::Context::get(
+    let _fallback_scaler = match scaling::Context::get(
```
🧰 Tools
🪛 GitHub Check: Clippy
[warning] 182-182: variable does not need to be mutable
warning: variable does not need to be mutable
--> apps/desktop/src-tauri/src/camera.rs:182:17
|
182 | let mut fallback_scaler = match scaling::Context::get(
| ----^^^^^^^^^^^^^^^
| |
| help: remove this mut
|
= note: `#[warn(unused_mut)]` on by default
[warning] 182-182: unused variable: fallback_scaler
warning: unused variable: fallback_scaler
--> apps/desktop/src-tauri/src/camera.rs:182:21
|
182 | let mut fallback_scaler = match scaling::Context::get(
| ^^^^^^^^^^^^^^^ help: if this is intentional, prefix it with an underscore: _fallback_scaler
|
= note: `#[warn(unused_variables)]` on by default
🤖 Prompt for AI Agents
In apps/desktop/src-tauri/src/camera.rs around line 182, the declaration let mut
fallback_scaler = match scaling::Context::get(...) creates a mutable and unused
variable (and the surrounding fallback block is unused); remove the mutable
qualifier and the unused variable by deleting that let binding (or remove the
entire fallback block if it's dead code) so there are no unused bindings left
and the code compiles without dead/unused variables.
```rust
//     output_width,
//     output_height,
// )
todo!()
```
Critical: Incomplete fallback implementation with todo!() calls.
The code contains `todo!()` macros that will cause panics at runtime when GPU conversion fails. This is a critical issue, as the fallback path is meant to provide resilience when GPU conversion isn't available.
The `todo!()` calls need to be replaced with a proper fallback implementation. Based on the commented code, you should either:

- Implement the FFmpeg fallback properly by uncommenting and fixing the `gpu_to_ffmpeg_fallback` calls
- Return a proper error instead of panicking
Apply this diff as a temporary fix to prevent panics:
```diff
-    todo!()
+    None // Return None to indicate conversion failure
```

And:

```diff
-    todo!()
+    None // Return None when no GPU converter is available
```
Also applies to: 453-453
🤖 Prompt for AI Agents
In apps/desktop/src-tauri/src/camera.rs around lines 433 and 453, the code
currently calls todo!() in the GPU-conversion fallback paths which will panic at
runtime; replace each todo!() with a proper fallback: call the existing
gpu_to_ffmpeg_fallback function with the same arguments used in the surrounding
code (propagate its Result) or, if that fallback is inappropriate here, return
an Err with a clear CameraError describing conversion failure instead of
panicking; ensure errors from the GPU path are preserved and propagated (or
mapped) so callers can handle failure instead of crashing.
```rust
fn gpu_to_ffmpeg_fallback(
    scaler: &mut Option<scaling::Context>,
    resampler_frame: &mut Cached<(u32, u32), frame::Video>,
    frame: &frame::Video,
    output_width: u32,
    output_height: u32,
) -> Option<(Vec<u8>, u32)> {
    if let Some(scaler) = scaler {
        let resampler_frame = resampler_frame
            .get_or_init((output_width, output_height), frame::Video::empty);

        // Cache the scaler configuration
        scaler.cached(
            frame.format(),
            frame.width(),
            frame.height(),
            format::Pixel::RGBA,
            output_width,
            output_height,
            ffmpeg::software::scaling::flag::Flags::FAST_BILINEAR,
        );

        // Run the scaling operation
        if let Err(err) = scaler.run(&frame, resampler_frame) {
            error!(
                "Error rescaling frame with ffmpeg - input: {}x{} {:?}, output: {}x{}: {err:?}",
                frame.width(),
                frame.height(),
                frame.format(),
                output_width,
                output_height
            );
            return None;
        }

        let data = resampler_frame.data(0);
        let stride = resampler_frame.stride(0) as u32;

        if data.is_empty() {
            error!("FFmpeg scaler produced empty frame data");
            return None;
        }

        Some((data.to_vec(), stride))
    } else {
        error!("No ffmpeg scaler available for fallback - cannot convert frame");
        None
    }
}
```
🛠️ Refactor suggestion
Unused helper function should be removed or properly integrated.
The `gpu_to_ffmpeg_fallback` function is defined but never called (all call sites are commented out). This is dead code that should either be removed or properly integrated into the fallback logic.

Either remove this function entirely or integrate it properly into the fallback flow to replace the `todo!()` calls.
🤖 Prompt for AI Agents
In apps/desktop/src-tauri/src/camera.rs around lines 494 to 542, the helper
function gpu_to_ffmpeg_fallback is dead code (defined but never called); either
remove the entire function or integrate it into the fallback flow by replacing
the existing todo!() call sites with calls to gpu_to_ffmpeg_fallback, wiring the
scaler and resampler_frame arguments appropriately and handling the
Option<(Vec<u8>, u32)> return (propagate None as existing error branch or
convert to the expected return type), and ensure any required ffmpeg imports and
mutable references are available at the call sites and update tests or callers
accordingly.
```rust
println!("ABORT EXISTING SETUP");

// Ask currently running setup to abort
cancel.send(()).await.ok();

// TODO: We don't care about this because the sender will just remount again.
// We can assume a window was already initialized.
// Stop it so we can recreate it with the correct `camera_tx`
if let Some(win) = CapWindowId::Camera.get(&app_handle) {
    win.close().unwrap(); // TODO: Error handling
};
// if let Some(win) = CapWindowId::Camera.get(&app_handle) {
//     println!("WINDOW CLOSE ONE");
//     win.close().unwrap(); // TODO: Error handling
// };
```
🛠️ Refactor suggestion
Debug print statements should be removed or converted to proper logging
The `println!` statements appear to be debug output that should be removed or converted to proper logging using the `log` or `tracing` macros.
```diff
-println!("ABORT EXISTING SETUP");
+debug!("Aborting existing camera setup");

 // Ask currently running setup to abort
 cancel.send(()).await.ok();

-// TODO: We don't care about this because the sender will just remount again.
-// We can assume a window was already initialized.
-// Stop it so we can recreate it with the correct `camera_tx`
-// if let Some(win) = CapWindowId::Camera.get(&app_handle) {
-//     println!("WINDOW CLOSE ONE");
-//     win.close().unwrap(); // TODO: Error handling
-// };
```
🤖 Prompt for AI Agents
In apps/desktop/src-tauri/src/lib.rs around lines 269 to 280, the plain println!
debug prints should be removed or replaced with structured logging; replace the
println!("ABORT EXISTING SETUP") (and any other println! in this block) with an
appropriate tracing or log macro (e.g., tracing::debug! or tracing::info!) that
includes context (like action and relevant IDs), remove any leftover temporary
debug prints, and ensure the crate has the chosen logging/tracing dependency
initialized so logs are emitted.
```rust
println!("SHOWING WINDOW");
// let window = ShowCapWindow::Camera.show(&app_handle).await.unwrap();
```
🛠️ Refactor suggestion
Debug print statement should use proper logging
Replace with proper logging.
```diff
-println!("SHOWING WINDOW");
+debug!("Initializing camera window");
```
🤖 Prompt for AI Agents
In apps/desktop/src-tauri/src/lib.rs around lines 285-286, replace the stray
debug println!("SHOWING WINDOW") with a proper logging call (e.g., using the
crate's logger such as log::info!, tracing::info!, or tauri::api::log::info!) to
follow project logging conventions; remove the println, import/ensure the chosen
logger is initialized, and use a descriptive log level and message (e.g.,
info!("Showing capture window") ) consistent with surrounding code.
```rust
impl Drop for PooledTexture {
    fn drop(&mut self) {
        if let Some(texture) = self.texture.take() {
            // Safety: The pool pointer is valid as long as the PooledTexture exists
            // and the pool is guaranteed to outlive all PooledTextures
            unsafe {
                (*self.pool).return_texture(texture, self.key);
            }
        }
    }
}
```
Unsafe usage of raw pointer is concerning
The use of raw pointers for the pool reference is unsafe and could lead to undefined behavior if the TexturePool is dropped while PooledTextures are still alive. Consider using `Arc<Mutex<TexturePool>>` for safe shared ownership.
The current implementation with raw pointers is unsafe and could cause memory corruption. Would you like me to provide a safe implementation using Arc and Mutex that guarantees the pool outlives all pooled textures?
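One safe shape for this pattern is to hold the free list behind an `Arc<Mutex<...>>` and have the handle return its resource on drop. A generic sketch, written without wgpu so it stands alone (`Pool`, `Pooled`, and `acquire_or` are illustrative names, not the crate's API):

```rust
use std::sync::{Arc, Mutex};

/// A pool that hands out resources and takes them back on drop. The free list
/// is kept alive by the Arc, so a returning handle can never dangle.
struct Pool<T> {
    free: Arc<Mutex<Vec<T>>>,
}

struct Pooled<T> {
    item: Option<T>,
    free: Arc<Mutex<Vec<T>>>,
}

impl<T> Pool<T> {
    fn new() -> Self {
        Pool { free: Arc::new(Mutex::new(Vec::new())) }
    }

    /// Reuse a pooled resource if one is available, otherwise build a new one.
    fn acquire_or(&self, make: impl FnOnce() -> T) -> Pooled<T> {
        let item = self.free.lock().unwrap().pop().unwrap_or_else(make);
        Pooled { item: Some(item), free: Arc::clone(&self.free) }
    }
}

impl<T> Drop for Pooled<T> {
    fn drop(&mut self) {
        if let Some(item) = self.item.take() {
            // Returning through the shared Arc replaces the unsafe raw-pointer
            // dereference; the list outlives every handle by construction.
            self.free.lock().unwrap().push(item);
        }
    }
}
```

The trade-off is one mutex lock per acquire/return, which is typically negligible next to a texture allocation it avoids.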
```wgsl
@group(0) @binding(0) var<storage, read> input_buffer: array<u32>;
@group(0) @binding(1) var output: texture_storage_2d<rgba8unorm, write>;
@group(0) @binding(2) var<uniform> dimensions: vec2<u32>;
```
Missing bytes-per-row (stride) uniform will corrupt indexing on padded inputs.
UYVY is commonly padded per row. Without a stride, your byte_index assumes tightly packed rows (width*2). That will misaddress rows and can manifest as black/garbled output. Add a bytes_per_row (and optionally a base offset) uniform.
Apply this diff to add a stride uniform:
```diff
 @group(0) @binding(0) var<storage, read> input_buffer: array<u32>;
 @group(0) @binding(1) var output: texture_storage_2d<rgba8unorm, write>;
 @group(0) @binding(2) var<uniform> dimensions: vec2<u32>;
+@group(0) @binding(3) var<uniform> bytes_per_row: u32; // stride in bytes for UYVY
```
```wgsl
// Calculate byte index for this pixel pair
let byte_index = (row_index * dims.x + pixel_pair_index * 2u) * 2u;
let word_index = byte_index / 4u;
let byte_offset = byte_index % 4u;
```
🛠️ Refactor suggestion
Compute byte_index using stride to fix row addressing (likely root cause of the black preview background).
The current formula ignores padding. Use bytes_per_row and the fixed 4 bytes per macropixel (2 pixels) for UYVY.
Apply this diff:
```diff
-    // Calculate byte index for this pixel pair
-    let byte_index = (row_index * dims.x + pixel_pair_index * 2u) * 2u;
+    // Calculate byte index using stride: each macropixel is 4 bytes (U Y V Y)
+    let byte_index = row_index * bytes_per_row + pixel_pair_index * 4u;
     let word_index = byte_index / 4u;
     let byte_offset = byte_index % 4u;
```
Follow-up: pipe bytes_per_row from the producer (FFmpeg/OS frame) into the bind group. I can help wire this end-to-end in Rust.
Committable suggestion skipped: line range outside the PR's diff.
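The stride arithmetic itself is easy to sanity-check on the CPU before wiring `bytes_per_row` through the bind group. A small Rust sketch of both formulas (helper names are illustrative):

```rust
/// Byte offset of a UYVY macropixel (two pixels = 4 bytes: U Y V Y) when rows
/// may be padded out to `bytes_per_row`.
fn uyvy_offset_strided(row: u32, pair: u32, bytes_per_row: u32) -> u32 {
    row * bytes_per_row + pair * 4
}

/// The tightly packed formula the shader currently uses, for comparison.
fn uyvy_offset_packed(row: u32, pair: u32, width: u32) -> u32 {
    (row * width + pair * 2) * 2
}
```

The two agree exactly when `bytes_per_row == width * 2`; once the producer pads rows (e.g. to a 64- or 256-byte alignment), the packed formula misaddresses every row after the first, which matches the black/garbled-output symptom described above.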
```rust
if input.y_data.len() < y_size {
    return Err(ConversionError::InsufficientData {
        expected: y_size,
        actual: input.y_data.len(),
    });
}
if input.u_data.len() < uv_size {
    return Err(ConversionError::InsufficientData {
        expected: uv_size,
        actual: input.u_data.len(),
    });
}
if input.v_data.len() < uv_size {
    return Err(ConversionError::InsufficientData {
        expected: uv_size,
        actual: input.v_data.len(),
    });
}
```
🛠️ Refactor suggestion
Validate YUV420P plane sizes exactly
The YUV420P format has strict size requirements. Consider validating exact sizes instead of minimum sizes to catch buffer issues early.
```diff
-if input.y_data.len() < y_size {
+if input.y_data.len() != y_size {
     return Err(ConversionError::InsufficientData {
         expected: y_size,
         actual: input.y_data.len(),
     });
 }
-if input.u_data.len() < uv_size {
+if input.u_data.len() != uv_size {
     return Err(ConversionError::InsufficientData {
         expected: uv_size,
         actual: input.u_data.len(),
     });
 }
-if input.v_data.len() < uv_size {
+if input.v_data.len() != uv_size {
     return Err(ConversionError::InsufficientData {
         expected: uv_size,
         actual: input.v_data.len(),
     });
 }
```
🤖 Prompt for AI Agents
In crates/gpu-converters/src/yuv420p_rgba/mod.rs around lines 99 to 116, the
code currently checks that each Y, U and V plane is at least the expected size;
change these checks to require exact sizes instead of minimums: compute y_size
and uv_size (as already done) and return ConversionError::InsufficientData when
input.y_data.len() != y_size or input.u_data.len() != uv_size or
input.v_data.len() != uv_size, including the expected and actual lengths in each
error case so buffers with wrong sizes are rejected immediately.
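The exact-size rule follows directly from 4:2:0 plane arithmetic: a full-resolution Y plane plus quarter-resolution U and V planes. A standalone sketch of the validation (hypothetical helpers with a simplified `String` error, not the crate's `ConversionError`):

```rust
/// Exact plane sizes for 4:2:0 planar YUV. Width and height are assumed even,
/// as the 2x2 chroma subsampling requires.
fn yuv420p_plane_sizes(width: usize, height: usize) -> (usize, usize) {
    (width * height, (width / 2) * (height / 2))
}

/// Reject buffers whose planes are not exactly the expected sizes.
fn validate_planes(y: &[u8], u: &[u8], v: &[u8], width: usize, height: usize) -> Result<(), String> {
    let (y_size, uv_size) = yuv420p_plane_sizes(width, height);
    for (name, plane, want) in [("Y", y, y_size), ("U", u, uv_size), ("V", v, uv_size)] {
        if plane.len() != want {
            return Err(format!("{name} plane: expected {want} bytes, got {}", plane.len()));
        }
    }
    Ok(())
}
```

Checking with `!=` instead of `<` rejects oversized buffers too, which usually indicates a stride/padding mismatch upstream rather than a harmless surplus.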
```rust
let expected_size = (width * height * 2) as usize; // YUYV is 2 bytes per pixel
if input_data.len() < expected_size {
    return Err(ConversionError::InsufficientData {
        expected: expected_size,
        actual: input_data.len(),
    });
}
```
🛠️ Refactor suggestion
YUYV format expects exactly 2 bytes per pixel
Similar to the BGRA converter, consider validating for exact buffer size to ensure data integrity.
```diff
 let expected_size = (width * height * 2) as usize; // YUYV is 2 bytes per pixel
-if input_data.len() < expected_size {
+if input_data.len() != expected_size {
     return Err(ConversionError::InsufficientData {
         expected: expected_size,
         actual: input_data.len(),
     });
 }
```
🤖 Prompt for AI Agents
In crates/gpu-converters/src/yuyv_rgba/mod.rs around lines 86 to 92, the current
check only guards against input shorter than expected but YUYV requires exactly
2 bytes per pixel; change the validation to require input_data.len() ==
expected_size and return an appropriate ConversionError when the lengths differ
(update to a more suitable error variant or add a new InvalidSize/InvalidData
variant if needed), so callers get a clear error when buffer size is not exactly
expected.
This PR is just AI slop, but it seems to be working on my machine. It drops the ffmpeg rescaler in favor of using shaders for the color conversions. This should theoretically be much faster and more efficient.
This needs an insane review and cleanup before we should even consider merging it.
TODO:
Summary by CodeRabbit
New Features
Improvements
Bug Fixes
Documentation