
drivers: video: stm32_dcmi and probably others handling differences in formats and overhead #94907

@KurtE

Description

Is your feature request related to a problem? Please describe.

This is probably not unique to the video_stm32_dcmi.c code or to Arduino boards, but the current default setup
on these boards allocates all of the camera buffers, and on the GIGA the video frame buffer as well, in SDRAM.
On some of these boards we are lucky if the video sub-system does not crash in the DMA/SDRAM path, from which
it does not recover. I believe that is: #93287

So I am looking for ways to reduce the amount of reading from and writing to SDRAM. Currently, when a frame has
been read, the driver calls HAL_DCMI_FrameEventCallback, which copies the data from one buffer to another.

But sometimes the data coming from the camera is not in the format that the apps are expecting.
Examples:

  1. With RGB565, some of the cameras return the bytes in the reverse order we are expecting.
    The current Arduino camera library's Camera.begin method has a parameter that, when a new
    image is received, walks the image and swaps the bytes.

  2. With the unreleased HM01b0 camera code, the camera can return the image data over 8, 4, or 1 data pins.
    The Arducam HM01b0 for the Arduino GIGA only returns 4 bits at a time; the ESP32 Pico version returns 1.
    I have a modified version of the camera code where you can tell the video system that the camera returns
    two bytes per pixel, and Arduino sketch code that then combines each pair of bytes into one.

  3. With the current HM01b0 code, when I ask for 320x240 resolution, it actually returns 324x244. Maybe that
    can be fixed. For now I hacked in a new format resolution of 324x244, as it is needed for the system to
    allocate large enough buffers and to align the data properly.

Describe the solution you'd like

I am wondering, for some of these cases, whether the intermediary code in video_stm32_dcmi.c
could either have less overhead or do some simple conversions/fixups.

Options like:

  1. When a frame completes, simply swap buffers instead of copying. Note: this should perhaps be optional,
    as the current way to make the video system work more reliably is to allocate the main camera buffer in
    real (internal) memory.

  2. An option where, instead of a plain memcpy, the driver copies with the bytes swapped.

  3. An option that combines pairs of bytes. This implies the main camera buffer needs to be larger than the others.

  4. Maybe the stm32 code should handle some of the get/set selection calls. I currently have a PR still in review,
    VIDEO: GC2145/stm32_dcmi support for CROP/Snapshot #93797, that adds some support. Currently it only forwards
    the calls to the underlying camera. That PR also adds support for the GC2145 to set a crop window. Currently
    the GC2145 only processes TGT_CROP and NATIVE_SIZE; it does not handle TGT_COMPOSE. I am wondering if it would
    make sense to have the STM32_DCMI handle the compose? It could in theory then do a smart copy. Note: we might
    want it to do 2) at the same time.
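To make options 1 and 4 concrete, here is a rough sketch (the structure and function names are hypothetical; the real video_stm32_dcmi.c tracks struct video_buffer objects in its own FIFOs): option 1 exchanges buffer pointers instead of copying, and option 4's "smart copy" copies only a compose/crop window using a strided per-row copy:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical buffer descriptor, standing in for the driver's real
 * video_buffer bookkeeping; this is only a sketch of the idea.
 */
struct frame_buf {
	uint8_t *data;
	size_t size;
};

/* Option 1: when a frame completes, exchange the DMA target buffer with
 * the buffer being handed to the application, instead of memcpy()ing.
 */
static void frame_done_swap(struct frame_buf *dma_buf, struct frame_buf *app_buf)
{
	struct frame_buf tmp = *dma_buf;

	*dma_buf = *app_buf;
	*app_buf = tmp;
}

/* Option 4: "smart copy" of a compose window: copy only a crop_w x crop_h
 * pixel rectangle starting at (crop_x, crop_y) out of a source frame that
 * is src_w pixels wide, assuming 2 bytes per pixel (RGB565).
 */
static void copy_compose_rgb565(const uint8_t *src, size_t src_w,
				size_t crop_x, size_t crop_y,
				size_t crop_w, size_t crop_h, uint8_t *dst)
{
	const size_t bpp = 2U;

	for (size_t row = 0; row < crop_h; row++) {
		const uint8_t *line = src + ((crop_y + row) * src_w + crop_x) * bpp;

		memcpy(dst + row * crop_w * bpp, line, crop_w * bpp);
	}
}
```

The per-row memcpy in the compose copy could also be replaced with a byte-swapping loop to fold option 2 into the same pass, as suggested in the note above.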

Describe alternatives you've considered

Leave it up to the cameras and user code to handle these issues.
