Description
Is your feature request related to a problem? Please describe.
This is probably not unique to the video_stm32_dcmi.c code and Arduino boards, but the current default setup on these boards allocates all of the camera buffers, and in the case of the GIGA the video frame buffer as well, in SDRAM. With some of these boards, we are lucky if the video sub-system does not crash somewhere in the DMA/SDRAM path, from which it does not recover. I believe that is: #93287
So I am looking for ways to reduce the amount of reading/writing to SDRAM. Currently, when a frame has been captured, HAL_DCMI_FrameEventCallback is called, which copies the data from one buffer to another.
But sometimes the data coming from the camera is not in the format that the apps are expecting.
Examples:
- With RGB565, some of the cameras return the bytes in the reverse order we are expecting. The current Arduino camera library's Camera.begin method has a parameter that, when enabled, makes it walk each new image and swap the bytes.
- With the unreleased HM01b0 camera code, the camera can return the image data over 8, 4, or 1 data pins. The Arducam HM01b0 for the Arduino GIGA only returns 4 bits at a time; the ESP32 Pico version returns 1. I have a modified version of the camera code where you can tell the video system that the camera returns two bytes per pixel, and Arduino sketch code that then combines each pair of bytes into one.
- With the current HM01b0 code, when I ask for 320x240 resolution it actually returns 324x244. Maybe that can be fixed. For now I hacked in a new format resolution of 326x244, as it is needed for the system to allocate large enough buffers and to align the data properly.
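For illustration, the first two fixups above could look roughly like the following. This is a minimal sketch, assuming the driver copies from a DMA buffer into an application frame buffer; the function names and the nibble ordering are assumptions, not existing Zephyr or Arduino APIs:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: copy RGB565 pixels while swapping the two bytes
 * of each pixel, for cameras that return them in the reverse order the
 * application expects. */
static void copy_rgb565_swapped(uint16_t *dst, const uint16_t *src,
                                size_t npix)
{
	for (size_t i = 0; i < npix; i++) {
		uint16_t px = src[i];

		dst[i] = (uint16_t)((px << 8) | (px >> 8));
	}
}

/* Hypothetical helper: combine two bytes that each carry 4 significant
 * bits (as when the HM01b0 transfers a pixel over a 4-bit bus) into one
 * 8-bit pixel.  Which nibble arrives first is camera-dependent; the
 * high nibble is assumed to come first here. */
static void combine_nibbles(uint8_t *dst, const uint8_t *src, size_t npix)
{
	for (size_t i = 0; i < npix; i++) {
		dst[i] = (uint8_t)((src[2 * i] << 4) | (src[2 * i + 1] & 0x0F));
	}
}
```

Note that the combine case is exactly why the incoming buffer needs to be twice as large as the final image, as described above.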
Describe the solution you'd like
I am wondering if, in some of these cases, the intermediary code in video_stm32_dcmi.c could either have less overhead or do some simple conversions/fixups.
Options like:
- When it completes a frame, simply swap buffers instead of copying. Note: this should probably be optional, as the current way to make the video system work more reliably is to allocate the main camera buffer in internal ("real") memory.
- An option where, instead of a plain memcpy, it copies while swapping the bytes.
- An option that combines pairs of bytes into one. This implies the main camera buffer needs to be larger than the others.
- Maybe the stm32 code should handle some of the get/setSelection operations. I currently have a PR still in review, VIDEO: GC2145/stm32_dcmi support for CROP/Snapshot #93797, which adds some support; currently it only forwards the requests to the underlying camera. That PR also adds support for the GC2145 to set a crop window. Currently the GC2145 only processes TGT_CROP and NATIVE_SIZE; it does not handle TGT_COMPOSE. I am wondering if it would make sense to have the STM32 DCMI driver handle the compose. It could then, in theory, do a smart copy. Note: it might want to do the byte-swapping copy (the second option above) at the same time.
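As a rough sketch of the first and last options: the frame-complete handler could exchange buffer pointers in O(1) instead of doing an O(n) memcpy through SDRAM, and a "smart copy" for compose could trim a window out of the over-sized frame (e.g. 324x244 down to 320x240) during the copy it already performs. All struct and function names below are hypothetical, not Zephyr APIs:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical pair of buffers: one being filled by DCMI DMA, one being
 * read by the application. */
struct frame_slots {
	uint8_t *dma_buf; /* buffer DMA is writing into */
	uint8_t *app_buf; /* buffer the application reads from */
};

/* Called from a (hypothetical) frame-complete handler: swap the pointers
 * instead of copying the data.  A real driver would then re-arm DMA on
 * the new dma_buf. */
static void frame_complete_swap(struct frame_slots *s)
{
	uint8_t *tmp = s->dma_buf;

	s->dma_buf = s->app_buf;
	s->app_buf = tmp;
}

/* Hypothetical "smart copy": copy a crop_w x crop_h window at
 * (crop_x, crop_y) out of a src_w-pixels-wide source frame, one
 * memcpy per row (shown for 1 byte per pixel). */
static void copy_cropped(uint8_t *dst, const uint8_t *src, size_t src_w,
			 size_t crop_x, size_t crop_y, size_t crop_w,
			 size_t crop_h)
{
	for (size_t row = 0; row < crop_h; row++) {
		memcpy(dst + row * crop_w,
		       src + (crop_y + row) * src_w + crop_x, crop_w);
	}
}
```

The swap option only helps when both buffers are interchangeable from the application's point of view, which is part of why it should probably be opt-in.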
Describe alternatives you've considered
Leave it up to the cameras and user code to handle the issues.