@KurtE KurtE commented Jul 29, 2025

Added support to video_stm32_dcmi for the new
set and get selection. These implementations simply forward the message to the underlying
camera object if it supports these messages.

Also added support for a snapshot mode instead of
always using continuous capture mode. Tried
to make it largely transparent when you want it.

The stm32_dcmi code now also allows you to work
with only one buffer. This will force it into snapshot mode. There are also new calls added to the API: video_get_snapshot_mode and video_set_snapshot_mode.

These allow you to set the mode even with more than one buffer and to query what mode you are in.

GC2145 was updated first to try out these changes. The camera now allows me to follow the call order
that @josuah mentioned in another PR/issue.

With this driver I also updated it to allow more or less any video resolution. Each GC2145_VIDEO_FORMAT_CAP_HL(width_l, width_h, height_l, height_h, format) entry expands to:

```
{
	.pixelformat = format, .width_min = width_l, .width_max = width_h,
	.height_min = height_l, .height_max = height_h, .width_step = 0, .height_step = 0,
}
```

and the capability table becomes:

```
static const struct video_format_cap fmts[] = {
	GC2145_VIDEO_FORMAT_CAP_HL(128, 1600, 128, 1200, VIDEO_PIX_FMT_RGB565),
	GC2145_VIDEO_FORMAT_CAP_HL(128, 1600, 128, 1200, VIDEO_PIX_FMT_YUYV),
	...
};
```

When the resolution is set, the driver computes the scale factor. If you then later call set_crop, the same code path is
used, except it reuses the ratios computed during set_resolution.

With these changes I was able to set up a test app for the Arduino Nicla Vision and send out a 480x320 image over USB.

More to come

Note: this is a replacement for #91975

My current test sketch/app is up at:
https://github.com/KurtE/zephyr_test_sketches/tree/master/camera_capture_to_usb
built using:

west build -p -b arduino_nicla_vision//m7
west flash

I am using the Arducam viewer with this on my PC. I am using the one at:
https://github.com/mjs513/Teensy_Camera/tree/main/extras/host_app

Picture using GC2145 on Arduino Nicla Vision shown on Arducam mini viewer.

Edit: current summary of changes:
There are several changes, some of which will likely change if/when code reviews happen. Things like:
a) The stm_dcmi driver handles the get/set selection APIs; if the camera has also implemented these APIs, it
forwards the messages to it, else it returns a not-implemented error.

a1) The GC2145 camera implements them.

b) Currently I allow an arbitrary frame size on the GC2145; that is, I have one fmt entry (per RGB...) which sets min and max, versus the
current code, which has 3 fixed sizes (1600x1200 at ratio 1, 640x480 at ratio 2, 320x240 at ratio 3). I instead
compute the ratio and allow you to choose, for example, 800x600, which is no more arbitrary than 640x480... Note 320x240
computes ratio=5, except I currently limit the max to ratio=3, having seen other implementations that do so... Maybe I should
make that max configurable.

c) A setting to allow the code to run in SNAPSHOT mode, which starts the camera, waits for one frame to come back, and then deactivates.

This is the way the Arduino library works, at least on MBED. Note snapshot mode also has some ability to recover from
failures...

d) Allow you to configure only one buffer; before, it required at least two. If set to 1, it forces SNAPSHOT mode.

With this running on Zephyr, I was for example able to program a Nicla Vision, which has no SDRAM, and output at
480x320 over USB to an Arducam viewer. My test sketch (not sure what to call them on Zephyr)
is up at https://github.com/KurtE/zephyr_test_sketches/tree/master/camera_capture_to_usb

Also have others that output from a Portenta H7 to an ST7796 display...

@KurtE KurtE force-pushed the camera_snapshot branch 13 times, most recently from 558b218 to aa416e3 Compare July 30, 2025 13:58

KurtE commented Jul 30, 2025

@josuah @mjs513 @dkalowsk @iabdalkader and all:

As @josuah mentioned in my previous PR:
#91975 (comment)
which I closed per his earlier comments, about using the new set/get selection video support:

This might make more sense when considering there is only one particular order users are expected to follow:

  1. Set the format with video_set_format()
  2. Set the cropping region with video_set_selection(dev, VIDEO_SET_TGT_CROP)
  3. Set the scaling parameter with video_set_selection(dev, VIDEO_SET_TGT_COMPOSE)
    Every step resets the values of what is below it: "select the native size, remove margins, and scale it up/down", always in this order.

Which makes sense. But now I am wondering about a few details. In my own test app I have:

	LOG_INF("- Video format: %s %ux%u",
		VIDEO_FOURCC_TO_STR(fmt.pixelformat), fmt.width, fmt.height);

	if (video_set_format(video_dev, &fmt)) {
		LOG_ERR("Unable to set format");
		return 0;
	}

#if CONFIG_VIDEO_FRAME_HEIGHT || CONFIG_VIDEO_FRAME_WIDTH
#if CONFIG_VIDEO_FRAME_HEIGHT
	fmt.height = CONFIG_VIDEO_FRAME_HEIGHT;
#endif

#if CONFIG_VIDEO_FRAME_WIDTH
	fmt.width = CONFIG_VIDEO_FRAME_WIDTH;
#endif
#endif	

	/* First set the format which has the size of the frame defined */
	LOG_INF("video_set_format: %u %u", fmt.width, fmt.height);
	if (video_set_format(video_dev, &fmt)) {
		LOG_ERR("Unable to set format");
		return 0;
	}

	/* initialize the bsize to the size of the frame */
	bsize = fmt.width * fmt.height * 2;
	/* Set the crop setting if necessary */
#if CONFIG_VIDEO_SOURCE_CROP_WIDTH && CONFIG_VIDEO_SOURCE_CROP_HEIGHT
	sel.target = VIDEO_SEL_TGT_CROP;
	sel.rect.left = CONFIG_VIDEO_SOURCE_CROP_LEFT;
	sel.rect.top = CONFIG_VIDEO_SOURCE_CROP_TOP;
	sel.rect.width = CONFIG_VIDEO_SOURCE_CROP_WIDTH;
	sel.rect.height = CONFIG_VIDEO_SOURCE_CROP_HEIGHT;
	LOG_INF("video_set_selection: VIDEO_SEL_TGT_CROP(%u, %u, %u, %u)", 
			sel.rect.left, sel.rect.top, sel.rect.width, sel.rect.height);
	if (video_set_selection(video_dev, &sel)) {
		LOG_ERR("Unable to set selection crop  (%u,%u)/%ux%u",
			sel.rect.left, sel.rect.top, sel.rect.width, sel.rect.height);
		return 0;
	}
	LOG_INF("Selection crop set to (%u,%u)/%ux%u",
		sel.rect.left, sel.rect.top, sel.rect.width, sel.rect.height);
	bsize = sel.rect.width * sel.rect.height * 2;
#endif

	if (video_get_format(video_dev, &fmt)) {
		LOG_ERR("Unable to retrieve video format");
		return 0;
	}
	LOG_INF("video_get_format: ret fmt:%u w:%u h:%u pitch:%u", fmt.pixelformat, fmt.width, fmt.height, fmt.pitch);

And the .conf file has:

CONFIG_VIDEO_FRAME_WIDTH=800
CONFIG_VIDEO_FRAME_HEIGHT=600
CONFIG_VIDEO_SOURCE_CROP_WIDTH=480
CONFIG_VIDEO_SOURCE_CROP_HEIGHT=320

But if I was not using my updated fmts, which allow more resolutions, I would have used FRAME_WIDTH=640 and FRAME_HEIGHT=480.

So now assume:

CONFIG_VIDEO_FRAME_WIDTH=640
CONFIG_VIDEO_FRAME_HEIGHT=480

With this, the call to video_set_format would have width=640 and height=480,
which internally sets the ratio (scaling of 2) and crop to 640x480.

Note: If you now (first commit) call video_get_selection with VIDEO_SEL_TGT_NATIVE_SIZE, it will return 800x600

So now to do step 2), cropping to 480x320: currently I ignore the passed-in top and left of the crop rectangle, but that
is what I wish to update in the next commit. The current code sets the crop rectangle top=0, left=0; however, internally
it actually sets the crop to (80, 60, 640, 480) to center the image on the sensor.

So with this setup, I would expect that for TGT_CROP I should be able to pass in rectangles in the range:
(0, 0, 480, 320) - upper-left area of the sensor, to
(159, 139, 480, 320) - lower right (not sure whether 159 or 160...),
which would allow you to pan over the entire sensor...

But in this case: should step 1) have set top=0, left=0, or should it instead have set them to 80, 60?
Should the sketch setting TGT_CROP compute this itself - that is, currently if you set it to 0, 0,
it would be at one end of the sensor? Should there be a default value?

Thanks
Kurt


KurtE commented Jul 30, 2025

Having problems with this PR's first commit getting the signoff accepted:

I actually copy/pasted the signoff line from previous PRs, which worked then, but not on this one?

Signed-off-by: Kurt Eckhardt <[email protected]>
I tried using:
Signed-off-by: Kurt E <[email protected]>
after I had changed my profile name to Kurt E instead of KurtE...
I also tried what git commit -s added:
Signed-off-by: KurtE <[email protected]>

All of which failed like:

```
Run compliance checks on patch series (PR): Identity.txt#L0
See https://docs.zephyrproject.org/latest/contribute/guidelines.html#commit-guidelines for more details
52ea30be81e17759c1725a3ef7e7c2c64d6ed5c0: Signed-off-by line (Signed-off-by: Kurt Eckhardt [email protected]) does not follow the syntax: First Last <email>.
```

@KurtE KurtE force-pushed the camera_snapshot branch 2 times, most recently from f1d99dd to acb2521 Compare July 30, 2025 16:03

KurtE commented Jul 31, 2025

More mumbling to self ;)
Reworking some of this, based on what was done elsewhere including our Teensy_camera code.

With this, the call to video_set_format would have a width=640 and height= 480 Which internally sets the ratio (scaling to 2) and crop to 640x480.

Note: If you now (first commit) call video_get_selection with VIDEO_SEL_TGT_NATIVE_SIZE, it will return 800x600

Reworking: currently I have the crop code recalculate most of the window and crop registers. I will instead have it
only update the crop registers. As such, if you passed in 640x480 on set_format, that is what you are limited to.
So VIDEO_SEL_TGT_NATIVE_SIZE will return 640x480, so I need to save that away and/or read it from the registers.

Setting the crop updates the fmt structure:

		drv_data->fmt.width = drv_data->crop.width;
		drv_data->fmt.height = drv_data->crop.height;
		drv_data->fmt.pitch = drv_data->fmt.width
			* video_bits_per_pixel(drv_data->fmt.pixelformat) / BITS_PER_BYTE;

Why? Because the buffer management code requires the buffers to be that size:

static int video_stm32_dcmi_enqueue(const struct device *dev, struct video_buffer *vbuf)
{
	struct video_stm32_dcmi_data *data = dev->data;
	const uint32_t buffer_size = data->fmt.pitch * data->fmt.height;

	if (buffer_size > vbuf->size) {
		return -EINVAL;
	}
...

So, for example, if I set up the GC2145 camera to output 480x320, the buffer size needed is 307200 bytes,
which an STM32H747 board like the Nicla Vision can hold in its memory. If the buffer size calculation were instead based on
640x480 (614400 bytes) or worse 800x600 (960000 bytes), it would not fit in memory.

This also affects the range of crop LEFT and TOP to fit the 480x320 within the range of 640x480...

@KurtE KurtE force-pushed the camera_snapshot branch 4 times, most recently from 3b43aca to 019f84b Compare August 6, 2025 02:26
KurtE added a commit to KurtE/ArduinoCore-zephyr that referenced this pull request Aug 6, 2025
Note: this all uses the Zephyr updates from my PR
zephyrproject-rtos/zephyr#93797

In that PR I added to the STM dcmi driver the ability to have the camera work in snapshot mode
instead of in continuous video mode.  This allows, for example, that we start the camera,
it grabs a frame and stops; we then take the buffer, process it, and repeat.
This helps minimize how much the SDRAM gets used concurrently.

In addition, I added to the VIDEO_STM32_DCMI and GC2145 the ability to use some of the
new set_selection and get_selection code that was added for the DCMIPP.  In particular,
the DCMI simply forwards these messages to the camera if it defines these APIs...

And with this it allows you to set up a viewport into the frame.  For example:
you can set up the frame on the GC2145 to be 800x600 and then define a viewport
of 480x320 to fill an ST7796/ILI9486 TFT display, or you could make it 400x240 to half fill
the GIGA display.  You can also move that viewport around within the frame (pan).
I have examples that do this on the Portenta H7 with the ST7796 display and another one
that does this on the GIGA display shield.

Still WIP as we probably need to refine the APIs and the like
@KurtE KurtE force-pushed the camera_snapshot branch 2 times, most recently from 0cb0ea4 to 74b3afe Compare August 6, 2025 20:28
@KurtE KurtE marked this pull request as ready for review August 6, 2025 20:28
@zephyrbot zephyrbot added the platform: STM32 ST Micro STM32 label Aug 6, 2025
@@ -302,11 +315,34 @@ static int video_stm32_dcmi_dequeue(const struct device *dev, struct video_buffe
{
struct video_stm32_dcmi_data *data = dev->data;

	if (data->snapshot_mode) {
		/* See if we were already called and have an active buffer */
		if (data->vbuf == NULL) {


If we are in snapshot mode and already have a valid (non-NULL) vbuf pointer here, it means that dequeue has already been called once but the frame hasn't been captured in time (i.e., within the requested timeout period), so I do not see the reason to start a SNAPSHOT capture again. The frame should be captured on the 2nd call (or later, if the framerate is very low). There is a very good chance that starting the snapshot capture again via HAL_DCMI_Start_DMA will lead to the same timeout result, or at least will most probably not be reliable, since this means the timeout is very close to a VSYNC period.
Right?

KurtE (Contributor Author) replied:

You may be right. Although in the use case I have run into, especially on the GIGA with a display active,
we get a reasonably high percentage of frames where the DMA errors out. Currently this allows
the camera to restart, which recovers. In some of the cases I have seen it error out on maybe 1 out of 3 or 4 frames.

So far the only reliable way I have found is if we are not using SDRAM for the camera buffer.

The other option is maybe, in the failure case, we could pull the buffer away from the camera and set it to NULL.

In reply: sounds like basically after a failing dequeue (due to a DMA issue), running dequeue again will allow (or at least increase the chance of) getting it correct this time.
I tend to think that with this snapshot mode in place, reaching the timeout prior to getting the frame is possible, hence it is necessary to be able to get a frame AFTER the dequeue has timed out; hence, as I said above, avoid restarting the DMA, and simply go get the buffer out of fifo_out if it is available.

For the case of the DMA failing, what about implementing the DMA error handler and, in it, if this is snapshot mode, putting the buffer back into fifo_in since it hasn't been used, which will later trigger the capture/DMA again? I think this is the other option you propose, right?

Comment on lines +456 to +531
static int video_stm32_dcmi_set_selection(const struct device *dev, struct video_selection *sel)
{
const struct video_stm32_dcmi_config *config = dev->config;

return video_set_selection(config->sensor_dev, sel);
}

static int video_stm32_dcmi_get_selection(const struct device *dev, struct video_selection *sel)
{
const struct video_stm32_dcmi_config *config = dev->config;

return video_get_selection(config->sensor_dev, sel);
}
@avolmat-st avolmat-st Sep 1, 2025

Actually here, within this pipeline, the crop / compose could be done at various places. It can be at the sensor level, as currently implied, but also within the DCMI, since it also has such capabilities.

@KurtE, indeed, I agree: why send lots of pixels if the user isn't requiring that much anyway? This will consume more power, generate more perturbation, and potentially lead to more memory consumed as well. But this really depends on the use case, the sensor, and the IPs involved.
It is hard to make a generic case for all. A trend is to not try to make the driver smarter than it should be and leave this to the upper layer; let it decide where the cropping / compose should be done and by whom, since it is the one that knows the use case better.

Here, the GC2145 is able to perform crop, hence can generate various resolutions, but, depending on the sensor, this can come at the cost of only getting, for example, the center of the frame, since it wouldn't be able to perform sub-sampling. In such a case the DCMI-level sub-sampling would become useful, since the sensor would send the full frame and the DCMI would subsample it, producing the frame requested by the application (hence not using more memory, since the sub-sampling is done in HW).

In order to achieve such fine tuning of the pipeline elements, the application would have to talk not only to the video device but also to the other elements of the pipeline to fine-tune the settings. That is, here, tell the sensor to capture at full size and ask the DCMI itself to do only the crop. With such a mechanism it becomes possible to address very specific / complex use cases without overly complex drivers, which would otherwise have to try to figure out the best solution. This is the kind of thing addressed by the libcamera that @josuah mentioned, used in the Linux world.

We had already some discussion about putting such a method in place in Zephyr as well, but I believe this isn't done yet.

So, as a conclusion, considering what is currently available, while I'd be more in favor of such fine-tuned settings (thus allowing both DCMI crop & sensor crop to be controllable), since such a mechanism isn't yet available, maybe it is OK to let the DCMI do pass-through for the time being. Even if this will lead to a behavior change later on when moving to "subdev"-like control, we have the migration guide for that purpose, I guess.

@avolmat-st

@avolmat-st - Let me know if I missed any of the requested changes. I double-checked, and the 2 changed files don't currently show anything requested.

Thanks again

Thanks for all those updates, and sorry for the delay. I had been away the last several weeks. Did the review again; a few small points to address.

@KurtE KurtE force-pushed the camera_snapshot branch 2 times, most recently from b681f1b to b2079ac Compare September 2, 2025 01:03
JarmouniA previously approved these changes Sep 2, 2025

josuah commented Sep 2, 2025

@KurtE the Zephyr project uses the Linux kernel coding style, but with the max line length increased from 80 characters to 100 characters... however this is assuming tabs are 8-characters wide, which might be why CI is unhappy:

https://github.com/zephyrproject-rtos/zephyr/actions/runs/17390277450/job/49362921103?pr=93797#step:10:47

Maybe the editorconfig supported by Zephyr, or configuring the tab width to 8, can help with that.

Thank you for improving these drivers!

@avolmat-st
Copy link

Small typo in the latest push, the gc2145 commit summary line is wrong:

video: gc145: support for CROP

(the 2 is missing in gc2145)


KurtE commented Sep 2, 2025

@KurtE the Zephyr project uses the Linux kernel coding style, but with the max line length increased from 80 characters to 100 characters... however this is assuming tabs are 8-characters wide, which might be why CI is unhappy:

https://github.com/zephyrproject-rtos/zephyr/actions/runs/17390277450/job/49362921103?pr=93797#step:10:47

Maybe the editorconfig supported by Zephyr, or configuring the tab width to 8, can help with that.

Thank you for improving these drivers!

Thanks, yep, I missed that the line-continuation character in the macro was pushed out...
And yes, normally I have tabs set to 4, as many other projects want...

I am glad it is not 80 characters max, as I believe Adafruit projects require.

Pushed up the fix for that line...

@avolmat-st Also fixed the commit name in the last push... Not sure how that changed... But...

@KurtE KurtE force-pushed the camera_snapshot branch 4 times, most recently from 04c8773 to 288ea61 Compare September 4, 2025 01:19
@josuah josuah left a comment

Thank you for continuously improving this PR.
Maybe a few details will be pointed out for modification, but it seems generally ready API-wise, IMHO.

It might be reasonable to make it 2 commits: one for the GC2145, one for the DCMI, and remove the debug code.


KurtE commented Sep 4, 2025

Thank you for continuously improving this PR. Maybe a few details will be pointed out for modification, but it seems generally ready API-wise, IMHO.

It might be reasonable to make it 2 commits: one for the GC2145, one for the DCMI, and remove the debug code.

Thanks,

That is my plan (to squash these last commits into the DCMI one). I was waiting to see if some of you had an opinion
on where recovery from errors would best live. But I will probably go with my gut and do only the minimum
in the actual error callbacks.

@KurtE KurtE force-pushed the camera_snapshot branch 3 times, most recently from 313ae93 to c921534 Compare September 4, 2025 21:19

KurtE commented Sep 4, 2025

@josuah @avolmat-st - pushed up the changes and I squashed into two...

@KurtE KurtE force-pushed the camera_snapshot branch 2 times, most recently from 35fd0b8 to 4976650 Compare September 4, 2025 22:37
@josuah josuah left a comment

It seems like now we can have one or two buffers as minimum, depending on configuration.

It seems like this needs the video_stm32_dcmi_get_caps() function updated like in the DCMIPP, but with caps->min_vbuf_count = config->snapshot_mode ? 1 : 2:

caps->min_vbuf_count = 1;

It lacked proper configuration for it so far, so this is more a fixup of the existing driver.

Comment on lines +656 to +664
data->snapshot_mode = config->snapshot_mode;
if (CONFIG_VIDEO_BUFFER_POOL_NUM_MAX == 1) {
	LOG_DBG("Only one buffer so snapshot mode only");
	data->snapshot_mode = true;
}
@josuah left a comment

Good idea to check that there are enough buffers when not in snapshot mode! Rather than trying to correct the error silently, it is tempting to instead halt execution with LOG_ERR(); return -EINVAL;.

@KurtE KurtE Sep 5, 2025

Thanks @josuah - I am torn on some of this. That is, I understand your desire to have it all driven by the device tree.
That makes sense for a static setup. But I keep trying to figure out some flexible way to set this up for an Arduino
environment, where there is one setup of the .conf and .overlay per board per release.
i.e. your discussion: #93058

Another option, which I sort of started with before, was an explicit API.
Something like: video_set_snapshot_mode(const struct device *dev, bool f);
or video_set_mode(const struct device *dev, video_mode mode);
where video_mode would be an enum with modes like: (VIDEO_MODE_VIDEO, VIDEO_MODE_SNAPSHOT, ???)

Edit: My first version of adding snapshot mode added two new entries to the API structure, but I am wondering, if
we go this route, whether it makes sense to wrap it into the set/get_selection settings. However, we don't need a
rectangle for this, just one value (on/off or mode). But my guess is that at some point
this might be extended for other non-rectangle settings. For example, maybe with an IMXRT setup with
CSI we will want to use the PXP and have it rotate the image coming in from the camera
in addition to resizing...

I sort of look at it like a smartphone or camera: they often have a switch that changes the mode (from video to photo and other modes).

Thoughts?


KurtE commented Sep 5, 2025

It seems like now we can have one or two buffers as minimum, depending on configuration.

It seems like this needs the video_stm32_dcmi_get_caps() function updated like in the DCMIPP, but with caps->min_vbuf_count = config->snapshot_mode ? 1 : 2:

Updated and pushed back.


sonarqubecloud bot commented Sep 5, 2025

Added support to video_stm32_dcmi for the new
set and get selection.  These implementations simply
forward the message to the underlying camera object
if it supports these messages.

Also added support for a snapshot mode instead of
always using continuous capture mode.  Tried
to make it largely transparent when you want it.

The stm32_dcmi code now also allows you to work
with only one buffer.  This will force it into snapshot
mode.  Likewise, if you call video_set_stream and
have not added any buffers yet, it will also set it
into this mode.

You can also specify to use snapshot mode, in the
device tree, like:
```
&dcmi {
	snapshot-mode;
};
```

This commit also has some recovery
from DMA errors in snapshot mode;
in addition, it appears to recover
in many of the cases in
continuous mode as well.
At least it is a start toward resolving some of the hangs.

Updated: video_stm32_dcmi_get_caps.

Signed-off-by: Kurt Eckhardt <[email protected]>
Implements the set_selection and get_selection APIs,
which are forwarded to it from video_stm32_dcmi
as part of this pull request.  It uses the new messages
to allow you to set a crop window on top of the
current format window.  It also then allows you
to move this crop window around within the frame
window.

With this driver I also updated it to allow any resolution
from the sensor's min to max limits:

```
static const struct video_format_cap fmts[] = {
	GC2145_VIDEO_FORMAT_CAP_HL(128, 1600, 128, 1200, VIDEO_PIX_FMT_RGB565),
	GC2145_VIDEO_FORMAT_CAP_HL(128, 1600, 128, 1200, VIDEO_PIX_FMT_YUYV),
	...
};
```

When the resolution is set, it computes the scale factor.

Using set_selection(VIDEO_SEL_TGT_CROP) allows you to
define a crop window within the format window.

It clamps the ratio to a max of 3, as some other
drivers do, saying it helps with frame rates.

Signed-off-by: Kurt Eckhardt <[email protected]>
Labels
area: Video Video subsystem platform: STM32 ST Micro STM32

8 participants