This repository contains a series of samples on how to analyse a video using multimodal Large Language Models, like GPT-4o or GPT-4o-mini. The repository also includes samples showing the use of different AI libraries, such as:
- Azure AI OpenAI library for .NET, using Azure OpenAI Services
- Microsoft.Extensions.AI.OpenAI using OpenAI APIs
- Microsoft.Extensions.AI.AzureAIInference to work with GitHub Models
- Microsoft.Extensions.AI.Ollama to work with Ollama local models
- OpenAI .NET API library to work with OpenAI APIs.
Note: We recommend first going through the Run Sample Projects section before running this app locally, since some of the demos need credentials to work properly.
- Features
- Architecture diagram
- Getting started
- Deploying to Azure
- Run Sample Projects
- Resources
- Video Recordings
- Contributing
- Guidance
- Resources
GitHub Codespaces: This project is designed to be opened in GitHub Codespaces as an easy way for anyone to try out these libraries entirely in the browser.
This is the OpenAI sample running in Codespaces. The sample analyzes this video:
And returns this output:
There is also a full Aspire solution, including a Blazor front-end application and an API service to process the video, provided as a reference application.
COMING SOON!
The sample Console projects are in the folder ./src/Labs/. The full Aspire + Blazor solution is in the folder ./srcBlazor/.
Currently there are samples for the following technologies:
- OpenAI .NET SDK
- Microsoft AI Extensions
- GitHub Models
- Local Image analysis using Ollama
- OpenCV, using OpenCVSharp
- Phi-3.5
Once you've opened the project in Codespaces, or locally, you can deploy it to Azure.
From a Terminal window, open the folder with the clone of this repo and run the following commands.
- Login to Docker: `docker login`
- Login to Azure: `azd auth login`
- Provision and deploy all the resources: `azd up`
  It will prompt you to provide an `azd` environment name (like "videoanalyserdev"), select a subscription from your Azure account, and select a location where OpenAI is available (like "eastus2").
- When `azd` has finished deploying, you'll see an endpoint URI in the command output. Visit that URI, and you should see the chat app! 🎉
The Aspire and Blazor Demo deploy to Azure page has more information on how to deploy the solution to Azure and test the project.
For the Aspire and Blazor demo, also check the Deploy Troubleshooting Guide.
There is an online video describing the deploy to Azure process: Step-by-Step .NET AI Video Analyzer repo on Azure + Troubleshooting Tips.
Library | Project Name | Description |
---|---|---|
OpenAI library for .NET | .\src\ConsoleOpenAI-04-VideoAnalyzer | Console project demonstrating the use of the OpenAI .NET API library to work with OpenAI APIs. |
Azure AI OpenAI library for .NET | .\src\ConsoleAOAI-04-VideoAnalyzer | Console project demonstrating the use of the stable release of the Azure AI OpenAI library for .NET, using Azure OpenAI Services for the video analysis process. |
Microsoft.Extensions.AI | .\src\ConsoleMEAI-04-OpenAI | Console project demonstrating the use of Microsoft.Extensions.AI.OpenAI with OpenAI APIs for the video analysis process. |
Microsoft.Extensions.AI | .\src\ConsoleMEAI-05-GitHubModels | Console project demonstrating the use of Microsoft.Extensions.AI.AzureAIInference to work with GitHub Models. |
Microsoft.Extensions.AI | .\src\ConsoleMEAI-06-Ollama | Console project demonstrating the use of Microsoft.Extensions.AI.Ollama to work with local Ollama models. This sample uses Llava 7B for image analysis and Phi 3.5 for chat completion. |
Microsoft.Extensions.AI | .\src\ConsoleMEAI-07-AOAI | Console project demonstrating the use of Microsoft.Extensions.AI.AzureAIInference to work with Azure OpenAI Services. |
Aspire + Blazor Complete Demo | .\srcBlazor\AspireVideoAnalyserBlazor.sln | Aspire project with a Blazor front end and an API service to analyze videos. Works with GitHub Models by default, or with Azure OpenAI Services using Default Credentials or an API key. |
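The console samples share the same general idea: extract a handful of frames from the video and send them, together with a text prompt, to a multimodal model. Below is a minimal sketch of that pattern, assuming OpenCvSharp for frame grabbing and the OpenAI .NET SDK for the vision call; the file name, one-frame-per-second sampling, and the OPENAI_KEY environment variable are illustrative rather than the exact code in the projects.

```csharp
// Minimal sketch (not the exact project code): sample frames from a video with OpenCvSharp
// and ask GPT-4o-mini to describe them using the OpenAI .NET SDK.
// "video.mp4", the one-frame-per-second sampling, and OPENAI_KEY are illustrative.
using OpenAI.Chat;
using OpenCvSharp;

var client = new ChatClient("gpt-4o-mini", Environment.GetEnvironmentVariable("OPENAI_KEY")!);

var parts = new List<ChatMessageContentPart>
{
    ChatMessageContentPart.CreateTextPart("Describe what happens in this video.")
};

using var capture = new VideoCapture("video.mp4");
using var frame = new Mat();
int step = Math.Max(1, (int)capture.Fps);   // roughly one frame per second
int frameIndex = 0;

while (capture.Read(frame) && !frame.Empty())
{
    if (frameIndex++ % step != 0) continue;
    byte[] jpeg = frame.ToBytes(".jpg");    // encode the frame as JPEG in memory
    parts.Add(ChatMessageContentPart.CreateImagePart(BinaryData.FromBytes(jpeg), "image/jpeg"));
}

ChatCompletion completion = client.CompleteChat(new UserChatMessage(parts));
Console.WriteLine(completion.Content[0].Text);
```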
- Create a GitHub Personal Access Token to be used with the GitHub Models samples. Get your GitHub Personal Access Token.
- Create a new Codespace using the `Code` button at the top of the repository.
- The Codespace creation process can take a couple of minutes.
- Once the Codespace is loaded, it should have all the necessary requirements to run the demo projects.
To run the project locally, you'll need to make sure the following tools are installed:
- .NET 8
- Git
- Azure Developer CLI (azd)
- VS Code or Visual Studio
- If using VS Code, install the C# Dev Kit
To run the Aspire and Blazor sample, you also need .NET Aspire.
- .NET Aspire workload: installed with the Visual Studio installer or the .NET CLI (see the command after this list).
- An OCI compliant container runtime, such as:
- Docker Desktop or Podman.
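If the Aspire workload is not installed yet, one way to add it via the .NET CLI is:

```bash
# Install the .NET Aspire workload with the .NET CLI
dotnet workload install aspire
```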
This is an example of the user secrets you need to set for the AppHost project:
User Secrets for AppHost
{
  "Azure:Location": "eastus2",
  "Azure:ResourceGroup": "rg-visionapiservice",
  "Azure:TenantId": "< tenant id >",
  "Azure:SubscriptionId": "< subscription id >",
  "Azure:AllowResourceGroupCreation": true,
  "Azure:CredentialSource": "AzureCli"
}
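If you prefer the CLI to editing secrets.json directly, the same values can be set with dotnet user-secrets from the AppHost project folder (the values below are the placeholders from the example above):

```bash
# Run from the AppHost project folder; values are placeholders.
dotnet user-secrets init
dotnet user-secrets set "Azure:Location" "eastus2"
dotnet user-secrets set "Azure:ResourceGroup" "rg-visionapiservice"
dotnet user-secrets set "Azure:TenantId" "< tenant id >"
dotnet user-secrets set "Azure:SubscriptionId" "< subscription id >"
dotnet user-secrets set "Azure:AllowResourceGroupCreation" "true"
dotnet user-secrets set "Azure:CredentialSource" "AzureCli"
```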
To run the sample using GitHub Models, located in ./src/ConsoleMEAI-05-GitHubModels, you need your GitHub PAT. Once you have your PAT, follow these steps.
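The sample needs to find your PAT at runtime. A common convention, and an assumption here (check the project's Program.cs for the exact setting name it reads), is to expose the token as a GITHUB_TOKEN environment variable:

```bash
# Assumption: the sample reads a setting named GITHUB_TOKEN; verify the exact name in Program.cs.
export GITHUB_TOKEN="< your GitHub PAT goes here >"
```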
- Navigate to the sample project folder using the command:
cd ./src/ConsoleMEAI-05-GitHubModels/
- Run the project:
dotnet run
- You can expect an output similar to this one:
To run the sample using OpenAI APIs, located in ./src/ConsoleOpenAI-04-VideoAnalyzer, you must set your OpenAI key in a user secret.
- Navigate to the sample project folder using the command:
cd ./src/ConsoleOpenAI-04-VideoAnalyzer/
- Add the user secrets by running these commands:
dotnet user-secrets init
dotnet user-secrets set "OPENAI_KEY" "< your key goes here >"
- Run the project:
dotnet run
- You can expect an output similar to this one:
To run the sample using Azure OpenAI Services, located in ./src/ConsoleAOAI-04-VideoAnalyzer, you must set your Azure OpenAI keys in user secrets.
- Navigate to the sample project folder using the command:
cd ./src/ConsoleAOAI-04-VideoAnalyzer/
- Add the user secrets by running these commands (check the sample's Program.cs if your secret names differ):
dotnet user-secrets init
dotnet user-secrets set "AZURE_OPENAI_MODEL" "gpt-4o"
dotnet user-secrets set "AZURE_OPENAI_ENDPOINT" "https://< your service endpoint >.openai.azure.com/"
dotnet user-secrets set "AZURE_OPENAI_APIKEY" "< your key goes here >"
- Run the project with the command:
dotnet run
- You can expect an output similar to this one:
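Under the hood, those three secrets are everything the Azure AI OpenAI library needs to create a chat client. A minimal sketch of that wiring, assuming the Azure.AI.OpenAI 2.x and Microsoft.Extensions.Configuration.UserSecrets packages (the AZURE_OPENAI_APIKEY name mirrors the secrets above and is an assumption, not necessarily the project's exact code):

```csharp
// Minimal sketch (not the exact project code): build an Azure OpenAI chat client
// from the user secrets shown above. Assumes the Azure.AI.OpenAI and
// Microsoft.Extensions.Configuration.UserSecrets packages.
using System.ClientModel;
using Azure.AI.OpenAI;
using Microsoft.Extensions.Configuration;
using OpenAI.Chat;

var config = new ConfigurationBuilder().AddUserSecrets<Program>().Build();

var azureClient = new AzureOpenAIClient(
    new Uri(config["AZURE_OPENAI_ENDPOINT"]!),
    new ApiKeyCredential(config["AZURE_OPENAI_APIKEY"]!)); // secret name is an assumption

ChatClient chatClient = azureClient.GetChatClient(config["AZURE_OPENAI_MODEL"]!);

ChatCompletion completion = chatClient.CompleteChat(
    new UserChatMessage("Say hello from the video analyzer sample."));
Console.WriteLine(completion.Content[0].Text);
```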
- Follow the instructions in the Run Ollama Sample.
- You can expect an output similar to this one:
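The Run Ollama Sample guide has the full walkthrough; as a quick reference, the models mentioned above can be pulled into a local Ollama install beforehand (the tags below are assumptions based on the model names, so adjust them to whatever the sample actually configures):

```bash
# Assumes a local Ollama installation; tags mirror the models mentioned above.
ollama pull llava:7b
ollama pull phi3.5
```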
GitHub Models offers a set of models that can be used to test these projects. This free API usage is rate limited by requests per minute, requests per day, tokens per request, and concurrent requests. If you get rate limited, you will need to wait for the rate limit to reset before you can make more requests.
More Information: GitHub Models Rate Limits
For Azure OpenAI Services, pricing varies per region and usage, so it isn't possible to predict exact costs for your usage. The majority of the Azure resources used in this infrastructure are on usage-based pricing tiers. However, Azure Container Registry has a fixed cost per registry per day.
You can try the Azure pricing calculator for the resources:
- Azure OpenAI Service: S0 tier, gpt-4o-mini model. Pricing is based on token count. Pricing
- Azure Container App: Consumption tier with 0.5 CPU, 1GiB memory/storage. Pricing is based on resource allocation, and each month allows for a certain amount of free usage. Pricing
- Azure Container Registry: Basic tier. Pricing
- Log Analytics: Pay-as-you-go tier. Costs are based on data ingested. Pricing
To avoid unnecessary costs, remember to take down your app if it's no longer in use by running `azd down`.
Samples in this template use Azure OpenAI Services with an API key and Managed Identity for authenticating to the Azure OpenAI service.
The Aspire + Blazor sample uses [Managed Identity](https://learn.microsoft.com/entra/identity/managed-identities-azure-resources/overview) for authenticating to the Azure OpenAI service.
Additionally, we have added a GitHub Action that scans the infrastructure-as-code files and generates a report containing any detected issues. To ensure continued best practices in your own repository, we recommend that anyone creating solutions based on our templates ensure that the GitHub secret scanning setting is enabled.
You may want to consider additional security measures, such as:
- Protecting the Azure Container Apps instance with a firewall and/or Virtual Network.