Tested with Azure OpenAI models gpt-35-turbo and text-embedding-ada-002.
Names of the models used in the code (can be changed, of course; see the sketch below for where they appear):

- gpt-35-turbo
- text-embedding-3-small
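The following is a minimal, hypothetical sketch of how these deployment names might be wired to the Azure OpenAI client, assuming the `openai` Python SDK and the environment variables configured in the next step; it is not the repo's actual code, and the API version shown is an assumption.

```python
import os
from openai import AzureOpenAI  # assumes the openai Python SDK; the demo may use a different client

# AZURE_OPENAI_API_KEY and AZURE_OPENAI_ENDPOINT come from the .env file described below.
client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_version="2024-02-01",  # assumed API version
)

# Embed the question with the embedding deployment (used for retrieval)...
embedding = client.embeddings.create(
    model="text-embedding-3-small",
    input="Will there be drinks at the event?",
).data[0].embedding

# ...and generate the answer with the chat deployment, grounded on the retrieved context.
answer = client.chat.completions.create(
    model="gpt-35-turbo",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": "Will there be drinks at the event?"},
    ],
)
print(answer.choices[0].message.content)
```

In Azure OpenAI, the `model` argument refers to your deployment name, which is why changing the names here is enough to switch models.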
Populate a .env file with the following variables:

```
AZURE_OPENAI_API_KEY=
AZURE_OPENAI_ENDPOINT=
```

Build and run the container:

```
docker build -t rag-demo .
docker run --env-file ./.env -p 8080:8080 rag-demo
```

Open a browser and navigate to http://localhost:8080/
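If the page does not load right away, a quick way to confirm the container is listening is a small HTTP check against the root URL; this is an illustrative sketch, not part of the repo.

```python
# Optional smoke test: confirm the app answers on port 8080 before opening the browser.
import urllib.request

with urllib.request.urlopen("http://localhost:8080/", timeout=5) as resp:
    print(resp.status)  # expect 200 once the app is up
```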
Ask the question "Will there be drinks at the event?" and see the response 😀
Thanks to Philipp Bergsmann for the initial implementation of the RAG model with Azure OpenAI services.
