estudamais-tech/luiza-streamlit-1.0

 
 


EstudaMais.tech – Generative AI Infrastructure with RAG and Automated Deployment

This repository implements a complete stack for an educational chatbot powered by Generative AI, using a Retrieval-Augmented Generation (RAG) architecture, Docker Compose, Streamlit, LangChain, GitHub Actions, and automated deployment to a VPS using Nginx + HTTPS via Certbot.

The AI assistant, codenamed Luiza, answers questions based on a local knowledge base built from Markdown files, combining vector search with OpenAI-generated responses.


🔍 Architecture and Components

Retrieval-Augmented Generation (RAG)

  • LangChain Retriever using embedded ChromaDB
  • Embeddings are generated on boot from .md files in the /docs folder
  • Integration with RetrievalQA and OpenAI (configurable model)
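The retrieval step above can be illustrated without the LangChain, ChromaDB, or OpenAI dependencies. The following stdlib-only toy (hypothetical helper names, term-frequency vectors standing in for OpenAI embeddings) shows the shape of the search that `retriever.py` performs against the `/docs` Markdown files:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector.
    # The real retriever stores OpenAI embeddings in ChromaDB instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: dict[str, str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query, return the top-k names.
    q = embed(query)
    ranked = sorted(docs, key=lambda name: cosine(q, embed(docs[name])), reverse=True)
    return ranked[:k]

docs = {
    "enrollment.md": "how to enroll in a course and pay tuition",
    "grading.md": "grading scale exams and final scores",
}
print(retrieve("how do I enroll?", docs))  # → ['enrollment.md']
```

In the real pipeline the retrieved chunks are then passed to the OpenAI model as context, which is what turns plain vector search into RAG.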

Containerization & Deployment

  • Dockerfile based on python:3.13-slim
  • docker-compose.yml orchestrates app + Nginx
  • HTTPS reverse proxy via Certbot
  • Persistent volume for CSV-based structured logs
  • Automated deployment to a VPS using CI/CD
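The orchestration described above could be sketched roughly as follows; service names, image tags, and paths here are illustrative and may differ from the actual `docker-compose.yml` in this repository:

```yaml
services:
  app:
    build: .                  # python:3.13-slim based Dockerfile
    env_file: .env            # OPENAI_API_KEY
    volumes:
      - ./logs:/app/logs      # persistent CSV logs across rebuilds
  nginx:
    image: nginx:stable       # HTTPS reverse proxy (certificates via Certbot)
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - app
```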

CI/CD

  • GitHub Actions pipeline performs:

    • Code linting with ruff, bandit, mypy
    • Docker image build
    • Push to Docker Hub
    • Remote deployment via SSH + docker compose up
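A workflow implementing the steps above might look like the sketch below; job names, image tags, and the SSH target are placeholders, not the repository's actual pipeline:

```yaml
name: CI/CD
on:
  push:
    branches: [main]
jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install ruff bandit mypy
      - run: ruff check .
      - run: bandit -r .      # pipeline fails on security violations
      - run: mypy .
  deploy:
    needs: quality
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t youruser/luiza:latest .
      - run: docker push youruser/luiza:latest
      - run: ssh user@vps 'cd app && docker compose pull && docker compose up -d'
```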

📂 Project Structure

.
├── .github/workflows/        # CI/CD Pipelines
├── /docs/                    # Markdown knowledge base (indexed with ChromaDB)
├── /logs/                    # CSV logs of user queries
├── retriever.py              # Document loader and embedder
├── streamlit_app.py          # Main application interface
├── docker-compose.yml        # Service orchestration
├── Dockerfile                # Main Docker image
├── redeploy.sh               # Manual restart script for VPS
└── .env                      # OpenAI API key

⚙️ Running Locally

  1. Clone the repository:

```bash
git clone https://github.com/92username/langchain-quickstart.git
cd langchain-quickstart
```

  2. Create a .env file with your OpenAI key:

```
OPENAI_API_KEY=sk-xxxxx
```

  3. Run with Docker Compose:

```bash
docker compose up --build
```

The application will be available at http://localhost:8501.


🔐 Production

The production environment runs on a VPS (Hostinger) and is accessible via:

https://estudamais.tamanduas.dev

  • HTTPS via Certbot-managed certificates
  • Reverse proxy via Nginx
  • Automated deployment via GitHub Actions

📈 Observability & Logging

  • All user questions are logged in logs/conversas.csv
  • Volume is persistent across builds
  • Logs may be used to generate a data-driven FAQ in the future
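A minimal version of this logging step, using only the standard library, could look like the sketch below. The field names are hypothetical; the actual columns in `logs/conversas.csv` may differ:

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("logs/conversas.csv")  # mounted as a persistent volume in production

def log_interaction(question: str, answer: str) -> None:
    # Append one row per user interaction; write the header on first use.
    LOG_FILE.parent.mkdir(parents=True, exist_ok=True)
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["timestamp", "question", "answer"])
        writer.writerow([datetime.now(timezone.utc).isoformat(), question, answer])

log_interaction("What is RAG?", "Retrieval-Augmented Generation combines search with generation.")
```

Because the file lives on a mounted volume, rows accumulate across container rebuilds, which is what makes the future data-driven FAQ feasible.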

📌 Roadmap

  • RAG with LangChain + OpenAI
  • Automated deployment with GitHub Actions
  • Persistent interaction logging
  • Caching for frequently asked questions and repeated answers
  • Auto-generated FAQ module
  • Admin dashboard with usage metrics
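The planned FAQ cache could be as simple as a normalized-question lookup that short-circuits the RAG pipeline on repeated questions. A stdlib sketch of that idea (hypothetical names, not the project's implementation):

```python
import re
from typing import Callable

def normalize(question: str) -> str:
    # Collapse case and punctuation so near-duplicate questions
    # map to the same cache key.
    return re.sub(r"[^a-z0-9 ]", "", question.lower()).strip()

class AnswerCache:
    def __init__(self) -> None:
        self._answers: dict[str, str] = {}

    def get_or_compute(self, question: str, compute: Callable[[str], str]) -> str:
        key = normalize(question)
        if key not in self._answers:
            self._answers[key] = compute(question)  # e.g. the full RAG chain call
        return self._answers[key]

cache = AnswerCache()
calls = 0

def expensive_rag_call(q: str) -> str:
    global calls
    calls += 1
    return f"answer to: {q}"

cache.get_or_compute("What is RAG?", expensive_rag_call)
cache.get_or_compute("what is rag", expensive_rag_call)  # cache hit, no second call
print(calls)  # → 1
```

Combined with the CSV logs above, frequency counts over normalized keys would also feed directly into the auto-generated FAQ module.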

Testing & Security

  • bandit for security scanning at build time
  • ruff and mypy for linting and static typing
  • CI pipeline fails on security violations

About

EstudaMais.tech AI assistant · DevOps showcase · RAG · Docker · CI/CD · GitHub Actions · LangChain

Languages

  • Python 85.0%
  • Shell 12.4%
  • Dockerfile 2.6%