BiasLens AI is an Azure-powered platform that helps healthcare professionals, administrators, and researchers detect bias in clinical notes, patient feedback, and medical documentation.
By analyzing unstructured text for subtle but harmful language related to gender, race, age, and ability, it brings responsible AI and equity into the heart of healthcare systems.
- Uses Azure OpenAI to identify problematic phrases such as "non-compliant", "emotional", or "aggressive"
- Context-aware flagging with suggested inclusive rewrites (see the sketch below)
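
A minimal sketch of what the detection call might look like, assuming the `openai` Python SDK (>= 1.x) against an Azure OpenAI deployment. The environment variable names, the `gpt-4o` deployment name, and the JSON response contract are illustrative assumptions, not the project's actual code:

```python
import json
import os

from openai import AzureOpenAI  # assumes the openai>=1.x SDK

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],          # hypothetical env var names
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_version="2024-02-01",
)

def flag_biased_phrases(note: str) -> list[dict]:
    """Ask the model to flag potentially bias-coded phrases in a clinical note."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed deployment name; substitute your Azure deployment
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "Flag gender-, race-, age-, or ability-coded language in the "
                    "clinical note. Respond with JSON only: "
                    '[{"phrase": "...", "category": "...", "suggestion": "..."}]'
                ),
            },
            {"role": "user", "content": note},
        ],
    )
    # The model is asked for JSON; production code should validate the reply.
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    print(flag_biased_phrases("Patient appears overly emotional and uncooperative."))
```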
 
- Streamlit app interface for real-time analysis (a UI sketch follows this list)
- Inclusive nudges during documentation
- Supports training and reflection modules
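
One way the Streamlit front end could wire into the detector. `biaslens.detect` is a hypothetical import path standing in for wherever a helper like `flag_biased_phrases` above would live:

```python
# Illustrative UI sketch only -- run with: streamlit run sketch_app.py
import streamlit as st

from biaslens.detect import flag_biased_phrases  # hypothetical import path

st.title("BiasLens AI")
note = st.text_area("Paste a clinical note")

if st.button("Analyze") and note:
    for finding in flag_biased_phrases(note):
        # Surface each flagged phrase with its category and inclusive rewrite.
        st.warning(
            f'"{finding["phrase"]}" may be {finding["category"]}-coded '
            f'- suggested rewrite: {finding["suggestion"]}'
        )
```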
 
- SHAP/LIME for optional explainability
- Azure Content Safety integration (sketched below)
- Anonymized NLP processing for compliance
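
A hedged sketch of the Content Safety integration point, assuming the `azure-ai-contentsafety` (>= 1.0) SDK. The environment variable names are placeholders, and running this screen before the bias analysis is an assumption about the pipeline:

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],  # placeholder env var names
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

def safety_screen(text: str) -> dict[str, int]:
    """Return harm-category severities for a note before deeper bias analysis."""
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    return {item.category: item.severity for item in result.categories_analysis}
```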
 
Input: "Patient appears overly emotional and uncooperative. History unclear."
Output: Flags "overly emotional" as gender-coded and "uncooperative" as ambiguous.
Suggested Rewrite: "Patient expressed distress during session; full history not yet confirmed."
Modules:
- Bias Detection Engine
- Explainable Language Insights
- Inclusive Suggestions
- Streamlit UI
- Bias Reporting (coming soon)
 
Goals:
- Reduce unconscious bias in healthcare documentation
- Improve accuracy and inclusivity in clinical notes
- Promote responsible and explainable AI in healthcare
 
Tech stack:
- Azure OpenAI
- Streamlit
- Azure Content Safety
- Python
- SHAP / LIME (optional)
- Azure Blob Storage
- Azure Static Web Apps
 
BiasLens AI supports the UN Sustainable Development Goals (SDGs):
- SDG 3: Good Health and Well-being
- SDG 5: Gender Equality
- SDG 10: Reduced Inequalities
 
Setup:

```bash
git clone https://github.com/Priscillaod/biaslens-ai.git
cd biaslens-ai
pip install -r requirements.txt
```

Create a `.env` file:

```
OPENAI_API_KEY=your-key-here
```
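
If `app/main.py` does not already read the `.env` file itself, a `python-dotenv` call like the following is one way to load it at startup (an assumption about the app's entry point, not confirmed by the repo):

```python
# Hypothetical startup snippet: load OPENAI_API_KEY from .env (requires python-dotenv)
import os

from dotenv import load_dotenv

load_dotenv()  # reads the .env file created above into the process environment
assert os.getenv("OPENAI_API_KEY"), "Set OPENAI_API_KEY in .env"
```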
Run the app:

```bash
streamlit run app/main.py
```

Built for the Microsoft Innovation Hackathon 2025.