UNIT 1 — Foundations (Weeks 1–2)
Week 1 — Introduction to Visual Tools for AI
Visual analytics for AI
Real-world use cases (health, smart cities, education)
User profiles: data scientist, analyst, decision-maker
Course dataset overview
Exploratory visual tools: JupyterLab, Plotly, Streamlit, and notebook dashboards
Week 2 — Explainable AI (XAI) and Model Interpretation
Black-box vs. interpretable models
Introduction to Explainable AI (XAI)
SHAP and LIME fundamentals
SHAP visualizations: summary, force, and dependence plots
Communicating model explanations to non-technical stakeholders
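Every SHAP plot in this week starts from per-feature attribution values. As a hedged illustration of the underlying idea (not the `shap` library's API), the sketch below computes exact Shapley values by brute-force coalition enumeration for a toy linear model; the weights and inputs are invented for the example.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values by enumerating every feature coalition.

    Features outside a coalition are set to their baseline value. This is
    tractable only for a handful of features, which is why SHAP approximates it.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for coalition in combinations(others, size):
                with_i = list(baseline)
                without_i = list(baseline)
                for j in coalition:
                    with_i[j] = x[j]
                    without_i[j] = x[j]
                with_i[i] = x[i]
                # Standard Shapley weighting of this coalition's marginal contribution.
                weight = factorial(size) * factorial(n - 1 - size) / factorial(n)
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi

# Toy linear model; for linear models the Shapley value of feature i
# is exactly w_i * (x_i - baseline_i).
w = [2.0, -1.0, 0.5]
predict = lambda v: sum(wi * vi for wi, vi in zip(w, v))

x = [1.0, 3.0, 2.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(predict, x, baseline)
print(phi)
```

The "efficiency" property — attributions summing to `predict(x) - predict(baseline)` — is what makes these values readable as a force plot.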
UNIT 2 — Technical Dashboards (Weeks 3–5)
Weeks 3–4 — Technical Dashboards for AI
Grafana for AI monitoring
Alternative tools: Superset, Kibana
Data sources (CSV, SQL, Prometheus, APIs)
Week 5 — AI + Grafana
Prediction visualization
Performance and evaluation metrics
Using Grafana for explainable and operational AI dashboards
Introduction to AI-assisted features in Grafana (AI Assistant):
Natural language queries for dashboard creation
AI-assisted panel and query generation
Accelerating data exploration and troubleshooting with AI support
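The performance panels in this week chart per-window aggregates of a prediction log. A minimal, tool-agnostic sketch of computing those numbers in pure Python (the log schema is invented for illustration; in practice a data source such as Prometheus or SQL would feed Grafana):

```python
from math import ceil

# Illustrative prediction log: one record per scored request.
log = [
    {"y_true": 1, "y_pred": 1, "latency_ms": 42},
    {"y_true": 0, "y_pred": 1, "latency_ms": 55},
    {"y_true": 1, "y_pred": 1, "latency_ms": 38},
    {"y_true": 0, "y_pred": 0, "latency_ms": 61},
]

def panel_metrics(records):
    """Aggregate the numbers a dashboard panel would chart for one time window."""
    latencies = sorted(r["latency_ms"] for r in records)
    correct = sum(r["y_true"] == r["y_pred"] for r in records)
    return {
        "count": len(records),
        "accuracy": correct / len(records),
        # Nearest-rank 95th-percentile latency.
        "p95_latency_ms": latencies[ceil(0.95 * len(latencies)) - 1],
    }

metrics = panel_metrics(log)
print(metrics)
```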
UNIT 3 — Microsoft Power BI & Excel (Weeks 6–8)
Week 6 — Foundations: Excel & Power Pivot
Power Query (Visual ETL)
Power Pivot
Data Modeling basics
Connecting to data sources
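Power Query records an ETL pipeline as an ordered list of applied steps (filter, group, aggregate). As a language-neutral sketch of that same idea — using an invented sales table, not Power Query's M language:

```python
from collections import defaultdict

# Toy source table, analogous to a range loaded into Power Query.
rows = [
    {"region": "North", "product": "A", "units": 10},
    {"region": "North", "product": "B", "units": 0},
    {"region": "South", "product": "A", "units": 7},
    {"region": "South", "product": "A", "units": 5},
]

# Step 1 - Filtered Rows: keep rows with at least one unit sold.
filtered = [r for r in rows if r["units"] > 0]

# Step 2 - Grouped Rows: total units per region, like Group By in Power Query.
totals = defaultdict(int)
for r in filtered:
    totals[r["region"]] += r["units"]

print(dict(totals))  # {'North': 10, 'South': 12}
```

Each step consumes the previous step's output, which is exactly how Power Query's applied-steps pane lets you inspect and reorder a transformation chain.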
Week 7 — Power BI Desktop & Modeling
Power BI overview
Basic DAX
Visualizing data
Dashboard design
Week 8 — Power BI for AI Use Cases
Native AI Visuals
Advanced analytics integration for custom visuals
Visualizing ML outputs
UNIT 4 — Observability & Visual Analytics for LLM-based AI (Week 9)
Week 9 — Observability in LLM-Based Systems
From model interpretability to system observability
Limitations of classical XAI methods for LLMs
Prompt–context–response traces as visual explanations
Key observability signals: latency, cost, errors, feedback
Visualizing hallucinations and failure patterns
Introduction to LLM observability tools (open-source and industrial examples)
Communicating reliability and limitations of AI systems to stakeholders
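A hedged sketch of rolling the observability signals listed above (latency, cost, errors, feedback) up from prompt–response trace records; the trace schema here is invented for illustration and does not match any specific tool's format.

```python
# Illustrative traces: one record per LLM call in a pipeline.
traces = [
    {"latency_ms": 820, "cost_usd": 0.004, "error": False, "feedback": 1},
    {"latency_ms": 1310, "cost_usd": 0.006, "error": False, "feedback": -1},
    {"latency_ms": 240, "cost_usd": 0.001, "error": True, "feedback": 0},
]

def summarize(traces):
    """Aggregate per-call traces into the signals a dashboard would show."""
    n = len(traces)
    rated = sum(t["feedback"] != 0 for t in traces)  # calls with explicit feedback
    return {
        "error_rate": sum(t["error"] for t in traces) / n,
        "total_cost_usd": round(sum(t["cost_usd"] for t in traces), 6),
        "mean_latency_ms": sum(t["latency_ms"] for t in traces) / n,
        # Share of explicit user feedback that was positive (thumbs up).
        "positive_feedback_rate": sum(t["feedback"] == 1 for t in traces) / max(1, rated),
    }

summary = summarize(traces)
print(summary)
```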
UNIT 5 — Integration & Final Project (Weeks 10–12)
Week 10 — Tool Integration
End-to-end AI pipelines for visual analytics
Shared data pipelines across tools (Jupyter, Power BI, Grafana)
From experimentation (notebooks) to operational dashboards
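One concrete pattern for sharing a pipeline across tools: the notebook writes its results to a flat file that Power BI, Grafana (via a CSV or SQL data source), or Excel can all read. A minimal sketch with invented column names, writing to an in-memory buffer as a stand-in for a shared file path:

```python
import csv
import io

# Results produced in a notebook (invented example rows).
results = [
    {"timestamp": "2024-01-01T00:00:00", "model": "rf", "accuracy": 0.91},
    {"timestamp": "2024-01-02T00:00:00", "model": "rf", "accuracy": 0.89},
]

# Write to CSV; any of the downstream tools can ingest this format.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["timestamp", "model", "accuracy"])
writer.writeheader()
writer.writerows(results)

csv_text = buffer.getvalue()
print(csv_text)
```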
Week 11 — Final Project Development
Project implementation
Asynchronous mentoring
Week 12 — Final Project Presentation & Evaluation
Final project presentation
Final evaluation
⸻
🎯 Learning objectives
By the end of Unit 4, students will be able to:
- Explain why traditional XAI techniques are insufficient for LLM-based systems
- Describe the concept of observability in AI systems
- Identify key signals for monitoring LLM behavior (prompts, traces, latency, cost, feedback)
- Apply visual analytics techniques to interpret LLM system behavior
- Critically assess the reliability and limitations of LLM-generated outputs
- Communicate LLM system performance and risks to non-technical stakeholders
⸻
🧠 Key concepts
- Observability vs. interpretability
- LLMs as probabilistic systems
- Prompt-driven behavior
- System-level explanations
- Human-in-the-loop evaluation
- Semantic errors and hallucinations
- Trust and accountability in AI systems