Visual Tools for AI

Training details

Location

UPC North Campus, Barcelona

Date

04/05/2026

Target Audience

Student-Focused

Teaching language(s)

English / Catalan / Spanish

Organizing institution

Universitat Politècnica de Catalunya

Delivery mode

Hybrid

Level

Intermediate

Format

Case Study Session, Hands-on session, Tutorial

Capacity or seats limit

30

Topics / Keywords

Explainable Artificial Intelligence (XAI), Data Visualization, Dashboards

This 3 ECTS course introduces students to visual tools and techniques for interpreting, explaining, and communicating AI systems in a bidirectional way, enabling both human understanding of models and interactive feedback into the analytical process. Through a combination of theory and hands-on practice, students will explore explainable AI methods, visual analytics, and dashboard design using tools such as Power BI, Grafana, Plotly, and Jupyter-based environments.

Special emphasis is placed on observability for LLM-based systems and on infrastructure-level observability, addressing model behavior, performance, cost, reliability, and governance across end-to-end AI pipelines.

The course is delivered in a hybrid format, combining in-person sessions (the first and final sessions of the course) with asynchronous and remote learning activities. It emphasizes the effective communication of AI insights to non-technical stakeholders and concludes with a real-world project in which students apply the acquired knowledge to analyze, visualize, explain, and monitor an AI model in a practical context.

What You Will Learn

Learning objectives

  • Explain the role of shallow learning in modern AI systems
  • Implement key supervised algorithms (regression, classification, trees, ensembles, SVM) in Python using standard libraries
  • Apply clustering and PCA to explore and structure data
  • Describe and apply basic NLP workflows (tokenization, vectorization, embeddings) and compare them with shallow text-classification approaches
  • Design and run small ML experiments, including train/validation splits and model comparison
  • Evaluate models with appropriate metrics and communicate their performance and limitations
  • Plan and execute an ML mini-project following CRISP-DM and present the results clearly

Learning outcomes

  • Build explainable dashboards
  • Select the appropriate visual tool for each context
  • Communicate AI results, limitations, and next steps to both technical and non-technical stakeholders
  • Visually interpret and explain AI models

Agenda

UNIT 1 — Foundations (Weeks 1–2)

Week 1 — Introduction to Visual Tools for AI

Visual analytics for AI
Real-world use cases (health, smart cities, education)
User profiles: data scientist, analyst, decision-maker
Course dataset overview
Exploratory visual tools: JupyterLab, Plotly, Streamlit, and notebook dashboards (see the sketch below)
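
A minimal sketch of this exploratory workflow, using Plotly Express in a JupyterLab notebook (the CSV path and column names are placeholders, not the actual course dataset):

    # Exploratory scatter plot with Plotly Express.
    # The CSV path and column names are placeholders, not the course dataset.
    import pandas as pd
    import plotly.express as px

    df = pd.read_csv("course_dataset.csv")  # hypothetical file
    fig = px.scatter(df, x="feature_a", y="feature_b", color="label",
                     title="Exploratory view of the dataset")
    fig.show()  # renders interactively in JupyterLab or the browser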

Week 2 — Explainable AI (XAI) and Model Interpretation

Black-box vs. interpretable models
Introduction to Explainable AI (XAI)
SHAP and LIME fundamentals
SHAP visualizations: summary, force, and dependence plots
Communicating model explanations to non-technical stakeholders
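
A minimal SHAP sketch on a tree-based model (illustrative only: it uses the bundled scikit-learn diabetes dataset rather than the course dataset, and assumes the shap package is installed):

    # Global explanation of a random-forest regressor with SHAP.
    # The dataset choice is illustrative, not the course dataset.
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(model)   # fast, exact explainer for tree models
    shap_values = explainer.shap_values(X)  # one contribution per feature per sample

    # Summary (beeswarm) plot: feature importance plus direction of effect.
    shap.summary_plot(shap_values, X)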

UNIT 2 — Technical Dashboards (Weeks 3–5)

Weeks 3–4 — Technical Dashboards for AI

Grafana for AI monitoring
Alternative tools: Superset, Kibana
Data sources (CSV, SQL, Prometheus, APIs)
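
To make the Prometheus data source concrete, a minimal sketch (assuming the prometheus_client package; the metric names and update loop are illustrative) of exposing model metrics that Prometheus scrapes and Grafana plots:

    # Serve model metrics over HTTP for Prometheus to scrape.
    # Metric names and the dummy update loop are illustrative.
    import random
    import time

    from prometheus_client import Counter, Gauge, start_http_server

    PREDICTIONS = Counter("model_predictions", "Predictions served")  # exposed as model_predictions_total
    ACCURACY = Gauge("model_accuracy", "Latest evaluation accuracy")

    start_http_server(8000)  # metrics at http://localhost:8000/metrics

    while True:
        PREDICTIONS.inc()
        ACCURACY.set(0.9 + random.uniform(-0.05, 0.05))  # stand-in for a real evaluation
        time.sleep(5)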

Week 5 — AI + Grafana

Prediction visualization
Performance and evaluation metrics
Using Grafana for explainable and operational AI dashboards
Introduction to AI-assisted features in Grafana (AI Assistant):
Natural language queries for dashboard creation
AI-assisted panel and query generation
Accelerating data exploration and troubleshooting with AI support

UNIT 3 — Microsoft Power BI & Excel (Weeks 6–8)

Week 6 — Foundations: Excel & Power Pivot

Power Query (Visual ETL)
Power Pivot
Data Modeling basics
Connecting to data sources

Week 7 — Power BI Desktop & Modeling

Power BI overview
Basic DAX
Visualizing Data
Dashboard design

Week 8 — Power BI for AI Use Cases

Native AI Visuals
Advanced analytics integration for custom visuals
Visualizing ML outputs
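
A common hand-off pattern (a sketch; the dataset, file name, and column names are illustrative) is to export model outputs from Python to a CSV that Power BI imports through Get Data > Text/CSV:

    # Export predictions and confidence scores for visualization in Power BI.
    # The iris dataset and column names are illustrative.
    import pandas as pd
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True, as_frame=True)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    out = X.copy()
    out["prediction"] = model.predict(X)
    out["confidence"] = model.predict_proba(X).max(axis=1)
    out.to_csv("ml_outputs_for_powerbi.csv", index=False)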

UNIT 4 — Observability & Visual Analytics for LLM-based AI (Week 9)

Week 9 — Observability in LLM-Based Systems

From model interpretability to system observability
Limitations of classical XAI methods for LLMs
Prompt–context–response traces as visual explanations
Key observability signals: latency, cost, errors, feedback
Visualizing hallucinations and failure patterns
Introduction to LLM observability tools (open-source and industrial examples)
Communicating reliability and limitations of AI systems to stakeholders
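
As a concrete illustration of prompt–context–response traces, a minimal sketch of a trace record capturing the signals listed above (the schema is an assumption for teaching purposes, not any specific tool's format):

    # Log one prompt–context–response trace with latency, error, cost,
    # and feedback fields. The schema is illustrative.
    import json
    import time

    def record_trace(prompt: str, context: str, call_llm) -> dict:
        start = time.perf_counter()
        try:
            response = call_llm(prompt, context)  # call_llm is a stand-in for any LLM client
            error = None
        except Exception as exc:
            response, error = None, str(exc)
        trace = {
            "timestamp": time.time(),
            "prompt": prompt,
            "context": context,
            "response": response,
            "latency_s": round(time.perf_counter() - start, 3),
            "error": error,
            "cost_usd": None,       # fill in from the provider's usage report
            "user_feedback": None,  # e.g. thumbs up/down collected later
        }
        with open("llm_traces.jsonl", "a") as f:
            f.write(json.dumps(trace) + "\n")
        return trace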

UNIT 5 — Integration & Final Project (Weeks 10–12)

Week 10 — Tool Integration

End-to-end AI pipelines for visual analytics
Shared data pipelines across tools (Jupyter, Power BI, Grafana)
From experimentation (notebooks) to operational dashboards
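
One way to realize such a shared pipeline (a sketch; the SQLite database, table, and values are assumptions) is to write model outputs once to a SQL store that both Power BI and Grafana can query:

    # Append predictions to a shared SQLite table; Power BI can read it via
    # an ODBC connector and Grafana via a SQL data source plugin.
    # Database, table, and values are illustrative.
    import sqlite3

    import pandas as pd

    predictions = pd.DataFrame({
        "timestamp": pd.date_range("2026-05-04", periods=3, freq="h"),
        "prediction": [0.72, 0.65, 0.81],
    })

    with sqlite3.connect("ai_pipeline.db") as conn:
        predictions.to_sql("model_predictions", conn, if_exists="append", index=False)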

Week 11 — Final Project Development

Project implementation
Asynchronous mentoring

Week 12 — Final Project Presentation & Evaluation

Final project presentation
Final evaluation

🎯 Learning objectives (Unit 4)

By the end of this unit, students will be able to:

  • Explain why traditional XAI techniques are insufficient for LLM-based systems
  • Describe the concept of observability in AI systems
  • Identify key signals for monitoring LLM behavior (prompts, traces, latency, cost, feedback)
  • Apply visual analytics techniques to interpret LLM system behavior
  • Critically assess the reliability and limitations of LLM-generated outputs
  • Communicate LLM system performance and risks to non-technical stakeholders

🧠 Key concepts

  • Observability vs. interpretability
  • LLMs as probabilistic systems
  • Prompt-driven behavior
  • System-level explanations
  • Human-in-the-loop evaluation
  • Semantic errors and hallucinations
  • Trust and accountability in AI systems

Instructor name(s)

Jesus Alcober
Juan Lopez Rubio
Toni Oller

Instructor’s biography

JESUS ALCOBER. Telecommunications Engineer and Doctor of Telecommunications Engineering from the Universitat Politècnica de Catalunya (UPC). Full Professor in the Department of Telematics Engineering at UPC, at the School of Telecommunications and Aerospace Engineering of Castelldefels (EETAC). After serving as Deputy Director of the Institute of Education Sciences (ICE), he is currently the Rector’s Delegate for the Unite! Digital Campus. President of the ISO/IEC SC6 Standardization Subcommittee. Founder of Alteraid S.L., a UPC spin-off providing solutions to improve people’s quality of life. He has conducted research stays in the United States, France, and the United Kingdom.

JUAN LÓPEZ. Doctor of Engineering from the Universitat Politècnica de Catalunya (UPC) and Computer Engineer (UPC). In 2002, he joined the Department of Computer Architecture at UPC, where he teaches as an Associate Professor at the School of Telecommunications and Aerospace Engineering of Castelldefels (EETAC). He has completed predoctoral stays at the University of Lugano (Switzerland) and at the École Nationale de l’Aviation Civile (Toulouse, France). In 2011, he participated in the creation of the UPC spin-off Alteraid, focused on the development of web technologies and Internet of Things (IoT) applications for e-health and well-being. In 2013, he carried out his postdoctoral stay at NASA’s Goddard Space Flight Center. He has participated as a researcher in more than 20 national and European projects, with over 25 publications in international conferences and journals.

TONI OLLER. Computer Engineer and adjunct professor in the Department of Telematics Engineering and researcher at the BAMPLA Research Group of the Universitat Politècnica de Catalunya since 2002. Since 1998, he has participated in numerous Internet projects with significant technical requirements for various commercial brands. From 2002 onwards, he has held technical responsibilities in several state- and EU-funded projects. Additionally, he is a consultant at the Universitat Oberta de Catalunya and co-founder of the spin-off Alteraid S.L.

Course Description

The “Visual Tools for AI” course is designed to equip students with the knowledge and practical skills to visually interpret, explain, and communicate the behavior of artificial intelligence models in real-world contexts. In an era where AI is increasingly embedded in business, healthcare, and research, the ability to make AI models transparent and understandable is critical for building trust and enabling informed decision-making by non-technical stakeholders.

This course provides hands-on experience with modern visual analytics tools, including Microsoft Power BI, Grafana, Plotly, and Jupyter-based interactive visualizations, allowing students to explore and explain AI model outputs effectively. Students will learn to create interactive dashboards, perform model interpretability analyses using methods like SHAP and LIME, and communicate insights clearly.

The course is project-driven, culminating in a real-world assignment where students apply the tools and techniques learned to analyze AI models, build dashboards, and deliver actionable insights. By bridging AI and visual analytics, this course prepares students to translate complex model results into understandable and impactful visual narratives for both technical and non-technical audiences.

Prerequisites

  • Comfortable programming in Python (variables, functions, basic libraries)
  • Basic knowledge of linear algebra (vectors, matrices)
  • Introductory understanding of probability and statistics
  • Ability to work with CSV/tabular data and Jupyter notebooks (highly recommended)

Optional:

  • Fundamentals of data analysis (Pandas)
  • Basic knowledge of SQL or data sources

Certificate/badge details

Certificate of achievement

Required readings or materials

Explainable AI (XAI) Concepts

  • Molnar, Christoph. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable (2nd Edition, 2023). https://christophm.github.io/interpretable-ml-book/

SHAP and LIME for Model Interpretability

  • Lundberg, Scott M., and Su-In Lee. “A Unified Approach to Interpreting Model Predictions.” NeurIPS 2017. https://arxiv.org/abs/1705.07874
  • Ribeiro, Marco Tulio, et al. “Why Should I Trust You? Explaining the Predictions of Any Classifier.” KDD 2016. https://arxiv.org/abs/1602.04938

Data Visualization and Dashboards

  • Microsoft Power BI documentation: https://learn.microsoft.com/en-us/power-bi/
  • Plotly Python Open-Source Graphing Library: https://plotly.com/python/
  • Grafana documentation: https://grafana.com/docs/grafana/latest/

Jupyter and Python for Data Analysis

  • McKinney, Wes. Python for Data Analysis (2nd Edition, 2017). https://www.oreilly.com/library/view/python-for-data/9781491957653/
  • Jupyter documentation: https://jupyter.org/documentation

Case Studies & Projects

  • Kaggle datasets for AI visualization practice: https://www.kaggle.com/datasets
  • Real-world project tutorials using dashboards and XAI: https://towardsdatascience.com/tagged/xai

Observability in LLM-based Systems

  • Comet (2024). Opik documentation: open-source LLM observability and optimization. https://www.comet.com/docs/opik/
  • Mendels, G., & Verre, J. (2024). “Introducing Open Source LLM Evaluation from Comet (Opik).” Comet blog.

Technical setup

  • Computer with at least 8 GB of RAM (16 GB recommended); a discrete GPU is not required but can speed up model training
  • Excel with the Power Pivot add-in and Power BI Desktop
  • Stable internet connection suitable for video conferencing