Transformers, LLMs, RAG and Agents: From Theory to Production

This intensive course provides hands-on training in modern GenAI technologies, covering the full spectrum from transformer architectures to autonomous agents. Participants will learn to work with large language models, build sophisticated RAG systems, implement the Model Context Protocol, and create REACT agents capable of complex reasoning and action. The course emphasizes practical implementation alongside theoretical understanding, preparing participants to deploy production-ready AI solutions.

Training details
Location
UPC North Campus
Date
29/06/2026
Target Audience
Student-Focused
Teaching language(s)
English / Catalan / Spanish
Organizing institution
Universitat Politècnica de Catalunya
Delivery mode
Hybrid
Level
Advanced
Format
Case Study Session, Hands-on session, Lecture, Panel Discussion, Self-paced Module
Capacity or seats limit
30
Industrial domains
Topics / Keywords
Transformers, Large Language Models, Natural Language Processing, Fine-tuning, Prompt Engineering, RAG (Retrieval Augmented Generation), Vector Embeddings, Semantic Search, Model Context Protocol, MCP Servers, REACT Agents, Hugging Face, Ollama, Multi-modal AI, Agent Frameworks, A2A Protocol
What You Will Learn
Participants will gain comprehensive experience with the modern LLM ecosystem, including working with transformer models via Hugging Face, deploying models locally and via APIs, building retrieval-augmented generation systems, implementing the Model Context Protocol for tool integration, and creating autonomous REACT agents. The course emphasizes practical, hands-on learning with real-world projects and use cases.
Learning Objectives:
● Understand the architecture and mechanisms of transformer models and their application in natural language processing
● Implement and fine-tune pre-trained language models using modern frameworks and libraries
● Design and deploy Retrieval Augmented Generation (RAG) systems for enhanced AI assistants
● Build autonomous agents using the REACT paradigm and integrate them with external tools via Model Context Protocol
● Evaluate and select appropriate tools and frameworks for different LLM application scenarios
● Deploy LLM-based solutions both locally and via cloud APIs
Agenda
Week 1: Transformers and Large Language Models
● Introduction to NLP and transformer architecture
● Hands-on with Hugging Face transformers library
● Fine-tuning pre-trained models
● Understanding LLM capabilities and emergent properties
● Prompt engineering fundamentals
● Working with LLM APIs and local deployment (Ollama, llama.cpp)
● Multi-modal LLMs overview
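The prompt engineering fundamentals covered in Week 1 can be illustrated with a small, model-agnostic sketch: assembling a few-shot classification prompt from labeled examples before sending it to any LLM API. The example reviews and labels here are made up for illustration.

```python
# Illustrative sketch of few-shot prompting: concatenate an instruction,
# labeled examples, and the new query into a single prompt string.
# The reviews and labels below are invented for demonstration purposes.
FEW_SHOT_EXAMPLES = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I left the cinema halfway through.", "negative"),
]

def build_few_shot_prompt(examples, query):
    """Build a few-shot sentiment prompt ending where the model should answer."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model completes from here
    return "\n".join(lines)

prompt = build_few_shot_prompt(FEW_SHOT_EXAMPLES, "A masterpiece of modern cinema.")
print(prompt)
```

The same prompt string can then be passed unchanged to a hosted API or to a locally deployed model (e.g. via Ollama), which is exactly the API-vs-local trade-off the week explores.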
Week 2: RAG Assistants and Embeddings
● Vector embeddings and semantic search principles
● Embedding databases (Pinecone, Chroma, …)
● Retrieval Augmented Generation architecture
● Building simple assistants (tool: https://lamb-project.org)
● Comparing RAG frameworks with custom implementations
● Case study: the Lamb knowledge-base server
● Optimization strategies for RAG systems
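The retrieval half of a RAG system boils down to ranking documents by the similarity of their vector embeddings to the query embedding. The toy sketch below uses hand-made 3-dimensional vectors and plain cosine similarity; a real system would obtain embeddings from a learned model and store them in a vector database such as Chroma or Pinecone. The document names and vectors are invented for illustration.

```python
# Toy sketch of semantic search: rank documents by cosine similarity
# between their embedding vectors and the query embedding.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: dot product over norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical 3-dimensional embeddings for three documents.
docs = {
    "llm_intro.md": [0.9, 0.1, 0.0],
    "rag_guide.md": [0.7, 0.6, 0.1],
    "cooking.md":   [0.0, 0.1, 0.9],
}
query = [0.8, 0.5, 0.0]  # hypothetical embedding of the user's question

# Sort documents from most to least similar to the query.
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked[0])  # the most semantically similar document
```

In the full RAG pipeline, the top-ranked documents are then inserted into the LLM prompt as context before generation.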
Week 3: Model Context Protocol (MCP)
● Introduction to Model Context Protocol
● Integrating MCP servers with development tools (VS Code, Cursor, Claude Desktop)
● MCP in software development workflows
● Building custom MCP servers
● Debugging and deployment best practices
● MCP server authentication and security
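MCP messages are built on JSON-RPC 2.0. The following sketch shows the general shape of a tool-call request and response, dispatched by hand to a toy tool registry; a real MCP server would be built with an official MCP SDK rather than parsing messages like this, and the `add` tool here is invented for illustration.

```python
# Illustrative sketch of the JSON-RPC 2.0 request/response shape that the
# Model Context Protocol builds on. A real server uses an MCP SDK; this
# hand-rolled dispatcher only demonstrates the message structure.
import json

TOOLS = {"add": lambda args: args["a"] + args["b"]}  # toy tool registry

def handle_request(raw):
    """Parse a JSON-RPC request, call the named tool, wrap the result."""
    req = json.loads(raw)
    name = req["params"]["name"]
    result = TOOLS[name](req["params"]["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

request = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "add", "arguments": {"a": 2, "b": 3}},
})
print(handle_request(request))
```

Clients such as VS Code, Cursor, or Claude Desktop exchange messages of exactly this style with MCP servers over stdio or HTTP transports.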
Week 4: REACT Agents and Advanced Integration
● Understanding the REACT (Reasoning and Acting) paradigm
● Building REACT agents from scratch
● Agent frameworks and orchestration
● Docker containerization for agents
● Integration with Google and OpenAI APIs
● Introduction to Agent-to-Agent (A2A) protocol
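The REACT loop alternates Thought, Action, and Observation until the agent can answer. The minimal sketch below replaces the LLM with a scripted trace so the loop structure is visible in isolation; the `calculator` tool and the scripted thoughts are invented for illustration.

```python
# Minimal REACT-style loop with a scripted "reasoner" standing in for an
# LLM: the agent alternates Thought -> Action -> Observation until done.
def calculator(expression):
    """Toy tool: evaluate an arithmetic expression (builtins disabled)."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

# Scripted trace a real agent would receive from an LLM at each step.
SCRIPT = [
    ("I need to compute the product first.", ("calculator", "6 * 7")),
    ("I now know the final answer.", None),  # None = stop and answer
]

def run_agent(script):
    observation = None
    for thought, action in script:
        print(f"Thought: {thought}")
        if action is None:
            return observation  # final answer is the last observation
        tool, arg = action
        observation = TOOLS[tool](arg)
        print(f"Action: {tool}[{arg}]")
        print(f"Observation: {observation}")

print(run_agent(SCRIPT))
```

In a real agent, each Thought/Action pair is generated by the LLM conditioned on the transcript so far, which is where agent frameworks and orchestration come in.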
Instructor name(s)
Marc Alier
Cristian Maximilano Rodriguez
Instructor’s biography
Marc Alier is an Associate Professor at the Barcelona School of Informatics. He holds a degree in Computer Science Engineering and a PhD in Sciences, as well as a Full Professor accreditation from ANECA. Alier has more than 25 years of teaching and research experience in software engineering, information systems and programming, with more than 180 academic publications, 50 of which are papers in indexed scientific journals. He is currently deputy director for AI in Education at ICE UPC. In his spare time he records podcasts, builds electric guitars and fathers teenagers.
Course Description
Purpose: This course addresses the growing demand for professionals who can effectively implement and deploy large language model applications. As organizations increasingly adopt AI technologies, there is a critical need for practitioners who understand not only the theoretical foundations but also the practical implementation of LLM-based systems.
Context: Large Language Models have revolutionized artificial intelligence, enabling unprecedented capabilities in natural language understanding, generation, and reasoning. This course bridges the gap between academic knowledge and industry application, covering the entire pipeline from model selection and fine-tuning to building production-ready RAG systems and autonomous agents.
Learning Objectives:
● Understand and explain transformer architecture and attention mechanisms
● Implement fine-tuning workflows for pre-trained language models
● Design and deploy RAG systems using vector databases and embedding models
● Build custom MCP servers for tool integration with AI assistants
● Develop REACT agents capable of reasoning and autonomous action
● Evaluate trade-offs between different LLM deployment strategies (API vs local)
● Apply prompt engineering techniques for optimal model performance
● Integrate multiple AI components into cohesive application architectures
Target Audience:
● University students (master’s level and advanced undergraduates) in computer science, AI, or related fields
● Researchers exploring LLM applications in their domains
● PhD candidates working on NLP or AI-related research
● Data scientists and ML engineers transitioning to LLM development
● Software developers building AI-powered applications
Prerequisites
● Proficient programming skills in Python (3+ years of experience preferred)
● Understanding of machine learning fundamentals (supervised/unsupervised learning, model training)
● Familiarity with neural networks and deep learning concepts
● Experience with version control (Git) and command-line interfaces
● Basic knowledge of RESTful APIs and web services
● Recommended: prior exposure to PyTorch or TensorFlow
Certificate/badge details
Certificate of Achievement
Required readings or materials
1. “Attention Is All You Need” (Vaswani et al., 2017) – foundational transformer paper: https://arxiv.org/abs/1706.03762
2. Hugging Face Transformers documentation – practical implementation guide: https://huggingface.co/docs/transformers/index
3. “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks” (Lewis et al., 2020): https://arxiv.org/abs/2005.11401
4. Model Context Protocol documentation: https://modelcontextprotocol.io/
5. “ReAct: Synergizing Reasoning and Acting in Language Models” (Yao et al., 2022): https://arxiv.org/abs/2210.03629
Technical setup
● Basic to intermediate programming skills in Python, HTML and JavaScript
● Experience with Git and command-line tools
● Basic knowledge of APIs, web services and the HTTP protocol
