In this masterclass, Jaydeep Chakrabarty, Director of AI in Tech at Piramal Capital & Housing Finance Limited, explores one of the most buzzed-about technologies shaping the future of AI: MCP.
Developed by Anthropic, MCP is often called the "USB-C of AI" for its potential to streamline integrations across AI systems.
In this video, we cover:
✅ What MCP is and why it's revolutionizing the AI world
✅ The origin of MCP and how it enhances Large Language Models (LLMs)
✅ Real-world examples of how MCP is transforming AI applications
✅ The core architecture of MCP and how it works
We'll also discuss how MCP differs from traditional APIs, focusing on the dynamic discovery capabilities that make AI agents more autonomous. Whether you're a developer or an AI enthusiast, this video will give you the foundation you need to understand MCP and its impact on AI-driven solutions.
00:00 Introduction
02:00 Why the World Needs MCP
02:57 About the Creator
03:34 Module Agenda
04:30 The Importance of Context in AI
10:05 Origin Story of MCP
14:10 The N×M Scenario
17:43 MCP Architecture
24:50 Capabilities of MCPs
27:30 MCP vs Traditional APIs: Key Differences
31:41 Recap
32:29 Ending
The video introduces the Model Context Protocol (MCP), often called the βUSB-C for AI,β a standard designed to connect large language models (LLMs) with external tools, APIs, and data sources. Developed by Anthropic in 2024, MCP is quickly gaining attention in 2025 as a foundational technology for building scalable and reliable AI agents. It addresses the growing complexity of integrating multiple LLMs with diverse tools and systems, something that has traditionally been fragile and difficult to maintain.
A central theme of the video is the importance of context in LLMs. Without context, AI models often generate generic or irrelevant answers. Through examples like planning a Goa trip, interpreting Garfield references, and generating images, the video demonstrates how adding more detail leads to more useful, accurate results. Just like humans need background information to respond meaningfully, LLMs rely heavily on context to produce value.
Before MCP, one major advancement was function calling, introduced by OpenAI. This allowed LLMs to connect with APIs and return structured responses. For example, when asked about the weather, an LLM could call a getWeather function, process the JSON output, and deliver a natural language response. While this opened the door for LLMs to interact with real-time data, it had significant limitations. Each integration had to be hand-coded, updates to APIs often broke connections, and scaling across multiple tools and models quickly became unmanageable.
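As a minimal sketch of the function-calling pattern described above, here is what the weather example might look like with the OpenAI Python client; the get_weather schema, model name, and prompt are illustrative assumptions, not taken from the video:

```python
# Function-calling sketch (OpenAI Python SDK; get_weather is a hypothetical tool).
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in Goa right now?"}],
    tools=tools,
)

# The model does not call the API itself: it returns a structured tool call that
# our own code must execute, then feed the JSON result back for a natural-language answer.
print(response.choices[0].message.tool_calls)
```

The hand-coded glue around each such call is exactly what breaks when an API changes, which is the limitation the video highlights.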
This challenge is described as the N × M problem: as organizations adopt multiple LLMs and multiple APIs, the number of integrations grows multiplicatively. For instance, five models connected to ten tools means fifty custom integrations, each requiring maintenance. This creates redundancy, brittleness, and high costs, ultimately limiting scalability.
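The arithmetic is simple to see; the N + M comparison below is the standard argument for a shared protocol rather than a figure quoted in the video:

```python
# Rough arithmetic behind the N x M problem (illustrative numbers from the example above).
models = 5    # LLM-powered applications
tools = 10    # external APIs / data sources

point_to_point = models * tools   # one bespoke integration per (model, tool) pair
shared_protocol = models + tools  # each side implements the protocol once

print(point_to_point)   # 50 custom integrations to build and maintain
print(shared_protocol)  # 15 protocol implementations in total
```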
MCP solves this by introducing a standardized client-server architecture. The MCP Host or Client is the LLM-powered application, like Claude Desktop or a ChatGPT agent, while MCP Servers expose tools, APIs, or resources. Instead of hardcoding every integration, the host can dynamically discover what tools or data are available. This plug-and-play design means a Gmail MCP could be swapped for an Outlook MCP with no changes to the client, reducing development overhead and ensuring flexibility.
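A minimal client-side sketch of that dynamic discovery, assuming the official `mcp` Python SDK and a hypothetical local server script gmail_server.py:

```python
# MCP client discovering a server's tools at runtime (no hardcoded integration).
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the (hypothetical) server as a subprocess and talk to it over stdio.
    server = StdioServerParameters(command="python", args=["gmail_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # discovered at runtime, not compile time
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())
```

Swapping gmail_server.py for an Outlook server would require no client changes, which is the plug-and-play property described above.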
A real-world example highlights how this works. An agent connected to Gmail MCP can be asked, βWhat board games did I recently purchase and at what cost?β The server queries Gmail, retrieves the relevant details, and delivers them back to the LLM, which explains the results naturally. This demonstrates how MCP allows agents to fetch and use context seamlessly, adapting to different sources as needed.
MCP defines three core primitives: Tools, which act like APIs or actions the LLM can take (e.g., sending an email, creating a Jira ticket); Resources, which provide access to structured or unstructured data like databases and local files; and Prompt Templates, reusable instructions that standardize interactions. Together, these primitives give LLMs both flexibility and structure in how they use external systems.
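A small server-side sketch of the three primitives, using the FastMCP helper from the official Python SDK; the tool, resource, and prompt bodies are stand-ins for illustration:

```python
# MCP server exposing a tool, a resource, and a prompt template (stubbed implementations).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def send_email(to: str, subject: str, body: str) -> str:
    """Tool: an action the LLM can take (stubbed here)."""
    return f"Email to {to} queued."

@mcp.resource("orders://recent")
def recent_orders() -> str:
    """Resource: structured or unstructured data the LLM can read."""
    return "Catan - Rs. 2,499; Ticket to Ride - Rs. 3,199"

@mcp.prompt()
def summarize_purchases() -> str:
    """Prompt template: a reusable instruction for the client."""
    return "Summarize the user's recent purchases, listing each item and its cost."

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```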
The video also compares MCP with traditional APIs. While both follow a client-server model, APIs are static and require manual coding, whereas MCP enables runtime discovery of capabilities. MCP is designed specifically for LLMs and agents, allowing developers to build once and integrate everywhere. In practice, MCP often wraps around existing APIs, making them agent-friendly while avoiding the fragility of direct function calls.
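As a sketch of that wrapping pattern, an existing REST endpoint exposed as an MCP tool so agents can discover and call it; the URL and response fields are assumptions:

```python
# Wrapping an existing REST API as an MCP tool (hypothetical weather endpoint).
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather-wrapper")

@mcp.tool()
def get_weather(city: str) -> str:
    """Return current weather for a city by calling an existing REST API."""
    resp = httpx.get("https://api.example.com/weather", params={"q": city})
    resp.raise_for_status()
    data = resp.json()
    return f"{city}: {data.get('temp_c', '?')} deg C, {data.get('condition', 'unknown')}"

if __name__ == "__main__":
    mcp.run()
```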
In conclusion, the video emphasizes that MCP is not just another integration layer but a critical step toward scalable, autonomous AI agents. By standardizing tool access, reducing redundant coding, and enabling dynamic context retrieval, MCP solves the limitations of earlier approaches like function calling. The session ends by previewing future modules on hands-on coding, building custom MCP servers, and exploring advanced use cases that will shape the next generation of AI applications.
Jaydeep Chakrabarty
Jaydeep Chakrabarty, Director of AI in Tech at Piramal Capital & Housing Finance Limited, is a technologist, open-source contributor, and thought leader in Artificial Intelligence. Previously at Thoughtworks, he led Generative AI engagements, R&D, and tech communities. As a speaker and author, Jaydeep shares insights on AI, innovation, and the future of technology.