Function Calling and MCP for LLMs: A Visual Guide

Avi Chawla

Co-founder DailyDoseofDS | IIT Varanasi | ex-AI Engineer MastercardAI | Newsletter (150k+)

What is Function Calling & MCP for LLMs? (explained with visuals and code)

Before MCP became popular, AI workflows relied on traditional Function Calling for tool access. Now, MCP is standardizing tool access for Agents/LLMs. The visual below explains how Function Calling and MCP work under the hood.

Today, let's learn:
- Function Calling, by building custom tools for Agents.
- How MCP helps, by building a local MCP client with mcp-use and using tools from the Browserbase MCP server.

In Function Calling:
- The LLM receives a prompt.
- The LLM decides which tool to use.
- The programmer implements a procedure that accepts the tool-call request (found in the LLM's response when you prompt it) and prepares the actual function call.
- A backend service executes the tool.

All of this takes place within our own stack:
- We host the tool.
- We implement the logic that determines which tool to invoke and with what parameters.
- We execute it.

So Function Calling requires us to wire everything manually. MCP simplifies this! Instead of hard-wiring tools, MCP:
- Standardizes how tools are defined, hosted, and exposed.
- Makes it easy to discover tools, understand their schemas, and use them.
- Requires approval before invoking them.
- Decouples implementation from consumption.

For instance, when you integrate an MCP server, you never write a line of Python code to integrate its tools. You just connect the MCP server, and everything beyond that follows a standard protocol handled by the MCP client and the LLM:
- They identify the MCP tool.
- They prepare the input arguments.
- They invoke the tool.
- They use the tool's output to generate a response.

Everything happens through a standard (but abstracted) protocol. So here's the key point: MCP and Function Calling are not in conflict. They're two sides of the same workflow.
- Function Calling helps an LLM decide what it wants to do.
- MCP ensures that tools are reliably available, discoverable, and executable, without you needing to custom-integrate everything.

For example, an agent might say, "I need to search the web," using Function Calling. That request can be routed through MCP to select from the available web-search tools, invoke the correct one, and return the result. Check the workflow in the diagram below.

In this setup, to build a local MCP client, I used mcp-use because it lets us connect any LLM to MCP servers and build private MCP clients, unlike Claude/Cursor:
- Compatible with Ollama & LangChain.
- Streams Agent output asynchronously.
- Built-in debugging mode, and more.

Find the mcp-use GitHub repo in the comments!
____
Find me → Avi Chawla
Every day, I share tutorials and insights on DS, ML, LLMs, and RAG.
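To make the Function Calling side concrete, here is a minimal sketch of the manual wiring described above. The `get_weather` tool, its schema, and the `dispatch` helper are all hypothetical names for illustration; the `request` dict mimics the shape of a tool-call request an LLM might return, not any specific provider's exact format.

```python
import json

# Our locally hosted tool: the backend service that actually runs.
def get_weather(city: str) -> str:
    # Hypothetical stub; a real tool would call a weather API.
    return f"22°C and sunny in {city}"

# Registry pairing each tool with the schema we advertise to the LLM.
TOOLS = {
    "get_weather": {
        "function": get_weather,
        "description": "Get the current weather for a city",
        "parameters": {"city": "string"},
    }
}

def dispatch(tool_call: dict) -> str:
    """Manually wire a tool-call request from the LLM's response to our code:
    pick the tool by name, parse its JSON arguments, and execute it."""
    name = tool_call["name"]
    args = json.loads(tool_call["arguments"])
    return TOOLS[name]["function"](**args)

# The LLM's response would contain a tool-call request shaped like this:
request = {"name": "get_weather", "arguments": '{"city": "Varanasi"}'}
print(dispatch(request))  # → 22°C and sunny in Varanasi
```

Every step here (hosting the tool, matching the name, parsing arguments, executing) is our responsibility, which is exactly the manual wiring MCP removes.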
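And here is roughly what the MCP side looks like with mcp-use. This is a hedged sketch: `MCPClient.from_dict` and `MCPAgent` follow the mcp-use README, but the Browserbase server command, package name, and environment variables below are assumptions, so check the server's own docs before running it. Note there is no per-tool glue code; discovery, argument preparation, and invocation all go through the protocol.

```python
import asyncio

from mcp_use import MCPAgent, MCPClient
from langchain_openai import ChatOpenAI

# Server config (illustrative): the npm package name and env vars
# for the Browserbase MCP server are assumptions, not verified values.
config = {
    "mcpServers": {
        "browserbase": {
            "command": "npx",
            "args": ["@browserbasehq/mcp-server-browserbase"],
            "env": {"BROWSERBASE_API_KEY": "..."},
        }
    }
}

async def main() -> None:
    client = MCPClient.from_dict(config)
    agent = MCPAgent(llm=ChatOpenAI(model="gpt-4o"), client=client)
    # The MCP client and LLM identify the tool, prepare its arguments,
    # invoke it, and fold the output into the response -- no manual wiring.
    result = await agent.run("Search the web for the latest MCP spec updates")
    print(result)

if __name__ == "__main__":
    asyncio.run(main())
```

Because the client only speaks the protocol, swapping Browserbase for any other MCP server is a config change, not a code change.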

Avi Chawla

Repo: https://github.com/mcp-use/mcp-use (7k+ stars) (don't forget to star ⭐)

Pooja Jain

Storyteller | Lead Data Engineer@Wavicle| Linkedin Top Voice 2025,2024 | Globant | Linkedin Learning Instructor | 2xGCP & AWS Certified | LICAP’2022

Informative post and creative illustration explaining Function calling & MCP for LLMs! Avi Chawla

This makes building agents feel less like duct-taping APIs and more like composing features.

Shivam Bharadwaj

AI Engineering | Neuroscience | Ophthalmic AI | Technical Humour & Insights

Great share Avi, the breakdown of function calling vs MCP really clarifies how both complement each other to make agent workflows more reliable and standardized.

Aditya Sharma

Learn AI with Me | AI Tools • AI Agents • AI News | 154K+ Followers | Ex-Deloitte & PwC

Avi Chawla Super helpful explanation 💯

Rahul Bhatt

CTO @Fycan | CEO @Wings Tech | 22+ Years in SaaS & Technology Leadership | Growth & Innovation Strategist | Driving Digital Transformation & Product Excellence | Canada, India

Great breakdown. Function calling gives LLMs the “decision” layer, while MCP provides the infrastructure layer to make those decisions reliable and standardized. Together, they make agents far more practical to build and scale.

Sivasankar Natarajan

Technical Director | GenAI Practitioner | Azure Cloud Architect | Data & Analytics | Solutioning What’s Next

This is an excellent, clear explanation of how Function Calling and MCP complement each other in modern LLM workflows. Thanks for breaking it down so effectively Avi Chawla

Tamojit Bhowmik

Founder @ Synkluna | We build RAG-powered AI Agents for SaaS & Startups | From knowledge to action in 14 days

Yes, MCP makes it so much more comfortable, Avi Chawla

James Davis

“Building at the crossroads of intellect and intuition — weaving logic, passion, and vision into intelligent systems.”

Function Calling is like choosing the dish off the menu 🍜 — the LLM knows what it wants. MCP is the waiter who knows every kitchen in town 🥡 — it standardizes the order, finds the right chef, and brings back the meal without you wiring the kitchen yourself. That’s the magic: Function Calling = decision, MCP = execution. 🔑✨ #AI #LLM #MCP #FunctionCalling #AIagents

Pavan Vinay Veesam

Financial Analyst | Data Analyst | Business Analyst | Financial Modeling | Financial Reporting | Financial Forecasting | SQL | Power BI | Tableau | Python | R | Advanced Excel | Machine Learning | MS Business Analytics

Hi Avi Chawla, can you please provide resources for the text-to-SQL project?
