What is Function calling & MCP for LLMs? (explained with visuals and code)

Before MCP became popular, AI workflows relied on traditional Function Calling for tool access. Now, MCP is standardizing it for Agents/LLMs. The visual below explains how Function Calling and MCP work under the hood.

Today, let's learn:

- Function calling, by building custom tools for Agents.
- How MCP helps, by building a local MCP client with mcp-use and using tools from the Browserbase MCP server.

In Function Calling:

- The LLM receives a prompt.
- The LLM decides which tool to use.
- The programmer implements a procedure that accepts the tool call request (found in the LLM's response when you prompt it) and prepares the actual function call.
- A backend service executes the tool.

A minimal sketch of this loop follows the post.

Function Calling takes place entirely within our stack:

- We host the tool.
- We implement the logic that determines which tool to invoke and with which parameters.
- We execute it.

So Function Calling requires us to wire everything manually. MCP simplifies this! Instead of hard-wiring tools, MCP:

- Standardizes how tools are defined, hosted, and exposed.
- Makes it easy to discover tools, understand their schemas, and use them.
- Requires approval before invoking them.
- Decouples implementation from consumption.

For instance, whenever you integrate an MCP server, you never write a line of Python code to integrate its tools. You just connect the MCP server, and everything beyond that follows a standard protocol handled by the MCP client and the LLM:

- They identify the MCP tool.
- They prepare the input arguments.
- They invoke the tool.
- They use the tool's output to generate a response.

Everything happens through a standard (but abstracted) protocol.

So here's the key point: MCP and Function Calling are not in conflict. They're two sides of the same workflow.

- Function Calling helps an LLM decide what it wants to do.
- MCP ensures that tools are reliably available, discoverable, and executable, without you custom-integrating everything.

For example, an agent might say, "I need to search the web," using function calling. That request can be routed through MCP to select from the available web search tools, invoke the correct one, and return the result. Check the workflow in the diagram below.

In this setup, to build a local MCP client, I used mcp-use because it lets us connect any LLM to MCP servers and build private MCP clients, unlike Claude/Cursor:

- Compatible with Ollama & LangChain
- Streams Agent output asynchronously
- Built-in debugging mode, and more

A short mcp-use sketch also follows the post.

Find the mcp-use GitHub repo in the comments!

____

Find me → Avi Chawla

Every day, I share tutorials and insights on DS, ML, LLMs, and RAGs.
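Here is a minimal sketch of the Function Calling loop described above, assuming the OpenAI Python SDK; the `get_weather` tool and the model name are illustrative stand-ins, not the custom tools from the post.

```python
# Minimal Function Calling sketch (assumes the OpenAI Python SDK).
# `get_weather` is a hypothetical tool used only for illustration.
import json
from openai import OpenAI

client = OpenAI()

def get_weather(city: str) -> str:
    # We host and execute the tool ourselves; here it's just a stub.
    return f"Sunny, 22°C in {city}"

# We describe the tool's schema so the LLM can decide to call it.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]
response = client.chat.completions.create(
    model="gpt-4o", messages=messages, tools=tools
)
msg = response.choices[0].message

# The LLM only *requests* the call; our code parses the request,
# executes the function, and feeds the result back.
if msg.tool_calls:
    call = msg.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = get_weather(**args)  # our backend runs the tool
    messages.append(msg)
    messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    final = client.chat.completions.create(
        model="gpt-4o", messages=messages, tools=tools
    )
    print(final.choices[0].message.content)
```

Every step here (schema, dispatch, execution, response plumbing) is wired by hand, which is exactly the overhead MCP removes.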
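And a minimal sketch of the local MCP client, using mcp-use's documented `MCPClient` / `MCPAgent` API. The Browserbase server package name and environment variable names below are assumptions, so check Browserbase's docs before running.

```python
# Minimal local MCP client sketch with mcp-use.
# The Browserbase package name and env vars are assumptions; verify them.
import asyncio
import os

from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient

async def main():
    config = {
        "mcpServers": {
            "browserbase": {
                "command": "npx",
                "args": ["@browserbasehq/mcp-server-browserbase"],  # assumed package name
                "env": {
                    "BROWSERBASE_API_KEY": os.environ["BROWSERBASE_API_KEY"],
                    "BROWSERBASE_PROJECT_ID": os.environ["BROWSERBASE_PROJECT_ID"],
                },
            }
        }
    }
    client = MCPClient.from_dict(config)

    # Any LangChain-compatible LLM works here (e.g., Ollama instead of OpenAI).
    llm = ChatOpenAI(model="gpt-4o")

    agent = MCPAgent(llm=llm, client=client, max_steps=30)
    # The agent discovers the server's tools, picks one via function calling,
    # and routes the invocation through the MCP protocol.
    result = await agent.run("Open example.com and summarize the page.")
    print(result)

asyncio.run(main())
```

Note what's missing: no tool-specific Python. The agent discovers, selects, and invokes the server's tools entirely over the protocol.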
Informative post and creative illustration explaining Function calling & MCP for LLMs! Avi Chawla
This makes building agents feel less like duct-taping APIs and more like composing features.
Great share, Avi. The breakdown of function calling vs MCP really clarifies how the two complement each other to make agent workflows more reliable and standardized.
Avi Chawla Super helpful explanation 💯
Great breakdown. Function calling gives LLMs the “decision” layer, while MCP provides the infrastructure layer to make those decisions reliable and standardized. Together, they make agents far more practical to build and scale.
This is an excellent, clear explanation of how Function Calling and MCP complement each other in modern LLM workflows. Thanks for breaking it down so effectively, Avi Chawla.
Yes, MCP makes it so much more comfortable, Avi Chawla.
Function Calling is like choosing the dish off the menu 🍜 — the LLM knows what it wants. MCP is the waiter who knows every kitchen in town 🥡 — it standardizes the order, finds the right chef, and brings back the meal without you wiring the kitchen yourself. That’s the magic: Function Calling = decision, MCP = execution. 🔑✨ #AI #LLM #MCP #FunctionCalling #AIagents
Hi Avi Chawla, can you please provide resources for the text-to-SQL project?
Repo: https://github.com/mcp-use/mcp-use (7k+ stars) (don't forget to star ⭐)