In this article, you will discover how to build a minimal Model Context Protocol (MCP) server in Python in just a few lines. This quick-start guide shows you how to use FastMCP, the high-level interface of the official Python SDK, to expose simple functions (tools) to AI clients, enabling seamless integration of Python code, data access, and automation into your AI workflows. With a tiny server.py, you can register tools, start the server, and make your local environment accessible to LLM-powered agents, all without complicated setup or external infrastructure. Perfect for developers who want to add real tooling to LLMs quickly, securely, and reliably.
Understanding the fundamentals of MCP Server
MCP is rapidly becoming one of the most powerful ways to connect AI models with real-world tools, and learning how to build a basic MCP server in Python is the perfect place to start. In this tutorial, we’ll walk through a minimal example you can run locally, giving you a clear foundation for building your own custom MCP tools.
What is MCP?
MCP (Model Context Protocol) is an open standard for connecting AI applications to external systems. It establishes a live, bidirectional connection through which servers can supply structured data, prompts, resources, and real-time context tailored specifically for AI models. Conventional approaches like REST APIs work, but they weren’t designed around how AI models discover and call tools.
There are two sides to working with MCP:
1. MCP Server: a Python script that exposes simple tools
2. Client: connects to the server and uses those tools
In this section, we will focus on the first part: creating an MCP server locally, without any additional network connections.
Part 1 – Creating an MCP Server: a Python script that exposes simple tools inside the MCP
We will first be focusing on understanding how to create an MCP Server locally on your device. In a later segment, we will be using a client such as ChatGPT or Google Gemini that connects to the server and uses those tools exposed. Eventually we will use these clients to build some useful applications.
The explanation for each section, as well as additional information will follow the code blocks.
Required Knowledge & Setup
Before building a basic MCP server in Python, make sure you’re comfortable with the following:
- Basic Python syntax and scripting
- VS Code (or any Python-friendly IDE) installed
- Creating and activating a virtual environment for Python projects
- Installing Python libraries inside a project environment
Follow the steps below to start your first MCP Server
1. Open VS Code (or another editor) and create a file server.py inside a directory named MCP
2. Create and activate a virtual environment using the commands below:
python3 -m venv mcp-env
source mcp-env/bin/activate
You should see the terminal prompt update to show the virtual environment name in round brackets, such as:
(mcp-env) techeasy@MacBook-Pro-2 MCP %
Why you may need virtual environment while using MCP locally
A virtual environment keeps all MCP-related packages sandboxed inside this project, preventing version conflicts and ensuring your setup is reproducible. This step simply activates your isolated Python environment. Once activated, any packages you install, including the mcp library will be contained inside this environment, keeping your system Python untouched.
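If you want to confirm the environment is actually active, a quick Python check works: inside a venv, sys.prefix points at the environment directory rather than the system Python.

```python
# Quick sanity check that you are inside the virtual environment:
# in a venv, sys.prefix differs from sys.base_prefix.
import sys

print("prefix:", sys.prefix)            # points inside mcp-env when the venv is active
print("base prefix:", sys.base_prefix)  # the system Python the venv was created from
print("inside a venv:", sys.prefix != sys.base_prefix)
```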
Note: On many Macs, installing mcp globally fails because system Python is protected or tied to macOS packages. A virtual environment avoids these restrictions and gives you a clean, writable Python space where pip install mcp actually works. If you try installing it directly, you may encounter an error such as:
error: externally-managed-environment
You cannot install mcp using Homebrew; you install it with pip. Homebrew can install Python or uv, but not the MCP library itself.
3. Install the mcp package using pip
(mcp-env) techeasy@MacBook-Pro-2 MCP % pip install mcp
This should complete the installation of mcp and all its dependencies.
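If you want to verify the install without running the server, you can check that the package is importable from inside the venv:

```python
# Optional sanity check that the mcp package is importable in this
# environment (run inside the activated venv after `pip install mcp`).
import importlib.util

spec = importlib.util.find_spec("mcp")
print("mcp installed:", spec is not None)
```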
Difference between mcp and mcp[cli]
mcp[cli] installs the core MCP library plus the official command-line tooling, debug utilities, and any extra dependencies those CLI features need.
Note: package[extras] is a standard Python practice that allows packages to also install optional “extra” dependencies.
For this example, we are sticking to the bare-bones basics of MCP; we will look at the command-line interface (CLI) tooling in a future post.
4. Now paste the code below inside the file server.py
from mcp.server.fastmcp import FastMCP

server = FastMCP("hello-server")

print("MCP server starting…")
print("server object:", server)
print("Waiting for MCP client...")

@server.tool()
def say_hello(name: str) -> str:
    return f"Hello, {name}!"

@server.tool()
def ping() -> str:
    print("PING tool was called!")
    return "PONG"

if __name__ == "__main__":
    server.run()
Before we run the code, let’s break it down:
from mcp.server.fastmcp import FastMCP
The core package that we are importing here is mcp. It is the official Python SDK for Model Context Protocol servers and clients.
Difference between MCP and FastMCP
The key difference between MCP and FastMCP is that MCP provides the low-level foundation of the protocol, handling JSON-RPC messaging, stdio transport, and server primitives, while leaving you responsible for manually registering tools, defining schemas, and wiring requests. It’s powerful but verbose, essentially the bare metal of the system. FastMCP, on the other hand, is the high-level wrapper built within the same package (mcp.server.fastmcp) that offers a far more developer-friendly interface, including decorators like @tool and @resource, automatic JSON schema generation from type hints, argument parsing, simplified server startup, and safe defaults. In short: FastMCP gives you batteries-included convenience, while MCP gives you the raw engine.
print("server object:", server)
The print() statement is added simply to demonstrate the MCP server object created.
Where is the server logically created?
It is created on your local machine, as a regular Python process, exactly like running any Python script, and it lives only as long as you keep the terminal open. There is no cloud deployment, no daemon, no background service, no Docker container. You’re not creating a “server” in the networking sense. You’re creating an MCP interface layer inside your Python script.
By default:
- communicates over stdio (standard input/output)
- does not open a port or listen on HTTP
- is not a network server
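The stdio idea is worth making concrete. The toy below is not MCP’s real JSON-RPC framing, just an illustration of the pattern: a “server” is any process that reads requests from one stream and writes responses to another, no sockets or ports involved.

```python
# Toy illustration of stdio-style transport (NOT MCP's real protocol):
# the server reads one JSON request per line and writes one JSON
# response per line.
import io
import json

def handle(request: dict) -> dict:
    # Dispatch on a "method" field, loosely mimicking an RPC request.
    if request.get("method") == "ping":
        return {"result": "PONG"}
    return {"error": "unknown method"}

def serve(stdin, stdout) -> None:
    for line in stdin:
        stdout.write(json.dumps(handle(json.loads(line))) + "\n")

# Simulate a client by feeding requests through in-memory streams:
fake_in = io.StringIO('{"method": "ping"}\n')
fake_out = io.StringIO()
serve(fake_in, fake_out)
print(fake_out.getvalue())  # {"result": "PONG"}
```

A real MCP server does the same thing with sys.stdin and sys.stdout, which is why no port is ever opened.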
@server.tool()
def say_hello(name: str) -> str:
    return f"Hello, {name}!"
@server.tool() is one of the core decorators in MCP. To be precise, it is a FastMCP decorator used to expose MCP tools and resources.
How are decorators used in MCP?
Understanding its usage requires a broader understanding of decorators in Python. In simple terms, a decorator takes the function defined below it, wraps it inside another function, and returns the modified version.
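As a quick illustration of the pattern (unrelated to MCP), here is a minimal decorator that wraps a function and changes its result:

```python
# A minimal decorator: takes the function defined below it, wraps it,
# and returns the wrapped version in its place.
def shout(fn):
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name: str) -> str:
    return f"hello, {name}"

print(greet("world"))  # HELLO, WORLD
```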
For the purpose of demonstration, a manual version of the say_hello() function above, without a decorator, might look like the sketch below. (Note: register_tool() and its arguments are illustrative pseudocode, not a real method of the mcp package; they only show the work the decorator saves you.)

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo")

def say_hello(name: str) -> str:
    return f"Hello, {name}!"

# MANUAL REGISTRATION (illustrative only):
mcp.register_tool(
    name="say_hello",
    handler=say_hello,
    args_schema={
        "type": "object",
        "properties": {
            "name": {"type": "string"}
        },
        "required": ["name"]
    },
    returns_schema={"type": "string"}
)
The @server.tool() decorator knows how to set up the tool defined beneath it, and registers say_hello() as an MCP tool that clients like Google Gemini or ChatGPT can discover and call later.
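To make the registration mechanics less magical, here is a toy sketch of the pattern (not FastMCP’s real internals): a decorator that records the function in a registry and derives a JSON-schema-like description from its type hints, much as FastMCP does automatically.

```python
# Toy sketch of decorator-based tool registration (NOT the real
# FastMCP internals): type hints are turned into a schema and the
# function is stored in a registry, unchanged.
import inspect
from typing import Callable, Dict

TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}

class ToyMCP:
    def __init__(self, name: str):
        self.name = name
        self.tools: Dict[str, dict] = {}

    def tool(self) -> Callable:
        def decorator(fn: Callable) -> Callable:
            params = inspect.signature(fn).parameters
            props = {p: {"type": TYPE_MAP.get(h.annotation, "string")}
                     for p, h in params.items()}
            self.tools[fn.__name__] = {
                "handler": fn,
                "schema": {"type": "object", "properties": props,
                           "required": list(props)},
            }
            return fn  # the original function is returned unchanged
        return decorator

toy_server = ToyMCP("demo")

@toy_server.tool()
def say_hello(name: str) -> str:
    return f"Hello, {name}!"

print(toy_server.tools["say_hello"]["schema"])
```

This is exactly why type hints matter in MCP tools: they are the raw material for the schema the client sees.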
Other widely used FastMCP decorators include:
- @mcp.tool(): expose functions as tools
- @mcp.resource(): expose retrievable resources
- @mcp.prompt(): expose reusable prompt templates
Finally:
if __name__ == "__main__":
    server.run()
server.run() is the method that actually starts your FastMCP server. It opens the communication channel (usually stdio), listens for incoming MCP requests, and keeps the process running so clients like ChatGPT or Gemini can call your tools. It stays running as long as the terminal window is open, or until you interrupt the process. (One caveat: on the stdio transport, stdout is shared with protocol messages, so once a client is connected, real servers should log to stderr rather than print().)
5. Run the code
You will see an output such as:
(mcp-env) techeasy@MacBook-Pro-2 MCP % python server.py
MCP server starting…
server object: <mcp.server.fastmcp.server.FastMCP object at 0x10559cec0>
Waiting for MCP client...
Not much, huh?
The output is exactly what a healthy MCP server looks like, and also exactly why it feels unsatisfying. It’s basically telling you “I’m alive and waiting for someone to talk to me.”
No one is actually calling your tool yet, so you don’t get that nice “it did something” closure... yet.
To make the output a little more interesting, modify the code as below:
import logging
logging.basicConfig(level=logging.DEBUG)

from mcp.server.fastmcp import FastMCP

server = FastMCP("hello-server")

print("MCP server starting…")
print("server object:", server)
print("Waiting for MCP client...")

@server.tool()
def say_hello(name: str) -> str:
    print(f"[DEBUG] say_hello called with: {name}")
    return f"Hello, {name}!"

@server.tool()
def ping() -> str:
    print("PING tool was called!")
    return "PONG"

if __name__ == "__main__":
    server.run()
The output will be as below:
(mcp-env) techeasy@MacBook-Pro-2 MCP % python server.py
DEBUG:mcp.server.lowlevel.server:Initializing server 'hello-server'
DEBUG:mcp.server.lowlevel.server:Registering handler for ListToolsRequest
DEBUG:mcp.server.lowlevel.server:Registering handler for CallToolRequest
DEBUG:mcp.server.lowlevel.server:Registering handler for ListResourcesRequest
DEBUG:mcp.server.lowlevel.server:Registering handler for ReadResourceRequest
DEBUG:mcp.server.lowlevel.server:Registering handler for ListPromptsRequest
DEBUG:mcp.server.lowlevel.server:Registering handler for GetPromptRequest
DEBUG:mcp.server.lowlevel.server:Registering handler for ListResourceTemplatesRequest
MCP server starting…
server object: <mcp.server.fastmcp.server.FastMCP object at 0x102e65010>
Waiting for MCP client...
DEBUG:asyncio:Using selector: KqueueSelector
What is a server? And how does an MCP server operate locally without any network connection?
Servers are conventionally synonymous with network servers because of the wide-scale usage of the internet today. While that is true in the majority of cases, a server does not necessarily require a network. A server is any process that serves requests to a client, with or without a network. Whether it uses ports, pipes, stdio, or shared memory, it can still be called a server. Language servers, for example, the processes behind editor features like autocomplete and diagnostics, talk to the editor over stdio or pipes rather than network ports. Browsers like Chrome split work across local service processes, such as the audio, extension, and network services, that behave like IPC pipelines instead of network servers. Even VS Code extensions run in a separate process: they serve requests such as completion lists and diagnostics to the editor, which acts as the client.
Why use MCP at all?
It is fair to ask: why bother with MCP at all if you can just run Python locally or ask ChatGPT to execute code?
MCP is helpful because it allows AI clients to interact with your local tools, data, and processes in a safe, organized, and reusable manner. That is something that cannot be accomplished by simply running Python locally or manually uploading files. By explicitly registering tools with typed inputs and outputs, MCP reduces hallucination and gives you precise control over what the AI may access. Without exposing your system to risky commands or complete shell access, it enables AI clients like ChatGPT Desktop, Cursor, or Claude to call your Python functions efficiently.
In contrast to one-time uploads or ad hoc scripts, MCP can also establish a persistent interface so that the AI can read files, execute scripts, or query local data across sessions without ever touching the underlying file system beyond what you expose. MCP standardizes these interactions across many AI clients, transforming your local Python code into a discoverable, callable toolbox that any MCP-compatible AI can use securely, reliably, and without new glue code every time.

