Model Context Protocol (MCP) is an open standard developed by Anthropic that defines how AI models communicate with external tools and data sources. If you are new to MCP, CODE Magazine has already published a previous article, "MCP: Building the Bridge Between AI and the Real World," that covers the fundamentals in detail.
In this article, I will take a more hands-on approach, focusing on practical code samples and walking you through how to write your own MCP server from scratch. As a bonus, I will also show you how to build an MCP client that behaves just like Claude Desktop, giving you a deeper understanding of how the two sides of the protocol work together.
MCP supports two main transport mechanisms for communication between client and server:
stdio (Standard I/O) is the simpler of the two. The client launches the MCP server as a local subprocess and communicates with it by writing to its stdin and reading from its stdout. This approach works well for local development and testing, as everything runs on the same machine with no network configuration needed. The examples in this article all use stdio transport.
SSE (Server-Sent Events) is designed for remote or production deployments. The MCP server runs as a standalone HTTP server, and the client connects to it over a network using SSE, a standard web protocol for streaming data from server to client. This allows the MCP server to be hosted separately from the client, making it suitable for cloud deployments where multiple clients may need to share the same server.
In short, stdio is ideal for local, tightly coupled setups, while SSE is better suited for distributed, networked environments where the server and client run independently of each other.
Using Ready-Made MCP Servers
Before diving into ready-made MCP servers, you first need an MCP client. An MCP client is a program or interface that connects to an MCP server to send commands, receive responses, and manage resources. It acts as the bridge between the user and the server, enabling you to submit tasks to the server, receive results or status updates, configure or monitor server settings, and authenticate and maintain a session.
A good example of an MCP client is Claude Desktop, which provides a user-friendly interface for connecting to MCP servers, submitting tasks, and receiving responses. In this article, I will show you how to build your own MCP client from scratch.
You can download Claude Desktop from https://claude.com/download. Once installed, launch the application and you will be greeted with the interface shown in Figure 1.

To add an MCP server to Claude Desktop, you need to modify a configuration file. The easiest way to locate it is to open Claude Desktop, click on your login name, then navigate to Settings | Developer, and finally click the Edit Config button. This is shown in Figure 2 and Figure 3.


Clicking the Edit Config button opens Finder on a Mac, or Windows Explorer on Windows, revealing the location of the configuration file named claude_desktop_config.json (see Figure 4).

We will modify this configuration file to add MCP servers to Claude Desktop, which we will walk through in the next section.
Filesystem MCP Server
Our first MCP server is one written by Anthropic—the Filesystem MCP Server (@modelcontextprotocol/server-filesystem). It runs via npx, so no manual installation is required; it is automatically downloaded and executed on demand.
To add this MCP server to Claude Desktop, open claude_desktop_config.json and modify it as follows:
{
"mcpServers": {
"filesystem": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-filesystem",
"/Users/weimenglee/Desktop/SharedFolder",
"/Users/weimenglee/Desktop/Report"
]
}
}
}
The -y flag ensures the process runs without any user confirmation prompts. In this configuration, the server is granted access to two specific folders on the local machine—/Users/weimenglee/Desktop/SharedFolder and /Users/weimenglee/Desktop/Report—meaning Claude can only read, write, list, search, and manage files within these two directories and cannot access anything outside of them. This makes it a safe and sandboxed way to give Claude file system capabilities on your local machine.
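The sandboxing boils down to a path check. As a rough sketch of the idea (not the actual server-filesystem implementation), a server can resolve each requested path and verify that it falls under one of the allowed roots:

```python
from pathlib import Path

# The two roots granted in the configuration above
ALLOWED_DIRS = [
    Path("/Users/weimenglee/Desktop/SharedFolder"),
    Path("/Users/weimenglee/Desktop/Report"),
]

def is_allowed(requested: str) -> bool:
    """Return True only if the resolved path sits inside an allowed root."""
    target = Path(requested).resolve()  # resolve() collapses any '..' tricks
    return any(
        target == root or root in target.parents
        for root in ALLOWED_DIRS
    )
```

Resolving before checking is the important part: a request for SharedFolder/../secret.txt collapses to a path outside the allowed roots and is rejected.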
Note that for the filesystem MCP server to work, you need to have Node.js on your computer. You'll want Node.js v20 or higher for best compatibility. If you don't have it installed, you can grab it from nodejs.org.
To use the MCP server, you need to restart Claude Desktop. On a Mac, you must quit the app entirely—simply closing the window is not sufficient. On Windows, use Task Manager to end the “Claude Desktop” task.
Note that if you made a mistake while editing claude_desktop_config.json, Claude Desktop will flag the error when it restarts, as shown in Figure 5.

The easiest way to diagnose the error is to click the Open Developer settings button, then click View Logs to see what went wrong (see Figure 6).

This opens the log file containing entries recorded by the MCP server. Scroll to the bottom to find the details of the error.
If your configuration is correct, the MCP server should load successfully. After a moment, click the + icon, then select Connectors as shown in Figure 7, and you should see the Filesystem MCP server listed there—also referred to as a connector in Claude Desktop.

Clicking Manage Connectors from the previous menu allows you to view and configure the permissions available for the Filesystem MCP server, as shown in Figure 8.

To verify that the Filesystem MCP server is working, try asking Claude: "What are the files in the SharedFolder?"
As shown in Figure 9, Claude will consult the Filesystem MCP server to determine which tool to call in order to answer the question. It will then ask for your permission before invoking the tool. You can choose Always Allow to skip this prompt in the future, or grant permission just this once.

Figure 10 shows the Filesystem MCP server's response, listing all the files found in the SharedFolder directory.

Besides reading from the SharedFolder directory, you can also get the filesystem MCP server to write a file to the directory. Let's ask the following question:
Can you help me save the following to a file named “To Dos”:
1. Work on the MCP article for CODE magazine
2. Reply to outstanding emails
3. Work on proposal for new book
As shown in Figure 11, the Filesystem MCP server will prompt you to confirm the location where the file should be saved.

Once you confirm the destination folder, the file will be saved to that location (see Figure 12).

Memory MCP Server
Next, let's look at another useful MCP server: the Memory MCP server. This server allows Claude to store and retrieve information across conversations, giving it the ability to remember useful details over time. By connecting Claude to the Memory MCP server, you can persist information beyond a single chat session.
To enable it, add the memory entry (with its command and args) from the following snippet to your claude_desktop_config.json file:
{
"mcpServers": {
"filesystem": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-memory"
]
]
},
"memory": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol\/server-memory"
]
}
}
}
As usual, perform a complete restart of Claude Desktop. Once restarted, you can start using the Memory MCP server to store facts across sessions. Figure 13 shows an example where Claude is asked to remember a personal preference—in this case, a love for durians!

You can verify this by starting a new chat and asking: "What do you remember about me?" As shown in Figure 14, Claude still remembers the stored preference—and it will continue to do so even after starting a fresh conversation.

With the memory MCP server, Claude can build up a persistent knowledge graph about you over time. Here are some practical ways this pays off:
- Personalization: Claude remembers your preferences, so you don't have to repeat yourself. Things like “I prefer concise answers,” “I use Python 3.11,” or “I'm on macOS” get remembered automatically.
- Project context: If you're working on a long project over many sessions, Claude can remember the project name, architecture decisions, tech stack, and where you left off—without you having to re-explain everything each time.
- Work style: Claude can remember things like “Wei Meng prefers code examples over explanations,” or “always use type hints in Python code,” and apply those preferences consistently.
- Personal facts: Your timezone, your team members' names, your company's coding standards, recurring tasks you do—all of this can be stored and referenced naturally.
- Incremental learning: The knowledge graph grows over time. The more you interact, the more context Claude has, making it increasingly useful the longer you use it.
In short, the Memory MCP server transforms Claude from a stateless tool you have to brief every single session into something closer to a persistent assistant that actually knows you and your work.
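Under the hood, the Memory server persists facts to disk between sessions. The real server-memory implementation stores a richer knowledge graph of entities, relations, and observations; the sketch below (with a hypothetical memory.jsonl store) shows only the basic idea of appending facts and reading them back later:

```python
import json
from pathlib import Path

# Hypothetical store location; the real server uses its own file format
MEMORY_FILE = Path("memory.jsonl")

def remember(entity: str, relation: str, value: str) -> None:
    """Append one (entity, relation, value) fact as a JSON line."""
    fact = {"entity": entity, "relation": relation, "value": value}
    with MEMORY_FILE.open("a") as f:
        f.write(json.dumps(fact) + "\n")

def recall(entity: str) -> list[dict]:
    """Return every stored fact about an entity, across all sessions."""
    if not MEMORY_FILE.exists():
        return []
    with MEMORY_FILE.open() as f:
        return [
            fact
            for line in f
            if (fact := json.loads(line))["entity"] == entity
        ]
```

Because the facts live in a file rather than in the conversation, a brand-new chat session can still call recall() and pick up where the last one left off.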
Creating Your Own MCP Servers
Now that you have seen how to add ready-made MCP servers, it is a good time to learn how to write your own. In this section, we will build two MCP servers using Python:
- A Weather MCP Server that retrieves current weather information for a given city
- A Holiday MCP Server that looks up public holidays for a given country
If you do not already have Python installed, I recommend downloading Anaconda from https://www.anaconda.com/download/success, which is what we'll be using throughout this section.
We also need to install the required Python packages for building MCP servers. Run the following command to get started:
pip install "mcp[cli]" requests
The first package, mcp[cli], is the MCP Python package together with its CLI (command line interface) extras: mcp is the core library used to build or interact with Model Context Protocol (MCP) servers and clients, and [cli] pulls in the additional dependencies required for the command-line tools the package provides. The second package, requests, is a widely used Python library for making HTTP requests, allowing your Python program to interact easily with REST APIs.
Weather MCP Server
In this section, we will build our first MCP server—a Weather MCP server. This server will expose a single tool that accepts a city name and returns current weather information such as temperature, humidity, and wind speed by querying the OpenWeatherMap API. To get started, you will need a free API key from OpenWeatherMap, which you can obtain by signing up at https://openweathermap.org.
Listing 1 contains the code for the Weather MCP server, weather.py.
Listing 1: Code for the Weather MCP Server - weather.py
import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

WEATHER_API_KEY = "xxxxxxxxxxxx"

@mcp.tool()
def get_current_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city
            (e.g. 'Singapore', 'Tokyo')
    """
    base = (
        "http://api.openweathermap.org"
        "/data/2.5/weather"
    )
    url = (
        f"{base}?q={city}"
        f"&appid={WEATHER_API_KEY}"
        f"&units=metric"
    )
    response = requests.get(url)
    if response.status_code == 200:
        data = response.json()
        weather = data["weather"][0]["description"]
        temperature = data["main"]["temp"]
        humidity = data["main"]["humidity"]
        wind_speed = data["wind"]["speed"]
        return (
            f"The weather in {city} "
            f"is {weather} "
            f"with a temperature of "
            f"{temperature}°C, "
            f"humidity of {humidity}%, "
            f"and wind speed of "
            f"{wind_speed} m/s."
        )
    else:
        # Return full error details for debugging
        return (
            f"Error {response.status_code}: "
            f"{response.json()}"
        )

if __name__ == "__main__":
    mcp.run()
The code starts by importing the required libraries: requests for sending HTTP requests to APIs and FastMCP to quickly create an MCP server.
An MCP server named “weather” is created using FastMCP("weather"). The WEATHER_API_KEY variable stores the API key needed to authenticate requests to the OpenWeatherMap API.
A tool is defined using the @mcp.tool() decorator, which registers the function get_current_weather(city: str) -> str as callable by an AI client. The function accepts a city name as input and constructs a full API URL including the city, API key, and metric units. It sends an HTTP GET request using requests.get(url) and checks if the request was successful (status_code == 200). If successful, the JSON response is parsed into a Python dictionary, and key weather information—such as description, temperature, humidity, and wind speed—is extracted.
The function then returns a human-readable summary of the current weather. If the request fails, the function returns the status code and the JSON error for debugging. Finally, the server is started with mcp.run() inside the if __name__ == "__main__" block, making the weather tool available to MCP clients like Claude Desktop.
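To make the extraction step concrete, here is roughly the shape of the JSON the /data/2.5/weather endpoint returns (heavily trimmed, with illustrative values) together with the same field accesses the tool performs on it:

```python
# Trimmed, illustrative sample of an OpenWeatherMap response
data = {
    "weather": [{"description": "light rain"}],
    "main": {"temp": 27.3, "humidity": 88},
    "wind": {"speed": 3.6},
}

# The same extraction logic used in Listing 1
weather = data["weather"][0]["description"]
temperature = data["main"]["temp"]
humidity = data["main"]["humidity"]
wind_speed = data["wind"]["speed"]

summary = (
    f"The weather in Singapore is {weather} "
    f"with a temperature of {temperature}°C, "
    f"humidity of {humidity}%, "
    f"and wind speed of {wind_speed} m/s."
)
```

Note that "weather" is a list in the API response, which is why the code indexes [0] before reading the description.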
To run the Weather MCP server, you need to know the path to your Python interpreter. You can find this by running the following command in your terminal:
which python3
This command prints the full path to your Python interpreter, which will look something like /opt/anaconda3/bin/python3. (On Windows, run where python instead.)
With that, you are ready to test the Weather MCP server. Add the weather entry (with its command and args) to your claude_desktop_config.json file as shown at the bottom of this code snippet, assuming that weather.py is saved on your desktop:
{
"mcpServers": {
"filesystem": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-filesystem",
"/Users/weimenglee/Desktop/SharedFolder",
"/Users/weimenglee/Desktop/Reports"
]
},
"memory": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-memory"
]
},
"weather": {
"command": "/opt/anaconda3/bin/python3",
"args": [
"/Users/weimenglee/Desktop/weather.py"
]
}
}
}
After restarting Claude Desktop, the Weather MCP server should now appear in your list of connectors, as shown in Figure 15.

You can now ask Claude: "Using my weather MCP server, what is the weather for London today?" Figure 16 shows the result returned by the Weather MCP server.

Holidays MCP Server
Next, we will build a Holiday MCP server that allows users to query public holidays for any country. This server exposes a single tool, fetch_public_holidays(), that accepts a two-letter country code and a year and retrieves the corresponding list of public holidays by querying the Nager.Date API—a free, no-key-required public holidays API. The code is shown in Listing 2.
Listing 2: The source for the holidays MCP server – holidays.py
import requests
from typing import Union, List
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("holidays")

HOLIDAYS_API = "https://date.nager.at/api/v3"

@mcp.tool()
def fetch_public_holidays(
    country: str,
    year: int
) -> Union[List[str], str]:
    """Fetch public holidays for a country.

    Args:
        country: 2-letter country code
            (e.g. 'US', 'SG', 'GB')
        year: The year (e.g. 2025)
    """
    url = (
        f"{HOLIDAYS_API}"
        f"/PublicHolidays/{year}/{country}"
    )
    try:
        response = requests.get(url, timeout=5)
        response.raise_for_status()
        holidays = response.json()
        if not holidays:
            return (
                f"No holidays found for "
                f"{country} in {year}."
            )
        return [
            f"- {h['date']}: {h['localName']}"
            for h in holidays
        ]
    except requests.exceptions.HTTPError:
        return (
            f"Invalid country code '{country}'"
            f" or year '{year}'."
        )
    except requests.RequestException as e:
        return f"Error fetching holidays: {e}"

if __name__ == "__main__":
    mcp.run()
This Python code creates an MCP server named "holidays" that provides a tool to fetch public holidays for a given country and year. It uses the requests library to call the Nager.Date API (https://date.nager.at/api/v3) and returns either a list of formatted holiday dates and names or an error message. The fetch_public_holidays function is registered as an MCP tool using the @mcp.tool() decorator and takes two parameters: a 2-letter country code (e.g., "US", "SG") and a year (e.g., 2025). It constructs the API URL, sends a GET request with a timeout, and parses the JSON response. If no holidays are found, it returns a message stating so; if the country code or year is invalid, it returns an error message; and other request errors are also handled gracefully. Finally, the server is started with mcp.run(), making the tool available to MCP clients.
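For reference, each element the Nager.Date endpoint returns is a JSON object whose date and localName fields the tool formats into one line per holiday. The sample values below are illustrative, but the field names match those the code in Listing 2 relies on:

```python
# Illustrative sample of two entries from a Nager.Date response
holidays = [
    {"date": "2026-01-01", "localName": "New Year's Day"},
    {"date": "2026-07-04", "localName": "Independence Day"},
]

# Same formatting used by fetch_public_holidays()
lines = [f"- {h['date']}: {h['localName']}" for h in holidays]
```

Returning a list of short strings (rather than the raw JSON) keeps the tool output easy for the model to quote back to the user verbatim.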
As before, add the holidays entry (with its command and args) to the end of your claude_desktop_config.json file, as shown in this code snippet:
{
"mcpServers": {
"filesystem": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-filesystem",
"/Users/weimenglee/Desktop/SharedFolder",
"/Users/weimenglee/Desktop/Reports"
]
},
"memory": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-memory"
]
},
"weather": {
"command": "/opt/anaconda3/bin/python3",
"args": [
"/Users/weimenglee/Desktop/weather.py"
]
},
"holidays": {
"command": "/opt/anaconda3/bin/python3",
"args": [
"/Users/weimenglee/Desktop/holidays.py"
]
}
}
}
After restarting Claude Desktop, try asking: "Using my holidays MCP server, find me the public holidays in the US for 2027." Figure 17 shows the response returned by the Holidays MCP server.

Creating an MCP Client
So far, we have been using Claude Desktop as our MCP client. Now it is time to take things a step further and write our own MCP client from scratch. To do this, we will first create a new MCP server that bundles both tools—weather and holidays—into a single server called utilities, and then write a client to connect to it and call its tools. Listing 3 shows the source code for the utilities MCP server.
Listing 3. The MCP server that contains two tools – mcpserver.py
import requests
from typing import Union, List
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("utilities")

# Replace with your own OpenWeatherMap API key
WEATHER_API_KEY = "xxxxxxxxxxxx"
WEATHER_API = (
    "http://api.openweathermap.org"
    "/data/2.5/weather"
)
HOLIDAYS_API = "https://date.nager.at/api/v3"

@mcp.tool()
def get_current_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city
            (e.g. 'Singapore', 'Tokyo')
    """
    url = (
        f"{WEATHER_API}?q={city}"
        f"&appid={WEATHER_API_KEY}"
        f"&units=metric"
    )
    response = requests.get(url)
    if response.status_code == 200:
        data = response.json()
        weather = data["weather"][0]["description"]
        temperature = data["main"]["temp"]
        humidity = data["main"]["humidity"]
        wind_speed = data["wind"]["speed"]
        return (
            f"The weather in {city} "
            f"is {weather} "
            f"with a temperature of "
            f"{temperature}°C, "
            f"humidity of {humidity}%, "
            f"and wind speed of "
            f"{wind_speed} m/s."
        )
    else:
        return (
            f"Error {response.status_code}: "
            f"{response.json()}"
        )

@mcp.tool()
def fetch_public_holidays(
    country: str,
    year: int
) -> Union[List[str], str]:
    """Fetch public holidays for a country.

    Args:
        country: 2-letter country code
            (e.g. 'US', 'SG', 'GB')
        year: The year (e.g. 2025)
    """
    url = (
        f"{HOLIDAYS_API}"
        f"/PublicHolidays/{year}/{country}"
    )
    try:
        response = requests.get(url, timeout=5)
        response.raise_for_status()
        holidays = response.json()
        if not holidays:
            return (
                f"No holidays found for "
                f"{country} in {year}."
            )
        return [
            f"- {h['date']}: {h['localName']}"
            for h in holidays
        ]
    except requests.exceptions.HTTPError:
        return (
            f"Invalid country code '{country}'"
            f" or year '{year}'."
        )
    except requests.RequestException as e:
        return f"Error fetching holidays: {e}"

if __name__ == "__main__":
    mcp.run()
The utilities MCP server exposes two tools that AI models (like Claude) can call:
- get_current_weather(city) queries the OpenWeatherMap API to return current conditions for a given city, including temperature, humidity, wind speed, and a weather description.
- fetch_public_holidays(country, year) queries the Nager.Date API to return a list of public holidays for a given country code and year.
When run directly, the server starts and makes these two tools available for use by any MCP-compatible client.
With the utilities MCP server in place, we can now write a client to connect to it. The client will initialize a session with the server, list its available tools, and test each one by making a direct tool call. Listing 4 shows the complete code for the MCP client.
Listing 4. The MCP client – mcpclient.py
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

SERVER_SCRIPT = (
    "/Users/weimenglee/Desktop/mcpserver.py"
)
PYTHON = "/opt/anaconda3/bin/python3"

async def main():
    server_params = StdioServerParameters(
        command=PYTHON,
        args=[SERVER_SCRIPT]
    )
    async with stdio_client(server_params) as (
        read, write
    ):
        async with ClientSession(
            read, write
        ) as session:
            await session.initialize()

            # List available tools
            tools = await session.list_tools()
            print("\n=== Available Tools ===")
            for tool in tools.tools:
                print(f"- {tool.name}")

            # Test get_current_weather
            print("\n=== Weather Query ===")
            weather = await session.call_tool(
                "get_current_weather",
                arguments={"city": "Singapore"}
            )
            print(weather.content[0].text)

            # Test fetch_public_holidays
            print("\n=== Public Holidays ===")
            holidays = await session.call_tool(
                "fetch_public_holidays",
                arguments={
                    "country": "SG",
                    "year": 2026
                }
            )
            for item in holidays.content:
                print(item.text)

if __name__ == "__main__":
    asyncio.run(main())
This script is an MCP client that connects to the MCP server defined in the previous script and tests its two tools. It starts by launching the server as a subprocess over standard I/O using StdioServerParameters, then opens an async session and initializes it.
Once connected, it does three things: lists all tools registered on the server, calls get_current_weather with Singapore as the city and prints the result, then calls fetch_public_holidays with Singapore's country code and the year 2026 and prints each holiday.
The entire flow is async, driven by asyncio.run(main()) at the bottom.
StdioServerParameters is a configuration object from the mcp library that tells the client how to launch the MCP server as a subprocess. Essentially, it defines how the client should spin up and communicate with the server process, in this case over standard I/O (stdin/stdout) rather than over a network connection.
Let's run the MCP client using the following command in Terminal:
python3 mcpclient.py
Figure 18 shows the output.

Building an Interactive MCP Client with Ollama
In this section, we'll build an interactive command-line assistant that combines an MCP server with a locally running Ollama language model. Rather than hardcoding tool calls as in the previous example, this client lets the user type natural language questions, which are sent to Ollama along with the available tool definitions.
Ollama decides whether to answer directly or delegate to one of the MCP tools, such as fetching weather data or retrieving public holidays. The result is a conversational interface where the language model acts as the “brain” and the MCP server acts as the “hands,” executing real-world tasks on its behalf.
Listing 5 shows the code for the client using Ollama.
Listing 5: Calling the MCP servers with the help of an LLM - mcpclient_llm.py
import asyncio
import json
import requests
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

SERVER_SCRIPT = (
    "/Users/weimenglee/Desktop/mcpserver.py"
)
PYTHON = "/opt/anaconda3/bin/python3"
OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "gpt-oss:120b-cloud"

def ask_ollama(question: str, tools: list) -> dict:
    """Send question to Ollama with tool
    definitions and get tool call back."""
    ollama_tools = [
        {
            "type": "function",
            "function": {
                "name": t.name,
                "description": t.description,
                "parameters": t.inputSchema
            }
        }
        for t in tools
    ]
    payload = {
        "model": MODEL,
        "messages": [
            {
                "role": "user",
                "content": question
            }
        ],
        "tools": ollama_tools,
        "stream": False
    }
    response = requests.post(
        OLLAMA_URL,
        json=payload,
        timeout=30
    )
    response.raise_for_status()
    return response.json()

def extract_tool_call(
    ollama_response: dict
) -> tuple[str, dict] | None:
    """Extract tool name and arguments
    from Ollama response."""
    message = ollama_response.get(
        "message", {}
    )
    tool_calls = message.get("tool_calls", [])
    if not tool_calls:
        return None
    tool = tool_calls[0]["function"]
    name = tool["name"]
    args = tool.get("arguments", {})
    if isinstance(args, str):
        args = json.loads(args)
    return name, args

async def main():
    server_params = StdioServerParameters(
        command=PYTHON,
        args=[SERVER_SCRIPT]
    )
    async with stdio_client(server_params) as (
        read, write
    ):
        async with ClientSession(
            read, write
        ) as session:
            await session.initialize()
            tools_result = (
                await session.list_tools()
            )
            tools = tools_result.tools

            print("=== Utilities Assistant ===")
            print("Available tools:")
            for t in tools:
                print(f"  - {t.name}")
            print("\nType 'quit' to exit.\n")

            while True:
                question = input("You: ").strip()
                if not question:
                    continue
                if question.lower() in (
                    "quit", "exit", "q"
                ):
                    print("Goodbye!")
                    break
                try:
                    ollama_resp = ask_ollama(
                        question, tools
                    )
                    result = extract_tool_call(
                        ollama_resp
                    )
                    if not result:
                        # Ollama answered directly
                        content = (
                            ollama_resp
                            .get("message", {})
                            .get("content", "")
                        )
                        print(f"Assistant: {content}\n")
                        continue
                    tool_name, tool_args = result
                    print(
                        f"[Calling {tool_name} "
                        f"with {tool_args}]"
                    )
                    tool_result = (
                        await session.call_tool(
                            tool_name,
                            arguments=tool_args
                        )
                    )
                    print("Assistant:")
                    for item in tool_result.content:
                        print(f"  {item.text}")
                    print()
                except requests.RequestException as e:
                    print(
                        f"Ollama error: {e}\n"
                        f"Is Ollama running?\n"
                    )
                except Exception as e:
                    print(f"Error: {e}\n")

if __name__ == "__main__":
    asyncio.run(main())
The script is made up of three main parts: helper functions, the main async loop, and the entry point.
- ask_ollama() takes the user's question and the list of available MCP tools, converts the tools into a format Ollama understands, and sends everything to the Ollama API as a chat request. It returns Ollama's raw JSON response.
- extract_tool_call() examines Ollama's response and checks whether it decided to call a tool. If it did, the function extracts and returns the tool name and its arguments. If no tool call was made, it returns None, meaning Ollama answered the question directly from its own knowledge.
- main() is the core of the script. It launches the MCP server as a subprocess, connects to it, and lists the available tools. It then enters an interactive loop where the user can type questions. For each question, it calls ask_ollama() to get a response, then calls extract_tool_call() to check if a tool needs to be invoked. If a tool call is needed, it forwards the call to the MCP server and prints the result. If no tool call is needed, it prints Ollama's direct response. The loop continues until the user types quit, exit, or q.
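To see extract_tool_call() in isolation, here is a sample of the response shape Ollama produces when the model opts for a tool call (the field names follow the Ollama chat API; the values are made up), together with the same extraction logic used in Listing 5:

```python
import json

# Illustrative Ollama /api/chat response containing a tool call
ollama_response = {
    "message": {
        "role": "assistant",
        "content": "",
        "tool_calls": [
            {
                "function": {
                    "name": "get_current_weather",
                    "arguments": {"city": "San Francisco"},
                }
            }
        ],
    }
}

# Same extraction logic as extract_tool_call() in Listing 5
tool = ollama_response["message"]["tool_calls"][0]["function"]
name = tool["name"]
args = tool.get("arguments", {})
if isinstance(args, str):  # some models return arguments as a JSON string
    args = json.loads(args)
```

The isinstance check matters because argument encoding varies by model: the client accepts either a ready-made dict or a JSON string that still needs parsing.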
Let's type the following command in Terminal to run the client:
python3 mcpclient_llm.py
Figure 19 shows how the client responded when I asked the question: "How's the weather looking for San Francisco?"

The results look great! The LLM correctly identifies that it needs to call the get_current_weather tool with San Francisco as the city name.
Now let's test the other tool by asking: "What are the public holidays in Germany in 2026?" As shown in Figure 20, the LLM correctly decides to call the fetch_public_holidays tool, passing in the right country code and year.

Summary
In this article, we took a hands-on approach to exploring the Model Context Protocol (MCP). We started by introducing MCP and its two transport mechanisms—stdio and SSE—before walking through how to use Claude Desktop as an MCP client to connect to ready-made MCP servers, including the Filesystem and Memory servers provided by Anthropic. We then stepped into the role of server developer, building two MCP servers from scratch using Python—one for retrieving live weather data and another for looking up public holidays. Finally, as a bonus, we wrote our own MCP client that connects to an MCP server, discovers its tools, and uses a locally running Ollama language model to intelligently decide which tool to call based on natural language questions. By the end of this article, you should have a solid understanding of how MCP works in practice and the confidence to build your own MCP servers and clients.