
· 2 min read

Defang Compose Update

April flew by with big momentum at Defang. From deeper investments in the Model Context Protocol (MCP), to deploying LLM-based inferencing apps, to live demos of Vibe Deploying, we're making it easier than ever to go from idea to cloud.

MCP + Vibe Deploying

This month we focused on making cloud deployments as easy as writing a prompt. Our latest Vibe Deploying blog shows how you can launch full-stack apps right from your IDE just by chatting.

Whether you're working in Cursor, Windsurf, VS Code, or Claude, Defang's MCP integration lets you deploy to the cloud just as easily as conversing with the AI to generate your app. For more details, check out the docs for the Defang Model Context Protocol Server – it explains how it works, how to use it, and why it's a game changer for deploying to the cloud. You can also watch our tutorials for Cursor, Windsurf, and VS Code.

Managed LLMs

Last month we shipped the x-defang-llm compose service extension to easily deploy inferencing apps that use managed LLM services such as AWS Bedrock. This month, we're excited to announce the same support for GCP Vertex AI – give it a try and let us know your feedback!
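To make that concrete, here's a minimal sketch of what this looks like in a Compose file (the service name is illustrative; see the Managed LLMs docs for the exact shape):

services:
  llm:
    image: defang/openai-access-gateway   # OpenAI-compatible gateway in front of the managed LLM
    x-defang-llm: true                    # Defang provisions AWS Bedrock or GCP Vertex AI access, depending on your provider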

Events and Programs

On April 28, we kicked things off with an epic night of demos, dev energy, and cloud magic at RAG & AI in Action. Our own Kevin Vo showed how fast and easy it is to deploy AI apps from Windsurf to the cloud using just the Defang MCP. The crowd got a front-row look at how Vibe Deploying turns cloud infra into a background detail.

We wrapped up the month with our signature Defang Coffee Chat, a casual hangout featuring live demos, product updates, live Q&A, and a great conversation around vibe deploying with our community. Thanks to everyone who joined. Our Campus Advocates also hosted workshops around the world, bringing Defang to new students and builders.

The next one is on May 21 at 10 AM PST. Save your spot here.

Looking Ahead

Here's what's coming in May:

  • Web Summit Vancouver – Defang will be a startup sponsor. Please come see us on the expo floor.
  • More MCP tutorials and dev tools.

Let's keep building. 🚀

· 3 min read

"I'm building a project, but it's not really coding. I just see stuff, say stuff, run stuff, and copy-paste stuff. And it mostly works."

– Andrej Karpathy

Welcome to the world of vibe coding, an AI-assisted, intuition-driven way of building software. You do not spend hours reading diffs, organizing files, or hunting through documentation. You describe what you want, let the AI take a pass, and keep iterating until it works.

The Tools of Vibe Coding

Vibe coding would not exist without a new generation of AI-first tools. Here are some of the platforms powering this new workflow.

While each has its own strengths and weaknesses, they all support the basic vibe coding workflow described above.

Using Defang for "Vibe Deployment"

Once your app runs locally with these vibe coding tools, the next question is: how do you get it live in the cloud so you can share it with the world?

That is where Defang comes in.

Defang takes your app, as specified in your docker-compose.yml, and deploys it to the public cloud (AWS, GCP, or DigitalOcean) or the Defang Playground with a single command. It is already used by thousands of developers around the world to deploy their projects to the cloud.

[Image: Defang Vibe Deploy]

And now with the Defang MCP Server, you can "vibe deploy" your project right from your favorite IDE! Once you have the Defang MCP Server installed (see instructions here), just type "deploy" (or any variation thereof) in the chat. It's that simple! It is built for hobbyists, vibe coders, fast-moving teams, and AI-powered workflows.

Currently, we support deployment to the Defang Playground only, but we'll be adding deployment to public cloud soon.

How it works:

[Image: Defang MCP Workflow]

The Defang MCP Server connects your coding editor (like VS Code or Cursor) with Defang's cloud tooling, so you can ask your AI assistant to deploy your project just by typing a prompt. Natural language commands are by nature imprecise, but the AI in your IDE translates your prompt into the precise Defang command needed to deploy your application to the cloud. Your application also has a formal definition - the compose.yaml file - either one you wrote or one the AI generated for you. The combination of a formal compose.yaml with a precise Defang command means the resulting deployment is deterministic and reliable. The Defang MCP Server thus gives you the best of both worlds: the ease and convenience of natural language interaction with the AI, combined with the predictability and reliability of a deterministic deployment.

We are so excited to make Defang even easier to use and more accessible to vibe coders. Give it a try and let us know what you think on our Discord!

· 4 min read

Defang Compose Update

Wow - another month has gone by, time flies when you're having fun!

Let us share some important updates regarding what we achieved at Defang in March:

Managed LLMs: One of the coolest features we've released in a while is support for Managed LLMs (such as AWS Bedrock) through the x-defang-llm compose service extension. When coupled with the defang/openai-access-gateway service image, Defang offers the easiest way to migrate your OpenAI-compatible application to cloud-native managed LLMs without making any changes to your code. Support for GCP and DigitalOcean is coming soon.
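As a rough sketch, the migration pattern looks like this: your app keeps using the OpenAI SDK and simply points its base URL at the gateway (the service names and gateway URL below are illustrative):

services:
  app:
    build: .
    environment:
      - OPENAI_BASE_URL=http://llm/api/v1   # hypothetical gateway address; application code is unchanged
  llm:
    image: defang/openai-access-gateway
    x-defang-llm: true                      # Defang wires this to a managed LLM such as AWS Bedrock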

Defang Pulumi Provider: Last month, we announced a preview of the Defang Pulumi Provider, and this month we are excited to announce that V1 is now available in the Pulumi Registry. As much as we love Docker, we realize there are many real-world apps that have components that (currently) cannot be described completely in a Compose file. With the Defang Pulumi Provider, you can now leverage the declarative simplicity of Defang with the imperative power of Pulumi.

Production-readiness: As we onboard more customers, we are fixing many fit-n-finish items:

  1. Autoscaling: Production apps need the ability to easily scale up and down with load, so we've added support for autoscaling. By adding the x-defang-autoscaling: true extension to your service definition in your compose.yaml file, you benefit from automatic scale-out to handle large loads and scale-in when load is low (see the sketch after this list). Learn more here.

  2. New CLI: We've been busy making the CLI more powerful, secure, and intelligent.
    • Smarter Config Handling: The new --random flag simplifies setup by generating secure, random config values, removing the need for manual secret creation. Separately, automatic detection of sensitive data in Compose files helps prevent accidental leaks by warning you before they are deployed. Together, these features improve security and streamline your workflow.
    • Time-Bound Log Tailing: Need to investigate a specific window? Use tail --until to view logs up to a chosen time—no more scrolling endlessly. Save time from sifting through irrelevant events and focus your investigation.
    • Automatic generation of a .dockerignore file for projects that don't already have one, saving you time and reducing image bloat. By excluding common unnecessary files—like .git, node_modules, or local configs—it helps keep your builds clean, fast, and secure right from the start, without needing manual setup.

  3. Networking / Reduced costs: We have implemented private networks, as described in the official Compose specification. We have also reduced costs by eliminating the need for a pricey NAT Gateway in "development mode" deployments!
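To make the autoscaling extension from item 1 concrete, here's a minimal sketch (the service name and image are illustrative):

services:
  api:
    image: myorg/api:latest      # illustrative service
    x-defang-autoscaling: true   # Defang scales this service out under load and back in when load drops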

Events and Programs

In March, we had an incredible evening at the AWS Gen AI Loft in San Francisco! Our CTO and Co-founder Lionello Lunesu demoed how Defang makes deploying secure, scalable, production-ready containerized applications on AWS effortless. Check out the demo here!

We also kicked off the Defang Campus Advocate Program, bringing together advocates from around the world. After launching the program in February, it was amazing to see the energy and momentum already building on campuses worldwide. Just as one example, check out this post from one of the students who attended a session hosted by our Campus Advocate Swapnendu Banerjee and then went on to deploy his project with Defang. This is what we live for!

We wrapped up the month with our monthly Coffee Chat, featuring the latest Defang updates, live demos, and a conversation on vibe coding. Thanks to everyone who joined. The next one is on April 30. Save your spot here.

As always, we appreciate your feedback and are committed to making Defang even better. Deploy any app to any cloud with a single command. Go build something awesome!

· 4 min read

In this guide, we'll walk through the easiest and fastest way to deploy a full-featured Django application—including real-time chat and background task processing—to the cloud using Defang. You'll see firsthand how simple Defang makes it to deploy apps that require multiple services like web servers, background workers, Redis, and Postgres.

Clone the repo

Before we get started, you'll want to clone the repo with the app code, here.

Overview of Our Django Application

We're deploying a real-time chat application that includes automatic moderation powered by a background worker using the Natural Language Toolkit (NLTK). The application structure includes:

  • Web Service: Django app with chat functionality using Django Channels for real-time interactions.
  • Worker Service: Background tasks processing messages for profanity and sentiment analysis.
  • Postgres Database: Managed database instance for persistent storage.
  • Redis Broker: Managed Redis instance serving as the broker for Celery tasks and Django Channels.

Running Locally

To run the app locally, we use Docker Compose, splitting configurations into two YAML files:

  • compose.yaml: Production configuration.
  • compose.dev.yaml: Development overrides extending production.

You can quickly spin up the application locally with:

docker compose --env-file .env.dev -f compose.dev.yaml up --build

This runs the app with autoreloading so you can iterate on the Django code, while passing environment variables the same way we will later with Defang's secure configuration system, so you're always ready to deploy to production.
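For reference, a development override can be as small as the following sketch (the service name, command, and paths are illustrative, not necessarily those used in the sample repo):

# compose.dev.yaml (hypothetical sketch)
services:
  web:
    extends:
      file: compose.yaml
      service: web
    command: python manage.py runserver 0.0.0.0:8000   # autoreloading dev server instead of the production command
    volumes:
      - .:/app   # mount the source tree so code changes reload instantly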

Application Features

Real-time Chat

Using Django Channels and Redis, users can engage in real-time conversations within chat rooms.

Background Moderation Tasks

The worker service runs independently, handling moderation tasks asynchronously. It uses NLTK to:

  • Check for profanity.
  • Perform sentiment analysis.
  • Automatically flag negative or inappropriate messages.

This decouples resource-intensive tasks from the main API server, keeping the application responsive. The demo isn't doing anything very complicated, but you could just as easily run machine learning models with GPU access using Defang if you needed to.

Django Admin

The Django admin is set up so you can quickly visualize messages and their moderation status. Access it at /admin with your superuser credentials: username admin and password admin, created by default when you first run or deploy.

Deploying with Defang

Deploying multi-service applications to cloud providers traditionally involves complex infrastructure setup, including configuring ECS clusters, security groups, networking, and more. Defang simplifies this significantly.

Deploying to Defang Playground

The Defang Playground lets you quickly preview your deployed app in a managed environment.

Secure Configuration

Before deploying, securely set encrypted sensitive values:

defang config set DJANGO_SECRET_KEY
defang config set POSTGRES_PASSWORD

Then run the deployment command:

defang compose up

Defang automatically:

  • Builds Docker containers.
  • Sets up required services.
  • Manages networking and provisioning.

Once deployed, your app is accessible via a public URL provided by Defang, which you can find in the CLI output or in our portal at https://portal.defang.io.

Deploying to Your Own Cloud

To deploy directly into your AWS account (or other supported providers):

  1. Set your cloud provider:

In my case, I use an AWS profile, but you should be able to use any method supported by the AWS CLI:

export DEFANG_PROVIDER=AWS
export AWS_PROFILE=your-profile-name

Secure Configuration

Before deploying, securely set encrypted sensitive values in your cloud account:

defang config set DJANGO_SECRET_KEY
defang config set POSTGRES_PASSWORD
  2. Deploy:
defang compose up

Defang handles provisioning managed services (RDS for Postgres, ElastiCache for Redis), container builds, and networking setup. Note: Initial provisioning for managed data stores might take a few minutes.

Cloud Deployment Results

Post-deployment, your Django app infrastructure includes (among other things):

  • Managed Postgres: AWS RDS instance.
  • Managed Redis: AWS ElastiCache instance.
  • Containers: ECS services with load balancers and DNS configured.

Why Use Defang?

Defang simplifies complex cloud deployments by:

  • Automatically provisioning managed cloud resources.
  • Securely handling sensitive configurations.
  • Providing seamless container orchestration without manual infrastructure setup.

Try It Yourself

Explore deploying your Django applications effortlessly with Defang. The full source code for this example is available on GitHub. Feel free to give it a try, and let us know how it goes!

Happy deploying!

· 4 min read

Defang Compose Update

When we refreshed the Defang brand, we knew our website needed more than just a fresh coat of paint. It needed to become a more dynamic part of our stack: some parts needed to be more flexible, some more interactive, and the whole thing better aligned with how modern apps are organized and deployed. And what better way to take it there than to deploy it with Defang itself?

This is part of our ongoing "Defang on Defang" series, where we show how we're using our own tool to deploy all the services that power Defang. In this post, we're diving into how we turned our own website into a project to better understand how Defang can be used to deploy dynamic Next.js apps and how we can improve the experience for developers.


From S3 + CloudFront to Dynamic, Containerized Deployments

Our original site was a Next.js app using static exports, deployed via S3 and fronted by CloudFront. That setup worked for a while—it was fast and simple. But with our brand refresh, we added pages and components where it made sense to use (and test for other developers) some Next.js features that we couldn't use with the static export.

That meant static hosting wouldn't cut it. So we decided to run the site as an app in a container.

That said, the learnings from our previous setup are feeding back into Defang's capabilities. We're using the experience to make sure that Defang can handle the deployment of static sites as well as dynamic ones. We'll keep you updated when that's ready.


Deploying with Defang (and Why It Was Easy)

We already deploy our other services with Defang using Compose files. In fact, the static website already used a Dockerfile and Compose file to manage the build process. So we just had to make some minor changes to the Compose file to account for new environment variables for features we're adding, and a few small changes to the Dockerfile to handle the new build process.

Some things we had to change:

Adding ports to the Compose file:

    ports:
      - mode: ingress
        target: 3000
        published: 3000

Adding domain info to the Compose file:

    domainname: defang.io
    networks:
      default:
        aliases:
          - www.defang.io

One other hiccup was that we used to do www to non-www redirects using S3. There are a few ways to switch that up, but for the time being we decided to use Next.js middleware.
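For illustration, such a redirect can be as small as the following sketch (a minimal example, not necessarily the exact middleware we shipped):

// middleware.ts: redirect www requests to the apex domain
import { NextRequest, NextResponse } from 'next/server';

export function middleware(request: NextRequest) {
  const host = request.headers.get('host') ?? '';
  if (host.startsWith('www.')) {
    const url = new URL(request.url);
    url.host = host.slice(4); // strip the leading "www."
    return NextResponse.redirect(url, 308); // permanent redirect
  }
  return NextResponse.next();
}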

Soon after that, the site was up and running in an AWS account—with TLS, DNS, and both the www and root domains automatically configured. Pretty straightforward!


Real-World Lessons That Are Shaping Defang

Deploying the website wasn't just a checkbox—it helped surface real-world pain points and ideas for improvement.

1. Static Assets Still Need CDNs

Even though the site is dynamic now, we still want assets like /_next/static to load quickly from a CDN. This made it clear that CDN support—like CloudFront integration—should be easier to configure in Defang. That’s now on our roadmap. That's also going to be useful for other frameworks that use similar asset paths, like Django.

2. Next.js Env Vars Can Be Tricky in Containers

Next.js splits env vars between build-time and runtime, and the rules aren’t always obvious. Some need to be passed as build args, and others as runtime envs. That made us think harder about how Defang could help clarify or streamline this for developers—even if we can’t change that aspect of Next.js itself.
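A hypothetical sketch of how that split shows up in a Compose file (the variable names are illustrative): NEXT_PUBLIC_* values must be supplied as build args because Next.js inlines them into the client bundle at build time, while server-only values can remain runtime environment variables:

services:
  web:
    build:
      context: .
      args:
        - NEXT_PUBLIC_API_URL   # hypothetical build-time value, inlined into the client bundle
    environment:
      - SESSION_SECRET          # hypothetical runtime-only value, read by the server process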

3. Redirects and Rewrites

We had to add a middleware to handle www to non-www redirects. This is a common need, so we're keeping an eye on how we can make this easier to deal with in Defang projects.

These are the kinds of things we only notice by using Defang on real-world projects.


The Takeaway

Our site now runs like the rest of our infrastructure:

  • Fully containerized
  • Deployed to our own AWS account
  • Managed with a Compose file
  • Deployed with Defang

Stay tuned for the next post in the series—because this is just one piece of the puzzle.

· 3 min read

Defang Compose Update

Well, that went by quick! Seems like it was just a couple of weeks ago that we published the Jan update, and it’s already time for the next one. Still, we do have some exciting progress to report in this short month!

  1. Pulumi Provider: We are excited to announce a Preview of the Defang Pulumi Provider. With the Defang Pulumi Provider, you can leverage all the power of Defang with all of the extensibility of Pulumi. Defang will provision infrastructure to deploy your application straight from your Compose file, while allowing you to connect that deployment with other resources you deploy to your cloud account. The new provider makes it easy to leverage Defang if you’re already using Pulumi, and it also provides an upgrade-path for users who need more configurability than the Compose specification can provide.
  2. Portal Update: We are now fully deploying our portal with Defang alone using the defang compose up command. Our original portal architecture was designed before we supported managed storage so we used to use Pulumi to provision and connect external storage. But since we added support in Compose to specify managed storage, we can fully describe our Portal using Compose alone. This has allowed us to rip out hundreds of lines of code and heavily simplify our deployments. To learn more about how we do this, check out our Defang-Deployed-with-Defang (Part 1) blog.
  3. OpenAuth Contribution: In the past couple of months we have been communicating with the OpenAuth maintainers and contributors via PRs (#120, #156) and Issues (#127) to enable features like local testing with DynamoDB, enabling support for scopes, improving standards alignment, supporting Redis, and more. We are rebuilding our authentication systems around OpenAuth and are excited about the future of the project.

Events and Social Media

February was an exciting month for the Defang team as we continued to engage with the developer community and showcase what’s possible with Defang. We sponsored and demoed at the DevTools Vancouver meetup, and also sponsored the Vancouver.dev IRL: Building AI Startups event. At the AWS Startup Innovation Showcase in Vancouver, our CTO Lio demonstrated how Defang makes it effortless to deploy secure, scalable, and cost-efficient serverless apps on AWS! And finally, we had a great response to our LinkedIn post on the Model Context Protocol, catching the attention of many observers, including some of our key partners.

We are eager to see what you deploy with Defang. Join our Discord to ask any questions, see what others are building, and share your own experience with Defang. And stay tuned for more to come in March!

· 5 min read

Defang Compose Update

Deploying applications is hard. Deploying complex, multi-service applications is even harder. When we first built the Defang Portal, we quickly recognized the complexity required to deploy it, even with the early Defang tooling helping us simplify it a lot. But we’ve worked a lot to expand Defang’s capabilities over the last year+ so it could take on more of the work and simplify that process.

This evolution wasn’t just based on our own instincts and what we saw in the Portal—it was informed by listening to developers who have been using Defang, as well as our experience building dozens of sample projects for different frameworks and languages. Each time we build a new sample, we learn more about the different requirements of various types of applications and developers and refine Defang’s feature set accordingly. The Portal became an extension of this learning process, serving as both a proving ground and an opportunity to close any remaining gaps, since it’s one of the most complex things we’ve built with Defang.

Finally, though, the Defang Portal—an application with six services, including two managed data stores, authentication, and a frontend—is fully deployed using just Docker Compose files and the Defang CLI, driven by GitHub Actions.


The Initial Setup: A More Complex Deployment

The Portal isn’t a simple static website; it’s a full-stack application with the following services:

  • Next.js frontend – Including server components and server actions.
  • Hasura (GraphQL API) – Serves as a GraphQL layer.
  • Hono (TypeScript API) – Lightweight API for custom business logic.
  • OpenAuth (Authentication Service) – Manages authentication flows.
  • Redis – Used for caching and session storage.
  • Postgres – The main database.

Initially, we provisioned databases and some DNS configurations using Infra-as-Code because Defang couldn’t yet manage them for us. We also deployed the services themselves manually through infrastructure-as-code, requiring us to define each service separately.

This worked, but it seemed unnecessarily complex. If only we had the right tooling…


The Transition: Expanding Defang to Reduce Complexity

Over the past year, we’ve made it a priority to expand Defang’s capabilities so it could take on more of the heavy lifting of a complex application. We’ve added loads of features to handle things like:

  • Provisioning databases, including managing passwords and other secrets securely
  • Config interpolation using values stored in AWS SSM, ensuring the same Compose file works both locally and in the cloud
  • Provisioning certs and managing DNS records from configuration in the Compose file.

As a result, we reached a point where we no longer needed custom infrastructure definitions for most of our deployment.

What Changed?

  • Previously: GitHub Actions ran infra-as-code scripts to provision databases, manage DNS, and define services separately from the Docker Compose file we used for local dev.
  • Now: Our Defang GitHub Action targets normal Compose files and deploys everything, using secrets and variables managed in GitHub Actions environments.
  • Result: We eliminated hundreds of lines of Infra-as-Code, making our deployment leaner and easier to manage and reducing the differences between running the Portal locally and running it in the cloud.

This wasn’t just about reducing complexity—it was also a validation exercise. We knew that Defang had evolved enough to take over much of our deployment, but by going through the transition process ourselves, we could identify and close the remaining gaps and make sure our users could really make use of Defang for complex production-ready apps.


How Deployment Works Today

Config & Secrets Management

  • Sensitive configuration values (database credentials, API keys) are stored securely in AWS SSM using Defang’s configuration management tooling.
  • Environment variable interpolation allows these SSM-stored config values to be referenced directly in the Compose file, ensuring the same configuration works in local and cloud environments.
  • Defang provisions managed Postgres and Redis instances automatically when using the x-defang-postgres and x-defang-redis extensions, securely injecting credentials where needed with variable interpolation.
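Put together, a hypothetical fragment of such a Compose file might look like this (service names and the connection string are illustrative):

services:
  db:
    image: postgres:16
    x-defang-postgres: true   # Defang provisions a managed Postgres (e.g. RDS) in the cloud
    environment:
      - POSTGRES_PASSWORD     # set securely via `defang config set POSTGRES_PASSWORD`
  cache:
    image: redis:7
    x-defang-redis: true      # Defang provisions a managed Redis (e.g. ElastiCache)
  api:
    build: ./api
    environment:
      - DATABASE_URL=postgres://postgres:${POSTGRES_PASSWORD}@db:5432/app   # interpolated from SSM-backed config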

Deployment Modes

  • Deployment modes (development, staging, production) adjust infrastructure settings across our dev/staging/prod deployments, making sure dev is low-cost while production is secure and resilient.

DNS & Certs

CI/CD Integration

  • Previously: GitHub Actions ran custom infra-as-code scripts.
  • Now: The Defang GitHub Action installs Defang automatically and runs defang compose up, simplifying deployment.
  • Result: A streamlined, repeatable CI/CD pipeline.

The Takeaway: Why This Matters

By transitioning to fully Compose-based deployments, we:

  • ✅ Eliminated hundreds of lines of Infra-as-Code
  • ✅ Simplified configuration management with secure, environment-aware secrets handling
  • ✅ Streamlined CI/CD with a lightweight GitHub Actions workflow
  • ✅ Simplified DNS and cert management

Every sample project we built, every conversation we had with developers, and every challenge we encountered with the Portal helped us get to this point where we could focus on closing the last few gaps to deploying everything from a Compose file.

· 2 min read

Defang New Look

Over the last couple of years of building Defang, we've learnt a lot about what developers need when deploying their applications to the cloud: above all, a simple developer experience, combined with a flexible, production-ready solution that works seamlessly with all of the popular cloud platforms.

In response, we have been constantly evolving our product functionality to address those needs in the simplest yet most powerful way we can come up with. While certainly there is a long way to go, we have definitely come a long way since we started.

Why the Refresh?

As we reflected on our journey, we realized our branding and messaging needed to better reflect Defang's current value-proposition. That's why today, we're excited to unveil our brand refresh, our first since the early days of Defang.

Here's what's new:

1. Refining Our Messaging

As Defang evolves, so does our message:

  • Our Promise: Develop Anything, Deploy Anywhere.
  • What We Enable: Any App, Any Stack, Any Cloud.
  • How It Works: Take your app from Docker Compose to a secure, scalable deployment on your favorite cloud in minutes.

2. A Modernized Logo

We've modernized our logo while keeping the core hexagonal design. The new look symbolizes Defang's role in seamlessly deploying any application to any cloud.

3. A Redesigned Website

We've refreshed our website with a sleek, intuitive design and a modern user experience to better showcase Defang's capabilities.

Rolling Out the Refresh

Starting today, you'll see these updates across our Defang.io homepage and social media platforms (Twitter, LinkedIn, Discord, BlueSky). In the coming days, we'll extend this refresh across all our digital assets.

We'd Love Your Feedback!

Check out the new look and let us know what you think! And if you haven't, please join us on Discord and follow us on social media.

· 7 min read


Anthropic recently unveiled the Model Context Protocol (MCP), “a new standard for connecting AI assistants to the systems where data lives”. However, as Docker pointed out, “packaging and distributing MCP Servers is very challenging due to complex environment setups across multiple architectures and operating systems”. Docker helps to solve this problem by enabling developers to “encapsulate their development environment into containers, ensuring consistency across all team members’ machines and deployments.” The Docker work includes a list of reference MCP Servers packaged up as containers, which you can deploy locally and test your AI application.

However, to put such containerized AI applications into production, you need to be able to not only test locally, but also easily deploy the application to the cloud. This is what Defang enables. In this blog and the accompanying sample, we show how to build a sample AI application using one of the reference MCP Servers, run and test it locally using Docker, and, when ready, easily deploy it to the cloud of your choice (AWS, GCP, or DigitalOcean) using Defang.

Sample Model Context Protocol Time Chatbot Application

Using Docker’s mcp/time image and Anthropic Claude, we made a chatbot application that can access time-based resources directly on the user’s local machine and answer time-based questions.

The application is containerized using Docker, enabling a convenient and easy way to get it running locally. We will later demonstrate how we deployed it to the cloud using Defang.

Let’s go over the structure of the application in a local environment.

[Image: mcp_before (application architecture running locally)]

General Overview

  1. There are two containerized services, Service 1 and Service 2, that sit on the local machine.
    • Service 1 contains a custom-built web server that interacts with an MCP Client.
    • Service 2 contains an MCP Server from Docker as a base image for the container, and a custom-built MCP Client we created for interacting with the MCP Server.
  2. We have a browser on our local machine, which interacts with the web server in Service 1.
  3. The MCP Server in Service 2 is able to access tools from either a cloud or on our local machine. This configuration is included as a part of the Docker MCP image.
  4. The MCP Client in Service 2 interacts with the Anthropic API and the web server.

Architecture

Service 1: Web Server

Service 1 contains a web server and the UI for a chat application (not shown in the diagram), written in Next.js. The chat UI updates based on user-entered queries and chatbot responses. A POST request is sent to Service 1 every time a user enters a query from the browser. In the web server, a Next.js server action function is used to forward the user queries to the endpoint URL of Service 2 to be processed by the MCP Client.
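For illustration, the forwarding server action could look roughly like this sketch (the endpoint path and response shape are assumptions; MCP_SERVICE_URL is set in the Compose file shown later in this post):

// app/actions.ts: hypothetical server action forwarding chat queries to Service 2
'use server';

export async function sendQuery(query: string): Promise<string> {
  const res = await fetch(`${process.env.MCP_SERVICE_URL}/query`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  });
  if (!res.ok) throw new Error(`MCP service error: ${res.status}`);
  const data = await res.json();
  return data.response; // the chatbot's reply, rendered by the chat UI
}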

Service 2: MCP Service Configuration

The original Docker mcp/time image is not designed with the intent of being deployed to the cloud - it is created for a seamless experience with Claude Desktop. To achieve cloud deployment, an HTTP layer is needed in front of the MCP Server. To address this, we've bundled an MCP Client together with the Server into one container. The MCP Client provides the HTTP interface and communicates with the MCP Server via standard input/output (stdio).

MCP Client

The MCP Client is written in Python, and runs in a virtual environment (/app/.venv/bin) to accommodate specific package dependencies. The MCP Client is instantiated in a Quart app, where it connects to the MCP Server and handles POST requests from the web server in Service 1. Additionally, the MCP Client connects to the Anthropic API to request LLM responses.

MCP Server and Tools (from the Docker Image)

The MCP Server enables access to tools from an external source, whether it be from a cloud or from the local machine. This configuration is included as a part of the Docker MCP image. The tools can be accessed indirectly by the MCP Client through the MCP Server. The Docker image is used as a base image for Service 2, and the MCP Client is built in the same container as the MCP Server. Note that the MCP Server also runs in a virtual environment (/app/.venv/bin).

Anthropic API

The MCP Client connects to the Anthropic API to request responses from a Claude model. Two requests are sent to Claude for each query. The first request will send the query contents and a list of tools available, and let Claude respond with a selection of the tools needed to craft a response. The MCP Client will then call the tools indirectly through the MCP Server. Once the tool results come back to the Client, a second request is sent to Claude with the query contents and tool results to craft the final response.

Setting Up Dockerfiles

Service 1: Web Server - Dockerfile

The base image for Service 1 is the node:bookworm-slim image. We construct the image by copying the server code and setting an entry point command to start the web server.
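A minimal sketch of what that Dockerfile can look like (the commands and file layout are assumptions, not the exact file from the sample):

# service-1/Dockerfile: web server image
FROM node:bookworm-slim
WORKDIR /app
COPY . .
RUN npm ci && npm run build   # assumes a standard Next.js build
EXPOSE 3000
CMD ["npm", "start"]          # entry point command that starts the web server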

Service 2: MCP Service Configuration - Dockerfile

The base image for Service 2 is the Docker mcp/time image. Since both the MCP Client and Server run in a virtual environment, we activate a venv command in the Dockerfile for Service 2 and create a run.sh shell script that runs the file containing the MCP Client and Server connection code. We then add the shell script as an entry point command for the container.
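And a sketch of the Service 2 Dockerfile based on that description (file names and the package list are assumptions):

# service-2/Dockerfile: MCP Client + Server image
FROM mcp/time                  # Docker's MCP time server as the base image
WORKDIR /app
COPY client.py run.sh ./       # hypothetical names for the MCP Client code and launch script
RUN . /app/.venv/bin/activate && pip install quart anthropic   # client deps added to the existing venv
ENTRYPOINT ["sh", "run.sh"]    # run.sh activates the venv and starts the MCP Client and Server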

Compose File

To define Services 1 and 2 as Docker containers, we’ve written a compose.yaml file in the root directory, as shown below.

services:
  service-1: # Web Server and UI
    build:
      context: ./service-1
      dockerfile: Dockerfile
    ports:
      - target: 3000
        published: 3000
        mode: ingress
    deploy:
      resources:
        reservations:
          memory: 256M
    environment:
      - MCP_SERVICE_URL=http://service-2:8000
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/"]

  service-2: # MCP Service (MCP Client and Server)
    build:
      context: ./service-2
      dockerfile: Dockerfile
    ports:
      - target: 8000
        published: 8000
        mode: host
    environment:
      - ANTHROPIC_API_KEY

Testing and Running on Local Machine

Now that we’ve defined our application in Docker containers using a compose.yaml file, we can test and run it on our local machine by running the command:

docker compose up --build

Once the application is started up, it can be easily tested in a local environment. However, to make it easily accessible to others online, we should deploy it to the cloud. Fortunately, deploying the application is a straightforward process using Defang, particularly since the application is Compose-compatible.

Deploying to the Cloud

Let’s go over the structure of the application after cloud deployment.

[Image: mcp_after (application architecture after cloud deployment)]

Here we can see what changes if we deploy to the cloud:

  1. Service 1 and Service 2 are now deployed to the cloud, not on the local machine anymore.
  2. The only part on the local machine is the browser.

Using the same compose.yaml file as shown earlier, we can deploy the containers to the cloud with the Defang CLI. Once we’ve authenticated and logged in, we can choose a cloud provider (i.e. AWS, GCP, or DigitalOcean) and use our own cloud account for deployment. Then, we can set a configuration variable for the Anthropic API key:

defang config set ANTHROPIC_API_KEY=<your-api-key-value>

Then, we can run the command:

defang compose up

Now, the MCP time chatbot application will be up and running in the cloud. This means that anyone can access the application online and try it for themselves!

For our case, anyone can use the chatbot to ask for the exact time or convert time zones from their machine, regardless of where they are located.

[Image: mcp_time_chatbot]

Most importantly, this chatbot application can be adapted to use any of the other Docker reference MCP Server images, not just the mcp/time server.

Have fun building and deploying MCP-based containerized applications to the cloud with Defang!

· 3 min read

Defang Compose Update

Welcome to 2025! As we had shared in our early Dec update, we reached our V1 milestone with support for GCP and DigitalOcean in Preview and production support for AWS. We were very gratified to see the excitement around our launch, with Defang ending 2024 with twice the number of users as our original goal!

We are excited to build on that momentum going into 2025. And we are off to a great start in Jan, with some key advancements:

  1. GCP parity with AWS: We are really excited to announce that our GCP provider is now feature-complete, with support for key features such as Managed Postgres, Managed Redis, BYOD (Bring-Your-Own-Domain), GPUs, AI-assisted Debugging, and more. Install the latest version of our CLI and give it a try! Please let us know your feedback.
  2. Defang Deployed with Defang: In 2025, we are doubling our focus on production use-cases where developers are using Defang every day to deploy their production apps. And where better to start than with Defang itself? We had already been using Defang to deploy portions of our infrastructure (such as our website), but we are super happy to report that now we are using Defang to deploy all our services - including our Portal, Playground, the Defang back-end (aka Fabric) and more. We’ll be sharing more about how we did this, and publishing some of the related artifacts, in a blog post soon - stay tuned.
  3. Campus Advocate Program: One of our key goals for 2025 is to bring Defang to more students and hobbyists. To do this, we are very excited to launch our Campus Advocate Program, a community of student leaders passionate about cloud technology. Our advocates will build communities, host events, and help peers adopt cloud development with Defang. If you’re a student eager to drive cloud innovation on your campus, we’d love to hear from you - you can apply here.
  4. 1-click Deploy instructions: One of our most popular features is the ability to deploy any of our 50+ samples with a single click. We have now published instructions showing how you can provide a similar experience for your project or sample. We are curious to see what you deploy with this!
  5. Model Context Protocol sample: AI agents are of course the rage nowadays. Recently, Docker published a blog showing how you can use Docker to containerize “servers” following Anthropic’s Model Context Protocol. We have now published a sample that shows you how to easily deploy such containerized servers to the cloud using Defang - check it out here.

So, you can see we have been busy! But that is not all - we have a lot more in the pipeline in the coming months. Stay tuned - it’s going to be an exciting 2025!

P.S.: Defang is now on Bluesky! Follow us to stay connected, get the latest updates, and join the conversation. See you there!