
3 posts tagged with "Docker"


· 3 min read

"I'm building a project, but it's not really coding. I just see stuff, say stuff, run stuff, and copy-paste stuff. And it mostly works."

– Andrej Karpathy

Welcome to the world of vibe coding, an AI-assisted, intuition-driven way of building software. You do not spend hours reading diffs, organizing files, or hunting through documentation. You describe what you want, let the AI take a pass, and keep iterating until it works.

The Tools of Vibe Coding

Vibe coding would not exist without a new generation of AI-first tools. Here are some of the platforms powering this new workflow.

While each has its own strengths and weaknesses, they all support the basic vibe coding workflow described above.

Using Defang for "Vibe Deployment"

Once your app runs locally with these vibe coding tools, the next question is: how do you get it live in the cloud so you can share it with the world?

That is where Defang comes in.

Defang takes your app, as specified in your docker-compose.yml, and deploys it to the public cloud (AWS, GCP, or DigitalOcean) or the Defang Playground with a single command. It is already used by thousands of developers around the world to deploy their projects to the cloud.
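To make that concrete, here is a minimal sketch of a compose file Defang can deploy as-is. The service name and port are assumptions, not taken from any particular sample:

# A minimal sketch; the service name and port are assumptions.
services:
  app:
    build: .          # build from the Dockerfile in your project root
    ports:
      - "3000:3000"   # the port your app listens on

With that file in place, deployment is a single command: defang compose up.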

[Figure: Defang Vibe Deploy]

And now with the Defang MCP Server, you can "vibe deploy" your project right from your favorite IDE! Once you have the Defang MCP Server installed (see instructions here), just type "deploy" (or any variation thereof) in the chat. It's that simple! It is built for hobbyists, vibe coders, fast-moving teams, and AI-powered workflows.

Currently, we support deployment to the Defang Playground only, but we'll be adding deployment to the public cloud soon.

How it works:

[Figure: Defang MCP workflow]

The Defang MCP Server connects your coding editor (like VS Code or Cursor) with Defang's cloud tooling, so you can ask your AI assistant to deploy your project just by typing a prompt. Natural-language commands are by nature imprecise, but the AI in your IDE translates your prompt into the precise Defang command needed to deploy your application to the cloud. Your application also has a formal definition: the compose.yaml file, whether you wrote it yourself or the AI generated it for you. The combination of a formal compose.yaml and a precise Defang command means the resulting deployment is deterministic and reliable. The Defang MCP Server thus gives you the best of both worlds: the ease and convenience of natural-language interaction with the AI, combined with the predictability and reliability of a deterministic deployment.

We are so excited to make Defang even easier to use and more accessible to vibe coders. Give it a try and let us know what you think on our Discord!

· 4 min read

In this guide, we'll walk through the easiest and fastest way to deploy a full-featured Django application—including real-time chat and background task processing—to the cloud using Defang. You'll see firsthand how simple Defang makes it to deploy apps that require multiple services like web servers, background workers, Redis, and Postgres.

Clone the repo

Before we get started, you'll want to clone the repo with the app code here.

Overview of Our Django Application

We're deploying a real-time chat application that includes automatic moderation powered by a background worker using the Natural Language Toolkit (NLTK). The application structure includes:

  • Web Service: Django app with chat functionality using Django Channels for real-time interactions.
  • Worker Service: Background tasks processing messages for profanity and sentiment analysis.
  • Postgres Database: Managed database instance for persistent storage.
  • Redis Broker: Managed Redis instance serving as the broker for Celery tasks and Django Channels.
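Below is a condensed sketch of what the production compose.yaml for these four services can look like. The service names, images, and commands are assumptions (see the repo for the real file); the x-defang-postgres and x-defang-redis compose extensions ask Defang to provision managed data stores when you deploy.

# A condensed sketch, not the repo's exact compose.yaml.
services:
  web:
    build: .
    ports:
      - "8000:8000"
    environment:
      - DJANGO_SECRET_KEY
      - POSTGRES_PASSWORD
  worker:
    build: .
    command: celery -A project worker   # runs the background moderation tasks
  db:
    image: postgres:16
    x-defang-postgres: true             # managed Postgres (e.g. RDS) on deploy
    environment:
      - POSTGRES_PASSWORD
  redis:
    image: redis:7
    x-defang-redis: true                # managed Redis (e.g. ElastiCache) on deploy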

Running Locally

To run the app locally, we use Docker Compose, splitting configurations into two YAML files:

  • compose.yaml: Production configuration.
  • compose.dev.yaml: Development overrides extending production.

You can quickly spin up the application locally with:

docker compose --env-file .env.dev -f compose.dev.yaml up --build

This runs the app with autoreloading so you can iterate on the Django code, all while passing environment variables the same way we will with Defang's secure configuration system, so the project stays ready to deploy to production.
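As a rough sketch of what such an override can look like (the service name and paths here are assumptions, not the repo's exact file):

# compose.dev.yaml - a sketch of a development override; names are assumptions.
services:
  web:
    extends:
      file: compose.yaml
      service: web
    command: python manage.py runserver 0.0.0.0:8000   # autoreloads on code changes
    volumes:
      - .:/app   # mount the source tree so edits apply without a rebuild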

Application Features

Real-time Chat

Using Django Channels and Redis, users can engage in real-time conversations within chat rooms.

Background Moderation Tasks

The worker service runs independently, handling moderation tasks asynchronously. It uses NLTK to:

  • Check for profanity.
  • Perform sentiment analysis.
  • Automatically flag negative or inappropriate messages.

This decouples resource-intensive tasks from the main API server, ensuring optimal application responsiveness. The demo isn't doing anything very complicated, but you could easily run machine learning models with access to GPUs with Defang if you needed to.
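As a sketch of what such a task can look like (the task name, broker URL, word list, and thresholds are illustrative, not the repo's exact code):

# A sketch of an async moderation task; names and thresholds are illustrative.
from celery import Celery
from nltk.sentiment import SentimentIntensityAnalyzer

app = Celery("moderation", broker="redis://redis:6379/0")
sia = SentimentIntensityAnalyzer()  # requires nltk.download("vader_lexicon") once

PROFANITY = {"darn", "heck"}  # placeholder word list

@app.task
def moderate_message(message_id: int, text: str) -> dict:
    words = set(text.lower().split())
    sentiment = sia.polarity_scores(text)["compound"]  # -1 (negative) to +1
    flagged = bool(words & PROFANITY) or sentiment < -0.5
    # The real app would also update the message's moderation status in Postgres.
    return {"message_id": message_id, "flagged": flagged, "sentiment": sentiment}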

Django Admin

The Django admin is set up so you can quickly visualize messages and their moderation status. Access it at /admin with the superuser credentials created by default on first run or deploy: username admin and password admin.

Deploying with Defang

Deploying multi-service applications to cloud providers traditionally involves complex infrastructure setup, including configuring ECS clusters, security groups, networking, and more. Defang simplifies this significantly.

Deploying to Defang Playground

The Defang Playground lets you quickly preview your deployed app in a managed environment.

Secure Configuration

Before deploying, securely set encrypted sensitive values:

defang config set DJANGO_SECRET_KEY
defang config set POSTGRES_PASSWORD

Then run the deployment command:

defang compose up

Defang automatically:

  • Builds Docker containers.
  • Sets up required services.
  • Manages networking and provisioning.

Once deployed, your app is accessible via a public URL provided by Defang, which you can find in the CLI output or in our portal at https://portal.defang.io.

Deploying to Your Own Cloud

To deploy directly into your AWS account (or other supported providers):

  1. Set your cloud provider:

In my case, I use an AWS profile, but you should be able to use any of the authentication methods supported by the AWS CLI:

export DEFANG_PROVIDER=AWS
export AWS_PROFILE=your-profile-name

Secure Configuration

Before deploying, securely set encrypted sensitive values in your cloud account:

defang config set DJANGO_SECRET_KEY
defang config set POSTGRES_PASSWORD
  2. Deploy:

defang compose up

Defang handles provisioning managed services (RDS for Postgres, ElastiCache for Redis), container builds, and networking setup. Note: Initial provisioning for managed data stores might take a few minutes.

Cloud Deployment Results

Post-deployment, your Django app infrastructure includes (among other things):

  • Managed Postgres: AWS RDS instance.
  • Managed Redis: AWS ElastiCache instance.
  • Containers: ECS services with load balancers and DNS configured.

Why Use Defang?

Defang simplifies complex cloud deployments by:

  • Automatically provisioning managed cloud resources.
  • Securely handling sensitive configurations.
  • Providing seamless container orchestration without manual infrastructure setup.

Try It Yourself

Explore deploying your Django applications effortlessly with Defang. The full source code for this example is available on GitHub. Feel free to give it a try, and let us know how it goes!

Happy deploying!

· 7 min read


Anthropic recently unveiled the Model Context Protocol (MCP), “a new standard for connecting AI assistants to the systems where data lives”. However, as Docker pointed out, “packaging and distributing MCP Servers is very challenging due to complex environment setups across multiple architectures and operating systems”. Docker helps to solve this problem by enabling developers to “encapsulate their development environment into containers, ensuring consistency across all team members’ machines and deployments.” The Docker work includes a list of reference MCP Servers packaged up as containers, which you can deploy locally to test your AI application.

However, to put such containerized AI applications into production, you need to be able not only to test locally, but also to deploy the application to the cloud with ease. This is what Defang enables. In this blog and the accompanying sample, we show how to build a sample AI application using one of the reference MCP Servers, run and test it locally using Docker, and, when ready, easily deploy it to the cloud of your choice (AWS, GCP, or DigitalOcean) using Defang.

Sample Model Context Protocol Time Chatbot Application

Using Docker’s mcp/time image and Anthropic Claude, we made a chatbot application that can access time-based resources directly on the user’s local machine and answer time-based questions.

The application is containerized using Docker, enabling a convenient and easy way to get it running locally. We will later demonstrate how we deployed it to the cloud using Defang.

Let’s go over the structure of the application in a local environment.

[Figure: application architecture in a local environment]

General Overview

  1. There are two containerized services, Service 1 and Service 2, that sit on the local machine.
    • Service 1 contains a custom-built web server that interacts with an MCP Client.
    • Service 2 uses an MCP Server from Docker as the base image for the container, plus a custom-built MCP Client we created for interacting with the MCP Server.
  2. We have a browser on our local machine, which interacts with the web server in Service 1.
  3. The MCP Server in Service 2 can access tools in the cloud or on our local machine. This configuration is included as part of the Docker MCP image.
  4. The MCP Client in Service 2 interacts with the Anthropic API and the web server.

Architecture

Service 1: Web Server

Service 1 contains a web server and the UI for a chat application (not shown in the diagram), written in Next.js. The chat UI updates based on user-entered queries and chatbot responses. A POST request is sent to Service 1 every time a user enters a query from the browser. In the web server, a Next.js server action function is used to forward the user queries to the endpoint URL of Service 2 to be processed by the MCP Client.

Service 2: MCP Service Configuration

The original Docker mcp/time image is not designed to be deployed to the cloud; it was created for a seamless experience with Claude Desktop. To achieve cloud deployment, an HTTP layer is needed in front of the MCP Server. To address this, we've bundled an MCP Client together with the Server in one container. The MCP Client provides the HTTP interface and communicates with the MCP Server via standard input/output (stdio).

MCP Client

The MCP Client is written in Python, and runs in a virtual environment (/app/.venv/bin) to accommodate specific package dependencies. The MCP Client is instantiated in a Quart app, where it connects to the MCP Server and handles POST requests from the web server in Service 1. Additionally, the MCP Client connects to the Anthropic API to request LLM responses.
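A condensed sketch of that bridge, using the official MCP Python SDK, is shown below. The route name and the server launch command are assumptions, not the sample's exact code:

# Sketch: the Quart app bridging HTTP to the MCP Server over stdio.
from quart import Quart, jsonify, request
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

app = Quart(__name__)
server = StdioServerParameters(command="python", args=["-m", "mcp_server_time"])

@app.post("/query")  # the web server in Service 1 POSTs user queries here
async def query():
    payload = await request.get_json()
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # offered to Claude (see below)
            # The real app hands payload["query"] plus these tools to the
            # Anthropic API, calling session.call_tool(...) for each tool
            # Claude selects; here we just report the available tool names.
            names = [t.name for t in tools.tools]
    return jsonify({"tools": names})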

MCP Server and Tools (from the Docker Image)

The MCP Server enables access to tools from an external source, whether in the cloud or on the local machine. This configuration is included as part of the Docker MCP image. The tools can be accessed indirectly by the MCP Client through the MCP Server. The Docker image is used as the base image for Service 2, and the MCP Client is built into the same container as the MCP Server. Note that the MCP Server also runs in a virtual environment (/app/.venv/bin).

Anthropic API

The MCP Client connects to the Anthropic API to request responses from a Claude model. Two requests are sent to Claude for each query. The first request will send the query contents and a list of tools available, and let Claude respond with a selection of the tools needed to craft a response. The MCP Client will then call the tools indirectly through the MCP Server. Once the tool results come back to the Client, a second request is sent to Claude with the query contents and tool results to craft the final response.
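A sketch of that two-request flow with the Anthropic Python SDK follows; the model name, helper names, and message shapes are our assumptions, not the sample's exact code:

# Sketch of the two-request tool-use flow described above.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def answer(query: str, tools: list, call_tool) -> str:
    messages = [{"role": "user", "content": query}]
    # Request 1: query plus the available tools; Claude selects the tools to use.
    first = client.messages.create(model="claude-3-5-sonnet-latest",
                                   max_tokens=1024, tools=tools, messages=messages)
    messages.append({"role": "assistant", "content": first.content})
    results = [{"type": "tool_result", "tool_use_id": b.id,
                "content": call_tool(b.name, b.input)}  # via the MCP Server
               for b in first.content if b.type == "tool_use"]
    messages.append({"role": "user", "content": results})
    # Request 2: query plus tool results; Claude crafts the final response.
    final = client.messages.create(model="claude-3-5-sonnet-latest",
                                   max_tokens=1024, tools=tools, messages=messages)
    return "".join(b.text for b in final.content if b.type == "text")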

Setting Up Dockerfiles

Service 1: Web Server - Dockerfile

The base image for Service 1 is the node:bookworm-slim image. We construct the image by copying the server code and setting an entry point command to start the web server.
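A minimal sketch of that Dockerfile; the exact build steps in the sample may differ:

# Service 1 Dockerfile sketch - build steps are assumptions.
FROM node:bookworm-slim
WORKDIR /app
COPY . .
RUN npm ci && npm run build   # install dependencies and build the Next.js app
EXPOSE 3000
ENTRYPOINT ["npm", "start"]   # entry point command to start the web server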

Service 2: MCP Service Configuration - Dockerfile

The base image for Service 2 is the Docker mcp/time image. Since both the MCP Client and Server run in a virtual environment, we activate the venv in the Dockerfile for Service 2 and create a run.sh shell script that runs the file containing the MCP Client and Server connection code. We then add the shell script as the entry point command for the container.
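A sketch of that Dockerfile follows; run.sh is from the text above, but the client file name and dependency list are assumptions:

# Service 2 Dockerfile sketch - exact file names in the sample may differ.
FROM mcp/time
WORKDIR /app
COPY client.py run.sh ./
# "Activate" the image's virtual environment by putting it first on PATH.
ENV PATH="/app/.venv/bin:$PATH"
RUN pip install quart anthropic mcp   # MCP Client dependencies
ENTRYPOINT ["sh", "run.sh"]           # starts the MCP Client + Server bridge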

Compose File

To define Services 1 and 2 as Docker containers, we’ve written a compose.yaml file in the root directory, as shown below.

services:
  service-1: # Web Server and UI
    build:
      context: ./service-1
      dockerfile: Dockerfile
    ports:
      - target: 3000
        published: 3000
        mode: ingress
    deploy:
      resources:
        reservations:
          memory: 256M
    environment:
      - MCP_SERVICE_URL=http://service-2:8000
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/"]

  service-2: # MCP Service (MCP Client and Server)
    build:
      context: ./service-2
      dockerfile: Dockerfile
    ports:
      - target: 8000
        published: 8000
        mode: host
    environment:
      - ANTHROPIC_API_KEY

Testing and Running on Local Machine

Now that we’ve defined our application in Docker containers using a compose.yaml file, we can test and run it on our local machine by running the command:

docker compose up --build

Once the application is started up, it can be easily tested in a local environment. However, to make it easily accessible to others online, we should deploy it to the cloud. Fortunately, deploying the application is a straightforward process using Defang, particularly since the application is Compose-compatible.

Deploying to the Cloud

Let’s go over the structure of the application after cloud deployment.

[Figure: application architecture after cloud deployment]

Here we can see what changes if we deploy to the cloud:

  1. Service 1 and Service 2 are now deployed to the cloud, not on the local machine anymore.
  2. The only part on the local machine is the browser.

Using the same compose.yaml file as shown earlier, we can deploy the containers to the cloud with the Defang CLI. Once we’ve authenticated and logged in, we can choose a cloud provider (i.e. AWS, GCP, or DigitalOcean) and use our own cloud account for deployment. Then, we can set a configuration variable for the Anthropic API key:

defang config set ANTHROPIC_API_KEY=<your-api-key-value>

Then, we can run the command:

defang compose up

Now, the MCP time chatbot application will be up and running in the cloud. This means that anyone can access the application online and try it for themselves!

In our case, anyone can use the chatbot to ask for the exact time or convert between time zones from their machine, regardless of where they are located.

[Figure: MCP time chatbot]

Most importantly, this chatbot application can be adapted to use any of the other Docker reference MCP Server images, not just the mcp/time server.

Have fun building and deploying MCP-based containerized applications to the cloud with Defang!