
· 2 min read

Introducing Our New Whitepaper: Bridging Local Development and Cloud Deployment with Docker Compose and Defang

We’re excited to announce the release of our new whitepaper, "Bridging Local Development and Cloud Deployment with Docker Compose and Defang."


Modern software development moves fast, but deploying to the cloud often remains a complex hurdle. Docker Compose revolutionized local development by providing a simple way to define multi-service apps, but translating that simplicity into cloud deployment has remained challenging—until now.

Defang bridges this gap by extending Docker Compose into native cloud deployments across AWS, GCP, DigitalOcean, and more, all with a single command: defang compose up. This integration empowers developers to:

  • Use familiar Docker Compose definitions for cloud deployment.
  • Enjoy seamless transitions from local to multi-cloud environments.
  • Automate complex infrastructure setups including DNS, networking, autoscaling, managed storage, and even managed LLMs.
  • Estimate cloud costs and choose optimal deployment strategies (affordable, balanced, or high availability).
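For example, a minimal Compose file along these lines (the service names, image, and port are placeholders, not a prescribed layout) is all Defang needs:

services:
  web:
    build: ./app
    ports:
      - mode: ingress
        target: 3000
        published: 3000
  db:
    image: postgres
    x-defang-postgres: true   # provisioned as a managed database in your cloud account
    environment:
      POSTGRES_PASSWORD:      # supplied via `defang config set POSTGRES_PASSWORD`

Running defang compose up then builds the images, provisions the managed resources, and wires up networking and DNS in your cloud account.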

Our whitepaper dives deep into how Docker Compose paired with Defang significantly reduces complexity, streamlines workflows, and accelerates development and deployment.

Discover how Docker + Defang can simplify your journey from local development to production-ready deployments across your preferred cloud providers.

· 3 min read

Defang Compose Update

May was a big month at Defang. We shipped support for managed LLMs in Playground, added MongoDB support on AWS, improved the Defang MCP Server, and dropped new AI samples to make deploying faster than ever.

🚀 Managed LLMs in Playground​

You can now try managed LLMs directly in the Defang Playground. Defang makes it easy to use cloud-native language models across providers — and now you can test them instantly in the Playground.

  • Managed LLM support
  • Playground-ready
  • Available in CLI v1.1.22 or higher

To use managed language models in your own Defang services, just add x-defang-llm: true — Defang will configure the appropriate roles and permissions for you.
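As a minimal sketch (the service name and build context are placeholders), that looks like:

services:
  app:
    build: .
    x-defang-llm: true   # Defang provisions the cloud roles/permissions needed to call the provider's managed LLM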

Already built on the OpenAI API? No need to rewrite anything.

With Defang's OpenAI Access Gateway, you can run your existing apps on Claude, DeepSeek, Mistral, and more — using the same OpenAI format.

Learn more here.

Try it out here.

📦 MongoDB Preview on AWS​

Last month, we added support for MongoDB-compatible workloads on AWS via Amazon DocumentDB.

Just add this to your compose.yaml:

services:
  db:
    x-defang-mongodb: true

Once you add x-defang-mongodb: true, Defang will automatically spin up a DocumentDB cluster in your AWS account — no setup needed.

🛠 MCP Server Improvements​

We've made the MCP Server and CLI easier to use and deploy:

  • Users are now prompted to agree to Terms of Service via the portal login
  • MCP Server and CLI are now containerized, enabling faster setup, smoother deployments, and better portability across environments

🌎 Events and Community​

We kicked off the month by sponsoring Vancouver's first Vibe Coding IRL Sprint. Jordan Stephens from Defang ran a hands-on workshop on "Ship AI Faster with Vertex AI" with GDG Vancouver. Around the same time, our CTO and Co-founder Lio joined the GenAI Founders Fireside panel hosted by AInBC and AWS.

Big moment for the team — we won the Best Canadian Cloud Award at the Vancouver Cloud Summit. Right after, we hit the expo floor at Web Summit Vancouver as part of the BETA startup program and got featured by FoundersBeta as one of the Top 16 Startups to Watch.

Our Campus Advocates also kept the momentum going, hosting Defang events around the world with live demos and workshops.

Last month's Defang Coffee Chat brought together the community for product updates, live demos, and a great convo on vibe deploying.

We're back again on June 25 at 10 AM PST. Save your spot here.

We can't wait to see what you deploy with Defang. Join our Discord to ask questions, get support, and share your builds.

More coming in June.

· 2 min read

Defang Compose Update

April flew by with big momentum at Defang. From deeper investments in the Model Context Protocol (MCP), to deploying LLM-based inferencing apps, to live demos of Vibe Deploying, we're making it easier than ever to go from idea to cloud.

MCP + Vibe Deploying​

This month we focused on making cloud deployments as easy as writing a prompt. Our latest Vibe Deploying blog shows how you can launch full-stack apps right from your IDE just by chatting.

Whether you're working in Cursor, Windsurf, VS Code, or Claude, Defang's MCP integration lets you deploy to the cloud just as easily as conversing with the AI to generate your app. For more details, check out the docs for the Defang Model Context Protocol Server – it explains how it works, how to use it, and why it's a game changer for deploying to the cloud. You can also watch our tutorials for Cursor, Windsurf, and VS Code.

Managed LLMs​

Last month we shipped the x-defang-llm compose service extension to easily deploy inferencing apps that use managed LLM services such as AWS Bedrock. This month, we're excited to announce the same support for GCP Vertex AI – give it a try and let us know your feedback!
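As a quick sketch (assuming your GCP credentials and project are already configured for the Defang CLI), deploying the same Compose project against Vertex AI is mostly a matter of switching the target provider:

export DEFANG_PROVIDER=GCP   # same Compose file, now deployed with Vertex AI as the managed LLM backend
defang compose up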

Events and Programs​

On April 28, we kicked things off with an epic night of demos, dev energy, and cloud magic at RAG & AI in Action. Our own Kevin Vo showed how fast and easy it is to deploy AI apps from Windsurf to the cloud using just the Defang MCP. The crowd got a front-row look at how Vibe Deploying turns cloud infra into a background detail.

Throughout the month, our Campus Advocates also hosted workshops around the world, bringing Defang to new students and builders.

We wrapped up the month with our latest Defang Coffee Chat, featuring live demos, product updates, and a solid conversation around vibe deploying. Thanks to everyone who joined.

The next one is on May 21 at 10 AM PST. Save your spot here.

Looking Ahead​

Here's what's coming in May:

  • Web Summit Vancouver – Defang will be a startup sponsor, please come see us on the expo floor.
  • More MCP tutorials and dev tools.

Let's keep building. 🚀

· 3 min read

"I'm building a project, but it's not really coding. I just see stuff, say stuff, run stuff, and copy-paste stuff. And it mostly works."

– Andrej Karpathy

Welcome to the world of vibe coding, an AI-assisted, intuition-driven way of building software. You do not spend hours reading diffs, organizing files, or hunting through documentation. You describe what you want, let the AI take a pass, and keep iterating until it works.

The Tools of Vibe Coding

Vibe coding would not exist without a new generation of AI-first tools. Here are some of the platforms powering this new workflow.

While each has its own strengths and weaknesses, they all support the basic vibe coding workflow described above.

Using Defang for "Vibe Deployment"

Once your app runs locally with these vibe coding tools, the next question is: how do you get it live in the cloud so you can share it with the world?

That is where Defang comes in.

Defang takes your app, as specified in your docker-compose.yml, and deploys it to the public cloud (AWS, GCP, or DigitalOcean) or the Defang Playground with a single command. It is already used by thousands of developers around the world to deploy their projects to the cloud.

Defang Vibe Deploy

And now with the Defang MCP Server, you can "vibe deploy" your project right from your favorite IDE! Once you have the Defang MCP Server installed (see instructions here), just type "deploy" (or any variation thereof) in the chat. It's that simple! It is built for hobbyists, vibe coders, fast-moving teams, and AI-powered workflows.

Currently, we support deployment to the Defang Playground only, but we'll be adding deployment to public cloud soon.

How it works:

Defang MCP Workflow

The Defang MCP Server connects your coding editor (like VS Code or Cursor) with Defang's cloud tools, so you can ask your AI assistant to deploy your project just by typing a prompt. Natural language commands are by nature imprecise, but the AI in your IDE translates your prompt into the precise Defang command needed to deploy your application to the cloud. Your application also has a formal definition - the compose.yaml file - whether you wrote it or the AI generated it for you. The combination of a formal compose.yaml with a precise Defang command means the resulting deployment is deterministic and reliable. The Defang MCP Server therefore gives you the best of both worlds: the ease and convenience of natural language interaction with the AI, combined with the predictability and reliability of a deterministic deployment.

We are so excited to make Defang even easier to use and more accessible to vibe coders. Give it a try and let us know what you think on our Discord!

· 4 min read

Defang Compose Update

Wow - another month has gone by, time flies when you're having fun!

Let us share some important updates regarding what we achieved at Defang in March:

Managed LLMs: One of the coolest features we have released in a while is support for managed LLMs (such as AWS Bedrock) through the x-defang-llm compose service extension. When coupled with the defang/openai-access-gateway service image, Defang offers the easiest way to migrate your OpenAI-compatible application to cloud-native managed LLMs without making any changes to your code. Support for GCP and DigitalOcean is coming soon.
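As a rough sketch of that pattern (the service names and environment variable below are illustrative assumptions, not the exact sample configuration):

services:
  gateway:
    image: defang/openai-access-gateway   # accepts OpenAI-format requests and forwards them to the managed LLM
    x-defang-llm: true                    # Defang sets up the roles/permissions for the managed LLM service
  app:
    build: .
    environment:
      # Assumption: your existing OpenAI client reads its base URL from an env var,
      # so you point it at the gateway service instead of api.openai.com.
      OPENAI_BASE_URL: http://gateway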

Defang Pulumi Provider: Last month, we announced a preview of the Defang Pulumi Provider, and this month we are excited to announce that V1 is now available in the Pulumi Registry. As much as we love Docker, we realize there are many real-world apps that have components that (currently) cannot be described completely in a Compose file. With the Defang Pulumi Provider, you can now leverage the declarative simplicity of Defang with the imperative power of Pulumi.

Production-readiness: As we onboard more customers, we are fixing many fit-n-finish items:

  1. Autoscaling: Production apps need the ability to easily scale up and down with load, so we've added support for autoscaling. By adding the x-defang-autoscaling: true extension to a service definition in your compose.yaml file, you get automatic scale-out to handle large loads and scale-in when load is low (see the sketch after this list). Learn more here.

  2. New CLI: We've been busy making the CLI more powerful, secure, and intelligent.
     • Smarter Config Handling: The new --random flag simplifies setup by generating secure, random config values, removing the need for manual secret creation. Separately, automatic detection of sensitive data in Compose files helps prevent accidental leaks by warning you before they are deployed. Together, these features improve security and streamline your workflow.
     • Time-Bound Log Tailing: Need to investigate a specific window? Use tail --until to view logs up to a chosen time—no more scrolling endlessly through irrelevant events.
     • Automatic .dockerignore Generation: For projects that don't already have one, the CLI now generates a .dockerignore file, saving you time and reducing image bloat. By excluding common unnecessary files—like .git, node_modules, or local configs—it keeps your builds clean, fast, and secure right from the start, without needing manual setup.

  3. Networking / Reduced costs: We have implemented private networks, as described in the official Compose specification. We have also reduced costs by eliminating the need for a pricey NAT Gateway in "development mode" deployments!
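As promised above, here is a minimal sketch of the autoscaling extension (the service name, build context, and port are placeholders):

services:
  api:
    build: .
    x-defang-autoscaling: true   # scale out under heavy load, scale back in when load drops
    ports:
      - mode: ingress
        target: 8080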

Events and Programs​

In March, we had an incredible evening at the AWS Gen AI Loft in San Francisco! Our CTO and Co-founder Lionello Lunesu demoed how Defang makes deploying secure, scalable, production-ready containerized applications on AWS effortless. Check out the demo here!

We also kicked off the Defang Campus Advocate Program, bringing together advocates from around the world. After launching the program in February, it was amazing to see the energy and momentum already building on campuses world-wide. Just as one example, check out this post from one of the students who attended a session hosted by our Campus Advocate Swapnendu Banerjee and then went on to deploy his project with Defang. This is what we live for!

We wrapped up the month with our monthly Coffee Chat, featuring the latest Defang updates, live demos, and a conversation on vibe coding. Thanks to everyone who joined. The next one is on April 30. Save your spot here.

As always, we appreciate your feedback and are committed to making Defang even better. Deploy any app to any cloud with a single command. Go build something awesome!

· 4 min read

In this guide, we'll walk through the easiest and fastest way to deploy a full-featured Django application—including real-time chat and background task processing—to the cloud using Defang. You'll see firsthand how simple Defang makes it to deploy apps that require multiple services like web servers, background workers, Redis, and Postgres.

Clone the repo​

Before we get started, you'll want to clone the repo with the app code, here.

Overview of Our Django Application​

We're deploying a real-time chat application that includes automatic moderation powered by a background worker using the Natural Language Toolkit (NLTK). The application structure includes:

  • Web Service: Django app with chat functionality using Django Channels for real-time interactions.
  • Worker Service: Background tasks processing messages for profanity and sentiment analysis.
  • Postgres Database: Managed database instance for persistent storage.
  • Redis Broker: Managed Redis instance serving as the broker for Celery tasks and Django Channels.
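To make the shape of the stack concrete, here is a rough sketch of a Compose file for an app like this (service names, images, and commands are illustrative, not the repo's exact configuration):

services:
  web:
    build: .
    command: daphne -b 0.0.0.0 -p 8000 config.asgi:application   # hypothetical ASGI entrypoint for Django Channels
    ports:
      - mode: ingress
        target: 8000
    environment:
      DJANGO_SECRET_KEY:    # set via `defang config set DJANGO_SECRET_KEY`
      POSTGRES_PASSWORD:    # set via `defang config set POSTGRES_PASSWORD`
  worker:
    build: .
    command: celery -A config worker -l info   # hypothetical Celery worker entrypoint
  redis:
    image: redis
    x-defang-redis: true      # managed Redis (ElastiCache on AWS)
  db:
    image: postgres
    x-defang-postgres: true   # managed Postgres (RDS on AWS)
    environment:
      POSTGRES_PASSWORD: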

Running Locally​

To run the app locally, we use Docker Compose, splitting configurations into two YAML files:

  • compose.yaml: Production configuration.
  • compose.dev.yaml: Development overrides extending production.

You can quickly spin up the application locally with:

docker compose --env-file .env.dev -f compose.dev.yaml up --build

This runs the app with autoreloading so you can iterate on the Django code, while passing environment variables the same way we will later with Defang's secure configuration system, keeping the project ready to deploy to production.

Application Features​

Real-time Chat​

Using Django Channels and Redis, users can engage in real-time conversations within chat rooms.

Background Moderation Tasks​

The worker service runs independently, handling moderation tasks asynchronously. It uses NLTK to:

  • Check for profanity.
  • Perform sentiment analysis.
  • Automatically flag negative or inappropriate messages.

This decouples resource-intensive tasks from the main API server, ensuring optimal application responsiveness. The demo isn't doing anything very complicated, but you could easily run machine learning models with access to GPUs with Defang if you needed to.

Django Admin​

The Django admin is set up to quickly visualize messages and their moderation status. Access it at /admin with the superuser credentials created by default when you first run or deploy: username admin and password admin.

Deploying with Defang​

Deploying multi-service applications to cloud providers traditionally involves complex infrastructure setup, including configuring ECS clusters, security groups, networking, and more. Defang simplifies this significantly.

Deploying to Defang Playground​

The Defang Playground lets you quickly preview your deployed app in a managed environment.

Secure Configuration

Before deploying, securely set encrypted sensitive values:

defang config set DJANGO_SECRET_KEY
defang config set POSTGRES_PASSWORD

Then run the deployment command:

defang compose up

Defang automatically:

  • Builds Docker containers.
  • Sets up required services.
  • Manages networking and provisioning.

Once deployed, your app is accessible via a public URL provided by Defang, which you can find in the CLI output or in our portal at https://portal.defang.io

Deploying to Your Own Cloud​

To deploy directly into your AWS account (or other supported providers):

  1. Set your cloud provider:

In my case, I use an AWS profile, but you should be able to use any of the authentication methods supported by the AWS CLI:

export DEFANG_PROVIDER=AWS
export AWS_PROFILE=your-profile-name

Secure Configuration

Before deploying, securely set encrypted sensitive values in your cloud account:

defang config set DJANGO_SECRET_KEY
defang config set POSTGRES_PASSWORD

  2. Deploy:

defang compose up

Defang handles provisioning managed services (RDS for Postgres, ElastiCache for Redis), container builds, and networking setup. Note: Initial provisioning for managed data stores might take a few minutes.

Cloud Deployment Results​

Post-deployment, your Django app infrastructure includes (among other things):

  • Managed Postgres: AWS RDS instance.
  • Managed Redis: AWS ElastiCache instance.
  • Containers: ECS services with load balancers and DNS configured.

Why Use Defang?​

Defang simplifies complex cloud deployments by:

  • Automatically provisioning managed cloud resources.
  • Securely handling sensitive configurations.
  • Providing seamless container orchestration without manual infrastructure setup.

Try It Yourself​

Explore deploying your Django applications effortlessly with Defang. The full source code for this example is available on GitHub. Feel free to give it a try, and let us know how it goes!

Happy deploying!

· 4 min read

Defang Compose Update

When we refreshed the Defang brand, we knew our website needed more than just a fresh coat of paint. It needed to become a more dynamic part of our stack. We needed some parts to be more flexible, some to be more interactive, and the whole site to be better aligned with how modern apps are organized and deployed. And what better way to get it there than to deploy it with Defang itself?

This is part of our ongoing "Defang on Defang" series, where we show how we're using our own tool to deploy all the services that power Defang. In this post, we're diving into how we turned our own website into a project to better understand how Defang can be used to deploy a dynamic Next.js app and how we can improve the experience for developers.


From S3 + CloudFront to Dynamic, Containerized Deployments​

Our original site was a Next.js app using static exports deployed via S3 and fronted by CloudFront. That setup worked for a while—it was fast and simple. But with our brand refresh, we added pages and components where it made sense to use (and test for other developers) some Next.js features that we couldn't use with the static export.

That meant static hosting wouldn't cut it. So we decided to run the site as an app in a container.

That being said, our learnings from the previous setup are being used to develop the capabilities of Defang. We're using the experience to make sure that Defang can handle the deployment of static sites as well as dynamic ones. We'll keep you updated when that's ready.


Deploying with Defang (and Why It Was Easy)​

We already deploy our other services with Defang using Compose files. In fact, the static website actually already used a Dockerfile and Compose file to manage the build process. So we just had to make some minor changes to the Compose file to take into account new environment variables for features we're adding and make a few small changes to the Dockerfile to handle the new build process.

Some things we had to change:

Adding ports to the Compose file:

    ports:
      - mode: ingress
        target: 3000
        published: 3000

Adding domain info to the Compose file:

    domainname: defang.io
    networks:
      default:
        aliases:
          - www.defang.io

One other hiccup was that we used to do www to non-www redirects using S3. There are a few ways to switch that up, but for the time being we decided to use Next.js middleware.
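For reference, here's a minimal sketch of that kind of middleware (not our exact implementation):

import { NextRequest, NextResponse } from 'next/server';

// Redirect www.defang.io requests to the apex domain, preserving path and query.
export function middleware(request: NextRequest) {
  const host = request.headers.get('host') ?? '';
  if (host.startsWith('www.')) {
    const url = new URL(request.url);
    url.host = host.replace(/^www\./, '');
    return NextResponse.redirect(url, 308); // permanent redirect
  }
  return NextResponse.next();
}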

Pretty soon after that, the site was up and running in an AWS account—with TLS, DNS, and both the www and root domains automatically configured. Pretty straightforward!


Real-World Lessons That Are Shaping Defang​

Deploying the website wasn't just a checkbox—it helped surface real-world pain points and ideas for improvement.

1. Static Assets Still Need CDNs​

Even though the site is dynamic now, we still want assets like /_next/static to load quickly from a CDN. This made it clear that CDN support—like CloudFront integration—should be easier to configure in Defang. That’s now on our roadmap. That's also going to be useful for other frameworks that use similar asset paths, like Django.

2. Next.js Env Vars Can Be Tricky in Containers​

Next.js splits env vars between build-time and runtime, and the rules aren’t always obvious. Some need to be passed as build args, and others as runtime envs. That made us think harder about how Defang could help clarify or streamline this for developers—even if we can’t change that aspect of Next.js itself.
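As a hedged illustration of the split (the variable names are hypothetical): NEXT_PUBLIC_* values are inlined into the client bundle at build time, so they have to be passed as build args, while server-only values can remain runtime environment variables:

services:
  web:
    build:
      context: .
      args:
        NEXT_PUBLIC_API_URL: ${NEXT_PUBLIC_API_URL}   # build-time: baked into the client bundle
    environment:
      DATABASE_URL: ${DATABASE_URL}                   # runtime: read on the server per request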

3. Redirects and Rewrites​

We had to add a middleware to handle www to non-www redirects. This is a common need, so we're keeping an eye on how we can make this easier to deal with in Defang projects.

These are the kinds of things we only notice by using Defang on real-world projects.


The Takeaway​

Our site now runs like the rest of our infrastructure:

  • Fully containerized
  • Deployed to our own AWS account
  • Managed with a Compose file
  • Deployed with Defang

Stay tuned for the next post in the series—because this is just one piece of the puzzle.

· 3 min read

Defang Compose Update

Well, that went by quick! Seems like it was just a couple of weeks ago that we published the Jan update, and it’s already time for the next one. Still, we do have some exciting progress to report in this short month!

  1. Pulumi Provider: We are excited to announce a Preview of the Defang Pulumi Provider. With the Defang Pulumi Provider, you can leverage all the power of Defang with all of the extensibility of Pulumi. Defang will provision infrastructure to deploy your application straight from your Compose file, while allowing you to connect that deployment with other resources you deploy to your cloud account. The new provider makes it easy to leverage Defang if you’re already using Pulumi, and it also provides an upgrade-path for users who need more configurability than the Compose specification can provide.
  2. Portal Update: We are now fully deploying our portal with Defang alone using the defang compose up command. Our original portal architecture was designed before we supported managed storage so we used to use Pulumi to provision and connect external storage. But since we added support in Compose to specify managed storage, we can fully describe our Portal using Compose alone. This has allowed us to rip out hundreds of lines of code and heavily simplify our deployments. To learn more about how we do this, check out our Defang-Deployed-with-Defang (Part 1) blog.
  3. Open-Auth Contribution: In the past couple months we have been communicating with the OpenAuth maintainers and contributors via PRs (#120, #156) and Issues (#127) to enable features like local testing with DynamoDB, enabling support for scopes, improving standards alignment, supporting Redis, and more. We are rebuilding our authentication systems around OpenAuth and are excited about the future of the project.

Events and Social Media

February was an exciting month for the Defang team as we continued to engage with the developer community and showcase what’s possible with Defang. We sponsored and demoed at the DevTools Vancouver meetup, and also sponsored the Vancouver.dev IRL: Building AI Startups event. At the AWS Startup Innovation Showcase in Vancouver, our CTO Lio demonstrated how Defang makes it effortless to deploy secure, scalable, and cost-efficient serverless apps on AWS! And finally, we had a great response to our LinkedIn post on the Model Context Protocol, catching the attention of many observers, including some of our key partners.

We are eager to see what you deploy with Defang. Join our Discord to ask any questions, see what others are building, and share your own experience with Defang. And stay tuned for more to come in March!

· 5 min read

Defang Compose Update

Deploying applications is hard. Deploying complex, multi-service applications is even harder. When we first built the Defang Portal, we quickly recognized the complexity required to deploy it, even with the early Defang tooling helping us simplify it a lot. But we’ve worked a lot to expand Defang’s capabilities over the last year+ so it could take on more of the work and simplify that process.

This evolution wasn’t just based on our own instincts and what we saw in the Portal—it was informed by listening to developers who have been using Defang, as well as our experience building dozens of sample projects for different frameworks and languages. Each time we build a new sample, we learn more about the different requirements of various types of applications and developers and refine Defang’s feature set accordingly. The Portal became an extension of this learning process, serving as both a proving ground and an opportunity to close any remaining gaps, since it’s one of the most complex things we’ve built with Defang.

Finally, though, the Defang Portal—an application with six services, including two managed data stores, authentication, and a frontend—is now fully deployed with just Docker Compose files and the Defang CLI, running in GitHub Actions.


The Initial Setup: A More Complex Deployment​

The Portal isn’t a simple static website; it’s a full-stack application with the following services:

  • Next.js frontend – Including server components and server actions.
  • Hasura (GraphQL API) – Serves as a GraphQL layer.
  • Hono (TypeScript API) – Lightweight API for custom business logic.
  • OpenAuth (Authentication Service) – Manages authentication flows.
  • Redis – Used for caching and session storage.
  • Postgres – The main database.

Initially, we provisioned databases and some DNS configurations using Infra-as-Code because Defang couldn’t yet manage them for us. We also deployed the services themselves manually through infrastructure-as-code, requiring us to define each service separately.

This worked, but it felt unnecessarily complex for something the right tooling should be able to handle.


The Transition: Expanding Defang to Reduce Complexity​

We’ve made it a priority over the last year to expand Defang’s capabilities so it could take on more of the heavy lifting for a complex application like the Portal. In that time, we’ve added features to handle things like:

  • Provisioning databases, including managing passwords and other secrets securely
  • Config interpolation using values stored in AWS SSM, ensuring the same Compose file works both locally and in the cloud
  • Provisioning certs and managing DNS records from configuration in the Compose file.

As a result, we reached a point where we no longer needed custom infrastructure definitions for most of our deployment.

What Changed?​

  • Previously: GitHub Actions ran infra-as-code scripts to provision databases, manage DNS, and define services separately from the Docker Compose file we used for local dev.
  • Now: Our Defang GitHub Action targets normal Compose files and deploys everything, using secrets and variables managed in GitHub Actions environments.
  • Result: We eliminated hundreds of lines of Infra-as-Code, making our deployment leaner and easier to manage and reducing the differences between running the Portal locally and running it in the cloud.

This wasn’t just about reducing complexity—it was also a validation exercise. We knew that Defang had evolved enough to take over much of our deployment, but by going through the transition process ourselves, we could identify and close the remaining gaps and make sure our users could really make use of Defang for complex production-ready apps.


How Deployment Works Today​

Config & Secrets Management​

  • Sensitive configuration values (database credentials, API keys) are stored securely in AWS SSM using Defang’s configuration management tooling.
  • Environment variable interpolation allows these SSM-stored config values to be referenced directly in the Compose file, ensuring the same configuration works in local and cloud environments.
  • Defang provisions managed Postgres and Redis instances automatically when using the x-defang-postgres and x-defang-redis extensions, securely injecting credentials where needed with variable interpolation.
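Put together, the pattern looks roughly like this (service and variable names are illustrative, not the Portal's actual Compose file):

services:
  api:
    build: ./api
    environment:
      # ${POSTGRES_PASSWORD} is interpolated from Defang config (backed by AWS SSM)
      DATABASE_URL: postgres://portal:${POSTGRES_PASSWORD}@db:5432/portal
  db:
    image: postgres
    x-defang-postgres: true   # provisioned as managed Postgres
    environment:
      POSTGRES_PASSWORD:      # set with `defang config set POSTGRES_PASSWORD`
  cache:
    image: redis
    x-defang-redis: true      # provisioned as managed Redis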

Deployment Modes​

  • Deployment modes (affordable, balanced, high_availability) adjust infrastructure settings across our dev/staging/prod deployments making sure dev is low cost, and production is secure and resilient.
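For example (we believe the current CLI exposes this as a --mode flag, but treat the exact spelling as an assumption), each environment's pipeline picks its mode at deploy time:

defang compose up --mode=affordable          # dev: lowest cost
defang compose up --mode=high_availability   # prod: resilient and secure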

DNS & Certs​

CI/CD Integration​

  • Previously: GitHub Actions ran custom infra-as-code scripts.
  • Now: The Defang GitHub Action installs Defang automatically and runs defang compose up, simplifying deployment.
  • Result: A streamlined, repeatable CI/CD pipeline.

The Takeaway: Why This Matters​

By transitioning to fully Compose-based deployments, we:

  • ✅ Eliminated hundreds of lines of Infra-as-Code
  • ✅ Simplified configuration management with secure, environment-aware secrets handling
  • ✅ Streamlined CI/CD with a lightweight GitHub Actions workflow
  • ✅ Simplified DNS and cert management

Every sample project we built, every conversation we had with developers, and every challenge we encountered with the Portal helped us get to this point, where we could focus on closing the last few gaps to deploying everything from a Compose file.

· 2 min read

Defang New Look

Over the last couple of years, as we have been building Defang, we've learned a lot about the key needs of developers deploying their applications to the cloud: the primacy of a simple developer experience, paired with a flexible, production-ready solution that works seamlessly with all of the popular cloud platforms.

In response, we have been constantly evolving our product functionality to address those needs in the simplest yet most powerful way we can come up with. While certainly there is a long way to go, we have definitely come a long way since we started.

Why the Refresh?​

As we reflected on our journey, we realized our branding and messaging needed to better reflect Defang's current value-proposition. That's why today, we're excited to unveil our brand refresh, our first since the early days of Defang.

Here's what's new:​

1. Refining Our Messaging​

As Defang evolves, so does our message:

  • Our Promise: Develop Anything, Deploy Anywhere.
  • What We Enable: Any App, Any Stack, Any Cloud.
  • How It Works: Take your app from Docker Compose to a secure, scalable deployment on your favorite cloud in minutes.

2. A Modernized Logo

We've modernized our logo while keeping the core hexagonal design. The new look symbolizes Defang's role in seamlessly deploying any application to any cloud.

3. A Redesigned Website​

We've refreshed our website with a sleek, intuitive design and a modern user experience to better showcase Defang's capabilities.

Rolling Out the Refresh​

Starting today, you'll see these updates across our Defang.io homepage and social media platforms (Twitter, LinkedIn, Discord, BlueSky). In the coming days, we'll extend this refresh across all our digital assets.

We'd Love Your Feedback!​

Check out the new look and let us know what you think! And if you haven't, please join us on Discord and follow us on social media.