June was a big month at Defang. We rolled out powerful features across our CLI, Playground, and Portal, expanded support for both AWS and GCP, and introduced new tools to help you ship faster and smarter. From real-time cloud cost estimation to internal infra upgrades and community highlights, here’s everything we accomplished.
We just launched something we’re really excited about: live AWS cost estimation before you deploy.
Most devs ship to the cloud without knowing what it's going to cost, and that's exactly the problem we're solving. With Defang, you can now estimate the cost of deploying a Docker Compose application and choose the deployment mode (affordable, balanced, or high_availability) that best suits your needs.
In June, we launched a full-stack starter kit for building real-time RAG and multi-agent apps with CrewAI + Defang.
It’s designed to help you move fast with a production-style setup — including Django, Celery, Channels, Postgres (with pgvector), Redis for live updates, and Dockerized model runners you can easily customize. CrewAI handles the agent workflows, and with Defang, you can deploy the whole thing to the cloud in a single command.
Whether you’re building a smart Q&A tool or a multi-agent research assistant, this stack gives you everything you need to get started.
We’ve added active deployment information to the Defang Portal. You can now see your currently active deployments across various cloud providers and understand the details of each, while still managing your cloud environments through the provider’s own tools (e.g. the AWS Console).
Internally, we also hit a big milestone: The Defang Playground now runs on both AWS and GCP, showing the power of Defang’s multi-cloud infrastructure. We’ve also enabled load balancing between the two platforms and plan to share a detailed blog post on how it works soon.
You can now try out the Ask Defang chatbot directly within Intercom! This new integration makes it easier than ever to get instant answers and support while you work. Ask Defang itself is deployed using Defang to our own cloud infrastructure.
And one more thing: bridging local development and cloud deployment just got easier. We’ve published white papers on how Defang extends Docker Compose and GCP workflows to the cloud — using familiar tools at scale. An AWS white paper is coming soon.
In June, we showcased a powerful new demo at AWS events: “What If You Could See AWS Costs Before You Deployed?” Jordan Stephens walked through how to go from Docker Compose to AWS infra with real-time cost estimates and easy teardown, all via Defang.
Let's be honest: every developer who's played with LLMs gets that rush of "wow" from the first working demo. But the real headaches show up when you need to stitch LLMs into something production-grade: an app that can pull in real data, coordinate multi-step logic, and more. Suddenly, you're not just writing single prompts. You're coordinating between multiple prompts, managing queues, adding vector databases, orchestrating workers, and trying to get things back to the user in real time. We've found that CrewAI (coordinating prompts, agents, and tools) + Django (building an API, managing data), with a bit of Celery (orchestrating workers and async tasks), is a really nice set of tools for this. We're also going to use Django Channels (real-time updates) to push updates back to the user. And of course, we'll use Defang to deploy all of that to the cloud.
If this sounds familiar (or if you're dreading the prospect of dealing with it), you’re the target audience for this sample. Instead of slogging through weeks of configuration and permissions hell, you get a ready-made template that runs on your laptop, then scales—unchanged—to Defang’s Playground, and finally to your own AWS or GCP account. All the gnarly infra is abstracted, so you can focus on getting as much value as possible out of that magical combo of CrewAI and Django.
Imagine you're building a system. It might use multiple LLM calls. It might do complex, branching logic in its prompts. It might need to store embeddings to retrieve things in the future, either to pull them into a prompt, or to return them outright. It might need to store other records that don't have embeddings. Here's a very lightweight version of a system like that, as a starting point:
Behind the scenes, the workflow is clean and powerful. The browser connects via WebSockets to our app using Django Channels. Heavy work is pushed to a Celery worker. That worker generates an embedding, checks Postgres with pgvector for a match, and either returns the summary or, if there’s no hit, fires up a CrewAI agent to generate one. Every update streams back through Redis and Django Channels so users get progress in real time.
Durable state lives in Postgres and Redis. Model services (LLMs and embeddings) are fully swappable, so you can upgrade to different models in the cloud or localize with the Docker Model Runner without rewriting the full stack.
The Django app is the front door, routing HTTP and WebSocket traffic, serving up the admin, and delivering static content. It’s built on Daphne and Django Channels, with Redis as the channel layer for real-time group events. Django’s admin is your friend here: to start you can check what summaries exist, but if you start building out your own app, it'll make it a breeze to debug and manage your system.
This is where your data lives. Summaries and their 1024-dimension embeddings go here. A simple SQL query checks for close matches by cosine distance, and pgvector’s index keeps search blazing fast. In BYOC (bring-your-own-cloud) mode, flip a single flag and Defang provisions you a production-grade RDS instance.
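To make this concrete, here's a minimal sketch of the database service in Compose form (the image tag and password are illustrative, and the x-defang-postgres flag, described below, is what requests managed Postgres in BYOC mode):

services:
  db:
    image: pgvector/pgvector:pg16   # Postgres with the pgvector extension included
    x-defang-postgres: true         # in BYOC mode, Defang provisions managed Postgres (e.g. RDS)
    environment:
      POSTGRES_PASSWORD: example    # for real deployments, set secrets with defang config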
Redis is doing triple duty: as the message broker and result backend for Celery, and as the channel layer for real-time WebSocket updates. The pub/sub system lets a single worker update all browser tabs listening to the same group. And if you want to scale up, swap a flag and Defang will run managed ElastiCache in production. No code change required.
The Celery worker is where the magic happens. It takes requests off the queue, generates embeddings, checks for similar summaries, and—if necessary—invokes a CrewAI agent to get a new summary. It then persists summaries and pushes progress updates back to the user.
Thanks to Docker Model Runner, the LLM and embedding services run as containerized, OpenAI-compatible HTTP endpoints. Want to switch to a different model? Change a single line in your compose file. Environment variables like LLM_URL and EMBEDDING_MODEL are injected for you—no secret sharing or hard-coding required.
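As a rough sketch of that local setup (the provider syntax requires a recent version of Docker Compose, and the model names here are illustrative):

services:
  app:
    build: ./app
    depends_on:
      - llm
      - embedding                     # Compose injects LLM_URL, EMBEDDING_MODEL, etc. into app
  llm:
    provider:
      type: model
      options:
        model: ai/gemma3              # swap the chat model by changing this one line
  embedding:
    provider:
      type: model
      options:
        model: ai/mxbai-embed-large   # embedding model (illustrative name)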
With CrewAI, your agent logic is declarative and pluggable. This sample keeps it simple—a single summarization agent—but you can add classification, tool-calling, or chain-of-thought logic without rewriting your task runner.
In local dev, your compose.local.yaml spins up Gemma and Mixedbread models, running fully locally and with no cloud credentials or API keys required. URLs for service-to-service communication are injected at runtime. When you’re ready to deploy, swap in the main compose.yaml which adds Defang’s x-defang-llm, x-defang-redis, and x-defang-postgres flags. Now, Defang maps your Compose intent to real infrastructure—managed model endpoints, Redis, and Postgres—on cloud providers like AWS or GCP. It handles all networking, secrets, and service discovery for you. There’s no YAML rewriting or “dev vs prod” drift.
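Condensed, the cloud-facing compose.yaml looks something like this (images and service names are illustrative; the x-defang-* extensions are the only Defang-specific parts):

services:
  app:
    build: ./app
  llm:
    x-defang-llm: true        # mapped to a managed model endpoint (e.g. Bedrock or Vertex AI)
  redis:
    image: redis:7
    x-defang-redis: true      # mapped to managed Redis (e.g. ElastiCache) in BYOC mode
  db:
    image: pgvector/pgvector:pg16
    x-defang-postgres: true   # mapped to managed Postgres (e.g. RDS) in BYOC mode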
You can run everything on your laptop with a single docker compose -f ./compose.local.yaml up command—no cloud dependencies, fast iteration, and no risk of cloud charges. When you’re ready for the next step, use defang compose up to push to the Defang Playground. This free hosted sandbox is perfect for trying Defang, demos, or prototyping. It automatically adds TLS to your endpoints and sleeps after a week. For production, use your own AWS or GCP account. DEFANG_PROVIDER=aws defang compose up maps each service to a managed equivalent (ECS, RDS, ElastiCache, Bedrock models), wires up secrets, networking, etc. Your infra. Your data.
This sample uses vector similarity to fetch summaries that are semantically similar to the input. For more robust results, you might want to embed the original input. You can also consider chunking longer content for finer-grained matches that you can integrate into your CrewAI workflows. Real-time progress via Django Channels beats HTTP polling, especially for LLM tasks that can take a while. And the app service is stateless, which means you can scale it horizontally just by adding more containers, which is easy to specify in your compose file, as sketched below.
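For example, scaling out the stateless app tier is a one-line declaration using standard Compose fields (a sketch; the service name is illustrative):

services:
  app:
    build: ./app
    deploy:
      replicas: 3   # run three interchangeable copies of the stateless web tier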
You're not limited to a single summarization agent. CrewAI makes it trivial to add multi-agent flows (classification, tool use, knowledge retrieval). For big docs, chunk-level embeddings allow granular retrieval. You can wire in tool-calling to connect with external APIs or databases, and you can integrate more deeply with Django's ORM and the pgvector tooling demoed in the sample to build more complex agents that actually use RAG.
With this sample, you've got an agent-ready, RAG-ready backend that runs anywhere, with no stacks of YAML and no vendor lock-in. Fork it, extend it, productionize it: scale up, add more agents, or swap in different models.
Quickstart:
# Local
docker compose -f compose.local.yaml up --build

# Playground
defang compose up

# BYOC
# Set up credentials and then swap <provider> with aws or gcp
DEFANG_PROVIDER=<provider> defang compose up
Want more? File an issue to request a sample—we'll do everything we can to help you deploy better and faster!
Modern software development moves fast, but deploying to the cloud often remains a complex hurdle. Docker Compose revolutionized local development by providing a simple way to define multi-service apps, but translating that simplicity into cloud deployment has remained challenging—until now.
Defang bridges this gap by extending Docker Compose into native cloud deployments across AWS, GCP, DigitalOcean, and more, all with a single command: defang compose up. This integration empowers developers to:
• Use familiar Docker Compose definitions for cloud deployment.
• Enjoy seamless transitions from local to multi-cloud environments.
• Automate complex infrastructure setups, including DNS, networking, autoscaling, managed storage, and even managed LLMs.
• Estimate cloud costs and choose the optimal deployment strategy (affordable, balanced, or high availability).
Our whitepaper dives deep into how Docker Compose paired with Defang significantly reduces complexity, streamlines workflows, and accelerates development and deployment.
Discover how Docker + Defang can simplify your journey from local development to production-ready deployments across your preferred cloud providers.
May was a big month at Defang. We shipped support for managed LLMs in Playground, added MongoDB support on AWS, improved the Defang MCP Server, and dropped new AI samples to make deploying faster than ever.
You can now try managed LLMs directly in the Defang Playground.
Defang makes it easy to use cloud-native language models across providers — and now you can test them instantly in the Playground.
• Managed LLM support
• Playground-ready
• Available in CLI v1.1.22 or higher
To use managed language models in your own Defang services, just add x-defang-llm: true — Defang will configure the appropriate roles and permissions for you.
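For example (the service itself is hypothetical; the extension is the only Defang-specific line):

services:
  my-llm-app:
    build: ./app
    x-defang-llm: true   # Defang configures the roles and permissions for managed LLM access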
Already built on the OpenAI API? No need to rewrite anything.
With Defang's OpenAI Access Gateway, you can run your existing apps on Claude, DeepSeek, Mistral, and more — using the same OpenAI format.
April flew by with big momentum at Defang. From deeper investments in the Model Context Protocol (MCP), to deploying LLM-based inferencing apps, to live demos of Vibe Deploying, we're making it easier than ever to go from idea to cloud.
This month we focused on making cloud deployments as easy as writing a prompt. Our latest Vibe Deploying blog shows how you can launch full-stack apps right from your IDE just by chatting.
Whether you're working in Cursor, Windsurf, VS Code, or Claude, Defang's MCP integration lets you deploy to the cloud just as easily as conversing with the AI to generate your app. For more details, check out the docs for the Defang Model Context Protocol Server – it explains how it works, how to use it, and why it's a game changer for deploying to the cloud. You can also watch our tutorials for Cursor, Windsurf, and VS Code.
Last month we shipped the x-defang-llm compose service extension to easily deploy inferencing apps that use managed LLM services such as AWS Bedrock. This month, we're excited to announce the same support for GCP Vertex AI – give it a try and let us know your feedback!
On April 28, we kicked things off with an epic night of demos, dev energy, and cloud magic at RAG & AI in Action. Our own Kevin Vo showed how fast and easy it is to deploy AI apps from Windsurf to the cloud using just the Defang MCP. The crowd got a front-row look at how Vibe Deploying turns cloud infra into a background detail.
We finished the month with our signature Defang Coffee Chat, a casual hangout featuring live demos, product updates, live Q&A, and a solid conversation around vibe deploying. Thanks to everyone who joined. Our Campus Advocates also hosted workshops around the world, bringing Defang to new students and builders.
The next one is on May 21 at 10 AM PST. Save your spot here.
Welcome to the world of vibe coding, an AI-assisted, intuition-driven way of building software. You do not spend hours reading diffs, organizing files, or hunting through documentation. You describe what you want, let the AI take a pass, and keep iterating until it works.
The Tools of Vibe Coding
Vibe coding would not exist without a new generation of AI-first tools. Here are some of the platforms powering this new workflow.
Defang takes your app, as specified in your docker-compose.yml, and deploys it to the public cloud (AWS, GCP, or DigitalOcean) or the Defang Playground with a single command. It is already used by thousands of developers around the world to deploy their projects to the cloud.
And now with the Defang MCP Server, you can "vibe deploy" your project right from your favorite IDE! Once you have the Defang MCP Server installed (see instructions here), just type "deploy" (or any variation thereof) in the chat. It's that simple! It is built for hobbyists, vibe coders, fast-moving teams, and AI-powered workflows.
Currently, we support deployment to the Defang Playground only, but we'll be adding deployment to public cloud soon.
How it works:
The Defang MCP Server connects your coding editor (like VS Code or Cursor) with Defang's cloud tooling, so you can ask your AI assistant to deploy your project just by typing a prompt. While natural-language commands are by nature imprecise, the AI in your IDE translates your prompt into the precise Defang command needed to deploy your application to the cloud. Your application also has a formal definition - the compose.yaml file - either written by you or generated by the AI. The combination of a formal compose.yaml with a precise Defang command means the resulting deployment is deterministic and reliable. The Defang MCP Server thus gives you the best of both worlds: the ease and convenience of natural-language interaction with the AI, combined with the predictability and reliability of a deterministic deployment.
We are so excited to make Defang even easier to use and more accessible to vibe coders. Give it a try and let us know what you think on our Discord!
Defang Pulumi Provider: Last month, we announced a preview of the Defang Pulumi Provider, and this month we are excited to announce that V1 is now available in the Pulumi Registry. As much as we love Docker, we realize there are many real-world apps that have components that (currently) cannot be described completely in a Compose file. With the Defang Pulumi Provider, you can now leverage the declarative simplicity of Defang with the imperative power of Pulumi.
Production-readiness: As we onboard more customers, we are fixing many fit-and-finish items:
Autoscaling: Production apps need to scale up and down easily with load, so we've added support for autoscaling. By adding the x-defang-autoscaling: true extension to a service definition in your compose.yaml file, you get automatic scale-out to handle large loads and scale-in when load is low. Learn more here.
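In a Compose file, that looks like this (a sketch; the service details are illustrative):

services:
  api:
    build: ./api
    x-defang-autoscaling: true   # scale out under heavy load, scale back in when load drops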
New CLI: We've been busy making the CLI more powerful, secure, and intelligent.
• Smarter Config Handling: The new --random flag simplifies setup by generating secure, random config values, removing the need for manual secret creation. Separately, automatic detection of sensitive data in Compose files helps prevent accidental leaks by warning you before they are deployed. Together, these features improve security and streamline your workflow.
• Time-Bound Log Tailing: Need to investigate a specific window? Use tail --until to view logs up to a chosen time - no more endless scrolling. Skip the irrelevant events and focus your investigation.
• Automatic generation of a .dockerignore file for projects that don't already have one, saving you time and reducing image bloat. By excluding common unnecessary files—like .git, node_modules, or local configs—it helps keep your builds clean, fast, and secure right from the start, without needing manual setup.
Networking / Reduced costs: We have implemented private networks, as described in the official Compose specification. We have also reduced costs by eliminating the need for a pricey NAT Gateway in "development mode" deployments!
In March, we had an incredible evening at the AWS Gen AI Loft in San Francisco! Our CTO and Co-founder Lionello Lunesu demoed how Defang makes deploying secure, scalable, production-ready containerized applications on AWS effortless. Check out the demo here!
We also kicked off the Defang Campus Advocate Program, bringing together advocates from around the world. After launching the program in February, it was amazing to see the energy and momentum already building on campuses worldwide. Just as one example, check out this post from one of the students who attended a session hosted by our Campus Advocate Swapnendu Banerjee and then went on to deploy his project with Defang. This is what we live for!
We wrapped up the month with our monthly Coffee Chat, featuring the latest Defang updates, live demos, and a conversation on vibe coding. Thanks to everyone who joined. The next one is on April 30. Save your spot here.
As always, we appreciate your feedback and are committed to making Defang even better. Deploy any app to any cloud with a single command. Go build something awesome!
Anthropic recently unveiled the Model Context Protocol (MCP), “a new standard for connecting AI assistants to the systems where data lives”. However, as Docker pointed out, “packaging and distributing MCP Servers is very challenging due to complex environment setups across multiple architectures and operating systems”. Docker helps to solve this problem by enabling developers to “encapsulate their development environment into containers, ensuring consistency across all team members’ machines and deployments.” The Docker work includes a list of reference MCP Servers packaged up as containers, which you can deploy locally and test your AI application.
However, to put such containerized AI applications into production, you need to be able not only to test locally, but also to easily deploy the application to the cloud. This is what Defang enables. In this blog and the accompanying sample, we show how to build a sample AI application using one of the reference MCP Servers, run and test it locally using Docker, and, when ready, easily deploy it to the cloud of your choice (AWS, GCP, or DigitalOcean) using Defang.
Sample Model Context Protocol Time Chatbot Application
Using Docker’s mcp/time image and Anthropic Claude, we made a chatbot application that can access time-based resources directly on the user’s local machine and answer time-based questions.
The application is containerized using Docker, enabling a convenient and easy way to get it running locally. We will later demonstrate how we deployed it to the cloud using Defang.
Let’s go over the structure of the application in a local environment.
There are two containerized services, Service 1 and Service 2, that sit on the local machine.
• Service 1 contains a custom-built web server that interacts with an MCP Client.
• Service 2 contains an MCP Server from Docker as the base image for the container, plus a custom-built MCP Client we created for interacting with the MCP Server.
• A browser on our local machine interacts with the web server in Service 1.
• The MCP Server in Service 2 can access tools from either the cloud or our local machine. This configuration is included as part of the Docker MCP image.
• The MCP Client in Service 2 interacts with the Anthropic API and the web server.
Service 1 contains a web server and the UI for a chat application (not shown in the diagram), written in Next.js. The chat UI updates based on user-entered queries and chatbot responses. A POST request is sent to Service 1 every time a user enters a query from the browser. In the web server, a Next.js server action function is used to forward the user queries to the endpoint URL of Service 2 to be processed by the MCP Client.
Service 2: MCP Service Configuration
The original Docker mcp/time image is not designed with the intent of being deployed to the cloud - it is created for a seamless experience with Claude Desktop. To achieve cloud deployment, an HTTP layer is needed in front of the MCP Server. To address this, we've bundled an MCP Client together with the Server into one container. The MCP Client provides the HTTP interface and communicates with the MCP Server via standard input/output (stdio).
MCP Client
The MCP Client is written in Python, and runs in a virtual environment (/app/.venv/bin) to accommodate specific package dependencies. The MCP Client is instantiated in a Quart app, where it connects to the MCP Server and handles POST requests from the web server in Service 1. Additionally, the MCP Client connects to the Anthropic API to request LLM responses.
MCP Server and Tools (from the Docker Image)
The MCP Server enables access to tools from an external source, whether it be from a cloud or from the local machine. This configuration is included as a part of the Docker MCP image. The tools can be accessed indirectly by the MCP Client through the MCP Server. The Docker image is used as a base image for Service 2, and the MCP Client is built in the same container as the MCP Server. Note that the MCP Server also runs in a virtual environment (/app/.venv/bin).
Anthropic API
The MCP Client connects to the Anthropic API to request responses from a Claude model. Two requests are sent to Claude for each query. The first request will send the query contents and a list of tools available, and let Claude respond with a selection of the tools needed to craft a response. The MCP Client will then call the tools indirectly through the MCP Server. Once the tool results come back to the Client, a second request is sent to Claude with the query contents and tool results to craft the final response.
The base image for Service 1 is the node:bookworm-slim image. We construct the image by copying the server code and setting an entry point command to start the web server.
The base image for Service 2 is the Docker mcp/time image. Since both the MCP Client and Server run in a virtual environment, we activate the venv in the Dockerfile for Service 2 and create a run.sh shell script that runs the file containing the MCP Client and Server connection code. We then add the shell script as the entry-point command for the container.
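Putting the two services together, the compose.yaml has roughly this shape (names, ports, and build paths are illustrative; see the sample for the real file):

services:
  service1:              # Next.js web server and chat UI, built on node:bookworm-slim
    build: ./service1
    ports:
      - "3000:3000"
  service2:              # MCP Client (Quart) plus MCP Server, built on the mcp/time image
    build: ./service2
    environment:
      - ANTHROPIC_API    # supplied via defang config, as shown below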
Now that we’ve defined our application in Docker containers using a compose.yaml file, we can test and run it on our local machine by running the command:
docker compose up --build
Once the application is started up, it can be easily tested in a local environment. However, to make it easily accessible to others online, we should deploy it to the cloud. Fortunately, deploying the application is a straightforward process using Defang, particularly since the application is Compose-compatible.
Let’s go over the structure of the application after cloud deployment.
Here we can see what changes if we deploy to the cloud:
• Service 1 and Service 2 are now deployed to the cloud, no longer on the local machine.
• The only part remaining on the local machine is the browser.
Using the same compose.yaml file as shown earlier, we can deploy the containers to the cloud with the Defang CLI. Once we’ve authenticated and logged in, we can choose a cloud provider (i.e. AWS, GCP, or DigitalOcean) and use our own cloud account for deployment. Then, we can set a configuration variable for the Anthropic API key:
defang config set ANTHROPIC_API=<your-api-key-value>
Then, we can run the command:
defang compose up
Now, the MCP time chatbot application will be up and running in the cloud. This means that anyone can access the application online and try it for themselves!
In our case, anyone can use the chatbot to ask for the exact time or convert time zones from their machine, regardless of where they are located.
Most importantly, this chatbot application can be adapted to use any of the other Docker reference MCP Server images, not just the mcp/time server.
Have fun building and deploying MCP-based containerized applications to the cloud with Defang!
Welcome to 2025! As we had shared in our early Dec update, we reached our V1 milestone with support for GCP and DigitalOcean in Preview and production support for AWS. We were very gratified to see the excitement around our launch, with Defang ending 2024 with twice the number of users as our original goal!
We are excited to build on that momentum going into 2025. And we are off to a great start in Jan, with some key advancements:
GCP parity with AWS: We are really excited to announce that our GCP provider is now feature-complete, with support for key features such as Managed Postgres, Managed Redis, BYOD (Bring-Your-Own-Domain), GPUs, AI-assisted Debugging, and more. Install the latest version of our CLI and give it a try! Please let us know your feedback.
Defang Deployed with Defang: In 2025, we are doubling our focus on production use-cases where developers are using Defang every day to deploy their production apps. And where better to start than with Defang itself? We had already been using Defang to deploy portions of our infrastructure (such as our website), but we are super happy to report that now we are using Defang to deploy all our services - including our Portal, Playground, the Defang back-end (aka Fabric), and more. We'll be sharing more about how we did this, and publishing some of the related artifacts, in a blog post soon - stay tuned.
Campus Advocate Program: One of our key goals for 2025 is to bring Defang to more students and hobbyists. To do this, we are very excited to launch our Campus Advocate Program, a community of student leaders passionate about cloud technology. Our advocates will build communities, host events, and help peers adopt cloud development with Defang. If you’re a student eager to drive cloud innovation on your campus, we’d love to hear from you - you can apply here.
1-click Deploy instructions: One of our most popular features is the ability to deploy any of our 50+ samples with a single click. We have now published instructions showing how you can provide a similar experience for your project or sample. We are curious to see what you deploy with this!
Model Context Protocol sample: AI agents are all the rage these days. Recently, Docker published a blog showing how you can use Docker to containerize "servers" following Anthropic's Model Context Protocol. We have now published a sample that shows you how to easily deploy such containerized servers to the cloud using Defang - check it out here.
So, you can see we have been busy! But that is not all - we have a lot more in the pipeline in the coming months. Stay tuned - it’s going to be an exciting 2025!
P.S.: Defang is now on Bluesky! Follow us to stay connected, get the latest updates, and join the conversation. See you there!
At Defang, we're enabling developers to go from idea to code to deployment 10x faster. We're thrilled to announce that Defang V1 is officially launching during our action-packed Launch Week, running from December 4–10, 2024! This marks a major milestone as we officially release the tools and features developers have been waiting for.
Defang is a powerful tool that lets you easily develop, deploy, and debug production-ready cloud applications. With Defang V1, we continue to deliver on our vision to make cloud development effortlessly simple and portable, with the ability to develop once and deploy anywhere. Here’s what’s included in this milestone release:
Production-Ready Support for AWS
Seamlessly deploy and scale with confidence on AWS. Defang is now WAFR-compliant, ensuring that your deployments conform to best practices for AWS deployments. Defang is now officially part of the AWS Partner Network.
Support for GCP in Preview
This week, we are excited to unveil support for deployments to GCP, in Preview. Start building and exploring, and give us feedback as we work to enhance the experience and move toward production support. Defang is also now officially part of the Google Cloud Partner Advantage program.
Support for DigitalOcean in Preview
Developers using DigitalOcean can explore our Preview features, with further enhancements and production support coming soon.
Defang Product Tiers and Introductory Pricing 🛠️
As we move into V1, we are also rolling out our differentiated product tiers, along with our special introductory pricing. Fear not, we will always have a free tier for hobbyists - conveniently called the Hobby tier. We now also provide Personal, Pro, and Enterprise tiers for customers with more advanced requirements. Check out what is included in each here. And as always, the Defang CLI is and remains open-source.
We’ve lined up an exciting week of activities to showcase the power of Defang and bring together our growing community:
December 4: Vancouver CDW x AWS re:Invent Watch Party
Join us at the Vancouver CDW x AWS re:Invent Watch Party, where we will have a booth showcasing Defang’s capabilities and AWS integration. Stop by to learn more about Defang and see a live demo from the Defang dev team.
December 5–6: GFSA DemoDay and Git Push to 2025: Devs Social Party
Hear directly from Defang’s co-founder and CTO, Lio Lunesu, as we unveil Defang’s support for GCP at the Google for Startups Accelerator (GFSA) DemoDay event in Toronto. This event will also be live-streamed here.
Additionally, join us on December 5th for the final meetup of the year for Vancouver’s developer groups, hosted by VanJS in collaboration with other local dev communities.
December 6 & 7: MLH Global Hack Week (GHW)
Join us during MLH Global Hack Week for hands-on workshops and learn how to build production-ready cloud applications in minutes with Defang.
December 7: Cloud Chat
An IRL event with our team to explore V1 features in depth, answer your questions, and share insights from our journey.
December 10: Product Hunt Launch
Be part of our Product Hunt debut and show your support as we reach the broader tech community.