
4 posts tagged with "Vibe Deploying"


· 3 min read

Defang Agent

From Vibe-Coding to Production… Without a DevOps Team

Building apps has never been easier. Tools like Cursor, Windsurf, Lovable, V0, and Bolt have ushered in a new era of coding called vibe coding: rapid, AI-assisted app development where developers can go from idea to prototype in hours, bringing ideas to life faster than ever before.

And with the recently released AWS Kiro, we have now entered a new phase of AI-assisted development called "spec-driven development" where the AI breaks down the app development task even further. You can think of a "PM agent" that goes from prompt to a requirements document, and then an "Architect agent" that goes from the requirements document to a design document, which is then used by "Dev", "Test" and "Docs" agents to generate app code, tests, and documentation respectively. This approach is much more aligned with enterprise use cases and produces higher quality output.

The Hard Part Isn’t Building. It’s Shipping.

However, cloud app deployment remains a major challenge! As Andrej Karpathy shared during his recent YC talk:

"I vibe-coded the app in four hours… and spent the rest of the week deploying it."

While AI-powered tools make building apps a breeze, deploying them to the cloud is still frustratingly complex. Kubernetes, Terraform, IAM policies, load balancers, DNS, and CI/CD all add layers of difficulty, and this complexity remains a challenge that AI tools have yet to fully address.

The bottleneck is no longer the code. It's the infrastructure.

Enter Defang: Your AI DevOps Agent

Defang is an AI-enabled agent that takes care of your entire deployment workflow, going from app code to a production-ready deployment on your favorite cloud in a single step.

By understanding your app stack (using Docker Compose), Defang provisions the right infrastructure and securely deploys to AWS, GCP, or DigitalOcean, following each cloud's best practices.

Whether you're launching a side project or scaling a multi-agent app, Defang ensures secure, smooth, scalable cloud-native deployments.

Defang Deployment Features at a Glance

  • One Command Deployment: Run defang compose up and you're live
  • Secure and Scalable: Built-in TLS, secrets, autoscaling, IAM, and HTTPS
  • Multi-Cloud Ready: Deploy to your cloud (AWS, GCP, DO) using your own credentials
  • Language & framework agnostic: Next.js, Go, Python (Django/Flask), C#, …
  • Managed LLM: Add x-defang-llm: true and Defang auto-configures cloud-native LLMs like Bedrock, Vertex AI, and the Defang Playground
  • Managed Services: Provisions managed Postgres, MongoDB, and Redis using the target cloud's native services (e.g. RDS for Postgres on AWS, Cloud SQL on GCP)
  • Deployment Modes: Tailored modes (e.g. affordable, balanced, high-availability) optimized for different environments (dev, staging, production)
  • AI Debugging: Get context-aware assistance to quickly identify and fix deployment issues
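The feature list above comes down to a small amount of configuration plus one command. As a rough sketch (the service name, port, and build context below are illustrative assumptions, not taken from Defang's docs; x-defang-llm is the extension described above), a minimal compose.yaml might look like:

```yaml
# Hypothetical compose.yaml: service name, port, and build context are
# illustrative. x-defang-llm opts the service into a managed LLM backend.
services:
  app:
    build: .              # your app code, in any supported language/framework
    ports:
      - "3000:3000"
    x-defang-llm: true    # ask Defang to wire up Bedrock / Vertex AI, etc.
```

With a file like this in place, running defang compose up provisions the infrastructure and takes the app live on your configured cloud.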

Native Integration with AI-Assisted Coding Tools

Defang can be accessed directly from within your favorite IDE - Cursor, Windsurf, VS Code, Claude, or Kiro - via Defang's MCP Server. You can now deploy to the cloud with a natural language command like "deploy my app with Defang".

For Developers and CTOs Who Want to Move Fast

If you're a developer shipping fast or a CTO scaling lean, Defang acts as your drop-in DevOps engineer without needing to build a team around it.

You focus on building great software.
Defang gets it live.

Try Defang Now

· 3 min read

Defang Compose Update

June was a big month at Defang. We rolled out powerful features across our CLI, Playground, and Portal, expanded support for both AWS and GCP, and introduced new tools to help you ship faster and smarter. From real-time cloud cost estimation to internal infra upgrades and community highlights, here’s everything we accomplished.

🚀 Live AWS Cost Estimation

We just launched something we’re really excited about: live AWS cost estimation before you deploy. Most devs ship to the cloud without knowing what it’s going to cost, and that’s exactly the problem we’re solving. With Defang, you can now estimate the cost of deploying a Docker Compose application and choose the deployment mode - affordable / balanced / high_availability - that best suits your needs.

👉 Check out the docs

🧠 CrewAI + Defang Starter Kit

In June, we launched a full-stack starter kit for building real-time RAG and multi-agent apps with CrewAI + Defang. It’s designed to help you move fast with a production-style setup — including Django, Celery, Channels, Postgres (with pgvector), Redis for live updates, and Dockerized model runners you can easily customize. CrewAI handles the agent workflows, and with Defang, you can deploy the whole thing to the cloud in a single command. Whether you’re building a smart Q&A tool or a multi-agent research assistant, this stack gives you everything you need to get started.

👉 Try it out here

📊 Deployment Info in Portal

We’ve added active deployment information to the Defang Portal. You can now see your currently active deployments across various cloud providers and understand the details of each, while still managing your cloud environments through the provider’s own tools (e.g. the AWS Console).

☁️ Playground Now Runs on AWS + GCP

Internally, we also hit a big milestone: The Defang Playground now runs on both AWS and GCP, showing the power of Defang’s multi-cloud infrastructure. We’ve also enabled load balancing between the two platforms and plan to share a detailed blog post on how it works soon.

🧩 VS Code Extension Released

We also released the Defang VS Code Extension, making it even easier to deploy and manage cloud apps right from your editor. No terminal needed.

  • One-click deploy
  • Built-in tools to manage services
  • Zero config, fast setup

👉 Try it out here

💬 Ask Defang via Intercom

You can now try out the Ask Defang chatbot directly within Intercom! This new integration makes it easier than ever to get instant answers and support while you work. Ask Defang itself is deployed using Defang to our own cloud infrastructure.

🐳 Docker x Defang White Paper

And one more thing: bridging local development and cloud deployment just got easier. We’ve published white papers on how Defang extends Docker Compose and GCP workflows to the cloud — using familiar tools at scale. An AWS white paper is coming soon.

👉 Read the white paper here

👉 Read the GCP white paper

Events and Community

In June, we showcased a powerful new demo at AWS events: “What If You Could See AWS Costs Before You Deployed?” Jordan Stephens walked through how to go from Docker Compose to AWS infra with real-time cost estimates and easy teardown, all via Defang.

👉 Watch the demo here

We can’t wait to see what you deploy with Defang.
👉 Join our Discord

More coming in July.

· 3 min read

Defang Compose Update

May was a big month at Defang. We shipped support for managed LLMs in Playground, added MongoDB support on AWS, improved the Defang MCP Server, and dropped new AI samples to make deploying faster than ever.

🚀 Managed LLMs in Playground

You can now try managed LLMs directly in the Defang Playground. Defang makes it easy to use cloud-native language models across providers — and now you can test them instantly in the Playground.

  • Managed LLM support
  • Playground-ready
  • Available in CLI v1.1.22 or higher

To use managed language models in your own Defang services, just add x-defang-llm: true — Defang will configure the appropriate roles and permissions for you.

Already built on the OpenAI API? No need to rewrite anything.

With Defang's OpenAI Access Gateway, you can run your existing apps on Claude, DeepSeek, Mistral, and more — using the same OpenAI format.
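As an illustration of how little the app itself has to change (the fragment below is a hypothetical sketch, not Defang's documented configuration; the environment variable name is an assumption), your existing OpenAI-format client code stays as-is, and the compose file opts the service into the gateway:

```yaml
# Hypothetical compose fragment: the env var name is an illustrative
# assumption; consult the Defang docs for the exact variables to set.
services:
  app:
    build: .
    x-defang-llm: true    # Defang sets up the managed model and permissions
    environment:
      # The app's OpenAI-format client keeps working; it just points at the
      # gateway instead of api.openai.com.
      - OPENAI_BASE_URL=http://gateway/api/v1
```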

Learn more here.

Try it out here.

📦 MongoDB Preview on AWS

Last month, we added support for MongoDB-compatible workloads on AWS via Amazon DocumentDB.

Just add this to your compose.yaml:

```yaml
services:
  db:
    x-defang-mongodb: true
```

Once you add x-defang-mongodb: true, Defang will auto-spin a DocumentDB cluster in your AWS account — no setup needed.

🛠 MCP Server Improvements

We've made the MCP Server and CLI easier to use and deploy:

  • Users are now prompted to agree to Terms of Service via the portal login
  • MCP Server and CLI are now containerized, enabling faster setup, smoother deployments, and better portability across environments

🌎 Events and Community

We kicked off the month by sponsoring Vancouver's first Vibe Coding IRL Sprint. Jordan Stephens from Defang ran a hands-on workshop on "Ship AI Faster with Vertex AI" with GDG Vancouver. Around the same time, our CTO and Co-founder Lio joined the GenAI Founders Fireside panel hosted by AInBC and AWS.

Big moment for the team — we won the Best Canadian Cloud Award at the Vancouver Cloud Summit. Right after, we hit the expo floor at Web Summit Vancouver as part of the BETA startup program and got featured by FoundersBeta as one of the Top 16 Startups to Watch.

Our Campus Advocates also kept the momentum going, hosting Defang events around the world with live demos and workshops.

Last month's Defang Coffee Chat brought together the community for product updates, live demos, and a great convo on vibe deploying.

We're back again on June 25 at 10 AM PST. Save your spot here.

We can't wait to see what you deploy with Defang. Join our Discord to ask questions, get support, and share your builds.

More coming in June.

· 2 min read

Defang Compose Update

April flew by with big momentum at Defang. From deeper investments in the Model Context Protocol (MCP), to deploying LLM-based inferencing apps, to live demos of Vibe Deploying, we're making it easier than ever to go from idea to cloud.

MCP + Vibe Deploying

This month we focused on making cloud deployments as easy as writing a prompt. Our latest Vibe Deploying blog shows how you can launch full-stack apps right from your IDE just by chatting.

Whether you're working in Cursor, Windsurf, VS Code, or Claude, Defang's MCP integration lets you deploy to the cloud just as easily as conversing with the AI to generate your app. For more details, check out the docs for the Defang Model Context Protocol Server – it explains how it works, how to use it, and why it's a game changer for deploying to the cloud. You can also watch our tutorials for Cursor, Windsurf, and VS Code.

Managed LLMs

Last month we shipped the x-defang-llm compose service extension to easily deploy inferencing apps that use managed LLM services such as AWS Bedrock. This month, we're excited to announce the same support for GCP Vertex AI – give it a try and let us know your feedback!

Events and Programs

On April 28, we kicked things off with an epic night of demos, dev energy, and cloud magic at RAG & AI in Action. Our own Kevin Vo showed how fast and easy it is to deploy AI apps from Windsurf to the cloud using just the Defang MCP. The crowd got a front-row look at how Vibe Deploying turns cloud infra into a background detail.

We finished the month with our signature Defang Coffee Chat, a casual hangout with product updates, live Q&A, and great conversations with our community. Our Campus Advocates also hosted workshops around the world, bringing Defang to new students and builders.

Thanks to everyone who joined.

The next one is on May 21 at 10 AM PST. Save your spot here.

Looking Ahead

Here's what's coming in May:

  • Web Summit Vancouver – Defang will be a startup sponsor, please come see us on the expo floor.
  • More MCP tutorials and dev tools.

Let's keep building. 🚀