

Beyond Heroku: Owning Your Deployments

· 7 min read
Defang Team

When you launch a new app, convenience rules. Platforms like Heroku offer a beautifully simple way to push code and see it live in minutes. You don’t need to think about servers, networks or databases. You just deploy. That’s why so many of us start there.

But convenience has a cost. As your product grows, you want more control over performance and security. You want to integrate your own services, tune the infrastructure and optimize your spend. Heroku’s dyno‑based pricing, which starts around $25/month for a modest dyno and climbs to hundreds of dollars for high‑performance dynos, can become prohibitive for serious production workloads. And while Heroku abstracts away the underlying cloud, that abstraction also means you can’t fine‑tune the way your application runs.

This trade‑off eventually becomes untenable. Teams need the simplicity of a platform like Heroku and the power and trust of running inside their own AWS account. This post unpacks why migrating off Heroku matters, highlights the friction points when you try to move to AWS yourself, and shows how the Defang CLI bridges the gap.

Heroku vs. AWS

Heroku is a Platform‑as‑a‑Service focused on simplicity and ease of use, while AWS offers a huge array of extremely powerful services that can be difficult to navigate alone. What you get from Heroku is the ability to deploy your application with a simple git push, scale by adding dynos, and pick from a marketplace of add‑ons. What you miss is much of the power of AWS: the freedom to organize and network services the way you want, deployment across a huge number of regions, AWS's reliability, and much of the control you need to be considered enterprise-ready. AWS also tends to be more cost-effective as you scale and offers a wide variety of scalable managed data stores, including Postgres, MongoDB-compatible, and Redis-compatible options.

Pricing and scale

Heroku’s pricing is tied to dynos. Eco dynos cost about $5/month for 1,000 hours, while standard dynos run $25–$50/month and performance dynos jump to $250–$1,500/month. Those numbers are predictable, but if your traffic spikes or you need more compute, your dyno bill scales quickly. Databases and Redis add‑ons are also billed per gigabyte, adding to the total cost.

AWS uses a pay‑as‑you‑go model: you pay for the exact resources you use, whether on‑demand compute, reserved instances or spot capacity. This model can be far cheaper at scale, especially if you commit to reserved or savings plans, but it also introduces complexity. Besides compute, you need to factor in elastic IPs, data transfer, load balancers and NAT gateways. AWS rewards expertise: you can optimize costs but only if you understand its pricing levers.

Why leave Heroku?

For many teams, Heroku is the right starting point. But there are clear inflection points when it makes sense to graduate:

  • Escalating costs. As your user base grows, dyno bills climb steeply. At some point, the predictable price premium no longer justifies the convenience.

  • Performance and scalability demands. High‑traffic applications need flexible scaling and the ability to choose instance sizes and storage types. Heroku’s dyno types can be limiting for CPU‑ or memory‑intensive workloads.

  • Compliance and data sovereignty. Customers in regulated industries often require apps to run in their own cloud account and under their own compliance controls.

  • Customization. You might need to integrate bespoke networking, private databases or other services not available as Heroku add‑ons. AWS’s vast ecosystem of more than 240 services makes these integrations possible.

Yet the path off Heroku isn’t trivial. Re‑platforming often means rewriting your application to use AWS services directly, building new CI/CD pipelines, managing IAM roles and provisioning infrastructure by hand. That’s a big lift for developers who just want to ship features.

Migration in minutes: how the Defang CLI works

In our recent video (“How to migrate from Heroku to AWS in 5 minutes!”), we demonstrated a Django + Postgres app running on Heroku. The goal: deploy it into our own AWS account without rewriting anything. Here’s how it works:

Import your Heroku app. After installing and logging into the Defang CLI, run:

defang init

Then select Migrate from Heroku.

Defang connects to the Heroku API, inspects your app’s dynos, add‑ons and configuration variables, and generates a Docker Compose file. It translates Procfile commands into services and records dependencies like Postgres and Redis.

Review the Compose file. You should always examine the generated compose.yaml. You can adjust ports or remove unnecessary services. In the demo we changed the exposed port to 8000 and confirmed everything looked reasonable.
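To give a sense of the output, here is roughly the shape of Compose file the import might produce for a Django + Postgres app. This is a hand-written sketch, not actual Defang output; the service names, command, and variables will differ for your app:

```yaml
services:
  web:
    build:
      context: .
    # translated from the Procfile's `web:` entry
    command: gunicorn myproject.wsgi
    ports:
      - mode: ingress
        target: 8000
    environment:
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_DB
  db:
    image: postgres:16
    environment:
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_DB
```

Reviewing this file is also the moment to prune anything the import carried over that your app no longer needs.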

Select your cloud. We authenticated against AWS and selected a profile (AWS_PROFILE=defang-lab). In Defang, you set a provider with an environment variable DEFANG_PROVIDER=aws.

You can pass them to each command:

AWS_PROFILE=defang-lab DEFANG_PROVIDER=aws defang <command>

Or export them once for your shell session:

export AWS_PROFILE=defang-lab DEFANG_PROVIDER=aws

Set your secrets. You then run defang config set to provide any secrets (database user, password, database name) that were previously stored in Heroku config vars. These secrets are encrypted at rest and passed securely to your services at deploy time.
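For the demo's database settings, that might look like the following (the config names are illustrative; use whatever names your app expects, and the CLI will prompt you for each value so secrets never land in your shell history):

```shell
defang config set POSTGRES_USER
defang config set POSTGRES_PASSWORD
defang config set POSTGRES_DB
```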

Deploy with one command. Finally, execute:

defang up

Defang provisions an ECS cluster, RDS database, VPC, security groups, DNS records, load balancer, and more for your application. It also provisions a release service to handle migrations and brings up your web service once the database is ready. Eventually you get a public URL for your working application.

Verify and scale.

The entire migration took roughly five minutes from start to finish, with zero changes to application code. Instead of rewriting our Django settings or learning the intricacies of ECS, we let Defang automate the heavy lifting.

Why this matters

The migration from Heroku to AWS delivers two critical advantages that matter most to growing teams: cost savings and power and flexibility.

As we covered earlier, Heroku's dyno pricing can quickly escalate from $25/month to hundreds or even thousands as you scale. AWS's pay-as-you-go model, combined with reserved instances and spot capacity, can reduce your infrastructure costs by 60–80% at scale, depending on your use case. You pay only for what you use, when you use it.

More importantly, you gain access to AWS's full ecosystem of 240+ services. Need a specific instance type for CPU-intensive workloads? Custom networking for multi-region deployments? Advanced monitoring and logging? On Heroku, you're limited to what's available in their add-on marketplace. On AWS, you can integrate any service, tune performance at the infrastructure level, and architect solutions that simply aren't possible on a PaaS.

For some teams, there's also the benefit of deploying into customer cloud accounts for compliance and data sovereignty requirements.

Defang bridges this gap by giving you Heroku-like simplicity with AWS power.

Try it yourself

If your team is outgrowing Heroku or you need to bring your application into your customers’ cloud, give our migration workflow a spin. Install the Defang CLI, run defang init, select Migrate from Heroku, and watch your app come to life in AWS. You can find more details in our official migration guide. We’d love to hear what you deploy and what features you’d like us to add next.

Simple, Secure, and Scalable GCP Deployments from Docker Compose

· 2 min read
Defang Team

Introducing Our New Whitepaper: Simple, Secure, and Scalable GCP Deployments from Docker Compose

We’re excited to share our latest whitepaper, Defang + GCP: Simple, Secure, and Scalable Deployments from Docker Compose.


Deploying to Google Cloud Platform (GCP) doesn’t have to be complicated. Docker Compose made defining local apps simple, and Defang makes cloud deployments just as easy.

With Defang, you can:

  • Deploy to GCP with a single command. Go from Compose to Cloud Run, Cloud SQL, and more with just defang compose up --provider=gcp.
  • Skip the DevOps overhead. No need for Terraform, Pulumi, or custom scripts. Defang maps your Compose services to the right GCP resources — compute, storage, networking, and even managed LLMs.
  • Enjoy built-in security and scalability. Defang automates GCP best practices, handling service accounts, secret management, HTTPS, auto-scaling, and more.
  • Integrate with your workflow. Deploy from your terminal, GitHub Actions, or even natural language prompts in VS Code, Cursor, or Windsurf.
  • Save costs and avoid surprises. Choose from affordable, balanced, or high-availability modes with built-in cost estimation coming soon.

Our whitepaper walks through how Defang integrates with GCP, including how it:

✅ Builds your containers using Cloud Build
✅ Manages secure deployments via Cloud Run and managed services
✅ Supports custom domains, advanced networking, GPUs, LLMs, and more
✅ Powers CI/CD pipelines with GitHub Actions and Pulumi

It also highlights how Defang itself deploys real apps like our Ask Defang chatbot using less than 100 lines of Compose on GCP. Want to simplify your GCP deployments? Start with Compose, scale with Defang.

Sample: Starter Kit for RAG + Agents with CrewAI

· 7 min read
Defang Team

Why Build a Starter Kit for RAG + Agents?

Let’s be honest: every developer who’s played with LLMs gets that rush of “wow” from the first working demo. But the real headaches show up when you need to stitch LLMs into something production-grade: an app that can pull in real data, coordinate multi-step logic, and more. Suddenly, you’re not just writing single prompts. You’re coordinating between multiple prompts, managing queues, adding vector databases, orchestrating workers, and trying to get things back to the user in real time. We've found that CrewAI (coordinating prompts, agents, and tools) + Django (building an API, managing data), with a bit of Celery (orchestrating workers and async tasks), is a really nice set of tools for this. We're also going to use Django Channels (real-time updates) to push updates back to the user. And of course, we'll use Defang to deploy all that to the cloud.

If this sounds familiar (or if you're dreading the prospect of dealing with it), you’re the target audience for this sample. Instead of slogging through weeks of configuration and permissions hell, you get a ready-made template that runs on your laptop, then scales—unchanged—to Defang’s Playground, and finally to your own AWS or GCP account. All the gnarly infra is abstracted, so you can focus on getting as much value as possible out of that magical combo of CrewAI and Django.

Just want the sample?

You can find it here.

A Demo in 60 Seconds

Imagine you're building a system. It might use multiple LLM calls. It might do complex, branching logic in its prompts. It might need to store embeddings to retrieve things in the future, either to pull them into a prompt, or to return them outright. It might need to store other records that don't have embeddings. Here's a very lightweight version of a system like that, as a starting point:

Architecture at a Glance

Behind the scenes, the workflow is clean and powerful. The browser connects via WebSockets to our app using Django Channels. Heavy work is pushed to a Celery worker. That worker generates an embedding, checks Postgres with pgvector for a match, and either returns the summary or, if there’s no hit, fires up a CrewAI agent to generate one. Every update streams back through Redis and Django Channels so users get progress in real time.

[Image: architecture diagram]

Durable state lives in Postgres and Redis. Model services (LLMs and embeddings) are fully swappable, so you can upgrade to different models in the cloud or run them locally with the Docker Model Runner without rewriting the full stack.

Under the Hood: The Services

Django + Channels

The Django app is the front door, routing HTTP and WebSocket traffic, serving up the admin, and delivering static content. It’s built on Daphne and Django Channels, with Redis as the channel layer for real-time group events. Django’s admin is your friend here: at first you can check what summaries exist, and once you start building out your own app, it'll make it a breeze to debug and manage your system.

PostgreSQL + pgvector

This is where your data lives. Summaries and their 1024-dimension embeddings go here. A simple SQL query checks for close matches by cosine distance, and pgvector’s index keeps search blazing fast. In BYOC (bring-your-own-cloud) mode, flip a single flag and Defang provisions you a production-grade RDS instance.
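Conceptually, the lookup works like this plain-Python sketch. The real sample does the equivalent with a pgvector cosine-distance query in SQL; the function names and threshold here are illustrative:

```python
import math

def cosine_distance(a, b):
    # 1 - cosine similarity; 0.0 means the vectors point the same way
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def find_match(query_vec, stored, threshold=0.2):
    # stored: list of (summary_text, embedding) pairs
    best = min(stored, key=lambda item: cosine_distance(query_vec, item[1]), default=None)
    if best is not None and cosine_distance(query_vec, best[1]) <= threshold:
        return best[0]
    return None  # cache miss: fall through to the CrewAI agent
```

With pgvector, the same check is a single indexed SQL query, so it stays fast even with many stored summaries.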

Redis

Redis is doing triple duty: as the message broker and result backend for Celery, and as the channel layer for real-time WebSocket updates. The pub/sub system lets a single worker update all browser tabs listening to the same group. And if you want to scale up, swap a flag and Defang will run managed ElastiCache in production. No code change required.

Celery Worker

The Celery worker is where the magic happens. It takes requests off the queue, generates embeddings, checks for similar summaries, and—if necessary—invokes a CrewAI agent to get a new summary. It then persists summaries and pushes progress updates back to the user.
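Stripped of the Celery and Django specifics, the worker's decision logic amounts to something like this (all names are illustrative stand-ins; the real task also streams progress updates over Channels):

```python
def summarize(text, embed, lookup, generate, save):
    """Return a summary, reusing a cached one when a close embedding exists.

    embed/lookup/generate/save stand in for the embedding service,
    the pgvector query, the CrewAI agent, and the Postgres insert.
    """
    vec = embed(text)
    cached = lookup(vec)      # nearest stored summary within the threshold, or None
    if cached is not None:
        return cached         # cache hit: skip the expensive LLM call
    summary = generate(text)  # cache miss: run the CrewAI agent
    save(text, vec, summary)  # persist for future lookups
    return summary
```

Keeping the task a pure function of its collaborators like this also makes it easy to unit-test without a running broker.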

LLM and Embedding Services

Thanks to Docker Model Runner, the LLM and embedding services run as containerized, OpenAI-compatible HTTP endpoints. Want to switch to a different model? Change a single line in your compose file. Environment variables like LLM_URL and EMBEDDING_MODEL are injected for you—no secret sharing or hard-coding required.
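Because the endpoints are OpenAI-compatible, talking to them is ordinary HTTP. A sketch of building an embeddings request with only the standard library (the URL and model name are illustrative; in the sample they come from the injected environment variables):

```python
import json
import urllib.request

def embedding_request(base_url, model, text):
    """Build a POST request for an OpenAI-compatible /embeddings endpoint."""
    body = json.dumps({"model": model, "input": text}).encode()
    return urllib.request.Request(
        base_url.rstrip("/") + "/embeddings",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# base_url would come from an env var injected by the compose setup
req = embedding_request("http://embedding/v1", "mixedbread-embed", "hello world")
```

Swapping models never touches this code path; only the injected URL and model name change.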

CrewAI Workflows

With CrewAI, your agent logic is declarative and pluggable. This sample keeps it simple—a single summarization agent—but you can add classification, tool-calling, or chain-of-thought logic without rewriting your task runner.

How the Compose Files Work

In local dev, your compose.local.yaml spins up Gemma and Mixedbread models, running fully locally and with no cloud credentials or API keys required. URLs for service-to-service communication are injected at runtime. When you’re ready to deploy, swap in the main compose.yaml which adds Defang’s x-defang-llm, x-defang-redis, and x-defang-postgres flags. Now, Defang maps your Compose intent to real infrastructure—managed model endpoints, Redis, and Postgres—on cloud providers like AWS or GCP. It handles all networking, secrets, and service discovery for you. There’s no YAML rewriting or “dev vs prod” drift.
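Trimmed down, the production Compose file marks managed services roughly like this (a sketch; the images and service names here are illustrative, not copied from the sample):

```yaml
services:
  app:
    build: ./app
  worker:
    build: ./app
  redis:
    image: redis:7
    x-defang-redis: true      # BYOC: provisioned as managed Redis (e.g. ElastiCache)
  db:
    image: pgvector/pgvector:pg16
    x-defang-postgres: true   # BYOC: provisioned as managed Postgres (e.g. RDS)
  llm:
    image: your-openai-gateway
    x-defang-llm: true        # BYOC: routed to a managed model endpoint (e.g. Bedrock)
```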

The Three-Step Deployment Journey

You can run everything on your laptop with a single docker compose -f ./compose.local.yaml up command—no cloud dependencies, fast iteration, and no risk of cloud charges. When you’re ready for the next step, use defang compose up to push to the Defang Playground. This free hosted sandbox is perfect for trying Defang, demos, or prototyping. It automatically adds TLS to your endpoints and sleeps after a week. For production, use your own AWS or GCP account. DEFANG_PROVIDER=aws defang compose up maps each service to a managed equivalent (ECS, RDS, ElastiCache, Bedrock models), wires up secrets, networking, etc. Your infra. Your data.

Some Best Practices and Design Choices

This sample uses vector similarity to try to fetch summaries that are semantically similar to the input. For more robust results, you might want to embed the original input. You can also think about chunking up longer content for finer-grained matches that you can integrate into your CrewAI workflows. Real-time progress via Django Channels beats HTTP polling, especially for LLM tasks that can take a while. The app service is stateless, which means you can scale it horizontally just by adding more containers, which is easy to specify in your compose file.

Going Further: Extending the Sample

You’re not limited to a single summarization agent. CrewAI makes it trivial to add multi-agent flows (classification, tool use, knowledge retrieval). For big docs, chunk-level embeddings allow granular retrieval. You can wire in tool-calling to connect with external APIs or databases. You can also integrate more deeply with Django's ORM and the pgvector tooling we demoed in the sample to build more complex agents that actually use RAG.

Ready to Build?

With this sample, you’ve got an agent-ready, RAG-ready backend that runs anywhere, with no stacks of YAML or vendor lock-in. Fork it, extend it, productionize it: scale up, add more agents, or swap in different (or additional) models!

Quickstart:

# Local
docker compose -f compose.local.yaml up --build
# Playground
defang compose up
# BYOC
# Setup credentials and then swap <provider> with aws or gcp
DEFANG_PROVIDER=<provider> defang compose up

Want more? File an issue to request a sample—we'll do everything we can to help you deploy better and faster!

Bridging Local Development and Cloud Deployment

· 2 min read
Defang Team

Introducing Our New Whitepaper: Bridging Local Development and Cloud Deployment with Docker Compose and Defang

We’re excited to announce the release of our new whitepaper, "Bridging Local Development and Cloud Deployment with Docker Compose and Defang."


Modern software development moves fast, but deploying to the cloud often remains a complex hurdle. Docker Compose revolutionized local development by providing a simple way to define multi-service apps, but translating that simplicity into cloud deployment has remained challenging—until now.

Defang bridges this gap by extending Docker Compose into native cloud deployments across AWS, GCP, DigitalOcean, and more, all with a single command: defang compose up. This integration empowers developers to:

  • Use familiar Docker Compose definitions for cloud deployment.
  • Enjoy seamless transitions from local to multi-cloud environments.
  • Automate complex infrastructure setups including DNS, networking, autoscaling, managed storage, and even managed LLMs.
  • Estimate cloud costs and choose optimal deployment strategies (affordable, balanced, or high availability).

Our whitepaper dives deep into how Docker Compose paired with Defang significantly reduces complexity, streamlines workflows, and accelerates development and deployment.

Discover how Docker + Defang can simplify your journey from local development to production-ready deployments across your preferred cloud providers.

Hard Lessons From Hardware

· 5 min read
Linda Lee

About the author: Linda Lee is an intern at Defang Software Labs who enjoys learning about computer-related things. She wrote this blog post after having fun with hardware at work.

My Story of Embedded Systems With Defang

Have you ever looked at a touch screen fridge and wondered how it works? Back in my day (not very long ago), a fridge was just a fridge. No fancy built-in interface, no images displayed, and no wifi. But times have changed, and I’ve learned a lot about embedded systems, thanks to Defang!

[Image: smart fridge]

From my background, I was more into web development and the software side of things. Buffer flushing? Serial monitors? ESP32-S3? These were unheard of. Then one day at Defang, it was suggested that I work on a project with a SenseCAP Indicator, a small programmable touch screen device. Everyone wished me good luck when I started. That’s how I knew it wasn’t going to be an easy ride. But here I am, and I’m glad I did it.

What is embedded programming? It’s combining hardware with software to perform a function, such as interacting with the physical world or accessing cloud services. A common starting point for beginners is an Arduino board, which is what the SenseCAP Indicator uses for its hardware. My goal was to make a UI display for this device, send its input to a computer, and get that data into the cloud.

[Image: hands typing on a keyboard]

The Beginning

My journey kicked off with installing the Arduino IDE on my computer. It took me two hours—far longer than I expected—because the software versions I kept trying were not the right ones. Little did I know that I would encounter this issue many times later, such as when downloading ESP-IDF, a tool for firmware flashing. Figuring out what not to install had become a highly coveted skill.

The next part was writing software to display images and text. This was slightly less of a problem thanks to forums of users who had done the exact same thing several years ago. One tool I used was Squareline Studio, a UX/UI design tool for embedded devices. With a bit of trial and error, I got a simple static program displayed onto the device. Not half bad looking either. Here’s what it looked like:

[Image: static UI displayed on the device]

The Middle

Now came the networking part. Over wifi, I set up a Flask (Python) server on my computer to receive network pings from the SenseCAP Indicator. I used a library called ArduinoHTTPClient. At first, I wanted to ping the server each time a user touched the screen. Then came driver problems, platform incompatibilities, deprecated libraries…

… After weeks of limited progress due to resurfacing issues, I decided to adjust my goal to send pings on a schedule of every 5 seconds, rather than relying on user input. I changed the UI to be more colorful, and for good reason. Now, each network ping appears with a message on the screen. Can you look closely to see what it says?

[Image: device UI showing the Wi-Fi ping message]

This is what the Flask server looked like on my computer as it got pinged:

[Image: local Flask server logging incoming pings]
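A server like that can be just a few lines of Flask. This is a sketch rather than my exact code, and the route name is illustrative:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/ping")
def ping():
    # Each request from the SenseCAP Indicator lands here
    print("ping received")
    return {"status": "ok"}

if __name__ == "__main__":
    # Listen on all interfaces so the device can reach it over Wi-Fi
    app.run(host="0.0.0.0", port=5000)
```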

Hooray! Once everything was working, it was time to deploy my Flask code as a cloud service so I could access it from any computer, not just my own. Deployment usually takes several hours due to configuring a ton of cloud provider settings. But I ain’t got time for that. Instead, I used Defang to deploy it within minutes, and it took care of all that for me. Saved me a lot of time and tears.

Here’s the Flask deployment on Defang’s Portal view:

[Image: the Flask deployment in Defang’s Portal view]

Here’s the Flask server on the cloud, accessed with a deployment link:

[Image: the deployed Flask server, accessed via its deployment link]

The End

After two whole months, I finally completed my journey from start to finish! This project was an insightful dive into the world of embedded systems, internet networking, and cloud deployment.

Before I let you go, here are the hard lessons from hardware, from yours truly:

  1. Learning what not to do can be equally as important.
  2. Some problems are not as unique as you think.
  3. One way to achieve a goal is by modifying it.
  4. Choose the simpler way if it is offered.
  5. That’s where Defang comes in.

Want to try deploying to the cloud yourself? You can try it out here. Keep on composing up! 💪