
4 posts tagged with "DevOps"


Beyond Heroku: Owning Your Deployments

· 7 min read
Defang Team

When you launch a new app, convenience rules. Platforms like Heroku offer a beautifully simple way to push code and see it live in minutes. You don’t need to think about servers, networks or databases. You just deploy. That’s why so many of us start there.

But convenience has a cost. As your product grows, you want more control over performance and security. You want to integrate your own services, tune the infrastructure and optimize your spend. Heroku’s dyno‑based pricing, which starts around $25/month for a modest dyno and climbs to hundreds of dollars for high‑performance dynos, can become prohibitive for serious production workloads. And while Heroku abstracts away the underlying cloud, that abstraction also means you can’t fine‑tune the way your application runs.

This trade‑off eventually becomes untenable. Teams need the simplicity of a platform like Heroku and the power and trust of running inside their own AWS account. This post unpacks why migrating off Heroku matters, highlights the friction points when you try to move to AWS yourself, and shows how the Defang CLI bridges the gap.

Heroku vs. AWS

Heroku is a Platform‑as‑a‑Service focused squarely on simplicity and ease of use, while AWS offers a huge array of powerful services that can be difficult to navigate on your own. Heroku gives you the ability to deploy your application with a simple git push, scale by adding dynos, and pick from a marketplace of add‑ons. But you miss out on much of the power of AWS: the freedom to organize and network services the way you want, deployment across a huge number of regions, AWS's reliability, and much of the control you need to be considered enterprise‑ready. AWS also tends to be more cost‑effective as you scale and offers a wide variety of scalable managed data services, including Postgres, MongoDB, Redis, and more.

Pricing and scale

Heroku’s pricing is tied to dynos. Eco dynos cost about $5/month for 1,000 hours, while standard dynos run $25–$50/month and performance dynos jump to $250–$1,500/month. Those numbers are predictable, but if your traffic spikes or you need more compute, your dyno bill scales quickly. Databases and Redis add‑ons are also billed per gigabyte, adding to the total cost.

AWS uses a pay‑as‑you‑go model: you pay for the exact resources you use, whether on‑demand compute, reserved instances or spot capacity. This model can be far cheaper at scale, especially if you commit to reserved instances or savings plans, but it also introduces complexity. Besides compute, you need to factor in Elastic IPs, data transfer, load balancers and NAT gateways. AWS rewards expertise: you can optimize costs, but only if you understand its pricing levers.
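The difference between the two pricing models is easy to see with some back-of-the-envelope arithmetic. The sketch below is purely illustrative: the dyno and hourly rates are example figures in the ballpark of published prices, not current quotes, so check the Heroku and AWS pricing pages before drawing conclusions.

```python
# Illustrative comparison: flat monthly dyno pricing vs. hourly pay-as-you-go.
# All rates below are assumptions for illustration, not current quotes.

HOURS_PER_MONTH = 730  # average hours in a month (24 * 365 / 12)

def dyno_cost(monthly_rate, count):
    """Heroku-style: a flat monthly rate per always-on dyno."""
    return monthly_rate * count

def on_demand_cost(hourly_rate, hours, count):
    """AWS-style: pay only for the hours each instance actually runs."""
    return hourly_rate * hours * count

# Two performance-class dynos running all month at a flat $250/month each
print(dyno_cost(250.00, 2))  # 500.0

# Two mid-size on-demand instances at an assumed ~$0.042/hour, running 24/7
print(round(on_demand_cost(0.042, HOURS_PER_MONTH, 2), 2))

# The same instances for a bursty workload that only runs 200 hours/month
print(round(on_demand_cost(0.042, 200, 2), 2))
```

The flat dyno bill is the same whether your app is busy or idle; the on‑demand bill tracks actual usage, which is where the savings at scale come from, and where the complexity of reserved capacity and savings plans enters the picture.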

Why leave Heroku?

For many teams, Heroku is the right starting point. But there are clear inflection points when it makes sense to graduate:

  • Escalating costs. As your user base grows, dyno bills climb steeply. At some point, the predictable price premium no longer justifies the convenience.

  • Performance and scalability demands. High‑traffic applications need flexible scaling and the ability to choose instance sizes and storage types. Heroku’s dyno types can be limiting for CPU‑ or memory‑intensive workloads.

  • Compliance and data sovereignty. Customers in regulated industries often require apps to run in their own cloud account and under their own compliance controls.

  • Customization. You might need to integrate bespoke networking, private databases or other services not available as Heroku add‑ons. AWS’s vast ecosystem of more than 240 services makes these integrations possible.

Yet the path off Heroku isn’t trivial. Re‑platforming often means rewriting your application to use AWS services directly, building new CI/CD pipelines, managing IAM roles and provisioning infrastructure by hand. That’s a big lift for developers who just want to ship features.

Migration in minutes: how the Defang CLI works

In our recent video (“How to migrate from Heroku to AWS in 5 minutes!”), we demonstrated a Django + Postgres app running on Heroku. The goal: deploy it into our own AWS account without rewriting anything. Here’s how it works:

Import your Heroku app. After installing and logging into the Defang CLI, run:

defang init

Then select Migrate from Heroku.

Defang connects to the Heroku API, inspects your app’s dynos, add‑ons and configuration variables, and generates a Docker Compose file. It translates Procfile commands into services and records dependencies like Postgres and Redis.

Review the Compose file. You should always examine the generated compose.yaml. You can adjust ports or remove unnecessary services. In the demo we changed the exposed port to 8000 and confirmed everything looked reasonable.
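For a Django + Postgres app like the one in the demo, the generated file might look roughly like this. This is a hypothetical sketch: your service names, commands, and environment keys come from your own Procfile and add‑ons, and the `x-defang-postgres` extension shown here follows Defang's Compose conventions for managed databases.

```yaml
services:
  web:
    build: .
    command: gunicorn myproject.wsgi   # translated from the Procfile's `web:` entry
    ports:
      - mode: ingress
        target: 8000                   # the port we adjusted during review
    environment:
      - POSTGRES_HOST=db
  db:
    image: postgres:16
    x-defang-postgres: true            # provisioned as a managed Postgres (e.g. RDS)
    environment:
      - POSTGRES_PASSWORD              # value supplied separately via `defang config set`
```

This review step is also your chance to delete services you no longer need before anything is provisioned in your cloud account.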

Select your cloud. We authenticated with AWS and selected a profile (AWS_PROFILE=defang-lab). In Defang, you choose a provider with the DEFANG_PROVIDER environment variable (here, DEFANG_PROVIDER=aws).

You can pass these to each command:

AWS_PROFILE=defang-lab DEFANG_PROVIDER=aws defang <command>

Or you can set them as environment variables:

export AWS_PROFILE=defang-lab DEFANG_PROVIDER=aws

Set your secrets. You then run defang config set to provide any secrets (database user, password, database name) that were previously stored in Heroku config vars. These secrets are encrypted at rest and passed securely to your services at deployment.
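For the demo's Django + Postgres app, that looks something like the following. The variable names are the demo's; substitute whatever your services expect:

```sh
# Store each secret once; Defang encrypts it and injects it at deploy time.
defang config set POSTGRES_USER
defang config set POSTGRES_PASSWORD
defang config set POSTGRES_DB
```

Keeping secrets out of the Compose file means the file itself stays safe to commit to version control.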

Deploy with one command. Finally, execute:

defang up

Defang provisions an ECS cluster, RDS database, VPC, security groups, DNS records, load balancer, and more for your application. It also provisions a release service to handle migrations and brings up your web service once the database is ready. When the deployment completes, you get a public URL for your working application.

Verify and scale. Once the public URL is live, confirm the app works end to end.

The entire migration took roughly five minutes from start to finish, with zero changes to application code. Instead of rewriting our Django settings or learning the intricacies of ECS, we let Defang automate the heavy lifting.

Why this matters

The migration from Heroku to AWS delivers two critical advantages for growing teams: cost savings, and greater power and flexibility.

As we covered earlier, Heroku's dyno pricing can quickly escalate from $25/month to hundreds or even thousands as you scale. AWS's pay-as-you-go model, combined with reserved instances and spot capacity, can reduce your infrastructure costs by 60-80% at scale, depending on your use case. You pay only for what you use, when you use it.

More importantly, you gain access to AWS's full ecosystem of 240+ services. Need a specific instance type for CPU-intensive workloads? Custom networking for multi-region deployments? Advanced monitoring and logging? On Heroku, you're limited to what's available in their add-on marketplace. On AWS, you can integrate any service, tune performance at the infrastructure level, and architect solutions that simply aren't possible on a PaaS.

For some teams, there's also the benefit of deploying into customer cloud accounts for compliance and data sovereignty requirements.

Defang bridges this gap by giving you Heroku-like simplicity with AWS power.

Try it yourself

If your team is outgrowing Heroku or you need to bring your application into your customers’ cloud, give our migration workflow a spin. Install the Defang CLI, run defang init, select Migrate from Heroku, and watch your app come to life in AWS. You can find more details in our official migration guide. We’d love to hear what you deploy and what features you’d like us to add next.

Deployments in the Agentic Era

· 4 min read
Defang Team

If you want people to adopt your AI product, the deployment story has to be as strong as the features.

Over the past few decades, the software industry has gone through multiple major transitions. Each one reshaped not only how products are delivered, but also how they are trusted.

  • In the Client-Server Era (circa 2000), apps like SAP and PeopleSoft were purchased and deployed by the customer in their own "on-prem" environment. The customer was in control, but also took on the operational complexity of everything from procuring and deploying hardware to the system software and the apps themselves.
  • In the SaaS Era (circa 2010s), apps such as Salesforce and Workday ran in the provider's cloud and were delivered through the browser. While this simplified operations for the customer, it also meant that the customer data was trapped in these applications, with sometimes ambiguous data ownership and usage rules.
  • Today, we are entering the Agentic Era. Agentic apps promise to deliver an unprecedented productivity boost, but to do so, they need access to the most sensitive business data: conversations, documents, decisions. Customers do not want to transfer such data to an unknown and untrusted external provider's environment. Instead, they expect these products to run inside their cloud accounts (whether it be AWS, GCP, or any other), with their compliance, and under their security controls.


This is not a small adjustment. It is the foundation of how the next generation of software will be trusted and adopted.

Why the Agentic Era Changes the Rules

AI products are not like SaaS tools. They do not just manage workflows; they ingest and act on the crown jewels of a business. To succeed in this environment, three conditions must hold true:

  • Data stays with the customer: no leaking sensitive content outside their environment.
  • Deployments work across clouds: AWS, GCP, Azure, or wherever the customer operates.
  • Security and compliance are built in: IAM, networking, and policies set up correctly from day one.

This is not a technical detail. It is the trust layer that determines whether adoption happens at all.

ekai's Example

ekai is an AI digital twin that boosts productivity by capturing meetings, surfacing action items, and acting as a Slack companion. To be trusted, it has to run inside the customer's cloud account.

ekai needed a single deployment solution that could run on any cloud and deliver a consistent, reliable experience with the same features everywhere. Like many AI builders, they faced the challenge of providing secure, compliant deployments across AWS, GCP, and other environments without spending weeks on custom DevOps for each customer.

That is where Defang came in.

With Defang, ekai defines its application once in Docker Compose. Defang turns that definition into a production-ready deployment inside the customer's own cloud account. Compute, storage, networking, IAM roles, security groups, and even managed LLMs are provisioned automatically, following best practices for each cloud.

What used to take weeks of engineering now happens in hours. More importantly, every deployment is secure, compliant, and customer-owned.

"Defang was the ideal choice for us. We simply describe ekai as a Docker Compose application, and Defang takes care of everything else. From compute and storage to IAM roles and managed LLMs, Defang ensures our deployments are secure, scalable, and cloud-native. That is a huge benefit for us and for our customers."

Ash Tiwari, Founder & CEO, ekai

Defang and the Agentic Era

ekai is not an isolated case. It is a preview of what the Agentic Era demands. As AI products move deeper into mission-critical workflows, deployment will decide adoption.

Defang exists to make this possible.

  • Define your app once, no matter the framework: CrewAI, LangGraph, AutoGen, Strands
  • Deploy to any cloud in a single step
  • Keep customer data inside customer environments
  • Align deployments with cloud-native best practices automatically

Just as SaaS platforms unlocked a decade of cloud adoption, Defang is the foundation for customer-owned AI.

The Takeaway

In the Agentic Era, trust is the product. The next wave of AI adoption will be decided not by features, but by where and how products run. Companies that respect data ownership and deliver secure, cloud-native deployments will earn trust and scale. Those that do not will be left behind.

Defang is the invisible infrastructure making this era possible.

Deploying Agentic Apps to the Cloud Shouldn’t Be This Hard…

· 3 min read
Defang Team


Agentic apps are redefining how software is built: multi-agent workflows, persistent memory, tool-using LLMs, and orchestrated autonomy. But deploying them to the cloud is still painful. For example, your agentic app typically needs to provision:

  • Managed databases like Postgres or MongoDB
  • Fast, scalable caching (hello Redis)
  • Containerized compute that scales
  • Secure networking and service discovery
  • Managed LLM services like AWS Bedrock or GCP Vertex AI

And for many teams, these apps must run inside the customer’s cloud, where sensitive data lives and compliance rules apply. That means you cannot just spin up your own environment and call it a day. Instead, you are deploying across AWS, GCP, DigitalOcean, or whichever stack your customers demand, each with its own APIs, quirks, and limitations.

Now you are not just building agents; you are picking the right infrastructure, rewriting IaC templates for every provider, and untangling the edge cases of each cloud.

The result: weeks of DevOps headaches, lost momentum, and engineers stuck wiring infrastructure instead of shipping agents.

We Made it Simple with Cloud Native Support for Agentic Apps

That’s where Defang comes in. We make it easy to deploy full-stack agentic apps to your cloud of choice: native, secure, and scalable. Defang understands the common ingredients of agentic apps and makes them first-class citizens:

  • Compute: Your Dockerized services deploy as cloud-native workloads (e.g. AWS ECS or GCP Cloud Run)
  • Databases: Provision managed Postgres or MongoDB with one config line
  • Caching: Add Redis and Defang spins up a managed Redis instance in your cloud
  • LLMs: Integrate directly with Bedrock or Vertex AI, and even provision a gateway for compatibility with OpenAI APIs
  • Secure Defaults: TLS, secrets, IAM, and service accounts handled out of the box
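In Compose terms, the ingredients above can be sketched like this. This is an illustrative fragment: the `x-defang-*` extension keys follow Defang's conventions for marking services as managed, while the service names and the gateway image are placeholders for your own setup.

```yaml
services:
  agent:
    build: ./agent            # your agent code, deployed as cloud-native compute
    ports:
      - target: 8080
  cache:
    image: redis:7
    x-defang-redis: true      # Defang provisions a managed Redis in your cloud
  db:
    image: postgres:16
    x-defang-postgres: true   # managed Postgres (e.g. RDS or Cloud SQL)
  llm:
    image: defangio/openai-access-gateway  # OpenAI-compatible gateway to Bedrock/Vertex AI
    x-defang-llm: true
```

One file describes the whole stack, and the same file deploys to any supported cloud.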

Built for All your Favorite Agentic Frameworks

Defang works seamlessly with leading agentic frameworks. Try them out with our ready-to-deploy samples:

  • Autogen - demo featuring Mistral AI + FastAPI, deployable with Defang’s OpenAI Access Gateway.
  • CrewAI - sample app showing multi-agent orchestration in action.
  • LangGraph - workflow sample that defines and controls multi-step agentic graphs with LangChain.
  • Agentic Strands - a sample application built with the Strands Agents framework.

More framework templates coming soon.

Why It Matters

Agentic apps need to be fast, secure, and ready to scale. Defang delivers cloud-native deployments in your environment (AWS, GCP, DO), so you can move from idea to production quickly with consistent behavior across dev, test, and prod.

The Developer Journey, Simplified

  1. Build your agentic app locally using Docker Compose
  2. Test in Defang's free playground with defang compose up
  3. Deploy to your cloud:
defang compose up --provider=aws  # or gcp, digitalocean

It just works. No Terraform. No YAML explosion. No vendor lock-in.

The Future of AI Apps Is Agentic and Cloud-Native

Agility and scalability should not be a trade-off. With Defang, you get both. Developers focus on agents, tools, and outcomes. Defang takes care of the cloud infrastructure.

Try it out

Explore more samples at docs.defang.io. Join our community on Discord.

February 2025 Defang Compose Update

· 3 min read
Defang Team


Well, that went by quick! Seems like it was just a couple of weeks ago that we published the Jan update, and it’s already time for the next one. Still, we do have some exciting progress to report in this short month!

  1. Pulumi Provider: We are excited to announce a Preview of the Defang Pulumi Provider. With the Defang Pulumi Provider, you can leverage all the power of Defang with all of the extensibility of Pulumi. Defang will provision infrastructure to deploy your application straight from your Compose file, while allowing you to connect that deployment with other resources you deploy to your cloud account. The new provider makes it easy to leverage Defang if you’re already using Pulumi, and it also provides an upgrade-path for users who need more configurability than the Compose specification can provide.
  2. Portal Update: We are now fully deploying our portal with Defang alone using the defang compose up command. Our original portal architecture was designed before we supported managed storage so we used to use Pulumi to provision and connect external storage. But since we added support in Compose to specify managed storage, we can fully describe our Portal using Compose alone. This has allowed us to rip out hundreds of lines of code and heavily simplify our deployments. To learn more about how we do this, check out our Defang-Deployed-with-Defang (Part 1) blog.
  3. Open-Auth Contribution: In the past couple months we have been communicating with the OpenAuth maintainers and contributors via PRs (#120, #156) and Issues (#127) to enable features like local testing with DynamoDB, enabling support for scopes, improving standards alignment, supporting Redis, and more. We are rebuilding our authentication systems around OpenAuth and are excited about the future of the project.

Events and Social Media

February was an exciting month for the Defang team as we continued to engage with the developer community and showcase what’s possible with Defang. We sponsored and demo’ed at the DevTools Vancouver meetup, as well as sponsored the Vancouver.dev IRL: Building AI Startups event. Also, at the AWS Startup Innovation Showcase in Vancouver, our CTO Lio demonstrated how Defang makes it effortless to deploy secure, scalable, and cost-efficient serverless apps on AWS! And finally, we had a great response to our LinkedIn post on the Model Context Protocol, catching the attention of many observers, including some of our key partners.

We are eager to see what you deploy with Defang. Join our Discord to ask any questions, see what others are building, and share your own experience with Defang. And stay tuned for more to come in March!