
Deploying your OpenAI application to AWS using Bedrock

Let's assume you have an app that uses one of the OpenAI client libraries, and you want to deploy it to AWS so you can leverage Bedrock. This tutorial shows how Defang makes that easy.

Assume you have a compose file like this:

services:
  app:
    build:
      context: .
    ports:
      - 3000:3000
    environment:
      OPENAI_API_KEY:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/"]

Add an LLM service to your compose file

The first step is to add a new service to your compose file: the defangio/openai-access-gateway. This service provides an OpenAI-compatible interface to AWS Bedrock. It's easy to configure: first, add it to your compose file:

+  llm:
+    image: defangio/openai-access-gateway
+    x-defang-llm: true
+    ports:
+      - target: 80
+        published: 80
+        mode: host
+    environment:
+      - OPENAI_API_KEY

A few things to note here. First, the image is a fork of aws-samples/bedrock-access-gateway with a few modifications to make it easier to use; the source code is available here. Second, the x-defang-llm property: Defang uses extensions like this to signal that certain kinds of services need special handling. In this case, it tells Defang to configure the appropriate IAM roles and policies so the gateway can call Bedrock on your application's behalf.

warning

Your OpenAI key

You no longer need to use your original OpenAI API key, but the gateway still expects a value in its place: generate a new secret and set it with defang config set OPENAI_API_KEY --random.

This value is used to authenticate your application service with the openai-access-gateway.

Redirecting application traffic

Then you need to configure your application to redirect traffic to the openai-access-gateway, like this:

 services:
   app:
     ports:
       - 3000:3000
     environment:
       OPENAI_API_KEY:
+      OPENAI_BASE_URL: "http://llm/api/v1"
     healthcheck:
       test: ["CMD", "curl", "-f", "http://localhost:3000/"]

Selecting a model

You will also need to configure your application to use one of the Bedrock models. We recommend setting an environment variable called MODEL, like this:

 services:
   app:
     ports:
       - 3000:3000
     environment:
       OPENAI_API_KEY:
       OPENAI_BASE_URL: "http://llm/api/v1"
+      MODEL: "anthropic.claude-3-sonnet-20240229-v1:0"
     healthcheck:
       test: ["CMD", "curl", "-f", "http://localhost:3000/"]

warning

Enabling Bedrock model access

AWS currently requires access to be manually configured on a per-model basis in each account. See this guide for how to enable model access.
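
Your application then reads MODEL at request time and passes it through unchanged; the gateway maps the Bedrock model ID onto the OpenAI-compatible API. In the hypothetical summarize helper from earlier, only the model argument changes:

const response = await client.chat.completions.create({
  // A Bedrock model ID, passed straight through to the gateway.
  model: process.env.MODEL ?? "anthropic.claude-3-sonnet-20240229-v1:0",
  messages: [{ role: "user", content: `Summarize: ${text}` }],
});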

Complete example compose file

services:
  app:
    build:
      context: .
    ports:
      - 3000:3000
    environment:
      OPENAI_API_KEY:
      OPENAI_BASE_URL: "http://llm/api/v1"
      MODEL: "anthropic.claude-3-sonnet-20240229-v1:0"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/"]

  llm:
    image: defangio/openai-access-gateway
    x-defang-llm: true
    ports:
      - target: 80
        published: 80
        mode: host
    environment:
      - OPENAI_API_KEY
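
With this in place, deploying with the Defang CLI (defang compose up) brings up both services, and the x-defang-llm extension ensures the gateway is provisioned with the IAM roles and policies it needs to call Bedrock.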