Day 8 - Deploying a Python Movie API to Google Cloud Run using Bash Scripting, Docker, Artifact Registry and API Gateway
Deploying and managing APIs can feel daunting and time-consuming; however, with the help of automation and the right tools, the process can become seamless. In this article, we will take a step-by-step approach to deploying a containerized application to Google Cloud Run and setting up a fully functional API Gateway to expose your service.
The code for this project can be found in this GitHub repo under the Day_8 folder.
Prerequisites
Python 3.x
The MovieDB (TMDB) API Key. Create an account, then get your API Key here after logging in.
A Google Cloud project with the Cloud Run Admin and API Gateway Admin roles granted to a Service Account. You’ll need this Service Account’s email later in the deployment script.
gcloud CLI installed locally on your computer (Instructions)
Docker (Instructions)
Setting up
Writing the app
On your computer, create a directory with the name Movie-API-Cloud_Run-API_Gateway
or a name of your choosing.
Inside the folder, create and activate the virtual environment that we’ll use to install our dependencies:
$ python -m venv venv # Create the virtual environment
$ source venv/bin/activate # For MacOS/Linux
Install the dependencies that we’ll need & add them to the requirements.txt
file
$ pip install Flask requests
# Add them to a requirements.txt file
$ pip freeze > requirements.txt
Proceed to create a file called app.py and paste in the following code. Remember to replace <YOUR-TMDB-API-KEY-HERE> with the actual API key you got earlier:
import requests
from flask import Flask, jsonify

app = Flask(__name__)

API_KEY = "<YOUR-TMDB-API-KEY-HERE>"  # Replace this with your API key

@app.route('/', methods=['GET'])
def get_trending_movies():
    url = f"https://api.themoviedb.org/3/trending/movie/day?api_key={API_KEY}&language=en-US"
    try:
        response = requests.get(url)
        data = response.json()
        movies = [{"title": movie['title'], "release_date": movie["release_date"]} for movie in data['results']]
        return jsonify({"message": "Trending movies fetched successfully.", "movies": movies}), 200
    except Exception as e:
        return jsonify({"message": "An error occurred.", "error": str(e)}), 500

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)
Script explanation:
The script above uses Flask to expose a single endpoint that fetches the trending movies from the TMDB API (via the requests library) and returns them to the user in JSON format when they run the application.
Run the script
Open your terminal and navigate to the directory containing the app.py
script and execute the following command:
(venv)$ python app.py
Visit the following address in your browser: http://127.0.0.1:8080
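You can also test the endpoint from a second terminal with curl. The exact movies returned will differ depending on what is currently trending, but the response should roughly match this sketch:
(venv)$ curl http://127.0.0.1:8080/
# Illustrative response shape (titles and dates will vary):
# {
#   "message": "Trending movies fetched successfully.",
#   "movies": [
#     {"release_date": "2024-05-01", "title": "Some Trending Movie"},
#     {"release_date": "2024-04-18", "title": "Another Trending Movie"}
#   ]
# }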
The next step is to create a Dockerfile. In your code editor, create a file in the same location as app.py
and name it Dockerfile
. Paste the following inside it:
FROM python:3.9-slim
# Set the working directory inside the container
WORKDIR /app
# Copy the requirements file into the container
COPY requirements.txt requirements.txt
# Install dependencies
RUN pip install -r requirements.txt
# Copy all files from the current directory into the container
COPY . .
# Expose the port your app runs on
EXPOSE 8080
# Command to run the application
CMD ["python", "app.py"]
The above Dockerfile tells Docker how to containerize our Python app; the resulting image is what we’ll push to Artifact Registry in Google Cloud (you’ll get a chance to test it locally right after the next step).
Next, we’re going to build our image. Not so fast! Before we get ahead of ourselves, we’re going to create a .dockerignore file. This file essentially tells Docker which files and folders to leave out of the image-building process. In this case, we want Docker to ignore our virtual environment’s folder and other files we’ll create later in this article. Proceed to create a file called .dockerignore in the same location as our Dockerfile and app.py. Don’t forget the period that precedes the name:
venv/
clean-up.sh
deploy-api.sh
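(Optional) If you’d like to sanity-check the container before we automate everything, you can build and run the image locally. The image name my-movie-api below is just an illustrative choice; the deployment script later in this article will handle the real build:
$ docker build -t my-movie-api .               # Build the image from the Dockerfile in this directory
$ docker run --rm -p 8080:8080 my-movie-api    # Run the container and map port 8080 to your machine
# Visit http://127.0.0.1:8080 as before, then press Ctrl+C to stop the container.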
Building our image & Infrastructure
Next, we are going to create a YAML file that will be used by our API Gateway during its creation process.
An API Gateway is like a central hub for all your application's services. Instead of each service being directly exposed to the outside world, everything goes through the gateway. Think of it as a gatekeeper. Your application might be made up of many smaller services that do different things. The API Gateway provides a single point of entry for all requests coming from outside like websites or mobile apps.
The gateway then figures out which service a request needs to go to. It’s like a traffic officer, sending each request to the right place. The gateway also offers benefits such as security (authentication & authorization), rate limiting (preventing too many requests from overloading the services) and collecting metrics about the services.
Let’s go ahead and create the YAML file that the gateway will need. Create a file in the same directory as the app.py
file and name it apispec.yaml
and paste in the following
swagger: "2.0"
info:
  title: MovieAPI
  description: "Get the trending movies."
  version: "1.0.0"
host: 35.196.176.214
schemes:
  - "https"
paths:
  "/":
    get:
      description: "Get the trending movies."
      operationId: "trendingMovies"
      x-google-backend:
        address: {{cloud_run_url}}
      responses:
        200:
          description: "Success."
          schema:
            type: string
        400:
          description: "The data cannot be fetched."
The above YAML is essentially an OpenAPI document that will be used in the gateway creation process. You can learn more about the use & benefits of OpenAPI from this page
If you look closely, you’ll notice {{cloud_run_url}}
in the file we’ve just created. This is essentially a placeholder that will be replaced by Bash automatically with the actual URL generated from the Cloud Run service running our containerized app.
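As a preview, that substitution is a single sed command; the deployment script below runs it for us once the Cloud Run URL is known, so you don’t need to run it yourself. The URL shown here is purely illustrative:
$ export CLOUD_RUN_URL="https://my-movie-service-<hash>-ue.a.run.app"   # Illustrative value only
$ sed -i "s|{{cloud_run_url}}|$CLOUD_RUN_URL|g" ./apispec.yaml          # Replace the placeholder in-place
# Note: macOS's BSD sed expects an explicit backup suffix, e.g. sed -i '' "s|...|...|g" ./apispec.yaml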
Writing our deployment bash script
As you can already tell (based on the article title), we will be dealing with a lot of moving parts. The next series of steps will be executed entirely in the terminal: from building our Docker image locally, creating our Artifact Registry repository and pushing our image there, to deploying the API to Cloud Run as a service and creating our API Gateway.
We will be utilizing a series of gcloud and docker commands, as well as bash scripting, in writing our deployment script. Make sure you’ve already set up gcloud, initialized it with your credentials and installed Docker before proceeding. If not, revisit the Prerequisites section ☝️
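If you want to double-check your local setup before running anything, a few quick commands along these lines should confirm that gcloud and Docker are ready and that the required Google Cloud APIs are enabled (replace the project ID with your own; API Gateway also relies on Service Management and Service Control, hence those two extra entries):
$ gcloud auth list                               # Confirm you're logged in
$ gcloud config set project <GCP-PROJECT-ID>     # Point gcloud at your project
$ gcloud services enable run.googleapis.com artifactregistry.googleapis.com \
    apigateway.googleapis.com servicemanagement.googleapis.com servicecontrol.googleapis.com
$ docker --version                               # Confirm Docker is installed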
In the same directory as with the other files, create a file called deploy-api.sh
and paste the following
#!/bin/bash
# Set environment variables
export PROJECT_ID="<GCP-PROJECT-ID>" # Replace with your GCP project ID
export REGION="<GCP-REGION>" # GCP region i.e us-east1
export THE_SERVICE_ACCOUNT="<Your-Service-Account-Email-Here>" # The service account email
export LOCAL_CONTAINER_IMAGE="<IMAGE-NAME-HERE>" # The name you'd like to give for your container image during the building process
export ARTIFACT_REGISTRY_REPO="<REPO-NAME-HERE>" # The name you'd like to give for your Artifact Registry repository
export CLOUD_RUN_SERVICE_NAME="<CLOUD-RUN-SERVICE-NAME-HERE>" # The name for the Cloud Run service that will be created
export API_NAME="<API-NAME-HERE>" # The name for the API that will be created
export GATEWAY_NAME="<GATEWAY-NAME-HERE>" # The name for the API Gateway that will be created
export OPENAPI_SPEC_PATH="./<API-SPEC-FILE-NAME>.yaml" # Path to your OpenAPI spec file i.e ./apispec.yaml
##### MODIFY THE ABOVE WITH YOUR ACTUAL VALUES BEFORE PROCEEDING !!!! ############
# NO NEED TO CHANGE ANYTHING BELOW THIS SECTION AFTER MODIFYING THE VALUES ABOVE #
######## BUT YOU CAN TAKE A LOOK AT THE SCRIPT AND SEE HOW IT WORKS #############
##################################################################################
# Build our Movie API Container image locally
echo "Build the Movie API container image"
docker build -t $LOCAL_CONTAINER_IMAGE .
echo "✅ $LOCAL_CONTAINER_IMAGE image built successfully"
echo "------------------------------------------------------------------------"
# Create a repository in Google Cloud Artifact Registry
echo "Create a repository in Google Cloud Artifact Registry"
gcloud artifacts repositories create $ARTIFACT_REGISTRY_REPO --repository-format=docker \
--location=$REGION --description="My Docker repository" --project=$PROJECT_ID
echo "Artifact Registry Repo: $ARTIFACT_REGISTRY_REPO" >> deploy_outputs.txt
echo "✅ $ARTIFACT_REGISTRY_REPO repository created successfully"
echo "------------------------------------------------------------------------"
# Configure Docker to use gcloud CLI Authentication
echo "Configure Docker to use gcloud CLI Authentication"
gcloud auth configure-docker $REGION-docker.pkg.dev --quiet
echo "✅ Docker configured with gcloud authentication successfully"
echo "------------------------------------------------------------------------"
# Tag the container image to Artifact Registry repo
echo "Tag the container image to Artifact Registry repo"
docker tag $LOCAL_CONTAINER_IMAGE:latest \
$REGION-docker.pkg.dev/$PROJECT_ID/$ARTIFACT_REGISTRY_REPO/$LOCAL_CONTAINER_IMAGE:v1
echo "REGISTRY_MOVIE_IMAGE: $REGION-docker.pkg.dev/$PROJECT_ID/$ARTIFACT_REGISTRY_REPO/$LOCAL_CONTAINER_IMAGE:v1" >> deploy_outputs.txt
export REGISTRY_MOVIE_IMAGE="$REGION-docker.pkg.dev/$PROJECT_ID/$ARTIFACT_REGISTRY_REPO/$LOCAL_CONTAINER_IMAGE:v1"
echo "✅ Container image tagged with Artifact Registry repo successfully"
echo "------------------------------------------------------------------------"
# Push the container image to the Artifact Registry repo
echo "Push the container image to the Artifact Registry repo"
docker push $REGISTRY_MOVIE_IMAGE
echo "✅ Container image pushed to Artifact Registry repo successfully"
echo "------------------------------------------------------------------------"
# Step 1: Deploy the Cloud Run service
echo "Deploying Cloud Run service..."
gcloud run deploy $CLOUD_RUN_SERVICE_NAME \
--image $REGISTRY_MOVIE_IMAGE \
--platform managed \
--region $REGION \
--allow-unauthenticated \
--quiet
# Step 2: Get the Cloud Run URL
CLOUD_RUN_URL=$(gcloud run services describe $CLOUD_RUN_SERVICE_NAME \
--platform managed \
--region $REGION \
--format "value(status.url)")
export THE_CLOUD_RUN_URL=$CLOUD_RUN_URL
export THE_CLOUD_RUN_SERVICE_NAME=$CLOUD_RUN_SERVICE_NAME
echo "Cloud run Service name: $CLOUD_RUN_SERVICE_NAME" >> deploy_outputs.txt
echo "Cloud Run Service URL: $CLOUD_RUN_URL" >> deploy_outputs.txt
echo -e "\n✅ Cloud Run Service built Successfully"
echo "Cloud Run Service URL: $CLOUD_RUN_URL"
echo "------------------------------------------------------------------------"
# Step 3: Update OpenAPI spec with Cloud Run URL
echo "Updating OpenAPI spec..."
sed -i "s|{{cloud_run_url}}|$CLOUD_RUN_URL|g" $OPENAPI_SPEC_PATH
# Step 4: Create the API Gateway config
echo "Creating API Gateway config..."
gcloud api-gateway api-configs create "${GATEWAY_NAME}-config" \
--api=$API_NAME \
--openapi-spec=$OPENAPI_SPEC_PATH \
--project=$PROJECT_ID \
--backend-auth-service-account=$THE_SERVICE_ACCOUNT
export API_CONFIG_NAME="${GATEWAY_NAME}-config"
echo "API Config name: ${GATEWAY_NAME}-config" >> deploy_outputs.txt
echo -e "\n✅ ${GATEWAY_NAME}-config API Config created successfully"
echo "------------------------------------------------------------------------"
# Step 5: View API Config details
echo "API Config details"
gcloud api-gateway api-configs describe "${GATEWAY_NAME}-config" \
--api=$API_NAME --project=$PROJECT_ID
# Step 6: Create the API Gateway instance
echo "Creating API Gateway instance..."
gcloud api-gateway gateways create $GATEWAY_NAME-instance \
--api=$API_NAME \
--api-config="${GATEWAY_NAME}-config" \
--location=$REGION \
--project=$PROJECT_ID
export GATEWAY_INSTANCE="$GATEWAY_NAME-instance"
echo "API Name: $API_NAME" >> deploy_outputs.txt
echo "API Gateway Instance: $GATEWAY_NAME-instance" >> deploy_outputs.txt
echo -e "\n✅ $GATEWAY_NAME-instance API Gateway created successfully!"
echo "------------------------------------------------------------------------"
# Step 7: Output the Gateway URL
GATEWAY_URL=$(gcloud api-gateway gateways describe $GATEWAY_NAME-instance \
--project=$PROJECT_ID \
--location=$REGION \
--format "value(defaultHostname)")
echo "API Gateway URL: https://$GATEWAY_URL" >> deploy_outputs.txt
echo -e "\n🥳🥳🥳🥳🥳🥳🥳"
echo -e "\n👉API Gateway URL: https://$GATEWAY_URL"
echo -e "\n\n"
‼️NOTE ‼️: Remember to change the following values to match your values. Read the comments on each of their respective lines for more info: PROJECT_ID, REGION, THE_SERVICE_ACCOUNT, LOCAL_CONTAINER_IMAGE, ARTIFACT_REGISTRY_REPO, CLOUD_RUN_SERVICE_NAME, API_NAME, GATEWAY_NAME, OPENAPI_SPEC_PATH
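For reference, a filled-in variable block might look something like this (every value below is made up; substitute your own):
export PROJECT_ID="my-gcp-project"
export REGION="us-east1"
export THE_SERVICE_ACCOUNT="deployer@my-gcp-project.iam.gserviceaccount.com"
export LOCAL_CONTAINER_IMAGE="movie-api"
export ARTIFACT_REGISTRY_REPO="movie-api-repo"
export CLOUD_RUN_SERVICE_NAME="movie-api-service"
export API_NAME="movie-api"
export GATEWAY_NAME="movie-api-gateway"
export OPENAPI_SPEC_PATH="./apispec.yaml"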
Next, we’re going to create another bash script file. This file will help us clean up/delete our resources from the cloud and locally once we’re done with our project, instead of deleting the resources manually one by one.
In your code editor, create a file called clean-up.sh
and paste in the following:
#!/bin/bash
export PROJECT_ID="<GCP-PROJECT-ID-HERE>" # Replace with your GCP project ID
export REGION="<GCP-REGION-HERE>" # GCP region i.e us-east1
export LOCAL_CONTAINER_IMAGE="<IMAGE-NAME-HERE>" # The name you gave for your container image
export ARTIFACT_REGISTRY_REPO="<REPO-NAME-HERE>" # The name you gave for your artifact registry repository
export CLOUD_RUN_SERVICE_NAME="<CLOUD-RUN-SERVICE-NAME-HERE>" # The name you gave for your Cloud Run service
export API_NAME="<API-NAME-HERE>" # The name you gave for your API
####################### THE FOLLOWING TWO ENTRIES ARE NEW ###############################
export API_GATEWAY="movie-api-gateway-instance" # NEW - You'll find this name in the deploy_outputs.txt file generated
export API_GATEWAY_CONFIG="movie-api-gateway-config" # NEW - You'll find this name in the deploy_outputs.txt file generated
#################### NO NEED TO MODIFY ANY LINES BELOW THIS ONE #########################
# Delete API Gateway
gcloud api-gateway gateways delete $API_GATEWAY --location=$REGION --quiet
# Delete API Config
gcloud api-gateway api-configs delete $API_GATEWAY_CONFIG --api=$API_NAME --project=$PROJECT_ID --quiet
# Delete the API
gcloud api-gateway apis delete $API_NAME --project=$PROJECT_ID --quiet
# Delete Cloud Run Service
gcloud run services delete $CLOUD_RUN_SERVICE_NAME --region=$REGION --quiet
# Delete Artifact Registry repo
gcloud artifacts repositories delete $ARTIFACT_REGISTRY_REPO --location=$REGION --quiet
# Delete the local container image
docker rmi $LOCAL_CONTAINER_IMAGE:latest
docker rmi $REGION-docker.pkg.dev/$PROJECT_ID/$ARTIFACT_REGISTRY_REPO/$LOCAL_CONTAINER_IMAGE:v1
NOTE: Some of the variables in this clean-up file are new. Their values can be found in a file called deploy_outputs.txt that will be generated once we run our deploy-api.sh script.
Once you’ve modified the above and you’re satisfied with your entries, we’ll have to change the permissions of our deploy-api.sh and clean-up.sh files to make them executable.
In the terminal, navigate to the directory that contains the deploy-api.sh
file and run the following command
(venv)$ chmod +x ./deploy-api.sh # or whatever you've named your file
(venv)$ chmod +x ./clean-up.sh # or whatever you've named your file
chmod - which stands for change mode - is a command used on UNIX-like systems such as Linux and macOS to change the permissions of files and directories
+x - means that we’re adding execute permission to our files
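You can confirm the change with ls -l; the x entries in the permission string show that the files are now executable (the sizes and dates below are just illustrative):
(venv)$ ls -l clean-up.sh deploy-api.sh
-rwxr-xr-x  1 user  staff   812 Jan  1 12:00 clean-up.sh
-rwxr-xr-x  1 user  staff  3421 Jan  1 12:00 deploy-api.sh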
Time for the moment of truth…
We’ve written and set up a lot so far. We now need to run our script and watch the magic happen. When we run our deploy-api.sh script, a new file called deploy_outputs.txt will be created automatically, containing some values from the deployment process, such as the Cloud Run service URL and the API Gateway URL.
Once again, in your terminal, confirm that you’re in the same directory as your files (use the ls -a
command)
Brace yourself, and run the following commands
# Confirming that we are in the right directory
(venv)$ ls -a
.dockerignore Dockerfile app.py deploy-api.sh venv/ apispec.yaml clean-up.sh requirements.txt
# Let's now run our script!!!!!
(venv)$ ./deploy-api.sh
The script will begin execution, starting off by building the Docker image, then creating our repository in Artifact Registry and pushing our local image to the new repo.
The script will then execute a gcloud command to create a Cloud Run service from our pushed image and display the service URL in the terminal.
The script will then go ahead and create an API Gateway based on our specifications, such as the location and config file (apispec.yaml), and return to us the API Gateway URL 🥳🥳
Let’s visit our deployed API Gateway URL in our browser. You can copy/click on the link in the terminal:
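If you prefer the terminal, curl works just as well; copy the hostname printed by the script (the placeholder below is illustrative):
$ curl https://<YOUR-GATEWAY-HOSTNAME>/
# Illustrative response shape:
# {"message": "Trending movies fetched successfully.", "movies": [{"release_date": "...", "title": "..."}, ...]}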
We just deployed our API and it works. Pat yourself on the back and be proud of yourself 🥳🥳
You can take your time to go to your Google Cloud Console to verify that the resources were created: API Gateway page, Cloud Run page, Artifact Registry page
You will also notice in the apispec.yaml
file that {{cloud_run_url}}
has been replaced with the actual Cloud Run Service URL
Clean up
While the script was executing, a new file called deploy_outputs.txt
was created and filled automatically with some information. You’ll find this file in the same directory as your other files and it should look something like this
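The exact contents depend on the names you chose, but based on the lines the script appends it will be roughly the following (every value here is illustrative):
Artifact Registry Repo: movie-api-repo
REGISTRY_MOVIE_IMAGE: us-east1-docker.pkg.dev/my-gcp-project/movie-api-repo/movie-api:v1
Cloud run Service name: movie-api-service
Cloud Run Service URL: https://movie-api-service-<hash>-ue.a.run.app
API Config name: movie-api-gateway-config
API Name: movie-api
API Gateway Instance: movie-api-gateway-instance
API Gateway URL: https://movie-api-gateway-instance-<hash>.ue.gateway.dev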
Proceed to open the clean-up.sh file and fill in the missing values. You’ll find some of the values in this deploy_outputs.txt
file.
We’ll then run the following command to clean up/delete the resources that we have deployed to the cloud, as well as delete our locally created Docker images
$ ./clean-up.sh
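Once the clean-up script finishes, you can optionally confirm that everything is gone with a few list commands; each should come back empty for the resources we created (use the same region you deployed to):
$ gcloud run services list --region=<GCP-REGION>
$ gcloud artifacts repositories list --location=<GCP-REGION>
$ gcloud api-gateway gateways list --location=<GCP-REGION>
$ gcloud api-gateway apis list
$ docker images | grep <IMAGE-NAME>   # Should print nothing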
We have successfully automated the deployment process using bash scripts, simplifying the workflow and saving time across the entire process. You can play around, add your own bits to the mix and see what you can come up with. Happy coding!