In this tutorial we will replace the locally installed `gcloud` cli tool with the official `gcloud` cli docker image and integrate it in the setup and deployment process. I'll explain which `gcloud` cli docker images exist, and how to pull and run them.
In addition, I'll show a way to seamlessly integrate the image in a `make` setup and in shell scripts to completely remove the need for a locally installed version of the `gcloud` cli. Finally, we will adjust the example codebase to use the docker image wherever possible.
All code samples are publicly available in my
Docker PHP Tutorial repository on Github.
You find the branch with the final result of this tutorial at
part-12-use-gcloud-cli-docker-image.
All published parts of the Docker PHP Tutorial are collected under a dedicated page at Docker PHP Tutorial. The previous part was Deploy dockerized PHP Apps to production and the following one is Manage Logfiles in Docker via Volumes and Sidecar containers.
If you want to follow along, please subscribe to the RSS feed or via email to get automatic notifications when the next part comes out :)
Table of contents
- Introduction
- The `gcloud` cli docker image
- Using the `gcloud` cli docker image in our codebase
- Wrapping up
Introduction
The `gcloud` cli is the official CLI tool of GCP. We have introduced it in Run Docker on GCP Compute Instance VMs: Set up the gcloud CLI tool and installed it directly on our local machine:

The CLI tool for GCP is called `gcloud` and is available for all operating systems. In this tutorial we are using [it] installed natively on Windows via the GoogleCloudSDKInstaller.exe using the "Bundled Python" option.
Since it became a vital part of the deployment of the application, it is likely to be used by multiple developers in the team. This can lead to problems like

- outdated versions
- unnecessary updating overhead (e.g. due to a dependency on a new `python` version)
- etc.
We had similar considerations when implementing git secret via docker container:

Plus, we need to ensure that the `git-secret` and `gpg` versions are kept up-to-date for everyone to not run into any compatibility issues.
In this tutorial we'll remove this "local" dependency and use the official `gcloud` cli docker image instead.
Run the code yourself
If you are "just" interested in using `gcloud` via the docker image, run the following commands:
# Prepare the codebase
git clone https://github.com/paslandau/docker-php-tutorial.git && cd docker-php-tutorial
git checkout part-12-use-gcloud-cli-docker-image
# Run a `gcloud` command in the docker image
make gcp-gcloud ARGS=version
# Run a `gcloud` command locally
make gcp-gcloud ARGS=version EXECUTE_GCLOUD_IN_CONTAINER=false
Output:
$ make gcp-gcloud ARGS=version
docker run -i --rm --workdir="/codebase" --mount type=bind,source="$(pwd)",target=/codebase --mount type=volume,src=gcloud-config,dst=/home/cloudsdk --user cloudsdk gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-slim gcloud version
Google Cloud SDK 403.0.0
alpha 2022.09.20
beta 2022.09.20
bq 2.0.77
bundled-python3-unix 3.9.12
core 2022.09.20
gcloud-crc32c 1.0.0
gsutil 5.13
$ make gcp-gcloud ARGS=version EXECUTE_GCLOUD_IN_CONTAINER=false
gcloud version
Google Cloud SDK 399.0.0
beta 2022.08.19
bq 2.0.75
core 2022.08.19
gsutil 5.12
Run a full deployment
Since this article is part of a tutorial series, you can also run the full deployment.
I recommend creating a completely new GCP project to have a "clean slate" that ensures that everything works out of the box as intended.
# Prepare the codebase
git clone https://github.com/paslandau/docker-php-tutorial.git && cd docker-php-tutorial
git checkout part-12-use-gcloud-cli-docker-image
# Run the initialization
make dev-init
# Note:
# You don't have to follow the additional instructions of the `dev-init` target
# for this part of the tutorial.
# The following steps need to be done manually:
#
# - Create a new GCP project and "master" service account with Owner permissions.
# - Create a key file for that master service account and place it in the root of the codebase at
# ./gcp-master-service-account-key.json
#
# @see https://www.pascallandau.com/blog/gcp-compute-instance-vm-docker/#preconditions-project-and-owner-service-account
ls ./gcp-master-service-account-key.json
# Should NOT fail with
# ls: cannot access './gcp-master-service-account-key.json': No such file or directory
# Update the variables `DOCKER_REGISTRY` and `GCP_PROJECT_ID` in `.make/variables.env`
projectId="SET YOUR GCP_PROJECT_ID HERE"
# CAUTION: Mac users might need to use `sed -i '' -e` instead of `sed -i`! @see https://stackoverflow.com/a/19457213/413531
sed -i "s/DOCKER_REGISTRY=.*/DOCKER_REGISTRY=gcr.io\/${projectId}/" .make/variables.env
sed -i "s/GCP_PROJECT_ID=.*/GCP_PROJECT_ID=${projectId}/" .make/variables.env
# Set up an SSH key for the `gcloud` container
make gcp-create-ssh-key
# Set up the infrastructure
make infrastructure-setup ROOT_PASSWORD="production_secret_mysql_root_password"
# Authenticate docker to push images
make gcp-authenticate-docker SERVICE_ACCOUNT_KEY_FILE=./gcp-service-account-key.json
make docker-compose-build
make docker-compose-up
make gpg-init
make secret-decrypt
make gcp-activate-master-account
# Retrieve the AUTH string of the redis instance and set it in the `.secrets/prod/app.env` file
auth_string=$(make -s gcp-get-redis-auth)
# CAUTION: Mac users might need to use `sed -i '' -e` instead of `sed -i`! @see https://stackoverflow.com/a/19457213/413531
sed -i "s/REDIS_PASSWORD=.*/REDIS_PASSWORD=${auth_string}/" .secrets/prod/app.env
# Encrypt the secrets and commit the changes
make secret-encrypt
git add . && git commit -m "Update the REDIS_PASSWORD and re-encrypt the secrets"
# Run the deployment
make deploy
# Migrate the database
make deployment-setup-db-on-vm
# Verify the deployment
make deployment-info
A word of caution for Windows users (Git Bash / MINGW / MSYS2)
This section is important if you encounter one of the following error messages:
the working directory 'C:/Program Files/Git/codebase' is invalid, it needs to be an absolute path.
invalid mount config for type "volume": invalid mount path: 'C:/Program Files/Git/home/cloudsdk' mount path must be absolute.
the input device is not a TTY
stdout is not a tty
My development setup at the moment is
- Windows 10
- Docker Desktop v4.12.0
- Git for Windows v2.37.3
I.e. I do not use WSL but Git Bash as my terminal of choice. If you are in the same boat, please make sure to read my Common Issues section, especially The role of winpty: Fixing "The input device is not a TTY" and The path conversion issue:
- Windows needs a special tool called `winpty` to create input that is compatible with a Unix pty, and Git Bash and `winpty` "screw up" absolute Unix paths like `/codebase` by converting them by default to a Windows path like `C:/Program Files/Git/codebase`.
- The pty issue can be solved by prefixing `docker` commands that require user input with `winpty` (to avoid `The input device is not a TTY` errors) while NOT adding it to `docker` commands that need pipes (to avoid the `stdout is not a tty` error).
- To solve the path conversion issue, you need to install the latest version of `winpty` and in addition export the variable `MSYS_NO_PATHCONV=1` as an environment variable.
Long story short: Your concrete TODO in the context of this tutorial is making sure to use the latest version of `winpty`, e.g. by following the steps outlined in Fixing the path conversion issue for winpty.

Exporting `MSYS_NO_PATHCONV=1` as an environment variable should not be required if you are using the example code of this tutorial, because I'm taking care of it directly in the main `Makefile` via
OS?=$(shell uname)
ifeq ($(OS),Windows_NT)
# [...]
# Export MSYS_NO_PATHCONV=1 as environment variable to avoid automatic path conversion
# (the export does only apply locally to `make` and the scripts that are invoked,
# it does not affect the global environment)
# @see http://www.pascallandau.com/blog/setting-up-git-bash-mingw-msys2-on-windows/#fixing-the-path-conversion-issue-for-mingw-msys2
export MSYS_NO_PATHCONV=1
# [...]
endif
The `gcloud` cli docker image
Download
You can pull the `gcloud` cli docker image via
docker pull gcr.io/google.com/cloudsdktool/google-cloud-cli:$version
# e.g.
# docker pull gcr.io/google.com/cloudsdktool/google-cloud-cli:latest
# docker pull gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0
# or
docker pull gcr.io/google.com/cloudsdktool/google-cloud-cli:$optionalVersion-$type
# e.g.
# docker pull gcr.io/google.com/cloudsdktool/google-cloud-cli:alpine
# docker pull gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-alpine
where

- `$version` is either `latest` or a concrete version number like `403.0.0`. Without a `$type` suffix, you get a (big) Debian-based image with additional components.
- `$type` can be one of:
  - `slim`: A smaller Debian-based image with fewer additional components
  - `emulators`: A smaller Debian-based image with emulator components
  - `alpine`: An Alpine-based image with no additional components
  - `debian_component_based`: A Debian-based image with all additional components, installed via the component manager
See also the GCP Cloud SDK docs: Installing the Google Cloud CLI Docker image for a full overview of the image options. The corresponding Dockerfiles are available in the GoogleCloudPlatform/cloud-sdk-docker Github repository.
Concrete example for pulling a `403.0.0-alpine` image:
$ docker pull gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-alpine
403.0.0-alpine: Pulling from google.com/cloudsdktool/google-cloud-cli
9621f1afde84: Already exists
ac7f9f5825e5: Pulling fs layer
de9993cc056b: Pulling fs layer
a8c020dea79a: Pulling fs layer
9da7a7c33f19: Pulling fs layer
1ab604b5b29f: Pulling fs layer
[...]
Digest: sha256:d451cccaeb65f7878947c921c77e83910acb7465b26465c97451ae4a6a37e8e1
Status: Downloaded newer image for gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-alpine
gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-alpine
You can also create a custom image based on the `slim` and `alpine` types if you need a specific component that doesn't come pre-installed - see the documentation section Installing additional components.
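As a rough sketch, such a custom image can even be built from a heredoc (the `kubectl` component is just an example, and the `gcloud components install` flow is the one documented for the `alpine` type - verify it for your version):

docker build -t my-gcloud-cli:403.0.0-alpine-kubectl - <<'EOF'
FROM gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-alpine
# Install an additional component on top of the alpine base image
RUN gcloud components install kubectl --quiet
EOF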
Image sizes
To get a feeling for the size of the images, I've pulled the current latest
version of each
image - which is 403.0.0
as of 2022-09-25
:
REPOSITORY TAG SIZE IMAGE ID
gcr.io/google.com/cloudsdktool/google-cloud-cli 403.0.0 2.86GB a331adb66c48
gcr.io/google.com/cloudsdktool/google-cloud-cli 403.0.0-debian_component_based 2.57GB d0260cfd2a43
gcr.io/google.com/cloudsdktool/google-cloud-cli 403.0.0-emulators 1.1GB 8a360f740264
gcr.io/google.com/cloudsdktool/google-cloud-cli 403.0.0-slim 1.41GB 2662111f0e6b
gcr.io/google.com/cloudsdktool/google-cloud-cli 403.0.0-alpine 820MB 92eef3663fa5
The smallest one (`alpine`; 820MB) is ~3.5x smaller than the biggest one ("no type"; 2.86GB).
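FYI: You can reproduce this overview for the images you have pulled via:

# List all local tags of the gcloud cli image including their sizes
docker image ls gcr.io/google.com/cloudsdktool/google-cloud-cli --format 'table {{.Repository}}\t{{.Tag}}\t{{.Size}}\t{{.ID}}'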
Installed components
In order to understand if a pre-compiled image is sufficient, I've used the following script to print the components for each image type:
version="403.0.0"
printf "Tag: $version\n=\n"
docker run --rm gcr.io/google.com/cloudsdktool/google-cloud-cli:$version gcloud version
tags="debian_component_based emulators slim alpine"
for tag in $tags; do
printf "\n\nTag: $version-$tag\n=\n"
docker run --rm gcr.io/google.com/cloudsdktool/google-cloud-cli:$version-$tag gcloud version
done
Output:
Tag: 403.0.0
=
Google Cloud SDK 403.0.0
alpha 2022.09.20
app-engine-go 1.9.72
app-engine-java 1.9.98.1
app-engine-python 1.9.101
app-engine-python-extras 1.9.97
beta 2022.09.20
bigtable
bq 2.0.77
bundled-python3-unix 3.9.12
cbt 0.12.0
cloud-datastore-emulator 2.2.2
cloud-firestore-emulator 1.14.4
cloud-spanner-emulator 1.4.3
core 2022.09.20
datalab 20190610
gcloud-crc32c 1.0.0
gke-gcloud-auth-plugin 0.3.0
gsutil 5.13
kpt 1.0.0-beta.15
local-extract 1.5.4
pubsub-emulator 0.7.0
Tag: 403.0.0-debian_component_based
=
Google Cloud SDK 403.0.0
alpha 2022.09.20
anthos-auth 1.4.3
app-engine-go 1.9.72
app-engine-java 1.9.98.1
app-engine-python 1.9.101
app-engine-python-extras 1.9.96
appctl 0.1.12
beta 2022.09.20
bigtable
bq 2.0.77
bundled-python3-unix 3.9.12
cbt 0.12.0
cloud-datastore-emulator 2.2.2
core 2022.09.20
datalab 20190610
gcloud-crc32c 1.0.0
gke-gcloud-auth-plugin 0.3.0
gsutil 5.13
kpt 1.0.0-beta.15
kubectl 1.22.12
kustomize 4.4.0
local-extract 1.5.4
minikube 1.26.1
nomos 1.13.0-rc.7
pubsub-emulator 0.7.0
skaffold 1.39.2
Tag: 403.0.0-emulators
=
Google Cloud SDK 403.0.0
beta 2022.09.20
bigtable
bq 2.0.77
bundled-python3-unix 3.9.12
cloud-datastore-emulator 2.2.2
cloud-firestore-emulator 1.14.4
cloud-spanner-emulator 1.4.3
core 2022.09.20
gcloud-crc32c 1.0.0
gsutil 5.13
pubsub-emulator 0.7.0
Tag: 403.0.0-slim
=
Google Cloud SDK 403.0.0
alpha 2022.09.20
beta 2022.09.20
bq 2.0.77
bundled-python3-unix 3.9.12
core 2022.09.20
gcloud-crc32c 1.0.0
gsutil 5.13
Tag: 403.0.0-alpine
=
Google Cloud SDK 403.0.0
bq 2.0.77
bundled-python3-unix 3.9.12
core 2022.09.20
gcloud-crc32c 1.0.0
gsutil 5.13
FYI: For the codebase of this tutorial we need the `beta` component for creating the MySQL Cloud SQL instance to enable the `--allocated-ip-range-name` option. The smallest image type that contains that component is `slim`, i.e. we'll use the image

gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-slim
Usage
To use `gcloud` via docker, you need to specify the `gcloud` command as the first argument after the image name of `docker run` and then "use the commands as you would on a local installation", e.g. for `gcloud version`:
docker run --rm gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-slim gcloud version
$ docker run --rm gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-slim gcloud version
Google Cloud SDK 403.0.0
bq 2.0.77
bundled-python3-unix 3.9.12
core 2022.09.20
gcloud-crc32c 1.0.0
gsutil 5.13
Authentication and persistence
Since we need to be authenticated for most of the commands, we need some way of "persisting" the authentication information. We will use a named volume to retain the information across different container runs.
The following example shows how to authenticate with a service account key file located in the current working directory:
docker run \
-i \
--rm \
--workdir="/codebase" \
--mount type=bind,source="$(pwd)",target=/codebase \
--mount type=volume,src=gcloud-config,dst=/home/cloudsdk \
--user cloudsdk \
gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-slim \
gcloud auth activate-service-account --key-file="./gcp-service-account-key.json" --project="pl-dofroscra-p"
Output:
$ docker run \
-i \
--rm \
--workdir="/codebase" \
--mount type=bind,source="$(pwd)",target=/codebase \
--mount type=volume,src=gcloud-config,dst=/home/cloudsdk \
--user cloudsdk \
gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-slim \
gcloud auth activate-service-account --key-file="./gcp-service-account-key.json" --project="pl-dofroscra-p"
Activated service account credentials for: [pl-dofroscra-p.iam.gserviceaccount.com]
-i \
--rm \
Run the container in interactive mode (`-i`) to accept input from stdin, and remove it (`--rm`) after running the command, as there is no need to keep it around after executing a `gcloud` command.

FYI: Accepting input from stdin is e.g. required for setting the secret gpg password in `.infrastructure/setup-gcp.sh` at
echo -n "${gpg_secret_key_password}" | gcloud secrets versions add GPG_PASSWORD --data-file=-
Otherwise, we would run into the error
ERROR: (gcloud.secrets.versions.add) INVALID_ARGUMENT: Secret Payload cannot be empty.
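To see the stdin forwarding in isolation, here's a toy example (nothing repo-specific; `cat` simply echoes the piped data back):

# `-i` forwards the piped data to the process inside the container
echo -n "some-secret" | docker run -i --rm gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-slim cat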
--workdir="/codebase" \
--mount type=bind,source="$(pwd)",target=/codebase \
Share the codebase (specified via the current directory `$(pwd)`) with the container as a bind mount and make it available at the path `/codebase`. This is important to get access to the service account key file that we specify via `--key-file="./gcp-service-account-key.json"` in the `gcloud auth` command.

FYI: The `/codebase` directory is chosen arbitrarily - it should just be a directory that doesn't already exist in the container.
--user cloudsdk \
--mount type=volume,src=gcloud-config,dst=/home/cloudsdk \
When we apply permanent settings to the `gcloud` cli config (e.g. the authenticated service account and the default project), those settings need to be stored "somewhere". By default, this happens in the `~/.config/gcloud` directory within the home directory of the current user, see CloudSDK docs: Managing gcloud CLI configurations:
Configurations are stored in your user config directory (typically ~/.config/gcloud on MacOS and Linux, or %APPDATA%\gcloud on Windows [...]
To retain those settings across different container runs, we need to "persist" the data in a volume.
The `cloudsdk` user comes pre-created in all `gcloud` images and its home directory is `/home/cloudsdk`. Thus, we create a volume named `gcloud-config` and map it to this path. I.e. all files and directories within the home directory are stored in the volume - including the `/home/cloudsdk/.config/gcloud` directory.
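FYI: The `gcloud-config` volume is created automatically on first use. You can inspect it - or remove it to reset all persisted `gcloud` settings - via:

# Show the volume metadata (including where docker stores the data)
docker volume inspect gcloud-config
# Remove the volume - and thereby all persisted gcloud config and credentials
docker volume rm gcloud-config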
FYI: We use the `cloudsdk` user instead of the default `root` user on purpose, because using `root` would create problems later, e.g. when trying to log into a VM via SSH, because `root` login is disabled by default.
gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-slim \
gcloud auth activate-service-account --key-file="./gcp-service-account-key.json" --project="pl-dofroscra-p"
Finally, we define the `gcloud` image and run the `gcloud auth activate-service-account` command.
We can then verify that the authentication worked as expected via
docker run \
-i \
--rm \
--workdir="/codebase" \
--mount type=bind,source="$(pwd)",target=/codebase \
--mount type=volume,src=gcloud-config,dst=/home/cloudsdk \
--user cloudsdk \
gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-slim \
gcloud info
Output:
$ docker run \
-i \
--rm \
--workdir="/codebase" \
--mount type=bind,source="$(pwd)",target=/codebase \
--mount type=volume,src=gcloud-config,dst=/home/cloudsdk \
--user cloudsdk \
gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-slim \
gcloud info
Google Cloud SDK [403.0.0]
Platform: [Linux, x86_64] uname_result(system='Linux', node='80b59ba0d70c', release='5.10.124-linuxkit', version='#1 SMP Thu Jun 30 08:19:10 UTC 2022', machine='x86_64')
Locale: (None, None)
Python Version: [3.9.13 (main, Jul 26 2022, 13:12:30) [GCC 10.3.1 20211027]]
Python Location: [/usr/bin/python3]
OpenSSL: [OpenSSL 1.1.1q 5 Jul 2022]
Requests Version: [2.25.1]
urllib3 Version: [1.26.9]
Site Packages: [Disabled]
Installation Root: [/google-cloud-sdk]
Installed Components:
gsutil: [5.13]
gcloud-crc32c: [1.0.0]
bq: [2.0.77]
bundled-python3-unix: [3.9.12]
core: [2022.09.20]
System PATH: [/google-cloud-sdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin]
Python PATH: [/google-cloud-sdk/lib/third_party:/google-cloud-sdk/lib:/usr/lib/python39.zip:/usr/lib/python3.9:/usr/lib/python3.9/lib-dynload]
Cloud SDK on PATH: [True]
Kubectl on PATH: [False]
Installation Properties: [/google-cloud-sdk/properties]
User Config Directory: [/home/cloudsdk/.config/gcloud]
Active Configuration Name: [default]
Active Configuration Path: [/home/cloudsdk/.config/gcloud/configurations/config_default]
Account: [[email protected]]
Project: [pl-dofroscra-p]
Current Properties:
[core]
account: [[email protected]] (property file)
disable_usage_reporting: [True] (property file)
project: [pl-dofroscra-p] (property file)
Logs Directory: [/home/cloudsdk/.config/gcloud/logs]
Last Log File: [/home/cloudsdk/.config/gcloud/logs/2022.09.26/08.40.35.902096.log]
git: [git version 2.34.4]
Note the
Account: [[email protected]]
Project: [pl-dofroscra-p]
part near the end.
FYI: The GCP docs on Installing a specified Docker image propose a different approach:

- create a container named `google-config` that uses an anonymous volume
- run the authentication in this container
- use the `--volumes-from` option to reference the volumes of the `google-config` container on subsequent invocations of the `gcloud` docker image

I find this approach less intuitive and more difficult, though, and I don't see a benefit over simply using a named volume. The only thing you need to keep in mind is that something like `docker volume prune` would delete the `gcloud-config` volume unless it is attached to a running container.
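For completeness, a rough sketch of that docs approach (paraphrased - I haven't adopted it, so treat it as an illustration):

# 1) Authenticate once in a dedicated container; its anonymous volume keeps the config
docker run -ti --name google-config gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-slim gcloud auth login
# 2) Reference that container's volumes on subsequent invocations
docker run --rm --volumes-from google-config gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-slim gcloud compute instances list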
Suppress warnings
After playing around with the containerized `gcloud`, I noticed the warning
WARNING:
To increase the performance of the tunnel, consider installing NumPy. For instructions,
please see https://cloud.google.com/iap/docs/using-tcp-forwarding#increasing_the_tcp_upload_bandwidth
popping up whenever I used the `--tunnel-through-iap` flag of the `gcloud compute ssh` command. Since I don't want to create a custom image just to install `NumPy`, I looked for alternatives to suppress the warning and came across the `verbosity` configuration for `gcloud`, which is set to `warning` by default.

The warning can be suppressed by adding `--verbosity=error` as an option to any command that uses the `--tunnel-through-iap` flag.
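E.g. (with hypothetical VM and project names):

# Suppress the NumPy warning for an IAP-tunneled SSH session
gcloud compute ssh my-vm --project my-project --zone us-central1-a --tunnel-through-iap --verbosity=error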
FYI: You could also adjust the general `verbosity` level via `gcloud config set core/verbosity error`, but that might also hide some warnings that you DO want to see, and I feel disabling it on a per-case basis is the safer way to go.
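If you do change the global level, you can inspect and revert it via the standard `gcloud config` commands:

# Show the currently configured verbosity (unset means the default `warning` is used)
gcloud config get-value core/verbosity
# Remove the override again
gcloud config unset core/verbosity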
A `make` variable for targets
So far, we are using `gcloud` "directly" in `make` targets, e.g. like this:
.PHONY: gcp-info-vms
gcp-info-vms: ## Show VM information
gcloud compute instances list
To replace this usage with the `gcloud` image, we would have to change the target to
.PHONY: gcp-info-vms
gcp-info-vms: ## Show VM information
docker run \
-i \
--rm \
--workdir="/codebase" \
--mount type=bind,source="$$(pwd)",target=/codebase \
--mount type=volume,src=gcloud-config,dst=/home/cloudsdk \
--user cloudsdk \
gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-slim \
gcloud compute instances list
That's quite a lot of noise. Luckily, we have already "solved" this problem earlier and can use the same "technique" as we did for managing docker compose via `make`: we create a `GCLOUD` variable that holds the fully configured command, i.e.
WINPTY_PREFIX=
# OS is a defined variable for WIN systems, so "uname" will not be executed
OS?=$(shell uname)
ifeq ($(OS),Windows_NT)
WINPTY_PREFIX=winpty
endif
EXECUTE_GCLOUD_IN_CONTAINER?=true
GCLOUD_VERSION:=403.0.0-slim
GCLOUD_DOCKER_IMAGE_USER=cloudsdk
RUN_GCLOUD_DOCKER_IMAGE_ARGS= -i \
--rm \
--workdir="/codebase" \
--mount type=bind,source="$$(pwd)",target=/codebase \
--mount type=volume,src=gcloud-config,dst=/home/$(GCLOUD_DOCKER_IMAGE_USER) \
--user $(GCLOUD_DOCKER_IMAGE_USER)
RUN_GCLOUD_DOCKER_IMAGE:=docker run $(RUN_GCLOUD_DOCKER_IMAGE_ARGS) gcr.io/google.com/cloudsdktool/google-cloud-cli:$(GCLOUD_VERSION)
RUN_GCLOUD_DOCKER_IMAGE_WITH_TTY:=$(WINPTY_PREFIX) docker run -t $(RUN_GCLOUD_DOCKER_IMAGE_ARGS) gcr.io/google.com/cloudsdktool/google-cloud-cli:$(GCLOUD_VERSION)
GCLOUD:=gcloud
GCLOUD_WITH_TTY:=gcloud
ifeq ($(EXECUTE_GCLOUD_IN_CONTAINER),true)
GCLOUD:=$(RUN_GCLOUD_DOCKER_IMAGE) gcloud
GCLOUD_WITH_TTY:=$(RUN_GCLOUD_DOCKER_IMAGE_WITH_TTY) gcloud
endif
By default, the `GCLOUD` variable equals `gcloud` (i.e. the locally installed `gcloud` cli), but if `EXECUTE_GCLOUD_IN_CONTAINER` is set to `true`, the docker image is used instead (which is the default, as the variable is defined as `EXECUTE_GCLOUD_IN_CONTAINER?=true`).
This way, we can write our target as
.PHONY: gcp-info-vms
gcp-info-vms: ## Show VM information
$(GCLOUD) compute instances list
We can even control via `EXECUTE_GCLOUD_IN_CONTAINER` whether we want to run the docker image or the locally installed `gcloud` cli. Examples:
Use the docker image
$ make gcp-info-vms -n
docker run --rm --workdir="/codebase" --mount type=bind,source="",target=/codebase --mount type=volume,src=gcloud-config,dst=/home/cloudsdk --user cloudsdk gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-slim gcloud compute instances list
Use the locally installed `gcloud` cli
$ make gcp-info-vms EXECUTE_GCLOUD_IN_CONTAINER=false -n
gcloud compute instances list
In addition, I have added a `GCLOUD_WITH_TTY` variable that needs to be used whenever a command requires user input. Take the `gcp-ssh-login` target for example:
.PHONY: gcp-ssh-login
gcp-ssh-login: validate-gcp-variables ## Log into a VM via IAP tunnel
@$(if $(VM_NAME),,$(error "VM_NAME is undefined"))
$(GCLOUD_WITH_TTY) compute ssh $(VM_NAME) --project $(GCP_PROJECT_ID) --zone $(GCP_ZONE) --tunnel-through-iap --verbosity=error
This target will open an SSH session over IAP so that we can then run commands on the remote VM - since we need to "type" the commands, we need a terminal and thus need to instruct `docker` to allocate a tty for us. This is done via the `-t` option. Thus, we have added the helper variable
RUN_GCLOUD_DOCKER_IMAGE_WITH_TTY:=$(WINPTY_PREFIX) docker run -t $(RUN_GCLOUD_DOCKER_IMAGE_ARGS) gcr.io/google.com/cloudsdktool/google-cloud-cli:$(GCLOUD_VERSION)
The `$(WINPTY_PREFIX)` is required for users of Git Bash on Windows, because they must add `winpty` as a prefix to enable terminal input.
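For illustration, the difference looks roughly like this in Git Bash (a toy example, not part of the repo):

# Without the winpty prefix, this fails with "the input device is not a TTY"
winpty docker run -it --rm gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-slim bash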
Creating an SSH key
The `RUN_GCLOUD_DOCKER_IMAGE_WITH_TTY` variable is also helpful for running other commands in the container. A concrete example is the generation of the `google_compute_engine` SSH keys that are required for connecting to the Compute Engine VMs via IAP. This can be done with the following target:
.PHONY: gcp-create-ssh-key
gcp-create-ssh-key: ## Create an SSH key pair named "google_compute_engine" in the gcloud docker image at ~/.ssh
@$(RUN_GCLOUD_DOCKER_IMAGE_WITH_TTY) ssh-keygen -t rsa -f /home/$(GCLOUD_DOCKER_IMAGE_USER)/.ssh/google_compute_engine -C $(GCLOUD_DOCKER_IMAGE_USER) -N "";
The `.ssh` folder is located in the home directory of the user and is thus already covered by the `gcloud-config` volume.
FYI: We need a tty here, because `ssh-keygen` doesn't provide an option to force overwriting an existing key and would prompt us to confirm (or deny) the action. Usage example:
$ make gcp-create-ssh-key
Generating public/private rsa key pair.
/home/cloudsdk/.ssh/google_compute_engine already exists.
Overwrite (y/n)? y
Your identification has been saved in /home/cloudsdk/.ssh/google_compute_engine.
Your public key has been saved in /home/cloudsdk/.ssh/google_compute_engine.pub.
The key fingerprint is:
SHA256:+TMUpyBeVSSjTe728c+xTkEVJAqy3HeK5e7TZcnq9OE cloudsdk
The key's randomart image is:
+---[RSA 2048]----+
| . .=oo..oo|
| . +*.o. . .|
| .o+.++.. . |
| . o +=+o . |
| . S.=o. ...|
| +.. o =.|
| +.o.=+ |
| .+.o+o+|
| .o..Eo|
+----[SHA256]-----+
A `bash` function for shell scripts
As in the previous `make` section, our shell scripts are currently also using `gcloud` directly, and we'll create a "decorator" for the `gcloud` command by overwriting it with our own function:
export MSYS_NO_PATHCONV=1
gcloud() {
gcloudVersion=403.0.0-slim
docker run \
-i \
--rm \
--workdir="/codebase" \
--mount type=bind,source="$(pwd)",target=/codebase \
--mount type=volume,src=gcloud-config,dst=/home/cloudsdk \
--user cloudsdk \
gcr.io/google.com/cloudsdktool/google-cloud-cli:${gcloudVersion} \
gcloud "$@"
}
We're first exporting `MSYS_NO_PATHCONV=1` as an environment variable to disable the automatic path conversion for Git Bash users and then use the already familiar options for the docker image to run a container and invoke the `gcloud` command with all given arguments via `"$@"`.
The function must be defined by each script so that all invocations of gcloud
use it
instead of the actual command. To avoid having to re-define the function multiple times, we can
add it to a file once and include this file in the affected scripts. Take the following
directory structure for example:
./scripts
├── includes/
│ └── gcloud_decorator.sh
└── foo.sh
where `./scripts/includes/gcloud_decorator.sh` contains the decorator mentioned above and `./scripts/foo.sh` contains the following script:
set -x
. $(dirname "$0")/includes/gcloud_decorator.sh
gcloud version
`set -x` (print a trace of simple commands) is helpful for showing that our decorator is actually used, and the `$(dirname "$0")` part in the include is necessary to resolve the location of the included script relative to the invoked one.
Running the script generates the following output:
++ dirname ./scripts/foo.sh
+ . .infrastructure/include/gcloud_decorator.sh
+ gcloud version
+ gcloudVersion=403.0.0-slim
++ pwd
+ MSYS_NO_PATHCONV=1
+ docker run -i --rm --workdir=/codebase --mount type=bind,source=/c/codebase/docker-php-tutorial,target=/codebase --mount type=volume,src=gcloud-config,dst=/home/cloudsdk --user cloudsdk gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-slim gcloud version
Google Cloud SDK 403.0.0
alpha 2022.09.20
beta 2022.09.20
bq 2.0.77
bundled-python3-unix 3.9.12
core 2022.09.20
gcloud-crc32c 1.0.0
gsutil 5.13
Note that it actually runs the `docker` container as expected:
+ docker run -i --rm --workdir=/codebase --mount type=bind,source=/c/codebase/docker-php-tutorial,target=/codebase --mount type=volume,src=gcloud-config,dst=/home/cloudsdk --user cloudsdk gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-slim gcloud version
Using the `gcloud` cli docker image in our codebase
In order to use the `gcloud` cli docker image instead of the locally installed `gcloud` cli, we need to replace all current "direct" `gcloud` invocations on our local machine - but NOT the ones on the remote Compute Engine VM, because there `gcloud` already comes pre-installed and authenticated with the deployed service account.
Replacing local `gcloud` invocations

We're using the `gcloud` cli in three places:

- the `make` setup
- the setup scripts
- the deployment
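If you want to double-check where those invocations live, a simple `grep` does the job (a hypothetical helper, not part of the repo):

# Find direct `gcloud` usages in the make setup and the shell scripts
grep -rnw 'gcloud' .make .infrastructure --include='*.mk' --include='*.sh'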
Replacing `gcloud` in the `make` setup

All usages of the `gcloud` cli happen in the file `.make/03-00-gcp.mk`, and we simply need to replace `gcloud` with the `GCLOUD` or `GCLOUD_WITH_TTY` variable introduced in section A `make` variable for targets:
##@ [GCP]
EXECUTE_GCLOUD_IN_CONTAINER?=true
GCLOUD_VERSION:=403.0.0-slim
GCLOUD_DOCKER_IMAGE_USER=cloudsdk
RUN_GCLOUD_DOCKER_IMAGE_ARGS= -i \
--rm \
--workdir="/codebase" \
--mount type=bind,source="$$(pwd)",target=/codebase \
--mount type=volume,src=gcloud-config,dst=/home/$(GCLOUD_DOCKER_IMAGE_USER) \
--user $(GCLOUD_DOCKER_IMAGE_USER)
RUN_GCLOUD_DOCKER_IMAGE:=docker run $(RUN_GCLOUD_DOCKER_IMAGE_ARGS) gcr.io/google.com/cloudsdktool/google-cloud-cli:$(GCLOUD_VERSION)
RUN_GCLOUD_DOCKER_IMAGE_WITH_TTY:=$(WINPTY_PREFIX) docker run -t $(RUN_GCLOUD_DOCKER_IMAGE_ARGS) gcr.io/google.com/cloudsdktool/google-cloud-cli:$(GCLOUD_VERSION)
GCLOUD:=gcloud
GCLOUD_WITH_TTY:=gcloud
ifeq ($(EXECUTE_GCLOUD_IN_CONTAINER),true)
GCLOUD:=$(RUN_GCLOUD_DOCKER_IMAGE) gcloud
GCLOUD_WITH_TTY:=$(RUN_GCLOUD_DOCKER_IMAGE_WITH_TTY) gcloud
endif
.PHONY: gcp-gcloud
gcp-gcloud: ## Run an arbitrary `gcloud` command specified via ARGS
$(GCLOUD) $(ARGS)
.PHONY: gcp-create-ssh-key
gcp-create-ssh-key: ## Create an SSH key pair named "google_compute_engine" in the gcloud docker image at ~/.ssh
@$(RUN_GCLOUD_DOCKER_IMAGE_WITH_TTY) ssh-keygen -t rsa -f /home/$(GCLOUD_DOCKER_IMAGE_USER)/.ssh/google_compute_engine -C $(GCLOUD_DOCKER_IMAGE_USER) -N "";
.PHONY: gcp-authenticate-docker
gcp-authenticate-docker: ## Authenticate docker with the JSON key file specified via SERVICE_ACCOUNT_KEY_FILE
@$(if $(SERVICE_ACCOUNT_KEY_FILE),,$(error "SERVICE_ACCOUNT_KEY_FILE is undefined"))
cat "$(SERVICE_ACCOUNT_KEY_FILE)" | docker login -u _json_key --password-stdin https://gcr.io
.PHONY: gcp-activate-service-account
gcp-activate-service-account: validate-gcp-variables ## Initialize the `gcloud` cli with the service account specified via SERVICE_ACCOUNT_KEY_FILE
@$(if $(SERVICE_ACCOUNT_KEY_FILE),,$(error "SERVICE_ACCOUNT_KEY_FILE is undefined"))
$(GCLOUD) auth activate-service-account --key-file="$(SERVICE_ACCOUNT_KEY_FILE)" --project="$(GCP_PROJECT_ID)"
.PHONY: gcp-activate-deployment-account
gcp-activate-deployment-account: validate-gcp-variables ## Initialize the `gcloud` cli with the deployment service account
@$(if $(GCP_DEPLOYMENT_SERVICE_ACCOUNT_KEY_FILE),,$(error "GCP_DEPLOYMENT_SERVICE_ACCOUNT_KEY_FILE is undefined"))
"$(MAKE)" gcp-activate-service-account SERVICE_ACCOUNT_KEY_FILE=$(GCP_DEPLOYMENT_SERVICE_ACCOUNT_KEY_FILE)
# ...
Replacing `gcloud` in the setup scripts

The setup scripts for the GCP infrastructure have been introduced in Create a production infrastructure for dockerized PHP Apps on GCP and are located in the `.infrastructure` directory. We are using the decorator function explained in section A `bash` function for shell scripts by placing it in the file `.infrastructure/include/include.sh` and including it in each setup script. Take the `.infrastructure/setup-vm.sh` script for example:
#!/usr/bin/env bash
# Fail immediately if any command fails
set -e
usage="Usage: setup-vm.sh project_id vm_name"
[ -z "$1" ] && echo "No project_id given! $usage" && exit 1
[ -z "$2" ] && echo "No vm_name given! $usage" && exit 1
. $(dirname "$0")/include/include.sh
project_id=$1
vm_name=$2
enable_public_access=$3
vm_zone=us-central1-a
master_service_account_key_location=./gcp-master-service-account-key.json
# ...
printf "${GREEN}Activating master service account${NO_COLOR}\n"
gcloud auth activate-service-account --key-file="${master_service_account_key_location}" --project="${project_id}"
printf "${GREEN}Creating a Compute Instance VM${NO_COLOR}\n"
gcloud compute instances create "${vm_name}" \
--project="${project_id}" \
--zone="${vm_zone}" \
--machine-type=e2-micro \
# ...
Please note that we are NOT including it in `.infrastructure/scripts/deploy.sh`, because this file is transferred to the remote VM and executed there as part of the deployment. This is explained in more detail in the following section.
Adjusting `gcloud` in the deployment

The deployment consists of a local phase and a remote phase:

- locally:
  - build the docker images and push them to the registry
  - create a deployment archive and transfer it to the VMs
- remotely:
  - extract the deployment archive and run the deployment script ...
  - ... that will pull and start the containers

Since both phases use `make`, we simply need to make sure that the `EXECUTE_GCLOUD_IN_CONTAINER` variable introduced in section Replacing `gcloud` in the `make` setup is set to `true` in the local phase and `false` in the remote one.
Since `true` is the default value, we don't need to adjust anything for the local phase. Remotely, the `make` setup is initialized as part of the application deployment script located at `.infrastructure/scripts/deploy.sh`, and we need to adjust it to set `EXECUTE_GCLOUD_IN_CONTAINER=false` via

make make-init ENVS="ENV=prod TAG=latest EXECUTE_GCLOUD_IN_CONTAINER=false"

This ensures that the `gcp-secret-get` target will keep using the local `gcloud` cli. Full script:
#!/usr/bin/env bash
set -e
usage="Usage: deploy.sh docker_service_name"
[ -z "$1" ] && echo "No docker_service_name given! $usage" && exit 1
docker_service_name=$1
echo "Initializing the codebase"
make make-init ENVS="ENV=prod TAG=latest EXECUTE_GCLOUD_IN_CONTAINER=false"
echo "Retrieving secrets"
make gcp-secret-get SECRET_NAME=GPG_KEY > secret.gpg
GPG_PASSWORD=$(make gcp-secret-get SECRET_NAME=GPG_PASSWORD)
echo "Creating compose-secrets.env file"
echo "GPG_PASSWORD=$GPG_PASSWORD" > compose-secrets.env
echo "Pulling image for '${docker_service_name}' on the VM from the registry"
make docker-pull DOCKER_SERVICE_NAME="${docker_service_name}"
echo "Stop the '${docker_service_name}' container on the VM"
make docker-stop DOCKER_SERVICE_NAME="${docker_service_name}" || true
make docker-rm DOCKER_SERVICE_NAME="${docker_service_name}" || true
echo "Preparing service IPs as --add-host options"
service_ips=""
while read -r line;
do
service_ips=$service_ips" --add-host $line"
done < service-ips
echo "Start the container for '${docker_service_name}' on the VM"
make docker-run-"${docker_service_name}" HOST_STRING="$service_ips"
Authentication for the Container Registry
In short: We don't need to change anything. Here is why:
We're using GCP's Container Registry and need to be authenticated with a service account (or user) that has the necessary permissions to push and pull images.
Locally, we are using JSON key file authentication
key=./gcp-service-account-key.json
cat "$key" | docker login -u _json_key --password-stdin https://gcr.io
(see Run Docker on GCP Compute Instance VMs: Authenticate docker) in the `make` target `gcp-authenticate-docker` defined in `.make/03-00-gcp.mk`:
.PHONY: gcp-authenticate-docker
gcp-authenticate-docker: ## Authenticate docker with the JSON key file specified via SERVICE_ACCOUNT_KEY_FILE
@$(if $(SERVICE_ACCOUNT_KEY_FILE),,$(error "SERVICE_ACCOUNT_KEY_FILE is undefined"))
cat "$(SERVICE_ACCOUNT_KEY_FILE)" | docker login -u _json_key --password-stdin https://gcr.io
This method does not rely on `gcloud` being present on the system. Thus, we don't need to update anything here.
Remotely (on a GCP VM) we are using the `gcloud` credential helper via

gcloud auth configure-docker

(see Run Docker on GCP Compute Instance VMs: Authenticate docker via `gcloud`) as part of the `.infrastructure/provision-vm.sh` setup script:
#!/usr/bin/env bash
# Fail immediately if any command fails
set -e
usage="Usage: provision-vm.sh project_id vm_name"
[ -z "$1" ] && echo "No project_id given! $usage" && exit 1
[ -z "$2" ] && echo "No vm_name given! $usage" && exit 1
. $(dirname "$0")/include/include.sh
project_id=$1
vm_name=$2
vm_zone=us-central1-a
# ...
printf "${GREEN}Authenticating docker via gcloud in the VM${NO_COLOR}\n"
gcloud compute ssh ${vm_name} --zone ${vm_zone} --tunnel-through-iap --project=${project_id} --command="sudo su root -c 'gcloud auth configure-docker --quiet'"
Note in the last line that we run the ssh command

sudo su root -c 'gcloud auth configure-docker --quiet'

on the VM. This method requires `gcloud` to be present on the system => which is fine, as that's the case by default for Compute Engine VMs.
Ensuring SSH keys exist
We are using IAP to log into the Compute Instance VMs as outlined in
Run Docker on GCP Compute Instance VMs: Login using the Identity-Aware Proxy (IAP) concept.
This technique expects a private ssh key located in the home directory at `~/.ssh/google_compute_engine`. We have already introduced the target `gcp-create-ssh-key` in section Creating an SSH key and need to ensure that it's run once before the `gcloud` container is used.
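A hypothetical guard (not part of the repo) to verify the key exists in the `gcloud-config` volume before a deployment:

# Print a hint if the google_compute_engine key is missing from the volume
docker run --rm \
  --mount type=volume,src=gcloud-config,dst=/home/cloudsdk \
  --user cloudsdk \
  gcr.io/google.com/cloudsdktool/google-cloud-cli:403.0.0-slim \
  test -f /home/cloudsdk/.ssh/google_compute_engine \
  || echo "SSH key missing - run 'make gcp-create-ssh-key' first"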
FYI: In theory, the `gcloud` cli would also create the key automatically if it doesn't exist - but this led to problems for me when running commands in parallel, because they would then try to create the missing keys simultaneously and fail.
Wrapping up
Congratulations, you made it! If some things are not completely clear by now, don't hesitate to leave a comment.
Wanna stay in touch?
Since you ended up on this blog, chances are pretty high that you're into Software Development (probably PHP, Laravel, Docker or Google Big Query) and I'm a big fan of feedback and networking.
So - if you'd like to stay in touch, feel free to shoot me an email with a couple of words about yourself and/or connect with me on LinkedIn or Twitter - or simply subscribe to my RSS feed, go the crazy route and subscribe via mail, and don't forget to leave a comment :)