Ivor Scott

Part of the Go and React Series

Lately, I’ve been migrating from Node to Go. Using Node, I had a great fullstack development workflow, but I struggled to achieve one in Go. What I wanted was the ability to live reload a Go API and debug it with breakpoints while it runs in a container. In this tutorial we’ll set up the ultimate Go and React development environment with Docker.

I expect you to be familiar with fullstack development. I won’t teach you every painstaking detail about how to create a React app or even a Go API. It’s fine if you’re new to Docker. I’ll explain the basics when needed. So relax, you’ll be able to copy and paste code as you go.

We focus on:

- Building Go and React development images with multi-stage Dockerfiles
- Running containers with docker-compose behind a Traefik reverse proxy
- Simplifying Docker commands with Makefiles
- Debugging Postgres in the terminal and in the browser
- Live reloading the Go API and debugging it with Delve
- Running unit tests in containers

Requirements

Clone the project repo and check out the starter branch.

The project starter is a simple monorepo containing two folders.

├── README.md
├── api/
├── client/

Docker is useful for operators, system admins, build engineers and developers.

Docker allows you to package your app and host it on any operating system. This means no more “it works on my machine” conversations.

Docker supports the full software life cycle from development to production. With Docker, software delivery doesn’t have to be a painful and unpredictable process.

Using Docker often starts with creating a Dockerfile, then building an image, and finally running one or more containers.

Here’s some terminology you should know.

1. Images

A Docker image is your application’s binaries, dependencies, and metadata packaged into a single entity, made up of multiple static layers that are cached for reuse.

2. Dockerfiles

A Dockerfile is a recipe of instructions for making images. Each instruction forms its own image layer.

3. Containers

A Docker container is an app instance derived from a Docker image. A container is not a virtual machine: unlike a virtual machine, a container doesn’t require its own operating system. Containers on a single host actually share one operating system, which makes them incredibly lightweight. Containers require fewer system resources, allowing us to run many applications or containers on one machine.

Install VSCode if you haven’t already, then open it.

Install these three extensions.

1) The Go Extension
Adds rich language support for the Go language.

2) The Docker Extension
Adds syntax highlighting, commands, hover tips, and linting for Docker-related files.

3) The Hadolint Extension
Integrates hadolint, a Dockerfile linter, into VS Code.

When using Go modules in a monorepo, VSCode seems to complain when our api is not the project root.

To fix this, right click below the project tree in the sidebar region, click “Add Folder To Workspace”, and select the api folder.

VSCode will need to attach to the delve debugger inside the go container.

Create a hidden folder named .vscode and add launch.json to it.

mkdir .vscode
touch .vscode/launch.json

Add the following contents to launch.json.
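Here’s a minimal sketch of a remote-attach configuration for delve. The configuration name Launch Remote matches the debug button we’ll click later; the port (2345) and remotePath (/api) are assumptions that must line up with your api Dockerfile and docker-compose.yml.

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Launch Remote",
      "type": "go",
      "request": "attach",
      "mode": "remote",
      "port": 2345,
      "host": "localhost",
      "remotePath": "/api",
      "cwd": "${workspaceFolder}"
    }
  ]
}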

Creating the Golang Dockerfile

Add a new Dockerfile to the api folder and open it.

touch api/Dockerfile

Add the following:
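As a reference point, here’s a minimal sketch of a multi-stage Go Dockerfile along these lines. The stage named base matches the text below; the Go version, ports, paths, and the exact dev, test, and prod stages are assumptions, and the trivy scanning step mentioned later is omitted for brevity.

# base stage: cache module downloads, then copy in source (Go version assumed)
FROM golang:1.14-alpine AS base
WORKDIR /api
COPY go.* ./
RUN go mod download
COPY . .

# dev stage: add live reloading (CompileDaemon) and debugging (delve)
FROM base AS dev
RUN go get github.com/githubnemo/CompileDaemon && \
    go get github.com/go-delve/delve/cmd/dlv
# API port (assumed) and delve's conventional port
EXPOSE 4000 2345
CMD ["CompileDaemon", "--build=go build -o main ./cmd/api", "--command=./main"]

# test stage: run unit tests at build time
FROM base AS test
RUN go test -v ./...

# build and prod stages: compile a static binary into a small final image
FROM base AS build
RUN CGO_ENABLED=0 go build -o main ./cmd/api

FROM alpine AS prod
COPY --from=build /api/main /main
CMD ["/main"]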

Demo

In the root directory run the following to build the api development image:
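Reconstructed from the flags explained below, the command looks like this:

DOCKER_BUILDKIT=1 docker build --target dev --tag demo/api ./api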

The docker build command builds a new docker image referencing our Dockerfile.

--target specifies that we only want to target the dev stage in the multi-stage build setup.

Multi-stage builds help us apply separation of concerns. In a multi-stage build setup, you define different stages of a single Dockerfile. Then you reference specific stages later. In our api Dockerfile, we declared the name of our first stage as base.

--tag specifies an image tag. An image tag is just a name we can use to reference the image; here, the image is tagged demo/api.

If your goal is to publish to DockerHub you can make a private or public image. The basic format DockerHub expects is username/image-name. Since we are not publishing images in this tutorial, demo doesn’t have to be your real username.

DOCKER_BUILDKIT=1 enables BuildKit, a newer build backend that supports parallel build processing for faster builds. You can read more here.

Our Dockerfiles leverage Aqua Security’s trivy image scanner. Docker images occasionally have vulnerabilities. Image scanners can help by alerting us to any issues. Unlike most image scanners, trivy has no problem detecting vulnerabilities in alpine images. Other image scanners run into issues because lightweight alpine images remove resources required to produce accurate image scans. Watch this video to learn more.

Creating the React Dockerfile

Add a new Dockerfile to the client folder and open it.

touch client/Dockerfile

Add the following contents:
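As before, here’s a minimal sketch of a multi-stage Node Dockerfile for the client, assuming a create-react-app style project. The Node version and commands are assumptions; port 3000 matches the port Traefik expects later.

# base stage: install dependencies, then copy in source (Node version assumed)
FROM node:12-alpine AS base
WORKDIR /client
COPY package*.json ./
RUN npm install
COPY . .

# dev stage: run the development server
FROM base AS dev
EXPOSE 3000
CMD ["npm", "start"]

# test stage: run unit tests once at build time
FROM base AS test
ENV CI=true
RUN npm test

# build and prod stages: compile static assets into an nginx image
FROM base AS build
RUN npm run build

FROM nginx:alpine AS prod
COPY --from=build /client/build /usr/share/nginx/html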

Demo

In the root directory run the following to build the client development image:
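The command follows the same pattern as the api image:

DOCKER_BUILDKIT=1 docker build --target dev --tag demo/client ./client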

In this section, we saw how Dockerfiles can be used to package up our application binaries with dependencies. We also used multi-stage builds to define different images in one Dockerfile: for dev, test and production. Building images was performed manually by running the docker build command and we supplied the --target flag to select a single stage in our multi-stage setup. In the next section we will use docker-compose to build images and run containers.

Running Containers

With our Docker images building successfully we are ready to run our application instances.

docker-compose is a command line tool and configuration file for running containers. You should only use it for local development and test automation. It was never designed for production. For production, you are better off using a production grade orchestrator like Docker Swarm – here’s why.

Note: Kubernetes is another popular production grade orchestrator. In development, I normally don’t use an orchestrator. In future posts I will touch on both Docker Swarm and Kubernetes in production.

With docker-compose we can run a collection of containers with one command. It makes running multiple containers far easier, especially when containers have relationships and depend on one another.

In the project root, create a docker-compose.yml file and open it.

touch docker-compose.yml

Add the following:
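The full file is long, so here’s a condensed sketch of its shape, reconstructed from the services, networks, volumes, and secrets discussed throughout this section. Image versions, ports, and environment details are assumptions; the traefik command flags and labels, the api command, and the create-db.sh mount are covered in detail later.

version: "3.7"

services:
  traefik:
    image: traefik:v2.1.2
    # command flags and labels are explained line by line below
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - traefik-public

  api:
    build:
      context: ./api
      target: dev
    # live reload command, delve flags, and traefik labels are explained below
    secrets:
      - postgres_db
      - postgres_passwd
      - postgres_user
    networks:
      - postgres-net
      - traefik-public

  client:
    build:
      context: ./client
      target: dev
    # traefik labels are explained below
    networks:
      - traefik-public

  db:
    image: postgres:11.6-alpine # version assumed
    environment:
      POSTGRES_DB_FILE: /run/secrets/postgres_db
      POSTGRES_USER_FILE: /run/secrets/postgres_user
      POSTGRES_PASSWORD_FILE: /run/secrets/postgres_passwd
    secrets:
      - postgres_db
      - postgres_passwd
      - postgres_user
    volumes:
      - postgres-db:/var/lib/postgresql/data
      - ./api/scripts/create-db.sh:/docker-entrypoint-initdb.d/create-db.sh
    networks:
      - postgres-net

  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: test@example.com
      PGADMIN_DEFAULT_PASSWORD: SuperSecret
    networks:
      - postgres-net
      - traefik-public

volumes:
  postgres-db:
    external: true

networks:
  postgres-net:
    external: true
  traefik-public:
    external: true

secrets:
  postgres_db:
    file: ./secrets/postgres_db
  postgres_passwd:
    file: ./secrets/postgres_passwd
  postgres_user:
    file: ./secrets/postgres_user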

Create a secrets folder in the project root.

mkdir secrets

Add the following secret files.

└── secrets
    ├── postgres_db
    ├── postgres_passwd
    └── postgres_user

In each file add some secret value.

The following code in our docker-compose.yml file (reconstructed here) tells docker-compose that the volume and networks are created beforehand, externally.
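volumes:
  postgres-db:
    external: true

networks:
  postgres-net:
    external: true
  traefik-public:
    external: true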

So we need to create them upfront. Run the following commands to do so.

docker network create postgres-net
docker network create traefik-public
docker volume create postgres-db

Navigate to your host machine’s /etc/hosts file and open it.

sudo vim /etc/hosts

Add an additional line mapping the following local domains to 127.0.0.1 (the list is reconstructed from the domains used later in this post).
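127.0.0.1 api.local client.local traefik.api.local pgadmin.local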

Demo

docker-compose up

In two separate browser tabs, navigate to https://api.local/products and then https://client.local

Note: In your browser, you may see warnings prompting you to click a link to continue to the requested page. This is quite common when using self-signed certificates and shouldn’t be a cause for concern.

The reason we are using self-signed certificates in the first place is to replicate the production environment as much as possible.

You should see the products displayed in the React app, meaning the traefik, api, client, and db containers are communicating successfully.

Cleaning up

Run docker-compose down to stop and remove all the containers we created with the docker-compose up command. In addition, remove the external volume and networks we created. In the Using Makefiles section, the makefile will create these for us.

docker-compose down
docker network remove postgres-net
docker network remove traefik-public
docker volume remove postgres-db

Our docker-compose.yml file was already configured to generate self-signed certificates with Traefik.

You might be wondering what Traefik is in the first place.

Traefik’s documentation states:

Traefik is an open-source Edge Router that makes publishing your services a fun and easy experience. It receives requests on behalf of your system and finds out which components are responsible for handling them.

What sets Traefik apart, besides its many features, is that it automatically discovers the right configuration for your services. The magic happens when Traefik inspects your infrastructure, where it finds relevant information and discovers which service serves which request.

https://docs.traefik.io/#the-traefik-quickstart-using-docker

Revisit the traefik service in our compose file.
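Reconstructed from the flags and labels explained below, the service looks roughly like this (the ports, socket mount, and network attachment are assumptions):

  traefik:
    image: traefik:v2.1.2
    command:
      - --api.insecure=true
      - --api.debug=true
      - --log.level=DEBUG
      - --providers.docker
      - --providers.docker.exposedbydefault=false
      - --providers.docker.network=traefik-public
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - traefik-public
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.tls=true"
      - "traefik.http.routers.traefik.rule=Host(`traefik.api.local`)"
      - "traefik.http.routers.traefik.service=api@internal"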

We leverage the Traefik image from DockerHub, version 2.1.2. We can configure Traefik using command line flags and labels.

Note: You can also configure Traefik using TOML files and YAML files.

I prefer using CLI flags because I don’t want to worry about storing the TOML file in production. I also like the idea of only relying on one docker-compose.yml file to set everything up.

Line by Line: How It Works

Let’s start with the command line flags.

--api.insecure=true

The API is exposed on the traefik entrypoint (port 8080).

--api.debug=true

Enable additional endpoints for debugging and profiling.

--log.level=DEBUG

Set the log level to DEBUG. By default, the log level is ERROR. The available levels are DEBUG, INFO, WARN, ERROR, FATAL, and PANIC.

--providers.docker

There are various providers to choose from; this line explicitly selects docker.

--providers.docker.exposedbydefault=false

Restrict Traefik’s routing configuration from exposing all containers by default. Only containers with the traefik.enable=true label will be exposed.

--providers.docker.network=traefik-public

Defines a default network to use for connections to all containers.

--entrypoints.web.address=:80

Create an entrypoint named web on port 80 to handle http connections.

--entrypoints.websecure.address=:443

Create an entrypoint named websecure on port 443 to handle https connections.

Next we will cover the labels on the traefik service.

- "traefik.enable=true"

Tell Traefik to include the service in its routing configuration.

- "traefik.http.routers.traefik.tls=true"

Enable TLS certificates.

- "traefik.http.routers.traefik.rule=Host(`traefik.api.local`)"

Set a host matching rule so all requests for this host are routed to the container.

- "traefik.http.routers.traefik.service=api@internal"

If you enable the API, a new special service named api@internal is created and can be referenced in a router. This label attaches to Traefik’s internal api so we can access the dashboard.

The next group of labels creates a router named http-catchall that catches all HTTP requests and forwards them through a middleware called redirect-to-https, redirecting all traffic to HTTPS. Reconstructed, they follow the standard Traefik v2 catch-all pattern:
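      - "traefik.http.routers.http-catchall.rule=hostregexp(`{host:.+}`)"
      - "traefik.http.routers.http-catchall.entrypoints=web"
      - "traefik.http.routers.http-catchall.middlewares=redirect-to-https"
      - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"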

Now revisit the client service.

Line by Line: How It Works

- "traefik.enable=true"

Tell Traefik to include the service in its routing configuration.

- "traefik.http.routers.client.tls=true"

To update the router configuration automatically attached to the application, we add labels starting with:

traefik.http.routers.{router-name-of-your-choice}

followed by the option you want to change. In this case, we enable tls encryption.

- "traefik.http.routers.client.rule=Host(`client.local`)"

Set a host matching rule so all requests for this host are routed to the container.

- "traefik.http.routers.client.entrypoints=websecure"

Configure Traefik to expose the container on the websecure entrypoint.

- "traefik.http.services.client.loadbalancer.server.port=3000"

Tell Traefik that the container will be exposed on port 3000 internally.

Before Traefik, I configured an nginx reverse proxy from scratch. Each time I added an additional service I had to update my nginx config. Not only did this not scale, it was easy to make a mistake. With Traefik, reverse proxying services is easy.

It can be a hassle to type various docker commands. GNU Make is a build automation tool that automatically builds executable programs from source code by reading files called Makefiles.

Here’s an example makefile:

#!make
hello: hello.c
	gcc hello.c -o hello

The main feature we care about is:

[ The ability ] to build and install your package without knowing the details of how that is done — because these details are recorded in the makefile that you supply.

https://www.gnu.org/software/make/

The syntax is:

target: prerequisite prerequisite prerequisite ...
(TAB) commands

Note: targets and prerequisites don’t have to be files.

In the command line, we would run this example makefile by typing make or make hello. Both work because, when a target is not specified, the first target in the makefile is executed.

Create a makefile in your project root and open it.

touch makefile

Add the following contents:
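The full makefile isn’t reprinted here; below is a condensed sketch of its shape, based on the targets used throughout this post (make, make api, make debug-db, make dump, make test-api, make test-client). The recipes, the dencold/pgcli image, and the POSTGRES_* variables in .env are assumptions; the debug-api target is omitted since its wiring depends on how delve is run (see the Delve section).

#!make
include .env

NETWORKS="$(shell docker network ls)"
VOLUMES="$(shell docker volume ls)"

all: deps
	@echo [ starting all containers... ]
	docker-compose up

api: deps
	@echo [ starting traefik, api and db... ]
	docker-compose up traefik api db

deps:
ifeq (,$(findstring postgres-net,$(NETWORKS)))
	@echo [ creating postgres-net network... ]
	docker network create postgres-net
endif
ifeq (,$(findstring traefik-public,$(NETWORKS)))
	@echo [ creating traefik-public network... ]
	docker network create traefik-public
endif
ifeq (,$(findstring postgres-db,$(VOLUMES)))
	@echo [ creating postgres-db volume... ]
	docker volume create postgres-db
endif

# hypothetical: assumes POSTGRES_* variables in .env and the dencold/pgcli image
debug-db:
	@echo [ connecting to postgres with pgcli... ]
	docker run -it --rm --network postgres-net dencold/pgcli \
	postgres://$(POSTGRES_USER):$(POSTGRES_PASSWORD)@db:5432/$(POSTGRES_DB)

# hypothetical recipe
dump:
	@echo [ backing up database to api/scripts... ]
	docker-compose exec -T db pg_dump -U $(POSTGRES_USER) $(POSTGRES_DB) > ./api/scripts/backup.sql

test-api:
	@echo [ running api unit tests... ]
	docker build --target test --tag demo/api:test ./api

test-client:
	@echo [ running client unit tests... ]
	docker build --target test --tag demo/client:test ./client

down:
	@echo [ stopping and removing containers... ]
	docker-compose down

.PHONY: all api deps debug-db dump test-api test-client down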

Demo

make

When you execute a target, each command in the target’s command body is printed to stdout in a self-documenting way and then executed. If you don’t want a command printed to stdout but still executed, add the “@” symbol before it. Makefile comments are preceded by a “#” symbol. Using “@#” before a command hides it from stdout and prevents it from executing.

I added documentation to every target using echo to describe what each one does.

Our makefile creates an external database volume and two networks when we run make or make api. We don’t want to do this a second time, so we need a way to test whether we’ve already done this step.

This is done with the following code:

ifeq (,$(findstring postgres-net,$(NETWORKS)))
# do something
endif

If we find the postgres-net network in the $(NETWORKS) variable we do nothing; otherwise we create the network. This conditional may seem a bit strange because the first argument in the condition is empty. It can be read as ifeq (null,$(findstring A,$(B))), but the code above is the proper syntax: when findstring finds nothing it returns the empty string, the two sides are equal, and the body runs.

Variables

Variables can be defined at the top of a Makefile and referenced later.

#!make
NETWORKS="$(shell docker network ls)"

Using the syntax $(shell <command>) is one way to execute a command and store its value in a variable.

Environment Variables

Environment variables from a .env file can be referenced as long as you include it at the top of the makefile.

#!make
include .env

target:
	echo ${MY_ENV_VAR}

Phony Targets

On its own, make can’t distinguish between a file target and a phony target.

A phony target is one that is not really the name of a file; rather it is just a name for a recipe to be executed when you make an explicit request. There are two reasons to use a phony target: to avoid a conflict with a file of the same name, and to improve performance.

https://bit.ly/370xohe

Each of our targets is declared .PHONY because it doesn’t represent a file.

Debugging Postgres in the Terminal

We still haven’t discussed how to interact with Postgres. Eventually you’re going to want to enter the running Postgres container to make queries or debug.

Demo

make debug-db

You should be automatically logged in. Run a couple of commands to get a feel for it.

\dt

Then run:

select name, price from products;

The debug-db target uses an advanced command line interface for Postgres called pgcli.

This is great. We now have a user friendly terminal experience with syntax highlighting and auto completion.

PGAdmin4: Debugging Postgres in the Browser

Not everyone likes the terminal experience when working with Postgres. We also have a browser option using pgAdmin4.

To login, the email is test@example.com and the password is SuperSecret. If you want to change these values they are located in the docker-compose.yml file. Change the environment variables PGADMIN_DEFAULT_EMAIL and PGADMIN_DEFAULT_PASSWORD to whatever you want.

Demo

Navigate to https://pgadmin.local in your browser and login.

Click on “Add New Server”.

The two tabs you need to modify are “General” and “Connection”. Add the name of the database under General.

Under the “Connection” tab, fill in the host name which should be db unless you changed it in your docker-compose.yml file. Then add your username, password and check save password. Finally, click “Save”.

To view a database table, first expand the nested tree structure until you find the table you want, then select the table view icon at the top of the page.

Making Postgres Database Backups

Making backups of your Postgres database is straightforward.

Demo

make dump

You’re probably wondering how the Postgres database got seeded with data in the first place.

The official Postgres image states:

If you would like to do additional initialization in an image derived from this one, add one or more *.sql, *.sql.gz, or *.sh scripts under /docker-entrypoint-initdb.d (creating the directory if necessary).

After the entrypoint calls initdb to create the default postgres user and database, it will run any *.sql files, run any executable *.sh scripts, and source any non-executable *.sh scripts found in that directory to do further initialization before starting the service.

https://hub.docker.com/_/postgres

The database creation script located under api/scripts/create-db.sh is used to seed the database.

In our docker-compose.yml file, create-db.sh is bind mounted into the db container’s init directory, roughly like this:
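    volumes:
      - ./api/scripts/create-db.sh:/docker-entrypoint-initdb.d/create-db.sh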

create-db.sh only runs if a backup doesn’t exist. That way, if you make a backup (which is automatically placed in the api/scripts directory), remove the database volume, and restart the database container, create-db.sh will be ignored and only the backup will be used.

Our Go API is already live reloadable thanks to this line in our docker-compose.yml file (reconstructed here; the binary name is an assumption):
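    command: CompileDaemon --build="go build -o main ./cmd/api" --command=./main # binary name assumed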

If you’re familiar with Node, CompileDaemon is like nodemon: it watches your .go files and restarts your server.

--build is used to specify the command we want to run when rebuilding (this flag is required).

--command is used to specify the command to run after a successful build (this flag defaults to nothing).

Demo

To see this in action make sure your api is running. Then make a change to any api file to see it rebuild.

Delve has become the de facto standard debugger for the Go language. To use it with VSCode, we needed to add a launch script so that VSCode could attach to the debugger in the go container. With delve installed, the following line (reconstructed here; the port and API version are assumptions) is how we actually execute it at the container level:
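    command: dlv debug ./cmd/api/ --headless --listen=:2345 --api-version=2 --accept-multiclient --continue --log # port and api version assumed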

We use dlv debug to compile and begin debugging the main package located in the ./cmd/api/ directory.

--accept-multiclient allows a headless server to accept multiple client connections.

--continue continues the debugged process on start, which is not the default.

--headless runs the debug server only, in headless mode.

--listen sets the debugging server listen address.

--api-version specifies the API version to use when headless.

--log enables debugging server logging.

Demo

make debug-api

Go to api/internal/handlers.go and place a breakpoint in one of the handlers. Within VSCode, click the “Launch Remote” button in the debugger tab. Next, navigate to the route that triggers the handler. You should see the editor pause where you placed the breakpoint.

Our development setup wouldn’t be complete without testing. Here we will run two commands to execute unit tests on the client and server side. Since these are unit tests, you are not required to have the applications running to test.

Demo

make test-client
make test-api

In your CI build system you can simply build the test stage of your docker images to run unit tests.

In January I attended GoDays 2020 in Berlin. There was a wonderful presentation showcasing how to run integration and end-to-end tests in containers. If you wish to do this, check out the slides. The libraries being used are testcontainers-go and moby-ryuk.

I hope you’ve learned a bunch about how you can build the ultimate Go and React development setup with Docker. No matter what language you use on the client or server side, the basic principles still apply. I highly recommend Bret Fisher’s Docker Mastery course on Udemy if you want to learn Docker from a real Docker Captain. Happy Coding.

The next post in the series is Transitioning to Go pt.2

Follow me on Twitter for realtime updates.

Originally published at https://blog.ivorscott.com.