Simple API backed by PostgreSQL, Golang and gRPC

Alexandre Beslic is a Software Engineer at vente-privee working on performance critical applications and on the high-availability of some infrastructure components. His work revolves around finding simple solutions to create robust and scalable applications within the group.

Complexity is the bane of every software engineer. We often omit to prune what is unnecessary and let our applications grow out of control. This hinders our capacity to ship new features (velocity) and to provide quick answers to outstanding issues, be they related to design or to common software bugs.

In this new age of cloud-native applications and performance-critical micro-service architectures, we tend to use RPC combined with binary wire formats for service-to-service communication (such as Protocol Buffers, Thrift or Cap'n Proto). A non-negligible number of projects these days are adopting these technologies.

Starting out such a project is a breeze, but then comes the time when we need communication with a backend database, monitoring, tracing and other features that add to the overall complexity. A common reflex is to stack up features and never revisit what could be simplified.

In this post we will describe how to create an API backed by PostgreSQL using Golang and gRPC. Our common goal will be to use the least amount of code possible, reducing the complexity and the use of intermediate data structures to a minimum while maintaining an extended feature set.

I won’t go into detail about each and every single line of code for this example, but I will be exposing some tricks and libraries that are critical for keeping the source code lean and simple.

You will be able to leverage these tools to create an API for simple programs or even bigger, more complex programs.

The tools and libraries presented in this post are useful to keep complexity under control and allow you to iterate fast on any project size.

You can find the full sources at this link:

https://github.com/abronan/todo-grpc

A simple Todo app

Because we need to start somewhere, we will use the example of a Todo application that will allow us to create, delete and update Todo items (common CRUD).

Granted, we are not going to put Soyuz into orbit, but this is a nice little bonsai to start with.

Create the service definition

Let’s start with defining the gRPC service definition and protobuf messages:

syntax = "proto3";

package todo.v1;

option go_package = "todo";

import "google/api/annotations.proto";
import "google/protobuf/timestamp.proto";

service TodoService {
	rpc CreateTodo(CreateTodoRequest) returns (CreateTodoResponse) {
		option (google.api.http) = {
			post: "/v1/todo"
			body: "item"
		};
	}
}

message Todo {
	string id = 1;
	string title = 2;
	string description = 3;
	bool completed = 4;
    
	google.protobuf.Timestamp created_at = 5;
	google.protobuf.Timestamp updated_at = 6;
}

message CreateTodoRequest {
	Todo item = 1;
}

message CreateTodoResponse {
	string id = 1;
}

The Todo protobuf service definition

 

This is an excerpt to give you a brief overview. You can find the full API definition in the todo.proto file.

This protobuf file is of the most common type. We start with our service and its methods, such as CreateTodo. It defines the signature of the RPC method and a route to reach the service through HTTP using the google.api.http extension (with grpc-gateway, which we will introduce later on).

Then we define a Todo message that describes the fields we will exchange with clients over gRPC. It has two timestamp fields to account for the creation and update of todo items. Nothing surprising here; moving on.

Backend with PostgreSQL: ORM or not ORM?

One of the common questions when searching for a proper database access library is whether to use an ORM or not.

It is a tricky one, as ORMs are often associated with bloat, unoptimized queries, and the poor performance that comes with reflection (I invite you to read this post from Martin Fowler on exactly this topic: https://www.martinfowler.com/bliki/OrmHate.html).

On the other hand, using plain lib/pq for Golang can result in confusing and hard-to-maintain code for bigger queries.

So it really depends on the use case and the performance critical aspect of the targeted service.

In this example, we will use go-pg (https://github.com/go-pg/pg), a long-standing, robust and very complete ORM for PostgreSQL offering struct scanning, bulk inserts and updates, and support for hstore and jsonb, among many other features.

go-pg can considerably reduce code bloat while keeping performance on par with other PostgreSQL drivers such as pgx (https://github.com/JackC/pgx).

You can glance over the todo.go file to get an idea of what the source code looks like with go-pg.
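To appreciate what go-pg's struct scanning saves us, here is a toy stdlib sketch of the kind of column-to-field mapping an ORM performs under the hood (this is an illustration of the concept, not go-pg's actual implementation; names are made up for the example):

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// Todo is a simplified version of the generated struct.
type Todo struct {
	ID        string
	Title     string
	Completed bool
}

// scanRow copies row values into dst's fields by matching
// lowercased column names to field names. An ORM automates
// this kind of mapping (plus type conversion and tags) for us.
func scanRow(dst interface{}, row map[string]interface{}) {
	v := reflect.ValueOf(dst).Elem()
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		col := strings.ToLower(t.Field(i).Name) // "ID" -> "id", etc.
		if val, ok := row[col]; ok {
			v.Field(i).Set(reflect.ValueOf(val))
		}
	}
}

func main() {
	row := map[string]interface{}{"id": "42", "title": "write post", "completed": true}
	var todo Todo
	scanRow(&todo, row)
	fmt.Printf("%s %s %v\n", todo.ID, todo.Title, todo.Completed)
}
```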

gRPC useful tricks and libraries

Generating Protobuf files

One of the pains of using gRPC is having to script a set of protoc commands in order to generate source code.

It works for simple examples, but the struggle manifests when mixing complex flags or using custom binary generators, as in the case of gogoproto (a fork of golang/protobuf with extra bells and whistles, as well as faster marshalling/unmarshalling functions).

Fortunately, there is a tool that generates these protoc commands for us: https://github.com/stevvooe/protobuild/

Protobuild walks through your repository and finds the .proto files to generate. Instead of scripting this ourselves, we are going to create a single Protobuild.toml file at the root of the project and call the protobuild command line tool.

├── api
│   ├── api.pb.txt
│   ├── todo
│   │   └── v1
│   │       ├── doc.go
│   │       └── todo.proto
│   └── version
│       └── v1
│           ├── doc.go
│           └── version.proto
[...]

The Protobuild.toml file defines the common imports necessary for protoc as well as the mappings for gogoproto. Whenever something changes in the structure of the protobuf files or dependencies, we can edit this single file rather than countless protoc statements in a Makefile or a script. Very convenient!
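For reference, a Protobuild.toml could look roughly like the following. Note that this schema is an assumption written from memory; check the protobuild README and the example repository for the authoritative field names and values:

```toml
version = "unstable"
generator = "gogo"
plugins = ["grpc"]

[includes]
  # Directories searched for imports such as google/api/annotations.proto.
  after = ["/usr/local/include"]
  packages = ["github.com/gogo/protobuf"]

[packages]
  # Map well-known proto files to their gogoproto Go packages.
  "google/protobuf/timestamp.proto" = "github.com/gogo/protobuf/types"
```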

Injecting tags into protobuf generated structs

go-pg is smart enough to detect the fields of a Golang struct and map them to a database table for very simple structures. However, when dealing with specific properties or table joins, we have to use struct tags to let go-pg know about our specific use case.

The issue here is that gRPC's protoc command generates our data structures in a file that we have no control over. We'd like to avoid maintaining a mapping between a Todo and an InternalTodo just to add these go-pg struct tags.

Fortunately, we have access to a project that achieves this task: https://github.com/favadi/protoc-go-inject-tag

With protoc-go-inject-tag, and as the name suggests, we'll inject the ORM tags onto the fields of the structs generated by protobuf. This is convenient because we can now use the protobuf-generated structs directly in query statements and avoid maintaining a mapping between two structs each time we manipulate a Todo item.

In the todo.proto file we're going to add the following comments on top of the fields we want to tag:

// @inject_tag: sql:",notnull,default:false"
bool completed = 4;

// @inject_tag: sql:"type:timestamptz,default:now()"
google.protobuf.Timestamp created_at = 5;

// @inject_tag: sql:"type:timestamptz"
google.protobuf.Timestamp updated_at = 6;
 

Inject pg ORM tags for protobuf generated structs

 

This tells pg to set the completed field to the default value false.

Additionally, we instruct pg to treat the protobuf.Timestamp fields as a timestamptz in PostgreSQL (rather than the default timestamp type).

For the field created_at, we add default:now() to the tag in order to initialize created_at when a new entry is inserted into the database.

There is a catch though: in order to seamlessly support gRPC timestamps, we have to extend go-pg (see: https://github.com/go-pg/pg/compare/master...abronan:grpc_timestamp_support).

Now, what if we want a field appearing in the protobuf file but ignored by the ORM? Quite simple:

// @inject_tag: sql:"-"
string ignored_field = 7;

Ignore a field with the go-pg ORM

 

In the same vein, what if we want a field to be scanned by go-pg but never returned to the client through gRPC? We can simply embed the main Todo struct and add fields:

type SystemTodo struct {
	// We override/inherit the fields of the main Todo struct.
	todo.Todo `pg:",override"`

	// This field is decoded by go-pg and used in our application,
	// although it is never returned to the client with gRPC.
	AdditionalColumn string
}

Additional column scanned with pg ORM but ignored in protobuf structs

 

We could support joins, tree-like structures and references to parent todos (using recursive queries or the ltree module), PostgreSQL arrays, hstore and JSON types, etc. We can do pretty much anything supported by go-pg.

If your schema is too complex or too performance critical for these tricks to apply, then you can just use plain lib/pq, sqlx or pgx (https://github.com/JackC/pgx). go-pg also allows you to write plain queries if you feel restricted somehow.

Generate HTTP REST Reverse Proxy

Now that we have a service that could serve clients through gRPC, we also want to add RESTful API support with minimal effort.

A common way to do this would be to write the HTTP transport logic ourselves and translate calls to and from the gRPC methods. Again, there is an existing project doing just that:

https://github.com/grpc-ecosystem/grpc-gateway

grpc-gateway creates a reverse proxy translating JSON/REST calls into gRPC with minimal effort, using the google.api.http extension in our protobuf files.

We only need to add the routes to the service methods defined in todo.proto and add the protoc generation command to the Makefile. grpc-gateway will automatically generate all the JSON unmarshalling code and build the expected protobuf message before calling the right service method. Very convenient!
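As a sketch, the gateway entrypoint could look like the following. The generated handler name (RegisterTodoServiceHandlerFromEndpoint), the import path of the generated stubs and the port numbers are assumptions that depend on your project layout and generator version:

```go
package main

import (
	"context"
	"log"
	"net/http"

	"github.com/grpc-ecosystem/grpc-gateway/runtime"
	"google.golang.org/grpc"

	// Assumed import path for our generated gateway stubs.
	gw "github.com/abronan/todo-grpc/api/todo/v1"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// The gateway dials the gRPC server on :9090 and serves
	// the JSON/REST translation on :8080.
	mux := runtime.NewServeMux()
	opts := []grpc.DialOption{grpc.WithInsecure()}
	if err := gw.RegisterTodoServiceHandlerFromEndpoint(ctx, mux, "localhost:9090", opts); err != nil {
		log.Fatal(err)
	}
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```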

Bonus: It also generates swagger files for your API!

Middleware: tracing and monitoring

Monitoring and tracing are critical aspects of modern software development. They can help you gather insights about why a given method performs poorly in terms of response time, or about the current status of your components in a micro-service architecture.

Again, instead of creating the middleware part ourselves for each and every method of our service, we are going to use an existing library which does exactly that:

https://github.com/grpc-ecosystem/go-grpc-middleware

go-grpc-middleware uses gRPC interceptors to achieve this task and eases the integration with tools such as Jaeger or Prometheus. Setting up the interceptors takes only a few lines of code:

package main

func main() {
	[...]

	// Set gRPC Interceptors
	grpcServer := grpc.NewServer(
		grpc.StreamInterceptor(grpc_middleware.ChainStreamServer(
			grpc_ctxtags.StreamServerInterceptor(grpc_ctxtags.WithFieldExtractor(grpc_ctxtags.CodeGenRequestFieldExtractor)),
			grpc_opentracing.StreamServerInterceptor(grpc_opentracing.WithTracer(tracer)),
			grpcMetrics.StreamServerInterceptor(),
			StreamServerInterceptor(),
			grpc_logrus.StreamServerInterceptor(logger),
			grpc_logrus.PayloadStreamServerInterceptor(logger, alwaysLoggingDeciderServer),
			grpc_recovery.StreamServerInterceptor(grpc_recovery.WithRecoveryHandler(panicHandler)),
		)),
		grpc.UnaryInterceptor(grpc_middleware.ChainUnaryServer(
			grpc_ctxtags.UnaryServerInterceptor(grpc_ctxtags.WithFieldExtractor(grpc_ctxtags.CodeGenRequestFieldExtractor)),
			grpc_opentracing.UnaryServerInterceptor(grpc_opentracing.WithTracer(tracer)),
			grpcMetrics.UnaryServerInterceptor(),
			UnaryServerInterceptor(),
			grpc_logrus.UnaryServerInterceptor(logger),
			grpc_logrus.PayloadUnaryServerInterceptor(logger, alwaysLoggingDeciderServer),
			grpc_recovery.UnaryServerInterceptor(grpc_recovery.WithRecoveryHandler(panicHandler)),
		)),
	)

	[...]

	api.RegisterTodoServiceServer(grpcServer, &todo.Service{DB: db})
	grpc_prometheus.Register(grpcServer)

	[...]
}

Setting up interceptors for tracing, monitoring, logging, etc.

 

The entrypoint for our program is as simple as it could get: we set up our gRPC server with the appropriate interceptors and extensions and simply start it.

To get an overview of the entrypoint for the program, look at the main.go file.

Generating a client/SDK

Additionally, and as a complement to the RESTful API, we would like our application to interact with clients developed in various languages. Thus we need an SDK available outside of the context of our backend application.

The code of such an SDK is almost always trivial to implement: transport and connection logic, error management, and finally the RPC calls and the handling of responses.

We might as well generate this part and avoid error-prone operations when adding a new service method.

https://github.com/moul/protoc-gen-gotemplate allows us to walk through the AST of the gRPC services, methods and fields in order to generate the SDK accordingly. Thus, every time we add a new call to our service, the template generation pipeline can be run to automatically regenerate the SDK to match the new changes.

This is a very nice and advanced trick (one that I'm not showing in the example repository, but you can look at the examples in the protoc-gen-gotemplate repository), especially if your services contain a non-negligible number of calls, all requiring updates to the SDK or other consumer libraries. The SDK can then be used to conveniently implement integration tests as a complement to local unit tests.
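To give an idea of the target, here is a hand-written equivalent of what such a generated SDK could look like. Every name and import path below is illustrative (not taken from the repository), and the error handling is kept minimal:

```go
package sdk

import (
	"context"

	"google.golang.org/grpc"

	// Assumed import path for the generated gRPC stubs.
	api "github.com/abronan/todo-grpc/api/todo/v1"
)

// Client wraps the connection logic and the generated stubs.
type Client struct {
	conn *grpc.ClientConn
	todo api.TodoServiceClient
}

// New dials the backend and returns a ready-to-use client.
func New(addr string) (*Client, error) {
	conn, err := grpc.Dial(addr, grpc.WithInsecure())
	if err != nil {
		return nil, err
	}
	return &Client{conn: conn, todo: api.NewTodoServiceClient(conn)}, nil
}

// CreateTodo creates a todo item and returns its id.
func (c *Client) CreateTodo(ctx context.Context, item *api.Todo) (string, error) {
	resp, err := c.todo.CreateTodo(ctx, &api.CreateTodoRequest{Item: item})
	if err != nil {
		return "", err
	}
	return resp.Id, nil
}

// Close tears down the underlying connection.
func (c *Client) Close() error { return c.conn.Close() }
```

protoc-gen-gotemplate would produce one such method per RPC defined in the service, so the wrapper never drifts out of sync with todo.proto.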

Conclusion

In this post, we walked through quite a few tricks for dealing with projects using Golang and gRPC. We demonstrated that we could deliver a complete and useful API in a few lines of code while still maintaining an extended feature set with the support for monitoring, tracing, database backend, RESTful API, etc.

gRPC has a wealth of libraries and utilities that now make it a breeze to create and manage an application at scale with Go, such as protobuild, grpc-gateway, go-grpc-middleware or protoc-gen-gotemplate.

I invite you to read the source code carefully and apply some of these principles and tools to your application to keep complexity under control.

Obviously, this is one example amongst many. Pick up the right tool for the task but always reconsider and question the current state of your application in order to simplify and remove what is unnecessary.

Follow him on Twitter: @abronan
