31 May 2017, 06:23

Notetaking

(Note images: page 1 and page 2.)

28 May 2017, 19:23

Replacing Eval with Object.send and a self-written Parser

Intro

A while ago, I was added as a curator of a gem called JsonPath. It’s a small but very useful and brilliant gem. It had a couple of problems which I fixed, but the hardest one to eliminate proved to be a series of evals throughout the code.

You could opt in to using eval with a constructor parameter, but generally it was considered unsafe. So projects that used the gem, like Huginn, had to keep it switched off by default, thus missing out on sweet parsing like this: $..book[?(@['price'] > 20)].

Eval

In order to remove eval, first I had to understand what it is actually doing. I had to take it apart.

After much digging and understanding the code, I found that all it does is perform the given operations on the current node: if the operation evaluates to true, it selects that node, otherwise it returns false and the node is ignored.

For example $..book[?(@['price'] > 20)] could be translated to:

return @_current_node['price'] > 20

First checking whether 'price' is even a key in @_current_node. Once I understood this part, I set out to fix eval.

SAFE = 4

In Ruby, you could extract the part that does the eval into its own proc and set $SAFE = 4, which disables some things like system calls.

proc do
  $SAFE = 4
  eval(some_expression)
end.call

SAFE levels:

$SAFE  Description
0      No checking of the use of externally supplied (tainted) data is performed. This is Ruby’s default mode.
>= 1   Ruby disallows the use of tainted data by potentially dangerous operations.
>= 2   Ruby prohibits the loading of program files from globally writable locations.
>= 3   All newly created objects are considered tainted.
>= 4   Ruby effectively partitions the running program in two. Non-tainted objects may not be modified. Typically, this will be used to create a sandbox: the program sets up an environment using a lower $SAFE level, then resets $SAFE to 4 to prevent subsequent changes to that environment.

This has the disadvantage that anything below 4 is just meh, and nothing above 1 will actually work with JsonPath, so… scratch that.

Sandboxing

We could technically try to sandbox eval into its own process with a PID and whitelist the methods which are allowed to be called.

Not bad, and there are a few gems out there which try to do that, like SafeRuby. But all of these projects were abandoned years ago, for good reason.

Object.send

Object.send is the best way to get some flexibility while still staying safe. You simply call a method on an object by naming the method and passing parameters to it, like:

1.send(:+, 2) => 3

This is a very powerful tool in our toolbox which we will exploit immensely.

So let’s get to it.

Writing a parser

Writing a parser in Ruby is a very fluid experience. It has nice tools to support it, and the one I used is StringScanner. It can track where you currently are in a string and move a pointer along with regex matches. In fact, JsonPath already employs this method when parsing a JSON expression, so reusing that logic was in fact… elementary.

The expression

How do we get from this:

$..book[?(@['price'] < 20)]

To this:

@_current_node['price'] < 20

Well, by simple elimination. There are a couple of problems along the way, of course, because this wouldn’t be a parser if it couldn’t handle ALL the other cases…

Removing Clutter

Some of this we don’t need, like the $..book part.

The other things we don’t need are all the '[]?() characters.

Once this is done, we can move to isolating the important bits.

Breakdown

Elements

What does an expression actually look like?

Let’s break it down.

So, this is a handful. Operations can be <=, >=, <, >, ==, !=, operands can be either numbers or words, and element accessors can be nested, since something like this is perfectly valid: $..book[?(@.written.year == 1997)].

To avoid being overwhelmed, ruby has our back with a method called dig.

Dig lets us pass a list of keys to a hash or an array as variadic parameters; it then accesses those elements in the order they were supplied, until it either hits nil or reaches the end result.

For example:

2.3.1 :001 > a = {a: {b: 'c'}}
 => {:a=>{:b=>"c"}}
2.3.1 :002 > a.dig(:a, :b)
 => "c"

Easy. However… dig was only added in Ruby 2.3, so I had to write my own version for now, until I stop supporting anything below 2.3.

At first I wanted to add it to the Hash class, but that proved to be a futile attempt if I wanted to do it nicely, thus the parser got it as a private method.

    def dig(keys, hash)
      return hash unless hash.is_a? Hash
      return nil unless hash.key?(keys.first)
      return hash.fetch(keys.first) if keys.size == 1
      prev = keys.shift
      dig(keys, hash.fetch(prev))
    end

And the corresponding regex behind getting a multitude of elements is as follows:

...
if t = scanner.scan(/\['\w+'\]+/)
...

Operator

Selecting the operator is another interesting part. I thought it could be a single character or multiple, and all sorts of combinations… until I realized that no, it can actually only be a couple of things.

Also, after a bit of fiddling, and after doing a silly case statement first:

case op
when '>'
  dig(@_current_node, *elements) > operand
when '<'
  dig(@_current_node, *elements) < operand
...
end

…I promptly saw that this is not how it should be done.

And here comes Object.send.

This gave me the opportunity to write this:

dig(elements, @_current_node).send(operator, operand)

Much better. Now I could send all the things in the way of a node.

Parsing an op be like:

elsif t = scanner.scan(/\s+[<>=][<>=]?\s+?/)

Operand

Now comes the final piece: the value we are comparing against. This could either be a simple integer, a floating point number, or a word. Hah. Coming up with a regex which fits this tightly took a little fiddling, but eventually I ended up with this:

elsif t = scanner.scan(/(\s+)?'?(\w+)?[.,]?(\w+)?'?(\s+)?/)

Without StackOverflow I would say this is fine ((although I need to remove all those space checks, sheesh)). What are all the question marks? Basically, everything is optional, because an expression like $..book[?(@.price)] is also valid. That one is basically just asserting whether a given node has a price element.

Logical Operators

The last thing that remains is logical operators, which, if you are using eval, is pretty straightforward. It takes care of anything that you might add in, like &&, ||, |, &, ^, etc.

Now, that’s something I did with a case statement, though, until I find a nicer solution. Since we can already parse a single expression, it’s just a question of breaking down a multi-expression structure like the following one: $..book[?(@['price'] > 20 && @.written.year == 1998)].

exps = exp.split(/(&&)|(\|\|)/)

This splits up the string by either && or ||, and the usage of groups () also keeps the operators in the result. Then I evaluate the expressions and save the whole thing in an array like [true, '&&', false]. You know what could immediately resolve this? Yep…

I’d rather just parse it although technically an eval at this stage wouldn’t be that big of a problem…

def parse(exp)
  exps = exp.split(/(&&)|(\|\|)/)
  ret = parse_exp(exps.shift)
  exps.each_with_index do |item, index|
    case item
    when '&&'
      ret &&= parse_exp(exps[index + 1])
    when '||'
      ret ||= parse_exp(exps[index + 1])
    end
  end
  ret
end

Closing words

That’s it folks. The parser is done, and there is no eval being used. There are some more interesting things here, like array indexing, which is allowed in JSONPath and which is solved by sending .length to the current node. For example:

if scanner.scan(/\./)
  sym = scanner.scan(/\w+/)
  op = scanner.scan(/./)
  num = scanner.scan(/\d+/)
  return @_current_node.send(sym.to_sym).send(op.to_sym, num.to_i)
end

This handles the case where an expression begins with a '.'. So you see that using send helps a lot, and that understanding what eval was trying to evaluate and writing your own parser instead isn’t that hard at all in Ruby.

I hope you enjoyed reading this little tidbit as much as I enjoyed writing and drawing it. Leave a comment if you liked the drawings, or if you did not and I should never do them again ((I don’t really care, this is my blog haha.)). Note to self: I shouldn’t draw on the other side of the page because of bleed-through.

Thank you! Gergely.

16 Apr 2017, 09:23

Furnace - The building of an AWS CLI Tool for CloudFormation and CodeDeploy - Part 4

Intro

Hi folks.

Previously on this blog: Part 1. Part 2. Part 3.

In this part we are going to talk about Unit Testing Furnace and how to work some magic with AWS and Go.

Mock Stub Fake Dummy Canned

Unit testing in Go usually follows the Dependency Injection model of dealing with Mocks and Stubs.

DI

Dependency Injection, in short, is one object supplying the dependencies of another object. In a longer description, it’s ideal for removing the lock-in on a third party library, like the AWS client. Imagine having code which solely depends on the AWS client. How would you unit test that code without ACTUALLY connecting to AWS? You couldn’t. Every time you tried to test the code, it would run the live code, which would try to connect to AWS and perform the operations it’s designed to do. The Ruby library, with its metaprogramming, allows you to set the client globally to stub responses, but, alas, this is not the world of Ruby.

Here is where DI comes to the rescue. If you have control over the AWS client at a very high level, and pass it around as a function parameter, or create that client in an init() function and have it globally defined, you can implement your own client and have your code use that one, with the stubbed responses your tests need. For example, you might want a CreateApplication call to fail, or a DescribeStack call that returns an aws.Error(“StackAlreadyExists”).

For this, however, you need the API of the AWS client. Which is provided by AWS.

AWS Client API

In order for DI to work, the injected object needs to be of a certain type for us to inject our own. Luckily, AWS provides an interface for all of its clients, meaning we can implement our own version of all of the clients, like S3, CloudFormation, CodeDeploy, etc.

For each client you want to mock out, an *iface package should be present like this:

  "github.com/aws/aws-sdk-go/service/cloudformation/cloudformationiface"

In this package you find and use the interface like this:

type fakeCloudFormationClient struct {
	cloudformationiface.CloudFormationAPI
	err error
}

And with this, we have our own CloudFormation client. The real code uses the real clients as function parameters, like this:

// Execute defines what this command does.
func (c *Create) Execute(opts *commander.CommandHelper) {
	log.Println("Creating cloud formation session.")
	sess := session.New(&aws.Config{Region: aws.String(config.REGION)})
	cfClient := cloudformation.New(sess, nil)
	client := CFClient{cfClient}
	createExecute(opts, &client)
}

We can’t test Execute itself, as it’s using the real client here (or you could have a global from some library, thus allowing you to test even Execute), but there is very little logic in this function for this very reason. All the logic is in small functions, for which the main starting point, and our testing opportunity, is createExecute.

Stubbing Calls

Now, that we have our own client, and with the power of Go’s interface embedding as seen above with CloudFormationAPI, we have to only stub the functions which we are actually using, instead of every function of the given interface. This looks like this:

	cfClient := new(CFClient)
	cfClient.Client = &fakeCloudFormationClient{err: nil}

Where cfClient is a struct like this:

// CFClient abstraction for cloudFormation client.
type CFClient struct {
	Client cloudformationiface.CloudFormationAPI
}

And a stubbed call can then be written as follows:

func (fc *fakeCreateCFClient) WaitUntilStackCreateComplete(input *cloudformation.DescribeStacksInput) error {
	return nil
}

This can range from a very trivial example, like the one above, to intricate ones as well, like this gem:

func (fc *fakePushCFClient) ListStackResources(input *cloudformation.ListStackResourcesInput) (*cloudformation.ListStackResourcesOutput, error) {
	if "NoASG" == *input.StackName {
		return &cloudformation.ListStackResourcesOutput{
			StackResourceSummaries: []*cloudformation.StackResourceSummary{
				{
					ResourceType:       aws.String("NoASG"),
					PhysicalResourceId: aws.String("arn::whatever"),
				},
			},
		}, fc.err
	}
	return &cloudformation.ListStackResourcesOutput{
		StackResourceSummaries: []*cloudformation.StackResourceSummary{
			{
				ResourceType:       aws.String("AWS::AutoScaling::AutoScalingGroup"),
				PhysicalResourceId: aws.String("arn::whatever"),
			},
		},
	}, fc.err
}

This ListStackResources stub lets us test two scenarios based on the stack name. If the test stack name is ‘NoASG’, it returns a result containing no AutoScaling Group. Otherwise, it returns the correct ResourceType for an ASG.

It is a common practice to line up several scenario based stubbed responses in order to test the robustness of your code.

Unfortunately, this also means that your tests will be a bit cluttered with stubs, mock structs and whatnot. For that, I’m partially using a package-wide struct file in which I define most of the mock structs. From there on, the tests only contain the specific stubs for that particular file. This can be made even more fine-grained by having defaults and then only overriding them when you need something else.
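A sketch of that idea (not Furnace’s actual test helpers; the fakeCFClient and validateFn names are made up) is to give the shared fake a function field per stubbed call, so one default struct lives in a shared file and an individual test overrides only the behaviour it cares about:

// fakeCFClient embeds the AWS interface, so only the calls a test actually
// uses need an implementation. The function field lets a single test swap in
// its own behaviour while everything else keeps the canned default.
type fakeCFClient struct {
	cloudformationiface.CloudFormationAPI
	validateFn func(*cloudformation.ValidateTemplateInput) (*cloudformation.ValidateTemplateOutput, error)
}

func (f *fakeCFClient) ValidateTemplate(input *cloudformation.ValidateTemplateInput) (*cloudformation.ValidateTemplateOutput, error) {
	if f.validateFn != nil {
		// test-specific override
		return f.validateFn(input)
	}
	// shared default used by most tests
	return &cloudformation.ValidateTemplateOutput{}, nil
}

A test that needs a failure then only sets validateFn to return an error and leaves every other default alone.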

Testing fatals

Now, the other point which is not really AWS related, but still comes to mind when dealing with Furnace, is testing error scenarios.

Because Furnace is a CLI application, it uses Fatals to signal if something is wrong and it doesn’t want to continue or recover, because, frankly, it can’t. If AWS throws an error, that’s it. You can retry, but in 90% of the cases it’s usually something that you messed up.

So, how do we test for a fatal or an os.Exit? There are a number of takes on that if you do a quick search. You may end up on this talk: GoTalk 2014 Testing Slide #23, which does an interesting thing: it calls the test binary in a separate process and asserts on the exit code.
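For completeness, a rough sketch of that technique (assuming the usual testing, os, os/exec and log imports; the test name and the BE_CRASHER variable are just for illustration):

func TestCrasher(t *testing.T) {
	if os.Getenv("BE_CRASHER") == "1" {
		// only the child process runs the fatal path; in a real test this
		// would be the Furnace code under test
		log.Fatal("boom")
		return
	}
	// re-run this test binary, but only this test, with the crasher flag set
	cmd := exec.Command(os.Args[0], "-test.run=TestCrasher")
	cmd.Env = append(os.Environ(), "BE_CRASHER=1")
	err := cmd.Run()
	if e, ok := err.(*exec.ExitError); ok && !e.Success() {
		return // the child exited with a non-zero status, as expected
	}
	t.Fatalf("process ran with err %v, want a non-zero exit status", err)
}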

Others, myself included, will say that you should have your own logger implemented and use a different logger / os.Exit in your test environment.

Still others will tell you not to have tests around os.Exit and fatal things at all; rather, return an error and let only main pop a world-ending event. I leave it up to you which one you want to use. Either is fine.

In Furnace, I’m using a global logger in my error handling util like this:

// HandleFatal handler fatal errors in Furnace.
func HandleFatal(s string, err error) {
	LogFatalf(s, err)
}

And LogFatalf is an exported variable: var LogFatalf = log.Fatalf. Then, in a test, I just override this variable with a local anonymous function:

func TestCreateExecuteEmptyStack(t *testing.T) {
	failed := false
	utils.LogFatalf = func(s string, a ...interface{}) {
		failed = true
	}
	config.WAITFREQUENCY = 0
	client := new(CFClient)
	stackname := "EmptyStack"
	client.Client = &fakeCreateCFClient{err: nil, stackname: stackname}
	opts := &commander.CommandHelper{}
	createExecute(opts, client)
	if !failed {
		t.Error("expected outcome to fail during create")
	}
}

It can get even more granular by testing for the error message to make sure that it actually fails at the point we think we are testing:

func TestCreateStackReturnsWithError(t *testing.T) {
	failed := false
	expectedMessage := "failed to create stack"
	var message string
	utils.LogFatalf = func(s string, a ...interface{}) {
		failed = true
		if err, ok := a[0].(error); ok {
			message = err.Error()
		}
	}
	config.WAITFREQUENCY = 0
	client := new(CFClient)
	stackname := "NotEmptyStack"
	client.Client = &fakeCreateCFClient{err: errors.New(expectedMessage), stackname: stackname}
	config := []byte("{}")
	create(stackname, config, client)
	if !failed {
		t.Error("expected outcome to fail")
	}
	if message != expectedMessage {
		t.Errorf("message did not equal expected message of '%s', was:%s", expectedMessage, message)
	}
}

Conclusion

This is it. That’s all it took to write Furnace. I hope you enjoyed reading it as much as I enjoyed writing all these thoughts down.

I hope somebody might learn from my journey and also improve upon it.

Any comments are much appreciated and welcomed. Also, PRs and Issues can be submitted on the GitHub page of Furnace.

Thank you for reading! Gergely.

22 Mar 2017, 12:03

Furnace - The building of an AWS CLI Tool for CloudFormation and CodeDeploy - Part 3

Intro

Hi folks.

Previously on this blog: Part 1. Part 2. Part 4.

In this part, I’m going to talk about the experimental plugin system of Furnace.

Go Experimental Plugins

Since Go 1.8 was released, an exciting new feature has been available, called the plugin system. This system works with dynamic libraries built with a special switch to go build. These libraries, .so or .dylib (later), are then loaded, and once that succeeds, specific functions can be called from them (symbol resolution).

We will see how this works. For package information, visit the plugin package’s Go doc page here.

Furnace Plugins

So, what does Furnace use plugins for? Furnace uses plugins to execute arbitrary code at, currently, four given locations / events.

These are: pre_create, post_create, pre_delete, post_delete. These events are called, as their name suggests, before and after the creation and deletion of the CloudFormation stack. It allows the user to execute some code without having to rebuild the whole project. It does that by defining a single entry point for the custom code called RunPlugin. Any number of functions can be implemented, but the plugin MUST provide this single, exported function. Otherwise it will fail and ignore that plugin.
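The loading side is handled by Furnace itself, but roughly speaking it boils down to the plugin package’s Open and Lookup calls. A minimal sketch (not Furnace’s actual code; the path below is made up):

package main

import (
	"log"
	"plugin"
)

func runPlugin(path string) {
	// Open the compiled plugin library.
	p, err := plugin.Open(path)
	if err != nil {
		log.Printf("ignoring plugin %s: %v", path, err)
		return
	}
	// Resolve the single, mandatory entry point.
	sym, err := p.Lookup("RunPlugin")
	if err != nil {
		log.Printf("ignoring plugin %s: no RunPlugin symbol: %v", path, err)
		return
	}
	// Assert it to the expected signature and call it.
	if run, ok := sym.(func()); ok {
		run()
	}
}

func main() {
	runPlugin("/home/user/.config/go-furnace/plugins/0001_mailer.pre_create")
}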

Using Plugins

It’s really easy to implement, and use these plugins. I’m not going into the detail of how to load them, because that is done by Furnace, but only how to write and use them.

To use a plugin, create a Go file called 0001_mailer.go. The 0001 prefix defines WHEN it’s executed. Having multiple plugins is completely okay; the execution order, however, depends on the names of the files.

Now, in 0001_mailer.pre_create we would have something like this:

package main

import "log"

// RunPlugin runs the plugin.
func RunPlugin() {
	log.Println("My Awesome Pre Create Plugin.")
}

The next step is to build this file as a plugin library. Note: right now, this only works on Linux!

To build this file run the following:

go build -buildmode=plugin -o 0001_mailer.pre_create 0001_mailer.go

The important part here is the extension of the file specified with -o. It’s important because that’s how Furnace identifies what plugins it has to run.

Finally, copy this file to ~/.config/go-furnace/plugins and you are all set.

Slack notification Plugin

To demonstrate how a plugin could be used: say you need some kind of notification once a stack is complete. For example, you might want to send a message to a Slack room. To do this, your plugin would look something like this:

package main

import (
	"fmt"
	"os"

	"github.com/nlopes/slack"
)

func RunPlugin() {
	stackname := os.Getenv("FURNACE_STACKNAME")
	api := slack.New("YOUR_TOKEN_HERE")
	params := slack.PostMessageParameters{}
	channelID, timestamp, err := api.PostMessage("#general", fmt.Sprintf("Stack with name '%s' is Done.", stackname), params)
	if err != nil {
		fmt.Printf("%s\n", err)
		return
	}
	fmt.Printf("Message successfully sent to channel %s at %s", channelID, timestamp)
}

Currently, Furnace has no ability to share information about the stack with an outside plugin, thus ‘Done’ could be anything from Rollback to Failed to CreateComplete.

Closing Words

That’s it for plugins. Thanks very much for reading! Gergely.

19 Mar 2017, 12:03

Furnace - The building of an AWS CLI Tool for CloudFormation and CodeDeploy - Part 2

Intro

Hi folks.

Previously on this blog: Part 1, Part 3, Part 4

In this part, I’m going to talk about the AWS Go SDK and begin to dissect the intricacies of Furnace.

AWS SDK

Fortunately, the Go SDK for AWS is quite verbose and littered with examples of all sorts. But that doesn’t make it less complex or less cryptic at times. I’m here to lift some of the early confusion, in the hope that I can help someone avoid wasting time.

Getting Started and Developers Guide

As always, and as is common from AWS, the documentation is top notch. There is a 141-page developer’s guide on the SDK containing a getting started section and an API reference. Go check it out. I’ll wait. AWS Go SDK DG PDF. I will only talk about some gotchas and things I encountered, not the basics of the SDK.

aws.String and other types

Something which is immediately visible once we take a look at the API is that everything is a pointer. Now, there are a tremendous amount of discussions about this, but I’m with Amazon. There are various reasons for it; to list the most prominent ones:

  • Type completion and compile time type safety.
  • Values for AWS API calls have valid zero values, in addition to being optional, i.e. not being provided at all.
  • Other options, like empty interfaces with maps, zero values, or struct wrappers around every type, made life much harder rather than easier, or were not possible at all.
  • The AWS API is volatile. You never know when something gets to be optional, or required. Pointers made that decision easy.

There are a good number of other discussions around this topic, for example: AWS Go GitHub #363.

In order to use primitives, AWS has helper functions like aws.String. Because &“asdf” is not allowed, you would otherwise have to create a variable and use its address in situations where a string pointer is needed, for example the name of the stack. These primitive helpers make in-lining possible. We’ll see later that they are used to a great extent. Pointers, however, make life a bit difficult when constructing Input structs, and make for poor aesthetics.
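As a tiny illustration of why the helpers exist (the stack name here is made up):

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	// Without the helper you need a throwaway variable just to take its address.
	name := "FurnaceStack"
	input := &cloudformation.DescribeStacksInput{StackName: &name}

	// With the helper the same thing can be in-lined.
	input = &cloudformation.DescribeStacksInput{StackName: aws.String("FurnaceStack")}

	// aws.StringValue dereferences safely, returning "" for a nil pointer.
	fmt.Println(aws.StringValue(input.StackName))
}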

This is something I’m returning in a test for stubbing a client call:

		return &cloudformation.ListStackResourcesOutput{
			StackResourceSummaries: []*cloudformation.StackResourceSummary{
				{
					ResourceType:       aws.String("NoASG"),
					PhysicalResourceId: aws.String("arn::whatever"),
				},
			},
		}

This doesn’t look so appealing, but one gets used to it quickly.

Error handling

Errors also have their own types. An AWS error looks like this:

if err != nil {
    if awsErr, ok := err.(awserr.Error); ok {
    }
}

First, we check whether the error is nil, then we type-check whether the error is an AWS error or something different. In the wild, this looks something like this:

	if err != nil {
		if awsErr, ok := err.(awserr.Error); ok {
			if awsErr.Code() != codedeploy.ErrCodeDeploymentGroupAlreadyExistsException {
				log.Println(awsErr.Code())
				return err
			}
			log.Println("DeploymentGroup already exists. Nothing to do.")
			return nil
		}
		return err
	}

If it’s an AWS error, we can further check the error code it returns, in order to decide what to handle and what to throw on to the caller for a potential fatal. Here, I’m ignoring the AlreadyExistsException, because if the deployment group already exists, we just go on to the next action.

Examples

Luckily the API doc is very mature. In most cases, they provide an example for an API call. These examples, however, from time to time provide more confusion than clarity. Take CloudFormation. When I first glanced at the description of the API, it wasn’t immediately clear to me that the TemplateBody was supposed to be the whole template, and that the rest of the fields were almost all optional settings, or overrides for special cases.

And since the template is not an ordinary YAML or JSON file, I was looking for something that parses it into the struct I was going to use. After some time and digging, I realized that I didn’t need that; I just needed to read in the template, define some extra parameters, and give TemplateBody the whole of the template. The parameters defined by the CloudFormation template were extracted for me by the ValidateTemplate API call, which returned all of them in a convenient parameter slice. These things are not described in the documentation or visible from the examples. I mainly found them through playing with the API and focused experimentation.
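To make that flow concrete, here is a minimal sketch of the idea (not Furnace’s exact code; the file path, region and stack name are made up, and instead of prompting for parameter values it simply falls back to each template default):

package main

import (
	"io/ioutil"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	// Read the raw template; it is passed on as-is in TemplateBody.
	body, err := ioutil.ReadFile("cloud_formation.template")
	if err != nil {
		log.Fatal(err)
	}
	sess := session.New(&aws.Config{Region: aws.String("eu-central-1")})
	cfClient := cloudformation.New(sess)

	// ValidateTemplate extracts the parameters declared in the template.
	validated, err := cfClient.ValidateTemplate(&cloudformation.ValidateTemplateInput{
		TemplateBody: aws.String(string(body)),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Turn them into the Parameters CreateStack expects; a real tool would
	// prompt for values, here we just use each parameter's default.
	stackParams := make([]*cloudformation.Parameter, 0, len(validated.Parameters))
	for _, param := range validated.Parameters {
		stackParams = append(stackParams, &cloudformation.Parameter{
			ParameterKey:   param.ParameterKey,
			ParameterValue: param.DefaultValue,
		})
	}

	_, err = cfClient.CreateStack(&cloudformation.CreateStackInput{
		StackName:    aws.String("FurnaceStack"),
		TemplateBody: aws.String(string(body)),
		Parameters:   stackParams,
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("Stack creation initiated.")
}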

Waiters

From other SDK implementations, we got used to Waiters. These handy methods wait for a service to become available or for a certain state to take effect, like a stack reaching CREATE_COMPLETE. The Go waiters, however, don’t allow a callback to be fired, or a block to be run, like the Ruby SDK does. So I wrote a handy little waiter for myself, which outputs a spinner so you can see that we are currently waiting for something and not frozen in time. The waiter looks like this:

// WaitForFunctionWithStatusOutput waits for a function to complete its action.
func WaitForFunctionWithStatusOutput(state string, freq int, f func()) {
	var wg sync.WaitGroup
	wg.Add(1)
	done := make(chan bool)
	go func() {
		defer wg.Done()
		f()
		done <- true
	}()
	go func() {
		counter := 0
		for {
			counter = (counter + 1) % len(Spinners[config.SPINNER])
			fmt.Printf("\r[%s] Waiting for state: %s", yellow(string(Spinners[config.SPINNER][counter])), red(state))
			time.Sleep(time.Duration(freq) * time.Second)
			select {
			case <-done:
				fmt.Println()
				// a bare break would only exit the select, so return instead
				// to stop this goroutine once the waited function is done
				return
			default:
			}
		}
	}()

	wg.Wait()
}

And I’m calling it with the following method:

	utils.WaitForFunctionWithStatusOutput("DELETE_COMPLETE", config.WAITFREQUENCY, func() {
		cfClient.Client.WaitUntilStackDeleteComplete(describeStackInput)
	})

This would output these lines to the console:

[\] Waiting for state: DELETE_COMPLETE

The spinner can be configured to be one of the following types:

var Spinners = []string{`←↖↑↗→↘↓↙`,
	`▁▃▄▅▆▇█▇▆▅▄▃`,
	`┤┘┴└├┌┬┐`,
	`◰◳◲◱`,
	`◴◷◶◵`,
	`◐◓◑◒`,
	`⣾⣽⣻⢿⡿⣟⣯⣷`,
	`|/-\`}

Handy.

And with that, let’s dive into the basics of Furnace.

Furnace

Directory Structure and Packages

Furnace is divided into three main packages.

commands

Commands package is where the gist of Furnace lies. These commands represent the commands which are used through the CLI. Each file has the implementation for one command. The structure is devised by this library: Yitsushi’s Command Library. As of the writing of this post, the following commands are available:

  • create - Creates a stack using the CloudFormation template file under ~/.config/go-furnace
  • delete - Deletes the created Stack. Doesn’t do anything if the stack doesn’t exist
  • push - Pushes an application to a stack
  • status - Displays information about the stack
  • delete-application - Deletes the CodeDeploy application and deployment group created by push

These commands represent the heart of furnace. I would like to keep these to a minimum, but I do plan on adding more, like update and rollout. Further details and help messages on these commands can be obtained by running: ./furnace help or ./furnace help create.

❯ ./furnace help push
Usage: furnace push appName [-s3]

Push a version of the application to a stack

Examples:
  furnace push
  furnace push appName
  furnace push appName -s3
  furnace push -s3

config

Contains the configuration loader and some project wide defaults, which are as follows:

  • Events for the plugin system - pre-create, post-create, pre-delete, post-delete.
  • CodeDeploy role name - CodeDeployServiceRole. This is used if none is provided to locate the CodeDeploy IAM role.
  • Wait frequency - The setting which controls how long the waiter should sleep in between status updates. Default is 1s.
  • Spinner - Just the number of the spinner to use.
  • Plugin registry - A map of functions to run for the above events.

Furthermore, config loads the CloudFormation template and checks whether some necessary settings are present in the environment, e.g. the configuration folder under ~/.config/go-furnace.

utils

These are some helper functions which are used throughout the project. To list them:

  • error_handler - A simple error handler. I’m thinking of refactoring this one to some saner version.
  • spinner - Sets up which spinner to use in the waiter function.
  • waiter - Contains the verbose waiter introduced above under Waiters.

Configuration and Environment variables

Furnace is a Go application, thus it doesn’t have the luxury of Ruby or Python where the configuration files are usually bundled with the app. But it does have a standard for it. Configurations usually reside in either of two locations: environment properties and/or configuration files under a fixed location (i.e. HOME/.config/app-name). Furnace employs both.

Settings like region, stack name, and enabling the plugin system live in environment properties (though this can change), while the CloudFormation template lives under ~/.config/go-furnace/. Lastly, it assumes some things, like that the deployment IAM role already exists under the used AWS account. All these are loaded and handled by the config package described above.
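A stripped-down sketch of what such a config package can look like (the environment variable names and file layout here are illustrative assumptions, not necessarily exactly what Furnace uses):

package config

import (
	"io/ioutil"
	"log"
	"os"
	"path/filepath"
)

// Settings read from environment properties.
var (
	REGION    = os.Getenv("FURNACE_REGION")
	STACKNAME = os.Getenv("FURNACE_STACKNAME")
)

// LoadCFTemplate reads the CloudFormation template from the fixed config location.
func LoadCFTemplate() []byte {
	path := filepath.Join(os.Getenv("HOME"), ".config", "go-furnace", "cloud_formation.template")
	body, err := ioutil.ReadFile(path)
	if err != nil {
		log.Fatalf("missing CloudFormation template under %s: %v", path, err)
	}
	return body
}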

Usage

A typical scenario for Furnace would be the following:

  • Set up your CloudFormation template or use the one provided. The one provided sets up a highly available and self-healing setup using Auto-Scaling and Load-Balancing with a single application instance. Edit this template to your liking, then copy it to ~/.config/go-furnace.
  • Create the configured stack with ./furnace create.
  • Create will ask for the parameters defined in the template. If defaults are set up, simply hitting enter will use those defaults. Take note that the provided template sets up SSH access via a provided key. If that key is not present in CF, you won’t be able to SSH into the created instance.
  • Once the stack is completed, the application is ready to be pushed. To do this, run: ./furnace push. This will locate the appropriate version of the app from S3 or GitHub and push that version to the instances in the Auto-Scaling group. To all of them.

General Practices Applied to the Project

Commands

For each command, the main entry point is the execute function. These functions usually call out to small, focused methods. Logic was kept to a bare minimum in the execute functions (it could probably be simplified even further), mostly for testability and the like. We will see that in a follow-up post.

Errors

Errors are handled immediately, and usually through a fatal. If any error occurs, the application is halted. In follow-up versions this might become more granular, i.e. don’t immediately stop the world; maybe try to recover, or create a Poller or Re-Tryer which retries a call a configured number of times.

Output colors

Not that important, but still… Aesthetics. Displaying data to the console in a nice way gives it some extra flare.

Makefile

This project works with a Makefile for various reasons. Later on, once the project becomes more complex, a Makefile makes it really easy to handle different ways of packaging the application. Currently, for example, it provides a linux target which makes Go build the project for the Linux architecture on any other architecture, i.e. cross-compiling.

It also provides an easy way to run unit tests with make test and installing with make && make install.

Closing Words

That is all for Part 2. Join me in Part 3 where I will talk about the experimental Plugin system that Furnace employs.

Thank you for reading! Gergely.

17 Mar 2017, 09:09

Testing new Hugo if posts are generated properly

Testing.

16 Mar 2017, 21:49

Furnace - The building of an AWS CLI Tool for CloudFormation and CodeDeploy - Part 1

Other posts:

Part 2, Part 3, Part 4.

Building Furnace: Part 1

Intro

Hi folks.

This is the first part of a 4 part series which talks about the process of building a medium-sized project in Go with AWS, including unit testing and an experimental plugin feature.

The first part will talk about the AWS services used in brief and will contain a basic description for those who are not familiar with them. The second part will talk about the Go SDK and the project structure itself, how it can be used, improved, and how it can help in everyday life. The third part will talk about the experimental plugin system, and finally, we will tackle unit testing AWS in Go.

Let’s begin, shall we?

AWS

CloudFormation

If you haven’t yet read about, or don’t know of, AWS’ CloudFormation service, you can either go ahead and read the Documentation or read on for a very quick summary. If you are familiar with CF, you should skip ahead to the CodeDeploy section.

CF is a service which bundles together other AWS services (for example: EC2, S3, ELB, ASG, RDS) into one, easily manageable stack. After a stack has been created, all the resources can be handled as one, located, tagged and used via CF specific console commands. It’s also possible to define any number of parameters, so a stack can actually be very versatile. A parameter can be anything, from an SSH IP restriction to KeyPair names, a list of tags to create, or the region the stack will be in.

To describe how these parts fit together, one must use a CloudFormation Template file which is either in JSON or in YAML format. A simple example looks like this:

    Parameters:
      KeyName:
        Description: The EC2 Key Pair to allow SSH access to the instance
        Type: AWS::EC2::KeyPair::KeyName
    Resources:
      Ec2Instance:
        Type: AWS::EC2::Instance
        Properties:
          SecurityGroups:
          - Ref: InstanceSecurityGroup
          - MyExistingSecurityGroup
          KeyName:
            Ref: KeyName
          ImageId: ami-7a11e213
      InstanceSecurityGroup:
        Type: AWS::EC2::SecurityGroup
        Properties:
          GroupDescription: Enable SSH access via port 22
          SecurityGroupIngress:
          - IpProtocol: tcp
            FromPort: '22'
            ToPort: '22'
            CidrIp: 0.0.0.0/0

There are a myriad of these template samples here.

I’m not going to explain this in too much detail. Parameters define the parameters, and resources define all the AWS services which we would like to configure. Here we can see that we are creating an EC2 instance with a custom security group plus an already existing security group. ImageId is the AMI which will be used for the EC2 instance. The InstanceSecurityGroup only defines some SSH access to the instance.

That is pretty much it. This can become bloated relatively quickly once VPCs, ELBs, and ASGs come into play. CloudFormation templates can also contain simple logical switches, like conditions, Ref for variables, maps, and other shenanigans.

For example consider this part in the above example:

      KeyName:
        Ref: KeyName

Here, we use the KeyName parameter as a Reference Value which will be interpolated to the real value, or the default one, as the template gets processed.

CodeDeploy

If you haven’t heard about CodeDeploy yet, please browse the relevant Documentation or follow along for a “quick” description.

CodeDeploy just does what the name says: it deploys code. Any kind of code, as long as the deployment process is described in a file called appspec.yml. It can be as easy as copying a file to a specific location, or incredibly complex with builds of various kinds.

For a simple example look at this configuration:

    version: 0.0
    os: linux
    files:
      - source: /index.html
        destination: /var/www/html/
      - source: /healthy.html
        destination: /var/www/html/
    hooks:
      BeforeInstall:
        - location: scripts/install_dependencies
          timeout: 300
          runas: root
        - location: scripts/clean_up
          timeout: 300
          runas: root
        - location: scripts/start_server
          timeout: 300
          runas: root
      ApplicationStop:
        - location: scripts/stop_server
          timeout: 300
          runas: root

CodeDeploy applications have hooks and life-cycle events which can be used to control the deployment process of an application, like starting the web server; making sure files are in the right location; copying files; running configuration management software like Puppet, Ansible or Chef; etc.

What can be done in an appspec.yml file is described here: Appspec Reference Documentation.

Deployment happens in one of two ways:

GitHub

If the preferred way to deploy the application is from GitHub, a commit hash must be used to identify which “version” of the application is to be deployed. For example:

    rev = &codedeploy.RevisionLocation{
        GitHubLocation: &codedeploy.GitHubLocation{
            CommitId:   aws.String("kajdf94j0f9k309klksjdfkj"),
            Repository: aws.String("Skarlso/furnace-codedeploy-app"),
        },
        RevisionType: aws.String("GitHub"),
    }

CommitId is the hash of the latest release, and Repository is the full account/repository pointing to the application.

S3

The second way is to use an S3 bucket. The bucket will contain an archived version of the application with a given extension. I’m saying given extension because it has to be specified like this (and it can be either ‘zip’, ‘tar’ or ‘tgz’):

    rev = &codedeploy.RevisionLocation{
        S3Location: &codedeploy.S3Location{
            Bucket:     aws.String("my_codedeploy_bucket"),
            BundleType: aws.String("zip"),
            Key:        aws.String("my_awesome_app"),
            Version:    aws.String("VersionId"),
        },
        RevisionType: aws.String("S3"),
    }

Here, we specify the bucket name, the extension, the name of the file and an optional version id, which can be ignored.

Deploying

So how does CodeDeploy get either of these applications to our EC2 instances? It uses an agent which runs on all of the instances that we create. In order for this to work, the agent needs to be present on our instances. For Linux, this can be achieved with the following UserData (UserData in CF is the equivalent of a bootstrap script):

    "UserData" : {
        "Fn::Base64" : { "Fn::Join" : [ "\n", [
            "#!/bin/bash -v",
            "sudo yum -y update",
            "sudo yum -y install ruby wget",
            "cd /home/ec2-user/",
            "wget https://aws-codedeploy-eu-central-1.s3.amazonaws.com/latest/install",
            "chmod +x ./install",
            "sudo ./install auto",
            "sudo service codedeploy-agent start",
        ] ] }
    }

A simple user data configuration in the CloudFormation template will make sure that every instance we create has the CodeDeploy agent running and waiting for instructions. This agent is self-updating, which can cause some trouble if AWS releases a broken agent. However unlikely, it can happen. Nevertheless, once installed, it’s no longer a concern to be bothered with.

It communicates over HTTPS on port 443.

CodeDeploy identifies the instances which need to be updated, according to our preferences, via tags on the EC2 instances and Auto Scaling groups. Tagging happens in the CloudFormation template through the AutoScalingGroup settings, like this:

    "Tags" : [
        {
            "Key" : "fu_stage",
            "Value" : { "Ref": "AWS::StackName" },
            "PropagateAtLaunch" : true
        }
    ]

This will give the EC2 instances a tag called fu_stage with a value equal to the name of the stack. Once this is done, the CodeDeploy call looks like this:

    params := &codedeploy.CreateDeploymentInput{
        ApplicationName:               aws.String(appName),
        IgnoreApplicationStopFailures: aws.Bool(true),
        DeploymentGroupName:           aws.String(appName + "DeploymentGroup"),
        Revision:                      revisionLocation(),
        TargetInstances: &codedeploy.TargetInstances{
            AutoScalingGroups: []*string{
                aws.String("AutoScalingGroupPhysicalID"),
            },
            TagFilters: []*codedeploy.EC2TagFilter{
                {
                    Key:   aws.String("fu_stage"),
                    Type:  aws.String("KEY_AND_VALUE"),
                    Value: aws.String(config.STACKNAME),
                },
            },
        },
        UpdateOutdatedInstancesOnly: aws.Bool(false),
    }

CreateDeploymentInput is the entire parameter list that is needed in order to identify the instances to deploy code to. We can see here that it looks for an AutoScalingGroup by physical Id and for the tag labeled fu_stage. Once found, it uses UpdateOutdatedInstancesOnly to determine whether an instance needs to be updated or not. Set to false, it always updates.

Furnace

Where does Furnace fit into all of this? Furnace provides a very easy mechanism to create, delete, and push code to a CloudFormation stack using CodeDeploy and a couple of environment properties. furnace create will create a CloudFormation stack according to the provided template, all the while asking for the parameters defined in it, for flexibility. delete will remove the stack and all affiliated resources except for the created CodeDeploy application; for that, there is delete-application. status will display information about the stack: Outputs, Parameters, Id, Name, and status. Something like this:

    2017/03/16 21:14:37 Stack state is:  {
      Capabilities: ["CAPABILITY_IAM"],
      CreationTime: 2017-03-16 20:09:38.036 +0000 UTC,
      DisableRollback: false,
      Outputs: [{
          Description: "URL of the website",
          OutputKey: "URL",
          OutputValue: "http://FurnaceSt-ElasticL-ID.eu-central-1.elb.amazonaws.com"
        }],
      Parameters: [
        {
          ParameterKey: "KeyName",
          ParameterValue: "UserKeyPair"
        },
        {
          ParameterKey: "SSHLocation",
          ParameterValue: "0.0.0.0/0"
        },
        {
          ParameterKey: "CodeDeployBucket",
          ParameterValue: "None"
        },
        {
          ParameterKey: "InstanceType",
          ParameterValue: "t2.nano"
        }
      ],
      StackId: "arn:aws:cloudformation:eu-central-1:9999999999999:stack/FurnaceStack/asdfadsf-adsfa3-432d-a-fdasdf",
      StackName: "FurnaceStack",
      StackStatus: "CREATE_COMPLETE"
    }

( This will later be improved to include created resources as well. )

Once the stack is CREATE_COMPLETE, a simple push will deliver our application to each instance in the stack. We will get into more detail about how these commands work in Part 2 of this series.

Final Words

This is it for now.

Join me next time when I will talk about the AWS Go SDK and its intricacies and we will start to look at the basics of Furnace.

As always, Thanks for reading! Gergely.

03 Mar 2017, 18:20

Images on older posts

Hi folks.

Just a quick heads-up that older posts may unfortunately have lost their images, because I made the terrible mistake, when I migrated over from my old blog, of forgetting to download all the images from the remote host.

For lack of options, I deleted the images. :/ Sorry for the inconvenience!

Gergely.

15 Feb 2017, 19:20

How to HTTPS with Hugo LetsEncrypt and HAProxy

Intro

Hi folks.

Today, I would like to write about how to do HTTPS for a website, without the need to buy a certificate and set it up via your DNS provider. Let’s begin.

Abstract

What you will achieve by the end of this post:

  • Every call to HTTP will be redirected to HTTPS via haproxy.
  • HTTPS will be served with Haproxy and LetsEncrypt as the certificate provider.
  • The certificate will be automatically renewed before its expiration.
  • No need for IPTable rules to route 8080 to 80.
  • Traffic to and from your page will be encrypted.
  • This all will cost you nothing.

I will use a static website generator for this, called Hugo, which, if you know me, is my favorite generator tool. These instructions are for haproxy and hugo; if you wish to use apache and nginx, for example, you’ll have to dig up the corresponding settings for letsencrypt and certbot.

What You Will Need

Hugo

You will need hugo, which can be downloaded from here: Hugo. A simple website will be enough. For themes, you can take a look at the humongous list located here: HugoThemes.

Haproxy

Haproxy can be found here: Haproxy. There are a number of options to install haproxy. I chose a simple apt-get install haproxy.

Let’s Encrypt

Information about Let’s Encrypt can be found on their website here: Let’s Encrypt. Let’s Encrypt’s client is now called Certbot which is used to generate the certificates. To get the latest code you either clone the repository Certbot, or use an auto downloader:

user@webserver:~$ wget https://dl.eff.org/certbot-auto
user@webserver:~$ chmod a+x ./certbot-auto
user@webserver:~$ ./certbot-auto --help

Either way, I’m using the current latest version: v0.11.1.

Sudo

This goes without saying, but these operations will require you to have sudo privileges. I suggest staying in sudo for ease of use. This means that the commands I’ll write here assume you are in sudo su mode, thus no sudo prefix will be used.

Portforwarding

In order for your website to work under https, this guide assumes that you have ports 80 and 443 open on your router / network security group.

Setup

Single Server Environment

It is possible for haproxy, certbot and your website to run on dedicated servers; haproxy’s abilities allow you to define multiple server sources. In this guide, my haproxy, website and certbot will all run on the same server, thus redirecting to 127.0.0.1 and local IPs. This is more convenient, because otherwise the haproxy IP would have to be a permanent local/remote IP, or an automated script would have to be set up which is notified upon an IP change and updates the IP records.

Creating a Certificate

Diving in, the first thing you will require is a certificate. A certificate will allow for encrypted traffic and an authenticated website. Let’s Encrypt basically functions as an independent, free, automated CA (Certificate Authority). Usually, the process would be to pay a CA to give you a signed, generated certificate for your website, and you would have to set that up with your DNS provider. Let’s Encrypt has all that automated, and free of any charge. Neat.

Certbot

So let’s get started. Clone the repository into /opt/letsencrypt for further usage.

git clone https://github.com/certbot/certbot /opt/letsencrypt

Generating the certificate

Make sure that there is nothing listening on ports: 80, 443. To list usage:

netstat -nlt | grep ':80\s'
netstat -nlt | grep ':443\s'

Kill everything that might be on these ports, like apache2 and httpd. These will be used by haproxy and certbot for challenges and redirecting traffic.

You will be creating a standalone certificate. This is the reason we need ports 80 and 443 open. Run certbot with the certonly and --standalone flags. For domain validation you are going to use port 443, the tls-sni-01 challenge. The whole command looks like this:

cd /opt/letsencrypt
./certbot-auto certonly --standalone -d example.com -d www.example.com

If this displays something like “couldn’t connect”, you probably still have something running on a port it tries to use. The generated certificate will be located under /etc/letsencrypt/archive and /etc/letsencrypt/keys, while /etc/letsencrypt/live is a symlink to the latest version of the cert. It’s wise not to copy these away from here, since the live link is always updated to the latest version. Our script will handle haproxy, which requires one cert file made from the privkey.pem + fullchain.pem files.

Setup Auto-Renewal

Let’s Encrypt issues short-lived certificates (90 days). In order not to have to do this procedure every 89 days, certbot provides a nifty command called renew. However, for the cert to be generated, port 443 has to be open. This means haproxy needs to be stopped before doing the renew. Now, you COULD write a script which stops it, and after the certificate has been renewed, starts it again, but certbot has you covered in that department as well. It provides hooks called pre-hook and post-hook. Thus, all you have to write is the following:

#!/bin/bash

cd /opt/letsencrypt
./certbot-auto renew --pre-hook "service haproxy stop" --post-hook "service haproxy start"
DOMAIN='example.com' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /etc/haproxy/certs/$DOMAIN.pem'

If you would like to test it first, just include the switch --dry-run.

In case of success you should see something like this:

root@raspberrypi:/opt/letsencrypt# ./certbot-auto renew --pre-hook "service haproxy stop" --post-hook "service haproxy start" --dry-run
Saving debug log to /var/log/letsencrypt/letsencrypt.log

-------------------------------------------------------------------------------
Processing /etc/letsencrypt/renewal/example.com.conf
-------------------------------------------------------------------------------
Cert not due for renewal, but simulating renewal for dry run
Running pre-hook command: service haproxy stop
Renewing an existing certificate
Performing the following challenges:
tls-sni-01 challenge for example.com
Waiting for verification...
Cleaning up challenges
Generating key (2048 bits): /etc/letsencrypt/keys/0002_key-certbot.pem
Creating CSR: /etc/letsencrypt/csr/0002_csr-certbot.pem
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates below have not been saved.)

Congratulations, all renewals succeeded. The following certs have been renewed:
  /etc/letsencrypt/live/example.com/fullchain.pem (success)
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates above have not been saved.)
Running post-hook command: service haproxy start

Put this script into a crontab. Cron can’t express “every 89 days” directly, but since renew only replaces certificates that are close to expiry, it’s safe to run it more often, for example twice a month:

crontab -e
# Open crontab for edit and paste in this line
0 3 1,15 * * /root/renew-cert.sh

And you should be all set. Now we move on the configure haproxy to redirect and to use our newly generated certificate.

Haproxy

Like I said, haproxy requires a single-file certificate in order to encrypt traffic to and from the website. To do this, we need to combine privkey.pem and fullchain.pem. As of this writing, there are a couple of solutions to automate this via a post hook on renewal, and there is also an open ticket with certbot to implement a simpler solution, located here: https://github.com/certbot/certbot/issues/1201. For now, I have chosen to simply concatenate the two files together with cat, like this:

DOMAIN='example.com' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /etc/haproxy/certs/$DOMAIN.pem'

It will create a combined cert under /etc/haproxy/certs/example.com.pem.

Haproxy configuration

If haproxy happens to be running, stop it with service haproxy stop.

First, save the default configuration file: cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.old. Now, overwrite the old one with this new one (comments about what each setting does, are in-lined; they are safe to copy):

global
    daemon
    # Set this to your desired maximum connection count.
    maxconn 2048
    # https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.2-tune.ssl.default-dh-param
    # bit setting for Diffie - Hellman key size.
    tune.ssl.default-dh-param 2048

defaults
    option forwardfor
    option http-server-close

    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

# In case it's a simple http call, we redirect to the basic backend server
# which in turn, if it isn't an SSL call, will redirect to HTTPS that is
# handled by the frontend setting called 'www-https'.
frontend www-http
    # Redirect HTTP to HTTPS
    bind *:80
    # Adds http header to end of end of the HTTP request
    reqadd X-Forwarded-Proto:\ http
    # Sets the default backend to use which is defined below with name 'www-backend'
    default_backend www-backend

# If the call is HTTPS we set a challenge to letsencrypt backend which
# verifies our certificate and then directs traffic to the backend server
# which is the running hugo site that is served under https if the challenge succeeds.
frontend www-https
    # Bind 443 with the generated letsencrypt cert.
    bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
    # set x-forward to https
    reqadd X-Forwarded-Proto:\ https
    # set X-SSL in case of ssl_fc <- explained below
    http-request set-header X-SSL %[ssl_fc]
    # Select a Challenge
    acl letsencrypt-acl path_beg /.well-known/acme-challenge/
    # Use the challenge backend if the challenge is set
    use_backend letsencrypt-backend if letsencrypt-acl
    default_backend www-backend

backend www-backend
   # Redirect with code 301 so the browser understands it is a redirect. If it's not SSL_FC.
   # ssl_fc: Returns true when the front connection was made via an SSL/TLS transport
   # layer and is locally deciphered. This means it has matched a socket declared
   # with a "bind" line having the "ssl" option.
   redirect scheme https code 301 if !{ ssl_fc }
   # Server for the running hugo site.
   server www-1 192.168.0.17:8080 check

backend letsencrypt-backend
   # Lets encrypt backend server
   server letsencrypt 127.0.0.1:54321

Save this, and start haproxy with service haproxy start. If you did everything right, it should say nothing. If, however, something went wrong with starting the proxy, it usually displays something like this:

Job for haproxy.service failed. See 'systemctl status haproxy.service' and 'journalctl -xn' for details.

You can also gather some more information on what went wrong from less /var/log/haproxy.log.

Starting the Server

Everything should be ready to go. Hugo has the concept of a baseUrl. Everything that it loads and tries to access will be prefixed with it. You can either set it through its config.yaml file or from the command line.

To start the server, call this from the site’s root folder:

hugo server --bind=192.168.x.x --port=8080 --baseUrl=https://example.com --appendPort=false

An interesting thing to note here is the https scheme and the port. The IP could be 127.0.0.1 as well; I experienced problems, though, with not binding to the network IP when I was debugging the site from a different laptop on the same network.

Once the server is started, you should be able to open up your website from a browser outside your local network and see that it has a valid certificate installed. In Chrome you should see a green icon telling you that the cert is valid.

Last Words

And that is all. The site should be up and running, and the renewal script should auto-renew your site’s certificate. If you happen to change DNS or change the server, you’ll have to reissue the certificate.

Thanks for reading! Any questions or trouble setting something up, please feel free to leave a comment.

Cheers, Gergely.

02 Nov 2016, 00:00

How to do Google Sign-In with Go - Part 2

Intro

Hi Folks.

This is a follow-up to my previous post about Google Sign-In. In this post we will discover what to do with the information retrieved in the first encounter, which you can find here: Google Sign-In Part 1.

Forewords

The Project

Everything I did in the first post, and that I’m going to do in this example, can be found in this project: Google-OAuth-Go-Sample.

Just to recap, we left off at the point where we had successfully obtained information about the user, with a secure token and a session initiated with them. Google nicely provided us with some details which we can use. This information was in JSON format and looked something like this:

{
  "sub": "1111111111111111111111",
  "name": "Your Name",
  "given_name": "Your",
  "family_name": "Name",
  "profile": "https://plus.google.com/1111111111111111111111",
  "picture": "https://lh3.googleusercontent.com/asdfadsf/AAAAAAAAAAI/Aasdfads/Xasdfasdfs/photo.jpg",
  "email": "your@gmail.com",
  "email_verified": true,
  "gender": "male"
}

In my example, to keep things simple, I will use the email address since that has to be unique in the land of Google. You could assign an ID to the user, and you could complicate things even further, but my goal is not to write an academic paper about cryptography here.

Implementation

Making something useful out of the data

In order for the app to recognise a user, it must save some data about the user. I’m doing that in MongoDB right now, but it could be any form of persistence layer, like SQLite3, BoltDB, PostgresDB, etc.
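To give an idea of what that layer can look like, here is a rough sketch of a LoadUser / SaveUser pair backed by MongoDB via the mgo driver (the database and collection names, the User struct and the per-call connection handling are simplified assumptions, not the project’s exact code):

package database

import (
	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

// User is the minimal record we keep about an authenticated user.
type User struct {
	Email string `bson:"email"`
	Name  string `bson:"name"`
}

// MongoDBConnection is the handle the handlers construct and call.
type MongoDBConnection struct{}

// LoadUser returns the stored user for an email, or an error if none exists.
func (m *MongoDBConnection) LoadUser(email string) (User, error) {
	session, err := mgo.Dial("localhost")
	if err != nil {
		return User{}, err
	}
	defer session.Close()
	var u User
	err = session.DB("goquest").C("users").Find(bson.M{"email": email}).One(&u)
	return u, err
}

// SaveUser stores a new user record.
func (m *MongoDBConnection) SaveUser(u *User) error {
	session, err := mgo.Dial("localhost")
	if err != nil {
		return err
	}
	defer session.Close()
	return session.DB("goquest").C("users").Insert(u)
}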

After successful user authorization

Once the user has used Google to provide us with sufficient information about him/herself, we can retrieve data about that user from our records. The data could be anything that is linked to our unique identifier, like: character profile, player information, status, last logged-in, etc. For this, two things need to happen after authorization: save/load the user information and initiate a session.

The session can be in the form of a cookie, a Redis store, or URL re-writing. I’m choosing a cookie here.

Save / Load user information

All I’m doing is simple returning/new user handling. The concept is simple: if the email isn’t saved yet, we save it. If it is saved, we set a flag so our page render greets the returning user.

In the AuthHandler I’m doing the following:

...
seen := false
db := database.MongoDBConnection{}
if _, mongoErr := db.LoadUser(u.Email); mongoErr == nil {
    seen = true
} else {
    err = db.SaveUser(&u)
    if err != nil {
        log.Println(err)
        c.HTML(http.StatusBadRequest, "error.tmpl", gin.H{"message": "Error while saving user. Please try again."})
        return
    }
}
c.HTML(http.StatusOK, "battle.tmpl", gin.H{"email": u.Email, "seen": seen})
...

Let’s break this down a bit. There is a db connection here, which calls a function that either returns an error or it doesn’t. If it doesn’t, that means we have our user. If it does, it means we have to save the user. This is a very simple case (disregard, for now, that the error could be something else as well; if you can’t get past that, you could type-check the error, or check whether the returned record contains the requested user information instead of checking for an error).

The template is then rendered depending on the seen boolean, like this:

<!DOCTYPE html>
<link rel="icon"
      type="image/png"
      href="/img/favicon.ico" />
<html>
  <head>
    <link rel="stylesheet" href="/css/main.css">
  </head>
  <body>
    {{if .seen}}
        <h1>Welcome back to the battlefield '{{ .email }}'.</h1>
    {{else}}
        <h1>Welcome to the battlefield '{{ .email }}'.</h1>
    {{end}}
  </body>
</html>

You can see here, that if seen is true the header message will say: “Welcome back…“.

Initiating a session

When the user is successfully authenticated, we activate a session so that the user can access pages that require authorization. Here, I have to mention that I’m using Gin, so restricted end-points are made with groups which require a middleware.

As I mentioned earlier, I’m using cookies as session handlers. For this, a new session store has to be created with some secure token. This is achieved with the following code fragments ( note that I’m using a Gin session middleware which uses gorilla’s session handler located here: Gin-Gonic(Sessions)):

// RandToken in handlers.go:
// RandToken generates a random @l length token.
func RandToken(l int) string {
	b := make([]byte, l)
	rand.Read(b)
	return base64.StdEncoding.EncodeToString(b)
}

// quest.go:
// Create the cookie store in main.go.
store := sessions.NewCookieStore([]byte(handlers.RandToken(64)))
store.Options(sessions.Options{
    Path:   "/",
    MaxAge: 86400 * 7,
})

// using the cookie store:
router.Use(sessions.Sessions("goquestsession", store))

After this gin.Context lets us access this session store by doing session := sessions.Default(c). Now, create a session variable called user-id like this:

session.Set("user-id", u.Email)
err = session.Save()
if err != nil {
    log.Println(err)
    c.HTML(http.StatusBadRequest, "error.tmpl", gin.H{"message": "Error while saving session. Please try again."})
    return
}

Don’t forget to save the session. ;) That is it. If I restart the server, the cookie won’t be usable any longer, since a new token will be generated for the cookie store. The user will have to log in again. Note: you might see something like this from the session middleware: [sessions] ERROR! securecookie: the value is not valid. You can ignore this error.

Restricting access to certain end-points with the auth Middleware™

Now, that our session is alive, we can use it to restrict access to some part of the application. With Gin, it looks like this:

authorized := router.Group("/battle")
authorized.Use(middleware.AuthorizeRequest())
{
    authorized.GET("/field", handlers.FieldHandler)
}

This creates a grouping of end-points under /battle, which means everything under /battle will only be accessible if the middleware passed to the Use function calls the next handler in the chain. If it aborts the call chain, the end-point will not be accessible. My middleware is pretty simple, but it gets the job done:

// AuthorizeRequest is used to authorize a request for a certain end-point group.
func AuthorizeRequest() gin.HandlerFunc {
	return func(c *gin.Context) {
		session := sessions.Default(c)
		v := session.Get("user-id")
		if v == nil {
			c.HTML(http.StatusUnauthorized, "error.tmpl", gin.H{"message": "Please log in."})
			c.Abort()
			// return here so we don't fall through to c.Next() after aborting
			return
		}
		c.Next()
	}
}

Note that this only checks whether user-id is set or not. That’s certainly not enough for a secure application. It’s only supposed to be a simple example of the mechanics of the auth middleware. Also, the session usually contains more than one parameter; it’s more likely that it contains several variables which describe the user, including a state for CORS protection. For CORS I’d recommend using rs/cors.

If you would try to access http://127.0.0.1:9090/battle/field without logging in, you’d be redirected to an error.tmpl with the message: Please log in..

Final Words

That’s pretty much it. Important parts are:

  • Saving the right information
  • Secure cookie store
  • CORS for sessions
  • Checks of the users details in the cookie
  • Authorised end-points
  • Session handling

Any questions, remarks, ideas, are very welcomed in the comment section. There are plenty of very nice Go frameworks which do Google OAuth2 out of the box. I recommend using them, as they save you a lot of legwork.

Thank you for reading! Gergely.