06 Nov 2017, 20:34

Furnace Ikea Manual

Hi there folks.

Just a quick post about how I went and created an IKEA-style manual for Furnace.

Page 1: Page 1. Page 2: Page 2.

I drew these using Krita. I mostly used a mouse, but for sketches and such I also used a Wacom Bamboo drawing tablet.

Thanks, Gergely.

03 Sep 2017, 10:34

Furnace Binaries

Hey folks.

Quick note: Furnace now comes with pre-compiled, easy-to-access binaries which you can download and use out of the box.

No need to install anything, or compile the source. Just download, unzip and use.

Here is the website: Furnace Website.

Enjoy, Cheers, Gergely.

31 May 2017, 06:23

Notetaking

Page1

Page2

28 May 2017, 19:23

Replacing Eval with Object.send and a self written Parser

Intro

A while ago, I was added as a curator to a gem called JsonPath. It’s a small but very useful and brilliant gem. It had a couple of problems which I fixed, but the hardest to eliminate proved to be a series of evals throughout the code.

You could opt in to using eval with a constructor parameter, but generally it was considered unsafe. Thus, projects using the gem, like Huginn, had to opt out by default, missing out on sweet parsing like this: $..book[?(@['price'] > 20)].

Eval

In order to remove eval, first I had to understand what it was actually doing. I had to take it apart.


After much digging through the code, I found that all it does is perform the given operations on the current node. If the operation evaluates to true, it selects that node; otherwise it ignores it.

For example $..book[?(@['price'] > 20)] could be translated to:

return @_current_node['price'] > 20

…while first checking that 'price' is even a key in @_current_node. Once I understood this part, I set out to fix eval.

$SAFE = 4

In Ruby, you can extract the part where you eval into its own proc and set $SAFE = 4, which disables some things like system calls.

proc do
  $SAFE = 4
  eval(some_expression)
end.call

$SAFE levels:

  • 0 - No checking of the use of externally supplied (tainted) data is performed. This is Ruby’s default mode.
  • >= 1 - Ruby disallows the use of tainted data by potentially dangerous operations.
  • >= 2 - Ruby prohibits the loading of program files from globally writable locations.
  • >= 3 - All newly created objects are considered tainted.
  • >= 4 - Ruby effectively partitions the running program in two. Non-tainted objects may not be modified. Typically, this will be used to create a sandbox: the program sets up an environment using a lower $SAFE level, then resets $SAFE to 4 to prevent subsequent changes to that environment.

This has the disadvantage that anything below 4 is just, meh. But nothing above 1 will actually work with JsonPath so… scratch that.

Sandboxing

We could technically try to sandbox eval into its own process, and whitelist the methods which are allowed to be called.

Not bad, and there are a few gems out there trying to do just that, like SafeRuby. But all of these projects were abandoned years ago, for good reason.

Object.send


Object.send is the best way to get some flexibility while still being safe. You call a method on an object by naming the method and passing its parameters, like:

1.send(:+, 2) # => 3

This is a very powerful tool in our toolbox which we will exploit immensely.

So let’s get to it.

Writing a parser

Writing a parser in Ruby is a very fluid experience. It has nice tools to support it; the one I used is StringScanner. It tracks where you currently are in a string and moves a pointer along with regex matches. In fact, JsonPath already employs this method when parsing an expression, so reusing that logic was, in fact… elementary.

The expression

How do we get from this:

$..book[?(@['price'] < 20)]

To this:

@_current_node['price'] < 20

Well. By simple elimination. There are a couple of problems along the way of course. Because this wouldn’t be a parser if it couldn’t handle ALL the other cases…

Removing Clutter

Some of this we don’t need, like the $..book part.


The other things we don’t need are all the '[]?() characters.


Once this is done, we can move to isolating the important bits.


Breakdown

Elements

What does an expression actually look like?

Let’s break it down.


So, this is a handful. Operators can be <=, >=, <, >, == or !=; operands can be either numbers or words; and element accessors can be nested, since something like this is perfectly valid: $..book[?(@.written.year == 1997)].


To avoid being overwhelmed, Ruby has our back with a method called dig.


dig is a variadic method on hashes and arrays: it takes a list of keys and accesses those elements in the order they were supplied, until it either returns nil or the end result.

For example:

2.3.1 :001 > a = {a: {b: 'c'}}
 => {:a=>{:b=>"c"}}
2.3.1 :002 > a.dig(:a, :b)
 => "c"

Easy. However… dig was only added in Ruby 2.3, so I had to write my own for now, until I stop supporting anything below 2.3.

At first, I wanted to add it to the Hash class, but that proved to be a futile attempt if I wanted to do it nicely, so the parser got it as a private method.

    def dig(keys, hash)
      return hash unless hash.is_a? Hash
      return nil unless hash.key?(keys.first)
      return hash.fetch(keys.first) if keys.size == 1
      prev = keys.shift
      dig(keys, hash.fetch(prev))
    end

And the corresponding regex behind getting a multitude of elements is as follows:

...
if t = scanner.scan(/\['\w+'\]+/)
...

Operator

Selecting the operator is another interesting part. I thought it could be a single one, or multiple, and all sorts of things… until I realized that no, it can actually only be one of a couple.

whatone

whattwo

Also, after a bit of fiddling, I first did a silly case statement:

case op
when '>'
  dig(elements, @_current_node) > operand
when '<'
  dig(elements, @_current_node) < operand
...
end

…I promptly saw that this is not how it should be done.

And here comes Object.send.


This gave me the opportunity to write this:

dig(elements, @_current_node).send(operator, operand)

Much better. Now I could send all the things in the way of a node.


Parsing an op be like:

elsif t = scanner.scan(/\s+[<>=][<>=]?\s+?/)

Operand

Now comes the final piece: the value we are comparing against. This could be a simple integer, a floating-point number, or a word. Hah. Coming up with a regex which fits this tightly took a little fiddling, but eventually I ended up with this:

elsif t = scanner.scan(/(\s+)?'?(\w+)?[.,]?(\w+)?'?(\s+)?/)

Without StackOverflow I would say this is fine ((although I need to remove all those space checks, sheesh)). What are all the question marks? Basically, everything is optional, because an expression like $..book[?(@.price)] is valid too. That one is basically just asserting whether a given node has a price element.

Logical Operators

The last thing that remains is logical operators, which, if you are using eval, are pretty straightforward: it takes care of anything you might add, like &&, ||, |, &, ^, etc.

Now, that’s something I did with a case statement though, until I find a nicer solution. Since we can already parse a single expression, it’s just a question of breaking down a compound expression like the following: $..book[?(@['price'] > 20 && @.written.year == 1998)].

exps = exp.split(/(&&)|(\|\|)/)

This splits up the string by either && or ||, and the use of groups () also keeps the operators. Then I evaluate the expressions and save the whole thing in an array like [true, '&&', false]. You know what could immediately resolve this? Yep…


Eval. But I’d rather just parse it, although technically an eval at this stage wouldn’t be that big of a problem…

def parse(exp)
  exps = exp.split(/(&&)|(\|\|)/)
  ret = parse_exp(exps.shift)
  exps.each_with_index do |item, index|
    case item
    when '&&'
      ret &&= parse_exp(exps[index + 1])
    when '||'
      ret ||= parse_exp(exps[index + 1])
    end
  end
  ret
end

Closing words

That’s it folks. The parser is done, and there is no eval being used. There are some more interesting things here, like array indexing, which JsonPath allows, and which is solved by sending .length to the current node. For example:

if scanner.scan(/\./)
  sym = scanner.scan(/\w+/)
  op = scanner.scan(/./)
  num = scanner.scan(/\d+/)
  return @_current_node.send(sym.to_sym).send(op.to_sym, num.to_i)
end

This runs if the expression begins with a '.'. So you see, using send helps a lot, and understanding what eval was actually evaluating, then writing your own parser instead, isn’t that hard at all in Ruby.

I hope you enjoyed reading this little tidbit as much as I enjoyed writing and drawing it. Leave a comment if you liked the drawings, or if you did not and I should never do them again ((I don’t really care, this is my blog haha.)). Note to self: I shouldn’t draw on the other side of the paper, because of bleed-through.

Thank you! Gergely.

16 Apr 2017, 09:23

Furnace - The building of an AWS CLI Tool for CloudFormation and CodeDeploy - Part 4

Intro

Hi folks.

Previously on this blog: Part 1. Part 2. Part 3.

In this part we are going to talk about Unit Testing Furnace and how to work some magic with AWS and Go.

Mock Stub Fake Dummy Canned

Unit testing in Go usually follows the Dependency Injection model of dealing with Mocks and Stubs.

DI

Dependency Injection, in short, is one object supplying the dependencies of another object. In a longer description, it’s ideal for removing the lock-in on a third-party library, like the AWS client. Imagine having code which solely depends on the AWS client. How would you unit test that code without having to ACTUALLY connect to AWS? You couldn’t. Every time you tried to test the code, it would run the live code, connect to AWS, and perform the operations it’s designed to do. The Ruby SDK, with its metaprogramming, allows you to set the client globally to stub responses, but, alas, this is not the world of Ruby.

Here is where DI comes to the rescue. If you have control over the AWS client at a very high level, and pass it around as a function parameter, or create that client in an init() function and have it globally defined, you can implement your own client and have your code use that instead, with the stubbed responses your tests need. For example, you might want a CreateApplication call to fail, or a DescribeStack call which returns an aws.Error("StackAlreadyExists").
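
To make this concrete, here is a minimal sketch of the idea, with made-up names for illustration (in a _test.go file; the testing import is elided): the code under test depends on a narrow interface rather than on the concrete AWS client, so a test can inject a fake.

type StackDescriber interface {
	// The real AWS client satisfies this narrow interface; so does a test fake.
	DescribeStack(name string) (string, error)
}

// StackStatus is the logic we want to unit test. It never sees the concrete client.
func StackStatus(c StackDescriber, name string) (string, error) {
	return c.DescribeStack(name)
}

// fakeDescriber is what a unit test injects instead of the real client.
type fakeDescriber struct{ status string }

func (f fakeDescriber) DescribeStack(name string) (string, error) {
	return f.status, nil
}

func TestStackStatus(t *testing.T) {
	status, _ := StackStatus(fakeDescriber{status: "CREATE_COMPLETE"}, "FurnaceStack")
	if status != "CREATE_COMPLETE" {
		t.Errorf("unexpected status: %s", status)
	}
}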

For this, however, you need the API of the AWS client, which AWS provides.

AWS Client API

In order for DI to work, the injected object needs to be of a certain type for us to inject our own. Luckily, AWS provides an interface for all of its clients, meaning we can implement our own version of any of them, like S3, CloudFormation, CodeDeploy, etc.

For each client you want to mock out, an *iface package is available, like this:

  "github.com/aws/aws-sdk-go/service/cloudformation/cloudformationiface"

In this package you find the interface, which you can use like this:

type fakeCloudFormationClient struct {
	cloudformationiface.CloudFormationAPI
	err error
}

And with this, we have our own CloudFormation client. The real code uses the real clients as function parameters, like this:

// Execute defines what this command does.
func (c *Create) Execute(opts *commander.CommandHelper) {
	log.Println("Creating cloud formation session.")
	sess := session.New(&aws.Config{Region: aws.String(config.REGION)})
	cfClient := cloudformation.New(sess, nil)
	client := CFClient{cfClient}
	createExecute(opts, &client)
}

We can’t test Execute itself, as it uses the real client here (though if you had a global client from some library, you could even test Execute), but there is very little logic in this function for this very reason. All the logic sits in small functions, for which the main starting point, and our testing opportunity, is createExecute.

Stubbing Calls

Now that we have our own client, and with the power of Go’s interface embedding as seen above with CloudFormationAPI, we only have to stub the functions which we are actually using, instead of every function of the given interface. That looks like this:

	cfClient := new(CFClient)
	cfClient.Client = &fakeCloudFormationClient{err: nil}

Where cfClient is a struct like this:

// CFClient abstraction for cloudFormation client.
type CFClient struct {
	Client cloudformationiface.CloudFormationAPI
}

And a stubbed call can then be written as follows:

func (fc *fakeCreateCFClient) WaitUntilStackCreateComplete(input *cloudformation.DescribeStacksInput) error {
	return nil
}

This can range from a very trivial example, like the one above, to intricate ones as well, like this gem:

func (fc *fakePushCFClient) ListStackResources(input *cloudformation.ListStackResourcesInput) (*cloudformation.ListStackResourcesOutput, error) {
	if "NoASG" == *input.StackName {
		return &cloudformation.ListStackResourcesOutput{
			StackResourceSummaries: []*cloudformation.StackResourceSummary{
				{
					ResourceType:       aws.String("NoASG"),
					PhysicalResourceId: aws.String("arn::whatever"),
				},
			},
		}, fc.err
	}
	return &cloudformation.ListStackResourcesOutput{
		StackResourceSummaries: []*cloudformation.StackResourceSummary{
			{
				ResourceType:       aws.String("AWS::AutoScaling::AutoScalingGroup"),
				PhysicalResourceId: aws.String("arn::whatever"),
			},
		},
	}, fc.err
}

This ListStackResources stub lets us test two scenarios, based on the stack name. If the test stack name is 'NoASG', it returns a result containing no Auto Scaling Group. Otherwise, it returns the correct ResourceType for an ASG.

It is a common practice to line up several scenario based stubbed responses in order to test the robustness of your code.

Unfortunately, this also means that your tests will be a bit cluttered with stubs, mock structs and whatnot. For that, I’m partially using a package-wide struct file in which I define most of the mock structs. From there on, the tests only contain the stubs specific to that particular file. This can be fine-grained further by having defaults and then overriding them only when you need something else, as sketched below.
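
A sketch of that pattern (names illustrative, imports elided), using Go’s struct embedding: a shared default fake provides happy-path stubs, and an individual test type embeds it and overrides only the one call that differs.

// In a shared test file: a default fake that every test can embed.
type defaultCFClient struct {
	cloudformationiface.CloudFormationAPI
}

func (d *defaultCFClient) WaitUntilStackCreateComplete(input *cloudformation.DescribeStacksInput) error {
	return nil // happy path by default
}

// In a specific test file: embed the default, override only what differs.
type failingCFClient struct {
	defaultCFClient
}

func (f *failingCFClient) WaitUntilStackCreateComplete(input *cloudformation.DescribeStacksInput) error {
	return errors.New("stack create failed")
}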

Testing fatals

Now, the other point which is not really AWS related, but still comes to mind when dealing with Furnace, is testing error scenarios.

Because Furnace is a CLI application, it uses fatals to signal that something is wrong and it doesn’t want to continue or recover because, frankly, it can’t. If AWS throws an error, that’s it. You can retry, but in 90% of the cases it’s something that you messed up.

So, how do we test for a fatal or an os.Exit? There are a number of takes on that if you do a quick search. You may end up at this talk: GoTalk 2014 Testing, slide #23, which does an interesting thing: it runs the test binary in a separate process and checks the exit code.
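
That pattern looks roughly like this (a sketch based on the talk; Crasher stands in for whatever function calls os.Exit, and the imports fmt, os, os/exec and testing are assumed):

func Crasher() {
	fmt.Println("Going down in flames!")
	os.Exit(1)
}

func TestCrasher(t *testing.T) {
	// When re-executed with the env var set, run the crashing code path.
	if os.Getenv("BE_CRASHER") == "1" {
		Crasher()
		return
	}
	// Otherwise, re-run this very test in a subprocess and check its exit code.
	cmd := exec.Command(os.Args[0], "-test.run=TestCrasher")
	cmd.Env = append(os.Environ(), "BE_CRASHER=1")
	err := cmd.Run()
	if e, ok := err.(*exec.ExitError); ok && !e.Success() {
		return // the process exited non-zero, which is what we wanted
	}
	t.Fatalf("process ran with err %v, want exit status 1", err)
}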

Others, myself included, will say that you should have your own logger implemented, and use a different logger / os.Exit in your test environment.

Still others will tell you not to have tests around os.Exit and fatal things at all; rather, return an error, and let only main pop a world-ending event. I leave it up to you which one you want to use. Either is fine.

In Furnace, I’m using a global logger in my error handling util like this:

// HandleFatal handles fatal errors in Furnace.
func HandleFatal(s string, err error) {
	LogFatalf(s, err)
}

And LogFatalf is an exported variable: var LogFatalf = log.Fatalf. Then, in a test, I just override this variable with a local anonymous function:

func TestCreateExecuteEmptyStack(t *testing.T) {
	failed := false
	utils.LogFatalf = func(s string, a ...interface{}) {
		failed = true
	}
	config.WAITFREQUENCY = 0
	client := new(CFClient)
	stackname := "EmptyStack"
	client.Client = &fakeCreateCFClient{err: nil, stackname: stackname}
	opts := &commander.CommandHelper{}
	createExecute(opts, client)
	if !failed {
		t.Error("expected outcome to fail during create")
	}
}

It can get even more granular by testing for the error message, to make sure that it actually fails at the point we think we are testing:

func TestCreateStackReturnsWithError(t *testing.T) {
	failed := false
	expectedMessage := "failed to create stack"
	var message string
	utils.LogFatalf = func(s string, a ...interface{}) {
		failed = true
		if err, ok := a[0].(error); ok {
			message = err.Error()
		}
	}
	config.WAITFREQUENCY = 0
	client := new(CFClient)
	stackname := "NotEmptyStack"
	client.Client = &fakeCreateCFClient{err: errors.New(expectedMessage), stackname: stackname}
	config := []byte("{}")
	create(stackname, config, client)
	if !failed {
		t.Error("expected outcome to fail")
	}
	if message != expectedMessage {
		t.Errorf("message did not equal expected message of '%s', was:%s", expectedMessage, message)
	}
}

Conclusion

This is it. That’s all it took to write Furnace. I hope you enjoyed reading it as much as I enjoyed writing all these thoughts down.

I hope somebody might learn from my journey and also improve upon it.

Any comments are much appreciated and welcomed. Also, PRs and Issues can be submitted on the GitHub page of Furnace.

Thank you for reading! Gergely.

22 Mar 2017, 12:03

Furnace - The building of an AWS CLI Tool for CloudFormation and CodeDeploy - Part 3

Intro

Hi folks.

Previously on this blog: Part 1. Part 2. Part 4.

In this part, I’m going to talk about the experimental plugin system of Furnace.

Go Experimental Plugins

Go 1.8 introduced an exciting new feature called plugins. This system works with dynamic libraries built with a special switch to go build. These libraries, .so (or .dylib later on), are then loaded, and once that succeeds, specific functions can be called on them (symbol resolution).

We will see how this works. For package information, visit the plugin package’s Go doc page here.
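
The loading side, in a minimal sketch (roughly what a host application such as Furnace does; the file name and the RunPlugin symbol are the ones we will meet below):

package main

import (
	"log"
	"plugin"
)

func main() {
	// Open the compiled plugin library.
	p, err := plugin.Open("./0001_mailer.pre_create")
	if err != nil {
		log.Fatal(err)
	}
	// Resolve the exported symbol by name.
	sym, err := p.Lookup("RunPlugin")
	if err != nil {
		log.Fatal(err)
	}
	// Assert the symbol to the expected function type, then call it.
	run, ok := sym.(func())
	if !ok {
		log.Fatal("RunPlugin has an unexpected signature")
	}
	run()
}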

Furnace Plugins

So, what does Furnace use plugins for? Furnace uses plugins to execute arbitrary code at, currently, four given locations / events.

These are: pre_create, post_create, pre_delete, post_delete. These events fire, as their names suggest, before and after the creation and deletion of the CloudFormation stack. They allow the user to execute some code without having to rebuild the whole project. Furnace does that by defining a single entry point for the custom code, called RunPlugin. Any number of functions can be implemented, but the plugin MUST provide this single, exported function; otherwise Furnace will fail to resolve it and ignore that plugin.

Using Plugins

It’s really easy to implement and use these plugins. I’m not going into the details of how they are loaded, because Furnace does that; only how to write and use them.

To use a plugin, create a Go file called, for example, 0001_mailer.go. The 0001 prefix defines WHEN the plugin is executed. Having multiple plugins is completely okay; the order of execution, however, depends on the names of the files.

Now, in 0001_mailer.go we would have something like this:

package main

import "log"

// RunPlugin runs the plugin.
func RunPlugin() {
	log.Println("My Awesome Pre Create Plugin.")
}

The next step is to build this file as a plugin library. Note: right now, this only works on Linux!

To build this file run the following:

go build -buildmode=plugin -o 0001_mailer.pre_create 0001_mailer.go

The important part here is the extension of the file specified with -o. It’s important because that’s how Furnace identifies which plugins it has to run, and when.

Finally, copy this file to ~/.config/go-furnace/plugins and you are all set.

Slack notification Plugin

To demonstrate how a plugin could be used: say you need some kind of notification once a stack is complete. For example, you might want to send a message to a Slack channel. To do this, your plugin would look something like this:

package main

import (
	"fmt"
	"os"

	"github.com/nlopes/slack"
)

func RunPlugin() {
	stackname := os.Getenv("FURNACE_STACKNAME")
	api := slack.New("YOUR_TOKEN_HERE")
	params := slack.PostMessageParameters{}
	channelID, timestamp, err := api.PostMessage("#general", fmt.Sprintf("Stack with name '%s' is Done.", stackname), params)
	if err != nil {
		fmt.Printf("%s\n", err)
		return
	}
	fmt.Printf("Message successfully sent to channel %s at %s", channelID, timestamp)
}

Currently, Furnace has no ability to share information about the stack with an outside plugin, thus ‘Done’ could be anything from Rollback to Failed to CreateComplete.

Closing Words

That’s it for plugins. Thanks very much for reading! Gergely.

19 Mar 2017, 12:03

Furnace - The building of an AWS CLI Tool for CloudFormation and CodeDeploy - Part 2

Intro

Hi folks.

Previously on this blog: Part 1, Part 3, Part 4

In this part, I’m going to talk about the AWS Go SDK and begin to dissect the intricacies of Furnace.

AWS SDK

Fortunately, the Go SDK for AWS is quite verbose and littered with examples of all sorts. But that doesn’t make it less complex, or less cryptic at times. I’m here to lift some of the early confusion, in the hope that I can help someone avoid wasting time.

Getting Started and Developers Guide

As always, and as is common from AWS, the documentation is top notch. There is a 141-page developer’s guide on the SDK, containing a getting-started section and an API reference. Go check it out. I’ll wait. AWS Go SDK DG PDF. I will only talk about some gotchas and things I encountered, not the basics of the SDK.

aws.String and other types

Something which is immediately visible once we take a look at the API is that everything is a pointer. Now, there is a tremendous amount of discussion about this, but I’m with Amazon. There are various reasons for it; to list the most prominent ones:

  • Type completion and compile-time type safety.
  • Values for AWS API calls have valid zero values, in addition to being optional, i.e. not being provided at all.
  • The other options, like empty interfaces with maps, using zero values, or struct wrappers around every type, made life much harder rather than easier, or were not possible at all.
  • The AWS API is volatile. You never know when something becomes optional, or required. Pointers made that decision easy.

There are a good number of other discussions around this topic, for example: AWS Go GitHub #363.

In order to use primitives, AWS has helper functions like aws.String. Because &"asdf" is not allowed, you would otherwise have to create a variable and use its address wherever a string pointer is needed, for example for the name of the stack. These primitive helpers make in-lining possible. We’ll see later that they are used to a great extent. Pointers, however, make life a bit difficult when constructing Input structs, and they make for poor aesthetics.
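
These helpers are trivial, which is part of their charm. A minimal sketch of what aws.String essentially does, with a simplified stand-in for one of the SDK’s pointer-heavy input structs to show the in-lining it enables:

package main

import "fmt"

// String mirrors what aws.String does in the SDK:
// it returns a pointer to a copy of the given value.
func String(v string) *string {
	return &v
}

// CreateStackInput is a simplified stand-in for an SDK input struct.
type CreateStackInput struct {
	StackName *string
}

func main() {
	// &"FurnaceStack" would not compile; the helper makes in-lining possible.
	input := CreateStackInput{StackName: String("FurnaceStack")}
	fmt.Println(*input.StackName)
}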

This is something I’m returning in a test for stubbing a client call:

		return &cloudformation.ListStackResourcesOutput{
			StackResourceSummaries: []*cloudformation.StackResourceSummary{
				{
					ResourceType:       aws.String("NoASG"),
					PhysicalResourceId: aws.String("arn::whatever"),
				},
			},
		}

This doesn’t look so appealing, but one gets used to it quickly.

Error handling

Errors also have their own types. An AWS error looks like this:

if err != nil {
    if awsErr, ok := err.(awserr.Error); ok {
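        // awsErr.Code(), awsErr.Message() and awsErr.OrigErr() can be inspected here.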
    }
}

First, we check if the error is nil; then we type-check whether the error is an AWS error or something different. In the wild, this will look something like this:

	if err != nil {
		if awsErr, ok := err.(awserr.Error); ok {
			if awsErr.Code() != codedeploy.ErrCodeDeploymentGroupAlreadyExistsException {
				log.Println(awsErr.Code())
				return err
			}
			log.Println("DeploymentGroup already exists. Nothing to do.")
			return nil
		}
		return err
	}

If it’s an AWS error, we can further check the error code it returns, in order to identify what to handle and what to throw on to the caller for a potential fatal. Here, I’m ignoring the AlreadyExistsException, because if the deployment group already exists, we just go on to the next action.

Examples

Luckily, the API doc is very mature. In most cases, they provide an example for an API call. These examples, however, from time to time provide more confusion than clarity. Take CloudFormation. When I first glanced at the description of the API, it wasn’t immediately clear to me that TemplateBody was supposed to be the whole template, and that the rest of the fields were almost all optional settings, or overrides for special cases.

And since the template is not an ordinary YAML or JSON file, I was looking for something that would parse it into the struct I was going to use. After some time and digging, I realized I didn’t need that: I just needed to read in the template, define some extra parameters, and give TemplateBody the whole of the template. The parameters defined by the CloudFormation template were extracted for me by the ValidateTemplate API call, which returned all of them in a convenient []*cloudformation.TemplateParameter slice. These things are not described in the documentation or visible from the examples. I mainly found them through playing with the API and focused experimentation.
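
Put together, the flow looks roughly like this (a sketch; the template path is illustrative and the stack name is the one used throughout this series):

package main

import (
	"io/ioutil"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	// Read the whole template; no parsing into structs needed.
	body, err := ioutil.ReadFile("cloud_formation.json") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	sess := session.New(&aws.Config{Region: aws.String("eu-central-1")})
	cfClient := cloudformation.New(sess)

	// ValidateTemplate also returns the parameters the template declares.
	validated, err := cfClient.ValidateTemplate(&cloudformation.ValidateTemplateInput{
		TemplateBody: aws.String(string(body)),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range validated.Parameters {
		log.Println("template parameter:", *p.ParameterKey)
	}

	// The template body is handed to CreateStack as-is.
	if _, err := cfClient.CreateStack(&cloudformation.CreateStackInput{
		StackName:    aws.String("FurnaceStack"),
		TemplateBody: aws.String(string(body)),
	}); err != nil {
		log.Fatal(err)
	}
}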

Waiters

From other SDK implementations, we are used to waiters: handy methods which wait for a service to become available, or for a certain state to take effect, like a stack reaching CREATE_COMPLETE. The Go waiters, however, don’t allow a callback to be fired or a block to be run, like the Ruby SDK does. So I wrote a handy little waiter for myself, which outputs a spinner so you can see that we are currently waiting for something and not frozen in time. This waiter looks like this:

// WaitForFunctionWithStatusOutput waits for a function to complete its action.
func WaitForFunctionWithStatusOutput(state string, freq int, f func()) {
	var wg sync.WaitGroup
	wg.Add(1)
	done := make(chan bool)
	go func() {
		defer wg.Done()
		f()
		done <- true
	}()
	go func() {
		counter := 0
		for {
			counter = (counter + 1) % len(Spinners[config.SPINNER])
			fmt.Printf("\r[%s] Waiting for state: %s", yellow(string(Spinners[config.SPINNER][counter])), red(state))
			time.Sleep(time.Duration(freq) * time.Second)
			select {
			case <-done:
				fmt.Println()
				return // a plain break would only exit the select, not the for loop
			default:
			}
		}
	}()

	wg.Wait()
}

And I’m calling it with the following method:

	utils.WaitForFunctionWithStatusOutput("DELETE_COMPLETE", config.WAITFREQUENCY, func() {
		cfClient.Client.WaitUntilStackDeleteComplete(describeStackInput)
	})

This would output these lines to the console:

[\] Waiting for state: DELETE_COMPLETE

The spinner can be configured to be one of the following types:

var Spinners = []string{`←↖↑↗→↘↓↙`,
	`▁▃▄▅▆▇█▇▆▅▄▃`,
	`┤┘┴└├┌┬┐`,
	`◰◳◲◱`,
	`◴◷◶◵`,
	`◐◓◑◒`,
	`⣾⣽⣻⢿⡿⣟⣯⣷`,
	`|/-\`}

Handy.

And with that, let’s dive into the basics of Furnace.

Furnace

Directory Structure and Packages

Furnace is divided into three main packages.

commands

The commands package is where the gist of Furnace lies. These represent the commands which are used through the CLI. Each file holds the implementation for one command. The structure is devised by this library: Yitsushi’s Command Library. As of the writing of this post, the following commands are available:

  • create - Creates a stack using the CloudFormation template file under ~/.config/go-furnace
  • delete - Deletes the created stack; doesn’t do anything if the stack doesn’t exist
  • push - Pushes an application to a stack
  • status - Displays information about the stack
  • delete-application - Deletes the CodeDeploy application and deployment group created by push

These commands represent the heart of Furnace. I would like to keep them to a minimum, but I do plan on adding more, like update and rollout. Further details and help messages for these commands can be obtained by running ./furnace help or ./furnace help create.

❯ ./furnace help push
Usage: furnace push appName [-s3]

Push a version of the application to a stack

Examples:
  furnace push
  furnace push appName
  furnace push appName -s3
  furnace push -s3

config

Contains the configuration loader and some project-wide defaults, which are as follows:

  • Events for the plugin system - pre-create, post-create, pre-delete, post-delete.
  • CodeDeploy role name - CodeDeployServiceRole. This is used to locate the CodeDeploy IAM role if none is provided.
  • Wait frequency - The setting which controls how long the waiter should sleep between status updates. Default is 1s.
  • Spinner - Just the number of the spinner to use.
  • Plugin registry - A map of functions to run for the above events.

Furthermore, config loads the CloudFormation template and checks that some necessary settings are present in the environment, e.g. the configuration folder under ~/.config/go-furnace.

utils

These are some helper functions which are used throughout the project. To list them:

  • error_handler - A simple error handler. I’m thinking of refactoring this one to some saner version.
  • spinner - Sets up which spinner to use in the waiter function.
  • waiter - Contains the verbose waiter introduced above under Waiters.

Configuration and Environment variables

Furnace is a Go application, so it doesn’t have the luxury of Ruby or Python, where the configuration files are usually bundled with the app. But it does have a standard for this. Configuration usually resides in one or both of two locations: environment properties, and/or configuration files under a fixed location (i.e. HOME/.config/app-name). Furnace employs both.

Settings like region, stack name and enabling the plugin system are under environment properties (though this can change), while the CloudFormation template lives under ~/.config/go-furnace/. Lastly, it assumes some things, like that the deployment IAM role simply exists under the AWS account in use. All of these are loaded and handled by the config package described above.

Usage

A typical scenario for Furnace would be the following:

  • Set up your CloudFormation template, or use the one provided. The provided one sets up a highly available and self-healing environment using Auto Scaling and Load Balancing, with a single application instance. Edit this template to your liking, then copy it to ~/.config/go-furnace.
  • Create the configured stack with ./furnace create.
  • Create will ask for the parameters defined in the template. If defaults are set up, simply hitting enter will use these defaults. Take note that the provided template sets up SSH access via a provided key; if that key is not present in CF, you won’t be able to SSH into the created instances.
  • Once the stack is completed, the application is ready to be pushed. To do this, run: ./furnace push. This will locate the appropriate version of the app from S3 or GitHub and push that version to the instances in the Auto-Scaling group. To all of them.

General Practices Applied to the Project

Commands

For each command, the main entry point is the execute function. These functions usually call out to small, focused methods. Logic in the execute functions was kept to a bare minimum (it could probably be simplified even further), mostly for testability and the like. We will see that in a follow-up post.

Errors

Errors are handled immediately, and usually through a fatal: if any error occurs, the application is halted. In follow-up versions this might become more granular, i.e. don’t immediately stop the world; maybe try to recover, or create a poller or re-tryer which tries a call again a configured number of times.

Output colors

Not that important, but still… aesthetics. Displaying data in the console in a nice way gives it some extra flair.

Makefile

This project works with a Makefile for various reasons. Once the project becomes more complex, a Makefile makes it really easy to handle different ways of packaging the application. Currently, for example, it provides a linux target which makes Go build the project for the Linux architecture on any other architecture, i.e. cross-compiling.

It also provides an easy way to run unit tests with make test, and to install with make && make install.

Closing Words

That is all for Part 2. Join me in Part 3 where I will talk about the experimental Plugin system that Furnace employs.

Thank you for reading! Gergely.

17 Mar 2017, 09:09

Testing new Hugo if posts are generated properly

Testing.

16 Mar 2017, 21:49

Furnace - The building of an AWS CLI Tool for CloudFormation and CodeDeploy - Part 1

Other posts:

Part 2, Part 3, Part 4.

Building Furnace: Part 1

Intro

Hi folks.

This is the first part of a four-part series which talks about the process of building a middle-sized project in Go with AWS, including unit testing and an experimental plugin feature.

The first part will talk about the AWS services used in brief and will contain a basic description for those who are not familiar with them. The second part will talk about the Go SDK and the project structure itself, how it can be used, improved, and how it can help in everyday life. The third part will talk about the experimental plugin system, and finally, we will tackle unit testing AWS in Go.

Let’s begin, shall we?

AWS

CloudFormation

If you haven’t yet read about, or don’t know of, AWS’ CloudFormation service, you can either go ahead and read the documentation, or read on for a very quick summary. If you are familiar with CF, you should skip ahead to the CodeDeploy section.

CF is a service which bundles together other AWS services (for example: EC2, S3, ELB, ASG, RDS) into one easily manageable stack. After a stack has been created, all the resources can be handled as one, located, tagged and used via CF-specific console commands. It’s also possible to define any number of parameters, so a stack can actually be very versatile. A parameter can be anything, from an SSH IP restriction to KeyPair names, a list of tags to create, or what region the stack will be in.

To describe how these parts fit together, one must use a CloudFormation Template file which is either in JSON or in YAML format. A simple example looks like this:

    Parameters:
      KeyName:
        Description: The EC2 Key Pair to allow SSH access to the instance
        Type: AWS::EC2::KeyPair::KeyName
    Resources:
      Ec2Instance:
        Type: AWS::EC2::Instance
        Properties:
          SecurityGroups:
          - Ref: InstanceSecurityGroup
          - MyExistingSecurityGroup
          KeyName:
            Ref: KeyName
          ImageId: ami-7a11e213
      InstanceSecurityGroup:
        Type: AWS::EC2::SecurityGroup
        Properties:
          GroupDescription: Enable SSH access via port 22
          SecurityGroupIngress:
          - IpProtocol: tcp
            FromPort: '22'
            ToPort: '22'
            CidrIp: 0.0.0.0/0

There are a myriad of these template samples here.

I’m not going to explain this in too much detail. Parameters define the parameters, and resources define all the AWS services which we would like to configure. Here we can see that we are creating an EC2 instance with a custom security group, plus an already existing security group. ImageId is the AMI which will be used for the EC2 instance. The InstanceSecurityGroup only defines some SSH access to the instance.

That is pretty much it. This can become bloated relatively quickly once VPCs, ELBs and ASGs come into play. CloudFormation templates can also contain simple logical switches, like conditions, Ref for variables, maps and other shenanigans.

For example, consider this part of the above example:

      KeyName:
        Ref: KeyName

Here, we use the KeyName parameter as a Reference Value which will be interpolated to the real value, or the default one, as the template gets processed.

CodeDeploy

If you haven’t heard about CodeDeploy yet, please browse the relevant Documentation or follow along for a “quick” description.

CodeDeploy just does what the name says: it deploys code. Any kind of code, as long as the deployment process is described in a file called appspec.yml. It can be as easy as copying a file to a specific location, or incredibly complex with builds of various kinds.

For a simple example look at this configuration:

    version: 0.0
    os: linux
    files:
      - source: /index.html
        destination: /var/www/html/
      - source: /healthy.html
        destination: /var/www/html/
    hooks:
      BeforeInstall:
        - location: scripts/install_dependencies
          timeout: 300
          runas: root
        - location: scripts/clean_up
          timeout: 300
          runas: root
        - location: scripts/start_server
          timeout: 300
          runas: root
      ApplicationStop:
        - location: scripts/stop_server
          timeout: 300
          runas: root

CodeDeploy applications have hooks and life-cycle events which can be used to control the deployment process of an application, like starting the web server, making sure files are in the right location, copying files, running configuration management software like Puppet, Ansible or Chef, etc.

What can be done in an appspec.yml file is described here: Appspec Reference Documentation.

Deployment happens in one of two ways:

GitHub

If the preferred way to deploy the application is from GitHub, a commit hash must be used to identify which “version” of the application is to be deployed. For example:

    rev = &codedeploy.RevisionLocation{
        GitHubLocation: &codedeploy.GitHubLocation{
            CommitId:   aws.String("kajdf94j0f9k309klksjdfkj"),
            Repository: aws.String("Skarlso/furnace-codedeploy-app"),
        },
        RevisionType: aws.String("GitHub"),
    }

Commit Id is the hash of the latest release and repository is the full account/repository pointing to the application.

S3

The second way is to use an S3 bucket. The bucket will contain an archived version of the application with a given extension. I say “given extension” because it has to be specified, like this (and it can be either ‘zip’, ‘tar’ or ‘tgz’):

    rev = &codedeploy.RevisionLocation{
        S3Location: &codedeploy.S3Location{
            Bucket:     aws.String("my_codedeploy_bucket"),
            BundleType: aws.String("zip"),
            Key:        aws.String("my_awesome_app"),
            Version:    aws.String("VersionId"),
        },
        RevisionType: aws.String("S3"),
    }

Here, we specify the bucket name, the extension, the name of the file and an optional version id, which can be ignored.

Deploying

So how does CodeDeploy get either of these application versions onto our EC2 instances? It uses an agent running on all of the instances we create. In order for this to work, the agent needs to be present on our instances. For Linux, this can be achieved with the following UserData (UserData in CF is the equivalent of a bootstrap script):

    "UserData" : {
        "Fn::Base64" : { "Fn::Join" : [ "\n", [
            "#!/bin/bash -v",
            "sudo yum -y update",
            "sudo yum -y install ruby wget",
            "cd /home/ec2-user/",
            "wget https://aws-codedeploy-eu-central-1.s3.amazonaws.com/latest/install",
            "chmod +x ./install",
            "sudo ./install auto",
            "sudo service codedeploy-agent start",
        ] ] }
    }

A simple UserData configuration in the CloudFormation template makes sure that every instance we create has the CodeDeploy agent running and waiting for instructions. The agent is self-updating, which can cause some trouble if AWS releases a broken agent; however unlikely, it can happen. Nevertheless, once installed, it’s no longer a concern to be bothered with.

It communicates over HTTPS on port 443.

CodeDeploy identifies the instances which need to be updated, according to our preferences, via tags on the EC2 instances and Auto Scaling groups. Tagging happens in the CloudFormation template, through the AutoScalingGroup settings, like this:

    "Tags" : [
        {
            "Key" : "fu_stage",
            "Value" : { "Ref": "AWS::StackName" },
            "PropagateAtLaunch" : true
        }
    ]

This will give the EC2 instances a tag called fu_stage, with a value equal to the name of the stack. Once this is done, the CodeDeploy configuration looks like this:

    params := &codedeploy.CreateDeploymentInput{
        ApplicationName:               aws.String(appName),
        IgnoreApplicationStopFailures: aws.Bool(true),
        DeploymentGroupName:           aws.String(appName + "DeploymentGroup"),
        Revision:                      revisionLocation(),
        TargetInstances: &codedeploy.TargetInstances{
            AutoScalingGroups: []*string{
                aws.String("AutoScalingGroupPhysicalID"),
            },
            TagFilters: []*codedeploy.EC2TagFilter{
                {
                    Key:   aws.String("fu_stage"),
                    Type:  aws.String("KEY_AND_VALUE"),
                    Value: aws.String(config.STACKNAME),
                },
            },
        },
        UpdateOutdatedInstancesOnly: aws.Bool(false),
    }

CreateDeploymentInput is the entire parameter list needed to identify the instances to deploy code to. We can see here that it looks for an AutoScalingGroup by physical ID, and for the tag labeled fu_stage. Once found, it uses UpdateOutdatedInstancesOnly to determine whether an instance needs to be updated or not. Set to false, it always updates.
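
Handing those params to a CodeDeploy client is then all that’s left. Continuing the example above, roughly (a sketch; session setup as elsewhere in this series, error handling trimmed):

    sess := session.New(&aws.Config{Region: aws.String("eu-central-1")})
    cdClient := codedeploy.New(sess)
    // Kick off the deployment described by params.
    resp, err := cdClient.CreateDeployment(params)
    if err != nil {
        log.Fatal(err)
    }
    log.Println("Deployment started with id:", *resp.DeploymentId)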

Furnace

Where does Furnace fit into all of this? Furnace provides a very easy mechanism to create and delete a CloudFormation stack and to push code to it using CodeDeploy, driven by a couple of environment properties. Furnace create will create a CloudFormation stack according to the provided template, all the while asking for the parameters defined in it, for flexibility. delete will remove the stack and all affiliated resources, except for the created CodeDeploy application; for that, there is delete-application. status will display information about the stack: outputs, parameters, ID, name and status. Something like this:

    2017/03/16 21:14:37 Stack state is:  {
      Capabilities: ["CAPABILITY_IAM"],
      CreationTime: 2017-03-16 20:09:38.036 +0000 UTC,
      DisableRollback: false,
      Outputs: [{
          Description: "URL of the website",
          OutputKey: "URL",
          OutputValue: "http://FurnaceSt-ElasticL-ID.eu-central-1.elb.amazonaws.com"
        }],
      Parameters: [
        {
          ParameterKey: "KeyName",
          ParameterValue: "UserKeyPair"
        },
        {
          ParameterKey: "SSHLocation",
          ParameterValue: "0.0.0.0/0"
        },
        {
          ParameterKey: "CodeDeployBucket",
          ParameterValue: "None"
        },
        {
          ParameterKey: "InstanceType",
          ParameterValue: "t2.nano"
        }
      ],
      StackId: "arn:aws:cloudformation:eu-central-1:9999999999999:stack/FurnaceStack/asdfadsf-adsfa3-432d-a-fdasdf",
      StackName: "FurnaceStack",
      StackStatus: "CREATE_COMPLETE"
    }

( This will later be improved to include created resources as well. )

Once the stack is CREATE_COMPLETE, a simple push will deliver our application to each instance in the stack. We will get into more detail about how these commands work in Part 2 of this series.

Final Words

This is it for now.

Join me next time when I will talk about the AWS Go SDK and its intricacies and we will start to look at the basics of Furnace.

As always, Thanks for reading! Gergely.

03 Mar 2017, 18:20

Images on older posts

Hi folks.

Just a quick heads-up that images on older posts may have been lost, unfortunately, because when I migrated over from my old blog I made the terrible mistake of forgetting to download all the images from the remote host.

For lack of options, I deleted the images. :/ Sorry for the inconvenience!

Gergely.