06 Feb 2018, 23:01

Go Budapest Meetup


So I was at the Go Budapest Meetup yesterday, where the brilliant Johan Brandhorst gave a talk about his gRPC-based project using gRPC-Web + GopherJS + protobuf. He also has some Go contributions, and you can check out his project here: Protobuf. It provides GopherJS bindings for ProtobufJS and gRPC-Web.

It was interesting to see where these projects could lead, and I see the potential in them. I liked the usage of Protobuf and gRPC; I don’t have THAT much experience with them, but after yesterday I’m eager to find an excuse to do something with these libraries. I’ve used gRPC indirectly, well, the result of it, when dealing with Google Cloud Platform’s API, which is largely code generated through gRPC and protobuf.

He also presented bi-directional stream communication between the gRPC-Web client and the server, which was an interesting feat to pull off. It involved the use of errgroup, which is nice.

I didn’t look THAT much into WebAssembly either; however, again, after yesterday, I will. He also gave a shout-out to WebAssembly developers, saying he is ready to tackle the Go bindings for WASM!

It was a good change of pace to look at some Go code being written. I’ll be sure to visit the meetup again in about three months, when the next one comes around.

Maybe, I’ll even give a talk if they are looking for speakers. ;)

A huge thank you to Emarsys Budapest for organizing the event and bringing Johan to us for his talk.


23 Jan 2018, 22:34

Ansible + Nginx + LetsEncrypt + Wiki + Nagios


Hi folks.

Today, I would like to demonstrate how to use Ansible to set up a server hosting multiple HTTPS domains with Nginx and LetsEncrypt. Are you ready? Let’s dive in.

What you will need

There is really only one thing you need in order for this to work, and that is Ansible. If you would like to run local tests without a remote server, then you will also need Vagrant and VirtualBox. But those two are optional.

What We Are Going To Set Up

The setup is as follows:


Nagios

We are going to have a Nagios instance with a custom check for pending security updates. It will run under nagios.example.com.

Hugo Website

The main web site is going to be a basic Hugo site. Hugo is a Go-based static site generator. This blog is run by it.

NoIP

We are also going to set up NoIP, which will provide the DNS for the sites.


DokuWiki

The wiki is a plain, basic DokuWiki.

HTTPS + Nginx

And all of the above will be hosted by Nginx, with HTTPS provided by LetsEncrypt. We are going to set all of this up with Ansible, so it will be idempotent.


All of the playbooks and the whole thing together can be viewed here: Github Ansible Server Setup.


I won’t be writing everything down to the basics about Ansible. For that you will need to go and read its documentation. But I will provide ample clarification for the features I’ll be using.

Some Basics

Ansible is a configuration management tool which, unlike Chef or Puppet, isn’t master-slave based. It uses SSH to run a set of instructions on a target machine. The instructions are written in YAML files and look something like this:

# tasks file for ssh
- name: Copy sshd_config
  copy: content="{{sshd_config}}" dest=/etc/ssh/sshd_config
  notify:
    - SSHD Restart

This is a basic task which copies over an sshd_config file, overwriting the one already there. It can execute in privileged mode if a root password is provided or the user has sudo rights.

It works from so-called hosts (inventory) files, where the server details are described. This is how a basic hosts file could look:

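The inventory below is only a minimal sketch: the group name webserver1 matches the group_vars used later in this post, while the host name and address are placeholders.

[webserver1]
example.com ansible_host=10.0.0.10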


Ansible will use these settings to try and access the server. To test if the connection is working, you can send a ping task like this:

ansible all -m ping
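
A healthy connection answers with a pong; the host name in the output will be whatever you put into your inventory:

example.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}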

Ansible uses variables for things that change. They are defined under each task’s subfolder called vars. Please feel free to change the variables there to your liking.

SSH Access

You can define SSH information per host, per group, or globally. In this example I have it under the group vars of the group called webserver1, like this (vars.yaml):

# SSH sudo keys and pass
ansible_become_pass: '{{vault_ansible_become_pass}}'
ansible_ssh_port: '{{vault_ansible_ssh_port}}'
ansible_ssh_user: '{{vault_ansible_ssh_user}}'
ansible_ssh_private_key_file: '{{vault_ansible_ssh_private_key_file}}'
home_dir: /root

Further reading

The Ansible documentation covers these topics in more detail and is worth reading.


The Vault

The vault is the place where we can keep secure information. This file is called vault and usually lives under either group_vars or host_vars. The preference is up to you.

This file is encrypted using a password you specify. You can have the vault password stored in the following ways:

  • Store it on a secure drive which is encrypted and only mounted when the playbook is executed
  • Store it on Keybase
  • Store it on an encrypted S3 bucket
  • Store it in a file next to the playbook which is never committed into source control

Either way, in the end, Ansible will look for a file called .vault_password when it tries to decrypt the file. You can point it to a different file in ansible.cfg using the vault_password_file option.
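
For example, a minimal ansible.cfg entry for this could look like the following; the .vault_pass file name matches the one used in the edit command further down:

[defaults]
vault_password_file = .vault_pass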

You can create a vault like this:

ansible-vault create vault

If you are following along, you are going to need these variables in the vault:

vault_ansible_become_pass: <your_sudo_password> # if applicable
vault_ansible_ssh_user: <ssh_user>
vault_ansible_ssh_private_key_file: /Users/user/.ssh/ida_rsa
vault_nagios_password: supersecurenagiosadminpassword
vault_nagios_username: nagiosadmin
vault_noip_username: youruser@gmail.com
vault_noip_password: "SuperSecureNoIPPassword"
vault_nginx_user: <localuser>

You can always edit the vault later on with:

ansible-vault edit group_vars/webserver1/vault --vault-password-file=.vault_pass


The following is a collection of tasks which execute in order. The end task, letsencrypt, relies on all the hosts being present and configured under Nginx. Otherwise it will throw an error saying that the host you are trying to configure HTTPS for isn’t defined.


NoIP

I’m choosing No-IP as a DNS provider because it’s cheap and the sync tool is easy to automate. To automate No-IP’s CLI, I’m using a package called expect. The installation looks something like this:

cd {{home_dir}}
wget http://www.no-ip.com/client/linux/noip-duc-linux.tar.gz
mkdir -p noip
tar zxf noip-duc-linux.tar.gz -C noip
cd noip/*

/usr/bin/expect <<END_SCRIPT
spawn make install
expect "Please enter the login/email*" { send "{{noip_username}}\r" }
expect "Please enter the password for user*" { send "{{noip_password}}\r" }
expect "Do you wish to have them all updated*" { send "y\r" }
expect "Please enter an update interval*" { send "30\r" }
expect "Do you wish to run something at successful update*" { send "N\r" }
expect eof
END_SCRIPT

The interesting part is the command running expect. Basically, it waits for the specific installer output outlined in the expect lines and sends canned answers back to the waiting command.

To Util or Not To Util

So, there are small tasks, like installing vim and wget and such, which could warrant the existence of a utils task. A utils task would install the packages that are used for convenience and don’t really relate to a single task.

Yet I settled for the following: each of my tasks has a dependency part. A given task takes care of all the packages it needs, so the tasks can be executed on their own as well as in unison.

This looks like this:

# Install dependencies
- name: Install dependencies
  apt: pkg="{{item}}" state=installed
  with_items:
    - "{{deps}}"

For which the deps variable is defined as follows:

# Defined dependencies for letsencrypt task.
deps: ['git', 'python-dev', 'build-essential', 'libpython-dev', 'libpython2.7', 'augeas-lenses', 'libaugeas0', 'libffi-dev', 'libssl-dev', 'python-virtualenv', 'python3-virtualenv', 'virtualenv']

This is much cleaner. And if a task is no longer needed, its dependencies will, in most cases, no longer be needed either.


Nagios

I’m using Nagios 4, which is a real pain in the butt to install. Luckily, thanks to Ansible, I only ever had to figure it out once. Now I have a script for that. Installing Nagios requires several smaller components to be installed, thus our task includes outside tasks like this:

- name: Install Nagios
  block:
    - include: create_users.yml # creates the Nagios user
    - include: install_dependencies.yml # installs Nagios dependencies
    - include: core_install.yml # Installs Nagios Core
    - include: plugin_install.yml # Installs Nagios Plugins
    - include: create_htpasswd.yml # Creates a password for Nagios' admin user
    - include: setup_custom_check.yml # Adds a custom check for pending security updates
  when: st.stat.exists == False

The when is a check on a variable registered by a file (stat) check:

- stat:
    path: /usr/local/nagios/bin/nagios
  register: st

It checks whether Nagios is already installed. If yes, the whole block is skipped.

I’m not going to paste in here all the subtasks because that would be huge. You can check those out in the repository under Nagios.


Hugo

Hugo is easy to install. Its sole requirement is Go; to install Hugo you simply run apt-get install hugo. Setting up the site for me was just checking out the git repo and then executing hugo from the root folder like this:

hugo server --bind= --port=8080 --baseUrl=https://example.com --appendPort=false --logFile hugo.log --verboseLog --verbose -v &


DokuWiki

I used DokuWiki because it’s a file-based wiki, so installation is basically just downloading the archive, extracting it, and done. The only things needed to run it are php-fpm and a few PHP modules, which I outline in the Ansible playbook.

The VHOST file for DokuWiki is provided by them and looks like this:

server {
    server_name   {{ wiki_server_name }};
    root {{ wiki_root }};
    index index.php index.html index.htm;
    client_max_body_size 2M;
    client_body_buffer_size 128k;

    location / {
        index doku.php;
        try_files $uri $uri/ @dokuwiki;
    }

    location @dokuwiki {
        rewrite ^/_media/(.*) /lib/exe/fetch.php?media=$1 last;
        rewrite ^/_detail/(.*) /lib/exe/detail.php?media=$1 last;
        rewrite ^/_export/([^/]+)/(.*) /doku.php?do=export_$1&id=$2 last;
        rewrite ^/(.*) /doku.php?id=$1 last;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }

    location ~ /(data|conf|bin|inc)/ {
        deny all;
    }
}


Nginx

Nginx is installed through apt as well. Here, however, there is a bit of magic going on with templates. The templates provide the vhost files for the three hosts we will be running. It looks as follows:

- name: Install vhosts
  template: src={{ item.src }} dest={{ item.dest }}
  with_items:
    - { src: '01_example.com.j2', dest: '/etc/nginx/vhosts/01_example.com' }
    - { src: '02_wiki.example.com.j2', dest: '/etc/nginx/vhosts/02_wiki_example.com' }
    - { src: '03_nagios.example.com.j2', dest: '/etc/nginx/vhosts/03_nagios.example.com' }
  notify:
    - Restart Nginx

Now, you might be wondering what notify is. It’s basically a handler that gets notified to restart Nginx. The great part about it is that it runs only once, even if it was notified multiple times. The handler looks like this:

- name: Restart Nginx
  service:
    name: nginx
    state: restarted

It lives under the handlers sub-folder.

With this, Nginx is done and should be providing our sites under plain HTTP.


LetsEncrypt

Now comes the part where we enable HTTPS for these three domains:

  • example.com
  • wiki.example.com
  • nagios.example.com

This is actually quite simple nowadays with certbot-auto. In fact, it will insert the configuration we need all by itself. The only things left for us are to specify what domains we have and what our challenge will be. We also have to pass in some flags for certbot-auto to run in non-interactive mode. This looks as follows:

- name: Generate Certificate for Domains
  shell: ./certbot-auto --authenticator standalone --installer nginx -d '{{ domain_example }}' -d '{{ domain_wiki }}' -d '{{ domain_nagios }}' --email example@gmail.com --agree-tos -n --no-verify-ssl --pre-hook "sudo systemctl stop nginx" --post-hook "sudo systemctl start nginx" --redirect
  args:
    chdir: /opt/letsencrypt

And that’s that. The interesting and required parts here are the pre-hook and post-hook. Without those it wouldn’t work, because the ports certbot performs the challenge on would already be taken. This stops Nginx, performs the challenge, generates the certs, and starts Nginx again. Also note --redirect: this forces HTTPS on the sites and disables plain HTTP.

If all went well, our vhost files should now contain sections like this:

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com-0001/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com-0001/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

Test Run using Vagrant

If you don’t want to run all this on a live server to test out, you can do either of these two things:

  • Use a remote dedicated test server
  • Use a local virtual machine with Vagrant

Here, I’m giving you an option for the latter.

Most of this can be tested on a local Vagrant machine. Most of the time, a Vagrant box is enough to test out installing things. A sample Vagrantfile looks like this:

# encoding: utf-8
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Box / OS
VAGRANT_BOX = 'ubuntu/xenial64'

VM_NAME = 'ansible-practice'

Vagrant.configure(2) do |config|
  # Vagrant box from Hashicorp
  config.vm.box = VAGRANT_BOX
  # Actual machine name
  config.vm.hostname = VM_NAME
  # Set VM name in Virtualbox
  config.vm.provider 'virtualbox' do |v|
    v.name = VM_NAME
    v.memory = 2048
  end
  # Ansible provision
  config.vm.provision 'ansible_local' do |ansible|
    ansible.limit = 'all'
    ansible.inventory_path = 'hosts'
    ansible.playbook = 'local.yml'
  end
end

The interesting part here is the Ansible provision section. It’s running a version of Ansible called ansible_local. It’s local because it runs only inside the VirtualBox VM, meaning you don’t have to have Ansible installed on your own machine to test the playbook on a Vagrant box. Neat, huh?

To test your playbook, simply run vagrant up and you should see the provisioning happening.

Room for improvement

And that should be all. Note that this setup isn’t quite enterprise-ready. I would add the following things:

Tests and Checks

A ton of tests and checks to see whether the commands we are using actually succeed or not. If they don’t, make them report the failure.

Multiple Domains

If you happen to have a ton of domain names to set up, this will not be the most effective way. Right now letsencrypt creates a single certificate file for those three domains passed with -d, and that’s not what you want with potentially hundreds of domains.

In that case, have a list of domains to go through with with_items, as sketched below. Note that you’ll have to restart Nginx for each of them, because you don’t want one failure to stop the process entirely; rather have a few fail while the rest still work.
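
A rough sketch of what that could look like; the domains list variable and the ignore_errors handling are assumptions on top of the current playbook, not part of it:

- name: Generate a certificate per domain
  shell: ./certbot-auto --authenticator standalone --installer nginx -d '{{ item }}' --email example@gmail.com --agree-tos -n --pre-hook "sudo systemctl stop nginx" --post-hook "sudo systemctl start nginx" --redirect
  args:
    chdir: /opt/letsencrypt
  with_items: "{{ domains }}"
  ignore_errors: yes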


That’s it, folks. Have fun setting up servers all over the place, and enjoy the power of Nginx and LetsEncrypt, not having to worry about adding another server to the bunch.

Thank you for reading,

13 Jan 2018, 22:34

Huge Furnace Update


Hi folks.

In the past couple of months I’ve been slowly updating Furnace.

There are three major changes that happened. Let’s take a look at them, shall we?

Google Cloud Platform

Furnace now supports Google Cloud Platform (GCP). It provides the same API to handle GCP resources as it does for AWS, namely: create, delete, status, update. I opted to leave out push because Google mostly works with git-based repositories, meaning a push is literally just a push; Google then handles distributing the new code by itself.

All the rest of the commands should work the same way as with AWS.

Deployment Manager

GCP has a service similar to AWS CloudFormation called Deployment Manager. The documentation is fairly detailed, with a Bookshelf example app to deploy. Code and templates can be found in their Git repository here: Deployment Manager Git Repository.

Setting up GCP

As the README of Furnace outlines…

Please carefully read and follow the instructions outlined in this document: Google Cloud Getting Started. It describes how to download and install the SDK and initialize gcloud with a Project ID.

Pay special attention to these documents:

Initializing GCloud Tools and Authorizing Tools

Furnace uses a Google key file to authenticate with your Google Cloud account and project. From here on, Furnace assumes these things are properly set up and in working order.

To initialize the client, it uses the following code:

  ctx := context.Background()
  client, err := google.DefaultClient(ctx, dm.NdevCloudmanScope)
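
As a sketch of what could follow, assuming dm is the google.golang.org/api/deploymentmanager/v2 package (the same one that provides NdevCloudmanScope above), the authenticated HTTP client is then handed to the Deployment Manager service:

  // Construct the Deployment Manager API service from the OAuth2 client.
  d, err := dm.New(client)
  if err != nil {
      log.Fatal(err)
  }
  // d.Deployments, d.Resources, etc. are now ready to use.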

DefaultClient, in turn, does the following:

// FindDefaultCredentials searches for "Application Default Credentials".
// It looks for credentials in the following places,
// preferring the first location found:
//   1. A JSON file whose path is specified by the
//      GOOGLE_APPLICATION_CREDENTIALS environment variable.
//   2. A JSON file in a location known to the gcloud command-line tool.
//      On Windows, this is %APPDATA%/gcloud/application_default_credentials.json.
//      On other systems, $HOME/.config/gcloud/application_default_credentials.json.
//   3. On Google App Engine it uses the appengine.AccessToken function.
//   4. On Google Compute Engine and Google App Engine Managed VMs, it fetches
//      credentials from the metadata server.
//      (In this final case any provided scopes are ignored.)
func FindDefaultCredentials(ctx context.Context, scope ...string) (*DefaultCredentials, error) {

Take note of the order. This is how Google will authenticate your requests.

Running GCP

Running the GCP variant is largely similar to AWS. First, you create the necessary templates for your infrastructure. This is done via the Deployment Manager and its templating engine. The GCP templates are Python or Jinja files. Examples are provided in the templates directory. It’s a bit more complicated than CloudFormation templates in that it uses outside templates plus schema files to configure dynamic details.

It’s all explained in these documents: Creating a Template Step-by-step and Creating a Basic Template.

It’s not trivial, however, and using the API can also be confusing. The Google code is just a generated Go file using gRPC, but studying it may provide valuable insight into how the API is structured. I’m also providing some basic samples that I gathered together, and the README does a bit more explaining on how to use them.

Your First Stack

Once you have everything set up, you’ll need a configuration file for Furnace. The usage is outlined in more detail under YAML Configuration below. The configuration file for GCP looks like this:

  project_name: testplatform-1234
  spinner: 1
  template_name: google_template.yaml
  stack_name: test-stack

Where project_name is the name you chose for your first billable Google Cloud Platform project. The template lives next to this yaml file, and the stack name must be DNS compliant.

Once you have a project and a template setup, it’s as simple as calling ./furnace-gcp create or ./furnace-gcp create mycustomstack.


Deleting happens with ./furnace-gcp delete or ./furnace-gcp delete mycustomstack. Luckily, as with AWS, this means that every resource created through the Deployment Manager will be deleted, leaving no need for search and cleanup.

Project Name vs. Project ID

Unlike AWS, Google requires your stack name and project id to be DNS compliant. This is most likely because all API calls and such contain that information.

Separate Binaries

In order to keep Furnace’s size down, I’m providing separate binaries for each service it supports.

The AWS binaries can be found in the aws folder and, respectively, the Google Cloud Platform ones in gcp. Both are buildable by running make.

If you would like to build both with a single command, a top-level Makefile is provided for your convenience. Just run make from the root; that will build all binaries. Later on, DigitalOcean will join the ranks.

YAML Configuration

Last but not least, Furnace now employs YAML files for configuration. However, it isn’t JUST using YAML files; it also employs a smart configuration pattern, which works as follows.

Furnace is a distributed binary which could be running from any given location at any time. Because of that, at first I opted for a global configuration directory.

Now, however, Furnace uses a configuration file named with the following pattern: .stackalias.furnace, where stackalias is the name of a custom stack you would like to create for a project. The content of this file is a single entry: the location, relative to this file, of the YAML configuration file for the given stack, for example stacks/database.yaml. This means that in the directory called stacks there will be a YAML configuration file for your database stack. The AWS config file looks like this:

  stackname: FurnaceStack
  spinner: 1
  code_deploy_role: CodeDeployServiceRole
  region: us-east-1
  enable_plugin_system: false
  template_name: cloud_formation.template
  app_name: furnace_app
    # Only needed in case S3 is used for code deployment
    code_deploy_s3_bucket: furnace_code_bucket
    # The name of the zip file in case it's on a bucket
    code_deploy_s3_key: furnace_deploy_app
    # In case a Git Repository is used for the application, define these two settings
    git_account: Skarlso/furnace-codedeploy-app
    git_revision: b89451234...

The important part is template_name. The template has to be next to this yaml file. To use this file, you simply call any of the AWS or GCP commands with an extra, optional parameter like this:

./furnace-aws create mydatabase

Note that mydatabase will translate to .mydatabase.furnace.

The intelligent part is that this file can be placed anywhere in the project folder structure, because Furnace, when looking for a config file, traverses upwards from the current execution directory until it reaches / (the root itself is not included in the search).

Consider the following directory tree:

├── docs
│   ├── (we run furnace-aws status mydatabase from here)
├── stacks
│   ├── mystack.template
│   └── mystack.yaml
└── .mydatabase.furnace

You are currently in your docs directory and would like to ask for the status of your database. You don’t have to move to the location of the settings file; just run the command from where you are. This only works if you are above the location of the file. If you were below it, Furnace would say it can’t find the file, because it only traverses upwards.

.mydatabase.furnace here contains only a single entry, stacks/mystack.yaml. And that’s it. This way, you could have multiple furnace files, for example .database.furnace, .front-end.furnace and .backend.furnace. All three would work in unison, and if one needs updating, simply run ./furnace-aws update backend. And done!

Closing words

As always, contributions are welcome in the form of issues or pull requests. If you have questions about anything, I tend to answer as soon as I can.

Always run the tests before submitting.

Thank you for reading. Gergely.

04 Dec 2017, 22:34

Commit-Build-Deploy With AWS CodeBuild and Lambda


Hi All.

Today I would like to write about an AWS finger practice.

Previously, I wrote about how I build and deploy my blog with Wercker. Since I’m a cloud engineer and I dislike Oracle and its ever-expanding tentacles into the abyss, I wanted to switch to something else.

My build and deploy cycle is simple.

Commit to Blogsource Repo -> Wercker WebHook -> Builds my blog using Hugo -> Pushes to a different repository, which is my GitHub blog.

That’s all.

It’s quite possible to reproduce this on AWS without incurring costs. Unless you publish like… a couple hundred posts a week.

I’m going to use the following services: CloudFormation, AWS Lambda, CodeBuild, S3.

To deploy the architecture described below into your account in the us-east-1 region, simply click this button: Launch Stack

BEFORE doing that though you need the following created:

Have a bucket for your lambda function. The lambda function can be found here:

Lambda Repository.

Zip up the lambda folder contents by doing this:

cd lambda
zip -r gitpusher.zip *
aws s3 cp gitpusher.zip s3://your-lambda-bucket

That’s it.

To read a description of the stack, please continue.


The architecture I’m about to lay out is simple in its use and design. I tried not to complicate things, because I think the simpler something is, the less prone to failure it will be.

In its most basic form the flow is as follows:


You push something into a repository you provide. CodeBuild has a webhook on this repository, so on each commit it starts to build the blog. The build uses a so-called buildspec.yaml file which describes how your blog should be built. Mine looks like this:

version: 0.2

phases:
  install:
    commands:
      - echo Installing required packages and Hugo
      - apt-get update
      - apt-get install -y git golang wget
      - wget -q https://github.com/gohugoio/hugo/releases/download/v0.31/hugo_0.31_Linux-64bit.deb -O /tmp/hugo.dep
      - dpkg -i /tmp/hugo.dep
  build:
    commands:
      - echo Downloading source code
      - git clone https://github.com/Skarlso/blogsource.git /opt/app
      - echo Build started on `date`
      - cd /opt/app && hugo --theme purehugo
      - echo Build completed on `date`
artifacts:
  files:
    - /opt/app/public/**/*

When it’s finished, CodeBuild uploads everything in the public folder as a zip to a bucket. The bucket has a lambda attached which triggers on a putObject event with the suffix .zip. The lambda downloads the archive, extracts it and pushes the content to another repository, which is the repository of the blog.
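
For illustration, the bucket-to-lambda wiring in a CloudFormation template could look something like the snippet below; the resource name BlogPusherFunction and the exact property layout are placeholders, the real template is in the linked repository:

BuildBucket:
  Type: AWS::S3::Bucket
  Properties:
    NotificationConfiguration:
      LambdaConfigurations:
        - Event: s3:ObjectCreated:Put
          Filter:
            S3Key:
              Rules:
                - Name: suffix
                  Value: .zip
          Function: !GetAtt BlogPusherFunction.Arn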

And done! That’s it. For an architecture overview, please read on.


Now, we are going to use a CloudFormation stack to deploy these resources. Because we aren’t animals who create them by hand, yes?

An overview of my current architecture is best shown by this image:

AWS Stack.

Let’s go over these components one by one.

Lambda Role

This is the Role which allows the Lambda to access things in your account. It needs the following service access: s3, logs, lambda; and the following permissions: logs:Create*, logs:PutLogEvents, s3:GetObject, s3:ListBucket.

Code Build Role

This is the role which allows CodeBuild to have access to services it needs. These services are the following: s3, logs, ssm, codebuild. CodeBuild also needs the following actions allowed: logs:Create*, logs:PutLogEvents, s3:GetObject, s3:PutObject, ssm:GetParameters.

Build Bucket

This is the bucket in which CodeBuild will push the generated build artifact.

Blog Pusher Function

This is the heart of the project. It contains the logic to download the zipped artifact, extract it, create a hollow repository from the extracted archive and push the changes, and just the changes, to the blog repository.

This is achieved by a short Python 3.6 script which can be found in the linked repository.
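
As a rough sketch of its first half, assuming the standard S3 put-event payload, the handler does little more than fetch and unpack the artifact; the git part, and the real, complete script, live in the linked repository:

import zipfile

import boto3

def handler(event, context):
    # Pull bucket and key out of the S3 notification that triggered us.
    record = event['Records'][0]['s3']
    bucket = record['bucket']['name']
    key = record['object']['key']
    # Download and extract the build artifact into the Lambda's temp space.
    boto3.client('s3').download_file(bucket, key, '/tmp/site.zip')
    with zipfile.ZipFile('/tmp/site.zip') as archive:
        archive.extractall('/tmp/site')
    # ...then init a repository from /tmp/site, commit, and push to the blog repo.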


The stack requires you to provide a couple of parameters, which are described in the template: bucket name, GitHub repository, git token and such. Please refer to the template for a full description of each.


I recently pushed a couple of builds to test this configuration and incurred 0.2 USD in charges. But that was something like 10-15 builds a day.


To deploy this, you can use Furnace to easily manage the template and its parameters. Once you copy the template to the target directory, simply run furnace aws create and provide the necessary parameters.


And that is all. A nice little stack which does the same as Wercker, without costs, but with the leisure of simply pushing up some change to a repository of your choosing.

I hope you enjoyed this little write up as much as I enjoyed creating it.

As always, Thanks for reading! Gergely.

06 Nov 2017, 20:34

Furnace Ikea Manual

Hi there folks.

Just a quick post, of how I went on and created an IKEA manual about Furnace.

Page 1: Page 1. Page 2: Page 2.

I drew these using Krita. I mostly used a mouse, but I also used a Wacom Bamboo drawing tablet for sketches and such.

Thanks, Gergely.

03 Sep 2017, 10:34

Furnace Binaries

Hey folks.

Quick note. Furnace now comes with pre-compiled, easy-to-access binaries which you can download and use out of the box.

No need to install anything, or compile the source. Just download, unzip and use.

Here is the website: Furnace Website.

Enjoy, Cheers, Gergely.

31 May 2017, 06:23




28 May 2017, 19:23

Replacing Eval with Object.send and a self written Parser


A while ago, I was added as a curator to a gem called JsonPath. It’s a small but very useful and brilliant gem. It had a couple of problems which I fixed, but the hardest thing to eliminate proved to be a series of evals throughout the code.

You could opt in to using eval with a constructor parameter, but generally it was considered unsafe. So projects that used the gem, like Huginn, had to opt out by default, thereby missing out on sweet parsing like this: $..book[?(@['price'] > 20)].


In order to remove eval, first I had to understand what it is actually doing. I had to take it apart.


After much digging and understanding of the code, I found that all it does is perform the given operations on the current node. If the operation evaluates to true, it selects that node; otherwise it returns false and ignores that node.

For example $..book[?(@['price'] > 20)] could be translated to:

return @_current_node['price'] > 20

Checking first if 'price' is even a key in @_current_node, of course. Once I understood this part, I set out to fix eval.

SAFE = 4

In Ruby, you can extract the part that runs eval, put it into its own proc, and set $SAFE = 4, which disables some things like system calls.

proc do
  $SAFE = 4
  # run the eval in here
end.call

SAFE levels:

  • 0: No checking of the use of externally supplied (tainted) data is performed. This is Ruby’s default mode.
  • >= 1: Ruby disallows the use of tainted data by potentially dangerous operations.
  • >= 2: Ruby prohibits the loading of program files from globally writable locations.
  • >= 3: All newly created objects are considered tainted.
  • >= 4: Ruby effectively partitions the running program in two. Non-tainted objects may not be modified. Typically, this will be used to create a sandbox: the program sets up an environment using a lower $SAFE level, then resets $SAFE to 4 to prevent subsequent changes to that environment.

This has the disadvantage that anything below 4 is just… meh. But nothing above 1 will actually work with JsonPath, so scratch that.


We could technically try to sandbox eval into its own process with a PID and whitelist the methods which are allowed to be called.

Not bad, and there are a few gems out there which try to do that, like SafeRuby. But all of these projects were abandoned years ago, for good reason.



Object.send is the best way to get some flexibility while still staying safe. You basically call a method on an object by naming said method and giving it parameters, like:

1.send(:+, 2) => 3
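
Comparison operators are methods too, which is exactly what the parser needs; for example:

20.send(:>, 10) => true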

This is a very powerful tool in our toolbox which we will exploit immensely.

So let’s get to it.

Writing a parser

Writing a parser in Ruby is a very fluid experience. It has nice tools which support that, and the one I used is StringScanner. It has the ability to track where you currently are in a string and move a pointer along with regex matches. In fact, JsonPath already employs this method when parsing a JSON expression, so reusing that logic was, in fact… elementary.

The expression

How do we get from this:

$..book[?(@['price'] < 20)]

To this:

@_current_node['price'] < 20

Well. By simple elimination. There are a couple of problems along the way of course. Because this wouldn’t be a parser if it couldn’t handle ALL the other cases…

Removing Clutter

Some of this we don’t need, like the $..book part.


The other things we don’t need are all of the '[]?() characters.
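
As a rough illustration of that stripping step, assuming the bracket expression has already been cut out of the path (the real gem walks the string with StringScanner rather than a blunt gsub):

"?(@['price'] < 20)".gsub(/[\[\]'?()@]/, '').strip
# => "price < 20"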


Once this is done, we can move to isolating the important bits.




What does an expression actually look like?

Let’s break it down.


So, this is a handful. Operations can be <=, >=, <, >, ==, != and operands can be either numbers or words, and the element accessor can be nested, since something like this is perfectly valid: $..book[?(@.written.year == 1997)].


To avoid being overwhelmed, Ruby has our back with a method called dig.


This basically lets us pass some keys, as variadic parameters, to a dig method on a hash or an array, and it will go on and access those elements in the order they were supplied, until it either returns nil or the end result.

For example:

2.3.1 :001 > a = {a: {b: 'c'}}
 => {:a=>{:b=>"c"}}
2.3.1 :002 > a.dig(:a, :b)
 => "c"

Easy. However… dig was only added in Ruby 2.3, thus I had to write my own dig for now, until I stop supporting anything below 2.3.

At first, I wanted to add it to the Hash class, but that proved to be a futile attempt if I wanted to do it nicely, thus the parser got it as a private method.

    def dig(keys, hash)
      return hash unless hash.is_a? Hash
      return nil unless hash.key?(keys.first)
      return hash.fetch(keys.first) if keys.size == 1
      prev = keys.shift
      dig(keys, hash.fetch(prev))
    end

And the corresponding regex behind getting a multitude of elements is as follows:

if t = scanner.scan(/\['\w+'\]+/)


Selecting the operator is another interesting part, as I thought it could be a single character or multiple and all sorts… until I realized that no, it can actually only be a couple of things.



Also, after a bit of fiddling, and after doing a silly case statement first:

case op
when '>'
  dig(elements, @_current_node) > operand
when '<'
  dig(elements, @_current_node) < operand
# ...and so on for every single operator
end

…I promptly saw that this is not how it should be done.

And here comes Object.send.


This gave me the opportunity to write this:

dig(elements, @_current_node).send(operator, operand)

Much better. Now I could send all the things in the way of a node.


Parsing an op be like:

elsif t = scanner.scan(/\s+[<>=][<>=]?\s+?/)


Now comes the final piece: the value we are comparing against. This could either be a simple integer, a floating point number, or a word. Hah. So coming up with a regex which fits this tightly took a little fiddling, but eventually I ended up with this:

elsif t = scanner.scan(/(\s+)?'?(\w+)?[.,]?(\w+)?'?(\s+)?/)

Without StackOverflow, I would say this is fine ((although I need to remove all those space checks, sheesh)). What are all the question marks? Basically, everything is optional, because an expression like $..book[?(@.price)] is valid, which is basically just asserting whether a given node has a price element.

Logical Operators

The last thing that remains is logical operators, which, if you are using eval, is pretty straightforward. It takes care of anything that you might add in, like &&, ||, |, &, ^, etc.

Now, that’s something I did with a case statement though, until I find a nicer solution. Since we can already parse a single expression, it’s just a question of breaking down a compound expression such as the following one: $..book[?(@['price'] > 20 && @.written.year == 1998)].

exps = exp.split(/(&&)|(\|\|)/)

This splits up the string by either && or ||, and the usage of groups () also includes the operators in the result. Then I evaluate the expressions and save the whole thing in an array like [true, '&&', false]. You know what could immediately resolve this? Yep…


I’d rather just parse it, although technically an eval at this stage wouldn’t be that big of a problem…

def parse(exp)
  exps = exp.split(/(&&)|(\|\|)/)
  ret = parse_exp(exps.shift)
  exps.each_with_index do |item, index|
    case item
    when '&&'
      ret &&= parse_exp(exps[index + 1])
    when '||'
      ret ||= parse_exp(exps[index + 1])
    end
  end
  ret
end

Closing words

That’s it, folks. The parser is done, and there is no eval being used. There are some more things here that are interesting. For example, array indexing like [(@.length-1)] is allowed in jsonpath, which is solved by sending .length to the current node:

if scanner.scan(/\./)
  sym = scanner.scan(/\w+/)
  op = scanner.scan(/./)
  num = scanner.scan(/\d+/)
  return @_current_node.send(sym.to_sym).send(op.to_sym, num.to_i)
end

This runs if an expression begins with a .. So you see, using send helps a lot, and understanding what eval was actually evaluating and then writing your own parser isn’t that hard at all with Ruby.

I hope you enjoyed reading this little tidbit as much as I enjoyed writing and drawing it. Leave a comment if you liked the drawings, or if you did not and I should never do them again ((I don’t really care, this is my blog haha.)). Note to self: I shouldn’t draw on the other side of the paper because of bleed-through.

Thank you! Gergely.

16 Apr 2017, 09:23

Furnace - The building of an AWS CLI Tool for CloudFormation and CodeDeploy - Part 4


Hi folks.

Previously on this blog: Part 1. Part 2. Part 3.

In this part we are going to talk about Unit Testing Furnace and how to work some magic with AWS and Go.

Mock Stub Fake Dummy Canned

Unit testing in Go usually follows the Dependency Injection model of dealing with Mocks and Stubs.

DI

Dependency Injection, in short, is one object supplying the dependencies of another object. In a longer description, it’s ideal for removing the lock-in on a third party library, like the AWS client. Imagine having code which solely depends on the AWS client. How would you unit test that code without ACTUALLY connecting to AWS? You couldn’t. Every time you try to test the code, it would run the live code, connect to AWS and perform the operations it’s designed to do. The Ruby library, with its metaprogramming, allows you to set the client globally to stub responses, but, alas, this is not the world of Ruby.

Here is where DI comes to the rescue. If you have control over the AWS client at a very high level, and pass it around as a function parameter, or create that client in an init() function and have it globally defined, you are able to implement your own client and have your code use that one with the stubbed responses your tests need. For example, you might want a CreateApplication call to fail, or a DescribeStack call which returns an aws.Error(“StackAlreadyExists”).

For this, however, you need the API of the AWS client, which is provided by AWS.

AWS Client API

In order for DI to work, the injected object needs to be of a certain type for us to swap in our own. Luckily, AWS provides an interface for each of its clients, meaning we can implement our own version of all of the clients, like S3, CloudFormation, CodeDeploy, etc.

For each client you want to mock out, an *iface package should be present like this:

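For CloudFormation, for example, that is the cloudformationiface package:

import "github.com/aws/aws-sdk-go/service/cloudformation/cloudformationiface"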

In this package you find and use the interface like this:

type fakeCloudFormationClient struct {
	cloudformationiface.CloudFormationAPI
	err error
}

And with this, we have our own CloudFormation client. The real code uses the real clients as function parameters, like this:

// Execute defines what this command does.
func (c *Create) Execute(opts *commander.CommandHelper) {
	log.Println("Creating cloud formation session.")
	sess := session.New(&aws.Config{Region: aws.String(config.REGION)})
	cfClient := cloudformation.New(sess, nil)
	client := CFClient{cfClient}
	createExecute(opts, &client)
}

We can’t test Execute itself, as it uses the real client here (though you could have a global client from some library, which would allow you to test even Execute), but there is very little logic in this function for this very reason. All the logic lives in small functions, for which the main starting point, and our testing opportunity, is createExecute.

Stubbing Calls

Now that we have our own client, and with the power of Go’s interface embedding as seen above with CloudFormationAPI, we only have to stub the functions which we are actually using, instead of every function of the given interface. That looks like this:

	cfClient := new(CFClient)
	cfClient.Client = &fakeCloudFormationClient{err: nil}

Where cfClient is a struct like this:

// CFClient abstraction for cloudFormation client.
type CFClient struct {
	Client cloudformationiface.CloudFormationAPI
}

And a stubbed call can then be written as follows:

func (fc *fakeCreateCFClient) WaitUntilStackCreateComplete(input *cloudformation.DescribeStacksInput) error {
	return nil
}

This can range from a very trivial example, like the one above, to intricate ones as well, like this gem:

func (fc *fakePushCFClient) ListStackResources(input *cloudformation.ListStackResourcesInput) (*cloudformation.ListStackResourcesOutput, error) {
	if "NoASG" == *input.StackName {
		return &cloudformation.ListStackResourcesOutput{
			StackResourceSummaries: []*cloudformation.StackResourceSummary{
				{
					ResourceType:       aws.String("NoASG"),
					PhysicalResourceId: aws.String("arn::whatever"),
				},
			},
		}, fc.err
	}
	return &cloudformation.ListStackResourcesOutput{
		StackResourceSummaries: []*cloudformation.StackResourceSummary{
			{
				ResourceType:       aws.String("AWS::AutoScaling::AutoScalingGroup"),
				PhysicalResourceId: aws.String("arn::whatever"),
			},
		},
	}, fc.err
}

This ListStackResources stub lets us test two scenarios based on the stack name. If the test stack name is ‘NoASG’, it returns a result which contains no AutoScaling Group. Otherwise, it returns the correct ResourceType for an ASG.

It is a common practice to line up several scenario based stubbed responses in order to test the robustness of your code.

Unfortunately, this also means that your tests will be a bit cluttered with stubs, mock structs and whatnot. For that, I’m partially using a package-wide struct file in which I define most of the mock structs, at least. From there on, the tests only contain the specific stubs for that particular file. This can be further fine-grained by having defaults and then only overriding them in case you need something else.

Testing fatals

Now, the other point, which is not really AWS related but still comes to mind when dealing with Furnace, is testing error scenarios.

Because Furnace is a CLI application, it uses fatals to signal if something is wrong and it doesn’t want to continue or recover, because, frankly, it can’t. If AWS throws an error, that’s it. You can retry, but in 90% of the cases it’s usually something that you messed up.

So, how do we test for a fatal or an os.Exit? There are a number of takes on that if you do a quick search. You may end up at this talk: GoTalk 2014 Testing Slide #23, which does an interesting thing: it calls the test binary in a separate process and tests the exit code.

Others, me included, will say that you should have your own logger implemented and use a different logger / os.Exit in your test environment.

Yet others will tell you not to have tests around os.Exit and fatal things at all; rather return an error and let only main pop a world-ending event. I leave it up to you which one you want to use. Either is fine.

In Furnace, I’m using a global logger in my error handling util like this:

// HandleFatal handles fatal errors in Furnace.
func HandleFatal(s string, err error) {
	LogFatalf(s, err)
}

And LogFatalf is an exported variable: var LogFatalf = log.Fatalf. Then, in a test, I just override this variable with a local anonymous function:

func TestCreateExecuteEmptyStack(t *testing.T) {
	failed := false
	utils.LogFatalf = func(s string, a ...interface{}) {
		failed = true
	}
	client := new(CFClient)
	stackname := "EmptyStack"
	client.Client = &fakeCreateCFClient{err: nil, stackname: stackname}
	opts := &commander.CommandHelper{}
	createExecute(opts, client)
	if !failed {
		t.Error("expected outcome to fail during create")
	}
}

It can get even more granular by testing for the error message, to make sure that the code actually fails at the point we think we are testing:

func TestCreateStackReturnsWithError(t *testing.T) {
	failed := false
	expectedMessage := "failed to create stack"
	var message string
	utils.LogFatalf = func(s string, a ...interface{}) {
		failed = true
		if err, ok := a[0].(error); ok {
			message = err.Error()
		}
	}
	client := new(CFClient)
	stackname := "NotEmptyStack"
	client.Client = &fakeCreateCFClient{err: errors.New(expectedMessage), stackname: stackname}
	config := []byte("{}")
	create(stackname, config, client)
	if !failed {
		t.Error("expected outcome to fail")
	}
	if message != expectedMessage {
		t.Errorf("message did not equal expected message of '%s', was:%s", expectedMessage, message)
	}
}


This is it. That’s all it took to write Furnace. I hope you enjoyed reading it as much as I enjoyed writing all these thoughts down.

I hope somebody might learn from my journey and also improve upon it.

Any comments are much appreciated and welcomed. Also, PRs and Issues can be submitted on the GitHub page of Furnace.

Thank you for reading! Gergely.

22 Mar 2017, 12:03

Furnace - The building of an AWS CLI Tool for CloudFormation and CodeDeploy - Part 3


Hi folks.

Previously on this blog: Part 1. Part 2. Part 4.

In this part, I’m going to talk about the experimental plugin system of Furnace.

Go Experimental Plugins

Since Go 1.8 was released, an exciting new feature has been available: a plug-in system. This system works with dynamic libraries built with a special switch to go build. These libraries, .so or .dylib (later), are then loaded and, once that succeeds, specific functions can be called from them (symbol resolution).

We will see how this works. For package information, visit the plugin package’s Go doc page here.

Furnace Plugins

So, what does Furnace use plugins for? Furnace uses plugins to execute arbitrary code at, currently, four given locations / events.

These are: pre_create, post_create, pre_delete, post_delete. These events are called, as their names suggest, before and after the creation and deletion of the CloudFormation stack. They allow the user to execute some code without having to rebuild the whole project. Furnace does this by defining a single entry point for the custom code, called RunPlugin. Any number of functions can be implemented, but the plugin MUST provide this single, exported function; otherwise it will fail and ignore that plugin.

Using Plugins

It’s really easy to implement and use these plugins. I’m not going into the detail of how they are loaded, because that is done by Furnace, but only how to write and use them.

To use a plugin, create a Go file called 0001_mailer.go. The 0001 prefix defines WHEN it’s executed. Having multiple plugins is completely okay; the order of execution, however, depends on the names of the files.

Now, in 0001_mailer.post_create we would have something like this:

package main

import "log"

// RunPlugin runs the plugin.
func RunPlugin() {
	log.Println("My Awesome Pre Create Plugin.")
}

The next step is to build this file as a plugin library. Note: right now, this only works on Linux!

To build this file run the following:

go build -buildmode=plugin -o 0001_mailer.pre_create 0001_mailer.go

The important part here is the extension of the output file specified with -o. It’s important because that’s how Furnace identifies which plugins it has to run and when.

Finally, copy this file to ~/.config/go-furnace/plugins and you are all set.

Slack notification Plugin

To demonstrate how a plugin could be used: say you need some kind of notification once a stack is complete. For example, you might want to send a message to a Slack room. To do this, your plugin would look something like this:

package main

import (
	"fmt"
	"os"

	// assuming the nlopes/slack client, which matches the calls below
	"github.com/nlopes/slack"
)

func RunPlugin() {
	stackname := os.Getenv("FURNACE_STACKNAME")
	api := slack.New("YOUR_TOKEN_HERE")
	params := slack.PostMessageParameters{}
	channelID, timestamp, err := api.PostMessage("#general", fmt.Sprintf("Stack with name '%s' is Done.", stackname), params)
	if err != nil {
		fmt.Printf("%s\n", err)
		return
	}
	fmt.Printf("Message successfully sent to channel %s at %s", channelID, timestamp)
}

Currently, Furnace has no way to share information about the stack with an outside plugin, thus ‘Done’ could be anything from Rollback to Failed to CreateComplete.

Closing Words

That’s it for plugins. Thanks very much for reading! Gergely.