22 Mar 2017, 12:03

Furnace - The building of an AWS CLI Tool for CloudFormation and CodeDeploy - Part 3

Intro

Hi folks.

Previously on this blog: Part 1. Part 2. Part 4.

In this part, I’m going to talk about the experimental plugin system of Furnace.

Go Experimental Plugins

With the release of Go 1.8, an exciting new feature was introduced: a plugin system. This system works with dynamic libraries built with a special switch to go build. These libraries, .so (or .dylib later on), are then loaded, and once that succeeds, specific functions can be called from them (symbol resolution).

We will see how this works. For package information, visit the plugin packages Go doc page here.
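
To give a feel for the mechanics before getting to Furnace itself, here is a minimal sketch of loading a plugin with the standard library; the file name and symbol name are simply the ones used later in this post:

package main

import (
	"log"
	"plugin"
)

func main() {
	// Open a library previously built with -buildmode=plugin.
	p, err := plugin.Open("0001_mailer.pre_create")
	if err != nil {
		log.Fatal(err)
	}
	// Resolve the exported symbol by name.
	sym, err := p.Lookup("RunPlugin")
	if err != nil {
		log.Fatal(err)
	}
	// Assert the symbol to the expected function type and call it.
	if run, ok := sym.(func()); ok {
		run()
	}
}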

Furnace Plugins

So, what does Furnace use plugins for? Furnace uses plugins to execute arbitrary code at, currently, four given locations / events.

These are: pre_create, post_create, pre_delete, post_delete. These events are called, as their names suggest, before and after the creation and deletion of the CloudFormation stack. This allows the user to execute some code without having to rebuild the whole project. It does that by defining a single entry point for the custom code, called RunPlugin. Any number of functions can be implemented, but the plugin MUST provide this single, exported function; otherwise Furnace will ignore that plugin.

Using Plugins

It’s really easy to implement and use these plugins. I’m not going into the details of how they are loaded, because that is done by Furnace, but only how to write and use them.

To use a plugin, create a Go file called, for example, 0001_mailer.go. The 0001 prefix defines WHEN it is executed. Having multiple plugins is completely okay; execution order, however, depends on the names of the files.

Now, in 0001_mailer.go we would have something like this:

package main

import "log"

// RunPlugin runs the plugin.
func RunPlugin() {
	log.Println("My Awesome Pre Create Plugin.")
}

The next step is to build this file as a plugin library. Note: right now, this only works on Linux!

To build this file run the following:

go build -buildmode=plugin -o 0001_mailer.pre_create 0001_mailer.go

The important part here is the extension of the file specified with -o. It’s important because that’s how Furnace identifies what plugins it has to run.
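
Furnace’s actual loader is not shown here, but a rough sketch of what extension-based discovery could look like follows; the plugin directory and helper name are assumptions for illustration only:

package main

import (
	"log"
	"os"
	"path/filepath"
	"plugin"
	"sort"
)

// runPluginsFor gathers every plugin file for one event, sorts the file names
// so 0001_ runs before 0002_, and calls each plugin's exported RunPlugin.
func runPluginsFor(event string) {
	dir := filepath.Join(os.Getenv("HOME"), ".config", "go-furnace", "plugins")
	files, _ := filepath.Glob(filepath.Join(dir, "*."+event))
	sort.Strings(files)
	for _, file := range files {
		p, err := plugin.Open(file)
		if err != nil {
			log.Println("ignoring plugin:", file, err)
			continue
		}
		sym, err := p.Lookup("RunPlugin")
		if err != nil {
			log.Println("ignoring plugin without RunPlugin:", file)
			continue
		}
		if run, ok := sym.(func()); ok {
			run()
		}
	}
}

func main() {
	runPluginsFor("pre_create")
}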

Finally, copy this file to ~/.config/go-furnace/plugins and you are all set.

Slack notification Plugin

To demonstrate how a plugin could be used, suppose you need some kind of notification once a stack is completed. For example, you might want to send a message to a Slack room. To do this, your plugin would look something like this:

package main

import (
	"fmt"
	"os"

	"github.com/nlopes/slack"
)

func RunPlugin() {
	stackname := os.Getenv("FURNACE_STACKNAME")
	api := slack.New("YOUR_TOKEN_HERE")
	params := slack.PostMessageParameters{}
	channelID, timestamp, err := api.PostMessage("#general", fmt.Sprintf("Stack with name '%s' is Done.", stackname), params)
	if err != nil {
		fmt.Printf("%s\n", err)
		return
	}
	fmt.Printf("Message successfully sent to channel %s at %s", channelID, timestamp)
}

Currently, Furnace has no way of sharing stack information with an outside plugin. Thus, ‘Done’ could be anything from Rollback to Failed to CreateComplete.

Closing Words

That’s it for plugins. Thanks very much for reading! Gergely.

19 Mar 2017, 12:03

Furnace - The building of an AWS CLI Tool for CloudFormation and CodeDeploy - Part 2

Intro

Hi folks.

Previously on this blog: Part 1, Part 3, Part 4

In this part, I’m going to talk about the AWS Go SDK and begin to dissect the intricacies of Furnace.

AWS SDK

Fortunately, the Go SDK for AWS is quite verbose and littered with examples of all sorts. But that doesn’t make it less complex or less cryptic at times. I’m here to clear up some of the early confusion, in the hope that I can help someone avoid wasting time.

Getting Started and Developers Guide

As always, and as is common with AWS, the documentation is top notch. There is a 141-page developer’s guide on the SDK containing a getting started section and an API reference. Go check it out. I’ll wait. AWS Go SDK DG PDF. I will only talk about some gotchas and things I encountered, not the basics of the SDK.

aws.String and other types

Something which is immediately visible once we take a look at the API is that everything is a pointer. Now, there is a tremendous amount of discussion about this, but I’m with Amazon. There are various reasons for it; to list the most prominent ones:

  • Type completion and compile-time type safety.
  • Values for AWS API calls have valid zero values, in addition to being optional, i.e. not being provided at all.
  • Other options, like empty interfaces with maps, zero values, or struct wrappers around every type, made life much harder rather than easier, or were not possible at all.
  • The AWS API is volatile. You never know when something becomes optional, or required. Pointers make that decision easy.

There are a good number of other discussions around this topic, for example: AWS Go GitHub #363.

In order to use primitives, AWS has helper functions like aws.String. Because &"asdf" is not allowed, you would otherwise have to create a variable and use its address in situations where a string pointer is needed, for example the name of the stack. These primitive helpers make in-lining possible. We’ll see later that they are used to a great extent. Pointers, however, make life a bit difficult when constructing Input structs and make for poor aesthetics.

This is something I’m returning in a test for stubbing a client call:

		return &cloudformation.ListStackResourcesOutput{
			StackResourceSummaries: []*cloudformation.StackResourceSummary{
				{
					ResourceType:       aws.String("NoASG"),
					PhysicalResourceId: aws.String("arn::whatever"),
				},
			},
		}

This doesn’t look so appealing, but one gets used to it quickly.
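
As for the helpers themselves, they are essentially one-liners that return the address of their argument, which is what makes the in-lining above possible (a sketch of the idea, not the SDK’s exact source):

// String mirrors what aws.String does: take a value, hand back its address.
func String(v string) *string {
	return &v
}

// Which is what allows a literal to be passed where the SDK wants a *string:
// input := &cloudformation.CreateStackInput{StackName: aws.String("FurnaceStack")}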

Error handling

Errors also have their own types. An AWS error looks like this:

if err != nil {
    if awsErr, ok := err.(awserr.Error); ok {
    }
}

First, we check if the error is nil, then we type-check whether the error is an AWS error or something else. In the wild, this will look something like this:

	if err != nil {
		if awsErr, ok := err.(awserr.Error); ok {
			if awsErr.Code() != codedeploy.ErrCodeDeploymentGroupAlreadyExistsException {
				log.Println(awsErr.Code())
				return err
			}
			log.Println("DeploymentGroup already exists. Nothing to do.")
			return nil
		}
		return err
	}

If it’s an AWS error, we can check further for the error code it returns, in order to identify what to handle and what to throw on to the caller as a potential fatal. Here, I’m ignoring the AlreadyExistsException because, if the deployment group already exists, we just go on to the next action.

Examples

Luckily the API doc is very mature. In most cases, they provide an example for an API call. These examples, however, from time to time provide more confusion than clarity. Take CloudFormation. When I first glanced at the description of the API, it wasn’t immediately clear that TemplateBody was supposed to be the whole template, and that the rest of the fields were almost all optional settings, or overrides for special cases.

And since the template is not an ordinary YAML or JSON file, I was looking for something that parses it into the struct I was going to use. After some time and digging, I realized that I didn’t need that; I just needed to read in the template, define some extra parameters, and give TemplateBody the whole of the template. The parameters defined by the CloudFormation template were extracted for me by the ValidateTemplate API call, which returned all of them in a convenient []*cloudformation.Parameter slice. These things are not described in the documentation or visible from the examples. I mainly found them by playing with the API and focused experimentation.
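
To sketch that flow with the SDK (the template file name and stack name below are made up, and error handling is trimmed to the essentials):

package main

import (
	"io/ioutil"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	svc := cloudformation.New(session.Must(session.NewSession()))

	body, err := ioutil.ReadFile("cloud_formation.template") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}

	// ValidateTemplate hands back every parameter the template declares.
	validated, err := svc.ValidateTemplate(&cloudformation.ValidateTemplateInput{
		TemplateBody: aws.String(string(body)),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Turn them into stack parameters; here we just fall back to the defaults,
	// whereas Furnace asks the user for each value.
	stackParams := make([]*cloudformation.Parameter, 0, len(validated.Parameters))
	for _, p := range validated.Parameters {
		stackParams = append(stackParams, &cloudformation.Parameter{
			ParameterKey:   p.ParameterKey,
			ParameterValue: p.DefaultValue,
		})
	}

	// The whole template body goes into CreateStack as-is.
	_, err = svc.CreateStack(&cloudformation.CreateStackInput{
		StackName:    aws.String("FurnaceStack"),
		TemplateBody: aws.String(string(body)),
		Parameters:   stackParams,
	})
	if err != nil {
		log.Fatal(err)
	}
}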

Waiters

From other SDK implementations, we got used to waiters. These handy methods wait for a service to become available or for a certain state to take effect, like a stack reaching CREATE_COMPLETE. The Go waiters, however, don’t allow a callback to be fired, or blocks to be run, like the Ruby SDK does. For this, I wrote a handy little waiter for myself, which outputs a spinner so we can see that we are currently waiting for something and not frozen in time. This waiter looks like this:

// WaitForFunctionWithStatusOutput waits for a function to complete its action.
func WaitForFunctionWithStatusOutput(state string, freq int, f func()) {
	var wg sync.WaitGroup
	wg.Add(1)
	done := make(chan bool)
	go func() {
		defer wg.Done()
		f()
		done <- true
	}()
	go func() {
		counter := 0
		for {
			counter = (counter + 1) % len(Spinners[config.SPINNER])
			fmt.Printf("\r[%s] Waiting for state: %s", yellow(string(Spinners[config.SPINNER][counter])), red(state))
			time.Sleep(time.Duration(freq) * time.Second)
			select {
			case <-done:
				fmt.Println()
				return
			default:
			}
		}
	}()

	wg.Wait()
}

And I’m calling it with the following method:

	utils.WaitForFunctionWithStatusOutput("DELETE_COMPLETE", config.WAITFREQUENCY, func() {
		cfClient.Client.WaitUntilStackDeleteComplete(describeStackInput)
	})

This would output these lines to the console:

[\] Waiting for state: DELETE_COMPLETE

The spinner can be configured to be one of the following types:

var Spinners = []string{`←↖↑↗→↘↓↙`,
	`▁▃▄▅▆▇█▇▆▅▄▃`,
	`┤┘┴└├┌┬┐`,
	`◰◳◲◱`,
	`◴◷◶◵`,
	`◐◓◑◒`,
	`⣾⣽⣻⢿⡿⣟⣯⣷`,
	`|/-\`}

Handy.

And with that, let’s dive into the basics of Furnace.

Furnace

Directory Structure and Packages

Furnace is divided into three main packages.

commands

The commands package is where the gist of Furnace lies. It contains the commands which are used through the CLI; each file has the implementation for one command. The structure is devised by this library: Yitsushi’s Command Library. As of the writing of this post, the following commands are available:

  • create - Creates a stack using the CloudFormation template file under ~/.config/go-furnace
  • delete - Deletes the created stack. Doesn’t do anything if the stack doesn’t exist
  • push - Pushes an application to a stack
  • status - Displays information about the stack
  • delete-application - Deletes the CodeDeploy application and deployment group created by push

These commands represent the heart of Furnace. I would like to keep these to a minimum, but I do plan on adding more, like update and rollout. Further details and help messages for these commands can be obtained by running ./furnace help or ./furnace help create.

❯ ./furnace help push
Usage: furnace push appName [-s3]

Push a version of the application to a stack

Examples:
  furnace push
  furnace push appName
  furnace push appName -s3
  furnace push -s3

config

Contains the configuration loader and some project-wide defaults, which are as follows:

  • Events for the plugin system - pre-create, post-create, pre-delete, post-delete.
  • CodeDeploy role name - CodeDeployServiceRole. This is used if none is provided to locate the CodeDeploy IAM role.
  • Wait frequency - The setting which controls how long the waiter should sleep in between status updates. Default is 1s.
  • Spinner - Just the number of the spinner to use.
  • Plugin registry - A map of functions to run for the above events.

Furthermore, config loads the CloudFormation template and checks if some necessary settings are present in the environment, e.g. the configuration folder under ~/.config/go-furnace.

utils

These are some helper functions which are used throughout the project. To list them:

  • error_handler - A simple error handler. I’m thinking of refactoring this one to some saner version.
  • spinner - Sets up which spinner to use in the waiter function.
  • waiter - Contains the verbose waiter introduced above under Waiters.

Configuration and Environment variables

Furnace is a Go application, thus it doesn’t have the luxury of Ruby or Python where configuration files are usually bundled with the app. But it does have a standard for this. Usually, configuration resides in one or both of these two locations: environment properties and/or configuration files under a fixed location (i.e. HOME/.config/app-name). Furnace employs both.

Settings like region, stack name and enabling the plugin system are environment properties (though this can change), while the CloudFormation template lives under ~/.config/go-furnace/. Lastly, it assumes some things, like the deployment IAM role already existing under the AWS account in use. All these are loaded and handled by the config package described above.
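
A rough sketch of how such environment-backed settings can be read follows; the variable names and defaults here are illustrative and not necessarily the exact keys Furnace uses:

package config

import "os"

// getenvOrDefault falls back to a default when a variable isn't set.
func getenvOrDefault(key, def string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return def
}

var (
	// Illustrative keys only.
	STACKNAME = getenvOrDefault("FURNACE_STACKNAME", "FurnaceStack")
	REGION    = getenvOrDefault("FURNACE_REGION", "eu-central-1")
)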

Usage

A typical scenario for Furnace would be the following:

  • Set up your CloudFormation template or use the one provided. The one provided sets up a highly available and self-healing setting using auto-scaling and load-balancing with a single application instance. Edit this template to your liking, then copy it to ~/.config/go-furnace.
  • Create the configured stack with ./furnace create.
  • Create will ask for the parameters defined in the template. If defaults are set up, simply hitting enter will use those defaults. Take note that the provided template sets up SSH access via a provided key. If that key is not present in CF, you won’t be able to SSH into the created instance.
  • Once the stack is completed, the application is ready to be pushed. To do this, run: ./furnace push. This will locate the appropriate version of the app from S3 or GitHub and push that version to the instances in the Auto-Scaling group. To all of them.

General Practices Applied to the Project

Commands

For each command, the main entry point is the execute function. These functions usually call out to small, distributed helper methods. Logic was kept to a bare minimum in the execute functions (it could probably be simplified even further), mostly for testability and the like. We will see that in a follow-up post.

Errors

Errors are handled immediately and usually through a fatal. If any error occurs, the application is halted. In follow-up versions this might become more granular, i.e. don’t immediately stop the world; maybe try to recover, or create a poller or retrier, which tries a call again a configured number of times.
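
Such a retrier could be a small helper that simply retries a call a configured number of times before giving up; a minimal sketch, not something Furnace currently ships:

package utils

import "time"

// Retry calls f up to attempts times, sleeping in between tries,
// and returns the last error if every attempt fails.
func Retry(attempts int, sleep time.Duration, f func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		time.Sleep(sleep)
	}
	return err
}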

Output colors

Not that important, but still… aesthetics. Displaying data on the console in a nice way gives it some extra flair.

Makefile

This project works with a Makefile for various reasons. Later on, should the project become more complex, a Makefile makes it really easy to handle different ways of packaging the application. Currently, for example, it provides a linux target which makes Go build the project for the Linux architecture on any other architecture, i.e. cross-compiling.

It also provides an easy way to run unit tests with make test and installing with make && make install.

Closing Words

That is all for Part 2. Join me in Part 3 where I will talk about the experimental Plugin system that Furnace employs.

Thank you for reading! Gergely.

17 Mar 2017, 09:09

Testing new Hugo if posts are generated properly

Testing.

16 Mar 2017, 21:49

Furnace - The building of an AWS CLI Tool for CloudFormation and CodeDeploy - Part 1

Other posts:

Part 2, Part 3, Part 4.

Building Furnace: Part 1

Intro

Hi folks.

This is the first part of a 4-part series which talks about the process of building a middle-sized project in Go with AWS, including unit testing and an experimental plugin feature.

The first part will talk about the AWS services used in brief and will contain a basic description for those who are not familiar with them. The second part will talk about the Go SDK and the project structure itself, how it can be used, improved, and how it can help in everyday life. The third part will talk about the experimental plugin system, and finally, we will tackle unit testing AWS in Go.

Let’s begin, shall we?

AWS

CloudFormation

If you haven’t yet read about, or don’t know of, AWS’ CloudFormation service, you can either go ahead and read the documentation or read on for a very quick summary. If you are familiar with CF, you should skip ahead to the CodeDeploy section.

CF is a service which bundles together other AWS services (for example: EC2, S3, ELB, ASG, RDS) into one easily manageable stack. After a stack has been created, all the resources can be handled as one, located, tagged and used via CF-specific console commands. It’s also possible to define any number of parameters, so a stack can actually be very versatile. A parameter can be anything from an SSH IP restriction to a KeyPair name, a list of tags to create, or the region the stack will be in.

To describe how these parts fit together, one must use a CloudFormation Template file which is either in JSON or in YAML format. A simple example looks like this:

    Parameters:
      KeyName:
        Description: The EC2 Key Pair to allow SSH access to the instance
        Type: AWS::EC2::KeyPair::KeyName
    Resources:
      Ec2Instance:
        Type: AWS::EC2::Instance
        Properties:
          SecurityGroups:
          - Ref: InstanceSecurityGroup
          - MyExistingSecurityGroup
          KeyName:
            Ref: KeyName
          ImageId: ami-7a11e213
      InstanceSecurityGroup:
        Type: AWS::EC2::SecurityGroup
        Properties:
          GroupDescription: Enable SSH access via port 22
          SecurityGroupIngress:
          - IpProtocol: tcp
            FromPort: '22'
            ToPort: '22'
            CidrIp: 0.0.0.0/0

There are a myriad of these template samples here.

I’m not going to explain this in too much detail. Parameters define the parameters, and resources define all the AWS services which we would like to configure. Here we can see that we are creating an EC2 instance with a custom security group plus an already existing security group. ImageId is the AMI which will be used for the EC2 instance. The InstanceSecurityGroup only defines some SSH access to the instance.

That is pretty much it. This can become bloated relatively quickly once VPCs, ELBs and ASGs come into play. CloudFormation templates can also contain simple logical switches, like conditions, Refs for variables, maps and other shenanigans.

For example consider this part in the above example:

      KeyName:
        Ref: KeyName

Here, we use the KeyName parameter as a Reference Value which will be interpolated to the real value, or the default one, as the template gets processed.

CodeDeploy

If you haven’t heard about CodeDeploy yet, please browse the relevant Documentation or follow along for a “quick” description.

CodeDeploy just does what the name says: it deploys code. Any kind of code, as long as the deployment process is described in a file called appspec.yml. It can be as easy as copying a file to a specific location or incredibly complex with builds of various kinds.

For a simple example look at this configuration:

    version: 0.0
    os: linux
    files:
      - source: /index.html
        destination: /var/www/html/
      - source: /healthy.html
        destination: /var/www/html/
    hooks:
      BeforeInstall:
        - location: scripts/install_dependencies
          timeout: 300
          runas: root
        - location: scripts/clean_up
          timeout: 300
          runas: root
        - location: scripts/start_server
          timeout: 300
          runas: root
      ApplicationStop:
        - location: scripts/stop_server
          timeout: 300
          runas: root

CodeDeploy applications have hooks and life-cycle events which can be used to control the deployment process of an application, like starting the web server; making sure files are in the right location; copying files; running configuration management software like Puppet, Ansible or Chef; etc.

What can be done in an appspec.yml file is described here: Appspec Reference Documentation.

Deployment happens in one of two ways:

GitHub

If the preferred way to deploy the application is from GitHub, a commit hash must be used to identify which “version” of the application is to be deployed. For example:

    rev = &codedeploy.RevisionLocation{
        GitHubLocation: &codedeploy.GitHubLocation{
            CommitId:   aws.String("kajdf94j0f9k309klksjdfkj"),
            Repository: aws.String("Skarlso/furnace-codedeploy-app"),
        },
        RevisionType: aws.String("GitHub"),
    }

The commit id is the hash of the latest release, and the repository is the full account/repository path pointing to the application.

S3

The second way is to use an S3 bucket. The bucket will contain an archived version of the application with a given extension. I’m saying given extension because it has to be specified like this (and can be either ‘zip’, ‘tar’ or ‘tgz’):

    rev = &codedeploy.RevisionLocation{
        S3Location: &codedeploy.S3Location{
            Bucket:     aws.String("my_codedeploy_bucket"),
            BundleType: aws.String("zip"),
            Key:        aws.String("my_awesome_app"),
            Version:    aws.String("VersionId"),
        },
        RevisionType: aws.String("S3"),
    }

Here, we specify the bucket name, the extension, the name of the file and an optional version id, which can be ignored.

Deploying

So how does CodeDeploy get either of these applications to our EC2 instances? It uses an agent which runs on all of the instances we create. In order to do this, the agent needs to be present on our instances. For Linux this can be achieved with the following UserData (UserData in CF is the equivalent of a bootstrap script):

    "UserData" : {
        "Fn::Base64" : { "Fn::Join" : [ "\n", [
            "#!/bin/bash -v",
            "sudo yum -y update",
            "sudo yum -y install ruby wget",
            "cd /home/ec2-user/",
            "wget https://aws-codedeploy-eu-central-1.s3.amazonaws.com/latest/install",
            "chmod +x ./install",
            "sudo ./install auto",
            "sudo service codedeploy-agent start",
        ] ] }
    }

A simple user data configuration in the CloudFormation template will make sure that every instance we create has the CodeDeploy agent running and waiting for instructions. This agent is self-updating, which can cause some trouble if AWS releases a broken agent. However unlikely, it can happen. Nevertheless, once installed, it’s no longer a concern to be bothered with.

It communicates over HTTPS on port 443.

CodeDeploy identifies the instances which need to be updated, according to our preferences, by the tags on the EC2 instances and Auto Scaling groups. Tagging happens in the CloudFormation template through the AutoScalingGroup settings like this:

    "Tags" : [
        {
            "Key" : "fu_stage",
            "Value" : { "Ref": "AWS::StackName" },
            "PropagateAtLaunch" : true
        }
    ]

This will give the EC2 instance a tag called fu_stage with a value equal to the name of the stack. Once this is done, the CodeDeploy deployment input looks like this:

    params := &codedeploy.CreateDeploymentInput{
        ApplicationName:               aws.String(appName),
        IgnoreApplicationStopFailures: aws.Bool(true),
        DeploymentGroupName:           aws.String(appName + "DeploymentGroup"),
        Revision:                      revisionLocation(),
        TargetInstances: &codedeploy.TargetInstances{
            AutoScalingGroups: []*string{
                aws.String("AutoScalingGroupPhysicalID"),
            },
            TagFilters: []*codedeploy.EC2TagFilter{
                {
                    Key:   aws.String("fu_stage"),
                    Type:  aws.String("KEY_AND_VALUE"),
                    Value: aws.String(config.STACKNAME),
                },
            },
        },
        UpdateOutdatedInstancesOnly: aws.Bool(false),
    }

CreateDeploymentInput is the entire parameter list needed to identify the instances to deploy code to. We can see here that it looks for an AutoScalingGroup by physical id and the tag labeled fu_stage. Once found, it uses UpdateOutdatedInstancesOnly to determine if an instance needs to be updated or not. Set to false, it always updates.
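
With that input assembled, starting the deployment and waiting for it to finish is only a couple more calls; a rough sketch, assuming client is an already constructed codedeploy client:

    resp, err := client.CreateDeployment(params)
    if err != nil {
        log.Fatal(err)
    }
    // Block until the deployment reaches a successful terminal state.
    err = client.WaitUntilDeploymentSuccessful(&codedeploy.GetDeploymentInput{
        DeploymentId: resp.DeploymentId,
    })
    if err != nil {
        log.Fatal(err)
    }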

Furnace

Where does Furnace fit into all of this? Furnace provides a very easy mechanism to create, delete and push code to a CloudFormation stack using CodeDeploy and a couple of environment properties. furnace create will create a CloudFormation stack according to the provided template, all the while asking for the parameters defined in it, for flexibility. delete will remove the stack and all affiliated resources except for the created CodeDeploy application. For that, there is delete-application. status will display information about the stack: outputs, parameters, id, name, and status. Something like this:

    2017/03/16 21:14:37 Stack state is:  {
      Capabilities: ["CAPABILITY_IAM"],
      CreationTime: 2017-03-16 20:09:38.036 +0000 UTC,
      DisableRollback: false,
      Outputs: [{
          Description: "URL of the website",
          OutputKey: "URL",
          OutputValue: "http://FurnaceSt-ElasticL-ID.eu-central-1.elb.amazonaws.com"
        }],
      Parameters: [
        {
          ParameterKey: "KeyName",
          ParameterValue: "UserKeyPair"
        },
        {
          ParameterKey: "SSHLocation",
          ParameterValue: "0.0.0.0/0"
        },
        {
          ParameterKey: "CodeDeployBucket",
          ParameterValue: "None"
        },
        {
          ParameterKey: "InstanceType",
          ParameterValue: "t2.nano"
        }
      ],
      StackId: "arn:aws:cloudformation:eu-central-1:9999999999999:stack/FurnaceStack/asdfadsf-adsfa3-432d-a-fdasdf",
      StackName: "FurnaceStack",
      StackStatus: "CREATE_COMPLETE"
    }

( This will later be improved to include created resources as well. )

Once the stack reaches CREATE_COMPLETE, a simple push will deliver our application to each instance in the stack. We will get into more detail about how these commands work in Part 2 of this series.

Final Words

This is it for now.

Join me next time when I will talk about the AWS Go SDK and its intricacies and we will start to look at the basics of Furnace.

As always, Thanks for reading! Gergely.

03 Mar 2017, 18:20

Images on older posts

Hi folks.

Just a quick heads-up that older posts’ images may unfortunately have been lost, because I made the terrible mistake, when I migrated over from my old blog, of forgetting to download all the images from the remote host.

For lack of options, I deleted the images. :/ Sorry for the inconvenience!

Gergely.

15 Feb 2017, 19:20

How to HTTPS with Hugo LetsEncrypt and HAProxy

Intro

Hi folks.

Today, I would like to write about how to do HTTPS for a website, without the need to buy a certificate and set it up via your DNS provider. Let’s begin.

Abstract

What you will achieve by the end of this post:

  • Every call to HTTP will be redirected to HTTPS via haproxy.
  • HTTPS will be served with haproxy and Let’s Encrypt as the certificate provider.
  • The certificate will be automatically updated before its expiration.
  • No need for iptables rules to route 8080 to 80.
  • Traffic to and from your page will be encrypted.
  • This all will cost you nothing.

I will use a static website generator for this called Hugo which, if you know me, is my favorite generator tool. These instructions are for haproxy and hugo; if you wish to use apache or nginx for example, you’ll have to dig for the corresponding settings for letsencrypt and certbot.

What You Will Need

Hugo

You will need hugo, which can be downloaded from here: Hugo. A simple website will be enough. For themes, you can take a look at the humongous list located here: HugoThemes.

Haproxy

Haproxy can be found here: Haproxy. There are a number of options to install haproxy. I chose a simple apt-get install haproxy.

Let’s Encrypt

Information about Let’s Encrypt can be found on their website here: Let’s Encrypt. Let’s Encrypt’s client is now called Certbot which is used to generate the certificates. To get the latest code you either clone the repository Certbot, or use an auto downloader:

user@webserver:~$ wget https://dl.eff.org/certbot-auto
user@webserver:~$ chmod a+x ./certbot-auto
user@webserver:~$ ./certbot-auto --help

Either way, I’m using the current latest version: v0.11.1.

Sudo

This goes without saying, but these operations will require you to have sudo privileges. I suggest staying in sudo for ease of use. This means that the commands I write here will assume you are in sudo su mode, thus no sudo prefix will be used.

Portforwarding

In order for your website to work under https, this guide assumes that you have ports 80 and 443 open on your router / network security group.

Setup

Single Server Environment

It is possible for haproxy, certbot and your website to run on separate, designated servers; haproxy allows you to define multiple server sources. In this guide, my haproxy, website and certbot all run on the same server, thus redirecting to 127.0.0.1 and local IPs. This is more convenient, because otherwise the haproxy IP would have to be a permanent local/remote IP, or an automated script would have to be set up which is notified upon IP change and updates the IP records.

Creating a Certificate

Diving in, the first thing you will require is a certificate. A certificate allows for encrypted traffic and an authenticated website. Let’s Encrypt basically functions as an independent, free, automated CA (Certificate Authority). Usually, the process would be to pay a CA to give you a signed, generated certificate for your website, which you would then have to set up with your DNS provider. Let’s Encrypt has all that automated, and free of any charge. Neat.

Certbot

So let’s get started. Clone the repository into /opt/letsencrypt for further usage.

git clone https://github.com/certbot/certbot /opt/letsencrypt

Generating the certificate

Make sure that there is nothing listening on ports 80 and 443. To list usage:

netstat -nlt | grep ':80\s'
netstat -nlt | grep ':443\s'

Kill everything that might be on these ports, like apache2 and httpd. These ports will be used by haproxy and certbot for challenges and redirecting traffic.

You will be creating a standalone certificate. This is the reason we need ports 80 and 443 open. Run certbot with the certonly and --standalone flags. For domain validation you are going to use port 443 with the tls-sni-01 challenge. The whole command looks like this:

cd /opt/letsencrypt
./certbot-auto certonly --standalone -d example.com -d www.example.com

If this displays something like “couldn’t connect”, you probably still have something running on a port it tries to use. The generated certificate will be located under /etc/letsencrypt/archive and /etc/letsencrypt/keys, while /etc/letsencrypt/live is a symlink to the latest version of the cert. It’s wise not to copy these away from here, since the live link is always updated to the latest version. Our script will handle haproxy, which requires one cert file made from the privkey.pem + fullchain.pem files.

Setup Auto-Renewal

Let’s Encrypt issues short-lived certificates (90 days). In order not to have to do this procedure every 89 days, certbot provides a nifty command called renew. However, for the cert to be generated, port 443 has to be open, which means haproxy needs to be stopped before doing the renew. Now, you COULD write a script which stops it, and after the certificate has been renewed, starts it again, but certbot has you covered again in that department. It provides hooks called pre-hook and post-hook. Thus, all you have to write is the following:

#!/bin/bash

cd /opt/letsencrypt
./certbot-auto renew --pre-hook "service haproxy stop" --post-hook "service haproxy start"
DOMAIN='example.com' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /etc/haproxy/certs/$DOMAIN.pem'

If you would like to test it first, just include the switch --dry-run.

In case of success you should see something like this:

root@raspberrypi:/opt/letsencrypt# ./certbot-auto renew --pre-hook "service haproxy stop" --post-hook "service haproxy start" --dry-run
Saving debug log to /var/log/letsencrypt/letsencrypt.log

-------------------------------------------------------------------------------
Processing /etc/letsencrypt/renewal/example.com.conf
-------------------------------------------------------------------------------
Cert not due for renewal, but simulating renewal for dry run
Running pre-hook command: service haproxy stop
Renewing an existing certificate
Performing the following challenges:
tls-sni-01 challenge for example.com
Waiting for verification...
Cleaning up challenges
Generating key (2048 bits): /etc/letsencrypt/keys/0002_key-certbot.pem
Creating CSR: /etc/letsencrypt/csr/0002_csr-certbot.pem
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates below have not been saved.)

Congratulations, all renewals succeeded. The following certs have been renewed:
  /etc/letsencrypt/live/example.com/fullchain.pem (success)
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates above have not been saved.)
Running post-hook command: service haproxy start

Put this script into a crontab to run every 89 days like this:

crontab -e
# Open crontab for edit and paste in this line
0 0 */89 * * /root/renew-cert.sh

And you should be all set. Now we move on to configuring haproxy to redirect and to use our newly generated certificate.

Haproxy

Like I said, haproxy requires a single-file certificate in order to encrypt traffic to and from the website. To do this, we need to combine privkey.pem and fullchain.pem. As of this writing, there are a couple of solutions to automate this via a post hook on renewal, and there is also an open ticket with certbot to implement a simpler solution, located here: https://github.com/certbot/certbot/issues/1201. I, for now, have chosen to simply concatenate the two files together with cat, like this:

DOMAIN='example.com' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /etc/haproxy/certs/$DOMAIN.pem'

It will create a combined cert under /etc/haproxy/certs/example.com.pem.

Haproxy configuration

If haproxy happens to be running, stop it with service haproxy stop.

First, save the default configuration file: cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.old. Now, overwrite the old one with this new one (comments about what each setting does are in-lined; they are safe to copy):

global
    daemon
    # Set this to your desired maximum connection count.
    maxconn 2048
    # https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.2-tune.ssl.default-dh-param
    # bit setting for Diffie - Hellman key size.
    tune.ssl.default-dh-param 2048

defaults
    option forwardfor
    option http-server-close

    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

# In case it's a simple http call, we redirect to the basic backend server
# which in turn, if it isn't an SSL call, will redirect to HTTPS that is
# handled by the frontend setting called 'www-https'.
frontend www-http
    # Redirect HTTP to HTTPS
    bind *:80
    # Adds an http header to the end of the HTTP request
    reqadd X-Forwarded-Proto:\ http
    # Sets the default backend to use which is defined below with name 'www-backend'
    default_backend www-backend

# If the call is HTTPS we set a challenge to letsencrypt backend which
# verifies our certificate and then directs traffic to the backend server
# which is the running hugo site that is served under https if the challenge succeeds.
frontend www-https
    # Bind 443 with the generated letsencrypt cert.
    bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
    # set x-forward to https
    reqadd X-Forwarded-Proto:\ https
    # set X-SSL in case of ssl_fc <- explained below
    http-request set-header X-SSL %[ssl_fc]
    # Select a Challenge
    acl letsencrypt-acl path_beg /.well-known/acme-challenge/
    # Use the challenge backend if the challenge is set
    use_backend letsencrypt-backend if letsencrypt-acl
    default_backend www-backend

backend www-backend
   # Redirect with code 301 so the browser understands it is a redirect. If it's not SSL_FC.
   # ssl_fc: Returns true when the front connection was made via an SSL/TLS transport
   # layer and is locally deciphered. This means it has matched a socket declared
   # with a "bind" line having the "ssl" option.
   redirect scheme https code 301 if !{ ssl_fc }
   # Server for the running hugo site.
   server www-1 192.168.0.17:8080 check

backend letsencrypt-backend
   # Lets encrypt backend server
   server letsencrypt 127.0.0.1:54321

Save this, and start haproxy with service haproxy start. If you did everything right, it should say nothing. If, however, something went wrong with starting the proxy, it usually displays something like this:

Job for haproxy.service failed. See 'systemctl status haproxy.service' and 'journalctl -xn' for details.

You can also gather some more information on what went wrong from less /var/log/haproxy.log.

Starting the Server

Everything should be ready to go. Hugo has the concept of a baseUrl. Everything that it loads and tries to access will be prefixed with it. You can either set it through its config.yaml file or from the command line.

To start the server, call this from the site’s root folder:

hugo server --bind=192.168.x.x --port=8080 --baseUrl=https://example.com --appendPort=false

An interesting thing to note here is the https scheme and the port. The IP could be 127.0.0.1 as well; I experienced problems, though, with not binding to the network IP when I was debugging the site from a different laptop on the same network.

Once the server is started, you should be able to open up your website from a different browser, not on your local network, and see that it has a valid certificate installed. In Chrome you should see a green icon telling you that the cert is valid.

Last Words

And that is all. The site should be up and running, and the proxy should auto-renew your site’s certificate. If you happen to change DNS or change the server, you’ll have to reissue the certificate.

Thanks for reading! Any questions or trouble setting something up, please feel free to leave a comment.

Cheers, Gergely.

02 Nov 2016, 00:00

How to do Google Sign-In with Go - Part 2

Intro

Hi Folks.

This is a follow up on my previous post about Google Sign-In. In this post we will discover what to do with the information retrieved in the first encounter, which you can find here: Google Sign-In Part 1.

Forewords

The Project

Everything I did in the first post, and that I’m going to do in this example, can be found in this project: Google-OAuth-Go-Sample.

Just to recap, we left off previously on the point where we successfully obtained information about the user, with a secure token and a session initiated with them. Google nicely enough provided us with some details which we can use. This information was in JSON format and looked something like this:

{
  "sub": "1111111111111111111111",
  "name": "Your Name",
  "given_name": "Your",
  "family_name": "Name",
  "profile": "https://plus.google.com/1111111111111111111111",
  "picture": "https://lh3.googleusercontent.com/asdfadsf/AAAAAAAAAAI/Aasdfads/Xasdfasdfs/photo.jpg",
  "email": "your@gmail.com",
  "email_verified": true,
  "gender": "male"
}

In my example, to keep things simple, I will use the email address since that has to be unique in the land of Google. You could assign an ID to the user, and you could complicate things even further, but my goal is not to write an academic paper about cryptography here.

Implementation

Making something useful out of the data

In order for the app to recognise a user, it must save some data about the user. I’m doing that in MongoDB right now, but it could be any form of persistence layer, like SQLite3, BoltDB, PostgresDB, etc.

After successful user authorization

Once the user has used Google to provide us with sufficient information about him/herself, we can retrieve data about that user from our records. The data could be anything that is linked to our unique identifier, like: character profile, player information, status, last logged-in, etc. For this, there are two things that need to happen after authorization: save/load the user information and initiate a session.

The session can be in the form of a cookie, a Redis store, or URL re-writing. I’m choosing a cookie here.

Save / Load user information

All I’m doing is simple returning/new user handling. The concept is simple: if the email isn’t saved, we save it. If it’s saved, we set a flag for our page render to greet the returning user.

In the AuthHandler I’m doing the following:

...
seen := false
db := database.MongoDBConnection{}
if _, mongoErr := db.LoadUser(u.Email); mongoErr == nil {
    seen = true
} else {
    err = db.SaveUser(&u)
    if err != nil {
        log.Println(err)
        c.HTML(http.StatusBadRequest, "error.tmpl", gin.H{"message": "Error while saving user. Please try again."})
        return
    }
}
c.HTML(http.StatusOK, "battle.tmpl", gin.H{"email": u.Email, "seen": seen})
...

Let’s break this down a bit. There is a db connection here, which calls a function that either returns an error or it doesn’t. If it doesn’t, that means we have our user. If it does, it means we have to save the user. This is a very simple case (disregard for now that the error could be something else as well; if you can’t get past that, you could type-check the error, or check whether the returned record contains the requested user information instead of checking for an error).
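
For completeness, LoadUser itself can be as small as a single find-by-email; a rough sketch with the mgo driver (gopkg.in/mgo.v2) follows, where the struct fields, database and collection names are placeholders rather than the project’s exact code:

package database

import (
	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

// User only needs the field we care about here; the real struct has more.
type User struct {
	Email string `bson:"email"`
}

// MongoDBConnection groups the db helpers used by the handlers.
type MongoDBConnection struct{}

// LoadUser fetches a stored user by email; a missing user comes back as an error.
func (m MongoDBConnection) LoadUser(email string) (User, error) {
	session, err := mgo.Dial("127.0.0.1") // hypothetical mongo address
	if err != nil {
		return User{}, err
	}
	defer session.Close()

	var u User
	c := session.DB("goquest").C("users") // hypothetical db / collection names
	err = c.Find(bson.M{"email": email}).One(&u)
	return u, err
}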

The template is then rendered depending on the seen boolean, like this:

<!DOCTYPE html>
<link rel="icon"
      type="image/png"
      href="/img/favicon.ico" />
<html>
  <head>
    <link rel="stylesheet" href="/css/main.css">
  </head>
  <body>
    {{if .seen}}
        <h1>Welcome back to the battlefield '{{ .email }}'.</h1>
    {{else}}
        <h1>Welcome to the battlefield '{{ .email }}'.</h1>
    {{end}}
  </body>
</html>

You can see here that if seen is true, the header message will say: “Welcome back…“.

Initiating a session

When the user is successfully authenticated, we activate a session so that the user can access pages that require authorization. Here, I have to mention that I’m using Gin, so restricted end-points are made with groups which require a middleware.

As I mentioned earlier, I’m using cookies as session handlers. For this, a new session store has to be created with some secure token. This is achieved with the following code fragments ( note that I’m using a Gin session middleware which uses gorilla’s session handler located here: Gin-Gonic(Sessions)):

// RandToken in handlers.go:
// RandToken generates a random @l length token.
func RandToken(l int) string {
	b := make([]byte, l)
	rand.Read(b)
	return base64.StdEncoding.EncodeToString(b)
}

// quest.go:
// Create the cookie store in main.go.
store := sessions.NewCookieStore([]byte(handlers.RandToken(64)))
store.Options(sessions.Options{
    Path:   "/",
    MaxAge: 86400 * 7,
})

// using the cookie store:
router.Use(sessions.Sessions("goquestsession", store))

After this, gin.Context lets us access this session store by doing session := sessions.Default(c). Now, create a session variable called user-id like this:

session.Set("user-id", u.Email)
err = session.Save()
if err != nil {
    log.Println(err)
    c.HTML(http.StatusBadRequest, "error.tmpl", gin.H{"message": "Error while saving session. Please try again."})
    return
}

Don’t forget to save the session. ;) That is it. If I restart the server, the cookie won’t be usable any longer, since a new token will be generated for the cookie store. The user will have to log in again. Note: you might see something like this from the session middleware: [sessions] ERROR! securecookie: the value is not valid. You can ignore this error.

Restricting access to certain end-points with the auth Middleware™

Now, that our session is alive, we can use it to restrict access to some part of the application. With Gin, it looks like this:

authorized := router.Group("/battle")
authorized.Use(middleware.AuthorizeRequest())
{
    authorized.GET("/field", handlers.FieldHandler)
}

This creates a grouping of end-points under /battle. This means that everything under /battle will only be accessible if the middleware passed to the Use function calls the next handler in the chain. If it aborts the call chain, the end-point will not be accessible. My middleware is pretty simple, but it gets the job done:

// AuthorizeRequest is used to authorize a request for a certain end-point group.
func AuthorizeRequest() gin.HandlerFunc {
	return func(c *gin.Context) {
		session := sessions.Default(c)
		v := session.Get("user-id")
		if v == nil {
			c.HTML(http.StatusUnauthorized, "error.tmpl", gin.H{"message": "Please log in."})
			c.Abort()
		}
		c.Next()
	}
}

Note that this only checks whether user-id is set or not. That’s certainly not enough for a secure application; it’s only supposed to be a simple example of the mechanics of the auth middleware. Also, the session usually contains more than one parameter. It’s more likely that it contains several variables which describe the user, including a state for CORS protection. For CORS I’d recommend using rs/cors.

If you were to try to access http://127.0.0.1:9090/battle/field without logging in, you’d be redirected to an error.tmpl with the message: Please log in.

Final Words

That’s pretty much it. Important parts are:

  • Saving the right information
  • Secure cookie store
  • CORS for sessions
  • Checks of the users details in the cookie
  • Authorised end-points
  • Session handling

Any questions, remarks or ideas are very welcome in the comment section. There are plenty of very nice Go frameworks which do Google OAuth2 out of the box. I recommend using them, as they save you a lot of legwork.

Thank you for reading! Gergely.

06 Oct 2016, 00:00

RScrap scraper

Intro

Hey folks.

So, there is this project called Huginn which I absolutely love.

But the thing is, for a couple of scrapers (at least for me), I don’t want to spin up a whole Rails app.

Hence, I’ve come up with RScrap, which is a bunch of Ruby scripts run as cron jobs on a Raspberry Pi. And because I dislike emails as well, and most of the time I don’t read them, I opted for a nicer solution: enter the world of Telegram. They provide you with the ability to create bots. You basically get an API key, and then, using that key, you can send private messages, or even create an interactive bot which you can send messages to.

In my simple example, I’m using it to send private messages to myself, but I could just as well make it interactive and then tell it to run one of the scripts.

The Code

Let’s take a look at what we got.

The main scraper

The main scraper is simply a bunch of convenience methods that wrap handling and working with the database and the Telegram bot. That’s all. It’s very simple. Very short. The Telegram part is just this bit:

def send_message(text)
  Telegram::Bot::Client.run(@token) do |bot|
    bot.api.send_message(chat_id: @id, text: text)
  end
end

Straightforward. Creating an interactive bot would look something like this:

#!/usr/bin/env ruby
require 'telegram/bot'

token = 'YOUR_TELEGRAM_BOT_API_TOKEN'

Telegram::Bot::Client.run(token) do |bot|
  bot.listen do |message|
    case message.text
    when '/start'
      bot.api.send_message(chat_id: message.chat.id, text: "Hello, #{message.from.first_name}")
    when '/stop'
      bot.api.send_message(chat_id: message.chat.id, text: "Bye, #{message.from.first_name}")
    end
  end
end

Basically, it will listen, and then you can send it messages, and based on the parsed message.text you can define functions to call. For example, for rscrap I could define something like run_script(script), and the command would be /run reddit, which would execute my reddit script. The possibilities are endless.

The scripts

The scripts use nokogiri to parse a web page, and then return a URL which will be sent by the TelegramBot. The results are also saved in the database so that when a new comic strip comes out, I know that it’s new. For reddit, I’m saving a timestamp as well, and I collect everything after that timestamp through the reddit API as JSON, and send it as a bundled message with shortened links to the posts using bit.ly.

The scraping is most of the time the same for every comic. Thus, there is a helper method for it. The script itself is very short. For example, let’s look at Gunnerkrigg Court.

require_relative '../rscrap'
require 'nokogiri'
require 'open-uri'

url = 'http://www.gunnerkrigg.com'
scrap = Rscrap.new
page = Nokogiri::HTML(open(url))
comic_id = page.css('img.comic_image')[0].select { |e| e if e[0] == 'src' }[0][1]
new_comic = "#{url}#{comic_id}"
scrap.send_new_comic(url, new_comic)

The interesting part of it is this bit: comic_id = page.css('img.comic_image')[0].select { |e| e if e[0] == 'src' }[0][1]. It extracts the URL for the comic image and stores it as an “id” of the comic. This, then, is sent as a message which Telegram will embed. There is no need to visit the web page; the image is in your feed and you can view it directly. Just like an RSS reader.

Cron

These scripts are best used in a cron job. The comics are usually checked with a daily frequency, whereas the reddit gatherer runs with an hourly frequency. Basically, I’m receiving updates on an hourly basis if there are new posts by then. Running Ruby from cron was a bit tricky. I’m using bundler for the environment, and came up with this:

0 6-23 * * * /bin/bash -l -c 'cd /home/<youruser>/rubyproj/rscrap && bundle exec ruby scripts/reddit.rb'
0 8,22 * * * /bin/bash -l -c 'cd /home/<youruser>/rubyproj/rscrap && bundle exec ruby scripts/gunnerkrigg.rb'
0 8,22 * * * /bin/bash -l -c 'cd /home/<youruser>/rubyproj/rscrap && bundle exec ruby scripts/aws_blog.rb'
0 5,23 * * * /bin/bash -l -c 'cd /home/<youruser>/rubyproj/rscrap && bundle exec ruby scripts/goblinscomic.rb'
0 6,20 * * * /bin/bash -l -c 'cd /home/<youruser>/rubyproj/rscrap && bundle exec ruby scripts/xkcd.rb'
0 7,19 * * * /bin/bash -l -c 'cd /home/<youruser>/rubyproj/rscrap && bundle exec ruby scripts/commitstrip.rb'
0 8 * * * /bin/bash -l -c 'cd /home/<youruser>/rubyproj/rscrap && bundle exec ruby scripts/sequiential_art.rb'

And a Telegram message for all these things looks like this: Reddit: TelegramIMReddit Comics: TelegramIMComics

Conclusion

That’s it folks. Adding a new scraper is easy. I added the AWS blog as a new entry as well, by just copying the comic scripts. And I’m also getting weather reports delivered to me every morning.

Have fun. Any questions, please feel free to leave a comment!

Thanks, Gergely.

17 Sep 2016, 00:00

Budget Home Theather with a Headless Raspberry Pi and Flirc for Remote Controlling

Intro

Hello folks.

Today, I would like to tell you about my configuration for a low budget Home Theater setup.

My tools are as follows:

TL;DR

Use Flirc for remote control, omxplayer for streaming the movie from an SSD on a headless Pi controlled via SSH, and enjoy a nice, cold Lemon - Menta beer.

Flirc

First, the remote control. I like to sit on my couch and watch the movie from there. I hate getting up, or having a keyboard at arm’s length to control the Pi. Flirc is a very easy way of doing just that with a simple remote control.

It costs ~$22 and is easy to set up. It works with any kind of remote control. Setting up key bindings for the control is as simple as starting the Flirc software and pressing buttons on the remote to map them to keyboard keys. Now, my Pi is running headless, and the Flirc binary isn’t quite working with Raspbian; so I just did the binding on my main machine. When I was done, I plugged the Flirc into the Pi and proceeded to set it up.

Raspberry Pi 2

The Pi 2 is a small powerhouse. However, the SD card on which it sits is simply not fast enough. From time to time, I experienced lag in the sound, or stutter in the video. So, instead of having the movie on the Pi, I’m streaming it from a faster SSD with SSHFS. For playing, I’m using omxplayer. With omxplayer, I had a few problems, because sound was not coming through the HDMI cable. A little bit of research led me to this change in the Pi’s boot config. Uncomment this line:

#hdmi_drive=2

After rebooting, I also did this:

sudo apt-get install alsa-utils
sudo modprobe snd_bcm2835
sudo amixer -c 0 cset numid=3 2

This saved my bacon. The whole answer can be found here: Stackoverflow.

Once SSHFS was working and HDMI received sound, I just executed this command: omxplayer -o hdmi /media/stream/my_movie.mkv. This told omxplayer to use the local HDMI connection for audio output.

All this was done from my computer through an SSH session, so I never controlled the Pi directly. Once done, I proceeded to sit down with a nice, cold Lemon - Menta beer and a remote control.

One little gotcha – omxplayer is controlled through the keys + (volume up), - (volume down), the space key (stop, play), and q for quitting. Flirc is able to map any key combination on a keyboard to any button on the remote. Combinations can be done by selecting a control key and pressing another key. So mapping + to the volume-up button was done by pressing shift and then ‘=’.

Wrapping Up

I enjoyed the movie while being able to adjust the volume, or pause it when my popcorn was ready, and close the player when the movie was done. There are a number of other ways to do this, like using Kodi + Yatse, which lets you remote-control media software with your mobile phone. But I’m using the Pi for a number of other things and the GUI is rather resource-heavy.

There you have it folks. It might not be the easiest setup, but it’s pretty awesome anyway.

Cheers, Gergely.

19 Aug 2016, 00:00

Always Go with []byte

Another quick reminder… Always go with []byte if possible. I said it before, and I’m going to say it over and over again. It’s crucial.

Here is a little code from exercism.io. First, with strings:

package igpay

import (
    "strings"
)

// PigLatin translates regular old English into awesome pig-latin.
func PigLatin(in string) (ret string) {
    for _, v := range strings.Fields(in) {
        ret += pigLatin(v) + " "
    }

    return strings.Trim(ret, " ")
}

func pigLatin(in string) (ret string) {
    if strings.IndexAny(in, "aeiou") == 0 {
        ret += in + "ay"
        return
    }

    for i := 0; i < len(in); i++ {
        vowelPos := strings.IndexAny(in, "aeiou")

        if (in[0] == 'y' || in[0] == 'x') && vowelPos > 1 {
            vowelPos = 0
            ret = in
        }
        if vowelPos != 0 {
            adjustPosition := vowelPos

            if in[adjustPosition] == 'u' && in[adjustPosition - 1] == 'q' {
                adjustPosition++
            }

            ret = in[adjustPosition:] + in[:adjustPosition]
        }
    }
    ret += "ay"
    return
}

Then with []byte:

package igpay

import (
    // "fmt"
    "bytes"
)

// PigLatin translates regular old English into awesome pig-latin.
func PigLatin(in string) (ret string) {
    inBytes := []byte(in)
    var retBytes [][]byte
    for _, v := range bytes.Fields(inBytes) {
        v2 := make([]byte, len(v))
        copy(v2, v)
        retBytes = append(retBytes, pigLatin(v2))
    }

    ret = string(bytes.Join(retBytes, []byte(" ")))
    return
}

func pigLatin(in []byte) (ret []byte) {
    if bytes.IndexAny(in, "aeiou") == 0 {
        ret = append(in, []byte("ay")...)
        return
    }

    for i := 0; i < len(in); i++ {
        vowelPos := bytes.IndexAny(in, "aeiou")

        if (in[0] == 'y' || in[0] == 'x') && vowelPos > 1 {
            vowelPos = 0
            ret = in
        }
        if vowelPos != 0 {
            adjustPosition := vowelPos

            if in[adjustPosition] == 'u' && in[adjustPosition - 1] == 'q' {
                adjustPosition++
            }

            in = append(in[adjustPosition:], in[:adjustPosition]...)
            ret = in
            // fmt.Printf("%s\n", ret)
        }
    }
    ret = append(ret, []byte("ay")...)
    return
}

And then, the benchmarks of course:

BenchmarkPigLatin-8          	  200000	     10688 ns/op
BenchmarkPigLatinStrings-8   	  100000	     15211 ns/op
PASS

The improvement is not massive in this case, but it’s more than enough to matter. And in a bigger, more complicated program, string concatenation will take away a LOT of time.

In Go, the bytes package has a one-to-one mapping with the strings package, so chances are, if you are doing string concatenation, you will be able to port that piece of code easily to []byte.
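
For reference, the numbers above come from Go’s standard benchmark harness (go test -bench=.); a minimal version of such a benchmark, assuming the strings-based variant was kept around under the name PigLatinStrings so both can coexist, would look like this:

package igpay

import "testing"

var input = "quail strength xray yellow igloo pig latin" // arbitrary test input

func BenchmarkPigLatin(b *testing.B) {
    for i := 0; i < b.N; i++ {
        PigLatin(input)
    }
}

// Assumes the strings-based version was renamed so both can live in one package.
func BenchmarkPigLatinStrings(b *testing.B) {
    for i := 0; i < b.N; i++ {
        PigLatinStrings(input)
    }
}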

That’s all folks.

Happy coding, Gergely.