15 Feb 2017, 19:20

How to HTTPS with Hugo LetsEncrypt and HAProxy

Intro

Hi folks.

Today, I would like to write about how to do HTTPS for a website, without the need to buy a certificate and set it up via your DNS provider. Let’s begin.

Abstract

What you will achieve by the end of this post:

  • Every call to HTTP will be redirected to HTTPS via haproxy.
  • HTTPS will be served with haproxy, with Let’s Encrypt as the certificate provider.
  • The certificate will be renewed automatically before its expiration.
  • No need for iptables rules to route 8080 to 80.
  • Traffic to and from your page will be encrypted.
  • All of this will cost you nothing.

I will use a static website generator for this called Hugo which, if you know me, is my favorite generator tool. These instructions are for haproxy and hugo; if you wish to use apache or nginx, for example, you’ll have to dig up the corresponding settings for letsencrypt and certbot.

What You Will Need

Hugo

You will need hugo, which can be downloaded from here: Hugo. A simple website will be enough. For themes, you can take a look at the humongous list located here: HugoThemes.

Haproxy

Haproxy can be found here: Haproxy. There are a number of options to install haproxy. I chose a simple apt-get install haproxy.

Let’s Encrypt

Information about Let’s Encrypt can be found on their website here: Let’s Encrypt. Let’s Encrypt’s client is now called Certbot which is used to generate the certificates. To get the latest code you either clone the repository Certbot, or use an auto downloader:

user@webserver:~$ wget https://dl.eff.org/certbot-auto
user@webserver:~$ chmod a+x ./certbot-auto
user@webserver:~$ ./certbot-auto --help

Either way, I’m using the current latest version: v0.11.1.

Sudo

This goes without saying, but these operations will require sudo privileges. I suggest staying in a root shell for ease of use. The commands in this post therefore assume you are in sudo su mode, so no sudo prefix will be used.

Port Forwarding

For your website to work over HTTPS, this guide assumes that ports 80 and 443 are open on your router / network security group.
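If you are not sure the ports are reachable, a quick probe from a machine outside your network with netcat (assuming a netcat variant that supports -z scan mode) looks like this:

nc -zv example.com 80
nc -zv example.com 443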

Setup

Single Server Environment

It is possible for haproxy, certbot and your website to each run on a designated server, since haproxy lets you define multiple server sources. In this guide, my haproxy, website and certbot will all run on the same server, so everything redirects to 127.0.0.1 and local IPs. This is more convenient, because otherwise the haproxy IP would have to be a permanent local/remote IP, or an automated script would have to be set up that is notified upon an IP change and updates the IP records.

Creating a Certificate

Diving in, the first thing you will require is a certificate. A certificate allows for encrypted traffic and an authenticated website. Let’s Encrypt functions as an independent, free, automated CA (Certificate Authority). Usually, the process is to pay a CA to hand you a signed, generated certificate for your website, which you then have to set up with your DNS provider. Let’s Encrypt has all of that automated, and free of any charge. Neat.

Certbot

So let’s get started. Clone the repository into /opt/letsencrypt for further usage.

git clone https://github.com/certbot/certbot /opt/letsencrypt

Generating the certificate

Make sure that nothing is listening on ports 80 or 443. To list usage:

netstat -nlt | grep ':80\s'
netstat -nlt | grep ':443\s'

Kill everything that might be on these ports, like apache2 and httpd. These will be used by haproxy and certbot for challenges and redirecting traffic.

You will be creating a standalone certificate, which is the reason ports 80 and 443 need to be open. Run certbot with the certonly and --standalone flags. For domain validation it will use the tls-sni-01 challenge on port 443. The whole command looks like this:

cd /opt/letsencrypt
./certbot-auto certonly --standalone -d example.com -d www.example.com

If this displays something like “couldn’t connect”, you probably still have something running on a port it tries to use. The generated certificate will be located under /etc/letsencrypt/archive and /etc/letsencrypt/keys, while /etc/letsencrypt/live is a symlink to the latest version of the cert. It’s wise not to copy these away from here, since the live link is always updated to the latest version. Our script will handle haproxy, which requires a single cert file made by concatenating the privkey.pem and fullchain.pem files.

Setup Auto-Renewal

Let’s Encrypt issues short-lived certificates (90 days). In order not to have to repeat this procedure every 89 days, certbot provides a nifty command called renew. However, for the cert to be renewed, port 443 has to be free, which means haproxy needs to be stopped first. Now, you COULD write a script which stops it and starts it again after the certificate has been renewed, but certbot has you covered in that department too: it provides hooks called pre-hook and post-hook. Thus, all you have to write is the following:

#!/bin/bash

cd /opt/letsencrypt
./certbot-auto renew --pre-hook "service haproxy stop" --post-hook "service haproxy start"
DOMAIN='example.com' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /etc/haproxy/certs/$DOMAIN.pem'

If you would like to test it first, just include the switch --dry-run.

In case of success you should see something like this:

root@raspberrypi:/opt/letsencrypt# ./certbot-auto renew --pre-hook "service haproxy stop" --post-hook "service haproxy start" --dry-run
Saving debug log to /var/log/letsencrypt/letsencrypt.log

-------------------------------------------------------------------------------
Processing /etc/letsencrypt/renewal/example.com.conf
-------------------------------------------------------------------------------
Cert not due for renewal, but simulating renewal for dry run
Running pre-hook command: service haproxy stop
Renewing an existing certificate
Performing the following challenges:
tls-sni-01 challenge for example.com
Waiting for verification...
Cleaning up challenges
Generating key (2048 bits): /etc/letsencrypt/keys/0002_key-certbot.pem
Creating CSR: /etc/letsencrypt/csr/0002_csr-certbot.pem
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates below have not been saved.)

Congratulations, all renewals succeeded. The following certs have been renewed:
  /etc/letsencrypt/live/example.com/fullchain.pem (success)
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates above have not been saved.)
Running post-hook command: service haproxy start

Put this script into a crontab. Note that cron cannot express “every 89 days” directly (the day-of-month field only goes up to 31), but since renew is a no-op unless the certificate is close to expiry, you can simply run the script on a regular schedule, for example weekly:

crontab -e
# Open crontab for edit and paste in this line
0 4 * * 1 /root/renew-cert.sh

And you should be all set. Now we move on to configuring haproxy to redirect and to use our newly generated certificate.

Haproxy

Like I said, haproxy requires a single-file certificate in order to encrypt traffic to and from the website. To get one, we need to combine privkey.pem and fullchain.pem. As of this writing, there are a couple of solutions to automate this via a post hook on renewal, and there is also an open ticket with certbot to implement a simpler solution, located here: https://github.com/certbot/certbot/issues/1201. I, for now, have chosen to simply concatenate the two files with cat, like this:

DOMAIN='example.com' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /etc/haproxy/certs/$DOMAIN.pem'

It will create a combined cert under /etc/haproxy/certs/example.com.pem.
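One note: the /etc/haproxy/certs directory may not exist yet, and since the combined file contains your private key, it should be root-only. Create it first:

mkdir -p /etc/haproxy/certs
chmod 700 /etc/haproxy/certs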

Haproxy configuration

If haproxy happens to be running, stop it with service haproxy stop.

First, save the default configuration file: cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.old. Now, overwrite the old one with this new one (comments about what each setting does are in-lined; they are safe to copy):

global
    daemon
    # Set this to your desired maximum connection count.
    maxconn 2048
    # https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.2-tune.ssl.default-dh-param
    # Bit size for the Diffie-Hellman parameters.
    tune.ssl.default-dh-param 2048

defaults
    option forwardfor
    option http-server-close

    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

# In case it's a simple http call, we redirect to the basic backend server
# which in turn, if it isn't an SSL call, will redirect to HTTPS that is
# handled by the frontend setting called 'www-https'.
frontend www-http
    # Redirect HTTP to HTTPS
    bind *:80
    # Adds an X-Forwarded-Proto header to the HTTP request
    reqadd X-Forwarded-Proto:\ http
    # Sets the default backend to use which is defined below with name 'www-backend'
    default_backend www-backend

# If the call is HTTPS, we send ACME challenge requests (the path
# /.well-known/acme-challenge/) to the letsencrypt backend so certbot can
# verify our domain, and direct all other traffic to the backend server,
# i.e. the running hugo site served over https.
frontend www-https
    # Bind 443 with the generated letsencrypt cert.
    bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
    # set x-forward to https
    reqadd X-Forwarded-Proto:\ https
    # set X-SSL in case of ssl_fc <- explained below
    http-request set-header X-SSL %[ssl_fc]
    # Select a Challenge
    acl letsencrypt-acl path_beg /.well-known/acme-challenge/
    # Use the challenge backend if the challenge is set
    use_backend letsencrypt-backend if letsencrypt-acl
    default_backend www-backend

backend www-backend
   # Redirect with code 301 so the browser understands it is a redirect;
   # only done if the connection is not already SSL (!ssl_fc).
   # ssl_fc: Returns true when the front connection was made via an SSL/TLS transport
   # layer and is locally deciphered. This means it has matched a socket declared
   # with a "bind" line having the "ssl" option.
   redirect scheme https code 301 if !{ ssl_fc }
   # Server for the running hugo site.
   server www-1 192.168.0.17:8080 check

backend letsencrypt-backend
   # Lets encrypt backend server
   server letsencrypt 127.0.0.1:54321

Save this, and start haproxy with service haproxy start. If you did everything right, it should say nothing. If, however, something went wrong with starting the proxy, it usually displays something like this:

Job for haproxy.service failed. See 'systemctl status haproxy.service' and 'journalctl -xn' for details.

You can also gather some more information on what went wrong from less /var/log/haproxy.log.
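You can also sanity-check the configuration file itself before starting the service, using haproxy’s check mode:

haproxy -c -f /etc/haproxy/haproxy.cfg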

Starting the Server

Everything should be ready to go. Hugo has the concept of a baseUrl; everything it loads and tries to access will be prefixed with it. You can either set it through its config.yaml file, or from the command line.
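For reference, in config.yaml it would look something like this (the key casing varies between hugo versions, which accept both baseurl and baseURL):

baseurl: "https://example.com/"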

To start the server, call this from the site’s root folder:

hugo server --bind=192.168.x.x --port=8080 --baseUrl=https://example.com --appendPort=false

The interesting things to note here are the https scheme and the port. The IP could be 127.0.0.1 as well, though I experienced problems when not binding to the network IP while debugging the site from a different laptop on the same network.

Once the server is started, you should be able to open up your website from a browser outside your local network and see that it has a valid certificate installed. In Chrome you should see a green icon telling you that the cert is valid.
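To verify the certificate from the command line instead, openssl can show who issued it and when it expires:

echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -issuer -dates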

Last Words

And that is all. The site should be up and running and the proxy should auto-renew your site’s certificate. If you happened to change DNS or change the server, you’ll have to reissue the certificate.

Thanks for reading! Any questions or trouble setting something up, please feel free to leave a comment.

Cheers, Gergely.

02 Nov 2016, 00:00

How to do Google Sign-In with Go - Part 2

Intro

Hi Folks.

This is a follow-up on my previous post about Google Sign-In. In this post we will discover what to do with the information retrieved in the first encounter, which you can find here: Google Sign-In Part 1.

Forewords

The Project

Everything I did in the first post, and that I’m going to do in this example, can be found in this project: Google-OAuth-Go-Sample.

Just to recap: we left off previously at the point where we had successfully obtained information about the user, with a secure token and a session initiated with them. Google was nice enough to provide us with some details we can use. This information was in JSON format and looked something like this:

{
  "sub": "1111111111111111111111",
  "name": "Your Name",
  "given_name": "Your",
  "family_name": "Name",
  "profile": "https://plus.google.com/1111111111111111111111",
  "picture": "https://lh3.googleusercontent.com/asdfadsf/AAAAAAAAAAI/Aasdfads/Xasdfasdfs/photo.jpg",
  "email": "your@gmail.com",
  "email_verified": true,
  "gender": "male"
}

In my example, to keep things simple, I will use the email address since that has to be unique in the land of Google. You could assign an ID to the user, and you could complicate things even further, but my goal is not to write an academic paper about cryptography here.

Implementation

Making something useful out of the data

In order for the app to recognize a user, it must save some data about them. I’m doing that in MongoDB right now, but it could be any form of persistence layer, like SQLite3, BoltDB, PostgreSQL, etc.

After successful user authorization

Once the user has used Google to provide us with sufficient information about themselves, we can retrieve data about that user from our records. That data could be anything linked to our unique identifier: character profile, player information, status, last logged-in, etc. For this, two things need to happen after authorization: save/load the user information, and initiate a session.

The session can be in the form of a cookie, a Redis store, or URL re-writing. I’m choosing a cookie here.

Save / Load user information

All I’m doing is simple returning / new user handling. The concept is straightforward: if the email isn’t saved yet, we save it; if it is, we add some logic to the page render to greet the returning user.

In the AuthHandler I’m doing the following:

...
seen := false
db := database.MongoDBConnection{}
if _, mongoErr := db.LoadUser(u.Email); mongoErr == nil {
    seen = true
} else {
    err = db.SaveUser(&u)
    if err != nil {
        log.Println(err)
        c.HTML(http.StatusBadRequest, "error.tmpl", gin.H{"message": "Error while saving user. Please try again."})
        return
    }
}
c.HTML(http.StatusOK, "battle.tmpl", gin.H{"email": u.Email, "seen": seen})
...

Let’s break this down a bit. There is a db connection here, which calls a function that either returns an error or it doesn’t. If it doesn’t, we have our user; if it does, we have to save the user. This is a very simple case (disregard for now that the error could be something else as well; if you can’t get past that, you could type-check the error, or check whether the returned record contains the requested user information instead of checking for an error).
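For illustration, here is a minimal sketch of what LoadUser and SaveUser could look like behind that interface. This is not the actual code of the sample project; it assumes the mgo driver, a MongoDB on localhost, and made-up database/collection names:

package database

import (
	mgo "gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

// User mirrors the fields we care about from Google's response.
type User struct {
	Email string `bson:"email"`
	Name  string `bson:"name"`
}

// MongoDBConnection is a thin wrapper around a mongo session.
type MongoDBConnection struct{}

// LoadUser returns the stored user for an email, or an error if none exists.
func (m MongoDBConnection) LoadUser(email string) (User, error) {
	sess, err := mgo.Dial("localhost")
	if err != nil {
		return User{}, err
	}
	defer sess.Close()
	var u User
	err = sess.DB("auth").C("users").Find(bson.M{"email": email}).One(&u)
	return u, err
}

// SaveUser inserts a new user record.
func (m MongoDBConnection) SaveUser(u *User) error {
	sess, err := mgo.Dial("localhost")
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.DB("auth").C("users").Insert(u)
}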

The template is then rendered depending on the seen boolean, like this:

<!DOCTYPE html>
<html>
  <head>
    <link rel="icon" type="image/png" href="/img/favicon.ico" />
    <link rel="stylesheet" href="/css/main.css">
  </head>
  <body>
    {{if .seen}}
        <h1>Welcome back to the battlefield '{{ .email }}'.</h1>
    {{else}}
        <h1>Welcome to the battlefield '{{ .email }}'.</h1>
    {{end}}
  </body>
</html>

You can see here that if seen is true, the header message will say: “Welcome back…”.

Initiating a session

When the user is successfully authenticated, we activate a session so that the user can access pages that require authorization. Here, I have to mention that I’m using Gin, so restricted end-points are made with groups which require a middleware.

As I mentioned earlier, I’m using cookies as session handlers. For this, a new session store has to be created with some secure token. This is achieved with the following code fragments (note that I’m using a Gin session middleware which uses gorilla’s session handler, located here: Gin-Gonic(Sessions)):

// RandToken in handlers.go:
import (
	"crypto/rand"
	"encoding/base64"
)

// RandToken generates a random token of length l, base64-encoded.
// rand here is crypto/rand, so the token is cryptographically secure.
func RandToken(l int) string {
	b := make([]byte, l)
	rand.Read(b)
	return base64.StdEncoding.EncodeToString(b)
}

// quest.go:
// Create the cookie store in main.go.
store := sessions.NewCookieStore([]byte(handlers.RandToken(64)))
store.Options(sessions.Options{
    Path:   "/",
    MaxAge: 86400 * 7,
})

// using the cookie store:
router.Use(sessions.Sessions("goquestsession", store))

After this, gin.Context lets us access the session store via session := sessions.Default(c). Now, create a session variable called user-id like this:

session.Set("user-id", u.Email)
err = session.Save()
if err != nil {
    log.Println(err)
    c.HTML(http.StatusBadRequest, "error.tmpl", gin.H{"message": "Error while saving session. Please try again."})
    return
}

Don’t forget to save the session. ;) That is it. If I restart the server, the cookie won’t be usable any longer, since a new token will be generated for the cookie store; the user will have to log in again. Note: you might see something like this from the session middleware: [sessions] ERROR! securecookie: the value is not valid. You can ignore this error.

Restricting access to certain end-points with the auth Middleware™

Now, that our session is alive, we can use it to restrict access to some part of the application. With Gin, it looks like this:

authorized := router.Group("/battle")
authorized.Use(middleware.AuthorizeRequest())
{
    authorized.GET("/field", handlers.FieldHandler)
}

This creates a grouping of end-points under /battle. Everything under /battle will only be accessible if the middleware passed to the Use function calls the next handler in the chain; if it aborts the call chain, the end-point will not be accessible. My middleware is pretty simple, but it gets the job done:

// AuthorizeRequest is used to authorize a request for a certain end-point group.
func AuthorizeRequest() gin.HandlerFunc {
	return func(c *gin.Context) {
		session := sessions.Default(c)
		v := session.Get("user-id")
		if v == nil {
			c.HTML(http.StatusUnauthorized, "error.tmpl", gin.H{"message": "Please log in."})
			// Abort stops the pending handlers; return so we don't fall through to Next.
			c.Abort()
			return
		}
		c.Next()
	}
}

Note that this only checks whether user-id is set or not. That’s certainly not enough for a secure application; it’s only supposed to be a simple example of the mechanics of the auth middleware. Also, the session usually contains more than one parameter; it’s more likely to contain several variables describing the user, including a state for CORS protection. For CORS I’d recommend using rs/cors.
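rs/cors targets plain net/http, so with Gin you would wrap it as a middleware or reach for a Gin-specific package; purely as a sketch of the library itself (origin and port are illustrative):

package main

import (
	"net/http"

	"github.com/rs/cors"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/battle/field", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("the battlefield"))
	})
	// Only allow our own site to call the API; rs/cors also answers preflight requests.
	c := cors.New(cors.Options{
		AllowedOrigins:   []string{"https://example.com"},
		AllowCredentials: true,
	})
	http.ListenAndServe(":9090", c.Handler(mux))
}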

If you try to access http://127.0.0.1:9090/battle/field without logging in, you’ll get error.tmpl rendered with the message: Please log in.

Final Words

That’s pretty much it. Important parts are:

  • Saving the right information
  • Secure cookie store
  • CORS for sessions
  • Checks of the users details in the cookie
  • Authorised end-points
  • Session handling

Any questions, remarks, or ideas are very welcome in the comment section. There are plenty of very nice Go frameworks which do Google OAuth2 out of the box; I recommend using them, as they save you a lot of legwork.

Thank you for reading! Gergely.

06 Oct 2016, 00:00

RScrap scraper

Intro

Hey folks.

So, there is this project called Huginn which I absolutely love.

But the thing is, for a couple of scrapers (at least for me), I don’t want to spin up a whole rails app.

Hence, I’ve come up with RScrap, which is a bunch of Ruby scripts run as cron jobs on a raspberry pi. And because I dislike emails as well, and most of the time I don’t read them, I opted for a nicer solution: enter the world of Telegram. They provide you with the ability to create bots. You basically get an API key, and then, using that key, you can send private messages, or even create an interactive bot which you can send messages to.

In my simple example, I’m using it to send private messages to myself, but I could just as well make it interactive and then tell it to run one of the scripts.

The Code

Let’s take a look at what we got.

The main scraper

The main scraper is simply a bunch of convenience methods that wrap handling and working with the database and the telegram bot. That’s all. It’s very simple and very short. The Telegram part is just this bit:

def send_message(text)
  Telegram::Bot::Client.run(@token) do |bot|
    bot.api.send_message(chat_id: @id, text: text)
  end
end

Straightforward. Creating an interactive bot would look something like this:

#!/usr/bin/env ruby
require 'telegram/bot'

token = 'YOUR_TELEGRAM_BOT_API_TOKEN'

Telegram::Bot::Client.run(token) do |bot|
  bot.listen do |message|
    case message.text
    when '/start'
      bot.api.send_message(chat_id: message.chat.id, text: "Hello, #{message.from.first_name}")
    when '/stop'
      bot.api.send_message(chat_id: message.chat.id, text: "Bye, #{message.from.first_name}")
    end
  end
end

Basically, it will listen, and then you can send it messages; based on the parsed message.text you can define functions to call. For example, for rscrap I could define something like run_script(script), and the command would be: /run reddit, which would execute my reddit script. The possibilities are endless.
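A rough sketch of that dispatch could look like this; run_script and the script path convention here are hypothetical, not actual RScrap code:

#!/usr/bin/env ruby
require 'telegram/bot'

token = 'YOUR_TELEGRAM_BOT_API_TOKEN'

# Hypothetical helper: shells out to one of the scraper scripts by name.
def run_script(script)
  system("bundle exec ruby scripts/#{script}.rb")
end

Telegram::Bot::Client.run(token) do |bot|
  bot.listen do |message|
    case message.text
    when %r{\A/run (\w+)\z}
      script = Regexp.last_match(1)
      run_script(script)
      bot.api.send_message(chat_id: message.chat.id, text: "Ran #{script}.")
    end
  end
end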

The scripts

The scripts use nokogiri to parse a web page and then return a URL, which is sent by the TelegramBot. The URLs are also saved in the database, so that when a new comic strip comes out, I know that it’s new. For reddit, I’m saving a timestamp as well; I collect everything after that timestamp through the reddit API as JSON, and send it as a bundled message with links to the posts shortened via bit.ly.

The scraping is most of the time the same for every comic, so there is a helper method for it. The script itself is very short. For example, let’s look at Gunnerkrigg Court.

require_relative '../rscrap'
require 'nokogiri'
require 'open-uri'

url = 'http://www.gunnerkrigg.com'
scrap = Rscrap.new
page = Nokogiri::HTML(open(url))
comic_id = page.css('img.comic_image')[0].select { |e| e if e[0] == 'src' }[0][1]
new_comic = "#{url}#{comic_id}"
scrap.send_new_comic(url, new_comic)

The interesting part is this bit: comic_id = page.css('img.comic_image')[0].select { |e| e if e[0] == 'src' }[0][1]. It extracts the URL of the comic image and stores it as an “id” for the comic. This is then sent as a message, which Telegram will embed. There is no need to visit the web page; the image is right in your feed and you can view it directly. Just like an RSS reader.

Cron

These scripts are best used in cron jobs. The comics usually run with a daily frequency, whereas the reddit gatherer runs hourly; basically, I’m receiving updates on an hourly basis if there are new posts by then. Running ruby from cron was a bit tricky. I’m using bundler for the environment, and came up with this:

0 6-23 * * * /bin/bash -l -c 'cd /home/<youruser>/rubyproj/rscrap && bundle exec ruby scripts/reddit.rb'
0 8,22 * * * /bin/bash -l -c 'cd /home/<youruser>/rubyproj/rscrap && bundle exec ruby scripts/gunnerkrigg.rb'
0 8,22 * * * /bin/bash -l -c 'cd /home/<youruser>/rubyproj/rscrap && bundle exec ruby scripts/aws_blog.rb'
0 5,23 * * * /bin/bash -l -c 'cd /home/<youruser>/rubyproj/rscrap && bundle exec ruby scripts/goblinscomic.rb'
0 6,20 * * * /bin/bash -l -c 'cd /home/<youruser>/rubyproj/rscrap && bundle exec ruby scripts/xkcd.rb'
0 7,19 * * * /bin/bash -l -c 'cd /home/<youruser>/rubyproj/rscrap && bundle exec ruby scripts/commitstrip.rb'
0 8 * * * /bin/bash -l -c 'cd /home/<youruser>/rubyproj/rscrap && bundle exec ruby scripts/sequiential_art.rb'

And a telegram message for all these things looks like this: Reddit: TelegramIMReddit Comics: TelegramIMComics

Conclusion

That’s it folks. Adding a new scraper is easy: I added the aws blog as a new entry just by copying one of the comic scripts. And I’m also getting weather reports delivered to me every morning.

Have fun. Any questions, please feel free to leave a comment!

Thanks, Gergely.

17 Sep 2016, 00:00

Budget Home Theater with a Headless Raspberry Pi and Flirc for Remote Controlling

Intro

Hello folks.

Today, I would like to tell you about my configuration for a low budget Home Theater setup.

My tools are as follows:

  • a Flirc USB dongle and a plain remote control
  • a Raspberry Pi 2 running headless raspbian
  • an SSD to stream the movie from, over SSHFS
  • omxplayer for playback

TL;DR

Use Flirc for the remote control, stream the movie from an SSD over SSHFS to a headless pi controlled via SSH, play it with omxplayer, and enjoy a nice, cold Lemon - Menta beer.

Flirc

First, the remote control. I like to sit on my couch and watch the movie from there; I hate getting up, or having a keyboard at arm’s length to control the pi. Flirc is a very easy way of doing just that with a simple remote control.

It costs ~$22, is easy to set up, and works with any kind of remote control. Setting up key bindings is as simple as starting the Flirc software and pressing buttons on the remote to map them to keyboard keys. Now, my pi is running headless, and the Flirc binary isn’t quite working with raspbian, so I did the binding on my main machine. When I was done, I just plugged the Flirc into the pi and proceeded to set it up.

Raspberry Pi 2

The pi 2 is a small powerhouse. However, the SD card on which it sits is simply not fast enough; from time to time I experienced lagging sound or stuttering video. So, instead of keeping the movie on the pi, I’m streaming it from a faster SSD over SSHFS. For playing, I’m using omxplayer. With omxplayer, I had a few problems, because sound was not coming through the HDMI cable. A little bit of research led me to this change in the pi’s boot config (note the option is hdmi_drive, not hdmi_driver). Uncomment this line:

#hdmi_drive=2

After rebooting, I also did the following:

sudo apt-get install alsa-utils
sudo modprobe snd_bcm2835
sudo amixer -c 0 cset numid=3 2

This saved my bacon. The whole answer can be found here: Stackoverflow.

Once SSHFS was working and HDMI received sound, I just executed this command: omxplayer -o hdmi /media/stream/my_movie.mkv. The -o hdmi flag tells omxplayer to send the audio through the HDMI connection.
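For reference, the SSHFS mount itself would look something like this (host and paths are illustrative):

mkdir -p /media/stream
sshfs user@mainmachine:/data/movies /media/stream -o ro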

All this was done from my computer through an SSH session, so I never controlled the pi directly. Once done, I proceeded to sit down with a nice, cold Lemon - Menta beer and a remote control.

One little gotcha: omxplayer is controlled with the keys + (volume up), - (volume down), space (pause/play), and q for quitting. Flirc is able to map key combinations on a keyboard as well to any button on the remote; combinations are done by selecting a control key and then pressing another key. So mapping + to the volume up button was done by pressing shift and then ‘=’.

Wrapping Up

I enjoyed the movie while being able to adjust the volume, pause it when my popcorn was ready, and close the player when the movie was done. There are a number of other ways to do this, like using kodi + yatse, which lets you remote control a media center with your mobile phone. But I’m using the pi for a number of other things, and the GUI is rather resource heavy.

There you have it folks. Might not be the easiest setup, but it’s pretty awesome anyways.

Cheers, Gergely.

19 Aug 2016, 00:00

Always Go with []byte

Another quick reminder… Always go with []byte if possible. I said it before, and I’m going to say it over and over again: it’s crucial.

Here is a little code from exercism.io. First, with strings:

package igpay

import (
    "strings"
)

// PigLatin translates regular old English into awesome pig-latin.
func PigLatin(in string) (ret string) {
    for _, v := range strings.Fields(in) {
        ret += pigLatin(v) + " "
    }

    return strings.Trim(ret, " ")
}

func pigLatin(in string) (ret string) {
    if strings.IndexAny(in, "aeiou") == 0 {
        ret += in + "ay"
        return
    }

    for i := 0; i < len(in); i++ {
        vowelPos := strings.IndexAny(in, "aeiou")

        if (in[0] == 'y' || in[0] == 'x') && vowelPos > 1 {
            vowelPos = 0
            ret = in
        }
        if vowelPos != 0 {
            adjustPosition := vowelPos

            if in[adjustPosition] == 'u' && in[adjustPosition - 1] == 'q' {
                adjustPosition++
            }

            ret = in[adjustPosition:] + in[:adjustPosition]
        }
    }
    ret += "ay"
    return
}

Then with []byte:

package igpay

import (
    // "fmt"
    "bytes"
)

// PigLatin translates regular old English into awesome pig-latin.
func PigLatin(in string) (ret string) {
    inBytes := []byte(in)
    var retBytes [][]byte
    for _, v := range bytes.Fields(inBytes) {
        v2 := make([]byte, len(v))
        copy(v2, v)
        retBytes = append(retBytes, pigLatin(v2))
    }

    ret = string(bytes.Join(retBytes, []byte(" ")))
    return
}

func pigLatin(in []byte) (ret []byte) {
    if bytes.IndexAny(in, "aeiou") == 0 {
        ret = append(in, []byte("ay")...)
        return
    }

    for i := 0; i < len(in); i++ {
        vowelPos := bytes.IndexAny(in, "aeiou")

        if (in[0] == 'y' || in[0] == 'x') && vowelPos > 1 {
            vowelPos = 0
            ret = in
        }
        if vowelPos != 0 {
            adjustPosition := vowelPos

            if in[adjustPosition] == 'u' && in[adjustPosition - 1] == 'q' {
                adjustPosition++
            }

            in = append(in[adjustPosition:], in[:adjustPosition]...)
            ret = in
            // fmt.Printf("%s\n", ret)
        }
    }
    ret = append(ret, []byte("ay")...)
    return
}

And then, the benchmarks of course:

BenchmarkPigLatin-8          	  200000	     10688 ns/op
BenchmarkPigLatinStrings-8   	  100000	     15211 ns/op
PASS
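For completeness, a harness that could produce numbers like these would look something like the following; it assumes the two versions are renamed (say, PigLatinBytes and PigLatinStrings) so they can live side by side in the package:

package igpay

import "testing"

// Assign to a package-level variable so the compiler can't optimize the calls away.
var result string

func BenchmarkPigLatin(b *testing.B) {
	for i := 0; i < b.N; i++ {
		result = PigLatinBytes("quick fast run pig latin rhythm yellow xray queen")
	}
}

func BenchmarkPigLatinStrings(b *testing.B) {
	for i := 0; i < b.N; i++ {
		result = PigLatinStrings("quick fast run pig latin rhythm yellow xray queen")
	}
}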

The improvement is not massive in this case, but it’s more than enough to matter. In a bigger, more complicated program, string concatenation will eat away a LOT of time.

In Go, the bytes package has a near 1-to-1 mapping to the strings package, so chances are, if you are doing string concatenation, you will be able to port that piece of code easily to []byte.

That’s all folks.

Happy coding, Gergely.

16 Aug 2016, 00:00

Global variable for never changing regex

Quick reminder. If you have a never-changing regex in Go, do NOT compile it inside a frequently called function; ALWAYS put it into a global variable. I’ll show you why.

Benchmark for code with the regex compiled inside a frequently called function:

BenchmarkNumber-8     	   30000	     41633 ns/op
BenchmarkAreaCode-8   	   50000	     27736 ns/op
BenchmarkFormat-8     	   50000	     29263 ns/op
PASS
ok  	_/phone-number	5.110s

Benchmark for the same code with the regex in a global variable:

BenchmarkNumber-8     	  300000	      5618 ns/op
BenchmarkAreaCode-8   	  500000	      3884 ns/op
BenchmarkFormat-8     	  300000	      4696 ns/op
PASS
ok  	_/phone-number	5.197s

Notice the magnitude change in ns/op! That’s something to keep an eye out for.
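To make the difference concrete, here is the shape of the two variants; the pattern is illustrative, not the exercise’s actual one:

package phone

import "regexp"

// Bad: the regex is recompiled on every single call.
func stripNonDigitsSlow(s string) string {
	re := regexp.MustCompile(`\D`)
	return re.ReplaceAllString(s, "")
}

// Good: compiled exactly once, at package initialization.
var nonDigit = regexp.MustCompile(`\D`)

func stripNonDigits(s string) string {
	return nonDigit.ReplaceAllString(s, "")
}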

Thanks for reading! Cheers, Gergely.

13 Aug 2016, 00:00

Drupal missing ToolBar and settings not saving

Hi folks.

A quick gotcha when working with Drupal. If you freshly installed it and everything seems to work fine, yet you are experiencing things like the admin toolbar randomly disappearing, or configuration not being saved, then you might not have mod_rewrite enabled on your apache server.

By default, Drupal has clean URLs enabled, and that needs URL rewriting on apache.

So, step one.

Have this in your .htaccess file:

<IfModule mod_rewrite.c>
  RewriteEngine on
  ... # and then a bunch of rewrite rules at your leisure
</IfModule>

Then look up this line in your httpd.conf file and remove the ‘#’ prefix.

#LoadModule rewrite_module libexec/apache2/mod_rewrite.so

That is all; from there on, everything should work. If you don’t want the clean URL setting, yet can’t disable it, and don’t want to restart the server or edit the settings.php file, use drush like this:

drush vset clean_url 0 --yes

This disables it and busts the cache in the process, so the change is immediately visible.

That is all folks.

Cheers, Gergely.

28 Jul 2016, 00:00

Jenkins Best Practices Talk

Hi folks.

I wanted to take the time to share with you a talk that I recently did.

The slides and the source I used can be found here: Github.

And then, there is also a docker image which contains all the plugins, job configurations and all the practices which I did during the talk. Please feel free to have a go with it. DockerHub - Jenkins Best Practices.

For easy access and reading, here are the slides on Slideshare: Jenkins Best Practices Slides.

I’ll gladly answer any questions that should arise.

Thanks! Gergely.

12 Jul 2016, 00:00

Ruby Sieve

It could be done better, I’m sure, but I’m actually pretty satisfied with this one. It loops only twice, as opposed to filtered ranges and whatnot in other solutions to the sieve. I was thinking of creating a list and deleting elements from it instead, but that’s already three loops.

Maybe I’ll do a benchmark later on more solutions.

# Sieve contains a function to return a set of primes
class Sieve
  def initialize(n)
    @n = n
  end

  # Returns a list of primes up to the limit given at construction
  # @return list of primes
  def primes
    marked = []
    primes = []
    (2..@n).each do |e|
      unless marked.include?(e)
        primes.push e
        (e..@n).step(e) { |s| marked.push s }
      end
    end
    primes
  end
end
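For example, Sieve.new(10).primes returns [2, 3, 5, 7].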

Cheers, Gergely.

12 Jul 2016, 00:00

Simple hook to rid of trouble

Hi folks.

This is but a simple git hook that runs the tests to make sure you can push. It also ignores the vendor folder, if you happen to have one in your directory.

Edit the file under .git/hooks/pre-push.sample and add this at the end, before the exit 0.

go test $(go list ./... |grep -v vendor)
RESULT=$?
if [ $RESULT -ne 0 ]; then
    echo "Failed test run. Disallowing push."
    exit 1
fi

After this, rename the file to pre-push, removing the .sample suffix.

If you now mess something up, you should see something like this before your push:

# github.com/Skarlso/goprogressquest
./create.go:40: undefined: sha1 in sha1.Sum
./create.go:41: undefined: fmt in fmt.Sprintf
./create.go:115: undefined: json in json.Unmarshal
./create.go:130: undefined: json in json.Unmarshal
FAIL	github.com/Skarlso/goprogressquest [build failed]
Failed test run. Disallowing push.
error: failed to push some refs to 'git@github.com:Skarlso/goprogressquest.git'

That is all.

Cheers, Gergely.