The One Hundred Day GitHub Challenge

Hello folks. Today, I present to you the One Hundred Day GitHub Challenge. The rules are simple: a minimum of one commit every day, for a hundred days. Commits have to be meaningful, but they can be as small as a fix in a README.md. It doesn’t matter if you are on vacation; there are no exceptions. There. Are. No. Exceptions. If you fail a day, you have to start over. No cheating. You only cheat yourself, so this is really up to you. Let me be clearer here, because it seems I wasn’t clear enough: what you make of this challenge is up to you. If you just update a README.md for a hundred days, that’s fine. Just do it every day. It’s a commitment. At least you’ll have a nice README. ...

November 15, 2015 · 1 min · hannibal

Go Progress Quest

Hi Folks. I started building a Progress Quest type of web app in Go. If you’d like to join, or just tag along, please drop by here => Go Progress Quest, and feel free to submit an issue if you have an idea or would like to contribute! I will try to document the progress. Thank you for reading! Gergely.

November 9, 2015 · 1 min · hannibal

Kill a Program on Connecting to a Specific WiFi – OSX

Hi folks. If you have the tendency, like me, to forget that you are on the corporate VPN, or to leave certain software open when you bring your laptop to work, this might be helpful to you too. It’s a small script which kills a program when you change your WiFi network. Script:

```bash
#!/bin/bash

function log {
    directory="/Users/<username>/wifi_detect"
    log_dir_exists=true
    if [ ! -d "$directory" ]; then
        echo "Attempting to create => $directory"
        mkdir -p "$directory"
        if [ ! -d "$directory" ]; then
            echo "Could not create directory. Continuing to log via echo."
            log_dir_exists=false
        fi
    fi
    if $log_dir_exists ; then
        echo "$(date):$1" >> "$directory/log.txt"
    else
        echo "$(date):$1"
    fi
}

function check_program {
    # Bracket the first character so grep does not match its own process.
    to_kill="[${1::1}]${1:1}"
    log "Checking if $to_kill really quit."
    ps=$(ps aux | grep "$to_kill")
    log "ps => $ps"
    if [ -z "$ps" ]; then
        # 0 - True: the program is no longer running.
        return
    else
        # 1 - False: the program is still running.
        return 1
    fi
}

function kill_program {
    log "Killing program"
    pkill -f "$1"
    sleep 1
    if ! check_program "$1" ; then
        log "$1 did not quit!"
    else
        log "$1 quit successfully"
    fi
}

wifi_name=$(networksetup -getairportnetwork en0 | awk -F": " '{print $2}')
log "Wifi name: $wifi_name"
if [ "$wifi_name" = "<wifi_name>" ]; then
    log "On corporate network... Killing program"
    kill_program "<programname>"
elif [ "$wifi_name" = "<home_wifi_name>" ]; then
    # Kill <program> if on <home_wifi> and Tunnelblick is running.
    log "Not on corporate network... Killing <program> if Tunnelblick is active."
    if ! check_program "Tunnelblick" ; then
        log "Tunnelblick is active. Killing <program>"
        kill_program "<program>"
    else
        log "All good... Happy coding."
    fi
else
    log "No known network..."
fi
```

Now, the trick on OSX is to trigger this only when your network changes. For this, you can have a launchd daemon which is configured to watch three files that change when the network does. ...
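Since the excerpt cuts off here, below is a hedged sketch of what installing such a launchd agent could look like. The label, the script path, and the exact set of watched files are my assumptions, not taken from the original post:

```bash
#!/bin/bash
# Hypothetical sketch: install a launchd agent that runs the wifi_detect
# script whenever the network configuration changes. The label, script
# path, and watched files below are assumptions, not from the original post.

plist="$HOME/Library/LaunchAgents/com.user.wifi_detect.plist"

cat > "$plist" <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.user.wifi_detect</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/username/wifi_detect/wifi_detect.sh</string>
    </array>
    <key>WatchPaths</key>
    <array>
        <string>/etc/resolv.conf</string>
        <string>/Library/Preferences/SystemConfiguration/NetworkInterfaces.plist</string>
        <string>/Library/Preferences/SystemConfiguration/com.apple.airport.preferences.plist</string>
    </array>
</dict>
</plist>
EOF

# Register the agent; launchd then fires the script on every network change.
launchctl load "$plist"
```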

October 26, 2015 · 2 min · hannibal

Circular buffer in Go

I’m proud of this one too. No peeking. I like how Go lets you do this kind of stuff in a very nice way.

```go
package circular

import "fmt"

// TestVersion is the version of the exercise this solution targets.
const TestVersion = 1

// Buffer is a fixed-size circular buffer of bytes.
type Buffer struct {
	buffer []byte
	full   int
	size   int
	s, e   int
}

// NewBuffer creates a new Buffer of the given size.
func NewBuffer(size int) *Buffer {
	return &Buffer{buffer: make([]byte, size), s: 0, e: 0, size: size, full: 0}
}

// ReadByte reads the oldest byte from the buffer.
func (b *Buffer) ReadByte() (byte, error) {
	if b.full == 0 {
		return 0, fmt.Errorf("Danger Will Robinson: %s", b)
	}
	readByte := b.buffer[b.s]
	b.s = (b.s + 1) % b.size
	b.full--
	return readByte, nil
}

// WriteByte writes c to the buffer.
func (b *Buffer) WriteByte(c byte) error {
	if b.full+1 > b.size {
		return fmt.Errorf("Danger Will Robinson: %s", b)
	}
	b.buffer[b.e] = c
	b.e = (b.e + 1) % b.size
	b.full++
	return nil
}

// Overwrite overwrites the oldest byte in the buffer.
func (b *Buffer) Overwrite(c byte) {
	b.buffer[b.s] = c
	b.s = (b.s + 1) % b.size
}

// Reset resets the buffer to its empty state.
func (b *Buffer) Reset() {
	*b = *NewBuffer(b.size)
}

func (b *Buffer) String() string {
	return fmt.Sprintf("Buffer: %d, %d, %d, %d", b.buffer, b.s, b.e, b.size)
}
```
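Here is a quick usage sketch of the buffer above. This is my addition rather than part of the original post, and it assumes the circular package is importable from your GOPATH:

```go
package main

import (
	"fmt"

	"circular" // assumes the package above sits at $GOPATH/src/circular
)

func main() {
	b := circular.NewBuffer(2)
	_ = b.WriteByte('a')
	_ = b.WriteByte('b') // buffer is now full

	c, _ := b.ReadByte()
	fmt.Printf("%c\n", c) // a: the oldest byte comes out first

	_ = b.WriteByte('c') // there is room again; buffer holds b, c
	b.Overwrite('d')     // clobbers the oldest byte (b)

	c, _ = b.ReadByte()
	fmt.Printf("%c\n", c) // c
	c, _ = b.ReadByte()
	fmt.Printf("%c\n", c) // d
}
```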

October 15, 2015 · 1 min · hannibal

Jenkins Job DSL and Groovy goodness

Hi Folks. Ever used the Job DSL plugin for Jenkins? What is that, you say? Well, it’s TEH most awesome plug-in for Jenkins to have, because you can CODE your job configuration and put it under source control. Today, however, I’m not going to write about that, because the tutorials on the Jenkins Job DSL are very extensive and very well done; anyone can pick them up. Today, I would like to write about a part of it which is even more interesting: extracting recurring parts of your job configurations. ...

October 15, 2015 · 4 min · hannibal

DataMunger Kata with Go

Quickly wrote up the Data Munger code kata in Go. Next time, I want better abstractions, and a way to select columns based on their header data. For now, this is not bad.

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"math"
	"os"
	"regexp"
	"strconv"
	"strings"
)

// Data holds a column name and the two values being compared.
type Data struct {
	columnName string
	compareOne float64
	compareTwo float64
}

func main() {
	// datas := []Data{WeatherData{}, FootballData{}}
	fmt.Println("Minimum weather data:", GetDataMinimumDiff("weather.dat", 0, 1, 2))
	fmt.Println("Minimum football data:", GetDataMinimumDiff("football.dat", 1, 6, 7))
}

// GetDataMinimumDiff gathers data from a file and returns the row with the
// smallest absolute difference between the two compared columns.
func GetDataMinimumDiff(filename string, nameColumn, compareColOne, compareColTwo int) Data {
	data := Data{}
	minimum := math.MaxFloat64
	readLines := ReadFile(filename)
	for _, value := range readLines {
		valueArrays := strings.Split(value, ",")
		name := valueArrays[nameColumn]
		trimmedFirst, _ := strconv.ParseFloat(valueArrays[compareColOne], 64)
		trimmedSecond, _ := strconv.ParseFloat(valueArrays[compareColTwo], 64)
		diff := math.Abs(trimmedFirst - trimmedSecond)
		if diff <= minimum {
			minimum = diff
			data.columnName = name
			data.compareOne = trimmedFirst
			data.compareTwo = trimmedSecond
		}
	}
	return data
}

// ReadFile reads lines from a file and gives back a string array which
// contains the lines as comma-joined word lists.
func ReadFile(fileName string) (fileLines []string) {
	file, err := os.Open(fileName)
	if err != nil {
		log.Fatal(err)
	}
	defer file.Close()
	scanner := bufio.NewScanner(file)
	// Skip the first line, which is the header.
	scanner.Scan()
	re := regexp.MustCompile(`\w+`)
	for scanner.Scan() {
		line := scanner.Text()
		lines := re.FindAllString(line, -1)
		if len(lines) > 0 {
			fileLines = append(fileLines, strings.Join(lines, ","))
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
	return
}
```

October 4, 2015 · 2 min · hannibal

How to Aggregate Tests in Jenkins with the Aggregate Plugin on Non-Related Jobs

Hello folks. Today, I would like to talk about something I came in contact with that was hard to find a proper answer / solution for, so I’m writing this down to document my findings. Like the title says, this is about aggregating test results with Jenkins, using the plug-in provided. If you, like me, have a pipeline structure whose jobs do not work on the same artifact but do have an upstream-downstream relationship, you will have a hard time configuring and making aggregation work. So here is how I fixed the issue. ...

October 2, 2015 · 4 min · hannibal

I used to have great ideas on the toilet, but I no longer do.

I used to have great ideas on the toilet, but I no longer do. And I would like to reflect on that. So this is not going to be a technical post, rather some ramblings. I already had a post similar to this one, but I failed to follow up on it, and now I’m revisiting the question. With technology on the rise, embedded systems, chips, augmented biology, and information being available at our fingertips, I have but one concern. I don’t want to sound like an old guy reflecting on history, saying that now everything is changing and that we need to keep sight of the past and bla bla bla. I do have one valid concern though. We are in danger of losing ourselves. ...

September 7, 2015 · 4 min · hannibal

Sieve of Eratosthenes in Go

I’m pretty proud of this one as well.

```go
package sieve

// Sieve uses the Sieve of Eratosthenes to collect all primes below limit.
func Sieve(limit int) []int {
	var listOfPrimes []int
	markers := make([]bool, limit)
	for i := 2; i < limit; i++ {
		if !markers[i] {
			// i is prime; mark every multiple of i as composite.
			for j := i + i; j < limit; j += i {
				markers[j] = true
			}
			listOfPrimes = append(listOfPrimes, i)
		}
	}
	return listOfPrimes
}
```
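A quick usage sketch, which is my addition rather than part of the original post; it assumes the sieve package is importable from your GOPATH:

```go
package main

import (
	"fmt"

	"sieve" // assumes the package above sits at $GOPATH/src/sieve
)

func main() {
	// Primes strictly below 20.
	fmt.Println(sieve.Sieve(20)) // [2 3 5 7 11 13 17 19]
}
```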

July 30, 2015 · 1 min · hannibal

Quick Tip for Debugging Headless Locally

If you are installing something with Packer and you have headless enabled (and you are lazy and don’t want to switch it off), it gets difficult to see output. Especially on a Windows install, the answer file / unattended install can sit at => Waiting for SSH. for about an hour or two! If you are doing this locally, fret not. Just start VirtualBox and watch the Preview section, which will display the current state even if it’s a headless install! ...
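For context, here is a minimal sketch of where that setting lives in a Packer template. This is my illustration, not from the original post, and the ISO details and credentials are placeholders:

```json
{
  "builders": [
    {
      "type": "virtualbox-iso",
      "headless": true,
      "iso_url": "file:///path/to/windows.iso",
      "iso_checksum_type": "sha256",
      "iso_checksum": "<checksum>",
      "communicator": "ssh",
      "ssh_username": "vagrant",
      "ssh_password": "vagrant",
      "ssh_wait_timeout": "2h"
    }
  ]
}
```

With `"headless": true`, the VM window never opens, but the VirtualBox Manager’s Preview pane still shows the console, which is exactly the trick described above.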

July 22, 2015 · 1 min · hannibal