15 Feb 2017, 19:20

How to HTTPS with Hugo LetsEncrypt and HAProxy

Intro

Hi folks.

Today, I would like to write about how to do HTTPS for a website, without the need to buy a certificate and set it up via your DNS provider. Let’s begin.

Abstract

What you will achieve by the end of this post:

- Every call to HTTP will be redirected to HTTPS via haproxy.
- HTTPS will be served with haproxy, with Let's Encrypt as the certificate provider.
- The certificate will be renewed automatically before it expires.
- No iptables rules are needed to route 8080 to 80.
- Traffic to and from your page will be encrypted.
- All of this will cost you nothing.

I will use a static website generator for this called Hugo which, if you know me, is my favorite generator tool. These instructions are for haproxy and hugo; if you wish to use, for example, apache or nginx, you'll have to dig up the corresponding settings for Let's Encrypt and certbot.

What You Will Need

Hugo

You will need hugo, which can be downloaded from here: Hugo. A simple website will be enough. For themes, you can take a look at the humongous list located here: HugoThemes.

Haproxy

Haproxy can be found here: Haproxy. There are a number of options to install haproxy. I chose a simple apt-get install haproxy.

Let’s Encrypt

Information about Let's Encrypt can be found on their website here: Let's Encrypt. The Let's Encrypt client is now called Certbot and is what generates the certificates. To get the latest code, you can either clone the Certbot repository or use the auto downloader:

user@webserver:~$ wget https://dl.eff.org/certbot-auto
user@webserver:~$ chmod a+x ./certbot-auto
user@webserver:~$ ./certbot-auto --help

Either way, I'm using the latest version at the time of writing: v0.11.1.

Sudo

This goes without saying, but these operations require sudo privileges. I suggest staying in a root shell for ease of use. The commands I'll write here assume you are in sudo su mode, thus no sudo prefix will be used.

Portforwarding

In order for your website to work over https, this guide assumes that you have ports 80 and 443 open on your router / network security group.

Setup

Single Server Environment

It is possible for haproxy, certbot and your website to run on dedicated servers; haproxy lets you define multiple backend servers. In this guide, my haproxy, website and certbot will all run on the same machine, so everything points to 127.0.0.1 and local IPs. This is more convenient, because otherwise the backend would need a permanent local/remote IP, or an automated script would have to update the IP records whenever the IP changes.

Creating a Certificate

Diving in, the first thing you will need is a certificate. A certificate allows for encrypted traffic and an authenticated website. Let's Encrypt functions as an independent, free, automated CA (Certificate Authority). Usually you would pay a CA to give you a signed, generated certificate for your website, and you would have to set that up with your DNS provider. Let's Encrypt automates all of that, free of any charge. Neat.

Certbot

So let’s get started. Clone the repository into /opt/letsencrypt for further usage.

git clone https://github.com/certbot/certbot /opt/letsencrypt

Generating the certificate

Make sure that there is nothing listening on ports 80 and 443. To list what is using them:

netstat -nlt | grep ':80\s'
netstat -nlt | grep ':443\s'

Kill everything that might be listening on these ports, such as apache2 or httpd. The ports will be used by haproxy and certbot for challenges and redirecting traffic.
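
If you want a quick scripted check that the ports are really free, here is a small sketch using bash's /dev/tcp device (the ports are the ones from this guide; ss/netstat remain the authoritative tools):

```shell
#!/bin/bash
# Succeeds (exit 0) when nothing accepts TCP connections on the given port.
port_free() {
    ! (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_free 80 && port_free 443; then
    echo "ports 80 and 443 are free"
else
    echo "something is still listening"
fi
```

Note that /dev/tcp is a bash feature, so this won't work under plain sh.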

You will be creating a standalone certificate, which is why ports 80 and 443 need to be open. Run certbot with the certonly and --standalone flags. For domain validation it will use port 443 with the tls-sni-01 challenge. The whole command looks like this:

cd /opt/letsencrypt
./certbot-auto certonly --standalone -d example.com -d www.example.com

If this displays something like "couldn't connect", you probably still have something running on a port certbot tries to use. The generated certificate will be located under /etc/letsencrypt/archive and /etc/letsencrypt/keys, while /etc/letsencrypt/live is a set of symlinks to the latest version of the cert. It's wise not to copy these away from there, since the live links are always updated to the latest version. Our script will handle haproxy, which requires a single cert file made by concatenating privkey.pem and fullchain.pem.

Setup Auto-Renewal

Let's Encrypt issues short-lived certificates (90 days). In order not to have to repeat this procedure every 89 days, certbot provides a nifty command called renew. However, for the cert to be renewed, port 443 has to be free, which means haproxy needs to be stopped before the renew. You COULD write a script which stops it and starts it again after the certificate has been renewed, but certbot has you covered in that department too: it provides hooks called pre-hook and post-hook. Thus, all you have to write is the following:

#!/bin/bash

cd /opt/letsencrypt
./certbot-auto renew --pre-hook "service haproxy stop" --post-hook "service haproxy start"
DOMAIN='example.com' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /etc/haproxy/certs/$DOMAIN.pem'

If you would like to test it first, just include the switch --dry-run.

In case of success you should see something like this:

root@raspberrypi:/opt/letsencrypt# ./certbot-auto renew --pre-hook "service haproxy stop" --post-hook "service haproxy start" --dry-run
Saving debug log to /var/log/letsencrypt/letsencrypt.log

-------------------------------------------------------------------------------
Processing /etc/letsencrypt/renewal/example.com.conf
-------------------------------------------------------------------------------
Cert not due for renewal, but simulating renewal for dry run
Running pre-hook command: service haproxy stop
Renewing an existing certificate
Performing the following challenges:
tls-sni-01 challenge for example.com
Waiting for verification...
Cleaning up challenges
Generating key (2048 bits): /etc/letsencrypt/keys/0002_key-certbot.pem
Creating CSR: /etc/letsencrypt/csr/0002_csr-certbot.pem
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates below have not been saved.)

Congratulations, all renewals succeeded. The following certs have been renewed:
  /etc/letsencrypt/live/example.com/fullchain.pem (success)
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates above have not been saved.)
Running post-hook command: service haproxy start

Put this script into a crontab. Note that standard cron cannot express "every 89 days"; the usual approach is to simply run the script daily, since renew is a no-op until the certificate is close to expiry:

crontab -e
# Open crontab for editing and paste in this line
0 3 * * * /root/renew-cert.sh

And you should be all set. Now we move on to configuring haproxy to redirect traffic and to use our newly generated certificate.

Haproxy

Like I said, haproxy requires a single-file certificate in order to encrypt traffic to and from the website. To get one, we need to combine privkey.pem and fullchain.pem. As of this writing, there are a couple of solutions for automating this via a post-hook on renewal, and there is also an open ticket with certbot to implement a simpler solution, located here: https://github.com/certbot/certbot/issues/1201. For now, I have chosen to simply concatenate the two files with cat, like this:

DOMAIN='example.com' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /etc/haproxy/certs/$DOMAIN.pem'

It will create a combined cert under /etc/haproxy/certs/example.com.pem.
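
To verify the combined file is something haproxy can load, openssl can parse it. The sketch below is illustrative only: it generates a throwaway self-signed key and cert in a temp directory to stand in for the real privkey.pem and fullchain.pem, combines them the same way, and prints the subject and expiry.

```shell
#!/bin/bash
# Throwaway demo: a self-signed key + cert standing in for the real
# /etc/letsencrypt/live/$DOMAIN files, combined haproxy-style.
DOMAIN='example.com'
WORK=$(mktemp -d)

openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout "$WORK/privkey.pem" -out "$WORK/fullchain.pem" \
    -subj "/CN=$DOMAIN" 2>/dev/null

# Same concatenation as above, just into the temp dir.
cat "$WORK/fullchain.pem" "$WORK/privkey.pem" > "$WORK/$DOMAIN.pem"

# openssl reads the first certificate in the bundle; a parse error here
# means the combine went wrong.
openssl x509 -noout -subject -enddate -in "$WORK/$DOMAIN.pem"
```

On the real server, you would point the last command at /etc/haproxy/certs/example.com.pem instead.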

Haproxy configuration

If haproxy happens to be running, stop it with service haproxy stop.

First, save the default configuration file: cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.old. Now, overwrite the old one with this new one (comments about what each setting does are inlined; they are safe to copy):

global
    daemon
    # Set this to your desired maximum connection count.
    maxconn 2048
    # https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.2-tune.ssl.default-dh-param
    # bit setting for Diffie - Hellman key size.
    tune.ssl.default-dh-param 2048

defaults
    option forwardfor
    option http-server-close

    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

# In case it's a simple http call, we redirect to the basic backend server
# which in turn, if it isn't an SSL call, will redirect to HTTPS that is
# handled by the frontend setting called 'www-https'.
frontend www-http
    # Redirect HTTP to HTTPS
    bind *:80
    # Adds an X-Forwarded-Proto header to the HTTP request
    reqadd X-Forwarded-Proto:\ http
    # Sets the default backend to use which is defined below with name 'www-backend'
    default_backend www-backend

# If the call is HTTPS, any Let's Encrypt ACME challenge request is routed to
# the letsencrypt backend for domain verification, while all other traffic
# goes to the backend server: the running hugo site, served over https.
frontend www-https
    # Bind 443 with the generated letsencrypt cert.
    bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
    # set x-forward to https
    reqadd X-Forwarded-Proto:\ https
    # set X-SSL in case of ssl_fc <- explained below
    http-request set-header X-SSL %[ssl_fc]
    # Select a Challenge
    acl letsencrypt-acl path_beg /.well-known/acme-challenge/
    # Use the challenge backend if the challenge is set
    use_backend letsencrypt-backend if letsencrypt-acl
    default_backend www-backend

backend www-backend
   # Redirect with code 301 so the browser understands it is a redirect. If it's not SSL_FC.
   # ssl_fc: Returns true when the front connection was made via an SSL/TLS transport
   # layer and is locally deciphered. This means it has matched a socket declared
   # with a "bind" line having the "ssl" option.
   redirect scheme https code 301 if !{ ssl_fc }
   # Server for the running hugo site.
   server www-1 192.168.0.17:8080 check

backend letsencrypt-backend
   # Lets encrypt backend server
   server letsencrypt 127.0.0.1:54321

Save this, and start haproxy with service haproxy start. If you did everything right, it should say nothing. You can also validate the configuration file beforehand with haproxy -c -f /etc/haproxy/haproxy.cfg. If something went wrong while starting the proxy, it usually displays something like this:

Job for haproxy.service failed. See 'systemctl status haproxy.service' and 'journalctl -xn' for details.

You can also gather some more information on what went wrong from less /var/log/haproxy.log.

Starting the Server

Everything should be ready to go. Hugo has the concept of a baseUrl: everything it loads and tries to access will be prefixed with it. You can set it either through its config.yaml file or from the command line.

To start the server, call this from the site’s root folder:

hugo server --bind=192.168.x.x --port=8080 --baseUrl=https://example.com --appendPort=false

The interesting things to note here are the https scheme in the baseUrl and the port. The IP could be 127.0.0.1 as well, but I experienced problems with not binding to the network IP when I was debugging the site from a different laptop on the same network.

Once the server is started, you should be able to open your website from a browser outside your local network and see that it has a valid certificate installed. In Chrome you should see a green icon telling you that the cert is valid.

Last Words

And that is all. The site should be up and running, and the renew script should keep your site's certificate up to date. If you happen to change DNS or move to another server, you'll have to reissue the certificate.

Thanks for reading! Any questions or trouble setting something up, please feel free to leave a comment.

Cheers, Gergely.

08 Dec 2015, 00:00

Go Development Environment

Hello folks.

Here is a little something I’ve put together, since I’m doing it a lot.

Go Development Environment

If there is a project I'd like to contribute to, like GoHugo, I have to set up a development environment, because most of the time I'm on a Mac, and on OSX things work differently. I like to work in a Linux environment, since that's what most of the projects are built on.

So here you go: just download the files and say vagrant up, which will do the magic.

This sets up vim-go with code completion provided by YouCompleteMe, and some Go features like fmt on save and build error highlighting.

It also sets up ctags, which gives you tags and the ability to do GoTo Declaration.

It installs a bunch of utilities and configures Go. There is an option to install docker as well, but it's disabled at the moment.

Just uncomment this line:

#config.vm.provision "shell", path: "install_docker.sh"

Any questions or requests, feel free to submit an issue!

Thanks for reading! Gergely.

26 Oct 2015, 00:00

Kill a Program on Connecting to a specific WiFi – OSX

Hi folks.

If you have the tendency, like me, to forget that you are on the corporate VPN, or leave a certain software open when you bring your laptop to work, this might be helpful to you too.

It’s a small script which kills a program when you change your Wifi network.

Script:

#!/bin/bash
 
function log {
    directory="/Users/<username>/wifi_detect"
    log_dir_exists=true
    if [ ! -d $directory ]; then
        echo "Attempting to create => $directory"
        mkdir -p $directory
        if [ ! -d $directory ]; then
            echo "Could not create directory. Continue to log to echo."
            log_dir_exists=false
        fi
    fi
    if $log_dir_exists ; then
        echo "$(date):$1" >> "$directory/log.txt"
    else
        echo "$(date):$1"
    fi
}
 
function check_program {
    to_kill="[${1::1}]${1:1}"
    log "Checking if $to_kill really quit."
    ps=$(ps aux |grep "$to_kill")
    log "ps => $ps"
    if [ -z "$ps" ]; then
    # 0 - True
        return 
    else
    # 1 - False
        return 1
    fi
}
 
function kill_program {
    log "Killing program"
    pkill -f "$1"
    sleep 1
    if ! check_program $1 ; then
    log "$1 Did not quit!"
    else
    log "$1 quit successfully"
    fi
}
 
wifi_name=$(networksetup -getairportnetwork en0 |awk -F": " '{print $2}')
log "Wifi name: $wifi_name"
if [ "$wifi_name" = "<wifi_name>" ]; then
    log "On corporate network... Killing Program"
    kill_program "<programname>"
elif [ "$wifi_name" = "<home_wifi_name>" ]; then
    # Kill <program> if enabled and if on <home_wifi> and if Tunnelblick is running.
    log "Not on corporate network... Killing <program> if Tunnelblick is active."
    if ! check_program "Tunnelblick" ; then
    log "Tunnelblick is active. Killing <program>"
    kill_program "<program>"
    else
    log "All good... Happy coding."
    fi
else
    log "No known Network..."
fi
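
One detail in check_program worth unpacking is the "[${1::1}]${1:1}" expansion: it wraps the first character of the program name in brackets, so grep finds the running program but does not match its own command line (the string "[S]ublime" on grep's command line does not match the pattern [S]ublime). A minimal illustration:

```shell
#!/bin/bash
# Rebuild the pattern the same way the script does: first character of the
# argument in brackets, rest appended as-is.
to_pattern() {
    echo "[${1::1}]${1:1}"
}

to_pattern "Sublime"    # prints: [S]ublime
```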

Now, the trick on OSX is to trigger this only when your network changes. For that, you can have a launchd agent configured to watch three files which are related to network changes.

The plist goes under your ~/Library/LaunchAgents folder. Create something like com.username.checknetwork.plist, and make sure the Label and the ProgramArguments path below point at your own script.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>ifup.ddns</string>
 
  <key>LowPriorityIO</key>
  <true/>
 
  <key>ProgramArguments</key>
  <array>
    <string>/Users/username/scripts/ddns-update.sh</string>
  </array>
 
  <key>WatchPaths</key>
  <array>
    <string>/etc/resolv.conf</string>
    <string>/Library/Preferences/SystemConfiguration/NetworkInterfaces.plist</string>
    <string>/Library/Preferences/SystemConfiguration/com.apple.airport.preferences.plist</string>
  </array>
 
  <key>RunAtLoad</key>
  <true/>
</dict>
</plist>

Load the agent with launchctl load ~/Library/LaunchAgents/com.username.checknetwork.plist, or log out and back in (RunAtLoad takes care of it). Now, when you change your network to your corporate network, the configured program, in my case Sublime, gets killed.

Hope this helps somebody.

Cheers,

Gergely.

15 Oct 2015, 00:00

Jenkins Job DSL and Groovy goodness

Hi Folks.

Ever used Job DSL plugin for Jenkins? What is that you say? Well, it’s TEH most awesome plug-in for Jenkins to have, because you can CODE your job configuration and put it under source control.

Today, however, I’m not going to write about that because the tutorials on Jenkins JOB DSL are very extensive and very well done. Anyone can pick them up.

Today, I would like to write about a part of it which is even more interesting. And that is, extracting re-occurring parts in your job configurations.

If you have jobs which share a common part that is repeated everywhere, you usually have the urge to extract it into one place, lest it changes and you have to go and apply the change everywhere. That's not very efficient. But how do you do that in something which looks like a JSON descriptor?

Fret not: it is just Groovy, and being Groovy, you can use it to implement parts of the job description and then apply that implementation to the job in the DSL.

Suppose you have an email which you send after every job for which the DSL looks like this:

job('MyTestJob') {
    description '<strong>GENERATED - do not modify</strong>'
    label('machine_label')
    logRotator(30, -1, -1, 5)
    parameters {
        stringParam('somestringparam', 'default_value', 'Description')
    }
    wrappers {
        timeout {
            noActivity(600)
            abortBuild()
            failBuild()
            writeDescription('Build failed due to timeout after {0} minutes')
        }
    }
    deliveryPipelineConfiguration("Main", "MyTestJob")
    wrappers {
        preBuildCleanup {
            deleteDirectories()
        }
        timestamps()
    }
    triggers {
        cron('H 12 * * 1,2')
    }
    steps {
        batchFile(readFileFromWorkspace('relative/path/to/file'))
    }
    publishers {
        wsCleanup()
        extendedEmail('email@address.com', '$DEFAULT_SUBJECT', '$DEFAULT_CONTENT') {
            configure { node ->
                node / presendScript << readFileFromWorkspace('email_templates/emailtemplate.groovy')
                node / replyTo << '$DEFAULT_REPLYTO'
                node / contentType << 'default'
            }
            trigger(triggerName: 'StillUnstable', subject: '$DEFAULT_SUBJECT', body: '$DEFAULT_CONTENT', replyTo: '$DEFAULT_REPLYTO', sendToDevelopers: true, sendToRecipientList: true)
            trigger(triggerName: 'Fixed', subject: '$DEFAULT_SUBJECT', body: '$DEFAULT_CONTENT', replyTo: '$DEFAULT_REPLYTO', sendToDevelopers: true, sendToRecipientList: true)
            trigger(triggerName: 'Failure', subject: '$DEFAULT_SUBJECT', body: '$DEFAULT_CONTENT', replyTo: '$DEFAULT_REPLYTO', sendToDevelopers: true, sendToRecipientList: true)
        }
    }
}

Now, that big chunk of email settings is copied into a bunch of files, which is pretty ugly, and once you try to change it, you'll have to change it everywhere. Also, the interesting bits here are the readFileFromWorkspace parts. They allow us to move even larger chunks of the script into external files. Because the slave might be located somewhere else, you should not use new File('file').text in your job DSL. readFileFromWorkspace does the same thing in the background, but resolves the file against the correct workspace path.

Let’s put this into a groovy script, shall we? Create a utilities folder where the DSL is and create a groovy file in it like this one:

package utilities
 
public class JobCommonTemplate {
    public static void addEmailTemplate(def job, def dslFactory) {
        String emailScript = dslFactory.readFileFromWorkspace("email_template/EmailTemplate.groovy")
        job.with {
            publishers {
                wsCleanup()
                extendedEmail('email@address.com', '$DEFAULT_SUBJECT', '$DEFAULT_CONTENT') {
                    configure { node ->
                        node / presendScript << emailScript
                        node / replyTo << '$DEFAULT_REPLYTO'
                        node / contentType << 'default'
                    }
                    trigger(triggerName: 'StillUnstable', subject: '$DEFAULT_SUBJECT', body: '$DEFAULT_CONTENT', replyTo: '$DEFAULT_REPLYTO', sendToDevelopers: true, sendToRecipientList: true)
                    trigger(triggerName: 'Fixed', subject: '$DEFAULT_SUBJECT', body: '$DEFAULT_CONTENT', replyTo: '$DEFAULT_REPLYTO', sendToDevelopers: true, sendToRecipientList: true)
                    trigger(triggerName: 'Failure', subject: '$DEFAULT_SUBJECT', body: '$DEFAULT_CONTENT', replyTo: '$DEFAULT_REPLYTO', sendToDevelopers: true, sendToRecipientList: true)
                }
 
            }
        }
    }
}

The function addEmailTemplate takes two parameters: a job, which is an implementation of Job, and a dslFactory, which is a DslFactory. That factory is the interface which defines readFileFromWorkspace. Where do we get its implementation from, then? From the job DSL script itself. Let's alter our job to apply this Groovy class.

import utilities.JobCommonTemplate
 
def myJob = job('MyTestJob') {
    description '<strong>GENERATED - do not modify</strong>'
    label('machine_label')
    logRotator(30, -1, -1, 5)
    parameters {
        stringParam('somestringparam', 'default_value', 'Description')
    }
    wrappers {
        timeout {
            noActivity(600)
            abortBuild()
            failBuild()
            writeDescription('Build failed due to timeout after {0} minutes')
        }
    }
    deliveryPipelineConfiguration("Main", "MyTestJob")
    wrappers {
        preBuildCleanup {
            deleteDirectories()
        }
        timestamps()
    }
    triggers {
        cron('H 12 * * 1,2')
    }
    steps {
        batchFile(readFileFromWorkspace('relative/path/to/file'))
    }
}
 
JobCommonTemplate.addEmailTemplate(myJob, this)

Notice three things here.

#1 => import. We import the script from utilities folder which we created and placed the script into it.

#2 => def myJob. We create a variable which will contain our job’s description.

#3 => this. ‘this’ will be the DslFactory. That’s where we get our readFileFromWorkspace implementation.

And that's it. We have extracted a re-occurring part of our job and found an implementation for readFileFromWorkspace. DslFactory has most of the things you need in a job description, should you want to expand on this and extract other bits and pieces.

Have fun, and happy coding!

As always,

Thanks for reading!

Gergely.

02 Oct 2015, 00:00

How to Aggregate Test Results in Jenkins with the Aggregate Plugin on Non-Related Jobs

Hello folks.

Today, I would like to talk about something I came in contact with, and for which it was hard to find a proper answer or solution.

So I'm writing this down to document my findings. As the title says, this is about aggregating test results with Jenkins, using the plug-in provided. If, like me, you have a pipeline structure whose jobs do not work on the same artifact but do have an upstream-downstream relationship, you will have a hard time configuring this and making aggregation work. So here is how I fixed the issue.

Connection

In order for the aggregation to work, there needs to be an artifact connection between the upstream and downstream projects. And that is the key. If you don't have one, well, let's create one. I have a parent job configured like this one:

<?xml version='1.0' encoding='UTF-8'?>
<project>
  <actions/>
  <description></description>
  <keepDependencies>false</keepDependencies>
  <properties/>
  <scm class="hudson.scm.NullSCM"/>
  <canRoam>true</canRoam>
  <disabled>false</disabled>
  <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
  <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
  <triggers/>
  <concurrentBuild>false</concurrentBuild>
  <builders/>
  <publishers>
    <hudson.tasks.test.AggregatedTestResultPublisher plugin="junit@1.9">
      <includeFailedBuilds>false</includeFailedBuilds>
    </hudson.tasks.test.AggregatedTestResultPublisher>
    <hudson.tasks.BuildTrigger>
      <childProjects>ChildJob</childProjects>
      <threshold>
        <name>SUCCESS</name>
        <ordinal></ordinal>
        <color>BLUE</color>
        <completeBuild>true</completeBuild>
      </threshold>
    </hudson.tasks.BuildTrigger>
  </publishers>
  <buildWrappers/>
</project>

As you can see, it's pretty basic. It isn't much; it's supposed to be a trigger job for downstream projects. You could use this for anything: maybe a scheduled job, or some gathering of results, and so on and so forth. The end part of the configuration is the interesting bit.

Aggregation is set up, but it won't work: despite there being an upstream/downstream relationship, there also needs to be an artifact connection which uses fingerprinting. Jenkins needs fingerprinting in order to make the physical connection between the jobs via hashes. This is what you will get if that is not set up:

But if there is no artifact between them, what do you do? You create one.

The Artifact which Binds Us

Adding a simple timestamp file is enough to make a connection, so let's do that. Here is how it looks:

The important bits in this picture are the small echo, which simply creates a file containing some timestamp data, and after that the archive artifact step, which also fingerprints that file, marking it with a hash which identifies this job as using that particular artifact.
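
In shell form, that build step boils down to something like this (the file name timestamp.data matches the CopyArtifact filter used in the child job's configuration; the content itself is irrelevant, only the fingerprint matters):

```shell
#!/bin/bash
# Parent job build step: write the current time into the artifact file.
# Jenkins fingerprints the file's hash, and a content that changes per
# build is enough to link the builds together.
date '+%Y-%m-%d %H:%M:%S' > timestamp.data
cat timestamp.data
```

The "Archive the artifacts" post-build step with fingerprinting enabled then does the rest.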

Now, the next step is to create the connection. For that, you need the artifact copy plugin => Copy Artifact Plugin.

With this, we create the child's configuration like this:

<?xml version='1.0' encoding='UTF-8'?>
<project>
  <actions/>
  <description></description>
  <keepDependencies>false</keepDependencies>
  <properties/>
  <scm class="hudson.plugins.git.GitSCM" plugin="git@2.4.0">
    <configVersion>2</configVersion>
    <userRemoteConfigs>
      <hudson.plugins.git.UserRemoteConfig>
        <url>https://github.com/Skarlso/DataMung.git</url>
      </hudson.plugins.git.UserRemoteConfig>
    </userRemoteConfigs>
    <branches>
      <hudson.plugins.git.BranchSpec>
        <name>*/master</name>
      </hudson.plugins.git.BranchSpec>
    </branches>
    <doGenerateSubmoduleConfigurations>false</doGenerateSubmoduleConfigurations>
    <submoduleCfg class="list"/>
    <extensions/>
  </scm>
  <canRoam>true</canRoam>
  <disabled>false</disabled>
  <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
  <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
  <triggers/>
  <concurrentBuild>false</concurrentBuild>
  <builders>
    <hudson.plugins.gradle.Gradle plugin="gradle@1.24">
      <description></description>
      <switches></switches>
      <tasks>assemble check</tasks>
      <rootBuildScriptDir></rootBuildScriptDir>
      <buildFile>build.gradle</buildFile>
      <gradleName>(Default)</gradleName>
      <useWrapper>true</useWrapper>
      <makeExecutable>false</makeExecutable>
      <fromRootBuildScriptDir>true</fromRootBuildScriptDir>
      <useWorkspaceAsHome>false</useWorkspaceAsHome>
    </hudson.plugins.gradle.Gradle>
    <hudson.plugins.copyartifact.CopyArtifact plugin="copyartifact@1.36">
      <project>ParentJob</project>
      <filter>timestamp.data</filter>
      <target></target>
      <excludes></excludes>
      <selector class="hudson.plugins.copyartifact.TriggeredBuildSelector">
        <upstreamFilterStrategy>UseGlobalSetting</upstreamFilterStrategy>
      </selector>
      <doNotFingerprintArtifacts>false</doNotFingerprintArtifacts>
    </hudson.plugins.copyartifact.CopyArtifact>
  </builders>
  <publishers>
    <hudson.tasks.junit.JUnitResultArchiver plugin="junit@1.9">
      <testResults>build/test-results/*.xml</testResults>
      <keepLongStdio>false</keepLongStdio>
      <healthScaleFactor>1.0</healthScaleFactor>
    </hudson.tasks.junit.JUnitResultArchiver>
  </publishers>
  <buildWrappers>
    <hudson.plugins.ws__cleanup.PreBuildCleanup plugin="ws-cleanup@0.28">
      <deleteDirs>false</deleteDirs>
      <cleanupParameter></cleanupParameter>
      <externalDelete></externalDelete>
    </hudson.plugins.ws__cleanup.PreBuildCleanup>
  </buildWrappers>
</project>

Again, the important bit is this:

After the copy step is set up, we launch our parent job, and if everything is correct, you should see something like this:

Wrapping it Up

As final words, the important thing to take away from this is that you need an artifact connection between the jobs to make this work; whatever your downstream/upstream connection is doesn't matter. It can also happen that you have everything set up, and there are artifacts which bind the jobs together, but you still can't see the results. In that case, your best option is to specify the jobs BY NAME in the aggregate test plug-in, like this:

I know this is a pain if there are multiple jobs, but at least Jenkins provides autocompletion once you start typing.

Of course, this also works with multiple downstream jobs, as long as they copy the artifact to themselves.

Any questions, please feel free to comment and I will answer to the best of my knowledge.

Cheers, Gergely.

16 Jul 2015, 00:00

Selenium Testing with Packer and Vagrant

So, recently, the tester team came to me saying that their build takes too long, and why is that? A quick look at their configuration and build scripts showed me that they were using a vagrant box which never got destroyed, or at least restarted. To remedy this problem, I came up with the following solution…

Same old…

Same as in my previous post, we are going to build a Windows machine for this purpose. The only additions to my previous settings will be a Java install, downloading selenium, and installing Chrome and Firefox.

Installation

Answer File

Here is the configuration and setup of Windows before the provision phase.

...
               <SynchronousCommand wcm:action="add">
                  <CommandLine>cmd.exe /c C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File a:\jdk_inst.ps1 -AutoStart</CommandLine>
                  <Description>Install Java</Description>
                  <Order>103</Order>
                  <RequiresUserInput>true</RequiresUserInput>
               </SynchronousCommand>
...

This is the part where I'm installing Java. The script for jdk_inst.ps1 is in my previous post, but I'll paste it here for ease of reading.

function LogWrite {
   Param ([string]$logstring)
   $now = Get-Date -format s
   Add-Content $Logfile -value "$now $logstring"
   Write-Host $logstring
}
 
$Logfile = "C:\Windows\Temp\jdk-install.log"
 
$JDK_VER="7u75"
$JDK_FULL_VER="7u75-b13"
$JDK_PATH="1.7.0_75"
$source86 = "http://download.oracle.com/otn-pub/java/jdk/$JDK_FULL_VER/jdk-$JDK_VER-windows-i586.exe"
$source64 = "http://download.oracle.com/otn-pub/java/jdk/$JDK_FULL_VER/jdk-$JDK_VER-windows-x64.exe"
$destination86 = "C:\Windows\Temp\$JDK_VER-x86.exe"
$destination64 = "C:\Windows\Temp\$JDK_VER-x64.exe"
$client = new-object System.Net.WebClient
$cookie = "oraclelicense=accept-securebackup-cookie"
$client.Headers.Add([System.Net.HttpRequestHeader]::Cookie, $cookie)
 
LogWrite "Setting Execution Policy level to Bypass"
Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy Bypass -Force
 
LogWrite 'Checking if Java is already installed'
if ((Test-Path "c:\Program Files (x86)\Java") -Or (Test-Path "c:\Program Files\Java")) {
    LogWrite 'No need to Install Java'
    Exit
}
 
LogWrite "Downloading x86 to $destination86"
try {
  $client.downloadFile($source86, $destination86)
  if (!(Test-Path $destination86)) {
      LogWrite "Downloading $destination86 failed"
      Exit
  }
  LogWrite "Downloading x64 to $destination64"
 
  $client.downloadFile($source64, $destination64)
  if (!(Test-Path $destination64)) {
      LogWrite "Downloading $destination64 failed"
      Exit
  }
} catch [Exception] {
  LogWrite "Exception is $($_.Exception)"
}
 
try {
    LogWrite 'Installing JDK-x64'
    $proc1 = Start-Process -FilePath "$destination64" -ArgumentList "/s REBOOT=ReallySuppress" -Wait -PassThru
    $proc1.waitForExit()
    LogWrite 'Installation Done.'
 
    LogWrite 'Installing JDK-x86'
    $proc2 = Start-Process -FilePath "$destination86" -ArgumentList "/s REBOOT=ReallySuppress" -Wait -PassThru
    $proc2.waitForExit()
    LogWrite 'Installation Done.'
} catch [exception] {
    LogWrite "`$_ is $_"
    LogWrite "`$_.GetType().FullName is $($_.GetType().FullName)"
    LogWrite "`$_.Exception is $($_.Exception)"
    LogWrite "`$_.Exception.GetType().FullName is $($_.Exception.GetType().FullName)"
    LogWrite "`$_.Exception.Message is $($_.Exception.Message)"
}
 
if ((Test-Path "c:\Program Files (x86)\Java") -Or (Test-Path "c:\Program Files\Java")) {
    LogWrite 'Java installed successfully.'
} else {
    LogWrite 'Java install Failed!'
}
LogWrite 'Setting up Path variables.'
[System.Environment]::SetEnvironmentVariable("JAVA_HOME", "c:\Program Files (x86)\Java\jdk$JDK_PATH", "Machine")
[System.Environment]::SetEnvironmentVariable("PATH", $Env:Path + ";c:\Program Files (x86)\Java\jdk$JDK_PATH\bin", "Machine")
LogWrite 'Done. Goodbye.'

This installs both the x86 and x64 versions of Java.

Provision

I decided to put these into the provision phase so that log messages get written out properly; in the unattended file, you can’t see any progress.

Chrome And Firefox

Installing these two proved a little more difficult. Chrome, like Java, didn’t really want me to download its installer without accepting something first. Luckily, after a LOT of digging, I found a Chrome installer which lets you install silently. Here is the script to install the two.

function LogWrite {
    Param ([string]$logstring)
    $now = Get-Date -format s
    Add-Content $Logfile -value "$now $logstring"
    Write-Host $logstring
}
 
function CheckLocation {
    Param ([string]$location)
    if (!(Test-Path  $location)) {
        throw [System.IO.FileNotFoundException] "Could not download to Destination $location."
    }
}
 
$Logfile = "C:\Windows\Temp\chrome-firefox-install.log"
 
$chrome_source = "http://dl.google.com/chrome/install/375.126/chrome_installer.exe"
$chrome_destination = "C:\Windows\Temp\chrome_installer.exe"
$firefox_source = "https://download-installer.cdn.mozilla.net/pub/firefox/releases/39.0/win32/hu/Firefox%20Setup%2039.0.exe"
$firefox_destination = "C:\Windows\Temp\firefoxinstaller.exe"
 
LogWrite 'Starting to download files.'
try {
    LogWrite 'Downloading Chrome...'
    (New-Object System.Net.WebClient).DownloadFile($chrome_source, $chrome_destination)
    CheckLocation $chrome_destination
    LogWrite 'Done...'
    LogWrite 'Downloading Firefox...'
    (New-Object System.Net.WebClient).DownloadFile($firefox_source, $firefox_destination)
    CheckLocation $firefox_destination
} catch [Exception] {
    LogWrite "Exception during download. Probable cause could be that the directory or the file didn't exist."
    LogWrite "`$_.Exception is $($_.Exception)"
}
 
LogWrite 'Starting firefox install process.'
try {
    Start-Process -FilePath $firefox_destination -ArgumentList "-ms" -Wait -PassThru
} catch [Exception] {
    LogWrite 'Exception during install process.'
    LogWrite "`$_.Exception is $($_.Exception)"
}
LogWrite 'Starting chrome install process.'
 
try {
    Start-Process -FilePath $chrome_destination -ArgumentList "/silent /install" -Wait -PassThru
} catch [Exception] {
    LogWrite 'Exception during install process.'
    LogWrite "`$_.Exception is $($_.Exception)"
}
 
LogWrite 'All done. Goodbye.'

They both install silently. Pretty neat.

Selenium

This only needs to be downloaded, so it is pretty simple. Vagrant will, of course, handle the startup when it does a vagrant up.

function LogWrite {
   Param ([string]$logstring)
   $now = Get-Date -format s
   Add-Content $Logfile -value "$now $logstring"
   Write-Host $logstring
}
 
$Logfile = "C:\Windows\Temp\selenium-install.log"
 
$source = "http://selenium-release.storage.googleapis.com/2.46/selenium-server-standalone-2.46.0.jar"
$destination = "C:\Windows\Temp\selenium-server.jar"
LogWrite 'Starting to download selenium file.'
try {
  (New-Object System.Net.WebClient).DownloadFile($source, $destination)
} catch [Exception] {
  LogWrite "Exception during download. Probable cause could be that the directory or the file didn't exist."
  LogWrite "`$_.Exception is $($_.Exception)"
}
LogWrite 'Download done. Checking if file exists.'
if (!(Test-Path $destination)) {
  LogWrite 'Downloading selenium failed!'
} else {
  LogWrite 'Download successful.'
}
 
LogWrite 'All done. Goodbye.'

Straightforward.

The Packer Json File

So putting this all together, here is the Packer JSON file for this:

{
      "variables": {
      "vm_name": "win7x64selenium",
      "output_dir": "output_win7_x64_selenium",
      "vagrant_box_output": "box_output",
      "cpu_number": "2",
      "memory_size": "4096",
      "machine_type": "pc-1.2",
      "accelerator": "kvm",
      "disk_format": "qcow2",
      "disk_interface": "virtio",
      "net_device": "virtio-net",
      "cpu_model": "host",
      "disk_cache": "writeback",
      "disk_io": "native"
   },
 
  "builders": [
    {
      "type": "virtualbox-iso",
      "iso_url": "/home/user/vms/windows7.iso",
      "iso_checksum_type": "sha1",
      "iso_checksum": "0BCFC54019EA175B1EE51F6D2B207A3D14DD2B58",
      "headless": true,
      "boot_wait": "2m",
      "ssh_username": "vagrant",
      "ssh_password": "vagrant",
      "ssh_wait_timeout": "8h",
      "shutdown_command": "shutdown /s /t 10 /f /d p:4:1 /c \"Packer Shutdown\"",
      "guest_os_type": "Windows7_64",
      "disk_size": 61440,
      "floppy_files": [
        "./answer_files/7-selenium/Autounattend.xml",
        "./scripts/dis-updates.ps1",
        "./scripts/microsoft-updates.bat",
        "./scripts/openssh.ps1",
        "./scripts/jdk_inst.ps1"
      ],
      "vboxmanage": [
        [
          "modifyvm",
          "{{.Name}}",
          "--memory",
          "{{user `memory_size`}}"
        ],
        [
          "modifyvm",
          "{{.Name}}",
          "--cpus",
          "{{user `cpu_number`}}"
        ]
      ]
    }
  ],
  "provisioners": [
    {
      "type": "powershell",
      "scripts" : [
        "./scripts/install-selenium-server.ps1",
        "./scripts/install-chrome-firefox.ps1"
      ]
    },{
      "type": "shell",
      "remote_path": "/tmp/script.bat",
      "execute_command": "{{.Vars}} cmd /c C:/Windows/Temp/script.bat",
      "scripts": [
        "./scripts/vm-guest-tools.bat",
        "./scripts/vagrant-ssh.bat",
        "./scripts/rsync.bat",
        "./scripts/enable-rdp.bat"
      ]
    }
  ],
    "post-processors": [
    {
      "type": "vagrant",
      "keep_input_artifact": false,
      "output": "{{user `vm_name`}}_{{.Provider}}.box",
      "vagrantfile_template": "vagrantfile-template"
    }
    ]
}
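Assuming the template above is saved as, say, win7-selenium.json (the file name is my own choice here, not anything fixed), validating and building it is just:

```
user@buildhost:~$ packer validate win7-selenium.json
user@buildhost:~$ packer build -only=virtualbox-iso win7-selenium.json
```

Per the output setting in the vagrant post-processor, this drops a win7x64selenium_virtualbox.box next to the template, ready to be pointed at from a Vagrantfile.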

Additional Software

This is not done here. Obviously, in order to test your stuff, you first need to install your software on this box. Ideally, everything you need should be in the code you clone to this box and be mostly self-contained; your application’s deployment should take care of that. But if you require something like a DB (Postgres, Oracle, whatnot), then this is the place where you would install it.

Vagrant and Using the Packer Box

Now, this has been interesting so far, but how do you actually go about using this image? That’s the real question now, isn’t it? Having a box just sitting on a shared folder doesn’t do you much good. So let’s create a Jenkins job which uses this box to run a bunch of tests for some application.

Vagrantfile

Your Vagrantfile could either be generated automatically, kept under source control (which is preferred), or sit somewhere else entirely. In any case, it would look something like this.

# -*- mode: ruby -*-
# vi: set ft=ruby :
 
VAGRANTFILE_API_VERSION = "2"
 
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
 
  config.vm.provider "virtualbox"
 
  config.vm.define "selenium-box" do |vs2013|
    vs2013.vm.box = "windows7-x64-04-selenium"
    vs2013.vm.box_url = "path/to/your/share/win7x64_selenium_virtualbox.box"
  end
 
  config.env.enable
 
  config.vm.guest = :windows
  config.vm.communicator = "winrm"
  config.winrm.username = "vagrant"
  config.winrm.password = "vagrant"
  config.windows.set_work_network = true
  config.vm.network :forwarded_port, guest: 3389, host: ENV['RDESKTOP_PORT'], host_ip: "0.0.0.0"
  config.vm.network :forwarded_port, guest: 5985, host: 5985, id: "winrm", auto_correct: true, host_ip: "0.0.0.0"
  config.vm.network :forwarded_port, guest: 9991, host: 9991, id: "selenium", auto_correct: true, host_ip: "0.0.0.0"
  config.vm.provider :virtualbox do |vbox|
    vbox.gui = false
    vbox.memory = 4096
    vbox.cpus = 2
  end
 
 
  config.winrm.max_tries = 10
  config.vm.synced_folder ".", "/vagrant", type: "rsync"
  config.vm.provision "shell", path: "init.bat"
  config.vm.provision "shell", path: "utils_inst.bat"
  config.vm.provision "shell", path: "jenkins_reg.ps1"
  config.vm.provision "shell", path: "start_selenium.bat"
end

Easy, no? Here is the script that starts Selenium.

    java -jar c:\Windows\Temp\selenium-server.jar -Dhttp.proxyPort=9991

Straightforward. We are also forwarding the port on which Selenium is running so that the tests can reach it.

The Jenkins Job

The job can be anything; covering it fully is beyond the scope of this post. It could be a Gradle, Maven, Ant, or NAnt job (whatever runs your tests); it’s up to you.

Just make sure that before the tests run you do a vagrant up, and that after they finish, in an always-executed hook (like Gradle’s finalizedBy), you call vagrant destroy. This way, your tests will always run on a clean instance that has the necessary stuff on it.
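As a sketch of that ordering, here is a minimal shell step for the job. The vagrant and gradle invocations are stubbed into a log file so the always-run teardown is visible; in the real job they would be vagrant up, whatever runs your tests, and vagrant destroy -f.

```shell
#!/bin/bash
# Sketch of the Jenkins "Execute shell" step; substitute real commands for
# the stubbed echoes below.
set -e
log=$(mktemp)

teardown() { echo "vagrant destroy -f" >> "$log"; }
trap teardown EXIT              # always runs, even when the tests fail

echo "vagrant up" >> "$log"     # bring up a clean box
echo "./gradlew test" >> "$log" # run the suite against it
```

The trap on EXIT is the shell’s equivalent of Gradle’s finalizedBy: the destroy runs whether the tests pass, fail, or abort.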

Closing words

So, there you have it. It’s relatively simple. Tying this all into your infrastructure might prove difficult, though, depending on how rigid your deployment is. But it will always help to make your tests a bit more robust.

Also, you could run the whole deployment and test phase on a Vagrant box from the start, tied to Jenkins as a slave, started when the job starts and destroyed when it ends. That way you would avoid the box-in-a-box-running-on-a-box effect.

Thanks for reading,

Gergely.

01 Jul 2015, 00:00

Packer 0.8.1.

Previously I wrote that the scripts I’m writing are failing because Packer hangs.

Apparently, this was a known issue, and I was using an older version, 0.7.5. After I updated, everything worked wonderfully!

And as thanks, here is an updated PowerShell script for provisioning my dotnet stuff.

$source = "http://download.microsoft.com/download/1/6/7/167F0D79-9317-48AE-AEDB-17120579F8E2/NDP451-KB2858728-x86-x64-AllOS-ENU.exe"
$destination = "C:\Windows\Temp\dotnet.exe"
Write-Host 'Starting to download dotnet file.'
try {
  (New-Object System.Net.WebClient).DownloadFile($source, $destination)
} catch [Exception] {
  Write-Host "Exception during download. Probable cause could be that the directory or the file didn't exist."
  Write-Host '$_.Exception is' $_.Exception
}
Write-Host 'Download done. Checking if file exists.'
if (!(Test-Path $destination)) {
  Write-Host 'Downloading dotnet Failed!'
} else {
  Write-Host 'Download successful.'
}
 
Write-Host 'Starting install process.'
try {
  Start-Process -FilePath $destination -ArgumentList "/q /norestart" -Wait -PassThru
} catch [Exception] {
  Write-Host 'Exception during install process.'
  Write-Host '$_.Exception is' $_.Exception
}
 
Write-Host 'All done. Goodbye.'

Thanks for reading!

Gergely.

30 Jun 2015, 00:00

Powershell can also be nice -Or Installing Java silently and waiting

Hello folks.

Today, I would like to show you a small script. It installs the Java JDK, both x86 and x64 versions, silently, and waits for that process to finish.

The wait is necessary because /s on a Java install has the nasty habit of running in the background. If you are using a .bat file (you shouldn’t be), you would use something like start /w jdk-setup.exe /s. That gets it done, but it’s ugly. Also, if you are using Packer with PowerShell provisioning, you might want to set up some environment variables for the next script, and you want those available without writing the path to a file and re-reading it at the beginning of your other script, or passing it around with Packer. No. Use a proper PowerShell script. Learn it. It’s not that hard. Be a professional. Don’t hack something together for the next person to suffer through.

Here is how I did it. Hope it helps somebody out.

$JDK_VER="7u75"
$JDK_FULL_VER="7u75-b13"
$JDK_PATH="1.7.0_75"
$source86 = "http://download.oracle.com/otn-pub/java/jdk/$JDK_FULL_VER/jdk-$JDK_VER-windows-i586.exe"
$source64 = "http://download.oracle.com/otn-pub/java/jdk/$JDK_FULL_VER/jdk-$JDK_VER-windows-x64.exe"
$destination86 = "C:\vagrant\$JDK_VER-x86.exe"
$destination64 = "C:\vagrant\$JDK_VER-x64.exe"
$client = new-object System.Net.WebClient
$cookie = "oraclelicense=accept-securebackup-cookie"
$client.Headers.Add([System.Net.HttpRequestHeader]::Cookie, $cookie)
 
Write-Host 'Checking if Java is already installed'
if ((Test-Path "c:\Program Files (x86)\Java") -Or (Test-Path "c:\Program Files\Java")) {
    Write-Host 'No need to Install Java'
    Exit
}
 
Write-Host "Downloading x86 to $destination86"
 
$client.downloadFile($source86, $destination86)
if (!(Test-Path $destination86)) {
    Write-Host "Downloading $destination86 failed"
    Exit
}
Write-Host "Downloading x64 to $destination64"
 
$client.downloadFile($source64, $destination64)
if (!(Test-Path $destination64)) {
    Write-Host "Downloading $destination64 failed"
    Exit
}
 
 
try {
    Write-Host 'Installing JDK-x64'
    $proc1 = Start-Process -FilePath "$destination64" -ArgumentList "/s REBOOT=ReallySuppress" -Wait -PassThru
    $proc1.waitForExit()
    Write-Host 'Installation Done.'
 
    Write-Host 'Installing JDK-x86'
    $proc2 = Start-Process -FilePath "$destination86" -ArgumentList "/s REBOOT=ReallySuppress" -Wait -PassThru
    $proc2.waitForExit()
    Write-Host 'Installation Done.'
} catch [exception] {
    write-host '$_ is' $_
    write-host '$_.GetType().FullName is' $_.GetType().FullName
    write-host '$_.Exception is' $_.Exception
    write-host '$_.Exception.GetType().FullName is' $_.Exception.GetType().FullName
    write-host '$_.Exception.Message is' $_.Exception.Message
}
 
if ((Test-Path "c:\Program Files (x86)\Java") -Or (Test-Path "c:\Program Files\Java")) {
    Write-Host 'Java installed successfully.'
}
Write-Host 'Setting up Path variables.'
[System.Environment]::SetEnvironmentVariable("JAVA_HOME", "c:\Program Files (x86)\Java\jdk$JDK_PATH", "Machine")
[System.Environment]::SetEnvironmentVariable("PATH", $Env:Path + ";c:\Program Files (x86)\Java\jdk$JDK_PATH\bin", "Machine")
Write-Host 'Done. Goodbye.'

Now, there is room for improvement here, like checking the exit code, doing something extra after a failed install, throwing an exception, and so on. But this is a much-needed improvement over calling a BAT file.
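For instance, the exit-code check could be a few lines added after the install. This is a sketch using the $proc1 handle from the Start-Process -PassThru call above; Windows installers conventionally return 0 on success and 3010 when a reboot is pending, which is expected here since we pass REBOOT=ReallySuppress.

```powershell
# Hypothetical follow-up check; $proc1 is the process object returned by
# Start-Process -PassThru above. 3010 means "success, reboot required".
if ($proc1.ExitCode -notin 0, 3010) {
    throw "JDK x64 installer exited with code $($proc1.ExitCode)"
}
```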

And you would use this in a Packer JSON file like this:

{
      "type": "powershell",
      "scripts": [
        "./scripts/jdk_inst.ps1"
      ]
}

Easy. And at the end, System.Environment actually writes the values into the registry permanently, so there is no need to pass them around in a file or something ugly like that.
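If you want to convince yourself, you can read the value back from the machine scope (which is backed by the registry) in a fresh session:

```powershell
# Returns the persisted machine-level value, independent of the current
# process environment (assumes the install script above has run):
[System.Environment]::GetEnvironmentVariable("JAVA_HOME", "Machine")
```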

Hope this helps somebody.

Thanks for reading.

Gergely.

27 Jun 2015, 00:00

The Packer, The Windows, and the Vagrant box

Hello folks.

Today, I would like to write about something close to my heart recently. I’ve been fiddling with Packer, Windows, and Vagrant these days. Trying to get a Windows box up and running is a pain in the arse, though, so I thought I’d share my pain with you nice folks out there. Let’s begin.

Setup

First things first. You need Packer and Vagrant, obviously. I’ll leave the install up to you. Next, you should clone this git repo => Packer Windows Plugin. This plugin contains all the files necessary to get, install, and provision Windows boxes. Luckily, some very nice and clever folks figured out a lot of things about how to install stuff on Windows. And given that people at Microsoft realised that sysadmins would like to install stuff remotely, there are a bunch of forums and places where you can search for how to install software without user interaction. This is the keyword you should look for => unattended Windows install.

This will lead you further into the bowels of Windows technology and silent / quiet installs all over the place.

Packer and Answer Files

When it comes to installing software on Windows, you have quite a few obstacles to overcome. One of the biggest obstacles you face is restarts. Windows has a special place in hell for those. Every time you install something important which requires system libraries, or some other configuration which “will only take effect after you restart Windows”, you have to restart. Now, a little background on how Packer interacts with Windows. At the moment, it uses OpenSSH to talk to the box, which has to be the last service that comes up. If Packer loses its connection to OpenSSH because, say, the box restarted itself, you lose communication with the box, and the setup process stops in its tracks.

If you read earlier attempts to work around this, you saw that you could use time-outs, or kill the ssh process, which presumably makes Packer try to open a new connection. If you are like me, you experienced that Packer does indeed NOT retry. Because the previous task couldn’t finish, the restart killed the ssh service which would have told Packer that the previous task, an install for example, had finished. Hence, Packer will sit there and wait for that task to complete, which will never happen at this point.

What can we do? Enter the world of Answer Files. Basically, an answer file is an XML file which sets up Windows. When Packer runs with this file, the last service to be installed must be OpenSSH. After that, in the provisioning phase, you should only install software which does not require restarts.

Let’s look at an example.

Example #1: Windows Updates

This is another layer of purgatory for Windows: updates. The updates take a massive amount of time if you are doing them from scratch, and also require several restarts before they’re actually done. You could speed up the process a little if you have a private network share where all of the Windows updates are sitting; at least that way you don’t have to download them every time you create a box. But you can’t avoid the install process itself.

Let’s look at a setup for Packer. Packer uses JSON files for its configuration. An example for a Windows 7 box would look something like this:

{
  "builders": [
    {
      "type": "vmware-iso",
      "iso_url": "http://care.dlservice.microsoft.com/dl/download/evalx/win7/x64/EN/7600.16385.090713-1255_x64fre_enterprise_en-us_EVAL_Eval_Enterprise-GRMCENXEVAL_EN_DVD.iso",
      "iso_checksum_type": "md5",
      "iso_checksum": "1d0d239a252cb53e466d39e752b17c28",
      "headless": true,
      "boot_wait": "2m",
      "ssh_username": "vagrant",
      "ssh_password": "vagrant",
      "ssh_wait_timeout": "8h",
      "shutdown_command": "shutdown /s /t 10 /f /d p:4:1 /c \"Packer Shutdown\"",
      "guest_os_type": "windows7-64",
      "tools_upload_flavor": "windows",
      "disk_size": 61440,
      "vnc_port_min": 5900,
      "vnc_port_max": 5980,
      "floppy_files": [
        "./answer_files/7/Autounattend.xml",
        "./scripts/dis-updates.ps1",
        "./scripts/microsoft-updates.bat",
        "./scripts/win-updates.ps1",
        "./scripts/openssh.ps1"
      ],
      "vmx_data": {
        "RemoteDisplay.vnc.enabled": "false",
        "RemoteDisplay.vnc.port": "5900",
        "memsize": "2048",
        "numvcpus": "2",
        "scsi0.virtualDev": "lsisas1068"
      }
    },
    {
      "type": "virtualbox-iso",
      "iso_url": "http://care.dlservice.microsoft.com/dl/download/evalx/win7/x64/EN/7600.16385.090713-1255_x64fre_enterprise_en-us_EVAL_Eval_Enterprise-GRMCENXEVAL_EN_DVD.iso",
      "iso_checksum_type": "md5",
      "iso_checksum": "1d0d239a252cb53e466d39e752b17c28",
      "headless": true,
      "boot_wait": "2m",
      "ssh_username": "vagrant",
      "ssh_password": "vagrant",
      "ssh_wait_timeout": "8h",
      "shutdown_command": "shutdown /s /t 10 /f /d p:4:1 /c \"Packer Shutdown\"",
      "guest_os_type": "Windows7_64",
      "disk_size": 61440,
      "floppy_files": [
        "./answer_files/7/Autounattend.xml",
        "./scripts/dis-updates.ps1",
        "./scripts/microsoft-updates.bat",
        "./scripts/win-updates.ps1",
        "./scripts/openssh.ps1",
        "./scripts/oracle-cert.cer"
      ],
      "vboxmanage": [
        [
          "modifyvm",
          "{{.Name}}",
          "--memory",
          "2048"
        ],
        [
          "modifyvm",
          "{{.Name}}",
          "--cpus",
          "2"
        ]
      ]
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "remote_path": "/tmp/script.bat",
      "execute_command": "{{.Vars}} cmd /c C:/Windows/Temp/script.bat",
      "scripts": [
        "./scripts/vm-guest-tools.bat",
        "./scripts/chef.bat",
        "./scripts/vagrant-ssh.bat",
        "./scripts/disable-auto-logon.bat",
        "./scripts/enable-rdp.bat",
        "./scripts/compile-dotnet-assemblies.bat",
        "./scripts/compact.bat"
      ]
    }
  ],
  "post-processors": [
    {
      "type": "vagrant",
      "keep_input_artifact": false,
      "output": "windows_7_{{.Provider}}.box",
      "vagrantfile_template": "vagrantfile-windows_7.template"
    }
  ]
}

If it feels daunting, don’t worry. You’ll get used to it fairly quickly. Let’s go over section by section on what this does.

Builders

Packer uses builders for, well, building stuff. These two builders are VirtualBox and VMware. I’m only interested in VirtualBox. This builder downloads the Win7 ISO and sets up some VirtualBox details like disk size, the vagrant user, memory, and so on. The interesting part is the floppy section. Here, we can add some files for setup. We will use this later on.

Provisioners

Now here is an interesting tidbit. There are a bunch of provisioners available as plugins for Packer. Installing them is fairly easy: Packer needs binary plugins, so just copy them into ~/.packer.d/plugins, or directly into the Packer home directory. I’d advise against the latter; having them in your own packer.d is much cleaner. For binary plugin releases on the Windows side, look here => https://github.com/packer-community/packer-windows-plugins/releases. If you would like to build them yourself from source, download the source and use the Go toolchain to build it. You will have to go get a few packages, though, and have $GOPATH (pointing to your own workspace) and $GOROOT (pointing to your working Go install) set up. But this is not a Go guide. After that, just do go build main.go and you have your plugin.

Provisioners are like vagrant provision: they execute post-setup stuff on your box, like installing utilities, 7zip, choco, nuget, and so on. There are a few interesting Windows provisioners, like restart-windows, powershell, and windows-shell, which is like shell but without the need for pre-setup when used on Windows. The basic shell on Windows is a little clunky and can hang from time to time, so I recommend using the PowerShell or WindowsShell provisioner if you are dealing with Windows post-setup.

Post-Processor

This will create the Vagrant box after everything is done.

Running the Update

For us, two things are interesting here at this moment. These guys =>

        "./scripts/microsoft-updates.bat",
        "./scripts/win-updates.ps1",

These two contain most of the logic of the update process. You should see them in your checked-out source. There is some very interesting logic in there which describes how the update happens. Basically, it’s a loop which re-checks whether there are updates available or a restart is needed. Packer handles restarts well at this point in the install, because it simply waits for SSH to come back online. The rest is handled by Windows.

These scripts are called in the Answer File which the Windows Setup uses for configuration purposes. Take a look at this section:

                <SynchronousCommand wcm:action="add">
                    <CommandLine>cmd.exe /c a:\microsoft-updates.bat</CommandLine>
                    <Order>98</Order>
                    <Description>Enable Microsoft Updates</Description>
                </SynchronousCommand>
                <SynchronousCommand wcm:action="add">
                    <CommandLine>cmd.exe /c C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File a:\win-updates.ps1 -MaxUpdatesPerCycle 30</CommandLine>
                    <Description>Install Windows Updates</Description>
                    <Order>100</Order>
                    <RequiresUserInput>true</RequiresUserInput>
                </SynchronousCommand>

This is where the floppy part comes in handy. This part uses the scripts mounted on the floppy, which are available from a:.

This will install all the updates available. It will take a while. A very, very long while… But let’s go a step further.

Example #2: Installing DotNet 4.5

Let’s assume you want to create a box with Visual Studio 2013 and Office, have choco on it, and a couple more things which need lots of restarts. You could try installing with the /norestart switch, which also works; however, if you definitely need a restart, I suggest installing with the Answer File. For this, let’s create a PowerShell script which downloads and installs dotnet 4.5.1, which is needed for Visual Studio Ultimate 2013.

$Logfile = "C:\Windows\Temp\dotnet-install.log"
function LogWrite {
   Param ([string]$logstring)
   $now = Get-Date -format s
   Add-Content $Logfile -value "$now $logstring"
   Write-Host $logstring
}
 
LogWrite "Downlading dotNetFx40_Full_x86_x64."
try {
    (New-Object System.Net.WebClient).DownloadFile('http://download.microsoft.com/download/1/6/7/167F0D79-9317-48AE-AEDB-17120579F8E2/NDP451-KB2858728-x86-x64-AllOS-ENU.exe', 'C:\Windows\Temp\dotnet.exe')
} catch {
    LogWrite $_.Exception | Format-List -force
    LogWrite "Failed to download file."
}
 
LogWrite "Starting installation process..."
try {
    Start-Process -FilePath "C:\Windows\Temp\dotnet.exe" -ArgumentList "/I /q /norestart" -Wait -PassThru
} catch {
    LogWrite $_.Exception | Format-List -force
    LogWrite "Exception during install process."    
}

So this downloads it right from the source. As mentioned earlier, you could have this on a nice shared drive so that downloading from the internet isn’t necessary. The installer is in fact a bit friendly: /q means silent install and /norestart speaks for itself. If you leave /norestart out, you can use /forcerestart, or you could add the following two lines after the install finishes: LogWrite "Restarting Computer." followed by Restart-Computer -Force. This forces a restart; you need -Force because otherwise Windows won’t restart while there are active sessions logged on to the computer.

Now, let’s add this to the answer file:

                <SynchronousCommand wcm:action="add">
                    <CommandLine>cmd.exe /c C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File a:\install-dotnet-451.ps1 -AutoStart</CommandLine>
                    <Order>98</Order>
                    <Description>Install DotNet 4.5.1.</Description>
                </SynchronousCommand>

See how easy this is? Now we make use of the floppy part of windows-7.json by adding this line: “./scripts/install-dotnet-451.ps1”. Don’t forget to append the “,” at the end of the previous line; this is an array.
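With that line appended, the floppy_files section of the builder shown earlier would read:

```json
"floppy_files": [
  "./answer_files/7/Autounattend.xml",
  "./scripts/dis-updates.ps1",
  "./scripts/microsoft-updates.bat",
  "./scripts/win-updates.ps1",
  "./scripts/openssh.ps1",
  "./scripts/install-dotnet-451.ps1"
],
```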

We are ready to go. Just run packer build -only=virtualbox-iso windows-7.json and you should be done!

Example #3: Installing Visual Studio Ultimate

Installing Visual Studio is almost trivial as well, with the addition that Visual Studio requires an admin.xml with a bunch of settings for a silent install. When you have the admin.xml, just bind it into the floppy drive as well and call the Visual Studio installer from a PowerShell script like this:

    Start-Process -FilePath "C:\Windows\Temp\visualstudioultimate.exe" -ArgumentList "/Quiet /NoRestart /admin a:\admin.xml" -Wait -PassThru

Again, this will take a while…

Post Setup Provisioning

When all this is done, you can still add some provisioning steps to install utilities with the PowerShell or WindowsShell provisioner. I would advise against using plain shell. Bear in mind one other thing: if you have a batch file which calls another batch file, like choco install 7zip, the install process will hang on installing 7zip. In Windows land, the called script does not return control to the caller unless you specifically ask for it with call. Which means your bat file should look something like this:

call choco install 7zip
call choco install notepadplusplus

or

cmd /c choco install 7zip
cmd /c choco install notepadplusplus

And so on, and so forth.

Wrap-Up

So, what have we learned? Installing software which requires a restart is better left to Windows itself via an answer file. Batch files will not return control unless you call them. SSH MUST be the last thing you start up in the answer file. And use the PowerShell or WindowsShell provisioner on Windows.

Hope this helped.

Happy installing, and as always,

Thanks for reading.

Gergely.

06 Jun 2015, 00:00

Docker + Java + Vagrant+ GO.CD

Hello folks.

Today, I would like to write about something interesting and close to me at the moment. I’m going to set up Go.cd with Docker, and I’m going to get a Ruby Lotus app running. Let’s get started.

Fluff

Now, obviously, you don’t really need Go.cd or Docker to set up a Java Gradle application, since it’s dead easy. But I’m going to do it just for the heck of it.

Setup

Okay, let’s start with Vagrant. Docker’s strength comes from Linux’s process isolation capabilities, so it doesn’t yet work properly on OSX or Windows. You have a couple of options if you’d like to try nevertheless, like boot2docker or a tiny Linux kernel, but at that point I think it’s easier to use a VM.

Vagrant

So, let’s start with my small Vagrantfile.

# -*- mode: ruby -*-
# vi: set ft=ruby :
 
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure(2) do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.
 
  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://atlas.hashicorp.com/search.
  config.vm.box = "trusty"
  config.vm.box_url = "https://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box"
  config.vm.network "forwarded_port", guest: 2300, host: 2300
  config.vm.network "forwarded_port", guest: 8153, host: 8153
  config.vm.provision "shell", path: "setup.sh"
  config.vm.provider "virtualbox" do |v|
    v.memory = 8192
    v.cpus = 2
  end
end

Very simple. I’m setting up a trusty64 box (because Docker requires kernel 3.10 or newer) and then doing a simple shell provision. I also gave it a bit of juice, since go-server requires a fair amount of power. Here is the shell script:

#!/bin/bash
sudo apt-get update
sudo apt-get install -y software-properties-common python-software-properties
sudo apt-get install -y vim
sudo add-apt-repository -y "ppa:webupd8team/java"
sudo apt-get update
echo debconf shared/accepted-oracle-license-v1-1 select true | sudo debconf-set-selections && echo debconf shared/accepted-oracle-license-v1-1 seen true | sudo debconf-set-selections
sudo apt-get install -y oracle-java8-installer
wget -qO- https://get.docker.com/ | sh
route add -net 172.17.0.0 netmask 255.255.255.0 gw 172.17.42.1

The debconf line accepts Java 8’s terms and conditions, and the wget line installs Docker in my box. This runs for a little while…

The route at the end sends all traffic for 172.17.*.* (the Docker containers) through the Docker bridge, which in turn lets me use it from my Mac locally, like 127.0.0.1:8153/go/home.

After a vagrant up, my box is ready to be used.
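The whole bring-up, condensed into commands (a sketch; vagrant ssh -c runs a single command inside the guest, which is handy for confirming the provisioner did its job):

```shell
# Boot the box; the shell provisioner runs setup.sh on the first up
vagrant up

# Sanity-check that the provisioner installed Java and Docker
vagrant ssh -c "java -version"
vagrant ssh -c "docker --version"
```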

Docker

When that’s finished, we can move on to the next part, which is writing a little Dockerfile for our image. Go.cd requires Java and a couple of other things, so let’s automate their installation so we don’t have to do it by hand.

Here is a Dockerfile I came up with:

FROM ubuntu
MAINTAINER Skarlso
 
############ SETUP #############
RUN apt-get update
RUN apt-get install -y software-properties-common python-software-properties
RUN add-apt-repository -y "ppa:webupd8team/java"
RUN echo debconf shared/accepted-oracle-license-v1-1 select true | debconf-set-selections && echo debconf shared/accepted-oracle-license-v1-1 seen true | debconf-set-selections
RUN apt-get update
RUN apt-get install -y oracle-java8-installer
RUN apt-get install -y vim
RUN apt-get install -y unzip
RUN apt-get install -y git

So, our Docker image has to be set up with Java as well for Go.cd, which I’m taking care of here, plus a little extra: vim, unzip (which is required later), and git.

At this point run: docker build -t ubuntu:go . -> This will use the Dockerfile and create the ubuntu:go image. Note the "." at the end.

Go.cd

Now, I’m creating two containers. One, go-server, will be the go server, and the other, go-agent, will be the go agent.

First, go-server:

docker run -i -t --name go-server --hostname=go-server -p 8153:8153 ubuntu:go /bin/bash
wget http://download.go.cd/gocd-deb/go-server-15.1.0-1863.deb
dpkg -i go-server-15.1.0-1863.deb
service go-server start

Pretty straightforward, no? We forward 8153 to Vagrant (which forwards it to my Mac), so after starting the go-server service we should be able to visit: http://127.0.0.1:8153/go/home.

Lo and behold, the Go server. Let’s add an agent too.

docker run -i -t --name go-agent --hostname=go-agent ubuntu:go /bin/bash
wget http://download.go.cd/gocd-deb/go-agent-15.1.0-1863.deb
dpkg -i go-agent-15.1.0-1863.deb
vim /etc/default/go-agent
GO_SERVER=172.17.0.1
service go-agent start

No need to forward anything here. And as you can see, the agent was added successfully.
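The vim step above can also be scripted. A minimal sketch with sed, demonstrated here on a throwaway copy of the file (inside the go-agent container you would run the sed line against /etc/default/go-agent itself, using the 172.17.0.1 server address from above):

```shell
# Demo file standing in for /etc/default/go-agent
printf 'GO_SERVER=127.0.0.1\nGO_SERVER_PORT=8153\n' > go-agent.defaults

# Point the agent at the go-server container, non-interactively
sed -i 's/^GO_SERVER=.*/GO_SERVER=172.17.0.1/' go-agent.defaults

grep '^GO_SERVER=' go-agent.defaults
```

After the edit, a service go-agent start picks up the new value.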

All nice, and dandy. The agent is there, and I enabled it, so it’s ready to work. Let’s give it something to do, shall we?

The App

I’m going to use my gradle project which is on github. This one => https://github.com/Skarlso/DataMung.git.

Very basic setup. Just check it out and then build & run tests. Easy, right?
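What the pipeline boils down to, expressed as plain commands (a sketch; I’m assuming the repository ships the Gradle wrapper, so no local Gradle install is needed):

```shell
# Fetch the material, then build and run the tests
git clone https://github.com/Skarlso/DataMung.git
cd DataMung
./gradlew assemble test
```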

First step in this process: define the pipeline. I’m going to keep it simple. Name the pipeline DataMunger; the group is Linux. Now, in Go.cd you have to define something called an environment. An environment can be anything you want; I’m going to go with Linux. You then assign to the environment the agents that fulfil it and the pipelines that will use it. You can read more about this in the Go.cd documentation. This is how you would handle a pipeline that uses a Linux and a Windows environment at the same time.

In step one you have to define something called the material. That will be the source the agent works on. There can be multiple materials, in different folders within the confines of the pipeline, or just one.

I defined my git project and tested the connection OK. Next up is the first Stage and the initial Job to perform. This, for me, will be a compile or an assemble, and later on a test run.

Now, Go is awesome at parallelising jobs. If my project were large enough, I could have multiple jobs here. But for now, I’ll use stages, because they run sequentially. So, first stage: compile. Next stage: testing and archiving the results.

I added the next stage and defined the artefact. Go supports test reports: if you define the path to a test artefact, then Go will parse it and create a nice report out of it.

Now, let’s run it. It will probably fail on something. 😉

Well, I’ll be… It worked on the first run.

And here are the test results.

Wrap-up

Well, that’s it folks. Gradle project, with vagrant, docker, and go.cd. I hope you all enjoyed reading about it as much as I did doing it.

Any questions, please feel free to ask them in the comment section below.

Cheers, Have a nice weekend, Gergely.