Friday, October 25, 2019

Run a sample Dapr application on OpenShift

Dapr web site - link

Summary: Dapr: 'An event-driven, portable runtime for building micro-services on cloud and edge'


The point of this page is to demonstrate that the framework can be tried on OpenShift as well as minikube, which is documented as the normal way to run the sample code. Please note that the security changes shown are not recommended beyond a proof of concept; more fine-grained permissions should be set.

How to run a sample application on OpenShift

The following instructions take the sample (2. Hello-Kubernetes) tutorial and get it running on OpenShift. This sample is located at: https://github.com/dapr/samples/tree/master/2.hello-kubernetes

Pre-reqs:
  • crc installed locally from RedHat (need a developer account to download from cloud.redhat.com) 
  • dapr installed locally (see dapr site for download instructions) -

Changes from sample documented instructions:
  1. Not sure this is necessary, but I used a different Redis install from here: https://www.callicoder.com/deploy-multi-container-go-redis-app-kubernetes/ - I assume this was because I couldn't get the configuration of the password correct, and security for Redis is not a priority when just 'kicking the tyres'
  2. Run crc, i.e. 'crc start', ensuring it has enough CPU & RAM assigned, e.g. on a MacBook with 16GB of RAM my config is: 7 cpus, 16384MB RAM
  3. Login to OpenShift as user kubeadmin & create a test project test1: 'oc new-project test1'
  4. Add the following permissions to the project:
    • oc adm policy add-scc-to-user anyuid -z default -n test1
    • oc adm policy add-scc-to-user privileged -z default -n test1
  5. Deploy redis-master.yaml from the go-redis-kubernetes/deployments directory: 'oc apply -f redis-master.yaml'
  6. Create a file called redis-state.yaml, and paste the following:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: redis-master:6379 

7.  Create a file called redis-pubsub.yaml, and paste the following:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus
spec:
  type: pubsub.redis
  metadata:
  - name: redisHost
    value: redis-master:6379 


8. Deploy dapr: for crc only the advanced helm deployment worked:
oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:test1:default
oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:kube-system:default
helm init
helm repo update
helm install dapr/dapr --name dapr --namespace test1
Note: it may take a few minutes for Tiller to be installed after 'helm init' (assuming your version of Helm still uses Tiller).
If you get the error 'components.dapr.io already exists' then run 'helm delete --purge dapr'.
9. Add the following permission to the dapr service account:
  • oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:test1:dapr-operator
10. Check that the 3 dapr pods are running OK with no errors logged

11.  Apply both of the above files created in steps (6) and (7) to OpenShift using 'oc apply -f <file>'

12. Change the 2.Hello-Kubernetes redis deployment file, redis.yaml, in the deploy directory so that the metadata part contains just the following two lines, i.e. remove the password part:
  - name: redisHost
    value: redis-master:6379
13. Deploy all the files in the deploy directory: redis.yaml, node.yaml & python.yaml
14. When looking at the nodeapp logs you should see orders being received and persisted.

Run the Binding (Kafka) Sample

This assumes the first example has been performed and that crc, helm and dapr are already installed.
  1. Install Kafka using the Strimzi operator:
  2. Run: oc apply -f https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.14.0/strimzi-cluster-operator-0.14.0.yaml -n test1
  3. Download the file https://raw.githubusercontent.com/strimzi/strimzi-kafka-operator/0.14.0/examples/kafka/kafka-persistent-single.yaml (e.g. with 'curl -O' or 'wget') & edit it to change the 100Gi volumes to 1Gi
  4. Then oc apply -f kafka-persistent-single.yaml -n test1
  5. oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:test1:strimzi-cluster-operator
  6. Create the kafka topic by writing to it from a console producer:
oc -n test1  run kafka-producer -ti --image=strimzi/kafka:0.14.0-kafka-2.3.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9092 --topic sample

  7. In the dapr/samples/5.bindings/deploy directory change the kafka_bindings.yaml file, replacing the value 'dapr-kafka.kafka:9092' with 'my-cluster-kafka-bootstrap:9092' to align with the Kafka cluster created by the Strimzi operator
  8. Apply kafka_bindings.yaml, node.yaml & python.yaml using 'oc apply -f <file>'
  9. The logs from the Node app should show the example working 🙂


Tuesday, February 19, 2019

An easy way to build a RESTful micro service in Go to access a Cassandra Table

TL;DR

This code repo allows one to run a single program to create a RESTful micro service that can read a single row from a Cassandra table, using the primary key fields to restrict the SELECT clause, and return it in JSON format. As of 22nd April 2019 it can also POST into simple tables!

It is hosted by the go-swagger framework and uses gocql as the Cassandra data access layer. The code is all Go, and a single main program provides a single command to accomplish this feat. It should be considered alpha software because I have only tested it using a limited number of use cases, but I hope it should suffice for many uses as is.

Motivation

I had a few reasons to write this code. One was a question from a former client "Can't we generate the micro services to read data from Cassandra?", another was a desire to learn a new programming language, Go, with a real project, but probably the main reason was I wanted to create something other than PowerPoints like I did in the good old days 😏 

My Approach

I wanted to generate all the code so I started looking at the Swagger tooling, but whilst looking good for Java, I didn't find that the generator for Go worked very well. A little looking around and I came across go-swagger, which worked first time. This tool lets one generate server or client side code from an input swagger specification and there are loads of capabilities it provides that I haven't even looked into.

The data access options were quite limited, but once I found gocql  and checked out the code I was happy with my choice. 

Both of the core technologies have active communities and mature code.

That left the question of how to create the handler code to run in the framework created by go-swagger that would retrieve the data. To do this I needed to write some code myself. The approach I took was a 4 step process:
  1. Parse the Cassandra DDL that defined the table and any required types (UDTs) to create the swagger API specification for the table
  2. Use go-swagger to generate the RESTful framework for the API defined in the created swagger file
  3. Using the parser output from (1) create the handler function that uses gocql to access the Cassandra table
  4. Patch the generated go-swagger code to call the functions of my generated data access code
All of the above can be (and has been) implemented in a single main.go file

Code Structure & Running Output

The main.go program resides in the root folder, the sub directories contain:
  • handler - the code that generates the data access code
  • parser - the code that parses the Cassandra DDL file 
  • swagger - the code that takes the parser output and creates the swagger output file
The main.go file expects a number of program arguments to be set:
  • -file 
  • -goPackageName
  • -dirToGenerateIn
The file parameter defines the full path of the input Cassandra DDL file to process.

The goPackageName is that of the desired Go package name.

The dirToGenerateIn is optional as it defaults to /tmp, but without go modules it needs to be set to a directory under $GOPATH/src

The example command I used was:

go run main.go -file=/Users/stevef/Source_Code/go/src/github.com/stevef1uk/test4/t.cql \
  -goPackageName=github.com/stevef1uk/test4 \
  -dirToGenerateIn=/Users/stevef/Source_Code/go/src/github.com/stevef1uk/test4

There are several other flags that may be set e.g. -debug=true & -endPoint=<end point name> 

Then, in the directory where go-swagger will have created the framework run:

export CASSANDRA_SERVICE_HOST=127.0.0.1
go run cmd/simple-server/main.go --port=5000

Assuming all is well you should see:

2019/02/19 22:18:55 Tring to connect to Cassandra database using  127.0.0.1
2019/02/19 22:18:55 Yay! Connection to Cannandra established
2019/02/19 22:18:55 Serving simple at http://127.0.0.1:5000

At this point the microservice will be accessible at the following URL:

http://127.0.0.1:5000/v1/<table name>

Note: <end point name> would override <table name>

e.g. curl -X GET "http://127.0.0.1:5000/v1/accounts4?id=1&name=steve&time1=2013-01-01T00:00:00.000Z"

The Parser

In the (very distant) past I had used the Unix tools, lex & yacc, to create and parse a formal language for the pretentiously named General Purpose Test Tool. This was a Visual Basic like language that I designed to let a software house test its 'C' code in the relatively dark ages of computing (early 1980s). I even had a nice book on these tools, which my wife persuaded me to throw away a year or so ago when she pointed out that this book had been in our attic for decades and I was not likely to need it again 😢

I took a look at the Cassandra schema for DDL and decided parsing it using a lex & yacc approach was going to be very challenging, so I decided to try using regular expression matching instead. This approach worked, but the code was very hard to follow making it hard to extend and maintain. Therefore, I rewrote it using the Finite State Machine approach together with regular expressions. I did look at a couple of FSM libraries, but in the end wrote my own for this.
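A toy version of such an FSM, with states and regular expressions of my own choosing rather than the repo's actual implementation, might look like this: each DDL line advances the machine, and processing stops at WITH, as the real parser does.

```go
package main

import (
	"regexp"
	"strings"
)

type state int

const (
	seekCreate state = iota // waiting for a CREATE TABLE line
	inColumns               // collecting column definitions
	done                    // saw WITH; stop processing
)

var createRe = regexp.MustCompile(`(?i)^\s*CREATE\s+TABLE\s+([\w.]+)`)

// parseDDL walks the DDL line by line, moving between states on
// regex matches, and stops when it reaches the WITH clause.
func parseDDL(ddl string) (table string, columns []string) {
	st := seekCreate
	for _, line := range strings.Split(ddl, "\n") {
		trimmed := strings.TrimSpace(line)
		upper := strings.ToUpper(trimmed)
		switch st {
		case seekCreate:
			if m := createRe.FindStringSubmatch(line); m != nil {
				table = m[1]
				st = inColumns
			}
		case inColumns:
			switch {
			case strings.HasPrefix(upper, "WITH"), strings.HasPrefix(upper, ") WITH"):
				st = done
			case strings.HasPrefix(upper, "PRIMARY KEY"):
				// key fields are handled separately in the real parser
			case trimmed != "" && trimmed != ")":
				columns = append(columns, strings.TrimSuffix(trimmed, ","))
			}
		case done:
			return table, columns
		}
	}
	return table, columns
}
```

This toy version ignores UDTs and collection types with embedded commas, which is exactly where the real FSM earns its keep.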

Go-Swagger

Once a swagger file is created the command 'swagger generate server -f swagger.json' will create the server code. This command will also list the packages that need to be installed to build the framework successfully.

The file in the restapi folder called configure_simple.go is the one that step (4) above patches. Step (3) creates the generated data access file in a new directory called data. This file is called Generatedhandler.go

Handler Options

By default all of the fields defined as primary fields will be used to select data from the database. If the flag numberOfPrimaryKeys is set to a number, then only that number of keys will be used in the select statement, in the order in which they are defined. All of them will still need to be passed on the RESTful API call though.
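The effect of numberOfPrimaryKeys can be sketched like this (a simplified illustration of the idea, not the generated code):

```go
package main

import (
	"fmt"
	"strings"
)

// buildSelect restricts the WHERE clause to the first n primary-key
// fields, in the order they were declared, mirroring what the
// numberOfPrimaryKeys flag does. n <= 0 means use all of them.
func buildSelect(table string, primaryKeys []string, n int) string {
	if n <= 0 || n > len(primaryKeys) {
		n = len(primaryKeys)
	}
	conds := make([]string, 0, n)
	for _, k := range primaryKeys[:n] {
		conds = append(conds, k+" = ?")
	}
	return fmt.Sprintf("SELECT * FROM %s WHERE %s", table, strings.Join(conds, " AND "))
}
```

For example, buildSelect("employee", []string{"id", "mediate", "second_ts"}, 2) yields SELECT * FROM employee WHERE id = ? AND mediate = ?.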

The consistency flag can be set to one of the standard gocql consistency modes; the default is gocql.One

The allowFiltering flag was supposed to add this clause to the generated select statement, but I seem to have forgotten to implement it 😑

The handler code has its own Setup() and Stop() functions that establish a connection to the Cassandra database defined by the CASSANDRA_SERVICE_HOST environment variable.

The micro service can be scaled horizontally. If deploying the micro service to something like OpenShift or Kubernetes, I found out the hard way that the pod needs to be configured to listen on all networks, e.g. --host=0.0.0.0, in order to run successfully.

Tests

I like testing my software as it gives me the confidence to refactor, so each sub package contains test modules.

The hard part with testing the handler code was having to define Cassandra schemas and populate the resultant tables with data! I have tested some User Defined Types alongside a table.

The most complex test I ran had the following schema:

CREATE TYPE demo.simple (
    dummy text
);

CREATE TYPE demo.city (
    id int,
    citycode text,
    cityname text,
    test_int int,
    lastUpdatedAt TIMESTAMP,
    myfloat float,
    events set<int>,
    mymap  map<text, text>,
    address_list set<frozen<simple>>
);

CREATE TABLE demo.employee (
    id int,
    address_set set<frozen<city>>,
    my_List list<frozen<simple>>,
    name text,
    mediate TIMESTAMP,
    second_ts TIMESTAMP,
    tevents set<int>,
    tmylist list<float>,
    tmymap  map<text, text>,
   PRIMARY KEY (id, mediate, second_ts )
 ) WITH CLUSTERING ORDER BY (mediate ASC, second_ts ASC)

Note: using describe table is the best way to populate this file. The FSM is configured to look for the WITH text to end processing, so if you have a data type or field named WITH you will need to change the FSM's terminator to WITH CLUSTERING.

To insert data into this table I used:

insert into employee ( id, mediate, second_ts, name,  my_list, address_set  ) values (1, '2018-02-17T13:01:05.000Z', '1999-12-01T23:21:59.123Z', 'steve', [{dummy:'fred'}], {{id:1, mymap:{'a':'fred'}, citycode:'Peef',lastupdatedat:'2019-02-18T14:02:06.000Z',address_list:{{dummy:'foobar'}},events:{1,2,3} }} ) ;

Then, running the main & testing gave me:

curl -X GET "http://127.0.0.1:5000/v1/employee?id=1&mediate=2018-02-17T13:01:05.000Z&second_ts=1999-12-01T23:21:59.123Z"
[{"address_set":[{"address_list":[{"dummy":"foobar"}],"citycode":"Peef","events":[1,2,3],"id":1,"lastupdatedat":"2019-02-18 14:02:06 +0000 UTC","mymap":{"a":"fred"}}],"id":1,"mediate":"2018-02-17 13:01:05 +0000 UTC","my_list":[{"dummy":"fred"}],"name":"steve","second_ts":"1999-12-01 23:21:59.123 +0000 UTC","tevents":[],"tmylist":[],"tmymap":{}}]








Friday, June 13, 2014

Here be Dragons! How to cross compile a linux kernel for the RPi

Introduction

It goes without saying that if you aren't a serious geek and you have stumbled upon this blog by mistake then I would leave now. This is about as hard core techie as I get these days.
During my recent spell of relaxing in the Alps and watching the world go by I played with docker on my Raspberry Pi (RPi), since those helpful Resin guys had done all the heavy lifting for me. Great, but the linux distribution they had used was Arch Linux, which just felt too alien to me since the default Raspbian distribution is based on Debian Wheezy. Wouldn't it be great, I thought, to install docker on the Raspbian distribution? It can't be hard, can it, since those Resin guys have already done it. Hmmmm!
Anyway, this blog isn't going to explain how to get docker running on Raspbian. I will save that story for another day. This post is about how to cross compile the linux kernel for RPi, so let's get started. Why cross compilation? You can compile and build the linux kernel on the RPi directly. This was my first approach, but it takes a very very long time. The second time I did this I got bored and decided to learn how to cross compile and it was so much faster.
In order to work out how to do this I read some useful blogs, which I will reference at the end.

Prerequisites

I assume you have a decent development machine such as a MacBook. I used my top-spec MacBook Air for the job, but I'm sure a Windows machine would do it equally well. The key tool to have as a starting point is Vagrant, and I used an Ubuntu base image for this. I won't explain Vagrant and assume you know how to use this tool.
Vagrant and a fast broadband connection are all you need.
If you want to actually deploy your built kernel to your RPi I would take a copy of its kernel configuration and scp it to the vagrant directory on your host box first. You can get the configuration on the RPi via the command:
zcat /proc/config.gz > .config

A Step by Step Guide

1. First install all the tools required

sudo apt-get install libncurses5-dev gcc make git bc
sudo apt-get install libc6:i386 libgcc1:i386 gcc-4.6-base:i386 libstdc++5:i386 libstdc++6:i386 lib32z1 lib32ncurses5 lib32bz2-1.0 
The 2nd command ensures that you have the 32 bit include files required for the RPi, as this is a 32 bit processor.

2. Install the Raspbian Tool Chain for Cross Compiling

sudo su
cd /opt
git clone git://github.com/raspberrypi/tools.git

export CCPREFIX=/opt/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian/bin/arm-linux-gnueabihf-

3. Download the Linux Kernel for the RPi

cd /opt
mkdir raspberrypi
cd raspberrypi
git clone git://github.com/raspberrypi/linux.git
export KERNEL_SRC=/opt/raspberrypi/linux
For this blog I am assuming you will build the most recent kernel.

4. Configure the Kernel

I prefer to ensure I am starting from a clean state first.
ARCH=arm CROSS_COMPILE=${CCPREFIX} make mrproper
Then copy the existing RPI configuration to act as a starting point
cp /vagrant/.config .
ARCH=arm CROSS_COMPILE=${CCPREFIX} make menuconfig
Unless you are feeling brave or need to change the configuration of the kernel you are building, as you would to build docker, then just save the configuration and exit this GUI.

5. Build the Kernel

Since even on a modern machine this will take a while (about 30 minutes on my machine) you might want to use a script and nohup this, but the raw command is:
ARCH=arm CROSS_COMPILE=${CCPREFIX} make &
Assuming all went well you will have a new linux kernel built and now need to make the kernel modules for it.
ARCH=arm CROSS_COMPILE=${CCPREFIX} make modules_install
Now you should have a new linux kernel here:
/opt/raspberrypi/linux/arch/arm/boot/Image
The modules will have been built here: /lib/modules, e.g. at the time I wrote this blog the kernel was at version 3.12.18+
/lib/modules/3.12.18+
Now there is a tool to run to prepare the kernel for the RPi
cd /opt/tools/mkimage
python ./imagetool-uncompressed.py /opt/raspberrypi/linux/arch/arm/boot/Image
Note: This isn't required for the Raspberry Pi 2. For that just copy the zImage kernel to kernel7.img in /boot 
The compressed image will be in your current directory called kernel.img.
You will need to copy this across to /boot directory on the RPi

6. Install The New Kernel on the RPi

IMPORTANT DISCLAIMER: If you follow these instructions it is possible that your RPi may not boot and you will need to enter emergency recovery procedures. You have been warned, and I won't be held accountable or responsible.
First backup your old kernel, then copy the kernel modules and new kernel over to the RPi and reboot.
cp /boot/kernel.img /boot/kernel-old.img

cp kernel.img /boot/

You also need to install the built kernel modules directory into /lib/modules on the RPi. I assume you know how to do this using tar & scp.

reboot
That is it!
Hopefully, you will be running ok on a new kernel and uname -a will show that you are running on the correct kernel version

References

1. Ken Cochrane - Getting docker up on a RaspberryPi
2. RPi Kernel Configuration

How to run Docker on a Raspberry Pi

Introduction

The easy way to just run docker on a RPi is just to follow these instructions, however, this requires you to create a dedicated SD card which runs Arch Linux. Since that distribution isn't my cup of tea I decided I wanted to run Docker on the standard Raspbian Debian Wheezy distribution. Although the docker executable can be downloaded in binary form for the RPi it won't run on the standard Raspbian kernel because it requires LXC containers and AUFS which aren't there. This blog post will explain how you can enable these features.
To prevent you or me having to perform the steps below I have copied the resultant kernel onto github. I did notice that the CPU utilisation was a little high, so I added the following line to the file /boot/config.txt to resolve that:
cgroup_disable=memory
Since this requires making changes to the kernel configuration and building a linux kernel I suggest that you first read my other blog on how to do this. Once you have managed to build a linux kernel then the steps below explain how to customise it to support docker.

Downloading AUFS

Assuming you already have the linux source code for the RPi downloaded in the vagrant ubuntu VM, follow these steps.
sudo su
cd /opt/raspberrypi/linux
git clone git://aufs.git.sourceforge.net/gitroot/aufs/aufs3-standalone.git
I had a number of challenges getting AUFS to compile with the Linux kernel and I found the path of least resistance was to use the following versions:
  • Linux rpi-3.10.y
  • AUFS  aufs3.10
Therefore set the appropriate git branches:
git checkout rpi-3.10.y
cd aufs3-standalone
git checkout origin/aufs3.10
Now we need to actually patch the AUFS code into the appropriate linux kernel source code:
cp -rp *.patch ../
cp -rp fs ../
cp -rp Documentation/ ../
cp -rp include/ ../
cd ..

patch -p1 < aufs3-base.patch
patch -p1 < aufs3-mmap.patch
patch -p1 < aufs3-standalone.patch
patch -p1 < aufs3-kbuild.patch
patch -p1 < aufs3-loopback.patch
You should have seen a lot of 'Hunk #x succeeded' messages and two failures. We need to fix one of these as the 2nd can be ignored.
patching file mm/fremap.c 
Hunk #1 FAILED at 202. 1 out of 1 hunk FAILED -- saving rejects to file mm/fremap.c.rej
..
patching file include/uapi/linux/Kbuild
Hunk #1 FAILED at 56.
1 out of 1 hunk FAILED -- saving rejects to file include/uapi/linux/Kbuild.rej
Ok, so now we need to edit mm/fremap.c to fix the issue as follows. I use vi (which shows my age), but with your favourite editor first look at the mm/fremap.c.rej file:
--- mm/fremap.c
+++ mm/fremap.c
@@ -202,11 +202,12 @@
        */
        if (mapping_cap_account_dirty(mapping)) {
                        unsigned long addr;
-                       struct file *file = get_file(vma->vm_file);
+                       struct file *file = vma->vm_file;
+                       vma_get_file(vma);
                        addr = mmap_region(file, start, size,
                                        vma->vm_flags, pgoff);
-                       fput(file);
+                       vma_fput(vma);
                        if (IS_ERR_VALUE(addr)) {
                                err = addr;
                        } else {
This is a diff format file. The important points are that the changes start at line 202 in the original file and that the lines that start with a '+' need to be added to the original source code and the lines that start with a '-' need to be removed.
Now we understand what we need to do we can actually edit mm/fremap.c and look at the code at line 202
*/
                if (mapping_cap_account_dirty(mapping)) {
                        unsigned long addr;
                        struct file *file = get_file(vma->vm_file);
                        /* mmap_region may free vma; grab the info now */
                        vm_flags = vma->vm_flags;

                        addr = mmap_region(file, start, size, vm_flags, pgoff);
                        fput(file);
                        if (IS_ERR_VALUE(addr)) {
                                err = addr;
                        } else {
                                BUG_ON(addr != start);
                                err = 0;
                        }
                        goto out_freed;
                }
We can then change this to be:
*/
                if (mapping_cap_account_dirty(mapping)) {
                        unsigned long addr;
                        // Remove struct file *file = get_file(vma->vm_file);
                        struct file *file = vma->vm_file;
                        /* mmap_region may free vma; grab the info now */
                        vm_flags = vma->vm_flags; /* Add */

                        addr = mmap_region(file, start, size, vm_flags, pgoff);
                        // Remove fput(file);
                        vma_fput(vma); /* Add */
                        if (IS_ERR_VALUE(addr)) {
                                err = addr;
                        } else {
                                BUG_ON(addr != start);
                                err = 0;
                        }
                        goto out_freed;
                }
Now we are ready to configure the kernel. I am going to only summarise the changes here since Ken Cochrane has provided good screen shots in his blog.
As I mentioned in my last post on how to build a kernel you should first start from the existing RPi configuration file (.config). Assuming you have done this then to configure the linux kernel you need to run:
ARCH=arm CROSS_COMPILE=${CCPREFIX} make menuconfig
The configuration parameters you need to set are as follows:
  1. General -> Control Group Support -> Memory Resource Controller for Control Groups (and its three child options)
    1. To reach the Control Group Support just scroll down and then press enter when on it. Whilst here also enable Cpu set Support (see next point). To set press space bar and you will see an asterisk appear next to the option. The escape key brings you up one level of menu.
  2. General -> Control Group Support -> cpuset support
  3. Device Drivers -> Character Devices -> Support multiple instances of devpts
    1. You will need to hit the escape key several times to get back to the original main screen, then scroll down to see the Device Drivers section. As before press space bar with this entry highlighted, then scroll down to Character Devices and press space again. Then go up one level using escape for the next entry
  4. Device Drivers -> Network Device Support -> Virtual ethernet pair device
  5. File Systems -> Miscellaneous filesystems -> select "Aufs (Advanced multi layered unification filesystem) support (NEW)" (mine was at the very bottom)
  6. Now save the configuration and exit the tool.
I tend to go back into the tool and check that the above configuration items are actually set before kicking off the kernel build as described in my previous blog post; don't forget to ensure that you have set CCPREFIX.
Assuming all is well you will now have a kernel which you should copy to the RPi as I described in my other blog post. Once successfully running on this kernel  you will need to install the LXC libraries used by docker.
I have been able to test the above parts of my blog using a new vagrant image. Now, as I don't have a spare SD card, I am having to rely on the notes that I took when I completed my installation on my RPi. The original blog post that I followed is still useful.
On the RPI:
sudo su
mkdir /opt/lxc
cd /opt/lxc
git clone https://github.com/lxc/lxc.git
apt-get install automake libcap-dev
cd lxc
./autogen.sh && ./configure && make && make install
Now to check that LXC is working correctly on the RPi type:
pi@raspberrypi /opt $ lxc-checkconfig
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: missing
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/local/bin/lxc-checkconfig
This will show that the kernel is ready for docker to be installed. I installed docker by downloading the tar file from Resin (https://github.com/resin-io/lxc-docker-PKGBUILD/releases) and extracting it as root from /
sudo su
cd /
tar xvf docker*.tar.xz
Since I have a hard drive on my RPi and the docker images can be large, I symbolically linked /var/lib/docker to /usbmnt1 (my hard drive mount), having first copied the contents of /var/lib/docker to a directory on the hard drive. Whilst this appeared to work, the -v flag to mount local directories across to the docker images didn't work. Therefore, I removed the symbolic link, repopulated the /var/lib/docker directory and then used the -g flag when starting docker to store the images on /usbmnt1.
Then you can start docker and pull down a Raspbian image to base images on
sudo su -
export LD_LIBRARY_PATH=/usr/local/lib
nohup docker -d &
docker pull  resin/rpi-raspbian
docker run -i -t rpi-raspbian /bin/bash
Finally, here is a screen shot of docker working on my RPi.
[screen shot of docker running on the RPi]
Happy dockering on the RPi!
If you want to download an image with Java & Tomcat already installed I prepared one earlier, which you can pull with the tag seahope/rpidockerjavatomcat; it dates from my original trying out of docker on the RPi.

Wednesday, November 16, 2011

How to display Static HTML Pages from Grails

There must be an easy way to do this, but after a lot of Googling and some failures I used the following approach.

Store the HTML file in the web-app/WEB-INF folder of your grails application in a suitable subdirectory.

Where you wish to provide a link in a GSP page add the following, changing the file and dir strings as required:

 <g:link controller="ServeStatic" action="index"
params="[file:'disclaimer.html',dir:'/WEB-INF/FutureLetters/legal/']" >
Our WebSite Disclaimer</g:link>

Create the Grails Controller to render the file:

package com.yourdomain

import org.codehaus.groovy.grails.commons.ApplicationHolder

class ServeStaticController {

    def index() {
        def file = params.file
        def dir = params.dir

        // Resolve the directory relative to the application context
        File layoutFolder = ApplicationHolder.application.parentContext.getResource("${dir}").file

        File htmlFile = new File("${layoutFolder.absolutePath}/${file}")

        // File.text reads the whole file and closes the stream for us
        render htmlFile.text
    }
}

Tuesday, September 13, 2011

A 101 on how to set-up CI for a Grails applications using Cloudbees

Synopsis: In this article I explain how I was able to set-up a Git repository on GitHub.com, populate it with a sample Grails application and then set-up Jenkins on Cloudbees.com to poll the GitHub repository for changes, build the application and deploy it to the Cloud on Cloudbees. After reading this article you should be able to repeat the process in less than 30 minutes yourself.


Prerequisites: I assume that you have:
  • Downloaded Grails and have a JVM installed from http://grails.org/
  • Are comfortable using a Unix like terminal shell (I am using Mac OS. I could have used Windows, but at home I prefer to use my MacBook)
Feedback
Whilst I will endeavour to correctly describe all of the steps required, I am very human so please feel free to submit any corrections. I was motivated to write this as the information I managed to find on the Internet was fairly patchy and didn't quite work for me. I have run this through using new Github and Cloudbees Ids in an attempt to QA my own work.

The Steps to Follow: 

1. Setup Master Git Repository 
  • If you haven't already register for an account on http://github.com/
  • Click on the Set up Git tab and download and install git for your machine if not already present. This page explains how to set up your Github user name and email address in your local git global configuration file and how to set-up and store your ssh key.
  • Create a local ssh key for GitHub access.

  • Follow the instructions on how to configure git to be able to access your newly created GitHub repository.

  • Next create a repository on GitHub. I called my first one grailstest, so this can later be accessed using the access string: git@github.com:<your-github-id>/<your_repo_name>.git (my user name on Github is seahope1).


  • Next from your Github Account Settings tab access the SSH Public Keys tab and store your local ssh public key for your machine which you need to create as explained on the Set up Git tab. The default public key to store on GitHub will be stored locally in ~/.ssh/id_rsa.pub


  • You will also need to add the Cloudbees public key which you can find within the Jenkins job which you will create later. I will come back to this.

2. Create a local working git repository
  • In a local terminal window follow the instructions you get when you click on your empty repository on GitHub. I have added the 'grails create-app first' command to create a basic Grails application so that there are files to add to the repository:
git init
grails create-app first
git add first 
git commit -m 'first commit'
git remote add origin git@github.com:<your-github-id>/grailstest.git
git push -u origin master
  • This will upload the default Grails application to your GitHub.com repository. You will be asked for your GitHub username (not email address) and password when you use git push.
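Before moving on, a quick sanity check of the local repository wiring can save debugging later (a sketch, not part of the original post):

```shell
# 'origin' should point at your GitHub repository over ssh
git remote -v

# After 'git push -u origin master' the local master branch
# should be tracking origin/master
git branch -vv

# And the latest commit should be the one you just pushed
git log --oneline -1
```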
3. Set up Cloudbees.com for Continuous Integration
  • If you haven't already, sign up for an account at http://cloudbees.com/
  • Make sure you subscribe to the DEV@cloud & RUN@cloud services. I selected the free options.

  • From the Jenkins page select Manage Jenkins -> Manage Plugins and enable the Git plugin. Select Install, and restart Jenkins when asked.
  • You also need to enable the Grails and GitHub plugins, which are listed under the Available tab.

  • Again, you will need to restart Jenkins for these changes to take effect, which may take a few minutes.
  • From the Home screen -> Applications -> Add new application and call it: firstgrails

  • From the Jenkins screen create a New Job, select Build a free-style software project and name the job first.

  • Setup the following fields as follows:
    • Project name: first
    • GitHub project: git@github.com:<your-github-id>/<your-repo>.git
    • Source Code Management -> Git repositories selected. URL of repository: git@github.com:<your-github-id>/<your-repo>.git
    • Branches to build -> Branch Specifier: **



    • A while ago I mentioned the Cloudbees public key. It is located under the heading Cloudbees DEV@CLOUD Authorization (towards the top of the screen). This is the key that needs to be entered in GitHub, as described above, as another SSH public key. Once this key is added to GitHub for your account, Jenkins will be authorised to access the repository to get the source to build.
    • Build Triggers -> select Poll SCM and enter the pattern: * * * * *
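The five-field pattern is standard cron syntax; as a quick sketch of what the fields mean (this annotation is mine, not from the original post):

```
# minute hour day-of-month month day-of-week
  *      *    *            *     *          # poll the repository every minute
```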
    • Build -> Add Build Step and select Grails



    • Build -> Build with Grails -> Grails Installation: Grails 1.3.7 (or your version)
    • Targets: "war target/first.war"
    • Project base directory: first


    • Post-build actions -> Archive the artifacts -> Files to archive: first/target/first.war
    • Deploy to Cloudbees selected, Cloudbees site:<your-Cloudbees-id> (auto-populated)
    • Application Id: <your-Cloudbees-id>/firstgrails
    • Filename pattern: first/target/first.war



    • You can safely ignore the error message as it is just telling you that the grails target has not been created through a build yet.
    • Save the job and select Build Now to test the job. 
    • At this point you will be asked to validate your account by clicking on a URL. The page displayed will ask for a telephone number where the automated call back service will provide a pin number which you need to enter to complete the validation process.
    • Once you have successfully validated you will be able to run the job.



    • The Build History window will show progress. If you click on the build number (hopefully with a blue successful icon showing) you can use Console Output [raw] to check what happened, and if the build succeeded you can run the deployed application by clicking the link: http://firstgrails.<your-cloudbees-id>.cloudbees.net. You can find the link on the Application screen:



4. Make a change to the application and prove CI & CD works
  • In the Grails application change the file: grailstest/first/grails-app/views/index.gsp, e.g. change the text "Welcome to Grails" to "Welcome you to Grails" in both places. Then execute the commands:
    git add index.gsp 
    git commit -m 'simple change to index.gsp'
    git push -u origin master
  • You will notice on the Cloudbees Jenkins dashboard that after about one minute another build will be executed, and once it completes successfully the deployed application will show the change.
Next Steps
  • You will notice that I haven't tried to run the Grails tests; this step would have to be added before the application is deployed.
  • I have not had a chance to explore using the Cloudbees SQL database.
  • I have not explored how Grails plug-ins work with Grails on Cloudbees.

Acknowledgements
1. An interesting blog on how to set up continuous delivery for Grails using Cloudbees