Friday, October 25, 2019

Run a sample application with Dapr on OpenShift

Dapr web site: https://dapr.io

Summary: Dapr: 'An event-driven, portable runtime for building microservices on cloud and edge'


The point of this page is to demonstrate that the framework can be tried on OpenShift as well as minikube, which is the documented way to run the sample code. Please note that the security changes shown are not recommended for anything other than a proof of concept; more fine-grained permissions should be set.

How to run a sample application on OpenShift

The following instructions take the sample tutorial (2. Hello-Kubernetes) and get it running on OpenShift. The sample is located at: https://github.com/dapr/samples/tree/master/2.hello-kubernetes

Pre-reqs:
  • crc installed locally from Red Hat (a developer account is needed to download it from cloud.redhat.com)
  • dapr CLI installed locally (see the Dapr site for download instructions)

Changes from sample documented instructions:
  1. Not sure this is necessary, but I used a different Redis install, from https://www.callicoder.com/deploy-multi-container-go-redis-app-kubernetes/. This was because I couldn't get the password configuration correct, and securing Redis is not a priority when just 'kicking the tyres'.
  2. Run crc, i.e. 'crc start', ensuring it has enough CPU & RAM assigned, e.g. on a MacBook with 16GB of RAM my config is 7 CPUs and 16384MB of RAM, as in the sketch below.
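A minimal sketch of setting those resources (the flag and config names are per the crc version current at the time; check 'crc start --help' if they differ):
# either pass the resources at start time (memory is in MiB)...
crc start --cpus 7 --memory 16384
# ...or persist them first via crc config
crc config set cpus 7
crc config set memory 16384
crc start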
  3. Login to OpenShift as user kubeadmin & create test project test1: 'oc new-project test1'
  4. Add the following permissions to the project:
oc adm policy add-scc-to-user anyuid -z default -n test1
oc adm policy add-scc-to-user privileged -z default -n test1
  5. Deploy redis-master.yaml from the go-redis-kubernetes/deployments directory: 'oc apply -f redis-master.yaml'
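To confirm Redis came up (this assumes the manifest names the deployment redis-master, matching the redis-master:6379 host used below):
oc rollout status deployment/redis-master -n test1
oc get pods -n test1   # the redis-master pod should be Running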
  6. Create a file called redis-state.yaml and paste the following:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: redis-master:6379 

7.  Create a file called redis-pubsub.yaml, and paste the following:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus
spec:
  type: pubsub.redis
  metadata:
  - name: redisHost
    value: redis-master:6379 


8. Deploy dapr: for crc, only the advanced Helm deployment worked:
oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:test1:default
oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:kube-system:default
helm init
helm repo update
helm install dapr/dapr --name dapr --namespace test1
Note: if the dapr chart repo has not been added yet, add it first with 'helm repo add dapr <repo-url>' (see the Dapr advanced install instructions for the URL). It may take a few minutes for tiller to be installed after helm init (assuming your Helm version still uses tiller).
If you get the error 'components.dapr.io' already exists, run 'helm delete --purge dapr' and retry.
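A quick way to check tiller before installing the chart (in Helm 2 the tiller deployment carries the labels app=helm,name=tiller):
oc get pods -n kube-system -l app=helm,name=tiller
oc logs -n kube-system -l app=helm,name=tiller --tail=20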
9. Add the following permission to the dapr service account:
oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:test1:dapr-operator
10. Check that the 3 dapr pods are running OK with no errors logged, e.g.:
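The check below assumes the pod names used by the Helm chart at the time of writing (dapr-operator, dapr-placement and dapr-sidecar-injector):
oc get pods -n test1
# spot-check the operator's log for errors (deployment name per the chart)
oc logs deploy/dapr-operator -n test1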

11. Apply both of the files created in steps (6) and (7) to OpenShift using 'oc apply -f <file>':
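For example (Component is a namespaced CRD installed by the chart, so the results can be listed like any other resource):
oc apply -f redis-state.yaml -n test1
oc apply -f redis-pubsub.yaml -n test1
# both components should now be listed:
oc get components.dapr.io -n test1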

12. Edit redis.yaml, the Redis component file in the 2.Hello-Kubernetes deploy directory, so that its metadata section contains just the following two lines, i.e. remove the password part:
- name: redisHost
  value: redis-master:6379
13. Deploy all the files in the deploy directory: redis.yaml, node.yaml & python.yaml
14. When looking at the nodeapp logs, orders should be seen arriving and being persisted; see below for one way to follow them.
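This assumes the app=node label and the node container name used in the sample's node.yaml (adjust the selector if your copy differs):
oc logs --selector=app=node -c node -n test1 --follow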

Run the Binding (Kafka) Sample

This assumes the first example has been performed and that crc, helm and dapr are already installed.
  1. Install Kafka using the Strimzi operator (see https://strimzi.io):
  2. Run: oc apply -f https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.14.0/strimzi-cluster-operator-0.14.0.yaml -n test1
  3. Download the file https://raw.githubusercontent.com/strimzi/strimzi-kafka-operator/0.14.0/examples/kafka/kafka-persistent-single.yaml (e.g. with 'curl -O' or 'wget') & edit it to make the 100Gi volumes 1Gi volumes, e.g. as below
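One way to make that edit ('sed -i.bak' works with both GNU and BSD/macOS sed):
curl -O https://raw.githubusercontent.com/strimzi/strimzi-kafka-operator/0.14.0/examples/kafka/kafka-persistent-single.yaml
sed -i.bak 's/100Gi/1Gi/g' kafka-persistent-single.yaml
grep Gi kafka-persistent-single.yaml   # confirm only 1Gi sizes remain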
  4. Then oc apply -f kafka-persistent-single.yaml -n test1
  5. oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:test1:strimzi-cluster-operator
  6. Create the kafka topic by sending a test message with the console producer (with Kafka's default auto topic creation, this creates the topic):
oc -n test1 run kafka-producer -ti --image=strimzi/kafka:0.14.0-kafka-2.3.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9092 --topic sample
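To verify, a matching console consumer can be run the same way (this mirrors the Strimzi quickstart):
oc -n test1 run kafka-consumer -ti --image=strimzi/kafka:0.14.0-kafka-2.3.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic sample --from-beginning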

  7. In the dapr/samples/5.bindings/deploy directory, edit kafka_bindings.yaml and change the value 'dapr-kafka.kafka:9092' to 'my-cluster-kafka-bootstrap:9092' to align with the Kafka cluster created by the Strimzi operator
  8. Apply kafka_bindings.yaml & node.yaml & python.yaml using 'oc apply -f <file>'
  9. The logs from the Node app should show the example working; one way to check is sketched below.
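The pod name below is a placeholder; check the sample's node.yaml for the actual deployment name and labels:
oc get pods -n test1                      # find the node app pod
oc logs <node-pod-name> -n test1 --follow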