Run a sample application with Dapr on OpenShift
Dapr website: https://dapr.io
Summary: Dapr is 'an event-driven, portable runtime for building microservices on cloud and edge'.
The point of this page is to demonstrate that the framework can be tried on OpenShift as well as Minikube, which is documented as the normal way to run the sample code. Please note that the security changes shown are only suitable for a proof of concept; more fine-grained permissions should be set for anything beyond that.
How to run a sample application on OpenShift
The following instructions take the sample tutorial (2. hello-kubernetes) and get it running on OpenShift. The sample is located at https://github.com/dapr/samples/tree/master/2.hello-kubernetes
Pre-reqs:
- crc installed locally from Red Hat (need a developer account to download from cloud.redhat.com)
- dapr installed locally (see the dapr site for download instructions)
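To sanity-check the pre-reqs, both CLIs can print their versions (the output format varies between releases):
crc version
dapr --version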
Changes from the sample's documented instructions:
1. Not sure this is necessary, but I used a different Redis install, from https://www.callicoder.com/deploy-multi-container-go-redis-app-kubernetes/. This was because I could not get the password configuration correct, and securing Redis is not a priority when just kicking the tyres.
2. Run crc, i.e. 'crc start', ensuring it has enough CPU & RAM assigned, e.g. on a MacBook with 16GB of RAM my config is 7 CPUs and 16384 MB of RAM.
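As a rough sketch, that sizing can be set with 'crc config' before starting (option names may differ slightly between crc releases):
# persist the sizing, then start the cluster
crc config set cpus 7
crc config set memory 16384
crc start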
3. Log in to OpenShift as user kubeadmin and create a test project, test1: 'oc new-project test1'
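If the kubeadmin password is not to hand, crc can print the login details; a typical session looks like this (the API URL is the default one crc creates, adjust if yours differs):
# show the kubeadmin password and a ready-made login command
crc console --credentials
oc login -u kubeadmin https://api.crc.testing:6443
oc new-project test1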
4. Add the following permissions to the project:
oc adm policy add-scc-to-user anyuid -z default -n test1
oc adm policy add-scc-to-user privileged -z default -n test1
5. Deploy redis-master.yaml from the go-redis-kubernetes/deployments directory: 'oc apply -f redis-master.yaml'
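To confirm Redis is up before moving on, something like the following should show the pod running (the deployment name redis-master is an assumption based on the yaml above, so adjust if the file names it differently):
oc get pods -n test1
oc rollout status deployment/redis-master -n test1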
6. Create a file called redis-state.yaml, and paste the following:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: redis-master:6379
7. Create a file called redis-pubsub.yaml, and paste the following:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus
spec:
  type: pubsub.redis
  metadata:
  - name: redisHost
    value: redis-master:6379
8. Deploy dapr: for crc only the advanced helm deployment worked:
oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:test1:default
oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:kube-system:default
helm init
helm repo add dapr https://daprio.azurecr.io/helm/v1/repo
helm repo update
helm install dapr/dapr --name dapr --namespace test1
Note: it may take a few minutes for tiller to be installed (assuming tiller is still there) after helm init.
If you get the error 'components.dapr.io' already exists, then do 'helm delete --purge dapr'.
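A quick way to confirm the release went in (Helm v2 syntax, matching the commands above):
# should report the dapr release as DEPLOYED once tiller has finished
helm status dapr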
9. Add the following permission to the dapr service account:
oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:test1:dapr-operator
10. Check that the 3 dapr pods are running ok with no errors logged
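For example (at the time of writing the chart deploys pods named roughly dapr-operator, dapr-placement and dapr-sidecar-injector, but the exact names depend on the dapr release):
oc get pods -n test1
# inspect one of them if anything looks unhealthy, e.g.
oc logs deployment/dapr-operator -n test1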
11. Apply both of the above files created in steps (6) and (7) to OpenShift using 'oc apply -f <file>'
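Concretely, using the file names from steps (6) and (7):
oc apply -f redis-state.yaml
oc apply -f redis-pubsub.yaml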
12. Change the 2.Hello-Kubernetes redis deployment file, redis.yaml, in the deploy directory so that the metadata part contains just the following two lines, i.e. remove the password part:
- name: redisHost
  value: redis-master:6379
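For reference, assuming the sample's redis.yaml is a Component much like the statestore one in step (6), the corrected file would look something like this (check it against the actual file in the repo):
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: redis-master:6379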
13. Deploy all the files in the deploy directory: redis.yaml, node.yaml & python.yaml
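For example, assuming the files are applied from the sample's 2.hello-kubernetes directory:
oc apply -f deploy/redis.yaml
oc apply -f deploy/node.yaml
oc apply -f deploy/python.yaml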
14. When looking at the nodeapp logs, orders should be seen arriving and being persisted.
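One way to check, assuming the node app keeps the app=node label and the 'node' container name it has in the upstream sample:
# show the nodeapp container logs and look for the order received / persisted messages
oc logs -l app=node -c node -n test1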