
CRC Openshift 4 continued

Already covered in this blog: the OpenShift 4 installation with CRC (CodeReady Containers, a preconfigured OpenShift 4 VM), for getting to know it on your own laptop. Here we continue – what can I do with the CRC cluster?

Update: not much on a MacBook Air with 8 GB RAM 🙂

After crc start --log-level debug (…interesting log output…), crc console opens the OpenShift 4 console in the default browser (the login credentials are shown by crc console --credentials).
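
For copy and paste, the commands mentioned above; the debug log excerpt below is from the crc start call:

# start the CRC VM with verbose logging
crc start --log-level debug

# open the OpenShift 4 console in the default browser
crc console

# print the login credentials
crc console --credentials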

DEBU (crc) DBG | time="2021-12-10T20:58:08+01:00" level=debug msg="Using hyperkit binary from /Applications/CodeReady Containers.app/Contents/Resources/hyperkit"
DEBU (crc) DBG | time="2021-12-10T20:58:08+01:00" level=debug msg="Starting with cmdline: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-c41155c5a057a0d7ba2089e713927666cb971f33d9d24689bdd9e1a13bca3f95/vmlinuz-4.18.0-305.25.1.el8_4.x86_64 random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ignition.platform.id=qemu ostree=/ostree/boot.1/rhcos/c41155c5a057a0d7ba2089e713927666cb971f33d9d24689bdd9e1a13bca3f95/0 root=UUID=fb2cf4da-ebd5-4da3-8df1-5f4167ec5fcb rw rootflags=prjquota"
DEBU (crc) DBG | time="2021-12-10T20:58:08+01:00" level=debug msg="Trying to execute /Applications/CodeReady Containers.app/Contents/Resources/hyperkit -A -u -F /Users/sandorm/.crc/machines/crc/hyperkit.pid -c 4 -m 9216M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-vpnkit,path=/Users/sandorm/.crc/tap.sock,uuid=c3d68012-0208-11ea-9fd7-f2189899ab08 -U c3d68012-0208-11ea-9fd7-f2189899ab08 -s 2:0,virtio-blk,file:///Users/sandorm/.crc/machines/crc/crc.qcow2,format=qcow -s 3,virtio-sock,guest_cid=3,path=/Users/sandorm/.crc/machines/crc -s 4,virtio-rnd -l com1,autopty=/Users/sandorm/.crc/machines/crc/tty,log=/Users/sandorm/.crc/machines/crc/console-ring -f kexec,/Users/sandorm/.crc/cache/crc_hyperkit_4.9.8/vmlinuz-4.18.0-305.25.1.el8_4.x86_64,/Users/sandorm/.crc/cache/crc_hyperkit_4.9.8/initramfs-4.18.0-305.25.1.el8_4.x86_64.img,earlyprintk=serial BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-c41155c5a057a0d7ba2089e713927666cb971f33d9d24689bdd9e1a13bca3f95/vmlinuz-4.18.0-305.25.1.el8_4.x86_64 random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ignition.platform.id=qemu ostree=/ostree/boot.1/rhcos/c41155c5a057a0d7ba2089e713927666cb971f33d9d24689bdd9e1a13bca3f95/0 root=UUID=fb2cf4da-ebd5-4da3-8df1-5f4167ec5fcb rw rootflags=prjquota"
DEBU (crc) DBG | time="2021-12-10T20:58:08+01:00" level=debug msg="error: Temporary Error: hyperkit not running yet - sleeping 1s"
DEBU (crc) DBG | time="2021-12-10T20:58:09+01:00" level=debug msg="retry loop 1"
DEBU (crc) Calling .GetConfigRaw
DEBU Waiting for machine to be running, this may take a few minutes..
...
DEBU Using ssh private keys: [$HOME/.crc/cache/crc_hyperkit_4.9.8/id_ecdsa_crc /Users/sandorm/.crc/machines/crc/id_ecdsa]
...

The debug log output shows many details of the hyperkit VM, e.g. the ssh parameters that are used for the login.
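
Those keys can also be used to ssh into the VM directly – a minimal sketch, assuming the standard RHCOS core user and the crc ip subcommand:

# log in to the CRC VM with the machine key (core is the RHCOS default user)
ssh -i ~/.crc/machines/crc/id_ecdsa core@$(crc ip)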

The UI then looks like this:

CRC installed operators

k9s (quote: "manage your clusters in style!") is a nice tool that gives a clear overview of the resources in a Kubernetes cluster.

To access the local CRC cluster conveniently with Kubernetes tools, create a kubeconfig file in $HOME/.kube/crcoc4 and activate it with export KUBECONFIG=$HOME/.kube/crcoc4.

apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://api.crc.testing:6443
  name: api-crc-testing:6443
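
The snippet above only shows the clusters section; a working kubeconfig also needs contexts and users entries. A minimal sketch – the names crc and crc-user are placeholders, and the token can be taken from oc whoami -t after an oc login:

contexts:
- context:
    cluster: api-crc-testing:6443
    user: crc-user
  name: crc
current-context: crc
kind: Config
users:
- name: crc-user
  user:
    token: <token, e.g. from oc whoami -t>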

After that, k9s can connect to the CRC cluster:

# k9s uses KUBECONFIG
k9s

Here you can see the postgres operator…

It was installed via the UI:

And how is the Crunchy postgres operator used now? Install pgo and create a PostgresCluster custom resource:

# clone the example repo
git clone https://github.com/CrunchyData/postgres-operator-examples.git

# install pgo in the postgres-operator namespace
cd postgres-operator-examples
kubectl apply -k kustomize/install/
namespace/postgres-operator created
Warning: resource customresourcedefinitions/postgresclusters.postgres-operator.crunchydata.com is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/postgresclusters.postgres-operator.crunchydata.com configured
serviceaccount/pgo created
clusterrole.rbac.authorization.k8s.io/postgres-operator created
clusterrolebinding.rbac.authorization.k8s.io/postgres-operator created
deployment.apps/pgo created

# create hippo postgres cluster
kubectl apply -k kustomize/postgres
postgrescluster.postgres-operator.crunchydata.com/hippo created
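
It takes a moment until all pods are running; they can be watched with the same label selector that is used for the services further below (standard kubectl, output depends on timing):

kubectl -n postgres-operator get pods --selector=postgres-operator.crunchydata.com/cluster=hippo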

The resource can be examined more closely with describe:

kubectl -n postgres-operator describe postgresclusters.postgres-operator.crunchydata.com hippo
Name:         hippo
Namespace:    postgres-operator
Labels:       <none>
Annotations:  <none>
API Version:  postgres-operator.crunchydata.com/v1beta1
Kind:         PostgresCluster
Metadata:
  Creation Timestamp:  2021-12-06T23:02:49Z
  Finalizers:
    postgres-operator.crunchydata.com/finalizer
  Generation:  1
  Managed Fields:
    API Version:  postgres-operator.crunchydata.com/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:backups:
          .:
          f:pgbackrest:
...
  Image:                  registry.developers.crunchydata.com/crunchydata/crunchy-postgres:centos8-13.5-0
  Instances:
    Data Volume Claim Spec:
      Access Modes:
        ReadWriteOnce
      Resources:
        Requests:
          Storage:   1Gi
    Name:            instance1
    Replicas:        1
  Port:              5432
  Postgres Version:  13
...
Events:
  Type    Reason           Age    From                        Message
  ----    ------           ----   ----                        -------
  Normal  RepoHostCreated  4m50s  postgrescluster-controller  created pgBackRest repository host StatefulSet/hippo-repo-host
  Normal  RepoHostCreated  4m50s  postgrescluster-controller  created pgBackRest repository host StatefulSet/hippo-repo-host
  Normal  StanzasCreated   55s    postgrescluster-controller  pgBackRest stanza creation completed successfully
  Normal  StanzasCreated   50s    postgrescluster-controller  pgBackRest stanza creation completed successfully

List the services:

kubectl -n postgres-operator get svc --selector=postgres-operator.crunchydata.com/cluster=hippo
NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
hippo-ha          ClusterIP   10.217.4.199   <none>        5432/TCP   11m
hippo-ha-config   ClusterIP   None           <none>        <none>     11m
hippo-pods        ClusterIP   None           <none>        <none>     11m
hippo-primary     ClusterIP   None           <none>        5432/TCP   11m
hippo-replicas    ClusterIP   10.217.5.128   <none>        5432/TCP   11m

hippo-primary is the service that provides the DB connection on port 5432.

More values are available in the hippo-pguser-hippo secret (visible via edit in k9s):

The values are base64 encoded:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  dbname: aGlwcG8=
  host: aGlwcG8tcHJpbWFyeS5wb3N0Z3Jlcy1vcGVyYXRvci5zdmM=

# decode like this
base64 -d <<< aGlwcG8=
hippo
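
All values of the secret can be decoded in one go with a small shell loop – a sketch; dbname, host and uri appear in this post, while port, user and password are assumed from the same secret layout:

for k in dbname host port user password uri; do
  printf '%s: ' "$k"
  kubectl -n postgres-operator get secret hippo-pguser-hippo \
    -o jsonpath="{.data.$k}" | base64 -d
  echo
done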

At first the DB is only reachable from within the OpenShift 4 cluster; the connection string can also be found in the secret.

postgresql://hippo:xxxxxxxxxxx@hippo-primary.postgres-operator.svc:5432/hippo

A small addition to kustomize/postgres/postgres.yaml and re-running the apply switch the service from ClusterIP to NodePort:

# append to kustomize/postgres/postgres.yaml (under spec:)
  service:
    type: NodePort

# apply again
kubectl apply -k kustomize/postgres
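
The node port assigned by Kubernetes can then be looked up – a sketch, assuming the setting takes effect on the hippo-ha service listed above:

# show the assigned node port
kubectl -n postgres-operator get svc hippo-ha -o jsonpath='{.spec.ports[0].nodePort}'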

Still missing: reachability from 'outside'. One solution is port forwarding, which makes the postgres DB reachable from the host (my laptop, and not only from within the CRC containers):

# open terminal, type forward command
PG_CLUSTER_PRIMARY_POD=$(kubectl get pod -n postgres-operator -o name \
  -l postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/role=master)

kubectl -n postgres-operator port-forward "${PG_CLUSTER_PRIMARY_POD}" 5432:5432
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
... let this run...

# in another terminal, connect directly to postgres
# use information from secret without showing it
psql $(kubectl -n postgres-operator get secrets hippo-pguser-hippo -o go-template='{{.data.uri | base64decode}}')
psql (14.1, server 13.5)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.

hippo=>

Connected!

The example I reproduced here is taken from the tutorial https://access.crunchydata.com/documentation/postgres-operator/5.0.4/tutorial/create-cluster/

Thanks to Crunchy!

A word about DNS

The OpenShift cluster managed by CodeReady Containers uses 2 DNS domain names, crc.testing and apps-crc.testing. The crc.testing domain is for core OpenShift services. The apps-crc.testing domain is for accessing OpenShift applications deployed on the cluster. The crc setup command detects and adjusts your system DNS configuration so that it can resolve these domains.

What does this mean on my Mac?

#/etc/resolver/testing created
port 53
domain testing
nameserver 192.168.64.6
search_order 1

#/etc/hosts modified:
127.0.0.1 localhost sandormpro hippo-primary.postgres-operator.svc api.crc.testing canary-openshift-ingress-canary.apps-crc.testing console-openshift-console.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing downloads-openshift-console.apps-crc.testing oauth-openshift.apps-crc.testing
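
Whether resolution works can be verified with dscacheutil, which queries through the macOS resolver (dig and nslookup bypass /etc/resolver files):

# resolve via the system resolver, honoring /etc/resolver/testing
dscacheutil -q host -a name api.crc.testing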

CRC reports in the startup log when a new version is available.

==> crc.log <==
time="2021-12-16T09:33:57+01:00" level=debug msg="CodeReady Containers version: 1.36.0+c0f4e0d3\n"
time="2021-12-16T09:33:57+01:00" level=debug msg="OpenShift version: 4.9.8 (bundle installed at /Applications/CodeReady Containers.app/Contents/Resources/crc_hyperkit_4.9.8.crcbundle)\n"
time="2021-12-16T09:33:57+01:00" level=debug msg="Running 'crc start'"
time="2021-12-16T09:33:57+01:00" level=debug msg="Total memory of system is 17179869184 bytes"
WARN A new version (1.37.0) has been published on https://developers.redhat.com/content-gateway/file/pub/openshift-v4/clients/crc/1.37.0/crc-macos-amd64.pkg 

A prerequisite for starting the CRC VM is that the CodeReady Containers.app is running, started via the UI or from the command line:

# startup crc daemon
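# a sketch using the standard macOS open command; the app path appears
# in the debug log above, and launching the app also starts the crc daemon
open -a "CodeReady Containers"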