SUSE CaaS Platform/FAQ

Source: https://wiki.microfocus.com/index.php/SUSE_CaaS_Platform/FAQ

This FAQ will answer common questions about SUSE CaaS Platform. It is a living document, so update it with new content!


Generic Questions

SUSE MicroOS - the Container Host OS

How can I install additional packages?

The root filesystem is read-only. Instead of using rpm or zypper directly, use the transactional-update program to install packages:

  transactional-update pkg install PACKAGENAME

Afterwards you need to reboot the machine for the change to take effect.

  Usage: transactional-update --help|--version
         transactional-update [cleanup][up|dup|patch|initrd][kdump][reboot]
         transactional-update [cleanup] [reboot] pkg install|remove|update PKG1..PKGN
         transactional-update rollback [number]

If an RPM needs to write to /opt and /opt is not yet a writeable subvolume, you can create one with the following steps:

Run mount to find out which snapshot is currently active, e.g. /dev/sda2 on / type btrfs (ro,...,subvol=/@/.snapshots/5/snapshot); here "5" is the snapshot we want to modify. Use that number where <NR> is written in the following commands:

        btrfs property set -ts /.snapshots/<NR>/snapshot ro false
        mount -o remount,rw /
        mv /opt /opt.old
        mksubvolume /opt
        btrfs property set -ts /.snapshots/<NR>/snapshot ro true
        reboot

Now try to install the RPMs again with transactional-update pkg install.

After node update, Velum removes the PTF I installed previously. How can I stop that?

If you run an update through Velum after applying a PTF, it can happen that zypper removes the PTF. If that happens, do the following:

Install the PTFs with the command:

# transactional-update reboot pkg install *PTF*.rpm

Once the system is back up, make sure that the rpm is installed with:

# rpm -qa |grep PTF

Create a zypper lock so that the next time an update comes by, the PTF package you installed is not removed:

# zypper al <PTF PACKAGE NAME>

Make sure the lock was created with:

# zypper ll

ATTENTION: Once the PTF is released/included on our normal update channels, remove the lock you just created so the package can be updated.
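
When you remove the lock later, the matching command is zypper removelock:

# zypper rl <PTF PACKAGE NAME>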

How do I manually update my nodes?

At this time it is not possible to manually update your nodes; you have to use the update button in the CaaS Platform web UI. If you use the transactional-update commands above to update base system packages, your cluster will eventually break, and recovery from this is not covered by support. Always use the CaaS Platform web interface to perform updates. See: Cluster Update for more info.

How can I register to SMT after installation?

Run the following steps on each node:

# SUSEConnect -d
# SUSEConnect --cleanup
# cd /etc/pki/trust/anchors/
# curl smtserver.domain.com/smt.crt -o registration-server.pem
# update-ca-certificates
# SUSEConnect --write-config --url https://smtserver.domain.com
# SUSEConnect --status-text (this should confirm the registration with a status of "Registered".)

Then check the repository list with zypper repos; it should no longer contain any SCC repositories.

How can I configure check_mk?

Since xinetd is no longer available on MicroOS, you need to use systemd socket activation instead.
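
A minimal sketch of such a socket-activated setup, assuming the agent binary is installed as /usr/bin/check_mk_agent and the usual agent port 6556 is used (file names, port and path are assumptions, adapt them to your check_mk packaging):

  # /etc/systemd/system/check-mk-agent.socket
  [Unit]
  Description=check_mk agent socket

  [Socket]
  ListenStream=6556
  Accept=yes

  [Install]
  WantedBy=sockets.target

  # /etc/systemd/system/check-mk-agent@.service
  [Unit]
  Description=check_mk agent (socket activated)

  [Service]
  ExecStart=-/usr/bin/check_mk_agent
  StandardInput=socket

Enable the socket with systemctl enable --now check-mk-agent.socket; on MicroOS /etc remains writable, so the unit files can be created there as usual.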

How can I use the toolchain module to install e.g. a compiler?

SUSE CaaS Platform 3 ships the Toolchain module, which allows compiling kernel modules such as the NVIDIA kernel driver.

You need to do the following:

  • Register the module with "transactional-update reboot register -p caasp-toolchain/3.0/x86_64". The system must be rebooted before the module can be used.
  • Install the packages you need with transactional-update (see the example below).
  • Reboot again to use those packages.
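
For example, to put a compiler and the kernel development packages onto a node once the module is registered (the package names are illustrative, install whatever your driver build actually needs):

  transactional-update reboot pkg install gcc kernel-default-devel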

To remove the module again, use "transactional-update [reboot] register -d -p caasp-toolchain/3.0/x86_64".

Kubernetes questions

How can I troubleshoot etcd in the cluster?

To work with etcd, you have to pass some additional parameters for the certificates.

SSH onto one of the cluster nodes, e.g. your first master, and use the following commands as an example of how to work with etcd:

  source /etc/sysconfig/etcdctl 
  etcdctl --endpoints $ETCDCTL_ENDPOINT --ca-file $ETCDCTL_CA_FILE --cert-file $ETCDCTL_CERT_FILE --key-file $ETCDCTL_KEY_FILE cluster-health
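
Other etcdctl (v2 API) subcommands work the same way; for example, listing the top-level keys is a quick sanity check (the output depends on your cluster):

  etcdctl --endpoints $ETCDCTL_ENDPOINT --ca-file $ETCDCTL_CA_FILE --cert-file $ETCDCTL_CERT_FILE --key-file $ETCDCTL_KEY_FILE ls /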

How can I configure Kubernetes to allow a private registry with a self signed certificate?

Note: this is no longer needed starting from version 3, where a dedicated UI has been added. Both insecure registries (no TLS certificate) and registries using self-signed certificates can be handled via Velum.

Do the following steps on all the worker nodes within the kubernetes cluster:

  1. Copy the public certificate of the CA to /etc/pki/trust/anchors/
  2. Execute update-ca-certificates
  3. Restart the docker daemon with systemctl restart docker

How to integrate my corporate LDAP/Active Directory server with Kubernetes

Currently this feature is not available.

How can I create an LDAP user, add it to an LDAP group and grant that group access to a newly created namespace?

Here are some example files that accomplish this (see the CaaS Platform 2 documentation for details on how to access LDAP, etc.):

LDIF for new user "test1":

  dn: uid=test1,ou=People,dc=infra,dc=caasp,dc=local
  cn: A User
  objectClass: person
  objectClass: inetOrgPerson
  uid: test1
  userPassword: {SHA}2ptZuljuS4nboL1IM6VC+34c4+I=
  givenName: A
  sn: User
  mail: test1@suse.com 

LDIF for new group "Administrators-test1"

  dn: cn=Administrators-test1,ou=Groups,dc=infra,dc=caasp,dc=local
  cn: Administrators-test1
  objectClass: top
  objectClass: groupOfUniqueNames
  uniqueMember: uid=test1,ou=People,dc=infra,dc=caasp,dc=local

JSON for new Namespace "test1"

  {
    "kind": "Namespace",
    "apiVersion": "v1",
    "metadata": {
      "name": "test1",
      "labels": {
  	  "name": "test1"
      }
    }
  }

Role and role binding.yaml:

  ---
  # Define the Role's permissions in Kubernetes
  kind: Role
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: manage-test1
    namespace: test1
  rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
  ---
  # Map a LDAP group to this Kubernetes role
  kind: RoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: Administrators-manage-test1-binding
    namespace: test1
  subjects:
  - kind: Group
    name: Administrators-test1
    apiGroup: rbac.authorization.k8s.io
  roleRef:
    kind: Role
    name: manage-test1
    apiGroup: rbac.authorization.k8s.io
  

Load the LDIF files into your LDAP server and apply the namespace and RBAC definitions with kubectl; afterwards user "test1" is in group "Administrators-test1" and has full access to the new namespace "test1".
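
A sketch of how the files above could be loaded; the LDAP bind DN and the file names are assumptions, adapt them to your environment:

  # load the user and the group into the LDAP directory
  ldapadd -x -D "cn=admin,dc=infra,dc=caasp,dc=local" -W -f user.ldif
  ldapadd -x -D "cn=admin,dc=infra,dc=caasp,dc=local" -W -f group.ldif
  # create the namespace and the RBAC objects in Kubernetes
  kubectl create -f namespace.json
  kubectl apply -f role-and-rolebinding.yaml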

How can I download the kubectl config file without logging in to Velum?

Every user should be able to access https://<velum-dashboard-internal-or-external-fqdn>/kubectl-config, which should then forward to the right place.

Another option is to use caasp-cli:

   caasp-cli login -s https://<k8s-api-fqdn>:6443 -u user-email-address -p user-password

It is also possible to install the caasp-cli tool together with helm and kubectl on any SLES 12 SP3 server and use them from there.

Install the required RPMs (use the latest versions included in the SUSE CaaS Platform channels):

   zypper in \
     http://<smt-server>/SUSE/Products/SUSE-CAASP/3.0/x86_64/product/x86_64/kubernetes-client-1.9.8-2.1.x86_64.rpm \
     http://<smt-server>/SUSE/Products/SUSE-CAASP/3.0/x86_64/product/x86_64/kubernetes-common-1.9.8-2.1.x86_64.rpm \
     http://<smt-server>/SUSE/Products/SUSE-CAASP/3.0/x86_64/product/x86_64/caasp-cli-3.0.0+20180515.git_r38_7843d12-1.4.x86_64.rpm \
     http://<smt-server>/SUSE/Products/SUSE-CAASP/3.0/x86_64/product/x86_64/helm-2.8.2-1.6.x86_64.rpm

Add the CaaS Platform CA certificate as a trusted CA:

   scp <caasp-admin>:/etc/pki/trust/anchors/SUSE_CaaSP_CA.crt /etc/pki/trust/anchors/
   update-ca-certificates

Depending on the CaaS Platform 3 patch level, you may need to allow unauthenticated read access to the dex service (execute the following on the CaaS Platform admin node):

   kubectl -n kube-system create rolebinding suse:caasp:read-dex-service --role=suse:caasp:read-dex-service --group=system:authenticated --group=system:unauthenticated

Now it should be possible to use caasp-cli, kubectl and helm on this SLES 12 SP3 server.
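
As a quick check that everything is wired up, log in with caasp-cli (reusing the login command from above, with your own values for the placeholders) and run a kubectl and a helm command:

   caasp-cli login -s https://<k8s-api-fqdn>:6443 -u <user-email-address> -p <user-password>
   kubectl get nodes
   helm version --client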

How do I use Cinder on SUSE OpenStack Cloud for SUSE CaaS Platform as persistent storage?

Deploy SUSE CaaS Platform 3 with the OpenStack-specific images and provide the OpenStack configuration details during the setup in Velum.

After the bootstrapping of the cluster, create a storage-class.yaml definition like this:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
   name: cinder
provisioner: kubernetes.io/cinder
parameters:
   type: rbd
   availability: nova

Change the value for "type" from "rbd" to your desired Cinder volume type. To get a list of the available types in OpenStack, run "cinder type-list" on one of your SUSE OpenStack Cloud Controller Nodes. Apply the file with kubectl apply -f storage-class.yaml.
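
To verify that dynamic provisioning works, you can create a small test claim against the new class; the claim name, file name and size below are purely illustrative:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cinder-claim
spec:
  storageClassName: cinder
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Save it e.g. as cinder-claim.yaml, apply it with kubectl apply -f cinder-claim.yaml and check with kubectl get pvc; the claim should reach the "Bound" state.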

How do I use SUSE Enterprise Storage with a Storage Class for persistent storage?

Go to your workstation where kubectl is installed and able to communicate with your CaaS Platform cluster. Create a file called storage-class.yaml with the following content:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ceph
provisioner: kubernetes.io/rbd
parameters:
  monitors: IP-MONITOR1:6789, IP-MONITOR2:6789, IP-MONITOR3:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: default
  pool: caasp
  userId: caasp
  userSecretName: ceph-secret-user

Make sure to change IP-MONITOR1, IP-MONITOR2 and IP-MONITOR3 to the IP addresses of the monitors in your SUSE Enterprise Storage environment.

Next, create a secret for your Ceph admin user in SUSE CaaS Platform by creating a file called ceph-secret-admin.yaml with the following content:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
  namespace: default
type: "kubernetes.io/rbd" 
data:
# key from client.admin converted to base64
  key: INSERT-KEY-HERE

Replace INSERT-KEY-HERE with a base64-encoded version of your Ceph client.admin key. You can find that key on your monitor nodes. To encode it, use the base64 command (-n avoids encoding a trailing newline):

$ echo -n "MYKEY" | base64

Repeat the step for the regular user that CaaS Platform should use. If it does not exist yet, create the user in SUSE Enterprise Storage first and name it client.caasp. Then create a file ceph-secret-user.yaml for the secret with the following content:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-user
  namespace: default
type: "kubernetes.io/rbd" 
data:
# key from client.caasp converted to base64
  key: INSERT-KEY-HERE

Replace INSERT-KEY-HERE here as well, this time with a base64-encoded version of your client.caasp key.

You are now ready to deploy the YAML files and test your storage class with the following commands:

$ kubectl apply -f storage-class.yaml
$ kubectl apply -f ceph-secret-admin.yaml
$ kubectl apply -f ceph-secret-user.yaml
$ kubectl get sc -n default

This will create the storage class and secrets within the default namespace. If you want to access the storage class from other namespaces, you have to copy the secrets into the other namespace (NEW-NAMESPACE below) with the following commands:

$ kubectl get secret ceph-secret-admin -o json --namespace default | \
sed 's/"namespace": "default"/"namespace": "NEW-NAMESPACE"/' | kubectl create -f -
$ kubectl get secret ceph-secret-user -o json --namespace default | \
sed 's/"namespace": "default"/"namespace": "NEW-NAMESPACE"/' | kubectl create -f -

To test the storage class, create a file storage-claim.yaml for an example storage claim with the following content:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  storageClassName: ceph
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Apply the claim and test it with the commands:

$ kubectl apply -f storage-claim.yaml
$ kubectl get pv -n default
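
If you want to go one step further, you can mount the claim in a throwaway pod; the pod name, image and mount path below are only illustrative, use any image your cluster can pull:

kind: Pod
apiVersion: v1
metadata:
  name: ceph-test-pod
spec:
  containers:
  - name: test
    # illustrative image; replace with one available to your cluster
    image: opensuse/leap
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: ceph-claim

Apply it with kubectl apply -f and check the mount with kubectl exec ceph-test-pod -- df -h /data.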

How do I set up a simple NFS based StorageClass for test purposes?

This is unsupported and provided for convenience only.

Sometimes it is useful to set up an NFS-based StorageClass for test purposes. For production use, follow the SES-based StorageClass instructions above; if you only need an NFS-based StorageClass for testing, do the following:

cd $(mktemp -d)
git clone https://gist.github.com/HartS/ec698dc04d1a55dec473607684e8dad9 nfs-provisioner
kubectl create -f nfs-provisioner
 

This will create an NFS server inside your cluster that serves files to all your other nodes. You can then use this StorageClass under the name "persistent".

If you want this to become the default StorageClass, modify the class.yaml file to look like the following:

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: persistent
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
 

This makes the NFS-backed StorageClass the default for your cluster.

Please note that this will set up an NFS server on one of your nodes. Ensure this node has enough disk space in /var for your needs (the NFS server stores its data in /var/lib/docker/tmp/nfs).

How to enable Kubernetes feature gates

Feature gates are a mechanism used by Kubernetes to enable experimental features before they become generally available.

It's possible to enable Kubernetes feature gates on SUSE CaaS Platform 3.

Please note: feature gates are experimental features, hence they won't be supported by SUSE.

Let's assume a user wants to use two feature gates:

  • DevicePlugins
  • ReadOnlyAPIDataVolumes

The user would have to log into the admin node and execute this command:

 docker exec $(docker ps | grep velum-dashboard | awk {'print $1'}) entrypoint.sh bundle exec rails runner "Pillar.apply(kubernetes_feature_gates: 'DevicePlugins=true,ReadOnlyAPIDataVolumes=true')"

And then issue an orchestration. This can be done using the following command on the admin node:

 docker exec $(docker ps | grep salt-master | awk {'print $1'}) salt-run state.orchestrate orch.kubernetes

Provide extra arguments to Kubernetes components

There are cases where Velum doesn't yet expose a certain argument of one of the Kubernetes components. When you run into this situation please get in touch with SUSE and explain your use case.

Starting with the first maintenance update of SUSE CaaS Platform 3 it is possible to provide custom arguments to each Kubernetes component.

Please note: only advanced users should do this; supportability of the cluster is not guaranteed.

Connect to the admin node and issue the following command:

 docker exec $(docker ps | grep velum-dashboard | awk {'print $1'}) entrypoint.sh bundle exec rails runner "Pillar.apply(<pillar name>: '<argument>')"

<pillar name> can have the following values:

  • components_apiserver_args: targets Kubernetes' API server component
  • components_controller_manager_args: targets Kubernetes' controller manager component
  • components_scheduler_args: targets Kubernetes' scheduler component
  • components_kubelet_args: targets kubelet component
  • components_proxy_args: targets kube-proxy component
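
For example, to hand an extra flag to the kubelet (the flag and its value are only an illustration, not a recommendation):

 docker exec $(docker ps | grep velum-dashboard | awk {'print $1'}) entrypoint.sh bundle exec rails runner "Pillar.apply(components_kubelet_args: '--eviction-hard=memory.available<200Mi')"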

Once the pillar is set a new orchestration has to be triggered. This can be done using the following command on the admin node:

 docker exec $(docker ps | grep salt-master | awk {'print $1'}) salt-run state.orchestrate orch.kubernetes

How can I install tiller on a running cluster?

Warning: SUSE does not support the version of tiller that is installed when using the tool "helm init". The only tiller version SUSE supports is the one installed using the bootstrap process in Velum. Make sure to use "helm init" only with the parameter "--client-only".

Tiller is an optional component that can be installed at the creation of the cluster in Velum. If you didn't choose to install it during that step, this can be done afterwards.

Connect to the admin node and issue the following command:

 docker exec $(docker ps | grep velum-dashboard | awk {'print $1'}) entrypoint.sh bundle exec rails runner "Pillar.apply(tiller: 'true')"

Once the pillar is modified a new orchestration has to be triggered. This can be done using the following command on the admin node:

 docker exec $(docker ps | grep salt-master | awk {'print $1'}) salt-run state.orchestrate orch.kubernetes
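
Once the orchestration has finished, a quick sanity check from a machine with a configured helm client shows whether Tiller is up:

 kubectl -n kube-system get pods | grep tiller
 helm version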

How can I install Kubernetes Dashboard via helm chart?

The deployment via helm chart requires internet access for your Kubernetes cluster to download the container images. If no direct internet access is available, you need to configure the proxy settings during the deployment of the SUSE CaaS Platform. Another option is to configure a private registry as a mirror for the images and manually mirror the images to this private registry.

Do the following steps to deploy the dashboard with graphs powered by Heapster:

 helm install --name heapster-default --namespace=kube-system stable/heapster --set rbac.create=true
 helm install stable/kubernetes-dashboard --namespace kube-system --name kubernetes-dashboard --set service.type=NodePort


To access the kubernetes-dashboard, you require a token. Use the following command to output your token.

 grep "id-token" ~/.kube/config  | awk '{print $2}'


Use the following commands to get the URL of the kubernetes-dashboard, then open that URL in your browser.

 export NODE_PORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services kubernetes-dashboard -n kube-system)
 export NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}" -n kube-system)
 echo https://$NODE_IP:$NODE_PORT/


Now open the URL, select "Token" and insert the token you saved before.

How can I install a Monitoring Stack based on Prometheus and Grafana via helm charts?

Before you proceed, make sure that you have already created a storage class to be used for persistent storage within Kubernetes. In the following example we assume that the storage class name is myStorageClass.

1. Create a separate namespace "monitoring" to separate the monitoring stack from the other namespaces.

 kubectl create namespace monitoring

2. SUSE CaaS Platform 3 has enhanced security settings which by default prevent privileged access to the cluster. To grant this access to the monitoring stack, you need to specify a separate ClusterRoleBinding for the service accounts used by Prometheus and Grafana.

Execute the following command to create the required ClusterRoleBinding:

 kubectl apply -f - << EOF
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: monitoring
 roleRef:
   kind: ClusterRole
   name: suse:caasp:psp:privileged
   apiGroup: rbac.authorization.k8s.io
 subjects:
 - kind: ServiceAccount
   name: default
   namespace: monitoring
 - kind: ServiceAccount
   name: prometheus-alertmanager
   namespace: monitoring
 - kind: ServiceAccount
   name: prometheus-kube-state-metrics
   namespace: monitoring
 - kind: ServiceAccount
   name: prometheus-node-exporter
   namespace: monitoring  
 - kind: ServiceAccount
   name: prometheus-pushgateway
   namespace: monitoring  
 - kind: ServiceAccount
   name: prometheus-server
   namespace: monitoring  
 - kind: ServiceAccount
   name: grafana
   namespace: monitoring    
 EOF

3. Use the following command to install Prometheus in the "monitoring" namespace. Change the storageClass values to match your existing storage class.

 helm install stable/prometheus --namespace monitoring --name prometheus --set alertmanager.persistentVolume.storageClass=myStorageClass --set server.persistentVolume.storageClass=myStorageClass
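
Before continuing with Grafana you can verify that the Prometheus pods come up and that their volume claims get bound:

 kubectl -n monitoring get pods
 kubectl -n monitoring get pvc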

4. Create a grafana-values.yaml file with the following content and change the storageClassName and adminPassword according to your values.

 service:
   type: NodePort
 
 persistence:
   enabled: true
   storageClassName: myStorageClass
   accessModes:
     - ReadWriteOnce
   size: 10Gi
   annotations: {}
   subPath: ""  
 
 adminUser: admin
 adminPassword: YOURSTRONGPASSWORD
 
 datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
    - name: Prometheus
      type: prometheus
      url: http://prometheus-server.monitoring
      access: proxy
      isDefault: true

5. Now install grafana with the following command:

 helm install stable/grafana --namespace monitoring --name grafana --values grafana-values.yaml


6. Use the following commands to get the URL of Grafana, then open it in your browser.

 export NODE_PORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services grafana -n monitoring)
 export NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}")
 echo http://$NODE_IP:$NODE_PORT/

To make use of Grafana you need to create your first dashboard.

To import an existing dashboard, do the following steps:

1. Open Grafana in your browser.

2. On the home page of Grafana, hover your mouse cursor over the "+" button in the left sidebar and click the "Import" menu item.

3. Paste the URL "https://grafana.com/dashboards/3131" into the first input field to import the "Kubernetes all Nodes" Grafana dashboard. After pasting the URL, the view changes to another form.

4. Now select the "Prometheus" data source in the Prometheus field and click the "Import" button.

5. The browser will redirect you to your newly created dashboard.

To import more dashboards, repeat these steps and replace the dashboard URL with that of another dashboard from the Grafana dashboard repository.

How can I configure the HTTP(S) Proxy?

The HTTP(S) Proxy can be configured at the creation of the cluster in Velum. If you didn't configure it during that step or you want to change the configuration, this can be done afterwards.

There are two options for the proxy configuration: by default it is applied only to the container engine, but it can also be applied to the entire host.

Please note: the following step is only required if you want to apply the proxy to the entire host:

 docker exec $(docker ps | grep velum-dashboard | awk {'print $1'}) entrypoint.sh bundle exec rails runner "Pillar.apply(proxy_systemwide: 'true')"

Then you can set or update the proxy values with the following commands:

 docker exec $(docker ps | grep velum-dashboard | awk {'print $1'}) entrypoint.sh bundle exec rails runner "Pillar.apply(http_proxy: 'http://PROXY_ADDRESS:3128')"
 docker exec $(docker ps | grep velum-dashboard | awk {'print $1'}) entrypoint.sh bundle exec rails runner "Pillar.apply(https_proxy: 'http://PROXY_ADDRESS:3128')"
 docker exec $(docker ps | grep velum-dashboard | awk {'print $1'}) entrypoint.sh bundle exec rails runner "Pillar.apply(no_proxy: 'domain1.lan')"

Once the pillars are modified, a new orchestration has to be triggered. This can be done using the following command on the admin node:

 docker exec $(docker ps | grep salt-master | awk {'print $1'}) salt-run state.orchestrate orch.kubernetes
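
After the orchestration has finished you can spot-check a node. The container engine picks the proxy up via the docker daemon's systemd environment, and with proxy_systemwide set to 'true' the variables should also appear in a fresh login shell (treat this as a rough check, the exact file locations are distribution internals):

 systemctl show docker --property=Environment
 env | grep -i proxy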