Kubernetes Operator :: Cluster Config :: Mount Issue

Enonic version: current
OS: MAC and WIN

I am preparing my production environment and am testing the creation of an xp-cluster in my test environment (a K3s cluster on QNAP).

I was able to deploy a single-node configuration successfully.
With the cluster configuration I get the following error on all worker and master pods:

Unable to attach or mount volumes: unmounted volumes=[blobstore snapshots export], unattached volumes=[kube-api-access-dhp98 config extra-config deploy index blobstore snapshots export]: timed out waiting for the condition

I see no errors on the persistent volumes and persistent volume claims.


I have an NFS service on the host, and the nfs-server-nfs-server-provisioner is also running.

Any ideas what I am missing?
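For anyone hitting the same symptom: when pods report `FailedMount` but the claims look healthy, commands like these usually narrow it down (the claim and pod names below are assumptions based on the config in this thread, adjust them to what `kubectl get` actually lists):

```shell
# Check that every claim is Bound and note which StorageClass it uses
kubectl get pvc -n osde-ns

# Inspect one of the shared claims in detail (claim name is assumed)
kubectl describe pvc blobstore -n osde-ns

# The pod's events usually name the exact volume that failed to mount
kubectl describe pod master-0 -n osde-ns
```

A claim stuck in `Pending`, or a pod event naming a specific volume, points at the storage class rather than the deployment config.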

Hi! Could you share your deployment configuration, please?

@vbr: here we go :slight_smile:

# Create a namespace
apiVersion: v1
kind: Namespace
metadata:
  name: osde-ns
  annotations:
    # Delete this namespace if the deployment is deleted
    enonic.cloud/remove.with.xp7deployment: osde-deploy
---
# Create deployment in the namespace
apiVersion: enonic.cloud/v1
kind: Xp7Deployment
metadata:
  name: osde-deploy
  namespace: osde-ns
spec:
  enabled: true
  xpVersion: 7.12.2

  # Preinstall snapshotter on all nodes
  nodesPreinstalledApps:
    - name: snapshotter
      url: https://repo.enonic.com/public/com/enonic/app/snapshotter/3.0.2/snapshotter-3.0.2.jar

  # Create volumes shared by all nodes in this deployment
  nodesSharedDisks:
    - name: blobstore
      size: 1Gi

    - name: snapshots
      size: 1Gi

    - name: export # Dumps and other data
      size: 1Gi

  # Create nodes
  nodeGroups:
    # 3 master nodes
    - name: master
      replicas: 3

      data: false
      master: true

      resources:
        cpu: "0.5"
        memory: 1Gi

        # Volumes private to the node
        disks:
          - name: deploy  # Apps installed in the deploy folder
            size: 1Gi
          - name: index   # Node ES index
            size: 1Gi

    # 2 data nodes
    - name: worker
      replicas: 2

      data: true
      master: false

      resources:
        cpu: "1"
        memory: 1Gi

        # Volumes private to the node
        disks:
          - name: deploy  # Apps installed in the deploy folder
            size: 1Gi
          - name: index   # Node ES index
            size: 1Gi
---
# Install content studio
apiVersion: enonic.cloud/v1
kind: Xp7App
metadata:
  name: contentstudio
  namespace: osde-ns
spec:
  url: https://repo.enonic.com/public/com/enonic/app/contentstudio/4.5.2/contentstudio-4.5.2.jar
  sha512: e5662edb8757ceb6f085d1a8d85abf965c0f45a98acbc767ef31f4e8d860fc88cc995aa7cfcb1167f356c3e261129524f3c38de93cdc08baca95e51943a99365
# Add your own custom config
#apiVersion: enonic.cloud/v1
#kind: Xp7Config
#metadata:
#  name: my-config
#  namespace: osde-ns
#spec:
#  nodeGroup: all
#  file: com.my-app.cfg
#  data: |
#    my = config
---
# Expose XP site on frontend nodes through an ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-domain-com-site
  namespace: osde-ns
  annotations:
    enonic.cloud/xp7.vhost.mapping.my-mapping-site.source: /
    enonic.cloud/xp7.vhost.mapping.my-mapping-site.target: /site/default/master/homepage
spec:
  rules:
    - host: lie-nas-2.m27.local
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: worker
                port:
                  number: 8080
---
# Expose XP admin on admin nodes through an ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: lie-nas-2-m27-admin
  namespace: osde-ns
  annotations:
    # Enable sticky sessions with nginx
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "stickyXpAdmin"
    nginx.ingress.kubernetes.io/session-cookie-expires: "129600" # 36 hours
    nginx.ingress.kubernetes.io/session-cookie-max-age: "129600" # 36 hours
    nginx.ingress.kubernetes.io/session-cookie-change-on-failure: "true"

    enonic.cloud/xp7.vhost.mapping.my-mapping-admin.source: /admin
    enonic.cloud/xp7.vhost.mapping.my-mapping-admin.target: /admin
    enonic.cloud/xp7.vhost.mapping.my-mapping-admin.idproviders: system
spec:
  rules:
    - host: lie-nas-2.m27.local
      http:
        paths:
          - path: /admin
            pathType: ImplementationSpecific
            backend:
              service:
                name: worker
                port:
                  number: 8080

The config is correct. What about the events, anything that looks like a lead there?

@vbr which events do you mean?

I mean events from the created namespace, you can get it by a command like:
kubectl get events -n osde-ns --sort-by='.metadata.creationTimestamp'

Unfortunately that does not tell us more :frowning:

23s         Warning   FailedMount   pod/master-2   Unable to attach or mount volumes: unmounted volumes=[export blobstore snapshots], unattached volumes=[export kube-api-access-b9wzn config extra-config deploy index blobstore snapshots]: timed out waiting for the condition

Hmm, I was able to run your config on both a local minikube and Google Cloud. But I know of some issues with PVCs on Azure. Could you check how your cluster handles the requested volume and volume claim types?
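One quick way to check that is to create a minimal standalone claim and see whether it reaches `Bound`. The manifest below is a sketch; the claim name and the `storageClassName: nfs` are assumptions (run `kubectl get storageclass` for the real class names). Note that the shared disks (blobstore, snapshots, export) need a class that supports `ReadWriteMany`:

```yaml
# Minimal test claim; name and storage class are assumptions
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mount-test-pvc
  namespace: osde-ns
spec:
  accessModes:
    - ReadWriteMany   # shared disks require RWX; node disks only need RWO
  storageClassName: nfs
  resources:
    requests:
      storage: 1Gi
```

If this claim binds but the deployment's claims do not, compare the storage classes the operator requested against the ones your cluster actually provides.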

You can adjust the storage types with these operator settings:
(sharedDisks) → operator.charts.values.storage.shared.storageClassName=nfs
(node volumes) → operator.charts.values.storage.default.storageClassName=standard
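Assuming the operator was installed with Helm, those values can be set on upgrade. The release name, chart reference, and class names below are all assumptions, adjust them to your install (on K3s the built-in default class is typically `local-path`):

```shell
# Point shared disks at the NFS provisioner's class and node volumes
# at the cluster default; all names here are assumptions
helm upgrade --install my-op enonic/operator \
  --namespace enonic-operator \
  --set operator.charts.values.storage.shared.storageClassName=nfs \
  --set operator.charts.values.storage.default.storageClassName=local-path
```

After the upgrade, the deployment has to be recreated, since a PVC's storage class cannot be changed after creation.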