APPC install error: It may not be safe to bootstrap the cluster from this node. #appc


Dear community,

after more than two months of running (with Casablanca), the APPC ansible-server pod went into a CrashLoopBackOff state. I tried updating its deployment, without success. One day later, the other APPC pods went into CrashLoopBackOff as well.

Now I am trying a simpler deployment that consists only of the APPC service.

Below is the state of pods from Kubernetes dashboard:

Describing "dev-appc-appc-db-0":

ubuntu@rancher:~/oom/kubernetes$ kubectl describe pod/dev-appc-appc-db-0 -n onap
Name:           dev-appc-appc-db-0
Namespace:      onap
Node:           k8s-dev/
Start Time:     Wed, 27 Mar 2019 14:16:01 +0000
Labels:         app=dev-appc-appc-db
Status:         Running
Controlled By:  StatefulSet/dev-appc-appc-db
Init Containers:
    Container ID:  docker://610ebd70b379ac6e7a223ba7f4dfbb7b6becf2a8965f0850ad8b9cf11f28b520
    Image ID:      docker-pullable://
    Port:          <none>
    Host Port:     <none>
      chown -R 27:27 /var/lib/mysql
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 27 Mar 2019 14:16:14 +0000
      Finished:     Wed, 27 Mar 2019 14:16:14 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
      /var/lib/mysql from dev-appc-appc-db-data (rw)
      /var/run/secrets/ from default-token-sbhz4 (ro)
    Container ID:   docker://03cfd676d76fc852dd7c75fb795e0223c731b4f4a22ee903f9784df70737d080
    Image ID:       docker-pullable://
    Ports:          3306/TCP, 4444/TCP, 4567/TCP, 4568/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP, 0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 27 Mar 2019 14:19:42 +0000
      Finished:     Wed, 27 Mar 2019 14:19:45 +0000
    Ready:          False
    Restart Count:  5
    Liveness:       exec [mysqladmin ping] delay=30s timeout=5s period=10s #success=1 #failure=3
    Readiness:      exec [/usr/share/container-scripts/mysql/] delay=15s timeout=1s period=10s #success=1 #failure=3
      POD_NAMESPACE:        onap (v1:metadata.namespace)
      MYSQL_USER:           my-user
      MYSQL_PASSWORD:       <set to the key 'user-password' in secret 'dev-appc-appc-db'>  Optional: false
      MYSQL_DATABASE:       my-database
      MYSQL_ROOT_PASSWORD:  <set to the key 'db-root-password' in secret 'dev-appc-appc-db'>  Optional: false
      /etc/localtime from localtime (ro)
      /var/lib/mysql from dev-appc-appc-db-data (rw)
      /var/run/secrets/ from default-token-sbhz4 (ro)
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  dev-appc-appc-db-data-dev-appc-appc-db-0
    ReadOnly:   false
    Type:          HostPath (bare host directory volume)
    Path:          /etc/localtime
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-sbhz4
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations: for 300s
             for 300s
  Type     Reason            Age              From               Message
  ----     ------            ----             ----               -------
  Warning  FailedScheduling  4m (x3 over 4m)  default-scheduler  pod has unbound PersistentVolumeClaims
  Normal   Scheduled         4m               default-scheduler  Successfully assigned onap/dev-appc-appc-db-0 to k8s-dev
  Normal   Pulling           4m               kubelet, k8s-dev   pulling image ""
  Normal   Pulled            4m               kubelet, k8s-dev   Successfully pulled image ""
  Normal   Created           4m               kubelet, k8s-dev   Created container
  Normal   Started           4m               kubelet, k8s-dev   Started container
  Normal   Pulled            3m (x4 over 4m)  kubelet, k8s-dev   Container image "" already present on machine
  Normal   Created           3m (x4 over 4m)  kubelet, k8s-dev   Created container
  Normal   Started           3m (x4 over 4m)  kubelet, k8s-dev   Started container
  Warning  BackOff           3m (x9 over 4m)  kubelet, k8s-dev   Back-off restarting failed container

And here are the logs from the "appc-db" container:

+ CONTAINER_SCRIPTS_DIR=/usr/share/container-scripts/mysql
+ EXTRA_DEFAULTS_FILE=/etc/my.cnf.d/galera.cnf
+ '[' -z onap ']'
+ echo 'Galera: Finding peers'
Galera: Finding peers
++ hostname -f
++ cut -d. -f2
+ K8S_SVC_NAME=appc-dbhost
+ echo 'Using service name: appc-dbhost'
+ cp /usr/share/container-scripts/mysql/galera.cnf /etc/my.cnf.d/galera.cnf
Using service name: appc-dbhost
+ /usr/bin/peer-finder -on-start=/usr/share/container-scripts/mysql/ -service=appc-dbhost
2019/03/27 15:27:45 Peer list updated
was []
now [dev-appc-appc-db-0.appc-dbhost.onap.svc.cluster.local]
2019/03/27 15:27:45 execing: /usr/share/container-scripts/mysql/ with stdin: dev-appc-appc-db-0.appc-dbhost.onap.svc.cluster.local
2019/03/27 15:27:45
2019/03/27 15:27:46 Peer finder exiting
+ '[' '!' -d /var/lib/mysql/mysql ']'
+ exec mysqld
2019-03-27 15:27:46 139879155882240 [Note] mysqld (mysqld 10.1.24-MariaDB) starting as process 1 ...
2019-03-27 15:27:47 139879155882240 [Note] WSREP: Read nil XID from storage engines, skipping position init
2019-03-27 15:27:47 139879155882240 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib64/galera/'
2019-03-27 15:27:47 139879155882240 [Note] WSREP: wsrep_load(): Galera 25.3.20(r3703) by Codership Oy <info@...> loaded successfully.
2019-03-27 15:27:47 139879155882240 [Note] WSREP: CRC-32C: using hardware acceleration.
2019-03-27 15:27:47 139879155882240 [Note] WSREP: Found saved state: 84b0f5c0-12b6-11e9-a817-1b6ad3281ac6:-1, safe_to_bootstrap: 0
2019-03-27 15:27:47 139879155882240 [Note] WSREP: Passing config to GCS: base_dir = /var/lib/mysql/; base_host = dev-appc-appc-db-0.appc-dbhost.onap.svc.cluster.local; base_port = 4567; cert.log_conflicts = no; debug = no; evs.auto_evict = 0; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.recover = no; gcache.size = 128M; gcomm.thread_prio = ; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.segment = 0; gmcast.version = 0; pc.announce_
2019-03-27 15:27:47 139879155882240 [Note] WSREP: GCache history reset: old(84b0f5c0-12b6-11e9-a817-1b6ad3281ac6:0) -> new(84b0f5c0-12b6-11e9-a817-1b6ad3281ac6:-1)
2019-03-27 15:27:47 139879155882240 [Note] WSREP: Assign initial position for certification: -1, protocol version: -1
2019-03-27 15:27:47 139879155882240 [Note] WSREP: wsrep_sst_grab()
2019-03-27 15:27:47 139879155882240 [Note] WSREP: Start replication
2019-03-27 15:27:47 139879155882240 [Note] WSREP: Setting initial position to 00000000-0000-0000-0000-000000000000:-1
2019-03-27 15:27:47 139879155882240 [ERROR] WSREP: It may not be safe to bootstrap the cluster from this node. It was not the last one to leave the cluster and may not contain all the updates. To force cluster bootstrap with this node, edit the grastate.dat file manually and set safe_to_bootstrap to 1 .
2019-03-27 15:27:47 139879155882240 [ERROR] WSREP: wsrep::connect(gcomm://) failed: 7
2019-03-27 15:27:47 139879155882240 [ERROR] Aborting

This seems to be related to Galera's "safe_to_bootstrap" protection. I am not sure how to change this parameter, nor why deploying APPC runs into this issue now: I am using the same images and the same environment as before, just undeploying and redeploying.
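For reference, the workaround I understand from the error message itself is to set safe_to_bootstrap to 1 in grastate.dat on the database's persistent volume (which should be safe here, since this is a single-replica StatefulSet, so the node is trivially the last one to leave the cluster). A minimal sketch; the /tmp directory below is only a stand-in for the real PV mount backing the dev-appc-appc-db-data claim, which depends on the storage setup:

```shell
# Stand-in for the real data dir; on the k8s-dev node this would be the
# hostPath/NFS directory backing the dev-appc-appc-db-data PVC (assumption).
DATADIR=/tmp/galera-demo
mkdir -p "$DATADIR"

# Recreate the saved state the error refers to (uuid copied from the log above).
cat > "$DATADIR/grastate.dat" <<'EOF'
# GALERA saved state
version: 2.1
uuid:    84b0f5c0-12b6-11e9-a817-1b6ad3281ac6
seqno:   -1
safe_to_bootstrap: 0
EOF

# Flip the flag so mysqld agrees to bootstrap the cluster from this node.
sed -i 's/^safe_to_bootstrap: 0$/safe_to_bootstrap: 1/' "$DATADIR/grastate.dat"
grep safe_to_bootstrap "$DATADIR/grastate.dat"
```

After editing the real file, deleting the pod should let the StatefulSet recreate it, and mysqld should then start instead of aborting.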

Would appreciate some help!

Kind regards,
