Casablanca oof module pods are waiting on init status #oof


gulsum atici <gulsumatici@...>
 

Hello,

OOF module pods have been waiting on init for hours and can't reach Running status. They have some dependencies within the OOF module and on the AAF module.

The AAF module is completely running. After recreating the OOF module pods several times, the status didn't change.

dev-aaf-aaf-cm-858d9bbd58-qlbzr                               1/1       Running            0          1h
dev-aaf-aaf-cs-db78f4b6-ph6c4                                 1/1       Running            0          1h
dev-aaf-aaf-fs-cc68f85f7-5jzbm                                1/1       Running            0          1h
dev-aaf-aaf-gui-8f979c4d9-tqj7q                               1/1       Running            0          1h
dev-aaf-aaf-hello-84df87c74b-9xp8x                            1/1       Running            0          1h
dev-aaf-aaf-locate-74466c9857-xqqp5                           1/1       Running            0          1h
dev-aaf-aaf-oauth-65db47977f-7rfk5                            1/1       Running            0          1h
dev-aaf-aaf-service-77454cb8c-b9bmp                           1/1       Running            0          1h
dev-aaf-aaf-sms-7b5db59d6-vkjlm                               1/1       Running            0          1h
dev-aaf-aaf-sms-quorumclient-0                                1/1       Running            0          1h
dev-aaf-aaf-sms-quorumclient-1                                1/1       Running            0          1h
dev-aaf-aaf-sms-quorumclient-2                                1/1       Running            0          1h
dev-aaf-aaf-sms-vault-0                                       2/2       Running            1          1h
 

dev-oof-music-tomcat-64d4c64db7-gff9j                         0/1       Init:1/3           5          1h
dev-oof-music-tomcat-64d4c64db7-kqg9m                         0/1       Init:1/3           5          1h
dev-oof-music-tomcat-64d4c64db7-vmhdt                         0/1       Init:1/3           5          1h
dev-oof-oof-7b4bccc8d7-4wdx7                                  1/1       Running            0          1h
dev-oof-oof-cmso-service-55499fdf4c-h2pnx                     1/1       Running            0          1h
dev-oof-oof-has-api-7d9b977b48-vq8bq                          0/1       Init:0/3           5          1h
dev-oof-oof-has-controller-7f5b6c5f7-2fq68                    0/1       Init:0/3           6          1h
dev-oof-oof-has-data-b57bd54fb-xp942                          0/1       Init:0/4           5          1h
dev-oof-oof-has-healthcheck-mrd87                             0/1       Init:0/1           5          1h
dev-oof-oof-has-onboard-jtxsv                                 0/1       Init:0/2           6          1h
dev-oof-oof-has-reservation-5869b786b-8krtc                   0/1       Init:0/4           5          1h
dev-oof-oof-has-solver-5c75888465-f7pfm                       0/1       Init:0/4           6          1h

In the pod logs there aren't any errors; the containers are all waiting to initialize but are stuck at that point.


Events:
  Type    Reason   Age              From           Message
  ----    ------   ----             ----           -------
  Normal  Pulling  5m (x7 over 1h)  kubelet, kub1  pulling image "oomk8s/readiness-check:2.0.0"
  Normal  Pulled   5m (x7 over 1h)  kubelet, kub1  Successfully pulled image "oomk8s/readiness-check:2.0.0"
  Normal  Created  5m (x7 over 1h)  kubelet, kub1  Created container
  Normal  Started  5m (x7 over 1h)  kubelet, kub1  Started container
ubuntu@kub2:~$ kubectl logs -f   dev-oof-oof-has-api-7d9b977b48-vq8bq -n onap 
Error from server (BadRequest): container "oof-has-api" in pod "dev-oof-oof-has-api-7d9b977b48-vq8bq" is waiting to start: PodInitializing
***************************************************
Events:
  Type    Reason   Age              From           Message
  ----    ------   ----             ----           -------
  Normal  Pulling  3m (x8 over 1h)  kubelet, kub3  pulling image "oomk8s/readiness-check:2.0.0"
  Normal  Pulled   3m (x8 over 1h)  kubelet, kub3  Successfully pulled image "oomk8s/readiness-check:2.0.0"
  Normal  Created  3m (x8 over 1h)  kubelet, kub3  Created container
  Normal  Started  3m (x8 over 1h)  kubelet, kub3  Started container
ubuntu@kub3:$ kubectl  logs  -f   dev-oof-oof-has-onboard-jtxsv  -n onap 
Error from server (BadRequest): container "oof-has-onboard" in pod "dev-oof-oof-has-onboard-jtxsv" is waiting to start: PodInitializing
 
 
 


Borislav Glozman
 

Hi,

 

You can look in the log of the init containers of those pods to see what they are waiting for.

Find the init containers by running kubectl get po -n <namespace> <pod name>

Use kubectl logs -n <namespace> <pod name> -c <name of the init container>
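
For example, applied to one of the stuck pods listed above (an illustrative invocation, not part of the original message; the jsonpath query is just one way to print the init container names, and the onap namespace is assumed):

kubectl get po -n onap dev-oof-oof-has-api-7d9b977b48-vq8bq -o jsonpath='{.spec.initContainers[*].name}'
kubectl logs -n onap dev-oof-oof-has-api-7d9b977b48-vq8bq -c <name of the init container>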

 

Thanks,

Borislav Glozman

O:+972.9.776.1988

M:+972.52.2835726


Amdocs a Platinum member of ONAP

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of gulsum atici
Sent: Monday, December 24, 2018 4:57 PM
To: onap-discuss@...
Subject: [onap-discuss] Casablanca oof module pods are waiting on init status #oof

 



Borislav Glozman
 

Correction: kubectl describe po -n <namespace> <pod name>
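
For example, against one of the pods stuck in Init (an illustrative invocation, not from the original reply; the Init Containers section of the output lists the container names to pass to kubectl logs -c):

kubectl describe po -n onap dev-oof-oof-has-api-7d9b977b48-vq8bq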

 

Thanks,

Borislav Glozman

O:+972.9.776.1988

M:+972.52.2835726


Amdocs a Platinum member of ONAP

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Borislav Glozman
Sent: Tuesday, December 25, 2018 10:37 AM
To: onap-discuss@...; gulsumatici@...
Subject: Re: [onap-discuss] Casablanca oof module pods are waiting on init status #oof

 



gulsum atici <gulsumatici@...>
 

Dear Borislav,

I grabbed some logs from the pod init containers. I have recreated all pods, including the DBs, several times; however, the latest situation didn't change.

dev-oof-cmso-db-0                                             1/1       Running                 0          33m       10.42.140.74    kub3      <none>
dev-oof-music-cassandra-0                                     1/1       Running                 0          32m       10.42.254.144   kub3      <none>
dev-oof-music-cassandra-1                                     1/1       Running                 0          1h        10.42.244.161   kub4      <none>
dev-oof-music-cassandra-2                                     1/1       Running                 0          1h        10.42.56.156    kub2      <none>
dev-oof-music-tomcat-685fd777c9-8qmll                         0/1       Init:1/3                3          35m       10.42.159.78    kub3      <none>
dev-oof-music-tomcat-685fd777c9-crdf6                         0/1       Init:1/3                3          35m       10.42.167.24    kub2      <none>
dev-oof-music-tomcat-84bc66c649-7xf8q                         0/1       Init:1/3                6          1h        10.42.19.117    kub1      <none>
dev-oof-music-tomcat-84bc66c649-lzmtj                         0/1       Init:1/3                6          1h        10.42.198.179   kub4      <none>
dev-oof-oof-8ff8b46f5-8sbwv                                   1/1       Running                 0          35m       10.42.35.56     kub3      <none>
dev-oof-oof-cmso-service-6c485cdff-pbzb6                      0/1       Init:CrashLoopBackOff   10         35m       10.42.224.93    kub3      <none>
dev-oof-oof-has-api-74c6695b64-kcr4n                          0/1       Init:0/3                2          35m       10.42.70.206    kub1      <none>
dev-oof-oof-has-controller-7cb97bbd4f-n7k9j                   0/1       Init:0/3                3          35m       10.42.194.39    kub3      <none>
dev-oof-oof-has-data-5b4f76fc7b-t92r6                         0/1       Init:0/4                3          35m       10.42.205.181   kub1      <none>
dev-oof-oof-has-healthcheck-8hqbt                             0/1       Init:0/1                3          35m       10.42.131.183   kub3      <none>
dev-oof-oof-has-onboard-mqglv                                 0/1       Init:0/2                3          35m       10.42.34.251    kub1      <none>
dev-oof-oof-has-reservation-5b899687db-dgjnh                  0/1       Init:0/4                3          35m       10.42.245.175   kub1      <none>
dev-oof-oof-has-solver-65486d5fc7-s84w4                       0/1       Init:0/4                3          35m       10.42.35.223    kub3      <none>
 

ubuntu@kub4:~$ kubectl  describe  pod  dev-oof-music-tomcat-685fd777c9-8qmll  -n  onap 
Name:           dev-oof-music-tomcat-685fd777c9-8qmll
Namespace:      onap
Node:           kub3/192.168.13.151
Start Time:     Tue, 25 Dec 2018 11:20:04 +0000
Labels:         app=music-tomcat
                pod-template-hash=2419833375
                release=dev-oof
Annotations:    <none>
Status:         Pending
IP:             10.42.159.78
Controlled By:  ReplicaSet/dev-oof-music-tomcat-685fd777c9
Init Containers:
  music-tomcat-zookeeper-readiness:
    Container ID:  docker://79b0507168a8590b10f0b1eb8c720e04cd173914b6365834d5b6c9c6f86a074d
    Image:         oomk8s/readiness-check:2.0.0
    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed
    Port:          <none>
    Host Port:     <none>
    Command:
      /root/ready.py
    Args:
      --container-name
      zookeeper
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 25 Dec 2018 11:20:57 +0000
      Finished:     Tue, 25 Dec 2018 11:21:32 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)
  music-tomcat-cassandra-readiness:
    Container ID:  docker://36b752b9b2d96d6437992cab6d63d32b80107799b34b0420056656fcc4476213
    Image:         oomk8s/readiness-check:2.0.0
    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed
    Port:          <none>
    Host Port:     <none>
    Command:
      /root/job_complete.py
    Args:
      -j
      dev-oof-music-cassandra-job-config
    State:          Running
      Started:      Tue, 25 Dec 2018 11:41:58 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 25 Dec 2018 11:31:49 +0000
      Finished:     Tue, 25 Dec 2018 11:41:53 +0000
    Ready:          False
    Restart Count:  2
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)
  music-tomcat-war:
    Container ID:  
    Image:         nexus3.onap.org:10001/onap/music/music:3.0.24
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
      /app/MUSIC.war
      /webapps
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)
      /webapps from shared-data (rw)
Containers:
  music-tomcat:
    Container ID:   
    Image:          nexus3.onap.org:10001/library/tomcat:8.5
    Image ID:       
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Liveness:       tcp-socket :8080 delay=100s timeout=50s period=10s #success=1 #failure=3
    Readiness:      tcp-socket :8080 delay=100s timeout=50s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /etc/localtime from localtime (ro)
      /opt/app/music/etc/music.properties from properties-music (rw)
      /usr/local/tomcat/webapps from shared-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  shared-data:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:  
  localtime:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/localtime
    HostPathType:  
  properties-music:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-oof-music-tomcat-configmap
    Optional:  false
  default-token-rm7hn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rm7hn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age               From               Message
  ----    ------     ----              ----               -------
  Normal  Scheduled  27m               default-scheduler  Successfully assigned onap/dev-oof-music-tomcat-685fd777c9-8qmll to kub3
  Normal  Pulling    26m               kubelet, kub3      pulling image "oomk8s/readiness-check:2.0.0"
  Normal  Pulled     26m               kubelet, kub3      Successfully pulled image "oomk8s/readiness-check:2.0.0"
  Normal  Created    26m               kubelet, kub3      Created container
  Normal  Started    26m               kubelet, kub3      Started container
  Normal  Pulling    5m (x3 over 25m)  kubelet, kub3      pulling image "oomk8s/readiness-check:2.0.0"
  Normal  Pulled     5m (x3 over 25m)  kubelet, kub3      Successfully pulled image "oomk8s/readiness-check:2.0.0"
  Normal  Created    5m (x3 over 25m)  kubelet, kub3      Created container
  Normal  Started    5m (x3 over 25m)  kubelet, kub3      Started container
ubuntu@kub4:~$ kubectl  logs -f  dev-oof-music-tomcat-685fd777c9-8qmll  -c music-tomcat-zookeeper-readiness -n onap 
2018-12-25 11:20:58,478 - INFO - Checking if zookeeper  is ready
2018-12-25 11:21:32,325 - INFO - zookeeper is ready!
2018-12-25 11:21:32,326 - INFO - zookeeper is ready!
ubuntu@kub4:~$ kubectl  logs -f  dev-oof-music-tomcat-685fd777c9-8qmll  -c  music-tomcat-cassandra-readiness  -n onap 
2018-12-25 11:41:59,688 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:42:00,014 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:42:05,019 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:42:05,305 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:42:10,310 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:42:10,681 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:42:15,686 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:42:16,192 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:42:21,198 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:42:22,058 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:42:27,063 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:42:28,051 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:42:33,054 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:42:35,798 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:42:40,802 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:42:42,112 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:42:47,117 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:42:48,173 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:42:53,176 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:42:54,378 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:42:59,382 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:43:00,239 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:43:05,245 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:43:05,925 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:43:10,930 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:43:11,930 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:43:16,934 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:43:19,212 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:43:24,217 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:43:25,102 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:43:30,106 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:43:32,245 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:43:37,254 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:43:37,534 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:43:42,539 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:43:44,826 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:43:49,830 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:43:50,486 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:43:55,490 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:43:56,398 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:44:01,403 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:44:02,134 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:44:07,139 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:44:07,834 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:44:12,837 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:44:13,026 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:44:18,030 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:44:19,561 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:44:24,566 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:44:25,153 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

ubuntu@kub4:~$ kubectl describe pod  dev-oof-oof-cmso-service-6c485cdff-pbzb6  -n onap 
Name:           dev-oof-oof-cmso-service-6c485cdff-pbzb6
Namespace:      onap
Node:           kub3/192.168.13.151
Start Time:     Tue, 25 Dec 2018 11:20:07 +0000
Labels:         app=oof-cmso-service
                pod-template-hash=270417899
                release=dev-oof
Annotations:    <none>
Status:         Pending
IP:             10.42.224.93
Controlled By:  ReplicaSet/dev-oof-oof-cmso-service-6c485cdff
Init Containers:
  oof-cmso-service-readiness:
    Container ID:  docker://bb4ccdfaf3ba6836e606685de4bbe069da2e5193f165ae466f768dad85b71908
    Image:         oomk8s/readiness-check:2.0.0
    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed
    Port:          <none>
    Host Port:     <none>
    Command:
      /root/ready.py
    Args:
      --container-name
      cmso-db
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 25 Dec 2018 11:22:53 +0000
      Finished:     Tue, 25 Dec 2018 11:25:01 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)
  db-init:
    Container ID:   docker://dbc9fadd1140584043b8f690974a4d626f64d12ef5002108b7b5c29148981e23
    Image:          nexus3.onap.org:10001/onap/optf-cmso-dbinit:1.0.1
    Image ID:       docker-pullable://nexus3.onap.org:10001/onap/optf-cmso-dbinit@sha256:c5722a319fb0d91ad4d533597cdee2b55fc5c51d0a8740cf02cbaa1969c8554f
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 25 Dec 2018 11:48:31 +0000
      Finished:     Tue, 25 Dec 2018 11:48:41 +0000
    Ready:          False
    Restart Count:  9
    Environment:
      DB_HOST:      oof-cmso-dbhost.onap
      DB_PORT:      3306
      DB_USERNAME:  root
      DB_SCHEMA:    cmso
      DB_PASSWORD:  <set to the key 'db-root-password' in secret 'dev-oof-cmso-db'>  Optional: false
    Mounts:
      /share/etc/config from dev-oof-oof-cmso-service-config (rw)
      /share/logs from dev-oof-oof-cmso-service-logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)
Containers:
  oof-cmso-service:
    Container ID:   
    Image:          nexus3.onap.org:10001/onap/optf-cmso-service:1.0.1
    Image ID:       
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Liveness:       tcp-socket :8080 delay=120s timeout=50s period=10s #success=1 #failure=3
    Readiness:      tcp-socket :8080 delay=100s timeout=50s period=10s #success=1 #failure=3
    Environment:
      DB_HOST:      oof-cmso-dbhost.onap
      DB_PORT:      3306
      DB_USERNAME:  cmso-admin
      DB_SCHEMA:    cmso
      DB_PASSWORD:  <set to the key 'user-password' in secret 'dev-oof-cmso-db'>  Optional: false
    Mounts:
      /share/debug-logs from dev-oof-oof-cmso-service-logs (rw)
      /share/etc/config from dev-oof-oof-cmso-service-config (rw)
      /share/logs from dev-oof-oof-cmso-service-logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  dev-oof-oof-cmso-service-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-oof-oof-cmso-service
    Optional:  false
  dev-oof-oof-cmso-service-logs:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:  
  default-token-rm7hn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rm7hn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                From               Message
  ----     ------                  ----               ----               -------
  Normal   Scheduled               30m                default-scheduler  Successfully assigned onap/dev-oof-oof-cmso-service-6c485cdff-pbzb6 to kub3
  Warning  FailedCreatePodSandBox  29m                kubelet, kub3      Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "7d02bb1144aaaf2479a741c971bad617ea532717e7e72d71e2bfeeac992a7451" network for pod "dev-oof-oof-cmso-service-6c485cdff-pbzb6": NetworkPlugin cni failed to set up pod "dev-oof-oof-cmso-service-6c485cdff-pbzb6_onap" network: No MAC address found, failed to clean up sandbox container "7d02bb1144aaaf2479a741c971bad617ea532717e7e72d71e2bfeeac992a7451" network for pod "dev-oof-oof-cmso-service-6c485cdff-pbzb6": NetworkPlugin cni failed to teardown pod "dev-oof-oof-cmso-service-6c485cdff-pbzb6_onap" network: failed to get IP addresses for "eth0": <nil>]
  Normal   SandboxChanged          29m                kubelet, kub3      Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling                 27m                kubelet, kub3      pulling image "oomk8s/readiness-check:2.0.0"
  Normal   Pulled                  27m                kubelet, kub3      Successfully pulled image "oomk8s/readiness-check:2.0.0"
  Normal   Created                 27m                kubelet, kub3      Created container
  Normal   Started                 27m                kubelet, kub3      Started container
  Normal   Pulling                 23m (x4 over 25m)  kubelet, kub3      pulling image "nexus3.onap.org:10001/onap/optf-cmso-dbinit:1.0.1"
  Normal   Pulled                  23m (x4 over 25m)  kubelet, kub3      Successfully pulled image "nexus3.onap.org:10001/onap/optf-cmso-dbinit:1.0.1"
  Normal   Created                 23m (x4 over 25m)  kubelet, kub3      Created container
  Normal   Started                 23m (x4 over 25m)  kubelet, kub3      Started container
  Warning  BackOff                 4m (x80 over 24m)  kubelet, kub3      Back-off restarting failed container
ubuntu@kub4:~$ kubectl logs  -f  dev-oof-oof-cmso-service-6c485cdff-pbzb6  -c oof-cmso-service-readiness -n onap 
2018-12-25 11:22:54,683 - INFO - Checking if cmso-db  is ready
2018-12-25 11:23:02,186 - INFO - Checking if cmso-db  is ready
2018-12-25 11:23:09,950 - INFO - Checking if cmso-db  is ready
2018-12-25 11:23:12,938 - INFO - cmso-db is not ready.
2018-12-25 11:23:17,963 - INFO - Checking if cmso-db  is ready
2018-12-25 11:23:20,091 - INFO - cmso-db is not ready.
2018-12-25 11:23:25,111 - INFO - Checking if cmso-db  is ready
2018-12-25 11:23:27,315 - INFO - cmso-db is not ready.
2018-12-25 11:23:32,329 - INFO - Checking if cmso-db  is ready
2018-12-25 11:23:35,390 - INFO - cmso-db is not ready.
2018-12-25 11:23:40,407 - INFO - Checking if cmso-db  is ready
2018-12-25 11:23:43,346 - INFO - cmso-db is not ready.
2018-12-25 11:23:48,371 - INFO - Checking if cmso-db  is ready
2018-12-25 11:23:53,848 - INFO - cmso-db is not ready.
2018-12-25 11:23:58,870 - INFO - Checking if cmso-db  is ready
2018-12-25 11:24:02,188 - INFO - cmso-db is not ready.
2018-12-25 11:24:07,207 - INFO - Checking if cmso-db  is ready
2018-12-25 11:24:10,598 - INFO - cmso-db is not ready.
2018-12-25 11:24:15,622 - INFO - Checking if cmso-db  is ready
2018-12-25 11:24:18,936 - INFO - cmso-db is not ready.
2018-12-25 11:24:23,955 - INFO - Checking if cmso-db  is ready
2018-12-25 11:24:26,794 - INFO - cmso-db is not ready.
2018-12-25 11:24:31,813 - INFO - Checking if cmso-db  is ready
2018-12-25 11:24:35,529 - INFO - cmso-db is not ready.
2018-12-25 11:24:40,566 - INFO - Checking if cmso-db  is ready
2018-12-25 11:24:44,374 - INFO - cmso-db is not ready.
2018-12-25 11:24:49,403 - INFO - Checking if cmso-db  is ready
2018-12-25 11:24:53,222 - INFO - cmso-db is not ready.
2018-12-25 11:24:58,238 - INFO - Checking if cmso-db  is ready
2018-12-25 11:25:01,340 - INFO - cmso-db is ready!
ubuntu@kub4:~$ kubectl logs  -f  dev-oof-oof-cmso-service-6c485cdff-pbzb6  -c  db-init  -n onap 
VM_ARGS=
 
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.0.6.RELEASE)
 
2018-12-25 11:48:36.187  INFO 8 --- [           main] o.o.o.c.liquibase.LiquibaseApplication   : Starting LiquibaseApplication on dev-oof-oof-cmso-service-6c485cdff-pbzb6 with PID 8 (/opt/app/cmso-dbinit/app.jar started by root in /opt/app/cmso-dbinit)
2018-12-25 11:48:36.199  INFO 8 --- [           main] o.o.o.c.liquibase.LiquibaseApplication   : No active profile set, falling back to default profiles: default
2018-12-25 11:48:36.310  INFO 8 --- [           main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@d44fc21: startup date [Tue Dec 25 11:48:36 UTC 2018]; root of context hierarchy
2018-12-25 11:48:40.336  INFO 8 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Starting...
2018-12-25 11:48:40.754  INFO 8 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Start completed.
2018-12-25 11:48:41.044  WARN 8 --- [           main] s.c.a.AnnotationConfigApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [org/onap/optf/cmso/liquibase/LiquibaseData.class]: Invocation of init method failed; nested exception is liquibase.exception.LockException: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction
2018-12-25 11:48:41.045  INFO 8 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Shutdown initiated...
2018-12-25 11:48:41.109  INFO 8 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Shutdown completed.
2018-12-25 11:48:41.177  INFO 8 --- [           main] ConditionEvaluationReportLoggingListener : 
 
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2018-12-25 11:48:41.223 ERROR 8 --- [           main] o.s.boot.SpringApplication               : Application run failed
 
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [org/onap/optf/cmso/liquibase/LiquibaseData.class]: Invocation of init method failed; nested exception is liquibase.exception.LockException: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1694) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:573) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:495) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:317) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:315) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:759) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867) ~[spring-context-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:548) ~[spring-context-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:754) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:386) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:307) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1242) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1230) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
at org.onap.optf.cmso.liquibase.LiquibaseApplication.main(LiquibaseApplication.java:45) [classes!/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_181]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_181]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_181]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_181]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48) [app.jar:na]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87) [app.jar:na]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:50) [app.jar:na]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51) [app.jar:na]
Caused by: liquibase.exception.LockException: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction
at liquibase.lockservice.StandardLockService.acquireLock(StandardLockService.java:242) ~[liquibase-core-3.5.5.jar!/:na]
at liquibase.lockservice.StandardLockService.waitForLock(StandardLockService.java:170) ~[liquibase-core-3.5.5.jar!/:na]
at liquibase.Liquibase.update(Liquibase.java:196) ~[liquibase-core-3.5.5.jar!/:na]
at liquibase.Liquibase.update(Liquibase.java:192) ~[liquibase-core-3.5.5.jar!/:na]
at liquibase.integration.spring.SpringLiquibase.performUpdate(SpringLiquibase.java:431) ~[liquibase-core-3.5.5.jar!/:na]
at liquibase.integration.spring.SpringLiquibase.afterPropertiesSet(SpringLiquibase.java:388) ~[liquibase-core-3.5.5.jar!/:na]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1753) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1690) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
... 23 common frames omitted
Caused by: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction
at liquibase.database.AbstractJdbcDatabase.commit(AbstractJdbcDatabase.java:1159) ~[liquibase-core-3.5.5.jar!/:na]
at liquibase.lockservice.StandardLockService.acquireLock(StandardLockService.java:233) ~[liquibase-core-3.5.5.jar!/:na]
... 30 common frames omitted
Caused by: liquibase.exception.DatabaseException: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction
at liquibase.database.jvm.JdbcConnection.commit(JdbcConnection.java:126) ~[liquibase-core-3.5.5.jar!/:na]
at liquibase.database.AbstractJdbcDatabase.commit(AbstractJdbcDatabase.java:1157) ~[liquibase-core-3.5.5.jar!/:na]
... 31 common frames omitted
Caused by: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction
at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.get(ExceptionMapper.java:179) ~[mariadb-java-client-2.2.6.jar!/:na]
at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.getException(ExceptionMapper.java:110) ~[mariadb-java-client-2.2.6.jar!/:na]
at org.mariadb.jdbc.MariaDbStatement.executeExceptionEpilogue(MariaDbStatement.java:228) ~[mariadb-java-client-2.2.6.jar!/:na]
at org.mariadb.jdbc.MariaDbStatement.executeInternal(MariaDbStatement.java:334) ~[mariadb-java-client-2.2.6.jar!/:na]
at org.mariadb.jdbc.MariaDbStatement.execute(MariaDbStatement.java:386) ~[mariadb-java-client-2.2.6.jar!/:na]
at org.mariadb.jdbc.MariaDbConnection.commit(MariaDbConnection.java:709) ~[mariadb-java-client-2.2.6.jar!/:na]
at com.zaxxer.hikari.pool.ProxyConnection.commit(ProxyConnection.java:368) ~[HikariCP-2.7.9.jar!/:na]
at com.zaxxer.hikari.pool.HikariProxyConnection.commit(HikariProxyConnection.java) ~[HikariCP-2.7.9.jar!/:na]
at liquibase.database.jvm.JdbcConnection.commit(JdbcConnection.java:123) ~[liquibase-core-3.5.5.jar!/:na]
... 32 common frames omitted
Caused by: java.sql.SQLException: Deadlock found when trying to get lock; try restarting transaction
Query is: COMMIT
at org.mariadb.jdbc.internal.util.LogQueryTool.exceptionWithQuery(LogQueryTool.java:119) ~[mariadb-java-client-2.2.6.jar!/:na]
at org.mariadb.jdbc.internal.protocol.AbstractQueryProtocol.executeQuery(AbstractQueryProtocol.java:200) ~[mariadb-java-client-2.2.6.jar!/:na]
at org.mariadb.jdbc.MariaDbStatement.executeInternal(MariaDbStatement.java:328) ~[mariadb-java-client-2.2.6.jar!/:na]
... 37 common frames omitted
 
ubuntu@kub4:~$ 
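
The db-init failure above is Liquibase failing to acquire its changelog lock because of a MariaDB deadlock. One way to inspect the lock state is to query Liquibase's standard DATABASECHANGELOGLOCK table in the cmso schema directly (a hedged sketch, not part of this thread; it assumes the mysql client is available inside the dev-oof-cmso-db-0 container and uses the db-root-password key of the dev-oof-cmso-db secret shown in the describe output above):

DB_ROOT_PASSWORD=$(kubectl get secret dev-oof-cmso-db -n onap -o jsonpath='{.data.db-root-password}' | base64 -d)
kubectl exec -it dev-oof-cmso-db-0 -n onap -- \
  mysql -uroot -p"$DB_ROOT_PASSWORD" cmso \
  -e 'SELECT ID, LOCKED, LOCKGRANTED, LOCKEDBY FROM DATABASECHANGELOGLOCK;'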
 
 


Borislav Glozman
 

Hi,

 

You will probably need further assistance from the OOF team (regarding the exception).

I also did not see the dev-oof-music-cassandra-job-config job.
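
A quick way to confirm whether that job was created and whether it ever completed (an illustrative check, not from the original reply; it assumes the onap namespace used throughout this thread):

kubectl get jobs -n onap | grep music-cassandra
kubectl logs -n onap job/dev-oof-music-cassandra-job-config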

 

Thanks,

Borislav Glozman

O:+972.9.776.1988

M:+972.52.2835726


Amdocs a Platinum member of ONAP

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of gulsum atici
Sent: Tuesday, December 25, 2018 1:56 PM
To: Borislav Glozman <Borislav.Glozman@...>; onap-discuss@...
Subject: Re: [onap-discuss] Casablanca oof module pods are waiting on init status #oof

 


2018-12-25 11:44:02,134 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:44:07,139 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:44:07,834 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:44:12,837 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:44:13,026 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:44:18,030 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:44:19,561 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:44:24,566 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:44:25,153 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
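
For reference, the music-tomcat-cassandra-readiness init container simply polls the Job object through /root/job_complete.py, so the same check can be reproduced from the CLI. A minimal sketch, assuming kubectl access to the onap namespace:

# non-empty / >0 output means the job completed successfully
kubectl get job dev-oof-music-cassandra-job-config -n onap -o jsonpath='{.status.succeeded}'

# full status conditions reported by the Job controller
kubectl get job dev-oof-music-cassandra-job-config -n onap -o jsonpath='{.status.conditions}'
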

ubuntu@kub4:~$ kubectl describe pod  dev-oof-oof-cmso-service-6c485cdff-pbzb6  -n onap 

Name:           dev-oof-oof-cmso-service-6c485cdff-pbzb6

Namespace:      onap

Node:           kub3/192.168.13.151

Start Time:     Tue, 25 Dec 2018 11:20:07 +0000

Labels:         app=oof-cmso-service

                pod-template-hash=270417899

                release=dev-oof

Annotations:    <none>

Status:         Pending

IP:             10.42.224.93

Controlled By:  ReplicaSet/dev-oof-oof-cmso-service-6c485cdff

Init Containers:

  oof-cmso-service-readiness:

    Container ID:  docker://bb4ccdfaf3ba6836e606685de4bbe069da2e5193f165ae466f768dad85b71908

    Image:         oomk8s/readiness-check:2.0.0

    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed

    Port:          <none>

    Host Port:     <none>

    Command:

      /root/ready.py

    Args:

      --container-name

      cmso-db

    State:          Terminated

      Reason:       Completed

      Exit Code:    0

      Started:      Tue, 25 Dec 2018 11:22:53 +0000

      Finished:     Tue, 25 Dec 2018 11:25:01 +0000

    Ready:          True

    Restart Count:  0

    Environment:

      NAMESPACE:  onap (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)

  db-init:

    Container ID:   docker://dbc9fadd1140584043b8f690974a4d626f64d12ef5002108b7b5c29148981e23

    Image:          nexus3.onap.org:10001/onap/optf-cmso-dbinit:1.0.1

    Image ID:       docker-pullable://nexus3.onap.org:10001/onap/optf-cmso-dbinit@sha256:c5722a319fb0d91ad4d533597cdee2b55fc5c51d0a8740cf02cbaa1969c8554f

    Port:           <none>

    Host Port:      <none>

    State:          Waiting

      Reason:       CrashLoopBackOff

    Last State:     Terminated

      Reason:       Error

      Exit Code:    1

      Started:      Tue, 25 Dec 2018 11:48:31 +0000

      Finished:     Tue, 25 Dec 2018 11:48:41 +0000

    Ready:          False

    Restart Count:  9

    Environment:

      DB_HOST:      oof-cmso-dbhost.onap

      DB_PORT:      3306

      DB_USERNAME:  root

      DB_SCHEMA:    cmso

      DB_PASSWORD:  <set to the key 'db-root-password' in secret 'dev-oof-cmso-db'>  Optional: false

    Mounts:

      /share/etc/config from dev-oof-oof-cmso-service-config (rw)

      /share/logs from dev-oof-oof-cmso-service-logs (rw)

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)

Containers:

  oof-cmso-service:

    Container ID:   

    Image:          nexus3.onap.org:10001/onap/optf-cmso-service:1.0.1

    Image ID:       

    Port:           8080/TCP

    Host Port:      0/TCP

    State:          Waiting

      Reason:       PodInitializing

    Ready:          False

    Restart Count:  0

    Liveness:       tcp-socket :8080 delay=120s timeout=50s period=10s #success=1 #failure=3

    Readiness:      tcp-socket :8080 delay=100s timeout=50s period=10s #success=1 #failure=3

    Environment:

      DB_HOST:      oof-cmso-dbhost.onap

      DB_PORT:      3306

      DB_USERNAME:  cmso-admin

      DB_SCHEMA:    cmso

      DB_PASSWORD:  <set to the key 'user-password' in secret 'dev-oof-cmso-db'>  Optional: false

    Mounts:

      /share/debug-logs from dev-oof-oof-cmso-service-logs (rw)

      /share/etc/config from dev-oof-oof-cmso-service-config (rw)

      /share/logs from dev-oof-oof-cmso-service-logs (rw)

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)

Conditions:

  Type              Status

  Initialized       False 

  Ready             False 

  ContainersReady   False 

  PodScheduled      True 

Volumes:

  dev-oof-oof-cmso-service-config:

    Type:      ConfigMap (a volume populated by a ConfigMap)

    Name:      dev-oof-oof-cmso-service

    Optional:  false

  dev-oof-oof-cmso-service-logs:

    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)

    Medium:  

  default-token-rm7hn:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  default-token-rm7hn

    Optional:    false

QoS Class:       BestEffort

Node-Selectors:  <none>

Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s

                 node.kubernetes.io/unreachable:NoExecute for 300s

Events:

  Type     Reason                  Age                From               Message

  ----     ------                  ----               ----               -------

  Normal   Scheduled               30m                default-scheduler  Successfully assigned onap/dev-oof-oof-cmso-service-6c485cdff-pbzb6 to kub3

  Warning  FailedCreatePodSandBox  29m                kubelet, kub3      Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "7d02bb1144aaaf2479a741c971bad617ea532717e7e72d71e2bfeeac992a7451" network for pod "dev-oof-oof-cmso-service-6c485cdff-pbzb6": NetworkPlugin cni failed to set up pod "dev-oof-oof-cmso-service-6c485cdff-pbzb6_onap" network: No MAC address found, failed to clean up sandbox container "7d02bb1144aaaf2479a741c971bad617ea532717e7e72d71e2bfeeac992a7451" network for pod "dev-oof-oof-cmso-service-6c485cdff-pbzb6": NetworkPlugin cni failed to teardown pod "dev-oof-oof-cmso-service-6c485cdff-pbzb6_onap" network: failed to get IP addresses for "eth0": <nil>]

  Normal   SandboxChanged          29m                kubelet, kub3      Pod sandbox changed, it will be killed and re-created.

  Normal   Pulling                 27m                kubelet, kub3      pulling image "oomk8s/readiness-check:2.0.0"

  Normal   Pulled                  27m                kubelet, kub3      Successfully pulled image "oomk8s/readiness-check:2.0.0"

  Normal   Created                 27m                kubelet, kub3      Created container

  Normal   Started                 27m                kubelet, kub3      Started container

  Normal   Pulling                 23m (x4 over 25m)  kubelet, kub3      pulling image "nexus3.onap.org:10001/onap/optf-cmso-dbinit:1.0.1"

  Normal   Pulled                  23m (x4 over 25m)  kubelet, kub3      Successfully pulled image "nexus3.onap.org:10001/onap/optf-cmso-dbinit:1.0.1"

  Normal   Created                 23m (x4 over 25m)  kubelet, kub3      Created container

  Normal   Started                 23m (x4 over 25m)  kubelet, kub3      Started container

  Warning  BackOff                 4m (x80 over 24m)  kubelet, kub3      Back-off restarting failed container

ubuntu@kub4:~$ kubectl logs  -f  dev-oof-oof-cmso-service-6c485cdff-pbzb6  -c oof-cmso-service-readiness -n onap 

2018-12-25 11:22:54,683 - INFO - Checking if cmso-db  is ready

2018-12-25 11:23:02,186 - INFO - Checking if cmso-db  is ready

2018-12-25 11:23:09,950 - INFO - Checking if cmso-db  is ready

2018-12-25 11:23:12,938 - INFO - cmso-db is not ready.

2018-12-25 11:23:17,963 - INFO - Checking if cmso-db  is ready

2018-12-25 11:23:20,091 - INFO - cmso-db is not ready.

2018-12-25 11:23:25,111 - INFO - Checking if cmso-db  is ready

2018-12-25 11:23:27,315 - INFO - cmso-db is not ready.

2018-12-25 11:23:32,329 - INFO - Checking if cmso-db  is ready

2018-12-25 11:23:35,390 - INFO - cmso-db is not ready.

2018-12-25 11:23:40,407 - INFO - Checking if cmso-db  is ready

2018-12-25 11:23:43,346 - INFO - cmso-db is not ready.

2018-12-25 11:23:48,371 - INFO - Checking if cmso-db  is ready

2018-12-25 11:23:53,848 - INFO - cmso-db is not ready.

2018-12-25 11:23:58,870 - INFO - Checking if cmso-db  is ready

2018-12-25 11:24:02,188 - INFO - cmso-db is not ready.

2018-12-25 11:24:07,207 - INFO - Checking if cmso-db  is ready

2018-12-25 11:24:10,598 - INFO - cmso-db is not ready.

2018-12-25 11:24:15,622 - INFO - Checking if cmso-db  is ready

2018-12-25 11:24:18,936 - INFO - cmso-db is not ready.

2018-12-25 11:24:23,955 - INFO - Checking if cmso-db  is ready

2018-12-25 11:24:26,794 - INFO - cmso-db is not ready.

2018-12-25 11:24:31,813 - INFO - Checking if cmso-db  is ready

2018-12-25 11:24:35,529 - INFO - cmso-db is not ready.

2018-12-25 11:24:40,566 - INFO - Checking if cmso-db  is ready

2018-12-25 11:24:44,374 - INFO - cmso-db is not ready.

2018-12-25 11:24:49,403 - INFO - Checking if cmso-db  is ready

2018-12-25 11:24:53,222 - INFO - cmso-db is not ready.

2018-12-25 11:24:58,238 - INFO - Checking if cmso-db  is ready

2018-12-25 11:25:01,340 - INFO - cmso-db is ready!

ubuntu@kub4:~$ kubectl logs  -f  dev-oof-oof-cmso-service-6c485cdff-pbzb6  -c  db-init  -n onap 

VM_ARGS=

 

  .   ____          _            __ _ _

 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \

( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \

 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )

  '  |____| .__|_| |_|_| |_\__, | / / / /

 =========|_|==============|___/=/_/_/_/

 :: Spring Boot ::        (v2.0.6.RELEASE)

 

2018-12-25 11:48:36.187  INFO 8 --- [           main] o.o.o.c.liquibase.LiquibaseApplication   : Starting LiquibaseApplication on dev-oof-oof-cmso-service-6c485cdff-pbzb6 with PID 8 (/opt/app/cmso-dbinit/app.jar started by root in /opt/app/cmso-dbinit)

2018-12-25 11:48:36.199  INFO 8 --- [           main] o.o.o.c.liquibase.LiquibaseApplication   : No active profile set, falling back to default profiles: default

2018-12-25 11:48:36.310  INFO 8 --- [           main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@d44fc21: startup date [Tue Dec 25 11:48:36 UTC 2018]; root of context hierarchy

2018-12-25 11:48:40.336  INFO 8 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Starting...

2018-12-25 11:48:40.754  INFO 8 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Start completed.

2018-12-25 11:48:41.044  WARN 8 --- [           main] s.c.a.AnnotationConfigApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [org/onap/optf/cmso/liquibase/LiquibaseData.class]: Invocation of init method failed; nested exception is liquibase.exception.LockException: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction

2018-12-25 11:48:41.045  INFO 8 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Shutdown initiated...

2018-12-25 11:48:41.109  INFO 8 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Shutdown completed.

2018-12-25 11:48:41.177  INFO 8 --- [           main] ConditionEvaluationReportLoggingListener : 

 

Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.

2018-12-25 11:48:41.223 ERROR 8 --- [           main] o.s.boot.SpringApplication               : Application run failed

 

org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [org/onap/optf/cmso/liquibase/LiquibaseData.class]: Invocation of init method failed; nested exception is liquibase.exception.LockException: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction

at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1694) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:573) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:495) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:317) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:315) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:759) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867) ~[spring-context-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:548) ~[spring-context-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:754) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]

at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:386) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]

at org.springframework.boot.SpringApplication.run(SpringApplication.java:307) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]

at org.springframework.boot.SpringApplication.run(SpringApplication.java:1242) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]

at org.springframework.boot.SpringApplication.run(SpringApplication.java:1230) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]

at org.onap.optf.cmso.liquibase.LiquibaseApplication.main(LiquibaseApplication.java:45) [classes!/:na]

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_181]

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_181]

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_181]

at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_181]

at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48) [app.jar:na]

at org.springframework.boot.loader.Launcher.launch(Launcher.java:87) [app.jar:na]

at org.springframework.boot.loader.Launcher.launch(Launcher.java:50) [app.jar:na]

at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51) [app.jar:na]

Caused by: liquibase.exception.LockException: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction

at liquibase.lockservice.StandardLockService.acquireLock(StandardLockService.java:242) ~[liquibase-core-3.5.5.jar!/:na]

at liquibase.lockservice.StandardLockService.waitForLock(StandardLockService.java:170) ~[liquibase-core-3.5.5.jar!/:na]

at liquibase.Liquibase.update(Liquibase.java:196) ~[liquibase-core-3.5.5.jar!/:na]

at liquibase.Liquibase.update(Liquibase.java:192) ~[liquibase-core-3.5.5.jar!/:na]

at liquibase.integration.spring.SpringLiquibase.performUpdate(SpringLiquibase.java:431) ~[liquibase-core-3.5.5.jar!/:na]

at liquibase.integration.spring.SpringLiquibase.afterPropertiesSet(SpringLiquibase.java:388) ~[liquibase-core-3.5.5.jar!/:na]

at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1753) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1690) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

... 23 common frames omitted

Caused by: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction

at liquibase.database.AbstractJdbcDatabase.commit(AbstractJdbcDatabase.java:1159) ~[liquibase-core-3.5.5.jar!/:na]

at liquibase.lockservice.StandardLockService.acquireLock(StandardLockService.java:233) ~[liquibase-core-3.5.5.jar!/:na]

... 30 common frames omitted

Caused by: liquibase.exception.DatabaseException: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction

at liquibase.database.jvm.JdbcConnection.commit(JdbcConnection.java:126) ~[liquibase-core-3.5.5.jar!/:na]

at liquibase.database.AbstractJdbcDatabase.commit(AbstractJdbcDatabase.java:1157) ~[liquibase-core-3.5.5.jar!/:na]

... 31 common frames omitted

Caused by: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction

at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.get(ExceptionMapper.java:179) ~[mariadb-java-client-2.2.6.jar!/:na]

at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.getException(ExceptionMapper.java:110) ~[mariadb-java-client-2.2.6.jar!/:na]

at org.mariadb.jdbc.MariaDbStatement.executeExceptionEpilogue(MariaDbStatement.java:228) ~[mariadb-java-client-2.2.6.jar!/:na]

at org.mariadb.jdbc.MariaDbStatement.executeInternal(MariaDbStatement.java:334) ~[mariadb-java-client-2.2.6.jar!/:na]

at org.mariadb.jdbc.MariaDbStatement.execute(MariaDbStatement.java:386) ~[mariadb-java-client-2.2.6.jar!/:na]

at org.mariadb.jdbc.MariaDbConnection.commit(MariaDbConnection.java:709) ~[mariadb-java-client-2.2.6.jar!/:na]

at com.zaxxer.hikari.pool.ProxyConnection.commit(ProxyConnection.java:368) ~[HikariCP-2.7.9.jar!/:na]

at com.zaxxer.hikari.pool.HikariProxyConnection.commit(HikariProxyConnection.java) ~[HikariCP-2.7.9.jar!/:na]

at liquibase.database.jvm.JdbcConnection.commit(JdbcConnection.java:123) ~[liquibase-core-3.5.5.jar!/:na]

... 32 common frames omitted

Caused by: java.sql.SQLException: Deadlock found when trying to get lock; try restarting transaction

Query is: COMMIT

at org.mariadb.jdbc.internal.util.LogQueryTool.exceptionWithQuery(LogQueryTool.java:119) ~[mariadb-java-client-2.2.6.jar!/:na]

at org.mariadb.jdbc.internal.protocol.AbstractQueryProtocol.executeQuery(AbstractQueryProtocol.java:200) ~[mariadb-java-client-2.2.6.jar!/:na]

at org.mariadb.jdbc.MariaDbStatement.executeInternal(MariaDbStatement.java:328) ~[mariadb-java-client-2.2.6.jar!/:na]

... 37 common frames omitted

 

ubuntu@kub4:~$ 
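
The db-init failure above is Liquibase failing while committing its changelog lock. One thing worth checking (a rough sketch only, not a confirmed fix: it assumes the mysql client is present in the dev-oof-cmso-db-0 pod and uses the standard Liquibase lock table; pod, schema and secret-key names are taken from the describe output above):

# read the root password from the chart secret
kubectl get secret dev-oof-cmso-db -n onap -o jsonpath='{.data.db-root-password}' | base64 -d

# inspect the Liquibase lock table and, if a previous db-init attempt left the lock held, release it
kubectl exec -it dev-oof-cmso-db-0 -n onap -- mysql -uroot -p<root-password> cmso \
  -e "SELECT * FROM DATABASECHANGELOGLOCK; UPDATE DATABASECHANGELOGLOCK SET LOCKED=0, LOCKGRANTED=NULL, LOCKEDBY=NULL WHERE ID=1;"

After that, deleting the dev-oof-oof-cmso-service pod lets its db-init init container retry from scratch.
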

 

 



Ying, Ruoyu
 

Hi,

 

Could you run the command ‘kubectl describe job dev-oof-music-cassandra-job-config -n onap’ and share the status of the job?
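
If the job ever created any pods, their logs usually show why the CQL load did not finish. A quick sketch (the pod name is a placeholder):

kubectl get pods -n onap -l job-name=dev-oof-music-cassandra-job-config -o wide
kubectl logs -n onap <music-cassandra-job-config-pod>
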

Thanks.

 

Best Regards,

Ruoyu

 



gulsum atici <gulsumatici@...>
 

Dear Ruoyu,

Please  find the  job  details.

Thanks a lot.

ubuntu@kub1:~$ kubectl  describe  job  dev-oof-music-cassandra-job-config   -n onap 
Name:           dev-oof-music-cassandra-job-config
Namespace:      onap
Selector:       controller-uid=0e1b2cf2-08ec-11e9-b23c-028b437f6721
Labels:         app=music-cassandra-job-job
                chart=music-cassandra-job-3.0.0
                heritage=Tiller
                release=dev-oof
Annotations:    <none>
Parallelism:    1
Completions:    1
Pods Statuses:  0 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=music-cassandra-job-job
           controller-uid=0e1b2cf2-08ec-11e9-b23c-028b437f6721
           job-name=dev-oof-music-cassandra-job-config
           release=dev-oof
  Init Containers:
   music-cassandra-job-readiness:
    Image:      oomk8s/readiness-check:2.0.0
    Port:       <none>
    Host Port:  <none>
    Command:
      /root/ready.py
    Args:
      --container-name
      music-cassandra
    Environment:
      NAMESPACE:   (v1:metadata.namespace)
    Mounts:       <none>
  Containers:
   music-cassandra-job-update-job:
    Image:      nexus3.onap.org:10001/onap/music/cassandra_job:3.0.24
    Port:       <none>
    Host Port:  <none>
    Environment:
      CASS_HOSTNAME:  music-cassandra
      USERNAME:       nelson24
      PORT:           9042
      PASSWORD:       nelson24
      TIMEOUT:        30
      DELAY:          120
    Mounts:
      /cql/admin.cql from music-cassandra-job-cql (rw)
      /cql/admin_pw.cql from music-cassandra-job-cql (rw)
      /cql/extra from music-cassandra-job-extra-cql (rw)
  Volumes:
   music-cassandra-job-cql:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-oof-music-cassandra-job-cql
    Optional:  false
   music-cassandra-job-extra-cql:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-oof-music-cassandra-job-extra-cql
    Optional:  false
Events:        <none>
ubuntu@kub1:~$ 
 


Ying, Ruoyu
 

It seems like the job has not finished successfully. When you redeploy OOF, do you ever clean the oof folder under /dockerdata-nfs?

 

Thanks.

 

Best Regards,

Ruoyu

 

 


gulsum atici <gulsumatici@...>
 

Dear  Ruoyu,

When I redeployed OOF, I didn't clean the folder under /dockerdata-nfs.

Should I clean it?

Thanks,


Ying, Ruoyu
 

Hi,

 

Yes, you'd better clean the folder under /dockerdata-nfs, e.g. dev-oof/. Can you try redeploying it again?
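
For anyone following along, a minimal cleanup-and-redeploy sequence along these lines might look like the sketch below. It assumes Helm 2/Tiller, the onap namespace and the dev-oof release name used in this thread; adjust it to however you normally deploy.

helm delete dev-oof --purge
kubectl delete pvc -n onap -l release=dev-oof   # the OOF resources in this thread are labelled release=dev-oof
kubectl delete pv -l release=dev-oof            # only if your PVs carry the release label; otherwise delete them by name
sudo rm -rf /dockerdata-nfs/dev-oof             # clear the shared config/data for the release
# then redeploy oof the same way it was originally deployed (e.g. via the OOM helm deploy plugin)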

Thanks

 

Best Regards,

Ruoyu

 



gulsum atici <gulsumatici@...>
 

Dear Ruoyu,

I have undeployed some modules (aaf, oof, so), deleted the resources shown by 'kubectl get all -n onap' for those modules, deleted their PVs and PVCs, cleaned their files under /dockerdata-nfs, and then deployed the modules again.
However, the jobs are not executing.

Here is one of the job descriptions. What could be the reason that those jobs are not running?


ubuntu@kub1:~$ kubectl  describe  job dev-oof-music-cassandra-job-config   -n onap 
Name:           dev-oof-music-cassandra-job-config
Namespace:      onap
Selector:       controller-uid=d65b9d35-09a8-11e9-b23c-028b437f6721
Labels:         app=music-cassandra-job-job
                chart=music-cassandra-job-3.0.0
                heritage=Tiller
                release=dev-oof
Annotations:    <none>
Parallelism:    1
Completions:    1
Pods Statuses:  0 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=music-cassandra-job-job
           controller-uid=d65b9d35-09a8-11e9-b23c-028b437f6721
           job-name=dev-oof-music-cassandra-job-config
           release=dev-oof
  Init Containers:
   music-cassandra-job-readiness:
    Image:      oomk8s/readiness-check:2.0.0
    Port:       <none>
    Host Port:  <none>
    Command:
      /root/ready.py
    Args:
      --container-name
      music-cassandra
    Environment:
      NAMESPACE:   (v1:metadata.namespace)
    Mounts:       <none>
  Containers:
   music-cassandra-job-update-job:
    Image:      nexus3.onap.org:10001/onap/music/cassandra_job:3.0.24
    Port:       <none>
    Host Port:  <none>
    Environment:
      CASS_HOSTNAME:  music-cassandra
      USERNAME:       nelson24
      PORT:           9042
      PASSWORD:       nelson24
      TIMEOUT:        30
      DELAY:          120
    Mounts:
      /cql/admin.cql from music-cassandra-job-cql (rw)
      /cql/admin_pw.cql from music-cassandra-job-cql (rw)
      /cql/extra from music-cassandra-job-extra-cql (rw)
  Volumes:
   music-cassandra-job-cql:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-oof-music-cassandra-job-cql
    Optional:  false
   music-cassandra-job-extra-cql:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-oof-music-cassandra-job-extra-cql
    Optional:  false
Events:        <none>
 

ubuntu@kub1:~$ kubectl  describe  job  dev-oof-oof-has-onboard  -n onap 
Name:           dev-oof-oof-has-onboard
Namespace:      onap
Selector:       controller-uid=d6895c92-09a8-11e9-b23c-028b437f6721
Labels:         app=oof-has
                chart=oof-has-3.0.0
                heritage=Tiller
                release=dev-oof
Annotations:    <none>
Parallelism:    1
Completions:    1
Pods Statuses:  0 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=oof-has
           controller-uid=d6895c92-09a8-11e9-b23c-028b437f6721
           job-name=dev-oof-oof-has-onboard
           release=dev-oof
  Init Containers:
   oof-has-readiness:
    Image:      oomk8s/readiness-check:2.0.0
    Port:       <none>
    Host Port:  <none>
    Command:
      /root/ready.py
    Args:
      --container-name
      music-tomcat
      --container-name
      music-cassandra
    Environment:
      NAMESPACE:   (v1:metadata.namespace)
    Mounts:       <none>
   oof-has-music-db-readiness:
    Image:      oomk8s/readiness-check:2.0.0
    Port:       <none>
    Host Port:  <none>
    Command:
      /root/job_complete.py
    Args:
      -j
      dev-oof-music-cassandra-job-config
    Environment:
      NAMESPACE:   (v1:metadata.namespace)
    Mounts:       <none>
  Containers:
   oof-has-onboard:
    Image:      nexus3.onap.org:10001/onap/optf-has:1.2.4
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/sh
      -c
      curl -X POST http://music-tomcat.onap:8080/MUSIC/rest/v2/admin/onboardAppWithMusic \
-H "Content-Type: application/json" \
-H "Authorization: Basic Y29uZHVjdG9yOmMwbmR1Y3Qwcg==" \
--data @onboard.json
 
    Environment:  <none>
    Mounts:
      /etc/localtime from localtime (ro)
      /has/onboard.json from onap-oof-has-config (rw)
  Volumes:
   localtime:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/localtime
    HostPathType:  
   onap-oof-has-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      onap-oof-has-configmap
    Optional:  false
Events:        <none>
 

NAME                                                 DESIRED   SUCCESSFUL   AGE
job.batch/dev-aaf-aaf-sms-preload                    1         0            16m
job.batch/dev-aaf-aaf-sshsm-distcenter               1         0            16m
job.batch/dev-aaf-aaf-sshsm-testca                   1         0            16m
job.batch/dev-aai-aai-graphadmin-create-db-schema    1         1            6d
job.batch/dev-aai-aai-traversal-update-query-data    1         1            6d
job.batch/dev-contrib-netbox-app-provisioning        1         1            6d
job.batch/dev-oof-music-cassandra-job-config         1         0            18m
job.batch/dev-oof-oof-has-healthcheck                1         0            18m
job.batch/dev-oof-oof-has-onboard                    1         0            18m
job.batch/dev-portal-portal-db-config                1         1            6d
job.batch/dev-sdc-sdc-be-config-backend              1         1            6d
job.batch/dev-sdc-sdc-cs-config-cassandra            1         1            6d
job.batch/dev-sdc-sdc-dcae-be-tools                  1         1            6d
job.batch/dev-sdc-sdc-es-config-elasticsearch        1         1            6d
job.batch/dev-sdc-sdc-onboarding-be-cassandra-init   1         1            6d
job.batch/dev-sdc-sdc-wfd-be-workflow-init           1         1            6d
job.batch/dev-vid-vid-galera-config                  1         1            6d
job.batch/dev-vnfsdk-vnfsdk-init-postgres            1         1            6d
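
Since the three OOF jobs above show DESIRED 1 / SUCCESSFUL 0 and "Events: <none>", a few generic checks can confirm whether the job controller ever created a pod and, if it did, which init container it is stuck in. The pod name in the last command is a placeholder; the container name is taken from the describe output above.

kubectl get pods -n onap -l job-name=dev-oof-music-cassandra-job-config
kubectl get events -n onap --sort-by=.lastTimestamp | tail -n 30
kubectl logs <job-pod-name> -n onap -c music-cassandra-job-readiness   # readiness init container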
 


Ying, Ruoyu
 

Hi,

 

I haven’t encountered situations like that. Have you ever met the same problem while deploying other components?

 

Best Regards,

Ruoyu

 



gulsum atici <gulsumatici@...>
 

Dear Ruoyu,

Never mind, I made a fresh install again. However, some pods have been waiting to initialize for more than 3 days.

The oof-cmso-db PVC is stuck in Pending.

ubuntu@kub1:~$ kubectl get  pvc   -n onap  |  grep -i pending
dev-appc-appc-db                                                             Pending                                                                                                                1d
dev-oof-cmso-db                                                              Pending                                                                                                                1d
dev-sdnc-controller-blueprints-db                                            Pending                                                                                                                1d
dev-sdnc-nengdb                                                              Pending    
 
ubuntu@kub1:~$ kubectl describe  pvc  dev-oof-cmso-db   -n onap 
Name:          dev-oof-cmso-db
Namespace:     onap
StorageClass:  
Status:        Pending
Volume:        
Labels:        app=cmso-db
               chart=mariadb-galera-3.0.0
               heritage=Tiller
               release=dev-oof
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
Events:
  Type    Reason         Age                 From                         Message
  ----    ------         ----                ----                         -------
  Normal  FailedBinding  10m (x321 over 1h)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
  Normal  FailedBinding  10s (x17 over 4m)   persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
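
The FailedBinding events above mean no pre-created PersistentVolume matches the claim and no default StorageClass is set, so the claim can never bind. A minimal sketch of a hostPath PV that such a claim could bind to is below; the size, access mode, labels and path are assumptions (OOM normally keeps this data under /dockerdata-nfs), so check the chart's persistence values before using it.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dev-oof-cmso-db
  labels:
    release: dev-oof
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /dockerdata-nfs/dev-oof/cmso-db
EOF

Alternatively, a cluster default StorageClass backed by a working provisioner may let these claims bind without hand-made PVs.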

dev-oof-cmso-db-0                                             1/1       Running            0          3d
dev-oof-music-cassandra-0                                     1/1       Running            0          3d
dev-oof-music-cassandra-1                                     1/1       Running            0          3d
dev-oof-music-cassandra-2                                     1/1       Running            0          3d
dev-oof-music-cassandra-job-config-rnrs2                      0/1       Completed          0          3d
dev-oof-music-tomcat-64d4c64db7-vrhbp                         1/1       Running            0          3d
dev-oof-music-tomcat-64d4c64db7-vttlr                         1/1       Running            0          3d
dev-oof-music-tomcat-64d4c64db7-wq95z                         1/1       Running            0          3d
dev-oof-oof-7b4bccc8d7-5slv2                                  1/1       Running            0          3d
dev-oof-oof-cmso-service-55499fdf4c-cw2qb                     1/1       Running            0          3d
dev-oof-oof-has-api-7d9b977b48-bvgdc                          1/1       Running            0          3d
dev-oof-oof-has-controller-7f5b6c5f7-9ttcv                    1/1       Running            0          3d
dev-oof-oof-has-data-b57bd54fb-chztq                          0/1       Init:2/4           502        3d
dev-oof-oof-has-onboard-2v8km                                 0/1       Completed          0          3d
dev-oof-oof-has-reservation-5869b786b-8z7gt                   0/1       Init:2/4           497        3d
dev-oof-oof-has-solver-5c75888465-4fkrr                       0/1       Init:2/4           503        3d
dev-oof-zookeeper-0                                           1/1       Running            0          3d
dev-oof-zookeeper-1                                           1/1       Running            0          3d
dev-oof-zookeeper-2                                           1/1       Running            0          3d

ubuntu@kub1:~$ kubectl  describe  pod  dev-oof-music-cassandra-job-config-rnrs2  -n  onap 
Name:           dev-oof-music-cassandra-job-config-rnrs2
Namespace:      onap
Node:           kub4/192.168.13.162
Start Time:     Thu, 27 Dec 2018 14:46:05 +0000
Labels:         app=music-cassandra-job-job
                controller-uid=22ea9645-09e6-11e9-9f85-028b4359678c
                job-name=dev-oof-music-cassandra-job-config
                release=dev-oof
Annotations:    <none>
Status:         Succeeded
IP:             10.42.9.206
Controlled By:  Job/dev-oof-music-cassandra-job-config
Init Containers:
  music-cassandra-job-readiness:
    Container ID:  docker://3de813213d2ecc31798668e6018944239820c5fa029efd17bd3243f2d0564f24
    Image:         oomk8s/readiness-check:2.0.0
    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed
    Port:          <none>
    Host Port:     <none>
    Command:
      /root/ready.py
    Args:
      --container-name
      music-cassandra
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 27 Dec 2018 15:29:51 +0000
      Finished:     Thu, 27 Dec 2018 15:30:08 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)
Containers:
  music-cassandra-job-update-job:
    Container ID:   docker://9ae4cfb9b78b4b33faa5bc9c9a2f1478ee090d7cf83ea2370fac983e59c06c25
    Image:          nexus3.onap.org:10001/onap/music/cassandra_job:3.0.24
    Image ID:       docker-pullable://nexus3.onap.org:10001/onap/music/cassandra_job@sha256:b21578fc4cf68585909bd82fe3e8a8621ed1a48196fbaf399f182bbe147f7fa5
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 27 Dec 2018 15:53:42 +0000
      Finished:     Thu, 27 Dec 2018 15:56:00 +0000
    Ready:          False
    Restart Count:  0
    Environment:
      CASS_HOSTNAME:  music-cassandra
      USERNAME:       nelson24
      PORT:           9042
      PASSWORD:       nelson24
      TIMEOUT:        30
      DELAY:          120
    Mounts:
      /cql/admin.cql from music-cassandra-job-cql (rw)
      /cql/admin_pw.cql from music-cassandra-job-cql (rw)
      /cql/extra from music-cassandra-job-extra-cql (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  music-cassandra-job-cql:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-oof-music-cassandra-job-cql
    Optional:  false
  music-cassandra-job-extra-cql:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-oof-music-cassandra-job-extra-cql
    Optional:  false
  default-token-5kd4q:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5kd4q
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

*****************************

ubuntu@kub1:~$ kubectl logs -f   dev-oof-music-cassandra-job-config-rnrs2  -n  onap 
Sleeping for 120 seconds before running cql
#############################################
############## Let run cql's ################
#############################################
Current Variables in play
Default User
DEF_USER=cassandra
DEF_PASS=***********
New User
USERNAME=nelson24
PASSWORD=***********
TIMEOUT=30
Running cqlsh --request-timeout=30 -u cassandra -p cassandra -e "describe keyspaces;" music-cassandra 9042;
 
system_traces  system_schema  system_auth  system  system_distributed
 
Cassandra user still avalable, will continue as usual
Running admin.cql file:
Running cqlsh -u cassandra -p cassandra -f /cql/admin.cql music-cassandra 9042
 
admin  system_schema  system_auth  system  system_distributed  system_traces
 
Success - admin.cql - Admin keyspace created
Running admin_pw.cql file:
Running cqlsh -u cassandra -p cassandra -f /cql/admin_pw.cql music-cassandra 9042
Success - admin_pw.cql - Password Changed
Running Test - look for admin keyspace:
Running cqlsh -u nelson24 -p nelson24 -e select bin boot cql dev docker-entrypoint.sh etc home lib lib64 media mnt opt proc root run runcql.sh sbin srv sys tmp usr var from system_auth.roles
/runcql.sh: line 77:  music-cassandra 9042: command not found
 
 role      | can_login | is_superuser | member_of | salted_hash
-----------+-----------+--------------+-----------+--------------------------------------------------------------
  nelson24 |      True |         True |      null | $2a$10$nDMFWNw2EOp6B1y37bg84eK0MEyaEC3fG.UbkSlQ21v8rEq7YTBVG
 cassandra |      True |         True |      null | $2a$10$H2RxK6f0yrofoq6U9IzU4ObHhlZylXiZUJW56H8d8OnGWuaILFbgO
 
(2 rows)
Success - running test
**********************************************
ubuntu@kub1:~$ kubectl  logs  -f  dev-oof-oof-has-data-b57bd54fb-chztq  -n onap 
Error from server (BadRequest): container "oof-has-data" in pod "dev-oof-oof-has-data-b57bd54fb-chztq" is waiting to start: PodInitializing
ubuntu@kub1:~$ kubectl describe  pod   dev-oof-oof-has-data-b57bd54fb-chztq  -n onap 
Name:           dev-oof-oof-has-data-b57bd54fb-chztq
Namespace:      onap
Node:           kub4/
Start Time:     Thu, 27 Dec 2018 14:46:04 +0000
Labels:         app=oof-has-data
                pod-template-hash=613681096
                release=dev-oof
Annotations:    <none>
Status:         Pending
IP:             10.42.21.219
Controlled By:  ReplicaSet/dev-oof-oof-has-data-b57bd54fb
Init Containers:
  oof-has-data-readiness:
    Container ID:  docker://e33b41354013f475e091f583d307b57dcf115d17e5cbfc7168368d4a474c814a
    Image:         oomk8s/readiness-check:2.0.0
    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed
    Port:          <none>
    Host Port:     <none>
    Command:
      /root/ready.py
    Args:
      --container-name
      music-tomcat
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 27 Dec 2018 16:32:43 +0000
      Finished:     Thu, 27 Dec 2018 16:41:22 +0000
    Ready:          True
    Restart Count:  4
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)
  oof-has-data-onboard-readiness:
    Container ID:  docker://b490562a8d649fa31ed515c665ce5ac97ce9e092e4663a558b6c03b462bd325e
    Image:         oomk8s/readiness-check:2.0.0
    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed
    Port:          <none>
    Host Port:     <none>
    Command:
      /root/job_complete.py
    Args:
      -j
      dev-oof-oof-has-onboard
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 27 Dec 2018 16:41:28 +0000
      Finished:     Thu, 27 Dec 2018 16:47:25 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)
  oof-has-data-health-readiness:
    Container ID:  docker://5f3a54c5d9cf8b10cf3e7fe1e4e36c131f8a17d52c61d16f4468f6d6343e8618
    Image:         oomk8s/readiness-check:2.0.0
    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed
    Port:          <none>
    Host Port:     <none>
    Command:
      /root/job_complete.py
    Args:
      -j
      dev-oof-oof-has-healthcheck
    State:          Running
      Started:      Mon, 31 Dec 2018 06:02:25 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 31 Dec 2018 05:52:15 +0000
      Finished:     Mon, 31 Dec 2018 06:02:20 +0000
    Ready:          False
    Restart Count:  498
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)
  oof-has-data-data-sms-readiness:
    Container ID:  
    Image:         oomk8s/readiness-check:2.0.0
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      resp="FAILURE"; until [ $resp = "200" ]; do resp=$(curl -s -o /dev/null -k --write-out %{http_code} https://aaf-sms.onap:10443/v1/sms/domain/has/secret); echo $resp; sleep 2; done
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)
Containers:
  oof-has-data:
    Container ID:  
    Image:         nexus3.onap.org:10001/onap/optf-has:1.2.4
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      python
    Args:
      /usr/local/bin/conductor-data
      --config-file=/usr/local/bin/conductor.conf
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Liveness:       exec [cat /usr/local/bin/healthy.sh] delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:      exec [cat /usr/local/bin/healthy.sh] delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /etc/localtime from localtime (ro)
      /usr/local/bin/AAF_RootCA.cer from onap-oof-has-config (rw)
      /usr/local/bin/aai_cert.cer from onap-oof-has-config (rw)
      /usr/local/bin/aai_key.key from onap-oof-has-config (rw)
      /usr/local/bin/conductor.conf from onap-oof-has-config (rw)
      /usr/local/bin/healthy.sh from onap-oof-has-config (rw)
      /usr/local/bin/log.conf from onap-oof-has-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  localtime:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/localtime
    HostPathType:  
  onap-oof-has-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      onap-oof-has-configmap
    Optional:  false
  default-token-5kd4q:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5kd4q
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age                 From           Message
  ----    ------   ----                ----           -------
  Normal  Pulling  10m (x499 over 3d)  kubelet, kub4  pulling image "oomk8s/readiness-check:2.0.0"
  Normal  Pulled   10m (x499 over 3d)  kubelet, kub4  Successfully pulled image "oomk8s/readiness-check:2.0.0"
ubuntu@kub1:~$ kubectl logs -f     dev-oof-oof-has-data-b57bd54fb-chztq -c oof-has-data-data-sms-readiness  -n onap 
Error from server (BadRequest): container "oof-has-data-data-sms-readiness" in pod "dev-oof-oof-has-data-b57bd54fb-chztq" is waiting to start: PodInitializing
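
Since the init containers that never complete are simply polling the AAF SMS endpoint until it returns 200 (see the oof-has-data-data-sms-readiness command above), running the same probe by hand from inside the cluster narrows the problem down to AAF rather than OOF. A throwaway pod like the one below works for that; the pod name and curl image are arbitrary choices.

kubectl run sms-check --rm -it --restart=Never -n onap --image=appropriate/curl -- \
  curl -s -o /dev/null -k --write-out "%{http_code}\n" https://aaf-sms.onap:10443/v1/sms/domain/has/secret
# anything other than 200 here means the has-* pods will keep waiting in PodInitializing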
***********************************

ubuntu@kub1:~$ kubectl  logs -f  dev-oof-oof-has-reservation-5869b786b-8z7gt  -n onap 
Error from server (BadRequest): container "oof-has-reservation" in pod "dev-oof-oof-has-reservation-5869b786b-8z7gt" is waiting to start: PodInitializing
ubuntu@kub1:~$ kubectl describe  pod  dev-oof-oof-has-solver-5c75888465-4fkrr   -n onap 
Name:           dev-oof-oof-has-solver-5c75888465-4fkrr
Namespace:      onap
Node:           kub2/
Start Time:     Thu, 27 Dec 2018 14:46:04 +0000
Labels:         app=oof-has-solver
                pod-template-hash=1731444021
                release=dev-oof
Annotations:    <none>
Status:         Pending
IP:             10.42.5.137
Controlled By:  ReplicaSet/dev-oof-oof-has-solver-5c75888465
Init Containers:
  oof-has-solver-readiness:
    Container ID:  docker://607c01dbc5d145392ff7ad6d576cf30ff038dbd65e95c75354347bc904a70641
    Image:         oomk8s/readiness-check:2.0.0
    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed
    Port:          <none>
    Host Port:     <none>
    Command:
      /root/ready.py
    Args:
      --container-name
      music-tomcat
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 27 Dec 2018 16:40:27 +0000
      Finished:     Thu, 27 Dec 2018 16:41:28 +0000
    Ready:          True
    Restart Count:  3
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)
  oof-has-solver-onboard-readiness:
    Container ID:  docker://9d66b194b045f72bac0fbef3edbf9703fd24331fbb5f6266461fb0581218d520
    Image:         oomk8s/readiness-check:2.0.0
    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed
    Port:          <none>
    Host Port:     <none>
    Command:
      /root/job_complete.py
    Args:
      -j
      dev-oof-oof-has-onboard
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 27 Dec 2018 16:45:54 +0000
      Finished:     Thu, 27 Dec 2018 16:47:25 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)
  oof-has-solver-health-readiness:
    Container ID:  docker://7c9b31b37bd9d81082138215d27c225a903b3d04cce21525f7d655109757d315
    Image:         oomk8s/readiness-check:2.0.0
    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed
    Port:          <none>
    Host Port:     <none>
    Command:
      /root/job_complete.py
    Args:
      -j
      dev-oof-oof-has-healthcheck
    State:          Running
      Started:      Mon, 31 Dec 2018 06:19:02 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 31 Dec 2018 06:08:51 +0000
      Finished:     Mon, 31 Dec 2018 06:18:56 +0000
    Ready:          False
    Restart Count:  502
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)
  oof-has-solver-solvr-sms-readiness:
    Container ID:  
    Image:         oomk8s/readiness-check:2.0.0
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      resp="FAILURE"; until [ $resp = "200" ]; do resp=$(curl -s -o /dev/null -k --write-out %{http_code} https://aaf-sms.onap:10443/v1/sms/domain/has/secret); echo $resp; sleep 2; done
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)
Containers:
  oof-has-solver:
    Container ID:  
    Image:         nexus3.onap.org:10001/onap/optf-has:1.2.4
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      python
    Args:
      /usr/local/bin/conductor-solver
      --config-file=/usr/local/bin/conductor.conf
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Liveness:       exec [cat /usr/local/bin/healthy.sh] delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:      exec [cat /usr/local/bin/healthy.sh] delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /etc/localtime from localtime (ro)
      /usr/local/bin/AAF_RootCA.cer from onap-oof-has-config (rw)
      /usr/local/bin/conductor.conf from onap-oof-has-config (rw)
      /usr/local/bin/healthy.sh from onap-oof-has-config (rw)
      /usr/local/bin/log.conf from onap-oof-has-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  localtime:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/localtime
    HostPathType:  
  onap-oof-has-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      onap-oof-has-configmap
    Optional:  false
  default-token-5kd4q:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5kd4q
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age                From           Message
  ----    ------   ----               ----           -------
  Normal  Pulling  6m (x503 over 3d)  kubelet, kub2  pulling image "oomk8s/readiness-check:2.0.0"
  Normal  Pulled   6m (x503 over 3d)  kubelet, kub2  Successfully pulled image "oomk8s/readiness-check:2.0.0"
 


Michael O'Brien <frank.obrien@...>
 

Gulsum,

   Hi, the pending pods you are seeing may be related to image downloads still in progress; the nexus3.onap.org:10001 server is better now, but a full pull can still take 2-3 hours inside some networks (about 40 min in AWS/Azure for the 75G of images).

    The other cause is saturation of the vCores, disk or network on a particular VM; sequencing the deploys will work around this.

 

    In that case I would use a local Nexus proxy, like I did:

https://wiki.onap.org/display/DW/Cloud+Native+Deployment#CloudNativeDeployment-NexusProxy

 

   I raised a cluster yesterday that had no OOF issues for the 3.0.0-ONAP tag of the Casablanca branch. I did, however, prepull all docker images and put a 3 min wait state in the deploy.sh script to sequence the 30 pod deploys.

 

onap          onap-oof-cmso-db-0                                            1/1       Running            0          3h
onap          onap-oof-music-cassandra-0                                    1/1       Running            2          3h
onap          onap-oof-music-cassandra-1                                    1/1       Running            0          3h
onap          onap-oof-music-cassandra-2                                    1/1       Running            0          3h
onap          onap-oof-music-cassandra-job-config-ndtr4                     0/1       Completed          0          3h
onap          onap-oof-music-tomcat-5bb9ddcb46-dzz2m                        1/1       Running            0          3h
onap          onap-oof-music-tomcat-5bb9ddcb46-swkp2                        1/1       Running            0          3h
onap          onap-oof-music-tomcat-5bb9ddcb46-tr2rw                        1/1       Running            0          3h
onap          onap-oof-oof-7f4f5bcc8b-gj85n                                 1/1       Running            0          3h
onap          onap-oof-oof-cmso-service-5b6f8fd4cb-jvdbm                    1/1       Running            0          3h
onap          onap-oof-oof-has-api-554f8fdc64-gprp9                         1/1       Running            0          3h
onap          onap-oof-oof-has-controller-69869fb6fc-9jh6w                  1/1       Running            0          3h
onap          onap-oof-oof-has-data-9bdfd8869-77zcn                         1/1       Running            0          3h
onap          onap-oof-oof-has-healthcheck-dwrrt                            0/1       Completed          0          3h
onap          onap-oof-oof-has-onboard-h6shk                                0/1       Completed          0          3h
onap          onap-oof-oof-has-reservation-5cd655b79f-56cn7                 1/1       Running            1          3h
onap          onap-oof-oof-has-solver-6c9864bff4-9kbrb                      1/1       Running            0          3h
onap          onap-oof-zookeeper-0                                          1/1       Running            0          3h
onap          onap-oof-zookeeper-1                                          1/1       Running            0          3h
onap          onap-oof-zookeeper-2                                          1/1       Running            0          3h

 

Triage: there are a lot of secondary things that can go wrong, like config job restarts and timeouts (being replaced by helm hooks). For now you are safe if you slow down the parallel deployments and sequence them; the config jobs normally do run OK. If they fail after 1 hour they will never restart without a pod restart (in that case you will need to manually clear /dockerdata-nfs and delete the pv/pvc). As shown above, I was able to get the jobs to complete by running oof in a 3 min window all by itself.

 

Above https://git.onap.org/oom/tree/kubernetes/helm/plugins/deploy/deploy.sh#n203 put:

sleep 300

The review will be in

https://jira.onap.org/browse/OOM-1571
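
If you would rather not patch deploy.sh, the same sequencing can be approximated by waiting for the namespace to settle between component deploys; a hedged helper along these lines (plain kubectl, no OOM-specific flags) can be run after each helm deploy invocation instead of a fixed sleep:

until [ -z "$(kubectl get pods -n onap --no-headers | grep -v -E 'Running|Completed')" ]; do
  echo "waiting for pods to settle ..."
  sleep 30
done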

 

dev-oof-oof-has-data-b57bd54fb-chztq                          0/1       Init:2/4           502        3d

dev-oof-oof-has-onboard-2v8km                                 0/1       Completed          0          3d

dev-oof-oof-has-reservation-5869b786b-8z7gt                   0/1       Init:2/4           497        3d

dev-oof-oof-has-solver-5c75888465-4fkrr                       0/1       Init:2/4           503        3d

 

 

   Try prepulling the images; there is a script below that keys off the manifest (it should be identical to what is running in all the values.yaml files):

https://jira.onap.org/browse/LOG-905

https://git.onap.org/logging-analytics/plain/deploy/docker_prepull.sh
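
A typical way to use that prepull script is to run it on every kubernetes worker node before deploying, so kubelet finds the images in the local docker cache. The default invocation is shown; check the script header for branch/server options.

wget https://git.onap.org/logging-analytics/plain/deploy/docker_prepull.sh
chmod +x docker_prepull.sh
sudo ./docker_prepull.sh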

 

https://jira.onap.org/browse/LOG-898

 

Casablanca 3.0.0-ONAP deploy with docker_prepull.sh and a sequenced deployment: dmaap/aaf first, then each pod at 3 min intervals, with a 3 min sleep at line 203 of deploy.sh as noted above.

 

As of 20181231 – there are only issues with dmaap and aai

https://jira.onap.org/browse/OOM-1560

 Note that only the following are problematic (no 0/1 Completed for a particular job):

onap          dep-dcae-ves-collector-d964fbc5-bnpxc                         1/2       Running            0          3h

onap          onap-aai-aai-59686b87c7-zzddq                                 0/1       Init:0/1           0          4h

onap          onap-aai-aai-champ-7f8c7cfffd-472fv                           1/2       Running            0          4h

onap          onap-aai-aai-sparky-be-6d489d4dc9-fwcq4                       0/2       Init:0/1           0          4h

onap          onap-aai-aai-traversal-5b496d986c-vb876                       1/2       Running            36         4h

onap          onap-aai-aai-traversal-update-query-data-fcn4h                0/1       Init:0/1           24         4h

onap          onap-dmaap-dmaap-dr-node-cf6dc5cd-6bvw4                       0/1       Init:0/1           27         4h

onap          onap-dmaap-dmaap-dr-prov-7f8bd9ff65-5jwpn                     0/1       CrashLo

 

ubuntu@a-cd-cas0:~/oom/kubernetes/robot$ sudo ./ete-k8s.sh onap health
Basic A&AI Health Check                                               | FAIL |
Basic DMAAP Data Router Health Check                                  [ WARN ] 
 
Testsuites.Health-Check :: Testing ecomp components are available ... | FAIL |
51 critical tests, 49 passed, 2 failed

 

/michael

 

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of gulsum atici
Sent: Monday, December 31, 2018 1:44 AM
To: Ying; Ruoyu <ruoyu.ying@...>; onap-discuss@...
Subject: Re: [onap-discuss] Casablanca oof module pods are waiting on init status #oof

 

Dear Ruoyu,

No  matter, I  made a  fresh install again.  However, some pods waiting  to  initialize  for  more  than  3 days.

oocmso-db  pvc  is  waiting on pending.

ubuntu@kub1:~$ kubectl get  pvc   -n onap  |  grep -i pending

dev-appc-appc-db                                                             Pending                                                                                                                1d

dev-oof-cmso-db                                                              Pending                                                                                                                1d

dev-sdnc-controller-blueprints-db                                            Pending                                                                                                                1d

dev-sdnc-nengdb                                                              Pending    

 

ubuntu@kub1:~$ kubectl describe  pvc  dev-oof-cmso-db   -n onap 

Name:          dev-oof-cmso-db

Namespace:     onap

StorageClass:  

Status:        Pending

Volume:        

Labels:        app=cmso-db

               chart=mariadb-galera-3.0.0

               heritage=Tiller

               release=dev-oof

Annotations:   <none>

Finalizers:    [kubernetes.io/pvc-protection]

Capacity:      

Access Modes:  

Events:

  Type    Reason         Age                 From                         Message

  ----    ------         ----                ----                         -------

  Normal  FailedBinding  10m (x321 over 1h)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set

  Normal  FailedBinding  10s (x17 over 4m)   persistentvolume-controller  no persistent volumes available for this claim and no storage class is set

 

dev-oof-cmso-db-0                                             1/1       Running            0          3d

dev-oof-music-cassandra-0                                     1/1       Running            0          3d

dev-oof-music-cassandra-1                                     1/1       Running            0          3d

dev-oof-music-cassandra-2                                     1/1       Running            0          3d

dev-oof-music-cassandra-job-config-rnrs2                      0/1       Completed          0          3d

dev-oof-music-tomcat-64d4c64db7-vrhbp                         1/1       Running            0          3d

dev-oof-music-tomcat-64d4c64db7-vttlr                         1/1       Running            0          3d

dev-oof-music-tomcat-64d4c64db7-wq95z                         1/1       Running            0          3d

dev-oof-oof-7b4bccc8d7-5slv2                                  1/1       Running            0          3d

dev-oof-oof-cmso-service-55499fdf4c-cw2qb                     1/1       Running            0          3d

dev-oof-oof-has-api-7d9b977b48-bvgdc                          1/1       Running            0          3d

dev-oof-oof-has-controller-7f5b6c5f7-9ttcv                    1/1       Running            0          3d

dev-oof-oof-has-data-b57bd54fb-chztq                          0/1       Init:2/4           502        3d

dev-oof-oof-has-onboard-2v8km                                 0/1       Completed          0          3d

dev-oof-oof-has-reservation-5869b786b-8z7gt                   0/1       Init:2/4           497        3d

dev-oof-oof-has-solver-5c75888465-4fkrr                       0/1       Init:2/4           503        3d

dev-oof-zookeeper-0                                           1/1       Running            0          3d

dev-oof-zookeeper-1                                           1/1       Running            0          3d

dev-oof-zookeeper-2                                           1/1       Running            0          3d

 

ubuntu@kub1:~$ kubectl  describe  pod  dev-oof-music-cassandra-job-config-rnrs2  -n  onap 

Name:           dev-oof-music-cassandra-job-config-rnrs2

Namespace:      onap

Node:           kub4/192.168.13.162

Start Time:     Thu, 27 Dec 2018 14:46:05 +0000

Labels:         app=music-cassandra-job-job

                controller-uid=22ea9645-09e6-11e9-9f85-028b4359678c

                job-name=dev-oof-music-cassandra-job-config

                release=dev-oof

Annotations:    <none>

Status:         Succeeded

IP:             10.42.9.206

Controlled By:  Job/dev-oof-music-cassandra-job-config

Init Containers:

  music-cassandra-job-readiness:

    Container ID:  docker://3de813213d2ecc31798668e6018944239820c5fa029efd17bd3243f2d0564f24

    Image:         oomk8s/readiness-check:2.0.0

    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed

    Port:          <none>

    Host Port:     <none>

    Command:

      /root/ready.py

    Args:

      --container-name

      music-cassandra

    State:          Terminated

      Reason:       Completed

      Exit Code:    0

      Started:      Thu, 27 Dec 2018 15:29:51 +0000

      Finished:     Thu, 27 Dec 2018 15:30:08 +0000

    Ready:          True

    Restart Count:  0

    Environment:

      NAMESPACE:  onap (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

Containers:

  music-cassandra-job-update-job:

    Container ID:   docker://9ae4cfb9b78b4b33faa5bc9c9a2f1478ee090d7cf83ea2370fac983e59c06c25

    Image:          nexus3.onap.org:10001/onap/music/cassandra_job:3.0.24

    Image ID:       docker-pullable://nexus3.onap.org:10001/onap/music/cassandra_job@sha256:b21578fc4cf68585909bd82fe3e8a8621ed1a48196fbaf399f182bbe147f7fa5

    Port:           <none>

    Host Port:      <none>

    State:          Terminated

      Reason:       Completed

      Exit Code:    0

      Started:      Thu, 27 Dec 2018 15:53:42 +0000

      Finished:     Thu, 27 Dec 2018 15:56:00 +0000

    Ready:          False

    Restart Count:  0

    Environment:

      CASS_HOSTNAME:  music-cassandra

      USERNAME:       nelson24

      PORT:           9042

      PASSWORD:       nelson24

      TIMEOUT:        30

      DELAY:          120

    Mounts:

      /cql/admin.cql from music-cassandra-job-cql (rw)

      /cql/admin_pw.cql from music-cassandra-job-cql (rw)

      /cql/extra from music-cassandra-job-extra-cql (rw)

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

Conditions:

  Type              Status

  Initialized       True 

  Ready             False 

  ContainersReady   False 

  PodScheduled      True 

Volumes:

  music-cassandra-job-cql:

    Type:      ConfigMap (a volume populated by a ConfigMap)

    Name:      dev-oof-music-cassandra-job-cql

    Optional:  false

  music-cassandra-job-extra-cql:

    Type:      ConfigMap (a volume populated by a ConfigMap)

    Name:      dev-oof-music-cassandra-job-extra-cql

    Optional:  false

  default-token-5kd4q:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  default-token-5kd4q

    Optional:    false

QoS Class:       BestEffort

Node-Selectors:  <none>

Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s

                 node.kubernetes.io/unreachable:NoExecute for 300s

Events:          <none>


*****************************

ubuntu@kub1:~$ kubectl logs -f   dev-oof-music-cassandra-job-config-rnrs2  -n  onap 

Sleeping for 120 seconds before running cql

#############################################

############## Let run cql's ################

#############################################

Current Variables in play

Default User

DEF_USER=cassandra

DEF_PASS=***********

New User

USERNAME=nelson24

PASSWORD=***********

TIMEOUT=30

Running cqlsh --request-timeout=30 -u cassandra -p cassandra -e "describe keyspaces;" music-cassandra 9042;

 

system_traces  system_schema  system_auth  system  system_distributed

 

Cassandra user still avalable, will continue as usual

Running admin.cql file:

Running cqlsh -u cassandra -p cassandra -f /cql/admin.cql music-cassandra 9042

 

admin  system_schema  system_auth  system  system_distributed  system_traces

 

Success - admin.cql - Admin keyspace created

Running admin_pw.cql file:

Running cqlsh -u cassandra -p cassandra -f /cql/admin_pw.cql music-cassandra 9042

Success - admin_pw.cql - Password Changed

Running Test - look for admin keyspace:

Running cqlsh -u nelson24 -p nelson24 -e select bin boot cql dev docker-entrypoint.sh etc home lib lib64 media mnt opt proc root run runcql.sh sbin srv sys tmp usr var from system_auth.roles

/runcql.sh: line 77:  music-cassandra 9042: command not found

 

 role      | can_login | is_superuser | member_of | salted_hash

-----------+-----------+--------------+-----------+--------------------------------------------------------------

  nelson24 |      True |         True |      null | $2a$10$nDMFWNw2EOp6B1y37bg84eK0MEyaEC3fG.UbkSlQ21v8rEq7YTBVG

 cassandra |      True |         True |      null | $2a$10$H2RxK6f0yrofoq6U9IzU4ObHhlZylXiZUJW56H8d8OnGWuaILFbgO

 

(2 rows)

Success - running test

**********************************************

ubuntu@kub1:~$ kubectl  logs  -f  dev-oof-oof-has-data-b57bd54fb-chztq  -n onap 

Error from server (BadRequest): container "oof-has-data" in pod "dev-oof-oof-has-data-b57bd54fb-chztq" is waiting to start: PodInitializing

ubuntu@kub1:~$ kubectl describe  pod   dev-oof-oof-has-data-b57bd54fb-chztq  -n onap 

Name:           dev-oof-oof-has-data-b57bd54fb-chztq

Namespace:      onap

Node:           kub4/

Start Time:     Thu, 27 Dec 2018 14:46:04 +0000

Labels:         app=oof-has-data

                pod-template-hash=613681096

                release=dev-oof

Annotations:    <none>

Status:         Pending

IP:             10.42.21.219

Controlled By:  ReplicaSet/dev-oof-oof-has-data-b57bd54fb

Init Containers:

  oof-has-data-readiness:

    Container ID:  docker://e33b41354013f475e091f583d307b57dcf115d17e5cbfc7168368d4a474c814a

    Image:         oomk8s/readiness-check:2.0.0

    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed

    Port:          <none>

    Host Port:     <none>

    Command:

      /root/ready.py

    Args:

      --container-name

      music-tomcat

    State:          Terminated

      Reason:       Completed

      Exit Code:    0

      Started:      Thu, 27 Dec 2018 16:32:43 +0000

      Finished:     Thu, 27 Dec 2018 16:41:22 +0000

    Ready:          True

    Restart Count:  4

    Environment:

      NAMESPACE:  onap (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

  oof-has-data-onboard-readiness:

    Container ID:  docker://b490562a8d649fa31ed515c665ce5ac97ce9e092e4663a558b6c03b462bd325e

    Image:         oomk8s/readiness-check:2.0.0

    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed

    Port:          <none>

    Host Port:     <none>

    Command:

      /root/job_complete.py

    Args:

      -j

      dev-oof-oof-has-onboard

    State:          Terminated

      Reason:       Completed

      Exit Code:    0

      Started:      Thu, 27 Dec 2018 16:41:28 +0000

      Finished:     Thu, 27 Dec 2018 16:47:25 +0000

    Ready:          True

    Restart Count:  0

    Environment:

      NAMESPACE:  onap (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

  oof-has-data-health-readiness:

    Container ID:  docker://5f3a54c5d9cf8b10cf3e7fe1e4e36c131f8a17d52c61d16f4468f6d6343e8618

    Image:         oomk8s/readiness-check:2.0.0

    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed

    Port:          <none>

    Host Port:     <none>

    Command:

      /root/job_complete.py

    Args:

      -j

      dev-oof-oof-has-healthcheck

    State:          Running

      Started:      Mon, 31 Dec 2018 06:02:25 +0000

    Last State:     Terminated

      Reason:       Error

      Exit Code:    1

      Started:      Mon, 31 Dec 2018 05:52:15 +0000

      Finished:     Mon, 31 Dec 2018 06:02:20 +0000

    Ready:          False

    Restart Count:  498

    Environment:

      NAMESPACE:  onap (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

  oof-has-data-data-sms-readiness:

    Container ID:  

    Image:         oomk8s/readiness-check:2.0.0

    Image ID:      

    Port:          <none>

    Host Port:     <none>

    Command:

      sh

      -c

      resp="FAILURE"; until [ $resp = "200" ]; do resp=$(curl -s -o /dev/null -k --write-out %{http_code} https://aaf-sms.onap:10443/v1/sms/domain/has/secret); echo $resp; sleep 2; done

    State:          Waiting

      Reason:       PodInitializing

    Ready:          False

    Restart Count:  0

    Environment:

      NAMESPACE:  onap (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

Containers:

  oof-has-data:

    Container ID:  

    Image:         nexus3.onap.org:10001/onap/optf-has:1.2.4

    Image ID:      

    Port:          <none>

    Host Port:     <none>

    Command:

      python

    Args:

      /usr/local/bin/conductor-data

      --config-file=/usr/local/bin/conductor.conf

    State:          Waiting

      Reason:       PodInitializing

    Ready:          False

    Restart Count:  0

    Liveness:       exec [cat /usr/local/bin/healthy.sh] delay=10s timeout=1s period=10s #success=1 #failure=3

    Readiness:      exec [cat /usr/local/bin/healthy.sh] delay=10s timeout=1s period=10s #success=1 #failure=3

    Environment:    <none>

    Mounts:

      /etc/localtime from localtime (ro)

      /usr/local/bin/AAF_RootCA.cer from onap-oof-has-config (rw)

      /usr/local/bin/aai_cert.cer from onap-oof-has-config (rw)

      /usr/local/bin/aai_key.key from onap-oof-has-config (rw)

      /usr/local/bin/conductor.conf from onap-oof-has-config (rw)

      /usr/local/bin/healthy.sh from onap-oof-has-config (rw)

      /usr/local/bin/log.conf from onap-oof-has-config (rw)

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

Conditions:

  Type              Status

  Initialized       False 

  Ready             False 

  ContainersReady   False 

  PodScheduled      True 

Volumes:

  localtime:

    Type:          HostPath (bare host directory volume)

    Path:          /etc/localtime

    HostPathType:  

  onap-oof-has-config:

    Type:      ConfigMap (a volume populated by a ConfigMap)

    Name:      onap-oof-has-configmap

    Optional:  false

  default-token-5kd4q:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  default-token-5kd4q

    Optional:    false

QoS Class:       BestEffort

Node-Selectors:  <none>

Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s

                 node.kubernetes.io/unreachable:NoExecute for 300s

Events:

  Type    Reason   Age                 From           Message

  ----    ------   ----                ----           -------

  Normal  Pulling  10m (x499 over 3d)  kubelet, kub4  pulling image "oomk8s/readiness-check:2.0.0"

  Normal  Pulled   10m (x499 over 3d)  kubelet, kub4  Successfully pulled image "oomk8s/readiness-check:2.0.0"
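
Reading the init containers above: the music-tomcat and onboard readiness checks have completed, the pod is currently cycling on oof-has-data-health-readiness (498 restarts of /root/job_complete.py -j dev-oof-oof-has-healthcheck), and oof-has-data-data-sms-readiness has not started yet. In other words the pod is waiting for the dev-oof-oof-has-healthcheck job to complete. Assuming the job's pods carry the usual job-name label set by the Job controller, its state and logs can be checked with:

  kubectl get job dev-oof-oof-has-healthcheck -n onap
  kubectl get pods -n onap -l job-name=dev-oof-oof-has-healthcheck
  kubectl logs -n onap -l job-name=dev-oof-oof-has-healthcheck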

ubuntu@kub1:~$ kubectl logs -f     dev-oof-oof-has-data-b57bd54fb-chztq -c oof-has-data-data-sms-readiness  -n onap 

Error from server (BadRequest): container "oof-has-data-data-sms-readiness" in pod "dev-oof-oof-has-data-b57bd54fb-chztq" is waiting to start: PodInitializing

***********************************

ubuntu@kub1:~$ kubectl  logs -f  dev-oof-oof-has-reservation-5869b786b-8z7gt  -n onap 

Error from server (BadRequest): container "oof-has-reservation" in pod "dev-oof-oof-has-reservation-5869b786b-8z7gt" is waiting to start: PodInitializing

ubuntu@kub1:~$ kubectl describe  pod  dev-oof-oof-has-solver-5c75888465-4fkrr   -n onap 

Name:           dev-oof-oof-has-solver-5c75888465-4fkrr

Namespace:      onap

Node:           kub2/

Start Time:     Thu, 27 Dec 2018 14:46:04 +0000

Labels:         app=oof-has-solver

                pod-template-hash=1731444021

                release=dev-oof

Annotations:    <none>

Status:         Pending

IP:             10.42.5.137

Controlled By:  ReplicaSet/dev-oof-oof-has-solver-5c75888465

Init Containers:

  oof-has-solver-readiness:

    Container ID:  docker://607c01dbc5d145392ff7ad6d576cf30ff038dbd65e95c75354347bc904a70641

    Image:         oomk8s/readiness-check:2.0.0

    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed

    Port:          <none>

    Host Port:     <none>

    Command:

      /root/ready.py

    Args:

      --container-name

      music-tomcat

    State:          Terminated

      Reason:       Completed

      Exit Code:    0

      Started:      Thu, 27 Dec 2018 16:40:27 +0000

      Finished:     Thu, 27 Dec 2018 16:41:28 +0000

    Ready:          True

    Restart Count:  3

    Environment:

      NAMESPACE:  onap (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

  oof-has-solver-onboard-readiness:

    Container ID:  docker://9d66b194b045f72bac0fbef3edbf9703fd24331fbb5f6266461fb0581218d520

    Image:         oomk8s/readiness-check:2.0.0

    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed

    Port:          <none>

    Host Port:     <none>

    Command:

      /root/job_complete.py

    Args:

      -j

      dev-oof-oof-has-onboard

    State:          Terminated

      Reason:       Completed

      Exit Code:    0

      Started:      Thu, 27 Dec 2018 16:45:54 +0000

      Finished:     Thu, 27 Dec 2018 16:47:25 +0000

    Ready:          True

    Restart Count:  0

    Environment:

      NAMESPACE:  onap (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

  oof-has-solver-health-readiness:

    Container ID:  docker://7c9b31b37bd9d81082138215d27c225a903b3d04cce21525f7d655109757d315

    Image:         oomk8s/readiness-check:2.0.0

    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed

    Port:          <none>

    Host Port:     <none>

    Command:

      /root/job_complete.py

    Args:

      -j

      dev-oof-oof-has-healthcheck

    State:          Running

      Started:      Mon, 31 Dec 2018 06:19:02 +0000

    Last State:     Terminated

      Reason:       Error

      Exit Code:    1

      Started:      Mon, 31 Dec 2018 06:08:51 +0000

      Finished:     Mon, 31 Dec 2018 06:18:56 +0000

    Ready:          False

    Restart Count:  502

    Environment:

      NAMESPACE:  onap (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

  oof-has-solver-solvr-sms-readiness:

    Container ID:  

    Image:         oomk8s/readiness-check:2.0.0

    Image ID:      

    Port:          <none>

    Host Port:     <none>

    Command:

      sh

      -c

      resp="FAILURE"; until [ $resp = "200" ]; do resp=$(curl -s -o /dev/null -k --write-out %{http_code} https://aaf-sms.onap:10443/v1/sms/domain/has/secret); echo $resp; sleep 2; done

    State:          Waiting

      Reason:       PodInitializing

    Ready:          False

    Restart Count:  0

    Environment:

      NAMESPACE:  onap (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

Containers:

  oof-has-solver:

    Container ID:  

    Image:         nexus3.onap.org:10001/onap/optf-has:1.2.4

    Image ID:      

    Port:          <none>

    Host Port:     <none>

    Command:

      python

    Args:

      /usr/local/bin/conductor-solver

      --config-file=/usr/local/bin/conductor.conf

    State:          Waiting

      Reason:       PodInitializing

    Ready:          False

    Restart Count:  0

    Liveness:       exec [cat /usr/local/bin/healthy.sh] delay=10s timeout=1s period=10s #success=1 #failure=3

    Readiness:      exec [cat /usr/local/bin/healthy.sh] delay=10s timeout=1s period=10s #success=1 #failure=3

    Environment:    <none>

    Mounts:

      /etc/localtime from localtime (ro)

      /usr/local/bin/AAF_RootCA.cer from onap-oof-has-config (rw)

      /usr/local/bin/conductor.conf from onap-oof-has-config (rw)

      /usr/local/bin/healthy.sh from onap-oof-has-config (rw)

      /usr/local/bin/log.conf from onap-oof-has-config (rw)

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

Conditions:

  Type              Status

  Initialized       False 

  Ready             False 

  ContainersReady   False 

  PodScheduled      True 

Volumes:

  localtime:

    Type:          HostPath (bare host directory volume)

    Path:          /etc/localtime

    HostPathType:  

  onap-oof-has-config:

    Type:      ConfigMap (a volume populated by a ConfigMap)

    Name:      onap-oof-has-configmap

    Optional:  false

  default-token-5kd4q:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  default-token-5kd4q

    Optional:    false

QoS Class:       BestEffort

Node-Selectors:  <none>

Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s

                 node.kubernetes.io/unreachable:NoExecute for 300s

Events:

  Type    Reason   Age                From           Message

  ----    ------   ----               ----           -------

  Normal  Pulling  6m (x503 over 3d)  kubelet, kub2  pulling image "oomk8s/readiness-check:2.0.0"

  Normal  Pulled   6m (x503 over 3d)  kubelet, kub2  Successfully pulled image "oomk8s/readiness-check:2.0.0"
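
The solver pod is blocked in the same way as the data pod: its health-readiness init container keeps looping on the dev-oof-oof-has-healthcheck job, and the final sms-readiness init container (which polls https://aaf-sms.onap:10443/v1/sms/domain/has/secret until it returns 200) has not started yet. The same curl the init container runs can be tried by hand from a throwaway pod; this assumes the readiness-check image ships sh and curl, which the init container spec above implies:

  kubectl run sms-check --rm -it --restart=Never -n onap --image=oomk8s/readiness-check:2.0.0 -- sh -c 'curl -s -o /dev/null -k --write-out "%{http_code}\n" https://aaf-sms.onap:10443/v1/sms/domain/has/secret'

Anything other than 200 here means the sms-readiness init containers would keep waiting even after the healthcheck job completes.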

 
