Re: Casablanca oof module pods are waiting on init status #oof


Michael O'Brien <frank.obrien@...>
 

Gulsum,

   Hi, the pending pods you are seeing may be caused by image downloads still in progress – the nexus3.onap.org:10001 server is better now, but a full pull can still take 2-3 hours on some networks (about 40 minutes in AWS/Azure for the ~75 GB of images).

    The other common cause is saturation of the vCores, disk, or network on a particular VM – sequencing the deploys will work around this.

 

    In that case I would use a local proxy, as I did:

https://wiki.onap.org/display/DW/Cloud+Native+Deployment#CloudNativeDeployment-NexusProxy
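
For illustration, a hedged sketch of pointing a deployment at such a proxy – it assumes the charts take their registry from global.repository in the onap values.yaml (as Casablanca does) and that the proxy host below is a placeholder for your own:

# proxy host is an example placeholder – substitute your local Nexus proxy
# command form follows the OOM helm deploy plugin usage
helm deploy dev local/onap --namespace onap \
  --set global.repository=nexus-proxy.example.lab:10001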

 

   I brought up a cluster yesterday that had no OOF issues on the 3.0.0-ONAP tag of the Casablanca branch – I did, however, prepull all Docker images and add a 3-minute wait state to sequence the 30 pod deploys in the deploy.sh script.

 

onap          onap-oof-cmso-db-0                                            1/1       Running            0          3h
onap          onap-oof-music-cassandra-0                                    1/1       Running            2          3h
onap          onap-oof-music-cassandra-1                                    1/1       Running            0          3h
onap          onap-oof-music-cassandra-2                                    1/1       Running            0          3h
onap          onap-oof-music-cassandra-job-config-ndtr4                     0/1       Completed          0          3h
onap          onap-oof-music-tomcat-5bb9ddcb46-dzz2m                        1/1       Running            0          3h
onap          onap-oof-music-tomcat-5bb9ddcb46-swkp2                        1/1       Running            0          3h
onap          onap-oof-music-tomcat-5bb9ddcb46-tr2rw                        1/1       Running            0          3h
onap          onap-oof-oof-7f4f5bcc8b-gj85n                                 1/1       Running            0          3h
onap          onap-oof-oof-cmso-service-5b6f8fd4cb-jvdbm                    1/1       Running            0          3h
onap          onap-oof-oof-has-api-554f8fdc64-gprp9                         1/1       Running            0          3h
onap          onap-oof-oof-has-controller-69869fb6fc-9jh6w                  1/1       Running            0          3h
onap          onap-oof-oof-has-data-9bdfd8869-77zcn                         1/1       Running            0          3h
onap          onap-oof-oof-has-healthcheck-dwrrt                            0/1       Completed          0          3h
onap          onap-oof-oof-has-onboard-h6shk                                0/1       Completed          0          3h
onap          onap-oof-oof-has-reservation-5cd655b79f-56cn7                 1/1       Running            1          3h
onap          onap-oof-oof-has-solver-6c9864bff4-9kbrb                      1/1       Running            0          3h
onap          onap-oof-zookeeper-0                                          1/1       Running            0          3h
onap          onap-oof-zookeeper-1                                          1/1       Running            0          3h
onap          onap-oof-zookeeper-2                                          1/1       Running            0          3h

 

Triage: there are a lot of secondary things that can go wrong, such as config job restarts and timeouts (these jobs are being replaced by Helm hooks). For now you are safe if you slow down the parallel deployments and sequence them. The jobs below normally run OK, but if they are still failing after about an hour they will never recover without a pod restart (in that case you will need to manually clear dockerdata-nfs and delete the PV/PVC). In the listing above I was able to get the jobs to complete by running oof in a 3-minute window all by itself.
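
For reference, a hedged sketch of that manual cleanup for a single stuck component – the resource names and /dockerdata-nfs layout below are examples from a dev-oof release, not exact values:

# example names only – list the real ones first
kubectl get jobs,pvc -n onap | grep dev-oof
kubectl delete job dev-oof-music-cassandra-job-config -n onap
kubectl delete pvc dev-oof-cmso-db -n onap
kubectl get pv | grep dev-oof          # PVs are cluster-scoped; delete the matching ones
kubectl delete pv <matching-pv-name>
sudo rm -rf /dockerdata-nfs/dev-oof    # assumes the default OOM hostPath layout
# then redeploy only that component, e.g.
helm upgrade -i dev-oof local/oof --namespace onap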

 

Just above line 203 of the helm deploy plugin

https://git.onap.org/oom/tree/kubernetes/helm/plugins/deploy/deploy.sh#n203

add:

sleep 300

The review will be tracked in

https://jira.onap.org/browse/OOM-1571
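
For illustration only, a hedged sketch of the change – the loop below paraphrases the plugin's per-subchart deploy loop rather than quoting it, so check the actual file before editing:

# deploy.sh (helm deploy plugin) – loop structure paraphrased/assumed, not copied
for subchart in $subcharts; do
  # ... existing per-subchart "helm upgrade --install ..." call ...
  sleep 300   # new: pause between subchart deploys so each group of pods can settle
done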

 

dev-oof-oof-has-data-b57bd54fb-chztq                          0/1       Init:2/4           502        3d

dev-oof-oof-has-onboard-2v8km                                 0/1       Completed          0          3d

dev-oof-oof-has-reservation-5869b786b-8z7gt                   0/1       Init:2/4           497        3d

dev-oof-oof-has-solver-5c75888465-4fkrr                       0/1       Init:2/4           503        3d

 

 

   Try prepulling the images – there is a script below that keys off the manifest (which should be identical to the images referenced in all the values.yaml files):

https://jira.onap.org/browse/LOG-905

https://git.onap.org/logging-analytics/plain/deploy/docker_prepull.sh
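
If you want to see what such a manifest-keyed prepull amounts to, here is a minimal hedged equivalent – the CSV file name and image,tag column layout are assumptions (docker_prepull.sh above handles the real details):

# assumes a manifest CSV with an "image,tag" header, e.g. from the ONAP integration repo
tail -n +2 docker-manifest.csv | while IFS=',' read -r image tag; do
  sudo docker pull "nexus3.onap.org:10001/${image}:${tag}"
done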

 

https://jira.onap.org/browse/LOG-898

 

Casablanca 3.0.0-ONAP deploy with docker_prepull.sh and a sequenced deployment – dmaap/aaf first, then each pod at 3-minute intervals, with the sleep added at deploy.sh line 203 as above:

 

As of 2018-12-31, the only remaining issues are with dmaap and aai:

https://jira.onap.org/browse/OOM-1560

 Note that only the following are problematic (a job showing 0/1 Completed is normal):

onap          dep-dcae-ves-collector-d964fbc5-bnpxc                         1/2       Running            0          3h

onap          onap-aai-aai-59686b87c7-zzddq                                 0/1       Init:0/1           0          4h

onap          onap-aai-aai-champ-7f8c7cfffd-472fv                           1/2       Running            0          4h

onap          onap-aai-aai-sparky-be-6d489d4dc9-fwcq4                       0/2       Init:0/1           0          4h

onap          onap-aai-aai-traversal-5b496d986c-vb876                       1/2       Running            36         4h

onap          onap-aai-aai-traversal-update-query-data-fcn4h                0/1       Init:0/1           24         4h

onap          onap-dmaap-dmaap-dr-node-cf6dc5cd-6bvw4                       0/1       Init:0/1           27         4h

onap          onap-dmaap-dmaap-dr-prov-7f8bd9ff65-5jwpn                     0/1       CrashLoopBackOff

 

ubuntu@a-cd-cas0:~/oom/kubernetes/robot$ sudo ./ete-k8s.sh onap health
Basic A&AI Health Check                                               | FAIL |
Basic DMAAP Data Router Health Check                                  [ WARN ] 
 
Testsuites.Health-Check :: Testing ecomp components are available ... | FAIL |
51 critical tests, 49 passed, 2 failed

 

/michael

 

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of gulsum atici
Sent: Monday, December 31, 2018 1:44 AM
To: Ying; Ruoyu <ruoyu.ying@...>; onap-discuss@...
Subject: Re: [onap-discuss] Casablanca oof module pods are waiting on init status #oof

 

Dear Ruoyu,

Never mind – I did a fresh install again. However, some pods have been waiting to initialize for more than 3 days.

The oof-cmso-db PVC is stuck in Pending.

ubuntu@kub1:~$ kubectl get  pvc   -n onap  |  grep -i pending

dev-appc-appc-db                                                             Pending                                                                                                                1d

dev-oof-cmso-db                                                              Pending                                                                                                                1d

dev-sdnc-controller-blueprints-db                                            Pending                                                                                                                1d

dev-sdnc-nengdb                                                              Pending    

 

ubuntu@kub1:~$ kubectl describe  pvc  dev-oof-cmso-db   -n onap 

Name:          dev-oof-cmso-db

Namespace:     onap

StorageClass:  

Status:        Pending

Volume:        

Labels:        app=cmso-db

               chart=mariadb-galera-3.0.0

               heritage=Tiller

               release=dev-oof

Annotations:   <none>

Finalizers:    [kubernetes.io/pvc-protection]

Capacity:      

Access Modes:  

Events:

  Type    Reason         Age                 From                         Message

  ----    ------         ----                ----                         -------

  Normal  FailedBinding  10m (x321 over 1h)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set

  Normal  FailedBinding  10s (x17 over 4m)   persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
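
(For reference, that event means the claim has neither a StorageClass nor a pre-created PersistentVolume to bind to. A minimal hostPath PV sketch that would let it bind follows – the name, size, path and access mode are assumptions and must match what the chart's PVC actually requests; the OOM setup normally pre-creates these under /dockerdata-nfs.)

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dev-oof-cmso-db
spec:
  capacity:
    storage: 2Gi             # must be >= the PVC request (assumption)
  accessModes:
    - ReadWriteOnce          # must match the PVC's access mode (assumption)
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /dockerdata-nfs/dev-oof/cmso-db   # path is an example
EOF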

 

dev-oof-cmso-db-0                                             1/1       Running            0          3d

dev-oof-music-cassandra-0                                     1/1       Running            0          3d

dev-oof-music-cassandra-1                                     1/1       Running            0          3d

dev-oof-music-cassandra-2                                     1/1       Running            0          3d

dev-oof-music-cassandra-job-config-rnrs2                      0/1       Completed          0          3d

dev-oof-music-tomcat-64d4c64db7-vrhbp                         1/1       Running            0          3d

dev-oof-music-tomcat-64d4c64db7-vttlr                         1/1       Running            0          3d

dev-oof-music-tomcat-64d4c64db7-wq95z                         1/1       Running            0          3d

dev-oof-oof-7b4bccc8d7-5slv2                                  1/1       Running            0          3d

dev-oof-oof-cmso-service-55499fdf4c-cw2qb                     1/1       Running            0          3d

dev-oof-oof-has-api-7d9b977b48-bvgdc                          1/1       Running            0          3d

dev-oof-oof-has-controller-7f5b6c5f7-9ttcv                    1/1       Running            0          3d

dev-oof-oof-has-data-b57bd54fb-chztq                          0/1       Init:2/4           502        3d

dev-oof-oof-has-onboard-2v8km                                 0/1       Completed          0          3d

dev-oof-oof-has-reservation-5869b786b-8z7gt                   0/1       Init:2/4           497        3d

dev-oof-oof-has-solver-5c75888465-4fkrr                       0/1       Init:2/4           503        3d

dev-oof-zookeeper-0                                           1/1       Running            0          3d

dev-oof-zookeeper-1                                           1/1       Running            0          3d

dev-oof-zookeeper-2                                           1/1       Running            0          3d

 

ubuntu@kub1:~$ kubectl  describe  pod  dev-oof-music-cassandra-job-config-rnrs2  -n  onap 

Name:           dev-oof-music-cassandra-job-config-rnrs2

Namespace:      onap

Node:           kub4/192.168.13.162

Start Time:     Thu, 27 Dec 2018 14:46:05 +0000

Labels:         app=music-cassandra-job-job

                controller-uid=22ea9645-09e6-11e9-9f85-028b4359678c

                job-name=dev-oof-music-cassandra-job-config

                release=dev-oof

Annotations:    <none>

Status:         Succeeded

IP:             10.42.9.206

Controlled By:  Job/dev-oof-music-cassandra-job-config

Init Containers:

  music-cassandra-job-readiness:

    Container ID:  docker://3de813213d2ecc31798668e6018944239820c5fa029efd17bd3243f2d0564f24

    Image:         oomk8s/readiness-check:2.0.0

    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed

    Port:          <none>

    Host Port:     <none>

    Command:

      /root/ready.py

    Args:

      --container-name

      music-cassandra

    State:          Terminated

      Reason:       Completed

      Exit Code:    0

      Started:      Thu, 27 Dec 2018 15:29:51 +0000

      Finished:     Thu, 27 Dec 2018 15:30:08 +0000

    Ready:          True

    Restart Count:  0

    Environment:

      NAMESPACE:  onap (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

Containers:

  music-cassandra-job-update-job:

    Container ID:   docker://9ae4cfb9b78b4b33faa5bc9c9a2f1478ee090d7cf83ea2370fac983e59c06c25

    Image:          nexus3.onap.org:10001/onap/music/cassandra_job:3.0.24

    Image ID:       docker-pullable://nexus3.onap.org:10001/onap/music/cassandra_job@sha256:b21578fc4cf68585909bd82fe3e8a8621ed1a48196fbaf399f182bbe147f7fa5

    Port:           <none>

    Host Port:      <none>

    State:          Terminated

      Reason:       Completed

      Exit Code:    0

      Started:      Thu, 27 Dec 2018 15:53:42 +0000

      Finished:     Thu, 27 Dec 2018 15:56:00 +0000

    Ready:          False

    Restart Count:  0

    Environment:

      CASS_HOSTNAME:  music-cassandra

      USERNAME:       nelson24

      PORT:           9042

      PASSWORD:       nelson24

      TIMEOUT:        30

      DELAY:          120

    Mounts:

      /cql/admin.cql from music-cassandra-job-cql (rw)

      /cql/admin_pw.cql from music-cassandra-job-cql (rw)

      /cql/extra from music-cassandra-job-extra-cql (rw)

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

Conditions:

  Type              Status

  Initialized       True 

  Ready             False 

  ContainersReady   False 

  PodScheduled      True 

Volumes:

  music-cassandra-job-cql:

    Type:      ConfigMap (a volume populated by a ConfigMap)

    Name:      dev-oof-music-cassandra-job-cql

    Optional:  false

  music-cassandra-job-extra-cql:

    Type:      ConfigMap (a volume populated by a ConfigMap)

    Name:      dev-oof-music-cassandra-job-extra-cql

    Optional:  false

  default-token-5kd4q:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  default-token-5kd4q

    Optional:    false

QoS Class:       BestEffort

Node-Selectors:  <none>

Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s

                 node.kubernetes.io/unreachable:NoExecute for 300s

Events:          <none>


*****************************

ubuntu@kub1:~$ kubectl logs -f   dev-oof-music-cassandra-job-config-rnrs2  -n  onap 

Sleeping for 120 seconds before running cql

#############################################

############## Let run cql's ################

#############################################

Current Variables in play

Default User

DEF_USER=cassandra

DEF_PASS=***********

New User

USERNAME=nelson24

PASSWORD=***********

TIMEOUT=30

Running cqlsh --request-timeout=30 -u cassandra -p cassandra -e "describe keyspaces;" music-cassandra 9042;

 

system_traces  system_schema  system_auth  system  system_distributed

 

Cassandra user still avalable, will continue as usual

Running admin.cql file:

Running cqlsh -u cassandra -p cassandra -f /cql/admin.cql music-cassandra 9042

 

admin  system_schema  system_auth  system  system_distributed  system_traces

 

Success - admin.cql - Admin keyspace created

Running admin_pw.cql file:

Running cqlsh -u cassandra -p cassandra -f /cql/admin_pw.cql music-cassandra 9042

Success - admin_pw.cql - Password Changed

Running Test - look for admin keyspace:

Running cqlsh -u nelson24 -p nelson24 -e select bin boot cql dev docker-entrypoint.sh etc home lib lib64 media mnt opt proc root run runcql.sh sbin srv sys tmp usr var from system_auth.roles

/runcql.sh: line 77:  music-cassandra 9042: command not found

 

 role      | can_login | is_superuser | member_of | salted_hash

-----------+-----------+--------------+-----------+--------------------------------------------------------------

  nelson24 |      True |         True |      null | $2a$10$nDMFWNw2EOp6B1y37bg84eK0MEyaEC3fG.UbkSlQ21v8rEq7YTBVG

 cassandra |      True |         True |      null | $2a$10$H2RxK6f0yrofoq6U9IzU4ObHhlZylXiZUJW56H8d8OnGWuaILFbgO

 

(2 rows)

Success - running test

**********************************************

ubuntu@kub1:~$ kubectl  logs  -f  dev-oof-oof-has-data-b57bd54fb-chztq  -n onap 

Error from server (BadRequest): container "oof-has-data" in pod "dev-oof-oof-has-data-b57bd54fb-chztq" is waiting to start: PodInitializing

ubuntu@kub1:~$ kubectl describe  pod   dev-oof-oof-has-data-b57bd54fb-chztq  -n onap 

Name:           dev-oof-oof-has-data-b57bd54fb-chztq

Namespace:      onap

Node:           kub4/

Start Time:     Thu, 27 Dec 2018 14:46:04 +0000

Labels:         app=oof-has-data

                pod-template-hash=613681096

                release=dev-oof

Annotations:    <none>

Status:         Pending

IP:             10.42.21.219

Controlled By:  ReplicaSet/dev-oof-oof-has-data-b57bd54fb

Init Containers:

  oof-has-data-readiness:

    Container ID:  docker://e33b41354013f475e091f583d307b57dcf115d17e5cbfc7168368d4a474c814a

    Image:         oomk8s/readiness-check:2.0.0

    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed

    Port:          <none>

    Host Port:     <none>

    Command:

      /root/ready.py

    Args:

      --container-name

      music-tomcat

    State:          Terminated

      Reason:       Completed

      Exit Code:    0

      Started:      Thu, 27 Dec 2018 16:32:43 +0000

      Finished:     Thu, 27 Dec 2018 16:41:22 +0000

    Ready:          True

    Restart Count:  4

    Environment:

      NAMESPACE:  onap (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

  oof-has-data-onboard-readiness:

    Container ID:  docker://b490562a8d649fa31ed515c665ce5ac97ce9e092e4663a558b6c03b462bd325e

    Image:         oomk8s/readiness-check:2.0.0

    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed

    Port:          <none>

    Host Port:     <none>

    Command:

      /root/job_complete.py

    Args:

      -j

      dev-oof-oof-has-onboard

    State:          Terminated

      Reason:       Completed

      Exit Code:    0

      Started:      Thu, 27 Dec 2018 16:41:28 +0000

      Finished:     Thu, 27 Dec 2018 16:47:25 +0000

    Ready:          True

    Restart Count:  0

    Environment:

      NAMESPACE:  onap (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

  oof-has-data-health-readiness:

    Container ID:  docker://5f3a54c5d9cf8b10cf3e7fe1e4e36c131f8a17d52c61d16f4468f6d6343e8618

    Image:         oomk8s/readiness-check:2.0.0

    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed

    Port:          <none>

    Host Port:     <none>

    Command:

      /root/job_complete.py

    Args:

      -j

      dev-oof-oof-has-healthcheck

    State:          Running

      Started:      Mon, 31 Dec 2018 06:02:25 +0000

    Last State:     Terminated

      Reason:       Error

      Exit Code:    1

      Started:      Mon, 31 Dec 2018 05:52:15 +0000

      Finished:     Mon, 31 Dec 2018 06:02:20 +0000

    Ready:          False

    Restart Count:  498

    Environment:

      NAMESPACE:  onap (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

  oof-has-data-data-sms-readiness:

    Container ID:  

    Image:         oomk8s/readiness-check:2.0.0

    Image ID:      

    Port:          <none>

    Host Port:     <none>

    Command:

      sh

      -c

      resp="FAILURE"; until [ $resp = "200" ]; do resp=$(curl -s -o /dev/null -k --write-out %{http_code} https://aaf-sms.onap:10443/v1/sms/domain/has/secret); echo $resp; sleep 2; done

    State:          Waiting

      Reason:       PodInitializing

    Ready:          False

    Restart Count:  0

    Environment:

      NAMESPACE:  onap (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

Containers:

  oof-has-data:

    Container ID:  

    Image:         nexus3.onap.org:10001/onap/optf-has:1.2.4

    Image ID:      

    Port:          <none>

    Host Port:     <none>

    Command:

      python

    Args:

      /usr/local/bin/conductor-data

      --config-file=/usr/local/bin/conductor.conf

    State:          Waiting

      Reason:       PodInitializing

    Ready:          False

    Restart Count:  0

    Liveness:       exec [cat /usr/local/bin/healthy.sh] delay=10s timeout=1s period=10s #success=1 #failure=3

    Readiness:      exec [cat /usr/local/bin/healthy.sh] delay=10s timeout=1s period=10s #success=1 #failure=3

    Environment:    <none>

    Mounts:

      /etc/localtime from localtime (ro)

      /usr/local/bin/AAF_RootCA.cer from onap-oof-has-config (rw)

      /usr/local/bin/aai_cert.cer from onap-oof-has-config (rw)

      /usr/local/bin/aai_key.key from onap-oof-has-config (rw)

      /usr/local/bin/conductor.conf from onap-oof-has-config (rw)

      /usr/local/bin/healthy.sh from onap-oof-has-config (rw)

      /usr/local/bin/log.conf from onap-oof-has-config (rw)

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

Conditions:

  Type              Status

  Initialized       False 

  Ready             False 

  ContainersReady   False 

  PodScheduled      True 

Volumes:

  localtime:

    Type:          HostPath (bare host directory volume)

    Path:          /etc/localtime

    HostPathType:  

  onap-oof-has-config:

    Type:      ConfigMap (a volume populated by a ConfigMap)

    Name:      onap-oof-has-configmap

    Optional:  false

  default-token-5kd4q:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  default-token-5kd4q

    Optional:    false

QoS Class:       BestEffort

Node-Selectors:  <none>

Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s

                 node.kubernetes.io/unreachable:NoExecute for 300s

Events:

  Type    Reason   Age                 From           Message

  ----    ------   ----                ----           -------

  Normal  Pulling  10m (x499 over 3d)  kubelet, kub4  pulling image "oomk8s/readiness-check:2.0.0"

  Normal  Pulled   10m (x499 over 3d)  kubelet, kub4  Successfully pulled image "oomk8s/readiness-check:2.0.0"

ubuntu@kub1:~$ kubectl logs -f     dev-oof-oof-has-data-b57bd54fb-chztq -c oof-has-data-data-sms-readiness  -n onap 

Error from server (BadRequest): container "oof-has-data-data-sms-readiness" in pod "dev-oof-oof-has-data-b57bd54fb-chztq" is waiting to start: PodInitializing

***********************************

ubuntu@kub1:~$ kubectl  logs -f  dev-oof-oof-has-reservation-5869b786b-8z7gt  -n onap 

Error from server (BadRequest): container "oof-has-reservation" in pod "dev-oof-oof-has-reservation-5869b786b-8z7gt" is waiting to start: PodInitializing

ubuntu@kub1:~$ kubectl describe  pod  dev-oof-oof-has-solver-5c75888465-4fkrr   -n onap 

Name:           dev-oof-oof-has-solver-5c75888465-4fkrr

Namespace:      onap

Node:           kub2/

Start Time:     Thu, 27 Dec 2018 14:46:04 +0000

Labels:         app=oof-has-solver

                pod-template-hash=1731444021

                release=dev-oof

Annotations:    <none>

Status:         Pending

IP:             10.42.5.137

Controlled By:  ReplicaSet/dev-oof-oof-has-solver-5c75888465

Init Containers:

  oof-has-solver-readiness:

    Container ID:  docker://607c01dbc5d145392ff7ad6d576cf30ff038dbd65e95c75354347bc904a70641

    Image:         oomk8s/readiness-check:2.0.0

    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed

    Port:          <none>

    Host Port:     <none>

    Command:

      /root/ready.py

    Args:

      --container-name

      music-tomcat

    State:          Terminated

      Reason:       Completed

      Exit Code:    0

      Started:      Thu, 27 Dec 2018 16:40:27 +0000

      Finished:     Thu, 27 Dec 2018 16:41:28 +0000

    Ready:          True

    Restart Count:  3

    Environment:

      NAMESPACE:  onap (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

  oof-has-solver-onboard-readiness:

    Container ID:  docker://9d66b194b045f72bac0fbef3edbf9703fd24331fbb5f6266461fb0581218d520

    Image:         oomk8s/readiness-check:2.0.0

    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed

    Port:          <none>

    Host Port:     <none>

    Command:

      /root/job_complete.py

    Args:

      -j

      dev-oof-oof-has-onboard

    State:          Terminated

      Reason:       Completed

      Exit Code:    0

      Started:      Thu, 27 Dec 2018 16:45:54 +0000

      Finished:     Thu, 27 Dec 2018 16:47:25 +0000

    Ready:          True

    Restart Count:  0

    Environment:

      NAMESPACE:  onap (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

  oof-has-solver-health-readiness:

    Container ID:  docker://7c9b31b37bd9d81082138215d27c225a903b3d04cce21525f7d655109757d315

    Image:         oomk8s/readiness-check:2.0.0

    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed

    Port:          <none>

    Host Port:     <none>

    Command:

      /root/job_complete.py

    Args:

      -j

      dev-oof-oof-has-healthcheck

    State:          Running

      Started:      Mon, 31 Dec 2018 06:19:02 +0000

    Last State:     Terminated

      Reason:       Error

      Exit Code:    1

      Started:      Mon, 31 Dec 2018 06:08:51 +0000

      Finished:     Mon, 31 Dec 2018 06:18:56 +0000

    Ready:          False

    Restart Count:  502

    Environment:

      NAMESPACE:  onap (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

  oof-has-solver-solvr-sms-readiness:

    Container ID:  

    Image:         oomk8s/readiness-check:2.0.0

    Image ID:      

    Port:          <none>

    Host Port:     <none>

    Command:

      sh

      -c

      resp="FAILURE"; until [ $resp = "200" ]; do resp=$(curl -s -o /dev/null -k --write-out %{http_code} https://aaf-sms.onap:10443/v1/sms/domain/has/secret); echo $resp; sleep 2; done

    State:          Waiting

      Reason:       PodInitializing

    Ready:          False

    Restart Count:  0

    Environment:

      NAMESPACE:  onap (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

Containers:

  oof-has-solver:

    Container ID:  

    Image:         nexus3.onap.org:10001/onap/optf-has:1.2.4

    Image ID:      

    Port:          <none>

    Host Port:     <none>

    Command:

      python

    Args:

      /usr/local/bin/conductor-solver

      --config-file=/usr/local/bin/conductor.conf

    State:          Waiting

      Reason:       PodInitializing

    Ready:          False

    Restart Count:  0

    Liveness:       exec [cat /usr/local/bin/healthy.sh] delay=10s timeout=1s period=10s #success=1 #failure=3

    Readiness:      exec [cat /usr/local/bin/healthy.sh] delay=10s timeout=1s period=10s #success=1 #failure=3

    Environment:    <none>

    Mounts:

      /etc/localtime from localtime (ro)

      /usr/local/bin/AAF_RootCA.cer from onap-oof-has-config (rw)

      /usr/local/bin/conductor.conf from onap-oof-has-config (rw)

      /usr/local/bin/healthy.sh from onap-oof-has-config (rw)

      /usr/local/bin/log.conf from onap-oof-has-config (rw)

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kd4q (ro)

Conditions:

  Type              Status

  Initialized       False 

  Ready             False 

  ContainersReady   False 

  PodScheduled      True 

Volumes:

  localtime:

    Type:          HostPath (bare host directory volume)

    Path:          /etc/localtime

    HostPathType:  

  onap-oof-has-config:

    Type:      ConfigMap (a volume populated by a ConfigMap)

    Name:      onap-oof-has-configmap

    Optional:  false

  default-token-5kd4q:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  default-token-5kd4q

    Optional:    false

QoS Class:       BestEffort

Node-Selectors:  <none>

Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s

                 node.kubernetes.io/unreachable:NoExecute for 300s

Events:

  Type    Reason   Age                From           Message

  ----    ------   ----               ----           -------

  Normal  Pulling  6m (x503 over 3d)  kubelet, kub2  pulling image "oomk8s/readiness-check:2.0.0"

  Normal  Pulled   6m (x503 over 3d)  kubelet, kub2  Successfully pulled image "oomk8s/readiness-check:2.0.0"

 

