Re: [ONAP Helpdesk #55374] [critical] Nexus3 config change breaks CD builds until 30+ snapshots are released by PTLs - Revert is OK


Michael O'Brien <Frank.Obrien@...>
 

Jessica,
Thanks for the revert - we are OK
Results of one of the 3 builds kicked off after this change look OK.

LF reverted 1825:
https://lists.onap.org/pipermail/onap-discuss/2018-April/009340.html
retesting OK
http://jenkins.onap.info/job/oom-cd-master/2805
this one goes to kibana.onap.info:5601
http://jenkins.onap.info/job/oom-cd-master2-aws/42/console

We are back to the normal 3-7 image errors, down from the 57 before:
ubuntu@ip-10-0-0-206:~$ kubectl get pods --all-namespaces | grep Image
onap dev-consul-b5d9d5564-q6cz7 0/1 ImagePullBackOff 0 52m
onap dev-consul-server-8f8f8767-sv7tk 0/1 ImagePullBackOff 0 52m
onap dev-smsdb-0 0/2 ErrImagePull
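
(The count above comes from a quick grep; a slightly tighter variant - just a sketch - counts only the pull-failure states:)
{noformat}
kubectl get pods --all-namespaces \
  | grep -c 'ErrImagePull\|ImagePullBackOff'
{noformat}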

Healthcheck is OK again

00:25:43 41 critical tests, 30 passed, 11 failed

We can work on the future transition.




Thank you
/michael

-----Original Message-----
From: Jessica Wagantall via RT [mailto:onap-helpdesk@...]
Sent: Sunday, April 29, 2018 6:25 PM
To: Michael O'Brien <Frank.Obrien@...>
Cc: gildas.lanilis@...; onap-discuss@...; onap-release@...
Subject: [ONAP Helpdesk #55374] [critical] Nexus3 config change breaks CD builds until 30+ snapshots are released by PTLs

Hi Michael,

Andy has reverted this change back to allowing Nexus3 Snapshot/Staging dependencies.

I understand that this is a big change, and we will definitely approach re-implementing it more carefully. For now, we have noted all projects that should make their releases and update their dependencies, so that we can fix those before re-implementing this change.

Thanks again and my apologies for the inconvenience.
Jess.

On Sat Apr 28 17:28:12 2018, Frank.Obrien@... wrote:
Jessica,
Hi, based on the config change put in to fix Nexus3 yesterday: any full deployment of ONAP that does not have a local Nexus proxy will fail to pull 30+ images until those are released by the PTLs.

All deployments will fail until the PTLs that own the containers below post their docker images to the LF for release.
However, I recommend that we release these images immediately without waiting for the PTLs, because we were already using these tags without issue until Friday afternoon.
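
For anyone triaging their own lab, a quick pull loop against the 10001 group repo will flag which tags no longer resolve (an untested sketch - the image list here is just a sample from the failures below, not the full ~30):
{noformat}
#!/bin/bash
# sample check - images listed are examples, not the complete affected set
for img in \
  onap/vfc/catalog:1.1.0-SNAPSHOT-latest \
  onap/appc-image:1.3.0-SNAPSHOT-latest \
  onap/sdc/sdc-workflow-designer:1.1.0-SNAPSHOT-STAGING-latest; do
  if docker pull "nexus3.onap.org:10001/${img}" >/dev/null 2>&1; then
    echo "OK      ${img}"
  else
    echo "MISSING ${img}"
  fi
done
{noformat}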

I am a bit surprised that this change was put in without communicating/realizing the full impact on every developer doing a deployment - it should have been communicated on Friday, with the actual Nexus3 fix landing on Monday, so we could address it.
Recommendation: where possible, a breaking devops change should be announced with a 24-hour warning.

This config change to fix Nexus3 is amplified because we currently build all images regardless of the triggering change (on-demand builds are in the works at the LF) - otherwise we would only fail on images with actual changes since yesterday.

From my investigation it looks like any pulls done as part of a deploy will fail if they directly access Nexus3.
My 3 CD systems all fail because they are not using a proxy. The tlab system passes because it has a proxy? In that case it would be using cached older versions - hence no issue.
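
For reference, a proxied lab would deploy roughly like this (sketch only - assumes the OOM charts expose the usual global.repository override, and nexus-proxy.mylab.example is a placeholder for a local caching registry, not a real host):
{noformat}
# Helm 2 style deploy pointing image pulls at a local caching proxy
# instead of nexus3.onap.org:10001 directly
helm install local/onap --name dev --namespace onap \
  --set global.repository=nexus-proxy.mylab.example:10001
# the proxy serves previously-cached tags, so the deploy survives
# upstream tags disappearing from the nexus3 group repo
{noformat}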

Investigation:
See
https://jira.onap.org/browse/CIMAN-157

(+) proposal - don't wait for PTL approval - we were already running with these 50 images - just re-enable them

All CD deployments fail except those with a proxy
http://jenkins.onap.info/job/oom-cd-master/
http://jenkins.onap.info/job/oom-cd-master2-aws/
passing (with proxy?)
https://jenkins.onap.org/view/External%20Labs/job/lab-tlab-beijing-oom-deploy/

(+) before
http://jenkins.onap.info/job/oom-cd-master2-aws/24/console
6 image errors
{noformat}
13:25:59 onap dev-so-db-684dffd45-cmsth 0/1 ErrImagePull 0 1h
13:25:59 onap dev-smsdb-0 0/2 ErrImagePull 0 1h
13:25:59 onap dev-portal-zookeeper-db466fc-lfgf7 0/1 ErrImagePull 0 1h
13:25:59 onap dev-policydb-5dbb9b54df-ngxwd 0/1 ErrImagePull 0 1h
13:25:59 onap dev-consul-b5d9d5564-85vn2 0/1 ErrImagePull 0 1h
13:25:59 onap dev-consul-server-8f8f8767-6x8cm 0/1 ErrImagePull 0 1h
unknown state
13:25:59 onap dev-vid-mariadb-9bbff4865-2cfkc 0/1 Pending 0 1h
13:25:59 onap dev-vnfsdk-647cc6bc8-bblpz 0/1 Pending 0 1h
{noformat}

(-) after
http://jenkins.onap.info/job/oom-cd-master2-aws/33/console
1 fixed
57 - 5 - (2 clustered) = 50
{noformat}
after 7 hours, a better list:
ubuntu@ip-10-0-0-206:~$ kubectl get pods --all-namespaces | grep Image
onap dev-aaf-84dbb784f-wgqp2 0/1 ErrImagePull 0 6h
onap dev-aai-babel-79d9cc58fb-2vv4j 0/1 ErrImagePull 0 6h
onap dev-aai-data-router-cb4cf6b79-4mshj 0/1 ErrImagePull 0 6h
onap dev-aai-gizmo-657cb8556c-dr2qf 1/2 ErrImagePull 0 6h
onap dev-aai-modelloader-7bf7598484-j4hsd 1/2 ErrImagePull 0 6h
onap dev-aai-resources-75c558cfd7-pqcwm 1/2 ErrImagePull 0 6h
onap dev-aai-search-data-8686bbd58c-rlbmn 1/2 ErrImagePull 0 6h
onap dev-appc-0 1/2 ErrImagePull 0 6h
onap dev-appc-1 1/2 ErrImagePull 0 6h
onap dev-appc-2 1/2 ErrImagePull 0 6h
onap dev-appc-cdt-747fb47876-kd647 0/1 ErrImagePull 0 6h
onap dev-appc-dgbuilder-67579b8578-mmqfk 0/1 ErrImagePull 0 6h
onap dev-clamp-9dbb4686d-42zw5 0/1 ErrImagePull 0 6h
onap dev-consul-b5d9d5564-hcgs8 0/1 ErrImagePull 0 6h
onap dev-consul-server-8f8f8767-tvrr5 0/1 ErrImagePull 0 6h
onap dev-dcae-cloudify-manager-d8748c658-ncr98 0/1 ErrImagePull 0 6h
onap dev-dcae-healthcheck-67f5dfc95-fvl52 0/1 ErrImagePull 0 6h
onap dev-dcae-redis-0 0/1 ErrImagePull 0 6h
onap dev-dmaap-5c876685bf-cqvgs 0/1 ErrImagePull 0 6h
onap dev-drools-0 0/1 ErrImagePull 0 6h
onap dev-msb-discovery-6c6dbcd8f-27djv 1/2 ErrImagePull 0 6h
onap dev-multicloud-5d4648c6c5-5g7w5 1/2 ErrImagePull 0 6h
onap dev-multicloud-ocata-6465ddf889-l68n8 1/2 ErrImagePull 0 6h
onap dev-multicloud-vio-5fbf66464-hzqvl 1/2 ErrImagePull 0 6h
onap dev-multicloud-windriver-8579976b7c-6bddk 1/2 ErrImagePull 0 6h
onap dev-pap-bd6f978c5-mkql2 1/2 ErrImagePull 0 6h
onap dev-portal-cassandra-5f477975d4-wj6bk 0/1 ErrImagePull 0 6h
onap dev-portal-db-745747866f-md9v9 0/1 ErrImagePull 0 6h
onap dev-postgres-config-njx74 0/1 ErrImagePull 0 6h
onap dev-robot-5fc4c7846b-t64tt 0/1 ErrImagePull 0 6h
onap dev-sdc-cs-64f45d77dc-xxvp5 0/1 ErrImagePull 0 6h
onap dev-sdc-es-57777d7789-xsl4t 0/1 ErrImagePull 0 6h
onap dev-sdc-wfd-5b5d4f58f6-jmj7n 0/1 ErrImagePull 0 6h
onap dev-sdnc-0 1/2 ErrImagePull 0 6h
onap dev-sdnc-dgbuilder-c85bdcd-zxx99 0/1 ErrImagePull 0 6h
onap dev-sms-857f6dbd87-5854h 0/1 ErrImagePull 0 6h
onap dev-smsdb-0 0/2 ErrImagePull 0 6h
onap dev-so-6ddb6775b9-wfnhg 1/2 ErrImagePull 0 6h
onap dev-vfc-catalog-7d89bc8b9d-h9276 1/2 ErrImagePull 0 6h
onap dev-vfc-ems-driver-864685477c-dsqbj 0/1 ErrImagePull 0 6h
onap dev-vfc-generic-vnfm-driver-544c977695-hp9sq 1/2 ErrImagePull 0 6h
onap dev-vfc-huawei-vnfm-driver-64665b4b4d-lfj9q 1/2 ErrImagePull 0 6h
onap dev-vfc-juju-vnfm-driver-856b9c78ff-ctbkj 1/2 ErrImagePull 0 6h
onap dev-vfc-multivim-proxy-767757dfd8-z8zcc 0/1 ErrImagePull 0 6h
onap dev-vfc-nokia-v2vnfm-driver-7fbc7dd6d6-vb49p 0/1 ErrImagePull 0 6h
onap dev-vfc-nokia-vnfm-driver-5f9c777fd8-fd67p 1/2 ErrImagePull 0 6h
onap dev-vfc-nslcm-76fd6648cc-4bsvk 1/2 ErrImagePull 0 6h
onap dev-vfc-resmgr-7dfb9c7554-vcvk7 1/2 ErrImagePull 0 6h
onap dev-vfc-vnflcm-7467d9c9b7-tc6ch 1/2 ErrImagePull 0 6h
onap dev-vfc-vnfmgr-77855c55f-qmq9p 1/2 ErrImagePull 0 6h
onap dev-vfc-vnfres-5658fc5d74-4sqt4 1/2 ErrImagePull 0 6h
onap dev-vfc-workflow-78f6466f9d-6qcvl 0/1 ErrImagePull 0 6h
onap dev-vfc-workflow-engine-79769874c7-gbtmb 0/1 ErrImagePull 0 6h
onap dev-vfc-zte-sdnc-driver-5b6c7cbd6b-mzfxp 0/1 ErrImagePull 0 6h
onap dev-vfc-zte-vnfm-driver-585cc96b6d-wbddb 1/2 ErrImagePull 0 6h
onap dev-vid-6686d58688-cfh88 1/2 ErrImagePull 0 6h
onap sniro-emulator-7fc8658bcb-xszm4 0/1 ErrImagePull 0 6h
ubuntu@ip-10-0-0-206:~$ kubectl get pods --all-namespaces | grep Image | wc -l
57
{noformat}


Image pull events:

Normal Pulling 2m (x14 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/sniroemulator:latest"
Normal Pulling 34m (x7 over 4h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/vid:1.2.1"
Normal Pulling 4m (x10 over 5h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/vfc/ztevnfmdriver:1.1.0-SNAPSHOT-latest"
Normal Pulling 14m (x13 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/vfc/ztesdncdriver:1.1.0-SNAPSHOT-latest"
Normal Pulling 13m (x13 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/vfc/wfengine-activiti:1.1.0-SNAPSHOT-latest"
Normal Pulling 12m (x13 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/vfc/wfengine-mgrservice:1.1.0-SNAPSHOT-latest"
Normal Pulling 16m (x12 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/vfc/vnfres:1.1.0-SNAPSHOT-latest"
Normal Pulling 13m (x12 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/vfc/vnfmgr:1.1.0-SNAPSHOT-latest"
Normal Pulling 24m (x12 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/vfc/vnflcm:1.1.0-SNAPSHOT-latest"
Normal Pulling 24m (x12 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/vfc/resmanagement:1.1.0-SNAPSHOT-latest"
Normal Pulling 14m (x12 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/vfc/nslcm:1.1.0-SNAPSHOT-latest"
Normal Pulling 16m (x12 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/vfc/nfvo/svnfm/nokia:1.1.0-SNAPSHOT-latest"
Normal Pulling 21m (x13 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/vfc/nfvo/svnfm/nokiav2:1.1.0-STAGING-latest"
Normal Pulling 20m (x13 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/vfc/multivimproxy:1.0.0-SNAPSHOT-latest"
Normal Pulling 17m (x12 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/vfc/jujudriver:1.1.0-SNAPSHOT-latest"
Normal Pulling 17m (x12 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/vfc/nfvo/svnfm/huawei:1.1.0-SNAPSHOT-latest"
Normal Pulling 22m (x12 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/vfc/gvnfmdriver:1.1.0-SNAPSHOT-latest"
Normal Pulling 17m (x13 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/vfc/emsdriver:1.1.0-SNAPSHOT-latest"
Normal Pulling 15m (x12 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/vfc/catalog:1.1.0-SNAPSHOT-latest"
Normal Pulling 18m (x11 over 6h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/openecomp/mso:1.2-STAGING-latest"
Normal Pulling 22m (x14 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/aaf/sms"
below for dev-smsdb-0
Normal Pulling 54m (x7 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/vault:0.9.6"
Normal Pulling 19m (x7 over 6h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/consul:1.0.6"
Normal Pulling 17m (x10 over 5h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/ccsdk-dgbuilder-image:0.2.1-SNAPSHOT"
Normal Pulling 19m (x9 over 5h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/sdnc-image:1.3-STAGING-latest"
Normal Pulling 26m (x13 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/sdc/sdc-workflow-designer:1.1.0-SNAPSHOT-STAGING-latest"
Normal Pulling 29m (x13 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/sdc-elasticsearch:1.2-STAGING-latest"
Normal Pulling 32m (x13 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/sdc-cassandra:1.2-STAGING-latest"
Normal Pulling 26m (x12 over 6h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/testsuite:1.2-STAGING-latest"
Normal Pulling 6m (x14 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/refrepo/postgres:latest"
Normal Pulling 15m (x14 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/portal-db:2.1-STAGING-latest"
Normal Pulling 21m (x14 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/music/cassandra_music:latest"
Normal Pulling 3m (x12 over 6h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/policy-pe:1.2-STAGING-latest"
Normal Pulling 9m (x13 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/multicloud/openstack-windriver:latest"
Normal Pulling 10m (x13 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/multicloud/vio:latest"
Normal Pulling 17m (x13 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/multicloud/openstack-ocata:latest"
Normal Pulling 20m (x13 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/multicloud/framework:latest"
Normal Pulling 13m (x12 over 6h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/msb/msb_discovery:1.1.0-SNAPSHOT-latest"
Normal Pulling 14m (x12 over 6h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/policy-drools:1.2-STAGING-latest"
Normal Pulling 6m (x10 over 5h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/dmaap/dmaap-mr:1.1.4"
Normal Pulling 20m (x14 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/org.onap.dcaegen2.deployments.redis-cluster-container:1.0.0"
Normal Pulling 22m (x14 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/org.onap.dcaegen2.deployments.healthcheck-container:1.0.0"
Normal Pulling 1m (x14 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/org.onap.dcaegen2.deployments.cm-container:1.3.0"
Normal Pulling 14m (x14 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/consul:1.0.6"
Normal Pulling 18m (x14 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/consul:1.0.0"
Normal Pulling 10m (x13 over 6h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/clamp"
dev-appc-dgbuilder-67579b8578-mmqfk - duplicate of sdnc
Normal Pulling 12m (x14 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/appc-cdt-image:1.3.0-SNAPSHOT-latest"
Normal Pulling 21m (x9 over 5h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/appc-image:1.3.0-SNAPSHOT-latest"
Normal Pulling 27m (x13 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/search-data-service:1.2-STAGING-latest"
Normal Pulling 31m (x12 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/aai-resources:1.2-STAGING-latest"
Normal Pulling 29m (x13 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/model-loader:1.2-STAGING-latest"
Normal Pulling 22m (x13 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/gizmo:1.1-STAGING-latest"
Normal Pulling 35m (x13 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/data-router:1.2-STAGING-latest"
Normal Pulling 34m (x14 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/babel:1.2-STAGING-latest"
Normal Pulling 35m (x13 over 7h) kubelet, ip-10-0-0-206.us-east-2.compute.internal pulling image "nexus3.onap.org:10001/onap/aaf/authz-service:latest"

PTL? ------------
sniro
common - postgres
smsdb
vault
consul

PTLs verified
vid
vfc
so
aaf
aai
appc
ccsdk
dcae
dmaap
clamp
sdnc
sdc
integration
portal
music
policy
multicloud
msb

/michael







-----Original Message-----
From: Jessica Wagantall via RT [mailto:onap-helpdesk@...]
Sent: Friday, April 27, 2018 11:00 PM
To: Michael O'Brien <Frank.Obrien@...>
Cc: onap-discuss@...; onap-release@...
Subject: [ONAP Helpdesk #55359] RE: [onap-discuss] Nexus3 docker.public configuration changed.

Dear Michael,

This is going to affect jobs using these dependencies.
We need to make sure SDC makes a release of their binaries so we can move our dependencies to the released versions of them.

The image you are trying to fetch exists only in Snapshot, and we need the SDC team to request a release (which will be published as onap/sdc/sdc-workflow-designer:1.1.0) for it: https://nexus3.onap.org/#browse/search=keyword%3Dsdc-workflow-designer:c60bc6b9612f47d36fd6ad66a8830cf2

Also, btw, fixing this tag too to match https://jira.onap.org/browse/CIMAN-132.
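
As a quick sanity check (a sketch, assuming anonymous read and the standard Docker registry v2 API on the 10001 endpoint), you can list which tags the group actually serves:
{noformat}
curl -s https://nexus3.onap.org:10001/v2/onap/sdc/sdc-workflow-designer/tags/list
# expect only released tags here (e.g. 1.1.0 once it is cut);
# 1.1.0-SNAPSHOT-STAGING-latest lives only in docker.snapshots/staging
{noformat}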

Please let me know if this explanation helps.
Thanks!
Jess

On Fri Apr 27 20:51:37 2018, Frank.Obrien@... wrote:
Team, LF,
Seeing issues as of 1400 EDT where a major portion (about 35-50 of the 107 total) of the docker container images are failing to resolve. This started around midday 20180427, after the slowdown that was occurring earlier in the day.
35 failures on both AWS systems as of a couple of hours ago; 50 on a slower internal system.
http://jenkins.onap.info/job/oom-cd-master2-aws/27/console
http://jenkins.onap.info/job/oom-cd-master/2789/console
ubuntu@ip-10-0-0-144:~$ kubectl get pods --all-namespaces | grep Image
onap dev-aaf-84dbb784f-62xrj 0/1 ImagePullBackOff 0 41m
onap dev-aai-babel-79d9cc58fb-98blr 0/1 ImagePullBackOff 0 41m
onap dev-aai-data-router-cb4cf6b79-rr5sn 0/1 ImagePullBackOff 0 41m
onap dev-aai-gizmo-657cb8556c-blbcc 1/2 ErrImagePull 0 41m
onap dev-aai-search-data-8686bbd58c-tx84s 1/2 ErrImagePull 0 41m
onap dev-appc-cdt-747fb47876-dffnw 0/1 ImagePullBackOff 0 41m
onap dev-clamp-9dbb4686d-jwvcz 0/1 ImagePullBackOff 0 41m
onap dev-consul-b5d9d5564-c72jq 0/1 ImagePullBackOff 0 40m
onap dev-consul-server-8f8f8767-hwfgk 0/1 ImagePullBackOff 0 40m
onap dev-dcae-cloudify-manager-d8748c658-gt7xd 0/1 ImagePullBackOff 0 40m
onap dev-dcae-healthcheck-67f5dfc95-qf5sx 0/1 ImagePullBackOff 0 40m
onap dev-dcae-redis-0 0/1 ImagePullBackOff 0 41m
onap dev-multicloud-5d4648c6c5-lmgrm 1/2 ErrImagePull 0 40m
onap dev-multicloud-ocata-6465ddf889-dsnw6 1/2 ErrImagePull 0 40m
onap dev-multicloud-windriver-8579976b7c-r6qgz 1/2 ErrImagePull 0 40m
onap dev-portal-cassandra-5f477975d4-h6rbt 0/1 ErrImagePull 0 40m
onap dev-portal-db-745747866f-jh7gn 0/1 ImagePullBackOff 0 40m
onap dev-postgres-config-44fxk 0/1 ErrImagePull 0 41m
onap dev-robot-5fc4c7846b-b52g4 0/1 ImagePullBackOff 0 40m
onap dev-sdc-cs-64f45d77dc-85rhg 0/1 ImagePullBackOff 0 40m
onap dev-sdc-es-57777d7789-27b4b 0/1 ErrImagePull 0 40m
onap dev-sdc-wfd-5b5d4f58f6-4xg86 0/1 ImagePullBackOff 0 40m
onap dev-sms-857f6dbd87-9g87d 0/1 ErrImagePull 0 41m
onap dev-smsdb-0 0/2 ErrImagePull 0 41m
onap dev-vfc-catalog-7d89bc8b9d-5rdx2 1/2 ErrImagePull 0 40m
onap dev-vfc-ems-driver-864685477c-fnthh 0/1 ImagePullBackOff 0 40m
onap dev-vfc-multivim-proxy-767757dfd8-ntzkb 0/1 ImagePullBackOff 0 40m
onap dev-vfc-nokia-v2vnfm-driver-7fbc7dd6d6-vrf85 0/1 ErrImagePull 0 40m
onap dev-vfc-nokia-vnfm-driver-5f9c777fd8-k22hq 1/2 ErrImagePull 0 40m
onap dev-vfc-nslcm-76fd6648cc-gk64s 1/2 ErrImagePull 0 40m
onap dev-vfc-workflow-78f6466f9d-th8zf 0/1 ImagePullBackOff 0 40m
onap dev-vfc-workflow-engine-79769874c7-5q5df 0/1 ImagePullBackOff 0 40m
onap dev-vfc-zte-sdnc-driver-5b6c7cbd6b-2wnj8 0/1 ImagePullBackOff 0 40m
onap sniro-emulator-7fc8658bcb-zst6d 0/1 ImagePullBackOff 0 40m


Warning Failed 24s (x11 over 5h) kubelet, beijing-oom Failed to pull image "nexus3.onap.org:10001/onap/sdc/sdc-workflow-designer:1.1.0-SNAPSHOT-STAGING-latest": rpc error: code = Unknown desc = Error: image onap/sdc/sdc-workflow-designer:1.1.0-SNAPSHOT-STAGING-latest not found

Thank you
/michael

From: onap-discuss-bounces@... [mailto:onap-discuss-bounces@...] On Behalf Of Jessica Wagantall
Sent: Friday, April 27, 2018 4:50 PM
To: onap-discuss@...; onap-release <onap-release@...>; Jeremy Phelps <jphelps@...>; Andrew Grimberg <agrimberg@...>; Gildas Lanilis <gildas.lanilis@...>
Subject: [onap-discuss] Nexus3 docker.public configuration changed.

Dear ONAP team,

This morning we had some issues with our docker.io proxy, because of which some teams were not able to pull needed libraries. This problem has been corrected.

However, while debugging this issue, we realized our docker.public (10001) repository had a configuration error.

docker.public should only be a group over docker.releases and docker.io (in that order), so that any dependency is looked for first in docker.releases and, if not found, then in docker.io.

Our configuration was erroneously including docker.snapshots and docker.staging in the group too.
This configuration has been adjusted, so some teams might see dependency issues; these need to be corrected by the teams making a release of their binaries, so that the binaries become available in the docker.public group.
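
To illustrate the intended layout (a sketch of the rule above, not an exported Nexus config):
{noformat}
# docker.public (group, port 10001)
#   member 1: docker.releases   <- searched first
#   member 2: docker.io proxy   <- fallback
# NOT members any more: docker.snapshots, docker.staging
#
# so a pull through 10001 only succeeds for released tags (or upstream
# docker.io images); SNAPSHOT/STAGING tags must be released first
{noformat}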

Basically, we are following this rule that we are already following
in
Nexus2:
https://wiki.onap.org/display/DW/Release+Versioning+Strategy#Release
Ve
rsioningStrategy-
ONAPVersioningStrategy

I am here to help make any releases the teams need - of course with PTL approval and via an RT ticket.

Please let me know if you guys have any questions.
Thanks a ton!
Jess