Re: [AAI] Problem accessing AAI GUI from ONAP Portal

Keong Lim
 

Hi Thiriloshini,

I don't know if this will fix your problem, but when I recently tried to use the AAI UI, I also needed to update my hosts file with these additional names:

10.1.8.103 aaf-onap-test.osaaf.org
10.1.8.103 aai.ui.simpledemo.onap.org

Also, I needed to add a root certificate to my browsers so that the self-signed certificates did not cause any problems:
https://wiki.onap.org/download/attachments/64007235/onap_root.pem?api=v2
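(For reference, a minimal shell sketch of both steps; the IP matches the entries above, and the browser import itself is manual:)

echo "10.1.8.103 aaf-onap-test.osaaf.org" | sudo tee -a /etc/hosts
echo "10.1.8.103 aai.ui.simpledemo.onap.org" | sudo tee -a /etc/hosts
curl -o onap_root.pem "https://wiki.onap.org/download/attachments/64007235/onap_root.pem?api=v2"
# then import onap_root.pem into each browser's trusted CA store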

I tested across Firefox, Chrome and IE 11.

The HTTP port has been closed, so that error is expected.
The direct access to HTTPS is also expected to redirect to the Portal login page.

If you continue to have problems with the AAI UI, we will try to get Francis Paquette to help when he is back from holiday.

Keong

Re: ODP: [onap-discuss] [vLBMS] Create instance into VID fails

krzysztof.kurczewski@...
 

Hi Marco,


Could you provide more details about your kubernetes setup?


Where is this json located? I found only a few xml files on pod dev-sdnc-sdnc-0 and no trace of a "check for VNF-API-preload and copy" action inside any of them:


GENERIC-RESOURCE-API_vf-module-topology-operation-assign-no-preload.xml
GENERIC-RESOURCE-API_vf-module-topology-operation-assign-preload.xml
GENERIC-RESOURCE-API_vf-module-topology-operation-assign.xml
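(For what it's worth, a recursive grep along these lines is one way to search for the node; the graphs directory path is an assumption about the SDNC image layout:)

kubectl -n onap exec dev-sdnc-sdnc-0 -- grep -rl "check for VNF-API-preload and copy" /opt/onap/sdnc/svclogic/graphs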

Also, is there any related ticket?


From: onap-discuss@... <onap-discuss@...> on behalf of Marco Platania via Lists.Onap.Org <platania=research.att.com@...>
Sent: Monday, July 8, 2019 11:00:10 PM
To: onap-discuss@...; a.malinconico@...
Subject: Re: [onap-discuss] ODP: [onap-discuss] [vLBMS] Create instance into VID fails
 

Aniello,

 

Not sure if you still have this issue; I just ran into it myself. To solve it, import the GENERIC-RESOURCE-API_vf-module-topology-operation-assign.json file into the SDNC DG builder, search for the call node named "check for VNF-API-preload and copy" and remove it.

 

Then, upload the modified DG and retry.

 

Thanks,

Marco

 

From: <onap-discuss@...> on behalf of Aniello Paolo Malinconico <a.malinconico@...>
Reply-To: "onap-discuss@..." <onap-discuss@...>, "a.malinconico@..." <a.malinconico@...>
Date: Friday, July 5, 2019 at 9:04 AM
To: Aniello Paolo Malinconico <a.malinconico@...>, "onap-discuss@..." <onap-discuss@...>
Subject: Re: [onap-discuss] ODP: [onap-discuss] [vLBMS] Create instance into VID fails

 

A more specific log is in the attached file from the sdnc container.
Here is the error extract:

2019-07-05 12:56:36,408 | INFO  | 1534955105-48303 | PropertiesNode                   | 240 - org.onap.ccsdk.sli.plugins.properties-node-provider - 0.4.4 | +++ prop.restapi.tx-allottedresource: [/restconf/config/GENERIC-RESOURCE-API:tunnelxconn-allotted-resources/tunnelxconn-allotted-resource/{allotted-resource-id}/]
2019-07-05 12:56:36,409 | INFO  | 1534955105-48303 | PropertiesNode                   | 240 - org.onap.ccsdk.sli.plugins.properties-node-provider - 0.4.4 | +++ prop.restapi.connection-oof-url: [
http://oof-osdf:8698/api/oof/v1/route]
2019-07-05 12:56:36,409 | INFO  | 1534955105-48303 | PropertiesNode                   | 240 - org.onap.ccsdk.sli.plugins.properties-node-provider - 0.4.4 | +++ prop.restapi.naming.gen-name.templatefile: [naming-ms-post-gen-name.json]
2019-07-05 12:56:36,410 | INFO  | 1534955105-48303 | SvcLogicServiceImplBase          | 477 - wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar - 0.0.0 | About to execute node # 3 (switch)
2019-07-05 12:56:36,411 | INFO  | 1534955105-48303 | SvcLogicServiceImplBase          | 477 - wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar - 0.0.0 | About to execute node # 4 (block)
2019-07-05 12:56:36,412 | INFO  | 1534955105-48303 | SvcLogicServiceImplBase          | 477 - wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar - 0.0.0 | About to execute node # 6 (switch)
2019-07-05 12:56:36,412 | INFO  | 1534955105-48303 | SvcLogicServiceImplBase          | 477 - wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar - 0.0.0 | About to execute node # 7 (block)
2019-07-05 12:56:36,413 | INFO  | 1534955105-48303 | SvcLogicServiceImplBase          | 477 - wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar - 0.0.0 | About to execute node # 9 (set)
2019-07-05 12:56:36,414 | INFO  | 1534955105-48303 | SvcLogicServiceImplBase          | 477 - wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar - 0.0.0 | About to execute node # 10 (switch)
2019-07-05 12:56:36,414 | INFO  | 1534955105-48303 | SvcLogicExprListener             | 227 - org.onap.ccsdk.sli.core.sli-common - 0.4.4 | Outcome (1) not found, keys are { ("") (Other)}
2019-07-05 12:56:36,415 | INFO  | 1534955105-48303 | SvcLogicServiceImplBase          | 477 - wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar - 0.0.0 | About to execute node # 12 (for)
2019-07-05 12:56:36,416 | INFO  | 1534955105-48303 | SvcLogicServiceImplBase          | 477 - wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar - 0.0.0 | About to execute node # 13 (switch)
2019-07-05 12:56:36,417 | INFO  | 1534955105-48303 | SvcLogicServiceImplBase          | 477 - wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar - 0.0.0 | About to execute node # 14 (block)
2019-07-05 12:56:36,417 | INFO  | 1534955105-48303 | SvcLogicServiceImplBase          | 477 - wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar - 0.0.0 | About to execute node # 15 (set)
2019-07-05 12:56:36,418 | INFO  | 1534955105-48303 | SvcLogicServiceImplBase          | 477 - wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar - 0.0.0 | About to execute node # 16 (break)
2019-07-05 12:56:36,419 | ERROR | 1534955105-48303 | ForNodeExecutor                  | 477 - wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar - 0.0.0 | ForNodeExecutor caught break
org.onap.ccsdk.sli.core.sli.BreakNodeException: BreakNodeExecutor encountered break with nodeId 16
at org.onap.ccsdk.sli.core.sli.provider.base.BreakNodeExecutor.execute(BreakNodeExecutor.java:39) [477:wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar:0.0.0]
at org.onap.ccsdk.sli.core.sli.provider.base.SvcLogicServiceImplBase.executeNode(SvcLogicServiceImplBase.java:147) [477:wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar:0.0.0]
at org.onap.ccsdk.sli.core.sli.provider.base.BlockNodeExecutor.execute(BlockNodeExecutor.java:62) [477:wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar:0.0.0]
at org.onap.ccsdk.sli.core.sli.provider.base.SvcLogicServiceImplBase.executeNode(SvcLogicServiceImplBase.java:147) [477:wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar:0.0.0]
at org.onap.ccsdk.sli.core.sli.provider.base.ForNodeExecutor.execute(ForNodeExecutor.java:94) [477:wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar:0.0.0]
at org.onap.ccsdk.sli.core.sli.provider.base.SvcLogicServiceImplBase.executeNode(SvcLogicServiceImplBase.java:147) [477:wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar:0.0.0]
at org.onap.ccsdk.sli.core.sli.provider.base.BlockNodeExecutor.execute(BlockNodeExecutor.java:62) [477:wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar:0.0.0]
at org.onap.ccsdk.sli.core.sli.provider.base.SvcLogicServiceImplBase.executeNode(SvcLogicServiceImplBase.java:147) [477:wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar:0.0.0]
at org.onap.ccsdk.sli.core.sli.provider.base.SvcLogicServiceImplBase.execute(SvcLogicServiceImplBase.java:117) [477:wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar:0.0.0]
at org.onap.ccsdk.sli.core.sli.provider.base.CallNodeExecutor.execute(CallNodeExecutor.java:131) [477:wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar:0.0.0]
at org.onap.ccsdk.sli.core.sli.provider.base.SvcLogicServiceImplBase.executeNode(SvcLogicServiceImplBase.java:147) [477:wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar:0.0.0]
at org.onap.ccsdk.sli.core.sli.provider.base.BlockNodeExecutor.execute(BlockNodeExecutor.java:62) [477:wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar:0.0.0]
at org.onap.ccsdk.sli.core.sli.provider.base.SvcLogicServiceImplBase.executeNode(SvcLogicServiceImplBase.java:147) [477:wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar:0.0.0]
at org.onap.ccsdk.sli.core.sli.provider.base.SvcLogicServiceImplBase.execute(SvcLogicServiceImplBase.java:117) [477:wrap_file__opt_opendaylight_system_org_onap_ccsdk_sli_core_sli-provider-base_0.4.4_sli-provider-base-0.4.4.jar:0.0.0]
at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.execute(SvcLogicServiceImpl.java:112) [228:org.onap.ccsdk.sli.core.sli-provider:0.4.4]
at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.execute(SvcLogicServiceImpl.java:91) [228:org.onap.ccsdk.sli.core.sli-provider:0.4.4]
at Proxyb1891dae_ebe0_4112_bdd5_e6f953ee0921.execute(Unknown Source) [?:?]
at Proxye4076403_951d_4506_b1d2_30a529eb1eae.execute(Unknown Source) [?:?]
at org.onap.sdnc.northbound.GenericResourceApiSvcLogicServiceClient.execute(GenericResourceApiSvcLogicServiceClient.java:73) [245:org.onap.sdnc.northbound.generic-resource-api-provider:1.5.3.SNAPSHOT]
at org.onap.sdnc.northbound.GenericResourceApiProvider.tryGetProperties(GenericResourceApiProvider.java:764) [245:org.onap.sdnc.northbound.generic-resource-api-provider:1.5.3.SNAPSHOT]
at org.onap.sdnc.northbound.GenericResourceApiProvider.vfModuleTopologyOperation(GenericResourceApiProvider.java:1324) [245:org.onap.sdnc.northbound.generic-resource-api-provider:1.5.3.SNAPSHOT]

Re: #appc NETCONF/TLS in ODL #appc

francesca.vezzosi@...
 

Hi Patrick,

Thank you for your reply.
I am familiar with the page you found, and it does indeed work fine in SDNC, but it looks like in APPC the ODL commands listed at the bottom of the page are not implemented (I get the error from my first message when I try to use the add-keystore-entry RPC).
The ODL version might be an explanation, but according to this JIRA entry APPC has already been upgraded to Fluorine SR2. Is this not correct? Do you have more information about that?
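(For reference, the RPC is typically invoked over RESTCONF along these lines; port 8181 and the admin credentials are ODL defaults and may differ in APPC, and the payload is abbreviated:)

curl -u admin:admin -H "Content-Type: application/json" \
  -X POST http://localhost:8181/restconf/operations/netconf-keystore:add-keystore-entry \
  -d '{"input":{"key-credential":[{"key-id":"my-client-key","private-key":"<base64 key>","passphrase":""}]}}'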

Regards,
Francesca

Re: [onap-tsc][onap-discuss] Onap Dublin offline platform released

Michal Ptacek
 

Added to the agenda; I will join the PTL call.

 

Thanks Catherine,

 

Michal

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Catherine LEFEVRE
Sent: Thursday, July 11, 2019 9:28 PM
To: onap-tsc@...; onap-discuss@...
Subject: Re: [onap-tsc][onap-discuss] Onap Dublin offline platform released

 

Good evening Michal,

 

This is great information!

 

Would you be available on July 15th, 2019 to present your findings during the next PTL call?

If yes then please feel free to add this topic to the agenda

https://wiki.onap.org/display/DW/PTL+2019-07-15

 

Many thanks and regards

Catherine

 

From: onap-tsc@... [mailto:onap-tsc@...] On Behalf Of Michal Ptacek via Lists.Onap.Org
Sent: Thursday, July 11, 2019 3:07 PM
To: onap-tsc@...; onap-discuss@...
Subject: Re: [onap-tsc][onap-discuss] Onap Dublin offline platform released

 

Thanks Catherine, I have collected a more detailed report about the Dublin images (see attached).

 

A few suggestions regarding that:

 

  • Maybe we can decrease the size of the images by using the recommended base images; there is a nice page from Adolfo about that:

https://wiki.onap.org/display/DW/ONAP+Normative+container+base+images

 

  • We are using different versions of base images even within a single ONAP component (e.g. dcaegen2 - docker.io/openjdk:11-jre-slim, openjdk:8-jre-alpine, openjdk:8-jre, java:8-jre):

https://wiki.onap.org/display/DW/Docker+images+dependency+list

 

  • Different components are using different versions of the same image, which could be unified

(e.g. oomk8s_readiness-check_2.0.0.tar, oomk8s_readiness-check_2.0.1.tar, oomk8s_readiness-check_2.0.2.tar)

 

I think those issues were already raised by the CIA team, but I am not sure there is sufficient attention and capacity in the projects to improve that.

 

Best regards,

Michal

 

From: onap-tsc@... <onap-tsc@...> On Behalf Of Catherine LEFEVRE
Sent: Wednesday, July 10, 2019 8:41 PM
To: onap-tsc@...; onap-discuss@...; m.ptacek@...
Subject: Re: [onap-tsc][onap-discuss] Onap Dublin offline platform released

 

Thank you Michal for your feedback. Great job !

 

I have asked the ONAP APPC Team to investigate:

2,9  G nexus3.onap.org_10001_onap_appc-image_1.5.3.tar

 

Best regards

Catherine

 

From: onap-tsc@... [mailto:onap-tsc@...] On Behalf Of VENKATESH KUMAR, VIJAY
Sent: Wednesday, July 10, 2019 6:19 PM
To: onap-tsc@...; onap-discuss@...; m.ptacek@...
Subject: Re: [onap-tsc][onap-discuss] Onap Dublin offline platform released

 


Hello Michal, just to clarify: both dcaegen2* images are analytics modules that depend on external distributions.

 

org.onap.dcaegen2.deployments.tca-cdap-container uses caskdata/cdap-standalone:4.1.2, which is ~2GB.

org.onap.dcaegen2.deployments.pnda-mirror-container sources several upstream PNDA dependencies. The image size was indeed causing issues incorporating it into the ONAP CI/CD flow, hence it is maintained in the external PNDA repo. The Cisco team was looking to optimize this build; it also has a dependency on the upstream PNDA project.

 

Regards,

Vijay

From: onap-tsc@... <onap-tsc@...> On Behalf Of Michal Ptacek via Lists.Onap.Org
Sent: Wednesday, July 10, 2019 11:19 AM
To: onap-discuss@...; m.ptacek@...; onap-tsc@...
Subject: Re: [onap-tsc][onap-discuss] Onap Dublin offline platform released

 

I am sorry, I need to correct the winners of the biggest-images-in-Dublin contest …

 

Biggest 3 images in Dublin:

11,0 G pndareg.ctao6.net_onap_org.onap.dcaegen2.deployments.pnda-mirror-container_5.0.0.tar

2,9  G nexus3.onap.org_10001_onap_appc-image_1.5.3.tar

2,6  G nexus3.onap.org_10001_onap_org.onap.dcaegen2.deployments.tca-cdap-container_1.1.2.tar
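(A list like this can be generated from the directory of saved image tars with GNU du and sort, for example:)

du -h *.tar | sort -rh | head -3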

 

Br,

M.

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Michal Ptacek via Lists.Onap.Org
Sent: Wednesday, July 10, 2019 3:58 PM
To: onap-tsc@...; onap-discuss@...
Subject: [onap-tsc][onap-discuss] Onap Dublin offline platform released

 

Hi All,

 

we have reached an important milestone in offline deployments and created a Dublin branch in the oom/offline-installer repo, separating it from the new El-Alto efforts continuing in master.

 

A couple of highlights:

- improvements in Dublin ONAP related to being able to work in an offline lab (special thanks to Policy for the offline-friendly new Policy Framework)

- improvements in automatic datalist collection (automatic parsing of OOM to get the list of images for download)

- a redesigned download procedure (faster & more reliable, now in python3)

- vFWCL fully automated (thanks to the AT&T guys)

 

All documents related to the offline installer have been updated:

https://docs.onap.org/en/dublin/submodules/oom/offline-installer.git/docs/index.html

As of now we are still working on CentOS 7.6 validation and on adding a CI pipeline in the Orange lab.

 

If we compare the Casablanca and Dublin offline platforms, we have more images, which in total are slightly smaller in tar format but much bigger in blob format (maybe we have more binary-like content in the images, which does not shrink much when converted to blobs):

 

 

                                        Casablanca   Dublin
Nr. of ONAP images                      194          229
ONAP images size in tar format          110G         108G
ONAP images size in nexus blobs format  72G          97G

 

Biggest 3 images in Casablanca:

2,8 G nexus3.onap.org_10001_onap_appc-image_1.4.4.tar

2,6 G nexus3.onap.org_10001_onap_org.onap.dcaegen2.deployments.tca-cdap-container_1.1.0.tar

2,3 G nexus3.onap.org_10001_onap_org.onap.dcaegen2.deployments.cm-container_1.4.2.tar

 

Biggest 3 images in Dublin:

11,0 G nexus3.onap.org_10001_onap_org.onap.dcaegen2.deployments.tca-cdap-container_1.1.2.tar

2,9  G nexus3.onap.org_10001_onap_appc-image_1.5.3.tar

2,6  G pndareg.ctao6.net_onap_org.onap.dcaegen2.deployments.pnda-mirror-container_5.0.0.tar

 

Thanks to all involved,

Dublin is a big leap forward for ONAP working more easily offline.

Let's see if we can finish that effort in El-Alto

 

On behalf of the Samsung team,

Michal

APPC Cloud properties Enhancement #configproperties #appc #casablanca

Paulo Duarte
 

Hi All,

I would like to edit the Cloud provider setup in appc.properties. I tried to modify the appc.properties file found in the /opt/onap/appc/data/properties directory of the appc container, but this file seems to be read-only.
Is there any way to configure the Cloud provider without a new deployment?

Note: Using Casablanca Release
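(One avenue to check, assuming the OOM deployment mounts appc.properties from a ConfigMap: edit the ConfigMap rather than the file inside the container, then restart the pod. The names below are placeholders:)

kubectl -n onap get configmaps | grep appc
kubectl -n onap edit configmap <appc-properties-configmap>   # placeholder name
kubectl -n onap delete pod <appc-pod>                        # pod restarts with the new config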

Can someone help me?
Thanks,
Paulo

[AAI] Problem accessing AAI GUI from ONAP Portal

Thiriloshini.ThoppeKrishnakumar@us.fujitsu.com
 

Hi Jimmy/ Keong/ Harish,

 

I have an ONAP Dublin cluster which was deployed using OOM RKE Kubernetes.

I am able to access the ONAP Portal, but I am unable to access the AAI GUI. Other GUI applications like SDC and SO-Monitoring work fine.

 

### All AAI pods are up and running.

 

### Please find the AAI Sparky service below

 

aai-sparky-be                   NodePort       10.43.107.114   <none>                  8000:30220/TCP                                                2d19h   app=aai-sparky-be,release=dev-aai

 

### I accessed the aai-sparky-be container and it was listening on port 8000.

 

root@dev-aai-aai-sparky-be-6495c4f754-cbh76:/# netstat -tunlp

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name

tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      40/java

root@dev-aai-aai-sparky-be-6495c4f754-cbh76:/#
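(A quick external check of the NodePort, using the same worker IP as in the hosts entries below, would be something like:)

curl -vk https://10.1.8.103:30220/services/aai/webapp/index.html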

 

### Added the below contents in /etc/hosts

 

10.1.8.103 portal.api.simpledemo.onap.org

10.1.8.103 vid.api.simpledemo.onap.org

10.1.8.103 sdc.api.fe.simpledemo.onap.org

10.1.8.103 sdc.fe.simpledemo.onap.org

10.1.8.103 portal-sdk.simpledemo.onap.org

10.1.8.103 policy.api.simpledemo.onap.org

10.1.8.103 aai.api.sparky.simpledemo.onap.org

10.1.8.103 cli.api.simpledemo.onap.org

10.1.8.103 msb.api.discovery.simpledemo.onap.org

10.1.8.103 portal-app.onap

 

 

### When I try to open the AAI GUI from the ONAP portal, it says “temporarily down or may have moved to a new address”. Other GUIs like SDC and SO-Monitoring work fine.

 

 

### I also tried to open the AAI GUI in a new tab using the URL http://aai.api.sparky.simpledemo.onap.org:30220/services/aai/webapp/index.html and I get the below error. When I try using https, it redirects me to the ONAP portal page.

[error screenshot not preserved]

I have used the same method in ONAP Casablanca and was able to access the AAI GUI.

Can you please let me know what I am missing, or if something has changed in Dublin?

Please let me know if you need more information.

Any pointer is much appreciated.

 

 

Thanks,

Thiriloshini

Re: #appc NETCONF/TLS in ODL #appc

Patrick <patrick.brady@...>
 

I found this page which describes how to add the certificates for netconf TLS: https://onap.readthedocs.io/en/latest/submodules/sdnc/oam.git/docs/cert_installation.html

Also, I have been informed that netconf TLS support may not have been added until the SR2 version of OpenDaylight Fluorine. APPC is currently running the SR1 version, so this could be causing the problem.

Re: [onap-tsc][onap-discuss] Onap Dublin offline platform released

Catherine LEFEVRE
 

Good evening Michal,

 

This is great information!

 

Would you be available on July 15th, 2019 to present your findings during the next PTL call?

If yes then please feel free to add this topic to the agenda

https://wiki.onap.org/display/DW/PTL+2019-07-15

 

Many thanks and regards

Catherine

 

From: onap-tsc@... [mailto:onap-tsc@...] On Behalf Of Michal Ptacek via Lists.Onap.Org
Sent: Thursday, July 11, 2019 3:07 PM
To: onap-tsc@...; onap-discuss@...
Subject: Re: [onap-tsc][onap-discuss] Onap Dublin offline platform released

 

Thanks Catherine, I have collected a more detailed report about the Dublin images (see attached).

 

A few suggestions regarding that:

 

  • Maybe we can decrease the size of the images by using the recommended base images; there is a nice page from Adolfo about that:

https://wiki.onap.org/display/DW/ONAP+Normative+container+base+images

 

  • We are using different versions of base images even within a single ONAP component (e.g. dcaegen2 - docker.io/openjdk:11-jre-slim, openjdk:8-jre-alpine, openjdk:8-jre, java:8-jre):

https://wiki.onap.org/display/DW/Docker+images+dependency+list

 

  • Different components are using different versions of the same image, which could be unified

(e.g. oomk8s_readiness-check_2.0.0.tar, oomk8s_readiness-check_2.0.1.tar, oomk8s_readiness-check_2.0.2.tar)

 

I think those issues were already raised by the CIA team, but I am not sure there is sufficient attention and capacity in the projects to improve that.

 

Best regards,

Michal

 

From: onap-tsc@... <onap-tsc@...> On Behalf Of Catherine LEFEVRE
Sent: Wednesday, July 10, 2019 8:41 PM
To: onap-tsc@...; onap-discuss@...; m.ptacek@...
Subject: Re: [onap-tsc][onap-discuss] Onap Dublin offline platform released

 

Thank you Michal for your feedback. Great job !

 

I have asked the ONAP APPC Team to investigate:

2,9  G nexus3.onap.org_10001_onap_appc-image_1.5.3.tar

 

Best regards

Catherine

 

From: onap-tsc@... [mailto:onap-tsc@...] On Behalf Of VENKATESH KUMAR, VIJAY
Sent: Wednesday, July 10, 2019 6:19 PM
To: onap-tsc@...; onap-discuss@...; m.ptacek@...
Subject: Re: [onap-tsc][onap-discuss] Onap Dublin offline platform released

 


Hello Michal, just to clarify: both dcaegen2* images are analytics modules that depend on external distributions.

 

org.onap.dcaegen2.deployments.tca-cdap-container uses caskdata/cdap-standalone:4.1.2, which is ~2GB.

org.onap.dcaegen2.deployments.pnda-mirror-container sources several upstream PNDA dependencies. The image size was indeed causing issues incorporating it into the ONAP CI/CD flow, hence it is maintained in the external PNDA repo. The Cisco team was looking to optimize this build; it also has a dependency on the upstream PNDA project.

 

Regards,

Vijay

From: onap-tsc@... <onap-tsc@...> On Behalf Of Michal Ptacek via Lists.Onap.Org
Sent: Wednesday, July 10, 2019 11:19 AM
To: onap-discuss@...; m.ptacek@...; onap-tsc@...
Subject: Re: [onap-tsc][onap-discuss] Onap Dublin offline platform released

 

I am sorry, I need to correct the winners of the biggest-images-in-Dublin contest …

 

Biggest 3 images in Dublin:

11,0 G pndareg.ctao6.net_onap_org.onap.dcaegen2.deployments.pnda-mirror-container_5.0.0.tar

2,9  G nexus3.onap.org_10001_onap_appc-image_1.5.3.tar

2,6  G nexus3.onap.org_10001_onap_org.onap.dcaegen2.deployments.tca-cdap-container_1.1.2.tar

 

Br,

M.

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Michal Ptacek via Lists.Onap.Org
Sent: Wednesday, July 10, 2019 3:58 PM
To: onap-tsc@...; onap-discuss@...
Subject: [onap-tsc][onap-discuss] Onap Dublin offline platform released

 

Hi All,

 

we have reached an important milestone in offline deployments and created a Dublin branch in the oom/offline-installer repo, separating it from the new El-Alto efforts continuing in master.

 

A couple of highlights:

- improvements in Dublin ONAP related to being able to work in an offline lab (special thanks to Policy for the offline-friendly new Policy Framework)

- improvements in automatic datalist collection (automatic parsing of OOM to get the list of images for download)

- a redesigned download procedure (faster & more reliable, now in python3)

- vFWCL fully automated (thanks to the AT&T guys)

 

All documents related to the offline installer have been updated:

https://docs.onap.org/en/dublin/submodules/oom/offline-installer.git/docs/index.html

As of now we are still working on CentOS 7.6 validation and on adding a CI pipeline in the Orange lab.

 

If we compare the Casablanca and Dublin offline platforms, we have more images, which in total are slightly smaller in tar format but much bigger in blob format (maybe we have more binary-like content in the images, which does not shrink much when converted to blobs):

 

 

                                        Casablanca   Dublin
Nr. of ONAP images                      194          229
ONAP images size in tar format          110G         108G
ONAP images size in nexus blobs format  72G          97G

 

Biggest 3 images in Casablanca:

2,8 G nexus3.onap.org_10001_onap_appc-image_1.4.4.tar

2,6 G nexus3.onap.org_10001_onap_org.onap.dcaegen2.deployments.tca-cdap-container_1.1.0.tar

2,3 G nexus3.onap.org_10001_onap_org.onap.dcaegen2.deployments.cm-container_1.4.2.tar

 

Biggest 3 images in Dublin:

11,0 G nexus3.onap.org_10001_onap_org.onap.dcaegen2.deployments.tca-cdap-container_1.1.2.tar

2,9  G nexus3.onap.org_10001_onap_appc-image_1.5.3.tar

2,6  G pndareg.ctao6.net_onap_org.onap.dcaegen2.deployments.pnda-mirror-container_5.0.0.tar

 

Thanks to all involved,

Dublin is a big leap forward for ONAP working more easily offline.

Let's see if we can finish that effort in El-Alto

 

On behalf of the Samsung team,

Michal

Re: #appc NETCONF/TLS in ODL #appc

Patrick <patrick.brady@...>
 

I believe OpenDaylight Fluorine, which is used by the Dublin version of APPC, should have support for netconf over TLS. The netconf connector is a function of OpenDaylight itself, so the steps to use netconf over TLS should be the same as in any OpenDaylight deployment. Unfortunately, I have not had a chance to set up a TLS connection in OpenDaylight myself, so I'm not sure what the exact steps will look like.

Re: [MultiCloud] : about service/VNF instantiation on a K8S "cloud"

Srini
 

There is a readthedocs page too.

 

Srini

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Rene Robert via Lists.Onap.Org
Sent: Thursday, July 11, 2019 5:35 AM
To: onap-discuss@...; bin.yang@...
Subject: [onap-discuss][MultiCloud] : about service/VNF instantiation on a K8S "cloud"

 

Hello

 

Can you confirm the procedure? Is it working in the Dublin release?

 

https://wiki.onap.org/display/DW/Deploying+vFw+and+EdgeXFoundry+Services+on+Kubernets+Cluster+with+ONAP

 

If so, I will propose a contribution to the ONAP readthedocs.

 

René

 

 


 

René Robert
«Open and Smart solutions for autOmating Network Services»
ORANGE/IMT/OLN/CNC/NARA/OSONS

 

Fixe : +33 2 96 07 39 29
Mobile : +33 6 74 78 68 43
rene.robert@...

 

 


Re: Steps to Remove Deployment of ONAP

Manoj K Nair
 

Thanks Marco and Brian.

 

For the following step:

kubectl delete namespace onap

Error from server (Conflict): Operation cannot be fulfilled on namespaces "onap": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.

 

I also had to do the following to delete the namespace:

kubectl get namespace onap -o json > tmp.json

 

Removed "kubernetes" under the finalizers block in tmp.json.

 

kubectl proxy &

 

curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://localhost:8001/api/v1/namespaces/onap/finalize

 

Without the above steps, the onap namespace was not getting deleted.
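(For reference, the same finalizer removal can be scripted, assuming jq is available and kubectl proxy is still running:)

kubectl get namespace onap -o json \
  | jq '.spec.finalizers = []' \
  | curl -k -H "Content-Type: application/json" -X PUT --data-binary @- http://localhost:8001/api/v1/namespaces/onap/finalize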

 

Regards

 

Manoj

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Marco Platania
Sent: Thursday, July 11, 2019 9:04 PM
To: onap-discuss@...; FREEMAN, BRIAN D <bf1936@...>; Manoj K Nair <manoj.k.nair@...>
Subject: Re: [onap-discuss] Steps to Remove Deployment of ONAP

 

Manoj,

 

Two small comments:

1. ./cleanup.sh so (not dev-so; dev is added automatically by the script)

2. This is what the script removes: secrets, configmaps, pvc, pv, services, deployments, statefulsets, clusterrolebinding

 

You can try to add job to that list and see what happens. If it works, it would be a nice thing to have.

 

The complete set of operations to delete a specific component is:

• helm del --purge dev-<component_name>

• rm -rf /dockerdata-nfs/dev-<component_name>/*

• ./cleanup.sh <component_name>

 

The reason we created the cleanup.sh script is that helm del --purge doesn’t always remove everything.

 

Thanks,

Marco

 

From: <onap-discuss@...> on behalf of BRIAN FREEMAN <bf1936@...>
Reply-To: "onap-discuss@..." <onap-discuss@...>, BRIAN FREEMAN <bf1936@...>
Date: Thursday, July 11, 2019 at 11:20 AM
To: "'onap-discuss@...'" <onap-discuss@...>, "'manoj.k.nair@...'" <manoj.k.nair@...>
Subject: Re: [onap-discuss] Steps to Remove Deployment of ONAP

 


Sorry, that is the script for an individual module.

 

./cleanup.sh dev-so, for example.

 

Brian

 

 

From: FREEMAN, BRIAN D
Sent: Thursday, July 11, 2019 11:19 AM
To: onap-discuss@...; manoj.k.nair@...
Subject: RE: [onap-discuss] Steps to Remove Deployment of ONAP

 

I use this script (you have most of the steps) and then check the jobs:

 

[integration.git] / deployment / heat / onap-rke / scripts / cleanup.sh

 

cd ...

./cleanup.sh

kubectl -n onap get jobs

kubectl -n onap delete job xxxx

 

(cleanup.sh might have been updated to handle jobs, but I’m not sure.)

 

Brian

 

 

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Manoj K Nair
Sent: Thursday, July 11, 2019 11:13 AM
To: onap-discuss@...
Subject: [onap-discuss] Steps to Remove Deployment of ONAP

 

Hi,

 

I would like to know if there are any recommended steps for removing a bad ONAP deployment (not individual projects, but a whole ONAP deployment that failed due to timing issues, sync issues, etc.). I noted that on the page here, the following steps are mentioned:

kubectl delete namespace onap

sudo helm delete --purge onap

kubectl delete pv --all

kubectl delete pvc --all

kubectl delete secrets --all

kubectl delete clusterrolebinding --all

sudo rm -rf /dockerdata-nfs/onap-<pod>

 

This used to work in Casablanca, but in Dublin it seems some additional steps are required, like patching the finalizers for the namespace/pv/pvc etc. I would appreciate it if someone could suggest the recommended steps to undeploy/redeploy ONAP in case of a failed installation attempt.

 

Regards

 

Manoj

 



Re: Steps to Remove Deployment of ONAP

Marco Platania
 

Manoj,

 

Two small comments:

1. ./cleanup.sh so (not dev-so; dev is added automatically by the script)

2. This is what the script removes: secrets, configmaps, pvc, pv, services, deployments, statefulsets, clusterrolebinding

 

You can try to add job to that list and see what happens. If it works, it would be a nice thing to have.

 

The complete set of operations to delete a specific component is:

• helm del --purge dev-<component_name>

• rm -rf /dockerdata-nfs/dev-<component_name>/*

• ./cleanup.sh <component_name>

 

The reason we created the cleanup.sh script is that helm del --purge doesn’t always remove everything.

 

Thanks,

Marco

 

From: <onap-discuss@...> on behalf of BRIAN FREEMAN <bf1936@...>
Reply-To: "onap-discuss@..." <onap-discuss@...>, BRIAN FREEMAN <bf1936@...>
Date: Thursday, July 11, 2019 at 11:20 AM
To: "'onap-discuss@...'" <onap-discuss@...>, "'manoj.k.nair@...'" <manoj.k.nair@...>
Subject: Re: [onap-discuss] Steps to Remove Deployment of ONAP

 



Sorry, that is the script for an individual module.

 

./cleanup.sh dev-so, for example.

 

Brian

 

 

From: FREEMAN, BRIAN D
Sent: Thursday, July 11, 2019 11:19 AM
To: onap-discuss@...; manoj.k.nair@...
Subject: RE: [onap-discuss] Steps to Remove Deployment of ONAP

 

I use this script (you have most of the steps) and then check the jobs:

 

[integration.git] / deployment / heat / onap-rke / scripts / cleanup.sh

 

cd ...

./cleanup.sh

kubectl -n onap get jobs

kubectl -n onap delete job xxxx

 

(cleanup.sh might have been updated to handle jobs, but I’m not sure.)

 

Brian

 

 

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Manoj K Nair
Sent: Thursday, July 11, 2019 11:13 AM
To: onap-discuss@...
Subject: [onap-discuss] Steps to Remove Deployment of ONAP

 

Hi,

 

I would like to know if there are any recommended steps for removing a bad ONAP deployment (not individual projects, but a whole ONAP deployment that failed due to timing issues, sync issues, etc.). I noted that on the page here, the following steps are mentioned:

kubectl delete namespace onap

sudo helm delete --purge onap

kubectl delete pv --all

kubectl delete pvc --all

kubectl delete secrets --all

kubectl delete clusterrolebinding --all

sudo rm -rf /dockerdata-nfs/onap-<pod>

 

This used to work in Casablanca, but in Dublin it seems some additional steps are required, like patching the finalizers for the namespace/pv/pvc etc. I would appreciate it if someone could suggest the recommended steps to undeploy/redeploy ONAP in case of a failed installation attempt.

 

Regards

 

Manoj

 



Re: Steps to Remove Deployment of ONAP

Brian Freeman
 

Sorry, that is the script for an individual module.

 

./cleanup.sh dev-so, for example.

 

Brian

 

 

From: FREEMAN, BRIAN D
Sent: Thursday, July 11, 2019 11:19 AM
To: onap-discuss@...; manoj.k.nair@...
Subject: RE: [onap-discuss] Steps to Remove Deployment of ONAP

 

I use this script (you have most of the steps) and then check the jobs:

 

[integration.git] / deployment / heat / onap-rke / scripts / cleanup.sh

 

cd ...

./cleanup.sh

kubectl -n onap get jobs

kubectl -n onap delete job xxxx

 

(cleanup.sh might have been updated to handle jobs, but I’m not sure.)

 

Brian

 

 

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Manoj K Nair
Sent: Thursday, July 11, 2019 11:13 AM
To: onap-discuss@...
Subject: [onap-discuss] Steps to Remove Deployment of ONAP

 

Hi,

 

I would like to know if there are any recommended steps for removing a bad ONAP deployment (not individual projects, but a whole ONAP deployment that failed due to timing issues, sync issues, etc.). I noted that on the page here, the following steps are mentioned:

kubectl delete namespace onap

sudo helm delete --purge onap

kubectl delete pv --all

kubectl delete pvc --all

kubectl delete secrets --all

kubectl delete clusterrolebinding --all

sudo rm -rf /dockerdata-nfs/onap-<pod>

 

This used to work in Casablanca, but in Dublin it seems some additional steps are required, like patching the finalizers for the namespace/pv/pvc etc. I would appreciate it if someone could suggest the recommended steps to undeploy/redeploy ONAP in case of a failed installation attempt.

 

Regards

 

Manoj

 



Re: Steps to Remove Deployment of ONAP

Brian Freeman
 

I use this script (you have most of the steps) and then check the jobs:

 

[integration.git] / deployment / heat / onap-rke / scripts / cleanup.sh

 

cd ...

./cleanup.sh

kubectl -n onap get jobs

kubectl -n onap delete job xxxx

 

(cleanup.sh might have been updated to handle jobs, but I’m not sure.)
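(If it has not, leftover jobs can be cleared in one pass with something like the following sketch:)

kubectl -n onap get jobs -o name | xargs -r kubectl -n onap delete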

 

Brian

 

 

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Manoj K Nair
Sent: Thursday, July 11, 2019 11:13 AM
To: onap-discuss@...
Subject: [onap-discuss] Steps to Remove Deployment of ONAP

 

Hi,

 

I would like to know if there are any recommended steps for removing a bad ONAP deployment (not individual projects, but a whole ONAP deployment that failed due to timing issues, sync issues, etc.). I noted that on the page here, the following steps are mentioned:

kubectl delete namespace onap

sudo helm delete --purge onap

kubectl delete pv --all

kubectl delete pvc --all

kubectl delete secrets --all

kubectl delete clusterrolebinding --all

sudo rm -rf /dockerdata-nfs/onap-<pod>

 

This used to work in Casablanca, but in Dublin it seems some additional steps are required, like patching the finalizers for the namespace/pv/pvc etc. I would appreciate it if someone could suggest the recommended steps to undeploy/redeploy ONAP in case of a failed installation attempt.

 

Regards

 

Manoj

 



Steps to Remove Deployment of ONAP

Manoj K Nair
 

Hi,

 

I would like to know if there are any recommended steps for removing a bad ONAP deployment (not individual projects, but a whole ONAP deployment that failed due to timing issues, sync issues, etc.). I noted that on the page here, the following steps are mentioned:

kubectl delete namespace onap

sudo helm delete --purge onap

kubectl delete pv --all

kubectl delete pvc --all

kubectl delete secrets --all

kubectl delete clusterrolebinding --all

sudo rm -rf /dockerdata-nfs/onap-<pod>

 

This used to work in Casablanca, but in Dublin it seems some additional steps are required, like patching the finalizers for the namespace/pv/pvc etc. I would appreciate it if someone could suggest the recommended steps to undeploy/redeploy ONAP in case of a failed installation attempt.
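(For the namespace/pv/pvc finalizers mentioned above, the patching is typically along these lines; the resource names are placeholders:)

kubectl -n onap patch pvc <pvc-name> -p '{"metadata":{"finalizers":null}}' --type=merge
kubectl patch pv <pv-name> -p '{"metadata":{"finalizers":null}}' --type=merge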

 

Regards

 

Manoj

 



IMPORTANT: Deleting old shutoff VMs in Wind River

Gary Wu
 

Hi all,

The following VMs in Wind River have been shut off since June 17th (3+ weeks). Unfortunately, shut-off VMs still reserve their vCPU and RAM allocations, so we would like to delete VMs that are no longer needed in order to reorganize the lab capacity for El Alto.

Please check the following list and see if you have any data that you need to back up from these VMs. We would like to proceed with their deletion by Monday (July 15th) morning, Pacific time.

Thanks,
Gary
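(For anyone backing up and then removing their own VMs, the corresponding OpenStack CLI calls are roughly as follows, assuming the standard openstack client:)

openstack server list --status SHUTOFF -f value -c ID -c Name
openstack server delete <server-id>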


A & AI
+--------------------------------------+---------------+---------+-----------------------------------------+--------------------------+-----------+
| ID                                   | Name          | Status  | Networks                                | Image                    | Flavor    |
+--------------------------------------+---------------+---------+-----------------------------------------+--------------------------+-----------+
| c33db10d-cd3b-4776-a5df-dd267196e2cd | sb4-k8s       | SHUTOFF | oam_network_bxun=10.0.0.26, 10.12.5.26  | ubuntu-16-04-cloud-amd64 | m2.xlarge |
| f74a6d67-29f3-4400-aaae-a1a9da4c9aaa | sb4-rancher   | SHUTOFF | oam_network_bxun=10.0.0.19, 10.12.5.51  | ubuntu-16-04-cloud-amd64 | m1.xlarge |
| e1031cd8-8a5c-4723-a0db-c57fbc69bfea | Valet-SK      | SHUTOFF | oam_network_bxun=10.0.0.27, 10.12.6.154 | ubuntu-16-04-cloud-amd64 | m3.xlarge |
| ef814b62-4f8e-48a9-b5f9-ec54dc7b9116 | valet-install | SHUTOFF | oam_network_bxun=10.0.0.22, 10.12.5.157 | ubuntu-16-04-cloud-amd64 | m1.xlarge |
| 626445c7-a833-4399-82eb-19f6ab9d0b4a | valet         | SHUTOFF | oam_network_bxun=10.0.0.7, 10.12.5.41   | ubuntu-16-04-cloud-amd64 | m1.xlarge |
+--------------------------------------+---------------+---------+-----------------------------------------+--------------------------+-----------+

AAF
+--------------------------------------+-----------------+---------+----------------------+--------------------------+-----------+
| ID                                   | Name            | Status  | Networks             | Image                    | Flavor    |
+--------------------------------------+-----------------+---------+----------------------+--------------------------+-----------+
| 95e628af-bf51-4f42-ab6f-b64cf9dc938d | Sai-OOM-K8      | SHUTOFF | external=10.12.6.59  |                          | m2.xlarge |
| d54e5131-d017-456e-94fd-2637db1b5232 | aaf-oom         | SHUTOFF | external=10.12.6.176 |                          | m1.xlarge |
| e74c361e-d2d7-493d-b4e8-047ef848e54d | Sai-OOM-Rancher | SHUTOFF | external=10.12.6.175 |                          | m3.xlarge |
| 9311b5be-7170-4b07-898e-caa967d8006a | mr-k8s2-2       | SHUTOFF | external=10.12.6.98  |                          | m2.xlarge |
| 1cc0486c-5a1b-44af-b79f-052d557d117a | mr-k8s2-1       | SHUTOFF | external=10.12.5.130 |                          | m2.xlarge |
| a51e9706-da8f-46f7-aae5-dbee61e24517 | mr-k8s          | SHUTOFF | external=10.12.7.22  | ubuntu-16-04-cloud-amd64 | m3.xlarge |
| d42ecbe3-9dff-443f-89ff-73c83f06780f | mr_rauncher     | SHUTOFF | external=10.12.7.5   |                          | m3.xlarge |
| 8bc0ea2b-83e9-4ce1-824a-9f2944145f12 | MR TEST         | SHUTOFF | external=10.12.5.108 |                          | m3.xlarge |
+--------------------------------------+-----------------+---------+----------------------+--------------------------+-----------+

APPC
+--------------------------------------+--------------------+---------+----------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------+-----------+
| ID                                   | Name               | Status  | Networks                                                                                                                         | Image                                                 | Flavor    |
+--------------------------------------+--------------------+---------+----------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------+-----------+
| c5d2ca68-6c61-44f7-93df-1d05da787fe4 | appcpgn01          | SHUTOFF | external=10.12.6.104; vFW_appcfwl01_unprotected=192.168.30.200; oam_onap_LH2Z=10.0.100.12                                        | ubuntu-14-04-cloud-amd64                              | m1.medium |
| 5bfc1e4b-be1e-4b2c-93a4-d4d7cfd99760 | appcsnk01          | SHUTOFF | external=10.12.6.129; vFW_appcfwl01_protected=192.168.40.250; oam_onap_LH2Z=10.0.100.13                                          | ubuntu-14-04-cloud-amd64                              | m1.medium |
| 299cf684-b820-4698-8ae9-6c0735403ff7 | appcfwl01          | SHUTOFF | external=10.12.6.65; vFW_appcfwl01_unprotected=192.168.30.100; vFW_appcfwl01_protected=192.168.40.100; oam_onap_LH2Z=10.0.100.11 | ubuntu-14-04-cloud-amd64                              | m1.medium |
| 8e190611-2d67-40b0-8882-df449013a46c | k8s-appc2          | SHUTOFF | external=10.12.5.193; appc-multicloud-integration=10.10.5.22; oam_onap_LH2Z=10.0.0.16                                            | k8s-appc2                                             | m1.xlarge |
| fbc64180-c0e0-4e8f-8cc4-597dd5c26542 | k8s-appc1          | SHUTOFF | external=10.12.5.174; appc-multicloud-integration=10.10.5.17; oam_onap_LH2Z=10.0.0.11                                            | k8s-appc1                                             | m1.xlarge |
| 53d97353-cfec-47f2-b23e-b6196b58bf7e | onap-aai-inst1     | SHUTOFF | external=10.12.5.114; appc-multicloud-integration=10.10.5.15; oam_onap_LH2Z=10.0.1.1                                             | onap-aai-inst1                                        | m1.xlarge |
| a79f389a-40ca-4ad4-a86b-c6e3623827f7 | onap-dns-server    | SHUTOFF | external=10.12.5.59; appc-multicloud-integration=10.10.5.16; oam_onap_LH2Z=10.0.100.1                                            | onap-dns                                              | m1.small  |
| 89bc85d1-a8c2-4427-ac5c-f1efea865bac | onap-appc          | SHUTOFF | external=10.12.5.43; appc-multicloud-integration=10.10.5.10; oam_onap_LH2Z=10.0.2.1                                              | onap-appc                                             | m1.large  |
+--------------------------------------+--------------------+---------+----------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------+-----------+

CC-SDK
+--------------------------------------+----------------------+---------+------------------------------------+--------------------------+--------------------+
| ID                                   | Name                 | Status  | Networks                           | Image                    | Flavor             |
+--------------------------------------+----------------------+---------+------------------------------------+--------------------------+--------------------+
| ba3b8b38-be02-438b-b6fe-0c4f71e28b8b | ra2-k8s-1            | SHUTOFF | cc-sdk-mgmt=10.1.2.12, 10.12.5.197 | ubuntu-16-04-cloud-amd64 | m2.xlarge          |
+--------------------------------------+----------------------+---------+------------------------------------+--------------------------+--------------------+

DCAE
+--------------------------------------+--------------------------+---------+-----------------------------------------+--------------------------+-----------+
| ID                                   | Name                     | Status  | Networks                                | Image                    | Flavor    |
+--------------------------------------+--------------------------+---------+-----------------------------------------+--------------------------+-----------+
| a0bf28f3-34f0-4f9f-a147-ff1758140976 | k8s-rancher              | SHUTOFF | oam_network_M0A7=10.0.0.20, 10.12.5.56  | ubuntu-16-04-cloud-amd64 | m1.large  |
| 31a52315-47b9-4eed-93c6-69d7ed409827 | R3DCAEMIN-dcae           | SHUTOFF | oam_onap_AitD=10.0.4.1, 10.12.6.44      | ubuntu-16-04-cloud-amd64 | m1.xlarge |
| cf1dedaf-8f6c-400d-9044-7416bdf4a914 | R3DCAEMIN-message-router | SHUTOFF | oam_onap_AitD=10.0.11.1, 10.12.5.249    | ubuntu-16-04-cloud-amd64 | m1.large  |
| 7f794a05-ac03-4fd5-8f42-74300626e456 | R3DCAEMIN-dns-server     | SHUTOFF | oam_onap_AitD=10.0.100.1, 10.12.5.177   | ubuntu-16-04-cloud-amd64 | m1.small  |
+--------------------------------------+--------------------------+---------+-----------------------------------------+--------------------------+-----------+

DMaaP
+--------------------------------------+-------------+---------+----------------------+--------------------------+-----------+
| ID                                   | Name        | Status  | Networks             | Image                    | Flavor    |
+--------------------------------------+-------------+---------+----------------------+--------------------------+-----------+
| 0162bdba-4d55-43fa-a18d-9b62275c04c8 | mms-k8s-3   | SHUTOFF | external=10.12.6.5   | ubuntu-16-04-cloud-amd64 | m2.xlarge |
| 671fb549-856d-4c96-afac-375d009d82f7 | mms-k8s-2   | SHUTOFF | external=10.12.6.24  | ubuntu-16-04-cloud-amd64 | m2.xlarge |
| 3c3009e2-85d9-42aa-943a-d3d6cda278d4 | mms-k8s-1   | SHUTOFF | external=10.12.5.216 | ubuntu-16-04-cloud-amd64 | m2.xlarge |
| 6e0daed0-b43c-484e-bf81-d42904591aa9 | mms-rancher | SHUTOFF | external=10.12.6.26  | ubuntu-16-04-cloud-amd64 | m2.xlarge |
| c3074457-751e-44e8-9643-4e5026328bda | dgl-csit    | SHUTOFF | external=10.12.6.189 |                          | m1.xlarge |
| f66ed110-5bac-41ac-8830-7a7c8a2c2800 | dgl-k8s-3   | SHUTOFF | external=10.12.5.248 | ubuntu-14-04-cloud-amd64 | m3.xlarge |
| eedeb812-4f3f-4a70-a977-c3b7c58a9983 | dgl-k8s-2   | SHUTOFF | external=10.12.6.198 | ubuntu-14-04-cloud-amd64 | m3.xlarge |
| 584556c4-d8be-4334-bd94-c0e95f87873f | dgl-k8s-1   | SHUTOFF | external=10.12.5.152 | ubuntu-14-04-cloud-amd64 | m3.xlarge |
+--------------------------------------+-------------+---------+----------------------+--------------------------+-----------+

Holmes
+--------------------------------------+--------------------+---------+---------------------+-------+-----------+
| ID                                   | Name               | Status  | Networks            | Image | Flavor    |
+--------------------------------------+--------------------+---------+---------------------+-------+-----------+
| 81c7cea9-9823-41bf-9b02-3ec3f81c7572 | holmes-integration | SHUTOFF | external=10.12.6.22 |       | m1.xlarge |
+--------------------------------------+--------------------+---------+---------------------+-------+-----------+

Logging
+--------------------------------------+----------------------------+---------+--------------------------------------+--------------------------+-----------+
| ID                                   | Name                       | Status  | Networks                             | Image                    | Flavor    |
+--------------------------------------+----------------------------+---------+--------------------------------------+--------------------------+-----------+
| 4d50b77e-1170-4194-9f5f-6645ddf4a34b | onap-oom-obrien-rancher-e0 | SHUTOFF | oam_onap_IgYU=10.0.16.1, 10.12.6.125 | ubuntu-16-04-cloud-amd64 | m1.xlarge |
| 15bf0a96-7bb3-4c5a-8d09-5f1074b8371f | onap-oom-obrien-rancher-e1 | SHUTOFF | oam_onap_IgYU=10.0.0.7, 10.12.5.162  | ubuntu-16-04-cloud-amd64 | m2.xlarge |
| 93b204ce-aafd-4b90-96d3-40ccb2078485 | onap-oom-obrien-rancher-e4 | SHUTOFF | oam_onap_IgYU=10.0.0.6, 10.12.5.4    | ubuntu-16-04-cloud-amd64 | m2.xlarge |
| 4bc27c18-8d06-4321-bcc1-08887ec10f8f | onap-oom-obrien-rancher-e3 | SHUTOFF | oam_onap_IgYU=10.0.0.10, 10.12.5.102 | ubuntu-16-04-cloud-amd64 | m2.xlarge |
| a2de801a-f98f-4f98-8557-5f8236f32c70 | onap-oom-obrien-rancher-e2 | SHUTOFF | oam_onap_IgYU=10.0.0.9, 10.12.5.198  | ubuntu-16-04-cloud-amd64 | m2.xlarge |
+--------------------------------------+----------------------------+---------+--------------------------------------+--------------------------+-----------+

Microservices
+--------------------------------------+--------------------+---------+----------------------+-------+-----------+
| ID                                   | Name               | Status  | Networks             | Image | Flavor    |
+--------------------------------------+--------------------+---------+----------------------+-------+-----------+
| 8671b9a9-b312-4544-a6e7-a7c710bfc6f6 | bob-esr-k8s-worker | SHUTOFF | external=10.12.6.19  |       | m1.large  |
| c49152de-c3c0-4c02-92ee-f5a3a277c2b6 | bob-esr-k8s-master | SHUTOFF | external=10.12.5.71  |       | m1.large  |
| 7209b166-ba1c-42a0-b072-47f74b1cfa81 | k8s-mnio1          | SHUTOFF | external=10.12.6.3   |       | m1.large  |
| 4b6e716b-5505-4afe-8e07-b8caab741f77 | k8s-minio          | SHUTOFF | external=10.12.5.200 |       | m2.xlarge |
+--------------------------------------+--------------------+---------+----------------------+-------+-----------+

VIM
+--------------------------------------+-----------------+---------+-----------------------------------------+--------------------------+-----------+
| ID                                   | Name            | Status  | Networks                                | Image                    | Flavor    |
+--------------------------------------+-----------------+---------+-----------------------------------------+--------------------------+-----------+
| 95f7a266-5f76-46aa-b9bd-b3079a7e0884 | mc-test-k8s-09  | SHUTOFF | oam_network_8pFU=10.0.0.26, 10.12.5.241 | ubuntu-18.04             | m1.xlarge |
| 4ebaaf32-4bf5-43a8-932b-96faee6b080f | mc-test-k8s-01  | SHUTOFF | oam_network_8pFU=10.0.0.17, 10.12.5.228 | ubuntu-18.04             | m1.xlarge |
| a8406d4a-34d7-4528-91c2-c37b54ba5cd5 | mc-test-k8s-02  | SHUTOFF | oam_network_8pFU=10.0.0.24, 10.12.6.227 | ubuntu-18.04             | m1.xlarge |
| c20add2e-2676-4b0b-a014-6da8e4ed8441 | mc-test-k8s-11  | SHUTOFF | oam_network_8pFU=10.0.0.7, 10.12.6.25   | ubuntu-18.04             | m1.xlarge |
| f9e71dc0-18c5-4be4-aadf-0200bb2dd925 | mc-test-k8s-10  | SHUTOFF | oam_network_8pFU=10.0.0.13, 10.12.5.234 | ubuntu-18.04             | m1.xlarge |
| 06179ab6-327c-44ac-a4da-40039c2632c2 | mc-test-k8s-08  | SHUTOFF | oam_network_8pFU=10.0.0.15, 10.12.6.200 | ubuntu-18.04             | m1.xlarge |
| a6d0301e-0a43-4e0f-b48f-d7bed20822dd | mc-test-k8s-03  | SHUTOFF | oam_network_8pFU=10.0.0.5, 10.12.5.253  | ubuntu-18.04             | m1.xlarge |
| 4a2d9396-7268-4a6a-bf4d-d782bcacf561 | mc-test-k8s-05  | SHUTOFF | oam_network_8pFU=10.0.0.20, 10.12.6.13  | ubuntu-18.04             | m1.xlarge |
| c50e2199-04c4-4d74-bbe9-215556fe546b | mc-test-k8s-06  | SHUTOFF | oam_network_8pFU=10.0.0.4, 10.12.5.227  | ubuntu-18.04             | m1.xlarge |
| 46b4e712-6cb4-45da-8f76-2d920a05e1cd | mc-test-k8s-07  | SHUTOFF | oam_network_8pFU=10.0.0.12, 10.12.5.176 | ubuntu-18.04             | m1.xlarge |
| 53b31729-4644-4f50-a592-d455ef011563 | mc-test-orch-3  | SHUTOFF | oam_network_8pFU=10.0.0.11, 10.12.5.226 | ubuntu-18.04             | m1.large  |
| e9d7607a-f084-45bd-bb88-a4ff9abe4893 | mc-test-orch-2  | SHUTOFF | oam_network_8pFU=10.0.0.8, 10.12.5.141  | ubuntu-18.04             | m1.large  |
| d468dec1-004e-4c48-a967-99fc841111b4 | mc-test-k8s-04  | SHUTOFF | oam_network_8pFU=10.0.0.9, 10.12.5.212  | ubuntu-18.04             | m1.xlarge |
| b6c1278b-ac1f-4d70-a7ec-2a1c8701ae88 | mc-test-k8s-12  | SHUTOFF | oam_network_8pFU=10.0.0.6, 10.12.5.205  | ubuntu-18.04             | m1.xlarge |
| 4a5fc438-14cd-412f-8cc8-d65c2309b24d | mc-test-orch-1  | SHUTOFF | oam_network_8pFU=10.0.0.3, 10.12.5.144  | ubuntu-18.04             | m1.large  |
| c6bcd2b7-e72f-4dcc-9674-71b9c41f8475 | mc-test-rancher | SHUTOFF | oam_network_8pFU=10.0.0.14, 10.12.5.222 | ubuntu-18.04             | m1.large  |
+--------------------------------------+-----------------+---------+-----------------------------------------+--------------------------+-----------+

PFPP
+--------------------------------------+--------------------+---------+-------------------------------------------------------------------+--------------------------+--------------------+
| ID                                   | Name               | Status  | Networks                                                          | Image                    | Flavor             |
+--------------------------------------+--------------------+---------+-------------------------------------------------------------------+--------------------------+--------------------+
| e56d38fc-1d9b-4848-99f2-83964a785f09 | policy-s3p1-pap-vm | SHUTOFF | policy-s3p1-net1=192.168.200.10, 10.12.6.164                      | ubuntu-16-04-cloud-amd64 | onap.flavor2.large |
| 7616bb5c-e6e1-4955-af25-5d33e55e1ca5 | policy-s3p1-api-vm | SHUTOFF | external=10.12.6.93; policy-s3p1-net1=192.168.200.12, 10.12.7.29  | ubuntu-16-04-cloud-amd64 | m2.xlarge          |
+--------------------------------------+--------------------+---------+-------------------------------------------------------------------+--------------------------+--------------------+

SO
+--------------------------------------+------------------+---------+-----------------------------------------+--------------------------+-----------+
| ID                                   | Name             | Status  | Networks                                | Image                    | Flavor    |
+--------------------------------------+------------------+---------+-----------------------------------------+--------------------------+-----------+
| ebdc6117-1cbf-4150-b47f-029f3b5bd51f | so-rancher-k8s_2 | SHUTOFF | oam_network_jWbW=10.0.0.13, 10.12.5.15  | ubuntu-16-04-cloud-amd64 | m1.xlarge |
| 534451e0-f851-4fcf-9df2-d9300dc9d343 | so-rancher-k8s_1 | SHUTOFF | oam_network_jWbW=10.0.0.5, 10.12.5.128  | ubuntu-16-04-cloud-amd64 | m1.xlarge |
| a4eb40d0-d3f7-4ce4-99ce-027355419c10 | so-rancher       | SHUTOFF | oam_network_jWbW=10.0.0.15, 10.12.5.147 | ubuntu-16-04-cloud-amd64 | m1.xlarge |
+--------------------------------------+------------------+---------+-----------------------------------------+--------------------------+-----------+

OOM
+--------------------------------------+----------------------+---------+--------------------------------------+--------------------------+--------------------+
| ID                                   | Name                 | Status  | Networks                             | Image                    | Flavor             |
+--------------------------------------+----------------------+---------+--------------------------------------+--------------------------+--------------------+
| faa713aa-fbcd-4e7e-bbe3-9384e3519cd7 | borislav-node-8      | SHUTOFF | borislav-net=10.0.17.14              | ubuntu-16-04-cloud-amd64 | m2.xlarge          |
| 9ea05052-1d07-4d27-a115-27f0a4bf787e | borislav-node-7      | SHUTOFF | borislav-net=10.0.17.7               | ubuntu-16-04-cloud-amd64 | m2.xlarge          |
| 2a0b018c-40c3-4cd0-b9be-35a625d83edd | borislav-node-6      | SHUTOFF | borislav-net=10.0.17.10              | ubuntu-16-04-cloud-amd64 | m2.xlarge          |
| ece3d145-9c67-4962-bfcc-2718a11434cd | borislav-node-5      | SHUTOFF | borislav-net=10.0.17.16              | ubuntu-16-04-cloud-amd64 | m2.xlarge          |
| c02e231d-82b3-4b85-b02b-9bf1fd43b7ca | borislav-node-4      | SHUTOFF | borislav-net=10.0.17.17              | ubuntu-16-04-cloud-amd64 | m2.xlarge          |
| bfd217eb-43ee-42d9-b6a4-594bad02c363 | borislav-node-3      | SHUTOFF | borislav-net=10.0.17.12              | ubuntu-16-04-cloud-amd64 | m2.xlarge          |
| 11435594-bc04-4a99-89a4-f582327bed12 | borislav-node-2      | SHUTOFF | borislav-net=10.0.17.13, 10.12.6.133 | ubuntu-16-04-cloud-amd64 | m2.xlarge          |
| d8b5a646-1f05-4afc-8784-b4e2c7aa0682 | borislav-node-1      | SHUTOFF | borislav-net=10.0.17.5               | ubuntu-16-04-cloud-amd64 | m2.xlarge          |
| 08869f46-b7de-4f8b-aab8-51fee78c677e | borislav-rancher     | SHUTOFF | borislav-net=10.0.17.11, 10.12.6.234 | ubuntu-16-04-cloud-amd64 | m1.xlarge          |
| 4e687b0f-f2ff-4591-915d-6f5c33e35b0d | onap-nfs-server      | SHUTOFF | oam_onap_wZ5M=10.0.0.24, 10.12.5.206 | ubuntu-18.04             | onap.flavor2.large |
| 6c63c488-a255-47a2-81db-9c7162960208 | onap-k8s-12          | SHUTOFF | oam_onap_wZ5M=10.0.0.7, 10.12.6.82   | ubuntu-18.04             | m1.xlarge          |
| 9e2bdd02-5cdd-4e97-8ac2-7963f7514e65 | onap-k8s-11          | SHUTOFF | oam_onap_wZ5M=10.0.0.18, 10.12.6.74  | ubuntu-18.04             | m1.xlarge          |
| 19faa366-9dc7-445d-83a8-9f11cca7366a | onap-k8s-10          | SHUTOFF | oam_onap_wZ5M=10.0.0.16, 10.12.5.160 | ubuntu-18.04             | m1.xlarge          |
| 7323aad7-2c4a-46b4-a824-c6c6620815c1 | onap-k8s-9           | SHUTOFF | oam_onap_wZ5M=10.0.0.4, 10.12.6.195  | ubuntu-18.04             | m1.xlarge          |
| fd964232-a632-4e76-853e-64976a61c3fa | onap-k8s-8           | SHUTOFF | oam_onap_wZ5M=10.0.0.10, 10.12.6.111 | ubuntu-18.04             | m1.xlarge          |
| 49cf22f1-a895-4dc9-b812-f1bf618ec0c6 | onap-k8s-7           | SHUTOFF | oam_onap_wZ5M=10.0.0.20, 10.12.5.191 | ubuntu-18.04             | m1.xlarge          |
| e94bcae5-bba4-4bfe-bffe-ffb46fcf0556 | onap-k8s-6           | SHUTOFF | oam_onap_wZ5M=10.0.0.17, 10.12.6.249 | ubuntu-18.04             | m1.xlarge          |
| f8cf069d-95ba-465f-b097-03890bf29c68 | onap-k8s-5           | SHUTOFF | oam_onap_wZ5M=10.0.0.9, 10.12.6.244  | ubuntu-18.04             | m1.xlarge          |
| 7266f301-b582-4497-a385-c18202f8de7c | onap-k8s-4           | SHUTOFF | oam_onap_wZ5M=10.0.0.6, 10.12.5.11   | ubuntu-18.04             | m1.xlarge          |
| 92534dec-5296-4404-baf4-915c6ee1cc9b | onap-k8s-3           | SHUTOFF | oam_onap_wZ5M=10.0.0.5, 10.12.6.126  | ubuntu-18.04             | m1.xlarge          |
| a87c95fb-63f3-4d4f-af81-a34e113917df | onap-k8s-2           | SHUTOFF | oam_onap_wZ5M=10.0.0.26, 10.12.6.238 | ubuntu-18.04             | m1.xlarge          |
| 14625886-bcb3-436f-8539-f3772d9baa8f | onap-k8s-1           | SHUTOFF | oam_onap_wZ5M=10.0.0.14, 10.12.5.165 | ubuntu-18.04             | m1.xlarge          |
| 77504101-0c3e-4414-8840-4cebb67dd36a | onap-control-3       | SHUTOFF | oam_onap_wZ5M=10.0.0.12, 10.12.6.89  | ubuntu-18.04             | onap.flavor2.large |
| 2544d20d-ffb3-4707-b6f5-f6ccf8b6bfd8 | onap-control-2       | SHUTOFF | oam_onap_wZ5M=10.0.0.11, 10.12.6.90  | ubuntu-18.04             | onap.flavor2.large |
| 7768daed-51bc-4c19-974a-2248631d74b6 | onap-control-1       | SHUTOFF | oam_onap_wZ5M=10.0.0.8, 10.12.6.85   | ubuntu-18.04             | onap.flavor2.large |
| 0546af64-bfd1-461c-851b-638cf9be57dd | cloud2-k8s-1         | SHUTOFF | oam_onap_wZ5M=10.0.0.15, 10.12.6.84  | ubuntu-16-04-cloud-amd64 | onap.flavor2.large |
| 8a72b740-cd81-4f8a-9b94-79eff30f95ee | cloud1-k8s-1         | SHUTOFF | oam_onap_wZ5M=10.0.0.13, 10.12.6.81  | ubuntu-16-04-cloud-amd64 | m1.xlarge          |
| 0edd8bae-caa9-4e67-b77f-7998534eb4f7 | borislav-rancher-new | SHUTOFF | borislav-net=10.0.17.15, 10.12.6.75  | ubuntu-16-04-cloud-amd64 | m1.xlarge          |
+--------------------------------------+----------------------+---------+--------------------------------------+--------------------------+--------------------+

SDC
+--------------------------------------+------+---------+---------------------------------------------+-------+-----------+
| ID                                   | Name | Status  | Networks                                    | Image | Flavor    |
+--------------------------------------+------+---------+---------------------------------------------+-------+-----------+
| eab54638-7c2e-4aab-919c-cbf362f8fbb5 | sdc  | SHUTOFF | external=10.12.5.75; oam_onap_5BRZ=10.0.0.7 |       | m1.xlarge |
+--------------------------------------+------+---------+---------------------------------------------+-------+-----------+

vCPE
+--------------------------------------+----------+---------+----------------------+--------------------------+-----------+
| ID                                   | Name     | Status  | Networks             | Image                    | Flavor    |
+--------------------------------------+----------+---------+----------------------+--------------------------+-----------+
| e35bd345-95d9-4bd4-a71a-0ef0fa46a0a5 | centos-3 | SHUTOFF | external=10.12.5.143 | CentOS-7                 | m1.medium |
| b8f24a39-18ff-4e20-8975-02f13b757768 | centos-2 | SHUTOFF | external=10.12.6.39  | CentOS-7                 | m1.medium |
| 99fc3bd9-2fef-409c-b11c-c8fee4f72576 | centos-1 | SHUTOFF | external=10.12.5.68  | CentOS-7                 | m1.medium |
| 520f774c-305a-4b45-945c-f85deb145a07 | k8s-3    | SHUTOFF | external=10.12.6.15  | ubuntu-16-04-cloud-amd64 | m1.large  |
| c691cd5c-81b3-41ff-ae52-abf84ec28f22 | k8s-2    | SHUTOFF | external=10.12.6.34  | ubuntu-16-04-cloud-amd64 | m1.large  |
| e7e4af66-eca0-4c1b-ac4a-b3fe38e40318 | k8s-1    | SHUTOFF | external=10.12.6.37  | ubuntu-16-04-cloud-amd64 | m1.large  |
| 2572511d-67fd-44b7-8922-826509689085 | ubuntu1  | SHUTOFF | external=10.12.5.107 | ubuntu-16-04-cloud-amd64 | m3.xlarge |
+--------------------------------------+----------+---------+----------------------+--------------------------+-----------+

Integration-SB-RegionThree
+--------------------------------------+----------------------+---------+-------------------------------------+--------------------------+-----------+
| ID                                   | Name                 | Status  | Networks                            | Image                    | Flavor    |
+--------------------------------------+----------------------+---------+-------------------------------------+--------------------------+-----------+
| dc94b3e3-2f4e-4580-be41-6726935475d6 | abpujari_virtual_box | SHUTOFF | onap_oam_ext=10.100.0.5, 10.12.5.47 | ubuntu-16-04-cloud-amd64 | m2.xlarge |
| 65671514-6f5c-48b6-ada9-a77189f7a478 | abpujari_vm          | SHUTOFF | onap_oam_ext=10.100.0.7             | ubuntu-16-04-cloud-amd64 | m2.xlarge |
+--------------------------------------+----------------------+---------+-------------------------------------+--------------------------+-----------+

OOM-deploy
+--------------------------------------+--------------+---------+----------------------+--------------------------+---------------------+
| ID                                   | Name         | Status  | Networks             | Image                    | Flavor              |
+--------------------------------------+--------------+---------+----------------------+--------------------------+---------------------+
| 0269f1f8-a334-41f7-887d-5f722a383573 | dublin1      | SHUTOFF | external=10.12.6.204 | ubuntu-16-04-cloud-amd64 | m1.xlarge           |
| 1011c5a1-43e7-44b7-b479-3b8a91fd7a5f | dublin2      | SHUTOFF | external=10.12.6.219 | ubuntu-16-04-cloud-amd64 | m1.xlarge           |
| 6425d4bf-0fbf-4566-a875-39f452707b58 | dublin-large | SHUTOFF | external=10.12.6.214 | ubuntu-16-04-cloud-amd64 | m1.xlarge           |
| b41d001b-bfa9-4566-8aa3-6187df364abc | dublin       | SHUTOFF | external=10.12.5.35  | ubuntu-16-04-cloud-amd64 | onap.hpa.set1.large |
+--------------------------------------+--------------+---------+----------------------+--------------------------+---------------------+
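
For reference, inventories like the ones above are typically produced with the OpenStack CLI. A minimal sketch, assuming the tenant's RC file has already been sourced (the rc file name below is illustrative, not from the original mails):

    # Source the credentials for the tenant you want to inspect (hypothetical file name):
    source ~/oom-deploy-openrc.sh
    # List servers with the columns shown above (these are the default columns):
    openstack server list -c ID -c Name -c Status -c Networks -c Image -c Flavor
    # All instances above are SHUTOFF; a stopped VM can be brought back with:
    openstack server start <server-id-or-name>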

Re: [MultiCloud][AAI] : about AAI updates by MultiCloud : is it implemented in Dublin release ?

Srini
 

AAI updates on the status of workload deployments in K8S-based cloud regions were not supported in R4. Kiran and Eric (copied here) plan to close this gap in R5. BTW, this was also requested by Lukasz.

 

There are two parts to it:

 

1.      Monitoring the remote resources for completeness/readiness after the deployment is triggered by the Multi-Cloud K8S plugin service.

2.      Updating A&AI (see the sketch below).
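
For part 2, a minimal sketch of what such an A&AI update could look like over A&AI's REST API, assuming the default demo credentials and the Dublin (v16) schema; the host name and all IDs below are illustrative placeholders, not what the plugin will actually use:

    # Hypothetical example: register a vserver under a cloud region/tenant in A&AI.
    # Credentials, host, schema version and all identifiers are placeholders.
    curl -k -u AAI:AAI -X PUT \
      "https://aai.api.simpledemo.onap.org:8443/aai/v16/cloud-infrastructure/cloud-regions/cloud-region/k8scloudowner/k8sregionone/tenants/tenant/tenant-1/vservers/vserver/vserver-1" \
      -H "X-FromAppId: multicloud-k8s" \
      -H "X-TransactionId: txn-0001" \
      -H "Content-Type: application/json" \
      -d '{
            "vserver-id": "vserver-1",
            "vserver-name": "my-k8s-workload",
            "vserver-selflink": "http://example.invalid/selflink",
            "prov-status": "ACTIVE"
          }'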

 

I tried to search for the JIRA stories but could not find them. I will let Kiran share the JIRA stories. Any help from the community would be appreciated.

 

Thanks

Srini

 

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Multanen, Eric W
Sent: Thursday, July 11, 2019 7:27 AM
To: onap-discuss@...; rene.robert@...; bin.yang@...
Subject: Re: [onap-discuss][MultiCloud][AAI] : about AAI updates by MultiCloud : is it implemented in Dublin release ?

 

The SO multicloud plugin adapter in Dublin does make the POST call described in the link during vf-module creation.

The multicloud OpenStack plugin(s) do an AAI update with some information. The multicloud K8S plugin does ‘not’ support it in Dublin.

This functionality has not been fully tested – I don’t believe it handles updates, deletes and such, and it may not support all of the same data that the heatbridge script supports. So, it’s more of a PoC.
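
For context, the POST described in the linked spec goes to MultiCloud's infra_workload endpoint. A rough sketch, assuming the Dublin v1 API path through MSB; the cloud owner/region and the payload values are illustrative placeholders:

    # Hypothetical example of the workload-creation call made during vf-module
    # creation; all identifiers and the template payload are placeholders.
    curl -X POST \
      "http://msb-iag.onap:80/api/multicloud/v1/CloudOwner/RegionOne/infra_workload" \
      -H "Content-Type: application/json" \
      -d '{
            "generic-vnf-id": "vnf-0001",
            "vf-module-id": "vfmodule-0001",
            "template_type": "heat",
            "template_data": { "parameters": {} }
          }'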

 

Note also – there has been some discussion about how the A&AI update should be done. It became too much to handle in Dublin.

My notes on these discussions are captured here:  https://wiki.onap.org/pages/viewpage.action?pageId=58228881

 

Eric

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Rene Robert via Lists.Onap.Org
Sent: Thursday, July 11, 2019 5:08 AM
To: onap-discuss@...; bin.yang@...
Subject: [onap-discuss][MultiCloud][AAI] : about AAI updates by MultiCloud : is it implemented in Dublin release ?

 

Hello

 

There is a description of this in the documentation:

 

https://docs.onap.org/en/dublin/submodules/multicloud/framework.git/docs/specs/multicloud-heat-aai-bridge.html

 

Is it implemented in Dublin?

 

 


 

René Robert
«Open and Smart solutions for autOmating Network Services»
ORANGE/IMT/OLN/CNC/NARA/OSONS

 

Landline: +33 2 96 07 39 29
Mobile: +33 6 74 78 68 43
rene.robert@...

 

 

 


[policy] pilot dockerhub migration and multiplatform support for policy project

Paul Vaduva
 

Dear all,

 

First of all congratulations for the release of Dublin.

I am writing to you all because you are stakeholders in the migration of ONAP projects to Docker Hub and, eventually, in multiplatform support.

I finished a pilot of the policy project (an example of how we can build the Docker images with multiplatform support and push them to Docker Hub) in the Jenkins Sandbox:

https://jenkins.onap.org/sandbox/

 

For reference, here are the modifications I had to make to the ci-management repo to create multiplatform jobs (especially relevant for the Linux Foundation):

https://github.com/pvaduva/ci-management.git

 

All the issues with the build are the same on the nexus3 pipeline and identical on both amd64 and arm64, so I assume they are not platform-specific, but I encourage you all to take a look and see how the jobs perform. I had to make some adjustments to the git repos of the policy project, so I used GitHub to store the modified versions; they are public, so you can examine the kind of modifications that would need to be made. Some of the modifications are only there because of the limitations of the Jenkins Sandbox and will not be necessary in the final version, but they show the high-level idea of a multiplatform-capable image stored on Docker Hub.
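
For anyone who wants to try the general approach locally, here is a rough sketch of one common way to publish a multiplatform image to Docker Hub with manifest lists; this is not necessarily what the sandbox jobs do, and the repository name and tags below are illustrative (reusing the mock account mentioned further down):

    # Build and push one image per architecture, each on a builder of that arch:
    docker build -t onapmulti/policy-demo:amd64 .   # run on an amd64 machine
    docker push onapmulti/policy-demo:amd64
    docker build -t onapmulti/policy-demo:arm64 .   # run on an arm64 machine
    docker push onapmulti/policy-demo:arm64

    # Tie both images together under one multi-arch tag
    # ("docker manifest" requires experimental CLI mode):
    docker manifest create onapmulti/policy-demo:latest \
        onapmulti/policy-demo:amd64 onapmulti/policy-demo:arm64
    docker manifest annotate onapmulti/policy-demo:latest \
        onapmulti/policy-demo:arm64 --os linux --arch arm64
    docker manifest push onapmulti/policy-demo:latest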

I added Martin, Dmitry, and Simon so they can see the modifications; we will have to make similar ones for all the other projects (together with the respective teams, of course).

 

Of course we need input from all the parties involved, and we can craft the details of the implementation together, but, as stated above, this is meant to give a high-level idea.

I created a mock account for storing the Docker images, both because some jobs need intermediate images and so that you can examine how the images would look (especially Pam and the policy team):

Docker Hub account:

user: onapmulti

pass: Secret1234

 

Best Regards,

Paul Vaduva




Re: #appc NETCONF/TLS in ODL #appc

francesca.vezzosi@...
 

Hi,
 
I forgot to mention that the OOM installation I'm using is the Dublin release :)

Regards,
Francesca