Date   

Cancelled Event: Openlab subcommittee meeting (UTC) - Wednesday, 1 May 2019 #cal-cancelled

onap-discuss@lists.onap.org Calendar <onap-discuss@...>
 

Cancelled: Openlab subcommittee meeting (UTC)

This event has been cancelled.

When:
Wednesday, 1 May 2019
11:00am to 12:00pm
(GMT+00:00) Africa/Monrovia

Where:
https://zoom.us/j/621738721

Organizer:
yangyanyj@... ONAP6

Description:
Openlab subcommittee meeting
https://zoom.us/j/621738721


Cancelled Event: #clamp Team (UTC) - Wednesday, 1 May 2019 #clamp #cal-cancelled

onap-discuss@lists.onap.org Calendar <onap-discuss@...>
 

Cancelled: #clamp Team (UTC)

This event has been cancelled.

When:
Wednesday, 1 May 2019
1:00pm to 2:00pm
(GMT+00:00) UTC

Where:
https://zoom.us/j/985787133

Organizer:
gn422w@... Bridge:ONAP9

Description:
Details:

iPhone one-tap (US Toll): +16465588656,985787133# or +14086380968,985787133#

Or Telephone: Dial: +1 646 558 8656 (US Toll) or +1 408 638 0968 (US Toll) +1 855 880 1246 (US Toll Free) +1 877 369 0926 (US Toll Free)
Meeting ID: 985 787 133
International numbers available: https://zoom.us/zoomconference?m=-Nj9qjeFFTIOFpGsEEV5yqOJU7UFyFPN


Common docker images unification and standardization

t.levora@...
 

Hi,

 

We’re working on the offline-installer, which means we need to have all resources available in an offline package - for us the size of the package matters. I checked the Dublin resources today:

 

  1. I found out there are five different tags of mariadb version 10 used in Dublin.

So I would like to ask whether it is really necessary to have five different images providing essentially the same functionality, or whether they can be unified. For us it means having all the tags available in the offline Nexus repository deployed on the infrastructure server, from which the images are distributed to the pods. (A quick size check is sketched after the table below.)

MARIADB

Docker Image

Project(s)

nexus3.onap.org:10001/mariadb:10.1.38

so/so-mariadb

nexus3.onap.org:10001/mariadb:10.2.14

policy/mariadb

nexus3.onap.org:10001/mariadb:10.3.12

clamp/mariadb

nexus3.onap.org:10001/mariadb:10.3.14

nbi

library/mariadb:10

vid
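
For illustration, a rough way to see what the duplicate tags cost once they are all mirrored locally (a sketch only; output depends on which registry the images were pulled from):

docker images | grep mariadb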

 

  2. I found five variations of busybox images used – mainly the latest tag, but from different registries or using a different format of entry for the default registry

Is it possible to standardize the tag and registry used for busybox, and to use a specific tag rather than latest? For us it means retagging the images before pushing them to our infrastructure repository prior to packaging, to ensure the image can be pulled correctly under all of those names and tags. (A retag sketch follows the table below.)

BUSYBOX

Docker Image

Project(s)

library/busybox:latest

common/postgres; common/music/music-tomcat; common/music/music-cassandra-job

docker.io/busybox:1.30

dmaap/dmaap-dr-node; dmaap/message-router

docker.io/library/busybox:latest

pomba/pomba-data-router

nexus3.onap.org:10001/busybox

common/mariadb-galera

registry.hub.docker.com/library/busybox:latest

clamp/clamp-dash-es; clamp/clamp-dash-kibana; pomba/pomba-elasticsearch; pomba/pomba-kibana; log/log-kibana; log/log-elasticsearch; multicloud/multicloud-prometheus
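
For reference, a minimal retag sketch of what this currently implies for each busybox variant (the target registry below is only a placeholder for our offline Nexus):

docker pull registry.hub.docker.com/library/busybox:latest
docker tag registry.hub.docker.com/library/busybox:latest <offline-nexus>:10001/library/busybox:latest
docker push <offline-nexus>:10001/library/busybox:latest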

 

  3. I found three different tags of readinesscheck images

Is it possible to use only one version? Again, it means holding all three versions in our package, which increases its size.

READINESSCHECK

Docker Image

Project(s)

oomk8s/readiness-check:2.0.0

clamp; dcaegen2/dcae-cloudify-manager; dcaegen2/dcae-redis; dcaegen2/dcae-bootstrap; dcaegen2/dcae-healthcheck; dcaegen2/dcae-policy-handler; dcaegen2/dcae-deployment-handler; dcaegen2/dcae-servicechange-handler; dcaegen2/dcae-servicechange-handler/dcae-inventory-api; dcaegen2/dcae-config-binding-service; oof; oof/oof-cmso/oof-cmso-topology; oof/oof-cmso/oof-cmso-ticketmgt; oof/oof-cmso/oof-cmso-optimizer; oof/oof-cmso/oof-cmso-service; oof/oof-has; oof/oof-has; aaf; aaf/aaf-sms; aaf/aaf-sms; aaf-sms-quorumclient; aaf/aaf-oauth; aaf/aaf-locate; aaf/aaf-fs; aaf/aaf-hello; aaf/aaf-gui; aaf/aaf-service; aaf/aaf-sshsm; aaf/aaf-cm; readiness; pomba/pomba-search-data; pomba/pomba-contextaggregator; pomba/pomba-data-router; pomba/pomba-sdcctxbuilder; pomba/pomba-validation-service; pomba/pomba-kibana; sdc/sdc-dcae-fe; sdc/sdc-dcae-tosca-lab; sdc/sdc-es; sdc/sdc-kb; sdc/sdc-onboarding-be; sdc/sdc-dcae-dt; sdc/sdc-fe; sdc/sdc-dcae-be; sdc/sdc-cs; sdc/sdc-wfd-fe; sdc/sdc-wfd-be; sdc/sdc-be; cli; msb/kube2msb; msb/msb-eag; msb/msb-discovery; msb/msb-iag; helm/starters/onap-app; policy; policy/policy-common; policy/drools; policy/drools/nexus; policy/brmsgw; policy/pdp; contrib/netbox; portal/portal-sdk; portal/portal-mariadb; portal/portal-app; portal/portal-widget; vnfsdk; esr/esr-server; vfc/vfc-db; vfc/vfc-vnfres; vfc/vfc-vnflcm; vfc/vfc-generic-vnfm-driver; vfc/vfc-huawei-vnfm-driver; vfc/vfc-nokia-v2vnfm-driver; vfc/vfc-resmgr; vfc/vfc-zte-sdnc-driver; vfc/vfc-zte-vnfm-driver; vfc/vfc-ems-driver; vfc/vfc-workflow-engine; vfc/vfc-nokia-vnfm-driver; vfc/vfc-catalog; vfc/vfc-workflow; vfc/vfc-vnfmgr; vfc/vfc-nslcm; vfc/vfc-juju-vnfm-driver; vfc/vfc-multivim-proxy; pnda/dcae-pnda-bootstrap; pnda/dcae-pnda-mirror; log/log-kibana; log/log-logstash; multicloud/multicloud-prometheus; common/mysql; common/music; common/music/music-tomcat; common/music/music-cassandra; common/music/music-cassandra-job; common/postgres; common/postgres/pgpool; common/dgbuilder; common/network-name-gen; common/mongo appc; appc/appc-ansible-server; dmaap; dmaap/dmaap-bc; dmaap/message-router; dmaap/message-router/message-router-zookeeper dmaap/message-router/message-router-mirrormaker dmaap/message-router/message-router-mirrormaker; dmaap/message-router/message-router-kafka; dmaap/dmaap-dr-node; dmaap/dmaap-dr-prov; consul; consul/consul-server; cds/cds-controller-blueprints; cds/cds-blueprints-processor; cds/cds-command-executor

oomk8s/readiness-check:2.0.1

vid

oomk8s/readiness-check:2.0.2

so; so/so-vfc-adapter; so/so-monitoring; so/so-mariadb; so/so-openstack-adapter; so/so-sdc-controller; so/so-bpmn-infra; so/so-request-db-adapter; so/so-catalog-db-adapter; so/so-sdnc-adapter; so/so-vnfm-adapter; sdnc; sdnc/sdnc-prom; sdnc/dmaap-listener; sdnc/sdnc-portal; sdnc/ueb-listener; sdnc/sdnc-ansible-server

 

Thank you very much

 

Best regards

 

Tomas

 

 

 

  


#so trouble logging to MariaDB-Galera [Dublin] #so

Marios Iakovidis <marios.iakovidis@...>
 

Hi,
 
I'm trying to log in to the SO catalog DB
kubernetes pod:
dev-mariadb-galera-mariadb-galera-0
 
I am using:
mysql -uroot -ppassword 
 
as the login to mysql but I get access denied.
What are the correct credentials?
 
Thank you,
Marios


Re: Restconf API authentification: User/password fail #appc

Steve Siani <alphonse.steve.siani.djissitchi@...>
 

Has someone found a way to bypass AAF on APPC in code?

It could be helpful for me if I can skip this authentication module and use static parameters in the application properties.

If no answer here, I will create a new topic for that.

Thanks!
Steve


Re: [MACRO Instantiation] Failed caused by Error from SDNC: Unable to generate VM name

MALINCONICO ANIELLO PAOLO
 

Hi Yuriy,
 
my starting point is to implement the preload solution, i.e. without CDS.
Could you point me to any help?

Thanks,
Aniello Paolo Malinconico


Re: Todays SDC weekly

Ofir Sonsino <ofir.sonsino@...>
 

Please join that bridge, as the normal one is taken:

https://zoom.us/j/283628617

 

 

From: onap-sdc@... [mailto:onap-sdc@...] On Behalf Of Sonsino, Ofir
Sent: Tuesday, April 30, 2019 4:13 PM
To: onap-sdc@...; onap-discuss@...
Subject: [onap-sdc] Todays SDC weekly

 

***Security Advisory: This Message Originated Outside of AT&T ***
Reference http://cso.att.com/EmailSecurity/IDSP.html for more information.

 

Hi,

 

Today's meeting will start 30 minutes later than usual. Sorry for the inconvenience.

 

Thanks

Ofir

 

 

Sent from my Samsung Galaxy smartphone.


Re: Restconf API authentification: User/password fail #appc

Taka Cho
 

On docker compose, yes.

 

On k8s, you need AAF also.

 

Taka

 

From: Steve Siani <alphonse.steve.siani.djissitchi@...>
Sent: Tuesday, April 30, 2019 9:37 AM
To: CHO, TAKAMUNE <tc012c@...>; onap-discuss@...
Subject: Re: [onap-discuss] Restconf API authentification: User/password fail #appc

 

For the Casablanca release, I guess we just need AAI, right? I am only interested in LCM.

Regards,
Steve


Re: Restconf API authentification: User/password fail #appc

Steve Siani <alphonse.steve.siani.djissitchi@...>
 

For the Casablanca release, I guess we just need AAI, right? I am only interested in LCM.

Regards,
Steve


Re: basic service distribution failing for 3.0.2 ??

Brian Freeman
 

I think there is a log directory under .helm, or you can try the --verbose option to see what is going on.

 

helm deploy dev-dmaap local/onap -f /root/oom/kubernetes/onap/resources/environments/public-cloud.yaml -f /root/integration-override.yaml --namespace onap  --verbose

 

Dublin DMaaP is running fine in our test lab (and has been quite stable this release).

 

If you are doing a new clone of oom, as of this week, make sure to use the  --recurse-submodules  flag

 

https://wiki.onap.org/display/DW/OOM+-+Development+workflow+after+code+transfer+to+tech+teams

 

OPTIONALLY: When cloning the repo, add the "--recurse-submodules" option to the git clone command to also fetch the submodules in one step.
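
For example (assuming the usual OOM Gerrit clone URL; adjust URL and branch to your setup):

git clone --recurse-submodules https://gerrit.onap.org/r/oom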

 

Brian

 

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of libo zhu
Sent: Monday, April 29, 2019 10:21 PM
To: onap-discuss@...; m.ptacek@...; subhash.kumar.singh@...; PLATANIA, MARCO <platania@...>; spsinghxp@...; yang.xu3@...
Cc: 'Seshu m' <seshu.kumar.m@...>; 'huangxiangyu' <huangxiangyu5@...>; 'Chenchuanyu' <chenchuanyu@...>
Subject: Re: [onap-discuss] basic service distribution failing for 3.0.2 ??

 

Dear dmaap experts,

               For the Dublin release, is DMaaP usable? I’ve tried the latest master branch of oom; after enabling dmaap, it reports the error below:

[debug] Created tunnel using local port: '44790'

 

[debug] SERVER: "127.0.0.1:44790"

 

Release "demo-dmaap" does not exist. Installing it now.

[debug] CHART PATH: /root/.helm/plugins/deploy/cache/onap-subcharts/dmaap

 

Error: timed out waiting for the condition

 

               If not, is there any alternative workaround or trial, since it blocks our End2End validation? Do we still need to use the Casablanca version?

 

Thanks

Libo

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Michal Ptacek
Sent: Monday, April 29, 2019 3:57 PM
To: onap-discuss@...; subhash.kumar.singh@...; 'PLATANIA, MARCO (MARCO)' <platania@...>; spsinghxp@...; yang.xu3@...
Cc: 'Seshu m' <seshu.kumar.m@...>; 'huangxiangyu' <huangxiangyu5@...>; 'Chenchuanyu' <chenchuanyu@...>
Subject: Re: [onap-discuss] basic service distribution failing for 3.0.2 ??

 

It looks like the problem was caused by a wrongly initialized zookeeper (git clone of the messageservice repo failed).

Based on what Sunil wrote in DMAAP-1144, this is required by the SDC team; I think removing this dependency might bring some stability improvements.

Is it planned already? Anyone from SDC to comment?

 

Thanks,

Michal

 

From: Michal Ptacek [mailto:m.ptacek@...]
Sent: Sunday, April 28, 2019 6:54 PM
To: 'onap-discuss@...' <onap-discuss@...>; 'subhash.kumar.singh@...' <subhash.kumar.singh@...>; 'PLATANIA, MARCO (MARCO)' <platania@...>; 'spsinghxp@...' <spsinghxp@...>; 'yang.xu3@...' <yang.xu3@...>
Cc: 'Seshu m' <seshu.kumar.m@...>; 'huangxiangyu' <huangxiangyu5@...>; 'Chenchuanyu' <chenchuanyu@...>
Subject: [onap-discuss] basic service distribution failing for 3.0.2 ??

 

(subject changed)

 

Hello again,

 

This or a similar problem is mentioned in the recently updated tickets DMAAP-157 and DMAAP-1007 … unfortunately I can’t derive a fix from them.

It looks like SDC-DISTR-NOTIF-TOPIC-AUTO doesn’t have an owner, so the producer can’t subscribe to it?

 

Was this 3.0.2 release really passing integration tests? The only test report I see is for 3.0.1 and a couple of months old.

https://wiki.onap.org/display/DW/Casablanca+Maintenance+Release+Integration+Testing+Status

(Unfortunately 3.0.1 is also useless these days due to expired certificates)

 

[root@tomas-infra helm_charts]#  curl -X GET http://tomas-node0:30227/topics/

{"topics": [

    "POA-RULE-VALIDATION",

    "AAI-EVENT",

    "unauthenticated.VES_MEASUREMENT_OUTPUT",

    "POA-AUDIT-RESULT",

    "__consumer_offsets",

    "org.onap.dmaap.mr.PNF_READY",

    "unauthenticated.SEC_HEARTBEAT_OUTPUT",

    "org.onap.dmaap.mr.PNF_REGISTRATION",

    "PDPD-CONFIGURATION",

    "SDC-DISTR-NOTIF-TOPIC-AUTO",

    "POA-AUDIT-INIT"

]}[root@tomas-infra helm_charts] curl -X GET http://tomas-node0:30227/topics/SDC-DISTR-NOTIF-TOPIC-AUTO/

{

    "owner": "",

    "readerAcl": {

        "enabled": true,

        "users": []

    },

    "name": "SDC-DISTR-NOTIF-TOPIC-AUTO",

    "description": "ASDC distribution notification topic",

    "writerAcl": {

        "enabled": true,

        "users": []

    }

}[root@tomas-infra helm_charts]#curl -X PUT http://tomas-node0:30227/topics/SDC-DISTR-NOTIF-TOPIC-AUTO/producers/iPIxkpAMI8qTcQj8

org.apache.cxf.interceptor.Fault

        at org.apache.cxf.service.invoker.AbstractInvoker.createFault(AbstractInvoker.java:162)

        at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:128)

        at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:192)

        at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:103)

        at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:59)

        at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:96)

        at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308)

        at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)

        at org.apache.camel.component.cxf.cxfbean.CxfBeanDestination.process(CxfBeanDestination.java:83)

        at org.apache.camel.impl.ProcessorEndpoint.onExchange(ProcessorEndpoint.java:103)

        at org.apache.camel.impl.ProcessorEndpoint$1.process(ProcessorEndpoint.java:71)

        at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)

        at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:148)

        at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:548)

        at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)

        at org.apache.camel.processor.Pipeline.process(Pipeline.java:138)

        at org.apache.camel.processor.Pipeline.process(Pipeline.java:101)

        at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)

        at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:97)

        at org.apache.camel.http.common.CamelServlet.doService(CamelServlet.java:208)

        at org.apache.camel.http.common.CamelServlet.service(CamelServlet.java:78)

        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)

        at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:865)

        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1655)

        at ajsc.filter.PassthruFilter.doFilter(PassthruFilter.java:26)

        at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347)

        at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263)

        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)

        at com.att.ajsc.csi.writeablerequestfilter.WriteableRequestFilter.doFilter(WriteableRequestFilter.java:41)

        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)

        at org.onap.dmaap.util.DMaaPAuthFilter.doFilter(DMaaPAuthFilter.java:82)

        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)

        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)

        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)

        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)

        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)

        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)

        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)

        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)

        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)

        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)

        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)

        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)

        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)

        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)

        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)

        at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)

        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)

        at org.eclipse.jetty.server.Server.handle(Server.java:531)

        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)

        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)

        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)

        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)

        at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)

        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)

        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)

        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)

        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)

        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)

        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:762)

        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:680)

        at java.lang.Thread.run(Thread.java:748)

Caused by: java.lang.NullPointerException

        at com.att.nsa.security.NsaAclUtils.updateAcl(NsaAclUtils.java:74)

        at org.onap.dmaap.dmf.mr.beans.DMaaPKafkaMetaBroker$KafkaTopic.updateAcl(DMaaPKafkaMetaBroker.java:446)

        at org.onap.dmaap.dmf.mr.beans.DMaaPKafkaMetaBroker$KafkaTopic.permitWritesFromUser(DMaaPKafkaMetaBroker.java:422)

        at org.onap.dmaap.dmf.mr.service.impl.TopicServiceImpl.permitPublisherForTopic(TopicServiceImpl.java:529)

        at org.onap.dmaap.service.TopicRestService.permitPublisherForTopic(TopicRestService.java:478)

        at sun.reflect.GeneratedMethodAccessor131.invoke(Unknown Source)

        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

        at java.lang.reflect.Method.invoke(Method.java:498)

        at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:179)

        at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96)

        ... 60 more

 

Please advise,

Michal

 

From: Michal Ptacek [mailto:m.ptacek@...]
Sent: Friday, April 26, 2019 5:01 PM
To: 'onap-discuss@...' <onap-discuss@...>; 'subhash.kumar.singh@...' <subhash.kumar.singh@...>; 'PLATANIA, MARCO (MARCO)' <platania@...>
Cc: 'Seshu m' <seshu.kumar.m@...>; 'huangxiangyu' <huangxiangyu5@...>; 'Chenchuanyu' <chenchuanyu@...>
Subject: RE: [onap-discuss] Error code POL5000 : exception on distributing service

 

Hi Guys,

 

I have the same problem, and even recreating the message-router component doesn’t work for me (tried several times).

What I found so far is that SDC is in

 

      "healthCheckComponent": "DE",

      "healthCheckStatus": "DOWN",

      "description": "U-EB cluster is not available"

 

Because it’s failing to register to topic:

 

2019-04-26T14:08:03.926Z||pool-78-thread-1|registerToTopic||registerToTopic||ERROR|500|PUT http://message-router.onap:3904/topics/SDC-DISTR-NOTIF-TOPIC-AUTO/producers/iPIxkpAMI8qTcQj8 (as iPIxkpAMI8qTcQj8) ...|

2019-04-26T14:08:03.970Z||pool-76-thread-1|registerToTopic||registerToTopic||ERROR|500| --> HTTP/1.1 500 Server Error|

2019-04-26T14:08:03.972Z||pool-76-thread-1|registerToTopic||registerToTopic||ERROR|500|Error occured during access to U-EB Server. Operation: register to topic as producer|

 

Topic is available in message-router

 

But the message-router logs are displaying just the following INFO messages:

 

""2019-04-26 14:27:09,761 [qtp1061804750-94] INFO  org.onap.dmaap.service.TopicRestService - Granting write access to producer [iPIxkpAMI8qTcQj8] for topic SDC-DISTR-NOTIF-TOPIC-AUTO

""2019-04-26 14:27:09,765 [qtp1061804750-94] INFO  org.onap.dmaap.dmf.mr.service.impl.TopicServiceImpl - Granting write access to producer [iPIxkpAMI8qTcQj8] for topic SDC-DISTR-NOTIF-TOPIC-AUTO

""2019-04-26 14:27:09,782 [qtp1061804750-96] INFO  org.onap.dmaap.dmf.mr.security.impl.DMaaPOriginalUebAuthenticator - AUTH-LOG(10.42.207.81): No such API key iPIxkpAMI8qTcQj8

""2019-04-26 14:27:09,783 [qtp1061804750-96] INFO  org.onap.dmaap.dmf.mr.security.impl.DMaaPOriginalUebAuthenticator - AUTH-LOG(10.42.207.81): No such API key iPIxkpAMI8qTcQj8

""2019-04-26 14:27:09,788 [qtp1061804750-94] INFO  org.onap.dmaap.dmf.mr.security.impl.DMaaPOriginalUebAuthenticator - AUTH-LOG(10.42.207.81): No such API key iPIxkpAMI8qTcQj8

""2019-04-26 14:27:09,790 [qtp1061804750-94] INFO  org.onap.dmaap.dmf.mr.security.impl.DMaaPOriginalUebAuthenticator - AUTH-LOG(10.42.207.81): No such API key iPIxkpAMI8qTcQj8

""2019-04-26 14:27:09,919 [qtp1061804750-12] INFO  org.onap.dmaap.util.DMaaPAuthFilter - inside servlet filter Cambria Auth Headers checking before doing other Authentication

 

It looks like some AAF-related authentication problem…
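
For reference, one way to check whether that API key actually exists on the message router (assuming the Cambria apiKeys endpoint is exposed on the same NodePort used above) would be:

curl -X GET http://tomas-node0:30227/apiKeys/iPIxkpAMI8qTcQj8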

Any hint really appreciated,

 

Thanks,

Michal

 

PS: I am testing 3.0.2 release (latest casablanca)

 

Please note: for sdc-be to come up properly I needed to change the directory ownership from root to the jetty user, otherwise it was complaining with:

ERROR in ch.qos.logback.core.rolling.RollingFileAppender[METRICS_ROLLING] - Failed to create parent directories for [/var/lib/jetty/logs/SDC/SDC-BE/metrics.log]
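
For example, something along these lines inside the sdc-be container (path taken from the error above; the exact fix may differ per deployment):

chown -R jetty:jetty /var/lib/jetty/logs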

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of subhash kumar singh
Sent: Tuesday, April 23, 2019 9:07 AM
To: PLATANIA, MARCO (MARCO) <platania@...>; onap-discuss@...
Cc: Seshu m <seshu.kumar.m@...>; huangxiangyu <huangxiangyu5@...>; Chenchuanyu <chenchuanyu@...>
Subject: Re: [onap-discuss] Error code POL5000 : exception on distributing service

 

Hello Marco,

 

Thank you for the response.

It worked for me :)

 

--

Regards,

Subhash Kumar Singh

From: PLATANIA, MARCO (MARCO) [mailto:platania@...]
Sent: Monday, April 22, 2019 8:49 PM
To: onap-discuss@...; Subhash Kumar Singh
Cc: Seshu m; huangxiangyu; Chenchuanyu
Subject: Re: [onap-discuss] Error code POL5000 : exception on distributing service

 

DMaaP components have to come up in a specific order: Zookeeper, Kafka, and finally message router. Delete the DMaaP files in NFS when you rebuild it (/dockerdata-nfs/dev-dmaap/* or /dockerdata-nfs/dev-message-router/*, I can’t remember…). Also look at the SDC backend; that kind of error typically appears when SDC-BE is not fully up.
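
A rough sketch of that rebuild flow (release name, override file and NFS paths are examples only; adjust to your deployment):

helm del --purge dev-dmaap
rm -rf /dockerdata-nfs/dev-dmaap /dockerdata-nfs/dev-message-router
helm deploy dev-dmaap local/onap --namespace onap -f /root/integration-override.yaml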

 

Marco

 

From: <onap-discuss@...> on behalf of subhash kumar singh <subhash.kumar.singh@...>
Reply-To: "onap-discuss@..." <onap-discuss@...>, "subhash.kumar.singh@..." <subhash.kumar.singh@...>
Date: Monday, April 22, 2019 at 8:43 AM
To: "onap-discuss@..." <onap-discuss@...>
Cc: Seshu m <seshu.kumar.m@...>, huangxiangyu <huangxiangyu5@...>, Chenchuanyu <chenchuanyu@...>
Subject: [onap-discuss] Error code POL5000 : exception on distributing service

 

Hello SDC Team,

 

Could you please help me understand the reason for receiving error POL5000 when distributing the service model.

I receive the following message when distributing the service from the portal:

Error code : POL5000

Status code: 500

Internal Server Error. Please try again later.

 

 

I can see a similar exception in the SO logs as well.

 

Following are logs from SO:

2019-04-22T12:29:44.856Z|trace-#| org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init

2019-04-22T12:29:44.857Z|trace-#| org.onap.sdc.impl.DistributionClientImpl - get ueb cluster server list from component(configuration file)

2019-04-22T12:29:44.865Z|trace-#| org.onap.sdc.http.SdcConnectorClient - about to perform getServerList. requestId= 2accf8f7-f355-42ef-8455-c9779a5476f0 url= /sdc/v1/artifactTypes

2019-04-22T12:29:44.865Z|trace-#| org.onap.sdc.http.HttpAsdcClient - url to send https://sdc-be.onap:8443/sdc/v1/artifactTypes

2019-04-22T12:29:45.103Z|trace-#| org.onap.sdc.http.HttpAsdcClient - GET Response Status 200

2019-04-22T12:29:45.104Z|trace-#| org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT, HEAT_ARTIFACT, HEAT_ENV, HEAT_NESTED, HEAT_NET, HEAT_VOL, OTHER, TOSCA_CSAR, VF_MODULES_METADATA] were validated with ASDC server

2019-04-22T12:29:45.104Z|trace-#| org.onap.sdc.impl.DistributionClientImpl - create keys

2019-04-22T12:29:45.104Z|trace-#| com.att.nsa.apiClient.http.HttpClient - POST http://message-router.onap:3904/apiKeys/create will send credentials over a clear channel.

2019-04-22T12:29:45.210Z|trace-#| org.onap.sdc.http.SdcConnectorClient - about to perform registerAsdcTopics. requestId= 39614524-48f0-4ca3-8407-f7be5eeb10bd url= /sdc/v1/registerForDistribution

2019-04-22T12:29:45.381Z|trace-#| org.onap.sdc.http.SdcConnectorClient - status from ASDC is org.onap.sdc.http.HttpAsdcResponse@1f2d8e1a

2019-04-22T12:29:45.381Z|trace-#| org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=ASDC_SERVER_PROBLEM, responseMessage=ASDC server problem]

2019-04-22T12:29:45.381Z|trace-#| org.onap.sdc.http.SdcConnectorClient - error from ASDC is: {

  "requestError": {

    "policyException": {

      "messageId": "POL5000",

      "text": "Error: Internal Server Error. Please try again later.",

      "variables": []

    }

  }

}

 

 

I tried to reinstall dmaap but it did not solve the problem.

As part of my investigation I tried to find related logs (e.g. DistributionEngineClusterHealth.java), but I am not sure where I can see these logs and how to enable trace logging.

 

--

Regards,

Subhash Kumar Singh

FNOSS, ONAP

 

Huawei Technologies India Pvt. Ltd.

Survey No. 37, Next to EPIP Area, Kundalahalli, Whitefield

Bengaluru-560066, Karnataka

Tel: + 91-80-49160700 Ext 70992 II Mob: 8050101106 Email: subhash.kumar.singh@...  

 


 

This e-mail and its attachments contain confidential information from HUAWEI, which
is intended only for the person or entity whose address is listed above. Any use of the
information contained herein in any way (including, but not limited to, total or partial
disclosure, reproduction, or dissemination) by persons other than the intended
recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender by
phone or email immediately and delete it!

 

 

 

  


Re: Restconf API authentification: User/password fail #appc

Steve Siani <alphonse.steve.siani.djissitchi@...>
 

Thank you very much Taka!

Regards,
Steve


Re: AAI not coming up in casablance #aai

Yogi Knss <knss.yogi@...>
 

Thanks Jimmy Forsyth,

We have reduced the replica set from 3 to 1, because I read somewhere that even 1 is enough. Maybe I overlooked something.
I will change the config and check again.

Regards,
Yogi.


Re: Restconf API authentification: User/password fail #appc

Taka Cho
 

All ONAP components are tied up with AAF.

The minimum set in order to make APPC function: you need AAF (authentication/authorization), AAI (inventory), and DMaaP (DMaaP is optional; you can use Postman, curl or apidoc to post the LCM API instead).

 

On docker compose, you need to install at least AAI.

 

Taka

 

From: Steve Siani <alphonse.steve.siani.djissitchi@...>
Sent: Monday, April 29, 2019 11:41 PM
To: CHO, TAKAMUNE <tc012c@...>; onap-discuss@...
Subject: Re: [onap-discuss] Restconf API authentification: User/password fail #appc

 

No, I am trying to have a standalone app.

I just need to play with APPC; is there any project dependency?

Regards,
Steve


Todays SDC weekly

Ofir Sonsino <ofir.sonsino@...>
 


Hi,

Today's meeting will start 30 minutes later than usual. Sorry for the inconvenience.

Thanks
Ofir


Sent from my Samsung Galaxy smartphone.


Re: [MACRO Instantiation] Failed caused by Error from SDNC: Unable to generate VM name

MALAKOV, YURIY <ym9479@...>
 

Aniello,

 

Are you planning to use the CDS Automation Naming (via naming ms) and IP assignment (via netbox) solution or preload solution for macro orchestration?

 

In the TOSCA, for the naming policy you can set the policy instance name as default, and that should allow SDNC to use the default policy for auto name generation. But before you try to solve this issue you want to address the question above; depending on the answer, the solution will be different.

 

 

 

 

 

Yuriy Malakov

SDN-CP Lead Engineer

732-420-3030, Q-Chat

Yuriy.Malakov@...

 

From: MALINCONICO ANIELLO PAOLO <aniello.malinconico@...>
Sent: Tuesday, April 30, 2019 8:22 AM
To: MALAKOV@...; MALAKOV, YURIY <ym9479@...>; onap-discuss@...
Subject: Re: [onap-discuss] [MACRO Instantiation] Failed caused by Error from SDNC: Unable to generate VM name

 

Hi Yuriy, yes I have executed ./demo-k8s.sh onap distributeVFWNG ... but I do not want to replicate the vFWNG use case.
I want to understand how to deploy a service with the MACRO flag, so I am starting by deploying the simplest custom service (composed of a single VM), but I keep running into the same issue about naming-policy-generate-name ...
So I think my issue is not related to the vFWNG use case.

Thanks,
Aniello Paolo Malinconico


Re: Naming conflict with cloud region

Marco Platania
 

There were two cloud regions with the same name but different cloud owners and VID seemed confused. I can’t really reproduce the error because I cleaned up the environment. If you want to reproduce it in your local lab, you can use the JSON object below. This is very similar to the AAI cloud region configuration that we had yesterday. We ran GET https://{{aai_ip}}:{{aai_port}}/aai/v13/cloud-infrastructure/cloud-regions?cloud-region-id=RegionOne and got something like this:

 

{

    "cloud-region": [

        {

            "cloud-owner": "CloudOwner",

            "cloud-region-id": "RegionOne",

            "cloud-type": "openstack",

            "cloud-region-version": "v2.5",

            "identity-url": "http://10.43.111.6/api/multicloud/v0/CloudOwner_RegionTwo/identity/v2.0/tokens",

            "cloud-zone": "bm-2",

            "complex-name": "complex-2",

            "resource-version": "1546461316786",

            "relationship-list": {

                "relationship": [

                    {

                        "related-to": "complex",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v13/cloud-infrastructure/complexes/complex/clli2",

                        "relationship-data": [

                            {

                                "relationship-key": "complex.physical-location-id",

                                "relationship-value": "clli2"

                            }

                        ]

                    }

                ]

            }

        },

        {

            "cloud-owner": "CloudOwner2",

            "cloud-region-id": "RegionOne",

            "cloud-type": "openstack",

            "owner-defined-type": "owner type",

            "cloud-region-version": "v2.5",

            "cloud-zone": "bm-1",

            "resource-version": "1546461297568",

            "relationship-list": {

                "relationship": [

                    {

                        "related-to": "complex",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v13/cloud-infrastructure/complexes/complex/clli1",

                        "relationship-data": [

                            {

                                "relationship-key": "complex.physical-location-id",

                                "relationship-value": "clli1"

                            }

                        ]

                    }

                ]

            }

        }

    ]

}

 

Marco

 

From: "Stern, Ittay" <ittay.stern@...>
Date: Tuesday, April 30, 2019 at 3:36 AM
To: "onap-discuss@..." <onap-discuss@...>, "PLATANIA, MARCO (MARCO)" <platania@...>
Subject: RE: Naming conflict with cloud region

 

You’re saying that “VID is receiving multiple cloud regions with the same name and doesn’t know which one to pick”.

Can you explain your scenario?

 

Dublin’s VID is distinguishing regions with different owners:

 

 

 

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of PLATANIA, MARCO
Sent: Monday, April 29, 2019 6:56 PM
To: onap-discuss@...
Subject: [onap-discuss] Naming conflict with cloud region
Importance: High

 

***Security Advisory: This Message Originated Outside of AT&T ***
Reference http://cso.att.com/EmailSecurity/IDSP.html for more information.

All,

 

Who created a new cloud region called RegionOne, with cloud owner CloudOwner2, in Integration-SB-00 lab? See AAI object below. Please let us know because we ended up in a naming conflict and we aren’t able to spin up new VNFs in that lab. VID is receiving multiple cloud regions with the same name and doesn’t know which one to pick.

 

For future reference, please name the cloud region differently, for example RegionFour or something (I think robot has a script that creates RegionTwo and RegionThree), and then link that new region to the actual OpenStack RegionOne in the catalogdb database, cloud_sites table, in the MariaDB Galera cluster (pick one of the 3 cluster nodes; updates will propagate).

 

MariaDB [catalogdb]> select * from cloud_sites;

+-------------------+-----------+---------------------+---------------+-----------+-------------+----------+--------------+-----------------+---------------------+---------------------+

| ID                | REGION_ID | IDENTITY_SERVICE_ID | CLOUD_VERSION | CLLI      | CLOUDIFY_ID | PLATFORM | ORCHESTRATOR | LAST_UPDATED_BY | CREATION_TIMESTAMP  | UPDATE_TIMESTAMP    |

+-------------------+-----------+---------------------+---------------+-----------+-------------+----------+--------------+-----------------+---------------------+---------------------+

| Chicago           | ORD       | RAX_KEYSTONE        | 2.5           | ORD       | NULL        | NULL     | NULL         | FLYWAY          | 2019-04-24 19:54:15 | 2019-04-24 19:54:15 |

| Dallas            | DFW       | RAX_KEYSTONE        | 2.5           | DFW       | NULL        | NULL     | NULL         | FLYWAY          | 2019-04-24 19:54:15 | 2019-04-24 19:54:15 |

| DEFAULT           | RegionOne | DEFAULT_KEYSTONE    | 2.5           | RegionOne | NULL        | NULL     | NULL         | FLYWAY          | 2019-04-24 19:54:15 | 2019-04-24 19:54:15 |

| Northern Virginia | IAD       | RAX_KEYSTONE        | 2.5           | IAD       | NULL        | NULL     | NULL         | FLYWAY          | 2019-04-24 19:54:15 | 2019-04-24 19:54:15 |

| RegionOne         | RegionOne | DEFAULT_KEYSTONE    | 2.5           | RegionOne | NULL        | NULL     | NULL         | FLYWAY          | 2019-04-24 19:54:15 | 2019-04-24 19:54:15 |

+-------------------+-----------+---------------------+---------------+-----------+-------------+----------+--------------+-----------------+---------------------+---------------------+

 

The ID column is your cloud region tag (DO NOT USE RegionOne !!!), while the REGION_ID and CLLI columns refer to your actual OpenStack region. Here you should have RegionOne.
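
A hedged example of what that cloud_sites mapping could look like (values are illustrative only; credentials, timestamp handling and the remaining columns depend on the actual catalogdb schema):

mysql -uroot -p catalogdb -e "INSERT INTO cloud_sites (ID, REGION_ID, IDENTITY_SERVICE_ID, CLOUD_VERSION, CLLI) VALUES ('RegionFour', 'RegionOne', 'DEFAULT_KEYSTONE', '2.5', 'RegionOne');"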

 

Thanks,

Marco

 

 

{

            "cloud-owner": "CloudOwner2",

            "cloud-region-id": "RegionOne",

            "cloud-type": "openstack",

            "owner-defined-type": "t1",

            "cloud-region-version": "titanium_cloud",

            "identity-url": "http://msb-iag.onap:80/api/multicloud-titaniumcloud/v1/CloudOwner2/RegionOne/identity/v2.0",

            "cloud-zone": "z1",

            "complex-name": "clli1",

            "cloud-extra-info": "",

            "orchestration-disabled": false,

            "in-maint": false,

            "resource-version": "1556514985452",

            "relationship-list": {

                "relationship": [

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-05",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-05"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-00",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-00"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-03",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-03"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-08",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-08"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-01",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-01"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-09",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-09"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-06",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-06"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-04",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-04"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-02",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-02"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-12",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-12"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-07",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-07"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-10",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-10"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "complex",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/complexes/complex/clli1",

                        "relationship-data": [

                            {

                                "relationship-key": "complex.physical-location-id",

                                "relationship-value": "clli1"

                            }

                        ]

                    }

                ]

            }


Re: [SO] CrashLoopBackOff issue while upgrading the SO image from Casablanca to Dublin

Brian Freeman
 

The Dublin SO OOM charts, like the override.yaml, have a lot of other changes. It's not just the image reference update.

 

I would suggest copying a fresh clone of oom/kubernetes/so and its child directories into your current oom/kubernetes/so tree, then make so; make onap and redeploy SO.

Or use some other method to pick up the changes since Casablanca in the configuration data in oom/kubernetes/so/*. A rough sketch follows.
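
A rough sketch of that flow (clone URL, paths, release name and override file are examples only; adjust to your environment):

git clone --recurse-submodules https://gerrit.onap.org/r/oom /tmp/oom-dublin
cp -r /tmp/oom-dublin/kubernetes/so/. /root/oom/kubernetes/so/
cd /root/oom/kubernetes && make so && make onap
helm del --purge so
helm install local/so -n so --namespace onap -f /root/integration-override-so.yaml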

 

Brian

 

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Sunilkumar Shivangouda Biradar
Sent: Tuesday, April 30, 2019 7:35 AM
To: Seshu m <seshu.kumar.m@...>; onap-discuss@...
Cc: Sanchita Pathak <sanchita@...>
Subject: Re: [onap-discuss] [SO] CrashLoopBackOff issue while upgrading the SO image from Casablanca to Dublin

 

Hi Seshu,

 

Thanks for the information.

I have changed the version from 1.5.0 to 1.4.0 in values.yaml file.

But the pods are still in CrashLoopBackOff state.

 

And I am using so-mariadb version 10.1.38, is it the correct version for mariadb?

Attached debug logs.

 

so-so-7c8568ccb7-jn876                                            1/1       Running            0          1h

so-so-bpmn-infra-57b585cb45-s68sw                                 0/1       CrashLoopBackOff   18         1h

so-so-catalog-db-adapter-7f55755b7d-x56lk                         0/1       CrashLoopBackOff   20         1h

so-so-mariadb-57b6c979f6-tcj2c                                    1/1       Running            0          1h

so-so-monitoring-7867ff7475-2h9fj                                 1/1       Running            0          1h

so-so-openstack-adapter-6bb58c4b98-sjprc                          0/1       CrashLoopBackOff   20         1h

so-so-request-db-adapter-864bb584fd-t6wjm                         1/1       Running            0          1h

so-so-sdc-controller-55994bfc6-s7sbk                              1/1       Running            0          1h

so-so-sdnc-adapter-bfd96bcbf-w5tx9                                1/1       Running            0          1h

so-so-vfc-adapter-6cc6f4cb5f-xpc6d                                0/1       CrashLoopBackOff   20         1h

 

 

Regards,

Sunil B

 

From: Seshu m <seshu.kumar.m@...>
Sent: Tuesday, April 30, 2019 1:08 PM
To: onap-discuss@...; Sunilkumar Shivangouda Biradar <SB00577584@...>
Cc: Sanchita Pathak <sanchita@...>
Subject: RE: [onap-discuss] [SO] CrashLoopBackOff issue while upgrading the SO image from Casablanca to Dublin

 

Hi Sunil

 

We have 1.5.x as the main branch targeting the future release and 1.4.x for Dublin.

There are some intermediate fixes going in on the main (development) branch that could pose an issue.

Please try Dublin version 1.4.0 and let us know if it's fine.

 

Thanks and Regards,

M Seshu Kumar

Senior System Architect

Single OSS India Branch Department. S/W BU.

Huawei Technologies India Pvt. Ltd.

Survey No. 37, Next to EPIP Area, Kundalahalli, Whitefield

Bengaluru-560066, Karnataka.

Tel: + 91-80-49160700 , Mob: 9845355488


___________________________________________________________________________________________________

This e-mail and its attachments contain confidential information from HUAWEI, which is intended only for the person or entity whose address is listed above. Any use of the information contained herein in any way (including, but not limited to, total or partial disclosure, reproduction, or dissemination) by persons other than the intended recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender by phone or email immediately and delete it!

-------------------------------------------------------------------------------------------------------------------------------------------------------------------

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Sunilkumar Shivangouda Biradar
Sent: Tuesday, April 30, 2019 12:21
To: onap-discuss@...
Cc: Sanchita Pathak <sanchita@...>
Subject: [onap-discuss] [SO] CrashLoopBackOff issue while upgrading the SO image from Casablanca to Dublin

 

Hi SO Team,

 

I hit an issue while upgrading SO from the Casablanca image to the Dublin image.

A few pods are in CrashLoopBackOff state.

 

I have changed the values.yaml as below

 

oom/kubernetes/so/charts/so-catalog-db-adapter/values.yaml:30:image: onap/so/catalog-db-adapter:1.5.0
oom/kubernetes/so/charts/so-monitoring/values.yaml:35:image: onap/so/so-monitoring:1.5.0
oom/kubernetes/so/charts/so-vfc-adapter/values.yaml:30:image: onap/so/vfc-adapter:1.5.0
oom/kubernetes/so/charts/so-request-db-adapter/values.yaml:30:image: onap/so/request-db-adapter:1.5.0
oom/kubernetes/so/charts/so-bpmn-infra/values.yaml:30:image: onap/so/bpmn-infra:1.5.0
oom/kubernetes/so/charts/so-sdc-controller/values.yaml:30:image: onap/so/sdc-controller:1.5.0
oom/kubernetes/so/charts/so-sdnc-adapter/values.yaml:30:image: onap/so/sdnc-adapter:1.5.0
oom/kubernetes/so/charts/so-openstack-adapter/values.yaml:29:image: onap/so/openstack-adapter:1.5.0

oom/kubernetes/so/values.yaml:30:image: onap/so/api-handler-infra:1.5.0

oom/kubernetes/so/charts/so-mariadb/values.yaml:35:image: mariadb:10.1.38

 

After changing these, I have run the below commands:

  1. make all onap
  2. helm del --purge so
  3. rm -rf /docker-nfs/so
  4. helm install local/so -n so --namespace onap -f /root/integration-override-so.yaml

 

so-so-7b659df6f8-lcht2                                            1/1       Running            0          1h

so-so-bpmn-infra-796477fbd6-tb7pd                                 0/1       CrashLoopBackOff   14         1h

so-so-catalog-db-adapter-575f664674-9jn4t                         1/1       Running            0          1h

so-so-mariadb-57b6c979f6-rk8n5                                    1/1       Running            0          1h

so-so-monitoring-5dbdc97974-72mpz                                 1/1       Running            0          1h

so-so-openstack-adapter-58bf9dd67-xkcnt                           0/1       CrashLoopBackOff   15         1h

so-so-request-db-adapter-6874d4459f-sg88t                         1/1       Running            0          1h

so-so-sdc-controller-747bb56fd9-lcw5v                             1/1       Running            0          1h

so-so-sdnc-adapter-5455b7b9f4-65q9b                               1/1       Running            0          1h

so-so-vfc-adapter-6f874bddc6-g4544                                0/1       CrashLoopBackOff   16         1h

 

Need help in resolving this issue.

 

Regards,

 

Sunil B

Electronic city phase II,

Bangalore 560 100.

Mob: +91 8861180624 
SB00577584@...

   

 

We believe

 

 

============================================================================================================================

Disclaimer:  This message and the information contained herein is proprietary and confidential and subject to the Tech Mahindra policy statement, you may review the policy at http://www.techmahindra.com/Disclaimer.html externally http://tim.techmahindra.com/tim/disclaimer.html internally within TechMahindra.

============================================================================================================================


Re: [MACRO Instantiation] Failed caused by Error from SDNC: Unable to generate VM name

Marco Platania
 

If you are not using CDS (as I understand), make sure that your Heat template (yaml and env files) doesn't have any reference to sdnc_* variables.

 

Marco

 

From: <onap-discuss@...> on behalf of MALINCONICO ANIELLO PAOLO <aniello.malinconico@...>
Reply-To: "onap-discuss@..." <onap-discuss@...>, "aniello.malinconico@..." <aniello.malinconico@...>
Date: Tuesday, April 30, 2019 at 8:22 AM
To: "MALAKOV@..." <MALAKOV@...>, "MALAKOV, YURIY" <ym9479@...>, "onap-discuss@..." <onap-discuss@...>
Subject: Re: [onap-discuss] [MACRO Instantiation] Failed caused by Error from SDNC: Unable to generate VM name

 

Hi Yuriy, yes I have executed ./demo-k8s.sh onap distributeVFWNG ... but I do not want to replicate the vFWNG use case.
I want to understand how to deploy a service with the MACRO flag, so I am starting by deploying the simplest custom service (composed of a single VM), but I keep running into the same issue about naming-policy-generate-name ...
So I think my issue is not related to the vFWNG use case.

Thanks,
Aniello Paolo Malinconico


Re: [MACRO Instantiation] Failed caused by Error from SDNC: Unable to generate VM name

MALINCONICO ANIELLO PAOLO
 

Hi Yuriy, yes I have executed ./demo-k8s.sh onap distributeVFWNG ... but I do not want to replicate the vFWNG use case.
I want to understand how to deploy a service with the MACRO flag, so I am starting by deploying the simplest custom service (composed of a single VM), but I keep running into the same issue about naming-policy-generate-name ...
So I think my issue is not related to the vFWNG use case.

Thanks,
Aniello Paolo Malinconico
