Date   

Re: [MACRO Instantiation] Failed caused by Error from SDNC: Unable to generate VM name

MALINCONICO ANIELLO PAOLO
 

Hi Yuriy,
 
my starting point is to implement the preload solution, i.e., without CDS.
So can you point me to any help?

Thanks,
Aniello Paolo Malinconico


Re: Today's SDC weekly

Ofir Sonsino <ofir.sonsino@...>
 

Please join the bridge below, as the usual one is taken:

https://zoom.us/j/283628617

 

 

From: onap-sdc@... [mailto:onap-sdc@...] On Behalf Of Sonsino, Ofir
Sent: Tuesday, April 30, 2019 4:13 PM
To: onap-sdc@...; onap-discuss@...
Subject: [onap-sdc] Today's SDC weekly

 


 

Hi,

 

Today's meeting will start 30 minutes later than usual. Sorry for the inconvenience.

 

Thanks

Ofir

 

 

Sent from my Samsung Galaxy smartphone.


Re: Restconf API authentication: User/password fail #appc

Taka Cho
 

On docker compose, yes.

 

On k8s, you need AAF also.

 

Taka

 

From: Steve Siani <alphonse.steve.siani.djissitchi@...>
Sent: Tuesday, April 30, 2019 9:37 AM
To: CHO, TAKAMUNE <tc012c@...>; onap-discuss@...
Subject: Re: [onap-discuss] Restconf API authentication: User/password fail #appc

 

For the Casablanca release, I guess we just need AAI, right? I am only interested in LCM.

Regards,
Steve


Re: Restconf API authentication: User/password fail #appc

Steve Siani <alphonse.steve.siani.djissitchi@...>
 

For the Casablanca release, I guess we just need AAI, right? I am only interested in LCM.

Regards,
Steve


Re: basic service distribution failing for 3.0.2 ??

Brian Freeman
 

I think there is a log directory under .helm, or you can try the --verbose option to see what is going on.

 

helm deploy dev-dmaap local/onap -f /root/oom/kubernetes/onap/resources/environments/public-cloud.yaml -f /root/integration-override.yaml --namespace onap  --verbose

 

Dublin DMaaP is running fine in our test lab (and has been quite stable this release).

 

If you are doing a new clone of oom as of this week, make sure to use the --recurse-submodules flag.

 

https://wiki.onap.org/display/DW/OOM+-+Development+workflow+after+code+transfer+to+tech+teams

 

OPTIONALLY: When cloning the repo, add the "--recurse-submodules" option to the git clone command to also fetch the submodules in one step, for example:
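
A minimal sketch of the full command, using the OOM repo URL from ONAP's gerrit (adjust the branch or mirror as needed):

git clone --recurse-submodules https://gerrit.onap.org/r/oom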

 

Brian

 

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of libo zhu
Sent: Monday, April 29, 2019 10:21 PM
To: onap-discuss@...; m.ptacek@...; subhash.kumar.singh@...; PLATANIA, MARCO <platania@...>; spsinghxp@...; yang.xu3@...
Cc: 'Seshu m' <seshu.kumar.m@...>; 'huangxiangyu' <huangxiangyu5@...>; 'Chenchuanyu' <chenchuanyu@...>
Subject: Re: [onap-discuss] basic service distribution failing for 3.0.2 ??

 

Dear dmaap experts,

               For the Dublin release, is DMaaP usable? I've tried the latest master branch of oom; after enabling dmaap, it reports the error below:

[debug] Created tunnel using local port: '44790'

 

[debug] SERVER: "127.0.0.1:44790"

 

Release "demo-dmaap" does not exist. Installing it now.

[debug] CHART PATH: /root/.helm/plugins/deploy/cache/onap-subcharts/dmaap

 

Error: timed out waiting for the condition

 

               If not, is there any alternative or workaround we could try, since this blocks our end-to-end validation? Do we still need to use the Casablanca version?

 

Thanks

Libo

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Michal Ptacek
Sent: Monday, April 29, 2019 3:57 PM
To: onap-discuss@...; subhash.kumar.singh@...; 'PLATANIA, MARCO (MARCO)' <platania@...>; spsinghxp@...; yang.xu3@...
Cc: 'Seshu m' <seshu.kumar.m@...>; 'huangxiangyu' <huangxiangyu5@...>; 'Chenchuanyu' <chenchuanyu@...>
Subject: Re: [onap-discuss] basic service distribution failing for 3.0.2 ??

 

It looks like the problem was caused by a wrongly initialized ZooKeeper (the git clone of the messageservice repo failed).

Based on what Sunil wrote in DMAAP-1144, this is required by the SDC team; I think removing this dependency might bring some stability improvements.

Is this planned already? Anyone from SDC to comment?

 

Thanks,

Michal

 

From: Michal Ptacek [mailto:m.ptacek@...]
Sent: Sunday, April 28, 2019 6:54 PM
To: 'onap-discuss@...' <onap-discuss@...>; 'subhash.kumar.singh@...' <subhash.kumar.singh@...>; 'PLATANIA, MARCO (MARCO)' <platania@...>; 'spsinghxp@...' <spsinghxp@...>; 'yang.xu3@...' <yang.xu3@...>
Cc: 'Seshu m' <seshu.kumar.m@...>; 'huangxiangyu' <huangxiangyu5@...>; 'Chenchuanyu' <chenchuanyu@...>
Subject: [onap-discuss] basic service distribution failing for 3.0.2 ??

 

(subject changed)

 

Hello again,

 

This or a similar problem is mentioned in the recently updated tickets DMAAP-157 and DMAAP-1007 … unfortunately I can't derive a fix from them.

It looks like SDC-DISTR-NOTIF-TOPIC-AUTO doesn't have an owner, so the producer can't register to it?

 

Was this 3.0.2 release really passing integration tests? The only test report I see is for 3.0.1, and it is a couple of months old.

https://wiki.onap.org/display/DW/Casablanca+Maintenance+Release+Integration+Testing+Status

(Unfortunately, 3.0.1 is also unusable these days due to expired certificates.)

 

[root@tomas-infra helm_charts]#  curl -X GET http://tomas-node0:30227/topics/

{"topics": [

    "POA-RULE-VALIDATION",

    "AAI-EVENT",

    "unauthenticated.VES_MEASUREMENT_OUTPUT",

    "POA-AUDIT-RESULT",

    "__consumer_offsets",

    "org.onap.dmaap.mr.PNF_READY",

    "unauthenticated.SEC_HEARTBEAT_OUTPUT",

    "org.onap.dmaap.mr.PNF_REGISTRATION",

    "PDPD-CONFIGURATION",

    "SDC-DISTR-NOTIF-TOPIC-AUTO",

    "POA-AUDIT-INIT"

]}[root@tomas-infra helm_charts]# curl -X GET http://tomas-node0:30227/topics/SDC-DISTR-NOTIF-TOPIC-AUTO/

{

    "owner": "",

    "readerAcl": {

        "enabled": true,

        "users": []

    },

    "name": "SDC-DISTR-NOTIF-TOPIC-AUTO",

    "description": "ASDC distribution notification topic",

    "writerAcl": {

        "enabled": true,

        "users": []

    }

}[root@tomas-infra helm_charts]# curl -X PUT http://tomas-node0:30227/topics/SDC-DISTR-NOTIF-TOPIC-AUTO/producers/iPIxkpAMI8qTcQj8

org.apache.cxf.interceptor.Fault

        at org.apache.cxf.service.invoker.AbstractInvoker.createFault(AbstractInvoker.java:162)

        at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:128)

        at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:192)

        at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:103)

        at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:59)

        at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:96)

        at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308)

        at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)

        at org.apache.camel.component.cxf.cxfbean.CxfBeanDestination.process(CxfBeanDestination.java:83)

        at org.apache.camel.impl.ProcessorEndpoint.onExchange(ProcessorEndpoint.java:103)

        at org.apache.camel.impl.ProcessorEndpoint$1.process(ProcessorEndpoint.java:71)

        at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)

        at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:148)

        at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:548)

        at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)

        at org.apache.camel.processor.Pipeline.process(Pipeline.java:138)

        at org.apache.camel.processor.Pipeline.process(Pipeline.java:101)

        at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)

        at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:97)

        at org.apache.camel.http.common.CamelServlet.doService(CamelServlet.java:208)

        at org.apache.camel.http.common.CamelServlet.service(CamelServlet.java:78)

        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)

        at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:865)

        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1655)

        at ajsc.filter.PassthruFilter.doFilter(PassthruFilter.java:26)

        at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347)

        at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263)

        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)

        at com.att.ajsc.csi.writeablerequestfilter.WriteableRequestFilter.doFilter(WriteableRequestFilter.java:41)

        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)

        at org.onap.dmaap.util.DMaaPAuthFilter.doFilter(DMaaPAuthFilter.java:82)

        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)

        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)

        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)

        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)

        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)

        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)

        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)

        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)

        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)

        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)

        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)

        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)

        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)

        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)

        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)

        at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)

        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)

        at org.eclipse.jetty.server.Server.handle(Server.java:531)

        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)

        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)

        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)

        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)

        at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)

        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)

        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)

        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)

        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)

        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)

        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:762)

        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:680)

        at java.lang.Thread.run(Thread.java:748)

Caused by: java.lang.NullPointerException

        at com.att.nsa.security.NsaAclUtils.updateAcl(NsaAclUtils.java:74)

        at org.onap.dmaap.dmf.mr.beans.DMaaPKafkaMetaBroker$KafkaTopic.updateAcl(DMaaPKafkaMetaBroker.java:446)

        at org.onap.dmaap.dmf.mr.beans.DMaaPKafkaMetaBroker$KafkaTopic.permitWritesFromUser(DMaaPKafkaMetaBroker.java:422)

        at org.onap.dmaap.dmf.mr.service.impl.TopicServiceImpl.permitPublisherForTopic(TopicServiceImpl.java:529)

        at org.onap.dmaap.service.TopicRestService.permitPublisherForTopic(TopicRestService.java:478)

        at sun.reflect.GeneratedMethodAccessor131.invoke(Unknown Source)

        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

        at java.lang.reflect.Method.invoke(Method.java:498)

        at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:179)

        at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96)

        ... 60 more

 

Please advise,

Michal

 

From: Michal Ptacek [mailto:m.ptacek@...]
Sent: Friday, April 26, 2019 5:01 PM
To: 'onap-discuss@...' <onap-discuss@...>; 'subhash.kumar.singh@...' <subhash.kumar.singh@...>; 'PLATANIA, MARCO (MARCO)' <platania@...>
Cc: 'Seshu m' <seshu.kumar.m@...>; 'huangxiangyu' <huangxiangyu5@...>; 'Chenchuanyu' <chenchuanyu@...>
Subject: RE: [onap-discuss] Error code POL5000 : exception on distributing service

 

Hi Guys,

 

I have the same problem, and even recreating the message-router component doesn't work for me (tried several times).

What I found so far is that the SDC health check reports:

 

      "healthCheckComponent": "DE",

      "healthCheckStatus": "DOWN",

      "description": "U-EB cluster is not available"

 

This is because it's failing to register to the topic:

 

2019-04-26T14:08:03.926Z||pool-78-thread-1|registerToTopic||registerToTopic||ERROR|500|PUT http://message-router.onap:3904/topics/SDC-DISTR-NOTIF-TOPIC-AUTO/producers/iPIxkpAMI8qTcQj8 (as iPIxkpAMI8qTcQj8) ...|

2019-04-26T14:08:03.970Z||pool-76-thread-1|registerToTopic||registerToTopic||ERROR|500| --> HTTP/1.1 500 Server Error|

2019-04-26T14:08:03.972Z||pool-76-thread-1|registerToTopic||registerToTopic||ERROR|500|Error occured during access to U-EB Server. Operation: register to topic as producer|
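
For reference, the health fragment further above comes from the SDC health check; a sketch of how to query it directly (the front-end node port here is an assumption based on a default OOM deployment):

curl http://<k8s-node>:30206/sdc1/rest/healthCheck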

 

The topic is available in message-router.

 

But the message-router logs display only the following INFO messages:

 

""2019-04-26 14:27:09,761 [qtp1061804750-94] INFO  org.onap.dmaap.service.TopicRestService - Granting write access to producer [iPIxkpAMI8qTcQj8] for topic SDC-DISTR-NOTIF-TOPIC-AUTO

""2019-04-26 14:27:09,765 [qtp1061804750-94] INFO  org.onap.dmaap.dmf.mr.service.impl.TopicServiceImpl - Granting write access to producer [iPIxkpAMI8qTcQj8] for topic SDC-DISTR-NOTIF-TOPIC-AUTO

""2019-04-26 14:27:09,782 [qtp1061804750-96] INFO  org.onap.dmaap.dmf.mr.security.impl.DMaaPOriginalUebAuthenticator - AUTH-LOG(10.42.207.81): No such API key iPIxkpAMI8qTcQj8

""2019-04-26 14:27:09,783 [qtp1061804750-96] INFO  org.onap.dmaap.dmf.mr.security.impl.DMaaPOriginalUebAuthenticator - AUTH-LOG(10.42.207.81): No such API key iPIxkpAMI8qTcQj8

""2019-04-26 14:27:09,788 [qtp1061804750-94] INFO  org.onap.dmaap.dmf.mr.security.impl.DMaaPOriginalUebAuthenticator - AUTH-LOG(10.42.207.81): No such API key iPIxkpAMI8qTcQj8

""2019-04-26 14:27:09,790 [qtp1061804750-94] INFO  org.onap.dmaap.dmf.mr.security.impl.DMaaPOriginalUebAuthenticator - AUTH-LOG(10.42.207.81): No such API key iPIxkpAMI8qTcQj8

""2019-04-26 14:27:09,919 [qtp1061804750-12] INFO  org.onap.dmaap.util.DMaaPAuthFilter - inside servlet filter Cambria Auth Headers checking before doing other Authentication

 

It looks like some AAF-related authentication problem.
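
One way to check whether that API key actually exists in message router is to query the Cambria apiKeys endpoint; a sketch, using the key value from the logs above:

curl http://message-router.onap:3904/apiKeys/iPIxkpAMI8qTcQj8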

Any hint is really appreciated.

 

Thanks,

Michal

 

PS: I am testing the 3.0.2 release (latest Casablanca).

 

Please note: for sdc-be to come up properly, I needed to change the directory ownership from root to the jetty user; otherwise it complained with:

ERROR in ch.qos.logback.core.rolling.RollingFileAppender[METRICS_ROLLING] - Failed to create parent directories for [/var/lib/jetty/logs/SDC/SDC-BE/metrics.log]

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of subhash kumar singh
Sent: Tuesday, April 23, 2019 9:07 AM
To: PLATANIA, MARCO (MARCO) <platania@...>; onap-discuss@...
Cc: Seshu m <seshu.kumar.m@...>; huangxiangyu <huangxiangyu5@...>; Chenchuanyu <chenchuanyu@...>
Subject: Re: [onap-discuss] Error code POL5000 : exception on distributing service

 

Hello Marco,

 

Thank you for the response.

It worked for me :)

 

--

Regards,

Subhash Kumar Singh

From: PLATANIA, MARCO (MARCO) [mailto:platania@...]
Sent: Monday, April 22, 2019 8:49 PM
To: onap-discuss@...; Subhash Kumar Singh
Cc: Seshu m; huangxiangyu; Chenchuanyu
Subject: Re: [onap-discuss] Error code POL5000 : exception on distributing service

 

DMaaP components have to come up in a specific order: ZooKeeper, Kafka, and finally Message Router. Delete the DMaaP files in NFS when you rebuild it (/dockerdata-nfs/dev-dmaap/* or /dockerdata-nfs/dev-message-router/*, I can't remember…). Also look at the SDC backend; that kind of error typically appears when SDC-BE is not fully up.
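
A rough sketch of that rebuild sequence, assuming the release is named dev-dmaap and the helm deploy plugin is installed (adjust the release name and NFS path to your deployment):

helm del --purge dev-dmaap
rm -rf /dockerdata-nfs/dev-dmaap/*
helm deploy dev-dmaap local/onap -f /root/integration-override.yaml --namespace onap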

 

Marco

 

From: <onap-discuss@...> on behalf of subhash kumar singh <subhash.kumar.singh@...>
Reply-To: "onap-discuss@..." <onap-discuss@...>, "subhash.kumar.singh@..." <subhash.kumar.singh@...>
Date: Monday, April 22, 2019 at 8:43 AM
To: "onap-discuss@..." <onap-discuss@...>
Cc: Seshu m <seshu.kumar.m@...>, huangxiangyu <huangxiangyu5@...>, Chenchuanyu <chenchuanyu@...>
Subject: [onap-discuss] Error code POL5000 : exception on distributing service

 

Hello SDC Team,

 

Could you please help me understand the reason for receiving error POL5000 when distributing a service model?

I receive the following message when distributing the service from the portal:

Error code : POL5000

Status code: 500

Internal Server Error. Please try again later.

 

 

I can see a similar exception in the SO logs as well.

 

Following are logs from SO:

2019-04-22T12:29:44.856Z|trace-#| org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init

2019-04-22T12:29:44.857Z|trace-#| org.onap.sdc.impl.DistributionClientImpl - get ueb cluster server list from component(configuration file)

2019-04-22T12:29:44.865Z|trace-#| org.onap.sdc.http.SdcConnectorClient - about to perform getServerList. requestId= 2accf8f7-f355-42ef-8455-c9779a5476f0 url= /sdc/v1/artifactTypes

2019-04-22T12:29:44.865Z|trace-#| org.onap.sdc.http.HttpAsdcClient - url to send https://sdc-be.onap:8443/sdc/v1/artifactTypes

2019-04-22T12:29:45.103Z|trace-#| org.onap.sdc.http.HttpAsdcClient - GET Response Status 200

2019-04-22T12:29:45.104Z|trace-#| org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT, HEAT_ARTIFACT, HEAT_ENV, HEAT_NESTED, HEAT_NET, HEAT_VOL, OTHER, TOSCA_CSAR, VF_MODULES_METADATA] were validated with ASDC server

2019-04-22T12:29:45.104Z|trace-#| org.onap.sdc.impl.DistributionClientImpl - create keys

2019-04-22T12:29:45.104Z|trace-#| com.att.nsa.apiClient.http.HttpClient - POST http://message-router.onap:3904/apiKeys/create will send credentials over a clear channel.

2019-04-22T12:29:45.210Z|trace-#| org.onap.sdc.http.SdcConnectorClient - about to perform registerAsdcTopics. requestId= 39614524-48f0-4ca3-8407-f7be5eeb10bd url= /sdc/v1/registerForDistribution

2019-04-22T12:29:45.381Z|trace-#| org.onap.sdc.http.SdcConnectorClient - status from ASDC is org.onap.sdc.http.HttpAsdcResponse@1f2d8e1a

2019-04-22T12:29:45.381Z|trace-#| org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=ASDC_SERVER_PROBLEM, responseMessage=ASDC server problem]

2019-04-22T12:29:45.381Z|trace-#| org.onap.sdc.http.SdcConnectorClient - error from ASDC is: {

  "requestError": {

    "policyException": {

      "messageId": "POL5000",

      "text": "Error: Internal Server Error. Please try again later.",

      "variables": []

    }

  }

}

 

 

I tried to reinstall dmaap but it did not solve the problem.

As part of my investigation I tried to find the related logs (e.g., DistributionEngineClusterHealth.java), but I am not sure where I can see these logs or how to enable trace logging.

 

--

Regards,

Subhash Kumar Singh

FNOSS, ONAP

 

Huawei Technologies India Pvt. Ltd.

Survey No. 37, Next to EPIP Area, Kundalahalli, Whitefield

Bengaluru-560066, Karnataka

Tel: + 91-80-49160700 Ext 70992 II Mob: 8050101106 Email: subhash.kumar.singh@...  

 


 


 

 

 

  


Re: Restconf API authentication: User/password fail #appc

Steve Siani <alphonse.steve.siani.djissitchi@...>
 

Thank you very much Taka!

Regards,
Steve


Re: AAI not coming up in casablance #aai

Yogi Knss <knss.yogi@...>
 

Thanks Jimmy Forsyth,

We have reduced the replica set from 3 to 1, because I read somewhere that even 1 is enough. Maybe I overlooked something.
I will change the config and check again.

Regards,
Yogi.


Re: Restconf API authentication: User/password fail #appc

Taka Cho
 

All ONAP components are tied to AAF.

The minimum set to make APPC function: you need AAF (authentication/authorization), AAI (inventory), and DMaaP (optional; you can use Postman, curl, or apidoc to post the LCM API instead).

 

On docker compose, at least you need to install AAI.

 

Taka

 

From: Steve Siani <alphonse.steve.siani.djissitchi@...>
Sent: Monday, April 29, 2019 11:41 PM
To: CHO, TAKAMUNE <tc012c@...>; onap-discuss@...
Subject: Re: [onap-discuss] Restconf API authentication: User/password fail #appc

 

No, I am trying to have a standalone app.

I just need to play with APPC; are there any project dependencies?

Regards,
Steve


Today's SDC weekly

Ofir Sonsino <ofir.sonsino@...>
 


Hi,

Today's meeting will start 30 minutes later than usual. Sorry for the inconvenience.

Thanks
Ofir


Sent from my Samsung Galaxy smartphone.


Re: [MACRO Instantiation] Failed caused by Error from SDNC: Unable to generate VM name

MALAKOV, YURIY <ym9479@...>
 

Aniello,

 

Are you planning to use the CDS automated naming (via the naming microservice) and IP assignment (via Netbox) solution, or the preload solution, for macro orchestration?

 

In the TOSCA model, for the naming policy, you can set the policy instance name as the default; that should allow SDNC to use the default policy for auto name generation. But before you try to solve this issue, you want to address the question above. Depending on the answer, the solution will be different.

 

 

 

 

 

Yuriy Malakov

SDN-CP Lead Engineer

732-420-3030, Q-Chat

Yuriy.Malakov@...

 

From: MALINCONICO ANIELLO PAOLO <aniello.malinconico@...>
Sent: Tuesday, April 30, 2019 8:22 AM
To: MALAKOV@...; MALAKOV, YURIY <ym9479@...>; onap-discuss@...
Subject: Re: [onap-discuss] [MACRO Instantiation] Failed caused by Error from SDNC: Unable to generate VM name

 

Hi Yuriy, yes, I have executed ./demo-k8s.sh onap distributeVFWNG .... But I do not want to replicate the vFWNG use case.
I want to understand how to deploy a service with the MACRO flag, so I am starting by deploying the simplest custom service (composed of a single VM), but I keep running into the same issue about naming-policy-generate-name ...
So I think my issue is not related to the vFWNG use case.

Thanks,
Aniello Paolo Malinconico


Re: Naming conflict with cloud region

Marco Platania
 

There were two cloud regions with the same name but different cloud owners, and VID seemed confused. I can't really reproduce the error because I cleaned up the environment. If you want to reproduce it in your local lab, you can use the JSON object below; it is very similar to the AAI cloud region configuration that we had yesterday. We ran GET https://{{aai_ip}}:{{aai_port}}/aai/v13/cloud-infrastructure/cloud-regions?cloud-region-id=RegionOne and got something like this:

 

{

    "cloud-region": [

        {

            "cloud-owner": "CloudOwner",

            "cloud-region-id": "RegionOne",

            "cloud-type": "openstack",

            "cloud-region-version": "v2.5",

            "identity-url": "http://10.43.111.6/api/multicloud/v0/CloudOwner_RegionTwo/identity/v2.0/tokens",

            "cloud-zone": "bm-2",

            "complex-name": "complex-2",

            "resource-version": "1546461316786",

            "relationship-list": {

                "relationship": [

                    {

                        "related-to": "complex",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v13/cloud-infrastructure/complexes/complex/clli2",

                        "relationship-data": [

                            {

                                "relationship-key": "complex.physical-location-id",

                                "relationship-value": "clli2"

                            }

                        ]

                    }

                ]

            }

        },

        {

            "cloud-owner": "CloudOwner2",

            "cloud-region-id": "RegionOne",

            "cloud-type": "openstack",

            "owner-defined-type": "owner type",

            "cloud-region-version": "v2.5",

            "cloud-zone": "bm-1",

            "resource-version": "1546461297568",

            "relationship-list": {

                "relationship": [

                    {

                        "related-to": "complex",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v13/cloud-infrastructure/complexes/complex/clli1",

                        "relationship-data": [

                            {

                                "relationship-key": "complex.physical-location-id",

                                "relationship-value": "clli1"

                            }

                        ]

                    }

                ]

            }

        }

    ]

}
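
For reference, the same query as a curl sketch (the basic-auth credentials and headers below are the common AAI defaults, so treat them as assumptions for your deployment):

curl -sk -u AAI:AAI -H 'X-FromAppId: onap-discuss' -H 'X-TransactionId: test-1' -H 'Accept: application/json' "https://{{aai_ip}}:{{aai_port}}/aai/v13/cloud-infrastructure/cloud-regions?cloud-region-id=RegionOne"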

 

Marco

 

From: "Stern, Ittay" <ittay.stern@...>
Date: Tuesday, April 30, 2019 at 3:36 AM
To: "onap-discuss@..." <onap-discuss@...>, "PLATANIA, MARCO (MARCO)" <platania@...>
Subject: RE: Naming conflict with cloud region

 

You’re saying that “VID is receiving multiple cloud regions with the same name and doesn’t know which one to pick”.

Can you explain your scenario?

 

Dublin's VID distinguishes regions with different owners (screenshot omitted):

 

 

 

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of PLATANIA, MARCO
Sent: Monday, April 29, 2019 6:56 PM
To: onap-discuss@...
Subject: [onap-discuss] Naming conflict with cloud region
Importance: High

 


All,

 

Who created a new cloud region called RegionOne, with cloud owner CloudOwner2, in the Integration-SB-00 lab? See the AAI object below. Please let us know, because we ended up with a naming conflict and we aren't able to spin up new VNFs in that lab. VID is receiving multiple cloud regions with the same name and doesn't know which one to pick.

 

For future reference, please name the cloud region differently, for example RegionFour (I think robot has a script that creates RegionTwo and RegionThree), and then link that new region to the actual OpenStack RegionOne in the catalogdb database, cloud_sites table, in the MariaDB Galera cluster (pick one of the 3 cluster nodes; updates will propagate).

 

MariaDB [catalogdb]> select * from cloud_sites;

+-------------------+-----------+---------------------+---------------+-----------+-------------+----------+--------------+-----------------+---------------------+---------------------+

| ID                | REGION_ID | IDENTITY_SERVICE_ID | CLOUD_VERSION | CLLI      | CLOUDIFY_ID | PLATFORM | ORCHESTRATOR | LAST_UPDATED_BY | CREATION_TIMESTAMP  | UPDATE_TIMESTAMP    |

+-------------------+-----------+---------------------+---------------+-----------+-------------+----------+--------------+-----------------+---------------------+---------------------+

| Chicago           | ORD       | RAX_KEYSTONE        | 2.5           | ORD       | NULL        | NULL     | NULL         | FLYWAY          | 2019-04-24 19:54:15 | 2019-04-24 19:54:15 |

| Dallas            | DFW       | RAX_KEYSTONE        | 2.5           | DFW       | NULL        | NULL     | NULL         | FLYWAY          | 2019-04-24 19:54:15 | 2019-04-24 19:54:15 |

| DEFAULT           | RegionOne | DEFAULT_KEYSTONE    | 2.5           | RegionOne | NULL        | NULL     | NULL         | FLYWAY          | 2019-04-24 19:54:15 | 2019-04-24 19:54:15 |

| Northern Virginia | IAD       | RAX_KEYSTONE        | 2.5           | IAD       | NULL        | NULL     | NULL         | FLYWAY          | 2019-04-24 19:54:15 | 2019-04-24 19:54:15 |

| RegionOne         | RegionOne | DEFAULT_KEYSTONE    | 2.5           | RegionOne | NULL        | NULL     | NULL         | FLYWAY          | 2019-04-24 19:54:15 | 2019-04-24 19:54:15 |

+-------------------+-----------+---------------------+---------------+-----------+-------------+----------+--------------+-----------------+---------------------+---------------------+

 

The ID column is your cloud region tag (DO NOT USE RegionOne !!!), while the REGION_ID and CLLI columns refer to your actual OpenStack region; there you should have RegionOne.
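
As a sketch of that linkage (hypothetical values, run against catalogdb on one of the Galera nodes), a new RegionFour tag pointing at the actual OpenStack RegionOne would look like this:

MariaDB [catalogdb]> INSERT INTO cloud_sites (ID, REGION_ID, IDENTITY_SERVICE_ID, CLOUD_VERSION, CLLI) VALUES ('RegionFour', 'RegionOne', 'DEFAULT_KEYSTONE', '2.5', 'RegionOne');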

 

Thanks,

Marco

 

 

{

            "cloud-owner": "CloudOwner2",

            "cloud-region-id": "RegionOne",

            "cloud-type": "openstack",

            "owner-defined-type": "t1",

            "cloud-region-version": "titanium_cloud",

            "identity-url": "http://msb-iag.onap:80/api/multicloud-titaniumcloud/v1/CloudOwner2/RegionOne/identity/v2.0",

            "cloud-zone": "z1",

            "complex-name": "clli1",

            "cloud-extra-info": "",

            "orchestration-disabled": false,

            "in-maint": false,

            "resource-version": "1556514985452",

            "relationship-list": {

                "relationship": [

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-05",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-05"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-00",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-00"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-03",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-03"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-08",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-08"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-01",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-01"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-09",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-09"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-06",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-06"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-04",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-04"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-02",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-02"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-12",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-12"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-07",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-07"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "pserver",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/pservers/pserver/CloudOwner2_RegionOne_compute-10",

                        "relationship-data": [

                            {

                                "relationship-key": "pserver.hostname",

                                "relationship-value": "CloudOwner2_RegionOne_compute-10"

                            }

                        ],

                        "related-to-property": [

                            {

                                "property-key": "pserver.pserver-name2"

                            }

                        ]

                    },

                    {

                        "related-to": "complex",

                        "relationship-label": "org.onap.relationships.inventory.LocatedIn",

                        "related-link": "/aai/v16/cloud-infrastructure/complexes/complex/clli1",

                        "relationship-data": [

                            {

                                "relationship-key": "complex.physical-location-id",

                                "relationship-value": "clli1"

                            }

                        ]

                    }

                ]

            }


Re: [SO] CrashLoopBackOff issue while upgrading the SO image from Casablanca to Dublin

Brian Freeman
 

Dublin SO OOM charts, like override.yaml, have a lot of other changes. It's not just the image reference update.

 

I would suggest copying oom/kubernetes/so and its child directories from a fresh clone into your current oom/kubernetes/so tree, then running make so; make onap and redeploying SO.

Or use some other method to pick up the changes made since Casablanca to the configuration data in oom/kubernetes/so/*.
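
A rough sketch of that refresh, assuming your checkout lives in ~/oom and the master branch currently carries the Dublin charts:

git clone --recurse-submodules https://gerrit.onap.org/r/oom /tmp/oom-dublin
cp -a /tmp/oom-dublin/kubernetes/so/. ~/oom/kubernetes/so/
cd ~/oom/kubernetes && make so && make onap
# then delete and redeploy the SO release as before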

 

Brian

 

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Sunilkumar Shivangouda Biradar
Sent: Tuesday, April 30, 2019 7:35 AM
To: Seshu m <seshu.kumar.m@...>; onap-discuss@...
Cc: Sanchita Pathak <sanchita@...>
Subject: Re: [onap-discuss] [SO] CrashLoopBackOff issue while upgrading the SO image from Casablanca to Dublin

 

Hi Seshu,

 

Thanks for the information.

I have changed the version from 1.5.0 to 1.4.0 in the values.yaml files.

But the pods are still in CrashLoopBackOff state.

 

Also, I am using so-mariadb version 10.1.38; is that the correct version for mariadb?

Attached debug logs.

 

so-so-7c8568ccb7-jn876                                            1/1       Running            0          1h

so-so-bpmn-infra-57b585cb45-s68sw                                 0/1       CrashLoopBackOff   18         1h

so-so-catalog-db-adapter-7f55755b7d-x56lk                         0/1       CrashLoopBackOff   20         1h

so-so-mariadb-57b6c979f6-tcj2c                                    1/1       Running            0          1h

so-so-monitoring-7867ff7475-2h9fj                                 1/1       Running            0          1h

so-so-openstack-adapter-6bb58c4b98-sjprc                          0/1       CrashLoopBackOff   20         1h

so-so-request-db-adapter-864bb584fd-t6wjm                         1/1       Running            0          1h

so-so-sdc-controller-55994bfc6-s7sbk                              1/1       Running            0          1h

so-so-sdnc-adapter-bfd96bcbf-w5tx9                                1/1       Running            0          1h

so-so-vfc-adapter-6cc6f4cb5f-xpc6d                                0/1       CrashLoopBackOff   20         1h

 

 

Regards,

Sunil B

 

From: Seshu m <seshu.kumar.m@...>
Sent: Tuesday, April 30, 2019 1:08 PM
To: onap-discuss@...; Sunilkumar Shivangouda Biradar <SB00577584@...>
Cc: Sanchita Pathak <sanchita@...>
Subject: RE: [onap-discuss] [SO] CrashLoopBackOff issue while upgrading the SO image from Casablanca to Dublin

 

Hi Sunil

 

We have 1.5.x as the main branch, targeting the future release, and 1.4.x for Dublin.

There are some intermediate fixes going into the main (development) branch that could pose issues.

Please try the Dublin version 1.4.0 and let us know if it's fine.

 

Thanks and Regards,

M Seshu Kumar

Senior System Architect

Single OSS India Branch Department. S/W BU.

Huawei Technologies India Pvt. Ltd.

Survey No. 37, Next to EPIP Area, Kundalahalli, Whitefield

Bengaluru-560066, Karnataka.

Tel: + 91-80-49160700 , Mob: 9845355488


 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Sunilkumar Shivangouda Biradar
Sent: April 30, 2019 12:21
To: onap-discuss@...
Cc: Sanchita Pathak <sanchita@...>
Subject: [onap-discuss] [SO] CrashLoopBackOff issue while upgrading the SO image from Casablanca to Dublin

 

Hi SO Team,

 

There is an issue while upgrading SO from the Casablanca image to the Dublin image.

A few pods are in CrashLoopBackOff state.

 

I have changed the values.yaml files as below

 

oom/kubernetes/so/charts/so-catalog-db-adapter/values.yaml:30:image: onap/so/catalog-db-adapter:1.5.0
oom/kubernetes/so/charts/so-monitoring/values.yaml:35:image: onap/so/so-monitoring:1.5.0
oom/kubernetes/so/charts/so-vfc-adapter/values.yaml:30:image: onap/so/vfc-adapter:1.5.0
oom/kubernetes/so/charts/so-request-db-adapter/values.yaml:30:image: onap/so/request-db-adapter:1.5.0
oom/kubernetes/so/charts/so-bpmn-infra/values.yaml:30:image: onap/so/bpmn-infra:1.5.0
oom/kubernetes/so/charts/so-sdc-controller/values.yaml:30:image: onap/so/sdc-controller:1.5.0
oom/kubernetes/so/charts/so-sdnc-adapter/values.yaml:30:image: onap/so/sdnc-adapter:1.5.0
oom/kubernetes/so/charts/so-openstack-adapter/values.yaml:29:image: onap/so/openstack-adapter:1.5.0

oom/kubernetes/so/values.yaml:30:image: onap/so/api-handler-infra:1.5.0

oom/kubernetes/so/charts/so-mariadb/values.yaml:35:image: mariadb:10.1.38

 

After changing these, I ran the commands below:

  1. make all onap
  2. helm del --purge so
  3. rm -rf /docker-nfs/so
  4. helm install local/so -n so --namespace onap -f /root/integration-override-so.yaml

 

so-so-7b659df6f8-lcht2                                            1/1       Running            0          1h

so-so-bpmn-infra-796477fbd6-tb7pd                                 0/1       CrashLoopBackOff   14         1h

so-so-catalog-db-adapter-575f664674-9jn4t                         1/1       Running            0          1h

so-so-mariadb-57b6c979f6-rk8n5                                    1/1       Running            0          1h

so-so-monitoring-5dbdc97974-72mpz                                 1/1       Running            0          1h

so-so-openstack-adapter-58bf9dd67-xkcnt                           0/1       CrashLoopBackOff   15         1h

so-so-request-db-adapter-6874d4459f-sg88t                         1/1       Running            0          1h

so-so-sdc-controller-747bb56fd9-lcw5v                             1/1       Running            0          1h

so-so-sdnc-adapter-5455b7b9f4-65q9b                               1/1       Running            0          1h

so-so-vfc-adapter-6f874bddc6-g4544                                0/1       CrashLoopBackOff   16         1h

 

Need help in resolving this issue.

 

Regards,

 

Sunil B

Electronic city phase II,

Bangalore 560 100.

Mob: +91 8861180624 
SB00577584@...

   

 


 

 



Re: [MACRO Instantiation] Failed caused by Error from SDNC: Unable to generate VM name

Marco Platania
 

If you are not using CDS (as I understand), make sure that your heat templates (yaml and env files) don't have any references to sdnc_* variables.
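
A quick way to check is to grep the template directory (the path here is hypothetical):

grep -rn "sdnc_" /path/to/your/heat/templates/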

 

Marco

 

From: <onap-discuss@...> on behalf of MALINCONICO ANIELLO PAOLO <aniello.malinconico@...>
Reply-To: "onap-discuss@..." <onap-discuss@...>, "aniello.malinconico@..." <aniello.malinconico@...>
Date: Tuesday, April 30, 2019 at 8:22 AM
To: "MALAKOV@..." <MALAKOV@...>, "MALAKOV, YURIY" <ym9479@...>, "onap-discuss@..." <onap-discuss@...>
Subject: Re: [onap-discuss] [MACRO Instantiation] Failed caused by Error from SDNC: Unable to generate VM name

 

Hi Yuriy, yes, I have executed ./demo-k8s.sh onap distributeVFWNG .... But I do not want to replicate the vFWNG use case.
I want to understand how to deploy a service with the MACRO flag, so I am starting by deploying the simplest custom service (composed of a single VM), but I keep running into the same issue about naming-policy-generate-name ...
So I think my issue is not related to the vFWNG use case.

Thanks,
Aniello Paolo Malinconico


Re: [MACRO Instantiation] Failed caused by Error from SDNC: Unable to generate VM name

MALINCONICO ANIELLO PAOLO
 

Hi Yuriy, yes, I have executed ./demo-k8s.sh onap distributeVFWNG .... But I do not want to replicate the vFWNG use case.
I want to understand how to deploy a service with the MACRO flag, so I am starting by deploying the simplest custom service (composed of a single VM), but I keep running into the same issue about naming-policy-generate-name ...
So I think my issue is not related to the vFWNG use case.

Thanks,
Aniello Paolo Malinconico


Re: [MACRO Instantiation] Failed caused by Error from SDNC: Unable to generate VM name

MALAKOV, YURIY <ym9479@...>
 

Aniello,

 

Did you execute the robot script to distribute the vFW Package?

 

./demo-k8s.sh onap distributeVFWNG

 

 

Please refer to the latest video recording and postman collection as FYI:

https://wiki.onap.org/display/DW/Video+Demo+for+vFW+CDS+Casablanca

 

 

 

Yuriy Malakov

SDN-CP Lead Engineer

732-420-3030, Q-Chat

Yuriy.Malakov@...

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of MALINCONICO ANIELLO PAOLO
Sent: Tuesday, April 30, 2019 6:16 AM
To: onap-discuss@...
Subject: [onap-discuss] [MACRO Instantiation] Failed caused by Error from SDNC: Unable to generate VM name

 

Hi all,
I am trying to instantiate a simple service using the MACRO flag for one-click instantiation (see https://wiki.onap.org/display/DW/vFW+CDS+Casablanca?focusedCommentId=60885317#vFWCDSCasablanca-DataDictionary).
The service is composed of only one VSP, which represents a virtual machine.
After the distribution, I sent the service creation API request via Postman:

POST http://{{mso_ip}}:30277/onap/so/infra/serviceInstantiation/v7/serviceInstances

{

  "requestDetails": {

    "subscriberInfo": {

      "globalSubscriberId": "TIM"

    },

    "requestInfo": {

      "suppressRollback": true,

      "productFamilyId": "a9a77d5a-123e-4ca2-9eb9-0b015d2ee0fb",

      "requestorId": "adt",

      "instanceName": "svc_20190430_macro",

      "source": "VID"

    },

    "cloudConfiguration": {

      "lcpCloudRegionId": "RegionOne",

      "tenantId": "34f1fe41d1a0483dbd1aa94c26dc5545"

    },

    "requestParameters": {

      "subscriptionServiceType": "casablanca",

      "userParams": [

        {

          "Homing_Solution": "none"

        },

        {

          "service": {

            "instanceParams": [

               

            ],

            "instanceName": "svc_20190430_macro_1",

            "resources": {

              "vnfs": [

                {

                  "modelInfo": {

                    "modelName": "20190220_vsp_vm",

                    "modelVersionId": "0946fde0-8042-4f73-93f0-aeca50ede63a",

                    "modelInvariantUuid": "8be4a024-4433-49b8-bf7f-de3750496ca3",

                    "modelVersion": "1.0",

                    "modelCustomizationId": "10acbb88-41b8-44f9-88f3-568b1f4a363e",

                    "modelInstanceName": "20190220_vsp_vm 0"

                  },

                  "cloudConfiguration": {

                    "lcpCloudRegionId": "RegionOne",

                    "tenantId": "34f1fe41d1a0483dbd1aa94c26dc5545"

                  },

                  "platform": {

                    "platformName": "test"

                  },

                  "lineOfBusiness": {

                    "lineOfBusinessName": "someValue"

                  },

                  "productFamilyId": "a9a77d5a-123e-4ca2-9eb9-0b015d2ee0fb",

                  "instanceName": "vnf_20190430_vm_1",

                  "instanceParams": [

                    {

         "image_id": "ubuntu-server-14.04",

                     "vf_module_id": "vfmodule_vm_20190301_01",

           "vnf_id": "vm_20190301_01"

                    }

                  ],

                  

                  "vfModules": [

                    {

                      "modelInfo": {

                        "modelName": "20190220VspVm..one_vm_fip_nr_pk..module-0",

                        "modelVersionId": "6e4284f0-588c-4fec-8f24-c9de5d916489",

                        "modelInvariantUuid": "530602ad-f5ff-4775-9ba2-26c4b90d188b",

                        "modelVersion": "1",

                        "modelCustomizationId": "aed63e91-f0d6-4bc8-b3b7-2900d5519e9f"

                      },

                      "instanceName": "vf_module_20190430_1",

                      "instanceParams": [

                      ]

                    }

                  ]

                }

              ]

            },

            "modelInfo": {

              "modelVersion": "1.0",

      "modelVersionId": "c0b00f51-7d41-4668-886f-0e915dde2a96",

      "modelInvariantId": "a4ef9116-44ff-4616-917e-f4d4b77528bc",

      "modelName": "svc_20190430_macro",

              "modelType": "service"

            }

          }

        }

      ],

      "aLaCarte": false

    },

    "project": {

      "projectName": "Project-Demonstration"

    },

    "owningEntity": {

      "owningEntityId": "24ef5425-bec4-4fa3-ab03-c0ecf4ebac96",

      "owningEntityName": "OE-Demonstration-1"

    },

    "modelInfo": {

      "modelVersion": "1.0",

      "modelVersionId": "c0b00f51-7d41-4668-886f-0e915dde2a96",

      "modelInvariantId": "a4ef9116-44ff-4616-917e-f4d4b77528bc",

      "modelName": "svc_20190430_macro",

      "modelType": "service"

    }

  }

}


But it fails with the following error:

2019-04-30T09:44:30.124Z|e8b1ea89-8f99-4a40-8396-13147d7731a9| org.onap.so.client.sdnc.SDNCClient - ResponseCode: 500 ResponseMessage: Unable to generate VM name: naming-policy-generate-name: input.policy-instance-name is not set and input.policy is ASSIGN

2019-04-30T09:44:30.124Z|e8b1ea89-8f99-4a40-8396-13147d7731a9| org.onap.so.client.sdnc.SDNCClient - RA_RESPONSE_FROM_SDNC

2019-04-30T09:44:30.125Z|e8b1ea89-8f99-4a40-8396-13147d7731a9| org.onap.so.client.exception.ExceptionBuilder - Error from SDNC: Unable to generate VM name: naming-policy-generate-name: input.policy-instance-name is not set and input.policy is ASSIGN

org.onap.so.client.exception.BadResponseException: Error from SDNC: Unable to generate VM name: naming-policy-generate-name: input.policy-instance-name is not set and input.policy is ASSIGN

 

 

Can anyone help me with this issue?
Thanks a lot,

Aniello Paolo Malinconico


Re: [SO] CrashLoopBackOff issue while upgrading the SO image from Casablanca to Dublin

Sunilkumar Shivangouda Biradar
 

Hi Seshu,

 

Thanks for the information.

I have changed the version from 1.5.0 to 1.4.0 in the values.yaml files.

But the pods are still in CrashLoopBackOff state.

 

Also, I am using so-mariadb version 10.1.38; is that the correct version for mariadb?

Attached debug logs.

 

so-so-7c8568ccb7-jn876                                            1/1       Running            0          1h

so-so-bpmn-infra-57b585cb45-s68sw                                 0/1       CrashLoopBackOff   18         1h

so-so-catalog-db-adapter-7f55755b7d-x56lk                         0/1       CrashLoopBackOff   20         1h

so-so-mariadb-57b6c979f6-tcj2c                                    1/1       Running            0          1h

so-so-monitoring-7867ff7475-2h9fj                                 1/1       Running            0          1h

so-so-openstack-adapter-6bb58c4b98-sjprc                          0/1       CrashLoopBackOff   20         1h

so-so-request-db-adapter-864bb584fd-t6wjm                         1/1       Running            0          1h

so-so-sdc-controller-55994bfc6-s7sbk                              1/1       Running            0          1h

so-so-sdnc-adapter-bfd96bcbf-w5tx9                                1/1       Running            0          1h

so-so-vfc-adapter-6cc6f4cb5f-xpc6d                                0/1       CrashLoopBackOff   20         1h
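
For reference, a CrashLoopBackOff pod can be inspected along these lines (a sketch; the pod name is taken from the listing above, and --previous shows the output of the last crashed container):

kubectl -n onap describe pod so-so-bpmn-infra-57b585cb45-s68sw      # recent events and last exit code
kubectl -n onap logs so-so-bpmn-infra-57b585cb45-s68sw --previous   # logs from the previous, crashed run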

 

 

Regards,

Sunil B

 

From: Seshu m <seshu.kumar.m@...>
Sent: Tuesday, April 30, 2019 1:08 PM
To: onap-discuss@...; Sunilkumar Shivangouda Biradar <SB00577584@...>
Cc: Sanchita Pathak <sanchita@...>
Subject: RE: [onap-discuss] [SO] CrashLoopBackOff issue while upgrading the SO image from Casablanca to Dublin

 

Hi Sunil

 

We have 1.5.x as the main branch targeting the future release and 1.4.x for Dublin.

There are some intermediate fixes going into the main (development) branch that could cause issues.

Please try the Dublin 1.4.0 images and let us know if that works.

 

Thanks and Regards,

M Seshu Kumar

Senior System Architect

Single OSS India Branch Department. S/W BU.

Huawei Technologies India Pvt. Ltd.

Survey No. 37, Next to EPIP Area, Kundalahalli, Whitefield

Bengaluru-560066, Karnataka.

Tel: + 91-80-49160700 , Mob: 9845355488


 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Sunilkumar Shivangouda Biradar
Sent: Tuesday, April 30, 2019 12:21 PM
To: onap-discuss@...
Cc: Sanchita Pathak <sanchita@...>
Subject: [onap-discuss] [SO] CrashLoopBackOff issue while upgrading the SO image from Casablanca to Dublin

 

Hi SO Team,

 

We hit an issue while upgrading SO from the Casablanca image to the Dublin image.

A few pods are in CrashLoopBackOff state.

 

I have changed the values.yaml files as below:

 

oom/kubernetes/so/charts/so-catalog-db-adapter/values.yaml:30:image: onap/so/catalog-db-adapter:1.5.0
oom/kubernetes/so/charts/so-monitoring/values.yaml:35:image: onap/so/so-monitoring:1.5.0
oom/kubernetes/so/charts/so-vfc-adapter/values.yaml:30:image: onap/so/vfc-adapter:1.5.0
oom/kubernetes/so/charts/so-request-db-adapter/values.yaml:30:image: onap/so/request-db-adapter:1.5.0
oom/kubernetes/so/charts/so-bpmn-infra/values.yaml:30:image: onap/so/bpmn-infra:1.5.0
oom/kubernetes/so/charts/so-sdc-controller/values.yaml:30:image: onap/so/sdc-controller:1.5.0
oom/kubernetes/so/charts/so-sdnc-adapter/values.yaml:30:image: onap/so/sdnc-adapter:1.5.0
oom/kubernetes/so/charts/so-openstack-adapter/values.yaml:29:image: onap/so/openstack-adapter:1.5.0

oom/kubernetes/so/values.yaml:30:image: onap/so/api-handler-infra:1.5.0

oom/kubernetes/so/charts/so-mariadb/values.yaml:35:image: mariadb:10.1.38
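
If helpful, the tag switch can be done in one pass with standard grep/sed (a sketch; paths as listed above, the 1.5.0 -> 1.4.0 mapping follows the advice earlier in this thread, and the mariadb line is left untouched):

grep -rl --include=values.yaml 'image: onap/so/' oom/kubernetes/so \
  | xargs sed -i 's|\(image: onap/so/[a-z-]*\):1\.5\.0|\1:1.4.0|'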

 

After changing these, have run below commands

1.       make all onap

2.       helm del --purge so

3.       rm -rf /docker-nfs/so

4.       helm install local/so -n so --namespace onap -f /root/integration-override-so.yaml
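
Between steps 3 and 4, a sanity check along these lines can confirm the old release and its data are fully gone (a sketch; helm 2 syntax, with the path from step 3):

helm ls --all | grep '^so'               # should print nothing once the purge completed
ls /docker-nfs/so                        # should report 'No such file or directory'
kubectl -n onap get pods | grep so-so-   # after step 4: watch the fresh pods come up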

 

so-so-7b659df6f8-lcht2                                            1/1       Running            0          1h

so-so-bpmn-infra-796477fbd6-tb7pd                                 0/1       CrashLoopBackOff   14         1h

so-so-catalog-db-adapter-575f664674-9jn4t                         1/1       Running            0          1h

so-so-mariadb-57b6c979f6-rk8n5                                    1/1       Running            0          1h

so-so-monitoring-5dbdc97974-72mpz                                 1/1       Running            0          1h

so-so-openstack-adapter-58bf9dd67-xkcnt                           0/1       CrashLoopBackOff   15         1h

so-so-request-db-adapter-6874d4459f-sg88t                         1/1       Running            0          1h

so-so-sdc-controller-747bb56fd9-lcw5v                             1/1       Running            0          1h

so-so-sdnc-adapter-5455b7b9f4-65q9b                               1/1       Running            0          1h

so-so-vfc-adapter-6f874bddc6-g4544                                0/1       CrashLoopBackOff   16         1h

 

Need help in resolving this issue.

 

Regards,

 

Sunil B

Electronic city phase II,

Bangalore 560 100.

Mob: +91 8861180624 
SB00577584@...

   

 


 

 



Re: AAI not coming up in Casablanca #aai

Jimmy Forsyth
 

Hi, Yogi,

 

The createDbSchema job will keep trying to load the JanusGraph schema into Cassandra until it succeeds; if AAI's Cassandra is unavailable for any reason, you will see an error like that. Check your AAI Cassandra cluster: in the OOM config we specify a 3-node cluster, and I only see 1 node in your output below, which seems strange. Perhaps your environment is just taking a long time to bring Cassandra up.
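
A quick way to verify (a sketch; the namespace and pod name are taken from your listing below, and 3 replicas is the OOM default assumed here):

kubectl -n onap get statefulset | grep aai-cassandra              # ready count should match 3 desired replicas
kubectl -n onap exec tcs-aai-aai-cassandra-0 -- nodetool status   # every node should report UN (Up/Normal)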

 

Thanks,

jimmy

 

From: <onap-discuss@...> on behalf of Yogi Knss <knss.yogi@...>
Reply-To: "onap-discuss@..." <onap-discuss@...>, "knss.yogi@..." <knss.yogi@...>
Date: Tuesday, April 30, 2019 at 4:39 AM
To: "onap-discuss@..." <onap-discuss@...>
Subject: [onap-discuss] AAI not coming up in Casablanca #aai

 

Hi,

We are trying to install Casablanca on Kubernetes and are facing issues with AAI. AAI is not able to run because of an issue with graphadmin. Can you let us know if we have missed some configuration? We followed the same sequence that you have specified above. All other components (consul, msb, dmaap, dcaegen2, aaf, robot) are up and running.

 

The following error is seen in the "aai-aai-graphadmin-create-db-schema-mjc4z" job:

Project Build Version: 1.0.1
chown: changing ownership of '/opt/app/aai-graphadmin/resources/application.properties': Read-only file system
chown: changing ownership of '/opt/app/aai-graphadmin/resources/etc/appprops/aaiconfig.properties': Read-only file system
chown: changing ownership of '/opt/app/aai-graphadmin/resources/etc/appprops/janusgraph-cached.properties': Read-only file system
chown: changing ownership of '/opt/app/aai-graphadmin/resources/etc/appprops/janusgraph-realtime.properties': Read-only file system
chown: changing ownership of '/opt/app/aai-graphadmin/resources/etc/auth/aai_keystore': Read-only file system
chown: changing ownership of '/opt/app/aai-graphadmin/resources/localhost-access-logback.xml': Read-only file system
chown: changing ownership of '/opt/app/aai-graphadmin/resources/logback.xml': Read-only file system

Wed Apr 24 16:09:21 IST 2019 Starting /opt/app/aai-graphadmin/bin/createDBSchema.sh
---- NOTE --- about to open graph (takes a little while)--------;
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:50)
at org.springframework.boot.loader.PropertiesLauncher.main(PropertiesLauncher.java:595)
Caused by: java.lang.ExceptionInInitializerError
at org.onap.aai.dbmap.AAIGraph.getInstance(AAIGraph.java:103)
at org.onap.aai.schema.GenTester.main(GenTester.java:126)
... 8 more
Caused by: java.lang.RuntimeException: Failed to instantiate graphs
at org.onap.aai.dbmap.AAIGraph.<init>(AAIGraph.java:85)
at org.onap.aai.dbmap.AAIGraph.<init>(AAIGraph.java:57)
at org.onap.aai.dbmap.AAIGraph$Helper.<clinit>(AAIGraph.java:90)
... 10 more
Caused by: org.janusgraph.core.JanusGraphException: Could not execute operation due to backend exception
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:57)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:159)
at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration.get(KCVSConfiguration.java:100)
at org.janusgraph.diskstorage.configuration.BasicConfiguration.isFrozen(BasicConfiguration.java:106)
at org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1394)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:164)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:133)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:113)
at org.onap.aai.dbmap.AAIGraph.loadGraph(AAIGraph.java:115)
at org.onap.aai.dbmap.AAIGraph.<init>(AAIGraph.java:82)
... 12 more
Caused by: org.janusgraph.diskstorage.TemporaryBackendException: Could not successfully complete backend operation due to repeated temporary exceptions after PT1M
at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:101)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:55)
... 21 more
Caused by: org.janusgraph.diskstorage.TemporaryBackendException: Temporary failure in storage backend
at org.janusgraph.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getNamesSlice(AstyanaxKeyColumnValueStore.java:161)
at org.janusgraph.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getNamesSlice(AstyanaxKeyColumnValueStore.java:115)
at org.janusgraph.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getSlice(AstyanaxKeyColumnValueStore.java:104)
at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration$1.call(KCVSConfiguration.java:103)
at org.janusgraph.diskstorage.configuration.backend.KCVSConfiguration$1.call(KCVSConfiguration.java:100)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:148)
at org.janusgraph.diskstorage.util.BackendOperation$1.call(BackendOperation.java:162)
at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:69)
... 22 more
Caused by: com.netflix.astyanax.connectionpool.exceptions.TokenRangeOfflineException: TokenRangeOfflineException: [host=10.233.66.28(10.233.66.28):9160, latency=1(1), attempts=1]UnavailableException()
at com.netflix.astyanax.thrift.ThriftConverter.ToConnectionPoolException(ThriftConverter.java:165)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:65)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:28)
at com.netflix.astyanax.thrift.ThriftSyncConnectionFactoryImpl$ThriftConnection.execute(ThriftSyncConnectionFactoryImpl.java:153)
at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:119)
at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:352)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$4.execute(ThriftColumnFamilyQueryImpl.java:538)
at org.janusgraph.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getNamesSlice(AstyanaxKeyColumnValueStore.java:159)
... 29 more
Caused by: UnavailableException()
at org.apache.cassandra.thrift.Cassandra$multiget_slice_result$multiget_slice_resultStandardScheme.read(Cassandra.java:14687)
at org.apache.cassandra.thrift.Cassandra$multiget_slice_result$multiget_slice_resultStandardScheme.read(Cassandra.java:14633)
at org.apache.cassandra.thrift.Cassandra$multiget_slice_result.read(Cassandra.java:14559)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_multiget_slice(Cassandra.java:741)
at org.apache.cassandra.thrift.Cassandra$Client.multiget_slice(Cassandra.java:725)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$4$1.internalExecute(ThriftColumnFamilyQueryImpl.java:544)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$4$1.internalExecute(ThriftColumnFamilyQueryImpl.java:541)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:60)
... 35 more
Failed to run the tool /opt/app/aai-graphadmin/bin/createDBSchema.sh successfully
Failed to run the createDBSchema.sh

Status of pods
kubectl get pods -n onap
NAME READY STATUS RESTARTS AGE
tcs-aaf-aaf-cm-76696c8bcf-sqqpw 1/1 Running 0 7h
tcs-aaf-aaf-cs-6c69c87d44-vbtk7 1/1 Running 0 7h
tcs-aaf-aaf-fs-5fd8c8bd8d-fbbwb 1/1 Running 0 7h
tcs-aaf-aaf-gui-777fb85d96-vtwnf 1/1 Running 0 7h
tcs-aaf-aaf-hello-774f4b6f5-lb6t4 1/1 Running 0 7h
tcs-aaf-aaf-locate-fbd8f454b-lgkjd 1/1 Running 0 7h
tcs-aaf-aaf-oauth-77fcbf54cd-trdxx 1/1 Running 0 7h
tcs-aaf-aaf-service-9f76d4746-4knwv 1/1 Running 0 7h
tcs-aaf-aaf-sms-5f764cf7cf-vxl7g 1/1 Running 0 7h
tcs-aaf-aaf-sms-quorumclient-0 1/1 Running 0 7h
tcs-aaf-aaf-sms-quorumclient-1 1/1 Running 0 7h
tcs-aaf-aaf-sms-quorumclient-2 1/1 Running 0 6h
tcs-aaf-aaf-sms-vault-0 2/2 Running 3 7h
tcs-aaf-aaf-sshsm-distcenter-gmbwl 0/1 Completed 0 7h
tcs-aaf-aaf-sshsm-testca-69v6k 0/1 Init:Error 0 7h
tcs-aaf-aaf-sshsm-testca-6ghcc 0/1 Init:Error 0 7h
tcs-aaf-aaf-sshsm-testca-rz58d 0/1 Init:Error 0 7h
tcs-aaf-aaf-sshsm-testca-ttbq7 0/1 Init:Error 0 7h
tcs-aaf-aaf-sshsm-testca-xht79 0/1 Completed 0 6h
tcs-aaf-aaf-sshsm-testca-z6sfw 0/1 Init:Error 0 7h
tcs-aai-aai-5cbbdb4ff5-4hwpb 0/1 Init:0/1 45 7h
tcs-aai-aai-babel-68fc787d74-pvdtr 2/2 Running 0 1d
tcs-aai-aai-cassandra-0 1/1 Running 1 1d
tcs-aai-aai-champ-5b64bd67c7-qnx4j 1/2 Running 0 1d
tcs-aai-aai-data-router-6654597bdd-6nfmw 2/2 Running 6 1d
tcs-aai-aai-elasticsearch-68bcf6c8fd-9stb4 1/1 Running 0 1d
tcs-aai-aai-gizmo-d548b4d9-cvf6d 2/2 Running 0 1d
tcs-aai-aai-graphadmin-67f5965fb7-88dwz 0/2 Init:0/1 165 1d
tcs-aai-aai-graphadmin-create-db-schema-mjc4z 0/1 Error 0 1d
tcs-aai-aai-graphadmin-create-db-schema-p5vms 0/1 Error 0 1d
tcs-aai-aai-graphadmin-create-db-schema-sbhsk 0/1 Error 0 1d
tcs-aai-aai-graphadmin-create-db-schema-wqw29 0/1 Error 0 1d
tcs-aai-aai-graphadmin-create-db-schema-zrmpv 0/1 Error 0 1d
tcs-aai-aai-modelloader-6dfbcd7596-9dd9s 2/2 Running 0 1d
tcs-aai-aai-resources-588998b4ff-qnlvd 0/2 Init:0/1 165 1d
tcs-aai-aai-search-data-57666c9494-n24bc 2/2 Running 0 1d
tcs-aai-aai-sparky-be-7db4b8dcf-qr29r 0/2 Init:0/1 0 1d
tcs-aai-aai-spike-6f9f5f5c9d-fxcrs 2/2 Running 0 1d
tcs-aai-aai-traversal-7df69d5885-72kj5 0/2 Init:0/1 165 1d
tcs-aai-aai-traversal-update-query-data-gr2w8 0/1 Init:0/1 165 1d
tcs-consul-consul-69d7c64bdd-wms5l 1/1 Running 0 1d
tcs-consul-consul-server-0 1/1 Running 0 1d
tcs-consul-consul-server-1 1/1 Running 0 1d
tcs-consul-consul-server-2 1/1 Running 0 1d
tcs-dcaegen2-dcae-bootstrap-6b6bb89cd5-5vhmv 1/1 Running 0 1d
tcs-dcaegen2-dcae-cloudify-manager-6b6f59fc66-k79c9 1/1 Running 0 1d
tcs-dcaegen2-dcae-db-0 1/1 Running 0 1d
tcs-dcaegen2-dcae-db-1 1/1 Running 0 1d
tcs-dcaegen2-dcae-healthcheck-5fc6d94989-ch5kn 1/1 Running 0 1d
tcs-dcaegen2-dcae-pgpool-77b844664d-5hd4c 1/1 Running 0 1d
tcs-dcaegen2-dcae-pgpool-77b844664d-xxnh8 1/1 Running 0 1d
tcs-dcaegen2-dcae-redis-0 1/1 Running 0 1d
tcs-dcaegen2-dcae-redis-1 1/1 Running 0 1d
tcs-dcaegen2-dcae-redis-2 1/1 Running 0 1d
tcs-dcaegen2-dcae-redis-3 1/1 Running 0 1d
tcs-dcaegen2-dcae-redis-4 1/1 Running 0 1d
tcs-dcaegen2-dcae-redis-5 1/1 Running 0 1d
tcs-dmaap-dbc-pg-0 1/1 Running 0 1d
tcs-dmaap-dbc-pg-1 1/1 Running 0 1d
tcs-dmaap-dbc-pgpool-57d7b76446-qgrcs 1/1 Running 0 1d
tcs-dmaap-dbc-pgpool-57d7b76446-qphtf 1/1 Running 0 1d
tcs-dmaap-dmaap-bus-controller-7567b865b7-x7vtz 1/1 Running 0 1d
tcs-dmaap-dmaap-dr-db-655587488d-2j5b2 1/1 Running 1 1d
tcs-dmaap-dmaap-dr-node-649659c584-ctfjs 1/1 Running 0 1d
tcs-dmaap-dmaap-dr-prov-595cd8bc55-6kjtv 1/1 Running 6 1d
tcs-dmaap-message-router-5f7b985c88-dj8j2 1/1 Running 0 1d
tcs-dmaap-message-router-kafka-678fb8558b-hc768 1/1 Running 0 1d
tcs-dmaap-message-router-zookeeper-54bb8cd9cf-kl55w 1/1 Running 0 1d
tcs-esr-esr-gui-699d9f579b-hrhk9 1/1 Running 0 7h
tcs-esr-esr-server-85d8c5f57-vj7m4 2/2 Running 0 7h
tcs-msb-kube2msb-6db5fd8c85-xrbs7 1/1 Running 0 1d
tcs-msb-msb-consul-66445944b6-phjpk 1/1 Running 0 1d
tcs-msb-msb-discovery-6bd858b659-42xbp 2/2 Running 0 1d
tcs-msb-msb-eag-78fbb94cc9-7cn2p 2/2 Running 0 1d
tcs-msb-msb-iag-5d45c9999b-lwcb6 2/2 Running 0 1d
tcs-oof-cmso-db-0 0/1 CrashLoopBackOff 105 7h
tcs-oof-music-cassandra-0 1/1 Running 0 7h
tcs-oof-music-cassandra-1 1/1 Running 0 7h
tcs-oof-music-cassandra-2 1/1 Running 0 7h
tcs-oof-music-cassandra-job-config-z28dh 0/1 Completed 0 7h
tcs-oof-music-tomcat-8f64f65d8-4pbnp 1/1 Running 0 7h
tcs-oof-music-tomcat-8f64f65d8-dl58x 1/1 Running 0 7h
tcs-oof-music-tomcat-8f64f65d8-fmrjn 0/1 Init:0/3 0 7h
tcs-oof-oof-557c8c677d-rjb2b 0/1 Init:0/2 45 7h
tcs-oof-oof-cmso-service-5467475444-ppmd7 0/1 Init:0/2 45 7h
tcs-oof-oof-has-api-5d79d97fb7-nzfkx 0/1 Init:0/3 45 7h
tcs-oof-oof-has-controller-658bbb894-m9dqf 0/1 Init:0/3 45 7h
tcs-oof-oof-has-data-5575788564-zbv4x 0/1 Init:0/4 45 7h
tcs-oof-oof-has-healthcheck-k5swr 0/1 Init:0/1 45 7h
tcs-oof-oof-has-onboard-bvfsz 0/1 Init:0/2 45 7h
tcs-oof-oof-has-reservation-9fd696d8d-dbw2k 0/1 Init:0/4 45 7h
tcs-oof-oof-has-solver-7fd7878df9-dn4xw 0/1 Init:0/4 45 7h
tcs-oof-zookeeper-0 1/1 Running 0 7h
tcs-oof-zookeeper-1 1/1 Running 0 7h
tcs-oof-zookeeper-2 1/1 Running 0 7h
tcs-robot-robot-86d89ffdb9-xd65b 1/1 Running 0 1d


Regards,
Yogi


[MACRO Instantiation] Failed caused by Error from SDNC: Unable to generate VM name

MALINCONICO ANIELLO PAOLO
 

Hi all,
I am trying to instantiate a simple service using the MACRO flag for "one-click instantiation" (see https://wiki.onap.org/display/DW/vFW+CDS+Casablanca?focusedCommentId=60885317#vFWCDSCasablanca-DataDictionary).
The service is composed only of a VSP that represents a virtual machine.
After the distribution, I sent the service creation API request via Postman:

POST http://{{mso_ip}}:30277/onap/so/infra/serviceInstantiation/v7/serviceInstances
{
  "requestDetails": {
    "subscriberInfo": {
      "globalSubscriberId": "TIM"
    },
    "requestInfo": {
      "suppressRollback": true,
      "productFamilyId": "a9a77d5a-123e-4ca2-9eb9-0b015d2ee0fb",
      "requestorId": "adt",
      "instanceName": "svc_20190430_macro",
      "source": "VID"
    },
    "cloudConfiguration": {
      "lcpCloudRegionId": "RegionOne",
      "tenantId": "34f1fe41d1a0483dbd1aa94c26dc5545"
    },
    "requestParameters": {
      "subscriptionServiceType": "casablanca",
      "userParams": [
        {
          "Homing_Solution": "none"
        },
        {
          "service": {
            "instanceParams": [
               
            ],
            "instanceName": "svc_20190430_macro_1",
            "resources": {
              "vnfs": [
                {
                  "modelInfo": {
                    "modelName": "20190220_vsp_vm",
                    "modelVersionId": "0946fde0-8042-4f73-93f0-aeca50ede63a",
                    "modelInvariantUuid": "8be4a024-4433-49b8-bf7f-de3750496ca3",
                    "modelVersion": "1.0",
                    "modelCustomizationId": "10acbb88-41b8-44f9-88f3-568b1f4a363e",
                    "modelInstanceName": "20190220_vsp_vm 0"
                  },
                  "cloudConfiguration": {
                    "lcpCloudRegionId": "RegionOne",
                    "tenantId": "34f1fe41d1a0483dbd1aa94c26dc5545"
                  },
                  "platform": {
                    "platformName": "test"
                  },
                  "lineOfBusiness": {
                    "lineOfBusinessName": "someValue"
                  },
                  "productFamilyId": "a9a77d5a-123e-4ca2-9eb9-0b015d2ee0fb",
                  "instanceName": "vnf_20190430_vm_1",
                  "instanceParams": [
                    {
                      "image_id": "ubuntu-server-14.04",
                      "vf_module_id": "vfmodule_vm_20190301_01",
                      "vnf_id": "vm_20190301_01"
                    }
                  ],
                  "vfModules": [
                    {
                      "modelInfo": {
                        "modelName": "20190220VspVm..one_vm_fip_nr_pk..module-0",
                        "modelVersionId": "6e4284f0-588c-4fec-8f24-c9de5d916489",
                        "modelInvariantUuid": "530602ad-f5ff-4775-9ba2-26c4b90d188b",
                        "modelVersion": "1",
                        "modelCustomizationId": "aed63e91-f0d6-4bc8-b3b7-2900d5519e9f"
                      },
                      "instanceName": "vf_module_20190430_1",
                      "instanceParams": [
                      ]
                    }
                  ]
                }
              ]
            },
            "modelInfo": {
              "modelVersion": "1.0",
      "modelVersionId": "c0b00f51-7d41-4668-886f-0e915dde2a96",
      "modelInvariantId": "a4ef9116-44ff-4616-917e-f4d4b77528bc",
      "modelName": "svc_20190430_macro",
              "modelType": "service"
            }
          }
        }
      ],
      "aLaCarte": false
    },
    "project": {
      "projectName": "Project-Demonstration"
    },
    "owningEntity": {
      "owningEntityId": "24ef5425-bec4-4fa3-ab03-c0ecf4ebac96",
      "owningEntityName": "OE-Demonstration-1"
    },
    "modelInfo": {
      "modelVersion": "1.0",
      "modelVersionId": "c0b00f51-7d41-4668-886f-0e915dde2a96",
      "modelInvariantId": "a4ef9116-44ff-4616-917e-f4d4b77528bc",
      "modelName": "svc_20190430_macro",
      "modelType": "service"
    }
  }
}


But it fails with the following error:

2019-04-30T09:44:30.124Z|e8b1ea89-8f99-4a40-8396-13147d7731a9| org.onap.so.client.sdnc.SDNCClient - ResponseCode: 500 ResponseMessage: Unable to generate VM name: naming-policy-generate-name: input.policy-instance-name is not set and input.policy is ASSIGN
2019-04-30T09:44:30.124Z|e8b1ea89-8f99-4a40-8396-13147d7731a9| org.onap.so.client.sdnc.SDNCClient - RA_RESPONSE_FROM_SDNC
2019-04-30T09:44:30.125Z|e8b1ea89-8f99-4a40-8396-13147d7731a9| org.onap.so.client.exception.ExceptionBuilder - Error from SDNC: Unable to generate VM name: naming-policy-generate-name: input.policy-instance-name is not set and input.policy is ASSIGN
org.onap.so.client.exception.BadResponseException: Error from SDNC: Unable to generate VM name: naming-policy-generate-name: input.policy-instance-name is not set and input.policy is ASSIGN
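
For what it's worth, the error text says the SDNC naming-policy directed graph was asked to ASSIGN a generated name but received no policy-instance-name input, i.e. no naming policy was available to generate the VM name. One possible workaround is to supply the names yourself via an SDNC preload instead of relying on name generation (a minimal sketch under that assumption; <sdnc-host> is a placeholder, the port and credentials are the usual ONAP demo defaults, and preload.json is a hypothetical payload carrying your instance names):

curl -u admin:Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U \
     -H 'Content-Type: application/json' \
     -X POST http://<sdnc-host>:8282/restconf/operations/VNF-API:preload-vnf-topology-operation \
     -d @preload.json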
 

Can anyone help me with this issue?
Thanks a lot,

Aniello Paolo Malinconico


Not able to start HOLMES when DCAE is behind a firewall #holmes #dcaegen2

Yogi Knss <knss.yogi@...>
 

Hi,

We are installing ONAP Casablanca on Kubernetes behind a firewall.
When trying to install Holmes using blueprints from dcae-bootstrap, a REST call is triggered from bootstrap to Cloudify Manager to upload the Holmes_rules/Holmes_engine blueprints. Since Cloudify Manager is behind the firewall, we set the http_proxy and https_proxy details on the container, but the blueprint upload fails with a network-unreachable message because it cannot download the required imports.

Does the current cloudify_rest_service use the http proxy settings from the environment?
Is there a way to provide the proxy settings to the Cloudify REST service?
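
If the Cloudify Manager process honors the standard proxy variables, one option is to inject them at the Kubernetes level rather than inside the running container (a sketch; the deployment name and proxy URL are assumptions to adapt to your install):

kubectl -n onap set env deployment/dcae-cloudify-manager \
  http_proxy=http://proxy.example.com:8080 \
  https_proxy=http://proxy.example.com:8080 \
  no_proxy=localhost,127.0.0.1,.svc.cluster.local
# set env rolls the pod, so the new variables take effect on restart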


Regards,
Yogi.


nexus3.onap.org docker login issue

Ambica
 

Hi Keong,

 

I cannot log in and pull images from the nexus3 repo. Has access been revoked, or could it be some other issue?
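
For comparison, anonymous pulls are expected to work with the documented docker/docker credentials (a sketch; port 10001 is the commonly used read endpoint, and the image tag is only an example):

docker login -u docker -p docker nexus3.onap.org:10001
docker pull nexus3.onap.org:10001/onap/so/bpmn-infra:1.4.0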

 

 

Thanks & Regards,

Ambica

