Re: [MULTICLOUD] - HPA telemetry

Srini
 

Hi Bin,

 

We will come back to you tomorrow on the naming of the repositories.

 

Thanks

Srini

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of bounce+19846+12393+675801+2740670@...
Sent: Tuesday, September 11, 2018 5:45 PM
To: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>; onap-discuss@...; ethanlynnl@...
Cc: frank.sandoval@...; Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; Ramki Krishnan <ramkik@...>; Ranganathan, Raghu <rraghu@...>; Xinhui Li <lxinhui@...>
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Srini,

 

               I would suggest requesting new repos for them. Please suggest the name of each repo and I will check for any obligations; then we can request the new repos from the LF.

 

Thanks

 

Best Regards,

Bin Yang,    Solution Engineering Team,    Wind River

ONAP Multi-VIM/Cloud PTL

Direct +86-10-84777126    Mobile +86-13811391682    Fax +86-10-64398189

Skype: yangbincs993

 

From: Addepalli, Srinivasa R [mailto:srinivasa.r.addepalli@...]
Sent: Wednesday, September 12, 2018 2:22 AM
To: onap-discuss@...; ethanlynnl@...
Cc: Yang, Bin; frank.sandoval@...; Multanen, Eric W; Ranganathan, Dileep; Ramki Krishnan; Ranganathan, Raghu; Xinhui Li
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

I understand that code can live anywhere and be leveraged from other places; it is just a matter of intuitiveness. If it is valid for both OpenStack and K8S, and in future for some other cloud technology, why put it in plugin-specific repositories? I still feel that the framework repo is the right place, but I will accept it reluctantly :) if that is the majority decision.

 

Bin,

Shall we go ahead and use K8S repository for telemetry?

Also, shall we go ahead and use K8S repository for SDC client too?

 

Thanks
Srini

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of ethanlynnl
Sent: Tuesday, September 11, 2018 2:10 AM
To: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>
Cc: onap-discuss@...; Yang, Bin <bin.yang@...>; frank.sandoval@...; Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; Ramki Krishnan <ramkik@...>; Ranganathan, Raghu <rraghu@...>; Xinhui Li <lxinhui@...>
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi,

  There might be some misunderstanding here. For development, it could all be in one repo. For distribution, if it’s a Docker image, no matter whether it’s OpenStack or K8S, we pull it directly on the compute node; if it’s a service, we just put the binary on the compute node. There is no need to replicate it to another repo. If it’s Python code, the OpenStack repo is good; if it’s Go code, the K8S repo is good.

 

Alternatively, applying for a new repo to host the seed code would be much better; giving each service in MultiCloud its own repo for development would make reviews/tests/builds easier.

 

What do you think?

 

 

On 11 Sep 2018, at 12:56 PM, Addepalli, Srinivasa R <srinivasa.r.addepalli@...> wrote:

 

Hi,

 

This scheme is valid for both OpenStack and K8S. If you put the code in the OpenStack plugin, then we need to replicate the code for K8S too. That is not good, right?

 

Thanks

Srini

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of ethanlynnl
Sent: Monday, September 10, 2018 7:34 PM
To: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>
Cc: onap-discuss@...; Yang, Bin <bin.yang@...>; frank.sandoval@...; Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; Ramki Krishnan <ramkik@...>; Ranganathan, Raghu <rraghu@...>; Xinhui Li <lxinhui@...>
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

I’ve gone through the wiki pages listed here; it seems there are two parts of code, the HPA info exporter and the Node exporter. It sounds like the node exporter has to be injected into the compute hosts to collect data; it works like the VESAgent in MultiCloud, except the target server is not the DCAE VES collector but Prometheus. Since not all cloud types support injecting the node exporter into compute hosts (e.g. Azure, VIO), it might not be commonly usable across different cloud types, so I would suggest putting this kind of code in the OpenStack plugin, just like each VESAgent did.

 

 

 

On 11 Sep 2018, at 6:47 AM, Addepalli, Srinivasa R <srinivasa.r.addepalli@...> wrote:

 

Hi Ethan,

 

Eric started a design page on this.

 

 

We can discuss it there, if you like.

 

Thanks

Srini

 

 

From: Addepalli, Srinivasa R 
Sent: Monday, September 10, 2018 7:36 AM
To: 'Ethan Lynn' <ethanlynnl@...>
Cc: onap-discuss@...; Yang, Bin <bin.yang@...>; frank.sandoval@...; Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; Ramki Krishnan <ramkik@...>; Ranganathan, Raghu <rraghu@...>
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Ethan,

 

I could not understand some of the questions.

 

I am pasting a picture that was presented by Eric.

 

<image001.png>

 

With respect to above picture, let me answer your questions.

 

 

From: Ethan Lynn [mailto:ethanlynnl@...] 
Sent: Sunday, September 9, 2018 11:32 PM
To: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>
Cc: onap-discuss@...; Yang, Bin <bin.yang@...>; frank.sandoval@...; Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; Ramki Krishnan <ramkik@...>; Ranganathan, Raghu <rraghu@...>
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Srini, 

 I have some questions from earlier emails; could I get the answers to them?

 

What’s the relationship between the current VESAgent and HPA telemetry in each plugin?

 

SRINI> What is “plugin” in your mind?  If you mean plugin in the context of Multi-Cloud cloud-technology plugins, then no changes are expected in the cloud-technology-specific plugins.  Compute nodes in the sites export raw information to the ‘Stats Aggregation Service’. The ‘Stats Aggregation Service’ extracts relevant information from the raw data and sends alerts.  The “HPA stats receiving service” then updates the appropriate information in the A&AI DB.

 

SRINI> “VES Exporter” is for all infrastructure statistics (beyond HPA telemetry) which exports the aggregated information to DCAE for any closed loop actions based on infrastructure metrics.

 

How does this new service work? Will it need APIs for querying, or will it just regularly push aggregated metrics to A&AI/DMaaP/DCAE?

 

SRINI> I guess you meant the “HPA Stats Receiving Service” and “VES Exporter”.  They push the aggregated metrics proactively.   If somebody wants to read the data on an on-demand basis, Prometheus itself provides the information; for example, the Grafana visualization tool reads data from the Prometheus service using its PromQL query language.

 

Do we need to install any agents on each cloud to push data?

 

SRINI> Yes. The current expectation is that the compute servers in the sites are upgraded with node-exporter & Collectd. In future, in the case of K8S, it would be cAdvisor too.  As Ramki mentioned in one of the calls, VIO has its own way of collecting infrastructure statistics, and in that case the VIO plugin within the Multi-Cloud service can export the raw data to the Prometheus service.  But that is for further study, as I don’t know much about VIO myself.

 

Is it a wrapper around the Prometheus server or the manager of the Prometheus server?

 

SRINI> I guess you meant the “HPA Stats Receiving Service” and “VES Exporter”.  They are hooked into Prometheus’s Alertmanager as alert destinations (receivers).

 

Are there any existing exporters for OpenStack/VIO/Kubernetes/Azure, or do we need to develop one?

 

SRINI> In R3 (and maybe R4), our intention is to limit this to compute infrastructure statistics. But the Prometheus project has several exporters for cloud operating systems too.   I have seen OpenStack, AWS, K8S and Azure metrics exporters.  I don’t know much about VIO metrics.

 

What kind of language do you plan to use for this new service? Go or Python?

 

SRINI> Golang.

 

 

 

On 6 Sep 2018, at 9:33 PM, Addepalli, Srinivasa R <srinivasa.r.addepalli@...> wrote:

 

Hi Ethan,

 

We are not suggesting that the “HPA receiving service” be part of the “Broker” microservice. It would be a separate microservice and would scale on its own.  We are only suggesting using the ‘framework’ repository to host the code & rules.  So there is no performance issue. Agree?

 

Thanks

Srini

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of ethanlynnl
Sent: Wednesday, September 5, 2018 10:55 PM
To: onap-discuss <onap-discuss@...>; Yang, Bin <bin.yang@...>
Cc: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>; frank.sandoval@...; Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; Ramki Krishnan <ramkik@...>; Ranganathan, Raghu <rraghu@...>
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Bin, 

  I think it’s better to use a new repository to host HPA telemetry, since the broker is written in Python and we need to consider performance. If we mix the broker with telemetry, massive telemetry traffic might degrade the broker’s performance.

 

 

On 6 Sep 2018, at 9:33 AM, Yang Bin <bin.yang@...> wrote:

 

Hi Srini,

 

               I double-checked during the last MultiCloud weekly meeting, and I didn’t hear any objection. So I think it is okay to use the MultiCloud framework repository to host the HPA telemetry.

However, before anyone starts upstreaming patches, I would like to know the details of your seed code and see how it fits into the existing framework repo, since there is already source code for the MultiCloud broker there. We want to keep things consistent and easy/straightforward to understand and maintain.

 

So I suggest that you or someone else showcase the seed code to the MultiCloud team, and then we can decide how to proceed.

 

Thanks.

 

Best Regards,

Bin Yang,    Solution Engineering Team,    Wind River

ONAP Multi-VIM/Cloud PTL

Direct +86-10-84777126    Mobile +86-13811391682    Fax +86-10-64398189

Skype: yangbincs993

 

From: Addepalli, Srinivasa R [mailto:srinivasa.r.addepalli@...] 
Sent: Thursday, September 06, 2018 1:26 AM
To: onap-discuss@...; Yang, Bin; frank.sandoval@...
Cc: Multanen, Eric W; Ranganathan, Dileep; ramkik@...; Ranganathan, Raghu
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Thanks Bin. Let us know if everybody is okay with using the framework repository, or provide suggestions.

 

HPA telemetry is part of the HPA umbrella, and Alex presented it to various committees for the R3 additions. Dileep is working on the A&AI schema changes and the related changes needed in OOF. But in these architecture/use-case meetings we don’t get into the details of repositories in each project, hence that part was not discussed. In both A&AI and OOF there is no need to create any repositories; I hope that we don’t need to create any repository here either.  My view is that the framework repo is generic, and since HPA telemetry is also generic, the related software can be placed there. Let us know.

 

Thanks

Srini

 

 

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of bounce+19846+12237+675801+2740670@...
Sent: Tuesday, September 4, 2018 8:18 PM
To: onap-discuss@...; Addepalli, Srinivasa R <srinivasa.r.addepalli@...>; frank.sandoval@...
Cc: Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; ramkik@...; Ranganathan, Raghu <rraghu@...>
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Srini,

 

               Thanks for your feedback.  I will check with our team to see if there are any further questions/concerns regarding this proposal.

 

It seems this is a new feature impacting multiple ONAP projects (MultiCloud, AAI, OOF, more?), so I would also like to know whether this proposal has been presented to the ARC subcommittee. What was the suggestion there?

 

Thanks

 

Best Regards,

Bin Yang,    Solution Engineering Team,    Wind River

ONAP Multi-VIM/Cloud PTL

Direct +86-10-84777126    Mobile +86-13811391682    Fax +86-10-64398189

Skype: yangbincs993

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Srini
Sent: Wednesday, September 05, 2018 3:24 AM
To: Yang, Bin; onap-discuss@...; frank.sandoval@...
Cc: Multanen, Eric W; Ranganathan, Dileep; ramkik@...; Ranganathan, Raghu
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Bin,

 

I have tried to answer some of the questions inline.

 

Regarding the repository, I see the following:

<image001.png>

 

Since the stats aggregation service is common across cloud technologies, I think the framework repository is a good candidate for this generic code and the rules. What do you think?

 

 

From: Bin.Yang@... [mailto:Bin.Yang@...] 
Sent: Thursday, August 30, 2018 7:09 PM
To: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>; onap-discuss@...; frank.sandoval@...
Cc: Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; ramkik@...; Ranganathan, Raghu <rraghu@...>
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Srini,

 

               We conducted a Q&A session during the last MultiCloud weekly meeting; the purpose was to dive into more details of the intent/design of this HPA telemetry so that we can judge whether it fits into MultiCloud’s scope and how it should be incubated inside MultiCloud. Please refer to the MOM (https://wiki.onap.org/display/DW/MOM+of+Aug.+29th+2018+MultiCloud+weekly+meeting ) for details. I have also copied the questions/answers here to facilitate further discussion/feedback:

 

 

HPA Telemetry

 Q&A:

Q: Delivery plan:

A: Start in the Casablanca release; no commitment on what can be delivered

 

SRINI> True, as we started this work late. But the minimum we want to do in Casablanca is to integrate metrics collection, provide rules to aggregate the information, and provide visualization via Grafana. Our stretch goal is to populate HPA resource information in A&AI.

 

Q: Seed code:

A: N/A

 

Q: Deployment topology: will there be a single collector running for any kind of underlying VIM/Cloud type/instance?

A: Not clear yet.

 

SRINI> Yes. It is cloud-technology agnostic. As long as the service provider has a way to install exporters on the compute nodes, this should work irrespective of whether the cloud technology is VIO, Titanium, upstream OpenStack or K8S.

 

Q: Configuration API/portal:

A: N/A for now, but will eventually be there

 

Q: Will the API/IM be generic, i.e. VIM/Cloud agnostic?

A: not clear yet

SRINI> Yes. Please see above comment.

 

Q: Is there any dependency on the AAI model/schema?

A: There is a schema change to AAI in progress (Dileep)

Q: Will this schema be generic (VIM/Cloud agnostic)?

A: Supposed to be generic

SRINI> Yes. It is generic.

 

Q: Will the collector be impacted by different agents on the VIM/Cloud?

A: not clear yet

SRINI> Current support is for Prometheus exporters as well as Collectd.  But it should not matter, as long as the agent supports exporting metrics in the format expected by the Prometheus service.

 

Q: Why not the DCAE VES collector/microservice? Please evaluate this option.

A:  no answer yet

SRINI> This will sit below DCAE.  The intention (on the roadmap) is to send alerts/events/aggregation data to DCAE via VES.

 

Suggestion (Bin Yang): If there can be just one collector for all VIM/Cloud instances/types, you can have a dedicated repo.

Otherwise, share the existing repos following the broker/plugin topology.

AI: Eric Multanen to figure out the questions above, then decide whether a dedicated repo is needed.

 

 

 

Best Regards,

Bin Yang,    Solution Engineering Team,    Wind River

ONAP Multi-VIM/Cloud PTL

Direct +86-10-84777126    Mobile +86-13811391682    Fax +86-10-64398189

Skype: yangbincs993

 

From: Addepalli, Srinivasa R [mailto:srinivasa.r.addepalli@...] 
Sent: Friday, August 31, 2018 12:51 AM
To: onap-discuss@...; frank.sandoval@...; Yang, Bin
Cc: Multanen, Eric W; Ranganathan, Dileep; ramkik@...; Ranganathan, Raghu
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Frank,

 

Please see following links:

 

The intent and high-level architecture have been presented to various groups:

 

Dileep started design page at (Telemetry for OOF and A&AI)

 

 

There is no design page yet for HPA Telemetry for Multi-Cloud, but there is a set of JIRA requests under the following EPIC:

 

 

Thanks

Srini

 

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Frank Sandoval
Sent: Wednesday, August 29, 2018 12:51 PM
To: onap-discuss <onap-discuss@...>; bin.yang@...
Cc: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>; Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; ramkik@...; Ranganathan, Raghu <rraghu@...>
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Srini,

 

This topic has been discussed in the OOF group as well. Is there a wiki page, slide deck, or other material describing the proposed design? Thanks


Frank Sandoval

OAM Technologies, representing Arm

OOF committer

 

 

 

On Aug 28, 2018, at 11:51 PM, Yang Bin <bin.yang@...> wrote:

 

Hi Srini,

 

               If possible, let’s continue the discussion on the upcoming MultiCloud weekly meeting.

 

Thanks.

 

Best Regards,

Bin Yang,    Solution Engineering Team,    Wind River

ONAP Multi-VIM/Cloud PTL

Direct +86-10-84777126    Mobile +86-13811391682    Fax +86-10-64398189

Skype: yangbincs993

 

From: Addepalli, Srinivasa R [mailto:srinivasa.r.addepalli@...] 
Sent: Friday, August 24, 2018 12:32 AM
To: Multanen, Eric W; onap-discuss@...; Yang, Bin
Cc: Ranganathan, Dileep; ramkik@...; Ranganathan, Raghu
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Thanks Eric.

 

Few more answers embedded below.

 

From: Multanen, Eric W 
Sent: Wednesday, August 22, 2018 5:01 PM
To: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>; onap-discuss@...; bin.yang@...
Cc: Ranganathan, Dileep <dileep.ranganathan@...>; ramkik@...; Ranganathan, Raghu <rraghu@...>
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Bin,

 

Thank you for the agenda slot today, I think we covered most of everything – the call dropped promptly on the hour.

 

 

Following are the questions I collected from the discussion – please amend and clarify if I haven’t

captured your key questions.

  

I expect Srini and others can assist in providing more detailed/correct answers.

 

 

1.      Why is this part (Prometheus) of ONAP/multi-cloud instead of Openstack?

a.      it is used more broadly than just for Openstack

 

SRINI>

-        This is meant for infrastructure and HPA metrics. 

-        We want to be agnostic to VIM technology

-        Many constrained edge deployments want only minimal functions at the edge (like Nova, Neutron, Cinder, Heat) and prefer the rest to run elsewhere.

-        Prometheus was the second project to graduate from the CNCF, after K8S (only two have graduated so far).

-        Prometheus has very good integration with OpenStack, K8S, AWS, Azure and bare-metal nodes, including Collectd.

              

2.      What is the relationship between Openstack telemetry and this proposal?

 

SRINI> Prometheus is mainly meant for metric collection, aggregation/summation, silencing, etc. Prometheus even has a way to collect metrics from OpenStack.

 

3.      What is the interface and/or API for configuring rules ?

a.      Initially, configuration files for the service. 

 

4.      How are different cloud / infrastructure types supported?

 

SRINI> Fortunately, Prometheus already has integrations with the popular cloud technologies.

 

5.      Where is the datastore of the Prometheus service? What data/info is accessible by ONAP?

a.      The plan is to develop the HPA statistics exporting/receiving service to provide specific HPA data to ONAP.

b.      Not sure yet whether there is generic access to all data (and whether that is desired).

 

 

Eric

 

 

From: Addepalli, Srinivasa R 
Sent: Tuesday, August 21, 2018 9:36 AM
To: onap-discuss@...; bin.yang@...
Cc: Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; ramkik@...; Ranganathan, Raghu <rraghu@...>
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Bin,

 

I am travelling.  Eric confirmed that he will talk about the feature and high-level design.  If time permits, Dileep can show the A&AI schema proposal.

 

Hi Ramki and Raghu,

Since this work was born out of the Edge Automation working group, it would be good if you could attend the meeting.

 

Thanks

Srini

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of bounce+19846+11984+675801+2740670@...
Sent: Tuesday, August 21, 2018 8:38 AM
To: onap-discuss@...; Addepalli, Srinivasa R <srinivasa.r.addepalli@...>
Cc: Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Srini,

 

Before I can suggest, I would like to understand this feature more.

Would someone please present the idea/design to the MultiCloud team?  Perhaps the upcoming MultiCloud weekly meeting would be a good chance to do that.

 

Thanks 

Bin


On 21 Aug 2018, at 23:29, Srini <srinivasa.r.addepalli@...> wrote:

Hi Bin,

 

We have this EPIC as part of Edge automation: https://jira.onap.org/browse/MULTICLOUD-257

 

This is meant to ensure that placement decisions consider currently available resources, in addition to the capability-based placement we were doing up to R2.

 

This work got started last week.

 

There would be some source code development (we call it the HPA Telemetry receiving service), which gets aggregated telemetry information, does any massaging/filtering necessary, and updates the A&AI DB.

 

There would be some additional development with respect to rules for the Prometheus aggregation service.

 

This is a generic service required across OpenStack, K8S and Azure. Is there any repository in Multi-Cloud we can use to host this code, or do you suggest requesting a new repository?  Please advise.

 

Thanks

Srini

 

 

 

 

 

 

 


Re: [MULTICLOUD] - HPA telemetry

Yang Bin
 

Hi Srini,

 

               I would suggest to request new repos for them, please suggest the name of each repo and I will check if any obligation, then we can request new repos to LF.

 

Thanks

 

Best Regards,

Bin Yang,    Solution Engineering Team,    Wind River

ONAP Multi-VIM/Cloud PTL

Direct +86,10,84777126    Mobile +86,13811391682    Fax +86,10,64398189

Skype: yangbincs993

 

From: Addepalli, Srinivasa R [mailto:srinivasa.r.addepalli@...]
Sent: Wednesday, September 12, 2018 2:22 AM
To: onap-discuss@...; ethanlynnl@...
Cc: Yang, Bin; frank.sandoval@...; Multanen, Eric W; Ranganathan, Dileep; Ramki Krishnan; Ranganathan, Raghu; Xinhui Li
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

I understand that code can be anywhere and leveraged from other places. It is just intuitiveness. If it is valid for both openstack and K8S and in future for some other cloud technology, why put it in plugin specific repositories.  I still feel that framework repo is the right place, but I will accept it reluctantly J if that is the majority decision. 

 

Bin,

Shall we go ahead and use K8S repository for telemetry?

Also, shall we go ahead and use K8S repository for SDC client too?

 

Thanks
Srini

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of ethanlynnl
Sent: Tuesday, September 11, 2018 2:10 AM
To: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>
Cc: onap-discuss@...; Yang, Bin <bin.yang@...>; frank.sandoval@...; Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; Ramki Krishnan <ramkik@...>; Ranganathan, Raghu <rraghu@...>; Xinhui Li <lxinhui@...>
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi,

  There might be some misunderstanding here, for developing, it could be in one repo. For distribution, if it’s a docker, no mater it’s openstack or k8s, directly pull it in compute node, if it’s a service, just put the binary to compute node. No need to replicate it to other repo. If it’s python code, openstack repo is good, if it’s go code, k8s repo is good.

 

Or apply a new repo for hosting the seed codes would be much better, each service in multicloud has their repo for developing would make review/tests/build more easier.

 

What do you think?

 

 

On 11 Sep 2018, at 12:56 PM, Addepalli, Srinivasa R <srinivasa.r.addepalli@...> wrote:

 

Hi,

 

This scheme is valid for both openstack and K8S. If you put the code in Openstack plugin, then we need to replicate the code for K8S too. That is not good, right?

 

Thanks

Srini

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of ethanlynnl
Sent: Monday, September 10, 2018 7:34 PM
To: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>
Cc: onap-discuss@...; Yang, Bin <bin.yang@...>; frank.sandoval@...; Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; Ramki Krishnan <ramkik@...>; Ranganathan, Raghu <rraghu@...>; Xinhui Li <lxinhui@...>
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

I’ve go through the wiki page list here, it seems there are two parts of code, HPA info exporter and Node exporter. It sounds like the node exporter have to be injected to compute hosts to collect data, it works like VESagent in multicloud, just the target server is not DCAE VES collector but prometheus. Since not all the cloud types support to inject the node exporter to compute hosts(e.g. azure, vio), it might not be commonly used across different cloud types, I would suggest to put this kind of code to openstack plugin just like each vesagent did. 

 

 

 

On 11 Sep 2018, at 6:47 AM, Addepalli, Srinivasa R <srinivasa.r.addepalli@...> wrote:

 

Hi Ethan,

 

Eric started a design page on this.

 

 

We can discuss it there, if  you like.

 

Thanks

Srini

 

 

From: Addepalli, Srinivasa R 
Sent: Monday, September 10, 2018 7:36 AM
To: 'Ethan Lynn' <ethanlynnl@...>
Cc: onap-discuss@...; Yang, Bin <bin.yang@...>; frank.sandoval@...; Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; Ramki Krishnan <ramkik@...>; Ranganathan, Raghu <rraghu@...>
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Ethan,

 

I could not understand some of the questions.

 

I am pasting a picture that was presented by Eric.

 

<image001.png>

 

With respect to above picture, let me answer your questions.

 

 

From: Ethan Lynn [mailto:ethanlynnl@...] 
Sent: Sunday, September 9, 2018 11:32 PM
To: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>
Cc: onap-discuss@...; Yang, Bin <bin.yang@...>; frank.sandoval@...; Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; Ramki Krishnan <ramkik@...>; Ranganathan, Raghu <rraghu@...>
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Srini, 

 I have some questions on another emails, could I know the answers for them?

 

What’s the relationship between current VESAgent and HPA telemetry on each plugins?

 

SRINI> What is “plugin” in your mind?  If you mean plugin in the context of Multi-Cloud cloud-technology plugins, then there are no changes expected in each cloud-technology specific plugins.  Compute nodes in the sites export raw information to ‘Stats Aggregation Service’. ‘Stats aggregation service’ gets relevant information from the raw data and sends alerts.  “HPA stats receiving service” then updates appropriate information in A&AI DB.

 

SRINI> “VES Exporter” is for all infrastructure statistics (beyond HPA telemetry) which exports the aggregated information to DCAE for any closed loop actions based on infrastructure metrics.

 

How does this new service work? Will it need APIs for querying or just regularly push aggregated metrics to A&AI/DMaaP/DCAE?

 

SRINI> I guess you meant “HPA Stats Receiving Service” and “VES Exporter”.  They push the aggregated metrics pro-actively.   If somebody wants to read the data on demand basis, Prometheus itself provides the information. For example Grafana visualization tool reads the data from Prometheus service using its own PromQL language.

 

Do we need to install any agents on each clouds to pushing data?

 

SRINI> Yes. Current expectation is that the compute servers in the sites are upgraded with node-exporter & Collectd. In future, in case of K8S, it would be cAdvisor too.  As Ramki mentioned in one of the calls, VIO has its own way of collecting infrastructure statistics and in that case VIO plugin within Multi-Cloud service can export the raw data to Prometheus service.  But that is for further study as I don’t know much about VIO myself.

 

Is it a wrapper of prometheus server or the manager of prometheus server?

 

SRINI> I guess you meant “HPA Stats Receiving Service” and “VES Exporter”.  They are hooked into Alert Manager of Prometheus as alert manager destinations. 

 

Is there any exporters exists for OpenStack/VIO/Kubernetes/Auze, do we need to develop one?

 

SRINI> In R3 (and may be R4), our intention is to limit to compute infrastructure statistics. But, Prometheus project has several exporters meant for cloud operating systems too.   I saw Openstack, AWS, K8S and Azure metrics exporters too.  I don’t know much about VIO metrics.

 

What kind of language do you plan to use for this new service? Go or Python?

 

SRINI> Golang.

 

 

 

On 6 Sep 2018, at 9:33 PM, Addepalli, Srinivasa R <srinivasa.r.addepalli@...> wrote:

 

Hi Ethan,

 

We are not suggesting to add “HPA receiving service” to be part of the “Broker” micro service. It would be separate micro-service by itself and it will scale on itself.  We are only suggesting to use ‘framework’ repository to host the code & rules.  So, there is no performance issue. Agree?

 

Thanks

Srini

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of ethanlynnl
Sent: Wednesday, September 5, 2018 10:55 PM
To: onap-discuss <onap-discuss@...>; Yang, Bin <bin.yang@...>
Cc: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>; frank.sandoval@...; Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; Ramki Krishnan <ramkik@...>; Ranganathan, Raghu <rraghu@...>
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Bin, 

  I think it’s better to use a new repository to host HPA telemetry, since the broker are written in python and we need to consider the performance. If massing broker with telemetry, massive telemetries might downgrade the broker’s performance.

 

 

On 6 Sep 2018, at 9:33 AM, Yang Bin <bin.yang@...> wrote:

 

Hi Srini,

 

               I double checked that during last MultiCloud weekly meeting, and I didn’t hear any objection. So I am thinking it is okay to use multicloud framework repository to host the HPA telemetry.

However, before anyone starts the patch upstreaming, I would like to know the details of your seed codes, and see how it fit into the existing framework repo since there is already source code for multicloud broker there. We want to keep thing consistent and easy /straight forward to understand/maintain.

 

So I suggest that you or someone else will showcase the seed code to multicloud team and then decide how it should be proceeded.

 

Thanks.

 

Best Regards,

Bin Yang,    Solution Engineering Team,    Wind River

ONAP Multi-VIM/Cloud PTL

Direct +86,10,84777126    Mobile +86,13811391682    Fax +86,10,64398189

Skype: yangbincs993

 

From: Addepalli, Srinivasa R [mailto:srinivasa.r.addepalli@...] 
Sent: Thursday, September 06, 2018 1:26 AM
To: onap-discuss@...; Yang, Bin; frank.sandoval@...
Cc: Multanen, Eric W; Ranganathan, Dileep; ramkik@...; Ranganathan, Raghu
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Thanks Bin. Let us know if everybody is okay to use framework repository or provide suggestions.

 

HPA telemetry is part of HPA umbrella and Alex presented to various committees for R3 additions. Dileep is working on A&AI schema changes and related changes needed in OOF. But in these architecture/use-case meetings, we don’t get into the details of repositories in each project and hence that part was not discussed. In both A&AI and OOF, no need to create any repositories. I hope that we don’t need to create any repository here too.  My view is that framework repo seems to be generic and since HPA telemetry is also generic, related software can be placed there. Let us know.

 

Thanks

Srini

 

 

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of bounce+19846+12237+675801+2740670@...
Sent: Tuesday, September 4, 2018 8:18 PM
To: onap-discuss@...; Addepalli, Srinivasa R <srinivasa.r.addepalli@...>; frank.sandoval@...
Cc: Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; ramkik@...; Ranganathan, Raghu <rraghu@...>
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Srini,

 

               Thanks for your feedback.  I will check with our team to see if there are any further questions/concerns regarding this proposal.

 

It seems this is a new feature impacting multiple ONAP projects (MultiCloud, AAI, OOF, more?), so I would also like to know whether this proposal has been presented to the ARC subcommittee. What was the suggestion there?

 

Thanks

 

Best Regards,

Bin Yang,    Solution Engineering Team,    Wind River

ONAP Multi-VIM/Cloud PTL

Direct +86 10 84777126    Mobile +86 13811391682    Fax +86 10 64398189

Skype: yangbincs993

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Srini
Sent: Wednesday, September 05, 2018 3:24 AM
To: Yang, Bin; onap-discuss@...; frank.sandoval@...
Cc: Multanen, Eric W; Ranganathan, Dileep; ramkik@...; Ranganathan, Raghu
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Bin,

 

I will try to answer some of the questions inline.

 

In regards to repositories, I see the following:

<image001.png>

 

Since the stats aggregation service is common across cloud technologies, I think the framework repository is a good candidate for adding this generic code and rules. What do you think?
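For illustration, the kind of aggregation rules in question could look like the following Prometheus recording-rule sketch. The rule group, metric, and label names below are hypothetical examples, not the actual ONAP rule set:

```yaml
# Hypothetical recording rules aggregating per-node HPA metrics per site.
# node_memory_HugePages_Free and node_cpu_seconds_total are standard
# node-exporter metrics; the "site" label is an assumed deployment label.
groups:
  - name: hpa_aggregation
    rules:
      - record: site:node_hugepages_free:sum
        expr: sum by (site) (node_memory_HugePages_Free)
      - record: site:node_cpu_idle:avg
        expr: avg by (site) (rate(node_cpu_seconds_total{mode="idle"}[5m]))
```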

 

 

From: Bin.Yang@... [mailto:Bin.Yang@...] 
Sent: Thursday, August 30, 2018 7:09 PM
To: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>; onap-discuss@...; frank.sandoval@...
Cc: Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; ramkik@...; Ranganathan, Raghu <rraghu@...>
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Srini,

 

               We conducted a Q&A session during the last MultiCloud weekly meeting; the purpose was to dive into the details of the intent and design of this HPA telemetry so that we can judge whether it fits into MultiCloud’s scope and how it should be incubated inside MultiCloud. Please refer to the MOM (https://wiki.onap.org/display/DW/MOM+of+Aug.+29th+2018+MultiCloud+weekly+meeting ) for details. I have also copied the questions/answers here to facilitate further discussion/feedback:

 

 

HPA Telemetry

 Q&A:

Q: Delivery plan:

A: Start in the Casablanca release; no commitment on what can be delivered

 

SRINI> True, as we started this work late. But the minimum we want to do in Casablanca is to integrate metrics collection, provide rules to aggregate the information, and provide visualization via Grafana. Our stretch goal is to populate HPA resource information in A&AI.

 

Q: Seed code:

A: N/A

 

Q: Deployment topology: will there be a single collector running for any kind of underlying VIM/Cloud type/instance?

A: Not clear yet.

 

SRINI> Yes. It is cloud-technology agnostic. As long as the service provider has a way to install exporters on compute nodes, this should work irrespective of whether the cloud technology is VIO, Titanium, upstream OpenStack, or K8S.

 

Q: Configuration API/portal:

A: N/A for now, but it will eventually be there

 

Q: Will the API/IM be generic enough to be VIM/Cloud agnostic?

A: not clear yet

SRINI> Yes. Please see above comment.

 

Q: Is there any dependency on the AAI model/schema?

A: There is a schema change to AAI in progress (Dileep)

Q: Will this schema be generic (VIM/Cloud agnostic)?

A: Supposed to be generic

SRINI> Yes. It is generic.

 

Q: Will the collector be impacted by different agents on the VIM/Cloud?

A: not clear yet

SRINI> Current support is for Prometheus exporters as well as collectd. But it should not matter as long as the agent exports metrics in the format expected by the Prometheus service.
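As a sketch, “exporting metrics as expected by Prometheus” means serving the plain-text exposition format over HTTP on a scrape endpoint, e.g. (illustrative sample values):

```
# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 1.234567e+06
node_cpu_seconds_total{cpu="0",mode="user"} 4.56789e+04
```

Any agent that can expose its data in this format (directly, or via a bridge such as the collectd exporter) can be scraped by the Prometheus service.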

 

Q: Why not the DCAE VES collector/microservice? Please evaluate this option.

A:  no answer yet

SRINI> This will be below DCAE. The intention (as roadmap) is to send alerts/events/aggregated data to DCAE via VES.

 

Suggestion (Bin Yang): if there can be just one collector for all VIM/Cloud instances/types, you can have a dedicated repo.

Otherwise, share the existing repos following the broker/plugin topology.

AI: Eric Multanen to figure out the answers to the questions above, then decide whether a dedicated repo is needed.

 

 

 

Best Regards,

Bin Yang,    Solution Engineering Team,    Wind River

ONAP Multi-VIM/Cloud PTL

Direct +86 10 84777126    Mobile +86 13811391682    Fax +86 10 64398189

Skype: yangbincs993

 

From: Addepalli, Srinivasa R [mailto:srinivasa.r.addepalli@...] 
Sent: Friday, August 31, 2018 12:51 AM
To: onap-discuss@...; frank.sandoval@...; Yang, Bin
Cc: Multanen, Eric W; Ranganathan, Dileep; ramkik@...; Ranganathan, Raghu
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Frank,

 

Please see following links:

 

Intent and high level architecture is presented in various groups:

 

Dileep started design page at (Telemetry for OOF and A&AI)

 

 

There is no design page yet for HPA Telemetry for Multi-Cloud, but there is a set of JIRA requests under the following EPIC:

 

 

Thanks

Srini

 

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Frank Sandoval
Sent: Wednesday, August 29, 2018 12:51 PM
To: onap-discuss <onap-discuss@...>; bin.yang@...
Cc: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>; Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; ramkik@...; Ranganathan, Raghu <rraghu@...>
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Srini,

 

This topic has been discussed in the OOF group as well. Is there a wiki page, slide deck, or other material describing the proposed design? Thanks


Frank Sandoval

OAM Technologies, representing Arm

OOF committer

 

 

 

On Aug 28, 2018, at 11:51 PM, Yang Bin <bin.yang@...> wrote:

 

Hi Srini,

 

               If possible, let’s continue the discussion on the upcoming MultiCloud weekly meeting.

 

Thanks.

 

Best Regards,

Bin Yang,    Solution Engineering Team,    Wind River

ONAP Multi-VIM/Cloud PTL

Direct +86 10 84777126    Mobile +86 13811391682    Fax +86 10 64398189

Skype: yangbincs993

 

From: Addepalli, Srinivasa R [mailto:srinivasa.r.addepalli@...] 
Sent: Friday, August 24, 2018 12:32 AM
To: Multanen, Eric W; onap-discuss@...; Yang, Bin
Cc: Ranganathan, Dileep; ramkik@...; Ranganathan, Raghu
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Thanks Eric.

 

Few more answers embedded below.

 

From: Multanen, Eric W 
Sent: Wednesday, August 22, 2018 5:01 PM
To: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>; onap-discuss@...; bin.yang@...
Cc: Ranganathan, Dileep <dileep.ranganathan@...>; ramkik@...; Ranganathan, Raghu <rraghu@...>
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Bin,

 

Thank you for the agenda slot today. I think we covered almost everything – the call dropped promptly on the hour.

 

 

Following are the questions I collected from the discussion – please amend and clarify if I haven’t captured your key questions.

  

I expect Srini and others can assist in providing more detailed/correct answers.

 

 

1.      Why is this (Prometheus) part of ONAP/multi-cloud instead of OpenStack?

a.      It is used more broadly than just for OpenStack.

 

SRINI>

-        This is meant for infrastructure and HPA metrics. 

-        We want to be agnostic to VIM technology

-        Many constrained Edge deployments would like minimal functions at the Edge (like Nova, Neutron, Cinder, HEAT) and leave the rest to be placed elsewhere.

-        Prometheus is only the second project to graduate from the CNCF, after K8S (only two so far).

-        Prometheus has very good integration with OpenStack, K8S, AWS, Azure, and bare-metal nodes, including collectd.

              

2.      What is the relationship between OpenStack telemetry and this proposal?

 

SRINI> Prometheus is mainly meant to collect metrics and do aggregation/summation, silencing, etc. Prometheus even has a way to collect metrics from OpenStack.

 

3.      What is the interface and/or API for configuring rules ?

a.      Initially, configuration files for the service. 

 

4.      How are different cloud / infrastructure types supported?

 

SRINI> Fortunately, Prometheus already has various integrations with popular cloud technologies.

 

5.      Where is the datastore of the Prometheus service? What data/info is accessible by ONAP?

a.      The plan is to develop the HPA statistics exporting/receiving service to provide specific HPA data to ONAP.

b.      Not sure yet whether there is generic access to all data (or whether that is desired).

 

 

Eric

 

 

From: Addepalli, Srinivasa R 
Sent: Tuesday, August 21, 2018 9:36 AM
To: onap-discuss@...; bin.yang@...
Cc: Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; ramkik@...; Ranganathan, Raghu <rraghu@...>
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Bin,

 

I am travelling.  Eric confirmed and he will talk about the feature and high level design.  If time permits, Dileep can show the A&AI schema proposal.

 

Hi Ramki and Raghu,

Since it was born out of the Edge Automation working group, it would be good if you could attend the meeting.

 

Thanks

Srini

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of bounce+19846+11984+675801+2740670@...
Sent: Tuesday, August 21, 2018 8:38 AM
To: onap-discuss@...; Addepalli, Srinivasa R <srinivasa.r.addepalli@...>
Cc: Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Srini,

 

Before I can suggest, I would like to understand this feature more.

Would someone please present the idea/design to the MultiCloud team? Perhaps the upcoming MultiCloud weekly meeting would be a good chance to do that.

 

Thanks 

Bin


On 21 Aug 2018, at 23:29, Srini <srinivasa.r.addepalli@...> wrote:

Hi Bin,

 

We have this EPIC as part of Edge automation: https://jira.onap.org/browse/MULTICLOUD-257

 

This is meant to ensure that placement decisions consider currently available resources, in addition to what we were doing with respect to capabilities up to R2.

 

This work got started last week.

 

There would be some source code development (we call it the HPA Telemetry receiving service), which gets aggregated telemetry information, does any necessary massaging/filtering, and updates the A&AI DB.

 

There would be some additional development with respect to rules for the Prometheus aggregation service.

 

This is a generic service required across OpenStack, K8S, and Azure. Is there any repository in Multi-Cloud we can use for this code, or do you suggest requesting a new repository? Please advise.

 

Thanks

Srini

 

 

 

 

 

 

 


[msb] Prometheus Integration

Dileep Ranganathan
 

Hi Huabing,

 

1.     In reference to JIRA item MSB-209, I was wondering whether Prometheus and Grafana are available for all ONAP services?

2.     I would like to use Prometheus as a common service to collect metrics for the ONAP Optimization Framework service. Please let me know if I can use the Prometheus service brought up by MSB.

 

Thanks,

Dileep


 


[appc] Cancelling tomorrow's meeting (9/12)

Taka Cho
 

Dear all,

 

We need more time to work on the action items for M4, so I am cancelling tomorrow’s APPC weekly call. If you have any issues, please send an email to onap-discuss to raise them for APPC.

 

Thanks

 

Taka Cho @ att.com


Re: [PORTAL]: cannot log in after install

Hector Anapan
 

Hi David,

 

The first rule of thumb for making sure your pods are healthy is to check that all the containers inside each pod are in the “ready” or “green” state, as below:

 

root@rancher:~# kubectl get pods -a --namespace=onap | grep portal

dev-portal-app-fb6fd5f84-8s49f                 2/2       Running     0          16m

dev-portal-cassandra-5d6649dfb6-fxngd          1/1       Running     0          15d

dev-portal-db-56bdf48468-ftwq6                 1/1       Running     0          33m

dev-portal-db-config-9z5md                     0/2       Completed   0          6d

dev-portal-sdk-f4d454ddc-h57br                 2/2       Running     0          15d

dev-portal-widget-55b4d88875-29n28             1/1       Running     0          15d

dev-portal-zookeeper-f649b6d49-d7dql           1/1       Running     0          15d

 

The only pods for which it is okay to see containers in a not-ready state are the pods triggered by Kubernetes jobs. In the case above, that is the “dev-portal-db-config-9z5md” pod, which as you can see is in the “Completed” state. This means that the job’s finite set of actions completed successfully.

 

If you meet the conditions above but are still getting the invalid username/password error, please check the following:

 

  • Log in to the portal-db (not the portal-db-config) pod, enter the mysql console (“mysql -u root -p”, where the password is Aa123456), and check whether the demo user is in the fn_user table (USE portal; SELECT first_name, org_user_id, email FROM fn_user;), where “org_user_id” is the column that lists the login usernames used to access the portal GUI.

 

  • If the demo user is not in the table above, then the job didn’t complete correctly. Please delete the job and re-run the helm release with helm upgrade.

 

  • If the demo user is already there, then please delete the “portal-app” pod and wait for all its containers to be in the ready state again.
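Put together, the check in the first bullet looks roughly like the commands below. The pod name is taken from the example output above; your release prefix and pod suffix will differ per deployment:

```shell
# Find the current portal-db pod name in your deployment
kubectl get pods --namespace=onap | grep portal-db

# Open a mysql console inside it (root password per the default chart values)
kubectl exec -it dev-portal-db-56bdf48468-ftwq6 --namespace=onap -- mysql -u root -p

# Then, inside the mysql console:
#   USE portal;
#   SELECT first_name, org_user_id, email FROM fn_user;
```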

 

Please let me know your findings.

 

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of David Darbinyan
Sent: Tuesday, September 11, 2018 3:29 AM
To: onap-discuss@...
Subject: [onap-discuss] [PORTAL]: cannot log in after install

 

hi list!

using Rancher+Kubernetes

I can successfully reach the ONAP login interface, but demo/demo123456! gives me "Invalid username or password. Please try again."

Presently i use only [ portal ], [ multicloud ], [ so ].

With this setup, all my pods are in the "green" state except "portal-db-config".

 

Should any other pod be installed for logging in? Or maybe the demo user needs to be reset manually?

 

Thanks

DD

 


Requesting to add SonarKotlin plugin

Brinda S Muthuramalingam
 

Team,

Requesting to add the SonarKotlin plugin to the ONAP SonarQube server to analyze Kotlin files. Below is the documentation link.
 

https://docs.sonarqube.org/display/PLUG/SonarKotlin

Regards
Brinda Santh M | brindasanth@... | bsminus@... | bs2796@... | +1 732 781 5923


Re: [MULTICLOUD] - HPA telemetry

Srini
 


From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of ethanlynnl
Sent: Tuesday, September 11, 2018 2:10 AM
To: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>
Cc: onap-discuss@...; Yang, Bin <bin.yang@...>; frank.sandoval@...; Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; Ramki Krishnan <ramkik@...>; Ranganathan, Raghu <rraghu@...>; Xinhui Li <lxinhui@...>
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi,

  There might be some misunderstanding here. For development, it could be in one repo. For distribution, if it’s a Docker image, then no matter whether it’s OpenStack or K8S, you pull it directly on the compute node; if it’s a service, you just put the binary on the compute node. There is no need to replicate it to another repo. If it’s Python code, the openstack repo is good; if it’s Go code, the k8s repo is good.

 

Or applying for a new repo to host the seed code would be much better; each service in multicloud having its own repo for development would make review/tests/builds easier.

 

What do you think?

 

 

On 11 Sep 2018, at 12:56 PM, Addepalli, Srinivasa R <srinivasa.r.addepalli@...> wrote:

 

Hi,

 

This scheme is valid for both OpenStack and K8S. If you put the code in the OpenStack plugin, then we would need to replicate the code for K8S too. That is not good, right?

 

Thanks

Srini

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of ethanlynnl
Sent: Monday, September 10, 2018 7:34 PM
To: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>
Cc: onap-discuss@...; Yang, Bin <bin.yang@...>; frank.sandoval@...; Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; Ramki Krishnan <ramkik@...>; Ranganathan, Raghu <rraghu@...>; Xinhui Li <lxinhui@...>
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

I’ve gone through the wiki pages listed here; it seems there are two parts of code: the HPA info exporter and the node exporter. It sounds like the node exporter has to be injected into compute hosts to collect data; it works like the VES agent in multicloud, except the target server is not the DCAE VES collector but Prometheus. Since not all cloud types support injecting the node exporter into compute hosts (e.g. Azure, VIO), it might not be usable across all cloud types, so I would suggest putting this kind of code in the openstack plugin, just as each VES agent did.

 

 

 

On 11 Sep 2018, at 6:47 AM, Addepalli, Srinivasa R <srinivasa.r.addepalli@...> wrote:

 

Hi Ethan,

 

Eric started a design page on this.

 

 

We can discuss it there, if  you like.

 

Thanks

Srini

 

 

From: Addepalli, Srinivasa R 
Sent: Monday, September 10, 2018 7:36 AM
To: 'Ethan Lynn' <ethanlynnl@...>
Cc: onap-discuss@...; Yang, Bin <bin.yang@...>; frank.sandoval@...; Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; Ramki Krishnan <ramkik@...>; Ranganathan, Raghu <rraghu@...>
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Ethan,

 

I could not understand some of the questions.

 

I am pasting a picture that was presented by Eric.

 

<image001.png>

 

With respect to above picture, let me answer your questions.

 

 

From: Ethan Lynn [mailto:ethanlynnl@...] 
Sent: Sunday, September 9, 2018 11:32 PM
To: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>
Cc: onap-discuss@...; Yang, Bin <bin.yang@...>; frank.sandoval@...; Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; Ramki Krishnan <ramkik@...>; Ranganathan, Raghu <rraghu@...>
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Srini, 

 I have some questions in other emails; could I get the answers to them?

 

What’s the relationship between the current VES agent and HPA telemetry in each plugin?

 

SRINI> What is “plugin” in your mind? If you mean plugins in the context of Multi-Cloud cloud-technology plugins, then no changes are expected in the cloud-technology-specific plugins. Compute nodes in the sites export raw information to the ‘Stats Aggregation Service’. The ‘Stats Aggregation Service’ extracts the relevant information from the raw data and sends alerts. The “HPA Stats Receiving Service” then updates the appropriate information in the A&AI DB.

 

SRINI> The “VES Exporter” is for all infrastructure statistics (beyond HPA telemetry); it exports the aggregated information to DCAE for any closed-loop actions based on infrastructure metrics.
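As a rough sketch of what the receiving end of this flow could look like: the service accepts an Alertmanager-style webhook payload and turns it into update records for A&AI. The label names (site, hpa_feature), the payload shape, and the function itself are illustrative assumptions, not the actual seed code (which is planned in Go):

```python
import json

def extract_hpa_updates(webhook_body: str) -> list:
    """Turn an Alertmanager-style webhook payload into A&AI-style updates.

    Illustrative sketch only: label names and the update record shape
    are hypothetical, not the real ONAP A&AI schema.
    """
    payload = json.loads(webhook_body)
    updates = []
    for alert in payload.get("alerts", []):
        labels = alert.get("labels", {})
        updates.append({
            "site": labels.get("site"),
            "feature": labels.get("hpa_feature"),
            "value": alert.get("annotations", {}).get("value"),
        })
    return updates

# Example payload resembling what Prometheus Alertmanager posts to a
# configured webhook receiver.
sample = json.dumps({
    "alerts": [
        {"labels": {"site": "edge-1", "hpa_feature": "hugepages"},
         "annotations": {"value": "512"}},
    ]
})
print(extract_hpa_updates(sample))
```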

 

How does this new service work? Will it need APIs for querying, or will it just regularly push aggregated metrics to A&AI/DMaaP/DCAE?

 

SRINI> I guess you mean the “HPA Stats Receiving Service” and “VES Exporter”. They push the aggregated metrics proactively. If somebody wants to read the data on demand, Prometheus itself provides the information; for example, the Grafana visualization tool reads data from the Prometheus service using its PromQL query language.

 

Do we need to install any agents on each cloud to push data?

 

SRINI> Yes. The current expectation is that the compute servers in the sites are provisioned with node-exporter and collectd. In the future, in the case of K8S, it would include cAdvisor too. As Ramki mentioned in one of the calls, VIO has its own way of collecting infrastructure statistics, and in that case the VIO plugin within the Multi-Cloud service can export the raw data to the Prometheus service. But that is for further study, as I don’t know much about VIO myself.
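For reference, installing the upstream node-exporter on a compute node can be as simple as running its container as documented upstream (illustrative; image tag and flags per the node_exporter project README):

```shell
# Run node-exporter with host network/PID namespaces so it can see
# the node's CPU, memory, and hugepage metrics; exposes :9100/metrics.
docker run -d --net=host --pid=host \
  -v /:/host:ro,rslave \
  quay.io/prometheus/node-exporter:latest \
  --path.rootfs=/host
```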

 

Is it a wrapper around the Prometheus server, or the manager of the Prometheus server?

 

SRINI> I guess you mean the “HPA Stats Receiving Service” and “VES Exporter”. They are hooked into Prometheus’s Alertmanager as alert destinations.
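Concretely, “alert destinations” refers to Alertmanager’s receiver configuration; a minimal sketch could look like the following, where the service names and URLs are placeholders, not actual ONAP endpoints:

```yaml
# Illustrative Alertmanager config routing alerts to the two services.
route:
  receiver: hpa-stats-receiver
receivers:
  - name: hpa-stats-receiver
    webhook_configs:
      - url: http://hpa-stats-receiving-service:9095/alerts
      - url: http://ves-exporter:9096/alerts
```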

 

Do any exporters exist for OpenStack/VIO/Kubernetes/Azure, or do we need to develop one?

 

SRINI> In R3 (and maybe R4), our intention is to limit this to compute infrastructure statistics. But the Prometheus project has several exporters for cloud operating systems too; I saw OpenStack, AWS, K8S, and Azure metrics exporters. I don’t know much about VIO metrics.

 

What kind of language do you plan to use for this new service? Go or Python?

 

SRINI> Golang.

 

 

 

On 6 Sep 2018, at 9:33 PM, Addepalli, Srinivasa R <srinivasa.r.addepalli@...> wrote:

 

Hi Ethan,

 

We are not suggesting adding the “HPA receiving service” to the “Broker” microservice. It would be a separate microservice by itself, and it will scale on its own. We are only suggesting using the ‘framework’ repository to host the code & rules. So there is no performance issue. Agree?

 

Thanks

Srini

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of ethanlynnl
Sent: Wednesday, September 5, 2018 10:55 PM
To: onap-discuss <onap-discuss@...>; Yang, Bin <bin.yang@...>
Cc: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>; frank.sandoval@...; Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; Ramki Krishnan <ramkik@...>; Ranganathan, Raghu <rraghu@...>
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Bin, 

  I think it’s better to use a new repository to host HPA telemetry, since the broker are written in python and we need to consider the performance. If massing broker with telemetry, massive telemetries might downgrade the broker’s performance.

 

 

On 6 Sep 2018, at 9:33 AM, Yang Bin <bin.yang@...> wrote:

 

Hi Srini,

 

               I double checked that during last MultiCloud weekly meeting, and I didn’t hear any objection. So I am thinking it is okay to use multicloud framework repository to host the HPA telemetry.

However, before anyone starts the patch upstreaming, I would like to know the details of your seed codes, and see how it fit into the existing framework repo since there is already source code for multicloud broker there. We want to keep thing consistent and easy /straight forward to understand/maintain.

 

So I suggest that you or someone else will showcase the seed code to multicloud team and then decide how it should be proceeded.

 

Thanks.

 

Best Regards,

Bin Yang,    Solution Engineering Team,    Wind River

ONAP Multi-VIM/Cloud PTL

Direct +86,10,84777126    Mobile +86,13811391682    Fax +86,10,64398189

Skype: yangbincs993

 

From: Addepalli, Srinivasa R [mailto:srinivasa.r.addepalli@...] 
Sent: Thursday, September 06, 2018 1:26 AM
To: onap-discuss@...; Yang, Bin; frank.sandoval@...
Cc: Multanen, Eric W; Ranganathan, Dileep; ramkik@...; Ranganathan, Raghu
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Thanks Bin. Let us know if everybody is okay to use framework repository or provide suggestions.

 

HPA telemetry is part of HPA umbrella and Alex presented to various committees for R3 additions. Dileep is working on A&AI schema changes and related changes needed in OOF. But in these architecture/use-case meetings, we don’t get into the details of repositories in each project and hence that part was not discussed. In both A&AI and OOF, no need to create any repositories. I hope that we don’t need to create any repository here too.  My view is that framework repo seems to be generic and since HPA telemetry is also generic, related software can be placed there. Let us know.

 

Thanks

Srini

 

 

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of bounce+19846+12237+675801+2740670@...
Sent: Tuesday, September 4, 2018 8:18 PM
To: onap-discuss@...; Addepalli, Srinivasa R <srinivasa.r.addepalli@...>; frank.sandoval@...
Cc: Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; ramkik@...; Ranganathan, Raghu <rraghu@...>
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Srini,

 

               Thanks for your feedback.  I will check with our team to see if any further question/concerns with regarding to this proposal.

 

It seems it is a new feature impacting multiple ONAP projects (MultiCloud, AAI, OOF, more? ), so I would also like to know whether this proposal has been presented to ARC subcommittee? What is the suggestion there?

 

Thanks

 

Best Regards,

Bin Yang,    Solution Engineering Team,    Wind River

ONAP Multi-VIM/Cloud PTL

Direct +86,10,84777126    Mobile +86,13811391682    Fax +86,10,64398189

Skype: yangbincs993

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Srini
Sent: Wednesday, September 05, 2018 3:24 AM
To: Yang, Bin; onap-discuss@...; frank.sandoval@...
Cc: Multanen, Eric W; Ranganathan, Dileep; ramkik@...; Ranganathan, Raghu
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Bin,

 

I try to answer some of the questions inline.

 

In regards to repository, I see following:

<image001.png>

 

Since stats aggregation service is common across Cloud technologies, I think framework repository is a good candidate for adding this generic code and rules. What do you think?

 

 

From: Bin.Yang@... [mailto:Bin.Yang@...] 
Sent: Thursday, August 30, 2018 7:09 PM
To: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>; onap-discuss@...; frank.sandoval@...
Cc: Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; ramkik@...; Ranganathan, Raghu <rraghu@...>
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Srini,

 

               We had conduct an Q&A session during last MultiCloud weekly meeting , the purpose is to dive into more details of intent/design of this HPA telemetry so that we can judge whether it fits into multicloud’s scope and how it should be incubated inside Multicloud . Please refer to the MOM (https://wiki.onap.org/display/DW/MOM+of+Aug.+29th+2018+MultiCloud+weekly+meeting ) for details, I also copied the questions/answers here to facilitate the further discussion/feedback:

 

 

HPA Telemetry

 Q&A:

Q: Delivery plan:

A: Start it in C. Release,  no commitment what can be delivered

 

SRINI> True as we have started this work late. But, minimum we want to do in Casablanca is to integrate metrics collection,  provide rules to aggregate the information and provide visualization via Grafana. Our stretch goal is to populate HPA resource information in A&AI. 

 

Q: Seed code:

A: N/A

 

Q: Deployment topology: will be a single collector running for any kind of underlying VIM/Cloud type.instance?

A: Not clear yet.

 

SRINI> Yes. It is Cloud technology agnostic. As far as service provider has a way to install exporters in compute nodes, this should work irrespective of whether the cloud technology is  VIO, Titanium, upsteam openstack orK8S.

 

Q: Configuration API/portal:

A: N/A for now, but eventually be there,

 

Q: Will the API/IM be generic to be VIM/Cloud agnostic ?

A: not clear yet

SRINI> Yes. Please see above comment.

 

Q: Is there any dependency to AAI model/schema

A: There is a schema change to AAI in progress (Dileep)

Q: Is this schema be generic (VIM/Cloudagnostic?)

A: Supposed to be generic

SRINI> Yes. It is generic.

 

Q: Will the collector?  Be impacted by different agent on VIM/Cloud ?

A: not clear yet

SRINI> Current support is for Prometheus exporters as well as CollectD.  But, it should not be a matter as long as the agent supports exporting metrics as expected by Prometheus service.

 

Q: Why not DCAE VES collector /microservice? Please evaluate this option.

A:  no answer yet

SRINI> This will be below DCAE.  Intention (as roadmap) to send alerts/events/aggregation-data to DCAE via VES.

 

suggestion (Bin Yang ) If there can be just 1 collector for all VIM/Cloud instance/types, you can have dedicate repo

Otherwise, share the existing repos following the broker/plugin topology

AI: Eric Multanen figure out the questions above, then decide whether a dedicated repo needed.

 

 

 

Best Regards,

Bin Yang,    Solution Engineering Team,    Wind River

ONAP Multi-VIM/Cloud PTL

Direct +86,10,84777126    Mobile +86,13811391682    Fax +86,10,64398189

Skype: yangbincs993

 

From: Addepalli, Srinivasa R [mailto:srinivasa.r.addepalli@...] 
Sent: Friday, August 31, 2018 12:51 AM
To: onap-discuss@...; frank.sandoval@...; Yang, Bin
Cc: Multanen, Eric W; Ranganathan, Dileep; ramkik@...; Ranganathan, Raghu
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Frank,

 

Please see following links:

 

Intent and high level architecture is presented in various groups:

 

Dileep started design page at (Telemetry for OOF and A&AI)

 

 

There is no design page yet for HPA Telemetry for Multi-Cloud. But there are set of JIRA request on the following EPIC:

 

 

Thanks

Srini

 

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Frank Sandoval
Sent: Wednesday, August 29, 2018 12:51 PM
To: onap-discuss <onap-discuss@...>; bin.yang@...
Cc: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>; Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; ramkik@...; Ranganathan, Raghu <rraghu@...>
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Srini,

 

This topic has been discussed in the OOF group as well. Is there a wiki page, slide deck, or other material describing the proposed design? Thanks


Frank Sandoval

OAM Technologies, representing Arm

OOF committer

 

 

 

On Aug 28, 2018, at 11:51 PM, Yang Bin <bin.yang@...> wrote:

 

Hi Srini,

 

               If possible, let’s continue the discussion on the upcoming MultiCloud weekly meeting.

 

Thanks.

 

Best Regards,

Bin Yang,    Solution Engineering Team,    Wind River

ONAP Multi-VIM/Cloud PTL

Direct +86,10,84777126    Mobile +86,13811391682    Fax +86,10,64398189

Skype: yangbincs993

 

From: Addepalli, Srinivasa R [mailto:srinivasa.r.addepalli@...] 
Sent: Friday, August 24, 2018 12:32 AM
To: Multanen, Eric W; onap-discuss@...; Yang, Bin
Cc: Ranganathan, Dileep; ramkik@...; Ranganathan, Raghu
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Thanks Eric.

 

Few more answers embedded below.

 

From: Multanen, Eric W 
Sent: Wednesday, August 22, 2018 5:01 PM
To: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>; onap-discuss@...; bin.yang@...
Cc: Ranganathan, Dileep <dileep.ranganathan@...>; ramkik@...; Ranganathan, Raghu <rraghu@...>
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Bin,

 

Thank you for the agenda slot today, I think we covered most of everything – the call dropped promptly on the hour.

 

 

Following are the questions I collected from the discussion – please amend and clarify if I haven’t

captured your key questions.

  

I expect Srini and others can assist in providing more detailed/correct answers.

 

 

1.      Why is this part (Prometheus) of ONAP/multi-cloud instead of Openstack?

a.      it is used more broadly than just for Openstack

 

SRINI>

-        This is meant for infrastructure and HPA metrics. 

-        We want to be agnostic to VIM technology

-        Many constrained Edge deployments would like minimal functions in the Edge – Like nova, neutron, Cinder, HEAT and leave rest to be put in elsewhere.

-        Prometheus was the second project (after K8S) to graduate from the CNCF; only two have graduated so far.

-        Prometheus has very good integrations with OpenStack, K8S, AWS, Azure, and bare-metal nodes, including collectd.

              

2.      What is the relationship between Openstack telemetry and this proposal?

 

SRINI> Prometheus is mainly meant to collect metrics and handle aggregation/summation, silencing, etc. Prometheus even has a way to collect metrics from OpenStack.
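To illustrate that integration, here is a rough sketch of a Prometheus scrape configuration using its built-in OpenStack service discovery; the Keystone endpoint, credentials, and region below are placeholders, not values from any ONAP deployment:

```yaml
scrape_configs:
  - job_name: 'openstack-hypervisors'
    openstack_sd_configs:
      - identity_endpoint: 'http://keystone.example.com:5000/v3'  # placeholder Keystone URL
        username: 'prometheus'        # placeholder credentials
        password: 'secret'
        domain_name: 'Default'
        project_name: 'admin'
        region: 'RegionOne'
        role: 'hypervisor'            # 'hypervisor' discovers compute hosts; 'instance' discovers VMs
```

Prometheus would then discover the targets from Keystone and scrape whatever exporter runs on them.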

 

3.      What is the interface and/or API for configuring rules ?

a.      Initially, configuration files for the service. 

 

4.      How are different cloud / infrastructure type supported?

 

SRINI> Fortunately, Prometheus already has various integrations with popular cloud technologies.

 

5.      Where is the datastore of the Prometheus service ? What data/info is accessible by ONAP?

a.      the plan is to develop an HPA statistics exporting/receiving service to provide specific HPA data to ONAP.

b.      not sure yet if there is a generic access to all data (and whether that is desired)

 

 

Eric

 

 

From: Addepalli, Srinivasa R 
Sent: Tuesday, August 21, 2018 9:36 AM
To: onap-discuss@...; bin.yang@...
Cc: Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>; ramkik@...; Ranganathan, Raghu <rraghu@...>
Subject: RE: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Bin,

 

I am travelling.  Eric has confirmed that he will talk about the feature and high-level design.  If time permits, Dileep can show the A&AI schema proposal.

 

Hi Ramki and Raghu,

Since this work was born out of the Edge Automation working group, it would be good if you could attend the meeting.

 

Thanks

Srini

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of bounce+19846+11984+675801+2740670@...
Sent: Tuesday, August 21, 2018 8:38 AM
To: onap-discuss@...; Addepalli, Srinivasa R <srinivasa.r.addepalli@...>
Cc: Multanen, Eric W <eric.w.multanen@...>; Ranganathan, Dileep <dileep.ranganathan@...>
Subject: Re: [onap-discuss] [MULTICLOUD] - HPA telemetry

 

Hi Srini,

 

Before I can suggest a repository, I would like to understand this feature better.

Would someone please present the idea/design to the MultiCloud team?  Perhaps the upcoming MultiCloud weekly meeting would be a good chance to do that.

 

Thanks 

Bin


On Aug 21, 2018, at 23:29, Srini <srinivasa.r.addepalli@...> wrote:

Hi Bin,

 

We have this EPIC as part of Edge automation: https://jira.onap.org/browse/MULTICLOUD-257

 

This is meant to ensure that placement decisions consider currently available resources, in addition to the capability-based placement we had been doing through R2.

 

This work got started last week.

 

There would be some source code development (we call it the HPA Telemetry receiving service) that receives aggregated telemetry information, does any necessary massaging/filtering, and updates the A&AI DB.

 

There would be some additional development with respect to the rules for the Prometheus aggregation service.

 

This is a generic service required across OpenStack, K8S, and Azure. Is there an existing Multi-Cloud repository where we can put this code, or do you suggest requesting a new repository?  Please advise.

 

Thanks

Srini

 

 

 

 

 

 

 


[coe] Cancelling today's meeting

Victor Morales <victor.morales@...>
 

Howdy,

 

We have an internal Intel meeting that conflicts with this one. So, most of us are not going to be able to make it.

 

Thanks

Victor Morales


Re: [vid] API's available in VID

Srini
 

Just browsed through it. It helps in understanding ONAP from the user’s point of view. Very good work on using SO programmatically.  Thanks for sharing.

 

Srini

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Rene Robert
Sent: Tuesday, September 11, 2018 10:37 AM
To: onap-discuss@...; marcin.przybysz@...
Subject: Re: [onap-discuss] [vid] API's available in VID

 

Hi Marcin,

VID uses the SO API to instantiate services, VNFs, VF modules, and networks.
Have a look at the SO API.

You can also find some postman collections in https://gitlab.com/Orange-OpenSource/onap-tests

Also, onap-tests is about onboarding and instantiation using the ONAP APIs.

Envoyé depuis mon smartphone



---- Przybysz, Marcin (Nokia - PL/Wroclaw) a écrit ----

Hi,

 

I’m searching for a list of the APIs available in VID with which I can instantiate, search, browse, and modify services, components …

I would like to omit GUI and trigger whole behavior via API’s.

 

BR

 

Marcin Przybysz

Nokia

_________________________________________________________________________________________________________________________
 
Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci.
 
This message and its attachments may contain confidential or privileged information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified.
Thank you.


Re: [vid] API's available in VID

Rene Robert <rene.robert@...>
 

Hi Marcin,

VID uses the SO API to instantiate services, VNFs, VF modules, and networks.
Have a look at the SO API.

You can also find some postman collections in https://gitlab.com/Orange-OpenSource/onap-tests

Also, onap-tests is about onboarding and instantiation using the ONAP APIs.

Envoyé depuis mon smartphone



---- Przybysz, Marcin (Nokia - PL/Wroclaw) a écrit ----

Hi,

 

I’m searching for a list of the APIs available in VID with which I can instantiate, search, browse, and modify services, components …

I would like to omit GUI and trigger whole behavior via API’s.

 

BR

 

Marcin Przybysz

Nokia



Re: vFW Closed Loop - Operational Policy issues in Beijing #install #usecaseui #kubernetes #policy #drools

Jorge Hernandez
 

I see ... I think the lab installation is not in a good state. The problem may be beyond the policy component: if you cannot ping the "message-router" service, that may be an indication of bigger problems. Check that you can ping it from locations outside the brmsgw container to see whether it is a general problem, and verify that the message-router service shows up with "kubectl get services ...".

With regards to the POLICY-1097 fix, it only affects the oom beijing branch, so it was only submitted there. The master branch should be OK for Casablanca in that regard. You could patch your oom/kubernetes beijing install, or pull the latest changes from the git oom beijing branch and then, as usual, do a "make all" to make sure your helm charts are updated.

From the policy standpoint, I suggest starting with clean data and first making sure every component can talk to each other, including the message-router, before doing the push-policies.sh. The /dockerdata-nfs/<release>/mariadb and /dockerdata-nfs/<release>/nexus directories (PVs) contain policy-specific data, and I think it is safe to remove them before doing a helm upgrade/install to pick up the latest changes mentioned in the previous paragraph.
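For concreteness, the recovery sequence described above might look roughly like this from the oom host; the release name and paths below are placeholders and would need to match your install:

```shell
RELEASE=onap   # placeholder helm release name

# 1. Check that message-router is reachable outside the brmsgw container
kubectl -n onap get services | grep message-router

# 2. Pull the latest oom beijing branch and rebuild the helm charts
cd oom/kubernetes && git checkout beijing && git pull && make all

# 3. Remove the policy-specific persistent data (PVs) before upgrading
rm -rf /dockerdata-nfs/${RELEASE}/mariadb /dockerdata-nfs/${RELEASE}/nexus

# 4. Upgrade/install so the policy component picks up the POLICY-1097 fix
helm upgrade ${RELEASE} local/onap
```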

Hope it helps.
Jorge

-----Original Message-----
From: onap-discuss@lists.onap.org [mailto:onap-discuss@lists.onap.org] On Behalf Of Cristina Precup via Lists.Onap.Org
Sent: Tuesday, September 11, 2018 9:59 AM
To: onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] vFW Closed Loop - Operational Policy issues in Beijing #kubernetes #policy #drools #dcaegen2 #install #usecaseui

Hello Jorge,

Thank you for pointing me in the right direction. I did go through the wiki page that you are mentioning. Here is the overview in short:

- Healthcheck: PDP and PAP are unreachable
- Policy healthcheck fails
- There is no default group in the PDP Tab of the Policy UI
- brmsgw cannot ping nexus, drools and message-router

I understand that the POLICY-1097 fix has been applied in the Casablanca release. Would you suggest taking the changes as a patch into Beijing? What would be the recommended approach here?


Best regards,
--
Cristina Precup


Re: [control-loop][policy][appc] Why policy does not publish expected event to DMaaP for APPC restart action

Jorge Hernandez
 

Hello Bin,

 

Because you are creating it from an archetype, the configuration is not the full-blown one.   Note that the

            "topic": "APPC-LCM-READ",

            "topicCommInfrastructure": "NOOP"

 

uses a “noop” sink, therefore the output won’t be put on the DMaaP bus.

 

The change is simple: in “${POLICY_HOME}/config/amsterdam-controller.properties”, change “noop” to “dmaap” or “ueb” (they behave equivalently). Use one of the configurations from our official installations as a reference.
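For illustration only, the sink section of a generated amsterdam-controller.properties might change along these lines; the exact property keys and topic lists in your file may differ:

```properties
# before: events published to APPC-LCM-READ never leave the container
noop.sink.topics=APPC-LCM-READ

# after: publish to the DMaaP bus ("ueb"-prefixed keys behave equivalently)
dmaap.sink.topics=APPC-LCM-READ
dmaap.sink.topics.APPC-LCM-READ.servers=message-router
```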

 

It will help if you use the telemetry shell to explore runtime data and configuration: simply type “telemetry” in the drools container, then navigate resources with “cd”, list them with “ls”, and invoke “get” commands to inspect them.

 

Note also that the first time you post a message to an anonymous dmaap topic, it will create the topic, but that message won’t be processed.   The second message onwards will be.

 

I am not sure about the message contents you are describing below; in theory the Restart and other APPC messages have been tested extensively in conjunction with the integration team.

 

Best regards,

Jorge

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Yang Bin
Sent: Tuesday, September 11, 2018 4:57 AM
To: DRAGOSH, PAM <pdragosh@...>; onap-discuss@...
Cc: MAHER, RANDA <rx196w@...>
Subject: [onap-discuss][control-loop][policy][appc] Why policy does not publish expected event to DMaaP for APPC restart action

 

Dear policy team,

 

Would you help me identify the root cause of the following failure?

 

I came up with my own operational policy and provisioned it to the drools container; however, it does not work as expected. The policy and the testing steps are attached. The specific problems are:

 

 

######### decoding the "topicSinks/recentEvents":

 

        {

            "alive": true,

            "locked": false,

            "recentEvents": [

                "{\n  \"body\": {\n    \"input\": {\n      \"common-header\": {\n        \"timestamp\": \"2018-09-11T09:38:32.486Z\",\n        \"api-ver\": \"2.00\",\n        \"originator-id\": \"8c1b8bd8-06f7-493f-8ed7-daaa4cc481bc\",\n        \"request-id\": \"8c1b8bd8-06f7-493f-8ed7-daaa4cc481bc\",\n        \"sub-request-id\": \"1\",\n        \"flags\": {}\n      },\n      \"action\": \"Restart\",\n      \"action-identifiers\": {\n        \"vnf-id\": \"zdfw1lb01lb01\"\n      }\n    }\n  },\n  \"version\": \"2.0\",\n  \"rpc-name\": \"restart\",\n  \"correlation-id\": \"8c1b8bd8-06f7-493f-8ed7-daaa4cc481bc-1\",\n  \"type\": \"request\"\n}"

            ],

            "servers": [

                "vm1.mr.simpledemo.openecomp.org"

            ],

            "topic": "APPC-LCM-READ",

            "topicCommInfrastructure": "NOOP"

        },

 

                              ==>

 

                              {

                                             "body": {

                                                            "input": {

                                                                           "common-header": {

                                                                                          "timestamp": "2018-09-11T08:43:10.499Z",

                                                                                          "api-ver": "2.00",

                                                                                          "originator-id": "8c1b8bd8-06f7-493f-8ed7-daaa4cc481bc",

                                                                                          "request-id": "8c1b8bd8-06f7-493f-8ed7-daaa4cc481bc",

                                                                                          "sub-request-id": "1",

                                                                                          "flags": {}

                                                                           },

                                                                           "action": "Restart",

                                                                           "action-identifiers": {

                                                                                          "vnf-id": "zdfw1lb01lb01"

                                                                           }

                                                            }

                                             },

                                             "version": "2.0",

                                             "rpc-name": "restart",

                                             "correlation-id": "8c1b8bd8-06f7-493f-8ed7-daaa4cc481bc-1",

                                             "type": "request"

                              }

 

 

Issue 1: the published event cannot be captured from DMaaP

               curl -X GET \

  'http://10.12.6.210:3904/events/APPC-LCM-READ/EVENT-LISTENER-POSTMAN/304?timeout=6000&limit=10&filter=' \

  -H 'Cache-Control: no-cache' \

  -H 'Content-Type: application/json' \

  -H 'Postman-Token: 8f320b21-8ff0-4908-ad7a-b45f38deba95' \

  -H 'X-FromAppId: 121' \

  -H 'X-TransactionId: 9999'

 

Issue 2: the timestamp does not match APPC, which will result in: "message": "EXPIRED REQUEST"

               "timestamp": "2018-09-11T08:43:10.499Z",

 

Issue 3: flags is missing ttl, which will result in: "message": "EXPIRED REQUEST"

               "flags": {"ttl": 36000}

              

 

Issue 4: the vnf-id is wrong

               \"vnf-id\": \"zdfw1lb01lb01\"

              

               "vnf-id" should be retrieved from the input, {"generic-vnf.vnf-id": "69bc974a-50b7-4a61-bc5d-dc1728a3fe89"}, whereas it is currently set to "vserver.vserver-name"

 

BTW, the expected event (verified with Postman) looks like:

 

{

               "version": "2.0",

               "type": "request",

               "correlation-id": "c09ac7d1-de62-0016-2000-e63701125557-201",

               "cambria.partition": "APPC",

               "rpc-name": "restart",

               "body": {

                       "input": {

                               "common-header": {

                                       "timestamp": "2018-09-11T02:35:04.45Z",

                                       "api-ver": "2.00",

                                       "originator-id": "APPC",

                                       "request-id": "1",

                                       "sub-request-id": "2",

                                       "flags": {

                                               "ttl": 36000

                                       }

                               },

                               "action-identifiers": {

                                       "service-instance-id": "",

                                       "vnf-id": "69bc974a-50b7-4a61-bc5d-dc1728a3fe89",

                                       "vnfc-name": "zdfw1lb01lb01",

                                       "vserver-id": "http://10.12.25.2:8774/v2.1/0e148b76ee8c42f78d37013bf6b7b1ae/servers/726ce4c0-4f09-437c-8ecf-d509f1181f7c"

                               },

                               "action": "Restart",

                               "payload": ""

                       }

               }

 

}

 

Thanks

 

Best Regards,

Bin Yang,    Solution Engineering Team,    Wind River

ONAP Multi-VIM/Cloud PTL

Direct +86 10 84777126    Mobile +86 13811391682    Fax +86 10 64398189

Skype: yangbincs993

 


Re: SB04 for CDS F2F

Brian Freeman
 

Gonna have to rebuild SB04 to pick up the latest AAI dockers (the oom charts were updated just after SB04 was built on Friday).

 

Brian

 

 

From: FREEMAN, BRIAN D
Sent: Tuesday, September 11, 2018 10:03 AM
To: MALAKOV, YURIY <ym9479@...>; SMOKOWSKI, STEVEN <ss835w@...>; seshu.kumar.m@...; onap-discuss@...
Cc: 'de Talhouet, Alexis' <alexis.de_talhouet@...>; Yang Xu (Yang, Fixed Network) <Yang.Xu3@...>; PLATANIA, MARCO <platania@...>
Subject: SB04 for CDS F2F

 

I have a smaller distribution of ONAP OOM in SB04 up and running.

 

./ete-k8s.sh onap sb04

 

This is simply a custom tag in testsuites/health_check.robot so that we skip the long timeouts for the components that don't pass health check yet.

 

It will run a health check on the main components that are both installed and working – more are installed but not passing healthchecks.

 

==============================================================================

OpenECOMP ETE.Robot.Testsuites.Health-Check :: Testing ecomp components are...

==============================================================================

Basic A&AI Health Check                                               | PASS |

------------------------------------------------------------------------------

Basic DMAAP Message Router Health Check                               | PASS |

------------------------------------------------------------------------------

Basic Policy Health Check                                             | PASS |

------------------------------------------------------------------------------

Basic Portal Health Check                                             | PASS |

------------------------------------------------------------------------------

Basic SDC Health Check                                                | PASS |

------------------------------------------------------------------------------

Basic SDNC Health Check                                               | PASS |

------------------------------------------------------------------------------

Basic SO Health Check                                                 | PASS |

------------------------------------------------------------------------------

OpenECOMP ETE.Robot.Testsuites.Health-Check :: Testing ecomp compo... | PASS |

7 critical tests, 7 passed, 0 failed

7 tests total, 7 passed, 0 failed

 

I am still working on getting distribution to succeed via ete-k8s.sh onap healthdist tests.

 

Brian

 

 


Re: vFW Closed Loop - Operational Policy issues in Beijing #install #usecaseui #kubernetes #policy #drools

Cristina Precup
 

Hello Jorge,

Thank you for pointing me in the right direction. I did go through the wiki page that you are mentioning. Here is the overview in short:

- Healthcheck: PDP and PAP are unreachable
- Policy healthcheck fails
- There is no default group in the PDP Tab of the Policy UI
- brmsgw cannot ping nexus, drools and message-router

I understand that the POLICY-1097 fix has been applied in the Casablanca release. Would you suggest taking the changes as a patch into Beijing? What would be the recommended approach here?


Best regards,
--
Cristina Precup


Re: vFW Closed Loop - Operational Policy issues in Beijing #install #usecaseui #kubernetes #policy #drools

Jorge Hernandez
 

Hello Cristina,

A bug was recently found in the latest Beijing version that you may be hitting (POLICY-1097). A fix has been merged.

Please also take a look at https://wiki.onap.org/display/DW/Policy+on+OOM to look at state of things after your installation.

If you are running the vFW use case, note that you can avoid using the update-vfw-op-policy.sh script. Instead, before invoking push-policies.sh, edit that file directly: in the encoded vFirewall operational policy piece, modify the resourceID to match the one you are using in your lab (which is the input parameter to update-vfw-op-policy.sh). That is in essence what update-vfw-op-policy.sh does.

One caveat to this approach is that the kubernetes install mounts push-policies.sh on a read-only file system, so within the container you would copy push-policies.sh to a directory with write permissions, make your changes, and invoke the push-policies script, as suggested in the wiki page above. Good luck!
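As a small sketch of that edit (the file contents, IDs, and paths below are hypothetical placeholders, not the real push-policies.sh):

```shell
OLD_ID="00000000-0000-0000-0000-000000000000"   # placeholder resourceID currently in the file
NEW_ID="11111111-1111-1111-1111-111111111111"   # placeholder resourceID from your lab

# Work in a writable directory, since the original mount is read-only.
mkdir -p /tmp/policy-edit
# In the real container this would be: cp push-policies.sh /tmp/policy-edit/
printf '"resourceID": "%s"\n' "$OLD_ID" > /tmp/policy-edit/push-policies.sh

# Swap the resourceID in place, then invoke the edited script from here.
sed -i "s/${OLD_ID}/${NEW_ID}/g" /tmp/policy-edit/push-policies.sh
```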

Jorge

-----Original Message-----
From: onap-discuss@lists.onap.org [mailto:onap-discuss@lists.onap.org] On Behalf Of Cristina Precup via Lists.Onap.Org
Sent: Tuesday, September 11, 2018 6:59 AM
To: onap-discuss@lists.onap.org
Subject: Re: [onap-discuss] vFW Closed Loop - Operational Policy issues in Beijing #kubernetes #policy #drools #dcaegen2 #install #usecaseui

Hello,

Thank you for the reference. I did do the onboarding step mentioned there, making sure to replace the field with the correct PG model-invariant-id in the push-policies.sh script. However, I don't think this script actually does the onboarding in my case:

kubectl exec -it scapula-pap-5bf5f48d7b-v7fld -c pap -n onap -- bash -c "export PRELOAD_POLICIES=true; /home/policy/push-policies.sh"
Upload BRMS Param Template
--2018-09-11 11:32:53-- https://git.onap.org/policy/drools-applications/plain/controlloop/templates/archetype-cl-amsterdam/src/main/resources/archetype-resources/src/main/resources/__closedLoopControlName__.drl
Resolving git.onap.org (git.onap.org)... 198.145.29.92 Connecting to git.onap.org (git.onap.org)|198.145.29.92|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 58366 (57K) [text/plain]
Saving to: 'cl-amsterdam-template.drl'

100%[==============================================================================>] 58,366 193KB/s in 0.3s

2018-09-11 11:32:54 (193 KB/s) - 'cl-amsterdam-template.drl' saved [58366/58366]

* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
POST /pdp/api/policyEngineImport HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 58757
Expect: 100-continue
Content-Type: multipart/form-data;
boundary=------------------------110622b19dc01d62
* Connection #0 to host pdp left intact
PPRELOAD_POLICIES is true
Create BRMSParam Operational Policies
Create BRMSParamvFirewall Policy
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/createPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/html
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 1309
Expect: 100-continue
* Connection #0 to host pdp left intact
PCreate BRMSParamvDNS Policy
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/createPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/html
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 1148
Expect: 100-continue
* Connection #0 to host pdp left intact
PCreate BRMSParamVOLTE Policy
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/createPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/html
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 1140
Expect: 100-continue
* Connection #0 to host pdp left intact
PCreate BRMSParamvCPE Policy
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/createPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/html
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 1139
Expect: 100-continue
* Connection #0 to host pdp left intact
PCreate MicroService Config Policies
Create MicroServicevFirewall Policy
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/createPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 1689
Expect: 100-continue
* Connection #0 to host pdp left intact
PCreate MicroServicevDNS Policy
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/createPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 1306
Expect: 100-continue
* Connection #0 to host pdp left intact
PCreate MicroServicevCPE Policy
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/createPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 1640
Expect: 100-continue
* Connection #0 to host pdp left intact
PCreating Decision Guard policy
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/createPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 463
* upload completely sent off: 463 out of 463 bytes
* Connection #0 to host pdp left intact
PPush Decision policy
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/pushPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 97
* upload completely sent off: 97 out of 97 bytes
* Connection #0 to host pdp left intact
PPushing BRMSParam Operational policies
pushPolicy : PUT : com.BRMSParamvFirewall
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/pushPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 99
* upload completely sent off: 99 out of 99 bytes
* Connection #0 to host pdp left intact
PpushPolicy : PUT : com.BRMSParamvDNS
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/pushPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 94
* upload completely sent off: 94 out of 94 bytes
* Connection #0 to host pdp left intact
PpushPolicy : PUT : com.BRMSParamVOLTE
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/pushPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 95
* upload completely sent off: 95 out of 95 bytes
* Connection #0 to host pdp left intact
PpushPolicy : PUT : com.BRMSParamvCPE
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/pushPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 94
* upload completely sent off: 94 out of 94 bytes
* Connection #0 to host pdp left intact
PPushing MicroService Config policies
pushPolicy : PUT : com.MicroServicevFirewall
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/pushPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 104
* upload completely sent off: 104 out of 104 bytes
* Connection #0 to host pdp left intact
PpushPolicy : PUT : com.MicroServicevDNS
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/pushPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 99
* upload completely sent off: 99 out of 99 bytes
* Connection #0 to host pdp left intact
PpushPolicy : PUT : com.MicroServicevCPE
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/pushPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 99
* upload completely sent off: 99 out of 99 bytes
* Connection #0 to host pdp left intact

Checking further on PAP if there are any policies configured gives me nothing:

policy@scapula-pap-5bf5f48d7b-v7fld:/tmp/policy-install$ curl --silent -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{"policyName": ".*vFirewall.*"}' http://pdp:8081/pdp/api/getConfig

policy@scapula-pap-5bf5f48d7b-v7fld:/tmp/policy-install$ curl --silent -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{"policyName": "*"}' http://pdp:8081/pdp/api/getConfig


Best regards,
--
Cristina Precup


Re: [so] SO API version

Sanchita Pathak
 

The issue was due to using an incorrect SO Docker image (possibly the Amsterdam one!). After pulling the SO image again, the issue is gone.


Named-queries and instance-filters #aai

joss.armstrong@...
 


I am trying to find out the relevant type to use as an "instance-filter" for each of the named-query types. A couple of them are documented, but for some it is not obvious from the description what the instance filter should be set to. Is there a specific place in the code that checks the instance-filter objects for the named-queries where I could find this out?

Thanks
Joss

 


SB04 for CDS F2F

Brian Freeman
 

I have a smaller distribution of ONAP OOM in SB04 up and running.

 

./ete-k8s.sh onap sb04

 

This is simply a custom tag in testsuites/health_check.robot so that we skip the long timeouts for the components that don't pass health check yet.

 

It will run a health check on the main components that are both installed and working; more components are installed but not yet passing health checks.

 

==============================================================================

OpenECOMP ETE.Robot.Testsuites.Health-Check :: Testing ecomp components are...

==============================================================================

Basic A&AI Health Check                                               | PASS |

------------------------------------------------------------------------------

Basic DMAAP Message Router Health Check                               | PASS |

------------------------------------------------------------------------------

Basic Policy Health Check                                             | PASS |

------------------------------------------------------------------------------

Basic Portal Health Check                                             | PASS |

------------------------------------------------------------------------------

Basic SDC Health Check                                                | PASS |

------------------------------------------------------------------------------

Basic SDNC Health Check                                               | PASS |

------------------------------------------------------------------------------

Basic SO Health Check                                                 | PASS |

------------------------------------------------------------------------------

OpenECOMP ETE.Robot.Testsuites.Health-Check :: Testing ecomp compo... | PASS |

7 critical tests, 7 passed, 0 failed

7 tests total, 7 passed, 0 failed
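Each "Basic ... Health Check" above boils down to an HTTP probe against a component endpoint. A self-contained Python sketch of that pattern (the stub server and the /healthcheck path here are illustrative stand-ins, not the actual Robot keywords or ONAP endpoints):

```python
import http.server
import threading
import urllib.request


# Stand-in service exposing /healthcheck so the probe is runnable anywhere;
# the real Robot suite hits each ONAP component's own endpoint instead.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        status = 200 if self.path == "/healthcheck" else 404
        self.send_response(status)
        self.end_headers()

    def log_message(self, *args):  # keep the demo output quiet
        pass


srv = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=srv.serve_forever, daemon=True).start()


def health_check(url, timeout=2):
    """Return True iff the endpoint answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers HTTPError, URLError, and socket timeouts
        return False


base = "http://127.0.0.1:%d" % srv.server_address[1]
healthy = health_check(base + "/healthcheck")
missing = health_check(base + "/no-such-component")
srv.shutdown()
print(healthy, missing)  # True False
```

The skip-tag trick in the email is the complementary half: components whose probe is known to fail are excluded up front, so the suite does not burn the long connection timeouts on them.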

 

I am still working on getting distribution to succeed via the ete-k8s.sh onap healthdist tests.

 

Brian

 

 


Re: onap-sdnc-ansible-server cannot launch

Brian Freeman
 

I’m not the expert on this but perhaps this will help.

 

Are you building your own SDNC Docker containers?

 

 

Looks like the problem is in

[sdnc/oam.git] / installation / ansible-server / src / main / scripts / startAnsibleServer.sh

 

  11     apt-add-repository -y ppa:ansible/ansible

  12     apt-get -y install ansible

 

But I am not sure.

 

Can you try that on a plain Ubuntu system and see if the PPA is out of sync with the Ubuntu archives?

 

You can always pull the SDNC Ansible Docker container from nexus3.onap.org if it's blocking you.

 

Brian

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Abdelmuhaimen Seaudi
Sent: Tuesday, September 11, 2018 6:20 AM
To: onap-discuss@...
Subject: [onap-discuss] onap-sdnc-ansible-server cannot launch

 

Hi, I am using the Beijing branch for my OOM deployment.

 

onap-sdnc-ansible-server cannot launch; the logs give the error below:

 

 

How can I correct this?

 

Reading package lists...

Building dependency tree...

Reading state information...

The following additional packages will be installed:

  apt-utils cron dh-python distro-info-data gir1.2-glib-2.0 iso-codes

  libapt-inst2.0 libdbus-glib-1-2 libgirepository-1.0-1 libpython3-stdlib

  lsb-release python-apt-common python3 python3-apt python3-dbus python3-gi

  python3-minimal python3-pycurl python3-software-properties python3.5

  python3.5-minimal unattended-upgrades

Suggested packages:

  anacron logrotate checksecurity exim4 | postfix | mail-transport-agent

  isoquery lsb python3-doc python3-tk python3-venv python3-apt-dbg

  python-apt-doc python-dbus-doc python3-dbus-dbg libcurl4-gnutls-dev

  python-pycurl-doc python3-pycurl-dbg python3.5-venv python3.5-doc

  binfmt-support bsd-mailx mail-transport-agent

The following NEW packages will be installed:

  apt-utils cron dh-python distro-info-data gir1.2-glib-2.0 iso-codes

  libapt-inst2.0 libdbus-glib-1-2 libgirepository-1.0-1 libpython3-stdlib

  lsb-release python-apt-common python3 python3-apt python3-dbus python3-gi

  python3-minimal python3-pycurl python3-software-properties python3.5

  python3.5-minimal software-properties-common unattended-upgrades

0 upgraded, 23 newly installed, 0 to remove and 2 not upgraded.

Need to get 5255 kB of archives.

After this operation, 31.7 MB of additional disk space will be used.

Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 cron amd64 3.0pl1-128ubuntu2 [68.4 kB]

Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3.5-minimal amd64 3.5.2-2ubuntu0~16.04.4 [1597 kB]

Get:3 http://archive.ubuntu.com/ubuntu xenial/main amd64 python3-minimal amd64 3.5.1-3 [23.3 kB]

Get:4 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3.5 amd64 3.5.2-2ubuntu0~16.04.4 [165 kB]

Get:5 http://archive.ubuntu.com/ubuntu xenial/main amd64 libpython3-stdlib amd64 3.5.1-3 [6818 B]

Get:6 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 dh-python all 2.20151103ubuntu1.1 [74.1 kB]

Get:7 http://archive.ubuntu.com/ubuntu xenial/main amd64 python3 amd64 3.5.1-3 [8710 B]

Err:8 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libapt-inst2.0 amd64 1.2.26

  404  Not Found [IP: 91.189.88.161 80]

Err:9 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 apt-utils amd64 1.2.26

  404  Not Found [IP: 91.189.88.161 80]

Get:10 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 distro-info-data all 0.28ubuntu0.8 [4502 B]

Get:11 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 lsb-release all 9.20160110ubuntu0.2 [11.8 kB]

Get:12 http://archive.ubuntu.com/ubuntu xenial/main amd64 libgirepository-1.0-1 amd64 1.46.0-3ubuntu1 [88.3 kB]

Get:13 http://archive.ubuntu.com/ubuntu xenial/main amd64 gir1.2-glib-2.0 amd64 1.46.0-3ubuntu1 [127 kB]

Get:14 http://archive.ubuntu.com/ubuntu xenial/main amd64 iso-codes all 3.65-1 [2268 kB]

Get:15 http://archive.ubuntu.com/ubuntu xenial/main amd64 libdbus-glib-1-2 amd64 0.106-1 [67.1 kB]

Err:16 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python-apt-common all 1.1.0~beta1ubuntu0.16.04.1

  404  Not Found [IP: 91.189.88.161 80]

Err:17 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3-apt amd64 1.1.0~beta1ubuntu0.16.04.1

  404  Not Found [IP: 91.189.88.161 80]

Get:18 http://archive.ubuntu.com/ubuntu xenial/main amd64 python3-dbus amd64 1.2.0-3 [83.1 kB]

Get:19 http://archive.ubuntu.com/ubuntu xenial/main amd64 python3-gi amd64 3.20.0-0ubuntu1 [153 kB]

Get:20 http://archive.ubuntu.com/ubuntu xenial/main amd64 python3-pycurl amd64 7.43.0-1ubuntu1 [42.3 kB]

Get:21 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3-software-properties all 0.96.20.7 [20.3 kB]

Get:22 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 software-properties-common all 0.96.20.7 [9452 B]

Get:23 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 unattended-upgrades all 0.90ubuntu0.9 [32.3 kB]

Fetched 4850 kB in 0s (12.4 MB/s)

E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/a/apt/libapt-inst2.0_1.2.26_amd64.deb  404  Not Found [IP: 91.189.88.161 80]

 

E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/a/apt/apt-utils_1.2.26_amd64.deb  404  Not Found [IP: 91.189.88.161 80]

 

 

 

E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

./startAnsibleServer.sh: line 11: apt-add-repository: command not found

Reading package lists...

Building dependency tree...

Reading state information...

The following additional packages will be installed:

  ieee-data libyaml-0-2 python-crypto python-ecdsa python-httplib2

  python-jinja2 python-markupsafe python-netaddr python-paramiko

  python-selinux python-six python-yaml

Suggested packages:

  sshpass python-crypto-dbg python-crypto-doc python-jinja2-doc ipython

  python-netaddr-docs

The following NEW packages will be installed:

  ansible ieee-data libyaml-0-2 python-crypto python-ecdsa python-httplib2

  python-jinja2 python-markupsafe python-netaddr python-paramiko

  python-selinux python-six python-yaml

0 upgraded, 13 newly installed, 0 to remove and 2 not upgraded.

Need to get 2904 kB of archives.

After this operation, 17.6 MB of additional disk space will be used.

Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 libyaml-0-2 amd64 0.1.6-3 [47.6 kB]

Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python-crypto amd64 2.6.1-6ubuntu0.16.04.3 [246 kB]

Get:3 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-markupsafe amd64 0.23-2build2 [15.5 kB]

Get:4 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-jinja2 all 2.8-1 [109 kB]

Get:5 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-six all 1.10.0-3 [10.9 kB]

Get:6 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-ecdsa all 0.13-2 [34.0 kB]

Get:7 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python-paramiko all 1.16.0-1ubuntu0.1 [109 kB]

Get:8 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-yaml amd64 3.11-3build1 [105 kB]

Get:9 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-httplib2 all 0.9.1+dfsg-1 [34.2 kB]

Get:10 http://archive.ubuntu.com/ubuntu xenial/main amd64 ieee-data all 20150531.1 [830 kB]

Get:11 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-netaddr all 0.7.18-1 [174 kB]

Err:12 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 ansible all 2.0.0.2-2ubuntu1

  404  Not Found [IP: 91.189.88.152 80]

Get:13 http://archive.ubuntu.com/ubuntu xenial/universe amd64 python-selinux amd64 2.4-3build2 [173 kB]

Fetched 1888 kB in 0s (9038 kB/s)

 

E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

cp: cannot stat '/etc/ansible/ansible.cfg': No such file or directory

./startAnsibleServer.sh: line 15: /etc/ansible/ansible.cfg: No such file or directory

cat: /etc/ansible/ansible.cfg.orig: No such file or directory

Traceback (most recent call last):

  File "RestServer.py", line 34, in <module>

    import cherrypy

  File "/usr/local/lib/python2.7/dist-packages/cherrypy/__init__.py", line 66, in <module>

    from ._cperror import (

  File "/usr/local/lib/python2.7/dist-packages/cherrypy/_cperror.py", line 122, in <module>

    import urllib.parse

ImportError: No module named parse
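A note on that final traceback: urllib.parse exists only in Python 3, so the installed CherryPy is a Python-3-only release running under Python 2.7. Pinning an older CherryPy (or running the server under Python 3) is the usual fix; for completeness, the version-agnostic import pattern looks like this (a generic sketch, not a change to RestServer.py itself):

```python
# urllib.parse is Python 3 only; on Python 2 the same helpers live in
# the urlparse module. A guarded import keeps a module loadable under
# either interpreter.
try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse      # Python 2

u = urlparse("http://pdp:8081/pdp/api/getConfig")
print(u.netloc, u.path)  # pdp:8081 /pdp/api/getConfig
```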



Re: [onap-tsc] [onap-discuss] [doc] DOC Project PTL Election - Self-Nomination Phase Open

Rich Bennett
 

I have drafted a committer promotion request, will review it in our regular meeting this morning, and will post it as a "Request for committers to vote" after the meeting.

https://wiki.onap.org/pages/viewpage.action?pageId=41421458

 

Rich

 

From: Stephen Terrill <stephen.terrill@...>
Sent: Tuesday, September 11, 2018 3:04 AM
To: ONAP-TSC@...; onap-discuss@...; kpaul@...; BENNETT, RICH <rb2745@...>
Cc: Sofia Wallin <sofia.wallin@...>
Subject: RE: [onap-tsc] [onap-discuss] [doc] DOC Project PTL Election - Self-Nomination Phase Open

 

That is a good idea!

 

From: ONAP-TSC@... <ONAP-TSC@...> On Behalf Of Christopher Donley
Sent: Monday, September 10, 2018 11:00 PM
To: ONAP-TSC@...; onap-discuss@...; kpaul@...; rich bennett <rb2745@...>
Cc: Sofia Wallin <sofia.wallin@...>
Subject: Re: [onap-tsc] [onap-discuss] [doc] DOC Project PTL Election - Self-Nomination Phase Open

 

Could we just do a quick vote amongst the DOC committers to promote Sofia to committer? Then the election could proceed apace with no further exceptions needed from the TSC. She'll need to be a committer anyway to serve as PTL.

 

While I'm not a DOC committer, I support Sofia's candidacy based on her work in OPNFV.

 

Chris

 

From: <ONAP-TSC@...> on behalf of Stephen Terrill <stephen.terrill@...>
Reply-To: "ONAP-TSC@..." <ONAP-TSC@...>
Date: Monday, September 10, 2018 at 1:53 PM
To: "onap-discuss@..." <onap-discuss@...>, "kpaul@..." <kpaul@...>, rich bennett <rb2745@...>, "ONAP-TSC@..." <ONAP-TSC@...>
Cc: Sofia Wallin <sofia.wallin@...>
Subject: Re: [onap-tsc] [onap-discuss] [doc] DOC Project PTL Election - Self-Nomination Phase Open

 

Hi Kenny,

 

Regarding the exception, it’s a corner case not considered in the charter – agree.    I would propose the following:

  • Written support from the acting PTL (or any other committer) for allowing a PTL not derived from a committer to be included in the nominations
  • TSC approval of the recommendation for allowing a PTL not derived from the committers to be included in the nominations.

TSC agenda for this week?

 

The project can then vote.

 

BR,

 

Steve

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Kenny Paul
Sent: Monday, September 10, 2018 8:25 PM
To: onap-discuss@...; Stephen Terrill <stephen.terrill@...>; rich bennett <rb2745@...>
Cc: Sofia Wallin <sofia.wallin@...>
Subject: Re: [onap-discuss] [doc] DOC Project PTL Election - Self-Nomination Phase Open

 

I just want to chime in to provide some background and express my support here as this is out of the norm.

 

Given that there have been multiple unanswered calls for a Documentation PTL, I believe there is sufficient justification for a process exception to be granted by the ONAP community, should no one else step up with a PTL nomination.

 

Sofia has a long history in open source and would bring some great leadership with her to ONAP.

 

Thanks!

-kenny

 

 

From: <onap-discuss@...> on behalf of Stephen Terrill <stephen.terrill@...>
Reply-To: <onap-discuss@...>, <stephen.terrill@...>
Date: Monday, September 10, 2018 at 8:33 AM
To: "onap-discuss@..." <onap-discuss@...>, rich bennett <rb2745@...>
Cc: Sofia Wallin <sofia.wallin@...>
Subject: Re: [onap-discuss] [doc] DOC Project PTL Election - Self-Nomination Phase Open

 

Hi Rich, All,

 

Sofia Wallin would like to self-nominate. She is having PC problems, so she asked me to send this on her behalf.

 

Best Regards,

 

Steve

 

--

Hi,

I would like to nominate myself for the role as PTL for the ONAP documentation project.

 

As PTL for the OPNFV docs project and driver of the LFN cross-community working group for docs, I believe I would contribute a good understanding of open source documentation, its importance, the need for visibility, and ways of making documentation an important part of code development.

 

Best regards 

Sofia 

--

 

 

 

 

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Rich Bennett
Sent: Thursday, September 06, 2018 10:48 PM
To: onap-discuss@...
Subject: [onap-discuss] [doc] DOC Project PTL Election - Self-Nomination Phase Open

 

Documentation Project Committers,

 

The ONAP PTL Elections process is described in the ONAP wiki at the following link : https://wiki.onap.org/display/DW/Annual+Community+Elections

 

The self-nomination phase is now open and will end 2 business days from the time this email appears on the discuss list.

If you are interested in running for the PTL position, please reply all to this message to self-nominate.

 

The list of committers for the DOC project may be found here

https://gerrit.onap.org/r/gitweb?p=doc.git;a=blob_plain;f=INFO.yaml;hb=HEAD

 

Regards,

Rich Bennett
