
ONAP Container optimization effort

Eric Debeau
 

Hello

 

Following the TSC call, please find some information about the current situation on the existing Docker images on the wiki:

https://wiki.onap.org/display/DW/Reduction+effort+between+Casablanca+and+Dublin (thanks to Sylvain)

 

Best Regards

 

Eric



Re: [casablanca] : Service "sdc-wfd-fe" is invalid: spec.ports[0].nodePort: Invalid value: 30256: provided port is already allocated

Pondel, Marek (Nokia - PL/Wroclaw)
 

A little more of an update:

Further testing on another ONAP Casablanca setup turned up the next problem from the same category.
This time the NodePort collision took place between xdcae-ves-collector and vvp-ext-haproxy.

30235 is the default NodePort for the VES collector, based on its blueprint:

[root@dev-dcaegen2-dcae-bootstrap-6c58dbc695-k8xfj blueprints]# grep -A3 external_port: k8s-ves.yaml
  external_port:
    type: string
    description: Kubernetes node port on which collector is exposed
    default: "30235"
[root@dev-dcaegen2-dcae-bootstrap-6c58dbc695-k8xfj blueprints]#

so another service like vvp-ext-haproxy should not occupy it:

root@esohn30-lab20n-tag-957396-rancher-master:~# kubectl -n onap get svc | grep 30235
vvp-ext-haproxy                    NodePort       10.43.194.44    <none>                                 80:30590/TCP,443:30235/TCP,22:30477/TCP,9000:31731/TCP        4h
root@esohn30-lab20n-tag-957396-rancher-master:~#
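For anyone hitting this class of problem, a quick way to list every allocated NodePort and flag duplicates (a sketch, assuming bash and kubectl access to the cluster):

# Print all NodePorts in the onap namespace; any value printed by uniq -d is allocated twice.
kubectl -n onap get svc -o jsonpath='{range .items[*]}{range .spec.ports[*]}{.nodePort}{"\n"}{end}{end}' \
  | grep -v '^$' | sort -n | uniq -d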




On 28.02.2019 at 16:22, marek.pondel@... wrote:

hey


Met a port collision issue also with the msb-iag and vvp-jenkins services:


root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes# helm deploy dev-msb local/onap --namespace onap --verbose
fetching local/onap
Release "dev-msb" does not exist. Installing it now.
Error: release dev-msb failed: Service "msb-iag" is invalid: spec.ports[0].nodePort: Invalid value: 30280: provided port is already allocated
dev-msb               1           Thu Feb 28 14:46:57 2019    FAILED      msb-3.0.0               onap    
root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes#
root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes# kubectl -n onap get svc | grep 30280
vvp-jenkins                        NodePort       10.43.34.61     <none>                                 8080:30280/TCP                                                3h
root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes#
root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes# kubectl -n onap describe svc vvp-jenkins
Name:                     vvp-jenkins
Namespace:                onap
Labels:                   app=vvp-jenkins
Annotations:              <none>
Selector:                 app=vvp-jenkins
Type:                     NodePort
IP:                       10.43.34.61
Port:                     jenkins  8080/TCP
TargetPort:               8080/TCP
NodePort:                 jenkins  30280/TCP
Endpoints:                10.42.154.77:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes#


salutes
marek



On 28.02.2019 at 13:04, marek.pondel@... wrote:
hey All


In Casablanca, I just met the following error during SDC deployment:


fetching local/onap
release "dev" deployed
release "dev-aaf" deployed
release "dev-aai" deployed
release "dev-appc" deployed
release "dev-clamp" deployed
release "dev-cli" deployed
release "dev-consul" deployed
release "dev-contrib" deployed
release "dev-dcaegen2" deployed
release "dev-dmaap" deployed
release "dev-esr" deployed
release "dev-log" deployed
release "dev-msb" deployed
release "dev-multicloud" deployed
release "dev-nbi" deployed
release "dev-oof" deployed
release "dev-pnda" deployed
release "dev-policy" deployed
release "dev-pomba" deployed
release "dev-portal" deployed
release "dev-robot" deployed
release "dev-sdc" deployed
release "dev-sdnc" deployed
release "dev-sniro-emulator" deployed
release "dev-so" deployed
release "dev-uui" deployed
release "dev-vfc" deployed
release "dev-vid" deployed
release "dev-vnfsdk" deployed
release "dev-vvp" deployed
dev-sdc               1           Thu Feb 28 11:27:20 2019    FAILED      sdc-3.0.0               onap


SDC re-deploy gave:


root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes# helm deploy dev-sdc local/onap --namespace onap --verbose
fetching local/onap
Release "dev-sdc" does not exist. Installing it now.
Error: release dev-sdc failed: Service "sdc-wfd-fe" is invalid: spec.ports[0].nodePort: Invalid value: 30256: provided port is already allocated
dev-sdc               1           Thu Feb 28 11:34:16 2019    FAILED      sdc-3.0.0               onap    
root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes#


It looks like the dcae-pnda-mirror service is using the required port 30256:


root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes# kubectl -n onap get svc | grep 30256
dcae-pnda-mirror                   LoadBalancer   10.43.212.32    192.168.0.22                           80:30256/TCP                                                  9m
root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes#


root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes# kubectl -n onap describe svc dcae-pnda-mirror
Name:                     dcae-pnda-mirror
Namespace:                onap
Labels:                   app=dcae-pnda-mirror
                          chart=dcae-pnda-mirror-3.0.0
                          heritage=Tiller
                          release=dev-pnda
Annotations:              <none>
Selector:                 app=dcae-pnda-mirror,release=dev-pnda
Type:                     LoadBalancer
IP:                       10.43.212.32
LoadBalancer Ingress:     192.168.0.22
Port:                     client  80/TCP
TargetPort:               80/TCP
NodePort:                 client  30256/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  EnsuringLoadBalancer  22m   service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   21m   service-controller  Ensured load balancer
root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes#



Has someone maybe faced the same so far?



salutes
marek



Re: [aai] Standalone AAI UI

Chandra
 

Hi Arul,

Is the standalone UI available for AAI in Casablanca, or is it still under development for Dublin?

 

From: Arul Nambi [mailto:Arul.Nambi@...]
Sent: 06 November 2018 19:47
To: Chandrashekhar Thakare <CT00548828@...>; onap-discuss@...
Subject: RE: [onap-discuss] [aai] Standalone AAI UI

 

Hi Chandra,

Yes, you are right; unfortunately you cannot use sparky without the portal in Beijing.

Regards

Arul

 

From: Chandrashekhar Thakare [mailto:CT00548828@...]
Sent: Tuesday, November 6, 2018 3:44 AM
To: onap-discuss@...; Arul Nambi <Arul.Nambi@...>
Subject: RE: [onap-discuss] [aai] Standalone AAI UI

 

Thanks Arul.

Understood that the portal redirection cannot be disabled, but does this mean that the standalone AAI application UI cannot be accessed in Beijing, and that we will have to use Casablanca or master for the same?

 

 

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Arul Nambi
Sent: 05 November 2018 19:01
To: Chandrashekhar Thakare <CT00548828@...>; onap-discuss@...
Subject: Re: [onap-discuss] [aai] Standalone AAI UI

 

Hey Chandra,

Sorry, unfortunately we were in the middle of configuration changes in Beijing, and there is no way to disable the portal in Beijing. Can you move to master/Casablanca? It is stable at this point in time.

Regards

Arul

 

From: Chandrashekhar Thakare [mailto:CT00548828@...]
Sent: Monday, November 5, 2018 4:33 AM
To: Arul Nambi <Arul.Nambi@...>; onap-discuss@...
Subject: RE: [aai] Standalone AAI UI

 

Hi Arul,

Thanks for information.

I am currently using the following with a Beijing installation. It is a Heat-based installation.

nexus3.onap.org:10001/onap/sparky-be             1.2.1

 

Where exactly can I locate this file? Inside any specific Docker container? I tried to search but could not find the portal profile related details inside aai-resources/aai-traversal.

 

From: Arul Nambi [mailto:Arul.Nambi@...]
Sent: 02 November 2018 16:00
To: onap-discuss@...; Chandrashekhar Thakare <CT00548828@...>
Subject: RE: [aai] Standalone AAI UI

 

Hi Chandra,

Which version of sparky are you using? If you are using the one from master, then you should be able to disable the portal profile from application.properties in the backend. This will make sure that you are not redirected to the portal for authentication.
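A rough sketch of that change (the property file path and the profile name below are assumptions; verify them against your sparky-be version):

# Remove "portal" from the active Spring profiles in the backend config, then restart sparky-be.
sed -i '/^spring.profiles.active=/s/,portal//' /opt/app/sparky/config/application.properties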

Regards

Arul

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Chandra
Sent: Friday, November 2, 2018 5:21 AM
To: onap-discuss@...
Subject: [onap-discuss] [aai] Standalone AAI UI

 

Hi,

I am trying to access the standalone UI of AAI with the below URL:

 

http://<AAI IP address>:9517/services/aai/webapp/index.html

 

However, it is redirected to

 

http://portal.api.simpledemo.onap.org:8989/ONAPPORTAL/login.htm

 

Is there any way to disable this redirection?



Seek help--errors about local sdnc environment setup

Guofengbei (Bryan) <guofengbei@...>
 

Hi sdnc team

I have encountered a problem with a local SDNC environment composed of Windows + VirtualBox + Ubuntu + Docker. At the beginning, I set up the environment following https://wiki.onap.org/display/DW/SDN-C+Development+Environment+Setup

but it does not work when I run the command ./tools/run.sh sdnc from PowerShell on Windows. It shows “command not found” errors, for example for source /var/onap/functions.

Because of that, I pulled sdnc-image (latest) and deployed it manually, then tested it successfully with the hello world demo through Chrome.

But the real problem is that when I run DGs, the error below is reported in karaf.log:

2019-02-27T08:12:11,298 | INFO  | DBResourcemanagerWatchThread | CachedDataSource                 | 218 - org.onap.ccsdk.sli.core.dblib-provider - 0.4.1.SNAPSHOT |  -  | SQL DataSource < sdnctldb01 > test failed. Cause : Could not connect to address=(host=sdnctldb01)(port=3306)(type=master) : sdnctldb01> test failed. Cause : {}

2019-02-27T08:12:26,303 | ERROR | DBResourcemanagerWatchThread | ConnectionPool                   | 152 - org.apache.tomcat.jdbc - 8.5.14 |  -  | Unable to create initial connections of pool. java.sql.SQLNonTransientConnectionException: Could not connect to address=(host=sdnctldb01)(port=3306)(type=master) : sdnctldb01

I think the MySQL connection is missing, but I don't know how to fix it.
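For what it is worth, the log suggests the container expects a MySQL/MariaDB host named sdnctldb01 on the same Docker network. A minimal sketch of that setup (the image tag and credentials are assumptions; take the real values from the sdnc/oam docker-compose file):

docker network create sdnc-net
# Database container; it must be named sdnctldb01 so the SDNC container can resolve it.
docker run -d --name sdnctldb01 --network sdnc-net \
  -e MYSQL_ROOT_PASSWORD=openECOMP1.0 \
  -e MYSQL_DATABASE=sdnctl -e MYSQL_USER=sdnctl -e MYSQL_PASSWORD=gamma \
  mariadb:10.3
docker run -d --name sdnc_controller --network sdnc-net \
  nexus3.onap.org:10001/onap/sdnc-image:latest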

Question:

[1] Can I set up the SDNC environment with Windows + VirtualBox + Ubuntu + Docker?

[2] If [1] is yes, what steps did I miss that caused the errors? The guide is not clear. When I manually installed sdnc-image, it blocked with the console error “waiting for mysql……..” when executing the “docker run…” command.

[3] Besides sdnc-image, what other Docker images do I need to install?

 

Look forward to any reply. Thank you.

 

 

Best Regards

Guo Fengbei(Bryan) / 郭凤碑

Mobile: +8615999599876

Southbound & Northbound Integration and Ecosystem Management Dept



 


Recall: Seek help--errors about sdnc environment setup

Guofengbei (Bryan) <guofengbei@...>
 

Guofengbei (Bryan) has recalled the message “Seek help--errors about sdnc environment setup”.


Re: Error during service distribution #sdc

sonju143@...
 

Hello,

I'm facing the same problem while trying the vFW Demo.

While running the health check, I am getting two failed checks:
Basic DCAE Health Check  | FAIL |  500 != 200

Basic SDC Health Check    | FAIL |  None != UP

If anyone has solved the problem, please help.
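For reference, the failing checks can be re-run from the OOM robot helper (the path assumes a standard OOM checkout):

cd ~/oom/kubernetes/robot
./ete-k8s.sh onap health    # the first argument is the Kubernetes namespace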

Thanks in advance.


AAI Modeling Design Principles Session

Jimmy Forsyth
 

When: Thursday, February 28, 2019 11:00 AM-12:00 PM. (UTC-05:00) Eastern Time (US & Canada)
Where: https://zoom.us/j/633397921

*~*~*~*~*~*~*~*~*~*

Update: There is another meeting on the Zoom bridge; moving this out half an hour.


Thanks,

Jimmy


----


All,


This is a meeting to discuss the proposed modeling principles document.


Thanks,

jimmy


Re: [casablanca] : Service "sdc-wfd-fe" is invalid: spec.ports[0].nodePort: Invalid value: 30256: provided port is already allocated

Pondel, Marek (Nokia - PL/Wroclaw)
 

hey


Met a port collision issue also with the msb-iag and vvp-jenkins services:


root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes# helm deploy dev-msb local/onap --namespace onap --verbose
fetching local/onap
Release "dev-msb" does not exist. Installing it now.
Error: release dev-msb failed: Service "msb-iag" is invalid: spec.ports[0].nodePort: Invalid value: 30280: provided port is already allocated
dev-msb               1           Thu Feb 28 14:46:57 2019    FAILED      msb-3.0.0               onap    
root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes#
root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes# kubectl -n onap get svc | grep 30280
vvp-jenkins                        NodePort       10.43.34.61     <none>                                 8080:30280/TCP                                                3h
root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes#
root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes# kubectl -n onap describe svc vvp-jenkins
Name:                     vvp-jenkins
Namespace:                onap
Labels:                   app=vvp-jenkins
Annotations:              <none>
Selector:                 app=vvp-jenkins
Type:                     NodePort
IP:                       10.43.34.61
Port:                     jenkins  8080/TCP
TargetPort:               8080/TCP
NodePort:                 jenkins  30280/TCP
Endpoints:                10.42.154.77:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes#


salutes
marek



On 28.02.2019 at 13:04, marek.pondel@... wrote:

hey All


In Casablanca, I just met the following error during SDC deployment:


fetching local/onap
release "dev" deployed
release "dev-aaf" deployed
release "dev-aai" deployed
release "dev-appc" deployed
release "dev-clamp" deployed
release "dev-cli" deployed
release "dev-consul" deployed
release "dev-contrib" deployed
release "dev-dcaegen2" deployed
release "dev-dmaap" deployed
release "dev-esr" deployed
release "dev-log" deployed
release "dev-msb" deployed
release "dev-multicloud" deployed
release "dev-nbi" deployed
release "dev-oof" deployed
release "dev-pnda" deployed
release "dev-policy" deployed
release "dev-pomba" deployed
release "dev-portal" deployed
release "dev-robot" deployed
release "dev-sdc" deployed
release "dev-sdnc" deployed
release "dev-sniro-emulator" deployed
release "dev-so" deployed
release "dev-uui" deployed
release "dev-vfc" deployed
release "dev-vid" deployed
release "dev-vnfsdk" deployed
release "dev-vvp" deployed
dev-sdc               1           Thu Feb 28 11:27:20 2019    FAILED      sdc-3.0.0               onap


SDC re-deploy gave:


root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes# helm deploy dev-sdc local/onap --namespace onap --verbose
fetching local/onap
Release "dev-sdc" does not exist. Installing it now.
Error: release dev-sdc failed: Service "sdc-wfd-fe" is invalid: spec.ports[0].nodePort: Invalid value: 30256: provided port is already allocated
dev-sdc               1           Thu Feb 28 11:34:16 2019    FAILED      sdc-3.0.0               onap    
root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes#


It looks like the dcae-pnda-mirror service is using the required port 30256:


root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes# kubectl -n onap get svc | grep 30256
dcae-pnda-mirror                   LoadBalancer   10.43.212.32    192.168.0.22                           80:30256/TCP                                                  9m
root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes#


root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes# kubectl -n onap describe svc dcae-pnda-mirror
Name:                     dcae-pnda-mirror
Namespace:                onap
Labels:                   app=dcae-pnda-mirror
                          chart=dcae-pnda-mirror-3.0.0
                          heritage=Tiller
                          release=dev-pnda
Annotations:              <none>
Selector:                 app=dcae-pnda-mirror,release=dev-pnda
Type:                     LoadBalancer
IP:                       10.43.212.32
LoadBalancer Ingress:     192.168.0.22
Port:                     client  80/TCP
TargetPort:               80/TCP
NodePort:                 client  30256/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  EnsuringLoadBalancer  22m   service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   21m   service-controller  Ensured load balancer
root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes#



Has someone maybe faced the same so far?



salutes
marek


dmaap-message-router NodePort not reachable

Calamita Agostino
 

Hi all,

I have an issue related to connectivity between the sdc-be pod and dmaap-message-router.

My installation is Casablanca 3.0.0 on 7 kubernetes VM cluster.

 

All dmaap pods are up and running:

 

dev-dmaap-dbc-pg-0                                            1/1       Running            0          1d        10.42.173.158   onapkm5   <none>

dev-dmaap-dbc-pg-1                                            1/1       Running            0          1d        10.42.188.140   onapkm2   <none>

dev-dmaap-dbc-pgpool-7b748d5894-mr2m9                         1/1       Running            0          1d        10.42.237.193   onapkm3   <none>

dev-dmaap-dbc-pgpool-7b748d5894-n6dks                         1/1       Running            0          1d        10.42.192.244   onapkm2   <none>

dev-dmaap-dmaap-bus-controller-6757c4c86-8rq5p                1/1       Running            0          1d        10.42.185.132   onapkm1   <none>

dev-dmaap-dmaap-dr-db-bb4c67cfd-tm7td                         1/1       Running            0          1d        10.42.152.59    onapkm1   <none>

dev-dmaap-dmaap-dr-node-66c8749959-tpdtf                      1/1       Running            0          1d        10.42.216.13    onapkm2   <none>

dev-dmaap-dmaap-dr-prov-5c766b8d69-qzqn2                      1/1       Running            0          1d        10.42.115.247   onapkm6   <none>

dev-dmaap-message-router-fb9f4bc7d-5z52j                      1/1       Running            0          6h        10.42.138.31    onapkm3   <none>

dev-dmaap-message-router-kafka-5fbc897f48-4bpb6               1/1       Running            0          1d        10.42.78.141    onapkm4   <none>

dev-dmaap-message-router-zookeeper-557954854-8d6p9            1/1       Running            0          1d        10.42.169.205   onapkm1   <none>

 

but when I try to distribute a service from the SDC Portal, I get an “Internal Server Error”.

 

SDC-BE log file traces:

 

2019-02-28T08:50:35.318Z        [qtp215145189-159837]   INFO    o.o.sdc.be.filters.BeServletFilter      ResponseCode=500       

InstanceUUID=null RequestId=dab0fd50-b06e-4a65-b4a8-7d7edeae3e01   AlertSeverity=0 ElapsedTime=99  EndTimestamp=2019-02-28 08:50:35.318Z PartnerName=op0001      auditOn=true       ServerFQDN=dev-sdc-sdc-be-656bd64b9b-jh57x      StatusCode=ERROR       

TargetEntity=Distribution Engine is DOWN       

CustomField1=POST: http://sdc-be.onap:8080/sdc2/rest/v1/catalog/services/02e0c5a4-be65-4d09-9f1e-49a2dab0f865/distribution/PROD/activate  

timer=99        CustomField2=500   AuditBeginTimestamp=2019-02-28 08:50:35.219Z    RemoteHost=10.42.194.84 ErrorCategory=ERROR    

ServerIPAddress=10.42.179.134   ServiceName=/v1/catalog/services/02e0c5a4-be65-4d09-9f1e-49a2dab0f865/distribution/PROD/activate  

ServiceInstanceId=null  ClassName=org.openecomp.sdc.be.filters.BeServletFilter     ResponseDescription=Internal Server Error      

ErrorCode=500   null

 

Also SDC healthcheck reports that U-EB Cluster is DOWN.

 

Inside the SDC-BE pod, I tried a traceroute to “message-router-zookeeper” and to “message-router”.

 

This is the result (the first is OK, the second one is NOT OK):

 

bash-4.4# traceroute  message-router-zookeeper

traceroute to message-router-zookeeper (10.42.169.205), 30 hops max, 46 byte packets

1  10.42.7.46 (10.42.7.46)  0.213 ms  0.005 ms  0.005 ms

2  10.42.190.179 (10.42.190.179)  0.194 ms  0.145 ms  0.135 ms

3  10.42.169.205 (10.42.169.205)  0.461 ms  0.160 ms  0.134 ms

bash-4.4# traceroute  message-router

traceroute to message-router (10.43.1.20), 30 hops max, 46 byte packets

1  10.42.0.1 (10.42.0.1)  0.009 ms  0.005 ms  0.005 ms

2  itpat1ng505.palermo.italtel.it (138.132.168.173)  0.344 ms  2.211 ms  1.910 ms     <-- 138.132.168.X is the VM public network

 3  138.132.169.2 (138.132.169.2)  5.063 ms  3.859 ms  3.934 ms

4  *  *  *

5  *  *  *

6  *  *  *

 

traceroute to message-router-kafka (10.43.148.154), 30 hops max, 46 byte packets

1  10.42.0.1 (10.42.0.1)  0.006 ms  0.005 ms  0.004 ms

2  itpat1ng505.palermo.italtel.it (138.132.168.173)  0.391 ms  0.337 ms  0.314 ms

3  138.132.169.2 (138.132.169.2)  0.803 ms  0.748 ms  0.807 ms

4  *  *  *

5  *  *  *

6  *  *  *

 

It seems that I cannot reach a NodePort or ClusterIP from inside a pod. This is the routing table inside the pod:

 

bash-4.4# netstat -rn

Kernel IP routing table

Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface

0.0.0.0         10.42.0.1       0.0.0.0         UG        0 0          0 eth0

10.42.0.0       0.0.0.0         255.255.0.0     U         0 0          0 eth0

 

What can I check on the Kubernetes cluster?
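The routing table above suggests that traffic to the 10.43.0.0/16 service CIDR leaves via the default gateway instead of being NAT-ed by kube-proxy, so checks along these lines may help (a sketch; the labels and IPs are assumptions, and a Rancher-based cluster may differ):

kubectl -n kube-system get pods -o wide | grep -i proxy    # is kube-proxy healthy on every node?
iptables -t nat -L KUBE-SERVICES -n | grep 10.43.1.20      # on the node hosting sdc-be: is there a rule for message-router?
kubectl -n onap get endpoints message-router               # the service also needs ready endpoints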

 

Thanks.

Agostino.

 



[onap-discuss][integration] VPP for vFWCL usecase

Paul Vaduva
 

Hi Eric,

               

I asked the Integration team about the vFWCL use case and they said you are the VPP expert within ONAP.

I am currently trying to port the vFWCL use case to run on an OpenStack deployment for arm64.

My question is about the libevel.a library and its use in the virtual firewall closed loop use case.

We managed to build it for the arm64 architecture, but I was wondering whether it is used anywhere
in the virtual firewall closed loop use case, or whether we can just ignore it for this use case?
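One way to answer that empirically (a sketch; the repo layout and paths are assumptions):

# In a checkout of the ONAP demo repo, look for anything referencing the VES event library:
grep -rn -e 'libevel' -e 'evel_' vnfs/ 2>/dev/null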

 

Thank you,

Paul Vaduva




Re: [ONAP][Integration] appc-ansible-server and sdnc-ansible-server pod issues

Marco Platania
 

Morgan,

 

I’m not sure whether there’s a fix for Casablanca. However, you can use onap/ccsdk-ansible-server-image:0.4.1-STAGING-latest for APPC and onap/sdnc-ansible-server-image:1.5-STAGING-latest for SDNC.

 

These are both up and running in the integration lab.
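If you want to pick those images up without editing the charts by hand, something along these lines may work (the exact values.yaml key is an assumption; check the appc-ansible-server chart):

helm deploy dev-appc local/onap --namespace onap \
  --set appc.appc-ansible-server.image=onap/ccsdk-ansible-server-image:0.4.1-STAGING-latest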

 

Marco

 

From: "morgan.richomme@..." <morgan.richomme@...>
Date: Thursday, February 28, 2019 at 5:00 AM
To: "PLATANIA, MARCO (MARCO)" <platania@...>
Cc: "onap-discuss@..." <onap-discuss@...>
Subject: [ONAP][Integration] appc-ansible-server and sdnc-ansible-server pod issues

 

Hi Marco,

 

you mentioned yesterday that you were dealing with the ansible servers.

Currently, on our daily Casablanca, these are the last 2 remaining pods in CrashLoopBackOff.

 

Do you have any JIRA ref to track the issues?

 

Thanks

 

Morgan

 

 

2 pods (on 215) are not in Running state

--------------------------------------------------------------------------------------

NAME                                                           READY     STATUS             RESTARTS   AGE

onap-appc-appc-ansible-server-598558f644-7xrvr                 0/1       CrashLoopBackOff   73         6h

onap-sdnc-sdnc-ansible-server-5886ff7f69-ldwhw                 0/1       CrashLoopBackOff   83         7h



Re: [integration] Error for demo.sh onap init

Brian Freeman
 

Check your integration-override parameters for the OpenStack username, encrypted password and URL.

The first thing init does is try to go to OpenStack to retrieve some data to update A&AI.

 

It's failing to authenticate with OpenStack.

 

If you need to see the credentials used, go into runTags.sh in the robot container and change the line

VARIABLES="--removekeywords name:keystone_interface.*"

Set it to

VARIABLES=""

 

Then you will see the credentials, and perhaps more error messages back from Keystone.
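You can also sanity-check the same credentials against Keystone directly (a sketch; replace the placeholders with the values from your integration-override file):

export OS_AUTH_URL='http://<keystone-ip>:5000/v3'
export OS_USERNAME='<openStackUserName>'
export OS_PASSWORD='<clear-text password>'    # note: the override file holds the encrypted form
export OS_PROJECT_NAME='<project>'
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
openstack token issue    # getting a token back means the credentials themselves are valid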

 

Brian

 

 

 

 

From: Liu Chenglong <lcl7608@...>
Sent: Thursday, February 28, 2019 1:55 AM
To: Yang Xu <yang.xu3@...>; FREEMAN, BRIAN D <bf1936@...>
Cc: Liu Chenglong <lcl7608@...>; onap-discuss@...; sunxl.bri@...
Subject: [onap-discuss][integration] Error for demo.sh onap init

 

 

Hi:

In the Casablanca (C) release of ONAP, we are running the ./demo.sh onap init command, but we get an error like this:

 

 

 

 

 

 

Please help us to solve this issue.

 

 

----------------------------------------------

Regards,

Liu Chenglong

 

 

 


Re: [doc] Dublin release update for docs

Sonia Sangari
 

Dear Sofia,
I will not be able to attend the meeting today due to an appointment.
BR,
Sonia


From: onap-discuss@... <onap-discuss@...> on behalf of Sofia Wallin <sofia.wallin@...>
Sent: Thursday, February 28, 2019 13:53
To: onap-discuss@...
Subject: [onap-discuss] [doc] Dublin release update for docs
 

Hi everyone,

As communicated earlier, the documentation project is focusing on improving the structure and usability of docs.onap.org.

We welcome the community’s feedback on what we have done so far.

 

All remaining tasks can be found in the Jira backlog – Documentation (DOC).

 

Best regards,

Sofia

 

 

From: Sofia Wallin <sofia.wallin@...>
Date: Wednesday, 23 January 2019 at 15:54
To: "onap-discuss@..." <onap-discuss@...>
Subject: [onap-discuss] [doc] ONAP DDF summary and plans for the Dublin release

 

 Hello everyone,

I would like to give an overview of the feedback that I received during the ONAP DDF in Paris and what the main focus will be for the docs project for the Dublin release.

 

During the week in Paris I received a lot of good feedback based on what the community has achieved up until now.

But we also identified the gaps that we have and what needs to be improved and/or defined.

 

Most discussions were around who our targeted audience is, the struggle of finding the right content, and how we can provide a better understanding of how to use ONAP.

We also discussed the division between the wiki and RTD, and how we can improve the documentation templates, among other things.

 

I will try to capture these things in JIRA and leave them unassigned just to make sure that we don’t lose track of the findings.  

 

For the Dublin release then,

The docs project has decided to focus on improving the structure of docs.onap.org.

Since we identified a few gaps in our documentation but are still struggling with the current structure, we will improve the structure and the usability.

 

We will also add Google Analytics, which is enabled through RTD. This will give a good overview of what is being used/viewed. We should make sure that we put time and effort into the things that are in our users’ interest.

 

The milestone plan for documentation has also been revised; I presented the proposal in the PTL call earlier this week, and it will be brought up in the TSC meeting next week. The PTLs were positive about the changes, and if this gets approved I will of course also focus on supporting everyone accordingly.

 

Please join our weekly docs call if you have questions, feedback and/or interest in supporting the project.

 

Best regards,

Sofia


#doc Read the Docs Landing Page/Reorganization & New Content for Dublin #doc

Rich Bennett
 

Contributors to Dublin Projects,

 

If you are contributing new sections of documentation for the Dublin release, attending one of the regular doc project meetings is a good opportunity to understand the general reorganization work in progress ( https://jira.onap.org/browse/DOC-350 ) and to identify where the new content you are providing fits into the new organization and/or should reference other sections.

 

Regular doc project meeting info can be found here https://wiki.onap.org/pages/viewpage.action?pageId=8225074

 

Regards

 

Rich Bennett


[doc] Dublin release update for docs

Sofia Wallin
 

Hi everyone,

As communicated earlier, the documentation project is focusing on improving the structure and usability of docs.onap.org.

We welcome the community’s feedback on what we have done so far.

 

All remaining tasks can be found in the Jira backlog – Documentation (DOC).

 

Best regards,

Sofia

 

 

From: Sofia Wallin <sofia.wallin@...>
Date: Wednesday, 23 January 2019 at 15:54
To: "onap-discuss@..." <onap-discuss@...>
Subject: [onap-discuss] [doc] ONAP DDF summary and plans for the Dublin release

 

 Hello everyone,

I would like to give an overview of the feedback that I received during the ONAP DDF in Paris and what the main focus will be for the docs project for the Dublin release.

 

During the week in Paris I received a lot of good feedback based on what the community has achieved up until now.

But we also identified the gaps that we have and what needs to be improved and/or defined.

 

Most discussions were around who our targeted audience is, the struggle of finding the right content, and how we can provide a better understanding of how to use ONAP.

We also discussed the division between the wiki and RTD, and how we can improve the documentation templates, among other things.

 

I will try to capture these things in JIRA and leave them unassigned just to make sure that we don’t lose track of the findings.  

 

For the Dublin release then,

The docs project has decided to focus on improving the structure of docs.onap.org.

Since we identified a few gaps in our documentation but are still struggling with the current structure, we will improve the structure and the usability.

 

We will also add Google Analytics, which is enabled through RTD. This will give a good overview of what is being used/viewed. We should make sure that we put time and effort into the things that are in our users’ interest.

 

The milestone plan for documentation has also been revised; I presented the proposal in the PTL call earlier this week, and it will be brought up in the TSC meeting next week. The PTLs were positive about the changes, and if this gets approved I will of course also focus on supporting everyone accordingly.

 

Please join our weekly docs call if you have questions, feedback and/or interest in supporting the project.

 

Best regards,

Sofia


Re: M3 template for use cases/functional requirements

Alla Goldner
 

Hi all,

 

I uploaded use case/functional requirements M3 template https://wiki.onap.org/pages/viewpage.action?pageId=58233064

 

Use cases/functional requirements owners – please use it and upload your corresponding M3 status.

We will start reviewing those during the upcoming Monday meeting.

 

Best Regards, Alla

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Alla Goldner
Sent: Tuesday, February 26, 2019 7:14 PM
To: onap-usecasesub@...
Cc: onap-tsc@...; onap-discuss <onap-discuss@...>
Subject: [onap-discuss] M3 template for use cases/functional requirements

 

Hi all,

 

Please find attached the template’s draft I’ve created for M3 use cases/functional requirements review.

The motivation behind the included scope: since tests, security, etc. will be reported per project, the key per use case/functional requirement is to see whether its corresponding APIs were included in all relevant projects’ reviews with the Architecture Committee (ARC) and, if not, what the status of the discussions is.

 

I would like to upload it tomorrow EOD CET, so we can get reports during next week’s Usecase subcommittee meeting.

Hence, please provide your comments and suggestions.

 

Best Regards, Alla



[casablanca] : Service "sdc-wfd-fe" is invalid: spec.ports[0].nodePort: Invalid value: 30256: provided port is already allocated

Pondel, Marek (Nokia - PL/Wroclaw)
 

hey All


In Casablanca, I just met the following error during SDC deployment:


fetching local/onap
release "dev" deployed
release "dev-aaf" deployed
release "dev-aai" deployed
release "dev-appc" deployed
release "dev-clamp" deployed
release "dev-cli" deployed
release "dev-consul" deployed
release "dev-contrib" deployed
release "dev-dcaegen2" deployed
release "dev-dmaap" deployed
release "dev-esr" deployed
release "dev-log" deployed
release "dev-msb" deployed
release "dev-multicloud" deployed
release "dev-nbi" deployed
release "dev-oof" deployed
release "dev-pnda" deployed
release "dev-policy" deployed
release "dev-pomba" deployed
release "dev-portal" deployed
release "dev-robot" deployed
release "dev-sdc" deployed
release "dev-sdnc" deployed
release "dev-sniro-emulator" deployed
release "dev-so" deployed
release "dev-uui" deployed
release "dev-vfc" deployed
release "dev-vid" deployed
release "dev-vnfsdk" deployed
release "dev-vvp" deployed
dev-sdc               1           Thu Feb 28 11:27:20 2019    FAILED      sdc-3.0.0               onap


SDC re-deploy gave:


root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes# helm deploy dev-sdc local/onap --namespace onap --verbose
fetching local/onap
Release "dev-sdc" does not exist. Installing it now.
Error: release dev-sdc failed: Service "sdc-wfd-fe" is invalid: spec.ports[0].nodePort: Invalid value: 30256: provided port is already allocated
dev-sdc               1           Thu Feb 28 11:34:16 2019    FAILED      sdc-3.0.0               onap    
root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes#


It looks like the dcae-pnda-mirror service is using the required port 30256:


root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes# kubectl -n onap get svc | grep 30256
dcae-pnda-mirror                   LoadBalancer   10.43.212.32    192.168.0.22                           80:30256/TCP                                                  9m
root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes#


root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes# kubectl -n onap describe svc dcae-pnda-mirror
Name:                     dcae-pnda-mirror
Namespace:                onap
Labels:                   app=dcae-pnda-mirror
                          chart=dcae-pnda-mirror-3.0.0
                          heritage=Tiller
                          release=dev-pnda
Annotations:              <none>
Selector:                 app=dcae-pnda-mirror,release=dev-pnda
Type:                     LoadBalancer
IP:                       10.43.212.32
LoadBalancer Ingress:     192.168.0.22
Port:                     client  80/TCP
TargetPort:               80/TCP
NodePort:                 client  30256/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  EnsuringLoadBalancer  22m   service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   21m   service-controller  Ensured load balancer
root@esohn30-lab19n-tag-957238-rancher-master:~/oom/kubernetes#



Has someone maybe faced the same so far?



salutes
marek


[ONAP][Integration] appc-ansible-server and sdnc-ansible-server pod issues

Morgan Richomme
 

Hi Marco,

you mentioned yesterday that you were dealing with the ansible servers.
Currently, on our daily Casablanca, these are the last 2 remaining pods in CrashLoopBackOff.

Do you have any JIRA ref to track the issues?

Thanks

Morgan


2 pods (on 215) are not in Running state
--------------------------------------------------------------------------------------
NAME                                                           READY     STATUS             RESTARTS   AGE
onap-appc-appc-ansible-server-598558f644-7xrvr                 0/1       CrashLoopBackOff   73         6h
onap-sdnc-sdnc-ansible-server-5886ff7f69-ldwhw                 0/1       CrashLoopBackOff   83         7h


Re: SO High availability environment deployment

Piyush Garg <piyush.garg1@...>
 

Hi Srini,

 

Currently we are not planning to do the performance benchmarking.

For load balancing and pod selection (among replicas), we are planning to rely on Kubernetes services.
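As a concrete sketch (the deployment name and label below are assumptions for an OOM install):

# Run two SO replicas; the ClusterIP service then load-balances across them.
kubectl -n onap scale deployment dev-so-so --replicas=2
kubectl -n onap get pods -l app=so -o wide    # confirm the pods landed on different nodes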

 

Regards,

Piyush

 

From: Addepalli, Srinivasa R <srinivasa.r.addepalli@...>
Sent: Wednesday, February 27, 2019 8:28 PM
To: onap-discuss@...; Piyush Garg <Piyush.Garg1@...>
Subject: RE: [onap-discuss] SO High availability environment deployment

 

Hi Piyush,

 

Yes, if the intention is to share the load across multiple instances. I am curious to understand whether there is any effort to do performance benchmarking. What is the type of input load? What metrics are you collecting, e.g. the number of SO service instantiations per hour?

 

The other reason you gave is to reduce the amount of time the service is down. Yes, at times a container restart can take minutes. Good point.

 

When you have multiple instances of a micro-service, the consuming micro-services need a way to discover the running instances and then pick the right instance among them to talk to. In the K8s world, service meshes (e.g. Istio) are used for this purpose. Are you using any service mesh to experiment with the scenario you described?

 

Thanks

Srini

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Piyush Garg
Sent: Wednesday, February 27, 2019 2:15 AM
To: Piyush Garg <piyush.garg1@...>; onap-discuss@...
Subject: Re: [onap-discuss] SO High availability environment deployment

 

Hi Srini,

 

Yes, for now we are not planning to handle the failure during in-flight requests.

With a replica count of 2 or more (on different nodes), we want to achieve load sharing, and we also want to avoid a single point of failure in case a k8s node goes down. K8s can bring the container back up after a failure, but having more than one pod keeps the system available while the failed pod comes back.


Regards,
Piyush



Re: Logging/POMBA arch review for M3

Stephen Terrill
 

Hi Michael,

 

Lets do it that way then, thanks for taking the initiative.

 

Team – comments by close of business on Monday 4th Feb, then I can summarize as OK or Not in ArchCom.

 

Michael, the JIRA for the record is: https://jira.onap.org/secure/RapidBoard.jspa?rapidView=79&view=detail&selectedIssue=ONAPARC-422

 

BR,

 

Steve

 

From: onap-arc@... <onap-arc@...> On Behalf Of Michael O'Brien
Sent: Wednesday 27 February 2019 23:25
To: onap-discuss@...; onap-arc@...
Subject: [Onap-arc] Logging/POMBA arch review for M3

 

Team,

   We would like to request an email/wiki based review as there are minimal changes this release – same model as VID for example.

   Page is at

https://wiki.onap.org/display/DW/Logging+Dublin+M3+Architecture+Review

 

   thank you

   /michael

 

