[OOM] helm install local/onap fails - fix for #clamp #aaf #log


Michael O'Brien <frank.obrien@...>
 

Team,

    AAF, CLAMP, POMBA, Integration, OOM,

    As James has reiterated: when you add or change a nodePort, do a full helm install/upgrade of your ONAP deployment with “all” pods enabled, not just the vFW subset where several pods are disabled via --set <component>.enabled=false; use the default/inferred deployment with no -f override.  Running a subset of ONAP for development, to fit in a 64 GB VM or to reduce the pod count, is fine, but when submitting a change run the full system if possible, or at least your change with onap-minimal (the traditional vFW pods) followed by a second deploy with the rest of ONAP for vCPE/vVoLTE (the parts that are normally disabled for vFW).
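For example, a minimal sketch using the same chart and flag conventions shown later in this thread (the environment file and the component keys you enable are illustrative assumptions):

# full deployment - all charts enabled, default values, no -f override
sudo helm upgrade -i onap local/onap --namespace onap
# development subset - start from disable-allcharts.yaml and enable only what you need
sudo helm upgrade -i onap local/onap --namespace onap -f onap/resources/environments/disable-allcharts.yaml --set robot.enabled=true --set log.enabled=true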

 

    I put a fix in for CLAMP: it now uses nodePort suffix 58 (30258, previously reserved for log/logdemonode in https://wiki.onap.org/display/DW/OOM+NodePort+List) instead of suffix 34 (30234), which conflicted with POMBA.

    I’ll fix my ELK/log RI container to use a 304xx port later; it is not part of the requirements.yaml (it only shares the namespace), so there is no conflict there until I bring up the 2nd deployment.

    https://gerrit.onap.org/r/#/c/64545/

    Tested and ready for merge.

 

    Some background on 63235: since we need a 304xx nodePort, this Jira is really two parts (logging, and exposing a new port). We failed to catch the conflict with POMBA because my AWS CD system was temporarily not running after the AAF/CLAMP merge for OOM-1174 in https://gerrit.onap.org/r/#/c/63235/

https://gerrit.onap.org/r/#/c/63235/1/kubernetes/clamp/values.yaml

https://git.onap.org/oom/tree/kubernetes/clamp/values.yaml#n98

 

reproduction:

amdocs@ubuntu:~/_dev/oom/kubernetes$ sudo helm upgrade -i onap local/onap --namespace onap -f dev.yaml

Error: UPGRADE FAILED: failed to create resource: Service "pomba-kibana" is invalid: spec.ports[0].nodePort: Invalid value: 30234: provided port is already allocated
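To see which service already owns the conflicting nodePort (30234 here), a quick check is:

kubectl get svc --all-namespaces | grep 30234
# the returned line shows the service currently holding the port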

 

See AAF enablement in CLAMP

https://jira.onap.org/browse/OOM-1174

https://jira.onap.org/browse/OOM-1364

duped to

https://jira.onap.org/browse/OOM-1366

 

To speed this up, and to avoid moving to the 304xx/306xx range, I am giving AAF/CLAMP my last 302xx port, 30258:

https://git.onap.org/logging-analytics/tree/reference/logging-kubernetes/logdemonode/charts/logdemonode/values.yaml#n76

 

I had been using that port for the logging RI container (reserved on July 26); I will use a 304xx port for logging instead.
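Once the change is deployed, the reassignment can be confirmed with something like:

kubectl -n onap get svc clamp -o jsonpath='{.spec.ports[*].nodePort}'
# should now include 30258 (see the clamp entry in the service listing below)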

 

I am adjusting the wiki pages accordingly:

https://wiki.onap.org/pages/viewpage.action?pageId=38112900

https://wiki.onap.org/display/DW/OOM+NodePort+List

 

Also note: for any hanging PV or secret when using Helm 2.9.1 under Kubernetes 1.10.x (via Rancher 1.6.18), use the following:

https://wiki.onap.org/display/DW/Logging+Developer+Guide#LoggingDeveloperGuide-Deployingdemopod

normal bouncing of a pod:

/oom/kubernetes$ sudo helm install local/onap -n onap --namespace onap -f onap/resources/environments/disable-allcharts.yaml --set log.enabled=false

/oom/kubernetes$ sudo helm upgrade -i onap local/onap --namespace onap -f onap/resources/environments/disable-allcharts.yaml --set log.enabled=true

                       

Mitigation commands to completely clean the system

sudo helm delete --purge onap

kubectl delete namespace onap
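If a PV, PVC, or secret still survives the purge (for example the aaf secret reported further down this thread), it can be removed explicitly; a sketch, adjust the namespace and names to whatever is actually left behind:

kubectl -n onap get pvc,secrets
kubectl -n onap delete pvc --all
kubectl -n onap delete secrets --all
kubectl get pv | grep onap   # then delete any released volumes individually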

 

 

Tested:

https://wiki.onap.org/display/DW/OOM+NodePort+List#OOMNodePortList-VerifyNoPortConflictsduringfullONAPHelmDeploy

sudo helm delete --purge onap

kubectl delete namespace onap

sudo make all

sudo helm install local/onap -n onap --namespace onap -f dev.yaml

amdocs@ubuntu:~/_dev/oom/kubernetes$ kubectl get services --all-namespaces

NAMESPACE     NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE

default       kubernetes                         ClusterIP   10.43.0.1       <none>        443/TCP                         29d

kube-system   heapster                           ClusterIP   10.43.42.68     <none>        80/TCP                          2d

kube-system   kube-dns                           ClusterIP   10.43.0.10      <none>        53/UDP,53/TCP                   2d

kube-system   kubernetes-dashboard               ClusterIP   10.43.251.134   <none>        80/TCP                          2d

kube-system   monitoring-grafana                 ClusterIP   10.43.179.125   <none>        80/TCP                          2d

kube-system   monitoring-influxdb                ClusterIP   10.43.12.136    <none>        8086/TCP                        2d

kube-system   tiller-deploy                      ClusterIP   10.43.130.135   <none>        44134/TCP                       3d

onap          aai                                NodePort    10.43.211.200   <none>        8080:30232/TCP,8443:30233/TCP   27s

onap          aai-babel                          NodePort    10.43.140.119   <none>        9516:30279/TCP                  27s

onap          aai-cassandra                      ClusterIP   None            <none>        9042/TCP,9160/TCP,61621/TCP     27s

onap          aai-champ                          NodePort    10.43.212.151   <none>        9522:30278/TCP                  27s

onap          aai-crud-service                   NodePort    10.43.56.153    <none>        9520:30268/TCP                  27s

onap          aai-elasticsearch                  ClusterIP   None            <none>        9200/TCP                        27s

onap          aai-modelloader                    NodePort    10.43.148.57    <none>        8080:30210/TCP,8443:30229/TCP   27s

onap          aai-resources                      ClusterIP   None            <none>        8447/TCP,5005/TCP               27s

onap          aai-search-data                    ClusterIP   None            <none>        9509/TCP                        27s

onap          aai-sparky-be                      NodePort    10.43.204.78    <none>        9517:30220/TCP                  27s

onap          aai-traversal                      ClusterIP   None            <none>        8446/TCP,5005/TCP               27s

onap          cdash-es                           ClusterIP   10.43.159.150   <none>        9200/TCP                        27s

onap          cdash-es-tcp                       ClusterIP   10.43.80.11     <none>        9300/TCP                        27s

onap          cdash-kibana                       NodePort    10.43.188.64    <none>        5601:30290/TCP                  27s

onap          cdash-ls                           ClusterIP   10.43.37.108    <none>        9600/TCP                        27s

onap          clamp                              NodePort    10.43.199.87    <none>        8080:30295/TCP,8443:30258/TCP   27s

onap          clampdb                            ClusterIP   10.43.117.156   <none>        3306/TCP                        27s

onap          log-es                             NodePort    10.43.123.176   <none>        9200:30254/TCP                  27s

onap          log-es-tcp                         ClusterIP   10.43.204.70    <none>        9300/TCP                        27s

onap          log-kibana                         NodePort    10.43.138.201   <none>        5601:30253/TCP                  27s

onap          log-ls                             NodePort    10.43.184.100   <none>        5044:30255/TCP                  26s

onap          log-ls-http                        ClusterIP   10.43.166.10    <none>        9600/TCP                        27s

onap          pomba-aaictxbuilder                ClusterIP   10.43.114.84    <none>        9530/TCP                        26s

onap          pomba-data-router                  NodePort    10.43.150.81    <none>        9502:30249/TCP                  26s

onap          pomba-es                           ClusterIP   10.43.44.189    <none>        9200/TCP                        26s

onap          pomba-es-tcp                       ClusterIP   10.43.16.122    <none>        9300/TCP                        26s

onap          pomba-kibana                       NodePort    10.43.74.233    <none>        5601:30234/TCP                  26s

onap          pomba-networkdiscovery             ClusterIP   10.43.217.33    <none>        9531/TCP                        26s

onap          pomba-networkdiscoveryctxbuilder   ClusterIP   10.43.136.187   <none>        9530/TCP                        26s

onap          pomba-sdcctxbuilder                ClusterIP   10.43.54.138    <none>        9530/TCP                        26s

onap          pomba-search-data                  ClusterIP   10.43.141.203   <none>        9509/TCP                        26s

onap          pomba-servicedecomposition         ClusterIP   10.43.208.23    <none>        9532/TCP                        26s

onap          pomba-validation-service           ClusterIP   10.43.163.9     <none>        9529/TCP                        26s

onap          robot                              NodePort    10.43.158.247   <none>        88:30209/TCP                    26s

 

/michael

From: onap-discuss@... <onap-discuss@...> On Behalf Of Gary Wu
Sent: Tuesday, September 4, 2018 3:36 PM
To: onap-discuss@...; James MacNider <James.MacNider@...>; georgehclapp@...; sd378r@...
Subject: Re: [onap-discuss] [OOM] helm install local/onap fails

 

I logged a Jira ticket on this issue a few days ago:  https://jira.onap.org/browse/OOM-1364

 

Thanks,

Gary

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of James MacNider
Sent: Tuesday, September 04, 2018 5:23 AM
To: onap-discuss@...; georgehclapp@...; sd378r@...
Subject: Re: [onap-discuss] [OOM] helm install local/onap fails

 

Hi George,

 

After a bit of digging, it looks like a recent merge in the Clamp helm chart (https://gerrit.onap.org/r/#/c/63235) has created a nodePort conflict with Pomba.  Until this is resolved, the workaround is to disable one of them if you don’t require both components.
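For example (a sketch; the component keys follow the <component>.enabled pattern used by the OOM charts):

sudo helm upgrade -i onap local/onap --namespace onap --set pomba.enabled=false
# or disable clamp instead: --set clamp.enabled=false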

 

As a reminder to all, when adding new nodePorts to charts, ensure that they are properly reserved and unique:  https://wiki.onap.org/display/DW/OOM+NodePort+List
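A quick local sanity check before pushing a chart change (a sketch; it simply greps the chart values files rather than using any official tooling):

# from the oom/kubernetes directory, list every nodePort declared in the charts and look for duplicates
grep -R --include=values.yaml -n "nodePort" . | sort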

 

Thanks,

James

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of George Clapp
Sent: Monday, September 3, 2018 7:31 PM
To: onap-discuss@...
Subject: [onap-discuss] [OOM] helm install local/onap fails

 

I am attempting for the first time to install ONAP using the instructions at “ONAP on Kubernetes with Rancher” and “OOM User Guide” in an OpenStack environment.  I got to the point of the command:

 

helm install local/onap --name development

 

And got these error messages:

 

% helm install local/onap --name development

Error: release development failed: Service "pomba-kibana" is invalid: spec.ports[0].nodePort: Invalid value: 30234: provided port is already allocated

% helm install local/onap --name development

Error: release development failed: secrets "development-aaf-cs" already exists

 

I searched but nothing came up about this error.  I would greatly appreciate any suggestions.

 

Thanks,

George



George Clapp <georgehclapp@...>
 

I tried again to do a complete installation.  I checked out the master branch and did a pull to update and then entered:

 

sudo helm delete --purge development (the name I had used)

 

There were some residual deployments that survived the delete, most of them related to DMaaP.  I deleted them manually, upgraded Helm from 2.8.2 to 2.9.1, removed and rebuilt the ‘local’ helm repo, and did a ‘make all’ in oom/kubernetes.  I then entered:

 

sudo helm install local/onap -n onap --namespace onap

 

But got this error: Error: release onap failed: Service "dmaap-dr-prov" is invalid: spec.ports[0].nodePort: Invalid value: 30259: provided port is already allocated

 

I’m pretty sure that I had deleted all of the residual DMaaP deployments.  Any suggestions?

 

Thanks,

George

 

From: Michael O'Brien <Frank.Obrien@...>
Sent: Tuesday, September 04, 2018 6:32 PM
To: onap-discuss@...; gary.i.wu@...; James MacNider <James.MacNider@...>; georgehclapp@...; sd378r@...
Subject: RE: [onap-discuss] [OOM] helm install local/onap fails - fix for #clamp #aaf #pomba

 
