
Re: #appc Unable to execute stop lcm operation from APPC #appc

Brian Freeman
 

Isn't payload a JSON-encoded string?

 

 

https://gerrit.onap.org/r/gitweb?p=ccsdk/sli/northbound.git;a=blob;f=lcm/model/src/main/yang/lcm.yang;h=a03fff60754adaa8feff2ab182d96373baf7f541;hb=refs/heads/casablanca

 

typedef payload {
       type string ;
       description "The payload can be any valid JSON string value. Json escape characters need to be added when required to include an inner json within the payload to make it a valid json string value";
}
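In other words, because the typedef makes payload a plain string, any inner JSON has to be serialized into an escaped string before it is placed in the request. A minimal Python sketch (values taken from the request quoted below in this thread; not a complete LCM request body):

```python
import json

# Inner JSON that should travel inside "payload" (vm-id from the thread's example).
inner = {"vm-id": "http://x.x.x.x:yyyy/v2.1/servers/88b65041-81d6-41bc-b6ff-3785e461541e"}

request = {
    "input": {
        "action": "Restart",
        "action-identifiers": {"vnf-id": "d86f68e7-9cb2-48cc-b82d-ffd54602b91b"},
        # The yang typedef types payload as a string, so the inner JSON must be
        # serialized (escaped), not embedded as a nested object.
        "payload": json.dumps(inner),
    }
}

print(json.dumps(request, indent=2))
```

The resulting body carries payload as a single escaped string value, which is what the typedef describes; sending payload as a nested object is what makes the parser reject the request with a schema error, as seen later in the thread.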

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Gopigiri, Sirisha via Lists.Onap.Org
Sent: Thursday, April 25, 2019 5:09 AM
To: Lathish <lathishbabu.ganesan@...>; onap-discuss@...
Subject: Re: [onap-discuss] #appc Unable to execute stop lcm operation from APPC

 

Hi Brian,

Except for the Ansible Server pod, all the other pods are running. I am able to execute mysql commands in the DB pod. I redeployed APPC and am still facing the same issue. The sequence of the pods is appc-db, appc, and then the ansible pod. Not sure what failed.

Hi Lathish,

There were no entries in the VNF_DG_MAPPING table, so I inserted them manually. Now I see the same Invalid URL error in both of my setups, one with an HTTPS underlying VIM and one with HTTP.

Hi Steve,

I am trying to execute the Restart LCM operation, which is supported for both VNF and VM. According to the doc, a vm-id should be sent in the payload, but when I send the vm-id in the payload the request is rejected with a schema error and a 400 response code.

Here is the request payload I am using for the APPC Restart operation:

{
  "input": {
    "common-header": {
      "timestamp": "2019-04-25T06:05:04.244Z",
      "api-ver": "2.00",
      "originator-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
      "request-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
      "sub-request-id": "1",
      "flags": {
          "force" : "TRUE",
          "ttl" : 60000
      }
    },
    "action": "Restart",
    "action-identifiers": {
      "vnf-id":"d86f68e7-9cb2-48cc-b82d-ffd54602b91b"
    },
    "payload":{
       "vm-id":"http://x.x.x.x:yyyy/v2.1/servers/88b65041-81d6-41bc-b6ff-3785e461541e"
    }
  }
}

And here is the error message I see when I send the payload field in the request:

{
 "errors": {
   "error": [
     {
       "error-type": "protocol",
       "error-tag": "malformed-message",
       "error-message": "Error parsing input: Schema node with name vm-id was not found under (org:onap:appc:lcm?revision=2016-01-08)payload.",
       "error-info": "Schema node with name vm-id was not found under (org:onap:appc:lcm?revision=2016-01-08)payload."
     }
   ]
 }
}

Without the payload field in the request, I get an Invalid URL error in the APPC logs.

Am I missing anything in the request payload that is sent to APPC?

Thank you in advance!

Regards
Sirisha Gopigiri


Re: Access for MSO. #so

Brian Freeman
 

What version of ONAP are you trying to access? A 404 means that URL isn't correct.

 

That is the old syntax, and I believe the catalogdb now has a more flexible interface that matches the database table schema directly.

 

Robot doesn't use that URI anymore for queries to the catalog, and it isn't the URI for submitting requests into the request database.

 

Brian

 

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of akash ravishankar
Sent: Thursday, April 25, 2019 6:53 AM
To: onap-discuss@...
Subject: [onap-discuss] Access for MSO. #so

 

Hello team,
                  This is the URL we are using to access MSO: http://10.168.155.92:30277/ecomp/mso/catalog/v2/serviceVnfs
We are using SO credentials for authorization and passing the SO port number, but we are getting a 404 response status code.

Thanks and regards,
Akash


Re: #integration #policy Broken policy-csit-health suites in Jenkins #integration #policy

Daniel Rose
 

I talked with Lasse; he has the fix and I will merge it.

 

Daniel Rose

ECOMP / ONAP

com.att.ecomp

732-420-7308

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of FREEMAN, BRIAN D
Sent: Thursday, April 25, 2019 8:41 AM
To: onap-discuss@...; l.kaihlavirt@...; DRAGOSH, PAM <pdragosh@...>
Subject: Re: [onap-discuss] #integration #policy Broken policy-csit-health suites in Jenkins

 


The problem, as Lasse pointed out, is that run-csit.sh doesn't pull a particular branch and hasn't been updated for yesterday's change to the location of the setup.py for the utilities.

 

Dan R, can you take a look?

 

Brian

 

 

Casablanca csit

  97 # install eteutils

  98 mkdir -p ${ROBOT_VENV}/src/onap

  99 rm -rf ${ROBOT_VENV}/src/onap/testsuite

100 git clone https://gerrit.onap.org/r/testsuite/python-testing-utils.git ${ROBOT_VENV}/src/onap/testsuite/python-testing-utils

101 pip install --upgrade ${ROBOT_VENV}/src/onap/testsuite/python-testing-utils

102

 

Master csit

  97 # install eteutils

  98 mkdir -p ${ROBOT_VENV}/src/onap

  99 rm -rf ${ROBOT_VENV}/src/onap/testsuite

100 git clone https://gerrit.onap.org/r/testsuite/python-testing-utils.git ${ROBOT_VENV}/src/onap/testsuite/python-testing-utils

101 pip install --upgrade ${ROBOT_VENV}/src/onap/testsuite/python-testing-utils

102

 

Likely change required:

 

101 pip install --upgrade ${ROBOT_VENV}/src/onap/testsuite/python-testing-utils/robotframework-onap

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Lasse Kaihlavirta
Sent: Thursday, April 25, 2019 8:28 AM
To: DRAGOSH, PAM <pdragosh@...>; onap-discuss@...
Subject: Re: [onap-discuss] #integration #policy Broken policy-csit-health suites in Jenkins

 

Sorry,

 

run-csit.sh is already fixed in Dublin and master now for the added robotframework-onap dir. I just couldn’t find any such change linked to TEST-141,

 

br,

 

Lasse

 

 

From: Lasse Kaihlavirta <l.kaihlavirt@...>
Sent: torstaina 25. huhtikuuta 2019 15.23
To: 'DRAGOSH, PAMELA L (PAM)' <pdragosh@...>; 'onap-discuss@...' <onap-discuss@...>
Subject: RE: [onap-discuss] #integration #policy Broken policy-csit-health suites in Jenkins

 

Hi,

 

thanks Pamela,

 

> Perhaps there is some code pulling in from the master branch getting the environment confused thus causing the failure.

 

Yes, there is – run-csit.sh always pulls and installs python-testing-utils from master regardless of the branch of the integration-csit doing the pulling.

 

> They may run into the same problem for Dublin.

 

This is certainly a problem in Dublin as well. The fix is simple, though: setup.py has moved under robotframework-onap along with the rest of the content, and the pip install command in run-csit.sh should be adjusted accordingly.

 

br,

 

Lasse

 

From: DRAGOSH, PAMELA L (PAM) <pdragosh@...>
Sent: torstaina 25. huhtikuuta 2019 14.57
To: onap-discuss@...; l.kaihlavirt@...
Subject: Re: [onap-discuss] #integration #policy Broken policy-csit-health suites in Jenkins

 

For the Casablanca failure, the Integration team should take a look at the Casablanca branch. Perhaps there is some code pulling in from the master branch getting the environment confused thus causing the failure. Since 3.0.2 is scheduled to be released today, the policy jobs can be removed. But perhaps I’ll leave it open so Integration can fix their branch. They may run into the same problem for Dublin.

 

The other master failure is due to multiple issues. One, we had contributions from the CIA project to update our images using a common alpine image. The merge jobs are now broken as well as the CSIT will need updating to support this. The team is also working on cleaning up the CSIT due to our new components and are moving scripts around. Unfortunately, these can’t be done in one shot.

 

Pam

 

From: <onap-discuss@...> on behalf of "l.kaihlavirt@..." <l.kaihlavirt@...>
Reply-To: "onap-discuss@..." <onap-discuss@...>, "l.kaihlavirt@..." <l.kaihlavirt@...>
Date: Thursday, April 25, 2019 at 7:32 AM
To: "onap-discuss@..." <onap-discuss@...>
Subject: [onap-discuss] #integration #policy Broken policy-csit-health suites in Jenkins

 

Hi,

policy's health csit suite has started failing in both master and casablanca jobs:

https://jenkins.onap.org/view/policy/job/policy-master-csit-health/1660/

https://jenkins.onap.org/view/policy/job/policy-casablanca-csit-health/177/

apparently due to this change: https://gerrit.onap.org/r/#/c/85700/
(as part of https://jira.onap.org/browse/TEST-141 - I also asked about these under the ticket)
i.e. eteutils has been moved under robotframework-onap subdirectory.  

This raises some questions:
- change in python-testing-utils master broke a casablanca CSIT test. Should run-csit.sh (or something else in the execution procedure) be improved to ensure that CSIT execution in branches remains stable?
- is fixing that casablanca job any kind of priority at this point?
- is someone already aware of and working on (any of) these?

...also, the very latest master job (https://jenkins.onap.org/view/policy/job/policy-master-csit-health/1661/) is running into some version incompatibility problems that prevent the execution even getting to robot test phase (I haven't taken any deeper look at that one, anyone know what's going on there?)

br,

Lasse Kaihlavirta

 

 

  


Re: #integration #policy Broken policy-csit-health suites in Jenkins #integration #policy

Brian Freeman
 

The problem, as Lasse pointed out, is that run-csit.sh doesn't pull a particular branch and hasn't been updated for yesterday's change to the location of the setup.py for the utilities.

 

Dan R, can you take a look?

 

Brian

 

 

Casablanca csit

  97 # install eteutils

  98 mkdir -p ${ROBOT_VENV}/src/onap

  99 rm -rf ${ROBOT_VENV}/src/onap/testsuite

100 git clone https://gerrit.onap.org/r/testsuite/python-testing-utils.git ${ROBOT_VENV}/src/onap/testsuite/python-testing-utils

101 pip install --upgrade ${ROBOT_VENV}/src/onap/testsuite/python-testing-utils

102

 

Master csit

  97 # install eteutils

  98 mkdir -p ${ROBOT_VENV}/src/onap

  99 rm -rf ${ROBOT_VENV}/src/onap/testsuite

100 git clone https://gerrit.onap.org/r/testsuite/python-testing-utils.git ${ROBOT_VENV}/src/onap/testsuite/python-testing-utils

101 pip install --upgrade ${ROBOT_VENV}/src/onap/testsuite/python-testing-utils

102

 

Likely change required:

 

101 pip install --upgrade ${ROBOT_VENV}/src/onap/testsuite/python-testing-utils/robotframework-onap

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Lasse Kaihlavirta
Sent: Thursday, April 25, 2019 8:28 AM
To: DRAGOSH, PAM <pdragosh@...>; onap-discuss@...
Subject: Re: [onap-discuss] #integration #policy Broken policy-csit-health suites in Jenkins

 

Sorry,

 

run-csit.sh is already fixed in Dublin and master now for the added robotframework-onap dir. I just couldn’t find any such change linked to TEST-141,

 

br,

 

Lasse

 

 

From: Lasse Kaihlavirta <l.kaihlavirt@...>
Sent: torstaina 25. huhtikuuta 2019 15.23
To: 'DRAGOSH, PAMELA L (PAM)' <pdragosh@...>; 'onap-discuss@...' <onap-discuss@...>
Subject: RE: [onap-discuss] #integration #policy Broken policy-csit-health suites in Jenkins

 

Hi,

 

thanks Pamela,

 

  • Perhaps there is some code pulling in from the master branch getting the environment confused thus causing the failure.

 

Yes, there is – run-csit.sh always pulls and installs python-testing-utils from master regardless of the branch of the integration-csit doing the pulling.

 

  • They may run into the same problem for Dublin.

 

This is certainly a problem in Dublin as well. The fix is simple, though: setup.py has moved under robotframework-onap along with the rest of the content, and the pip install command in run-csit.sh should be adjusted accordingly.

 

br,

 

Lasse

 

From: DRAGOSH, PAMELA L (PAM) <pdragosh@...>
Sent: torstaina 25. huhtikuuta 2019 14.57
To: onap-discuss@...; l.kaihlavirt@...
Subject: Re: [onap-discuss] #integration #policy Broken policy-csit-health suites in Jenkins

 

For the Casablanca failure, the Integration team should take a look at the Casablanca branch. Perhaps there is some code pulling in from the master branch getting the environment confused thus causing the failure. Since 3.0.2 is scheduled to be released today, the policy jobs can be removed. But perhaps I’ll leave it open so Integration can fix their branch. They may run into the same problem for Dublin.

 

The other master failure is due to multiple issues. One, we had contributions from the CIA project to update our images using a common alpine image. The merge jobs are now broken as well as the CSIT will need updating to support this. The team is also working on cleaning up the CSIT due to our new components and are moving scripts around. Unfortunately, these can’t be done in one shot.

 

Pam

 

From: <onap-discuss@...> on behalf of "l.kaihlavirt@..." <l.kaihlavirt@...>
Reply-To: "onap-discuss@..." <onap-discuss@...>, "l.kaihlavirt@..." <l.kaihlavirt@...>
Date: Thursday, April 25, 2019 at 7:32 AM
To: "onap-discuss@..." <onap-discuss@...>
Subject: [onap-discuss] #integration #policy Broken policy-csit-health suites in Jenkins

 

Hi,

policy's health csit suite has started failing in both master and casablanca jobs:

https://jenkins.onap.org/view/policy/job/policy-master-csit-health/1660/

https://jenkins.onap.org/view/policy/job/policy-casablanca-csit-health/177/

apparently due to this change: https://gerrit.onap.org/r/#/c/85700/
(as part of https://jira.onap.org/browse/TEST-141 - I also asked about these under the ticket)
i.e. eteutils has been moved under robotframework-onap subdirectory.  

This raises some questions:
- change in python-testing-utils master broke a casablanca CSIT test. Should run-csit.sh (or something else in the execution procedure) be improved to ensure that CSIT execution in branches remains stable?
- is fixing that casablanca job any kind of priority at this point?
- is someone already aware of and working on (any of) these?

...also, the very latest master job (https://jenkins.onap.org/view/policy/job/policy-master-csit-health/1661/) is running into some version incompatibility problems that prevent the execution even getting to robot test phase (I haven't taken any deeper look at that one, anyone know what's going on there?)

br,

Lasse Kaihlavirta

 

 

  


Re: #integration #policy Broken policy-csit-health suites in Jenkins #integration #policy

Lasse Kaihlavirta
 

One more sorry :P

 

It's not fixed in any branch yet; I was accidentally looking at my own local correction when I thought I was looking at fresh Dublin and master branches.

 

Is someone going to do this as part of TEST-141 or should I commit it myself?

 

br,

 

Lasse

 

From: Lasse Kaihlavirta <l.kaihlavirt@...>
Sent: torstaina 25. huhtikuuta 2019 15.28
To: 'DRAGOSH, PAMELA L (PAM)' <pdragosh@...>; 'onap-discuss@...' <onap-discuss@...>
Subject: RE: [onap-discuss] #integration #policy Broken policy-csit-health suites in Jenkins

 

Sorry,

 

run-csit.sh is already fixed in Dublin and master now for the added robotframework-onap dir. I just couldn’t find any such change linked to TEST-141,

 

br,

 

Lasse

 

 

From: Lasse Kaihlavirta <l.kaihlavirt@...>
Sent: torstaina 25. huhtikuuta 2019 15.23
To: 'DRAGOSH, PAMELA L (PAM)' <pdragosh@...>; 'onap-discuss@...' <onap-discuss@...>
Subject: RE: [onap-discuss] #integration #policy Broken policy-csit-health suites in Jenkins

 

Hi,

 

thanks Pamela,

 

  • Perhaps there is some code pulling in from the master branch getting the environment confused thus causing the failure.

 

Yes, there is – run-csit.sh always pulls and installs python-testing-utils from master regardless of the branch of the integration-csit doing the pulling.

 

  • They may run into the same problem for Dublin.

 

This is certainly a problem in Dublin as well. The fix is simple, though: setup.py has moved under robotframework-onap along with the rest of the content, and the pip install command in run-csit.sh should be adjusted accordingly.

 

br,

 

Lasse

 

From: DRAGOSH, PAMELA L (PAM) <pdragosh@...>
Sent: torstaina 25. huhtikuuta 2019 14.57
To: onap-discuss@...; l.kaihlavirt@...
Subject: Re: [onap-discuss] #integration #policy Broken policy-csit-health suites in Jenkins

 

For the Casablanca failure, the Integration team should take a look at the Casablanca branch. Perhaps there is some code pulling in from the master branch getting the environment confused thus causing the failure. Since 3.0.2 is scheduled to be released today, the policy jobs can be removed. But perhaps I’ll leave it open so Integration can fix their branch. They may run into the same problem for Dublin.

 

The other master failure is due to multiple issues. One, we had contributions from the CIA project to update our images using a common alpine image. The merge jobs are now broken as well as the CSIT will need updating to support this. The team is also working on cleaning up the CSIT due to our new components and are moving scripts around. Unfortunately, these can’t be done in one shot.

 

Pam

 

From: <onap-discuss@...> on behalf of "l.kaihlavirt@..." <l.kaihlavirt@...>
Reply-To: "onap-discuss@..." <onap-discuss@...>, "l.kaihlavirt@..." <l.kaihlavirt@...>
Date: Thursday, April 25, 2019 at 7:32 AM
To: "onap-discuss@..." <onap-discuss@...>
Subject: [onap-discuss] #integration #policy Broken policy-csit-health suites in Jenkins

 

Hi,

policy's health csit suite has started failing in both master and casablanca jobs:

https://jenkins.onap.org/view/policy/job/policy-master-csit-health/1660/

https://jenkins.onap.org/view/policy/job/policy-casablanca-csit-health/177/

apparently due to this change: https://gerrit.onap.org/r/#/c/85700/
(as part of https://jira.onap.org/browse/TEST-141 - I also asked about these under the ticket)
i.e. eteutils has been moved under robotframework-onap subdirectory.  

This raises some questions:
- change in python-testing-utils master broke a casablanca CSIT test. Should run-csit.sh (or something else in the execution procedure) be improved to ensure that CSIT execution in branches remains stable?
- is fixing that casablanca job any kind of priority at this point?
- is someone already aware of and working on (any of) these?

...also, the very latest master job (https://jenkins.onap.org/view/policy/job/policy-master-csit-health/1661/) is running into some version incompatibility problems that prevent the execution even getting to robot test phase (I haven't taken any deeper look at that one, anyone know what's going on there?)

br,

Lasse Kaihlavirta

 

  


Re: #integration #policy Broken policy-csit-health suites in Jenkins #integration #policy

Lasse Kaihlavirta
 

Sorry,

 

run-csit.sh is already fixed in Dublin and master now for the added robotframework-onap dir. I just couldn’t find any such change linked to TEST-141,

 

br,

 

Lasse

 

 

From: Lasse Kaihlavirta <l.kaihlavirt@...>
Sent: torstaina 25. huhtikuuta 2019 15.23
To: 'DRAGOSH, PAMELA L (PAM)' <pdragosh@...>; 'onap-discuss@...' <onap-discuss@...>
Subject: RE: [onap-discuss] #integration #policy Broken policy-csit-health suites in Jenkins

 

Hi,

 

thanks Pamela,

 

  • Perhaps there is some code pulling in from the master branch getting the environment confused thus causing the failure.

 

Yes, there is – run-csit.sh always pulls and installs python-testing-utils from master regardless of the branch of the integration-csit doing the pulling.

 

  • They may run into the same problem for Dublin.

 

This is certainly a problem in Dublin as well. The fix is simple, though: setup.py has moved under robotframework-onap along with the rest of the content, and the pip install command in run-csit.sh should be adjusted accordingly.

 

br,

 

Lasse

 

From: DRAGOSH, PAMELA L (PAM) <pdragosh@...>
Sent: torstaina 25. huhtikuuta 2019 14.57
To: onap-discuss@...; l.kaihlavirt@...
Subject: Re: [onap-discuss] #integration #policy Broken policy-csit-health suites in Jenkins

 

For the Casablanca failure, the Integration team should take a look at the Casablanca branch. Perhaps there is some code pulling in from the master branch getting the environment confused thus causing the failure. Since 3.0.2 is scheduled to be released today, the policy jobs can be removed. But perhaps I’ll leave it open so Integration can fix their branch. They may run into the same problem for Dublin.

 

The other master failure is due to multiple issues. One, we had contributions from the CIA project to update our images using a common alpine image. The merge jobs are now broken as well as the CSIT will need updating to support this. The team is also working on cleaning up the CSIT due to our new components and are moving scripts around. Unfortunately, these can’t be done in one shot.

 

Pam

 

From: <onap-discuss@...> on behalf of "l.kaihlavirt@..." <l.kaihlavirt@...>
Reply-To: "onap-discuss@..." <onap-discuss@...>, "l.kaihlavirt@..." <l.kaihlavirt@...>
Date: Thursday, April 25, 2019 at 7:32 AM
To: "onap-discuss@..." <onap-discuss@...>
Subject: [onap-discuss] #integration #policy Broken policy-csit-health suites in Jenkins

 

Hi,

policy's health csit suite has started failing in both master and casablanca jobs:

https://jenkins.onap.org/view/policy/job/policy-master-csit-health/1660/

https://jenkins.onap.org/view/policy/job/policy-casablanca-csit-health/177/

apparently due to this change: https://gerrit.onap.org/r/#/c/85700/
(as part of https://jira.onap.org/browse/TEST-141 - I also asked about these under the ticket)
i.e. eteutils has been moved under robotframework-onap subdirectory.  

This raises some questions:
- change in python-testing-utils master broke a casablanca CSIT test. Should run-csit.sh (or something else in the execution procedure) be improved to ensure that CSIT execution in branches remains stable?
- is fixing that casablanca job any kind of priority at this point?
- is someone already aware of and working on (any of) these?

...also, the very latest master job (https://jenkins.onap.org/view/policy/job/policy-master-csit-health/1661/) is running into some version incompatibility problems that prevent the execution even getting to robot test phase (I haven't taken any deeper look at that one, anyone know what's going on there?)

br,

Lasse Kaihlavirta

 

  


Re: #integration #policy Broken policy-csit-health suites in Jenkins #integration #policy

Lasse Kaihlavirta
 

Hi,

 

thanks Pamela,

 

  • Perhaps there is some code pulling in from the master branch getting the environment confused thus causing the failure.

 

Yes, there is – run-csit.sh always pulls and installs python-testing-utils from master regardless of the branch of the integration-csit doing the pulling.

 

  • They may run into the same problem for Dublin.

 

This is certainly a problem in Dublin as well. The fix is simple, though: setup.py has moved under robotframework-onap along with the rest of the content, and the pip install command in run-csit.sh should be adjusted accordingly.

 

br,

 

Lasse

 

From: DRAGOSH, PAMELA L (PAM) <pdragosh@...>
Sent: torstaina 25. huhtikuuta 2019 14.57
To: onap-discuss@...; l.kaihlavirt@...
Subject: Re: [onap-discuss] #integration #policy Broken policy-csit-health suites in Jenkins

 

For the Casablanca failure, the Integration team should take a look at the Casablanca branch. Perhaps there is some code pulling in from the master branch getting the environment confused thus causing the failure. Since 3.0.2 is scheduled to be released today, the policy jobs can be removed. But perhaps I’ll leave it open so Integration can fix their branch. They may run into the same problem for Dublin.

 

The other master failure is due to multiple issues. One, we had contributions from the CIA project to update our images using a common alpine image. The merge jobs are now broken as well as the CSIT will need updating to support this. The team is also working on cleaning up the CSIT due to our new components and are moving scripts around. Unfortunately, these can’t be done in one shot.

 

Pam

 

From: <onap-discuss@...> on behalf of "l.kaihlavirt@..." <l.kaihlavirt@...>
Reply-To: "onap-discuss@..." <onap-discuss@...>, "l.kaihlavirt@..." <l.kaihlavirt@...>
Date: Thursday, April 25, 2019 at 7:32 AM
To: "onap-discuss@..." <onap-discuss@...>
Subject: [onap-discuss] #integration #policy Broken policy-csit-health suites in Jenkins

 

Hi,

policy's health csit suite has started failing in both master and casablanca jobs:

https://jenkins.onap.org/view/policy/job/policy-master-csit-health/1660/

https://jenkins.onap.org/view/policy/job/policy-casablanca-csit-health/177/

apparently due to this change: https://gerrit.onap.org/r/#/c/85700/
(as part of https://jira.onap.org/browse/TEST-141 - I also asked about these under the ticket)
i.e. eteutils has been moved under robotframework-onap subdirectory.  

This raises some questions:
- change in python-testing-utils master broke a casablanca CSIT test. Should run-csit.sh (or something else in the execution procedure) be improved to ensure that CSIT execution in branches remains stable?
- is fixing that casablanca job any kind of priority at this point?
- is someone already aware of and working on (any of) these?

...also, the very latest master job (https://jenkins.onap.org/view/policy/job/policy-master-csit-health/1661/) is running into some version incompatibility problems that prevent the execution even getting to robot test phase (I haven't taken any deeper look at that one, anyone know what's going on there?)

br,

Lasse Kaihlavirta

 

  


Re: [E] [onap-discuss] aai-graphadmin pod start error in k8 using helm

Keong Lim
 

The shared Cassandra charts by Mahendra were recently merged.

If the name is “aai-cassandra” then those are the older charts.

If the name is “cassandra” then those are the newer charts.

So it depends on exactly when you pulled the OOM repository.

 

 

Keong

 

From: Thiruveedula, Bharath [mailto:bharath.thiruveedula@...]
Sent: Thursday, April 25, 2019 12:36 AM
To: onap-discuss <onap-discuss@...>; AC00586367@...
Cc: Michael O'Brien <frank.obrien@...>; Keong Lim <Keong.Lim@...>
Subject: Re: [E] [onap-discuss] aai-graphadmin pod start error in k8 using helm

 

Hi,

 

I am facing the same issue. I can see that the graphadmin-create-db-schema pod depends on the cassandra pod, but it is not clear whether that is the "cassandra" from common.

 

Best Regards

Bharath T

 

On Wed, Apr 24, 2019 at 6:35 PM Ambica <AC00586367@...> wrote:

 

Hi Brien,

 

I have followed the below steps:

 

1) pulled the image from nexus3.onap.org --> /onap/aai-graphadmin:1.0-STAGING-latest

2) created container using "docker run -d -p 8449:8449 <image-name>"

3) inside the graphadmin container, added attributes and new edge rules

     in resources/schema/onap/oxm/v14 and resources/schema/onap/dbedgerules/v14

4) then did "docker commit container-name new-image-name"

 

What is the difference between an image built from a Dockerfile and an image created by running docker commit on a container?

 

5) Used the above image and deployed it in the k8s environment using Helm.

 

The below two pods are not running:

 

dev-aai-aai-graphadmin-96779b95b-5sq7q: stays in Init state
dev-aai-aai-graphadmin-create-db-schema-fm9t8: stays in Running state instead of Completed (error: the graphadmin microservice starts but does not execute the createDbSchema script)

 

Please help me through this; I have been stuck here for a very long time.

 

Regards,

Ambica


From: onap-discuss@... <onap-discuss@...> on behalf of Ambica <AC00586367@...>
Sent: Friday, April 19, 2019 8:01 PM
To: KAJUR, HARISH V; onap-discuss@...; keong.lim@...
Subject: Re: [onap-discuss] aai-graphadmin pod start error in k8 using helm

 

I wanted a new image that would reflect the changes made in the aai_oxm_v14.xml and Dbedgerules_v14.json files in aai-graphadmin.

 

Can we create a new image from a modified container using a Dockerfile? If yes,

Can you please share the commands.

 

Regards,

Ambica


From: KAJUR, HARISH V <vk250x@...>
Sent: Friday, April 19, 2019 7:24 PM
To: Ambica Chattoraj; onap-discuss@...; keong.lim@...
Subject: RE: [onap-discuss] aai-graphadmin pod start error in k8 using helm

 

Hi Ambica,

 

I sent the link as a reference because they were facing a similar issue: they were trying to mount a directory on the host machine over a file inside the container.

 

This was the error that you were seeing, right?

 

kubelet, onap-k8s-2.mgmt  Error: Error response from daemon: cannot mount volume over existing file, file exists /var/lib/docker/overlay/a7a05a40cfc65adfb4a7646794fb5dd9b2abf03a57c4b9550adde1949e4b948f/merged/opt/app/aai-graphadmin/resources/etc/auth/realm.properties

 

Note: there is no folder "a7a05a40cfc65adfb4a7646794fb5dd9b2abf03a57c4b9550adde1949e4b948f" in /var/lib/docker/overlay.

 

From this error, it seems that within the container there is a file at /opt/app/aai-graphadmin/resources/etc/auth/realm.properties.

But for some reason you seem to be mounting a directory at that location, which could be the cause.

If that isn't the case, I would suggest you do the following on both the new image and the old image:

 

docker run -it --rm --entrypoint=/bin/bash nexus3.onap.org:10001/onap/aai-graphadmin:1.0-STAGING-latest

 

Then go to the /opt/app/aai-graphadmin folder, look at all the subdirectories and files, and try to identify the differences between 1.0-STAGING-latest and your committed container.

If a path is a file in 1.0-STAGING-latest but a directory in the image you created, that could be the culprit.

Also, why did you commit the running container to create a new image instead of just building a new Docker image from a Dockerfile?

 

Thanks,

Harish

 

From: Ambica Chattoraj <AC00586367@...>
Sent: Friday, April 19, 2019 3:53 AM
To: KAJUR, HARISH V <vk250x@...>; onap-discuss@...; keong.lim@...
Subject: Re: [onap-discuss] aai-graphadmin pod start error in k8 using helm

 

Hi Harish,

 

From the link you forwarded, it is unclear what exactly needs to be run; the commands specified are for running the image.

 

Can you please tell me where exactly I need to remove or change things inside the container, so that the new image created with docker commit does not use the previous mount location?

 

Thanks & Regards,

Ambica


From: KAJUR, HARISH V <vk250x@...>
Sent: Thursday, April 18, 2019 10:41 PM
To: onap-discuss@...; keong.lim@...; Ambica Chattoraj
Subject: RE: [onap-discuss] aai-graphadmin pod start error in k8 using helm

 

Hi Ambica,

From the error you are getting, you seem to be mounting a directory where a file already exists in the container.
You can mount a host file onto a file, or a host directory onto a directory, but mounting a host directory onto a path that is a file inside the container does not work.
Please look at this stackoverflow question as a reference:

https://stackoverflow.com/questions/33903621/docker-cannot-mount-volume-over-existing-file-file-exists

Please compare the path the error complains about inside containers from your new image and from the old image, to see what changed when you did the docker commit.

Thanks,
Harish

-----Original Message-----
From: onap-discuss@... <onap-discuss@...> On Behalf Of Keong Lim
Sent: Thursday, April 18, 2019 10:42 AM
To: Ambica <AC00586367@...>; onap-discuss@...
Subject: Re: [onap-discuss] aai-graphadmin pod start error in k8 using helm

Hi Ambica,

Sorry I will be on leave for Easter. I don't know what is happening with the overlay filesystem in this case.
Maybe Harish can help you debug it.

Keong



Re: #integration #policy Broken policy-csit-health suites in Jenkins #integration #policy

Pamela Dragosh
 

For the Casablanca failure, the Integration team should take a look at the Casablanca branch. Perhaps some code is being pulled in from the master branch, confusing the environment and causing the failure. Since 3.0.2 is scheduled to be released today, the policy jobs can be removed, but perhaps I'll leave it open so Integration can fix their branch. They may run into the same problem for Dublin.

 

The other master failure is due to multiple issues. First, we had contributions from the CIA project to update our images using a common Alpine image. The merge jobs are now broken, and the CSIT will need updating to support this. The team is also working on cleaning up the CSIT for our new components and is moving scripts around. Unfortunately, these can't be done in one shot.

 

Pam

 

From: <onap-discuss@...> on behalf of "l.kaihlavirt@..." <l.kaihlavirt@...>
Reply-To: "onap-discuss@..." <onap-discuss@...>, "l.kaihlavirt@..." <l.kaihlavirt@...>
Date: Thursday, April 25, 2019 at 7:32 AM
To: "onap-discuss@..." <onap-discuss@...>
Subject: [onap-discuss] #integration #policy Broken policy-csit-health suites in Jenkins

 

Hi,

policy's health csit suite has started failing in both master and casablanca jobs:

https://jenkins.onap.org/view/policy/job/policy-master-csit-health/1660/

https://jenkins.onap.org/view/policy/job/policy-casablanca-csit-health/177/

apparently due to this change: https://gerrit.onap.org/r/#/c/85700/
(as part of https://jira.onap.org/browse/TEST-141 - I also asked about these under the ticket)
i.e. eteutils has been moved under robotframework-onap subdirectory.  

This raises some questions:
- change in python-testing-utils master broke a casablanca CSIT test. Should run-csit.sh (or something else in the execution procedure) be improved to ensure that CSIT execution in branches remains stable?
- is fixing that casablanca job any kind of priority at this point?
- is someone already aware of and working on (any of) these?

...also, the very latest master job (https://jenkins.onap.org/view/policy/job/policy-master-csit-health/1661/) is running into some version incompatibility problems that prevent the execution even getting to robot test phase (I haven't taken any deeper look at that one, anyone know what's going on there?)

br,

Lasse Kaihlavirta


#integration #policy Broken policy-csit-health suites in Jenkins #integration #policy

Lasse Kaihlavirta
 

Hi,

policy's health csit suite has started failing in both master and casablanca jobs:

https://jenkins.onap.org/view/policy/job/policy-master-csit-health/1660/

https://jenkins.onap.org/view/policy/job/policy-casablanca-csit-health/177/

apparently due to this change: https://gerrit.onap.org/r/#/c/85700/
(as part of https://jira.onap.org/browse/TEST-141 - I also asked about these under the ticket)
i.e. eteutils has been moved under robotframework-onap subdirectory.  

This raises some questions:
- change in python-testing-utils master broke a casablanca CSIT test. Should run-csit.sh (or something else in the execution procedure) be improved to ensure that CSIT execution in branches remains stable?
- is fixing that casablanca job any kind of priority at this point?
- is someone already aware of and working on (any of) these?

...also, the very latest master job (https://jenkins.onap.org/view/policy/job/policy-master-csit-health/1661/) is running into some version incompatibility problems that prevent the execution even getting to robot test phase (I haven't taken any deeper look at that one, anyone know what's going on there?)

br,

Lasse Kaihlavirta


Re: Robot Test with Keystone V3 (OpenStack Queen) #casablanca #robot #so

satish kumar <satish1044@...>
 

Thanks, BRIAN.

I will deploy the Heat service on OpenStack then. Will let you know how it goes after that. 

Regards, Satish



On Wed, Apr 24, 2019 at 4:28 PM FREEMAN, BRIAN D <bf1936@...> wrote:

Satish,

 

That is the problem then. This is a Heat-based instantiation, and SO can't find a Heat endpoint via MultiCloud, since MultiCloud isn't connecting to Heat in your OpenStack.

 

Brian

 

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of satish kumar
Sent: Wednesday, April 24, 2019 11:21 AM
To: FREEMAN, BRIAN D <bf1936@...>
Cc: onap-discuss@...
Subject: Re: [onap-discuss] Robot Test with Keystone V3 (OpenStack Queen) #casablanca #Robot #SO

 

Hello BRIAN,

 

Thanks a lot. 

This is the list of available endpoints corresponding to RegionOne. There are a total of 8 endpoints available. 

 

openstack endpoint list | grep admin (the same list is available for the public and internal interfaces as well)

 

| 192d6f7fbf4e49b5ba4769cd6e4b133b | RegionOne | nova         | compute             | True    | admin     | http://aio-2450:8774/v2.1              |

| 22fecf88e7d44c4597230468f51f0e96 | RegionOne | glance       | image               | True    | admin     | http://aio-2450:9292                   |

| 35ea4cc5888e4807a062371330c77e2a | RegionOne | cinderv3     | volumev3            | True    | admin     | http://aio-2450:8776/v3/%(project_id)s |

| 71de45e639194afd8758ad4c93f4e528 | RegionOne | placement    | placement           | True    | admin     | http://aio-2450:8778                   |

| 87627f1be6cc406eb4d29f2bc221f2b6 | RegionOne | neutron      | network             | True    | admin     | http://aio-2450:9696                   |

| 88c9b48587304db39d7d7d4f654eba5b | RegionOne | murano       | application-catalog | True    | admin     | http://aio-2450:8082                   |

| 8a48b2d73745485282bda3a59d2b9537 | RegionOne | keystone     | identity            | True    | admin     | http://aio-2450:5000/v3/               |

| e73e5e029e0644049491293cf1a53fe6 | RegionOne | cinderv2     | volumev2            | True    | admin     | http://aio-2450:8776/v2/%(project_id)s

 

 

 

Then I followed the debug.log file in the so-openstack-adapter. A snapshot is given below for your quick view:

 

 tenant=Tenant [id=ecf1539291894c76a6b57308ae4726a3, name=admin, description=null, enabled=true]], serviceCatalog=[ 

 Service [type=volumev3, name=cinderv3, endpoints=[Endpoint [region=RegionOne, publicURL=http://aio-2450:8776/v3/ecf1539291894c76a6b57308ae4726a3, internalURL=null, adminURL=null]], endpointsLinks=null],  

 Service [type=image, name=glance, endpoints=[Endpoint [region=RegionOne, publicURL=http://aio-2450:9292, internalURL=null, adminURL=null]], endpointsLinks=null],  

 Service [type=application-catalog, name=murano, endpoints=[Endpoint [region=RegionOne, publicURL=http://aio-2450:8082, internalURL=null, adminURL=null]], endpointsLinks=null],  

 Service [type=network, name=neutron, endpoints=[Endpoint [region=RegionOne, publicURL=http://aio-2450:9696, internalURL=null, adminURL=null]], endpointsLinks=null],  

 Service [type=placement, name=placement, endpoints=[Endpoint [region=RegionOne, publicURL=http://aio-2450:8778, internalURL=null, adminURL=null]], endpointsLinks=null],  

 Service [type=volumev2, name=cinderv2, endpoints=[Endpoint [region=RegionOne, publicURL=http://aio-2450:8776/v2/ecf1539291894c76a6b57308ae4726a3, internalURL=null,                                          adminURL=null]], endpointsLinks=null],  

 Service [type=identity, name=keystone, endpoints=[Endpoint [region=RegionOne, publicURL=http://aio-2450:5000/v3/, internalURL=null, adminURL=null]], endpointsLinks=null],  

 Service [type=compute, name=nova, endpoints=[Endpoint [region=RegionOne, publicURL=http://aio-2450:8774/v2.1, internalURL=null, adminURL=null]], endpointsLinks=null]],  

                        user=User [id=ce13f5cba38646db89d51b4fb296c245, name=admin, username=null, roles=null, rolesLinks=null], metadata=null]

 

 

The above list is the serviceCatalog for the OpenStack tenant. There are 8 OpenStack endpoints/services to which multicloud is trying to connect. 

 

 

try {
    // Isolate trying to printout the region IDs
    try {
        logger.debug("access={}", access.toString());
        for (Access.Service service : access.getServiceCatalog()) {
            List<Access.Service.Endpoint> endpoints = service.getEndpoints();
            for (Access.Service.Endpoint endpoint : endpoints) {
                logger.debug("AIC returned region={}", endpoint.getRegion());
            }

 

--------------------------------------------------------------------------------------------------------------------------------------------

This is the debug output for the above code in the so-openstack-adapter:

2019-04-23T13:47:10.984Z|4d8e9f2e-73e7-4ec2-8bc1-5e7130479322| org.onap.so.openstack.utils.MsoHeatUtils - AIC returned region=RegionOne

2019-04-23T13:47:10.984Z|4d8e9f2e-73e7-4ec2-8bc1-5e7130479322| org.onap.so.openstack.utils.MsoHeatUtils - AIC returned region=RegionOne

2019-04-23T13:47:10.985Z|4d8e9f2e-73e7-4ec2-8bc1-5e7130479322| org.onap.so.openstack.utils.MsoHeatUtils - AIC returned region=RegionOne

2019-04-23T13:47:10.985Z|4d8e9f2e-73e7-4ec2-8bc1-5e7130479322| org.onap.so.openstack.utils.MsoHeatUtils - AIC returned region=RegionOne

2019-04-23T13:47:10.985Z|4d8e9f2e-73e7-4ec2-8bc1-5e7130479322| org.onap.so.openstack.utils.MsoHeatUtils - AIC returned region=RegionOne

2019-04-23T13:47:10.985Z|4d8e9f2e-73e7-4ec2-8bc1-5e7130479322| org.onap.so.openstack.utils.MsoHeatUtils - AIC returned region=RegionOne

2019-04-23T13:47:10.986Z|4d8e9f2e-73e7-4ec2-8bc1-5e7130479322| org.onap.so.openstack.utils.MsoHeatUtils - AIC returned region=RegionOne

2019-04-23T13:47:10.986Z|4d8e9f2e-73e7-4ec2-8bc1-5e7130479322| org.onap.so.openstack.utils.MsoHeatUtils - AIC returned region=RegionOne

2019-04-23T13:47:10.987Z|4d8e9f2e-73e7-4ec2-8bc1-5e7130479322| org.onap.so.openstack.utils.MsoHeatUtils - RA_CONNECTION_EXCEPTION

 

It may be observed that for the 8 services in the serviceCatalog, multicloud is able to get the region. However, for the 9th service it shows "RA_CONNECTION_EXCEPTION".

 

--------------------------------------------------------------------------------------------------------------------------------------------

        }
    } catch (Exception e) {
        logger.debug("Encountered an error trying to printout Access object returned from AIC. {}",
                e.getMessage(), e);
    }
    heatUrl = KeystoneUtils.findEndpointURL(access.getServiceCatalog(), "orchestration", region,
            "public");
    logger.debug("heatUrl={}, region={}", heatUrl, region);

 

--------------------------------------------------------------------------------------------------------------------------

This is the error corresponding to the above code:

 

org.onap.so.openstack.exceptions.MsoAdapterException: AIC did not match an orchestration service for: region=RegionOne,cloud=http://xx.xx.xx.xx:30280/api/multicloud/v0/CloudOwner_RegionOne/identity/v2.0 

 

Here, xx.xx.xx.xx is the oom master node ip.

I think multicloud is trying to connect to a 9th endpoint/service called orchestration for the heatUrl, and it is getting an error for this service. 
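As a quick illustration of that reading, the service types from the catalog dump above can be checked for an "orchestration" (Heat) entry, which is the type that findEndpointURL searches for:

```shell
# Service types copied from the serviceCatalog dump above
types="volumev3 image application-catalog network placement volumev2 identity compute"

# findEndpointURL looks for a service of type "orchestration" (Heat)
if echo "$types" | grep -qw orchestration; then
  echo "orchestration endpoint present"
else
  echo "orchestration endpoint missing: register Heat in Keystone"
fi
```

Since the type is absent, getHeatClient throws the MsoAdapterException shown below; registering a Heat (orchestration) endpoint in Keystone should make the lookup succeed.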

 

-----------------------------------------------------------------------------------------------------------------------------

 

} catch (RuntimeException e) {
    // This comes back for not found (probably an incorrect region ID)
    String error = "AIC did not match an orchestration service for: region=" + region + ",cloud="
            + cloudIdentity.getIdentityUrl();
    throw new MsoAdapterException(error, e);
}
tokenId = access.getToken().getId();
expiration = access.getToken().getExpires();
}

 

 

 

I would be grateful if you could help me get a better understanding of the whole process. 

 

Many Thanks,

Satish

 

 

 

 

 

 

On Wed, Apr 24, 2019 at 1:00 PM FREEMAN, BRIAN D <bf1936@...> wrote:

Satish,

 

I've gotten that error message when the REGION_ID in the cloud_sites table didn't match what OpenStack had configured for its region.

Region in OpenStack is like host aggregates and availability zones, and that string has to match.

 

SO maps the ONAP cloud_site name RegionOne to the OpenStack "region" in the tenant through the cloud_sites table.

It does not have to be region="RegionOne" in OpenStack.

 

There is an openstack command to retrieve it, since I don't think it's readily available on the Horizon portal (but it may be for an admin user).

 

See if those two fields are out of sync.
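A sketch of that check (the mariadb pod name below is a placeholder, and `openstack region list` requires admin credentials):

```shell
# OpenStack side: the region names Keystone actually has
openstack region list

# ONAP side: what SO's catalog DB has configured
kubectl -n onap exec -it <so-mariadb-pod> -- \
  mysql -u root -p catalogdb -e "SELECT ID, REGION_ID FROM cloud_sites;"

# The REGION_ID values must match a region name from the first command
```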

 

Brian

 

 

From: satish kumar <satish1044@...>
Sent: Wednesday, April 24, 2019 7:48 AM
To: FREEMAN, BRIAN D <bf1936@...>
Cc: onap-discuss@...
Subject: Re: [onap-discuss] Robot Test with Keystone V3 (OpenStack Queen) #casablanca #Robot #SO

 

Hello Brian,

 

Thanks for your reply. You are right: SO is talking to MultiVIM first and then to OpenStack.

 

I have followed this link:

 

This is my updated cloud_sites information:

 

MariaDB [catalogdb]> select * from cloud_sites;

+-------------------+-----------+---------------------+---------------+-----------+-------------+----------+--------------+-----------------+---------------------+---------------------+

| ID                | REGION_ID | IDENTITY_SERVICE_ID | CLOUD_VERSION | CLLI      | CLOUDIFY_ID | PLATFORM | ORCHESTRATOR | LAST_UPDATED_BY | CREATION_TIMESTAMP  | UPDATE_TIMESTAMP    |

+-------------------+-----------+---------------------+---------------+-----------+-------------+----------+--------------+-----------------+---------------------+---------------------+

| Chicago           | ORD       | RAX_KEYSTONE        | 2.5           | ORD       | NULL        | NULL     | NULL         | FLYWAY          | 2019-04-19 19:52:05 | 2019-04-19 19:52:05 |

| Dallas            | DFW       | RAX_KEYSTONE        | 2.5           | DFW       | NULL        | NULL     | NULL         | FLYWAY          | 2019-04-19 19:52:05 | 2019-04-19 19:52:05 |

| DEFAULT           | RegionOne | DEFAULT_KEYSTONE    | 2.5           | RegionOne | NULL        | NULL     | NULL         | FLYWAY          | 2019-04-19 19:52:05 | 2019-04-19 19:52:05 |

| Northern Virginia | IAD       | RAX_KEYSTONE        | 2.5           | IAD       | NULL        | NULL     | NULL         | FLYWAY          | 2019-04-19 19:52:05 | 2019-04-19 19:52:05 |

| RegionOne         | RegionOne | DEFAULT_KEYSTONE    | 2.5           | RegionOne | NULL        | NULL     | NULL         | FLYWAY          | 2019-04-19 19:52:05 | 2019-04-19 19:52:05 |

+-------------------+-----------+---------------------+---------------+-----------+-------------+----------+--------------+-----------------+---------------------+---------------------+

5 rows in set (0.00 sec)

 

 

Here is identity services information: Here xx.xx.xx.xx is the oom master node ip. 

 

MariaDB [catalogdb]> select * from identity_services;

+------------------+------------------------------------------------------------------------------+----------------------+----------------------------------+--------------+-------------+-----------------+----------------------+------------------------------+-----------------+---------------------+---------------------+

| ID               | IDENTITY_URL                                                                 | MSO_ID               | MSO_PASS                         | ADMIN_TENANT | MEMBER_ROLE | TENANT_METADATA | IDENTITY_SERVER_TYPE | IDENTITY_AUTHENTICATION_TYPE | LAST_UPDATED_BY | CREATION_TIMESTAMP  | UPDATE_TIMESTAMP    |

+------------------+------------------------------------------------------------------------------+----------------------+----------------------------------+--------------+-------------+-----------------+----------------------+------------------------------+-----------------+---------------------+---------------------+

| DEFAULT_KEYSTONE | http://xx.xx.xx.xx:30280/api/multicloud/v0/CloudOwner_RegionOne/identity/v2.0 | admin                | 5e34f1f7c1a80a101f2bf1f41f629479 | service      | admin       |               1 | KEYSTONE             | USERNAME_PASSWORD            | FLYWAY          | 2019-04-19 19:52:05 | 2019-04-19 19:52:05 |

| RAX_KEYSTONE     | https://identity.api.rackspacecloud.com/v2.0                                 | RACKSPACE_ACCOUNT_ID | RACKSPACE_ACCOUNT_APIKEY         | service      | admin       |               1 | KEYSTONE             | RACKSPACE_APIKEY             | FLYWAY          | 2019-04-19 19:52:05 | 2019-04-19 19:52:05 |

+------------------+------------------------------------------------------------------------------+----------------------+----------------------------------+--------------+-------------+-----------------+----------------------+------------------------------+-----------------+---------------------+---------------------+

2 rows in set (0.00 sec)

 

 

I was trying to understand the errors in the 'so-openstack-adapter' debug log file:

 

2019-04-23T11:58:07.066Z|3d1d8156-438c-4b54-a53b-8ee2136723c7| org.onap.so.openstack.utils.MsoHeatUtils - RA_CONNECTION_EXCEPTION

2019-04-23T11:58:07.072Z|3d1d8156-438c-4b54-a53b-8ee2136723c7| org.onap.so.adapters.vnf.MsoVnfAdapterImpl - RA_QUERY_VNF_ERR

org.onap.so.openstack.exceptions.MsoAdapterException: AIC did not match an orchestration service for: region=RegionOne,cloud=http://10.5.24.21:30280/api/multicloud/v0/CloudOwner_RegionOne/identity/v2.0

 

 

After searching, I found the error: org.onap.so.openstack.exceptions.MsoAdapterException: AIC did not match an orchestration service for: 

 

 

 

catch (RuntimeException e) {
    // This comes back for not found (probably an incorrect region ID)
    String error = "AIC did not match an orchestration service for: region=" + region + ",cloud="
            + cloudIdentity.getIdentityUrl();
    throw new MsoAdapterException(error, e);
}

 

I have no idea what I am doing wrong. Please help me debug this issue. 

 

many thanks,

Satish

 

 

 

 

On Tue, Apr 23, 2019 at 4:04 PM FREEMAN, BRIAN D <bf1936@...> wrote:

OK that makes more sense then.

 

The reason is that OpenStack Keystone replies with the hostname. There is no way around it, unless you change OpenStack, other than putting the hostname in /etc/hosts inside the Robot container or in the DNS your environment is using.
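As an illustration, the /etc/hosts workaround can be applied inside the running Robot pod like this (pod name, IP and hostname below are placeholders):

```shell
# Append the OpenStack host to /etc/hosts inside the Robot container
kubectl -n onap exec -it <robot-pod> -- \
  sh -c 'echo "yy.yy.yy.yy aio-2450" >> /etc/hosts'
```

A more durable alternative is the Kubernetes hostAliases field in the pod spec, which survives container restarts.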

 

But I am confused: I thought you were talking to MultiVIM, not SO directly to OpenStack?

 

Brian

 

 

 

 

From: satish kumar <satish1044@...>
Sent: Tuesday, April 23, 2019 10:13 AM
To: onap-discuss@...; FREEMAN, BRIAN D <bf1936@...>
Subject: Re: [onap-discuss] Robot Test with Keystone V3 (OpenStack Queen) #casablanca #Robot #SO

 

Dear Brian,

 

Thank you for your reply. 

<OpenStackHostname> represents the OpenStack server hostname. I manually changed it before attaching the log to the email. For your reference, the actual debug log is:

 

2019-04-23T11:58:07.061Z|3d1d8156-438c-4b54-a53b-8ee2136723c7| org.onap.so.openstack.utils.MsoHeatUtils - access=Access [token=Token [id=gAAAAABcvv3P64kQ-zlmqVRgBvP4gGzffJ8NrjuJoH6M4R8-PGbblzVfOHyPsWmYk-G6nK1R_u6-SZVX4WVDGQEH8AJ7zO0dtHrqg1zRNDirRP5iflYoWP7xsn2JRHCoEyJN627Cz36raDRXMd6UL0zayCxyQMYP0Qm1rs3muSZthcIb3bIzPjg, Issued_at=java.util.GregorianCalendar[time=1556020687000,areFieldsSet=true,areAllFieldsSet=true,lenient=true,zone=sun.util.calendar.ZoneInfo[id="UTC",offset=0,dstSavings=0,useDaylight=false,transitions=0,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2019,MONTH=3,WEEK_OF_YEAR=17,WEEK_OF_MONTH=4,DAY_OF_MONTH=23,DAY_OF_YEAR=113,DAY_OF_WEEK=3,DAY_OF_WEEK_IN_MONTH=4,AM_PM=0,HOUR=11,HOUR_OF_DAY=11,MINUTE=58,SECOND=7,MILLISECOND=0,ZONE_OFFSET=0,DST_OFFSET=0], expires=java.util.GregorianCalendar[time=1556024287000,areFieldsSet=true,areAllFieldsSet=true,lenient=true,zone=sun.util.calendar.ZoneInfo[id="UTC",offset=0,dstSavings=0,useDaylight=false,transitions=0,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2019,MONTH=3,WEEK_OF_YEAR=17,WEEK_OF_MONTH=4,DAY_OF_MONTH=23,DAY_OF_YEAR=113,DAY_OF_WEEK=3,DAY_OF_WEEK_IN_MONTH=4,AM_PM=1,HOUR=0,HOUR_OF_DAY=12,MINUTE=58,SECOND=7,MILLISECOND=0,ZONE_OFFSET=0,DST_OFFSET=0], tenant=Tenant [id=ecf1539291894c76a6b57308ae4726a3, name=admin, description=null, enabled=true]], serviceCatalog=[Service [type=volumev3, name=cinderv3, endpoints=[Endpoint [region=RegionOne, publicURL=http://aio-2450:8776/v3/ecf1539291894c76a6b57308ae4726a3, internalURL=null, adminURL=null]], endpointsLinks=null], Service [type=image, name=glance, endpoints=[Endpoint [region=RegionOne, publicURL=http://aio-2450:9292, internalURL=null, adminURL=null]], endpointsLinks=null], Service [type=application-catalog, name=murano, endpoints=[Endpoint [region=RegionOne, publicURL=http://aio-2450:8082, internalURL=null, adminURL=null]], endpointsLinks=null], Service [type=network, name=neutron, endpoints=[Endpoint 
[region=RegionOne, publicURL=http://aio-2450:9696, internalURL=null, adminURL=null]], endpointsLinks=null], Service [type=placement, name=placement, endpoints=[Endpoint [region=RegionOne, publicURL=http://aio-2450:8778, internalURL=null, adminURL=null]], endpointsLinks=null], Service [type=volumev2, name=cinderv2, endpoints=[Endpoint [region=RegionOne, publicURL=http://aio-2450:8776/v2/ecf1539291894c76a6b57308ae4726a3, internalURL=null, adminURL=null]], endpointsLinks=null], Service [type=identity, name=keystone, endpoints=[Endpoint [region=RegionOne, publicURL=http://aio-2450:5000/v3/, internalURL=null, adminURL=null]], endpointsLinks=null], Service [type=compute, name=nova, endpoints=[Endpoint [region=RegionOne, publicURL=http://aio-2450:8774/v2.1, internalURL=null, adminURL=null]], endpointsLinks=null]], user=User [id=ce13f5cba38646db89d51b4fb296c245, name=admin, username=null, roles=null, rolesLinks=null], metadata=null]

 

 

Here, 'aio-2450' represents the OpenStack server IP. I have not configured the OpenStack server hostname in values.yaml (or in any other file). I think Robot is able to communicate with the OpenStack server initially via IP (that is why it has the hostname), but it then tries to communicate using the hostname (I have no idea why this is happening).  

 

I have overridden the encrypted password for Robot and SO. Still, I don't know why I am getting this error. 

 

 

Many Thanks,

Satish

 

 

 

 

 

 


Thanks and Regards,

 

Satish Kumar, PhD

========================================

Research Fellow

5G Innovation Centre,

University of Surrey, 

Guildford, UK

========================================

Web page:

https://www.surrey.ac.uk/people/satish-kumar                

========================================

 

 

 

On Tue, Apr 23, 2019 at 2:48 PM Brian <bf1936@...> wrote:

If you are using MultiCloud, then SO talks Keystone v2 to MultiCloud.

 

Check the OpenStack parameters in your values.yaml for Robot, SO and MultiVIM, and make sure you have encrypted the passwords for Robot and SO appropriately per the ReadTheDocs instructions.

 

It looks like you did not override something: host='<OpenStackHostname>' is a variable string, not the actual URI/IP address we would see if your configuration were correct.

 

Brian

 

 

 

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of satish1044@...
Sent: Tuesday, April 23, 2019 9:28 AM
To: onap-discuss@...
Subject: [onap-discuss] Robot Test with Keystone V3 (OpenStack Queen) #casablanca #Robot #SO

 

Hello Team,

I have deployed ONAP Casablanca on Kubernetes with Rancher.  

I am trying to run the robot scripts with Keystone version V3 (OpenStack Queen). 

1: ./demo-k8s.sh onap init
I am able to successfully run this test by changing the version from V2 to V3. 

2. ./demo-k8s.sh onap init_robot
For this test, I am getting error:
ConnectionError: 
HTTPConnectionPool(host='<OpenStackHostname>', port=8774): Max retries exceeded with url: /v2.1/servers/detail (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fb38083f6d0>: Failed to establish a new connection: [Errno -2] Name or service not known',))

This error occurs mainly because the Robot pod is trying to connect to the OpenStack server by its hostname.
However, I solved this problem by updating the /etc/hosts file with the IP address of the OpenStack server, and the test was then successful.

Is there any other way to solve this problem without updating the robot's /etc/hosts file?

3.
./demo-k8s.sh onap instantiateVFW 

I am trying to instantiate vFW on OpenStack queen version (Keystone v3.0).
I have followed the steps given by 
kranthi guttikonda (https://lists.onap.org/g/onap-discuss/topic/casablanca_so_openstack/30403892?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,30403892)

Step 1: Updated the 
identity_services in so-MariaDB as:
MariaDB [catalogdb]> update identity_services set IDENTITY_URL="http://xx.xx.xx.xx:30280/api/multicloud/v0/CloudOwner_RegionOne/identity/v2.0" where ID="DEFAULT_KEYSTONE";

Here, xx.xx.xx.xx is the oom master node ip. 

Step 2: Successfully updated RegionOne in AAI using a PUT command via Postman (here, xx.xx.xx.xx -> oom master node ip and yy.yy.yy.yy -> OpenStack server ip).

{
    "cloud-owner": "CloudOwner",
    "cloud-region-id": "RegionOne",
    "cloud-type": "openstack",
    "owner-defined-type": "OwnerType",
    "cloud-region-version": "v2.5",
    "cloud-zone": "CloudZone",
    "resource-version": "1555956059967",
    "complex-name": "clli1",
    "tenants": {
        "tenant": [{
            "tenant-id": "ecf1539291894c76a6b57308ae4726a3",
            "tenant-name": "admin",
            "resource-version": "1555956060830"
        }]
    },
    "esr-system-info-list": {
        "esr-system-info": [{
            "esr-system-info-id": "1",
            "system-name": "OpenStack",
            "type": "vim",
            "service-url": "http://yy.yy.yy.yy:5000/v3",
            "user-name": "admin",
            "password": "password",
            "system-type": "VIM",
            "ssl-insecure": true,
            "cloud-domain": "default",
            "default-tenant": "admin"
        }]
    }
}



Then I ran ./demo-k8s.sh onap instantiateVFW and it failed. The error is: 
"AIC did not match an orchestration service for: region=RegionOne,cloud=http://xx.xx.xx.xx:30280/api/multicloud/v0/CloudOwner_RegionOne/identity/v2.0"

I have included a snapshot of the debug.log file from so-openstack-adapter below. In the debug file:

<OpenStackHostname> represents the openstack host name
xx.xx.xx.xx represents the oom master node ip.

From the debug log, I understand that the OpenStack adapter is trying to connect to the OpenStack server using the hostname. Maybe this is the issue. 
Please help me debug this problem.
Thank you very much for your help!

Regards,
Satish



----------------------------------------------------------------
Debug log file form so-openstack-adapter:

2019-04-23T11:58:02.506Z|3d1d8156-438c-4b54-a53b-8ee2136723c7| org.onap.so.openstack.utils.MsoHeatUtils - Found: CloudSite_.._jvst58e_31[regionId=RegionOne,identityServiceId=DEFAULT_KEYSTONE,cloudVersion=2.5,clli=RegionOne,cloudifyId=<null>,platform=<null>,orchestrator=<null>]

2019-04-23T11:58:02.506Z|3d1d8156-438c-4b54-a53b-8ee2136723c7| org.onap.so.openstack.utils.MsoHeatUtils - Found: CloudIdentity[id=DEFAULT_KEYSTONE,identityUrl=http://xx.xx.xx.xx:30280/api/multicloud/v0/CloudOwner_RegionOne/identity/v2.0,msoId=admin,adminTenant=service,memberRole=admin,tenantMetadata=true,identityServerType=KEYSTONE,identityAuthenticationType=USERNAME_PASSWORD]

2019-04-23T11:58:02.507Z|3d1d8156-438c-4b54-a53b-8ee2136723c7| org.onap.so.openstack.utils.MsoHeatUtils - keystoneUrl=http://xx.xx.xx.xx:30280/api/multicloud/v0/CloudOwner_RegionOne/identity/v2.0

 

http://xx.xx.xx.xx:30280/api/multicloud/v0/CloudOwner_RegionOne/identity/v2.0

2019-04-23T11:58:07.061Z|3d1d8156-438c-4b54-a53b-8ee2136723c7| org.onap.so.openstack.utils.MsoHeatUtils - access=Access [token=Token [id=gAAAAABcvv3P64kQ-zlmqVRgBvP4gGzffJ8NrjuJoH6M4R8-PGbblzVfOHyPsWmYk-G6nK1R_u6-SZVX4WVDGQEH8AJ7zO0dtHrqg1zRNDirRP5iflYoWP7xsn2JRHCoEyJN627Cz36raDRXMd6UL0zayCxyQMYP0Qm1rs3muSZthcIb3bIzPjg, Issued_at=java.util.GregorianCalendar[time=1556020687000,areFieldsSet=true,areAllFieldsSet=true,lenient=true,zone=sun.util.calendar.ZoneInfo[id="UTC",offset=0,dstSavings=0,useDaylight=false,transitions=0,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2019,MONTH=3,WEEK_OF_YEAR=17,WEEK_OF_MONTH=4,DAY_OF_MONTH=23,DAY_OF_YEAR=113,DAY_OF_WEEK=3,DAY_OF_WEEK_IN_MONTH=4,AM_PM=0,HOUR=11,HOUR_OF_DAY=11,MINUTE=58,SECOND=7,MILLISECOND=0,ZONE_OFFSET=0,DST_OFFSET=0], expires=java.util.GregorianCalendar[time=1556024287000,areFieldsSet=true,areAllFieldsSet=true,lenient=true,zone=sun.util.calendar.ZoneInfo[id="UTC",offset=0,dstSavings=0,useDaylight=false,transitions=0,lastRule=null],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2019,MONTH=3,WEEK_OF_YEAR=17,WEEK_OF_MONTH=4,DAY_OF_MONTH=23,DAY_OF_YEAR=113,DAY_OF_WEEK=3,DAY_OF_WEEK_IN_MONTH=4,AM_PM=1,HOUR=0,HOUR_OF_DAY=12,MINUTE=58,SECOND=7,MILLISECOND=0,ZONE_OFFSET=0,DST_OFFSET=0], tenant=Tenant [id=ecf1539291894c76a6b57308ae4726a3, name=admin, description=null, enabled=true]], serviceCatalog=[Service [type=volumev3, name=cinderv3, endpoints=[Endpoint [region=RegionOne, publicURL=http://<OpenStackHostname>:8776/v3/ecf1539291894c76a6b57308ae4726a3, internalURL=null, adminURL=null]], endpointsLinks=null], Service [type=image, name=glance, endpoints=[Endpoint [region=RegionOne, publicURL=http://<OpenStackHostname>:9292, internalURL=null, adminURL=null]], endpointsLinks=null], Service [type=application-catalog, name=murano, endpoints=[Endpoint [region=RegionOne, publicURL=http://<OpenStackHostname>:8082, internalURL=null, adminURL=null]], endpointsLinks=null], Service [type=network, 
name=neutron, endpoints=[Endpoint [region=RegionOne, publicURL=http://<OpenStackHostname>:9696, internalURL=null, adminURL=null]], endpointsLinks=null], Service [type=placement, name=placement, endpoints=[Endpoint [region=RegionOne, publicURL=http://<OpenStackHostname>:8778, internalURL=null, adminURL=null]], endpointsLinks=null], Service [type=volumev2, name=cinderv2, endpoints=[Endpoint [region=RegionOne, publicURL=http://<OpenStackHostname>:8776/v2/ecf1539291894c76a6b57308ae4726a3, internalURL=null, adminURL=null]], endpointsLinks=null], Service [type=identity, name=keystone, endpoints=[Endpoint [region=RegionOne, publicURL=http://<OpenStackHostname>:5000/v3/, internalURL=null, adminURL=null]], endpointsLinks=null], Service [type=compute, name=nova, endpoints=[Endpoint [region=RegionOne, publicURL=http://<OpenStackHostname>:8774/v2.1, internalURL=null, adminURL=null]], endpointsLinks=null]], user=User [id=ce13f5cba38646db89d51b4fb296c245, name=admin, username=null, roles=null, rolesLinks=null], metadata=null]

2019-04-23T11:58:07.062Z|3d1d8156-438c-4b54-a53b-8ee2136723c7| org.onap.so.openstack.utils.MsoHeatUtils - AIC returned region=RegionOne

2019-04-23T11:58:07.062Z|3d1d8156-438c-4b54-a53b-8ee2136723c7| org.onap.so.openstack.utils.MsoHeatUtils - AIC returned region=RegionOne

2019-04-23T11:58:07.062Z|3d1d8156-438c-4b54-a53b-8ee2136723c7| org.onap.so.openstack.utils.MsoHeatUtils - AIC returned region=RegionOne

2019-04-23T11:58:07.062Z|3d1d8156-438c-4b54-a53b-8ee2136723c7| org.onap.so.openstack.utils.MsoHeatUtils - AIC returned region=RegionOne

2019-04-23T11:58:07.062Z|3d1d8156-438c-4b54-a53b-8ee2136723c7| org.onap.so.openstack.utils.MsoHeatUtils - AIC returned region=RegionOne

2019-04-23T11:58:07.063Z|3d1d8156-438c-4b54-a53b-8ee2136723c7| org.onap.so.openstack.utils.MsoHeatUtils - AIC returned region=RegionOne

2019-04-23T11:58:07.063Z|3d1d8156-438c-4b54-a53b-8ee2136723c7| org.onap.so.openstack.utils.MsoHeatUtils - AIC returned region=RegionOne

2019-04-23T11:58:07.063Z|3d1d8156-438c-4b54-a53b-8ee2136723c7| org.onap.so.openstack.utils.MsoHeatUtils - AIC returned region=RegionOne

2019-04-23T11:58:07.066Z|3d1d8156-438c-4b54-a53b-8ee2136723c7| org.onap.so.openstack.utils.MsoHeatUtils - RA_CONNECTION_EXCEPTION

2019-04-23T11:58:07.072Z|3d1d8156-438c-4b54-a53b-8ee2136723c7| org.onap.so.adapters.vnf.MsoVnfAdapterImpl - RA_QUERY_VNF_ERR

org.onap.so.openstack.exceptions.MsoAdapterException: AIC did not match an orchestration service for: region=RegionOne,cloud=http://xx.xx.xx.xx:30280/api/multicloud/v0/CloudOwner_RegionOne/identity/v2.0

        at org.onap.so.openstack.utils.MsoHeatUtils.getHeatClient(MsoHeatUtils.java:948)

        at org.onap.so.openstack.utils.MsoHeatUtils.queryStack(MsoHeatUtils.java:571)

        at org.onap.so.adapters.vnf.MsoVnfAdapterImpl.createVfModule(MsoVnfAdapterImpl.java:658)

        at org.onap.so.adapters.vnf.MsoVnfAdapterImpl$$FastClassBySpringCGLIB$$8b1f101c.invoke(<generated>)

        at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)

        at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:736)

        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)

        at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:99)

        at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:282)

        at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)

        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)

        at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:671)

        at org.onap.so.adapters.vnf.MsoVnfAdapterImpl$$EnhancerBySpringCGLIB$$2b1b798a.createVfModule(<generated>)

        at org.onap.so.adapters.vnf.VnfAdapterRest$CreateVfModuleTask.run(VnfAdapterRest.java:440)

        at java.lang.Thread.run(Thread.java:748)

Caused by: java.lang.RuntimeException: endpoint url not found

        at com.woorea.openstack.keystone.utils.KeystoneUtils.findEndpointURL(KeystoneUtils.java:38)

        at org.onap.so.openstack.utils.MsoHeatUtils.getHeatClient(MsoHeatUtils.java:942)

        ... 14 common frames omitted

2019-04-23T11:58:07.075Z|3d1d8156-438c-4b54-a53b-8ee2136723c7| org.onap.so.adapters.vnf.VnfAdapterRest - Exception :

org.onap.so.adapters.vnf.exceptions.VnfException: org.onap.so.openstack.exceptions.MsoAdapterException: AIC did not match an orchestration service for: region=RegionOne,cloud=http://xx.xx.xx.xx:30280/api/multicloud/v0/CloudOwner_RegionOne/identity/v2.0
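The root cause is visible in the catalog dump at the top of the log: the Keystone service catalog returned by MultiCloud contains services of type volumev3, image, application-catalog, network, placement, volumev2, identity, and compute, but no "orchestration" (Heat) entry, so the Heat endpoint lookup fails. The following is a hypothetical Python simplification of the lookup performed by `com.woorea.openstack.keystone.utils.KeystoneUtils.findEndpointURL`, using the service types copied from the log; it is a sketch to illustrate the failure mode, not the actual implementation.

```python
# Service types copied from the catalog dump in the log above.
# There is no "orchestration" (Heat) entry, hence "endpoint url not found".
catalog = [
    {"type": "volumev3", "region": "RegionOne"},
    {"type": "image", "region": "RegionOne"},
    {"type": "application-catalog", "region": "RegionOne"},
    {"type": "network", "region": "RegionOne"},
    {"type": "placement", "region": "RegionOne"},
    {"type": "volumev2", "region": "RegionOne"},
    {"type": "identity", "region": "RegionOne"},
    {"type": "compute", "region": "RegionOne"},
]

def find_endpoint(catalog, service_type, region):
    """Return the first catalog entry matching type and region, else fail."""
    for svc in catalog:
        if svc["type"] == service_type and svc["region"] == region:
            return svc
    raise RuntimeError("endpoint url not found")

try:
    find_endpoint(catalog, "orchestration", "RegionOne")
except RuntimeError as e:
    print(e)  # endpoint url not found
```

In other words, SO keeps matching region=RegionOne for each catalogued service (the repeated "AIC returned region=RegionOne" lines) but never finds an orchestration service, so checking why MultiCloud's identity proxy omits the Heat endpoint is the place to start.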

Re: Access for MSO. #so

akash ravishankar <aravishankar885@...>
 

How can we go about solving this problem to get a 200 OK response status code?


Access for MSO. #so

akash ravishankar <aravishankar885@...>
 

Hello team,
                  This is the URL we are using to access MSO: http://10.168.155.92:30277/ecomp/mso/catalog/v2/serviceVnfs
We are using the SO credentials for authorization and passing the SO port number, but we are getting a 404 response status code.

Thanks and regards,
Akash


Re: #appc Unable to execute stop lcm operation from APPC #appc

Gopigiri, Sirisha
 

Hi Brian,

Except for the Ansible Server pod, all the other pods are running. I am able to execute MySQL commands in the DB pod. I redeployed APPC but am still facing the same issue. The startup sequence of the pods is appc-db, appc, and then the ansible pod. Not sure what failed.

Hi Lathish,

There are no entries in the VNF_DG_MAPPING table, so I inserted them manually. Now I see the same Invalid URL error in both of my setups, one with an HTTPS VIM and one with an HTTP VIM.

Hi Steve,

I am trying to execute the Restart LCM operation, which is supported at both VNF and VM level. According to the documentation, a vm-id should be sent in the payload. But when I send the vm-id in the payload, the request is rejected with a schema error and a 400 response code.

Here is the request payload I am using for the APPC Restart:

{
  "input": {
    "common-header": {
      "timestamp": "2019-04-25T06:05:04.244Z",
      "api-ver": "2.00",
      "originator-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
      "request-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
      "sub-request-id": "1",
      "flags": {
          "force" : "TRUE",
          "ttl" : 60000
      }
    },
    "action": "Restart",
    "action-identifiers": {
      "vnf-id":"d86f68e7-9cb2-48cc-b82d-ffd54602b91b"
    },
    "payload":{
       "vm-id":"http://x.x.x.x:yyyy/v2.1/servers/88b65041-81d6-41bc-b6ff-3785e461541e"
    }
  }
}

And here is the error message I see when I include the payload field in the request.

{
 "errors": {
   "error": [
     {
       "error-type": "protocol",
       "error-tag": "malformed-message",
       "error-message": "Error parsing input: Schema node with name vm-id was not found under (org:onap:appc:lcm?revision=2016-01-08)payload.",
       "error-info": "Schema node with name vm-id was not found under (org:onap:appc:lcm?revision=2016-01-08)payload."
     }
   ]
 }
}
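This schema error is consistent with Brian's note earlier in the thread: lcm.yang defines `payload` as a typedef of type string, so the inner JSON must be sent as a JSON-escaped string value rather than a nested object. A sketch of building such a request in Python (the vm-id URL is the placeholder value from the request above, and only the relevant fields are shown):

```python
import json

# Per the lcm.yang typedef quoted earlier in the thread, payload must be a
# JSON string value, not a nested object. json.dumps on the inner dict
# produces the required escaped string.
inner = {"vm-id": "http://x.x.x.x:yyyy/v2.1/servers/88b65041-81d6-41bc-b6ff-3785e461541e"}

request = {
    "input": {
        "action": "Restart",
        "action-identifiers": {"vnf-id": "d86f68e7-9cb2-48cc-b82d-ffd54602b91b"},
        # The key difference: payload is a string, not an object.
        "payload": json.dumps(inner),
    }
}

print(json.dumps(request, indent=2))
```

On the wire, the payload field then looks like `"payload": "{\"vm-id\": \"http://...\"}"`, which is what the RESTCONF schema validation accepts.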

Without the payload field in the request, I get an Invalid URL error in the APPC logs.

Am I missing anything in the request payload that is sent to APPC?

Thank you in advance!

Regards
Sirisha Gopigiri


Re: Restconf API authentification: User/password fail #appc

Lathish
 

Hi Steve,

 

Can you also check the port number? Usually it will be 8282 (internal) or 30230 (external) unless you changed it.

 

Can you do a get services on your APPC pod and verify it?

 

Br,

Lathish

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Yang Xu via Lists.Onap.Org
Sent: Thursday, April 25, 2019 3:30 AM
To: onap-discuss@...; alphonse.steve.siani.djissitchi@...
Subject: Re: [onap-discuss] Restconf API authentification: User/password fail #appc

 

Steve,

 

If you have access to the integration lab, try it on one of the full ONAP instances (Integration-SB-00) we deployed for ONAP pairwise testing: http://10.12.6.122:30230/apidoc/explorer/index.html. The credential works for me on this deployment. Note I used APPC service port 30230.

 

Regards,

-yang

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Steve Siani
Sent: Wednesday, April 24, 2019 9:47 AM
To: onap-discuss@...
Subject: [onap-discuss] Restconf API authentification: User/password fail #appc

 

Hello,
I have APPC installed from OOM Dublin version.

I can see all the APPC pods active, but when I try to reach the Restconf API using the user admin and password Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U, authentication fails.

Does anyone know if this credential is valid? I am reading the username/password from the values.yaml file and even from appc.properties, but I cannot authenticate to http://host_ip:8181/apidoc/explorer/index.html or http://host_ip:8181/restconf/operations/appc-provider
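As a sanity check on the credential itself, the HTTP Basic auth header that ODL RESTCONF expects can be constructed as follows; this is only a sketch using the credentials quoted above and does not rule out the port mismatch discussed elsewhere in the thread.

```python
import base64

# Build the Basic auth header for the admin credential quoted above.
# A 401 with a correctly formed header points at a wrong password or
# a service not actually listening on that port.
user = "admin"
password = "Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U"
token = base64.b64encode(f"{user}:{password}".encode()).decode()
auth_header = {"Authorization": "Basic " + token}

print(auth_header["Authorization"])
```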

Thanks!
Steve


Re: Maintenance tag on Casablanca branch?

Sriram Rupanagunta
 

Hi, 

Just checking to see if there is any update on creating the maintenance (3.0.2?) tag?

We don't see any issues with the casablanca branch as such, but we want to base off a stable tag for predictable behavior.

Thanks. 

-Sriram


Re: MSO Failure #so

akash ravishankar <aravishankar885@...>
 

Thanks Steve. It's working now. 


Unauthorized in VID. #so

akash ravishankar <aravishankar885@...>
 

Hello team,
                We are using the following URL to get the status of service instance:
/vid/mso/mso_get_orch_req/ed91bc29-957f-47b0-9884-40b6c84f2d68
But we are getting a 401 Unauthorized response. We used the VID username and password provided by ONAP itself. Is there another way to approach this?

Thanks and regards,
Akash


Re: [SO][docker deployment] testing with SO dockers

seshu kumar m
 

Hi Eric

 

You can test SO standalone by launching the pods and validating through the REST APIs.

In the past we made our local test cases using Robot-script-based validations for a given request.

Currently there are no ready-made test cases; introducing them in the CSIT suite is a key agenda item for the E release.

 

Thanks and Regards,

M Seshu Kumar

Senior System Architect

Single OSS India Branch Department. S/W BU.

Huawei Technologies India Pvt. Ltd.

Survey No. 37, Next to EPIP Area, Kundalahalli, Whitefield

Bengaluru-560066, Karnataka.

Tel: + 91-80-49160700 , Mob: 9845355488



 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Multanen, Eric W
Sent: 2019-04-24 23:36
To: onap-discuss@...; ss835w@...; Seshu m <seshu.kumar.m@...>; Subhash Kumar Singh <subhash.kumar.singh@...>
Subject: [onap-discuss][SO][docker deployment] testing with SO dockers

 

Hi SO,

 

I believe I’ve been able to successfully build and deploy (deploy.sh in docker-config) the SO docker images with docker-compose.

 

Now that I’ve done that, I’m not exactly sure what else I can do. 

 

Can I test creating a service instance, VNFs, and VF modules?

 

Are there some canned test data/scripts somewhere that I can use to fill in some example service models, heat templates, etc.?

 

Thanks,

Eric

 


Re: [E] [onap-discuss] aai-graphadmin pod start error in k8 using helm

Mahendra Raghuwanshi
 

Hi Bharath,

AAI now uses the common Cassandra cluster and no longer creates its own. The create-db-schema job depends on the common Cassandra instances.
To make it work, you will have to enable Cassandra as well when installing AAI.

e.g.
helm deploy demo local/onap --namespace onap --set aai.enabled=true --set cassandra.enabled=true 

I am trying to document this in detail, but meanwhile you can refer to https://wiki.onap.org/display/DW/AAI+Rolling+Upgrade

Thx,
Mahendra
