
Re: demo-k8s.sh deleteVNF fails #frankfurt #robot

Brian Freeman
 


 

I should have looked deeper.

 

https://jira.onap.org/browse/INT-1784 created and updated to track this.

 

In normal testing, Delete VNF is part of the life cycle test, so the vnf orchestration test template saves the stack list as part of the create and then passes it to “Delete VNF” as vf_module_name_list, without using the VARIABLES data in the file.

 

In demo-k8s.sh, the variable file has the stack name (although there is a mismatch in the variable name: STACK_NAME vs. STACK_NAMES), so I suspect we haven’t worked on deleteVNF from demo-k8s.sh since we moved from single-stack VNF demonstrations in robot to multiple-stack VNF demonstrations.

 

I would go back to demo-k8s.sh but modify the script as follows (I can’t test this right now, so it would be helpful if you could try it and let me know if I’ve missed something else).

 

Sorry for the confusion.

 

Brian

 

demo-k8s.sh “patch”:

 

       deleteVNF)
                        TAG="deleteVNF"
                        shift
                        if [ $# -ne 1 ];then
                                echo "Usage: demo-k8s.sh <namespace> deleteVNF <module_name from instantiateVFW>"
                                exit
                        fi
                        VARFILE=$1.py
                        VARIABLES="$VARIABLES -V /share/${VARFILE} -v STACK_NAMES:$1"
                        #if [ -e /opt/eteshare/${VARFILE} ]; then
                        #       VARIABLES="$VARIABLES -V /share/${VARFILE}"
                        #else
                        #       echo "Cache file ${VARFILE} is not found"
                        #       exit
                        #fi
                        shift
                        ;;
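
With that patch applied, the invocation should look roughly like this (the module name below is taken from Emin's failing run later in the thread; it must match a save-for-delete variable file under /share):

    ./demo-k8s.sh onap deleteVNF Vfmodule_Ete_vFWCLvPKG_452156c1_1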

 

 

Here is the path through the robot code for this.

 

 

vnf_orchestration (normal regression test):

 

Orchestrate VNF Template
    [Documentation]   Use ONAP to Orchestrate a service.
    [Arguments]    ${customer_name}    ${service}    ${product_family}    ${delete_flag}=DELETE
    ${uuid}=    Generate UUID4
    ${catalog_service_id}=    Set Variable    ${None}    # default to empty
    ${catalog_resource_ids}=    Set Variable    ${None}    # default to empty
    ${tenant_id}    ${tenant_name}=    Setup Orchestrate VNF    ${GLOBAL_AAI_CLOUD_OWNER}    SharedNode    OwnerType    v1    CloudZone
    ${vf_module_name_list}   ${generic_vnfs}    ${server_id}    ${service_instance_id}    ${catalog_resource_ids}   ${catalog_service_id}    ${uris_to_delete}=    Orchestrate VNF   ${customer_name}_${uuid}    ${service}    ${product_family}    ${tenant_id}    ${tenant_name}
    Run Keyword If   '${delete_flag}' == 'DELETE'   Delete VNF    ${tenant_name}    ${server_id}    ${customer_name}_${uuid}    ${service_instance_id}    ${vf_module_name_list}    ${uris_to_delete}
    [Teardown]         Teardown VNF    ${customer_name}_${uuid}    ${catalog_service_id}    ${catalog_resource_ids}

 

testsuite:demo.robot

 

Delete Instantiated VNF
    [Documentation]   This test assumes all necessary variables are loaded via the variable file created in Save For Delete
    ...    The Teardown VNF needs to be in the teardown step of the test case...
    [Tags]   deleteVNF
    Setup Browser
    Login To VID GUI
    Delete VNF    ${TENANT_NAME}    ${VVG_SERVER_ID}    ${CUSTOMER_NAME}    ${SERVICE_INSTANCE_ID}    ${STACK_NAMES}    ${REVERSE_HEATBRIDGE}
    [Teardown]   Teardown VNF    ${CUSTOMER_NAME}    ${CATALOG_SERVICE_ID}    ${CATALOG_RESOURCE_IDS}

 

resources/demo_preload.robot:

Save For Delete
    [Documentation]   Create a variable file to be loaded for save for delete
    [Arguments]    ${tenant_id}    ${tenant_name}    ${vvg_server_id}    ${customer_name}    ${service_instance_id}    ${stack_name}    ${catalog_service_id}    ${catalog_resource_ids}
    ${dict}=    Create Dictionary
    Set To Dictionary   ${dict}   TENANT_NAME=${tenant_name}
    Set To Dictionary   ${dict}   TENANT_ID=${tenant_id}
    Set To Dictionary   ${dict}   CUSTOMER_NAME=${customer_name}
    Set To Dictionary   ${dict}   STACK_NAME=${stack_name}
    Set To Dictionary   ${dict}   VVG_SERVER_ID=${vvg_server_id}
    Set To Dictionary   ${dict}   SERVICE_INSTANCE_ID=${service_instance_id}
    Set To Dictionary   ${dict}   CATALOG_SERVICE_ID=${catalog_service_id}
    ${vars}=    Catenate
    ${keys}=   Get Dictionary Keys    ${dict}
    :FOR   ${key}   IN   @{keys}
    \    ${value}=   Get From Dictionary   ${dict}   ${key}
    \    ${vars}=   Catenate   ${vars}${key} = "${value}"\n
    ${comma}=   Catenate
    ${vars}=    Catenate   ${vars}CATALOG_RESOURCE_IDS = [
    :FOR   ${id}   IN    @{catalog_resource_ids}
    \    ${vars}=    Catenate  ${vars}${comma} "${id}"
    \    ${comma}=   Catenate   ,
    ${vars}=    Catenate  ${vars}]\n
    OperatingSystem.Create File   ${FILE_CACHE}/${stack_name}.py   ${vars}
    OperatingSystem.Create File   ${FILE_CACHE}/lastVNF4HEATBRIGE.py   ${vars}
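
For reference, the variable file this keyword writes to ${FILE_CACHE}/${stack_name}.py should look roughly like the sketch below (all values are placeholders, and the key order may differ since it follows the dictionary keys):

    TENANT_NAME = "example-tenant"
    TENANT_ID = "example-tenant-id"
    CUSTOMER_NAME = "ETE_Customer_example"
    STACK_NAME = "Vfmodule_Ete_example_1"
    VVG_SERVER_ID = "example-server-id"
    SERVICE_INSTANCE_ID = "example-service-instance-id"
    CATALOG_SERVICE_ID = "example-catalog-service-id"
    CATALOG_RESOURCE_IDS = [ "example-id-1", "example-id-2"]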

 

 

resources/test_templates/vnf_orchestration_test_template.robot

Delete VNF
    [Documentation]    Called at the end of a test case to tear down the VNF created by Orchestrate VNF
    [Arguments]    ${tenant_name}    ${server_id}    ${customer_name}    ${service_instance_id}    ${vf_module_name_list}    ${uris_to_delete}
    ${lcp_region}=   Get Openstack Region
    ${list}=    Create List
    # remove duplicates, sort vFW -> vPKG, reverse to get vPKG > vFWSNK
    ${sorted_stack_names}=    Create List
    ${sorted_stack_names}=  Remove Duplicates   ${vf_module_name_list}
    Sort List   ${sorted_stack_names}
    Reverse List   ${sorted_stack_names}
    :FOR   ${stack}   IN   @{sorted_stack_names}
    \     ${keypair_name}=    Get Stack Keypairs   ${stack}
    \     Append To List   ${list}   ${keypair_name}
    Teardown VVG Server    ${server_id}
    Run Keyword and Ignore Error   Teardown VID   ${service_instance_id}   ${lcp_region}   ${tenant_name}   ${customer_name}    ${uris_to_delete}
    #
    :FOR   ${stack}   IN   @{sorted_stack_names}
    \    Run Keyword and Ignore Error    Teardown Stack    ${stack}
    \    Log    Stack Deleted ${stack}
    # only needed if stack deleted but not keypair
    :FOR   ${key_pair}   IN   @{list}
    \    Run Keyword and Ignore Error    Delete Stack Keypair  ${key_pair}
    \    Log    Key pair Deleted ${key_pair}
    Log    VNF Deleted
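
The dedupe/sort/reverse at the top is easy to sanity-check with a throwaway shell one-liner (an illustration only, not part of the suite):

    printf '%s\n' Vfmodule_x_vFWSNK Vfmodule_x_vPKG Vfmodule_x_vFWSNK | sort -u | sort -r
    # prints Vfmodule_x_vPKG first, then Vfmodule_x_vFWSNK,
    # i.e. the vPKG stack is torn down before vFWSNK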

 



Re: Distribution to aai-ml failed in the ONAP Guilin #aai #sdc

William Reehil
 

Do you get the same issue using the Guilin branch? Master is now Honolulu, and folks have been committing/merging there. We are not currently aware of any bug.


Re: [ONAP] CD chains and CSIT tests strongly impacted by dockerhub policy change

Krzysztof Opasiak
 

On 19.11.2020 18:53, Jessica Wagantall wrote:
Hi Morgan,

I brought this concern to Steve Winslow. Let me quote his reply so that
there is no confusion:

"If we are pulling the build image from our nexus solely for internal
runs and not making the third party images available to external folks,
then I'm not concerned about that."
"The issue we had talked about with the ONAP teams earlier this year was
that we should not be redistributing binary container images / layers
that consist of base layers outside the ONAP project. It's okay to point
to pulling those base layers from DockerHub or third party sources but
we should not be redistributing them ourselves."
Right. So the bottom line is that only LF-internal systems will be
able to use Nexus as a proxy, right?


Please let me know if this clarifies the concern

Thanks!
Jess


On Wed, Nov 18, 2020 at 11:21 PM <morgan.richomme@orange.com> wrote:

Hi,

recently Docker has enabled download rate limits for pull requests
on Docker Hub.
Several upstream components used in ONAP are hosted in dockerhub.

As a consequence, lots of CD chains or CSIT tests are now failing
due to the error:
"Error response from daemon: toomanyrequests: You have reached your
pull rate limit. You may increase the limit by authenticating and
upgrading: https://www.docker.com/increase-rate-limit"

Some CSIT tests committed patches to use the Nexus3 as a proxy/cache
for these upstream repositories.

It has obviously major impacts on integration activities.

@Jessica, do you confirm that this is possible and that there are no
legal issues? As far as I remember, it was decided not to host
third-party Docker images for legal reasons; caching them is not very
different from hosting, no?

As far as I can see, Mirantis, which recently acquired the Docker Hub
activities, is a member of LF Networking. Is there any way to discuss
with their representatives whether they could exclude community
activities from this rate-limitation mechanism?


Regards

/Morgan
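
A common mitigation for the rate limit itself, independent of the legal question above, is to point the Docker daemon at a pull-through cache. A minimal sketch, assuming a Nexus3 (or other) registry mirror is available to you; the URL is a placeholder:

    # /etc/docker/daemon.json - the mirror URL is hypothetical
    {
      "registry-mirrors": ["https://nexus3.example.org:10001"]
    }
    # restart the daemon to pick up the change
    sudo systemctl restart docker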





--
Krzysztof Opasiak
Samsung R&D Institute Poland
Samsung Electronics


Re: [OOM] registry.hub.docker.com seems to need authentication now

Sylvain Desbureaux
 

Sorry for the spam, but it means that for now we should only have issues with the rate limit, not authentication issues.

It's still a big pain on our side (OOM and gating), as we hit the rate limit all the time and thus can't really test, but it's less critical than what I said earlier.



Re: [OOM] registry.hub.docker.com seems to need authentication now

Sylvain Desbureaux
 

Hello,
it's actually a bit more complicated than that:

for docker "official" images, registry.hub.docker.com has a different behavior than docker.io

with busybox:

docker pull busybox:1.32 --> WORKS
docker pull library/busybox:1.32 --> WORKS
docker pull docker.io/busybox:1.32 --> WORKS
docker pull docker.io/library/busybox:1.32 --> WORKS
docker pull registry.hub.docker.com/library/busybox:1.32 --> WORKS

BUT:

docker pull registry.hub.docker.com/busybox:1.32 --> authentication required




Re: demo-k8s.sh deleteVNF fails #frankfurt #robot

emin.aktas@...
 

Hi Brian, 

I tested deleting the service with "ete-k8s.sh onap deleteVNF Vfmodule_Ete_vFWCLvPKG_452156c1_1". It did not work. This service was the one I created with "ete-k8s.sh onap instantiateVFWCL".



ubuntu@onap-nfs:~/oom/kubernetes/robot$ ./ete-k8s.sh onap deleteVNF Vfmodule_Ete_vFWCLvPKG_452156c1_1
++ export NAMESPACE=onap
++ NAMESPACE=onap
+++ kubectl --namespace onap get pods
+++ sed 's/ .*//'
+++ grep robot
++ POD=dev-robot-547f588cc8-vgc7s
++ TAGS='-i deleteVNF'
+++ dirname ./ete-k8s.sh
++ DIR=.
++ SCRIPTDIR=scripts/etescript
++ ETEHOME=/var/opt/ONAP
++ [[ Vfmodule_Ete_vFWCLvPKG_452156c1_1 == \e\x\e\c\s\c\r\i\p\t ]]
+++ kubectl --namespace onap exec dev-robot-547f588cc8-vgc7s -- bash -c 'ls -1q /share/logs/ | wc -l'
++ export GLOBAL_BUILD_NUMBER=20
++ GLOBAL_BUILD_NUMBER=20
+++ printf %04d 20
++ OUTPUT_FOLDER=0020_ete_deleteVNF
++ DISPLAY_NUM=110
++ VARIABLEFILES='-V /share/config/robot_properties.py'
++ VARIABLES='-v GLOBAL_BUILD_NUMBER:29671'
++ case $2 in
++ kubectl --namespace onap exec dev-robot-547f588cc8-vgc7s -- /var/opt/ONAP/runTags.sh -V /share/config/robot_properties.py -v GLOBAL_BUILD_NUMBER:29671 -d /share/logs/0020_ete_deleteVNF -i deleteVNF --display 110
Starting Xvfb on display :110 with res 1280x1024x24
Executing robot tests at log level TRACE
==============================================================================
Testsuites
==============================================================================
[ WARN ] Error in file '/var/opt/ONAP/robot/resources/test_templates/vnf_orchestration_test_template.robot': Escaping empty cells with '\' before line continuation marker '...' is deprecated. Remove escaping before Robot Framework 3.2.
[ WARN ] Error in file '/var/opt/ONAP/robot/resources/test_templates/vnf_orchestration_test_template.robot': Escaping empty cells with '\' before line continuation marker '...' is deprecated. Remove escaping before Robot Framework 3.2.
Testsuites.Demo :: Executes the VNF Orchestration Test cases including setu...
==============================================================================
Delete Instantiated VNF :: This test assumes all necessary variabl... | FAIL |
TypeError: Expected argument 1 to be a list or list-like, got string instead.
------------------------------------------------------------------------------
Testsuites.Demo :: Executes the VNF Orchestration Test cases inclu... | FAIL |
1 critical test, 0 passed, 1 failed
1 test total, 0 passed, 1 failed
==============================================================================
Testsuites                                                            | FAIL |
1 critical test, 0 passed, 1 failed
1 test total, 0 passed, 1 failed
==============================================================================
Output:  /share/logs/0020_ete_deleteVNF/output.xml
Log:     /share/logs/0020_ete_deleteVNF/log.html
Report:  /share/logs/0020_ete_deleteVNF/report.html
command terminated with exit code 1


Regards,
Emin


[OOM] registry.hub.docker.com seems to need authentication now

Sylvain Desbureaux
 

Hello,
it seems (although some people had this particular issue a long time ago) that "registry.hub.docker.com" now needs authentication, in contrast to "docker.io" :-(.
It wasn't the case at least a couple of weeks ago.

Unfortunately, it's used in several places in OOM, for a lot of releases.

That means that ALL ONAP deployments via OOM (El Alto, Frankfurt, Guilin, Master and the older ones) are broken now.

I'm doing my best to mitigate the issue (I've created a JIRA: https://jira.onap.org/browse/OOM-2636) but of course it doesn't help at the end of the release (and it's on top of the Docker Hub rate limit...)

Regards,
Sylvain
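
A quick way to gauge how many places are affected, assuming a local OOM checkout (the path is whatever your clone uses):

    cd ~/oom
    grep -rn "registry.hub.docker.com" kubernetes/ | wc -l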




Requirements subcommittee meeting on the upcoming Monday - the agenda

Alla Goldner
 

Hi,

 

We will review:

 

  1. Intent-based demo from the Guilin Release
  2. Newly submitted requirements for Honolulu (which are leftovers from the Guilin Release)

 

Best regards, Alla



Re: CDS - demoVLB_CDS deployment failed wit BPMN exceptions

Gülsüm Atıcı
 

Dear All,

During the vLB_CDS demo service assign and activate phase, service instantiation failed with the error below.

"requestId""8571a445-1246-4e40-b5ff-a3dadff09268",
    "requestStatus""ROLLED_BACK",
    "statusMessage""Failed to create self-serve assignment for vf-module with vf-module-id=a36f564a-ef7d-4f15-b9bb-0b21d17eeb7d with error: Failed to get RA assignments: Error from ConfigAssignmentNode",
    "rollbackStatusMessage""Rollback has been completed successfully.",
    "flowStatus""All Rollback flows have completed successfully",
    "progress"100,
    "startTime""2020-11-19T14:18:40.000+0000",
    "endTime""2020-11-19T14:19:18.000+0000",


At the same time, dev-cds-blueprints-processor threw the error logs below.

2020-11-19 14:19:04,504|15efb166-f0b3-4fd1-936a-29be3c3858c1||DefaultDispatcher-worker-3||/api/v1/execution-service/process||INFO|||dev-cds-blueprints-processor-7dc7ccf767-zs54x|||| sdnc dictionary information : (/restconf/config/GENERIC-RESOURCE-API:services/service/30eb3257-7eab-44d7-9be4-a987ef7118d9/service-data/vnfs/vnf/871c97c8-9571-49d4-b45d-acaf6ccbf08a/vnf-data/vnf-topology/vnf-parameters-data/param/vlb_int_pktgen_private_ip_0), ({service-instance-id=service-instance-id, vnf-id=vnf-id}), ({vlb_int_pktgen_private_ip_0=value})
2020-11-19 14:19:04,506|15efb166-f0b3-4fd1-936a-29be3c3858c1||DefaultDispatcher-worker-3||/api/v1/execution-service/process||INFO|||dev-cds-blueprints-processor-7dc7ccf767-zs54x|||| Rest request method(GET), url(/restconf/config/GENERIC-RESOURCE-API:services/service/30eb3257-7eab-44d7-9be4-a987ef7118d9/service-data/vnfs/vnf/871c97c8-9571-49d4-b45d-acaf6ccbf08a/vnf-data/vnf-topology/vnf-parameters-data/param/vlb_int_pktgen_private_ip_0)
2020-11-19 14:19:04,510|15efb166-f0b3-4fd1-936a-29be3c3858c1||DefaultDispatcher-worker-3||/api/v1/execution-service/process||INFO|||dev-cds-blueprints-processor-7dc7ccf767-zs54x|||| Response status(200 - OK)
2020-11-19 14:19:04,510|15efb166-f0b3-4fd1-936a-29be3c3858c1||DefaultDispatcher-worker-3||/api/v1/execution-service/process||INFO|||dev-cds-blueprints-processor-7dc7ccf767-zs54x|||| Response processing type (string)
2020-11-19 14:19:04,510|15efb166-f0b3-4fd1-936a-29be3c3858c1||DefaultDispatcher-worker-3||/api/v1/execution-service/process||INFO|||dev-cds-blueprints-processor-7dc7ccf767-zs54x|||| populating value for output mapping ({vlb_int_pktgen_private_ip_0=value}), from json ("192.168.20.32")
2020-11-19 14:19:04,510|15efb166-f0b3-4fd1-936a-29be3c3858c1||DefaultDispatcher-worker-3||/api/v1/execution-service/process||INFO|||dev-cds-blueprints-processor-7dc7ccf767-zs54x|||| For template key (vlb_int_pktgen_private_ip_0) trying to get value from responseNode ("192.168.20.32")
2020-11-19 14:19:04,510|15efb166-f0b3-4fd1-936a-29be3c3858c1||DefaultDispatcher-worker-3||/api/v1/execution-service/process||INFO|||dev-cds-blueprints-processor-7dc7ccf767-zs54x|||| Setting Resource Value ("192.168.20.32") for Resource Name (vlb_int_pktgen_private_ip_0), definition(vlb_int_pktgen_private_ip_0) of type (string)
2020-11-19 14:19:04,510|15efb166-f0b3-4fd1-936a-29be3c3858c1||DefaultDispatcher-worker-3||/api/v1/execution-service/process||INFO|||dev-cds-blueprints-processor-7dc7ccf767-zs54x|||| DatabaseResource (processor-db) dictionary information: Query:(select VFC_MODEL.vm_type as vm_type from VFC_MODEL where customization_uuid=:vfccustomizationuuid), input-key-mapping:({vfccustomizationuuid=vfccustomizationuuid}), output-key-mapping:({vm-type=vm_type})
2020-11-19 14:19:04,513|15efb166-f0b3-4fd1-936a-29be3c3858c1||DefaultDispatcher-worker-3||/api/v1/execution-service/process||INFO|||dev-cds-blueprints-processor-7dc7ccf767-zs54x|||| Parameter information : ({vfccustomizationuuid=dc26617b-7920-418d-ad9a-615cc958c1b4})
2020-11-19 14:19:04,515|15efb166-f0b3-4fd1-936a-29be3c3858c1||DefaultDispatcher-worker-3||/api/v1/execution-service/process||INFO|||dev-cds-blueprints-processor-7dc7ccf767-zs54x|||| Response processing type (string)
2020-11-19 14:19:04,515|15efb166-f0b3-4fd1-936a-29be3c3858c1||DefaultDispatcher-worker-3||/api/v1/execution-service/process||INFO|||dev-cds-blueprints-processor-7dc7ccf767-zs54x|||| For template key (vm-type) trying to get value from responseNode ([{"vm_type":"vpg"}])
2020-11-19 14:19:04,515|15efb166-f0b3-4fd1-936a-29be3c3858c1||DefaultDispatcher-worker-3||/api/v1/execution-service/process||INFO|||dev-cds-blueprints-processor-7dc7ccf767-zs54x|||| Setting Resource Value ("vpg") for Resource Name (vm-type), definition(vm-type) of type (string)
2020-11-19 14:19:04,516|15efb166-f0b3-4fd1-936a-29be3c3858c1||DefaultDispatcher-worker-3||/api/v1/execution-service/process||ERROR|||dev-cds-blueprints-processor-7dc7ccf767-zs54x|||| failed in resource-assignment : Resource Assignment is failed with com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot deserialize instance of `java.lang.Object` out of NOT_AVAILABLE token
 at [Source: UNKNOWN; line: -1, column: -1].message
org.onap.ccsdk.cds.controllerblueprints.core.BluePrintProcessorException: Resource Assignment is failed with com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot deserialize instance of `java.lang.Object` out of NOT_AVAILABLE token
 at [Source: UNKNOWN; line: -1, column: -1].message
at org.onap.ccsdk.cds.blueprintsprocessor.functions.resource.resolution.utils.ResourceAssignmentUtils$Companion.generateResourceDataForAssignments(ResourceAssignmentUtils.kt:208)
at org.onap.ccsdk.cds.blueprintsprocessor.functions.resource.resolution.ResourceResolutionServiceImpl.resolveResources$suspendImpl(ResourceResolutionService.kt:203)
at org.onap.ccsdk.cds.blueprintsprocessor.functions.resource.resolution.ResourceResolutionServiceImpl$resolveResources$3.invokeSuspend(ResourceResolutionService.kt)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.internal.ScopeCoroutine.afterResume(Scopes.kt:32)
at kotlinx.coroutines.AbstractCoroutine.resumeWith(AbstractCoroutine.kt:113)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:46)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:56)
at kotlinx.coroutines.EventLoop.processUnconfinedEvent(EventLoop.common.kt:68)
at kotlinx.coroutines.DispatchedContinuationKt.resumeCancellableWith(DispatchedContinuation.kt:320)
at kotlinx.coroutines.DispatchedCoroutine.afterResume(Builders.common.kt:254)
at kotlinx.coroutines.AbstractCoroutine.resumeWith(AbstractCoroutine.kt:113)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:46)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:56)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:561)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:727)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:667)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:655)
Caused by: com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot deserialize instance of `java.lang.Object` out of NOT_AVAILABLE token
 at [Source: UNKNOWN; line: -1, column: -1]
at com.fasterxml.jackson.databind.exc.MismatchedInputException.from(MismatchedInputException.java:59)
at com.fasterxml.jackson.databind.DeserializationContext.reportInputMismatch(DeserializationContext.java:1442)
at com.fasterxml.jackson.databind.DeserializationContext.handleUnexpectedToken(DeserializationContext.java:1216)
at com.fasterxml.jackson.databind.DeserializationContext.handleUnexpectedToken(DeserializationContext.java:1126)
at com.fasterxml.jackson.databind.deser.std.UntypedObjectDeserializer$Vanilla.deserialize(UntypedObjectDeserializer.java:702)
at com.fasterxml.jackson.databind.deser.std.UntypedObjectDeserializer$Vanilla.mapObject(UntypedObjectDeserializer.java:895)
at com.fasterxml.jackson.databind.deser.std.UntypedObjectDeserializer$Vanilla.deserialize(UntypedObjectDeserializer.java:654)
at com.fasterxml.jackson.databind.ObjectMapper._readValue(ObjectMapper.java:4173)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2467)
at com.fasterxml.jackson.databind.ObjectMapper.treeToValue(ObjectMapper.java:2920)
at org.onap.ccsdk.cds.blueprintsprocessor.functions.resource.resolution.utils.ResourceAssignmentUtils$Companion.generateResourceDataForAssignments(ResourceAssignmentUtils.kt:202)
... 17 common frames omitted
2020-11-19 14:19:04,516|15efb166-f0b3-4fd1-936a-29be3c3858c1||DefaultDispatcher-worker-3||/api/v1/execution-service/process||INFO|||dev-cds-blueprints-processor-7dc7ccf767-zs54x|||| Preparing Response...
2020-11-19 14:19:04,516|15efb166-f0b3-4fd1-936a-29be3c3858c1||DefaultDispatcher-worker-3||/api/v1/execution-service/process||INFO|||dev-cds-blueprints-processor-7dc7ccf767-zs54x|||| resolveNodeTemplateInterfaceOperationOutputs for node template (resource-assignment),interface name (ResourceResolutionComponent), operationName(process)
2020-11-19 14:19:04,516|15efb166-f0b3-4fd1-936a-29be3c3858c1||DefaultDispatcher-worker-3||/api/v1/execution-service/process||INFO|||dev-cds-blueprints-processor-7dc7ccf767-zs54x|||| resolveWorkflowOutputs for workflow(resource-assignment)


I have checked the vm-type parameter in the sdnctl processor DB.
Why are we getting this error?
Could you please help to fix it?

 
MariaDB [sdnctl]> select  * from  RESOURCE_DICTIONARY where tags like "vpg";
Empty set (0.00 sec)
 
MariaDB [sdnctl]> select  * from  RESOURCE_DICTIONARY where tags like "%vm_type%";
+---------+---------------------+-----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+--------------+---------------------------+---------+----------------------------------------+
| name    | creation_date       | data_type | definition                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              | description | entry_schema | resource_dictionary_group | tags    | updated_by                             |
+---------+---------------------+-----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+--------------+---------------------------+---------+----------------------------------------+
| vm-type | 2020-11-19 14:07:16 | string    | {"tags":"vm-type","name":"vm-type","property":{"description":"vm-type","type":"string"},"group":"default","updated-by":"MALAKOV, YURIY <yuriy.malakov@...>","sources":{"input":{"type":"source-input"},"default":{"type":"source-default","properties":{}},"processor-db":{"type":"source-db","properties":{"type":"SQL","query":"select VFC_MODEL.vm_type as vm_type from VFC_MODEL where customization_uuid=:vfccustomizationuuid","output-key-mapping":{"vm-type":"vm_type"},"input-key-mapping":{"vfccustomizationuuid":"vfccustomizationuuid"},"key-dependencies":["vfccustomizationuuid"]}}}} | vm-type     | NULL         | default                   | vm-type | MALAKOV, YURIY <yuriy.malakov@...> |
+---------+---------------------+-----------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+--------------+---------------------------+---------+----------------------------------------+
1 row in set (0.00 sec)
 

Thanks,
Gulsum
 


Re: [ONAP] CD chains and CSIT tests strongly impacted by dockerhub policy change

Jessica Wagantall
 

Hi Morgan, 

I brought this concern to Steve Winslow. Let me quote his reply so that there is no confusion:

" If we are pulling the build image from our nexus solely for internal runs and not making the third party images available to external folks, then I'm not concerned about that."
"The issue we had talked about with the ONAP teams earlier this year was that we should not be redistributing binary container images / layers that consist of base layers outside the ONAP project. It's okay to point to pulling those base layers from DockerHub or third party sources but we should not be redistributing them ourselves."

Please let me know if this clarifies the concern

Thanks!
Jess


On Wed, Nov 18, 2020 at 11:21 PM <morgan.richomme@...> wrote:
Hi,

recently Docker has enabled download rate limits for pull requests on Docker Hub.
Several upstream components used in ONAP are hosted in dockerhub.

As a consequence, lots of CD chains or CSIT tests are now failing due to the error:
"Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"

Some CSIT tests committed patches to use the Nexus3 as a proxy/cache for these upstream repositories.

It has obviously major impacts on integration activities.

@Jessica, do you confirm that this is possible and that there are no legal issues? As far as I remember, it was decided not to host third-party Docker images for legal reasons; caching them is not very different from hosting, no?

As far as I can see, Mirantis, which recently acquired the Docker Hub activities, is a member of LF Networking. Is there any way to discuss with their representatives whether they could exclude community activities from this rate-limitation mechanism?


Regards

/Morgan






[ONAP] [Guilin] [Integration] Azure staging lab reinstalled

Morgan Richomme
 

Hi,

the Azure staging lab reinstallation is now complete.
We got an issue with onap-platform during installation: it prevented SDNC from starting properly, as shown by the tests https://logs.onap.org/onap-integration/daily/onap_oom_staging_azure_1/11-19-2020_14-11/.

After a restart of onap-platform, the SDNC init finished successfully.
The healthchecks (HC) are now 100% OK.

I re-pushed all the keys already referenced on this lab.
The connection procedure remains unchanged.
The installed version is an OOM Guilin of the day (so it includes SO 1.7.10); it is the first Guilin candidate community lab.
See all the docker versions at https://logs.onap.org/onap-integration/daily/onap_oom_staging_azure_1/11-19-2020_14-11/infrastructure-healthcheck/k8s/kubernetes-status/versions.html

If you want to connect - especially the use case owners - please send me your public ssh key.

Enjoy testing

/Morgan



Re: DMAAP-message-router-mirrormaker pods is not running #dmaap

Dominic Lunanuova
 

While you are all discussing this, just want to point out a related Jira we created a long time ago: https://jira.onap.org/browse/OOM-1579

At that time, the DMaaP design anticipated there might be a central “hub” and one or more edge deployments, because the AT&T ECOMP architecture has long had central/edge deployments (e.g. use case: an edge data collector with a central analysis function).

The blocking issue in ONAP has been reaching consensus on the technique and naming conventions for inter-K8s services and, as this Jira mentions, on a deployment indicator for where you are deploying (and what the central deployment is named).

 

The vision, as it relates to this thread, might look like:

  • A “central” k8s deployment with the full set of DMaaP components, AAF, and the CDS central component
  • An “edge” k8s deployment with a streamlined set of DMaaP components (MR, MM, DR Node) and the CDS edge component
  • Provisioning an MR topic in “central” using DMaaP Bus Controller with attributes which indicate the location of publisher (e.g. central) and subscriber (e.g. edge) and a flag for the type of message replication (e.g. “central-to-edge”), which results in the proper provisioning for the message path:
    • Publisher Identity is authorized in AAF to publish on the topic
    • Subscriber Identity is authorized in AAF to subscribe to the topic
    • Edge MM is provisioned to replicate the topic from central MR to edge MR
  • End result: publisher produces message to central MR, edge MM replicates message to edge MR, edge subscriber consumes message

 

Note: the direction of the message replication is important – you can’t have bidirectional replication on the same topic – but either direction is supported.

 

As Mandar, Sunil and I are getting pulled from ONAP soon, I wish we had had the Pruss CDS use case 6 months ago!

While we still have some limited cycles to consult, might somebody want to transition into DMaaP to complete the work using the newer MM?

 

-Dom

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Michael Pruss
Sent: Tuesday, November 17, 2020 3:15 PM
To: SAWANT, MANDAR <ms5838@...>; onap-discuss@...
Subject: Re: [onap-discuss] DMAAP-message-router-mirrormaker pods is not running #dmaap

 

Hi Mandar,

I am working on a hybrid ONAP deployment using the Frankfurt release of ONAP. The idea is to have several components on a cloud platform, including a central DMaaP hub, while components such as CDS can be deployed on an on-prem cluster with its own DMaaP deployment. MirrorMaker is required to replicate the DMaaP topics/messages between the main DMaaP hub and the on-prem DMaaP deployment.

Please let me know if you have any alternatives that do not require mirror maker. Per my understanding, to achieve the above use case mirror maker is required.

Thanks,
Michael


Re: demo-k8s.sh deleteVNF fails #frankfurt #robot

emin.aktas@...
 

Perfect, it's good to know that we won't have any problems. Sometimes the environment or some steps can fail (it happens quite often) even though it is the same environment.


The same idea came to my mind. I will test that theory if I manage to locate the correct information. I will let you know if it succeeds.

Emin


Re: demo-k8s.sh deleteVNF fails #frankfurt #robot

Brian Freeman
 

I haven't tried this, but in theory you could make a copy of the cleanup parameters file in /share, put in the parameters for the VNF that you want to clean up, and then call ete-k8s.sh deleteVNF with that new file?
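
A rough sketch of that idea (untested; the pod name is the one from Emin's transcript and the new file name is a placeholder; the file layout is the Save For Delete output quoted further down):

    # copy an existing save-for-delete variable file under the robot pod's /share
    kubectl -n onap exec dev-robot-547f588cc8-vgc7s -- \
        cp /share/Vfmodule_Ete_vFWCLvPKG_452156c1_1.py /share/MyCleanup.py
    # edit /share/MyCleanup.py with the target VNF's parameters, then:
    ./ete-k8s.sh onap deleteVNF MyCleanup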

 

Brian

 

 



Re: demo-k8s.sh deleteVNF fails #frankfurt #robot

Brian Freeman
 

They won’t interfere with your environment.

 

On a small deployment, the GUIs can slow down if you have a large number of VNFs in the system, simply because repeated model onboarding for scale testing (we onboard a new model every time instead of re-using the perfectly fine model we loaded on the last run :) ) can make some of the queries retrieve a fairly large amount of data.

 

If you look into the robot scripts you can see the API calls (in fact, log.html for the run you did will show you the HTTPS REST API calls to tear down a VNF).

 

AAI can be cleaned up with a delete through their API (there are POSTMAN examples of how to set the headers); we generally only worry about the extra models we load.

SDC has a backend API that robot uses to delete models (or at least mark them for delete so they are not shown in the GUI); this one is trickier, but you should be able to see it in the robot log.html.

SO doesn't have any persistent data about VNFs, so there is nothing to worry about.

VID doesn't have persistent data about VNFs; it queries AAI for the data displayed on the screen.

SDNC has some data as well, but again it doesn't interfere. You can delete that via the REST API to SDNC too, but since there is no GUI based on the SDNC data it really isn't needed.

 

Brian

 

 

 

 



Re: demo-k8s.sh deleteVNF fails #frankfurt #robot

emin.aktas@...
 

As you suggested, I ran "ete-k8s.sh onap instantiateVFWCL". I can see the files inside the /share folder, and it completed successfully.





The failed demo's information still stays inside SO, AAI, etc. I understand I cannot delete this one via the demo-k8s script. I have been searching and reading the documentation on deleting the old ones, so I can manage my environment without any errors that might be caused by them, but I can't find anything that works.

Do you have any documentation that you can share with me? 

This is the one, for example, that I am trying to delete.


Thanks,
Emin


Re: demo-k8s.sh deleteVNF fails #frankfurt #robot

Brian Freeman
 

Yeah, that is just wrong; we haven't used /opt/eteshare in years.

 

As you can see, that is simply a check to see if the file exists outside of the container before passing the reference to inside the container.

 

Modify demo-k8s.sh to simply add the “-V /share/${VARFILE}” to the VARIABLES line and try that:

 

VARIABLES="$VARIABLES -V /share/${VARFILE}"

 

Brian

 

 



Re: demo-k8s.sh deleteVNF fails #frankfurt #robot

emin.aktas@...
 

I checked the directory you mentioned, but it only contains config and log files. The reason I only checked that directory is that this deleteVNF function looks in "/opt/eteshare/".



I've never tried "ete-k8s.sh onap instantiateVFWCL". I will test it as you suggested.

Thanks,
Emin


Re: demo-k8s.sh deleteVNF fails #frankfurt #robot

Brian Freeman
 

The most tested robot script is really “ete-k8s.sh onap instantiateVFWCL”.

 

The vFW (non-closed-loop) is almost deprecated.

 

demo-k8s.sh was originally used for these setups but is now only used for some setup activities like “init”. I would focus on ete-k8s.sh if possible.

 

I think demo-k8s.sh uses the “Save For Delete” keyword, where FILE_CACHE is /share, not /opt/eteshare, so you might want to look for /share inside the robot container and see if the instantiate successfully created the save-for-delete file for the stack (it’s a python variable file).
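
For example (the pod name here is the one from Emin's transcript; use your own):

    kubectl -n onap exec dev-robot-547f588cc8-vgc7s -- ls /share/
    # a successful instantiate should have left a <stack_name>.py file
    # (and lastVNF4HEATBRIGE.py) in this directory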

 

Brian

 

 

Save For Delete
    [Documentation]   Create a variable file to be loaded for save for delete
    [Arguments]    ${tenant_id}    ${tenant_name}    ${vvg_server_id}    ${customer_name}    ${service_instance_id}    ${stack_name}    ${catalog_service_id}    ${catalog_resource_ids}
    ${dict}=    Create Dictionary
    Set To Dictionary   ${dict}   TENANT_NAME=${tenant_name}
    Set To Dictionary   ${dict}   TENANT_ID=${tenant_id}
    Set To Dictionary   ${dict}   CUSTOMER_NAME=${customer_name}
    Set To Dictionary   ${dict}   STACK_NAME=${stack_name}
    Set To Dictionary   ${dict}   VVG_SERVER_ID=${vvg_server_id}
    Set To Dictionary   ${dict}   SERVICE_INSTANCE_ID=${service_instance_id}
    Set To Dictionary   ${dict}   CATALOG_SERVICE_ID=${catalog_service_id}
    ${vars}=    Catenate
    ${keys}=   Get Dictionary Keys    ${dict}
    :FOR   ${key}   IN   @{keys}
    \    ${value}=   Get From Dictionary   ${dict}   ${key}
    \    ${vars}=   Catenate   ${vars}${key} = "${value}"\n
    ${comma}=   Catenate
    ${vars}=    Catenate   ${vars}CATALOG_RESOURCE_IDS = [
    :FOR   ${id}   IN    @{catalog_resource_ids}
    \    ${vars}=    Catenate  ${vars}${comma} "${id}"
    \    ${comma}=   Catenate   ,
    ${vars}=    Catenate  ${vars}]\n
    OperatingSystem.Create File   ${FILE_CACHE}/${stack_name}.py   ${vars}
    OperatingSystem.Create File   ${FILE_CACHE}/lastVNF4HEATBRIGE.py   ${vars}

 

 



demo-k8s.sh deleteVNF fails #frankfurt #robot

emin.aktas@...
 

Hi,

I managed to deploy the vFW demo by using the demo-k8s script. Unfortunately, it failed at the login stage. After that problem I wanted to delete the VNFs, but it gives the results below.



I checked the path "/opt/eteshare/"; it does not exist in the container. I am looking for a way to delete the stack in OpenStack via the API but haven't found anything.


The error happened while the demo was running. I fixed the problem but could not test it yet.





Regards,
Emin
