vFW Closed Loop - Operational Policy issues in Beijing #policy #usecaseui #kubernetes #install #drools


Cristina Precup
 

Dear community,

I am trying to upload the Operational Policy for the Closed Loop part of the virtual Firewall use case (step 2 of Closed Loop in https://wiki.onap.org/display/DW/vFWCL+instantiation%2C+testing%2C+and+debuging). However, no policy appears after performing the upload step with update-vfw-op-policy.sh:

$ sh update-vfw-op-policy.sh localhost 30220 30221 3a35d839-82cc-442a-bcff-92d5d97d6a1f


Removing the vFW Policy from PDP..


* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 30220 (#0)
Handling connection for 30220
DELETE /pdp/api/deletePolicy HTTP/1.1
Host: localhost:30220
User-Agent: curl/7.54.0
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 128
* upload completely sent off: 128 out of 128 bytes
* Connection #0 to host localhost left intact
P

Updating vFW Operational Policy ..

* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 30220 (#0)
PUT /pdp/api/updatePolicy HTTP/1.1
Host: localhost:30220
User-Agent: curl/7.54.0
Handling connection for 30220
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 1328
Expect: 100-continue
* Connection #0 to host localhost left intact
P

Pushing the vFW Policy ..


* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 30220 (#0)
Handling connection for 30220
PUT /pdp/api/pushPolicy HTTP/1.1
Host: localhost:30220
User-Agent: curl/7.54.0
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 99
* upload completely sent off: 99 out of 99 bytes
* Connection #0 to host localhost left intact
P

Restarting PDP-D ..


[drools-pdp-controllers]
L []: Stopping Policy Management... Policy Management (pid=3306) is stopping... Policy Management has stopped.
[drools-pdp-controllers]
L []: Policy Management (pid 3711) is running


PDP-D amsterdam maven coordinates ..


* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 30221 (#0)
* Server auth using Basic with user '@1b3rt'
GET /policy/pdp/engine/controllers/amsterdam/drools HTTP/1.1
Host: localhost:30221
Authorization: Basic QDFiM3J0OjMxbnN0MzFu
User-Agent: curl/7.54.0
Accept: */*
Handling connection for 30221
< HTTP/1.1 200 OK
< Date: Mon, 10 Sep 2018 13:31:31 GMT
< Content-Type: application/json
< Content-Length: 231
< Server: Jetty(9.3.24.v20180605)
<
{ [231 bytes data]
* Connection #0 to host localhost left intact
{
"alive": false,
"artifactId": "NO-ARTIFACT-ID",
"brained": false,
"canonicalSessionNames": [],
"container": null,
"groupId": "NO-GROUP-ID",
"locked": false,
"recentSinkEvents": [],
"recentSourceEvents": [],
"sessionNames": [],
"version": "NO-VERSION"
}


PDP-D control loop updated ..


* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 30221 (#0)
* Server auth using Basic with user '@1b3rt'
Handling connection for 30221
GET /policy/pdp/engine/controllers/amsterdam/drools/facts/closedloop-amsterdam/org.onap.policy.controlloop.Params HTTP/1.1
Host: localhost:30221
Authorization: Basic QDFiM3J0OjMxbnN0MzFu
User-Agent: curl/7.54.0
Accept: */*
< HTTP/1.1 200 OK
< Date: Mon, 10 Sep 2018 13:31:32 GMT
< Content-Type: application/json
< Content-Length: 2
< Server: Jetty(9.3.24.v20180605)
<
{ [2 bytes data]
* Connection #0 to host localhost left intact
[]

Furthermore, the Policy portal does not show any policies.

Here is the output of the drools healthcheck:

CURL GET http://10.42.1.8:6969/healthcheck (see postman)
{
"healthy": false,
"details": [
{
"name": "PDP-D",
"url": "self",
"healthy": true,
"code": 200,
"message": "alive"
},
{
"name": "PAP",
"url": "http://pap:9091/pap/test",
"healthy": false,
"code": 0,
"message": null
},
{
"name": "PDP",
"url": "http://pdp:8081/pdp/test",
"healthy": false,
"code": 0,
"message": null
}
]
}
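
To double-check the two failing dependencies directly, the same URLs can be queried from inside the drools pod (a sketch; the pod name is a placeholder and it assumes curl is available in the image):

$ kubectl exec -it <drools-pod> -n onap -- curl -v http://pap:9091/pap/test
$ kubectl exec -it <drools-pod> -n onap -- curl -v http://pdp:8081/pdp/test

These are the same URLs the healthcheck reports as unhealthy, so a connection failure here confirms a real dependency problem rather than a healthcheck artifact.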

I understand that the steps have changed in Beijing, and have therefore switched to the "Before Installing Policies" and "Install Policies" steps from this wiki: https://wiki.onap.org/display/DW/Policy+on+OOM.

In my case, it seems that the service name resolution for brmsgw does not work for nexus, drools, and message-router. The situation is identical to the one reported here: https://lists.onap.org/g/onap-discuss/message/12074?p=,,,20,0,0,0::relevance,,%23policy,20,2,0,24974444. Are there any instructions on how to deal with this issue? What is the password for the policy user with root privileges?
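
A quick way to double-check the name resolution and service wiring from inside the cluster (a sketch; the brmsgw pod name is a placeholder, and it assumes nslookup is available in the image):

$ kubectl get svc -n onap | grep -E 'brmsgw|nexus|drools|message-router'
$ kubectl exec -it <brmsgw-pod> -n onap -- nslookup message-router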

Looking forward to your reply!


Best regards,
--
Cristina Precup


jkzcristiano
 

Dear Cristina,

To update a policy in Beijing, you can check this topic.

Regarding drools, nexus and message-router, I see the same behavior. As far as I know, this is OK, since these services are exposed as NodePort types to enable communication from external clients, not between pods.
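
To confirm this, the service types can be checked directly (assuming the standard onap namespace and service names):

$ kubectl get svc nexus drools message-router -n onap

The TYPE column should show NodePort for these services.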

Kind Regards


Cristina Precup
 

Hello,

Thank you for the reference. I did do the onboarding step mentioned there, making sure to replace the field with the correct PG model-invariant-id in the push-policies.sh script. However, I don't think this script actually does the onboarding in my case:

kubectl exec -it scapula-pap-5bf5f48d7b-v7fld -c pap -n onap -- bash -c "export PRELOAD_POLICIES=true; /home/policy/push-policies.sh"
Upload BRMS Param Template
--2018-09-11 11:32:53-- https://git.onap.org/policy/drools-applications/plain/controlloop/templates/archetype-cl-amsterdam/src/main/resources/archetype-resources/src/main/resources/__closedLoopControlName__.drl
Resolving git.onap.org (git.onap.org)... 198.145.29.92
Connecting to git.onap.org (git.onap.org)|198.145.29.92|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 58366 (57K) [text/plain]
Saving to: 'cl-amsterdam-template.drl'

100%[==============================================================================>] 58,366 193KB/s in 0.3s

2018-09-11 11:32:54 (193 KB/s) - 'cl-amsterdam-template.drl' saved [58366/58366]

* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
POST /pdp/api/policyEngineImport HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 58757
Expect: 100-continue
Content-Type: multipart/form-data; boundary=------------------------110622b19dc01d62
* Connection #0 to host pdp left intact
PPRELOAD_POLICIES is true
Create BRMSParam Operational Policies
Create BRMSParamvFirewall Policy
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/createPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/html
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 1309
Expect: 100-continue
* Connection #0 to host pdp left intact
PCreate BRMSParamvDNS Policy
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/createPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/html
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 1148
Expect: 100-continue
* Connection #0 to host pdp left intact
PCreate BRMSParamVOLTE Policy
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/createPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/html
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 1140
Expect: 100-continue
* Connection #0 to host pdp left intact
PCreate BRMSParamvCPE Policy
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/createPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/html
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 1139
Expect: 100-continue
* Connection #0 to host pdp left intact
PCreate MicroService Config Policies
Create MicroServicevFirewall Policy
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/createPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 1689
Expect: 100-continue
* Connection #0 to host pdp left intact
PCreate MicroServicevDNS Policy
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/createPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 1306
Expect: 100-continue
* Connection #0 to host pdp left intact
PCreate MicroServicevCPE Policy
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/createPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 1640
Expect: 100-continue
* Connection #0 to host pdp left intact
PCreating Decision Guard policy
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/createPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 463
* upload completely sent off: 463 out of 463 bytes
* Connection #0 to host pdp left intact
PPush Decision policy
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/pushPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 97
* upload completely sent off: 97 out of 97 bytes
* Connection #0 to host pdp left intact
PPushing BRMSParam Operational policies
pushPolicy : PUT : com.BRMSParamvFirewall
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/pushPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 99
* upload completely sent off: 99 out of 99 bytes
* Connection #0 to host pdp left intact
PpushPolicy : PUT : com.BRMSParamvDNS
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/pushPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 94
* upload completely sent off: 94 out of 94 bytes
* Connection #0 to host pdp left intact
PpushPolicy : PUT : com.BRMSParamVOLTE
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/pushPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 95
* upload completely sent off: 95 out of 95 bytes
* Connection #0 to host pdp left intact
PpushPolicy : PUT : com.BRMSParamvCPE
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/pushPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 94
* upload completely sent off: 94 out of 94 bytes
* Connection #0 to host pdp left intact
PPushing MicroService Config policies
pushPolicy : PUT : com.MicroServicevFirewall
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/pushPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 104
* upload completely sent off: 104 out of 104 bytes
* Connection #0 to host pdp left intact
PpushPolicy : PUT : com.MicroServicevDNS
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/pushPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 99
* upload completely sent off: 99 out of 99 bytes
* Connection #0 to host pdp left intact
PpushPolicy : PUT : com.MicroServicevCPE
* Hostname was NOT found in DNS cache
* Trying 10.42.10.50...
* Connected to pdp (10.42.10.50) port 8081 (#0)
PUT /pdp/api/pushPolicy HTTP/1.1
User-Agent: curl/7.35.0
Host: pdp:8081
Content-Type: application/json
Accept: text/plain
ClientAuth: cHl0aG9uOnRlc3Q=
Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==
Environment: TEST
Content-Length: 99
* upload completely sent off: 99 out of 99 bytes
* Connection #0 to host pdp left intact

Checking further from the PAP pod whether any policies are configured gives me nothing:

policy@scapula-pap-5bf5f48d7b-v7fld:/tmp/policy-install$ curl --silent -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{"policyName": ".*vFirewall.*"}' http://pdp:8081/pdp/api/getConfig

policy@scapula-pap-5bf5f48d7b-v7fld:/tmp/policy-install$ curl --silent -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{"policyName": "*"}' http://pdp:8081/pdp/api/getConfig


Best regards,
--
Cristina Precup


Jorge Hernandez
 

Hello Cristina,

A bug that you may be hitting was recently found in the latest Beijing version (POLICY-1097), and a fix has been merged.

Please also take a look at https://wiki.onap.org/display/DW/Policy+on+OOM to check the state of things after your installation.

If you are running the vFW use case, note that you could avoid using the update-vfw-op-policy.sh script. Instead, before invoking push-policies.sh, you could edit that file directly: in the encoded vFirewall operational policy piece, modify the resourceID to match the one you are using in your lab (it is the input parameter to update-vfw-op-policy.sh). That is in essence what update-vfw-op-policy.sh does.

One caveat to this approach is that the kubernetes install mounts push-policies.sh on a read-only file system, so within the container you would copy push-policies.sh to a directory with write permissions, make your changes, and invoke the script, as suggested in the wiki page above (a sketch follows). Good luck!
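
The workaround could look roughly like this (a sketch; the pod name is taken from this thread, substitute your own):

$ kubectl exec -it scapula-pap-5bf5f48d7b-v7fld -c pap -n onap -- bash
# inside the container:
$ cp /home/policy/push-policies.sh /tmp/policy-install/
$ cd /tmp/policy-install
# edit the encoded vFirewall operational policy piece: set resourceID to your VNF's model-invariant-id
$ vi push-policies.sh
$ export PRELOAD_POLICIES=true
$ ./push-policies.sh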

Jorge



Cristina Precup
 

Hello Jorge,

Thank you for pointing me in the right direction. I did go through the wiki page you mentioned. Here is a short overview:

- Healthcheck: PDP and PAP are unreachable
- Policy healthcheck fails
- There is no default group in the PDP Tab of the Policy UI
- brmsgw cannot ping nexus, drools and message-router

I understand that the POLICY-1097 fix has been applied in the Casablanca release. Would you suggest taking the changes as a patch into Beijing? What would be the recommended approach here?


Best regards,
--
Cristina Precup


Jorge Hernandez
 

I see... I think the lab installation is not in a good state. If you cannot ping the "message-router" service, the problem may go beyond the policy component and be an indication of bigger issues. Check that you can ping it from locations outside the brmsgw container, to see if it is a general problem, and verify that the message-router service shows up with "kubectl get services ..".

With regards to the POLICY-1097 fix, it only affects the oom beijing branch, so it was only submitted there. The master branch should be OK for Casablanca in that regard. You could patch your oom/kubernetes beijing install or pull the latest changes from git (oom beijing branch), then, as usual, do the "make all" to make sure your helm charts are updated.

From a policy standpoint, I suggest starting with clean data and making sure first that every component can talk to the others, including message-router, before running push-policies.sh. The /dockerdata-nfs/<release>/mariadb and /dockerdata-nfs/<release>/nexus directories (PVs) contain policy-specific data, and I think it is safe to remove them before doing a helm upgrade/install to pick up the latest changes mentioned in the previous paragraph (a rough sequence is sketched below).
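
Roughly, that sequence could look like this (a sketch; <release> is a placeholder for your helm release name, and the last command should match however you normally install/upgrade ONAP):

$ cd oom/kubernetes
$ git checkout beijing && git pull
$ make all
# wipe policy-specific persistent data so the components start clean
$ sudo rm -rf /dockerdata-nfs/<release>/mariadb /dockerdata-nfs/<release>/nexus
$ helm upgrade -i <release> local/onap --namespace onap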

Hope it helps.
Jorge



Cristina Precup
 

Dear Jorge,

This really did fix my problem with the Policy upload. Thank you for that. However, it seems that after the update, the vFW sink VMs get created on OpenStack, they have no errors in the logs, and the networks get created, but then all of them get deleted from OpenStack. Any idea what it could be? It looks to me like the equivalent of an automatic deletion of the VF sink module.
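
One way to see why the resources were deleted is to inspect the Heat stack events (a sketch; the stack name is a placeholder):

$ openstack stack event list <stack-name>
$ openstack stack show <stack-name> -c stack_status -c stack_status_reason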


Best regards,
--
Cristina Precup


jkzcristiano
 

Cristina,

I am wondering if you are now able to ping from brmsgw to nexus, drools or message-router.

Regarding your last message, I don't know what happened, but I had a similar issue before (VMs suddenly disappear from OpenStack). In my case, the problem occurs only if I use the same name for the VM (generic-vnf-name=vfw_name_0) and the stack (vnf-name).
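
In other words, keep the two names clearly distinct, for example (hypothetical values):

vnf-name=vfwcl-demo-stack
generic-vnf-name=vfw_name_0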

KR


Cristina Precup
 

Hello Jorge,

This time, the PDP healthcheck was successful. However, regarding the ping from brmsgw: no, it still cannot reach drools, nexus, or message-router. Name resolution actually succeeds, but the pings get no replies; those three names resolve to Kubernetes cluster (service) IPs, whereas pdp and policydb resolve to pod IPs:

$ kubectl exec -it scapula-brmsgw-76cc87c68b-fjdfk -n onap -- bash -c "ping drools"
PING drools.onap.svc.cluster.local (10.43.116.134) 56(84) bytes of data.
^C
--- drools.onap.svc.cluster.local ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3016ms

$ kubectl exec -it scapula-brmsgw-76cc87c68b-fjdfk -n onap -- bash -c "ping nexus"
PING nexus.onap.svc.cluster.local (10.43.77.23) 56(84) bytes of data.
^C
--- nexus.onap.svc.cluster.local ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1007ms

$ kubectl exec -it scapula-brmsgw-76cc87c68b-fjdfk -n onap -- bash -c "ping message-router"
PING message-router.onap.svc.cluster.local (10.43.130.167) 56(84) bytes of data.
^C
--- message-router.onap.svc.cluster.local ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3022ms

$ kubectl exec -it scapula-brmsgw-76cc87c68b-fjdfk -n onap -- bash -c "ping pdp"
PING pdp.onap.svc.cluster.local (10.42.8.26) 56(84) bytes of data.
64 bytes from scapula-pdp-0.pdp.onap.svc.cluster.local (10.42.8.26): icmp_seq=1 ttl=62 time=0.620 ms
64 bytes from scapula-pdp-0.pdp.onap.svc.cluster.local (10.42.8.26): icmp_seq=2 ttl=62 time=0.652 ms
^C
--- pdp.onap.svc.cluster.local ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.620/0.636/0.652/0.016 ms
$ kubectl exec -it scapula-brmsgw-76cc87c68b-fjdfk -n onap -- bash -c "ping policydb"
PING policydb.onap.svc.cluster.local (10.42.8.12) 56(84) bytes of data.
64 bytes from wsalpha-worker-1.x.com (10.42.8.12): icmp_seq=1 ttl=62 time=1.76 ms
64 bytes from wsalpha-worker-1.x.com (10.42.8.12): icmp_seq=2 ttl=62 time=0.626 ms
64 bytes from wsalpha-worker-1.x.com (10.42.8.12): icmp_seq=3 ttl=62 time=0.721 ms
^C
--- policydb.onap.svc.cluster.local ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.626/1.036/1.762/0.515 ms
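
The difference shows up in the service definitions (a sketch; assuming the standard onap namespace): headless services resolve directly to pod IPs, which answer ICMP, while ClusterIP/NodePort services get a virtual IP that kube-proxy only forwards for the declared ports, so ping gets no reply:

$ kubectl get svc drools nexus message-router pdp policydb -n onap
# a CLUSTER-IP of "None" means headless (DNS returns pod IPs, which are pingable);
# a real CLUSTER-IP is a virtual IP that does not answer ICMP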

As for the OpenStack issue: indeed, I made sure the names differ quite a lot (i.e., the generic-vnf-name is not a substring of the vnf-name). The stack has been deployed and this time there is traffic flowing. I had to switch to demo_artifacts_version=1.3.0 in the preload, as I see that 1.2.1 has been removed. However, this gives a build error in the sink VM:

Making evel.o from evel.c
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel.c: In function 'evel_free_event':
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel.c:461:7: warning: implicit declaration of function 'evel_free_hrtbt_field' [-Wimplicit-function-declaration]
evel_free_hrtbt_field((EVENT_HEARTBEAT_FIELD *)evt_ptr);
^
Making metadata.o from metadata.c
Making ring_buffer.o from ring_buffer.c
Making double_list.o from double_list.c
Making hashtable.o from hashtable.c
Making evel_event.o from evel_event.c
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_event.c: In function 'evel_json_encode_eventtype':
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_event.c:591:11: warning: implicit declaration of function 'evel_json_encode_voice_quality' [-Wimplicit-function-declaration]
evel_json_encode_voice_quality(jbuf, (EVENT_VOICE_QUALITY *)event);
^
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_event.c:595:11: warning: implicit declaration of function 'evel_json_encode_threshold_cross' [-Wimplicit-function-declaration]
evel_json_encode_threshold_cross(jbuf, (EVENT_THRESHOLD_CROSS *)event);
^
Making evel_fault.o from evel_fault.c
Making evel_mobile_flow.o from evel_mobile_flow.c
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_mobile_flow.c: In function 'evel_json_encode_mobile_flow':
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_mobile_flow.c:965:7: warning: implicit declaration of function 'evel_throttle_suppress_nv_pair' [-Wimplicit-function-declaration]
if (!evel_throttle_suppress_nv_pair(jbuf->throttle_spec,
^
Making evel_option.o from evel_option.c
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_option.c: In function 'evel_force_option_intheader':
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_option.c:393:18: warning: assignment discards 'const' qualifier from pointer target type [enabled by default]
option->object = value;
^
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_option.c: In function 'evel_set_option_intheader':
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_option.c:426:20: warning: assignment discards 'const' qualifier from pointer target type [enabled by default]
option->object = value;
^
Making evel_jsonobject.o from evel_jsonobject.c
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_jsonobject.c: In function 'evel_new_jsonobjinstance':
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_jsonobject.c:99:14: warning: unused variable 'key' [-Wunused-variable]
jsmntok_t *key;
^
Making evel_other.o from evel_other.c
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_other.c: In function 'evel_other_field_add_namedarray':
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_other.c:172:3: warning: passing argument 2 of 'ht_get' discards 'const' qualifier from pointer target type [enabled by default]
list = (DLIST *)ht_get(other->namedarrays, hashname);
^
In file included from /opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel.h:44:0,
from /opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_other.c:29:
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/hashtable.h:95:7: note: expected 'char *' but argument is of type 'const char *'
void *ht_get( HASHTABLE_T *hashtable, char *key );
^
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_other.c:178:6: warning: passing argument 2 of 'ht_set' discards 'const' qualifier from pointer target type [enabled by default]
ht_set(other->namedarrays, hashname,(void*)nlist);
^
In file included from /opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel.h:44:0,
from /opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_other.c:29:
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/hashtable.h:86:6: note: expected 'char *' but argument is of type 'const char *'
void ht_set( HASHTABLE_T *hashtable, char *key, void *value );
^
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_other.c: In function 'evel_json_encode_other':
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_other.c:302:27: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
for( idx = 0; idx < ht->size; idx++ ) {
^
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_other.c:347:25: warning: suggest braces around empty body in an 'if' statement [-Wempty-body]
if(jsonobjp != NULL);
^
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_other.c:361:15: warning: implicit declaration of function 'evel_enc_kv_object' [-Wimplicit-function-declaration]
evel_enc_kv_object(jbuf, "objectInstance", jsonobjinst->jsonstring);
^
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_other.c:366:10: warning: variable 'item_added3' set but not used [-Wunused-but-set-variable]
bool item_added3 = false;
^
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_other.c:277:8: warning: unused variable 'itm_added' [-Wunused-variable]
bool itm_added = false;
^
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_other.c: In function 'evel_free_other':
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_other.c:474:16: warning: unused variable 'other_field_item' [-Wunused-variable]
DLIST_ITEM * other_field_item = NULL;
^
Making evel_json_buffer.o from evel_json_buffer.c
Making evel_reporting_measurement.o from evel_reporting_measurement.c
Making evel_heartbeat_fields.o from evel_heartbeat_fields.c
Making evel_sipsignaling.o from evel_sipsignaling.c
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_sipsignaling.c: In function 'evel_json_encode_signaling':
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_sipsignaling.c:485:3: warning: implicit declaration of function 'evel_json_encode_vendor_field' [-Wimplicit-function-declaration]
evel_json_encode_vendor_field(jbuf, &event->vnfname_field);
^
Making evel_scaling_measurement.o from evel_scaling_measurement.c
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_scaling_measurement.c: In function 'evel_json_encode_measurement':
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_scaling_measurement.c:3140:15: warning: implicit declaration of function 'evel_enc_kv_object' [-Wimplicit-function-declaration]
evel_enc_kv_object(jbuf, "objectInstance", jsonobjinst->jsonstring);
^
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_scaling_measurement.c:3145:10: warning: variable 'item_added3' set but not used [-Wunused-but-set-variable]
bool item_added3 = false;
^
Making evel_state_change.o from evel_state_change.c
Making evel_strings.o from evel_strings.c
Making evel_syslog.o from evel_syslog.c
Making evel_throttle.o from evel_throttle.c
Making evel_internal_event.o from evel_internal_event.c
Making evel_event_mgr.o from evel_event_mgr.c
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_event_mgr.c: In function 'event_handler_initialize':
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_event_mgr.c:373:5: warning: format not a string literal and no format arguments [-Wformat-security]
snprintf(local_address,sizeof(local_address),source_ip);
^
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_event_mgr.c:373:5: warning: format not a string literal and no format arguments [-Wformat-security]
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_event_mgr.c:390:5: warning: format not a string literal and no format arguments [-Wformat-security]
snprintf(local_address,sizeof(local_address),source_ip_bakup);
^
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_event_mgr.c:390:5: warning: format not a string literal and no format arguments [-Wformat-security]
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_event_mgr.c: In function 'evel_postmulti_message':
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_event_mgr.c:1219:7: warning: unused variable 'http_response_code' [-Wunused-variable]
int http_response_code = 0, i;
^
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_event_mgr.c: In function 'evel_post_multiapi':
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_event_mgr.c:1477:16: warning: unused variable 'found' [-Wunused-variable]
int idx, found = 0;
^
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_event_mgr.c:1368:25: warning: unused variable 'i' [-Wunused-variable]
int still_running,i;
^
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_event_mgr.c:1365:11: warning: variable 'rc' set but not used [-Wunused-but-set-variable]
int rc = EVEL_SUCCESS;
^
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_event_mgr.c: In function 'evel_handle_event_response':
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_event_mgr.c:1939:54: warning: unused parameter 'post' [-Wunused-parameter]
MEMORY_CHUNK * const post)
^
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_event_mgr.c: At top level:
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_event_mgr.c:1996:6: warning: 'evel_handle_response_tokens' defined but not used [-Wunused-function]
bool evel_handle_response_tokens(const MEMORY_CHUNK * const chunk,
^
Making evel_threshold_cross.o from evel_threshold_cross.c
Making evel_voicequality.o from evel_voicequality.c
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_voicequality.c: In function 'evel_json_encode_voice_quality':
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel_voicequality.c:489:3: warning: implicit declaration of function 'evel_json_encode_vendor_field' [-Wimplicit-function-declaration]
evel_json_encode_vendor_field(jbuf, &event->vendorVnfNameFields);
^
Making evel_logging.o from evel_logging.c
Making evel_batch.o from evel_batch.c
Making jsmn.o from jsmn.c
Linking API Shared Library
Linking API Static Library
Making VNF Reporting
vpp_measurement_reporter.c: In function 'main':
vpp_measurement_reporter.c:207:22: warning: passing argument 4 of 'evel_initialize' makes integer from pointer without a cast [enabled by default]
1)) /* Verbosity */
^
In file included from vpp_measurement_reporter.c:25:0:
/opt/VES/evel/evel-library/code/VESreporting/../../code/evel_library/evel.h:1460:16: note: expected 'int' but argument is of type 'void *'
EVEL_ERR_CODES evel_initialize(const char * const fqdn,
^
vpp_measurement_reporter.c:207:22: warning: passing argument 5 of 'evel_initialize' makes pointer from integer without a cast [enabled by default]
1)) /* Verbosity */
^
In file included from vpp_measurement_reporter.c:25:0:
/opt/VES/evel/evel-library/code/VESreporting/../../code/evel_library/evel.h:1460:16: note: expected 'const char * const' but argument is of type 'int'
EVEL_ERR_CODES evel_initialize(const char * const fqdn,
^
vpp_measurement_reporter.c:207:22: warning: passing argument 7 of 'evel_initialize' makes integer from pointer without a cast [enabled by default]
1)) /* Verbosity */
^
In file included from vpp_measurement_reporter.c:25:0:
/opt/VES/evel/evel-library/code/VESreporting/../../code/evel_library/evel.h:1460:16: note: expected 'int' but argument is of type 'void *'
EVEL_ERR_CODES evel_initialize(const char * const fqdn,
^
vpp_measurement_reporter.c:207:22: warning: passing argument 8 of 'evel_initialize' makes integer from pointer without a cast [enabled by default]
1)) /* Verbosity */
^
In file included from vpp_measurement_reporter.c:25:0:
/opt/VES/evel/evel-library/code/VESreporting/../../code/evel_library/evel.h:1460:16: note: expected 'int' but argument is of type 'void *'
EVEL_ERR_CODES evel_initialize(const char * const fqdn,
^
vpp_measurement_reporter.c:207:22: warning: passing argument 9 of 'evel_initialize' makes integer from pointer without a cast [enabled by default]
1)) /* Verbosity */
^
In file included from vpp_measurement_reporter.c:25:0:
/opt/VES/evel/evel-library/code/VESreporting/../../code/evel_library/evel.h:1460:16: note: expected 'int' but argument is of type 'void *'
EVEL_ERR_CODES evel_initialize(const char * const fqdn,
^
vpp_measurement_reporter.c:207:22: warning: passing argument 14 of 'evel_initialize' makes integer from pointer without a cast [enabled by default]
1)) /* Verbosity */
^
In file included from vpp_measurement_reporter.c:25:0:
/opt/VES/evel/evel-library/code/VESreporting/../../code/evel_library/evel.h:1460:16: note: expected 'long int' but argument is of type 'char *'
EVEL_ERR_CODES evel_initialize(const char * const fqdn,
^
vpp_measurement_reporter.c:207:22: warning: passing argument 15 of 'evel_initialize' makes integer from pointer without a cast [enabled by default]
1)) /* Verbosity */
^
In file included from vpp_measurement_reporter.c:25:0:
/opt/VES/evel/evel-library/code/VESreporting/../../code/evel_library/evel.h:1460:16: note: expected 'long int' but argument is of type 'void *'
EVEL_ERR_CODES evel_initialize(const char * const fqdn,
^
vpp_measurement_reporter.c:207:22: warning: passing argument 16 of 'evel_initialize' makes pointer from integer without a cast [enabled by default]
1)) /* Verbosity */
^
In file included from vpp_measurement_reporter.c:25:0:
/opt/VES/evel/evel-library/code/VESreporting/../../code/evel_library/evel.h:1460:16: note: expected 'const char * const' but argument is of type 'int'
EVEL_ERR_CODES evel_initialize(const char * const fqdn,
^
vpp_measurement_reporter.c:207:22: warning: passing argument 18 of 'evel_initialize' makes pointer from integer without a cast [enabled by default]
1)) /* Verbosity */
^
In file included from vpp_measurement_reporter.c:25:0:
/opt/VES/evel/evel-library/code/VESreporting/../../code/evel_library/evel.h:1460:16: note: expected 'const char * const' but argument is of type 'int'
EVEL_ERR_CODES evel_initialize(const char * const fqdn,
^
vpp_measurement_reporter.c:207:22: error: too few arguments to function 'evel_initialize'
1)) /* Verbosity */
^
In file included from vpp_measurement_reporter.c:25:0:
/opt/VES/evel/evel-library/code/VESreporting/../../code/evel_library/evel.h:1460:16: note: declared here
EVEL_ERR_CODES evel_initialize(const char * const fqdn,
^
make[1]: *** [vpp_measurement_reporter] Error 1
make: *** [vnf_reporting] Error 2
Adding system startup for /etc/init.d/vfirewall.sh ...
/etc/rc0.d/K20vfirewall.sh -> ../init.d/vfirewall.sh
/etc/rc1.d/K20vfirewall.sh -> ../init.d/vfirewall.sh
/etc/rc6.d/K20vfirewall.sh -> ../init.d/vfirewall.sh
/etc/rc2.d/S20vfirewall.sh -> ../init.d/vfirewall.sh
/etc/rc3.d/S20vfirewall.sh -> ../init.d/vfirewall.sh
/etc/rc4.d/S20vfirewall.sh -> ../init.d/vfirewall.sh
/etc/rc5.d/S20vfirewall.sh -> ../init.d/vfirewall.sh
vpp start/running, process 21564

I am not sure how this will impact the demo. Is there any recommended version for demo_artifacts_version and install_script_version?

Aside from this, there is actually traffic flowing from the PG to the sink - a first! Thank you, Jorge.


jkzcristiano
 

Dear Cristina,

You need to use the latest Heat files for onboarding vFWCL (from https://github.com/onap/demo).
Then, for the preload, follow the structure in the attached "preload.txt" file.
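
For example, fetching the latest Heat files could look like this (a sketch; the exact subdirectory holding the vFWCL templates may differ):

$ git clone https://github.com/onap/demo.git
$ ls demo/heat/vFWCL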

I hope this helps.

Kind Regards
 


Cristina Precup
 

Hello,

Thank you, but this did not help. Mainly, I am getting errors about curl headers and brctl missing on the sink VM:

Making evel.o from evel.c
/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel.c:36:23: fatal error: curl/curl.h: No such file or directory
#include <curl/curl.h>
^
compilation terminated.
make: *** [/opt/VES/evel/evel-library/bldjobs/../code/evel_library/evel.o] Error 1
Adding system startup for /etc/init.d/vfirewall.sh ...
/etc/rc0.d/K20vfirewall.sh -> ../init.d/vfirewall.sh
/etc/rc1.d/K20vfirewall.sh -> ../init.d/vfirewall.sh
/etc/rc6.d/K20vfirewall.sh -> ../init.d/vfirewall.sh
/etc/rc2.d/S20vfirewall.sh -> ../init.d/vfirewall.sh
/etc/rc3.d/S20vfirewall.sh -> ../init.d/vfirewall.sh
/etc/rc4.d/S20vfirewall.sh -> ../init.d/vfirewall.sh
/etc/rc5.d/S20vfirewall.sh -> ../init.d/vfirewall.sh
vpp start/running, process 7759
tap-0
tap-1
./v_firewall_init.sh: line 55: brctl: command not found
./v_firewall_init.sh: line 56: brctl: command not found
./v_firewall_init.sh: line 57: brctl: command not found
./v_firewall_init.sh: line 58: brctl: command not found
./v_firewall_init.sh: line 59: brctl: command not found
./v_firewall_init.sh: line 60: brctl: command not found
br0: ERROR while getting interface flags: No such device
br1: ERROR while getting interface flags: No such device

Any hint on which Ubuntu Trusty image version you are using?
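
In the meantime, a manual check on the sink VM can confirm the missing packages (a sketch; it assumes the install script simply failed to install them):

$ sudo apt-get update
$ sudo apt-get install -y bridge-utils libcurl4-openssl-dev
$ which brctl && ls /usr/include/curl/curl.h

bridge-utils provides brctl, and libcurl4-openssl-dev provides curl/curl.h, which are exactly what the log shows as missing.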


Best regards,
--
Cristina Precup


Marco Platania
 

Cristina,

Please follow this webpage to download VM images: https://docs.openstack.org/image-guide/obtain-images.html

I typically use whatever is in the current directory here: http://cloud-images.ubuntu.com/trusty/

I just spun up a vFW in my private lab and it worked as expected.

Marco



Cristina Precup
 

Dear community,

I had switched to the beijing branch of the demo repository and that brought the vFW demo back up. It turns out that the repository branch and the artifact versions are tightly coupled.
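
For anyone hitting the same issue, the switch amounts to checking out the branch that matches the installed release (a sketch):

$ git clone -b beijing https://github.com/onap/demo.git

and then keeping demo_artifacts_version and install_script_version consistent with that branch.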

Thank you for all the support!


Best regards,
--
Cristina Precup


gulsumatici@...
 

Hello,
In my Beijing installation, the drools pod status is Init and it doesn't run. It stays in Init status even after deleting the pod several times, although there is no error in the logs. It can't create the drools container. Is there any method to fix this?

dev-drools-0                                    0/1       Init:0/1  

Labels:         app=drools
                controller-revision-hash=dev-drools-d9cbdb76d
                release=dev
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"StatefulSet","namespace":"onap","name":"dev-drools","uid":"ec0af246-e82b-11e8-9d15-0272838fbdaf","apiVersi...
Status:         Pending
IP:             10.42.8.108
Created By:     StatefulSet/dev-drools
Controlled By:  StatefulSet/dev-drools
Init Containers:
  drools-readiness:
    Container ID:  docker://8329445dc7d79d50709332a9c86ed85c6c320b010c15d9194177f05bf117ff71
    Image:         oomk8s/readiness-check:2.0.0
    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed
    Port:          <none>
    Command:
      /root/ready.py
    Args:
      --container-name
      policydb
      --container-name
      nexus
    State:          Running
      Started:      Wed, 21 Nov 2018 15:04:36 +0000
    Ready:          False
    Restart Count:  0
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-28w78 (ro)
Containers:
  drools:
    Container ID:  
    Image:         nexus3.onap.org:10001/onap/policy-drools:1.2.3
    Image ID:      
    Ports:         6969/TCP, 9696/TCP
    Command:
      /bin/bash
      -c
      ./do-start.sh
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Liveness:       tcp-socket :6969 delay=180s timeout=1s period=10s #success=1 #failure=3
    Readiness:      tcp-socket :6969 delay=60s timeout=1s period=10s #success=1 #failure=3
    Environment:
      REPLICAS:  1
    Mounts:
      /etc/localtime from localtime (ro)
      /tmp/logback.xml from policy-logback (rw)
      /tmp/policy-install/config/apps-install.sh from drools-config (rw)
      /tmp/policy-install/config/base.conf from drools-config (rw)
      /tmp/policy-install/config/drools-preinstall.sh from drools-config (rw)
      /tmp/policy-install/config/drools-tweaks.sh from drools-config (rw)
      /tmp/policy-install/config/feature-healthcheck.conf from drools-secret (rw)
      /tmp/policy-install/config/feature-pooling-dmaap.conf from drools-config (rw)
      /tmp/policy-install/config/policy-management.conf from drools-config (rw)
      /usr/share/maven/conf/settings.xml from drools-settingsxml (rw)
      /var/log/onap from policy-logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-28w78 (ro)
Conditions:
  Type           Status
  Initialized    False 
  Ready          False 
  PodScheduled   True 
Volumes:
  localtime:
    Type:  HostPath (bare host directory volume)
    Path:  /etc/localtime
  filebeat-conf:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-filebeat-configmap
    Optional:  false
  policy-logs:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:  
  policy-data-filebeat:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:  
  policy-logback:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-drools-log-configmap
    Optional:  false
  drools-settingsxml:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-drools-settings-configmap
    Optional:  false
  drools-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-drools-configmap
    Optional:  false
  drools-secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  dev-drools-secret
    Optional:    false
  default-token-28w78:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-28w78
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
 


Jorge Hernandez
 

Hello,

According to your output, the readiness checks are failing; make sure the policydb and nexus container dependencies come up OK (a couple of quick checks are sketched below).
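
For example (a sketch; the pod and container names are taken from your output):

$ kubectl get pods -n onap | grep -E 'policydb|nexus'
$ kubectl logs dev-drools-0 -c drools-readiness -n onap --tail=20

The drools-readiness init container runs /root/ready.py, and its log shows which dependency it is still waiting for.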

Best regards,

Jorge

 
