
[appc] Today's APPC weekly meeting cancelled #appc

Taka Cho
 

Dear ONAP Community,

 

I am expecting that only a few people will be available today, given the holiday season.

 

I am cancelling today’s APPC weekly meeting.

 

Taka


[ONAP][SO][SDNC] Issue while creating VF-module using new API in Casablanca Environment

Sunil Kumar
 

Hi SO/SDNC Team,

 

I was able to successfully preload the data using the SDNC API “GENERIC-RESOURCE-API:preload-vf-module-topology-operation”.

 

While trying to create a VF module using the new SO API (GR_API) in the Casablanca environment, I am getting the error below.

 

Error from SDNC: There are no VNFs defined in MD-SAL

 

Please find attached the preload data and the SO debug logs.

image.jpeg

Please help me resolve this issue.
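
For reference, the "There are no VNFs defined in MD-SAL" message generally indicates that, at the time of the VF-module request, SDNC's MD-SAL does not contain the VNF data the GR-API directed graph expects (for example because the VNF assignment never wrote it, or because the names in the request do not match the preload). Below is a minimal sketch for inspecting what MD-SAL actually holds, assuming the standard ODL RESTCONF interface; the host, port and credentials are placeholders, not values from this deployment:

# Hypothetical host/port/credentials -- adjust to your SDNC deployment.
# GR-API preload records currently stored in MD-SAL:
curl -s -u <odl-user>:<odl-password> -H 'Accept: application/json' \
  'http://<sdnc-host>:<restconf-port>/restconf/config/GENERIC-RESOURCE-API:preload-information'
# GR-API per-service data (where the VNF entries the DG looks for are kept):
curl -s -u <odl-user>:<odl-password> -H 'Accept: application/json' \
  'http://<sdnc-host>:<restconf-port>/restconf/config/GENERIC-RESOURCE-API:services'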



--
Thanks & Regards
Sunil Biradar
Bangalore | India | sunilbiradar30@...


Re: [E] RE: [onap-discuss] [APPC] LCM Command: Restart

Prakash
 

Hi Team,
In the Beijing release notes there was a known issue with the APPC LCM Restart action in the DG, so I tried the Configure LCM action instead, but I am still getting the error below. Any suggestions?
Note: I am using the Beijing release since I already have that setup running.

Request:

    "input": {
      "common-header": {
        "timestamp": "2018-12-26T02:10:04.244Z",
        "api-ver": "2.00",
        "originator-id": "664be3d2-6c12-4f4b-a3e7-c349acced2001",
        "request-id": "268a5e6d-3e8e-496c-b282-3c0a33be3c28",
        "sub-request-id": "1",
        "flags": {
            "force" : "TRUE",
            "ttl" : 60000
        }
      },
      "action": "Configure",
      "action-identifiers": {
        "vnf-id": "b0ced82c-b549-4404-970c-ea39437d993b"
      },
     "payload": "{\"request-parameters\":{\"configuration-parameters\":\"chef-server-address\":\"xx.xx.xx.xx\"}}"
    }
}


Response: 
{"output":{"status":{"code":100,"message":"ACCEPTED - request accepted"},"common-header":{"api-ver":"2.00","flags":{"force":"TRUE","ttl":60000},"sub-request-id":"1","originator-id":"664be3d2-6c12-4f4b-a3e7-c349acced2001","timestamp":"2018-12-26T02:10:04.244Z","request-id":"268a5e6d-3e8e-496c-b282-3c0a33be3c28"}}}



LOGS:
2018-12-26 10:08:48,638 | INFO  | 8894141-15128541 | audit                            | 413 - appc-common - 1.3.0 -  -  | APPC0090A Operation "Configure" for VNF type "b0ced82c-b549-4404-970c-ea39437d993b" from Source "664be3d2-6c12-4f4b-a3e7-c349acced2001" with RequestID "268a5e6d-3e8e-496c-b282-3c0a33be3c28" was started at "2018-12-26T10:08:47Z" and ended at "2018-12-26T10:08:48Z" with status code "100"
2018-12-26 10:08:48,638 | INFO  | 8894141-15128541 | metrics                          | 413 - appc-common - 1.3.0 -  -  | APPC0128I Operation "Configure" for VNF type "b0ced82c-b549-4404-970c-ea39437d993b" from Source "664be3d2-6c12-4f4b-a3e7-c349acced2001" with RequestID "268a5e6d-3e8e-496c-b282-3c0a33be3c28" on "APPC" with action "Configure" ended in 688 ms with result "COMPLETE"
2018-12-26 10:08:48,638 | INFO  | 8894141-15128541 | AppcProviderLcm                  | 413 - appc-common - 1.3.0 -  -  | Execute of 'ActionIdentifiers{getVnfId=b0ced82c-b549-4404-970c-ea39437d993b, augmentations={}}' finished with status 100. Reason: ACCEPTED - request accepted
2018-12-26 10:08:48,637 | ERROR | ppc-dispatcher-1 | ConvertNode                      | 452 - org.onap.sdnc.config.generator - 1.3.0 - SvcLogicGraph [module=APPC, rpc=setInputParams, mode=sync, version=4.0.0, md5sum=3c064381db25f6cbfa8bf197cbbb4891] - 1 (block) | Failed in JSON to DGContext Conversion
org.codehaus.jettison.json.JSONException: Expected a ',' or '}' at character 72 of {"request-parameters":{"configuration-parameters":"chef-server-address":"10.75.12.169"}}
    at org.codehaus.jettison.json.JSONTokener.syntaxError(JSONTokener.java:463)
    at org.codehaus.jettison.json.JSONObject.<init>(JSONObject.java:247)
    at org.codehaus.jettison.json.JSONTokener.newJSONObject(JSONTokener.java:412)
    at org.codehaus.jettison.json.JSONTokener.nextValue(JSONTokener.java:327)
    at org.codehaus.jettison.json.JSONObject.<init>(JSONObject.java:230)
    at org.codehaus.jettison.json.JSONObject.<init>(JSONObject.java:311)
    at org.onap.sdnc.config.generator.tool.JSONTool.convertToProperties(JSONTool.java:52)
    at org.onap.sdnc.config.generator.convert.ConvertNode.convertJson2DGContext(ConvertNode.java:71)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.8.0_171]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[:1.8.0_171]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[:1.8.0_171]
    at java.lang.reflect.Method.invoke(Method.java:498)[:1.8.0_171]
    at org.onap.ccsdk.sli.core.sli.provider.ExecuteNodeExecutor.execute(ExecuteNodeExecutor.java:96)
    at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.executeNode(SvcLogicServiceImpl.java:181)
    at org.onap.ccsdk.sli.core.sli.provider.BlockNodeExecutor.execute(BlockNodeExecutor.java:62)
    at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.executeNode(SvcLogicServiceImpl.java:181)
    at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.execute(SvcLogicServiceImpl.java:158)
    at org.onap.ccsdk.sli.core.sli.provider.CallNodeExecutor.execute(CallNodeExecutor.java:127)
    at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.executeNode(SvcLogicServiceImpl.java:181)
    at org.onap.ccsdk.sli.core.sli.provider.BlockNodeExecutor.execute(BlockNodeExecutor.java:62)
    at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.executeNode(SvcLogicServiceImpl.java:181)
    at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.execute(SvcLogicServiceImpl.java:158)
    at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.execute(SvcLogicServiceImpl.java:238)
    at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.execute(SvcLogicServiceImpl.java:216)
    at Proxy6ad30656_b7dc_4fda_821b_79fcf1b9f3ea.execute(Unknown Source)
    at Proxyd2bcbfd5_429a_45d3_82e7_706069b232bc.execute(Unknown Source)
    at org.onap.appc.workflow.impl.WorkFlowManagerImpl.SVCLogicServiceExecute(WorkFlowManagerImpl.java:253)[427:appc-workflow-management-core:1.3.0]
    at org.onap.appc.workflow.impl.WorkFlowManagerImpl.executeWorkflow(WorkFlowManagerImpl.java:156)[427:appc-workflow-management-core:1.3.0]
    at Proxye0f56a4a_780f_422c_a970_14594cb9e6b7.executeWorkflow(Unknown Source)
    at Proxy9242ccbb_5919_4c75_8811_d9485b845e33.executeWorkflow(Unknown Source)
    at org.onap.appc.executor.impl.CommandTask.run(CommandTask.java:117)
    at org.onap.appc.executionqueue.impl.QueueManager.lambda$enqueueTask$0(QueueManager.java:105)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)[:1.8.0_171]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)[:1.8.0_171]
    at java.lang.Thread.run(Thread.java:748)[:1.8.0_171]
2018-12-26 10:08:48,670 | ERROR | ppc-dispatcher-1 | ExecuteNodeExecutor              | 340 - org.onap.ccsdk.sli.core.sli-provider - 0.2.3 - SvcLogicGraph [module=APPC, rpc=setInputParams, mode=sync, version=4.0.0, md5sum=3c064381db25f6cbfa8bf197cbbb4891] - 1 (block) | Could not execute plugin. SvcLogic status will be set to failure.
org.onap.ccsdk.sli.core.sli.SvcLogicException: Expected a ',' or '}' at character 72 of {"request-parameters":{"configuration-parameters":"chef-server-address":"10.75.12.169"}}
    at org.onap.sdnc.config.generator.convert.ConvertNode.convertJson2DGContext(ConvertNode.java:87)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.8.0_171]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[:1.8.0_171]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[:1.8.0_171]
    at java.lang.reflect.Method.invoke(Method.java:498)[:1.8.0_171]
    at org.onap.ccsdk.sli.core.sli.provider.ExecuteNodeExecutor.execute(ExecuteNodeExecutor.java:96)
    at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.executeNode(SvcLogicServiceImpl.java:181)
    at org.onap.ccsdk.sli.core.sli.provider.BlockNodeExecutor.execute(BlockNodeExecutor.java:62)
    at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.executeNode(SvcLogicServiceImpl.java:181)
    at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.execute(SvcLogicServiceImpl.java:158)
    at org.onap.ccsdk.sli.core.sli.provider.CallNodeExecutor.execute(CallNodeExecutor.java:127)
    at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.executeNode(SvcLogicServiceImpl.java:181)
    at org.onap.ccsdk.sli.core.sli.provider.BlockNodeExecutor.execute(BlockNodeExecutor.java:62)
    at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.executeNode(SvcLogicServiceImpl.java:181)
    at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.execute(SvcLogicServiceImpl.java:158)
    at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.execute(SvcLogicServiceImpl.java:238)
    at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.execute(SvcLogicServiceImpl.java:216)
    at Proxy6ad30656_b7dc_4fda_821b_79fcf1b9f3ea.execute(Unknown Source)
    at Proxyd2bcbfd5_429a_45d3_82e7_706069b232bc.execute(Unknown Source)
    at org.onap.appc.workflow.impl.WorkFlowManagerImpl.SVCLogicServiceExecute(WorkFlowManagerImpl.java:253)[427:appc-workflow-management-core:1.3.0]
    at org.onap.appc.workflow.impl.WorkFlowManagerImpl.executeWorkflow(WorkFlowManagerImpl.java:156)[427:appc-workflow-management-core:1.3.0]
    at Proxye0f56a4a_780f_422c_a970_14594cb9e6b7.executeWorkflow(Unknown Source)
    at Proxy9242ccbb_5919_4c75_8811_d9485b845e33.executeWorkflow(Unknown Source)
    at org.onap.appc.executor.impl.CommandTask.run(CommandTask.java:117)
    at org.onap.appc.executionqueue.impl.QueueManager.lambda$enqueueTask$0(QueueManager.java:105)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)[:1.8.0_171]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)[:1.8.0_171]
    at java.lang.Thread.run(Thread.java:748)[:1.8.0_171]
2018-12-26 10:08:48,673 | INFO  | ppc-dispatcher-1 | BlockNodeExecutor                | 340 - org.onap.ccsdk.sli.core.sli-provider - 0.2.3 - SvcLogicGraph [module=APPC, rpc=setInputParams, mode=sync, version=4.0.0, md5sum=3c064381db25f6cbfa8bf197cbbb4891] - 1 (block) | Block - stopped executing nodes due to failure status
2018-12-26 10:08:48,674 | INFO  | ppc-dispatcher-1 | SvcLogicExprListener             | 339 - org.onap.ccsdk.sli.core.sli-common - 0.2.3 -  -  | Outcome (401) not found, keys are { (400) (Other)}
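
Looking at the stack trace, the failure appears to be in the payload string itself rather than in the DG: {"request-parameters":{"configuration-parameters":"chef-server-address":"..."}} is not valid JSON, because "configuration-parameters" is followed by a bare key/value pair instead of an object. Below is a minimal sketch of the same request with only the payload nesting fixed; the endpoint, host, port and credentials are assumptions for illustration:

# Hypothetical endpoint/credentials; the only intended change versus the request
# above is that "chef-server-address" is wrapped in an object.
curl -u <appc-user>:<appc-password> -H 'Content-Type: application/json' -X POST \
  'http://<appc-host>:<restconf-port>/restconf/operations/appc-provider-lcm:configure' \
  -d '{
    "input": {
      "common-header": {
        "timestamp": "2018-12-26T02:10:04.244Z",
        "api-ver": "2.00",
        "originator-id": "664be3d2-6c12-4f4b-a3e7-c349acced2001",
        "request-id": "268a5e6d-3e8e-496c-b282-3c0a33be3c28",
        "sub-request-id": "1",
        "flags": { "force": "TRUE", "ttl": 60000 }
      },
      "action": "Configure",
      "action-identifiers": { "vnf-id": "b0ced82c-b549-4404-970c-ea39437d993b" },
      "payload": "{\"request-parameters\":{\"configuration-parameters\":{\"chef-server-address\":\"xx.xx.xx.xx\"}}}"
    }
  }'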


On Mon, Dec 17, 2018 at 9:01 PM CHO, TAKAMUNE <tc012c@...> wrote:

Then I would recommend using the Casablanca release Docker images, since I do not have any environment where I can test Beijing code right now.

 

There are some known issues described in the APPC release notes below that may be related to your DG issue. These issues will be resolved in the Casablanca Maintenance Release.

 

https://docs.onap.org/en/casablanca/submodules/appc.git/docs/release-notes.html

 

Taka

 

From: P, Prakash [mailto:prakash.p@...]
Sent: Monday, December 17, 2018 12:47 AM
To: CHO, TAKAMUNE <tc012c@...>
Cc: onap-discuss@...
Subject: Re: [E] RE: [onap-discuss] [APPC] LCM Command: Restart

 

It's Beijing.

 

On Fri, Dec 14, 2018 at 8:02 PM CHO, TAKAMUNE <tc012c@...> wrote:

 

Which APPC version are you using for this request?

 

Taka

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Prakash via Lists.Onap.Org
Sent: Friday, December 14, 2018 8:53 AM
To: onap-discuss@...
Subject: [onap-discuss] [APPC] LCM Command: Restart

 

Hello Team,

When I try to restart my VNF through the APPC LCM Restart API, I get a successful response from APPC, but the VNF is not actually restarted in OpenStack.

The APPC log says there is an error in the DG.

 Any input?

Below are the request response and error logs:

 

Request:

POST 

    "input": {

      "common-header": {

        "timestamp": "2018-12-14T02:10:04.244Z",

        "api-ver": "2.00",

        "originator-id": "664be3d2-6c12-4f4b-a3e7-c349acced2001",

        "request-id": "268a5e6d-3e8e-496c-b282-3c0a33be3c28",

        "sub-request-id": "1",

        "flags": {

            "force" : "TRUE",

            "ttl" : 60000

        }

      },

      "action": "Restart",

      "action-identifiers": {

        "vnf-id": "b0ced82c-b549-4404-970c-ea39437d993b"

      }

    }

}

 

 

 

Response:

200 OK

 

{

  "output": {

    "status": {

      "code": 100,

      "message": "ACCEPTED - request accepted"

    },

    "common-header": {

      "api-ver": "2.00",

      "flags": {

        "force": "TRUE",

        "ttl": 60000

      },

      "sub-request-id": "1",

      "originator-id": "664be3d2-6c12-4f4b-a3e7-c349acced2001",

      "timestamp": "2018-12-14T02:10:04.244Z",

      "request-id": "268a5e6d-3e8e-496c-b282-3c0a33be3c28"

    }

  }

}

 

 

ERROR LOG at APPC karaf.log

2018-12-14 10:34:58,350 | INFO  | ]-nio2-thread-14 | ServerSession                    | 51 - org.apache.sshd.core - 0.14.0 -  -  | Server session created from /127.0.0.1:56104

2018-12-14 10:34:58,533 | INFO  | ]-nio2-thread-13 | LogAuditLoginModule              | 38 - org.apache.karaf.jaas.modules - 4.0.10 -  -  | Authentication attempt - karaf

2018-12-14 10:34:58,533 | INFO  | ]-nio2-thread-13 | LogAuditLoginModule              | 38 - org.apache.karaf.jaas.modules - 4.0.10 -  -  | Authentication failed - karaf

2018-12-14 10:34:58,545 | INFO  | f]-nio2-thread-1 | LogAuditLoginModule              | 38 - org.apache.karaf.jaas.modules - 4.0.10 -  -  | Authentication attempt - karaf

2018-12-14 10:34:58,545 | INFO  | f]-nio2-thread-1 | LogAuditLoginModule              | 38 - org.apache.karaf.jaas.modules - 4.0.10 -  -  | Authentication succeeded - karaf

2018-12-14 10:34:58,545 | INFO  | f]-nio2-thread-1 | ServerUserAuthService            | 51 - org.apache.sshd.core - 0.14.0 -  -  | Session karaf@/127.0.0.1:56104 authenticated

2018-12-14 10:34:58,581 | INFO  | f]-nio2-thread-5 | ChannelSession                   | 51 - org.apache.sshd.core - 0.14.0 -  -  | Executing command: system:start-level 

 

2018-12-14 10:34:58,876 | INFO  | 8894141-12614609 | RequestValidatorImpl             | 413 - appc-common - 1.3.0 -  -  | AAIService from bundlecontext

2018-12-14 10:34:58,883 | INFO  | 8894141-12614609 | AAIService                       | 348 - org.onap.ccsdk.sli.adaptors.aai-service-provider - 0.2.3 -  -  | AAI Deprecation - the format of request key is no longer supported. Please rewrite this key : vnf-id = 'b0ced82c-b549-4404-970c-ea39437d993b'

2018-12-14 10:34:58,883 | INFO  | 8894141-12614609 | AAIService                       | 348 - org.onap.ccsdk.sli.adaptors.aai-service-provider - 0.2.3 -  -  | Input - vnf-id : b0ced82c-b549-4404-970c-ea39437d993b

2018-12-14 10:34:58,883 | INFO  | 8894141-12614609 | AAIService                       | 348 - org.onap.ccsdk.sli.adaptors.aai-service-provider - 0.2.3 -  -  | A&AI transaction :

2018-12-14 10:34:58,883 | INFO  | 8894141-12614609 | AAIService                       | 348 - org.onap.ccsdk.sli.adaptors.aai-service-provider - 0.2.3 -  -  | Request Time : 2018-12-14T10:34:58.883Z, Method : GET

2018-12-14 10:34:58,883 | INFO  | 8894141-12614609 | AAIService                       | 348 - org.onap.ccsdk.sli.adaptors.aai-service-provider - 0.2.3 -  -  | Request URL : https://aai.onap:8443/aai/v13/network/generic-vnfs/generic-vnf/b0ced82c-b549-4404-970c-ea39437d993b

2018-12-14 10:34:58,883 | INFO  | 8894141-12614609 | AAIService                       | 348 - org.onap.ccsdk.sli.adaptors.aai-service-provider - 0.2.3 -  -  | Missing requestID. Assigned def33f65-f1e4-4731-a59b-c208fe202f65

2018-12-14 10:34:58,942 | INFO  | 8894141-12614609 | metric                           | 339 - org.onap.ccsdk.sli.core.sli-common - 0.2.3 -  -  | 

2018-12-14 10:34:58,944 | INFO  | 8894141-12614609 | AAIService                       | 348 - org.onap.ccsdk.sli.adaptors.aai-service-provider - 0.2.3 -  -  | Response code : 200, OK

2018-12-14 10:34:58,948 | INFO  | 8894141-12614609 | AAIService                       | 348 - org.onap.ccsdk.sli.adaptors.aai-service-provider - 0.2.3 -  -  | Response data : {"vnf-id":"b0ced82c-b549-4404-970c-ea39437d993b","vnf-name":"vlb002","vnf-type":"vlb_02/vlb_02 0","service-id":"0e5ae4d0-1b6a-4691-bb2e-85159f2eab12","prov-status":"PREPROV","orchestration-status":"Created","in-maint":false,"is-closed-loop-disabled":false,"resource-version":"1544686507931","model-invariant-id":"6d82a5b8-172e-4bfd-aaa6-ae9cb91e3637","model-version-id":"f798d762-7b44-4d8e-898e-a1480defb42e","model-customization-id":"b640bddb-a246-4cfe-addd-b1c0c4e32e72","nf-type":"","nf-function":"","nf-role":"","nf-naming-code":"","relationship-list":{"relationship":[{"related-to":"service-instance","relationship-label":"org.onap.relationships.inventory.ComposedOf","related-link":"/aai/v13/business/customers/customer/Demonstration/service-subscriptions/service-subscription/vLB/service-instances/service-instance/a45bb699-4f45-4299-aa3f-cd65eb08dc7b","relationship-data":[{"relationship-key":"customer.global-customer-id","relationship-value":"Demonstration"},{"relationship-key":"service-subscription.service-type","relationship-value":"vLB"},{"relationship-key":"service-instance.service-instance-id","relationship-value":"a45bb699-4f45-4299-aa3f-cd65eb08dc7b"}],"related-to-property":[{"property-key":"service-instance.service-instance-name","property-value":"vlb02"}]},{"related-to":"platform","relationship-label":"org.onap.relationships.inventory.Uses","related-link":"/aai/v13/business/platforms/platform/Platform-Demonstration","relationship-data":[{"relationship-key":"platform.platform-name","relationship-value":"Platform-Demonstration"}]},{"related-to":"line-of-business","relationship-label":"org.onap.relationships.inventory.Uses","related-link":"/aai/v13/business/lines-of-business/line-of-business/LOB-Demonstration","relationship-data":[{"relationship-key":"line-of-business.line-of-business-name","relationship-value":"LOB-Demonstration"}]}]},"vf-modules":{"vf-module":[{"vf-module-id":"8ac77016-c9dd-4385-84dc-f5423860277d","vf-module-name":"vLB_Test","heat-stack-id":"vLB_Test/fc830b8f-9a91-418a-9fb3-576b8c3fca9a","orchestration-status":"active","is-base-vf-module":true,"resource-version":"1544691551639","model-invariant-id":"80755505-a4ed-46dd-9b58-d1bb45e9bbd0","model-version-id":"dff1abe0-48b3-4414-9d88-0ef433c80036","model-customization-id":"9d760e31-9df5-4567-98a3-ad803ed5334e","module-index":0}]}}

2018-12-14 10:34:58,999 | INFO  | 8894141-12614609 | metrics                          | 413 - appc-common - 1.3.0 -  -  | APPC0128I Operation "null" for VNF type "null" from Source "null" with RequestID "def33f65-f1e4-4731-a59b-c208fe202f65" on "A&AI" with action "query" ended in 115 ms with result "COMPLETE"

2018-12-14 10:34:58,999 | INFO  | 8894141-12614609 | RequestValidatorImpl             | 413 - appc-common - 1.3.0 -  -  | AAIResponse: SUCCESS

2018-12-14 10:34:59,026 | INFO  | 8894141-12614609 | CommandTask                      | 421 - appc-command-executor-core - 1.3.0 -  -  | AAIService from bundlecontext

2018-12-14 10:34:59,027 | INFO  | ppc-dispatcher-7 | SvcLogicServiceImpl              | 340 - org.onap.ccsdk.sli.core.sli-provider - 0.2.3 -  -  | Fetching service logic from data store

2018-12-14 10:34:59,029 | INFO  | ppc-dispatcher-7 | SvcLogicServiceImpl              | 340 - org.onap.ccsdk.sli.core.sli-provider - 0.2.3 - SvcLogicGraph [module=APPC, rpc=Generic_Restart, mode=sync, version=3.0.0, md5sum=9223fbd1e1a9645d78ba151289ae7c6e] -  | About to execute graph SvcLogicGraph [module=APPC, rpc=Generic_Restart, mode=sync, version=3.0.0, md5sum=9223fbd1e1a9645d78ba151289ae7c6e]

2018-12-14 10:34:59,029 | INFO  | ppc-dispatcher-7 | SvcLogicServiceImpl              | 340 - org.onap.ccsdk.sli.core.sli-provider - 0.2.3 - SvcLogicGraph [module=APPC, rpc=Generic_Restart, mode=sync, version=3.0.0, md5sum=9223fbd1e1a9645d78ba151289ae7c6e] - 1 (execute) | About to execute node # 1 (execute)

2018-12-14 10:34:59,029 | WARN  | ppc-dispatcher-7 | JsonDgUtilImpl                   | 413 - appc-common - 1.3.0 - SvcLogicGraph [module=APPC, rpc=Generic_Restart, mode=sync, version=3.0.0, md5sum=9223fbd1e1a9645d78ba151289ae7c6e] - 1 (execute) | input payload param value is empty ("") or null

2018-12-14 10:34:59,029 | INFO  | ppc-dispatcher-7 | SvcLogicServiceImpl              | 340 - org.onap.ccsdk.sli.core.sli-provider - 0.2.3 - SvcLogicGraph [module=APPC, rpc=Generic_Restart, mode=sync, version=3.0.0, md5sum=9223fbd1e1a9645d78ba151289ae7c6e] - 8 (switch) | About to execute node # 8 (switch)

2018-12-14 10:34:59,029 | INFO  | ppc-dispatcher-7 | SvcLogicExprListener             | 339 - org.onap.ccsdk.sli.core.sli-common - 0.2.3 - SvcLogicGraph [module=APPC, rpc=Generic_Restart, mode=sync, version=3.0.0, md5sum=9223fbd1e1a9645d78ba151289ae7c6e] - 8 (switch) | Outcome ($input.action-identifiers.vnf-id) not found, keys are { ("") (Other)}

2018-12-14 10:34:59,029 | INFO  | ppc-dispatcher-7 | SvcLogicServiceImpl              | 340 - org.onap.ccsdk.sli.core.sli-provider - 0.2.3 - SvcLogicGraph [module=APPC, rpc=Generic_Restart, mode=sync, version=3.0.0, md5sum=9223fbd1e1a9645d78ba151289ae7c6e] - 10 (block) | About to execute node # 10 (block)

2018-12-14 10:34:59,030 | INFO  | 8894141-12614609 | audit                            | 413 - appc-common - 1.3.0 -  -  | APPC0090A Operation "Restart" for VNF type "b0ced82c-b549-4404-970c-ea39437d993b" from Source "664be3d2-6c12-4f4b-a3e7-c349acced2001" with RequestID "268a5e6d-3e8e-496c-b282-3c0a33be3c28" was started at "2018-12-14T10:34:58Z" and ended at "2018-12-14T10:34:59Z" with status code "100"

2018-12-14 10:34:59,030 | INFO  | 8894141-12614609 | metrics                          | 413 - appc-common - 1.3.0 -  -  | APPC0128I Operation "Restart" for VNF type "b0ced82c-b549-4404-970c-ea39437d993b" from Source "664be3d2-6c12-4f4b-a3e7-c349acced2001" with RequestID "268a5e6d-3e8e-496c-b282-3c0a33be3c28" on "APPC" with action "Restart" ended in 199 ms with result "COMPLETE"

2018-12-14 10:34:59,030 | INFO  | 8894141-12614609 | AppcProviderLcm                  | 413 - appc-common - 1.3.0 -  -  | Execute of 'ActionIdentifiers{getVnfId=b0ced82c-b549-4404-970c-ea39437d993b, augmentations={}}' finished with status 100. Reason: ACCEPTED - request accepted

2018-12-14 10:34:59,031 | INFO  | ppc-dispatcher-7 | AAIService                       | 348 - org.onap.ccsdk.sli.adaptors.aai-service-provider - 0.2.3 - SvcLogicGraph [module=APPC, rpc=Generic_Restart, mode=sync, version=3.0.0, md5sum=9223fbd1e1a9645d78ba151289ae7c6e] - 10 (block) | Input - named-query-uuid : 037eb932-edac-48f5-9782-c19c0aa5a031

2018-12-14 10:34:59,031 | INFO  | ppc-dispatcher-7 | AAIService                       | 348 - org.onap.ccsdk.sli.adaptors.aai-service-provider - 0.2.3 - SvcLogicGraph [module=APPC, rpc=Generic_Restart, mode=sync, version=3.0.0, md5sum=9223fbd1e1a9645d78ba151289ae7c6e] - 10 (block) | Input - prefix : namedQueryData

2018-12-14 10:34:59,032 | INFO  | ppc-dispatcher-7 | AAIService                       | 348 - org.onap.ccsdk.sli.adaptors.aai-service-provider - 0.2.3 - SvcLogicGraph [module=APPC, rpc=Generic_Restart, mode=sync, version=3.0.0, md5sum=9223fbd1e1a9645d78ba151289ae7c6e] - 10 (block) | A&AI transaction :

2018-12-14 10:34:59,032 | INFO  | ppc-dispatcher-7 | AAIService                       | 348 - org.onap.ccsdk.sli.adaptors.aai-service-provider - 0.2.3 - SvcLogicGraph [module=APPC, rpc=Generic_Restart, mode=sync, version=3.0.0, md5sum=9223fbd1e1a9645d78ba151289ae7c6e] - 10 (block) | Request Time : 2018-12-14T10:34:59.031Z, Method : POST

2018-12-14 10:34:59,032 | INFO  | ppc-dispatcher-7 | AAIService                       | 348 - org.onap.ccsdk.sli.adaptors.aai-service-provider - 0.2.3 - SvcLogicGraph [module=APPC, rpc=Generic_Restart, mode=sync, version=3.0.0, md5sum=9223fbd1e1a9645d78ba151289ae7c6e] - 10 (block) | Request URL : https://aai.onap:8443/aai/search/named-query

2018-12-14 10:34:59,034 | INFO  | ppc-dispatcher-7 | AAIService                       | 348 - org.onap.ccsdk.sli.adaptors.aai-service-provider - 0.2.3 - SvcLogicGraph [module=APPC, rpc=Generic_Restart, mode=sync, version=3.0.0, md5sum=9223fbd1e1a9645d78ba151289ae7c6e] - 10 (block) | Input - data : {"query-parameters":{"named-query":{"named-query-uuid":"037eb932-edac-48f5-9782-c19c0aa5a031"}},"instance-filters":{"instance-filter":[{"generic-vnf":{"vnf-id":"b0ced82c-b549-4404-970c-ea39437d993b"}}]}}

2018-12-14 10:34:59,226 | INFO  | ppc-dispatcher-7 | metric                           | 339 - org.onap.ccsdk.sli.core.sli-common - 0.2.3 - SvcLogicGraph [module=APPC, rpc=Generic_Restart, mode=sync, version=3.0.0, md5sum=9223fbd1e1a9645d78ba151289ae7c6e] - 10 (block) | {"query-parameters":{"named-query":{"named-query-uuid":"037eb932-edac-48f5-9782-c19c0aa5a031"}},"instance-filters":{"instance-filter":[{"generic-vnf":{"vnf-id":"b0ced82c-b549-4404-970c-ea39437d993b"}}]}}

2018-12-14 10:34:59,228 | INFO  | ppc-dispatcher-7 | AAIService                       | 348 - org.onap.ccsdk.sli.adaptors.aai-service-provider - 0.2.3 - SvcLogicGraph [module=APPC, rpc=Generic_Restart, mode=sync, version=3.0.0, md5sum=9223fbd1e1a9645d78ba151289ae7c6e] - 10 (block) | Response code : 200, OK

2018-12-14 10:34:59,228 | INFO  | ppc-dispatcher-7 | AAIService                       | 348 - org.onap.ccsdk.sli.adaptors.aai-service-provider - 0.2.3 - SvcLogicGraph [module=APPC, rpc=Generic_Restart, mode=sync, version=3.0.0, md5sum=9223fbd1e1a9645d78ba151289ae7c6e] - 10 (block) | Response data : {"inventory-response-item":[{"model-name":"vlb_02","generic-vnf":{"vnf-id":"b0ced82c-b549-4404-970c-ea39437d993b","vnf-name":"vlb002","vnf-type":"vlb_02/vlb_02 0","service-id":"0e5ae4d0-1b6a-4691-bb2e-85159f2eab12","prov-status":"PREPROV","orchestration-status":"Created","in-maint":false,"is-closed-loop-disabled":false,"resource-version":"1544686507931","model-invariant-id":"6d82a5b8-172e-4bfd-aaa6-ae9cb91e3637","model-version-id":"f798d762-7b44-4d8e-898e-a1480defb42e","model-customization-id":"b640bddb-a246-4cfe-addd-b1c0c4e32e72","nf-type":"","nf-function":"","nf-role":"","nf-naming-code":""},"extra-properties":{}}]}

2018-12-14 10:34:59,253 | INFO  | ppc-dispatcher-7 | SvcLogicExprListener             | 339 - org.onap.ccsdk.sli.core.sli-common - 0.2.3 - SvcLogicGraph [module=APPC, rpc=Generic_Restart, mode=sync, version=3.0.0, md5sum=9223fbd1e1a9645d78ba151289ae7c6e] - 10 (block) | Outcome (success) not found, keys are { (failure)}

2018-12-14 10:34:59,253 | INFO  | ppc-dispatcher-7 | SvcLogicExprListener             | 339 - org.onap.ccsdk.sli.core.sli-common - 0.2.3 - SvcLogicGraph [module=APPC, rpc=Generic_Restart, mode=sync, version=3.0.0, md5sum=9223fbd1e1a9645d78ba151289ae7c6e] - 10 (block) | Outcome (Other) not found, keys are { (failure)}

2018-12-14 10:34:59,253 | INFO  | ppc-dispatcher-7 | SvcLogicExprListener             | 339 - org.onap.ccsdk.sli.core.sli-common - 0.2.3 - SvcLogicGraph [module=APPC, rpc=Generic_Restart, mode=sync, version=3.0.0, md5sum=9223fbd1e1a9645d78ba151289ae7c6e] - 10 (block) | Outcome (b0ced82c-b549-4404-970c-ea39437d993b) not found, keys are { ("")}

2018-12-14 10:34:59,253 | ERROR | ppc-dispatcher-7 | WorkFlowManagerImpl              | 413 - appc-common - 1.3.0 - SvcLogicGraph [module=APPC, rpc=Generic_Restart, mode=sync, version=3.0.0, md5sum=9223fbd1e1a9645d78ba151289ae7c6e] - 10 (block) | Error in DG

org.onap.ccsdk.sli.core.sli.SvcLogicException: Invalid index values [0,]

        at org.onap.ccsdk.sli.core.sli.provider.ForNodeExecutor.execute(ForNodeExecutor.java:68)

        at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.executeNode(SvcLogicServiceImpl.java:181)

        at org.onap.ccsdk.sli.core.sli.provider.BlockNodeExecutor.execute(BlockNodeExecutor.java:62)

        at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.executeNode(SvcLogicServiceImpl.java:181)

        at org.onap.ccsdk.sli.core.sli.provider.BlockNodeExecutor.execute(BlockNodeExecutor.java:62)

        at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.executeNode(SvcLogicServiceImpl.java:181)

        at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.execute(SvcLogicServiceImpl.java:158)

        at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.execute(SvcLogicServiceImpl.java:238)

        at org.onap.ccsdk.sli.core.sli.provider.SvcLogicServiceImpl.execute(SvcLogicServiceImpl.java:216)

        at Proxy6ad30656_b7dc_4fda_821b_79fcf1b9f3ea.execute(Unknown Source)

        at Proxyd2bcbfd5_429a_45d3_82e7_706069b232bc.execute(Unknown Source)

        at org.onap.appc.workflow.impl.WorkFlowManagerImpl.SVCLogicServiceExecute(WorkFlowManagerImpl.java:253)[427:appc-workflow-management-core:1.3.0]

        at org.onap.appc.workflow.impl.WorkFlowManagerImpl.executeWorkflow(WorkFlowManagerImpl.java:156)[427:appc-workflow-management-core:1.3.0]

        at Proxye0f56a4a_780f_422c_a970_14594cb9e6b7.executeWorkflow(Unknown Source)

        at Proxy9242ccbb_5919_4c75_8811_d9485b845e33.executeWorkflow(Unknown Source)

        at org.onap.appc.executor.impl.CommandTask.run(CommandTask.java:117)

        at org.onap.appc.executionqueue.impl.QueueManager.lambda$enqueueTask$0(QueueManager.java:105)

        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)[:1.8.0_171]

        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)[:1.8.0_171]

        at java.lang.Thread.run(Thread.java:748)[:1.8.0_171]

2018-12-14 10:34:59,380 | INFO  | ppc-dispatcher-7 | audit                            | 413 - appc-common - 1.3.0 - SvcLogicGraph [module=APPC, rpc=Generic_Restart, mode=sync, version=3.0.0, md5sum=9223fbd1e1a9645d78ba151289ae7c6e] - 10 (block) | APPC0090A Operation "Restart" for VNF type "b0ced82c-b549-4404-970c-ea39437d993b" from Source "664be3d2-6c12-4f4b-a3e7-c349acced2001" with RequestID "268a5e6d-3e8e-496c-b282-3c0a33be3c28" was started at "2018-12-14T10:34:58Z" and ended at "2018-12-14T10:34:59Z" with status code "200"
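
Worth noting: the named-query response a few lines above returns only the generic-vnf itself, with empty "extra-properties" and no vf-module or vserver entries, which lines up with the for-node then failing on "Invalid index values [0,]" (there is nothing for the DG to iterate over). A quick way to see what A&AI actually holds for this VNF is to query it directly with full depth; a sketch only, where the credentials and header values are placeholders/assumptions and the URL is the one already shown in the log:

# Hypothetical credentials and header values; URL taken from the A&AI log lines above.
curl -sk -u <aai-user>:<aai-password> \
  -H 'X-FromAppId: manual-check' -H 'X-TransactionId: manual-check-001' \
  -H 'Accept: application/json' \
  'https://aai.onap:8443/aai/v13/network/generic-vnfs/generic-vnf/b0ced82c-b549-4404-970c-ea39437d993b?depth=all'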

 

 



--
Regards,
Prakas P,
Network and Technology IT,
VDSI - Olympia – Chennai
VoIP: 72184 | Direct: +91 44 4394 2184
Mailto  | prakash.p@verizon.com


Re: Casablanca oof module pods are waiting on init status #oof

gulsum atici <gulsumatici@...>
 

Dear Ruoyu,

Please find the job details.

Thanks a lot.

ubuntu@kub1:~$ kubectl  describe  job  dev-oof-music-cassandra-job-config   -n onap 
Name:           dev-oof-music-cassandra-job-config
Namespace:      onap
Selector:       controller-uid=0e1b2cf2-08ec-11e9-b23c-028b437f6721
Labels:         app=music-cassandra-job-job
                chart=music-cassandra-job-3.0.0
                heritage=Tiller
                release=dev-oof
Annotations:    <none>
Parallelism:    1
Completions:    1
Pods Statuses:  0 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=music-cassandra-job-job
           controller-uid=0e1b2cf2-08ec-11e9-b23c-028b437f6721
           job-name=dev-oof-music-cassandra-job-config
           release=dev-oof
  Init Containers:
   music-cassandra-job-readiness:
    Image:      oomk8s/readiness-check:2.0.0
    Port:       <none>
    Host Port:  <none>
    Command:
      /root/ready.py
    Args:
      --container-name
      music-cassandra
    Environment:
      NAMESPACE:   (v1:metadata.namespace)
    Mounts:       <none>
  Containers:
   music-cassandra-job-update-job:
    Image:      nexus3.onap.org:10001/onap/music/cassandra_job:3.0.24
    Port:       <none>
    Host Port:  <none>
    Environment:
      CASS_HOSTNAME:  music-cassandra
      USERNAME:       nelson24
      PORT:           9042
      PASSWORD:       nelson24
      TIMEOUT:        30
      DELAY:          120
    Mounts:
      /cql/admin.cql from music-cassandra-job-cql (rw)
      /cql/admin_pw.cql from music-cassandra-job-cql (rw)
      /cql/extra from music-cassandra-job-extra-cql (rw)
  Volumes:
   music-cassandra-job-cql:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-oof-music-cassandra-job-cql
    Optional:  false
   music-cassandra-job-extra-cql:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-oof-music-cassandra-job-extra-cql
    Optional:  false
Events:        <none>
ubuntu@kub1:~$ 
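
One thing that stands out in the describe output above is "Pods Statuses:  0 Running / 0 Succeeded / 0 Failed" together with "Events: <none>", i.e. the config job currently has no pods at all. A small sketch for digging one level deeper, using the job-name label shown in the pod template above; illustrative only:

# List any pods the job controller created for this job, and dump their logs if present.
kubectl get pods -n onap -l job-name=dev-oof-music-cassandra-job-config -o wide
kubectl logs -n onap -l job-name=dev-oof-music-cassandra-job-config --tail=100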
 


Re: Login Credentials for SDC Cassandra Database

Raju
 

Hi Sonsino,

Thanks for the information. 

Please help me with the query below.

DCAE-DS (& Toscalab) should be used to generate the required models/blueprints, which can then be loaded into the DCAE-DS catalog.

I understand that this step is not automated.

Do we have any documentation in the ONAP wiki describing how to achieve it?


Thanks & Regards,
Thamlur Raju.

On Tue, Dec 25, 2018 at 4:55 PM Sonsino, Ofir <ofir.sonsino@...> wrote:

Hi Thamlur,

 

The flow type values are stored in the application.properties file.

The file is located in the JETTY_HOME/conf/dcae-be/ directory inside the Docker container.
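
If it helps, here is a quick way to pull that file out of the running container. This is a sketch only: the container name "dcae-be" and the assumption that JETTY_HOME is set in the container environment are illustrative, so substitute whatever 'docker ps' shows for the DCAE-DS backend:

# Hypothetical container name; JETTY_HOME is assumed to be set inside the container.
docker exec -it dcae-be sh -c 'cat "$JETTY_HOME"/conf/dcae-be/application.properties'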

 

Ofir

 

From: EMPOROPULO, VITALIY
Sent: Monday, December 24, 2018 4:09 PM
To: Thamlur Raju <thamlurraju468@...>; Sonsino, Ofir <ofir.sonsino@...>
Cc: onap-discuss@...
Subject: RE: [onap-discuss] Login Credentials for SDC Cassandra Database

 

Hi Thamlur,

 

I’m not sure I understand your questions.

 

@Sonsino, Ofir Can you help please?

 

Regards,

Vitaliy

 

From: Thamlur Raju <thamlurraju468@...>
Sent: Monday, December 24, 2018 11:50
To: Vitaliy Emporopulo <Vitaliy.Emporopulo@...>
Cc: onap-discuss@...
Subject: Re: [onap-discuss] Login Credentials for SDC Cassandra Database

 

Hi Vitaliy,

 

Thanks for the information.

 

1. Does the SDC Cassandra database store the DCAE-DS drop-down data (as shown below)?

 

image.png

 

 

2. Is this DCAE-DS data interconnected with the SDC deployment process?

 

 

Thanks & Regards,

Thamlur Raju.

 

On Mon, Dec 24, 2018 at 12:25 PM Vitaliy Emporopulo <Vitaliy.Emporopulo@...> wrote:

Hi Thamlur,

 

It’s asdc_user/Aa1234%^!

 

You can see it in the SDC configuration file https://gerrit.onap.org/r/gitweb?p=sdc.git;a=blob;f=sdc-os-chef/environments/Template.json;h=d212d1e98bd04224c6dcc1c4287ecb14df424dfe;hb=refs/heads/casablanca#l90
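
For reference, a minimal connection sketch using those credentials; the host below is a placeholder and 9042 is simply the default CQL port, so adjust to wherever SDC Cassandra is exposed in your deployment:

# Hypothetical host; the credentials are the defaults quoted above.
cqlsh -u asdc_user -p 'Aa1234%^!' <sdc-cassandra-host> 9042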

 

Regards,

Vitaly Emporopulo

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Raju
Sent: Monday, December 24, 2018 08:35
To: onap-discuss@...
Subject: [onap-discuss] Login Credentials for SDC Cassandra Database

 

Hi SDC Team,

 

Please help me with the default username and password for SDC Cassandra Database in Casablanca release.

 

 

Thanks & Regards,

Thamlur Raju.

This email and the information contained herein is proprietary and confidential and subject to the Amdocs Email Terms of Service, which you may review at https://www.amdocs.com/about/email-terms-of-service



Re: Casablanca oof module pods are waiting on init status #oof

Ying, Ruoyu
 

Hi,

 

Could you help by running ‘kubectl describe job dev-oof-music-cassandra-job-config -n onap’ to show the status of the job?

Thanks.

 

Best Regards,

Ruoyu

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of gulsum atici
Sent: Tuesday, December 25, 2018 7:56 PM
To: Borislav Glozman <Borislav.Glozman@...>; onap-discuss@...
Subject: Re: [onap-discuss] Casablanca oof module pods are waiting on init status #oof

 

Dear Borislav,

I grabbed some logs from the pod init containers. I have recreated all the pods, including the DBs, several times, but the situation has not changed.

dev-oof-cmso-db-0                                             1/1       Running                 0          33m       10.42.140.74    kub3      <none>

dev-oof-music-cassandra-0                                     1/1       Running                 0          32m       10.42.254.144   kub3      <none>

dev-oof-music-cassandra-1                                     1/1       Running                 0          1h        10.42.244.161   kub4      <none>

dev-oof-music-cassandra-2                                     1/1       Running                 0          1h        10.42.56.156    kub2      <none>

dev-oof-music-tomcat-685fd777c9-8qmll                         0/1       Init:1/3                3          35m       10.42.159.78    kub3      <none>

dev-oof-music-tomcat-685fd777c9-crdf6                         0/1       Init:1/3                3          35m       10.42.167.24    kub2      <none>

dev-oof-music-tomcat-84bc66c649-7xf8q                         0/1       Init:1/3                6          1h        10.42.19.117    kub1      <none>

dev-oof-music-tomcat-84bc66c649-lzmtj                         0/1       Init:1/3                6          1h        10.42.198.179   kub4      <none>

dev-oof-oof-8ff8b46f5-8sbwv                                   1/1       Running                 0          35m       10.42.35.56     kub3      <none>

dev-oof-oof-cmso-service-6c485cdff-pbzb6                      0/1       Init:CrashLoopBackOff   10         35m       10.42.224.93    kub3      <none>

dev-oof-oof-has-api-74c6695b64-kcr4n                          0/1       Init:0/3                2          35m       10.42.70.206    kub1      <none>

dev-oof-oof-has-controller-7cb97bbd4f-n7k9j                   0/1       Init:0/3                3          35m       10.42.194.39    kub3      <none>

dev-oof-oof-has-data-5b4f76fc7b-t92r6                         0/1       Init:0/4                3          35m       10.42.205.181   kub1      <none>

dev-oof-oof-has-healthcheck-8hqbt                             0/1       Init:0/1                3          35m       10.42.131.183   kub3      <none>

dev-oof-oof-has-onboard-mqglv                                 0/1       Init:0/2                3          35m       10.42.34.251    kub1      <none>

dev-oof-oof-has-reservation-5b899687db-dgjnh                  0/1       Init:0/4                3          35m       10.42.245.175   kub1      <none>

dev-oof-oof-has-solver-65486d5fc7-s84w4                       0/1       Init:0/4                3          35m       10.42.35.223    kub3      <none>

 

 

ubuntu@kub4:~$ kubectl  describe  pod  dev-oof-music-tomcat-685fd777c9-8qmll  -n  onap 

Name:           dev-oof-music-tomcat-685fd777c9-8qmll

Namespace:      onap

Node:           kub3/192.168.13.151

Start Time:     Tue, 25 Dec 2018 11:20:04 +0000

Labels:         app=music-tomcat

                pod-template-hash=2419833375

                release=dev-oof

Annotations:    <none>

Status:         Pending

IP:             10.42.159.78

Controlled By:  ReplicaSet/dev-oof-music-tomcat-685fd777c9

Init Containers:

  music-tomcat-zookeeper-readiness:

    Container ID:  docker://79b0507168a8590b10f0b1eb8c720e04cd173914b6365834d5b6c9c6f86a074d

    Image:         oomk8s/readiness-check:2.0.0

    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed

    Port:          <none>

    Host Port:     <none>

    Command:

      /root/ready.py

    Args:

      --container-name

      zookeeper

    State:          Terminated

      Reason:       Completed

      Exit Code:    0

      Started:      Tue, 25 Dec 2018 11:20:57 +0000

      Finished:     Tue, 25 Dec 2018 11:21:32 +0000

    Ready:          True

    Restart Count:  0

    Environment:

      NAMESPACE:  onap (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)

  music-tomcat-cassandra-readiness:

    Container ID:  docker://36b752b9b2d96d6437992cab6d63d32b80107799b34b0420056656fcc4476213

    Image:         oomk8s/readiness-check:2.0.0

    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed

    Port:          <none>

    Host Port:     <none>

    Command:

      /root/job_complete.py

    Args:

      -j

      dev-oof-music-cassandra-job-config

    State:          Running

      Started:      Tue, 25 Dec 2018 11:41:58 +0000

    Last State:     Terminated

      Reason:       Error

      Exit Code:    1

      Started:      Tue, 25 Dec 2018 11:31:49 +0000

      Finished:     Tue, 25 Dec 2018 11:41:53 +0000

    Ready:          False

    Restart Count:  2

    Environment:

      NAMESPACE:  onap (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)

  music-tomcat-war:

    Container ID:  

    Image:         nexus3.onap.org:10001/onap/music/music:3.0.24

    Image ID:      

    Port:          <none>

    Host Port:     <none>

    Command:

      cp

      /app/MUSIC.war

      /webapps

    State:          Waiting

      Reason:       PodInitializing

    Ready:          False

    Restart Count:  0

    Environment:    <none>

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)

      /webapps from shared-data (rw)

Containers:

  music-tomcat:

    Container ID:   

    Image:          nexus3.onap.org:10001/library/tomcat:8.5

    Image ID:       

    Port:           8080/TCP

    Host Port:      0/TCP

    State:          Waiting

      Reason:       PodInitializing

    Ready:          False

    Restart Count:  0

    Liveness:       tcp-socket :8080 delay=100s timeout=50s period=10s #success=1 #failure=3

    Readiness:      tcp-socket :8080 delay=100s timeout=50s period=10s #success=1 #failure=3

    Environment:    <none>

    Mounts:

      /etc/localtime from localtime (ro)

      /opt/app/music/etc/music.properties from properties-music (rw)

      /usr/local/tomcat/webapps from shared-data (rw)

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)

Conditions:

  Type              Status

  Initialized       False 

  Ready             False 

  ContainersReady   False 

  PodScheduled      True 

Volumes:

  shared-data:

    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)

    Medium:  

  localtime:

    Type:          HostPath (bare host directory volume)

    Path:          /etc/localtime

    HostPathType:  

  properties-music:

    Type:      ConfigMap (a volume populated by a ConfigMap)

    Name:      dev-oof-music-tomcat-configmap

    Optional:  false

  default-token-rm7hn:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  default-token-rm7hn

    Optional:    false

QoS Class:       BestEffort

Node-Selectors:  <none>

Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s

                 node.kubernetes.io/unreachable:NoExecute for 300s

Events:

  Type    Reason     Age               From               Message

  ----    ------     ----              ----               -------

  Normal  Scheduled  27m               default-scheduler  Successfully assigned onap/dev-oof-music-tomcat-685fd777c9-8qmll to kub3

  Normal  Pulling    26m               kubelet, kub3      pulling image "oomk8s/readiness-check:2.0.0"

  Normal  Pulled     26m               kubelet, kub3      Successfully pulled image "oomk8s/readiness-check:2.0.0"

  Normal  Created    26m               kubelet, kub3      Created container

  Normal  Started    26m               kubelet, kub3      Started container

  Normal  Pulling    5m (x3 over 25m)  kubelet, kub3      pulling image "oomk8s/readiness-check:2.0.0"

  Normal  Pulled     5m (x3 over 25m)  kubelet, kub3      Successfully pulled image "oomk8s/readiness-check:2.0.0"

  Normal  Created    5m (x3 over 25m)  kubelet, kub3      Created container

  Normal  Started    5m (x3 over 25m)  kubelet, kub3      Started container

ubuntu@kub4:~$ kubectl  logs -f  dev-oof-music-tomcat-685fd777c9-8qmll  -c music-tomcat-zookeeper-readiness -n onap 

2018-12-25 11:20:58,478 - INFO - Checking if zookeeper  is ready

2018-12-25 11:21:32,325 - INFO - zookeeper is ready!

2018-12-25 11:21:32,326 - INFO - zookeeper is ready!

ubuntu@kub4:~$ kubectl  logs -f  dev-oof-music-tomcat-685fd777c9-8qmll  -c  music-tomcat-cassandra-readiness  -n onap 

2018-12-25 11:41:59,688 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:42:00,014 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:42:05,019 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:42:05,305 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:42:10,310 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:42:10,681 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:42:15,686 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:42:16,192 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:42:21,198 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:42:22,058 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:42:27,063 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:42:28,051 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:42:33,054 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:42:35,798 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:42:40,802 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:42:42,112 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:42:47,117 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:42:48,173 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:42:53,176 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:42:54,378 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:42:59,382 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:43:00,239 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:43:05,245 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:43:05,925 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:43:10,930 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:43:11,930 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:43:16,934 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:43:19,212 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:43:24,217 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:43:25,102 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:43:30,106 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:43:32,245 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:43:37,254 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:43:37,534 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:43:42,539 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:43:44,826 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:43:49,830 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:43:50,486 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:43:55,490 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:43:56,398 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:44:01,403 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:44:02,134 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:44:07,139 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:44:07,834 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:44:12,837 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:44:13,026 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:44:18,030 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:44:19,561 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

2018-12-25 11:44:24,566 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete

2018-12-25 11:44:25,153 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet

ubuntu@kub4:~$ kubectl describe pod  dev-oof-oof-cmso-service-6c485cdff-pbzb6  -n onap 

Name:           dev-oof-oof-cmso-service-6c485cdff-pbzb6

Namespace:      onap

Node:           kub3/192.168.13.151

Start Time:     Tue, 25 Dec 2018 11:20:07 +0000

Labels:         app=oof-cmso-service

                pod-template-hash=270417899

                release=dev-oof

Annotations:    <none>

Status:         Pending

IP:             10.42.224.93

Controlled By:  ReplicaSet/dev-oof-oof-cmso-service-6c485cdff

Init Containers:

  oof-cmso-service-readiness:

    Container ID:  docker://bb4ccdfaf3ba6836e606685de4bbe069da2e5193f165ae466f768dad85b71908

    Image:         oomk8s/readiness-check:2.0.0

    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed

    Port:          <none>

    Host Port:     <none>

    Command:

      /root/ready.py

    Args:

      --container-name

      cmso-db

    State:          Terminated

      Reason:       Completed

      Exit Code:    0

      Started:      Tue, 25 Dec 2018 11:22:53 +0000

      Finished:     Tue, 25 Dec 2018 11:25:01 +0000

    Ready:          True

    Restart Count:  0

    Environment:

      NAMESPACE:  onap (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)

  db-init:

    Container ID:   docker://dbc9fadd1140584043b8f690974a4d626f64d12ef5002108b7b5c29148981e23

    Image:          nexus3.onap.org:10001/onap/optf-cmso-dbinit:1.0.1

    Image ID:       docker-pullable://nexus3.onap.org:10001/onap/optf-cmso-dbinit@sha256:c5722a319fb0d91ad4d533597cdee2b55fc5c51d0a8740cf02cbaa1969c8554f

    Port:           <none>

    Host Port:      <none>

    State:          Waiting

      Reason:       CrashLoopBackOff

    Last State:     Terminated

      Reason:       Error

      Exit Code:    1

      Started:      Tue, 25 Dec 2018 11:48:31 +0000

      Finished:     Tue, 25 Dec 2018 11:48:41 +0000

    Ready:          False

    Restart Count:  9

    Environment:

      DB_HOST:      oof-cmso-dbhost.onap

      DB_PORT:      3306

      DB_USERNAME:  root

      DB_SCHEMA:    cmso

      DB_PASSWORD:  <set to the key 'db-root-password' in secret 'dev-oof-cmso-db'>  Optional: false

    Mounts:

      /share/etc/config from dev-oof-oof-cmso-service-config (rw)

      /share/logs from dev-oof-oof-cmso-service-logs (rw)

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)

Containers:

  oof-cmso-service:

    Container ID:   

    Image:          nexus3.onap.org:10001/onap/optf-cmso-service:1.0.1

    Image ID:       

    Port:           8080/TCP

    Host Port:      0/TCP

    State:          Waiting

      Reason:       PodInitializing

    Ready:          False

    Restart Count:  0

    Liveness:       tcp-socket :8080 delay=120s timeout=50s period=10s #success=1 #failure=3

    Readiness:      tcp-socket :8080 delay=100s timeout=50s period=10s #success=1 #failure=3

    Environment:

      DB_HOST:      oof-cmso-dbhost.onap

      DB_PORT:      3306

      DB_USERNAME:  cmso-admin

      DB_SCHEMA:    cmso

      DB_PASSWORD:  <set to the key 'user-password' in secret 'dev-oof-cmso-db'>  Optional: false

    Mounts:

      /share/debug-logs from dev-oof-oof-cmso-service-logs (rw)

      /share/etc/config from dev-oof-oof-cmso-service-config (rw)

      /share/logs from dev-oof-oof-cmso-service-logs (rw)

      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)

Conditions:

  Type              Status

  Initialized       False 

  Ready             False 

  ContainersReady   False 

  PodScheduled      True 

Volumes:

  dev-oof-oof-cmso-service-config:

    Type:      ConfigMap (a volume populated by a ConfigMap)

    Name:      dev-oof-oof-cmso-service

    Optional:  false

  dev-oof-oof-cmso-service-logs:

    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)

    Medium:  

  default-token-rm7hn:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  default-token-rm7hn

    Optional:    false

QoS Class:       BestEffort

Node-Selectors:  <none>

Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s

                 node.kubernetes.io/unreachable:NoExecute for 300s

Events:

  Type     Reason                  Age                From               Message

  ----     ------                  ----               ----               -------

  Normal   Scheduled               30m                default-scheduler  Successfully assigned onap/dev-oof-oof-cmso-service-6c485cdff-pbzb6 to kub3

  Warning  FailedCreatePodSandBox  29m                kubelet, kub3      Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "7d02bb1144aaaf2479a741c971bad617ea532717e7e72d71e2bfeeac992a7451" network for pod "dev-oof-oof-cmso-service-6c485cdff-pbzb6": NetworkPlugin cni failed to set up pod "dev-oof-oof-cmso-service-6c485cdff-pbzb6_onap" network: No MAC address found, failed to clean up sandbox container "7d02bb1144aaaf2479a741c971bad617ea532717e7e72d71e2bfeeac992a7451" network for pod "dev-oof-oof-cmso-service-6c485cdff-pbzb6": NetworkPlugin cni failed to teardown pod "dev-oof-oof-cmso-service-6c485cdff-pbzb6_onap" network: failed to get IP addresses for "eth0": <nil>]

  Normal   SandboxChanged          29m                kubelet, kub3      Pod sandbox changed, it will be killed and re-created.

  Normal   Pulling                 27m                kubelet, kub3      pulling image "oomk8s/readiness-check:2.0.0"

  Normal   Pulled                  27m                kubelet, kub3      Successfully pulled image "oomk8s/readiness-check:2.0.0"

  Normal   Created                 27m                kubelet, kub3      Created container

  Normal   Started                 27m                kubelet, kub3      Started container

  Normal   Pulling                 23m (x4 over 25m)  kubelet, kub3      pulling image "nexus3.onap.org:10001/onap/optf-cmso-dbinit:1.0.1"

  Normal   Pulled                  23m (x4 over 25m)  kubelet, kub3      Successfully pulled image "nexus3.onap.org:10001/onap/optf-cmso-dbinit:1.0.1"

  Normal   Created                 23m (x4 over 25m)  kubelet, kub3      Created container

  Normal   Started                 23m (x4 over 25m)  kubelet, kub3      Started container

  Warning  BackOff                 4m (x80 over 24m)  kubelet, kub3      Back-off restarting failed container

ubuntu@kub4:~$ kubectl logs  -f  dev-oof-oof-cmso-service-6c485cdff-pbzb6  -c oof-cmso-service-readiness -n onap 

2018-12-25 11:22:54,683 - INFO - Checking if cmso-db  is ready

2018-12-25 11:23:02,186 - INFO - Checking if cmso-db  is ready

2018-12-25 11:23:09,950 - INFO - Checking if cmso-db  is ready

2018-12-25 11:23:12,938 - INFO - cmso-db is not ready.

2018-12-25 11:23:17,963 - INFO - Checking if cmso-db  is ready

2018-12-25 11:23:20,091 - INFO - cmso-db is not ready.

2018-12-25 11:23:25,111 - INFO - Checking if cmso-db  is ready

2018-12-25 11:23:27,315 - INFO - cmso-db is not ready.

2018-12-25 11:23:32,329 - INFO - Checking if cmso-db  is ready

2018-12-25 11:23:35,390 - INFO - cmso-db is not ready.

2018-12-25 11:23:40,407 - INFO - Checking if cmso-db  is ready

2018-12-25 11:23:43,346 - INFO - cmso-db is not ready.

2018-12-25 11:23:48,371 - INFO - Checking if cmso-db  is ready

2018-12-25 11:23:53,848 - INFO - cmso-db is not ready.

2018-12-25 11:23:58,870 - INFO - Checking if cmso-db  is ready

2018-12-25 11:24:02,188 - INFO - cmso-db is not ready.

2018-12-25 11:24:07,207 - INFO - Checking if cmso-db  is ready

2018-12-25 11:24:10,598 - INFO - cmso-db is not ready.

2018-12-25 11:24:15,622 - INFO - Checking if cmso-db  is ready

2018-12-25 11:24:18,936 - INFO - cmso-db is not ready.

2018-12-25 11:24:23,955 - INFO - Checking if cmso-db  is ready

2018-12-25 11:24:26,794 - INFO - cmso-db is not ready.

2018-12-25 11:24:31,813 - INFO - Checking if cmso-db  is ready

2018-12-25 11:24:35,529 - INFO - cmso-db is not ready.

2018-12-25 11:24:40,566 - INFO - Checking if cmso-db  is ready

2018-12-25 11:24:44,374 - INFO - cmso-db is not ready.

2018-12-25 11:24:49,403 - INFO - Checking if cmso-db  is ready

2018-12-25 11:24:53,222 - INFO - cmso-db is not ready.

2018-12-25 11:24:58,238 - INFO - Checking if cmso-db  is ready

2018-12-25 11:25:01,340 - INFO - cmso-db is ready!

ubuntu@kub4:~$ kubectl logs  -f  dev-oof-oof-cmso-service-6c485cdff-pbzb6  -c  db-init  -n onap 

VM_ARGS=

 

  .   ____          _            __ _ _

 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \

( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \

 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )

  '  |____| .__|_| |_|_| |_\__, | / / / /

 =========|_|==============|___/=/_/_/_/

 :: Spring Boot ::        (v2.0.6.RELEASE)

 

2018-12-25 11:48:36.187  INFO 8 --- [           main] o.o.o.c.liquibase.LiquibaseApplication   : Starting LiquibaseApplication on dev-oof-oof-cmso-service-6c485cdff-pbzb6 with PID 8 (/opt/app/cmso-dbinit/app.jar started by root in /opt/app/cmso-dbinit)

2018-12-25 11:48:36.199  INFO 8 --- [           main] o.o.o.c.liquibase.LiquibaseApplication   : No active profile set, falling back to default profiles: default

2018-12-25 11:48:36.310  INFO 8 --- [           main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@d44fc21: startup date [Tue Dec 25 11:48:36 UTC 2018]; root of context hierarchy

2018-12-25 11:48:40.336  INFO 8 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Starting...

2018-12-25 11:48:40.754  INFO 8 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Start completed.

2018-12-25 11:48:41.044  WARN 8 --- [           main] s.c.a.AnnotationConfigApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [org/onap/optf/cmso/liquibase/LiquibaseData.class]: Invocation of init method failed; nested exception is liquibase.exception.LockException: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction

2018-12-25 11:48:41.045  INFO 8 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Shutdown initiated...

2018-12-25 11:48:41.109  INFO 8 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Shutdown completed.

2018-12-25 11:48:41.177  INFO 8 --- [           main] ConditionEvaluationReportLoggingListener : 

 

Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.

2018-12-25 11:48:41.223 ERROR 8 --- [           main] o.s.boot.SpringApplication               : Application run failed

 

org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [org/onap/optf/cmso/liquibase/LiquibaseData.class]: Invocation of init method failed; nested exception is liquibase.exception.LockException: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction

at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1694) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:573) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:495) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:317) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:315) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:759) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867) ~[spring-context-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:548) ~[spring-context-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:754) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]

at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:386) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]

at org.springframework.boot.SpringApplication.run(SpringApplication.java:307) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]

at org.springframework.boot.SpringApplication.run(SpringApplication.java:1242) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]

at org.springframework.boot.SpringApplication.run(SpringApplication.java:1230) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]

at org.onap.optf.cmso.liquibase.LiquibaseApplication.main(LiquibaseApplication.java:45) [classes!/:na]

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_181]

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_181]

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_181]

at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_181]

at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48) [app.jar:na]

at org.springframework.boot.loader.Launcher.launch(Launcher.java:87) [app.jar:na]

at org.springframework.boot.loader.Launcher.launch(Launcher.java:50) [app.jar:na]

at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51) [app.jar:na]

Caused by: liquibase.exception.LockException: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction

at liquibase.lockservice.StandardLockService.acquireLock(StandardLockService.java:242) ~[liquibase-core-3.5.5.jar!/:na]

at liquibase.lockservice.StandardLockService.waitForLock(StandardLockService.java:170) ~[liquibase-core-3.5.5.jar!/:na]

at liquibase.Liquibase.update(Liquibase.java:196) ~[liquibase-core-3.5.5.jar!/:na]

at liquibase.Liquibase.update(Liquibase.java:192) ~[liquibase-core-3.5.5.jar!/:na]

at liquibase.integration.spring.SpringLiquibase.performUpdate(SpringLiquibase.java:431) ~[liquibase-core-3.5.5.jar!/:na]

at liquibase.integration.spring.SpringLiquibase.afterPropertiesSet(SpringLiquibase.java:388) ~[liquibase-core-3.5.5.jar!/:na]

at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1753) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1690) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]

... 23 common frames omitted

Caused by: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction

at liquibase.database.AbstractJdbcDatabase.commit(AbstractJdbcDatabase.java:1159) ~[liquibase-core-3.5.5.jar!/:na]

at liquibase.lockservice.StandardLockService.acquireLock(StandardLockService.java:233) ~[liquibase-core-3.5.5.jar!/:na]

... 30 common frames omitted

Caused by: liquibase.exception.DatabaseException: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction

at liquibase.database.jvm.JdbcConnection.commit(JdbcConnection.java:126) ~[liquibase-core-3.5.5.jar!/:na]

at liquibase.database.AbstractJdbcDatabase.commit(AbstractJdbcDatabase.java:1157) ~[liquibase-core-3.5.5.jar!/:na]

... 31 common frames omitted

Caused by: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction

at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.get(ExceptionMapper.java:179) ~[mariadb-java-client-2.2.6.jar!/:na]

at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.getException(ExceptionMapper.java:110) ~[mariadb-java-client-2.2.6.jar!/:na]

at org.mariadb.jdbc.MariaDbStatement.executeExceptionEpilogue(MariaDbStatement.java:228) ~[mariadb-java-client-2.2.6.jar!/:na]

at org.mariadb.jdbc.MariaDbStatement.executeInternal(MariaDbStatement.java:334) ~[mariadb-java-client-2.2.6.jar!/:na]

at org.mariadb.jdbc.MariaDbStatement.execute(MariaDbStatement.java:386) ~[mariadb-java-client-2.2.6.jar!/:na]

at org.mariadb.jdbc.MariaDbConnection.commit(MariaDbConnection.java:709) ~[mariadb-java-client-2.2.6.jar!/:na]

at com.zaxxer.hikari.pool.ProxyConnection.commit(ProxyConnection.java:368) ~[HikariCP-2.7.9.jar!/:na]

at com.zaxxer.hikari.pool.HikariProxyConnection.commit(HikariProxyConnection.java) ~[HikariCP-2.7.9.jar!/:na]

at liquibase.database.jvm.JdbcConnection.commit(JdbcConnection.java:123) ~[liquibase-core-3.5.5.jar!/:na]

... 32 common frames omitted

Caused by: java.sql.SQLException: Deadlock found when trying to get lock; try restarting transaction

Query is: COMMIT

at org.mariadb.jdbc.internal.util.LogQueryTool.exceptionWithQuery(LogQueryTool.java:119) ~[mariadb-java-client-2.2.6.jar!/:na]

at org.mariadb.jdbc.internal.protocol.AbstractQueryProtocol.executeQuery(AbstractQueryProtocol.java:200) ~[mariadb-java-client-2.2.6.jar!/:na]

at org.mariadb.jdbc.MariaDbStatement.executeInternal(MariaDbStatement.java:328) ~[mariadb-java-client-2.2.6.jar!/:na]

... 37 common frames omitted

 

ubuntu@kub4:~$ 
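The db-init failure above is Liquibase failing while acquiring its changelog lock (the MariaDB deadlock surfaces on the COMMIT issued from StandardLockService.acquireLock). A minimal way to check whether a stale lock row is left behind in the cmso schema, sketched under the assumption that the mysql client is available inside the running dev-oof-cmso-db-0 pod and that the root password really is the 'db-root-password' key of the 'dev-oof-cmso-db' secret referenced by the db-init container:

# read the root password from the secret referenced by the db-init container
DB_ROOT_PASSWORD=$(kubectl get secret dev-oof-cmso-db -n onap \
  -o jsonpath='{.data.db-root-password}' | base64 -d)

# inspect Liquibase's lock table in the cmso schema
kubectl exec dev-oof-cmso-db-0 -n onap -- \
  mysql -uroot -p"$DB_ROOT_PASSWORD" cmso \
  -e "SELECT * FROM DATABASECHANGELOGLOCK;"

# if a row is stuck with LOCKED=1 while no db-init container is running, release it
kubectl exec dev-oof-cmso-db-0 -n onap -- \
  mysql -uroot -p"$DB_ROOT_PASSWORD" cmso \
  -e "UPDATE DATABASECHANGELOGLOCK SET LOCKED=0, LOCKGRANTED=NULL, LOCKEDBY=NULL WHERE ID=1;"

Once the lock is clear, deleting the dev-oof-oof-cmso-service pod lets the db-init init container retry from a clean state.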

 

 


Re: Casablanca oof module pods are waiting on init status #oof

Borislav Glozman
 

Hi,

 

You will probably need further assistance from the OOF team regarding the exception.

I also did not see the dev-oof-music-cassandra-job-config job.
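If it helps, a quick way to confirm whether that job was created at all and why it has not completed, assuming it is expected in the onap namespace under the dev-oof release (as the readiness init container's arguments suggest):

# does the job exist, and how many completions does it report?
kubectl get jobs -n onap | grep music-cassandra-job-config

# events and completion status of the job
kubectl describe job dev-oof-music-cassandra-job-config -n onap

# logs of the pod(s) the job created (job pods carry the job-name label)
kubectl logs -n onap -l job-name=dev-oof-music-cassandra-job-config --tail=100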

 

Thanks,

Borislav Glozman

O:+972.9.776.1988

M:+972.52.2835726


Amdocs a Platinum member of ONAP

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of gulsum atici
Sent: Tuesday, December 25, 2018 1:56 PM
To: Borislav Glozman <Borislav.Glozman@...>; onap-discuss@...
Subject: Re: [onap-discuss] Casablanca oof module pods are waiting on init status #oof

 



Re: Casablanca oof module pods are waiting on init status #oof

gulsum atici <gulsumatici@...>
 

Dear Borislav,

I grabbed some logs from the pod init containers. I have recreated all of the pods, including the DBs, several times; however, the situation still hasn't changed.

dev-oof-cmso-db-0                                             1/1       Running                 0          33m       10.42.140.74    kub3      <none>
dev-oof-music-cassandra-0                                     1/1       Running                 0          32m       10.42.254.144   kub3      <none>
dev-oof-music-cassandra-1                                     1/1       Running                 0          1h        10.42.244.161   kub4      <none>
dev-oof-music-cassandra-2                                     1/1       Running                 0          1h        10.42.56.156    kub2      <none>
dev-oof-music-tomcat-685fd777c9-8qmll                         0/1       Init:1/3                3          35m       10.42.159.78    kub3      <none>
dev-oof-music-tomcat-685fd777c9-crdf6                         0/1       Init:1/3                3          35m       10.42.167.24    kub2      <none>
dev-oof-music-tomcat-84bc66c649-7xf8q                         0/1       Init:1/3                6          1h        10.42.19.117    kub1      <none>
dev-oof-music-tomcat-84bc66c649-lzmtj                         0/1       Init:1/3                6          1h        10.42.198.179   kub4      <none>
dev-oof-oof-8ff8b46f5-8sbwv                                   1/1       Running                 0          35m       10.42.35.56     kub3      <none>
dev-oof-oof-cmso-service-6c485cdff-pbzb6                      0/1       Init:CrashLoopBackOff   10         35m       10.42.224.93    kub3      <none>
dev-oof-oof-has-api-74c6695b64-kcr4n                          0/1       Init:0/3                2          35m       10.42.70.206    kub1      <none>
dev-oof-oof-has-controller-7cb97bbd4f-n7k9j                   0/1       Init:0/3                3          35m       10.42.194.39    kub3      <none>
dev-oof-oof-has-data-5b4f76fc7b-t92r6                         0/1       Init:0/4                3          35m       10.42.205.181   kub1      <none>
dev-oof-oof-has-healthcheck-8hqbt                             0/1       Init:0/1                3          35m       10.42.131.183   kub3      <none>
dev-oof-oof-has-onboard-mqglv                                 0/1       Init:0/2                3          35m       10.42.34.251    kub1      <none>
dev-oof-oof-has-reservation-5b899687db-dgjnh                  0/1       Init:0/4                3          35m       10.42.245.175   kub1      <none>
dev-oof-oof-has-solver-65486d5fc7-s84w4                       0/1       Init:0/4                3          35m       10.42.35.223    kub3      <none>
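For reference, a compact way to see which init container each stuck pod is waiting on, and in what state, without reading the full describe output (a sketch; substitute any of the pod names listed above):

kubectl get pod dev-oof-oof-cmso-service-6c485cdff-pbzb6 -n onap \
  -o jsonpath='{range .status.initContainerStatuses[*]}{.name}{"\t"}{.state}{"\t"}{.lastState}{"\n"}{end}'

This prints each init container's name, its current state, and its last terminated state (including the exit code for the containers behind Init:CrashLoopBackOff).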
 

ubuntu@kub4:~$ kubectl  describe  pod  dev-oof-music-tomcat-685fd777c9-8qmll  -n  onap 
Name:           dev-oof-music-tomcat-685fd777c9-8qmll
Namespace:      onap
Node:           kub3/192.168.13.151
Start Time:     Tue, 25 Dec 2018 11:20:04 +0000
Labels:         app=music-tomcat
                pod-template-hash=2419833375
                release=dev-oof
Annotations:    <none>
Status:         Pending
IP:             10.42.159.78
Controlled By:  ReplicaSet/dev-oof-music-tomcat-685fd777c9
Init Containers:
  music-tomcat-zookeeper-readiness:
    Container ID:  docker://79b0507168a8590b10f0b1eb8c720e04cd173914b6365834d5b6c9c6f86a074d
    Image:         oomk8s/readiness-check:2.0.0
    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed
    Port:          <none>
    Host Port:     <none>
    Command:
      /root/ready.py
    Args:
      --container-name
      zookeeper
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 25 Dec 2018 11:20:57 +0000
      Finished:     Tue, 25 Dec 2018 11:21:32 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)
  music-tomcat-cassandra-readiness:
    Container ID:  docker://36b752b9b2d96d6437992cab6d63d32b80107799b34b0420056656fcc4476213
    Image:         oomk8s/readiness-check:2.0.0
    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed
    Port:          <none>
    Host Port:     <none>
    Command:
      /root/job_complete.py
    Args:
      -j
      dev-oof-music-cassandra-job-config
    State:          Running
      Started:      Tue, 25 Dec 2018 11:41:58 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 25 Dec 2018 11:31:49 +0000
      Finished:     Tue, 25 Dec 2018 11:41:53 +0000
    Ready:          False
    Restart Count:  2
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)
  music-tomcat-war:
    Container ID:  
    Image:         nexus3.onap.org:10001/onap/music/music:3.0.24
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
      /app/MUSIC.war
      /webapps
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)
      /webapps from shared-data (rw)
Containers:
  music-tomcat:
    Container ID:   
    Image:          nexus3.onap.org:10001/library/tomcat:8.5
    Image ID:       
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Liveness:       tcp-socket :8080 delay=100s timeout=50s period=10s #success=1 #failure=3
    Readiness:      tcp-socket :8080 delay=100s timeout=50s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /etc/localtime from localtime (ro)
      /opt/app/music/etc/music.properties from properties-music (rw)
      /usr/local/tomcat/webapps from shared-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  shared-data:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:  
  localtime:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/localtime
    HostPathType:  
  properties-music:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-oof-music-tomcat-configmap
    Optional:  false
  default-token-rm7hn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rm7hn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age               From               Message
  ----    ------     ----              ----               -------
  Normal  Scheduled  27m               default-scheduler  Successfully assigned onap/dev-oof-music-tomcat-685fd777c9-8qmll to kub3
  Normal  Pulling    26m               kubelet, kub3      pulling image "oomk8s/readiness-check:2.0.0"
  Normal  Pulled     26m               kubelet, kub3      Successfully pulled image "oomk8s/readiness-check:2.0.0"
  Normal  Created    26m               kubelet, kub3      Created container
  Normal  Started    26m               kubelet, kub3      Started container
  Normal  Pulling    5m (x3 over 25m)  kubelet, kub3      pulling image "oomk8s/readiness-check:2.0.0"
  Normal  Pulled     5m (x3 over 25m)  kubelet, kub3      Successfully pulled image "oomk8s/readiness-check:2.0.0"
  Normal  Created    5m (x3 over 25m)  kubelet, kub3      Created container
  Normal  Started    5m (x3 over 25m)  kubelet, kub3      Started container
ubuntu@kub4:~$ kubectl  logs -f  dev-oof-music-tomcat-685fd777c9-8qmll  -c music-tomcat-zookeeper-readiness -n onap 
2018-12-25 11:20:58,478 - INFO - Checking if zookeeper  is ready
2018-12-25 11:21:32,325 - INFO - zookeeper is ready!
2018-12-25 11:21:32,326 - INFO - zookeeper is ready!
ubuntu@kub4:~$ kubectl  logs -f  dev-oof-music-tomcat-685fd777c9-8qmll  -c  music-tomcat-cassandra-readiness  -n onap 
2018-12-25 11:41:59,688 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:42:00,014 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:42:05,019 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:42:05,305 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:42:10,310 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:42:10,681 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:42:15,686 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:42:16,192 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:42:21,198 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:42:22,058 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:42:27,063 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:42:28,051 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:42:33,054 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:42:35,798 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:42:40,802 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:42:42,112 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:42:47,117 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:42:48,173 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:42:53,176 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:42:54,378 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:42:59,382 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:43:00,239 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:43:05,245 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:43:05,925 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:43:10,930 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:43:11,930 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:43:16,934 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:43:19,212 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:43:24,217 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:43:25,102 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:43:30,106 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:43:32,245 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:43:37,254 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:43:37,534 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:43:42,539 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:43:44,826 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:43:49,830 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:43:50,486 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:43:55,490 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:43:56,398 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:44:01,403 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:44:02,134 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:44:07,139 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:44:07,834 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:44:12,837 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:44:13,026 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:44:18,030 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:44:19,561 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
2018-12-25 11:44:24,566 - INFO - Checking if dev-oof-music-cassandra-job-config  is complete
2018-12-25 11:44:25,153 - INFO - dev-oof-music-cassandra-job-config has not succeeded yet
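A quick way to see why this readiness check keeps looping is to query the job itself rather than the waiting pod. A minimal sketch, assuming the job keeps the name shown in the log above and lives in the onap namespace:

# Check whether the config job has completed or is stuck/failing
kubectl get job dev-oof-music-cassandra-job-config -n onap

# Find the job's own pod and read its log to see what is blocking it
kubectl get pods -n onap | grep music-cassandra-job
kubectl logs -n onap <music-cassandra-job pod name>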

ubuntu@kub4:~$ kubectl describe pod  dev-oof-oof-cmso-service-6c485cdff-pbzb6  -n onap 
Name:           dev-oof-oof-cmso-service-6c485cdff-pbzb6
Namespace:      onap
Node:           kub3/192.168.13.151
Start Time:     Tue, 25 Dec 2018 11:20:07 +0000
Labels:         app=oof-cmso-service
                pod-template-hash=270417899
                release=dev-oof
Annotations:    <none>
Status:         Pending
IP:             10.42.224.93
Controlled By:  ReplicaSet/dev-oof-oof-cmso-service-6c485cdff
Init Containers:
  oof-cmso-service-readiness:
    Container ID:  docker://bb4ccdfaf3ba6836e606685de4bbe069da2e5193f165ae466f768dad85b71908
    Image:         oomk8s/readiness-check:2.0.0
    Image ID:      docker-pullable://oomk8s/readiness-check@sha256:7daa08b81954360a1111d03364febcb3dcfeb723bcc12ce3eb3ed3e53f2323ed
    Port:          <none>
    Host Port:     <none>
    Command:
      /root/ready.py
    Args:
      --container-name
      cmso-db
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 25 Dec 2018 11:22:53 +0000
      Finished:     Tue, 25 Dec 2018 11:25:01 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)
  db-init:
    Container ID:   docker://dbc9fadd1140584043b8f690974a4d626f64d12ef5002108b7b5c29148981e23
    Image:          nexus3.onap.org:10001/onap/optf-cmso-dbinit:1.0.1
    Image ID:       docker-pullable://nexus3.onap.org:10001/onap/optf-cmso-dbinit@sha256:c5722a319fb0d91ad4d533597cdee2b55fc5c51d0a8740cf02cbaa1969c8554f
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 25 Dec 2018 11:48:31 +0000
      Finished:     Tue, 25 Dec 2018 11:48:41 +0000
    Ready:          False
    Restart Count:  9
    Environment:
      DB_HOST:      oof-cmso-dbhost.onap
      DB_PORT:      3306
      DB_USERNAME:  root
      DB_SCHEMA:    cmso
      DB_PASSWORD:  <set to the key 'db-root-password' in secret 'dev-oof-cmso-db'>  Optional: false
    Mounts:
      /share/etc/config from dev-oof-oof-cmso-service-config (rw)
      /share/logs from dev-oof-oof-cmso-service-logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)
Containers:
  oof-cmso-service:
    Container ID:   
    Image:          nexus3.onap.org:10001/onap/optf-cmso-service:1.0.1
    Image ID:       
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Liveness:       tcp-socket :8080 delay=120s timeout=50s period=10s #success=1 #failure=3
    Readiness:      tcp-socket :8080 delay=100s timeout=50s period=10s #success=1 #failure=3
    Environment:
      DB_HOST:      oof-cmso-dbhost.onap
      DB_PORT:      3306
      DB_USERNAME:  cmso-admin
      DB_SCHEMA:    cmso
      DB_PASSWORD:  <set to the key 'user-password' in secret 'dev-oof-cmso-db'>  Optional: false
    Mounts:
      /share/debug-logs from dev-oof-oof-cmso-service-logs (rw)
      /share/etc/config from dev-oof-oof-cmso-service-config (rw)
      /share/logs from dev-oof-oof-cmso-service-logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rm7hn (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  dev-oof-oof-cmso-service-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-oof-oof-cmso-service
    Optional:  false
  dev-oof-oof-cmso-service-logs:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:  
  default-token-rm7hn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rm7hn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                From               Message
  ----     ------                  ----               ----               -------
  Normal   Scheduled               30m                default-scheduler  Successfully assigned onap/dev-oof-oof-cmso-service-6c485cdff-pbzb6 to kub3
  Warning  FailedCreatePodSandBox  29m                kubelet, kub3      Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "7d02bb1144aaaf2479a741c971bad617ea532717e7e72d71e2bfeeac992a7451" network for pod "dev-oof-oof-cmso-service-6c485cdff-pbzb6": NetworkPlugin cni failed to set up pod "dev-oof-oof-cmso-service-6c485cdff-pbzb6_onap" network: No MAC address found, failed to clean up sandbox container "7d02bb1144aaaf2479a741c971bad617ea532717e7e72d71e2bfeeac992a7451" network for pod "dev-oof-oof-cmso-service-6c485cdff-pbzb6": NetworkPlugin cni failed to teardown pod "dev-oof-oof-cmso-service-6c485cdff-pbzb6_onap" network: failed to get IP addresses for "eth0": <nil>]
  Normal   SandboxChanged          29m                kubelet, kub3      Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling                 27m                kubelet, kub3      pulling image "oomk8s/readiness-check:2.0.0"
  Normal   Pulled                  27m                kubelet, kub3      Successfully pulled image "oomk8s/readiness-check:2.0.0"
  Normal   Created                 27m                kubelet, kub3      Created container
  Normal   Started                 27m                kubelet, kub3      Started container
  Normal   Pulling                 23m (x4 over 25m)  kubelet, kub3      pulling image "nexus3.onap.org:10001/onap/optf-cmso-dbinit:1.0.1"
  Normal   Pulled                  23m (x4 over 25m)  kubelet, kub3      Successfully pulled image "nexus3.onap.org:10001/onap/optf-cmso-dbinit:1.0.1"
  Normal   Created                 23m (x4 over 25m)  kubelet, kub3      Created container
  Normal   Started                 23m (x4 over 25m)  kubelet, kub3      Started container
  Warning  BackOff                 4m (x80 over 24m)  kubelet, kub3      Back-off restarting failed container
ubuntu@kub4:~$ kubectl logs  -f  dev-oof-oof-cmso-service-6c485cdff-pbzb6  -c oof-cmso-service-readiness -n onap 
2018-12-25 11:22:54,683 - INFO - Checking if cmso-db  is ready
2018-12-25 11:23:02,186 - INFO - Checking if cmso-db  is ready
2018-12-25 11:23:09,950 - INFO - Checking if cmso-db  is ready
2018-12-25 11:23:12,938 - INFO - cmso-db is not ready.
2018-12-25 11:23:17,963 - INFO - Checking if cmso-db  is ready
2018-12-25 11:23:20,091 - INFO - cmso-db is not ready.
2018-12-25 11:23:25,111 - INFO - Checking if cmso-db  is ready
2018-12-25 11:23:27,315 - INFO - cmso-db is not ready.
2018-12-25 11:23:32,329 - INFO - Checking if cmso-db  is ready
2018-12-25 11:23:35,390 - INFO - cmso-db is not ready.
2018-12-25 11:23:40,407 - INFO - Checking if cmso-db  is ready
2018-12-25 11:23:43,346 - INFO - cmso-db is not ready.
2018-12-25 11:23:48,371 - INFO - Checking if cmso-db  is ready
2018-12-25 11:23:53,848 - INFO - cmso-db is not ready.
2018-12-25 11:23:58,870 - INFO - Checking if cmso-db  is ready
2018-12-25 11:24:02,188 - INFO - cmso-db is not ready.
2018-12-25 11:24:07,207 - INFO - Checking if cmso-db  is ready
2018-12-25 11:24:10,598 - INFO - cmso-db is not ready.
2018-12-25 11:24:15,622 - INFO - Checking if cmso-db  is ready
2018-12-25 11:24:18,936 - INFO - cmso-db is not ready.
2018-12-25 11:24:23,955 - INFO - Checking if cmso-db  is ready
2018-12-25 11:24:26,794 - INFO - cmso-db is not ready.
2018-12-25 11:24:31,813 - INFO - Checking if cmso-db  is ready
2018-12-25 11:24:35,529 - INFO - cmso-db is not ready.
2018-12-25 11:24:40,566 - INFO - Checking if cmso-db  is ready
2018-12-25 11:24:44,374 - INFO - cmso-db is not ready.
2018-12-25 11:24:49,403 - INFO - Checking if cmso-db  is ready
2018-12-25 11:24:53,222 - INFO - cmso-db is not ready.
2018-12-25 11:24:58,238 - INFO - Checking if cmso-db  is ready
2018-12-25 11:25:01,340 - INFO - cmso-db is ready!
ubuntu@kub4:~$ kubectl logs  -f  dev-oof-oof-cmso-service-6c485cdff-pbzb6  -c  db-init  -n onap 
VM_ARGS=
 
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.0.6.RELEASE)
 
2018-12-25 11:48:36.187  INFO 8 --- [           main] o.o.o.c.liquibase.LiquibaseApplication   : Starting LiquibaseApplication on dev-oof-oof-cmso-service-6c485cdff-pbzb6 with PID 8 (/opt/app/cmso-dbinit/app.jar started by root in /opt/app/cmso-dbinit)
2018-12-25 11:48:36.199  INFO 8 --- [           main] o.o.o.c.liquibase.LiquibaseApplication   : No active profile set, falling back to default profiles: default
2018-12-25 11:48:36.310  INFO 8 --- [           main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@d44fc21: startup date [Tue Dec 25 11:48:36 UTC 2018]; root of context hierarchy
2018-12-25 11:48:40.336  INFO 8 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Starting...
2018-12-25 11:48:40.754  INFO 8 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Start completed.
2018-12-25 11:48:41.044  WARN 8 --- [           main] s.c.a.AnnotationConfigApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [org/onap/optf/cmso/liquibase/LiquibaseData.class]: Invocation of init method failed; nested exception is liquibase.exception.LockException: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction
2018-12-25 11:48:41.045  INFO 8 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Shutdown initiated...
2018-12-25 11:48:41.109  INFO 8 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Shutdown completed.
2018-12-25 11:48:41.177  INFO 8 --- [           main] ConditionEvaluationReportLoggingListener : 
 
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2018-12-25 11:48:41.223 ERROR 8 --- [           main] o.s.boot.SpringApplication               : Application run failed
 
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [org/onap/optf/cmso/liquibase/LiquibaseData.class]: Invocation of init method failed; nested exception is liquibase.exception.LockException: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1694) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:573) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:495) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:317) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:315) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:759) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867) ~[spring-context-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:548) ~[spring-context-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:754) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:386) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:307) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1242) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1230) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
at org.onap.optf.cmso.liquibase.LiquibaseApplication.main(LiquibaseApplication.java:45) [classes!/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_181]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_181]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_181]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_181]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48) [app.jar:na]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87) [app.jar:na]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:50) [app.jar:na]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51) [app.jar:na]
Caused by: liquibase.exception.LockException: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction
at liquibase.lockservice.StandardLockService.acquireLock(StandardLockService.java:242) ~[liquibase-core-3.5.5.jar!/:na]
at liquibase.lockservice.StandardLockService.waitForLock(StandardLockService.java:170) ~[liquibase-core-3.5.5.jar!/:na]
at liquibase.Liquibase.update(Liquibase.java:196) ~[liquibase-core-3.5.5.jar!/:na]
at liquibase.Liquibase.update(Liquibase.java:192) ~[liquibase-core-3.5.5.jar!/:na]
at liquibase.integration.spring.SpringLiquibase.performUpdate(SpringLiquibase.java:431) ~[liquibase-core-3.5.5.jar!/:na]
at liquibase.integration.spring.SpringLiquibase.afterPropertiesSet(SpringLiquibase.java:388) ~[liquibase-core-3.5.5.jar!/:na]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1753) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1690) ~[spring-beans-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
... 23 common frames omitted
Caused by: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction
at liquibase.database.AbstractJdbcDatabase.commit(AbstractJdbcDatabase.java:1159) ~[liquibase-core-3.5.5.jar!/:na]
at liquibase.lockservice.StandardLockService.acquireLock(StandardLockService.java:233) ~[liquibase-core-3.5.5.jar!/:na]
... 30 common frames omitted
Caused by: liquibase.exception.DatabaseException: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction
at liquibase.database.jvm.JdbcConnection.commit(JdbcConnection.java:126) ~[liquibase-core-3.5.5.jar!/:na]
at liquibase.database.AbstractJdbcDatabase.commit(AbstractJdbcDatabase.java:1157) ~[liquibase-core-3.5.5.jar!/:na]
... 31 common frames omitted
Caused by: java.sql.SQLTransactionRollbackException: (conn=327) Deadlock found when trying to get lock; try restarting transaction
at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.get(ExceptionMapper.java:179) ~[mariadb-java-client-2.2.6.jar!/:na]
at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.getException(ExceptionMapper.java:110) ~[mariadb-java-client-2.2.6.jar!/:na]
at org.mariadb.jdbc.MariaDbStatement.executeExceptionEpilogue(MariaDbStatement.java:228) ~[mariadb-java-client-2.2.6.jar!/:na]
at org.mariadb.jdbc.MariaDbStatement.executeInternal(MariaDbStatement.java:334) ~[mariadb-java-client-2.2.6.jar!/:na]
at org.mariadb.jdbc.MariaDbStatement.execute(MariaDbStatement.java:386) ~[mariadb-java-client-2.2.6.jar!/:na]
at org.mariadb.jdbc.MariaDbConnection.commit(MariaDbConnection.java:709) ~[mariadb-java-client-2.2.6.jar!/:na]
at com.zaxxer.hikari.pool.ProxyConnection.commit(ProxyConnection.java:368) ~[HikariCP-2.7.9.jar!/:na]
at com.zaxxer.hikari.pool.HikariProxyConnection.commit(HikariProxyConnection.java) ~[HikariCP-2.7.9.jar!/:na]
at liquibase.database.jvm.JdbcConnection.commit(JdbcConnection.java:123) ~[liquibase-core-3.5.5.jar!/:na]
... 32 common frames omitted
Caused by: java.sql.SQLException: Deadlock found when trying to get lock; try restarting transaction
Query is: COMMIT
at org.mariadb.jdbc.internal.util.LogQueryTool.exceptionWithQuery(LogQueryTool.java:119) ~[mariadb-java-client-2.2.6.jar!/:na]
at org.mariadb.jdbc.internal.protocol.AbstractQueryProtocol.executeQuery(AbstractQueryProtocol.java:200) ~[mariadb-java-client-2.2.6.jar!/:na]
at org.mariadb.jdbc.MariaDbStatement.executeInternal(MariaDbStatement.java:328) ~[mariadb-java-client-2.2.6.jar!/:na]
... 37 common frames omitted
 
ubuntu@kub4:~$ 
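For context, this db-init container runs Liquibase against the cmso schema, and the exception above is thrown while Liquibase tries to acquire its changelog lock. A minimal sketch of how one could inspect and, if needed, release a stale lock before letting the pod retry; DATABASECHANGELOGLOCK is the standard Liquibase lock table, the cmso schema name comes from the pod description above, and the MariaDB pod name is a placeholder:

# Inspect the Liquibase lock row inside the CMSO MariaDB pod
kubectl exec -it -n onap <cmso-db pod name> -- \
  mysql -u root -p -e "SELECT * FROM cmso.DATABASECHANGELOGLOCK;"

# If a lock is left over from a failed run, clear it and let the init container retry
kubectl exec -it -n onap <cmso-db pod name> -- \
  mysql -u root -p -e "UPDATE cmso.DATABASECHANGELOGLOCK SET LOCKED=0, LOCKGRANTED=NULL, LOCKEDBY=NULL;"

Since the error itself says "try restarting transaction", simply deleting the failing pod so Kubernetes recreates it is often enough once the competing transaction has finished.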
 
 


Re: Login Credentials for SDC Cassandra Database

Ofir Sonsino <ofir.sonsino@...>
 

Hi Thamlur,

 

The flow type values are stored in the application.properties file.

The file is located in the JETTY_HOME/conf/dcae-be/ directory inside the docker container.
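If it helps, the file can be checked without an interactive shell. A minimal sketch, assuming JETTY_HOME is set inside the container and using a placeholder container name:

docker exec <dcae-be container> sh -c 'cat $JETTY_HOME/conf/dcae-be/application.properties'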

 

Ofir

 

From: EMPOROPULO, VITALIY
Sent: Monday, December 24, 2018 4:09 PM
To: Thamlur Raju <thamlurraju468@...>; Sonsino, Ofir <ofir.sonsino@...>
Cc: onap-discuss@...
Subject: RE: [onap-discuss] Login Credentials for SDC Cassandra Database

 

Hi Thamlur,

 

I’m not sure I understand your questions.

 

@Sonsino, Ofir Can you help please?

 

Regards,

Vitaliy

 

From: Thamlur Raju <thamlurraju468@...>
Sent: Monday, December 24, 2018 11:50
To: Vitaliy Emporopulo <Vitaliy.Emporopulo@...>
Cc: onap-discuss@...
Subject: Re: [onap-discuss] Login Credentials for SDC Cassandra Database

 

Hi Vitaliy,

 

Thanks for the information.

 

1. Does the SDC Cassandra database store the DCAE-DS drop-down data (as shown below)?

 

image.png

 

 

2. Does this DCAE-DS data have any interconnection with the SDC deployment process?

 

 

Thanks & Regards,

Thamlur Raju.

 

On Mon, Dec 24, 2018 at 12:25 PM Vitaliy Emporopulo <Vitaliy.Emporopulo@...> wrote:

Hi Thamlur,

 

It’s asdc_user/Aa1234%^!

 

You can see it in the SDC configuration file https://gerrit.onap.org/r/gitweb?p=sdc.git;a=blob;f=sdc-os-chef/environments/Template.json;h=d212d1e98bd04224c6dcc1c4287ecb14df424dfe;hb=refs/heads/casablanca#l90
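For a quick sanity check, those credentials can be tried directly with cqlsh from the Cassandra container. A minimal sketch, assuming the default CQL port 9042 and a placeholder pod name:

kubectl exec -it -n onap <sdc-cassandra pod name> -- cqlsh -u asdc_user -p 'Aa1234%^!' localhost 9042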

 

Regards,

Vitaly Emporopulo

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Raju
Sent: Monday, December 24, 2018 08:35
To: onap-discuss@...
Subject: [onap-discuss] Login Credentials for SDC Cassandra Database

 

Hi SDC Team,

 

Please help me with the default username and password for SDC Cassandra Database in Casablanca release.

 

 

Thanks & Regards,

Thamlur Raju.



Re: Casablanca oof module pods are waiting on init status #oof

Borislav Glozman
 

Correction: kubectl describe po -n <namespace> <pod name>

 

Thanks,

Borislav Glozman

O:+972.9.776.1988

M:+972.52.2835726


Amdocs a Platinum member of ONAP

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Borislav Glozman
Sent: Tuesday, December 25, 2018 10:37 AM
To: onap-discuss@...; gulsumatici@...
Subject: Re: [onap-discuss] Casablanca oof module pods are waiting on init status #oof

 

Hi,

 

You can look in the log of the init containers of those pods to see what they are waiting for.

Find the init containers by running kubectl get po -n <namespace> <pod name>

Use kubectl logs -n <namespace> <pod name> -c <name of the init container>

 

Thanks,

Borislav Glozman

O:+972.9.776.1988

M:+972.52.2835726


Amdocs a Platinum member of ONAP

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of gulsum atici
Sent: Monday, December 24, 2018 4:57 PM
To: onap-discuss@...
Subject: [onap-discuss] Casablanca oof module pods are waiting on init status #oof

 

Hello,

OOF module pods have been waiting on init for hours and can't reach Running status. They have some dependencies inside the OOF module and on the AAF module.

The AAF module is completely running. After recreating the OOF module pods several times, the status didn't change.

dev-aaf-aaf-cm-858d9bbd58-qlbzr                               1/1       Running            0          1h

dev-aaf-aaf-cs-db78f4b6-ph6c4                                 1/1       Running            0          1h

dev-aaf-aaf-fs-cc68f85f7-5jzbm                                1/1       Running            0          1h

dev-aaf-aaf-gui-8f979c4d9-tqj7q                               1/1       Running            0          1h

dev-aaf-aaf-hello-84df87c74b-9xp8x                            1/1       Running            0          1h

dev-aaf-aaf-locate-74466c9857-xqqp5                           1/1       Running            0          1h

dev-aaf-aaf-oauth-65db47977f-7rfk5                            1/1       Running            0          1h

dev-aaf-aaf-service-77454cb8c-b9bmp                           1/1       Running            0          1h

dev-aaf-aaf-sms-7b5db59d6-vkjlm                               1/1       Running            0          1h

dev-aaf-aaf-sms-quorumclient-0                                1/1       Running            0          1h

dev-aaf-aaf-sms-quorumclient-1                                1/1       Running            0          1h

dev-aaf-aaf-sms-quorumclient-2                                1/1       Running            0          1h

dev-aaf-aaf-sms-vault-0                                       2/2       Running            1          1h

 

 

dev-oof-music-tomcat-64d4c64db7-gff9j                         0/1       Init:1/3           5          1h

dev-oof-music-tomcat-64d4c64db7-kqg9m                         0/1       Init:1/3           5          1h

dev-oof-music-tomcat-64d4c64db7-vmhdt                         0/1       Init:1/3           5          1h

dev-oof-oof-7b4bccc8d7-4wdx7                                  1/1       Running            0          1h

dev-oof-oof-cmso-service-55499fdf4c-h2pnx                     1/1       Running            0          1h

dev-oof-oof-has-api-7d9b977b48-vq8bq                          0/1       Init:0/3           5          1h

dev-oof-oof-has-controller-7f5b6c5f7-2fq68                    0/1       Init:0/3           6          1h

dev-oof-oof-has-data-b57bd54fb-xp942                          0/1       Init:0/4           5          1h

dev-oof-oof-has-healthcheck-mrd87                             0/1       Init:0/1           5          1h

dev-oof-oof-has-onboard-jtxsv                                 0/1       Init:0/2           6          1h

dev-oof-oof-has-reservation-5869b786b-8krtc                   0/1       Init:0/4           5          1h

dev-oof-oof-has-solver-5c75888465-f7pfm                       0/1       Init:0/4           6          1h

In the pod logs there aren't any errors; the containers are all waiting to initialize but are stuck at that point.

Events:

  Type    Reason   Age              From           Message

  ----    ------   ----             ----           -------

  Normal  Pulling  5m (x7 over 1h)  kubelet, kub1  pulling image "oomk8s/readiness-check:2.0.0"

  Normal  Pulled   5m (x7 over 1h)  kubelet, kub1  Successfully pulled image "oomk8s/readiness-check:2.0.0"

  Normal  Created  5m (x7 over 1h)  kubelet, kub1  Created container

  Normal  Started  5m (x7 over 1h)  kubelet, kub1  Started container

ubuntu@kub2:~$ kubectl logs -f   dev-oof-oof-has-api-7d9b977b48-vq8bq -n onap 

Error from server (BadRequest): container "oof-has-api" in pod "dev-oof-oof-has-api-7d9b977b48-vq8bq" is waiting to start: PodInitializing
***************************************************

Events:

  Type    Reason   Age              From           Message

  ----    ------   ----             ----           -------

  Normal  Pulling  3m (x8 over 1h)  kubelet, kub3  pulling image "oomk8s/readiness-check:2.0.0"

  Normal  Pulled   3m (x8 over 1h)  kubelet, kub3  Successfully pulled image "oomk8s/readiness-check:2.0.0"

  Normal  Created  3m (x8 over 1h)  kubelet, kub3  Created container

  Normal  Started  3m (x8 over 1h)  kubelet, kub3  Started container

ubuntu@kub3:$ kubectl  logs  -f   dev-oof-oof-has-onboard-jtxsv  -n onap 

Error from server (BadRequest): container "oof-has-onboard" in pod "dev-oof-oof-has-onboard-jtxsv" is waiting to start: PodInitializing

 

 

 



Re: Casablanca oof module pods are waiting on init status #oof

Borislav Glozman
 

Hi,

 

You can look in the log of the init containers of those pods to see what they are waiting for.

Find the init containers by running kubectl get po -n <namespace> <pod name>

Use kubectl logs -n <namespace> <pod name> -c <name of the init container>
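As a concrete illustration against one of the stuck pods from this thread (pod name taken from the earlier mail; the jsonpath query lists the init container names to pass to -c):

# List the init container names of the stuck pod
kubectl get po -n onap dev-oof-oof-has-api-7d9b977b48-vq8bq -o jsonpath='{.spec.initContainers[*].name}'

# Then read the log of the init container that has not finished yet
kubectl logs -n onap dev-oof-oof-has-api-7d9b977b48-vq8bq -c <init container name>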

 

Thanks,

Borislav Glozman

O:+972.9.776.1988

M:+972.52.2835726


Amdocs a Platinum member of ONAP

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of gulsum atici
Sent: Monday, December 24, 2018 4:57 PM
To: onap-discuss@...
Subject: [onap-discuss] Casablanca oof module pods are waiting on init status #oof

 

Hello,

OOF module pods have been waiting on init for hours and can't reach Running status. They have some dependencies inside the OOF module and on the AAF module.

The AAF module is completely running. After recreating the OOF module pods several times, the status didn't change.

dev-aaf-aaf-cm-858d9bbd58-qlbzr                               1/1       Running            0          1h

dev-aaf-aaf-cs-db78f4b6-ph6c4                                 1/1       Running            0          1h

dev-aaf-aaf-fs-cc68f85f7-5jzbm                                1/1       Running            0          1h

dev-aaf-aaf-gui-8f979c4d9-tqj7q                               1/1       Running            0          1h

dev-aaf-aaf-hello-84df87c74b-9xp8x                            1/1       Running            0          1h

dev-aaf-aaf-locate-74466c9857-xqqp5                           1/1       Running            0          1h

dev-aaf-aaf-oauth-65db47977f-7rfk5                            1/1       Running            0          1h

dev-aaf-aaf-service-77454cb8c-b9bmp                           1/1       Running            0          1h

dev-aaf-aaf-sms-7b5db59d6-vkjlm                               1/1       Running            0          1h

dev-aaf-aaf-sms-quorumclient-0                                1/1       Running            0          1h

dev-aaf-aaf-sms-quorumclient-1                                1/1       Running            0          1h

dev-aaf-aaf-sms-quorumclient-2                                1/1       Running            0          1h

dev-aaf-aaf-sms-vault-0                                       2/2       Running            1          1h

 

 

dev-oof-music-tomcat-64d4c64db7-gff9j                         0/1       Init:1/3           5          1h

dev-oof-music-tomcat-64d4c64db7-kqg9m                         0/1       Init:1/3           5          1h

dev-oof-music-tomcat-64d4c64db7-vmhdt                         0/1       Init:1/3           5          1h

dev-oof-oof-7b4bccc8d7-4wdx7                                  1/1       Running            0          1h

dev-oof-oof-cmso-service-55499fdf4c-h2pnx                     1/1       Running            0          1h

dev-oof-oof-has-api-7d9b977b48-vq8bq                          0/1       Init:0/3           5          1h

dev-oof-oof-has-controller-7f5b6c5f7-2fq68                    0/1       Init:0/3           6          1h

dev-oof-oof-has-data-b57bd54fb-xp942                          0/1       Init:0/4           5          1h

dev-oof-oof-has-healthcheck-mrd87                             0/1       Init:0/1           5          1h

dev-oof-oof-has-onboard-jtxsv                                 0/1       Init:0/2           6          1h

dev-oof-oof-has-reservation-5869b786b-8krtc                   0/1       Init:0/4           5          1h

dev-oof-oof-has-solver-5c75888465-f7pfm                       0/1       Init:0/4           6          1h

In the pod logs there aren't any errors; the containers are all waiting to initialize but are stuck at that point.

Events:

  Type    Reason   Age              From           Message

  ----    ------   ----             ----           -------

  Normal  Pulling  5m (x7 over 1h)  kubelet, kub1  pulling image "oomk8s/readiness-check:2.0.0"

  Normal  Pulled   5m (x7 over 1h)  kubelet, kub1  Successfully pulled image "oomk8s/readiness-check:2.0.0"

  Normal  Created  5m (x7 over 1h)  kubelet, kub1  Created container

  Normal  Started  5m (x7 over 1h)  kubelet, kub1  Started container

ubuntu@kub2:~$ kubectl logs -f   dev-oof-oof-has-api-7d9b977b48-vq8bq -n onap 

Error from server (BadRequest): container "oof-has-api" in pod "dev-oof-oof-has-api-7d9b977b48-vq8bq" is waiting to start: PodInitializing
***************************************************

Events:

  Type    Reason   Age              From           Message

  ----    ------   ----             ----           -------

  Normal  Pulling  3m (x8 over 1h)  kubelet, kub3  pulling image "oomk8s/readiness-check:2.0.0"

  Normal  Pulled   3m (x8 over 1h)  kubelet, kub3  Successfully pulled image "oomk8s/readiness-check:2.0.0"

  Normal  Created  3m (x8 over 1h)  kubelet, kub3  Created container

  Normal  Started  3m (x8 over 1h)  kubelet, kub3  Started container

ubuntu@kub3:$ kubectl  logs  -f   dev-oof-oof-has-onboard-jtxsv  -n onap 

Error from server (BadRequest): container "oof-has-onboard" in pod "dev-oof-oof-has-onboard-jtxsv" is waiting to start: PodInitializing

 

 

 



[integration] Integration weekly meeting on 12/26 is cancelled

Yang Xu
 

Many people are taking holiday vacation this week, so we will cancel the 12/26 Integration meeting. We will have our next weekly meeting on 1/2/2019.

 

Merry Christmas and Happy New Year!

 

-Yang


Re: On boarding Micro-service in DCAE

Vijay Venkatesh Kumar
 

Hi,

The dcae_cli is a standalone utility. The intent of the tool is to validate the component spec and container image provided by the MS owner (it validates the startup and the CBS fetch function).

 

Once the spec is validated, DCAE-DS (& Toscalab) should be used to generate the required models/blueprint, which can be loaded into the DCAE-DS catalog. This flow is not currently automated, though; I have copied Igor from SDC, who can clarify the R4 plans around it.
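For reference, the validation flow described in the quoted mail below is driven entirely from the dcae_cli tool; a rough sketch of the usual command sequence follows. The exact sub-commands and flags can differ between releases, so treat these as assumptions and check dcae_cli --help in your installation:

# Register the data formats and the component spec, then run the component locally
dcae_cli data_format add <data-format.json>
dcae_cli component add <component-spec.json>
dcae_cli component run <component-name>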

 

Regards,

Vijay

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Raju
Sent: Friday, December 21, 2018 7:46 AM
To: onap-discuss@...
Subject: [onap-discuss] On boarding Micro-service in DCAE

 

Hi Dcae Team,

 

I validated the data-format and component-specific JSONs in the dcae_cli environment and ran the component, as shown below (I customized some values in the docker_util.py script to run the component).

 

The component is running successfully.

 

image.png

 

As per my understanding, after the component runs successfully its name should appear in the drop-down of the DCAE-DS UI, as shown in the screenshot below.

 

image.png

 

But I am not able to see my component name in the drop-down box.

 

Am I missing any step needed to make my component name appear in the drop-down box?

 

Please suggest something on this issue.

 

Thanks in advance.

 

 

Thanks & Regards,

Thamlur Raju.


Casablanca oof module pods are waiting on init status #oof

gulsum atici <gulsumatici@...>
 

Hello,

OOF module pods have been waiting on init for hours and can't reach Running status. They have some dependencies inside the OOF module and on the AAF module.

The AAF module is completely running. After recreating the OOF module pods several times, the status didn't change.

dev-aaf-aaf-cm-858d9bbd58-qlbzr                               1/1       Running            0          1h
dev-aaf-aaf-cs-db78f4b6-ph6c4                                 1/1       Running            0          1h
dev-aaf-aaf-fs-cc68f85f7-5jzbm                                1/1       Running            0          1h
dev-aaf-aaf-gui-8f979c4d9-tqj7q                               1/1       Running            0          1h
dev-aaf-aaf-hello-84df87c74b-9xp8x                            1/1       Running            0          1h
dev-aaf-aaf-locate-74466c9857-xqqp5                           1/1       Running            0          1h
dev-aaf-aaf-oauth-65db47977f-7rfk5                            1/1       Running            0          1h
dev-aaf-aaf-service-77454cb8c-b9bmp                           1/1       Running            0          1h
dev-aaf-aaf-sms-7b5db59d6-vkjlm                               1/1       Running            0          1h
dev-aaf-aaf-sms-quorumclient-0                                1/1       Running            0          1h
dev-aaf-aaf-sms-quorumclient-1                                1/1       Running            0          1h
dev-aaf-aaf-sms-quorumclient-2                                1/1       Running            0          1h
dev-aaf-aaf-sms-vault-0                                       2/2       Running            1          1h
 

dev-oof-music-tomcat-64d4c64db7-gff9j                         0/1       Init:1/3           5          1h
dev-oof-music-tomcat-64d4c64db7-kqg9m                         0/1       Init:1/3           5          1h
dev-oof-music-tomcat-64d4c64db7-vmhdt                         0/1       Init:1/3           5          1h
dev-oof-oof-7b4bccc8d7-4wdx7                                  1/1       Running            0          1h
dev-oof-oof-cmso-service-55499fdf4c-h2pnx                     1/1       Running            0          1h
dev-oof-oof-has-api-7d9b977b48-vq8bq                          0/1       Init:0/3           5          1h
dev-oof-oof-has-controller-7f5b6c5f7-2fq68                    0/1       Init:0/3           6          1h
dev-oof-oof-has-data-b57bd54fb-xp942                          0/1       Init:0/4           5          1h
dev-oof-oof-has-healthcheck-mrd87                             0/1       Init:0/1           5          1h
dev-oof-oof-has-onboard-jtxsv                                 0/1       Init:0/2           6          1h
dev-oof-oof-has-reservation-5869b786b-8krtc                   0/1       Init:0/4           5          1h
dev-oof-oof-has-solver-5c75888465-f7pfm                       0/1       Init:0/4           6          1h

In the pod logs there aren't any errors; the containers are all waiting to initialize but are stuck at that point.


Events:
  Type    Reason   Age              From           Message
  ----    ------   ----             ----           -------
  Normal  Pulling  5m (x7 over 1h)  kubelet, kub1  pulling image "oomk8s/readiness-check:2.0.0"
  Normal  Pulled   5m (x7 over 1h)  kubelet, kub1  Successfully pulled image "oomk8s/readiness-check:2.0.0"
  Normal  Created  5m (x7 over 1h)  kubelet, kub1  Created container
  Normal  Started  5m (x7 over 1h)  kubelet, kub1  Started container
ubuntu@kub2:~$ kubectl logs -f   dev-oof-oof-has-api-7d9b977b48-vq8bq -n onap 
Error from server (BadRequest): container "oof-has-api" in pod "dev-oof-oof-has-api-7d9b977b48-vq8bq" is waiting to start: PodInitializing
***************************************************
Events:
  Type    Reason   Age              From           Message
  ----    ------   ----             ----           -------
  Normal  Pulling  3m (x8 over 1h)  kubelet, kub3  pulling image "oomk8s/readiness-check:2.0.0"
  Normal  Pulled   3m (x8 over 1h)  kubelet, kub3  Successfully pulled image "oomk8s/readiness-check:2.0.0"
  Normal  Created  3m (x8 over 1h)  kubelet, kub3  Created container
  Normal  Started  3m (x8 over 1h)  kubelet, kub3  Started container
ubuntu@kub3:$ kubectl  logs  -f   dev-oof-oof-has-onboard-jtxsv  -n onap 
Error from server (BadRequest): container "oof-has-onboard" in pod "dev-oof-oof-has-onboard-jtxsv" is waiting to start: PodInitializing
 
 
 


Re: Login Credentials for SDC Cassandra Database

Vitaliy Emporopulo
 

Hi Thamlur,

 

I’m not sure I understand your questions.

 

@Sonsino, Ofir Can you help please?

 

Regards,

Vitaliy

 

From: Thamlur Raju <thamlurraju468@...>
Sent: Monday, December 24, 2018 11:50
To: Vitaliy Emporopulo <Vitaliy.Emporopulo@...>
Cc: onap-discuss@...
Subject: Re: [onap-discuss] Login Credentials for SDC Cassandra Database

 

Hi Vitaliy,

 

Thanks for the information.

 

1. Does the SDC Cassandra database store the DCAE-DS drop-down data (as shown below)?

 

image.png

 

 

2. Does this DCAE-DS data have any interconnection with the SDC deployment process?

 

 

Thanks & Regards,

Thamlur Raju.

 

On Mon, Dec 24, 2018 at 12:25 PM Vitaliy Emporopulo <Vitaliy.Emporopulo@...> wrote:

Hi Thamlur,

 

It’s asdc_user/Aa1234%^!

 

You can see it in the SDC configuration file https://gerrit.onap.org/r/gitweb?p=sdc.git;a=blob;f=sdc-os-chef/environments/Template.json;h=d212d1e98bd04224c6dcc1c4287ecb14df424dfe;hb=refs/heads/casablanca#l90

 

Regards,

Vitaly Emporopulo

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Raju
Sent: Monday, December 24, 2018 08:35
To: onap-discuss@...
Subject: [onap-discuss] Login Credentials for SDC Cassandra Database

 

Hi SDC Team,

 

Please help me with the default username and password for SDC Cassandra Database in Casablanca release.

 

 

Thanks & Regards,

Thamlur Raju.



Re: [so] so-openstack-adapter pod (image from local nexus) is crashing in kubernetes

gulsum atici <gulsumatici@...>
 

Hello,


ubuntu@kub3:/dockerdata-nfs/dev-so/mso/mariadb/data$ ls -lrt
total 603160
-rw-rw---- 1 999 docker  157286400 Dec 21 04:40 ib_logfile1
-rw-rw---- 1 999 docker  157286400 Dec 21 04:40 ib_logfile2
drwx------ 2 999 docker       4096 Dec 21 04:40 performance_schema
-rw-rw---- 1 999 docker          0 Dec 21 04:40 multi-master.info
drwx------ 2 999 docker       4096 Dec 21 04:46 mysql
drwx------ 2 999 docker       4096 Dec 21 04:48 camundabpmn
-rw-rw---- 1 999 docker         52 Dec 24 10:57 aria_log_control
-rw-rw---- 1 999 docker      16384 Dec 24 10:57 aria_log.00000001
-rw-rw---- 1 999 docker        170 Dec 24 10:58 gvwstate.dat
-rw-rw---- 1 999 docker      24576 Dec 24 10:58 tc.log
-rw-rw---- 1 999 docker  117440512 Dec 24 10:58 ibdata1
-rw-rw---- 1 999 docker  157286400 Dec 24 10:58 ib_logfile0
-rw------- 1 999 docker 2147484968 Dec 24 10:59 galera.cache
-rw-rw---- 1 999 docker        104 Dec 24 10:59 grastate.dat
-rw-rw---- 1 999 docker       1500 Dec 24 10:59 slow_query.log
-rw-rw---- 1 999 docker   11642438 Dec 24 13:45 audit.log

In the SO module's mariadb folder, audit.log shows repeated failed-connect/disconnect messages like the ones below, even though the pod appears to be running.
dev-so-so-mariadb-5f854cbbbb-jwnvt                            1/1       Running            0          1h
 
20181224 13:45:39,mariadb,,10.42.0.1,2288,0,FAILED_CONNECT,,,1158
20181224 13:45:39,mariadb,,10.42.0.1,2288,0,DISCONNECT,,,0
20181224 13:45:41,mariadb,,10.42.0.1,2289,0,FAILED_CONNECT,,,1158
20181224 13:45:41,mariadb,,10.42.0.1,2289,0,DISCONNECT,,,0
20181224 13:45:47,mariadb,so_user,10.42.50.44,2290,0,FAILED_CONNECT,,,1045
20181224 13:45:47,mariadb,so_user,10.42.50.44,2290,0,DISCONNECT,,,0
20181224 13:45:49,mariadb,so_user,10.42.50.44,2291,0,FAILED_CONNECT,,,1045
20181224 13:45:49,mariadb,so_user,10.42.50.44,2291,0,DISCONNECT,,,0
20181224 13:45:49,mariadb,,10.42.0.1,2292,0,FAILED_CONNECT,,,1158
20181224 13:45:49,mariadb,,10.42.0.1,2292,0,DISCONNECT,,,0
20181224 13:45:51,mariadb,,10.42.0.1,2293,0,FAILED_CONNECT,,,1158
20181224 13:45:51,mariadb,,10.42.0.1,2293,0,DISCONNECT,,,0
20181224 13:45:55,mariadb,so_admin,10.42.232.2,2294,0,FAILED_CONNECT,,,1045
20181224 13:45:55,mariadb,so_admin,10.42.232.2,2294,0,DISCONNECT,,,0
20181224 13:45:59,mariadb,,10.42.0.1,2295,0,FAILED_CONNECT,,,1158
20181224 13:45:59,mariadb,,10.42.0.1,2295,0,DISCONNECT,,,0
20181224 13:46:01,mariadb,,10.42.0.1,2296,0,FAILED_CONNECT,,,1158
20181224 13:46:01,mariadb,,10.42.0.1,2296,0,DISCONNECT,,,0
20181224 13:46:09,mariadb,,10.42.0.1,2297,0,FAILED_CONNECT,,,1158
20181224 13:46:09,mariadb,,10.42.0.1,2297,0,DISCONNECT,,,0
20181224 13:46:11,mariadb,,10.42.0.1,2298,0,FAILED_CONNECT,,,1158
20181224 13:46:11,mariadb,,10.42.0.1,2298,0,DISCONNECT,,,0
20181224 13:46:19,mariadb,,10.42.0.1,2299,0,FAILED_CONNECT,,,1158
20181224 13:46:19,mariadb,,10.42.0.1,2299,0,DISCONNECT,,,0
20181224 13:46:21,mariadb,,10.42.0.1,2300,0,FAILED_CONNECT,,,1158
20181224 13:46:21,mariadb,,10.42.0.1,2300,0,DISCONNECT,,,0
20181224 13:46:29,mariadb,,10.42.0.1,2301,0,FAILED_CONNECT,,,1158
20181224 13:46:29,mariadb,,10.42.0.1,2301,0,DISCONNECT,,,0
20181224 13:46:31,mariadb,,10.42.0.1,2302,0,FAILED_CONNECT,,,1158
20181224 13:46:31,mariadb,,10.42.0.1,2302,0,DISCONNECT,,,0
20181224 13:46:39,mariadb,,10.42.0.1,2303,0,FAILED_CONNECT,,,1158
20181224 13:46:39,mariadb,,10.42.0.1,2303,0,DISCONNECT,,,0
20181224 13:46:41,mariadb,,10.42.0.1,2304,0,FAILED_CONNECT,,,1158
20181224 13:46:41,mariadb,,10.42.0.1,2304,0,DISCONNECT,,,0
20181224 13:46:49,mariadb,,10.42.0.1,2305,0,FAILED_CONNECT,,,1158
20181224 13:46:49,mariadb,,10.42.0.1,2305,0,DISCONNECT,,,0
20181224 13:46:51,mariadb,,10.42.0.1,2306,0,FAILED_CONNECT,,,1158
20181224 13:46:51,mariadb,,10.42.0.1,2306,0,DISCONNECT,,,0
20181224 13:46:59,mariadb,,10.42.0.1,2307,0,FAILED_CONNECT,,,1158
20181224 13:46:59,mariadb,,10.42.0.1,2307,0,DISCONNECT,,,0
20181224 13:47:01,mariadb,,10.42.0.1,2308,0,FAILED_CONNECT,,,1158
20181224 13:47:01,mariadb,,10.42.0.1,2308,0,DISCONNECT,,,0
20181224 13:47:03,mariadb,so_admin,10.42.76.179,2309,0,FAILED_CONNECT,,,1045
20181224 13:47:03,mariadb,so_admin,10.42.76.179,2309,0,DISCONNECT,,,0
20181224 13:47:08,mariadb,so_user,10.42.148.192,2310,0,FAILED_CONNECT,,,1045
20181224 13:47:08,mariadb,so_user,10.42.148.192,2310,0,DISCONNECT,,,0
20181224 13:47:09,mariadb,,10.42.0.1,2311,0,FAILED_CONNECT,,,1158
20181224 13:47:09,mariadb,,10.42.0.1,2311,0,DISCONNECT,,,0
20181224 13:47:10,mariadb,so_user,10.42.148.192,2312,0,FAILED_CONNECT,,,1045
20181224 13:47:10,mariadb,so_user,10.42.148.192,2312,0,DISCONNECT,,,0
20181224 13:47:11,mariadb,,10.42.0.1,2313,0,FAILED_CONNECT,,,1158
20181224 13:47:11,mariadb,,10.42.0.1,2313,0,DISCONNECT,,,0
20181224 13:47:19,mariadb,,10.42.0.1,2314,0,FAILED_CONNECT,,,1158
20181224 13:47:19,mariadb,,10.42.0.1,2314,0,DISCONNECT,,,0
20181224 13:47:21,mariadb,,10.42.0.1,2315,0,FAILED_CONNECT,,,1158
20181224 13:47:21,mariadb,,10.42.0.1,2315,0,DISCONNECT,,,0
20181224 13:47:29,mariadb,,10.42.0.1,2316,0,FAILED_CONNECT,,,1158
20181224 13:47:29,mariadb,,10.42.0.1,2316,0,DISCONNECT,,,0
20181224 13:47:31,mariadb,,10.42.0.1,2317,0,FAILED_CONNECT,,,1158
20181224 13:47:31,mariadb,,10.42.0.1,2317,0,DISCONNECT,,,0
20181224 13:47:39,mariadb,,10.42.0.1,2318,0,FAILED_CONNECT,,,1158
20181224 13:47:39,mariadb,,10.42.0.1,2318,0,DISCONNECT,,,0
20181224 13:47:41,mariadb,,10.42.0.1,2319,0,FAILED_CONNECT,,,1158
20181224 13:47:42,mariadb,,10.42.0.1,2319,0,DISCONNECT,,,0
20181224 13:47:49,mariadb,,10.42.0.1,2320,0,FAILED_CONNECT,,,1158
20181224 13:47:50,mariadb,,10.42.0.1,2320,0,DISCONNECT,,,0
20181224 13:47:51,mariadb,,10.42.0.1,2321,0,FAILED_CONNECT,,,1158
20181224 13:47:51,mariadb,,10.42.0.1,2321,0,DISCONNECT,,,0
20181224 13:47:59,mariadb,,10.42.0.1,2322,0,FAILED_CONNECT,,,1158
20181224 13:47:59,mariadb,,10.42.0.1,2322,0,DISCONNECT,,,0
20181224 13:48:01,mariadb,,10.42.0.1,2323,0,FAILED_CONNECT,,,1158
20181224 13:48:01,mariadb,,10.42.0.1,2323,0,DISCONNECT,,,0
20181224 13:48:09,mariadb,,10.42.0.1,2324,0,FAILED_CONNECT,,,1158
20181224 13:48:09,mariadb,,10.42.0.1,2324,0,DISCONNECT,,,0
 
 


Re: [so] so-openstack-adapter pod (image from local nexus) is crashing in kubernetes

gulsum atici <gulsumatici@...>
 

Dear All,

Some of the SO module's pods are crashing, as shown below. I have tried recreating the pods several times, but the issue persists.

dev-so-so-5d9c4fbf5c-lwq9l                                    0/1       CrashLoopBackOff   21         1h
dev-so-so-bpmn-infra-5675c9fd55-jbpxg                         0/1       CrashLoopBackOff   19         1h
dev-so-so-catalog-db-adapter-95f9cc64-mtxr2                   0/1       CrashLoopBackOff   20         1h
dev-so-so-mariadb-5f854cbbbb-jwnvt                            1/1       Running            0          1h
dev-so-so-monitoring-6c76f45c4f-m9mcn                         1/1       Running            0          1h
dev-so-so-openstack-adapter-64cc5554fb-cdlll                  0/1       CrashLoopBackOff   21         1h
dev-so-so-request-db-adapter-87b79c98d-kx6mf                  0/1       CrashLoopBackOff   20         1h
dev-so-so-sdc-controller-6ddd49b8b-6vn7f                      0/1       CrashLoopBackOff   20         1h
dev-so-so-sdnc-adapter-747bfc998f-s99r7                       1/1       Running            0          1h
dev-so-so-vfc-adapter-6b4cf8c449-5s7g2                        0/1       CrashLoopBackOff   21         1h

All of the crashing pods depend on the so-mariadb container. It is running, and here are the logs from mariadb:

ubuntu@kub4:~$ kubectl  logs  -f   dev-so-so-mariadb-5f854cbbbb-jwnvt  -n onap 
2018-12-24 10:58:32 140282828376000 [Note] mysqld (mysqld 10.1.11-MariaDB-1~jessie-log) starting as process 1 ...
2018-12-24 10:58:33 140282828376000 [Note] WSREP: Read nil XID from storage engines, skipping position init
2018-12-24 10:58:33 140282828376000 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib/galera/libgalera_smm.so'
2018-12-24 10:58:33 140282828376000 [Note] WSREP: wsrep_load(): Galera 25.3.12(r3516) by Codership Oy <info@...> loaded successfully.
2018-12-24 10:58:33 140282828376000 [Note] WSREP: CRC-32C: using hardware acceleration.
2018-12-24 10:58:33 140282828376000 [Note] WSREP: Found saved state: 4c21e5c7-04f9-11e9-939c-53aee0b02696:0
2018-12-24 10:58:33 140282828376000 [Note] WSREP: Passing config to GCS: base_dir = /var/lib/mysql/; base_host = 10.42.171.249; base_port = 4567; cert.log_conflicts = no; debug = no; evs.auto_evict = 0; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 1G; gcache.size = 2G; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.segment = 0; gmcast.version = 0; pc.announce_timeout = PT3S; pc.checksum = false; pc.ignore_quorum = false; pc.ignore_sb = false; pc
2018-12-24 10:58:33 140280556070656 [Note] WSREP: Service thread queue flushed.
2018-12-24 10:58:33 140282828376000 [Note] WSREP: Assign initial position for certification: 0, protocol version: -1
2018-12-24 10:58:33 140282828376000 [Note] WSREP: wsrep_sst_grab()
2018-12-24 10:58:33 140282828376000 [Note] WSREP: Start replication
2018-12-24 10:58:33 140282828376000 [Note] WSREP: Setting initial position to 4c21e5c7-04f9-11e9-939c-53aee0b02696:0
2018-12-24 10:58:33 140282828376000 [Note] WSREP: protonet asio version 0
2018-12-24 10:58:33 140282828376000 [Note] WSREP: Using CRC-32C for message checksums.
2018-12-24 10:58:33 140282828376000 [Note] WSREP: backend: asio
2018-12-24 10:58:34 140282828376000 [Warning] WSREP: access file(/var/lib/mysql//gvwstate.dat) failed(No such file or directory)
2018-12-24 10:58:34 140282828376000 [Note] WSREP: restore pc from disk failed
2018-12-24 10:58:34 140282828376000 [Note] WSREP: GMCast version 0
2018-12-24 10:58:34 140282828376000 [Note] WSREP: (db6894c8, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
2018-12-24 10:58:34 140282828376000 [Note] WSREP: (db6894c8, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
2018-12-24 10:58:34 140282828376000 [Note] WSREP: EVS version 0
2018-12-24 10:58:34 140282828376000 [Note] WSREP: gcomm: connecting to group 'MSO-automated-tests-cluster', peer ''
2018-12-24 10:58:34 140282828376000 [Note] WSREP: start_prim is enabled, turn off pc_recovery
2018-12-24 10:58:34 140282828376000 [Note] WSREP: Node db6894c8 state prim
2018-12-24 10:58:34 140282828376000 [Note] WSREP: view(view_id(PRIM,db6894c8,1) memb {
db6894c8,0
} joined {
} left {
} partitioned {
})
2018-12-24 10:58:34 140282828376000 [Note] WSREP: save pc into disk
2018-12-24 10:58:34 140282828376000 [Note] WSREP: gcomm: connected
2018-12-24 10:58:34 140282828376000 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
2018-12-24 10:58:34 140282828376000 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
2018-12-24 10:58:34 140282828376000 [Note] WSREP: Opened channel 'MSO-automated-tests-cluster'
2018-12-24 10:58:34 140280485377792 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 0, memb_num = 1
2018-12-24 10:58:34 140280485377792 [Note] WSREP: STATE_EXCHANGE: sent state UUID: db79d47b-076a-11e9-8766-a204113915a7
2018-12-24 10:58:34 140280485377792 [Note] WSREP: STATE EXCHANGE: sent state msg: db79d47b-076a-11e9-8766-a204113915a7
2018-12-24 10:58:34 140280485377792 [Note] WSREP: STATE EXCHANGE: got state msg: db79d47b-076a-11e9-8766-a204113915a7 from 0 (mariadb1)
2018-12-24 10:58:34 140280485377792 [Note] WSREP: Quorum results:
version    = 3,
component  = PRIMARY,
conf_id    = 0,
members    = 1/1 (joined/total),
act_id     = 0,
last_appl. = -1,
protocols  = 0/7/3 (gcs/repl/appl),
group UUID = 4c21e5c7-04f9-11e9-939c-53aee0b02696
2018-12-24 10:58:34 140280485377792 [Note] WSREP: Flow-control interval: [16, 16]
2018-12-24 10:58:34 140280485377792 [Note] WSREP: Restored state OPEN -> JOINED (0)
2018-12-24 10:58:34 140280485377792 [Note] WSREP: Member 0.0 (mariadb1) synced with group.
2018-12-24 10:58:34 140280485377792 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 0)
2018-12-24 10:58:34 140282828376000 [Note] WSREP: Waiting for SST to complete.
2018-12-24 10:58:34 140282827052800 [Note] WSREP: New cluster view: global state: 4c21e5c7-04f9-11e9-939c-53aee0b02696:0, view# 1: Primary, number of nodes: 1, my index: 0, protocol version 3
2018-12-24 10:58:34 140282828376000 [Note] WSREP: SST complete, seqno: 0
2018-12-24 10:58:34 140282828376000 [Note] InnoDB: Using mutexes to ref count buffer pool pages
2018-12-24 10:58:34 140282828376000 [Note] InnoDB: The InnoDB memory heap is disabled
2018-12-24 10:58:34 140282828376000 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2018-12-24 10:58:34 140282828376000 [Note] InnoDB: Memory barrier is not used
2018-12-24 10:58:34 140282828376000 [Note] InnoDB: Compressed tables use zlib 1.2.8
2018-12-24 10:58:34 140282828376000 [Note] InnoDB: Using Linux native AIO
2018-12-24 10:58:34 140282828376000 [Note] InnoDB: Using SSE crc32 instructions
2018-12-24 10:58:34 140282828376000 [Note] InnoDB: Initializing buffer pool, size = 256.0M
2018-12-24 10:58:34 140282828376000 [Note] InnoDB: Completed initialization of buffer pool
2018-12-24 10:58:35 140282828376000 [Note] InnoDB: Highest supported file format is Barracuda.
2018-12-24 10:58:41 140282828376000 [Note] InnoDB: 128 rollback segment(s) are active.
2018-12-24 10:58:41 140282828376000 [Note] InnoDB: Waiting for purge to start
2018-12-24 10:58:41 140282828376000 [Note] InnoDB:  Percona XtraDB (http://www.percona.com) 5.6.26-76.0 started; log sequence number 7024327
2018-12-24 10:58:41 140279965255424 [Note] InnoDB: Dumping buffer pool(s) not yet started
2018-12-24 10:58:41 140282828376000 [Note] Plugin 'FEEDBACK' is disabled.
181224 10:58:41 server_audit: MariaDB Audit Plugin version 1.4.0 STARTED.
181224 10:58:41 server_audit: Query cache is enabled with the TABLE events. Some table reads can be veiled.181224 10:58:41 server_audit: logging started to the file //var/lib/mysql/audit.log.
2018-12-24 10:58:42 140282828376000 [Note] Server socket created on IP: '::'.
2018-12-24 10:58:42 140282828376000 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode.
2018-12-24 10:58:44 140282827052800 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2018-12-24 10:58:44 140282827052800 [Note] WSREP: REPL Protocols: 7 (3, 2)
2018-12-24 10:58:44 140280556070656 [Note] WSREP: Service thread queue flushed.
2018-12-24 10:58:44 140282827052800 [Note] WSREP: Assign initial position for certification: 0, protocol version: 3
2018-12-24 10:58:44 140280556070656 [Note] WSREP: Service thread queue flushed.
2018-12-24 10:58:44 140282827052800 [Note] WSREP: Synchronized with group, ready for connections
2018-12-24 10:58:44 140282827052800 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2018-12-24 10:58:44 140282828376000 [Note] mysqld: ready for connections.
Version: '10.1.11-MariaDB-1~jessie-log'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  mariadb.org binary distribution
 

Crashing  container  logs:

ubuntu@kub4:~$ kubectl  logs  -f  dev-so-so-request-db-adapter-87b79c98d-kx6mf  -n onap 
Installing onap-ca.crt in /usr/local/share/ca-certificates
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
JVM Arguments:  -Djava.security.egd=file:/dev/./urandom -Dlogs_dir=./logs/reqdb/ -Dlogging.config=/app/logback-spring.xml  -Dspring.config.location=/app/config/override.yaml  
 
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::       (v1.5.13.RELEASE)
 
Application exiting with status code 1
*************************************************************
ubuntu@kub3:~$ kubectl  logs  -f  dev-so-so-bpmn-infra-5675c9fd55-jbpxg   -n  onap 
Installing onap-ca.crt in /usr/local/share/ca-certificates
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
JVM Arguments:  -Djava.security.egd=file:/dev/./urandom -Dlogs_dir=./logs/bpmn/ -Dlogging.config=/app/logback-spring.xml  -Dspring.config.location=/app/config/override.yaml  
 
 ____                                 _         ____  ____  __  __
/ ___| __ _ _ __ ___  _   _ _ __   __| | __ _  | __ )|  _ \|  \/  |
| |   / _` | '_ ` _ \| | | | '_ \ / _` |/ _` | |  _ \| |_) | |\/| |
| |__| (_| | | | | | | |_| | | | | (_| | (_| | | |_) |  __/| |  | |
\____/\__,_|_| |_| |_|\__,_|_| |_|\__,_|\__,_| |____/|_|   |_|  |_|
 
  Spring-Boot:  (v1.5.13.RELEASE)
  Camunda BPM: (v7.8.0)
  Camunda BPM Spring Boot Starter: (v2.3.0)
 
Application exiting with status code 1
 
*************************************************

ubuntu@kub3:~$ kubectl logs  -f   dev-so-so-catalog-db-adapter-95f9cc64-mtxr2  -n onap 
Installing onap-ca.crt in /usr/local/share/ca-certificates
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
JVM Arguments:  -Djava.security.egd=file:/dev/./urandom -Dlogs_dir=./logs/catdb/ -Dlogging.config=/app/logback-spring.xml  -Dspring.config.location=/app/config/override.yaml  
 
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::       (v1.5.13.RELEASE)
 
Application exiting with status code 1
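
Since all three containers exit with status 1 right after the Spring Boot banner, one quick thing to rule out is whether they can reach and log in to the MariaDB instance whose startup log is shown above. Below is a minimal sketch of such a check; the host name, user and password are placeholders (take the real values from each adapter's override.yaml), and pymysql is assumed to be available.

import pymysql

# Placeholder values -- substitute the datasource host/credentials from override.yaml
DB_HOST = "so-mariadb"
DB_PORT = 3306
DB_USER = "so_user"
DB_PASS = "so_password"

try:
    conn = pymysql.connect(host=DB_HOST, port=DB_PORT, user=DB_USER,
                           password=DB_PASS, connect_timeout=5)
    with conn.cursor() as cur:
        # On a healthy Galera node this should return ('wsrep_local_state_comment', 'Synced')
        cur.execute("SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'")
        print(cur.fetchone())
    conn.close()
    print("DB reachable and credentials accepted")
except pymysql.err.OperationalError as exc:
    print("DB not reachable or login rejected:", exc)

If this succeeds, the next things to compare are the datasource URL and credentials in each adapter's override.yaml against what the MariaDB log above reports.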
 
 
 


Re: Login Credentials for SDC Cassandra Database

Raju
 

Hi Vitaliy,

Thanks for the information.

1. Does the SDC Cassandra database store the DCAE-DS drop-down data (shown below)?

[Screenshot: DCAE-DS drop-down in the SDC UI]


2. Is this DCAE-DS data interconnected with the SDC deployment process?


Thanks & Regards,
Thamlur Raju.

On Mon, Dec 24, 2018 at 12:25 PM Vitaliy Emporopulo <Vitaliy.Emporopulo@...> wrote:

Hi Thamlur,

 

It’s asdc_user/Aa1234%^!

 

You can see it in the SDC configuration file https://gerrit.onap.org/r/gitweb?p=sdc.git;a=blob;f=sdc-os-chef/environments/Template.json;h=d212d1e98bd04224c6dcc1c4287ecb14df424dfe;hb=refs/heads/casablanca#l90
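
If it helps, a minimal sketch of verifying those credentials with the Python cassandra-driver is shown below; the contact point "sdc-cs" and port 9042 are assumptions, so adjust them to the Cassandra service address in your deployment.

from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# Default SDC credentials from the Template.json linked above; host/port are placeholders
auth = PlainTextAuthProvider(username="asdc_user", password="Aa1234%^!")
cluster = Cluster(["sdc-cs"], port=9042, auth_provider=auth)

session = cluster.connect()
# Simple sanity check that works on both Cassandra 2.x and 3.x
row = session.execute("SELECT release_version FROM system.local").one()
print("Connected, Cassandra release:", row.release_version)
cluster.shutdown()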

 

Regards,

Vitaly Emporopulo

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Raju
Sent: Monday, December 24, 2018 08:35
To: onap-discuss@...
Subject: [onap-discuss] Login Credentials for SDC Cassandra Database

 

Hi SDC Team,

 

Please help me with the default username and password for SDC Cassandra Database in Casablanca release.

 

 

Thanks & Regards,

Thamlur Raju.

This email and the information contained herein is proprietary and confidential and subject to the Amdocs Email Terms of Service, which you may review at https://www.amdocs.com/about/email-terms-of-service


Re: [E] Re: [onap-discuss] Documentation for PNF plug n play.

Viswanath Kumar Skand Priya
 

Hi Yang / PNF Team,

Thanks for the links to the slides and the demo video. Let me summarize my understanding below; please validate it:
  • A PNF is mimicked by a VM backed by a HEAT template.
  • In SDC, a PNF is introduced as a resource, and that resource is used in a Service, which is then distributed. Let's call that service the PNF Service.
  • The only item that ties the PNF running in the network (i.e. the VM) to the PNF Resource in SDC is the Correlation ID.
  • The instantiated PNF starts sending VES events to the DCAE VES Collector. Initially these events are ignored, since ONAP does not yet know what to do with them (nobody has subscribed to that topic). When the PNF Service is instantiated by SO through VID, SO notifies DMaaP that it is interested in all messages with topic <Correlation ID>. From that instant, DMaaP ties both ends together, and SO reports back to VID that the service has been successfully instantiated by ONAP. (A rough sketch of such a registration event follows after this list.)
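For illustration only, here is a rough sketch of the kind of registration event a simulated PNF might post to the VES collector. The endpoint path, the credentials, and the exact field names are assumptions and should be checked against the VES collector and PNF simulator configuration of your release.

import requests

VES_URL = "https://dcae-ves-collector:8443/eventListener/v7"  # placeholder endpoint/version
CORRELATION_ID = "demo-pnf-0001"                               # must match the PNF name known to SDC/SO

event = {
    "event": {
        "commonEventHeader": {                # field names approximate, depending on the VES spec version
            "domain": "pnfRegistration",
            "eventId": CORRELATION_ID + "-reg-1",
            "eventName": "pnfRegistration_demo",
            "sourceName": CORRELATION_ID,     # the value the PNF flow correlates on
            "reportingEntityName": CORRELATION_ID,
            "priority": "Normal",
            "sequence": 0,
            "startEpochMicrosec": 0,
            "lastEpochMicrosec": 0,
            "version": 3.0,
        },
        "pnfRegistrationFields": {            # field names approximate
            "pnfRegistrationFieldsVersion": 2.0,
            "oamV4IpAddress": "10.12.5.2",    # placeholder OAM IP
            "serialNumber": "0001",
            "vendorName": "demo-vendor",
        },
    }
}

# Placeholder collector credentials; verify=False only because the collector often uses a self-signed cert
resp = requests.post(VES_URL, json=event, auth=("sample1", "sample1"),
                     verify=False, timeout=10)
print(resp.status_code, resp.text)

The only essential point is that sourceName carries the Correlation ID that SDC/SO know the PNF by.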
If my understanding above is correct, then I have the following queries. Could you please clarify?
  • PNF modelling in SDC seems to be very abstract, i.e. it just says "This resource is a PNF" and nothing more. It doesn't distinguish between Layer 1 and Layer 2 devices. The data model of a PNF varies between layers, between vendors and between versions, and it also comes with environment-specific details such as Building/Rack/Slot/Card/Port. Tracking this information is necessary for the complete flow. Is there any work happening on that front for future releases?
  • The difference between VNF instantiation and PNF instantiation is discovery. Unlike VNFs, PNFs cannot simply be cloned and created; a PNF is supposed to be shared between network services. Has AAI been updated to support this kind of sharing? Could you please point me to the right links to learn the AAI side of the story?
    • PS: I'm aware of the Allotted Resource Model. It is WIP and doesn't really work with PNFs. Has any work happened, or is any planned, w.r.t. PNF inventory apart from the Allotted Resource Model?
BR,
Viswa

On Thu, Dec 20, 2018 at 10:50 PM Yang Xu <Yang.Xu3@...> wrote:

Gupta,

 

I personally found these slides and the demo very helpful, and I think they can help you get started.

 

PNF PnP Slides & Demo

Presentation Slides

demo

 

thanks,

-yang

 

 

From: onap-discuss@... [mailto:onap-discuss@...] On Behalf Of Prateek Gupta
Sent: Thursday, December 20, 2018 7:56 AM
To: Gary Wu <gary.i.wu@...>; onap-discuss@...
Subject: [onap-discuss] Documentation for PNF plug n play.

 

Hi,

 

I have the ONAP R3 release installed, and I want to try the PNF plug and play use case.

Has anyone tested it, and does anyone have the complete documentation, from on-boarding to instantiation?

 

Thanks,

Prateek Gupta


Re: Login Credentials for SDC Cassandra Database

Vitaliy Emporopulo
 

Hi Thamlur,

 

It’s asdc_user/Aa1234%^!

 

You can see it in the SDC configuration file https://gerrit.onap.org/r/gitweb?p=sdc.git;a=blob;f=sdc-os-chef/environments/Template.json;h=d212d1e98bd04224c6dcc1c4287ecb14df424dfe;hb=refs/heads/casablanca#l90

 

Regards,

Vitaly Emporopulo

 

From: onap-discuss@... <onap-discuss@...> On Behalf Of Raju
Sent: Monday, December 24, 2018 08:35
To: onap-discuss@...
Subject: [onap-discuss] Login Credentials for SDC Cassandra Database

 

Hi SDC Team,

 

Please help me with the default username and password for SDC Cassandra Database in Casablanca release.

 

 

Thanks & Regards,

Thamlur Raju.

This email and the information contained herein is proprietary and confidential and subject to the Amdocs Email Terms of Service, which you may review at https://www.amdocs.com/about/email-terms-of-service
