Re: revised code coverage requirement


Pierre Close

Hello Krzysztof,

Please find my comments inline.

Amy,

Is it possible to wait a little longer before agreeing on this requirement? Or is there any urgency in deciding today? If so, Krzysztof's comments lead me to prefer delaying the use of Sonar until we have a solid plan.

Best regards,
Pierre

-----Original Message-----
From: Krzysztof Opasiak <k.opasiak@...>
Sent: Wednesday, October 30, 2019 10:10 PM
To: Close, Pierre <pierre.close@...>; ZWARICO, AMY <az9121@...>; Pawlak Paweł 3 - Korpo <Pawel.Pawlak3@...>; onap-seccom@...
Subject: Re: [Onap-seccom] revised code coverage requirement

Hi,

On 30.10.2019 19:24, Pierre Close wrote:
Amy,

That’s a good question, but your explanation makes sense to me: when a
new Sonar scan is run, it probably overwrites the results of the previous
one, causing those counters to reset… Additionally, what is
considered “new code”? Is a configuration file change seen as new code?

I’ll try to get an explanation from my colleagues and come back to you
when I get an answer (or maybe the LF knows?).
We would need a clear definition of “new code”. Is a change in an already existing class new code, or only newly created classes? Is a new function within a class new code?
[PC]: Agreed. That's what I was suggesting in my comment above.
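[PC]: For reference, my (unverified) understanding is that Sonar counts as "new code" any line added or changed since the configured New Code baseline (previous version, a number of days, or a specific analysis), whether that line sits in an existing class or a brand new one; we should still ask the LF to confirm this. As a rough, untested sketch of what the tool already exposes, a query like the one below against the Web API should return the new-code measures for a project; the instance URL and project key are only placeholders I made up:

    # Rough sketch (untested): query SonarQube's Web API for the "new code"
    # coverage measures of a single project. The base URL and project key
    # are placeholders -- adjust to the actual Sonar instance and component.
    import json
    import requests

    SONAR_URL = "https://sonar.onap.org"          # assumed instance URL
    PROJECT_KEY = "org.onap.example:example"      # placeholder component key

    resp = requests.get(
        f"{SONAR_URL}/api/measures/component",
        params={
            "component": PROJECT_KEY,
            # Standard Sonar metric keys for the "new code" period.
            "metricKeys": "new_coverage,new_lines_to_cover,new_uncovered_lines",
        },
    )
    resp.raise_for_status()

    # Print the raw measures so we can see exactly what the tool reports
    # for the configured New Code period.
    print(json.dumps(resp.json().get("component", {}).get("measures", []), indent=2))

This is only meant to inspect what data is actually there, not a proposal for the final metric.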

Saying that we will just use the metric that the tool provides, without any explanation to the community, is not a good thing, I believe.
[PC]: I think we need to dig further into the capabilities of the tool and verify how we can use the metrics in a proper and meaningful way; otherwise it won't be of any use, as you mention.


In the meantime, since the feature seems available in the tool, we
might want to consider binding the “new code coverage” objective to
the tool – in other words, if the feature is usable and gives what we
expect, we could start monitoring it.

Would that make sense?
As far as I can see (please correct me if I'm wrong), the tool gives you only the current status of the code and a diff since the last run. What we would need is a dashboard like Bitergia, where you can select any arbitrary period of time and check the results.
[PC]: I was under the impression that you could choose different versions to compare between... That said, your comments are valid, so I propose that we take some time to further analyze what we can do, build (if possible) meaningful metrics and dashboards (as you mention), and then decide what route to take. I know the timing is against us, but throwing all this at the community without further research is not constructive, so I support your statements. Would you agree with this approach?
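[PC]: On the arbitrary-period point: if I read the Web API documentation correctly, there is a measures history endpoint that accepts a date range, so a Bitergia-style view could in principle be built on top of it. A quick, untested sketch (again, the instance URL, project key and dates are placeholders of mine):

    # Rough sketch (untested): pull the coverage history of a project over an
    # arbitrary date range from SonarQube's Web API, as a starting point for a
    # Bitergia-style dashboard. URL, project key and dates are placeholders.
    import json
    import requests

    SONAR_URL = "https://sonar.onap.org"          # assumed instance URL
    PROJECT_KEY = "org.onap.example:example"      # placeholder component key

    resp = requests.get(
        f"{SONAR_URL}/api/measures/search_history",
        params={
            "component": PROJECT_KEY,
            "metrics": "coverage,new_coverage",
            "from": "2019-07-01",                 # arbitrary start of the period
            "to": "2019-10-30",                   # arbitrary end of the period
        },
    )
    resp.raise_for_status()

    # Each entry in "measures" carries a "history" list of (date, value)
    # points that could feed a time-series chart.
    print(json.dumps(resp.json().get("measures", []), indent=2))

If this works as advertised, it would at least give us the raw time series to experiment with before we commit to anything.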

Best regards,
--
Krzysztof Opasiak
Samsung R&D Institute Poland
Samsung Electronics
