Re: [Tails-dev] Automated tests specification

From: anonym
Date:
To: The Tails public development discussion list
Subject: Re: [Tails-dev] Automated tests specification
On 08/26/2015 07:21 PM, bertagaz wrote:
> On Wed, Aug 26, 2015 at 02:00:25PM +0200, anonym wrote:
>> On 07/01/2015 07:19 PM, intrigeri wrote:
>>> bertagaz wrote (25 Jun 2015 09:41:23 GMT) :
>>>> for feature branches, we could run the full test suite only on the
>>>> daily builds, and on every git push run either only the automated
>>>> tests related to the branch, or a subset of the whole test suite.
>>>
>>> I'm not sure what's the benefit of testing a topic branch every day if
>>> no new work has been pushed to it. In the general case, as a developer
>>> I'd rather see them tested on Git push only, with some rate limiting
>>> per-day if needed.
>>
>> I would say that testing images built due to a Git push only is good
>> enough, but not ideal. We have good reasons for building branches on
>> Debian package uploads too, and running the tests on builds triggered
>> by those would be nice as well.
>
> Yes, the APT upload use case has been thought about, but it is reserved
> for future development; it's not supposed to be taken care of in the
> current milestone.


I know...

> The rationale behind my proposal was that it would at least raise the
> issue if there were some external changes that break the build of this
> feature branch (mostly, changes in APT/Debian).


... so this was the point I was gonna make, but apparently forgot. And I
somehow missed your mention of "external changes" in your response to
intrigeri. Great, then we're on the exact same page! :)

>>> See below wrt. one specific case.
>>
>> I couldn't find what this refers to.
>>
>>>> We can also consider testing only the feature branches that are
>>>> marked as ReadyForQA as a beginning, even if that doesn't cover
>>>> Scenario 2 (developers).
>>>
>>> Absolutely, I think that would be the top-priority thing to do for
>>> topic branches: let's ensure we don't merge crap.
>>
>> It would be great to also ensure that we don't review "crap". :) I guess
>> that is scenario 2, which we explicitly ignore with this proposal. I'll
>> post some ideas about how to deal with that in a separate thread, but I
>> guess this is a good start that will give us 95% of what we want.
>
> Ok, so that looks like the way of cutting down the number of automated
> test runs that everyone agrees on so far.
>
> Now I'm wondering if we should implement this at first, or just start
> with testing all of them on every push, and switch to that solution if
> our infra can't cope with it.


I expect things to quickly get out of hand if we do it on every push,
but hey, my expectations are frequently wrong. I say we try it, as long
as it's easy to quickly switch to your suggested optimization if needed.

>>>> We can also maybe find more ways to split the automated test suite
>>>> into faster subsets of features depending on the context, and define
>>>> priorities for built ISOs and/or tests.
>>>
>>> This feels ambitious and potentially quite complex. I say let's design
>>> something simple and then re-adjust.
>>
>> I'm not sure I like this idea in principle. With "context" I assume you
>> (bertagaz) mean the context of the change implemented in the ISO to be
>> tested, e.g. for an ISO that upgrades Tor, the context is "tests that
>> use Tor". It's true that in that case we may only want to run the
>> subset of tests that use Tor, but not Tails USB installation/upgrades,
>> for instance. This is in fact something we have done manually, and while
>> it has worked quite well, I think we've already "missed" stuff. After
>> all, these subsets would represent the obvious things to test, which I
>> as an implementer or reviewer probably would explicitly test before
>> asking for a review. Hence, only running them wouldn't catch the
>> non-obvious edge cases that would be found outside of the subsets.
>>
>> It should be noted, though, that defining such subsets actually isn't
>> very complex. It can be implemented with Cucumber tags: e.g. we could
>> have scenarios tagged @networking, @tor, @lan, @persistence,
>> @usb_upgrade etc., even in combinations, and then run only scenarios
>> that have at least one of the tags we're interested in.
>
> Yes, Cucumber tags were the solution I was thinking about to implement
> this. But I get your "do not miss stuff" argument and it sounds
> completely rational to me.
>
> Yet that could be an option we could combine with the previous one
> ("test only ReadyForQA branches"): we could test only specific features
> during the whole dev life of a branch, and then, once it is marked as
> ReadyForQA, run the whole test suite on it. That would pretty much look
> like the way you describe the development of a branch.


Ok, this sounds more acceptable.
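
To make the combination concrete, here's a rough sketch of what a
Jenkins job for a topic branch could end up doing. Everything in it is
hypothetical: the branch_is_ready_for_qa helper, the per-branch tag
list, and the direct cucumber invocation.

  #!/bin/sh
  # Hypothetical CI step for a topic branch.
  BRANCH="$1"
  TAGS="$2"    # per-branch tag list, e.g. "@tor,@networking"

  if branch_is_ready_for_qa "$BRANCH"; then
      # Marked ReadyForQA: run the whole test suite.
      cucumber
  else
      # Still in development: run only the tagged subset.
      cucumber --tags "$TAGS"
  fi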

> Could be interesting, especially if a dev is using the test suite as a
> TDD solution. But I guess in this case she doesn't need such input from
> Jenkins; she'd already have it by herself.


Right. I do not expect to stop running the automated test suite locally
when developing new stuff. In fact, I expect that being able to run the
automated test suite yourself will be the only way to do TDD efficiently
until our infrastructure has evolved quite a bit more. :) That said, I
still see great value in this initial iteration in terms of how it will
improve project-wide branch integration testing and QA in general, and
ease the RMs' and reviewers' workload.

Cheers!