Re: [Tails-dev] Automated tests specification

Author: bertagaz
Date:  
To: The Tails public development discussion list
Subject: Re: [Tails-dev] Automated tests specification
Hi,

On Wed, Aug 26, 2015 at 07:46:17PM +0200, anonym wrote:
> On 08/26/2015 07:21 PM, bertagaz wrote:
> > The rationale behind my proposal was that it would at least raise the
> > issue if there were some external changes that break the build of this
> > feature branch (mostly, changes in APT/Debian).
>
> ... so this was the point I was gonna make, but apparently forgot. And I
> somehow missed your mention of "external changes" in your response to
> intrigeri. Great, then we're on the exact same page! :)


Yep, that's nice! :)

> I expect things to quickly go out of hand if we do it on every push, but
> hey, my expectations are frequently wrong. I say we try it if it's easy
> to quickly switch to your suggested optimization if needed.


Well, in fact we already decided to use different strategies depending
on the ReadyForQA state of a branch, so it will be implemented from the
beginning. :)

> > Yes, cucumber tags were the solution I was thinking about to implement
> > this. But I get your "do not miss stuff" argument and it sounds
> > completely rational to me.
> >
> > Yet that could be an option we could combine with the previous one
> > ("test only ReadyForQA branches"): we could test only specific features
> > during the development life of a branch, and then once it is marked as
> > ReadyForQA, run the whole test suite on it. That would pretty much look
> > like the way you describe the development of a branch.
>
> Ok, this sounds more acceptable.


I've updated the "Future ideas" section of the blueprint with this.
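
Just to illustrate how the cucumber tags option could look in practice
(the tag name below is made up, it's not one we have, and the exact way
we would pass it through our test suite wrapper is still to be decided):

    # During the development of a branch: run only the features selected
    # by a cucumber tag (@my_branch_feature is a hypothetical tag name).
    cucumber --tags @my_branch_feature

    # Once the branch is marked ReadyForQA: run the whole test suite.
    cucumber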

I've also added a new section about the results to keep:

## What kind of results shall be kept

The test suite produces different kinds of artifacts: log files, screen
captures for failing steps, snapshots of the test VM, and videos of
the running test session.

Videos may be a bit too much to keep, given that they slow down the test
suite and might take quite a bit of disk space to store. If we want to
keep them, we may want to do so only for failing test suite runs. But
then, the screen captures are often enough to identify why a step failed
to run. If we decide to still use videos, then we probably have to wait
for [[!tails_ticket 10001]] to be resolved.

Proposal:

* For green test suite runs: keep the test logs (Jenkins natively does
  that)
* For red test suite runs: keep the screen captures and the logs.

The retention strategy should be the same as for the automatically
built ISOs.
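
To make the proposal a bit more concrete, here is a rough sketch of what
the post-run step could do; the paths and the failure marker are made up,
only the idea of keeping screen captures for red runs matters:

    #!/bin/sh
    # Rough sketch of a post-run step (all paths and the failure marker are
    # made up): Jenkins already keeps the console log, this only copies the
    # extra artifacts we want it to archive.
    set -e

    RESULT_DIR="$WORKSPACE/artifacts"     # directory the Jenkins job archives
    RUN_DIR="$WORKSPACE/features/output"  # hypothetical test suite output dir
    mkdir -p "$RESULT_DIR"

    # Always keep the test logs.
    cp "$RUN_DIR"/*.log "$RESULT_DIR"/ 2>/dev/null || true

    # Keep the screen captures only for red runs (hypothetical failure marker).
    if [ -e "$RUN_DIR/test_suite_failed" ]; then
        cp "$RUN_DIR"/*.png "$RESULT_DIR"/ 2>/dev/null || true
    fi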

I don't expect this last one to raise a lot of discussion, so let's say that
the deadline for this thread is next Sunday 12pm CEST unless some points are
still not clear. I think the rest has already been discussed and drafted enough.

bert.