Author: sajolida
Date:
To: Tails user experience & user interface design
Subject: Re: [Tails-ux] Report on user testing of the Additional Software beta
Bernard Tyers:
> Hi Sajolida,
>
> Well done on a really nice piece of work!


Hey Bernard!

> It's really great to see this work being done and also shared publicly.


Thanks!

> Will it be put on the Tails UX wiki?


We don't have a Tails UX wiki as such. What we have is:

- Blueprints on our website for our work on new features. In this case:

https://tails.boum.org/blueprint/additional_software_packages/gui/

I haven't linked the results from there yet, but I could :)

- A Git repo that I use for my UX work (I say "my" because I'm alone for
now). It's pretty new and I push there (see the clone command after
this list if you want a local copy):

  - Wireframes of the prototypes that I design and other documents
    related to new features:


    https://git.tails.boum.org/ux/tree/additional%20software


  - Some generic wireframes, tools, and templates:

    https://git.tails.boum.org/ux/tree/generic (for WireframeSketcher)
    https://git.tails.boum.org/ux/tree/tools


In this case, the results are here:

    https://git.tails.boum.org/ux/tree/additional%20software
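
If you'd rather browse all of that offline, cloning should work too
(assuming the cgit instance serves the repository over HTTPS at its
base URL, which I haven't double-checked):

    git clone https://git.tails.boum.org/ux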


> About the "clear boundary between what is part of the Additional
> Software project and what is not", it's probably better *not* to make
> incorrect assumptions which might lead you to false solutions.


Right. And also, just because we are working on new-feature-x doesn't
mean that UX issues elsewhere in Tails or GNOME won't prevent users
from using this new feature.

> Is there a possibility to run this usability testing again?


Why not! For example, it would be super interesting to run the same
tests with people who are not otherwise Linux users. Selecting Linux
users this time was a conscious bias, but it would be interesting to
see how other people perform.

The missions that I used for the tests are here:

https://git.tails.boum.org/ux/tree/additional%20software/mission.fodt

And the ISO image is here:

https://tails.boum.org/news/test_asp-beta/

I don't think you need anything else.

But note that we'll meet next Tuesday with the main developer of the
feature to draw conclusions and prioritize the issues to fix over the
coming weeks. If we get more test results after that, we might not be
able to integrate them (for now at least...).

> Could make probe where that breakdown happens?


I'm not sure I understand what you mean here. Could you rephrase?

> It could be in the language used in the documentation (if I understand
> correctly that the user may need to use some external documentation?).


I didn't prompt users to consult our documentation, but they all did.
If you perform the test again from the beta ISO image, you can get them
to read the documentation by turning the network off and opening the
"Tails documentation" launcher from the desktop. Then they will get the
version of the documentation that's embedded in the ISO.
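
If you want to script that step, here's a rough sketch in Python. The
index path is only my assumption about where the ISO embeds the
website, so verify it on the image first:

    # Hypothetical sketch: open the embedded documentation like the
    # "Tails documentation" launcher does. The path is an assumption,
    # not confirmed; check where your ISO actually ships the website.
    import subprocess
    subprocess.run(
        ["xdg-open", "/usr/share/doc/tails/website/doc/index.en.html"],
        check=True,
    )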

> If you suspect there could be some difficulties in the content, here's a
> really simple method for testing content:
>
> (you can do this as part of the mission testing, or separately)
>
> - provide the user with the required content, preferably printed out
> - give them a coloured highlighter
> - ask them to highlight words they don't understand
>
> If the same words keep coming up as problems, you know there is an issue.
>
> Here's a good blogpost on the method:
> https://userresearch.blog.gov.uk/2014/09/02/a-simple-technique-for-evaluating-content/


Ooh, that's interesting!

In this case I think it was more interesting to see how people
navigate inside Tails between the documentation and the software, and
between the documentation and the website itself. I identified a bunch
of navigation issues in the results.

I also suspect that asking people to highlight what they don't
understand would drastically change their reading behavior: during the
tests it was clear that people were scanning and not reading the whole
text (which is what most real users do). So it's also important to
spot what they see and what they miss while scanning, so we can
improve our content for scanning.

But this technique looks super interesting and I'll definitely give it
a try to evaluate content in more detail!

> Here's a few (possibly) useful blogposts on testing content:
>
> https://gds.blog.gov.uk/2016/04/06/guest-post-looking-at-the-different-ways-to-test-content/
> https://insidegovuk.blog.gov.uk/2015/11/16/guerrilla-testing-content-it-makes-it-better/


Good ones, thanks!

--
sajolida