On 29/11/12 16:40, intrigeri wrote:
> Hi,
>
> (Coming back to this branch that apparently managed to get out of our
> radar, possibly due to the lack of a ticket.)
>
> anonym wrote (09 Oct 2012 14:17:45 GMT) :
>> - With "normal" non-PAE kernel:
>> * Patterns remaining after wipe: 51K ≃ 800 KiB of memory. Also, in
>> this case hugetlb_mem_wipe exits at 51% progress with the
>> following error:
>
>> wipe_page: Cannot allocate memory
>> spawn_new failed (1)
>
>> OTOH, since only 51K patterns remain, the progress meter seems
>> to be wrong (I suppose it's just a buffering issue) and it in
>> fact dies at around 99% progress.
>> * Time required for wipe: ~1 second.
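
For anyone skimming the thread: the technique boils down to grabbing
anonymous huge pages until the kernel refuses, so an ENOMEM near 100%
is arguably the expected end condition. A rough sketch of that idea
(not the branch's actual code; the 2 MiB huge page size is an
assumption):

    /* Map anonymous huge pages one at a time, overwrite each, and
     * stop when the kernel runs out of them. */
    #define _GNU_SOURCE
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #define HUGE_PAGE_SIZE (2UL * 1024 * 1024)  /* assumed 2 MiB pages */

    int main(void)
    {
        unsigned long pages = 0;

        for (;;) {
            void *p = mmap(NULL, HUGE_PAGE_SIZE, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
            if (p == MAP_FAILED) {
                /* "Cannot allocate memory" here just means no huge
                 * pages are left, i.e. the wipe is complete. */
                fprintf(stderr, "mmap: %s after %lu pages\n",
                        strerror(errno), pages);
                break;
            }
            memset(p, 0, HUGE_PAGE_SIZE);  /* overwrite the page */
            pages++;  /* keep it mapped so it can't be handed back */
        }
        return 0;
    }
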
>
> Ague told me a few weeks ago that this issue had been solved.
> Indeed, it looks like it's been fixed by 64a404f2 (Do not assume page
> sizes in hugetlb_mem_wipe). I could not reproduce this bug in a VM
> fed with 5 GB of memory and an emulated Pentium III without PAE,
> but the whole process goes too fast for me to see anything useful.
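
Judging by the commit subject, the fix is presumably to query the huge
page size at runtime instead of hard-coding one. For illustration, one
common way to do that (my sketch, not necessarily what 64a404f2
actually does) is to parse /proc/meminfo:

    #include <stdio.h>

    /* Returns the huge page size in bytes, or 0 if not found. */
    static unsigned long huge_page_size(void)
    {
        char line[128];
        unsigned long kib = 0;
        FILE *f = fopen("/proc/meminfo", "r");

        if (!f)
            return 0;
        while (fgets(line, sizeof line, f))
            if (sscanf(line, "Hugepagesize: %lu kB", &kib) == 1)
                break;
        fclose(f);
        return kib * 1024;
    }

    int main(void)
    {
        printf("huge page size: %lu bytes\n", huge_page_size());
        return 0;
    }
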
>
> => anonym, could you please confirm the bugs you experienced are gone?
> (I've merged this branch into experimental to help such tests.)
Both tests were performed in the same VM with 8 GiB of RAM:

* With the PAE kernel: 137K occurrences ≃ 2.1 MiB of unwiped memory.
* With the non-PAE kernel: 155K occurrences ≃ 2.4 MiB of unwiped memory.

I didn't experience the "wipe_page: Cannot allocate memory" or
"spawn_new failed (1)" errors in either test.
>> Given that:
>
>> * current devel cleans *all* memory in the most common case (PAE
>> kernel), and does so without taking much more time, and
>> * I'm unsure what the implications are of hugetlb_mem_wipe exiting with
>> that error on a non-PAE kernel,
>
>> I'd rather wait until after Tails 0.14 before merging
>> feature/hugetlb_mem_wipe.
>
> Once it is confirmed the bug is gone, I would like to merge this
> branch in time for 0.16 (so I'll probably update the design doc
> myself, hrm). What do you think?
Well, while this branch makes the wipe much faster and better looking,
it's not as effective as Tails' current parallel sdmem approach when
using a PAE kernel: in my tests that approach usually leaves 0
occurrences, and at worst a few hundred. The PAE kernel is arguably
what most users will run. For the non-PAE kernel I believe this branch
is better, though.
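
For comparison, "parallel sdmem" amounts to starting several sdmem
processes at once and waiting for all of them, roughly like the sketch
below. One process per CPU and the absence of sdmem options are my
assumptions here (see sdmem(1) and Tails' own wrapper for what is
actually run):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        long n = sysconf(_SC_NPROCESSORS_ONLN);

        if (n < 1)
            n = 1;
        for (long i = 0; i < n; i++) {
            pid_t pid = fork();
            if (pid == 0) {
                execlp("sdmem", "sdmem", (char *)NULL);
                perror("execlp sdmem");  /* reached only if exec fails */
                _exit(127);
            } else if (pid < 0) {
                perror("fork");
            }
        }
        while (wait(NULL) > 0)
            ;  /* reap all the wiper processes */
        return 0;
    }
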
Are we sure we want this?
Cheers!