Re: [Tails-dev] Tails control port filter proxy in Whonix?

From: anonym
Date:  
To: The Tails public development discussion list
CC: michael, Patrick Schleizer
New topics: [Tails-dev] Tails HSTS website error - was: Tails control port filter proxy in Whonix?
Subject: Re: [Tails-dev] Tails control port filter proxy in Whonix?
Patrick Schleizer:
> Hi,
>
> as discussed elsewhere, yes, it would be great if we could share code bases!


Agreed, but we have to realize that at the moment Whonix and Tails run
these filters in quite different contexts and under different threat
models. Whonix runs the filter in a different VM than the clients
connecting to it, and so has less information about the client
connections than what we can have in Tails (e.g. client PID/uid/gid,
executable path, etc.), and probably most of the problems you are facing
are due to this compartmentalization. Really, what makes Tails' control
port filter worthwhile is its ability to match filter rules to the client
executable/script, and since Whonix won't be able to make use of this (I
mean, the appvm is untrusted in your threat model, so any such
information cannot be trusted anyway) there's perhaps not that much to
gain from migrating. :/

I recommend that you read our monthly report for September:

    https://tails.boum.org/news/report_2016_09/#index2h1


and look at the documentation at the top of the script, and the filter
rules we ship to get an idea of what it can do.

As you can see, in Tails we use match-exe-paths and match-users a lot,
but since you won't have access to these I guess you want something like
match-hosts, so that the filter is picked based on the client's
(non-localhost) address (= VM). Right?
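
For illustration, a hypothetical match-hosts rule could mirror the
existing match-exe-paths/match-users style; the rule name, the address
and the commands below are made up, not something we ship:

    - name: whonix-workstation
      match-hosts:
        - 10.137.6.41
      commands:
        GETINFO:
          - 'net/listeners/socks'
        ADD_ONION:
          - 'NEW:BEST Port=80,176\d\d'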

> Does it support simultaneous connections? (Such as two applications
> using ephemeral Tor hidden services plus Tor Browser at once.)


Yes, it uses socketserver.ThreadingTCPServer with each such server
thread opening its own "real" control port session with tor.
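
For the curious, here is a minimal sketch of that structure (the class
names and the control socket path are illustrative, not what the actual
script uses):

    import socketserver
    from stem.control import Controller

    class FilteredControlPortHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # One thread per client (courtesy of ThreadingTCPServer), and
            # one dedicated "real" control port session with tor per thread.
            with Controller.from_socket_file('/run/tor/control') as tor:
                tor.authenticate()
                for line in self.rfile:
                    command = line.decode('utf-8').strip()
                    # ... here the command would be checked against the
                    # filter rules, and rejected if nothing matches ...
                    reply = tor.msg(command)
                    self.wfile.write(reply.raw_content().encode('utf-8'))

    class FilteredControlPortServer(socketserver.ThreadingTCPServer):
        allow_reuse_address = True

    if __name__ == '__main__':
        server = FilteredControlPortServer(('127.0.0.1', 9051),
                                           FilteredControlPortHandler)
        server.serve_forever()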

> Does Tails control port filter proxy support events? I mean, can a
> client application ask for something and Tor will maybe answer a long
> time later?


Yes, thanks to python-stem it was simple.

> Whonix control-port-filter-python TODO, also stuff we need before we can
> use it:
>
>>> - https://phabricator.whonix.org/T561
>
> Is something we must use in Whonix. Not a cpfpy missing feature but a
> general issue. In essence, for example the onionshare localhost server
> listener will not be reachable. We somehow must force it to listen on all
> interfaces so Tor running on the gateway can access it.


Got it. This seems like a problem orthogonal to the filter, so I'll skip it.

>>> - https://phabricator.whonix.org/T562
>
> This is about parsing add_onion and whitelisting sane commands rather
> than letting through everything.


For any command we allow a list of regexes for the arguments. If a
command doesn't match any of them, it is filtered.
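
In other words, the matching boils down to something like this (the
rule structure here is simplified for illustration):

    import re

    def command_allowed(command_line, rules):
        # Split "ADD_ONION NEW:BEST Port=80,17600" into command and args,
        # then accept it only if the args fully match a whitelisted regex.
        cmd, _, args = command_line.partition(' ')
        return any(re.fullmatch(regex, args)
                   for regex in rules.get(cmd.upper(), []))

    rules = {'ADD_ONION': [r'NEW:BEST Port=80,176\d\d']}
    command_allowed('ADD_ONION NEW:BEST Port=80,17600', rules)  # True
    command_allowed('ADD_ONION NEW:BEST Flags=NonAnonymous Port=80,80',
                    rules)                                      # False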

> add_onion is not a whitelist/not whitelist.


I do not understand this sentence...

> Buggy applications or by user mistake, they could choose the add_onion
> flag nonanonymous, which would be a disaster. We also don't know what
> Tor control protocol upgrades are coming in the years to come. So I
> strongly suggest only letting through whitelisted syntaxes.


... but this I get and agree with. Currently we require ADD_ONION for
onionshare to have args matching the regex 'NEW:BEST Port=80,176\d\d'.

> Malicious applications could make the Tor HS listener bind on the wrong
> interface. In Whonix-Gateway, maliciously listen on Whonix-Gateway.
> Which could be fatal if we had also a real Tor ControlPort open there.
> Does that make sense? I am not sure it applies to Tails, that depends on
> your design and threat model, but it is however an interesting thought
> that can inspire to finding more security issues with it.
>
> Also it may be worth making sure it can only bind to specified (and
> configureable) local ports?


Ack, this is an issue. In Tails there must be an explicit firewall rule
allowing a user to listen on some port, so I think we are covered.

> For connectivity, we need to remove 127.0.0.1 and replace it with
> Whonix-Workstation IP. That is currently done with the following code
> block that I was going to merge with T562.
>
> https://github.com/Whonix/control-port-filter-python/blob/6a131266a8dc8f98ff22a3b83fae9d43e38b3127/usr/sbin/cpfpd#L345-L375


Got it. Our filter doesn't do this (as we do not have this need) but I
feel a general solution could be to allow sed-like rewriting rules, e.g.

    ADD_ONION:
      - pattern:     'NEW:BEST Port=80,(176\d\d)'
        replacement: 'NEW:BEST Port=80,10.137.6.41:{0}'


which would be easy to implement.
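
Applying such a rule amounts to a plain re.sub() before forwarding the
command to tor; the '{0}' placeholder above would map to a regex
backreference, roughly:

    import re

    pattern     = r'NEW:BEST Port=80,(176\d\d)'
    replacement = r'NEW:BEST Port=80,10.137.6.41:\1'

    re.sub(pattern, replacement, 'NEW:BEST Port=80,17600')
    # -> 'NEW:BEST Port=80,10.137.6.41:17600'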

>>> - https://phabricator.whonix.org/T564
>
> Protecting cpfpy from DDOS from client applications. Not sure that
> matters for Tails?


We do not do much specific here. What kind of DoS are you talking about:
eating up all RAM, crashing the filter via an OOM kill, or preventing
the filter from serving other clients? Admittedly, all we do is deal
with each client in a separate thread and limit client requests to 1024
bytes.

>>> - https://phabricator.whonix.org/T565
>
> Similar to above.


There's quite some cost in dealing with each new client since we do a
client port -> client PID lookup, but this wouldn't be an issue in
Whonix. Still, flooding the filter will probably lead to huge CPU usage
no matter what. Tor itself is susceptible to this, but I'm sure the
control port filter adds at least 10x overhead.
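
For context, that lookup amounts to mapping the client's source port to
the owning process; a rough equivalent using psutil (purely for
illustration, the actual script may well do it differently):

    import psutil

    def client_pid(client_port, filter_port=9051):
        # Find the TCP connection from client_port to our listening port
        # and return the PID owning it (None unless we have the
        # privileges to see other users' connections).
        for conn in psutil.net_connections(kind='tcp'):
            if (conn.laddr and conn.laddr.port == client_port and
                    conn.raddr and conn.raddr.port == filter_port):
                return conn.pid
        return None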

>>> - https://phabricator.whonix.org/T566
>
> The unit test for T562.


Writing unit tests for commands vs. rules should be easy, centered
around handle_controlport_session() -- just pass it the rules, a read
socket to which you write the commands you want to test, and a write
socket from which you read back the result.
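
A hypothetical test along those lines (handle_controlport_session()'s
real signature may differ, so the actual calls are left as comments):

    import socket
    import unittest

    # from cpfp import handle_controlport_session   # hypothetical import

    class TestAddOnionRule(unittest.TestCase):
        def test_onionshare_add_onion_is_allowed(self):
            rules = {'ADD_ONION': [r'NEW:BEST Port=80,176\d\d']}
            client, server = socket.socketpair()
            client.sendall(b'ADD_ONION NEW:BEST Port=80,17600\r\n')
            client.shutdown(socket.SHUT_WR)
            # handle_controlport_session(rules, server.makefile('rb'),
            #                            server.makefile('wb'))
            # self.assertTrue(client.recv(1024).startswith(b'250'))

    if __name__ == '__main__':
        unittest.main()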

> Other required features:
>
> - Configurable by dropping .d-style[7] configuration snippets. (ex:
> /etc/cpfpy.d)


Yes.
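
Loading such snippets should be trivial; a rough sketch, assuming YAML
rule files and the /etc/cpfpy.d path from your example:

    import glob
    import yaml  # python3-yaml

    def load_rules(config_dir='/etc/cpfpy.d'):
        # Concatenate the rules from all snippets, in lexical order so
        # that e.g. 50_user.yml is applied after 30_default.yml.
        rules = []
        for path in sorted(glob.glob(config_dir + '/*.yml')):
            with open(path) as f:
                rules.extend(yaml.safe_load(f) or [])
        return rules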

> - Debian packaging.


No, but if you plan to use it, I guess I'd take a shot at packaging it.

> Lesser important features:
>
> - Supports logging.


I'm not sure what type of logging you are talking about here. Currently
it uses print() (to stdout) and flush, which works well with the journal
when run as a systemd service. Normally only filtered commands are
logged, but there's also a --complain mode which logs all client
requests, which is useful when writing rules for a new application.

> - Honors signals sigterm, sigint, keyboard interrupt.


I guess?

> - systemd support


Not sure what this means. We have a .service file.

> - When request is 'getinfo net/listeners/socks' answer with a lie
> '250-net/listeners/socks="127.0.0.1:9150"'.


Nope. Why is this needed?

I could even imagine yet another rule-type for solving these types of
issues:

    GETINFO:
      - pattern: 'net/listeners/socks'
        respond: '250-net/listeners/socks="127.0.0.1:9150"'
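
Handling it would just short-circuit the session instead of forwarding
to tor, roughly (the rule structure is again simplified):

    import re

    def canned_reply(command_line, respond_rules):
        # Return the canned reply if a respond-rule matches, otherwise
        # None (in which case the command is forwarded to tor as usual).
        cmd, _, args = command_line.partition(' ')
        for rule in respond_rules.get(cmd.upper(), []):
            if re.fullmatch(rule['pattern'], args):
                return rule['respond'] + '\r\n250 OK\r\n'
        return None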


--

In conclusion, I think the truth is that Whonix switching to our filter
will require some work to reach feature-parity with your current filter,
and you will not really gain anything by doing so except code sharing.
YMMV. That said, I'd happily implement match-hosts and the two
additional types of rules I mentioned above if that would be enough for you.

Still, I feel we've left roflcoptor out of this discussion. At the
moment the biggest turn-offs for me are that it is written in Go, a
language I have little interest in learning, and that it lacks Debian
packaging. I'm not sure what else to say, but it just feels like there
needs to be a discussion before we'd proceed with collaborating on a
control port filter.

Cheers!