[Ciotoflow] [paranoia] leave your cellphone at home

From: ono-sendai
Date:
To: Flussi di ciotia.
Subject: [Ciotoflow] [paranoia] leave your cellphone at home
Here is a nice article/interview with Jacob Appelbaum. It covers topics ranging from speculation about the NSA's new center in Utah (which is expected to have a processing capacity on the order of YOTTABYTES), to the revelations of former NSA agent Binney, who disclosed that since 2001 the NSA has been *systematically* monitoring *all* communications on American soil, to the thousand ways in which governments (especially the US government) spy on and control their citizens. A bit long, but good bathroom reading, and maybe useful for the cryptoparty!

And then... think about the mining that will be done on all this information!!!

Paranoia++

http://nplusonemag.com/leave-your-cellphone-at-home


Sarah Resnick

Leave Your Cellphone at Home

Interview with Jacob Appelbaum

- From OCCUPY Gazette 4, out May 1.

Earlier this year in Wired, writer and intelligence expert James Bamford described the National Security Agency's plans for the Utah Data Center. A nondescript name, but it has another: the First Intelligence Community Comprehensive National Cyber-security Initiative Data Center. The $2 billion facility, scheduled to open in September 2013, will be used to intercept, decipher, analyze, and store the agency's intercepted communications: everything from emails, cell phone calls, Google searches, and Tweets, to retail transactions. How will all this data be stored? Imagine, if you can, 100,000 square feet filled with row upon row of servers, stacked neatly on racks. Bamford projects that its processing capacity may aspire to yottabytes, or 10^24 bytes, a magnitude for which no neologism of higher order has yet been coined.

To store the data, the NSA must first collect it, and here Bamford relies on a man named William Binney, a former NSA crypto-mathematician, as his main source. For the first time since leaving the NSA in 2001, Binney went on the record to discuss Stellar Wind, which we all know by now as the warrantless wiretapping program, first approved by George Bush after the 2001 attacks on the twin towers. The program allowed the NSA to bypass the Foreign Intelligence Surveillance Court, in charge of authorizing eavesdropping on domestic targets, permitting the wholesale monitoring of millions of American phone calls and emails. In his thirty years at the NSA, Binney helped to engineer its automated system of networked data collection which, until 2001, was exclusively directed at foreign targets. Binney left when the organization started to use this same technology to spy on American citizens. He tells of secret electronic monitoring rooms in major US telecom facilities, controlled by the NSA and powered by complex software programs examining Internet traffic as it passes through fiber-optic cables. (At a local event last week, Binney circulated a list of possible interception points, including 811 10th Avenue, between 53rd & 54th St., which houses the largest New York exchange of AT&T Long Lines.) He tells of software, created by a company called Narus, that parses US data sources: any communication arousing suspicion is automatically copied and sent to the NSA. Once a name enters the Narus database, all phone calls, emails, and other communications are automatically routed to the NSA's recorders.

The NSA wasn't the only intelligence-gathering agency to have its domestic surveillance powers expanded in the wake of September 11th. The USA PATRIOT Act, for instance, allows the FBI to spy on US citizens without demonstrating probable cause that its targets are engaged in criminal activities. Under Section 215 of the Act, the now infamous National Security Letters (which formerly required that the information being sought pertain to a foreign power or an agent of a foreign power) can compel the disclosure of sensitive information about US citizens held by banks, credit companies, telephone carriers, and Internet Service Providers, among many others. The recipient of an NSL is typically gagged from disclosing the fact or nature of the request.

It's no secret that, whereas the Fourth Amendment protects against unreasonable search and seizure, concerns over "national security" have occasioned its disregard and the violation of the privacy rights of even the most ordinary citizens. Activists have all the more reason to worry, repeatedly turning up as the subjects of terrorist investigations. For instance, in 2006 the ACLU revealed that the Pentagon was secretly conducting surveillance of protest activities, antiwar organizations, and groups opposed to military recruitment policies, including Quakers and student organizations. Relying on sources from the Department of Homeland Security, local police departments, and FBI Joint Terrorism Task Forces, the Pentagon collected, stored, and shared this data through the Threat and Local Observation database, or TALON, designed to track terrorist threats. Or take Scott Crow, a self-described anarchist and veteran organizer in the global justice movement, who, as the New York Times reported last year, is one of dozens of political activists across the country to have come under scrutiny from the FBI's increased counterterrorism operation. The FBI set up a video camera outside his house, monitored guests as they came and went, tracked his emails and phone conversations, and picked through his trash to identify his bank and mortgage companies, presumably to send them subpoenas. Others to have been investigated included animal rights activists in Virginia and liberal Roman Catholics in Nebraska. When, in 2008, President Obama took the reins from George W. Bush, there was an expectation that much, or at least some, of this activity would be curbed. Yet, as Bamford's article attests, the government's monitoring and collection of our digital data remains steadfast.

When the Occupy protests started in mid-September of last year, I relied on data-generating technologies more than I ever had before. Within a few weeks I had joined multiple OWS-related listservs; I'd started following Twitter with unprecedented commitment; I spent more hours on Facebook than I care to acknowledge. I doubt I am the only one. At the same time, there was a widespread sense of precaution: just because we were engaging in legal activities, covered by our First Amendment rights, no one, it seemed, should presume herself exempt from the possibility of surveillance. Sensitive conversations took place in loud bars, never over email. Text messages were presumed unsafe. In meetings, cell phone batteries were removed on occasion. Nevertheless, it was easy to feel unimportant (why would anyone watch me?) and equally easy to let standards relax, especially when it meant reclaiming conveniences that, once enjoyed, were difficult to give up. Leaving a trail of potentially incriminating digital data seemed inevitable. But how bad could it really be? And was there no way to use these same tools while safeguarding our privacy?

In late April, I sat down with the independent security researcher, hacker, and privacy advocate Jacob Appelbaum, who knows a thing or two about the surveillance state. Appelbaum is one of the key members of the Tor project, which relies on a worldwide volunteer network of servers to reroute Internet traffic across a set of encrypted relays. Doing so conceals a user's location and protects her from a common form of network surveillance known as traffic analysis, used to infer who is talking to whom over a public network. Tor is both free (as in freedom) and free of charge. Appelbaum is also the only known American member of the international not-for-profit WikiLeaks.
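
To make Tor's relay model concrete: a running Tor daemon exposes a local SOCKS proxy (port 9050 by default; the Tor Browser bundle uses 9150), and an application pointed at that proxy has its traffic carried over the encrypted relay circuit. A minimal sketch in Python, assuming the requests library with SOCKS support installed (pip install "requests[socks]") and a Tor daemon already running locally:

    # Minimal sketch: send an HTTP request through a locally running Tor daemon.
    # Assumes Tor is listening on 127.0.0.1:9050 and requests[socks] is installed.
    import requests

    TOR_PROXY = {
        "http": "socks5h://127.0.0.1:9050",   # socks5h: DNS is resolved through Tor too
        "https": "socks5h://127.0.0.1:9050",
    }

    # The site on the other end sees a Tor exit relay's address, not yours.
    resp = requests.get("https://example.org/", proxies=TOR_PROXY, timeout=60)
    print(resp.status_code)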

Resnick: The recent article in Wired describes where and how the NSA plans to store its share of collected data. But as the article explains, the Utah facility will have another important function: cryptanalysis, or code-breaking, as much of the data cycling through will be heavily encrypted. It also suggests that the Advanced Encryption Standard (AES), expected to remain durable for at least another decade, may be cracked by the NSA in a much shorter time if they've built a secret computer that is considerably faster than any of the machines we know about. But more to the point: is encryption safe?

Appelbaum: Some of it is as safe as we think it can be, and some of it is not safe at all. The number one rule of "signals intelligence" is to look for plain text, or signaling information: who is talking to whom. For instance, you and I have been emailing, and that information, that metadata, isn't encrypted, even if the contents of our messages are. This "social graph" information is worth more than the content. So, if you use SSL encryption to talk to the OWS server, for example, great, they don't know what you're saying. Maybe. Let's assume the crypto is perfect. They see that you're in a discussion on the site, they see that Bob is in a discussion, and they see that Emma is in a discussion. So what happens? They see an archive of the website, maybe they see that there were messages posted, and they see that the timing of the messages correlates to the time you were all browsing there. They don't need to break the crypto to know what was said and who said it.
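
The metadata point is easy to see in any raw email: even when the body is PGP-encrypted, the headers that mail servers need in order to deliver the message travel in the clear. A small illustration in Python; the addresses and ciphertext below are made up:

    # Illustration: the body may be encrypted, but the headers are readable plaintext.
    import email
    import textwrap

    raw = textwrap.dedent("""\
        From: alice@example.org
        To: bob@example.org
        Subject: tomorrow's meeting
        Date: Tue, 01 May 2012 10:00:00 -0400

        -----BEGIN PGP MESSAGE-----
        hQEMA...   (ciphertext only the recipient can decrypt)
        -----END PGP MESSAGE-----
        """)

    msg = email.message_from_string(raw)
    for header in ("From", "To", "Subject", "Date"):
        print(header, "->", msg[header])   # all visible to anyone watching the wire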

Resnick: And this type of surveillance is called . . . ?

Appelbaum: Traffic analysis. It's as if they are sitting outside your house, watching you come and go, as well as the house of every activist you deal with. Except they're doing it electronically. They watch you, they take notes, they infer information from the metadata of your life, which implies what it is that you're doing. They can use it to figure out a cell of people, or a group of people, or whatever they call it in their parlance where activists become terrorists. And it's through identification that they move into specific targeting, which is why it's so important to keep this information safe first.

For example, they see that we're meeting. They know that I have really good operational security. I have no phone. I have no computer. It would be very hard to track me here unless they had me physically followed. But they can still get to me by way of you. They just have to own your phone, or steal your recorder on the way out. The key thing is that good operational security has to be integrated into all of our lives so that observation of what we're doing is much harder. Of course it's not perfect. They can still target us, for instance, by sending us an exploit in our email, or a link in a web browser that compromises each of our computers. But if they have to exploit us directly, that changes things a lot. For one, the NYPD is not going to be writing exploits. They might buy software to break into your computer, but if they make a mistake, we can catch them. But it's impossible to catch them if they're in a building somewhere reading our text messages as they flow by, as they go through the switching center, as they write them down. We want to raise the bar so much that they have to attack us directly, and then in theory the law protects us to some extent.
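
A rough sketch of the timing correlation described above: an observer who only sees who was connected to a site and when, plus the public timestamps of the posts, can often guess authorship without ever touching the encrypted content. Everything here is invented purely for illustration:

    # Toy traffic-analysis sketch: attribute posts to users purely by timing.
    # Names and timestamps are invented for illustration only.

    # (user, start, end): the window in which their encrypted connection was observed
    connections = [
        ("alice", 100, 160),
        ("bob",   300, 420),
        ("emma",  500, 540),
    ]

    # public timestamps at which posts appeared on the site
    post_times = [130, 350, 510]

    for t in post_times:
        suspects = [user for (user, start, end) in connections if start <= t <= end]
        print("post at t=%d: likely author(s) %s" % (t, suspects))
    # Perfect encryption of the content does nothing to hide this pattern.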

Resnick: So if I were arrested, and the evidence presented came from a
targeted attack on my computer, and I knew about the attack, I would have
some kind of legal recourse?

Appelbaum: Well, that's an interesting question. What is the legal standard for breaking into someone's computer because they were at a protest? Congratulations, take that to the Supreme Court, you might be able to make some really good law. I think the answer is that it's a nationally newsworthy incident: nobody knows that the cops break into people's computers. When the cops break into someone's house, the Fourth Amendment is super clear about that: it can't be done without a warrant.

Resnick: In January of last year, it was reported that the records for your Twitter account, along with those of Julian Assange, Private Bradley Manning, Dutch hacker Rop Gonggrijp, and Icelandic lawmaker Birgitta Jonsdottir, were subpoenaed by the US government. What is perhaps most notable in this case is not that the accounts were subpoenaed, but that the orders, usually gagged and carried out in secret, became public knowledge. Twitter contested the secrecy order and won the right to notify you. Several months later, the Wall Street Journal revealed that Google and the Internet service provider Sonic.net had received similar orders to turn over your data.

Appelbaum: Twitter notified me. But as for Google and Sonic.net, I read about it in the Wall Street Journal like everybody else. So now I can talk about it because it was in a public newspaper. Those are "2703(d) administrative subpoenas," and they asked for IP addresses, and the email addresses of the people I communicated with, among other things. The government asserts that it has the right to get that metadata, that "signaling" or relationship information, without a warrant. They get to gag the company, and the company can't fight it, because it's not their data, it's my data, or it's data about me, so they have no Constitutional standing. And the government asserts that I have no expectation of privacy because I willingly disclosed it to a third party. And in fact my Twitter data was given to the government; no one has really written about that yet. We are still appealing, but we lost the stay, which means Twitter had to disclose the data to the government, and whether or not they can use it is pending appeal. Once they get the data, it's not like it's private or secret, and even if they can't use it as evidence, they can still use it in their investigations.

Resnick: In January of this year, the Twitter account of writer and OWS protester Malcolm Harris was subpoenaed by the Manhattan District Attorney's Office. I think it's safe to assume these incidents are not anomalies. In which case, is there a way to use social media sites like Twitter without putting our private data at risk? Because these sites can be very useful tools, of course.

Appelbaum: In the case of something like Twitter, you can use Tor on the Android phone (we have a version of Tor for Android called Orbot) together with Twitter, and that's essentially the best you're going to do. And even that isn't particularly great. Twitter keeps a list of IP addresses where you've logged in, but if you use Tor, it won't know you are logging in from your phone. It's powerful, but the main problem is that it's kind of complicated to use. On your computer, you can use the Tor browser, and when you log into Twitter, you're fine, no problem at all: your IP address will trace back to Tor again. So now when the government asserts that you have no expectation of privacy, you can say, all right, well, I believe I have an expectation of privacy, which is why I use Tor. I signal that. And as for the private messaging capability of Twitter, don't use it for sensitive stuff. Twitter keeps a copy of all its messages.
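
One practical way to act on that advice is to confirm, before logging in anywhere, that the session really is leaving through Tor. A hedged sketch, assuming a local Tor SOCKS proxy on port 9050 and the Tor Project's public check service (the exact endpoint and JSON fields here are from memory and worth verifying):

    # Sketch: confirm traffic is actually exiting through Tor before logging in.
    # Assumes Tor on 127.0.0.1:9050, requests[socks] installed, and that
    # check.torproject.org/api/ip returns JSON like {"IsTor": true, "IP": "..."}.
    import requests

    proxies = {"http": "socks5h://127.0.0.1:9050",
               "https": "socks5h://127.0.0.1:9050"}

    info = requests.get("https://check.torproject.org/api/ip",
                        proxies=proxies, timeout=60).json()
    if info.get("IsTor"):
        print("OK: exiting through Tor at", info.get("IP"))
    else:
        print("WARNING: this connection is NOT going through Tor")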

Resnick: During the perceived wave of Internet activism throughout the 2009 Iranian election protests, a new piece of proprietary software called Haystack received a lot of media attention. Haystack promised Iranian activists tightly encrypted messages, access to censored websites, and the ability to obfuscate Internet traffic. You later tested the software and demonstrated its claims to be false. For those of us who don't have your technical skill set, how can we assess whether a particular tool is safe to use, especially if it's new?

Appelbaum: First, is the source code available? Second, if the claims are just too good to be true, they probably are. There's a thing called snake-oil crypto, or snake-oil software, where the product promises the moon and the sun. When a developer promises that a piece of proprietary software is super secure and only used by important people, it's sketchy. Third, are the people working on this part of the community that has a reputation for accomplishing these things? That's a hard one, but ask someone you know and trust. How would you go on a date with someone? How would you do an action with someone? Transitive trust is just as important in these situations.

Another thing to look at is whether it's centralized or decentralized. For example, Haystack was centralized, whereas Tor is decentralized. Also, how is it sustained? Will it inject ads into your web browser, like AnchorFree, the producer of the Hotspot Shield VPN? Or is it like Riseup.net, whose VPN service monetizes not through your traffic, but through donations and solidarity and mutual aid? And if they can inject ads, that means they can inject a back door. That's super sketchy; if they do that, that's bad news. So you want to be careful about that.

Finally, remember: The truth is like a bullet that pierces through the armor
of charlatans.

Resnick: What should we know about cell phones? It's hard to imagine going to a protest without one. But like all networked technologies, surely they are double-edged?

Appelbaum: Cell phones are tracking devices that make phone calls. It's sad, but it's true. Which means software solutions don't always matter. You can have a secure set of tools on your phone, but it doesn't change the fact that your phone tracks everywhere you go. And the police can potentially push updates onto your phone that backdoor it and allow it to be turned into a microphone remotely, and do other stuff like that. The police can identify everybody at a protest by bringing in a device called an IMSI catcher. It's a fake cell phone tower that can be built for 1,500 bucks. And once nearby, everybody's cell phones will automatically jump onto the tower, and if the phone's unique identifier is exposed, all the police have to do is go to the phone company and ask for their information.

Resnick: So phones are tracking devices. They can also be used for
surreptitious recording. Would taking the battery out disable this
capability?

Appelbaum: Maybe. But iPhones, for instance, don't have a removable battery; they power off via the power button. So if I wrote a backdoor for the iPhone, it would play an animation that looked just like a black screen. And then when you pressed the button to turn it back on, it would pretend to boot. Just play two videos.

Resnick: And how easy is it to create something like that?

Appelbaum: There are weaponized toolkits sold by companies like FinFisher
that enable breaking into BlackBerries, Android phones, iPhones, Symbian
devices and other platforms. And with a single click, say, the police
can own
a person, and take over her phone.

Resnick: Right. In November of last year, the Wall Street Journal first reported on this new global market for off-the-shelf surveillance technology, and created a "Surveillance Catalog" on their website, which includes documents obtained from attendees of a secretive surveillance conference held near Washington, D.C. WikiLeaks has also released documents on these companies. The industry has grown from almost nothing to a retail market worth $5 billion per year. And whereas the companies making and selling this gear say it is available only to governments and law enforcement and is intended to catch criminals, critics say the market represents a new sort of arms trade supplying Western governments and repressive nations alike.

Appelbaum: It's scary because [accessing these products is so] easy. But when a company builds a backdoor, and sells it, and says trust us, only good guys will use it? Well, first of all, we don't know how to secure computers, and anybody who says otherwise is full of shit. If Google can get owned, and Boeing can get owned, and Lockheed Martin can get owned, and engineering and communication documents from Marine One can show up on a filesharing network, is it realistic to assume that perfect security is possible? Knowing this is the case, the right thing is to not build any backdoors. Or assume these backdoors are all abused, and bypass them so that the data acquired is very uninteresting. Like encrypted phone calls between two people: it's true they can wiretap the data, but they'll just get noise.

When Hillary Clinton and the State Department say they want to help people abroad fight repressive governments, they paint Internet freedom as something they can enable with $25 million. Whereas in reality the FBI makes sure that our communications tech isn't secure. This makes it impossible for people like me to help people abroad overthrow their governments, because our government has ensured that all their technology is backdoor-ready. And in theory, they try to legitimize state surveillance here, and there they try to make it illegitimate. They say, "In over-there-a-stan, surveillance is oppressive. But over here, it's okay, we have a lawful process." (Which is not necessarily a judicial process. For example, Eric Holder and the drones . . . sounds like a band, right?)

Resnick: Okay, so one thing I've heard more than once at meetings when security culture comes up is that . . . well, there's a sense that too much precaution grows into (or comes out of) paranoia, and paranoia breeds mistrust, and all of it can be paralyzing and lead to a kind of inertia. How would you respond to something like that?

Appelbaum: The people who say that? If they're not cops, they're feeling unempowered. The first response people have is, whatever, I'm not important. And the second is, they're not watching me, and even if they were, there's nothing they could find because I'm not doing anything illegal. But the thing is, taking precautions with your communications is like safe sex in that you have a responsibility to other people to be safe; your transgressions can fuck other people over. The reality is that when you find out, it will be too late. It's not about doing a perfect job, it's about recognizing you have a responsibility to do that job at all, and doing the best job you can manage, without it breaking down your ability to communicate, without it ruining your day, and understanding that sometimes it's not safe to undertake an action, even if other times you would. That's the education component.

So security culture stuff sounds crazy, but the technological capabilities of the police, especially with these toolkits for sale, are vast. And to thwart that by taking all the phones at a party and putting them in a bag and putting them in the freezer and turning on music in the other room? True, someone in the meeting might be a snitch, but at least there's no audio recording of you.

Part of informed consent is understanding the risks you are taking as you decide whether to participate in something. That's what makes us free: the freedom to question what we're willing to do. And of course it's fine to do that. But it's not fine to say, I don't believe there's a risk, you're being paranoid, I'm not a target. When people say that they don't want to take precautions, we need to show them how easy it is to do it. And to insist that not doing it is irresponsible, and most of all, that these measures are effective to a degree, and worth doing for that reason. And it's not about perfection, because perfection is the enemy of "good enough."

I would encourage people to think about the activity they want to engage in, and then say, Hey, this is what I want to do. Work together collaboratively to figure out how to do that safely and securely, but also easily, without needing to give someone a technical education. Because that's a path of madness. And if people aren't willing to change their behaviors a little bit, you just can't work with them. I mean, that's really what it comes down to. If people pretend that they're not being oppressed by the state when they are literally being physically beaten, and forced to give up retinal scans, that's fucking ridiculous. We have to take drastic measures for some of these things.

The FBI has this big fear that they're going to "go dark," which means that all the ways they currently obtain information will disappear. Well, America started with law enforcement in the dark; once, we were perceived to be innocent until proven guilty. And just because the surveillance is expanding, and continues to expand, doesn't mean we shouldn't push back. If you haven't committed a crime, they should have no reason to get that information about you, especially without a warrant.

Resnick: Are there any other tools or advice you would suggest to an
activist, or anyone for that matter?

Appelbaum: Well, it's important to consider the whole picture of all the electronic devices that we have. First, you should use Tor and the Tor browser for web browsing. Know that your home internet connection is probably not safe, particularly if it's tied to your name. If you use a Mac or Windows operating system, be especially careful. For instance, there's a program called Evilgrade that makes it easy for attackers to install a backdoor on a computer by exploiting weaknesses in the auto-update feature of many software programs. So if you have Adobe's PDF reader, and you're downloading and installing the update from Adobe, well, maybe you'll get a little extra thing, and you're owned. And the cops have a different but better version of that software. Which is part of why I encourage people to use Ubuntu or Debian or Linux instead of proprietary systems like a Mac or whatever. Because there are exploits for everything. If you're in a particularly sensitive situation, use a live bootable CD called TAILS: it gives you a Linux desktop where everything routes over Tor with no configuration. Or, if you're feeling multilingual, host stuff in another country. Open an email account in Sweden, and use TAILS to access it. Most important is to know your options. A notepad next to a fireplace is a lot more secure than a computer in some ways, especially a computer with no encryption. You can always throw the notepad in the fireplace and that's that.
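
One small habit that blunts the poisoned-update problem (though not a fully compromised vendor) is to check a download against a checksum or signature published through a separate channel, instead of trusting whatever the updater fetched. A minimal sketch; the filename and expected digest are placeholders:

    # Sketch: verify a downloaded file against a hash published out-of-band
    # (for example, on the project's website or in a signed release announcement).
    # The filename and expected digest below are placeholders.
    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = "0000000000000000000000000000000000000000000000000000000000000000"
    actual = sha256_of("downloaded-image.iso")
    print("match" if actual == expected else "MISMATCH: do not install")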

For email, using Riseup.net is good news. The solutions they offer are integrated with Tor as much as possible. They're badass. Because of the way they run the system, I'm pretty sure that the only data they have is encrypted. And I'd like to think that what little unencrypted data they do have, they will fight tooth and nail to protect. Whereas, yes, you can use Tor and Gmail together, but it's not as integrated: when you sign in, Gmail doesn't ask if you want to route this over Tor. But also, Google inspects your traffic as a method of monetization. I'd rather give Riseup fifty dollars a month for the equivalent service of Gmail, knowing their commitment to privacy. And also knowing that they would tell the cops to go fuck themselves. There's a lot of value in that.

For chatting, use software with off-the-record messaging (OTR), not Google's "go off the record" but the actual encryption software, which allows you to have an end-to-end encrypted conversation. And configure it to work with Tor. You can bootstrap a secure communication channel on top of an insecure one. On a Mac, use Adium: it comes with OTR, but you still have to turn it on. When you chat with people, click verify and read the fingerprint to each other over the telephone. You want to do this because there could be a "man in the middle" relaying the messages, which means that you are both talking to a third party, and that third party is recording it all.
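
What "reading the fingerprint to each other" amounts to is comparing a short digest of each party's public key over a channel the attacker presumably does not control (a phone call, a face-to-face meeting). This toy sketch only shows the digest-and-format step; real OTR clients compute and display the actual fingerprint for you:

    # Toy sketch of a key fingerprint: a short digest of the public key, formatted
    # so two people can read it aloud and compare. Real OTR clients do this for you,
    # and the key bytes below are a placeholder.
    import hashlib

    public_key_bytes = b"...raw bytes of an OTR public key..."  # placeholder

    digest = hashlib.sha256(public_key_bytes).hexdigest().upper()
    fingerprint = " ".join(digest[i:i + 8] for i in range(0, 40, 8))
    print(fingerprint)   # read this aloud over the phone and compare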

As for your cell phone, consider it a tracking device and a monitoring device, and treat it appropriately. Be very careful about using cell phones, but consider especially the patterns you make. If you pull the battery, you've generated an anomaly in your behavior, and perhaps that's when they trigger people to go physically surveil you. Instead, maybe don't turn it off, just leave it at home. Because, as I said earlier, in a world with lots of data retention, our data trails tell a story about us, and even if the story is made of truthful facts, it's not necessarily the truth. On a cell phone, you can install stuff like OStel, which allows you to make encrypted voice-over-the-Internet calls, or PrivateGSM: it's not free, but it's available for BlackBerries, Android phones, iPhones, and so on. Which means that if they want to intercept your communication, they have to break into your phone. It's not perfect. Gibberbot for Android allows you to use Tor and Jabber (which is like Google Chat) with OTR automatically configured. You type in your Jabber ID, it routes over Tor, and when you chat with other people, it encrypts the messages end-to-end so even the Jabber server can't see what's being said. And there are a lot of tools like that to choose from.

Another thing to consider is the mode in which we meet. If we want to edit something collaboratively, there's a program called Etherpad. And there's a social networking application called Crabgrass, hosted at we.riseup.net. It's like a private Facebook. Riseup still has a lot of the data, but it's private by default. So it's secure, short of being hacked, which is possible, or short of some legal process. And if you use it in a Tor browser, and never reveal information about yourself, you're in really good shape. Unlike Facebook, which is like the Stasi, but crowdsourced. And I mean that in the nicest way possible. I once had a Facebook account; it's fun and a great way to meet people. But it is not safe for political organizing, especially when you're part of the minority, or when you're not part of the minority but you are part of the disempowered majority.

As a final thought, I'd say just to remember that a big part of this is social behavior and not technology per se. And a big part of it is accepting that while we may live in a dystopian society right now, we don't always have to. That's the tradeoff, right? Because what is OWS working toward? The answer is, something different. And if we want an end to social inequality, the surveillance state is part of what we have to change. If we make it worthless to surveil people, we will have done this. So, it needs to be the case that what we do doesn't hang us for what we wish to create.