Prefer:Safe — Making Online Safety Simpler in Firefox

Alex Fowler

Mozilla believes users have the right to shape the Internet and their own experiences on it. However, there are instances when people seek to shape not only their own experiences, but also those of young users and family members whose needs related to trust and safety may differ. To do this, users must navigate multiple settings, enable parental controls, tweak browsers and modify defaults on services like search engines.

We’re pleased to announce a smart feature in Firefox for just this type of user, called Prefer:Safe, designed to simplify and strengthen the online trust and safety model. Developed in collaboration with a number of leading technologists and companies, this feature connects the parental controls users enable on Mac OS and Windows with the sites those users visit in their browser.

How it works:

  • Users on Mac OS and Windows enable Parental Controls.
  • Firefox sees that the user’s operating system is running in Parental Control mode and sends an HTTP header — “Prefer:Safe” — to every site and service the user visits online.
  • A site or service that checks for the header automatically applies whatever higher safety controls it makes available, including honoring content or functionality restrictions.
  • Users won’t find any UI in Firefox to enable or disable Prefer:Safe, which makes it one less control for kids to try to circumvent.
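The steps above can be sketched server-side. This is an illustrative sketch, not code from the specification: a site that wants to honor the signal inspects incoming request headers for the “safe” preference, remembering that HTTP header names are case-insensitive and a Prefer header may carry several comma-separated preferences.

```python
# Illustrative sketch of how a site might detect the Prefer:Safe
# signal. Function names here are hypothetical, not from the spec.

def wants_safe_mode(headers):
    """Return True if the request carries a 'Prefer: Safe' preference."""
    # Collect every Prefer header value, matching the name
    # case-insensitively as HTTP requires.
    values = [v for k, v in headers.items() if k.lower() == "prefer"]
    # A Prefer header can list several comma-separated preferences,
    # so split and normalize before checking for "safe".
    prefs = [p.strip().lower() for v in values for p in v.split(",")]
    return "safe" in prefs

def choose_content_mode(headers):
    """Pick which experience to serve based on the header."""
    return "family-safe" if wants_safe_mode(headers) else "standard"
```

The point of the design is visible in the sketch: the site needs no account, cookie, or configuration from the user; a single request header is enough to select the restricted experience.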

Prefer:Safe demonstrates the power and elegance of HTTP headers for empowering users to communicate preferences to websites and online services. This is one reason we’ve been championing Do Not Track, an HTTP-header-based privacy signal for addressing third-party tracking that is under development at the W3C. In this case, no other configurations are necessary at either the browser or search engine level for this user preference to be effective across the Web, which helps ensure the intended online experiences meet user expectations.

We’re pleased that Internet Explorer has implemented this feature for its users, which, along with Firefox, makes this capability relevant at scale right out of the box. We hope to see broader adoption of this feature in the near future.

For more information about Prefer:Safe, a draft specification has been submitted to the IETF. To discuss this feature, I’ve cross-posted this to Mozilla’s Dev.Privacy group.

Clearer Mozilla Privacy Website & Policies

Denelle Dixon-Thayer

**APRIL 16 UPDATE: the privacy policies are now updated, and you can view them here. Thanks to everyone who provided input on draft policies. We have updated the post below to remove links that are now out of date.**

Over the last year, a group of Mozillians have been exploring how to make our privacy website and policies better.  For example, the Firefox Privacy Policy (Update: link now points to an archived version of the previous policy) is over 14 pages long and can be hard to parse – we don’t like that.  Given our focus on transparency and privacy, we wanted to create a framework that:

  1. Is easy to understand yet detailed enough to provide transparency.
  2. Gives users an opportunity to dive deeper into the technical aspects of our policy for specific products.
  3. Does not modify our practices but clarifies how we communicate them.
  4. Allows each product to have its own notice that is simple, clear and usable.

We now have an approach that we want to share and gather input on before implementing. I want to make it clear that although we’re rewriting the text of our privacy notices, we are NOT changing our practices. Our goal is only to make the notices easier to digest and provide users with the information they care about most, including new ways to access more detail if they are interested.

Here’s an overview of the new approach:

  • We’ve consolidated the parts of our products’ various privacy policies that are the same into a “Mozilla Privacy Policy.” Because we believe our approach to user data should be consistent regardless of the product, we’ve centralized as much as we can.
  • We’ve created an individual “Privacy Notice” for any policy that’s specific to an individual product.  We’ll be rewriting all our product notices to fit this mold, but are starting with Firefox and Mozilla websites.
  • We believe there are a group of users who want a more detailed explanation of how features work at a technical level. To provide this detail, we’re also creating new SUMO articles for features (like our Firefox Health Report page) that give users a deeper understanding of those products and will link to those explanations within each Product Privacy Notice.
  • As always, we make all the code that we’ve created in our projects available in source code and under open and permissive licenses so you can see how each feature works in the code itself. We’d like to encourage people to get involved in one of our dev channels such as mozilla.dev.privacy and mozilla.dev.planning, or by looking at the code for each project.

We welcome any questions or input you have through our Governance mailing list.  Our current plan is to implement these changes on April 15.

Our new privacy hub layout features our Privacy Policy on the center of the page and lists our Product Privacy Notices along the right.

We added "learn more" options with bullet points and headings for users to more easily learn about issues.

User Data & You: Privacy for Programmers

Allison Naaktgeboren (:ally)

This was originally posted as a guest post on January 31st, 2014. Since then, it has been requested that I post it under my own name.

Introduction

I am a Firefox Engineer at Mozilla. I have worked on Desktop Firefox, Firefox for Android, and Firefox Sync. I currently work on Firefox for Windows 8 Touch, (née Firefox Metro). I also serve on Mozilla’s Privacy Council.

On Data Privacy Day, I presented a perspective on what we can do differently. My primary audience is fellow engineers or those engaged in engineering-related activities. When I say ‘we’ I am largely referring to engineering as a group. The remainder of this post is a written expansion on the presentation. The Air Mozilla recording is available here.

Goal

My goal is to start a public discussion about what engineers need to know about user privacy. Eventually the result of this discussion will evolve into a short training or set of best practices so engineers can ship better code with less hassle. Since this is the start of a public discussion, the content below will probably raise more questions than answers.

There be scaly dragons. Ye have been warned.

Privacy? That Word is so Overused. What is it Exactly & Why do I Care?

Privacy is a culturally laden word and definitions vary widely. Privacy means different things to different people at different times, even within the nascent field of privacy engineering. So for sanity’s sake, the following are my table stakes definitions.

Privacy: How & by whom the personal information of an individual is managed with respect to other individuals.

User Data: Any data related to users that they generate or cause to be generated; or that we generate, collect, store, have custody and control over, transfer, process, or hold interest in.

Why do we care? The reason Mozilla exists is to defend and promote the open web. Firefox & FirefoxOS are great products but they are not the raison d’être outlined in the Manifesto; they are means to an end. The Mozilla Manifesto declares that for a healthy web, users must be able to shape their own experiences on it. Ain’t nothing shapes your experience online more than the data generated for, by, and about you. Whoever controls that data controls your experience on the web. So our goal of the open web is directly linked to individuals’ ability to control that data for themselves.

Acknowledging the Elephant in the Room

Let’s start by acknowledging the elephant in the room: whether or not Mozilla products should even handle user data. That would be a rich discussion on its own. This is not that discussion. This discussion assumes we’re going to handle user data. Regardless of your views, let’s agree that there are some things we will need to do differently when we choose to handle user data. Let’s figure out what those are.

Ok, So We Care; There’s Another Team at Mozilla for That.

There is a misconception I run into often that I’d like to clear up. Data safety & user privacy are everyone’s job, but especially an engineer’s job. At the end of the day, engineers make the sausage. No one has more leverage over what gets written than the engineer implementing it. The privacy team is here to help, but there are three of them and hundreds of us. The duty is really on us, not them. Whether our code is fast, correct, elegant, secure, and meets Mozilla’s standards is chiefly our responsibility.

Ok, So it’s Kinda My Job. What do I Need to Think About or do Now?

I have good news & bad news. The good news is that it boils down to writing more stuff down & making more decisions upfront. Stated more formally:

  • More active transparency (writing more stuff down)
  • More proactive planning (making more decisions up front)

Sounds simple, eh? Seasoned engineers should feel their spider sense tingling. It’s not miscalibrated. That’s the bad news: it’s how you do it that matters. The devil is in the details. So let’s tackle the easier one first: what I flippantly referred to as ‘writing more stuff down’.

Active Transparency (aka write more stuff down)

Passive Transparency: unintentional; decisions aren’t actively hidden, but are difficult to locate and may not even be documented.

Active Transparency: intentional; everything is written down, easily searchable and locatable by interested parties.

If you haven’t heard these terms before, don’t panic. I made them up years ago when I was a volunteer contributor trying to articulate how I was part of an open source project, actively following Bugzilla, but couldn’t figure out what was going on in the /storage module, let alone the rest of the Firefox code base.

Active transparency is functioning transparency. It requires sustained effort. Information, history, and decisions of a feature can be searched for, located, and consumed by all inclined.

Passive transparency is what happens unintentionally. People aren’t trying to hide information from each other. It just happens and no one notices until it is too late to do anything about it.

We often don’t notice because those who code marinate in information. We rarely bother to test whether or not anyone else outside can figure out what we’re living and breathing life into.
Break that habit. You test your code to prove it works; so test your transparency to prove it works (or doesn’t). Ask someone in marketing or engagement to figure out the state of your project or why your design is in its current state. Can they explain your tradeoff, constraints or design decisions back to you? Can they even find them?

I hear grumbling already: ‘Sounds like useless paperwork, not worth it.’ What you really mean is ‘not worth it to you right now’, but it’s worth a great deal to the people who will be responsible for it after you ship it, and there will be many of them.

One of the ways user data based features differ dramatically from application features of yore is that control will change hands many times over. Future development, operations, database administration, etc teams cannot read your mind. They also can’t go back in time to read your mind.

As an added bonus, privacy is not the only reason to be actively transparent. Active transparency is vital to building our community. Like open source software, it’s not really open if no one can find it and participate. Active transparency applies to the decision making process as much as to source code.

Proactive Planning (aka Making More Decisions With More People)

Now we move on to the harder part – more decisions you’ll need to make with more people. Getting agreement on requirements is often one of the most difficult and least pleasant parts of an engineer’s craft. When handling user data, it will get harder. Your stakeholders will increase, because the number of people who handle the data your feature generates or uses over its lifetime has increased.

The reason for enduring that pain at the beginning is that effective privacy is something you’ll only get one shot at. It’s usually impossible or cost-prohibitive to bolt it on to stuff after it’s built.

Proactive planning decisions will make up the bulk of the rest of the post. They are phrased in question form because the answers will be different for each project. They should not be interpreted as an all-inclusive list. The call to action for you is to answer them (and write the reasons down in a searchable location. Ahem – active transparency!)

30,000 Foot Views

The problem space can be vast. Below are two high level categorizations to jumpstart your problem solving, so that your feature can concretely bring to life the Manifesto’s declaration that users must be able to shape their own experience.

First Way to Slice It

An intuitive place to start is interaction.

Interactions between events and their data, or the data lifecycle

  • Birth
  • Life
  • Death
  • Zombie (braaaains)

Interactions between us and user data

  • How sensitive is this data?
  • Who should have access to it?
  • Who will be responsible for the safety of that data?
  • Who will make decisions about it when unexpected concerns come up?

Interactions between users and their data

  • How will a user see the data?
  • How will a user control it?
  • How will a user export it?

Second Way to Slice It

Another way to group key decisions is by basics plus high level concerns, such as:

  • Benefits & Risks
  • Openness, Transparency, & Accountability
  • Contributors & Third Parties
  • Identities & Identifiers
  • Data Life Cycles & Service History
  • User Control
  • Compatibility & Portability

Things to Think About – Basics

To start off, most of these seem pretty obvious. However, there can be gotchas. For example, determining how identifying a type of data is can be tricky. What is seemingly harmless now could later be shown to be strongly identifying. Let’s consider the locale of your Firefox installation. If you are in en-US (the American English version), locale is not very identifying. Seems obvious. However, a small niche locale can be linked to an individual person.

  • Does your product/feature generate user data?
      • Metadata still counts.
  • Does your product/feature store user data?
      • What kind of data, and how identifying is it?
  • Are there legal considerations to this feature?
  • How do you authenticate users before they can access their data?
  • Which person or position is responsible for the feature while it remains active?
  • Who makes decisions after the product ships?
      • Figure this out. Now.

Things to Think About – Benefits and Risks

There will always be risk in doing anything. There exists a risk that when I leave my house an anvil will drop on me. That doesn’t mean I never leave my house. I leave my house because the benefits (like acquiring dinner) outweigh the risk. Similarly, there will always be risk when handling user data. That doesn’t mean we should never handle it, but there had better be a benefit to the user. ‘Well, it might be useful later’ is probably not going to cut the mustard at Mozilla as a benefit to users.

  • What is the benefit to users from us storing this data?
  • What are the current alternatives available on the market?
  • What is the risk to users from storing this data?
  • What is the risk and cost to Mozilla from storing this data?
  • Where are you going to store this user data? Whose servers? (If not ours, apply above questions as well)

Things to Think About – Openness, Transparency and Accountability

For a Mozilla audience, this is preaching to the choir.

  • Have the benefits & risks of this feature been discussed on a public forum like a mailing list?
  • Should we exempt detailed discussion of handling really sensitive data?
  • Where is the documentation for our tradeoffs and design decisions, with respect to user data? (*cough* Active transparency!)

Things to Think About – Contributors and Third Parties

The use of third party vendors adds additional nuances, as I alluded to earlier.

  • Are any third party companies or entities involved in this? (ex: Amazon AWS)
  • Do we have a legal agreement governing what they can and can’t do with it?
  • Who makes decisions about access to the data?

At Mozilla, we sometimes release data sets so researchers can contribute knowledge about the open web for the public good.

  • Could researchers access it directly?
  • Do we have plans to release the dataset to researchers?
  • What would we do to de-identify the data before release?

Things to Think About – Identity and Identifiers

There’s probably nothing more personal than someone’s identity.

  • Will this feature have a user identification?
  • Who owns the login/username/identifier?
  • Is it possible to use this feature without supplying an identifier?
  • How will the user manage this identification?
  • Can they delete it?
  • Who can see this identifier?
  • Can the user control who can see their identifier?
  • Can this identifier be linked to the real life identity?
  • Can a single person have multiple identifiers/accounts?

Things to Think About – Data Lifecycles and Service History

This is an area that most application developers will have trouble with because we often don’t think about the mid-life or death of our feature or the data it uses. It ships, it’s out! Onto the next thing!

Not so fast.

  • Which person or position is responsible for the data/feature while it remains active?
  • Who makes decisions after the product ships?
  • Can a user see a record of their activities?
  • What happens to an inactive account and its associated data?
  • When is a user deemed inactive?
  • How will you dispose of user data?
  • What’s the security of the data in storage?
  • How long would we retain the data?
  • Who has access to the data at various stages?

Things to Think About – User Control

To shape their own experiences on the web, users need to have control of their data.

  • How can a user see their data?
  • Can users delete data in this feature?
  • What exactly would deletion mean?
      • How will it happen?
      • What will it include?
      • What about already-released anonymized data sets?
      • What about server logs?
      • What about old backups?
  • Is there a case where the user identifier can be deleted, but not necessarily the associated data?
  • Is any of the data created by the user public?
  • What are the default user control settings for this feature?
  • How could a user change them?

Things to Think About – Compatibility and Portability

In my not-so-humble opinion, it’s not an open web if user data is held for ransom or locked into proprietary formats.

  • Can the user export their data from this service?
  • What format would it be in?
  • Is it possible to use an open format for storage?
  • If not, should we start an effort to make one?

That’s a Lot of Extra Work. No. Not OK. Not Cool.

Yes, it is.

Handling user data is going to increase your workload. So does writing test coverage. We do it anyway. We write tests to meet our standards for correctness; we must write code that meets our standards for privacy.

I didn’t say it would be easy, but it’s doable. We can do it better and show that the web the world needs can exist.

The Privacy Team is Here to Help. Talk to Them Early and Often

That’s a metric ton of questions to ponder. I don’t expect you to remember them all. The privacy team is working on a new kickoff form and a checklist of considerations to make this process smoother (additional note: I spent most of today on just this goal). They may even merge those two things. For now, use the existing Project Kickoff Form and check out this wiki containing the questions I’ve listed above.

Not sure if you need a review? Just have a question? Something you want to run by them? Drop them an email or pop into the #privacy IRC channel.

Have an Opinion? Join the Effort.

The Mozilla Privacy Council needs more engineers, including volunteer contributors. No one knows more about building software than we do. User-empowering software won’t get built without us. Help shape the training, best practices, the kickoff form, and privacy reviews of new features. To get involved, email stacy at mozilla dot com.

Special thanks to the Metro team for their patience with my delayed code reviews this week.

Thank you for reading. May your clobber builds be short.

Fighting Back Against Surveillance

Chris Riley

Expansive surveillance programs damage user trust, stifle innovation, and risk a divided Internet. They affect all Internet users around the world – and yet we still don’t know their full impact, even now.

This coming Tuesday, February 11th, will mark “The Day We Fight Back” against mass surveillance. Mozilla is taking part in this campaign to help lead the world’s Internet citizenry in flexing a little muscle and delivering a message: It’s time to fix this.

What will happen without reform? The Internet industry in the United States will feel perhaps the most harm, with potentially hundreds of billions of dollars lost. Over time, expansive surveillance will produce immeasurable harm to the future of innovation and adoption, not just for the U.S. but for the entire world.

We launched Stop Watching Us to build a grassroots army on this issue. Now, eight months later, reform is beginning. The first round of commitments from the Administration was disappointing. We need much more. Leaders in Congress have made clear their intention to act, and one of the goals of the Day We Fight Back is to organize support for their efforts, in particular through the USA Freedom Act.

Join the fight – make your voice heard.

Celebrate Data Privacy Day!

smartin

Fighting for data privacy — making sure individuals know who has access to their data, where it goes or could go, and that they have a choice in all of it — is an essential part of building the Internet the world needs.

“Individuals’ security and privacy on the Internet are fundamental and must not be treated as optional.” ~ Mozilla Manifesto (principle #4)

Today is Data Privacy Day — an internationally recognized holiday intended to raise awareness and promote data privacy education. It is officially recognized in the United States, Canada, and 27 European countries.

If you are ready to join the fight for data privacy, check out our privacy publications below for a healthy list of things you can do to make a difference. From downloading an add-on to teaching a class to building an app with major privacy muscle — all you have to do is to do it!

Build an app that rocks at privacy!

Teach, learn, and spread the word about privacy on the web through the Mozilla Foundation’s Webmaker tools.

Watch a live video presentation via Air Mozilla. You are invited to join us for “The Privacy Engineer’s Manifesto: Getting from Policy to Code to QA to Value.” Co-author Michelle Dennedy will speak live at our San Francisco office at 3pm PST on Thursday, January 30th. She will be joined by co-author Jonathan Fox in our Mountain View office.

Read Alex Fowler’s blog post on the collective consciousness and trust in the Web as a result of the Snowden revelations.

What sets Mozilla apart, as one of the most trusted names on the web, is that we are a mission-driven, non-profit organization, committed to building an Internet where the individual is respected and has choices. Yet, we can’t do it alone. And we wouldn’t want to.

Here are a few additional ways you can help:

Learn more about what we’re doing here and let us know in the comments if you’re ready to join the fight for data privacy.

Happy Data Privacy Day!

Response to President Obama’s Speech on Surveillance

Alex Fowler

Expansive government surveillance practices have severely damaged the health of the open Internet, creating calls for change from diverse organizations around the world along with hundreds of thousands of Internet users. President Obama’s speech on surveillance reform provided the first clear signs of the Administration’s response.

Overall, the strategy seems to be to leave current intelligence processes largely intact and improve oversight to a degree. We’d hoped for, and the Internet deserves, more. Without a meaningful change of course, the Internet will continue on its path toward a world of balkanization and distrust, a grave departure from its origins of openness and opportunity.

From our perspective as both an Internet company and a global community of users and developers, we’re concerned that the President didn’t address the most glaring reform needs. The President’s Review Board made 46 recommendations for surveillance reform, and some of the most important pieces are being ignored or punted to further review.

The Administration missed a compelling opportunity to:

  • Endorse legislative reform to limit surveillance, such as the USA FREEDOM Act and ECPA reform efforts;
  • Propose reforms on encouraging, promoting, or supporting backdoors;
  • End efforts to undermine security standards and protocols; and
  • Adequately protect the privacy rights of foreign citizens with no connection to intelligence, military, or terrorist activity.

The speech also didn’t raise one of the most important issues determining the future of government surveillance and privacy: the priorities of the next director of the NSA. If a culture of unlimited data gathering above all else persists, legal reforms and improved technological protections will be watered down over time and will never be enough to restore trust to the Internet. Internet users around the world would be well served if the next director of the NSA makes transparency and human rights a true priority. In Benjamin Franklin’s oft-quoted words, “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.”

The President’s speech did include one important reform that supports a healthy, trustable Internet: creating a new public advocate for privacy within the specialized intelligence court, FISA. Such an oppositional element is essential to ensure meaningful rule of law over government surveillance practices, in any context. In the U.S., where 99% of FISA court decisions ultimately favor the government, it seems particularly overdue.

Some of the Administration’s other ideas carry mixed benefits and harms for the future of the Open Internet. Limiting the scale of some bulk collection programs helps, to a small degree. But it does not justify the continuation of practices that significantly undermine privacy. The plan to work with Congress on alternative ways to sustain bulk collection through third parties or alternative storage may similarly create more harm. Third-party storage could allow for an additional layer of legal process and increase the practical cost of using the data, creating some safety measures and incentives against abuse. But those third parties might store it insecurely or unreliably, posing significant risk for both the intelligence mission and the communication subjects’ privacy.

At Mozilla, we’ve worked to protect privacy and trust online through many angles:

We’re going to keep working on this, pushing for meaningful change to surveillance practices and security technologies to help restore trust and support the open Internet around the world. We expect the President’s speech to be a floor for reform, not a ceiling, and we will make our positions known to Congress and the Administration. But we’ll need your help. For starters, you can join the movement at StopWatching.Us — and keep watching this page for more opportunities to make your voice heard.

Alex Fowler, Global Privacy & Policy Leader
Chris Riley, Senior Policy Engineer

Trust but Verify: Repost of article on security value of open source software

Alex Fowler

Over the weekend, my colleague Andreas Gal, together with Mozilla’s CTO Brendan Eich, published an article on the importance of open source software for maintaining the public’s trust that our products aren’t secretly working against the interests of our users.

In an effort to bring together posts related to privacy at Mozilla into one place, I’m republishing the post below.

Trust but Verify

Background

It is becoming increasingly difficult to trust the privacy properties of software and services we rely on to use the Internet. Governments, companies, groups and individuals may be surveilling us without our knowledge. This is particularly troubling when such surveillance is done by governments under statutes that provide limited court oversight and almost no room for public scrutiny.

As a result of laws in the US and elsewhere, prudent users must interact with Internet services knowing that despite how much any cloud-service company wants to protect privacy, at the end of the day most big companies must comply with the law. The government can legally access user data in ways that might violate the privacy expectations of law-abiding users. Worse, the government may force service operators to enable surveillance (something that seems to have happened in the Lavabit case).

Worst of all, the government can do all of this without users ever finding out about it, due to gag orders.

Implications for Browsers

This creates a significant predicament for privacy and security on the Open Web. Every major browser today is distributed by an organization within reach of surveillance laws. As the Lavabit case suggests, the government may request that browser vendors secretly inject surveillance code into the browsers they distribute to users. We have no information that any browser vendor has ever received such a directive. However, if that were to happen, the public would likely not find out due to gag orders.

The unfortunate consequence is that software vendors — including browser vendors — must not be blindly trusted. Not because such vendors don’t want to protect user privacy. Rather, because a law might force vendors to secretly violate their own principles and do things they don’t want to do.

Why Mozilla is different

Mozilla has one critical advantage over all other browser vendors. Our products are truly open source. Internet Explorer is fully closed-source, and while the rendering engines WebKit and Blink (Chromium) are open source, the Safari and Chrome browsers that use them are not fully open source. Both contain significant fractions of closed-source code.

Mozilla Firefox, in contrast, is 100% open source [1]. As Anthony Jones from our New Zealand office pointed out the other month, security researchers can use this fact to verify the executable bits contained in the browsers Mozilla is distributing, by building Firefox from source and comparing the built bits with our official distribution.

This will be most effective on platforms where we already use open-source compilers to produce the executable, which avoids compiler-level attacks of the kind Ken Thompson demonstrated in 1984.

Call to Action

To ensure that no one can inject undetected surveillance code into Firefox, security researchers and organizations should:

  • regularly audit Mozilla source and verified builds by all effective means;
  • establish automated systems to verify official Mozilla builds from source; and
  • raise an alert if the verified bits differ from official bits.
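The core of such an automated system is a bit-for-bit comparison of a locally reproduced build against the official binary. A minimal sketch of that check follows; the file paths and function names are illustrative, and a real verifier would also fetch official artifacts, pin compiler versions, and publish its results.

```python
# Illustrative sketch of the digest comparison at the heart of a
# build verifier. Paths and function names are hypothetical.

import hashlib

def sha256_of(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 64 KiB chunks so large binaries don't need to
        # fit in memory at once.
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_match(official_path, rebuilt_path):
    """True only if the two binaries are bit-for-bit identical."""
    return sha256_of(official_path) == sha256_of(rebuilt_path)
```

Independent parties running this same comparison from different regions is what turns a single check into the global alarm system the bullets above describe: any divergence between verified bits and official bits is immediately visible to everyone.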

In the best case, we will establish such a verification system at a global scale, with participants from many different geographic regions and political and strategic interests and affiliations.

Security is never “done” — it is a process, not a final rest-state. No silver bullets. All methods have limits. However, open-source auditability cleanly beats having no way to audit source against binary at all.

Through international collaboration of independent entities we can give users the confidence that Firefox cannot be subverted without the world noticing, and offer a browser that verifiably meets users’ privacy expectations.

See bug 885777 to track our work on verifiable builds.

End-to-End Trust

Beyond this first step, can we use such audited browsers as trust anchors, to authenticate fully-audited open-source Internet services? This seems possible in theory. No one has built such a system to our knowledge, but we welcome precedent citations and experience reports, and encourage researchers to collaborate with us.

Brendan Eich, CTO and SVP Engineering, Mozilla
Andreas Gal, VP Mobile and R&D, Mozilla

[1] Firefox on Linux is the best case, because the C/C++ compiler, runtime libraries, and OS kernel are all free and open source software. Note that even on Linux, certain hardware-vendor-supplied system software, e.g., OpenGL drivers, may be closed source.

Nationwide Day of Action for Online Privacy

Chris Riley

Mozilla stands with hundreds of major technology companies and nonprofit organizations, and tens of thousands of digital advocates, in calling for reform of the Electronic Communications Privacy Act, or ECPA.

ECPA was enacted in 1986 to ensure that wiretaps of then “new” forms of electronic communications (e.g., email messages between computers) were limited in the same way that telephone wiretaps were. The safeguards have been watered down over the intervening years, and were not extended to data stored on a computer. The result is that emails, social media messages, and other communications that users may consider private are not uniformly treated as such under United States law.

Today, Congress, with bipartisan support, has proposed changes to ECPA. Yet many government agencies would prefer to see these reform efforts die. Internet users who value privacy and trust online must make their voices heard.

Mozilla supports efforts for positive change in this space – and we’d like your help. We’re asking you to join us by signing a White House petition asking for support for sensible ECPA reform. These changes alone won’t eliminate the harms, but, like the USA FREEDOM Act, they will make a positive contribution to that effort.

We’re More Than The Sum Of Our Data

Alex Fowler

From the day I first browsed the Web, Mozilla has shaped my experience of the Internet. The community is one of the strongest forces making the Web what it is today. So I was intrigued when I was offered the chance to go from loyal user to paid contributor. The Web’s quick growth was creating new privacy concerns and Mozilla wanted to get in front of them. I had a successful consulting practice advising some of the biggest consumer brands on privacy and security, but I wanted to explore ways to have more impact.

What I found at Mozilla was truly inspiring. In the midst of massive investments in tracking and mining of user data, here was a group of people fiercely committed to making individual control part of the Web. Not since my time at the Electronic Frontier Foundation had I encountered an organization so well placed to reshape trust in the Internet. I was hooked.

That was three years ago, and I believe our work is more important than ever. According to leaked documents from Edward Snowden, governments see their ability to spy on our personal lives as the “price of admission” for use of an open Web. The same justification is given by industry lobbyists: that online tracking is the price for content and services. The powers-that-be believe we surrender the basic rights and freedoms we enjoy offline when we are online. And as someone who cares deeply about the Web, I take this personally.

A small group of people has decided that our privacy doesn’t matter. Privacy isn’t a philosophical abstraction. It’s what lets us control who we are through what we choose to reveal. It’s core to our autonomy, identity, and dignity as individuals. Privacy is what lets us trust that our laptops, phones, apps, and services are truly ours and not working against us. Abandoning privacy means accepting a Web where we are no longer informed participants.

At Mozilla, we believe privacy and security are fundamental and cannot be ignored. It’s enshrined in our Manifesto. However, we prefer to skip the platitudes, white papers, and insider deals, choosing instead to drive change through our code. Industry groups and policy makers had been debating Do Not Track for years before we showed up, wrote 30 lines of code, and gave it — for free — to hundreds of millions of Firefox users. Within a year, all of the other major browsers followed our lead. We saw the same thing happen when we killed the annoying pop-up ad. And we’re doing it again, together with members of our contributor community, testing new approaches to cookies, personalization and more.

In the wake of Snowden’s revelations and the work of countless journalists and advocates, we have a rare moment to change things for the better. Each week, front-page articles detail new intrusions into our private lives by governments and corporations around the world. 570,000 people signed a letter demanding our governments StopWatching.Us, which we delivered, in person, to politicians in Washington, DC. Over 50 million people have enabled Do Not Track, sending trillions of anti-tracking signals across the Web each month and asking companies to respect their privacy. The world is being reminded of why privacy — why openness, transparency, and individual control — are fundamental not just to the Web, but to the future of our global, hyper-connected world.

I joined Mozilla because I found a community of people working to build the Web we need. If you believe that the Web and our privacy and security are worth fighting for, I ask you to support our work. Mozilla may compete in commercial markets, but we are proudly non-profit. Your personal contribution and those of other individual donors make it possible for us to stand up for users and our right to privacy. Click here to make a year-end donation to Mozilla — and help us build a Web that puts people before profits.

Alex Fowler
Chief Privacy Officer
Mozilla


This post launches the Mozilla end of year fundraising campaign. Over the balance of the year, you’ll hear personal stories from some of our leaders about why they joined Mozilla, the challenges that face the Web, and why your support matters. I’m pleased to have written the kick-off post and look forward to the discussion to come. — AF

Mozilla joins with Stanford and others to launch Cookie Clearinghouse

Alex Fowler

In a post this morning from Mozilla’s CTO Brendan Eich, we announced that we’re working with Stanford’s Center for Internet and Society to develop a Cookie Clearinghouse. The Cookie Clearinghouse will provide users of Firefox, Opera and other browsers an independent service to address privacy concerns related to third-party cookies in a rational, trusted, transparent and consistent manner. The current third-party cookie patch will require additional modifications over the course of several months, depending on how quickly the new service takes shape and comes online. Note there will be an open, public brown bag meeting on July 2nd where additional info will be presented.

Here’s what Brendan posted:

As you may recall from almost six weeks ago, we held the Safari-like third-party cookie patch, which blocks cookies set for domains you have not visited according to your browser’s cookie database, from progressing to Firefox Beta, because of two problems:

False positives. For example, say you visit a site named foo.com, which embeds cookie-setting content from a site named foocdn.com. With the patch, Firefox sets cookies from foo.com because you visited it, yet blocks cookies from foocdn.com because you never visited foocdn.com directly, even though there is actually just one company behind both sites.

False negatives. Meanwhile, in the other direction, just because you visit a site once does not mean you are ok with it tracking you all over the Internet on unrelated sites, forevermore. Suppose you click on an ad by accident, for example. Or a site you trust directly starts setting third-party cookies you do not want.

Our challenge is to find a way to address these sorts of cases. We are looking for more granularity than deciding automatically and exclusively based upon whether you visit a site or not, although that is often a good place to start the decision process.

The logic driving us along the path to a better default third-party cookie policy looks like this:

  1. We want a third-party cookie policy that better protects privacy and encourages transparency.
  2. Naive visited-based blocking results in significant false negative and false positive errors.
  3. We need an exception management mechanism to refine the visited-based blocking verdicts.
  4. This exception mechanism cannot rely solely on the user in the loop, managing exceptions by hand. (When Safari users run into a false positive, they are advised to disable the block, and apparently many do so, permanently.)
  5. The only credible alternative is a centralized block-list (to cure false negatives) and allow-list (for false positives) service.
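The decision logic in steps 1–5 can be sketched as follows. This is an illustrative model, not Firefox's actual implementation; the list contents and names are assumptions. Note how the allow-list cures the foo.com/foocdn.com false positive and the block-list cures the accidental-click false negative:

```python
# Hypothetical sketch of visited-based third-party cookie blocking,
# refined by CCH-style exception lists.

# Allow-list cures false positives, e.g. a CDN domain operated by the
# same company as a site the user actually visited.
ALLOW_LIST = {"foocdn.com"}

# Block-list cures false negatives, e.g. a tracker the user "visited"
# only by clicking an ad by accident.
BLOCK_LIST = {"tracker.example"}


def should_accept_cookie(setting_domain, visited_domains):
    """Decide whether to accept a third-party cookie from setting_domain.

    visited_domains is the set of domains the user has visited directly,
    per the browser's cookie/history database.
    """
    if setting_domain in BLOCK_LIST:
        return False                 # blocked even if visited
    if setting_domain in ALLOW_LIST:
        return True                  # allowed even if never visited
    # Default rule: visited-based blocking.
    return setting_domain in visited_domains
```

For example, with the lists above, a cookie from foocdn.com is accepted even though the user only ever visited foo.com, while a cookie from tracker.example is rejected even if the user once landed there.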

I’m very pleased that Aleecia McDonald of the Center for Internet and Society at Stanford has launched just such a list-based exception mechanism, the Cookie Clearinghouse (CCH).

Today Mozilla is committing to work with Aleecia and the CCH Advisory Board, whose members include Opera Software, to develop the CCH so that browsers can use its lists to manage exceptions to a visited-based third-party cookie block.

The CCH proposal is at an early stage, so we crave feedback. This means we will hold the visited-based cookie-blocking patch in Firefox Aurora while we bring up CCH and its Firefox integration, and test them.

Of course, browsers would cache the block- and allow-lists, just as we do for safe browsing. I won’t try to anticipate or restate details here, since we’re just starting. Please see the CCH site for the latest.

We are planning a public “brown bag” event for July 2nd at Mozilla to provide an update on where things stand and to gather feedback. I’ll update this post with details as they become available, but I wanted to share the date ASAP.

/be