A Look Back at Privacy Day 2015


This post is co-authored by Stacy Martin & Greg Jost

Privacy Day is always an amazing opportunity to showcase Mozilla’s deep privacy roots. Each year, we try to do something a little different and this year we went bigger and better. With extensive help from our engagement team, we unified Mozilla’s Privacy Day efforts around a new “get smart on privacy” campaign.

Campaign goals were to raise awareness about privacy as a key issue of online life; encourage users to form a point of view and take action; and underscore that privacy is an essential part of Mozilla’s mission.

Our campaign website highlighted four easy steps to get smart on privacy and our snippets drove traffic to the page:

Step 1: Ask yourself about privacy
Step 2: Learn about privacy and why it matters
Step 3: Find out what you can do
Step 4: Start a conversation and share what you have learned


Traffic to the campaign website increased more than tenfold over last year’s campaign – a good sign that we’re reaching more people!

New this year was our Privacy Day Twitter Chat, with invited guests from CDT, DuckDuckGo, iKeepSafe, and Intel Security. With 3,000+ posts, the response was incredible, and the event was fast-paced and fun. Participants submitted thoughtful questions, engaging in a real conversation with privacy leaders who shared tangible tips and steps for better privacy.



During the hour-long chat, our hashtag, #PrivacyChat, became a trending topic on Twitter in the United States. Pretty exciting! Localization in Spanish, provided by our Mozilla Hispano community, expanded our reach and engagement much further. We enjoyed #PrivacyChat so much that we’d like to host more of them throughout the year across a range of policy topics.

One interesting highlight: the click-through rate on our Privacy Day tile was more than three times the average and a tenfold increase over our previous privacy tile – another great sign that we’re reaching more people! Tiles (presented to new users of Firefox Desktop) are localized in 20+ countries and 6+ languages around the world. We’re excited about the opportunity for users to connect to Mozilla initiatives through tiles.

Privacy Day Tile

We also had two live events, both in San Francisco – Privacy Lab and “Escape the Internet”.

Privacy Lab on January 28th was our first meetup for privacy-minded professionals. The goal of Privacy Lab is to incubate an ongoing series of meetups that bring people together around privacy, to share what they are working on, and to talk about how to collaborate. Our first event sold out (OK, the tickets were free, but it was still exciting!). We received some really good feedback and are committed to keeping the momentum going. In February we hosted our second Privacy Lab, and we are currently planning March and April events. To learn more, click here. We’d love to have you join us!

The following day, the dynamic and mysterious Nico Sell spoke to our employees and volunteers as the first in our new “Life in the Offices” series. Nico is the founder and CEO of Wickr, a private messaging app. Her engaging presentation, titled “Escape the Internet,” expressed her passion for keeping private communications off the Internet. If you missed it, her presentation is recorded here.

A big win for us internally was strong collaboration early and often across Mozilla, including marketing, PR, product, policy, legal, foundation, workplace resources, community and more. We learned a great deal about how to work together to create more impact as a cross-functional team, and we’re eager to apply what we’ve learned to Privacy Day 2016.

Trust in an Increasingly Connected World

Denelle Dixon-Thayer

This year at Mobile World Congress, I participated in two formal discussions. I spoke on a GSMA-hosted panel of experts on mobile and data, and I was invited to a Fireside Chat hosted by the IAB’s Randall Rothenberg, where we talked about many issues facing the mobile industry in the context of the internet of things.

At Mozilla, user transparency and control are a core part of our mission. I’ve personally written about how promoting transparency and user control can help us as an industry earn user trust.

In the GSMA panel, we discussed what a user-centered take on data would be in a world where all users are connected through a myriad of devices. It was clear from our discussion that as more devices become networked, user transparency and control remain a paramount concern if users are to trust the technology. These principles can sometimes take a back seat to other product features, as new devices, apps, and services collect and transmit data without giving any transparency or control to the users of those devices. As we approach a world where more and more devices transmit data, everyone in the development chain, from hardware providers to app developers, will need to do a better job of creating transparency for users and enabling controls over the transmission of user data. We also touched on the fact that, while privacy notices remain an important means of providing transparency, we as an industry must innovate around how to present information and choices to users in context, so that users have better control over their data.

During the IAB session we talked about privacy on mobile devices as well as concerns about the future of the “internet” in the context of the internet of things. We recognized that we have an opportunity to engage users in the discussion of the value exchange around advertising on mobile in a way that the industry didn’t effectively do on desktop. Our discussions also revealed concerns about walled gardens and limits on interoperability and competition if only a handful of companies control the market. This is a hard issue. The discussion re-affirmed what we know: openness and interoperability will deliver the most value for users and for every entity in the ecosystem. In this way, Firefox OS stands for more than just an open operating system – it represents a set of ideals we think are important to the internet of things at its core. If we embrace the web’s inherent architecture and the openness it brings at a platform level, it becomes hard to create a walled garden.

We believe that conversations like these truly help us as an industry move forward. What was clear through my discussions is that, while the world is changing and new technologies emerge, the core beliefs that we hold at Mozilla, including those around user control and open and interoperable ecosystems, remain timely and relevant. User trust and open ecosystems remain key in the future of mobile, wherever it may lead.

Making Progress on Privacy: CalECPA reform

Chris Riley

Mozilla takes privacy seriously – in our products, our policies, and our practices. As a result, we support draft legislation introduced today in the state of California to improve warrant requirements for electronic communications, also known as CalECPA reform (California Electronic Communications Privacy Act).

Last week, to recognize International Data Privacy Day, we hosted a Twitter chat and two separate gatherings for privacy professionals and the Mozilla community. But we don’t just prioritize privacy one day every year. We lead the technology industry year-round in developing and advancing best practices for privacy and data protection – a distinction acknowledged by the Ponemon Institute, whose rankings have named Mozilla the most trusted Internet company for privacy. And we are pushing the privacy envelope with product research on new Firefox features like accountless communications through Hello and tracking protection.

Because we put users and privacy first, we support CalECPA reform. The CalECPA bill would help protect Internet users by subjecting law enforcement requests for information to due process, and enforcing due process requirements in a meaningful way. The legislation applies warrant protection to all personal electronic information, regardless of its age or nature; it requires prompt notification to affected individuals; it extends reporting requirements for traditional wiretaps to electronic communications; and it includes reasonable exceptions in the case of emergencies or specific consent. The bill’s provisions help protect data privacy while enabling law enforcement to be effective.

Mozilla supports federal ECPA reform in Congress. In 2013, we joined with hundreds of technology companies and nonprofit organizations, and over a hundred thousand digital citizens, in calling for ECPA reform. At that time, despite overwhelming bipartisan support, government agencies pushed federal reforms off the table, leaving Internet users without protection. In fact, despite significant harm to trust online in recent years, the response from government leaders has left much to be desired.

Trust online has taken a beating, as we’ve all learned that many of our private communications aren’t as private as we thought they were. This isn’t just a California issue, or even a United States issue. It’s about establishing and protecting our right to secure, private communications online.

We remain hopeful for federal ECPA reform, and will continue to advocate for it. But with the new Congress seemingly focusing its technology attention span on conflicting plans for the future of the open Internet, it’s unclear when such reforms might be adopted.

Supporting CalECPA is an opportunity to begin making progress towards that goal.

How Mozilla Addresses the Privacy Paradox


Earlier this month, a 20-year-old NBA Clippers fan held up a sign in a crowded Washington, DC arena with her phone number on it. Seasoned privacy professionals have long repeated the old adage that if you give someone a free hamburger, they’ll give you all of their personal information. Yet research consistently tells us that people care about privacy, so what gives?

Why do people say they care but fail to take action? Or take action that isn’t privacy-protecting? Research points to a number of reasons, including complexity, overvaluing present benefits while undervaluing future costs, an optimism bias, an illusion of control, and even the association of privacy with responsibility and effort.

According to Pew Research in the United States, 91% of adults think that consumers have lost control over how their personal information is used by companies. That same Pew Research study found a majority (61%) say they “would like to do more” to protect their privacy.

The real turning point is a change in cultural and societal norms, and this will happen as behaviors change. This year for Privacy Day, we’ve created a ‘Get smart on privacy’ campaign. Our goal is to raise awareness and encourage action. Overall, we’re promoting change through a combination of tools, community, and teaching.


One of the barriers to widespread use of strong privacy practices is the complexity that can surround privacy tools. First you need to know they exist; then you need to know when and how to use them. Raising both awareness and ease of use is essential to broader adoption. A recent example is our Privacy Coach add-on, which helps users learn about and manage their Firefox for Android privacy settings. We’re moving towards a consistent and easy-to-use privacy experience across our products and features.

For more on tools, visit Denelle Dixon-Thayer’s post on the Mozilla blog.


For Privacy Day and throughout the year, we seek to create an open dialogue on privacy topics. By bringing in new points of view and bringing people together, we further reinforce a culture of privacy. Last year, we invited Dr. Ann Cavoukian to share Privacy by Design, and Michelle Dennedy and Jonathan Fox to discuss topics from their book ‘The Privacy Engineer’s Manifesto.’

This year, we’ve collaborated with local privacy advocates to create a new meetup we’re calling Privacy Lab. Cooper Quintin, from EFF, will keynote the event with “A State of the Union for Privacy and Consumer Protection and Wishlist for 2015.” Following his talk, we’ve planned at least an hour for people to connect and hear about what others are working on and how to get involved. The goal is to advance the state of the privacy ecosystem by bringing passionate privacy promoters and advocates together to explore a common purpose.

To reach a consumer audience, we’ve invited industry leaders to an hour-long #PrivacyDay Twitter Chat for anyone who is interested in learning more about privacy. Through a combination of outreach activities, including a new ‘Get smart on privacy’ website and social media posts, we’re focusing on simple things anyone can learn and do to take more control and create a stronger online privacy environment for themselves.

On January 29th, we’ve invited Nico Sell, co-founder and CEO of Wickr, to share her views with us, not because they’re identical to ours but because they may not be. By bringing in a variety of speakers, we always learn from those who may challenge our thinking.


Perhaps the most important community action we can take is to teach others, so they can in turn teach others. How do we take topics like encryption or password management and turn them into easy actions for everyone? One path forward is through teaching kits, activities, and badges. These allow us to chunk information so it’s more digestible and make it easy for people to share what they know. Teaching kits are an ideal project for community members who have an interest in sharing privacy with others.

In Toronto, Hive member Karen Smith is developing a privacy curriculum with a grant from the Office of the Privacy Commissioner of Canada to educate teens about privacy. Karen has collaborated with local teenagers to design and create four privacy badge pathways to encourage engagement through hands-on activities. Projects like this have enormous potential for leverage in other areas. Together we can continue to build our resources and find new ways to share knowledge.

The Mozilla community continues to be a strong voice in advocating for a free and open Web. The time is now to channel this advocacy to enable people to get smarter on privacy, empowering people with the skills to take control of their privacy, change societal norms, and create the Web we want and need.

Mozilla’s Data Privacy Principles Revisited

Denelle Dixon-Thayer

Mozilla’s commitment to transparency about our data practices is a core part of our identity. We recognized the value in giving a clear voice to this commitment through a set of Privacy Principles that we developed in 2010. These Principles, which we initially released in 2011 as an extension of the Mozilla Manifesto, have reflected and guided our handling of data.

Earlier this year, we revisited these Principles, engaging a wide cross-section of Mozilla and inviting public input. Today, we are introducing Mozilla’s updated Data Privacy Principles.

The update is a response to change, both within Mozilla and beyond. In those four years, Mozilla has grown and expanded with new products and services that didn’t exist in 2010. And in 2014, a renewed public emphasis on transparency and user control, particularly in the aftermath of the Snowden revelations, created new opportunities to address these through our products and policy initiatives.

Mozilla’s Data Privacy Principles continue to inform how we build our products and services, manage user data, and select and interact with partners – while shaping our public policy and advocacy work.

Mozilla’s five Data Privacy Principles are:

  1. Use and share information in a way that is transparent and benefits the user.
  2. Develop products and advocate for best practices that put users in control of their data and online experiences.
  3. Collect what we need, de-identify where we can and delete when no longer necessary.
  4. Design for a thoughtful balance of safety and user experience.
  5. Maintain multi-layered security controls and practices, many of which are publicly verifiable.
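
The third principle lends itself to a concrete illustration. Here is a minimal sketch of de-identifying a record before storage, assuming a salted-hash approach with hypothetical field names (this is not Mozilla’s actual data pipeline):

```python
import hashlib

def de_identify(record, salt):
    """Keep only the fields we need (data minimization) and replace
    the direct identifier with a salted hash."""
    keep = {"country", "product_version"}  # hypothetical allowlist
    out = {k: v for k, v in record.items() if k in keep}
    # Salted SHA-256 of the identifier; the raw email is never stored.
    out["user_key"] = hashlib.sha256((salt + record["email"]).encode()).hexdigest()
    return out
```

Note that a salted hash is only pseudonymization; stronger de-identification typically requires aggregation or deleting the identifier outright.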

Together with the Manifesto, these principles will continue to guide the work of teams across Mozilla as we stand up for users and the Web.  To keep the Web free and open, we put users at the center, through transparency and user control, while helping to minimize risk to the user through limited data, sensible settings, and strong security practices.

Introducing Polaris Privacy Initiative to Accelerate User-focused Privacy Online

Denelle Dixon-Thayer

At Mozilla, we believe that an individual’s privacy on the Internet cannot be treated as optional. Our Privacy Principles guide the design of each of our products and services. We’ve introduced features to support our privacy focus across desktop and mobile, including: an add-on platform with Firefox add-ons like Lightbeam, Disconnect, Ghostery and Privacy Badger; the Do Not Track preference; Private and Guest Browsing; strong encryption in Firefox Sync; an individual approach to app permissions; and even a new Forget button. But we recognize we need to do better and do more. We want to give our users the Web experience they want through features that create transparency and control. We want our users to trust us and the Web.

In October 2014, Harris Poll conducted a global online survey* on behalf of Mozilla of more than 7,000 online adults ages 18-64. Three quarters (74%) of people feel their personal information on the Web is less private today than it was one year ago. The same share of adults agrees that Internet companies know too much about them. We think we can help with this concern.

Today, we are excited to announce a new strategic initiative at Mozilla called Polaris. Polaris is a privacy initiative built to pull together our own privacy efforts along with other privacy leaders in the industry. Polaris is designed to allow us to collaborate more effectively, more explicitly and more directly to bring more privacy features into our products. We want to accelerate pragmatic and user-focused advances in privacy technology for the Web, giving users more control, awareness and protection in their Web experiences. We want to advance the state of the art in privacy features, with a specific focus on bringing them to more mainstream audiences.

We’re joined at launch by the Center for Democracy & Technology (CDT) and the Tor Project, both non-profits, who will support and advise Polaris projects and help us align them with policy goals. We believe that the support and assistance from each of these groups is crucial. “CDT looks forward to working with Mozilla on the Polaris program and advising on issues like combating Internet censorship and protecting online anonymity, which are vital to promoting free expression online,” said Justin Brookman of CDT. Not only will these collaborations hold us accountable to staying true to our goal of getting new and innovative privacy features into our general release products; the diversity of understanding, focus, and opinion will also improve what we bring to the mainstream.

Today we’re announcing two experiments under the Polaris banner, focused on anti-censorship technology, anonymity, and cross-site tracking protection. First, Mozilla engineers are evaluating the Tor Project’s changes to Firefox to determine whether changes to our own platform codebase can enable Tor to work more quickly and easily. Mozilla will also soon begin hosting our own high-capacity Tor middle relays to make Tor’s network more responsive and allow Tor to serve more users. “The Tor Project is excited to join Mozilla as a launch partner in the Polaris program. We look forward to working together on privacy technology, open standards, and future product collaborations,” said Andrew Lewman of the Tor Project.
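
For readers curious what running a middle relay involves: a Tor relay stays a non-exit (middle) relay by rejecting all exit traffic in its torrc configuration. A minimal sketch, with an illustrative nickname and bandwidth caps (not Mozilla’s actual relay configuration):

```
# torrc for a non-exit (middle) relay -- values here are illustrative
Nickname ExampleMiddleRelay
ORPort 9001
ExitPolicy reject *:*          # never act as an exit relay
RelayBandwidthRate 10 MBytes   # sustained bandwidth cap
RelayBandwidthBurst 20 MBytes
```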

The second experiment (which is our first in-product Polaris experiment) seeks to understand how we can offer a feature that protects those users that want to be free from invasive tracking without penalizing advertisers and content sites that respect a user’s preferences. We’re currently testing this privacy tool in our “Nightly” channel. The experiment is promising, but it’s not a full-fledged feature yet.  We’ll test and refine the user experience and platform behavior over the coming months and collect feedback from all sides before this is added to our general release versions.
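
Conceptually, tracking protection of this kind checks the host of each third-party request against a blocklist of known tracking domains, walking up through parent domains so subdomains are covered. A minimal sketch, with hypothetical list entries and function name (not the actual Firefox implementation):

```python
# Hypothetical blocklist entries; real lists contain thousands of domains.
TRACKER_DOMAINS = {"tracker.example", "ads.example"}

def is_blocked(request_host):
    """Return True if the host, or any parent domain of it, is on the blocklist."""
    parts = request_host.lower().split(".")
    # Check "a.b.c", then "b.c" -- stop before the bare top-level label.
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in TRACKER_DOMAINS:
            return True
    return False
```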

We recognize that privacy is not just a functionality on your computer or a setting you can turn on or off, and we’re excited to see what we can do to advance privacy online with Polaris. To learn more or to join us, visit the wiki.

*Survey Methodology

This survey was conducted online within Great Britain, France, Spain, Germany, Brazil, and India between October 22nd and 29th, 2014, among 7,077 adults (aged 18-64) by Harris Poll on behalf of Mozilla via its Global Omnibus product. Figures for age, sex, race/ethnicity, education, region, and household income were weighted where necessary to bring them into line with their actual proportions in the population. Where appropriate, these data were also weighted to reflect the composition of the adult online population. For complete survey methodology, including weighting variables, please contact press@mozilla.com

Prefer:Safe — Making Online Safety Simpler in Firefox

Alex Fowler

Mozilla believes users have the right to shape the Internet and their own experiences on it. However, there are instances when people seek to shape not only their own experiences, but also those of young users and family members whose needs related to trust and safety may differ. To do this, users must navigate multiple settings, enable parental controls, tweak browsers and modify defaults on services like search engines.

We’re pleased to announce a smart feature in Firefox for just this type of user, called Prefer:Safe, designed to simplify and strengthen the online trust and safety model. Developed in collaboration with a number of leading technologists and companies, the feature connects parental controls enabled on Mac OS and Windows with the sites users visit through their browser.

How it works:

  • Users on Mac OS and Windows enable Parental Controls.
  • Firefox sees that the user’s operating system is running in Parental Control mode and sends an HTTP header — “Prefer:Safe” — to every site and service the user visits online.
  • A site or service looking for the HTTP header automatically applies the higher safety controls it makes available, including honoring content or functionality restrictions.
  • There is no UI in Firefox to enable or disable Prefer:Safe, which makes it one less thing for kids to try to circumvent.
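
On the receiving side, the check a site performs is simple: look for the Prefer header and see whether it carries the safe token. A minimal sketch in Python, with an illustrative function name and a plain dict standing in for a framework’s header object:

```python
def prefers_safe_content(headers):
    """Check whether a request carries the 'Prefer: safe' hint.

    `headers` is a dict-like mapping of header names to values
    (illustrative; a real server framework supplies its own object).
    Header names and preference tokens are case-insensitive, and the
    Prefer header may carry several comma-separated preferences.
    """
    prefer = ""
    for name, value in headers.items():
        if name.lower() == "prefer":
            prefer = value
            break
    return any(token.strip().lower() == "safe" for token in prefer.split(","))
```

A site that sees the preference would then serve its restricted-content experience, just as it would for a user who enabled safe mode in a site-level setting.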

Prefer:Safe demonstrates the power and elegance of HTTP headers in empowering users to communicate preferences to websites and online services. This is one reason we’ve been championing Do Not Track, an HTTP-header-based privacy signal for addressing third-party tracking under development at the W3C. In this case, no other configuration is necessary at either the browser or search engine level for the user’s preference to be effective across the Web, which helps ensure the intended online experiences meet user expectations.

We’re pleased that Internet Explorer has implemented this feature for its users, which, along with Firefox, makes this capability relevant at scale right out of the box. We hope to see broader adoption of this feature in the near future.

For more information about Prefer:Safe, a draft specification has been submitted to the IETF. To discuss this feature, I’ve cross-posted this to Mozilla’s Dev.Privacy group.

Clearer Mozilla Privacy Website & Policies

Denelle Dixon-Thayer

**APRIL 16 UPDATE: the privacy policies are now updated, and you can view them here. Thanks to everyone who provided input on draft policies. We have updated the post below to remove links that are now out of date.**

Over the last year, a group of Mozillians have been exploring how to make our privacy website and policies better.  For example, the Firefox Privacy Policy (Update: link now points to an archived version of previous policy) is over 14 pages long and can be hard to parse – we don’t like that.  Given our focus on transparency and privacy, we wanted to create a framework that:

  1. Is easy to understand yet detailed enough to provide transparency.
  2. Gives users an opportunity to dive deeper into the technical aspects of our policy for specific products.
  3. Does not modify our practices but clarifies how we communicate them.
  4. Allows each product to have its own notice that is simple, clear and usable.

We now have an approach that we want to share and gather input on before implementing. I want to make it clear that although we’re rewriting the text of our privacy notices, we are NOT changing our practices. Our goal is only to make the notices easier to digest and provide users with the information they care about most, including new ways to access more detail if they are interested.

Here’s an overview of the new approach:

  • We’ve consolidated the parts of our products’ various privacy policies that are the same into a “Mozilla Privacy Policy.” Because we believe our approach to user data should be consistent regardless of the product, we’ve centralized as much as we can.
  • We’ve created an individual “Privacy Notice” for any policy that’s specific to an individual product.  We’ll be rewriting all our product notices to fit this mold, but are starting with Firefox and Mozilla websites.
  • We believe there is a group of users who want a more detailed explanation of how features work at a technical level. To provide this detail, we’re also creating new SUMO articles for features (like our Firefox Health Report page) that give users a deeper understanding of those products, and we will link to those explanations within each Product Privacy Notice.
  • As always, we make all the code we create for our projects available as source code under open and permissive licenses, so you can see how each feature works in the code itself. We encourage people to get involved in one of our dev channels, such as mozilla.dev.privacy and mozilla.dev.planning, or to look at the code for each project.

We welcome any questions or input you have through our Governance mailing list.  Our current plan is to implement these changes on April 15.

Our new privacy hub layout features our Privacy Policy on the center of the page and lists our Product Privacy Notices along the right.



We added “learn more” / “show less” options for users to more easily find information.



User Data & You: Privacy for Programmers

Allison Naaktgeboren (:ally)

This was originally posted as a guest post on January 31, 2014. Since then, it has been requested that I post under my own name.


I am a Firefox Engineer at Mozilla. I have worked on Desktop Firefox, Firefox for Android, and Firefox Sync. I currently work on Firefox for Windows 8 Touch, (née Firefox Metro). I also serve on Mozilla’s Privacy Council.

On Data Privacy Day, I presented a perspective on what we can do differently. My primary audience is fellow engineers and those engaged in engineering-related activities. When I say ‘we’ I am largely referring to engineering as a group. The remainder of this post is a written expansion of the presentation. The Air Mozilla recording is available here.


My goal is to start a public discussion about what engineers need to know about user privacy. Eventually the result of this discussion will evolve into a short training or set of best practices so engineers can ship better code with less hassle. Since this is the start of a public discussion, the content below will probably raise more questions than answers.

There be scaly dragons. Ye have been warned.

Privacy? That Word is so Overused. What is it Exactly & Why do I Care?

Privacy is a culturally laden word and definitions vary widely. Privacy means different things to different people at different times, even within the nascent field of privacy engineering. So for sanity’s sake, the following are my table stakes definitions.

Privacy: How & by whom the personal information of an individual is managed with respect to other individuals.

User Data: Any data related to users that they generate or cause to be generated; or that we generate, collect, store, have custody and control over, transfer, process, or hold interest in.

Why do we care? The reason Mozilla exists is to defend and promote the open web. Firefox & Firefox OS are great products, but they are not the raison d’être outlined in the Manifesto; they are means to an end. The Mozilla Manifesto declares that for a healthy web, users must be able to shape their own experiences on it. Ain’t nothing shapes your experience online more than the data generated for, by, and about you. Whoever controls that data controls your experience on the web. So our goal of an open web is directly linked to individuals’ ability to control that data for themselves.

Acknowledging the Elephant in the Room

Let’s start by acknowledging the elephant in the room: whether or not Mozilla products should even handle user data. That would be a rich discussion on its own. This is not that discussion. This discussion assumes we’re going to handle user data. Regardless of your views, let’s agree that there are some things we will need to do differently when we choose to handle user data. Let’s figure out what those are.

Ok, So We Care; There’s Another Team at Mozilla for That.

There is a misconception I run into often that I’d like to clear up. Data safety and user privacy are everyone’s job, but especially an engineer’s job. At the end of the day, engineers make the sausage. No one has more leverage over what gets written than the engineer implementing it. The privacy team is here to help, but there are three of them and hundreds of us. The duty is really on us, not them. Whether our code is fast, correct, elegant, secure, and meets Mozilla’s standards is chiefly our responsibility.

Ok, So it’s Kinda My Job. What do I Need to Think About or do Now?

I have good news & bad news. The good news is that it boils down to writing more stuff down & making more decisions upfront. Stated more formally:

  • More active transparency (writing more stuff down)
  • More proactive planning (making more decisions up front)

Sounds simple, eh? Seasoned engineers should feel their spider sense tingling. It’s not miscalibrated. That’s the bad news: it’s how you do it that matters. The devil is in the details. So let’s tackle the easier one first: what I flippantly referred to as ‘writing more stuff down’.

Active Transparency (aka write more stuff down)

Passive Transparency: unintentional; decisions aren’t actively hidden, but are difficult to locate and may not even be documented.

Active Transparency: intentional; everything is written down, easily searchable and locatable by interested parties.

If you haven’t heard these terms before, don’t panic. I made them up years ago when I was a volunteer contributor trying to articulate how I was part of an open source project, actively following Bugzilla, but couldn’t figure out what was going on in the /storage module, let alone the rest of the Firefox code base.

Active transparency is functioning transparency. It requires sustained effort. Information, history, and decisions of a feature can be searched for, located, and consumed by all inclined.

Passive transparency is what happens unintentionally. People aren’t trying to hide information from each other. It just happens and no one notices until it is too late to do anything about it.

We often don’t notice because those of us who code marinate in information. We rarely bother to test whether anyone outside can figure out what we’re living and breathing life into.

Break that habit. You test your code to prove it works; test your transparency the same way. Ask someone in marketing or engagement to figure out the state of your project or why your design is in its current state. Can they explain your tradeoffs, constraints, or design decisions back to you? Can they even find them?

I hear grumbling already: ‘Sounds like useless paperwork, not worth it’. What you really mean is ‘not worth it to you right now’. It is worth a great deal to the people who will be responsible for the feature after you ship it, and there will be many of them.

One of the ways user data based features differ dramatically from application features of yore is that control will change hands many times over. Future development, operations, database administration, etc teams cannot read your mind. They also can’t go back in time to read your mind.

As an added bonus, privacy is not the only reason to be actively transparent. Active transparency is vital to building our community. Like open source software, it’s not really open if no one can find it and participate. Active transparency applies to the decision making process as much as to source code.

Proactive Planning (aka Making More Decisions With More People)

Now we move on to the harder part: more decisions you’ll need to make with more people. Getting agreement on requirements is often one of the most difficult and least pleasant parts of an engineer’s craft. When handling user data, it gets harder still. Your stakeholders will multiply, because the number of people who handle the data your feature generates or uses over its lifetime has increased.

The reason for enduring that pain at the beginning is that effective privacy is something you’ll only get one shot at. It’s usually impossible or cost-prohibitive to bolt it on to stuff after it’s built.

Proactive planning decisions make up the bulk of the rest of this post. They are phrased as questions because the answers will be different for each project, and they should not be read as an all-inclusive list. The call to action is for you to answer them, and to write the answers down in a searchable location (ahem: active transparency!).

30,000 Foot Views

The problem space can be vast. Below are two high level categorizations to jumpstart your problem solving, so that your feature can concretely bring to life the Manifesto’s declaration that users must be able to shape their own experience.

First Way to Slice It

An intuitive place to start is interaction.

Interactions between data and the events in its life, i.e. the data lifecycle

  • Birth
  • Life
  • Death
  • Zombie (braaaains)

Interactions between us (Mozilla) and users’ data

  • How sensitive is this data?
  • Who should have access to it?
  • Who will be responsible for the safety of that data?
  • Who will make decisions about it when unexpected concerns come up?

Interactions between users and their data

  • How will a user see the data?
  • How will a user control it?
  • How will a user export it?

Second Way to Slice It

Another way to group key decisions is by basics plus high level concerns, such as:

  • Benefits & Risks
  • Openness, Transparency, & Accountability
  • Contributors & Third Parties
  • Identities & Identifiers
  • Data Life Cycles & Service History
  • User Control
  • Compatibility & Portability

Things to Think About – Basics

To start off, most of these seem pretty obvious. However, there can be gotchas. For example, determining how identifying a type of data is can be tricky: what seems harmless now could later be shown to be strongly identifying. Consider the locale of your Firefox installation. If you are in en-US (American English), locale is not very identifying. Seems obvious. For small niche locales, however, it can be linked to a person.
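
The locale example can be made concrete with the idea of an anonymity set: how many users share a given value of the data. This is a minimal sketch with made-up installation counts, not real Firefox numbers:

```python
# Hedged sketch: why "harmless" data like locale can still identify users.
# Installation counts here are hypothetical, not real telemetry.
from collections import Counter

installs = Counter({"en-US": 90_000_000, "de": 12_000_000, "dsb": 900})

def anonymity_set(locale: str) -> int:
    """How many users share this locale value? Smaller = more identifying."""
    return installs[locale]

assert anonymity_set("en-US") > 1_000_000  # blends into a huge crowd
assert anonymity_set("dsb") < 1_000        # niche locale: nearly identifying
```

The same check applies to any field you collect: a value shared by millions is weakly identifying; a value shared by a handful of people is nearly a name.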

  • Does your product/feature generate user data?
      • Metadata still counts
  • Does your product/feature store user data?
  • What kind of data & how identifying is it?
  • Are there legal considerations to this feature?
  • How do you authenticate users before they can access their data?
  • Which person or position is responsible for the feature while it remains active?
  • Who makes decisions after the product ships?
      • Figure this out. Now.

Things to Think About – Benefits and Risks

There will always be risk in doing anything. There exists a risk that when I leave my house an anvil will drop on me. That doesn’t mean I never leave my house; I leave because the benefits (like acquiring dinner) outweigh the risk. Similarly, there will always be risk when handling user data. That doesn’t mean we should never handle it, but there had better be a benefit to the user. ‘Well, it might be useful later’ is probably not going to cut the mustard at Mozilla as a benefit to users.

  • What is the benefit to users from us storing this data?
  • What are the current alternatives available on the market?
  • What is the risk to users from storing this data?
  • What is the risk and cost to Mozilla from storing this data?
  • Where are you going to store this user data? Whose servers? (If not ours, apply above questions as well)

Things to Think About – Openness, Transparency and Accountability

For a Mozilla audience, this is preaching to the choir.

  • Have the benefits & risks of this feature been discussed on a public forum like a mailing list?
      • Should we exempt detailed discussion of handling really sensitive data?
  • Where is the documentation for our tradeoffs and design decisions with respect to user data? (*cough* Active transparency!)

Things to Think About – Contributors and Third Parties

The use of third party vendors adds additional nuances, as I alluded to earlier.

  • Are any third party companies or entities involved in this? (ex: Amazon AWS)
  • Do we have a legal agreement governing what they can and can’t do with it?
  • Who makes decisions about access to the data?

At Mozilla, we sometimes release data sets so researchers can contribute knowledge about the open web for the public good.

  • Could researchers access it directly?
  • Do we have plans to release the dataset to researchers?
  • What would we do to de-identify the data before release?
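
To make the de-identification question concrete, here is a minimal sketch of two common steps before a release: replacing direct identifiers with keyed hashes and suppressing values shared by fewer than k records. The field names and secret are invented for illustration; real releases need much more analysis than this:

```python
# Hedged sketch of basic de-identification before releasing a dataset.
# The secret key and field names are hypothetical.
import hashlib
import hmac
from collections import Counter

SECRET = b"rotate-and-discard-after-release"  # hypothetical keyed salt

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a keyed, truncated hash."""
    return hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def suppress_rare(records, field, k=5):
    """Drop records whose value for `field` appears fewer than k times."""
    counts = Counter(r[field] for r in records)
    return [r for r in records if counts[r[field]] >= k]

rows = [{"user": pseudonymize(f"user{i}"), "locale": "en-US"} for i in range(10)]
rows.append({"user": pseudonymize("odd-one"), "locale": "dsb"})  # unique value
safe = suppress_rare(rows, "locale", k=5)
assert all(r["locale"] == "en-US" for r in safe)  # rare locale suppressed
```

Note that keyed hashing alone is not anonymization; linkage attacks against released datasets are well documented, which is why the suppression step (and expert review) matters.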

Things to Think About – Identity and Identifiers

There’s probably nothing more personal than someone’s identity.

  • Will this feature have a user identifier?
  • Who owns the login/username/identifier?
  • Is it possible to use this feature without supplying an identifier?
  • How will the user manage this identifier?
  • Can they delete it?
  • Who can see this identifier?
  • Can the user control who can see their identifier?
  • Can this identifier be linked to a real-life identity?
  • Can a single person have multiple identifiers/accounts?

Things to Think About – Data Lifecycles and Service History

This is an area that most application developers will have trouble with because we often don’t think about the mid-life or death of our feature or the data it uses. It ships, it’s out! Onto the next thing!

Not so fast.

  • Which person or position is responsible for the data/feature while it remains active?
  • Who makes decisions after the product ships?
  • Can a user see a record of their activities?
  • What happens to an inactive account and its associated data?
      • When is a user deemed inactive?
  • How will you dispose of user data?
  • How will the data be secured in storage?
  • How long will we retain the data?
  • Who has access to the data at various stages?
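
Answering the inactivity and retention questions up front means you can encode them as policy rather than rediscovering them later. A minimal sketch, with illustrative thresholds that are not Mozilla policy:

```python
# Hedged sketch of a data retention policy: when is a user inactive,
# and when is their data disposed of? Thresholds are invented.
from datetime import datetime, timedelta

INACTIVE_AFTER = timedelta(days=365)        # hypothetical inactivity cutoff
PURGE_AFTER = timedelta(days=365 + 90)      # grace period before disposal

def lifecycle_state(last_seen: datetime, now: datetime) -> str:
    age = now - last_seen
    if age >= PURGE_AFTER:
        return "purge"     # dispose of the data (and plan for backups too)
    if age >= INACTIVE_AFTER:
        return "inactive"  # e.g. notify the user before deletion
    return "active"

now = datetime(2015, 1, 28)
assert lifecycle_state(datetime(2014, 12, 1), now) == "active"
assert lifecycle_state(datetime(2013, 9, 1), now) == "purge"
```

The point is not these particular numbers but that someone decided them, wrote them down, and the code reflects the written decision.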

Things to Think About – User Control

To shape their own experiences on the web, users need to have control of their data.

  • How can a user see their data?
  • Can users delete data in this feature?
  • What exactly would deletion mean?
      • How will it happen?
      • What will it include?
      • What about already released anonymized data sets?
      • What about server logs?
      • What about old backups?
  • Is there a case where the user identifier can be deleted, but not necessarily the associated data?
  • Is any of the data created by the user public?
  • What are the default user control settings for this feature?
  • How could a user change them?

Things to Think About – Compatibility and Portability

In my not-so-humble opinion, it’s not an open web if user data is held for ransom or locked into proprietary formats.

  • Can the user export their data from this service?
  • What format would it be in?
  • Is it possible to use an open format for storage?
  • If not, should we start an effort to make one?
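
As a minimal sketch of what the export question looks like in practice: hand users their data in an open, documented format (JSON here) rather than a proprietary blob. The record fields are hypothetical:

```python
# Hedged sketch: export a user's data in an open format (JSON).
# The record structure is invented for illustration.
import json

def export_user_data(record: dict) -> str:
    """Serialize a user's data as pretty-printed JSON for download."""
    return json.dumps(record, indent=2, sort_keys=True, ensure_ascii=False)

blob = export_user_data({"bookmarks": ["https://mozilla.org"], "locale": "en-US"})
assert json.loads(blob)["locale"] == "en-US"  # round-trips losslessly
```

An open format means the user (or a competing service) can parse the export without us, which is the whole point of portability.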

That’s a Lot of Extra Work. No. Not OK. Not Cool.

Yes, it is.

Handling user data is going to increase your workload. So does writing test coverage. We do it anyway. We write tests to meet our standards for correctness; we must write code that meets our standards for privacy.

I didn’t say it would be easy, but it’s doable. We can do it better and show that the web the world needs can exist.

The Privacy Team is Here to Help. Talk to Them Early and Often

That’s a metric ton of questions to ponder. I don’t expect you to remember them all. The privacy team is working on a new kickoff form and a checklist of considerations to make this process smoother. (Additional note: I spent most of today on just this goal.) They may even merge those two things. For now, use the existing Project Kickoff Form and check out this wiki containing the questions I’ve listed above.

Not sure if you need a review? Just have a question? Something you want to run by them? Drop them an email or pop into the #privacy IRC channel.

Have an Opinion? Join the Effort.

The Mozilla Privacy Council needs more engineers, including volunteer contributors. No one knows more about building software than we do. User-empowering software won’t get built without us. Help shape the training, best practices, the kickoff form, and privacy reviews of new features. To get involved, email stacy at mozilla dot com.

Special thanks to the Metro team for their patience with my delayed code reviews this week.

Thank you for reading. May your clobber builds be short.

Fighting Back Against Surveillance

Chris Riley

Expansive surveillance programs damage user trust, stifle innovation, and risk a divided Internet. They affect all Internet users around the world – and yet we still don’t know their full impact, even now.

This coming Tuesday, February 11th, will mark “The Day We Fight Back” against mass surveillance. Mozilla is taking part in this campaign to help lead the world’s Internet citizenry in flexing a little muscle and delivering a message: It’s time to fix this.

What will happen without reform? The Internet industry in the United States will feel perhaps the most harm, with potentially hundreds of billions of dollars lost. Over time, expansive surveillance will produce immeasurable harm to the future of innovation and adoption, not just for the U.S. but for the entire world.

Day We Fight Back Eye-Hand Logo

We launched Stop Watching Us to build a grassroots army on this issue. Now, eight months later, reform is beginning. The first round of commitments from the Administration was disappointing. We need much more. Leaders in Congress have made clear their intention to act, and one of the goals of the Day We Fight Back is to organize support for their efforts, in particular through the USA Freedom Act.

Join the fight – make your voice heard.