Why Fair Elections Require Responsible Tech

For democracy to thrive in the internet era, we need technology that respects rules and privacy.

One of my favorite childhood memories is joining my mom each time she voted. I can still remember approaching our neighborhood polling place with a drooping flag outside the entrance, the smell of coffee and donuts to fuel the poll workers for their long shifts, and the sound of the curtain as it closed to protect the privacy of the voter. It was a sacred ritual that felt both exciting and important.

Voting is still a sacred, important ritual to me. Yet, everything leading up to elections (if there is even a “lead-up” period anymore) has changed dramatically. These days, preparing to fill out a ballot feels like an endurance test that includes watching hours of debates and candidate bloopers, making your way through mountains of digital news coverage on every moment of the campaign, and beating back misinformation attacks. I love getting the “I Voted” sticker when I’m done, but I really feel like I deserve a medal.

With all of this effort, you’d hope Americans would show up for the midterm elections better informed than ever before — right?

Getting informed online isn’t easy

Anyone who spends time on YouTube, Twitter, or elsewhere online knows that finding the truth isn’t always easy. The journalist Chris Hayes recently provided an apt example involving online misinformation about the Federal Reserve.

Scenarios like the one Chris Hayes describes happen all the time. On today’s web, misinformation can abound, algorithms can promote false content, and filter bubbles can form fast. It’s the result of a model that prizes engagement over what’s truthful or what’s right. As a result, people are misled and misinformed about the Federal Reserve — but also about countless other institutions and issues.
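
To make that mechanism concrete, here is a minimal, hypothetical sketch in Python (not any platform’s actual code) of what ranking a feed purely by predicted engagement looks like. The posts, scores, and accuracy labels are invented for illustration.

```python
# Hypothetical example: ranking a feed purely by predicted engagement.
# The posts, engagement scores, and accuracy labels below are invented;
# this is not any real platform's algorithm or data.

posts = [
    {"title": "Fed publishes routine quarterly minutes",
     "predicted_engagement": 0.02, "accurate": True},
    {"title": "SHOCKING: who REALLY controls the Federal Reserve",
     "predicted_engagement": 0.31, "accurate": False},
    {"title": "Explainer: how the Federal Reserve sets interest rates",
     "predicted_engagement": 0.05, "accurate": True},
]

# An engagement-only ranker never consults the "accurate" field,
# so the sensational (and false) post rises to the top of the feed.
ranked = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in ranked:
    print(f'{post["predicted_engagement"]:.2f}  accurate={post["accurate"]}  {post["title"]}')
```

The toy example’s point is simple: if accuracy never enters the scoring function, it can’t influence what gets amplified.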

For this reason, technology has had an outsized impact on recent elections — and will again in the midterms. It starts with an age-old invention: falsehood (also known as misinformation, disinformation, propaganda, or “fake news”). Misinforming voters is hardly a new tactic. But a digital ecosystem that spreads, amplifies, and reinforces this misinformation is new. My colleague and Mozilla Fellow Renée DiResta recently wrote about this phenomenon for WIRED. Renée starts by sharing an anecdote similar to Chris Hayes’: after she did some research online, her Pinterest feed was quickly commandeered by polemical and conspiratorial memes.

Renée writes: “Recommendation engines are everywhere, and while my Pinterest feed’s transformation was rapid and pronounced, it is hardly an anomaly. BuzzFeed recently reported that Facebook Groups nudge people toward conspiratorial content, creating a built-in audience for spammers and propagandists. Follow one ISIS sympathizer on Twitter, and several others will appear under the ‘Who to follow’ banner.”

Renée has turned up countless similar examples in her research — like the proliferation of dangerous anti-vaccine content on Google — and she recently testified before the Senate about how foreign actors use social media to manipulate U.S. elections. Renée is in the vanguard of technologists and activists calling attention to these issues.

(Aside: Renée appeared in the Season 2 IRL podcast episode “Ctrl+Alt+Facts” — you should definitely listen.)

Of course, dangerous algorithms are just one challenge on today’s internet. Mass surveillance and mass data collection are also threats to fair elections (not to mention privacy). The Facebook-Cambridge Analytica scandal earlier this year made this perfectly clear. And bots can sow discord and influence politics, too.

How do we fix the internet?

So how do we address not just the symptom (imperiled elections), but the root problem — an internet that rewards falsehoods and hyperbole?

First comes acknowledging the problem — that’s why it’s so heartening to see Renée’s research shared widely. It’s also heartening to see technology executives held accountable. Earlier this month, Twitter’s Jack Dorsey and Facebook’s Sheryl Sandberg visited the Hill for hearings, and lawmakers asked important questions about misinformation. Our awareness is growing, and so is the call for solutions.

When it comes to developing solutions, there are promising steps in the right direction. In the EU, the GDPR (General Data Protection Regulation) puts individual users in control of their personal data, which enhances privacy and cuts down on invasive microtargeting. In New York City, lawmakers are convening an algorithmic task force — essentially an oversight committee for the lines of code that carry out city services. And there are bills in Congress that would make political ads online more transparent.

(There are also helpful tools being developed by civil society — like Citizen Lab’s Security Planner and Mozilla’s Facebook Container.)

Protecting our personal data

There’s no panacea for this problem. But by acknowledging that the current data- and attention-driven internet model often clashes with our democratic values, we can start on the path to real solutions.

(Another reading / watching suggestion: Mozilla’s recent mini-doc on misinformation, which features interviews with leading journalists and political scientists. Watch it here.)

I long for the simple days of walking to the voting booth with my mom. And, as we push to develop systemic solutions, I stick to pre-digital-age strategies for voting preparation — reading the paper, talking to friends who work on or study the issues on the ballot, and calling my sister to see what she’s doing.


What to Expect When You’re Electing

For more on responsible tech and elections, have a listen to the Season 3 finale of Mozilla’s IRL podcast. Veronica Belmont and Baratunde Thurston explore how social media campaigns, online propaganda, and data mining are all racing to impact the way we vote.

