What we’ve learned about digital privacy through a 1,500+ person survey
April 28, 2020
I grew up on the internet, back in the days when no one could even imagine using real names on group chats like IRC. Any real-life personal detail given was only given in private and only if you felt it was absolutely necessary.
Things have obviously changed. Most of us use our real names on social media, and we freely sign up for services we know are using what they learn about us to better target their advertising (or worse).
Digital privacy literacy (knowing what digital privacy is and how to protect ourselves on the internet) isn’t binary. We can’t be absolutely private and have a presence online. Any time we do anything, type anything, share anything, we’re opening ourselves up to a potential invasion of privacy. So we make choices. We decide what’s ok to be public and what isn’t. We decide what services we want to use and which we feel take too much from us.
I personally do more than most to stay protected and private online (VPNs, anonymized email, different phone numbers for different services, two-factor authentication for everything, installing almost nothing on my mobile devices, using a password manager, a penchant for avoiding big tech, etc.), but I also feel like I barely scratch the surface. I co-run a privacy-focused web analytics company (Fathom, the website you’re on right now) and spend too much time thinking about GDPR, CCPA, PECR, etc. I both know more than most and am still learning about digital privacy at the same time.
When I ran a “State of digital privacy survey” in Spring 2019, I had some idea of what people would say about digital privacy, but I didn’t know for sure. 1,515 people participated and shared their fears and behaviours around digital privacy too (which is pretty awesome). I wanted to learn more about how other people viewed privacy, how it impacted their online behaviour and then what could be done to improve our ability to keep private things private.
Qualitative data analysis isn’t my forte, so I hired Current Forward who are research/qualitative analysis experts (and rad humans). Their help was instrumental in compiling the analysis below.
What we learned from the online privacy trends survey
Everyone surveyed knew their data is being used, shared, and monetized without permission, but there’s a disconnect between understanding that and taking steps to protect oneself. We identified three reasons driving that intent/action gap:
- Protecting yourself online is difficult: it requires too many steps, or it requires paying for additional services. Many consumers still haven’t absorbed the lesson that if they aren’t paying for software, their data is being monetized by that software.
- The impact of privacy breaches isn’t felt frequently or severely enough to change behaviour. We can know our passwords have been saved in a plaintext file by a service we use, but unless our accounts have actually been abused, it’s not concerning enough to stop using the services we’ve grown accustomed to.
- There’s a lack of transparency about what happens when privacy has been breached, so we never really know the full extent of problems because they’re hidden behind either lawyer-speak or PR jargon.
Social media and advertising platforms are seen as the primary abusers of our personal data. If the survey data is any indication of a greater trend, people are starting to leave social platforms in significant numbers and working to hide their data from advertisers.
- When asked how they control their personal online data, respondents said they were either leaving platforms (like Facebook) or sharply limiting what they share with them.
- On the flip-side, people still feel that social platforms are too valuable to quit as that’s where co-workers, friends and family communicate.
- Most people assume that any website they visit is collecting personal data about them. (Note: this is why we created Fathom, so a website owner can choose to protect their visitors’ privacy.)
What we still don’t know
One of the underlying themes in respondents’ answers was a fear of where a lack of digital privacy leads us long-term. We've looked into more online privacy statistics here.
Netflix has a documentary called The Great Hack, which dives into what happens when companies do bad things with our data (but then, Netflix is probably collecting a ton of data about each of its users, ha). The fact that big corporations collect our information and use it against us (for things like personalized advertising, or worse) is becoming mainstream knowledge. These companies use our data to serve targeted ads and build more effective products, and then data theft often happens as a side effect (example: breaches… if the data was never collected in the first place, it could never be stolen).
The looming and more grave threat is how and when the data will be used to predict an individual’s creditworthiness, guilt/innocence, societal worth, etc. A CCTV camera could catch us jaywalking, facially recognize us, and lower our credit score. This feels like a dystopian novel, but it is closer to reality than ever (in China and elsewhere, systems like this are already in place and active).
Respondents felt unable to trust large institutions or tech companies. Even the US government had photos of traveller vehicles (and their licence plates) at border crossings compromised in a data breach. No company, no government, no data is immune to a breach—if data is saved somewhere, it has the potential to be stolen in the future. The other, probably larger factor at play is that we don’t know where all of our personal data is going. Those who collect it don’t actually share with the public how it’s used, who it’s sold to, or what the end result of having it is. Even though it’s our data.
This leaves a lot of unknowns in the minds of the survey respondents, specifically:
- Digital privacy is complex. The average person, regardless of tech savviness, has no idea what companies are actually doing with the data they collect. We get glimpses when they are caught or exposed, but that just scratches the surface. And the number of breaches, reports, laws, etc., talked about every day adds to the complexity.
- The track record of large companies (ex: Facebook, Equifax, eBay, Marriott) isn’t great. They’ve all had breaches that have exposed everything from email addresses to home addresses to social security numbers.
- We should own our data as digital citizens, but we don’t. Simply being online means we give up almost all control over what’s collected and shared about us.
Where in the world does this leave us?
The Great Hack validated what we already knew if we were paying attention—we can’t imagine the true extent to which companies are collecting and using our data. Cambridge Analytica, a hugely successful company, essentially folded in an effort to avoid having to share the data it collected on ONE SINGLE PERSON. Let that sit for a moment. They were willing to go out of business instead of sharing the data they collected on a single person.
The skeptic in me feels like this shows how utterly screwed we are as digital users. Even those of us who know we are being used and abused continue to use the services that do it to us. They’re free, they keep us connected to the people who matter, so we’re willing to suspend disbelief just enough to function and not change our habits.
Fighting back the skeptic in me, there are things we can do, which other, smarter people have shared, like this resource. The New York Times also has a fabulous resource I closely follow called The Privacy Project, curated by Charlie Warzel. And of course (shameless plug), the reason Fathom Analytics exists is to be a black hole to big tech.
Our business model isn’t, and won’t be, built on personal data. We make money (and a sustainable income) by charging a fair price for the simple website analytics product we sell—and we only collect what we absolutely require. And it’s not just us doing this—there’s a growing movement of small software companies who respect and hash data (like Fathom, a Google Analytics alternative) and bake anonymity into their products… so even if a breach occurs, there’s no personal information to be had, since it’s been obfuscated.
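To make the hashing idea concrete, here’s a minimal sketch of salted one-way hashing of a visitor identifier. This is a general illustration of the technique, not Fathom’s actual implementation; the function name and salt-handling are my own assumptions.

```python
import hashlib
import secrets

def anonymize(value: str, salt: str) -> str:
    """One-way hash of a personal identifier (e.g. an IP address).

    Only the digest is ever stored; the raw value is discarded,
    so a breach of stored data exposes no usable personal info.
    """
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()

# A random salt (which could be rotated periodically) means digests
# can't be matched across salt rotations or precomputed in advance.
salt = secrets.token_hex(16)

# Hash a visitor's IP address before storing anything about the visit.
digest = anonymize("203.0.113.42", salt)
```

Because the hash is one-way and salted, the stored digest can group repeat visits without anyone being able to recover the original IP address from it.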
Finally, if someone wants to take my findings to further their own interests or work in this space, please learn what you can from the results of my survey. Names/personal details have obviously been scrubbed.