Yet few have changed their online behaviour, boycotted snooping tech firms or exercised what few digital rights they possess. Partly this is because managing your own data is time-consuming and complex, even for those who understand how to do it. But it is also because of a misunderstanding of what is at stake. “Data” is an abstract concept, technical and intangible. Far more solid is the idea of identity. Only when “data” is understood to mean “people” will individuals demand accountability from those who seek to know them.
Such accountability stretches far beyond an obligation to secure someone’s credit-card details. In the information age, data are used to decide what sort of access people have to services. Uber ratings determine who gets a taxi; Airbnb reviews decide what sort of property you can stay in; dating-app algorithms choose your potential life partners. Firms use location data and payment history to sell you products. Your online searches may establish the price you pay for things. Those with a good Zhima credit score, administered by an Alibaba subsidiary, enjoy discounts and waived deposits. Those without receive few offers.
When they are used by states, such techniques pose a still greater threat. Algorithms that recognise patterns in data can pinpoint dissidents or even those who merely hold unconventional opinions. In 2012 Facebook experimented with using data to manipulate the emotions of its users. In 2016 Russia used data to try to influence the American presidential election. The question is not whether someone is doing something wrong. It is whether others can do wrong to them.
Data, the fossilised traces of past actions, fuel future economic and social outcomes. Privacy rules, data-protection regulation and new laws governing the use of algorithms are crucial to protecting the rights of individuals. But the first step towards ensuring fairness in the new information age is to understand that it is not data that are valuable. It is you.