Katja Bego is the principal researcher and data scientist at Nesta, a UK-based innovation foundation, and leads its work on the European Commission’s Next Generation Internet initiative. She’s also part of the BBC Women Expert programme, supporting emerging female voices in STEM fields. Here, she talks about the vision for a more democratic, inclusive internet, whether the public are being let down by governments meant to protect us from profit-making corporations, and why we need to make a stand now.
Why does online privacy matter?
Online privacy can be a lot of things. It’s being able to use the internet in the way that was intended, without our interactions being used against us in some way, or being manipulated in a way that affects our behaviour. You might know that if you use social media, it’s collecting your data. But behind that there’s a whole infrastructure of companies you’ve never heard of, and even experts find it hard to engage with anymore. I’m also concerned about the scale. A handful of very large companies have access to enormous data lakes containing information on billions of people. Even if a company doesn’t currently intend to misuse that data, the political and economic power that comes with access to so much of it is very worrying.
People might think, ‘oh I have nothing to hide, this can’t harm me’. But if you think about the A Level results debate from a couple of weeks ago, and the harm that algorithmic decision making systems can do, you can begin to understand what the impact can be. Imagine all of these systems being used more and more in the background, without us being aware of it at all.
Can you tell me about your work with Nesta and the Next Generation Internet initiative?
I work in Nesta’s technology futures team, where we look at the impact of emerging technologies on society – both how new solutions and innovations can be used for good, and how technologies like artificial intelligence can have negative impacts. I lead our work on the Next Generation Internet project, a European Commission-funded programme with a vision to build a more human-centric, democratic, inclusive internet by 2030. Our role is to help the Commission develop a strategy and policy agenda, and to identify opportunities (on both the policy and technology sides) that might help achieve that goal.
A more human-centric, democratic, inclusive internet sounds great! What does that look like in practice, and how do we get there?
A more democratic internet is one where more of us, as citizens, can benefit from it – where small businesses can meaningfully compete in the digital economy and get access to data in an ethical way, and where we have more agency over what happens to our personal data and understand what’s being shown to us and why. We want an internet that’s not dominated by a handful of large companies – one where interesting innovation can be used to benefit society, rather than just to maximise profits for a small group of very powerful actors.
How we get there is a difficult question. It definitely requires a lot of proactive policymaking and intervention – at the European, national and city levels. But we’re also thinking about how we can move this discussion beyond regulation. Are there different ways we can share data, for example, that avoid these huge centralised data lakes held by companies? I think democratising access to data is very important.
How aware do you think the British public are about this need to take data privacy seriously?
Awareness has definitely been rising in recent years – an obvious anchor point is the Cambridge Analytica revelations. I think the interest has recently rekindled during the coronavirus pandemic as well, and we’ve had all of these public debates about things like contact tracing, and other privacy debates. But really, the big problem is that, while people might be aware of these issues, the alternatives aren’t really there. It’s not very reasonable to ask an average user to move over to systems that do prioritise privacy but are often, unfortunately, quite hard to use. We ought to be thinking a lot more about how we can enable people to just be on the internet, anonymously. Or even if not anonymously, at least to have some agency or understanding about what’s happening with their data.
With Covid-19, on the one hand, we’ve seen a lot of governments quickly deploying very invasive systems, and there’s generally been public acceptance of that. At least at the start, the public mood was that we’d rather fight Covid, save lives, restart the economy and so on, with the privacy consequences set aside. It’s only when things start to settle that we’ll really see the nature of some of these surveillance tools. But on the other hand, I would not have expected at the start of 2020 that we would have such a public debate about this. I hope we’ll be able to make these debates more mainstream when it comes to government technology and data use.
Are we being let down by the powers that are meant to protect us against data manipulation, particularly by commercial companies?
I don’t think it’s an easy task. First of all, because of the pace of development and the complexity of a lot of the systems, it’s very hard to design good regulation for this. By the time you’ve thought of all the steps, a lot of the context has already changed.
But on top of that, you do see an enforcement problem. The General Data Protection Regulation (GDPR) has the provisions and protections in place that could address a lot of the issues we see today, but we’re still figuring out how enforcement works and who’s responsible for it (and in what way). Despite that complexity, though, we are seeing a lot more attention on this than in previous years.
Finally, for people who understand that their data is being collected and don’t see a problem with that, why should they care about this?
It’s two things – it’s what’s already happening, which is probably a lot scarier than what people think, but it’s also the long tail of what could happen in the future. It’s almost impossible to challenge those systems because they’re not transparent. So we might think that we’re being shown an article or product just randomly, but if that is suddenly happening thousands of times a day, every time you use the internet, influencing your shopping behaviour … those systems start to take our autonomy away. By allowing ourselves to be tracked like this, we do create the infrastructure that would allow this to happen in many different ways. So, even if you think you might not have anything to hide today, who knows what social and political norms will look like in the future? If we don’t put the right regulations in place now, we may open ourselves up to the increasing risk of an individual or an organisation taking advantage.
This article originally appeared on The Privacy Collective.