Post

We’ve lost ownership over ourselves through the growing role of technology; how do we regain it?

Artists, academics, writers, and civil society activists are ringing the alarm bell over our lost powers in the age of the internet. Yet there is reason for optimism. New technologies have given us a wealth of opportunities. They have also made our world smaller. They have allowed us to spend more time with our families […]

Artists, academics, writers, and civil society activists are ringing the alarm bell over our lost powers in the age of the internet. Yet there is reason for optimism.

New technologies have given us a wealth of opportunities. They have also made our world smaller. They have allowed us to spend more time with our families on and offline. But these same technologies have also brought about a new, darker reality. One that most of us are not aware of.

“Human existence is threatened by quantification through AI technologies, but we are also definitely living in the best moment in history for 21st-century human existence,” Daniel Leufer, a Mozilla Fellow with the digital rights organisation Access Now, told us.

It was a common thread running through all the conversations we had with artists, academics, activists and fiction writers. The internet and technology have brought us major opportunities to connect, to work, to express ourselves. But as the technology developed and people got used to it, we lost sight of what we gave up, and of what we may yet lose along the way.

Jennifer Morone, an artist and digital rights activist, went through a period of personal insecurity, worrying about the future of work, when she started digging into data around 2013: how it is collected, and why.

“I saw that all these big companies, Google, Facebook, data is valuable to them. And with the economic insecurity going on, we are all contributing to that value,” she told me. Now she is CEO of the RadicalxChange Foundation, a digital rights organisation working to democratise our new realities. “We need data unions. We need to be able to bargain through collective bargaining, to be able to say what data should be.”

Regaining control over the data we create is an issue raised by many of the people we talked with.

Owning your property is embedded deeply in European heritage, going back to feudal times. It is a universal human right, enshrined in the Universal Declaration of Human Rights and in the European Convention on Human Rights as the “right to peaceful enjoyment of possessions.” Why, then, is our data not part of this? Why do our devices not seem to offer the same protection digitally?

Nicole Immorlica — who researches the intersection of economics and computer science — believes that regaining ownership of personal data could give people a uniquely modern opportunity for financial growth as well. “Where does that data come from? That data comes from an entire ocean of humans that are generating the data,” Immorlica told me. “These humans ought to be compensated for the job.”

But it’s not only factual ownership that we discussed as a means to regain control over our lives in this digital age. Kristina Irion, an assistant professor at the University of Amsterdam’s Institute for Information Law, has been focussing on data protection from a legal perspective for over a decade.

Irion argues that people should be at the centre of decisions about what technology should and shouldn’t do. We shouldn’t view large tech companies as purely private companies anymore, given their relevance to society. Nor should it be up to people themselves to protect their own rights; that is something a government should be responsible for.

“In this race of technologies between those who control the technologies and those who use it, we should bring the users again on par with those who control the technologies,” Irion says. “Why do we let everybody into our devices? This is personal space. It should be like we have the sanctuary of our homes.”

It’s a pressing point Irion raises: we do own the devices we buy, yet companies have questionable access to those devices and to how we use them. Maybe we should also ask ourselves why we are not allowed to use them in the way that fits us best.

Why can’t we get rid of apps we don’t want? Why can’t we alter apps to fit our needs? These are questions Cory Doctorow has been focussing on. He believes that we are not able to take full advantage of what the technologies have to offer us, as we have been allowing tech monopolies to form. We do not have technological “self-determination”.

“You got these concentrated sectors that can collude to spend their monopoly rents, to buy policies that are favourable to their continued existence,” Doctorow explains. “It should never be an offense to modify a product or service in order to repair it, to audit its security, or make it more secure, to add accessibility features, to support people with disabilities.” 

We are not allowing people to use technologies in the ways that fit their needs best, but we are letting tech companies do what they want: from influencing our policies to tracking workers’ productivity and emotions. We allow governments to use facial recognition. We have not been successful in ensuring labour rights for people working through app-based platforms.

“Technology that could be used to liberate people to give them more flexibility and autonomy is actually used in the opposite way. And it is counterproductive,” Valerio De Stefano, a professor in labour law at the University of Leuven, explains to me. De Stefano argues that labour unions, including those representing platform workers, should have a say over what kind of systems will manage them.

De Stefano cautions that some uses of technology should be banned outright, especially those that aim to predict people’s future behaviour.

Daniel Leufer agrees with this sentiment. Leufer is “strongly” pushing back against the idea that the human essence could be quantified. “If you’re constantly worrying that every single thing you’re doing is being tracked and evaluated — fed into a profile or a model of your behaviour, which is accessible to job advertisers, insurance companies, and the government — that’s going to significantly influence our behaviours.” In other words, we need to start allowing people to have more self-determination and diversity.

The people we spoke to came from different fields and backgrounds, and they work on different parts of the internet and other technological developments. But all came to the same conclusion: we must put people first to steer away from an otherwise disastrous future.

We need self-determination, ownership over what we create, and freedom in how we behave.

Post

It’s time to fight for our right to repair

Can we make our devices last longer for the environment?

In late November, anticipating new proposals from the European Commission to strengthen the circular economy, Members of the European Parliament voted in favour of adopting an initiative report that calls for consumers to be given the right to repair. This would allow people across Europe to fix their own smartphones and laptops and see information about the repairability of devices on the box. The motion also favoured placing an outright ban on practices that artificially shorten the lifetime of devices.

If put into legislation next year, these proposals could have a huge impact on the internet’s growing resource footprint – and send a much-needed signal across the globe. EU citizens replace their smartphones on average every two years and a typical smartphone generates 60 to 80kg of CO2-equivalent emissions in its full lifetime. With 200 million smartphones purchased annually in the EU, this creates 12-16 million tonnes of CO2-equivalent emissions each year, which is more than the carbon budget of Latvia in 2017. Increasing the lifetime of a smartphone to three or four years would reduce emissions by 29 and 44 per cent respectively.
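To put these figures in context, here is a back-of-the-envelope check of the arithmetic. It is only a rough sketch in Python using the per-device and sales figures quoted above; the 29 and 44 per cent reductions come from fuller lifecycle modelling that a naive calculation like this does not capture.

```python
# Rough arithmetic check of the figures quoted above (not an official methodology).
phones_per_year = 200_000_000      # smartphones bought annually in the EU
kg_co2e_per_phone = (60, 80)       # lifetime emissions per device, in kg CO2e

low, high = (phones_per_year * kg / 1_000_000_000 for kg in kg_co2e_per_phone)
print(f"Annual embodied emissions: {low:.0f}-{high:.0f} million tonnes CO2e")
# -> 12-16 million tonnes CO2e per year

# If a phone lasts 3 or 4 years instead of 2, replacement purchases fall,
# so the embodied emissions are spread over more years of use.
for lifetime_years in (3, 4):
    naive_reduction = 1 - 2 / lifetime_years
    print(f"{lifetime_years}-year lifetime: ~{naive_reduction:.0%} fewer replacement emissions (naive estimate)")
```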

The environmental impact of our digital lives has been an area of particular interest for NGI Forward. In recent years, some sectors of the tech industry have been quicker than others in adopting stringent environmental goals. We’ve seen significant voluntary commitments from digital and cloud service providers such as Microsoft, Facebook and Google to reduce their carbon footprint, many of which target the energy consumption of large-scale data operations. But when it comes to extending the lifetime of connected devices themselves and reducing the environmental impact of hardware, it’s a different story.

Whether it’s due to consumer behaviour, market structure or a lack of economic incentives, many device manufacturers seem less willing to abandon a range of practices that are both ecologically unsustainable and harmful to the consumer. Smartphones and tablets in particular are notoriously difficult to repair. They rely on designs that break easily and often stop receiving software updates long before their hardware stops functioning. The European Parliament’s report even implies that device makers have been making deliberate design choices that shorten the lifetime of a device and incentivise shorter replacement cycles.

In some ways, we’re at a watershed moment in technological development. With the advent of the Internet of Things, 5G, and the rapid expansion of internet connectivity across the developing world, the number of connected devices on this planet is currently estimated at around 5.8 billion and set to grow exponentially in the coming years. If we want to create a truly circular economy for digital devices, the first step we should take is extending the lifetime of our devices and making repair economically viable again. 

This is one of the areas where a bold regulatory or policy intervention can make an important difference for the future sustainability of the internet. Uniquely, it’s also a matter where Europe’s ambitions for consumer fairness, economic justice and the fostering of new innovation can converge. Beyond just helping to reduce the environmental footprint of our smartphones, a push to extend the lifetimes of our devices will create a fairer and more decentralised repair economy, lower the financial barriers to digital inclusion and stimulate innovation in sustainable device design as well as green manufacturing – areas where Europe can develop a real competitive edge.

That’s why Nesta recently joined the European Right to Repair campaign. We hope that industry, Member States and the European Commission will take heed of the example set by MEPs and work together to give both consumers and the environment a better deal.

Along with our partners, we are calling for them to:

  • implement a comprehensive right to repair,
  • ensure repair manuals and electronic schematics are available to repairers and end users without restriction, and
  • ensure that the expected lifetime of an otherwise functional device is not unnecessarily cut short by a lack of software updates.

When it comes to creating a greener and fairer European economy, there will be much tougher decisions ahead. This should be an easy one.

If you are interested in learning more about the campaign for the right to repair, visit the Repair.EU website. You can find our recent report about the environmental impact of the internet here.

Post

Workshop report: Ethnography and the internet – collective intelligence insights for digital policy

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by Edgeryders.

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by Edgeryders, written by Amelia Hassoun, Leonie Schulte and Kate J. Sim.

The EdgeRyders NGI Ethnography Team is dedicated to responding to three core, intersecting questions: when people talk about the future of the internet, what are their key concerns and desires? What issues do they face, and what solutions do they imagine? And how can we, as ethnographers, visualise and analyse these topics in a way that meaningfully contributes to ongoing debates and policy-making?

At the 2020 NGI Policy Summit, we demonstrated how we responded to these questions through digital ethnographic methods and qualitative coding. 

As digital ethnographers, we participate in and observe online interaction on EdgeRyders’ open source community platform, engaging with community members as they share their insights on issues from artificial intelligence, to environmental tech, to technology solutions for the COVID-19 pandemic.

We also code all posted content through Open Ethnographer, an open source coding tool that enables us to perform inductive, interpretive analysis of written online content. In practice, this means that we apply tags to portions of digital text produced by community members in a way that captures the semantic content of online interactional moments. This coding process yields a broader Social Semantic Network (SSN) of codes that allows us to gain a large-scale view of emerging salient topics, while being able to zoom in to actionable subsets of conversation.

We visualise and navigate this ethnographic data using the data visualisation tool, Graph Ryder, which adds a quantitative layer atop our qualitative approach to data collection. Graph Ryder gives us a visualisation of all generated codes, allowing us to trace co-occurrences between codes and see code clusters, which show us what concepts community members are associating with each other. This approach shows us who is talking to each other, and about what topics, across the entire platform. During our workshop, we invited attendees to explore Graph Ryder with us. Here are some examples we used: 

We find interesting co-occurrences when filtering the graph to k=4. For those unfamiliar with our tool, Graph Ryder, this means that we are looking at connections between codes that have been mentioned together by community members at least four times. 

At this level, we can see some broad themes emerging as we look at the graph as a whole. We can then zoom in on a code like “privacy”, an extremely central node in the conversation among community members. This code is linked to other codes like “personal data”, “trade-offs”, “cost”, “surveillance” and “decision-making”. These connections, in turn, create an illustrative network of privacy concerns articulated by the community: around smart cities and human rights, covid-19 and contact tracing, trade-offs and decision-making. A salient theme is the question of how to weigh up privacy trade-offs, in order to make optimal decisions about one’s own data privacy. What does it cost? There is uncertainty around how extensive surveillance is, and a distrust of the information that one is given about these technologies, which makes it difficult for community members to make well-informed decisions about these issues.
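For readers curious what this kind of filtering looks like in practice, the sketch below builds a small code co-occurrence network and applies the same k-threshold. It is purely illustrative: the example data is made up, it uses the general-purpose networkx library, and it is not the actual Open Ethnographer or Graph Ryder code.

```python
# Illustrative co-occurrence network of ethnographic codes with a k-threshold,
# in the spirit of the Graph Ryder view described above (not the real tool).
from itertools import combinations
from collections import Counter

import networkx as nx

# Hypothetical example: each post has been tagged with a set of codes.
coded_posts = [
    {"privacy", "personal data", "surveillance"},
    {"privacy", "trade-offs", "decision-making"},
    {"privacy", "personal data", "cost"},
    {"privacy", "surveillance", "contact tracing"},
    {"privacy", "personal data", "trade-offs"},
    {"privacy", "personal data", "decision-making"},
    {"smart cities", "human rights", "surveillance"},
]

# Count how often each pair of codes is applied to the same post.
pair_counts = Counter()
for codes in coded_posts:
    for a, b in combinations(sorted(codes), 2):
        pair_counts[(a, b)] += 1

# Keep only edges between codes mentioned together at least k times (k=4 above).
k = 4
graph = nx.Graph()
graph.add_edges_from(
    (a, b, {"weight": count}) for (a, b), count in pair_counts.items() if count >= k
)

# Zoom in on a central code, e.g. "privacy", and list its remaining neighbours.
if "privacy" in graph:
    print(sorted(graph.neighbors("privacy")))
```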

Social Semantic Network Analysis combines qualitative research at scale with participatory design: we can thus dynamically address what people know, what they are trying to do and what they need. It also affords us a great deal of foresight, meaning we can look towards the future and identify what might be brewing on the horizon.

So, how does this method allow us to inform policy? The digital ethnographic approach means we are continuously engaging with a broad range of individuals and communities across Europe: from activists to tech practitioners and academics, among many others. This gives us unique access to viewpoints and experiences that we can, in turn, explore in greater detail. This approach combines the richness of everyday-life detail gained through ethnographic research with the “big picture” vantage point of network science. Our inductive approach to community interaction means we remain open to novelty, allowing us to address problems as they emerge, without having to define what those problems are from the outset.

Post

Workshop report: Privacy and trust: trends in experiments in EU-US research and innovation

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by NGI Atlantic.

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by NGI Atlantic, written by Sara Pittonet Gaiarin and Jim Clarke.

Bridging EU-US research on Next Generation Internet: about NGIatlantic.eu

NGIatlantic.eu is one of the growing family of Research and Innovation (RIA) projects under the Next Generation Internet (NGI) initiative, whose collective goal is to build a human-centric internet based on the values held dear by European citizens, such as privacy, trust, fairness and inclusiveness. At the same time, the NGI initiative is designed to include significant internationalisation activities, including collaborations between EU and United States NGI-related activities, in order to generate major impacts at both a pan-European and a transatlantic level. Against this backdrop, from January 2020 through June 2022, NGIatlantic.eu will fund third-party EU-based researchers and innovators to carry out NGI-related experiments in collaboration with US research teams, through regular open calls. In its first open call, which ran between 1st April and 29th May 2020, six projects were selected for funding, in areas primarily related to EU–US collaboration on privacy- and trust-enhancing technologies and decentralised data governance, leveraging AI, blockchain, 5G, big data and IoT technologies.

Trends in experiments in EU-US research and innovation

Organising a session during the NGI Policy Summit 2020 was an ideal opportunity to provide policymakers with an overview of the major trends and trajectories in EU–US research collaboration, and context for present and future NGI-related policy developments. Three of the selected NGIatlantic.eu projects were given the opportunity to pitch their experiments, present their initial results and explain how these would contribute to ongoing policy dialogues in the EU and US.

Decentralized data governance

George C. Polyzos, Director of the Mobile Multimedia Laboratory at the Athens University of Economics and Business, presented the “Self-Certifying Names for Named Data Networking” project, whose solution applies Decentralized Identifiers (DIDs), an emerging form of self-sovereign identification under standardisation by the W3C, to Named Data Networking (NDN). He was followed by Berat Senel, Research Engineer at PlanetLab Europe, EdgeNet, Laboratoire d’Informatique de Paris 6 (LIP6), who introduced the CacheCash experiment, which builds on Content Delivery Network (CDN) technology to provide a service in which interested users run caches and are incentivised to participate by receiving a cryptocurrency (Cachecoin) in exchange for serving content to other users.

Privacy and Trust Enabling Data Marketplace for Sustainable Supply Chains 

Moving to the topics of privacy and trust, Tomaz Levak, Managing Director at Trace Labs Ltd., introduced the “Food Data Marketplace” (FDM) project, which is fostering new economic models for sustainable food supply chains based on data, employing a privacy-by-design approach to enable farmers and cooperatives to regain control of their data, give it a price tag, and sell it to interested partners in the supply chain. Last but not least, the NGIatlantic.eu project also took the opportunity to showcase the Twinning Lab, an online space for researchers, innovators and start-ups to establish complementary partnerships with transatlantic actors to address NGI challenges, and to present its future activities and opportunities for the NGI communities.
The project also highlighted its 3rd Open Call, which will open on 1st December 2020.

Post

Workshop report: (Dis)connected future – an immersive simulation

As part of the Summit, Nesta Italia and Impactscool hosted a futures workshop exploring the key design choices for the future internet.

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by Nesta Italia and Impactscool, written by Giacomo Mariotti and Cristina Pozzi.

The NGI Policy Summit was a great opportunity for policymakers, innovators and researchers to come together to start laying out a European vision for the future internet and elaborate the policy interventions and technical solutions that can help get us there.

As part of the Summit, Nesta Italia and Impactscool hosted a futures workshop exploring the key design choices for the future internet. It was a participative and thought-provoking session. Here we take a look at how it went.

Our aims

The discussion about the internet of the future is very complex, and it touches on many of the challenges our societies are facing today: topics like data sovereignty, safety, privacy, sustainability and fairness, to name just a few, as well as the implications of new technologies such as AI and blockchain, and areas of concern around them, such as ethics and accessibility.

In order to define and build the next generation internet, we need to make a series of design choices guided by the European values we want our internet to radiate. However, moving from principles to implementation is really hard. In fact, we face the added complexity coming from the interaction between all these areas and the trade-offs that design choices force us to make.

Our workshop’s goal was to bring to life some of the difficult decisions and trade-offs we need to consider when we design the internet of the future, in order to help us reflect on the implications and interaction of the choices we make today.

How we did it

The workshop was an immersive simulation about the future, in which we asked the participants to make some key choices about the design of the future internet and then dived deep into the possible future scenarios emerging from these choices.

The idea is that it is impossible to know exactly what the future holds, but we can explore different models and be open to many different possibilities, which can help us navigate the future and make more responsible and robust choices today.

In practice, we presented the participants with the following 4 challenges in the form of binary dilemmas and asked them to vote for their preferred choice with a poll:

  1. Data privacy: protection of personal data vs data sharing for the greater good
  2. Algorithms: efficiency vs ethics
  3. Systems: centralisation vs decentralisation
  4. Information: content moderation vs absolute freedom

For each of the 16 combinations of binary choices (two options across four dilemmas) we prepared a short description of a possible future scenario, which considered the interactions between the four design areas and aimed to encourage reflection and discussion.

Based on the majority votes we then presented the corresponding future scenario and discussed it with the participants, highlighting the interactions between the choices and exploring how things might have panned out had we chosen a different path.
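As a rough sketch of these mechanics, the snippet below enumerates the 2⁴ = 16 combinations and selects a scenario from majority votes. The variable names and placeholder scenario texts are hypothetical; the vote shares are the ones reported in the results section below.

```python
# Sketch of the scenario-selection mechanics: four binary dilemmas yield
# 2**4 = 16 combinations, each mapped to a prepared scenario description.
from itertools import product

dilemmas = {
    "data_privacy": ("protection of personal data", "data sharing for the greater good"),
    "algorithms": ("efficiency", "ethics"),
    "systems": ("centralisation", "decentralisation"),
    "information": ("content moderation", "absolute freedom"),
}

# One prepared scenario per combination of choices (16 in total); placeholder text here.
scenarios = {combo: f"Scenario for: {', '.join(combo)}" for combo in product(*dilemmas.values())}
assert len(scenarios) == 16

# Share of participants voting for the *first* option of each dilemma (workshop results).
votes_for_first_option = {"data_privacy": 84, "algorithms": 41, "systems": 12, "information": 41}

# The majority vote picks one option per dilemma, which selects the scenario to present.
chosen = tuple(
    options[0] if votes_for_first_option[name] >= 50 else options[1]
    for name, options in dilemmas.items()
)
print(chosen)            # the combination that led to the "Individual-centric Internet"
print(scenarios[chosen])
```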

What emerged

Individual-centric Internet

Data privacy: protection of personal data (84%) vs data sharing for the greater good (16%)
Algorithms: efficiency (41%) vs ethics (59%)
Systems: centralisation (12%) vs decentralisation (88%)
Information: content moderation (41%) vs absolute freedom (59%)

The table above summarises the choices made by the participants during the workshop, which led to the following scenario.

Individual-centric Internet

Decentralized and distributed points of access to the internet make it easier for individuals to manage their data and the information they are willing to share online. 

Everything that is shared is protected and can be used only following strict ethical principles. People can communicate without relying on big companies that collect data for profit. Information is totally free and everyone can share anything online with no filters.

Not so one-sided

Interesting perspectives emerged when we asked for contrarian opinions on the more one-sided questions, which demonstrated how middle-ground, context-aware solutions are required in most cases when dealing with complex topics such as those analysed.

We discussed how certain non-privacy-sensitive data can genuinely contribute to the benefit of society, with minimal concern for the individual if the data are shared in anonymised form. Two examples that emerged from the discussion were transport management and research.

Turning to the (de)centralisation debate, we discussed how decentralisation could result in a diffusion of responsibility and a lack of accountability: “If everyone’s responsible, nobody is responsible.” We mentioned how this risk could be mitigated by tools like Public-Private-People collaboration and data cooperatives, combined with clear institutional responsibility.

Post

Workshop report: What your face reveals – the story of HowNormalAmI.eu

At the Next Generation Internet Summit, Dutch media artist Tijmen Schep revealed his latest work - an online interactive documentary called 'How Normal Am I?'.

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by Tijmen Schep.

At the Next Generation Internet Summit, Dutch media artist Tijmen Schep revealed his latest work – an online interactive documentary called ‘How Normal Am I?’. It explains how face recognition technology is increasingly used in the world around us, for example when the dating app Tinder gives all its users a beauty score in order to match people who are about equally attractive. Besides just telling us about it, the project also allows people to experience this for themselves. Through your webcam, you will be judged on your beauty, age, gender, body mass index (BMI) and your facial expressions. You’ll even be given a life expectancy score, so you’ll know how long you have left to live.

The project has sparked the imagination – and perhaps a little feeling of dread – in many people; not even two weeks later, the documentary had been ‘watched’ over 100,000 times.

At the Summit, Tijmen offered a unique insight into the ‘making of’ of this project. In his presentation, he talked about the ethical conundrums of building a BMI prediction algorithm that is based on photos from arrest records, and that uses science that has been debunked. The presentation generated a lot of questions and was positively received by those who visited the summit.

Post

Two days to change the internet: the NGI Policy Summit 2020

The Next Generation Internet Policy Summit has gone off with a bang. Find out how it went here.

The Next Generation Internet Policy Summit has gone off with a bang. Organised by Nesta and the City of Amsterdam this September, the Summit brought together participants from all over Europe and beyond to shape a vision for the future internet, moving the conversation on from the diagnosis of past and present challenges to the exploration of practical, concrete solutions. Here are some of the highlights.

100 speakers

33 sessions

650 attendees

45 countries represented

Originally scheduled for the end of June 2020 in Amsterdam, the Summit was rescheduled and reformulated for online participation in response to the COVID-19 pandemic. On Monday 28th September, the Summit began with a morning of Plenary sessions curated by Nesta and the City of Amsterdam. We made Monday afternoon and Tuesday morning available for our Policy-in-Practice workshop sessions, and then finished on Tuesday afternoon with a further series of plenary sessions to close the Summit.

Together with policymakers, researchers, and representatives from civil society, we looked at some of the most promising policy interventions and technological solutions, forging a path that cities, Member States and the European Union could follow.

A tangible vision and the steps to get there

We stirred the imaginations of our attendees by launching our new working paper to coincide with the Summit: A vision for the future internet, which is packed full of analysis and ideas for how to create a better internet by 2030. With such a broad range of people and issues involved in shaping the internet, it is clear that a coherent vision is required to tie it all together. We want to hear from you with feedback on the paper.

We were also honoured to host a keynote from the European Commission as they set out their post-COVID-19 recovery agenda. Pearse O’Donohue, Director of Future Networks at the Commission, outlined the way that technology and environmentalism must come together in a ‘twin transition’. He described the broader impact of the Next Generation Internet Initiative, and how the Commission’s funding and research are contributing to the creation of an Internet of Humans. Pearse has also written a blog to capture his message.

A transparent approach to AI

The City of Amsterdam’s deputy mayor, Touria Meliani, also launched the world’s first AI registry, co-designed by Amsterdam and the City of Helsinki. This AI registry provides citizens with a powerful tool to understand how algorithms are being used by their local governments to make decisions, putting principles of fairness and accountability into practice. We heard that transparency is a huge issue for the way artificial intelligence and algorithms are being used to make decisions about our data. Deputy Mayor Meliani said: ‘When we say as a city: algorithms are useful for our city, it’s also our responsibility to make sure that people know how they work. People deserve to know how they work. It’s a human right.’

A wide range of topics was discussed during the two days of the summit – not surprisingly, given the broad and interconnected nature of the challenges and opportunities driving the internet’s development today. To make our vision a reality, Europe must mobilise its full ecosystem, with interventions necessary on the local, national and supranational level. We were therefore honoured to feature leading policymakers across all layers of governance, including four MEPs, high-level representatives from the European Commission, a former president, a digital minister, and CTOs of leading digital cities from across the world. Below, we summarise just some of the many insights that emerged during the event. 

Taking control of our data

Clockwise from top-left: Lucy Hedges, Frederike Kaltheuner, Tricia Wang and Charlton McIlwain in our session on Solutions vs. Solutionism.

Today, few would question that the centralisation and hoarding of data – and power – in fewer and fewer hands, gives platforms considerable agency to shape our views on the world, social interactions and economic choices. Tricia Wang challenged attendees to think beyond privacy and consider the impact that widespread data collection has on our personhood, our ability to determine our own life decisions and outcomes. ‘Corporations would rather have us live in the world of privacy, because privacy is something that can be legally mediated and tickboxed,’ she said. ‘At this point, we have so much data tied to who we are, that other people can control our lives through that data, and that threatens our agency, our personhood.’

Yet the role that these systems have in perpetuating societal biases and disproportionately affecting minorities and people of colour is not sufficiently well understood. Charlton McIlwain warned that technology being developed today is just as dangerous for people of colour as the systems used to enact racist policies decades ago. He drew a parallel between the practice of redlining and the risks of social media for people protesting against racism. He explained: ‘We often call on technology to help solve problems but when society defines, frames and represents people of colour as the problem, those solutions often do more harm than good.’

And despite the heavy emphasis on business data in Europe’s current data strategy, our speakers called for innovation to empower citizens to take control of their data. In her talk, Sylvie Delacroix called for the establishment of Data Trusts to redress the growing power imbalance between citizens and big tech. ‘We can do better than consent,’ she explained. ‘We also need bottom-up empowerment structures to help people take the reins of their data, rather than constantly being asked to consent to this or that.’

Extending device lifetime

Clockwise from top-left: Bas van Abel, Asim Hussain, Madeleine Gabriel, Anne Currie and Janet Gunter in our session on creating a sustainable digital future.

Our speakers repeatedly stressed the importance of considering the environmental impact of the technology that powers the internet. We heard about the campaign to make it easier to repair our devices, and the push for industry to reduce its reliance on polluting energy sources, stop dirty mining practices and improve waste management processes. In our session on the environment, Janet Gunter called for Europe to put more pressure on manufacturers to create devices that are repairable, so that they last far longer. She said, ‘The precedent has been set with ecodesign regulation for large appliances, but we need ecodesign for smartphones and computers.’

A global approach to inclusion

The COVID-19 pandemic was a hot topic throughout the Summit because it has brought our ambivalent relationship with technology and existing inequalities in our societies into sharp relief. Although we managed to move the event online, for the 10 per cent of EU households without internet access, participating in online life and maintaining access to education, work and public services throughout the pandemic has been far more difficult. In her keynote, Payal Arora started with a call for action: ‘We need to move beyond the concept of inclusion, which necessarily requires excluding ‘bad’ actors such as spammers and trolls. Instead, we need to consider the interconnectedness of everything, and the unintended consequences of changes we might make.’ She called for Europe to set itself ambitious targets for digital inclusion, by employing collaborative problem-solving and including transcultural perspectives. 

A digital identity for all

In our session on digital identity, our speakers agreed that it is time for Europe to create a comprehensive bloc-wide identity system that allows people to keep control over their own personal data. Former Estonian President Toomas Hendrik Ilves explained that under the eIDAS regulation, 15-20 per cent of EU citizens have used a digital ID scheme in their home country, which is a number low enough to prevent genuine investment from public institutions. Only mandatory digital IDs can create the change seen in Estonia. ‘Europe isn’t being held back by technology to build its own ambitious identity infrastructures,’ he said. ‘It is all about political will.’

Europe’s role

Clockwise from top-left: Axel Voss MEP, Anu Bradford, Bill Thompson, Thomas Zerdick and Mara Balestrini

By and large, speakers expressed a strong desire for Europe to provide alternative models to the perceived tech superpowers in Beijing and Silicon Valley, without emulating their approaches or contributing to the further fragmentation of the internet. In our session on what Europe should do in the next decade, Anu Bradford said:  ‘I think that American techno-libertarianism has shown its limit in terms of how we regulate the internet, and we certainly have concerns if the Chinese digital authoritarianism would spread globally. Europe needs to be more than a regulator, but also build our own alternatives. We need to play defence, but also begin to play offence,’ she argued. 

Europe will have to support these developments with significant investment alongside smart regulation and governance to create an internet that is fit for the future. To make our vision a reality, speakers agreed that Europe must be bold in its approach and mobilise its full ecosystem, with interventions necessary on the local, national and supranational level. 

A manifesto for change

We thoroughly enjoyed the discussions that arose during the Summit, and want them to have a lasting effect. Over the coming months, we’ll be exploring these ideas in our work to guide the European Commission’s policy approach to the future internet. 

Our resounding and heartfelt thanks go to everyone who contributed to the Summit. And as always, if you like what you hear, get in touch with us – we are always interested in hearing from new contacts and collaborating on issues that affect the future internet.

Post

Pearse O’Donohue: reaching an internet of trust

I was thrilled to address this year’s Next Generation Internet Policy Summit on behalf of the NGI Initiative, addressing key issues for the future of the internet and our digital economy and society.

Abridged from a speech given by Pearse at the NGI Policy Summit.

I was thrilled to address this year’s Next Generation Internet Policy Summit on behalf of the NGI Initiative, addressing key issues for the future of the internet and our digital economy and society. It made perfect sense to discuss these issues with the City of Amsterdam, not least because over the centuries, the Netherlands has shown a strong willingness to expand the frontier of human knowledge, from pioneering new trade routes in the 16th and 17th centuries to positioning itself at the forefront of the internet and the start-up scene. 

The NGI Policy Summit brought together vibrant communities of tech-innovators and policy makers from all over Europe. These communities embody the spirit of NGI: a place where different perspectives and competencies, from policy to technology development, from civil society to industry, meet to deliver on the vision of a human-centric and sustainable internet.

The COVID-19 crisis has had enormous impacts on our society, economy and way of life, showing how dependent our society is on digital technologies and infrastructures and, in particular, the central role that the internet now plays in our lives: for remote working, for homeschooling, for remote healthcare and simply for maintaining interpersonal communication. The internet and digital technologies will thus be one of the main pillars on which to build the recovery.

The Commission’s ambitious €750bn recovery plan, the Next Generation EU, clearly focuses on the ‘twin transition’: the green and the digital transitions will support the EU economy as a whole, make it fit for the challenges of the next decades and able to face future crises. The topics that we debated at the Summit – such as connectivity, data, artificial intelligence, a secure European eID and digital inclusion – are among the main priorities for the digital part of the Recovery and Resilience Facility.

As the internet expands and permeates our daily lives, it must become safer, more open, more respectful of individuals, and it must deliver more to citizens and society. The Next Generation Internet initiative, with its community of innovators supported through research and innovation funding, has already started putting in place the building blocks for a more human-centric internet. In the first 18 months of operation, thousands of start-ups, open-source developers, researchers and innovators have already responded to our calls for proposals. We are providing seed funding to more than 500 innovators to develop technologies in key areas like security and privacy-enhancing technologies, decentralised data governance, and self-sovereign identities.

The aim is to achieve ‘An Internet of Trust’: a trustworthy digital environment, built on a more resilient, sustainable, and decentralised internet architecture, to empower end-users with more control over their data and their digital identity, and to enable new social and business models respecting European values.

One of the goals of NGI is to turn the rights bestowed by the General Data Protection Regulation into a reality for EU citizens. We do it by developing new systems and technologies that allow people to control the use of their personal data. At the Summit, we also presented the six winners of the Prize on Blockchains for Social Good, who apply decentralised solutions to address key sustainability challenges. We are also funding concrete tools to increase the privacy of internet end-users – with innovators devising new ways to manage passwords, encrypt emails, secure online collaboration, or improve internet traffic protocols.

Linked to data sovereignty and privacy is the question of building a user-centric digital identity. What we see today is that accounts on large online platforms are increasingly used to prove identity and access online services, including public services. This has several drawbacks: significant privacy concerns and long-term competition risks; a lack of proper identity verification; and lower-quality services for European citizens, as governments and businesses lack digital means of obtaining verified information. The NGI initiative supports the development of platform-independent, standardised eID technologies and services that allow for trustworthy verification of identity, such as proof of age, and are under the full control of the end-users.

As the pandemic spread, so did the number of applications and software tools designed to help us fight it. The conceptual design of these solutions varied greatly, from centralised to decentralised, from open source to proprietary software. NGI launched the Emergency Tech Review Facility, which allowed different solutions to be compared in terms of design, code, effectiveness, usability and implications for the users’ privacy and security.

Our work on data and digital identity shows how much, in the internet era, technology development goes hand in hand with policy and regulatory development. This is why, one year ago, we launched the NGI Policy Lab, which provides a platform for policymakers, civil society actors and innovators to come together and collaborate on key digital issues by sharing learnings, exploring new solutions together and taking action. The NGI Policy Summit is a key part of this dialogue, and we are also running a series of pilot experiments to trial new policy approaches to build a better internet. We will experiment with concrete solutions in local communities, and share the insights across the NGI community.

Our vision for an Internet of Humans is gaining traction. In Europe, we are clear on the internet we want: we want an internet that is trustworthy, that is open, and that contributes to a more sustainable and inclusive society. We are working on implementing this vision: with the right regulatory framework and investment incentives. This must be a joint endeavour, involving the whole internet community: researchers and innovators, civil society, businesses and policy makers together. We will need the collective vision and the engagement of all of you who are present today to deliver on our mission to build a better internet – in line with EU values – in the coming digital decade.

Post

Think you don’t have anything to hide online? Think again, says Katja Bego from Nesta

As part of an international campaign to lift the lid on data privacy violations, The Privacy Collective is asking some of the UK’s leading experts why online privacy matters…

Katja Bego is the principal researcher and data scientist at Nesta, a UK-based innovation foundation, and leads its work on the European Commission’s Next Generation Internet initiative. She’s also part of the BBC Women Expert programme, supporting emerging female voices in STEM fields. Here, she talks about the vision for a more democratic, inclusive internet, whether the public are being let down by governments meant to protect us from profit-making corporations, and why we need to make a stand now.

Why does online privacy matter?

Online privacy can be a lot of things. It’s being able to use the internet in the way that was intended, without our interactions being used against us in some way, or being manipulated in a way that impacts our behaviours. You might know that if you use social media, it’s collecting your data. But there’s a whole infrastructure, companies you’ve never heard of, that exist behind that. And even for the experts, it’s not easy to engage with anymore. I’m also concerned about the scale. There are a handful of very large companies that have access to enormous data lakes with information on billions of people within it. Even if a company doesn’t currently intend to use that badly, it’s the kind of political and economic power that comes with having access to so much data, which is very worrisome.

People might think, ‘oh I have nothing to hide, this can’t harm me’. But if you think about the A Level results debate from a couple of weeks ago, and the harm that algorithmic decision making systems can do, you can begin to understand what the impact can be. Imagine all of these systems being used more and more in the background, without us being aware of it at all.

Can you tell me about your work with Nesta and the Next Generation Internet initiative?

I work in Nesta’s technology futures team. So we look a lot at the impact of emerging technologies on society – both how new solutions and new innovation can be used for good, but also how new technologies like artificial intelligence can have negative impacts. I lead our work on the Next Generation Internet project, which is a European Commission funded programme with a vision to build a more human centric, democratic, inclusive internet by 2030. Our role is to help them come up with a strategy and policy agenda and to identify opportunities (both on the policy and technology sides) that might help us achieve that goal.

A more human centric, democratic inclusive internet sounds great! What does that look like in practice and how do we get there?

A more democratic internet is one where more of us, as citizens, can benefit from the internet – where small businesses can meaningfully compete in the digital economy and get access to data in an ethical way, and where we get to have more agency over what happens to our personal data and understand what’s being shown to us and why. We want an internet that’s not just dominated by a handful of large companies – one where interesting innovation can be better used to benefit society, rather than just maximise profits for a small group of very powerful actors.

How we get there is a difficult question. It definitely requires a lot of proactive policymaking and interventions – at a European level, a national level and also a city level. But we’re also thinking about how we can move this discussion beyond the regulation. Are there different ways that we can share data, avoiding these huge centralised data lakes held by companies, for example? I think democratising access to data is very important.

How aware do you think the British public are about this need to take data privacy seriously?

Awareness has definitely been rising in recent years – an obvious anchor point is the Cambridge Analytica revelations. I think the interest has recently rekindled during the coronavirus pandemic as well, and we’ve had all of these public debates about things like contact tracing, and other privacy debates. But really, the big problem is that, while people might be aware of these issues, the alternatives aren’t really there. It’s not very reasonable to ask an average user to move over to systems that do prioritise privacy but are often, unfortunately, quite hard to use. We ought to be thinking a lot more about how we can enable people to just be on the internet, anonymously. Or even if not anonymously, at least to have some agency or understanding about what’s happening with their data.

With Covid-19, on the one hand, we’ve seen a lot of governments quickly deploying very invasive systems, and there’s generally been public acceptance of that. At least at the start of events, the public mood was we’d rather fight Covid, we’d rather save lives, restart the economy, etc, with the privacy consequences left to the side. It’s only when things start to settle, that we’ll really start to see the nature of some of these surveillance tools. But on the other hand, I would not have expected at the start of 2020 that we would have such a public debate about this. I hope we’ll be able to make these debates more mainstream when it comes to government technology and data use.

Are we being let down by the powers that are meant to protect us against data manipulation, particularly by commercial companies?

I don’t think it’s an easy task. First of all, because of the pace of development and the complexity of a lot of the systems, it’s very hard to design good regulation for this. By the time you’ve thought of all the steps, a lot of the context has already changed.

But on top of that, you do see an enforcement problem. The General Data Protection Regulation (GDPR) does have the provisions and protections in place that could address a lot of the issues that we see today, but we’re still figuring out how enforcement works and who’s responsible for it (and in what way). But I do think that, despite that complexity, we are seeing a lot more attention on this than in previous years.

Finally, for people who understand that their data is being collected and don’t see a problem with that, why should they care about this?

It’s two things – it’s what’s already happening, which is probably a lot scarier than what people think, but it’s also the long tail of what could happen in the future. It’s almost impossible to challenge those systems because they’re not transparent. So we might think that we’re being shown an article or product just randomly, but if that is suddenly happening thousands of times a day, every time you use the internet, influencing your shopping behaviour … those systems start to take our autonomy away. By allowing ourselves to be tracked like this, we do create the infrastructure that would allow this to happen in many different ways. So, even if you think you might not have anything to hide today, who knows what social and political norms will look like in the future? If we don’t put the right regulations in place now, we may open ourselves up to the increasing risk of an individual or an organisation taking advantage.

This article originally appeared on The Privacy Collective here.

Post

Working Paper: A vision for the future internet

Read our new working paper, setting out NGI Forward's ambitious vision for the future of the internet.

To coincide with the Next Generation Internet (NGI) Policy Summit on 28 and 29 September, we are launching a working paper and kick-starting a discussion about the policy vision and roadmap we want to set for the Next Generation Internet.

The European Commission’s ambitious Next Generation EU recovery plan aims to not just kickstart economic growth and boost employment, but also use this moment as an opportunity to catalyse the digital and green twin transition.

The internet and its supporting technologies will be instrumental in making these efforts a success, but we cannot harness its full power unless we solve the underlying, systemic issues currently holding it back.

That is why, in this working paper, we set out an ambitious vision and mission framework to create a more democratic, resilient, sustainable, trustworthy and inclusive internet by 2030.

There is no single silver bullet solution that can help resolve all the challenges presented by connected technologies and the digital economy. Instead, we need a wide variety of interventions to reach our objectives, targeting issues across all layers of the internet’s power stack — from its underlying physical infrastructures to the ways in which information flows through the system and impacts our societies.

Challenges across the layers of our power stack model

We propose unifying the ambitious objectives of the Next Generation Internet initiative into one single mission, to sit alongside the ambitious missions previously defined by the European Commission.

Taking such a mission-based approach will empower policymakers and the public sector to take a holistic view, articulate a compelling European story, and mobilise the right actors in Europe’s diverse technology ecosystem to bring about the changes we want to see.

We focus our efforts on five key pillars: 

Democracy: Power over the internet is concentrated in too few hands. Citizens should have more ownership over their own personal data and identity, and a real voice in the development of new innovation. Building a more democratic internet also means levelling the playing field in the digital economy, allowing more actors to meaningfully compete, and public-interest initiatives to thrive. 

Resilience: A human-centric internet also needs to be resilient in order to ensure the continued reliability and sustainability of its networks and social infrastructures. Mounting cyberthreats, climate shocks and rising demand impact different layers of the system, and require renovation and more secure processes to remain robust. 

Sustainability: If we want the internet and related digital technologies to play a role in solving the climate emergency and further the objectives of the European Green Deal, we need to ensure we minimise their own environmental footprint and advance the circular digital economy. 

Trust: From reading an article on social media to making an online payment — trust in and on the internet is vital if we want to make the most of its promise. Europe needs more trustworthy models for online interactions, reliable information, data-sharing and identity management, as well as helping to ease growing distrust in the geopolitical arena.

Inclusion: The internet needs to be accessible to all. This means removing economic and infrastructural barriers to access, but also the development of a flourishing multilingual internet, where services are available and safe to use for underrepresented communities.

You can download the full report here.
