The EdgeRyders NGI Ethnography Team is dedicated to responding to three core, intersecting questions: when people talk about the future of the internet, what are their key concerns and desires? What issues do they face, and what solutions do they imagine? And how can we, as ethnographers, visualise and analyse these topics in a way that meaningfully contributes to ongoing debates and policy-making?
At the 2020 NGI Policy Summit, we demonstrated how we responded to these questions through digital ethnographic methods and qualitative coding.
As digital ethnographers, we participate in and observe online interaction on EdgeRyders’ open source community platform, engaging with community members as they share their insights on issues from artificial intelligence, to environmental tech, to technology solutions for the COVID-19 pandemic.
We also code all posted content through Open Ethnographer, an open source coding tool that enables us to perform inductive, interpretive analysis of written online content. In practice, this means that we apply tags to portions of digital text produced by community members in a way that captures the semantic content of online interactional moments. This coding process yields a broader Social Semantic Network (SSN) of codes that allows us to gain a large-scale view of emerging salient topics, while being able to zoom in to actionable subsets of conversation.
We visualise and navigate this ethnographic data using the data visualisation tool, Graph Ryder, which adds a quantitative layer atop our qualitative approach to data collection. Graph Ryder gives us a visualisation of all generated codes, allowing us to trace co-occurrences between codes and see code clusters, which show us what concepts community members are associating with each other. This approach shows us who is talking to each other, and about what topics, across the entire platform. During our workshop, we invited attendees to explore Graph Ryder with us. Here are some examples we used:
We find interesting co-occurrences when filtering the graph to k=4. For those unfamiliar with our tool, Graph Ryder, this means that we are looking at connections between codes that have been mentioned together by community members at least four times.
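As an illustration only (this is not the actual Graph Ryder implementation, and the codes and counts below are made up), the k-filter amounts to counting how often two codes land on the same contribution and discarding pairs below the threshold:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(coded_posts, k=4):
    """Count how often two codes are applied to the same contribution,
    keeping only pairs that co-occur at least k times."""
    counts = Counter()
    for codes in coded_posts:
        # Every unordered pair of distinct codes on one post is one co-occurrence.
        for pair in combinations(sorted(set(codes)), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= k}

# Made-up coded posts: each inner list holds the codes on one contribution.
posts = (
    [["privacy", "personal data"]] * 5
    + [["privacy", "surveillance"]] * 4
    + [["privacy", "cost"]] * 2   # drops out at k=4
)
edges = cooccurrence_edges(posts, k=4)
```

Raising k prunes weak links and leaves only the pairs of concepts that community members associate repeatedly, which is what makes the clusters in the filtered graph interpretable.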
At this level, we can see some broad themes emerging as we look at the graph as a whole. We can then zoom in on a code like “privacy”, an extremely central node in the conversation among community members. This code is linked to other codes like “personal data”, “trade-offs”, “cost”, “surveillance” and “decision-making”. These connections, in turn, create an illustrative network of privacy concerns articulated by the community: around smart cities and human rights, covid-19 and contact tracing, trade-offs and decision-making. A salient theme is the question of how to weigh up privacy trade-offs in order to make optimal decisions about one’s own data privacy. What does it cost? There is uncertainty around how extensive surveillance is, and a distrust of the information that one is given about these technologies, which makes it difficult for community members to make quality decisions about these issues.
Social Semantic Network Analysis combines qualitative research at scale with participatory design: we can, thus, dynamically address what people know, what they are trying to do and what they need. It also affords us a great deal of foresight, meaning we can look towards the future and identify what might be brewing on the horizon.
So, how does this method allow us to inform policy? The digital ethnographic approach means we are continuously engaging with a broad range of individuals and communities across Europe, from activists to tech practitioners and academics, among many others. This gives us unique access to viewpoints and experiences that we can, in turn, explore in greater detail. This approach combines the richness of everyday life details gained through ethnographic research with the “big picture” vantage point of network science. Our inductive approach to community interaction means we remain open to novelty: allowing us to address problems as they emerge, without having to define what those problems are from the outset.
Workshop report: Privacy and trust: trends in experiments in EU-US research and innovation
by Louis Stupple-Harris

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by NGI Atlantic, written by Sara Pittonet Gaiarin and Jim Clarke.
Bridging EU-US research on Next Generation Internet: about NGIatlantic.eu
NGIatlantic.eu is one of the growing family of Research and Innovation Action (RIA) projects under the Next Generation Internet (NGI) initiative, whose goal is to collectively build a human-centric internet based on the values held dear by European citizens, such as privacy, trust, fairness and inclusiveness. At the same time, the NGI initiative is designed to ensure significant internationalisation activities, including collaborations between EU and United States NGI-related activities, in order to generate major impacts at both a pan-European and a transatlantic level. Against this backdrop, from January 2020 through June 2022, NGIatlantic.eu will fund third-party EU-based researchers and innovators to carry out NGI-related experiments, in collaboration with US research teams, through regular open calls. In its first open call, which ran between 1 April and 29 May 2020, six projects were selected for funding, in areas primarily related to EU-US collaboration on privacy- and trust-enhancing technologies and decentralised data governance, leveraging AI, blockchain, 5G, big data and IoT technologies.
Trends in experiments in EU-US research and innovation
Organising a session during the NGI Policy Summit 2020 was an ideal opportunity to provide policymakers with an overview of the major trends and trajectories in EU-US research collaboration, and with context for present and future NGI-related policy developments. Three of the selected NGIatlantic.eu projects were given an opportunity to pitch their experiments, present their initial results and explain how these would contribute to ongoing policy dialogues in the EU and US.
Decentralized data governance
George C. Polyzos, Director of the Mobile Multimedia Laboratory at the Athens University of Economics and Business, presented the “Self-Certifying Names for Named Data Networking” project, whose solution builds on the emerging paradigm of Decentralized Identifiers (DIDs), a new form of self-sovereign identification under standardisation by the W3C, applied to Named Data Networking (NDN). He was followed by Berat Senel, Research Engineer at PlanetLab Europe, EdgeNet, Laboratoire d’Informatique de Paris 6 (LIP6), who introduced the CacheCash experiment. CacheCash builds on Content Delivery Network (CDN) technology to provide a service in which interested users run caches and are incentivised to participate by receiving a crypto-currency (Cachecoin) in exchange for serving content to other users.
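For readers unfamiliar with DIDs, the identifier format standardised by the W3C is compact: a fixed scheme, a method name and a method-specific id. A simplified validity check (the full ABNF in the W3C DID Core spec also permits internal colons and percent-encoded characters in the id part) might look like:

```python
import re

# Simplified sketch of the W3C DID Core syntax:
# did:<method-name>:<method-specific-id>
DID_PATTERN = re.compile(r"^did:[a-z0-9]+:[A-Za-z0-9._-]+$")

def looks_like_did(identifier: str) -> bool:
    """Return True if the string follows the basic did:method:id shape."""
    return DID_PATTERN.fullmatch(identifier) is not None

# The spec's own example identifier:
example = "did:example:123456789abcdefghi"
```

Because the identifier carries no central registry or hostname, control over it is proven cryptographically, which is what makes DIDs “self-sovereign”.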
Privacy and Trust Enabling Data Marketplace for Sustainable Supply Chains
Moving to the privacy and trust topics, Tomaz Levak, Managing Director at Trace Labs Ltd., introduced the “Food Data Marketplace” (FDM) project, which fosters new economic models for sustainable food supply chains based on data. It employs a privacy-by-design approach to enable farmers and cooperatives to regain control of their data, give it a price tag and sell it to interested partners in the supply chain. Last but not least, the NGIatlantic.eu project also took the opportunity to showcase the Twinning Lab, an online space for researchers, innovators and start-ups to establish complementary partnerships with transatlantic actors to address NGI challenges, and to present its future activities and opportunities for the NGI communities. The project also highlighted its 3rd Open Call, which will open on 1 December 2020.
Workshop report: Trustworthy content handling and information exchange with ONTOCHAIN
The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by ONTOCHAIN.
Today, digital life is an extension of our physical world, and it demands the same critical, moral and ethical thinking. From the current standpoint, however, when it comes to the exchange of knowledge and services, the internet cannot ensure that bias or the systematic abuse of global trust is avoided. Several threats can be identified in real-life scenarios of a person’s interaction with the internet. Here are some examples.
The balance of power that characterised the initial spirit of the internet has been broken by a few dominant companies. This small, centralised network now holds the power of information in its hands and can potentially dictate what is true and what is false.
We all make daily decisions on the basis of information we find on the internet. But the provenance of this information is hard, slow and costly to verify, and its quality is often uneven and unassessed. Information can be corrupted by malicious storage and network operators, or by censorship, and can be shared and propagated to an unforeseeable extent.
Publishing anonymously or pseudonymously to protect privacy leads, from time to time, to misinformation. Removing anonymity from the internet is not an option, and even if it were, real people would always be able to share false information, whatever the reason. Information disorder would still remain.
Various platforms publicly expose users’ ratings as metadata over the public internet, typically tied to the profile of a single user. This model is flawed in two ways. Firstly, it allows spam to mislead prospective consumers, while past consumers have little incentive to provide their feedback. Secondly, the revenue that service providers make is not shared with the users who took the time to provide feedback.
Artificial intelligence, increasingly present in our digital daily life, can, if not trained correctly, only lead us to adopt partial behaviours and reveal how unequal, parochial and cognitively biased humans can be. Blockchain technology, a victim of its own success, leads to stand-alone, disconnected blockchains with different ecosystems, hashing algorithms, consensus models and communities. The blockchain space is becoming increasingly siloed, and its core philosophical concept, the idea of decentralisation, is being undermined.
In order to overcome these threats and make the internet a resilient, trustworthy and sustainable means of exchanging knowledge and services, the ONTOCHAIN European programme supports the development of innovative and interoperable solutions with novel business models, in an open and collaborative way, through three cascading open calls. It proposes to federate blockchain and semantic technologies for trustworthy content handling and information exchange in vital sectors of the European economy. Several questions and challenges are nonetheless still open:
Shall we build the ONTOCHAIN ecosystem from scratch? Platform compatibility might make it easier for developers to contribute, but is it a double-edged sword?
The techniques and algorithms to be used (e.g. knowledge representation, storage and querying, machine learning, data analytics) have to be leveraged and integrated into a single decentralised ontology framework.
The fast pace of innovation in blockchains will certainly make some design choices obsolete before the end of ONTOCHAIN; this has to be anticipated and mitigated for the sustainability of the ecosystem. An open and flexible design will be required, and ONTOCHAIN innovators will have to make numerous trade-offs, e.g. between the granularity of the data stored on-chain and performance, trade-offs that may evolve as future blockchain protocols emerge. These choices will have to be documented and adaptable to make ONTOCHAIN contributions interoperable and sustainable.
A method will have to be elaborated to transparently derive a new truth out of several known truths according to a set of rules. The design of specific smart contracts that implement first-order logic directly on the blockchain could be one solution. How will ONTOCHAIN maintain a competitive advantage against already existing blockchain ecosystems? A set of innovative blockchain-related business models will have to be devised and implemented so that all parties involved in the content exchange are rewarded fairly.
By building ONTOCHAIN with you, we expect to answer these challenges and contribute to a more distributed and transparent internet that respects and promotes the fundamental values of diversity, equality, privacy and participation. Stay tuned, share and engage!
Workshop report: Follow us OFF Facebook – decent alternatives for interacting with citizens
The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by Redecentralize.org.
Despite the incessant outcry over social media giants’ disrespect of privacy and unaccountable influence on society, any public sector organisation wanting to reach citizens feels forced to be present on their enormous platforms. But through its presence, an organisation legitimises these platforms’ practices, treats them like public utilities, subjects its content to their opaque filters and ranking, and compels citizens to be on them too — thus further strengthening their dominance. How could we avoid the dilemma of either reaching or respecting citizens?
Redecentralize organised a workshop to address this question. The workshop explored the alternative of decentralised social media, in particular Mastodon, which lets users choose whichever providers and apps they prefer because these can all interoperate via standardised protocols like ActivityPub; the result is a diverse, vendor-neutral, open network (dubbed the Fediverse), analogous to e-mail and the world wide web.
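The interoperability that makes the Fediverse vendor-neutral starts with account discovery via WebFinger (RFC 7033), which ActivityPub servers such as Mastodon answer before any federation happens. As a minimal sketch (the handle below is made up), building the discovery URL for a Fediverse account looks like this:

```python
from urllib.parse import urlencode

def webfinger_url(handle: str) -> str:
    """Build the RFC 7033 WebFinger discovery URL for a Fediverse
    handle like 'user@example.social'. The server's JSON answer lists
    the actor's profile links, which any compatible app can follow."""
    user, domain = handle.lstrip("@").split("@")
    query = urlencode({"resource": f"acct:{user}@{domain}"})
    return f"https://{domain}/.well-known/webfinger?{query}"

url = webfinger_url("@alice@example.social")
```

Because discovery depends only on this standardised endpoint, not on any one vendor’s API, a user on one provider can follow and message a user on any other, much like e-mail.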
Leading by example in this field is the state ministry of Baden-Württemberg, possibly the first government with an official Mastodon presence. Their head of online communications Jana Höffner told the audience about their motivation and experience. Subsequently, the topic was put in a broader perspective by Marcel Kolaja, Member and Vice-President of the European Parliament (and also on Mastodon). He explained how legislation could require the dominant ‘gatekeeper’ platforms to be interoperable too and emphasised the role of political institutions in ensuring that citizens are not forced to agree to particular terms of service in order to participate in public discussion.
Workshop report: (Dis)connected future – an immersive simulation
As part of the Summit, Nesta Italia and Impactscool hosted a futures workshop exploring the key design choices for the future internet.
The NGI Policy Summit was a great opportunity for policymakers, innovators and researchers to come together to start laying out a European vision for the future internet and elaborate the policy interventions and technical solutions that can help get us there.
As part of the Summit, Nesta Italia and Impactscool hosted a futures workshop exploring the key design choices for the future internet. It was a participative and thought-provoking session. Here we take a look at how it went.
The discussion about the internet of the future is very complex, and it touches on many challenges that our societies are facing today: topics like data sovereignty, safety, privacy, sustainability and fairness, to name just a few, as well as the implications of new technologies such as AI and blockchain, and areas of concern around them, such as ethics and accessibility.
In order to define and build the next generation internet, we need to make a series of design choices guided by the European values we want our internet to radiate. However, moving from principles to implementation is really hard. In fact, we face the added complexity coming from the interaction between all these areas and the trade-offs that design choices force us to make.
Our workshop’s goal was to bring to life some of the difficult decisions and trade-offs we need to consider when we design the internet of the future, in order to help us reflect on the implications and interaction of the choices we make today.
How we did it
The workshop was an immersive simulation about the future, in which we asked the participants to make some key choices about the design of the future internet and then deep-dived into possible future scenarios emerging from these choices.
The idea is that it is impossible to know exactly what the future holds, but we can explore different models and be open to many different possibilities, which can help us navigate the future and make more responsible and robust choices today.
In practice, we presented the participants with the following four challenges, in the form of binary dilemmas, and asked them to vote for their preferred choice in a poll:
Data privacy: protection of personal data vs data sharing for the greater good
Algorithms: efficiency vs ethics
Systems: centralisation vs decentralisation
Information: content moderation vs absolute freedom
For each of the 16 combinations of binary choices we prepared a short description of a possible future scenario, which considered the interactions between the four design areas and aimed at encouraging reflection and discussion.
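The mechanics above are simple enough to sketch: four binary dilemmas yield 2⁴ = 16 scenario keys, and each poll picks one option per dilemma. In the Python sketch below, the dilemma names and the data privacy and information vote figures come from the workshop, while the `majority_scenario` helper and the algorithms and systems figures are hypothetical illustrations:

```python
from itertools import product

# The four dilemmas and their binary options, as posed in the workshop.
DILEMMAS = {
    "data privacy": ("protection of personal data",
                     "data sharing for the greater good"),
    "algorithms": ("efficiency", "ethics"),
    "systems": ("centralisation", "decentralisation"),
    "information": ("content moderation", "absolute freedom"),
}

# One option per dilemma gives 2**4 = 16 possible scenario keys.
scenario_keys = list(product(*DILEMMAS.values()))

def majority_scenario(votes):
    """Pick the most-voted option per dilemma; the resulting tuple
    indexes one of the 16 prepared scenario descriptions."""
    return tuple(max(options, key=votes.get) for options in DILEMMAS.values())

votes = {
    # Data privacy and information figures are from the workshop poll;
    # the algorithms and systems figures are hypothetical placeholders.
    "protection of personal data": 84, "data sharing for the greater good": 16,
    "efficiency": 30, "ethics": 70,
    "centralisation": 25, "decentralisation": 75,
    "content moderation": 41, "absolute freedom": 59,
}
chosen = majority_scenario(votes)
```

Preparing one short scenario text per key keeps the exercise tractable while still letting the group see how flipping any single choice lands them in a different future.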
Based on the majority votes we then presented the corresponding future scenario and discussed it with the participants, highlighting the interactions between the choices and exploring how things might have panned out had we chosen a different path.
Protection of personal data 84%
Data sharing for the greater good 16%
Content moderation 41%
Absolute freedom 59%
The table above summarises the choices made by the participants during the workshop, which led to the following scenario.
Decentralized and distributed points of access to the internet make it easier for individuals to manage their data and the information they are willing to share online.
Everything that is shared is protected and can be used only following strict ethical principles. People can communicate without relying on big companies that collect data for profit. Information is totally free and everyone can share anything online with no filters.
Not so one-sided
Interesting perspectives emerged when we asked for contrarian opinions on the more one-sided questions, which demonstrated how middle-ground, context-aware solutions are required in most cases when dealing with complex topics such as those analysed here.
We discussed how certain non-privacy-sensitive data can genuinely contribute to the benefit of society, with minimal concern on the side of the individual if the data is shared in anonymised form. Two examples that emerged from the discussion were transport management and research. In the (de)centralisation debate, we considered how decentralisation could result in a diffusion of responsibility and a lack of accountability: “If everyone’s responsible, nobody is responsible”. We mentioned how this risk could be mitigated by tools like public-private-people collaboration and data cooperatives, combined with clear institutional responsibility.
Workshop report: Futurotheque – a trip to the future
Sander Veenhof, Augmented reality artist and Leonieke Verhoog, Program Manager at PublicSpaces took their session attendees on a trip to the future.
by Louis Stupple-Harris
The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by Sander Veenhof and Leonieke Verhoog, creators of Futurotheque.
Sander Veenhof, augmented reality artist, and Leonieke Verhoog, Program Manager at PublicSpaces, took their session attendees on a trip to the future. They did this ‘wearing’ the interactive face filters they created for their speculative fiction and research project, the ‘Futurotheque’. The AR effects transformed them into citizens from the years 2021 right up to 2030, wearing the technical equipment we can expect to be wearing during those years. Beyond the hardware, however, the filters were above all intended to visualise the way we will experience the world in the near future: through the HUD (head-up display) of our augmented reality wearables.
As users, we tend to think of the future of AR as more of the same in a hands-free way, but this session aimed to look beyond the well-known use-cases for these devices. Of course, they will provide us with all our information and entertainment needs and they can guide us wherever we are. But will that be our navigation through the physical world, or will these devices try to guide us through life? In what way will cloud intelligence enhance us, making use of the built-in camera that monitors our activities 24/7? What agency do we want to keep? And in what way should citizens be supported with handling these new devices, and the new dilemmas arising from their use?
These are abstract issues, but the face-filter visualisations applied to Sander and Leonieke helped to convey the day-to-day impact of these technological developments on us as individuals, and prompted an interesting discussion with the session participants. After a dazzling peek into the next decade, the conclusion was that there is a lot to think about when these devices become part of our society. Fortunately, that is not the case yet: we still have time to think of ways to integrate these devices into our society beforehand, instead of doing so afterwards.
Workshop report: People, not experiments – why cities must end biometric surveillance
We debated the use of facial recognition in cities with the policymakers and law enforcement officials who actually use it.
by Louis Stupple-Harris
The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by European Digital Rights (EDRi), which was originally published on the EDRi website.
We debated the use of facial recognition in cities with the policymakers and law enforcement officials who actually use it. The discussion got to the heart of EDRi’s warnings that biometric surveillance puts limits on everyone’s rights and freedoms, amplifies discrimination, and treats all of us as experimental test subjects. This techno-driven democratic vacuum must be stopped.
From seriously flawed live trials of facial recognition by London’s Metropolitan Police, to unlawful biometric surveillance in French schools, to secretive roll-outs of facial recognition which have been used against protesters in Serbia: creepy mass surveillance by governments and private companies, using people’s sensitive face and body data, is on the rise across Europe. Yet according to a 2020 survey by the EU’s Fundamental Rights Agency, 80% of Europeans are against sharing their face data with authorities.
On 28 September, EDRi participated in a debate at the NGI Policy Summit on “Biometrics and facial recognition in cities” alongside policymakers and police officers who have authorised the use of the tech in their cities. EDRi explained that public facial recognition, and similar systems which use other parts of our bodies like our eyes or the way we walk, are so intrusive as to be inherently disproportionate under European human rights law. The ensuing discussion revealed many of the reasons why public biometric surveillance poses such a threat to our societies:
• Cities are not adequately considering risks of discrimination: according to research by WebRoots Democracy, black, brown and Muslim communities in the UK are disproportionately over-policed. With the introduction of facial recognition in multiple UK cities, minoritised communities are now having their biometric data surveilled at much higher rates. In one example from the research, the London Metropolitan Police failed to carry out an equality impact assessment before using facial recognition at the Notting Hill carnival – an event which famously celebrates black and Afro-Caribbean culture – despite knowing the sensitivity of the tech and the foreseeable risks of discrimination. The research also showed that whilst marginalised communities are the most likely to have police tech deployed against them, they are also the ones that are the least consulted about it.
• Legal checks and safeguards are being ignored: according to the Chief Technology Officer (CTO) of London, the London Metropolitan Police has been on “a journey” of learning, and understand that some of their past deployments of facial recognition did not have proper safeguards. Yet under data protection law, authorities must conduct an analysis of fundamental rights impacts before they deploy a technology. And it’s not just London that has treated fundamental rights safeguards as an afterthought when deploying biometric surveillance. Courts and data protection authorities have had to step in to stop unlawful deployments of biometric surveillance in Sweden, Poland, France, and Wales (UK) due to a lack of checks and safeguards.
• Failure to put fundamental rights first: the London CTO and the Dutch police explained that facial recognition in cities is necessary for catching serious criminals and keeping people safe. In London, the police have focused on ethics, transparency and “user voice”. In Amsterdam, the police have focused on “supporting the safety of people and the security of their goods” and have justified the use of facial recognition by the fact that it is already prevalent in society. Crime prevention and public safety are legitimate public policy goals: but the level of the threat to everyone’s fundamental rights posed by biometric mass surveillance in public spaces means that vague and general justifications are just not sufficient. Having fundamental rights means that those rights cannot be reduced unless there is a really strong justification for doing so.
• The public are being treated as experimental test subjects: across these examples, it is clear that members of the public are being used as subjects in high-stakes experiments which can have real-life impacts on their freedom, access to public services, and sense of security. Police forces and authorities are using biometric systems as a way to learn and to develop their capabilities. In doing so, they are not only failing their human rights obligations, but are also violating people’s dignity by treating them as learning opportunities rather than as individual humans deserving of respect and dignity.
The debate highlighted the worrying patterns of a lack of transparency and consideration for fundamental rights in current deployments of facial recognition, and other public biometric surveillance, happening all across Europe. The European Commission has recently started to consider how technology can reinforce structural racism, and to think about whether biometric mass surveillance is compatible with democratic societies. But at the same time, it is bankrolling projects like the horrifyingly dystopian iBorderCTRL. EDRi’s position is clear: if we care about fundamental rights, our only option is to stop the regulatory whack-a-mole and permanently ban biometric mass surveillance.
Workshop report: What your face reveals – the story of HowNormalAmI.eu
At the Next Generation Internet Summit, Dutch media artist Tijmen Schep revealed his latest work - an online interactive documentary called 'How Normal Am I?'.
by Louis Stupple-Harris
The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by Tijmen Schep.
At the Next Generation Internet Summit, Dutch media artist Tijmen Schep revealed his latest work – an online interactive documentary called ‘How Normal Am I?‘. It explains how face recognition technology is increasingly used in the world around us; for example, the dating website Tinder gives all its users a beauty score in order to match people who are about equally attractive. Besides just telling us about it, the project also allows people to experience this for themselves. Through your webcam, you will be judged on your beauty, age, gender, body mass index (BMI) and facial expressions. You’ll even be given a life expectancy score, so you’ll know how long you have left to live.
The project has sparked the imagination, and perhaps a little feeling of dread, in many people: not even two weeks after its launch, the documentary had already been ‘watched’ over 100,000 times.
At the Summit, Tijmen offered a unique insight into the making of this project. In his presentation, he talked about the ethical conundrums of building a BMI prediction algorithm that is based on photos from arrest records and that uses science that has been debunked. The presentation generated a lot of questions and was positively received by those who visited the Summit.
Workshop report: Data sharing in cities

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by the Coalition of Cities for Digital Rights, written by Beatriz Benitez and Malcolm Bain.
Data-sharing platforms play an important role in cities by integrating data collected throughout, or related to, the city and its citizens from a wide variety of sources (central administration, associated entities, utilities, the private sector). They enable local authorities, businesses and, occasionally, the public to access data produced within the city and use it for limited or unlimited purposes (open data).
Malcolm introduced the session, highlighting that while cities are keen to share data and use shared data in city digital services, they are (or should be) also aware of the digital-rights issues arising in these projects, relating to citizens’ privacy, the transparency and openness of the data used, the accessibility and inclusion of citizens, as well as the existence of bias in the data sets used and the privatisation of the use of city-related data. Luckily, cities are also in the best position to introduce the concept of ‘digital rights by design’ in these projects, and to correct issues such as bias, privacy intrusions, unfairness, profiling and data misuse. He briefly showcased the Coalition’s work in this area in the Data Sharing Working Group, focusing on the ‘building blocks’ for rights-compliant data-sharing projects that extract value from urban big data while respecting residents’ and visitors’ rights, including policies, processes, infrastructures, and specific actions and technologies.
Daniel highlighted the work of Eurocities on their Citizens Data Principles, whose aim is to offer guidance to European local governments on more socially responsible use of data, and to recognise, protect and uphold citizens’ rights over the data they produce. The principles support the use of data-generated knowledge to improve urban life and preserve European values through scientific, civic, economic and democratic progress. Daniel presented one of his own city’s data-sharing projects, Periscopio, a framework for sharing information contained in urban data (public and private) in a way that allows social agents and citizens to be involved in creating social, scientific, economic and democratic value, as well as enabling the creation of better urban services.
Then, the cities of San Antonio, Long Beach, Portland, Toronto, Rennes, Helsinki, Amsterdam and Barcelona each presented some case studies from their cities, highlighting different issues raised by their data-sharing platforms and projects.
For the City of San Antonio, USA, Emily B. Royall addressed the issue of data bias and the need to listen to the community under the theme ‘Leveraging Data for Equity’.
Johanna Pasilkar of Helsinki shared the work of the ‘MyData’ operator initiative, which eases residents’ daily lives by consolidating data collected by the city’s departments and organisations and enabling sharing across several municipalities (data portability).
On behalf of the City of Amsterdam, Ron Van der Lans told us about the city’s collaboration with navigation companies such as Google, Waze and BeMobile, sharing traffic data to improve citizens’ mobility and quality of life.
Hamish Goodwin from the City of Toronto, Canada explained how the city is attempting to integrate digital rights principles into its digital infrastructure and the municipality’s decision-making, and how to put a policy framework into practice – the results of this are just coming out.
From the city of Rennes, Ben Lister introduced us to RUDI, a local, multi-partner data-sharing platform that goes beyond open data by connecting users and producers to create new and/or better services.
Héctor Domínguez from the city of Portland, USA told us about the importance of ‘Racial Justice’ as a core value in regulating emergent technology, grounded in respect for privacy, trusted surveillance and digital inclusion.
Ryan Kurtzman, on behalf of the City of Long Beach, USA, spoke about positive and negative associations with smart cities, and how involving citizens in the participatory design of digital services can leverage the positive aspects: personal convenience, engagement and solving social challenges.
To conclude the round, Marc Pérez-Battle from Barcelona presented several data sharing and open data projects led by the City Council.
The city participants highlighted the need to embed digital rights at design time (privacy, transparency, security, accessibility, etc.), to enable citizen participation, and to retain the flexibility to adapt and correct any issues that arise – something that may be harder once technologies are embedded in city infrastructure, and thus all the more important to get right at design time. Common themes among the projects included the importance of citizen involvement, respect for privacy and security, the need for transparency, and avoiding data bias. In addition, listeners in the session’s online chat raised the question of data ‘ownership’, and whether it is a useful concept or a misleading one – cities are better seen as stewards of data on behalf of the public than as owners of the data they gather and use.
The session concluded by noting that much work remains to be done, but that simply raising cities’ awareness of the digital rights issues in data-sharing projects is a big first step. The Coalition will shortly release its Data Sharing Concept Note, along with the case studies briefly presented during the round table.
Two days to change the internet: the NGI Policy Summit 2020
by Louis Stupple-Harris, Markus Droemann
The Next Generation Internet Policy Summit has gone off with a bang. Organised by Nesta and the City of Amsterdam this September, the Summit brought together participants from all over Europe and beyond to shape a vision for the future internet, moving the conversation on from the diagnosis of past and present challenges to the exploration of practical, concrete solutions. Here are some of the highlights.
45 countries represented
Originally scheduled for the end of June 2020 in Amsterdam, the Summit was rescheduled and reformulated for online participation in response to the COVID-19 pandemic. On Monday 28 September, the Summit began with a morning of plenary sessions curated by Nesta and the City of Amsterdam. Monday afternoon and Tuesday morning were given over to our Policy-in-Practice workshop sessions, before a further series of plenary sessions on Tuesday afternoon closed the Summit.
Together with policymakers, researchers, and representatives from civil society, we looked at some of the most promising policy interventions and technological solutions, forging a path that cities, Member States and the European Union could follow.
A tangible vision and the steps to get there
We stirred the imaginations of our attendees by launching our new working paper to coincide with the Summit: A vision for the future internet, which is packed full of analysis and ideas for how to create a better internet by 2030. With such a broad range of people and issues involved in shaping the internet, it is clear that a coherent vision is required to tie it all together. We want to hear from you with feedback on the paper.
We were also honoured to host a keynote from the European Commission as they set out their post-COVID-19 recovery agenda. Pearse O’Donohue, Director of Future Networks at the Commission, outlined the way that technology and environmentalism must come together in a ‘twin transition’. He described the broader impact of the Next Generation Internet Initiative, and how the Commission’s funding and research are contributing to the creation of an Internet of Humans. Pearse has also written a blog to capture his message.
A transparent approach to AI
The City of Amsterdam’s deputy mayor, Touria Meliani, also launched the world’s first AI registry, co-designed by Amsterdam and the City of Helsinki. The registry gives citizens a powerful tool to understand how algorithms are being used by their local governments to make decisions, putting principles of fairness and accountability into practice. We heard that transparency is a huge issue in the way artificial intelligence and algorithms are used to make decisions about citizens’ data. Deputy Mayor Meliani said: ‘When we say as a city: algorithms are useful for our city, it’s also our responsibility to make sure that people know how they work. People deserve to know how they work. It’s a human right.’
A wide range of topics was discussed during the two days of the summit – not surprisingly, given the broad and interconnected nature of the challenges and opportunities driving the internet’s development today. To make our vision a reality, Europe must mobilise its full ecosystem, with interventions necessary on the local, national and supranational level. We were therefore honoured to feature leading policymakers across all layers of governance, including four MEPs, high-level representatives from the European Commission, a former president, a digital minister, and CTOs of leading digital cities from across the world. Below, we summarise just some of the many insights that emerged during the event.
Taking control of our data
Clockwise from top-left: Lucy Hedges, Frederike Kaltheuner, Tricia Wang and Charlton McIlwain in our session on Solutions vs. Solutionism.
Today, few would question that the centralisation and hoarding of data – and power – in fewer and fewer hands gives platforms considerable agency to shape our views of the world, our social interactions and our economic choices. Tricia Wang challenged attendees to think beyond privacy and consider the impact that widespread data collection has on our personhood, our ability to determine our own life decisions and outcomes. ‘Corporations would rather have us live in the world of privacy, because privacy is something that can be legally mediated and tickboxed,’ she said. ‘At this point, we have so much data tied to who we are, that other people can control our lives through that data, and that threatens our agency, our personhood.’
Yet the role that these systems play in perpetuating societal biases and disproportionately affecting minorities and people of colour is not sufficiently well understood. Charlton McIlwain warned that technology being developed today is just as dangerous for people of colour as the systems used to enact racist policies decades ago. He drew a parallel between the practice of redlining and the risks of social media for people protesting against racism. He explained: ‘We often call on technology to help solve problems but when society defines, frames and represents people of colour as the problem, those solutions often do more harm than good.’
And despite the heavy emphasis on business data in Europe’s current data strategy, our speakers called for innovation to empower citizens to take control of their data. In her talk, Sylvie Delacroix called for the establishment of Data Trusts to redress the growing power imbalance between citizens and big tech. ‘We can do better than consent,’ she explained. ‘We also need bottom-up empowerment structures to help people take the reins of their data, rather than constantly being asked to consent to this or that.’
Our speakers repeatedly stressed the importance of considering the environmental impact of the technology that powers the internet. We heard about the campaign to make it easier to repair our devices, and the push for industry to reduce its reliance on polluting energy sources, stop dirty mining practices and improve waste management processes. In our session on the environment, Janet Gunter called for Europe to put more pressure on manufacturers to create devices that are repairable, so that they last far longer. She said, ‘The precedent has been set with ecodesign regulation for large appliances, but we need ecodesign for smartphones and computers.’
A global approach to inclusion
The COVID-19 pandemic was a hot topic throughout the Summit because it has brought our ambivalent relationship with technology and existing inequalities in our societies into sharp relief. Although we managed to move the event online, for the 10 per cent of EU households without internet access, participating in online life and maintaining access to education, work and public services throughout the pandemic has been far more difficult. In her keynote, Payal Arora started with a call for action: ‘We need to move beyond the concept of inclusion, which necessarily requires excluding ‘bad’ actors such as spammers and trolls. Instead, we need to consider the interconnectedness of everything, and the unintended consequences of changes we might make.’ She called for Europe to set itself ambitious targets for digital inclusion, by employing collaborative problem-solving and including transcultural perspectives.
In our session on digital identity, our speakers agreed that it is time for Europe to create a comprehensive bloc-wide identity system that allows people to keep control over their own personal data. Former Estonian President Toomas Hendrik Ilves explained that under the eIDAS regulation, only 15-20 per cent of EU citizens have used a digital ID scheme in their home country – a number low enough to prevent genuine investment from public institutions. He argued that only mandatory digital IDs can create the change seen in Estonia. ‘Europe isn’t being held back by technology to build its own ambitious identity infrastructures,’ he said. ‘It is all about political will.’
By and large, speakers expressed a strong desire for Europe to provide alternative models to the perceived tech superpowers in Beijing and Silicon Valley, without emulating their approaches or contributing to the further fragmentation of the internet. In our session on what Europe should do in the next decade, Anu Bradford said: ‘I think that American techno-libertarianism has shown its limit in terms of how we regulate the internet, and we certainly have concerns if the Chinese digital authoritarianism would spread globally. Europe needs to be more than a regulator, but also build our own alternatives. We need to play defence, but also begin to play offence,’ she argued.
Europe will have to support these developments with significant investment alongside smart regulation and governance to create an internet that is fit for the future. To make our vision a reality, speakers agreed that Europe must be bold in its approach and mobilise its full ecosystem, with interventions necessary on the local, national and supranational level.
A manifesto for change
We thoroughly enjoyed the discussions that arose during the Summit, and want them to have a lasting effect. Over the coming months, we’ll be exploring these ideas in our work to guide the European Commission’s policy approach to the future internet.
Our resounding and heartfelt thanks go to everyone that contributed to the Summit. And as always, if you like what you hear, get in touch with us – we are always interested in hearing from new contacts and collaborating on issues that affect the future internet.