Post

Calling in the experts: our roundtable on smartphone lifetimes

We've got some ideas to extend smartphone lifetimes and we invited some experts to put them under the microscope.

Ideas love company, and there comes a point in developing policy recommendations when a discussion with experts will turn good proposals into excellent ones. NGI Forward is exploring ways to extend the useful life of smartphones to reduce their environmental impact, and last week we held a roundtable discussion to put our proposals to the test. This is a complex issue with lots of moving parts, which is why we invited an impressive array of experts in repair, cybersecurity, software development, sustainability and European policy. Representatives of device makers, mobile networks, security analysts, advocacy groups and the European Commission pulled our suggestions apart and helped us put them back together.

Our focus on smartphones came from our work last year on the environmental impact of the internet as a whole, which culminated in our report: Internet of Waste. The internet and its underlying infrastructure use a significant portion of earth’s resources, consuming 5-9 per cent of global energy supply and creating around 2 per cent of global emissions. And the little black rectangles we carry around in our pockets and bags? They’re some of the biggest contributors. Europeans replace their smartphone on average every two years, and 72 per cent of a smartphone’s lifetime emissions are created before it hits the shelves. Because so much of the impact comes from manufacturing, extending the average lifespan of smartphones from two to four years would reduce emissions by 44 per cent. More than half of Europeans expect their smartphone to last for four or more years, so it’s clear there is a market for devices that last longer.

We’d like to see smartphone lifetimes extended to five years by 2030, and our roundtable discussion focused on two areas that could help get us there.

Short-lived software support

The software on a device needs to be updated regularly to keep it secure and running smoothly. When software updates stop, a device can become unreliable or vulnerable to data breaches. As a result, the lifetime of smartphones can be artificially shortened if a manufacturer stops providing updates before the hardware breaks. Despite the importance of software updates, most smartphones receive them for only two or three years. A 2020 Eurobarometer survey found that 30 per cent of users replaced a smartphone because the performance of the old device had significantly deteriorated, and 19 per cent replaced it because certain applications or software stopped working on the old device – so the influence of software support on device lifetime is clear.

In our roundtable discussion, we suggested that smartphone makers should be required to provide at least seven years’ software update support. We thought that setting an ambitious target would push manufacturers to think differently about the way they provide software updates, and also drastically reduce the likelihood of artificially shortening device lifetimes. We also suggested that device makers allow users to install alternative operating systems, preferably open source ones, at the end of official support. This would allow the open-source community to create software that runs easily on older devices and receives regular updates indefinitely.

Davide Polverini of the European Commission described the work going into developing legislation for extending smartphone lifetimes, which focuses on the Ecodesign Directive. The Commission is developing vertical regulations that will apply to smartphones and tablets, as well as reviewing the Directive itself to explore how it can be adapted to cover electronics and internet technology. Ugo Vallauri from the Restart Project and Right to Repair Europe pushed for the Commission to be ambitious and agreed that software updates should be provided for far longer than they are currently. Ugo also explained that the practice of serialisation, where manufacturers prevent repair by tying specific parts to a device’s software, is becoming more common.

Our other experts were broadly in support of extending software update periods, especially since analysis by the Fraunhofer Institute shows that the cost of extending updates from two to five years is around €2 per device. However, participants raised concerns that the cost would be greater for smaller device manufacturers, which could further concentrate the market in the hands of the larger manufacturers. Device makers are not the only ones that would be affected, since several chips within smartphones need their own software. Any legislation should take this complexity into account, especially in tackling the dominance of Apple and Google, which together control the vast majority of smartphone software. We also discussed the possibility that manufacturers could create a loophole by providing a basic operating system that would be cheap to support for several years, while offering an alternative with more features that could be abandoned sooner.

We discussed the importance of updates being maintained for each component of the device, including those made by other companies, and whether it is possible to separate software and security updates (we decided possibly not). Our experts emphasised the importance of processes being as easy as possible, and the likelihood that users will be reluctant to start over with a new operating system when theirs is no longer supported. We also heard about the idea of code escrows, in which software is released if a company ceases to exist.

Making repair information public

Our second proposal is for manufacturers to publish repair manuals, device schematics and diagnostic tools so that anyone can use them. Pre-pandemic growth in repair cafes and parties demonstrates that consumers are keen to repair their gadgets and keep them going for longer. Despite this popularity, it remains difficult for end users to conduct their own smartphone repairs, so making repair information public could have a significant impact. The information would also be invaluable for research, since the repairability of products could be compared without having to tear each model apart. The French Repairability Index has also demonstrated that public availability is possible: in response to it, Samsung published repair manuals for several of its devices online.

This is different from the Commission’s current approach for products such as electronic displays, which requires only that approved repair professionals can access this information. For TVs and other screens, repairers must either apply to be added to a national register (though no Member State has implemented one to date) or be approved by the manufacturer, which can impose whatever arduous contract requirements it desires. Manufacturers can take five working days to approve a repairer and another working day to provide manuals for a specific model. We think these hurdles are likely to push more people to replace their smartphones rather than repair them – when these devices are so important to our daily lives, each day they’re away for repair creates a serious disincentive.

Our experts debated the risks of this information being available to people who might use it to exploit security vulnerabilities. For several of our participants, Samsung’s recent publication of repair manuals for the French Repairability Index demonstrates that the right incentives can override worries about the information being misused. We also explored how likely consumers are to conduct repairs, what risks of injury they might face, and whether the availability and quality of spare parts was a greater concern. In the end, it appeared to be a chicken-and-egg issue. We can’t know whether consumers will take matters into their own hands because the opportunity does not currently exist, and whatever downsides there are can clearly be overcome if the incentives are in the right place.

What next?

We are incredibly grateful to all our roundtable participants, who created a lively discussion and really got stuck in. Next, I’ll be incorporating their insights into a policy briefing aimed at the European Commission, to lay out the proposals and their potential impact. We’ll publish it on our website in the next few weeks, but feel free to contact me if you’d like to receive a copy of the final briefing.

Post

The NGI Policy-in-Practice Fund – announcing the grantees

We are very excited to announce the four projects receiving funding from the Next Generation Internet Policy-in-Practice Fund.

Policymakers and public institutions have more levers at their disposal to spur innovation in the internet space than is often thought, and they can play a powerful role in shaping new markets for ethical tools. We particularly believe that local experimentation and ecosystem building are vital if we want to make alternative models for the internet tangible and help them gain traction. But finding the funding and space to undertake this type of trial is not always easy – especially if outcomes are uncertain. Through the NGI Policy-in-Practice fund, our aim has been not only to give organisations the means to undertake a number of these trials, but also to make the case for local trials more generally.

Over the past summer and autumn, we went through a highly competitive application process, ultimately selecting four ambitious initiatives that embody the vision behind the NGI Policy-in-Practice fund. Each of the projects will receive funding of up to €25,000 to test out their idea at a local level and generate important insights that could help us build a more trustworthy, inclusive and democratic future internet.

In conjunction with this announcement, we have released an interview with each of our grantees, explaining their projects and the important issues they are seeking to address in more detail. You can also find a short summary of each project below. Make sure you register for our newsletter to stay up to date on the progress of each of our grantees, and our other work on the future of the internet.

Interoperability to challenge Big Tech power 

This project is run by a partnership between Commons Network and Open Future, based in Amsterdam, Berlin and Warsaw.

This project explores whether the principle of interoperability – the idea that services should be able to work together – and data portability, which would allow users to carry their data with them to new services, can help decentralise power in the digital economy. Currently, as users, we are often locked into a small number of large platforms. Smaller alternative solutions, particularly those that want to maximise public good rather than optimise for profit, find it hard to compete in this winner-takes-all economy. Can we use interoperability strategically, harnessing the clout of trusted institutions such as public broadcasters and civil society, to create an ecosystem of fully interoperable and responsible innovation in Europe and beyond?

Through a series of co-creation workshops, the project will explore how this idea could work in practice, and the role trusted public institutions can play in bringing it to fruition. 

Bridging the Digital Divide through Circular Public Procurement

This project will be run by eReuse, based in Barcelona, with support from the City of Barcelona, the Technical University of Barcelona (UPC) and the global Association for Progressive Communications.

During the pandemic, when homeschooling and remote working became the norm overnight, bridging the digital divide has become more important than ever. This project is investigating how we can make it easier for public bodies, and also the private sector, to donate old digital devices, such as laptops and smartphones, to low-income families currently unable to access the internet.

By extending the lifetime of a device in this way, we also reduce the environmental footprint of our internet use. Laptops and phones now often end up being recycled or, worse, binned long before their actual “useful lifespan” is over, putting further strain on the system. Donating devices could be a simple but effective mechanism for keeping devices in circulation for longer.

The project sets out to do two things: first, it wants to try out this mechanism on a local level and measure its impact through tracking the refurbished devices over time. Second, it wants to make it easier to replicate this model in other places, by creating legal templates that can be inserted in public and private procurement procedures, making it easier for device purchasers to participate in this kind of scheme. The partnership also seeks to solidify the network of refurbishers and recyclers across Europe. The lessons learned from this project can serve as an incredibly useful example for other cities, regions and countries to follow. 

Bringing Human Values to Design Practice

This project will be run by the BBC with support from Designswarm, LSE and the University of Sussex.

Many of the digital services we use today, from our favourite news outlet to social media networks, rely on maximising “engagement” as a profit model. A successful service or piece of content is one that generates many clicks, drives further traffic, or generates new paying users. But what if we optimised for human well-being and values instead? 

This project, led by the BBC, seeks to try out a more human-centric approach to measuring audience engagement by putting human values at its core. It will do so by putting into practice longer-standing research work on mapping the kinds of values and needs its users care about the most, and by developing new design frameworks that would make it easier to actually track these kinds of alternative metrics in a transparent way.

The project will run a number of design workshops and share its findings through a dedicated website and other outlets to involve the wider community. The learnings and design methodology that will emerge from this work will not just be trialled within the contexts of the project partners, but will also be easily replicable by others interested in taking a more value-led approach. 

Responsible data sharing for emergencies: citizens in control

This project will be run by the Dutch National Police, in partnership with the Dutch Emergency Services Control, the Amsterdam Safety Region and the City of Amsterdam.

In a data economy that is growing ever more complex, giving meaningful consent to what happens to our personal data remains one of the biggest unsolved puzzles. But new online identity models have proved to be a potentially very promising solution, empowering users to share only the information they want to share with third parties, and to do so on their own terms. One way to allow such a new approach to identity and data sharing to scale would be to bring in government and other trusted institutions to build their own services using these principles. That is exactly what this project seeks to do.

The project has already laid out all the building blocks of its Data Trust Infrastructure, but wants to take this one step further by actually putting the new framework into practice. It brings together a consortium of Dutch institutional partners to experiment with a first use case: the sharing of vital personal data with emergency services in the case of, for example, a fire. The project will not just generate learnings about this specific trial, but will also contribute to further fine-tuning the design of the wider Data Trust Infrastructure, scope further use cases (of which there are many!), and bring on board more interested parties.

Post

It’s time to fight for our right to repair

Can we make our devices last longer for the environment?

In late November, anticipating new proposals to strengthen the circular economy from the European Commission, Members of the European Parliament voted in favour of adopting an initiative report that calls for consumers to be given the right to repair. This would allow people across Europe to fix their own smartphones and laptops and to see information about the repairability of devices on the box. The motion also favoured placing an outright ban on practices that artificially shorten the lifetime of devices.

If put into legislation next year, these proposals could have a huge impact on the internet’s growing resource footprint – and send a much-needed signal across the globe. EU citizens replace their smartphones on average every two years and a typical smartphone generates 60 to 80kg of CO2-equivalent emissions in its full lifetime. With 200 million smartphones purchased annually in the EU, this creates 12-16 million tonnes of CO2-equivalent emissions each year, which is more than the carbon budget of Latvia in 2017. Increasing the lifetime of a smartphone to three or four years would reduce emissions by 29 and 44 per cent respectively.
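
As a quick sanity check on these numbers, the sketch below reproduces the arithmetic from the figures quoted in this paragraph; nothing here is new data, it simply multiplies the per-device estimates by annual EU sales and restates the lifetime-extension savings quoted above.

```python
# Back-of-the-envelope check of the emissions figures quoted above.
per_device_kg = (60, 80)           # lifetime emissions per smartphone, kg CO2e
devices_per_year = 200_000_000     # smartphones purchased annually in the EU

for kg in per_device_kg:
    total_tonnes = kg * devices_per_year / 1000   # kg -> tonnes
    print(f"{kg} kg per device -> {total_tonnes / 1e6:.0f} million tonnes CO2e per year")

# Lifetime-extension savings, as quoted in the text (not derived here).
for years, saving in ((3, 0.29), (4, 0.44)):
    print(f"Extending lifetime to {years} years: ~{saving:.0%} lower emissions")
```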

The environmental impact of our digital lives has been an area of particular interest for NGI Forward. In recent years, some sectors of the tech industry have been quicker than others in adopting stringent environmental goals. We’ve seen significant voluntary commitments from digital and cloud service providers such as Microsoft, Facebook and Google to reduce their carbon footprint, many of which target the energy consumption of large-scale data operations. But when it comes to extending the lifetime of connected devices themselves and reducing the environmental impact of hardware, it’s a different story.

Whether it’s due to consumer behaviour, market structure or a lack of economic incentives, many device manufacturers seem less willing to abandon a range of practices that are both ecologically unsustainable and harmful to the consumer. Smartphones and tablets in particular are notoriously difficult to repair. They rely on designs that break easily and often stop receiving software updates long before their hardware stops functioning. The European Parliament’s report even implies that device makers have been making deliberate design choices that shorten the lifetime of a device and incentivise shorter replacement cycles.

In some ways, we’re at a watershed moment in technological development. With the advent of the Internet of Things, 5G, and the rapid expansion of internet connectivity across the developing world, the number of connected devices on this planet is currently estimated at around 5.8 billion and set to grow exponentially in the coming years. If we want to create a truly circular economy for digital devices, the first step we should take is extending the lifetime of our devices and making repair economically viable again. 

This is one of the areas where a bold regulatory or policy intervention can make an important difference for the future sustainability of the internet. Uniquely, it’s also a matter where Europe’s ambitions for consumer fairness, economic justice and the fostering of new innovation can converge. Beyond just helping to reduce the environmental footprint of our smartphones, a push to extend the lifetimes of our devices will create a fairer and more decentralised repair economy, lower the financial barriers to digital inclusion and stimulate innovation in sustainable device design as well as green manufacturing – areas where Europe can develop a real competitive edge.

That’s why Nesta recently joined the European Right to Repair campaign. We hope that industry, Member States and the European Commission will take heed of the example set by MEPs and work together to give both consumers and the environment a better deal.

Along with our partners, we are calling for them to:

  • implement a comprehensive right to repair,
  • ensure repair manuals and electronic schematics are available to repairers and end users without restriction, and
  • ensure that the expected lifetime of an otherwise functional device is not unnecessarily cut short by a lack of software updates.

When it comes to creating a greener and fairer European economy, there will be much tougher decisions ahead. This should be an easy one.
If you are interested in learning more about the campaign for the right to repair, visit the Repair.EU website. You can find our recent report about the environmental impact of the internet here.

Post

Workshop report: Ethnography and the internet – collective intelligence insights for digital policy

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by Edgeryders.

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by Edgeryders, written by Amelia Hassoun, Leonie Schulte and Kate J. Sim.

The EdgeRyders NGI Ethnography Team is dedicated to responding to three core, intersecting questions: when people talk about the future of the internet, what are their key concerns and desires? What issues do they face, and what solutions do they imagine? And how can we, as ethnographers, visualise and analyse these topics in a way that meaningfully contributes to ongoing debates and policy-making?

At the 2020 NGI Policy Summit, we demonstrated how we responded to these questions through digital ethnographic methods and qualitative coding. 

As digital ethnographers, we participate in and observe online interaction on EdgeRyders’ open source community platform, engaging with community members as they share their insights on issues from artificial intelligence, to environmental tech, to technology solutions for the COVID-19 pandemic.

We also code all posted content through Open Ethnographer, an open source coding tool that enables us to perform inductive, interpretive analysis of written online content. In practice, this means that we apply tags to portions of digital text produced by community members in a way that captures the semantic content of online interactional moments. This coding process yields a broader Social Semantic Network (SSN) of codes that allows us to gain a large-scale view of emerging salient topics, while being able to zoom in to actionable subsets of conversation.

We visualise and navigate this ethnographic data using the data visualisation tool, Graph Ryder, which adds a quantitative layer atop our qualitative approach to data collection. Graph Ryder gives us a visualisation of all generated codes, allowing us to trace co-occurrences between codes and see code clusters, which show us what concepts community members are associating with each other. This approach shows us who is talking to each other, and about what topics, across the entire platform. During our workshop, we invited attendees to explore Graph Ryder with us. Here are some examples we used: 

We find interesting co-occurrences when filtering the graph to k=4. For those unfamiliar with our tool, Graph Ryder, this means that we are looking at connections between codes that have been mentioned together by community members at least four times. 
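
For readers who want a concrete picture of what that filtering involves, here is a minimal sketch of building a code co-occurrence network and keeping only edges that appear at least four times, using the networkx library. It is not the actual Open Ethnographer or Graph Ryder code, and the coded posts are invented for illustration.

```python
# Illustrative sketch (not the actual Open Ethnographer / Graph Ryder code):
# build a code co-occurrence network and keep edges with weight >= 4.
from itertools import combinations
import networkx as nx

# Hypothetical data: each post is represented by the set of codes applied to it.
coded_posts = [
    {"privacy", "personal data", "trade-offs"},
    {"privacy", "surveillance", "contact tracing"},
    {"privacy", "personal data", "decision-making"},
    {"privacy", "personal data", "cost"},
    {"privacy", "personal data", "trade-offs"},
    {"privacy", "personal data", "surveillance"},
]

graph = nx.Graph()
for codes in coded_posts:
    for a, b in combinations(sorted(codes), 2):
        if graph.has_edge(a, b):
            graph[a][b]["weight"] += 1
        else:
            graph.add_edge(a, b, weight=1)

# Keep only co-occurrences that appear together at least k times (here k = 4).
k = 4
strong_edges = [(a, b, d) for a, b, d in graph.edges(data=True) if d["weight"] >= k]
filtered = nx.Graph(strong_edges)

print(list(filtered.edges(data=True)))  # [('personal data', 'privacy', {'weight': 5})]
```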

At this level, we can see some broad themes emerging as we look at the graph as a whole. We can then zoom in on a code like “privacy”, an extremely central node in the conversation among community members. This code is linked to other codes like “personal data”, “trade-offs”, “cost”, “surveillance” and “decision-making”. These connections, in turn, create an illustrative network of privacy concerns articulated by the community: around smart cities and human rights, COVID-19 and contact tracing, trade-offs and decision-making. A salient theme is the question of how to weigh up privacy trade-offs in order to make optimal decisions about one’s own data privacy. What does it cost? There is uncertainty around how extensive surveillance is, and a distrust of the information that one is given about these technologies, which makes it difficult for community members to make good decisions about these issues.

Social Semantic Network Analysis combines qualitative research at scale with participatory design: we can, thus, dynamically address what people know, what they are trying to do and what they need. It also affords us a great deal of foresight, meaning we can look towards the future and identify what might be brewing on the horizon. 

So, how does this method allow us to inform policy? The digital ethnographic approach means we are continuously engaging with a broad range of individuals and communities across Europe, from activists to tech practitioners and academics, among many others. This gives us unique access to viewpoints and experiences that we can, in turn, explore in greater detail. This approach combines the richness of everyday-life detail gained through ethnographic research with the “big picture” vantage point of network science. Our inductive approach to community interaction means we remain open to novelty, allowing us to address problems as they emerge, without having to define what those problems are from the outset.

Post

Workshop report: Privacy and trust: trends in experiments in EU-US research and innovation

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by NGI Atlantic.

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by NGI Atlantic, written by Sara Pittonet Gaiarin and Jim Clarke.

Bridging EU-US research on Next Generation Internet: about NGIatlantic.eu

NGIatlantic.eu is one of the growing family of Research and Innovation Action (RIA) projects under the Next Generation Internet (NGI) initiative, whose goal is to collectively build a human-centric internet based on the values held dear by European citizens, such as privacy, trust, fairness and inclusiveness. At the same time, the NGI initiative is designed to include significant internationalisation activities, including collaborations between EU and United States NGI-related activities, in order to generate major impacts at both a pan-European and a transatlantic level. Against this backdrop, from January 2020 to June 2022, NGIatlantic.eu will fund third-party EU-based researchers and innovators to carry out NGI-related experiments in collaboration with US research teams, through regular open calls. In its first open call, which ran between 1st April and 29th May 2020, six projects were selected for funding, in areas primarily related to EU – US collaboration on privacy- and trust-enhancing technologies and decentralised data governance, leveraging AI, blockchain, 5G, big data and IoT technologies.

Trends in experiments in EU-US research and innovation

Organising a session during the NGI Policy Summit 2020 was an ideal opportunity to provide policymakers with an overview of the major trends and trajectories in EU – US research collaboration, and with context for NGI-related policy developments now and in the future. Three of the selected NGIatlantic.eu projects were given an opportunity to pitch their experiments, present their initial results and explain how these would contribute to ongoing policy dialogues in the EU and US.

Decentralized data governance

George C. Polyzos, Director of the Mobile Multimedia Laboratory at the Athens University of Economics and Business, presented the “Self-Certifying Names for Named Data Networking” project, whose solution builds on the emerging paradigm of Decentralized Identifiers (DIDs), a new form of self-sovereign identification being standardised by the W3C, applied to Named Data Networking (NDN). He was followed by Berat Senel, Research Engineer at PlanetLab Europe, EdgeNet, Laboratoire d’Informatique de Paris 6 (LIP6), who introduced the CacheCash experiment, which leverages Content Delivery Network (CDN) technology to provide a service in which interested users run caches and are incentivised to participate by receiving a cryptocurrency (Cachecoin) in exchange for serving content to other users.
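
For readers unfamiliar with DIDs, the snippet below sketches the general shape of a W3C Decentralized Identifier. The identifier shown is a made-up example and the pattern is a simplification of the full DID syntax, not the project's implementation.

```python
# Simplified illustration of the shape of a W3C Decentralized Identifier (DID):
# "did:<method>:<method-specific-id>". The regex is a simplification of the
# full DID syntax and the identifier below is a made-up example.
import re

DID_PATTERN = re.compile(r"^did:[a-z0-9]+:[A-Za-z0-9._%:-]+$")

example_did = "did:example:123456789abcdefghi"
print(example_did, "looks like a DID:", bool(DID_PATTERN.match(example_did)))
```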

Privacy and Trust Enabling Data Marketplace for Sustainable Supply Chains 

Moving on to the privacy and trust topics, Tomaz Levak, Managing Director at Trace Labs Ltd., introduced the “Food Data Marketplace” (FDM) project, which is fostering new economic models for sustainable food supply chains based on data and employing a privacy-by-design approach to enable farmers and cooperatives to regain control of their data, give it a price tag and sell it to interested partners in the supply chain. Last but not least, the NGIatlantic.eu project also took the opportunity to showcase the Twinning Lab, an online space for researchers, innovators and start-ups to establish complementary partnerships with transatlantic actors to address NGI challenges, and to present its future activities and opportunities for the NGI communities.

The project also highlighted its 3rd Open Call, which will open on 1st December 2020.

Post

Workshop report: Trustworthy content handling and information exchange with ONTOCHAIN

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by ONTOCHAIN.

Today, digital life is an extension of our physical world, and it demands the same critical, moral and ethical thinking. However, as things stand, when it comes to the exchange of knowledge and services, the internet cannot assure us that bias or systematic abuse of global trust will be avoided. Several threats can be identified in the real-life scenarios of a person’s interaction with the internet. Here are some examples.

The balance of power that was the initial spirit of the internet has been broken by a few dominant companies. This small, centralised network now holds the power of information in its hands and can potentially dictate what is true and what is false.

We all make daily decisions on the basis of information we find on the internet. But the provenance of this information is hard, slow and costly to verify, and its quality is often uneven and unassessed. Information can be corrupted by malicious storage and networks, or by censorship, and can be shared and propagated to an unforeseeable extent.

Publishing anonymously or pseudonymously to protect privacy leads, from time to time, to misinformation. Removing anonymity from the internet is not an option, and even if it were, real people would always be able to share false information, whatever the reason. Information disorder would still remain.

Various platforms publicly expose users’ ratings as metadata over the public internet, typically relating to the profiles of single users. But this model is flawed in two ways. Firstly, it allows spam to mislead prospective consumers, while past consumers have little incentive to provide their feedback. Secondly, the revenue that service providers make is not shared with the users who took the time to provide feedback.

Artificial intelligence, increasingly present in our daily digital lives, can, if not trained correctly, lead us to adopt partial behaviours and reveal how unequal, parochial and cognitively biased humans can be. Blockchain technology, a victim of its own success, has led to stand-alone, disconnected blockchains entailing different ecosystems, hashing algorithms, consensus models and communities. The blockchain space is becoming increasingly siloed, and its core philosophical concept – the idea of decentralisation – is being undermined.

In order to overcome these threats and make the internet a resilient, trustworthy and sustainable means of exchanging knowledge and services, the ONTOCHAIN European programme supports the development of innovative and interoperable solutions and novel business models in an open and collaborative way, through three cascading open calls. It proposes to federate blockchain and semantic technologies for trustworthy content handling and information exchange in vital sectors of the European economy. Several questions and challenges are nonetheless still open:

Shall we build the ONTOCHAIN ecosystem from scratch? Platform compatibility might make it easier for developers to contribute, but is it a double-edged sword?

Techniques and algorithms (e.g. knowledge representation, storage and querying, machine learning, data analytics) have to be leveraged and integrated into a single decentralised ontology framework.

The fast pace of innovation in blockchains will almost certainly render some design choices obsolete before the end of ONTOCHAIN; this has to be anticipated and mitigated for the sustainability of the ecosystem. An open and flexible design will be required, and ONTOCHAIN innovators will have to make numerous trade-offs – for example between the granularity and volume of data stored on-chain and overall performance – that may evolve as future blockchain protocols emerge. These choices will have to be documented and adaptable to keep ONTOCHAIN contributions interoperable and sustainable.

A method will have to be elaborated to transparently derive a new truth out of several known truths according to a set of rules. The design of specific smart contracts that would implement first-order logic directly on the blockchain could be one solution. How will ONTOCHAIN maintain a competitive advantage over already existing blockchain ecosystems? A set of innovative blockchain-related business models will have to be devised and implemented in order for all parties involved in the content exchange to be rewarded fairly.
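
To make the idea of deriving a new truth from several known truths according to a set of rules more concrete, here is a toy forward-chaining sketch. It is purely illustrative – the facts and rules are invented – and it is not ONTOCHAIN’s design.

```python
# Toy forward-chaining sketch: derive new "truths" from known facts according
# to a set of rules. Purely illustrative; not ONTOCHAIN's design, and the
# facts and rules are invented.
facts = {"batch_certified_organic", "origin_verified"}

# Each rule: if every premise is already a known fact, add the conclusion.
rules = [
    ({"batch_certified_organic", "origin_verified"}, "provenance_trusted"),
    ({"provenance_trusted"}, "eligible_for_premium_label"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```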

By building ONTOCHAIN with you, we expect to answer these challenges and contribute to a more distributed and transparent internet that respects and promotes the fundamental values of diversity, equality, privacy and participation. Stay tuned, share and engage!

Post

Workshop report: Follow us OFF Facebook – decent alternatives for interacting with citizens

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by Redecentralize.org.

Despite the incessant outcry over social media giants’ disrespect of privacy and unaccountable influence on society, any public sector organisation wanting to reach citizens feels forced to be present on their enormous platforms. But through its presence, an organisation legitimises these platforms’ practices, treats them like public utilities, subjects its content to their opaque filters and ranking, and compels citizens to be on them too — thus further strengthening their dominance. How could we avoid the dilemma of either reaching or respecting citizens?

Redecentralize organised a workshop to address this question. The workshop explored the alternative of decentralised social media, in particular Mastodon, which lets users choose whichever providers and apps they prefer because these can all interoperate via standardised protocols like ActivityPub; the result is a diverse, vendor-neutral, open network (dubbed the Fediverse), analogous to e-mail and the world wide web.
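
As a small illustration of how open that network is to organisations, the sketch below shows how a communications team could publish an announcement to its own Mastodon account through Mastodon’s standard REST API. The instance URL and access token are placeholders, and this is a minimal example rather than a recommended integration.

```python
# Minimal sketch: publish a status to an organisation's own Mastodon account
# via the standard Mastodon REST API. The instance URL and access token are
# placeholders, not real credentials.
import requests

INSTANCE = "https://social.example.org"          # your Mastodon instance
ACCESS_TOKEN = "YOUR_APPLICATION_ACCESS_TOKEN"   # created in the account's settings

response = requests.post(
    f"{INSTANCE}/api/v1/statuses",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    data={"status": "Our latest announcement is out - follow us here, off Facebook!"},
)
response.raise_for_status()
print("Posted:", response.json()["url"])
```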

Leading by example in this field is the state ministry of Baden-Württemberg, possibly the first government with an official Mastodon presence. Their head of online communications Jana Höffner told the audience about their motivation and experience. Subsequently, the topic was put in a broader perspective by Marcel Kolaja, Member and Vice-President of the European Parliament (and also on Mastodon). He explained how legislation could require the dominant ‘gatekeeper’ platforms to be interoperable too and emphasised the role of political institutions in ensuring that citizens are not forced to agree to particular terms of service in order to participate in public discussion.

Post

Workshop report: (Dis)connected future – an immersive simulation

As part of the Summit, Nesta Italia and Impactscool hosted a futures workshop exploring the key design choices for the future internet.

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by Nesta Italia and Impactscool, written by Giacomo Mariotti and Cristina Pozzi.

The NGI Policy Summit was a great opportunity for policymakers, innovators and researchers to come together to start laying out a European vision for the future internet and elaborate the policy interventions and technical solutions that can help get us there.

As part of the Summit, Nesta Italia and Impactscool hosted a futures workshop exploring the key design choices for the future internet. It was a participative and thought-provoking session. Here we take a look at how it went.

Our aims

The discussion about the internet of the future is very complex, and it touches on many of the challenges that our societies are facing today: topics like data sovereignty, safety, privacy, sustainability and fairness, to name just a few, as well as the implications of new technologies such as AI and blockchain, and areas of concern around them, such as ethics and accessibility.

In order to define and build the next generation internet, we need to make a series of design choices guided by the European values we want our internet to radiate. However, moving from principles to implementation is really hard. In fact, we face the added complexity coming from the interaction between all these areas and the trade-offs that design choices force us to make.

Our workshop’s goal was to bring to life some of the difficult decisions and trade-offs we need to consider when we design the internet of the future, in order to help us reflect on the implications and interaction of the choices we make today.

How we did it

The workshop was an immersive simulation about the future, in which we asked the participants to make some key choices about the design of the future internet and then dived deep into the possible future scenarios emerging from those choices.

The idea is that it is impossible to know exactly what the future holds, but we can explore different models and be open to many different possibilities, which can help us navigate the future and make more responsible and robust choices today.

In practice, we presented the participants with the following four challenges in the form of binary dilemmas and asked them to vote for their preferred choice in a poll:

  1. Data privacy: protection of personal data vs data sharing for the greater good
  2. Algorithms: efficiency vs ethics
  3. Systems: centralisation vs decentralisation
  4. Information: content moderation vs absolute freedom

For each of the 16 combinations of binary choices we prepared a short description of a possible future scenario, which considered the interactions between the four design areas and aimed at encouraging reflection and discussion.

Based on the majority votes we then presented the corresponding future scenario and discussed it with the participants, highlighting the interactions between the choices and exploring how things might have panned out had we chosen a different path.
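
For illustration, the mechanics of the exercise can be expressed in a few lines: four majority votes select one of the sixteen pre-written scenarios. The sketch below uses the vote shares reported in the results further down; the code itself is ours and was not a tool used during the workshop.

```python
# Illustrative sketch of the workshop mechanics: four binary votes select one
# of the 16 pre-written scenarios. The vote shares are the ones reported in
# the results below; the code itself was not part of the workshop.
poll_results = {
    "data privacy": {"protection of personal data": 84, "data sharing for the greater good": 16},
    "algorithms": {"efficiency": 41, "ethics": 59},
    "systems": {"centralisation": 12, "decentralisation": 88},
    "information": {"content moderation": 41, "absolute freedom": 59},
}

# Majority choice on each of the four dilemmas.
majority = tuple(max(options, key=options.get) for options in poll_results.values())

# One short scenario per combination; only the winning combination is filled in here.
scenarios = {
    ("protection of personal data", "ethics", "decentralisation", "absolute freedom"):
        "Individual-centric Internet",
}

print(majority, "->", scenarios.get(majority, "one of the other 15 scenarios"))
```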

What emerged

  • Data privacy: protection of personal data 84% / data sharing for the greater good 16%
  • Algorithms: efficiency 41% / ethics 59%
  • Systems: centralisation 12% / decentralisation 88%
  • Information: content moderation 41% / absolute freedom 59%

The results above summarise the choices made by the participants during the workshop, which led to the following scenario.

Individual-centric Internet

Decentralized and distributed points of access to the internet make it easier for individuals to manage their data and the information they are willing to share online. 

Everything that is shared is protected and can be used only following strict ethical principles. People can communicate without relying on big companies that collect data for profit. Information is totally free and everyone can share anything online with no filters.

Not so one-sided

Interesting perspectives emerged when we asked for contrarian opinions on the more one-sided questions, which demonstrated that middle-ground and context-aware solutions are required in most cases when dealing with complex topics such as those analysed here.

We discussed how certain non-privacy-sensitive data can genuinely contribute to the benefit of society, with minimal concern for the individual if it is shared in anonymised form. Two examples that emerged from the discussion were transport management and research.

In the (de)centralisation debate, we discussed how decentralisation could result in a diffusion of responsibility and a lack of accountability: “If everyone’s responsible, nobody is responsible.” We mentioned how this risk could be mitigated with tools like Public-Private-People collaboration and data cooperatives, combined with clear institutional responsibility.

Post

Workshop report: Futurotheque – a trip to the future

Sander Veenhof, Augmented reality artist, and Leonieke Verhoog, Program Manager at PublicSpaces, took their session attendees on a trip to the future.

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by Sander Veenhof and Leonieke Verhoog, creators of Futurotheque.

Sander Veenhof, Augmented reality artist, and Leonieke Verhoog, Program Manager at PublicSpaces, took their session attendees on a trip to the future. They did this ‘wearing’ the interactive face-filters they created for their speculative fiction and research project, the ‘Futurotheque’. The AR effects transformed them into citizens from the years 2021 right up to 2030, wearing the technical equipment we can expect to be wearing during those years. But beyond the hardware, the filters were foremost intended to visualise the way we’ll experience the world in the near future: through the HUD (head-up display) of our augmented reality wearables.

As users, we tend to think of the future of AR as more of the same in a hands-free way, but this session aimed to look beyond the well-known use-cases for these devices. Of course, they will provide us with all our information and entertainment needs and they can guide us wherever we are. But will that be our navigation through the physical world, or will these devices try to guide us through life? In what way will cloud intelligence enhance us, making use of the built-in camera that monitors our activities 24/7? What agency do we want to keep? And in what way should citizens be supported in handling these new devices, and the new dilemmas arising from their use?

These are abstract issues, but the face-filter visualisations applied to Sander and Leonieke helped to visualise the day-to-day impact of these technological developments on us as individuals, and sparked an interesting discussion with the session participants. After a dazzling peek into the next decade, the conclusion was that there’s a lot to think about when these devices become part of our society. But fortunately, that’s not the case yet. We still have time to think of ways to integrate these devices into our society beforehand, instead of doing so afterwards.

Post

Workshop report: People, not experiments – why cities must end biometric surveillance

We debated the use of facial recognition in cities with the policymakers and law enforcement officials who actually use it.

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by European Digital Rights (EDRi), which was originally published on the EDRi website.

We debated the use of facial recognition in cities with the policymakers and law enforcement officials who actually use it. The discussion got to the heart of EDRi’s warnings that biometric surveillance puts limits on everyone’s rights and freedoms, amplifies discrimination, and treats all of us as experimental test subjects. This techno-driven democratic vacuum must be stopped.

From seriously flawed live trials of facial recognition by London’s Metropolitan police force, to unlawful biometric surveillance in French schools, to secretive roll outs of facial recognition which have been used against protesters in Serbia: creepy mass surveillance by governments and private companies, using people’s sensitive face and body data, is on the rise across Europe. Yet according to a 2020 survey by the EU’s Fundamental Rights Agency, 80% of Europeans are against sharing their face data with authorities.

On 28 September, EDRi participated in a debate at the NGI Policy Summit on “Biometrics and facial recognition in cities” alongside policymakers and police officers who have authorised the use of the tech in their cities. EDRi explained that public facial recognition, and similar systems which use other parts of our bodies like our eyes or the way we walk, are so intrusive as to be inherently disproportionate under European human rights law. The ensuing discussion revealed many of the reasons why public biometric surveillance poses such a threat to our societies:

• Cities are not adequately considering risks of discrimination: according to research by WebRoots Democracy, black, brown and Muslim communities in the UK are disproportionately over-policed. With the introduction of facial recognition in multiple UK cities, minoritised communities are now having their biometric data surveilled at much higher rates. In one example from the research, the London Metropolitan Police failed to carry out an equality impact assessment before using facial recognition at the Notting Hill carnival – an event which famously celebrates black and Afro-Caribbean culture – despite knowing the sensitivity of the tech and the foreseeable risks of discrimination. The research also showed that whilst marginalised communities are the most likely to have police tech deployed against them, they are also the ones that are the least consulted about it.

• Legal checks and safeguards are being ignored: according to the Chief Technology Officer (CTO) of London, the London Metropolitan Police have been on “a journey” of learning, and understand that some of their past deployments of facial recognition did not have proper safeguards. Yet under data protection law, authorities must conduct an analysis of fundamental rights impacts before they deploy a technology. And it’s not just London that has treated fundamental rights safeguards as an afterthought when deploying biometric surveillance. Courts and data protection authorities have had to step in to stop unlawful deployments of biometric surveillance in Sweden, Poland, France, and Wales (UK) due to a lack of checks and safeguards.

• Failure to put fundamental rights first: the London CTO and the Dutch police explained that facial recognition in cities is necessary for catching serious criminals and keeping people safe. In London, the police have focused on ethics, transparency and “user voice”. In Amsterdam, the police have focused on “supporting the safety of people and the security of their goods” and have justified the use of facial recognition by the fact that it is already prevalent in society. Crime prevention and public safety are legitimate public policy goals: but the level of the threat to everyone’s fundamental rights posed by biometric mass surveillance in public spaces means that vague and general justifications are just not sufficient. Having fundamental rights means that those rights cannot be reduced unless there is a really strong justification for doing so.

• The public are being treated as experimental test subjects: across these examples, it is clear that members of the public are being used as subjects in high-stakes experiments which can have real-life impacts on their freedom, access to public services, and sense of security. Police forces and authorities are using biometric systems as a way to learn and to develop their capabilities. In doing so, they are not only failing to meet their human rights obligations, but are also violating people’s dignity by treating them as learning opportunities rather than as individual humans deserving of respect.

The debate highlighted the worrying patterns of a lack of transparency and consideration for fundamental rights in current deployments of facial recognition, and other public biometric surveillance, happening all across Europe. The European Commission has recently started to consider how technology can reinforce structural racism, and to think about whether biometric mass surveillance is compatible with democratic societies. But at the same time, it is bankrolling projects like the horrifyingly dystopian iBorderCTRL. EDRi’s position is clear: if we care about fundamental rights, our only option is to stop the regulatory whack-a-mole, and permanently ban biometric mass surveillance.
