Post

Calling in the experts: our roundtable on smartphone lifetimes

We've got some ideas to extend smartphone lifetimes and we invited some experts to put them under the microscope.

Ideas love company, and there comes a point in developing policy recommendations when a discussion with experts will turn good proposals into excellent ones. NGI Forward is exploring ways to extend the useful life of smartphones to reduce their environmental impact, and last week we held a roundtable to put our proposals to the test. This is a complex issue with lots of moving parts, so we invited experts from a range of fields: repair, cybersecurity, software development, sustainability and European policy. Representatives of device makers, mobile networks, security analysts, advocacy groups and the European Commission pulled our suggestions apart and helped us put them back together.

Our focus on smartphones grew out of our work last year on the environmental impact of the internet as a whole, which culminated in our report, Internet of Waste. The internet and its underlying infrastructure consume a significant share of the Earth’s resources, using 5-9 per cent of global energy supply and creating around 2 per cent of global emissions. And the little black rectangles we carry around in our pockets and bags? They’re some of the biggest contributors. Europeans replace their smartphone on average every two years, and 72 per cent of a smartphone’s lifetime emissions are created before it hits the shelves. As a result, extending the average lifespan of smartphones from two to four years would cut their emissions by 44 per cent. More than half of Europeans expect their smartphone to last for four or more years, so there is clearly a market for devices that last longer.

We’d like to see smartphone lifetimes extended to five years by 2030, and our roundtable discussion focused on two areas that could contribute.

Short-lived software support

The software on a device needs to be updated regularly to keep it secure and running smoothly. When software updates stop, a device can become unreliable or vulnerable to data breaches. As a result, the lifetime of a smartphone can be artificially shortened if the manufacturer stops providing updates before the hardware breaks. Despite the importance of software updates, most smartphones receive them for only two or three years. The influence on device lifetime is clear: a 2020 Eurobarometer survey found that 30 per cent of users replaced a smartphone because the old device’s performance had significantly deteriorated, and 19 per cent because certain applications or software had stopped working on it.

In our roundtable discussion, we suggested that smartphone makers should be required to provide at least seven years’ software update support. We thought that setting an ambitious target would push manufacturers to think differently about the way they provide software updates, and also drastically reduce the likelihood of artificially shortening device lifetimes. We also suggested that device makers allow users to install alternative operating systems, preferably open source ones, at the end of official support. This would allow the open-source community to create software that runs easily on older devices and receives regular updates indefinitely.

Davide Polverini of the European Commission described the work going into developing legislation for extending smartphone lifetimes, which focuses on the Ecodesign Directive. The Commission is developing vertical regulations that will apply to smartphones and tablets, as well as reviewing the Directive itself to explore how it can be adapted to cover electronics and internet technology. Ugo Vallauri from the Restart Project and Right to Repair Europe pushed for the Commission to be ambitious and agreed that software updates should be provided for far longer than they are currently. Ugo also explained that the practice of serialisation, where manufacturers prevent repair by tying specific parts to a device’s software, is becoming more common.

Our other experts were broadly in support of extending software update periods, especially since analysis by the Fraunhofer Institute shows that the cost of extending updates from two to five years is around €2 per device. However, participants raised concerns that the cost would be greater for smaller device manufacturers, which could further concentrate the market in the hands of the larger players. Device makers are not the only ones that would be affected, since several chips within a smartphone need their own software. Any legislation should take this complexity into account, especially in tackling the dominance of Apple and Google, which together control the vast majority of smartphone software. We also discussed the possibility that manufacturers would create a loophole by providing a basic operating system that would be cheap to support for several years, while offering an alternative with more features that could be abandoned sooner.

We discussed the importance of updates being maintained for each component of the device, including those made by other companies, and whether it is possible to separate software and security updates (we decided possibly not). Our experts emphasised the importance of processes being as easy as possible, and the likelihood that users will be reluctant to start over with a new operating system when theirs is no longer supported. We also heard about the idea of code escrows, in which software is released if a company ceases to exist.

Making repair information public

Our second proposal is for manufacturers to publish repair manuals, device schematics and diagnostic tools so that anyone can use them. Pre-pandemic growth in repair cafes and parties demonstrates that consumers are keen to repair their gadgets and keep them going for longer. Despite this popularity, it remains difficult for end users to conduct their own smartphone repairs, so making repair information public could have a significant impact. The information would also be invaluable for research, since the repairability of products could be compared without having to tear each model apart. The French Repairability Index has already shown that public availability is possible: in response to it, Samsung published repair manuals for several of its devices online.

This is different from the Commission’s current approach for products such as electronic displays, which requires only that approved repair professionals can access this information. For TVs and other screens, repairers must either apply to be added to a national register (though no Member State has implemented one to date) or be approved by the manufacturer, which can impose whatever arduous contract requirements it desires. Manufacturers can take five working days to approve a repairer and another working day to provide manuals for a specific model. We think these hurdles are likely to push more people to replace their smartphones rather than repair them – when these devices are so important to our daily lives, each day they’re away for repair creates a serious disincentive.

Our experts debated the risks of this information being available to people who might use it to take advantage of security vulnerabilities. For several of our participants, Samsung’s recent publication of repair manuals for the French Repairability Index demonstrates that the right incentives can override worries about the information being misused. We also explored how likely consumers are to conduct repairs, what risks of injury they might face, and whether the availability and quality of spare parts was a greater concern. In the end, it appeared to be a chicken-and-egg issue: we can’t know whether consumers will take matters into their own hands while the opportunity does not exist, and whatever downsides there are can clearly be overcome if the incentives are in the right place.

What next?

We are incredibly grateful to all our roundtable participants, who created a lively discussion and really got stuck in. Next, I’ll be incorporating their insights into a policy briefing aimed at the European Commission, to lay out the proposals and their potential impact. We’ll publish it on our website in the next few weeks, but feel free to contact me if you’d like to receive a copy of the final briefing.

Post

The NGI Policy-in-Practice Fund – announcing the grantees

We are very excited to announce the four projects receiving funding from the Next Generation Internet Policy-in-Practice Fund.

Policymakers and public institutions have more levers at their disposal to spur innovation in the internet space than is often thought, and they can play a powerful role in shaping new markets for ethical tools. We believe that local experimentation and ecosystem building are particularly vital if we want alternative models for the internet to become tangible and gain traction. But finding the funding and space to undertake this type of trial is not always easy, especially if outcomes are uncertain. Through the NGI Policy-in-Practice fund, our aim has been not only to give organisations the means to undertake a number of these trials, but also to make the case for local trials more generally.

Over the past summer and autumn, we ran a highly competitive application process, ultimately selecting four ambitious initiatives that embody the vision behind the NGI Policy-in-Practice fund. Each project will receive funding of up to €25,000 to test its idea at a local level and generate insights that could help us build a more trustworthy, inclusive and democratic future internet.

In conjunction with this announcement, we have released an interview with each of our grantees, explaining their projects and the important issues they are seeking to address in more detail. You can also find a short summary of each project below. Make sure you register for our newsletter to stay up to date on the progress of each of our grantees, and our other work on the future of the internet.

Interoperability to challenge Big Tech power 

This project is run by a partnership of two organisations, Commons Network and Open Future, based in Amsterdam, Berlin and Warsaw.

This project explores whether the principle of interoperability, the idea that services should be able to work together, and data portability, which would allow users to carry their data with them to new services, can help decentralise power in the digital economy. Currently, as users, we are often locked into a small number of large platforms. Smaller alternative solutions, particularly those that want to maximise public good rather than optimise for profit, find it hard to compete in this winner-takes-all economy. Can we use interoperability strategically, seizing the clout of trusted institutions such as public broadcasters and civil society, to create an ecosystem of fully interoperable and responsible innovation in Europe and beyond?

Through a series of co-creation workshops, the project will explore how this idea could work in practice, and the role trusted public institutions can play in bringing it to fruition. 

Bridging the Digital Divide through Circular Public Procurement

This project will be run by eReuse, based in Barcelona, with support from the City of Barcelona, the Technical University of Barcelona (UPC) and the global Association for Progressive Communications.

During the pandemic, when homeschooling and remote working became the norm overnight, bridging the digital divide has become more important than ever. This project is investigating how we can make it easier for public bodies, and also the private sector, to donate old digital devices, such as laptops and smartphones, to low-income families currently unable to access the internet.

By extending the lifetime of a device in this way, we also reduce the environmental footprint of our internet use. Laptops and phones often end up being recycled, or, worse, binned, long before their useful lifespan is over, putting further strain on the system. Donating devices could be a simple but effective way of keeping them in circulation for longer.

The project sets out to do two things: first, it wants to try out this mechanism on a local level and measure its impact through tracking the refurbished devices over time. Second, it wants to make it easier to replicate this model in other places, by creating legal templates that can be inserted in public and private procurement procedures, making it easier for device purchasers to participate in this kind of scheme. The partnership also seeks to solidify the network of refurbishers and recyclers across Europe. The lessons learned from this project can serve as an incredibly useful example for other cities, regions and countries to follow. 

Bringing Human Values to Design Practice

This project will be run by the BBC with support from Designswarm, LSE and the University of Sussex.

Many of the digital services we use today, from our favourite news outlet to social media networks, rely on maximising “engagement” as a profit model. A successful service or piece of content is one that generates many clicks, drives further traffic, or generates new paying users. But what if we optimised for human well-being and values instead? 

This project, led by the BBC, seeks to trial a more human-centric approach to measuring audience engagement by putting human values at its core. It will do so by putting into practice longer-standing research on mapping the kinds of values and needs users care about most, and by developing new design frameworks that make it easier to track these alternative metrics in a transparent way.

The project will run a number of design workshops and share its findings through a dedicated website and other outlets to involve the wider community. The learnings and design methodology that emerge from this work will not just be trialled within the project partners’ own contexts, but will also be easily replicable by others interested in taking a more value-led approach.

Responsible data sharing for emergencies: citizens in control

This project will be run by the Dutch National Police, in partnership with the Dutch Emergency Services Control, the Amsterdam Safety Region and the City of Amsterdam.

In a data economy that is growing ever more complex, giving meaningful consent to what happens to our personal data remains one of the biggest unsolved puzzles. But new online identity models have emerged as a potentially very promising solution, empowering users to share only the information they want to share with third parties, and to share it on their own terms. One way to allow such a new approach to identity and data sharing to scale would be to bring in government and other trusted institutions to build their own services using these principles. That is exactly what this project seeks to do.

The project has already laid out all the building blocks of their Data Trust Infrastructure but wants to take it one step further by actually putting this new framework into practice. The project brings together a consortium of Dutch institutional partners to experiment with one first use case, namely the sharing of vital personal data with emergency services in the case of, for example, a fire. The project will not just generate learnings about this specific trial, but will also contribute to the further finetuning of the design of the wider Data Trust Infrastructure, scope further use cases (of which there are many!), and bring on board more interested parties.

Post

Policy in Practice Fund: An internet optimised for human values

Lianne Kerlin from the BBC answers a few of our eager questions to give us some insight into the project and what it will achieve.

We’re introducing each of our four Policy-in-Practice Fund projects with an introductory blog post. Below, Lianne Kerlin from the BBC answers a few of our burning questions to give us some insight into the project and what it will achieve. We’re really excited to be working with four groups of incredible innovators and you’ll be hearing a lot more about the projects as they progress. 

At the BBC, we believe that embedding human values into the heart of design practice is fundamental to building a more inclusive, democratic, resilient, trustworthy and sustainable future internet. We are pleased to have received a grant from the NGI Forward’s Policy-in-Practice fund to integrate our innovative work on human values with existing design frameworks so that it can be used by a wider range of practitioners.

What are human values and how do they relate to technology?

The Human Values Framework is based on the needs of users in today’s technology-driven world. It is the result of a research project that examined the link between people’s values, behaviours and needs through a series of workshops, interviews and surveys.

Our work found fourteen indicators of well-being that express fundamental needs. We have constructed a design framework that puts these needs at the centre of innovation and decision making so that products and services can support people in their lives. Values are judgements about what people deem to be necessary; they also represent underlying needs and motivations that drive and shape everyday behaviour, and include elements such as achieving goals, being inspired, pursuing pleasure, and being safe and well.

Some of the human values identified

So what’s the problem with our current approach to tech? 

As well as offering guidance to designers, the framework addresses a fundamental issue with our current approach to measuring the effectiveness of products and services, which is that they are largely concerned with attention metrics such as the number of users or the number of minutes consumed. As a result, any deeper questions of the impact on audience well-being or happiness are not just left unanswered – they are unaskable. 

This approach has serious implications across the online sphere. It means companies compete solely for consumer attention, creating pressure within organisations to increase consumption and adopt attention-grabbing designs that can lead to addictive user behaviour.

How do you see this approach changing in the future, if we get things right?

The framework offers an alternative perspective, one that asks designers to consider the impact of their product on the end-user. In re-framing success, decision-makers can move away from an end goal of consumption into thinking about their intended impact on the people behind the numbers. Using the framework they can consider how to help people live more fulfilled lives, rather than simply gaining their attention.

The framework also recognises the limitations of the current measurement approach and reframes success as the fulfilment of audience values. It is about considering what is fundamentally good for people, and about designing and measuring how products and services can enable people to explore, to grow or to understand themselves. We believe that having an alternative way to describe success will result in healthier and more sustainable practices.

What do you hope to learn from this project, and how might those learnings be used by others?

Our goal with this project is to take the insights we have developed into design practice and integrate them into existing approaches, specifically the well-known ‘double diamond’ process model first outlined by the UK Design Council in 2005, and current work connecting user-centred design and agile development. We hope to make measuring the impact on quality of life and wellness part of the normal design cycle for every organisation.

This collaboration is an exciting opportunity to explore how the human values framework can integrate with existing frameworks and practices across all types of industries. We will work with industry experts to learn about as many current decision-making processes as possible, in order to understand where the human values framework can best align. Our goal is to produce a set of tools, processes and best-practice guidelines for embedding the human values framework into existing frameworks.

How can people get involved and find out more?

We will be posting regular updates on our website at www.humanvalues.io which will launch early in 2021 – it currently points to our main page on the BBC R&D website where you can find out about the human values framework. You can also listen to our podcast series.

Post

Policy-in-Practice Fund: Interoperability with a Purpose

Sophie Bloemen and Thomas de Groot from Commons Network, and Paul Keller and Alek Tarkowski from Open Future, answer a few of our questions to give us some insight into their project and what it will achieve.

We’re introducing each of our four Policy-in-Practice Fund projects with an introductory blog post. Below, Sophie Bloemen and Thomas de Groot from Commons Network, and Paul Keller and Alek Tarkowski from Open Future, answer a few of our burning questions to give us some insight into their project and what it will achieve. We’re really excited to be working with four groups of incredible innovators and you’ll be hearing a lot more about the projects as they progress. 

Can you explain to us what you mean by interoperability, and why you think interoperability could be an effective tool to counter centralisation of power in the digital economy?

Simply put, interoperability is the ability of systems to work together and to exchange data easily. In policy circles today, the term is mostly used as a solution to the dominance of a small number of extremely large platform intermediaries, a dominance increasingly understood to be unhealthy, both socially and politically.

In the case of Facebook, for instance, introducing interoperability would mean that a user could connect with Facebook users, and interact with them and with the content they share or produce, without using Facebook itself. This would be possible because, thanks to interoperability, other services could connect with Facebook’s infrastructure.

This scenario is often presented as a way of creating a more level playing field for competing services (which would gain access to Facebook’s user base). In the same vein, proponents of interoperability argue for an ecosystem of messaging services in which messages can be exchanged across services. This is also the basis of the most recent antitrust case brought by the US government against Facebook. At their core, these are arguments in favour of individual freedom of choice and for empowering competitors in the market. We call this approach competitive interoperability.

While this would clearly be a step in the right direction – the Digital Markets Act proposed by the European Commission at the end of last year contains a first baby step, which would require “gatekeeper platforms” to allow interoperability for the ancillary services they offer, but not for their main services – it is equally clear that competitive interoperability will not substantially change the nature of the online environment. Increasing choice between different services designed to extract value from our online activities may feel better than being forced to use a single service, but it does not guarantee that the exploitative relationship between service providers and their users will change. We cannot predict the effects of increased market competition on the control individual users have over their data. In fact, allowing users to take their data from one service to another comes with a whole raft of largely unresolved personal data protection issues.

So, even though there are limits on this particular idea of interoperability, this does not mean that the concept itself has no use. Instead, we say that it needs to be envisaged with a different purpose in mind: building a different kind of online environment that answers to the needs of public institutions and civic communities, instead of ‘improving markets’. We see interoperability as a design principle that has the potential to build a more decentralised infrastructure that enables individual self-determination in the online environment. We propose to call this type of interoperability generative interoperability.

What question are you specifically trying to answer with this research? And why is it important?

In our view, the purpose of generative interoperability must be to enable what we call an Interoperable Public Civic Ecosystem. And we are exploring what such an ecosystem would encompass, together with our partners. We know that a public civic ecosystem has the potential to provide an alternative digital public space that is supported by public institutions (public broadcasters, universities and other educational institutions, libraries and other cultural heritage institutions) and civic- and commons-based initiatives. Can such an ecosystem allow public institutions and civic initiatives to route around the gatekeepers of the commercial internet, without becoming disconnected from their audiences and communities? Can it facilitate meaningful interaction outside the sphere of digital platform capitalism?

This experiment is part of the Shared Digital Europe agenda. Could you tell us a bit more about that agenda?

Shared Digital Europe is a broad coalition of thinkers, activists, policymakers and civil society. Two years ago we launched a new framework to guide policymakers and civil society organisations involved with digital policymaking in the direction of a more equitable and democratic digital environment, where basic liberties and rights are protected, where strong public institutions function in the public interest, and where people have a say in how their digital environment functions.

We established four guiding principles that can be applied to all layers of the digital space, from the physical networking infrastructure to the applications and services running on top of it. They can also be applied to the social, economic and political aspects of a society undergoing digital transformation. They are:

  • Enable Self-Determination, 
  • Cultivate the Commons, 
  • Decentralise Infrastructure and 
  • Empower Public Institutions.

Now, with this new project, we are saying that interoperability should primarily be understood as enabling interactions between public institutions, civic initiatives and their audiences, without the intermediation of the now-dominant platforms. Seen in this light, the purpose of interoperability is not to increase competition among platform intermediaries, but to contribute to building public infrastructures that lessen our societies’ dependence on these intermediaries. Instead of relying on private companies to provide digital infrastructures that they can shape according to the needs of their business models, we must finally start to build public infrastructures designed to respond to the civic and public values underpinning democratic societies. In building these infrastructures, a strong commitment to universal interoperability based on open standards and protocols can serve as an insurance policy against re-centralisation and the emergence of dominant intermediaries.

These public infrastructures do not emerge by themselves; they are the product of political and societal will. In the European Union, the political climate seems ripe for creating such a commitment. As evidenced by the current flurry of regulation aimed at the digital space, the European Commission has clearly embraced the view that Europe needs to set its own rules for the digital space. But if we want to see real systemic change, we must not limit ourselves to regulating private companies (via baby steps towards competitive interoperability and other types of regulation) but must also invest in interoperable digital public infrastructures that empower public institutions and civil society. If European policymakers are serious about building the next generation internet, they will need to see themselves as ecosystem-builders rather than market regulators. Understanding interoperability as a generative principle will be an important step towards this objective.

How can people get involved and find out more?

People can reach out to us through our Twitter account: @SharedDigitalEU, they can write to us via email at hello@shared-digital.eu, or they can stay in touch through our newsletter, which we send out a couple of times per year.

Post

Workshop report: Follow us OFF Facebook – decent alternatives for interacting with citizens

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by Redecentralize.org.

Despite the incessant outcry over social media giants’ disrespect of privacy and unaccountable influence on society, any public sector organisation wanting to reach citizens feels forced to be present on their enormous platforms. But through its presence, an organisation legitimises these platforms’ practices, treats them like public utilities, subjects its content to their opaque filters and ranking, and compels citizens to be on them too — thus further strengthening their dominance. How could we avoid the dilemma of either reaching or respecting citizens?

Redecentralize organised a workshop to address this question. The workshop explored the alternative of decentralised social media, in particular Mastodon, which lets users choose whichever providers and apps they prefer because these can all interoperate via standardised protocols like ActivityPub; the result is a diverse, vendor-neutral, open network (dubbed the Fediverse), analogous to e-mail and the world wide web.
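To make the interoperability concrete: Fediverse servers exchange activities as JSON documents using the ActivityStreams vocabulary that ActivityPub builds on. A rough sketch of the kind of “Create Note” activity a server publishes when a user posts might look like the following (the account and URLs are illustrative placeholders, not a real instance):

```python
import json

# A minimal ActivityPub "Create" activity wrapping a "Note", following the
# ActivityStreams 2.0 vocabulary. Any server that speaks the protocol can
# parse this, regardless of which vendor built it.
# The actor and object URLs below are hypothetical examples.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://social.example/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Note",
        "attributedTo": "https://social.example/users/alice",
        "content": "Hello, Fediverse!",
    },
}

print(json.dumps(activity, indent=2))
```

Because the format is standardised rather than proprietary, a post authored on one provider can be delivered to and rendered by any other, which is what makes the network vendor-neutral.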

Leading by example in this field is the state ministry of Baden-Württemberg, possibly the first government with an official Mastodon presence. Its head of online communications, Jana Höffner, told the audience about the ministry’s motivation and experience. The topic was then put in a broader perspective by Marcel Kolaja, Member and Vice-President of the European Parliament (and also on Mastodon). He explained how legislation could require the dominant ‘gatekeeper’ platforms to be interoperable too, and emphasised the role of political institutions in ensuring that citizens are not forced to agree to particular terms of service in order to participate in public discussion.

Post

Workshop report: (Dis)connected future – an immersive simulation

As part of the Summit, Nesta Italia and Impactscool hosted a futures workshop exploring the key design choices for the future internet.

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by Nesta Italia and Impactscool, written by Giacomo Mariotti and Cristina Pozzi.

The NGI Policy Summit was a great opportunity for policymakers, innovators and researchers to come together to start laying out a European vision for the future internet and elaborate the policy interventions and technical solutions that can help get us there.

As part of the Summit, Nesta Italia and Impactscool hosted a futures workshop exploring the key design choices for the future internet. It was a participative and thought-provoking session. Here we take a look at how it went.

Our aims

The discussion about the internet of the future is highly complex and touches on many of the challenges our societies face today: data sovereignty, safety, privacy, sustainability and fairness, to name just a few, as well as the implications of new technologies such as AI and blockchain, and the concerns, such as ethics and accessibility, that surround them.

In order to define and build the next generation internet, we need to make a series of design choices guided by the European values we want our internet to radiate. Moving from principles to implementation, however, is genuinely hard: we face added complexity from the interactions between all these areas and from the trade-offs that design choices force us to make.

Our workshop’s goal was to bring to life some of the difficult decisions and trade-offs we need to consider when we design the internet of the future, in order to help us reflect on the implications and interaction of the choices we make today.

How we did it

The workshop was an immersive simulation about the future in which we asked the participants to make some key choices about the design of the future internet, and then dived deep into the possible future scenarios emerging from those choices.

The idea is that it is impossible to know exactly what the future holds, but we can explore different models and be open to many different possibilities, which can help us navigate the future and make more responsible and robust choices today.

In practice, we presented the participants with the following four challenges in the form of binary dilemmas and asked them to vote for their preferred choice in a poll:

  1. Data privacy: protection of personal data vs data sharing for the greater good
  2. Algorithms: efficiency vs ethics
  3. Systems: centralisation vs decentralisation
  4. Information: content moderation vs absolute freedom

For each of the 16 combinations of binary choices we prepared a short description of a possible future scenario, which considered the interactions between the four design areas and aimed at encouraging reflection and discussion.
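The scenario space here is simply the Cartesian product of the four binary choices; a quick sketch (using the dilemma names from the list above) confirms the 16 combinations:

```python
from itertools import product

# The four binary dilemmas posed to the participants.
dilemmas = {
    "Data privacy": ("protection of personal data", "data sharing for the greater good"),
    "Algorithms": ("efficiency", "ethics"),
    "Systems": ("centralisation", "decentralisation"),
    "Information": ("content moderation", "absolute freedom"),
}

# Every possible future scenario is one combination of the four choices.
scenarios = list(product(*dilemmas.values()))
print(len(scenarios))  # prints 16: 2**4 combinations, one scenario description each
```

The combination the participants actually voted for — privacy, ethics, decentralisation, absolute freedom — is one element of this product.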

Based on the majority votes we then presented the corresponding future scenario and discussed it with the participants, highlighting the interactions between the choices and exploring how things might have panned out had we chosen a different path.

What emerged

Individual-centric Internet

  • Data privacy: protection of personal data 84% vs data sharing for the greater good 16%
  • Algorithms: efficiency 41% vs ethics 59%
  • Systems: centralisation 12% vs decentralisation 88%
  • Information: content moderation 41% vs absolute freedom 59%

The results above summarise the choices made by the participants during the workshop, which led to the following scenario.

Individual-centric Internet

Decentralized and distributed points of access to the internet make it easier for individuals to manage their data and the information they are willing to share online. 

Everything that is shared is protected and can be used only following strict ethical principles. People can communicate without relying on big companies that collect data for profit. Information is totally free and everyone can share anything online with no filters.

Not so one-sided

Interesting perspectives emerged when we asked for contrarian opinions on the more one-sided questions, which demonstrated that middle-ground, context-aware solutions are required in most cases when dealing with topics as complex as those analysed.

We discussed how certain non-privacy-sensitive data can genuinely contribute to the benefit of society, with minimal concern for the individual if it is shared in anonymised form. Two examples that emerged from the discussion were transport management and research.

In the (de)centralisation debate, we discussed how decentralisation could result in a diffusion of responsibility and a lack of accountability: “if everyone’s responsible, nobody is responsible”. We noted how this risk could be mitigated by tools like public-private-people collaboration and data cooperatives, combined with clear institutional responsibility.

Post

Workshop report: People, not experiments – why cities must end biometric surveillance

We debated the use of facial recognition in cities with the policymakers and law enforcement officials who actually use it.

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by European Digital Rights (EDRi), which was originally published on the EDRi website.

We debated the use of facial recognition in cities with the policymakers and law enforcement officials who actually use it. The discussion got to the heart of EDRi’s warnings that biometric surveillance puts limits on everyone’s rights and freedoms, amplifies discrimination, and treats all of us as experimental test subjects. This techno-driven democratic vacuum must be stopped.

From seriously flawed live trials of facial recognition by London’s Metropolitan Police force, to unlawful biometric surveillance in French schools, to secretive roll-outs of facial recognition which have been used against protesters in Serbia: creepy mass surveillance by governments and private companies, using people’s sensitive face and body data, is on the rise across Europe. Yet according to a 2020 survey by the EU’s Fundamental Rights Agency, 80% of Europeans are against sharing their face data with authorities.

On 28 September, EDRi participated in a debate at the NGI Policy Summit on “Biometrics and facial recognition in cities” alongside policymakers and police officers who have authorised the use of the tech in their cities. EDRi explained that public facial recognition, and similar systems which use other parts of our bodies like our eyes or the way we walk, are so intrusive as to be inherently disproportionate under European human rights law. The ensuing discussion revealed many of the reasons why public biometric surveillance poses such a threat to our societies:

• Cities are not adequately considering risks of discrimination: according to research by WebRoots Democracy, black, brown and Muslim communities in the UK are disproportionately over-policed. With the introduction of facial recognition in multiple UK cities, minoritised communities are now having their biometric data surveilled at much higher rates. In one example from the research, the London Metropolitan Police failed to carry out an equality impact assessment before using facial recognition at the Notting Hill carnival – an event which famously celebrates black and Afro-Caribbean culture – despite knowing the sensitivity of the tech and the foreseeable risks of discrimination. The research also showed that whilst marginalised communities are the most likely to have police tech deployed against them, they are also the least consulted about it.

• Legal checks and safeguards are being ignored: according to the Chief Technology Officer (CTO) of London, the London Metropolitan Police has been on “a journey” of learning, and understand that some of their past deployments of facial recognition did not have proper safeguards. Yet under data protection law, authorities must conduct an analysis of fundamental rights impacts before they deploy a technology. And it’s not just London that has treated fundamental rights safeguards as an afterthought when deploying biometric surveillance. Courts and data protection authorities have had to step in to stop unlawful deployments of biometric surveillance in Sweden, Poland, France, and Wales (UK) due to a lack of checks and safeguards.

• Failure to put fundamental rights first: the London CTO and the Dutch police explained that facial recognition in cities is necessary for catching serious criminals and keeping people safe. In London, the police have focused on ethics, transparency and “user voice”. In Amsterdam, the police have focused on “supporting the safety of people and the security of their goods” and have justified the use of facial recognition by the fact that it is already prevalent in society. Crime prevention and public safety are legitimate public policy goals: but the level of the threat to everyone’s fundamental rights posed by biometric mass surveillance in public spaces means that vague and general justifications are just not sufficient. Having fundamental rights means that those rights cannot be reduced unless there is a really strong justification for doing so.

• The public are being treated as experimental test subjects: across these examples, it is clear that members of the public are being used as subjects in high-stakes experiments which can have real-life impacts on their freedom, access to public services, and sense of security. Police forces and authorities are using biometric systems as a way to learn and to develop their capabilities. In doing so, they are not only failing in their human rights obligations, but are also violating people’s dignity by treating them as learning opportunities rather than as individual humans deserving of respect.

The debate highlighted the worrying patterns of a lack of transparency and of consideration for fundamental rights in current deployments of facial recognition, and other public biometric surveillance, happening all across Europe. The European Commission has recently started to consider how technology can reinforce structural racism, and to think about whether biometric mass surveillance is compatible with democratic societies. But at the same time, it is bankrolling projects like the horrifyingly dystopian iBorderCTRL. EDRi’s position is clear: if we care about fundamental rights, our only option is to stop the regulatory whack-a-mole and permanently ban biometric mass surveillance.

Post

Workshop report: What your face reveals – the story of HowNormalAmI.eu

At the Next Generation Internet Summit, Dutch media artist Tijmen Schep revealed his latest work - an online interactive documentary called 'How Normal Am I?'.

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by Tijmen Schep.

At the Next Generation Internet Summit, Dutch media artist Tijmen Schep revealed his latest work – an online interactive documentary called ‘How Normal Am I?‘. It explains how face recognition technology is increasingly used in the world around us, for example when the dating app Tinder gives all its users a beauty score in order to match people who are about equally attractive. Besides just telling us about this, the project also lets people experience it for themselves. Through your webcam, you will be judged on your beauty, age, gender, body mass index (BMI), and your facial expressions. You’ll even be given a life expectancy score, so you’ll know how long you have left to live.

The project has sparked the imagination – and perhaps a little feeling of dread – in many people: not even two weeks later, the documentary had been ‘watched’ over 100,000 times.

At the Summit, Tijmen offered a unique insight into the making of this project. In his presentation, he talked about the ethical conundrums of building a BMI-prediction algorithm that is based on photos from arrest records and that relies on science that has been debunked. The presentation generated a lot of questions and was positively received by those who attended the summit.

Post

Workshop report: Data-sharing in cities

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by the Coalition of Cities for Digital Rights, written by Beatriz Benitez and Malcolm Bain.

At the Next Generation Internet Policy Summit 2020, organised by Nesta and the City of Amsterdam on 28th September 2020, Daniel Sarasa (Zaragoza) and Malcolm Bain (Barcelona), on behalf of the Coalition of Cities for Digital Rights, hosted a round table for city and regional stakeholders that explored the application of digital rights in local data-sharing platforms, throughout the planning, implementation and evolution stages of digital data-sharing services.

Data-sharing platforms are playing an important role in cities by integrating data collected throughout, or related to, the city and its citizens from a wide variety of sources (central administration, associated entities, utilities, the private sector). They enable local authorities, businesses and, occasionally, the public to access this data produced within the city and to use it for limited or unlimited purposes (open data).

Malcolm introduced the session, highlighting that while cities are keen to share data and use shared data in city digital services, they are (or should be) also aware of the digital rights issues arising in these projects: citizens’ privacy, the transparency and openness of the data used, the accessibility and inclusion of citizens, bias in the data sets used, and the privatisation of the use of city-related data. Luckily, cities are also in the best position to introduce the concept of ‘digital rights by design’ in these projects and to correct issues such as bias, privacy intrusions, unfairness, profiling and data misuse. He briefly showcased the Coalition’s work in this area in the Data Sharing Working Group, which focuses on the ‘building blocks’ of rights-compliant data-sharing projects – policies, processes, infrastructures, and specific actions and technologies – that extract value from urban big data while respecting residents’ and visitors’ rights.

Daniel highlighted the work of Eurocities on their Citizens Data Principles, whose aim is to offer guidance to European local governments on more socially responsible use of data and to recognise, protect and uphold citizens’ rights over the data they produce. The principles support using data-generated knowledge to improve urban life and preserve European values through scientific, civic, economic and democratic progress. Daniel then presented one of his own city’s data-sharing projects, Periscopio, a framework for sharing the information contained in urban data (public and private) in a way that allows social agents and citizens to be involved in creating social, scientific, economic and democratic value, as well as enabling the creation of better urban services.

Then, the cities of San Antonio, Long Beach, Portland, Toronto, Rennes, Helsinki, Amsterdam and Barcelona each presented some case studies from their cities, highlighting different issues raised by their data-sharing platforms and projects.

  • For the City of San Antonio, USA, Emily B. Royall addressed the issue of data bias and the need to listen to the community under the theme ‘Leveraging Data for Equity’.
  • Johanna Pasilkar of Helsinki shared with us the work of the ‘MyData’ operator initiative, which aims to ease residents’ daily lives by consolidating data collected by the city’s departments and organisations and enabling sharing across several municipalities (data portability).
  • On behalf of the City of Amsterdam, Ron Van der Lans told us about the city’s collaboration with navigation companies such as Google, Waze and BeMobile, sharing traffic data to improve citizens’ mobility and quality of life.
  • Hamish Goodwin from the City of Toronto, Canada explained how they are attempting to integrate digital rights principles into the city digital infrastructure and the municipalities’ decision-making and how to put a policy framework into practice – the results of this are just coming out.
  • From the city of Rennes, Ben Lister introduced us to RUDI, a local, multi-partner data-sharing platform which goes beyond open data and connects users and producers to create new and/or better services.
  • Héctor Domínguez from the city of Portland, USA told us about the importance of ‘Racial Justice’ as a core value to regulating emergent technology, based on the respect for privacy, trusted surveillance and digital inclusion.
  • Ryan Kurtzman, on behalf of the City of Long Beach, USA, spoke about the positive and negative associations of smart cities, and how participatory design with citizens in digital services can strengthen the positive aspects: personal convenience, engagement and solving social challenges.
  • To conclude the round, Marc Pérez-Battle from Barcelona presented several data sharing and open data projects led by the City Council.

The city participants highlighted the need to embed digital rights at design time (privacy, transparency, security, accessibility, etc.), to involve citizens, and to retain the flexibility to adapt and correct any issues that may arise – something that may be more difficult when the technologies are embedded in the city infrastructure, and thus all the more important to get the design right. Common themes among the projects included the importance of citizen involvement, respect for privacy and security, and the need for transparency and for avoiding data bias. In addition, listeners in the online chat raised the question of data ‘ownership’, and whether this is a useful concept or a misleading one – cities are stewards of data on behalf of the public, rather than owners of the data they gather and use.

The session concluded by stating that much work remains to be done, but that simply raising cities’ awareness of digital rights issues in data-sharing projects is a big first step. The Coalition will shortly be releasing its Data Sharing Concept Note, along with the case studies that were briefly presented during the round table.

Post

NGI Policy Summit: Interview with internet pioneer Marleen Stikker

Marleen Stikker is an internet pioneer who co-founded The Digital City, a non-profit internet provider and community for Dutch people, in 1994. She is now director of Waag, a cultural innovation centre in Amsterdam. Here, she explores the early beginnings of the internet, explains what went wrong, and ponders the future of online life.

Marleen Stikker is an internet pioneer who co-founded The Digital City, a non-profit internet provider and community for Dutch people, in 1994. She is now director of Waag, a cultural innovation centre in Amsterdam. Here, she explores the early beginnings of the internet, explains what went wrong, and ponders the future of online life.

Marleen is one of the speakers of our upcoming NGI Policy Summit, which will take place online on September 28 and 29 2020. Sign up here, if you would like to join us.

This interview originally appeared as part of the NGI Forward’s Finding CTRL collection.

You have personally been involved with the internet from the beginning of the web. What have we lost and gained since those early days?

Back in 1994 when we launched the Digital City, the internet was a green field: it was an open common where shared values thrived. It was an environment for creation, experimentation, and social and cultural values. There was no commercial expectation at that moment and there was no extraction of value for shareholders. The governance of the internet at that time was based on what the network needed to function optimally; the IETF standards committee made its decisions on the basis of consensus.

We lost the notion of the commons: the internet as a shared good. We basically handed it over to the market, and shareholders’ value now defines how the internet functions. We didn’t only lose our privacy but also our self-determination. The internet is basically broken.

What do you think was the most influential decision in the design of the World Wide Web? How could things have turned out differently if we made different decisions?

I think the most important decision was adding a graphical interface to the internet, enabling different types of visualisation to exist. The World Wide Web brought a multimedia interface to the internet, enabling a visual language, and with it a whole new group of people got to use the internet.

The World Wide Web became synonymous with pages and therefore with publishing, which emphasised the idea that it was about classical publishing and intellectual-rights regulation. Before the World Wide Web, the internet was much more a performative space, a public domain. The publishing metaphor was a setback and, for me, quite disappointing.

What were the big mistakes where we went wrong in the development of the internet? How do you believe these mistakes have shaped our society?

The whole emphasis on exponential growth, on getting filthy rich through the internet, has been a real problem. Basically handing the internet over to the mercy of the capital market was a major miscalculation. We should have regulated it as a public good and considered people as participants instead of consumers and eyeballs. Now we are not only the product but the carcass, as Zuboff underlines in her book on surveillance capitalism. All the data is sucked out of us and we act in a scripted, nudging environment, captured in the profiles that companies store in their ‘black box’. We should have had encryption and attribute-based identity by default. The fact that these companies could build up their empires without regulation of how they use our data and behaviour has been a major flaw.

We have to re-design how we deal with digital identity and the control over our personal data.

How do you believe the internet has shaped society for the better?

The internet is empowering people by giving means of communication and distribution, and it enables people to share their ideas, designs, and solutions. For instance, in the MakeHealth program that we run at Waag, or the open design activities.

Can you explain your idea for a full-stack internet and tell us more about it?

I believe we have to design the internet as a public stack, which means that we have to start by expressing the public values that will guide the whole process; it means that we rethink the governance and business models. We need open and accountable layers of technology: hardware, firmware, operating systems and applications.

It means that we ensure there is accountability in each part of the internet. At the basis of all this should be design for data minimisation, data commons, and attribute-based identity, so people can choose what they want to reveal.

We are good at diagnosing problems with the internet, but not as great at finding solutions. What should we do next, and who should implement change?

It starts with acknowledging that technology is not neutral. That means we need to diversify the teams that build our technologies and make public values central. We have to regulate big tech and build alternatives that move towards a commons-based internet. Governmental and public organisations should make explicit choices for public technologies and alternatives.

What is your positive vision for the future of the internet?

After leaving the internet to the market for the last 25 years, I believe we will need another 25 years to bring back the commons and have a more mature and balanced next generation internet. I do believe 2018 was a turning point.

Are you personally hopeful about the future of the internet?

I think the coming era could be a game changer. If we keep on working together, I see a positive future: we can regain a trustworthy internet.

If we use the current crisis for good, we can rebuild a trustworthy internet. We will need to rethink the principles behind the internet, be thorough, and choose active involvement.

On the whole, do you think the web, and the internet more broadly, has had a positive or negative influence on society?

Both… It gave a lot of people a voice and a way of expressing themselves, which is still one of the major achievements of the internet. But it has also put our democracies in danger, and if we are not able to counter these new powers, the outcome will be very negative. If we can’t counter surveillance capitalism, the cost-benefit outcome will be extremely negative.
