
Public procurement conditions for trustworthy AI and algorithmic systems


Governments increasingly use AI and algorithmic systems. The City of Amsterdam, for example, uses algorithmic systems in some of its primary city tasks, like parking control and acting on notifications from citizens.

In the last couple of years, many guidelines and frameworks have been published on algorithmic accountability. A notable example is the Ethics guidelines for trustworthy AI by the High-Level Expert Group on AI advising the European Commission, but there are many others. What all these frameworks have in common is that they name transparency as a key principle of trustworthy AI and algorithmic systems. But what does that mean, in practice? We can talk about the concept of transparency, but how can it actually be operationalized?

That’s why the City of Amsterdam took the initiative to translate the frameworks and guidelines into a practical instrument: contractual clauses for the procurement of algorithmic systems. In this post, you can read all about these procurement conditions and how you can be involved in taking them to the next level: a European standard for public procurement of AI and algorithmic systems.

Terms & Conditions

Starting in late 2019, the City of Amsterdam joined forces with several Dutch and international experts, ranging from legal and procurement experts to suppliers and developers, resulting in version 1.0 of a new set of procurement conditions and an accompanying explanatory guide. This first iteration is already free for all to use and adapt.

By choosing procurement conditions as a means to operationalize the ethics and accountability frameworks, we kill two birds with one stone. First of all, it provides clear guidance to suppliers, who according to the World Economic Forum, “understand the challenges of algorithmic accountability for governments, but look to governments to create clarity and predictability about how to manage risks of AI, starting in the procurement process.” Secondly, and maybe more importantly, procurement conditions demand clear definitions, both of key concepts like ‘algorithmic system’ and ‘transparency’ and of the conditions themselves.

Transparency in practice

Although the procurement conditions aim to tackle several issues related to the procurement of algorithmic systems, like vendor lock-in, the main novelty is that they provide a separation between information needed for algorithmic accountability on the one hand and company-sensitive information on the other. The conditions distinguish between three main types of transparency that the supplier should provide:

  • Technical transparency provides information about the technical inner workings of the algorithmic system; for instance, the underpinning source code. For many companies, this type of information is proprietary and often considered a trade secret: their ‘secret sauce’. Therefore, unless the procurement concerns open source software, technical transparency will only be demanded in case of an audit or if needed for explainability (see below).
  • Procedural transparency provides information about the purpose of the algorithmic system, the process followed in its development and application, and the data used in that context; for instance, what measures were taken to mitigate data biases. Procedural transparency provides a government with information that enables it to objectively establish the quality and risks of the algorithms used and perform other controls; to provide explainability (see below); and to inform the general public about algorithmic usage and the manifold ways in which it affects society. Procedural transparency is mandatory in every procurement.
  • Explainability means that a government should be able to explain to individual citizens how an algorithm arrives at a certain decision or outcome that affects that citizen. The information provided should offer the citizen the opportunity to object to the decision, and if necessary follow legal proceedings. This should in any event include a clear indication of the leading factors (including data inputs) that have led the algorithmic system to this particular result and the changes to the input that must be made in order to arrive at a different conclusion. Providing this information becomes mandatory for any relevant product or service procured by the city under the new rules. 

The procurement conditions and their explanatory guide give a detailed account of the situations in which each of these types of transparency applies. 
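To make the explainability requirement concrete, here is a minimal, hypothetical sketch of what an explanation for a single automated decision could look like: the leading factors behind the outcome, plus the input changes that would alter it. The eligibility rule, thresholds and field names are invented for illustration; they are not part of the procurement conditions themselves.

```python
# Hypothetical sketch of an 'explainable' decision for a citizen-facing
# benefit check. The rule and thresholds below are invented examples,
# not anything prescribed by the Amsterdam procurement conditions.

def explain_decision(applicant):
    """Return the decision, its leading factors, and a counterfactual hint."""
    INCOME_LIMIT = 30_000  # illustrative income ceiling
    DTI_LIMIT = 0.4        # illustrative debt-to-income ceiling

    factors = []
    if applicant["income"] >= INCOME_LIMIT:
        factors.append(
            f"income {applicant['income']} is at or above the limit of {INCOME_LIMIT}"
        )
    dti = applicant["debt"] / applicant["income"]
    if dti >= DTI_LIMIT:
        factors.append(f"debt-to-income ratio {dti:.2f} is at or above {DTI_LIMIT}")

    decision = "granted" if not factors else "denied"
    # The counterfactual tells the citizen what would have to change
    # for the system to reach a different conclusion.
    counterfactual = (
        None if decision == "granted"
        else "to change the outcome, address: " + "; ".join(factors)
    )
    return {"decision": decision, "leading_factors": factors,
            "counterfactual": counterfactual}

result = explain_decision({"income": 32_000, "debt": 15_000})
print(result["decision"])        # denied
print(result["counterfactual"])
```

The point of the sketch is that an explanation is an output of the system alongside the decision itself, not documentation written after the fact; the citizen receives both the reasons and a route to contest them.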

Towards a European standard for public procurement of AI and algorithmic systems

The ambition for this project has always been to show that it is possible to operationalize general guidelines for AI ethics and to encourage others to do so as well. That’s why we hope these conditions will become the inspiration for a European standard for public procurement of AI and algorithmic systems. We took some steps towards that ambition already:

From February 2020 to June 2020, the European Commission held a public consultation on their AI white paper. The City of Amsterdam and Nesta, together with the Mozilla Foundation, AI Now Institute and the City of Helsinki, published a position paper as a response to that consultation, asking the EC to facilitate the development of common European standards and requirements for the public procurement of algorithmic systems. 

Within the Netherlands, the conditions are now being implemented by several municipalities, regional governments and government agencies, collecting feedback from suppliers and working towards a version 2.0.

Join us!

On June 25th, DG GROW hosts a webinar titled Public Procurement of AI: building trust for citizens and business. At this webinar we will launch a more generalized version of the procurement conditions that can be easily adapted to fit your organization. In the meantime, click the links to download the procurement conditions and their explanatory guide as PDFs, to use within your organization right now.

Can’t wait? Want to help? Or just want to stay informed? Please let us know through this form how you want to be involved! 


How video games are becoming the next frontier in the ‘Tech Cold War’

The 'platformisation' of the games industry is posing some serious challenges for Europe and the internet at large.

What is a platform and when does it require regulation? Just as lawmakers in Brussels are beginning to seriously grapple with this question, researchers at the University of Amsterdam have published a paper on the evolution of the free-to-play shooter game Fortnite into a content delivery platform and its potential for manipulation.

The researchers identified two mutually reinforcing trends that blur the lines between certain online games and traditional platforms: by curating in-game events, adding social-media-like features and enabling increasingly sophisticated player interaction, games have the potential to become platforms in all but name, giving developers and third parties an engaging new channel for the delivery of paid content and services, ranging from pop music concerts and movie trailer premieres to political campaigns.

Modern games can also play with our expectations, emotions and needs in ways that elude other means of expression. At their best, games are a powerful medium for introspection, education and social commentary. At their least ethical, they reveal the lengths to which some designers will go to manipulate their hyper-engaged audience – from freemium titles that artificially limit and time-gate content to induce FOMO (the fear of missing out), to addictive in-game microtransactions that resemble gambling in all but name.

Games that act as quasi-platforms can generate billions of Euros in revenue – Photo by Sean Do on Unsplash

What makes these trends more concerning is that the global gaming industry is exhibiting the tell-tale signs of ‘platformisation’ even at the macro level. Having experienced a period of democratisation and significant growth on the production side in the late 2000s and early 2010s – consider, for example, the advent of app stores and the renaissance of indie games – we are today seeing a period of heavy consolidation and centralisation of market power. And just as in other segments of the tech and creative industries, the new gatekeepers of gaming are engaged in winner-takes-all battles for attention, data, monetisation and intellectual property. 

Why Europe is losing out 

Widely recognised as one of the world’s fastest-growing industries, some estimates see the gaming sector turning over as much as $300 billion by 2025. Already today, games significantly outpace the global film and music industries in revenue. While the EU is a major consumer market for games, with revenues in excess of €21 billion in 2019 alone, it lacks the corporate heavyweights that dominate the industry in Asia and North America. As in other segments of the technology sector and creative industries, Europe boasts a rich tapestry of world-class developers and innovators but is home to few of the major studios or publishers and, at best, plays a supporting role in the development of gaming hardware, services and infrastructure. With the loss of the UK’s exceptionally strong gaming sector – which gave birth to Tomb Raider and Grand Theft Auto – to Brexit, it’s fair to say that Europe risks once again falling behind the big and, in China’s case, emerging players – a familiar refrain in the Tech Cold War.

Making Europe competitive in gaming will require greater support and smarter, forward-thinking regulation at the transnational level. Until relatively recently, the politics and regulation of video games were largely under the purview of national governments. Like many other areas of cultural and media policy, EU Member States tend to treat video games as a national competence. Often that means that countries have to go it alone when they feel the need to regulate, as Belgium did with its recent ban on loot boxes in games. But as online gaming and digital distribution are becoming the norm, it’s no longer possible to ignore the medium’s borderless nature and geopolitical relevance. Brussels needs to be prepared to deal with the looming challenges of the industry.

Through the technology glass

One solution is to look at gaming through the prism of platforms, technology and data policy, rather than just media and creative industries policy. This makes sense for several reasons. Firstly, on topics like Europe’s ‘digital sovereignty’ or the future of AI, the institutions in Brussels have finally come to terms with the idea that digital, competition and foreign policy are inextricably intertwined. As with data governance or social media regulation, it makes sense to view video games in the same context of Europe’s systemic competition with the Chinese and U.S. digital economies. 

Secondly, large swathes of today’s gaming industry are owned, controlled or gate-kept by a small number of dominant and data-hungry technology companies, many of which are U.S. or China-based. That is a notable change from the early days of gaming when the industry was shaken up by garage start-ups, medium-sized toymakers, slot machine operators and manufacturers of HiFi equipment.

Lastly, gaming is plagued by many of the same transnational issues that we’re dealing with in technology and data policy. The gaming sector, too, struggles to contain the power of platforms, ensure fair competition, curtail the amplification of harmful content and champion data protection. Its concerns, too, include the manipulation of online marketplaces, foreign takeovers and the security and safety of products and services. 

A ‘platformer’ as a platform is a platform

As the University of Amsterdam paper shows, a small sub-segment of games can – and probably should – be considered content delivery platforms. Sticking with their example, Fortnite is not so much a game in the traditional sense as it is an adaptable infrastructure that allows its developer Epic Games to deliver content and services, including advertising and product placements, to players in a highly engaging and immersive way.

Blurring the line between game and platform: Fortnite recently staged an in-game film festival – Image: Epic Games

Despite being nominally free-to-play, Fortnite operates its own marketplace and in-game currency. It generates billions of dollars in microtransactions and even manages to mobilise its players to express their political support for developer Epic’s antitrust disputes. It also boasts around 350 million registered players, an unknown but no doubt significant percentage of which are underage. In sheer numbers, that puts it on par with Twitter’s 330 million users. Unlike with Twitter, however, Europe’s political class has taken relatively little notice of what’s going on over at Fortnite.

Trying to target Fortnite with ex-post regulation in 2021 would be missing the point. The game has been around for over three years, a lifetime in a fast-moving industry. It’s also just one highly-visible example of symptoms that affect an increasingly ‘platformised’ and politicised industry. Take PlayerUnknown’s Battlegrounds (PUBG), a popular South Korean eSports title that goes heavy on microtransactions and has been downloaded a respectable 800 million times.

Because PUBG’s mobile version was co-developed by China’s Tencent, India recently moved to ban the game, describing it, alongside TikTok and a host of other Chinese apps, as a threat to the country’s ‘sovereignty and integrity’. In response, PUBG’s South Korean developers felt compelled to end their collaboration with Tencent in India.

There’s no immediate appetite in the EU to replicate such politically fraught measures, but the steady escalation of the Huawei controversy has shown that international political pressure to sanction tech companies can build up quickly and India’s decision on PUBG demonstrates how geopolitical context matters. In a country still set to bring more than 600 million of its citizens online, mobile games are a huge driver of smartphone adoption. Putting them under the microscope as vectors for soft power, economic exploitation and cyber attacks seems not entirely unreasonable.

Won’t somebody please think of the children?

Whether or not they would accept their classification as platform providers, it’s fair to say that the better-resourced publishers and gaming service providers have become more mindful of their responsibilities when it comes to ‘traditional’ online harms, particularly safeguarding minors. The rallying cry of “protect the children” – whether that’s from gratuitous violence, too much screen time or online grooming – has been a depressing constant in the politics of video games for decades, even if the evidence base often remains shaky.

Responding to a proliferation of national-level initiatives to regulate social media and online services after 2016, the gaming industry in Europe was quick to differentiate itself from traditional platforms, emphasising its responsible business practices and comparatively functional self-regulatory regime. Amping up their efforts to protect minors, who generally make up a larger share of the user base in games than they would on platforms like Facebook, the industry has been pushing its own online safety codes, educational campaigns and parental controls. Some platforms have rolled out automated flagging of suspicious online conversations to tackle grooming and online child sexual exploitation.

The Uncensored Library makes banned journalism available inside the game Minecraft – Image: Uncensored Library

Playful propaganda

But as gamers get older – the average age of video game players in the EU is 31 years – and the industry finds itself at the centre of geopolitical competition, other ‘online harms’ are likely to come into focus. In 2020, Reporters without Borders released the Uncensored Library, essentially a Minecraft server granting in-game access to banned journalistic articles in an attempt to evade internet censorship in countries where Western social media channels were banned. Although laudable on its own terms, the project highlights how video games can become vectors and catalysts for political speech and even propaganda, a complex phenomenon that deserves a differentiated policy response.

Concerns over radicalisation loom especially large. At least since the Gamergate controversy of 2014, there has been an implicit assumption that gaming subcultures skew towards digitally-native, hyper-engaged adolescent males with extreme views, a combination of characteristics often targeted by Russia’s Internet Research Agency and other state-sponsored troll farms. On the whole, that characterisation doesn’t hold true. Gamers are a more diverse and representative crowd than we give them credit for, and the stigmatisation of players as violent, at-risk individuals or misogynist shut-ins is more counterproductive than helpful when trying to identify or address the issue.

As a recent paper by the Radicalisation Awareness Network points out, public debate on the relationship between games and radicalisation – stoked after far-right attacks in Christchurch, Halle and El Paso – tends to oversimplify and conflate distinct issues. Games that are designed as propaganda tools, such as Hezbollah’s Special Force, will require a different response than the use of gaming-adjacent communication tools by radicals. Similarly, the use of gaming-cultural references by extremist sympathisers is not quite the same as the application of game design principles to terrorist recruitment, as exemplified by virtual scoreboards for ‘successful’ attacks. If policymakers in Brussels are serious about curtailing challenges like radicalisation, grooming and misinformation on the internet, then a good evidence base on the relationship of these issues with games should be the priority – preferably before reductive media narratives take hold and limit their scope to act. 

States of play

Data flows and foreign takeovers present another contentious issue worth examining in this regard. Online games, and mobile games in particular, are becoming an increasingly important source and beneficiary of data harvesting. As state or state-owned actors are beginning to invest in video games on a large scale, their ties to the industry are inevitably going to raise questions about the downstream use and potential abuse of gaming data. It’s easy to see how an increasingly state-sponsored gaming landscape could have a destabilising effect on public trust similar to the one that the arrival of Russian TV and Chinese tabloids had on the Western media ecosystem in the 2010s.

Indeed, the biggest area of concern seems to be China’s meteoric rise in the games industry, which makes as much sense economically as it does in terms of strategic data access. With investments in over 300 gaming companies, Tencent has rapidly become the world’s biggest video game publisher. Allegations of data-sharing between the tech giant and the Chinese government have already been the subject of occasional criticism, but its stakes in gaming companies with significant data assets, including Fortnite developer Epic Games and eSports giant Riot Games, are likely to receive more scrutiny going forward.  

‘Esports diplomacy’ is already shaping international relations – Photo by Sean Do on Unsplash

Whether data is genuinely at risk in these cases may almost be beside the point. If Europe wants to rekindle the public’s trust in data-sharing and the digital economy, its regulators and policymakers will have to become much better at anticipating, understanding and addressing data and takeover issues in the games industry.

Playing to win

These problems extend beyond games that function like platforms themselves. Even ‘offline’ titles or online games that don’t quite fit the description of ‘quasi-platform’ tend to be inextricably linked to services that do. Plug-and-play is a thing of the past. In today’s video game economy, players have to interact with external platform providers that distribute games, enable access to additional content, track and broadcast their achievements, connect them to other players across the world and allow eSports enthusiasts to cheer for their favourite pro gamers. 

Fortnite’s success, for example, is enabled by a platform-powered ecosystem that includes, but is not limited to, the developer’s own Epic Games Store, Twitch, Steam, YouTube, Playstation Network, Microsoft’s Xbox Live and Store, and many others. Pending a European antitrust complaint as well as several lawsuits, the iOS App Store and Google Play Store may or may not be added back to that list eventually. Last summer, both Apple and Google pulled Fortnite for breaching store policies when Epic tried to circumvent their in-app purchasing systems, which funnel 30 cents on every dollar made to Cupertino and Mountain View respectively.

Zooming out to the macroeconomic level, the Epic feud becomes just one of the many battles over platformisation, centralisation and anti-competitive practices that are set to define the next decade in gaming. 

The effect of platform economics on games is equally obvious in the context of more open systems like the PC. Digital distribution is well-established and largely driven by bona fide platforms like Valve’s Steam store. It has cut out most of the middlemen and almost completely collapsed the second-hand economy. With packaging, discs, transportation, logistics and brick-and-mortar retailers out of the equation, publishers are seeing more money for their product and consumers get instant access to software from the comfort of their own home. Controversially, however, Steam – operated by a company that only employs around 360 people – takes a 30 per cent cut on every game sold through its platform. Much like Apple and Google, it has become a gatekeeper and quasi-essential infrastructure for PC gamers.

The list of grievances associated with Steam, and digital distribution more generally, will sound eerily familiar to platform critics everywhere: asymmetrical contractual agreements with developers and publishers, unfair trading practices, data mining, targeted advertising, fake reviews and opaque search algorithms that often dictate whether small-time developers get any consumer exposure at all. But 17 years into its existence, the Steam model is unlikely to change. Policymakers should focus on what’s next.

If you can’t beat them, integrate them: GOG is building a meta-platform to integrate the various gaming platforms and networks – Image: GOG Galaxy

The next big thing

Among the handful of remaining players in digital distribution on PC, a familiar winner-takes-all mentality has taken hold. Would-be competitors need serious financial heft. Perhaps it’s therefore not surprising that Steam’s most serious challengers are backed by some of the world’s most valuable companies: Tencent is going head-to-head, while Microsoft, Apple, Google and Facebook are all looking to disrupt the digital distribution model in their own ways. 

Europe, as in most other areas of the tech industry, sees itself relegated to the roles of consumer and supporting act. GOG, part of Poland’s CDProjekt Group, provides gamers with a relatively traditional store experience and boasts some laudable principles, such as integration of competitor platforms, DRM-free ownership of software and fairer treatment of developers, but it has so far struggled financially.

Tencent’s bid to corner the market comes courtesy of the Epic Games Store which, boosted by a cash injection from the tech giant and soaring Fortnite revenues, launched in late 2018. Intent on carving out a significant piece of the market before it’s too late, the service adopted an aggressive strategy: to lure in potential customers, it has given out at least one free game every week since launch – totalling more than 749 million giveaways in 2020 alone. In addition, Epic has signed a host of expensive exclusivity deals that prevent other distribution platforms from selling popular titles.

Across the Atlantic, perhaps the most serious attempt at shaking up the gaming market comes from Microsoft. Redmond pursues a more ambitious and novel business model than Epic, but at its core, it employs a similarly predatory pricing strategy. By moving its own game catalogue and dozens of licensed titles to the Xbox Game Pass, Microsoft combines a heavily subsidised, monthly subscription model with an opaquely curated selection of games. It also integrates the offering with its Microsoft Store, Xbox Live network and xCloud on-demand gaming service. Not content with limiting its ambitions to just one hardware base, Microsoft provides the service to Xbox consoles, PC and mobile devices, all of which can be covered with a single subscription. If Fortnite is a quasi-platform, Xbox Game Pass is designed to become a hyper-platform, and its strategy raises questions for consumer choice, competition and privacy.

Service bundling, exclusivity agreements and aggressive pricing are the name of the game for Big Tech – Image: Xbox Game Pass

Whoever emerges victorious from the war over digital distribution, both consumers and innovators will likely suffer in the long term. Players may at first rejoice at the idea of a weekly giveaway or a ‘Netflix for games’, but will eventually find themselves trapped in yet another walled garden. Developers and creatives, in turn, may hope to strike gold through greater and more targeted exposure on a highly centralised platform, but they too will find themselves at the whim of largely unaccountable and self-interested gatekeepers. Smaller competitors will struggle to gain traction or survive, as aggressive pricing strategies will always favour the giants, whose access to consumer data and endless lines of credit enables them to take and hedge long-term risks. 

What’s left to play for?

After more than a decade of platform economics, the dynamics shaping today’s gaming industry are easy enough to spot. Their consequences may not always be predictable, but on balance they are likely to perpetuate the same inequalities that we observe in the digital economy at large, further centralising power and profits in the hands of fewer market actors.

The stakes in this new theatre of the ‘Tech Cold War’ are high and, as in other sectors of the digital economy, Europe is at risk of not just losing out economically. In gaming, it could lose in a race for soft power at home and abroad. An overly passive Europe risks becoming a rule-taker, rather than a standard-setter; a captive consumer, rather than an innovator and market-shaper; and, in the parlance of privacy, a data subject, rather than a data controller. Not every excess of the industry will require disruptive, top-down regulation from Brussels. But policymakers across Europe would do well to spend more time reflecting on games and where the medium is headed. 


The NGI Policy-in-Practice Fund – announcing the grantees

We are very excited to announce the four projects receiving funding from the Next Generation Internet Policy-in-Practice Fund.


Policymakers and public institutions have more levers at their disposal to spur innovation in the internet space than is often thought, and can play a powerful role in shaping new markets for ethical tools. We particularly believe that local experimentation and ecosystem building are vital if we want to make alternative models for the internet actually tangible and gain traction. But finding the funding and space to undertake this type of trial is not always easy – especially if outcomes are uncertain. Through the NGI Policy-in-Practice fund, it has been our aim not only to provide organisations with the means to undertake a number of these trials but also to make the case for local trials more generally.

Over the past summer and autumn, we went through a highly competitive application process, ultimately selecting four ambitious initiatives that embody the vision behind the NGI Policy-in-Practice fund. Each of the projects will receive funding of up to €25,000 to test out their idea on a local level and generate important insights that could help us build a more trustworthy, inclusive and democratic future internet.

In conjunction with this announcement, we have released an interview with each of our grantees, explaining their projects and the important issues they are seeking to address in more detail. You can also find a short summary of each project below. Make sure you register for our newsletter to stay up to date on the progress of each of our grantees, and our other work on the future of the internet.

Interoperability to challenge Big Tech power 

This project is run by a partnership of three organisations: Commons Network and Open Future, based in Amsterdam, Berlin and Warsaw.

This project explores whether the principle of interoperability, the idea that services should be able to work together, and data portability, which would allow users to carry their data with them to new services, can help decentralise power in the digital economy. Currently, as users, we are often locked into a small number of large platforms. Smaller alternative solutions, particularly those that want to maximise public good rather than optimise for profit, find it hard to compete in this winner-takes-all economy. Can we use interoperability strategically and harness the clout of trusted institutions such as public broadcasters and civil society to create an ecosystem of fully interoperable and responsible innovation in Europe and beyond?

Through a series of co-creation workshops, the project will explore how this idea could work in practice, and the role trusted public institutions can play in bringing it to fruition. 

Bridging the Digital Divide through Circular Public Procurement

This project will be run by eReuse, based in Barcelona, with support from the City of Barcelona, the Technical University of Barcelona (UPC) and the global Association for Progressive Communications.

During the pandemic, when homeschooling and remote working became the norm overnight, bridging the digital divide has become more important than ever. This project is investigating how we can make it easier for public bodies and also the private sector to donate old digital devices, such as laptops and smartphones, to low-income families currently unable to access the internet.

By extending the lifetime of a device in this way, we are also reducing the environmental footprint of our internet use. Laptops and phones now often end up being recycled, or, worse, binned, long before their actual “useful lifespan” is over, putting further strain on the system. Donating devices could be a simple but effective mechanism for extending the useful life of devices and strengthening their circular economy.

The project sets out to do two things: first, it wants to try out this mechanism on a local level and measure its impact through tracking the refurbished devices over time. Second, it wants to make it easier to replicate this model in other places, by creating legal templates that can be inserted in public and private procurement procedures, making it easier for device purchasers to participate in this kind of scheme. The partnership also seeks to solidify the network of refurbishers and recyclers across Europe. The lessons learned from this project can serve as an incredibly useful example for other cities, regions and countries to follow. 

Bringing Human Values to Design Practice

This project will be run by the BBC with support from Designswarm, LSE and the University of Sussex.

Many of the digital services we use today, from our favourite news outlet to social media networks, rely on maximising “engagement” as a profit model. A successful service or piece of content is one that generates many clicks, drives further traffic, or generates new paying users. But what if we optimised for human well-being and values instead? 

This project, led by the BBC, seeks to trial a more human-centric approach to measuring audience engagement by putting human values at its core. It will do so by putting into practice longer-standing research on mapping the kinds of values and needs their audiences care about the most, and by developing new design frameworks that make it easier to actually track these alternative metrics in a transparent way. 

The project will run a number of design workshops and share its findings through a dedicated website and other outlets to involve the wider community. The learnings and design methodology that will emerge from this work will not just be trialled within the contexts of the project partners, but will also be easily replicable by others interested in taking a more value-led approach. 

Responsible data sharing for emergencies: citizens in control

This project will be run by the Dutch National Police, in partnership with the Dutch Emergency Services Control, the Amsterdam Safety Region and the City of Amsterdam.

In a data economy that is growing ever more complex, giving meaningful consent about what happens to our personal data remains one of the biggest unsolved puzzles. But new online identity models have proven to be a promising solution, empowering users to share only the information they want to share with third parties, on their own terms. One way to let such a new approach to identity and data sharing scale would be to bring in government and other trusted institutions to build their own services on these principles. That is exactly what this project seeks to do.

The project has already laid out all the building blocks of their Digital Trust Infrastructure but wants to take it one step further by actually putting this new framework into practice. The project brings together a consortium of Dutch institutional partners to experiment with one first use case, namely the sharing of vital personal data with emergency services in the case of, for example, a fire. The project will not just generate learnings about this specific trial, but will also contribute to the further finetuning of the design of the wider Digital Trust Infrastructure, scope further use cases (of which there are many!), and bring on board more interested parties.

Post

Policy in Practice Fund: Data sharing in emergencies

Manon den Dunnen from the Dutch National Police answers a few of our burning questions to give us some insight into the project and what it hopes to achieve.

We’re introducing each of our four Policy-in-Practice Fund projects with an introductory blog post. Below, Manon den Dunnen from the Dutch National Police answers a few of our burning questions to give us some insight into the project and what it hopes to achieve. We’re really excited to be working with four groups of incredible innovators and you’ll be hearing a lot more about the projects as they progress. 

Your project is exploring how we share information with public bodies. What is the problem with the way we do it now?

Europe has made significant progress on protecting our privacy, including through the General Data Protection Regulation (GDPR). However, there remain several problems with the collection and use of personal data. Information about us is often collected without our consent or with disguised consent, causing citizens and other data owners to lose control over their personal data and how it is used. This contributes to profiling (discrimination, exclusion) and cybercrime. At the same time, it is laborious and complex for citizens and public institutions to obtain personal data in a transparent and responsible way.

That is why a collective of (semi-) public institutions has been working towards an independent, public trust infrastructure that allows the conditional sharing of data. The Digital Trust Infrastructure or DTI aims to safeguard our public values by unravelling the process of data sharing into small steps and empowering data subjects to take control of each step at all times.

Rather than focusing on information sharing with public bodies, our project will explore public bodies taking responsibility for creating processes that help to safeguard constitutional values in a digitised society.

What kinds of risks are there for handling personal information? 

Preventing these problems requires organisations to work according to several principles, most of which can be found in the GDPR. Let's take cybercrime as a potential threat. The risk of your data being exposed to cybercrime increases as your data is stored in more places. Organisations must therefore practise data minimisation, but there are other approaches available too: for example, structures that allow citizens to give conditional access to information about them, rather than having organisations store the data themselves. This 'consent arrangement' is exactly what this project will set out to test.

This idea will be new for many people, so expert support and protection should be provided when setting up a consent arrangement for data sharing. But the potential impact is huge. Take for example someone with an oxygen cylinder at home for medical use. This is not something a citizen would be expected to declare as part of their daily life. But when there is a fire, both that citizen and the fire brigade would like that information to be shared.

That piece of data about the citizen's use of an oxygen cylinder is the only information needed by the fire brigade. But current systems often share far more information automatically, including the person's identity. Citizens should be able to give the fire brigade conditional and temporary access to only the information that an oxygen cylinder is present, in emergencies such as a fire.
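The kind of data-minimised, conditional and temporary disclosure described above can be sketched in code. This is an illustrative model only, under our own assumptions: the class, field and attribute names are hypothetical and are not part of the actual Digital Trust Infrastructure.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ConsentGrant:
    """A citizen's conditional, time-limited grant to share one attribute."""
    attribute: str        # the single fact being shared
    recipient: str        # who may read it
    condition: str        # context in which access is allowed
    expires_at: datetime  # access is temporary by construction

    def allows(self, requester: str, context: str, now: datetime) -> bool:
        # All three checks must pass: right party, right context, not expired.
        return (requester == self.recipient
                and context == self.condition
                and now < self.expires_at)

# The citizen shares only the oxygen-cylinder fact, only with the fire
# brigade, only during a fire emergency, and only for the next hour.
grant = ConsentGrant(
    attribute="oxygen_cylinder_present",
    recipient="fire_brigade",
    condition="fire_emergency",
    expires_at=datetime.now(timezone.utc) + timedelta(hours=1),
)

now = datetime.now(timezone.utc)
print(grant.allows("fire_brigade", "fire_emergency", now))  # True
print(grant.allows("police", "fire_emergency", now))        # False
```

Note that the fire brigade never learns anything beyond the one attribute it was granted: the grant itself is the only record exchanged, which is the data-minimisation point the interview makes.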

How can public bodies do this safely and with the trust of citizens? What kind of role do you think public bodies can play in increasing trust in data-sharing in the digital economy more broadly?

A data-driven approach to social and safety issues can truly improve quality of life and facilitate safety and innovation. But we must set an example in the way we approach this. At each moment, we must carefully consider which specific piece of information is needed by a specific person, in what specific context and at what moment in time, for what specific purpose and for how long. 

By investing in this early on and involving citizens in a democratic and inclusive way, we can not only increase trust but also use the results to ask partners to do the same and create a new normal. 

You are working on a specific case study with the Dutch Police and other partners. Can you tell us a bit more about that experiment, and how you think this model could be used in other contexts too?

During the next few months, we will create a first scalable demonstrator of the Digital Trust Infrastructure. It will be based on the use-case of sharing home-related data in the context of an incident such as a fire. The generic building blocks that we create will be fundamental to all forms of data sharing, like identification, authentication, attribution, permissions, logging etc. They will also be open-source and usable in all contexts. We have a big list of things to work on!

But that is only part of the story. Complementary to the infrastructure, an important part of the project focuses on drawing up a consent arrangement. This will allow residents to conditionally share information about their specific circumstances and characteristics with specific emergency services in a safe, trusted way. To put people in control of every small step and ensure the consent arrangement will be based on equality, we will organise the necessary expertise to understand every aspect. The consequences of our actions have to be transparent, taking into account groups with various abilities, ages and backgrounds.

We will also explore how, and to what extent, the conditions and safeguards can be implemented in a consent arrangement and embedded in the underlying trust infrastructure in a democratic and resilient way. We will also look at how a sustainable and trustworthy governance structure can be set up for such a consent arrangement. We will share all our findings on these areas.

How can people get involved and find out more?

We are currently collecting experiences from other projects on how to engage residents in an inclusive, democratic and empowered (conscious) way. All the knowledge that we are building up in this area will be shared on the website of our partner Amsterdam University of Applied Sciences (HvA). Naturally, we would value hearing about the experiences of others in this area, so please do get in touch.

Post

Workshop report: (Dis)connected future – an immersive simulation

As part of the Summit, Nesta Italia and Impactscool hosted a futures workshop exploring the key design choices for the future internet.

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by Nesta Italia and Impactscool, written by Giacomo Mariotti and Cristina Pozzi.

The NGI Policy Summit was a great opportunity for policymakers, innovators and researchers to come together to start laying out a European vision for the future internet and elaborate the policy interventions and technical solutions that can help get us there.

As part of the Summit, Nesta Italia and Impactscool hosted a futures workshop exploring the key design choices for the future internet. It was a participative and thought-provoking session. Here we take a look at how it went.

Our aims

The discussion about the internet of the future is very complex and touches on many challenges our societies face today: topics like data sovereignty, safety, privacy, sustainability and fairness, to name just a few, as well as the implications of new technologies such as AI and blockchain, and areas of concern around them, such as ethics and accessibility.

In order to define and build the next generation internet, we need to make a series of design choices guided by the European values we want our internet to radiate. However, moving from principles to implementation is really hard. In fact, we face the added complexity coming from the interaction between all these areas and the trade-offs that design choices force us to make.

Our workshop’s goal was to bring to life some of the difficult decisions and trade-offs we need to consider when we design the internet of the future, in order to help us reflect on the implications and interaction of the choices we make today.

How we did it

The workshop was an immersive simulation about the future in which we asked the participants to make some key choices about the design of the future internet and then deep dived into possible future scenarios emerging from these choices. 

The idea is that it is impossible to know exactly what the future holds, but we can explore different models and be open to many different possibilities, which can help us navigate the future and make more responsible and robust choices today.

In practice, we presented the participants with the following four challenges in the form of binary dilemmas and asked them to vote for their preferred choice with a poll:

  1. Data privacy: protection of personal data vs data sharing for the greater good
  2. Algorithms: efficiency vs ethics
  3. Systems: centralisation vs decentralisation
  4. Information: content moderation vs absolute freedom

For each of the 16 combinations of binary choices we prepared a short description of a possible future scenario, which considered the interactions between the four design areas and aimed at encouraging reflection and discussion.
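The scenario space follows directly from the four binary dilemmas: 2 × 2 × 2 × 2 = 16 combinations. A minimal sketch of that enumeration (the labels are taken from the dilemmas above; the variable names are ours, not the workshop's materials):

```python
from itertools import product

# The four binary dilemmas presented to participants.
dilemmas = {
    "Data privacy": ("protection of personal data", "data sharing for the greater good"),
    "Algorithms": ("efficiency", "ethics"),
    "Systems": ("centralisation", "decentralisation"),
    "Information": ("content moderation", "absolute freedom"),
}

# Every combination of choices corresponds to one possible future scenario.
scenarios = list(product(*dilemmas.values()))
print(len(scenarios))  # 16

# The majority votes from this session selected one of the 16 combinations:
chosen = ("protection of personal data", "ethics",
          "decentralisation", "absolute freedom")
print(chosen in scenarios)  # True
```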

Based on the majority votes we then presented the corresponding future scenario and discussed it with the participants, highlighting the interactions between the choices and exploring how things might have panned out had we chosen a different path.

What emerged

Individual-centric Internet

  • Data privacy: Protection of personal data (84%) vs Data sharing for the greater good (16%)
  • Algorithms: Efficiency (41%) vs Ethics (59%)
  • Systems: Centralisation (12%) vs Decentralisation (88%)
  • Information: Content moderation (41%) vs Absolute freedom (59%)

The results above summarise the choices made by the participants during the workshop, which led to the following scenario.


Decentralised and distributed points of access to the internet make it easier for individuals to manage their data and the information they are willing to share online. 

Everything that is shared is protected and can be used only following strict ethical principles. People can communicate without relying on big companies that collect data for profit. Information is totally free and everyone can share anything online with no filters.

Not so one-sided

Interesting perspectives emerged when we asked for contrarian opinions on the more one-sided questions, which demonstrated how middle-ground, context-aware solutions are required in most cases when dealing with topics as complex as those analysed.

We discussed how certain non-privacy-sensitive data can genuinely contribute to the benefit of society, with minimal concern on the side of the individual if shared in anonymised form. Two examples that emerged from the discussion were transport management and research.

On the (de)centralisation debate, participants noted how decentralisation could result in a diffusion of responsibility and a lack of accountability: "If everyone's responsible, nobody is responsible". We mentioned how this risk could be mitigated by tools like public-private-people collaboration and data cooperatives, combined with clear institutional responsibility.

Post

Workshop report: People, not experiments – why cities must end biometric surveillance

We debated the use of facial recognition in cities with the policymakers and law enforcement officials who actually use it.

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by European Digital Rights (EDRi), which was originally published on the EDRi website.

We debated the use of facial recognition in cities with the policymakers and law enforcement officials who actually use it. The discussion got to the heart of EDRi’s warnings that biometric surveillance puts limits on everyone’s rights and freedoms, amplifies discrimination, and treats all of us as experimental test subjects. This techno-driven democratic vacuum must be stopped.

From seriously flawed live trials of facial recognition by London's Metropolitan Police, to unlawful biometric surveillance in French schools, to secretive roll-outs of facial recognition which have been used against protesters in Serbia: creepy mass surveillance by governments and private companies, using people's sensitive face and body data, is on the rise across Europe. Yet according to a 2020 survey by the EU's Fundamental Rights Agency, 80% of Europeans are against sharing their face data with authorities.

On 28 September, EDRi participated in a debate at the NGI Policy Summit on “Biometrics and facial recognition in cities” alongside policymakers and police officers who have authorised the use of the tech in their cities. EDRi explained that public facial recognition, and similar systems which use other parts of our bodies like our eyes or the way we walk, are so intrusive as to be inherently disproportionate under European human rights law. The ensuing discussion revealed many of the reasons why public biometric surveillance poses such a threat to our societies:

• Cities are not adequately considering risks of discrimination: according to research by WebRoots Democracy, black, brown and Muslim communities in the UK are disproportionately over-policed. With the introduction of facial recognition in multiple UK cities, minoritised communities are now having their biometric data surveilled at much higher rates. In one example from the research, the London Metropolitan Police failed to carry out an equality impact assessment before using facial recognition at the Notting Hill carnival – an event which famously celebrates black and Afro-Carribean culture – despite knowing the sensitivity of the tech and the foreseeable risks of discrimination. The research also showed that whilst marginalised communities are the most likely to have police tech deployed against them, they are also the ones that are the least consulted about it.

• Legal checks and safeguards are being ignored: according to the Chief Technology Officer (CTO) of London, the London Metropolitan Police has been on “a journey” of learning, and understand that some of their past deployments of facial recognition did not have proper safeguards. Yet under data protection law, authorities must conduct an analysis of fundamental rights impacts before they deploy a technology. And it’s not just London that has treated fundamental rights safeguards as an afterthought when deploying biometric surveillance. Courts and data protection authorities have had to step in to stop unlawful deployments of biometric surveillance in Sweden, Poland, France, and Wales (UK) due to a lack of checks and safeguards.

• Failure to put fundamental rights first: the London CTO and the Dutch police explained that facial recognition in cities is necessary for catching serious criminals and keeping people safe. In London, the police have focused on ethics, transparency and “user voice”. In Amsterdam, the police have focused on “supporting the safety of people and the security of their goods” and have justified the use of facial recognition by the fact that it is already prevalent in society. Crime prevention and public safety are legitimate public policy goals: but the level of the threat to everyone’s fundamental rights posed by biometric mass surveillance in public spaces means that vague and general justifications are just not sufficient. Having fundamental rights means that those rights cannot be reduced unless there is a really strong justification for doing so.

• The public are being treated as experimental test subjects: across these examples, it is clear that members of the public are being used as subjects in high-stakes experiments which can have real-life impacts on their freedom, access to public services, and sense of security. Police forces and authorities are using biometric systems as a way to learn and to develop their capabilities. In doing so, they are not only failing their human rights obligations, but are also violating people’s dignity by treating them as learning opportunities rather than as individual humans deserving of respect and dignity.

The debate highlighted the worrying patterns of a lack of transparency and consideration for fundamental rights in current deployments of facial recognition, and other public biometric surveillance, happening all across Europe. The European Commission has recently started to consider how technology can reinforce structural racism, and to think about whether biometric mass surveillance is compatible with democratic societies. But at the same time, they are bankrolling projects like the horrifyingly dystopian iBorderCTRL. EDRi's position is clear: if we care about fundamental rights, our only option is to stop the regulatory whack-a-mole and permanently ban biometric mass surveillance.

Post

Workshop report: What your face reveals – the story of HowNormalAmI.eu

At the Next Generation Internet Summit, Dutch media artist Tijmen Schep revealed his latest work - an online interactive documentary called 'How Normal Am I?'.

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by Tijmen Schep.

At the Next Generation Internet Summit, Dutch media artist Tijmen Schep revealed his latest work – an online interactive documentary called ‘How Normal Am I?‘. It explains how face recognition technology is increasingly used in the world around us, for example when the dating app Tinder gives all its users a beauty score to match people who are about equally attractive. Besides just telling us about it, the project also lets people experience this for themselves. Through your webcam, you will be judged on your beauty, age, gender, body mass index (BMI) and your facial expressions. You’ll even be given a life expectancy score, so you’ll know how long you have left to live.

The project has sparked the imagination – and perhaps a little feeling of dread – in many people: not even two weeks later, the documentary has been ‘watched’ over 100,000 times.

At the Summit, Tijmen offered a unique insight into the ‘making of’ of this project. In his presentation, he talked about the ethical conundrums of building a BMI prediction algorithm that is based on photos from arrest records, and that uses science that has been debunked. The presentation generated a lot of questions and was positively received by those who visited the summit.

Post

Minutes: NGI Forward Advisory Board meeting (22/07/20)

NGI Forward's advisory board held its inaugural meeting in late July, discussing the project's priorities and ambitions. To promote transparency, we publish written summaries of our meetings.

NGI Forward’s advisory board held its inaugural meeting on 22 July to discuss the project’s current priorities and future ambitions. The membership of our advisory board represents a broad community of internet experts and practitioners. Going forward, it will meet twice a year to provide the project with support, constructive criticism and guidance. To promote transparency, we publish summaries of our meetings. You can learn more about our board here.

Present: Pablo Aragón, Harry Armstrong, Mara Balestrini, Ger Baron, Katja Bego, Martin Bohle, Markus Droemann, Inger Paus, Katarzyna Śledziewska, Louis Stupple-Harris, Sander van der Waal, Marco Zappalorto

Not present: Ian Forrester (excused), Simon Morrison (excused), Marleen Stikker (excused)

Summary: On 22 July, NGI Forward’s advisory board held a two-hour video conference for its inaugural meeting. The agenda was designed to provide board members with an overview of the project, its place within the NGI ecosystem, its goals and current priorities. In particular, we discussed progress made and future ambitions across a series of activities that broadly fall under NGI Forward’s ecosystem-building objective, especially the delivery of an NGI vision paper and policy network. We also collected feedback on the role of the advisory board itself in supporting these activities, and agreed that a follow-up meeting should be held within six months to assess progress against the project activities discussed. Board members provided detailed and constructive comments on each, which are summarised in bulleted form below. 

NGI vision

In this first part of the meeting, the project team provided an overview of the main messages of the vision paper NGI Forward will release soon.

  • Members agreed that the NGI vision should work towards concrete actions and alternatives, rather than framing the issues in a reactive way. It’s necessary to clarify that the NGI is about reclaiming the internet in a European way, without furthering the dynamics moving us towards a splinternet, supporting needlessly fatalistic narratives about reinventing the internet from scratch, or pulling the plug altogether. 
  • Members highlighted the risk that bad practices from big tech companies overshadow the possibilities of doing good through internet technology. The NGI vision should capture this by weaving more optimistic narratives and rewarding those who do the right thing.
  • Members argued that an NGI vision should also promote open standards, practical solutions, inclusion and bottom-up action, and should empower a wide net of stakeholders to play their role in bringing about this vision.
  • Members highlighted the challenge of balancing the NGI’s human-centred and value-based proposition with Europe’s otherwise more economically-driven Digital Single Market narrative. However, bridging that gap may also present a unique opportunity for the project and wider initiative to speak to policymakers who are caught in between both approaches. The story of this vision needs to be sufficiently inclusive to appeal to policymakers and other stakeholders across the political spectrum.
  • Members asked to be provided with an early draft of the vision before it’s published, and generally would like to be involved in the dissemination and future finetuning of the NGI vision.
  • Members noted that language around data justice and bias was not explicitly mentioned in the summary slides on the vision paper, and that, given the importance of these topics, the project should consider featuring them more prominently. 

Policy Network

  • NGI Forward presented a short paper on the objectives and design of a potential NGI Policy Network, which would serve as a coalition for change towards a more democratic, sustainable, trustworthy, resilient and inclusive internet by 2030. The proposed network would bring together organisations and individuals with shared ambitions through policy-relevant research and public affairs work. It would serve to avoid the duplication of efforts and the proliferation of competing, often similar, solutions to universal challenges from organisations that operate in different local contexts or represent different stakeholder and practitioner communities. It should aim to make the NGI more inclusive and provide a mechanism for bottom-up contributions to NGI-relevant research and policy work.
  • Members welcomed the idea of a community of communities that would serve to break down silos between different discourses and provide for knowledge-sharing at a practical level. 
  • Members highlighted that many actors in this space have a capacity problem and need to see a clear incentive for joining. 
  • Members similarly highlighted the risk of setting up a policy network that duplicates the work of similar, already existing groups. 
  • The network should have very clear objectives and identify areas of mutual interest that are underserved by other groups. At the same time, it should develop good links between these existing networks to widen its impact.
  • Members also spoke of the risk of setting up another project or network that cannot be sustainably continued after the end of NGI Forward’s funding period, often a problem for H2020-funded initiatives. The goal should be to create a structure that lasts after the end of the project, and could potentially be carried forward by its members. There should be a continuity or succession plan in place before the network is launched. 
  • Similarly, members suggested the network should be open to European project consortia to share their own project outputs and deliverables so as to ensure follow-up by others after the end of their respective grant periods.
  • Members argued that the project’s ambitions for enabling a bottom-up approach will require the network, and the project more generally, to target local governments and communities or institutions that otherwise have limited exposure to these topics and translate NGI ideas from EU jargon into more useful terminology, methods and tools.
  • Another potential selling point is to help public sector organisations who are actively looking for value-aligned alternatives and more ethical ways of organising the digitalisation of their services. 
  • Members highlighted in particular the need to target non-English-speaking audiences, and recommended that the project seek ways to translate outputs, and reflect Europe’s geographical diversity in, for example, the NGI Policy Summit programme.
  • Members agreed that there was a need for practical insights and tangible, solutions-oriented policy ideas, but less desire for another forum to discuss high-level principles for the future of the internet. One idea put forward was to brand the coalition a ‘Policy and Practice’ Network. 
  • Members suggested that the network could pursue more formal agreements between organisations, e.g. memoranda of understanding. Members said that the network would need visibility in places where policymakers go, such as the OECD and WEF. 
  • Members argued that we should consider setting clear responsibilities and deadlines for participants to ensure engagement and partners following through with commitments. On the other hand, we should be realistic about how much time and resource potential partners could invest in another network. 

Policy Summit 

  • Members expressed their interest in the NGI Policy Summit, scheduled for September 28 and 29, and all agreed to attend at least some of the sessions. 
  • Members also noted the summit’s suitability both to highlight the conclusions of the vision paper and to launch the NGI Policy Network, with the latter in particular being a good moment to start some of the proposed working groups. 
  • Members recommended we also recruit engaged stakeholders to lead some of these working groups, rather than attempt to organise all of these within the project.
Post

NGI Policy Summit: Former Estonian President Toomas Hendrik Ilves interview

As president of Estonia from 2006 to 2016, Toomas Hendrik Ilves pushed for digital transformation, ultimately leading Forbes to label him “the architect of the most digitally savvy country on earth”. Every day, e-Estonia allows citizens to interact with the state via the internet. Here, Ilves discusses why other governments might be slower with such developments, and ponders how things can improve further in the future.

Toomas Hendrik Ilves is one of the speakers of our upcoming NGI Policy Summit, which will take place online on September 28 and 29 2020. Sign up here, if you would like to join us.

This interview originally appeared as part of the NGI Forward’s Finding CTRL collection.

Estonia had a rapid ascent to becoming a leading digital country, how did you push for this as a diplomat in the 90s?

Estonia became independent in ’91, and everyone was trying to figure out what we should do – we were in terrible shape economically and completely in disaster. Different people had different ideas. My thinking was basically that no matter what, we would always be behind.

In ’93, Mosaic came out, which I immediately got. You had to buy it at the time. I looked at this, and it just struck me that, ‘Wow, this is something where we could start out on a level playing field, no worse off than anyone else’.

For that, we had to get a population that really is interested in this stuff, so I came up with this idea – which later carried the name of Tiger’s Leap – which was to computerise all the schools, get computers in all the schools and connect them up. It met with huge opposition, but the government finally agreed to it. By 1998, all Estonian schools were online.

How did things progress from there, and what was the early public reaction like?

We had a lot of support from NGOs. People thought it was a cool idea, and the banks also thought it was a good idea, because they really supported the idea of digitization. By the end of the 90s, it became clear that this was something that Estonia was ahead of the curve on.

But, in fact, in order to do something, you really needed to have a much more robust system. That was when a bunch of smart people came up with the idea of a strong digital identity in the form of a chip card, and also developed the architecture for connecting everything up, because we were still too poor to have one big data centre to handle everything. That led to what we call X-Road, which connects everything to everybody, but always through an authentication of your identity, which is what gives the system its very strong security.

It was a long process. I would be lying to say that it was extremely popular in the beginning, but over time, many people got used to it.

I should add that Tiger’s Leap was not always popular. The teachers union had a weekly newspaper, and for about a year, hardly an issue seemed to appear without some op-ed attacking me.

Estonia’s e-Residency programme allows non-Estonians access to Estonian services via an e-resident smart card. Do you think citizenship should be less defined by geographical boundaries?

Certain things are clearly tied to your nation, anything that involves political rights, or say, social services – if you’re a taxpayer or a citizen, you get those.

But on the other hand, there are many things associated with your geographical location that in fact have very little to do with citizenship. In the old days, you would bank with your local bank, you didn’t have provisions for opening an account from elsewhere because the world was not globalised. And it was the same thing with establishing companies.

So if you think about those things you can’t do, well, why not? We don’t call it citizenship, you don’t get any citizen rights, but why couldn’t you open a bank account in my country if you want to? If we know who you are, and you get a digital identity, you can open a company.

Most recently, we’ve been getting all kinds of interest from people in the UK. Because if you’re a big company in the UK, it’s not a problem to make yourself also resident in Belgium, Germany, France. If you’re a small company, it’s pretty hard. I mean, they’re not going to set up a brick and mortar office. Those are the kind of people who’ve been very interested in setting up or establishing themselves as businesses within the European Union, which, in the case of Estonia, they can do without physically being there.

What do you think Europe and the rest of the world can learn from Estonia?

There are services that are far better when they’re digital which right now are almost exclusively nationally-based. We have digital prescriptions – wonderful things where you just write an email to your doctor and the doctor will put the prescription into the system and you can go to any pharmacy and pick it up.

This is something that would be popular and that would work across the EU. Everywhere I go, I get sick. My doctor puts in a prescription; if I’m in Valencia, Spain, he puts it into the system, which then also operates in Spain.

The next step would be medical records. Extend the same system: you identify yourself, authorise the doctors to look at your records, and they would already be translated. I would like to see these kinds of services extended across Europe. Right now, the only cross-border service of this type that works is between Estonia and Finland. It doesn’t even work between Estonia and Latvia, our southern neighbour. So I think it’ll be a while, but it’s a political decision. Technologically, it could work within months. The Finns have adopted our X-Road architecture especially easily. It’s completely compatible; we just give it away, it’s non-proprietary open source software.

The technical part is actually very easy, the analogue part of things is very difficult, because they have all these political decisions.

What would your positive vision for the future of the internet look like?

Right now I’m in the middle of Silicon Valley, in Palo Alto, and within a ten mile radius of where I sit are the headquarters of Tesla, Apple, Google, Facebook, Palantir – not to mention all kinds of other companies – producing all kinds of wonderful things, really wonderful things that not only my parents or my grandparents could never even dream of, but even I couldn’t dream of 25 years ago. But at the same time, when I look at the level of services for ordinary people – citizens – then the US is immensely behind countries like Estonia.

The fundamental problem of the internet is summed up in a 1993 New Yorker cartoon, where there’s a picture of two dogs at a computer, and one dog says to the other, “On the internet no-one knows you’re a dog”. This is the fundamental problem of identity that needs to be addressed. It has been addressed by my country.

Unless you have services for people that are on the internet, the internet’s full potential will be lost and not used.

What do you think prevents other nations pursuing this idea of digital identity?

It requires political will. The old model and the one that continues to be used, even in government services in places like the United States, is basically “email address plus password”. Unfortunately, that one-factor identification system is not based on anything very serious.

Governments have to understand that they need to deal with issues such as identity. Unless you do that, you will be open to all these hacks, all of these various problems. I think I read somewhere that in 2015 and 2016, the Democratic National Committee servers had 126 people with access. Of those 126 people, 124 used two-factor authentication. Two didn’t. Guess how the Russians got in.

What we’re running up against today is that people who are lawmakers and politicians don’t understand how technology works, and then people have very new technology that we don’t quite understand the ramifications and implications of. What we really need is for people who are making policy to understand far better, and the people who are doing technology maybe should think more about the implications of what they do, and perhaps read up a little bit on ethics.

On balance, do you personally feel the web and the internet has had a positive or negative influence on society?

By and large, positive, though we are beginning to see the negative effects of social media.

Clearly, the web is what has enabled my country to make huge leaps in all kinds of areas, not least of which is transparency, low levels of corruption, so forth.

I would say we entered the digital era in about 2007, when we saw the combination of the ubiquity of portable devices and the smartphones, combined with social media. This led to a wholly different view of the threat of information exchange. And that is when things, I’d say, started getting kind of out of hand.

I think the invention of the web by Tim Berners-Lee in 1989 is probably the most transformative thing to happen since 1452, when Gutenberg invented movable type. Movable type enabled mass book production, followed by mass literacy. That was all good.

But you can also say that the Thirty Years’ War, which was the bloodiest conflict, in terms of proportion of people killed, that Europe has ever had, also came from this huge development of mass literacy. Because it allowed for the popularisation of ideology. Since then, we’ve seen all other kinds of cases; each technology brings with it secondary and tertiary effects.

We don’t quite know yet what the effects are for democracy, but we can sort of hazard a guess. We’re going to have to look at how democracy would survive in this era, in the digital era where we love having a smartphone and reading Facebook.


NGI Policy Summit: Interview with internet pioneer Marleen Stikker

Marleen Stikker is an internet pioneer who co-founded The Digital City, a non-profit internet provider and community for Dutch people, in 1994. She is now director of Waag, a cultural innovation centre in Amsterdam. Here, she explores the early beginnings of the internet, explains what went wrong, and ponders the future of online life.

Marleen is one of the speakers of our upcoming NGI Policy Summit, which will take place online on September 28 and 29 2020. Sign up here, if you would like to join us.

This interview originally appeared as part of the NGI Forward’s Finding CTRL collection.

You have personally been involved with the internet from the beginning of the web. What have we lost and gained since those early days?

Back in 1994 when we launched the Digital City, the internet was a green field: it was an open common where shared values thrived. It was an environment for creation, experimentation, and social and cultural values. There was no commercial expectation at that moment, and no extraction of value for shareholders. The governance of the internet at that time was based on what the network needed to function optimally, and the standards body, the IETF, made its decisions on the basis of consensus.

We lost the notion of the commons: the internet as a shared good. We basically handed it over to the market, and shareholders’ value now defines how the internet functions. We didn’t only lose our privacy but also our self-determination. The internet is basically broken.

What do you think was the most influential decision in the design of the World Wide Web? How could things have turned out differently if we made different decisions?

I think the most important decision was adding a graphical interface to the internet, enabling different types of visualisation to exist. The World Wide Web brought a multimedia interface to the internet, enabling a visual language. And with that, a whole new group of people got to use the internet.

The World Wide Web became synonymous with pages and therefore publishing, which emphasises the idea that it had to do with classical publishing and intellectual rights regulation. Before the World Wide Web, the internet was much more a performative space, a public domain. The publishing metaphor was a setback and, for me, quite disappointing.

What were the big mistakes where we went wrong in the development of the internet? How do you believe these mistakes have shaped our society?

The whole emphasis on exponential growth, on getting filthy rich through the internet, has been a real problem. Basically handing over the internet to the mercy of the capital market has been a major miscalculation. We should have regulated it as a public good and considered people as participants instead of consumers and eyeballs. Now we are not only the product, but the carcass, as Zuboff underlines in her book on surveillance capitalism. All the data is sucked out of us and we act in a scripted, nudging environment, captured in the profiles that companies store in their ‘black box’. We should have had encryption and attribute-based identity by default. The fact that these companies could build up their empires without regulation on the use of our data and behaviour has been a major flaw.

We have to re-design how we deal with digital identity and the control over our personal data.

How do you believe the internet has shaped society for the better?

The internet is empowering people by giving means of communication and distribution, and it enables people to share their ideas, designs, and solutions. For instance, in the MakeHealth program that we run at Waag, or the open design activities.

Can you explain your idea for a full-stack internet and tell us more about it?

I believe we have to design the internet as a public stack. That means we have to start by expressing the public values that will guide the whole process, and it means that we re-think governance and business models. We need open and accountable layers of technology: hardware, firmware, operating systems, and applications.

It means that we ensure there is accountability in each part of the internet. At the basis of all this should be the design for data minimisation, data commons, and attribute-based identity, so people can choose what they want to reveal or not.

We are good at diagnosing problems with the internet, but not as great at finding solutions. What should we do next, and who should implement change?

It starts with acknowledging that technology is not neutral. That means that we need to diversify the teams that build our technologies and make public values central. We have to regulate big tech and build alternatives towards a commons based internet. The governmental and public organizations should make explicit choices for public technologies and alternatives.

What is your positive vision for the future of the internet?

After leaving the internet to the market for the last 25 years, I believe we will need another 25 years to bring back the commons and have a more mature and balanced next generation internet. I do believe 2018 was a turning point.

Are you personally hopeful about the future of the internet?

I think the coming era could be a game changer. If we keep on working together, I see a positive future: we can regain a trustworthy internet.

If we use the current crisis for good, we can rebuild a trustworthy internet. We will need to rethink the principles behind the internet. We need to be thorough and choose an active involvement.

On the whole, do you think the web, and the internet more broadly, has had a positive or negative influence on society?

Both… It gave a lot of people a voice and a way of expression, which is still one of the major achievements of the internet. But it also put our democracies in danger, and if we are not able to counter these new powers, the outcome will be a very negative one. If we can’t counter surveillance capitalism, the outcome of the cost-benefit analysis will be extremely negative.
