
The NGI Policy-in-Practice Fund – announcing the grantees

We are very excited to announce the four projects receiving funding from the Next Generation Internet Policy-in-Practice Fund.

Policymakers and public institutions have more levers at their disposal to spur innovation in the internet space than is often thought, and they can play a powerful role in shaping new markets for ethical tools. We particularly believe that local experimentation and ecosystem building are vital if alternative models for the internet are to become tangible and gain traction. But finding the funding and space to undertake this type of trial is not always easy – especially when outcomes are uncertain. Through the NGI Policy-in-Practice Fund, our aim has been not only to give organisations the means to undertake a number of these trials, but also to make the case for local trials more generally.

Over the past summer and autumn, we ran a highly competitive application process, ultimately selecting four ambitious initiatives that embody the vision behind the NGI Policy-in-Practice Fund. Each of the projects will receive funding of up to €25,000 to test out their idea at a local level and generate important insights that could help us build a more trustworthy, inclusive and democratic future internet.

In conjunction with this announcement, we have released an interview with each of our grantees, explaining their projects and the important issues they are seeking to address in more detail. You can also find a short summary of each project below. Make sure you register for our newsletter to stay up to date on the progress of each of our grantees, and our other work on the future of the internet.

Interoperability to challenge Big Tech power 

This project is run by a partnership between Commons Network and Open Future, based in Amsterdam, Berlin and Warsaw.

This project explores whether the principle of interoperability, the idea that services should be able to work together, and data portability, which would allow users to carry their data with them to new services, can help decentralise power in the digital economy. Currently, we as users are often locked into a small number of large platforms. Smaller alternative solutions, particularly those that want to maximise public good rather than optimise for profit, find it hard to compete in this winner-takes-all economy. Can we use interoperability strategically, harnessing the clout of trusted institutions such as public broadcasters and civil society, to create an ecosystem of fully interoperable and responsible innovation in Europe and beyond?

Through a series of co-creation workshops, the project will explore how this idea could work in practice, and the role trusted public institutions can play in bringing it to fruition. 

Bridging the Digital Divide through Circular Public Procurement

This project will be run by eReuse, based in Barcelona, with support from the City of Barcelona, the Technical University of Barcelona (UPC) and the global Association for Progressive Communications.

During the pandemic, when homeschooling and remote working became the norm overnight, bridging the digital divide has become more important than ever. This project is investigating how we can make it easier for public bodies, and also the private sector, to donate old digital devices, such as laptops and smartphones, to low-income families currently unable to access the internet.

By extending the lifetime of a device in this way, we are also reducing the environmental footprint of our internet use. Laptops and phones now often end up being recycled, or, worse, binned, long before their actual “useful lifespan” is over, putting further strain on the system. Donating devices could be a simple but effective mechanism for keeping them in circulation for longer and strengthening the circular economy.

The project sets out to do two things: first, it wants to try out this mechanism on a local level and measure its impact through tracking the refurbished devices over time. Second, it wants to make it easier to replicate this model in other places, by creating legal templates that can be inserted in public and private procurement procedures, making it easier for device purchasers to participate in this kind of scheme. The partnership also seeks to solidify the network of refurbishers and recyclers across Europe. The lessons learned from this project can serve as an incredibly useful example for other cities, regions and countries to follow. 

Bringing Human Values to Design Practice

This project will be run by the BBC with support from Designswarm, LSE and the University of Sussex.

Many of the digital services we use today, from our favourite news outlet to social media networks, rely on maximising “engagement” as a profit model. A successful service or piece of content is one that generates many clicks, drives further traffic, or generates new paying users. But what if we optimised for human well-being and values instead? 

This project, led by the BBC, seeks to try out a more human-centric approach to measuring audience engagement by putting human values at its core. It will do so by putting into practice longer-standing research work on mapping the kinds of values and needs its users care about the most, and by developing new design frameworks that would make it easier to actually track these kinds of alternative metrics in a transparent way.

The project will run a number of design workshops and share its findings through a dedicated website and other outlets to involve the wider community. The learnings and design methodology that will emerge from this work will not just be trialled within the contexts of the project partners, but will also be easily replicable by others interested in taking a more value-led approach. 

Responsible data sharing for emergencies: citizens in control

This project will be run by the Dutch National Police, in partnership with the Dutch Emergency Services Control, the Amsterdam Safety Region and the City of Amsterdam.

In a data economy that is growing ever more complex, giving meaningful consent about what happens to our personal data remains one of the biggest unsolved puzzles. But new online identity models have shown themselves to be a very promising potential solution, empowering users to share only the information they want to share with third parties, and to do so on their own terms. One way to allow such a new approach to identity and data sharing to scale would be to bring in government and other trusted institutions to build their own services using these principles. That is exactly what this project seeks to do.

The project partners have already laid out all the building blocks of their Data Trust Infrastructure, but want to take it one step further by actually putting this new framework into practice. The project brings together a consortium of Dutch institutional partners to experiment with a first use case, namely the sharing of vital personal data with emergency services in the case of, for example, a fire. The project will not just generate learnings about this specific trial, but will also contribute to further finetuning the design of the wider Data Trust Infrastructure, scope further use cases (of which there are many!), and bring on board more interested parties.


Policy-in-Practice Fund: Interoperability with a Purpose

We’re introducing each of our four Policy-in-Practice Fund projects with an introductory blog post. Below, Sophie Bloemen and Thomas de Groot from Commons Network, and Paul Keller and Alek Tarkowski from Open Future, answer a few of our burning questions to give us some insight into their project and what it will achieve. We’re really excited to be working with four groups of incredible innovators and you’ll be hearing a lot more about the projects as they progress. 

Can you explain to us what you mean by interoperability, and why you think interoperability could be an effective tool to counter centralisation of power in the digital economy?

Simply put, interoperability is the ability of systems to work together and to exchange data easily. In policy circles today, the term is mostly used as a solution to the dominance of a small number of extremely large platform intermediaries, a dominance that is increasingly being understood to be unhealthy, both socially and politically speaking.

In the case of Facebook, for instance, introducing interoperability would mean that a user could connect with Facebook users, interact with them and with the content that they share or produce, without using Facebook. This would be possible because, thanks to interoperability, other services could connect with the Facebook infrastructure.

This scenario is often presented as a way of creating a more level playing field for competing services (who would have access to Facebook’s user base). In the same vein, proponents of interoperability argue for an ecosystem of messaging services where messages can be exchanged across services. This is also the basis of the most recent antitrust case by the US government against Facebook. At their core, these are arguments in favour of individual freedom of choice and of empowering competitors in the market. We call this approach competitive interoperability.

While this would clearly be a step in the right direction (the Digital Markets Act proposed by the European Commission at the end of last year contains a first baby step, which would require “gatekeeper platforms” to allow interoperability for ancillary services that they offer but not for their main services), it is equally clear that competitive interoperability will not substantially change the nature of the online environment. Increasing choice between different services designed to extract value from our online activities may feel better than being forced to use a single service, but it does not guarantee that the exploitative relationship between service providers and their users will change. We cannot predict the effects of increased market competition on the control of individual users over their data. In fact, allowing users to take their data from one service to another comes with a whole raft of largely unresolved personal data protection issues.

So, even though there are limits on this particular idea of interoperability, this does not mean that the concept itself has no use. Instead, we say that it needs to be envisaged with a different purpose in mind: building a different kind of online environment that answers to the needs of public institutions and civic communities, instead of ‘improving markets’. We see interoperability as a design principle that has the potential to build a more decentralised infrastructure that enables individual self-determination in the online environment. We propose to call this type of interoperability generative interoperability.

What question are you specifically trying to answer with this research? And why is it important?

In our view, the purpose of generative interoperability must be to enable what we call an Interoperable Public Civic Ecosystem. And we are exploring what such an ecosystem would encompass, together with our partners. We know that a public civic ecosystem has the potential to provide an alternative digital public space that is supported by public institutions (public broadcasters, universities and other educational institutions, libraries and other cultural heritage institutions) and civic- and commons-based initiatives. Can such an ecosystem allow public institutions and civic initiatives to route around the gatekeepers of the commercial internet, without becoming disconnected from their audiences and communities? Can it facilitate meaningful interaction outside the sphere of digital platform capitalism?

This experiment is part of the Shared Digital Europe agenda. Could you tell us a bit more about that agenda?

Shared Digital Europe is a broad coalition of thinkers, activists, policymakers and civil society. Two years ago we launched a new framework to guide policymakers and civil society organisations involved with digital policymaking in the direction of a more equitable and democratic digital environment, where basic liberties and rights are protected, where strong public institutions function in the public interest, and where people have a say in how their digital environment functions.

We established four guiding principles that can be applied to all layers of the digital space, from the physical networking infrastructure to the applications and services running on top of the infrastructure and networking stack. And they can also be applied to the social, economic or political aspects of society undergoing digital transformation. They are: 

  • Enable Self-Determination, 
  • Cultivate the Commons, 
  • Decentralise Infrastructure and 
  • Empower Public Institutions.

Now with this new project, we are saying that interoperability should primarily be understood to enable interactions between public institutions, civic initiatives and their audiences, without the intermediation of the now dominant platforms. Seen in this light the purpose of interoperability is not to increase competition among platform intermediaries, but to contribute to building public infrastructures that lessen the dependence of our societies on these intermediaries. Instead of relying on private companies to provide infrastructures in the digital space that they can shape according to their business model needs, we must finally start to build public infrastructures that are designed to respond to civic and public values underpinning democratic societies. In building these infrastructures a strong commitment to universal interoperability based on open standards and protocols can serve as an insurance policy against re-centralisation and the emergence of dominant intermediaries.

These public infrastructures do not emerge by themselves; they are the product of political and societal will. In the European Union, the political climate seems ripe for creating such a commitment. As evidenced by the current flurry of regulation aimed at the digital space, the European Commission has clearly embraced the view that Europe needs to set its own rules for the digital space. But if we want to see real systemic change, we must not limit ourselves to regulating private companies (via baby steps towards competitive interoperability and other types of regulation); we must also invest in interoperable digital public infrastructures that empower public institutions and civic initiatives. If European policymakers are serious about building the next generation internet, they will need to see themselves as ecosystem-builders instead of market regulators. Understanding interoperability as a generative principle will be an important step towards this objective.

How can people get involved and find out more?

People can reach out to us through our Twitter account: @SharedDigitalEU, they can write to us via email at hello@shared-digital.eu, or they can stay in touch through our newsletter, which we send out a couple of times per year.


Policy-in-Practice Fund: Data sharing in emergencies

We’re introducing each of our four Policy-in-Practice Fund projects with an introductory blog post. Below, Manon den Dunnen from the Dutch National Police answers a few of our burning questions to give us some insight into the project and what it hopes to achieve. We’re really excited to be working with four groups of incredible innovators and you’ll be hearing a lot more about the projects as they progress. 

Your project is exploring how we share information with public bodies. What is the problem with the way we do it now?

Europe has made significant progress on protecting our privacy, including through the General Data Protection Regulation (GDPR). However, there remain several problems with the collection and use of personal data. Information about us is often collected without our consent or with disguised consent, causing citizens and other data owners to lose control over their personal data and how it is used. This contributes to profiling (discrimination, exclusion) and cybercrime. At the same time, it is laborious and complex for citizens and public institutions to obtain personal data in a transparent and responsible way.

That is why a collective of (semi-)public institutions has been working towards an independent, public trust infrastructure that allows the conditional sharing of data. The Digital Trust Infrastructure, or DTI, aims to safeguard our public values by unravelling the process of data sharing into small steps and empowering data subjects to take control of each step at all times.

Rather than focusing on information sharing with public bodies, our project will explore public bodies taking responsibility for creating processes that help to safeguard constitutional values in a digitised society.

What kinds of risks are there for handling personal information? 

Preventing these problems requires organisations to work according to several principles, most of which can be found in the GDPR. Let’s take cybercrime as a potential threat. The risk of your data being exposed to cybercrime increases as your data is stored in more places. Organisations must therefore practise data minimisation, but there are other approaches available: for example, structures that allow citizens to give organisations conditional access to information about them, rather than requiring those organisations to store the data themselves. This ‘consent arrangement’ is exactly what this project will set out to test.

This idea will be new for many people, so expert support and protection should be provided when setting up a consent arrangement for data sharing. But the potential impact is huge. Take for example someone with an oxygen cylinder at home for medical use. This is not something a citizen would be expected to declare as part of their daily life. But when there is a fire, both that citizen and the fire brigade would like that information to be shared.

That piece of data about the citizen’s use of an oxygen cylinder is the only information needed by the fire brigade. But current systems often share far more information automatically, including the person’s identity. Citizens should be able to give the fire brigade conditional and temporary access to only the fact that an oxygen cylinder is present, and only in emergencies such as a fire.
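To make the idea of conditional, temporary, attribute-level access more concrete, here is a minimal sketch of what such a consent check could look like in code. It is purely illustrative and not part of the project’s actual Digital Trust Infrastructure; all the names used (ConsentGrant, is_access_allowed, the attribute and condition labels) are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical, illustrative model of a "consent arrangement":
# a citizen grants one named party access to one attribute,
# for one purpose, only while a stated condition holds.
@dataclass
class ConsentGrant:
    attribute: str        # e.g. "oxygen_cylinder_present"
    grantee: str          # e.g. "fire_brigade"
    purpose: str          # e.g. "emergency_response"
    condition: str        # e.g. "active_fire_alarm"
    expires_at: datetime  # consent is temporary by default

def is_access_allowed(grant: ConsentGrant, requester: str, purpose: str,
                      active_conditions: set, now: datetime) -> bool:
    """Every check must pass: right party, right purpose, the triggering
    condition is actually present, and the grant has not expired."""
    return (requester == grant.grantee
            and purpose == grant.purpose
            and grant.condition in active_conditions
            and now < grant.expires_at)

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    grant = ConsentGrant(
        attribute="oxygen_cylinder_present",
        grantee="fire_brigade",
        purpose="emergency_response",
        condition="active_fire_alarm",
        expires_at=now + timedelta(days=365),
    )
    # Shared only with the fire brigade, during an actual alarm ...
    print(is_access_allowed(grant, "fire_brigade", "emergency_response",
                            {"active_fire_alarm"}, now))   # True
    # ... and never with other parties or for other purposes.
    print(is_access_allowed(grant, "navigation_app", "marketing",
                            {"active_fire_alarm"}, now))   # False
```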

How can public bodies do this safely and with the trust of citizens? What kind of role do you think public bodies can play in increasing trust in data-sharing in the digital economy more broadly?

A data-driven approach to social and safety issues can truly improve quality of life and facilitate safety and innovation. But we must set an example in the way we approach this. At each moment, we must carefully consider which specific piece of information is needed by a specific person, in what specific context and moment in time, for what specific purpose and for how long.

By investing in this early on and involving citizens in a democratic and inclusive way, we can not only increase trust but also use the results to demand that partners do the same, creating the new normal.

You are working on a specific case study with the Dutch Police and other partners. Can you tell us a bit more about that experiment, and how you think this model could be used in other contexts too?

During the next few months, we will create a first scalable demonstrator of the Digital Trust Infrastructure. It will be based on the use-case of sharing home-related data in the context of an incident such as a fire. The generic building blocks that we create will be fundamental to all forms of data sharing: identification, authentication, attribution, permissions, logging and so on. They will also be open-source and usable in all contexts. We have a big list of things to work on!

But that is only part of the story. Complementary to the infrastructure, an important part of the project focuses on drawing up a consent arrangement. This will allow residents to conditionally share information about their specific circumstances and characteristics with specific emergency services in a safe, trusted way. To put people in control of every small step and ensure the consent arrangement will be based on equality, we will organise the necessary expertise to understand every aspect. The consequences of our actions have to be transparent, taking into account groups with various abilities, ages and backgrounds.

We will also explore how, and to what extent, the conditions and safeguards can be implemented in a consent arrangement and embedded in the underlying trust infrastructure in a democratic and resilient way. We will also look at how a sustainable and trustworthy governance structure can be set up for such a consent arrangement. We will share all our findings on these areas.

How can people get involved and find out more?

We are currently collecting experiences from other projects on how to engage residents in an inclusive, democratic and empowered (conscious) way. All the knowledge that we are building up in this area will be shared on the website of our partner Amsterdam University of Applied Sciences (HvA). Naturally, we would value hearing about the experiences of others in this area, so please do get in touch.


Workshop report: Follow us OFF Facebook – decent alternatives for interacting with citizens

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by Redecentralize.org.

Despite the incessant outcry over social media giants’ disrespect of privacy and unaccountable influence on society, any public sector organisation wanting to reach citizens feels forced to be present on their enormous platforms. But through its presence, an organisation legitimises these platforms’ practices, treats them like public utilities, subjects its content to their opaque filters and ranking, and compels citizens to be on them too — thus further strengthening their dominance. How could we avoid the dilemma of either reaching or respecting citizens?

Redecentralize organised a workshop to address this question. The workshop explored the alternative of decentralised social media, in particular Mastodon, which lets users choose whichever providers and apps they prefer because these can all interoperate via standardised protocols like ActivityPub; the result is a diverse, vendor-neutral, open network (dubbed the Fediverse), analogous to e-mail and the world wide web.
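To make this kind of standardised interoperability tangible, here is a minimal sketch that discovers a Fediverse account via WebFinger and then fetches its public ActivityPub actor document; any compliant server can be queried in the same way, which is what makes the network vendor-neutral. The instance and account names are placeholders, this is not code from the workshop, and some servers restrict unauthenticated fetches.

```python
import requests

INSTANCE = "mastodon.example"       # placeholder instance
ACCOUNT = "alice@mastodon.example"  # placeholder account

# 1. WebFinger lookup (RFC 7033): resolve the account name to the URL
#    of its ActivityPub actor document.
webfinger = requests.get(
    f"https://{INSTANCE}/.well-known/webfinger",
    params={"resource": f"acct:{ACCOUNT}"},
    timeout=10,
).json()
actor_url = next(
    link["href"]
    for link in webfinger["links"]
    if link.get("rel") == "self"
    and link.get("type") == "application/activity+json"
)

# 2. Fetch the actor document itself, asking for ActivityPub JSON.
actor = requests.get(
    actor_url,
    headers={"Accept": "application/activity+json"},
    timeout=10,
).json()

# The same fields exist regardless of which Fediverse software the
# server runs, e.g. Mastodon, Pleroma or PeerTube.
print(actor.get("preferredUsername"), actor.get("inbox"))
```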

Leading by example in this field is the state ministry of Baden-Württemberg, possibly the first government with an official Mastodon presence. Their head of online communications Jana Höffner told the audience about their motivation and experience. Subsequently, the topic was put in a broader perspective by Marcel Kolaja, Member and Vice-President of the European Parliament (and also on Mastodon). He explained how legislation could require the dominant ‘gatekeeper’ platforms to be interoperable too and emphasised the role of political institutions in ensuring that citizens are not forced to agree to particular terms of service in order to participate in public discussion.


Workshop report: Futurotheque – a trip to the future

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by Sander Veenhof and Leonieke Verhoog, creators of Futurotheque.

Sander Veenhof, augmented reality artist, and Leonieke Verhoog, programme manager at PublicSpaces, took their session attendees on a trip to the future. They did this ‘wearing’ the interactive face-filters they created for their speculative fiction and research project, the ‘Futurotheque’. The AR effects transformed them into citizens from the years 2021 right up to 2030, wearing the technical equipment we can expect to be wearing during those years. But besides the hardware, the filters were foremost intended to visualise the way we’ll experience the world in the near future: through the HUD (head-up display) of our augmented reality wearables.

As users, we tend to think of the future of AR as more of the same in a hands-free way, but this session aimed to look beyond the well-known use-cases for these devices. Of course, they will provide us with all our information and entertainment needs and they can guide us wherever we are. But will that be our navigation through the physical world, or will these devices try to guide us through life? In what way will cloud intelligence enhance us, making use of the built-in camera that monitors our activities 24/7? What agency do we want to keep? And in what way should citizens be supported with handling these new devices, and the new dilemmas arising from their use? 

These are abstract issues, but the face-filter visualisations applied to Sander and Leonieke helped to show the day-to-day impact of these technological developments on us as individuals, and prompted an interesting discussion with the session participants. After a dazzling peek into the next decade, the conclusion was that there’s a lot to think about when these devices become part of our society. But fortunately, that’s not the case yet. We still have time to think of ways to integrate these devices into our society beforehand, instead of doing that afterwards.


Workshop report: People, not experiments – why cities must end biometric surveillance

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by European Digital Rights (EDRi), which was originally published on the EDRi website.

We debated the use of facial recognition in cities with the policymakers and law enforcement officials who actually use it. The discussion got to the heart of EDRi’s warnings that biometric surveillance puts limits on everyone’s rights and freedoms, amplifies discrimination, and treats all of us as experimental test subjects. This techno-driven democratic vacuum must be stopped.

From seriously flawed live trials of facial recognition by London’s Metropolitan police force, to unlawful biometric surveillance in French schools, to secretive roll outs of facial recognition which have been used against protesters in Serbia: creepy mass surveillance by governments and private companies, using people’s sensitive face and body data, is on the rise across Europe. Yet according to a 2020 survey by the EU’s Fundamental Rights Agency, 80% of Europeans are against sharing their face data with authorities.

On 28 September, EDRi participated in a debate at the NGI Policy Summit on “Biometrics and facial recognition in cities” alongside policymakers and police officers who have authorised the use of the tech in their cities. EDRi explained that public facial recognition, and similar systems which use other parts of our bodies like our eyes or the way we walk, are so intrusive as to be inherently disproportionate under European human rights law. The ensuing discussion revealed many of the reasons why public biometric surveillance poses such a threat to our societies:

• Cities are not adequately considering risks of discrimination: according to research by WebRoots Democracy, black, brown and Muslim communities in the UK are disproportionately over-policed. With the introduction of facial recognition in multiple UK cities, minoritised communities are now having their biometric data surveilled at much higher rates. In one example from the research, the London Metropolitan Police failed to carry out an equality impact assessment before using facial recognition at the Notting Hill carnival – an event which famously celebrates black and Afro-Caribbean culture – despite knowing the sensitivity of the tech and the foreseeable risks of discrimination. The research also showed that whilst marginalised communities are the most likely to have police tech deployed against them, they are also the ones that are the least consulted about it.

• Legal checks and safeguards are being ignored: according to the Chief Technology Officer (CTO) of London, the London Metropolitan Police has been on “a journey” of learning, and understand that some of their past deployments of facial recognition did not have proper safeguards. Yet under data protection law, authorities must conduct an analysis of fundamental rights impacts before they deploy a technology. And it’s not just London that has treated fundamental rights safeguards as an afterthought when deploying biometric surveillance. Courts and data protection authorities have had to step in to stop unlawful deployments of biometric surveillance in Sweden, Poland, France, and Wales (UK) due to a lack of checks and safeguards.

• Failure to put fundamental rights first: the London CTO and the Dutch police explained that facial recognition in cities is necessary for catching serious criminals and keeping people safe. In London, the police have focused on ethics, transparency and “user voice”. In Amsterdam, the police have focused on “supporting the safety of people and the security of their goods” and have justified the use of facial recognition by the fact that it is already prevalent in society. Crime prevention and public safety are legitimate public policy goals: but the level of the threat to everyone’s fundamental rights posed by biometric mass surveillance in public spaces means that vague and general justifications are just not sufficient. Having fundamental rights means that those rights cannot be reduced unless there is a really strong justification for doing so.

• The public are being treated as experimental test subjects: across these examples, it is clear that members of the public are being used as subjects in high-stakes experiments which can have real-life impacts on their freedom, access to public services, and sense of security. Police forces and authorities are using biometric systems as a way to learn and to develop their capabilities. In doing so, they are not only failing their human rights obligations, but are also violating people’s dignity by treating them as learning opportunities rather than as individual humans deserving of respect and dignity.

The debate highlighted the worrying patterns of a lack of transparency and consideration for fundamental rights in current deployments of facial recognition, and other public biometric surveillance, happening all across Europe. The European Commission has recently started to consider how technology can reinforce structural racism, and to think about whether biometric mass surveillance is compatible with democratic societies. But at the same time, it is bankrolling projects like the horrifyingly dystopian iBorderCtrl. EDRi’s position is clear: if we care about fundamental rights, our only option is to stop the regulatory whack-a-mole, and permanently ban biometric mass surveillance.


Workshop report: What your face reveals – the story of HowNormalAmI.eu

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by Tijmen Schep.

At the Next Generation Internet Summit, Dutch media artist Tijmen Schep revealed his latest work – an online interactive documentary called ‘How Normal Am I?’. It explains how face recognition technology is increasingly used in the world around us, for example when the dating website Tinder gives all its users a beauty score to match people who are about equally attractive. Besides just telling us about it, the project also allows people to experience this for themselves. Through your webcam, you will be judged on your beauty, age, gender, body mass index (BMI), and your facial expressions. You’ll even be given a life expectancy score, so you’ll know how long you have left to live.
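To give a sense of how easily this kind of webcam-based face analysis can be assembled from off-the-shelf components, here is a minimal sketch using OpenCV’s bundled Haar-cascade face detector. It only detects faces; the documentary’s attribute predictions (beauty, age, BMI and so on) run additional models on top of a step like this, and none of its actual code is reproduced here.

```python
import cv2

# Minimal sketch: detect faces from the webcam using the Haar cascade
# that ships with OpenCV. Detection only; attribute prediction would
# require separate, additional models.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
camera = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

camera.release()
cv2.destroyAllWindows()
```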

The project has sparked the imagination – and perhaps a little feeling of dread – in many people: not even two weeks later, the documentary has been ‘watched’ over 100,000 times.

At the Summit, Tijmen offered a unique insight into the ‘making of’ of this project. In his presentation, he talked about the ethical conundrums of building a BMI prediction algorithm that is based on photos from arrest records, and that uses science that has been debunked. The presentation generated a lot of questions and was positively received by those who visited the summit.


Workshop report: Data-sharing in cities

The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by the Coalition of Cities for Digital Rights, written by Beatriz Benitez and Malcolm Bain.

At the Next Generation Internet Policy Summit 2020, organised by Nesta and the City of Amsterdam on 28 September 2020, Daniel Sarasa (Zaragoza) and Malcolm Bain (Barcelona), on behalf of the Coalition of Cities for Digital Rights, hosted a round table for city and regional stakeholders that explored the application of digital rights in local data-sharing platforms, throughout the planning, implementation and evolution stages of digital data-sharing services.

Data-sharing platforms are playing an important role in cities by integrating data collected throughout or related to the city and its citizens from a wide variety of sources (central administration, associated entities, utilities, the private sector), enabling local authorities, businesses and even occasionally the public to access this data produced within the city and use it for limited or unlimited purposes (open data).

Malcolm introduced the session, highlighting that while cities are keen to share data and use shared data in city digital services, they are (or should be) also aware of the digital rights issues arising in these projects: citizens’ privacy, the transparency and openness of the data used, the accessibility and inclusion of citizens, bias in the data sets used, and the privatisation of the use of city-related data. Luckily, cities are also in the best position to introduce the concept of ‘digital rights by design’ in these projects, and to correct issues such as bias, privacy intrusions, unfairness, profiling and data misuse. He briefly showcased the Coalition’s work in this area in the Data Sharing Working Group, focusing on the ‘building blocks’ for rights-compliant data-sharing projects that extract value from urban big data while respecting residents’ and visitors’ rights, including policies, processes, infrastructures, and specific actions and technologies.

Daniel highlighted the work of Eurocities on their Citizens Data Principles, whose aim is to offer guidance to European local governments on more socially responsible use of data, and to recognise, protect and uphold citizens’ rights over the data they produce. The principles support how to use data-generated knowledge to improve urban life and preserve European values through scientific, civic, economic and democratic progress. Daniel presented one of his own city’s data-sharing projects, Periscopio, a framework for sharing information contained in urban data (public and private) in such a way that it allows social agents and citizens to be involved in creating social, scientific, economic and democratic value, as well as enabling the creation of better urban services.

Then, the cities of San Antonio, Long Beach, Portland, Toronto, Rennes, Helsinki, Amsterdam and Barcelona each presented some case studies from their cities, highlighting different issues raised by their data-sharing platforms and projects.

  • For the City of San Antonio, USA, Emily B. Royall addressed the issue of data bias and the need to listen to the community under the theme ‘Leveraging Data for Equity’.
  • Johanna Pasilkar of Helsinki shared with us the work of the ‘MyData’ operator initiative and how it eases the daily life of residents by consolidating data collected by the city’s departments and organisations and enabling sharing across several municipalities (data portability).
  • On behalf of the City of Amsterdam, Ron Van der Lans told us about the city’s collaboration with navigation companies such as Google, Waze and BeMobile, sharing traffic data to improve the mobility and quality of life of citizens.
  • Hamish Goodwin from the City of Toronto, Canada explained how they are attempting to integrate digital rights principles into the city digital infrastructure and the municipalities’ decision-making and how to put a policy framework into practice – the results of this are just coming out.
  • From the city of Rennes, Ben Lister introduced us to RUDI, a local, multi-partner data-sharing platform which goes beyond open data and connects users and producers to create new and/or better services.
  • Héctor Domínguez from the city of Portland, USA told us about the importance of ‘Racial Justice’ as a core value to regulating emergent technology, based on the respect for privacy, trusted surveillance and digital inclusion.
  • Ryan Kurtzman, on behalf of the City of Long Beach, USA, spoke about positive and negative associations of smart cities, and how involving citizens in the participatory design of digital services can leverage the positive aspects: personal convenience, engagement and solving social challenges.
  • To conclude the round, Marc Pérez-Battle from Barcelona presented several data sharing and open data projects led by the City Council.

The city participants highlighted the need for embedding digital rights at design time (privacy, transparency, security, accessibility, etc.), citizen participation, and having the flexibility to adapt and correct any issues that may arise – something that may be more difficult when the technologies are embedded in the city infrastructure, and thus all the more important to design correctly from the start. Common themes among the projects included the importance of citizen involvement, respect for privacy and security, and the need for transparency and for avoiding data bias. In addition, listeners to the session in the online chat also raised the issue of data ‘ownership’, and whether this is a useful concept or rather a misleading one – cities are stewards of data on behalf of the public rather than owners of the data they gather and use.

The session concluded by stating that much work is still to be done, but just by raising cities’ awareness of digital rights issues in data-sharing projects, we are taking a big first step. The Coalition will shortly be releasing the Data Sharing Concept Note, and the associated case studies that were briefly presented during the round table.


NGI Policy Summit: Former Estonian President Toomas Hendrik Ilves interview

As president of Estonia from 2006 to 2016, Toomas Hendrik Ilves pushed for digital transformation, ultimately leading Forbes to label him “the architect of the most digitally savvy country on earth”. Every day, e-Estonia allows citizens to interact with the state via the internet. Here, Ilves discusses why other governments might be slower with such developments, and ponders how things can improve further in the future.

Toomas Hendrik Ilves is one of the speakers of our upcoming NGI Policy Summit, which will take place online on September 28 and 29 2020. Sign up here, if you would like to join us.

This interview originally appeared as part of the NGI Forward’s Finding CTRL collection.

Estonia had a rapid ascent to becoming a leading digital country, how did you push for this as a diplomat in the 90s?

Estonia became independent in ’91, and everyone was trying to figure out what we should do – we were in terrible shape economically and in complete disarray. Different people had different ideas. My thinking was basically that no matter what, we would always be behind.

In ’93, Mosaic came out, which I immediately got. You had to buy it at the time. I looked at this, and it just struck me that, ‘Wow, this is something where we could start out on a level playing field, no worse off than anyone else’.

For that, we had to get a population that really is interested in this stuff, so I came up with this idea – which later carried the name of Tiger’s Leap – which was to computerise all the schools, get computers in all the schools and connect them up. It met with huge opposition, but the government finally agreed to it. By 1998, all Estonian schools were online.

How did things progress from there, and what was the early public reaction like?

We had a lot of support from NGOs. People thought it was a cool idea, and the banks also thought it was a good idea, because they really supported the idea of digitization. By the end of the 90s, it became clear that this was something that Estonia was ahead of the curve on.

But, in fact, in order to do something, you really needed to have a much more robust system. That was when a bunch of smart people came up with the idea of a strong digital identity in the form of a chip card, and also developed the architecture for connecting everything up, because we were still too poor to have one big data centre to handle everything. That led to what we call X-Road, which connects everything to everybody, but always through an authentication of your identity, which is what gives the system its very strong security.

It was a long process. I would be lying to say that it was extremely popular in the beginning, but over time, many people got used to it.

I should add that Tiger’s Leap was not always popular. The teachers union had a weekly newspaper, and for about a year, no issue would seem to appear without some op-ed attacking me.

Estonia’s e-Residency programme allows non-Estonians access to Estonian services via an e-resident smart card. Do you think citizenship should be less defined by geographical boundaries?

Certain things are clearly tied to your nation, anything that involves political rights, or say, social services – if you’re a taxpayer or a citizen, you get those.

But on the other hand, there are many things associated with your geographical location that in fact have very little to do with citizenship. In the old days, you would bank with your local bank, you didn’t have provisions for opening an account from elsewhere because the world was not globalised. And it was the same thing with establishing companies.

So if you think about those things you can’t do, well, why not? We don’t call it citizenship, you don’t get any citizen rights, but why couldn’t you open a bank account in my country if you want to? If we know who you are, and you get a digital identity, you can open a company.

Most recently, we’ve been getting all kinds of interest from people in the UK. Because if you’re a big company in the UK, it’s not a problem to make yourself also resident in Belgium, Germany, France. If you’re a small company, it’s pretty hard. I mean, they’re not going to set up a brick and mortar office. Those are the kind of people who’ve been very interested in setting up or establishing themselves as businesses within the European Union, which, in the case of Estonia, they can do without physically being there.

What do you think Europe and the rest of the world can learn from Estonia?

There are services that are far better when they’re digital which right now are almost exclusively nationally-based. We have digital prescriptions – wonderful things where you just write an email to your doctor and the doctor will put the prescription into the system and you can go to any pharmacy and pick it up.

This would be something popular that would work across the EU. Everywhere I go, I get sick. My doctor, he puts in a prescription. If I’m in Valencia, Spain, he puts it into the system, which then also operates in Spain.

The next step would be for medical records. Extend the same system: you identify yourself, authorise the doctors to look at your records, and they would already be translated. I would like to see these kinds of services being extended across Europe. Right now, the only cross-border service of this type that works is between Estonia and Finland. It doesn’t even work between Estonia and Latvia, our southern neighbour. So I think it’ll be a while, but it’s a political decision. Technologically, it could work within months. The Finns have adopted our X-Road architecture especially easily. It’s completely compatible; we just give it away, it’s non-proprietary open source software.

The technical part is actually very easy, the analogue part of things is very difficult, because they have all these political decisions.

What would your positive vision for the future of the internet look like?

Right now I’m in the middle of Silicon Valley, in Palo Alto, and within a ten mile radius of where I sit are the headquarters of Tesla, Apple, Google, Facebook, Palantir – not to mention all kinds of other companies – producing all kinds of wonderful things, really wonderful things that not only my parents or my grandparents could never even dream of, but even I couldn’t dream of 25 years ago. But at the same time, when I look at the level of services for ordinary people – citizens – then the US is immensely behind countries like Estonia.

The fundamental problem of the internet is summed up in a 1993 New Yorker cartoon, where there’s a picture of two dogs at a computer, and one dog says to the other, “On the internet no-one knows you’re a dog”. This is the fundamental problem of identity that needs to be addressed. It has been addressed by my country.

Unless you have services for people that are on the internet, the internet’s full potential will be lost and not used.

What do you think prevents other nations pursuing this idea of digital identity?

It requires political will. The old model and the one that continues to be used, even in government services in places like the United States, is basically “email address plus password”. Unfortunately, that one-factor identification system is not based on anything very serious.

Governments have to understand that they need to deal with issues such as identity. Unless you do that, you will be open to all these hacks, all of these various problems. I think I read somewhere that in 2015 and 2016, the Democratic National Committee had 126 people with access to its servers. Of those 126 people, 124 used two-factor authentication. Two didn’t. Guess how the Russians got in.

What we’re running up against today is that people who are lawmakers and politicians don’t understand how technology works, and then people have very new technology that we don’t quite understand the ramifications and implications of. What we really need is for people who are making policy to understand far better, and the people who are doing technology maybe should think more about the implications of what they do, and perhaps read up a little bit on ethics.

On balance, do you personally feel the web and the internet has had a positive or negative influence on society?

By and large, positive, though we are beginning to see the negative effects of social media.

Clearly, the web is what has enabled my country to make huge leaps in all kinds of areas, not least of which is transparency, low levels of corruption, so forth.

I would say we entered the digital era in about 2007, when we saw the combination of the ubiquity of portable devices and the smartphones, combined with social media. This led to a wholly different view of the threat of information exchange. And that is when things, I’d say, started getting kind of out of hand.

I think the invention of the web by Tim Berners-Lee in 1989 is probably the most transformative thing to happen since 1452, when Gutenberg invented movable type. Movable type enabled mass book production, followed by mass literacy. That was all good.

But you can also say that the Thirty Years’ War, which was the bloodiest conflict, in terms of proportion of people killed, that Europe has ever had, also came from this huge development of mass literacy. Because it allowed for the popularisation of ideology. Since then, we’ve seen all other kinds of cases; each technology brings with it secondary and tertiary effects.

We don’t quite know yet what the effects are for democracy, but we can sort of hazard a guess. We’re going to have to look at how democracy would survive in this era, in the digital era where we love having a smartphone and reading Facebook.


NGI Policy Summit: Interview with internet pioneer Marleen Stikker

Marleen Stikker is an internet pioneer who co-founded The Digital City, a non-profit internet provider and community for Dutch people, in 1994. She is now director of Waag, a cultural innovation centre in Amsterdam. Here, she explores the early beginnings of the internet, explains what went wrong, and ponders the future of online life.

Marleen is one of the speakers of our upcoming NGI Policy Summit, which will take place online on September 28 and 29 2020. Sign up here, if you would like to join us.

This interview originally appeared as part of the NGI Forward’s Finding CTRL collection.

You have personally been involved with the internet from the beginning of the web. What have we lost and gained since those early days?

Back in 1994 when we launched the Digital City, the internet was a green field: it was an open common where shared values thrived. It was an environment for creation, experimentation, and social and cultural values. There was no commercial expectation at that moment and there was no extraction of value for shareholders. The governance of the internet at that time was based on what the network needed to function optimally; the standards committee, the IETF, made its decisions on the basis of consensus.

We lost the notion of the commons: the internet as a shared good. We basically handed it over to the market, and shareholders’ value now defines how the internet functions. We didn’t only lose our privacy but also our self-determination. The internet is basically broken.

What do you think was the most influential decision in the design of the World Wide Web? How could things have turned out differently if we made different decisions?

I think the most important decision was a graphical interface to the internet, enabling different types of visualisation to exist. The World Wide Web brought a multimedia interface to the internet, enabling a visual language. And with that, a whole new group of people got to use the internet.

The World Wide Web became synonymous with pages and therefore publishing, which emphasises the idea that it was to do with classical publishing and intellectual rights regulation. Before the World Wide Web, the internet was much more a performative space, a public domain. The publishing metaphor was a setback and for me quite disappointing.

What were the big mistakes where we went wrong in the development of the internet? How do you believe these mistakes have shaped our society?

The whole emphasis on exponential growth, getting filthy rich through the internet, has been a real problem. Basically handing over the internet to the mercy of the capital market has been a major miscalculation. We should have regulated it as a public good and considered people as participants instead of consumers and eyeballs. Now we are not only the product, but the carcass, as Zuboff underlines in her book on surveillance capitalism. All the data is sucked out of us and we act in a scripted, nudging environment, captured in the profiles that companies store in their ‘black box’. We should have had encryption and attribute-based identity by default. The fact that these companies can build up their empires without regulation on the use of our data and behaviour has been a major flaw.

We have to re-design how we deal with digital identity and the control over our personal data.

How do you believe the internet has shaped society for the better?

The internet is empowering people by giving means of communication and distribution, and it enables people to share their ideas, designs, and solutions. For instance, in the MakeHealth program that we run at Waag, or the open design activities.

Can you explain your idea for a full-stack internet and tell us more about it?

I believe we have to design the internet as a public stack, which means that we have to start by expressing the public values that will guide the whole process, and that we re-think the governance and business models. We need open and accountable layers of technology: hardware, firmware, operating systems and applications.

It means that we ensure that there is accountability in each part of the internet. At the basis of all this should be the design for data minimisation, data commons, and attribute-based identity, so people can choose what they want to reveal and what they don’t.

We are good at diagnosing problems with the internet, but not as great at finding solutions. What should we do next, and who should implement change?

It starts with acknowledging that technology is not neutral. That means that we need to diversify the teams that build our technologies and make public values central. We have to regulate big tech and build alternatives towards a commons-based internet. Governmental and public organisations should make explicit choices for public technologies and alternatives.

What is your positive vision for the future of the internet?

After leaving the internet to the market for the last 25 years, I believe we will need another 25 years to bring back the commons and have a more mature and balanced next generation internet. I do believe 2018 has been a turning point.

Are you personally hopeful about the future of the internet?

I think the coming era could be a game changer. If we keep on working together, I see a positive future: we can regain a trustworthy internet.

If we use the current crisis for good, we can rebuild a trustworthy internet. We will need to rethink the principles behind the internet. We need to be thorough and choose an active involvement.

On the whole, do you think the web, and the internet more broadly, has had a positive or negative influence on society?

Both… It gave a lot of people a voice and a way of expression, which is still one of the major achievements of the internet. But it also put our democracies in danger, and if we are not able to counter these new powers, the outcome will be a very negative one. If we can’t counter surveillance capitalism, the outcome of that cost-benefit analysis will be extremely negative.
