Public procurement conditions for trustworthy AI and algorithmic systems
by Linda van de Fliert
Governments increasingly use AI and algorithmic systems. The City of Amsterdam, for example, uses algorithmic systems in some of its primary city tasks, such as parking control and acting on notifications from citizens.
In the last couple of years, many guidelines and frameworks have been published on algorithmic accountability. A notable example is the Ethics guidelines for trustworthy AI by the High-Level Expert Group on AI advising the European Commission, but there are many others. What all these frameworks have in common is that they name transparency as a key principle of trustworthy AI and algorithmic systems. But what does that mean, in practice? We can talk about the concept of transparency, but how can it actually be operationalized?
That’s why the City of Amsterdam took the initiative to translate the frameworks and guidelines into a practical instrument: contractual clauses for the procurement of algorithmic systems. In this post, you can read all about these procurement conditions and how you can be involved in taking them to the next level: a European standard for public procurement of AI and algorithmic systems.
Terms & Conditions
Starting in late 2019, the City of Amsterdam joined forces with several Dutch and international experts, ranging from legal and procurement specialists to suppliers and developers. The result is version 1.0 of a new set of procurement conditions, with an accompanying explanatory guide. This first iteration is already free for all to use and adapt.
By choosing procurement conditions as a means to operationalize the ethics and accountability frameworks, we kill two birds with one stone. First, it provides clear guidance to suppliers, who, according to the World Economic Forum, “understand the challenges of algorithmic accountability for governments, but look to governments to create clarity and predictability about how to manage risks of AI, starting in the procurement process.” Secondly, and maybe more importantly, procurement conditions demand clear definitions, both of key concepts like ‘algorithmic system’ and ‘transparency’ and of the conditions themselves.
Transparency in practice
Although the procurement conditions aim to tackle several issues related to the procurement of algorithmic systems, like vendor lock-in, the main novelty is that they provide a separation between information needed for algorithmic accountability on the one hand and company-sensitive information on the other. The conditions distinguish between three main types of transparency that the supplier should provide:
Technical transparency provides information about the technical inner workings of the algorithmic system; for instance, the underpinning source code. For many companies, this type of information is proprietary and often considered a trade secret: it’s their ‘secret sauce’. Therefore, unless the procurement concerns open-source software, technical transparency will only be demanded in the case of an audit or if needed for explainability (see below).
Procedural transparency provides information about the purpose of the algorithmic system, the process followed in its development and application, and the data used in that context; for instance, what measures were taken to mitigate data biases. Procedural transparency provides a government with information that enables it to objectively establish the quality and risks of the algorithms used and perform other controls; to provide explainability (see below); and to inform the general public about algorithmic usage and the manifold ways in which it affects society. Procedural transparency is mandatory in every procurement.
Explainability means that a government should be able to explain to individual citizens how an algorithm arrives at a certain decision or outcome that affects that citizen. The information provided should offer the citizen the opportunity to object to the decision and, if necessary, pursue legal proceedings. This should in any event include a clear indication of the leading factors (including data inputs) that have led the algorithmic system to this particular result, and the changes to the input that must be made in order to arrive at a different conclusion. Providing this information becomes mandatory for any relevant product or service procured by the city under the new rules.
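The requirement of "leading factors plus the input changes needed for a different outcome" corresponds to what the research literature calls feature attributions and counterfactual explanations. The following is a purely illustrative sketch of that idea; the toy linear model, the feature names and the threshold are all hypothetical and have nothing to do with Amsterdam's actual systems:

```python
# Illustrative only: a toy linear decision model with hypothetical
# features, used to show "leading factors" and a counterfactual.
WEIGHTS = {"income": -0.4, "debt": 0.9, "missed_payments": 1.5}
THRESHOLD = 2.0  # scores above this trigger a rejection

def score(applicant):
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant):
    """Return each feature's contribution to the score, largest first."""
    contribs = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contribs.items(), key=lambda kv: -abs(kv[1]))

def counterfactual(applicant, feature, decision_threshold=THRESHOLD):
    """How much must `feature` change for the decision to flip?"""
    gap = score(applicant) - decision_threshold
    return -gap / WEIGHTS[feature]  # required change in that feature

applicant = {"income": 3.0, "debt": 2.0, "missed_payments": 1.0}
print(score(applicant))      # just above the threshold -> rejected
print(explain(applicant))    # debt and missed_payments dominate
print(counterfactual(applicant, "debt"))  # small debt reduction flips it
```

In a real system the model is rarely this transparent, which is exactly why the conditions make the supplier, not the city, responsible for delivering this kind of explanation.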
The procurement conditions and their explanatory guide give a detailed account of the situations in which each of these types of transparency applies.
Towards a European standard for public procurement of AI and algorithmic systems
The ambition for this project has always been to show that it is possible to operationalize general guidelines for AI ethics and to encourage others to do so as well. That’s why we hope these conditions will become the inspiration for a European standard for public procurement of AI and algorithmic systems. We took some steps towards that ambition already:
From February 2020 to June 2020, the European Commission held a public consultation on their AI white paper. The City of Amsterdam and Nesta, together with the Mozilla Foundation, AI Now Institute and the City of Helsinki, published a position paper as a response to that consultation, asking the EC to facilitate the development of common European standards and requirements for the public procurement of algorithmic systems.
Within the Netherlands, the conditions are now being implemented by several municipalities, regional governments and government agencies, collecting feedback from suppliers and working towards a version 2.0.
On June 25th, DG GROW hosts a webinar titled Public Procurement of AI: building trust for citizens and business. At this webinar we will launch a more generalized version of the procurement conditions, which can be easily adapted to fit your organization. In the meantime, click the links to download the procurement conditions and their explanatory guide as PDFs, to use within your organization right now.
Can’t wait? Want to help? Or just want to stay informed? Please let us know through this form how you want to be involved!
The NGI Policy-in-Practice Fund – announcing the grantees
We are very excited to announce the four projects receiving funding from the Next Generation Internet Policy-in-Practice Fund.
Policymakers and public institutions have more levers at their disposal to spur innovation in the internet space than is often thought, and can play a powerful role in shaping new markets for ethical tools. We particularly believe that local experimentation and ecosystem building are vital if we want alternative models for the internet to become tangible and gain traction. But finding the funding and space to undertake this type of trial is not always easy – especially if outcomes are uncertain. Through the NGI Policy-in-Practice fund, our aim has been not only to provide organisations with the means to undertake a number of these trials, but also to make the case for local trials more generally.
Over the past summer and autumn, we ran a highly competitive application process, ultimately selecting four ambitious initiatives that embody the vision behind the NGI Policy-in-Practice fund. Each of the projects will receive funding of up to €25,000 to test out their idea on a local level and generate important insights that could help us build a more trustworthy, inclusive and democratic future internet.
In conjunction with this announcement, we have released an interview with each of our grantees, explaining their projects and the important issues they are seeking to address in more detail. You can also find a short summary of each project below. Make sure you register for our newsletter to stay up to date on the progress of each of our grantees, and our other work on the future of the internet.
Interoperability to challenge Big Tech power
This project is run by a partnership of three organisations: Commons Network and Open Future, based in Amsterdam, Berlin and Warsaw.
This project explores whether the principle of interoperability, the idea that services should be able to work together, and data portability, which would allow users to carry their data with them to new services, can help decentralise power in the digital economy. Currently, we as users are often locked into a small number of large platforms. Smaller alternative solutions, particularly those that want to maximise public good rather than optimise for profit, find it hard to compete in this winner-takes-all economy. Can we use interoperability strategically and leverage the clout of trusted institutions, such as public broadcasters and civil society, to create an ecosystem of fully interoperable and responsible innovation in Europe and beyond?
Through a series of co-creation workshops, the project will explore how this idea could work in practice, and the role trusted public institutions can play in bringing it to fruition.
During the pandemic, when homeschooling and remote working became the norm overnight, bridging the digital divide has become more important than ever. This project is investigating how we can make it easier for public bodies, and also the private sector, to donate old digital devices, such as laptops and smartphones, to low-income families currently unable to access the internet.
By extending the lifetime of a device in this way, we also reduce the environmental footprint of our internet use. Laptops and phones often end up being recycled or, worse, binned long before their useful lifespan is over, putting further strain on the system. Donating devices could be a simple but effective mechanism for keeping devices in the circular economy for longer.
The project sets out to do two things: first, it wants to try out this mechanism on a local level and measure its impact through tracking the refurbished devices over time. Second, it wants to make it easier to replicate this model in other places, by creating legal templates that can be inserted in public and private procurement procedures, making it easier for device purchasers to participate in this kind of scheme. The partnership also seeks to solidify the network of refurbishers and recyclers across Europe. The lessons learned from this project can serve as an incredibly useful example for other cities, regions and countries to follow.
Many of the digital services we use today, from our favourite news outlet to social media networks, rely on maximising “engagement” as a profit model. A successful service or piece of content is one that generates many clicks, drives further traffic, or generates new paying users. But what if we optimised for human well-being and values instead?
This project, led by the BBC, seeks to try out a more human-centric approach to measuring audience engagement by putting human values at its core. It will do so by putting into practice longer-standing research on mapping the kinds of values and needs users care about most, and by developing new design frameworks that make it easier to actually track these kinds of alternative metrics in a transparent way.
The project will run a number of design workshops and share its findings through a dedicated website and other outlets to involve the wider community. The learnings and design methodology that will emerge from this work will not just be trialled within the contexts of the project partners, but will also be easily replicable by others interested in taking a more value-led approach.
In a data economy that is growing ever more complex, giving meaningful consent about what happens to our personal data remains one of the biggest unsolved puzzles. But new online identity models have proven to be a potentially very promising solution, empowering users to share only the information they want to share with third parties, and to share that data on their own terms. One way to let such a new approach to identity and data sharing scale would be to bring in government and other trusted institutions to build their own services using these principles. That is exactly what this project seeks to do.
The project has already laid out all the building blocks of their Data Trust Infrastructure but wants to take it one step further by actually putting this new framework into practice. The project brings together a consortium of Dutch institutional partners to experiment with one first use case, namely the sharing of vital personal data with emergency services in the case of, for example, a fire. The project will not just generate learnings about this specific trial, but will also contribute to the further finetuning of the design of the wider Data Trust Infrastructure, scope further use cases (of which there are many!), and bring on board more interested parties.
Sander Veenhof, augmented reality artist, and Leonieke Verhoog, Program Manager at PublicSpaces, took their session attendees on a trip to the future. They did this ‘wearing’ the interactive face filters they created for their speculative fiction and research project, the ‘Futurotheque’. The AR effects transformed them into citizens from the years 2021 right up to 2030, wearing the technical equipment we can expect to be wearing during those years. But beyond the hardware, the filters were primarily intended to visualise the way we’ll experience the world in the near future: through the head-up display (HUD) of our augmented reality wearables.
As users, we tend to think of the future of AR as more of the same in a hands-free way, but this session aimed to look beyond the well-known use-cases for these devices. Of course, they will provide us with all our information and entertainment needs and they can guide us wherever we are. But will that be our navigation through the physical world, or will these devices try to guide us through life? In what way will cloud intelligence enhance us, making use of the built-in camera that monitors our activities 24/7? What agency do we want to keep? And in what way should citizens be supported with handling these new devices, and the new dilemmas arising from their use?
These are abstract issues, but the face-filter visualisations applied to Sander and Leonieke helped to show the day-to-day impact of these technological developments on us as individuals, and sparked an interesting discussion with the session participants. After a dazzling peek into the next decade, the conclusion was that there is a lot to think about before these devices become part of our society. Fortunately, that is not the case yet: we still have time to work out how to integrate these devices into our society beforehand, instead of doing so afterwards.
Workshop report: People, not experiments – why cities must end biometric surveillance
We debated the use of facial recognition in cities with the policymakers and law enforcement officials who actually use it.
by Louis Stupple-Harris
The NGI Policy Summit hosted a series of policy-in-practice workshops, and below is a report of the session held by European Digital Rights (EDRi), which was originally published on the EDRi website.
We debated the use of facial recognition in cities with the policymakers and law enforcement officials who actually use it. The discussion got to the heart of EDRi’s warnings that biometric surveillance puts limits on everyone’s rights and freedoms, amplifies discrimination, and treats all of us as experimental test subjects. This techno-driven democratic vacuum must be stopped.
From seriously flawed live trials of facial recognition by London’s Metropolitan police force, to unlawful biometric surveillance in French schools, to secretive roll-outs of facial recognition that have been used against protesters in Serbia: creepy mass surveillance by governments and private companies, using people’s sensitive face and body data, is on the rise across Europe. Yet according to a 2020 survey by the EU’s Fundamental Rights Agency, 80% of Europeans are against sharing their face data with authorities.
On 28 September, EDRi participated in a debate at the NGI Policy Summit on “Biometrics and facial recognition in cities” alongside policymakers and police officers who have authorised the use of the tech in their cities. EDRi explained that public facial recognition, and similar systems which use other parts of our bodies like our eyes or the way we walk, are so intrusive as to be inherently disproportionate under European human rights law. The ensuing discussion revealed many of the reasons why public biometric surveillance poses such a threat to our societies:
• Cities are not adequately considering risks of discrimination: according to research by WebRoots Democracy, black, brown and Muslim communities in the UK are disproportionately over-policed. With the introduction of facial recognition in multiple UK cities, minoritised communities are now having their biometric data surveilled at much higher rates. In one example from the research, the London Metropolitan Police failed to carry out an equality impact assessment before using facial recognition at the Notting Hill carnival – an event which famously celebrates black and Afro-Caribbean culture – despite knowing the sensitivity of the tech and the foreseeable risks of discrimination. The research also showed that whilst marginalised communities are the most likely to have police tech deployed against them, they are also the ones least consulted about it.
• Legal checks and safeguards are being ignored: according to the Chief Technology Officer (CTO) of London, the London Metropolitan Police has been on “a journey” of learning, and understands that some of its past deployments of facial recognition did not have proper safeguards. Yet under data protection law, authorities must conduct an analysis of fundamental rights impacts before they deploy a technology. And it’s not just London that has treated fundamental rights safeguards as an afterthought when deploying biometric surveillance. Courts and data protection authorities have had to step in to stop unlawful deployments of biometric surveillance in Sweden, Poland, France, and Wales (UK) due to a lack of checks and safeguards.
• Failure to put fundamental rights first: the London CTO and the Dutch police explained that facial recognition in cities is necessary for catching serious criminals and keeping people safe. In London, the police have focused on ethics, transparency and “user voice”. In Amsterdam, the police have focused on “supporting the safety of people and the security of their goods” and have justified the use of facial recognition by the fact that it is already prevalent in society. Crime prevention and public safety are legitimate public policy goals: but the level of the threat to everyone’s fundamental rights posed by biometric mass surveillance in public spaces means that vague and general justifications are just not sufficient. Having fundamental rights means that those rights cannot be reduced unless there is a really strong justification for doing so.
• The public are being treated as experimental test subjects: across these examples, it is clear that members of the public are being used as subjects in high-stakes experiments which can have real-life impacts on their freedom, access to public services, and sense of security. Police forces and authorities are using biometric systems as a way to learn and to develop their capabilities. In doing so, they are not only failing their human rights obligations, but are also violating people’s dignity by treating them as learning opportunities rather than as individual humans deserving of respect and dignity.
The debate highlighted the worrying patterns of a lack of transparency and consideration for fundamental rights in current deployments of facial recognition, and other public biometric surveillance, happening all across Europe. The European Commission has recently started to consider how technology can reinforce structural racism, and to think about whether biometric mass surveillance is compatible with democratic societies. But at the same time, it is bankrolling projects like the horrifyingly dystopian iBorderCTRL. EDRi’s position is clear: if we care about fundamental rights, our only option is to stop the regulatory whack-a-mole and permanently ban biometric mass surveillance.
How collective intelligence can help tackle major challenges…
...and build a better internet along the way!
by Aleks Berditchevskaia, Markus Droemann
It’s hard to imagine what our social response to a public health challenge at the scale of COVID-19 would have looked like just ten or fifteen years ago – in a world without sophisticated tools for remote working, diversified digital economies, and social networking opportunities.
The common enabler of all these activities is the internet. Recent years have seen innovation across all of its layers – from infrastructure to data rights – resulting in an unprecedented capacity for people to work together, share skills and pool information to understand how the world around them is changing and respond to challenges. This enhanced capacity is known as collective intelligence (CI).
The internet certainly needs fixing – from the polarising effect of social media on political discourse to the internet’s perpetual concentration of wealth and power and its poorly understood impact on the environment. But turning to the future, it’s equally clear that there is great promise in the ability of emerging technologies, new governance models and infrastructure protocols to enable entirely new forms of collective intelligence that can help us solve complex problems and change our lives for the better.
Based on examples from Nesta’s recent report, The Future of Minds & Machines, this blog shows how an internet based on five core values can serve to combine distributed human and machine intelligence in new ways and help Europe become more than the sum of its parts.
Resilience is a core value for the future internet. It means secure infrastructure and the right balance between centralisation and decentralisation. But it also means that connected technologies should enable us to better respond to external challenges. Online community networks that can be tapped into and mobilised quickly are already an important part of the 21st century humanitarian response.
Both Amnesty International and Humanitarian OpenStreetMap have global communities of volunteers, numbering in the thousands, who participate in distributed micromapping efforts to trace features like buildings and roads on satellite images. These online microtasking platforms help charities and aid agencies understand how conflicts and environmental disasters affect different regions around the world, enabling them to make more informed decisions about distribution of resources and support.
More recently, these platforms have started to incorporate elements of artificial intelligence to support the efforts of volunteers. One such initiative, MapWithAI, helps digital humanitarians to prioritise where to apply their skills to make mapping more efficient overall.
The internet also enables and sustains distinct communities of practice, like these groups of humanitarian volunteers, allowing individuals with similar interests to find each other. This social and digital infrastructure may prove invaluable in times of crises, when there is a need to tap into a diversity of skills and ideas to meet unexpected challenges.
One example of collective intelligence improving inclusiveness – while also taking an inclusive-by-design approach – is Mozilla’s Common Voice project, which uses an accessible online platform to crowdsource the world’s largest open dataset of diverse voice recordings, spanning different languages, demographic backgrounds and accents.
Ensuring diversity of contributions is not easy. It requires a deliberate effort to involve individuals with rare knowledge, such as members of indigenous cultures or speakers of unusual dialects. But a future internet built around an inclusive innovation ecosystem, products that are inclusive-by-design, and fundamental rights for the individual – rather than a closed system built around surveillance and exploitation – will make it easier for projects like Common Voice to become the norm.
The future internet should have the ambition to protect democratic institutions and give political agency to all – but it should also itself be an expression of democratic values. That means designing for more meaningful bottom-up engagement of citizens, addressing asymmetric power relationships in the digital economy and creating spaces for different voices to be heard.
Both national and local governments worldwide are starting to appreciate the opportunities that the internet and collective intelligence offer in terms of helping them to better understand the views of their citizens. Parliaments from Brazil to Taiwan are inviting citizens to contribute to the legislative process, while cities like Brussels and Paris are asking their residents to help prioritise spending through participatory budgeting. The EU is also preparing a Conference on the Future of Europe to engage citizens at scale in thinking about the future of the bloc, an effort that could be enhanced and facilitated through CI-based approaches like participatory futures. These types of activities can help engage a greater variety of individuals in political decision-making and redefine the relationships between politicians and the constituents they serve.
Unfortunately, some citizen engagement initiatives are still driven by tech-solutionism without a clear need, rather than the careful design of participation processes that make the most of the collective contributions of citizens. Even when digital democracy projects start out with the best intentions, politicians can struggle to make sense of this new source of insight, which risks valuable ideas being overlooked and trust in democratic processes being diminished.
There are signs that this is changing. For example, the collective intelligence platform Citizen Lab is trying to optimise the channels of communication and interpretation between citizens and politicians. It has started to apply natural language processing algorithms to help organise and identify themes in the ideas that citizens contribute through its platform, helping public servants to make better use of them. Citizen Lab is used by city administrations in more than 20 countries across Europe and offers a glimpse of how Europe can set an example of democratic collective intelligence enabled by the infrastructure of the internet.
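The core idea behind this kind of theme detection can be sketched in a few lines. The snippet below is a deliberately simplified illustration, not Citizen Lab's actual pipeline: it weights words by TF-IDF and greedily groups contributions whose vocabulary overlaps, so officials can review clusters of related ideas instead of thousands of individual posts. The stopword list, threshold and example ideas are all invented for the demo; real systems use far more sophisticated language models.

```python
import math
from collections import Counter

STOPWORDS = {"the", "a", "in", "more", "for", "our", "we", "need"}

def tokens(text):
    return [w for w in text.lower().split() if w not in STOPWORDS]

def tfidf_vectors(docs):
    """One sparse TF-IDF vector (dict) per document."""
    docs_tokens = [tokens(d) for d in docs]
    df = Counter(w for t in docs_tokens for w in set(t))
    n = len(docs)
    vecs = []
    for t in docs_tokens:
        tf = Counter(t)
        vecs.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return vecs

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_similar(docs, threshold=0.1):
    """Greedy single pass: attach each doc to the first group whose
    seed document is similar enough, otherwise start a new group."""
    vecs = tfidf_vectors(docs)
    groups = []  # list of (seed index, member indices)
    for i, v in enumerate(vecs):
        for seed, members in groups:
            if cosine(vecs[seed], v) >= threshold:
                members.append(i)
                break
        else:
            groups.append((i, [i]))
    return [members for _, members in groups]

ideas = [
    "more bike lanes on main street",
    "bike lanes are unsafe near the school",
    "plant trees in the city park",
    "the park needs new trees and benches",
]
print(group_similar(ideas))  # → [[0, 1], [2, 3]]
```

The greedy pass keeps the sketch short; production systems would typically use proper clustering or topic models over learned embeddings instead.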
A closely related challenge for the internet today is the continued erosion of trust – trust in the veracity of information, trust between citizens online, and trust in public institutions. The internet of the future will have to find ways of dealing with challenges like digital identities and the safety of our everyday online interactions. But perhaps most importantly, the internet must be able to tackle the problems of information overload and misinformation through systems that optimise for fact-based and balanced exchanges, rather than outrage and division.
We have seen some of the dangers of fake news manifest as part of the response to COVID-19. At a time when receiving accurate public health messaging and government communications are a matter of life and death, the cacophony of information on the internet can make it hard for individuals to distinguish the signal from the noise.
Undoubtedly, part of the solution to effectively navigating this new infosphere will require new forms of public-private partnership. By working with media and technology giants like Facebook and Twitter, governments and health agencies worldwide have started to curb some of the negative effects of misinformation in the wake of the coronavirus pandemic. But the commitment to a trustworthy internet is a long-term investment. It will not only rely on the actions of policymakers and industry to develop recognisable trustmarks, but also on a more literate citizenry that is better able to spot suspicious material and flag concerns.
Many existing fact-checking projects already use crowdsourcing at different stages of the verification process. For example, the company Factmata is developing technology that will draw on specialist communities of more than 2,000 trained experts to help assess the trustworthiness of online content. However, crowdsourced solutions can be vulnerable to bias, polarisation and gaming, and will need to be complemented by other sources of intelligence, such as expert validation or entirely new AI tools that can help mitigate the effects of social bias.
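One simple way to combine crowd judgements with expert validation is weighted voting: each assessor's verdict counts in proportion to their estimated reliability, and expert verdicts carry a fixed higher weight. The sketch below is a hypothetical illustration of that design choice only; the weights, the override rule and the assessor names are invented, and this is not a description of Factmata's actual method.

```python
from collections import defaultdict

def weighted_verdict(votes, reliability, expert_votes=None):
    """votes: {assessor: 'true' | 'false'}; reliability: {assessor: 0..1}.
    Expert votes, when present, count with a fixed high weight."""
    tally = defaultdict(float)
    for assessor, verdict in votes.items():
        # Unknown assessors get a neutral default reliability of 0.5.
        tally[verdict] += reliability.get(assessor, 0.5)
    for verdict in (expert_votes or {}).values():
        tally[verdict] += 3.0  # hypothetical expert weight
    return max(tally, key=tally.get)

crowd = {"a": "true", "b": "true", "c": "false"}
weights = {"a": 0.6, "b": 0.4, "c": 0.9}
print(weighted_verdict(crowd, weights))                       # 'true'
print(weighted_verdict(crowd, weights, {"expert1": "false"}))  # 'false'
```

Even this toy version shows the trade-off the paragraph describes: reliability weights mitigate gaming by low-quality accounts, but someone still has to decide how those weights, and the expert override, are set.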
Undoubtedly, some of our biggest challenges are yet to come. But the internet holds untapped potential for us to build awareness for the interdependency of our social and natural environments. We need to champion models that put the digital economy at the service of creating a more sustainable planet and combating climate change, while also remaining conscious of the environmental footprint these systems have in their own right.
Citizen science is a distinct family of collective intelligence methods in which volunteers collect data, make observations or perform analyses that help to advance scientific knowledge. Citizen science projects have proliferated over the last 20 years, in large part due to the internet. For example, the most popular online citizen science platform, Zooniverse, hosts over 50 different scientific projects and has attracted over 1 million contributors.
A large proportion of citizen science projects focus on the environment and ecology, helping to engage members of the public outside of traditional academia with issues such as biodiversity, air quality and pollution of waterways. iNaturalist is an online social network that brings together nature lovers to keep track of different species of plants and animals worldwide. The platform supports learning within a passionate community and creates a unique open data source that can be used by scientists and conservation agencies.
Building the Next Generation Internet – with and for collective intelligence
To enable next-generation collective intelligence, Europe needs to look beyond ‘just AI’ and invest in increasingly smarter ways of connecting people, information and skills, and facilitating interactions on digital platforms. The continued proliferation of data infrastructures, public and private sector data sharing and the emergence of the Internet of Things will play an equally important part in enhancing and scaling up collective human intelligence. Yet, for this technological progress to have a transformative and positive impact on society, it will have to be put in the service of furthering fundamental values. Collective intelligence has the opportunity to be both a key driver and beneficiary of a more inclusive, resilient, democratic, sustainable and trustworthy internet.
At this moment of global deceleration, we suggest it is time to take stock of old trajectories for the internet to set out on a new course, one that allows us to make the most of the diverse collective intelligence that we have within society to become better at solving complex problems. The decisions we make today will help us to shape the society of the future.
Aleks is a Senior Researcher and Project Manager for Nesta’s Centre for Collective Intelligence Design (CCID). The CCID conducts research and develops resources to help innovators understand how they can harness collective intelligence to solve problems. Our latest report, The Future of Minds & Machines mapped the various ways that AI is helping to enhance and scale the problem solving abilities of groups. It is available for download on the Nesta website, where you can also explore 20 case studies of AI & CI in practice.
New Horizons in Search – workshop blog
On November 13th, the NGI Forward project (the NGI initiative’s Policy Lab) held an expert workshop on the topic of search and discovery in the Atelier de Tanneurs in Brussels. This workshop brought together over 30 invited experts from across Europe to reflect on the future of internet search, and help shape the European Commission’s […]
by Katja Bego
On November 13th, the NGI Forward project (the NGI initiative’s Policy Lab) held an expert workshop on the topic of search and discovery in the Atelier de Tanneurs in Brussels.
This workshop brought together over 30 invited experts from across Europe to reflect on the future of internet search, and help shape the European Commission’s funding and policy agenda in this important area.
This blog discusses some of the main takeaways of the day; a longer report, informed by all the great insights we gathered, will follow soon. If you are interested in being involved in these conversations, do get in touch with the NGI Forward project or sign up to stay informed here.
Search and discovery?
The way in which we order, discover and retrieve information online is one of the, if not the, key building blocks of the internet. It is therefore no surprise that many of today’s technology debates prominently feature aspects of search and discovery: from fairness in automated decision-making and recommendation algorithms, to the sustainability of the internet; from the impact of online disinformation on our democracies to centralisation in the digital economy.
But it is not just in the present that these topics are so important. New technological developments in, for example, artificial intelligence and the IoT space, as well as rising hyper-connectivity blurring the boundaries between offline and online, might dramatically change how we think about search in the years ahead. This workshop was an opportunity to surface some of these emerging dynamics and opportunities.
Search and discovery is a key topic on the agenda of the Next Generation Internet, the European Commission’s ambitious flagship programme aimed at building a more inclusive, democratic and resilient future internet. The purpose of the workshop was to bring together experts working on different aspects of search, across disciplines and industries, to reflect on the current state of the field, and recommend ways in which the NGI can help strengthen existing ecosystems.
Throughout the day, we focused on answering three key questions:
What are today’s main challenges and opportunities in the space of search: what does the current landscape look like?
What might search and discovery look like in 5 to 10 years? How are emerging dynamics reshaping this space?
What can we, the search community, do to help? What are the mechanisms to strengthen the ecosystem?
Biggest challenges and opportunities in search today
At the start of the workshop, we asked all participants to share what they thought were the biggest challenges and opportunities driving development in search and discovery. From this exercise, we collected a lot of varied and in-depth insights into the current state of the space.
As a group, we distilled the discussion into five overarching umbrella topics:
Centralisation of power: Many elements of search are dominated by just a handful of players. How do we find the business models, and seize new opportunities around decentralisation for example, that might help level the playing field in this key sector of the digital economy?
Participants emphasised we should not just look at centralisation when it comes to access to data (and respective size of user bases), but take a full-stack approach, where we look at how power can be better distributed across layers of the internet.
Sustainability and resilience: One key concern several participants surfaced was the environmental impact of search, and of data storage more broadly, on the planet. The budding field of green search tries to address the high energy intensity that comes with search: from minimising the computing power required to run search queries to limiting the storage and duplication of data. Developments across the search space should be studied with a sustainability lens in mind: emerging opportunities in, for example, object search and IoT might help make some processes smarter and more efficient, but are also likely to add new strains on the system.
One interesting insight that emerged from our discussions on sustainability was the need to be careful when we think about making processes more decentralised in order to reduce their energy intensity. Greener or more distributed alternatives are often good in essence, but do not always scale as well as existing systems, sometimes inadvertently increasing inefficiencies rather than reducing them. Careful cost-benefit analyses are necessary before we lock ourselves into new systems.
Creating a user base for alternatives: Though there are many alternative solutions out there, few manage to compete with the large actors dominating this space. Part of that is a function of economics, but our participants also pointed out that smaller (open source) tools often do not do a particularly good job when it comes to user experience and usability. Addressing these challenges will require a myriad of different solutions, which will be discussed in greater depth in our final report.
What all participants agreed on, however, is that there is an important role to play for the European Commission: both in levelling the playing field by setting fair rules, and in procuring and funding alternatives, enabling them to reach a larger market and find pathways to financial sustainability.
Data quality and access: trust, bias & fairness: How do we ensure search and recommendation systems base their decisions on high-quality and representative data, and do not perpetuate existing inequalities? The black-box algorithms underpinning these systems are often hard to understand and near-impossible to challenge, which can lead to the unfair targeting of certain groups or, probably more pertinent to the field of search, steer our behaviour in directions not of our own choosing or otherwise bias outcomes (e.g. women being shown job ads for less well-paid positions than their male counterparts).
More investment in research and tools that can help us better understand or respond to these biases and inequalities is much needed (though we must not let currently hot debates about ethics in data and automated decision-making overshadow other persistent issues in the space).
Multilingualism: The dominance of English and other major world languages on the internet means that we lose out on a lot of richness of content (which is not taken into account in search queries, for example), and exclude large groups from benefiting fully from the digital economy. The European Union, home to an incredible linguistic diversity within its borders, can play an important frontrunner role in developing a more multilingual internet.
What might the future hold?
What might the field of search and discovery look like in five to ten years? How are emerging developments in the search space, such as new possibilities in cognitive and object search, and broader social dynamics reshaping the field?
What kind of real societal problems could these new possibilities help solve? Can we, for example, make aspects of search greener or help level the playing field in the digital economy? And what new challenges might they instead surface?
Participants agreed that if Europe wants to expand its role in the field of search, it needs to address some of the key challenges we face today, but also seize emerging opportunities and technological advances in the space. Our participants pointed out that many existing dynamics will only become more entrenched as the field of search expands beyond the current confines of “the internet”. Economic, social and political challenges will worsen if we don’t address them now.
There are, however, also many exciting opportunities and growth fields that will likely transform the field in the years and decades ahead. Think of green search; the emerging opportunities for objects to communicate with, and be “found” by, one another (object search); and ways of making search more serendipitous and better able to recommend things we might not yet know about but that fit our patterns of interest (rather than linear recommendations).
It’s also important not to treat the field of search as a vacuum: emerging dynamics in other technology fields will interlink with and expand the possibilities we currently have at our disposal. Think, for example, of the previously mentioned IoT space, but also of advances in 5G, which will make continuous, real-time connectivity, discoverability and communication possible, or of AI and machine learning.
Where do we go next?
After a hugely insightful workshop, we are not done with this work. Our upcoming report will summarise the key insights the group surfaced during the day, and will also make a set of high-level recommendations for what the European Commission should do to help strengthen the search and discovery ecosystem. But we also want to hear from the wider community about what topics we might not have covered, and further deepen our understanding of emerging issues in this space in the coming months.
If you want to take part in these conversations, do join the conversation on our NGI Exchange Platform.