
Exploring an NGI Trustmark


Trustmarks are a well-established mechanism which help consumers make more informed decisions about the goods and services they buy. We all know the fairtrade stamp on our bananas, trust environmental certifications, and value Better Business Bureau stickers. Where we haven’t seen the trustmark used much yet, or at least not very effectively, is within the space of responsible technology and software. 

After a series of highly public scandals that have called into question the trustworthiness of the technology and tools we rely on (from privacy violations and data misuse to large data breaches), there is rising demand among the general public for ethical, responsible alternatives. It is, however, not always easy for consumers to find these alternatives, partly due to a lack of easy-to-understand and easy-to-find information (among a deluge of apps, how do we know which ones handle our data most carefully, for example?), but also because the marketplace for these types of tools lacks maturity to begin with (few have been able to gain real traction).

Trustmarks could help solve these issues. A stamp of quality for products that, for example, follow high security standards, do not track and sell the data of their users or use ethical production processes, could make it easier for consumers to pick out these tools in a crowded marketplace, and simultaneously raise awareness about how some of these values are not embodied by many of today’s most popular tools. Furthermore, a trustmark could support the creation of an ecosystem and market around ethical tools, which can struggle as being “responsible” often means compromising on user friendliness, effective marketing and above all profitability. 

Exploring the Trustmark idea in the digital space

On September 25, 2019, NGI Forward held a short workshop on trustmarks as part of the NGI Forum, the Next Generation Internet's flagship community event. This document outlines the key messages and takeaways from this workshop.

In this small workshop we brought 16 participants together to explore trustmarks in more depth, and examine their potential value and how they could be practically employed. Before trustmarks can be put to the test, there are a lot of open questions left to be answered. In this workshop, we surfaced many of the key issues that still need to be resolved and different potential solutions. 

Many of the participants in the workshop reported already being involved in the development of some sort of digital trustmark. A number of trustmark-type initiatives are emerging in areas such as the responsible use of data, the Internet of Things (IoT) and cybersecurity: for example, the Trustable Technology Mark (https://trustabletech.org/) developed for IoT devices, or Sitra's work on the concept of a 'Fair data label' to inform consumers about services' compliance with basic principles and standards of data protection and reuse. Many of these initiatives are asking the same kinds of questions the workshop set out to explore: how could a trustmark for internet-related products or services provide value, what factors make a trustmark a success, and which areas should a trustmark cover? Many of these projects have already faced some key challenges, which are explored below.

How could a trustmark be useful? 

The main benefit of the trustmark model is the opportunity to empower consumers to make informed decisions about the products or services they are using, while also helping companies to prove their products or services are 'trustworthy'. It is clear that consumers increasingly have trust issues around the digital products and services that they use, whether those be privacy concerns or potential harms emerging from automated algorithm-based decision making (such as targeted ads or curated social media news streams). Trustmarks may also be able to add additional value, not just for consumers but also for companies and the EU's drive to make the next generation internet (NGI) more 'human-centric'.

Trustmarks could help create a market for responsibly created, trustworthy products. This could help encourage the creation of more products and services that compete with existing business models that are largely based on data exploitation and monetisation, and offer a ‘responsible’ alternative. Trustmarks could also help further raise awareness among consumers of the many issues digital products and services can create.  At the same time a new market for responsible, trustworthy products, services and business models may help embed ‘human-centric’ values into the next generation of innovations. Introducing greater transparency around products, services or business models is one of the central ways trustmarks could help facilitate this change. Trustmarks could also improve trust in the digital economy, a critical step in making the most of the digital economy and providing improved private and public services.  

Challenges:

Scope

Successful existing trustmarks cover a wide range of things, from adherence to health and safety standards to ethical business practices. They often focus on one area rather than covering every element that may benefit from indicating 'trustworthiness'. A narrower focus can help with consumer engagement, as it is easier to convey a single idea than several different metrics outlining many different aspects of what a 'good' product is. However, too narrow a focus may not cover all necessary issues, thereby giving consumers a false impression of the trustworthiness of the overall solution. This difficult balancing act around getting a trustmark's scope and remit right is particularly challenging for digital and internet products, as the issues we have seen emerge around them are so multifaceted: data collection and use, cybersecurity, accessibility, the physical elements of a product, hardware, software and so on. Could a useful, comprehensive NGI trustmark be created that covers everything from a social media picture app and an IoT sensor to AI algorithms?

To identify some of the important areas an NGI trustmark could cover, workshop participants focused on individual high-level issues, such as sustainability or responsible data use, rather than attempting to construct a comprehensive trustmark, which the group agreed would be neither particularly useful nor viable to debate in the short time available for the workshop.

However, even these narrower areas raised many open questions and concerns that merit further exploration. Participants found there were differing needs, risks and norms across sectors and verticals, for example retail and health, which meant that standards for "good" would likely differ significantly across solutions and applications.

Metrics and evaluation

For trustmarks to work, we require reliable and easily transferable ways to measure and evaluate how well a product, service or business model meets the relevant requirements. For some areas discussed during the workshop, for example CO2 consumption or energy use as part of sustainability, it would be fairly easy to develop appropriate metrics (particularly as there are already other product trustmarks that do this), but for other, perhaps more subjective, areas like data handling, bias and discrimination, or ethical practice developing such metrics is much more difficult and fuzzy. 

Assessment may also be hampered by two additional factors:

  1. Software is continuously being updated and changed. How can we make sure that after repeated tweaks, products or services still meet the trustmark’s basic requirements? Is it viable for any governance system to oversee such a vast, rapidly changing landscape? 
  2. ‘Black box’ systems, which generally refer to complex AI algorithms in this context, limit the ability to be open and transparent. We may not know what the system is doing or how it achieves the outputs it creates. Alternative metrics may be required in these instances (for example focusing on data handling or data sources), or the trustmark could focus only on explainable systems. 

Another related question about how the trustmark works is whether it defines a set of minimum requirements or identifies 'best practice'. Minimum standards make it easier for more companies or products to acquire a trustmark, but also mean that the solutions championed do not necessarily push the bar for good behaviour. Minimum standards might even reward bad behaviour in some cases, where companies are encouraged to do only the bare minimum.
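The trade-off between minimum requirements and best practice can be pictured as a simple scoring rubric. The sketch below is purely illustrative: the criteria, thresholds and function names are hypothetical assumptions for the sake of the example, not part of any proposed standard.

```python
from dataclasses import dataclass

# Hypothetical criteria and thresholds -- illustrative only, not a real
# trustmark specification.
@dataclass
class Criterion:
    name: str
    minimum: int        # score (0-10) a product must reach to qualify at all
    best_practice: int  # score that would mark out genuinely leading products

CRITERIA = [
    Criterion("data_handling", minimum=5, best_practice=9),
    Criterion("security_audit", minimum=6, best_practice=9),
    Criterion("energy_efficiency", minimum=4, best_practice=8),
]

def evaluate(scores: dict) -> str:
    """Classify a product as failing, meeting the minimum, or best practice."""
    if any(scores.get(c.name, 0) < c.minimum for c in CRITERIA):
        return "not awarded"
    if all(scores.get(c.name, 0) >= c.best_practice for c in CRITERIA):
        return "best practice"
    return "minimum standard"

# A product that clears every floor but not every best-practice bar:
print(evaluate({"data_handling": 7, "security_audit": 6, "energy_efficiency": 5}))
# -> minimum standard
```

Even this toy version makes the tension visible: a mark awarded at the "minimum standard" tier is easy to obtain but signals little ambition, while reserving the mark for the "best practice" tier would exclude most of the market.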

Governance model

How to govern trustmarks is one of the biggest challenges in making them a success. Building trust in a trustmark requires the involvement of well-respected institutions and, as many participants noted, can be very expensive. The auditing and review of solutions, in particular, remains an open question.

The digital landscape is vast: if demand from the private sector for the trustmark increases, this could potentially involve hundreds of thousands of companies. There are many ways of doing assessments, from self-assessment to auditing by an independent auditing body (often the outcome is somewhere in between the two). Participants indicated that the focus should be more on independent assessment, to avoid false self-reporting. However, this creates other challenges in terms of resourcing and capability. Any governing body with assessment responsibilities would need to be resourced appropriately to carry out its functions; in light of the growth of the digital economy and the ongoing auditing needed as software is updated, this may be significant. This raises the question of how the trustmark would be paid for. If it is paid for by the companies that apply, it may put additional barriers in the way of smaller companies, startups or free, open source software.

The governance of the trustmark also needs to be tied to a trusted organisation itself, in order to help strengthen support and credibility of the trustmark. Participants felt that the European Commission was in a strong position to play this kind of role. Participants also indicated that many initiatives have stalled or failed to come to fruition due to a lack of funding or support from a larger independent institution. 

Business models and consumers

A trustmark’s success will be heavily dependent on how effectively it can help disrupt entrenched business models and create a market for alternative, responsible companies. This will be particularly difficult in the data economy, where many different companies have vested interests and lobbyists will play an influential role.

Perhaps most important of all, however, is consumer engagement. If consumers are apathetic about an NGI-related trustmark, then it will never achieve any of the potential goals set out above. Workshop participants did not consider this a big challenge, however, as many polls and public engagement exercises have already demonstrated the public's interest in areas like privacy, data use and sustainability. Trustmarks can be used in several ways: identifying potential impacts on the user or environment, educating consumers, or eliciting a 'feel good' response (as with the fairtrade approach).

Themes

Participants also brought up a variety of other important topics trustmarks could potentially be used for: 

  • Sustainability: The sustainability of the internet itself, software and hardware is becoming a topic of ever greater salience, though public awareness of the large environmental footprint of many connected devices and of internet use remains limited. One possible way of encouraging technology companies to adopt more sustainable practices would be to design a trustmark around these issues (which could cover everything from CO2 emissions from data centres and energy efficiency to the ability to recycle a device).
  • Privacy and data use: Trustmarks could be given out to companies whose tools handle their users’ data in a particularly secure way, allow for data portability, otherwise make valuable datasets available to third parties in a responsible way, or use particularly transparent models for consent, to name just some examples of concrete interventions we could evaluate on in this realm. 
  • Cybersecurity: Cybersecurity, too, is often touted as a potential focus of a trustmark, particularly in the Internet of Things space. Has a solution or device successfully undergone a security audit? How transparent is the company about cyber breaches and underlying vulnerabilities? How securely does it store users' data? Though this is an interesting area, a lack of transparency might make it hard in practice to certify tools.
  • AI ethics: Using trustmarks to formalise AI ethics principles in specific tools often came up as a possible application. Could we give trustmarks to solutions that offer transparency about the inner workings of their algorithms? Or that make serious efforts to reduce bias? Subjectivity and a lack of agreement about what "ethical" means will require intensive efforts to build a coalition around this topic.

When mind meets machine: harnessing collective intelligence for Europe


Collective intelligence (CI) has emerged in the last few years as a new field, prompted by a wave of digital technologies that make it possible for organisations and societies to harness the intelligence of many people, and things, on a huge scale.

It is a rapidly evolving area, encompassing everything from citizen science to open innovation to the potential use of data trusts, and offers enormous new opportunities in fields like sustainability, health and democracy. For Europe, harnessing CI will be critical to achieving its economic and social goals through initiatives like the Green New Deal or Next Generation Internet.

A quick primer

Collective intelligence is created when people work together, often with the help of technology, to mobilise a wider range of information, ideas and insights to address a social challenge.

As an idea, it isn’t new. It’s based on the theory that groups of diverse people are collectively smarter than any single individual on their own. The premise is that intelligence is distributed. Different people hold different pieces of information and different perspectives that, when combined, create a more complete picture of a problem and how to solve it. The intelligence of the crowd can be further augmented by combining these insights with data analytics and Artificial Intelligence (AI). Bringing these two elements together can be extremely powerful, but the field is still emerging and it isn’t always clear how to do it well.

Nesta’s Centre for Collective Intelligence Design is at the forefront of both research and practice in this space, and has recently developed a Collective Intelligence Playbook to support others to harness CI more effectively.

Click here to explore Nesta’s Collective Intelligence Playbook

To explore the opportunities for Europe and the European Commission’s NGI initiative further, Nesta held a workshop as part of the MyData event in Helsinki in September. In this workshop, we introduced participants to the concept of collective intelligence and asked: how can we best combine the intelligence of the crowd and artificial intelligence to solve some of today’s largest societal problems? 

During the workshop, Peter Baeck, Head of the Centre for Collective Intelligence Design at Nesta, and Aleks Berditchevskaia, Senior Researcher on collective intelligence, explained the concept of CI and then took workshop participants through the Collective Intelligence Toolkit developed by Nesta.

Katja Henttonen, project manager in e-democracy for Helsinki, provided a live case study introduction to CI in practice through a demonstration of the Decidim online democracy and participatory budgeting tools currently being trialled in the city. 

Using the collective intelligence toolkit canvas and method prompt cards, the groups were given an hour to work on a number of practical problem statements exploring several challenges in the internet space. The groups explored opportunities around collective intelligence, as well as questions about how CI can be practically used at scale, learning from exciting case studies from around the world.

What did we learn from the workshop?

  • Workshop participants saw huge potential in using CI and the toolkit. In particular, participants were excited to be introduced to new methods such as citizen science or using satellite data for collective intelligence.
  • There is a clear need for better practical guidance, such as Nesta’s playbook, on what CI is and how it can be applied by organisations. Workshop participants suggested this could be done through further development of practical tools and guides for how to design for CI and the creation of open repositories or databases on CI methods and use cases. Within this, participants highlighted the need to make the support for CI as practical as possible and suggested connecting any research, investment and support for CI to specific social challenges, such as climate change, fake news or digital democracy.
  • Of the different tools and methods to enable CI, the workshop highlighted a particular interest in understanding the relationship between human and machine intelligence in enabling different forms of CI and raised three challenges/questions: 
  1. A better understanding is needed of the different functions in the relationship between human and machine intelligence, and of how to design solutions that tap into these benefits while maintaining strong ethical frameworks and giving individuals control over the data they want to contribute to the collective and how it can be used.
  2. Knowledge is needed on how AI-enabled CI can be applied and used within grassroots networks and NGOs to better mobilise volunteers, activists and community groups to identify and solve common challenges.
  3. Funding that explicitly focuses on bringing together the AI community with the CI community is needed to foster new forms of collaboration. 

Is regulation Europe’s competitive advantage?

Europe could lead the way in creating a regulatory regime that stimulates innovation

Recent high profile events, from the Cambridge Analytica data scandal to drone chaos at London’s Gatwick airport, have thrown regulation back into the spotlight. At a political level it’s becoming a hot topic too, as public unease with technologies like artificial intelligence grows while the global race to dominate the ‘fourth industrial revolution’ heats up.

Once this might have meant a race to the bottom and a cutting of ‘red tape’. Instead, we are seeing a radical shift in both regulatory theory and practice, driven by a greater acceptance that innovation needs to deliver public value and, more practically, the need for regulatory clarity around uncertain novel innovations like autonomous vehicles, drones or cryptocurrencies. ‘Anticipatory regulation’ is becoming a source of huge competitive advantage.

This presents an important opportunity for any government, but only a few places are ready to fully exploit it. If Europe can lead the way in creating a regulatory regime that stimulates innovation while protecting and creating value for the public, it will have achieved something neither the US nor China is in a position to do.

Europe’s strongest competitive advantage lies in its size, diversity, and commitment to maintaining basic rights and delivering public value. A host of established regulatory institutions also means it is uniquely placed to capture this opportunity. The General Data Protection Regulation (GDPR), for better or worse, has already shown that part of Europe’s strength lies in its regulatory leadership.

The tricky question is how: What would it take for European governments and the EU to develop an innovation-enabling regulatory system that protects European rights and delivers public value?

Regulation renewed

Regulatory systems have failed to keep pace with technological change (and other innovations). They increasingly face problems they have no way of dealing with. Platform economies, the growing importance of data, and development of cross-cutting technologies like AI have all deeply disrupted our outdated regulatory frameworks.

However, in the last couple of years we have seen an explosion in regulatory innovation. The emergence of new anticipatory regulation practices has started to reshape the role of regulation as a forward-facing, inclusive, proactive and innovation-enabling system. New practices such as the Financial Conduct Authority’s (FCA) sandbox (and the many other fintech versions worldwide) or the development of various testbeds for autonomous vehicles were at the forefront of this change. More recent developments are supporting even more innovative thinking — the UK government’s £10m Regulators’ Pioneers Fund is the first system-wide attempt to promote the testing of new innovation-enabling regulatory approaches.

Anticipatory regulation in practice

There are four key elements of anticipatory regulation:

#1 Proactive

Engaging innovators and innovation early: this is particularly important when problems and opportunities can scale very quickly. The FCA sandbox or the various innovation hubs regulators have set up are a great example of this in practice. Being proactive is not just about engaging innovations earlier, it is also an opportunity to use regulatory action as a way of driving innovation around a particular strategy or public need. Nesta has found challenge prizes a very effective way to do this: identifying the outcomes you want to achieve and supporting the market to develop innovations that deliver that outcome. For example, we are working with the Solicitors Regulatory Authority to stimulate AI-powered innovations that could serve to widen access to justice, and at the same time inform the regulator’s approach to these new technologies.

#2 Inclusion and collaboration

Where new technologies raise ethical issues with sensitive political implications, the public need to be engaged as part of a more diverse set of stakeholders and a more collaborative, co-creative approach (in part to avoid public backlash). Regulators have to leverage the capabilities of businesses, cities and civil society to secure policy goals and build capacity in new areas like data.

#3 Future-facing

Being future-facing is arguably a large part of the success of the Singaporean approach to regulation. While horizon scanning and identifying emerging issues and opportunities is a vital part of this function, it must be paired with other foresight and futures approaches to develop resilient, adaptive strategies that can cope with the inherent uncertainty of fast-changing markets.

#4 Experimental

Lastly, there is a need for decentralised experimentation to facilitate diverse responses to the regulation of early-stage opportunities and risks where national or global policies or standards are still to be established. Autonomous vehicle development in the US is a good example: many states are taking different approaches, with a set of general standards being developed at the federal level.

What Europe can do

Europe is in a strong position and can boast a number of particularly interesting and innovative initiatives (see Austria’s approach to autonomous vehicles or the UK’s new regulation strategy) but other nations are in a strong position to challenge its regulatory leadership. China is emerging as a strong contender, particularly in areas like AI and IoT. Singapore arguably still has the most innovative regulatory system and other places are moving quickly to capitalise on the opportunities, for example, Canada’s emerging Centre for Regulatory Innovation.

Building and embedding these anticipatory approaches into national strategies and at the EU level is not an easy task; it requires a very different mindset and way of working. Some bits of Europe are ahead of the game, but for the whole of Europe to benefit it needs a coordinated anticipatory regulation strategy.

A version of this blog was originally published on the sifted.eu site.
