Post

Building a greener internet

How can Europe play a leading role in building a more sustainable internet?

When we think of emerging technology and sustainability, the images that come to mind usually include futuristic cities, made clean, green and perfectly efficient through the magic of algorithms and digital services.

But this utopian vision of a fully connected future comes at a potentially dystopian environmental cost. Many of our everyday digital activities carry a hidden environmental price. Thirty minutes of video streaming emits between 28 and 57g of carbon dioxide equivalent (CO2e), and bingeing on a 10-hour series could use the same energy as charging a smartphone 145 times. A group video conference on Zoom creates 4.5g of CO2e for each participant in an hour-long call, so a company of fifty employees each participating in two hours of video calls every working day creates as many emissions each year as burning 50kg of coal.
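As a rough back-of-envelope check of that last figure (the number of working days and the coal emission factor below are our own assumptions, not taken from the studies cited):

```python
# Back-of-envelope check of the video-call claim above.
# Assumptions (ours, not from the cited studies): ~250 working days per year,
# ~2.4 kg of CO2 released per kg of coal burned.
EMISSIONS_PER_HOUR_G = 4.5   # g CO2e per participant per hour of video calls
EMPLOYEES = 50
HOURS_PER_DAY = 2
WORKING_DAYS = 250
CO2_PER_KG_COAL = 2.4        # kg CO2 per kg of coal burned (approximate)

annual_emissions_kg = EMPLOYEES * HOURS_PER_DAY * WORKING_DAYS * EMISSIONS_PER_HOUR_G / 1000
coal_equivalent_kg = annual_emissions_kg / CO2_PER_KG_COAL

print(f"{annual_emissions_kg:.0f} kg CO2e per year, roughly {coal_equivalent_kg:.0f} kg of coal")
# -> about 112 kg CO2e per year, in the region of 47 kg of coal
```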

In 2018, the internet used between five and nine per cent of global energy generated – more than global aviation. Ten years from now, it could account for as much as 23 per cent of global greenhouse gas emissions.

Carbon and energy provision aren’t the only issues at play. With the emergence of smart cities and a 5G-enabled Internet of Things, we are adding billions of low-cost new devices and sensors to our lives and built environments, embedding increasingly sophisticated electronics into our roads, buildings and appliances. How can we be sure that the benefits they deliver aren’t outweighed by their own environmental footprint – from the resources and servicing they require to the waste they produce?

Of course, thinking about the link between sustainability and technology isn’t new. The European Union recently set itself the goal of recovering from the pandemic in a way that supports both its green and digital ambitions. For policymakers across the continent, managing this so-called ‘twin transition’ is set to become one of the defining challenges of the next decade. The UK, as an R&D powerhouse with the desire to become a zero-emission economy, should take notice. The twin transition should be embraced as an opportunity, rather than a challenge, to demonstrate our leadership and ability to innovate.

It is only through a more conscious approach to connectivity that we will succeed in reconciling our green and digital aims. We have identified four principles that will help us get a handle on the environmental impact of the digital economy, and could make Europe the global standard-bearer for sustainable and ethical internet technology: 

  1. Because of the universal nature of this challenge and its potential to create significant knock-on effects, we must integrate sustainability thinking into all areas of internet-related policy, from GDPR enforcement to media regulation and competition law. 
  2. We have to improve the design of technologies by setting standards and regulating where necessary to encourage hardware producers and software developers to align their ambitions for sustainability and innovation. 
  3. Consumers need to be informed about the impact of their purchases and empowered to live their digital lives more sustainably and consciously. 
  4. Finally, we should incentivise positive change and create markets for more sustainable alternatives through tools like procurement, investment and taxation. 

To illustrate how these principles can apply to very different contexts, it is useful to take a look at the lifecycle of an internet device, from beginning to end to beginning:

Extracting natural resources

The internet is made up of physical infrastructure, from our smartphones, laptops, wearables and voice assistants to the core networks and cabling that connect our homes. Producing these tools requires a staggering variety of materials, many of which are extracted from the ground in less developed countries under conditions that threaten both local communities and the environment. Smartphones, for example, can contain upwards of 62 different elements, with the materials in each iPhone requiring the mining of more than 34kg of raw ores. Not only do these processes require significant amounts of energy; they often involve the use of poisonous chemicals. We urgently need new and sustainable sources for the most difficult to source materials, and promising research shows we could extract many of these from recycled electronics. We can also reduce the global impact of mining by tightening up socio-environmental regulations and investigating mining opportunities within Europe.

Supply chains and importing

Once the metals, minerals and other materials that form the basis of our internet hardware are extracted, they are processed and shaped into components in several stages by a complex web of companies located across the world. An iPhone, for example, contains parts from over 200 suppliers. These supply chains are notoriously opaque. Manufacturers often don’t know exactly where a part or its materials have originated, nor do they know how sustainable their production processes are. By the time a device reaches a shop or online store, up to 95 per cent of its lifetime greenhouse gas emissions have already been created. We cannot meaningfully tackle the environmental footprint of these devices unless we have common European standards for supply chain transparency and mandatory reporting requirements that bind upstream and downstream companies looking to sell into European markets. 

Marketing and purchasing

As soon as our devices arrive in store, they fly off the shelves at astonishing rates – 200 million smartphones are sold each year in Europe alone. We replace our smartphones roughly every two years, often before they are broken, despite the opportunity to save £100 per year by keeping an old device running. Three quarters of Europeans are willing to spend more on products and services if they are environmentally friendly. We need to educate and empower consumers so they can choose devices that last longer and are easier to repair and upgrade. That starts with giving them clear and visible information about the environmental impact of their devices at the point of purchase. Local governments, public sector organisations and infrastructure providers also spend significant sums on connected devices and internet hardware. If we channel their purchasing power through green procurement rules, sustainability assessments and better guidance, we can make a real difference and create markets for manufacturers that design for sustainability and longevity.

Use and services

All of our clicks, calls and content are sent buzzing through the internet’s physical infrastructure, made up of wireless base stations, cables, switches and servers. The data we send and receive travels through data centres, large warehouses full of servers that need huge amounts of energy to power and cool them. Our data consumption is increasing quickly, and even today these systems are powered in large part by fossil fuels because they are cheaper or more readily available. In 2019, data centres across the globe used around 416 TWh of electricity, about 3 per cent of global supply and nearly 40 per cent more than the consumption of the entire United Kingdom.

According to some estimates, a single email creates around 4g of carbon dioxide. Unaware of the impact of our actions, we send roughly 300 billion emails per day, creating 1.2 million tonnes of emissions every twenty-four hours. That makes spam and marketing emails a genuine form of digital litter. We could make it easier for consumers to switch away from services that still rely on fossil fuels, and nudge tech and data companies into adopting greener energy sources and cutting back on unnecessary data-hoarding, in line with the GDPR’s data minimisation principle. Working with industry, we should develop standards to demonstrate and improve the energy efficiency of websites, software and services. Major platforms could lower the resolution of video content, remove auto-play functions or give users the option to listen to audio without video. Search engines and online stores could do more to identify and promote green results.

Distributed services such as the blockchain also contribute significantly to carbon emissions: a staggering 287kg of CO2 is emitted for each single Bitcoin transaction, equivalent to around 800,000 VISA card transactions. In Iceland, Bitcoin mining is projected to soon use more energy than the country’s residents. We need to get ahead of these technologies so we can contribute to more environmentally friendly designs.

Extending lifetime

Anyone hoping to extend the life of their internet device when it breaks will come up against some serious hurdles. Repair manuals and spare parts are rarely made available to end users, and manufacturers often threaten tinkerers with draconian warranty conditions. This makes repair expensive and pushes us towards buying a new device. The fragility of modern device designs, with their edge-to-edge glass screens, adds to this trend. Our devices should and could last longer. They ought to receive software updates for longer, and be upgradeable. Modular designs such as the Fairphone have shown that this is possible. We can educate consumers about the repairability of their devices at the point of purchase, and ensure the long-term availability and accessibility of repair manuals, tools and parts to make fixing devices a viable alternative again. We also need legislation to give users the Right to Repair their devices and encourage manufacturers to design products that can have their lives extended.

Managing waste

When our internet devices break or become too slow to run the latest apps, we usually replace them. But that’s not the end of the story. The designs of our devices make them incredibly difficult to recycle, with minuscule parts soldered and glued into place. As in the early stages of a device’s lifecycle, we again rely on less developed countries to do our dirty work: 1.3 million tonnes of undocumented goods are exported from the EU each year, and in the UK as much as 80 per cent of electronic waste collected for recycling is shipped to emerging and developing countries. Working in dire conditions, low-paid workers disassemble devices for their valuable components, but many parts are lost, and those that can be reused are subjected to acid and chemical treatments that are prone to leaking into the environment. We need a Europe-wide takeback scheme and financial incentives for manufacturers to make devices easier to recycle when they are no longer repairable.

Today we launch a new report that explores the many effects the internet has on the environment and sets out a series of recommendations that policymakers, businesses and consumers should consider to grasp the opportunities presented by the green and digital twin transition and make Europe a leader in sustainable internet technologies. This report is part of our work leading NGI Forward, the strategy and policy arm of the European Commission’s flagship Next Generation Internet initiative, which seeks to build a more democratic, inclusive, resilient, sustainable and trustworthy internet by 2030. We hope that it sparks a new conversation about a common European approach to the internet that will support the twin green and digital transitions necessary to recover from the pandemic. There is a huge amount that we can do to create positive change in this area, but we must act now.

Post

Minutes: NGI Forward Advisory Board meeting (22/07/20)

NGI Forward's advisory board held its inaugural meeting in late July, discussing the project's priorities and ambitions. To promote transparency, we publish written summaries of our meetings.

NGI Forward’s advisory board held its inaugural meeting on 22 July to discuss the project’s current priorities and future ambitions. The membership of our advisory board represents a broad community of internet experts and practitioners. Going forward, it will meet twice a year to provide the project with support, constructive criticism and guidance. To promote transparency, we publish summaries of our meetings. You can learn more about our board here.

Present: Pablo Aragón, Harry Armstrong, Mara Balestrini, Ger Baron, Katja Bego, Martin Bohle, Markus Droemann, Inger Paus, Katarzyna Śledziewska, Louis Stupple-Harris, Sander van der Waal, Marco Zappalorto

Not present: Ian Forrester (excused), Simon Morrison (excused), Marleen Stikker (excused)

Summary: On 22 July, NGI Forward’s advisory board held a two-hour video conference for its inaugural meeting. The agenda was designed to provide board members with an overview of the project, its place within the NGI ecosystem, its goals and current priorities. In particular, we discussed progress made and future ambitions across a series of activities that broadly fall under NGI Forward’s ecosystem-building objective, especially the delivery of an NGI vision paper and policy network. We also collected feedback on the role of the advisory board itself in supporting these activities and agreed that a follow-up meeting should be held within six months to assess progress against the project activities discussed. Board members provided detailed and constructive comments on each, which are summarised in bulleted form below.

NGI vision

In this first part of the meeting, the project provided an overview of the main messages of the upcoming vision paper, which NGI Forward will release soon:

  • Members agreed that the NGI vision should work towards concrete actions and alternatives, rather than framing the issues in a reactive way. It’s necessary to clarify that the NGI is about reclaiming the internet in a European way, without furthering the dynamics moving us towards a splinternet, supporting needlessly fatalistic narratives about reinventing the internet from scratch, or pulling the plug altogether.
  • Members highlighted the risk that bad practices from big tech companies overshadow the possibilities of doing good through internet technology. The NGI vision should capture this by weaving more optimistic narratives and rewarding those who do the right thing.
  • Members argued that an NGI vision should also promote open standards, practical solutions, inclusion and bottom-up action, and should empower a wide net of stakeholders to play their role in bringing about this vision.
  • Members highlighted the challenge of balancing the NGI’s human-centred and value-based proposition with Europe’s otherwise more economically-driven Digital Single Market narrative. However, bridging that gap may also present a unique opportunity for the project and wider initiative to speak to policymakers who are caught in between both approaches. The story of this vision needs to be sufficiently inclusive to appeal to policymakers and other stakeholders across the political spectrum.
  • Members asked to be provided with an early draft of the vision before it’s published, and generally would like to be involved in the dissemination and future fine-tuning of the NGI vision.
  • Members noted that language around data justice and bias was not explicitly mentioned in the summary slides on the vision paper, and that, given the importance of these topics, the project should consider featuring them more prominently.

Policy Network

  • NGI Forward presented a short paper on the objectives and design of a potential NGI Policy Network, which would serve as a coalition for change towards a more democratic, sustainable, trustworthy, resilient and inclusive internet by 2030. The proposed network would bring together organisations and individuals with shared ambitions through policy-relevant research and public affairs work. It would serve to avoid the duplication of efforts and the proliferation of competing, often similar, solutions to universal challenges from organisations that operate in different local contexts or represent different stakeholder and practitioner communities. It should aim to make the NGI more inclusive and provide a mechanism for bottom-up contributions to NGI-relevant research and policy work.
  • Members welcomed the idea of a community of communities that would serve to break down silos between different discourses and provide for knowledge-sharing at a practical level. 
  • Members highlighted that many actors in this space have a capacity problem and need to see a clear incentive for joining. 
  • Members similarly highlighted the risk of setting up a policy network that duplicates the work of similar, already existing groups. 
  • The network should have very clear objectives and identify areas of mutual interest that are underserved by other groups. At the same time, it should develop good links between these existing networks to widen its impact.
  • Members also spoke of the risk of setting up another project or network that cannot be sustainably continued after the end of NGI Forward’s funding period, often a problem for H2020-funded initiatives. The goal should be to create a structure that lasts after the end of the project, and could potentially be carried forward by its members. There should be a continuity or succession plan in place before the network is launched. 
  • Similarly, members suggested the network should be open to European project consortia to share their own project outputs and deliverables so as to ensure follow-up by others after the end of their respective grant periods.
  • Members argued that the project’s ambitions for enabling a bottom-up approach will require the network, and the project more generally, to target local governments and communities or institutions that otherwise have limited exposure to these topics and translate NGI ideas from EU jargon into more useful terminology, methods and tools.
  • Another potential selling point is to help public sector organisations who are actively looking for value-aligned alternatives and more ethical ways of organising the digitalisation of their services. 
  • Members highlighted in particular the need to target non-English-speaking audiences, and recommended that the project seek ways to translate outputs, and reflect Europe’s geographical diversity in, for example, the NGI Policy Summit programme.
  • Members agreed that there was a need for practical insights and tangible, solutions-oriented policy ideas, but less desire for another forum for discussing high-level principles for the future of the internet. One idea put forward was to brand the coalition a ‘Policy and Practice’ Network.
  • Members suggested that the network could pursue more formal agreements between organisations, e.g. memoranda of understanding. Members said that the network would need visibility in places where policymakers go, such as the OECD and WEF. 
  • Members argued that we should consider setting clear responsibilities and deadlines for participants to ensure engagement and that partners follow through on their commitments. On the other hand, we should be realistic about how much time and resource potential partners could invest in another network.

Policy Summit 

  • Members expressed their interest in the NGI Policy Summit, scheduled for 28 and 29 September, and all agreed to attend at least some of the sessions.
  • Members also noted that the summit would be a suitable moment both to highlight the conclusions of the vision paper and to launch the NGI Policy Network, in particular by starting some of the proposed working groups.
  • Members recommended we also recruit engaged stakeholders to lead some of these working groups, rather than attempt to organise all of these within the project.

Post

Announcing the NGI Forward Advisory Board

Our work requires the support and guidance of a broad community of experts and practitioners, and to help us achieve this we are excited to announce the establishment of our Advisory Board.

Our work requires the support and guidance of a broad community of experts and practitioners, and to help us achieve this we are excited to announce the establishment of our Advisory Board. Our Advisory Board members have been chosen to help us have the biggest impact we possibly can by connecting us with new networks, guiding our ideas and giving critical feedback on our plans.

Pablo Aragón, Research Scientist, Eurecat

Pablo is a research scientist at Eurecat and adjunct professor at Universitat Pompeu Fabra. His research in computational social science focuses on characterizing online participation in civic technologies, the online network structures of grassroots movements and political parties, and the techno-political dimension of networked democracy. He is also a board member of Decidim, the free open-source platform for participatory democracy. Follow Pablo on Twitter.

Mara Balestrini, Digital Transformation and HCI advisor

Mara is a Human-Computer Interaction (HCI) researcher and a digital transformation strategist. Mara’s work sits at the intersection of civic technology, data, AI and co-creation. She is former CEO of Ideas for Change innovation agency and was cabinet advisor to the Secretary of State for Digitalization and AI at the Government of Spain. Mara earned a PhD in Computer Science from the Intel Collaborative Research Institute on Sustainable Connected Cities (ICRI-Cities) at University College London (UCL). She also holds an MSc in Cognitive Systems and Interactive Media from Universitat Pompeu Fabra. Her work has been awarded at ACM CHI, ACM CSCW, Ars Electronica, among others, and featured in international media such as the BBC, The Guardian, The Financial Times and El País. Follow Mara on Twitter.

Ger Baron, CTO, City of Amsterdam

Ger Baron is the Chief Technology Officer of the City of Amsterdam. His professional career started at Accenture, where he worked as an analyst in the consulting department. In 2007, he was hired by Amsterdam Innovation Motor (AIM) in the role of project manager, specifically to develop and enhance the role of ICT. Baron was responsible for starting up the Amsterdam ICT-cluster and he initiated several projects in public-private partnership. Among these were a number of projects related to the development of Amsterdam’s Smart City initiative. Currently, Mr Baron is responsible for innovation, R&D and innovation partnerships within the City of Amsterdam. In addition, he serves as president of the City Protocol Society. Follow Ger on Twitter.

Martin Bohle, Senior Researcher, Edgeryders

Martin’s research focuses on the relationships between science and society, as perceived from a geoscience baseline. He likes to explore concepts that describe the ‘human-biogeosphere intersections’, refer to societal practices (citizen science, governance arrangements, or narratives) or encroach on complex notions such as Anthropocene, noosphere or engineering. From 1991 to 2019, he was affiliated with the Directorate General for Research and Innovation (DG RTD) of the European Commission, where he worked in operational, executive and senior advisory functions. Before these experiences, he studied the dynamics of coastal seas and lakes. Follow Martin on Twitter.

Ian Forrester, Senior Firestarter, BBC R&D

Ian is a well-known and likeable character on the digital scene in the UK. He has now made Manchester his home, where he works for the BBC’s R&D North lab. He focuses on open innovation and new disruptive opportunities via open engagement and collaborations with startups, early adopters and hackers. His current research is in the area of Future Narrative and Storytelling, with a technology he calls Perceptive Media: a new kind of broadcasting that pairs the best of broadcast with the best of the internet to create an experience like sitting around a campfire telling stories. His background is in interaction design, which he combines with development using XML and semantic web technologies. He tends to live a few years in the future, and has an excellent eye for spotting the opportunities of open technologies and new business models. Follow Ian on Twitter.

Simon Morrison, Deputy CEO, Nesta

Simon is Deputy CEO of Nesta, where he leads the organisation’s operational teams and oversees the content-creation teams. He also works on strategy and day-to-day issues. He has been a Nesta exec for almost seven years and holds a lot of corporate memory as well as practical experience, which he tries to use to mentor individuals and units throughout the business. Prior to Nesta, he held senior positions at the Institute of Fundraising, Home Office, Royal College of Midwives and the National Trust. He also worked in local government communications and as a journalist in the commercial sector.

Inger Paus, Managing Director, Vodafone Institute for Society & Communications

Inger is responsible for Vodafone Germany’s corporate responsibility strategy. She is Chairwoman of the Management Board of the Vodafone Foundation Germany and Managing Director of the Vodafone Institute for Society and Communications. Before joining Vodafone, Inger held multiple positions in Corporate Affairs and Corporate Communications at Microsoft. As Head of Economic and Social Policy, she developed campaigns and initiatives on issues ranging from digital education and industry 4.0 to the future of work. Furthermore, she led Microsoft’s Berlin Center, which was established to foster the dialogue between government and society. Prior to that, she led Microsoft’s Corporate Communications in Western Europe and Germany. Inger Paus gained her media experience in public service broadcasting and as a consultant for media and technology companies on issues concerning political communications. Follow Inger on Twitter.

Katarzyna Śledziewska, Executive Director, DELab, University of Warsaw

Katarzyna is a professor at the Faculty of Economic Sciences of the University of Warsaw. Her interests mainly focus on digital economy, the Digital Single Market strategy, international economy, economic integration and regionalism. Katarzyna is also a member of Readie – Europe’s Research Alliance for a Digital Economy, a member of the Council for Digitization at the Ministry of Digitization and a member of the Council 17 at the 17 Goals Campaign. She is also co-author of a recently published book titled “Digital Economy: How new technologies change the world”. Follow Katarzyna on Twitter.

Marleen Stikker, Founder, Waag

Marleen Stikker founded and leads Waag, a social enterprise that consists of a research institute for creative technologies and social innovation and Waag Products, which launched companies like Fairphone, the first fair smartphone in the world. Marleen also founded ‘De Digitale Stad’ (The Digital City) in 1993, the first virtual community offering free public access to the Internet in Amsterdam. She is also a member of the European Commission’s H2020 High-level Expert Group for SRIA on innovating Cities/DGResearch and the Dutch AcTI academy for technology & innovation. Follow Marleen on Twitter.

Marco Zappalorto, CEO, Nesta Italia

Marco is the Chief Executive of Nesta Italia and Director of the Social Innovation Design BA at IAAD University. Marco joined Nesta in 2011; before setting up Nesta Italia, he was Head of European Development, contributed to the set-up of the Challenge Prize Centre and led most of the Centre’s European and international work. Follow Marco on Twitter.

Post

Net Partiality, July issue

We’re on a roll with these newsletters now and I can’t wait for you to see what’s coming up this month. Data, data everywhere When technology perpetuates racism: Charlton McIlwain writes for MIT Technology Review about the fascinating origins of criminal justice information systems in the United States. Drawing parallels to the use of technology to trace […]

We’re on a roll with these newsletters now and I can’t wait for you to see what’s coming up this month.

Data, data everywhere

When technology perpetuates racism: Charlton McIlwain writes for MIT Technology Review about the fascinating origins of criminal justice information systems in the United States. Drawing parallels to the use of technology to trace COVID-19 outbreaks and monitor protestors, he highlights the long-term trust-eroding impact of systems whose precursors were designed to target Black people and the civil rights movement. The modern-day impact of historical systems is nowhere clearer than in the process of redlining, by which 1930s US mortgage lenders established maps of subjective assessments of neighbourhood safety that are still affecting people today. Racist judgments made back then have destined many areas to low investment and poor service provision. While the practice of redlining is no longer allowed, decisions are increasingly being made by artificial intelligence, which is slurping up all of the same kinds of data about people and their neighbourhoods. This algorithmic discrimination is all the more pernicious because it is hidden.

Information overload: We’re facing information overload, being bombarded with too much online news that is too negative to handle, argues Eric Ravenscraft for One Zero. This has implications for our mental health, the business of accountability journalism and the spread of online misinformation, he writes, as we allow ourselves less time to scrutinise stories while accelerating news cycles mean that some public interest reporting gets buried. April and May saw a particularly significant increase in news avoidance: in a recent survey, 59 per cent of respondents said they avoided the news at least ‘sometimes’. 

Threat level: Infodemic: Meanwhile, the WHO and EU have adopted the term ‘infodemic’ to describe the increase in COVID-19-related fake news. In a recent communication, the Commission went as far as saying that China and Russia had actively engaged in targeted disinformation campaigns in Europe ‘to undermine democratic debate and exacerbate social polarisation, and improve their own image in the COVID-19 context.’ In response, the EU is asking online platforms to increase their efforts to tackle the ‘infodemic’ and submit monthly reports on policies and actions taken to improve users’ awareness of disinformation, promote authoritative content and limit advertisement placement. Platforms have also been asked to step up cooperation with researchers at the newly established European Digital Media Observatory, which supports the creation of a cross-border and multidisciplinary community of independent fact-checkers and academic researchers.

Tick-tock, come along now: It’s been two years since the UK Government announced a flagship National Data Strategy to unlock the power of the country’s data and build public trust in its use. We’re still waiting and the issues it could cover are becoming more pressing by the day. Last year’s letter from a group of civil society organisations lays out the top priorities for the strategy, with calls to invest in skills, lead the strategy from the top of Government, and ensure that the public and data users are thoroughly consulted. Will it happen this year?

A new plan to preserve our privacy: Hacks, leaks and sneaky data sharing have become the norm for internet users, who are now forced to take defensive manoeuvres to protect themselves from untold levels of spam emails, scam calls and ‘pre-approved’ credit cards. Once our data is stolen, it is impossible to take it back. But how can we challenge surveillance capitalism? According to this Salon long read, we start by forbidding companies from using personal information as a commodity and letting the tech companies find new business models. Could this approach to legislation create a new privacy-focused world?

Privacy alone can’t fix today’s power imbalances: Michael Veale, co-developer of the decentralised DP-3T system that inspired Apple and Google’s approach to privacy-aware COVID-19 contact tracing, warns in the Guardian about the perils of confusing privacy with power concentration on the internet. Veale points to ‘federated’ or ‘edge’ computing and cryptographic tools that allow big tech companies to pursue potentially problematic ends without privacy-invasive means. He argues that we need to rethink digital rights because even if the solution adopted by Apple and Google is ‘great for individual privacy… the kind of infrastructural power it enables should give us sleepless nights.’

But the bursting of a new dotcom bubble might: The adtech industry is heading for a fall, according to this piece in The Correspondent from November, which rings true with recent developments. This in-depth analysis recounts trials and tests of the effectiveness of online advertising and finds it lacking, following a handful of case studies including eBay, and concluding that ‘It’s very hard to change behaviour by showing people pictures and movies that they don’t want to look at.’

New things coming up

The apps that nobody controls: A raft of new systems are being created to wrest control of the internet back from the world’s tech companies, and Dfinity has laid down its Internet Computer Protocol in support. Unlike traditional internet services that require central servers, Dfinity’s apps are distributed across the network, moving between servers and distributing cryptocurrency to their temporary hosts. The hope is that users will retain control over their personal data when using the apps and that they’ll be governed by the hivemind rather than a single authority.

The New Tech Cold War: Is the West losing the tech innovation race to China? We’re falling behind on AI, quantum and networking technology, and the Huawei debacle has shone a light on China’s industrial strategy to dominate in these areas. Find out if the UK or the US will manage to break free of Chinese innovation in this audio investigation from BBC Security Correspondent Gordon Corera.

Speaking of which: 22 French and German companies have banded together to develop common principles for a cloud services platform to serve Europeans. Launching in 2021, GAIA-X will be entirely non-profit, and led by German Minister Peter Altmaier. SAP, Atos, Siemens, Bosch and Deutsche Telekom are all on board. GAIA-X won’t create a direct cloud competitor to challenge US and Chinese services, but initiators hope that it will pave the way for new competitors to arise, while respecting European privacy principles.

And some bits on online content

Not content with what we’ve got: There have been all kinds of activity on content control this month, handily summed up by Mark Scott. France’s highest constitutional court has struck down legislation to force Google, Facebook and other platforms to remove hate content within 24 hours and the UK’s Online Harms Bill has been pushed to next year, but could actually be delayed ‘for years’. At the same time, Germany has approved a law forcing social platforms to report serious incidents of hateful content, the US Department of Justice is pushing to remove platforms’ immunity from lawsuits and Ireland is ramping up efforts to force platforms to build safety into their designs.

Inside the internet’s mind: Adioma has put together an incredible interactive infographic of some of the most popular topics and pieces of content. Inequality, death, kids and the future all feature heavily, with hearty long reads to dig into. Find out how to be mentally strong, how to power Germany with solar and how to die on your own terms.

Post

Eight goals for a human-centric internet

As part of the European Commission’s Next Generation Internet initiative, the NGI Forward consortium aims to set out a vision for a more human-centric internet. This blog identifies eight key objectives that can get us there and inform our policy and technology research.

In recent decades, there has been a revolution in the development of internet technologies across a wide range of fields, and all indications are that technological progress will continue at a rapid pace. These breakthroughs undoubtedly have a profound impact on society, and while they present significant opportunities, complex dilemmas and challenges are also emerging around these new technologies.

Currently, the development of the internet technologies of the future is centralised around a few internet giants in near-monopoly positions on the global data market and, without an adequate response, humans risk losing control to data-driven, non-human-centric business models. The goal of the Next Generation Internet initiative and NGI Forward is to steer the development of internet technologies and policy towards a more human-centric evolution of the Internet.

A mixed method approach to identify emerging challenges

Insights into emerging technologies and their corresponding challenges and opportunities can be of great value for European policy-makers in this process. Understanding these emerging challenge areas will allow policy-makers to become involved in shaping internet development early on to embed more human-centric values.

Following some of our previous work to map out future internet challenges, the NGI Forward consortium have identified a new set of eight key topics that we believe will be central in developing a more democratic, inclusive and resilient Next Generation Internet. These topics will help inform the NGI’s policy and technology research agenda going forward.

To identify the most pressing issues facing the internet today – and tomorrow – we employed a mixed-method approach that combines computational social science methods and expert workshops. In the first phase, DELab at the University of Warsaw collected and analysed data from technology news articles and academic working papers to identify trending keywords related to the Internet in the broader public and research communities respectively. In the second phase, DATALAB at Aarhus University organised an expert workshop with leading stakeholders in the internet research community to help narrow down the areas of focus and verify or adjust the topics. Lastly, DATALAB synthesised the results to select eight key topics for the NGI.

The chosen topics are not tied to any one technology to prevent them falling out of relevance in the coming years. They are broadly interpretable and solution-agnostic so as to avoid us jumping to simplistic conclusions or specific solutions too quickly. The rapid technological development in recent decades demonstrates that focusing on specific tools and technology may render topics obsolete within just a few years, while societal challenges are more likely to remain relevant and allow the EU to focus on a wider range of solutions beyond a predetermined technology.

1. Trustworthy Information Flows

It is widely recognised that trustworthy information flows are essential for healthy democracies, but with social media and the Internet, content can spread much faster and in less moderated ways, challenging traditional information flows. The problem of online mis- and disinformation – often referred to as fake news – has evolved from a journalistic concern to one of the most urgent democratic issues in recent years. Despite major attention from the media, academia and governments, an effective solution is still not available. Coupled with other issues such as governmental censorship and large-scale content moderation by online platforms, information flows are changing rapidly, and further research is needed to explore different solutions that are sustainable and consider often conflicting values.

2. Decentralised Power on the Internet

The Internet was originally designed to be open and decentralised. But the de facto internet of today is controlled by a handful of giant companies with virtual monopoly control, acting as gatekeepers by enforcing policies on their users. However, visions for a more decentralised Internet are gaining traction – an Internet where humans can communicate without relying on big companies that collect data for profit. Some concepts for a decentralised Internet utilize distributed web and blockchain technologies to yield a more open and accessible Internet, while others focus on empowering people to publish and own content on the web outside centralised social media platforms. More research is needed into these solutions, both technical and socio-technical.

3. Personal Data Control

Recent revelations, including the Cambridge Analytica scandal, have made clear how little control we have over our own data, and the sheer amount of data collected online has created a major privacy concern. New approaches to privacy and data rights, such as data sovereignty, data portability and collective data rights, are needed to realise the societal and environmental potential of big data to connect diverse information and conduct rapid analysis. Achieving this will require research into the ways policymakers can fit these new concepts into existing data regulation frameworks in a way that offers individuals better control and authority, and builds public trust and engagement.

4. Sustainable and Climate-friendly Internet

The environmental impact of the Internet is enormous and growing rapidly. Each activity online comes with a small price in terms of carbon emissions, and with over half the global population now online, this adds up. According to some estimates, the global carbon footprint of the Internet and the systems supporting it amounts to about 3.7 per cent of total global carbon emissions, similar to the amount produced by the airline industry globally. As the Internet expands into new territory, it is estimated that the carbon footprint of global internet technologies will double by 2025. Sustainability should therefore be a bigger priority, and further insights are needed into how emissions could be controlled, how awareness of the environmental impact of the Internet can be raised, and how internet technologies can be utilised in the fight against climate change.

5. Safer Online Environments

People increasingly experience the internet as a hostile space. Cyberviolence in its many shapes and forms is a growing concern, and it has a significant impact on an increasing number of people, with LGBTQ+ people, ethnic minorities, women and children particularly affected. It will be vital for a more human-centric Internet to build safe online environments. For this to happen, a range of issues needs to be taken into consideration, including the role of social media providers and the protection of free expression. At the same time, solutions need to be investigated, such as effective moderation or containment procedures, creating useful aid for victims of cyberviolence and enabling law enforcement to take action against offenders.

6. An Inclusive Internet

The Internet offers the potential for inclusiveness in a global and diverse community, but if access is not evenly distributed, the Internet will deepen inequality. Half of the world’s population is still offline, urban areas are better connected than rural ones, and even those with advanced connections may not be in a position to realise the full potential of the Internet to improve their lives and mitigate critical issues. Many disabled people are also excluded from using online information and services, so inclusive infrastructures and tools are needed to remove barriers and create an inclusive and accessible Internet for all.

7. Competitive European Ecosystems

Today, the Internet is dominated by two narratives that give little agency to users: the American model, ruled by capitalist market powers, with internet giants harvesting massive amounts of personal data to shape human behaviour, and the Chinese model, characterised by mass surveillance and government control of the internet. These narratives cannot go unchallenged, and growth and innovation in the European tech industry, without acquisition by US- and China-based companies, is needed to support a competing narrative adhering to European values. This requires further research into possible policy and regulatory initiatives that can increase Europe’s competitiveness in the technology sector.

8. Ethical Internet Technology

Recent examples, such as Google’s censored search engine developed for the Chinese market (‘Project Dragonfly’), instances of algorithmic bias in criminal cases, racially targeted ads and “differential” pricing, and the use of Facebook data for voter manipulation, have shown that the Silicon Valley attitude of ‘moving fast and breaking things’ has failed. With the rapid development of new technologies in the Internet of Things, Artificial Intelligence and Machine Learning, further research is needed in order to develop targeted ethical frameworks for the development and implementation of new technologies.

Post

Making sense of the COVID-19 information maze with text-mining

The COVID-19 pandemic has brought with it an ‘infodemic’, flooding society with myriads of conflicting ideas and opinions. To help cut through the noise, we applied some of our data tools to map recent developments and understand how technology is being used and discussed during the crisis.

Register to attend our webinar to discuss this research, Wednesday 3rd June 2020 at 5 PM CEST

We want these insights to be as useful as possible and are keen to adapt and analyse the data in different ways to answer your burning questions. We invite you to join us in a webinar to discuss our methods and results, and exchange ideas about the most pressing tech challenges.

You can also view our full analysis at https://covid.delabapps.eu

Trending terms in news articles from our analysis.

The COVID-19 pandemic has brought with it an ‘infodemic’, flooding society with myriads of conflicting ideas and opinions. To help cut through the noise, we applied some of our data tools to map recent developments and understand how technology is being used and discussed during the crisis.

As part of NGI Forward’s work to create data-driven insights on social and regulatory challenges related to emerging technologies, we have developed various data-science tools to analyse trends in the evolution of Internet technology. In our previous studies, we focused on areas such as the content crisis in social media, the regulation of tech giants and cybersecurity.

Now we have opened our toolbox and mapped recent developments in the fight against COVID-19, to bring some clarity to how the crisis is evolving. We concentrated on four major areas:

  • Online tech news
  • Open-source projects at Github
  • Discussions on Reddit
  • Scientific papers

Mixed feelings on COVID tech in the news

First, we examined trends in 11 respected online news sources, such as the Guardian, Reuters and Politico. Based on changes in the frequency of terms, we identified trending keywords related to COVID-19 and the world of technology. This enabled us to focus on key issues such as contact-tracing, unemployment and misinformation in the following sections of the analysis.
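To make the method concrete, here is a minimal sketch of this kind of frequency-based trend detection, assuming the articles have already been collected as plain-text strings grouped into two time periods. The function names, tokeniser and smoothing constant are illustrative assumptions, not the project’s actual pipeline:

```python
from collections import Counter
import math
import re

def term_frequencies(docs):
    """Relative frequency of each term across a list of documents."""
    counts = Counter()
    for doc in docs:
        counts.update(re.findall(r"[a-z][a-z-]+", doc.lower()))
    total = sum(counts.values()) or 1
    return {term: n / total for term, n in counts.items()}

def trending_terms(docs_before, docs_after, smoothing=1e-5, top_n=20):
    """Rank terms by the log-ratio of their frequency in the later vs earlier period."""
    before = term_frequencies(docs_before)
    after = term_frequencies(docs_after)
    scores = {
        term: math.log((freq + smoothing) / (before.get(term, 0) + smoothing))
        for term, freq in after.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical usage: articles_2019 and articles_2020 are lists of article texts.
# print(trending_terms(articles_2019, articles_2020))
```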

Next, we analysed terms that are frequently used together, or co-occur (e.g. “contact-tracing” and “central server”), to see how technology was associated with different aspects of the crisis. We also measured the sentiment of the paragraphs containing these word pairs to understand whether coverage of COVID technology issues was positive, negative or neutral. As an example, we identified the key actors, initiatives and challenges related to contact-tracing, focusing on EU-wide projects such as PEPP-PT.
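Below is a minimal sketch of how paragraph-level co-occurrence and sentiment can be combined, using NLTK’s off-the-shelf VADER analyser as a stand-in for whatever sentiment model was actually used; the anchor term and term list are illustrative assumptions:

```python
from collections import defaultdict
from statistics import mean
from nltk.sentiment import SentimentIntensityAnalyzer  # requires nltk.download("vader_lexicon")

def cooccurrence_sentiment(paragraphs, anchor="contact-tracing",
                           terms=("privacy", "dp-3t", "tracetogether", "mission creep")):
    """For each term, count and average the sentiment of paragraphs where it co-occurs with the anchor."""
    sia = SentimentIntensityAnalyzer()
    scores = defaultdict(list)
    for para in paragraphs:
        text = para.lower()
        if anchor not in text:
            continue
        compound = sia.polarity_scores(para)["compound"]  # -1 (negative) .. +1 (positive)
        for term in terms:
            if term in text:
                scores[term].append(compound)
    return {term: (len(vals), mean(vals)) for term, vals in scores.items()}

# Hypothetical usage: paragraphs is a list of news paragraphs about COVID tech.
# print(cooccurrence_sentiment(paragraphs))
```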

The table below shows terms co-occurring with ‘contact-tracing’, ranked by sentiment score. DP-3T and TraceTogether are more associated with positive sentiment, while discussion of privacy and mission creep shows that there are concerns about the implementation of these systems.

Mapping the COVID tech ecosystem

Alongside this specific analysis, we have also mapped articles based on their vocabulary and topic. You can explore the main areas of technology news with characteristic words in these interactive visualisations.

The map below shows the clusters of news articles covering specific technologies and tech companies.

Throughout the crisis, numerous programmers have devoted their time to developing open-source tools to support the fight against COVID-19. We collected COVID-19-focused projects from Github, the software platform where much of this development is taking place, to examine various trends about location, aim and technology. You can find an overview of the top 50 most influential repositories on our analysis page. Perhaps you will be inspired to get involved!

The map below shows the number of Github projects related to COVID-19 in the week commencing 20th April 2020.

Tracking changes in social media

Turning next to social media, we examined activity on Reddit to uncover relevant changes. By analysing the text of posts and comments, we discovered a surge in discussions related to the job market, mental health and remote work. Our analysis also provides insight into the changing perception of lockdown measures and growing lockdown fatigue.
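As an illustration of the sort of measurement behind the unemployment graph described below, here is a minimal sketch that computes the weekly share of posts mentioning a topic. It assumes the Reddit posts have already been collected into a pandas DataFrame with a Unix timestamp and a text column; the column names and keywords are our assumptions, not the project’s pipeline:

```python
import pandas as pd

def weekly_topic_share(posts, keywords=("unemployment", "laid off", "furlough")):
    """Share of posts per week that mention any of the given keywords.

    `posts` is assumed to have a 'created_utc' column (Unix seconds) and a 'text' column.
    """
    df = posts.copy()
    df["week"] = pd.to_datetime(df["created_utc"], unit="s").dt.to_period("W")
    pattern = "|".join(keywords)
    df["mentions_topic"] = df["text"].str.contains(pattern, case=False, na=False)
    return df.groupby("week")["mentions_topic"].mean()

# Hypothetical usage: posts collected beforehand, e.g. from a Reddit data archive.
# weekly_topic_share(posts).plot()
```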

The graph below shows a sharp increase in Reddit discussions about unemployment in the latter half of March 2020.

Social science counts the consequences

Finally, we also examined trends in scientific journal articles related to COVID-19. Analysing articles from the social sciences gives us a broader picture than news articles, and we found increasing discussion of the immediate consequences of the pandemic and lockdown. The trending words range from health-related terms (pneumonia, infectious, epidemiology) to ones more common in the social sciences, such as economic recession, policy and GDP.

The word cloud below shows some of the most common terms in social science articles relating to COVID-19.

Post

Greening the internet, remotely

Team NGI Forward has got off to a cracking start in the new world of online conferences, with our debut session at IAM Weekend. Run by a collective of artists, technologists and activists, IAM brought together hundreds of attendees from across the globe to discuss The Weirdness of Interdependencies. We had a great time joining […]

Team NGI Forward has got off to a cracking start in the new world of online conferences, with our debut session at IAM Weekend. Run by a collective of artists, technologists and activists, IAM brought together hundreds of attendees from across the globe to discuss The Weirdness of Interdependencies. We had a great time joining in and stirring up discussion about how we can make the internet more sustainable and climate-friendly. Here’s how it went.

Our aims

We’re experiencing an unprecedented and likely unsustainable proliferation of connected devices, data and traffic. Do we risk sleepwalking into a future of internet curfews and Netflix-shaming? Or can good policy and ‘smart everything’ save the day? We wanted to create an immersive workshop about these strange futures and difficult choices.

How we did it

Let’s start with some notes on the format of our workshop, since everyone is looking for ideas on running online events these days. We used video conferencing and a shared slide deck so we could update it in real time with attendee contributions. For the interactive section, we used breakout rooms to split into two groups of around 15. It worked well, and as you’ll see below, we collected a ton of ideas from our willing participants.

The internet’s environmental impact

We started with a discussion of what the internet is, and where it begins and ends. The boundaries are blurry, and we have embedded the internet in so many aspects of human life that its impact is widespread and in many areas difficult to define. Powering this global network requires vast quantities of electricity, estimated at between 5 and 9% of the world’s total generation capacity. Despite current efforts to move to renewable sources, most of the electricity powering the internet comes from burning coal and gas, which means that our internet use is creating 2% of global greenhouse gas emissions, roughly equal to the entire global airline industry.

With the advent of IoT and 5G, the sustainability of our digital lives will become a pressing issue in the years to come. Yet, we are doing relatively little to build social awareness, spur positive cultural change or capture the market opportunities associated with innovating for a more sustainable, resilient and ethically-sourced internet.

In the rudimentary map below, you can see the different groups of people and activities that contribute to the internet’s environmental impact. From users streaming video and the data centres that serve them, to the devices they use and the mining required to produce them, everything has an impact.

And here is a set of facts about each of these areas that hits home:

Two possible ways forward

Any policy change can have unintended consequences, so we decided to present two potential approaches to reducing the internet’s climate impact. We wanted to explore what the world would look like if politicians, businesses and citizens had to make more conscious decisions about connectivity, our data and the devices we use. We wanted to challenge participants to consider tough trade-offs, predict winners and losers, and think through unexpected consequences that could range from YouTube rationing and internet class systems to urban mining booms and net-zero stock bubbles.

To get us started, Markus sketched out two equally challenging ways forward. We decided we wanted to avoid creating a dichotomy between decisive climate action and calamitous inaction. Instead, we created two imaginary scenarios that involved work on a broad scale, just with a different approach. In reality, the plan would likely include elements of both, but taking an extreme makes a futures exercise more interesting!

At this point, we split into two groups and used a futures wheel to plot out the potential consequences of each scenario. Here’s the template we used.

Scenario one: Fast Forward

In this imagined future, we’re going to accelerate our technological efforts towards solving climate change. There’d be an explosion of funding for research and development. We’d create apps, services and devices to help us monitor, analyse and act on the information we gather about the climate. We’d see all of the following actions:

💙Ramp up investment in green innovation and smart cities to mitigate climate change
💙Drive forward the digitisation of public services and make infrastructure ‘smarter’ to improve energy efficiency
💙Promote more decentralised data infrastructure
💙Encourage the proliferation of connected devices and data to better inform decision-making and policy

The good

This group predicted that new jobs, income streams and circular economy systems would emerge alongside greater access to the arts and the addition of new perspectives to global society. Greater efficiency in logistics and a greater variety of services available could lead to reduced digital lock-in for consumers, and digital tools could give us greater control over our lives.

The scenario would enable climate movements to rapidly scale, connecting people to local grassroots campaigns and improving coordination. We could see loneliness drop as we become more connected, with technology assisting ageing populations with healthcare and companionship. 

We’d invest more in technology to reduce food waste, which could help to resolve food insecurity. The price of healthcare could drop due to automation, and we could even see the beginning of a Universal Basic Income.

The bad

On the negative side, our participants feared that this approach could exacerbate some already familiar problems. Greater reliance on technology could further challenge our right to privacy and worsen the digital divide. Remote areas and older people could also be left behind, which might make loneliness worse for marginalised communities.

We’ve seen in the past how a small minority of climate deniers can derail global efforts. Fake news could spread through the population like wildfire, making it challenging to verify the reliability of data. In this scenario, more people could gain access to the internet, and more conflicting views online could make it difficult to come to democratic decisions about what to do. 

Policymakers and the justice system could also be too slow to keep up with the pace of innovation we’d see in a world focused on accelerating research and development. A delayed reaction here could result in failing to protect workers who lose their jobs to automation. That could include farmers, challenged by an increase in mechanised processing and even genetically modified foods.

Financial investment could be directed primarily towards large incumbent technology companies, crowding out small businesses in the online marketplace. Companies would also need to focus on long-term investment in adaptation rather than short-term gain.

Accelerating our adoption of technology could generate huge piles of electronic waste, with the toxic processing of rare earth metals and the resulting pollution rising unchecked.

It’s a complicated picture!

Scenario two: Press Pause

In this future, we’ve decided to hit the brakes. We’d slow down emissions by reducing our use of technology. Our focus could move back to spending time in nature. There might be campaigns to shame people that stream content ‘excessively’. We’d likely see the following:

💚Increase taxes on – and remove subsidies for – fossil energy and ‘dirty’ technology
💚Redesign public services to be more energy-efficient and less interdependent or reliant on the internet
💚Consolidate data centres, regulate energy use and traffic
💚Introduce a right-to-repair, discourage further proliferation of devices and encourage data minimisation

The good

This group felt that pressing pause would make things fairer by increasing tax on companies rather than individuals, forcing change at the core of the internet. Achieving this pause would require global collaboration on an unprecedented scale, presenting both a challenge and an opportunity to solidify global ties and collaborate.

With a more conscious approach to connectivity or even reduced internet access, we’d choose more carefully what to post online. The taxes collected in this scenario could fund all manner of social interventions, and we’d likely see a reduction in overall pollution. People could become more connected to their local communities, and we could bring marginalised groups into discussions on an equal level, especially those whom technology currently excludes because of their age, background or lack of infrastructure.

The bad

However, the consolidation of data centres could also lead to more centralised control over the internet, and it’s not clear whether this would be good for the environment in the long term. We could also see the emergence of a two-tier internet, where traffic for either vital services or wealthier groups is prioritised. 

Moreover, a culture of click-shaming could develop, forcing people to reduce or even hide their internet use. Is this how we want our approach to the internet to change?

Trade-offs along the way

Together we identified a set of trade-offs to consider when greening the internet.

Green tech vs ethical supply chains: Green innovation is still heavily dependent on problematic battery technology and critical raw materials, such as rare earth minerals. These are often sourced through unsustainable means and under poor working conditions.

Saving the climate vs economic stability: Our economy and innovation ecosystems are heavily dependent on high levels of consumption. If we stop buying new devices every year, jobs may be lost and not necessarily replaced. R&D investment will have to come from new sources. 

Reducing consumption vs fair access: The global north and specific demographics benefit disproportionately from internet access. But there are still 3 billion people without a connection. Providing the same opportunities and fair access to everyone would have a tremendous environmental impact, and more environmentally friendly devices would likely be prohibitively expensive for many.

Public interest vs consumer benefit: Consumers are voters. Policies that hurt them tend to be unpopular. If we encourage companies to sell phones without chargers, will they just make greater profits to the detriment of lower-income families? Are we willing to stick with old phones for longer or pay a premium for repairable devices?

We don’t have the tech or the equality

The groups also questioned how long it would take to develop the technologies required. We decided that either scenario would need serious investment in technology and corresponding public policy, but that neither was particularly appealing overall. Large companies will likely be better prepared to adapt, although this depends on their sector. The geographical and political context of companies and users is another barrier to enacting these changes.

Both groups then reflected on who would benefit from these scenarios, and the answers were similar. Some participants expressed pessimism that either scenario would do much to shake up the power imbalance in the digital economy. Either scenario could empower currently marginalised groups to connect and benefit, but connecting them would require coordinated, concerted effort from those in power. Without this effort, participants felt that current inequalities would be exacerbated, continuing the exclusion experienced by people with lower incomes, the socially isolated, people with medical conditions and older people.

Thank you, IAM!

We had a fantastic time discussing these issues with our lovely attendees, and we’ll be staying in touch with them. The rest of IAM Weekend was both insightful and great fun; we’d highly recommend it.

If you’re interested in these topics, join our newsletter and get in touch.

Post

How collective intelligence can help tackle major challenges…

...and build a better internet along the way!

It’s hard to imagine what our social response to a public health challenge at the scale of COVID-19 would have looked like just ten or fifteen years ago – in a world without sophisticated tools for remote working, diversified digital economies, and social networking opportunities. 

Today, we see frontline doctors self-organising through social media to share diagnostic and treatment advice, DIY communities sharing open source solutions to help bolster supplies of ventilators and face masks, and the transition of many businesses to a physically distributed and temporally asynchronous workforce model.

The common enabler of all these activities is the internet. Recent years have seen innovation across all of its layers – from infrastructure to data rights – resulting in an unprecedented capacity for people to work together, share skills and pool information to understand how the world around them is changing and respond to challenges. This enhanced capacity is known as collective intelligence (CI).

The internet certainly needs fixing – from the polarising effect of social media on political discourse to the internet’s perpetual concentration of wealth and power and its poorly understood impact on the environment. But turning to the future, it’s equally clear that there is great promise in the ability of emerging technologies, new governance models and infrastructure protocols to enable entirely new forms of collective intelligence that can help us solve complex problems and change our lives for the better. 

Based on examples from Nesta’s recent report, The Future of Minds & Machines, this blog shows how an internet based on five core values can serve to combine distributed human and machine intelligence in new ways and help Europe become more than the sum of its parts. 

We have been mapping projects that bring Artificial Intelligence and Collective Intelligence together.
Source: nesta.org.uk

Resilience

Resilience is a core value for the future internet. It means secure infrastructure and the right balance between centralisation and decentralisation. But it also means that connected technologies should enable us to better respond to external challenges. Online community networks that can be tapped into and mobilised quickly are already an important part of the 21st century humanitarian response. 

Both Amnesty International and Humanitarian OpenStreetMap have global communities of volunteers, numbering in the thousands, who participate in distributed micromapping efforts to trace features like buildings and roads on satellite images. These online microtasking platforms help charities and aid agencies understand how conflicts and environmental disasters affect different regions around the world, enabling them to make more informed decisions about the distribution of resources and support.

More recently, these platforms have started to incorporate elements of artificial intelligence to support the efforts of volunteers. One such initiative, MapWithAI, helps digital humanitarians to prioritise where to apply their skills to make mapping more efficient overall. 

The internet also enables and sustains distinct communities of practice, like these groups of humanitarian volunteers, allowing individuals with similar interests to find each other. This social and digital infrastructure may prove invaluable in times of crisis, when there is a need to tap into a diversity of skills and ideas to meet unexpected challenges.

In the future, collective intelligence may also help improve our ability to cooperate and share resources, such as food and energy, effectively between and within groups. At Nesta’s Centre for Collective Intelligence Design (CCID), we are supporting research that asks whether different levels of social connectivity within and between overlapping social groups on an online platform can improve coordination in response to collective crises. Experiments like this one will help us to understand how the internet can be organised to support more collectively intelligent and resilient behaviours.

Inclusiveness

The need to consider a diversity of information, opinions and ideas is a key factor in the success of any collective intelligence initiative. This is true for small group interactions – which have been shown to require cognitive diversity of participants to improve problem solving, creativity and learning – as well as large-scale initiatives such as crowd predictions, where individuals making mistakes in slightly different ways ensures that the collective estimate holds. If we want to address challenges facing the whole of society, we need solutions designed for everyone.

One example of collective intelligence improving inclusiveness – while also taking an inclusive-by-design approach – is Mozilla’s Common Voice project, which uses an accessible online platform to crowdsource the world’s largest open dataset of diverse voice recordings, spanning different languages, demographic backgrounds and accents. 

The Common Voice project crowdsources diverse voices, accents and underrepresented languages

Ensuring diversity of contributions is not easy. It requires a deliberate effort to involve individuals with rare knowledge, such as members of indigenous cultures or speakers of unusual dialects. But a future internet built around an inclusive innovation ecosystem, products that are inclusive-by-design, and fundamental rights for the individual – rather than a closed system built around surveillance and exploitation – will make it easier for projects like Common Voice to become the norm. 

Democracy

The future internet should have the ambition to protect democratic institutions and give political agency to all – but it should also itself be an expression of democratic values. That means designing for more meaningful bottom-up engagement of citizens, addressing asymmetric power relationships in the digital economy and creating spaces for different voices to be heard. 

Both national and local governments worldwide are starting to appreciate the opportunities that the internet and collective intelligence offer in helping them to better understand the views of their citizens. Parliaments from Brazil to Taiwan are inviting citizens to contribute to the legislative process, while cities like Brussels and Paris are asking their residents to help prioritise spending through participatory budgeting. The EU is also preparing a Conference on the Future of Europe to engage citizens at scale in thinking about the future of the bloc, an effort that could be enhanced and facilitated through CI-based approaches like participatory futures. These types of activities can help engage a greater variety of individuals in political decision-making and redefine the relationships between politicians and the constituents they serve.

Unfortunately, some citizen engagement initiatives are still driven by tech-solutionism without a clear market need, rather than by the careful design of participation processes that make the most of the collective contributions of citizens. Even when digital democracy projects start out with the best intentions, politicians can struggle to make sense of this new source of insight, which risks valuable ideas being overlooked and trust in democratic processes being diminished.

There are signs that this is changing. For example, the collective intelligence platform Citizen Lab is trying to optimise the channels of communication and interpretation between citizens and politicians. It has started to apply natural language processing algorithms to help organise and identify themes in the ideas that citizens contribute through its platform, helping public servants to make better use of them. Citizen Lab is used by city administrations in more than 20 countries across Europe and offers a glimpse of how Europe can set an example of democratic collective intelligence enabled by the infrastructure of the internet.

Trust

A closely related challenge for the internet today is the continued erosion of trust – trust in the veracity of information, trust between citizens online, and trust in public institutions. The internet of the future will have to find ways of dealing with challenges like digital identities and the safety of our everyday online interactions. But perhaps most importantly, the internet must be able to tackle the problems of information overload and misinformation through systems that optimise for fact-based and balanced exchanges, rather than outrage and division.

We have seen some of the dangers of fake news manifest as part of the response to COVID-19. At a time when accurate public health messaging and government communications are a matter of life and death, the cacophony of information on the internet can make it hard for individuals to distinguish the signal from the noise.

Undoubtedly, part of the solution to effectively navigating this new infosphere will require new forms of public-private partnership. By working with media and technology giants like Facebook and Twitter, governments and health agencies worldwide have started to curb some of the negative effects of misinformation in the wake of the coronavirus pandemic. But the commitment to a trustworthy internet is a long-term investment. It will rely not only on the actions of policymakers and industry to develop recognisable trustmarks, but also on a more literate citizenry that is better able to spot suspicious materials and flag concerns.

A tweet by the UK Government warning about misinformation in the wake of the Covid-19 pandemic.

Many existing fact-checking projects already use crowdsourcing at different stages of the verification process. For example, the company Factmata is developing technology that will draw on specialist communities of more than 2,000 trained experts to help assess the trustworthiness of online content. However, crowdsourced solutions can be vulnerable to bias, polarisation and gaming, and will need to be complemented by other sources of intelligence, such as expert validation or entirely new AI tools that can help mitigate the effects of social bias.

Sustainability

Undoubtedly, some of our biggest challenges are yet to come. But the internet holds untapped potential for us to build awareness for the interdependency of our social and natural environments. We need to champion models that put the digital economy at the service of creating a more sustainable planet and combating climate change, while also remaining conscious of the environmental footprint these systems have in their own right.

Citizen science is a distinct family of collective intelligence methods in which volunteers collect data, make observations or perform analyses that help to advance scientific knowledge. Citizen science projects have proliferated over the last 20 years, in large part due to the internet. For example, the most popular online citizen science platform, Zooniverse, hosts over 50 different scientific projects and has attracted over 1 million contributors.

A large proportion of citizen science projects focus on the environment and ecology, helping to engage members of the public outside of traditional academia with issues such as biodiversity, air quality and pollution of waterways. iNaturalist is an online social network that brings together nature lovers to keep track of different species of plants and animals worldwide. The platform supports learning within a passionate community and creates a unique open data source that can be used by scientists and conservation agencies. 

Beyond the direct use of citizen-generated data for environmental monitoring and for tracking progress towards the sustainable development goals, online citizen science and community monitoring projects can lead to increased awareness of sustainability issues and longer-term pro-environmental behavioural change for those involved.

Building the Next Generation Internet – with and for collective intelligence

To enable next-generation collective intelligence, Europe needs to look beyond ‘just AI’ and invest in increasingly smarter ways of connecting people, information and skills, and facilitating interactions on digital platforms. The continued proliferation of data infrastructures, public and private sector data sharing and the emergence of the Internet of Things will play an equally important part in enhancing and scaling up collective human intelligence. Yet, for this technological progress to have a transformative and positive impact on society, it will have to be put in the service of furthering fundamental values. Collective intelligence has the opportunity to be both a key driver and beneficiary of a more inclusive, resilient, democratic, sustainable and trustworthy internet. 

At this moment of global deceleration, we suggest it is time to take stock of the internet’s old trajectories and set out on a new course, one that allows us to make the most of the diverse collective intelligence we have within society and become better at solving complex problems. The decisions we make today will help us shape the society of the future.

Aleks is a Senior Researcher and Project Manager for Nesta’s Centre for Collective Intelligence Design (CCID). The CCID conducts research and develops resources to help innovators understand how they can harness collective intelligence to solve problems. Our latest report, The Future of Minds & Machines mapped the various ways that AI is helping to enhance and scale the problem solving abilities of groups. It is available for download on the Nesta website, where you can also explore 20 case studies of AI & CI in practice.

Post

Predicting future trends from past novelty: Trend estimation on social media


Significant developments in internet technologies are being made across a wide range of fields – breakthroughs that will undoubtedly have a profound impact on society and be highly disruptive in nature. Given the importance of these technological developments, insights into emerging technology trends can be of great value for governments and policy-makers, enabling them to respond to the complex dilemmas that emerge around new technologies.

Today, trends can emerge from many different platforms. For instance, the growth of social media enables citizens to generate and share information in unprecedented ways, and thus sociocultural trends from social media have become an important part of knowledge discovery. Social media can provide policy makers with insights about which internet-related technologies and social issues are frequently discussed, and which trends are emerging in the discussions. Such insights can help identify new technologies and issues and create proper policy or regulatory responses.

 

A new approach to social media trend estimation

Accurate trend estimation on social media is, however, a matter of debate in the research community. Standard approaches often suffer from methodological issues: by focusing solely on spiky behavior, they treat trend detection as if it were the detection of natural catastrophes or epidemics.

To remedy these issues, our team at DATALAB at Aarhus University in Denmark developed a new approach to trend estimation that combines domain knowledge of social media with advances in information theory and dynamical systems. In particular, trend reservoirs, i.e. signals that display trend potential, are identified by the relationship between their novel and resonant behavior and by a minimal degree of persistence.

The model estimates Novelty as a reliable difference from the past – how much does the content diverge from the past – and Resonance as the degree to which future information conforms to the Novelty – to what degree does the novel content ‘stick’. Using calculations of Novelty and Resonance, trends are then characterized by a strong Novelty-Resonance association and long-range memory in the information stream. Results show that these two ‘signatures’ capture different properties of trend reservoirs, information stickiness and multi-scale correlations respectively, and they both have discriminatory power, i.e. they can actually detect trend reservoirs.
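To make this concrete, here is a minimal sketch of how Novelty and Resonance can be computed, assuming each document in a time-ordered stream is represented as a probability distribution (for example, an LDA topic mixture) and that divergence from the past and the future is measured with relative entropy over a fixed window. The window size, the divergence measure and the toy data are illustrative assumptions, not necessarily DATALAB’s exact configuration.

```python
import numpy as np

def kld(p, q, eps=1e-12):
    """Relative entropy (KL divergence) between two probability vectors."""
    p, q = np.asarray(p, dtype=float) + eps, np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log2(p / q)))

def novelty_resonance(distributions, window=7):
    """Novelty of document j: average divergence from the preceding `window`
    documents. Transience: average divergence from the following `window`.
    Resonance = Novelty - Transience, i.e. how much of the new content
    'sticks' in the subsequent information stream."""
    n = len(distributions)
    novelty = np.full(n, np.nan)
    resonance = np.full(n, np.nan)
    for j in range(window, n - window):
        past = [kld(distributions[j], distributions[j - d]) for d in range(1, window + 1)]
        future = [kld(distributions[j], distributions[j + d]) for d in range(1, window + 1)]
        novelty[j] = np.mean(past)
        resonance[j] = np.mean(past) - np.mean(future)
    return novelty, resonance

# Toy example: 100 documents described by mixtures over 5 topics.
rng = np.random.default_rng(0)
docs = rng.dirichlet(np.ones(5), size=100)
novelty, resonance = novelty_resonance(docs, window=7)
```

A strong Novelty-Resonance association could then be summarised, for instance, by the slope of a regression of Resonance on Novelty, and long-range memory in the stream probed with a standard estimator such as the Hurst exponent; these are offered only as plausible operationalisations of the two ‘signatures’ described above, not as the study’s exact procedure.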


Case study: AI discussions on Reddit

To exemplify the application of the model for trend estimation, it is applied on the social media site Reddit to discover innovative discussions related to artificial intelligence (AI). 

Reddit hosts discussions about text posts and web links across hundreds of topic-based communities called “subreddits” that often target specialized expert audiences on these topics. Topically defined discussions are thus an important part of the appeal of Reddit, unlike the information dissemination focus of Twitter, and the specialized audiences make it a promising source for topical discussions on, for example, internet technology.

The most trending subreddits are discovered using the model on a sample of subreddits with the highest overlap between their descriptions and a seed list of AI-related terms. The top 10 most relevant subreddits in terms of content matching can be found in Table 1 in two categories: Leaders and Prospects. The Leaders are subreddits with a substantial number of posts (more than 2560), while the Prospects are subreddits that can be small but rank the highest on the content matching. 

Table 1: The top 10 subreddits by content matching, split into Leaders and Prospects.
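As an illustration of the content-matching step that produces a table like this, here is a hypothetical sketch in which subreddits are scored by how many terms from an assumed seed list appear in their descriptions and then split into Leaders and Prospects. The seed terms, the data structure and the scoring rule are invented for illustration; only the 2,560-post threshold for Leaders comes from the text above.

```python
# Hypothetical illustration of ranking subreddits by the overlap between
# their descriptions and a seed list of AI-related terms.
SEED_TERMS = {"artificial intelligence", "machine learning", "neural",
              "deep learning", "computer vision", "nlp"}  # assumed seed list

def match_score(description: str) -> int:
    """Count how many seed terms occur in a subreddit description."""
    text = description.lower()
    return sum(term in text for term in SEED_TERMS)

def leaders_and_prospects(subreddits, min_posts=2560, top_k=10):
    """Rank subreddits by content match; Leaders must have a substantial
    number of posts, Prospects are the best matches regardless of size."""
    ranked = sorted(subreddits, key=lambda s: match_score(s["description"]), reverse=True)
    leaders = [s["name"] for s in ranked if s["n_posts"] > min_posts][:top_k]
    prospects = [s["name"] for s in ranked][:top_k]
    return leaders, prospects

# Toy input: each subreddit described by its sidebar text and post count.
sample = [
    {"name": "r/MachineLearning", "description": "Machine learning and deep learning research", "n_posts": 50000},
    {"name": "r/tinyml", "description": "Neural networks on tiny devices", "n_posts": 300},
]
print(leaders_and_prospects(sample))
```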
We can then use the trend estimation model to classify subreddits as maximally trending, trending or not trending. In Table 1, the red subreddits are classified as maximally trending while the blue ones are not. In the Leaders category, subreddits have to be trending on both signatures, i.e. have a strong Novelty-Resonance association and display long-range memory, to qualify as maximally trending. Prospects can qualify as maximally trending with only one of the two signatures.

Leaders are relevant because of their many posts of potentially trending content, while Prospects can be used to discover singular new trends. A recommender engine can be trained with this classifier to identify trending subreddits within any given subject. Such classifications can be extremely useful for decision support in terms of which subreddits to follow for a continuous stream of information on trends in, for example, AI.

After the classification, the content of the most trending subreddits can be explored. To do this, we trained a neural embedding model to query the highest-ranking words and their associated words, providing insights into, for example, the contexts in which the technologies are discussed.
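The sketch below shows one way this exploration step could look, assuming a gensim Word2Vec model trained on tokenised posts from a trending subreddit and then queried for the nearest neighbours of a high-ranking word; the toy posts and hyperparameters are assumptions, and the original analysis may have used a different embedding model or preprocessing.

```python
from gensim.models import Word2Vec

# Toy corpus: each post from the subreddit as a list of lowercase tokens.
posts = [
    ["tensorflow", "and", "pytorch", "tutorial", "for", "gpu", "training"],
    ["keras", "runs", "on", "tensorflow", "with", "gpu", "support"],
    ["reinforcement", "learning", "tutorial", "using", "pytorch"],
]

# Train a small skip-gram embedding model on the posts.
model = Word2Vec(sentences=posts, vector_size=100, window=5,
                 min_count=1, sg=1, epochs=50, seed=0)

# Query a high-ranking word for its nearest neighbours in the embedding
# space; edges between such neighbours are what a concept graph visualises.
print(model.wv.most_similar("tensorflow", topn=5))
```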

Figure 1: Concept graph of words used in r/MachineLearning and their associations.
This method of content exploration produces concept graphs such as the one above, showing words used in the subreddit and their associations. Figure 1 is a concept graph from the trending subreddit r/MachineLearning – we can call the graph TOOL-DIVERSIFICATION. As data science and machine learning are complicated fields, many classes of tools are necessary to develop state-of-the-art deep learning models, and the concept graph shows three clusters of interest: 

  • The upper right corner shows important tools related to “Hardware and Cloud” technologies (NVIDIA, GPU, TPU, AWS, server, Ubuntu, Gluon), which are all characteristic of GPU-accelerated high-performance computing; 
  • The cluster in the center left of the graph is dominated by the most important deep learning “Software Libraries” in Python (Tensorflow, PyTorch, Keras, Theano) and related languages (JavaScript, Java, Caffe, MATLAB); 
  • In the lower part, the graph displays “Classes of Problems” (supervised, unsupervised, reinforcement learning). 

Two further observations can be made. Firstly, “Tutorial” is highly connected to all clusters, supporting the fact that tutorials have become one of the primary ways of assimilating the diverse tools in the machine learning community. Secondly, software libraries, packages and frameworks take a central role in the graph. They all signify bundles of pre-existing code that minimise the amount of programming and hardware understanding required of machine learning enthusiasts.

These observations indicate that the subreddit does not consist of solely professional machine learning developers, but rather constitutes a community of machine learning enthusiasts with a do-it-yourself approach to machine learning. 

This is just an example of the content that can be extracted after identifying the most trending subreddits within the topic of investigation with the model for trend estimation. This approach to the estimation of trend reservoirs generalizes to other data sources, such as Twitter, and other data types, such as images. 

Post

Mapping the tech world with text-mining: Part 1.


Introduction

As part of the NGI Forward project, DELab UW is supporting the European Commission’s Next Generation Internet initiative with identifying emerging technologies and social issues related to the Internet. Our team has been experimenting with various natural language processing methods to discover trends and hidden patterns in different types of online media. You may find our tools and presentations at https://fwd.delabapps.eu.

This series of blog posts presents the results of our latest analysis of technology news. We have two main goals:

  1. to discover the most important topics in news discussing emerging technologies and social issues,
  2. to map the relationship between these topics.

Our text mining exercises are based on a technology news data set that consists of 213 000 tech media articles. The data has been collected for a period of 40 months (between 2016-01-01 and 2019-04-30), and includes the plain text of articles. As the figure shows, the publishers are based in the US, the UK, Belgium and Australia. More information on the data set is available at a Zenodo repository.

In this first installment, we focus on a widely used text-mining method: Latent Dirichlet Allocation (LDA). LDA gained its popularity due to its ease of use, flexibility and interpretable results. First, we briefly explain the basics of the algorithm for all non-technical readers. In the second part of the post, we show what LDA can achieve with a sufficiently large data set.

LDA

Text data is high-dimensional. In its most basic form – but one comprehensible to computers – it is often represented as a bag-of-words (BOW) matrix, where each row is a document and each column contains a count of how often a word occurs in that document. These matrices can be transformed with linear algebra methods to discover the hidden (latent and lower-dimensional) structure in them.
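As a concrete illustration, the following minimal sketch builds a bag-of-words matrix with scikit-learn; the three toy documents are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "Facebook and Twitter face new privacy regulation",
    "A new smartphone with a better camera and display",
    "Privacy concerns over smartphone apps",
]

vectorizer = CountVectorizer()
bow = vectorizer.fit_transform(docs)        # rows = documents, columns = words

# On scikit-learn versions before 1.0, use get_feature_names() instead.
print(vectorizer.get_feature_names_out())   # the vocabulary (column labels)
print(bow.toarray())                        # word counts per document
```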

Topic modeling assumes that documents, such as news articles, contain various distinguishable topics. As an example, a news article covering the Cambridge Analytica scandal may contain the following topics: social media, politics and tech regulations, with the following relations: 60% social media, 30% politics and 10% tech regulations. The other assumption is that topics contain characteristic vocabularies, e.g. the social media topic is described by the words Facebook, Twitter etc.

LDA was proposed by Blei et al. (2003) and is based on Bayesian statistics. The method’s name captures its key foundations. Latent comes from the assumption that documents contain latent topics that we do not know a priori. Allocation shows that we allocate words to topics, and topics to documents. Dirichlet refers to the Dirichlet distribution, the conjugate prior of the multinomial: it describes how probability is shared among any number of outcomes. As an example, a Dirichlet distribution can describe the occurrences of observed species in a safari (Downey, 2013). In LDA, it describes the distribution of topics in documents, and the distribution of words in topics.

The basic mechanism behind topic modelling methods is simple: assuming that documents can be described by a limited number of topics, we try to recreate our texts from a combination of topics that consist of characteristic words. More precisely, we aim to recreate our BOW word-document matrix from the combination of two matrices: one containing the Dirichlet distribution of topics in documents (the topic-document matrix), and one containing the words in topics (the word-topic matrix). The final matrices are constructed through a process called Gibbs sampling. The idea behind Gibbs sampling is to introduce changes into the two matrices word by word: change the topic allocation of a selected word in a document, and evaluate whether this change improves the decomposition of that document. Repeating these steps across all documents yields the final matrices that best describe the sample.
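For readers who want to try this themselves, here is a minimal sketch of fitting LDA with the gensim library on a toy corpus. The corpus, the number of topics and the number of passes are illustrative assumptions (the study itself used 213,000 articles and 20 topics), and gensim’s LdaModel uses online variational inference rather than Gibbs sampling, although its inputs and outputs are the same kind of topic-document and word-topic distributions described above.

```python
from gensim import corpora
from gensim.models import LdaModel

# Toy corpus: pre-tokenised 'articles'.
texts = [
    ["facebook", "privacy", "election", "content", "ban"],
    ["samsung", "apple", "camera", "display", "smartphone"],
    ["nasa", "spacex", "mars", "moon", "launch"],
    ["facebook", "privacy", "gdpr", "regulation"],
]

dictionary = corpora.Dictionary(texts)                  # word <-> id mapping
corpus = [dictionary.doc2bow(text) for text in texts]   # BOW representation

lda = LdaModel(corpus=corpus, id2word=dictionary,
               num_topics=3, passes=20, random_state=0)

# Characteristic words per topic (the word-topic matrix).
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)

# Topic mixture of the first document (the topic-document matrix).
print(lda.get_document_topics(corpus[0]))
```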

For more details on topic modelling, we recommend this and this excellent post. For the full technical description of this study, head to our full report.

Results

The most important parameter of topic modelling is the number of topics. The main objective is to obtain a satisfactory level of topic separation, i.e. a situation in which topics are neither lumped together into a few catch-all issues nor overly fragmented. To achieve this, we experimented with different LDA hyperparameter settings. With 20 topics, the topics were balanced and separable.

Therefore, we identified 20 major topics, which are presented in the visualisation below. Each circle represents a topic (its size reflects the topic’s prevalence in the documents), with distances determined by the similarity of vocabularies: topics sharing the same words are closer to each other. In the right panel, the bars represent the individual terms that are characteristic of the currently selected topic on the left. A pair of overlapping bars represents both the corpus-wide frequency of a given term and its topic-specific frequency. We managed to reach gradually decreasing topic sizes: the largest topic has a share of 19%, the 5th 8%, and the 10th 5%.
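An interactive view of this kind, with circles sized by topic prevalence and paired bars for corpus-wide versus topic-specific term frequencies, can be produced with the pyLDAvis library from a fitted gensim model. The tiny stand-in corpus below is only there to make the sketch self-contained; we do not claim pyLDAvis is the exact tool behind the original visualisation, and on older pyLDAvis releases the module is pyLDAvis.gensim rather than pyLDAvis.gensim_models.

```python
import pyLDAvis
import pyLDAvis.gensim_models as gensimvis
from gensim import corpora
from gensim.models import LdaModel

# Tiny stand-in corpus; in practice this would be the fitted 20-topic model.
texts = [
    ["facebook", "privacy", "election"],
    ["samsung", "apple", "camera", "display"],
    ["nasa", "spacex", "mars", "launch"],
    ["privacy", "gdpr", "facebook", "regulation"],
]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=3,
               passes=20, random_state=0)

# Circles: topics sized by prevalence, positioned by vocabulary similarity.
# Bars: corpus-wide vs topic-specific term frequencies.
vis = gensimvis.prepare(lda, corpus, dictionary)
pyLDAvis.save_html(vis, "lda_topics.html")
```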

After studying these most relevant terms, we labeled each topic with the closest umbrella term. Upon closer examination, we reduced the number of topics to 18 (topics 5 & 16 were merged into the joint category Space tech, while topics 10 & 19 were merged to form a topic on Online streaming). In the following sections we provide brief descriptions of the identified topics.


AI & robots

AI & robots constitutes the largest topic containing around 19% of all tokens and is characterized by machine learning jargon (e.g. train data) as well as popular ML applications (robots, autonomous cars).


Social media crisis

The social media topic is similarly prevalent and covers contentious aspects of modern social media platforms (facebook, twitter), such as the right to privacy, content moderation, user bans and election meddling through microtargeting (i.a. privacy, ban, election, content, remove).


Business news

A large share of tech articles covers business news, especially on major platforms (uber, amazon), services such as cloud computing (aws) and emerging technologies such as IoT and blockchain. The topic words also suggest a strong focus on the financial results of tech companies (revenue, billion, sale, growth).


Smartphones

Topic 4 covers articles about the $522B smartphone market. Two major manufacturers, Samsung and Apple, sit at the top of the keyword list with an equal number of appearances. Articles focus on the features, parameters and additional services provided by devices (camera, display, alexa etc.).


Space

Space exploration excitement is common in the tech press. Topic 5 contains reports about NASA, future Mars and Moon missions, as well as companies working on space technologies, such as SpaceX.


Privacy

Topic 6 revolves around the Cambridge Analytica privacy scandal and gathers all mentions of this keyword in the corpus. The involvement of Cambridge Analytica in the Leave campaign during the Brexit referendum is a major focus, as suggested by the high position of keywords such as eu and uk. Unsurprisingly, GDPR is also often mentioned in articles dealing with the aftermath of the CA controversy.


Cybersecurity

Topic 7 pertains to cyberspace security issues. It explores malware and system vulnerabilities targeting both traditional computer systems and novel decentralised technologies based on blockchain.


5G

The much-anticipated fifth-generation wireless network has huge potential to transform all areas with an ICT component. Topic 8 deals with the global competition over delivering 5G tech to the market (huawei, ericsson). It also captures the debate about 5G’s impact on net neutrality: because 5G enables signal ‘segmentation’, there is debate over whether it can be treated like previous generations of mobile communications under net neutrality laws.


Cross platforms

The focus of Topic 9 is on operating systems, both mobile (ios, android) and desktop (windows, macos), as well as dedicated services (browsers: chrome, mozilla) and app stores (appstore).



Media

Topic 10 revolves around the most important media platforms: streaming and social media. The global video streaming market was valued at around USD 37B in 2018; music streaming adds another 9B and accounts for nearly half of the music industry’s revenue. In particular, this topic focuses on major streaming platforms (youtube, netflix, spotify), social media (facebook, instagram, snapchat), the rising popularity of podcasts and the business strategies of streaming services (subscriptions, ads).


Microsoft

During its 40-year history, Microsoft has made over 200 acquisitions. Some of them were considered successful (e.g. LinkedIn, Skype), while others were less so… (Nokia). Topic 11 collects articles describing Microsoft’s finished, planned and failed acquisitions in recent years (github, skype, dropbox, slack).


Autonomous vehicles

Autonomous transportation is a vital point of public debate. Policymakers should consider whether to apply subsidies or taxes to equalise the public and private costs and benefits of this technology. AV technology offers the possibility of significant benefits to social welfare: saving lives; reducing crashes, congestion, fuel consumption and pollution; increasing mobility for the disabled; and ultimately improving land use (RAND, 2016). Topic 12 addresses technological shortcomings of the technology (batteries) as well as positive externalities such as lower emissions (epa, emissions).


Tesla

LDA modelling identified Tesla and other Elon Musk projects as a separate topic. Besides Tesla’s development of electric and autonomous vehicles, the topic also includes words related to other mobility solutions (e.g. Lime).


CPU and other hardware

Topics 14 & 15 focus on hardware. Topic 14 covers the CPU innovation race between Intel and AMD, as well as the Broadcom-Qualcomm acquisition saga, blocked by Donald Trump due to national security concerns. Topic 15 includes news regarding various standards (usb-c), storage devices (ssd) etc.


Startups

Topic 17 concentrates on startup ecosystems and crowdsourced financing. Articles discuss major startup competitions such as Startup Battlefield and Startup Alley, and crowdfunding services such as Patreon.


Wearables

We observe a surge in the adoption of wearables, such as fitness trackers, smartwatches and augmented and virtual reality headsets. This trend brings important policy questions. On the one hand, wearables offer tremendous potential for monitoring health. On the other hand, this potential might be overshadowed by concerns about user privacy and access to personal data. Articles in topic 18 discuss news from the wearables world regarding new devices, novel features etc. (fitbit, heart rate).


Gaming

Topic 20 deals with the gaming industry. It covers, inter alia, popular games (pokemon), gaming platforms (nintendo), various game consoles (switch) and game expos (e3).


Conclusions

We provided a bird’s-eye view of the technology world with topic modelling. Topic modelling serves as an appropriate basis for exploring broad topics, such as the social media crisis, AI or business news. At this stage, we were able to identify the major umbrella topics that ignite public debate.

In the next post, we will introduce another machine learning method: t-SNE. With the help of this algorithm, we will create a two-dimensional map of the news, where articles covering the same topic will be neighbours. We will also show how t-SNE can be combined with LDA.
