World leaders are descending on Bletchley Park this week for the AI Safety Summit – pitched as a conversation about ‘frontier AI’. It will “focus on certain types of AI systems based on the risks they may pose”, with selected countries, academics, civil society representatives and companies “who have already done thinking [sic] in this area” present.
I understand the enthusiasm for a selective conversation, and I recognise that opportunities have been created for a wider debate, including the AI Fringe, in which my organisation – the Open Data Institute (ODI) – is a partner. But the logic and the assumptions on which this selectivity is based raise their own challenges.
If such a small group holds all of the most advanced thinking in its heads and hands, then our society, our economy and the planet have a problem. There is now a greater need than ever for leaders who are equipped to understand the value, limitations and opportunities offered by advances in AI technology and data.
This cohort needs to exist beyond Silicon Valley and our most prominent academic and government institutions. It also needs to take root in communities and coffee shops, in traditional businesses and institutions, as well as those at the cutting edge of technology.
At the heart of this challenge is the need for digital and data literacy at scale across all areas of society, a theme that formed part of the conversation when I appeared before the Science and Technology Select Committee in February. This literacy goes beyond the technical skills needed to work with data: it is the ability to think critically about data in different contexts and to examine the impact of different approaches to collecting, using and sharing data and information.
As I said to the committee, we will not all be data scientists but we should have some appreciation of how the data landscape can work effectively and equitably.
Twenty years ago, swathes of the workforce were busy acquiring credentials in digital literacy, and they still are. This often means skills in using the power tools of the time, including Microsoft's products, from spreadsheets to PowerPoint, databases to document creation. The descendants of those tools will be turbocharged with AI: Microsoft's Copilot, for instance, is suffused with generative AI, and users will need to understand how to harness this type of AI and the data it depends on.
This is a learning moment, and one that offers us a chance to lead, both at work and in the wider community. We need deep technical expertise, good science and entrepreneurship, but the reality is that many people will never write much code. People should nonetheless be able to understand, in outline, the technologies they work with and that feature in their lives.
They should be able to make good choices for themselves, their families and their organisations. They should understand that poor or biased data will lead to poor or biased AI decision-making, and that disinformation and misinformation are real and present concerns. The New York Times, the BBC, the Washington Post and many others have illustrated this danger in their reporting on social media platforms flooded with fabricated ‘facts’ about the situation in Israel and Gaza.
There is no AI without data, and we are already seeing how crucial that data is for generative AI models like ChatGPT, which ingest huge amounts of content. We are beginning to see significant pushback from human content creators and companies who believe their data has been appropriated without regard to their rights and interests.
We have also seen the first examples of organised human labour winning disputes with employers around the use of generative AI in their industries. There is a great deal at stake, and understanding how AI will embed itself in our economies and societies is an issue that will necessarily involve many stakeholders and a plurality of voices.
Humanity is currently ill-equipped to realise the full benefits of the technologies in our midst and the potential of AI for good. Broad data literacy could put the UK at a huge advantage, but the signs from the business world do not augur well. A recent survey reported in Harvard Business Review shows that, despite the rapid advancement of technology across every area of society and the economy, the prioritisation of data in organisations is on a downward trajectory rather than an upward one. In the survey, just 23.9% of executives reported that their companies had created a data-driven organisation, down from 31% four years ago.
When it comes to creating a culture that both understands and embraces the potential of data, analytics and AI, the numbers are worse. While companies are investing in technology initiatives, including data products and AI/machine learning projects, nearly 80% of executives said they had encountered cultural impediments (described in the study as “people, business process, organisational alignment”). Yet only 1.6% of them named data literacy as their top investment priority, and just 23.8% said that industry has done enough to establish data ethics policies.
We have been here before – at least in part. Reflecting on the dotcom era, Forbes makes the point that “every organisation was transformed by the mass accessibility of the personal computer, user-friendly software and ubiquitous internet access” and it needed leaders “to understand how these tools could be used together to access new business and improve operations.”
So here the world is again – with a need to address gaps in data and digital literacy as a priority. Oddly enough, the demystification of these technologies could, at least in part, come from the technologies themselves.
In the learning arena, for example, AI assistants can support the analysis of quantitative data. This has the potential to cut down on trainer time for basic tasks, speed up feedback, shorten the overall duration of training, and accelerate the adoption of digital and data skills. This means that tutors can spend their time adding greater value for the learner, focusing on their personal needs and the areas that most require human intervention.
There are dozens of other examples that could help humans, whether in the classroom or the boardroom, keep abreast of, take advantage of, and be alert to the risks of technological advancements.
As the Bletchley Park Summit gets underway, let everyone, whether involved in the central event or not, be alert to the needs of a wider society that must understand the technologies being discussed and how they should be embedded in our economies and societies to promote human flourishing and AI for good. Such understanding is vital if we are to address the opportunities, as well as the risks, attached to ‘frontier AI’.
Nigel Shadbolt is executive chair of the Open Data Institute, which he co-founded with Tim Berners-Lee, and is one of the UK’s foremost computer scientists. He is a leading researcher in artificial intelligence. He is principal of Jesus College, Oxford, a professor of computer science at the University of Oxford and a visiting professor of artificial intelligence at the University of Southampton.
Picture by Getty Images / Eugene Mymrin