World Summit AI

I was invited by the Scottish AI Alliance to attend the World Summit AI conference in Amsterdam this month in my capacity as Project Lead for Children’s Parliament on ‘Exploring Children’s Rights and AI’, a project created and run in partnership with the Scottish AI Alliance and The Alan Turing Institute. I work in the field of children’s human rights and have a background in primary education, so launching myself into the world of tech at a vast AI conference was something of a new experience. I’ve been working on the project for a year now, and have learned a lot in that time, but I was excited to both deepen and broaden my understanding of where AI is now, and where it’s going next.

Settling into the main hall for the opening talks on the first day, I had the distinct sense that the organisers wished to instil a futuristic aura into proceedings. The first speakers were preceded by a video mash-up of scenes from Hollywood films and clips from TED Talks, loosely linked by references to ‘changing the world’ and overlaid with a DJ-style ‘scratching’ effect that was perhaps supposed to hint at the power of AI as a ‘disruptor’, to borrow a buzzword heard often over the two days. Considerable thought and resources had clearly been put into creating a sense of excitement and anticipation, something the organisers presumably hoped would carry through to the ‘exclusive announcement’ that a further World Summit AI would be hosted in Qatar. Instead, an audible ripple of disquiet went through the crowd. The question on my mind was to what extent a human rights agenda in general, and a children’s human rights agenda specifically, would feature across the two days.

Some extremely innovative uses of AI were certainly discussed and on display at the venue (the Taets Art and Event Park, across the water from Amsterdam in Zaandam). From ‘virtual medicine programmes’ – which will apparently harness VR and ChatGPT to create AI-powered Cognitive Behavioural Therapy (CBT) – to an AI system aimed at spotting abusive behaviour towards animals in abattoirs, the sheer breadth of applications for this varied and rapidly developing technology can feel a little overwhelming. Across the many talks and conversations there were frequent references to ‘ethical’ and ‘responsible’ AI, but these weren’t always followed up with much detail on what that meant in practice. When I spoke to somebody from the company behind the abattoir surveillance application, they conceded that such technology throws up serious ethical questions regarding its use – particularly given data protection and other laws forbidding the surveillance of employees – and its potential misuse for the surveillance of other ‘undesirable’ behaviours. That ethical questions such as these remain unresolved while the rate of development accelerates is instructive when considering the growing volume of voices, from within and without the AI sector, calling urgently for better regulation.

More encouraging, then, from a rights perspective, is the emerging legislative field. Many of the businesses exhibiting this year made specific reference to services that would help companies navigate the incoming EU AI Act. Despite the insistence from one or two speakers that the industry could ‘self-regulate’, or that a ‘flexible’ approach to regulation is required to support ‘innovation’ (which felt like a fairly transparent call from the private sector to be left un- or minimally regulated), the prevailing impression was that the incoming EU regulation is likely to be fairly robust. What this means for countries outside the EU, such as the UK, is less clear, although big tech companies’ desire for consistency across a global market may lead others to follow suit.

A panel discussion on ‘The Power of International Standardisation’ addressed this idea in more detail, though from a standardisation rather than a regulatory perspective. “Standards are not neutral,” a panel member explained – they reflect the values and interests of those writing them. What was a little disheartening in this instance was that the discussion centred on businesses involving themselves in these processes to protect their own interests, rather than the interests of people more generally. One hopes that if all stakeholders – which, given the reach AI now has, essentially means representatives from every section of our society – were involved in creating regulations and standards for the sector, we could move towards a future in which the use and benefits of the technology are equitably distributed.

So where were the voices of children at a conference billed as ‘the only AI summit in the world that matters’? Besides a panel featuring UNICEF’s Irina Mirkina (which I unfortunately missed in my dash to the airport) and a brief video appearance from an Afghan child during the opening address by Sarah Porter, CEO of event organiser InspiredMinds, they appeared to be entirely absent. Today’s children will grow up with AI as an ever-present feature of their lives, and it is a technology that presents specific challenges and opportunities for them. If AI is to be developed and used in a manner that can truly be considered ‘ethical’ and ‘responsible’ – as so many of those present at World Summit AI purported to be aiming for – then the views, ideas and interests of children need to be taken into account. We can only hope that by the time the next summit looms into view this message will have seeped further into the general consciousness.


Gregory Metcalfe
Project Lead, National Programmes

Date: 7th November 2023