Three Major Challenges for Achieving Human-Like AI

David Burden
8 min read · Nov 23, 2021

--

In my last post I introduced the AI Landscape model which I find useful for understanding where different “types” of AI are.

If we use this landscape to start thinking about the future then I think that there are three big challenges embedded in it for the journey to develop a human-like AI.

Challenge #1 — Presenting as Human

This first (and easiest) challenge is all about moving up the diagram — how we make the AI present as Human before it actually engages in any sort of conversation.

The chart in the previous article (and repeated below) shows an estimate of the relative maturity of some of the more important technologies involved in making a computer programme look (and sound) human.

Note that my working assumption is that the computer is represented as an avatar within a virtual world or metaverse. There are two big advantages of this:

  • We don’t need to waste time thinking about the mechatronics and mechanics of robots and androids, and can focus on the software. How important a physical (as opposed to a digital) body is for an AI can perhaps wait for another post.
  • Within the virtual world the computer and human are effectively on a level playing field — just looking at an avatar you can’t tell whether it’s under computer or human control.

In terms of making the computer “more human” there are two interesting effects related to work in this direction:

  • Uncanny Valley — We’re quite happy to deal with cartoons, and we’re quite happy to deal with something that seems completely real, but there’s a middle ground that we find very spooky — Mori’s well-known Uncanny Valley. So in some ways the efficacy of developments rises as they get better, then plummets as they hit the valley, and finally improves again once you cannot tell them from the real thing. So whilst we’ve made a lot of progress in some areas over recent years (e.g. visual avatars, text-to-speech), we’re now hitting the valley with them and progress may now seem a lot slower. Other elements, like emotion and empathy, we’ve barely started on, so they may take a long time even to reach the valley.
  • Anthropomorphism — People readily attribute feelings and intent to even the most inanimate object (toaster, printer). So in some ways a computer needs to do very little in the human direction for us to think of it as far more human than it really is. This can almost help us cross the valley, by letting human interpretation assume the system has crossed it even though it’s still far more basic than it appears. It’s one of the reasons I like working with robotic avatars (robotars) in virtual worlds — the default assumption is usually that they are human, and they just have to not give away the fact that they aren’t!

The next few years will certainly see systems being developed and deployed that seem far more human than any around today, even though their fundamental tech is nowhere near being a proper “AI”. I like the term “Turing-capable” as a way of describing a system which, in a particular representational domain, could pass the equivalent of a Turing Test for that domain — for instance in terms of how it looks, sounds, moves. Can it be readily confused with a human doing the same thing?
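To make the idea concrete, here is a minimal sketch, entirely my own illustration rather than any established benchmark, of how a “Turing-capable” check for a single representational domain (voice, movement, text, and so on) might be scored: a judge sees a shuffled pair of samples, one machine-made and one human-made, and tries to pick the human one. A score near 50% means the machine is effectively indistinguishable in that domain.

```python
# Minimal sketch of a domain-specific Turing test score.
# All names and samples here are hypothetical placeholders.

import random


def domain_turing_score(machine_samples, human_samples, judge):
    """Fraction of trials in which the judge mistakes the machine sample
    for the human one (~0.5 means indistinguishable, ~0.0 means easily spotted)."""
    fooled = 0
    pairs = list(zip(machine_samples, human_samples))
    for machine, human in pairs:
        candidates = [("machine", machine), ("human", human)]
        random.shuffle(candidates)                              # hide the ordering
        picked = judge([sample for _, sample in candidates])    # judge returns index 0 or 1
        if candidates[picked][0] == "machine":
            fooled += 1
    return fooled / len(pairs)


# Toy usage with a judge that guesses at random (a real study would use people):
score = domain_turing_score(
    machine_samples=["tts_clip_1", "tts_clip_2"],
    human_samples=["recording_1", "recording_2"],
    judge=lambda pair: random.randrange(2),
)
print(f"Machine picked as 'the human' in {score:.0%} of trials")
```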

Challenge #2 — Being General Purpose

One of the biggest issues with current “AI” is that it is very narrow. It’s a programme to interpret data, or to drive a car, or to play chess, or to act as a carer, or to draw a picture. But almost any human can make a stab at doing all of those, and with a bit of training or learning can get better at them all. This just isn’t the case with modern “AI”. If we want to get closer to the SF ideal of AI, and also to make it a lot easier to use AI in the world around us, then what we really need is a “general purpose AI” — or what is commonly called Artificial General Intelligence (AGI).

There is a lot of research going into AGI at the moment in academic institutions and elsewhere (for examples/comment see KORTELING2021, Stanford AI100 Report), but it is really early days. A lot of the groundwork is just giving the bot what we would call common sense — knowing about categories of things, what they do, how to use them — the sort of stuff a kid picks up before they leave kindergarten. In fact one of the strategies being adopted is to try and build a virtual toddler and get it to learn in the same way that a human toddler does (see HUTSON2018 and AITRENDS2020).

Whilst the effort involved in creating an AGI will be immense, the rewards are likely to be even greater — as we’d be able to just ask or tell the AI to do something and it would be able to do it, or ask us how to do it, or go away and ask another bot or research it for itself. We would cease to need to programme the bot.

Take a trivial example, but one that is close to our heart: if we’re building a training simulation and want a bunch of non-player characters filling roles, then we have to script each one, or create behaviour models and implement agents to operate within those behaviours. It takes a lot of effort. With an AGI we’d be able to treat those bots as though they were actors (well, perhaps extras) — we’d just give them the situation and their motivation, give some general direction, shout “action” and then leave them to get on with it (a rough sketch of the contrast follows below).
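Here is a minimal sketch, using hypothetical classes of my own invention, of that contrast: today’s hand-scripted NPC versus an AGI-style “extra” that is only handed a situation and a motivation. The AGIExtra.act() body is a placeholder: the whole point is that no such general reasoner exists yet.

```python
# Hand-scripted NPC vs. a hoped-for motivation-driven "extra" (illustrative only).

from dataclasses import dataclass
from typing import List


@dataclass
class ScriptedNPC:
    """Current practice: every behaviour authored in advance."""
    name: str
    script: List[str]              # fixed sequence of actions

    def act(self, step: int) -> str:
        # Falls back to idling the moment the author's script runs out.
        return self.script[step] if step < len(self.script) else "idle"


@dataclass
class AGIExtra:
    """Hoped-for practice: give situation and motivation, then shout 'action'."""
    name: str
    situation: str                 # e.g. "busy field hospital, night shift"
    motivation: str                # e.g. "keep the triage queue moving"

    def act(self, observation: str) -> str:
        # Placeholder: a real AGI would reason about the observation itself.
        return f"{self.name} improvises a response to '{observation}' to pursue: {self.motivation}"


guard = ScriptedNPC("guard", ["patrol east", "patrol west", "check door"])
medic = AGIExtra("medic", "busy field hospital, night shift", "keep the triage queue moving")

print(guard.act(3))                       # -> "idle": the script has run out
print(medic.act("new casualty arrives"))  # -> improvised, motivation-driven action
```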

Of course that then opens up a whole issue of AI ethics, such as AI slavery (DIHAL2020), value loading, and the AI cross-over (BOSTROM2014). Once we really gather pace on the road to AGI we need to have a good idea of where it is going and how we can control it.

Note that moving to an AGI does not imply any necessary linkage to the level of humanness. It is probably perfectly possible to have a fully fledged AGI that only has the bare minimum of humanness, just enough in order to be able to communicate with us — think R2D2.

Challenge #3 — Developing Sentience

If creating an AGI is probably a several-orders-of-magnitude greater problem than creating “humanness”, then creating “sentience” is probably many orders of magnitude greater again. We are moving from an AI which is a “zombie” with no internal subjective experience or narrative, to one which has a real sense of self and self-determination, its own story, hopes and dreams.

There are possibly two extremes of view here:

  • At one end, many believe that we will NEVER create artificial sentience. Even the smartest, most human-looking AI will essentially be a zombie; there’ll be “nobody home”, no matter how much it appears to show intelligence, emotion or empathy (see Bringsjord2001 for a philosophical discussion).
  • At the other end, some believe that if we create a very human AGI then sentience might almost come with it. In fact, just thinking back to the “extras” example above, our anthropomorphic instinct almost immediately starts to ask “well, what if the extras don’t want to do that…” (interesting discussion with proponents here, and also Peter Voss on ‘consciousness’ in AGIs).

We also need to be clear about what we (well I) mean when I talk about sentience. This is more than intelligence, and is certainly beyond what almost all animals show. So it’s more than emotion and empathy and intelligence. It’s about self-awareness, self-actualisation and having a consistent internal narrative, internal dialogue and self-reflection. It’s about being able to think about “me” and who I am, and what I’m doing and why, and then taking actions on that basis — self-determination.

Whilst I’m sure we could code a bot that “appears” to do much of that, would that mean we have created sentience — or does sentience have to be an emergent behaviour? We have a tough time pinning down what all this means in humans, so trying to understand what it might mean (and code it, or create the conditions for the AGI to evolve it) is never going to be easy.

Note that some people (e.g. BARTLE2019 — of MUD fame, and a treat of a PowerPoint show!) use the term Artificial Sapients, emphasising the “wise” element of our nature; for me “sentient” is the better term, as it emphasises the subjective experience — the ability to perceive and feel things.

It’s also interesting that in some quarters there are already proposals to outlaw work on “synthetic phenomenology” — moving a step beyond potential restrictions on AI/AGI research. A 2018 report for the European Parliament entitled “Should we fear artificial intelligence?” included this section and recommendation:

A Moratorium on Synthetic Phenomenology

It is important that all politicians understand the difference between artificial intelligence and artificial consciousness. The unintended or even intentional creation of artificial consciousness is highly problematic from an ethical perspective, because it may lead to artificial suffering and a consciously experienced sense of self in autonomous, intelligent systems. “Synthetic phenomenology” (SP; a term coined in analogy to “synthetic biology”) refers to the possibility of creating not only general intelligence, but also consciousness or subjective experiences on advanced artificial systems. Future artificial subjects of experience have no representation in the current political process, they have no legal status, and their interests are not represented in any ethics committee. To make ethical decisions, it is important to have an understanding of which natural and artificial systems have the capacity for producing consciousness, and in particular for experiencing negative states like suffering. One potential risk is to dramatically increase the overall amount of suffering in the universe, for example via cascades of copies or the rapid duplication of conscious systems on a vast scale.

Recommendation 7

The EU should ban all research that risks or directly aims at the creation of synthetic phenomenology on its territory, and seek international agreements.

The Three Challenges

So let’s bring all three challenges together.

To move from the “marketing” AI space of automated intelligence and our original diagram to the science-fiction promise of “true” human-like AI, we face three big challenges, each probably several orders of magnitude greater than the last:

  • Creating something that presents as 100% human across all the domains of “humanness”;
  • Creating an Artificial General Intelligence that can apply itself to almost any task; and
  • Creating, or evolving, something that can truly think for itself, have a sense of self, its own narrative and subjective experience, and which shows self-determination and self-actualisation.

It’ll be an interesting journey!


David Burden

David has been involved in 3D immersive environments/VR and conversational AI since the 1990s. Check out www.daden.co.uk and www.virtualhumans.ai for more info.