Three Major Challenges for Achieving Human-Like AI

Challenge #1 — Presenting as Human

This first (and easiest) challenge is all about moving up the diagram — how we make the AI present as Human before it actually engages in any sort of conversation.

  • We don’t need to waste time thinking about the mechatronics and mechanics of robots and androids, and can focus on the software. How important a physical (as opposed to digital) body is for an AI can perhaps wait for another post.
  • Within the virtual world the computer and human are effectively on a level playing field — just looking at an avatar you can’t tell whether it’s under computer or human control.
  • Uncanny Valley — We’re quite happy to deal with cartoons, and we’re quite happy to deal with something that seems completely real, but there’s a middle ground that we find very spooky — Mori’s well known Uncanny Valley. So in some ways the efficacy of developments rises as they get better, then plummets as they hit the valley, and then finally improves again once you cannot tell them from the real thing. So whilst we’ve made a lot of progress in some areas over recent years (e.g. visual avatars, text-to-speech), we’re now hitting the valley with them and progress may now seem a lot slower. Other elements, like emotion and empathy, we’ve barely started on, so they may take a long time to even reach the valley.
  • Anthropomorphism — People readily attribute feelings and intent to even the most inanimate object (toaster, printer). So in some ways a computer needs to do very little in the human direction for us to think of it as far more human than it really is. This can almost help us cross the valley by letting human interpretation assume the system has crossed the valley even though it’s still a lot more basic than is thought. It’s one of the reasons I like working with robotic avatars (robotars) in virtual worlds — the default situation is that they are usually assumed to be human, and just have to not give away the fact they aren’t!

Challenge #2 — Being General Purpose

One of the biggest issues with current “AI” is that it is very narrow. It’s a programme to interpret data, or to drive a car, or to play chess, or to act as a carer, or to draw a picture. But almost any human can make a stab at doing all of those, and with a bit of training or learning can get better at them all. This just isn’t the case with modern “AI”. If we want to get closer to the SF ideal of AI, and also to make it a lot easier to use AI in the world around us, then what we really need is a “general purpose AI” — or what is commonly called Artificial General Intelligence (AGI).

Challenge #3 — Developing Sentience

If creating an AGI is a problem several orders of magnitude greater than creating “humanness”, then creating “sentience” is probably many orders of magnitude greater again. We are moving from an AI which is a “zombie” with no internal subjective experience or narrative, to one which has a real sense of self and self-determination, its own story, hopes and dreams.

  • At one end, many believe that we will NEVER create artificial sentience. Even the smartest, most human-looking AI will essentially be a zombie — there’ll be “nobody home”, no matter how much it appears to show intelligence, emotion or empathy (see Bringsjord 2001 for a philosophical discussion).
  • At the other end, some believe that if we create a very human AGI then sentience might almost come with it. In fact, just thinking back to the “extras” example above, our anthropomorphic instinct almost immediately starts to ask “well, what if the extras don’t want to do that…” (interesting discussion with proponents here, and also Peter Voss on ‘consciousness’ in AGIs).
A Moratorium on Synthetic Phenomenology

It is important that all politicians understand the difference between artificial intelligence and artificial consciousness. The unintended or even intentional creation of artificial consciousness is highly problematic from an ethical perspective, because it may lead to artificial suffering and a consciously experienced sense of self in autonomous, intelligent systems. “Synthetic phenomenology” (SP; a term coined in analogy to “synthetic biology”) refers to the possibility of creating not only general intelligence, but also consciousness or subjective experiences on advanced artificial systems. Future artificial subjects of experience have no representation in the current political process, they have no legal status, and their interests are not represented in any ethics committee. To make ethical decisions, it is important to have an understanding of which natural and artificial systems have the capacity for producing consciousness, and in particular for experiencing negative states like suffering. One potential risk is to dramatically increase the overall amount of suffering in the universe, for example via cascades of copies or the rapid duplication of conscious systems on a vast scale.

Recommendation 7

The EU should ban all research that risks or directly aims at the creation of synthetic phenomenology on its territory, and seek international agreements.

The Three Challenges

So let’s bring all three challenges together:

  • Creating something that presents as 100% human across all the domains of “humanness”;
  • Creating an Artificial General Intelligence that can apply itself to almost any task; and
  • Creating, or evolving, something that can truly think for itself, have a sense of self, its own narrative and subjective experience, and which shows self-determination and self-actualisation.


David Burden
David has been involved in 3D immersive environments/VR and conversational AI since the 1990s. Check out and for more info.