Three Major Challenges for Achieving Human-Like AI

Challenge #1 — Presenting as Human

  • We don’t need to waste time thinking about the mechatronics and mechanics of robots and androids, and can focus on the software. How important a physical (as opposed to digital) body is for an AI can perhaps wait for another post.
  • Within the virtual world the computer and human are effectively on a level playing field — just looking at an avatar you can’t tell whether it’s under computer or human control.
  • Uncanny Valley — We’re quite happy to deal with cartoons, and we’re quite happy to deal with something that seems completely real, but there’s a middle ground that we find very spooky — Mori’s well-known Uncanny Valley. So in some ways the efficacy of developments rises as they get better, then plummets as they hit the valley, and then finally improves again once you cannot tell them from the real thing. So whilst we’ve made a lot of progress in some areas over recent years (e.g. visual avatars, text-to-speech), we’re now hitting the valley with them and progress may now seem a lot slower. Other elements, like emotion and empathy, we’ve barely started on, so they may take a long time to even reach the valley.
  • Anthropomorphism — People readily attribute feelings and intent to even the most inanimate object (toaster, printer). So a computer needs to do very little in the human direction for us to think of it as far more human than it really is. This can almost help us across the valley: human interpretation assumes the system has crossed it even though the system is still far more basic than it appears. It’s one of the reasons I like working with robotic avatars (robotars) in virtual worlds — the default assumption is that they are human, and they just have to not give away the fact that they aren’t!

Challenge #2 — Being General Purpose

Challenge #3 — Developing Sentience

  • At one end, many believe that we will NEVER create artificial sentience. Even the smartest, most human-looking AI will essentially be a zombie — there’ll be “nobody home” — no matter how much it appears to show intelligence, emotion or empathy (see Bringsjord (2001) for a philosophical discussion).
  • At the other end, some believe that if we create a very human AGI then sentience might almost come with it. In fact, just thinking back to the “extras” example above, our anthropomorphic instinct almost immediately starts to ask “well, what if the extras don’t want to do that…” (interesting discussion with proponents here, and also Peter Voss on ‘consciousness’ in AGIs).
A Moratorium on Synthetic Phenomenology

“It is important that all politicians understand the difference between artificial intelligence and artificial consciousness. The unintended or even intentional creation of artificial consciousness is highly problematic from an ethical perspective, because it may lead to artificial suffering and a consciously experienced sense of self in autonomous, intelligent systems. “Synthetic phenomenology” (SP; a term coined in analogy to “synthetic biology”) refers to the possibility of creating not only general intelligence, but also consciousness or subjective experiences on advanced artificial systems.

Future artificial subjects of experience have no representation in the current political process, they have no legal status, and their interests are not represented in any ethics committee. To make ethical decisions, it is important to have an understanding of which natural and artificial systems have the capacity for producing consciousness, and in particular for experiencing negative states like suffering. One potential risk is to dramatically increase the overall amount of suffering in the universe, for example via cascades of copies or the rapid duplication of conscious systems on a vast scale.

Recommendation 7: The EU should ban all research that risks or directly aims at the creation of synthetic phenomenology on its territory, and seek international agreements.”

The Three Challenges

  • Creating something that presents as 100% human across all the domains of “humanness”;
  • Creating an Artificial General Intelligence that can apply itself to almost any task; and
  • Creating, or evolving, something that can truly think for itself, have a sense of self, its own narrative and subjective experience, and which shows self-determination and self-actualisation.




David has been involved in 3D immersive environments/VR and conversational AI since the 1990s. Check out and for more info.
