An AI Landscape — What Do You Mean by AI?


The bottom axis shows complexity (which I’ll also take as being synonymous with sophistication).

  • Simple Algorithms — 99% of computer programmes; even complex ERP and CRM systems are highly linear and predictable.
  • Complex Algorithms — things like (but not limited to) machine learning, deep learning, neural networks, Bayesian networks and fuzzy logic, where the complexity of the inner code starts to go beyond simple linear relationships. A lot of what is currently called AI sits here — but it really falls short of a more traditional definition of an AI.
  • Artificial General Intelligence — the holy grail of AI developers, a system which can apply itself using common sense and general knowledge to a wide range of problems and solve them to a similar level as a human.
  • Artificial Sentience — beloved of science-fiction, code which “thinks” and is “self-aware”.


The vertical axis is about “presentation” — does the programme present itself as human (or indeed another animal or being) or as a computer? An ERP or CRM system typically presents as a computer GUI — but if we add a chatbot in front of it, it instantly presents as more human. The position on the axis is influenced by the programme’s capability in a number of dimensions of “humanness”:

  • Text-to-speech: Does it sound human? TTS has made steady progress in recent years, but currently appears to offer a choice between a pretty human-sounding generic voice and a slightly more robotic-sounding person-specific voice. A lot of the work is being driven by medical voice-banking and virtual voice-overs/actors!
  • Speech Recognition: Can it recognise human speech without training? Systems like Siri have really driven this on recently, but it’s still not 100% accurate and pretty shaky in a lot of applications. Voice control within VR environments may be a new driver.
  • Natural Language Understanding: Neural-network approaches seem to be ruling the roost at the moment, but hybrid approaches with a good dose of semantic and grammar understanding still seem a good bet for the future.
  • Natural Language Generation: GPT-3 is being talked about a lot, but whilst its output can look good on the surface it can break down in the detail, and the bot has no idea of what it is saying. Lots more work is needed, especially on argumentation and story-telling.
  • Avatar Body Realism: CGI work in movies has made this pretty much 100%, except for skin tones — the Abbatars being a case in point!
  • Avatar Body Animation: For gestures, movement etc. Again, movies and decent motion capture have pretty much solved this — but it is needed in real time.
  • Avatar Face Realism: All skin and hair so a lot harder and very much stuck in uncanny valley for any real-time rendering. See Abbatars again!
  • Avatar Expression (& lip sync): Static faces can look pretty good, but try to get them to smile or grimace or just sync to speech and all realism is lost. VR avatar development may help drive several of these avatar areas forward.
  • Emotion: It’s debatable whether this should be on the complexity/sophistication axis instead (and/or is an inherent part of an AGI or an artificial sentience), but it’s a very human characteristic and a programme needs to crack it to be taken as really human. Games are probably where we’re seeing the most work here.
  • Empathy: Having cracked emotion the programme then needs to be able to “read” the person it is interacting with and respond accordingly — lots of work here and face-cams, EEG and other technology is beginning to give a handle on it.
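As a rough sketch of how these dimensions could combine into a single position on the presentation axis — note that the dimension names, the 0-to-1 scoring and the equal weighting below are all my illustrative assumptions, not anything from the chart itself:

```python
# Illustrative only: score each "humanness" dimension from 0 (machine-like)
# to 1 (indistinguishable from human), then average them to get a single
# vertical-axis position. Missing dimensions count as 0.
HUMANNESS_DIMENSIONS = [
    "text_to_speech", "speech_recognition", "nl_understanding",
    "nl_generation", "body_realism", "body_animation",
    "face_realism", "expression_lip_sync", "emotion", "empathy",
]

def presentation_score(scores: dict) -> float:
    """Equal-weighted average across all humanness dimensions."""
    return sum(scores.get(d, 0.0) for d in HUMANNESS_DIMENSIONS) / len(HUMANNESS_DIMENSIONS)

# A hypothetical text-only chatbot: decent language skills, no avatar,
# no emotion or empathy — so it sits fairly low on the axis overall.
chatbot = {"speech_recognition": 0.8, "nl_understanding": 0.6, "nl_generation": 0.7}
print(round(presentation_score(chatbot), 2))  # 0.21
```

An equal weighting is the simplest choice; in practice some dimensions (face realism, empathy) probably dominate our perception of humanness far more than others.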

Using the Chart

So back on the chart we can now plot where current “AI” technologies and systems might sit.
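One way to mock this plotting up is as a simple lookup of (complexity, presentation) coordinates. The systems and coordinates below are purely my illustrative guesses on a 0-to-1 scale, not positions taken from the article’s chart:

```python
# Hypothetical (complexity, presentation) coordinates, both on a 0-1 scale.
# These placements are illustrative guesses only.
landscape = {
    "ERP/CRM system":     (0.2, 0.1),  # simple algorithms, computer-like GUI
    "Chatbot front-end":  (0.4, 0.6),  # modest complexity, more human presentation
    "Deep-learning NLG":  (0.6, 0.5),
    "AGI (aspirational)": (0.9, 0.8),
}

# List the systems in order of increasing complexity.
for name, (complexity, presentation) in sorted(
        landscape.items(), key=lambda kv: kv[1][0]):
    print(f"{name:20s} complexity={complexity:.1f} presentation={presentation:.1f}")
```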



David Burden

David has been involved in 3D immersive environments/VR and conversational AI since the 1990s. Check out and for more info.