Auto-regressive large language models (LLMs), popularized by ChatGPT, have taken the world by storm. By demonstrating similarities with our own way of processing information, they call into question the very meaning of intelligence. Given their seemingly chaotic and creative behavior, can generative AI systems still be called "machines," a word that implies a deterministic system? How far are we, then, from giving birth to a sentient being?
Many AI researchers argue that generative AIs based on large language models are:
- unable to "feel" the world, because they only process data and weights
- regurgitating words one after another, statistically, without any underlying understanding of the concepts and meanings (see the sketch after this list)
- unreliable, because of their tendency to make up facts ("hallucinations")
- not "intelligent," because they lack emotions
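To make the "statistical regurgitation" claim concrete, here is a minimal sketch of autoregressive generation: at each step, the next word is sampled from a probability distribution conditioned on what came before. Real LLMs condition on the entire context using a neural network; the tiny bigram table and its probabilities below are invented purely for illustration.

```python
import random

# Toy next-token distributions, invented for illustration only:
# each context word maps to candidate next words and their probabilities.
BIGRAMS = {
    "<start>": (["the", "a"], [0.6, 0.4]),
    "the": (["cat", "dog"], [0.5, 0.5]),
    "a": (["cat", "dog"], [0.5, 0.5]),
    "cat": (["sat", "slept"], [0.7, 0.3]),
    "dog": (["sat", "slept"], [0.4, 0.6]),
    "sat": (["<end>"], [1.0]),
    "slept": (["<end>"], [1.0]),
}

def generate(max_tokens: int = 10) -> str:
    """Sample one word at a time, each conditioned on the previous word."""
    token = "<start>"
    output = []
    for _ in range(max_tokens):
        candidates, probs = BIGRAMS[token]
        token = random.choices(candidates, weights=probs)[0]
        if token == "<end>":
            break
        output.append(token)
    return " ".join(output)

print(generate())  # e.g. "the cat sat"
```

Scaled up to billions of parameters and full-context conditioning, this same next-token loop produces ChatGPT's fluent text; the open question is whether anything like understanding emerges from it.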
These preconceptions rest on the belief that intelligence is a uniquely human property, forever out of reach for machines because of their deterministic nature. In this essay, we will investigate the parallels between the ways humans and AI systems imitate, improvise, make mistakes, and internalize experiences. By examining these similarities and questioning the uniqueness of human intelligence, we aim to uncover insights that may help bridge the gap between us and AI systems.
Next chapter: Imitation as the Essence of Learning