The ongoing discourse surrounding AI can largely be divided along two lines of thought. One concerns practical matters: How will large language models (LLMs) affect the job market? How do we stop bad actors from using LLMs to generate misinformation? How do we mitigate risks related to surveillance, cybersecurity, privacy, copyright, and the environment?
The other is far more theoretical: Are technological constructs capable of feelings or experiences? Will machine learning usher in the singularity, the hypothetical point where progress will accelerate at unimaginable speed? Can AI be considered intelligent in the same way people are?
The answers to many of these questions may hinge on that last one, and if you ask Blaise Agüera y Arcas, the answer is a resounding yes.
Agüera y Arcas is the CTO of Technology & Society at Google and founder of the company’s interdisciplinary Paradigms of Intelligence team, which researches the “fundamental building blocks” of sentience. His new book — fittingly titled What is Intelligence? — makes the bold but thought-provoking claim that LLMs such as Gemini, Claude, and ChatGPT don’t simply resemble human brains; they operate in ways that are functionally indistinguishable from them. Operating on the premise that intelligence is, in essence, prediction-based computation, he contends that AI is not a disruption or aberration, but a continuation of an evolutionary process that stretches from the first single-celled life forms to 21st-century humans.
Big Think recently spoke with Agüera y Arcas about the challenges of writing critically about AI for a general audience, how attitudes in Silicon Valley changed over the course of his career, and why the old approach to machine learning was bound for a dystopian future.