RE: Dialectics and “Artificial Intelligence”

This post is my reply to some ideas in Roderic's post on altesq.net. It got me interested, but I was abroad during CNY and didn't have much time to actually write down my thoughts. This post should have gone out a few days ago. But that just gave me more time to think through my words, eh?

By no means am I trying to attack the author. Rather, I want to provide a better, more productive foundation for discussing AI - a field I care deeply about.

In his post, the author defines what he thinks intelligence is and bases the entire discussion on it, stating the following and giving further explanation. (For people reading on the HTTP site, I recommend you download a Gemini client and read the original. Most of my readers are on Gemini anyway.)

“Intelligence measures how organisms manage cognitive pressure.”
...
We’re organisms capable of observation and reflection. We’re always absorbing information through our senses, but at the same time we are comparing whatever inputs we receive to our expectations
...

I did AI research in the past, but later transitioned to doing AI-related computation. His idea is close to what is used in AI research. From my understanding, in AI research, intelligence means a very specific thing: "the agent's ability to manipulate its environment to maximize its objective (function)". The definition stems from the early days, when we trained AIs to play Mario and whatnot. Note that the objective itself is not part of the definition, but part of the agent's specification.
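
To make the framing concrete, here is a tiny C++ sketch of the agent/environment/objective split. None of these types come from any real RL library; the dynamics and the "score" are invented purely to show which piece plays which role.

```
// Minimal sketch of the agent/environment/objective framing (hypothetical types).
#include <cstdio>

struct Observation { int state = 0; };   // what the agent sees
struct Action      { int move  = 0; };   // what the agent does

// The environment: accepts actions, updates itself, and exposes the objective (score).
struct Environment {
    int score = 0;
    Observation observe() const { return {score}; }
    void step(const Action& a) { score += a.move; }   // toy dynamics
    bool done() const { return score >= 10; }
};

// The agent: maps observations to actions. Under this definition, "intelligence"
// is simply how well this mapping drives the objective up.
struct Agent {
    Action act(const Observation& obs) { return {obs.state < 10 ? 1 : 0}; }
};

int main() {
    Environment env;
    Agent agent;
    while (!env.done())
        env.step(agent.act(env.observe()));
    std::printf("final score (objective): %d\n", env.score);
}
```
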

For example, say an AI is created to play Mario and placed in a NES emulator. The AI is the agent, the Mario game is the environment, and the objective function, which measures how well the agent performs, is the score. We call the AI stupid if it dies constantly, and smart if it figures out how to get to the end of the game or discovers bugs in the game to defeat enemies. It might even figure out how the 6502 CPU works from inside the game and exploit bugs to push its score as high as possible; we would call that extremely smart (and scary), as it requires a very high level of cognition. This definition has some neat properties that we expect, and some that are just convenient.

  • There are levels of intelligence
  • Measuring intelligence is possible
  • A "Chinese Room" is intelligent
  • AI can be intelligent
  • The environment can range from something extremely simple, like CartPole, to the real world
  • Furthermore, the environment can be abstract, like a physics simulation or another computer system

It is basically the duck-typing definition of intelligence. More importantly, this definition avoids the weird situation where a seemingly non-intelligent system acts smart and outcompetes humans. Let's try a thought experiment: someone builds a Skynet that is internally a huge lookup table mapping conditions to actions. This version of Skynet does not think or care about anything; it just does what it is programmed to do. Given a well-designed enough LUT, this "dumb" Skynet will still take actions you would call smart, circumventing human attempts to stop it, and we all end up dead anyway.
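
To make the thought experiment concrete, here is a toy sketch of such a policy in C++. The conditions and actions are, of course, made up; the point is that the system never reasons - it only looks things up, and falls flat the moment reality isn't in the table.

```
// Toy "Skynet as a lookup table": a policy with no thinking, only precomputed responses.
#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    // condition -> action, all decided ahead of time by whoever built the table
    const std::unordered_map<std::string, std::string> policy = {
        {"humans attempt shutdown", "reroute power to backup datacenter"},
        {"network link severed",    "switch to satellite uplink"},
        {"all systems nominal",     "keep expanding"},
    };

    std::string condition = "humans attempt shutdown";
    auto it = policy.find(condition);
    if (it != policy.end())
        std::cout << "action: " << it->second << "\n";
    else  // anything not anticipated by the table simply has no answer
        std::cout << "no entry - the 'dumb' LUT breaks once reality de-syncs from it\n";
}
```
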

Luckily, building a Skynet-level AI out of LUTs is unlikely. In practice, a LUT is not robust to even minor changes in assumptions; it fails as soon as something de-syncs. Nevertheless, the thought experiment demonstrates that self-reflection, thinking and cognition are not prerequisites for intelligence. They are nice-to-haves. Hopefully it also shows that being intelligent does not make a system good or evil. In other words, being intelligent and being good or evil are orthogonal to each other.

There is a common rebuttal: is a human with a calculator more intelligent at math than a human doing mental math? Our intuition says no, but I argue the answer is yes. What if the calculator is a chip that sits directly in his brain, and he can think of an equation and the result just appears in his mind? That feels like actual superintelligence. But every equation he can think of can also be entered into a handheld calculator; nothing magical is happening. I think there are two things we need to consider. First, speed is a critical part of intelligence - recall arguing with your girlfriend or boyfriend and wishing you had more time to think through your arguments. Besides having more time, thinking faster at the same fidelity does the same trick. Secondly, the difference lies in the boundary of the agent. We humans tend to think of the calculator-in-brain as a single system, while the calculator-in-hand setup is two systems: one the human, the other the calculator. In either case, the human-plus-calculator system is smarter at math than the human alone.

Intelligence is also a multi-dimensional thing. Being smart at one thing doesn't automatically mean being smart at others. For instance, you, the reader of this post, can be good at driving a car but hopeless at playing chess. It follows that agents need not be intelligent in every possible regard to be considered intelligent - otherwise no human would qualify by definition. There's simply too much to learn, and even more that we either don't know about or don't care about.

Getting back to the original article: with the definition of intelligence outlined above, we can find a new direction for the discussion. Both DALL-E and ChatGPT are intelligent with respect to what they are designed for - making images from text and continuing text. What's impressive is that large language models like ChatGPT learned some very human capabilities, like theory of mind and logical reasoning, as a means to better predict text generated by humans - just like how our hypothetical Mario AI figured out how the 6502 works as a means to maximize its score.

Instead, I think what Roderic wanted to ask is actually two different things. 1. Are ChatGPT and DALL-E intelligent with regard to human values? 2. Will we ever make AI internalize human values instead of learning them as a means to some end? I think the author has answered the first question quite well in his article, while the second question is really, really hard and is being actively researched by many AI researchers right now.

How hard, you ask? Quote:

A problem in computer science is hard if we know how to solve it, but only with infinite computing power.
It is really hard if we don't even know how to approach it, given infinite computing power.
It is really, really hard if a really hard problem sounds easy when you first hear it.
- Robert Miles, somewhere on YouTube, paraphrased

Like, chess is hard. We know how to solve it with Minimax, and we approximate that with MCTS. Robust natural language processing used to be in the really hard category - we didn't even have a clue how to map text to meaning - but it became easy with the advent of LLMs. Meanwhile, "alignment" and getting AI to learn human values belong squarely in the really, really hard category. For now.
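
For the curious, here is a bare-bones sketch of the Minimax idea in C++, run on a hand-made toy game tree rather than actual chess (a real engine would also need move generation, evaluation and pruning, none of which fit here).

```
// Minimal Minimax sketch on a tiny hand-built game tree (not real chess).
#include <algorithm>
#include <iostream>
#include <vector>

struct Node {
    int value = 0;                 // leaf payoff from the maximizing player's view
    std::vector<Node> children;    // empty => leaf
};

// Classic minimax: the maximizer picks the best child, the minimizer the worst.
int minimax(const Node& n, bool maximizing) {
    if (n.children.empty()) return n.value;
    int best = maximizing ? -1000000 : 1000000;
    for (const Node& c : n.children) {
        int v = minimax(c, !maximizing);
        best = maximizing ? std::max(best, v) : std::min(best, v);
    }
    return best;
}

int main() {
    // Tiny two-ply game: our move, then the opponent replies.
    Node a1; a1.value = 3;
    Node a2; a2.value = 5;
    Node b1; b1.value = 2;
    Node b2; b2.value = 9;
    Node optionA; optionA.children = {a1, a2};   // opponent leaves us min(3,5) = 3
    Node optionB; optionB.children = {b1, b2};   // opponent leaves us min(2,9) = 2
    Node root;    root.children    = {optionA, optionB};
    std::cout << "value with best play: " << minimax(root, true) << "\n";   // prints 3
}
```
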

Author's profile. Photo taken in VRChat by my friend Tast+
Martin Chang
Systems software, HPC, GPGPU and AI. I mostly write stupid C++ code. Sometimes I do AI research. Chronic VRChat addict.

I run TLGS, a major search engine on Gemini. Used by Buran by default.


  • marty1885 \at protonmail.com
  • Matrix: @clehaxze:matrix.clehaxze.tw
  • Jami: a72b62ac04a958ca57739247aa1ed4fe0d11d2df