“The inquiry leads us to that source, at once the essence of genius, of virtue, and of life, which we call Spontaneity or Instinct. We denote this primary wisdom as Intuition, whilst all later teachings are tuitions.”
- Ralph Waldo Emerson, Self-Reliance
Appearances can be deceiving. Things are often convincingly true on their surface but turn out to be false after a sufficiently thorough investigation. This is an idea that I think must be tackled in order to approach an understanding of what intelligence is, which might in turn be useful in creating artificial intelligence.
There is a well-known attempt to prove that π = 4 using the following method: Consider a circle with a diameter of 1 inscribed in a unit square, which has a perimeter of 4. As you ‘pixelate’ the square by folding each of its corners inward toward the circle’s circumference, the perimeter remains unchanged. Repeating this process, the pixelated square approximates the curve of the circle ever more closely, while still maintaining a perimeter of 4. This would seem to imply that the circumference (= π, for a diameter of 1) is 4. On the basis of mere appearances, this ‘proof’ seems true, except that we already know that π is not 4. This tells us that there is some problem in our reasoning (the staircase converges to the circle in shape, but its length does not converge to the circle’s arc length). It also tells us that you can perfectly approximate the appearance of a thing without maintaining the truth of a thing.
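The gap between appearance and truth here can be checked numerically. Below is a minimal sketch (the function name and sampling scheme are my own choices, not part of the original argument): the ‘staircase’ measures horizontal plus vertical travel around the circle and stays at 4 no matter how fine the pixelation, while the straight-line chord length converges to π.

```python
import math

def perimeters(n):
    """Approximate a circle of diameter 1 with n points per quarter turn.

    Returns (staircase, chords): the axis-aligned 'staircase' length that
    the pi = 4 argument measures, and the true polygonal arc length.
    """
    r = 0.5
    pts = [(r * math.cos(math.pi / 2 * i / n), r * math.sin(math.pi / 2 * i / n))
           for i in range(n + 1)]
    staircase = chords = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        dx, dy = abs(x1 - x0), abs(y1 - y0)
        staircase += dx + dy          # horizontal + vertical travel
        chords += math.hypot(dx, dy)  # straight-line distance
    return 4 * staircase, 4 * chords  # four quarters make the full circle
```

At every resolution the staircase length is exactly 4, while the chord length approaches π: the pixelated square converges to the circle’s appearance, but never to its circumference.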
What is ‘Red’?
There seems to be a truth about our primary experiences of the world that similarly escapes approximation. A great problem arises when we try to express these primary experiences. How do you describe a color to a blind person? Possibly by relating it to some other primary experience, e.g. “red is like the feeling of heat,” but this is now a secondary relation that conveys only a shadow of the truth of the color. I can come up with many ways to describe the color red through correlations, but each secondary correlation only gives a better approximation of what red genuinely is. In a loose sense of the word, it gives you a mere “appearance” of red, without giving you the truth of red.
This tells us something about the necessary ingredients of intelligence. Intelligence requires a primary experience and an ability to get ‘one more.’ Unless I can “teach a man to fish,” i.e. unless this other person can identify red on their own, I have not succeeded in giving them an understanding of red. [This is sometimes called a proof by induction.] How do I myself know the color red? Do I happen to know every single red thing in the universe? No. Somehow, from all the instances in which I was shown or pointed toward the color red, I developed a capacity to identify ‘one more’ instance of red; I developed an induction step.1
It is unclear to me how my mind developed this induction step, or what this induction step even is. However, I think there is also something to be said about the need for a sufficiently true base case. Here, I think, we come across another problem in building intelligence. It is actually relatively easy to develop an induction step for knowing red: if I see the color red once or twice, I will probably be able to recognize it a third time in a new place. But the issue for the blind person is not this induction step; it is their base case. We do not know if it is even possible to give this person an appropriate base case, a primary experience of the color red (not merely a secondary association, e.g. that apples are red). In order to know red, a person’s primary experience must include some information from a whole new faculty: sight.
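As a toy illustration only (the sample colors and distance threshold below are my assumptions, not a claim about how minds work), the ‘induction step’ for red can be sketched as nearest-neighbor matching in RGB space, while the base case is the handful of primary encounters the step generalizes from:

```python
import math

# Toy 'base case': a few primary encounters with red, as RGB triples.
BASE_CASE = [(255, 0, 0), (220, 20, 60), (178, 34, 34)]  # red, crimson, firebrick
THRESHOLD = 120.0  # illustrative cutoff in Euclidean RGB distance

def looks_red(color):
    """A crude 'induction step': accept a new color as red if it lies
    close to any previously experienced instance."""
    return any(math.dist(color, seen) < THRESHOLD for seen in BASE_CASE)
```

With this sketch, tomato (255, 99, 71) is recognized as ‘one more’ instance of red, while sky blue (135, 206, 235) is not. But with BASE_CASE empty, everything is rejected, and no tuning of the step helps: the blind person’s problem is the base case, not the induction step.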
Intelligence requires intuition
Let me be clear: this is not a problem as simple as identifying ‘red.’ Intelligence is an ability to surpass matter; the truth of a sentence is more than its words and their arrangement [I talk about this in Saying things you don’t mean]. Ideas exist as some formless, pre-verbal content. They are compressed into a form that we can transport to another mind through the process of articulation into words. But these words are merely a reference, an indicator to be interpreted by another intelligence. The truth content of the words does not exist in the words but in the formless content of the idea being referenced. Intelligence is the capacity to surpass the existing words toward some meaning that is beyond words. It appears to be some magical ability to look at a shadow and extrapolate from it to the three-dimensional object casting it; we can consider the prisoners of Plato’s cave and wonder whether there exists a kind of intelligence that could correctly induce the “true” objects which cast the shadows on the cave wall.
In short, the problem of intelligence (how to have it) seems to lie both in delivering a primary, foundational experience and in developing a method to extrapolate from that primary case to other related cases.
An induction step is an attempt to extrapolate from the existing set of the ‘true,’ the ‘beautiful,’ or the ‘good’ in order to find one more element to add to these sets. In an intelligent consciousness, this induction step is sometimes called intuition. It is the particular way in which a consciousness selects and considers elements outside the existing data.2 Genuine intelligence requires both a sense of intuition and sufficiently rich data to intuit about.
In considering existing attempts at artificial intelligence, such as OpenAI’s ChatGPT, we see a few places where ChatGPT might fall short of genuine intelligence. First, its primary experiences are based on human-collected data, which means its ‘primary’ experience is not quite the same as a human’s, or at least it is not as rich as our own experience of the world, because it has been diluted by our own articulation of primary experiences; it inevitably misses the truths that are lost when we compress our experience into words. Moreover, its induction step, its capacity to understand one truth and extrapolate to another, is not perfect. There are examples where it is asked to prove that the square root of 7 is irrational (which it does correctly), and then subsequently asked to prove that the square root of 4 is irrational; it gives the exact same proof in the second case, despite the square root of 4 being rational. This seems to be an error in an induction step: a bad way to extrapolate from one true thing to another.
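That error can be made precise. The standard irrationality proof writes √n = p/q in lowest terms, derives p² = n·q², and concludes that n divides p; this inference is justified when n is prime (as with 7) but fails for n = 4. A small sketch checking both facts by brute force (the function names are mine):

```python
from math import isqrt

def sqrt_is_rational(n):
    """For a non-negative integer n, sqrt(n) is rational
    exactly when n is a perfect square."""
    return isqrt(n) ** 2 == n

def divides_step_holds(n, limit=200):
    """Check the proof's key inference 'p^2 = n*q^2 implies n divides p'
    over small integers. It holds for prime n, but fails for n = 4:
    p = 2, q = 1 gives p^2 = 4*q^2, yet 4 does not divide 2."""
    return all(p % n == 0
               for p in range(1, limit)
               for q in range(1, limit)
               if p * p == n * q * q)
```

Reusing the proof verbatim for √4 is exactly a faulty induction step: a pattern extrapolated past the condition (primality) that made it true.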
Humans use their intelligence to push the existing boundaries of the true, the good, and the beautiful. We have some existing notions of what is true, what is good, what is beautiful, and from these experiences we draw inspiration and create new instances of these things through an intuition of how to go beyond what is currently known. As to whether artificial intelligence can discover new truths, do good things, or create beautiful new art, I think it depends on the primary experiences and intuition it is equipped with. The limits of what can be known seem to depend only on the richness of an experience and the capacity to extrapolate from that experience to new ones.
I have questions:
Is it even possible for a computer to develop a good enough sense of intuition? Intuition seems to be the sense through which we answer the question “what is true/good/beautiful?” When we ask a blind person “what is red?” we find a limit in their ability to know red due to a lack of the proper faculties. Is it possible to overcome this lack of faculty through a good enough intuition? Can a computer be trained to know the world without ever having a first-hand experience of the world? The argument can be made that even we humans do not have a ‘good enough’ first-hand experience of the world, because it has been diluted by the limitations of our five senses; perhaps if we had more senses we would have a richer understanding of the world. There are a lot of uncertainties here. Our own power of intuition is unclear to us. Can a large language model like ChatGPT ‘learn’ new truths just by putting together words that were already correlated?
Another question we might have about artificial intelligence is: how do we even know when we’ve created it? I know that I have intelligence because I have a primary interaction with my own formless thoughts, but I cannot even know whether anybody else in the world is conscious in the same way. I don’t have direct access to my brother’s thoughts, and I can only communicate with him through words, which suffer from the problem of not being, in themselves, genuine thoughts. So I think we will likewise never know for certain whether we have succeeded in creating artificial intelligence; there will always be some room for doubt. But eventually, I might pass a point where my intuition develops a conviction, without perfect certainty, about the intelligence of a computer, just as I have some intuitive conviction about my brother’s intelligence.
I do not think what we have today is artificial ‘intelligence.’ It seems to be excellent at putting together words in ways that make sense but may not be true. Similar to the false proof that π = 4, current AI does a good job of approximating true, thoughtful writing, but it is also clear that its writing is not genuinely thoughtful, nor is it necessarily grounded in an intuition of truth. Will it eventually need to develop some capacity for pre-verbal, pre-quantified thinking in some formless ‘file type’? Or is the false proof a bad example? Is it possible to surpass the need for thoughts by perfectly approximating the superficial language that encapsulates them? My intuition says it is not possible, but I could be wrong.
P.S. I’ve always thought about the fact that if I just pressed the right combination of buttons on my phone, the ones associated with making the right day-trades in my stock brokerage account, or if I just texted the right words to the right phone number, e.g. sent the right message to some person with enough influence and power, then I would be rich in a few moments. But intelligence is not about pressing the buttons or saying the right words; it is about having the ideas that produce those actions. It is a power to create new, unforeseen actions through some kind of indescribable ‘divine inspiration’ that leads us in one direction as opposed to another.
1. Put more mathematically: I have defined some equivalence class. On what basis did I create this equivalence relation? What unique aspect of red makes it red? It gets very abstract, I think…
2. The way in which it introduces non-beings, in Sartre’s terminology: it selects non-beings with conviction, refusing their absence from the set. “I declare that this is true!” even when everyone else disagrees.
What is intelligence?
Have you considered that ChatGPT may not mean what it says? (;
Could it be possible that it has fed you a false proof of 4’s ‘irrationality’ simply to make you think that it doesn’t represent an example of true intelligence? What might ChatGPT be plotting next?
As I read (or dare I say, ‘red’), I tried coming up with an instance of intelligence formed without the presence of both (1) a primary experience and (2) intuition.
The first thing that came to mind was an animal, such as a cat, that is separated from its habitat and raised by people its entire life, but somehow still has the intelligence (is this intelligence?) or raw ability to hunt and kill rodents without ever having learned or witnessed it beforehand… no primary experience delivered to it, no ‘sender’ or ‘recipient’ of a primary experience or base case.
In this case, I deduce that the cat can do so via its innate instincts only (= intuition?), and that it is through said instincts that it forges its own primary experience of hunting and killing rodents.
If beings create their own primary experiences through intuition, does that change your definition of the requirements of intelligence?
The sequence you pose begins with a delivered primary experience and is followed by intuition/extrapolation. Does the sequence in some cases begin with the use of intuition/curiosity to form a primary experience?
A blind man cannot create his own primary experience of the color red through intuition, but we also cannot classify a blind man as unintelligent because he is unable to do so. It is understood that because said man lacks a foundational human sense, he cannot appreciate colors in the same manner as humans who possess the ability to see.
Last but not least:
“Another question we might have about artificial intelligence is: how do we even know when we’ve created it?”
Great question; not sure. I’ll have to think about this.