
Modern Times: Human Intelligence Vs Artificial Intelligence

Another one of those inventions that fuels the machinery of our modern fears is artificial intelligence. Robots taking over humanity has been a latent concern ever since we made them capable of automatic processes. This section's aim is not to discuss the history or mechanics of machines and computers. Instead, let's analyze what our relationship with these marvels of human invention is, and why we shouldn't fear them.




Before anything, we need to distinguish what is human and what is artificial. Then, let's recall what intelligence is. In the context of the word "intelligence," both "human" and "artificial" are being used as adjectives, not as nouns. The relevance of this, in this modern debate of ours, is that human and artificial are qualities of the word intelligence. We know quality is established by space. Both the spread of space and the speed at which that space changes its limits are what give "human" and "artificial" a starting point of comparison in reference to the reach of that intelligence. Therefore, to be congruent with the definition of intelligence, I'm inferring that the scale across space that humans and machines observe, and the speed at which such a scale changes, determine the scope of intelligence in each one. Let's remember, every point of observation in the universe is a center, and it is infinitely empty until time emerges from it. We call that process intelligence.


The word “intelligent” is an adjective used to qualify the measure of an outcome or performance. When discussing this topic, we assume that humans are intelligent, and that machines are intelligent as well – meaning that they both produce a qualifiable outcome. Let’s figure out what it is to be human, what is artificial, and whether all outcomes from performance are really a measure of intelligence.


I previously discussed intelligence, but let's refresh our understanding of the concept. "Intelligence" comes from the Latin "intelligere." It is composed of the Latin "inter-," which means "in between," and "legere," which means "to read." Since intelligence is one's ability to "read in between," we'll assume that the ability to produce a qualifiable outcome corresponds to the accuracy with which one sees relationships in space. However, an inconvenience arises from measuring intelligence merely by the quality of the outcome, because part of it is also the quality of the function creating such results. For example, how can we measure the psychological function of a human?


Let's recall that function is abstraction, abstraction is the future, and the furthest point in the future of man is his death. How are we to measure death if it doesn't have any reference? Until we are able to measure what it is to not know, or space, there's no way we can measure intelligence.


Measurement can be qualified. The outcome of intelligence can be put into a coefficient, but not intelligence itself. Since we don't have a grip on intelligence, but only a reference after it has taken place, we tend to associate intelligence with computation. Computation is an important part of what intelligence is, but it lacks the most important component, which is observation. Observation itself can produce intelligent outcomes because it orders. Computation can only produce direction because it is the shape that the order adopts. That is, consciousness lets us know order, and the brain follows through by computing the order. In machines, that is not the case, as we'll see shortly.


It is very difficult to define what it means to be human. Nevertheless, for the purpose of simplifying matters, we'll say humans are living organisms belonging to the species Homo sapiens. As living organisms, we are separated from inanimate matter because all life can tell and sense time. As Homo sapiens, we stand apart from the rest of the animal kingdom mainly because we can change time's direction faster than any other living creature. We can stop, observe, and have ideas that turn into abstractions. We make art, music, poetry, math, physics, chemistry, biology, etc. What really makes us humans different from the rest is intelligence. We live in abstraction, creating and making art.


That which is artificial is a copy. Our art is an abstract copy of our consciousness. An artifact is made so that it mimics some order. A man-made artifice contains a copy of a part of the human's psychological content. In that sense, an artificial "something" is the human attempt at that "something." Why would we need an artificial something? The issue really is not whether we need it or not, but rather to see that everything we make is artificial by definition. Ideas are natural, but their artifices are not. No human can create natural outcomes other than by giving birth. Not even a clone, which is a biological copy, is natural, because it is made artificially, through a technique – copying DNA, and all the rest of it.


The word "natural" comes from the Latin "naturalis," which means "by birth"; however, birth is still an act of replication. By definition, the birth of a living creature is the partial or complete copy of what gave birth to it. What, then, is the difference between a copy via birth and a copy via artifice or human technique? The difference between natural and artificial is content. The content of the artificial copy is not the same as the content of the natural. An artificial copy is a cheap version of the original. It is cheap because the content of nature is whole, whereas the content of the artificial is limited by thought. What thought creates contains a direction, so it excludes much of the entire picture. If we say that artificial intelligence is going to outrun us despite having less content than we do, then we are admitting that we are carrying too much.


Let’s briefly come back to a previous topic, “the content of man is what he comprehends,” to improve the general context of what is meant here. Knowledge is the content of man. What he makes of a machine is the goal of knowledge towards the ideal function of such a machine. If the ideal is to get from New York to London in six hours, he makes the airplane in such a way as to make this possible. In this case, man thinks of a machine that could think faster than he does. In some regard, man himself wishes to compute information faster, and out of that ideal he creates artificial intelligence. There’s only one little problem: computation is not intelligence.


We already have extremely fast computers, but that’s not enough. Artificial intelligence is a deceiving story; it is just a much more complex and fancier attempt to keep death at bay. Man’s greatest desire is not to create faster-computing beings, but to transfer consciousness through artificial means.

Consciousness is not transferable. Consciousness is a field common to all of existence, and it is dependent on the layer of reality where the observer lies. Consciousness says what's right and what's wrong, not by means of language or information, but by means of harmony in the relationship. Machines can't know harmony because harmony is in the silence, and silence can't be captured. We observed earlier that if we ever get to put our hands on consciousness, it would be worthless for the aforementioned objectives. Although consciousness is order, it can't get anywhere without thought. Ironically, thought can't reach everywhere by itself either, because it needs order at its base. The "holy grail" of science, and the purpose of all human disciplines, is to discover what connects the two.


We just don't know how order connects with direction. Initially, they look the same, but they're not. The reason it seems we will never know the content of such a gap is that it is not in our field of perception. We feel consciousness when it assists in structuring our abstractions, and the image it gives rise to is also in our perceptible reality. From my human experience, I am completely certain that the act of observation is what lies between consciousness and thought. The funny thing about it is that observation and not-knowing are the same activity. I suggest that we can go deeper than consciousness, and I can see us achieving that. Nevertheless, it is impossible to go beyond observation, or beyond not knowing. The real and true action we should take is to expand the reach of humanity's awareness, but we can't do that by external means.


Externally, we don’t perceive electromagnetic fields, for example, but we see them in action, so they are on a spectrum. Whatever is on a spectrum, even if it is at the extreme end of it, can be abstracted. However, as I mentioned before, an abstraction only leads to more abstractions. In other words, the solution to an event that hasn’t happened is an event that also hasn’t happened.


Death is the thing that lies outside of both our field of perception and time's structure. Consciousness cannot create machines that die because thought doesn't know what that is. Our creator made us capable of dying; yet we can't copy it. Death equals the idea. Meaning, whatever an idea really is, it is not any different from whatever death is. Machines can't have ideas because they can't die. Nature can create replicas with that capacity, but we cannot. Sometimes, it seems as if we are hoping to learn from these machines what it means to live without the immensity of death. These machines can't teach us anything about death because they don't possess the means of deduction to understand – not even as a reference – what it means. There's nothing for them to abstract in that regard because it is not part of their content, nor the content of their creator. In a sense, since they're already dead, they can't know what living is either. Put another way, we don't know how to make machines capable of not knowing; therefore, we can't make them capable of dying. Machines have no real essence or notion of present time.


So, what does this all mean for us? We don't live mechanically. We live in abstraction, and we created machines that can't even see such an abstraction. These AI systems are not in any relationship; they don't know what time is. They know the version of time we program into them, but not the totality of it. How can destruction, if that's what we fear, be part of a thing that doesn't really know what destruction is? The problem is us, not the machine. The issue with technology is not the tech itself, but its application. It has always been that way. When we applied "tomorrow" righteously, it gave us crops; however, when fear invaded us, we used it to bring about death.


Time pushes us to fill our empty centers, since that promotes conservation. From that center, applications come to life. This is called creativity. Machines lack that center. The emptiness in that center is the same emptiness we find in death. Our relationship with our creator is in that center. These machines' relationship with their creators – us – is not from an empty center at all. In fact, it is the opposite. Their relationship with us is totally filled. Space doesn't exist for them because their knowledge is absolute. Whatever we put inside is all that there's going to be between them and us. Even if such content can be scrambled into infinite forms, it will still be filled with noise from the past.


Our creator shows us the truth without giving us the tools to replicate it. In a sense, we hope these creations of ours could be our savior – but such a savior will never come.
