Do Words Like Smart & Intelligent Really Describe Our Machines?

How accurate are these terms when describing modern tech?

By Craig Axford | United States

Photo by Andy Kelly on Unsplash

That my phone can do things no other machine in the history of the world could do until very recently is beyond dispute. But is the word "smart" just something a clever marketer came up with, or is it an accurate description of what my phone really is? The same question can be asked about the word "intelligence," a quality increasingly being attributed to machines these days.

While no reasonable person would deny that there are things our technology can do better and faster than humans, these tend to be computational or highly repetitive tasks that machines can be programmed to do relatively easily. Even before the invention of the computer, industry had greatly enhanced productivity through the use of technology capable of performing the same task over and over again at far greater speed. A locomotive can run much faster and for far longer than a human, but we don't refer to it as though it possessed some sort of embodied intelligence. Regardless, the fact that we don't need to stick to tracks or roads leaves us with a distinct advantage, our relative slowness notwithstanding.

. . .

Intelligence is a word that's actually pretty difficult to pin down. There's general intelligence, commonly referred to simply as g, which standardized IQ tests attempt to measure. However, in 1983 the developmental psychologist Howard Gardner theorized that we actually possess eight different kinds of intelligence, only some of which can be adequately measured via something like an IQ test. He's since suggested adding a ninth and a tenth: existential and moral intelligence. This theory of multiple intelligences has moved into the mainstream of psychological thinking, making it that much more difficult to quantify exactly how intelligent any one of us actually is.

Developmental psychologist Howard Gardner's 9 types of intelligence. Originally Gardner proposed eight types of intelligence. He's since suggested existential and moral intelligence may also be worthy of consideration, though only existential intelligence is shown here.

The Encyclopedia Britannica defines human intelligence as the "mental quality that consists of the abilities to learn from experience, adapt to new situations, understand and handle abstract concepts, and use knowledge to manipulate one's environment." This strikes me as a reasonably good working definition, and one that excludes neither g nor Gardner's theory of multiple types of intelligence.

. . .

The computer scientist John McCarthy is widely considered to be the person who coined the term "artificial intelligence." In 1955, McCarthy invited several fellow researchers to join him the following summer in "a 2 month, 10 man study of artificial intelligence…" Known as the Dartmouth AI Project Proposal, the study rested on one fundamental assumption about learning and intelligence. The heart of the proposal read as follows:

The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.

Learning, at least human or animal learning, necessarily involves some degree of interaction with the environment. It cannot be reduced to mental processes alone; it is an embodied and interactive experience. So far at least, it's difficult to argue that "every aspect" of this process, to say nothing of "any other feature of intelligence," has been "precisely described." But even if it had been, our machines would remain ill-suited to simulating it.

We do, of course, have machines that move around in the world and that gather and respond to information as they go, but they only do this in ways that are stipulated by their programming. For example, a robot that’s been provided with sensors so that it can “see” where it’s going and avoid potential obstacles might be programmed to find the shortest route between two points. However, it will never spontaneously stop to investigate any flowers, insects, or other objects that happen to catch its electronic eyes along the way because curiosity isn’t part of its programming.

Indeed, it's not at all clear that something like curiosity could be programmed into a machine. We could simulate curiosity by programming the robot to randomly stop and investigate things as it made its way from point A to point B, but this would not mean the robot had taken an actual interest in anything. Curiosity is not the product of a random number generator spinning in our heads that causes us to become engrossed by particular things or events every time the right numbers come up.
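
To make that concrete, here is a minimal sketch of what such "simulated curiosity" amounts to in code. Everything in it is hypothetical and invented for illustration; it describes no real robotics API:

```python
import random

# An invented, deliberately thin simulation of "curiosity": the robot
# pauses at random, so any appearance of interest is really just a
# tuning knob the programmer chose.

INVESTIGATE_PROBABILITY = 0.05  # set by the programmer, not the robot

def move_to(point):
    print(f"moving to {point}")

def investigate(point):
    print(f"'investigating' near {point}")

def traverse(route):
    """Follow a fixed route, occasionally stopping to 'investigate'."""
    for point in route:
        move_to(point)
        if random.random() < INVESTIGATE_PROBABILITY:
            # The stop happens because a number came up, not because
            # anything at this point actually caught the robot's interest.
            investigate(point)

traverse([(0, 0), (1, 0), (2, 0), (2, 1)])
```

Raise or lower that probability and the robot appears more or less "curious," which is precisely the point: whatever interest seems to be on display lives in the programmer's choice of a constant, not in the machine.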

Curiosity is, among other things, one very important "feature of intelligence" that so far hasn't been "precisely described," and which no program yet devised has come even remotely close to simulating. For all the things our machines can do, they remain as incurious as a stone. No computer, no matter how sophisticated, has shown any desire to investigate something simply to satisfy its own curiosity.

Photo by Franck Veschi on Unsplash

. . .

In her TED Talk entitled On Being Wrong, the author Kathryn Schulz says, "The miracle of your mind isn't that it can see the world as it is. It's that it can see the world as it isn't." She goes on to state that we need surprise, reversal, and failure. In other words, being wrong now and then is fundamental to our development, both as individuals and as a species.

Robots, on the other hand, are intended to be efficient machines that perform the same tasks over and over again with speed and accuracy. Ideally, they should never misunderstand our instructions or otherwise screw up. One of the advantages of having robots working on the assembly line instead of humans, at least from the company's perspective, is that robots don't daydream about doing something more fulfilling, causing them to slow down or put a part in the wrong place. Nor do they require breaks or vacations to fight boredom. But it's precisely these advantages of having robots as workers that demonstrate their stupidity and complete lack of self-awareness. That some very smart people have recently indulged in apocalyptic fantasies about these machines eventually taking over the world, including people who made their fortunes building them, reveals just how fond even non-believers can be of end-of-the-world scenarios.

If self-driving cars start expressing boredom with the route we take to work each day and begin requesting we call in sick so they can take a road trip with us, then we should start to worry. At that point, they really will have become self-aware and begun taking an interest in things purely out of curiosity or a desire for a more meaningful existence. At the moment, however, there's no indication whatsoever that our cars are getting tired of driving the same old roads from home to work and back again day in and day out, let alone developing ambitions that threaten our well-being.

The philosopher of information Luciano Floridi has probably expressed the problem best. He argues that those ringing alarm bells about a pending singularity or an AI takeover of some sort haven't really explained how exactly this intelligent army of machines is supposed to emerge. "How some nasty ultraintelligent AI will ever evolve autonomously from the computational skills required to park in a tight spot remains unclear," Floridi states in a not-so-subtle challenge to Elon Musk.

At this point, someone will inevitably point to Moore's Law, the observation that the number of transistors per square inch doubles approximately every two years. Of course, if current (or should I say past?) trends continued indefinitely, some kind of super-intelligence emerging in the future would arguably be likely. But if history teaches us anything, it's that current trends never continue indefinitely. Moore's Law is going to turn into just another sigmoid curve rather than becoming the first example of continuous exponential growth. In fact, there are already indications that a flattening of the growth curve is underway.
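
A quick back-of-the-envelope comparison shows why the distinction matters. The numbers below are made up purely to illustrate the shapes of the two curves; they are not industry data:

```python
import math

def moores_law(years, start=1.0, doubling_period=2.0):
    """Uninterrupted exponential growth: a doubling every two years."""
    return start * 2 ** (years / doubling_period)

def logistic(years, start=1.0, ceiling=1e6, doubling_period=2.0):
    """Same early growth rate, but saturating at a hard ceiling."""
    rate = math.log(2) / doubling_period
    return ceiling / (1 + (ceiling / start - 1) * math.exp(-rate * years))

for y in (0, 10, 20, 40, 60):
    print(f"year {y:2d}: exponential {moores_law(y):>13,.0f}   "
          f"sigmoid {logistic(y):>11,.0f}")
```

For the first couple of decades the two columns are nearly indistinguishable; by year 60 the exponential has run past a billion while the sigmoid has flattened out near its ceiling, which is exactly the shape this paragraph predicts for transistor counts.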

. . .

Before you can get to curiosity, imagination, boredom, risk-taking, the desire to take over the world or any of the other characteristics that ultimately define intelligence, you have to have self-awareness. Humans aren’t the only creatures on earth that have demonstrated this capacity. Elephants, cetaceans, and corvids, to name a few, join us in exhibiting evidence of this ability. However, not a single machine has yet done so. Indeed, programmers are still struggling to get computers to consistently and accurately process abstract concepts like metaphor and context.

In his book The Glass Cage, Nicholas Carr drives this point home well:

“Hector Levesque, a computer scientist and roboticist at the University of Toronto, provides an example of a simple question that people can answer in a snap but that baffles computers:

The large ball crashed right through the table because it was made of Styrofoam. What was made of Styrofoam, the large ball or the table?

We come up with the answer effortlessly because we understand what Styrofoam is and what happens when you drop something on a table and what tables tend to be like and what the adjective large implies. We grasp the context, both of the situation and of the words used to describe it. A computer, lacking any true understanding of the world, finds the language of the question hopelessly ambiguous. It remains locked in its algorithms. Reducing intelligence to the statistical analysis of large data sets ‘can lead us,’ says Levesque, ‘to systems with very impressive performance that are nonetheless idiot savants.’ They might be great at chess or Jeopardy! or facial recognition or other tightly circumscribed mental exercises, but they ‘are completely hopeless outside their area of expertise.’ Their precision is remarkable, but it’s often a symptom of the narrowness of their perception.” ~ The Glass Cage, pages 120–121
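
To see why the question is so hard without world knowledge, consider a toy heuristic, invented here purely for illustration (it is not anything Levesque or Carr describe): resolve "it" to the nearest preceding candidate noun.

```python
# A deliberately naive pronoun resolver, standing in for any system
# that works from word positions rather than an understanding of the
# world. All of this is invented for illustration.

CANDIDATE_NOUNS = {"ball", "table"}

def resolve_pronoun(sentence, pronoun="it"):
    """Return the candidate noun appearing closest before the pronoun."""
    words = sentence.lower().replace(".", "").split()
    pronoun_index = words.index(pronoun)
    preceding = [w for w in words[:pronoun_index] if w in CANDIDATE_NOUNS]
    return preceding[-1]  # nearest preceding candidate wins

for material in ("Styrofoam", "steel"):
    sentence = (f"The large ball crashed right through the table "
                f"because it was made of {material}.")
    print(f"{material}: 'it' -> {resolve_pronoun(sentence)}")
```

The heuristic answers "table" whichever material appears, while a person flips the answer depending on whether the sentence says Styrofoam or steel. That flip requires knowing what the words refer to, and that knowledge is exactly what the word statistics don't contain.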

We may be able to simulate thinking in very specific cases involving individual tasks, or perhaps a very small universe of related tasks. But even a sophisticated simulation of thinking wouldn’t be the same as thinking. By confusing superhuman performance in an individual area involving consistent patterns or routines with intelligence, we simultaneously hugely overestimate our machines and underestimate our own intellectual capacities. Machines can do much more than rocks, but they are just as lacking in self-awareness, creativity, emotion, and all the other things that go into the development and expression of intelligence in humans and other creatures.

“What’s the difference?” Luciano Floridi asks. “The same as between you and the dishwasher when washing the dishes. What’s the consequence?” Floridi continues. “That any apocalyptic vision of AI can be disregarded. We are and shall remain, for any foreseeable future, the problem, not our technology.” In other words, perhaps if we stopped outsourcing so much of our own innate intelligence to our GPS units and Google’s search engine we would realize just how stupid our machines really are.

