Computers Never Make Mistakes

Back in the early 1970s, when I was just starting to learn computer programming, using a computer was a much different process than it is today. First you'd write down your program on paper, then you'd type it onto punched cards, then you'd hand the deck of cards to the computer operator. A few hours later, you could pick up a printout of your program's results. Quite often your program wouldn't work, and all you'd get was a couple of pages of incomprehensible error messages. To learn what the error messages meant, you'd go to the desk at the back of the keypunch room, where a 60-foot-long rack contained all the documentation for the IBM 360 computer series. Imagine a room with an entire 60-foot wall lined with tables, and on top of them, 60 feet of documentation in racks placed side by side.



Trying to find useful, relevant data in this rack was like trying to find a single page in a book 60 feet thick. In fact, that's exactly what it was. Even worse, there were dozens of indexes spread throughout the 60 linear feet of documents: one index could send you to another index, which then referred to specific pages, which might then refer you to updates or errata inserted erratically throughout the rack. When the room was busy, there would often be several people reading different sections of the rack, taking notes, moving to a different section, taking more notes, and so on. Some sections of the rack were more useful than others, and it was common to see people standing in line behind someone, waiting to use that section of the rack.
There were only a few people who knew the entirety of the documentation: the few Comp Sci grad students who maintained the racks by inserting the monthly updates and errata. It must have been extremely tedious to insert updated pages throughout the 60-foot rack, but in the process, they learned where all the useful information was.
These same grad students also worked in the "debug room," which was a small office where you could ask for help interpreting your program errors. People would line up in the hall outside the office, waiting to seek advice from "the debugger." The debugger had a short rack on his desk containing a master index of the big 60-foot documentation rack. He would look at your program printout, and if the problem was not obvious, he'd look through the index and refer you back to a specific document in the big rack. Then you'd go back and read some more documentation, figure out what went wrong, punch a few cards to correct the error, search through your card deck to swap in the corrected cards, and resubmit your program. And the cycle would start all over again.
The one thing I remember most vividly about the debug room was a big sign hanging on the wall; it was the first thing you'd see upon entering the room. The sign had been drawn on a computer pen plotter, in an oddly machine-like character set, and it said:
Computers never make mistakes. All "computer errors" are human errors.
Even today, this is the hardest thing for computer users to understand. If a computer does not give you the results you expected, it is because you gave it bad instructions. Computers follow your instructions faithfully, and will accurately produce the incorrect answer that you incorrectly specified. In those olden days, computers were not so fault-tolerant: if your program had errors, it would stop and produce nothing but an error message. But modern computer programs anticipate that their users might be idiots, and are designed to gracefully handle even the most stupid, nonsensical requests. I suspect this is a very bad thing, because it allows people to get results even when their requests are imprecise. I think it would be better to be strict, returning no results in response to vague inputs.
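To make the contrast concrete, here is a tiny sketch in Python. It's a made-up example of my own, not any particular program: the forgiving version happily guesses at a vague request, while the strict version refuses to answer until you say exactly what you mean.

```python
# A toy contrast between a "strict" program and a "forgiving" one.
# Everything here is hypothetical -- invented names, not any real software.

CATALOG = {"widget, small": 1.25, "widget, large": 2.50}

def strict_price(item):
    """Refuse to guess: an imprecise request gets an error, not a result."""
    if item not in CATALOG:
        raise ValueError(f"Unknown item {item!r}. Say exactly what you mean.")
    return CATALOG[item]

def forgiving_price(item):
    """Tolerate sloppy input: return the first rough match, right or not."""
    for name, price in CATALOG.items():
        if item.lower() in name:
            return price  # a guess -- possibly not the item the user meant
    return None  # and if nothing matches, silently say nothing at all

print(forgiving_price("widget"))      # 1.25 -- but which widget did you mean?
print(strict_price("widget, large"))  # 2.5 -- a precise request, a precise answer
# strict_price("widget")              # would raise: the vague request is rejected
```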
At the risk of offending a dear friend, I will use him as a case in point. I have a friend who often asks me for technical support, but his phone calls sometimes take hours, primarily due to his vague descriptions of his problems. He'll phone me up and say things like "I'm trying to print, but I press the whatchamacallit and nothing happens." No, I'm not using the word "whatchamacallit" as a euphemism; he really does say "whatchamacallit." When I object to his vague descriptions, he says I'm supposed to anticipate what he is doing because I know the programs so well. This is precisely NOT how to get good help. If I don't know precisely what you're doing wrong, how can I tell you how to do it right?
To use a computer and get good results, you must operate it with precision. But first, you must think with precision. This is no different than any other complex task in life. Human beings are not used to thinking with precision. This is why it is easier to fix computers than to assist users in operating them. Computers always give you a precise report on what they are doing. Users often don't know what they are doing.
After decades of providing tech support to thousands of computer users, I made an observation that I have formulated as a new law. I call it "The Law of Infinite Stupidity." It states:
There are a finite number of ways to do something right. But there are an infinite number of ways to do something wrong.

3 Comments

I disagree. There are just as many prime numbers as there are even numbers.

Some just occur more often than others.

Doing something wrong is how you find the right way.

The problem is that today we do not completely unleash computers to discover their own way. We expect them to follow impossibly narrow paths of execution to arrive at a destination we are not completely familiar with ourselves.

Perhaps I stated that badly, in an attempt to simplify the situation. But I assure you, I've done the math. This is a special case of Turing's Incomputability Theorem, and requires a little set theory and cardinal math.


Remember, I was talking about computers, which are finite state machines. Given that computers have a finite amount of memory and limited processing capacity, there are only a finite number of states the computer can arrive at which correspond to "correct answers" to any one specific question. There are also only a finite number of computer cycles you can use to arrive at an answer without the program running forever, or else the program is "Turing Incomputable." Therefore, there are only a finite number of computer programs that arrive at correct answers.


There are a finite number of states a computer can arrive at that correspond to an incorrect answer. There are a finite number of starting states that begin computation for an answer. However, the set of computer algorithms that correspond to incorrect answers to a specific problem is infinite, since we can include programs that never finish and are Turing Incomputable, or programs that repeatedly request new input forever in an infinite loop.


So perhaps a more mathematically-accurate way of stating The Law of Infinite Stupidity would be, "The set of ways to arrive at a correct answer to any single computing problem is countably finite (i.e. less than Aleph null). However, the number of ways to arrive at something other than the correct answer is transfinite."
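For anyone who wants the bookkeeping spelled out, here is a rough sketch in LaTeX, using my own informal notation and assumptions (one fixed problem, a machine whose memory holds at most L symbols over a finite alphabet, and a budget of T cycles); it is a sketch, not a rigorous proof.

```latex
% Informal sketch of the finite-vs-infinite claim, not a rigorous proof.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Fix one problem, a finite alphabet $\Sigma$, a memory that holds at most
$L$ symbols, and a budget of $T$ cycles. Then let
\[
  \mathrm{Correct} = \{\, P : P \text{ fits in memory, halts within } T
    \text{ cycles, and prints the right answer} \,\},
\]
\[
  \mathrm{Wrong} = \{\, P : P \text{ halts with a wrong answer, or never halts at all} \,\}.
\]
At most $|\Sigma|^{L}$ programs fit in memory at all, so
$|\mathrm{Correct}| \le |\Sigma|^{L} < \aleph_0$.
But counting algorithms in the abstract (including the ones that never
finish), the program ``idle for $n$ steps, then loop forever'' is a distinct
wrong algorithm for every $n$, so $|\mathrm{Wrong}| \ge \aleph_0$.
\end{document}
```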


It is my conjecture that the set of these incorrect answers exceeds Aleph 1. I have discovered a truly wonderful proof of this proposition, but this comment field is too small to contain it.

wow. that's the best response to a blog comment i've ever read!
