The goal of this book is to teach you to think like a computer scientist. I like the way computer scientists think because they combine some of the best features of mathematics, engineering, and natural science. Like mathematicians, computer scientists use formal languages to denote ideas (specifically computations). Like engineers, they design things, assembling components into systems and evaluating tradeoffs among alternatives. Like scientists, they observe the behavior of complex systems, form hypotheses, and test predictions.
The single most important skill for a computer scientist is problem-solving. By that I mean the ability to formulate problems, think creatively about solutions, and express a solution clearly and accurately. As it turns out, the process of learning to program is an excellent opportunity to practice problem-solving skills. That’s why this chapter is called “The way of the program.”
On one level, you will be learning to program, which is a useful skill by itself. On another level you will use programming as a means to an end. As we go along, that end will become clearer.
As you might infer from the name “high-level language,” there are also low-level languages, sometimes referred to as machine language or assembly language. Loosely speaking, computers can only execute programs written in low-level languages. Thus, programs written in a high-level language have to be translated before they can run. This translation takes some time, which is a small disadvantage of high-level languages.
But the advantages are enormous. First, it is much easier to program in a high-level language; by “easier” I mean that the program takes less time to write, it’s shorter and easier to read, and it’s more likely to be correct. Second, high-level languages are portable, meaning that they can run on different kinds of computers with few or no modifications. Low-level programs can run only on one kind of computer, and have to be rewritten to run on another.
Due to these advantages, almost all programs are written in high-level languages. Low-level languages are used only for a few special applications.
There are two ways to translate a program: interpreting or compiling. An interpreter is a program that reads a high-level program and does what it says. In effect, it translates the program line-by-line, alternately reading lines and carrying out commands.
A compiler is a program that reads a high-level program and translates it all at once, before executing any of the commands. Often you compile the program as a separate step, and later execute the compiled code. In this case, the high-level program is called the source code, and the translated program is called the object code or the executable.
As an example, suppose you write a program in C. You might use a text editor to write the program (a text editor is a simple word processor that does not store fancy font or format settings). When the program is finished, you might save it in a file named program.c, where “program” is an arbitrary name you make up, and the suffix .c is a convention that indicates that the file contains C source code.

Then, depending on what your programming environment is like, you might leave the text editor and run the compiler. The compiler would read your source code, translate it, and create a new file named program.o to contain the object code, or program.exe to contain the executable.
In Mark Slagell’s English translation of the Japanese “Ruby User’s Guide”, Matz calls Ruby “an interpreted scripting language for quick and easy object-oriented programming.” What does this mean?
Ruby is an interpreted language, not a compiled one. The Ruby interpreter must be installed on any computer where you hope to run Ruby programs, whether your own or anyone else’s. The term “scripting” is hard to pin down. Traditionally, a script is an uncompiled, relatively small program that makes calls to the operating system or to other (usually compiled) programs. For this reason, a scripting language is sometimes called a “glue language.” Ruby can be thought of as a glue or scripting language, but it is also much more. Its suitability for much larger, more complex programming projects is due to its object-oriented nature. I cover object-oriented programming later.
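As a small, hedged sketch of what “glue” can mean, the script below asks the operating system for the contents of the current directory and summarizes the files by extension; what it prints depends entirely on whatever happens to be in that directory.

```ruby
# A tiny "glue" script: ask the operating system for the files in the
# current directory and count them by extension.
counts = Hash.new(0)
Dir.entries(".").each do |name|
  next if File.directory?(name)
  counts[File.extname(name)] += 1
end

counts.sort.each do |ext, n|
  puts "#{ext.empty? ? '(no extension)' : ext}: #{n}"
end
```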
A program is a sequence of instructions that specifies how to perform a computation. The computation might be something mathematical, such as solving a system of equations or finding the roots of a polynomial, but it can also be a symbolic computation, such as searching and replacing text in a document or (strangely enough) compiling a program.
The instructions, which are called statements, look different in different programming languages, but there are a few basic operations most languages can perform:

input: Get data from the keyboard, a file, or some other device.

output: Display data on the screen or send data to a file or other device.

math: Perform basic mathematical operations like addition and multiplication.

conditional execution: Check for certain conditions and execute the appropriate code.

repetition: Perform some action repeatedly, usually with some variation.
That’s pretty much all there is to it. Every program you’ve ever used, no matter how complicated, is made up of statements that perform these operations. Thus, one way to describe programming is the process of breaking up a large, complex task into smaller and smaller subtasks until eventually the subtasks are simple enough to be performed with one of these basic operations.
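For a concrete taste, here is a short Ruby sketch that exercises math, conditional execution, repetition, and output (input is left out so the program runs without waiting at the keyboard):

```ruby
x = 6 * 7            # math: evaluate an arithmetic expression
if x > 40            # conditional execution: act only when a condition holds
  puts x             # output: display a value on the screen
end
x += 1 while x < 45  # repetition: repeat an action until a condition changes
puts x
```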
Programming is a complex process, and since it is done by human beings, it often leads to errors. For whimsical reasons, programming errors are called bugs and the process of tracking them down and correcting them is called debugging.
There are a few different kinds of errors that can occur in a program, and it is useful to distinguish between them in order to track them down more quickly.
The interpreter can only understand a program if the program is syntactically correct; otherwise, the interpretation fails and you will not be able to run your program. Syntax refers to the structure of your program and the rules about that structure.
For example, an English sentence must begin with a capital letter and end with a period. this sentence contains a syntax error. So does this one
For most readers, a few syntax errors are not a significant problem, which is why we can read the poetry of e e cummings without spewing error messages.
Interpreters (and compilers) are not so forgiving. If there is a single syntax error anywhere in your program, the interpreter will print an error message and quit, and you will not be able to run your program.
To make matters worse, there are many syntax rules in Ruby, and the error messages you get from the interpreter are often not very helpful. During the first few weeks of your programming career, you will probably spend a lot of time tracking down syntax errors. As you gain experience, though, you will make fewer errors and find them faster.
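You can provoke a syntax error deliberately. In the sketch below, the closing quotation mark is missing; feeding the broken statement to the interpreter through eval raises a SyntaxError (the exact message text varies between Ruby versions):

```ruby
begin
  eval('puts "Hello, world.')   # the closing quotation mark is missing
rescue SyntaxError => e
  puts "the interpreter rejected the program: #{e.class}"
end
```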
The second type of error is a run-time error, so-called because the error does not appear until you run the program, after the syntax has been verified as correct.
The good news for now is that Ruby tends to be a safe language, which means that run-time errors are rare, especially for the simple sorts of programs we will be writing in the next few chapters.
Later on as you work your way through the book, you will probably start to see more run-time errors, especially when I start talking about objects and references (Chapter 8).
In Ruby, run-time errors are called exceptions. An exception outputs a message indicating what happened and what the program was doing when it happened. This information is useful for debugging.
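Division by zero is the classic example: the statement below is syntactically fine, so the program starts running, but the division cannot be carried out and Ruby raises an exception:

```ruby
begin
  puts 1 / 0                      # syntactically correct, but fails at run time
rescue ZeroDivisionError => e
  puts "exception: #{e.message}"  # the exception says what happened
end
```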
The third type of error is the logical or semantic error. If there is a logical error in your program, it will compile and run successfully, in the sense that the computer will not generate any error messages, but it will not do the right thing. It will do something else. Specifically, it will do what you told it to do.
The problem is that the program you wrote is not the program you wanted to write. The meaning of the program (its semantics) is wrong. Identifying logical errors can be tricky, since it requires you to work backwards by looking at the output of the program and trying to figure out what it is doing.
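Here is a small example of a semantic error. The programmer wants the average of 5 and 8, which is 6.5. The program runs without any complaint, but because both operands are integers, Ruby performs integer division and quietly discards the remainder:

```ruby
total = 5 + 8
puts total / 2     # prints 6   -- integer division: not what was wanted
puts total / 2.0   # prints 6.5 -- the intended computation
```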
One of the most important skills you will acquire from working through the book is debugging. Although it can be frustrating, debugging is one of the most intellectually rich, challenging, and interesting parts of programming.
In some ways, debugging is like detective work. You are confronted with clues, and you have to infer the processes and events that led to the results you see.
Debugging is also like an experimental science. Once you have an idea what is going wrong, you modify your program and try again. If your hypothesis is correct, then you can predict the result of the modification, and you take a step closer to a working program. If your hypothesis is wrong, you have to come up with a new one. As Sherlock Holmes pointed out, “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.” (from A. Conan Doyle’s The Sign of Four).
For some people, programming and debugging are the same thing. That is, programming is the process of gradually debugging a program until it does what you want. The idea is that you should always start with a working program that does something, and make small modifications, debugging them as you go, so that you always have a working program.
For example, Linux is an operating system that contains thousands of lines of code, but it started out as a simple program Linus Torvalds used to explore the Intel 80386 chip. According to Larry Greenfield, “One of Linus’s earlier projects was a program that would switch between printing AAAA and BBBB. This later evolved to Linux” (from The Linux Users’ Guide Beta Version 1).
In later chapters I will make more suggestions about debugging and other programming practices.
Natural languages are the languages that people speak, such as English, Spanish, or French. They were not designed by people (although people try to impose some order on them); they evolved naturally.
Formal languages are languages that are designed by people for specific applications. For example, the notation that mathematicians use is a formal language that is particularly good at denoting relationships among numbers and symbols. Chemists use a formal language to represent the chemical structure of molecules. And most importantly:
Programming languages are formal languages that have been designed to express computations.
As I mentioned before, formal languages tend to have strict rules about syntax. For example, 3 + 3 = 6 is a syntactically correct mathematical statement, but 3 += 6$ is not. Also, H2O is a syntactically correct chemical name, but 2pQ is not.
Syntax rules come in two flavors, pertaining to tokens and structure. Tokens are the basic elements of the language, such as words and numbers and chemical elements. One of the problems with 3 += 6$ is that $ is not a legal token in mathematics. Similarly, 2pQ is not legal because there is no element with the abbreviation pQ, nor can there be.
The second type of syntax rule pertains to the structure of a statement; that is, the way the tokens are arranged. The statement 3 += 6$ is structurally illegal, because you can’t have a plus sign immediately before an equals sign. Similarly, molecular formulas have to have subscripts after the element name, not before.
When you read a sentence in English or a statement in a formal language, you have to figure out what the structure of the sentence is (although in a natural language you do this unconsciously). This process is called parsing.
For example, when you hear the sentence, “The other shoe fell,” you understand that “the other shoe” is the subject and “fell” is the verb. Once you have parsed a sentence, you can figure out what it means, that is, the semantics of the sentence. Assuming that you know what a shoe is, and what it means to fall, you will understand the general implication of this sentence.
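Ruby even lets you watch its own parser at work. Ripper, which ships with the standard library, returns the structure the interpreter builds for a statement as nested arrays; the sketch below parses a one-line program (the exact shape of the tree can differ slightly between Ruby versions):

```ruby
require 'ripper'
require 'pp'

# Show the parse tree the interpreter builds for a simple statement.
pp Ripper.sexp('puts "The other shoe fell"')
```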
Although formal and natural languages have many features in common—tokens, structure, syntax and semantics—there are many differences.
People who grow up speaking a natural language (everyone) often have a hard time adjusting to formal languages. In some ways the difference between formal and natural language is like the difference between poetry and prose, but more so:

poetry: Words are used for their sounds as well as their meaning, and the whole poem together creates an effect or emotional response. Ambiguity is not only common but often deliberate.

prose: The literal meaning of words is more important, and the structure contributes more meaning. Prose is more amenable to analysis than poetry but still often ambiguous.

program: The meaning of a computer program is unambiguous and literal, and can be understood entirely by analysis of the tokens and structure.
Here are some suggestions for reading programs (and other formal languages). First, remember that formal languages are much more dense than natural languages, so it takes longer to read them. Also, the structure is very important, so it is usually not a good idea to read from top to bottom, left to right. Instead, learn to parse the program in your head, identifying the tokens and interpreting the structure. Finally, remember that details matter. Little things such as spelling errors and bad punctuation, which you can get away with in natural languages, can make a big difference in a formal language.
Traditionally the first program people write in a new language is called “Hello, World” because all it does is display the words “Hello, World.” In Ruby, this program looks like this:
# generate some simple output

puts "Hello, world."
Some people judge the quality of a programming language by the simplicity of the “Hello, World” program. By this standard, Ruby excels. In fact, the “Hello, World” program can be written even more simply:
puts "Hello, world."
Even the simplest program can contain features that are hard to explain to beginning programmers. Let’s examine the longer “Hello, World” program.
The very first line begins with #. This indicates that the line contains a comment. When the Ruby interpreter sees a #, it ignores everything from there until the end of the line. A comment is usually a bit of English text that you can put anywhere in a program, usually to explain what the program does.
The second line happens to be an empty line. Just as the Ruby interpreter ignores comments, it also ignores blank lines. Therefore, writing blank lines in your programs is entirely optional. A blank line here and there in your source code improves its readability by humans. Ruby doesn’t care how easy on the eye your code is, but you and others do. Insert blank lines in your code as you see fit.
The flow of execution of a Ruby program generally proceeds from the top of the code to the bottom. This program starts with a comment and proceeds to a blank line, both of which the Ruby interpreter ignores. So far, Ruby has not been told to do anything!
Finally, the last line of this program is an executable statement, something for Ruby to do. It is a puts statement, which tells Ruby to put a string of text on the output screen (“puts” stands for “put string” and is pronounced “put”-with-an-s-at-the-end, not “putts” as in golf). Hence, we can write the “Hello, World” program in just this one line.
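One detail worth knowing, easy to verify for yourself: puts appends a newline to what it prints, while its sibling print does not, so consecutive print calls continue on the same line:

```ruby
puts "Hello, world."   # output ends with a newline
print "Hello, "        # no newline; the next output continues here
print "world."
puts ""                # finish the line
```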
A puts statement causes output to be displayed on the screen.