HOW TO DO PROGRAMMING WITH JUST SIMPLE STEPS
Getting Started with C or C++
Exactly How to Get Started with C/C++ Today
Okay, let's cut to the chase--you want to learn to program in C/C++ and you want to know exactly what you should do, right now.
If you're willing to spend a few dollars, I'd strongly recommend you buy my ebook, Jumping into C++, which will take you from absolutely nothing to getting a full working environment, all the core C++ syntax, lots of tips on how to design your programs, sample code, practice problems, quizzes and advanced C++ class design stuff too. You can check out a sample chapter or buy now.
Alternatively, you can take a web-only route:
Set up a C/C++ compiler so you can run your code
Start our C++ Tutorial (If you want to learn C, go here. Not sure? I suggest C++.)
Quiz yourself
Solve practice programming problems
If you prefer physical books, take a look at the C++ Beginner to C++ Expert book series.
If you run into problems, take a look at these articles:
The 5 Most Common Problems New Programmers Face--And How You Can Solve Them
5 Ways to Learn Programming Faster
Finally, keep up with the latest information by subscribing to cprogramming.com by email or RSS.
The rest of this page provides answers to some of the most common questions new programmers have about C, C++ and programming.
What is C, What is C++, and What is the Difference?
C is a programming language originally developed for developing the Unix operating system. It is a low-level and powerful language, but it lacks many modern and useful constructs. C++ is a newer language, based on C, that adds many more modern programming language features that make it easier to program than C.
Basically, C++ maintains all aspects of the C language, while providing new features to programmers that make it easier to write useful and sophisticated programs.
For example, C++ makes it easier to manage memory and adds several features to allow "object-oriented" programming and "generic" programming. Basically, it makes it easier for programmers to stop thinking about the nitty-gritty details of how the machine works and think about the problems they are trying to solve.
So, what is C++ used for?
C++ is a powerful general-purpose programming language. It can be used to create small programs or large applications. It can be used to make CGI scripts or console-only DOS programs. C++ allows you to create programs to do almost anything you need to do. The creator of C++, Bjarne Stroustrup, has put together a partial list of applications written in C++.
How do you learn C++?
No special knowledge is needed to learn C++, and if you are an independent learner, you can probably learn C++ from online tutorials or from books. There are plenty of free tutorials online, including Cprogramming.com's C++ tutorial - one which requires no prior programming experience. You can also pick out programming books from our recommendations.
While reading a tutorial or a book, it is often helpful to type - not copy and paste (even if you can!) - the code into the compiler and run it. Typing it yourself will help you to get used to the typical typing errors that cause problems and it will force you to pay attention to the details of programming syntax. Typing your program will also familiarize you with the general structure of programs and with the use of common commands. After running an example program - and after making certain that you understand how it works - you should experiment with it: play with the program and test your own ideas. By seeing which modifications cause problems and which sections of the code are most important to the function of the program, you should learn quite a bit about programming.
Try our C++ Beginner to C++ Expert recommended book series, a six-book set designed to give you maximal information and help take you from beginner to C++ master.
You may also want to read about The 5 Most Common Problems New Programmers Face--And How You Can Solve Them.
What do I need to start programming in C or C++?
In order to make usable programs in C or C++, you will need a compiler. A compiler converts source code - the actual instructions typed by the programmer - into an executable file. Numerous compilers are available for C and C++.
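To make this concrete, here is about the smallest complete C++ program, along with one way to compile it if you have the g++ compiler mentioned below (the file name hello.cpp is just an example):

    // hello.cpp - about the smallest complete C++ program
    #include <iostream>

    int main() {
        std::cout << "Hello, world!" << std::endl;
        return 0;
    }

With g++ installed, the command "g++ hello.cpp -o hello" turns the source code into an executable file named hello, which you can then run.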
Can you help me set up a compiler?
Absolutely! For beginners, Code::Blocks with MinGW is our recommended free and easy-to-use Windows compiler. For OS X, I recommend Apple XCode, and for Linux, g++. All of these links will help you get up and running and ready to start programming.
Do I need to know C to learn C++?
No. C++ is a superset of C; (almost) anything you can do in C, you can do in C++. If you already know C, you will easily adapt to the object-oriented features of C++. If you don't know C, you will have to learn the syntax of C-style languages while learning C++, but you shouldn't have any conceptual difficulties.
What's the point of learning to program? What can I get out of it?
Ah, a skeptic! You can get a lot of things out of programming. For one thing, it's just plain fun. You can read my opinion on the matter here: Why Learn to Program?
I want to make games in C++, what should I do?
It may be a challenging road, but it is doable. This article has more information: So You Want to Be a Game Programmer?
When you've learned a bit of C++, don't miss Same Game - a Simple Game from Start to Finish which will teach you to create a game, starting from nothing and ending in a fully playable game.
What does it take to learn to be a programmer?
Great question! Here's an article about what it takes to be a programmer!
Do I need to know math to be a programmer?
No! At least, not too much. Most of programming is about design and logical reasoning, not about being able to quickly perform arithmetic or deeply understand algebra or calculus. The carryover between math and programming is primarily logical reasoning and precise thinking. Only if you want to program advanced 3D graphics engines or do other specialized numerical programming will you need real mathematical skill.
How should I think about Program Design?
Try Thinking about Programming - A Beginner's Guide
Help, my program doesn't work!
Take a look at a list of common programming mistakes, send us an email or, if you're really stuck, join our message board or ask an expert!
Where can I learn more about the history of computer science?
Try this article on computer science.
Why do I want to learn C?
Today, most people don't need to know how a computer works. Most people can simply turn on a computer or a mobile phone and point at some little graphical object on the display, click a button or swipe a finger or two, and the computer does something. An example would be to get weather information from the net and display it. How to interact with a computer program is all the average person needs to know.
But, since you are going to learn how to write computer programs, you need to know a little bit about how a computer works. Your job will be to instruct the computer to do things.
proc-ess / Noun:
A series of actions or steps taken to achieve an end.
pro-ce-dure / Noun:
A series of actions conducted in a certain order.
al-go-rithm / Noun:
An ordered set of steps to solve a problem.
Basically, writing software (computer programs) involves describing processes and procedures; it involves the authoring of algorithms. Computer programming involves developing lists of instructions - the source code representation of software. The stuff that these instructions manipulate are different types of objects, e.g., numbers, words, images, sounds, etc... Creating a computer program can be like composing music, like designing a house, like creating lots of stuff. It has been argued that in its current state it is an art, not engineering.
An important reason to consider learning about how to program a computer is that the concepts underlying this will be valuable to you, regardless of whether or not you go on to make a career out of it. One thing that you will learn quickly is that a computer is very dumb, but obedient. It does exactly what you tell it to do, which is not necessarily what you wanted. Programming will help you learn the importance of clarity of expression.
A deep understanding of programming, in particular the
notions of successive decomposition as a mode of analysis
and debugging of trial solutions, results in significant
educational benefits in many domains of discourse,
including those unrelated to computers and information
technology per se.
(Seymour Papert, in "Mindstorms")
It has often been said that a person does not really
understand something until he teaches it to someone else.
Actually a person does not really understand something
until after teaching it to a computer, i.e., expressing it
as an algorithm.
(Donald Knuth, in "American Mathematical Monthly," 81)
Computers have proven immensely effective as aids to clear
thinking. Muddled and half-baked ideas have sometimes
survived for centuries because luminaries have deluded
themselves as much as their followers or because lesser
lights, fearing ridicule, couldn't summon up the nerve to
admit that they didn't know what the Master was talking
about. A test as near foolproof as one could get of whether
you understand something as well as you think is to express
it as a computer program and then see if the program does
what it is supposed to. Computers are not sycophants and
won't make enthusiastic noises to ensure their promotion
or camouflage what they don't know. What you get is what
you said.
(James P. Hogan in "Mind Matters")
But, most of all, it can be lots of fun! An associate once said to me "I can't believe I'm paid so well for something I love to do."
Just what do the instructions a computer understands look like? And what kinds of objects do the instructions manipulate? By the end of this lesson you will be able to answer these questions. But first, let's try to write a program in the English language.
PROGRAMMING USING THE ENGLISH LANGUAGE
Remember what I said in the Introduction to this lesson?
Writing software, computer programs, is a lot like
writing down the steps it takes to do something.
Before we see what a computer programming language looks like, let's use the English language to describe how to do something as a series of steps. A common exercise that really gets you thinking about what computer programming can be like is to describe a process you are familiar with.
Describe how to make a peanut butter and jelly sandwich.
Rather than write my own version of this exercise, I searched the Internet for the words "computer programming sandwich" using Google. One of the hits returned was http://teachers.net/lessons/posts/2166.html. At the link, Deb Sweeney (Tamaqua Area Middle School, Tamaqua, PA) described the problem as:
Objective: Students will write specific and sequential steps
on how to make a peanut butter and jelly sandwich.
Procedure: Students will write a very detailed and step-by-step
paragraph on how to make a peanut butter and jelly
sandwich for homework. The next day, the students will
then input (read) their instructions to the computer
(teacher). The teacher will then "make" the programs,
being sure to do exactly what the students said...
When this exercise is directed by an experienced teacher or mentor it is excellent for demonstrating how careful you need to be, how detailed you need to be, when writing a computer program. Here is teacher/mentor support material.
Programming in a natural language, say the full scope of the English language, seems like a very difficult task. But, before moving on to languages we can write programs in today, I want to leave on a high note. Click here to read about how Stephen Wolfram sees programming in a natural language happening.
PROGRAMMING LANGUAGES - HIGH-LEVEL LANGUAGES
Almost all of the computer programming these days is done with high-level programming languages. There are lots of them and some are quite old. COBOL, FORTRAN, and Lisp were devised in the 1950s!!! As you will see, high-level languages make it easier to describe the pieces of the program you are creating. They help by letting you concentrate on what you are trying to do rather than on how you represent it in a specific computer architecture. They abstract away the specifics of the microprocessor in your computer. And, all high-level languages come with large sets of common stuff you need to do, called libraries.
In this introduction, you will work with two computer programming languages: Logo and Java. Logo comes from Bolt, Beranek & Newman (BBN) and Massachusetts Institute of Technology (MIT). Seymour Papert, a scientist at MIT's Artificial Intelligence Laboratory, and co-workers championed this computer programming language in the 70s. More research of its use in educational settings exists than for any other programming language. In fact, the fairly new Scratch Programming Environment (also from MIT) consists of a modern graphical user interface on top of Logo-like functionality.
Java is a fairly recent programming language. It appeared in 1995 just as the Internet was starting to get lots of attention. Java was invented by James Gosling, working at Sun Microsystems. It's sort-of a medium-level language. One of the big advantages of learning Java is that there is a lot of software already written ( see: Java Class Library) which will help you write graphical programs that run on the Internet. You get to take advantage of software that thousands of programmers have already written. Java is used in a variety of applications, from mobile phones to massive Internet data manipulation. You get to work with window objects, Internet connection objects, database access objects and thousands of others. Java is the language used to write Android apps.
So, why do these lessons start with the Logo programming language? No other language has the depth of research devoted to its use in educational settings. Hundreds of books and research papers have been written regarding its use in the classroom. Cynthia Solomon, who started MIT's Logo Group with Dr. Papert, has put together a comprehensive website on Logo: logothings.wikispaces.com.
I like using the Logo language to teach introductory programming because it is very easy to learn. The faster you get to write interesting computer programs the more fun you will have. And... having fun is important! But do not let Logo's simplicity fool you into thinking it is just a toy programming language. Logo is a derivative of the Lisp programming language, a very powerful language still used today to tackle some of the most advanced research being performed. Brian Harvey shows the power of Logo in his Computer Science Logo Style series of books. Volume 3: Beyond Programming covers six college-level computer science topics with Logo.
Both Logo and Java have the same sort of stuff needed to write computer programs. Each has the ability to manipulate objects (for example, arithmetic functions for working with numbers). Each lets you compare objects and do a variety of things depending on the outcome of the comparison. Most importantly, they let you define named procedures. Named procedures are lists of built-in instructions and other named procedures. The abstraction of naming stuff lets you write programs in a language you yourself define. This is the stuff that programming is really all about, as you will see.
Just to give you a feel for what programming is like in a high-level language, here's a program that greets us, pretending to know English.
print [Hello world!]
This is one of the simplest programs that can be written in most high-level languages. PRINT is a command in Logo. When it is performed, it takes whatever follows it and displays it. The "Hello world" program is famous; check out its description on Wikipedia by clicking here.
In addition to commands, Logo has operators that output some sort of result. Although it's a bit contrived, here is a program that displays the product of a constant number (ten) and a random number in the range of zero through fourteen.
print product 10 (random 15)
In this source code, the PRINT command's input is the output of the PRODUCT operator. PRODUCT multiplies whatever follows it by whatever follows that and outputs the result. So, PRODUCT needs two inputs. RANDOM is an operator that outputs a number that is greater than or equal to zero (0) and less than the number following it. So, PRODUCT gets its second input from the output of RANDOM.
Confusing?
Figure 1.1 shows a plumbing diagram, a graphical representation of how all these procedures fit together.
[Image: plumbing diagram connecting RANDOM's output to PRODUCT, and PRODUCT's output to PRINT]
Figure 1.1
Still confusing? Don't worry, we will get into the details of Logo operators in lesson 8.
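If it helps, here is a rough C++ analogue of that one-line Logo program. This is just a sketch to show the same plumbing - rand() is C++'s library routine, not how Logo's RANDOM actually works:

    #include <cstdlib>   // std::rand, std::srand
    #include <ctime>     // std::time
    #include <iostream>

    int main() {
        std::srand(static_cast<unsigned>(std::time(nullptr)));  // seed the random number generator
        int r = std::rand() % 15;           // like RANDOM 15: a number from 0 through 14
        std::cout << 10 * r << std::endl;   // like PRINT PRODUCT 10 ...: multiply, then display
    }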
Finally, here's a snippet of advanced Logo source code, just to give you a feeling for what it looks like. This is a procedure definition for selecting the maximum number from a list of numbers.
to getMax :maxNum :numList
  if empty? :numList [output :maxNum]        ; no numbers left - output the max found
  if greater? (first :numList) :maxNum [output getMax (first :numList) (butfirst :numList)]
  output getMax :maxNum (butfirst :numList)  ; first number is not a new max - keep going
end
Again, do not worry if you do not understand exactly how this procedure works. It will be a while before you will be writing anything like this. But, I want you to see that the words that make up the program's instructions and the instructions themselves are similar to English sentences, e.g., the first line and a half in the procedure are similar to the sentences:
If the list of numbers to process is empty then output the maximum number processed.
If the first number in the list is greater than the maximum number processed so far then ...
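For comparison, here is the same recursive idea sketched in C++ (a vector of ints stands in for the Logo list; the names mirror the Logo procedure):

    #include <cstddef>
    #include <vector>

    // Recursively pick the maximum, mirroring the Logo getMax procedure.
    int getMax(int maxNum, const std::vector<int>& numList, std::size_t i = 0) {
        if (i == numList.size()) return maxNum;    // list empty: output the max found so far
        if (numList[i] > maxNum)                   // new max found
            return getMax(numList[i], numList, i + 1);
        return getMax(maxNum, numList, i + 1);     // keep the current max, process the rest
    }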
So, a high-level programming language is *sort-of* like English - just one step closer to the language a computer really understands. Now let's move on to what a computer's native language looks like when it is given a symbolic representation.
PROGRAMMING LANGUAGES - ASSEMBLER LANGUAGE
One abstraction layer above a computer's native language is assembler language. In assembler language, everything is given human-friendly symbolic names: the programmer works with symbolically named operations that the microprocessor knows how to perform, and the microprocessor's registers and addresses in the computer's memory are also given meaningful names by the programmer. This is actually a very big step up from what a computer understands, but it is still tedious for writing a large program. Assembler language still has a place for little snippets of software that need to interact directly with the microprocessor and/or are executed many, many, many times.
Table 1.1 is an example of DEC PDP-10 assembler language: a function that returns the largest integer in a group of them. The group is named NUMARY and contains NUMNUM members.
Label    OpCode  Register  Memory Address  Index Register  Comment
GETMAX:  MOVSI   T1        400000                          ; init T1 to smallest integer
         MOVE    T2        NUMNUM                          ; get number of array members
GTMAX2:  SOJL    T2        [POPJ P,]                       ; decr idx, if -1 then done
         CAML    T1        NUMARY          (T2)            ; skip if array member > T1
         JRST    GTMAX2                                    ; continue with next number
         MOVE    T1        NUMARY          (T2)            ; T1 gets new max number
         JRST    GTMAX2                                    ; continue with next number
Table 1.1
Hopefully this gives you a feel for how primitive computer instruction sets are. I'm not going to go into the details of every instruction. If you want to go through it in detail on your own, the PDP-10 machine language is detailed here.
A few points I want to expose you to are the general kinds of things being done (a C++ rendering of the same loop follows this list):
moving objects (numbers) into the computer's registers - very fast temporary storage,
decrementing the value in a register,
comparing the contents of a register to some value in the computer's memory, and
transferring control to an instruction that's not in the standard sequential order - down the page.
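Here is what that loop might look like rendered in C++ - a sketch only, with names that mirror Table 1.1:

    #include <climits>

    // The GETMAX idea from Table 1.1, written as a C++ loop.
    int getMax(const int numAry[], int numNum) {
        int max = INT_MIN;                      // like MOVSI T1,400000: start with the smallest integer
        for (int i = numNum - 1; i >= 0; --i)   // like SOJL T2: count the index down
            if (numAry[i] > max)                // like the compare-and-skip instruction
                max = numAry[i];                // like MOVE T1,NUMARY(T2): remember the new max
        return max;                             // like jumping to [POPJ P,]: return with the result
    }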
So, as you've seen, higher-level programming languages provide similar functionality, but in a form that is closer to the English language.
But there is a problem with assembler language - it is unique for every computer architecture. Although most deskside and notebook computers these days use the Intel architecture, this is only recently the case. And... a variety of computer architectures are commonly used in game systems, smart phones, tablets, automobiles, appliances, etc...
Ok, we are almost at a point where I can show you machine language, the *native* language of a computer. But for you to understand it, I'm going to have to explain how everything is represented in a computer.
INSIDE COMPUTERS - BITS AND PIECES
Your computer successfully creates the illusion that it
contains photographs, letters, songs, and movies. All it
really contains is bits, lots of them, patterned in ways
you can't see. Your computer was designed to store just
bits - all the files and folders and different kinds of
data are illusions created by computer programmers.
(Hal Abelson, Ken Ledeen, Harry Lewis, in "Blown to Bits")
Basically, computer instructions perform operations on groups of bits. A bit is either on or off, like a lightbulb. Figure 1.2_a shows an open switch and a lightbulb that is off - just like a transistor in a computer represents a bit with the value: zero. Figure 1.2_b shows the switch in the closed position and the lightbulb is on, again just like a transistor in a computer representing a bit with the value: one.
Figure 1.2_a
Figure 1.2_b
A microprocessor, which is the heart of a computer, is very primitive but very fast. It takes groups of bits and moves around their contents, adds pairs of groups of bits together, subtracts one group of bits from another, compares a pair of groups, etc... - that sort of stuff.
Inside a microprocessor, at a very low level, everything is simply a bunch of switches, also known as bits - things that are either on or off! Time to expand on how this is done; first let's explore how groups of bits can be used to form numbers.
NUMERIC REPRESENTATION WITH BITS
There are only 10 different kinds of people in the world:
those who know binary and those who don't.
- Anonymous
Computers are full of zillions of bits that are either on or off. The way we talk about the value of a bit in the electrical engineering and computer science communities is first as a logical value (true if on, false if off) and second as a binary number (1 if the bit is on and 0 if it's off). Most bits in a computer are manipulated in groups, so we humans need a way to describe groups of bits - the things/objects a computer manipulates. Today, bits are most often grouped in quantities of 8, 16, 32, and 64.
Think about how you write down sequential numbers starting with zero: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, etc... Our decimal number system has ten symbols. In this sequential series, when we ran out of symbols, we combined them. You learned how to do this so long ago, in grade school, that today you just naturally think in terms of single digit numbers, then tens, hundreds, thousands, etc... The decimal number 1234 is one thousand, two hundreds, three tens, and four units.
So, how does the binary number system used inside computers work?
Well, with only two symbols, we would write the same sequential numbers as above: 0, 1, 10, 11, 100, 101, 110, 111, 1000, 1001, 1010, 1011. The decimal number 1234 in binary is 10011010010.
Since even reasonable numbers that we use all the time make for very long binary numbers, the bits are grouped in 3s and 4s, which are simple to convert into numbers in the octal and hexadecimal number systems. For octal, we group three bits together. Take the binary equivalent of decimal 1234, 10011010010, and put spaces between each group of three bits - starting at the right and going left.
10011010010 = 10 011 010 010
Now use the symbols 0, 1, 2, 3, 4, 5, 6, and 7 (eight symbols, so OCTAL) to replace each group.
10 011 010 010 = 2 3 2 2 = 2322
The octal representations of the binary patterns are certainly easier to read, write, and remember than their binary counterparts. An even more compact representation can be achieved by grouping the bits in chunks of four and converting these to hexadecimal numerals.
When you group four bits together and use sixteen symbols (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F) as their abbreviations, you have a hexadecimal representation.
10011010010 = 100 1101 0010 = 4D2
As you continue to explore how computers work, you'll hear more about numbers expressed in octal and hex; these are just more manageable representations of binary information - the digital world.
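If you want to check these conversions yourself, here is a minimal C++ sketch that prints the same number in all three notations:

    #include <bitset>
    #include <iostream>

    int main() {
        int n = 1234;
        std::cout << "binary: " << std::bitset<11>(n) << '\n';  // 10011010010
        std::cout << "octal:  " << std::oct << n << '\n';       // 2322
        std::cout << "hex:    " << std::hex << n << '\n';       // 4d2
    }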
Table 1.2 compares the decimal, binary, octal, and hexadecimal number systems.
Decimal   Binary   Octal   Hex
   1          1      1      1
   2         10      2      2
   3         11      3      3
   4        100      4      4
   5        101      5      5
   6        110      6      6
   7        111      7      7
   8       1000     10      8
   9       1001     11      9
  10       1010     12      A
  11       1011     13      B
  12       1100     14      C
  13       1101     15      D
  14       1110     16      E
  15       1111     17      F
  16      10000     20     10
Table 1.2
So, if the most common groupings of bits in a computer are 8, 16, 32, and 64, what kinds of numbers can these groups represent?
A group of eight bits has binary values 00000000 through 11111111, or expressed in decimal 0 through 255. A group of sixteen bits has binary values 0000000000000000 through 1111111111111111, or decimal 0 through 65535. I'm not going to type in binary representations for groups of 32 and 64 bits. The range of decimal values for a group of 32 bits is 0 through 4,294,967,295. The range of decimal values for a group of 64 bits is 0 through 18,446,744,073,709,551,615 - or almost eighteen and a half quintillion.
But wait... these numbers are all positive (Whole Numbers). If we are going to allow for subtraction operations on numbers, which can result in negative numbers, we need Integers. Modern computers use one bit in each of the groups to represent the sign (positive or negative) when the groups are used to represent integers. Table 1.3 shows the range of numbers that can be represented with groups of 8, 16, 32, and 64 bits.
Bits   Unsigned Maximum Value       Signed Minimum Value         Signed Maximum Value
  8    255                          -128                         127
 16    65535                        -32768                       32767
 32    4,294,967,295                -2,147,483,648               2,147,483,647
 64    18,446,744,073,709,551,615   -9,223,372,036,854,775,808   9,223,372,036,854,775,807
Table 1.3
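You don't have to memorize Table 1.3. In C++, for example, the fixed-width integer types can report their own ranges - a minimal sketch:

    #include <cstdint>
    #include <iostream>
    #include <limits>

    int main() {
        // The unary + promotes the 8-bit types to int so they print as numbers, not characters.
        std::cout << +std::numeric_limits<std::uint8_t>::max()  << '\n';  // 255
        std::cout << +std::numeric_limits<std::int8_t>::min()   << '\n';  // -128
        std::cout <<  std::numeric_limits<std::int64_t>::max()  << '\n';  // 9223372036854775807
        std::cout <<  std::numeric_limits<std::uint64_t>::max() << '\n';  // 18446744073709551615
    }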
That's about as deep as I want to get into the representation of numbers in computers and the binary, octal, and hexadecimal number systems. Yes, computers have division operators, but I am not going to cover numbers that include fractional parts, i.e., the "rational" and "irrational" numbers, due to the complexity of their implementations. If you want to read more, I googled and found what looks like a good place to start: All About Circuits - Systems of Numeration; read through it and continue on for a few more web pages in the series. The link to binary number at the start of this section points to a Wikipedia page that will also give you additional depth, including the history of the binary number system.
SYMBOLS AS BITS - ASCII CHARACTERS
Ok, so numbers are simply groups of bits. What other objects will the computer's instructions manipulate? How about the symbols that make up an alphabet?
It should come as no surprise that symbols that make up alphabets are just numbers, groups of bits, too. But how do we know which numbers are used to represent which symbols, or characters as I'm going to call them from this point on?
It's all about standards. In these lessons, we will use the American Standard Code for Information Interchange (ASCII) standard. It is so ubiquitous that it even has its own web page, www.asciitable.com.
Let's walk through a few entries in the table. Here are some characters, each with its decimal value and its binary value, which is then transformed into an octal number.
Uppercase 'A' = decimal 65 = binary 01000001 = 01 000 001 = octal 101
Uppercase 'Z' = decimal 90 = binary 01011010 = 01 011 010 = octal 132
The digit '1' = decimal 49 = binary 00110001 = 00 110 001 = octal 061
Table 1.4 is a small slice from the full ASCII character set, just enough to give you a flavor of its organization.
Character   Binary     Octal   Decimal   Hex
space       00100000   040      32       20
(           00101000   050      40       28
)           00101001   051      41       29
*           00101010   052      42       2A
0           00110000   060      48       30
1           00110001   061      49       31
2           00110010   062      50       32
9           00111001   071      57       39
A           01000001   101      65       41
B           01000010   102      66       42
C           01000011   103      67       43
Z           01011010   132      90       5A
a           01100001   141      97       61
b           01100010   142      98       62
c           01100011   143      99       63
z           01111010   172     122       7A
Table 1.4
Here's a little trivia, from days long past when computers were so slow (compared with today).
Look closely at the binary representations of uppercase and lowercase letters. You can convert uppercase to lowercase by turning on a single bit. Or clearing the bit converts lowercase to uppercase.
Clearing two bits will convert an ASCII digit to its numeric value. Setting the same bits converts a number in the range 0...9 into its ASCII character representation.
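In C++, those tricks look like this (a sketch: 0x20 is the single case bit, and 0x30 covers the two digit bits):

    #include <iostream>

    int main() {
        char lower = 'A' | 0x20;    // turn on bit 5: 'A' (01000001) becomes 'a' (01100001)
        char upper = 'a' & ~0x20;   // clear bit 5: 'a' becomes 'A'
        int  value = '7' & ~0x30;   // clear bits 4 and 5: the character '7' becomes the number 7
        char digit = 7 | 0x30;      // set them again: the number 7 becomes the character '7'
        std::cout << lower << ' ' << upper << ' ' << value << ' ' << digit << '\n';  // a A 7 7
    }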
CHECK YOUR UNDERSTANDING SO FAR
Take a moment to see if you understand the explanation of binary numbers.
HERE is a jLogo program that converts an 8-bit byte into numbers and ASCII characters. Try it out!
PIXELS
The image on your computer's display consists of a bunch of colored points called pixels. A pixel is an object. It has a color and a position (its coordinates) consisting of the row and column it occupies. Figure 1.3 shows an artist's rendition, a magnification of a display with a circle drawn in yellow. The tiny black dots are the pixels and the big yellow dots are the pixels that have been colored.
[Image: magnified pixel grid with a circle drawn in yellow]
Figure 1.3
As an example, to display a thin vertical line, the color values of a column of pixels are set to the desired color of the line. If you want a thicker vertical line, you set the color values of the pixels of a group of consecutive columns to the desired color. Figure 1.4 shows a red line that's a single pixel wide and an orange line that's three pixels wide. The orange line is actually a very thin rectangle.
[Image: a one-pixel-wide red line and a three-pixel-wide orange line]
Figure 1.4
So, the location of each pixel is obviously specified by a pair of numbers; what about the pixel's color?
Well... a pixel's color is also specified as numbers, three of them, called RGB (Red, Green, Blue) values. Play with the following Java applet which lets you see what number values generate which colors. What color do you get if you set red to 170, green to 85, and blue to 255? What's the RGB value for your favorite color?
[Interactive applet: RGB color number mixer]
So, just as groups of bits represent numbers and symbols, they are used to form pixels.
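Here is that idea sketched in C++ - packing the three channel values from the question above (red 170, green 85, blue 255) into a single 24-bit number:

    #include <cstdint>
    #include <iostream>

    int main() {
        std::uint8_t red = 170, green = 85, blue = 255;
        // Pack the three 8-bit channels into one 24-bit RGB value.
        std::uint32_t rgb = (red << 16) | (green << 8) | blue;
        std::cout << std::hex << rgb << '\n';  // prints aa55ff
    }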
Optional exploration: Why red, green, and blue? Why these colors?
There are many good explanations on the Internet. If you are interested, search the net or check out the Wikipedia entry for color vision.
Ok... I've exposed you to a variety of objects that you commonly see when you are using a computer, things that can be manipulated with instructions in a computer programming language. Now let's move on to the computer's instructions, one more thing that is just a bunch of bits!
PROGRAMMING LANGUAGES
THE MICROPROCESSOR'S LANGUAGE
So, all a computer has in it is bits. You've seen how they are used to represent stuff, pixels, numbers and characters. I've mentioned that computers perform operations on the bits, like move them around, add pairs of them together, etc... One final obvious question is: how are instructions that a computer performs represented?
Well, if you instructed a computer in its native language (machine language), you would have to write instructions in the form of (yes, once again) binary numbers. This is very, VERY hard to do. Although the pioneers of computer science did this, no one does this these days.
Just to give you something to compare, Table 1.5 shows what the assembler language program in Table 1.1 could look like, assuming that the machine instructions are loaded into memory at addresses 100 through 107. Also, the group of numbers starts at memory address 111, and the size of the group is in memory address 110.
Address   OpCode   Register   Memory Address   Index Register
100       205      1          400000
101       200      2          110
102       361      2          107
103       311      1          111              2
104       254                 102
105       200      1          111              2
106       254                 102
107       263      17

Address   Value
110       67
111       47316
. . .
177       2751
Table 1.5
A detailed explanation of any computer's instruction set is beyond what can be presented here. I just wanted you to see how the symbolic information in assembler language programs needs to be converted to numbers (bits) before a computer can perform it.
If you really want more details now, here is a side lesson from one of my favorite introductory computer science books: The Computer Continuum.
The lesson walks you through programming a very simple robot computer. And, I wrote a simulator for the Robot Computer in jLogo that you can play with HERE.
DEBUGGING
No introduction to computer programming would be complete without at least mentioning debugging. The term refers to the discovery and correction of mistakes in computer programs. The computer is doing what you instructed it to do, not what you meant it to do. If you enjoy puzzles, there's a good chance you will find the process of debugging an interesting challenge.
The origin of the term is often traced to an actual bug (a moth) found in a relay of a computer in 1947 by Admiral Grace Murray Hopper; it explained why her program was not working.
[Image: the moth taped into the computer log book]
Figure 1.5
Debugging is a process. The good news for us is that mistakes in introductory-level programs are not that hard to diagnose and fix. It's basically narrowing in on the instruction, or two, that are not doing what you intended. The steps you take are like those used to solve Sudoku or Mastermind puzzles.
Debugging a program can be done in steps that match the scientific method:
Observe,
Hypothesize,
Predict, and
Test
In the observation step, you think hard about what is happening versus what you expected to be happening. An example is a program that is drawing something, but the jumble of lines you see on the display does not look right. Or, your program has a button that doesn't do anything when you click on it. As a programmer, you approach these bugs like the legendary Sherlock Holmes approached his cases. Review the program you've written and ask questions like: "How could these instructions produce what is happening?"
Observation should lead you to the point where you can make a hypothesis about the bad behavior. For the drawing not coming out right, a hypothesis might be something like: "If computing the metrics (orientation, length, ...) of the first line is off, that might produce what I'm seeing." For a button not working, a hypothesis could be: "If the mouse's location when it is clicked is computed wrong, the program will ignore the click." The hypotheses made are often based on intuition. This is because usually the person debugging either wrote the program or at least modified it.
Given a hypothesis, the next step is to figure out how to test it by predicting what should happen if the hypothesis is correct. Continuing with the example of the misbehaving drawing program, let's say that you notice some source code that might not produce proper results if what it is given is outside the expected bounds. "This code could be producing the strange output" - that is a prediction.
Finally, either the program is modified or a debugging feature in the programming environment is used to test the prediction. Modification of the program can be the addition of instructions that print stuff on the display. Most programming environments include debugging features, like tracing or setting breakpoints to suspend a program to examine its internal state. Either way, the programmer gathers more information about what the program is actually doing, which is, in the case of bugs, not what you expect.
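For example, testing the drawing-program hypothesis might look like this in C++. Everything here - the endpointX helper and its parameters - is hypothetical, invented just to illustrate tracing and an assertion:

    #include <cassert>
    #include <cmath>
    #include <iostream>

    // Hypothetical helper from the misbehaving drawing program.
    double endpointX(double startX, double length, double angleDegrees) {
        // Added for debugging: trace the inputs to see what the program is actually doing.
        std::cout << "endpointX: start=" << startX << " length=" << length
                  << " angle=" << angleDegrees << '\n';
        // Test the hypothesis: execution stops here if an angle is ever out of bounds.
        assert(angleDegrees >= 0.0 && angleDegrees < 360.0);
        return startX + length * std::cos(angleDegrees * 3.14159265358979 / 180.0);
    }

    int main() {
        std::cout << endpointX(0.0, 10.0, 45.0) << '\n';   // behaves as expected
        std::cout << endpointX(0.0, 10.0, 450.0) << '\n';  // assertion fires: hypothesis confirmed
    }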
Testing, even if it does not provide an answer, gets you additional information that can be used to repeat the process. You narrow your exploration until you find your mistake.
FINALLY - WHO WAS THE FIRST COMPUTER PROGRAMMER?
Ok... since you are reading this, accessing this web page on the net, you have Google and it takes a fraction of a second to answer this question.
Ada Lovelace
From the Wikipedia entry for Ada Lovelace:
Ada Lovelace was an English mathematician and
writer chiefly known for her work on Charles Babbage's
early mechanical general-purpose computer, the Analytical
Engine. Her notes on the engine include what is recognised
as the first algorithm intended to be carried out by a
machine. Because of this, she is often described as the
world's first computer programmer.
SUMMARY
Computer programming is the composing/authoring of a process or procedure for doing something - the source code representation of algorithms - in great detail.
proc-ess / Noun:
A series of actions or steps taken to achieve an end.
pro-ce-dure / Noun:
A series of actions conducted in a certain order.
al-go-rithm / Noun:
An ordered set of steps to solve a problem.
Since computers do not understand English, and it would be impossible for a human to write a large program as a series of binary numbers that the computer can understand, we need something in between. High-level programming languages currently fit in this category. Once you have chosen a programming language, you then follow its rules for composing statements (or expressions) that instruct the computer to do what you want.
Because a computer is simply a very fast manipulator of bits (ones and zeros), through the power of abstraction, computer scientists have layered levels of object representation and functionality, one on top of another. We have now been working on refining and extending these layers for over half a century. Table 1.6 should give you a feel for where we stand with computer programming languages today.
In the next lesson you'll start to write programs in Logo!
Application-Specific Language (4GL)   Examples: Mathematica, SQL
High-Level Language                   Examples: Logo, Python
Low-Level Language                    Example: C
Assembler Language                    Example: Intel X86
Machine Language                      Example: Intel X86
Table 1.6
EXERCISES
Gabriele Cirulli has created a game, very popular at the moment, which will teach you the powers of two (the binary number system) as a side effect. But beware, the game has been tagged very addictive.
Click here to go to the game's website and play - and learn.
Cisco's Learning Network provides a game where you can test your ability to do binary to decimal and decimal to binary conversions.
Click here to go to the website to test your ability.
ADDITIONAL READING
The Book "Blown to Bits" is available on-line for free!
Find parts of it that interest you and read more about BITS...