By the end of this reading you should know the following:
In the most basic sense, programming means creating a set of instructions for a device or a person to complete some specific task. In the context of computing it means creating a set of instructions for a computer to accomplish a specific task, using a set of directives (a programming language) known to both the programmer and the computer’s operating system. Usually this set of instructions, or program, is intended to complete a task that:
To understand programming languages it helps to understand how a computer works. All computers have a “brain,” or Central Processing Unit (CPU), where most of the computing takes place. CPUs are microprocessors: programmable digital electronic components composed of millions or even billions of tiny electronic switches, or transistors, on a slab of silicon. At any given time, each of these switches can be in one of two states, on or off. This is known as binary storage of information. By setting these switches in many different configurations the CPU can perform incredibly fast, complex calculations.
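To make the idea of binary storage concrete, here is a short Python sketch (illustrative only, not tied to any particular CPU) showing how a row of eight on/off switches, written as 0s and 1s, can encode a number or a character:

```python
# Eight switches (bits), each either off (0) or on (1).
switches = "01000001"

# Reading the switch pattern as a base-2 number gives an ordinary integer.
value = int(switches, 2)
print(value)       # 65

# The same pattern can stand for a character under the ASCII encoding.
print(chr(value))  # 'A'

# Flipping a single switch changes the stored value.
print(int("01000010", 2))  # 66
```

Everything a computer stores, whether numbers, text, images, or the program itself, ultimately comes down to patterns of switches like these.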
Computer programming languages may be classified into several generations. This term refers not to the relative age of the language, but how close it is to the actual code understood by the CPU itself.
With the first electronic computers, human technicians actually had to give the processors instructions in binary code known as machine languages. For other than very simple programs, the process was tedious, immensely complex, and fraught with potential errors. Machine languages are sometimes referred to as first-generation programming languages.
To speed up the process of programming, languages called assembly languages were developed that could be fairly easily read by humans. The instructions in assembly languages mapped directly to binary machine code, and had to be converted to machine code before they could be run. Assembly languages, or second-generation languages, are still very complex, and programming in assembler is very time-consuming. For this reason they are little used today except for very specific niche programming tasks, such as device drivers and high-performance 3-D games, where fast execution is critical.
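As an illustration of how assembly mnemonics map directly to machine code: on x86 processors, the assembly instruction `mov eax, 42` (put the number 42 into the register named eax) assembles to the five bytes `B8 2A 00 00 00`, the opcode B8 followed by 42 as a 32-bit little-endian number. The Python sketch below models that one-instruction "assembler" table; it is a teaching toy, not a real assembler:

```python
import struct

# The x86 opcode 0xB8 means "mov eax, <32-bit immediate value>".
MOV_EAX_IMM32 = 0xB8

def assemble_mov_eax(value):
    # Emit the opcode byte followed by the value in little-endian order,
    # exactly the byte sequence the CPU expects to read.
    return bytes([MOV_EAX_IMM32]) + struct.pack("<I", value)

machine_code = assemble_mov_eax(42)
print(machine_code.hex(" "))  # b8 2a 00 00 00
```

Real assemblers do essentially this, one table lookup per instruction, which is why assembly is called a second-generation language: each human-readable line corresponds to one machine instruction.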
Third-generation languages were developed to make programming easier and faster: they take care of many routine tasks behind the scenes and are designed to be easier for humans to read and write. They introduced features such as the ability to store data in named variables, and they often combine multiple steps and calculations into single, powerful commands. When most people think of programming languages they think of these third-generation languages. Most of the programming languages you may have heard of, such as Fortran, C, C++, and Java, are examples of this type of language.
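For example, a third-generation language lets you give data a meaningful name and fold several calculation steps into one statement. A short illustration in Python (a more recent language, used here simply because it is easy to read):

```python
# Data lives in named variables instead of numbered memory locations.
price = 19.99
quantity = 3
tax_rate = 0.08

# One readable line combines the multiplications and addition that would
# take several separate instructions in an assembly language.
total = price * quantity * (1 + tax_rate)
print(round(total, 2))  # 64.77
```

The programmer thinks in terms of prices and quantities; the language, not the human, worries about which registers and machine instructions to use.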
However, even third-generation languages can be very difficult to learn and can require years of use to master. They remain out of reach for many people who don’t have the time or inclination to invest in learning them. Consequently, in the 1980s programming languages and development environments began to emerge that were designed to reduce the time and effort required to produce useful software. These languages are often called fourth-generation languages. One of the earliest and most successful of these, Apple Computer’s HyperCard, opened the door for many non-professional programmers, such as teachers, researchers, and managers, to develop software customized to their specific needs. While Apple formally discontinued HyperCard in 2004, a number of its descendants still exist today, including Toolbook, SuperCard, and LiveCode (formerly called Revolution).
Sometimes the term low-level language is used to refer to languages like machine code and assembly language. Likewise you may hear the term high-level language applied to third and fourth-generation languages. These terms have nothing to do with the power or capability of the language; rather they refer to how far removed they are from direct communication with the processor itself. High level languages support software development at a high level of abstraction, freeing the developer from having to remember many details that are irrelevant to the immediate programming problem.
Before programs written in the various languages can be run on a computer, they must be converted to a form the CPU can understand. Most programming languages do this through a process called compiling. A few languages, most notably Java, can only run if the host system has a special runtime installed that translates the program instructions into machine-readable code “on the fly,” that is, as they are being run. These languages are called interpreted languages. Each method of executing programs has its advantages: compiled programs generally run faster than interpreted programs, because the conversion of the code to machine code takes place before the user tries to run it. Interpreted languages tend to be easier to run cross-platform, that is, on various operating systems. (Executable programs created in LiveCode are a kind of hybrid, with scripts being interpreted when they are loaded by a highly efficient engine. Interpreted scripts, in turn, call hundreds of lines of compiled, optimized C++ code.)
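Python happens to expose both halves of this pipeline, which makes the distinction easy to see in a few lines. Its built-in `compile()` function translates source text into lower-level instructions ahead of time, and the interpreter then executes the result; `eval()` alone translates and runs in one step, "on the fly." A minimal sketch:

```python
# Translate a fragment of source code ahead of time (the "compile" step)...
code_object = compile("3 + 4", "<example>", "eval")

# ...then hand the pre-translated instructions to the interpreter to run.
result = eval(code_object)
print(result)  # 7

# An interpreted language can also translate and execute in a single step,
# as this line does.
print(eval("3 + 4"))  # 7
```

Both routes produce the same answer; the difference is only in when the translation happens, which is exactly the trade-off between compiled speed and interpreted flexibility described above.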
In the next lesson we’ll learn some basic concepts that are common to all programming languages.
Read about basic programming concepts.