I knew about Morse code, but now Dof_Doej has opened my eyes to another, slightly more recent, type of code, and I realise I’ve been missing out on a whole section of the history of computer language. Let’s take the easy bit first. Lovers of westerns will be familiar with Morse code, which was invented in the mid-19th century to enable people to communicate over long distances. In the movies, those vital messages that save lives or salvage a doomed love affair are often in Morse. Nearly two centuries ago, it was a symbol of all that was modern. It consisted of dots and dashes, and, as such, it represented the beginnings of the binary code system, which is based on two elements: the two digits, 0 and 1.

After that, a number of codes were invented, including the Baudot code, which was used in telegraph networks and teleprinters. Its characters were coded in 5 bits. For the uninitiated, I can tell you that a bit is a binary digit, in other words one that has two possible values: 0 and 1. By using one single bit you get two different numbers, 0 or 1, but with 2 bits you have 4 possible combinations: 00/01/10/11. Don’t give up on me yet, there’s more to come. Each extra bit doubles the possibilities (there’s a little sketch just below if you want to see that for yourself), so the Baudot code, with its 5 bits, gave a possible 32 characters.

In the last century, in 1961, an American, Bob Bemer, proposed a new code for use in computers. Being less narcissistic than his predecessors, he gave it an acronym, ASCII (pronounced “askey”), which is short for American Standard Code for Information Interchange.
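Here is that sketch: a few lines of Python (my own illustration, nothing historical) that simply list every combination for one to five bits and count them as they double.

```python
from itertools import product

# Each extra bit doubles the number of possible combinations:
# 1 bit -> 2, 2 bits -> 4, ..., 5 bits -> 32 (the Baudot alphabet).
for n_bits in range(1, 6):
    combos = ["".join(bits) for bits in product("01", repeat=n_bits)]
    print(f"{n_bits} bit(s): {len(combos)} combinations")
    if n_bits <= 2:
        print("    " + "/".join(combos))  # e.g. 00/01/10/11 for 2 bits
```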
In really, really broad terms, ASCII is an agreed set of rules that lets programmers and their machines represent the same written characters in the same way.
ASCII is a binary code like the others, but this time it’s based on 7 bits, which means that each character is written with 7 digits, as long as those digits are only either 1 or 0. The characters you get can go from 0000000 to 1111111. In all, 128 characters can be created, which was a revolution in technological terms (later 8-bit extensions doubled that to 256). There is a table for converting those characters into letters, numbers and signs. So, the character 1000001 corresponds to the letter A and the character 1000000 to the @ sign. ASCII contains all the characters needed to write in English, and it is the best-known and most widely compatible character-coding standard for computers. A number of other codes are based on it, in particular Unicode, invented in 1991, which started out with a larger number of bits per character (16) and, more importantly, by contrast with its US precursor, is multilingual. It covers almost all the alphabets in existence (Arabic, Armenian, Cyrillic, Greek, Hebrew, Latin, etc.) and, in its widely used UTF-8 form, is compatible with the ASCII code.
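If you want to check that conversion table for yourself, the little sketch below (again my own, using Python’s standard ord, format and encode functions) prints the 7-bit pattern for a few characters and shows how UTF-8 leaves plain ASCII untouched.

```python
# Each ASCII character has a number (its code point); format(..., "07b")
# writes that number as the 7 binary digits described above.
for char in "A@a":
    print(char, "=", format(ord(char), "07b"), "(decimal", str(ord(char)) + ")")
# A = 1000001 (decimal 65)
# @ = 1000000 (decimal 64)
# a = 1100001 (decimal 97)

# Unicode extends the same idea to other alphabets. Its UTF-8 encoding
# stores plain ASCII characters as the very same single bytes:
print("A".encode("utf-8"))  # b'A'        -- one byte, identical to ASCII
print("Ω".encode("utf-8"))  # b'\xce\xa9' -- two bytes for Greek capital omega
```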
If you want to know more, there is a very helpful website: www.howstuffworks.com