Computer memory

A while back (quite a long while, actually) I wrote a column about memory, in which I made the point that our ability to remember things is what distinguishes us from dandelions. You might also say that our ability to remember things is what distinguishes us from household appliances and other inanimate objects, but if you said that, you would be wrong, because nowadays a lot of inanimate objects have terrific memories, thanks to our invention of the computer.

Of course, computers don’t store information the same way we do. (It would be very difficult for us to invent a machine that did, since we don’t really understand how our own memory works.) However, our machines manage quite well, thank you.

Before you can understand how an inanimate object can have any kind of memory, you have to remember, as I’ve pointed out before, that computers are really stupid. In fact, they understand only two things: on and off. But even “ons” and “offs” can be used to convey information, if you have enough of them. Computers use lots of them, because computers, at their most basic level, are really just big collections of on-off switches.

When we ask a computer to “remember” something, what we’re really asking is that it be able to reconstruct, at a moment’s notice, the pattern of “ons” and “offs” that represents the particular information we want it to recall. This means a computer “memory” is just another collection of on-off switches which have been frozen, permanently or temporarily, in a particular sequence.
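You can see this in miniature with a little program. Here's a sketch in the Python programming language (my choice of language, purely for illustration) showing how a pattern of eight "ons" and "offs" can stand for the letter A:

    # Eight on-off switches (bits) are enough to encode one letter.
    # Everything here is a toy illustration; a real computer does this
    # in hardware, not in a little program like this one.
    letter = "A"
    pattern = format(ord(letter), "08b")   # the letter's standard numeric code, written in bits
    switches = ["on" if bit == "1" else "off" for bit in pattern]
    print(letter, "->", pattern, "->", switches)
    # prints: A -> 01000001 -> ['off', 'on', 'off', 'off', 'off', 'off', 'off', 'on']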

There are two kinds of computer memories. Read-only memory, or ROM, usually contains information the computer will always need: how to display letters on the computer screen or how to accept input from the keyboard, for example. It’s a mass of on-off switches that were set as needed and can’t be changed.

Random-access memory, or RAM, on the other hand, is fluid; you can store instructions or information in it, but then you can store new instructions or information over top of that, erasing the original information.

Of course, all this information would be useless if the computer couldn't find it easily. In practice, a number of on-off switches, or "bits," are grouped together into an "addressable unit." There might be eight, 16, 32 or 64 bits in each of these units. Each addressable unit is identified by an address. (Gee, maybe that's why it's called "addressable," whaddya think?) When you ask a computer to retrieve something from memory, you give it (or the program you're using gives it) an address to go to, and it takes the information out of that address, then does something with it – displays it on the screen, for instance.

To put something into a computer memory, the process is reversed: the information is given to the computer, along with an address to store it in. Random-access memory is called that because the computer can easily access any address location.
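If you're curious what that back-and-forth looks like, here's another purely illustrative Python sketch that pretends a tiny RAM is a row of numbered slots. (The names "ram," "write" and "read" are my inventions for the example, not any real machine's instructions.)

    # A toy "RAM": sixteen addressable units, all starting out at zero.
    ram = [0] * 16

    def write(address, value):
        # Store a value at the given address, erasing whatever was there.
        ram[address] = value

    def read(address):
        # Fetch whatever value is currently stored at the address.
        return ram[address]

    write(5, ord("A"))   # put the numeric code for "A" into address 5
    print(read(5))       # prints 65, fetched straight from address 5
    write(5, ord("B"))   # new information erases the old, as in any RAM
    print(read(5))       # prints 66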

The size of a computer's memory is specified by how many bits are in each addressable unit and how many addresses there are. Modern computers have anywhere from about 64,000 addresses to more than four billion, and the total is going up all the time. You'll usually hear of a computer having, say, "128K of RAM" or "two megabytes" (or just "two megs"). A "byte" is an addressable unit, so a "megabyte" is roughly a million addressable units of memory. (It takes one byte to represent one character of the alphabet, so a megabyte of memory could store about a million letters.) The "K" in 128K stands for "kilobyte": 1,024 bytes, usually rounded off to an even thousand.
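The arithmetic is easy to check. One more illustrative Python sketch, this time using the exact powers of two that real memory sizes are built on:

    # Memory sizes are really powers of two under the hood.
    kilobyte = 2 ** 10           # 1,024 bytes (usually rounded to 1,000)
    megabyte = 2 ** 20           # 1,048,576 bytes (roughly "a million")
    print(128 * kilobyte)        # 131072 bytes in "128K of RAM"
    print(2 * megabyte)          # 2097152 bytes in "two megs"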

Exactly how all these bytes are stored has changed with changing technology. Until the 1970s, large computers used ferrite cores for their primary memories. These were rings of magnetizable material about a millimetre in diameter, strung like beads on a wire grid. The direction of magnetization of each core determined whether it represented an "on" or "off."

Nowadays, computer memories are made up of small integrated circuits. Each of these has thousands of tiny semiconductor capacitors, which can be either charged or uncharged. Unlike the ferrite cores, these semiconductor memories are “volatile,” which means that when the power goes off, you lose whatever information is stored in them. A lot of computers therefore have standby battery power to preserve that information.

All this sounds terribly impressive, but don’t start feeling inferior to your computer just yet. Remember that the largest computers, with four billion bytes of memory, also require special cooling units, lots of electricity, and a good-sized room.

By contrast, your brain fits comfortably in the not-all-that-spacious confines of your skull (nothing personal) and runs on about 50 watts of power (about as much as a pretty dim light bulb–again, nothing personal!), but has an estimated storage capacity of 12,500,000,000,000 bytes, or, to put it in modern terminology, 12.5 million megabytes, roughly equal to the content of all the books ever written. Even with current computer technology, the equivalent electronic storage device would be gigantic and would consume 100 megawatts of power.

So when it comes to memory, we’ve still got the edge–but keep looking over your shoulder.

Inanimate objects are gaining!
