I have read that some computer memory sizes are in decimal (base 10) rather than normal binary (base 2). Is this true?
Both decimal and binary systems are used to express capacity in computer systems. The binary system counts a kilobyte as 1,024 bytes, whereas the decimal system counts a kilobyte as an even 1,000 bytes. The difference between the two becomes even more noticeable at the gigabyte level: the binary value comes out to 1,073,741,824 bytes, compared with 1,000,000,000 bytes in decimal.
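The gap between the two systems at each unit can be seen with a little arithmetic. A quick sketch (the prefix names are just the common usage described above):

```python
# Binary (base-2) vs. decimal (base-10) unit sizes, side by side.
units = ["kilo", "mega", "giga", "tera"]

for power, name in enumerate(units, start=1):
    decimal_bytes = 1000 ** power   # base 10: how drive makers count
    binary_bytes = 1024 ** power    # base 2: how computers traditionally count
    print(f"1 {name}byte: decimal = {decimal_bytes:,} bytes, "
          f"binary = {binary_bytes:,} bytes")
```

Note how the discrepancy compounds: at the kilobyte level binary is only 2.4 percent larger, but at the terabyte level it is nearly 10 percent larger.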
Companies making data-storage products tend to use the decimal system to indicate capacity and often put small disclaimers on product pages that say so, like “One gigabyte (GB) = one billion bytes. One terabyte (TB) = one trillion bytes. Total accessible capacity varies depending on operating environment.”
Computers, being computers, traditionally measure things in binary. The smallest unit of measurement on a computer, after all, is a bit, short for binary digit. Eight bits make a byte and so on.
The use of these two different measuring systems has led to discrepancies when, say, the computer reports a different amount of available space than you thought you were getting on that new external hard drive you just bought. Seagate, a company that makes computer storage products, has a knowledge base article that explains a bit of the history of how the two different systems came to be used and why the operating system reports only about 465.66 gigabytes of space on your new 500-gigabyte external hard drive.
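The shrinking act is just a change of units, not missing space. A minimal sketch of the conversion, assuming a drive advertised in decimal gigabytes:

```python
# Why a "500 GB" drive shows up as roughly 465.66 GB in the operating system:
# the manufacturer counts in decimal bytes, the OS divides by binary gigabytes.
advertised_bytes = 500 * 1000 ** 3           # 500,000,000,000 bytes, as sold
reported = advertised_bytes / 1024 ** 3      # same bytes, measured in binary GB
print(f"{reported:.2f}")                     # prints 465.66
```

The byte count never changes; only the size of the "gigabyte" used to divide it does.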
The National Institute of Standards and Technology has a reference page on prefixes for binary multiples. NIST uses notation like 1 GiB (gibibyte) to refer to a binary gigabyte, while 1 GB (gigabyte) is used for the decimal system.
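Using both notations together removes the ambiguity. A small illustrative helper (the function name is hypothetical, not from NIST):

```python
# Express one byte count in both NIST notations:
# GiB (gibibyte, binary) and GB (gigabyte, decimal).
def describe(total_bytes: int) -> str:
    gib = total_bytes / 1024 ** 3   # binary gigabyte (gibibyte)
    gb = total_bytes / 1000 ** 3    # decimal gigabyte
    return f"{gib:.2f} GiB = {gb:.2f} GB"

print(describe(500_000_000_000))    # prints "465.66 GiB = 500.00 GB"
```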