Data Internetworking Group                                      M. Cronsten
Request for Comments: RFC 3001                                         IMHO
Preliminary issue 1, subversion 3/beta                       3 October 1997


         *** Proposed revised system of counting in bits/bytes ***

In this document, the term "decimal" refers to base-ten counting, i.e.
counting in ones, tens, hundreds and so on in whole numbers.  It does not
refer to fractions of a whole number, such as for example 2.5.

Binary numbering in its present form consists of 8-bit bytes.  This
proposal aims to address the problems which counting in such an anti-human
fashion presents to the ordinary layperson and the computer-literate
alike.  It does not remove the need for detailed knowledge per se, but
rather makes counting in binary terms more analogous to the more familiar
base-ten system.

By changing the size of a byte from 8 to 10 bits, several advantages
manifest themselves.  Since the majority of us humans have 10 fingers, the
logical size of a computer byte should by rights be 10 bits, to keep in
line with the decimal system.  Furthermore, by adopting this new way of
expressing computer data numbering, hexadecimal counting is made
completely redundant, since binary digits can now be expressed in 10-bit
groups, called decabits.

In addition, this proposition suggests a standard for the order in which
bits are transmitted as serial data, consistent with the increasing value
of each bit.  The bit with the smallest value is always sent first, i.e.
Least Bit first.

Introducing the Decabit = 10 bits in length.  Bits are numbered from 1
upwards, i.e.:

      bit    value
       1         1
       2         2
       3         4
       4         8
       5        16
       6        32
       7        64
       8       128
       9       256
      10       512

A decabit, which consists of 10 bits, can take any value from 0 (zero) to
1023.  For practical purposes only the values 0 to 1000 are used; 1001 to
1022 are reserved for future use.  A decabit with all bits set, i.e. 1023
decimal, is a special case.  It signifies a null decabit, is used for
padding purposes to fill out a block of data or similar, and should
otherwise be ignored.

One kilo-decabit is always 1000 decimal, not 1024.  One mega-decabit is
always 1,000,000 decimal, not 1,048,576.

To express a decimal value of half a million, 2 decabits are required,
equal to 20 bits, like so:

      decabit 1: value   0 decimal
      decabit 2: value 500 decimal

A computer using a 40-bit operating system would use 4 decabits to express
a value.  E.g. the value 4711 corresponds to the following values:

      decabit 1: value 711 decimal
      decabit 2: value   4 decimal
      decabit 3: value   0 decimal
      decabit 4: value   0 decimal

(Multiplying the value of each decabit by its weight and summing the
results gives the answer: 711x1 + 4x1000 + 0x1000000 + 0x1000000000
equals 4711 decimal.  Note: the x here represents the multiplication
sign.)

Another example: the value 5.4 billion, i.e. 5,400,000,000 (an arbitrarily
chosen number), corresponds to the following values:

      decabit 1: value   0 decimal
      decabit 2: value   0 decimal
      decabit 3: value 400 decimal
      decabit 4: value   5 decimal

(Again, multiplying the value of each decabit by its weight and summing
the results gives the answer: 0x1 + 0x1000 + 400x1000000 + 5x1000000000
equals 5,400,000,000 decimal.)

As we can see, counting in decabinary is much simpler than counting in
generic bytes!  Manufacturers and designers of computer-related hardware
and software should be encouraged to implement this new system of
counting.  In the long run this scheme will save a considerable amount of
man-hours and simplify conversion between decimal numbering and the
numbering used inside computers.
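As an illustrative sketch only, and not part of the proposal itself, the
following Python fragment shows one way the conversion rules above could
be expressed in code.  The function names decabits_from_value,
value_from_decabits and serial_bits are hypothetical, and the sketch
assumes, following the worked examples, that each decabit acts as a
base-1000 digit with weights 1, 1000, 1000000 and so on.

   NULL_DECABIT = 1023   # all ten bits set: a null decabit, padding only

   def decabits_from_value(value, width):
       """Split a non-negative decimal value into 'width' decabits,
       decabit 1 (weight 1) first, as in the worked examples above."""
       decabits = []
       for _ in range(width):
           value, digit = divmod(value, 1000)
           decabits.append(digit)
       if value != 0:
           raise ValueError("value does not fit in %d decabits" % width)
       return decabits

   def value_from_decabits(decabits):
       """Recombine decabits into a decimal value, ignoring null
       (padding) decabits as described above."""
       total = 0
       for position, digit in enumerate(decabits):
           if digit == NULL_DECABIT:
               continue                      # padding: otherwise ignored
           total += digit * (1000 ** position)
       return total

   def serial_bits(decabit):
       """Yield the ten bits of one decabit in transmission order,
       i.e. the bit with the smallest value (bit 1) first."""
       for bit_number in range(10):          # bits 1..10, values 1..512
           yield (decabit >> bit_number) & 1

   # The two examples from the text; a 40-bit system uses 4 decabits.
   assert decabits_from_value(4711, 4) == [711, 4, 0, 0]
   assert decabits_from_value(5400000000, 4) == [0, 0, 400, 5]
   assert value_from_decabits([711, 4, 0, 0]) == 4711
   assert value_from_decabits([0, 0, 400, 5, NULL_DECABIT]) == 5400000000

Under this reading, kilo- and mega-decabit quantities follow directly from
the base-1000 weights, so no power-of-two conversion is ever needed.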
Needless to say, implementation of RFC 3001 represents major changes in
the way microprocessor-run systems function; however, its introduction
could be well under way by the turn of the century if so recommended.

End of document.