NUMBER SYSTEM
Binary Number System
As stated above, the binary number system has base 2 and therefore uses only two distinct symbols, 0 and 1. Any number in binary is written using only these two symbols. Though it uses just two symbols, it has enough expressive power to represent any number. The binary number 1001 thus represents 1×2³ + 0×2² + 0×2¹ + 1×2⁰, which is equivalent to the number 9 in the decimal number system. Similarly, 11001 and 10001 represent the numbers 25 and 17 respectively in the decimal number system.
For example:
(10101)₂ = 2⁴ + 2² + 2⁰ = (21)₁₀
(100011)₂ = 2⁵ + 2¹ + 2⁰ = (35)₁₀
(1010.011)₂ = 2³ + 2¹ + 2⁻² + 2⁻³ = (10.375)₁₀
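This positional evaluation can be sketched in a few lines of Python. The function name binary_to_decimal below is an illustrative choice, not a library routine; it simply sums powers of 2 for each digit, as in the worked examples above.

def binary_to_decimal(binary: str) -> float:
    """Evaluate a binary numeral such as '1010.011' by summing powers of 2."""
    integer_part, _, fraction_part = binary.partition(".")
    value = 0.0
    # Integer digits carry weights 2^0, 2^1, ... from right to left.
    for power, digit in enumerate(reversed(integer_part)):
        value += int(digit) * 2 ** power
    # Fraction digits carry weights 2^-1, 2^-2, ... from left to right.
    for power, digit in enumerate(fraction_part, start=1):
        value += int(digit) * 2 ** -power
    return value

print(binary_to_decimal("10101"))     # 21.0
print(binary_to_decimal("100011"))    # 35.0
print(binary_to_decimal("1010.011"))  # 10.375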
The conversion from decimal to binary, or to any other base-r system, is done by separating the number into an integer part and a fraction part and then converting each part separately. For example, the decimal number 41.6875 can be converted to its binary equivalent by converting the integer and fraction parts separately, as follows:
Operation | Quotient | Remainder
41/2      | 20       | 1
20/2      | 10       | 0
10/2      | 5        | 0
5/2       | 2        | 1
2/2       | 1        | 0
1/2       | 0        | 1
The number is divided by 2 and the remainder is extracted. The quotient obtained is again divided by 2, and this process is repeated until the quotient becomes 0. Each time, the remainder obtained is recorded. The remainders, read from bottom to top, form the binary equivalent of the integer part of the number. Thus (41)₁₀ = (101001)₂.
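The repeated-division procedure can be sketched in Python as follows; the helper name integer_to_binary is an illustrative choice, not a standard routine.

def integer_to_binary(n: int) -> str:
    """Convert a non-negative integer to binary by repeated division by 2."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        n, r = divmod(n, 2)        # quotient and remainder of division by 2
        remainders.append(str(r))
    # The last remainder is the most significant bit, so read bottom to top.
    return "".join(reversed(remainders))

print(integer_to_binary(41))  # 101001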
In order to convert the fraction part, it is multiplied by 2 to obtain a resulting integer part and fraction part. The integer part of the result is extracted and the fraction part is again multiplied by 2. This multiplication process is repeated until the fraction part becomes 0 or the desired precision is achieved. The fraction part of the given number (.6875) can be converted to binary as follows:
Operation  | Resulting Integer Part | Resulting Fraction Part
0.6875 x 2 | 1                      | .3750
0.3750 x 2 | 0                      | .7500
0.7500 x 2 | 1                      | .5000
0.5000 x 2 | 1                      | .0000
The binary equivalent of the fraction 0.6875 is 1011, obtained by reading the integer parts from top to bottom. Thus the decimal number 41.6875 is equivalent to 101001.1011 in the binary number system.
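The repeated-multiplication procedure can be sketched similarly. The helper name fraction_to_binary and the max_bits cut-off are illustrative assumptions, since some fractions never terminate in binary.

def fraction_to_binary(fraction: float, max_bits: int = 16) -> str:
    """Convert a fraction in [0, 1) to binary by repeated multiplication by 2."""
    bits = []
    while fraction > 0 and len(bits) < max_bits:
        fraction *= 2
        bit = int(fraction)    # resulting integer part (0 or 1)
        bits.append(str(bit))
        fraction -= bit        # keep only the resulting fraction part
    return "".join(bits)

print(fraction_to_binary(0.6875))  # 1011
# Joined with the integer part from the earlier sketch: 41.6875 -> 101001.1011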
Binary Codes
We saw earlier that digital computers use signals with two different values and that there is a direct analogy between binary signals and binary digits. Computers handle not only numbers but also other elements of information, and specific discrete quantities can be represented by groups of binary digits (called bits). For example, one binary digit (0 or 1) is sufficient to uniquely represent two different quantities. But if one needs to distinguish more than two quantities, one bit is not enough and more bits are required. For example, if three different objects are to be assigned unique codes, a minimum of 2 bits is required. With two bits we can form the codes 00, 01, 10 and 11; the first three can be assigned to the three objects and the fourth left unused. In general, an n-bit binary code can represent 2ⁿ different quantities. Thus a group of two bits represents four different quantities through the unique codes 00, 01, 10 and 11, and three bits can represent eight different quantities through the unique codes 000, 001, 010, 011, 100, 101, 110 and 111. In other words, to assign unique codes to m different items, we need an n-bit code such that 2ⁿ >= m.
Digital computers use binary codes to represent all kinds of information, from numbers to letters. Whether we enter a letter, a number or a punctuation mark, it must be sent to the machine as a unique code. The instructions executed by the CPU and the input data that form the operands of those instructions are therefore represented using a binary code system, so a typical machine instruction in a digital computer looks like a string of 0s and 1s. Many binary codes are used in digital systems. Some of the most commonly used binary code systems are BCD codes for representing decimal numbers, ASCII codes for exchanging information between computers and keyboards, Unicode for use on the Internet, and Gray codes.
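The rule 2ⁿ >= m can be checked with a short Python sketch; bits_needed is an illustrative helper name.

import math

def bits_needed(m: int) -> int:
    """Smallest n with 2**n >= m, i.e. the minimum code length for m items."""
    return max(1, math.ceil(math.log2(m)))

for m in (2, 3, 4, 8, 26):
    print(m, "items need", bits_needed(m), "bits")
# 2 -> 1, 3 -> 2, 4 -> 2, 8 -> 3, 26 -> 5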
ASCII & Unicode
An alphanumeric code must represent the 10 decimal digits, the 26 letters of the alphabet and a number of special characters. Therefore, a minimum of six bits is needed to encode alphanumeric characters (2⁶ = 64, whereas 2⁵ = 32 is not enough). This 6-bit code, with some variations, was used to communicate alphanumeric characters. However, the need to represent more than 64 characters (to include both lowercase and uppercase letters as well as additional special characters) led to the development of seven- and eight-bit alphanumeric codes. ASCII, the American Standard Code for Information Interchange, is an alphanumeric code used to communicate numbers, letters, punctuation marks and control characters. It is a seven-bit code, but for all practical purposes it is handled as an eight-bit code, with the eighth bit often used for parity.
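As a quick illustration, Python's built-in ord() returns a character's numeric code, so the seven-bit ASCII values discussed here can be inspected directly; the chosen characters are arbitrary examples.

for ch in ("A", "a", "0", " "):
    print(repr(ch), "->", ord(ch), format(ord(ch), "07b"))
# 'A' -> 65 1000001
# 'a' -> 97 1100001
# '0' -> 48 0110000
# ' ' -> 32 0100000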
ASCII codes represent text in computers, communication equipment and other devices that work with text. Most modern character-encoding schemes are based on ASCII, although they support many more characters. ASCII descends from earlier telegraphic codes; its first commercial use was as a seven-bit teleprinter code promoted by Bell data services. Work on ASCII formally began on October 6, 1960, with the first meeting of the American Standards Association's (ASA) X3.2 subcommittee. The first edition of the standard was published in 1963, with a major revision in 1967 and the most recent update in 1986. ASCII defines 128 characters: 33 non-printing control characters (now mostly obsolete) that affect how text and space are processed, and 94 printable characters, with the space considered an invisible graphic. US-ASCII was the most commonly used character encoding on the World Wide Web until December 2007, when it was surpassed by UTF-8.
Unicode
Unicode is a computing industry standard for the consistent encoding, representation and manipulation of text expressed in most of the world's writing systems. Developed in conjunction with the Universal Character Set standard and published in book form as The Unicode Standard, the latest version of Unicode contains a repertoire of more than 107,000 characters covering 90 scripts, a set of code charts for visual reference, an encoding methodology and set of standard character encodings, an enumeration of character properties such as upper and lower case, a set of reference data computer files, and a number of related items such as rules for normalization, decomposition, collation, rendering and bidirectional display order (for the correct display of text containing both right-to-left scripts, such as Arabic and Hebrew, and left-to-right scripts). Unicode can be implemented by different character encodings. The most commonly used encodings are UTF-8 (which uses one byte for ASCII characters, which have the same code values in both UTF-8 and ASCII, and up to four bytes for other characters), the now-obsolete UCS-2 (which uses two bytes for each character but cannot encode every character in the current Unicode standard), and UTF-16 (which extends UCS-2 to handle code points beyond its scope).
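A small Python sketch contrasts these encodings using the standard str.encode() method; the sample string is an arbitrary choice mixing an ASCII letter with two non-ASCII characters.

text = "Aé€"                      # 'A' is ASCII; 'é' and '€' are not
utf8 = text.encode("utf-8")
utf16 = text.encode("utf-16-be")
print(len(utf8), utf8)            # 6 bytes: 1 for 'A', 2 for 'é', 3 for '€'
print(len(utf16), utf16)          # 6 bytes: 2 for each of the three characters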
The Unicode Consortium, the non-profit organization that coordinates the development of Unicode, has the ambitious goal of eventually replacing existing character encoding schemes with Unicode and its standard Unicode Transformation Format (UTF) schemes, as many of the existing schemes are limited in size and scope and are incompatible with multilingual environments. Unicode's success in unifying character sets has led to its widespread and predominant use in the internationalization and localization of computer software. The standard has been implemented in many recent technologies, including XML, the Java programming language, the Microsoft .NET Framework, and modern operating systems.