To work with data of various types, the form in which the data are represented must be unified, and coding makes this possible. We deal with coding quite often: people think in rather vague concepts, and to convey an idea from one person to another, language is used. Language is a system for coding concepts. To write down the words of a language, another encoding, the alphabet, is applied in turn. Problems of universal coding arise in various fields of science, technology, and culture. Recall that drawings, musical notation, and mathematical notation are also codings of various information objects. In the same way, a universal coding system is required so that a large number of different types of information can be processed on a computer.

Preparing data for processing on a computer (data representation) has, in computer science, its own specifics related to electronics. Suppose, for example, that we want to perform calculations on a computer. In that case we have to encode the digits with which numbers are written. At first glance it seems quite natural to encode the digit zero by a state of the electronic circuit in which the voltage on some element is 0 volts, the digit one by 1 volt, two by 2 volts, and so on up to 9 volts for nine. To store each digit of a number, this scheme requires a circuit element with ten distinguishable states. However, electronic components have a spread of parameters, which can produce a voltage of, say, 3.5 volts that can be interpreted either as a three or as a four; it would then be necessary, at the level of the electronic circuits, to "explain" to the computer where three ends and four begins. In addition, very complex electronic elements would have to be created to perform arithmetic operations on numbers: at the circuit level, a multiplication table would require 10 × 10 = 100 circuits, and an addition table another 100 circuits. For the electronics of the 1940s (when the first computers appeared) this was an impossible task. Word processing would be even harder, since the Russian alphabet alone contains 33 letters. Clearly, such a coding scheme is unsuitable for computing systems.
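The table counts mentioned above follow directly from the number base: a single-digit operation table needs one entry per pair of digits. A minimal sketch (the function name is my own, not from the text) makes the comparison between base 10 and base 2 explicit:

```python
# Entries in a single-digit operation table (addition or multiplication)
# for a given number base: one entry per ordered pair of digits.
def table_entries(base: int) -> int:
    return base * base

print(table_entries(10))  # 100 entries per table in base 10
print(table_entries(2))   # only 4 entries per table in base 2
```

This is one reason binary hardware is so much simpler: each arithmetic table shrinks from 100 cases to 4.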

At the same time, coding based on electronic circuits with two stable states is realized very easily: current present means 1, current absent means 0; an electric (or magnetic) field present means 1, absent means 0. The creators of computer technology therefore turned to binary coding as a universal form of data representation for further processing by computing machinery. Data are assumed to be located in cells, each an ordered set of binary digits, and each binary digit (bit) can hold one of two states at a time, 0 or 1. A group of two bits can then encode 2² = 4 different code combinations (00, 01, 10, 11); similarly, three bits give 2³ = 8 combinations, and eight bits (one byte) give 2⁸ = 256, and so on.
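The combination counts above can be checked by simply enumerating all n-bit codes; this short sketch (the function name is my own) lists them:

```python
from itertools import product

# All distinct codes representable by n binary digits: every
# ordered sequence of n symbols drawn from {0, 1}.
def bit_codes(n: int) -> list[str]:
    return [''.join(bits) for bits in product('01', repeat=n)]

print(bit_codes(2))       # ['00', '01', '10', '11'], i.e. 2² = 4 codes
print(len(bit_codes(3)))  # 2³ = 8
print(len(bit_codes(8)))  # 2⁸ = 256 codes in one byte
```

Each additional bit doubles the number of combinations, which is why the count grows as a power of two.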

So the internal alphabet of the computer is very poor: it contains only two symbols, 0 and 1. The whole variety of data types (numbers, texts, sounds, graphics, video, and so on) is therefore encoded with these two symbols alone for further processing on a computer.
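As a concrete illustration of text reduced to this two-symbol alphabet, the sketch below (function name my own; UTF-8 is one common choice of character encoding, not the only one) shows a short string as the bit patterns of its bytes:

```python
# Text becomes numbers (byte values), and each byte is just
# eight binary digits: the computer's entire internal alphabet.
def to_bits(text: str, encoding: str = 'utf-8') -> str:
    return ' '.join(format(byte, '08b') for byte in text.encode(encoding))

print(to_bits('Hi'))  # 01001000 01101001
```

The same principle applies to sounds, images, and video: each is first mapped to numbers, and the numbers are stored as sequences of 0s and 1s.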