As with all things computers, it all boils down to numbers: every letter, character, or emoji we type has a unique number associated with it, stored as binary so that computers can process it.
Encoding is the process of converting data from one form to another. The main purpose of encoding is to represent and transfer data (character encoding) so that the source and destination systems can understand it correctly; encoding can also be used to compress data (video or image encoding).
For example, character encoding deals with converting characters to bytes, as shown below.
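To make that concrete, here is a minimal Python sketch (any language with string encoding support would work) that turns a string of characters into raw bytes and back again:

# Character encoding: characters -> bytes
text = "hello"
data = text.encode("utf-8")

print(data)                 # b'hello'
print(list(data))           # [104, 101, 108, 108, 111] -- one number per character here

# Decoding reverses the process: bytes -> characters
print(data.decode("utf-8")) # 'hello'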
ASCII, a character encoding standard, uses 7 bits to code up to 128 characters (codes 0–127), enough to code the alphabet in upper and lower case, the numbers 0–9, and some additional special characters.
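As a quick sanity check, here is a small Python sketch showing a few of those codes (the characters chosen are just illustrative):

# Each ASCII character maps to a number between 0 and 127.
print(ord("A"))   # 65
print(ord("a"))   # 97
print(ord("0"))   # 48
print(chr(65))    # 'A' -- chr() goes back from number to character
# All of these fit in 7 bits: the largest ASCII code is 127.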
Where does ASCII fall down? It does not support languages such as Greek, Hebrew, and Arabic, and this is where Unicode comes in: it defines more than a million code points (1,114,112, covering U+0000 through U+10FFFF). That gives us enough options to support any language and even our ever-growing collection of emojis.
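For example, trying to squeeze a Greek word through ASCII fails outright, while a Unicode encoding handles it fine. A rough Python sketch:

word = "γειά"  # Greek for "hello"

try:
    word.encode("ascii")          # ASCII has no codes for Greek letters
except UnicodeEncodeError as e:
    print("ASCII can't encode it:", e)

print(word.encode("utf-8"))       # works: b'\xce\xb3\xce\xb5\xce\xb9\xce\xac'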
Unicode is a character set used to translate characters into numbers, called code points, and it leaves the choice of encoding up to us: those code points can be stored using 32-, 16-, or 8-bit encodings.
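In Python, ord() exposes exactly that mapping from character to code point:

# A code point is just the number Unicode assigns to a character.
print(ord("A"))       # 65      (same value as in ASCII)
print(ord("γ"))       # 947
print(ord("😀"))      # 128512
print(hex(ord("😀"))) # 0x1f600 -- usually written U+1F600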
UTF-8, UTF-16, and UTF-32 (Unicode Transformation Format, using 8-, 16-, or 32-bit code units) are the encoding schemes used to translate those numbers into binary data.
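The practical difference between the three is how many bytes they spend per character. A small sketch, using the little-endian variants so the byte-order mark stays out of the byte counts:

for ch in ["A", "é", "😀"]:
    print(
        ch,
        len(ch.encode("utf-8")),      # 1, 2, 4 bytes
        len(ch.encode("utf-16-le")),  # 2, 2, 4 bytes
        len(ch.encode("utf-32-le")),  # 4, 4, 4 bytes
    )

UTF-8 is the most compact for plain English text, which is a big part of why it dominates on the web, while UTF-32 spends a fixed 4 bytes on every character.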