Engineering Full Stack Apps with Java and JavaScript
Unicode is a computing industry standard for the consistent encoding, representation and handling of text expressed in most of the world's writing systems. The objective of Unicode is to unify the many different encoding schemes so that text can be exchanged between computers without confusion. Currently the Unicode standard defines values for over 100,000 characters; the character charts can be seen at the Unicode Consortium website (http://unicode.org).
The Unicode standard has several character encoding forms: UTF-8, UTF-16 and UTF-32.
UTF stands for Unicode Transformation Format.
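As a minimal sketch (the class name and sample text below are just for illustration), the following Java snippet shows how the same string can be converted to bytes under two of these encoding forms and ends up with different byte lengths:

import java.nio.charset.StandardCharsets;

public class EncodingFormsDemo {
    public static void main(String[] args) {
        String text = "h\u00e9llo";  // contains one non-ASCII character ('e' with acute accent)

        byte[] utf8  = text.getBytes(StandardCharsets.UTF_8);
        byte[] utf16 = text.getBytes(StandardCharsets.UTF_16);

        // The same Unicode text occupies a different number of bytes
        // depending on the encoding form chosen.
        System.out.println("UTF-8 bytes : " + utf8.length);   // 6
        System.out.println("UTF-16 bytes: " + utf16.length);  // 12 (includes a 2-byte byte order mark)
    }
}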
ASCII, which stands for American Standard Code for Information Interchange, became the first widespread encoding scheme. However, it is limited to only 128 character definitions. This is fine for the most common English characters, numbers and punctuation, but it was limiting for the rest of the world. Because each region extended ASCII in its own way, the same code value could be displayed as a different character depending on where you were. Different parts of the world began creating their own encoding schemes, of different lengths, and programs then had to figure out which encoding scheme they were meant to be using. The Unicode standard was created to overcome these problems.
ASCII vs Unicode
ASCII is the lowest common denominator of character sets. ASCII has only 128 characters, but Unicode has well over 100,000. A Unicode escape can be used to insert any Unicode character into a program using only ASCII characters, and a Unicode escape means exactly the same thing as the character that it represents.
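For example (a small sketch with illustrative names), the Java compiler translates a Unicode escape before parsing, so a literal written with the escape \u0041 is exactly the same string as one written with the letter A:

public class UnicodeEscapeDemo {
    public static void main(String[] args) {
        // The escape is translated to 'A' by the compiler, so the two
        // literals below are identical strings.
        String escaped = "\u0041";
        String plain   = "A";

        System.out.println(escaped.equals(plain));  // prints true
    }
}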
Unicode escapes are designed for use when a programmer needs to insert a character that can't be represented in the source file's character set. They are used primarily to put non-ASCII characters into identifiers, string literals, character literals, and comments. Occasionally, a Unicode escape adds to the clarity of a program by positively identifying one of several similar-looking characters.
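To illustrate (a hedged sketch; the specific characters are chosen only as examples), escapes keep a source file pure ASCII while still using non-ASCII characters, and they make clear which of two look-alike characters is intended, such as the micro sign versus the Greek small letter mu:

public class NonAsciiEscapeDemo {
    public static void main(String[] args) {
        // Non-ASCII characters written with escapes, so the source file stays ASCII-only.
        String greeting = "caf\u00e9";  // 'e' with acute accent
        char pi = '\u03c0';             // Greek small letter pi

        // Escapes positively identify look-alike characters: the micro sign
        // and the Greek small letter mu render almost identically but are
        // distinct characters.
        char microSign = '\u00b5';
        char greekMu   = '\u03bc';

        System.out.println(greeting + " " + pi);
        System.out.println(microSign == greekMu);  // prints false
    }
}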