In computer and machine-based telecommunications terminology, a character is a unit of information that roughly corresponds to a grapheme, grapheme-like unit, or symbol, such as in an alphabet or syllabary in the written form of a natural language (http://www.merriam-webster.com/dictionary/character).
Examples of characters include letters, numerical digits, common punctuation marks (such as "." or "-"), and whitespace. The concept also includes control characters, which do not correspond to symbols in a particular natural language but rather to other bits of information used to process text in one or more languages. Examples of control characters include carriage return and tab, as well as instructions to printers or other devices that display or otherwise process text.
Characters are typically combined into strings.
With the advent and widespread acceptance of Unicode and bit-agnostic coded character sets, a character is increasingly seen as a unit of data, independent of any particular visual manifestation. The ISO/IEC 10646 (Unicode) International Standard defines a character, or abstract character, as "a member of a set of elements used for the organisation, control, or representation of data". Unicode's definition supplements this with explanatory notes that encourage the reader to differentiate between characters, graphemes, and glyphs, among other things. Such differentiation is an instance of the wider theme of the separation of presentation and content.
For example, the Hebrew letter aleph ("א") is often used by mathematicians to denote certain kinds of infinity (the aleph numbers), but it is also used in ordinary Hebrew text. In Unicode, these two uses are considered different characters and have two different numerical identifiers (code points), though they may be rendered identically. Conversely, the Chinese logogram for water ("水") may have a slightly different appearance in Japanese texts than it does in Chinese texts, and local typefaces may reflect this. Nonetheless, in Unicode they are considered the same character and share the same code point.
The Unicode standard also differentiates between these abstract characters and coded characters or encoded characters that have been paired with numeric codes that facilitate their representation in computers.
A single abstract character can also be represented by more than one sequence of code points: for example, "é" may be encoded as the single precomposed code point U+00E9 or as "e" followed by the combining acute accent U+0301. Both are considered canonically equivalent by the Unicode standard.
Since a Unicode code point can require up to 21 bits, it is usually impossible to store one in a single char; instead, a variable-length encoding such as UTF-8 must be used. Unfortunately, because a character was historically stored in a single byte, the two terms came to be used interchangeably in most documentation. This often makes the documentation confusing or misleading when multibyte encodings such as UTF-8 are in use, and it has led to inefficient and incorrect implementations of string-manipulation functions. Modern POSIX documentation attempts to fix this, defining "character" as a sequence of one or more bytes representing a single graphic symbol or control code (http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap03.html#tag_03_87), and attempting to use "byte" when referring to char data. However, it still defines "character array" as an array of elements of type char (http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap03.html#tag_03_88).
Unicode can also be stored in strings made up of code units that are larger than char; these are called "wide characters". The original C type was wchar_t. Because some platforms define wchar_t as 16 bits and others as 32 bits, recent versions of C have added char16_t and char32_t. Even then, the objects being stored might not be characters: for instance, variable-length UTF-16 is often stored in arrays of char16_t.
Other languages also have a char type. Some, such as C++, use 8 bits like C. Others, such as Java, use 16 bits for char in order to represent UTF-16 code units.
Which symbols count as word characters may depend on the localization and encoding in use. For example, "$" and "|" are not word characters, while "é" (in French), "æ", "я" (in Russian), and "ά" (in Greek) are, as used in words such as fédération, Αγορά, or Примечания.