When a text file is displayed (or printed), this control character causes the text editor to show the characters that follow it on a new line.
Some character sets provide a separate newline character code. EBCDIC, for example, provides an NL character code in addition to the CR and LF codes. Unicode, in addition to providing the ASCII CR and LF control codes, also provides a "next line" (NEL) control code, as well as control codes for "line separator" and "paragraph separator" markers.
Two ways to view newlines, both of which are self-consistent, are that newlines either separate lines or that they terminate lines. If a newline is considered a separator, there will be no newline after the last line of a file. Some programs have problems processing the last line of a file if it is not terminated by a newline. On the other hand, programs that expect newline to be used as a separator will interpret a final newline as starting a new (empty) line. Conversely, if a newline is considered a terminator, all text lines including the last are expected to be terminated by a newline. If the final character sequence in a text file is not a newline, the final line of the file may be considered to be an improper or incomplete text line, or the file may be considered to be improperly truncated.
In text intended primarily to be read by humans using software which implements the word wrap feature, a newline character typically only needs to be stored if a line break is required independently of whether the next word would fit on the same line, such as between paragraphs and in vertical lists. Therefore, in the logic of word processors and most text editors, newline is used as a paragraph break and is known as a "hard return", in contrast to "soft returns", which are dynamically created to implement word wrapping and are changeable with each display instance. In many applications a separate control character called "manual line break" exists for forcing line breaks inside a single paragraph. The glyph for the control character for a hard return is usually a pilcrow (¶), and for the manual line break it is usually a carriage return arrow (↵).
| Operating system | Character encoding | Newline |
|---|---|---|
| Multics, Unix and Unix-like systems (Linux, macOS, FreeBSD, AIX, Xenix, etc.), BeOS, Amiga, RISC OS, and others | ASCII | LF (0x0A) |
| Atari TOS, Microsoft Windows, DOS (MS-DOS, PC DOS, etc.), DEC TOPS-10, RT-11, CP/M, MP/M, OS/2, Symbian OS, Palm OS, Amstrad CPC, and most other early non-Unix and non-IBM operating systems | ASCII | CR LF (0x0D 0x0A) |
| Commodore 8-bit machines (C64), BBC Micro, ZX Spectrum, TRS-80, Apple II family, Oberon, the classic Mac OS, MIT Lisp machine and OS-9 | ASCII | CR (0x0D) |
| QNX pre-POSIX implementation (version < 4) | ASCII | RS (0x1E) |
| BBC Micro and RISC OS spooled text output | ASCII | LF CR (0x0A 0x0D) |
| Atari 8-bit machines | ATASCII | 0x9B (155 in decimal) |
| IBM mainframe systems, including z/OS (OS/390) and i5/OS (OS/400) | EBCDIC | NL (0x15) |
| ZX80 and ZX81 (home computers from Sinclair Research Ltd) | specific non-ASCII character set | NEWLINE (0x76) |
Most textual Internet protocols (including HTTP, SMTP, FTP, IRC, and many others) mandate the use of ASCII CR+LF (0x0D 0x0A) on the protocol level, but recommend that tolerant applications recognize a lone LF (0x0A) as well. Despite the dictated standard, many applications erroneously use the C newline escape sequence (\n) instead of the correct combination of carriage return and newline escape sequences (\r\n) (see section Newline in programming languages below). This accidental use of the wrong escape sequences leads to problems when trying to communicate with systems adhering to the stricter interpretation of the standards instead of the suggested tolerant interpretation. One such intolerant system is the qmail mail transfer agent, which actively refuses to accept messages from systems that send a bare LF instead of the required CR+LF.
FTP has a feature to transform newlines between CR+LF and the native encoding of the system (e.g., LF only) when transferring text files. This feature must not be used on binary files. Often binary files and text files are recognised by checking their filename extension; most command-line FTP clients have an explicit command to switch between binary and text mode transfers.
This may seem overly complicated compared to an approach such as converting all line terminators to a single character, for example LF. However, Unicode was designed to preserve all information when converting a text file from any existing encoding to Unicode and back. Therefore, Unicode should contain characters included in existing encodings. NEL is included in EBCDIC with code 0x15. NEL is also a control character in the C1 control set. As such, it is defined by ECMA-48, and recognized by encodings compliant with ISO/IEC 2022 (which is equivalent to ECMA-35). The C1 control set is also compatible with ISO-8859-1. The approach taken in the Unicode standard allows round-trip transformation to be information-preserving while still enabling applications to recognize all possible types of line terminators.
Recognizing and using the newline codes greater than 0x7F (NEL, LS, and PS) is not often done. They are multiple bytes in UTF-8, and the code for NEL has been used as the ellipsis ('…') character in Windows-1252.
The Unicode characters U+2424 (SYMBOL FOR NEWLINE, ␤), U+23CE (RETURN SYMBOL, ⏎), U+240D (SYMBOL FOR CARRIAGE RETURN, ␍) and U+240A (SYMBOL FOR LINE FEED, ␊) are intended for presenting a user-visible character to the reader of the document, and are thus not recognized themselves as a newline.
The C programming language provides the escape sequences \n (newline) and \r (carriage return). However, these are not required to be equivalent to the ASCII LF and CR control characters. The C standard only guarantees two things:

1. Each of these escape sequences maps to a unique implementation-defined number that can be stored in a single char value.
2. When writing to a file in text mode, \n is transparently translated to the native newline sequence used by the system, which may be longer than one character; when reading in text mode, the native newline sequence is translated back to \n. In binary mode, no translation is performed.
On Unix platforms, where C originated, the native newline sequence is ASCII LF (0x0A), so \n was simply defined to be that value. With the internal and external representation being identical, the translation performed in text mode is a no-op, and Unix has no notion of text mode or binary mode. This has caused many programmers who developed their software on Unix systems simply to ignore the distinction completely, resulting in code that is not portable to different platforms.
The C library function fgets() is best avoided in binary mode because any file not written with the Unix newline convention will be misread. Also, in text mode, any file not written with the system's native newline sequence (such as a file created on a Unix system, then copied to a Windows system) will be misread as well.
Another common problem is the use of \n when communicating using an Internet protocol that mandates the use of ASCII CR+LF for ending lines. Writing \n to a text-mode stream works correctly on Windows systems, but produces only LF on Unix, and something completely different on more exotic systems. Using \r\n in binary mode is slightly better.
Many languages, such as C++, Perl, and Haskell, provide the same interpretation of \n as C.
Java, PHP, and Python provide the \r\n sequence (for ASCII CR+LF). In contrast to C, these are guaranteed to represent the values 0x0D and 0x0A, respectively.
The Java I/O libraries do not transparently translate these into platform-dependent newline sequences on input or output. Instead, they provide functions for writing a full line that automatically add the native newline sequence, and functions for reading lines that accept any of LF, CR, or CR+LF as a line terminator (see BufferedReader.readLine()). The System.lineSeparator() method can be used to retrieve the underlying line separator.
Some languages have created special variables, constants, and subroutines to facilitate newlines during program execution. In some languages such as PHP and Perl, double quotes are required to perform escape substitution for all escape sequences, including \n and \r. In PHP, to avoid portability problems, newline sequences should be issued using the PHP_EOL constant.
C# similarly provides the Environment.NewLine property, which evaluates to the newline sequence of the platform the program is running on.
To denote a single line break, Unix programs use line feed, whose hexadecimal value in ASCII is 0x0A, while most programs common to MS-DOS and Microsoft Windows use carriage return + line feed, whose hexadecimal values in ASCII are 0x0D 0x0A. In ASCII, carriage return is a distinct control character.
The different newline conventions cause text files that have been transferred between systems of different types to be displayed incorrectly.
Text files created with programs common on Unix-like systems or classic Mac OS appear as a single long line in most programs common to MS-DOS and Microsoft Windows, because these do not display a single line feed (0x0A) or a single carriage return (0x0D) as a line break.
Conversely, when viewing a file originating from a Windows computer on a Unix-like system, the extra CR may be displayed as a second line break, as ^M, or as <cr> at the end of each line.
Furthermore, programs other than text editors may not accept a file, e.g. some configuration file, encoded using the foreign newline convention, as a valid file.
The problem can be hard to spot because some programs handle the foreign newlines properly while others do not. For example, a compiler may fail with obscure syntax errors even though the source file looks correct when displayed on the console or in a text editor. On a Unix-like system, the command cat -v myfile.txt will send the file to stdout (normally the terminal) and make the ^M visible, which can be useful for debugging. Modern text editors generally recognize all flavours of CR and LF newlines and allow users to convert between the different standards. Web browsers are usually also capable of displaying text files and websites which use different types of newlines.
Even if a program supports different newline conventions, these features are often not sufficiently labeled, described, or documented. Typically a menu or combo-box enumerating different newline conventions will be displayed to users without an indication if the selection will re-interpret, temporarily convert, or permanently convert the newlines. Some programs will implicitly convert on open, copy, paste, or save—often inconsistently.
The File Transfer Protocol can automatically convert newlines in files being transferred between operating systems with different newline representations when the transfer is done in "ASCII mode". However, transferring binary files in this mode usually has disastrous results: any occurrence of the newline byte sequence, which does not have line terminator semantics in this context but is just part of a normal sequence of bytes, will be translated to whatever newline representation the other system uses, effectively corrupting the file. FTP clients often employ some heuristics (for example, inspection of filename extensions) to automatically select either binary or ASCII mode, but in the end it is up to users to make sure their files are transferred in the correct mode. If there is any doubt as to the correct mode, binary mode should be used, as then no files will be altered by FTP, though they may display incorrectly.
Editors are often unsuitable for converting larger files; for larger files on Windows NT/2000/XP, a command-line conversion is often used instead. On many Unix systems, the dos2unix (sometimes named fromdos or d2u) and unix2dos (sometimes named todos or u2d) utilities are used to translate between ASCII CR+LF (DOS/Windows) and LF (Unix) newlines. Different versions of these commands vary slightly in their syntax. However, the tr command is available on virtually every Unix-like system and can be used to perform arbitrary replacement operations on single characters. A DOS/Windows text file can be converted to Unix format by simply removing all ASCII CR characters with
$ tr -d '\r' < inputfile > outputfile

or, if the text has only CR newlines, by converting all CR newlines to LF with

$ tr '\r' '\n' < inputfile > outputfile
The same tasks are sometimes performed with awk, sed, or Perl if the platform has a Perl interpreter. To identify what type of line breaks a text file contains, the file command can be used. Moreover, the editor Vim can be convenient to make a file compatible with the Windows Notepad text editor. grep commands can echo a filename to the command line if the file is of a specified newline style, and systems with egrep (extended grep), such as Debian-based systems and many other Unix systems, can do the same. These commands work under Unix systems or in Cygwin under Windows. Note that they make some assumptions about the kinds of files that exist on the system (specifically, that only Unix- and DOS-style files are present, with no Mac OS 9-style files).
This technique is often combined with find to list files recursively. For instance, find can check all "regular files" (excluding directories, symbolic links, etc.) in a directory tree, starting from the current directory (.), report all Unix-style files, and save the results in a file such as unix_files.txt, overwriting it if it already exists; a similar pipeline can find C files and convert them to LF-style line endings. The file command also detects the type of EOL used, and other tools permit the user to visualise the EOL characters. The dos2unix, unix2dos, mac2unix, unix2mac, mac2dos, and dos2mac utilities can perform conversions; the flip command is often used as well.
Later, in the age of modern standardized character sets, control codes were developed to aid in white-space text formatting. ASCII was developed simultaneously by the International Organization for Standardization (ISO) and the American Standards Association (ASA), the latter being the predecessor organization to the American National Standards Institute (ANSI). During the period of 1963 to 1968, the ISO draft standards supported the use of either CR+LF or LF alone as a newline, while the ASA drafts supported only CR+LF.
The sequence CR+LF was in common use on many early computer systems that had adopted Teletype machines, typically a Teletype Model 33 ASR, as a console device, because this sequence was required to position those printers at the start of a new line. The separation of newline into two functions concealed the fact that the print head could not return from the far right to the beginning of the next line in one-character time. That is why the sequence was always sent with the CR first. A character printed after a CR would often print as a smudge in the middle of the page while the print head was still moving the carriage back to the first position. "The solution was to make the newline two characters: CR to move the carriage to column one, and LF to move the paper up."
On these systems, text was often routinely composed to be compatible with these printers, since the concept of hiding such hardware details from the application was not yet well developed; applications had to talk directly to the Teletype machine and follow its conventions. Most minicomputer systems from DEC used this convention. CP/M used it as well, to print on the same terminals that minicomputers used. From there MS-DOS (1981) adopted CP/M's CR+LF in order to be compatible, and this convention was inherited by Microsoft's later Windows operating system.
The Multics operating system began development in 1964 and used LF alone as its newline. Multics used a device driver to translate this character to whatever sequence a printer needed (including extra padding characters), and the single byte was much more convenient for programming. What now seems a more obvious choice of CR was not used, as a plain CR provided the useful function of overprinting one line with another to create boldface and strikethrough effects, and thus it was useful not to translate it. Perhaps more importantly, the use of LF alone as a line terminator had already been incorporated into drafts of the eventual ISO/IEC 646 standard. Unix followed the Multics practice, and later Unix-like systems followed Unix.
Similarly, PLD (U+008B PARTIAL LINE FORWARD, decimal 139) and PLU (U+008C PARTIAL LINE BACKWARD, decimal 140) can be used to advance or reverse the text printing position by some fraction of the vertical line spacing (typically, half). These can be used in combination for subscripts (by advancing and then reversing) and superscripts (by reversing and then advancing), and may also be useful for printing diacritics.