Subtitles are text representing the contents of the audio in a film, television show, opera or other audiovisual media. Subtitles might provide a transcription or translation of spoken dialogue. Although naming conventions can vary, captions are subtitles that also include written descriptions of other elements of the audio, such as music or sound effects. Captions are thus especially helpful to deaf or hard-of-hearing people. Subtitles may also add information that is not present in the audio. Localizing subtitles provide cultural context to viewers; for example, a subtitle could explain to an audience unfamiliar with sake that it is a type of Japanese wine. Lastly, subtitles are sometimes used for humour, as in Annie Hall, where subtitles show the characters' inner thoughts, which contradict what they are saying in the audio.
Creating, delivering, and displaying subtitles is a complicated and multi-step endeavor. First, the text of the subtitles needs to be written. When there is plenty of time to prepare, this process can be done by hand. However, for media produced in real-time, like live television, it may be done by shorthand or using automated speech recognition. Subtitles written by fans, rather than official sources, are referred to as fansubs. Regardless of who does the writing, the subtitles must include timing information indicating when each line of text should be displayed.
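The timing information described above can be illustrated with the widely used SubRip (.srt) format, where each cue carries an index, a start and end timestamp, and the text. A minimal sketch (the helper names are our own, not from any particular library):

```python
def format_timestamp(ms: int) -> str:
    """Render milliseconds as an SRT timestamp: HH:MM:SS,mmm."""
    hours, rem = divmod(ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    seconds, millis = divmod(rem, 1_000)
    return f"{hours:02}:{minutes:02}:{seconds:02},{millis:03}"

def make_srt(cues):
    """Serialize (start_ms, end_ms, text) tuples as an SRT document."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, start=1):
        blocks.append(
            f"{i}\n{format_timestamp(start)} --> {format_timestamp(end)}\n{text}"
        )
    return "\n\n".join(blocks) + "\n"

cues = [(0, 2_500, "Hello, world."), (3_000, 6_000, "Subtitles need timing.")]
print(make_srt(cues))
```

Each cue is separated by a blank line, which is how players delimit entries when reading the file back.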
Second, subtitles need to be distributed to the audience. Open subtitles are added directly to the recorded video frames and thus cannot be removed once added. On the other hand, closed subtitles are stored separately, allowing subtitles in different languages to be used without changing the video itself. In either case, a wide variety of technical approaches and formats are used to encode the subtitles.
Third, subtitles need to be displayed to the audience. Open subtitles are always shown whenever the video is played because they are part of it. However, displaying closed subtitles is optional since they are overlaid onto the video by whatever is playing it. For example, media player software might be used to combine closed subtitles with the video itself. In some theaters or venues, a dedicated screen or screens are used to display subtitles. If that dedicated screen is above rather than below the main display area, the subtitles are called surtitles.
The finished subtitle file is used to add the subtitles to the picture, either by burning them directly into the video image or by rendering and overlaying them at playback time.
Subtitles can also be created by individuals using freely available subtitle-creation software such as Subtitle Workshop, MovieCaptioner or Subtitle Composer, and then hardcoded onto a video file with programs such as VirtualDub in combination with VSFilter, which can also be used to show subtitles as softsubs in many software video players.
Dedicated formats also exist for multimedia-style webcasting.
For example, on YouTube, automatic captions are available in Arabic, Bengali, Dutch, English, French, German, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Portuguese, Russian, Spanish, Turkish, Ukrainian, and Vietnamese. If automatic captions are available for the language, they are automatically published on the video.
Automatic captions are generally less accurate than human-typed captions. Automatic captions regularly fail to distinguish between similar-sounding words, such as to, two, and too. This can be particularly problematic with educational material, such as lecture recordings, that may include uncommon vocabulary and proper names. This problem can be compounded with poor audio quality (drops in audio, background noise, and people talking over each other, for example). Disability rights groups have emphasised the need for these captions to be reviewed by a human prior to publishing, particularly in cases where students' grades may be adversely affected by inadequate captioning.
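The accuracy gap discussed above is commonly quantified as word error rate (WER): the minimum number of word substitutions, insertions and deletions needed to turn the automatic transcript into the human reference, divided by the reference length. A minimal sketch:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein distance over words, normalized by reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

# "two" misheard as "too": one substitution in a four-word reference
print(word_error_rate("I have two cats", "I have too cats"))  # 0.25
```

Homophone confusions of the kind mentioned above each count as one substitution, so even a transcript that sounds right when read aloud can score a substantial WER.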
Newsroom captioning involves the automatic transfer of text from the newsroom computer system to a device which outputs it as captions. It does work, but its suitability as an exclusive system would only apply to programs which had been scripted in their entirety on the newsroom computer system, such as short interstitial updates.
In the United States and Canada, some broadcasters have used it exclusively and simply left uncaptioned sections of the bulletin for which a script was unavailable. Newsroom captioning limits captions to pre-scripted materials and, therefore, does not cover all of the news, weather and sports segments of a typical local news broadcast which are typically not pre-scripted. This includes last-second breaking news or changes to the scripts, ad-lib conversations of the broadcasters, and emergency or other live remote broadcasts by reporters in-the-field. By failing to cover items such as these, newsroom style captioning (or use of the teleprompter for captioning) typically results in coverage of less than 30% of a local news broadcast.
Stenography is a system of rendering words phonetically, and English, with its multitude of homophones (e.g., there, their, they're), is particularly unsuited to easy transcriptions. Stenographers working in courts and inquiries usually have 24 hours in which to deliver their transcripts. Consequently, they may enter the same phonetic stenographic codes for a variety of homophones, and fix up the spelling later. Real-time stenographers must deliver their transcriptions accurately and immediately. They must therefore develop techniques for keying homophones differently, and be unswayed by the pressures of delivering accurate product on immediate demand.
Submissions to recent captioning-related inquiries have revealed concerns from broadcasters about captioning sports. In the absence of sport captioning, the Australian Caption Centre submitted to the National Working Party on Captioning (NWPC), in November 1998, three examples of sport captioning, each performed on tennis, rugby league and swimming programs:
The NWPC concluded that the standard it accepts is the comprehensive real-time method, which gives viewers access to the commentary in its entirety. Also, not all sports are live. Many events are pre-recorded hours before they are broadcast, allowing a captioner to caption them using offline methods.
News captioning applications currently available are designed to accept text from a variety of inputs: stenography, Velotype, QWERTY, ASCII import, and the newsroom computer. This allows one facility to handle a variety of online captioning requirements and to ensure that captioners properly caption all programs.
Current affairs programs usually require stenographic assistance. Even though the segments which comprise a current affairs program may be produced in advance, they are usually produced just before on-air time, and their duration makes QWERTY input of the text unfeasible.
News bulletins, on the other hand, can often be captioned without stenographic input (unless there are live crosses or ad-libbing by the presenters). This is because:
Offline captioning involves a five-step design and editing process, and does much more than simply display the text of a program. Offline captioning helps the viewer follow a story line, become aware of mood and feeling, and allows them to fully enjoy the entire viewing experience. Offline captioning is the preferred presentation style for entertainment-type programming.
The only significant difference for the user between SDH subtitles and closed captions is their appearance: SDH subtitles usually are displayed with the same proportional font used for the translation subtitles on the DVD; however, closed captions are displayed as white text on a black band, which blocks a large portion of the view. Closed captioning is falling out of favor as many users have no difficulty reading SDH subtitles, which are text with contrast outline. In addition, DVD subtitles can specify many colors on the same character: primary, outline, shadow, and background. This allows subtitlers to display subtitles on a usually translucent band for easier reading; however, this is rare, since most subtitles use an outline and shadow instead, in order to block a smaller portion of the picture. Closed captions may still supersede DVD subtitles, since many SDH subtitles present all of the text centered (an example of this is DVDs and Blu-ray Discs manufactured by Warner Bros.), while closed captions usually specify position on the screen: centered, left align, right align, top, etc. This is helpful for speaker identification and overlapping conversation. Some SDH subtitles (such as the subtitles of newer Universal Studios DVDs and Blu-ray Discs and most 20th Century Fox Blu-ray Discs, and some Columbia Pictures DVDs) do have positioning, but it is not as common.
DVDs for the U.S. market now sometimes have three forms of English subtitles: SDH subtitles; English subtitles, helpful for viewers who may not be hearing impaired but whose first language may not be English (although they are usually an exact transcript and not simplified); and closed caption data that is decoded by the end-user's closed caption decoder. Most anime releases in the U.S. only include translations of the original material as subtitles; therefore, SDH subtitles of English dubs ("dubtitles") are uncommon.
High-definition disc media (HD DVD, Blu-ray Disc) uses SDH subtitles as the sole method because technical specifications do not require HD to support line 21 closed captions. Some Blu-ray Discs, however, are said to carry a closed caption stream that only displays through standard-definition connections. Many HDTVs allow the end-user to customize the captions, including the ability to remove the black band.
Song lyrics are not always captioned, as additional copyright permissions may be required to reproduce the lyrics on-screen as part of the subtitle track. In October 2015, major studios and Netflix were sued over this practice, citing claims of false advertising (as the work is henceforth not completely subtitled) and civil rights violations (under California's Unruh Civil Rights Act, guaranteeing equal rights for people with disabilities). Judge Stephen Victor Wilson dismissed the suit in September 2016, ruling that allegations of civil rights violations did not present evidence of intentional discrimination against viewers with disabilities, and that allegations over misrepresenting the extent of subtitles "fall far short of demonstrating that reasonable consumers would actually be deceived as to the amount of subtitled content provided, as there are no representations whatsoever that all song lyrics would be captioned, or even that the content would be 'fully' captioned."
According to HK Magazine, the practice of captioning in Standard Chinese was pioneered in Hong Kong during the 1960s by Run Run Shaw of Shaw Brothers Studio. In a bid to reach the largest audience possible, Shaw had already recorded his films in Mandarin, reasoning that it would be the most universal variety of Chinese. However, this did not guarantee that the films could be understood by non-Mandarin-speaking audiences, and dubbing into different varieties was seen as too costly. The decision was thus made to include Standard Chinese subtitles in all Shaw Brothers films. As the films were made in British-ruled Hong Kong, Shaw also decided to include English subtitles to reach English speakers in Hong Kong and allow for exports outside Asia.
Subtitle translation may differ from the translation of written text. Usually, during the process of creating subtitles for a film or television program, the picture and each sentence of the audio are analyzed by the subtitle translator, who may or may not have access to a written transcript of the dialogue. Especially in the field of commercial subtitles, the subtitle translator often interprets what is meant rather than translating the manner in which the dialogue is stated; that is, the meaning is more important than the form. The audience does not always appreciate this, and it can be frustrating for people who are familiar with some of the spoken language, since spoken language may contain verbal padding or culturally implied meanings that cannot be conveyed in the written subtitles. The subtitle translator may also condense the dialogue to achieve an acceptable reading speed, whereby purpose is more important than form.
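"Acceptable reading speed" is usually quantified in characters per second (CPS); professional guidelines commonly cap subtitles somewhere around 15–20 CPS, though the exact limit varies by broadcaster, and the threshold used below is an assumption for illustration. A sketch of the check a subtitler might apply:

```python
def chars_per_second(text: str, start_ms: int, end_ms: int) -> float:
    """Reading speed of one cue, counting visible characters (no newlines)."""
    visible = len(text.replace("\n", ""))
    return visible * 1000 / (end_ms - start_ms)

def needs_condensing(text: str, start_ms: int, end_ms: int,
                     limit: float = 17.0) -> bool:
    """Flag cues above the assumed CPS limit for condensing or re-timing."""
    return chars_per_second(text, start_ms, end_ms) > limit

# 34 visible characters shown for 2 seconds = 17.0 CPS, right at the limit
line = "The meaning matters more than form"
print(chars_per_second(line, 0, 2000))
```

A cue flagged by such a check is typically shortened (dropping padding words) or left on screen longer, which is exactly the meaning-over-form condensing described above.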
Especially in fansubs, the subtitle translator may translate both form and meaning. The subtitle translator may also choose to display a note in the subtitles, usually in parentheses ("(" and ")"), or as a separate block of on-screen text. This allows the subtitle translator to preserve form and still achieve an acceptable reading speed; that is, the subtitle translator may leave a note on the screen even after the character has finished speaking, to both preserve form and facilitate understanding. For example, Japanese has multiple first-person pronouns (see Japanese pronouns), each associated with a different degree of politeness. To compensate during the English translation process, the subtitle translator may reformulate the sentence, add appropriate words or use notes.
The preference for dubbing or subtitling in various countries is largely based on decisions made in the late 1920s and early 1930s. With the arrival of sound film, the film importers in Germany, Italy, France and Spain decided to dub the foreign voices, while the rest of Europe elected to display the dialogue as translated subtitles. The choice was largely due to financial reasons (subtitling is more economical and quicker than dubbing), but during the 1930s it also became a political preference in Germany, Italy and Spain; an expedient form of censorship that ensured that foreign views and ideas could be stopped from reaching the local audience, as dubbing makes it possible to create a dialogue which is totally different from the original. In larger German cities a few "special cinemas" use subtitling instead of dubbing.
Dubbing is still the norm and favored form in these four countries, but the proportion of subtitling is slowly growing, mainly to save cost and turnaround-time, but also due to a growing acceptance among younger generations, who are better readers and increasingly have a basic knowledge of English (the dominant language in film and TV) and thus prefer to hear the original dialogue.
Nevertheless, in Spain, for example, only public TV channels show subtitled foreign films, usually at late night. It is extremely rare that any Spanish TV channel shows subtitled versions of TV programs, series or documentaries. With the advent of digital terrestrial broadcast TV, it has become common practice in Spain to provide optional audio and subtitle streams that allow watching dubbed programs with the original audio and subtitles. In addition, only a small proportion of cinemas show subtitled films. Films with dialogue in Galician, Catalan or Basque are always dubbed, not subtitled, when they are shown in the rest of the country. Some non-Spanish-speaking TV stations subtitle interviews in Spanish; others do not.
In many countries, local network television will show dubbed versions of English-language programs and movies, while cable stations (often international) more commonly broadcast subtitled material. Preference for subtitles or dubbing varies according to individual taste and reading ability, and theaters may order two prints of the most popular films, allowing moviegoers to choose between dubbing or subtitles. Animation and children's programming, however, is nearly universally dubbed, as in other regions.
Since the introduction of the DVD and, later, the Blu-ray Disc, some high budget films include the simultaneous option of both subtitles and dubbing. Often in such cases, the translations are made separately, rather than the subtitles being a verbatim transcript of the dubbed scenes of the film. While this allows for the smoothest possible flow of the subtitles, it can be frustrating for someone attempting to learn a foreign language.
In the traditional subtitling countries, dubbing is generally regarded as something strange and unnatural and is only used for animated films and TV programs intended for pre-school children. As animated films are "dubbed" even in their original language and ambient noise and effects are usually recorded on a separate sound track, dubbing a low quality production into a second language produces little or no noticeable effect on the viewing experience. In dubbed live-action television or film, however, viewers are often distracted by the fact that the audio does not match the actors' lip movements. Furthermore, the dubbed voices may seem detached, inappropriate for the character, or overly expressive, and some ambient sounds may not be transferred to the dubbed track, creating a less enjoyable viewing experience.
In several countries or regions nearly all foreign language TV programs are subtitled, instead of dubbed, such as:
It is also common for television services in minority languages to subtitle their programs in the dominant language as well. Examples include the Welsh-language S4C and Irish-language TG4, which subtitle in English, and the Swedish-language Yle Fem in Finland, which subtitles in the majority language, Finnish.
In Wallonia (Belgium) films are usually dubbed, but sometimes they are played on two channels at the same time: one dubbed (on La Une) and the other subtitled (on La Deux), but this is no longer done as frequently due to low ratings.
In Australia, one free-to-air network, SBS, airs its foreign-language shows subtitled in English.
While distributing content, subtitles can appear in one of three types: hard (burned permanently into the video frames), prerendered (stored as separate video images overlaid during playback), or soft (stored as timed text rendered by the player).
In another categorization, digital video subtitles are sometimes called internal if they are embedded in a single video file container along with the video and audio streams, and external if they are distributed as a separate file, which is less convenient but easier to edit or change.
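External subtitles are typically matched to a video by filename: most players automatically load, say, movie.srt sitting next to movie.mkv, and the language-tagged convention movie.en.srt is also widespread, though support varies by player. A sketch of that lookup (the function name and extension list are our own):

```python
from pathlib import Path

# Common text-subtitle extensions players probe for
SUBTITLE_EXTS = {".srt", ".ass", ".ssa", ".sub", ".vtt"}

def find_external_subs(video: Path) -> list[Path]:
    """Return sibling subtitle files whose name starts with the video's stem,
    matching both movie.srt and language-tagged movie.en.srt styles."""
    stem = video.stem
    return sorted(
        p for p in video.parent.iterdir()
        if p.suffix.lower() in SUBTITLE_EXTS
        and p.stem.split(".")[0] == stem
    )

# Usage: find_external_subs(Path("movies/movie.mkv"))
```

Because the subtitle file is a separate sibling on disk, it can be edited, replaced, or translated without touching the video container at all, which is exactly the convenience trade-off described above.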
Comparison table:

| Feature | Hard | Prerendered | Soft |
| --- | --- | --- | --- |
| Can be turned off or on | No | Yes | Yes |
| Multiple subtitle variants (for example, languages) | No | Yes | Yes |
| Editable | No | Difficult, but possible | Yes |
| Player requirements | None | Most players support them | Player must support the subtitle format |
| Visual appearance, colors, font quality | Fixed when burned in; limited only by the video itself | Fixed when rendered; constrained by the low-bitrate stream | Depends on the player and subtitle file format |
| Transitions, karaoke and other special effects | Anything possible | Limited | Depends on the format; often limited |
| Distribution | Inside original video | Separate low-bitrate video stream, commonly multiplexed | Relatively small subtitle file or instructions stream, multiplexed or separate |
| Additional overhead | None, although burned-in text can reduce compression efficiency | Relatively high | Low |
Individual subtitle formats can likewise be compared by name, file extension, type, text styling support, metadata, timings, and timing precision.
There are still many more uncommon formats. Most of them are text-based and have the extension .txt.
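Because so many of these formats are plain text behind a generic .txt extension, tools often have to sniff the format from the content rather than trust the extension. A sketch that recognizes two well-known signatures, a SubRip timing line and a MicroDVD frame-based line:

```python
import re

# SubRip cue timing: HH:MM:SS,mmm --> HH:MM:SS,mmm
SRT_TIMING = re.compile(r"^\d{2}:\d{2}:\d{2},\d{3} --> \d{2}:\d{2}:\d{2},\d{3}")
# MicroDVD line: {start_frame}{end_frame}Text
MICRODVD_LINE = re.compile(r"^\{\d+\}\{\d+\}")

def sniff_format(lines) -> str:
    """Guess a text subtitle format from its first few lines."""
    for line in lines:
        line = line.strip()
        if SRT_TIMING.match(line):
            return "srt"
        if MICRODVD_LINE.match(line):
            return "microdvd"
    return "unknown"

print(sniff_format(["1", "00:00:01,000 --> 00:00:03,000", "Hello"]))  # srt
```

A real tool would check for more signatures (and character encodings), but the principle, matching each format's distinctive timing syntax, is the same.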
For movies on DVD Video:
For TV broadcast:
Subtitles created for TV broadcast are stored in a variety of file formats. The majority of these formats are proprietary to the vendors of subtitle insertion systems.
Broadcast subtitle formats include: .ESY, .XIF, .X32, .PAC, .RAC, .CHK, .AYA, .890, .CIP, .CAP, .ULT, .USF, .CIN, .L32, .ST4, .ST7, .TIT, .STL
The EBU format defined by Technical Reference 3264-E is an 'open' format intended for subtitle exchange between broadcasters. Files in this format have the extension .stl (not to be confused with the text-based "Spruce subtitle format" mentioned above, which also has the extension .stl).
For internet delivery:
The Timed Text format, currently a "Candidate Recommendation" of the W3C (called DFXP), is also proposed as an 'open' format for subtitle exchange and distribution to media players, such as Microsoft Silverlight.
A variation of this was used in the video game Max Payne 3. Subtitles are provided for English, Spanish (Chapter 11 only) and Portuguese (Chapters 1, 2, 3, 5, 6, 7, 9, 10, 12, 13 and 14 only) dialogue, but the latter is left untranslated as the main character does not understand the language.