Generation loss is the loss of quality between subsequent copies or transcodes of data. Anything that reduces the quality of the representation when copying, and would cause a further reduction in quality when making a copy of the copy, can be considered a form of generation loss. File size increases are a common result of generation loss, as the introduction of artifacts may actually increase the entropy of the data with each generation.
Generation loss was a major consideration in complex analog audio and video editing, where multi-layered edits were often created by making intermediate mixes which were then "bounced down" back onto tape. Careful planning was required to minimize generation loss, and the resulting noise and poor frequency response.
One way of minimizing the number of generations needed was to use an audio mixing or video editing suite capable of mixing a large number of channels at once; in the extreme case, for example with a 48-track recording studio, an entire complex mixdown could be done in a single generation, although this was prohibitively expensive for all but the best-funded projects.
The introduction of professional analog noise reduction systems such as Dolby A helped reduce the amount of audible generation loss, but these analog systems were eventually superseded by digital systems, which vastly reduced generation loss.
According to ATIS, "Generation loss is limited to analog recording because digital recording and reproduction may be performed in a manner that is essentially free from generation loss."
Generation loss can still occur when using lossy video or audio compression codecs, as these introduce artifacts into the source material with each encode or re-encode. Lossy compression codecs such as Apple ProRes, Advanced Video Coding and MP3 are very widely used, as they allow dramatic reductions in file size while remaining indistinguishable from the uncompressed or losslessly compressed original for viewing purposes. The only way to avoid generation loss is to use uncompressed or losslessly compressed files, which may be expensive from a storage standpoint as they require more space in flash memory or on hard drives per second of runtime. Uncompressed video requires a high data rate; for example, 1080p video at 60 frames per second requires approximately 370 megabytes per second. Lossy codecs make Blu-rays and streaming video over the Internet feasible, since neither can deliver the amounts of data needed for uncompressed or losslessly compressed video at acceptable frame rates and resolutions. Images can suffer from generation loss in the same way video and audio can.
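The uncompressed data rate quoted above follows from simple arithmetic, assuming 24 bits (3 bytes) per pixel with no chroma subsampling:

```python
# Back-of-the-envelope data rate for uncompressed 1080p video at 60 fps,
# assuming 3 bytes (8-bit R, G, B) per pixel and no chroma subsampling.
width, height = 1920, 1080
bytes_per_pixel = 3
fps = 60

bytes_per_second = width * height * bytes_per_pixel * fps
megabytes_per_second = bytes_per_second / 1_000_000

print(f"{megabytes_per_second:.0f} MB/s")  # approximately 373 MB/s
```

At that rate, a single hour of footage occupies well over a terabyte, which is why lossy codecs are used for delivery.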
Processing a lossily compressed file rather than an original usually results in more loss of quality than generating the same output from an uncompressed original. For example, a low-resolution digital image for a web page is better generated from an uncompressed raw image than from an already-compressed JPEG file of higher quality.
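A toy numeric sketch of this effect, using coarse quantization as a hypothetical stand-in for JPEG compression: the lossy step bakes artifacts in, and any downsampling done afterwards carries those artifacts into the output.

```python
# Toy illustration: output derived from a lossy copy deviates more from the
# ideal than output derived from the original. Quantization here is a crude
# hypothetical stand-in for a real codec, not actual JPEG.

def quantize(samples, step):
    """Lossy stand-in for compression: snap each value to a multiple of step."""
    return [round(s / step) * step for s in samples]

def downsample(samples):
    """Halve the resolution by averaging neighbouring pairs."""
    return [(a + b) / 2 for a, b in zip(samples[::2], samples[1::2])]

original = [0.12, 0.48, 0.53, 0.91, 0.33, 0.67]

from_original = downsample(original)                       # the ideal output
from_compressed = downsample(quantize(original, step=0.2))  # via a lossy copy

def err(xs):
    """Total deviation from the output generated from the original."""
    return sum(abs(a - b) for a, b in zip(xs, from_original))

assert err(from_compressed) > err(from_original)  # err(from_original) == 0
```

The quantization error survives the downsampling step, whereas working from the original incurs only the single, final loss.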
Some digital transforms are reversible, while some are not. Lossless compression is, by definition, fully reversible, while lossy compression throws away some data which cannot be restored. Similarly, many DSP processes are not reversible.
Thus careful planning of a sound or video signal chain from beginning to end, and rearranging it to minimize multiple conversions, is important to avoid generation loss when using lossy compression codecs. Often, arbitrary choices of pixel counts and sampling rates for source, destination, and intermediates can seriously degrade digital signals, in spite of digital technology's potential to eliminate generation loss completely.
Similarly, when using lossy compression, it will ideally only be done once, at the end of the workflow involving the file, after all required changes have been made.
Repeated applications of lossy compression and decompression can cause generation loss, particularly if the parameters used are not consistent across generations. Ideally an algorithm will be both idempotent, meaning that if the signal is decoded and then re-encoded with identical settings, there is no loss, and scalable, meaning that if it is re-encoded with lower-quality settings, the result will be the same as if it had been encoded from the original signal (see Scalable Video Coding). More generally, transcoding between different parameters of a particular encoding will ideally yield the greatest common shared quality. For instance, converting from an image with 4 bits of red and 8 bits of green to one with 8 bits of red and 4 bits of green would ideally yield simply an image with 4 bits of red color depth and 4 bits of green color depth, without further degradation.
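Idempotence can be illustrated with a toy "codec" that quantizes samples to a fixed step: the first encode loses information, a re-encode with identical settings loses nothing more, and a re-encode with changed settings degrades the signal again. This is a hypothetical sketch, not a real codec.

```python
# Toy lossy "codec": snap each sample to the nearest multiple of step.
def encode(samples, step):
    return [round(s / step) * step for s in samples]

original = [0.10, 0.26, 0.41, 0.77, 0.93]

gen1 = encode(original, step=0.25)   # first generation: loss occurs here
gen2 = encode(gen1, step=0.25)       # identical settings: no further loss
assert gen1 == gen2                  # idempotent for identical parameters

gen2b = encode(gen1, step=0.30)      # changed parameters: fresh loss
assert gen2b != gen1                 # the signal degrades again
```

Real codecs are far more complex, but the principle is the same: values already on the quantization grid survive re-encoding intact, while any change of grid forces new rounding.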
Some lossy compression algorithms are much worse than others in this regard, being neither idempotent nor scalable, and introducing further degradation if parameters are changed.
For example, with JPEG, changing the quality setting will cause different quantization constants to be used, causing additional loss. Further, as JPEG is divided into 16×16 blocks (or 16×8, or 8×8, depending on chroma subsampling), cropping that does not fall on an 8×8 boundary shifts the encoding blocks, causing substantial degradation; similar problems happen on rotation. This can be avoided by using tools that crop and rotate losslessly along block boundaries. Similar degradation occurs if video keyframes do not line up from generation to generation.
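The block-alignment problem can be sketched with a toy block-based "codec" that, like JPEG's per-block DCT quantization, processes fixed 8-sample blocks (here, crudely, by replacing each block with its average). Re-encoding an aligned signal changes nothing, but a crop that shifts the block grid mixes old blocks together and degrades the data again. This is a hypothetical sketch, not actual JPEG.

```python
BLOCK = 8

def encode(samples):
    """Toy block codec: replace each 8-sample block with its average
    (a crude stand-in for quantizing DCT coefficients per block)."""
    out = []
    for i in range(0, len(samples), BLOCK):
        block = samples[i:i + BLOCK]
        avg = sum(block) / len(block)
        out.extend([avg] * len(block))
    return out

signal = [float(i % 16) for i in range(64)]
gen1 = encode(signal)

# Aligned re-encode: each block is already constant, so nothing changes.
assert encode(gen1) == gen1

# Crop 3 samples off the front: the block grid shifts, new blocks straddle
# two old ones, and re-encoding alters the data (extra generation loss).
cropped = gen1[3:]
assert encode(cropped) != cropped
```

Cropping by a multiple of the block size instead would leave every block intact, which is exactly what lossless-crop tools exploit.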
The scanning and printing stages of a photocopier rely on imperfect sensors and physical media such as paper and toner, so noise accumulates over successive copies. Similarly, lossy image formats such as JPEG introduce degradation when files are repeatedly edited and re-saved. While directly copying a JPEG file preserves its quality, opening and saving it in an image editor creates a new, re-encoded version, introducing subtle changes. Social media platforms such as Facebook and Twitter automatically re-encode uploaded images at low-quality settings to optimize storage and bandwidth, further compounding compression artifacts. Over time, repeated re-encoding or processing can significantly degrade an image's quality.
Resampling causes aliasing, both blurring low-frequency components and adding high-frequency noise, causing jaggies, while rounding off computations to fit in finite precision introduces quantization, causing colour banding; if fixed by dither, this instead becomes noise. In both cases, these at best degrade the signal-to-noise ratio, and may cause artifacts. Quantization error can be reduced by using high precision while editing (notably floating-point numbers), reducing back to fixed precision only at the end.
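The benefit of high-precision intermediates can be shown with a small sketch: the same edit chain (halve the level, then double it, mathematically a no-op) is applied once rounding to 8-bit codes after every step, and once in floating point with a single quantization at the end. Illustrative only; real editors also apply dither when reducing precision.

```python
def to_8bit(x):
    """Quantize a level in [0.0, 1.0] to an 8-bit code (0-255)."""
    return min(255, max(0, round(x * 255)))

level = 101 / 255        # a level whose 8-bit code is an odd value

# (a) Fixed precision at every intermediate step: the halved value 50.5
# cannot be represented in 8 bits, so rounding discards information that
# the subsequent doubling cannot restore.
q = to_8bit(level)
for gain in (0.5, 2.0):
    q = to_8bit((q / 255) * gain)

# (b) Full precision throughout, quantized once at the end: no loss,
# because the gains cancel exactly in floating point.
f = level
for gain in (0.5, 2.0):
    f *= gain
final = to_8bit(f)

print(q, final)          # the fixed-precision chain has drifted
```

The drift here is only one or two codes, but each further edit in the fixed-precision chain compounds it, which is exactly the generation loss the floating-point workflow avoids.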
Often, particular implementations fall short of theoretical ideals.