Digital Video Glossary
Glossary of commonly used terms in digital video
Anamorphic
This term is often used when describing a DVD's video encoding resolution. It means that the full encoded frame is used to represent the source image, with no black bars encoded into the picture. The DVD player scales the video after decoding to a suitable size for display so that the correct aspect ratio is maintained.
The alternative to anamorphic encoding is to letterbox the video prior to encoding, which wastes encoded lines on the black bars and so gives lower effective resolution when displayed on a widescreen TV.
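As a rough illustration of the scaling step described above, the sketch below computes the display width a player might use when stretching a stored frame to a target aspect ratio. The 480- and 576-line frame sizes and the 16:9 and 4:3 ratios are example figures chosen here, not taken from any particular player's implementation.

```python
# Illustrative sketch: scale an anamorphically stored frame for display.
# The player keeps the stored height and derives the display width from
# the target aspect ratio.

def display_width(stored_height, display_aspect):
    """Return the display width (in pixels) for a frame of the given
    stored height shown at the given aspect ratio."""
    return round(stored_height * display_aspect)

# A 480-line anamorphic frame stretched to 16:9 for a widescreen TV:
print(display_width(480, 16 / 9))  # 853

# A 576-line frame shown at a conventional 4:3 ratio:
print(display_width(576, 4 / 3))   # 768
```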
ATSC
Advanced Television Systems Committee. An international, non-profit membership organisation developing digital TV standards used mainly in the USA.
Web site at http://www.atsc.org.
2002-06-10 (last updated 2002-06-10) ian
Cinematic Frame Rate Conversion
Cinema film is shot at a frame rate of 24 frames per second. If this rate were used to show the film, the viewer would perceive flicker, so a technique called "pull down" is used: each frame is shown multiple times, eliminating the flicker.
The 24 frames per second must be converted to either 50 or 60 fields per second video signals.
In the 50 fields per second case (usually PAL systems), this is achieved by using the same source frame for both fields of a video frame, so each source picture is used in 2 consecutive video fields. In order to give 50 fields per second, the film playback speed is increased from 24 to 25 frames per second - an increase of about 4%. This means that the PAL-converted film is slightly shorter than the original cinema version. The audio speed is also increased by the same amount.
In the 60 fields per second case (usually NTSC systems) the conversion is achieved using a similar technique, except that the first film frame is shown on 3 consecutive video fields, followed by the second film frame being shown on 2 consecutive video fields. Thus each adjacent pair of source pictures is used in 5 consecutive video fields. This rate conversion technique is known as 3:2 pull down.
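The two field patterns described above can be sketched as follows. Frame numbers here are illustrative indices; a real converter would also have to interleave top and bottom field order, which this sketch ignores.

```python
# Sketch of the two pull-down patterns described above, using integers
# as stand-ins for film frames.

def pal_pulldown(film_frames):
    """PAL case: each film frame fills 2 consecutive fields, with the
    film sped up from 24 to 25 frames per second."""
    return [f for f in film_frames for _ in range(2)]

def ntsc_pulldown(film_frames):
    """NTSC 3:2 pull-down: alternate film frames fill 3 and 2 consecutive
    fields, so 24 film frames become 60 fields."""
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * (3 if i % 2 == 0 else 2))
    return fields

print(pal_pulldown([0, 1]))         # [0, 0, 1, 1]
print(ntsc_pulldown([0, 1, 2, 3]))  # [0, 0, 0, 1, 1, 2, 2, 2, 3, 3]
```

Note that the 3:2 pattern averages 2.5 fields per film frame, which is exactly the ratio between 60 fields per second and 24 frames per second.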
COFDM
Coded Orthogonal Frequency Division Multiplexing.
A modification of OFDM designed specifically for Digital Terrestrial television. See OFDM.
DivX
DivX is a video encoding format based on the MPEG-4 video compression standard, developed by DivXNetworks in conjunction with open source developers. It can achieve a higher compression ratio than the older MPEG-2 standard, resulting in lower bitrates and smaller file sizes.
DTT
Digital Terrestrial Television.
The transmission of digital television signals via a normal television aerial.
DV
The DV standard (originally known as DVC - Digital Video Cassette) was created by a group of consumer electronics companies, which has since grown and is known as the DV consortium.
It uses 1/4 inch (6.35 mm) metal evaporated tape to record very high quality digital video. The video is sampled at 720 pixels per scan line, with 4:1:1 or 4:2:0 chroma (colour) sampling.
The video is compressed using the DCT (Discrete Cosine Transform), similar to motion JPEG. DV can achieve better compression than motion JPEG since it allows better optimisation of quantization tables within a frame.
Only intra-frame (I-frame) compression is used, meaning that frames do not depend on previous or following frames. This requires less complicated codecs than MPEG and also makes the format more suitable for editing, but high bitrates are required to maintain quality.
The video bitrate is fixed at about 25 megabits per second (Mbps). The total data bitrate, including error protection and audio streams is about 36 Mbps.
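As a quick check on these figures, the arithmetic below converts the quoted total bitrate into storage per hour of tape. The use of decimal gigabytes (1 GB = 1000 MB) is a convention chosen here, not something stated above.

```python
# Back-of-envelope storage arithmetic for the DV bitrates quoted above.

video_mbps = 25  # video stream only
total_mbps = 36  # including audio and error protection

def gigabytes_per_hour(mbps):
    """Convert a bitrate in megabits per second to decimal gigabytes
    per hour: Mbit/s * 3600 s, divided by 8 bits/byte and 1000 MB/GB."""
    return mbps * 3600 / 8 / 1000

print(round(gigabytes_per_hour(total_mbps), 1))  # 16.2 (GB per hour of tape)
```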
DVB
Digital Video Broadcasting (Project)
The DVB is an industry-led consortium of over 300 broadcasters, manufacturers, network operators, software developers, regulatory bodies and others in over 35 countries committed to designing global standards for the delivery of digital television and data services.
DVB standards are in use in many non-USA digital TV systems.
Web site at http://www.dvb.org.
DVD
A standard for storing MPEG-2 compressed audio and video information on a high density disc the same physical size as a normal audio CD.
Field
Interlaced video (see Interlace Scanning) consists of frames that are divided into 2 fields. One field contains only the odd scan lines of the frame; the other contains only the even scan lines.
Frame
The term frame comes from movie film. A frame is one complete picture within the reel of film. Many frames are shown every second to produce the effect of motion.
Gamma
TV picture tubes have a non-linear response. This means that doubling the control voltage applied to the picture tube does not double the light intensity emitted from it. A simple law approximates this voltage to light intensity relationship:
output = input ^ gamma
i.e. The output is equal to the input to the power gamma, where gamma is a fixed value for the picture tube in question.
The overall gamma of a system (e.g. camera, transmission media, display device) should be as close to 1 as possible to ensure that images are faithfully reproduced.
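The power law above can be sketched numerically. The gamma value of 2.2 used here is a typical figure for CRT displays, chosen only as an example.

```python
# Sketch of the gamma power law: output = input ** gamma, with signal
# levels normalised to the range 0..1.

def apply_gamma(signal, gamma):
    """Map a normalised input voltage (0..1) to light output (0..1)."""
    return signal ** gamma

# Doubling the input from 0.25 to 0.5 more than quadruples the output,
# illustrating the non-linear response:
print(apply_gamma(0.25, 2.2))  # ~0.047
print(apply_gamma(0.5, 2.2))   # ~0.218

# Pre-correcting with 1/gamma at the camera gives an overall system
# gamma of 1, so the original level is faithfully reproduced:
encoded = apply_gamma(0.25, 1 / 2.2)
print(round(apply_gamma(encoded, 2.2), 3))  # 0.25
```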
Interlace Scanning
Interlacing is a technique used in analogue TV signals to reduce the flicker perceived by the viewer whilst keeping the frame rate (and hence the bandwidth required) low. Each video frame is sent as 2 separate fields. The first field is displayed on the odd numbered scan lines of the TV; the second field is displayed on the even numbered scan lines.
There is often a temporal (time) difference between the first and second fields of the video frame. This is not the case where video material has been derived from cinematic film.
See also Progressive Scanning.
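The odd/even split described above can be illustrated with a simple sketch, using a list of scan lines as a stand-in for a real frame buffer (field order and timing are ignored here).

```python
# Minimal illustration of dividing a frame into its two interlaced
# fields. Scan lines are numbered from 1, as in the description above.

def split_fields(frame_lines):
    """Return (odd_field, even_field) for a frame given as a list of
    scan lines."""
    odd = frame_lines[0::2]   # scan lines 1, 3, 5, ...
    even = frame_lines[1::2]  # scan lines 2, 4, 6, ...
    return odd, even

frame = ["line1", "line2", "line3", "line4"]
print(split_fields(frame))  # (['line1', 'line3'], ['line2', 'line4'])
```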
MPEG
Moving Picture Experts Group.
A group working under the directives of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).
The group's work concentrates on defining standards for the coding of moving pictures, audio and related data.
MPEG-1 defines a framework for coding moving video and audio, significantly reducing the amount of storage with minimal perceived difference in quality. In addition a System specification defines how audio and video streams can be combined to produce a system stream. This forms the basis of the coding used for the VCD format.
MPEG-2 builds on the MPEG-1 specification, adding further pixel resolutions, support for interlaced pictures, better error recovery possibilities, more chrominance formats, non-linear macroblock quantization and the possibility of higher-precision DC components.
OFDM
Orthogonal Frequency Division Multiplexing.
OFDM allows a high data rate to be transmitted over a hostile channel with good resistance to multi-path distortion. It is used for DTT as well as digital audio broadcasting (DAB).
Multiple carriers are generated using an Inverse Discrete Fourier Transform. Each symbol is artificially lengthened by a "Guard Interval" to combat multi-path interference.
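The two steps described above can be sketched in code: an inverse DFT turns one complex value per carrier into a time-domain symbol, and a cyclic prefix serves as the guard interval. The carrier count and guard length below are toy figures; real DVB-T systems use thousands of carriers and standardised guard-interval fractions.

```python
# Toy sketch of OFDM symbol construction: inverse DFT over the carrier
# values, then a guard interval formed as a cyclic prefix.
import cmath

def ofdm_symbol(carrier_values, guard_len):
    n = len(carrier_values)
    # Inverse DFT: each time-domain sample is a sum over all carriers.
    time_samples = [
        sum(c * cmath.exp(2j * cmath.pi * k * t / n)
            for k, c in enumerate(carrier_values)) / n
        for t in range(n)
    ]
    # Guard interval as a cyclic prefix: prepend the symbol's tail, so
    # delayed multi-path copies still land inside the same symbol.
    return time_samples[-guard_len:] + time_samples

symbol = ofdm_symbol([1, -1, 1, -1, 1, -1, 1, -1], guard_len=2)
print(len(symbol))  # 10 samples: 8 useful + 2 guard
```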
Progressive Scanning
This refers to the path that the display refresh takes. Progressive scanning starts in the top left of the display, drawing consecutive lines across the screen until reaching the bottom. This is the technique used in modern PC monitors.
See also Interlace Scanning.
Pull Down
A technique used in cinema to reduce flicker. Films are shot at 24 frames per second. Normally the eye would see flicker if frames were changed only 24 times a second. To prevent this, each frame is "pulled down" (shown several times).
See also Cinematic Frame Rate Conversion.
SVCD
See Super Video CD.
VCD
See Video CD.
Video CD
Video CD is a standard for storing audio and video information on CD discs. A 74 minute video sequence can be stored on a single disc. See the media section for more detailed information.
XSVCD
This is an SVCD that does not strictly adhere to the SVCD specification, but is still playable on many SVCD and DVD players.
XVCD
This is a VCD that does not strictly adhere to the VCD specification, but is still playable on many VCD, SVCD and DVD players.