It seems that no sooner do we get settled with our 720p and 1080p HDTV displays than talk starts up about even higher-resolution images. Demonstrations of 2K and 4K display technologies have knocked viewers’ virtual socks off. Is there a limit to how high “high definition” can become?
One problem that doesn’t get discussed much, however, is how to move all that high-resolution data to the television screen. A 3,840 by 2,160 pixel image (sometimes called “Double Full HD”) requires four times as much data as a 1080p frame. How can you squeeze that through a broadband pipe, let alone broadcast it over the airwaves? The problem is that bit-mapped images can only be compressed so far before noticeable artifacts appear when the image is decompressed back to its original size. You may have noticed this with JPEG photos that have been compressed too aggressively.
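The back-of-the-envelope arithmetic makes the problem concrete. This short sketch assumes 24 bits per pixel and 60 frames per second (figures not stated in the article, chosen as typical values):

```python
# Rough uncompressed data rates for the resolutions mentioned above.
# Assumptions: 24 bits per pixel, 60 frames per second.

def raw_bits_per_frame(width, height, bits_per_pixel=24):
    """Raw size of one uncompressed frame, in bits."""
    return width * height * bits_per_pixel

hd_1080p = raw_bits_per_frame(1920, 1080)        # one Full HD frame
double_full_hd = raw_bits_per_frame(3840, 2160)  # one "Double Full HD" frame

print(double_full_hd // hd_1080p)     # 4 -- four times the data per frame
print(double_full_hd * 60 / 1e9)      # ~11.9 Gbps uncompressed at 60 fps
```

Even heavily compressed, a stream starting from roughly 12 gigabits per second has a long way to shrink before it fits a broadcast channel.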
Now there is hope from an unexpected direction. The Society for Motion Picture and Television Engineers (SMPTE) has been quietly working on the problem, and has come up with a solution. In a back-to-the-future move, the organization’s latest standard abandons the rasterized approach that has been with us since the first days of broadcast television, and instead adopts a vector-based design.
A rasterized image is scanned dot by dot, row by row. On a 1080p display, this means 1,080 rows of 1,920 dots each. That’s a lot of data. A vector image takes a different route. Instead of scanning the image, it defines the image as a series of lines that can be described by mathematical formulas. It can be as simple as drawing a straight line that starts here and ends there, or it can take on complex curves. These lines can form the boundaries of areas that are then filled with a texture mapped from the original image. The vectors can also be assigned a variety of characteristics such as color and thickness, and together they can be used to recreate the original image.
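To make the idea concrete, here is a minimal sketch of what such vector primitives might look like. The standard’s actual primitives aren’t described in the article, so the `Line` and `QuadCurve` classes below are purely illustrative: a straight segment and a quadratic Bézier curve, each able to produce any point along itself:

```python
from dataclasses import dataclass

@dataclass
class Line:
    """A straight segment from (x0, y0) to (x1, y1)."""
    x0: float; y0: float; x1: float; y1: float
    def point(self, t):
        # t runs from 0 (start) to 1 (end)
        return (self.x0 + t * (self.x1 - self.x0),
                self.y0 + t * (self.y1 - self.y0))

@dataclass
class QuadCurve:
    """A quadratic Bezier: control point (cx, cy) bends the curve."""
    x0: float; y0: float; cx: float; cy: float; x1: float; y1: float
    def point(self, t):
        u = 1 - t
        return (u*u*self.x0 + 2*u*t*self.cx + t*t*self.x1,
                u*u*self.y0 + 2*u*t*self.cy + t*t*self.y1)

# A shape is just a closed boundary plus fill attributes:
shape = {
    "outline": [Line(0, 0, 1, 0), QuadCurve(1, 0, 1, 1, 0, 1), Line(0, 1, 0, 0)],
    "fill": "#8040c0",
    "stroke_width": 0.01,
}
```

A handful of numbers describes a shape that would take thousands of pixels to store as a bitmap, which is where the compression advantage comes from.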
Now here’s where it gets interesting. Because each element of the image is actually a geometrical definition, it can be infinitely scaled to match the resolution of the display on which it is shown. Each display will show as much detail as it can, based on its actual resolution. And because all the information exists as formulas, it can be condensed into a much smaller data stream than raster graphics can. Some estimates say that a typical movie frame can be defined as a vector image in less than 10% of the space required for a compressed raster image, but with no loss of the original image content.
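The resolution independence is easy to illustrate. If the vector definitions live in normalized coordinates (an assumption on my part, but a common convention), the same definition maps onto any panel at rasterization time:

```python
# Sketch: the same normalized coordinate maps to whatever panel is attached.

def to_pixels(point, width, height):
    """Map a point in normalized [0, 1] space onto a width x height grid."""
    x, y = point
    return (round(x * (width - 1)), round(y * (height - 1)))

center = (0.5, 0.5)  # the exact middle of the image, at any resolution
print(to_pixels(center, 1920, 1080))   # (960, 540) on a 1080p panel
print(to_pixels(center, 3840, 2160))   # (1920, 1080) on a Double Full HD panel
```

The broadcaster sends one stream; a 1080p set and a future 8K set each rasterize it to the limit of their own pixels.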
It gets even better. Because these vector definitions can be defined as objects, their behavior can be tracked from frame to frame. Instead of sending all the vector information for each frame, the object definition can be sent just once along with instructions about how it moves in subsequent frames. This makes it possible to reduce the data stream by another order of magnitude, so that it is just 1% the size of the equivalent compressed raster image.
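The frame-to-frame scheme amounts to delta encoding of object positions. This sketch assumes a simple model in which each per-frame instruction is just a small (dx, dy) translation; the names and the structure are illustrative, not taken from the standard:

```python
# Sketch: the object's vector definition is sent once; each later frame
# carries only a small motion delta for that object.

def play(object_def, motions):
    """Rebuild the object's position in every frame from cumulative deltas."""
    x, y = object_def["origin"]
    positions = [(x, y)]
    for dx, dy in motions:
        x, y = x + dx, y + dy
        positions.append((x, y))
    return positions

ball = {"origin": (0.10, 0.50)}              # full definition sent in frame 1
moves = [(0.02, 0.00), (0.02, -0.01), (0.02, -0.01)]  # deltas for frames 2-4
print(play(ball, moves))
```

Three frames of motion cost six small numbers here, versus re-sending the whole object three times, which is the order-of-magnitude saving the paragraph describes.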
The beauty of this new approach is that no changes need to be made to the display panels already in use. The controllers simply need to be modified to interpret this vector data and convert it into a rasterized image at the display’s native resolution. Existing HDTVs will be able to use a small external box with an HDMI connection to take advantage of this new technology, while new displays with even higher resolutions will be able to take full advantage of the same data stream.
The Brooklyn Bridge Video Lab is one of the first companies to develop the code for video processing chips required to support this new standard. The company has also announced that it will go public shortly, so you can buy a piece of this technology when the IPO happens.