What is up with screen resolutions? [closed]
























I thought it was determined by the horizontal pixel width, but why is 1920×1080 called 1080p and 1280×720 called 720p? It kind of makes no sense to me - or is my source wrong?





















closed as off-topic by Máté Juhász, JakeGould, robinCTS, Pimp Juice IT, bertieb Aug 20 at 10:12


This question appears to be off-topic. The users who voted to close gave this specific reason:


  • "This question is not about computer hardware or software, within the scope defined in the help center." – Máté Juhász, JakeGould, robinCTS, Pimp Juice IT
If this question can be reworded to fit the rules in the help center, please edit the question.








  • You thought what is determined by horizontal width? Screen resolution is width by height. Multiply one by the other to get "K" [in the broadest terms]. 720p & 1080p are rendering standards, not resolution standards. The 'p' means progressive scan, to differentiate it from the inferior 'i' for interlaced scan. – Tetsujin, Aug 18 at 16:53

  • It's an idiotic change made by manufacturers so that the latest jump to "4k" sounds like a huge deal rather than the ongoing incremental change that it is. Up to 4k we were naming resolutions by their vertical pixel count, and then suddenly with 4k they decided to use the horizontal width instead. – Mokubai♦, Aug 18 at 17:00

  • Look at this answer I posted here. – JakeGould, Aug 18 at 17:32

  • @Mokubai - yeah, the next step is for them to go all hard drive on us and we'll lose the last 96 pixels too. – davidbak, Aug 18 at 22:45

  • I'm very confused because I don't see a difference between the two examples of 1920*1080 and 1280*720. The "p" number is the second number in both cases. Maybe the "but" should be an "and", because, as written, it reads very odd to me. – Greg Schmit, Aug 19 at 23:39














asked Aug 18 at 16:49 by Maritn Ge, edited Aug 19 at 20:58 by WELZ




3 Answers













In the "old days", TVs were analog devices - a CRT scans an electron beam across the front of the display, from left to right. In this sense, people have tried to argue that analog TV has "infinite" horizontal resolution, but they have an exact vertical resolution - the picture is formed of multiple horizontal lines.



Depending on where you are, this would have been either NTSC (525 lines) or PAL (625 lines).



Modern resolutions are still referred to by their "line count" - hence 1080 is the vertical resolution.



With such displays, the image was transmitted interlaced - i.e: the first field contains lines 1, 3, 5, etc... and the second field contains lines 2, 4, 6, etc...
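
As a minimal illustration of that odd/even split, here is a throwaway Python sketch (not from the answer) that divides one frame into its two fields:

    # Split a frame (here just a list of numbered rows) into two interlaced fields
    frame = [f"line {n}" for n in range(1, 11)]   # a 10-line "frame"

    field_1 = frame[0::2]   # lines 1, 3, 5, ... (the odd lines, counting from 1)
    field_2 = frame[1::2]   # lines 2, 4, 6, ... (the even lines)

    print(field_1)   # ['line 1', 'line 3', 'line 5', 'line 7', 'line 9']
    print(field_2)   # ['line 2', 'line 4', 'line 6', 'line 8', 'line 10']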




With the advent of digital TV, the display technology changed - we now have discrete pixels instead of lines. A 2D image is made from an array of pixels, with the array having specific dimensions in the horizontal and vertical axes.



At this point, interlacing remains around only to reduce the bandwidth required for a video stream - in my opinion it's a horrible idea that is fundamentally incompatible with digital display systems.



[image: progressive vs interlaced]



As mentioned earlier, modern resolutions are still referred to by their "line count". But as shown above, we also have to identify whether we are referring to "interlaced" video (indicated by an i), or "progressive" video (indicated by a p).



The frame rate can also be specified, for example (a small parsing sketch follows this list):




  • 480i60 - 480 rows, interlaced, 60 Hz field rate (i.e: 30 Hz frame rate)


  • 1080p30 - 1080 rows, progressive, 30 Hz frame rate
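
Here is the small parsing sketch mentioned above, in Python (the parse_mode helper is hypothetical, purely illustrative), showing how such a mode string breaks down into line count, scan type and rate:

    import re

    def parse_mode(mode: str):
        """Split a mode string like '480i60' or '1080p30' into (lines, scan, rate)."""
        m = re.fullmatch(r"(\d+)([ip])(\d+)?", mode)
        if not m:
            raise ValueError(f"unrecognised mode: {mode!r}")
        lines, scan, rate = m.groups()
        return int(lines), ("interlaced" if scan == "i" else "progressive"), (int(rate) if rate else None)

    print(parse_mode("480i60"))    # (480, 'interlaced', 60)   -> 60 Hz field rate
    print(parse_mode("1080p30"))   # (1080, 'progressive', 30) -> 30 Hz frame rate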


"... okay, but where did 480 come from?"



The analog electronics involved in CRTs are imprecise, and a particular feature of early models was that as the set warmed up, or the capacitors and electronics aged, the image started to change shape. In addition to this, the electron beam has to be turned off and then redirected to the left of the screen for each line, and to the top for each new field/frame - this takes time, and is the reason for "blanking".



To account for this somewhat, not all of the scanned lines were intended / expected to be displayed. NTSC scans 525 lines, but only 483 of these are intended for display - each field shows ~241.5 lines to a viewer (with an additional 21 lines of "blanking"). ~240 lines per field (remember, this is interlaced) equates to 480 lines per frame (in the progressive world). Thus 480. Yay.
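
As a quick sanity check of that arithmetic, here is a throwaway Python sketch using only the NTSC numbers quoted above:

    total_lines   = 525          # lines scanned per NTSC frame
    visible_lines = 483          # lines intended for display; the rest is blanking

    visible_per_field = visible_lines / 2      # two interlaced fields per frame -> 241.5
    usable_per_field  = 240                    # rounded down in practice
    usable_per_frame  = usable_per_field * 2   # -> 480, hence "480"

    print(visible_per_field, usable_per_frame)   # 241.5 480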




For the digital resolutions we follow a pattern... ish:




  • 480 * 1.5 = 720 - "HD Ready"


  • 720 * 1.5 = 1080 - "Full HD"


  • 1080 * 2 = 2160 - "4k" or "Ultra HD"

So actually "4k" isn't following the * 1.5 pattern from before, and it's not really 4000 pixels in either dimension - it's 3840 × 2160.



"4k" is actually "Four times the number of pixels as Full HD". 1920 * 2 = 3840 and 1080 * 2 = 2160. Or lay out four 1080p displays in a 2 × 2 grid - you get 3840 × 2160.



Additionally, if we're using 1080p as a resolution description, then really "4k" should be called 2160p (which it is in the technical world).
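
To make the scaling pattern and the "four times the pixels" claim concrete, here is a small Python sketch (the frame widths are the usual 16:9 values already mentioned in the question and this answer):

    modes = {
        "720p  (HD Ready)":      (1280, 720),
        "1080p (Full HD)":       (1920, 1080),
        "2160p (UHD, aka '4k')": (3840, 2160),
    }

    for name, (w, h) in modes.items():
        print(f"{name:23s} {w} x {h}  = {w * h / 1e6:.2f} Mpx")

    # "4k" is four times the pixel count of Full HD:
    assert 3840 * 2160 == 4 * 1920 * 1080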




In summary, in the consumer / broadcast space:




  • 480 is because that's approximately the number of visible lines that NTSC would display


  • 720 is because that's 1.5× 480


  • 1080 is because that's 1.5× 720


  • 2160 is because that's 2× 1080

  • "4k" is because it's a marketing thing - it isn't a technical specification, and I don't believe that there are any stipulations around the use of it...


Note: I've been talking about consumer / broadcast...



Within Cinema there is DCI 2K (capital K, 2048 × 1080) and DCI 4K (capital K, 4096 × 2160), where the 'K' presumably refers to kibi and the horizontal resolution. "DCI 4K" predates consumer "4k".



The aspect ratio of DCI 4K is not 16:9, but a slightly wider 256∶135... to bring the resolution back in line with 16:9, you can increase the vertical resolution, or decrease the horizontal... But I'm not entirely convinced that the cinema and broadcast standards evolved this closely.
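
For reference, the aspect ratios work out as follows (a quick Python check of the numbers above):

    from fractions import Fraction

    print(Fraction(4096, 2160))   # 256/135 -> DCI 4K, ~1.896:1 (slightly wider than 16:9)
    print(Fraction(3840, 2160))   # 16/9    -> consumer "4k" / UHD, ~1.778:1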



Cinema evolved from whole-frame positives (aka film) directly to digital, whereas TV evolved from a scanning electron beam (line by line) to digital. This is evident both in broadcast and in the way that VHS operates.




As a little extra, I've included the graphic below to expand on the "image changes shape" statement from above.



The graphic (from here) indicates the various "television safe" areas... note the rounded corners...



Importantly:



  • 5 is the "television scanned area"

  • 6 is the "television safe area for action"

    • Faces and important plot information should not fall outside this area


  • 7 is the "television safe area for titles"

    • No text should be outside this area to ensure that subtitles / news tickers / etc... don't overhang the edge of an old display


[image: display safe areas]






answered Aug 18 at 17:20 by Attie, edited Aug 20 at 10:03


















  • 480i30 would be a weirdly low frame rate; much more common is 480i60, 60 fields per second (which can be combed into 30 frames per second). You can deinterlace it to 480p60 (doubling the vertical spatial resolution) or 480p30 (discarding half the temporal resolution). Agreed that interlacing is stupidly horrible now that progressive displays are nearly universal, and it always sucked to store digitally. – Peter Cordes, Aug 18 at 19:20

  • @Peter thanks - I couldn't remember if the number was field rate or frame rate... – Attie, Aug 18 at 20:10

  • The rate number is the temporal resolution: the number of times the display changes (or could change) per second. I think this sometimes gets fudged with progressive content stored as interlaced (evil, bad, and stupid), where both fields are actually from the same time, rather than evenly spaced times. (And of course there's 24p content hard-telecined to 60i, or worse, telecined to 60i and then having VFX added at 60i or 30p, like the Star Trek DVDs before they were remastered. Those DVDs switched from soft-TC 24p for most scenes to 60i for VFX shots, because the show was edited on video.) – Peter Cordes, Aug 18 at 20:26

  • There are actually a few different resolutions that have been called "4K". The first usage, however (according to Wikipedia), was a camera using 4096×2160. Someone thought 4096 was pretty close to 4000 and decided that saying their camera had a 4K resolution sounded cool. It's really just marketing. – Yay295, Aug 19 at 3:47

  • It should also be noted that 480i and 480p use 720 horizontal pixels with a pixel width-to-height aspect ratio of 8/9 (≈ 0.889), which creates an interpolation issue on digital monitors. For CRT displays this isn't an issue, because it just involves a change in the sweep rate (and effective thickness) of the beam used to light up the phosphors. CRT-based HDTVs generally support 480i / 480p / 1080i as "native" resolutions, with a better image for 480i/480p than digital displays, but not as good as 1080p digital displays. – rcgldr, Aug 19 at 21:17































Attie has explained how vertical pixel counts were derived for traditional resolutions such as SD (480p), HD (720p) and Full HD (1080p). I would like to discuss "K" measurements, which originate from the cinema industry. A "K" is roughly 1000 horizontal pixels, and 2K and 4K were casual terms referring to resolutions of approximately 2000 and 4000 horizontal pixels.



Different K measurements were later standardised for digital cinema (2005) and consumer television (2007).



Digital Cinema Initiatives has specified the DCI 4K and 2K resolution standards (4096 × 2160 and 2048 × 1080). These are the full-frame resolutions for digital cinema projectors, with many films displayed in a cropped format such as the flat crop (3996 × 2160, 1.85∶1 aspect ratio) or the CinemaScope crop (4096 × 1716, ≈2.39∶1 aspect ratio).



The 4K standard for consumer television (UHDTV1) can be more accurately labelled as Ultra HD or Quad HD. It is equivalent to 4 Full HD frames placed together. It is 3840 × 2160 with ≈1.78∶1 aspect ratio.
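
A quick numeric check of the frame sizes and aspect ratios quoted above (Python, illustrative only):

    formats = {
        "DCI 4K full frame":  (4096, 2160),
        "DCI 4K flat crop":   (3996, 2160),
        "DCI 4K CinemaScope": (4096, 1716),
        "UHD (consumer 4K)":  (3840, 2160),
    }

    for name, (w, h) in formats.items():
        print(f"{name:20s} {w} x {h}   {w / h:.3f}:1")
    # -> 1.896:1, 1.850:1, 2.387:1 and 1.778:1 respectively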



The Wikipedia article on 4K resolution provides a helpful visualisation which compares different frame sizes.



[image: comparison of video frame sizes]



















    Your assessment that resolution is pixel pitch is kind of correct. In fact, for everything except digital displays, that's the correct definition of resolution (well, more accurately, it's the smallest detail that can be resolved accurately, which functionally translates to the size of the dots making up an image, and from that pixel pitch). If you ever talk to somebody in the printing industry, this is usually what they mean when they say 'resolution'.



    The reason that the term has a different meaning for computer displays is largely historical, but is actually pretty easily explained without getting into much history.



    On a raster display1, you have what's known as a 'native' resolution. This is the highest number of pixels you can have on each side of the screen, and is usually what is used to describe the screen. For example, a Full HD display has a native resolution of 1920 pixels horizontally, and 1080 vertically, so a Full HD display with a roughly 18 inch diagonal has a resolution (in the classical sense) of about 120 dots per inch (DPI) (which is actually pretty low compared to print media).



    However, you can run most displays at lower than their native resolution, which is functionally equivalent to having a larger pixel pitch on the same display. Taking that same 18 inch Full HD display and running it at 1280x720 gives you the equivalent of 80 DPI on the same screen.
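
    That back-of-the-envelope DPI figure is easy to reproduce (a small Python sketch; the ppi helper and the 18-inch diagonal are just the example values used in this answer):

        import math

        def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
            """Pixels per inch for a display of the given pixel dimensions and diagonal size."""
            return math.hypot(width_px, height_px) / diagonal_in

        print(round(ppi(1920, 1080, 18)))   # ~122 -> "about 120 DPI"
        print(round(ppi(1280, 720, 18)))    # ~82  -> "about 80 DPI"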



    Now, the OS and application software (usually) don't care about the exact physical dimensions of the screen, because that information is not really all that useful to displaying information unless you need to display something 'real-size'. Given this, the OS just assumes the size of the display never changes, which means that the different pixel counts are functionally equivalent to the resolution.




    1. A display that uses rows and columns of individual points that can be turned on and off to produce an image. Almost all modern computer displays are raster displays, because they're far easier to make. Compare to a vector display, which just draws lines directly (one of the most recognizable examples being the displays used in the original Asteroids arcade cabinets).





















    • "Resolution" doesn't have any bearing on dot size, pitch or aspect ratio as such - it refers to the smallest addressable "thing": DAC resolution of 16 bits, screen resolution, etc. Interestingly, with RGB displays you address whole pixels, not sub-pixels. With print it's quite meaningless to advertise a resolution across a page due to the various sizes (A4 / A5 / etc.) and margins, so instead they specify an area (typically an inch) and count the dots inside it (DPI). – Attie, Aug 19 at 10:34

    • @Attie No, it does have a bearing on dot size and/or pixel pitch (though not really aspect ratio). If you have a display of a given size, higher resolution means smaller pixel pitch, period (and thus higher detail and less aliasing). The reason resolution matters for actually displaying anything when dealing with anything higher level than drivers is pixel pitch and pixel count (which is a side effect of pixel pitch and display size), not addressing. Also, the use of dots-per-unit-area in print far predates computer displays. – Austin Hemmelgarn, Aug 19 at 12:55

    • I completely agree that "resolution", "pixel pitch" and "physical dimensions" are all intertwined (how could they not be)... but "resolution" by itself has no bearing on the other two. The fact that software is typically aware of a display's physical dimensions or PPI means that it's possible to display things at approximately real size. – Attie, Aug 19 at 13:54

    • As for DPI in print and pixel aspect ratio - until recently, pixels were not square! IIRC facsimile had pixels with a 2:1 aspect ratio, and NTSC was 10:11... Rendering an image with an incorrect PAR produces squashed/stretched looking images. Additionally, DPI in print usually needs to be so high because of the limited count of discernible colors - dithering coming to the rescue. – Attie, Aug 19 at 14:00


















    3 Answers
    3






    active

    oldest

    votes








    3 Answers
    3






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes








    up vote
    44
    down vote













    In the "old days", TVs were analog devices - a CRT scans an electron beam across the front of the display, from left to right. In this sense, people have tried to argue that analog TV has "infinite" horizontal resolution, but they have an exact vertical resolution - the picture is formed of multiple horizontal lines.



    Depending on where you are, this would have been either NTSC (525 lines) or PAL (625 lines).



    Modern resolutions are still referred to by their "line count" - hence 1080 is the vertical resolution.



    With such displays, the image was transmitted interlaced - i.e: the first field contains lines 1, 3, 5, etc... and the second field contains lines 2, 4, 6, etc...




    With the advent of digital TV, the display technology changes - we now have discrete pixels instead of lines. A 2D image made from an array of pixels - the array having specific dimensions in the horizontal and vertical axes.



    At this point, interlacing remains around only to reduce the bandwidth required for a video stream - in my opinion it's a horrible idea that is fundamentally incompatible with digital display systems.



    progressive vs interlaced



    As mentioned earlier, modern resolutions are still referred to by their "line count". But as shown above, we also have to identify whether we are referring to "interlaced" video (indicated by an i), or "progressive" video (indicated by a p).



    The frame rate can also be specified, for example:




    • 480i60 - 480 rows, interlaced, 60 Hz field rate (i.e: 30 Hz frame rate)


    • 1080p30 - 1080 rows, progressive, 30 Hz frame rate


    "... okay, but where did 480 come from?"



    The analog electronics involved in CRTs are imprecise, and a particular feature of early models was that as the set warmed up, or the capacitors and electronics aged, the image started to change shape. In addition to this, the electron beam has to be turned off and then redirected to the left of the screen for each line, and to the top for each new field/frame - this takes time, and is the reason for "blanking".



    To account for this somewhat, only a handful of the scanned lines were intended / expected to be displayed. NTSC scans 525 lines, but only 483 if these are intended for display - each field shows ~241.5 lines to a viewer (with an additional 21 lines of "blanking"). ~240 lines per field (remember, this is interlaced) equates to 480 lines per frame (in the progressive world). Thus 480. Yay.




    For the digital resolutions we follow a pattern... ish:




    • 480 * 1.5 = 720 - "HD Ready"


    • 720 * 1.5 = 1080 - "Full HD"


    • 1080 * 2 = 2160 - "4k" or "Ultra HD"

    So actually "4k" isn't following the * 1.5 pattern from before, and it's not really 4000 pixels in either dimension - it's 3840 × 2160.



    "4k" is actually "Four times the number of pixels as Full HD". 1920 * 2 = 3840 and 1080 * 2 = 2160. Or lay out four 1080p displays in a 2 × 2 grid - you get 3840 × 2160.



    Additionally, if we're using 1080p as a resolution description, then really "4k" should be called 2160p (which it is in the technical world).




    In summary, in the consumer / broadcast space:




    • 480 is because that's approximately the number of visible lines that NTSC would display


    • 720 is because that's 1.5× 480


    • 1080 is because that's 1.5× 720


    • 2160 is because that's 2× 1080

    • "4k" is because it's a marketing thing - it isn't a technical specification, and I don't believe that there are any stipulations around the use of it...


    Note: I've been taking about consumer / broadcast...



    Within Cinema there is DCI 2K (capital K, 2048 × 1080) and DCI 4K (capital K, 4096 × 2160), where the 'K' presumably refers to kibi and the horizontal resolution. "DCI 4K" predates consumer "4k".



    The aspect ratio of DCI 4K is not 16:9, but a slightly wider 256∶135... to bring the resolution back in line with 16:9, you can increase the vertical resolution, or decrease the horizontal... But I'm not entirely convinced that the cinema and broadcast standards evolved this closely.



    Cinema evolved from whole-frame positives (aka film) directly to digital, whereas TV evolved from a scanning electron beam (line by line) to digital. This is evident in both broadcast, and the way that VHS operates too.




    As a little extra, I've included the graphic below to expand on the "image changes shape" statement from above.



    The graphic (from here) indicates the various "television safe" areas... note the rounded corners...



    Importantly:



    • 5 is the "television scanned area"

    • 6 is the "television safe area for action"

      • Faces and important plot information should not fall outside this area


    • 7 is the "television safe area for titles"

      • No text should be outside this area to ensure that subtitles / news tickers / etc... don't overhang the edge of an old display


    display safe areas






    share|improve this answer


















    • 2




      480i30 would be a weirdly low frame rate; much more common is 480i60, 60 fields per second (can be combed into 30 frames per second). You can deinterlace it to 480p60 (doubling the vertical spatial resolution) or 480p30 (discarding half the temporal resolution). Agreed that interlacing is stupidly horrible now that progressive displays are nearly universal, and it always sucked to store digitally.
      – Peter Cordes
      Aug 18 at 19:20











    • @Peter thanks - I couldn't remember if the number was field rate or frame rate...
      – Attie
      Aug 18 at 20:10






    • 1




      The rate number is the temporal resolution. Number of times the display changes (or could change) per second. I think this sometimes gets fudged with progressive content stored as interlaced (evil, bad, and stupid), where both fields are actually from the same time, rather than evenly spaced times. (And of course there's 24p content hard-telecined to 60i, or worse telecined to 60i and then having VFX added at 60i or 30p, like the Star Trek DVDs before they remastered. Those DVDs switched from soft-TC 24p for most scenes to 60i for VFX shots, because the show was edited on video.)
      – Peter Cordes
      Aug 18 at 20:26







    • 2




      There are actually a few different resolutions that have been called "4K". The first usage however (according to Wikipedia) was a camera using 4096×2160. Someone thought 4096 was pretty close to 4000, and decided that saying their camera had a 4K resolution sounded cool. It's really just marketing.
      – Yay295
      Aug 19 at 3:47






    • 1




      It should also be noted that 480i or 480p use 720 horizontal pixels with a width to height aspect ratio of 8/9 (= .88888....), which creates an interpolation issue on digital monitors. For CRT displays, this isn't an issue because it just involves a change in the sweep rate (and effective thickness) of the beam used to light up the phosphors. CRT based HDTV's generally support 480i / 480p / 1080i as "native" resolutions, with a better image for 480i/480p than digital displays, but not as good as 1080p digital displays.
      – rcgldr
      Aug 19 at 21:17















    up vote
    44
    down vote













    In the "old days", TVs were analog devices - a CRT scans an electron beam across the front of the display, from left to right. In this sense, people have tried to argue that analog TV has "infinite" horizontal resolution, but they have an exact vertical resolution - the picture is formed of multiple horizontal lines.



    Depending on where you are, this would have been either NTSC (525 lines) or PAL (625 lines).



    Modern resolutions are still referred to by their "line count" - hence 1080 is the vertical resolution.



    With such displays, the image was transmitted interlaced - i.e: the first field contains lines 1, 3, 5, etc... and the second field contains lines 2, 4, 6, etc...




    With the advent of digital TV, the display technology changes - we now have discrete pixels instead of lines. A 2D image made from an array of pixels - the array having specific dimensions in the horizontal and vertical axes.



    At this point, interlacing remains around only to reduce the bandwidth required for a video stream - in my opinion it's a horrible idea that is fundamentally incompatible with digital display systems.



    progressive vs interlaced



    As mentioned earlier, modern resolutions are still referred to by their "line count". But as shown above, we also have to identify whether we are referring to "interlaced" video (indicated by an i), or "progressive" video (indicated by a p).



    The frame rate can also be specified, for example:




    • 480i60 - 480 rows, interlaced, 60 Hz field rate (i.e: 30 Hz frame rate)


    • 1080p30 - 1080 rows, progressive, 30 Hz frame rate


    "... okay, but where did 480 come from?"



    The analog electronics involved in CRTs are imprecise, and a particular feature of early models was that as the set warmed up, or the capacitors and electronics aged, the image started to change shape. In addition to this, the electron beam has to be turned off and then redirected to the left of the screen for each line, and to the top for each new field/frame - this takes time, and is the reason for "blanking".



    To account for this somewhat, only a handful of the scanned lines were intended / expected to be displayed. NTSC scans 525 lines, but only 483 if these are intended for display - each field shows ~241.5 lines to a viewer (with an additional 21 lines of "blanking"). ~240 lines per field (remember, this is interlaced) equates to 480 lines per frame (in the progressive world). Thus 480. Yay.




    For the digital resolutions we follow a pattern... ish:




    • 480 * 1.5 = 720 - "HD Ready"


    • 720 * 1.5 = 1080 - "Full HD"


    • 1080 * 2 = 2160 - "4k" or "Ultra HD"

    So actually "4k" isn't following the * 1.5 pattern from before, and it's not really 4000 pixels in either dimension - it's 3840 × 2160.



    "4k" is actually "Four times the number of pixels as Full HD". 1920 * 2 = 3840 and 1080 * 2 = 2160. Or lay out four 1080p displays in a 2 × 2 grid - you get 3840 × 2160.



    Additionally, if we're using 1080p as a resolution description, then really "4k" should be called 2160p (which it is in the technical world).




    In summary, in the consumer / broadcast space:




    • 480 is because that's approximately the number of visible lines that NTSC would display


    • 720 is because that's 1.5× 480


    • 1080 is because that's 1.5× 720


    • 2160 is because that's 2× 1080

    • "4k" is because it's a marketing thing - it isn't a technical specification, and I don't believe that there are any stipulations around the use of it...


    Note: I've been taking about consumer / broadcast...



    Within Cinema there is DCI 2K (capital K, 2048 × 1080) and DCI 4K (capital K, 4096 × 2160), where the 'K' presumably refers to kibi and the horizontal resolution. "DCI 4K" predates consumer "4k".



    The aspect ratio of DCI 4K is not 16:9, but a slightly wider 256∶135... to bring the resolution back in line with 16:9, you can increase the vertical resolution, or decrease the horizontal... But I'm not entirely convinced that the cinema and broadcast standards evolved this closely.



    Cinema evolved from whole-frame positives (aka film) directly to digital, whereas TV evolved from a scanning electron beam (line by line) to digital. This is evident in both broadcast, and the way that VHS operates too.




    As a little extra, I've included the graphic below to expand on the "image changes shape" statement from above.



    The graphic (from here) indicates the various "television safe" areas... note the rounded corners...



    Importantly:



    • 5 is the "television scanned area"

    • 6 is the "television safe area for action"

      • Faces and important plot information should not fall outside this area


    • 7 is the "television safe area for titles"

      • No text should be outside this area to ensure that subtitles / news tickers / etc... don't overhang the edge of an old display


    display safe areas






    share|improve this answer


















    • 2




      480i30 would be a weirdly low frame rate; much more common is 480i60, 60 fields per second (can be combed into 30 frames per second). You can deinterlace it to 480p60 (doubling the vertical spatial resolution) or 480p30 (discarding half the temporal resolution). Agreed that interlacing is stupidly horrible now that progressive displays are nearly universal, and it always sucked to store digitally.
      – Peter Cordes
      Aug 18 at 19:20











    • @Peter thanks - I couldn't remember if the number was field rate or frame rate...
      – Attie
      Aug 18 at 20:10






    • 1




      The rate number is the temporal resolution. Number of times the display changes (or could change) per second. I think this sometimes gets fudged with progressive content stored as interlaced (evil, bad, and stupid), where both fields are actually from the same time, rather than evenly spaced times. (And of course there's 24p content hard-telecined to 60i, or worse telecined to 60i and then having VFX added at 60i or 30p, like the Star Trek DVDs before they remastered. Those DVDs switched from soft-TC 24p for most scenes to 60i for VFX shots, because the show was edited on video.)
      – Peter Cordes
      Aug 18 at 20:26







    • 2




      There are actually a few different resolutions that have been called "4K". The first usage however (according to Wikipedia) was a camera using 4096×2160. Someone thought 4096 was pretty close to 4000, and decided that saying their camera had a 4K resolution sounded cool. It's really just marketing.
      – Yay295
      Aug 19 at 3:47






    • 1




      It should also be noted that 480i or 480p use 720 horizontal pixels with a width to height aspect ratio of 8/9 (= .88888....), which creates an interpolation issue on digital monitors. For CRT displays, this isn't an issue because it just involves a change in the sweep rate (and effective thickness) of the beam used to light up the phosphors. CRT based HDTV's generally support 480i / 480p / 1080i as "native" resolutions, with a better image for 480i/480p than digital displays, but not as good as 1080p digital displays.
      – rcgldr
      Aug 19 at 21:17













    up vote
    44
    down vote










    up vote
    44
    down vote









    In the "old days", TVs were analog devices - a CRT scans an electron beam across the front of the display, from left to right. In this sense, people have tried to argue that analog TV has "infinite" horizontal resolution, but they have an exact vertical resolution - the picture is formed of multiple horizontal lines.



    Depending on where you are, this would have been either NTSC (525 lines) or PAL (625 lines).



    Modern resolutions are still referred to by their "line count" - hence 1080 is the vertical resolution.



    With such displays, the image was transmitted interlaced - i.e: the first field contains lines 1, 3, 5, etc... and the second field contains lines 2, 4, 6, etc...




    With the advent of digital TV, the display technology changes - we now have discrete pixels instead of lines. A 2D image made from an array of pixels - the array having specific dimensions in the horizontal and vertical axes.



    At this point, interlacing remains around only to reduce the bandwidth required for a video stream - in my opinion it's a horrible idea that is fundamentally incompatible with digital display systems.



    progressive vs interlaced



    As mentioned earlier, modern resolutions are still referred to by their "line count". But as shown above, we also have to identify whether we are referring to "interlaced" video (indicated by an i), or "progressive" video (indicated by a p).



    The frame rate can also be specified, for example:




    • 480i60 - 480 rows, interlaced, 60 Hz field rate (i.e: 30 Hz frame rate)


    • 1080p30 - 1080 rows, progressive, 30 Hz frame rate


    "... okay, but where did 480 come from?"



    The analog electronics involved in CRTs are imprecise, and a particular feature of early models was that as the set warmed up, or the capacitors and electronics aged, the image started to change shape. In addition to this, the electron beam has to be turned off and then redirected to the left of the screen for each line, and to the top for each new field/frame - this takes time, and is the reason for "blanking".



    To account for this somewhat, only a handful of the scanned lines were intended / expected to be displayed. NTSC scans 525 lines, but only 483 if these are intended for display - each field shows ~241.5 lines to a viewer (with an additional 21 lines of "blanking"). ~240 lines per field (remember, this is interlaced) equates to 480 lines per frame (in the progressive world). Thus 480. Yay.




    For the digital resolutions we follow a pattern... ish:




    • 480 * 1.5 = 720 - "HD Ready"


    • 720 * 1.5 = 1080 - "Full HD"


    • 1080 * 2 = 2160 - "4k" or "Ultra HD"

    So actually "4k" isn't following the * 1.5 pattern from before, and it's not really 4000 pixels in either dimension - it's 3840 × 2160.



    "4k" is actually "Four times the number of pixels as Full HD". 1920 * 2 = 3840 and 1080 * 2 = 2160. Or lay out four 1080p displays in a 2 × 2 grid - you get 3840 × 2160.



    Additionally, if we're using 1080p as a resolution description, then really "4k" should be called 2160p (which it is in the technical world).




    In summary, in the consumer / broadcast space:




    • 480 is because that's approximately the number of visible lines that NTSC would display


    • 720 is because that's 1.5× 480


    • 1080 is because that's 1.5× 720


    • 2160 is because that's 2× 1080

    • "4k" is because it's a marketing thing - it isn't a technical specification, and I don't believe that there are any stipulations around the use of it...


    Note: I've been taking about consumer / broadcast...



    Within Cinema there is DCI 2K (capital K, 2048 × 1080) and DCI 4K (capital K, 4096 × 2160), where the 'K' presumably refers to kibi and the horizontal resolution. "DCI 4K" predates consumer "4k".



    The aspect ratio of DCI 4K is not 16:9, but a slightly wider 256∶135... to bring the resolution back in line with 16:9, you can increase the vertical resolution, or decrease the horizontal... But I'm not entirely convinced that the cinema and broadcast standards evolved this closely.



    Cinema evolved from whole-frame positives (aka film) directly to digital, whereas TV evolved from a scanning electron beam (line by line) to digital. This is evident in both broadcast, and the way that VHS operates too.




    As a little extra, I've included the graphic below to expand on the "image changes shape" statement from above.



    The graphic (from here) indicates the various "television safe" areas... note the rounded corners...



    Importantly:



    • 5 is the "television scanned area"

    • 6 is the "television safe area for action"

      • Faces and important plot information should not fall outside this area


    • 7 is the "television safe area for titles"

      • No text should be outside this area to ensure that subtitles / news tickers / etc... don't overhang the edge of an old display


    display safe areas






    share|improve this answer














    In the "old days", TVs were analog devices - a CRT scans an electron beam across the front of the display, from left to right. In this sense, people have tried to argue that analog TV has "infinite" horizontal resolution, but they have an exact vertical resolution - the picture is formed of multiple horizontal lines.



    Depending on where you are, this would have been either NTSC (525 lines) or PAL (625 lines).



    Modern resolutions are still referred to by their "line count" - hence 1080 is the vertical resolution.



    With such displays, the image was transmitted interlaced - i.e: the first field contains lines 1, 3, 5, etc... and the second field contains lines 2, 4, 6, etc...




    With the advent of digital TV, the display technology changes - we now have discrete pixels instead of lines. A 2D image made from an array of pixels - the array having specific dimensions in the horizontal and vertical axes.



    At this point, interlacing remains around only to reduce the bandwidth required for a video stream - in my opinion it's a horrible idea that is fundamentally incompatible with digital display systems.



    progressive vs interlaced



    As mentioned earlier, modern resolutions are still referred to by their "line count". But as shown above, we also have to identify whether we are referring to "interlaced" video (indicated by an i), or "progressive" video (indicated by a p).



    The frame rate can also be specified, for example:




    • 480i60 - 480 rows, interlaced, 60 Hz field rate (i.e: 30 Hz frame rate)


    • 1080p30 - 1080 rows, progressive, 30 Hz frame rate


    "... okay, but where did 480 come from?"



    The analog electronics involved in CRTs are imprecise, and a particular feature of early models was that as the set warmed up, or the capacitors and electronics aged, the image started to change shape. In addition to this, the electron beam has to be turned off and then redirected to the left of the screen for each line, and to the top for each new field/frame - this takes time, and is the reason for "blanking".



    To account for this somewhat, only a handful of the scanned lines were intended / expected to be displayed. NTSC scans 525 lines, but only 483 if these are intended for display - each field shows ~241.5 lines to a viewer (with an additional 21 lines of "blanking"). ~240 lines per field (remember, this is interlaced) equates to 480 lines per frame (in the progressive world). Thus 480. Yay.




    For the digital resolutions we follow a pattern... ish:




    • 480 * 1.5 = 720 - "HD Ready"


    • 720 * 1.5 = 1080 - "Full HD"


    • 1080 * 2 = 2160 - "4k" or "Ultra HD"

    So actually "4k" isn't following the * 1.5 pattern from before, and it's not really 4000 pixels in either dimension - it's 3840 × 2160.



    "4k" is actually "Four times the number of pixels as Full HD". 1920 * 2 = 3840 and 1080 * 2 = 2160. Or lay out four 1080p displays in a 2 × 2 grid - you get 3840 × 2160.



    Additionally, if we're using 1080p as a resolution description, then really "4k" should be called 2160p (which it is in the technical world).




    In summary, in the consumer / broadcast space:




    • 480 is because that's approximately the number of visible lines that NTSC would display


    • 720 is because that's 1.5× 480


    • 1080 is because that's 1.5× 720


    • 2160 is because that's 2× 1080

    • "4k" is because it's a marketing thing - it isn't a technical specification, and I don't believe that there are any stipulations around the use of it...


    Note: I've been taking about consumer / broadcast...



    Within Cinema there is DCI 2K (capital K, 2048 × 1080) and DCI 4K (capital K, 4096 × 2160), where the 'K' presumably refers to kibi and the horizontal resolution. "DCI 4K" predates consumer "4k".



    The aspect ratio of DCI 4K is not 16:9, but a slightly wider 256∶135... to bring the resolution back in line with 16:9, you can increase the vertical resolution, or decrease the horizontal... But I'm not entirely convinced that the cinema and broadcast standards evolved this closely.



    Cinema evolved from whole-frame positives (aka film) directly to digital, whereas TV evolved from a scanning electron beam (line by line) to digital. This is evident in both broadcast, and the way that VHS operates too.




    As a little extra, I've included the graphic below to expand on the "image changes shape" statement from above.



    The graphic (from here) indicates the various "television safe" areas... note the rounded corners...



    Importantly:



    • 5 is the "television scanned area"

    • 6 is the "television safe area for action"

      • Faces and important plot information should not fall outside this area


    • 7 is the "television safe area for titles"

      • No text should be outside this area to ensure that subtitles / news tickers / etc... don't overhang the edge of an old display


    display safe areas







    share|improve this answer














    share|improve this answer



    share|improve this answer








    edited Aug 20 at 10:03

























    answered Aug 18 at 17:20









    Attie

    8,50231934




    8,50231934







    • 2




      480i30 would be a weirdly low frame rate; much more common is 480i60, 60 fields per second (can be combed into 30 frames per second). You can deinterlace it to 480p60 (doubling the vertical spatial resolution) or 480p30 (discarding half the temporal resolution). Agreed that interlacing is stupidly horrible now that progressive displays are nearly universal, and it always sucked to store digitally.
      – Peter Cordes
      Aug 18 at 19:20











    • @Peter thanks - I couldn't remember if the number was field rate or frame rate...
      – Attie
      Aug 18 at 20:10






    • 1




      The rate number is the temporal resolution. Number of times the display changes (or could change) per second. I think this sometimes gets fudged with progressive content stored as interlaced (evil, bad, and stupid), where both fields are actually from the same time, rather than evenly spaced times. (And of course there's 24p content hard-telecined to 60i, or worse telecined to 60i and then having VFX added at 60i or 30p, like the Star Trek DVDs before they remastered. Those DVDs switched from soft-TC 24p for most scenes to 60i for VFX shots, because the show was edited on video.)
      – Peter Cordes
      Aug 18 at 20:26







    • 2




      There are actually a few different resolutions that have been called "4K". The first usage however (according to Wikipedia) was a camera using 4096×2160. Someone thought 4096 was pretty close to 4000, and decided that saying their camera had a 4K resolution sounded cool. It's really just marketing.
      – Yay295
      Aug 19 at 3:47






    • 1




      It should also be noted that 480i or 480p use 720 horizontal pixels with a width to height aspect ratio of 8/9 (= .88888....), which creates an interpolation issue on digital monitors. For CRT displays, this isn't an issue because it just involves a change in the sweep rate (and effective thickness) of the beam used to light up the phosphors. CRT based HDTV's generally support 480i / 480p / 1080i as "native" resolutions, with a better image for 480i/480p than digital displays, but not as good as 1080p digital displays.
      – rcgldr
      Aug 19 at 21:17













    • 2




      480i30 would be a weirdly low frame rate; much more common is 480i60, 60 fields per second (can be combed into 30 frames per second). You can deinterlace it to 480p60 (doubling the vertical spatial resolution) or 480p30 (discarding half the temporal resolution). Agreed that interlacing is stupidly horrible now that progressive displays are nearly universal, and it always sucked to store digitally.
      – Peter Cordes
      Aug 18 at 19:20











    • @Peter thanks - I couldn't remember if the number was field rate or frame rate...
      – Attie
      Aug 18 at 20:10






    • 1




      The rate number is the temporal resolution. Number of times the display changes (or could change) per second. I think this sometimes gets fudged with progressive content stored as interlaced (evil, bad, and stupid), where both fields are actually from the same time, rather than evenly spaced times. (And of course there's 24p content hard-telecined to 60i, or worse telecined to 60i and then having VFX added at 60i or 30p, like the Star Trek DVDs before they remastered. Those DVDs switched from soft-TC 24p for most scenes to 60i for VFX shots, because the show was edited on video.)
      – Peter Cordes
      Aug 18 at 20:26







    • 2




      There are actually a few different resolutions that have been called "4K". The first usage however (according to Wikipedia) was a camera using 4096×2160. Someone thought 4096 was pretty close to 4000, and decided that saying their camera had a 4K resolution sounded cool. It's really just marketing.
      – Yay295
      Aug 19 at 3:47






    • 1




      It should also be noted that 480i or 480p use 720 horizontal pixels with a width to height aspect ratio of 8/9 (= .88888....), which creates an interpolation issue on digital monitors. For CRT displays, this isn't an issue because it just involves a change in the sweep rate (and effective thickness) of the beam used to light up the phosphors. CRT based HDTV's generally support 480i / 480p / 1080i as "native" resolutions, with a better image for 480i/480p than digital displays, but not as good as 1080p digital displays.
      – rcgldr
      Aug 19 at 21:17








    2




    2




    480i30 would be a weirdly low frame rate; much more common is 480i60, 60 fields per second (can be combed into 30 frames per second). You can deinterlace it to 480p60 (doubling the vertical spatial resolution) or 480p30 (discarding half the temporal resolution). Agreed that interlacing is stupidly horrible now that progressive displays are nearly universal, and it always sucked to store digitally.
    – Peter Cordes
    Aug 18 at 19:20





    480i30 would be a weirdly low frame rate; much more common is 480i60, 60 fields per second (can be combed into 30 frames per second). You can deinterlace it to 480p60 (doubling the vertical spatial resolution) or 480p30 (discarding half the temporal resolution). Agreed that interlacing is stupidly horrible now that progressive displays are nearly universal, and it always sucked to store digitally.
    – Peter Cordes
    Aug 18 at 19:20













    @Peter thanks - I couldn't remember if the number was field rate or frame rate...
    – Attie
    Aug 18 at 20:10




    @Peter thanks - I couldn't remember if the number was field rate or frame rate...
    – Attie
    Aug 18 at 20:10




    1




    1




    The rate number is the temporal resolution. Number of times the display changes (or could change) per second. I think this sometimes gets fudged with progressive content stored as interlaced (evil, bad, and stupid), where both fields are actually from the same time, rather than evenly spaced times. (And of course there's 24p content hard-telecined to 60i, or worse telecined to 60i and then having VFX added at 60i or 30p, like the Star Trek DVDs before they remastered. Those DVDs switched from soft-TC 24p for most scenes to 60i for VFX shots, because the show was edited on video.)
    – Peter Cordes
    Aug 18 at 20:26





    The rate number is the temporal resolution. Number of times the display changes (or could change) per second. I think this sometimes gets fudged with progressive content stored as interlaced (evil, bad, and stupid), where both fields are actually from the same time, rather than evenly spaced times. (And of course there's 24p content hard-telecined to 60i, or worse telecined to 60i and then having VFX added at 60i or 30p, like the Star Trek DVDs before they remastered. Those DVDs switched from soft-TC 24p for most scenes to 60i for VFX shots, because the show was edited on video.)
    – Peter Cordes
    Aug 18 at 20:26





    2




    2




    There are actually a few different resolutions that have been called "4K". The first usage however (according to Wikipedia) was a camera using 4096×2160. Someone thought 4096 was pretty close to 4000, and decided that saying their camera had a 4K resolution sounded cool. It's really just marketing.
    – Yay295
    Aug 19 at 3:47




    There are actually a few different resolutions that have been called "4K". The first usage however (according to Wikipedia) was a camera using 4096×2160. Someone thought 4096 was pretty close to 4000, and decided that saying their camera had a 4K resolution sounded cool. It's really just marketing.
    – Yay295
    Aug 19 at 3:47




    1




    1




    It should also be noted that 480i or 480p use 720 horizontal pixels with a width to height aspect ratio of 8/9 (= .88888....), which creates an interpolation issue on digital monitors. For CRT displays, this isn't an issue because it just involves a change in the sweep rate (and effective thickness) of the beam used to light up the phosphors. CRT based HDTV's generally support 480i / 480p / 1080i as "native" resolutions, with a better image for 480i/480p than digital displays, but not as good as 1080p digital displays.
    – rcgldr
    Aug 19 at 21:17































    Attie has explained how the vertical pixel counts were derived for traditional resolutions such as SD (480p), HD (720p) and Full HD (1080p). I would like to discuss K measurements, which originate from the cinema industry. A "K" is roughly 1000 horizontal pixels, and 2K and 4K were originally casual terms for resolutions of approximately 2000 and 4000 horizontal pixels.



    Different K measurements were later standardised for digital cinema (2005) and consumer television (2007).



    Digital Cinema Initiatives has specified the DCI 4K and 2K resolution standards (4096 × 2160 and 2048 × 1080). These are the full-frame resolutions for digital cinema projectors; many films are displayed in a cropped format such as the flat crop (3996 × 2160, 1.85∶1 aspect ratio) or the CinemaScope crop (4096 × 1716, ≈2.39∶1 aspect ratio).



    The 4K standard for consumer television (UHDTV1) is more accurately labelled Ultra HD or Quad Full HD: it is equivalent to four Full HD frames arranged in a 2 × 2 grid, i.e. 3840 × 2160 with a ≈1.78∶1 (16∶9) aspect ratio.



    The Wikipedia article on 4K resolution provides a helpful visualisation which compares different frame sizes.



    Comparison of video frame sizes
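Since the linked comparison image isn't reproduced here, the sketch below (mine, not the answerer's) tabulates the frame sizes mentioned above and checks that UHD is exactly a 2 × 2 grid of Full HD frames:

from math import gcd

formats = {
    "Full HD":              (1920, 1080),
    "UHD ('consumer 4K')":  (3840, 2160),
    "DCI 2K":               (2048, 1080),
    "DCI 4K":               (4096, 2160),
    "DCI 4K flat crop":     (3996, 2160),
    "DCI 4K 'scope crop":   (4096, 1716),
}

for name, (w, h) in formats.items():
    g = gcd(w, h)
    print(f"{name:22s} {w} x {h}   {w // g}:{h // g} (~{w / h:.2f}:1)   {w * h / 1e6:.1f} Mpx")

# UHD is four Full HD frames in a 2 x 2 grid:
assert (3840, 2160) == (2 * 1920, 2 * 1080)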






        answered Aug 20 at 2:58









        Wes Toleman


































            Your assessment that resolution is pixel pitch is partly correct. In fact, for everything except digital displays that is essentially the correct definition of resolution (more precisely, it's the smallest detail that can be resolved accurately, which functionally translates to the size of the dots making up an image, and from that to the pixel pitch). If you ever talk to somebody in the printing industry, this is usually what they mean by 'resolution'.



            The reason that the term has a different meaning for computer displays is largely historical, but is actually pretty easily explained without getting into much history.



            On a raster display [1], you have what's known as a 'native' resolution. This is the highest number of pixels you can have on each side of the screen, and is usually what is used to describe the screen. For example, a Full HD display has a native resolution of 1920 pixels horizontally and 1080 vertically, so a Full HD display with a roughly 18 inch diagonal has a resolution (in the classical sense) of about 120 dots per inch (DPI) (which is actually pretty low compared to print media).



            However, you can run most displays at lower than their native resolution, which is functionally equivalent to having a larger pixel pitch on the same display. Taking that same 18 inch Full HD display and running it at 1280x720 gives you the equivalent of 80 DPI on the same screen.



            Now, the OS and application software (usually) don't care about the exact physical dimensions of the screen, because that information is not really useful for displaying information unless you need to show something at 'real size'. Given this, the OS just assumes the size of the display never changes, which means the different pixel counts become functionally equivalent to different resolutions in the classical sense.




            [1] A display that uses rows and columns of individual points that can be turned on and off to produce an image. Almost all modern computer displays are raster displays, because they're far easier to make. Compare this to a vector display, which draws lines directly (one of the most recognizable examples being the displays used in the original Asteroids arcade cabinets).
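The DPI figures in this answer can be reproduced with a short sketch; the helper functions are my own naming, not anything from the answer:

from math import hypot

def pixels_per_inch(width_px, height_px, diagonal_in):
    """Pixel density: diagonal pixel count divided by the diagonal size in inches."""
    return hypot(width_px, height_px) / diagonal_in

def pixel_pitch_mm(width_px, height_px, diagonal_in):
    """Approximate distance between adjacent pixel centres, in millimetres."""
    return 25.4 / pixels_per_inch(width_px, height_px, diagonal_in)

# The answer's ~18-inch Full HD example, at native resolution and at a lower mode.
print(round(pixels_per_inch(1920, 1080, 18)))    # ~122  ("about 120 DPI")
print(round(pixels_per_inch(1280, 720, 18)))     # ~82   ("about 80 DPI")
print(round(pixel_pitch_mm(1920, 1080, 18), 2))  # ~0.21 mm native pixel pitch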





            answered Aug 18 at 21:12
            Austin Hemmelgarn




















              "Resolution" doesn't have any bearing on dot size, pitch or aspect ratio as such - it refers to the smallest addressable "thing" - DAC resolution of 16-bits, screen resolution, etc... interestingly with RGB displays, you quite whole pixels, not sub pixels. With print it's quite meaningless to advertise a resolution across a page due to various sizes (A4 / A5 / etc...), margins, etc... so instead they specify the area (typically an Inch) and count the dots inside it (DPI)
              – Attie
              Aug 19 at 10:34










            • @Attie No, it does have bearing on dot size and/or pixel pitch (though not really aspect ratio). If you have a display of a given size, higher resolution means smaller pixel pitch, period (and thus higher detail and less aliasing). The reason resolution matters for actually displaying anything when dealing with anything higher level than drivers is pixel pitch and pixel count (which is a side effect of pixel pitch and display size), not addressing. Also, the use of dots-per-unit-area in print far predates computer displays.
              – Austin Hemmelgarn
              Aug 19 at 12:55










            • I completely agree that "resolution", "pixel pitch" and "physical dimensions" are all intertwined (how could they not be)... but "resolution" by itself has no bearing on the other two. The fact that software is typically aware of a display's physical dimensions or PPI means that it's possible to display things at approximately real size.
              – Attie
              Aug 19 at 13:54










            • As for DPI in print and pixel aspect ratio - until recently pixels were not square! IIRC facsimile had pixels with a 2:1 aspect ratio, and NTSC was 10:11... rendering the image with an incorrect PAR produces squashed/stretched looking images. Additionally, DPI in print usually needs to be so high because of the limited count of discernible colors - dithering coming to the rescue.
              – Attie
              Aug 19 at 14:00














