Was 1991's Hellcats the first instance of incremental screen updates?

In case you have never seen it, 1991's Hellcats was a seminal release on the Mac. It ran at full 8-bit color and could, on a newish machine, drive three 1024x768 screens at the same time. Nothing on the Mac or PC came remotely close.



I recall when I learned how the magic worked... I got up from the game for a moment and returned to find the "screen saver" (ahh, the old days) running. I flicked the mouse to clear it and unpause the game, and noticed that the only part of the screen that updated was the horizon. The game reached its high speeds by carefully designing the display so that only certain portions would change as the aircraft moved, and then updating only those portions.



Now, as you can imagine, in 1991 this was no trivial task on its own. Graphics cards were generally plain frame buffers driven by the CPU; actual accelerators were just coming to market. So I was always curious how he managed this trick.



Long after, I had an email exchange with the author and he explained it. He set aside two screen buffers in memory and drew into them on alternate frames. He then compared the two and sent only the diffs over the bus to the card. Because the CPU was much faster than the bus, this provided greatly improved performance.
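
To make that concrete, here is roughly how I picture the comparison loop (my own sketch in C, not his code; the buffer layout, the byte-run granularity and the copy_to_vram() helper are all assumptions):

    /* Hypothetical reconstruction: render into back_frame[], compare against
     * prev_frame[] (a copy of what the card already shows), and push only the
     * changed runs of bytes across the bus. */
    #include <stdint.h>
    #include <string.h>

    #define SCREEN_BYTES (1024 * 768)          /* 8-bit colour, one byte per pixel */

    static uint8_t prev_frame[SCREEN_BYTES];   /* what the video card currently holds */
    static uint8_t back_frame[SCREEN_BYTES];   /* the frame the CPU just rendered */

    /* Assumed helper: copy 'len' bytes into video memory at 'offset'. */
    extern void copy_to_vram(size_t offset, const uint8_t *src, size_t len);

    void flush_diffs(void)
    {
        size_t i = 0;
        while (i < SCREEN_BYTES) {
            if (back_frame[i] == prev_frame[i]) {   /* unchanged byte: no bus traffic */
                i++;
                continue;
            }
            size_t start = i;                       /* start of a changed run */
            while (i < SCREEN_BYTES && back_frame[i] != prev_frame[i])
                i++;
            copy_to_vram(start, &back_frame[start], i - start);
            memcpy(&prev_frame[start], &back_frame[start], i - start);
        }
    }

The real thing presumably compared whole longwords in tight 68K code rather than byte-at-a-time C, but the shape of the idea is the same.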



He also noted that the idea came to him on the SPARC1, a platform with the general problem of a fast CPU sitting behind a slowish bus. His first app spun a cube at 60 fps. He ended up doing Hellcats on the Mac because it shared this limitation, whereas PCs of the era generally used 320x240 for color, where the advantage of this technique was washed out.



So, the question...



I suspect that this idea of incremental updating may pre-date Mr. Parker's 1989-ish Sun version. Does anyone know of examples of this being used earlier?

history graphics apple-macintosh

asked 22 hours ago by Maury Markowitz, edited 10 mins ago by curiousdannii

  • 2  Just to be clear: you're asking about incremental updates of the form "calculate the whole display, diff, and submit only the differences"? So e.g. the fact that Space Invaders updates each column of aliens in turn is a completely different thing because there's no comparison step. I guess the 'perform a diff' bit is the focus of the question? (Also: I'd always assumed the Parsoft titles were diffing an s-buffer, not a full-on frame buffer; did the author give any indication as to in-memory representation of the frames?)
    – Tommy
    20 hours ago


5 Answers

The idea is as old as memory-mapped display hardware itself. After all, memory bandwidth was for most of that time the limiting factor. Every character-based text screen only updates what needs to be changed, and likewise each and every game - maybe with the exception of the Atari VCS 'Racing the Beam' :)



The same goes for double buffering. As soon as a machine supports multiple pages, it's the best (or even only) way to generate a flicker-free and complete update when the game logic runs asynchronously to the screen refresh.



Double buffering of complete renderings and transferring only the changed parts into a display buffer makes sense only in an environment where the CPU (and local CPU-accessible memory) is way faster than access to screen memory, as each transfer needs at least two local memory access cycles (for the comparison) instead of the single access of a simple transfer.
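
To put rough numbers on it (my own illustrative figures): an 8-bit 1024x768 frame is 768 KiB. If, say, only 10% of the pixels change from one frame to the next, the diff approach moves roughly 77 KiB across the slow bus instead of 768 KiB, at the price of about 1.5 million fast local reads for the comparison - a good trade exactly when local memory is much faster than the bus, and a bad one when it isn't.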



And it had already been done on mainframes in the late 1970s (*1) for text screens. Applications are usually form-based, so even when accessing or handling different data sets a good part of the screen stays the same; many screen handlers therefore offered modes to transfer only the data section by default. Similarly, terminals offered modes to return only changeable data.



For one rather large application we went even further by keeping a complete image of the terminal screen in main memory and using a terminal mode that returned only changed field content. Similarly, only data that had changed on the host side since the last input was transferred toward the terminal. On average this reduced an input transmission to less than 100 bytes, while average output was less than 500 bytes instead of more than 3 KiB for a whole 80x25 screen (including format control).
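
In outline, the host-side bookkeeping amounts to something like this sketch (an illustration only - the real system worked on formatted fields rather than individual cells, and send_to_terminal() stands in for whatever the real datastream looked like):

    /* Keep a shadow copy of what the terminal is believed to display; after
     * building the new screen image, transmit only the cells that differ. */
    #define ROWS 25
    #define COLS 80

    static char shown[ROWS][COLS];    /* what the terminal currently shows */
    static char wanted[ROWS][COLS];   /* what it should show next */

    /* Assumed helper: position the cursor and send one character. */
    extern void send_to_terminal(int row, int col, char ch);

    void update_terminal(void)
    {
        for (int r = 0; r < ROWS; r++)
            for (int c = 0; c < COLS; c++)
                if (wanted[r][c] != shown[r][c]) {
                    send_to_terminal(r, c, wanted[r][c]);
                    shown[r][c] = wanted[r][c];   /* remember what was sent */
                }
    }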



The effect was quite notable - even on 240 kBit in-house hookups. In addition, the terminal didn't clear the screen when updating but updated in place, giving an even greater impression of quick response :))



Bottom line: "Im Westen nichts Neues" ("nothing new in the West").




*1 - Personal experience; I bet others had already done that before.






answered 21 hours ago by Raffzahn (edited 3 hours ago)

  • 2  I concur. I wrote a few "full screen text" programs in the mid 1970s that only sent the differences to the asynch terminal (my critical test case was 300 bps dialup), determined by keeping 24x80 buffers and comparing them. I did not feel like I was inventing anything new.
    – dave
    15 hours ago


Basically all graphical user environments (MS Windows [1985], X11 [1984], Atari GEM [1984]) send a running application a list of screen regions that need to be re-drawn when one of that application's windows is un-obscured (this list of regions is typically attached to some sort of WM_PAINT message).



It is up to the application whether it goes the hard way and updates only the parts of its windows that actually need an update, or whether it takes the easy route, collapses all these regions, and redraws its whole window area.
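
As a sketch of the 'hard way' under Windows (this is just the standard BeginPaint pattern, nothing specific to any particular application; the FillRect call is a placeholder for real drawing):

    /* Minimal Win32 WM_PAINT handler: BeginPaint() reports the invalid region
     * in ps.rcPaint, and only that rectangle is repainted. */
    #include <windows.h>

    LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_PAINT) {
            PAINTSTRUCT ps;
            HDC hdc = BeginPaint(hWnd, &ps);              /* ps.rcPaint = area needing update */
            FillRect(hdc, &ps.rcPaint, (HBRUSH)(COLOR_WINDOW + 1));
            /* ...redraw only whatever intersects ps.rcPaint... */
            EndPaint(hWnd, &ps);
            return 0;
        }
        return DefWindowProc(hWnd, msg, wParam, lParam);
    }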



So this standard technique is built into practically all the WIMP environments that showed up in the early 1980s.



Outside WIMPs, double buffering has obviously always been a standard technique for games, especially on computers that didn't have hardware sprites (move the sprite-sized patch of background to an offscreen buffer, display the software sprite, move the background back from the offscreen buffer). Such offscreen buffers could be as large as a screen and be switched to the display via a single register transfer (on video circuitry that can display more than one framebuffer, for example the Atari ST), or much smaller on computers with a fixed framebuffer address like the ZX Spectrum.
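
For the software-sprite case, the save/draw/restore dance looks roughly like this (the sizes, the one-byte-per-pixel layout and the framebuffer symbol are illustrative assumptions rather than any particular machine):

    /* Software sprite on a fixed framebuffer: save the background under the
     * sprite into an offscreen buffer, draw the sprite, and later restore it. */
    #include <stdint.h>

    #define SCREEN_W 320
    #define SPR_W    16
    #define SPR_H    16

    extern uint8_t framebuffer[];                 /* one byte per pixel, SCREEN_W wide */
    static uint8_t saved_under[SPR_W * SPR_H];    /* the offscreen save buffer */

    void draw_sprite(int x, int y, const uint8_t *sprite)
    {
        for (int row = 0; row < SPR_H; row++)
            for (int col = 0; col < SPR_W; col++) {
                uint8_t *dst = &framebuffer[(y + row) * SCREEN_W + (x + col)];
                saved_under[row * SPR_W + col] = *dst;    /* save background pixel */
                *dst = sprite[row * SPR_W + col];         /* draw sprite pixel */
            }
    }

    void erase_sprite(int x, int y)
    {
        for (int row = 0; row < SPR_H; row++)
            for (int col = 0; col < SPR_W; col++)
                framebuffer[(y + row) * SCREEN_W + (x + col)] =
                    saved_under[row * SPR_W + col];       /* put background back */
    }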






answered 22 hours ago by tofro (edited 22 hours ago)

  • WM_PAINT doesn't require double buffering. An application may either create a memory device context (MemDC), write to that, then copy the contents to the main DC, or the application can bypass the memory DC and write directly to the main DC.
    – traal
    17 hours ago

  • @traal I didn't say that. A MemDC is double buffering in my book. You can, of course, choose not to make use of what the GUI offers you and redraw the whole screen from scratch with every WM_PAINT.
    – tofro
    16 hours ago

  • 1  You have to explicitly create a MemDC if you want to double buffer. With or without the MemDC, there's no need to redraw the whole window: just paint the background of the region specified by the WM_PAINT message, then iterate through your controls, check which ones coincide with the region that needs to be repainted (in MFC, use CRect::IntersectRect()), and repaint just those controls. This may or may not be faster than double buffering, but for sure it uses less memory.
    – traal
    16 hours ago


My interpretation of the question differs from that of the other answers posted: I think the primary topic is inter-frame video compression. That is, historical precedents for a pipeline that starts with the complete contents of frame n and frame n+1 in some representation, determines the differences between them, and communicates only the differences to a third party. That's distinct from doing active calculation on only portions of an image and relying on the fact that whatever isn't touched is innately preserved (such as a video game that redraws only the moving sprites); here there are two complete frames, and their entirety is inspected to calculate a diff, without a priori knowledge.



In which case, I'm going to nominate “Transform coding of image difference signals”, M. R. Schroeder, US Patent 3679821, 1972, which emanated from Bell Labs, as the first public reference I can find to digital image differencing. As per the name, it's most heavily invested in transforming the differences for transmission, but "worthwhile bandwidth reduction" is the very second advantage it proposes in its abstract (emphasis added):




Immunity to transmission errors and worthwhile bandwidth reduction are achieved by distributing the difference signal, developed in differentially coding an image signal ... and other economies are achieved, particularly for relatively slowly varying sequences of images.




The filing date for the patent is given as the 30th of April 1970; it lists prior art but is the first thing I can find that is explicit about processing a digital signal rather than being purely analogue — in column 2, near the very bottom of the page and rolling over into column 3:




Moreover, the signals may be in analog form although preferably they are in digital form to simplify subsequent processing. Assuming for this illustrative embodiment that the signals are in digital form they are [pulse-code modulated].
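
For concreteness, the differencing step the patent builds upon is no more than this (a toy sketch of plain frame differencing; the patent's actual contribution is in how the difference signal is then transformed and distributed, which this doesn't attempt to show):

    /* Sender transmits current[i] - previous[i]; the receiver, which holds its
     * own copy of the previous frame, adds the differences back in. */
    #include <stdint.h>
    #include <stddef.h>

    void encode_differences(const uint8_t *previous, const uint8_t *current,
                            int16_t *diff, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            diff[i] = (int16_t)current[i] - (int16_t)previous[i];   /* difference signal */
    }

    void decode_differences(uint8_t *reconstructed, const int16_t *diff, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            reconstructed[i] = (uint8_t)(reconstructed[i] + diff[i]);   /* rebuild frame */
    }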







answered 20 hours ago by Tommy (edited 19 hours ago)

  • I agree with your interpretation of the question, and see also @Raffzahn's description of using the technique on terminals where reading from system memory is significantly faster than writing to the terminal screen through the serial port.
    – traal
    20 hours ago

  • I wasn't completely sure whether the terminals Raffzahn mentioned would necessarily do an actual diff, or just keep a record of things that have changed. But fair enough, yes, my interpretation of the question may or may not differ as much as I think. To be clear, I wanted to explain why I'm adding an additional answer, not to critique anyone. Definitely read it as a comment about me, not about anybody else.
    – Tommy
    20 hours ago

  • 2  That's how curses works. It keeps two buffers, one for what is currently displayed on screen and a second for what the screen should be updated to display. When the terminal is updated, it basically does a diff of the two buffers in order to minimize the number of bytes sent to the terminal. That can include deleting and inserting characters or lines if the terminal supports it.
    – Ross Ridge
    19 hours ago

  • @RossRidge the earliest reference I could find for curses — and this is via Wikipedia, so apply a large pinch of salt — is Arnold, K. C. R. C. (1977). "Screen Updating and Cursor Movement Optimization: A Library Package". University of California, Berkeley. So 1977, five years later than the patent I was otherwise citing. I'll try to trawl for earlier mentions.
    – Tommy
    19 hours ago


It definitely predates 1989, e.g. Apple II games like Bolo (1982) and Star Maze also used double buffering with incremental updates.



The idea is pretty obvious once you have hardware that can do double buffering, so I wouldn't be surprised if it had been used much earlier still.






An interesting example, or possibly a variation, of this is the Amiga "ANIM" format used by Deluxe Paint, which stores only the changes between frames rather than whole frames. It first appeared in 1988.



Differences between frames were determined by the application creating the file. In the case of Deluxe Paint this was based on user input rather than on examining frame-to-frame differences, so it could sometimes produce very large files even though the visible changes were relatively small.






    share|improve this answer




















    • I looked into Autodesk's similar FLIC, but wasn't able to figure out when that appeared beyond being after 1989 but before 1993. So ANIM predates it for sure.
      – Tommy
      1 hour ago










    Your Answer







    StackExchange.ready(function()
    var channelOptions =
    tags: "".split(" "),
    id: "648"
    ;
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function()
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled)
    StackExchange.using("snippets", function()
    createEditor();
    );

    else
    createEditor();

    );

    function createEditor()
    StackExchange.prepareEditor(
    heartbeatType: 'answer',
    convertImagesToLinks: false,
    noModals: false,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    noCode: true, onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    );



    );













     

    draft saved


    draft discarded


















    StackExchange.ready(
    function ()
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fretrocomputing.stackexchange.com%2fquestions%2f8049%2fwas-1991s-hellcats-the-first-instance-of-incremental-screen-updates%23new-answer', 'question_page');

    );

    Post as a guest






























    5 Answers
    5






    active

    oldest

    votes








    5 Answers
    5






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes








    up vote
    8
    down vote













    The idea is as old as memory-mapped display hardware is. After all, memory bandwidth was for most of the time the limiting factor. Every character based text screen only updates what needs to be changed and similar each and every game - maybe with an exception of the Atari VCS 'Racing the Beam' :)



    Similar double buffering. As soon as a machine supports multiple pages it's the best (or even only) way to generate a flicker free and complete update when game logic runs asynchrone to the screen refresh.



    The double buffering of complete renderings and only transferring changed parts into a display buffer only makes sense in an environment where the CPU (and local CPU accessible memory) is way faster than access to screen memory, as each transfer needs at least two local memory access cycles (for comparison) instead of one access as with a simple transfer.



    And it has been done on mainframes already in the late 1970s (*1) for text screens. Applications are usually form based, so even accessing/handling different data sets, a good part of the screen stays the same, so many screen handlers offered modes to only transfer the data section by default. Similar terminals offered modes to only return changable data.



    For one, rather large application we even increased that by keeping a complete image of the terminal screen in main memory and used a terminal mode that only returned changed field content. Similar, only data that has been changed on the host side since the last input was transferred toward the terminal. In average this reduced an input transmission to less than 100 bytes, while average output was less than 500 bytes instead of more than 3 KiB for a whole 80x25 screen (including format control).



    The effect was quite notable - even on 240 kBit inhouse hookups. In addition the terminal also didn't clear the screen when updating, but updated in place, giving an even greater impression of quick response :))



    Bottom line: "Im Westen nichts Neues"




    *1 - Personal experience, I bet others had done that already before.






    share|improve this answer


















    • 2




      I concur. I wrote a few "full screen text" programs in the mid 1970s that only sent the differences to the asynch terminal (my critical test case was 300 bps dialup), determined by keeping 24x80 buffers and comparing them. I did not feel like I was inventing anything new.
      – dave
      15 hours ago














    up vote
    8
    down vote













    The idea is as old as memory-mapped display hardware is. After all, memory bandwidth was for most of the time the limiting factor. Every character based text screen only updates what needs to be changed and similar each and every game - maybe with an exception of the Atari VCS 'Racing the Beam' :)



    Similar double buffering. As soon as a machine supports multiple pages it's the best (or even only) way to generate a flicker free and complete update when game logic runs asynchrone to the screen refresh.



    The double buffering of complete renderings and only transferring changed parts into a display buffer only makes sense in an environment where the CPU (and local CPU accessible memory) is way faster than access to screen memory, as each transfer needs at least two local memory access cycles (for comparison) instead of one access as with a simple transfer.



    And it has been done on mainframes already in the late 1970s (*1) for text screens. Applications are usually form based, so even accessing/handling different data sets, a good part of the screen stays the same, so many screen handlers offered modes to only transfer the data section by default. Similar terminals offered modes to only return changable data.



    For one, rather large application we even increased that by keeping a complete image of the terminal screen in main memory and used a terminal mode that only returned changed field content. Similar, only data that has been changed on the host side since the last input was transferred toward the terminal. In average this reduced an input transmission to less than 100 bytes, while average output was less than 500 bytes instead of more than 3 KiB for a whole 80x25 screen (including format control).



    The effect was quite notable - even on 240 kBit inhouse hookups. In addition the terminal also didn't clear the screen when updating, but updated in place, giving an even greater impression of quick response :))



    Bottom line: "Im Westen nichts Neues"




    *1 - Personal experience, I bet others had done that already before.






    share|improve this answer


















    • 2




      I concur. I wrote a few "full screen text" programs in the mid 1970s that only sent the differences to the asynch terminal (my critical test case was 300 bps dialup), determined by keeping 24x80 buffers and comparing them. I did not feel like I was inventing anything new.
      – dave
      15 hours ago












    up vote
    8
    down vote










    up vote
    8
    down vote









    The idea is as old as memory-mapped display hardware is. After all, memory bandwidth was for most of the time the limiting factor. Every character based text screen only updates what needs to be changed and similar each and every game - maybe with an exception of the Atari VCS 'Racing the Beam' :)



    Similar double buffering. As soon as a machine supports multiple pages it's the best (or even only) way to generate a flicker free and complete update when game logic runs asynchrone to the screen refresh.



    The double buffering of complete renderings and only transferring changed parts into a display buffer only makes sense in an environment where the CPU (and local CPU accessible memory) is way faster than access to screen memory, as each transfer needs at least two local memory access cycles (for comparison) instead of one access as with a simple transfer.



    And it has been done on mainframes already in the late 1970s (*1) for text screens. Applications are usually form based, so even accessing/handling different data sets, a good part of the screen stays the same, so many screen handlers offered modes to only transfer the data section by default. Similar terminals offered modes to only return changable data.



    For one, rather large application we even increased that by keeping a complete image of the terminal screen in main memory and used a terminal mode that only returned changed field content. Similar, only data that has been changed on the host side since the last input was transferred toward the terminal. In average this reduced an input transmission to less than 100 bytes, while average output was less than 500 bytes instead of more than 3 KiB for a whole 80x25 screen (including format control).



    The effect was quite notable - even on 240 kBit inhouse hookups. In addition the terminal also didn't clear the screen when updating, but updated in place, giving an even greater impression of quick response :))



    Bottom line: "Im Westen nichts Neues"




    *1 - Personal experience, I bet others had done that already before.






    share|improve this answer














    The idea is as old as memory-mapped display hardware is. After all, memory bandwidth was for most of the time the limiting factor. Every character based text screen only updates what needs to be changed and similar each and every game - maybe with an exception of the Atari VCS 'Racing the Beam' :)



    Similar double buffering. As soon as a machine supports multiple pages it's the best (or even only) way to generate a flicker free and complete update when game logic runs asynchrone to the screen refresh.



    The double buffering of complete renderings and only transferring changed parts into a display buffer only makes sense in an environment where the CPU (and local CPU accessible memory) is way faster than access to screen memory, as each transfer needs at least two local memory access cycles (for comparison) instead of one access as with a simple transfer.



    And it has been done on mainframes already in the late 1970s (*1) for text screens. Applications are usually form based, so even accessing/handling different data sets, a good part of the screen stays the same, so many screen handlers offered modes to only transfer the data section by default. Similar terminals offered modes to only return changable data.



    For one, rather large application we even increased that by keeping a complete image of the terminal screen in main memory and used a terminal mode that only returned changed field content. Similar, only data that has been changed on the host side since the last input was transferred toward the terminal. In average this reduced an input transmission to less than 100 bytes, while average output was less than 500 bytes instead of more than 3 KiB for a whole 80x25 screen (including format control).



    The effect was quite notable - even on 240 kBit inhouse hookups. In addition the terminal also didn't clear the screen when updating, but updated in place, giving an even greater impression of quick response :))



    Bottom line: "Im Westen nichts Neues"




    *1 - Personal experience, I bet others had done that already before.







    share|improve this answer














    share|improve this answer



    share|improve this answer








    edited 3 hours ago

























    answered 21 hours ago









    Raffzahn

    39.2k488158




    39.2k488158







    • 2




      I concur. I wrote a few "full screen text" programs in the mid 1970s that only sent the differences to the asynch terminal (my critical test case was 300 bps dialup), determined by keeping 24x80 buffers and comparing them. I did not feel like I was inventing anything new.
      – dave
      15 hours ago












    • 2




      I concur. I wrote a few "full screen text" programs in the mid 1970s that only sent the differences to the asynch terminal (my critical test case was 300 bps dialup), determined by keeping 24x80 buffers and comparing them. I did not feel like I was inventing anything new.
      – dave
      15 hours ago







    2




    2




    I concur. I wrote a few "full screen text" programs in the mid 1970s that only sent the differences to the asynch terminal (my critical test case was 300 bps dialup), determined by keeping 24x80 buffers and comparing them. I did not feel like I was inventing anything new.
    – dave
    15 hours ago




    I concur. I wrote a few "full screen text" programs in the mid 1970s that only sent the differences to the asynch terminal (my critical test case was 300 bps dialup), determined by keeping 24x80 buffers and comparing them. I did not feel like I was inventing anything new.
    – dave
    15 hours ago










    up vote
    6
    down vote













    Basically all graphical user environments (MS Windows [1985], X11 [1984], Atari GEM [1984] send a running application a list of screen regions that need to be re-drawn when one of this application's windows is un-obscured (This list of screen regions is typically attached to some sort of WM_PAINT message).



    It is up to the application wether it goes the hard way and only updates the parts of its windows that actually need an update, or wether it decides to go easy and collapses all these regions and redraws all of its window area.



    So this standard technique is built in practically all WIMP environments that showed up in the early 1980s.



    Outside WIMPS, double buffering has obviously always been a standard technology for games, especially on computers that didn't have hardware sprites (move partial sprite-sized window background to offscreen buffer, display software sprite, move background back from offscreen buffer). Such offscreen buffers could be as large as a screen and be switched to the display via one single register transfer (video circuitry that can display more than one framebuffer, for example Atari ST) or much smaller for computers with a fixed framebuffer address like the ZX Spectrum.






    share|improve this answer






















    • WM_PAINT doesn't require double buffering. An application may either create a memory device context (MemDC), write to that, then copy the contents to the main DC, or the application can bypass the memory DC and write directly to the main DC.
      – traal
      17 hours ago










    • @traal I didn't say that. A MemDC is double buffering in my book. You can obviously not make use of the offering the GUI makes to you and redraw the whole screen from scratch with every WM_PAINT.
      – tofro
      16 hours ago






    • 1




      You have to explicitly create a MemDC if you want to double buffer. With or without the MemDC, there's no need to redraw the whole window, just paint the background of the region specified by the WM_PAINT message, then iterate through your controls, check which ones coincide with the region that needs to be repainted (in MFC, use CRect::IntersectRect()), and repaint just those controls. This may or may not be faster than double buffering but for sure it uses less memory.
      – traal
      16 hours ago















    up vote
    6
    down vote













    Basically all graphical user environments (MS Windows [1985], X11 [1984], Atari GEM [1984] send a running application a list of screen regions that need to be re-drawn when one of this application's windows is un-obscured (This list of screen regions is typically attached to some sort of WM_PAINT message).



    It is up to the application wether it goes the hard way and only updates the parts of its windows that actually need an update, or wether it decides to go easy and collapses all these regions and redraws all of its window area.



    So this standard technique is built in practically all WIMP environments that showed up in the early 1980s.



    Outside WIMPS, double buffering has obviously always been a standard technology for games, especially on computers that didn't have hardware sprites (move partial sprite-sized window background to offscreen buffer, display software sprite, move background back from offscreen buffer). Such offscreen buffers could be as large as a screen and be switched to the display via one single register transfer (video circuitry that can display more than one framebuffer, for example Atari ST) or much smaller for computers with a fixed framebuffer address like the ZX Spectrum.






    share|improve this answer






















    • WM_PAINT doesn't require double buffering. An application may either create a memory device context (MemDC), write to that, then copy the contents to the main DC, or the application can bypass the memory DC and write directly to the main DC.
      – traal
      17 hours ago










    • @traal I didn't say that. A MemDC is double buffering in my book. You can obviously not make use of the offering the GUI makes to you and redraw the whole screen from scratch with every WM_PAINT.
      – tofro
      16 hours ago






    • 1




      You have to explicitly create a MemDC if you want to double buffer. With or without the MemDC, there's no need to redraw the whole window, just paint the background of the region specified by the WM_PAINT message, then iterate through your controls, check which ones coincide with the region that needs to be repainted (in MFC, use CRect::IntersectRect()), and repaint just those controls. This may or may not be faster than double buffering but for sure it uses less memory.
      – traal
      16 hours ago













    up vote
    6
    down vote










    up vote
    6
    down vote









    Basically all graphical user environments (MS Windows [1985], X11 [1984], Atari GEM [1984] send a running application a list of screen regions that need to be re-drawn when one of this application's windows is un-obscured (This list of screen regions is typically attached to some sort of WM_PAINT message).



    It is up to the application wether it goes the hard way and only updates the parts of its windows that actually need an update, or wether it decides to go easy and collapses all these regions and redraws all of its window area.



    So this standard technique is built in practically all WIMP environments that showed up in the early 1980s.



    Outside WIMPS, double buffering has obviously always been a standard technology for games, especially on computers that didn't have hardware sprites (move partial sprite-sized window background to offscreen buffer, display software sprite, move background back from offscreen buffer). Such offscreen buffers could be as large as a screen and be switched to the display via one single register transfer (video circuitry that can display more than one framebuffer, for example Atari ST) or much smaller for computers with a fixed framebuffer address like the ZX Spectrum.






    share|improve this answer














    Basically all graphical user environments (MS Windows [1985], X11 [1984], Atari GEM [1984] send a running application a list of screen regions that need to be re-drawn when one of this application's windows is un-obscured (This list of screen regions is typically attached to some sort of WM_PAINT message).



    It is up to the application wether it goes the hard way and only updates the parts of its windows that actually need an update, or wether it decides to go easy and collapses all these regions and redraws all of its window area.



    So this standard technique is built in practically all WIMP environments that showed up in the early 1980s.



    Outside WIMPS, double buffering has obviously always been a standard technology for games, especially on computers that didn't have hardware sprites (move partial sprite-sized window background to offscreen buffer, display software sprite, move background back from offscreen buffer). Such offscreen buffers could be as large as a screen and be switched to the display via one single register transfer (video circuitry that can display more than one framebuffer, for example Atari ST) or much smaller for computers with a fixed framebuffer address like the ZX Spectrum.







    share|improve this answer














    share|improve this answer



    share|improve this answer








    edited 22 hours ago

























    answered 22 hours ago









    tofro

    13.4k32876




    13.4k32876











    • WM_PAINT doesn't require double buffering. An application may either create a memory device context (MemDC), write to that, then copy the contents to the main DC, or the application can bypass the memory DC and write directly to the main DC.
      – traal
      17 hours ago










    • @traal I didn't say that. A MemDC is double buffering in my book. You can obviously not make use of the offering the GUI makes to you and redraw the whole screen from scratch with every WM_PAINT.
      – tofro
      16 hours ago






    • 1




      You have to explicitly create a MemDC if you want to double buffer. With or without the MemDC, there's no need to redraw the whole window, just paint the background of the region specified by the WM_PAINT message, then iterate through your controls, check which ones coincide with the region that needs to be repainted (in MFC, use CRect::IntersectRect()), and repaint just those controls. This may or may not be faster than double buffering but for sure it uses less memory.
      – traal
      16 hours ago

















    • WM_PAINT doesn't require double buffering. An application may either create a memory device context (MemDC), write to that, then copy the contents to the main DC, or the application can bypass the memory DC and write directly to the main DC.
      – traal
      17 hours ago










    • @traal I didn't say that. A MemDC is double buffering in my book. You can obviously not make use of the offering the GUI makes to you and redraw the whole screen from scratch with every WM_PAINT.
      – tofro
      16 hours ago






    • 1




      You have to explicitly create a MemDC if you want to double buffer. With or without the MemDC, there's no need to redraw the whole window, just paint the background of the region specified by the WM_PAINT message, then iterate through your controls, check which ones coincide with the region that needs to be repainted (in MFC, use CRect::IntersectRect()), and repaint just those controls. This may or may not be faster than double buffering but for sure it uses less memory.
      – traal
      16 hours ago
















    WM_PAINT doesn't require double buffering. An application may either create a memory device context (MemDC), write to that, then copy the contents to the main DC, or the application can bypass the memory DC and write directly to the main DC.
    – traal
    17 hours ago




    WM_PAINT doesn't require double buffering. An application may either create a memory device context (MemDC), write to that, then copy the contents to the main DC, or the application can bypass the memory DC and write directly to the main DC.
    – traal
    17 hours ago












    @traal I didn't say that. A MemDC is double buffering in my book. You can obviously not make use of the offering the GUI makes to you and redraw the whole screen from scratch with every WM_PAINT.
    – tofro
    16 hours ago




    @traal I didn't say that. A MemDC is double buffering in my book. You can obviously not make use of the offering the GUI makes to you and redraw the whole screen from scratch with every WM_PAINT.
    – tofro
    16 hours ago




    1




    1




    You have to explicitly create a MemDC if you want to double buffer. With or without the MemDC, there's no need to redraw the whole window, just paint the background of the region specified by the WM_PAINT message, then iterate through your controls, check which ones coincide with the region that needs to be repainted (in MFC, use CRect::IntersectRect()), and repaint just those controls. This may or may not be faster than double buffering but for sure it uses less memory.
    – traal
    16 hours ago





    You have to explicitly create a MemDC if you want to double buffer. With or without the MemDC, there's no need to redraw the whole window, just paint the background of the region specified by the WM_PAINT message, then iterate through your controls, check which ones coincide with the region that needs to be repainted (in MFC, use CRect::IntersectRect()), and repaint just those controls. This may or may not be faster than double buffering but for sure it uses less memory.
    – traal
    16 hours ago











    up vote
    6
    down vote













    Since my interpretation of the question differs from other answers posted, I think the primary topic is inter-frame video compression. It's about historical precedents for a pipeline that starts with the complete contents of frame n and frame n+1 in some representation, determines the differences between them, and communicates only the differences to a third party. That's as distinct from doing active calculation on only portions of an image because it is innately true that whatever isn't touched will be preserve — such as a video game that redraws only the moving sprites — because there are two complete frames, and their entirety is inspected to calculate a diff without a priori knowledge.



    In which case, I'm going to nominate “Transform coding of image difference signals”, M R Schroeder, US Patent 3679821, 1972., which emanated from Bell Labs, as the first public reference I can find to digital image differencing; as per the name it's most heavily invested in transforming the differences for transmission, but "worthwhile bandwidth reduction" is the very second advantage it proposes in its abstract (emphasis added):




    Immunity to transmission errors and worthwhile bandwidth reduction are achieved by distributing the difference signal, developed in differentially coding an image signal ... and other economies are achieved, particularly for relatively slowly varying sequences of images.




    The filing date for the patent is given as the 30th of April 1970; it lists prior art but is the first thing I can find that is explicit about processing a digital signal rather than being purely analogue — in column 2, near the very bottom of the page and rolling over into column 3:




    Moreover, the signals may be in analog form although preferably they are in digital form to simplify subsequent processing. Assuming for this illustrative embodiment that the signals are in digital form they are [pulse-code modulated].







    share|improve this answer






















    • I agree with your interpretation of the question, and see also @Raffzahn's description of using the technique on terminals where reading from system memory is significantly faster than writing to the terminal screen through the serial port.
      – traal
      20 hours ago











    • I wasn't completely sure whether the terminals Raffzahn mentioned would necessarily do an actual diff, or just keep a record of things that have changed. But fair enough, yes, my interpretation of the question may or may not differ as much as I think. To be clear, I wanted to explain why I'm adding an additional answer, not to critique anyone. Definitely read it as a comment about me, not about anybody else.
      – Tommy
      20 hours ago







    • 2




      That's how curses works. It keeps two buffers, one for what is currently displayed on screen and a second for what what screen should be updated to display. When the terminal is updated, basically does a diff of the two buffers in order to minimize the number of bytes sent to the terminal. That can include deleting and inserting characters or lines if the terminal supports it.
      – Ross Ridge
      19 hours ago










    • @RossRidge the earliest reference I could find for curses — and this is via Wikipedia, so apply a large pinch of salt — is Arnold, K. C. R. C. (1977). "Screen Updating and Cursor Movement Optimization: A Library Package". University of California, Berkely. So 1977, five years later than the patent I was otherwise citing. I'll try to trawl for earlier mentions.
      – Tommy
      19 hours ago















    up vote
    6
    down vote













    Since my interpretation of the question differs from other answers posted, I think the primary topic is inter-frame video compression. It's about historical precedents for a pipeline that starts with the complete contents of frame n and frame n+1 in some representation, determines the differences between them, and communicates only the differences to a third party. That's as distinct from doing active calculation on only portions of an image because it is innately true that whatever isn't touched will be preserve — such as a video game that redraws only the moving sprites — because there are two complete frames, and their entirety is inspected to calculate a diff without a priori knowledge.



    In which case, I'm going to nominate “Transform coding of image difference signals”, M R Schroeder, US Patent 3679821, 1972., which emanated from Bell Labs, as the first public reference I can find to digital image differencing; as per the name it's most heavily invested in transforming the differences for transmission, but "worthwhile bandwidth reduction" is the very second advantage it proposes in its abstract (emphasis added):




    Immunity to transmission errors and worthwhile bandwidth reduction are achieved by distributing the difference signal, developed in differentially coding an image signal ... and other economies are achieved, particularly for relatively slowly varying sequences of images.




    The filing date for the patent is given as the 30th of April 1970; it lists prior art but is the first thing I can find that is explicit about processing a digital signal rather than being purely analogue — in column 2, near the very bottom of the page and rolling over into column 3:




    Moreover, the signals may be in analog form although preferably they are in digital form to simplify subsequent processing. Assuming for this illustrative embodiment that the signals are in digital form they are [pulse-code modulated].







    share|improve this answer






















    • I agree with your interpretation of the question, and see also @Raffzahn's description of using the technique on terminals where reading from system memory is significantly faster than writing to the terminal screen through the serial port.
      – traal
      20 hours ago











    • I wasn't completely sure whether the terminals Raffzahn mentioned would necessarily do an actual diff, or just keep a record of things that have changed. But fair enough, yes, my interpretation of the question may or may not differ as much as I think. To be clear, I wanted to explain why I'm adding an additional answer, not to critique anyone. Definitely read it as a comment about me, not about anybody else.
      – Tommy
      20 hours ago







    • 2




      That's how curses works. It keeps two buffers, one for what is currently displayed on screen and a second for what what screen should be updated to display. When the terminal is updated, basically does a diff of the two buffers in order to minimize the number of bytes sent to the terminal. That can include deleting and inserting characters or lines if the terminal supports it.
      – Ross Ridge
      19 hours ago










    • @RossRidge the earliest reference I could find for curses — and this is via Wikipedia, so apply a large pinch of salt — is Arnold, K. C. R. C. (1977). "Screen Updating and Cursor Movement Optimization: A Library Package". University of California, Berkely. So 1977, five years later than the patent I was otherwise citing. I'll try to trawl for earlier mentions.
      – Tommy
      19 hours ago













    up vote
    6
    down vote










    up vote
    6
    down vote









    Since my interpretation of the question differs from other answers posted, I think the primary topic is inter-frame video compression. It's about historical precedents for a pipeline that starts with the complete contents of frame n and frame n+1 in some representation, determines the differences between them, and communicates only the differences to a third party. That's as distinct from doing active calculation on only portions of an image because it is innately true that whatever isn't touched will be preserve — such as a video game that redraws only the moving sprites — because there are two complete frames, and their entirety is inspected to calculate a diff without a priori knowledge.



    In which case, I'm going to nominate “Transform coding of image difference signals”, M R Schroeder, US Patent 3679821, 1972., which emanated from Bell Labs, as the first public reference I can find to digital image differencing; as per the name it's most heavily invested in transforming the differences for transmission, but "worthwhile bandwidth reduction" is the very second advantage it proposes in its abstract (emphasis added):




    Immunity to transmission errors and worthwhile bandwidth reduction are achieved by distributing the difference signal, developed in differentially coding an image signal ... and other economies are achieved, particularly for relatively slowly varying sequences of images.




    The filing date for the patent is given as the 30th of April 1970; it lists prior art but is the first thing I can find that is explicit about processing a digital signal rather than being purely analogue — in column 2, near the very bottom of the page and rolling over into column 3:




    Moreover, the signals may be in analog form although preferably they are in digital form to simplify subsequent processing. Assuming for this illustrative embodiment that the signals are in digital form they are [pulse-code modulated].















    edited 19 hours ago

























    answered 20 hours ago









    Tommy

    13.1k13465















    • I agree with your interpretation of the question, and see also @Raffzahn's description of using the technique on terminals where reading from system memory is significantly faster than writing to the terminal screen through the serial port.
      – traal
      20 hours ago











    • I wasn't completely sure whether the terminals Raffzahn mentioned would necessarily do an actual diff, or just keep a record of things that have changed. But fair enough, yes, my interpretation of the question may or may not differ as much as I think. To be clear, I wanted to explain why I'm adding an additional answer, not to critique anyone. Definitely read it as a comment about me, not about anybody else.
      – Tommy
      20 hours ago







    • 2




      That's how curses works. It keeps two buffers, one for what is currently displayed on screen and a second for what the screen should be updated to display. When the terminal is updated, it basically does a diff of the two buffers in order to minimize the number of bytes sent to the terminal. That can include deleting and inserting characters or lines if the terminal supports it. (A toy sketch of this idea follows the comments below.)
      – Ross Ridge
      19 hours ago










    • @RossRidge the earliest reference I could find for curses — and this is via Wikipedia, so apply a large pinch of salt — is Arnold, K. C. R. C. (1977). "Screen Updating and Cursor Movement Optimization: A Library Package". University of California, Berkeley. So 1977, five years later than the patent I was otherwise citing. I'll try to trawl for earlier mentions.
      – Tommy
      19 hours ago
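
    As a concrete picture of the two-buffer refresh described in the comment above, here's a toy version — assuming a plain terminal that accepts ANSI cursor addressing, whereas real curses consults terminfo and also optimises cursor motion, line inserts/deletes, and so on:

        #include <stdio.h>

        #define ROWS 24
        #define COLS 80

        static char shown[ROWS][COLS];   /* what the terminal currently displays */
        static char wanted[ROWS][COLS];  /* what it should display next          */

        void refresh_screen(void)
        {
            for (int r = 0; r < ROWS; r++) {
                for (int c = 0; c < COLS; c++) {
                    if (wanted[r][c] == shown[r][c])
                        continue;                 /* unchanged cell: send nothing */

                    /* Move the cursor (1-based row;col) and write the new character. */
                    printf("\033[%d;%dH%c", r + 1, c + 1, wanted[r][c]);
                    shown[r][c] = wanted[r][c];
                }
            }
            fflush(stdout);
        }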




























    up vote
    3
    down vote













    It definitely predates 1989, e.g. Apple II games like Bolo (1982) and Star Maze also used double buffering with incremental updates.



    The idea is pretty obvious once you have hardware that can do double buffering, so I wouldn't be surprised if it was used much earlier already.
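
    To illustrate that combination (the helper names below are hypothetical, not taken from Bolo or Star Maze): with two display pages, the hidden page is two frames stale when you come back to it, so the game remembers what it drew on each page and touches only the sprites that have moved before flipping.

        #define NPAGES   2
        #define NSPRITES 8

        struct point { int x, y; };

        static struct point drawn_at[NPAGES][NSPRITES]; /* where each sprite sits on each page */
        static struct point now[NSPRITES];              /* where each sprite is this frame     */
        static int hidden = 1;                          /* the page not currently displayed    */

        /* Hypothetical drawing and hardware helpers. */
        extern void erase_sprite(int page, struct point p);
        extern void draw_sprite(int page, int sprite, struct point p);
        extern void flip_to(int page);                  /* ask the video hardware to show it   */

        void render_frame(void)
        {
            for (int s = 0; s < NSPRITES; s++) {
                if (drawn_at[hidden][s].x == now[s].x &&
                    drawn_at[hidden][s].y == now[s].y)
                    continue;                 /* unmoved since this page was last shown */

                erase_sprite(hidden, drawn_at[hidden][s]);
                draw_sprite(hidden, s, now[s]);
                drawn_at[hidden][s] = now[s];
            }
            flip_to(hidden);                  /* show the page we just patched          */
            hidden ^= 1;                      /* the other page becomes the work page   */
        }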
















        answered 22 hours ago









        dirkt

        8,16812243
























            up vote
            1
            down vote













            An interesting example, or possibly a variation, of this is the Amiga "ANIM" format used by Deluxe Paint. It only stores changes between frames, rather than whole frames. It first appeared in 1988.



            Differences between frames were determined by the application creating the file. In the case of Deluxe Paint it was based on user input, rather than on examining frame-to-frame differences, so it could sometimes produce very large files even though the visible changes were relatively small.
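
            The playback side of any such delta format boils down to patching the previous frame in place. Here is a toy sketch of that — deliberately not the real IFF ANIM chunk layout, which has its own opcodes; the record format below is made up: each stored record says "at this offset, these bytes changed", and the player applies them to its copy of frame n to obtain frame n+1.

                #include <stdint.h>
                #include <stdio.h>

                #define FRAME_BYTES (320 * 200)     /* one 8-bit frame, for example */

                static uint8_t frame[FRAME_BYTES];  /* current frame, patched in place */

                /* Apply nrecords patch records of the (hypothetical) form
                   <uint32 offset> <uint32 length> <length bytes of new data>
                   read from 'in'.  Returns 0 on success, -1 on a bad record. */
                int apply_frame_delta(FILE *in, uint32_t nrecords)
                {
                    for (uint32_t r = 0; r < nrecords; r++) {
                        uint32_t off, len;
                        if (fread(&off, sizeof off, 1, in) != 1) return -1;
                        if (fread(&len, sizeof len, 1, in) != 1) return -1;
                        if (off > FRAME_BYTES || len > FRAME_BYTES - off) return -1;

                        if (fread(&frame[off], 1, len, in) != len) return -1;
                    }
                    return 0;   /* 'frame' now holds the next frame of the animation */
                }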
















            answered 3 hours ago









            user

            1,974212















            • I looked into Autodesk's similar FLIC, but wasn't able to figure out when that appeared beyond being after 1989 but before 1993. So ANIM predates it for sure.
              – Tommy
              1 hour ago















