Incremental screen updates

In case you have never seen it, 1991's Hellcats was a seminal release on the Mac. It ran at full 8-bit color and could, on a newish machine, drive three 1024x768 screens at the same time. Nothing on the Mac or PC came remotely close.



I recall when I learned how the magic worked... I got up from the game for a moment and returned to find the "screen saver" (ah, the old days) running. I flicked the mouse to clear it and unpause the game, and noticed that the only part of the screen that updated was the horizon. The game achieved its high speeds by carefully designing the screen so that only certain portions would change as the aircraft moved, and then updating only those portions.



Now, as you can imagine, in 1991 this was no trivial task on its own. Graphics cards were generally plain frame buffers driven by the CPU; actual accelerators were just coming to market. So I was always curious how he managed this trick.



Long after, I had an email exchange with the author, and he explained it. He set aside two screen buffers in memory and drew alternate frames into them. He then compared the two and sent only the diffs over the bus to the card. Because the CPU was much faster than the bus, this greatly improved performance.
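
He never published the code (and the comment below suggests the diff may have run over an s-buffer rather than a raw framebuffer), but the scheme as he described it can be sketched roughly like this. The resolution, the word-at-a-time comparison, and the fb pointer are illustrative assumptions:

```c
#include <stddef.h>
#include <stdint.h>

#define FB_WORDS (1024 * 768 / 4)   /* 8-bit pixels, compared 4 at a time */

/* Two complete frames in fast local RAM: the game renders into one while
   the other still mirrors what the video card is currently displaying. */
static uint32_t frame[2][FB_WORDS];

/* Push only the words that changed since the last frame across the
   (slow) bus to the card's framebuffer at fb. */
void flush_diffs(volatile uint32_t *fb, int cur)
{
    const uint32_t *new_frame = frame[cur];
    const uint32_t *old_frame = frame[cur ^ 1];

    for (size_t i = 0; i < FB_WORDS; i++) {
        /* Two fast local reads per word; a bus write only on a change. */
        if (new_frame[i] != old_frame[i])
            fb[i] = new_frame[i];
    }
}
```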



He also noted that the idea came to him on the SPARC1, a platform with the general problem of a fast CPU paired with a slowish bus. His first app spun a cube at 60 fps. He ended up doing Hellcats on the Mac because it shared this limitation, whereas PCs of the era generally ran color at 320x240, where the advantage of the technique washed out.



So, the question:



I suspect that this idea of incremental updating may pre-date Mr. Parker's 1989-ish Sun version. Does anyone know of examples of this being used earlier?










graphics apple-macintosh

asked 2 hours ago by Maury Markowitz




    Just to be clear: you're asking about incremental updates of the form "calculate the whole display, diff, and submit only the differences"? So e.g. the fact that Space Invaders updates each column of aliens in turn is a completely different thing, because there's no comparison step. I guess the 'perform a diff' bit is the focus of the question? (Also: I'd always assumed the Parsoft titles were diffing an s-buffer, not a full-on frame buffer; did the author give any indication as to the in-memory representation of the frames?)
    – Tommy
    18 mins ago















3 Answers






Basically all graphical user environments (MS Windows [1985], X [1984], Atari GEM [1984]) send a running application a list of screen regions that need to be redrawn when one of that application's windows is un-obscured. This list of regions is typically attached to some sort of WM_PAINT message.



It is up to the application whether it goes the hard way and updates only the parts of its windows that actually need an update, or whether it takes it easy, collapses all these regions, and redraws its entire window area. The sketch below shows the hard way.
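
In Win16/Win32 terms, the hard way looks roughly like the classic WM_PAINT idiom below. This is a minimal sketch; DrawScene is a hypothetical application routine that clips its drawing to the rectangle it is given.

```c
#include <windows.h>

/* Hypothetical application routine: repaints only what intersects rc. */
void DrawScene(HDC hdc, const RECT *rc);

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg) {
    case WM_PAINT: {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);
        /* ps.rcPaint is the bounding box of the regions the system has
           accumulated as invalid; drawing can be limited to it instead
           of repainting the whole client area. */
        DrawScene(hdc, &ps.rcPaint);
        EndPaint(hwnd, &ps);
        return 0;
    }
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}
```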



So this standard technique is built into practically all WIMP environments that showed up in the early 1980s.



Outside WIMPs, double buffering has always been a standard technique for games, especially on computers without hardware sprites (copy the sprite-sized patch of background into an offscreen buffer, draw the software sprite, later restore the background from that buffer). Such offscreen buffers could be as large as a whole screen, switched onto the display with a single register transfer (on video circuitry that can display more than one framebuffer, for example the Atari ST), or much smaller on computers with a fixed framebuffer address like the ZX Spectrum.
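
A minimal sketch of the small-buffer variant; the 8-bit linear screen layout and the 16x16 sprite size are assumptions for illustration, not any particular machine's format.

```c
#include <stdint.h>
#include <string.h>

#define SCREEN_W 320   /* assumed: 8-bit pixels, linearly addressed */

typedef struct {
    int x, y, w, h;             /* position and size, w*h <= 16*16 */
    uint8_t saved[16 * 16];     /* background patch under the sprite */
} Sprite;

/* Save the background under the sprite, then draw the sprite over it. */
void sprite_draw(uint8_t *screen, Sprite *s, const uint8_t *pixels)
{
    for (int row = 0; row < s->h; row++) {
        uint8_t *dst = screen + (s->y + row) * SCREEN_W + s->x;
        memcpy(s->saved + row * s->w, dst, s->w);   /* save background */
        memcpy(dst, pixels + row * s->w, s->w);     /* draw sprite row */
    }
}

/* Undraw: copy the saved background patch back before the next move. */
void sprite_erase(uint8_t *screen, const Sprite *s)
{
    for (int row = 0; row < s->h; row++)
        memcpy(screen + (s->y + row) * SCREEN_W + s->x,
               s->saved + row * s->w, s->w);
}
```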






answered 1 hour ago by tofro, edited 1 hour ago










































The idea is as old as memory-mapped display hardware itself. After all, memory bandwidth was the limiting factor most of the time. Every character-based text screen updates only what needs to change, and so does just about every game - maybe with the exception of the Atari VCS and its 'Racing the Beam' :)



The same goes for double buffering. As soon as a machine supports multiple pages, it's the best (or even the only) way to generate a flicker-free, complete update when the game logic runs asynchronously to the screen refresh.
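
A sketch of such a page flip. The display-base register address and the helper routines are hypothetical stand-ins for whatever the real hardware provides; on an Atari ST, for instance, the base address is actually set through a pair of byte-wide registers.

```c
#include <stdint.h>

/* Hypothetical register: a single write tells the video circuitry
   which page to scan out from the next frame onward. */
#define DISPLAY_BASE (*(volatile uintptr_t *)0xFF8200)

static uint8_t page[2][32000];      /* two full screen pages */

void render_frame(uint8_t *dst);    /* hypothetical: draws the next frame */
void wait_for_vblank(void);         /* hypothetical: syncs to the refresh */

void game_loop(void)
{
    int back = 0;                   /* index of the currently hidden page */
    for (;;) {
        render_frame(page[back]);   /* draw off-screen, so no flicker */
        wait_for_vblank();          /* flip between frames, not mid-scan */
        DISPLAY_BASE = (uintptr_t)page[back];  /* one register write flips */
        back ^= 1;                  /* old front page becomes the canvas */
    }
}
```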



Double buffering complete renderings and transferring only the changed parts into the display buffer only makes sense in an environment where the CPU (and its local, CPU-accessible memory) is much faster than access to screen memory, since each transferred word costs at least two local memory access cycles (for the comparison) instead of the one a simple transfer needs. The scheme pays off only when the bus writes saved outweigh those extra local reads.



And it had already been done on mainframes in the late 1970s (*1) for text screens. Applications there are usually form-based, so even when accessing or handling different data sets, a good part of the screen stays the same; many screen handlers therefore offered modes that transferred only the data section by default. Similarly, terminals offered modes that returned only the changeable data.



For one rather large application we took this further by keeping a complete image of the terminal screen in main memory and using a terminal mode that returned only changed field content. Likewise, only data that had changed on the host side since the last input was transferred toward the terminal. On average this reduced an input transmission to less than 100 bytes, and average output to less than 500 bytes, instead of more than 3 KiB for a whole 80x25 screen (including format control).
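
A rough sketch of that kind of shadow-screen diffing. send_field is a hypothetical stand-in for the real terminal protocol (a positioned write of one changed run of characters):

```c
#include <string.h>

#define ROWS 25
#define COLS 80

/* Hypothetical: emit one positioned update for a changed run. */
void send_field(int row, int col, const char *text, int len);

/* Shadow copy of what the terminal currently displays. */
static char shown[ROWS][COLS];

/* Transmit only the runs of characters that differ from the shadow. */
void update_terminal(const char next[ROWS][COLS])
{
    for (int r = 0; r < ROWS; r++) {
        int c = 0;
        while (c < COLS) {
            if (next[r][c] == shown[r][c]) { c++; continue; }
            int start = c;
            while (c < COLS && next[r][c] != shown[r][c])
                c++;                              /* extend the changed run */
            send_field(r, start, &next[r][start], c - start);
            memcpy(&shown[r][start], &next[r][start], c - start);
        }
    }
}
```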



The effect was quite noticeable - even on 240 kbit in-house hookups. In addition, the terminal didn't clear the screen when updating but updated in place, giving an even greater impression of quick response :))



Bottom line: "Im Westen nichts Neues" ("nothing new in the West").




*1 - Personal experience; I bet others had done it even earlier.






answered 42 mins ago by Raffzahn








































It definitely predates 1989; e.g. Apple II games like Bolo (1982) and Star Maze also used double buffering with incremental updates.



The idea is pretty obvious once you have hardware that can do double buffering, so I wouldn't be surprised if it had been used much earlier.






answered 1 hour ago by dirkt



















