How did the early Macintosh computers update the display?

This question is about the Macintosh 128k, but if you can give an answer which describes one of the other very similar machines, that's good too.



The bitmap was 512 by 384 pixels. Assuming that one byte stores eight pixels, so that the CPU can read or update 16 bits in a single bus cycle, that means it would take 10 944 bus cycles to re-write the entire display. Assuming it used something like a move.w a0+, a1+ which is 12 cycles, I calculate that to take around 0.3 seconds to copy a bitmap onto the screen, at 8 MHz. Add to that a variable number of roxl or roxr instructions (in case you want to scroll or move something some pixels to the left or right), and overhead for even a generously unrolled loop, you've got a dog slow GUI.



Another problem is that we have a mouse, but apparently no sprite hardware. So now we need to copy around a (partial) shadow of the bitmap, overlaying the cursor with and and or, whenever the user moves the mouse.



This link does not list any hardware which was obviously intended to alleviate these problems.



So how did this line of "classic macs" address the problem of scrolling and moving large areas, or rendering a mouse cursor?










  • Even 8-bit systems with no sprite hardware at all could comfortably handle a mouse cursor and a graphical environment (see, for example, the WIMP systems for the Amstrad machines or the ZX Spectrum). It shouldn't be a problem for a fast 68k CPU. – tofro, yesterday

  • I can't replicate your maths. 512 x 384 = 196,608 bits = 12,288 words x 12 clock cycles for move.w a0+, a1+ = 147,456 cycles at 8 MHz = 0.018 seconds, or just over 50 Hz. – JeremyP, 23 hours ago

  • Keep in mind that you don't have to rewrite every pixel on the screen during every vertical blank - and indeed, very few machines did that back then. The Windows UI model still works this way (though only for the "software" rendering part), though many applications stopped doing that for simplicity. The pixels you wrote 10 seconds ago are still in the video buffer, unless you explicitly overwrote them - you don't start with an empty buffer on each "frame". – Luaan, 19 hours ago

  • FYI, “Macintosh” was never plural in Apple-speak. So, “Macintosh computers”, not “Macintoshes”. – Basil Bourque, 10 hours ago














6 Answers
TL;DR: Exactly as you assume. The CPU shovelled the data around, even way slower than your calculation suggests, and everyone was happy about the high speed with which it happened :)




To be honest, I'm not completely sure what your question is. It was a plain bitmap display (as you assumed) and quite sufficient to be handled by the CPU. After all, no-one was thinking of real-time video back then - just a UI and whatever the running application wants to display. And for that a CPU is quite sufficient.



Moving a mouse cursor needs at most the manipulation of twice 16 bytes (in the case of an 8x8 cursor): once for undrawing the existing one, once for drawing it at the new position. That's little enough even when done every frame for a fast-moving cursor. In reality the mouse doesn't move at all most of the time, so no redraw is needed.



Similarly, the display doesn't change a lot. Sure, every time a key is pressed a character has to be drawn. Not a big deal; several can be done between frames.



The worst case here would be scrolling up at the end of a line, or inserting a character in the topmost line of a window. That may mean redrawing the entire window, but then again, it's a rare case, and even if it lags by something like 0.2 seconds, that's quite acceptable.



Yes, the Mac was slow in drawing a whole screen, but it wasn't exceptionally slow, or slower than other machines of that time. Especially not considering the nice look it had :))



Just compare it to similar machines of about the same time frame. Inserting a character in a top line and wrapping the rest down amounts to a redraw of the whole text screen. On a 9,600 bps terminal that comes to more than two whole seconds. An Apple II with its built-in 40x24 screen needed about 0.04 seconds (already more than 2 frames) to do the same (*1), and with an 80-column card it may have taken as much as 0.1 seconds. So while one could notice a little lag when scrolling on a Mac, it was well within its competition.



Today we are used to instant updates of humongous high-resolution screens, no matter what happens. Back then no-one watched videos on a PC. Heck, simple animations of a few pixels were already considered awesome.



Bottom line, there was no such problem.




*1 - That's hand-written assembly optimized for that case.






  • +1. It’s easy to forget how few screen updates there were in old GUIs; scrolling didn’t update contents in real time, nor did moving windows etc. – Stephen Kitt, yesterday

  • This answer misses the key thing that made it appear fast: an incredible amount of coding effort went into only updating those portions of the screen that actually needed updating. I recently re-wrote some drawing code originally written in the early-Macintosh era: the new code has about a third as many lines, is much less complex, and takes five times longer to draw. – Mark, yesterday

  • @Barmar Huh? BitBlt is a Windows API, not a CPU instruction. – duskwuff, yesterday

  • @duskwuff: BitBLT had been a thing long before there was a Windows API function of the same name. – Greg Hewgill, yesterday

  • Systems with a BITBLT instruction typically had microcoded CPUs, e.g. Alto, Star, Lisp Machines, and so on. Most commercial microprocessors like the 68000 did not have one. – Chris Hanson, yesterday

















There were three important design aspects of the original Macintosh that allowed the relatively limited video display hardware to provide a pleasing user experience in terms of the performance of its GUI.



  1. Vertical blanking hardware interrupt - this provided the necessary timing for software to synchronize display updates to the CRT refresh cycle. Thus, smooth animation could be achieved, especially when combined with #2.

  2. Double-buffered display support - the hardware supported two independent display buffers that could be rapidly toggled. This allowed redraws that took longer than the blanking interval to appear synchronous with the CRT refresh cycle.

  3. Highly-optimized QuickDraw - about 1/3 of the Macintosh ROM firmware was used to give programmers reusable, highly-optimized, assembly language drawing routines for primitives like lines, ovals, rectangles, and fonts. These drawing routines were in turn used by the ROM GUI routines, which were used by virtually all applications programmers.

In summary, QuickDraw was a rather brilliant inclusion that made it easy for applications to quickly redraw only those regions of the display that they had modified. When that optimization was not enough, synchronization with the video display allowed the lag in redrawing to still appear sufficiently smooth and "unbroken" that the interface felt responsive.
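To make the interplay of #1 and #2 concrete, here is a minimal sketch of a VBL-synchronized buffer flip (in C, with hypothetical stand-in functions - wait_for_vbl, select_buffer and redraw_dirty are not real Toolbox calls, just placeholders for the VBL interrupt and the bit that selects which buffer is displayed):

    #include <stdint.h>

    #define SCREEN_WORDS ((512 / 16) * 342)   /* 1-bit frame buffer, in 16-bit words */

    static uint16_t buffer_a[SCREEN_WORDS];
    static uint16_t buffer_b[SCREEN_WORDS];

    /* Hypothetical stubs: on real hardware these would block on the VBL
       interrupt, poke the bit that selects the alternate screen buffer,
       and run the application's drawing calls, respectively.             */
    static void wait_for_vbl(void)                 { }
    static void select_buffer(const uint16_t *buf) { (void)buf; }
    static void redraw_dirty(uint16_t *buf)        { (void)buf; }

    void frame_loop(void)
    {
        uint16_t *front = buffer_a;   /* currently scanned out by the video circuitry */
        uint16_t *back  = buffer_b;   /* safe to draw into, however long it takes     */

        for (;;) {
            redraw_dirty(back);       /* may well take longer than one frame          */

            wait_for_vbl();           /* flip only during blanking, so the swap is    */
            select_buffer(back);      /* never visible as tearing on the CRT          */

            uint16_t *tmp = front;    /* swap roles for the next frame                */
            front = back;
            back  = tmp;
        }
    }

Whether the shipping system software actually drove the alternate buffer this way is debatable (see the comments below); the sketch only shows how the two hardware facilities fit together.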



NOTE 1: The visible display was actually 342 lines, not 384.



NOTE 2: Since the display hardware shares access to DRAM with the CPU during each horizontal drawing cycle, the CPU effectively runs at ~6 MHz, not ~8 MHz.






  • +1 for mentioning double buffering - this at least made the Classic Mac screen update look fast – tofro, yesterday

  • I'm pretty sure the standard GUI wasn't double buffered. That would require a fair chunk of the 128K memory for the second buffer, and would make updating only the part of the screen that needed updating much more complicated. It also would mean that you wouldn't need to do anything to avoid flicker like described here: computerhistory.org/atchm/macpaint-and-quickdraw-source-code "Bill eliminated the flicker by composing everything in a hidden memory buffer, which was then transferred to the screen quickly when it was completely ready." – Ross Ridge, yesterday

  • As an aside, the sound generator was tied directly to the screen refresh; basically the screen logic (I forget the details) was the clock that drove the sound hardware. – Will Hartung, yesterday

  • @RossRidge All early Macs supported an alternate screen buffer that started at $12700 ($72700 on the Fat Mac) and was activated by setting a specific bit in the VIA data register. – tofro, yesterday

  • @tofro I'm not disputing whether the hardware support existed, but whether it was used during the display of the standard GUI. – Ross Ridge, yesterday

















As others have said, refreshing the whole screen in a fraction of a second was considered blindingly fast at that time.



The Mac's biggest early advantage, though, was the QuickDraw framework. This was leaps and bounds ahead of other graphics packages of the day, both in functionality and efficiency. With QuickDraw, you rarely had to rewrite the whole display -- instead, you'd work within a GrafPort representing a window's content pane, and manipulate Regions that touched only affected pixels, not an entire rectangular bounding box.
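QuickDraw's region machinery is far more general than this, but the payoff of touching only the affected pixels can be sketched in a few lines of C (the rectangle type and the blit_rect helper here are simplified stand-ins, not the real QuickDraw structures and calls):

    #include <stdbool.h>

    typedef struct { int left, top, right, bottom; } DirtyRect;

    /* Hypothetical helper: copy one rectangle of an off-screen image into
       the frame buffer - roughly the role CopyBits plays for a rectangle. */
    static void blit_rect(const DirtyRect *r) { (void)r; }

    static DirtyRect dirty;            /* union of everything invalidated so far */
    static bool      have_dirty = false;

    /* Record that a rectangle needs repainting. */
    void invalidate(const DirtyRect *r)
    {
        if (!have_dirty) { dirty = *r; have_dirty = true; return; }
        if (r->left   < dirty.left)   dirty.left   = r->left;
        if (r->top    < dirty.top)    dirty.top    = r->top;
        if (r->right  > dirty.right)  dirty.right  = r->right;
        if (r->bottom > dirty.bottom) dirty.bottom = r->bottom;
    }

    /* Repaint only the accumulated damage, never the whole 512x342 screen. */
    void update(void)
    {
        if (!have_dirty) return;       /* nothing changed: zero pixels touched */
        blit_rect(&dirty);
        have_dirty = false;
    }

Real QuickDraw tracks arbitrary Regions rather than a single bounding rectangle, which is exactly why it could skip pixels that a plain bounding box would have rewritten.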



At the time the Mac was introduced, a 16-bit 8-MHz 68000 was actually respectably fast in comparison to its competition. But it felt AMAZINGLY fast.






  • I had an 8086 PC in the 80s, with the GEM desktop. Screen scrolling took about 2 seconds. The Mac was relatively quick for the time. – user, yesterday

  • Also QuickDraw had an extremely well optimized bitblt which (in addition to the high level features like only moving content within a window - which was a subset of the entire screen) made moving pixels as fast as possible. Windows 2 and 3, a few years later, had a bitblt which would generate code on the fly and then execute it to achieve blindingly fast pixel updates. I feel sure (though don't actually know for a fact) that QuickDraw did that too, because similar things were being done in games and such at the time. – davidbak, yesterday

  • The Atari ST's VDI and Line-A graphics weren't much worse. – tofro, yesterday

















Basically all early 68k home computers (classic Mac, early Atari ST, Sinclair QL) would update the display using the CPU only. The Amiga with its blitter chip was an exception (somewhat copied by the Atari ST when it later got a simpler blitter).



Without support from specific graphics or DMA chips, there were still some ways to speed up graphics operations:



  • Double(/multiple) buffering - This didn't really make the shoving around of pixels faster, but made it look fast (because updates were basically instant). The CPU could write in one memory bank while the video circuitry displayed the other. Mac, Atari ST and Sinclair QL used / could use this technique.

  • Make use of vast amounts of memory - Some Atari ST games used up to 8 pre-shifted screens and switched between them with the above method to implement fast pixel-wise horizontal scrolling (not an option for the early Mac because of its memory limits); the pre-shifting idea is sketched in C after the assembly code below.

  • Vertical scrolling by changing the start address of the displayed video buffer - This technique was often used on the Atari ST, which was able to modify the address of the first displayed line of video memory with scanline granularity (I do think, however, that the Mac screen banks were fixed)


  • Highly optimized machine code screen drivers - The fastest way to shove video memory around using a 68000 CPU is actually the MOVEM instruction - free up as many registers as you can, then move memory around with some code like the following:



        lea.l   sourceAddr,a6       ; source pointer
        lea.l   destAddr,a5         ; destination pointer
        move.w  #count-1,d0         ; number of 48-byte chunks, minus 1 for dbra
    lp: movem.l (a6)+,d1-d7/a0-a4   ; read 12 longwords (48 bytes) from the source
        movem.l d1-d7/a0-a4,(a5)    ; write them back (movem to memory cannot post-increment,
        lea     48(a5),a5           ;  so step the destination pointer separately)
        dbra    d0,lp


So you can actually go quite a bit faster than your code example implies.
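And here is the pre-shifting trick from the list above, sketched in C purely for illustration (on the real machines this was of course done in assembly, and whole screens rather than single rows were pre-shifted):

    #include <stdint.h>

    /* Pre-shifting: instead of rotating pixels with roxl/roxr at draw time,
       prepare one copy of the image for each of the 8 possible bit offsets
       within a byte. Drawing at any horizontal position then degenerates
       into plain word moves. Shown here for a single 16-pixel row.          */
    typedef struct { uint16_t word[2]; } ShiftedRow;   /* row spilled across two words */

    void preshift_row(uint16_t src, ShiftedRow out[8])
    {
        for (int shift = 0; shift < 8; shift++) {
            uint32_t wide = ((uint32_t)src << 16) >> shift;   /* move the image right */
            out[shift].word[0] = (uint16_t)(wide >> 16);
            out[shift].word[1] = (uint16_t)(wide & 0xFFFFu);
        }
    }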



Seen from today's viewpoint, Mac or Atari ST screen handling isn't really fast. Back in the day, however, it was - way faster than most of what people were used to from 8-bit machines.






  • And back then, many people were still used to teletype terminals, both CRT and actual printers. Even with the complexity of the UI, the Mac was blazingly fast - back then. To modern eyes, it's unbearably slow - everything except for moving the mouse had significant, very visible delays, especially when you needed to redraw the whole screen (which meant the redraw optimizations didn't apply). – Luaan, 19 hours ago

















In those old macs, the CPU was basically it. There was no screen controller, no floppy controller, nothing. Drawing the screen was job 1. Handling the floppy and watching for mouse/keyboard interrupts was job 2, and running your program was job 3. Mostly you got time when the electron gun was pulling back from the bottom of the screen to the top after it drew a page, and when it was pulling back horizontally after it drew a line. It felt responsive, but if you wanted to do something compute intensive you had the wrong machine.






  • While you are somewhat right, that creates the wrong impression - the Mac was considered a fast computer when it appeared. – tofro, yesterday

  • @tofro A fast home computer, yes. And it was "portable", and with a full graphical UI to boot. Game consoles were much faster thanks to their dedicated graphics hardware. The large computers like LISP machines had both the UI and usually were a bit faster, but also much larger and decidedly not home machines; and of course, almost noöne had any experience with those; most people either had experience with computers like the Apple II or Amiga, or "IBM mainframes" with teletype or CRT terminals. The Mac was quite a special thing. – Luaan, 18 hours ago

  • @Luaan Game consoles contemporary to the early Mac were the ColecoVision or Atari 5200, the Famicom and, a bit later, the NES. None of these were close to a Mac (in performance, display resolution, and, very obviously also, price). – tofro, 18 hours ago

  • @tofro That depends on what you count as performance. For a game console, latency was important, while resolution wasn't. For a GUI-based microcomputer, resolution was very important, while latency wasn't. They both had their design goals and achieved them. The Famicom had pretty much no visible latency while doing animations and full-screen scrolling (in SMB2). The same was true of the Atari 2600 (with even lower resolution and graphics quality). Neither could ever work as a home computer with a mouse-based GUI, but that wasn't their purpose anyway. – Luaan, 18 hours ago

















Other answers cover QuickDraw pretty well. The lack of a hardware mouse cursor was something of a problem from a UI perspective, however, which deserves additional detail. The cursor drawing could be handled by the vertical-blank interrupt at times when nothing was going on. The cursor was 16x16, and the CPU could load and store 32-bit registers at any 16-bit boundary, so for 16 lines of the display the CPU would load 32 bits (two 16-bit words), store them to a backup area, mask with the cursor, and write back. Then the next time it would draw the cursor, it would first restore the part of the screen under the old one. The cursor and mask could be stored shifted to line up with the current cursor position (so if the cursor moves right three pixels, they'd need to be shifted in the buffer before the next time they're drawn), so that unless the cursor moves by 8 pixels in a single frame it would never be necessary to perform an 8-bit shift (the worst-case value).
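The per-row save, mask and restore described above looks roughly like this (a simplified C sketch, not the actual ROM code; each row's 32-bit span stands for the two 16-bit screen words the cursor happens to straddle):

    #include <stdint.h>

    #define CURSOR_ROWS 16

    static uint32_t cursor_data[CURSOR_ROWS];   /* cursor image, pre-shifted to the current bit offset */
    static uint32_t cursor_mask[CURSOR_ROWS];   /* 1 = this bit belongs to the cursor                  */
    static uint32_t saved_under[CURSOR_ROWS];   /* backup of the screen bits underneath                */

    /* Draw: remember what was there, punch a hole with AND, lay the cursor in with OR. */
    void draw_cursor(uint32_t *screen_span[CURSOR_ROWS])
    {
        for (int y = 0; y < CURSOR_ROWS; y++) {
            saved_under[y]   = *screen_span[y];
            *screen_span[y] &= ~cursor_mask[y];
            *screen_span[y] |=  cursor_data[y];
        }
    }

    /* Undraw: put the saved bits back before the cursor is drawn at its new position. */
    void erase_cursor(uint32_t *screen_span[CURSOR_ROWS])
    {
        for (int y = 0; y < CURSOR_ROWS; y++)
            *screen_span[y] = saved_under[y];
    }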



The problem arises if the cursor is moved while content is being drawn. In early versions of the Macintosh firmware, before anything could be drawn on the screen, it was necessary to first hide the cursor (restoring what was behind it), then perform the graphical operation, and then restore the cursor. The system would do this automatically if the cursor was enabled when drawing was attempted, but doing so imposed a huge speed penalty. Most programs instead called HideCursor before doing a sequence of operations, and then ShowCursor afterward. Calling HideCursor would increment a counter and ShowCursor would decrement it, so if the cursor was hidden while a function was called that would also hide and restore the cursor, the cursor would still be hidden after the function returned.
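For illustration only, that nesting behaviour can be pictured with a counter like this (a C sketch following the description above, not the actual Toolbox implementation; the draw/erase stubs are hypothetical):

    /* Outstanding HideCursor calls. The cursor is on screen only while this is 0. */
    static int hide_count = 0;

    /* Hypothetical stubs for the low-level draw/undraw sketched earlier. */
    static void draw_cursor_on_screen(void)    { }
    static void erase_cursor_from_screen(void) { }

    void HideCursor_like(void)
    {
        if (hide_count == 0)
            erase_cursor_from_screen();   /* only the first hide actually removes the cursor */
        hide_count++;
    }

    void ShowCursor_like(void)
    {
        if (hide_count > 0 && --hide_count == 0)
            draw_cursor_on_screen();      /* only the matching outermost show redraws it */
    }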



A later version of the OS added a major improvement: ShieldCursor. This function took a rectangle, would hide the cursor immediately if it was within that rectangle, and would also cause the vertical-blank interrupt to show or hide the cursor based upon whether it was within that rectangle. This was a major win from both a performance and user-experience standpoint. If code was going to draw some stuff in a rectangular region of the screen and the cursor wasn't there, there was no need to spend time erasing and redrawing the cursor. Further, if a program called ShieldCursor to guard one area of the screen where the cursor happened to be, and later used it to guard another, there was no need to redraw the cursor immediately. If the first area needed to be re-shielded and the cursor hadn't been redrawn, there would be no need for an extra redraw/erase cycle. But if the cursor was moved before that area was re-shielded, the cursor could be made visible.



Early versions of the Macintosh OS and programs for it tended to have the cursor disappear quite a lot, which significantly degraded the user experience. I don't remember exactly when ShieldCursor was added, but I think it was around the time of the Macintosh II, and it made things much nicer. I think the mechanisms are simple enough that it probably could have been included within the first Mac OS if anyone had thought of it, but hindsight is of course 20/20.






share|improve this answer




















    Your Answer







    StackExchange.ready(function()
    var channelOptions =
    tags: "".split(" "),
    id: "648"
    ;
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function()
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled)
    StackExchange.using("snippets", function()
    createEditor();
    );

    else
    createEditor();

    );

    function createEditor()
    StackExchange.prepareEditor(
    heartbeatType: 'answer',
    convertImagesToLinks: false,
    noModals: false,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    noCode: true, onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    );



    );













     

    draft saved


    draft discarded


















    StackExchange.ready(
    function ()
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fretrocomputing.stackexchange.com%2fquestions%2f8031%2fhow-did-the-early-macintosh-computers-update-the-display%23new-answer', 'question_page');

    );

    Post as a guest






























    6 Answers
    6






    active

    oldest

    votes








    6 Answers
    6






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes








    up vote
    34
    down vote













    TL;DR: Exactly as you assume. The CPU shovelled the data around, even way slower than your calculation suggests, and everyone was happy about the high speed with which it happened :)




    To be honest, I'm not complete sure what your question is. It was a plain bitmap display (like you assumed) and quite sufficient to be handled by the CPU. After all, no-one was thinking of real time video back then. Just a UI and whatever the running application wants to display. And for this a CPU is quite sufficient.



    Moving a mouse cursor needs at maximum the manipulation of twice 16 bytes (in case of an 8x8). One for undrawing the existing one, one for drawing it at a new position. That's few enough when done every frame for a fast moving one. In reality the mouse doesn't move at all most of the time, so no redraw is needed.



    Similar, the display doesn't get changed a lot. Sure, every time a key gets pressed a character is to be drawn. Not a big deal. Several can be done between frames.



    Worst case here would be scrolling up at the end of a line, or inserting a character in the topmost line of a window. This may take time to (re)draw the entire window, but then again, it's a rare case and even if it lags by like .2 seconds, it's way acceptable.



    Yes, the Mac was slow in drawing a whole screen, but it wasn't exceptionally slow, or slower than other machines of that time. Especially not considering the nice look it had :))



    Just compare it to similar machines of about the same time frame. Inserting a character at a top line and wrapping the rest down equals a redraw of the whole text screen. On a 9,600 bps terminal that equals to more than two whole seconds. An Apple II with its built-in 40x24 screen needed about 0.04 seconds (already more than 2 frames) to do the same (*1) and with an 80 column card it may have taken as much as .1 seconds. So while one could notice a little lag when scrolling on a Mac, it was way within its competition.



    Today we are used to instantly updated of humungous high resolution screens, no matter what happens. Back then no-one watched videos on a PC. Heck, simple animations of a few pixels were already considered awesome.



    Bottom line, there was no such problem.




    *1 - That's hand written assembly optimized for that case.






    share|improve this answer


















    • 5




      +1. It’s easy to forget how few screen updates there were in old GUIs; scrolling didn’t update contents in real time, nor did moving windows etc.
      – Stephen Kitt
      yesterday






    • 5




      This answer misses the key thing that made it appear fast: an incredible amount of coding effort went into only updating those portions of the screen that actually needed updating. I recently re-wrote some drawing code originally written in the early-Macintosh era: the new code has about a third as many lines, is much less complex, and takes five times longer to draw.
      – Mark
      yesterday






    • 2




      @Barmar Huh? BitBlt is a Windows API, not a CPU instruction.
      – duskwuff
      yesterday






    • 5




      @duskwuff: BitBLT had been a thing long before there was a Windows API function of the same name.
      – Greg Hewgill
      yesterday







    • 2




      Systems with a BITBLT instruction typically had microcoded CPUs, e.g. Alto, Star, Lisp Machines, and so on. Most commercial microprocessors like the 68000 did not have one.
      – Chris Hanson
      yesterday














    up vote
    34
    down vote













    TL;DR: Exactly as you assume. The CPU shovelled the data around, even way slower than your calculation suggests, and everyone was happy about the high speed with which it happened :)




    To be honest, I'm not complete sure what your question is. It was a plain bitmap display (like you assumed) and quite sufficient to be handled by the CPU. After all, no-one was thinking of real time video back then. Just a UI and whatever the running application wants to display. And for this a CPU is quite sufficient.



    Moving a mouse cursor needs at maximum the manipulation of twice 16 bytes (in case of an 8x8). One for undrawing the existing one, one for drawing it at a new position. That's few enough when done every frame for a fast moving one. In reality the mouse doesn't move at all most of the time, so no redraw is needed.



    Similar, the display doesn't get changed a lot. Sure, every time a key gets pressed a character is to be drawn. Not a big deal. Several can be done between frames.



    Worst case here would be scrolling up at the end of a line, or inserting a character in the topmost line of a window. This may take time to (re)draw the entire window, but then again, it's a rare case and even if it lags by like .2 seconds, it's way acceptable.



    Yes, the Mac was slow in drawing a whole screen, but it wasn't exceptionally slow, or slower than other machines of that time. Especially not considering the nice look it had :))



    Just compare it to similar machines of about the same time frame. Inserting a character at a top line and wrapping the rest down equals a redraw of the whole text screen. On a 9,600 bps terminal that equals to more than two whole seconds. An Apple II with its built-in 40x24 screen needed about 0.04 seconds (already more than 2 frames) to do the same (*1) and with an 80 column card it may have taken as much as .1 seconds. So while one could notice a little lag when scrolling on a Mac, it was way within its competition.



    Today we are used to instantly updated of humungous high resolution screens, no matter what happens. Back then no-one watched videos on a PC. Heck, simple animations of a few pixels were already considered awesome.



    Bottom line, there was no such problem.




    *1 - That's hand written assembly optimized for that case.






    share|improve this answer


















    • 5




      +1. It’s easy to forget how few screen updates there were in old GUIs; scrolling didn’t update contents in real time, nor did moving windows etc.
      – Stephen Kitt
      yesterday






    • 5




      This answer misses the key thing that made it appear fast: an incredible amount of coding effort went into only updating those portions of the screen that actually needed updating. I recently re-wrote some drawing code originally written in the early-Macintosh era: the new code has about a third as many lines, is much less complex, and takes five times longer to draw.
      – Mark
      yesterday






    • 2




      @Barmar Huh? BitBlt is a Windows API, not a CPU instruction.
      – duskwuff
      yesterday






    • 5




      @duskwuff: BitBLT had been a thing long before there was a Windows API function of the same name.
      – Greg Hewgill
      yesterday







    • 2




      Systems with a BITBLT instruction typically had microcoded CPUs, e.g. Alto, Star, Lisp Machines, and so on. Most commercial microprocessors like the 68000 did not have one.
      – Chris Hanson
      yesterday












    up vote
    34
    down vote










    up vote
    34
    down vote









    TL;DR: Exactly as you assume. The CPU shovelled the data around, even way slower than your calculation suggests, and everyone was happy about the high speed with which it happened :)




    To be honest, I'm not complete sure what your question is. It was a plain bitmap display (like you assumed) and quite sufficient to be handled by the CPU. After all, no-one was thinking of real time video back then. Just a UI and whatever the running application wants to display. And for this a CPU is quite sufficient.



    Moving a mouse cursor needs at maximum the manipulation of twice 16 bytes (in case of an 8x8). One for undrawing the existing one, one for drawing it at a new position. That's few enough when done every frame for a fast moving one. In reality the mouse doesn't move at all most of the time, so no redraw is needed.



    Similar, the display doesn't get changed a lot. Sure, every time a key gets pressed a character is to be drawn. Not a big deal. Several can be done between frames.



    Worst case here would be scrolling up at the end of a line, or inserting a character in the topmost line of a window. This may take time to (re)draw the entire window, but then again, it's a rare case and even if it lags by like .2 seconds, it's way acceptable.



    Yes, the Mac was slow in drawing a whole screen, but it wasn't exceptionally slow, or slower than other machines of that time. Especially not considering the nice look it had :))



    Just compare it to similar machines of about the same time frame. Inserting a character at a top line and wrapping the rest down equals a redraw of the whole text screen. On a 9,600 bps terminal that equals to more than two whole seconds. An Apple II with its built-in 40x24 screen needed about 0.04 seconds (already more than 2 frames) to do the same (*1) and with an 80 column card it may have taken as much as .1 seconds. So while one could notice a little lag when scrolling on a Mac, it was way within its competition.



    Today we are used to instantly updated of humungous high resolution screens, no matter what happens. Back then no-one watched videos on a PC. Heck, simple animations of a few pixels were already considered awesome.



    Bottom line, there was no such problem.




    *1 - That's hand written assembly optimized for that case.






    share|improve this answer














    TL;DR: Exactly as you assume. The CPU shovelled the data around, even way slower than your calculation suggests, and everyone was happy about the high speed with which it happened :)




    To be honest, I'm not complete sure what your question is. It was a plain bitmap display (like you assumed) and quite sufficient to be handled by the CPU. After all, no-one was thinking of real time video back then. Just a UI and whatever the running application wants to display. And for this a CPU is quite sufficient.



    Moving a mouse cursor needs at maximum the manipulation of twice 16 bytes (in case of an 8x8). One for undrawing the existing one, one for drawing it at a new position. That's few enough when done every frame for a fast moving one. In reality the mouse doesn't move at all most of the time, so no redraw is needed.



    Similar, the display doesn't get changed a lot. Sure, every time a key gets pressed a character is to be drawn. Not a big deal. Several can be done between frames.



    Worst case here would be scrolling up at the end of a line, or inserting a character in the topmost line of a window. This may take time to (re)draw the entire window, but then again, it's a rare case and even if it lags by like .2 seconds, it's way acceptable.



    Yes, the Mac was slow in drawing a whole screen, but it wasn't exceptionally slow, or slower than other machines of that time. Especially not considering the nice look it had :))



    Just compare it to similar machines of about the same time frame. Inserting a character at a top line and wrapping the rest down equals a redraw of the whole text screen. On a 9,600 bps terminal that equals to more than two whole seconds. An Apple II with its built-in 40x24 screen needed about 0.04 seconds (already more than 2 frames) to do the same (*1) and with an 80 column card it may have taken as much as .1 seconds. So while one could notice a little lag when scrolling on a Mac, it was way within its competition.



    Today we are used to instantly updated of humungous high resolution screens, no matter what happens. Back then no-one watched videos on a PC. Heck, simple animations of a few pixels were already considered awesome.



    Bottom line, there was no such problem.




    *1 - That's hand written assembly optimized for that case.







    share|improve this answer














    share|improve this answer



    share|improve this answer








    edited yesterday

























    answered yesterday









    Raffzahn

    39.2k488158




    39.2k488158







    • 5




      +1. It’s easy to forget how few screen updates there were in old GUIs; scrolling didn’t update contents in real time, nor did moving windows etc.
      – Stephen Kitt
      yesterday






    • 5




      This answer misses the key thing that made it appear fast: an incredible amount of coding effort went into only updating those portions of the screen that actually needed updating. I recently re-wrote some drawing code originally written in the early-Macintosh era: the new code has about a third as many lines, is much less complex, and takes five times longer to draw.
      – Mark
      yesterday






    • 2




      @Barmar Huh? BitBlt is a Windows API, not a CPU instruction.
      – duskwuff
      yesterday






    • 5




      @duskwuff: BitBLT had been a thing long before there was a Windows API function of the same name.
      – Greg Hewgill
      yesterday







    • 2




      Systems with a BITBLT instruction typically had microcoded CPUs, e.g. Alto, Star, Lisp Machines, and so on. Most commercial microprocessors like the 68000 did not have one.
      – Chris Hanson
      yesterday












    • 5




      +1. It’s easy to forget how few screen updates there were in old GUIs; scrolling didn’t update contents in real time, nor did moving windows etc.
      – Stephen Kitt
      yesterday






    • 5




      This answer misses the key thing that made it appear fast: an incredible amount of coding effort went into only updating those portions of the screen that actually needed updating. I recently re-wrote some drawing code originally written in the early-Macintosh era: the new code has about a third as many lines, is much less complex, and takes five times longer to draw.
      – Mark
      yesterday






    • 2




      @Barmar Huh? BitBlt is a Windows API, not a CPU instruction.
      – duskwuff
      yesterday






    • 5




      @duskwuff: BitBLT had been a thing long before there was a Windows API function of the same name.
      – Greg Hewgill
      yesterday







    • 2




      Systems with a BITBLT instruction typically had microcoded CPUs, e.g. Alto, Star, Lisp Machines, and so on. Most commercial microprocessors like the 68000 did not have one.
      – Chris Hanson
      yesterday







    5




    5




    +1. It’s easy to forget how few screen updates there were in old GUIs; scrolling didn’t update contents in real time, nor did moving windows etc.
    – Stephen Kitt
    yesterday




    +1. It’s easy to forget how few screen updates there were in old GUIs; scrolling didn’t update contents in real time, nor did moving windows etc.
    – Stephen Kitt
    yesterday




    5




    5




    This answer misses the key thing that made it appear fast: an incredible amount of coding effort went into only updating those portions of the screen that actually needed updating. I recently re-wrote some drawing code originally written in the early-Macintosh era: the new code has about a third as many lines, is much less complex, and takes five times longer to draw.
    – Mark
    yesterday




    This answer misses the key thing that made it appear fast: an incredible amount of coding effort went into only updating those portions of the screen that actually needed updating. I recently re-wrote some drawing code originally written in the early-Macintosh era: the new code has about a third as many lines, is much less complex, and takes five times longer to draw.
    – Mark
    yesterday




    2




    2




    @Barmar Huh? BitBlt is a Windows API, not a CPU instruction.
    – duskwuff
    yesterday




    @Barmar Huh? BitBlt is a Windows API, not a CPU instruction.
    – duskwuff
    yesterday




    5




    5




    @duskwuff: BitBLT had been a thing long before there was a Windows API function of the same name.
    – Greg Hewgill
    yesterday





    @duskwuff: BitBLT had been a thing long before there was a Windows API function of the same name.
    – Greg Hewgill
    yesterday





    2




    2




    Systems with a BITBLT instruction typically had microcoded CPUs, e.g. Alto, Star, Lisp Machines, and so on. Most commercial microprocessors like the 68000 did not have one.
    – Chris Hanson
    yesterday




    Systems with a BITBLT instruction typically had microcoded CPUs, e.g. Alto, Star, Lisp Machines, and so on. Most commercial microprocessors like the 68000 did not have one.
    – Chris Hanson
    yesterday










    up vote
    19
    down vote













    There were three important design aspects of the original Macintosh that allowed the relatively limited video display hardware to provide a pleasing user experience in terms of the performance of its GUI.



    1. Vertical blanking hardware interrupt - this provided the necessary timing for software to synchronize display updates to the CRT refresh cycle. Thus, smooth animation could be achieved, especially when combined with #2.

    2. Double-buffered display support - the hardware supported two independent display buffers that could be rapidly toggled. This allowed redraws that took longer than the blanking interval to appear synchronous with the CRT refresh cycle.

    3. Highly-optimized QuickDraw - about 1/3 of the Macintosh ROM firmware was used to give programmers reusable, highly-optimized, assembly language drawing routines for primitives like lines, ovals, rectangles, and fonts. These drawing routines were in turn used by the ROM GUI routines, which were used by virtually all applications programmers.

    In summary, QuickDraw was a rather brilliant inclusion that made it easy for applications to quickly redraw only those regions of the display that they modified. When this optimization was not enough for performance, synchronization with the video display allowed the lag in re-drawing the display to still appear sufficiently smooth and "unbroken" so that a responsive user feel was evoked.



    NOTE 1: The visible display was actually 342 lines, not 384.



    NOTE 2: Since the display hardware shares access to DRAM with the CPU during each horizontal drawing cycle, the CPU effectively runs at ~6 MHz, not ~8 MHz.






    share|improve this answer
















    • 1




      +1 for mentioning double buffering - this at least made the Classic Mac screen update look fast
      – tofro
      yesterday






    • 2




      I'm pretty sure the standard GUI wasn't doubled buffered.That would require a fair chunk of the 128K memory for the second buffer, and would make updating only the part of the screen that needed updating much more complicated. It also would mean that you wouldn't need to do anything to avoid flicker like described here: computerhistory.org/atchm/macpaint-and-quickdraw-source-code "Bill eliminated the flicker by composing everything in a hidden memory buffer, which was then transferred to the screen quickly when it was completely ready."
      – Ross Ridge
      yesterday










    • As an aside, the sound generator was tied directly to the screen refresh, basically the screen logic (I forget the details) was the clock that drove the Sound hardware.
      – Will Hartung
      yesterday






    • 1




      @RossRidge All early Macs supported an alternate screen buffer that started at $12700 ($72700 on the Fat Mac) and was activated by setting a specific bit in the VIA data register.
      – tofro
      yesterday






    • 1




      @tofro I'm not disputing whether the hardware support existed, but whether it was used during the display of the standard GUI.
      – Ross Ridge
      yesterday














    up vote
    19
    down vote













    There were three important design aspects of the original Macintosh that allowed the relatively limited video display hardware to provide a pleasing user experience in terms of the performance of its GUI.



    1. Vertical blanking hardware interrupt - this provided the necessary timing for software to synchronize display updates to the CRT refresh cycle. Thus, smooth animation could be achieved, especially when combined with #2.

    2. Double-buffered display support - the hardware supported two independent display buffers that could be rapidly toggled. This allowed redraws that took longer than the blanking interval to appear synchronous with the CRT refresh cycle.

    3. Highly-optimized QuickDraw - about 1/3 of the Macintosh ROM firmware was used to give programmers reusable, highly-optimized, assembly language drawing routines for primitives like lines, ovals, rectangles, and fonts. These drawing routines were in turn used by the ROM GUI routines, which were used by virtually all applications programmers.

    In summary, QuickDraw was a rather brilliant inclusion that made it easy for applications to quickly redraw only those regions of the display that they modified. When this optimization was not enough for performance, synchronization with the video display allowed the lag in re-drawing the display to still appear sufficiently smooth and "unbroken" so that a responsive user feel was evoked.



    NOTE 1: The visible display was actually 342 lines, not 384.



    NOTE 2: Since the display hardware shares access to DRAM with the CPU during each horizontal drawing cycle, the CPU effectively runs at ~6 MHz, not ~8 MHz.






    share|improve this answer
















    • 1




      +1 for mentioning double buffering - this at least made the Classic Mac screen update look fast
      – tofro
      yesterday






    • 2




      I'm pretty sure the standard GUI wasn't doubled buffered.That would require a fair chunk of the 128K memory for the second buffer, and would make updating only the part of the screen that needed updating much more complicated. It also would mean that you wouldn't need to do anything to avoid flicker like described here: computerhistory.org/atchm/macpaint-and-quickdraw-source-code "Bill eliminated the flicker by composing everything in a hidden memory buffer, which was then transferred to the screen quickly when it was completely ready."
      – Ross Ridge
      yesterday










    • As an aside, the sound generator was tied directly to the screen refresh, basically the screen logic (I forget the details) was the clock that drove the Sound hardware.
      – Will Hartung
      yesterday






    • 1




      @RossRidge All early Macs supported an alternate screen buffer that started at $12700 ($72700 on the Fat Mac) and was activated by setting a specific bit in the VIA data register.
      – tofro
      yesterday






    • 1




      @tofro I'm not disputing whether the hardware support existed, but whether it was used during the display of the standard GUI.
      – Ross Ridge
      yesterday












    up vote
    19
    down vote










    up vote
    19
    down vote









    There were three important design aspects of the original Macintosh that allowed the relatively limited video display hardware to provide a pleasing user experience in terms of the performance of its GUI.



    1. Vertical blanking hardware interrupt - this provided the necessary timing for software to synchronize display updates to the CRT refresh cycle. Thus, smooth animation could be achieved, especially when combined with #2.

    2. Double-buffered display support - the hardware supported two independent display buffers that could be rapidly toggled. This allowed redraws that took longer than the blanking interval to appear synchronous with the CRT refresh cycle.

    3. Highly-optimized QuickDraw - about 1/3 of the Macintosh ROM firmware was used to give programmers reusable, highly-optimized, assembly language drawing routines for primitives like lines, ovals, rectangles, and fonts. These drawing routines were in turn used by the ROM GUI routines, which were used by virtually all applications programmers.

    In summary, QuickDraw was a rather brilliant inclusion that made it easy for applications to quickly redraw only those regions of the display that they modified. When this optimization was not enough for performance, synchronization with the video display allowed the lag in re-drawing the display to still appear sufficiently smooth and "unbroken" so that a responsive user feel was evoked.



    NOTE 1: The visible display was actually 342 lines, not 384.



    NOTE 2: Since the display hardware shares access to DRAM with the CPU during each horizontal drawing cycle, the CPU effectively runs at ~6 MHz, not ~8 MHz.






    share|improve this answer












    There were three important design aspects of the original Macintosh that allowed the relatively limited video display hardware to provide a pleasing user experience in terms of the performance of its GUI.



    1. Vertical blanking hardware interrupt - this provided the necessary timing for software to synchronize display updates to the CRT refresh cycle. Thus, smooth animation could be achieved, especially when combined with #2.

    2. Double-buffered display support - the hardware supported two independent display buffers that could be rapidly toggled. This allowed redraws that took longer than the blanking interval to appear synchronous with the CRT refresh cycle.

    3. Highly-optimized QuickDraw - about 1/3 of the Macintosh ROM firmware was used to give programmers reusable, highly-optimized, assembly language drawing routines for primitives like lines, ovals, rectangles, and fonts. These drawing routines were in turn used by the ROM GUI routines, which were used by virtually all applications programmers.

    In summary, QuickDraw was a rather brilliant inclusion that made it easy for applications to quickly redraw only those regions of the display that they modified. When this optimization was not enough for performance, synchronization with the video display allowed the lag in re-drawing the display to still appear sufficiently smooth and "unbroken" so that a responsive user feel was evoked.



    NOTE 1: The visible display was actually 342 lines, not 384.



    NOTE 2: Since the display hardware shares access to DRAM with the CPU during each horizontal drawing cycle, the CPU effectively runs at ~6 MHz, not ~8 MHz.







    share|improve this answer












    share|improve this answer



    share|improve this answer










answered yesterday by Brian H







    • 1




      +1 for mentioning double buffering - this at least made the Classic Mac screen update look fast
      – tofro
      yesterday






    • 2




I'm pretty sure the standard GUI wasn't double buffered. That would require a fair chunk of the 128K memory for the second buffer, and would make updating only the part of the screen that needed updating much more complicated. It also would mean that you wouldn't need to do anything to avoid flicker, as described here: computerhistory.org/atchm/macpaint-and-quickdraw-source-code "Bill eliminated the flicker by composing everything in a hidden memory buffer, which was then transferred to the screen quickly when it was completely ready."
      – Ross Ridge
      yesterday










    • As an aside, the sound generator was tied directly to the screen refresh, basically the screen logic (I forget the details) was the clock that drove the Sound hardware.
      – Will Hartung
      yesterday






    • 1




      @RossRidge All early Macs supported an alternate screen buffer that started at $12700 ($72700 on the Fat Mac) and was activated by setting a specific bit in the VIA data register.
      – tofro
      yesterday






    • 1




      @tofro I'm not disputing whether the hardware support existed, but whether it was used during the display of the standard GUI.
      – Ross Ridge
      yesterday






















    up vote
    17
    down vote













    As others have said, refreshing the whole screen in a fraction of a second was considered blindingly fast at that time.



    The Mac's biggest early advantage, though, was the QuickDraw framework. This was leaps and bounds ahead of other graphics packages of the day, both in functionality and efficiency. With QuickDraw, you rarely had to rewrite the whole display -- instead, you'd work within a GrafPort representing a window's content pane, and manipulate Regions that touched only affected pixels, not an entire rectangular bounding box.
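As a rough illustration of how that looked in practice, here is a hedged C sketch of the classic update-event pattern. The calls follow old Toolbox conventions (THINK C / MPW style headers), but treat the exact prototypes as approximate; DrawMyContent is a hypothetical application routine.

    #include <Quickdraw.h>
    #include <Windows.h>

    extern void DrawMyContent(WindowPtr win);  /* hypothetical application drawing */

    void HandleUpdate(WindowPtr win)
    {
        SetPort((GrafPtr)win);   /* draw into this window's GrafPort            */
        BeginUpdate(win);        /* clip drawing to the window's update region  */

        /* Only pixels inside the update region are actually touched; anything
           outside is clipped away by QuickDraw, so a partial redraw costs far
           less than repainting the whole 512x342 bitmap.                       */
        EraseRect(&win->portRect);
        DrawMyContent(win);

        EndUpdate(win);          /* restore the normal visible region           */
    }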



    At the time the Mac was introduced, a 16-bit 8-MHz 68000 was actually respectably fast in comparison to its competition. But it felt AMAZINGLY fast.






















    • 3




      I had an 8086 PC in the 80s, with the GEM desktop. Screen scrolling took about 2 seconds. The Mac was relatively quick for the time.
      – user
      yesterday






    • 3




      Also QuickDraw had an extremely well optimized bitblt which (in addition to the high level features like only moving content within a window - which was a subset of the entire screen) made moving pixels as fast as possible. Windows 2 and 3, a few years later, had a bitblt which would generate code on the fly and then execute it to achieve blindingly fast pixel updates. I feel sure (though don't actually know for a fact) that QuickDraw did that too, because similar things were being done in games and such at the time.
      – davidbak
      yesterday











    • The Atari ST's VDI and Line-A graphics wasn't much worse.
      – tofro
      yesterday
























answered yesterday by jeffB

















    up vote
    8
    down vote













Basically all early 68k home computers (classic Mac, early Atari ST, Sinclair QL) updated the display using the CPU alone. The Amiga, with its blitter chip, was the exception (an approach the Atari ST partly copied when it later gained a simpler blitter of its own).



    Without support from specific graphics or DMA chips, there were still some ways to speed up graphics operations:



    • Double(/multiple) buffering - This didn't really make the shoving around of pixels faster, but made it look fast (because updates were basically instant). The CPU could write in one memory bank while the video circuitry displayed the other. Mac, Atari ST and Sinclair QL used / could use this technique.

    • Make use of vast amounts of memory - Some Atari ST games used up to 8 pre-shifted screens and switched between them with the above method to implement fast pixel-wise horizontal scrolling (not an option for the early Mac because of its memory limits).

• Vertical scrolling by changing the start address of the displayed video buffer - This technique was often used on the Atari ST, which could change the address of the first displayed line of video memory with scanline granularity (I do think, however, that the Mac screen banks were fixed)


• Highly optimized machine code screen drivers - The fastest way to shove video memory around on a 68000 is actually the MOVEM instruction - free up as many registers as you can, then move memory around with code like the following:



      lea.l   sourceAddr,a6       ; source pointer
      lea.l   destAddr,a5         ; destination pointer
      move.w  #count-1,d0         ; number of 48-byte blocks, minus 1 for dbra
  lp: movem.l (a6)+,d1-d7/a0-a4   ; read 12 registers = 48 bytes in one go
      movem.l d1-d7/a0-a4,(a5)    ; write them back (movem cannot post-increment when storing)
      lea     48(a5),a5           ; advance the destination pointer by hand
      dbra    d0,lp               ; loop until d0 runs out


So you can actually do quite a bit better than your code example implies.



Seen from today's viewpoint, Mac or Atari ST screen handling isn't really fast. Back in the day, however, it was - way faster than what people were used to from 8-bit machines.




























    • And back then, many people were still used to teleterminals, both CRT and actual printers. Even with the complexity of the UI, the Mac was blazingly fast - back then. To modern eyes, it's unbearably slow - everything except for moving the mouse had significant very visible delays, especially when you needed to redraw the whole screen (which meant the redraw optimizations didn't apply).
      – Luaan
      19 hours ago






















edited and answered yesterday by tofro





















    up vote
    4
    down vote













In those old Macs, the CPU was basically it. There was no screen controller, no floppy controller, nothing. Drawing the screen was job 1. Handling the floppy and watching for mouse/keyboard interrupts was job 2, and running your program was job 3. Mostly you got CPU time while the electron gun was pulling back from the bottom of the screen to the top after it drew a frame, and while it was flying back horizontally after each line. It felt responsive, but if you wanted to do something compute-intensive you had the wrong machine.































    • While you are somewhat right, that creates the wrong impression - The Mac was considered a fast computer when it appeared.
      – tofro
      yesterday










    • @tofro A fast home computer, yes. And it was "portable", and with a full graphical UI to boot. Game consoles were much faster thanks to their dedicated graphics hardware. The large computers like LISP machines had both the UI and usually were a bit faster, but also much larger and decidedly not home machines; and of course, almost noöne had any experience with those; most people either had experience with computers like Apple II or Amiga, or "IBM mainframes" with teletype or CRT terminals. The Mac was quite a special thing.
      – Luaan
      18 hours ago










• @Luaan Game consoles contemporary to the early Mac were the ColecoVision or Atari 5200, the Famicom and, a bit later, the NES. None of these were close to a Mac (in performance, display resolution, and, very obviously also, price)
      – tofro
      18 hours ago










• @tofro That depends on what you count as performance. For a game console, latency was important, while resolution wasn't. For a GUI-based micro-computer, resolution was very important, while latency wasn't. They both had their design goals and achieved them. The Famicom had pretty much no visible latency while doing animations and full-screen scrolling (in SMB2). The same was true with Atari 2600 (with even lower resolution and graphics quality). Neither could ever work as a home computer with a mouse-based GUI, but that wasn't their purpose anyway.
      – Luaan
      18 hours ago























answered yesterday by John N














    up vote
    1
    down vote













    Other answers cover QuickDraw pretty well. The lack of a hardware mouse cursor was something of a problem from a UI perspective, however, which deserves additional detail. The cursor drawing could be handled by the vertical-blank interrupt at times when nothing was going on. The cursor was 16x16, and the CPU could load and store 32-bit registers at any 16-bit boundary, so for 16 lines of the display the CPU would load 32 bits (two 16-bit words), store them to a backup area, mask with the cursor, and write back. Then the next time it would draw the cursor, it would first restore the part of the screen under the old one. The cursor and mask could be stored shifted to line up with the current cursor position (so if the cursor moves right three pixels, they'd need to be shifted in the buffer before the next time they're drawn), so that unless the cursor moves by 8 pixels in a single frame it would never be necessary to perform an 8-bit shift (the worst-case value).
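A hedged C sketch of that save-under scheme (purely illustrative: the structure, the 64-byte row pitch, and the AND/OR compositing rule are assumptions, not the ROM's actual code; on a 68000 a 32-bit access only needs 16-bit alignment, which is what makes the one-load-one-store-per-row approach work):

    #include <stdint.h>

    #define ROW_BYTES 64         /* 512 pixels per line / 8 pixels per byte */

    typedef struct {
        uint32_t data[16];       /* cursor image, pre-shifted to the pixel phase */
        uint32_t mask[16];       /* cursor mask,  pre-shifted the same way       */
        uint32_t under[16];      /* the 32 screen bits saved beneath the cursor  */
    } SoftCursor;

    /* Draw the 16x16 cursor whose top-left word starts at 'dst':
       one 32-bit load and store per row, as described above. */
    void draw_cursor(uint8_t *dst, SoftCursor *c)
    {
        int row;
        for (row = 0; row < 16; row++) {
            uint32_t *p = (uint32_t *)(dst + row * ROW_BYTES);
            c->under[row] = *p;                                        /* save under */
            *p = (*p & ~c->mask[row]) | (c->data[row] & c->mask[row]); /* mask, draw */
        }
    }

    /* Put back the saved pixels before the cursor is moved or redrawn. */
    void erase_cursor(uint8_t *dst, SoftCursor *c)
    {
        int row;
        for (row = 0; row < 16; row++)
            *(uint32_t *)(dst + row * ROW_BYTES) = c->under[row];
    }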



The problem arises if the cursor is moved while content is being drawn. In early versions of the Macintosh firmware, before anything could be drawn on the screen, it was necessary to first hide the cursor (restoring what was behind it), then perform the graphical operation, and then restore the cursor. The system would do this automatically if the cursor was enabled when drawing was attempted, but doing so imposed a huge speed penalty. Most programs instead called HideCursor before doing a sequence of operations, and then ShowCursor afterward. Calling HideCursor would increment a counter and ShowCursor would decrement it, so if the cursor was hidden while a function was called that would also hide and restore the cursor, the cursor would still be hidden after the function returned.
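A short usage sketch of that convention (HideCursor and ShowCursor are real Toolbox calls taking no arguments; the helper and the header name below are assumptions based on classic interfaces):

    #include <Quickdraw.h>

    extern void DrawSeveralThings(void);  /* hypothetical; may itself contain
                                             balanced HideCursor/ShowCursor pairs */

    void RedrawWithoutPerCallCursorOverhead(void)
    {
        HideCursor();           /* bump the hide count; the cursor disappears   */
        DrawSeveralThings();    /* nested, balanced pairs keep it hidden        */
        ShowCursor();           /* count returns to zero; the cursor reappears  */
    }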



A later version of the OS added a major improvement: ShieldCursor. This function took a rectangle, would hide the cursor immediately if it was within that rectangle, and would also cause the vertical-blank interrupt to show or hide the cursor based upon whether it was within that rectangle. This was a major win from both a performance and user-experience standpoint. If code was going to draw some stuff in a rectangular region of the screen and the cursor wasn't there, there was no need to spend time erasing and redrawing the cursor. Further, if a program called ShieldCursor to guard one area of the screen where the cursor happened to be, and later used it to guard another, there was no need to redraw the cursor immediately. If the first area needed to be re-shielded and the cursor hadn't been redrawn, there would be no need for an extra redraw/erase cycle. But if the cursor was moved before that area was re-shielded, the cursor could be made visible.
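A correspondingly hedged sketch of the ShieldCursor pattern; the prototype shown (a rectangle pointer plus an offset point) follows classic Toolbox convention but should be treated as approximate, and dirty/DrawIntoRect are illustrative names:

    #include <Quickdraw.h>

    extern void DrawIntoRect(const Rect *r);  /* hypothetical drawing helper */

    void RedrawRectQuickly(const Rect *dirty, Point globalOffset)
    {
        /* The cursor is hidden only if it actually intersects *dirty, and the
           vertical-blank task keeps honoring the shield while we draw.        */
        ShieldCursor(dirty, globalOffset);
        DrawIntoRect(dirty);
        ShowCursor();           /* balance the shield, as with HideCursor */
    }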



    Early versions of the Macintosh OS and programs for it tended to have the cursor disappear quite a lot, which significantly degraded the user experience. I don't remember exactly when ShieldCursor was added, but I think it was around the time of the Macintosh II, and it made things much nicer. I think the mechanisms are simple enough that it probably could have been included within the first Mac OS if anyone had thought of it, but hindsight is of course 20/20.








































answered 17 hours ago by supercat



























             
