Did the original Intel peripherals use latches as input registers?

This is more a question to satisfy my curiosity.

When looking at the original timing diagrams for reading and writing (e.g. for the 8255 and the 8250), these signals are never expressed as a function of a clock. The 8255 doesn't even have a clock input.

So my question is: is this done using level-sensitive latches?

Tag: 8080

asked 7 hours ago by chthon

3 Answers






Accepted answer (score 4), by dirkt (answered 7 hours ago, edited 3 hours ago):

Looking at the 8255 datasheet, timing is given with reference to the RD (read) and WR (write) signals. The 8080 has READY and WRITE signals, which are described in the 8080 datasheet as:




          READY: The READY signal indicates to the 8080A that valid memory or input data is available on the 8080A data bus. This signal is used to synchronize the CPU with slower memory or I/O devices. If after sending an address out the 8080A does not receive a READY input, the 8080A will enter a WAIT state for as long as the READY line is low. READY can also be used to single step the CPU.



          WRITE: The WR signal is used for memory WRITE or I/O output control. The data on the data bus is stable while the WR signal is active low (WR = 0).




So, yes, the timing is clockless. No, it doesn't use level-sensitive latches; instead, the various latches (in the CPU and in the peripherals) are triggered by edges on these signals, or by combinations of them produced by glue logic.
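For intuition, here is a minimal behavioral sketch in Python (illustrative only, not derived from the 8255's actual circuit) contrasting a level-sensitive transparent latch with an edge-triggered register that uses a /WR-style strobe as its clock:

    class TransparentLatch:
        """Level-sensitive: the output follows the input whenever /WR is low."""
        def __init__(self):
            self.q = 0
        def step(self, n_wr, d):
            if n_wr == 0:                 # strobe active: latch is transparent
                self.q = d
            return self.q

    class EdgeTriggeredRegister:
        """Edge-triggered: the output updates only when /WR returns high."""
        def __init__(self):
            self.q = 0
            self.prev_n_wr = 1
        def step(self, n_wr, d):
            if self.prev_n_wr == 0 and n_wr == 1:   # rising edge of /WR
                self.q = d
            self.prev_n_wr = n_wr
            return self.q

    # One write cycle: the data may still be changing while /WR is low,
    # but it is stable before /WR goes back high.
    cycle = [(1, 0x00), (0, 0x12), (0, 0x34), (1, 0x34), (1, 0xFF)]
    latch, reg = TransparentLatch(), EdgeTriggeredRegister()
    for n_wr, d in cycle:
        print(n_wr, hex(d), hex(latch.step(n_wr, d)), hex(reg.step(n_wr, d)))
    # The transparent latch tracks every change while /WR is low; the
    # edge-triggered register captures only the value present when /WR rises.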






• There is no READY signal on the 8255, so the only conclusion is that it was shown in the diagrams for convenience. – lvd, 3 hours ago

• @lvd: The signals are called RD and WR. – dirkt, 3 hours ago

• @lvd: Pins 5 and 36, negated. – dirkt, 3 hours ago

• So RD stands for READ, not READY. – lvd, 3 hours ago

• @lvd: But there's only a READY signal on the 8080. I have to admit I didn't look at the application example circuitry. – dirkt, 3 hours ago

















Answer (score 3), by lvd (answered 3 hours ago):

There are no clock inputs on the 8255, but that does not mean that, for example, the /RD and /WR strobes couldn't be used as clock inputs to internal clocked flip-flops that latch incoming data or trigger internal state machines.

The actual answer could be found by reverse-engineering the chips in question (check visual6502.org for similar reverse-engineering projects).






Answer (score 1), by supercat (answered 1 hour ago):

Many I/O devices, especially historically, had bus timing controlled entirely by transitions on the chip-select and read/write lines. Even devices like UARTs, which need clocks for various other purposes, would often have bus timing that was independent of that clock, allowing them to be used on a wide range of CPUs with different clocking systems.



The biggest difficulty with this approach is something called metastability, which wasn't thoroughly understood in the 1980s. If a device on the bus initiates a read of the UART's status at almost the precise moment an incoming byte is completed, it would be equally acceptable for the device to report a zero in the "is data ready" bit [the data wasn't ready when the request was received] or a one [the data arrived just in time to be detected].



Unfortunately, there's a third possibility: the device might drive the data bus with a logic level somewhere between a high and low signal, or might start to drive it with a low level and switch to a high level sometime in the middle of a cycle. While a CPU might regard that situation as equivalent to either outputting zero or outputting 1, it might also end up in a weird state, especially if different parts of the CPU grab the signal at slightly different times. Many products in the 1980s didn't pay much attention to such issues, and would have occasional weird failures as a result.



The normal way to prevent problems like this is to use a device called a "double synchronizer". This will ensure that an input transition that occurs near a clock edge will end up being regarded as having been either before it or after it, but at the expense of not being able to say what the signal level currently is--merely what it used to be two cycles ago.
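A minimal sketch of that idea in Python (assuming a generic two-flip-flop synchronizer clocked by the bus clock; this is not any particular UART's circuit):

    class DoubleSynchronizer:
        """Two flip-flops in series, both clocked by the bus clock."""
        def __init__(self):
            self.stage1 = 0
            self.stage2 = 0
        def clock_edge(self, async_in):
            # Each bus-clock edge shifts the sampled value down the chain.
            # A reader of stage2 therefore always sees a clean 0 or 1, but it
            # is the flag as it was a couple of clock edges ago, not "now".
            self.stage2 = self.stage1
            self.stage1 = async_in
            return self.stage2

    sync = DoubleSynchronizer()
    data_ready = [0, 0, 1, 1, 1, 1]            # asynchronous "data ready" flag
    for n, flag in enumerate(data_ready):
        print(n, flag, sync.clock_edge(flag))  # synchronized output trails the flag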



Unfortunately, trying to use a double synchronizer in a UART without requiring a bus clock would require that some bus-initiated action occur before the cycle that's supposed to report the UART's status, and that the latter report what the status was when the former occurred. For example, one could specify that each read of the status register reports the value it had at the previous read request, so code wanting current data would need to read the register twice consecutively. I don't think I've ever seen a UART actually work that way.
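A sketch of that hypothetical scheme (hypothetical names; as noted above, no known UART actually works this way):

    class HypotheticalStatusPort:
        """Each read reports the status that was captured at the previous read."""
        def __init__(self):
            self.captured = 0
        def read(self, current_status):
            reported = self.captured        # what was captured (synchronized) last time
            self.captured = current_status  # capture now, for the next read
            return reported

    port = HypotheticalStatusPort()
    # Software wanting reasonably fresh status would have to read back to back:
    port.read(current_status=1)             # first read only arms the capture
    print(port.read(current_status=1))      # second read reports it -> 1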



The other possibility is to require the use of a bus clock. If serial transmission and reception use a different clock, a double synchronizer would still be necessary, but delaying data-readiness indications by two bus clocks is likely to be less irksome than requiring the use of two separate read events to capture it.





