Why is digital serial transmission used everywhere? e.g. SATA, PCIe, USB

While looking at SATA, PCIe, USB, SD UHS-II it struck me that they are all the same: digital serial bitstream, transmitted using differential pairs (usually 8b/10b coded), with some differences in link/protocol layers.

Why so? Why did this become the standard?

Why are there no widespread system communication protocols that heavily employ some advanced modulation methods for a better symbol rate? Am I missing something?
This is not a question of "serial vs. parallel" but of "digital signaling vs. modulated analog".







asked Aug 30 at 8:31 by artemonster, edited Aug 31 at 15:12 by T.J.L.

  • Well, what alternatives are there? – PlasmaHH, Aug 30 at 8:33

  • Well, there used to be parallel, but you'd need a lot of copper and very wide cables. – Jeroen3, Aug 30 at 8:53

  • And that's of course how DDR still works today. – MSalters, Aug 30 at 10:14

  • And serial overtook parallel for printer cables, et al., when electronics got so cheap that the serial-parallel converter was cheaper than the wire. – Hot Licks, Aug 30 at 11:57

  • It takes longer to measure voltage more precisely. If you start introducing more voltage levels than 1 and 0, your frequency will go down, and you will have no net gain in bandwidth (possibly a loss). – Aaron, Aug 31 at 14:52




8 Answers
















Accepted answer (25 votes)











"Why are there no widespread system communication protocols that heavily employ some advanced modulation methods for a better symbol rate?"




If the basic copper connection between two points supports a digital bit rate that is in excess of the data rate needed to be transmitted by the "application", then why bother with anything else other than standard differential high-speed signalling?



Employing an advanced modulation scheme is usually done when the "channel" has a bandwidth that is much more limited than copper or fibre.






– Andy aka, answered Aug 30 at 8:56




















  • thanks! although there were really good answers, this is the one I was looking for! – artemonster, Aug 31 at 9:14

  • A simple question requires a simple answer but you can't stop folk wanting to cover more ground than what the original question required. – Andy aka, Aug 31 at 9:17

  • and this is a good thing :) I've learned a lot from other answers as they were quite informative. – artemonster, Aug 31 at 9:19

















Answer (63 votes)













There are two main reasons for the rise of serial



1) It's possible. Low cost transistors have been able to manage GHz switching for a decade now, long enough for the capability to be used and become standard.



2) It's necessary, if you want to shift very high speed data more than a few inches. That distance starts to rule out mobo to PCI card links, and definitely rules out mobo to hard disk, or mobo/set-top box to display connections.



The reason for this is skew. If you transmit several parallel signals along a cable, then they have to arrive within a small fraction of the same clock period. This keeps the clock rate down, so the cable width has to increase. As data rates climb, that becomes more and more unwieldy. The prospect for increasing the rate into the future is non-existent; double- or quadruple-width ATA, anybody?
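
To put rough numbers on the skew budget (a sketch with assumed figures, not taken from any particular bus specification): at a 1 GHz parallel clock the bit period is 1 ns, and if the lines are only allowed to differ by a quarter of that period, every conductor in the bundle has to be length-matched to within roughly an inch and a half, with the budget shrinking linearly as the clock rises.

    # Illustrative skew budget for a parallel bus (all numbers are assumptions)
    clock_hz = 1e9                        # assumed parallel-bus clock: 1 GHz
    bit_period_s = 1 / clock_hz           # 1 ns bit period
    skew_budget_s = 0.25 * bit_period_s   # allow 25% of the period as skew (assumption)
    velocity_in_per_s = 6e9               # ~6 in/ns propagation on typical PCB material

    max_length_mismatch_in = skew_budget_s * velocity_in_per_s
    print(f"{max_length_mismatch_in:.2f} inches")   # ~1.5 in across every conductor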



The way to slay the skew demon is to go serial. One line is always synchronised with itself; there is nothing for it to be skewed with. The line carries data that is self-clocked. That is, it uses a data coding scheme (often 8b/10b, sometimes a much higher ratio) that provides a minimum guaranteed transition density, which allows clock extraction.



The prospect for increasing data rate or distance into the future is excellent. Each generation brings faster transistors, and more experience crafting the medium. We saw how that played out with SATA, which started at 1.5 Gb/s, then moved through 3 and is now 6 Gb/s. Even cheap cables can provide sufficiently consistent impedance and reasonable loss, and equalisers are built into the interface silicon to handle frequency-dependent loss. Optical fibre is available for very long runs.
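
As a back-of-the-envelope sketch of what the line coding costs in payload terms (the SATA rates are the ones above; the 128b/130b figure for PCIe 3.0 comes from a comment below; the helper function is only illustrative):

    # Effective payload rate after line-code overhead, ignoring protocol framing
    def payload_gbps(line_rate_gbps, data_bits, coded_bits):
        return line_rate_gbps * data_bits / coded_bits

    for line_rate in (1.5, 3.0, 6.0):   # SATA generations, 8b/10b coded
        print(f"SATA {line_rate} Gb/s -> {payload_gbps(line_rate, 8, 10):.1f} Gb/s of data")

    print(f"PCIe 3.0 8 GT/s -> {payload_gbps(8.0, 128, 130):.2f} Gb/s per lane")  # 128b/130b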



For higher data rates, several serial links can be operated in parallel. This is not the same as putting conductors in parallel, which have to be matched in time to less than a clock cycle. These serial lanes only need to be matched to within a high-level data frame, which can be µs or even ms long.
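
A toy sketch of that idea (not any real protocol's framing): stripe the data across lanes, let each lane arrive whenever it likes, and reassemble by lane identity rather than by arrival time.

    import random

    data = list(range(32))                              # payload words to send
    lanes = 4
    tx_frames = [data[i::lanes] for i in range(lanes)]  # stripe words across lanes

    arrivals = list(enumerate(tx_frames))               # (lane_id, frame) pairs
    random.shuffle(arrivals)                            # lanes may complete in any order (skew)

    rx_frames = [None] * lanes
    for lane_id, frame in arrivals:                     # receiver slots frames by lane id
        rx_frames[lane_id] = frame

    reassembled = [word for group in zip(*rx_frames) for word in group]
    assert reassembled == data                          # original order recovered despite skew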



Of course the advantage in data width doesn't just apply to the cables and connectors. Serial also benefits the PCB area between connectors and chip, the chip pinout, and the chip silicon area.



I do have a personal angle on this. As a designer working on software defined radio (SDR) from the 90's onwards, I used to rail at people like Analog Devices and Xilinx (and all the other ADC and FPGA companies) (they would visit and ask us from time to time) for making me run so many parallel differential connections between multi-100MHz ADCs and FPGAs, when we were just starting to see SATA emerge to displace ATA. We finally got JESD204x, so now we can hook up converters and FPGAs with only a few serial lines.






– Neil_UK, answered Aug 30 at 9:31


















  • PCI express 3 and 4 use 128b/130b encoding. – Peter Smith, Aug 30 at 9:51

  • What is the Nb/(N+2)b nomenclature people are using here? – detly, Aug 30 at 10:50

  • @detly the 8b/10b and 64b/66b are both forms of line-code encoding. In a serial comms context, the line encoding is needed to ensure you can do clock recovery (en.wikipedia.org/wiki/Clock_recovery). – Trevor Boyd Smith, Aug 30 at 12:31

  • @tolos - the velocity factor for most flavours of ordinary PCB materials is about 50% (6 in/ns). – Peter Smith, Aug 30 at 14:40

  • @supercat that's exactly how multiple lanes are handled: each lane has its own independent clocking. Data is framed and spread across all lanes. When the receiver has all the frames, it acts on the data. This allows skew related to the length of the data frame, which can be µs or even ms, as I said in my answer. This is the meaning of the number of lanes in PCIe, and how 2x HDMI are handled to a high performance display. – Neil_UK, Aug 30 at 16:20

















Answer (22 votes)













If you want an example of something that is widely used, but different, look at 1000BASE-T gigabit Ethernet. That uses four parallel wire pairs and non-trivial signal encoding.



Mostly, people use serial buses because they are simple. Parallel buses use more cable, and suffer from signal skew at high data rates over long cables.
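
For the 1000BASE-T example, the arithmetic works out roughly like this (the figures are the commonly cited ones for the standard; the breakdown below is only a sketch):

    pairs = 4                  # all four pairs of the Cat 5 cable, used simultaneously
    symbol_rate_mbaud = 125    # same symbol rate as 100BASE-TX
    data_bits_per_symbol = 2   # PAM-5 carries 2 data bits per symbol; the extra level aids coding

    print(pairs * symbol_rate_mbaud * data_bits_per_symbol)   # -> 1000 Mb/s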






– Simon B, answered Aug 30 at 9:06
















  • GbE was created by throwing money at the problem of pushing more data over the vast amounts of existing Cat 5 wiring. It probably wouldn't quite look like it does if a new interface was designed with no concern over backwards compatibility. Cf how 10GbE is not making any serious inroads in commercial / domestic settings as it requires installing new Cat 6a cabling. – Barleyman, Aug 30 at 11:02

  • 10GbE might still work on cat5e or cat5 depending on cable length and/or quality. – user3549596, Sep 3 at 11:11

















Answer (10 votes)













To add to the other fine answers:



The issues noted in other answers (most notably, skew between parallel signals, and the costs of extra wires in the cable) increase as the distances of the signals increase. Thus, there is a distance at which serial becomes superior to parallel, and that distance has been decreasing as data rates have increased.



Parallel data transfer still happens: inside chips, and also for most signals within circuit boards. However, the distances to external peripherals -- and even to internal drives -- are now too long, and the data rates too high, for parallel interfaces to remain practical. Thus, the signals that an end-user will now be exposed to are largely serial.






– Dr Sheldon, answered Aug 30 at 17:35


















  • Probably the best example of high speed parallel signaling is RAM, especially in PCs. – Jan Dorniak, Aug 30 at 23:16

  • Excellent answer, points directly to the root issue of time synchronization across spatial domains. One thing to note: you can still have more than 2 symbols on a serial link, thus getting some benefit of parallel communications by using modulation to encode more bits per baud. – crasic, Aug 31 at 7:32


















Answer (8 votes)













Advanced modulation techniques would require you to transmit and receive analog signals. ADCs and DACs running at hundreds of MHz tend to be expensive and consume quite a bit of power. Signal processing required for decoding is also costly in terms of silicon and power.



It's simply cheaper to make a better communication medium which can support binary signals.






















  • good point, thanks for the answer :) – artemonster, Sep 1 at 18:44

















Answer (7 votes)














Why did the serial bit stream become so common?




Using serial links has the advantage that it reduces the physical size of the connection. Modern integrated circuit architectures have so many pins that this created a strong need to minimize the physical interconnection demands on their design. This led to developing circuits that operate at extreme speeds at the interfaces of these circuits, using serial protocols. For the same reason, it's natural to minimize the physical interconnection demands elsewhere in any other data link.



The original demand for this kind of technology may have its origins in fiber optic data transmission designs, as well.



Once the technology to support high speed links became very common, it was only natural to apply it in many other places, because the physical size of serial connections is so much smaller than parallel connections.




Why are there no widespread system communication protocols that heavily employ some advanced modulation methods for a better symbol rate?




At the encoding level, coding schemes for digital communication can be as simple as NRZ (Non-Return to Zero), a slightly more complicated Line Code (e.g. 8B/10B), or much more complicated, like QAM (Quadrature Amplitude Modulation).



Complexity adds cost, but choices are also dependent on factors that ultimately rely on information theory and the capacity limits of a link. Shannon's Law, from the Shannon-Hartley Theorem, describes the maximum capacity of a channel (think of that as "the connection" or "link"):




Maximum capacity (bits/second) = Bandwidth × log2(1 + Signal/Noise)




For radio links (something like LTE or WiFi), bandwidth is going to be limited, often by legal regulations. In those cases QAM and similarly complex protocols may be used to eke out the highest data rate possible. In these cases, the signal to noise ratio is often fairly low (10 to 100, or 10 to 20 dB). The data rate can only go so high before the upper limit set by the given bandwidth and signal-to-noise ratio is reached.



For a wire link, the bandwidth is not regulated by anything but the practicality of implementation. Wire links can have a very high signal to noise ratio, greater than 1000 (30 dB). As mentioned in other answers, bandwidth is limited by the design of the transistors driving the wire and receiving the signal, and in the design of the wire itself (a transmission line).
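
Plugging assumed, illustrative numbers into that formula shows why the two situations drive such different designs: a narrow, band-limited radio channel tops out around a hundred megabits, while a wire or fibre with gigahertz of usable bandwidth and a clean signal has headroom for tens of gigabits using plain binary signalling.

    import math

    def shannon_capacity_bps(bandwidth_hz, snr_linear):
        return bandwidth_hz * math.log2(1 + snr_linear)

    radio = shannon_capacity_bps(20e6, 100)   # assumed 20 MHz channel at 20 dB SNR
    wire = shannon_capacity_bps(5e9, 1000)    # assumed 5 GHz of bandwidth at 30 dB SNR

    print(f"band-limited radio channel: ~{radio / 1e6:.0f} Mb/s")   # ~133 Mb/s
    print(f"wideband copper/fibre link: ~{wire / 1e9:.0f} Gb/s")    # ~50 Gb/s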



When bandwidth becomes a limiting factor but signal to noise ratio is not, designers find other ways to increase the data rate. It becomes an economic decision whether to go to a more complex encoding scheme or to more wire:



You will indeed see serial/parallel protocols used when a single wire is still too slow. PCI-Express does this to overcome the bandwidth limitations of the hardware by using multiple lanes.



In fiber transmissions, they don't have to add more fibers (although they might use others if they are already in place and not being used). They can use wavelength-division multiplexing. Generally, this is done to provide multiple independent parallel channels, and the skew issue mentioned in other answers is not a concern for independent channels.
























  • Nice answer. It does make me curious if somebody could (or already has?) do something like implement 256-QAM at USB3 speeds for truly amazing transfer rates... – mbrig, Aug 30 at 20:04

  • FWIW, the fiber world is starting to develop and deploy more complex modulation schemes. PAM-4 is coming for 100 and 400 G Ethernet, and telecoms systems are (I believe, but it's not my field) starting to use coherent QAM. – The Photon, Aug 31 at 4:28

  • but, really, if the SNR of a wire line is so good, why not squeeze out every possible piece of bandwidth? Why push into GHz frequencies (with all relevant problems), where you could go much slower and employ some modulation/coding? Wouldn't it be more convenient? – artemonster, Aug 31 at 9:21

  • You can do that. It becomes an economic decision. – Jim, Aug 31 at 14:50

  • QAM requires a carrier, so it doesn't make much sense to do anything other than PAM for 'baseband' digital. It is used in the optical domain, using light itself as the carrier. Over copper, you would basically just build a radio transceiver. This would require much more high speed analog circuitry, and all of the complexity and increased power consumption that comes with that. Serializers and deserializers are relatively simple in comparison. IMHO, we're more likely to see integrated silicon photonic modulators and detectors than moving to QAM over copper. – alex.forencich, Sep 1 at 21:32

















Answer (1 vote)













Take four semi trucks with a payload, on a four-lane-per-side highway. For the trucks to successfully haul the payload in parallel they have to stay perfectly side by side; one cannot be ahead of or behind the others by more than, let's say, an inch. Hills, curves, doesn't matter. Vary too much and it's a total fail.



But have them take one lane and the distance between them can vary. While it's true that, end to end, it takes over four times the distance from the front of the first truck to the back of the last to move the payloads, they don't have to be perfectly spaced. Only within the length of one truck do the cab and its payload have to be properly positioned and spaced.



Interfaces even go so far as to be parallel (PCIe, networking, etc.), but while those are technically multiple separate data paths, they are not parallel in the sense that everything has to leave and arrive at the same time. Using the truck analogy, the four trucks can drive on four lanes roughly in parallel, with some variation; the trucks are marked by the lane they travel in, so that when they arrive at the other end the payloads can be combined back into the original data set. And/or each lane can carry one data set serially, and by having more lanes you can move more data sets at one time.


























  • The lanes don't need to be perfectly "aligned". Modern multi-lane interfaces use more sophisticated methods of symbol alignment than being perfectly side-by-side. – Ale..chenski, Aug 31 at 7:18

  • Right, that's what I said. – old_timer, Aug 31 at 12:27

















Answer (1 vote)













As an addition to Dmitry Grigoryev's comment:



Analog transmission is always more error-prone than digital transmission. A digital serial transmission, for example, has clocked edges, whereas an analog signal floats somewhere between 0 V and VDD, so interference is much harder to detect. One could take that into account and use differential signaling, as is done in audio.



But then you run into the speed vs. accuracy tradeoff of DACs/ADCs. If you have two digital systems talking to each other, it makes much more sense to use a digital transmission, since you don't need a time-consuming DA-AD translation.



However, if you have an analogue computer running on analogue control voltages (there are still some around; they basically look like analogue modular synths), things are different, and typically you can build analogue computers only for specific tasks. There is a funny presentation in German about analogue computing.



Talking about analogue modular synths: they're also a kind of analogue computer, especially designed to do calculations on changing signals.



So there is analogue transmission in computing, but limited to very specific fields.






– Tillo Reilly (new contributor)

















    Your Answer




    StackExchange.ifUsing("editor", function ()
    return StackExchange.using("mathjaxEditing", function ()
    StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
    StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["\$", "\$"]]);
    );
    );
    , "mathjax-editing");

    StackExchange.ifUsing("editor", function ()
    return StackExchange.using("schematics", function ()
    StackExchange.schematics.init();
    );
    , "cicuitlab");

    StackExchange.ready(function()
    var channelOptions =
    tags: "".split(" "),
    id: "135"
    ;
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function()
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled)
    StackExchange.using("snippets", function()
    createEditor();
    );

    else
    createEditor();

    );

    function createEditor()
    StackExchange.prepareEditor(
    heartbeatType: 'answer',
    convertImagesToLinks: false,
    noModals: false,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    );



    );













     

    draft saved


    draft discarded


















    StackExchange.ready(
    function ()
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2felectronics.stackexchange.com%2fquestions%2f393462%2fwhy-is-digital-serial-transmission-used-everywhere-i-e-sata-pcie-usb%23new-answer', 'question_page');

    );

    Post as a guest






























    8 Answers
    8






    active

    oldest

    votes








    8 Answers
    8






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes








    up vote
    25
    down vote



    accepted











    Why there are no widespread system communication protocols that
    heavily employ some advanced modulation methods for a better symbol
    rate?




    If the basic copper connection between two points supports a digital bit rate that is in excess of the data rate needed to be transmitted by the "application", then why bother with anything else other than standard differential high-speed signalling?



    Employing an advanced modulation scheme is usually done when the "channel" has a bandwidth that is much more limited than copper or fibre.






    share|improve this answer




















    • thanks! altgough there were really good answers, this is the one I was looking for!
      – artemonster
      Aug 31 at 9:14






    • 2




      A simple question requires a simple answer but you can't stop folk wanting to cover more ground than what the original question required.
      – Andy aka
      Aug 31 at 9:17







    • 1




      and this is a good thing :) I've learned a lot from other answers as they were quite informative.
      – artemonster
      Aug 31 at 9:19














    up vote
    25
    down vote



    accepted











    Why there are no widespread system communication protocols that
    heavily employ some advanced modulation methods for a better symbol
    rate?




    If the basic copper connection between two points supports a digital bit rate that is in excess of the data rate needed to be transmitted by the "application", then why bother with anything else other than standard differential high-speed signalling?



    Employing an advanced modulation scheme is usually done when the "channel" has a bandwidth that is much more limited than copper or fibre.






    share|improve this answer




















    • thanks! altgough there were really good answers, this is the one I was looking for!
      – artemonster
      Aug 31 at 9:14






    • 2




      A simple question requires a simple answer but you can't stop folk wanting to cover more ground than what the original question required.
      – Andy aka
      Aug 31 at 9:17







    • 1




      and this is a good thing :) I've learned a lot from other answers as they were quite informative.
      – artemonster
      Aug 31 at 9:19












    up vote
    25
    down vote



    accepted







    up vote
    25
    down vote



    accepted







    Why there are no widespread system communication protocols that
    heavily employ some advanced modulation methods for a better symbol
    rate?




    If the basic copper connection between two points supports a digital bit rate that is in excess of the data rate needed to be transmitted by the "application", then why bother with anything else other than standard differential high-speed signalling?



    Employing an advanced modulation scheme is usually done when the "channel" has a bandwidth that is much more limited than copper or fibre.






    share|improve this answer













    Why there are no widespread system communication protocols that
    heavily employ some advanced modulation methods for a better symbol
    rate?




    If the basic copper connection between two points supports a digital bit rate that is in excess of the data rate needed to be transmitted by the "application", then why bother with anything else other than standard differential high-speed signalling?



    Employing an advanced modulation scheme is usually done when the "channel" has a bandwidth that is much more limited than copper or fibre.







    share|improve this answer












    share|improve this answer



    share|improve this answer










    answered Aug 30 at 8:56









    Andy aka

    228k9167386




    228k9167386











    • thanks! altgough there were really good answers, this is the one I was looking for!
      – artemonster
      Aug 31 at 9:14






    • 2




      A simple question requires a simple answer but you can't stop folk wanting to cover more ground than what the original question required.
      – Andy aka
      Aug 31 at 9:17







    • 1




      and this is a good thing :) I've learned a lot from other answers as they were quite informative.
      – artemonster
      Aug 31 at 9:19
















    • thanks! altgough there were really good answers, this is the one I was looking for!
      – artemonster
      Aug 31 at 9:14






    • 2




      A simple question requires a simple answer but you can't stop folk wanting to cover more ground than what the original question required.
      – Andy aka
      Aug 31 at 9:17







    • 1




      and this is a good thing :) I've learned a lot from other answers as they were quite informative.
      – artemonster
      Aug 31 at 9:19















    thanks! altgough there were really good answers, this is the one I was looking for!
    – artemonster
    Aug 31 at 9:14




    thanks! altgough there were really good answers, this is the one I was looking for!
    – artemonster
    Aug 31 at 9:14




    2




    2




    A simple question requires a simple answer but you can't stop folk wanting to cover more ground than what the original question required.
    – Andy aka
    Aug 31 at 9:17





    A simple question requires a simple answer but you can't stop folk wanting to cover more ground than what the original question required.
    – Andy aka
    Aug 31 at 9:17





    1




    1




    and this is a good thing :) I've learned a lot from other answers as they were quite informative.
    – artemonster
    Aug 31 at 9:19




    and this is a good thing :) I've learned a lot from other answers as they were quite informative.
    – artemonster
    Aug 31 at 9:19












    up vote
    63
    down vote













    There are two main reasons for the rise of serial



    1) It's possible. Low cost transistors have been able to manage GHz switching for a decade now, long enough for the capability to be used and become standard.



    2) It's necessary. If you want to shift very high speed data more than a few inches. This distance starts to rule out mobo to PCI card links, and definitely rules out mobo to hard disk, or mobo/settopbox to display connections.



    The reason for this is skew. If you transmit several parallel signals along a cable, then they have to arrive within a small fraction of the same clock period. This keeps the clock rate down, so the cable width has to increase. As data rates climb, that becomes more and more unweildy. The prospect for increasing the rate into the future is non-existent, double or quadruple width ATA anybody?



    The way to slay the skew demon is to go serial. One line is always synchronised with itself, there is nothing for it to be skewed with. The line carries data that is self-clocked. That is, uses a data coding scheme (often 8b/10b, sometimes much higher) that provides a minimum guaranteed transition density which allows clock extraction.



    The prospect for increasing data rate or distance into the future is excellent. Each generation brings faster transistors, and more experience crafting the mediium. We saw how that played out with SATA, which started at 1.5Gb/s, then moved through 3 and is now 6Gb/s. Even cheap cables can provide sufficiently consistent impedance and reasonable loss, and equalisers are built into the interface silicon to handle frequency dependent loss. Optical fibre is available for very long runs.



    For higher data rates, several serial links can be operated in parallel. This is not the same as putting conductors in parallel, which have to be matched in time to less than a clock cycle. These serial lanes have only need to be matched to within a high level data frame, which can be µs or even ms long.



    Of course the advantage in data width doesn't just apply to the cables and connectors. Serial also benefits PCB board area between connectors and chip, chip pinout, and chip silicon area.



    I do have a personal angle on this. As a designer working on software defined radio (SDR) from the 90's onwards, I used to rail at people like Analog Devices and Xilinx (and all the other ADC and FPGA companies) (they would visit and ask us from time to time) for making me run so many parallel differential connections between multi-100MHz ADCs and FPGAs, when we were just starting to see SATA emerge to displace ATA. We finally got JESD204x, so now we can hook up converters and FPGAs with only a few serial lines.






    share|improve this answer


















    • 7




      PCI express 3 and 4 use 128b/130b encoding.
      – Peter Smith
      Aug 30 at 9:51






    • 10




      What is the Nb/(N+2)b nomenclature people are using here?
      – detly
      Aug 30 at 10:50






    • 17




      @detly the 8b/10b, 64b/66b are both forms of Line-Code encoding. In serial comms context, the line-encoding is needed to ensure you can do Clock-Recovery](en.wikipedia.org/wiki/Clock_recovery).
      – Trevor Boyd Smith
      Aug 30 at 12:31






    • 4




      @tolos - the velocity factor for most flavours of ordinary PCB materials is about 50% (6 in / ns).
      – Peter Smith
      Aug 30 at 14:40






    • 4




      @supercat that's exactly how multiple lanes are handled, each lane has its own independent clocking. Data is framed and spread across all lanes. When the receiver has all the frames, it acts on the data. This allows skew related to the length of the data frame, which can be us or even ms as I said in my answer. This is the meaning of the number of lanes in PCIe, and how 2xHDMI are handled to a high performance display.
      – Neil_UK
      Aug 30 at 16:20














    up vote
    63
    down vote













    There are two main reasons for the rise of serial



    1) It's possible. Low cost transistors have been able to manage GHz switching for a decade now, long enough for the capability to be used and become standard.



    2) It's necessary. If you want to shift very high speed data more than a few inches. This distance starts to rule out mobo to PCI card links, and definitely rules out mobo to hard disk, or mobo/settopbox to display connections.



    The reason for this is skew. If you transmit several parallel signals along a cable, then they have to arrive within a small fraction of the same clock period. This keeps the clock rate down, so the cable width has to increase. As data rates climb, that becomes more and more unweildy. The prospect for increasing the rate into the future is non-existent, double or quadruple width ATA anybody?



    The way to slay the skew demon is to go serial. One line is always synchronised with itself, there is nothing for it to be skewed with. The line carries data that is self-clocked. That is, uses a data coding scheme (often 8b/10b, sometimes much higher) that provides a minimum guaranteed transition density which allows clock extraction.



    The prospect for increasing data rate or distance into the future is excellent. Each generation brings faster transistors, and more experience crafting the mediium. We saw how that played out with SATA, which started at 1.5Gb/s, then moved through 3 and is now 6Gb/s. Even cheap cables can provide sufficiently consistent impedance and reasonable loss, and equalisers are built into the interface silicon to handle frequency dependent loss. Optical fibre is available for very long runs.



    For higher data rates, several serial links can be operated in parallel. This is not the same as putting conductors in parallel, which have to be matched in time to less than a clock cycle. These serial lanes have only need to be matched to within a high level data frame, which can be µs or even ms long.



    Of course the advantage in data width doesn't just apply to the cables and connectors. Serial also benefits PCB board area between connectors and chip, chip pinout, and chip silicon area.



    I do have a personal angle on this. As a designer working on software defined radio (SDR) from the 90's onwards, I used to rail at people like Analog Devices and Xilinx (and all the other ADC and FPGA companies) (they would visit and ask us from time to time) for making me run so many parallel differential connections between multi-100MHz ADCs and FPGAs, when we were just starting to see SATA emerge to displace ATA. We finally got JESD204x, so now we can hook up converters and FPGAs with only a few serial lines.






    share|improve this answer


















    • 7




      PCI express 3 and 4 use 128b/130b encoding.
      – Peter Smith
      Aug 30 at 9:51






    • 10




      What is the Nb/(N+2)b nomenclature people are using here?
      – detly
      Aug 30 at 10:50






    • 17




      @detly the 8b/10b, 64b/66b are both forms of Line-Code encoding. In serial comms context, the line-encoding is needed to ensure you can do Clock-Recovery](en.wikipedia.org/wiki/Clock_recovery).
      – Trevor Boyd Smith
      Aug 30 at 12:31






    • 4




      @tolos - the velocity factor for most flavours of ordinary PCB materials is about 50% (6 in / ns).
      – Peter Smith
      Aug 30 at 14:40






    • 4




      @supercat that's exactly how multiple lanes are handled, each lane has its own independent clocking. Data is framed and spread across all lanes. When the receiver has all the frames, it acts on the data. This allows skew related to the length of the data frame, which can be us or even ms as I said in my answer. This is the meaning of the number of lanes in PCIe, and how 2xHDMI are handled to a high performance display.
      – Neil_UK
      Aug 30 at 16:20












    up vote
    63
    down vote










    up vote
    63
    down vote









    There are two main reasons for the rise of serial



    1) It's possible. Low cost transistors have been able to manage GHz switching for a decade now, long enough for the capability to be used and become standard.



    2) It's necessary. If you want to shift very high speed data more than a few inches. This distance starts to rule out mobo to PCI card links, and definitely rules out mobo to hard disk, or mobo/settopbox to display connections.



    The reason for this is skew. If you transmit several parallel signals along a cable, then they have to arrive within a small fraction of the same clock period. This keeps the clock rate down, so the cable width has to increase. As data rates climb, that becomes more and more unweildy. The prospect for increasing the rate into the future is non-existent, double or quadruple width ATA anybody?



    The way to slay the skew demon is to go serial. One line is always synchronised with itself, there is nothing for it to be skewed with. The line carries data that is self-clocked. That is, uses a data coding scheme (often 8b/10b, sometimes much higher) that provides a minimum guaranteed transition density which allows clock extraction.



    The prospect for increasing data rate or distance into the future is excellent. Each generation brings faster transistors, and more experience crafting the mediium. We saw how that played out with SATA, which started at 1.5Gb/s, then moved through 3 and is now 6Gb/s. Even cheap cables can provide sufficiently consistent impedance and reasonable loss, and equalisers are built into the interface silicon to handle frequency dependent loss. Optical fibre is available for very long runs.



    For higher data rates, several serial links can be operated in parallel. This is not the same as putting conductors in parallel, which have to be matched in time to less than a clock cycle. These serial lanes have only need to be matched to within a high level data frame, which can be µs or even ms long.



    Of course the advantage in data width doesn't just apply to the cables and connectors. Serial also benefits PCB board area between connectors and chip, chip pinout, and chip silicon area.



    I do have a personal angle on this. As a designer working on software defined radio (SDR) from the 90's onwards, I used to rail at people like Analog Devices and Xilinx (and all the other ADC and FPGA companies) (they would visit and ask us from time to time) for making me run so many parallel differential connections between multi-100MHz ADCs and FPGAs, when we were just starting to see SATA emerge to displace ATA. We finally got JESD204x, so now we can hook up converters and FPGAs with only a few serial lines.






    share|improve this answer














    There are two main reasons for the rise of serial



    1) It's possible. Low cost transistors have been able to manage GHz switching for a decade now, long enough for the capability to be used and become standard.



    2) It's necessary. If you want to shift very high speed data more than a few inches. This distance starts to rule out mobo to PCI card links, and definitely rules out mobo to hard disk, or mobo/settopbox to display connections.



    The reason for this is skew. If you transmit several parallel signals along a cable, then they have to arrive within a small fraction of the same clock period. This keeps the clock rate down, so the cable width has to increase. As data rates climb, that becomes more and more unweildy. The prospect for increasing the rate into the future is non-existent, double or quadruple width ATA anybody?



    The way to slay the skew demon is to go serial. One line is always synchronised with itself, there is nothing for it to be skewed with. The line carries data that is self-clocked. That is, uses a data coding scheme (often 8b/10b, sometimes much higher) that provides a minimum guaranteed transition density which allows clock extraction.



    The prospect for increasing data rate or distance into the future is excellent. Each generation brings faster transistors, and more experience crafting the mediium. We saw how that played out with SATA, which started at 1.5Gb/s, then moved through 3 and is now 6Gb/s. Even cheap cables can provide sufficiently consistent impedance and reasonable loss, and equalisers are built into the interface silicon to handle frequency dependent loss. Optical fibre is available for very long runs.



    For higher data rates, several serial links can be operated in parallel. This is not the same as putting conductors in parallel, which have to be matched in time to less than a clock cycle. These serial lanes have only need to be matched to within a high level data frame, which can be µs or even ms long.



    Of course the advantage in data width doesn't just apply to the cables and connectors. Serial also benefits PCB board area between connectors and chip, chip pinout, and chip silicon area.



    I do have a personal angle on this. As a designer working on software defined radio (SDR) from the 90's onwards, I used to rail at people like Analog Devices and Xilinx (and all the other ADC and FPGA companies) (they would visit and ask us from time to time) for making me run so many parallel differential connections between multi-100MHz ADCs and FPGAs, when we were just starting to see SATA emerge to displace ATA. We finally got JESD204x, so now we can hook up converters and FPGAs with only a few serial lines.







    share|improve this answer














    share|improve this answer



    share|improve this answer








    edited Sep 1 at 8:14









    Community♦

    1




    1










    answered Aug 30 at 9:31









    Neil_UK

    69.2k272152




    69.2k272152







    • 7




      PCI express 3 and 4 use 128b/130b encoding.
      – Peter Smith
      Aug 30 at 9:51






    • 10




      What is the Nb/(N+2)b nomenclature people are using here?
      – detly
      Aug 30 at 10:50






    • 17




      @detly the 8b/10b, 64b/66b are both forms of Line-Code encoding. In serial comms context, the line-encoding is needed to ensure you can do Clock-Recovery](en.wikipedia.org/wiki/Clock_recovery).
      – Trevor Boyd Smith
      Aug 30 at 12:31






    • 4




      @tolos - the velocity factor for most flavours of ordinary PCB materials is about 50% (6 in / ns).
      – Peter Smith
      Aug 30 at 14:40






    • 4




      @supercat that's exactly how multiple lanes are handled, each lane has its own independent clocking. Data is framed and spread across all lanes. When the receiver has all the frames, it acts on the data. This allows skew related to the length of the data frame, which can be us or even ms as I said in my answer. This is the meaning of the number of lanes in PCIe, and how 2xHDMI are handled to a high performance display.
      – Neil_UK
      Aug 30 at 16:20












    • 7




      PCI express 3 and 4 use 128b/130b encoding.
      – Peter Smith
      Aug 30 at 9:51






    • 10




      What is the Nb/(N+2)b nomenclature people are using here?
      – detly
      Aug 30 at 10:50






    • 17




      @detly the 8b/10b, 64b/66b are both forms of Line-Code encoding. In serial comms context, the line-encoding is needed to ensure you can do Clock-Recovery](en.wikipedia.org/wiki/Clock_recovery).
      – Trevor Boyd Smith
      Aug 30 at 12:31






    • 4




      @tolos - the velocity factor for most flavours of ordinary PCB materials is about 50% (6 in / ns).
      – Peter Smith
      Aug 30 at 14:40






    • 4




      @supercat that's exactly how multiple lanes are handled, each lane has its own independent clocking. Data is framed and spread across all lanes. When the receiver has all the frames, it acts on the data. This allows skew related to the length of the data frame, which can be us or even ms as I said in my answer. This is the meaning of the number of lanes in PCIe, and how 2xHDMI are handled to a high performance display.
      – Neil_UK
      Aug 30 at 16:20







    7




    7




    PCI express 3 and 4 use 128b/130b encoding.
    – Peter Smith
    Aug 30 at 9:51




    PCI express 3 and 4 use 128b/130b encoding.
    – Peter Smith
    Aug 30 at 9:51




    10




    10




    What is the Nb/(N+2)b nomenclature people are using here?
    – detly
    Aug 30 at 10:50




    What is the Nb/(N+2)b nomenclature people are using here?
    – detly
    Aug 30 at 10:50




    17




    17




    @detly the 8b/10b, 64b/66b are both forms of Line-Code encoding. In serial comms context, the line-encoding is needed to ensure you can do Clock-Recovery](en.wikipedia.org/wiki/Clock_recovery).
    – Trevor Boyd Smith
    Aug 30 at 12:31




    @detly the 8b/10b, 64b/66b are both forms of Line-Code encoding. In serial comms context, the line-encoding is needed to ensure you can do Clock-Recovery](en.wikipedia.org/wiki/Clock_recovery).
    – Trevor Boyd Smith
    Aug 30 at 12:31




    4




    4




    @tolos - the velocity factor for most flavours of ordinary PCB materials is about 50% (6 in / ns).
    – Peter Smith
    Aug 30 at 14:40




    @tolos - the velocity factor for most flavours of ordinary PCB materials is about 50% (6 in / ns).
    – Peter Smith
    Aug 30 at 14:40




    4




    4




    @supercat that's exactly how multiple lanes are handled, each lane has its own independent clocking. Data is framed and spread across all lanes. When the receiver has all the frames, it acts on the data. This allows skew related to the length of the data frame, which can be us or even ms as I said in my answer. This is the meaning of the number of lanes in PCIe, and how 2xHDMI are handled to a high performance display.
    – Neil_UK
    Aug 30 at 16:20




    @supercat that's exactly how multiple lanes are handled, each lane has its own independent clocking. Data is framed and spread across all lanes. When the receiver has all the frames, it acts on the data. This allows skew related to the length of the data frame, which can be us or even ms as I said in my answer. This is the meaning of the number of lanes in PCIe, and how 2xHDMI are handled to a high performance display.
    – Neil_UK
    Aug 30 at 16:20










    up vote
    22
    down vote













    If you want an example of something that is widely used, but different, look at 1000BASE-T gigabit Ethernet. That uses parallel cables and non-trivial signal encoding.



    Mostly, people use serial buses because they are simple. Parallel buses use more cable, and suffer from signal skew at high data rates over long cables.






    share|improve this answer
















    • 9




      GbE was created by throwing money at the problem of pushing more data over the vast amounts of existing Cat 5 wiring. It probably wouldn't quite look like it does if a new interface was designed with no concern over backwards compatibility. Cf how 10GbE is not making any serious inroads in commercial / domestic settings as it requires installing new Cat 6a cabling.
      – Barleyman
      Aug 30 at 11:02










    • 10GbE might still work on cat5e or cat5 depending on cable length and/or quality.
      – user3549596
      Sep 3 at 11:11














    up vote
    22
    down vote













    If you want an example of something that is widely used, but different, look at 1000BASE-T gigabit Ethernet. That uses parallel cables and non-trivial signal encoding.



    Mostly, people use serial buses because they are simple. Parallel buses use more cable, and suffer from signal skew at high data rates over long cables.






    share|improve this answer
















    • 9




      GbE was created by throwing money at the problem of pushing more data over the vast amounts of existing Cat 5 wiring. It probably wouldn't quite look like it does if a new interface was designed with no concern over backwards compatibility. Cf how 10GbE is not making any serious inroads in commercial / domestic settings as it requires installing new Cat 6a cabling.
      – Barleyman
      Aug 30 at 11:02










    • 10GbE might still work on cat5e or cat5 depending on cable length and/or quality.
      – user3549596
      Sep 3 at 11:11












    up vote
    22
    down vote










    up vote
    22
    down vote









    If you want an example of something that is widely used, but different, look at 1000BASE-T gigabit Ethernet. That uses parallel cables and non-trivial signal encoding.



    Mostly, people use serial buses because they are simple. Parallel buses use more cable, and suffer from signal skew at high data rates over long cables.






    share|improve this answer












    If you want an example of something that is widely used, but different, look at 1000BASE-T gigabit Ethernet. That uses parallel cables and non-trivial signal encoding.



    Mostly, people use serial buses because they are simple. Parallel buses use more cable, and suffer from signal skew at high data rates over long cables.







    share|improve this answer












    share|improve this answer



    share|improve this answer










    answered Aug 30 at 9:06









    Simon B

    4,403818




    4,403818







    • 9




      GbE was created by throwing money at the problem of pushing more data over the vast amounts of existing Cat 5 wiring. It probably wouldn't quite look like it does if a new interface was designed with no concern over backwards compatibility. Cf how 10GbE is not making any serious inroads in commercial / domestic settings as it requires installing new Cat 6a cabling.
      – Barleyman
      Aug 30 at 11:02










    • 10GbE might still work on cat5e or cat5 depending on cable length and/or quality.
      – user3549596
      Sep 3 at 11:11












    • 9




      GbE was created by throwing money at the problem of pushing more data over the vast amounts of existing Cat 5 wiring. It probably wouldn't quite look like it does if a new interface was designed with no concern over backwards compatibility. Cf how 10GbE is not making any serious inroads in commercial / domestic settings as it requires installing new Cat 6a cabling.
      – Barleyman
      Aug 30 at 11:02










    • 10GbE might still work on cat5e or cat5 depending on cable length and/or quality.
      – user3549596
      Sep 3 at 11:11







    9




    9




    GbE was created by throwing money at the problem of pushing more data over the vast amounts of existing Cat 5 wiring. It probably wouldn't quite look like it does if a new interface was designed with no concern over backwards compatibility. Cf how 10GbE is not making any serious inroads in commercial / domestic settings as it requires installing new Cat 6a cabling.
    – Barleyman
    Aug 30 at 11:02




    GbE was created by throwing money at the problem of pushing more data over the vast amounts of existing Cat 5 wiring. It probably wouldn't quite look like it does if a new interface was designed with no concern over backwards compatibility. Cf how 10GbE is not making any serious inroads in commercial / domestic settings as it requires installing new Cat 6a cabling.
    – Barleyman
    Aug 30 at 11:02












    10GbE might still work on cat5e or cat5 depending on cable length and/or quality.
    – user3549596
    Sep 3 at 11:11




    10GbE might still work on cat5e or cat5 depending on cable length and/or quality.
    – user3549596
    Sep 3 at 11:11










    up vote
    10
    down vote













    To add to the other fine answers:



    The issues noted in other answers (most notably, skew between parallel signals, and the costs of extra wires in the cable) increase as the distances of the signals increase. Thus, there is a distance at which serial becomes superior to parallel, and that distance has been decreasing as data rates have increased.



    Parallel data transfer still happens: inside of chips, and also most signals within circuit boards. However, the distances needed by external peripherals -- and even by internal drives -- are now too far and too fast for parallel interfaces to remain practical. Thus, the signals that an end-user will now be exposed to are largely serial.






    share|improve this answer


















    • 3




      Probably best example of high speed parallel signaling is RAM memory, especially in PCs.
      – Jan Dorniak
      Aug 30 at 23:16










    • Excellent answer, points directly to the root issue of time synchronziation across spatial domains. One thing to note, you can still have more than 2 symbols on a serial link, thus getting some benefit of parallel communications by using modulation to encode more bits per baud
      – crasic
      Aug 31 at 7:32















    edited Aug 30 at 20:19

    answered Aug 30 at 17:35

    Dr Sheldon

    up vote
    8
    down vote













    Advanced modulation techniques would require you to transmit and receive analog signals. ADCs and DACs running at hundreds of MHz tend to be expensive and consume quite a bit of power. Signal processing required for decoding is also costly in terms of silicon and power.



    It's simply cheaper to make a better communication medium which can support binary signals.
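    As a minimal illustration of that trade-off (my own example numbers, not from the answer): packing more levels into each symbol only gains bits logarithmically, while the voltage spacing the receiver has to resolve shrinks linearly, which is what drives the ADC/DAC cost mentioned above.

        # Minimal sketch with an assumed 1 V swing: bits per symbol vs. level spacing for PAM-N.
        from math import log2

        SWING_V = 1.0  # assumed peak-to-peak swing available on the line

        for levels in (2, 4, 8, 16):
            bits_per_symbol = log2(levels)
            level_spacing_mv = 1000 * SWING_V / (levels - 1)  # gap between adjacent levels
            print(f"PAM-{levels:<2}: {bits_per_symbol:3.0f} bits/symbol, "
                  f"level spacing {level_spacing_mv:6.1f} mV")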






















    • 1




      good point. Thanks for the answer :)
      – artemonster
      Sep 1 at 18:44














    answered Aug 30 at 11:34

    Dmitry Grigoryev










    up vote
    7
    down vote














    Why did the serial bit stream become so common?




    Using serial links has the advantage that it reduces the physical size of the connection. Modern integrated circuit architectures have so many pins that there is a strong need to minimize the physical interconnection demands of their design. This led to developing circuits that operate at extreme speeds at the interfaces of these circuits, using serial protocols. For the same reason, it's natural to minimize the physical interconnection demands elsewhere in any other data link.



    The original demand for this kind of technology may have its origins in fiber optic data transmission designs, as well.



    Once the technology to support high speed links became very common, it was only natural to apply it in many other places, because the physical size of serial connections is so much smaller than parallel connections.




    Why are there no widespread system communication protocols that heavily employ some advanced modulation methods for a better symbol rate?




    At the encoding level, coding schemes for digital communication can be as simple as NRZ (Non-Return to Zero), a slightly more complicated Line Code (e.g. 8B/10B), or much more complicated, like QAM (Quadrature Amplitude Modulation).



    Complexity adds cost, but choices also depend on factors that ultimately come down to information theory and the capacity limits of a link. Shannon's Law, from the Shannon-Hartley Theorem, describes the maximum capacity of a channel (think of that as "the connection" or "link"):




    Maximum Capacity in Bits/Second = Bandwidth * Log2(1 + Signal/Noise)




    For radio links (something like LTE or WiFi), bandwidth is going to be limited, often by legal regulations. In those cases QAM and similarly complex protocols may be used to eke out the highest data rate possible. In these cases, the signal to noise ratio is often fairly low (10 to 100, or, in decibels 10 to 20 dB). It can only go so high before an upper limit is reached under the given bandwidth and signal to noise ratio.



    For a wire link, the bandwidth is not regulated by anything but the practicality of implementation. Wire links can have a very high signal to noise ratio, greater than 1000 (30 dB). As mentioned in other answers, bandwidth is limited by the design of the transistors driving the wire and receiving the signal, and in the design of the wire itself (a transmission line).
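    As a quick sanity check of those two regimes, here is a minimal sketch of the Shannon-Hartley formula above, using assumed example figures (20 MHz / 20 dB for a radio-style link, 1 GHz / 30 dB for a wired link; these numbers are illustrations, not from the answer):

        # Shannon-Hartley capacity, C = B * log2(1 + S/N), for two assumed example channels.
        from math import log2

        def capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
            """Maximum error-free data rate of a channel (Shannon-Hartley)."""
            return bandwidth_hz * log2(1 + snr_linear)

        radio = capacity_bps(20e6, 100)    # regulated bandwidth, modest SNR (20 dB)
        wire = capacity_bps(1e9, 1000)     # wide bandwidth, high SNR (30 dB)

        print(f"radio-like channel: {radio / 1e6:6.1f} Mb/s")
        print(f"wired channel:      {wire / 1e9:6.2f} Gb/s")

    Most of the wired link's headroom comes from its raw bandwidth, which is why spending silicon on faster binary signaling is usually a better deal there than spending it on denser modulation.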



    When bandwidth becomes a limiting factor but signal-to-noise ratio is not, the designer finds other ways to increase the data rate. It becomes an economic decision whether to go to a more complex encoding scheme or to add more wires:



    You will indeed see serial/parallel protocols used when a single wire is still too slow. PCI-Express does this to overcome the bandwidth limitations of the hardware by using multiple lanes.



    In fiber transmission, they don't have to add more fibers (although they might use spares if they are already in place and not being used). They can use wavelength-division multiplexing. Generally, this is done to provide multiple independent parallel channels, and the skew issue mentioned in other answers is not a concern for independent channels.
























    • 1




      Nice answer. It does make me curious if somebody could (or already has?) do something like implement 256-QAM at USB3 speeds for truly amazing transfer rates...
      – mbrig
      Aug 30 at 20:04










    • FWIW, the fiber world is starting to develop and deploy more complex modulation schemes. PAM-4 is coming for 100 and 400 G Ethernet, and telecoms systems are (I believe, but it's not my field) starting to use coherent QAM.
      – The Photon
      Aug 31 at 4:28










    • but, really, if the SNR of a wire line is so good, why not squeeze out every possible piece of bandwidth? Why push into GHz frequencies (with all the relevant problems), when you could go much slower and employ some modulation/coding? Wouldn't it be more convenient?
      – artemonster
      Aug 31 at 9:21










    • You can do that. It becomes an economic decision.
      – Jim
      Aug 31 at 14:50






    • 1




      QAM requires a carrier, so it doesn't make much sense to do anything other than PAM for 'baseband' digital. It is used in the optical domain, using light itself as the carrier. Over copper, you would basically just build a radio transceiver. This would require much more high speed analog circuitry, and all of the complexity and increased power consumption that comes with that. Serializers and deserializers are relatively simple in comparison. IMHO, we're more likely to see integrated silicon photonic modulators and detectors than moving to QAM over copper.
      – alex.forencich
      Sep 1 at 21:32














    edited Aug 30 at 22:25

    answered Aug 30 at 19:50

    Jim










    up vote
    1
    down vote













    Take four semi trucks, each with a payload, on a highway with four lanes per side. For the trucks to haul the payload in parallel, they have to stay perfectly side by side; no truck can be ahead of or behind the others by more than, let's say, an inch. Hills, curves, it doesn't matter. Vary too much and it's a total fail.


    But have them take one lane and the distance between them can vary. It's true that, end to end, it takes over four times the distance from the front of the first truck to the back of the last to move the payloads, but they don't have to be perfectly spaced. Each truck only has to keep its own cab and payload properly positioned within its own length.


    Interfaces like PCIe and networking even go so far as to run lanes in parallel, but while those are technically multiple separate data paths, they are not parallel in the sense of having to leave and arrive at the same time. In the truck analogy, the four trucks can drive on four roughly parallel lanes with some variation; the trucks are marked by which lane they used, so when they arrive at the other end the payloads can be combined back into the original data set. And/or each lane can carry one data set serially, so by having more lanes you can move more data sets at a time.


























    • The lanes don't need to be perfectly "aligned". Modern multi-lane interfaces use more sophisticated methods of symbol alignment than being perfectly side-by-side.
      – Ale..chenski
      Aug 31 at 7:18






    • 1




      Right, that's what I said.
      – old_timer
      Aug 31 at 12:27














    answered Aug 31 at 0:57

    old_timer










    up vote
    1
    down vote













    As an addition to Dmitry Grigoryev's comment:


    Analogue transmission is always more error-prone than digital transmission. A digital serial transmission, for example, has clocked edges, whereas an analogue signal floats somewhere between 0 V and VDD, so interference is much harder to detect. One could take that into account and use differential signaling, as is done in audio.


    But then you run into the speed vs. accuracy trade-off of DACs/ADCs. If you have two digital systems talking to each other, it makes much more sense to use a digital transmission, since you don't need a time-consuming DA-AD translation.


    However, things are different if you have an analogue computer running on analogue control voltages. There are still some around (they basically look like analogue modular synths), and typically you can build analogue computers only for specific tasks. Funny presentation in German about analogue computing.


    Talking about analogue modular synths: they're also a kind of analogue computer, designed specifically to do calculations on changing signals.


    So there is analogue transmission in computing, but it is limited to very specific fields.



























    answered Sep 3 at 10:48

    Tillo Reilly

             
