How can a TCP window size be allowed to be larger than the maximum size of an Ethernet packet?

I know that TCP window sizes can be scaled to over 64 KB, but looking at an Ethernet packet diagram, such as this one:

[Network Layer model]

it looks like a layer-2 frame is limited to a size much smaller than that. How does ACKing work at the TCP layer if a single TCP request requires several network packets to be assembled at a receiver?
  • I think you are confusing window with MSS (Maximum Segment Size). – Ron Maupin♦ 4 hours ago
  • I am @RonMaupin (in fact I didn't know there was a difference) – Zach Smith 38 mins ago
  • @Zac67 explained it for you. – Ron Maupin♦ 36 mins ago
ethernet tcp
asked 5 hours ago by Zach Smith
1 Answer
The TCP window size is generally independent of the maximum segment size, which depends on the maximum transmission unit, which in turn depends on the maximum frame size.



Let's start low.



The maximum frame size is the largest frame a network (segment) can transport. For Ethernet, this is 1518 bytes by definition.



The frame encapsulates an IP packet, so the largest packet - the maximum transmission unit (MTU) - is the maximum frame size minus the frame overhead. For Ethernet, that's 1518 - 18 = 1500 bytes.



The IP packet encapsulates a TCP segment, so the maximum segment size (MSS) is the MTU minus the IP overhead minus the TCP overhead (the MSS doesn't include the TCP header). For Ethernet and TCP over IPv4 without options, this is 1500 - 20 (IPv4 header) - 20 (TCP header) = 1460 bytes.
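The size arithmetic above can be written out as a short sketch (constants assume untagged Ethernet II frames and option-free IPv4/TCP headers):

```python
# Header overheads for TCP over IPv4 on untagged Ethernet,
# assuming no VLAN tag and no IP or TCP options.
MAX_FRAME = 1518   # bytes: maximum Ethernet frame size
ETH_OVERHEAD = 18  # 6 dst MAC + 6 src MAC + 2 EtherType + 4 FCS
IPV4_HEADER = 20   # minimum IPv4 header
TCP_HEADER = 20    # minimum TCP header

mtu = MAX_FRAME - ETH_OVERHEAD        # maximum transmission unit
mss = mtu - IPV4_HEADER - TCP_HEADER  # maximum segment size

print(mtu, mss)  # 1500 1460
```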



Now, TCP is a transport protocol that presents itself as a stream socket to the application. That means an application can transmit any arbitrarily sized amount of data across that socket. For that, TCP splits the data stream into segments (1 to MSS bytes long), transmits each over IP, and reassembles the stream at the destination.
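The stream-to-segment split can be illustrated with a toy helper (`segmentize` is a hypothetical name for illustration, not a real socket API; real stacks do this inside the kernel):

```python
def segmentize(data: bytes, mss: int = 1460) -> list:
    """Split an application byte stream into MSS-sized chunks."""
    return [data[i:i + mss] for i in range(0, len(data), mss)]

# A 4000-byte write becomes three segments: two full-sized, one partial.
chunks = segmentize(b"x" * 4000)
print([len(c) for c in chunks])  # [1460, 1460, 1080]
```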



TCP segments are acknowledged by the destination to guarantee delivery. Imagine the source node sent only a single segment, waited for its acknowledgment, and then sent the next segment. Regardless of the actual bandwidth, the throughput of this TCP connection would be limited by the round-trip time (RTT, the time it takes for a packet to travel from source to destination and back again).



So, if you had a 1 Gbit/s connection between two nodes with an RTT of 10 ms, you could effectively send 1460 bytes every 10 ms or 146 kB/s. That's not very satisfying.
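The one-segment-per-RTT figure works out as follows (a minimal sketch of the arithmetic, using the MSS and RTT from the text):

```python
# Stop-and-wait throughput: one MSS-sized segment delivered per round trip,
# so the link bandwidth is irrelevant and only the RTT matters.
mss = 1460   # bytes per segment
rtt_ms = 10  # round-trip time in milliseconds

throughput = mss * 1000 // rtt_ms  # bytes per second
print(throughput)  # 146000 -> 146 kB/s
```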



TCP therefore uses a send window: multiple segments can be "in flight" at the same time, sent out and awaiting acknowledgment. It's also called a sliding window because it advances each time the segment at the front of the window is acknowledged, triggering transmission of the next segment the window slides onto. This way, the segment size doesn't matter. With the traditional 64 KiB window, that whole amount can be in flight at once, transporting 64 KiB per 10 ms = 6.5 MB/s. Better, but still not really satisfying for a gigabit connection.
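With a full window in flight per round trip, the same calculation gives (again a sketch of the text's arithmetic):

```python
# Sliding-window throughput: up to one full window in flight per round trip.
window = 64 * 1024  # bytes: the classic 64 KiB window
rtt_ms = 10         # round-trip time in milliseconds

throughput = window * 1000 // rtt_ms  # bytes per second
print(throughput)  # 6553600 -> roughly 6.5 MB/s
```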



Modern TCP uses the window scale option, which can increase the send window by powers of two (to just under 1 GiB), providing for some future growth.
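The window scale option (RFC 7323) left-shifts the 16-bit window field by an advertised shift count of at most 14, which caps the scaled window just under 1 GiB:

```python
# Window scaling: the advertised window is the 16-bit header field
# shifted left by the negotiated scale factor (at most 14 per RFC 7323).
base_window = 65535  # largest value of the 16-bit window field
max_shift = 14       # maximum shift count allowed by RFC 7323

max_window = base_window << max_shift
print(max_window)  # 1073725440 bytes, just under 1 GiB (2**30)
```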



But why isn't all data just sent at once - why do we need a send window at all? If you send everything as fast as you locally can, and there's (very likely) a slower link somewhere on the path to the destination, significant amounts of data would need to be queued. No switch or router can buffer more than a few MB, so the excess traffic would have to be dropped. Lacking acknowledgment, it would be resent, and the excess dropped again. This would be highly inefficient and would severely congest the network. TCP handles this problem with its congestion control, adjusting the window size to the effective bandwidth and round-trip time in a complex algorithm.

answered 4 hours ago by Zac67