“Warnings: Operation caused residual I/O” versus key lookups

I've seen this warning in SQL Server 2017 execution plans:




Warnings: Operation caused residual IO [sic]. The actual number of rows read was (3,321,318), but the number of rows returned was 40.
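For context, this warning fires when an operator reads many rows that a residual (non-seek) predicate then discards. A minimal sketch of a query shape that can produce it, with invented table and column names: if the index key is (BatchNo) alone, the seek locates the batch quickly, but the remaining predicates must be evaluated against every row read, so "rows read" far exceeds "rows returned".

    -- Illustrative only (invented names). With an index keyed on BatchNo
    -- alone, DeviceNo and CreatedUTC become residual predicates applied
    -- row-by-row to everything the seek reads.
    SELECT *
    FROM dbo.Telemetry
    WHERE BatchNo = 42
      AND DeviceNo = 7
      AND CreatedUTC >= '2018-09-01';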




Here is a snippet from SQL Sentry Plan Explorer:



[Plan Explorer screenshot: operator showing the residual I/O warning]



To improve the query, I added a non-clustered index so SQL Server can seek to the relevant rows. It works fine, but normally there would be too many (big) columns to include in the index. It looks like this:



[Screenshot: plan using the covering index]
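For reference, the covering index takes roughly this shape. The key columns follow the index name IX_BatchNo_DeviceNo_CreatedUTC mentioned in the comments below; the table name and the INCLUDE list are placeholders:

    -- Sketch only: key columns inferred from the index name; the table
    -- name and INCLUDE columns are placeholders for the real ones.
    CREATE NONCLUSTERED INDEX IX_BatchNo_DeviceNo_CreatedUTC
    ON dbo.Telemetry (BatchNo, DeviceNo, CreatedUTC)
    INCLUDE (Payload, Reading, StatusCode);  -- the "big" columns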



If I add the index without any include columns, and force its use, the plan looks like this:



[Screenshot: plan with key lookups when the key-only index is forced]
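The key-only variant, and the table hint used to force it, would look something like this (again with placeholder names):

    -- Same key columns as above, but no INCLUDE list, so every column
    -- not in the key requires a key lookup against the clustered index.
    CREATE NONCLUSTERED INDEX IX_BatchNo_DeviceNo_CreatedUTC
    ON dbo.Telemetry (BatchNo, DeviceNo, CreatedUTC);

    SELECT *
    FROM dbo.Telemetry WITH (INDEX (IX_BatchNo_DeviceNo_CreatedUTC))
    WHERE BatchNo = 42
      AND DeviceNo = 7
      AND CreatedUTC >= '2018-09-01';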



Obviously, SQL Server estimates the key lookups as much more expensive than the residual I/O. I have a test setup without much test data (yet), but when the code goes into production it needs to work with much more data, so I'm fairly sure some sort of non-clustered index is needed.



Are key lookups really so expensive, when running on SSDs, that I have to create full-fat covering indexes (with a lot of include columns)?










Tags: sql-server, sql-server-2017, nonclustered-index

Asked 17 hours ago by Henrik Staun Poulsen · Edited 17 mins ago by Peter Mortensen

  • Question for you: based on your last paragraph, why would the cost of a lookup be less based on the hardware? (Presumably the same hardware that the non-clustered index will be running on.) I'm unclear on that.
    – George.Palacios
    17 hours ago

  • It's estimated to be 76.9% of the cost of that plan. That doesn't mean it's expensive. Look at the I/O cost of 0.06 compared to your original plan with I/O cost over 10. I think you'll be better off, but you should test with actual plans against enough data that really simulates what production will be like (and if the query runs long enough for us to collect data from sys.dm_exec_query_profiles, we'll re-cost it from actual costs vs. estimates). Stop using the estimated cost % as some absolute indicator of cost - it's relative, and it's often out to lunch.
    – Aaron Bertrand♦
    17 hours ago

  • @George.Palacios See brentozar.com/pastetheplan/?id=SJtiRte2X, but it is part of a long stored procedure. Look for IX_BatchNo_DeviceNo_CreatedUTC.
    – Henrik Staun Poulsen
    17 hours ago

  • @JacobH Unless we have information about the actual costs that happened during query execution. SentryOne Plan Explorer does this for queries that (a) run long enough and (b) run against SQL Server 2014 or greater. Currently it relies on sys.dm_exec_query_profiles, but in a future version it will be able to use new information that has been added to showplan in recent SQL Server releases.
    – Aaron Bertrand♦
    16 hours ago

  • No, I'm saying that the costs you see are estimated costs and, as Paul explains below, don't necessarily reflect runtime performance.
    – Aaron Bertrand♦
    16 hours ago
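
As a side note on the sys.dm_exec_query_profiles DMV mentioned above, a minimal sketch of sampling it from a second session while the query runs (the session id is a placeholder, and the target session must have query profiling enabled, e.g. by requesting an actual execution plan):

    -- Live per-operator progress for a running query (SQL Server 2014+).
    SELECT node_id,
           physical_operator_name,
           row_count,           -- rows produced so far
           estimate_row_count   -- the optimizer's estimate
    FROM sys.dm_exec_query_profiles
    WHERE session_id = 53       -- placeholder SPID of the target session
    ORDER BY node_id;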
















1 Answer

Score: 12 · Answered 16 hours ago by Paul White♦

The cost model used by the optimizer is exactly that: a model. It produces generally good results over a wide range of workloads, on a wide range of database designs, on a wide range of hardware.



You should generally not assume that individual cost estimates will strongly correlate with runtime performance on a particular hardware configuration. The point of costing is to allow the optimizer to make an educated choice between candidate physical alternatives for the same logical operation.



When you really get into the details, a skilled database professional (with the time to spare on tuning an important query) can often do better. To that extent, you can think of the optimizer's plan selection as a good starting point. In most cases, that starting point will also be the ending point, since the solution found is good enough.



Included columns are great for read-heavy OLTP workloads, where the trade-off between index space usage and update cost versus runtime read performance makes sense.



All that said: yes, lookups can be expensive even on SSDs, or ultimately even when reading exclusively from memory. Traversing b-tree structures is not free, and obviously the cost mounts as you do more of them.



Only you can decide if the trade-off is worth it in your case. Test both alternatives on a representative data sample, and make an informed choice.
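
One simple way to run that comparison is to execute each variant with I/O and timing statistics enabled and compare logical reads and CPU time. The query and names below are placeholders standing in for the real ones:

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- Variant 1: let the optimizer choose (the covering index, if present).
    SELECT DeviceNo, CreatedUTC, Payload
    FROM dbo.Telemetry
    WHERE BatchNo = 42;

    -- Variant 2: force the key-only index to measure the key lookups.
    SELECT DeviceNo, CreatedUTC, Payload
    FROM dbo.Telemetry WITH (INDEX (IX_BatchNo_DeviceNo_CreatedUTC))
    WHERE BatchNo = 42;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;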





