Is it acceptable to use deadlocks as a strategy?

We recently upgraded our SQL production environment from 2008 R2 to 2017. I am finally at a point where I want to try optimistic concurrency in the form of snapshot isolation. I read Brent Ozar's article suggesting SET ALLOW_SNAPSHOT_ISOLATION ON at the database level, and then altering queries to take advantage of the feature, which is new to us. To my surprise, my manager, who is very knowledgeable in SQL, did not want to move in this direction. I understood him to say that we depend on deadlocks to prevent us from reading dirty data, and that he saw no reason to solve a problem that does not exist. I did not know how to respond. I have never considered deadlocks as anything other than something to try to eliminate, not something to depend on as a way of life. I understand that they serve an important function, but they are a sign of something that can be improved.



My question is: how many others rely on deadlocks to such an extent that you would not consider trying to eliminate them?
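For reference, the change under discussion looks roughly like the following. This is a sketch; the database, table, and column names are placeholders:

```sql
-- One-time DDL change: enable snapshot isolation for the database.
-- MyDatabase is a placeholder name.
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Each session that wants versioned reads must then opt in explicitly:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

BEGIN TRANSACTION;
    -- Reads here see a consistent snapshot as of the transaction start,
    -- without taking shared locks that could deadlock with writers.
    SELECT col1, col2 FROM dbo.MyTable WHERE id = 42;
COMMIT;
```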
  • Are you sure your manager said "deadlocks" and not "locks"?
    – mustaccio
    3 hours ago










  • We were quite specifically talking about deadlocks, not blocking. We have one particular database that generates 90% of the deadlocks in our environment, with a small handful of tables involved. I did not design it, but essentially it is many write operations deadlocking with read operations that cannot use NOLOCK.
    – JohnH
    2 hours ago










  • No, it's not; if anything, the deadlocks are too random to rely on. It seems your manager is not that knowledgeable after all.
    – dean
    1 hour ago
deadlock snapshot-isolation
asked 3 hours ago
JohnH
1 Answer
Snapshot and Read Committed Snapshot isolation levels do not prevent deadlocks from happening. They make the engine use row versioning, so readers do not block writers and writers do not block readers, as they do by default in SQL Server. That improves concurrency, but it requires changes to applications that rely on the default "readers block writers and writers block readers" behavior. I suspect you misunderstood your manager, and your system relies on that default blocking behavior, not on deadlocks, which in most cases can be avoided by application design.
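The two row-versioning options mentioned above are enabled separately and behave differently; a sketch (the database name is a placeholder):

```sql
-- Option 1: statement-level versioning. Existing READ COMMITTED queries
-- automatically read row versions instead of taking shared locks.
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

-- Option 2: transaction-level versioning. Only sessions that explicitly
-- issue SET TRANSACTION ISOLATION LEVEL SNAPSHOT use versioned reads;
-- everyone else keeps the default locking behavior.
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
```

READ_COMMITTED_SNAPSHOT is the option that silently changes behavior for existing applications, which is why the caution about "readers block writers" dependencies matters.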



If I happen to work with SQL Server on a new system, I will certainly switch it to use row versioning instead of locking.



However, for an existing system, it is usually safer and cheaper not to fix what is not broken.
  • I read docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql/… , and definitely get the idea that this would prevent a deadlock from killing a reader when a write is happening. This is what I was trying to solve. Using SentryOne I can see a nice graphical rendition of the deadlock, and it seems to be the read process that gets killed.
    – JohnH
    2 hours ago










  • In such a case, yes, the deadlock won't happen under snapshot isolation levels. But even without snapshot, the deadlock is not guaranteed to happen: the order in which objects (rows, pages, tables) are locked is determined by an internal process controlled by the optimizer. During one execution the write transaction will wait until the read completes; during another execution you can get a deadlock. So an application cannot rely on it always happening; rather, it should expect that it may happen and react by (possibly) re-issuing the transaction.
    – a1ex07
    1 hour ago
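The retry pattern the last comment describes can be sketched in T-SQL like this (the table and column names are placeholders):

```sql
-- Retry a transaction when it is chosen as the deadlock victim (error 1205).
DECLARE @retries INT = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
            UPDATE dbo.MyTable SET col1 = col1 + 1 WHERE id = 42;
        COMMIT TRANSACTION;
        BREAK;  -- success: stop retrying
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205 AND @retries > 1
            SET @retries = @retries - 1;  -- deadlock victim: try again
        ELSE
            THROW;  -- other errors, or retries exhausted, propagate
    END CATCH;
END;
```

In practice the retry often lives in the application layer rather than in T-SQL, but the shape is the same: catch error 1205, roll back, and re-issue the whole transaction.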
answered 2 hours ago
a1ex07