How can I avoid the risk of mistakenly deleting data from a production environment with no backups? [closed]

I am a junior developer and still not confident in my role. I work with very sensitive, unrecoverable data in our production environment, and a lot of my workload involves tasks in that live environment: I manually move data around in the database using SQL scripts. If I mistakenly deleted some valuable data, what would I be supposed to do?



Should I ask my seniors not to give me these kinds of risky tasks? That might come across as saying I cannot handle any risky work at all; I only want to ask them not to overload me with risky work. What is the best way to raise this?



Restoring the data from backups is not an option, as the data I'm handling changes frequently. To be clear, I haven't accidentally deleted anything yet; this is a hypothetical situation.





















asked Apr 16 '15 at 6:43 by Jude Niroshan, edited Apr 17 '15 at 18:10 by CodeGnome


closed as off-topic by gnat, user9158, IDrinkandIKnowThings, jmoreno, yochannah Apr 19 '15 at 12:11


This question appears to be off-topic. The users who voted to close gave this specific reason:


  • "Questions asking for advice on what to do are not practical answerable questions (e.g. "what job should I take?", or "what skills should I learn?"). Questions should get answers explaining why and how to make a decision, not advice on what to do." – gnat, Community, IDrinkandIKnowThings, jmoreno, yochannah

If this question can be reworded to fit the rules in the help center, please edit the question.








  • Since you see yourself as inexperienced, are you sure these data changes are unrecoverable? Maybe you need to ask what to do just in case. – user8365, Apr 16 '15 at 22:33

  • Comments are not for extended discussion; this conversation has been moved to chat. – Elysian Fields♦, Apr 17 '15 at 14:03

  • Can you not implement at least basic backup scripts? – Pepone, Apr 17 '15 at 22:45

  • The solution to highly dynamic data is not to "not have backups", but rather to have real-time data mirroring/replication in addition to periodic backups. – LindaJeanne, Apr 19 '15 at 11:03

  • @LindaJeanne Yes, we have several slave databases and I'm dealing with the master database. – Jude Niroshan, Apr 19 '15 at 15:19
















9 Answers

















112 votes (accepted), answered Apr 16 '15 at 7:25 by Jan Doggen










One thing you can do now is verify what safeguards are in place. This shows you are taking responsibility. Does your company have recent backups in case anything goes wrong? Can you make ad-hoc backups before changing critical parts? Are you working on test systems, and do you have a good procedure for implementing the changes in the live systems? And so on.



If these safeguards are missing, do a request to have them available.



BTW I'm surprised that you say you are working with unrecoverable data. If that really is the case, it's a red flag for the company as a whole. Nothing (well, as little as possible) should be 'unrecoverable'.
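
As a concrete illustration of the "ad-hoc backup before changing critical parts" idea, here is a minimal sketch in SQL. The table name, column name and filter are invented for illustration, and the exact syntax depends on your database (this form works on MySQL and PostgreSQL; SQL Server uses SELECT ... INTO instead):

    -- Snapshot only the rows you are about to touch into a dated scratch table.
    -- 'orders' and 'status' are placeholder names; adapt them to your schema.
    CREATE TABLE orders_backup_20150416 AS
    SELECT * FROM orders
    WHERE status = 'PENDING_MIGRATION';

    -- ...run the risky change here...

    -- If something goes wrong, the snapshot lets you put the old values back
    -- without a full database restore.

Even this small step turns "unrecoverable" into "recoverable for the rows I touched".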
























  • "Do a request to have them available"? I'm 90% sure the answer will be "no". Companies who know that failsafes are needed have already installed them before a junior dev asks for them. The ones who don't have them are the ones who believe that "I don't need no stinkin' backups, it's your responsibility to be infallible, and the fear of me firing your ass if anything happens will ensure that you make no mistakes ever". They don't change their ways just because a new junior dev asked for it. So in reality, this answer is a blind alley. – rumtscho, Apr 17 '15 at 10:11

  • @rumtscho The request should be made anyway, on the .2% chance that he'll have a boss that actually understands how important this is, and so that the request can be documented to avoid future finger-pointing. – Zibbobz, Apr 17 '15 at 18:26

  • @rumtscho Or maybe nobody has ever brought the idea up to someone who can spend the money on the solution... I've been handed poorly maintained datasets in the past and had good success with requesting an update to infrastructure to better manage them. – Jasper, Apr 17 '15 at 18:29

  • You need backups, not only to protect customer data, but to make work reasonably efficient. For example, if you know there's no backup, you will probably work in a way that is too cautious, e.g. rechecking every single command on your local machine using some test data, slowing down even routine tasks. If you need to do a hotfix on the live machine, do it in an efficient way but have a rollback plan in case the hotfix breaks something else unexpectedly. – Brandin, Apr 19 '15 at 19:31

















36 votes, answered Apr 16 '15 at 12:29 by PointlessSpike













You should never, ever modify unrecoverable production data.



I really can't stress this enough. You should be willing to take a stance on this. If it were me I'd make two requests.



  1. That there are frequent backups (ideally, more than one set) taken automatically. The frequency depends on the sensitivity of the data, but I'd say at least once a day. Is the customer going to be willing to accept a week's worth of lost data if something goes wrong?


  2. That you aren't the one to do it. This is actually more important, but the first should still be the case. You, a software developer, shouldn't be messing with live data. I don't know the size of your company, but I work in a company of fifty, and developers get told off if they interact with the live system. If you have a support department, that's their job, and even with backups you should feel uneasy about touching live data. Development work should only ever be done on a development system. If something needs to be done to the live system, you get them to do it, even at the cost of efficiency. If you need to, provide a well-tested script. Generally, though, any changes should only happen when everyone is aware that the system could go down, so customers need to be aware of this possibility. If it is necessary for you to touch live data as a developer, don't do it without backups, even for something small.


These things are pretty basic. You should always consider the potential effects of mistakes and do everything possible to minimise those risks. These might sound overly cautious, but if you don't take precautions, the personal consequences will be much greater if something goes wrong.
























  • No devs should ever be running scripts on a production database. They should not even have the rights to do so. Only a DBA or build team member should do this, or at worst only managers should do this. A company that allows junior devs access to production deserves what they get. – HLGEM, Apr 16 '15 at 12:49

  • @HLGEM Agreed, but sadly the reality is that very many companies have developers, good ones, that are practically forced to do this sort of risky crap all the time. "Hey Jim! Kim just uploaded the new leads to production through the portal, Legal wants us to change their source so accounting can bill them properly, there's 60,000 of them, oh and accounting needs to be able to start invoicing in an hour so we don't have time to copy the data to the dev database, make it happen! counting on you champ!" facepalm (I wish that was a joke) – RualStorge, Apr 16 '15 at 19:18

  • @HLGEM: Not every company is large enough to have a "DBA" or "Build team". And certainly I've never met a manager who has the first clue how to do anything like this. – Lightness Races in Orbit, Apr 17 '15 at 0:25

  • @PointlessSpike Sometimes one person works two roles. Maybe the developer is also the support staff. I've been employed as a software engineer but also done build, test engineering, and client support as part of the same role. You work with the staff you have. – Gusdor, Apr 17 '15 at 8:25

  • @Lightness Been there - in micro-startups people have to wear many hats. But even there, the only people who should have update access to a production server are individuals who know exactly what they are doing, who know to rehearse even the smallest change in a test environment beforehand, and who are trusted by the business to do it. Preferably as few as practicable. – Julia Hayward, Apr 17 '15 at 10:36

















17 votes













Before doing any work which could result in loss of data, you need to make sure you have a rollback plan. This usually means taking a manual database backup before running any SQL script that changes data. This is part of your responsibility as the person carrying out the work.



Next time you are asked to do such work, let your line manager know that you are going to take a backup immediately before making the change.



Side note: it's always useful to enclose your SQL scripts in BEGIN TRANSACTION .. ROLLBACK TRANSACTION the first time you run them on production data. This executes the script and shows you the output without actually applying the changes, which gives you a good indication of how many records will be changed and whether there will be any errors (see the sketch below).
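
A minimal sketch of that dry-run pattern, assuming SQL Server / T-SQL syntax; the table and column names are made up for illustration:

    BEGIN TRANSACTION;

    -- The risky change; 'orders', 'status' and the date are placeholder names/values.
    UPDATE orders
    SET status = 'ARCHIVED'
    WHERE created_at < '2015-01-01';

    -- @@ROWCOUNT reports how many rows the UPDATE touched (T-SQL).
    SELECT @@ROWCOUNT AS rows_affected;

    -- Dry run: undo everything. Once you are happy with the row count and
    -- output, replace ROLLBACK with COMMIT and run the script for real.
    ROLLBACK TRANSACTION;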


























  • Yes, think about "What could go wrong?" before it goes wrong. However, the advice about "BEGIN TRANSACTION..." strikes me as a bit dangerous: even with that applied, the query could still take down a production system, for example by causing too much load. These things depend too much on the specific situation; there is no universal "safe" way to run stuff on a production system. – sleske, Apr 16 '15 at 10:18

  • @sleske If it was going to cause too much load and take down the server, then it would do so with or without the begin/rollback transaction, so I don't see the downside of using it compared to just executing the script. – rdab, Apr 16 '15 at 11:10

  • Yes, of course, and sorry, that's not what I meant. I meant: even with the ROLLBACK at the end, running the script may still cause problems. Of course adding the ROLLBACK is much better than just running without it right away. I just wanted to alert people that this could still be dangerous. – sleske, Apr 16 '15 at 11:16

  • Don't forget to verify that you can restore (or at least retrieve your data) from your backup before you actually need to do so for real! – Gwyn Evans, Apr 16 '15 at 17:29

















9 votes













Organizational Risk and Individual Due Diligence




If I mistakenly deleted some valuable data, what am I supposed to do? I manually (using SQL scripts) move things in the database here and there.




The question of whether or not the company is Doing the Right Thing™ by making changes to production data that has no backups is really a business decision that is above your paygrade. While you can certainly recommend that they not do that because of the risks, I'd be very surprised if they weren't already aware of the risks and considered them acceptable risks to the business in comparison to the costs of doing something systemic about it.



For your own part, you should perform your own full or partial backups before making changes. While it might be impractical to backup the entire system, you can certainly dump the records you're planning to change or the configuration files you're planning to modify so that you can roll them back in the case of error.



This won't protect you from catastrophic failures (dropping the entire database, for example), but it would certainly ensure that if you make changes to record 12345, you can restore that record if it turns out your changes were incorrect.
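
For instance, a row-level safety net along these lines makes that kind of targeted restore possible. This is only a sketch, written in PostgreSQL-flavoured SQL; the 'customers' table, its columns and id 12345 are invented for the example:

    -- Keep a copy of just the rows you are about to modify.
    CREATE TABLE customers_before_fix AS
    SELECT * FROM customers WHERE id = 12345;

    -- ...apply the intended change to customers...

    -- If the change turns out to be wrong, put the old values back
    -- (PostgreSQL UPDATE ... FROM; other databases have their own join-update syntax).
    UPDATE customers
    SET    name  = b.name,
           email = b.email
    FROM   customers_before_fix b
    WHERE  customers.id = b.id;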



Just remember that while you have a professional responsibility to bring risks to the attention of your management, and to mitigate those risks as well as you are able in your own work, your organization's management team actually owns 100% of the business risk. If you've done your due diligence to the best of your ability, any residual risk is on the organization rather than on you.



In Case of Accident...



In the event that something does go wrong, your professional responsibility is to inform someone in authority immediately. You should let them know what happened, what data has been lost, what (if anything) you are able to recover, and offer to assist with any additional recovery efforts that may be needed.



These sorts of mistakes should not be covered up. However, while you should own up to any mistakes or errors you have made, remember that the responsibility for having a system without proper safeguards is a risk that belongs to the organization, not you. Take responsibility for your part in the accident, but don't take the blame for the failure of a system without proper safeguards.



There's a big difference between taking ownership and taking blame. Make sure you only accept the former, and not the latter, unless you've truly done something negligent.



































    4 votes













    There are actually two questions in one here:



    • who should be responsible for changes in Production data?

    • how best to perform those changes?

    Let me address them separately.




    Who should be responsible for changes in Production data?



    No single person.



    Whatever the way you perform the change, a change in Production (or any sensitive system) should be reviewed by at least one other (knowledgeable) person, and approved by some manager.



    This is teamwork and obeying the chain of responsibility. At this point, it does not matter if you make an error:



    1. It will only be applied if someone else has reviewed it (and failed to notice the issue)

    2. It will only be applied if some manager approved it (and took responsibility for it)

    If no manager is willing to take responsibility for the change, do not perform it.



    If people argue about the time-sensitivity of the change, tell them that one should never mistake being fast for rushing. I would actually argue for extra care in case of urgency (another reviewer, for example), specifically because pressure increases the chance of errors. It is much faster to be right the first time, than messing things up, cleaning the mess, and finally performing the change.




    How best to perform those changes?



    Ideally:



    • a back-up is available and there is a restore procedure

    • the change is performed through a script, which is accompanied by a vetted (*) fallback script

    Now, unfortunately, conditions are not always ideal.



    Back-ups are good, but in a live environment where data change every second it is just not possible to keep the back-ups exactly up-to-date; back-ups can only be used in case of massive error and by accepting that the latest changes will be lost. This is why I cannot insist enough on scripting the changes, and checking that the fallback script is working as intended.



    Some changes cannot be rolled back at all. For example, when removing a column, the data in that column cannot be restored if there is an issue. Those changes should be done in two steps:



    • in a first step, disable access to the piece of data that will be deleted without actually deleting it; in the case of a column, rename it, for example. This step can be rolled back.

    • then, when it has been assessed that the change was valid (several days or weeks have gone by without issues), perform the irreversible change in a single-purpose script (see the sketch after the footnotes below)

    (*) To vet a fallback script, you have to run your script against a copy of the real database, then apply the fallback script and check that the data is back to normal.



    (*) I have seen the suggestion of doing the change in a transaction; this is insufficient (what if you realize your mistake after the commit?), contention-prone (you are blocking all modified rows until you commit) and not always possible (too large a change-set/risk of deadlocks). Still, if possible, use transactions within your script, as half-done changes are harder to roll back.
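
    As an illustration of the two-step approach for an irreversible change, here is a hedged sketch for removing a column. The table and column names are invented, and the RENAME COLUMN syntax shown is PostgreSQL / MySQL 8 style; older or other databases use different forms:

        -- Step 1 (reversible): hide the column without destroying its data.
        ALTER TABLE invoices RENAME COLUMN legacy_code TO legacy_code_deprecated;

        -- ...wait days or weeks; if anything breaks, simply rename it back...

        -- Step 2 (irreversible, in its own single-purpose script,
        -- only after the change has been validated):
        ALTER TABLE invoices DROP COLUMN legacy_code_deprecated;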

































      3 votes













      If you are stuck with this system (and I would seriously push back on it, as it is extremely risky and poor practice), this is what I would do.



      First, create a backup table for the data you are going to affect (we have a scratch database for things that are one-time use only). Depending on the size of the data, you might want to create an index on it.



      Once you have the backup table, put everything in a transaction. Then, when you run the query that affects the data, join to the backup table you created.



      When you run it, run one step at a time and note how many records are in the staging table. If the number of records affected by the action query does not match the number of records in that table, you will want to roll back and then figure out why.



      This approach also gives you the most flexibility to restore if the change was bad, as it is easier and generally faster to update one table back to the old values than to restore the whole database. And if only a few records were changed in error, you have the ability to change just those back to the old values.
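
      A hedged sketch of that workflow, in T-SQL; the 'accounts' table, its columns, the scratch database and the filter are all placeholders:

          -- 1. Copy the rows that will be affected into a scratch/backup table.
          SELECT *
          INTO   scratch.dbo.accounts_fix_20150416
          FROM   accounts
          WHERE  region = 'EU';

          BEGIN TRANSACTION;

          -- 2. Apply the change, joining to the backup table so only
          --    the rows you snapshotted can possibly be touched.
          UPDATE a
          SET    a.billing_source = 'PORTAL'
          FROM   accounts a
          JOIN   scratch.dbo.accounts_fix_20150416 b ON b.id = a.id;

          -- 3. Compare counts before committing; roll back on any mismatch.
          SELECT @@ROWCOUNT AS rows_updated;
          SELECT COUNT(*)   AS rows_backed_up FROM scratch.dbo.accounts_fix_20150416;

          -- COMMIT TRANSACTION;   -- only once the counts agree
          -- ROLLBACK TRANSACTION; -- otherwise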



      An alternative to all this is to have audit tables that record all changes. However, it is unlikely you have those if devs are running things directly on production. Personally, I would never consider having a database without auditing, because it is great for fixing errors that came from the user interface, as well as from data imports and ad hoc action scripts run against the database, including cases where data is maliciously changed. But I work in a regulatory environment where it is a requirement.



      Added later: I forgot to mention, have someone else code review what you are doing before you do it.



































        2 votes













        Pretty sure you're aware of this, but I'll point out the obvious anyway: test, test, and do some more testing. There's so much that can go wrong if you don't do proper testing, one of many things being removing data by accident. By mimicking the production environment with actual data, you can test in an environment that's realistic and keep errors and bugs to a minimum.



        Do keep in mind that even after hours of extensive testing, errors do and will happen. If they do, report to your supervisor/manager and explain. If, say, your code unexpectedly shuts down a database connection, millions of records that should've been inserted will go missing (just an example). If you find an error or suspect an error: let people know.

































          2 votes













          If you have to do ANYTHING in a production system, do it in a transaction. (Or let a DBA do it if you have one)



          A number of years ago I saw my boss' look of horror as he ran a "simple" update against a production database, without a WHERE clause.



          If he had used a transaction, he could have issued a ROLLBACK and saved himself a panicked night of recovering data from a backup to a still running production system. (The rollback would have taken two seconds, not four hours...)



          (This sounds like a Scott Adams cartoon, but yes, I actually watched this happen...)
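
          A hedged sketch of how an explicit transaction turns that mistake into a two-second rollback; T-SQL flavoured, and the table and column are invented for the example:

            BEGIN TRANSACTION;

            -- Oops: the WHERE clause was forgotten.
            UPDATE employees SET salary = 0;

            -- The row count makes the mistake obvious before it becomes permanent.
            SELECT @@ROWCOUNT AS rows_updated;   -- far more rows than expected

            ROLLBACK TRANSACTION;  -- two seconds, instead of hours of restore work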






















          • My boss regularly tells us about the time he dropped an entire database while doing a department training. He was panicked for a minute that he had done it on production, but he was lucky enough to have done it on our test server database. – Martin Carney, Apr 16 '15 at 22:42

















          1 vote













          Another simple way to prevent catastrophic mistakes is to get in the habit of adding a LIMIT clause (e.g. LIMIT 1 if you are just changing one record).



          So if you're modifying something like a user table, even if you forgot a WHERE clause like @tim's boss did, you'd only mess up one user record and not every single user record.
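
          A small sketch of that habit. Note that LIMIT on UPDATE/DELETE is MySQL-specific (PostgreSQL does not allow it, and SQL Server uses TOP instead); the table and values are placeholders:

            -- Even if the WHERE clause were wrong or missing,
            -- at most one row can be changed.
            UPDATE users
            SET    email = 'new.address@example.com'
            WHERE  id = 12345
            LIMIT  1;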

































            9 Answers
            9






            active

            oldest

            votes








            9 Answers
            9






            active

            oldest

            votes









            active

            oldest

            votes






            active

            oldest

            votes








            up vote
            112
            down vote



            accepted










            One thing you can do now, is verify what safeguards are in place. This shows taking responsibility. Does your company have recent backups in case anything goes wrong? Can you make ad-hoc backups before changing critical parts? Are you working on test systems and do you have a good procedure for implementing the changes in the live systems? Etc.



            If these safeguards are missing, do a request to have them available.



            BTW I'm surprised that you say you are working with unrecoverable data. If that really is the case, it's a red flag for the company as a whole. Nothing (well, as little as possible) should be 'unrecoverable'.






            share|improve this answer


















            • 7




              "do a request to have them available"? I'm 90% sure the answer will be "no". Companies who are know that failsafes are needed have already installed them before a junior dev asks for them. The ones who don't have them are the ones who believe that "I don't need no stinkin' backups, it's your responsibility to be infallible, and the fear of me firing your ass if anything happens will ensure that you make no mistakes ever". They don't change their ways just because a new junior dev asked for it. So in reality, this answer is a blind alley.
              – rumtscho
              Apr 17 '15 at 10:11






            • 1




              @rumtscho The request should be made anyway, on the .2% chance that he'll have a boss that actually understands how important this is, and so that the request can be documented to avoid future finger-pointing.
              – Zibbobz
              Apr 17 '15 at 18:26






            • 2




              @rumtscho Or maybe nobody has ever brought the idea up to someone who can spend the money on the solution... I've been handed poorly maintained datasets in the past and had good success with requesting an update to infrastructure to better manage them.
              – Jasper
              Apr 17 '15 at 18:29











            • You need backups, not only to protect customer data, but to make work reasonably efficient. For example, if you know there's no backup, you will probably work in a way that is too cautious, e.g. rechecking every single command on your local machine using some test data, slowing down even routine tasks. If you need to do a hotfix on the live machine, do it in an efficient way but have a rollback plan in case the hotfix breaks something else unexpectedly.
              – Brandin
              Apr 19 '15 at 19:31














            up vote
            112
            down vote



            accepted










            One thing you can do now, is verify what safeguards are in place. This shows taking responsibility. Does your company have recent backups in case anything goes wrong? Can you make ad-hoc backups before changing critical parts? Are you working on test systems and do you have a good procedure for implementing the changes in the live systems? Etc.



            If these safeguards are missing, do a request to have them available.



            BTW I'm surprised that you say you are working with unrecoverable data. If that really is the case, it's a red flag for the company as a whole. Nothing (well, as little as possible) should be 'unrecoverable'.






            share|improve this answer


















            • 7




              "do a request to have them available"? I'm 90% sure the answer will be "no". Companies who are know that failsafes are needed have already installed them before a junior dev asks for them. The ones who don't have them are the ones who believe that "I don't need no stinkin' backups, it's your responsibility to be infallible, and the fear of me firing your ass if anything happens will ensure that you make no mistakes ever". They don't change their ways just because a new junior dev asked for it. So in reality, this answer is a blind alley.
              – rumtscho
              Apr 17 '15 at 10:11






            • 1




              @rumtscho The request should be made anyway, on the .2% chance that he'll have a boss that actually understands how important this is, and so that the request can be documented to avoid future finger-pointing.
              – Zibbobz
              Apr 17 '15 at 18:26






            • 2




              @rumtscho Or maybe nobody has ever brought the idea up to someone who can spend the money on the solution... I've been handed poorly maintained datasets in the past and had good success with requesting an update to infrastructure to better manage them.
              – Jasper
              Apr 17 '15 at 18:29











            • You need backups, not only to protect customer data, but to make work reasonably efficient. For example, if you know there's no backup, you will probably work in a way that is too cautious, e.g. rechecking every single command on your local machine using some test data, slowing down even routine tasks. If you need to do a hotfix on the live machine, do it in an efficient way but have a rollback plan in case the hotfix breaks something else unexpectedly.
              – Brandin
              Apr 19 '15 at 19:31












            up vote
            112
            down vote



            accepted







            up vote
            112
            down vote



            accepted






            One thing you can do now, is verify what safeguards are in place. This shows taking responsibility. Does your company have recent backups in case anything goes wrong? Can you make ad-hoc backups before changing critical parts? Are you working on test systems and do you have a good procedure for implementing the changes in the live systems? Etc.



            If these safeguards are missing, do a request to have them available.



            BTW I'm surprised that you say you are working with unrecoverable data. If that really is the case, it's a red flag for the company as a whole. Nothing (well, as little as possible) should be 'unrecoverable'.






            share|improve this answer














            One thing you can do now, is verify what safeguards are in place. This shows taking responsibility. Does your company have recent backups in case anything goes wrong? Can you make ad-hoc backups before changing critical parts? Are you working on test systems and do you have a good procedure for implementing the changes in the live systems? Etc.



            If these safeguards are missing, do a request to have them available.



            BTW I'm surprised that you say you are working with unrecoverable data. If that really is the case, it's a red flag for the company as a whole. Nothing (well, as little as possible) should be 'unrecoverable'.







            share|improve this answer














            share|improve this answer



            share|improve this answer








            edited Apr 16 '15 at 7:26

























            answered Apr 16 '15 at 7:25









            Jan Doggen

            11.5k145066




            11.5k145066







            • 7




              "do a request to have them available"? I'm 90% sure the answer will be "no". Companies who are know that failsafes are needed have already installed them before a junior dev asks for them. The ones who don't have them are the ones who believe that "I don't need no stinkin' backups, it's your responsibility to be infallible, and the fear of me firing your ass if anything happens will ensure that you make no mistakes ever". They don't change their ways just because a new junior dev asked for it. So in reality, this answer is a blind alley.
              – rumtscho
              Apr 17 '15 at 10:11






            • 1




              @rumtscho The request should be made anyway, on the .2% chance that he'll have a boss that actually understands how important this is, and so that the request can be documented to avoid future finger-pointing.
              – Zibbobz
              Apr 17 '15 at 18:26






            • 2




              @rumtscho Or maybe nobody has ever brought the idea up to someone who can spend the money on the solution... I've been handed poorly maintained datasets in the past and had good success with requesting an update to infrastructure to better manage them.
              – Jasper
              Apr 17 '15 at 18:29











            • You need backups, not only to protect customer data, but to make work reasonably efficient. For example, if you know there's no backup, you will probably work in a way that is too cautious, e.g. rechecking every single command on your local machine using some test data, slowing down even routine tasks. If you need to do a hotfix on the live machine, do it in an efficient way but have a rollback plan in case the hotfix breaks something else unexpectedly.
              – Brandin
              Apr 19 '15 at 19:31












            • 7




              "do a request to have them available"? I'm 90% sure the answer will be "no". Companies who are know that failsafes are needed have already installed them before a junior dev asks for them. The ones who don't have them are the ones who believe that "I don't need no stinkin' backups, it's your responsibility to be infallible, and the fear of me firing your ass if anything happens will ensure that you make no mistakes ever". They don't change their ways just because a new junior dev asked for it. So in reality, this answer is a blind alley.
              – rumtscho
              Apr 17 '15 at 10:11






            • 1




              @rumtscho The request should be made anyway, on the .2% chance that he'll have a boss that actually understands how important this is, and so that the request can be documented to avoid future finger-pointing.
              – Zibbobz
              Apr 17 '15 at 18:26






            • 2




              @rumtscho Or maybe nobody has ever brought the idea up to someone who can spend the money on the solution... I've been handed poorly maintained datasets in the past and had good success with requesting an update to infrastructure to better manage them.
              – Jasper
              Apr 17 '15 at 18:29











            • You need backups, not only to protect customer data, but to make work reasonably efficient. For example, if you know there's no backup, you will probably work in a way that is too cautious, e.g. rechecking every single command on your local machine using some test data, slowing down even routine tasks. If you need to do a hotfix on the live machine, do it in an efficient way but have a rollback plan in case the hotfix breaks something else unexpectedly.
              – Brandin
              Apr 19 '15 at 19:31







            7




            7




            "do a request to have them available"? I'm 90% sure the answer will be "no". Companies who are know that failsafes are needed have already installed them before a junior dev asks for them. The ones who don't have them are the ones who believe that "I don't need no stinkin' backups, it's your responsibility to be infallible, and the fear of me firing your ass if anything happens will ensure that you make no mistakes ever". They don't change their ways just because a new junior dev asked for it. So in reality, this answer is a blind alley.
            – rumtscho
            Apr 17 '15 at 10:11




            "do a request to have them available"? I'm 90% sure the answer will be "no". Companies who are know that failsafes are needed have already installed them before a junior dev asks for them. The ones who don't have them are the ones who believe that "I don't need no stinkin' backups, it's your responsibility to be infallible, and the fear of me firing your ass if anything happens will ensure that you make no mistakes ever". They don't change their ways just because a new junior dev asked for it. So in reality, this answer is a blind alley.
            – rumtscho
            Apr 17 '15 at 10:11




            1




            1




            @rumtscho The request should be made anyway, on the .2% chance that he'll have a boss that actually understands how important this is, and so that the request can be documented to avoid future finger-pointing.
            – Zibbobz
            Apr 17 '15 at 18:26




            @rumtscho The request should be made anyway, on the .2% chance that he'll have a boss that actually understands how important this is, and so that the request can be documented to avoid future finger-pointing.
            – Zibbobz
            Apr 17 '15 at 18:26




            2




            2




            @rumtscho Or maybe nobody has ever brought the idea up to someone who can spend the money on the solution... I've been handed poorly maintained datasets in the past and had good success with requesting an update to infrastructure to better manage them.
            – Jasper
            Apr 17 '15 at 18:29





            @rumtscho Or maybe nobody has ever brought the idea up to someone who can spend the money on the solution... I've been handed poorly maintained datasets in the past and had good success with requesting an update to infrastructure to better manage them.
            – Jasper
            Apr 17 '15 at 18:29













            You need backups, not only to protect customer data, but to make work reasonably efficient. For example, if you know there's no backup, you will probably work in a way that is too cautious, e.g. rechecking every single command on your local machine using some test data, slowing down even routine tasks. If you need to do a hotfix on the live machine, do it in an efficient way but have a rollback plan in case the hotfix breaks something else unexpectedly.
            – Brandin
            Apr 19 '15 at 19:31




            You need backups, not only to protect customer data, but to make work reasonably efficient. For example, if you know there's no backup, you will probably work in a way that is too cautious, e.g. rechecking every single command on your local machine using some test data, slowing down even routine tasks. If you need to do a hotfix on the live machine, do it in an efficient way but have a rollback plan in case the hotfix breaks something else unexpectedly.
            – Brandin
            Apr 19 '15 at 19:31












            up vote
            36
            down vote













            You should never, ever modify unrecoverable production data.



            I really can't stress this enough. You should be willing to take a stance on this. If it were me I'd make two requests.



            1. That there are frequent backups (ideally, more than one set) taken automatically. The frequency depends on the sensitivity of the data, but I'd say at least once a day. Is the customer going to be willing to accept a week's worth of lost data if something goes wrong?


            2. That you aren't the one to do it. This is actually more important, but the first should still be the case. You, a software developer, shouldn't be messing with live data. I don't know the size of your company, but I work in a company of fifty and developers get told off if they interact with the live system. If you have a support department, that's their job and you should, even with backups, feel uneasy about touching live data. Development work should only ever be done on a development system. If something needs to be done to the live system, you get them to do it, even at the cost of efficiency. If you need to, provide a well-tested script. Generally, though, any changes should only happen when everyone is aware that the system could go down, so customers need to be aware of this possibility. If it necessary for you to touch live data as a developer, don't do it without backups, even something small.


            These things are pretty basic. You should always consider the potential effects of mistakes and do everything possible to minimise those risks. These might sound overly-cautious, but if you don't take precautions, the personal consequences will be much greater if something goes wrong.






            share|improve this answer


















            • 10




              No devs shoudl ever be running scripts on a production database. They shoudl not even have teh rights to do so. Only a DBA or Build team memeber should do this or at worst only managers should do this. A company that allows junior devs access to production deserves what they get.
              – HLGEM
              Apr 16 '15 at 12:49






            • 8




              @HLGEM Agreed, sadly reality is so very many companies have developers, good ones, that are practically forced to do this sort of risky crap all the time. "Hey Jim! Kim just uploading the new leads to production through the portal, Legal wants us to change their source so accounting can bill them properly, there's 60,000 of them, oh and accounting needs to be able to start invoicing in an hour so we don't have time to copy the data to the dev database, make it happen! counting on you champ!" facepalm (I wish that was a joke)
              – RualStorge
              Apr 16 '15 at 19:18






            • 19




              @HLGEM: Not every company is large enough to have a "DBA" or "Build team". And certainly I've never met a manager who has the first clue how to do anything like this.
              – Lightness Races in Orbit
              Apr 17 '15 at 0:25







            • 4




              @PointlessSpike Sometimes one person works two roles. Maybe the developer is also the support staff. I've been employed as a software engineer but also done build, test engineering, client support as part of the same role. You work with the staff you have.
              – Gusdor
              Apr 17 '15 at 8:25






            • 5




              @Lightning Been there - in micro-startups people have to wear many hats. But even there the only people who should have update access to a production server are individuals who know exactly what they are doing, who know to rehearse even the smallest change in a test environment beforehand, and who are trusted by the business to do it. Preferably as few as practicable.
              – Julia Hayward
              Apr 17 '15 at 10:36














            up vote
            36
            down vote













            You should never, ever modify unrecoverable production data.



            I really can't stress this enough. You should be willing to take a stance on this. If it were me I'd make two requests.



            1. That there are frequent backups (ideally, more than one set) taken automatically. The frequency depends on the sensitivity of the data, but I'd say at least once a day. Is the customer going to be willing to accept a week's worth of lost data if something goes wrong?


            2. That you aren't the one to do it. This is actually more important, but the first should still be the case. You, a software developer, shouldn't be messing with live data. I don't know the size of your company, but I work in a company of fifty and developers get told off if they interact with the live system. If you have a support department, that's their job and you should, even with backups, feel uneasy about touching live data. Development work should only ever be done on a development system. If something needs to be done to the live system, you get them to do it, even at the cost of efficiency. If you need to, provide a well-tested script. Generally, though, any changes should only happen when everyone is aware that the system could go down, so customers need to be aware of this possibility. If it necessary for you to touch live data as a developer, don't do it without backups, even something small.


            These things are pretty basic. You should always consider the potential effects of mistakes and do everything possible to minimise those risks. These might sound overly-cautious, but if you don't take precautions, the personal consequences will be much greater if something goes wrong.






            share|improve this answer


















            • 10




              No devs shoudl ever be running scripts on a production database. They shoudl not even have teh rights to do so. Only a DBA or Build team memeber should do this or at worst only managers should do this. A company that allows junior devs access to production deserves what they get.
              – HLGEM
              Apr 16 '15 at 12:49






            • 8




              @HLGEM Agreed, sadly reality is so very many companies have developers, good ones, that are practically forced to do this sort of risky crap all the time. "Hey Jim! Kim just uploading the new leads to production through the portal, Legal wants us to change their source so accounting can bill them properly, there's 60,000 of them, oh and accounting needs to be able to start invoicing in an hour so we don't have time to copy the data to the dev database, make it happen! counting on you champ!" facepalm (I wish that was a joke)
              – RualStorge
              Apr 16 '15 at 19:18






            • 19




              @HLGEM: Not every company is large enough to have a "DBA" or "Build team". And certainly I've never met a manager who has the first clue how to do anything like this.
              – Lightness Races in Orbit
              Apr 17 '15 at 0:25







            • 4




              @PointlessSpike Sometimes one person works two roles. Maybe the developer is also the support staff. I've been employed as a software engineer but also done build, test engineering, client support as part of the same role. You work with the staff you have.
              – Gusdor
              Apr 17 '15 at 8:25






            • 5




              @Lightning Been there - in micro-startups people have to wear many hats. But even there the only people who should have update access to a production server are individuals who know exactly what they are doing, who know to rehearse even the smallest change in a test environment beforehand, and who are trusted by the business to do it. Preferably as few as practicable.
              – Julia Hayward
              Apr 17 '15 at 10:36












            up vote
            36
            down vote










            up vote
            36
            down vote









            You should never, ever modify unrecoverable production data.



            I really can't stress this enough. You should be willing to take a stance on this. If it were me I'd make two requests.



            1. That there are frequent backups (ideally, more than one set) taken automatically. The frequency depends on the sensitivity of the data, but I'd say at least once a day. Is the customer going to be willing to accept a week's worth of lost data if something goes wrong?


            2. That you aren't the one to do it. This is actually more important, but the first should still be the case. You, a software developer, shouldn't be messing with live data. I don't know the size of your company, but I work in a company of fifty and developers get told off if they interact with the live system. If you have a support department, that's their job and you should, even with backups, feel uneasy about touching live data. Development work should only ever be done on a development system. If something needs to be done to the live system, you get them to do it, even at the cost of efficiency. If you need to, provide a well-tested script. Generally, though, any changes should only happen when everyone is aware that the system could go down, so customers need to be aware of this possibility. If it necessary for you to touch live data as a developer, don't do it without backups, even something small.


            These things are pretty basic. You should always consider the potential effects of mistakes and do everything possible to minimise those risks. These might sound overly-cautious, but if you don't take precautions, the personal consequences will be much greater if something goes wrong.






            share|improve this answer














            You should never, ever modify unrecoverable production data.



            I really can't stress this enough. You should be willing to take a stance on this. If it were me I'd make two requests.



            1. That there are frequent backups (ideally, more than one set) taken automatically. The frequency depends on the sensitivity of the data, but I'd say at least once a day. Is the customer going to be willing to accept a week's worth of lost data if something goes wrong?


            2. That you aren't the one to do it. This is actually more important, but the first should still be the case. You, a software developer, shouldn't be messing with live data. I don't know the size of your company, but I work in a company of fifty and developers get told off if they interact with the live system. If you have a support department, that's their job and you should, even with backups, feel uneasy about touching live data. Development work should only ever be done on a development system. If something needs to be done to the live system, you get them to do it, even at the cost of efficiency. If you need to, provide a well-tested script. Generally, though, any changes should only happen when everyone is aware that the system could go down, so customers need to be aware of this possibility. If it necessary for you to touch live data as a developer, don't do it without backups, even something small.


            These things are pretty basic. You should always consider the potential effects of mistakes and do everything possible to minimise those risks. These might sound overly-cautious, but if you don't take precautions, the personal consequences will be much greater if something goes wrong.







            share|improve this answer














            share|improve this answer



            share|improve this answer








            edited Apr 17 '15 at 13:12

























            answered Apr 16 '15 at 12:29









            PointlessSpike

            514312




            514312







            • 10




              No devs should ever be running scripts on a production database. They should not even have the rights to do so. Only a DBA or Build team member should do this, or at worst only managers. A company that allows junior devs access to production deserves what they get.
              – HLGEM
              Apr 16 '15 at 12:49






            • 8




              @HLGEM Agreed, sadly the reality is so very many companies have developers, good ones, that are practically forced to do this sort of risky crap all the time. "Hey Jim! Kim just uploaded the new leads to production through the portal, Legal wants us to change their source so accounting can bill them properly, there's 60,000 of them, oh and accounting needs to be able to start invoicing in an hour so we don't have time to copy the data to the dev database, make it happen! counting on you champ!" facepalm (I wish that was a joke)
              – RualStorge
              Apr 16 '15 at 19:18






            • 19




              @HLGEM: Not every company is large enough to have a "DBA" or "Build team". And certainly I've never met a manager who has the first clue how to do anything like this.
              – Lightness Races in Orbit
              Apr 17 '15 at 0:25







            • 4




              @PointlessSpike Sometimes one person works two roles. Maybe the developer is also the support staff. I've been employed as a software engineer but also done build, test engineering, client support as part of the same role. You work with the staff you have.
              – Gusdor
              Apr 17 '15 at 8:25






            • 5




              @Lightning Been there - in micro-startups people have to wear many hats. But even there the only people who should have update access to a production server are individuals who know exactly what they are doing, who know to rehearse even the smallest change in a test environment beforehand, and who are trusted by the business to do it. Preferably as few as practicable.
              – Julia Hayward
              Apr 17 '15 at 10:36

























            Before doing any work which could result in loss of data, you need to make sure you have a rollback plan. This usually means taking a manual database backup before running any SQL script that changes data. This is part of your responsibility as the person carrying out the work.
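            As a rough illustration (SQL Server syntax; the database name and backup path are invented for the example), a one-off backup taken and verified immediately before a risky change might look like this:

                -- Full backup of the database you are about to touch (hypothetical name and path)
                BACKUP DATABASE SalesDb
                    TO DISK = N'D:\Backups\SalesDb_before_change.bak'
                    WITH INIT, CHECKSUM;

                -- Check that the backup file is actually readable before relying on it
                RESTORE VERIFYONLY
                    FROM DISK = N'D:\Backups\SalesDb_before_change.bak'
                    WITH CHECKSUM;

            Other engines have equivalents (mysqldump, pg_dump); the point is that the backup is taken by you, right before the change, and verified.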



            Next time you are asked to do such work, let your line manager know that you are going to take a backup immediately before making the change.



            Side Note: It's always useful to enclose your SQL scripts in BEGIN TRANSACTION .. ROLLBACK TRANSACTION the first time you run them on production data. This executes the script and shows you its output, without actually applying the changes. It gives you a good indication of how many records will be changed and whether there will be any errors.
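            For example (hypothetical table and predicate), a first dry run of a data fix could look like this:

                BEGIN TRANSACTION;

                UPDATE Orders
                SET    Status = 'CANCELLED'
                WHERE  CustomerId = 12345;

                -- Inspect the affected rows while the transaction is still open
                SELECT * FROM Orders WHERE CustomerId = 12345;

                -- Dry run: undo everything. Switch this to COMMIT TRANSACTION only
                -- once the output above looks right.
                ROLLBACK TRANSACTION;

            This is not risk-free: while the transaction is open it still holds locks and adds load on the live system, as later answers and comments point out.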


























            • Yes, think about "What could go wrong?" before it goes wrong. However, the advice about "BEGIN TRANSACTION..." strikes me as a bit dangerous: even with that applied, the query could still take down a production system, for example by causing too much load. These things depend too much on the specific situation; there is no universal "safe" way to run stuff on a production system.
              – sleske
              Apr 16 '15 at 10:18










            • @sleske If it was going to cause too much load and take down the server, then it would do it with or without the begin/rollback transaction, so I don't see the downside of using it compared to just executing the script.
              – rdab
              Apr 16 '15 at 11:10






            • 2




              Yes, of course, and sorry, that's not what I meant. I meant: Even with the ROLLBACK at the end, running the script may still cause problems. Of course adding the ROLLBACK is much better than just running without it right away. I just wanted to alert people that this could still be dangerous.
              – sleske
              Apr 16 '15 at 11:16










            • Don't forget to verify that you can restore (or at least retrieve your data) from your backup before you actually need to do so for real!
              – Gwyn Evans
              Apr 16 '15 at 17:29














            answered Apr 16 '15 at 8:34









            rdab

            Organizational Risk and Individual Due Diligence




            If I mistakenly deleted some valuable data, what am I supposed to do? I manually (using SQL scripts) move things in the database here and there.




            The question of whether or not the company is Doing the Right Thing™ by making changes to production data that has no backups is really a business decision that is above your pay grade. While you can certainly recommend that they not do that because of the risks, I'd be very surprised if they weren't already aware of those risks and hadn't decided they were acceptable to the business compared to the cost of doing something systemic about it.

            For your own part, you should perform your own full or partial backups before making changes. While it might be impractical to back up the entire system, you can certainly dump the records you're planning to change or the configuration files you're planning to modify, so that you can roll them back in case of error.

            This won't protect you from catastrophic failures (dropping the entire database, for example), but it does ensure that if you make changes to record 12345, you can restore that record if your changes turn out to be incorrect.
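            For instance (hypothetical table and key), copying just the rows you are about to touch into a throwaway backup table gives you a cheap, targeted way back:

                -- Keep a copy of the record(s) the change will affect
                SELECT *
                INTO   Customer_backup_20150416
                FROM   Customer
                WHERE  CustomerId = 12345;

                -- ...run the actual change here...

                -- If the change turns out to be wrong, put the old values back
                UPDATE c
                SET    c.Email  = b.Email,
                       c.Status = b.Status
                FROM   Customer AS c
                JOIN   Customer_backup_20150416 AS b
                       ON b.CustomerId = c.CustomerId;

            SELECT ... INTO is SQL Server syntax; other engines spell it CREATE TABLE ... AS SELECT.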



            Just remember that while you have a professional responsibility to bring risks to the attention of your management, and to mitigate those risks as well as you are able in your own work, your organization's management team actually owns 100% of the business risk. If you've done your due diligence to the best of your ability, any residual risk is on the organization rather than on you.



            In Case of Accident...



            In the event that something does go wrong, your professional responsibility is to inform someone in authority immediately. You should let them know what happened, what data has been lost, what (if anything) you are able to recover, and offer to assist with any additional recovery efforts that may be needed.



            These sorts of mistakes should not be covered up. However, while you should own up to any mistakes or errors you have made, remember that the responsibility for having a system without proper safeguards is a risk that belongs to the organization, not you. Take responsibility for your part in the accident, but don't take the blame for the failure of a system without proper safeguards.



            There's a big difference between taking ownership and taking blame. Make sure you only accept the former, and not the latter, unless you've truly done something negligent.






                answered Apr 17 '15 at 17:21









                CodeGnome

                    There are actually two questions in one here:



                    • who should be responsible for changes in Production data?

                    • how best to perform those changes?

                    Let me address them separately.




                    Who should be responsible for changes in Production data?



                    No single person.



                    Whatever the way you perform the change, a change in Production (or any sensitive system) should be reviewed by at least one other (knowledgeable) person, and approved by some manager.

                    This is teamwork and respecting the chain of responsibility. At that point, it does not matter if you make an error:



                    1. It will only be applied if someone else has reviewed it (and failed to notice the issue)

                    2. It will only be applied if some manager approved it (and took responsibility for it)

                    If no manager is willing to take responsibility for the change, do not perform it.



                    If people argue about the time-sensitivity of the change, tell them that one should never mistake rushing for being fast. I would actually argue for extra care in case of urgency (another reviewer, for example), specifically because pressure increases the chance of errors. It is much faster to be right the first time than to mess things up, clean up the mess, and only then perform the change correctly.




                    How best to perform those changes?



                    Ideally:



                    • a back-up is available and there is a restore procedure

                    • the change is performed through a script, which is accompanied by a vetted (*) fallback script

                    Now, unfortunately, conditions are not always ideal.



                    Back-ups are good, but in a live environment where the data changes every second it is just not possible to keep the back-ups exactly up to date; back-ups can only be used in case of massive error, and by accepting that the latest changes will be lost. This is why I cannot insist enough on scripting the changes, and on checking that the fallback script works as intended.



                    Some changes cannot be rolled back. For example, when removing a column, the data in that column cannot be restored if there is an issue. Those changes should be done in two steps (a minimal sketch follows the list):

                    • in a first step, disable access to the piece of data that will be deleted, without actually deleting it; in the case of a column, rename it, for example. This step can be rolled back.

                    • then, once the change has been assessed as valid (several days or weeks have gone by without issues), perform the irreversible change in a single-purpose script
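
                    A minimal sketch of that two-step approach (SQL Server syntax; the table and column names are invented for the example):

                        -- Step 1 (reversible): hide the column without destroying its data
                        EXEC sp_rename 'dbo.Customer.LegacyCode', 'LegacyCode_deprecated', 'COLUMN';

                        -- If anything breaks, renaming it back restores the old state:
                        -- EXEC sp_rename 'dbo.Customer.LegacyCode_deprecated', 'LegacyCode', 'COLUMN';

                        -- Step 2 (irreversible): only once the change has proven safe for long enough
                        ALTER TABLE dbo.Customer DROP COLUMN LegacyCode_deprecated;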

                    (*) To vet a fallback script, you have to run your script against a copy of the real database, then apply the fallback script and check that the data is back to normal.



                    (*) I have seen the suggestion of doing the change in a transaction; this is insufficient (what if you realize your mistake after the commit?), contention-prone (you are blocking all modified rows until you commit) and not always possible (too large a change-set / risk of deadlocks). Still, if possible, use transactions within your script, as half-done changes are harder to roll back.






                        answered Apr 17 '15 at 12:42









                        Matthieu M.

                            If you are stuck with this system (and I would seriously push back on it, as it is extremely risky and poor practice), this is what I would do.

                            First, create a backup table for the data you are going to affect (we have a scratch database for things that are one-time use only). Depending on the size of the data, you might want to create an index on it.

                            Once you have the backup table, put everything in a transaction. Then, when you run the query that affects the data, join to the backup table you created.

                            When you run it, run one step at a time and note how many records are affected; if the number of rows affected by the action query does not match the number of records in the backup table, you will want to roll back and then figure out why.

                            This approach also gives you the most flexibility to restore if the change was bad, as it is easier and generally faster to update one table back to the old values than to restore the whole database. And if only a few records were changed in error, you have the ability to change just those back to the old values.
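
                            A minimal sketch of that sequence (SQL Server syntax; the scratch database, table, and predicate are made up for the example):

                                -- 1. Copy the rows you are about to change into a backup table in the scratch database
                                SELECT *
                                INTO   Scratch.dbo.Invoice_backup_20150416
                                FROM   dbo.Invoice
                                WHERE  SourceCode = 'OLD';

                                -- 2. Make the change inside a transaction, joined to the backup table
                                BEGIN TRANSACTION;

                                UPDATE i
                                SET    i.SourceCode = 'NEW'
                                FROM   dbo.Invoice AS i
                                JOIN   Scratch.dbo.Invoice_backup_20150416 AS b
                                       ON b.InvoiceId = i.InvoiceId;

                                -- 3. Compare the rows just updated with the rows backed up
                                SELECT @@ROWCOUNT AS RowsUpdated,
                                       (SELECT COUNT(*) FROM Scratch.dbo.Invoice_backup_20150416) AS RowsBackedUp;

                                -- COMMIT TRANSACTION only if the two counts match and spot checks look right;
                                -- otherwise ROLLBACK TRANSACTION and work out why they differ.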



                            An alternative to all this is to have audit tables that record all changes. However, it is unlikely you have those if devs are running things directly on production. Personally, I would never consider having a database without auditing, because it is great for fixing errors that came from the user interface, as well as from data imports and ad hoc scripts run against the database, including cases where data is maliciously changed. But I work in a regulatory environment where it is a requirement.
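
                            As a rough illustration of the auditing idea (SQL Server syntax; both tables and columns are hypothetical), an audit table plus a trigger that captures the old values on every update or delete might look like:

                                CREATE TABLE dbo.Customer_Audit
                                (
                                    AuditId    INT IDENTITY(1,1) PRIMARY KEY,
                                    CustomerId INT,
                                    OldEmail   NVARCHAR(200),
                                    ChangedBy  SYSNAME   NOT NULL DEFAULT SUSER_SNAME(),
                                    ChangedAt  DATETIME2 NOT NULL DEFAULT SYSDATETIME()
                                );
                                GO

                                CREATE TRIGGER trg_Customer_Audit
                                ON dbo.Customer
                                AFTER UPDATE, DELETE
                                AS
                                BEGIN
                                    -- "deleted" holds the pre-change version of every affected row
                                    INSERT INTO dbo.Customer_Audit (CustomerId, OldEmail)
                                    SELECT CustomerId, Email
                                    FROM   deleted;
                                END;

                            With something like this in place, an accidental UPDATE can often be repaired from the audit rows instead of from a full restore.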



                            Added later
                            I forgot to mention, have someone else code review what you are doing before you do it.






                                answered Apr 16 '15 at 13:55









                                HLGEM

                                    Pretty sure you're aware of this, but I'll point out the obvious anyway: test, test, and do some more testing. There's so much that can go wrong if you don't do proper testing, one of many things being removing data by accident. By mimicking the production environment with actual data, you can test in an environment that's realistic and keep errors and bugs to a minimum.

                                    Do keep in mind that even after hours of extensive testing, errors do and will happen. If they do, report to your supervisor/manager and explain. If, for example, your code drops a database connection, millions of records that should've been inserted could go missing. If you find an error or suspect an error: let people know.






                                        answered Apr 16 '15 at 7:25









                                        Edwin Lambregts





















                                            up vote
                                            2
                                            down vote













If you have to do ANYTHING in a production system, do it in a transaction. (Or let a DBA do it, if you have one.)



                                            A number of years ago I saw my boss' look of horror as he ran a "simple" update against a production database, without a WHERE clause.



                                            If he had used a transaction, he could have issued a ROLLBACK and saved himself a panicked night of recovering data from a backup to a still running production system. (The rollback would have taken two seconds, not four hours...)



                                            (This sounds like a Scott Adams cartoon, but yes, I actually watched this happen...)
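
As a rough illustration (SQL Server style; the table and numbers are made up), the difference is only a couple of lines:

    BEGIN TRANSACTION;

    UPDATE dbo.users
    SET    email = 'someone@example.com';   -- oops: the WHERE clause was forgotten

    -- "500000 rows affected" comes back instead of 1. Nothing is committed yet,
    -- so the fix takes seconds:
    ROLLBACK;

    -- Had the statement been correct, the last line would simply have been COMMIT;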






















                                            • 4




                                              My boss regularly tells us about the time he dropped an entire database while doing a department training. He was panicked for a minute that he had done it on production, but he was lucky enough to have done it on our test server database.
                                              – Martin Carney
                                              Apr 16 '15 at 22:42
























                                            answered Apr 16 '15 at 18:24









                                            Tim











                                            up vote
                                            1
                                            down vote













Another simple way to prevent catastrophic mistakes is to get in the habit of adding a LIMIT clause (e.g. LIMIT 1 if you are only changing one record).



                                            So if you're modifying something like a user table, even if you forgot a WHERE clause like @tim's boss did, you'd only mess up one user record and not every single user record.
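
For example (MySQL syntax; the table and values are invented, and note that not every database allows LIMIT on UPDATE or DELETE):

    -- Intended change: fix a single user's email address
    UPDATE users
    SET    email = 'new.address@example.com'
    WHERE  user_id = 42
    LIMIT  1;   -- even with a wrong or missing WHERE clause, at most one row changes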








































                                                answered Apr 17 '15 at 19:39









                                                Doug Krugman












