Principle of Least Privilege in the Workplace [closed]

I have been trying to figure out how, logistically, large corporations focused on one huge product (like Google, Facebook, Twitter, Yahoo, etc.) restrict the information known to employees so that no single employee (or small group of employees) poses a major security threat to the company upon termination. Below is a more formal introduction.




In online applications, there is often a need for both security and obscurity. I don't know of any large company focused on a single huge online product that openly publishes the internal workings of its applications. Of course, the company's employees must know how things work behind the curtain in order to maintain and grow the product. But doesn't that make them an incredible security risk?



In a company of this type with tens, hundreds, or thousands of employees, it would seem that any reasonably high-level employee would have the power, if he wished, to bring down the company through his knowledge of the inner mechanics of the service. One way to prevent this would be to limit what each employee knows to what he needs to do his job. Except that even then, a group of one or two employees could still bring down the company. What keeps a security specialist fired from Google from deciding to destroy the company, using the vulnerabilities of the very system he designed? No amount of litigation would make up for hundreds of billions of dollars in damage. Even something as seemingly simple as how the servers are connected to each other could be incredibly valuable to a hacker, and yet it would really need to be shared in its entirety with the employees managing the datacenters if they are to do their jobs. Some pieces of information are so basic that they must be known to at least one person, yet harbor incredible risk to the company if misused.



How do these companies (like Google, Facebook, Twitter, Yahoo...) restrict what each employee knows, and divide that knowledge into sufficiently small chunks to mitigate this risk?



Update:



I'm not asking about software security so much as how a company restricts the information known to individual employees, from a strategic and logistical standpoint. The same problem in different forms applies to basically any company with secrets, i.e. any company, but it is particularly important when there is one large product, especially an online application, as described above.







asked Jul 25 '15 at 8:15 by TheEnvironmentalist, edited Jul 25 '15 at 8:40
closed as too broad by Vietnhi Phuvan, Jane S♦, scaaahu, Jenny D, The Wandering Dev Manager Jul 25 '15 at 14:06


Please edit the question to limit it to a specific problem with enough detail to identify an adequate answer. Avoid asking multiple distinct questions at once. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question.














  • You are confusing knowledge of how things work with the access and ability to affect those things.
    – Jenny D
    Jul 25 '15 at 12:39










  • @JennyD I might be, but this debate has lost sight of the original purpose of the question, which was: how do these companies decide what to show everyone? What method do they use to decide who should see what, and how is that enforced?
    – TheEnvironmentalist
    Jul 25 '15 at 12:48






  • Really, all you want to know is "What method do they use to decide who should see what, and how is that enforced?" Then why a title of "Principle of Least Privilege" and all these (flawed) failure scenarios? You have already asked that question: security.stackexchange.com/questions/94882/…
    – paparazzo
    Jul 25 '15 at 15:16










  • @Frisbee Notice how most of the criticism this question has received (including a number of deleted comments on the question itself) argues that only extremely paranoid companies would consider restricting employee information. At that point I started to doubt the validity of the question, and so set out to confirm that companies do in fact do this. Asking whether companies restrict employee knowledge for security purposes is a security question, but asking how they actually do it concerns how a company treats its employees above everything else, and so is a Workplace SE question.
    – TheEnvironmentalist
    Jul 27 '15 at 18:42










  • @Frisbee Thus I asked "Do large online software companies limit employee access to company information" on Information Security SE, and the answer I received was yes, companies certainly restrict employee access to information, which if nothing else confirms that my question here is valid and that I have some idea of what I am talking about.
    – TheEnvironmentalist
    Jul 27 '15 at 18:44
















2 Answers
I think people are getting caught up in the details of information security and whether or not one person can "bring down" an entire enterprise.



The fact of the matter is that, YES, it certainly is possible for a disgruntled employee (current or former) to do damage ranging from "not much" to "very grave" to almost any kind of company.



The reason this doesn't happen more than it does is simply that the vast majority of people are NOT criminals. Most people won't harm the livelihood of coworkers and employers just for the feeling of revenge. There's a certain amount of trust and goodwill involved here. Sometimes that trust is broken, but most of the time it is not.



Anyone who believes a system is impervious to harm is delusional. The harm doesn't have to be complete destruction to be a serious hardship, nor does the perpetrator need elite skills or deep knowledge to do this kind of thing.






answered Jul 25 '15 at 13:05 by teego1967
  • So to directly address the question, how would a company prevent, or at least minimize this?
    – TheEnvironmentalist
    Jul 25 '15 at 13:06






  • @TheEnvironmentalist, just varying levels of security practices-- like everybody else. The trade-off is in cost. How critical is it to prevent shenanigans? If you're a bank, A LOT. If you're a mom-and-pop retailer? Not so much. There is no "one size fits all" here.
    – teego1967
    Jul 25 '15 at 13:11







  • And among the criminals or potential criminals, most would be sane enough to realise that causing damage to the company would cause tons of damage to themselves a little bit later. Only once in a while do you get a Terry Childs: en.wikipedia.org/wiki/Terry_Childs
    – gnasher729
    Jul 25 '15 at 14:48










  • You prevent it by giving permissions only to those who need them, for the time that they need them, and tracking who has logged into what where -- and by maintaining secure backups so that if a disaster occurs, deliberate or not, you can recover -- and by structuring systems so that recovery occurs as quickly as possible. In other words, by managing your IS shop properly.
    – keshlam
    Jul 25 '15 at 18:12










  • @keshlam, those are all good things. Regardless of preparation, however, disasters do happen, information gets out the door, and costly mishaps occur even after everything comes back online. Even in organizations where security counts above all else, there are spectacular failures (e.g. Snowden). These aren't problems that have purely technical solutions.
    – teego1967
    Jul 25 '15 at 20:20
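To make the approach keshlam describes in the comments above concrete — deny access by default, grant permissions only to the people who need them and only for as long as they need them, and log every access so it can be reviewed later — here is a minimal sketch in Python. The employee names, resource names, and API are hypothetical, invented purely for illustration; this is not any real company's access-control system.

    import time
    from collections import defaultdict

    # Hypothetical sketch: employees, resources, and this API are invented for
    # illustration; no real company's access-control system is being described.

    class AccessControl:
        def __init__(self):
            self._grants = defaultdict(list)  # employee -> [(resource, action, expires_at)]
            self.audit_log = []               # every check is recorded for later review

        def grant(self, employee, resource, action, ttl_seconds):
            """Give a permission only to someone who needs it, only for as long as needed."""
            self._grants[employee].append((resource, action, time.time() + ttl_seconds))

        def check(self, employee, resource, action):
            """Deny by default; record who tried to touch what, when, and whether it worked."""
            now = time.time()
            allowed = any(r == resource and a == action and exp > now
                          for r, a, exp in self._grants[employee])
            self.audit_log.append((now, employee, resource, action, allowed))
            return allowed

    acl = AccessControl()
    acl.grant("alice", "datacenter-network-map", "read", ttl_seconds=3600)
    print(acl.check("alice", "datacenter-network-map", "read"))  # True, for the next hour
    print(acl.check("alice", "prod-user-database", "read"))      # False: never granted

The point is not the code itself but its shape: access is denied unless an explicit, expiring grant exists, and the audit log makes it possible to see afterwards exactly what a departing employee could touch and did touch.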

















I think you got some basics about information security and software engineering wrong.



Let's take a look at chess. When you first learn it, you plan your moves and hope your opponent will not see through your clever plans. If someone announced your plan, you'd be doomed. Only later, when you get more professional about it, do you see that chess is an open game. Everybody knows everything. Making a move and hoping your opponent will not notice your plan is child's play.



The same goes for computer programming. When you first read a book, you are like "I will make a secret password. It's 'fart'. No, that's not secret enough, I'll make it 'secretfart'. Hihi. I'm secure, nobody will guess that." But that's child's play. When you get more professional, you will learn methods of securing software and information that do not depend on this kind of "secret".



Facebook is no secret. Every programmer could probably code you a Facebook clone in weeks. Amazon is no secret. Look at the thousands of shops. They are successful because they run a successful business, just like an offline shop might be a successful business even though every high-school dropout knows the "secret" of "buy goods for less than you sell them for".



So yes, obviously you don't give administrative access to your accounting software to your janitor. But that's not hot new stuff. You didn't give the key to the treasure chest to your stable boy 1000 years ago either.



If Facebook published their source code... nothing would happen. A few nerds would get a kick out of the cool things, and a few other nerds would find a security hole that would let them post pictures of reproductive organs on people's walls. A third group of nerds would build a shadow Facebook on the same code that would not take off because nobody would join. And after two days, Facebook would be back to normal. There is no secret kill switch. That only exists in fictional stories, or in software written by kids.
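To illustrate what "security that does not depend on secrets" looks like in code, here is a generic sketch of password storage using only Python's standard library. It is a hypothetical example, not Facebook's implementation: publishing this code would reveal nothing useful to an attacker, because the security rests on per-user salts and the users' passwords, not on keeping the algorithm hidden.

    import hashlib
    import hmac
    import os

    # Hypothetical sketch: the security does not rely on this code being secret.
    # Only a random salt and a slow hash of the password are ever stored.

    def hash_password(password, salt=None):
        """Return (salt, digest) using PBKDF2-HMAC-SHA256 from the standard library."""
        if salt is None:
            salt = os.urandom(16)              # fresh random salt per user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def verify_password(password, salt, expected_digest):
        """Recompute the hash and compare in constant time."""
        _, digest = hash_password(password, salt)
        return hmac.compare_digest(digest, expected_digest)

    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True
    print(verify_password("secretfart", salt, stored))                    # False

Even someone who has read every line of this code and knows the iteration count still cannot recover anyone's password from the stored values; the knowledge that matters is the per-user data, not the mechanism.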






answered Jul 25 '15 at 11:51 by nvoigt
  • I think you may have misunderstood the question. Obviously obscurity only goes so far, and in my effort to better explain I am getting a bit more computer-technical than I would like to on Workplace SE, but there are absolutely secrets to how things are done. Some things are standard, like encryption, certain protocols, theory behind various types of implementation, but there are certain key secrets that could be extremely valuable for an attack. If Facebook published their source, nothing much would happen for a few weeks, because millions of lines of code would take a long time to dissect...
    – TheEnvironmentalist
    Jul 25 '15 at 12:16







  • ...but then maybe someone would find some security hole, and start installing backdoors. At that point, they could frankly do whatever they want, and then they could actually shut down Facebook. It wouldn't be permanent, but Facebook would have to wipe all the servers, and reinstall everything, and while they could probably figure out a clever way of automating the process within a day or two, and then complete it in another, Facebook could still be down for days, and their reputation would be severely tarnished. That's not to mention that terabytes of personal data could have been stolen...
    – TheEnvironmentalist
    Jul 25 '15 at 12:20







  • There is no secret kill switch, but someone who knows exactly how Facebook works could devise any number of ways to attack Facebook, and if not destroy the company, cost them billions in damages. In fact, I think (not sure though) that some CPUs in use at Facebook might still have exploitable halt and catch fire bugs, in which case someone could actually destroy billions of dollars in servers. This also holds true in any other company operating on the same model. For this reason, it seems certain that Facebook and the other companies in its category restrict employee data, I'm just asking how.
    – TheEnvironmentalist
    Jul 25 '15 at 12:23






  • In IT, the more high-level an employee is, the less he knows about the nitty-gritty details that could actually be used to bring the company down. Anyway: every company restricts employee data. That's a natural part of the business. You don't trust your waitress as much as your accountant. There are probably books written about it, and those written in the days when computers were rare beasts have not lost their validity.
    – nvoigt
    Jul 25 '15 at 12:39






  • @TheEnvironmentalist - If there is a remotely exploitable hole in Facebook that allows an attacker to install backdoors on Facebook's servers, attackers don't need the source code or an internal user to find it. If it were known by an internal user, it would get patched (unless we're trying to defend against an undercover operative of a hacker collective). And it would undoubtedly affect a small fraction of servers responsible for one particular module, not all of Facebook.
    – Justin Cave
    Jul 25 '15 at 14:40

















2 Answers
2






active

oldest

votes








2 Answers
2






active

oldest

votes









active

oldest

votes






active

oldest

votes








up vote
4
down vote













I think people are getting caught up in the details of information security and whether or not one person can "bring down" and entire enterprise.



The fact of the matter is that, YES, it certainly is possible for a disgruntled employee (current or former) to do damage ranging from "not-much" to "very grave" for almost any kind of company.



The reason this doesn't happen more than it does is simply that the vast majority of people are NOT criminals. Most people won't do harm to the lively-hood of coworkers and employers just for the feeling of revenge. There's a certain amount of trust and good-will involved here. Sometimes that trust is broken but most of the time it is not.



Anyone that believes a system is impervious to harm is delusional. The harm doesn't have to be complete destruction for it to be a serious hardship, nor does the perpetrator have to have elite skills or deep knowledge to do this kind of stuff.






share|improve this answer
















  • 1




    So to directly address the question, how would a company prevent, or at least minimize this?
    – TheEnvironmentalist
    Jul 25 '15 at 13:06






  • 1




    @TheEnvironmentalist, just varying levels of security practices-- like everybody else. The trade-off is in cost. How critical is it to prevent shenanigans? If you're a bank, A LOT. If you're a mom-and-pop retailer? Not so much. There is no "one size fits all" here.
    – teego1967
    Jul 25 '15 at 13:11







  • 1




    And among the criminals or potential criminals, most of them would be sane enough to realise that causing damage to the company will cause tons of damages to themselves a little bit later. Only once in a while you get a Terry Childs en.wikipedia.org/wiki/Terry_Childs .
    – gnasher729
    Jul 25 '15 at 14:48










  • You prevent it by giving permissions only to those who need them, for the time that they need them, and tracking who has logged into what where -- and by maintaining secure backups so that if a disaster occurs, deliberate or not, you can recover -- and by structuring systems so that recovery occurs as quickly as possible. In other words, by managing your IS shop properly.
    – keshlam
    Jul 25 '15 at 18:12










  • @keshlam, those are all good things. Regardless of preparation, however, disasters do happen, information gets out the door, and costly mishaps occur even when everything goes back online. Even in organizations where security counts above all else, there are spectacular failures (eg Snowden). These aren't problems that have purely technical solutions.
    – teego1967
    Jul 25 '15 at 20:20














up vote
4
down vote













I think people are getting caught up in the details of information security and whether or not one person can "bring down" and entire enterprise.



The fact of the matter is that, YES, it certainly is possible for a disgruntled employee (current or former) to do damage ranging from "not-much" to "very grave" for almost any kind of company.



The reason this doesn't happen more than it does is simply that the vast majority of people are NOT criminals. Most people won't do harm to the lively-hood of coworkers and employers just for the feeling of revenge. There's a certain amount of trust and good-will involved here. Sometimes that trust is broken but most of the time it is not.



Anyone that believes a system is impervious to harm is delusional. The harm doesn't have to be complete destruction for it to be a serious hardship, nor does the perpetrator have to have elite skills or deep knowledge to do this kind of stuff.






share|improve this answer
















  • 1




    So to directly address the question, how would a company prevent, or at least minimize this?
    – TheEnvironmentalist
    Jul 25 '15 at 13:06






  • 1




    @TheEnvironmentalist, just varying levels of security practices-- like everybody else. The trade-off is in cost. How critical is it to prevent shenanigans? If you're a bank, A LOT. If you're a mom-and-pop retailer? Not so much. There is no "one size fits all" here.
    – teego1967
    Jul 25 '15 at 13:11







  • 1




    And among the criminals or potential criminals, most of them would be sane enough to realise that causing damage to the company will cause tons of damages to themselves a little bit later. Only once in a while you get a Terry Childs en.wikipedia.org/wiki/Terry_Childs .
    – gnasher729
    Jul 25 '15 at 14:48










  • You prevent it by giving permissions only to those who need them, for the time that they need them, and tracking who has logged into what where -- and by maintaining secure backups so that if a disaster occurs, deliberate or not, you can recover -- and by structuring systems so that recovery occurs as quickly as possible. In other words, by managing your IS shop properly.
    – keshlam
    Jul 25 '15 at 18:12










  • @keshlam, those are all good things. Regardless of preparation, however, disasters do happen, information gets out the door, and costly mishaps occur even when everything goes back online. Even in organizations where security counts above all else, there are spectacular failures (eg Snowden). These aren't problems that have purely technical solutions.
    – teego1967
    Jul 25 '15 at 20:20












up vote
4
down vote










up vote
4
down vote









I think people are getting caught up in the details of information security and whether or not one person can "bring down" and entire enterprise.



The fact of the matter is that, YES, it certainly is possible for a disgruntled employee (current or former) to do damage ranging from "not-much" to "very grave" for almost any kind of company.



The reason this doesn't happen more than it does is simply that the vast majority of people are NOT criminals. Most people won't do harm to the lively-hood of coworkers and employers just for the feeling of revenge. There's a certain amount of trust and good-will involved here. Sometimes that trust is broken but most of the time it is not.



Anyone that believes a system is impervious to harm is delusional. The harm doesn't have to be complete destruction for it to be a serious hardship, nor does the perpetrator have to have elite skills or deep knowledge to do this kind of stuff.






share|improve this answer












I think people are getting caught up in the details of information security and whether or not one person can "bring down" and entire enterprise.



The fact of the matter is that, YES, it certainly is possible for a disgruntled employee (current or former) to do damage ranging from "not-much" to "very grave" for almost any kind of company.



The reason this doesn't happen more than it does is simply that the vast majority of people are NOT criminals. Most people won't do harm to the lively-hood of coworkers and employers just for the feeling of revenge. There's a certain amount of trust and good-will involved here. Sometimes that trust is broken but most of the time it is not.



Anyone that believes a system is impervious to harm is delusional. The harm doesn't have to be complete destruction for it to be a serious hardship, nor does the perpetrator have to have elite skills or deep knowledge to do this kind of stuff.







share|improve this answer












share|improve this answer



share|improve this answer










answered Jul 25 '15 at 13:05









teego1967

10.3k42845




10.3k42845







  • 1




    So to directly address the question, how would a company prevent, or at least minimize this?
    – TheEnvironmentalist
    Jul 25 '15 at 13:06






  • 1




    @TheEnvironmentalist, just varying levels of security practices-- like everybody else. The trade-off is in cost. How critical is it to prevent shenanigans? If you're a bank, A LOT. If you're a mom-and-pop retailer? Not so much. There is no "one size fits all" here.
    – teego1967
    Jul 25 '15 at 13:11







  • 1




    And among the criminals or potential criminals, most of them would be sane enough to realise that causing damage to the company will cause tons of damages to themselves a little bit later. Only once in a while you get a Terry Childs en.wikipedia.org/wiki/Terry_Childs .
    – gnasher729
    Jul 25 '15 at 14:48










  • You prevent it by giving permissions only to those who need them, for the time that they need them, and tracking who has logged into what where -- and by maintaining secure backups so that if a disaster occurs, deliberate or not, you can recover -- and by structuring systems so that recovery occurs as quickly as possible. In other words, by managing your IS shop properly.
    – keshlam
    Jul 25 '15 at 18:12










  • @keshlam, those are all good things. Regardless of preparation, however, disasters do happen, information gets out the door, and costly mishaps occur even when everything goes back online. Even in organizations where security counts above all else, there are spectacular failures (eg Snowden). These aren't problems that have purely technical solutions.
    – teego1967
    Jul 25 '15 at 20:20












  • 1




    So to directly address the question, how would a company prevent, or at least minimize this?
    – TheEnvironmentalist
    Jul 25 '15 at 13:06






  • 1




    @TheEnvironmentalist, just varying levels of security practices-- like everybody else. The trade-off is in cost. How critical is it to prevent shenanigans? If you're a bank, A LOT. If you're a mom-and-pop retailer? Not so much. There is no "one size fits all" here.
    – teego1967
    Jul 25 '15 at 13:11







  • 1




    And among the criminals or potential criminals, most of them would be sane enough to realise that causing damage to the company will cause tons of damages to themselves a little bit later. Only once in a while you get a Terry Childs en.wikipedia.org/wiki/Terry_Childs .
    – gnasher729
    Jul 25 '15 at 14:48










  • You prevent it by giving permissions only to those who need them, for the time that they need them, and tracking who has logged into what where -- and by maintaining secure backups so that if a disaster occurs, deliberate or not, you can recover -- and by structuring systems so that recovery occurs as quickly as possible. In other words, by managing your IS shop properly.
    – keshlam
    Jul 25 '15 at 18:12










  • @keshlam, those are all good things. Regardless of preparation, however, disasters do happen, information gets out the door, and costly mishaps occur even when everything goes back online. Even in organizations where security counts above all else, there are spectacular failures (eg Snowden). These aren't problems that have purely technical solutions.
    – teego1967
    Jul 25 '15 at 20:20







1




1




So to directly address the question, how would a company prevent, or at least minimize this?
– TheEnvironmentalist
Jul 25 '15 at 13:06




So to directly address the question, how would a company prevent, or at least minimize this?
– TheEnvironmentalist
Jul 25 '15 at 13:06




1




1




@TheEnvironmentalist, just varying levels of security practices-- like everybody else. The trade-off is in cost. How critical is it to prevent shenanigans? If you're a bank, A LOT. If you're a mom-and-pop retailer? Not so much. There is no "one size fits all" here.
– teego1967
Jul 25 '15 at 13:11





@TheEnvironmentalist, just varying levels of security practices-- like everybody else. The trade-off is in cost. How critical is it to prevent shenanigans? If you're a bank, A LOT. If you're a mom-and-pop retailer? Not so much. There is no "one size fits all" here.
– teego1967
Jul 25 '15 at 13:11





1




1




And among the criminals or potential criminals, most of them would be sane enough to realise that causing damage to the company will cause tons of damages to themselves a little bit later. Only once in a while you get a Terry Childs en.wikipedia.org/wiki/Terry_Childs .
– gnasher729
Jul 25 '15 at 14:48




And among the criminals or potential criminals, most of them would be sane enough to realise that causing damage to the company will cause tons of damages to themselves a little bit later. Only once in a while you get a Terry Childs en.wikipedia.org/wiki/Terry_Childs .
– gnasher729
Jul 25 '15 at 14:48












You prevent it by giving permissions only to those who need them, for the time that they need them, and tracking who has logged into what where -- and by maintaining secure backups so that if a disaster occurs, deliberate or not, you can recover -- and by structuring systems so that recovery occurs as quickly as possible. In other words, by managing your IS shop properly.
– keshlam
Jul 25 '15 at 18:12




You prevent it by giving permissions only to those who need them, for the time that they need them, and tracking who has logged into what where -- and by maintaining secure backups so that if a disaster occurs, deliberate or not, you can recover -- and by structuring systems so that recovery occurs as quickly as possible. In other words, by managing your IS shop properly.
– keshlam
Jul 25 '15 at 18:12












@keshlam, those are all good things. Regardless of preparation, however, disasters do happen, information gets out the door, and costly mishaps occur even when everything goes back online. Even in organizations where security counts above all else, there are spectacular failures (eg Snowden). These aren't problems that have purely technical solutions.
– teego1967
Jul 25 '15 at 20:20




@keshlam, those are all good things. Regardless of preparation, however, disasters do happen, information gets out the door, and costly mishaps occur even when everything goes back online. Even in organizations where security counts above all else, there are spectacular failures (eg Snowden). These aren't problems that have purely technical solutions.
– teego1967
Jul 25 '15 at 20:20












up vote
2
down vote













I think you got some basics about information security and software engineering wrong.



Let's take a look at chess. When you first learn it, you plan your moves and hope your opponent will not see through your clever plans. If someone would announce your plan, you'd be doomed. Only later, when you get more professional about it, you will see that chess is an open game. Everybody knows everything. Making a move hoping your opponent will not notice your plan is child's play.



The same goes for computer programming. When you first read a book, you are like "I will make a secret password. It's 'fart'. No, that's not secrect enough, I'll make it 'secretfart'. Hihi. I'm secure, nobody will guess that.". But that's child's play. When you get more professional, you will learn methods to secure software and information that are not dependent on this kind of "secrets".



Facebook is no secret. Every programmer could probably code you a Facebook clone in weeks. Amazon is no secret. Look at the thousands of shops. They are successful because they run a successful business. Just like an offline shop might be a successful business even though every highschool drop out knows the "secret" of "buy goods for less than you sell for".



So yes, obviously you don't give your accounting software's administrative access to your janitor. But that's not hot new stuff. You did not give the key to the treasure chest to your stable boy 1000 years ago.



If Facebook published their source code... nothing would happen. A few nerds would get kicks out of the cool things and a few other nerds would find a security hole that would let them post pictures of reproductive organs on peoples walls. A third group of nerds would build a shadow-facebook on the same code that would not take off because nobody would join. And after two days, Facebook would be back to normal. There is no secret kill switch. That only exists in fictional stories, or in software written by kids.






share|improve this answer
















  • 1




    I think you may have misunderstood the question. Obviously obscurity only goes so far, and in my effort to better explain I am getting a bit more computer-technical than I would like to on Workplace SE, but there are absolutely secrets to how things are done. Some things are standard, like encryption, certain protocols, theory behind various types of implementation, but there are certain key secrets that could be extremely valuable for an attack. If Facebook published their source, nothing much would happen for a few weeks, because millions of lines of code would take a long time to dissect...
    – TheEnvironmentalist
    Jul 25 '15 at 12:16







  • 1




    ...but then maybe someone would find some security hole, and start installing backdoors. At that point, they could frankly do whatever they want, and then they could actually shut down Facebook. It wouldn't be permanent, but Facebook would have to wipe all the servers, and reinstall everything, and while they could probably figure out a clever way of automating the process within a day or two, and then complete it in another, Facebook could still be down for days, and their reputation would be severely tarnished. That's not to mention that terabytes of personal data could have been stolen...
    – TheEnvironmentalist
    Jul 25 '15 at 12:20







  • 1




    There is no secret kill switch, but someone who knows exactly how Facebook works could devise any number of ways to attack Facebook, and if not destroy the company, cost them billions in damages. In fact, I think (not sure though) that some CPUs in use at Facebook might still have exploitable halt and catch fire bugs, in which case someone could actually destroy billions of dollars in servers. This also holds true in any other company operating on the same model. For this reason, it seems certain that Facebook and the other companies in its category restrict employee data, I'm just asking how.
    – TheEnvironmentalist
    Jul 25 '15 at 12:23






  • 2




    In IT, the more high-level an employee is, the less he knows about the nitty gritty details that could actually be used to bring the company down. Anyway: Every company restricts employee data. That's natural part of the business. You don't trust your waitress as much as your accountant. There's probably books written about it. And those that were written in the times where computers where rare beasts have not lost their validity.
    – nvoigt
    Jul 25 '15 at 12:39






  • 1




    @TheEnvironmentalist - If there is a remote exploitable hole in Facebook that allows an attacker to install backdoors on Facebook's servers, attackers don't need the source code or an internal user to find that. If it was known by an internal user, it would get patched (unless we're trying to defend against an undercover operative of a hacker collective). And it would undoubtedly affect a small fraction of servers that are responsible for one particular module not all of Facebook.
    – Justin Cave
    Jul 25 '15 at 14:40














up vote
2
down vote













I think you got some basics about information security and software engineering wrong.



Let's take a look at chess. When you first learn it, you plan your moves and hope your opponent will not see through your clever plans. If someone would announce your plan, you'd be doomed. Only later, when you get more professional about it, you will see that chess is an open game. Everybody knows everything. Making a move hoping your opponent will not notice your plan is child's play.



The same goes for computer programming. When you first read a book, you are like "I will make a secret password. It's 'fart'. No, that's not secrect enough, I'll make it 'secretfart'. Hihi. I'm secure, nobody will guess that.". But that's child's play. When you get more professional, you will learn methods to secure software and information that are not dependent on this kind of "secrets".



Facebook is no secret. Every programmer could probably code you a Facebook clone in weeks. Amazon is no secret. Look at the thousands of shops. They are successful because they run a successful business. Just like an offline shop might be a successful business even though every highschool drop out knows the "secret" of "buy goods for less than you sell for".



So yes, obviously you don't give your accounting software's administrative access to your janitor. But that's not hot new stuff. You did not give the key to the treasure chest to your stable boy 1000 years ago.



If Facebook published their source code... nothing would happen. A few nerds would get kicks out of the cool things and a few other nerds would find a security hole that would let them post pictures of reproductive organs on peoples walls. A third group of nerds would build a shadow-facebook on the same code that would not take off because nobody would join. And after two days, Facebook would be back to normal. There is no secret kill switch. That only exists in fictional stories, or in software written by kids.






share|improve this answer
















  • 1




    I think you may have misunderstood the question. Obviously obscurity only goes so far, and in my effort to better explain I am getting a bit more computer-technical than I would like to on Workplace SE, but there are absolutely secrets to how things are done. Some things are standard, like encryption, certain protocols, theory behind various types of implementation, but there are certain key secrets that could be extremely valuable for an attack. If Facebook published their source, nothing much would happen for a few weeks, because millions of lines of code would take a long time to dissect...
    – TheEnvironmentalist
    Jul 25 '15 at 12:16







  • 1




    ...but then maybe someone would find some security hole, and start installing backdoors. At that point, they could frankly do whatever they want, and then they could actually shut down Facebook. It wouldn't be permanent, but Facebook would have to wipe all the servers, and reinstall everything, and while they could probably figure out a clever way of automating the process within a day or two, and then complete it in another, Facebook could still be down for days, and their reputation would be severely tarnished. That's not to mention that terabytes of personal data could have been stolen...
    – TheEnvironmentalist
    Jul 25 '15 at 12:20







  • 1




    There is no secret kill switch, but someone who knows exactly how Facebook works could devise any number of ways to attack Facebook, and if not destroy the company, cost them billions in damages. In fact, I think (not sure though) that some CPUs in use at Facebook might still have exploitable halt and catch fire bugs, in which case someone could actually destroy billions of dollars in servers. This also holds true in any other company operating on the same model. For this reason, it seems certain that Facebook and the other companies in its category restrict employee data, I'm just asking how.
    – TheEnvironmentalist
    Jul 25 '15 at 12:23






  • 2




    In IT, the more high-level an employee is, the less he knows about the nitty gritty details that could actually be used to bring the company down. Anyway: Every company restricts employee data. That's natural part of the business. You don't trust your waitress as much as your accountant. There's probably books written about it. And those that were written in the times where computers where rare beasts have not lost their validity.
    – nvoigt
    Jul 25 '15 at 12:39






  • 1




    @TheEnvironmentalist - If there is a remote exploitable hole in Facebook that allows an attacker to install backdoors on Facebook's servers, attackers don't need the source code or an internal user to find that. If it was known by an internal user, it would get patched (unless we're trying to defend against an undercover operative of a hacker collective). And it would undoubtedly affect a small fraction of servers that are responsible for one particular module not all of Facebook.
    – Justin Cave
    Jul 25 '15 at 14:40












up vote
2
down vote










up vote
2
down vote









I think you got some basics about information security and software engineering wrong.



Let's take a look at chess. When you first learn it, you plan your moves and hope your opponent will not see through your clever plans. If someone would announce your plan, you'd be doomed. Only later, when you get more professional about it, you will see that chess is an open game. Everybody knows everything. Making a move hoping your opponent will not notice your plan is child's play.



The same goes for computer programming. When you first read a book, you are like "I will make a secret password. It's 'fart'. No, that's not secrect enough, I'll make it 'secretfart'. Hihi. I'm secure, nobody will guess that.". But that's child's play. When you get more professional, you will learn methods to secure software and information that are not dependent on this kind of "secrets".



Facebook is no secret. Every programmer could probably code you a Facebook clone in weeks. Amazon is no secret. Look at the thousands of shops. They are successful because they run a successful business. Just like an offline shop might be a successful business even though every highschool drop out knows the "secret" of "buy goods for less than you sell for".



So yes, obviously you don't give your accounting software's administrative access to your janitor. But that's not hot new stuff. You did not give the key to the treasure chest to your stable boy 1000 years ago.



If Facebook published their source code... nothing much would happen. A few nerds would get a kick out of the cool parts, a few others would find a security hole that let them post pictures of reproductive organs on people's walls, and a third group would build a shadow Facebook on the same code that would never take off because nobody would join it. After two days, Facebook would be back to normal. There is no secret kill switch; that only exists in fiction, or in software written by kids.






share|improve this answer












answered Jul 25 '15 at 11:51









nvoigt

42.6k18105147



