Why is medicine modelling distributed across regular people's computers?
























There are some distributed computing projects on the Internet that try to find cures for diseases, such as World Community Grid. Why do such projects exist, and how useful are they? I would think that a serious researcher would be able to use a university's networks or supercomputers to do the modelling. So what is the idea behind spreading the modelling across regular computers? Does medical modelling require so much computing power that the computers at universities are not enough?










1 Answer




































I don't know how readily available supercomputers are to researchers and universities, but I would imagine that a big part of the answer to your question comes down to cost.

Supercomputers vs Distributed Computing Projects

Computer performance is measured in FLOPS (floating-point operations per second). In June 2018, Summit, an IBM-built supercomputer now running at the Department of Energy's (DOE) Oak Ridge National Laboratory (ORNL), captured the number one spot for the fastest computer performance at 122.3 petaFLOPS on the LINPACK benchmark, where peta is 10^15. By comparison, the fastest home PC processor, at a cost of around $2,000, provides approximately 1 teraFLOPS, where tera is 10^12.
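To get a feel for the scale gap, here is a quick back-of-the-envelope sketch. The Summit figure is the LINPACK number quoted above; the 1 teraFLOPS home PC figure is an assumption, not a measured benchmark.

    # Rough comparison of Summit vs. a single high-end home PC
    summit_pflops = 122.3                 # Summit, LINPACK, June 2018 (petaFLOPS)
    home_pc_tflops = 1.0                  # assumed ~$2,000 home PC (teraFLOPS)

    summit_tflops = summit_pflops * 1000  # 1 petaFLOPS = 1,000 teraFLOPS
    pcs_needed = summit_tflops / home_pc_tflops
    print(f"Home PCs needed to match Summit: ~{pcs_needed:,.0f}")
    # -> Home PCs needed to match Summit: ~122,300

In other words, even by this crude estimate it would take on the order of a hundred thousand ordinary PCs to match one top-end supercomputer, which is exactly the scale volunteer projects aim for.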



For distributed computing projects, let's look at Folding@home.

The project uses the idle processing resources of thousands of personal computers owned by volunteers who have installed the software on their systems. Its main purpose is to determine the mechanisms of protein folding, which is the process by which proteins reach their final three-dimensional structure, and to examine the causes of protein misfolding. This is of significant academic interest with major implications for medical research into Alzheimer's disease, Huntington's disease, and many forms of cancer, among other diseases. To a lesser extent, Folding@home also tries to predict a protein's final structure and determine how other molecules may interact with it, which has applications in drug design. Folding@home is developed and operated by the Pande Laboratory at Stanford University.

[...]

Since its launch on October 1, 2000, the Pande Lab has produced 200 scientific research papers as a direct result of Folding@home [see https://foldingathome.org/papers-results].

Stats provided by Folding@home at https://stats.foldingathome.org/os state that their project provides a total performance of 47,344 Native teraFLOPS or 98,747 x86 teraFLOPS.

[Figure: Folding@home stats by OS]



Note that these teraFLOPS values come from the software cores, not from the peak values in CPU/GPU specs, and these figures only just beat the performance of China's Sunway TaihuLight, which in 2016 was ranked the world's fastest with 93 petaFLOPS on the LINPACK benchmark (it is now the second-fastest supercomputer).
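The comparison is easy to check from the numbers quoted above (a sketch only; both figures are snapshots and drift over time):

    # Folding@home throughput vs. Sunway TaihuLight, using the quoted figures
    fah_x86_tflops = 98_747             # Folding@home, x86 teraFLOPS (snapshot)
    taihulight_pflops = 93.0            # Sunway TaihuLight, LINPACK petaFLOPS

    fah_pflops = fah_x86_tflops / 1000  # teraFLOPS -> petaFLOPS
    print(f"Folding@home:      ~{fah_pflops:.1f} petaFLOPS")
    print(f"Sunway TaihuLight:  {taihulight_pflops:.1f} petaFLOPS")
    print(f"Ratio:              {fah_pflops / taihulight_pflops:.2f}x")
    # -> roughly 98.7 petaFLOPS vs 93 petaFLOPS, about 1.06x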



Cost

IBM's Summit supercomputer cost $200 million to build and, according to Wikipedia, the Sunway TaihuLight cost $273 million. When you consider that the computing performance of Folding@home is provided by volunteers (so the system is essentially free to the project), it is a no-brainer that the computing power on offer should not be turned away.
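Put side by side as a rough cost-per-petaFLOPS figure (a sketch only: it ignores electricity and maintenance, and volunteers still pay for their own hardware and power):

    # Very rough cost per petaFLOPS, using the build costs and FLOPS quoted above
    systems = {
        "Summit":            (200e6, 122.3),  # (build cost in USD, petaFLOPS)
        "Sunway TaihuLight": (273e6, 93.0),
        "Folding@home":      (0.0, 98.7),     # hardware donated by volunteers
    }

    for name, (cost_usd, pflops) in systems.items():
        print(f"{name:18s} ${cost_usd / pflops / 1e6:5.2f} million per petaFLOPS")
    # -> Summit ~$1.64M, TaihuLight ~$2.94M, Folding@home $0.00 per petaFLOPS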





