7zip - Splitting into many zip files

If I have a 100 GB folder and I split-zip it, is there a difference between splitting it into 100 zip files of 1 GB each and splitting it into 10 zip files of 10 GB each? Do 100 zip files of 1 GB each take up more space than 10 zip files of 10 GB each?

Tags: 7-zip

asked 2 hours ago

2 Answers

          Let's find out!



          100MB files (27 pieces):



          7z a -tzip -v100M ./100m/archive ./kali-linux-xfce-2018.2-amd64.iso



          $ du ./100m/
          2677884 ./100m/


          10MB files (262 pieces):



          7z a -tzip -v10M ./10m/archive ./kali-linux-xfce-2018.2-amd64.iso



          $ du ./10m/
          2677908 ./10m


          Results: the 10 MB split archive takes up an extra 24 KB on disk. So yes, there is a difference: 100 files of 1 GB each will take up more space than 10 files of 10 GB each.



          The difference seems to be negligible though. I would go for whichever is more convenient for you.
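
          For anyone repeating this, here is a small sketch of the same measurement that also reports the apparent data size alongside the on-disk size. The paths and ISO name are simply the ones from the commands above; p7zip and GNU coreutils du are assumed.

          # Sketch only: build both split sets and compare apparent vs. on-disk size.
          mkdir -p ./100m ./10m

          7z a -tzip -v100M ./100m/archive ./kali-linux-xfce-2018.2-amd64.iso
          7z a -tzip -v10M  ./10m/archive  ./kali-linux-xfce-2018.2-amd64.iso

          du -sb ./100m ./10m   # --bytes: apparent data size of each set
          du -sk ./100m ./10m   # allocated size in 1K blocks, as in the figures above

          Comparing du -sb across the two sets shows whether the archive data itself differs in size, while du -sk also includes the per-volume filesystem slack discussed in the comments below.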






          answered 41 mins ago by Layne Bernardo (new contributor), edited 15 mins ago

          • du doesn't output the size in bytes by default (unless your 270M of files turned into 2,677,908 bytes). It does display the on-disk size of files, which may be different than the actual data size (maybe applicable for uploading or storing on other filesystems).
            – Xen2050
            21 mins ago

          • You are correct; it's actually outputting in KB. I've edited the answer to correct this discrepancy. The original file is a Kali Linux ISO of about 2.6 GB. You have a good point about on-disk size vs actual data size: I was specifically thinking about on-disk size because it accounts for the overhead of having additional files, but you're right that it would differ depending on what you're actually doing with the archives.
            – Layne Bernardo
            13 mins ago

          • Sorry, I crossed with your largely similar answer while I was double-checking the run strings.
            – AFH
            12 mins ago

          Every file has a file-system overhead of unused logical-sector space after the end of file, but this is eliminated if the split size is a multiple of the logical sector size (not necessarily true of my example below).
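
          As a rough worked example (assuming 4 KiB allocation units, which is common but not universal): a 1 GB volume of 1,000,000,000 bytes occupies 244,141 allocation units, or 1,000,001,536 bytes, leaving 1,536 bytes of slack per volume, so 100 such volumes waste only about 150 KB against 100 GB of data; a volume size that is itself a multiple of 4 KiB wastes nothing at all. One hypothetical way to check the actual slack of each volume on Linux, using GNU stat (the volume names below are placeholders):

          # Sketch only: report data size vs. on-disk size and the slack for each volume.
          for f in archive.zip.*; do
              data=$(stat -c %s "$f")                                 # apparent size in bytes
              disk=$(( $(stat -c %b "$f") * $(stat -c %B "$f") ))     # allocated blocks * block size
              printf '%s: %d bytes data, %d bytes on disk, %d bytes slack\n' \
                     "$f" "$data" "$disk" "$((disk - data))"
          done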



          There may be extra bytes used by the additional directory entries, but these will only matter if the directory now occupies an extra logical sector.



          The split files are identical in content to those created by a binary splitter program with the same split size.



          I verified these on Linux using the GUI version on a file of just over 7 MB, giving 8 split files of 1 MB each with 7-Zip (File.7z.00?); I then created a single, full archive (Full.7z), which I split with:



          7z -v1000000 a File; # Create split volumes File.7z.00?
          7z a Full File; # Create full archive Full.7z
          split -b 1000000 -a 3 --numeric-suffixes=1 Full.7z Full.7z.; # Split full archive into Full.7z.00?
          for f in {001..008}; do cmp Full.7z.$f File.7z.$f; done; # Compare splits with 7z volumes


          To test on another OS you may need to download or write an appropriate splitter program.
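
          For illustration, a minimal splitter can be improvised from dd alone; this is a sketch only, assuming a POSIX shell and the same 1,000,000-byte piece size and three-digit numeric suffixes as above:

          # Sketch: split Full.7z into 1,000,000-byte pieces named Full.7z.001, .002, ...
          in=Full.7z
          size=1000000
          i=1
          while :; do
              suffix=$(printf '%03d' "$i")
              # copy one $size-byte block, skipping the blocks already written
              dd if="$in" of="$in.$suffix" bs="$size" skip=$((i - 1)) count=1 2>/dev/null
              # stop once a piece comes out empty (past end of input)
              [ -s "$in.$suffix" ] || { rm -f "$in.$suffix"; break; }
              i=$((i + 1))
          done

          Its output pieces should then compare equal to the 7-Zip volumes using the same cmp loop as above.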






          answered 34 mins ago by AFH
