Splitting to many zip files using 7-Zip












If I have a 100 GB folder and I zip it into split volumes, is there a difference between splitting it into 100 zip files of 1 GB each and 10 zip files of 10 GB each?



Do 100 zip files at 1 GB each take up more space than 10 zip files at 10 GB each?

































  • And you can't find out because?
    – Dave
    1 hour ago










  • Why can't you just try it?
    – Peter Mortensen
    13 mins ago














Tags: 7-zip






asked 5 hours ago by Upvotes All Downvoted Posts
edited 13 mins ago by Peter Mortensen

2 Answers























Answer 1 (accepted, score 5), answered 3 hours ago by Layne Bernardo










Let's find out!



100 MB files (27 pieces):



7z a -tzip -v100M ./100m/archive ./kali-linux-xfce-2018.2-amd64.iso



$ du ./100m/
2677884 ./100m/


10 MB files (262 pieces):



7z a -tzip -v10M ./10m/archive ./kali-linux-xfce-2018.2-amd64.iso



$ du ./10m/
2677908 ./10m


Results: the 10 MB split archive takes up an extra 24 KB. So yes, there is a difference: 100 files of 1 GB each will take up more space than 10 files of 10 GB each.



The difference seems to be negligible though. I would go for whichever is more convenient for you.
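
To answer the percentage question in the comments, here is a quick check of the numbers above (a sketch in Python; the du figures are in 1 KiB blocks):

```python
# Relative overhead of the finer split, from the du figures above (1 KiB blocks).
size_100m = 2_677_884  # total size of the 27 x 100 MB volumes
size_10m = 2_677_908   # total size of the 262 x 10 MB volumes

extra_kib = size_10m - size_100m
pct = 100 * extra_kib / size_100m
per_extra_volume = extra_kib * 1024 / (262 - 27)

print(f"{extra_kib} KiB extra, {pct:.6f}% of the archive")
print(f"~{per_extra_volume:.0f} bytes of overhead per additional volume")
```

So roughly 0.0009% here, around a hundred bytes per extra volume; scaled to 100 × 1 GB versus 10 × 10 GB, the 90 extra volumes would still cost only on the order of 10 KB.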





























  • du doesn't output the size in bytes by default (unless your 270M of files turned into 2,677,908 bytes). It does display the on-disk size of files, which may be different than the actual data size (maybe applicable for uploading or storing on other filesystems)
    – Xen2050
    3 hours ago










  • You are correct, it's actually outputting in KB. I've edited the answer to correct this discrepancy. The original file is a Kali Linux ISO, it is ~2.6GB. You have a good point about the on-disk size vs actual data size, I was specifically thinking about on-disk size because it accounts for the overhead of having additional files but you're right that it would be different depending on what you're actually doing with the archives.
    – Layne Bernardo
    3 hours ago










  • Sorry, I crossed with your largely similar answer while I was double-checking the run strings.
    – AFH
    3 hours ago










  • Zip file max size is 4GB.
    – pbies
    1 hour ago










  • Re "The difference seems to be negligible": What is it in %?
    – Peter Mortensen
    12 mins ago

















Answer 2 (score 5), answered 3 hours ago by AFH













Every file has a file-system overhead of unused logical sector space after the end-of-file, but this is eliminated if the split size is a multiple of the logical sector size (not necessarily true of my example below).



There may be extra bytes used by the extra directory entries, but these will not add to the used space unless the directory now occupies an extra logical sector.
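
As a rough model of that slack-space overhead (a sketch, assuming a 4 KiB allocation unit, which is a common default but varies by filesystem):

```python
# Expected slack: on average each piece wastes about half an allocation unit
# after its end-of-file, so the waste grows with the number of pieces. It
# drops to zero when the split size is an exact multiple of the unit.
UNIT = 4096  # assumed allocation-unit size in bytes; filesystem-dependent

def expected_slack(n_pieces, unit=UNIT):
    return n_pieces * unit // 2

print(expected_slack(100))  # 100 pieces -> 204800 bytes (~200 KiB)
print(expected_slack(10))   # 10 pieces  -> 20480 bytes (~20 KiB)
```

Even the worst case is tiny relative to a 100 GB archive, which matches the negligible difference measured in the other answer.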



The split files are identical in content to those created by a binary splitter program with the same split size.



I verified this on Linux by using the 7-Zip GUI on a 7+ MB file named File, giving 8 split volumes of 1 MB each (File.7z.00?), then creating a single full archive (Full.7z), which I split and compared with:



7z a -v1000000 File File; # Create split volumes File.7z.00? from the source file File
7z a Full File; # Create full archive Full.7z
split -b 1000000 -a 3 --numeric-suffixes=1 Full.7z Full.7z.; # Split full archive into Full.7z.00?
for f in {001..008}; do cmp Full.7z.$f File.7z.$f; done; # Compare splits with 7z volumes


To test on another OS you may need to download or write an appropriate splitter program.
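
A minimal splitter is only a few lines; here is a hypothetical Python sketch (the .001-style numeric suffix is chosen to match the volume names compared above):

```python
# Minimal binary file splitter, as a cross-platform stand-in for GNU split.
# The chunk size and suffix format are examples; adjust them to match the
# archive volumes you want to compare.
def split_file(path, chunk=1_000_000):
    pieces = []
    with open(path, "rb") as src:
        index = 0
        while True:
            data = src.read(chunk)
            if not data:
                break
            index += 1
            name = f"{path}.{index:03d}"
            with open(name, "wb") as out:
                out.write(data)  # write one fixed-size piece (last one may be short)
            pieces.append(name)
    return pieces
```

Concatenating the pieces back together (or cmp-ing them against 7-Zip's volumes, as above) confirms the content is a plain byte-for-byte split.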





