What happens when I kill 'cp'? Is it safe and does it have any consequences?
What are the consequences for an ext4 filesystem when I terminate a copying cp command by typing Ctrl+C while it is running?
Does the filesystem get corrupted? Is the partition's space occupied by the incompletely copied file still usable after deleting it?
And, most importantly, is terminating a cp process a safe thing to do?
files filesystems ext4 file-copy
asked Sep 1 at 13:45, edited Sep 4 at 12:43 – Seninha
Keep in mind that while the answers are correct for ext4, filesystems without journaling may not be as safe. – Ave, Sep 2 at 1:32
@Ave Journaling has nothing to do with this. The syscalls are atomic regardless of what filesystem you use. Journaling is useful in situations where power may be abruptly lost. – forest, Sep 2 at 7:14
2 Answers
Accepted answer (18 votes), answered Sep 1 at 19:59 by forest, edited Sep 4 at 0:30
This is safe to do, but naturally you may not have finished the copy.
When the cp command is run, it makes syscalls that instruct the kernel to make copies of the file. A syscall is a function that an application can call to request a service from the kernel, such as reading or writing data on the disk. The userspace process simply waits for the syscall to finish. If you were to trace the calls, it would look something like:
open("/home/user/hello.txt", O_RDONLY) = 3
open("/mnt/hello.txt", O_CREAT|O_WRONLY, 0644) = 4
read(3, "Hello, world!n", 131072) = 14
write(4, "Hello, world!n", 14) = 14
close(3) = 0
close(4) = 0
This repeats for each file that is to be copied. No corruption will occur because of the way these syscalls work. Once a syscall like these has been entered, a fatal signal only takes effect after the syscall has finished, not while it is running. Because of this, forcibly killing the process only makes it terminate after the currently running syscall has returned. This means that the kernel, where the filesystem driver lives, is free to finish the operations it needs to complete to leave the filesystem in a sane state. I/O of this kind is never terminated in the middle of an operation, which makes these operations effectively atomic.
Interestingly, this is why commands like cp may not terminate immediately when they are killed. If you are copying a very large file and kill it, even with SIGKILL, the process will still run until the current syscall finishes. With a large file, this may take a while, as the process will be in an uninterruptible state.
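To make the trace above concrete, here is a minimal, illustrative sketch of a cp-style copy loop in C. It is not coreutils' actual implementation; the 131072-byte buffer simply mirrors the read() size seen in the strace output, and each read()/write() pair is one of the syscalls during which a delivered fatal signal has to wait.

/* Illustrative cp-style copy loop (not the real coreutils code). */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define BUF_SIZE 131072   /* matches the read() size seen in the strace output */

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s SRC DST\n", argv[0]);
        return 1;
    }

    int in = open(argv[1], O_RDONLY);                  /* open("src", O_RDONLY) */
    int out = open(argv[2], O_CREAT | O_WRONLY, 0644); /* open("dst", O_CREAT|O_WRONLY, 0644) */
    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }

    static char buf[BUF_SIZE];
    ssize_t n;
    /* Each read()/write() below is a syscall: a fatal signal delivered
     * mid-copy only terminates the process once the current call returns. */
    while ((n = read(in, buf, sizeof buf)) > 0) {
        if (write(out, buf, (size_t)n) != n) {
            perror("write");
            return 1;
        }
    }
    if (n < 0)
        perror("read");

    close(in);
    close(out);
    return 0;
}

Because each read() and write() here handles at most one buffer's worth of data, killing the process between calls simply leaves a truncated destination file, which can be deleted as usual.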
I tried strace cp and it seems to write in chunks of 131072 bytes. Maybe if I looked through cp's source I could see where this value comes from. – qwr, Sep 1 at 21:02
@qwr That's most likely part of the glibc library, not cp itself. It has various file access functions that internally use that as a value. – forest, Sep 1 at 21:03
Great answer! I'd never realized that there's a delay in terminating a cp after SIGKILLing it, even while dealing with large files... maybe the duration of those uninterruptible atomic operations of a process is too short. Does the same explanation work for killing dd and other disk-reading/writing processes? – Seninha, Sep 1 at 21:35
@qwr 128 KB chunks are the hardwired default in coreutils when reading from block devices; this is done in an effort to minimize syscalls. Analysis is given in the coreutils source: git.savannah.gnu.org/cgit/coreutils.git/tree/src/ioblksize.h – Fiisch, Sep 2 at 11:52
@AndrewHenle Perhaps I should have said that it's the filesystem metadata which is atomic. You are correct that a write may be partial. – forest, Sep 4 at 19:25
Answer (21 votes), answered Sep 1 at 13:52 by schily
Since cp is a userspace command, this does not affect filesystem integrity.
You of course need to be prepared that at least one file will not have been copied completely if you kill a running cp program.
Why the downvote? Just because it’s schily? – Stephen Kitt, Sep 1 at 13:54
There definitely seems to be at least one person that downvotes all my answers. Do you know of a way to find out who did the downvote? – schily, Sep 1 at 14:02
Not even moderators can find out who made specific votes - that is understandably restricted to SO employees. You can use the "contact us" link to ask them to investigate. – Philip Kendall, Sep 1 at 14:06
It would be pretty sad if a userspace program were able to compromise filesystem integrity. Note: Of course, there can be, there have been, and there will be bugs in filesystem implementations. Note #2: Also, of course, userspace programs running with elevated privileges (e.g. CAP_SYS_RAWIO in Linux or the equivalent in other OSs) that give them direct access to the underlying device of the filesystem (e.g. sudo dd if=/dev/urandom of=/dev/sda1) may wreak all sorts of havoc. – Jörg W Mittag, Sep 1 at 18:18
And if a filesystem was buggy enough to get corrupted after an interrupted cp, it would probably get corrupted from a finished cp too... – ilkkachu, Sep 1 at 19:59