Bash has performance trouble using argument lists?
Calling a function that does nothing, f1 () { :; }, is a thousand times slower than calling : directly, but only if there are arguments set in the calling (parent) function. Why?
#!/bin/bash
TIMEFORMAT='%R'
n=1000
m=20000

f1 () { :; }

f2 () {
    i=0; time while [ "$((i+=1))" -lt "$n" ]; do :  ; done
    i=0; time while [ "$((i+=1))" -lt "$n" ]; do f1 ; done
}

test1 () {
    set -- $(seq "$m")
    f2 ""
    f2 "$@"
}

test1
Results of test1:
0.019
0.028
0.019
19.204
No arguments, input, or output are used in the function f1; a delay by a factor of a thousand is unexpected.¹
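(For scale, the following small harness is a sketch, not part of the original tests: it times 1000 no-op function calls while the caller holds m positional parameters, for growing m. If the cost is quadratic, each doubling of m should roughly quadruple the time.)

#!/bin/bash
TIMEFORMAT='%R'
f () { :; }
run () {
    # give this function "$1" positional parameters, then call f in a loop
    set -- $(seq "$1")
    i=0
    time while [ "$((i+=1))" -lt 1000 ]; do f; done
}
for m in 2500 5000 10000 20000; do
    echo "m=$m" >&2
    run "$m"
done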
Extending the test to several shells, the results are consistent: most shells have no trouble and show no such delay:
test2 () {
    for sh in dash mksh ksh zsh bash b55sh b56sh b50sh
    do
        echo "$sh" >&2
        # \time -f '\t%E' seq "$m" >/dev/null
        # \time -f '\t%E' "$sh" -c 'set -- $(seq '"$m"'); for i do :; done'
        \time -f '\t%E' "$sh" -c 'f() { :;}; while [ "$((i+=1))" -lt '"$n"' ]; do :  ; done' $(seq "$m")
        \time -f '\t%E' "$sh" -c 'f() { :;}; while [ "$((i+=1))" -lt '"$n"' ]; do f  ; done' $(seq "$m")
    done
}
test2
Results:
dash
0:00.01
0:00.01
mksh
0:00.01
0:00.02
ksh
0:00.01
0:00.02
zsh
0:00.02
0:00.04
bash
0:10.71
0:30.03
b55sh # --without-bash-malloc
0:00.04
0:17.11
b56sh # RELSTATUS=release
0:00.03
0:15.47
b50sh # Debug enabled (RELSTATUS=alpha)
0:04.62
xxxxxxx More than a day ......
Uncomment the other two tests to confirm that neither seq nor processing the argument list is the source of the delay.
¹ It is known that passing results as arguments increases the execution time. Thanks @slm.
Tags: linux · bash · time
edited Aug 12 at 21:34 · asked Aug 12 at 7:15 · Isaac
What are your values for $m and $n in the 2nd test? – schily, Aug 12 at 11:47
@schily The same values as in test1: n=1000 and m=20000. – Isaac, Aug 12 at 21:33
I don't understand why this question got downvoted. A dirty trick to reduce this performance issue is to save the arguments in an array, unset the argument list, and use the array instead: args=("$@"); set --; f() { :; }; for arg in "${args[@]}"; do f; done – nxnev, Aug 15 at 1:21
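(A sketch of nxnev's workaround applied to the question's test1; the structure follows the one-liner above, and "fast again" is the expected outcome rather than a measured result:)

#!/bin/bash
TIMEFORMAT='%R'
f () { :; }
test1 () {
    set -- $(seq 20000)
    args=("$@")   # save the arguments in an array
    set --        # clear the positional parameters
    i=0
    time while [ "$((i+=1))" -lt 1000 ]; do f; done   # fast again: no large $@ to save/restore per call
}
test1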
1 Answer
Copied from Why the delay in the loop?, at your request:
You can shorten the test case to:
time bash -c 'f() { :;}; for i do f; done' {0..10000}
It's calling a function while $@ is large that seems to trigger it.
My guess would be that the time is spent saving $@ onto a stack and restoring it afterwards. Possibly bash does it very inefficiently, by duplicating all the values or something like that. The time seems to be in O(n²).
You get the same kind of time in other shells for:
time zsh -c 'f() { :;}; for i do f "$@"; done' {0..10000}
That is where you do pass the list of arguments to the function, and this time the shell needs to copy the values (bash ends up being five times as slow for that one).
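(A sketch for timing that explicit-copy case side by side; it assumes GNU time is installed as /usr/bin/time, and the argument count is illustrative:)

for sh in bash zsh; do
    printf '%s\t' "$sh" >&2   # GNU time also reports on stderr, keeping output together
    /usr/bin/time -f '%e' "$sh" -c 'f() { :;}; for i do f "$@"; done' sh $(seq 10000)
done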
(I initially thought it was worse in bash 5 (currently in alpha), but that was down to malloc debugging being enabled in development versions, as noted by @egmont. Also check how your distribution builds bash if you want to compare your own build with the system's one; for instance, Ubuntu builds with --without-bash-malloc.)
answered Aug 12 at 8:12 · Stéphane Chazelas
How is the debugging removed? – Isaac, Aug 12 at 8:44
@Isaac, I did it by changing RELSTATUS=alpha to RELSTATUS=release in the configure script. – Stéphane Chazelas, Aug 12 at 8:45
Added test results for both --without-bash-malloc and RELSTATUS=release to the question. They still show a problem with the call to f. – Isaac, Aug 12 at 9:12
@Isaac, yes, I just said I was wrong to say that it was worse in bash 5. It's not worse, it's just as bad. – Stéphane Chazelas, Aug 12 at 9:35
No, it is not as bad. Bash 5 solves the problem with calling : and improves a little on calling f. Look at the test2 timings in the question. – Isaac, Aug 12 at 21:38
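(For reference, a sketch of producing the test builds discussed above; it assumes an unpacked bash 5 source tree, the binary names b55sh/b56sh match the question's, and the RELSTATUS edit is the one Stéphane Chazelas describes:)

# build 1: use the system allocator instead of bash's own (debugging) malloc
./configure --without-bash-malloc && make && cp bash ~/bin/b55sh
make distclean
# build 2: mark the tree as a release, which disables the alpha-only malloc debugging
sed -i 's/RELSTATUS=alpha/RELSTATUS=release/' configure
./configure && make && cp bash ~/bin/b56sh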