Optimal strategy for cutting a sausage?

You are a student assigned to work in the cafeteria today, and it is your duty to divide the available food between all the students. Today's food is a sausage of 1 m length, and you need to cut it into as many pieces as there are students at lunch, including yourself.



The problem is that the knife is operated by the revolving door through which the students enter, so every time a student comes in, the knife comes down and you place a cut. There is no way for you to know whether more students will come, so after each cut the sausage should be divided into pieces of approximately equal length.



So here is the question: is it possible to place the cuts so that the ratio of the largest to the smallest piece always stays below 2?



And if so, what is the smallest possible ratio?



Example 1 (unit is cm):



  • 1st cut: 50 : 50 ratio: 1

  • 2nd cut: 50 : 25 : 25 ratio: 2 - bad

Example 2



  • 1st cut: 40 : 60 ratio: 1.5

  • 2nd cut: 40 : 30 : 30 ratio: 1.33

  • 3rd cut: 20 : 20 : 30 : 30 ratio: 1.5

  • 4th cut: 20 : 20 : 30 : 15 : 15 ratio: 2 - bad

Sorry for the awful analogy; I think this is a math problem, but I have no real idea how to formulate it in a proper mathematical way.
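
For concreteness, here is a minimal Python sketch (my own addition, not part of the original question; the helper names replay and max_min_ratio are mine) that replays a cut sequence and reports the largest-to-smallest ratio after each cut. It reproduces Example 2 above.

    def max_min_ratio(pieces):
        """Ratio of the largest to the smallest piece."""
        return max(pieces) / min(pieces)

    def replay(cuts, total=100.0):
        """Replay cuts on a sausage of `total` cm; each cut is (index_of_piece_to_cut, length_of_one_part)."""
        pieces = [total]
        for idx, part in cuts:
            whole = pieces.pop(idx)
            pieces += [part, whole - part]   # the chosen piece is replaced by its two parts
            print(sorted(pieces, reverse=True), "ratio:", round(max_min_ratio(pieces), 2))

    # Example 2 from the question: the ratio stays below 2 for three cuts and hits 2 at the fourth.
    replay([(0, 40), (1, 30), (0, 20), (1, 15)])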







asked Aug 14 at 9:27 by Stenzel, edited Aug 14 at 16:44

  • Can we have more pieces of sausage than students? – dEmigOd, Aug 14 at 9:43

  • @JackD'Aurizio: However, for that you don't need a 1/3 cut to begin with. Simply always bisect the largest chunk you have. – Henning Makholm, Aug 14 at 9:53

  • @JackM You can cut segments that have been cut before; you aren't serving them as you cut them. – Mario Carneiro, Aug 14 at 13:10

  • You should add the implied condition that the sausage is only served after all students have entered. – Mindwin, Aug 14 at 17:09

  • Do you serve them as they come in, or do you serve them after they have all sat down? I was thinking that you just serve half your sausage to each kid as he came. Sort of sucks that one kid gets half a meter of sausage, and others are going to get molecules... – Limited Atonement, Aug 14 at 18:46












3 Answers

Accepted answer (58 votes), answered Aug 14 at 12:35 by Mario Carneiro, edited Aug 15 at 13:39

TLDR: $a_n=\log_2(1+1/n)$ works, and is the only smooth solution.



This problem hints at a deeper mathematical question, as follows. As has been observed by Pongrácz, there is a great deal of possible variation in solutions to this problem. I would like to find a "best" solution, where the sequence of pieces is somehow as evenly distributed as possible, given the constraints.



Let us fix the following strategy: at stage $n$ there are $n$ pieces, of lengths $a_n,\dots,a_{2n-1}$, ordered by decreasing length. You cut $a_n$ into two pieces, forming $a_{2n}$ and $a_{2n+1}$. We have the following constraints:



$$a_1=1\qquad a_n=a_{2n}+a_{2n+1}\qquad a_n\ge a_{n+1}\qquad a_n<2a_{2n-1}$$



I would like to find a nice function $f(x)$ that interpolates all these $a_n$s (and possibly generalizes the relation $a_n=a_{2n}+a_{2n+1}$ as well).



First, it is clear that the only degree of freedom is in the choice of cut, which is to say that if we take any sequence $b_n\in(1/2,1)$ then we can define $a_{2n}=a_nb_n$ and $a_{2n+1}=a_n(1-b_n)$, and this will completely define the sequence $a_n$.



Now we should expect that $a_n$ is asymptotic to $1/n$, since it drops by a factor of $2$ every time $n$ doubles. Thus one regularity condition we can impose is that $na_n$ converges. If we consider the "baseline solution" where every cut is at $1/2$, producing the sequence



$$1,\frac12,\frac12,\frac14,\frac14,\frac14,\frac14,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\frac18,\dots$$
(which is not technically a solution because of the strict inequality, but is on the boundary of solutions), then we see that $na_n$ in fact does not tend to a limit - it varies between $1$ and $2$.



If we average this exponentially, by considering the function $g(x)=2^xa_{\lfloor 2^x\rfloor}$, then we get a function which gets closer and closer to being periodic with period $1$. That is, there is a function $h(x):[0,1]\to\Bbb R$ such that $g(x+n)\to h(x)$, and we need this function to be constant if we want $g(x)$ itself to have a limit.



There is a very direct relation between $h(x)$ and the $b_n$s. If we increase $b_1$ while leaving everything else the same, then $h(x)$ will be scaled up on $[0,\log_2(3/2)]$ and scaled down on $[\log_2(3/2),1]$. None of the other $b_i$'s control this left-right balance - they make $h(x)$ larger in some subregion of one or the other of these intervals only, but preserving $\int_0^{\log_2(3/2)}h(x)\,dx$ and $\int_{\log_2(3/2)}^1h(x)\,dx$.



Thus, to keep these balanced we should let $b_1=\log_2(3/2)$. More generally, each $b_n$ controls the balance of $h$ on the intervals $[\log_2(2n),\log_2(2n+1)]$ and $[\log_2(2n+1),\log_2(2n+2)]$ (reduced $\bmod 1$), so we must set them to
$$b_n=\frac{\log_2(2n+1)-\log_2(2n)}{\log_2(2n+2)-\log_2(2n)}=\frac{\log(1+1/2n)}{\log(1+1/n)}.$$



When we do this, a miracle occurs, and $a_n=\log_2(1+1/n)$ becomes analytically solvable:
\begin{align}
a_1&=\log_2(1+1/1)=1\\
a_{2n}+a_{2n+1}&=\log_2\Big(1+\frac1{2n}\Big)+\log_2\Big(1+\frac1{2n+1}\Big)\\
&=\log_2\left[\Big(1+\frac1{2n}\Big)\Big(1+\frac1{2n+1}\Big)\right]\\
&=\log_2\left[1+\frac{2n+(2n+1)+1}{2n(2n+1)}\right]\\
&=\log_2\left[1+\frac1n\right]=a_n.
\end{align}



As a bonus, we obviously have that the $a_n$ sequence is decreasing, and if $m<2n$, then
\begin{align}
2a_m&=2\log_2\Big(1+\frac1m\Big)=\log_2\Big(1+\frac1m\Big)^2=\log_2\Big(1+\frac2m+\frac1{m^2}\Big)\\
&\ge\log_2\Big(1+\frac2m\Big)>\log_2\Big(1+\frac2{2n}\Big)=a_n,
\end{align}



so this is indeed a proper solution, and we have also attained our smoothness goal: $na_n$ converges, to $\frac1{\log 2}=\log_2 e$. It is also worth noting that the ratio of the largest and smallest piece has limit exactly $2$, which validates Henning Makholm's observation that you can't do better than $2$ in the limit.



It looks like this (rounded to the nearest hundredth, so the numbers may not add to 100 exactly; a quick numerical check follows the list):



  • $58:42$, ratio = $1.41$

  • $42:32:26$, ratio = $1.58$

  • $32:26:22:19$, ratio = $1.67$

  • $26:22:19:17:15$, ratio = $1.73$

  • $22:19:17:15:14:13$, ratio = $1.77$
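
As a sanity check (my own sketch, not part of the original answer; the helper name stage is mine), the closed form reproduces the percentages and ratios listed above, the pieces always sum to 1, and the largest-to-smallest ratio creeps up toward 2:

    from math import log2

    def stage(n):
        """Piece lengths when there are n pieces: a_n, ..., a_{2n-1} with a_k = log2(1 + 1/k)."""
        return [log2(1 + 1 / k) for k in range(n, 2 * n)]

    for n in range(2, 7):
        p = stage(n)
        print(" : ".join(f"{100 * x:.0f}" for x in p),
              "  ratio =", round(p[0] / p[-1], 2),
              "  total =", round(sum(p), 9))        # the pieces always sum to exactly 1

    print(round(stage(1000)[0] / stage(1000)[-1], 4))   # the max/min ratio tends to 2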

If you are working with a sequence of points treated $\bmod 1$, where the intervals between the points are the "sausages", then this sequence of segments is generated by $p_n=\log_2(2n+1)\bmod 1$. The result is beautifully uniform but with a noticeable sweep edge:



[animated figure: the cut points $p_n=\log_2(2n+1)\bmod 1$ sweeping across the unit interval]
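
To make the connection concrete, here is a small sketch (my own, with hypothetical helper names cut_points and intervals) showing that the gaps between these points are exactly the piece lengths $a_k$ from the closed form:

    from math import log2

    def cut_points(k):
        """Cut points p_n = log2(2n+1) mod 1 for n = 0..k, plus the right endpoint 1."""
        return sorted(log2(2 * n + 1) % 1 for n in range(k + 1)) + [1.0]

    def intervals(k):
        """Lengths of the k+1 intervals between consecutive cut points."""
        pts = cut_points(k)
        return [b - a for a, b in zip(pts, pts[1:])]

    # With k = 3 there are 4 pieces; they coincide with a_4, ..., a_7 = log2(1 + 1/j):
    print([round(x, 4) for x in intervals(3)])
    print([round(log2(1 + 1 / j), 4) for j in range(4, 8)])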



A more concrete optimality condition that picks this solution uniquely is the following: we require that for any fraction $0\le x\le 1$, the sausage at the $x$ position (give or take a sausage) in the list, sorted in decreasing order, should be at most $c(x)$ times smaller than the largest at all times. This solution achieves $c(x)=x+1$ for all $0\le x\le 1$, and no solution can do better than that (in the limit) for any $x$.






  • Wow, thank you very much! I must admit it will take me some time to fully understand this. – Stenzel, Aug 14 at 12:48

  • Thank you! Believe it or not, I'm facing a similar programming problem: trying to figure out the best way to pick colors for players on a color wheel such that the hue difference between any two players is always as large as it can possibly be, and new players can join the game at any time. This post points me in the right direction. – GOTO 0, Aug 14 at 18:09

  • @GOTO0 It's interesting that you mention that, because for that problem I usually recommend golden ratio spacing (i.e. $p_n=n\phi\bmod 1$). I know that is optimal when the points are evenly spaced, but it hadn't occurred to me before that a nonuniform spacing like this log solution is actually better distributed. – Mario Carneiro, Aug 14 at 18:40

  • @Ryan Well, I guess simply cutting the largest piece in two parts gives an even better distribution than the golden ratio spacing, with a maximum ratio of 2. It also makes it very efficient to calculate the cutting points $p_n$: write $n$ in base 2 and mirror the digits around the "decimal" (binary) point. It also guarantees that you have at most two different lengths at any time. So... – GOTO 0, Aug 15 at 9:14

  • @cmh Mathematica: Export["sausage.gif", Table[Graphics[Line[{{#, 0}, {#, 1}} & /@ Append[Table[Mod[Log[2, 2 n + 1], 1], {n, 0, 2^k + 10 k}], 1]], PlotRange -> {{0, 1}, {0, 1}}, AspectRatio -> 1/10, ImageSize -> 400], {k, -0.1, 10, .025}]] – Mario Carneiro, Aug 15 at 16:33


















Answer (36 votes), answered Aug 14 at 9:40 by A. Pongrácz, edited Aug 14 at 17:55

YES, it is possible!



You mustn't cut a piece into two equal halves, because eventually you will have to cut one of those halves, and then you violate the requirement.
So in fact, you must never have two equal parts.
Make the first cut so that the condition is not violated, say $60:40$.



From now on, assume that the ratio of biggest over smallest is strictly less than $2$ in a given round, and no two pieces are equal. (This holds for the $60:40$ cut.)
We construct a good cut that maintains this property.



So at the next turn, pick the biggest piece and cut it into two unequal pieces in an $a:b$ ratio that is very close to equal (so $a/b\approx 1$). All you have to make sure of is that



  • $a/b$ is so close to $1$ that the two new pieces are both smaller than the smallest piece from the last round.

  • $a/b$ is so close to $1$ that the smaller piece is bigger than half of the second biggest in the last round (which is going to become the biggest piece in this round).

Then the condition is preserved.
For example, from $60:40$ you can move to $25:35:40$, then cut the forty to obtain $19:21:25:35$, etc.
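
Here is a short Python sketch of this inductive strategy (my own illustration; next_cut and the particular placement of the cut inside the allowed interval are mine, one concrete way to satisfy the two bullet points above):

    def next_cut(pieces):
        """Cut the biggest piece into two unequal parts chosen so that both new parts are
        smaller than the current smallest piece and the smaller one exceeds half of the
        second-biggest piece. A full implementation should also avoid exact ties."""
        pieces = sorted(pieces, reverse=True)
        biggest, second, smallest = pieces[0], pieces[1], pieces[-1]
        lo = max(biggest - smallest, second / 2)   # lower bound on the smaller new part
        hi = biggest / 2                           # strict upper bound keeps the parts unequal
        x = lo + 0.49 * (hi - lo)                  # any value strictly inside (lo, hi) will do
        return sorted(pieces[1:] + [x, biggest - x], reverse=True)

    pieces = [60.0, 40.0]                          # the special first cut
    for _ in range(20):
        pieces = next_cut(pieces)
        ratio = max(pieces) / min(pieces)
        assert ratio < 2                           # the invariant from the induction
        print(len(pieces), "pieces, max/min =", round(ratio, 3))

After each round the new largest piece is the old second-largest, and the new smallest piece is bigger than half of it, so the max/min ratio stays strictly below 2.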






  • (+1) A very nice abstract, inductive construction. Much lighter to read than the concrete analytical solution from the other answer (which of course is also valuable). – Adayah, Aug 14 at 19:12

  • Thank you for your feedback! – A. Pongrácz, Aug 14 at 19:13


















Answer (12 votes), by Henning Makholm


You can't do better than a factor of $2$.



Assume to the contrary that you have a strategy such that the ratio between the largest and smallest remaining piece is always $<R$ for some $R<2$.



Then, first we can see that for every $\varepsilon>0$ there will eventually be two pieces whose ratio is at most $1+\varepsilon$. Otherwise the ratio between the largest and smallest piece would be at least $(1+\varepsilon)^n$, which eventually exceeds $R$.



Once you have two pieces of length $a$ and $b$, with $a < b < (1+\varepsilon)a$, eventually you will have to cut one of them.



If you cut $a$ first, one of the pieces will be at most $\frac12 a < \frac12 b$, so your goal has immediately failed.



However, if you cut $b$ first, one of the pieces will be at most $\frac12 b < \frac{1+\varepsilon}2 a$, which means you've lost if we choose $\varepsilon$ to be small enough that $\frac{1+\varepsilon}2 < \frac1R$. And that is always possible if $R<2$.
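
(A concrete instance, added here for illustration and not part of the original answer: if you aimed for, say, $R=1.9$, then any $\varepsilon$ with $\frac{1+\varepsilon}2<\frac1{1.9}$, i.e. $\varepsilon<\frac{2}{1.9}-1\approx 0.053$, makes the argument go through, so even a bound of $1.9$ is unattainable.)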






  • Did I misunderstand the problem? According to the way I understood it, at each round the ratio of the biggest and smallest elements must be less than 2. You showed that a stronger requirement is impossible, namely that there exists no number $R$ less than $2$ so that the ratio is always less than $R$. This is a nice follow-up observation once you solve the problem, but it is not a solution to the problem. But then why was it accepted? – A. Pongrácz, Aug 14 at 10:00

  • @A.Pongrácz: Hmm, you're right. I misread the requirement, being too occupied with figuring out what the best limiting ratio is. – Henning Makholm, Aug 14 at 10:12

  • You never fail! See my answer. Henning Makholm's answer is indeed elegant, and an important part of the post-mortem. It is not the solution, though. – A. Pongrácz, Aug 14 at 10:17

  • @Stenzel You're wrong. It's not true that Henning Makholm's 'answer is an elegant proof that the requirement of the ratio being always < 2 is not possible'. Henning Makholm proves that, for any chosen $R$ less than $2$, the obtained ratio will eventually get above $R$. Which is much stronger than what you required. OTOH, A. Pongrácz proves exactly what you requested, namely that you can keep the ratio below $2$; it will inevitably approach $2$ (which follows from the proof by H.M.), but it can stay below $2$. – CiaPan, Aug 14 at 11:02

  • @Stenzel: When you do numerical simulation, you must always remember numerical error and instability. In particular, if you keep dividing a floating point number by 2 in many languages, it will eventually become zero or underflow. Also, even if you use arbitrary-precision floating point, the numerical errors can still accumulate, even if you keep increasing the precision, because at each step you are only using some finite precision, and hence that may explain what you are observing. However, in most cases you should observe that increasing the precision will delay the failure longer. – user21820, Aug 14 at 15:08










Your Answer




StackExchange.ifUsing("editor", function ()
return StackExchange.using("mathjaxEditing", function ()
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
);
);
, "mathjax-editing");

StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "69"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);

else
createEditor();

);

function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
convertImagesToLinks: true,
noModals: false,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);



);













 

draft saved


draft discarded


















StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f2882265%2foptimal-strategy-for-cutting-a-sausage%23new-answer', 'question_page');

);

Post as a guest






























3 Answers
3






active

oldest

votes








3 Answers
3






active

oldest

votes









active

oldest

votes






active

oldest

votes








up vote
58
down vote



accepted










TLDR: $a_n=log_2(1+1/n)$ works, and is the only smooth solution.



This problem hints at a deeper mathematical question, as follows. As has been observed by Pongrácz, there is a great deal of possible variation in solutions to this problem. I would like to find a "best" solution, where the sequence of pieces is somehow as evenly distributed as possible, given the constraints.



Let us fix the following strategy: at stage $n$ there are $n$ pieces, of lengths $a_n,dots,a_2n-1$, ordered in decreasing length. You cut $a_n$ into two pieces, forming $a_2n$ and $a_2n+1$. We have the following constraints:



$$a_1=1qquad a_n=a_2n+a_2n+1qquad a_nge a_n+1qquad a_n<2a_2n-1$$



I would like to find a nice function $f(x)$ that interpolates all these $a_n$s (and possibly generalizes the relation $a_n=a_2n+a_2n+1$ as well).



First, it is clear that the only degree of freedom is in the choice of cut, which is to say if we take any sequence $b_nin (1/2,1)$ then we can define $a_2n=a_nb_n$ and $a_2n+1=a_n(1-b_n)$, and this will completely define the sequence $a_n$.



Now we should expect that $a_n$ is asymptotic to $1/n$, since it drops by a factor of $2$ every time $n$ doubles. Thus one regularity condition we can impose is that $na_n$ converges. If we consider the "baseline solution" where every cut is at $1/2$, producing the sequence



$$1,frac12,frac12,frac14,frac14,frac14,frac14,frac18,frac18,frac18,frac18,frac18,frac18,frac18,frac18,dots$$
(which is not technically a solution because of the strict inequality, but is on the boundary of solutions), then we see that $na_n$ in fact does not tend to a limit - it varies between $1$ and $2$.



If we average this exponentially, by considering the function $g(x)=2^xa_lfloor 2^xrfloor$, then we get a function which gets closer and closer to being periodic with period $1$. That is, there is a function $h(x):[0,1]toBbb R$ such that $g(x+n)to h(x)$, and we need this function to be constant if we want $g(x)$ itself to have a limit.



There is a very direct relation between $h(x)$ and the $b_n$s. If we increase $b_1$ while leaving everything else the same, then $h(x)$ will be scaled up on $[0,log_2 (3/2)]$ and scaled down on $[log_2 (3/2),1]$. None of the other $b_i$'s control this left-right balance - they make $h(x)$ larger in some subregion of one or the other of these intervals only, but preserving $int_0^log_2(3/2)h(x),dx$ and $int_log_2(3/2)^1h(x),dx$.



Thus, to keep these balanced we should let $b_1=log_2(3/2)$. More generally, each $b_n$ controls the balance of $h$ on the intervals $[log_2(2n),log_2(2n+1)]$ and $[log_2(2n+1),log_2(2n+2)]$ (reduced$bmod 1$), so we must set them to
$$b_n=fraclog_2(2n+1)-log_2(2n)log_2(2n+2)-log_2(2n)=fraclog(1+1/2n)log(1+1/n).$$



When we do this, a miracle occurs, and $a_n=log_2(1+1/n)$ becomes analytically solvable:
beginalign
a_1&=log_2(1+1/1)=1\
a_2n+a_2n+1&=log_2Big(1+frac12nBig)+log_2Big(1+frac12n+1Big)\
&=log_2left[Big(1+frac12nBig)Big(1+frac12n+1Big)right]\
&=log_2left[1+frac2n+(2n+1)+12n(2n+1)right]\
&=log_2left[1+frac1nright]=a_n.
endalign



As a bonus, we obviously have that the $a_n$ sequence is decreasing, and if $m<2n$, then
beginalign
2a_m&=2log_2Big(1+frac1mBig)=log_2Big(1+frac1mBig)^2=log_2Big(1+frac2m+frac1m^2Big)\
&gelog_2Big(1+frac2mBig)>log_2Big(1+frac22nBig)=a_n,
endalign



so this is indeed a proper solution, and we have also attained our smoothness goal — $na_n$ converges, to $frac 1log 2=log_2e$. It is also worth noting that the difference between the largest and smallest piece has limit exactly $2$, which validates Henning Makholm's observation that you can't do better than $2$ in the limit.



It looks like this (rounded to the nearest hundred, so the numbers may not add to 100 exactly):



  • $58:42$, ratio = $1.41$

  • $42:32:26$, ratio = $1.58$

  • $32:26:22:19$, ratio = $1.67$

  • $26:22:19:17:15$, ratio = $1.73$

  • $22:19:17:15:14:13$, ratio = $1.77$

If you are working with a sequence of points treated$bmod 1$, where the intervals between the points are the "sausages", then this sequence of segments is generated by $p_n=log_2(2n+1)bmod 1$. The result is beautifully uniform but with a noticeable sweep edge:



                                  sausages



A more concrete optimality condition that picks this solution uniquely is the following: we require that for any fraction $0le xle 1$, the sausage at the $x$ position (give or take a sausage) in the list, sorted in decreasing order, should be at most $c(x)$ times smaller than the largest at all times. This solution achieves $c(x)=x+1$ for all $0le xle 1$, and no solution can do better than that (in the limit) for any $x$.






share|cite|improve this answer


















  • 1




    Wow, thank you very much! I must admit it will take me some time to fully understand this.
    – Stenzel
    Aug 14 at 12:48






  • 21




    Thank you! Believe it or not I'm facing a similar programming problem trying to figure out the best way to pick up colors for players on a color wheel such that the hue difference between any two players is always as large as it can possibly be and new players can join the game at any time. Now this post hints me into the right direction.
    – GOTO 0
    Aug 14 at 18:09






  • 11




    @GOTO0 It's interesting that you mention that, because for that problem I usually recommend golden ratio spacing (i.e. $p_n=nphibmod 1$). I know that is optimal when the points are evenly spaced, but it hadn't occurred to me before that nonuniform spacing like this log solution are actually better distributed.
    – Mario Carneiro
    Aug 14 at 18:40






  • 1




    @Ryan Well, I guess simply cutting the largest piece in two parts gives an even better distribution than the golden ratio spacing, with a maximum ratio of 2. It makes it also very efficient to calculate the cutting points $p_n$, if you write $n$ in base 2 and mirror the digits around the "decimal" (binary) point. It also guarantees that you have at most two different lengths at any time. So...
    – GOTO 0
    Aug 15 at 9:14






  • 4




    @cmh Mathematica: Export["sausage.gif", Table[Graphics[ Line[#, 0, #, 1 & /@ Append[Table[Mod[Log[2, 2 n + 1], 1], n, 0, 2^k + 10 k], 1]], PlotRange -> 0, 1, 0, 1, AspectRatio -> 1/10, ImageSize -> 400], k, -0.1, 10, .025]]
    – Mario Carneiro
    Aug 15 at 16:33














up vote
58
down vote



accepted










TLDR: $a_n=log_2(1+1/n)$ works, and is the only smooth solution.



This problem hints at a deeper mathematical question, as follows. As has been observed by Pongrácz, there is a great deal of possible variation in solutions to this problem. I would like to find a "best" solution, where the sequence of pieces is somehow as evenly distributed as possible, given the constraints.



Let us fix the following strategy: at stage $n$ there are $n$ pieces, of lengths $a_n,dots,a_2n-1$, ordered in decreasing length. You cut $a_n$ into two pieces, forming $a_2n$ and $a_2n+1$. We have the following constraints:



$$a_1=1qquad a_n=a_2n+a_2n+1qquad a_nge a_n+1qquad a_n<2a_2n-1$$



I would like to find a nice function $f(x)$ that interpolates all these $a_n$s (and possibly generalizes the relation $a_n=a_2n+a_2n+1$ as well).



First, it is clear that the only degree of freedom is in the choice of cut, which is to say if we take any sequence $b_nin (1/2,1)$ then we can define $a_2n=a_nb_n$ and $a_2n+1=a_n(1-b_n)$, and this will completely define the sequence $a_n$.



Now we should expect that $a_n$ is asymptotic to $1/n$, since it drops by a factor of $2$ every time $n$ doubles. Thus one regularity condition we can impose is that $na_n$ converges. If we consider the "baseline solution" where every cut is at $1/2$, producing the sequence



$$1,frac12,frac12,frac14,frac14,frac14,frac14,frac18,frac18,frac18,frac18,frac18,frac18,frac18,frac18,dots$$
(which is not technically a solution because of the strict inequality, but is on the boundary of solutions), then we see that $na_n$ in fact does not tend to a limit - it varies between $1$ and $2$.



If we average this exponentially, by considering the function $g(x)=2^xa_lfloor 2^xrfloor$, then we get a function which gets closer and closer to being periodic with period $1$. That is, there is a function $h(x):[0,1]toBbb R$ such that $g(x+n)to h(x)$, and we need this function to be constant if we want $g(x)$ itself to have a limit.



There is a very direct relation between $h(x)$ and the $b_n$s. If we increase $b_1$ while leaving everything else the same, then $h(x)$ will be scaled up on $[0,log_2 (3/2)]$ and scaled down on $[log_2 (3/2),1]$. None of the other $b_i$'s control this left-right balance - they make $h(x)$ larger in some subregion of one or the other of these intervals only, but preserving $int_0^log_2(3/2)h(x),dx$ and $int_log_2(3/2)^1h(x),dx$.



Thus, to keep these balanced we should let $b_1=log_2(3/2)$. More generally, each $b_n$ controls the balance of $h$ on the intervals $[log_2(2n),log_2(2n+1)]$ and $[log_2(2n+1),log_2(2n+2)]$ (reduced$bmod 1$), so we must set them to
$$b_n=fraclog_2(2n+1)-log_2(2n)log_2(2n+2)-log_2(2n)=fraclog(1+1/2n)log(1+1/n).$$



When we do this, a miracle occurs, and $a_n=log_2(1+1/n)$ becomes analytically solvable:
beginalign
a_1&=log_2(1+1/1)=1\
a_2n+a_2n+1&=log_2Big(1+frac12nBig)+log_2Big(1+frac12n+1Big)\
&=log_2left[Big(1+frac12nBig)Big(1+frac12n+1Big)right]\
&=log_2left[1+frac2n+(2n+1)+12n(2n+1)right]\
&=log_2left[1+frac1nright]=a_n.
endalign



As a bonus, we obviously have that the $a_n$ sequence is decreasing, and if $m<2n$, then
beginalign
2a_m&=2log_2Big(1+frac1mBig)=log_2Big(1+frac1mBig)^2=log_2Big(1+frac2m+frac1m^2Big)\
&gelog_2Big(1+frac2mBig)>log_2Big(1+frac22nBig)=a_n,
endalign



so this is indeed a proper solution, and we have also attained our smoothness goal — $na_n$ converges, to $frac 1log 2=log_2e$. It is also worth noting that the difference between the largest and smallest piece has limit exactly $2$, which validates Henning Makholm's observation that you can't do better than $2$ in the limit.



It looks like this (rounded to the nearest hundred, so the numbers may not add to 100 exactly):



  • $58:42$, ratio = $1.41$

  • $42:32:26$, ratio = $1.58$

  • $32:26:22:19$, ratio = $1.67$

  • $26:22:19:17:15$, ratio = $1.73$

  • $22:19:17:15:14:13$, ratio = $1.77$

If you are working with a sequence of points treated$bmod 1$, where the intervals between the points are the "sausages", then this sequence of segments is generated by $p_n=log_2(2n+1)bmod 1$. The result is beautifully uniform but with a noticeable sweep edge:



                                  sausages



A more concrete optimality condition that picks this solution uniquely is the following: we require that for any fraction $0le xle 1$, the sausage at the $x$ position (give or take a sausage) in the list, sorted in decreasing order, should be at most $c(x)$ times smaller than the largest at all times. This solution achieves $c(x)=x+1$ for all $0le xle 1$, and no solution can do better than that (in the limit) for any $x$.






share|cite|improve this answer


















  • 1




    Wow, thank you very much! I must admit it will take me some time to fully understand this.
    – Stenzel
    Aug 14 at 12:48






  • 21




    Thank you! Believe it or not I'm facing a similar programming problem trying to figure out the best way to pick up colors for players on a color wheel such that the hue difference between any two players is always as large as it can possibly be and new players can join the game at any time. Now this post hints me into the right direction.
    – GOTO 0
    Aug 14 at 18:09






  • 11




    @GOTO0 It's interesting that you mention that, because for that problem I usually recommend golden ratio spacing (i.e. $p_n=nphibmod 1$). I know that is optimal when the points are evenly spaced, but it hadn't occurred to me before that nonuniform spacing like this log solution are actually better distributed.
    – Mario Carneiro
    Aug 14 at 18:40






  • 1




    @Ryan Well, I guess simply cutting the largest piece in two parts gives an even better distribution than the golden ratio spacing, with a maximum ratio of 2. It makes it also very efficient to calculate the cutting points $p_n$, if you write $n$ in base 2 and mirror the digits around the "decimal" (binary) point. It also guarantees that you have at most two different lengths at any time. So...
    – GOTO 0
    Aug 15 at 9:14






  • 4




    @cmh Mathematica: Export["sausage.gif", Table[Graphics[ Line[#, 0, #, 1 & /@ Append[Table[Mod[Log[2, 2 n + 1], 1], n, 0, 2^k + 10 k], 1]], PlotRange -> 0, 1, 0, 1, AspectRatio -> 1/10, ImageSize -> 400], k, -0.1, 10, .025]]
    – Mario Carneiro
    Aug 15 at 16:33












up vote
58
down vote



accepted







up vote
58
down vote



accepted






TLDR: $a_n=log_2(1+1/n)$ works, and is the only smooth solution.



This problem hints at a deeper mathematical question, as follows. As has been observed by Pongrácz, there is a great deal of possible variation in solutions to this problem. I would like to find a "best" solution, where the sequence of pieces is somehow as evenly distributed as possible, given the constraints.



Let us fix the following strategy: at stage $n$ there are $n$ pieces, of lengths $a_n,dots,a_2n-1$, ordered in decreasing length. You cut $a_n$ into two pieces, forming $a_2n$ and $a_2n+1$. We have the following constraints:



$$a_1=1qquad a_n=a_2n+a_2n+1qquad a_nge a_n+1qquad a_n<2a_2n-1$$



I would like to find a nice function $f(x)$ that interpolates all these $a_n$s (and possibly generalizes the relation $a_n=a_2n+a_2n+1$ as well).



First, it is clear that the only degree of freedom is in the choice of cut, which is to say if we take any sequence $b_nin (1/2,1)$ then we can define $a_2n=a_nb_n$ and $a_2n+1=a_n(1-b_n)$, and this will completely define the sequence $a_n$.



Now we should expect that $a_n$ is asymptotic to $1/n$, since it drops by a factor of $2$ every time $n$ doubles. Thus one regularity condition we can impose is that $na_n$ converges. If we consider the "baseline solution" where every cut is at $1/2$, producing the sequence



$$1,frac12,frac12,frac14,frac14,frac14,frac14,frac18,frac18,frac18,frac18,frac18,frac18,frac18,frac18,dots$$
(which is not technically a solution because of the strict inequality, but is on the boundary of solutions), then we see that $na_n$ in fact does not tend to a limit - it varies between $1$ and $2$.



If we average this exponentially, by considering the function $g(x)=2^xa_lfloor 2^xrfloor$, then we get a function which gets closer and closer to being periodic with period $1$. That is, there is a function $h(x):[0,1]toBbb R$ such that $g(x+n)to h(x)$, and we need this function to be constant if we want $g(x)$ itself to have a limit.



There is a very direct relation between $h(x)$ and the $b_n$s. If we increase $b_1$ while leaving everything else the same, then $h(x)$ will be scaled up on $[0,log_2 (3/2)]$ and scaled down on $[log_2 (3/2),1]$. None of the other $b_i$'s control this left-right balance - they make $h(x)$ larger in some subregion of one or the other of these intervals only, but preserving $int_0^log_2(3/2)h(x),dx$ and $int_log_2(3/2)^1h(x),dx$.



Thus, to keep these balanced we should let $b_1=log_2(3/2)$. More generally, each $b_n$ controls the balance of $h$ on the intervals $[log_2(2n),log_2(2n+1)]$ and $[log_2(2n+1),log_2(2n+2)]$ (reduced$bmod 1$), so we must set them to
$$b_n=fraclog_2(2n+1)-log_2(2n)log_2(2n+2)-log_2(2n)=fraclog(1+1/2n)log(1+1/n).$$



When we do this, a miracle occurs, and $a_n=log_2(1+1/n)$ becomes analytically solvable:
beginalign
a_1&=log_2(1+1/1)=1\
a_2n+a_2n+1&=log_2Big(1+frac12nBig)+log_2Big(1+frac12n+1Big)\
&=log_2left[Big(1+frac12nBig)Big(1+frac12n+1Big)right]\
&=log_2left[1+frac2n+(2n+1)+12n(2n+1)right]\
&=log_2left[1+frac1nright]=a_n.
endalign



As a bonus, we obviously have that the $a_n$ sequence is decreasing, and if $m<2n$, then
beginalign
2a_m&=2log_2Big(1+frac1mBig)=log_2Big(1+frac1mBig)^2=log_2Big(1+frac2m+frac1m^2Big)\
&gelog_2Big(1+frac2mBig)>log_2Big(1+frac22nBig)=a_n,
endalign



so this is indeed a proper solution, and we have also attained our smoothness goal — $na_n$ converges, to $frac 1log 2=log_2e$. It is also worth noting that the difference between the largest and smallest piece has limit exactly $2$, which validates Henning Makholm's observation that you can't do better than $2$ in the limit.



It looks like this (rounded to the nearest hundred, so the numbers may not add to 100 exactly):



  • $58:42$, ratio = $1.41$

  • $42:32:26$, ratio = $1.58$

  • $32:26:22:19$, ratio = $1.67$

  • $26:22:19:17:15$, ratio = $1.73$

  • $22:19:17:15:14:13$, ratio = $1.77$

If you are working with a sequence of points treated$bmod 1$, where the intervals between the points are the "sausages", then this sequence of segments is generated by $p_n=log_2(2n+1)bmod 1$. The result is beautifully uniform but with a noticeable sweep edge:



                                  sausages



A more concrete optimality condition that picks this solution uniquely is the following: we require that for any fraction $0le xle 1$, the sausage at the $x$ position (give or take a sausage) in the list, sorted in decreasing order, should be at most $c(x)$ times smaller than the largest at all times. This solution achieves $c(x)=x+1$ for all $0le xle 1$, and no solution can do better than that (in the limit) for any $x$.






share|cite|improve this answer














TLDR: $a_n=log_2(1+1/n)$ works, and is the only smooth solution.



This problem hints at a deeper mathematical question, as follows. As has been observed by Pongrácz, there is a great deal of possible variation in solutions to this problem. I would like to find a "best" solution, where the sequence of pieces is somehow as evenly distributed as possible, given the constraints.



Let us fix the following strategy: at stage $n$ there are $n$ pieces, of lengths $a_n,dots,a_2n-1$, ordered in decreasing length. You cut $a_n$ into two pieces, forming $a_2n$ and $a_2n+1$. We have the following constraints:



$$a_1=1qquad a_n=a_2n+a_2n+1qquad a_nge a_n+1qquad a_n<2a_2n-1$$



I would like to find a nice function $f(x)$ that interpolates all these $a_n$s (and possibly generalizes the relation $a_n=a_2n+a_2n+1$ as well).



First, it is clear that the only degree of freedom is in the choice of cut, which is to say if we take any sequence $b_nin (1/2,1)$ then we can define $a_2n=a_nb_n$ and $a_2n+1=a_n(1-b_n)$, and this will completely define the sequence $a_n$.



Now we should expect that $a_n$ is asymptotic to $1/n$, since it drops by a factor of $2$ every time $n$ doubles. Thus one regularity condition we can impose is that $na_n$ converges. If we consider the "baseline solution" where every cut is at $1/2$, producing the sequence



$$1,frac12,frac12,frac14,frac14,frac14,frac14,frac18,frac18,frac18,frac18,frac18,frac18,frac18,frac18,dots$$
(which is not technically a solution because of the strict inequality, but is on the boundary of solutions), then we see that $na_n$ in fact does not tend to a limit - it varies between $1$ and $2$.



If we average this exponentially, by considering the function $g(x)=2^xa_lfloor 2^xrfloor$, then we get a function which gets closer and closer to being periodic with period $1$. That is, there is a function $h(x):[0,1]toBbb R$ such that $g(x+n)to h(x)$, and we need this function to be constant if we want $g(x)$ itself to have a limit.



There is a very direct relation between $h(x)$ and the $b_n$s. If we increase $b_1$ while leaving everything else the same, then $h(x)$ will be scaled up on $[0,log_2 (3/2)]$ and scaled down on $[log_2 (3/2),1]$. None of the other $b_i$'s control this left-right balance - they make $h(x)$ larger in some subregion of one or the other of these intervals only, but preserving $int_0^log_2(3/2)h(x),dx$ and $int_log_2(3/2)^1h(x),dx$.



Thus, to keep these balanced we should let $b_1=log_2(3/2)$. More generally, each $b_n$ controls the balance of $h$ on the intervals $[log_2(2n),log_2(2n+1)]$ and $[log_2(2n+1),log_2(2n+2)]$ (reduced$bmod 1$), so we must set them to
$$b_n=fraclog_2(2n+1)-log_2(2n)log_2(2n+2)-log_2(2n)=fraclog(1+1/2n)log(1+1/n).$$



When we do this, a miracle occurs, and $a_n=log_2(1+1/n)$ becomes analytically solvable:
beginalign
a_1&=log_2(1+1/1)=1\
a_2n+a_2n+1&=log_2Big(1+frac12nBig)+log_2Big(1+frac12n+1Big)\
&=log_2left[Big(1+frac12nBig)Big(1+frac12n+1Big)right]\
&=log_2left[1+frac2n+(2n+1)+12n(2n+1)right]\
&=log_2left[1+frac1nright]=a_n.
endalign



As a bonus, we obviously have that the $a_n$ sequence is decreasing, and if $m<2n$, then
beginalign
2a_m&=2log_2Big(1+frac1mBig)=log_2Big(1+frac1mBig)^2=log_2Big(1+frac2m+frac1m^2Big)\
&gelog_2Big(1+frac2mBig)>log_2Big(1+frac22nBig)=a_n,
endalign



so this is indeed a proper solution, and we have also attained our smoothness goal — $na_n$ converges, to $frac 1log 2=log_2e$. It is also worth noting that the difference between the largest and smallest piece has limit exactly $2$, which validates Henning Makholm's observation that you can't do better than $2$ in the limit.



It looks like this (rounded to the nearest hundred, so the numbers may not add to 100 exactly):



  • $58:42$, ratio = $1.41$

  • $42:32:26$, ratio = $1.58$

  • $32:26:22:19$, ratio = $1.67$

  • $26:22:19:17:15$, ratio = $1.73$

  • $22:19:17:15:14:13$, ratio = $1.77$

If you are working with a sequence of points treated$bmod 1$, where the intervals between the points are the "sausages", then this sequence of segments is generated by $p_n=log_2(2n+1)bmod 1$. The result is beautifully uniform but with a noticeable sweep edge:



                                  sausages



A more concrete optimality condition that picks this solution uniquely is the following: we require that for any fraction $0le xle 1$, the sausage at the $x$ position (give or take a sausage) in the list, sorted in decreasing order, should be at most $c(x)$ times smaller than the largest at all times. This solution achieves $c(x)=x+1$ for all $0le xle 1$, and no solution can do better than that (in the limit) for any $x$.







share|cite|improve this answer














share|cite|improve this answer



share|cite|improve this answer








edited Aug 15 at 13:39

























answered Aug 14 at 12:35









Mario Carneiro

18k33888




18k33888







  • 1




    Wow, thank you very much! I must admit it will take me some time to fully understand this.
    – Stenzel
    Aug 14 at 12:48






  • 21




    Thank you! Believe it or not I'm facing a similar programming problem trying to figure out the best way to pick up colors for players on a color wheel such that the hue difference between any two players is always as large as it can possibly be and new players can join the game at any time. Now this post hints me into the right direction.
    – GOTO 0
    Aug 14 at 18:09






  • 11




    @GOTO0 It's interesting that you mention that, because for that problem I usually recommend golden ratio spacing (i.e. $p_n=nphibmod 1$). I know that is optimal when the points are evenly spaced, but it hadn't occurred to me before that nonuniform spacing like this log solution are actually better distributed.
    – Mario Carneiro
    Aug 14 at 18:40






  • 1




    @Ryan Well, I guess simply cutting the largest piece in two parts gives an even better distribution than the golden ratio spacing, with a maximum ratio of 2. It makes it also very efficient to calculate the cutting points $p_n$, if you write $n$ in base 2 and mirror the digits around the "decimal" (binary) point. It also guarantees that you have at most two different lengths at any time. So...
    – GOTO 0
    Aug 15 at 9:14






  • 4




    @cmh Mathematica: Export["sausage.gif", Table[Graphics[ Line[#, 0, #, 1 & /@ Append[Table[Mod[Log[2, 2 n + 1], 1], n, 0, 2^k + 10 k], 1]], PlotRange -> 0, 1, 0, 1, AspectRatio -> 1/10, ImageSize -> 400], k, -0.1, 10, .025]]
    – Mario Carneiro
    Aug 15 at 16:33












  • 1




    Wow, thank you very much! I must admit it will take me some time to fully understand this.
    – Stenzel
    Aug 14 at 12:48






  • 21




    Thank you! Believe it or not I'm facing a similar programming problem trying to figure out the best way to pick up colors for players on a color wheel such that the hue difference between any two players is always as large as it can possibly be and new players can join the game at any time. Now this post hints me into the right direction.
    – GOTO 0
    Aug 14 at 18:09






  • 11




    @GOTO0 It's interesting that you mention that, because for that problem I usually recommend golden ratio spacing (i.e. $p_n=nphibmod 1$). I know that is optimal when the points are evenly spaced, but it hadn't occurred to me before that nonuniform spacing like this log solution are actually better distributed.
    – Mario Carneiro
    Aug 14 at 18:40






  • 1




    @Ryan Well, I guess simply cutting the largest piece in two parts gives an even better distribution than the golden ratio spacing, with a maximum ratio of 2. It makes it also very efficient to calculate the cutting points $p_n$, if you write $n$ in base 2 and mirror the digits around the "decimal" (binary) point. It also guarantees that you have at most two different lengths at any time. So...
    – GOTO 0
    Aug 15 at 9:14






  • 4




    @cmh Mathematica: Export["sausage.gif", Table[Graphics[ Line[#, 0, #, 1 & /@ Append[Table[Mod[Log[2, 2 n + 1], 1], n, 0, 2^k + 10 k], 1]], PlotRange -> 0, 1, 0, 1, AspectRatio -> 1/10, ImageSize -> 400], k, -0.1, 10, .025]]
    – Mario Carneiro
    Aug 15 at 16:33







1




1




Wow, thank you very much! I must admit it will take me some time to fully understand this.
– Stenzel
Aug 14 at 12:48




Wow, thank you very much! I must admit it will take me some time to fully understand this.
– Stenzel
Aug 14 at 12:48




21




21




Thank you! Believe it or not I'm facing a similar programming problem trying to figure out the best way to pick up colors for players on a color wheel such that the hue difference between any two players is always as large as it can possibly be and new players can join the game at any time. Now this post hints me into the right direction.
– GOTO 0
Aug 14 at 18:09




Thank you! Believe it or not I'm facing a similar programming problem trying to figure out the best way to pick up colors for players on a color wheel such that the hue difference between any two players is always as large as it can possibly be and new players can join the game at any time. Now this post hints me into the right direction.
– GOTO 0
Aug 14 at 18:09




11




11




@GOTO0 It's interesting that you mention that, because for that problem I usually recommend golden ratio spacing (i.e. $p_n=nphibmod 1$). I know that is optimal when the points are evenly spaced, but it hadn't occurred to me before that nonuniform spacing like this log solution are actually better distributed.
– Mario Carneiro
Aug 14 at 18:40




@GOTO0 It's interesting that you mention that, because for that problem I usually recommend golden ratio spacing (i.e. $p_n=nphibmod 1$). I know that is optimal when the points are evenly spaced, but it hadn't occurred to me before that nonuniform spacing like this log solution are actually better distributed.
– Mario Carneiro
Aug 14 at 18:40




1




1




@Ryan Well, I guess simply cutting the largest piece in two parts gives an even better distribution than the golden ratio spacing, with a maximum ratio of 2. It makes it also very efficient to calculate the cutting points $p_n$, if you write $n$ in base 2 and mirror the digits around the "decimal" (binary) point. It also guarantees that you have at most two different lengths at any time. So...
– GOTO 0
Aug 15 at 9:14




@Ryan Well, I guess simply cutting the largest piece in two parts gives an even better distribution than the golden ratio spacing, with a maximum ratio of 2. It makes it also very efficient to calculate the cutting points $p_n$, if you write $n$ in base 2 and mirror the digits around the "decimal" (binary) point. It also guarantees that you have at most two different lengths at any time. So...
– GOTO 0
Aug 15 at 9:14




4




4




@cmh Mathematica: Export["sausage.gif", Table[Graphics[ Line[#, 0, #, 1 & /@ Append[Table[Mod[Log[2, 2 n + 1], 1], n, 0, 2^k + 10 k], 1]], PlotRange -> 0, 1, 0, 1, AspectRatio -> 1/10, ImageSize -> 400], k, -0.1, 10, .025]]
– Mario Carneiro
Aug 15 at 16:33




@cmh Mathematica: Export["sausage.gif", Table[Graphics[ Line[#, 0, #, 1 & /@ Append[Table[Mod[Log[2, 2 n + 1], 1], n, 0, 2^k + 10 k], 1]], PlotRange -> 0, 1, 0, 1, AspectRatio -> 1/10, ImageSize -> 400], k, -0.1, 10, .025]]
– Mario Carneiro
Aug 15 at 16:33










up vote
36
down vote













YES, it is possible!



You mustn't cut a piece in half, because eventually you have to cut one of them, and then you violate the requirement.
So in fact, you must never have two equal parts.
Make the first cut so that the condition is not violated, say $60:40$.



From now on, assume that the ratio of biggest over smallest is strictly less than $2$ in a given round, and no two pieces are equal. (This holds for the $60:40$ cut.)
We construct a good cut that maintains this property.



So at the next turn, pick the biggest piece, and cut it in two non-equal pieces in an $a:b$ ratio, but very close to equal (so $a/bapprox 1$). All you have to make sure is that



  • $a/b$ is so close to $1$ that the two new pieces are both smaller that the smallest piece in the last round.

  • $a/b$ is so close to $1$ that the smaller piece is bigger than half of the second biggest in the last round (which is going to become the biggest piece in this round).

Then the condition is preserved.
For example, from $60:40$ you can move to $25:35:40$, then cut the fourty to obtain $19:21:25:35$, etc.






edited Aug 14 at 17:55

























answered Aug 14 at 9:40









A. Pongrácz

4,312725











  • 2




    (+1) A very nice abstract, inductive construction. Much lighter to read than the concrete analytical solution from the other answer (which of course is also valuable).
    – Adayah
    Aug 14 at 19:12










  • Thank you for your feedback!
    – A. Pongrácz
    Aug 14 at 19:13












up vote
12
down vote













You can't do better than a factor of $2$.



Assume to the contrary that you have a strategy such that the ratio between the largest and smallest remaining piece is always $<R$ for some $R<2$.



Then, first we can see that for every $\varepsilon>0$ there will eventually be two pieces whose ratio is at most $1+\varepsilon$. Otherwise, after $n$ cuts there are $n+1$ pieces, and comparing them in increasing order, the ratio between the largest and smallest piece would be at least $(1+\varepsilon)^n$, which eventually exceeds $R$.



Once you have two pieces of length $a$ and $b$, with $a < b < (1+\varepsilon)a$, eventually you will have to cut one of them.



If you cut $a$ first, one of the new pieces will be at most $\frac12 a < \frac12 b$, so its ratio to the still-uncut $b$ is greater than $2$ and your goal has immediately failed.



However, if you cut $b$ first, one of the new pieces will be at most $\frac12 b < \frac{1+\varepsilon}{2} a$, which means you've lost if we choose $\varepsilon$ small enough that $\frac{1+\varepsilon}{2} < \frac{1}{R}$ (so the ratio of $a$ to that piece exceeds $R$). And that is always possible if $R<2$.
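
As a numerical illustration of this last step (an editor's sketch, not part of the proof; the target $R = 1.9$, the piece $a = 1/3$, and the factor $99/100$ are arbitrary sample values):

from fractions import Fraction

R   = Fraction(19, 10)              # suppose the ratio always stayed below R = 1.9 < 2
eps = (Fraction(2) / R - 1) / 2     # chosen so that (1 + eps) / 2 < 1 / R
a   = Fraction(1, 3)                # two pieces with 1 < b/a < 1 + eps
b   = a * (1 + eps) * Fraction(99, 100)

# Cutting a piece leaves one part of at most half its length, so the uncut piece
# is at least this many times bigger than that part:
print(float(b / (a / 2)))           # cutting a: ratio >= 2, hence > R
print(float(a / (b / 2)))           # cutting b: ratio > 2/(1 + eps), hence > R as well

Both printed ratios exceed $R$, so whichever of the two nearly equal pieces is cut, the hypothetical bound $R < 2$ is already broken.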






answered Aug 14 at 9:45









Henning Makholm

229k16295526











  • 1




    Did I misunderstand the problem? According to the way I understood it, at each round the ratio of biggest and smallest elements must be less than 2. You showed that a stronger requirement is impossible, namely there exists no number $R$ less than $2$ so that the ratio is always less than $R$. This is a nice follow-up observation once you solve the problem, but this is not a solution to the problem. But then why was it accepted?
    – A. Pongrácz
    Aug 14 at 10:00







  • 2




    @A.Pongrácz: Hmm, you're right. I misread the requirement, being too occupied with figuring out what the best limiting ratio is.
    – Henning Makholm
    Aug 14 at 10:12







  • 3




    You never fail! See my answer. Henning Makholm's answer is indeed elegant, and an important part of the post-mortem. It is not the solution, though.
    – A. Pongrácz
    Aug 14 at 10:17







  • 4




    @Stenzel You're wrong. It's not true that Henning Makholm's 'answer is an elegant proof that the requirement of the ratio being always < 2 is not possible'. Henning Makholm proves that, for any chosen $R$ less than $2$, the obtained ratio will eventually get above $R$. Which is much stronger than what you required. OTOH, A.Pongrácz proves exactly what you requested, namely that you can keep the ratio below $2$ — it will inevitably approach $2$ (which follows from the proof by H.M.) but it can stay below $2$.
    – CiaPan
    Aug 14 at 11:02







  • 2




    @Stenzel: When you do numerical simulation, you must always remember numerical error and instability. In particular, if you keep dividing a floating point number by 2 in many languages, it will eventually become zero or underflow. Also, even if you use arbitrary precision floating point, the numerical errors can still accumulate, even if you keep increasing the precision, because at each step you are only using some finite precision, and hence that may explain what you are observing. However, in most cases you should observe that increasing precision will delay the failure longer.
    – user21820
    Aug 14 at 15:08
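
(Editor's illustration of the floating-point caveat raised in the comment above; a tiny sketch, not part of any answer.)

# Repeatedly halving an IEEE 754 double eventually underflows to exactly zero,
# so a naive floating-point simulation of the cutting strategy must break down
# after enough rounds, even though the exact strategy never fails.
x = 1.0
halvings = 0
while x > 0.0:
    x /= 2.0
    halvings += 1
print(halvings)   # typically 1075 with IEEE 754 double precision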











