Examples of harmless mistakes (on purpose) in submitted papers

Some paper reviewers feel the pressure to criticize something in order to appear competent. Sometimes they feel this pressure due to huge blank form fields for criticism in the reviewing system. As a consequence, they sometimes criticize wrongfully. Fully recovering from wrongful criticism during review is sometimes possible, but not always. This hurts the research community.



A while ago I saw advice in a video to include harmless and rather obvious mistakes (typos, inconsistent notation) on purpose in the manuscript when submitting for peer review, in order to avoid the aforementioned problems. I don't recall the details, nor who gave that talk.



Are you aware of such videos/articles, or can you give examples of specific "diversionary tactics"?



Note that I'm only asking about specific example tactics. If you want to discuss (dis)advantages of choosing to use them at all, please open another question, and I'll be happy to link to it.




Edit: This question isn't about the pros and cons (see above). Many answers so far read as if I had asked about the pros and cons (which I didn't).



Also, I'm not saying "I plan to do this, try to stop me". I just want to find the information that exists about it.



Note that I mean harmless mistakes. They would also be fixed before publication even if no reviewer asked for it.



A related technique from programming is called "a duck".



The psychological phenomenon is called "Parkinson's law of triviality".










  • 41




    And what if this tactic makes a reviewer somehow miss an actual serious mistake in your paper?
    – Peaceful
    2 days ago






  • 38




    Why bother to insert something that is practically guaranteed to already be present? And what if this addition makes the reviewer decide that the number of harmless mistakes is so big that it is not worth their time to do anything but send it back for proofreading?
    – Tobias Kildetoft
    2 days ago







  • 29




    It’s a waste of the reviewer’s time and effort.
    – Solar Mike
    2 days ago






  • 10




    Is this really a question? This site is not really optimized for open-ended discussion.
    – Daniel R. Collins
    2 days ago






  • 6




    How is this a shopping question?
    – Allure
    yesterday














Tags: publications peer-review paper-submission reference-request psychology






Asked 2 days ago by LvB (a new contributor to this site); edited 18 mins ago by Cullub.
7 Answers
Answer (score 98)

[image]



You're not the first one to come up with this idea.



In case it's not obvious, I recommend against doing this:



  • Having stupid mistakes in your submission makes you look stupid.

  • It wastes the reviewer's time.

  • It wastes the editor's time.

  • If all the mistakes get through the review process, it wastes the reader's time deciphering what you mean, and also makes you look stupid again.

Just don't.






– eykanal♦, answered 2 days ago
  • 9




    I had a team lead who would send back legitimately good programs with a "fix suggestion" that made things worse unless you put in a bug for him to find. To be fair, he also admitted he was a bad team lead, but he couldn't turn down the pay increase and constantly flip-flopped between apologizing for being a bad team lead one day and managing by fear (reminding a guy that he used to deliver pizza and could easily do so again) the next. I think your point stands: if people are genuinely trying to help, don't do it. But there are reasons to do it.
    – corsiKa
    2 days ago






  • 13




    "If all the mistakes get through the review process" -- I think, the idea is that you'll fix mistake during review, asked or not
    – aaaaaa
    2 days ago






  • 11




    @user71659, why so hostile towards academia? Reviewers aren't normally paid and they spend time and effort to read your work and offer suggestions for improvement. True, they also serve as a gate to keep garbage out of the mainstream. Is any of that bad? Or do you think their main purpose is to attack you personally and keep your ideas down? Same question about teachers, I guess. Are they out to harm you personally? Sorry, I don't get it.
    – Buffy
    2 days ago






  • 7




    @Buffy It is very common in smaller fields that your paper ends up going to either a friend or a competitor, in which case the competitor has a vested interest in harshly criticizing your work and pushing their own viewpoints, especially when it comes to high-impact journals and grants. The problem is that it isn't "garbage": almost all journals, and especially grants, consider novelty and significance, and that's inherently a personal opinion. Go search for the fight for publications in Cell, Nature, Science, and also NIH R01s. Simply, eat or be eaten.
    – user71659
    2 days ago







  • 6




    @Buffy Maybe things are different in math/CS, but all of the major physical and life science journals rely on the opinion of editors, and they have inherent beliefs. And if you're talking about Cell/Nature/Science, close to 90% of their papers are rejected at the editor pre-review. What do you think happens? The editor reads the title, authors, and abstract and decides. My field had a major fight in which a number of authors got together and wrote a letter pointing out that a particular Cell/Nature/Science topical editor accepted only papers from a small number of groups. This is the harsh reality.
    – user71659
    2 days ago

















Answer (score 42)













This is a terrible idea. Just a couple of days ago, I reviewed a paper with a lot of confusing descriptions and elaborate mathematics. It was not obvious that the explanatory sections would be clear enough for me to evaluate the mathematical material in a useful fashion, but ordinarily I would have given it a try.



However, the very first equation of the paper (which, dealing with introductory matters, was not particularly complicated) contained an obvious error. I concluded that if the authors were that careless, it was not worth my time to try to pick through each and every poorly documented equation, trying to see if they were all valid. The editor agreed and rejected the paper.



So including stupid mistakes like this will only call into question whether you have been careful enough in preparing the manuscript. If it looks like the authors have been lazy or careless, there is little motivation for reviewers and editors to try to fix things for the authors.






– Buzz, answered yesterday
  • 12




    This answer has nothing to do with my question. I wrote in the question: "Note that I'm only asking about specific example tactics. If you want to discuss (dis)advantages of choosing to use them at all, please open another question, and I'll be happy to link to it."
    – LvB
    yesterday






  • 9




    @LvB If you ask for different ways to commit suicide, other people always have a right to say that you shouldn't do it instead of simply listing those for you.
    – Peaceful
    21 hours ago






  • 2




    @Peaceful But if you ask for it on the appropriate site (e.g. Worldbuilding for fanciful techniques), then any answer telling you not to would quickly get closed as NAA, especially if the OP explicitly says he does not intend to do so.
    – forest
    19 hours ago






  • 6




    This is a real life situation, so users can say don't do it. Arrogantly saying that that is not what I asked is quite insulting to people who invest their time and energy writing the answers.
    – Peaceful
    19 hours ago

















Answer (score 20)













This interesting strategy has been identified in a programming context, where it has been dubbed "the duck technique" (see this post on the Coding Horror blog):




This started as a piece of Interplay corporate lore. It was well known that producers (a game industry position, roughly equivalent to [project managers]) had to make a change to everything that was done. The assumption was that subconsciously they felt that if they didn't, they weren't adding value.



The artist working on the queen animations for Battle Chess was aware of this tendency, and came up with an innovative solution. He did the animations for the queen the way that he felt would be best, with one addition: he gave the queen a pet duck. He animated this duck through all of the queen's animations, had it flapping around the corners. He also took great care to make sure that it never overlapped the "actual" animation.



Eventually, it came time for the producer to review the animation set for the queen. The producer sat down and watched all of the animations. When they were done, he turned to the artist and said, "that looks great. Just one thing - get rid of the duck."




I have not heard of doing this in regard to the peer review process for journal publication, and in that context I would advise against it. Perhaps others have had different experiences, but I have not observed any tendency for journal referees to ask for changes merely for the sake of appearing to add value. Since most of these processes are blind review, the referee is not usually identified to the author, and there is little reason for referees to grandstand like this.







  • 1




    Worth noting that the duck wasn't an error. A closer analogy here would be deliberately including an outright bug in the software, which seems considerably more likely to backfire.
    – Geoffrey Brent
    yesterday







  • 9




    The duck was a thing included specifically to be targeted for removal, so it is analogous to the "errors" that the OP is talking about in his question.
    – Ben
    yesterday






  • 3




    @Ben Just out of curiosity what percentage of your reviews have come back "Content and format are perfect, no improvements suggested"?
    – Myles
    yesterday






  • 2




    +1 for an amusing and informative answer that makes a clear distinction between programming culture and academic culture.
    – Pete L. Clark
    yesterday






  • 3




    Note also that the duck technique came up in a well-defined corporate environment where people knew who would be reviewing their work, whereas in refereeing you generally can't anticipate who will be refereeing your work. These situations call for different behavior.
    – darij grinberg
    14 hours ago

















Answer (score 10)













If you feel the reviewer is either biased or psychologically set on validating his/her competence by unjustly criticizing your work (as if to imply that it was properly reviewed by an expert), it is indeed possible to introduce something meant to be edited out, but I advise extreme caution. It should be very subtle and yet conspicuous enough, and it should not cast any doubt on your own competence: perhaps something superfluous or murky but well known to the reviewer, so that he/she will enjoy criticizing it. However, if you don't know the reviewer and his/her level of competence, it's better to be extra cautious about such things.



A few words about typos and sloppiness in formatting: the reviewer will feel all the more justified in piling up criticisms, to the point of considering the paper low-quality stuff or a mess. I have seen excellent papers almost get scrapped for lack of clarity and accidental errata. So typos, bad formatting and inconsistencies are a no-go; such things will only detract from your paper. Only a much more subtle strategy is viable, and only when you know the reviewers are less competent or unjustly biased. As for diversionary tactics, they should be specified and discussed with experts in the area of your work. Using primitive generic tactics (typos, notation inconsistency, etc.) will only draw criticism.



If some "criticizing" blanks or forms or any bureaucracy are pulled on you, you should seek advice of colleagues in your area. It's better not to leave anything to chance, and discuss everything with the experts in your area. Finding out what forms/blanks or bureaucracy you are dealing with is crucial.



Your paper might be reviewed on general (non-expert in the area) principles just as if it were only reviewed by proofreaders or lay people. So making it more consistent, coherent, logical, clear, succinct, and free of spelling and formatting errors will be a big plus.



You should probably rely more on the guidelines that are used when reviewing papers in your area, rather than focusing on diversionary tactics. A lot of bad reviews are the result of carelessness (both on the part of the reviewer and on the part of the scientist), overzealousness and bureaucracy (or should I say strict "proofreader-like" guidelines for scientists), rather than malicious intent, assuming the paper in question displays outstanding ideas and great substance. Please check guidelines such as Chris Mack's "How to Write a Good Scientific Paper: A Reviewer's Checklist" (a checklist for editors, reviewers, and authors) or similar articles and guidelines; their number is astonishing. Nowadays a lot of science is about citation indexes, and a lot of reviews are about the formal structure, clarity of reasoning, proper references, nice presentation of data, etc. used in the paper.



Please note that this is very generic advice. You may want to tailor it to your area of expertise with all the corresponding changes you deem necessary. Bottom line: I strongly advise against generic diversionary tactics. Tailor everything to your area of expertise. Unfortunately, if they don't want to publish it, they won't, even if it is a breakthrough. You might need extra recommendations and credentials. Please also note how much pseudoscience we have today, and some of it sneaks into respected publications! So a huge number of papers need to be weeded out.






– Ken Draco (a new contributor to this site)

  • 16




    Your bolded comment on formatting errors has a formatting error. Having satisfied my need to criticize something, I can now comfortably give a +1.
    – Mario Carneiro
    yesterday










  • @LvB Sorry to hear that my advice was totally off. Actually, I was trying to suggest employing diversionary tactics in line with your area, which should be subtle but conspicuous and should not cast doubt on your competence in the area. I meant that seeking generic diversionary tactics is the worst plan possible. Sorry. I gave you specific tactics, but they're not that specific -- you need to discuss it with colleagues in your area, IMO.
    – Ken Draco
    yesterday






  • 1




    @KenDraco Sorry, I had accidentally commented under the wrong answer. You're right. Subtle and conspicuous, not casting doubt on the authors' competence, non-generic/area-specific - makes sense. Thank you!
    – LvB
    yesterday

















Answer (score 8)













This tactic (which is, by the way, more of a joke than something people actually do) is designed to deal with an archetypal incompetent manager who is incapable of understanding the work they are given to review, yet unable to admit their incompetence, and so resorts to bike-shedding to compensate.



Trying it on people who are actually competent will result in one of these outcomes:



  • they will not realize you're using the tactic on them and decide you're sloppier than you actually are

  • they will realize it and feel offended that you take them for someone who would hide their incompetence behind irrelevant changes

Neither improves the chances for your paper to be received well.






Answer (score 5)













This is not an exact match to your request, but it is similar enough in spirit and relevant enough to your question to mention.

There's a very interesting and entertaining article by Karl Friston published in Neuroimage, called "Ten ironic rules for non-statistical reviewers". The point of the article is to give a generic 'slap on the wrist' to "common" review points made by people who may or may not understand the statistical implications of their suggestions, who may simply be making statistically-correct-sounding generic statements out of a need to seem useful (or at least not unknowledgeable), and who err on the side of rejection.

It does so in a highly unusual format: it starts with a sarcastic and humorous introduction of why a reviewer is under "so much pressure to reject these days", given that articles have increased in quality, and proceeds to offer ten tongue-in-cheek "rules" for them to try in order to ensure a malicious rejection even in the presence of good papers. It only enters serious, non-sarcastic discussion of why those rules are poor interpretations of statistics in the much longer "appendix" of the paper, which is in fact the 'real' article.

So in a sense, this is the same thing as you're talking about, except seen from the reverse direction: instead of instructing authors on how to keep reviewers "busy" with trivialities, it is a tongue-in-cheek article instructing reviewers on how to focus on trivialities in the presence of an actually well-presented paper, in order to sound like they have critical influence over the outcome and/or to ensure a rejection and a waste of the author's time (i.e. heavily implying that this is a common enough occurrence to warrant such a sarcastic article).

(The original paper is paywalled, but a version of the pdf should be available to read for free online via a simple search engine search; it's a rather popular paper!)






Answer (score 0)













Let's set up a simple static game: assume that there are two kinds of reviewers, "Good R" and "Bad R". "Good R" are those who know the subject well, but even if they don't, they will try to honestly review the paper on merit. "Bad R" are those who will go for "Wrongful Criticism" in the logic laid out by the OP, and for "Superficial Criticism" if we submit a superficially sloppy paper. We consider two strategies, "Superficial Sloppiness" and "Tidy Manuscript". In all we have four possible states.

I argue that the most preferred state is "Good R - Tidy Manuscript". In such a case the paper will be reviewed on merit by an appropriate reviewer. Let's assign the numerical value/utility 4 to this outcome (the scale is ordinal).

Consider now the state "Good R - Superficial Sloppiness". As other answers indicated, we will most likely get a quick reject (and acquire a bad reputation in the eyes of a person we shouldn't). This is the worst that could happen to our paper, in light of the fact that it was assigned to a Good Reviewer. We assign the value 1 to this state.

Let's move to the state "Bad R - Wrongful Criticism". Supposedly this is the state that we want to avoid by the proposed tactic. I argue that this state is not worse than "Good R - Superficial Sloppiness", because in the latter we have really shot ourselves in the foot, while "Bad R - Wrongful Criticism" is an unfortunate but expected situation. So we assign the value 2 to this state.

Finally, the state "Bad R - Superficial Criticism" is what we try to guarantee with this tactic. We certainly consider it better than the previous one, but not as good as having a Good Reviewer assess our tidy paper on merit. So we assign the value 3 to this state. The normal form of the game (payoffs as assigned above) is therefore

                              Good R    Bad R
    Superficial Sloppiness       1        3
    Tidy Manuscript              4        2

There is no strictly or weakly dominant strategy here. But let's not go into mixed-strategy equilibria. From our point of view, reviewers are chosen by nature (mother nature, not Nature the journal) with some probability, say p for the probability that we get a Bad Reviewer. Then the expected utility for each strategy is

V(Sprf Slop) = 3p + 1(1 - p) = 2p + 1
V(Tidy Mnscr) = 2p + 4(1 - p) = 4 - 2p

It appears rational to choose the Superficial Sloppiness tactic iff

V(Sprf Slop) > V(Tidy Mnscr)  =>  2p + 1 > 4 - 2p  =>  p > 3/4

In words, if you think that the chance that you will get a Bad Reviewer is higher than 3/4, then your expected utility will indeed be higher by applying such an embarrassing tactic.

Do 3 out of 4 reviewers belong to the Bad Reviewer category in your field?
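
As a rough numerical sanity check of that threshold, here is a minimal sketch in Python (the payoff values 1-4 are exactly the ordinal utilities assigned above; the helper function is just illustrative, not part of the answer's model):

    # Expected-utility comparison for the two submission strategies,
    # using the ordinal payoffs assigned above:
    #   Tidy Manuscript:        4 with a Good Reviewer, 2 with a Bad one
    #   Superficial Sloppiness: 1 with a Good Reviewer, 3 with a Bad one

    def expected_utility(payoff_good, payoff_bad, p_bad):
        """Expected payoff when a Bad Reviewer is drawn with probability p_bad."""
        return payoff_bad * p_bad + payoff_good * (1.0 - p_bad)

    for p in (0.5, 0.74, 0.75, 0.76, 0.9):
        v_sloppy = expected_utility(payoff_good=1, payoff_bad=3, p_bad=p)
        v_tidy = expected_utility(payoff_good=4, payoff_bad=2, p_bad=p)
        better = "Superficial Sloppiness" if v_sloppy > v_tidy else "Tidy Manuscript"
        print(f"p(Bad R) = {p:.2f}: V(sloppy) = {v_sloppy:.2f}, "
              f"V(tidy) = {v_tidy:.2f} -> {better}")

    # The crossover sits exactly at p = 3/4; only above it does
    # Superficial Sloppiness have the higher expected utility.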




















        7 Answers
        7






        active

        oldest

        votes








        7 Answers
        7






        active

        oldest

        votes









        active

        oldest

        votes






        active

        oldest

        votes








        up vote
        98
        down vote













        enter image description here



        You're not the first one to come up with this idea.



        In case it's not obvious, I recommend against doing this:



        • Having stupid mistakes in your submission makes you look stupid.

        • It wastes the reviewer's time.

        • It wastes the editor's time.

        • If all the mistakes get through the review process, it wastes the reader's time deciphering what you means, and also makes you look stupid again.

        Just don't.






        share|improve this answer
















        • 9




          I had a team lead who would send back legitimately good programs with a "fix suggestion" that made things worse unless you put in a bug for him to find. To be fair, he also admitted he was a bad team lead but he couldn't turn down the pay increase and constantly flip-flipped between apologizing for being a bad team lead one day to managing by fear (reminding a guy that he used to deliver pizza and could easily again) the next. I think your point stands, if people are genuinely trying to help, don't do it. But there are reasons to do it.
          – corsiKa
          2 days ago






        • 13




          "If all the mistakes get through the review process" -- I think, the idea is that you'll fix mistake during review, asked or not
          – aaaaaa
          2 days ago






        • 11




          @user71659, why so hostile towards academia? Reviewers aren't normally paid and they spend time and effort to read your work and offer suggestions for improvement. True, they also serve as a gate to keep garbage out of the mainstream. Is any of that bad? Or do you think their main purpose is to attack you personally and keep your ideas down? Same question about teachers, I guess. Are they out to harm you personally? Sorry, I don't get it.
          – Buffy
          2 days ago






        • 7




          @Buffy It is very common in smaller fields that your paper ends up going to either a friend or a competitor, for which the competitor has a vested interest in harshly criticizing your work and pushing their own viewpoints, especially when it comes to high-impact journals and grants. The problem is it isn't "garbage" but almost all journals, and especially grants, consider novelty and significance, and that's inherently a personal opinion. Go search for the fight for publications in Cell, Nature, Science, and also NIH R01s. Simply, eat or be eaten.
          – user71659
          2 days ago







        • 6




          @Buffy Maybe things are different in math/CS, but all of the major physical and life science journal rely on the opinion of editors and they have inherent beliefs. And if you're talking about Cell/Nature/Science, close to 90% of their papers are rejected at the editor pre-review. What do you think happens? The editor reads the title, authors, abstract and decides. My field had a major fight in which a number of authors got together and wrote a letter pointing out that a particular Cell/Nature/Science topical editor accepted only papers from a small number of groups. This is the harsh reality.
          – user71659
          2 days ago














        up vote
        98
        down vote













        enter image description here



        You're not the first one to come up with this idea.



        In case it's not obvious, I recommend against doing this:



        • Having stupid mistakes in your submission makes you look stupid.

        • It wastes the reviewer's time.

        • It wastes the editor's time.

        • If all the mistakes get through the review process, it wastes the reader's time deciphering what you means, and also makes you look stupid again.

        Just don't.






        share|improve this answer
















        • 9




          I had a team lead who would send back legitimately good programs with a "fix suggestion" that made things worse unless you put in a bug for him to find. To be fair, he also admitted he was a bad team lead but he couldn't turn down the pay increase and constantly flip-flipped between apologizing for being a bad team lead one day to managing by fear (reminding a guy that he used to deliver pizza and could easily again) the next. I think your point stands, if people are genuinely trying to help, don't do it. But there are reasons to do it.
          – corsiKa
          2 days ago






        • 13




          "If all the mistakes get through the review process" -- I think, the idea is that you'll fix mistake during review, asked or not
          – aaaaaa
          2 days ago






        • 11




          @user71659, why so hostile towards academia? Reviewers aren't normally paid and they spend time and effort to read your work and offer suggestions for improvement. True, they also serve as a gate to keep garbage out of the mainstream. Is any of that bad? Or do you think their main purpose is to attack you personally and keep your ideas down? Same question about teachers, I guess. Are they out to harm you personally? Sorry, I don't get it.
          – Buffy
          2 days ago






        • 7




          @Buffy It is very common in smaller fields that your paper ends up going to either a friend or a competitor, for which the competitor has a vested interest in harshly criticizing your work and pushing their own viewpoints, especially when it comes to high-impact journals and grants. The problem is it isn't "garbage" but almost all journals, and especially grants, consider novelty and significance, and that's inherently a personal opinion. Go search for the fight for publications in Cell, Nature, Science, and also NIH R01s. Simply, eat or be eaten.
          – user71659
          2 days ago







        • 6




          @Buffy Maybe things are different in math/CS, but all of the major physical and life science journal rely on the opinion of editors and they have inherent beliefs. And if you're talking about Cell/Nature/Science, close to 90% of their papers are rejected at the editor pre-review. What do you think happens? The editor reads the title, authors, abstract and decides. My field had a major fight in which a number of authors got together and wrote a letter pointing out that a particular Cell/Nature/Science topical editor accepted only papers from a small number of groups. This is the harsh reality.
          – user71659
          2 days ago












        up vote
        98
        down vote










        up vote
        98
        down vote









        enter image description here



        You're not the first one to come up with this idea.



        In case it's not obvious, I recommend against doing this:



        • Having stupid mistakes in your submission makes you look stupid.

        • It wastes the reviewer's time.

        • It wastes the editor's time.

        • If all the mistakes get through the review process, it wastes the reader's time deciphering what you means, and also makes you look stupid again.

        Just don't.






        share|improve this answer












        enter image description here



        You're not the first one to come up with this idea.



        In case it's not obvious, I recommend against doing this:



        • Having stupid mistakes in your submission makes you look stupid.

        • It wastes the reviewer's time.

        • It wastes the editor's time.

        • If all the mistakes get through the review process, it wastes the reader's time deciphering what you means, and also makes you look stupid again.

        Just don't.







        share|improve this answer












        share|improve this answer



        share|improve this answer










        answered 2 days ago









        eykanal♦

        41.6k1499203




        41.6k1499203







        • 9




          I had a team lead who would send back legitimately good programs with a "fix suggestion" that made things worse unless you put in a bug for him to find. To be fair, he also admitted he was a bad team lead but he couldn't turn down the pay increase and constantly flip-flipped between apologizing for being a bad team lead one day to managing by fear (reminding a guy that he used to deliver pizza and could easily again) the next. I think your point stands, if people are genuinely trying to help, don't do it. But there are reasons to do it.
          – corsiKa
          2 days ago






        • 13




          "If all the mistakes get through the review process" -- I think, the idea is that you'll fix mistake during review, asked or not
          – aaaaaa
          2 days ago






        • 11




          @user71659, why so hostile towards academia? Reviewers aren't normally paid and they spend time and effort to read your work and offer suggestions for improvement. True, they also serve as a gate to keep garbage out of the mainstream. Is any of that bad? Or do you think their main purpose is to attack you personally and keep your ideas down? Same question about teachers, I guess. Are they out to harm you personally? Sorry, I don't get it.
          – Buffy
          2 days ago






        • 7




          @Buffy It is very common in smaller fields that your paper ends up going to either a friend or a competitor, for which the competitor has a vested interest in harshly criticizing your work and pushing their own viewpoints, especially when it comes to high-impact journals and grants. The problem is it isn't "garbage" but almost all journals, and especially grants, consider novelty and significance, and that's inherently a personal opinion. Go search for the fight for publications in Cell, Nature, Science, and also NIH R01s. Simply, eat or be eaten.
          – user71659
          2 days ago







        • 6




          @Buffy Maybe things are different in math/CS, but all of the major physical and life science journal rely on the opinion of editors and they have inherent beliefs. And if you're talking about Cell/Nature/Science, close to 90% of their papers are rejected at the editor pre-review. What do you think happens? The editor reads the title, authors, abstract and decides. My field had a major fight in which a number of authors got together and wrote a letter pointing out that a particular Cell/Nature/Science topical editor accepted only papers from a small number of groups. This is the harsh reality.
          – user71659
          2 days ago












        • 9




          I had a team lead who would send back legitimately good programs with a "fix suggestion" that made things worse unless you put in a bug for him to find. To be fair, he also admitted he was a bad team lead but he couldn't turn down the pay increase and constantly flip-flipped between apologizing for being a bad team lead one day to managing by fear (reminding a guy that he used to deliver pizza and could easily again) the next. I think your point stands, if people are genuinely trying to help, don't do it. But there are reasons to do it.
          – corsiKa
          2 days ago






        • 13




          "If all the mistakes get through the review process" -- I think, the idea is that you'll fix mistake during review, asked or not
          – aaaaaa
          2 days ago






        • 11




          @user71659, why so hostile towards academia? Reviewers aren't normally paid and they spend time and effort to read your work and offer suggestions for improvement. True, they also serve as a gate to keep garbage out of the mainstream. Is any of that bad? Or do you think their main purpose is to attack you personally and keep your ideas down? Same question about teachers, I guess. Are they out to harm you personally? Sorry, I don't get it.
          – Buffy
          2 days ago






        • 7




          @Buffy It is very common in smaller fields that your paper ends up going to either a friend or a competitor, for which the competitor has a vested interest in harshly criticizing your work and pushing their own viewpoints, especially when it comes to high-impact journals and grants. The problem is it isn't "garbage" but almost all journals, and especially grants, consider novelty and significance, and that's inherently a personal opinion. Go search for the fight for publications in Cell, Nature, Science, and also NIH R01s. Simply, eat or be eaten.
          – user71659
          2 days ago







        • 6




          @Buffy Maybe things are different in math/CS, but all of the major physical and life science journal rely on the opinion of editors and they have inherent beliefs. And if you're talking about Cell/Nature/Science, close to 90% of their papers are rejected at the editor pre-review. What do you think happens? The editor reads the title, authors, abstract and decides. My field had a major fight in which a number of authors got together and wrote a letter pointing out that a particular Cell/Nature/Science topical editor accepted only papers from a small number of groups. This is the harsh reality.
          – user71659
          2 days ago







        9




        9




        I had a team lead who would send back legitimately good programs with a "fix suggestion" that made things worse unless you put in a bug for him to find. To be fair, he also admitted he was a bad team lead but he couldn't turn down the pay increase and constantly flip-flipped between apologizing for being a bad team lead one day to managing by fear (reminding a guy that he used to deliver pizza and could easily again) the next. I think your point stands, if people are genuinely trying to help, don't do it. But there are reasons to do it.
        – corsiKa
        2 days ago




        I had a team lead who would send back legitimately good programs with a "fix suggestion" that made things worse unless you put in a bug for him to find. To be fair, he also admitted he was a bad team lead but he couldn't turn down the pay increase and constantly flip-flipped between apologizing for being a bad team lead one day to managing by fear (reminding a guy that he used to deliver pizza and could easily again) the next. I think your point stands, if people are genuinely trying to help, don't do it. But there are reasons to do it.
        – corsiKa
        2 days ago




        13




        13




        "If all the mistakes get through the review process" -- I think, the idea is that you'll fix mistake during review, asked or not
        – aaaaaa
        2 days ago




        "If all the mistakes get through the review process" -- I think, the idea is that you'll fix mistake during review, asked or not
        – aaaaaa
        2 days ago




        11




        11




        @user71659, why so hostile towards academia? Reviewers aren't normally paid and they spend time and effort to read your work and offer suggestions for improvement. True, they also serve as a gate to keep garbage out of the mainstream. Is any of that bad? Or do you think their main purpose is to attack you personally and keep your ideas down? Same question about teachers, I guess. Are they out to harm you personally? Sorry, I don't get it.
        – Buffy
        2 days ago




        @user71659, why so hostile towards academia? Reviewers aren't normally paid and they spend time and effort to read your work and offer suggestions for improvement. True, they also serve as a gate to keep garbage out of the mainstream. Is any of that bad? Or do you think their main purpose is to attack you personally and keep your ideas down? Same question about teachers, I guess. Are they out to harm you personally? Sorry, I don't get it.
        – Buffy
        2 days ago




        7




        7




        @Buffy It is very common in smaller fields that your paper ends up going to either a friend or a competitor, for which the competitor has a vested interest in harshly criticizing your work and pushing their own viewpoints, especially when it comes to high-impact journals and grants. The problem is it isn't "garbage" but almost all journals, and especially grants, consider novelty and significance, and that's inherently a personal opinion. Go search for the fight for publications in Cell, Nature, Science, and also NIH R01s. Simply, eat or be eaten.
        – user71659
        2 days ago





        @Buffy It is very common in smaller fields that your paper ends up going to either a friend or a competitor, for which the competitor has a vested interest in harshly criticizing your work and pushing their own viewpoints, especially when it comes to high-impact journals and grants. The problem is it isn't "garbage" but almost all journals, and especially grants, consider novelty and significance, and that's inherently a personal opinion. Go search for the fight for publications in Cell, Nature, Science, and also NIH R01s. Simply, eat or be eaten.
        – user71659
        2 days ago





        6




        6




        @Buffy Maybe things are different in math/CS, but all of the major physical and life science journal rely on the opinion of editors and they have inherent beliefs. And if you're talking about Cell/Nature/Science, close to 90% of their papers are rejected at the editor pre-review. What do you think happens? The editor reads the title, authors, abstract and decides. My field had a major fight in which a number of authors got together and wrote a letter pointing out that a particular Cell/Nature/Science topical editor accepted only papers from a small number of groups. This is the harsh reality.
        – user71659
        2 days ago




        @Buffy Maybe things are different in math/CS, but all of the major physical and life science journal rely on the opinion of editors and they have inherent beliefs. And if you're talking about Cell/Nature/Science, close to 90% of their papers are rejected at the editor pre-review. What do you think happens? The editor reads the title, authors, abstract and decides. My field had a major fight in which a number of authors got together and wrote a letter pointing out that a particular Cell/Nature/Science topical editor accepted only papers from a small number of groups. This is the harsh reality.
        – user71659
        2 days ago










        up vote
        42
        down vote













        This is a terrible idea. Just a couple of days ago, I reviewed a paper with a lot of confusing descriptions and elaborate mathematics. It was not clear that the explanatory sections were going to be clear enough for me to be able to evaluate the mathematical material in a useful fashion, but ordinarily I would have given it a try.



        However, the very first equation of the paper (which, dealing with introductory matters, was not particularly complicated) contained an obvious error. I concluded that if the authors were that careless, it was not worth my time to try to pick through each and every poorly documented equation, trying to see if they were all valid. The editor agreed and rejected the paper.



        So including stupid mistakes like this will only call into question whether you have been careful enough in preparing the manuscript. If it looks like the authors have been lazy or careless, there is little motivation for reviewers and editors to try to fix things for the authors.






        share|improve this answer
















        • 12




          This answer has nothing to do with my question. I wrote in the question: "Note that I'm only asking about specific example tactics. If you want to discuss (dis)advantages of choosing to use them at all, please open another question, and I'll be happy to link to it."
          – LvB
          yesterday






        • 9




          @LvB If you ask for different ways to commit suicide, other people always have a right to say that you shouldn't do it instead of simply listing those for you.
          – Peaceful
          21 hours ago






        • 2




          @Peaceful But if you ask for it on the appropriate site (e.g. Worldbuilding for fanciful techniques), then any answer telling you not to would quickly get closed as NAA, especially if the OP explicitly says he does not intend to do so.
          – forest
          19 hours ago






        • 6




          This is a real life situation, so users can say don't do it. Arrogantly saying that that is not what I asked is quite insulting to people who invest their time and energy writing the answers.
          – Peaceful
          19 hours ago














        up vote
        42
        down vote













        This is a terrible idea. Just a couple of days ago, I reviewed a paper with a lot of confusing descriptions and elaborate mathematics. It was not clear that the explanatory sections were going to be clear enough for me to be able to evaluate the mathematical material in a useful fashion, but ordinarily I would have given it a try.



        However, the very first equation of the paper (which, dealing with introductory matters, was not particularly complicated) contained an obvious error. I concluded that if the authors were that careless, it was not worth my time to try to pick through each and every poorly documented equation, trying to see if they were all valid. The editor agreed and rejected the paper.



        So including stupid mistakes like this will only call into question whether you have been careful enough in preparing the manuscript. If it looks like the authors have been lazy or careless, there is little motivation for reviewers and editors to try to fix things for the authors.






        share|improve this answer
















        • 12




          This answer has nothing to do with my question. I wrote in the question: "Note that I'm only asking about specific example tactics. If you want to discuss (dis)advantages of choosing to use them at all, please open another question, and I'll be happy to link to it."
          – LvB
          yesterday






        • 9




          @LvB If you ask for different ways to commit suicide, other people always have a right to say that you shouldn't do it instead of simply listing those for you.
          – Peaceful
          21 hours ago






        • 2




          @Peaceful But if you ask for it on the appropriate site (e.g. Worldbuilding for fanciful techniques), then any answer telling you not to would quickly get closed as NAA, especially if the OP explicitly says he does not intend to do so.
          – forest
          19 hours ago






        • 6




          This is a real life situation, so users can say don't do it. Arrogantly saying that that is not what I asked is quite insulting to people who invest their time and energy writing the answers.
          – Peaceful
          19 hours ago












        answered yesterday

        Buzz










        up vote
        20
        down vote













        This interesting strategy has been identified in a programming context, where it has been dubbed "the duck technique" (see this post on the Coding Horror blog):




        This started as a piece of Interplay corporate lore. It was well known that producers (a game industry position, roughly equivalent to [project managers]) had to make a change to everything that was done. The assumption was that subconsciously they felt that if they didn't, they weren't adding value.



        The artist working on the queen animations for Battle Chess was aware of this tendency, and came up with an innovative solution. He did the animations for the queen the way that he felt would be best, with one addition: he gave the queen a pet duck. He animated this duck through all of the queen's animations, had it flapping around the corners. He also took great care to make sure that it never overlapped the "actual" animation.



        Eventually, it came time for the producer to review the animation set for the queen. The producer sat down and watched all of the animations. When they were done, he turned to the artist and said, "that looks great. Just one thing - get rid of the duck."




        I have not heard of doing this in regard to the peer review process for journal publication, and in that context I would advise against it. Perhaps others have had different experiences, but I have not observed any tendency for journal referees to ask for changes merely for the sake of appearing to add value. Since most of these processes are blind review, the referee is not usually identified to the author, and there is little reason for referees to grandstand like this.






        share|improve this answer


















        • 1




          Worth noting that the duck wasn't an error. A closer analogy here would be deliberately including an outright bug in the software, which seems considerably more likely to backfire.
          – Geoffrey Brent
          yesterday







        • 9




          The duck was a thing included specifically to be targeted for removal, so it is analogous to the "errors" that the OP is talking about in his question.
          – Ben
          yesterday






        • 3




          @Ben Just out of curiosity what percentage of your reviews have come back "Content and format are perfect, no improvements suggested"?
          – Myles
          yesterday






        • 2




          +1 for an amusing and informative answer that makes a clear distinction between programming culture and academic culture.
          – Pete L. Clark
          yesterday






        • 3




          Note also that the duck technique came up in a well-defined corporate environment where people knew who would be reviewing their work, whereas in refereeing you generally can't anticipate who will be refereeing your work. These situations call for different behavior.
          – darij grinberg
          14 hours ago














        answered yesterday

        Ben










        up vote
        10
        down vote













        If you feel the reviewer is either biased or psychologically set on validating his/her competence by unjustly criticizing your work (as if to show it was thoroughly reviewed by an expert), it is indeed possible to introduce something meant to be edited out, but I advise extreme caution. It should be subtle and yet conspicuous enough, and it should not cast any doubt on your own competence: perhaps something superfluous or murky but well known to the reviewer, so that he/she will enjoy criticizing it. However, if you don't know the reviewer and his/her level of competence, it's better to be extra cautious about such things.



        A few words about typos and sloppiness in formatting: they make the reviewer feel more justified in piling up criticisms, to the point of considering the paper low-quality or a mess. I have seen excellent papers come close to being scrapped over lack of clarity and accidental errata. So typos, bad formatting and inconsistencies are a no-go; such things will only detract from your paper. Only a much more subtle strategy is viable, and only when you know the reviewers are less competent or unjustly biased. As for diversionary tactics, they should be worked out and discussed with experts in the area of your work. Using primitive generic tactics (typos, notation inconsistency, etc.) will only draw criticism.



        If some "criticizing" blanks or forms or other bureaucracy are pulled on you, seek the advice of colleagues in your area. It's better not to leave anything to chance, and to discuss everything with the experts in your field. Finding out which forms, blanks or bureaucratic procedures you are dealing with is crucial.



        Your paper might be reviewed on general (non-expert) principles, just as if it were reviewed only by proofreaders or lay people. So making it more consistent, coherent, logical, clear, succinct, and free of spelling and formatting errors will be a big plus.



        You should probably rely more on the guidelines that are used when reviewing papers in your area, rather than focusing on diversionary tactics. A lot of bad reviews are the result of carelessness (on the part of both the reviewer and the scientist), overzealousness and bureaucracy (or should I say strict "proofreader-like" guidelines for scientists), rather than malicious intent, assuming the paper in question displays outstanding ideas and great substance. Please check guidelines such as "How to Write a Good Scientific Paper: A Reviewer's Checklist" (a checklist for editors, reviewers, and authors) by Chris Mack, or similar articles and guidelines; their number is astonishing. Nowadays a lot of science is about citation indexes, and a lot of reviews are about the formal structure, clarity of reasoning, proper references, nice presentation of data, etc. used in the paper.



        Please note that this is very generic advice. You may want to tailor it to your area of expertise with whatever changes you deem necessary. Bottom line: I strongly advise against generic diversionary tactics; tailor everything to your area of expertise. Unfortunately, if they don't want to publish it, they won't, even if it is a breakthrough, and you might need extra recommendations and credentials. Please also note how much pseudoscience we have today, some of which sneaks into respected publications! So a huge number of papers need to be weeded out.






        share|improve this answer








        New contributor




        Ken Draco is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
        Check out our Code of Conduct.













        • 16




          Your bolded comment on formatting errors has a formatting error. Having satisfied my need to criticize something, I can now comfortably give a +1.
          – Mario Carneiro
          yesterday










        • @LvB Sorry to hear that my advice was totally off. Actually, I was trying to suggest employing diversionary tactics in line with your area, which should be subtle but conspicuous and should not cast doubt on your competence in the area. What I meant was to avoid seeking generic diversionary tactics, which is the worst plan possible. Sorry. I gave you specific tactics, but they're not that specific -- you need to discuss it with colleagues in your area IMO.
          – Ken Draco
          yesterday






        • 1




          @KenDraco Sorry, I had accidentally commented under the wrong answer. You're right. Subtle and conspicuous, not casting doubt on the authors' competence, non-generic/area-specific - makes sense. Thank you!
          – LvB
          yesterday














        answered yesterday

        Ken Draco










        up vote
        8
        down vote













        This tactic (which is, by the way, more of a joke than something people actually do) is designed to deal with an archetypal incompetent manager who is incapable of understanding the work they are given to review, yet unable to admit their incompetence, and so resorts to bike-shedding to compensate.



        Trying it on people who are actually competent will result in one of two outcomes:



        • they will not realize you're using the tactic on them and decide you're sloppier than you actually are

        • they will realize it and feel offended that you take them for someone who would hide their incompetence behind irrelevant changes

        Neither improves the chances of your paper being received well.






        share|improve this answer
























            answered yesterday

            Dmitry Grigoryev




















                up vote
                5
                down vote













                This is not an exact match to your request, but is similar enough in spirit and relevant enough to your question to mention.



                There's a very interesting and entertaining article by Karl Friston published in NeuroImage, called "Ten ironic rules for non-statistical reviewers". The point of the article is to give a generic 'slap on the wrist' for "common" review points made by people who may or may not understand the statistical implications of their suggestions, who may simply be making statistically-correct-sounding generic statements out of a need to seem useful / not unknowledgeable, and who err on the side of rejection.



                It does so in the highly unusual format of starting off with a sarcastic and humorous introduction about why a reviewer is under "so much pressure to reject these days", given that articles have increased in quality, and then proceeds to offer ten tongue-in-cheek "rules" for reviewers to try in order to ensure a malicious rejection even of good papers. It only enters serious, non-sarcastic discussion of why those rules are poor interpretations of statistics in the much longer "appendix" of the paper, which is in fact the 'real' article.



                So in a sense, this is the same thing you're talking about, except seen from the reverse direction: instead of instructing authors on how to keep reviewers "busy" with trivialities, it is a tongue-in-cheek article instructing reviewers on how to focus on trivialities when faced with an actually well-presented paper, in order to sound like they have critical influence over the outcome and/or to ensure a rejection and waste the author's time (i.e. heavily implying that this is a common enough occurrence to warrant such a sarcastic article).



                (the original paper is paywalled, but a version of the pdf should be available to read for free online via a simple search engine search; it's a rather popular paper!)






                share|improve this answer
























                    answered yesterday

                    Tasos Papastylianou




















                        up vote
                        0
                        down vote













                        Let's set up a simple static game: assume that there are two kinds of reviewers, "Good R" and "Bad R". "Good R" are those who know the subject well, and even if they don't, they will try to honestly review the paper on its merits. "Bad R" are those who will go for "wrongful criticism" in the logic laid out by the OP, and for "Superficial Criticism" if we submit a superficially sloppy paper. We consider two strategies, "Superficial Sloppiness" and "Tidy Manuscript". In all we have four possible states.



                        I argue that the most preferred state is "Good R - Tidy manuscript". In such a case the paper will be reviewed on merit by an appropriate reviewer. Let's assign the numerical value/utility 4 to this outcome (the scale is ordinal).



                        Consider now the state "Good R - Superficial Sloppiness". As other answers indicated, we will most likely get a quick reject (and acquire a bad reputation in the eyes of a person we should care about). This is the worst thing that could happen to our paper, given that it was assigned to a Good Reviewer. We assign the value 1 to this state.



                        Let's move to the state "Bad R - Wrongful Criticism". Supposedly this is the state that we want to avoid by the proposed tactic. I argue that it is not worse than "Good R - Superficial Sloppiness": the latter is bad because there we are really shooting ourselves in the foot, while "Bad R - Wrongful Criticism" is an unfortunate but expected situation. So we assign the value 2 to this state.



                        Finally, the state "Bad R - Superficial Criticism" is what we try to guarantee with this tactic. We certainly consider it better than the previous one, but not as good as having a Good Reviewer assess our tidy paper on merit. So we assign the value 3 to this state. The normal form of the game is therefore



                                                        Good R    Bad R
                        Tidy Manuscript                   4         2
                        Superficial Sloppiness            1         3

                        (rows: our strategy; columns: the type of reviewer drawn by nature)



                        There is no strictly or weakly dominant strategy here. But let's not go into mixed strategy equilibria. From our point of view, reviewers are chosen by nature (mother nature, not Nature the journal) with some probability, say p for the probability that we get a Bad Reviewer. Then the expected utility for each strategy is



                        V(Sprf Slop)  = p x 3 + 1 x (1-p) = 2p + 1
                        V(Tidy Mnscr) = p x 2 + 4 x (1-p) = 4 - 2p



                        It appears rational to choose the Superficial Sloppiness tactic iff



                        V(Sprf Slop) > V(Tidy Mnscr)  =>  2p + 1 > 4 - 2p  =>  p > 3/4



                        In words, if you think that the chance of getting a Bad Reviewer is higher than 3/4, then your expected utility will indeed be higher if you apply such an embarrassing tactic.



                        Do 3 out of 4 reviewers belong to the Bad Reviewer category in your field?






share|improve this answer

answered 1 hour ago

Alecos Papadopoulos