Inadmissible theorems in research

One of my engineering friends told me how he once had to take a make-up calculus I exam after being hospitalised, and so self-studied a lot of the missed topics. For the make-up exam, he used L'Hôpital's rule, although we weren't taught it until one or two exams later. My friend told me that the professor wrote




'You are not yet allowed to use L'Hôpital's rule.'




So, I like to say that L'Hôpital's rule was inadmissible in that exam.



Now, it absolutely makes sense that, as a student, you're not allowed to use propositions, theorems, etc. from future topics, let alone from future classes, especially in something as basic as calculus I. It also makes sense to adjust for majors: certainly maths majors shouldn't be allowed to use topics from discrete mathematics or linear algebra to gain an edge in calculus I or II over their business, environmental science or engineering classmates (who take linear algebra later than maths majors at my university).



But after bachelor's, master's and maths PhD coursework, you're a researcher and not merely a student: say you're writing your maths PhD dissertation, or you've even finished the PhD.



Does maths research have anything inadmissible?



I can't imagine having something to prove, finding a paper that helps you prove it, and then going to your advisor only to be told, 'You are not yet allowed to use the Poincaré theorem', or, for something proven true more than 12 years ago, 'You are not yet allowed to use Cauchy's differentiation formula'.



And what about fields outside maths, say physics or computer science?







  • 85




    I would have said by virtue of being hospitalized, L'Hopital's rule should be fair game.
    – Azor Ahai
    Aug 29 at 21:02






  • 1




    Comments are not for extended discussion; this conversation has been moved to chat. Please do not post answers in the comments. If you want to debate the practice of banning L’Hôpital’s rule in an exam situation, please take it to chat. Please read this FAQ before posting another comment.
    – Wrzlprmft♦
    Aug 31 at 7:06















up vote
60
down vote

favorite
8












One of my engineering friends told me how he once had to take a make-up calculus I exam due to being hospitalised and so self-studied a lot of the missed topics. For the make-up exam, he used L'Hôpital's rule, although we weren't taught that until 1 or 2 exams later. My friend told me that the professor wrote




'You are not yet allowed to use L'Hôpital's rule.'




So, I like to say that L'Hôpital's rule was inadmissible in that exam.



Now, it absolutely makes sense that if you're the student that you're not allowed to use propositions, theorems, etc from future topics, all the more for future classes and especially for something as basic as calculus I. It also makes sense to adjust for majors: Certainly maths majors shouldn't be allowed to use topics in discrete mathematics or linear algebra to have an edge over their business, environmental science or engineering (who take linear algebra later than maths majors in my university) classmates in calculus I or II.



But after bachelor's and master's and maths PhD coursework, you're the researcher and not merely the student: Say, you're doing your maths PhD dissertation or even after you've finished the PhD.



Does maths research have anything inadmissible?



I can't imagine you have something to prove and then you find some paper that helps you prove something and then you go to your advisor who would then tell you, 'You are not yet allowed to use Poincaré theorem' or for something proven true more than 12 years ago: 'You are not yet allowed to use Cauchy's differentiation formula'.



Actually what about outside maths, say, physics or computer science?







share|improve this question


















  • 85




    I would have said by virtue of being hospitalized, L'Hopital's rule should be fair game.
    – Azor Ahai
    Aug 29 at 21:02






  • 1




    Comments are not for extended discussion; this conversation has been moved to chat. Please do not post answers in the comments. If you want to debate the practice of banning L’Hôpital’s rule in an exam situation, please take it to chat. Please read this FAQ before posting another comment.
    – Wrzlprmft♦
    Aug 31 at 7:06













up vote
60
down vote

favorite
8









up vote
60
down vote

favorite
8






8





One of my engineering friends told me how he once had to take a make-up calculus I exam due to being hospitalised and so self-studied a lot of the missed topics. For the make-up exam, he used L'Hôpital's rule, although we weren't taught that until 1 or 2 exams later. My friend told me that the professor wrote




'You are not yet allowed to use L'Hôpital's rule.'




So, I like to say that L'Hôpital's rule was inadmissible in that exam.



Now, it absolutely makes sense that if you're the student that you're not allowed to use propositions, theorems, etc from future topics, all the more for future classes and especially for something as basic as calculus I. It also makes sense to adjust for majors: Certainly maths majors shouldn't be allowed to use topics in discrete mathematics or linear algebra to have an edge over their business, environmental science or engineering (who take linear algebra later than maths majors in my university) classmates in calculus I or II.



But after bachelor's and master's and maths PhD coursework, you're the researcher and not merely the student: Say, you're doing your maths PhD dissertation or even after you've finished the PhD.



Does maths research have anything inadmissible?



I can't imagine you have something to prove and then you find some paper that helps you prove something and then you go to your advisor who would then tell you, 'You are not yet allowed to use Poincaré theorem' or for something proven true more than 12 years ago: 'You are not yet allowed to use Cauchy's differentiation formula'.



Actually what about outside maths, say, physics or computer science?







share|improve this question














One of my engineering friends told me how he once had to take a make-up calculus I exam due to being hospitalised and so self-studied a lot of the missed topics. For the make-up exam, he used L'Hôpital's rule, although we weren't taught that until 1 or 2 exams later. My friend told me that the professor wrote




'You are not yet allowed to use L'Hôpital's rule.'




So, I like to say that L'Hôpital's rule was inadmissible in that exam.



Now, it absolutely makes sense that if you're the student that you're not allowed to use propositions, theorems, etc from future topics, all the more for future classes and especially for something as basic as calculus I. It also makes sense to adjust for majors: Certainly maths majors shouldn't be allowed to use topics in discrete mathematics or linear algebra to have an edge over their business, environmental science or engineering (who take linear algebra later than maths majors in my university) classmates in calculus I or II.



But after bachelor's and master's and maths PhD coursework, you're the researcher and not merely the student: Say, you're doing your maths PhD dissertation or even after you've finished the PhD.



Does maths research have anything inadmissible?



I can't imagine you have something to prove and then you find some paper that helps you prove something and then you go to your advisor who would then tell you, 'You are not yet allowed to use Poincaré theorem' or for something proven true more than 12 years ago: 'You are not yet allowed to use Cauchy's differentiation formula'.



Actually what about outside maths, say, physics or computer science?









share|improve this question













share|improve this question




share|improve this question








asked Aug 29 at 14:18 by BCLC; edited Sep 4 at 13:32 by Volker Siegel





16 Answers

Accepted answer (score 8) – Tommi Brander










The error, such as it is, your friend made was not the use of l'Hôpital, but the lack of proof that it is correct. If he had stated l'Hôpital as a lemma and provided a sufficiently elementary proof, then presumably the lecturer would not have had an issue with the solution.



An analogous phenomenon happens in research mathematics. There are plenty of folklore results, where researchers are pretty sure the result is true, and the techniques for proving the result are known, but nobody happens to have written the proof down or at least published it. These can be found, for example, in the classical regularity theory for partial differential equations.



Should one provide a proof of such a result when using it as a tool? Sometimes people simply refer to the result without being explicit about it. Sometimes they prove it "because we cannot find a proof in the literature", even if the proof is simple or tangential to the point of the article at hand. There is no absolutely right solution in these cases.



I think that folklore results are as close to "inadmissible" as one gets in research mathematics; one should be careful about them, sometimes prove them, but sometimes they are also used without proof.







  • Upon reflection, I believe this analogy is excellent. I am leaning towards accepting this as answer.
    – BCLC
    Aug 29 at 17:14










  • I found only 3 instances of 'cannot find a proof in the literature' on google. Here's one. Is this actually more common? (in perhaps papers that are not public domain)
    – BCLC
    Aug 29 at 17:16











  • Your first paragraph makes sense if the student is writing a paper for homework, but seems a bit much for an examination. But you don't seem to address the main question: is anything disallowed in research, assuming it is known to be true?
    – Buffy
    Aug 29 at 17:32










  • Example phrase: "The proof is completely standard and is usually given for the case K=R [30]. We show that the same proof works for K=C". The phrases are likely to not be uniform. I cannot say how often it happens, but it does happen.
    – Tommi Brander
    Aug 29 at 17:32






  • 1




    @Buffy The first paragraph is an introduction to the answer that is folklore. Right, Tommi Brander?
    – BCLC
    Aug 29 at 18:18

















Answer (score 113) – Federico Poloni














Does maths research have anything inadmissible?




No, but trying to prove X without using Y is still a very useful concept even in research, because it can lead to interesting generalizations, or new proof techniques that can be applied to a larger set of problems.



For instance, in some sense the Lebesgue integral is "just" trying to prove the properties of integrals without using the continuity of $f$, or the theory of matroids is "just" trying to prove the properties of linearly independent vectors without using a lot of properties from the vector space structure.



So this is far from being a pointless exercise, if that's what you had in mind.
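To make the matroid example concrete, here are the standard independence axioms (a textbook definition in LaTeX, not taken from the answer itself): they capture exactly the properties of linearly independent sets, with no vector space in sight.

    % Independence axioms of a matroid (E, \mathcal{I}):
    \[ \text{(I1)}\quad \emptyset \in \mathcal{I} \]
    \[ \text{(I2)}\quad A \subseteq B,\ B \in \mathcal{I} \implies A \in \mathcal{I}
       \qquad \text{(hereditary)} \]
    \[ \text{(I3)}\quad A, B \in \mathcal{I},\ |A| < |B| \implies
       \exists\, b \in B \setminus A,\ A \cup \{b\} \in \mathcal{I}
       \qquad \text{(exchange)} \]

Proving a statement from (I1)-(I3) alone is precisely "proving it without using the vector space structure".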







  • 36




    This is an excellent answer. There is a very broad phenomenon that can be paraphrased as "constraint breeds creativity." E.g. there is a reason that people have been writing haikus for more than eight hundred years. But one of the essences of "creative constraints" is that they are largely self-imposed.
    – Pete L. Clark
    Aug 29 at 22:00






  • 1




    @FedericoPoloni I’m not familiar with that use of punctuation, and I don’t think it’s commonly understood. I think you probably mean to write “the Lebesgue integral is ‘just’ trying to prove …”, which uses more conventional punctuation and grammar to express what I think you’re trying to express.
    – Konrad Rudolph
    Aug 30 at 9:52







  • 1




    @KonradRudolph FWIW, I think the original was fine, although I don't have a strong preference. (Native English speaker)
    – Yemon Choi
    Aug 30 at 12:07







  • 1




    An important note, though: I consider there to be a very significant difference between proving results using fewer hypotheses or axioms, and "pretending" not to know theorems which are consequences of the hypotheses you do assume. Banning l'Hopital, while assuming stronger results like the mean value and squeeze theorem, is both ill-defined (the first lemma of my solution can just be a proof of l'Hopital) and of dubious benefit.
    – user168715
    Aug 31 at 6:53






  • 1




    @PeteL.Clark There's even a relevant XKCD about that.
    – Nic Hartley
    Sep 1 at 1:49

















Answer (score 45) – jakebeal













In the sense that you are asking, I cannot imagine there ever being a method that is ruled inadmissible because the researcher is "not ready for it." Every intellectual approach is potentially fair game.



If the specific goal of a work is to find an alternate approach to establishing something, however, it could well be the case that one or more prior methods are ruled out of scope, since using them would assume the very result you want to establish by an independent path. For example, the constant $e$ has been derived in multiple ways.



Finally, once you step outside of pure theory and into experimental work, one must also consider the ethics of an experimental method. Many potential approaches are considered inadmissible due to the objectionable nature of the experiment. In extreme cases, such as the Nazi medical experiments, even referencing the prior work may be considered inadmissible.






  • 2




    Ah, you mean like if you want to, say, prove Fourier inversion formula probabilistically, you would want to avoid anything that sounds like what you already know to be the proof/s of the Fourier inversion formula because that would defeat coming up with a different proof? Or something like my question here? Thanks jakebeal!
    – BCLC
    Aug 29 at 14:50






  • 3




    Re outside of pure: Okay now that seems pretty obvious in hindsight (i.e. dumb question for outside of pure). I think it's far less obvious for pure
    – BCLC
    Aug 29 at 15:24


















Answer (score 32)













It is worth pointing out that theorems are usually inadmissible if they lead to circular theorem-proving. If you study maths you learn how mathematical theories are built lemma by lemma and theorem by theorem. These theorems and their dependencies form a directed acyclic graph (DAG).



If you are asked to reproduce the proof of a certain theorem and you use a "later" result, that result usually depends on the theorem you are supposed to prove, so using it is not just inadmissible for educational reasons; it would actually make the proof circular, and hence incorrect, in the context of the DAG.



In that sense there cannot be any inadmissible theorems in research, because research usually consists of proving the "latest" theorems. However, if you publish a shorter, more elegant or more beautiful proof of a known result, you might have to look out for inadmissible theorems again.
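To make the DAG picture concrete, here is a minimal sketch (Python; the dependency map and function names are my own illustrations, not from the answer) of the check a grader is implicitly applying: a "later" result may be used only if it does not depend, directly or transitively, on the theorem being proved.

    # Toy theorem-dependency checker: deps[A] lists the results the proof of A uses.
    def reachable(deps, start, target):
        """Depth-first search: does `start` depend (transitively) on `target`?"""
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            if node == target:
                return True
            if node not in seen:
                seen.add(node)
                stack.extend(deps.get(node, ()))
        return False

    def admissible(deps, theorem, tool):
        """Using `tool` in a proof of `theorem` keeps the graph acyclic
        only if `tool` does not already depend on `theorem`."""
        return not reachable(deps, tool, theorem)

    # Illustrative fragment of a calculus curriculum:
    deps = {
        "l'Hopital": ["derivative of sin"],
        "derivative of sin": ["lim sin(x)/x = 1"],
    }
    print(admissible(deps, "lim sin(x)/x = 1", "l'Hopital"))  # False: circular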






  • 3




    +1 for bringing up explicitly what seems to have only been implicit, or mentioned in comments to other answers. I have a hazy memory of marking someone's comprehensive graduate exam in Canada where the simplicity of the algebra of n-by-n matrices (which carried non-negligible marks) was proved by appealing to Wedderburn's structure theorem...
    – Yemon Choi
    Aug 30 at 12:09






  • 1




    This is the right answer to my mind. It would be strengthened by explaining what this has to do with l'Hôpital, as in Nate Eldredge's comment. But what does DAG stand for?
    – Noah Snyder
    Aug 30 at 12:20







  • 3




    @NoahSnyder: DAG doubtless stands for directed acyclic graph.
    – J W
    Aug 30 at 13:13










  • @JW: Thanks! I was expecting it was a technical term in pedagogy or philosophy of science, not math.
    – Noah Snyder
    Aug 30 at 13:21






  • 4




    The acyclic bit of DAGs is probably worded a bit carelessly. It's common enough to have theorems A and B that are essentially equivalent, such that A can be proven from B and vice versa. This creates an obvious cycle, but it doesn't matter. There are then at least two acyclic subgraphs that connect the theorem to prove and its axioms - axioms being the graph roots. IOW, while any particular proof is acyclic, the union of them is not.
    – MSalters
    Aug 30 at 14:53

















Answer (score 30) – Tobias Kildetoft













While there are indeed no inadmissible theorems in research, there are certain things that one sometimes tries to avoid.



Two examples come to mind:



The first is the classification of finite simple groups. The classification itself is not particularly complicated, but the proof is absurdly so. This makes mathematicians working in group theory prefer to avoid using it when possible. It is in fact quite often explicitly pointed out in a paper if a key result relies on it.



The reason for this preference was originally, to some extent, probably that the proof was too complicated for people to have full confidence in it, but my impression is that this is no longer the case; the preference is now due to the fact that relying on the classification makes the "real reason" for the truth of a result more opaque, and thus less likely to lead to further insights.




The other example is the huge effort that has gone into trying to prove the so-called Kazhdan-Lusztig conjecture using purely algebraic methods.



The result itself is algebraic in nature, but the original proof uses a lot of very deep results from geometry, which made it impossible to use it as a stepping stone to settings not allowing for this geometric structure.




Such an algebraic proof was achieved in 2012 by Elias and Williamson, when they proved Soergel's conjecture, which has the Kazhdan-Lusztig conjecture as one of several consequences.



The techniques used in this proof allowed just the sort of generalizations hoped for, leading first to a disproof of Lusztig's conjecture in 2013 (a characteristic $p$ analogue of the Kazhdan-Lusztig conjecture), and then to a proof of a replacement for Lusztig's conjecture in 2015 (for type $A$) and 2017 (in general), at least under some mild assumptions on the characteristic.






  • 2




    Didn't Elias and Williamson put the KL conjecture on an algebraic footing, or am I misremembering things?
    – darij grinberg
    Aug 29 at 15:08










  • @darijgrinberg They did indeed. I actually meant to add that, but forgot it again while typing. I have added some details about it.
    – Tobias Kildetoft
    Aug 29 at 17:04


















Answer (score 17)













There are cases where researchers restrict themselves from using certain theorems. Example:




Atle Selberg,"An elementary proof of the prime-number theorem". Ann. of Math. (2) 50 (1949), 305--313.




The author restricts himself to using only "elementary" (in a technical sense) methods.



Other cases may be proofs in geometry using only straightedge and compasses. Gauss showed that the regular 257-gon may be constructed with straightedge and compasses. I would not consider that to be "a new proof of a known result".







  • So same as jakebeal?
    – BCLC
    Aug 29 at 17:03






  • 1




    That case is different because the researchers are just showing a new proof of a known theorem that is simpler (or more elegant) than the known proofs. In maths, there is a kind of consensus that simpler proofs are better (for many reasons; for instance, they are easier to check and usually depend on weaker results), so an elementary proof is an original research result even if it is a proof of the "same type" as the existing ones (e.g., a simpler algebraic proof when another algebraic proof is already known).
    – Hilder Vitor Lima Pereira
    Aug 30 at 13:21






  • 2




    @HilderVitorLimaPereira if I may nitpick a bit, the elementary proof of the prime number theorem is regarded by most people who have studied it as neither simpler nor more elegant than the analytic family of proofs. It is however more “elementary” (specifically, does not use complex or Fourier analysis), which is also a very important and interesting feature. Certainly its discovery was a major research result, so in that sense you make a good and valid point.
    – Dan Romik
    Aug 30 at 15:46










    @DanRomik I see. Yes, when I said "weaker results" I was actually thinking about more elementary results, in the sense that they use theories that do not depend on a deep sequence of constructions and other theorems, or that are considered basic knowledge in the maths community. Thank you for that comment.
    – Hilder Vitor Lima Pereira
    Aug 30 at 16:03










  • @HilderVitorLimaPereira maybe that thought could be called "weaker claims"?
    – elliot svensson
    Aug 31 at 14:20

















Answer (score 10)













It is perhaps worth noting that some results are in a sense inadmissible because they aren't actually theorems. Some conjectures/axioms are so central that they are widely used, even though they haven't yet been established. Proofs relying on these should make that clear in the hypotheses. However, it wouldn't be that hard to have a bad day and forget that something you use frequently hasn't actually been proved yet, or that it is needed for a later result you want to use.






  • Perhaps Poincaré was a bad example because it was a conjecture with a high bounty for quite some time, but let's pretend I used something that had been proven decades ago. Your answer is now...?
    – BCLC
    Aug 29 at 15:13






  • 1




    There is (unfortunately...) a whole spectrum between "unequivocal theorem" and "conjecture" in combinatorics and geometry, due to the rigorous methods lagging behind the sort of arguments researchers actually use.
    – darij grinberg
    Aug 29 at 15:14






  • 1




    @BCLC Actually, the Poincare Conjecture was widely 'used' before its proof. The resulting theorems include a hypothesis of 'no fake 3-balls'. But I also know of a paper proving a topological result using the generalised continuum hypothesis.
    – Jessica B
    Aug 29 at 15:17










  • @darijgrinberg I disagree with your assertion. If something is believed true, no matter with what level of confidence, but is not an “unequivocal” theorem (i.e., a “theorem”), then it is a conjecture, not “somewhere on the spectrum between unequivocal theorem and conjecture”. I challenge you to show me a pure math paper, published in a credible journal, that uses different terminology. I’m pretty sure I do understand what you’re getting at, but others likely won’t, and your use of an adjective like “unequivocal” next to “theorem” is likely to sow confusion and lead some people to think ...
    – Dan Romik
    Aug 29 at 20:42






  • 2




    @DanRomik: I guess I was ambiguous. Of course these things are stated as theorems in the papers they're published in. But when you start asking people about them, you start hearing eehms and uuhms. I don't think the problem is concentrated with certain authors -- rather it's specific to certain kinds of combinatorics, and the same people that write very clearly about (say) algebra become vague and murky when they need properties of RSK or Hillman-Grassl...
    – darij grinberg
    Aug 29 at 20:45


















Answer (score 8)













In intuitionistic logic and constructive mathematics we try to prove things without the law of excluded middle, which excludes many of the normal tools used in maths. And in logic in general we often try to prove things using only a defined set of axioms, which often means that we are not allowed to follow our 'normal' intuitions. Especially when proving something in multiple axiomatic systems of different strengths, some tools only become available towards the end (in the more powerful systems), and are as such inadmissible in the weaker systems.
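A small illustration (a sketch in Lean 4; the theorem names are mine, though Classical.byContradiction is the actual core library lemma): one direction of double negation is intuitionistically provable, while the other needs a classical principle and is therefore inadmissible in the constructive setting.

    -- Intuitionistically fine: no excluded middle needed.
    theorem toDoubleNeg (p : Prop) (h : p) : ¬¬p :=
      fun hnp => hnp h

    -- Inadmissible constructively: double-negation elimination
    -- requires a classical axiom such as Classical.byContradiction.
    theorem ofDoubleNeg (p : Prop) (h : ¬¬p) : p :=
      Classical.byContradiction h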






  • 2




    That is a great thing to do, but not the same as having parts of math closed off from you by an advisor unless you are both working in that space. The axiom of choice is another example that explores proof in a reduced space. I once worked in systems with a small set of axioms in which more could be true, but less could be proved to be true. Fun.
    – Buffy
    Aug 29 at 20:43











  • In the same vein, working in reverse mathematics usually requires one's arguments to be provable from rather weak systems of axioms, which leads to all sorts of complications that would not be present using standard sets of assumptions.
    – Andrés E. Caicedo
    Aug 30 at 20:29

















Answer (score 6)













To answer your main question: no, nothing is disallowed. Any advisor would (or at least should) allow any valid mathematics. There is nothing in mathematics that is disallowed, especially in doctoral research. Of course, this assumes acceptance of Poincaré's theorem (now settled); prior to an accepted proof you couldn't depend on it.



In fact, you can even write a dissertation based on a hypothetical (If Prof Buffy's Large Theorem is true, then it follows that...). You can explore the consequences of things not proven. Sometimes it helps connect them to known results, leading to a proof of the "large theorem" and sometimes it helps to lead to a contradiction showing it false.




However, I have an issue with the background you have given on what is appropriate in teaching and examining students. I question the wisdom of the first professor disallowing anything that the student knows. That seems shortsighted and turns the professor into a gate that allows only some things to trickle through.



Of course, if the professor wants to test the student on a particular technique he can try to find questions that do so, but this also points up the basic stupidity of exams in general. There are other ways to assure that the student learns essential techniques.



A university education isn't about competition with other students and the (horrors) problem of an unfair advantage. It is about learning. If the professor or the system grades students competitively, they are doing a poor job.



If you have the 20 absolutely best students in the world and grade purely competitively, then half of them will be below average.






  • 4




    I feel like you have misunderstood the question.
    – Jessica B
    Aug 29 at 15:05






  • 2




    @Buffy: The question wasn't actually about the class. The question was about whether "inadmissible" stuff exists at the graduate level.
    – cHao
    Aug 29 at 15:54






  • 11




    One reason to "disallow" results not yet studied is that it helps to avoid circular logic. A standard example: student is asked to show that lim_x -> 0 sin(x)/x = 1. Student applies L'Hôpital's rule, taking advantage of the fact that the derivative of sin(x) is cos(x). However, the usual way of proving that the derivative of sin(x) is cos(x) requires knowing the value of lim_x -> 0 sin(x)/x. If you "forbid" L'Hôpital's rule in solving the original problem, you prevent this issue from arising.
    – Nate Eldredge
    Aug 29 at 16:48






  • 5




    Well, you can have a standing course policy not to assume results not yet proved. This is sufficiently common that the instructor may have assumed it went without saying. Or, the downgrade may have actually been for circular logic, but the reasoning was explained poorly or misunderstood.
    – Nate Eldredge
    Aug 29 at 16:56






  • 4




    I think L'Hôpital's rule is uniquely pernicious and results in students failing to learn about limits and immediately forgetting everything about limits, in a way that has essentially no good parallels elsewhere in the elementary math curriculum. So I don't think you can substitute in something else and make it the same question. Someone who uses L'Hôpital to say compute $\lim_{x \rightarrow 0} \frac{x^2}{x}$ isn't showing a more advanced understanding of the material, they're showing they don't understand the material!
    – Noah Snyder
    Aug 30 at 12:52


















Answer (score 4) – Dmitry Savostyanov













I don't think there are inadmissible theorems in research, although obviously one has to take care not to rely on assumptions that have yet to be proven for a particular problem.



However, in terms of PhD or postdoc work, I feel that some approaches may be rather "off-topic" for not-really-academic reasons. For example, if you secure PhD funding to study topic X, you should not normally use it to study Y. Similarly, if you secure a postdoc in a team which develops method A, and you want to study your competitor's method B, your PI may want to keep the time you spend on B limited, so that it does not exceed the time you spend developing A. Some PIs are notorious in the sense that they won't tolerate you even touching some method C, for their own important reasons; so even though you have full academic freedom to go and explore method C if you like it, it may be "inadmissible" to do so within your current work arrangements.






  • Thanks Dmitry Savostyanov! This sounds like something I had in mind, but this is for applied research? Or also for theoretical research?
    – BCLC
    Aug 29 at 15:10







  • 1




    Even in pure maths, people can be very protective sometimes. And people in applied maths can be very open-minded. It's more about personal approaches to science, perhaps.
    – Dmitry Savostyanov
    Aug 29 at 15:11

















Answer (score 2)













I'm going to give a related point of view from outside of academia, namely a commercial/government research organisation.



I have come across researchers and managers who are hindered by what I call an exam mentality, whereby they assume that a research question can only be answered with the data set provided, and that they cannot make reference to other data, results, studies etc.



I've found this exam mentality to be extremely limiting; it comes about because the researcher or manager has a misconception about research that has been indoctrinated by their (mostly exam-based) education.



The fact of the matter is that avoiding data, techniques or studies on arbitrary grounds stifles research. It leads to missed opportunities for commercial organisations to make profit, missed consequences when governments introduce new policy, missed side-effects of new drugs, etc.









Answer (score 2)













I will add a small example from theoretical computer science and algorithm design.

It is a very important open problem to find a combinatorial (or even LP-based) algorithm that achieves the Goemans-Williamson bound (0.878) for approximating the MaxCut problem in polynomial time.

We know that using semidefinite programming techniques, an approximation factor of $\alpha \approx 0.878$ can be achieved in polynomial time. But can we achieve this bound using other techniques? Slightly less ambitiously, but probably equally important: can we find a combinatorial algorithm with an approximation guarantee strictly better than 1/2?

Luca Trevisan has made important progress in that direction using spectral techniques.
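For context, here is a minimal sketch (Python; the graph and the function name are illustrative, not from the answer) of the trivial randomized baseline the answer wants to beat: assign sides uniformly at random, so each edge is cut with probability 1/2 and the expected cut size is at least half of MaxCut.

    import random

    def random_cut(num_vertices, edges, seed=0):
        """Assign each vertex to side 0 or 1 uniformly at random;
        each edge is cut with probability 1/2 in expectation."""
        rng = random.Random(seed)
        side = [rng.randint(0, 1) for _ in range(num_vertices)]
        cut_size = sum(1 for u, v in edges if side[u] != side[v])
        return side, cut_size

    # Toy example: a 4-cycle, whose maximum cut has size 4.
    print(random_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))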







Answer (score 0) – Mick













In research you would use the most applicable method (that you know) to demonstrate a solution, and would possibly also be in situations where you are asked about, or offered, alternative approaches to your solution (and then you learn a new method).

In the example where L'Hôpital's rule was "not permitted", the question could perhaps have been worded better: it sounds like a "solve this" question, written on the assumption that only the methods taught in the course are known to students, and that therefore only those methods will be used in the exam.






      • There was no ambiguity in the question. L'Hôpital's rule wasn't introduced to us until our third or fourth exam. My engineering friend was taking a make-up for either our second exam or our midterm or both (I forgot). It would've been like using the sequence definition of continuity in the first exam of an elementary analysis class, if such a class teaches sequences last (like mine did)
        – BCLC
        Aug 29 at 14:46










      • I understand that, but when it was introduced has no bearing on whether students may already know how to use it. It would be the same as asking, "Show that the first derivative of $x^2$ is $2x$," and then telling students who solved it using implicit differentiation that that is not allowed and they should have used explicit differentiation.
        – Mick
        Aug 29 at 14:51











      • Mick, but it was a make-up exam. Wouldn't it be unfair to students who took the exam on time, since we didn't know L'Hôpital's rule at the time?
        – BCLC
        Aug 29 at 14:56







      • 2




        It's not about being fair. It's about math building on itself. Often you're expected to solve things a certain way in order to ensure you understand what the later stuff allows you to simplify or ignore. If there was an intended method, it should have been in the instructions. But it's a common assumption that if you haven't been taught it, you don't know it yet.
        – cHao
        Aug 29 at 15:51











      • Without denying the other suggestions on why it might be disallowed, fairness to other students is irrelevant. The purpose of an exam is to assess or verify what you have learned, not to decide who wins a competition.
        – WGroleau
        Aug 30 at 12:15

















Answer (score 0)













Well, in pure maths research I am sure brute-force approximation by computer is disallowed, except as a way to motivate interest in a topic, to narrow the area to be explored, or perhaps to suggest an approach to a solution.

Maths research requires equations that describe an exact answer, and proof that the answer is correct by derivation from established mathematical facts and theorems. Computer approximations may use ever smaller intervals to narrow the range of an answer, but they never actually reach the infinitely small limit in the L'Hôpital style.

The separate area of computerized derivation basically just automates what is already known. I am sure many places leave researchers free to use such software to speed up the documentation of work, as far as such software goes, and that plenty of human guidance is still needed to formulate the problem, introduce postulates and choose which available solution steps to try. But the key thing is that all such software derivations would have to be verified by hand before any outside review, both for software error and to ensure that the techniques stay within allowed boundaries (the "if" portion of theorems, etc.).

And after such hand checks... how many mathematical researchers would credit computer software for assistance?

Well, I saw applied mathematicians cite software as a quick method for colleagues to check the reasonableness of work back in the 1980s. Since applied mathematics sometimes takes an almost engineering view of practical results, I suppose they still give computer approximations as a quick demonstration after the formal derivations. And I hear that applied maths sometimes solves the nearest tractable approximation of a problem when a solution to the exact problem still evades them. So again, more room for assistance from computerized derivation. I am not sure such operations-research-type topics fit everyone's definition of mathematical research, though.






      • Please try to avoid leaving two separate answers; you should edit your first one
        – Yemon Choi
        Sep 2 at 3:36










      • I find this answer slightly misses the point of the original question, since it seems more about the use of computers than anything else, and doesn't really address the OP's question about whether there are situations when one should not make use of certain theorems while doing research
        – Yemon Choi
        Sep 2 at 3:37

















Answer (score 0)













In shorter terms: yes, computer approximation techniques are often used in a shotgun manner to look for areas of potential convergence on solutions, as in "give me a hint", especially in applied maths topics where real-world boundaries can be described.

Again, there is the question of whether real-world problems other than fundamental physics are true maths research, or the much looser applied maths or even operations research.

But in the actual derivation of new theorems from underlying, proven theorems, computers are more limited, serving as documentation tools similar to word processors for prose. Still, they are becoming more and more important for checking the equations of documented work, much as word processors check spelling and grammar for prose, with ever more areas where a human must override or redirect.






      • I find this answer slightly misses the point of the original question, since it seems more about the use of computers than anything else
        – Yemon Choi
        Sep 2 at 3:35










      • Also, don't create two new user identities. Register one which can be used consistently
        – Yemon Choi
        Sep 2 at 3:37

















Answer (score 0)













The axiom of choice (and its corollaries) is pretty well accepted these days in the mathematical community, but you might occasionally run across a few old-school mathematicians who think that it is "wrong", and therefore that any corollary you use the axiom of choice to prove is also "wrong". (Of course, what it even means for the axiom of choice to be "wrong" is a largely philosophical question.)






      share|improve this answer




















        Your Answer







        StackExchange.ready(function()
        var channelOptions =
        tags: "".split(" "),
        id: "415"
        ;
        initTagRenderer("".split(" "), "".split(" "), channelOptions);

        StackExchange.using("externalEditor", function()
        // Have to fire editor after snippets, if snippets enabled
        if (StackExchange.settings.snippets.snippetsEnabled)
        StackExchange.using("snippets", function()
        createEditor();
        );

        else
        createEditor();

        );

        function createEditor()
        StackExchange.prepareEditor(
        heartbeatType: 'answer',
        convertImagesToLinks: true,
        noModals: false,
        showLowRepImageUploadWarning: true,
        reputationToPostImages: 10,
        bindNavPrevention: true,
        postfix: "",
        noCode: true, onDemand: true,
        discardSelector: ".discard-answer"
        ,immediatelyShowMarkdownHelp:true
        );



        );













         

        draft saved


        draft discarded


















        StackExchange.ready(
        function ()
        StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2facademia.stackexchange.com%2fquestions%2f116019%2finadmissible-theorems-in-research%23new-answer', 'question_page');

        );

        Post as a guest






























        16 Answers
        16






        active

        oldest

        votes








        16 Answers
        16






        active

        oldest

        votes









        active

        oldest

        votes






        active

        oldest

        votes








        up vote
        8
        down vote



        accepted










        The error, such as it is, your friend made was not the use of l'Hôpital, but the lack of proof that it is correct. If he had stated l'Hôpital as a lemma and provided a sufficiently elementary proof, then presumably the lecturer would not have had an issue with the solution.



        An analogous phenomenon happens in research mathematics. There are plenty of folklore results, where researchers are pretty sure the result is true, and the techniques for proving the result are known, but nobody happens to have written the proof down or at least published it. These can be found, for example, in the classical regularity theory for partial differential equations.



        Should one provide a proof of such a result when using it as a tool? Sometimes people simply refer to the result without being explicit about it. Sometimes they prove it "because we cannot find a proof in the literature", even if the proof is simple or not to the point of a given article. There is no absolutely right solution in these cases.



        I think that folklore results are as close to "inadmissible" as one gets in research mathematics; one should be careful about them, sometimes prove them, but sometimes they are also used without proof.






        share|improve this answer






















        • Upon reflection, I believe this analogy is excellent. I am leaning towards accepting this as answer.
          – BCLC
          Aug 29 at 17:14










        • I found only 3 instances of 'cannot find a proof in the literature' on google. Here's one. Is this actually more common? (in perhaps papers that are not public domain)
          – BCLC
          Aug 29 at 17:16











        • Your first paragraph makes sense if the student is writing a paper for homework, but seems a bit much for an examination. But you don't seem to address the main question. Is anything disallowed in research. Assuming it is known to be true.
          – Buffy
          Aug 29 at 17:32










        • Example phrase: "The proof is completely standard and is usually given for the case K=R [30]. We show that the same proof works for K=C". The phrases are likely to not be uniform. I cannot say how often it happens, but it does happen.
          – Tommi Brander
          Aug 29 at 17:32






        • 1




          @Buffy The first paragraph is an introduction to the answer that is folklore. Right, Tommi Brander?
          – BCLC
          Aug 29 at 18:18














        up vote
        8
        down vote



        accepted










        The error, such as it is, your friend made was not the use of l'Hôpital, but the lack of proof that it is correct. If he had stated l'Hôpital as a lemma and provided a sufficiently elementary proof, then presumably the lecturer would not have had an issue with the solution.



        An analogous phenomenon happens in research mathematics. There are plenty of folklore results, where researchers are pretty sure the result is true, and the techniques for proving the result are known, but nobody happens to have written the proof down or at least published it. These can be found, for example, in the classical regularity theory for partial differential equations.



        Should one provide a proof of such a result when using it as a tool? Sometimes people simply refer to the result without being explicit about it. Sometimes they prove it "because we cannot find a proof in the literature", even if the proof is simple or not to the point of a given article. There is no absolutely right solution in these cases.



        I think that folklore results are as close to "inadmissible" as one gets in research mathematics; one should be careful about them, sometimes prove them, but sometimes they are also used without proof.






        share|improve this answer






















        • Upon reflection, I believe this analogy is excellent. I am leaning towards accepting this as answer.
          – BCLC
          Aug 29 at 17:14










        • I found only 3 instances of 'cannot find a proof in the literature' on google. Here's one. Is this actually more common? (in perhaps papers that are not public domain)
          – BCLC
          Aug 29 at 17:16











        • Your first paragraph makes sense if the student is writing a paper for homework, but seems a bit much for an examination. But you don't seem to address the main question. Is anything disallowed in research. Assuming it is known to be true.
          – Buffy
          Aug 29 at 17:32










        • Example phrase: "The proof is completely standard and is usually given for the case K=R [30]. We show that the same proof works for K=C". The phrases are likely to not be uniform. I cannot say how often it happens, but it does happen.
          – Tommi Brander
          Aug 29 at 17:32






        • 1




          @Buffy The first paragraph is an introduction to the answer that is folklore. Right, Tommi Brander?
          – BCLC
          Aug 29 at 18:18












        up vote
        8
        down vote



        accepted







        up vote
        8
        down vote



        accepted






        The error, such as it is, your friend made was not the use of l'Hôpital, but the lack of proof that it is correct. If he had stated l'Hôpital as a lemma and provided a sufficiently elementary proof, then presumably the lecturer would not have had an issue with the solution.



        An analogous phenomenon happens in research mathematics. There are plenty of folklore results, where researchers are pretty sure the result is true, and the techniques for proving the result are known, but nobody happens to have written the proof down or at least published it. These can be found, for example, in the classical regularity theory for partial differential equations.



        Should one provide a proof of such a result when using it as a tool? Sometimes people simply refer to the result without being explicit about it. Sometimes they prove it "because we cannot find a proof in the literature", even if the proof is simple or not to the point of a given article. There is no absolutely right solution in these cases.



        I think that folklore results are as close to "inadmissible" as one gets in research mathematics; one should be careful about them, sometimes prove them, but sometimes they are also used without proof.






        share|improve this answer














        The error, such as it is, your friend made was not the use of l'Hôpital, but the lack of proof that it is correct. If he had stated l'Hôpital as a lemma and provided a sufficiently elementary proof, then presumably the lecturer would not have had an issue with the solution.



        An analogous phenomenon happens in research mathematics. There are plenty of folklore results, where researchers are pretty sure the result is true, and the techniques for proving the result are known, but nobody happens to have written the proof down or at least published it. These can be found, for example, in the classical regularity theory for partial differential equations.



        Should one provide a proof of such a result when using it as a tool? Sometimes people simply refer to the result without being explicit about it. Sometimes they prove it "because we cannot find a proof in the literature", even if the proof is simple or not to the point of a given article. There is no absolutely right solution in these cases.



        I think that folklore results are as close to "inadmissible" as one gets in research mathematics; one should be careful about them, sometimes prove them, but sometimes they are also used without proof.







        share|improve this answer














        share|improve this answer



        share|improve this answer








        edited Aug 29 at 18:22

























        answered Aug 29 at 15:36









        Tommi Brander

        2,61511026




        2,61511026











        • Upon reflection, I believe this analogy is excellent. I am leaning towards accepting this as answer.
          – BCLC
          Aug 29 at 17:14










        • I found only 3 instances of 'cannot find a proof in the literature' on google. Here's one. Is this actually more common? (in perhaps papers that are not public domain)
          – BCLC
          Aug 29 at 17:16











        • Your first paragraph makes sense if the student is writing a paper for homework, but seems a bit much for an examination. But you don't seem to address the main question. Is anything disallowed in research. Assuming it is known to be true.
          – Buffy
          Aug 29 at 17:32










        • Example phrase: "The proof is completely standard and is usually given for the case K=R [30]. We show that the same proof works for K=C". The phrases are likely to not be uniform. I cannot say how often it happens, but it does happen.
          – Tommi Brander
          Aug 29 at 17:32






        • 1




          @Buffy The first paragraph is an introduction to the answer that is folklore. Right, Tommi Brander?
          – BCLC
          Aug 29 at 18:18
















        • Upon reflection, I believe this analogy is excellent. I am leaning towards accepting this as answer.
          – BCLC
          Aug 29 at 17:14










        • I found only 3 instances of 'cannot find a proof in the literature' on google. Here's one. Is this actually more common? (in perhaps papers that are not public domain)
          – BCLC
          Aug 29 at 17:16











        • Your first paragraph makes sense if the student is writing a paper for homework, but seems a bit much for an examination. But you don't seem to address the main question. Is anything disallowed in research. Assuming it is known to be true.
          – Buffy
          Aug 29 at 17:32










        • Example phrase: "The proof is completely standard and is usually given for the case K=R [30]. We show that the same proof works for K=C". The phrases are likely to not be uniform. I cannot say how often it happens, but it does happen.
          – Tommi Brander
          Aug 29 at 17:32






        • 1




          @Buffy The first paragraph is an introduction to the answer that is folklore. Right, Tommi Brander?
          – BCLC
          Aug 29 at 18:18















        Upon reflection, I believe this analogy is excellent. I am leaning towards accepting this as answer.
        – BCLC
        Aug 29 at 17:14




        Upon reflection, I believe this analogy is excellent. I am leaning towards accepting this as answer.
        – BCLC
        Aug 29 at 17:14












        I found only 3 instances of 'cannot find a proof in the literature' on google. Here's one. Is this actually more common? (in perhaps papers that are not public domain)
        – BCLC
        Aug 29 at 17:16





        I found only 3 instances of 'cannot find a proof in the literature' on google. Here's one. Is this actually more common? (in perhaps papers that are not public domain)
        – BCLC
        Aug 29 at 17:16













        Your first paragraph makes sense if the student is writing a paper for homework, but seems a bit much for an examination. But you don't seem to address the main question. Is anything disallowed in research. Assuming it is known to be true.
        – Buffy
        Aug 29 at 17:32




        Your first paragraph makes sense if the student is writing a paper for homework, but seems a bit much for an examination. But you don't seem to address the main question. Is anything disallowed in research. Assuming it is known to be true.
        – Buffy
        Aug 29 at 17:32












        Example phrase: "The proof is completely standard and is usually given for the case K=R [30]. We show that the same proof works for K=C". The phrases are likely to not be uniform. I cannot say how often it happens, but it does happen.
        – Tommi Brander
        Aug 29 at 17:32




        Example phrase: "The proof is completely standard and is usually given for the case K=R [30]. We show that the same proof works for K=C". The phrases are likely to not be uniform. I cannot say how often it happens, but it does happen.
        – Tommi Brander
        Aug 29 at 17:32




        1




        1




        @Buffy The first paragraph is an introduction to the answer that is folklore. Right, Tommi Brander?
        – BCLC
        Aug 29 at 18:18




        @Buffy The first paragraph is an introduction to the answer that is folklore. Right, Tommi Brander?
        – BCLC
        Aug 29 at 18:18










        up vote
        113
        down vote














        Does maths research have anything inadmissible?




        No, but trying to prove X without using Y is still a very useful concept even in research, because it can lead to interesting generalizations, or new proof techniques that can be applied to a larger set of problems.



        For instance, in some sense the Lebesgue integral is "just" trying to prove the properties of integrals without using the continuity of f, or the theory of matroids is "just" trying to prove the properties of linearly independent vectors without using a lot of properties from the vector space structure.



        So this is far from being a pointless exercise, if that's what you had in mind.






        share|improve this answer


















        • 36




          This is an excellent answer. There is a very broad phenomenon that can be paraphrased as "constraint breeds creativity." E.g. there is a reason that people have been writing haikus for more than eight hundred years. But one of the essences of "creative constraints" is that they are largely self-imposed.
          – Pete L. Clark
          Aug 29 at 22:00






        • 1




          @FedericoPoloni I’m not familiar with that use of punctuation, and I don’t think it’s commonly understood. I think you probably mean to write “the Lebesgue integral is ‘just’ trying to prove …”, which uses more conventional punctuation and grammar to express what I think you’re trying to express.
          – Konrad Rudolph
          Aug 30 at 9:52







        • 1




          @KonradRudolph FWIW, I think the original was fine, although I don't have a strong preference. (Native English speaker)
          – Yemon Choi
          Aug 30 at 12:07







        • 1




          An important note, though: I consider there to be a very significant difference between proving results using fewer hypotheses or axioms, and "pretending" not to know theorems which are consequences of the hypotheses you do assume. Banning l'Hopital, while assuming stronger results like the mean value and squeeze theorem, is both ill-defined (the first lemma of my solution can just be a proof of l'Hopital) and of dubious benefit.
          – user168715
          Aug 31 at 6:53






        • 1




          @PeteL.Clark There's even a relevant XKCD about that.
          – Nic Hartley
          Sep 1 at 1:49
























        up vote
        45
        down vote













        In the sense that you are asking, I cannot imagine there ever being a method that is ruled inadmissible because the researcher is "not ready for it." Every intellectual approach is potentially fair game.



        If the specific goal of a work is to find an alternate approach to establishing something, however, it could well be the case that one or more prior methods are ruled out of scope, as they would assume the very result that the work wants to establish by an independent path. For example, the constant e has been derived in multiple ways.



        Finally, once you step outside of pure theory and into experimental work, one must also consider the ethics of an experimental method. Many potential approaches are considered inadmissible due to the objectionable nature of the experiment. In extreme cases, such as the Nazi medical experiments, even referencing the prior work may be considered inadmissible.






        answered Aug 29 at 14:38 – jakebeal
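
        To make the example about e concrete, here is a small Python sketch (an illustration, not part of the answer above) deriving the constant by two independent routes; if one route were ruled out of scope, the other would still establish the value:

            import math

            def e_from_limit(n: int = 10**7) -> float:
                """e as the limit of (1 + 1/n)**n as n grows."""
                return (1 + 1 / n) ** n

            def e_from_series(terms: int = 20) -> float:
                """e as the partial sum of 1/k! for k = 0 .. terms-1."""
                return sum(1 / math.factorial(k) for k in range(terms))

            print(e_from_limit())   # ~2.7182816..., limited by floating-point rounding
            print(e_from_series())  # 2.718281828459045
            print(math.e)           # library reference value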
















        • 2




          Ah, you mean like if you want to, say, prove Fourier inversion formula probabilistically, you would want to avoid anything that sounds like what you already know to be the proof/s of the Fourier inversion formula because that would defeat coming up with a different proof? Or something like my question here? Thanks jakebeal!
          – BCLC
          Aug 29 at 14:50






        • 3




          Re outside of pure: Okay now that seems pretty obvious in hindsight (i.e. dumb question for outside of pure). I think it's far less obvious for pure
          – BCLC
          Aug 29 at 15:24


























        up vote
        32
        down vote













        It is worth pointing out that theorems are usually inadmissible if they would lead to circular reasoning. If you study math, you learn how mathematical theories are built lemma by lemma and theorem by theorem. These theorems and their dependencies form a directed acyclic graph (DAG).



        If you are asked to reproduce the proof of a certain theorem and you use a "later" result, that result usually depends on the theorem you are supposed to prove, so using it is not just inadmissible for educational reasons; it would actually produce a circular, and hence invalid, proof in the context of the DAG.



        In that sense there cannot be any inadmissible theorems in research, because research usually consists of proving the "latest" theorems. However, if you publish a shorter, more elegant or more beautiful proof of a known result, you might have to look out for inadmissible theorems again.






        answered Aug 30 at 11:19 – BlindKungFuMaster (edited Aug 30 at 13:30 – J W)
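
        A minimal Python sketch of the DAG idea (the dependency data below is hypothetical, chosen to echo the l'Hôpital example from the question): citing a result inside a proof is admissible only if the dependency graph stays acyclic.

            from graphlib import TopologicalSorter, CycleError  # Python 3.9+

            # Hypothetical edges: each theorem maps to the results its proof cites.
            deps = {
                "squeeze_theorem": {"limit_definition"},
                "mean_value_theorem": {"rolle"},
                "rolle": {"extreme_value_theorem"},
                "lhopital": {"mean_value_theorem", "limit_definition"},
            }

            def admissible(theorem: str, cites: str, deps: dict) -> bool:
                """Citing `cites` in the proof of `theorem` is admissible iff
                the dependency graph stays acyclic after adding the edge."""
                trial = {k: set(v) for k, v in deps.items()}
                trial.setdefault(theorem, set()).add(cites)
                try:
                    tuple(TopologicalSorter(trial).static_order())
                    return True
                except CycleError:
                    return False

            print(admissible("lhopital", "squeeze_theorem", deps))   # True: no cycle
            print(admissible("limit_definition", "lhopital", deps))  # False: circular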


















        • 3




          +1 for bringing up explicitly what seems to have only been implicit, or mentioned in comments to other answers. I have a hazy memory of marking someone's comprehensive graduate exam in Canada where the simplicity of the algebra of n-by-n matrices (which carried non-negligible marks) was proved by appealing to Wedderburn's structure theorem...
          – Yemon Choi
          Aug 30 at 12:09






        • 1




          This is the right answer to my mind. It would be strengthened by explaining what this has to do with l'Hopital, as in Nate Eldridge's comment. But what does DAG stand for?
          – Noah Snyder
          Aug 30 at 12:20







        • 3




          @NoahSnyder: DAG doubtless stands for directed acyclic graph.
          – J W
          Aug 30 at 13:13










        • @JW: Thanks! I was expecting it was a technical term in pedagogy or philosophy of science, not math.
          – Noah Snyder
          Aug 30 at 13:21






        • 4




          The acyclic bit of DAGs is probably worded a bit carelessly. It's common enough to have theorems A and B that are essentially equivalent, such that A can be proven from B and vice versa. This creates an obvious cycle, but it doesn't matter. There are then at least two acyclic subgraphs that connect the theorem to prove and its axioms - axioms being the graph roots. IOW, while any particular proof is acyclic, the union of them is not.
          – MSalters
          Aug 30 at 14:53
























        up vote
        30
        down vote













        While there are indeed no inadmissible theorems in research, there are certain things that one sometimes tries to avoid.



        Two examples come to mind:



        The first is the classification of finite simple groups. The classification itself is not particularly complicated, but the proof is absurdly so. This makes mathematicians working in group theory prefer to avoid using it when possible. It is in fact quite often explicitly pointed out in a paper if a key result relies on it.



        The reason for this preference was probably, originally, to some extent that the proof was too complicated for people to have full confidence in it, but my impression is that this is no longer the case; the preference is now due to the fact that relying on the classification makes the "real reason" for the truth of a result more opaque and thus less likely to lead to further insights.




        The other example is the huge effort that has gone into trying to prove the so-called Kazhdan-Lusztig conjecture using purely algebraic methods.



        The result itself is algebraic in nature, but the original proof uses a lot of very deep results from geometry, which made it impossible to use it as a stepping stone to settings not allowing for this geometric structure.




        Such an algebraic proof was achieved in 2012 by Elias and Williamson, when they proved Soergel's conjecture, which has the Kazhdan-Lusztig conjecture as one of several consequences.



        The techniques used in this proof allowed just the sort of generalizations hoped for, leading first to a disproof of Lusztig's conjecture in 2013 (a characteristic $p$ analogue of the Kazhdan-Lusztig conjecture), and then to a proof of a replacement for Lusztig's conjecture in 2015 (for type $A$) and 2017 (in general), at least under some mild assumptions on the characteristic.






        answered Aug 29 at 15:05 – Tobias Kildetoft (edited Aug 30 at 9:40 – Konrad Rudolph)


















        • 2




          Didn't Elias and Williamson put the KL conjecture on an algebraic footing, or am I misremembering things?
          – darij grinberg
          Aug 29 at 15:08










        • @darijgrinberg They did indeed. I actually meant to add that, but forgot it again while typing. I have added some details about it.
          – Tobias Kildetoft
          Aug 29 at 17:04


























        up vote
        17
        down vote













        There are cases where researchers restrict themselves from using certain theorems. Example:




        Atle Selberg, "An elementary proof of the prime-number theorem", Ann. of Math. (2) 50 (1949), 305–313.




        The author restricts himself to using only "elementary" (in a technical sense) methods.



        Other cases may be proofs in geometry using only straightedge and compasses. Gauss showed that the regular 257-gon may be constructed with straightedge and compasses. I would not consider that to be "a new proof of a known result".






        answered Aug 29 at 16:55 – GEdgar (edited Sep 3 at 12:56)
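
        As a side note on the straightedge-and-compass example (an illustration, not part of the answer above): the Gauss–Wantzel criterion says a regular n-gon is constructible exactly when n is a power of 2 times a product of distinct Fermat primes. A small Python sketch of that test, using the five known Fermat primes:

            # Only the five known Fermat primes; the test is therefore incomplete
            # for hypothetical larger Fermat primes, none of which are known.
            FERMAT_PRIMES = (3, 5, 17, 257, 65537)

            def constructible(n: int) -> bool:
                """Gauss-Wantzel test for constructibility of the regular n-gon."""
                if n < 3:
                    return False
                while n % 2 == 0:          # strip the power-of-2 factor
                    n //= 2
                for p in FERMAT_PRIMES:    # each Fermat prime may appear at most once
                    if n % p == 0:
                        n //= p
                return n == 1

            print(constructible(257))  # True: the 257-gon of Gauss's result
            print(constructible(7))    # False: the regular heptagon is not constructible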






















        • So same as jakebeal?
          – BCLC
          Aug 29 at 17:03






        • 1




          That case is different because the researchers are just showing a new proof of a known theorem, one that is simpler (or more elegant) than the known proofs. In math, there is a kind of consensus that simpler proofs are better (for many reasons; for instance, they are easier to check and usually depend on weaker results), so an elementary proof is an original research result even if it is a proof of the "same type" as the existing ones (e.g., a simpler algebraic proof when another algebraic proof is already known).
          – Hilder Vitor Lima Pereira
          Aug 30 at 13:21






        • 2




          @HilderVitorLimaPereira if I may nitpick a bit, the elementary proof of the prime number theorem is regarded by most people who have studied it as neither simpler nor more elegant than the analytic family of proofs. It is however more “elementary” (specifically, does not use complex or Fourier analysis), which is also a very important and interesting feature. Certainly its discovery was a major research result, so in that sense you make a good and valid point.
          – Dan Romik
          Aug 30 at 15:46










        • @DanRomik I see. Yes, when I said "weaker results" I was actually thinking about more elementary results, in the sense that they use theories that do not depend on a deep sequence of constructions and other theorems, or that are considered basic knowledge in the math community. Thank you for that comment.
          – Hilder Vitor Lima Pereira
          Aug 30 at 16:03










        • @HilderVitorLimaPereira maybe that thought could be called "weaker claims"?
          – elliot svensson
          Aug 31 at 14:20














        up vote
        17
        down vote













        There are cases where the researcher restricts himself not to use certain theorems. Example:




        Atle Selberg,"An elementary proof of the prime-number theorem". Ann. of Math. (2) 50 (1949), 305--313.




        The author restricts himself to use only "elementary" (in a technical sense) methods.



        Other cases may be proofs in geometry using only straightedge and compasses. Gauss showed that the regular 257-gon may be constructed with straightedge and compasses. I would not consider that to be "a new proof of a known result".






        share|improve this answer






















        • So same as jakebeal?
          – BCLC
          Aug 29 at 17:03






        • 1




          That case is different because the researchers are justing showing a new proof for a known theorem but that is simpler (or more elegant) than the known proofs. In math, there is a kind of consensus that simpler proofs are better (for many reasons, for instance, they are easier to be checked and usually depend on weaker results), so, an elementary proof is an original research result even if it is a proof of the "same type" as the existing ones (e.g, a simpler algebraic proof when other algebraic proof is already known).
          – Hilder Vitor Lima Pereira
          Aug 30 at 13:21






        • 2




          @HilderVitorLimaPereira if I may nitpick a bit, the elementary proof of the prime number theorem is regarded by most people who have studied it as neither simpler nor more elegant than the analytic family of proofs. It is however more “elementary” (specifically, does not use complex or Fourier analysis), which is also a very important and interesting feature. Certainly its discovery was a major research result, so in that sense you make a good and valid point.
          – Dan Romik
          Aug 30 at 15:46










        • @DanRomik I see. Yes, when I said "weaker results" I actually was think about more elementary results in the sense that they use theories that do not depend on deep sequence of constructions and other theorems or that are considered basic knowledge in the math comunity. Thank you for that comment.
          – Hilder Vitor Lima Pereira
          Aug 30 at 16:03










        • @HilderVitorLimaPereira maybe that thought could be called "weaker claims"?
          – elliot svensson
          Aug 31 at 14:20












        up vote
        17
        down vote










        up vote
        17
        down vote









        There are cases where the researcher restricts himself not to use certain theorems. Example:




        Atle Selberg,"An elementary proof of the prime-number theorem". Ann. of Math. (2) 50 (1949), 305--313.




        The author restricts himself to use only "elementary" (in a technical sense) methods.



        Other cases may be proofs in geometry using only straightedge and compasses. Gauss showed that the regular 257-gon may be constructed with straightedge and compasses. I would not consider that to be "a new proof of a known result".






        share|improve this answer














        There are cases where the researcher restricts himself not to use certain theorems. Example:




        Atle Selberg,"An elementary proof of the prime-number theorem". Ann. of Math. (2) 50 (1949), 305--313.




        The author restricts himself to use only "elementary" (in a technical sense) methods.



        Other cases may be proofs in geometry using only straightedge and compasses. Gauss showed that the regular 257-gon may be constructed with straightedge and compasses. I would not consider that to be "a new proof of a known result".







        share|improve this answer














        share|improve this answer



        share|improve this answer








        edited Sep 3 at 12:56

























        answered Aug 29 at 16:55









        GEdgar

        9,96662138




        9,96662138











        • So same as jakebeal?
          – BCLC
          Aug 29 at 17:03






        • 1




          That case is different because the researchers are justing showing a new proof for a known theorem but that is simpler (or more elegant) than the known proofs. In math, there is a kind of consensus that simpler proofs are better (for many reasons, for instance, they are easier to be checked and usually depend on weaker results), so, an elementary proof is an original research result even if it is a proof of the "same type" as the existing ones (e.g, a simpler algebraic proof when other algebraic proof is already known).
          – Hilder Vitor Lima Pereira
          Aug 30 at 13:21






        • 2




          @HilderVitorLimaPereira if I may nitpick a bit, the elementary proof of the prime number theorem is regarded by most people who have studied it as neither simpler nor more elegant than the analytic family of proofs. It is however more “elementary” (specifically, does not use complex or Fourier analysis), which is also a very important and interesting feature. Certainly its discovery was a major research result, so in that sense you make a good and valid point.
          – Dan Romik
          Aug 30 at 15:46










        • @DanRomik I see. Yes, when I said "weaker results" I actually was think about more elementary results in the sense that they use theories that do not depend on deep sequence of constructions and other theorems or that are considered basic knowledge in the math comunity. Thank you for that comment.
          – Hilder Vitor Lima Pereira
          Aug 30 at 16:03










        • @HilderVitorLimaPereira maybe that thought could be called "weaker claims"?
          – elliot svensson
          Aug 31 at 14:20
















        • So same as jakebeal?
          – BCLC
          Aug 29 at 17:03






        • 1




          That case is different because the researchers are justing showing a new proof for a known theorem but that is simpler (or more elegant) than the known proofs. In math, there is a kind of consensus that simpler proofs are better (for many reasons, for instance, they are easier to be checked and usually depend on weaker results), so, an elementary proof is an original research result even if it is a proof of the "same type" as the existing ones (e.g, a simpler algebraic proof when other algebraic proof is already known).
          – Hilder Vitor Lima Pereira
          Aug 30 at 13:21






        • 2




          @HilderVitorLimaPereira if I may nitpick a bit, the elementary proof of the prime number theorem is regarded by most people who have studied it as neither simpler nor more elegant than the analytic family of proofs. It is however more “elementary” (specifically, does not use complex or Fourier analysis), which is also a very important and interesting feature. Certainly its discovery was a major research result, so in that sense you make a good and valid point.
          – Dan Romik
          Aug 30 at 15:46










        • @DanRomik I see. Yes, when I said "weaker results" I actually was think about more elementary results in the sense that they use theories that do not depend on deep sequence of constructions and other theorems or that are considered basic knowledge in the math comunity. Thank you for that comment.
          – Hilder Vitor Lima Pereira
          Aug 30 at 16:03










        • @HilderVitorLimaPereira maybe that thought could be called "weaker claims"?
          – elliot svensson
          Aug 31 at 14:20















        So same as jakebeal?
        – BCLC
        Aug 29 at 17:03




        So same as jakebeal?
        – BCLC
        Aug 29 at 17:03




        1




        1




        That case is different because the researchers are justing showing a new proof for a known theorem but that is simpler (or more elegant) than the known proofs. In math, there is a kind of consensus that simpler proofs are better (for many reasons, for instance, they are easier to be checked and usually depend on weaker results), so, an elementary proof is an original research result even if it is a proof of the "same type" as the existing ones (e.g, a simpler algebraic proof when other algebraic proof is already known).
        – Hilder Vitor Lima Pereira
        Aug 30 at 13:21




        That case is different because the researchers are justing showing a new proof for a known theorem but that is simpler (or more elegant) than the known proofs. In math, there is a kind of consensus that simpler proofs are better (for many reasons, for instance, they are easier to be checked and usually depend on weaker results), so, an elementary proof is an original research result even if it is a proof of the "same type" as the existing ones (e.g, a simpler algebraic proof when other algebraic proof is already known).
        – Hilder Vitor Lima Pereira
        Aug 30 at 13:21




        2




        2




        @HilderVitorLimaPereira if I may nitpick a bit, the elementary proof of the prime number theorem is regarded by most people who have studied it as neither simpler nor more elegant than the analytic family of proofs. It is however more “elementary” (specifically, does not use complex or Fourier analysis), which is also a very important and interesting feature. Certainly its discovery was a major research result, so in that sense you make a good and valid point.
        – Dan Romik
        Aug 30 at 15:46




        @HilderVitorLimaPereira if I may nitpick a bit, the elementary proof of the prime number theorem is regarded by most people who have studied it as neither simpler nor more elegant than the analytic family of proofs. It is however more “elementary” (specifically, does not use complex or Fourier analysis), which is also a very important and interesting feature. Certainly its discovery was a major research result, so in that sense you make a good and valid point.
        – Dan Romik
        Aug 30 at 15:46












        @DanRomik I see. Yes, when I said "weaker results" I actually was think about more elementary results in the sense that they use theories that do not depend on deep sequence of constructions and other theorems or that are considered basic knowledge in the math comunity. Thank you for that comment.
        – Hilder Vitor Lima Pereira
        Aug 30 at 16:03
















        @HilderVitorLimaPereira maybe that thought could be called "weaker claims"?
        – elliot svensson
        Aug 31 at 14:20














        up vote
        10
        down vote













        It is perhaps worth noting that some results are in a sense inadmissible because they aren't actually theorems. Some conjectures/axioms are so central that they are widely used, even though they haven't yet been established. Proofs relying on these should make that clear in the hypotheses. However, it wouldn't be that hard to have a bad day and forget that something you use frequently hasn't actually been proved yet, or that it is needed for a later result you want to use.
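        As a concrete illustration of how such reliance is flagged in the hypotheses (my addition, not part of the original answer): analytic number theory papers routinely state conditional results with the unproved assumption named up front, as in

            % A conditional theorem: the unestablished conjecture is part of the statement.
            \textbf{Theorem (assuming the Riemann Hypothesis).}
            \quad \pi(x) = \operatorname{Li}(x) + O\!\left(\sqrt{x}\,\log x\right).

        (By von Koch's classical result, this error term is in fact equivalent to RH.) A reader can then see immediately which unproved statement the result depends on.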






        answered Aug 29 at 15:12
        Jessica B




















        • Perhaps Poincaré was a bad example, because it was a conjecture with a high bounty for quite some time, but let's pretend I used something that had been proven decades ago. Your answer is now...?
          – BCLC
          Aug 29 at 15:13






        • 1




          There is (unfortunately...) a whole spectrum between "unequivocal theorem" and "conjecture" in combinatorics and geometry, due to the rigorous methods lagging behind the sort of arguments researchers actually use.
          – darij grinberg
          Aug 29 at 15:14






        • 1




          @BCLC Actually, the Poincare Conjecture was widely 'used' before its proof. The resulting theorems include a hypothesis of 'no fake 3-balls'. But I also know of a paper proving a topological result using the generalised continuum hypothesis.
          – Jessica B
          Aug 29 at 15:17










        • @darijgrinberg I disagree with your assertion. If something is believed true, no matter with what level of confidence, but is not an “unequivocal” theorem (i.e., a “theorem”), then it is a conjecture, not “somewhere on the spectrum between unequivocal theorem and conjecture”. I challenge you to show me a pure math paper, published in a credible journal, that uses different terminology. I’m pretty sure I do understand what you’re getting at, but others likely won’t, and your use of an adjective like “unequivocal” next to “theorem” is likely to sow confusion and lead some people to think ...
          – Dan Romik
          Aug 29 at 20:42






        • 2




          @DanRomik: I guess I was ambiguous. Of course these things are stated as theorems in the papers they're published in. But when you start asking people about them, you start hearing eehms and uuhms. I don't think the problem is concentrated with certain authors -- rather it's specific to certain kinds of combinatorics, and the same people that write very clearly about (say) algebra become vague and murky when they need properties of RSK or Hillman-Grassl...
          – darij grinberg
          Aug 29 at 20:45
















        up vote
        8
        down vote













        In intuitionistic logic and constructive mathematics we try to prove things without the law of excluded middle, which excludes many of the normal tools used in math. And in logic in general we often try to prove things using only a defined set of axioms, which often means that we are not allowed to follow our 'normal' intuitions. Especially when proving something in multiple axiomatic systems of different strength, you can find that some tool only becomes available towards the end (in the more powerful systems), and is as such inadmissible in the weaker systems.
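        To make this concrete, here is the textbook excluded-middle argument that constructivists reject (my sketch, not part of the original answer):

            % Classical proof that there exist irrational a, b with a^b rational.
            % The case split on whether sqrt(2)^sqrt(2) is rational is an instance
            % of excluded middle: no explicit witness is produced, so the proof
            % is not valid intuitionistically.
            \[
              \text{Either } \sqrt{2}^{\sqrt{2}} \in \mathbb{Q}
              \ \text{(take } a = b = \sqrt{2}\text{), or }
              \bigl(\sqrt{2}^{\sqrt{2}}\bigr)^{\sqrt{2}} = \sqrt{2}^{\,2} = 2 \in \mathbb{Q}
              \ \text{(take } a = \sqrt{2}^{\sqrt{2}},\ b = \sqrt{2}\text{).}
            \]

        In a system without excluded middle this two-line argument is inadmissible, even though the conclusion itself is true (and provable constructively by other means).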






        answered Aug 29 at 20:26
        epa095
















        • 2




          That is a great thing to do, but not the same as having parts of math closed off from you by an advisor unless you are both working in that space. The axiom of choice is another example that explores proof in a reduced space. I once worked in systems with a small set of axioms in which more could be true, but less could be proved to be true. Fun.
          – Buffy
          Aug 29 at 20:43











        • In the same vein, working in reverse mathematics usually requires one's arguments to be provable from rather weak systems of axioms, which leads to all sorts of complications that would not be present using standard sets of assumptions.
          – Andrés E. Caicedo
          Aug 30 at 20:29




















        up vote
        6
        down vote













        To answer your main question: no, nothing is disallowed. Any advisor would (or at least should) allow any valid mathematics, especially in doctoral research. Of course, this presumes the result is actually established (Poincaré's conjecture is now settled); prior to an accepted proof you couldn't depend on it.



        In fact, you can even write a dissertation based on a hypothetical (If Prof Buffy's Large Theorem is true, then it follows that...). You can explore the consequences of things not proven. Sometimes it helps connect them to known results, leading to a proof of the "large theorem" and sometimes it helps to lead to a contradiction showing it false.




        However, I have an issue with the background you have given on what is appropriate in teaching and examining students. I question the wisdom of the first professor disallowing anything that the student knows. That seems shortsighted and turns the professor into a gate that allows only some things to trickle through.



        Of course, if the professor wants to test the student on a particular technique he can try to find questions that do so, but this also points up the basic stupidity of exams in general. There are other ways to assure that the student learns essential techniques.



        A university education isn't about competition with other students and the (horrors) problem of an unfair advantage; it is about learning. If the professor or the system grades students competitively, they are doing a poor job.



        If you have the 20 absolutely best students in the world and grade purely competitively, then half of them will be below average.






        edited Aug 29 at 19:31
        answered Aug 29 at 14:34
        Buffy


















        • 4




          I feel like you have misunderstood the question.
          – Jessica B
          Aug 29 at 15:05






        • 2




          @Buffy: The question wasn't actually about the class. The question was about whether "inadmissible" stuff exists at the graduate level.
          – cHao
          Aug 29 at 15:54






        • 11




          One reason to "disallow" results not yet studied is that it helps to avoid circular logic. A standard example: student is asked to show that lim_x -> 0 sin(x)/x = 1. Student applies L'Hôpital's rule, taking advantage of the fact that the derivative of sin(x) is cos(x). However, the usual way of proving that the derivative of sin(x) is cos(x) requires knowing the value of lim_x -> 0 sin(x)/x. If you "forbid" L'Hôpital's rule in solving the original problem, you prevent this issue from arising.
          – Nate Eldredge
          Aug 29 at 16:48






        • 5




          Well, you can have a standing course policy not to assume results not yet proved. This is sufficiently common that the instructor may have assumed it went without saying. Or, the downgrade may have actually been for circular logic, but the reasoning was explained poorly or misunderstood.
          – Nate Eldredge
          Aug 29 at 16:56






        • 4




          I think L'Hôpital's rule is uniquely pernicious and results in students failing to learn about limits and immediately forgetting everything about limits, in a way that has essentially no good parallels elsewhere in the elementary math curriculum. So I don't think you can substitute in something else and make it the same question. Someone who uses L'Hôpital to, say, compute $\lim_{x \rightarrow 0} \frac{x^2}{x}$ isn't showing a more advanced understanding of the material, they're showing they don't understand the material!
          – Noah Snyder
          Aug 30 at 12:52
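        Spelling out the circularity described in Nate Eldredge's comment above (my sketch, not part of the thread):

            % Step 1: L'Hôpital applied to the sine limit:
            \[
              \lim_{x \to 0} \frac{\sin x}{x}
              \;\stackrel{\text{L'H}}{=}\;
              \lim_{x \to 0} \frac{\cos x}{1} = 1.
            \]
            % Step 2: but L'Hôpital needs (sin)' = cos, whose standard proof evaluates
            \[
              (\sin)'(0) = \lim_{h \to 0} \frac{\sin h - \sin 0}{h}
                         = \lim_{h \to 0} \frac{\sin h}{h},
            \]
            % which is exactly the limit we set out to compute.

        This is why a course may declare the rule inadmissible until the limit has been established independently, e.g. via the squeeze theorem.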






















        up vote
        4
        down vote













        I don't think there are inadmissible theorems in research, although obviously one has to take care not to rely on assumptions that have yet to be proven for the particular problem.

        However, in terms of PhD or postdoc work, I feel that some approaches may be rather "off-topic" for not-really-academic reasons. For example, if you secure PhD funding to study topic X, you should not normally use it to study Y. Similarly, if you secure a postdoc in a team which develops method A, and you want to study your competitor's method B, your PI may want to keep the time you spend on B limited, so that it does not exceed the time you spend developing A. Some PIs are notorious in the sense that they won't tolerate you even touching some method C, for reasons important to them; so even though you have full academic freedom to go and explore method C if you like it, it may be "inadmissible" to do so within your current work arrangements.






        answered Aug 29 at 15:07
        Dmitry Savostyanov




















        • Thanks Dmitry Savostyanov! This sounds like something I had in mind. But is this for applied research, or also for theoretical research?
          – BCLC
          Aug 29 at 15:10







        • 1




          Even in pure maths, people can be very protective sometimes. And people in applied maths can be very open-minded. It's more about personal approaches to science, perhaps.
          – Dmitry Savostyanov
          Aug 29 at 15:11




















        up vote
        2
        down vote













        I'm going to give a related point of view from outside of academia, namely a commercial/government research organisation.



        I have come across researchers and managers who are hindered by what I call an exam mentality, whereby they assume that a research question can only be answered with the data set provided, and cannot make reference to other data, results, studies, etc.

        I've found this exam mentality to be extremely limiting; it comes about because the researcher or manager has a misconception about research, indoctrinated by their (mostly exam-based) education.

        The fact of the matter is that ruling out data/techniques/studies on arbitrary grounds stifles research. It leads to missed opportunities for commercial organisations to make profit, missed consequences when governments introduce new policy, missed side-effects of new drugs, etc.






        answered Aug 30 at 9:48
        Bad_Bishop


























                up vote
                2
                down vote













                I will add a small example from Theoretical Computer Science and algorithm design.




                It is a very important open problem to find a combinatorial (or even LP-based) algorithm that achieves the Goemans-Williamson bound (0.878) for approximating the MaxCut problem in polynomial time.

                We know that, using semidefinite programming techniques, an approximation factor of $\alpha \approx 0.878$ can be achieved in polynomial time. But can we achieve this bound using other techniques? Slightly less ambitious, but probably equally important: can we find a combinatorial algorithm with an approximation guarantee strictly better than 1/2?

                Luca Trevisan has made important progress in that direction using spectral techniques.
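                To make the Goemans-Williamson result more tangible, below is a minimal sketch of its random-hyperplane rounding step, assuming the semidefinite program has already been solved (the toy vectors stand in for real solver output; this is my illustration, not code from the answer):

                    # Rounding step of Goemans-Williamson MaxCut (sketch).
                    # Assumes the SDP relaxation is already solved, giving one
                    # unit vector per vertex; here random unit vectors stand in
                    # for solver output on a toy 4-cycle.
                    import numpy as np

                    rng = np.random.default_rng(0)

                    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # 4-cycle, max cut = 4
                    n = 4

                    # Stand-in for SDP output: random unit vectors in R^n.
                    V = rng.normal(size=(n, n))
                    V /= np.linalg.norm(V, axis=1, keepdims=True)

                    # Random hyperplane through the origin: vertex i goes to the
                    # side given by the sign of <v_i, r>.
                    r = rng.normal(size=n)
                    side = V @ r >= 0

                    cut = sum(1 for i, j in edges if side[i] != side[j])
                    print("cut weight:", cut)

                A full implementation would obtain the vectors from an SDP solver (e.g. via cvxpy) maximizing the relaxed objective $\sum_{(i,j) \in E} (1 - v_i \cdot v_j)/2$; the random hyperplane then cuts each edge with probability $\theta_{ij}/\pi$, which is where the $\alpha \approx 0.878$ guarantee comes from.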






                answered Aug 30 at 15:26
                PsySp
























                  up vote
                  2
                  down vote













                  I will add a small example from Theoretical Computer Science and algorithm design.




                  It is a very important open problem to find a combinatorial (or even LP based) algorithm that achieves the Goemans-Williamson bound (0.878) for approximating the MaxCut problem in polynomial time.




                  We know that using Semidefinite Programming techniques, a bound on the approximation factor of alpha = 0.878 can be achieved in poly time. But can we achieve this bound using other techniques? Slightly less ambitiously but probably equally important: Can we find a combinatorial algorithm with approximation guarantee strictle better than 1/2?



                  Luca Trevisan had made important progress towards that direction using spectral techniques.






                  share|improve this answer
































                    answered Aug 30 at 15:26









                    PsySp

                    5,08742043
























                        up vote
                        0
                        down vote













In research you would use the most applicable method (that you know) to demonstrate a solution, and you may also find yourself in situations where you are asked about, or offered, alternative approaches to your solution (and then you learn a new method).



In the example where L'Hôpital's rule was "not permitted", the question could probably have been worded better: it sounds like a plain "solve this" question, written on the assumption that students know only the methods taught so far in the course, and therefore that only those methods will be used in the exam.






                        share|improve this answer




















• There was no ambiguity in the question. L'Hôpital's rule wasn't introduced to us until our third or fourth exam. My engineering friend was taking a make-up for either our second exam or our midterm or both (I forget). It would've been like using the sequence definition of continuity in the first exam of an elementary analysis class, if such a class teaches sequences last (like mine did).
                          – BCLC
                          Aug 29 at 14:46










• I understand that, but when it was introduced has no bearing on whether students may already know how to use it. It would be the same as asking, "Show that the first derivative of x^2 is 2x," and then telling students who solved it using implicit differentiation that that is not allowed and they should have used explicit differentiation.
                          – Mick
                          Aug 29 at 14:51











• Mick, but it was a make-up exam. Wouldn't it be unfair to students who took the exam on time, since we didn't know L'Hôpital's rule at the time?
                          – BCLC
                          Aug 29 at 14:56







                        • 2




                          It's not about being fair. It's about math building on itself. Often you're expected to solve things a certain way in order to ensure you understand what the later stuff allows you to simplify or ignore. If there was an intended method, it should have been in the instructions. But it's a common assumption that if you haven't been taught it, you don't know it yet.
                          – cHao
                          Aug 29 at 15:51











                        • Without denying the other suggestions on why it might be disallowed, fairness to other students is irrelevant. The purpose of an exam is to assess or verify what you have learned, not to decide who wins a competition.
                          – WGroleau
                          Aug 30 at 12:15
























                        answered Aug 29 at 14:38









                        Mick

                        1,972822














                        up vote
                        0
                        down vote













Well, in pure maths research I am sure brute-force approximations by computer are disallowed, except as a way to motivate interest in the topic, to narrow the area to be explored, or perhaps even to suggest an approach to a solution.



Maths research requires equations that describe an exact answer, and a proof that the answer is correct by derivation from established facts and theorems. Computer approximations may use ever smaller intervals to narrow the range of an answer, but they never actually reach an infinitesimally small limit in the style of L'Hôpital.



The separate area of computerised derivation basically just automates what is already known. I am sure many places leave researchers free to use such software to speed up the documentation of their work, as far as such software goes. Plenty of human guidance is still needed to formulate the problem, introduce postulates, and choose which available solution steps to try. But the key thing is that any such software derivation would have to be verified by hand before any outside review, both for software error and to check that the techniques stay within allowed boundaries (the "if" portion of theorems, etc.).



And after such hand checks, how many mathematical researchers would credit computer software for assistance?



Well, I saw applied mathematicians cite software as a quick way for colleagues to check the reasonableness of their work as far back as the 1980s. Since applied mathematics sometimes takes an almost engineering view of practical results, I suppose they still give computer approximations as a quick demonstration AFTER the formal derivations. And I hear that applied mathematics sometimes solves the nearest tractable approximation when a solution to the exact problem still evades them, so again there is more room for assistance from software derivation. I am not sure that such operations-research-style topics fit everyone's definition of mathematical research, though.
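

As a toy illustration of the "ever smaller intervals" point above (my sketch, in Python; the answer names no particular software): bisection can pin down sqrt(2), a root of x^2 - 2, to an arbitrarily small interval without ever producing the exact irrational value, which is why such output can suggest a result but cannot prove it.

def bisect(f, lo, hi, steps=60):
    # Repeatedly halve [lo, hi], keeping the sign change (hence a root) inside.
    for _ in range(steps):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return lo, hi

print(bisect(lambda x: x * x - 2, 1.0, 2.0))
# -> a tiny interval around 1.41421356..., still only an approximation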






                        share|improve this answer




















                        • Please try to avoid leaving two separate answers; you should edit your first one
                          – Yemon Choi
                          Sep 2 at 3:36










                        • I find this answer slightly misses the point of the original question, since it seems more about the use of computers than anything else, and doesn't really address the OP's question about whether there are situations when one should not make use of certain theorems while doing research
                          – Yemon Choi
                          Sep 2 at 3:37
























                        answered Sep 1 at 17:46









                        Observation

                        91














                        up vote
                        0
                        down vote













In shorter terms: yes, computer approximation techniques are often used in a shotgun manner to look for areas of potential convergence on solutions, as in "give me a hint", especially in applied maths topics where real-world boundaries can be described.



Again, there is the question of whether real-world problems other than fundamental physics count as true maths research, or as the much looser applied maths or even operations research.



But in the actual derivation of new theorems from underlying, proven theorems, computers are limited more to documentation tools, similar to word processors for prose. Still, they are becoming more and more important for speeding up the routine equation-checking of documented work, much as word processors check spelling and grammar for prose, and there remain many areas where a human must override or redirect.
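

To make the "word processor for equations" analogy concrete, here is a small sketch using the SymPy computer algebra library (my choice of tool; the answer names none): it can re-check routine steps of a documented derivation, while choosing what to derive, and verifying the output, stays with the human.

import sympy as sp

x = sp.symbols('x')

# Re-check a routine documented step: d/dx x^2 = 2x.
assert sp.diff(x**2, x) == 2*x

# Re-check a classic limit (the kind L'Hôpital's rule handles by hand).
assert sp.limit(sp.sin(x)/x, x, 0) == 1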






                        share|improve this answer




















                        • I find this answer slightly misses the point of the original question, since it seems more about the use of computers than anything else
                          – Yemon Choi
                          Sep 2 at 3:35










                        • Also, don't create two new user identities. Register one which can be used consistently
                          – Yemon Choi
                          Sep 2 at 3:37
























                        answered Sep 1 at 17:59









                        Observation

                        91














                        up vote
                        0
                        down vote













The axiom of choice (and its corollaries) is pretty well accepted these days in the mathematical community, but you might occasionally run across a few old-school mathematicians who think that it is "wrong", and therefore that any corollary you use the axiom of choice to prove is also "wrong". (Of course, what it even means for the axiom of choice to be "wrong" is a largely philosophical question.)
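

For reference (my addition, not the answerer's), one standard formulation of the axiom of choice, for a family X of non-empty sets:

$$\forall X \,\Big[\, \varnothing \notin X \;\implies\; \exists f\colon X \to \textstyle\bigcup X \ \ \forall A \in X \; f(A) \in A \,\Big]$$

That is, there is a "choice function" f picking one element f(A) out of every set A in the family.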






                        share|improve this answer


































                            answered Sep 4 at 3:02









                            tparker

                            210210































                                 
