"Especially if you are already well-established. Publish less, but publish better research. Put time and effort into transparency. Share everything you can share, as openly as you can share it. Use your privileged position to do research in the way you think it ought to be done, even if that’s not the quickest way to achieve academic success. (...) Be aware of the implicit signal you might be giving those you supervise when you say things like ‘you need to get a result’ or ‘we need to make this publishable’."
While I agree in the abstract, the problem is that when you're well-established, in most areas, your research basically amounts to supervising PhD students and postdocs who are not well-established. And they're struggling to meet the requirements to finish their thesis, get a permanent position, etc. So if you encourage them to do slow science and publish less, there's a high risk that you're basically letting them down. Plus, to do research you're probably using some grant funding and guess what the funding agency expects...
Thus, most people never get to a point in their career where they can safely say "let's ignore incentives and just pursue this project slowly and carefully". There might be some exceptions. Probably in math, where research is often individual. And maybe in other areas if you can have a smallish side project with other professors that doesn't require much specific funding, or if you have a student who is finishing and has already secured a position in industry so their stakes aren't high. I've been in those situations sometimes, but it's the exception rather than the rule. The truth is that even senior professors seldom have the luxury of not being heavily pressured by incentives.
I think this is exactly the hard part: individual virtue alone does not solve a system where supervisors, trainees, and funders are all pulled by the same incentives. "Do slower, better science" is not actionable unless the surrounding infrastructure and rewards change too. That is a big part of what we're thinking about with Liberata, especially around peer review and attribution. If relevant, our beta waitlist is open: https://liberata.info/beta-signup
Once they're established, they can decide how many PhD students to take on. And a lot of foreign students who come on J-1 visas and are sponsored by their governments are not under that pressure. A lot of them will get a position in their home country with a lot less publishing pressure than in the US.
The professor can always set his terms, and it's up to the student whether to take him as an advisor. In both universities I attended, there were professors who were very fussy about how much research they did and how much money they brought in (which could be zero), and if a student wanted them as an advisor, they needed to understand the risks involved.
"Many of us are doing our stinking best to maintain scientific integrity and produce rigorous research. But we do that mostly despite the incentives created by the academic system, not because of them"
This is the line that matters most.
Under the current system, academia rewards polished narratives, novelty, and publication volume more than transparency, rigor, and verification. Fraud may be the extreme case, but the deeper issue is a system that too rarely rewards truth-seeking in its messiest form.
We need better research infrastructure and incentive systems that make openness, critique, and quality easier to pursue.
There's a project out of Duke University called Liberata that's aiming to address the peer review incentive + attribution issues specifically.
Liberata also has an open waitlist for beta users this summer: https://liberata.info/beta-signup
At the same time, knowing someone who committed academic fraud during his PhD and was caught, I can say two things:
A lot of people do it when they simply don't need to. They're not trying to "survive in academia". They're trying to get to the top. The person in question was smart, bright, and did good research (at least excluding the stuff he made up). He could have gotten an academic position without committing fraud. And he could have had a great industry job without it too.
No matter - he simply switched to another top-tier university, got his PhD, and is now running a startup. Which brings me to the second point: the repercussions are minor even when you do get caught.
This is what makes the problem feel so systemic: weak consequences after the fact, and weak incentives for transparency before the fact. If the system mostly rewards output and prestige, then misconduct can remain a high-upside bet. We should be building research infrastructure that makes review trails, contribution, and verification more visible much earlier. That is part of what Liberata is aiming at, if of interest: https://liberata.info/beta-signup
Was it made public?

No - it was kept within the team and he was "fired" from the research group. Word got around and all the professors in the department (in the same field) knew (as did their students), so he couldn't just find another professor.
So he switched universities.
But still, didn't he worry that he'd bump into his former professor at a conference and that he would tell his new advisor? I don't know if he made some deal with him ...
That same professor will happily take money from the student's startup to conduct research assuming it is successful and has funds to spare. That should tell you right there how the incentives are aligned.
Academia is no different from any other profession or sport. Holding it to a higher bar than say, medicine, engineering, law or accounting, doesn't make sense.
As an example, let's take soccer: all players will make a dangerous tackle if they think they can get away with it. Even Messi, Ronaldo, and Mbappe do it. Those who are caught receive a red card and are sent off the field. Do red cards stop dangerous tackles? No. Players just try hard not to get caught.
Here's an important aspect to understand: successful professors don't read papers in full. They're too busy for that. They only take a look at the title, abstract and introduction — and perhaps they will glance at the figures. This is why telling a compelling story is so important.
That's not true at all. If anything, they will read the figures and skip the introduction.
If it is your field, you don't need an intro, and don't want to hear whatever yarn they are spinning in the abstract/discussion. You jump straight to the figures / table to review the data yourself.
I'm not in academia, so I might be fully ignorant about how things operate, but if professors don't read the actual paper, how do they know if it's BS or not?
Here's how it works in our group. The professor gives papers to the PhD students or PostDocs, who read the paper completely. I regularly 'sub-review', as it is called, meticulously looking for issues. I have heard that there are professors who review entire papers in 2-3 hours, since they have a lot (10+) of papers per conference to review without any compensation while they have their own research, teaching, and funding to juggle.
It's not a pretty system sometimes.
Edited to add: Conferences also require declaring that there was someone who sub-reviewed the paper. The professor / PI mentions the PhD student's name in the review form of the paper. Of course, the professor also double-checks all the sub-reviews.
This feels like a core failure mode: papers are optimized for skim-level persuasion because the system is too overloaded for deep evaluation at scale. Then a lot of the actual scrutiny gets pushed onto under-credited sub-review labour. Peer review is too important to stay this invisible and under-incentivized. Liberata is exploring exactly that problem, and our beta waitlist is open if you want to follow along: https://liberata.info/beta-signup
One thing I noticed on the CS PhD side of the house is because many researchers don't want others to easily build upon their work (for whatever reasons), they don't often release the source code/data required to quickly validate it. This is a recipe for shortcuts, errors, and even in the worst cases, fraud.
Strongly agree. When code, data, and workflow details stay hidden, the system rewards claims more than verification. That is where shortcuts, irreproducibility, and worse can thrive. We need infrastructure that gives more credit to transparency and reusability, not less. That is part of what we’re building at Liberata if you’re curious: https://liberata.info/beta-signup
When I was in grad school, this was the norm across the board (engineering/physics). No one wanted to reveal their secret sauce.
Things have changed since, but in my time, if a journal required source code for publication, most of the professors in my department would not have published there.
I understand this is a cheeky section heading and the author is not really making this point, but this may be one of the dumbest popular phrases out there. You're effectively saying "Don't get upset at me for being an awful person, I probably wouldn't have succeeded if I'd been a good person." "The game," of course, is made up of players and if no one played that way there would be no game.
Ok but if you are the first person to decide to be "good" in a rotten game, you aren't going to be held up as some example of virtue. You are just going to lose the game.
Of course the thing that makes the game rotten is incentives. The academic profession as a whole has decided to incentivize and reward this behavior.
But if winning the game requires you to do shitty science and defraud the public, why play it at all? There's no desperation justification here, because anyone who can succeed in academia almost certainly has the brains and credentials to get a decent non-academic job.
Because, for one thing, some people are shitty frauds, and they're not bothered by it. Those people see messed-up incentives as an opportunity.
Do serious workers tend to get out of the field, if the incentives are wrongheaded enough? Sure. Some. Does that fix the incentives or the outcomes within that field? No, not at all.
> anyone who can succeed in academia almost certainly has the brains and credentials to get a decent non-academic job.
I suspect the way this usually gets started is similar to embezzlement schemes. “Oh I’ll just borrow a few dollars from the till and pay it back tomorrow” is akin to “The manuscript is due tonight so I’ll just touch up this microphotograph to look like the other one that had bad focus.”
That escalates into forging invoices on the one hand and completely fabricated data on the other. By that point they’re in too deep to stop until they get caught.
Because it's not a requirement, and most people are not intentionally or accidentally defrauding the government.
The issue is that there is no incentive to do the additional work necessary to generate reproducible results because of the pressure to constantly generate sufficiently novel results to publish.
If you spend the additional time required to have fully reproducible results and your competition is not, you're probably going to lose the game (where the game is obtaining more funding).
Not generating reproducible results doesn't mean you're a fraud, but the absence of a requirement to generate them in order to publish means that it's easier for fraudsters to operate than it would be with that requirement.
It's definitely important to change the game, because there will (sadly) always be a supply of unscrupulous people if dishonesty is rewarded. But I do think the incentive-focused approach sometimes undermines itself. One of the ways to disincentivize dishonesty is to have strong social sanctions against dishonest people, so it's (arguably) pretty stupid to weaken this with a "don't hate the player" attitude. And we tend to work harder to prevent and punish offenses that stir our emotions, so if everyone is blasé about academic dishonesty then we'll probably continue to see lax enforcement and weak penalties.
I think this captures the right tension: bad incentives matter, but they do not remove personal responsibility. We probably need both stronger accountability for clear misconduct and better systems that make rigor, transparency, and verification easier to pursue in the first place. The second piece gets much less attention than it should. That is a big part of what we’re trying to tackle at Liberata: https://liberata.info/beta-signup
That's not obviously true at all.
You don't have to hate someone in order to, er, apply incentives against whatever it is they just did.

Look at you. Posting on the internet wasting resources. Probably from a house large enough to house 10x more people in barracks configuration. Eating food from the clearcut forest. Buying tech mined out of pristine wilderness. While people go hungry in your city and sleep unsheltered.
But I don’t hate you for this. None of these terrible moves you make are your fault. Just a reality of the world we live in. Hate the game, not the player.
You're right about the phrase; it's basically an assertion that "we're all cheating scum, so I have no choice but to be cheating scum myself", which is hugely corrosive. And in this case it's the funding system more broadly that's imposing these non-goals from above and incentivizing bad science.
But why are they imposing these structures? To try to weed out the cheating scum. Once you start walking down that path, you're signing up for a distortion of value.
As I said to the parent poster, that's not what it means at all. It means that you should look at the system's incentives, not the behavior of individuals as the root cause of any issues.
You don't need to be a "cheating scum" to succeed, but there are not enough checks in place to prevent that from being a successful strategy for someone who wouldn't succeed otherwise.
The people who need to change the most are the nameless "they" who issue funding because they have the most control over these systems, along with the publishing cartel which has almost no redeeming value in today's environment.
Nobody says the phrase when they are calling people to look at a system's incentives. They use the phrase as a response to personal criticism excusing and rationalizing their own bad behavior.
It is a deflection of personal responsibility, full stop.
Agreed. Accountability matters, but changing the game usually scales better than hoping for better individual behaviour under the same pressures. Academia needs systems that reward transparency, verification, and contribution more directly. That is part of what we are building with Liberata if of interest: https://liberata.info/beta-signup
> Don't get upset at me for being an awful person, I probably wouldn't have succeeded if I'd been a good person
That's not what that phrase means in general, and it's normally not used to describe one's own behavior (when it is, I would say your definition is closer to correct because it's being used as an excuse for antisocial behavior).
The point is that the system's incentives are at a minimum misaligned with what would be considered "good" behavior and in the worst case actively encourage undesirable behavior.
It is often the case that people have no meaningful alternative to participating in these systems and have no control over the rules, and the behavior they induce is generally not bad enough to be seen as "awful", let alone bad enough to call the person themselves "awful".
Lots of words that boil down to a 2,500-year-old saying, 天下之所惡唯孤寡不穀而王公以自名也, which in English translates as something like: society's only problems are performative victimhood, colonization of the moral virtue of the vulnerable and oppressed, and mandatory penance rituals, especially when presidents and professors make it their job.
It's a lightly edited stream of consciousness commentary that appears to have been written by a non-native English speaker, potentially translated from Dutch into English after the initial writing.
I wouldn't say it's pleasant to read, but I didn't have any issue understanding it.