Publications, Lectures and Other Stuff
Just read ‘How Gobbledygook Ended Up in Respected Scientific Journals’ by Konstantin Kakaes (Slate, 27 February 2014).
This is the latest in a long line of criticisms of the present obsession with journal citations as a means of assessing academic excellence. The article notes that in 2005 a group of MIT graduate students created a computer program that generated meaningless academic articles, which they and others then succeeded in publishing in reputable academic journals; one of the invented authors briefly became one of the most highly cited scientists in the world.
Far from being an anomaly, Kakaes sees this evident subversion of genuine scholarly publication as symptomatic of a deep malaise in scientific publishing (and, I would add, in academic publishing as a whole). This dysfunction has arisen for two reasons: (i) in recent decades academic publishing has become a way of making a great deal of money, and (ii) administrators wrongly came to believe that journal publications were an objective measure of academic excellence and made academic advancement dependent on such publications. This approach to measuring excellence is deeply flawed. As the Nobel Laureate Jens Skou argued, the present system pressured scientists to publish as many papers as they could. Inevitably, the resulting work was more superficial than work produced under the old system, in which scientists published only after prolonged research and thought.
By implication, deep and prolonged thought is no longer valued, and those who painstakingly address major questions risk unemployment or lack of advancement. As I have noted in another post, Peter Higgs, the Nobel Prize winner in Physics, opined that he would not have been able to get an academic job in the present market: aside from the work that won him the Nobel, he had published very little and would therefore have been deemed unproductive.
There was also an unacknowledged problem with the ‘impact factor’ assigned to journals. The constant pressure to publish ensured that scientists ‘gamed the system’ to seek the highest impact factor for their publications. To counter this, there were growing demands to assess research on its own merits rather than via a citation index. Only a concerted effort throughout academe to challenge this system would end the problems it has generated.
The full article is here: