More than 120 bogus scientific articles were published in peer-reviewed venues between 2008 and 2013, according to computer scientist Cyril Labbé, confirming suspicions that sometimes, papers that read like gibberish actually are gibberish. Again.
Image: the abstract of a fictional article written by me, generated by SCIgen.
In 2005, MIT students developed SCIgen, a (super fun to use) program that throws random, jargon-laden sentences together to produce documents that look like computer-science papers. The program was designed to "maximize amusement, rather than coherence." According to its creators, it can also be used to test the paper-acceptance standards of science conferences. They wrote in 2005 that they had, in fact, submitted a fake paper titled "Rooter: A Methodology for the Typical Unification of Access Points and Redundancy" to the World Multiconference on Systemics, Cybernetics and Informatics (WMSCI) for just this reason, and that it was accepted (at least at first).
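To get a feel for how that kind of nonsense gets manufactured: SCIgen works by randomly expanding a hand-written grammar of computer-science phrases. The toy sketch below shows the general idea in Python; the grammar rules and vocabulary here are invented for illustration and are far simpler than SCIgen's own.

```python
import random

# A toy context-free grammar in the spirit of SCIgen (rules and words
# invented for illustration, not taken from SCIgen itself).
GRAMMAR = {
    "SENTENCE": [["We", "VERB", "that", "NOUN_PHRASE", "can", "VERB", "NOUN_PHRASE", "."]],
    "NOUN_PHRASE": [["ADJ", "NOUN"], ["the", "NOUN"]],
    "VERB": [["demonstrate"], ["argue"], ["synthesize"], ["refute"]],
    "ADJ": [["stochastic"], ["metamorphic"], ["linear-time"], ["Bayesian"]],
    "NOUN": [["superblocks"], ["hash tables"], ["e-commerce"], ["red-black trees"]],
}

def expand(symbol):
    """Recursively expand a grammar symbol into a list of words."""
    if symbol not in GRAMMAR:          # terminal word: return as-is
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    words = []
    for part in production:
        words.extend(expand(part))
    return words

if __name__ == "__main__":
    # e.g. "We argue that stochastic superblocks can refute the hash tables."
    print(" ".join(expand("SENTENCE")).replace(" .", "."))
```

String enough of these sentences together, add citations to other generated papers, and the result reads plausibly enough to fool a skimming reviewer.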

Back in 2010, Labbé himself used SCIgen to generate fake articles under the name of fictitious scientist Ike Antkare who, thanks to Labbé's manipulation of Google Scholar, became the 21st most-cited author on the site. That's a big deal, considering Albert Einstein is 36th on the list.

Now, according to Labbé's findings, others have also used SCIgen to successfully fool peer-reviewed publishers. To tell genuine scientific papers from fakes, Labbé wrote software that determines whether a document was created by SCIgen. "The papers are quite easy to spot," he told Nature last week, explaining that the detection program searches for words commonly used by the generator. Labbé spent the past two years cataloguing the fraudulent papers before reporting them to Springer, an academic publishing company, and to the IEEE, both of which have since pulled them. He says it's possible the authors didn't realize their names were attached to the papers. He also said that the conferences that accepted the papers were based in China, and that most of the fake papers' authors have Chinese affiliations.
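Nature's report doesn't describe the internals of Labbé's detector, but the basic idea he outlines, flagging documents that lean heavily on the generator's stock vocabulary, can be sketched very simply. The phrase list and threshold below are invented for illustration and are not Labbé's actual tool.

```python
import re

# Phrases that (hypothetically) appear unusually often in SCIgen output;
# Labbé's real detector relies on its own curated vocabulary.
SUSPECT_PHRASES = [
    "flip-flop gates", "randomized algorithms", "the memory bus",
    "suffix trees", "write-back caches", "lambda calculus",
]

def suspicion_score(text, phrases=SUSPECT_PHRASES):
    """Return suspect-phrase hits per 1,000 words of the document."""
    lowered = text.lower()
    words = re.findall(r"[a-z']+", lowered)
    if not words:
        return 0.0
    hits = sum(lowered.count(p) for p in phrases)
    return 1000.0 * hits / len(words)

def looks_generated(text, threshold=5.0):
    """Flag a paper whose score exceeds an (arbitrary) threshold."""
    return suspicion_score(text) > threshold
```

In practice the threshold would have to be tuned against a corpus of known-genuine papers; the point is only that a vocabulary-frequency test of this kind is cheap to run across an entire publisher's archive.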

It's still not clear how the papers were accepted in the first place. IEEE Corporate Communications Director Monika Stickel commented to Nature that the group "took immediate action to remove the papers" and "refined our processes to prevent papers not meeting our standards from being published in the future," but she did not say whether they had been peer-reviewed. If the articles did go through the publisher's peer review process, IEEE would probably have to revise the way it describes the quality-control measure:
The peer review process is designed to enhance the quality of every article submitted to and published by IEEE. The peer review process benefits the author, editor, and the reader. Journal, magazine, and conference papers undergo a thorough peer review process before acceptance for publication or presentation, ensuring only the highest quality information is published or presented at conferences. All IEEE books are peer-reviewed prior to publication as well.... The reader or conference attendee is assured the research published is strong and credible.
Springer's UK head of communications confirmed to Nature that the false articles the group had published went through its peer-review system.

According to Labbé, the sloppy system can be traced back to the intense pressure on scientists to publish, which "leads directly to too prolific and less meaningful publications." This is not the first time the peer-review system has come under fire. The Guardian's David Colquhoun wrote back in 2011 that the process "doesn't work very well any more, mainly as a result of the enormous number of papers that are being published... there simply aren't enough competent people to do the job." And late last year, Nobel prize winner Randy Schekman announced that he would boycott top scientific journals (including Nature) because they encourage scientists to pursue sexy scientific topics instead of engaging in more serious (if boring) subjects.

For our part, we think it shouldn't be that hard to differentiate fake paper titles from real ones. For example, some of the fake studies found are titled "On the Emulation of Expert Systems Based on Development of Agents" and "Effective of Semantic Communication on Machine Learning Based on Collaborative Theory." Real ones are titled "PRIDE: An Expert System for the Design of Paper Handling Systems" and "R2D: A Bridge between the Semantic Web and Relational Visualization Tools." So we understand the confusion.