Use of AI Is Seeping Into Academic Journals—and It’s Proving Difficult to Detect

Experts say there’s a balance to strike in the academic world when using generative AI—it could make the writing process more efficient and help researchers more clearly convey their findings. But the tech—when used in many kinds of writing—has also dropped fake references into its responses, made things up, and reiterated sexist and racist content from the internet, all of which would be problematic if included in published scientific writing.

If researchers use these generated responses in their work without strict vetting or disclosure, they raise serious credibility issues. Not disclosing AI use would mean authors are passing off generative AI content as their own, which could be considered plagiarism. They could also be spreading AI hallucinations: fabricated claims stated confidently as fact.

It’s a big issue, David Resnik, a bioethicist at the National Institute of Environmental Health Sciences, says of AI use in scientific and academic work. Still, he says, generative AI is not all bad—it could help researchers whose native language is not English write better papers. “AI could help these authors improve the quality of their writing and their chances of having their papers accepted,” Resnik says. But those who use AI should disclose it, he adds.

For now, it’s impossible to know how extensively AI is being used in academic publishing, because there’s no foolproof way to check for AI use, as there is for plagiarism. The Resources Policy paper caught a researcher’s attention because the authors seem to have accidentally left behind a clue to a large language model’s possible involvement. “Those are really the tips of the iceberg sticking out,” says Elisabeth Bik, a science integrity consultant who runs the blog Science Integrity Digest. “I think this is a sign that it’s happening on a very large scale.”

In 2021, Guillaume Cabanac, a professor of computer science at the University of Toulouse in France, found odd phrases in academic articles, like “counterfeit consciousness” instead of “artificial intelligence.” He and a team coined the term “tortured phrases” for this kind of word soup in place of straightforward terms, treating it as an indicator that a document likely came from a text generator. He is also on the lookout for generative AI in journals, and he is the one who flagged the Resources Policy study on X.

Cabanac investigates studies that may be problematic, and he has been flagging potentially undisclosed AI use. To protect scientific integrity as the tech develops, scientists must educate themselves, he says. “We, as scientists, must act by training ourselves, by knowing about the frauds,” Cabanac says. “It’s a whack-a-mole game. There are new ways to deceive.”

Advances since then have made these language models even more convincing, and more appealing as writing partners. In July, two researchers used ChatGPT to write an entire research paper in an hour to test the chatbot’s ability to compete in the scientific publishing world. The result wasn’t perfect, but with prompting the chatbot did pull together a paper with solid analysis.
