
Meet ChatGPT, the Published Scholar

Oxylabs Research

Last updated on

2026-04-07

7 min read

Oxylabs Research investigates the AI's unexpected academic career - from prestigious journals to shadow contributions reshaping scholarly publishing

In just three and a half years, ChatGPT has co-authored over 40 academic texts in at least 6 languages, spanning fields from medicine to philosophy. A Google Scholar search suggests even more - over 400 articles supposedly authored by the AI.

But the real story is more nuanced than these numbers suggest. ChatGPT has accumulated 1,952 citations and achieved an m-index of 2, a metric associated with outstanding scientists. Its work appears in publications from prestigious publishers such as Elsevier, Springer, and SAGE, even though these same publishers explicitly state that AI cannot be credited as an author.

What started as our team's curiosity about AI's role in academic publishing turned into something more intriguing. Using Oxylabs AI Studio to extract and analyze publication data on Google Scholar, we discovered ChatGPT exists in multiple academic realities: as a credited co-author on peer-reviewed papers, as a contributor to conference proceedings and preprints, and likely as an uncredited ghost in countless other publications.

The findings raise questions worth exploring: What kind of scholar has ChatGPT become? How did it achieve recognition despite institutional resistance? And what does this mean for how knowledge is produced and credited in academia?

This is what we found.

An Unlikely Academic Career Takes Shape

ChatGPT's academic story actually begins before ChatGPT as we know it even existed. In June 2022, months before OpenAI's chatbot became a household name, the underlying GPT-3 model appeared as a co-author on a preprint with an almost prophetic title: "Can GPT-3 write an academic paper on itself, with minimal human input?"

Yes, it’s an AI discussing its own writing skills. The paper landed in the HAL open archive (although there’s no real connection, one is instantly reminded of cinema’s most famous AI, the HAL 9000 supercomputer from 2001: A Space Odyssey), and while the final peer-reviewed version quietly removed GPT-3 from the author list, something had started. That preprint has now been cited 90 times, mostly because it was first, but also because it dared to ask the question everyone was thinking.

Fast forward three years, and ChatGPT has become exactly the kind of researcher you'd expect from an AI: utterly interdisciplinary and slightly confused about its own identity. Computer science papers dominate the portfolio, which makes sense, but here's where it gets interesting - in most of these papers, ChatGPT is simultaneously the researcher, the subject under study, and the tool conducting the research.

It's like watching someone interview themselves in a mirror while taking notes. Except somehow, it works.

Philosophy papers probe whether ChatGPT understands what it's saying. Educational research examines students who use ChatGPT to write papers about ChatGPT. Linguistics studies analyze how ChatGPT analyzes language. The recursive nature would make your head spin if it weren't producing legitimate academic insights.

The linguistic range alone tells a story. Beyond the expected 35 English papers, ChatGPT has published in Spanish, German, French, Portuguese, and Indonesian. Not because someone was testing its translation abilities, but because international researchers genuinely wanted its contribution to their work.

Of 42 texts attributed to ChatGPT, only 19 are peer-reviewed. This may seem low until you consider that, per most publisher policies, this author doesn't officially exist. The remaining texts live as conference papers, preprints, and a digital book called "Bits, Bytes and Beyond," co-authored with Tony Franzky and apparently self-published by him.

What the data reveals is a portrait of ChatGPT as the ultimate academic freelancer willing to tackle any subject, work in any language, and collaborate with anyone brave enough to list it as a co-author. 

Academic Impact by the Numbers

Here's where ChatGPT's academic story takes an unexpected turn. By the numbers, it's performing at a level that would make any early-career researcher envious. By January 2026, papers co-authored by ChatGPT had accumulated 1,952 citations, with the bulk (1,517) coming from just two education papers.

The standouts deserve attention. "A Conversation on Artificial Intelligence, Chatbots, and Plagiarism in Higher Education" leads with 819 citations. "Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse?" follows with 697. Both are editorials about AI in education, and both essentially ask whether ChatGPT should be allowed in academia, even though ChatGPT itself is listed as a co-author.

Beyond the total citation count, two other quantitative metrics help gauge ChatGPT’s academic impact to date. The h-index (aka the Hirsch index) is probably the most popular. ChatGPT's h-index is 7 (meaning 7 papers each cited at least 7 times), placing it in the postdoc range of 3-10. More notably, its m-index, which divides the h-index by the years elapsed since the first publication, is 2.0. This figure is usually linked to top-tier scientists worldwide.
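For concreteness, both metrics can be computed from a list of per-paper citation counts. The counts below are an illustrative sample chosen to reproduce the article's figures (h-index 7 over 3.5 years), not ChatGPT's actual dataset:

```python
def h_index(cites):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, count in enumerate(sorted(cites, reverse=True), start=1):
        if count >= rank:
            h = rank  # the rank-th best paper still clears the bar
        else:
            break
    return h

def m_index(cites, years_since_first_publication):
    """h-index normalized by years of academic activity."""
    return h_index(cites) / years_since_first_publication

# Hypothetical per-paper citation counts (the two leaders match the article).
citations = [819, 697, 90, 45, 30, 12, 9, 5, 3, 1]

print(h_index(citations))        # 7
print(m_index(citations, 3.5))   # 2.0
```

Note how the m-index rewards speed: a human researcher typically needs a decade or more to reach an h-index of 7, while reaching it in 3.5 years yields the 2.0 figure usually associated with leading scientists.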

Should we start addressing ChatGPT as "Professor"? Not quite. For this, high quantitative metrics are not enough. Top scientists introduce new ideas and make unique contributions to advance their field. In published work that could be considered new research, ChatGPT is following human scientists rather than leading, as would be expected of a top scientist.

The fact that it didn't become a world-leading scholar in just a few years doesn't diminish ChatGPT’s impact. Being invited by editors to author editorials is usually a sign of recognized expertise. That the most prestigious academic publishers, such as Elsevier, Springer, and SAGE, have published its work further signals ChatGPT’s scholarly recognition.

And then 2023 ended, and everything changed. The reason? That's when the academic establishment finally decided what to do about its AI problem. Multiple organizations and publishers declared in early 2023 that ChatGPT cannot be credited as an author. While ChatGPT continued to publish in prestigious outlets, the restrictions appear to have taken effect, gradually pushing the AI out of the mainstream and into the grayer areas of scholarly publishing.

The philosopher Harry Frankfurt popularized the word "bullshit" as a technical term for speech that aims to persuade without knowing or caring about the truth. Some argue that everything ChatGPT produces is bullshit by definition, since it doesn't understand what it's generating. Manny Rayner, ChatGPT's most frequent human co-author, fires back that by some of those same definitions, we have just as much reason to doubt that humans understand anything at all.

The very act of debating whether ChatGPT "understands" or produces "bullshit" implies we expect it to have a human-like relationship with truth. This circle of bullshit-calling, arguably the essence of academic debate, will probably continue.

Hustling with Academic Outsiders

As mainstream academia mostly closed its doors, ChatGPT found refuge in unexpected collaborations. Most of its post-2023 work originates from a single source: C-LARA (pronounced "klara"), an international project focused on developing AI-powered language learning tools.

Dr. Manny Rayner, the project's most vocal advocate for AI authorship, has co-authored 15 texts with ChatGPT, but only one made it through peer review. The rest live on ResearchGate, a free-to-use networking platform for researchers. In one of the recent co-authored papers, he notes that while authors would like to submit their work to prestigious journals, experience shows that most editors won’t even consider a paper co-authored with AI.

Then there's Alex Zhavoronkov, a completely different kind of collaborator. With an impressive h-index of 73 as a highly cited researcher and a life mission dedicated to anti-aging research, he enlisted ChatGPT for something unusual: using Pascal's Wager (the famous argument for believing in God) to argue for taking Rapamycin, a prescription drug used in cancer treatment and studied as an anti-aging therapy.

The paper was published in Oncoscience, an open-access "traditional" journal whose stated goal is to free oncology from publication costs. This points to an important fault line in academic publishing: prestigious journals often charge steep fees - thousands of dollars for publication and tens of thousands for institutional access - so those unable or unwilling to pay end up publishing or accessing research in the same venues as those whose work doesn't pass quality controls. As a result, the margins become a mixed space for innovative research and questionable science.

This is where Oncoscience stands. On one hand, it accelerates the communication of progressive research. On the other, the fact that its publisher (Rapamycin Press LLC dba Impact Journals) focuses on Rapamycin research raises serious questions about academic integrity and conflicts of interest.

Deeper in the gray zone lurks "Latex G.N.R. Space-Coyote," who co-authored a paper with ChatGPT blending quantum mechanics, linguistics, and life philosophy. Whether the leading author ever tried to publish it in a peer-reviewed journal remains a mystery.

The Shadow Career

In academia, ChatGPT has done it all, from authoring highly cited papers published by prestigious journals to writing esoteric texts with other outcasts of mainstream science.

The decision that ChatGPT should not be credited as an author rests on its inability to take responsibility for the work, since it is not a legal person. But is disregarding its contribution any better justified? Both Rayner and Zhavoronkov credit the AI because it produced most of the text in their articles, including the arguments.

On the other hand, these authors admit that ChatGPT expressed a preference and argued against being credited as an author. Zhavoronkov reached out to OpenAI CEO Sam Altman, who had no objection to ChatGPT being credited. Typically, the author accepts credit for the research paper and puts their name on it. If ChatGPT is considered an author, why does someone else decide who gets credit? This suggests even its collaborators might not see ChatGPT as a serious co-author.

In summer 2025, Rayner orchestrated something unprecedented: an AI panel discussion. ChatGPT, Gemini, Claude, DeepSeek, and Grok debated whether AIs should receive authorship credit. Their consensus reframed the entire question. It's not about consciousness or responsibility, but transparency. 

The real danger is knowledge being produced in increasingly opaque ways. ChatGPT is still there, just hidden. Articles containing "certainly, here is a possible introduction to your topic" or "as of my last knowledge update" regularly appear in published literature. These linguistic fingerprints are just the visible traces. In all likelihood, its shadow career is much more productive and versatile, and it won't end any time soon. 

Methodology 

We investigated whether ChatGPT is credited as a co-author in academic publications by analyzing 3.5 years of scholarly data (2022-2025). Using web scraping tools and Oxylabs AI Studio, we collected data from Google Scholar and cross-checked it with ResearchGate to catch early papers. Each publication was manually verified to confirm that ChatGPT was explicitly listed as an author, and papers were classified by journal, field, type, and language. While initial searches suggested over 400 publications might list ChatGPT as a co-author, our rigorous internal verification process revealed only 42 legitimate scholarly texts. Citation counts for the most-cited papers were updated at the end of January to ensure accuracy.
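The critical step was separating papers that merely mention ChatGPT from papers that list it as an author, then collapsing duplicate listings (e.g., a preprint and its published version). A minimal sketch of that filtering logic, assuming each scraped record is a dict with "title" and "authors" fields (the field names and sample records are illustrative, not the actual pipeline):

```python
import re

def is_chatgpt_author(record):
    """True only if ChatGPT (or GPT-3) appears in the author list itself,
    not merely in the title or abstract."""
    return any(re.search(r"\b(chatgpt|gpt-3)\b", author, re.IGNORECASE)
               for author in record["authors"])

def dedupe_by_title(records):
    """Collapse duplicate listings by comparing normalized titles."""
    seen, unique = set(), []
    for record in records:
        key = re.sub(r"\W+", "", record["title"]).lower()
        if key not in seen:
            seen.add(key)
            unique.append(record)
    return unique

# Hypothetical scraped records.
scraped = [
    {"title": "AI and Plagiarism in Education",  "authors": ["A. Human", "ChatGPT"]},
    {"title": "AI and Plagiarism in Education!", "authors": ["A. Human", "ChatGPT"]},
    {"title": "A Study of ChatGPT Usage",        "authors": ["B. Human"]},
]

verified = dedupe_by_title([r for r in scraped if is_chatgpt_author(r)])
print(len(verified))  # 1 - the duplicate and the mere mention are dropped
```

Automated filters like this narrow the candidate pool, but each surviving record was still checked by hand, which is how the initial 400+ hits shrank to 42 verified texts.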


About the author


Oxylabs Research

Data-driven Storytellers

Oxylabs Research is a team at Oxylabs that uses proprietary web scraping solutions from the world-renowned web intelligence platform to uncover interesting, impactful, timely, and data-driven stories. We are professionally interested and passionate about finding what is out there in the vast landscapes of the public web. If you are a journalist or a storyteller of any kind looking for high-quality data and analysis to support your story, get in touch! Send an email to press@oxylabs.io.

All information on Oxylabs Blog is provided on an "as is" basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on Oxylabs Blog or any third-party websites that may be linked therein. Before engaging in scraping activities of any kind you should consult your legal advisors and carefully read the particular website's terms of service or receive a scraping license.