Veterinary Vertex

AI in Scientific Writing: Opportunity, Risk, and Responsibility

AVMA Journals



A citation can be polished, specific, and completely fake, and that's the scary part. We sit down with scholarly journal consultant Morna Conway, PhD, and JAVMA and AJVR copy editor Vic Schultz to unpack how generative AI tools like ChatGPT can hallucinate references, remixing real author names, familiar journal titles, and plausible article wording into sources that simply do not exist. If you write, review, edit, or read scientific articles in veterinary medicine, this conversation is a practical guide to protecting research integrity in the age of AI-assisted writing.

We walk through how these fabricated citations get discovered, from peer reviewers who know the field well enough to spot a suspicious claim to copy editors who notice missing DOIs, dead Crossref links, absent PMIDs, or volume and page details that don’t add up. Dr. Lisa Fortier shares how editorial workflows shape when problems are caught and why JAVMA and AJVR take a hard line: if hallucinated references are found, the editorial team can reject the manuscript even after acceptance because accuracy is non-negotiable for credible scientific publishing.

We also get specific about responsible AI use in scientific writing: disclose how you used AI, describe the workflow, and personally verify every output before submission. The best advice sounds old-school because it works: proofread, slow down, and click every DOI. If you found this helpful, subscribe, share the episode with a colleague, and leave a rating and review to help more researchers find it.

JAVMA editorial: https://doi.org/10.2460/javma.264.4.382

Scientific Reports article: Fabrication and errors in the bibliographic citations generated by ChatGPT

INTERESTED IN SUBMITTING YOUR MANUSCRIPT TO JAVMA® OR AJVR®?

JAVMA®: https://avma.org/JAVMAAuthors

AJVR®: https://avma.org/AJVRAuthors

FOLLOW US:

JAVMA®:

Facebook: Journal of the American Veterinary Medical Association - JAVMA

Instagram: @avma_javma

Twitter: @AVMAJAVMA

AJVR®:

Facebook: American Journal of Veterinary Research - AJVR

Instagram: @ajvroa

Twitter: @AJVROA

JAVMA® and AJVR® LinkedIn: https://linkedin.com/company/avma-journals

Dental Treat Ad And Setup

Greenies Dental Treats

We love dogs. We tolerate their breath. But we know bad breath isn't just a nuisance; it can be a sign of canine dental disease. Sure, daily brushing to keep teeth clean is ideal, but getting a dog to cooperate can be a real challenge for many pet owners. Greenies Dental Treats are vet recommended and clinically proven to promote oral care when used daily between professional cleanings. Plus, they're irresistibly tasty and VOHC accepted for plaque and tartar control, with a texture designed to clean teeth effectively with every chew. So if you're looking for a routine they'll actually stick with, Greenies Dental Treats are the perfect place to start.

Lisa Fortier

Welcome to Veterinary Vertex, the AVMA Journals' podcast, where we take a behind-the-scenes look at manuscripts with their authors. I'm editor-in-chief Lisa Fortier, joined by associate editor Sarah Wright. Today we welcome a couple of members of our journal team, Morna Conway and Vic Schultz, to discuss AI and scientific publishing.

Sarah Wright

Thanks for joining us, Morna and Vic.

Morna Conway and Vic Schultz

Thanks for having me, Sarah and Lisa. Thanks for having us.

Sarah Wright

So, Morna, why are verifiable citations essential for scientific credibility?

Morna Conway

Verified citations establish that the findings in the manuscript you're publishing are firmly based on prior research. It's the whole notion of the building blocks of science: you don't just pluck something completely new out of the air without a solid pathway from earlier research. That's the fundamental idea of how science progresses, building block by building block, to advance knowledge. So if you can't verify the citations, the references in a work, that work is questionable at best.

How AI Invents Real-Looking References

Lisa Fortier

Yeah, you know, it's starting to happen in grant review as well. I was just speaking to someone on a review panel who had read my latest editorial, and they said that on their grant review panel they found these hallucinated references too, and they did not score the grants. It was controversial for some people, but they drew a hard line like we did. Morna, how do these AI tools end up generating citations that don't exist?

Morna Conway

These AI tools, ChatGPT and other large language models, are not databases of factual information. They're not the same as PubMed or Google Scholar. What LLMs are trained to do is prioritize and predict the next likely word, the next likely concept. They're not working from verified sources. It's the difference between information processing and language processing; AI in this instance is based on language processing, not data processing, which is a big problem. So how do they come up with something that looks genuine at first glance? Basically, the model takes the manuscript it's looking at and predicts that it would be referencing these authors, or these keywords, or these titles. It's fabricating, it's hallucinating, and it's a really serious problem.

Sarah Wright

Yeah, thank you for providing that clarification. I think that's important, especially when we have younger, early-career authors who are looking at using AI more in their workflows. So thank you. So, Vic, why do fabricated references often appear convincing?

Vic Schultz

Well, Morna touched on it with her description of the way LLMs are built. They scrape all this information to build the corpus they work from, and they incorporate real references, authors, and terminology that's common in the literature. All those things are in there. Look, I'm not a software engineer, so I might be putting it in the wrong terms, but as a copy editor, this is what I see and what it feels like: when a reference is fabricated, it feels as if the model has remixed these other sources, and each element of the reference sounds right. That's a real author who's well published in that area of veterinary medicine, but maybe they've never worked with this other author. This article title sounds right, but it doesn't actually exist. This journal certainly exists, but if you go to that issue or volume, the pagination is totally off, and the reference turns out not to exist at all. They're using all these elements to spit something back at you that may or may not be right. The batting average is pretty good; oftentimes the outputs are accurate. But either way, they sure look good.

Lisa Fortier

So, Vic, they look so good. How are these hallucinated references discovered during peer review and by our fabulous copy editing team, including you?

Vic Schultz

Well, our peer reviewers might just know the material well enough, honestly. As for the copy editors, in the course of our editing we use software to format our references, and in checking the outputs from that software to make sure they're formatted correctly, we can see whether the DOI has a working link, or whether there's a DOI at all. If there is no DOI, or no PMID or any of these other indicators, that's usually a red flag that this might not be a real reference. Oftentimes it still is; we dig a little deeper and find that maybe it's just a not very well-known journal, or a journal that for whatever reason doesn't use DOIs. But in 2026, it's very rare that there's no sign at all of an article that's real. So that's pretty much it: in the course of styling the references, red flags pop up that tell us a reference might not be real.

Morna Conway

I found a recent study in Scientific Reports, from September 2023, in which Walters and Wilder found that 55% of GPT-3.5 citations were fabricated, versus 18% for GPT-4. Likewise, 43% of the real GPT-3.5 citations, the ones that actually existed, included substantive citation errors, versus 24% for the real GPT-4 citations. So I can see that Vic would be kept incredibly busy just with the formatting issues, let alone the verification issues. When you think about it, every article seems to have at least 20 references, and that's a lot of checking.

Vic Schultz

It's true. The authors do keep us busy, yeah. But that's why we love them.

Lisa Fortier

You know, over the weekend you were talking about how reviewers can pick this up as well, Vic. We had an article submitted about chatbots in veterinary medicine, and a reviewer who is an expert in AI looked at it and thought the language wasn't perfect, but sometimes it isn't, especially if English isn't the author's first language. So it got past me, and it got past one of our very astute associate editors at JAVMA. It got assigned to this reviewer, who got back to us and said: this is an entirely fabricated manuscript. It's the first at JAVMA and AJVR that we know of. This reviewer, who, as you were saying, does know the field, actually looked at the author's listed affiliation, went to that institution's website, and didn't find them there. Then the reviewer plugged the author into PubMed, looked at all of that author's publications, and found they had actually listed the AVMA as one of their affiliations in one of their manuscripts. So it's a terrifying world, but we're trying to keep up. That is super scary.

Sarah Wright

Well, that's why we're doing this episode: so hopefully we can bring some education and shed some light on this really important issue that we're facing in scientific publishing. So, Vic, what role do copy editors play in safeguarding reference accuracy?

Vic Schultz

Well, I guess I think of us as the last line of defense, but we shouldn't be the first. Really, I would like authors to be more vigilant about this type of thing too. Reviewers are great, but I don't think of catching something like this as a primary responsibility of theirs. So it's just become part of our standard role, I guess. We have a new copy editor coming in, and I plan on talking with them extensively about looking for indicators of AI use. And look, even generative AI use isn't necessarily out of bounds; it's about using it responsibly. We're all thinking this through as we go, but I think of it as part of our normal responsibilities now.

Morna Conway

So, Vic, are you working on papers that have been accepted? You see them after acceptance?

Vic Schultz

Yes.

Morna Conway

So you're really the last resort to catch something. Yeah.

Vic Schultz

Yeah, we're the bulwark, I guess. I hope we hold up. I think we usually do. So far. The technology evolves quickly.

Lisa Fortier

Yeah, Vic, I like how you're referring to AI as great but still needing a human element. As you know, we've tested multiple AI detection softwares, and none of them are great. So why isn't even automated reference-checking software sufficient?

Editorial Hard Line On Fabrications

Vic Schultz

Yeah, I've discussed this with software engineers, even. To some extent, at least with what we use, it might just be an issue of the way software engineers think. When I suggest that the software might flag more questionable references, the response is, "I guess that might be useful. I'll talk about that with the team." That type of thing. I can see that a reference is questionable by reading it, but what we use doesn't necessarily flag it. And when a reference is hallucinated, when AI has produced a fake reference, you want to be very sure before you reject someone's paper. Even if we had reference software that flagged everything and said this journal doesn't exist, something very obvious, I would still want to follow up on that. This is people's life's work that they're trying to publish with us. So, like you mentioned, you always want a human to make that final call. Before I come to you, Lisa, with a paper and say these appear to be fabricated references, or some portion of the reference list does, we want a human looking at that, not just software that says this is fake and that's it. We would always double-check, no matter what.

Sarah Wright

So now we get to turn the tables a bit, and I get to ask Lisa a question. Lisa, with your insight as editor-in-chief of JAVMA and AJVR, how do editorial workflows influence when and how citation issues are caught?

Lisa Fortier

I think it's been touched on, but it's great to reiterate. The first stopgap is the reviewers. If I'm reviewing a manuscript and I think, wow, I haven't read that article, I'm going to click on the link or go look for it, and then you're like, well, that's weird, it doesn't exist. Most of the time, those reviewers will write to us and say, I'm concerned that this is a fake, hallucinated reference. After that, they're caught by our copy editors. In our workflow we have, as you know, taken a hard stance: whether an AI-hallucinated reference turns up during the review process or, as you just heard in the exchange between Morna and Vic, after acceptance, when the accepted manuscript goes to Vic and his team for copy editing, even then we will rescind the acceptance and reject the manuscript. Because of all the things we've talked about, it's super important to the integrity of the entire article that these references are correct. Most other journals, which aren't self-published like we are, don't take the personal approach that we do; you would get an author query on your galley proof that just says, please check this reference. This is obviously happening in every journal area, and the editorial workflow, and what a journal ultimately does once it discovers an AI-generated reference, is highly variable, but we've taken a hard stance on this.

Sarah Wright

Yeah, thank you for sharing. I think it's important just to have that stated out there so that people know and so that when they go into even drafting a manuscript, they can have this in their mind as they're doing that literature search.

Morna Conway

Think about how much distrust of science there is nowadays, and all the misinformation that's out there. Reputable journals like JAVMA and AJVR really have to be very, very strict in upholding standards of accuracy. It's a question of accuracy: if you're publishing inaccurate information, which is what a hallucination is, that's a terrible disservice to science, especially in this day and age, when there's so much skepticism, so much misinformation, and so many conspiracy theories about so many things. And I'm sure that's as true in veterinary medicine as in human medicine.

Responsible AI Use And Author Habits

Lisa Fortier

That's really well said, Morna. So, AI is here and people are using it. What does responsible use of AI look like in scientific writing?

Morna Conway

I think the first and most important thing is disclosure: that AI has been used, and how. Authors have to understand that it's not enough to say, well, I checked the references, if they actually used AI to generate some or all of those references. Disclosure is the number one tool, and then a description of how exactly the AI was deployed. The final step is that authors have to sign off that they have fully vetted any content for which AI was used in any way. So it's really up to the author, and that's why, when an author fails to follow these rules, we have to take such a hard line.

Sarah Wright

So going back to authors, Vic, what verification habits should authors adopt when using AI tools?

Vic Schultz

Well, proofread, people. That would be my advice to authors submitting to JAVMA and anywhere else, really. Our editors are great, but there's an old saying in copy editing that compares it to a car wash: the car is only going to come out so clean, so the dirtier the car going in, the less clean it comes out. If you submit a manuscript that has been proofread well, and you've checked your AI outputs closely, then we're all going to publish a better paper together, and obviously avoid things like fabrications, hallucinations, and the various tortured phrases that generative AI can produce. In particular, as Morna mentioned, check every reference yourself. Yes, our copy editors will too, but as you heard, we will also reject the paper if we find a hallucinated reference. So don't just generate a list, decide it looks right, and send it to us. And don't just delegate it to someone else in the author group, either; everybody's got a stake in it.

Morna Conway

Yeah, and I think the DOIs are really useful here. I would really encourage people to click on the links. If the digital object identifier link from Crossref comes back with "DOI not found," that's a dead giveaway, and that's what happens when you have a bad reference.

Vic Schultz

Yeah, and that piece wouldn't even take authors that long. If they just go through and click on each DOI, that would be a nice step for them to take.
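For authors who want to automate that click-through, here is a minimal sketch of the kind of DOI check Morna and Vic describe. It assumes Python and the public Crossref REST API (api.crossref.org); the function names are illustrative, and this is not part of the journals' actual workflow, just one way to screen a reference list before submission:

```python
import re
import urllib.error
import urllib.parse
import urllib.request

# Cheap syntactic screen: DOIs start with "10.", a registrant prefix of
# 4-9 digits, a slash, and a non-empty suffix.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")


def looks_like_doi(doi: str) -> bool:
    """Return True if the string is at least shaped like a DOI."""
    return bool(DOI_PATTERN.match(doi))


def doi_is_registered(doi: str, timeout: float = 10.0) -> bool:
    """Ask the public Crossref REST API whether the DOI is registered.

    A 404 response is the "DOI not found" dead giveaway discussed above.
    A real article in a journal that doesn't use Crossref DOIs will also
    fail this check, so treat a failure as a red flag, not as proof.
    """
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi, safe="")
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

For example, `doi_is_registered("10.2460/javma.264.4.382")` would look up the JAVMA editorial linked in the show notes, while a hallucinated DOI would come back False. A human should still review anything the script flags, for exactly the reasons Vic gives above.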

Lisa Fortier

All right, Vic, our next editorial is going to be "Proofread, people." I love it. It is kind of old school, and I agree: it's everybody's responsibility as an author, not just the first or corresponding author. This is evolving, and it's not going to stop, right? We need to evolve and meet people where they are with AI. So, Vic, how should those of us in the publishing community respond to this rise in AI-assisted writing?

Education Guidelines And Closing

Vic Schultz

Yeah, I would say, look, it's easy to say just be vigilant, and it's true, we need to be vigilant, but it might not be clear what that looks like, right? So for authors: check your outputs closely, as we've mentioned. If you've used AI to help compose the paper, make sure the data are accurate and the conclusions follow from the results, and obviously be thorough in checking your references, as we've discussed. For editors and production: be thorough, and really, you want to slow down. It sounds counterintuitive, because AI is for speeding things up, but that goes for authors too. You want to slow down these days and make sure AI has given you good outputs. So on the editorial and production side, take it more slowly. When you're reading a manuscript, and I guess this is very copy-editor specific, almost listen for notes of the uncanny, make sure the logic tracks, and always be very scrupulous as you go through the references. So that would be my takeaway for everybody: slow down. With AI, I would say it's a shortcut, but it's a shortcut best used by people who know how to do it the long way. Keep that in mind when you're working with AI.

Morna Conway

I like that.

Lisa Fortier

Morna, how about you? What do you think?

Morna Conway

I think it's really about education. How do we educate potential or actual authors? Well, so much communication occurs at conferences and scientific meetings, and I think it would behoove us to have sessions on topics like the responsibilities of authorship and what happens, maybe just pointing out the bad consequences, if the literature gets loaded up with hallucinations. Why is that such a problem? Maybe we have to reinforce the basic principles of scientific exploration, research, and publication, going back to the basics. Reaching early-career researchers where they are is very important, and things like boards of early-career researchers involved with journals in some way, or publishing some tutorials on the whole topic, would be helpful. It's really about teaching people the seriousness of the issue, because it doesn't seem very serious; it seems kind of funny to talk about a machine hallucinating. That just doesn't seem like something we would take terribly seriously. So giving it the due amount of serious consideration, and building it into our process for educating authors, would be key. Of course, I'm talking about early-career people, but it's probably just as prevalent with seasoned authors who can't be bothered to check their references. So maybe not so much emphasis on early career, or at least an equal emphasis on later career, which we don't talk about much; you hear all about ECRs, never LCRs. I think it's just finding ways to get that message across.

Sarah Wright

Yeah, one thing I'd like to add is that we publishers should also keep updating our AI guidelines for authors. As we've talked about, AI is rapidly advancing, so we need to stay abreast of that too and update our guidelines to stay contemporary with what's currently happening with AI.

Vic Schultz

That's a great point, Sarah.

Morna Conway

You might have to start putting some of that, and maybe you already are, in your communications with authors as their paper progresses through the peer review process. You know, just a last reminder: there are bad consequences if you have hallucinations here, and we can withdraw the acceptance if we find this. So really putting it very upfront in author communications, I believe.

Sarah Wright

For sure. That's why we're doing this, too. So, Morna and Vic, thank you so much for joining us. We really appreciate you sharing your insight on this really important topic.

Morna Conway and Vic Schultz

Thanks for having us.

Sarah Wright

I'm Sarah Wright here with Lisa Fortier. Be sure to tune in next week for another episode of Veterinary Vertex, and don't forget to leave us a rating and review on Apple Podcasts or wherever you listen.