I have heard that referees are using LLMs to “speed up” the reviewing process. I recently received two reviews from SPL. The paper is just a silly little result that works out very well, so I wanted to send it somewhere. The reviews are attached.
These two reviews do not seem independent to me. I have annotated them (and sent them back to the editor). Check the attached if you are interested. Some comments indicate that the reviewer really did not read even the abstract of the paper. (Yet it looks like some human input has been added.)
Would LLMs produce identical reviews, or merely similar ones? I guess it depends a lot on the prompt and the exact model being used.
SPL (in rejecting the paper) also sent me a list of suggested journals where I might send it, saying they would facilitate the transfer.
- Journal of Multivariate Analysis (there is nothing multivariate in my paper).
- International Journal of Approximate Reasoning
- Informatics in Medicine Unlocked (open access, costs $3000, seems predatory)
- Results in Applied Mathematics (open access, costs $2500, seems predatory)
- Franklin Open (what the hell?) (open access, costs $3000, seems predatory)
This list was also surely generated by an LLM. No academic with a passing knowledge of statistics or data science who had seen the abstract (or even the title) of my paper would suggest this.
Can LLMs be used for good instead of evil?
An experienced colleague, who is pretty bullish on AI, recently had two LLMs produce hostile reports on his own unpublished research. Some of it was nonsense, but it did alert him to some clear weaknesses, which allowed him to revise. It also revealed some weaknesses that he could not address. (He told me he just hopes that the referees do not use the same LLM!)