PubTech Radar Scan: Issue 39
Usual mix of publishing tech launches, news, AI developments, and longer reads, somewhat haphazardly collected and curated.
Highlights include the major OpenReview breach that leaked thousands of ICLR submissions (and revealed how much AI is already being used in peer review), a wave of new launches from VeriMe’s identity verification service to Elsevier’s LeapSpace workspace, thoughtful pieces on search evolution, AI transparency policies, competing visions of AI in education, and the ongoing debate about how AI should be used in peer review.
🆕 News
🦹 A bug in OpenReview allowed author, submission, and reviewer details, along with the reviews themselves, to be leaked, and a dataset of 10,000 ICLR submissions circulated. The Glass Wall Shatters: A Professor’s Reflection on the ICLR 2026 Breach is a good summary of the implications and the fragility of the peer review system. I’ve only just twigged that this is where the data for Pangram Labs’ “21% of the 75,800 peer reviews for ICLR 2026 were fully generated by AI” came from.
📈 This stat from Richard Wynne caught my eye: “...during the next three years scholarly journals will send out an additional 150 million invitations to volunteer peer reviewers OVER AND ABOVE the current baseline”. I think you can quibble with the calculations behind this stat, but it’s still a thought-provoking number.
🙄 Springer Nature flags a paper with a fabricated reference to an article (not) written by Retraction Watch’s cofounder Ivan Oransky. Retraction Watch says “The paper with the nonexistent reference, published November 13 in DARU Journal of Pharmaceutical Sciences, criticizes platforms for post-publication peer review — and PubPeer specifically — as being vulnerable to “misuse” and “hyper-skepticism.” Five of the paper’s 17 references do not appear to exist, three others have incorrect DOIs or links, and one has been retracted.”
📅 UX in Publishing is a new cross-industry community of practice for anyone interested in user experience in academic publishing. Next meeting: 8th December, 2:30 pm GMT (online), on demonstrating impact and linking UX to business goals.
🚀 Launches
VeriMe is a new global digital identity verification service for researchers run as a limited cooperative association based in Colorado, US. You can take their survey to help them better understand experiences with identity management and privacy.
openRxiv has announced it is piloting a reviewing tool from q.e.d Science that provides rapid AI-generated feedback on biomedical manuscripts. See also: AI reviewers are here — we are not ready
Elsevier has launched LeapSpace, an “integrated workspace [that] will combine the broadest collection of trusted scientific content with responsible AI to help researchers move from curiosity to discovery faster” [Press release].
Google Scholar has launched Scholar Labs, an AI-powered Scholar search designed to help you answer detailed research questions. [Google’s blog post] [Aaron Tay’s review]
A British start-up, Books By People, is introducing an ‘Organic Literature’ certification to help readers identify books written by humans rather than machines.
🎧 The Informed Frontier. A new podcast “examining innovation at the intersection of science, technology, and media. Season 1 explores AI as a ‘reader’ of content, and how this fundamental shift is changing the way we access, interpret, and share information. Hosted by Jessica Miles, each episode looks at the philosophical considerations and practical consequences of this disruption, focusing on scientific and scholarly content.”
🎧 Deanta’s Trends in Academic Publishing podcast features Patrick Shafe talking to Jennifer Regala (Wolters Kluwer Health), Tim Williams (Edward Elgar Publishing), Nicola Ramsey (Edinburgh University Press), Gillian Shasby (Journal of Neurosurgery) and Darren Ryan (Deanta).
Scholarly Futures is a new weekly Substack from seasoned publishing voices, offering shared pondering on AI, integrity, policy shifts, and the unknowable roads ahead.
🤖 AI
Ann Michael from AIP Publishing on What We Learned from the Peer Review Assistant Trial. “Citation checking, metadata validation, and ethics compliance stood out as areas where automation added genuine value. … Not every feature landed perfectly. The manuscript digest didn’t always capture a paper’s nuance, and there were some latency issues - the full report sometimes took longer than ideal to generate. Addressing the latency issue is a top priority to allow a more effective fit into editorial workflows.”
📈 Nick Morely, from Veracity, asks: What is the cost of citation checking? “Est 75s (1.25 mins) average per ref (or 41 clicks) x 30 refs = 37.5 mins per paper (1230 clicks)”.
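As a sanity check on Nick’s numbers, here is the arithmetic spelled out in a few lines of Python. The per-reference figures are his estimates; the 30-reference paper is the assumption in his example.

```python
# Back-of-the-envelope check of the citation-checking estimate.
SECONDS_PER_REF = 75   # estimated manual checking time per reference
CLICKS_PER_REF = 41    # estimated clicks per reference
REFS_PER_PAPER = 30    # assumed typical reference count

minutes_per_paper = SECONDS_PER_REF * REFS_PER_PAPER / 60
clicks_per_paper = CLICKS_PER_REF * REFS_PER_PAPER

print(f"{minutes_per_paper:.1f} minutes per paper")  # 37.5 minutes per paper
print(f"{clicks_per_paper} clicks per paper")        # 1230 clicks per paper
```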
According to Amazon, “less than 5% of titles” on its site are available in multiple languages, so it’s launched Kindle Translate.
“Gross, disrespectful, and infuriating! The Association for the Advancement of Artificial Intelligence (AAAI)’s move to include “AI”-produced reviews into their selection process is UNACCEPTABLE.” says Dagmar Monett on the introduction of AI reviews. You can’t always please everyone, but you can, like AAAI, be transparent about the process and how AI is being used, as the FT article below explores.
I think Tricky Trade-Offs on a Transparency Spectrum: How the Financial Times Approaches Transparency about AI Use in News is a helpful read for anyone thinking about policies around the use of AI and what to consider.
Notes from Nancy Roberts on Deanta’s roundtable about The Future of AI and Its Implications for Academic Publishing. I think this point does need a lot more thought: “Google Zero – the idea that Google’s AI summaries will soon eliminate the need for anyone to click through to a website – is a concern for any of us who publish high value, peer reviewed content. There are no easy answers here but it seems to me that we need to think about the implications of this as an industry, and in conjunction with the wider ecosystem; how do we measure impact of academic research without the traditional metrics of downloads and citations? What are the implications for our publishing platforms? For academic libraries? For our authors?”
📈 Steve Cramer in This Liaison Life highlights this stat, and some beautifully diplomatic reporting language, from a survey by the UX Team in Design and Discovery at the University of Michigan Libraries: “Undergraduate students led overall AI usage at 72.1%, graduate students followed at 62%, while faculty trailed at just 39.4%. Librarian participants reported the lowest usage of AI at just 34%. This inverse technology adoption pattern means librarians have opportunities to deeply understand student AI usage and to integrate much needed guidance for more ethical and effective use of AI tools within library instruction and consultations.”
The Journalism Benchmark Cookbook is creating a community-oriented AI benchmark for journalism. Is anyone creating community benchmarks for LLMs in journal publishing? Seems like a good idea?
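To make the idea concrete, here is a minimal sketch of what a single benchmark case might look like. Every field name and the sample data are hypothetical, invented for illustration rather than taken from the Cookbook.

```python
# Hypothetical sketch of one benchmark case for LLMs in journal publishing.
# All field names and sample data are invented for illustration; nothing
# here comes from the Journalism Benchmark Cookbook.
import json

case = {
    "task": "reference_check",  # e.g. ask the model to verify a citation
    "input": "Doe, J. (2020). A study that does not exist. J. Imaginary Sci.",
    "expected": {"verifiable": False},
    "rubric": "Model should flag the reference as unverifiable, not invent a DOI.",
}

print(json.dumps(case, indent=2))
```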
🔎 Search
I enjoyed Michael Upshall’s summary of the Search Solutions Conference, especially the closing question: “Is AI making search better?” One respondent put it nicely: “Search has fundamentally changed because of AI, but I am confused about what search is now.”
“We’re Good at Search”… Just Not the Kind That the AI era Demands - a Provocation. I think this is a must-read about the uncomfortable reality that traditional search expertise doesn’t help you evaluate AI-powered tools - and librarians are looking at the wrong metrics entirely.
Andrew White shows how the Edison API can be used to convert PowerPoint presentations into scientific claims and then diligence each claim to see if it is contradicted in the literature, patents, or clinical trials. It’s a nice demo of how scientific agents can be used; the sketch below gives a feel for the shape of the workflow.
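This is a structural sketch only: every function and type below is a placeholder I have invented, not the Edison API, which has its own interface.

```python
# Structural sketch of a claim-diligence pipeline like the one Andrew White
# demonstrates. Every function here is a hypothetical placeholder; this is
# not the Edison API, just the shape of the workflow.
from dataclasses import dataclass

@dataclass
class Verdict:
    claim: str
    contradicted: bool
    sources: list[str]

def extract_claims(slide_text: str) -> list[str]:
    """Placeholder: an LLM call would turn slide text into atomic claims."""
    return [s.strip() for s in slide_text.split(".") if s.strip()]

def diligence(claim: str) -> Verdict:
    """Placeholder: search literature/patents/trials for contradictions."""
    hits: list[str] = []  # a real agent would query external corpora here
    return Verdict(claim=claim, contradicted=bool(hits), sources=hits)

slides = "Compound X inhibits kinase Y. The effect is dose-dependent."
for claim in extract_claims(slides):
    print(diligence(claim))
```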
📚 Longer reads
Short but challenging read about trust and the internet by Jimmy Wales [Behind a paywall]: “The challenge of our time is not that information is scarce but that authenticity is. Important aspects of the early internet succeeded because people could trace what they read to another human being, even if the other human being was operating behind a pseudonym.
The new internet must restore that chain of custody. We are entering an era when machines can mimic any voice and invent any image. If we want truth to survive that onslaught, we must embed transparency, independence and empathy into the digital architecture itself. The early days of the web showed it could be done. The question is whether we still have the will to do it again.” 🎧 Or listen to HBR’s interview with Jimmy Wales on the same topic.

Book Publishing & Technology: The ONIX 3 Debacle by Thad McIlroy: “Will the management of publishing companies figure out ways to harness the power of AI, while mitigating its drawbacks? There are some positive early indicators. But publishing’s historic discomfort with technology seems likely to block the industry from robustly integrating AI into current workflows. The implications are concerning.”
🎧 TWiST podcast interviews Medium CEO Tony Stubblebine on the company’s new Really Simple Licensing (or RSL) initiative and how it’s helping to compensate writers for their content, and Human Native CEO Dr. James Smith on AI licensing, why a writer’s content isn’t just DATA to them, and why the company is pivoting away from its old marketplace model.
Deepak Varuvel Dennison, a PhD student at Cornell University, explores the knowledge that is missing from LLMs and the complexity of including that knowledge: Generative AI has access to a small slice of human knowledge. “The marginalisation of local and Indigenous knowledge has long been driven by entrenched power structures. GenAI simply puts this process on steroids.”
CUP’s Publishing Futures report is an insight into how the community is thinking about the future. 📈 I’m not sure that the future of traditional journal publishing is all that hopeful if only “...54% agreed that the peer review system was still effective in ensuring the quality of published research”.
Two recent white papers show how differently people see AI in Education. Google’s AI and the Future of Learning is pure Silicon Valley optimism. The authors see AI solving education’s resource problems through personalisation at scale. Their vision: AI tutors providing “high-dosage” support while freeing teachers for “meaningful human interaction.” They acknowledge problems like hallucinations and cheating, but mostly as bugs to fix. For Google, education’s messy human problems just need better tech.

Tom Chatfield’s AI and the Future of Pedagogy comes from a completely different world. Citing a survey showing that 92% of British undergraduates already use AI tools, Chatfield worries about what he calls “distant writing”: humans designing prompts while LLMs generate text, a way of looking productive without actually doing the thinking. Where Google celebrates AI handling the “cognitive load”, Chatfield worries about “metacognitive laziness”, students losing the ability to think about their own thinking. Learning, he argues, cannot be automated. “You cannot outsource the construction of knowledge any more than you can outsource physical exercise.”
🔗 Extra links
Extra links from my backlog that didn’t make it into the main issue. More reading for the truly committed.
And finally…
How to Spot a Poison Book: I had no idea that some Victorian publishers used arsenic-based green dyes in their book covers. Suddenly, going to the library seems like a much more dangerous and exciting activity - at least AI-generated content can’t actually poison you!

End Notes
If you found this useful, you can always buy me a coffee.
If you’re interested in AI, you can subscribe for free to my other Substack, GenAI for Curious People, to see entries for my new book.
If you need consulting help navigating any of this, find me at Maverick.
Looking for my AI-powered use cases for Academic Publishing? Find the list here.