"Cornell Tech has received more than $7 million from Schmidt Sciences and NASA to upgrade arXiv, an open-access research repository of more than 2.8 million articles. "
It's great news for arXiv. I'm just wondering why they actually want to move to the cloud:
"finish migrating to cloud infrastructure"
NRENs (National Research and Education Networks) provide connectivity for educational networks; I hope the cloud mentioned there is operated by an NREN.
🔗 https://news.cornell.edu/stories/2025/11/7m-grant-nasa-schmidt-sciences-upgrade-arxiv
Research is often conducted with public money, and the results are put behind paywalls by “publishers” whose role seems more and more anachronistic today.
#research #science #peerreview #editors #paywall #OpenAccess #freeknowledge #freescience #arxiv
I gather they've finally taken this measure because of the preponderance of AI-generated slop, but with any luck these other issues will improve too. The arXiv press release states “Review/survey articles or position papers submitted to arXiv without this documentation will be likely to be rejected and not appear on arXiv”, so it does sound like they are acknowledging the other problems and intend to enforce their rules more strictly in the future.
"arXiv says it will no longer accept Computer Science papers that are still under review due to the wave of AI-generated ones it has received."
From https://infosec.exchange/users/josephcox/statuses/115486903712973154
Update. Here's how #arXiv is dealing with a similar problem in computer science.
https://blog.arxiv.org/2025/10/31/attention-authors-updated-practice-for-review-articles-and-position-papers-in-arxiv-cs-category/
"Before being considered for submission to arXiv’s #CS category, review articles and position papers must now be accepted at a journal or a conference and complete successful peer review…In the past few years, arXiv has been flooded with papers. Generative #AI / #LLMs have added to this flood by making papers – especially papers not introducing new research results – fast and easy to write. While categories across arXiv have all seen a major increase in submissions, it’s particularly pronounced in arXiv’s CS category."
Modern iOS Security Features – A Deep Dive into SPTM, TXM, and Exclaves
Howdy #arxiv folks, could someone please endorse my colleague Ambrose Carr on q-bio.QM for a preprint? Endorsement code: O488JD. Thank you!
https://peertube.dair-institute.org/w/gAAnkju7qjfrWjG9NZVy2L?start=2m31s
@[email protected] and @[email protected] are unpacking a recent OpenAI press release in the form of a blog post titled "Learning to Reason with LLMs" (1). At the point in the video I linked above, Prof Bender is commenting on the "Contributions" section of the blog post. She expands it, and notes that it resembles what one might see in a proper scientific publication. But this is a blog post, and really it's a press release, not a scientific publication. The rest of the episode is well worth watching to see just how misleading this press release is and why you should not believe any of the claims made in it. No, LLMs can't reason like a PhD or win coding competitions.
Since the beginning of this AI hype cycle I've been arguing here and elsewhere that companies are putting lab coats on their corporate whitepapers and press releases in a cynical attempt to afford them more credibility than they deserve. They are gaming the academic publishing system to circulate them, flooding #arXiv (a scientific and scholarly pre-print server) with whitepapers and press releases formatted to look like academic writing, even getting low-quality "articles" in some of Nature's publications and web sites (see e.g. https://theluddite.org/post/replika.html ). It looks like they're also formatting their blog posts to resemble peer-reviewed scholarly or scientific work. This is a parasitical way of behaving, and personally I believe it's being done deliberately and consciously.
Richard Feynman (racistly (2)) referred to this sort of thing as "cargo cult science". It's all surface with no depth, and it's not intended to further human knowledge.
#GenAI #GenerativeAI #LLMs #GPT #ChatGPT #OpenAI #ScientificPublishing #AcademicPublishing
(1) In case you're curious no, what OpenAI is describing is not reasoning, neither technically speaking nor intuitively speaking.
(2) I'm very sorry but I think what Feynman gets at in this essay is important and I don't know any other accessible source for the critique he makes.
https://mastodon.world/@Mer__edith/113197090927589168
Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI
With the growing attention and investment in recent AI approaches such as large language models, the narrative that the larger the AI system the more valuable, powerful and interesting it is is increasingly seen as common sense. But what is this assumption based on, and how are we measuring value, power, and performance? And what are the collateral consequences of this race to ever-increasing scale? Here, we scrutinize the current scaling trends and trade-offs across multiple axes and refute two common assumptions underlying the 'bigger-is-better' AI paradigm: 1) that improved performance is a product of increased scale, and 2) that all interesting problems addressed by AI require large-scale models. Rather, we argue that this approach is not only fragile scientifically, but comes with undesirable consequences. First, it is not sustainable, as its compute demands increase faster than model performance, leading to unreasonable economic requirements and a disproportionate environmental footprint. Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate. Finally, it exacerbates a concentration of power, which centralizes decision-making in the hands of a few actors while threatening to disempower others in the context of shaping both AI research and its applications throughout society.

Currently this is on #arXiv which, if you've read any of my critiques, is a dubious source. I'd love to see this article appear in a peer-reviewed or otherwise vetted venue, given the importance of its subject.
I've heard through the grapevine that US federal grantmaking agencies like the #NSF (National Science Foundation) are also consolidating around generative AI. This trend is evident if you follow directorates like CISE (Computer and Information Science and Engineering). A friend told me there are several NSF programs that tacitly demand LLMs of some form be used in project proposals, even when doing so is not obviously appropriate. A friend of a friend, who is a university professor, has said "if you're not doing LLMs you're not doing machine learning".
This is an absolutely devastating mindset. While it might be true at a certain cynical, pragmatic level, it's clearly indefensible at an intellectual, scholarly, scientific, and research level. Willingly throwing away the diversity of your own discipline is bizarre, foolish, and dangerous.