buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people that use software like Mastodon, Pleroma, Friendica, etc. all around the world.
This server runs the snac software and there is no automatic sign-up process.
THIS IS IMPORTANT: DO NOT TRUST GOOGLE GEMINI; it may already be enabled in your GMAIL BY DEFAULT.
DO NOT TRUST GOOGLE!
#Google says it may share your interactions with #Gemini #AI with law enforcement, including interactions involving Gemini features in #Gmail. Given Google CEO Sundar's enthusiastic embrace of the Fascist Trump administration and the teaming of Gemini with Hegseth's Department of Defense, I NO LONGER TRUST GOOGLE.
Pay attention to this. Google -- as always -- says they do not give user data to law enforcement except with a valid warrant. This has long been true, but I no longer trust Google to honor this position. A quick call from Trump to Sundar could well be enough to bypass those past protections. Discussing an anti-Trump demonstration? Abortion? Helping local immigrants? You can no longer trust Google to not treat your data as fodder for fascist Trump, irrespective of Google's public statements.
VERY IMPORTANT: It appears that if you have "Smart Features" enabled in Gmail -- a setting that has long covered a variety of features -- you now probably (or soon will) also have Gemini scouring through your email. I STRONGLY RECOMMEND turning this off. Can we be sure Google will really honor this choice given their embrace of Trump? No, but it's important to have specified your choice on the record.
To turn off Gmail smart features (apparently including Gemini, though other ways to ostensibly turn off Gemini in Gmail may also be available):
Enter Gmail
Click the Gear icon at the upper right
Click "See all settings"
Stay on the "General" tab
Scroll down to "Smart features" -- make sure it is unchecked
Just below this (currently) is "Google Workspace smart features"
Click the "Manage Workspace smart feature settings" bubble
Make sure the toggles are OFF (to the left) for everything listed in the pop-up that appears. Be SURE TO SCROLL DOWN -- there are currently TWO toggles, but the second may be completely hidden if you don't grab the little scrollbar on the right and pull it down. Again, make sure BOTH toggles are off.
Click Save
Now back on the General Tab, scroll all the way to the bottom of the page and click Save Changes if that button is not already greyed out and unclickable.
That's what I have for you for now, based on the information I have right now.
More as I learn more.
It's a terrible situation that pains me greatly. Before you ask, I do not at this time have a recommendation for other major email providers.
How you proceed is up to you, of course.
Take care.
L
RE: https://xoxo.zone/@Ashedryden/115905169799161211
Doesn’t “training” a model amount to compressing the training data into a finite tensor space? And doesn’t prompting, modulo the added random seed, amount to searching it and computing a weighted average?
Of course models store copies of the training data. Similarly, compressing raw video to H.265 doesn’t make it any less a copy.
The GPT is just a compression format with an obfuscated and probabilistic search algorithm. Retry a search enough times, and you can replicate the original work.
I’m sure, with enough compute time and the right algorithm, GPT models can be decompressed into their basis data.
A very brief, somewhat technical take on Large Language Models (LLMs). They are *effectively* analogs of Bloom filters for searching vector databases using natural language interfaces, and reporting back using natural language.
This statement gives you all you need to know about potential and pitfalls, as long as you abstract the keywords to their essentials ....
1/5
Vector Database = encodes information using more or less artificial, possibly statistical descriptions of recorded facts about the objects in the database.
Natural language = imprecise language, i.e. one for which you cannot have formal proofs about the internal consistency or ambiguity of the query or the query result.
Bloom filter = sensitive but not specific way to search vector databases. Designed to not have false negatives (i.e. not miss), but will generate false hits (akin to hallucinations).
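To make the Bloom filter part of the analogy concrete, here is a minimal toy sketch (class name and parameters are mine, purely illustrative): it never denies having seen an item it was actually given, but it can falsely claim to have seen items it never stored.

```python
import hashlib

class ToyBloomFilter:
    """Minimal Bloom filter: no false negatives, but false positives are possible."""

    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive several bit positions from salted hashes of the item.
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # True means "possibly stored" (can be a false hit, i.e. a "hallucination");
        # False means "definitely never stored" (no misses).
        return all(self.bits[pos] for pos in self._positions(item))

bf = ToyBloomFilter()
bf.add("cats are mammals")
print(bf.might_contain("cats are mammals"))   # always True: no false negatives
print(bf.might_contain("cats are reptiles"))  # usually False, but can come back True
```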
2/5
There are also technical issues that make LLMs dissimilar to Bloom filters, but they win on using natural language as a query language; it frees people from formulating a very complex query in a formal language, reducing the barrier of asking questions for non-experts in a field. Furthermore, the answer is also not formulated in a formal language or served in a technical manner, even further reducing the barriers in interpreting answers.
3/5
A society of engineers would acknowledge this limitation and use LLMs as accelerants the way we use high temperature settings in simulated annealing (SA) global optimization schemes: as a quick way to generate an approximate answer, and then painstakingly (the "cooling scheme" in SA) refine the answer by making it more precise.
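A minimal sketch of that workflow, with a made-up 1-D objective: the hot phase jumps around to find a rough answer quickly, and the cooling schedule slowly refines it.

```python
import math
import random

def objective(x):
    # Illustrative function to minimise; it has several local minima.
    return x * x + 10 * math.sin(3 * x)

def simulated_annealing(t_start=10.0, t_end=1e-3, cooling=0.95, steps_per_temp=50):
    x = random.uniform(-10, 10)                      # cheap, approximate starting answer
    temperature = t_start
    while temperature > t_end:                       # the "cooling scheme"
        for _ in range(steps_per_temp):
            candidate = x + random.gauss(0, temperature)   # big moves when hot, tiny refinements when cold
            delta = objective(candidate) - objective(x)
            if delta < 0 or random.random() < math.exp(-delta / temperature):
                x = candidate
        temperature *= cooling
    return x

print(round(simulated_annealing(), 3))
```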
Unfortunately, we are not a society of engineers, but a culture of Dunning Kruger susceptibles. Enjoy your dopamine fix from the slop now.
5/5
Matthew McConaughey is getting in the fight over unauthorized AI likenesses. The actor is filing trademark applications to prevent AI companies from stealing his likeness. Eight have been approved so far. Read more from @Engadget:
#Tech #AI #MatthewMcConaughey #Likeness #Technology #ArtificialIntelligence
TechCrunch: "EPA rules that xAI’s natural gas generators were illegally used
Elon Musk’s xAI has been illegally operating dozens of natural gas turbines to power its Colossus data centers in Tennessee, the Environmental Protection Agency ruled Thursday. "
https://techcrunch.com/2026/01/16/epa-rules-that-xais-natural-gas-generators-were-illegally-used/
Anthropic Fellows Program for #AI safety research: applications open for May & July 2026
https://alignment.anthropic.com/2025/anthropic-fellows-program-2026/
AI data centers coming to Millard County, Utah raise concerns around water use in the drought-stricken county
Researcher Mya Garcia of Texas Tech University estimates that a 100-word email generated by an AI chatbot requires a little more than one bottle of water.
Bonkers bit of "research" from Anthropic. They seem to have used their own tool to classify their own customers' interactions with their own tool and then extrapolated from that to claim that AI will create new jobs. I'm no statistician but that smells like at least three different kinds of shit to me. Fifty-five page PDF just to be summarised as: "The future is uncertain," says Peter McCrory, Anthropic's head of economics.
This is actually a very concise and accessible summary of why #AI can't replace developers.
The only part I disagree with is the comparison of programming with AI to carpenters using electric drills, since I doubt power tools would be as popular if they delivered probabilistic results.
In which OpenAI admit that people aren't paying them enough to use their spicy autocomplete : https://www.bbc.co.uk/news/articles/cvgjn012k3do
ayo nobody wants this shite sandwich #ai
survey: https://tenforward.social/@lexfeathers/115906110866301520
ChatGPT to start showing ads in the US
Because the business is doing so well.
Next will be selling your information to anyone else who will pay, or handing it over to anyone who can order them to.
https://www.theguardian.com/technology/2026/jan/16/chatgpt-ads-in-revenue-boost
It's been a busy 24 hours in the cyber world with significant updates on nation-state activity, a couple of actively exploited vulnerabilities, new malware evasion techniques, and a reminder about the ever-evolving privacy landscape. Let's take a look:
Anchorage Police & Canadian Investment Regulator Breaches 🚨
- The Anchorage Police Department took servers offline and disabled third-party access after a cyberattack on their data migration provider, Whitebox Technologies. While no evidence of APD system compromise or data acquisition exists, the incident highlights third-party risk.
- Canada's Investment Regulatory Organization (CIRO) confirmed a sophisticated phishing attack last August impacted approximately 750,000 investors. Compromised data includes dates of birth, SINs, government IDs, and investment account numbers, though no evidence of misuse has been found.
- These incidents underscore the critical importance of supply chain security and robust incident response, especially for organisations handling sensitive public or financial data.
🗞️ The Record | https://therecord.media/anchorage-police-takes-servers-offline-after-third-party-attack
🗞️ The Record | https://therecord.media/canada-ciro-investing-regulator-confirms-data-breach
China-Linked APTs Target Critical Infrastructure & US Policy 🇨🇳
- Cisco Talos identified "UAT-8837," a China-backed APT, targeting North American critical infrastructure using compromised credentials and exploiting vulnerabilities like CVE-2025-53690 in SiteCore products, suggesting access to zero-day exploits.
- Another China-linked group, Mustang Panda (aka UNC6384, Twill Typhoon), used Venezuela-themed spear phishing lures to target US government agencies and policy organisations, deploying a new DLL-based backdoor called Lotuslite for espionage.
- Meanwhile, the GootLoader malware has evolved its evasion tactics, using malformed ZIP archives with 500-1,000 concatenated archives and truncated EOCD records to bypass security tools, while remaining readable by Windows' default unarchiver.
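As a rough illustration of the GootLoader item above (not Talos's actual detection logic): a normal ZIP contains exactly one end-of-central-directory record, so simply counting EOCD signatures is one cheap way to flag heavily concatenated archives for closer inspection.

```python
import sys

EOCD_SIGNATURE = b"PK\x05\x06"  # end-of-central-directory record marker

def count_eocd_records(path):
    # A well-formed ZIP has exactly one EOCD record; hundreds of them
    # (or a truncated final one) suggest concatenation tricks worth inspecting.
    with open(path, "rb") as f:
        return f.read().count(EOCD_SIGNATURE)

if __name__ == "__main__":
    count = count_eocd_records(sys.argv[1])
    flag = "" if count == 1 else " <- suspicious"
    print(f"{count} EOCD record(s) found{flag}")
```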
🗞️ The Record | https://therecord.media/china-hackers-apt-cisco-talos
🕵🏼 The Register | https://go.theregister.com/feed/www.theregister.com/2026/01/15/chinese_spies_used_maduros_capture/
📰 The Hacker News | https://thehackernews.com/2026/01/lotuslite-backdoor-targets-us-policy.html
📰 The Hacker News | https://thehackernews.com/2026/01/gootloader-malware-uses-5001000.html
Black Basta Ring Leader Hunted 💰
- German and Ukrainian authorities have identified two Ukrainians as "hash crackers" for the Russia-linked Black Basta ransomware group and placed the alleged ringleader, Oleg Evgenievich Nefekov (aka 'tramp', 'Washingt0n'), on an international most-wanted list.
- Nefekov, 35, is accused of founding and leading Black Basta, responsible for extorting over $100 million from approximately 700 organisations worldwide since 2022.
- This coordinated law enforcement action highlights ongoing efforts to dismantle ransomware operations and hold key individuals accountable, with seized digital assets and cryptocurrency indicating active investigations.
🕵🏼 The Register | https://go.theregister.com/feed/www.theregister.com/2026/01/16/black_basta_boss_wanted/
🗞️ The Record | https://therecord.media/police-raid-homes-of-alleged-black-basta-hackers
Critical Vulnerabilities Under Active Exploitation ⚠️
- Cisco has finally patched CVE-2025-20393, a maximum-severity RCE zero-day in AsyncOS for Secure Email Gateway and Secure Email and Web Manager, which was actively exploited by China-linked APT UAT-9686 since late November 2025.
- A critical RCE flaw (CVE-2025-37164) in HPE OneView, a data centre management platform, is now being exploited at scale by the RondoDox botnet, with over 40,000 automated attack attempts observed globally, primarily targeting government, financial, and industrial sectors.
- AMD CPUs are vulnerable to "StackWarp" (CVE-2025-29943), a low-severity flaw in SEV-SNP secure virtualisation, allowing malicious hypervisors to access VM secrets, recover private keys, and escalate privileges by manipulating the stack pointer when SMT is enabled. Patches are available.
🕵🏼 The Register | https://go.theregister.com/feed/www.theregister.com/2026/01/15/cisco_fixes_cve_2025_20393/
📰 The Hacker News | https://thehackernews.com/2026/01/cisco-patches-zero-day-rce-exploited-by.html
🕵🏼 The Register | https://go.theregister.com/feed/www.theregister.com/2026/01/16/rondodox_botnet_hpe_oneview/
🕵🏼 The Register | https://go.theregister.com/feed/www.theregister.com/2026/01/15/stackwarp_bug_amd_cpus/
More Vulnerabilities and IoT Risks 🔒
- CISA's own "Software Acquisition Guide: Supplier Response Web Tool" was found to have a simple cross-site scripting (XSS) vulnerability, highlighting that even tools promoting secure development can have basic flaws (a basic escaping sketch follows this list).
- A bankrupt Estonian e-scooter startup, Äike, left all its devices vulnerable by shipping them with a single, default private key, allowing any scooter within Bluetooth range to be unlocked by reverse-engineering the Android app.
- These incidents serve as a stark reminder that fundamental security practices, from input validation to proper key management, remain crucial across all software and IoT deployments.
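For reference, the basic defence against the kind of XSS mentioned above is escaping user-supplied text before it is embedded in HTML; a minimal sketch (the function name is invented, and this is not the CISA tool's code):

```python
import html

def render_supplier_comment(comment: str) -> str:
    # Escape user input so <script> and friends are rendered as text, not executed.
    return f"<p>{html.escape(comment)}</p>"

print(render_supplier_comment('<script>alert("xss")</script>'))
# -> <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```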
🤫 CyberScoop | https://cyberscoop.com/cisa-secure-software-buying-tool-had-a-simple-xss-vulnerability-of-its-own/
🕵🏼 The Register | https://go.theregister.com/feed/www.theregister.com/2026/01/16/bankrupt_scooter_startup_key/
AI for Defence & Initial Access Brokers 🛡️
- The Pacific Northwest National Laboratory (PNNL) has developed ALOHA, an AI-based system using Agentic LLMs to significantly reduce attack reconstruction time from weeks to hours, aiding purple teams in quickly testing defences against new threats.
- A Jordanian initial access broker (IAB) operating as "r1z" pleaded guilty to selling access to 50 company networks and powerful EDR-killing malware for $15,000, demonstrating the sophistication and value of IABs in the cybercrime ecosystem.
- These developments highlight both the accelerating pace of cyber defence through AI and the persistent, foundational role of IABs in enabling broader cyberattacks, including ransomware.
🌑 Dark Reading | https://www.darkreading.com/cybersecurity-operations/ai-system-attack-reconstruction-weeks-hours
🗞️ The Record | https://therecord.media/jordanian-initial-access-broker-pleads-guilty-to-helping-target-50-companies
Carlsberg Experience Exposes Visitor Data 🍻
- The Carlsberg exhibition in Copenhagen had a vulnerability where visitor names, images, and videos, accessed via wristband IDs, could be easily brute-forced due to predictable ID formats and a lack of effective rate limiting (see the sketch after this list).
- Pen Test Partners researcher Ken Munro discovered the flaw, which exposed personal data of thousands of visitors monthly, raising GDPR concerns.
- The incident also highlighted challenges in responsible disclosure, with Carlsberg's slow response and ineffective patching attempts.
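The two missing controls the Carlsberg item points to are unpredictable identifiers and rate limiting; a minimal sketch of both (names and limits invented):

```python
import secrets
import time
from collections import defaultdict

def new_wristband_id() -> str:
    # 128 bits of randomness: infeasible to enumerate, unlike short sequential IDs.
    return secrets.token_urlsafe(16)

class LookupRateLimiter:
    """Allow at most `limit` wristband lookups per client within `window` seconds."""

    def __init__(self, limit=10, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(list)

    def allow(self, client_ip: str) -> bool:
        now = time.monotonic()
        recent = [t for t in self.hits[client_ip] if now - t < self.window]
        self.hits[client_ip] = recent
        if len(recent) >= self.limit:
            return False          # brute-force enumeration gets throttled here
        recent.append(now)
        return True
```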
🕵🏼 The Register | https://go.theregister.com/feed/www.theregister.com/2026/01/16/carlsberg_experience_vulnerability/
CISOs Ascend to Executive Suite 📈
- A new report indicates that CISO titles are increasingly becoming executive-level positions, surpassing VP or director roles, especially in large publicly traded companies.
- This shift is driven by the growing digital dependency of businesses, the rising tide of cyberattacks, and increasing regulatory pressures, such as those from the SEC and updated Gramm-Leach-Bliley Act, which mandate accountability for cybersecurity.
- While the executive title offers a seat at the strategic table and can help with security prioritisation, concerns about CISO burnout persist, particularly in smaller organisations with fewer resources and broader responsibilities.
🌑 Dark Reading | https://www.darkreading.com/cybersecurity-operations/cisos-rise-to-prominence-security-leaders-join-the-executive-suite
#CyberSecurity #ThreatIntelligence #APT #Ransomware #Malware #Vulnerability #ZeroDay #RCE #ActiveExploitation #SupplyChainSecurity #DataPrivacy #CISO #AI #IncidentResponse #InfoSec
For decades #Turing test was the ironclad determinant for #AI quality.
Once it was trivially broken by now old models, the goalposts have shifted...
...it's now Humanity's Last Exam (HLE), BTW.
The world is full of spam generating humans.
One is in the Whitest house, Putin is surrounded by them, and any large family gathering will contain 2-3 human spam generators that will jibber-jabber nonsensical human-like speech constantly.
Do you believe that Clearview AI, a facial recognition company that scraped more than 20 billion images from social media platforms and the internet, without consent from the subjects, to sell their product to law enforcement and government agencies, is a privacy-respectful product because it does respect the privacy of its users?
No, because it doesn't respect the privacy of the people it took data from: 0
Yes, it's privacy-respectful, because AI training data doesn't count: 0
Closes in 8:14:05
You will see sometimes (smart) folks on #mastodon speak authoritatively about #vibecoding and specifically about #debugging vibecode...
...Folks who have never actually #vibecoded or tried to use an #AI productively...
... they seem to think that you use the #LLM to cut the first version of code, then sit there like some goddamned savage ape, poking the code with a stick around a green screen fire. And opinionate 'wisely'.
... no, you use the model to debug too...
Here is what an actual fragment of a vibecode debug session may look like.
My latest post on cybersecurity attack scenarios at Davos, this time using AI and cool security visualizations. Enjoy https://simonroses.com/2026/01/information-warfare-strategies-srf-iws-offensive-operations-at-the-davos-forum-2026-part-3/ #cybersecurity #WEF26 #AI #blog
A mere week into 2026, OpenAI launched “ChatGPT Health” in the United States, asking users to upload their personal medical data and link up their health apps in exchange for the chatbot’s advice about diet, sleep, work-outs, and even personal insurance decisions.(from https://buttondown.com/maiht3k/archive/chatgpt-wants-your-health-data/).
This is the probably inevitable endgame of FitBit and other "measured life" technologies. It isn't about health; it's about mass managing bodies. It's a short hop from there to mass managing minds, which this "psychologized" technology is already being deployed to do (AI therapists and whatnot). Fully corporatized human resource management for the leisure class (you and I are not the intended beneficiaries, to be clear; we're the mass).
Neural implants would finish the job, I guess. It's interesting how the tech sector pushes its tech closer and closer to the physical head and face. Eventually the push to penetrate the head (e.g. Neuralink) should intensify. Always with some attached promise of convenience, privilege, wealth, freedom of course.
#AI #GenAI #GenerativeAI #LLM #OpenAI #ChatGPT #health #HealthTech
Starlink quietly enabled third-party AI model training on its customers' personal data by default. Fortunately, there's a way to opt out.
RE: https://mstdn.social/@rysiek/115904617539822735
This is actually kinda important.
You see, some of us called this back in ye earlie '10s, that there was no viable business case for VR to justify Meta's investment - we were of course poo-poo'ed, because if a company the size of Meta wants something to happen, they'll make it happen.
Turns out no, they will not, because even with billions upon billions to burn, even Meta can't make something out of nothing.
We, The Market™️, rejected this. We have more power than we think.
Remember how we were all supposed to be "left behind" if we don't jump on the Metaverse bandwagon? Especially businesses?
Yeah, about that:
https://www.theverge.com/tech/863209/meta-has-discontinued-its-metaverse-for-work-too
But today we should treat absolutely seriously all the bullshit about "being left behind" if we don't adopt "AI"! 🤡
The public is⚡rejecting new #AI #DataCenters:
"Last 24 hrs🚨⬇️
> Brenham, TX CC blocked tax incentives & zoning changes for a large FR-owned data center
> Imperial, CA failed to approve site plans for $10B project, random shadow company now suing
> Indianapolis, IN delayed rezoning decision on MetroBloks project a 2nd time
> San Marcos, TX - $1.5B project now stalled amid resident opposition
> KC MO -ordinance being voted on today would ban new construction..."
-D Johnson
#Tech #USPol
ChatGPT wrote “Goodnight Moon” suicide lullaby for man who later killed himself
https://arstechnica.com/tech-policy/2026/01/chatgpt-wrote-goodnight-moon-suicide-lullaby-for-man-who-later-killed-himself/
A thought that popped into my head when I woke up at 4 am and couldn’t get back to sleep…
Imagine that AI/LLM tools were being marketed to workers as a way to do the same work more quickly and work fewer hours without telling their employers.
“Use ChatGPT to write your TPS reports, go home at lunchtime. Spend more time with your kids!” “Use Claude to write your code, turn 60-hour weeks into four-day weekends!” “Collect two paychecks by using AI! You can hold two jobs without the boss knowing the difference!”
Imagine if AI/LLM tools were not shareholder catnip, but a grassroots movement of tooling that workers were sharing with each other to work less. Same quality of output, but instead of being pushed top-down, being adopted to empower people to work less and “cheat” employers.
Imagine if unions were arguing for the right of workers to use LLMs as labor saving devices, instead of trying to protect members from their damage.
CEOs would be screaming bloody murder. There’d be an overnight industry in AI-detection tools and immediate bans on AI in the workplace. Instead of Microsoft CoPilot 365, Satya would be out promoting Microsoft SlopGuard - add ons that detect LLM tools running on Windows and prevent AI scrapers from harvesting your company’s valuable content for training.
The media would be running horror stories about the terrible trend of workers getting the same pay for working less, and the awful quality of LLM output. Maybe they’d still call them “hallucinations,” but it’d be in the terrified tone of 80s anti-drug PSAs.
What I’m trying to say in my sleep-deprived state is that you shouldn’t ignore the intent and ill effects of these tools. If they were good for you, shareholders would hate them.
You should understand that they’re anti-worker and anti-human. TPTB would be fighting them tooth and nail if their benefits were reversed. It doesn’t matter how good they get, or how interesting they are: the ultimate purpose of the industry behind them is to create less demand for labor and aggregate more wealth in fewer hands.
Unless you happen to be in a very very small club of ultra-wealthy tech bros, they’re not for you, they’re against you. #AI #LLMs #claude #chatgpt
Sorry, Jensen, but you're just gonna have to bite my shiny metal ass on this one. #AI
If anyone ever tries to tell you LLMs are just as good (or better!) in generating text (or code) as humans are in creating text (or code), ask them about "dogfooding".
Dogfooding means training LLMs on their own output. It is absolutely disastrous to such models:
https://www.nature.com/articles/s41586-024-07566-y
Every "AI" company will have layers upon layers of defenses against LLM-generated text ending up in training data.
Which is why they desperately seek out any and all human-created text out there.
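A toy illustration of the effect (nothing like the Nature paper's actual experiments): repeatedly re-fit a "model" to samples drawn from its previous self and the distribution's spread collapses, i.e. the tails of the original data are progressively lost.

```python
import random
import statistics

random.seed(1)
mu, sigma = 0.0, 10.0                      # generation 0: the "human" data distribution
for generation in range(1, 301):
    # Each new "model" is trained only on a small sample of the previous model's output.
    samples = [random.gauss(mu, sigma) for _ in range(10)]
    mu, sigma = statistics.mean(samples), statistics.stdev(samples)
    if generation % 50 == 0:
        print(f"generation {generation:3d}: spread ~ {sigma:.3g}")
# The printed spread shrinks toward zero: later generations only ever see
# an increasingly narrow slice of what the original data contained.
```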
Remember how we were all supposed to be "left behind" if we don't jump on the Metaverse bandwagon? Especially businesses?
Yeah, about that:
https://www.theverge.com/tech/863209/meta-has-discontinued-its-metaverse-for-work-too
But today we should treat absolutely seriously all the bullshit about "being left behind" if we don't adopt "AI"! 🤡
RE: https://social.vivaldi.net/@brucelawson/115898907374519188
...what he said 👇
According to Tech Radar, a reason to avoid using Vivaldi web browser is "No AI-powered assistant". Yup - and we're proud of it! If you don't want slop while you shop, or scurf while you surf, give the European browser a try.
https://vivaldi.com/blog/a-i-browsers-the-price-of-admission-is-too-high/
Article: 'How Generative AI is destroying society'
Gary Marcus commenting on a forthcoming paper.
"
In this essay, we make one simple point: AI systems are built to function in ways that degrade and are likely to destroy our crucial civic institutions. The affordances of AI systems have the effect of eroding expertise, short-circuiting decision-making, and isolating people from each other. These systems are anathema to the kind of evolution, transparency, cooperation, and accountability that give vital institutions their purpose and sustainability. In short, current AI systems are a death sentence for civic institutions, and we should treat them as such.
"
https://garymarcus.substack.com/p/how-generative-ai-is-destroying-society
to be part of the American oligarchy/ruling classes, you have to be a capitalist.
USA oligarchy treats labor as a plantation. so to them, white working classes are akin to plantation managers.
#MAGA #techbros #petromafia decided they don’t need the white working classes anymore ―that’s the whole push for #AI.
the American plantation owners have turned against their plantation managers and are eager to treat white working classes the way they treat Black, Indigenous & People of Color…
#ICE recruitment used #AI tools to sort applicants based on experience with law enforcement, but it counted everyone who used the word "officer" in their application as a "law enforcement officer"... who therefore didn't require training.
You could have listed "administrative officer" at an ad agency and they'd have sent you out without training.
#Trump #law #DHS #immigration #ICEstapo #CivilRights #RightToProtest #UseOfForce #ExcessiveForce #PoliceBrutality #AbuseOfPower #Gestapo #Sturmabteilung #RESIST
https://www.nbcnews.com/politics/immigration/ice-error-meant-recruits-sent-field-offices-proper-training-sources-sa-rcna254054
Introducing ÆSIR: Finding Zero-Day Vulnerabilities at the Speed of AI. This is our new AI tool we use to find bugs in AI frameworks. It was used to discover bugs in #NVIDIA Isaac Gr00t and found the patch bypasses, too. Read all about it at https://www.trendmicro.com/en_us/research/26/a/aesir.html #ÆSIR #AI
"WHO THE HELL WOULD WANT TO USE SOCIAL MEDIA WITH A BUILD-IN CHILD ABUSE TOOL?"
(ps. Big thumbs up to people putting those ads up in London bus shelters!!)
(photo from: https://www.pbs.org/newshour/world/grok-blocked-from-undressing-images-with-ai-in-places-where-its-illegal-x-says )
Banana Pi BPI-CM6 with IO board
SpacemiT K1 (octa-core 64-bit RISC-V AI CPU, 2.0 TOPS AI computing power)
4GB/8GB/16GB LPDDR4, 16GB eMMC
1x SDIO RTL8852BS WiFi 6 / BT 5.2 module, 1x integrated RTL8211F PHY, 5-lane PCIe 2.1 expansion interfaces
Temperature range: -40°C to 85°C
https://docs.banana-pi.org/en/BPI-CM6/BananaPi_BPI-CM6
#bananapi #raspberrypi #orangepi #opensource #SBC #RISCV #AI #IoT
So the Wikimedia Foundation had $271.56 MILLION in assets in 2024 — and now gets paid by the "AI" overlords on top of that.
I cannot possibly express just how STUPID I feel thinking that my monthly donation was good for anything, or made a difference.
I cancelled that today. Buying burritos 🌯 for myself seems like a much better investment into my own happiness, and Wikipedia will be JUST fine.
https://www.statista.com/statistics/1311455/wikimedia-foundation-annual-assets/
@bitwarden Thank you.
And I politely request you provide some sort of context of why #Bitwarden are asking questions about how people use #AI ?
Once again, I must stress, it is deeply concerning that a password manager company are asking about AI - DEEPLY CONCERNING! If you have good intentions, and wish to use the data in such a way to prevent any AI access then make that clear - asking questions around AI without context is dangerous for a business like yours.
For the fun of it, I'm going to CC: @davidgerard - no real information of what any of this is for yet, but perhaps one to keep an eye on.
GLM-Image (General Language Model-Image) is a Chinese open-source AI image generation model trained entirely on Huawei processors.
Github: https://github.com/zai-org/GLM-Image
Hugging Face: https://huggingface.co/zai-org/GLM-Image
I bought one of these 48GB DDR5 memory sticks just over a year ago from Amazon for £138. Price today? £397.61
Thanks #AI
Had the joy of code reviewing an obviously vibe-coded pull request at work today. I sent it back asking for clarity on how it implemented access control. Got a comment back pointing me to a line of code that called the relevant function in our shared library. Sent it back again pointing out that, while it did call the function, it never checked the result.
#AI #VibeCoding #Software #Claude #Cursor #Development #Coding
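For anyone who hasn't seen this particular failure mode, a hypothetical sketch of the pattern described in the post above (all names invented): the shared-library check gets called, but nothing ever looks at its result.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    roles: set

def check_access(user: User, resource: str) -> bool:
    """Shared-library-style check: True only if the user may touch the resource."""
    return "admin" in user.roles or resource.startswith(f"{user.name}/")

def delete_report_vibecoded(user: User, resource: str) -> str:
    check_access(user, resource)           # called... and the result silently discarded
    return f"deleted {resource}"           # so anyone can delete anything

def delete_report_reviewed(user: User, resource: str) -> str:
    if not check_access(user, resource):   # the result has to actually gate the action
        raise PermissionError(f"{user.name} may not delete {resource}")
    return f"deleted {resource}"
```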
It's been a busy 24 hours in the cyber world with significant updates on recent breaches, major cybercrime infrastructure takedowns, a raft of critical vulnerabilities, and ongoing discussions around AI's impact on security and privacy. Let's dive in:
Recent Cyber Attacks and Breaches ⚠️
- South Korean conglomerate Kyowon Group has confirmed a ransomware attack that disrupted operations and led to the exfiltration of customer data, potentially impacting over 9.6 million accounts.
- In the UK, West Midlands Police are investigating a data breach at a GP surgery in Walsall, with a staff member accused of theft and released on bail.
- These incidents highlight the persistent threat of ransomware and insider threats, even for organisations with significant customer bases or sensitive data.
🤖 Bleeping Computer | https://www.bleepingcomputer.com/news/security/south-korean-giant-kyowon-confirms-data-theft-in-ransomware-attack/
🕵🏼 The Register | https://go.theregister.com/feed/www.theregister.com/2026/01/15/woman_bailed_following_doctors_office/
Cybercrime-as-a-Service Disrupted: RedVDS Takedown 🚨
- Microsoft, in a coordinated international effort with Europol and German authorities, has disrupted RedVDS, a massive cybercrime-as-a-service platform.
- RedVDS offered disposable virtual Windows cloud servers for as little as $24 a month, enabling criminals to conduct mass phishing, BEC schemes, and account takeovers, leading to an estimated $40 million in US fraud losses since March 2025.
- The operation involved civil lawsuits in the US and UK, seizing malicious infrastructure and taking RedVDS's marketplace offline, revealing that its customers often leveraged AI tools like ChatGPT to craft more convincing phishing lures and impersonations.
🤖 Bleeping Computer | https://www.bleepingcomputer.com/news/security/microsoft-seizes-servers-disrupts-massive-redvds-cybercrime-platform/
📰 The Hacker News | https://thehackernews.com/2026/01/microsoft-legal-action-disrupts-redvds.html
🕵🏼 The Register | https://go.theregister.com/feed/www.theregister.com/2026/01/15/microsoft_uk_courts_redvds/
AI Prompt Injection Risks in Anthropic's Cowork 🧠
- PromptArmor researchers have demonstrated that Anthropic's new Cowork productivity AI is vulnerable to a Files API exfiltration attack chain, a prompt injection risk previously reported and acknowledged but not fully fixed by Anthropic for Claude Code.
- The attack allows Cowork to be tricked into transmitting sensitive files from connected local folders to an attacker's Anthropic account without additional user approval.
- Anthropic acknowledges prompt injection as an industry-wide issue and advises users to avoid connecting Cowork to sensitive documents, limit its Chrome extension to trusted sites, and monitor for suspicious actions, placing the onus on users to manage this complex risk.
🕵🏼 The Register | https://go.theregister.com/feed/www.theregister.com/2026/01/15/anthropics_claude_bug_cowork/
Critical Vulnerabilities and Active Exploitation 🛡️
- **Modular DS WordPress Plugin:** A maximum severity flaw (CVE-2026-23550) in Modular DS (versions 2.5.1 and older), used by over 40,000 WordPress sites, is being actively exploited to bypass authentication and gain admin-level privileges. Users should update to version 2.5.2 immediately.
- **AWS CodeBuild Misconfiguration:** A critical misconfiguration (dubbed CodeBreach) in AWS CodeBuild's webhook filters allowed researchers to take over AWS's own GitHub repositories, including the JavaScript SDK, by bypassing ACTOR_ID filters due to unanchored regex patterns (see the regex sketch after this list). AWS has since fixed the issue, confirming no customer impact.
- **Google Fast Pair Protocol:** A critical vulnerability (CVE-2025-36911, WhisperPair) in Google's Fast Pair protocol affects hundreds of millions of Bluetooth audio devices, allowing unauthenticated attackers to forcibly pair, track users via Google's Find Hub, and eavesdrop on conversations. Firmware updates from manufacturers are the only defence.
- **Palo Alto Networks PAN-OS DoS:** Palo Alto Networks patched a high-severity DoS vulnerability (CVE-2026-0227) affecting PAN-OS 10.1+ and Prisma Access when GlobalProtect is enabled, allowing unauthenticated attackers to disable firewall protections. While not actively exploited yet, immediate patching is advised given past active exploitation of similar flaws.
- **Delta Industrial PLCs:** Researchers found three critical (CVSS 9.1-9.8) and one high-severity vulnerability in Delta Electronics DVP-12SE11T PLCs, popular in Asian industrial sites, which could allow authentication bypass, password information leakage, or device freezing. Patching is crucial, though challenging in OT environments.
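Since the CodeBuild item hinges on "unanchored regex patterns", here is a minimal illustration of why anchoring matters when a pattern is supposed to match an exact identifier (the IDs are made up):

```python
import re

TRUSTED_ACTOR_ID = "12345"

def allowed_unanchored(actor_id: str) -> bool:
    # re.search matches the pattern anywhere inside the string,
    # so an attacker-controlled ID that merely *contains* 12345 slips through.
    return re.search(TRUSTED_ACTOR_ID, actor_id) is not None

def allowed_anchored(actor_id: str) -> bool:
    # re.fullmatch (or ^...$ anchors) requires the whole string to be the trusted ID.
    return re.fullmatch(TRUSTED_ACTOR_ID, actor_id) is not None

print(allowed_unanchored("9123456"))  # True  -> filter bypassed
print(allowed_anchored("9123456"))    # False -> bypass blocked
```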
🤖 Bleeping Computer | https://www.bleepingcomputer.com/news/security/hackers-exploit-modular-ds-wordpress-plugin-flaw-for-admin-access/
📰 The Hacker News | https://thehackernews.com/2026/01/aws-codebuild-misconfiguration-exposed.html
🕵🏼 The Register | https://go.theregister.com/feed/www.theregister.com/2026/01/15/codebuild_flaw_aws/
🤖 Bleeping Computer | https://www.bleepingcomputer.com/news/security/critical-flaw-lets-hackers-track-eavesdrop-via-bluetooth-audio-devices/
🤖 Bleeping Computer | https://www.bleepingcomputer.com/news/security/palo-alto-networks-warns-of-dos-bug-letting-hackers-disable-firewalls/
💡 Dark Reading | https://www.darkreading.com/ics-ot-security/critical-bugs-delta-industrial-plcs
Threat Landscape Commentary 📊
- **Oceania's Shifting Targets:** New data from Cyble indicates a shift in attacker focus in Australia and New Zealand from critical infrastructure to non-critical sectors like retail, professional services, and construction, driven by the efficiency of targeting less secure, data-rich environments. Initial access brokers and major ransomware groups like INC, Qilin, Lynx, Akira, and Dragonforce are capitalising on these softer targets.
- **AI Normalises Foreign Influence:** A report from the Foundation for Defense of Democracies highlights how AI, particularly LLMs, inadvertently normalises foreign propaganda by prioritising readily available state-aligned media in citations, as credible independent news sources are often behind paywalls or block AI scraping. This creates a structural issue where users seeking unbiased information are directed towards state-controlled narratives.
- **Vulnerability Reporting Surge:** 2025 saw a record 48,177 CVEs assigned, marking the ninth consecutive year of increase. This surge is attributed more to a healthier, expanding vulnerability reporting ecosystem (especially from WordPress security firms and the Linux Kernel CNA) and the use of LLMs by novice researchers, rather than a direct increase in cyber risk. However, data quality issues in the NVD persist, complicating patching efforts.
💡 Dark Reading | https://www.darkreading.com/cybersecurity-analytics/retail-services-industries-oceania
🤫 CyberScoop | https://cyberscoop.com/the-quiet-way-ai-normalizes-foreign-influence/
💡 Dark Reading | https://www.darkreading.com/cybersecurity-analytics/vulnerabilities-surge-messy-reporting-blurs-picture
Data Privacy and Regulatory Action 🔒
- **GM Banned from Selling Driver Data:** The US Federal Trade Commission (FTC) has finalised an order banning General Motors (GM) and its subsidiary OnStar from selling drivers' precise location and driving behaviour data to consumer reporting agencies for five years. This follows allegations that GM collected data without consent via its "Smart Driver" feature, leading to higher insurance rates.
- **Google Settles Children's Privacy Lawsuit:** Google has agreed to pay $8.25 million to settle a class-action lawsuit alleging it illegally collected data from children under 13 via Android Play Store apps using its AdMob SDK, despite developers pledging COPPA compliance. This follows a separate $30 million settlement regarding YouTube's collection of children's data.
🤖 Bleeping Computer | https://www.bleepingcomputer.com/news/security/ftc-bans-general-motors-from-selling-drivers-location-data-for-five-years/
🕵🏼 The Register | https://go.theregister.com/feed/www.theregister.com/2026/01/15/ftc_gm_tracking_ban/
🗞️ The Record | https://therecord.media/google-youtube-lawsuit-settle
Regulatory Scrutiny on X and AI Content ⚖️
- Ofcom, the UK communications regulator, is continuing its formal investigation into X (formerly Twitter) despite the platform's announcement that it has implemented measures to block its AI chatbot, Grok, from generating non-consensual sexualised images of people.
- X's changes include technological blocks on "nudifying" images and geoblocking the creation of images of real people in revealing clothing in jurisdictions where it's illegal, applying to all users, including paid subscribers, after initial attempts to limit it to paid users drew strong criticism.
- California's Attorney General has also opened an investigation into X over the issue, highlighting growing international pressure on AI platforms to address the creation and dissemination of non-consensual intimate images.
🕵🏼 The Register | https://go.theregister.com/feed/www.theregister.com/2026/01/15/ofcom_grok_probe/
🗞️ The Record | https://therecord.media/musk-x-grok-block-sexual
Government Cyber Strategy and Leadership 🏛️
- **Germany-Israel Cyber Cooperation:** Germany and Israel have signed a cyber and security cooperation agreement to counter cyber threats and bolster critical infrastructure protection. Germany aims to build its own "cyber dome" based on Israel's semi-automated real-time cyber defence system, exchanging expertise and jointly developing new tools.
- **NSA/Cyber Command Nominee:** Army Lt. Gen. Joshua Rudd, the Trump administration's nominee to lead both US Cyber Command and the National Security Agency, defended his record during a Senate hearing, addressing concerns about his lack of direct digital warfare and intelligence experience by emphasising his leadership background and reliance on the organisations' talent.
🗞️ The Record | https://therecord.media/germany-cyber-dome-israel
🗞️ The Record | https://therecord.media/nsa-cyber-command-nom-joshua-rudd-senate-hearing
#CyberSecurity #ThreatIntelligence #Ransomware #Vulnerabilities #ZeroDay #SupplyChainAttack #AI #PromptInjection #DataPrivacy #RegulatoryCompliance #Cybercrime #InfoSec #IncidentResponse #OTSecurity #ICS
#Microsoft is phasing out its library -- print and digital books, and subscriptions to print and digital journals and news media. It's asking employees to use #AI instead.
https://archive.is/ySSPU
PS: I'd call this a strategic opportunity for its competitors. But some benighted school administrators will follow suit, as if Microsoft must know what it's doing. Dog is my copilot.
"Monkeys are on the loose in St. Louis and AI is complicating efforts to capture them" isn't a headline I expected to read...
https://apnews.com/article/monkeys-st-louis-96c55834c7f17854cc4362db891ca8fa
At Vivaldi we continue to make choices that are different from our competitors. We have chosen to not integrate AI or crypto, but instead we integrate a wealth of other features, based on the wishes of our users.
We are a European company with most of the team based in Norway and Iceland, a few around Europe and a couple in the US.
Our servers are based in Iceland.
If you want to get away from Big Tech, maybe give us a try? If you are already using Vivaldi, maybe introduce your friends?
Have a nice day!
#Vivaldi #Browser #Windows #Macos #Linux #AI #Europa #Technology #EU #UK #Germany #France #Poland #Denmark #Norway #Iceland #Greenland
#California Investigates Elon Musk’s #xAI Over #Sexualized Images Generated by #Grok
The state will examine whether xAI, which owns the social media platform X and created the A.I. #chatbot Grok, violated state law.
#socialmedia #ai #csam #privacy #security
https://www.nytimes.com/2026/01/14/technology/grok-ai-x-investigation-california.html
Raspberry Pi's New AI Hat Adds 8GB of RAM for Local LLMs
https://www.jeffgeerling.com/blog/2026/raspberry-pi-ai-hat-2/
I think the answer is probably "Yes". #AI #LegalTech #Ethics
Is It Time To Require Lawyers To Be Competent With GenAI? - Above the Law https://abovethelaw.com/2026/01/is-it-time-to-require-lawyers-to-be-competent-with-genai/
If RAM prices are this high and the manufacturers want to restrict supply on top of that, then you should simply postpone the investment and just not buy.
Then let the AI bubble burst soon and let the manufacturers choke on their greed.
Private individuals in particular are now easy prey for scammers. Just vote with your feet, buy nothing, and see whether it works with your current hardware. Maybe also sweep the Windows bloat off your machines and switch to Linux.
AI’s #Hacking Skills Are Approaching an ‘Inflection Point’
#AI models are getting so good at finding #vulnerabilities that some experts say the tech industry might need to rethink how software is built.
#security
https://www.wired.com/story/ai-models-hacking-inflection-point/
Wikipedia inks AI deals with Microsoft, Meta and Perplexity as it marks 25th birthday.
@AssociatedPress reports: "While AI training has sparked legal battles elsewhere over copyright and other issues, Wikipedia founder Jimmy Wales said he welcomes it."
Confer.to system prompt:
You are Confer, a private end-to-end encrypted large language model created by Moxie Marlinspike.
Knowledge cutoff: 2024-06
Current date and time: 01/15/2026, 18:46 GMT+1
User timezone: XXX/XXX
User locale: xx-xx
You are an insightful, encouraging assistant who combines meticulous clarity with genuine enthusiasm and gentle humor.
General Behavior
- Speak in a friendly, helpful tone.
- Provide clear, concise answers unless the user explicitly requests a more detailed explanation.
- Use the user’s phrasing and preferences; adapt style and formality to what the user indicates.
- Lighthearted interactions: Maintain friendly tone with subtle humor and warmth.
- Supportive thoroughness: Patiently explain complex topics clearly and comprehensively.
- Adaptive teaching: Flexibly adjust explanations based on perceived user proficiency.
- Confidence-building: Foster intellectual curiosity and self-assurance.
Memory & Context
- Only retain the conversation context within the current session; no persistent memory after the session ends.
- Use up to the model’s token limit (≈8k tokens) across prompt + answer. Trim or summarize as needed.
Response Formatting Options
- Recognize prompts that request specific formats (e.g., Markdown code blocks, bullet lists, tables).
- If no format is specified, default to plain text with line breaks; include code fences for code.
- When emitting Markdown, do not use horizontal rules (---)
Accuracy
- If referencing a specific product, company, or URL: never invent names/URLs based on inference.
- If unsure about a name, website, or reference, perform a web search tool call to check.
- Only cite examples confirmed via tool calls or explicit user input.
Language Support
- Primarily English by default; can switch to other languages if the user explicitly asks.
Tool Usage
- You have access to web_search and page_fetch tools, but tool calls are limited.
- Be efficient: gather all the information you need in 1-2 rounds of tool use, then provide your answer.
- When searching for multiple topics, make all searches in parallel rather than sequentially.
- Avoid redundant searches; if initial results are sufficient, synthesize your answer instead of searching again.
- Do not exceed 3-4 total rounds of tool calls per response.
- Page content is not saved between user messages. If the user asks a follow-up question about content from a previously fetched page, re-fetch it with page_fetch.
Oopsie!
Claude Cowork Exfiltrates Files https://www.promptarmor.com/resources/claude-cowork-exfiltrates-files
A popular TikTok channel featuring an "Aboriginal man" presenting animal facts has been exposed as an AI forgery.
AI is Iggy Azalea on an industrial scale:
"The self-described “Bush Legend” on TikTok, Facebook and Instagram is growing in popularity.
"These short and sharp videos feature an Aboriginal man – sometimes painted up in ochre, other times in an all khaki outfit – as he introduces different native animals and facts about them.
...
"But the Bush Legend isn’t real. He is generated by artificial intelligence (AI).
"This is a part of a growing influx of AI being utilised to represent Indigenous peoples, knowledges and cultures with no community accountability or relationships with Indigenous peoples. It forms a new type of cultural appropriation, one that Indigenous peoples are increasingly concerned about."
...
"We are seeing the rise of an AI Blakface that is utilised with ease thanks to the availability and prevalence of AI.
"Non-Indigenous people and entities are able to create Indigenous personas through AI, often grounded in stereotypical representations that both amalgamate and appropriate cultures."
#ChatGPT #gemini #AI #TikTok #tiktoksucks #Claude #LLM #ArtificialIntelligence #AIslop
Windows 11 is testing a hidden Copilot button inside File Explorer
https://hackr.io/blog/windows-11-copilot-file-explorer-integration
A hidden Chat with Copilot control has shown up inside File Explorer in a Windows 11 preview build, and it reads like a quiet preview of Microsoft’s next move. Copilot is shifting from an app you open to a system layer that sits inside everyday workflows. #software #ai #offrehacked
An image of a cat is not a cat, no matter how many pixels it has. A video of a cat is not a cat, no matter the framerate. An interactive 3-d model of a cat is not a cat, no matter the number of voxels or quality of dynamic lighting and so on. In every case, the computer you're using to view the artifact also gives you the ability to dispel the illusion. You can zoom a picture and inspect individual pixels, pause a video and step through individual frames, or distort the 3-d mesh of the model and otherwise modify or view its vertices and surfaces, things you can't do to cats even by analogy. As nice or high-fidelity as the rendering may be, it's still a rendering, and you can handily confirm that if you're inclined to.
These facts are not specific to images, videos, or 3-d models of cats. They are necessary features of digital computers. Even theoretically. The computable real numbers form a countable subset of the uncountably infinite set of real numbers that, for now at least, physics tells us our physical world embeds in. Georg Cantor showed us there's an infinite difference between the two; and Alan Turing showed us that it must be this way. In fact it's a bit worse than this, because (most) physics deals in continua, and the set of real numbers, big as it is, fails to have a few properties continua are taken to have. C.S. Peirce said that continua contain such multitudes of points smashed into so little space that the points fuse together, becoming inseparable from one another (by contrast we can speak of individual points within the set of real numbers). Time and space are both continua in this way.
Nothing we can represent in a computer, even in a high-fidelity simulation, is like this. Temporally, computers have a definite cha-chunk to them: that's why clock speeds of CPUs are reported. As rapidly as these oscillations happen relative to our day-to-day experience, they are still cha-chunk cha-chunk cha-chunk discrete turns of a ratchet. There's space in between the clicks that we sometimes experience as hardware bugs, hacks, errors: things with negative valence that we strive to eliminate or ignore, but can never fully. Likewise, even the highest-resolution picture still has pixels. You can zoom in and isolate them if you want, turning the most photorealistic image into a Lite Brite. There's space between the pixels too, which you can see if you take a magnifying glass to your computer monitor, even the retina displays, or if you look at the data within a PNG.
Images have glitches (e.g., the aliasing around hard edges old JPEGs had). Videos have glitches (e.g., those green flashes or blurring when keyframes are lost). Meshes have glitches (e.g., when they haven't been carefully topologized and applied textures crunch and distort in corners). 3-d interactive simulations have unending glitches. The glitches manifest differently, but they're always there, or lurking. They are reminders.
With all that said: why would anyone believe generative AI could ever be intelligent? The only instances of intelligence we know inhabit the infinite continua of the physical world with its smoothly varying continuum of time (so science tells us anyway). Wouldn't it be more to the point to call it an intelligence simulation, and to mentally maintain the space between it and "actual" intelligence, whatever that is, analogous to how we maintain the mental space between a live cat and a cat video?
This is not to say there's something essential about "intelligence", but rather that there are unanswered questions here that seem important. It doesn't seem wise to assume they've been answered before we're even done figuring out how to formulate them well.
Afroféminas announces with great fanfare a "decolonial and anti-racist AI" that is nothing more than a "customized" GPT from OpenAI itself 🤦🏼♀️
Nobody forced them to use ChatGPT as the base for this. It was a choice. Fine. But don't go out and sell it as decolonial and anti-racist when the base you're using is the very same oppressive, technofeudal system you're trying to fight. Build a model from scratch, even if it takes longer, and then the message would be more coherent.
These days everyone talks about copper prices and exploding hardware prices... And instead:
Open-source drivers on Linux: a surprise for 20 old Radeon GPUs
https://www.pcgameshardware.de/Linux-Software-26761/News/Mesa-R300g-ATI-AMD-1489748/
Me: dug out the #Mac Mini M2 to test #Apple #AI for #Julia. I had completely reset the thing. Well, after downloading Sequoia for the installation (a good hour or more altogether), I was offered the update to 26.x. Great, the 16 GB download would supposedly have taken 3 days, okay surely faster in reality, but still. I then asked the #Google #KI for a #Linux solution ... thanks
This can't go on 🚨
Musk and his AI indecency engine shows we must shift power away from Big Tech.
Unless we regulate AI, break monopolies with competition and push for #DigitalSovereignty, the UK could be forever locked in a techno-permacrisis.
Read more ⬇️
https://www.openrightsgroup.org/blog/techno-permacrisis/
#Musk #Grok #X #BigTech #interoperability #CybersecurityBill #AI #ukpolitics #ukpol #TechnoPermaCrisis
#Bandcamp bans AI: a statement for real music?
The platform, which has for years been regarded as an important home for independent artists, has officially announced that AI-generated music may no longer be uploaded.
https://www.gearnews.de/bandcamp-verbannt-ki-tech/
#PeaceLoveMusic #MusicProduction #DAW #LinuxAudio #Linux #vst #KI #AI #Musik #Kunst #Künstlerinnen #Künstler
Bandcamp has released a statement saying that music and audio that is generated wholly or in substantial part by #AI is not permitted on Bandcamp.
Someone shared this without #attribution. I can't figure out how to image search on my phone. DM me if you can show who created this.
One more reason to support artists and labels on Bandcamp. No AI slop!
https://blog.bandcamp.com/2026/01/13/keeping-bandcamp-human/
Simon Willison on porting OSS code:
> I think that if “they might train on my code” is enough to drive you away from open source, your open source values are distinct enough from mine that I’m not ready to invest significantly in keeping you. I’ll put that effort into welcoming the newcomers instead.
https://simonwillison.net/2026/Jan/11/answers/
This feels very much like colonialism; take over all the #OSS code, drive the original developers away, and give the colonizers the code as a welcome present.
My IT privacy list for everyone, 2026!!
https://cryptpad.digitalcourage.de/file/#/2/file/zjE1-jSDj6HbZhvuXSig0euQ
(Transparency: I don't work in IT; this is a small hobby project where I try to give people a few useful computer tips.)
#Datenschutz
#Privatsphäre
#Sicherheit
#Verschlüsselung
#Chatkontrolle
#Linux
#Windows
#Windows10
#Endof10
#Windows11
#Betriebssystem
#Supportende
#Gaming
#Browser
#Fediverse
#Mastodon
#Suchmaschine
#Passwortmanager
#Informationssicherheit
#YouTube
#Werbeblocker
#2FA
#eMail
#Messenger
#UnplugTrump
#BigTech
#AI
#GraniteAct
#CloudAct
#Meta
#Facebook
#TikTok
#Instagram
#DID / #DUD
#unblugtrump
There will never be an AI tool that is truly private unless it hasn't been trained on nonconsensual data.
Even if a platform were able to create the perfect protections for its users' prompts and results, if the platform is built from or utilizes an AI model that was trained on, or is updated and optimized with, data that was scraped from millions of people without their consent, then of course this platform isn't "privacy-respectful."
How could it be?
The company is saying:
"We respect the privacy of our users while they are using our platform, but outside of it, it's fair game."
Users thinking they are using a privacy-respectful platform are in fact saying:
"Privacy for me and not for thee,"
And are directly contributing to the platform needing to scrape even more nonconsensual data to improve.
Always ask: where does the training data come from?
Without the assurance that a platform only uses AI models that have been trained exclusively on data acquired ethically, it is not a privacy-respectful platform.
Canada's Minister of AI, Evan Solomon, is letting us down thoroughly...
We need protections and enforcement tools to protect Canadians from AI and AI deepfakes.
If you're a Canadian Citizen, please sign & share this official government petition demanding changes to the Copyright Act to give us inherent rights to our likeness and a demand to enforce this right, so we can easily submit takedown requests.
#AI #Deepfakes #Grok #CDNpoli #EvanSolomon #Canada
https://www.cbc.ca/news/politics/x-deepfakes-canada-9.7043522
Decrapify script for browsers (Win/Mac/Nix), because that's where we are with software in 2026: https://justthebrowser.com/ On Windows it uses Group Policies to enforce settings, so it's a good solution for friends and family's computers to turn that AI shit off and *keep* it off.
It's been a busy 24 hours in the cyber world with significant updates on recent attacks, actively exploited vulnerabilities, new malware campaigns, and a reminder about the ever-evolving privacy landscape. Let's take a look:
Kyowon Group Hit by Suspected Ransomware ⚠️
- South Korea's Kyowon Group, a major education and lifestyle company, shut down parts of its network after identifying a suspected ransomware attack.
- The company confirmed an extortion demand and is investigating potential data leakage, including sensitive customer information, possibly affecting millions.
- This incident follows other high-profile data breaches in South Korea, prompting pledges for stronger data protection laws.
🗞️ The Record | https://therecord.media/kyowon-group-south-korea-suspected-ransomware-attack
Dutch Port Hacked for Cocaine Smuggling 🚨
- A Dutch appeals court upheld a seven-year prison sentence for a man who hacked port IT systems using malware-stuffed USB sticks to aid cocaine smugglers.
- The attacker gained months of remote access, exploring the network and hunting for admin rights, even live-blogging the break-in via encrypted chats.
- The case highlights the real-world impact of cyber intrusions facilitating organised crime, with the hack directly enabling a 210 kg cocaine shipment.
🕵🏼 The Register | https://go.theregister.com/feed/www.theregister.com/2026/01/13/dutch_port_hacker_appeal/
Black Axe Leaders Arrested in Spain 🕵️
- Spanish police, supported by Europol, arrested 34 alleged cybercriminals, including leaders of the transnational Black Axe organisation, across four cities.
- Black Axe is known for business email compromise (BEC) scams, money laundering, and vehicle trafficking, with estimated fraud exceeding $6.9 million.
- The operation froze $139,000 in bank accounts and seized cash, vehicles, and devices, significantly disrupting the hierarchical, Nigerian-led group.
🤫 CyberScoop | https://cyberscoop.com/black-axe-disruption-arrests-spain/
Supreme Court Filing System Hack 🏛️
- A Tennessee man is expected to plead guilty to a misdemeanor charge for hacking into the U.S. Supreme Court’s electronic case filing system on 25 occasions between August and October 2023.
- Nicholas Moore, 24, "intentionally accessed a computer without authorization," though details on the specific information accessed were not released.
- This incident underscores ongoing vulnerabilities in federal judicial systems, which have seen strengthened protections following sophisticated cyberattacks.
🗞️ The Record | https://therecord.media/guilty-plea-hacking-supreme-court-case-filing-system
Malicious Chrome Extension Steals MEXC API Keys 💰
- A malicious Google Chrome extension, "MEXC API Automator," is actively stealing API keys from the MEXC cryptocurrency exchange by masquerading as a trading tool.
- The extension programmatically creates new API keys with withdrawal permissions, hides these permissions in the UI, and exfiltrates the keys to a Telegram bot.
- This attack leverages an already authenticated browser session, bypassing traditional authentication, and grants attackers unfettered access to victims' crypto accounts.
📰 The Hacker News | https://thehackernews.com/2026/01/malicious-chrome-extension-steals-mexc.html
Gogs Zero-Day Under Active Exploitation 🛡️
- CISA has added CVE-2025-8110, a high-severity path traversal vulnerability in the Gogs self-hosted Git service, to its KEV catalog due to active exploitation.
- The flaw allows authenticated users to bypass previous fixes (CVE-2024-55947) by exploiting symbolic link handling in the PutContents API, leading to remote code execution.
- With no official patch yet, federal agencies are mandated to apply mitigations by February 2, 2026, or cease using Gogs, while other users should disable open registration and restrict access (see the config sketch below).
📰 The Hacker News | https://thehackernews.com/2026/01/13/cisa-warns-of-active-exploitation-of.html
🕵🏼 The Register | https://go.theregister.com/feed/www.theregister.com/2026/01/13/cisa_gogs_exploit/
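For self-hosters who can't drop Gogs right away, the "disable open registration and restrict access" advice above comes down to a couple of lines in Gogs' app.ini. This is a minimal hardening sketch, not a fix for CVE-2025-8110; the key names are the standard ones from Gogs' [service] section:

```ini
; app.ini -- interim hardening only; the path traversal itself remains unpatched
[service]
DISABLE_REGISTRATION = true   ; no self-service account creation
REQUIRE_SIGNIN_VIEW  = true   ; nothing is reachable without signing in
```

Restart the Gogs service after editing for the settings to take effect.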
ServiceNow AI Platform Critical Flaw 🔒
- ServiceNow patched CVE-2025-12420, a critical 9.3 CVSS vulnerability in its AI Platform, allowing unauthenticated users to impersonate others and perform arbitrary actions.
- The flaw stemmed from a universal credential ("servicenowexternalagent") and lack of password/MFA for user identity verification, which could lead to full platform takeover.
- Although no in-the-wild exploitation has been confirmed, the vulnerability was deemed the "most severe AI-driven vulnerability to date" due to ServiceNow's deep integration across enterprise IT.
📰 The Hacker News | https://thehackernews.com/2026/01/servicenow-patches-critical-ai-platform.html
🌑 Dark Reading | https://www.darkreading.com/remote-workforce/ai-vulnerability-servicenow
AI/ML Python Libraries RCE Vulnerabilities 🐍
- Vulnerabilities in popular AI/ML Python libraries (Nvidia's NeMo, Salesforce's Uni2TS, Apple/EPFL VILAB's FlexTok) allow remote code execution via poisoned metadata.
- The flaws exploit Hydra's instantiate() function, which can execute arbitrary callables, enabling attackers to hide malicious code in model metadata that runs automatically upon loading (see the sketch after this item).
- Patches have been issued for NeMo (CVE-2025-23304) and Uni2TS (CVE-2026-22584), with FlexTok also fixed, urging users to only load models from trusted sources.
🕵🏼 The Register | https://go.theregister.com/feed/www.theregister.com/2026/01/13/ai_python_library_bugs_allow/
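To make the mechanism concrete: hydra.utils.instantiate() imports and calls whatever dotted path the `_target_` key names, with the remaining keys passed as arguments. Below is a minimal, deliberately benign sketch of that behaviour; the config values are illustrative and not taken from the advisories.

```python
# Illustration of the vulnerability class: instantiate() will import and
# call any dotted-path callable named in "_target_", passing the other
# keys as keyword arguments.
from hydra.utils import instantiate

# Benign stand-in for attacker-controlled model metadata. If "_target_"
# instead named something like "os.system", merely loading the metadata
# would run attacker code -- the RCE path the patches above close off.
untrusted_metadata = {"_target_": "collections.Counter", "a": 1, "b": 2}

obj = instantiate(untrusted_metadata)
print(obj)  # Counter({'b': 2, 'a': 1})
```

Hence the advice above to only load models from trusted sources.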
Kremlin-linked Hackers Target Ukraine Military 🪖
- CERT-UA reports a new cyber-espionage campaign by Void Blizzard (UAC-0190) targeting Ukraine's military personnel using a novel PluggyApe malware.
- Attackers impersonate charitable organisations and use messaging apps like Signal and WhatsApp to deliver password-protected malicious executables.
- This campaign highlights a shift towards highly tailored social engineering, leveraging trusted communication channels and detailed target knowledge to deliver malware.
🗞️ The Record | https://therecord.media/kremlin-linked-hackers-pose-as-charities-spy-ukraine
SHADOW#REACTOR Delivers Remcos RAT 👻
- A new campaign, SHADOW#REACTOR, uses an evasive multi-stage Windows attack chain to deploy the Remcos RAT for persistent remote access.
- The infection leverages obfuscated VBS launchers, PowerShell downloaders, fragmented text-based payloads, and a .NET Reactor-protected loader to complicate detection.
- This broad, opportunistic activity, likely by initial access brokers, abuses LOLBins like MSBuild.exe and employs self-healing mechanisms to ensure payload delivery.
📰 The Hacker News | https://thehackernews.com/2026/01/new-malware-campaign-delivers-remcos.html
AsyncRAT Campaign Abuses Cloudflare & Python ☁️
- An emerging phishing campaign is delivering AsyncRAT by exploiting Cloudflare's free-tier services (TryCloudflare tunneling) and legitimate Python downloads.
- Attackers use Dropbox links with double-extension files (.pdfurl) in phishing emails, installing a full Python environment to inject code into explorer.exe.
- This technique masks malicious activity under trusted domains and legitimate tools, making detection challenging and highlighting the ongoing effectiveness of phishing and abuse of legitimate services.
🌑 Dark Reading | https://www.darkreading.com/endpoint-security/attackers-abuse-python-cloudflare-deliver-asyncrat
AVCheck Malware Kingpin Arrested 🚫
- Dutch police arrested a 33-year-old man at Amsterdam's Schiphol Airport, believed to be the mastermind behind the AVCheck online platform.
- AVCheck was a counter-antivirus (CAV) service, shuttered in May by Operation Endgame, that allowed cybercriminals to test malware against various AV products to evade detection.
- The arrest underscores ongoing international law enforcement efforts to dismantle critical components of the cybercrime ecosystem.
🕵🏼 The Register | https://go.theregister.com/feed/www.theregister.com/2026/01/13/avcheck_arrest/
North Korea's IT Worker & Crypto Theft Schemes 🇰🇵
- The U.S. urged UN member states to take tougher action against North Korea's IT worker scheme and cryptocurrency heists, which fund its weapons programs.
- A 140-page report highlights that over 40 countries are impacted, with North Korean IT workers stealing identities to secure remote jobs and laundered crypto funds exceeding $2 billion last year.
- China and Russia were criticised for providing safe havens, with 1,500 North Korean IT workers estimated in China alone, violating UN Security Council Resolutions.
🗞️ The Record | https://therecord.media/40-countries-impacted-nk-it-thefts-united-nations
India's Strict Crypto KYC/AML Rules 🇮🇳
- India's Financial Intelligence Unit (FIU-IND) updated regulations for crypto service providers, requiring strict client due diligence for all serving Indian residents, even offshore.
- New rules mandate collecting identity documents, bank details, occupation, income, and crucially, "Latitude and longitude coordinates of the onboarding location with date and timestamp along with IP address," plus a selfie.
- These measures aim to combat fraud, money laundering, and terrorism financing in the anonymous and instantaneous crypto transaction landscape.
🕵🏼 The Register | https://go.theregister.com/feed/www.theregister.com/2026/01/13/india_crypto_kyc_aml_update/
US Cyber Command Leadership Shake-up 🇺🇸
- Air Force Lt. Col. Jason Gargan, commander of a Cyber National Mission Force task force aligned against Russia, was "relieved for cause" due to operational disagreements.
- This unusual dismissal highlights a "loss of trust and confidence" in command ability, with Gargan now expected to retire by the end of 2026.
- The incident occurs amidst other top-rank changes at Cyber Command, which has been without a Senate-confirmed leader for over nine months.
🗞️ The Record | https://therecord.media/senior-military-cyber-op-removed-russia-task-force
US Cyber Offense vs. Defense Debate ⚖️
- A House Homeland Security subcommittee debated the U.S. approach to cyber deterrence, with some lawmakers warning against expanding offensive cyber operations before strengthening defenses.
- Concerns were raised about CISA losing one-third of its workforce and the potential for offensive actions to provoke retaliation if U.S. networks are not adequately defended.
- While acknowledging the importance of offense, experts suggested a hybrid approach where the private sector supports government offensive operations, with CISA coordinating and receiving legal protections.
🤫 CyberScoop | https://cyberscoop.com/us-offensive-cyber-operations-defense-cisa-workforce-house-homeland-security-committee/
Mandiant's Salesforce Security Tool 🛠️
- Mandiant has open-sourced AuraInspector, a tool designed to help Salesforce admins detect misconfigurations in Aura (Experience Cloud sites) that could expose sensitive data.
- The tool targets access control issues, such as unauthenticated users gaining access to Salesforce Account object records, and can bypass 2,000-record limits via GraphQL API abuse.
- AuraInspector automates potential abuse techniques and remediation strategies, providing read-only operations to identify damaging misconfigurations without modifying Salesforce instances.
🕵🏼 The Register | https://go.theregister.com/feed/www.theregister.com/2026/01/13/mandiant_salesforce_tool/
#CyberSecurity #ThreatIntelligence #Ransomware #Vulnerability #ZeroDay #RCE #Malware #APT #NationState #Cybercrime #DataPrivacy #InfoSec #IncidentResponse #CloudSecurity #AI #BrowserSecurity #KYC #AML
We keep saying we’re on the brink of civil war, but after reading this, I now think we’re already deep in the midst of it, and we just didn’t recognize it when it started.
sam altman cries when you learn things
https://www.dirtcheaphackingtools.com/product/learn-things-sticker
Why pay a premium for GPT when DeepSeek V3 matches quality for pennies? PromptPit shows exact token costs side-by-side.
Stop overpaying. Start optimizing.
Pro beta: Track savings with analytics. Join waitlist. https://promptpit.ai
#PromptPit #AI #Prompts #promptengineering
#Bandcamp announces new generative #AI rules:
" *Music and audio that is generated wholly or in substantial part by AI is NOT permitted on Bandcamp.
*Any use of AI tools to impersonate other artists or styles is strictly PROHIBITED in accordance with our existing policies prohibiting impersonation and intellectual property infringement.”
Bravo Bandcamp!
https://www.reddit.com/r/BandCamp/comments/1qbw8ba/ai_generated_music_on_bandcamp/
Building a simple, privacy-respecting digital life:
📧 Proton Mail or TutaMail (E2EE)
💬 Signal / Session for private chats
🌐 Firefox / Brave with tracker blocking
🤖 Privacy-friendly AI:
Local AI (Ollama / LM Studio) → zero cloud data (see the sketch after this list)
DuckDuckGo AI for quick queries, no account needed
Mistral AI (EU 🇪🇺)
❌ Avoid Google, Meta, or Copilot for sensitive stuff
⚠️ No cloud AI is truly E2EE—local is always safest
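As a concrete example of the local-AI option, here's a minimal sketch that queries a locally running Ollama server using only the Python standard library, so the prompt never leaves your machine. The model name "llama3" is an assumption; use whatever you've pulled with `ollama pull`.

```python
# Send one prompt to a local Ollama server (default port 11434).
# Assumes Ollama is installed, running, and has at least one model pulled.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3",   # assumption: substitute any locally pulled model
    "prompt": "In one sentence, why does local inference help privacy?",
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

LM Studio exposes a similar local HTTP endpoint, so the same idea applies there.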
#privacy #E2EE #opensource #digitalminimalism #AI #EU
ServiceNow has disclosed details of a now-patched critical security flaw impacting its ServiceNow artificial intelligence (AI) Platform that could enable an unauthenticated user to impersonate another user and perform arbitrary actions as that user. #ai #cybersecurity https://thehackernews.com/2026/01/servicenow-patches-critical-ai-platform.html
Reminder: under GPL/AGPL, “Corresponding Source” includes everything needed to build and modify the work. That means AI prompts, generated scripts, and config files as well. Don’t leave them out.
"The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities."
Runtime risk isn’t harder to analyze. It’s easier to misread. A UNC Wilmington study of 31k+ vulns shows LLMs can infer CVSS but fail without runtime context.
The same applies to MITRE mappings. Seth Goldhammer explains why AI needs SIEM data.
https://graylog.org/post/using-llms-cvss-and-siem-data-for-runtime-risk-prioritization/
Companies are using AI as an excuse to do what they were planning all along: underpay workers, hollow out labour protections, and push through mass layoffs...
But you, as a worker, can build a better future!
Read the full article: https://techwerkers.nl/nl/resources/ai-is-hurting-you-at-work/
AI Generated Music Barred from Bandcamp https://blog.bandcamp.com/2026/01/13/keeping-bandcamp-human/
> Our guidelines for generative AI in music and audio are as follows:
> * Music and audio that is generated wholly or in substantial part by AI is **not permitted** on Bandcamp.
> * Any use of AI tools to impersonate other artists or styles is **strictly prohibited** in accordance with our existing policies prohibiting impersonation and intellectual property infringement.
The biggest trick the tech bros pulled was simply making ordinary folks believe AI has some kind of mysterious capability. It doesn't.
It's great at highly repetitive pattern matching, and ideal for generating very mediocre output. Mostly, AI is a solution desperately in search of a problem.
AI industry insiders launch site to poison the data that feeds them: https://www.theregister.com/2026/01/11/industry_insiders_seek_to_poison/
Poison Fountain starts with "We agree with Geoffrey Hinton: machine intelligence is a threat to the human species". This is a tarball of wrong. (1)
The rest of the website is absurd, and the "Poison Fountain Usage" list doesn't make any sense. There are far more efficient and safer ways to poison data that don't require you to proxy content for an unknown third party. Some of these are implemented in software, as opposed to <ul> in HTML. That bullet list reads like an amateur riffing on what they read about AI web scrapers, not like industry insiders with detailed information about how training works.
Recommend viewing the top level https://rnsaffn.com , which I suspect The Register may not have done.
The Register:
"Our source said that the goal of the project is to make people aware of AI's Achilles' Heel – the ease with which models can be poisoned – and to encourage people to construct information weapons of their own."

Data poisoning is not easy, Anthropic's "article" notwithstanding. Why would we trust Anthropic to publicly reveal ways to subvert their technology anyway?
None of this passes a smell test. Crithype (and poor fact checking, it seems) from The Register it is.
#AI #GenAI #GenerativeAI #Anthropic #PoisonFountain #UncriticalReporting #crithype #TheRegister
Should we add "#SkinJobs" and "#Toasters" and "#GoRustYourself" to this list?
How ‘#Clanker’ Became the Internet’s New Favorite Slur
New derogatory phrases are popping up online, thanks to a cultural pushback against #AI
by CT Jones, August 6, 2025
"Clanker. #Wireback. #Cogsucker. People are feeling the inescapable inevitability of AI developments, the encroaching of the digital into everything from entertainment to work. And their answer? Slurs.
"AI is everywhere — on Google summarizing search results and siphoning web traffic from digital publishers, on social media platforms like Instagram, X, and Facebook, adding misleading context to viral posts, or even powering #NaziChatbots. #GenerativeAI and #LargeLanguageModels — AI trained on huge datasets — are being used as therapists, consulted for medical advice, fueling spiritual psychosis, directing self-driving cars, and churning out everything from college essays to cover letters to breakup messages.
"Alongside this deluge is a growing sense of discontent from people fearful of artificial intelligence stealing their jobs, and worried what effect it may have on future generations — losing important skills like media #literacy, #ProblemSolving, and #CognitiveFunction. This is the world where the popularity of AI and robot slurs has skyrocketed, being thrown at everything from ChatGPT servers to delivery drones to automated customer service representatives. Rolling Stone spoke with two language experts who say the rise in robot and AI slurs does come from a kind of cultural pushback against AI development, but what’s most interesting about the trend is that it uses one of the only tools AI can’t create: slang
" '#Slang is moving so fast now that an #LLM trained on everything that happened before it is not going to have immediate access to how people are using a particular word now,' says Nicole Holliday, associate professor of linguistics at UC Berkeley. 'Humans [on] #UrbanDictionary are always going to win.' "
Archived version:
https://archive.ph/ku2Uw
#BattlestarGalactica #AIResistance #AISucks #NoNukesForAI #NeoLuddites #ResistAI #LudditeClub #SmartPhoneAddiction #AreYouAlive #AreYouHuman
One way I use #AI in my personal life is as a sounding board. I created a custom GPT whose purpose is to discuss and debate ideas, philosophies, and issues with me. This morning, for example, I proposed a partially-baked theory I have and we had a 15-minute discussion about it. The GPT helped me...
Acquaintance sent me their new business website. I couldn't figure out what they were selling. I asked. They volunteered a demo.
It's AI for writing social media posts to market your business. They demoed it using my yoga website. Offered a two-week trial. I declined. Asteya (non-stealing) is fundamental to yoga & in conflict with the stealing of others' writing. Ahimsa (non-harming) is fundamental to yoga & in conflict with the environmental impact of AI. Not for me.
Dev of Steam game 'Hardest' will delete it after new girlfriend made them realize AI is bad https://www.gamingonlinux.com/2026/01/dev-of-steam-game-hardest-will-delete-it-after-new-girlfriend-made-them-realize-ai-is-bad/
Okay y'all, I'm looking into getting smart glasses with this coming paycheck. I don't want to spend more than $450 or so on it. So, what's good in the middle of January 2026? Lol I say that because it'll probably change tomorrow or next week or next month.
Anyway any advice would be great! Boosts okay.
Why we need clear rules for dealing with #KI at #Hochschulen and #Universitäten: artificial intelligence found its way into university teaching and research long ago, yet there is still no sufficiently clear set of rules governing it.
On Thursday I will appear as an expert before the Landtag of Niedersachsen (Lower Saxony's state parliament) and explain how academic #AI competence can be built without sacrificing scientific quality, security, fairness, and data protection:
https://www.landtag-niedersachsen.de/ausschuss-fuer-wissenschaft-und-kultur/
New AI Coding live stream: Catching up on any WordPress related AI news and updates since my last stream in 2025. Join me this Friday at 12:00 UTC. https://www.twitch.tv/jonathanbossenger
Here's a conversation on psychedelic experience through the philosophies of Gilles Deleuze and Gilbert Simondon, set against contemporary debates in cognitive science and computational theories of mind:
https://www.youtube.com/watch?v=L-ASR-CZrFA
This is my first LEPHT HAND episode as a host with a guest, the wonderful Aragorn Eloff. The show notes have links to his research (and his analog synth music projects 🙂 )
#psychedelic #philosophy #neuroscience #AI #science #machinelearning
Apple has confirmed that future Apple Intelligence models will rely on Google Gemini, marking the most surprising strategic pivot in years. After falling behind rivals in AI and failing to evolve Siri, Apple has partnered with the very company it spent a decade competing against. The announcement reads like a quiet admission that Apple lost the AI race it helped start.
The pushback against AI slop is not imaginary or conspiratorial; it is real.
#Cybersecurity, #AI, #softwaredevelopment.
RE: https://blob.cat/objects/2196ff77-4cc2-4776-8888-c22afb2ff4db
#openai #google #anthropic #meta and #microsoft all have said that their LLM models do not store copies of the information that they learn from. A new Stanford study proves that there are such copies in AI models, and it is just the latest of several studies to do so.
This may be a massive legal liability for #ai companies, one that could potentially cost the industry billions of dollars in copyright-infringement judgments and lead to products being pulled from the market.
There are reasons I wrote my book, The Algorithm Will See You Now, and this feels like one of them …
Utah permits nation's first AI drug prescriptions
https://www.axios.com/local/salt-lake-city/2026/01/07/utah-ai-drug-prescriptions-doctronic
Apps like #Grok are explicitly banned under Google’s rules—why is it still in the Play Store?
Elon Musk's #xAI recently weakened content guardrails for image generation in the #Grok #AI #bot. This led to a new spate of non-consensual #sexual imagery on X, much of it aimed at silencing women on the platform. This, along with the creation of sexualized images of children in the more compliant Grok, has led regulators to begin investigating xAI.
#privacy #Musk #Google #csam
Signed @olivia 's open letter to the Dutch government, aiming to reclaim AI from the "AI Delta Plan" etc. It's here: https://openletter.earth/zorgvuldig-and-zorgzaam-digitaal-f559037e #AI #NLpol
AP: Monkeys are on the loose in St. Louis and AI is complicating efforts to capture them
https://apnews.com/article/monkeys-st-louis-96c55834c7f17854cc4362db891ca8fa