OpenAI just released its answer to Claude Mythos
OpenAI is launching Daybreak, an AI initiative focused on detecting and patching vulnerabilities before attackers find them. Daybreak uses the Codex Security AI agent that launched in March to create a threat model from an organization's code, focus on possible attack paths, and validate likely v
Your AI Use Is Breaking My Brain
Google Says Hackers Used AI to Find Critical Security Flaw
(The Information)
Unitree Launches World’s First Mass-Produced Manned Mecha GD01
GPT-5.5 was used to flag fatal errors in FrontierMath problems
FrontierMath is supposed to be one of the hardest benchmarks for frontier models, and now Epoch is saying an AI-assisted review found fatal errors in about a third of the Tier 1-4 problems. Noam Brown says the initial flags came from GPT-5.5. Obviously we’ll have to wait for the corrected scores, but this is a p
GitLab announces workforce reduction and end of their CREDIT values
ChatGPT is now creating content for textbooks.
MTP on Unsloth
[https://huggingface.co/unsloth/Qwen3.6-27B-GGUF-MTP](https://huggingface.co/unsloth/Qwen3.6-27B-GGUF-MTP) [https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF-MTP](https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF-MTP) Unsloth released the models with the MTP layer preserved, but you still have to ch
Ilya Sutskever Says His OpenAI Stake is Worth About $7 billion
The former OpenAI chief scientist may be estranged from the company, but he still came to its defense as he testified on Monday.
Found a way to cool the DGX
Tap water keeps the temperature below 68 degrees Celsius at 95% GPU utilization running Qwen3.5-122b-a10B at Q6\_K precision: 110 GB of memory used, an 80k context window, and 18.77 tokens/second for continuous vision analysis. Not sure how often I'll have to change the water, but so far so good.
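A setup like this is easy to babysit with a small watchdog. The sketch below is an illustration, not from the post: the 68 C figure comes from the post above, and `nvidia-smi`'s temperature query is a standard flag, but the script and function names are made up here.

```python
import subprocess

def parse_gpu_temps(raw: str) -> list[int]:
    """Parse the output of:
       nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader
    into one integer Celsius reading per GPU."""
    return [int(line.strip()) for line in raw.splitlines() if line.strip()]

def hottest_gpu_temp() -> int:
    # Shells out to nvidia-smi; only works on a box with NVIDIA drivers.
    raw = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader"],
        text=True,
    )
    return max(parse_gpu_temps(raw))

# Demonstrate the parsing step on canned two-GPU output:
print(parse_gpu_temps("64\n68\n"))  # -> [64, 68]
```

Run `hottest_gpu_temp()` on a cron or in a loop and alert whenever it crosses the 68 C target to catch a failing water loop early.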
A new video model "Omni" from Google is leaked, user notes text coherence
https://x.com/i/status/2053824398503678108
I let AI build a tool to help me figure out what was waking me up at night
Local AI needs to be the norm
Canva's Magic Layers AI Changed "Palestine" to "Ukraine" in User Designs
Students boo commencement speaker after she calls AI next industrial revolution
Computer build using Intel Optane Persistent Memory - Can run 1 trillion parameter model at over 4 tokens/sec
As the title states, my build is indeed able to run a 1 trillion parameter model (in this case Kimi K2.5) locally at \~4 tokens/second. I thought r/LocalLLaMA would be interested in the build due to that stat line, and also due to the inclusion of an unusual part, Intel Optane Persistent Memory, whi
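A back-of-the-envelope check makes the \~4 tokens/second figure plausible. All numbers below are assumptions, not from the post: a Kimi K2-style 1T MoE activates roughly 32B parameters per token, and a \~Q4 quantization averages about 0.55 bytes per weight.

```python
# Assumed figures (not from the post): MoE active params and Q4-class
# bytes-per-weight; only the 4 tokens/second comes from the build report.
ACTIVE_PARAMS = 32e9     # assumed active parameters per token (MoE)
BYTES_PER_PARAM = 0.55   # rough Q4_K average bytes per weight (assumption)
TOKENS_PER_SEC = 4       # throughput reported in the post

bytes_per_token = ACTIVE_PARAMS * BYTES_PER_PARAM
bandwidth_gbs = bytes_per_token * TOKENS_PER_SEC / 1e9
print(f"~{bandwidth_gbs:.0f} GB/s of weight reads needed")  # -> ~70 GB/s
```

Around 70 GB/s of sustained reads is far below DRAM bandwidth but well above consumer SSDs, which is exactly the gap Optane Persistent Memory was built to fill; that is why a 1T MoE can stay interactive on this kind of build.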
What model for coding?
I am amazed by the development of local LLMs in the coding area. Right now I'm testing Qwen3.6 27B and it works quite well, even though it is not made for coding. Sometimes it randomly stops working immediately before a tool call; it might be misconfiguration. But my question is, what do people actually
Water Use Isn’t a Data Center Problem, It’s an AI Problem
Can AI save us from the AI industry’s endless thirst for water? Outlook not so good.
AA introduces Coding Agent Index - Performance Comparisons between Model & Harness Combinations
>**The Artificial Analysis Coding Agent Index includes 3 leading benchmarks that represent a broad spectrum of coding agent use:**
>➤ **SWE-Bench-Pro-Hard-AA**, 150 realistic coding tasks that frontier models struggle with, sampled from Scale AI’s SWE-Bench Pro
>➤ **Terminal-Bench v2**, 84 agen
Llama models: still valuable for finetuning or surpassed by everything new?
Hello there, people. I have noticed that people are pretty much ignoring Llama 3, 3.1, 3.2, and 3.3 these days; they never mention how their fine-tuning experience with those models goes. But we haven't been getting many entries in the 70 billion space. So is, for example, Llama 3.3 70B th