Meta's new open-source model is coming?
An internal model selector reveals several Avocado configurations currently under evaluation. These include:
- **Avocado 9B**, a smaller 9-billion-parameter version.
Stanford study outlines dangers of asking AI chatbots for personal advice
While there’s been plenty of debate about AI sycophancy, a new study by Stanford computer scientists attempts to measure how harmful that tendency might be.
How we monitor internal coding agents for misalignment
How OpenAI uses chain-of-thought monitoring to study misalignment in internal coding agents—analyzing real-world deployments to detect risks and strengthen AI safety safeguards.
Ask HN: Best stack for building a tiny game with an 11-year-old?
Police used AI facial recognition to wrongly arrest TN woman for crimes in ND
Miasma: A tool to trap AI web scrapers in an endless poison pit
What if AI doesn't need more RAM but better math?
LocalLLaMA 2026
Further human + AI + proof assistant work on Knuth's "Claude Cycles" problem
The first 40 months of the AI era
[P] Built an open source tool to find the location of any street picture
Hey guys, thank you so much for your love and support for Netryx Astra V2 last time. Many people aren't technically savvy enough to install the GitHub repo and test the tool right away, so I built a small web demo covering a 10 km radius of New York. It's completely free and uses the same p
Andrew Curran: Anthropic May Have Had An Architectural Breakthrough!
>Three weeks ago there were rumors that **one of the labs had completed its largest ever successful training run, and that the model that emerged from it performed far above both internal expectations and what people assumed the scaling laws would predict.** At the time these were only rumors, an
If AI is really making us more productive, why does it feel like we're working more, not less?
The promise of AI was the ultimate system optimisation: efficiency. On paper, the tools are delivering roughly what they promised:
- GitHub Copilot / Claude writes effective code.
- LLMs summarise the meeting minutes.
- Automations handle Jira tickets.

But I see a pattern:
TinyLoRA shows LoRA training works at 13 parameters + own experiments to verify claims
The TinyLoRA paper shows that we can alter model behavior with only a few parameters. [https://arxiv.org/pdf/2602.04118](https://arxiv.org/pdf/2602.04118) I tried replicating the paper and made a TinyLoRA implementation for Qwen3.5, and it does work; it's crazy to think about. I got the same resu
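For context, the core LoRA idea behind this is freezing the base weight W and training only a low-rank update, so the effective weight is W + BA. At rank 1 on a tiny layer the trainable-parameter count collapses to almost nothing. Here is a minimal NumPy sketch; the layer sizes and rank are illustrative (picked so the trainable count happens to be 13), not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 8, 5, 1          # toy sizes, not the paper's setup
W = rng.normal(size=(d_out, d_in))   # frozen base weight, never updated

# LoRA factors: only A and B would be trained
A = rng.normal(size=(rank, d_in)) * 0.01
B = np.zeros((d_out, rank))          # zero-init so the update starts as a no-op

def forward(x):
    # effective weight is W + B @ A; W itself stays frozen
    return (W + B @ A) @ x

x = rng.normal(size=d_in)
print(np.allclose(forward(x), W @ x))  # True: update contributes zero at init
print(A.size + B.size)                 # 13 trainable parameters (1*8 + 5*1)
```

With B zero-initialized, the adapted layer starts identical to the base model and only drifts as A and B are trained, which is why so few parameters can still steer behavior.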
It's not sci-fi anymore! A Chinese company, Unipath, has launched a household robot
Founder of GitLab battles cancer by founding companies
Dario Amodei: OpenAI President Brockman's $25 Million Donation to Pro-Trump Super PAC Is Evil; Also Compares Altman and Elon to Hitler and Stalin
Lots of shocking details from this WSJ article: https://www.wsj.com/tech/ai/the-decadelong-feud-shaping-the-future-of-ai-7075acde?st=7WRXF6 Interesting snippets from the article, but I recommend reading the full article. Very good insights into how Anthropic was formed: >In communication with
What even happened to DeepSeek?
Stanford Chair of Medicine: LLMs Are Superhuman Guessers
A Stanford study (co-authored by Fei-Fei Li) asked LLMs to solve tasks that required an image, without actually giving them the image. They answered the questions 10% better than radiologists on average, just by guessing the contents of the image from the prompt, even on questions
Taalas rumoured to etch Qwen 3.5 27B into silicon. At what price would you buy their PCIe card?
I posted about them before because of their incredible 17,000 tokens/second for Llama 3.1 8B. With production costs rumoured to be $300 to $400, would you buy a PCIe card for $600 to $800 that gets you 10,000 tokens/s of Qwen 3.5 27B intelligence with LoRA support? I myself feel torn. I wou