1.5M GitHub pull requests have had ads injected into them by Microsoft Copilot
microsoft/harrier-oss 27B/0.6B/270M
harrier-oss-v1 is a family of multilingual text embedding models developed by Microsoft. The models use decoder-only architectures with last-token pooling and L2 normalization to produce dense text embeddings. They can be applied to a wide range of tasks, including but not limited to **retrieval**.
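For anyone unfamiliar with the pattern, here is a minimal sketch of last-token pooling plus L2 normalization as it's typically done for decoder-only embedders; the checkpoint id is a placeholder, and harrier-oss's actual loading code may differ.

```python
# Sketch of last-token pooling + L2 normalization for a decoder-only
# embedding model. The model name below is a hypothetical placeholder.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_name = "microsoft/harrier-oss-270m"  # hypothetical checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

def embed(texts: list[str]) -> torch.Tensor:
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state        # (B, T, H)
    # Last-token pooling: take the hidden state of the final
    # non-padding token in each sequence.
    last = batch["attention_mask"].sum(dim=1) - 1        # index of last real token
    pooled = hidden[torch.arange(hidden.size(0)), last]  # (B, H)
    return F.normalize(pooled, p=2, dim=1)               # L2-normalize rows
```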
Mr. Chatterbox is a (weak) Victorian-era ethically trained model you can run on your own computer
Eli Lilly signs $2.75 billion deal with AI drug developer Insilico Medicine
US pharmaceutical giant Eli Lilly is betting big on AI-driven drug development, signing a $2.75 billion deal with Hong Kong-listed Insilico Medicine.
How the AI Bubble Bursts
Spain shuts airspace for US planes involved in Iran war
Mistral AI raises $830M in debt to set up a data center near Paris
Mistral AI is borrowing 830 million dollars to build a data center near Paris with nearly 14,000 NVIDIA GPUs. The banks are on board, but the risk is significant for a startup that likely isn't profitable yet.
Microsoft rolls out Copilot Cowork more broadly and lets AI models check each other's work
With "Cowork," Microsoft 365 Copilot is getting an AI assistant that handles entire workflows on its own. A new research tool also lets multiple AI models check each other's work. The article Microsoft rolls out Copilot Cowork more broadly and lets AI models check each other's work appeared first on
Semantic video search using local Qwen3-VL embedding, no API, no transcription
I've been experimenting with Qwen3-VL-Embedding for native video search, embedding raw video directly into a vector space alongside text queries. No transcription, no frame captioning, no intermediate text. You just search with natural language and it matches against video clips. The surprising part…
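The search side of a setup like this is just nearest-neighbor lookup in the shared embedding space. A minimal sketch, assuming you've already produced L2-normalized clip embeddings offline; the `embed_text` helper is a placeholder, not Qwen3-VL-Embedding's real API.

```python
# Search step over precomputed video-clip embeddings. Assumes unit-norm
# vectors, so a dot product equals cosine similarity.
import numpy as np

clip_embeddings = np.load("clips.npy")            # (N, D), one row per clip
clip_paths = open("clips.txt").read().splitlines()

def search(query_embedding: np.ndarray, k: int = 5) -> list[str]:
    scores = clip_embeddings @ query_embedding    # cosine similarity per clip
    top = np.argsort(-scores)[:k]                 # indices of the k best clips
    return [clip_paths[i] for i in top]

# query_embedding = embed_text("a dog catching a frisbee")  # placeholder helper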
Figure AI's Humanoid Walks into A Photoshoot By Itself!
What is the secret sauce Claude has and why hasn't anyone replicated it?
I've noticed something about Claude from talking to it. Its talking style is very, very distinct, much more of an individual than the other LLMs I know. I tried feeding the exact same system prompt Sonnet 4.5 uses to Qwen3.5 27B and it didn't change how it acted, so I ruled out the system prompt as the cause.
Computer use is now in Claude Code
Claude can open your apps, click through your UI, and test what it built, right from the CLI. Now in research preview on Pro and Max plans. Source: https://x.com/claudeai/status/2038663014098899416
I am definitely missing the pre-AI writing era
China announces its first automated manufacturing line capable of producing 10K humanoid robots per year - 1 robot every 30 minutes
UBTECH, AgiBot, and Unitree already seem to be producing humanoid robots at similar output rates. As of today this appears to be a new contender, whose brand hasn't been disclosed by CCTV or China Plus.
Mathematical methods and human thought in the age of AI
Running Qwen3.5-27B locally as the primary model in OpenCode
This weekend I wanted to test how well a local LLM can work as the primary model for an agentic coding assistant like OpenCode or OpenAI Codex. I picked Qwen3.5-27B, a hybrid architecture model that has been getting a lot of attention lately for its performance relative to its size, and set it up locally.
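For anyone reproducing this, a quick way to sanity-check a locally served model before wiring it into an agentic tool: most local servers (llama.cpp, vLLM, Ollama) expose an OpenAI-compatible endpoint you can hit directly. The URL, port, and model name below are assumptions about a typical local setup, not OpenCode's actual configuration.

```python
# Smoke-test a local OpenAI-compatible endpoint before pointing an
# agentic coding tool at it. Endpoint details are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="qwen3.5-27b",  # whatever name the local server registers
    messages=[{"role": "user",
               "content": "Write a Python one-liner to reverse a string."}],
)
print(resp.choices[0].message.content)
```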
Claude Code runs git reset --hard origin/main against project repo every 10 mins
Technical clarification on TurboQuant / RaBitQ for people following the recent TurboQuant discussion
I am Jianyang Gao, first author of the RaBitQ papers. I am posting this here because TurboQuant is now being discussed in `r/LocalLLaMA` in the context of local inference / KV-cache compression, and I think the community should have a technically precise comparison on the public record.
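For readers new to this family of methods, a toy illustration of the rotate-then-binarize idea at the core of RaBitQ-style 1-bit quantization follows. It omits the papers' rescaling factors and error bounds, so treat it as a sketch of the concept, not the authors' algorithm.

```python
# Toy sketch: random orthogonal rotation, then keep only sign bits, then
# estimate inner products from the codes. Omits RaBitQ's corrections.
import numpy as np

rng = np.random.default_rng(0)
D = 256

# Random orthogonal rotation via QR of a Gaussian matrix.
P, _ = np.linalg.qr(rng.standard_normal((D, D)))

def quantize(x: np.ndarray) -> np.ndarray:
    """1-bit code: signs of the rotated vector (D bits total)."""
    return np.sign(P @ x).astype(np.int8)

def estimate_ip(code: np.ndarray, q: np.ndarray) -> float:
    # Reconstruct the quantized vector as +/- 1/sqrt(D) in rotated space
    # and take its inner product with the rotated query.
    return float((code / np.sqrt(D)) @ (P @ q))

x = rng.standard_normal(D); x /= np.linalg.norm(x)
q = rng.standard_normal(D); q /= np.linalg.norm(q)
print("true:", x @ q, "estimated:", estimate_ip(quantize(x), q))
```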
Qwen3.5 Omni - Qwen’s latest generation of fully omnimodal LLM
>**Qwen3.5-Omni** is Qwen’s latest generation of fully omnimodal LLM, supporting the understanding of text, images, audio, and audio-visual content. Both the Thinker and Talker in Qwen3.5-Omni adopt the Hybrid-Attention MoE. Qwen3.5-Omni series includes Instruct versions in three sizes: Plus, Fla
I tested as many of the small local and OpenRouter models as I could with my own agentic text-to-SQL benchmark. Surprises ensued...
Last week I asked for some feedback about which extra models I should test. I've added them all, and the benchmark is now available at [https://sql-benchmark.nicklothian.com/](https://sql-benchmark.nicklothian.com/). I didn't say a lot about what the agent does at the time, but in simple terms it takes an…
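The post doesn't spell out the agent loop, so here is a generic sketch of what an agentic text-to-SQL harness typically does: generate SQL from the question and schema, execute it, feed any error back to the model, and retry. The model call is passed in as a function, since the benchmark's actual prompting is not shown here.

```python
# Generic agentic text-to-SQL loop (sketch, not the benchmark's code).
import sqlite3
from typing import Callable, Optional

def run_agent(question: str, db_path: str,
              generate_sql: Callable[[str, str, str], str],
              max_turns: int = 3) -> Optional[list]:
    conn = sqlite3.connect(db_path)
    # Give the model the schema: CREATE statements from sqlite_master.
    schema = "\n".join(row[0] for row in conn.execute(
        "SELECT sql FROM sqlite_master WHERE sql IS NOT NULL"))
    feedback = ""
    for _ in range(max_turns):
        sql = generate_sql(question, schema, feedback)   # LLM call, caller-supplied
        try:
            return conn.execute(sql).fetchall()          # success: return rows
        except sqlite3.Error as e:
            # Feed the error back so the next attempt can self-correct.
            feedback = f"Previous query failed: {e}. SQL was: {sql}"
    return None  # gave up after max_turns attempts
```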