
News Of The Day
Anthropic Accuses DeepSeek, Moonshot, and MiniMax of Industrial-Scale AI Theft
Anthropic went public with a bombshell: three Chinese AI labs — DeepSeek, Moonshot AI, and MiniMax — used 24,000 fake accounts to run 16 million queries against Claude, systematically extracting its most valuable capabilities through a technique called "distillation." They weren't asking casual questions. They were copying the model.
The operation was surgical. MiniMax alone ran 13 million exchanges targeting agentic coding and tool orchestration. Moonshot AI executed 3.4 million queries focused on agentic reasoning, computer vision, and tool use. DeepSeek ran 150,000+ exchanges aimed at foundational logic — specifically around bypassing policy-sensitive content restrictions.
Distillation works by feeding a frontier model millions of carefully designed prompts, then using its responses to train a cheaper, smaller model that mimics the original. It sits in a legal gray area — and it's how competitors can replicate years of R&D without doing the work themselves.
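The mechanics are simpler than they sound. Here's a minimal sketch of the loop (the `teacher` stub and the toy `Student` class are illustrative stand-ins, not anyone's actual pipeline — a real teacher is an API call to a frontier model, and a real student is a smaller neural net fine-tuned on the harvested pairs):

```python
# Minimal distillation sketch: harvest teacher outputs, then train a
# student to imitate them.

def teacher(prompt: str) -> str:
    # Stub standing in for a frontier-model API call.
    return f"answer({prompt})"

def harvest(prompts):
    # Step 1: run many targeted prompts, record (prompt, response) pairs.
    return [(p, teacher(p)) for p in prompts]

class Student:
    # Toy "model" that memorizes teacher behavior. A real student is a
    # smaller network trained on the pairs with a cross-entropy loss,
    # so it generalizes beyond the exact prompts it saw.
    def __init__(self):
        self.table = {}

    def train(self, pairs):
        for prompt, response in pairs:
            self.table[prompt] = response

    def generate(self, prompt):
        return self.table.get(prompt, "")

prompts = ["write a sort function", "plan a tool call"]
student = Student()
student.train(harvest(prompts))
print(student.generate("plan a tool call"))  # mimics the teacher
```

Scale that harvesting step to millions of prompts aimed at specific capabilities — agentic coding, tool use, reasoning — and you get the operation Anthropic describes.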
Anthropic is calling for a coordinated response across the AI industry, cloud providers, and policymakers. But the damage signal is clear: Claude's capabilities are valuable enough that state-backed labs are running industrial-scale operations to replicate them.
This isn't a hacking scandal — it's an IP war. The most advanced AI capabilities in the world are being systematically copied by competitors who didn't build them. For anyone building on AI, this raises a direct question: if the models you rely on are being cloned, what's actually defensible? The answer is the same thing it's always been — what you build on top of them.
Quick Hits
• Claude Code Security Launched — and Crashed an Entire Stock Sector: Anthropic released Claude Code Security, a tool that autonomously scans codebases and patches vulnerabilities. Cybersecurity stocks cratered: CrowdStrike -8%, Cloudflare -8.1%, IBM -13.2%. Analysts call it an overreaction — but the market's message is clear: investors believe AI can automate real security work. Action: Try Claude Code Security on your own codebase. If an AI tool can find bugs your team missed, your security posture just got cheaper and faster to maintain.
• Ex-Google Engineers Raise $500M to Build AI Chips That Compete With Nvidia: MatX, founded by two former Google chip architects, just raised $500M (Bloomberg, today) to build processors designed specifically for LLM training. They're not the only ones — AMD acqui-hired Untether AI's team, and Intel is pursuing SambaNova. Action: If you're paying for AI inference, the monopoly is cracking. Watch inference pricing over the next 6 months — it's about to drop.
• AI Accounting Startup Basis Hits $1.15B Unicorn — Doing the Work Accountants Do: Basis raised $100M at a $1.15B valuation (Accel, GV, ex-Goldman Sachs). Its AI agents onboard onto accounting engagements and do the work — not assist, not summarize, do. It's the latest signal that AI agents are replacing professional services, not just automating tasks. Action: Identify one professional service you're paying for monthly — bookkeeping, legal review, data analysis — and test whether an AI agent can handle 80% of it.
One Thing To Try
The "What Am I Avoiding?" Scan.
Open ChatGPT or Claude. Paste this:
"I'm going to list 5 things that have been on my to-do list for more than 2 weeks. For each one, tell me: am I avoiding this because it's genuinely low priority, or because it requires a conversation, decision, or action I'm uncomfortable with? Be specific about what I'm actually avoiding."
The tasks that survive longest aren't the hardest. They're the ones that require you to be honest about something.
Break Your Limits. Build Your Legacy.
The Limitless Insider — Daily Edition
www.islimitless.com