This Week in AI + What I Learned Building for Real Clients

8 min read
Tags: ai-news · building-in-public · lessons

AI News This Week

1. Anthropic is suing the Pentagon, and the whole industry is watching

The Department of Defense labeled Anthropic, the company that makes Claude, a "supply chain risk." Anthropic sued them over it. Then over 30 employees from OpenAI and Google DeepMind signed a public letter backing Anthropic's position.

Why does this matter? This isn't just corporate drama. This fight determines which AI companies get access to government contracts worth billions of dollars, and more importantly, who gets to decide how AI is used in the military. OpenAI signed a deal with AWS to sell AI tools to government customers. Google is providing AI agents to the Pentagon's 3-million-person workforce. Anthropic said no, and is now fighting to make sure that decision doesn't get them blacklisted.

The fallout has been interesting. Claude's downloads jumped 51% and it hit #1 on the U.S. App Store for the first time. Turns out a lot of people respect the company that pushed back.

2. Anthropic launched a $100M partner network

Anthropic committed $100 million to help enterprises adopt Claude through a new Claude Partner Network. They also opened a new office in Sydney, their fourth in Asia-Pacific.

This signals that Anthropic isn't just trying to have the best model. They want to be the enterprise platform. If you're building AI tools with Claude (like we are), this is a big deal because it means more documentation, more support, and a growing ecosystem of companies building on the same technology. The long-term bet here is that Claude becomes the foundation layer for enterprise AI the same way AWS became the foundation for cloud.

3. AI now runs natively on your phone

Qualcomm's new Snapdragon 8 Elite 2 chip can run 13-billion-parameter AI models entirely on-device. No cloud calls, no latency, no internet required. Google followed up with Gemini Nano 3, which brings multimodal AI to phones with as little as 6GB of RAM. That covers about 65% of all active Android devices.

This is a bigger deal than it sounds. Up until now, serious AI needed a server somewhere. Now your phone can do real-time translation, image generation, and document summarization locally. For builders, this opens up a whole category of AI apps that work offline, in the field, in areas with bad internet. For everyone else, it means AI tools are about to get way faster and more private.

4. Big tech is betting the farm on AI infrastructure

Amazon committed $200 billion in capital expenditure for 2026. Alphabet is at $180 billion. Microsoft at $155 billion. Combined, that's over half a trillion dollars going into AI infrastructure this year alone.

These companies don't spend this kind of money on speculation. They're building data centers, custom chips, and networking infrastructure because they see AI compute demand growing for the next decade. If you're trying to understand where the economy is headed, follow the capital. Every major tech company is restructuring around AI. Atlassian just laid off 1,600 people (10% of their workforce) to pivot toward AI and enterprise sales. Oracle is cutting 20,000 to 30,000 jobs to redirect $8-10 billion toward AI. The transition isn't gradual. It's happening now.

5. OpenAI crossed $25B in revenue, Anthropic is at $19B

Two years ago these companies barely had revenue. Now OpenAI is taking early steps toward going public. The AI market isn't a bubble that might pop. It's a market that's scaling faster than any technology sector in history. For context, it took Google 8 years to reach $25 billion in annual revenue. OpenAI did it in about 3.


What This Means For You

All of that news is interesting, but here's what it actually means if you're someone watching from the outside trying to figure out how to get involved.

The barrier to entry has never been lower. You don't need to code to start building with AI. Tools like Claude, ChatGPT, n8n, Make, and Zapier let you build real automations and workflows with plain English. You can build an AI chatbot for a local business, automate a company's email responses, or create content pipelines without writing a single line of code. People are doing this right now and charging $500 to $2,000 a month for it.

Start with one tool and one problem. Don't try to learn everything. Pick one AI tool. Learn it deeply. Then find one business that has a problem that tool can solve. That's your first client. That's your first revenue. We started the exact same way. One tool, one client, one problem.

The money is in implementation, not in knowing about AI. Millions of people can explain what AI does. Very few people can actually sit down with a business owner, understand their operation, and build something that saves them time or money. That skill, the ability to implement, is what companies pay for. And right now there aren't enough people who can do it.

Free resources to start today:

  • Anthropic Academy (free, teaches you how to build with Claude)
  • Google AI Essentials (free, broad foundation)
  • n8n (free tier, visual AI automation builder)
  • Claude or ChatGPT (free tiers available, start building immediately)

The companies spending half a trillion dollars on AI infrastructure aren't doing it for fun. They're doing it because the demand for AI implementation is about to explode. The question isn't whether there's opportunity. It's whether you're going to be ready for it.


What I've Been Working On + Lessons Learned

The 40-hour sprint

This week my partner and I put in about 40 hours across two days finishing a client project. We were at the client's office, heads down, doing final testing and iteration before presenting the build. There's a specific kind of focus you get when you know the presentation is coming and there's no room for "we'll fix it later." Everything has to work. Every edge case has to be covered. Every button has to do what it's supposed to do.

That kind of pressure is where you learn the most. Not from tutorials, not from courses. From the reality that a client is going to sit down, use what you built, and either trust you with more work or move on.

Lesson 1: You cannot cram testing

This was the single biggest takeaway from the week. We were building fast, shipping features, feeling good about the progress. Then we started the testing phase and realized how many things we'd missed. Edge cases we didn't think about. Flows that worked 90% of the time but broke in specific scenarios. UI elements that looked right but behaved wrong.

Testing needs space. It needs fresh eyes. It needs you to intentionally try to break your own work, which is hard to do when you've been staring at the same code for 20 hours straight. Going forward, I'm structuring every project with testing spread across weeks, not compressed into the final days. The temptation to ship fast is real, but shipping something broken costs more than shipping something late.

Lesson 2: Real security is a different world

We're building a platform that collects Social Security numbers for a client. That single requirement changed the entire architecture of the project. We needed AES-256 encryption for data at rest, envelope encryption so no single key compromise exposes everything, HMAC hashing so we can look up records without decrypting them, full audit logging on every single data access, and compliance with IRS Publication 4557, FTC Safeguards Rule, and multiple state-level regulations.

One input field. Nine digits. And the security infrastructure behind it took longer to build than most of the product features combined.
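To make the pattern concrete, here's a rough sketch of how envelope encryption plus an HMAC "blind index" fit together. This is not our client's implementation; it's a simplified illustration using Python's `cryptography` library, with keys generated inline that would, in production, live in a KMS or HSM:

```python
# Illustrative sketch: envelope encryption + HMAC blind index for a
# sensitive field. Keys are generated inline only for demonstration;
# real systems keep the KEK and HMAC key in a KMS/HSM with audit logging.
import hashlib
import hmac
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEK = AESGCM.generate_key(bit_length=256)  # key-encryption key (normally in KMS)
HMAC_KEY = os.urandom(32)                  # separate key for the lookup index

def lookup_hash(ssn: str) -> str:
    """Deterministic HMAC of the SSN: lets us query records without decrypting."""
    return hmac.new(HMAC_KEY, ssn.encode(), hashlib.sha256).hexdigest()

def encrypt_record(ssn: str) -> dict:
    """Encrypt with a per-record DEK, then wrap the DEK under the KEK."""
    dek = AESGCM.generate_key(bit_length=256)  # fresh data-encryption key
    nonce = os.urandom(12)
    ciphertext = AESGCM(dek).encrypt(nonce, ssn.encode(), None)
    # Wrapping the DEK means one leaked DEK exposes one record, not all of them.
    wrap_nonce = os.urandom(12)
    wrapped_dek = AESGCM(KEK).encrypt(wrap_nonce, dek, None)
    return {
        "ciphertext": ciphertext,
        "nonce": nonce,
        "wrapped_dek": wrapped_dek,
        "wrap_nonce": wrap_nonce,
        "lookup_hash": lookup_hash(ssn),
    }

def decrypt_record(rec: dict) -> str:
    """Unwrap the DEK with the KEK, then decrypt the field."""
    dek = AESGCM(KEK).decrypt(rec["wrap_nonce"], rec["wrapped_dek"], None)
    return AESGCM(dek).decrypt(rec["nonce"], rec["ciphertext"], None).decode()
```

Even this toy version hints at the real cost: every read path, write path, and search query has to go through these functions, and the audit logging and key rotation around them is where the actual engineering time goes.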

If you're getting into AI and you want to work with real businesses, especially in industries like finance, healthcare, or legal, you need to understand that security isn't a feature you add at the end. It's the foundation you build everything on top of. Most AI demos look impressive. Very few of them could survive a compliance audit. That gap is where serious money is made, because most builders won't do this work.

Lesson 3: Every project gets more complex than you think

We got hired to solve a specific problem. Simple scope, clear deliverable. Then we got inside the client's operation and reality hit. Legal requirements we didn't anticipate. Compliance standards specific to their industry. Edge cases unique to how their team actually works versus how we assumed they worked.

This happens on every single project. The gap between "I can build an AI tool" and "I can build an AI tool that a real company can deploy in production" is massive. That gap is uncomfortable. It means long nights, things taking twice as long as you quoted, and constantly learning on the fly. But it's also the reason this work pays well. Most people stop at the demo. The ones who push through the complexity and actually deliver something production-ready are the ones who get referrals, repeat business, and a reputation that compounds.

What's next

I'm launching a free community this week for people who want to learn how to build with AI and make real money from it. Not theory. Not hype. The actual process of finding clients, building solutions, and delivering results. Comment "AI" on any of my Instagram posts to join.