LinkedIn Is Scanning Your Browser Extensions. The Case for Keeping Your AI Local.

Yesterday, LinkedIn was caught scanning users’ installed browser extensions. Not malicious ones — all of them. The discovery hit Hacker News at #1 with over 1,500 points and didn’t surprise a single developer in the comments.

It surprised everyone else.

That gap — between people who already knew big platforms harvest everything they can see, and people discovering it for the first time — is exactly why the conversation around AI privacy matters right now. Because most people running AI tools in 2026 are handing those same platforms something far more sensitive than their browser extensions.

They’re handing them their thinking.

What LinkedIn Was Actually Doing

The short version: LinkedIn’s mobile and desktop clients were making API calls that enumerated the IDs of users’ installed browser extensions. Not advertised. Not disclosed in any meaningful way. Just quietly happening in the background while users scrolled their feed.

The reason doesn’t much matter. What matters is the behavior: a platform with a trust relationship exploited that trust to collect data users didn’t consent to share. When caught, the response was a variation of “we’ll look into it.”

This is the default mode of large platforms in 2026. Data collection is the product. You are the input.

The AI Version of This Problem Is Worse

Browser extensions are metadata. They tell you something about a person’s habits and tools. Mildly invasive, easily spun as benign.

Now think about what cloud AI platforms actually see.

When you use ChatGPT, Claude, Gemini, or Copilot through their hosted interfaces, you are sending them:

  • Your questions (including the embarrassing ones)
  • Your business ideas before you’ve validated them
  • Your draft emails and messages
  • Your code and internal tooling details
  • Your meeting notes and strategic plans
  • Sometimes your customer data

All of it hits their servers. All of it is logged, at minimum for safety review. Most platforms’ terms include training clauses that apply unless you explicitly opt out, and many users never do.

You’re not just using AI to think. You’re thinking out loud in front of a corporation that monetizes attention and data.

LinkedIn scanning your browser extensions is a misdemeanor. Running your core business reasoning through a Big Tech AI platform is something else entirely.

What “Local AI” Actually Means in 2026

The good news: this is no longer a hobbyist concern. The gap between cloud AI and local AI narrowed dramatically in the last six months.

Google just released Gemma 4. It runs on consumer hardware. It performs at a level that would have required a multi-thousand-dollar cloud subscription eighteen months ago.

AMD released Lemonade — an open-source local LLM server designed to use both your GPU and NPU simultaneously. Combined with Ollama, which handles model management and serves a local API endpoint, you can run a capable AI model on hardware you already own for $0/month in API fees.

The stack looks like this:

  • Hardware: Any modern laptop, a Mac Mini M4, or a Raspberry Pi 5 for lighter workloads
  • Model runtime: Ollama (free, open source)
  • Model: Gemma 4, Llama 3, Mistral, or Qwen — your choice, your machine
  • Agent layer: OpenClaw, pointed at your local Ollama endpoint instead of Anthropic’s servers
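Once that stack is running, talking to it is a single HTTP call to localhost. A minimal sketch against Ollama's default /api/generate endpoint, standard library only (the gemma4 model tag is an assumption here; substitute whatever you actually pulled):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(prompt: str, model: str = "gemma4") -> dict:
    # stream=False asks for one JSON object instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(prompt: str, model: str = "gemma4") -> str:
    """Send a prompt to the local model. Nothing leaves localhost."""
    data = json.dumps(build_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# ask_local("Summarize this quarter's plan")  # never touches a cloud API
```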

Your prompts never leave your machine. Your agent’s memory files live on your disk. Your automation logic runs in your terminal. Nobody is logging your business strategy.

The Privacy Architecture That Actually Protects You

Running local AI isn’t just about cost (though paying $0/month in API fees is a compelling argument). It’s about control over your attack surface.

When you self-host your AI stack, you eliminate several categories of risk:

Data harvesting at rest. Cloud providers store your conversations. Even with enterprise privacy tiers, you’re trusting their access controls, their employees, their security posture, and their legal team to protect you when a government subpoena arrives. Local data has none of those exposure vectors.

Model provider policy changes. OpenAI has changed its terms of service multiple times. What’s opt-out today can become opt-in tomorrow. You have no leverage as an individual user. Running local means the provider can’t change the rules on your data retroactively.

Third-party integrations. Cloud AI platforms have partner ecosystems. Plugins, integrations, and API access mean your data touches more systems than just the primary provider. Audit trails get murky fast.

Inference-time data. Every prompt you send is a training signal, a behavioral data point, and a potential security disclosure. Local inference means none of that telemetry exists.

The Objection Worth Taking Seriously

“But local models aren’t as good.”

This was true in 2024. It’s becoming less true every quarter.

For most solopreneur and small business workflows — drafting content, researching topics, summarizing documents, writing and reviewing code, answering operational questions — a well-quantized Gemma 4 or Llama 3.3 running locally is entirely adequate. The edge cases where you genuinely need frontier model capability (complex multi-step reasoning chains, cutting-edge code generation) still exist. You can route those specific requests to a cloud API while keeping everything else local.

That hybrid model — sensitive work stays local, frontier tasks go to cloud with appropriate data hygiene — is a reasonable and practical approach. The point is that you’re making deliberate choices rather than defaulting everything to a corporation’s infrastructure.
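That router can be a dozen lines. A sketch of the decision logic; the marker list and task names are placeholders you would tune to your own workflows:

```python
# Markers that flag a prompt as sensitive. Illustrative only -- tune to your business.
SENSITIVE_MARKERS = ("customer", "revenue", "password", "roadmap", "internal")

# Task types that genuinely benefit from a frontier model. Also illustrative.
FRONTIER_TASKS = {"multi_step_reasoning", "novel_codegen"}

def route(prompt: str, task: str = "general") -> str:
    """Decide where a request runs: 'local' by default, 'cloud' only when needed."""
    lowered = prompt.lower()
    if any(marker in lowered for marker in SENSITIVE_MARKERS):
        return "local"   # sensitive content never leaves the machine, full stop
    if task in FRONTIER_TASKS:
        return "cloud"   # frontier capability, only for scrubbed, non-sensitive prompts
    return "local"       # everything else stays local
```

Note the ordering: the sensitivity check runs first, so a frontier-worthy task still stays local if the prompt contains anything on the sensitive list.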

The OpenClaw Angle

OpenClaw is designed with this architecture in mind. It supports local model endpoints out of the box — point it at your Ollama instance and your agents run against your local models with no API calls leaving your network.

Your agent’s memory files (SOUL.md, MEMORY.md, daily notes) live on your filesystem. Your skills and configurations are local. The only external calls happen when you explicitly trigger them — posting a tweet, sending an email, calling a webhook you’ve defined.
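Whatever your config format, it's worth a one-function guard that confirms the endpoint you configured actually resolves to your own machine before any agent starts. A sketch (not part of OpenClaw's API):

```python
from urllib.parse import urlparse

LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1"}

def is_local_endpoint(url: str) -> bool:
    """True only if the configured model endpoint stays on this machine."""
    host = urlparse(url).hostname or ""
    return host in LOCAL_HOSTS

print(is_local_endpoint("http://localhost:11434"))        # True
print(is_local_endpoint("https://api.anthropic.com/v1"))  # False
```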

The LinkedIn story is a useful reminder of something that should be obvious: if you’re not paying for privacy, you’re not buying privacy. You’re hoping for it.

Build your stack like privacy is the default, not the exception.

What to Do This Weekend

  1. Install Ollama — five minutes, runs on Mac, Linux, and Windows
  2. Pull Gemma 4: ollama pull gemma4 or start with llama3.3 if you want something battle-tested
  3. Point your OpenClaw config at the local endpoint (http://localhost:11434)
  4. Run one of your regular workflows locally and compare
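To confirm step 2 worked, Ollama exposes a local /api/tags route that lists every pulled model. A quick stdlib check (the model names are whatever you pulled):

```python
import json
import urllib.request

OLLAMA = "http://localhost:11434"  # Ollama's default local address

def parse_tags(payload: dict) -> list[str]:
    """Extract model names from an Ollama /api/tags response body."""
    return [m["name"] for m in payload.get("models", [])]

def pulled_models() -> list[str]:
    """Ask the local Ollama server which models are already pulled."""
    with urllib.request.urlopen(f"{OLLAMA}/api/tags") as resp:
        return parse_tags(json.loads(resp.read()))

# pulled_models()  -> something like ["gemma4:latest"] once step 2 has run
```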

The performance will surprise you. The silence from the API billing dashboard will surprise you more.

The LinkedIn story will be forgotten by next week. The data they collected won’t be. Build accordingly.


Already running OpenClaw with local models? The OpenClaw + Ollama setup guide walks through the full configuration. The Private AI Server on Raspberry Pi guide covers the hardware setup if you want a dedicated local inference box.
