What I'm Seeing - February 2026
- eileen711

The badge I'm wearing proudly these days is "AI Pragmatist".
It started with a post I wrote in December about the gap between what companies say publicly about AI ("We're investing heavily!") and what they admit privately ("Honestly? We're experimenting but we don't know what's working yet"). The response told me I'd struck a nerve.
That gap—between rhetoric and reality—is where most of the industry is living right now.
But here's what's shifted in the past few weeks. The conversation has moved from "should we use AI?" to something far more consequential: AI agents.
For those who haven't been tracking this closely: AI agents don't wait for you to prompt them. They make decisions, execute tasks, and operate autonomously within whatever parameters you've set. Think less "helpful assistant" and more "junior employee who never sleeps, works incredibly fast, and has no judgment beyond what you've explicitly programmed."
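If you want to see that difference in the smallest possible terms, here's a toy sketch in Python. Every function in it is a stand-in I invented for illustration; no actual product's API looks like this:

```python
# Toy contrast between an assistant and an agent.
# All names here are invented stand-ins, not any vendor's API.

def respond_to(prompt: str) -> str:
    """Stand-in for a single model call."""
    return f"answer to: {prompt}"

def assistant(prompt: str) -> str:
    """An assistant acts once, when you prompt it, then hands control back."""
    return respond_to(prompt)

def agent(goal: str, max_steps: int = 5) -> list[str]:
    """An agent loops on its own: it picks an action, executes it, and
    keeps going until it judges the goal met or hits a limit you set."""
    history: list[str] = []
    for step_number in range(max_steps):               # a "parameter you've set"
        action = f"step {step_number} toward: {goal}"  # it decides
        result = respond_to(action)                    # it acts, unprompted
        history.append(result)
        if step_number == 3:                           # toy stand-in for "goal met"
            break
    return history

print(assistant("summarize these verbatims"))
print(agent("clean and weight the tracker data"))
```

The point isn't the code; it's the loop. Nobody is prompting between iterations.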
Gartner predicts 40% of enterprise applications will embed AI agents by year-end. That's not a forecast about the distant future. That's now, and the implications for our industry are significant and largely unexamined.
The governance gap is real. Most research companies built their AI policies for tools: guidelines about using ChatGPT for drafts, rules about disclosing AI-assisted analysis. But agents are different. When an AI autonomously decides to exclude certain survey responses, re-weight a sample, or flag verbatims as "low quality"—who's accountable? What's disclosed to the client? Where does human judgment intervene? I've been asking research leaders these questions. The most common answer: silence, followed by "we haven't figured that out yet."
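For what it's worth, here's one hedged sketch of what an answer could look like: consequential actions go through a human gate, and everything, including rejections, lands in a log you could disclose to the client. The action categories, the approval flow, and every name here are my illustrative assumptions, not an industry standard:

```python
# Sketch of a human-in-the-loop gate for consequential agent decisions.
# Categories, flow, and names are illustrative assumptions only.

from dataclasses import dataclass, field
from datetime import datetime, timezone

# Decisions an agent must not apply without human sign-off (assumed list).
CONSEQUENTIAL = {"exclude_response", "reweight_sample", "flag_verbatim"}

@dataclass
class Decision:
    action: str
    rationale: str
    approved_by: str | None = None
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: list[Decision] = []   # the thing you can disclose to the client

def human_approves(decision: Decision) -> bool:
    """Stand-in for a real review queue (a dashboard, a Slack ping, etc.)."""
    answer = input(f"Approve '{decision.action}'? ({decision.rationale}) [y/N] ")
    return answer.strip().lower() == "y"

def apply_decision(decision: Decision, reviewer: str) -> bool:
    """Gate consequential actions behind a human; log everything either way."""
    if decision.action in CONSEQUENTIAL:
        if not human_approves(decision):
            audit_log.append(decision)       # log the rejection too
            return False
        decision.approved_by = reviewer
    audit_log.append(decision)
    return True

apply_decision(
    Decision("exclude_response", "straightlining detected in grid questions"),
    reviewer="analyst_42",
)
```

The interesting design question is that CONSEQUENTIAL set: make it too small and you've automated away accountability; make it too big and you've rebuilt the bottleneck the agent was supposed to remove.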
Speed creates new categories of risk. The efficiency gains from agents are real and substantial. But speed compounds errors as readily as it compounds value. An agent can make a thousand consequential micro-decisions before anyone notices something's wrong. The quality control infrastructure we built for human-paced work doesn't translate.
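One direction that infrastructure could take: stop trying to review every micro-decision (impossible at agent speed) and instead watch the rate of suspicious ones, halting the agent the moment it climbs. A toy circuit-breaker sketch, with a simulated decision stream and made-up thresholds:

```python
# Toy circuit breaker for agent-speed quality control.
# Window size, threshold, and the anomaly check are made-up assumptions.

import random
from collections import deque

class CircuitBreaker:
    """Trips once too many recent micro-decisions look anomalous."""

    def __init__(self, window: int = 100, max_anomaly_rate: float = 0.05):
        self.recent = deque(maxlen=window)
        self.max_anomaly_rate = max_anomaly_rate

    def record(self, is_anomalous: bool) -> bool:
        """Log one decision; return True when the breaker trips."""
        self.recent.append(is_anomalous)
        if len(self.recent) < self.recent.maxlen:
            return False                       # not enough evidence yet
        rate = sum(self.recent) / len(self.recent)
        return rate > self.max_anomaly_rate

# Simulated agent making fast micro-decisions, 8% of them quietly bad.
breaker = CircuitBreaker(window=50, max_anomaly_rate=0.05)
for i in range(10_000):
    looks_bad = random.random() < 0.08         # stand-in anomaly detector
    if breaker.record(looks_bad):
        print(f"halted after {i + 1} decisions; paging a human")
        break
```

The window is the trade-off: the breaker can't trip until it has seen enough evidence, so some bad decisions always get through first. Deciding how many is acceptable is a governance question, not an engineering one.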
The human premium is increasing, not decreasing. Here's the paradox: As AI handles more of the mechanical work, the moments of human judgment become more valuable, not less. The question "where must a human review, approve, or override?" is both an operational and a strategic one. Get it wrong, and you've automated your way into irrelevance—or liability.
I wrote on LinkedIn recently about the synthetic data debate (using a Depeche Mode analogy that may have confused some people under 40—sorry about that). The response was fascinating: The post generated more comments relative to reactions than anything else I've written. People have opinions about this.
What struck me wasn't the disagreement—it was the quality of the arguments on both sides. The synthetic skeptics aren't Luddites; they're raising legitimate questions about bias, training data decay, and the closed-loop problem where models train on their own outputs. The synthetic enthusiasts aren't naive; they're pointing out that our "real" data sources have serious quality problems too.
The pragmatist position—use synthetic for exploration, demand transparency, stay humble about what any data source can and can't tell you—seems to be resonating. But I'm genuinely uncertain whether the industry will adopt that nuance or split into camps.
One more thing I've been noticing: The talent conversation is finally getting honest.
For years, we talked about "reskilling" as if it were simply a matter of training programs and good intentions. But the reality is messier. Some people will thrive in an AI-augmented research environment. Some won't. The skills that built great careers over the past 20 years—methodological rigor, survey design expertise, analytical precision—remain valuable but are no longer sufficient.
The companies making real progress aren't pretending everyone will make the transition. They're being clear-eyed about which roles are expanding, which are contracting, and which are being fundamentally reinvented. It's uncomfortable. It's also necessary.
We're at an inflection point. The choices research companies make in the next 12-18 months about AI governance, talent strategy, and value proposition will shape the industry for the next decade.
No pressure.
If you're navigating any of this—or just want to debate Depeche Mode vs. Springsteen—I'd love to hear from you.