The Express Train to Slopville

Photo by Jp Valery on Unsplash

Are we approaching the peak of the hype cycle? I hope so.

In Selling Certainty, I talked about how AI-powered research tools can help us feel in control, even as they quietly erode our agency and capacity for asking better questions. Since then, two thoughtful pieces have landed, both circling that same anxiety from different angles.

Judd Antin wrote ResearchSlop, critiquing how shiny, fast, AI-enabled outputs often look like research but wind up hollow and without substance upon closer scrutiny. Their piece ultimately concludes that we need to “create space for the reasonable majority — the expert practitioners who see both the value and the limitations [of AI] — to shape how these tools get integrated into practice.”

Jess Holbrook also wrote a piece called Research Slop (Yes, with a space). It makes a similar observation: Organizations are being flooded with low-effort, high-volume sludge that is devoid of reflection or context. Their piece ultimately concludes we need to examine the incentives driving the adoption of AI-powered research tools and make sure they’re pushing us toward deeper understanding of people, continual organizational learning, and meaningful improvements to products & services.

Give ‘em a read. I loved both of these, and not just because they support my existing view about the loss of inquiry we’re experiencing as a result of unchecked enthusiasm for AI-ing the sh*t out of things to cut costs and “move fast.”

Like both Jess and Judd, I think these tools have the potential to help us do better work, and achieve better outcomes. But, like so many technologies and software solutions that precede them, they’re only capable of accelerating whatever mindset you bring to them. If you rush toward an answer, and don’t properly frame your exploration, the outputs are going to reflect that.

So, instead of a third piece on research slop, let’s talk about how to avoid contributing to it.

Where Things Go Wrong

Every business challenge or research question is built a little differently, so the onramp to Slopville is similarly varied. The general mandate inside big organizations right now is to AI everything and spend as little as possible, but there are certain areas of research thinking where the danger of slopification and poor outcomes is particularly high.

Framing the Problem

Healthy questioning involves asking “what’s the problem to be solved here?” Or, “what do we actually need to understand?” If your category is in the midst of disruption or you’re seeing conflicting signals, that’s usually a sign the problem is ambiguous and can be defined in multiple ways. If your team can’t agree on what the root of the problem is, that’s an even stronger signal that you’ve got framing to do.

The onramp to Slopville: LLMs are pattern recognition tools. They aren’t generative in the research sense, and they aren’t capable of meaning-making. If you haven’t clarified what you’re looking for, they’ll happily give you the illusion that you have. Some light, well-triangulated qual, where you cast for insight, can help you better define the problem space before you jump to the fancy tools.

Defining the Method

AI tools are seductive because they promise things like being able to “instantly summarize the voice of the customer,” but they also push us toward standardization and away from healthy questioning of how best to understand something. I wrote about this in Selling Certainty: Under pressure, teams make lower-quality choices. They also tend to engage in method fixation, using the same toolkit across very different problems because it feels familiar and defensible. Method stops being something you choose because it’s right and becomes something you use because it’s available.

The onramp to Slopville: LLMs work best when given narrow, repeatable protocols. They can be great at doing one very specific thing, but we’re far from tools that are broadly useful. When insight teams institutionalize a small set of tools for consistency, they trade range for repeatability. When the tool doesn’t fit the challenge, you risk it giving you the wrong kind of certainty.

Larger research firms may not be able to, but independents and smaller shops can run custom studies for around the same cost you’d pay to use these shiny new tools. Take a beat to at least explore that before you charge ahead.

Data Gathering & Synthesis

AI tools like neatness. They cluster, flatten, and then hand things to you. They’re also brilliant at expanding what you can capture and analyze in a short period of time. That neatness comes with a cost: These tools see repetition as signal and contradiction as noise, so the very texture that makes qualitative work powerful gets filtered out.

The onramp to Slopville: LLMs can cluster things thematically, but they can’t weigh meaning. They aren’t actually making judgment calls when they’re gathering data or engaging in synthesis. They treat every signal as equal, and don’t know which data points matter unless that context has already been established for them. That’s why AI summaries can feel both impressive and strangely hollow. They reflect the shape of a conversation, but not its stakes. What’s lost isn’t just nuance, but the ability to tell which tensions are worth chasing and which are safe to ignore.

A Cautionary Tale

Take what is happening in the world of coding right now as a cautionary tale: There is a blossoming industry of people who are now fixing AI-generated code because it’s full of mistakes, flaws, and hallucinations. If we look at outcomes and not outputs, AI tools are often wasting more time and money than they save. And that’s before we even consider the downstream costs of these types of blunders.

When To Keep Things Human vs. When To Lean On AI Research Tools

I’m not trying to be an AI gatekeeper or naysayer. What I am advocating for is some thoughtfulness here, rather than blanket company-wide policies about AI use in research. AI-powered tools aren’t the enemy of good research, but more discipline is needed when it comes to deciding what we automate, delegate, or are willing to tolerate being done inside a black box.

With every shortcut we take to cut costs or save time, we trade something: Depth for speed, nuance for neatness, or agency for efficiency. The trick is knowing which trade-offs are acceptable and which ones are quietly eroding your organization’s ability to respond to the pressures and shifting sands of the outside world, leaving you stuck in Sloptown, USA.

Here’s a simple way to think about it:

Use AI research tools when the focus is on what already exists: Today’s perceptions, feedback on products already in the market, or quantifying performance over time. And when the creative and strategic thinking involved is relatively deductive or inductive.

But when the work requires creating something new, reframing a problem, or imagining what doesn’t yet exist? Or engaging in abductive reasoning? That’s when things should stay human.


Framing the problem

  • Automation benefit: Low — can summarize what’s known, but that’s about it.

  • Human depth benefit: High — framing requires context, judgment, and lateral leaps.

Pattern detection

  • Automation benefit: High — speed, scale, and consistency when you have lots to analyze.

  • Human depth benefit: Mid — actual people are good at seeing anomalies or offering context behind clusters.

Trend interpretation

  • Automation benefit: Low — can surface signals quickly, but only once they’re already visible in the data, so you won’t catch things early.

  • Human depth benefit: High — cultural awareness, cross-impact analysis, and understanding significance all need human judgment.

Speed & summarization

  • Automation benefit: High — this is probably the only thing I actually fully trust AI for.

  • Human depth benefit: Low — people can make sure important nuance isn’t lost or misrepresented, but more slowly.

Sensemaking & synthesis

  • Automation benefit: Low — can organize inputs but not weigh their meaning or engage in actual sensemaking.

  • Human depth benefit: High — creative people are great at interpretation, prioritization, and synthesis of context.

Prediction based on past data

  • Automation benefit: High — these tools are great at data-driven extrapolation.

  • Human depth benefit: Mid — humans are slower at modeling, but bring judgment to things like wind-tunneling or risk analysis.

Exploration of future possibility

  • Automation benefit: Low — lacks imagination and interpretive range. Awful at this.

  • Human depth benefit: High — creative, abductive thinking grounded in lived understanding.


The New Discipline in Corporate Research

The real question isn’t whether AI can do the work, it’s what happens to our thinking when we let it, and then how that impacts our ability to make good decisions that drive the business forward.

The discipline now is less about organizational policies around AI or automating things. That part is easy. It’s in deciding when not to.

These new tools can make research faster, but only people can make it matter and give it meaning. The health of our industry won’t be defined by how well we automate the work, but by whether we keep the discipline and curiosity to think for ourselves when the situation warrants it.
