A few months ago, a prospect asked me a question that I hear at least twice a month: “How will we know which channel is driving the results?”
It’s a reasonable question. A responsible question, even. If you’re spending $15,000 a month on marketing, you want to know where the return is coming from. You want to see the line from investment to outcome — this dollar went into content, this content generated this lead, this lead became this customer. Clean, linear, measurable.
The problem is that it almost never works that way.
The Attribution Fantasy
Modern B2B buying is not a straight line. It’s a web. A typical buyer might see your LinkedIn post on Tuesday, read a blog article on Thursday, get forwarded your newsletter by a colleague the following week, visit your website three times before clicking “Book a Demo,” and then tell your sales team they found you through a Google search.
In that scenario, what gets the credit? If you’re using last-touch attribution, Google gets the credit. If you’re using first-touch, LinkedIn gets it. If you’re using multi-touch, everyone gets a fraction, and the math is only as good as your tracking setup — which, in my experience, is never as good as people think it is.
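To make the difference concrete, here is a minimal sketch of how the three common models would split credit for the journey described above. The touchpoint names and the even split used for multi-touch are illustrative assumptions, not a real tracking setup.

```python
# Illustrative journey from the example above (names are made up).
journey = ["linkedin_post", "blog_article", "newsletter_forward",
           "website_visit", "website_visit", "google_search"]

def last_touch(touchpoints):
    # All credit goes to the final recorded touchpoint.
    return {touchpoints[-1]: 1.0}

def first_touch(touchpoints):
    # All credit goes to the first recorded touchpoint.
    return {touchpoints[0]: 1.0}

def linear_multi_touch(touchpoints):
    # Even split across every recorded touchpoint; other multi-touch
    # models (time decay, U-shaped) weight them differently.
    credit = {}
    share = 1.0 / len(touchpoints)
    for t in touchpoints:
        credit[t] = credit.get(t, 0.0) + share
    return credit

print(last_touch(journey))          # Google search gets all the credit
print(first_touch(journey))         # LinkedIn gets all the credit
print(linear_multi_touch(journey))  # everyone gets a fraction
```

Same journey, three different answers. And every one of them depends on the touchpoints actually being recorded in the first place.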
The deeper problem is that most of the touchpoints that actually influence a B2B purchase are invisible to analytics. The conversation a buyer had with a peer at a conference. The Slack thread in a private community where your company was mentioned. The podcast episode where your founder’s quote was referenced. None of these show up in your CRM. None of them get attributed. But they might be the most important touchpoints in the entire journey.
The Measurement Trap
I’ve watched companies spend extraordinary amounts of time and money trying to solve attribution. Custom UTM frameworks. Multi-touch attribution platforms. Data science teams building proprietary models. Expensive MarTech stacks designed to track every click, every impression, every interaction.
And at the end of it, they still can’t tell you with confidence which marketing activity is responsible for revenue.
This isn’t a technology problem. It’s a category problem. B2B buying decisions are made by committees, over months, through channels that are fundamentally untrackable. The signal is noisy, the data is incomplete, and the models are approximations at best.
The trap is that the pursuit of perfect attribution becomes a substitute for judgment. Teams spend weeks debating whether content or paid ads deserve credit for a deal, when the real question should be: are we growing? And if so, are we doing the things that are likely contributing to that growth based on everything we know?
What We Do Instead
I’m not arguing against measurement. I’m arguing against the specific flavor of measurement that promises to tell you exactly which dollar produced which outcome. That precision is an illusion in most B2B contexts.
What we do instead is measure at two levels.
At the macro level, we track the numbers that matter: pipeline generated, revenue closed, customer acquisition cost, payback period. These are lagging indicators, which means they take time to show up. But they’re the numbers that tell you whether your marketing is working in aggregate. If pipeline is growing and CAC is stable or declining, something is working. You may not know exactly which channel to credit, but the system is producing results.
At the activity level, we track leading indicators for each channel independently. For content, it’s organic traffic growth, keyword rankings, and engagement metrics. For email, it’s open rates, reply rates, and meetings booked. For paid, it’s cost per click, conversion rates, and cost per qualified lead. These metrics don’t tell you which channel is most responsible for revenue, but they tell you whether each channel is healthy and improving.
The combination of these two levels gives you something more useful than attribution: directional confidence. You can say, “Our overall pipeline is growing, our content traffic is up 40%, our email reply rates are strong, and our paid CPL is declining. We’re doing the right things.” You can’t say, “Blog post #47 generated $23,000 in revenue.” But the first statement is more actionable than the second.
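If it helps to see the two levels side by side, here is a minimal sketch of that directional check. Every number, metric name, and channel here is made up for illustration; the point is the structure, not the figures.

```python
# Macro (lagging) indicators, tracked in aggregate.
macro = {
    "pipeline_generated": {"prev_quarter": 820_000, "this_quarter": 1_010_000},
    "cac":                {"prev_quarter": 9_400,   "this_quarter": 9_100},
}

# Per-channel (leading) indicators, tracked independently.
channels = {
    "content": {"organic_sessions":        {"prev_quarter": 12_000, "this_quarter": 16_800}},
    "email":   {"reply_rate":              {"prev_quarter": 0.031,  "this_quarter": 0.042}},
    "paid":    {"cost_per_qualified_lead": {"prev_quarter": 310,    "this_quarter": 270}},
}

def trend(values, lower_is_better=False):
    # Directional read only: is this metric moving the right way?
    change = values["this_quarter"] - values["prev_quarter"]
    improving = change < 0 if lower_is_better else change > 0
    return "improving" if improving else "flat or declining"

print("pipeline:", trend(macro["pipeline_generated"]))
print("cac:", trend(macro["cac"], lower_is_better=True))
for channel, metrics in channels.items():
    for name, values in metrics.items():
        lower = name.startswith("cost")
        print(channel, name, trend(values, lower_is_better=lower))
```

Nothing in that output claims a specific dollar of revenue came from a specific channel. It just tells you whether the aggregate is moving and whether each channel is healthy, which is the decision you actually need to make.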
The Question Behind the Question
When clients ask about attribution, they’re usually asking a different question underneath. They’re asking: “Can I trust that this investment is worth it?”
That’s a fair question, and it deserves an honest answer. The honest answer is: you’ll have strong directional evidence within 90 days and convincing evidence within six months. You’ll be able to see whether the trajectory is right. What you won’t have is a spreadsheet that connects every dollar spent to a specific revenue outcome. If that’s the standard, you’ll be disappointed — and you’ll probably kill good programs prematurely because they don’t “prove” their value on a timeline that was unrealistic from the start.
I’ve seen this happen. A company invests in content marketing, sees modest results in month three, decides attribution isn’t clear enough, and cuts the program. Six months later, the content they published is ranking on page one for high-intent keywords. But the team has moved on to the next channel, chasing the same attribution mirage.
The Heresy
Here’s the part that makes data-driven marketers uncomfortable: some of the most effective marketing activities are the hardest to measure.
Brand building. Community engagement. Thought leadership. Executive visibility. These things influence buyer perception over months and years, not days and weeks. They create the conditions for demand to exist — conditions that your demand capture tools (paid ads, SEO, outbound) then capitalize on.
Trying to attribute revenue to a brand-building effort is like trying to attribute a plant’s growth to a specific rainstorm. The rain matters. Sunlight matters. Soil matters. You can measure the plant’s growth, but assigning credit to any single input misses the point. The system produces the outcome, not any individual element.
Attribution is not a myth in the sense that measurement is useless. It’s a myth in the sense that the clean, linear, cause-and-effect story that most people want doesn’t exist. Accepting that — and building a measurement framework that accounts for ambiguity rather than pretending it away — is one of the most important shifts a marketing team can make.
Measure rigorously. But don’t mistake the map for the territory.