Jon Miller
Former CMO, Demandbase
Since the release of ChatGPT on November 30, 2022, artificial intelligence (AI) in the workplace has been on everyone’s mind. Companies are struggling to figure out how and where to use it best, and wondering how to quantify its impact. Is it a productivity powerhouse, or does it carry the risk of diminishing human skill?
Recent studies are starting to provide a nuanced answer: it’s both. AI has been shown to significantly elevate performance across a range of tasks, but there’s a caveat. The technology excels in some areas and falls short in others, particularly when it comes to accuracy and creativity.
Let’s explore these studies and what they say about maximizing the benefits of AI in our B2B go-to-market (GTM) strategies.
A recent study by Harvard Business School found that Boston Consulting Group (BCG) consultants using GPT-4 significantly outperformed those who did not across 18 real-world tasks. These included creative tasks (“Propose at least 10 ideas for a new shoe targeting an underserved market or sport.”), analytical tasks (“Segment the footwear industry market based on users.”), writing and marketing tasks (“Draft a press release marketing copy for your product.”), and persuasiveness tasks (“Pen an inspirational memo to employees detailing why your product would outshine competitors.”).
The study found that the BCG consultants using AI completed 12.2% more tasks and completed them 25.1% faster. They also produced results of over 40% higher quality than those not using AI.
That’s not just a marginal improvement; it’s a significant leap in productivity and quality that can translate into real competitive advantages for companies. Projects move more quickly from planning to execution, and higher quality work leads to better customer satisfaction, fewer revisions, and a more robust bottom line. These improvements suggest AI, when used properly, will be truly transformative for go-to-market.
Notably, the study also uncovered that AI acts as a “skill leveler.” The consultants who initially scored the lowest saw the biggest increases in performance when they teamed up with AI. While top performers also improved, the boost was less dramatic. This has deep implications for performance management across functions and disciplines. But, as we’ll see, AI isn’t always the right answer.
Another study, published in Nature’s Scientific Reports, investigated the creative abilities of humans and AI chatbots. Participants were tasked with thinking of unique uses for common items. On average, the AI chatbots performed better and came up with more creative ideas than humans (as measured both by an objective calculation of “semantic distance” and by subjective ratings from human judges). The chatbots were also more consistent, showing less variability than the humans.
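For the curious, “semantic distance” scores an idea by how far its meaning sits from the original object in an embedding space. Here’s a minimal sketch of how such a score might be computed in Python, assuming an off-the-shelf sentence-transformers model; the study’s exact methodology and model choice may differ, and this is purely illustrative:

```python
# pip install sentence-transformers numpy
# A minimal sketch of a "semantic distance" creativity score, assuming an
# off-the-shelf embedding model (illustrative; not the study's exact setup).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_distance(item: str, proposed_use: str) -> float:
    """Cosine distance between an item and a proposed use for it.
    A larger distance means the use is semantically further from the item,
    a common proxy for originality in divergent-thinking research."""
    item_vec, use_vec = model.encode([item, proposed_use])
    cosine_sim = np.dot(item_vec, use_vec) / (
        np.linalg.norm(item_vec) * np.linalg.norm(use_vec)
    )
    return 1.0 - float(cosine_sim)

# A conventional use should generally score a smaller distance
# than a more unusual, "creative" one.
print(semantic_distance("brick", "build a wall"))
print(semantic_distance("brick", "grind into pigment for painting"))
```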
However, the most creative ideas from humans were on par with or better than those from the chatbots. The study concluded that in instances of high creativity and divergent thinking, the best humans still outshine AI, underlining the unique aspects of human creativity that AI has yet to replicate or surpass.
While generative AI is immensely powerful at some tasks, it fails at others, sometimes completely and sometimes subtly. It’s great at turning CMO challenges into a lyrical poem, but it’s terrible at math, and I’ve never been able to get it to return text that fits an exact word count.
There’s a boundary that separates tasks where AI does well from tasks where it does poorly, but unless you use AI frequently, it’s hard to know where that boundary lies. The HBS study calls that unclear line the “Jagged Frontier.”
Ethan Mollick, one of the HBS study’s authors, explains it like this in his excellent post:
“Some tasks that might logically seem [to be]…equally difficult – say, writing a sonnet and an exactly 50-word poem – are actually on different sides of the wall. The AI is great at the sonnet, but, because of how it conceptualizes the world in tokens, rather than words, it consistently produces poems of more or less than 50 words. Similarly, some unexpected tasks (like idea generation) are easy for AIs while other tasks that seem to be easy for machines to do (like basic math) are challenges for LLMs.”
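You can see the token/word mismatch Mollick describes for yourself with OpenAI’s open-source tiktoken tokenizer. The sketch below is illustrative; “cl100k_base” is one real encoding, and the exact split varies by model:

```python
# pip install tiktoken
# Illustrates why an LLM can't reliably count words: it operates on
# tokens, and token boundaries rarely line up with word boundaries.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Shall I compare thee to a summer's day?"
tokens = enc.encode(text)

print("words:", len(text.split()))         # 8 words
print("tokens:", len(tokens))              # typically a different count
print([enc.decode([t]) for t in tokens])   # pieces rarely match whole words
```

Because the model predicts tokens rather than words, “exactly 50 words” isn’t a quantity it directly sees.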
To examine this, the HBS study included a task designed to exploit AI’s blind spots, leading it to give a wrong but convincing answer to a problem that humans could easily solve. Sure enough, human consultants got the problem right 84% of the time without AI help, but when they used the AI, they did worse, getting it right only 60-70% of the time.
That’s why Mollick warns against “falling asleep at the wheel.” Over-reliance on AI can lead to mistakes, especially when humans let AI take over tasks it’s not equipped to handle. In another HBS study from Fabrizio Dell’Acqua, recruiters who used advanced AI found themselves becoming careless and less discerning in their judgments. They overlooked highly qualified applicants and ultimately made poorer decisions than those who either used less sophisticated AI or no AI at all. When AI performs extremely well, there’s a tendency for humans to disengage, allowing the machine to take full control rather than using it as an augmentative tool.
OK, we’ve learned that:
- AI can deliver significant gains in productivity and quality, and acts as a skill leveler.
- The frontier between tasks AI does well and tasks it does poorly is jagged and hard to see without frequent use.
- Over-relying on AI inside its blind spots leads to careless work and worse decisions.
So how should we use all these insights to navigate the path of when and where to use artificial intelligence?
The HBS/BCG study identifies two approaches to navigating this jagged landscape. Workers using the “Centaur” approach divide the work cleanly between humans and machines, strategically allocating tasks based on each one’s strengths (e.g., the human guides the strategy while the AI does the brute-force work). On the flip side, workers using the “Cyborg” approach integrate humans and machines deeply, working in tandem at almost every step (e.g., initiating a sentence for the AI to complete).
There’s no single right strategy. For example, I used both approaches in helping to write this post, sometimes using GPT-4 to summarize the original research and other times having it draft or finish specific sentences and paragraphs. And no matter what, I reviewed the results and made sure they fit my voice. The key takeaway is that the strongest approach combines the strengths of humans with the strengths of AI.
From account-based marketing (ABM) to branding content to customer success, these results suggest nuanced ways in which humans and AI can work together to maximize efficiency and effectiveness in your go-to-market.
AI offers significant advantages in automating and optimizing various aspects of a B2B go-to-market strategy. However, it’s crucial to remember that the technology is not a silver bullet. A hybrid approach, blending AI’s speed and data-crunching abilities with human creativity and nuance, appears to be the most effective strategy to optimize B2B go-to-market for the foreseeable future.