Podcast title: Current state of middle market AI adoption
Host: Bill Hayes, Editor-in-Chief of Directors & Boards
Featured guest: George Casey
Bill Hayes
Welcome to Executive Session from Directors & Boards.
Hello, and welcome to Executive Session, the official podcast of Directors & Boards. I'm your host, Bill Hayes, editor-in-chief of Directors & Boards. In this technology-focused episode of Executive Session, we'll also address AI with a two-pronged discussion. You'll hear about the state of middle market adoption of AI from RSM's George Casey and the best ways boards can accelerate their knowledge of artificial intelligence from Alan Grafman.
This episode is brought to you by RSM, a proven provider of high-quality assurance, tax and consulting services that enable companies to flourish no matter their complexities. Now let's get to our first interview.
RSM recently released a report, The RSM Middle Market AI Survey 2024: U.S. and Canada, which aggregated the responses of over 500 executives across a range of middle market companies in varied industries within the stated countries. The goal of the report was to discover, among other things, how the middle market is using AI, how generative AI is making an impact, and what generative AI risks and challenges companies are facing.
Among the findings, 78% of respondents report formally or informally using generative AI within their operations, and 54% of executives who use AI say implementing the technology has been harder than suggested. We spoke to George Casey, data science and AI practice leader for RSM US LLP, to go deeper into the findings.
BH: George, as part of RSM's recent report, The RSM Middle Market AI Survey 2024: U.S. and Canada, it was revealed that 78% of middle-market executives surveyed reported either formally or informally using AI. Can you tell us a bit more about this finding and how companies in the U.S. and Canada are getting started with AI technology?
George Casey
Yeah, we thought this was really interesting in terms of it being a fairly high number, especially since we think the middle market might be slower to adopt. We think the distinction here is that notion of "informally." We're looking at a lot of people who, kind of on the side, have ChatGPT open or are using a beta product or something as part of supporting their day-to-day workload. So we think that's creating a lot of this early adoption.
We also see adoption in the form of applications they're using. They might have a Microsoft application suite with Copilot enabled, and so they feel like, "Hey, we're starting to use AI," even though it's fairly lightly layered into the application as opposed to major business use cases.
Another example we're seeing is proofs-of-concept type projects or initiatives—"Hey, let's see if we can't answer this question or address this prediction." I think that's where we're seeing lots of early adoption but fairly shallow as it relates to how it's integrated with the business.
The other thing we saw in the survey, because we asked this by department, was that certain departments were more likely to be adopters. For example, IT was very early, obviously strong adopters of new technology, whereas legal was very much a laggard, likely because of uncertainty around compliance or risk. I think that's not uncommon, and it was a good confirmation of some of our expectations there.
BH: What use cases have you found as far as companies beginning their AI journeys, so to speak? And are there success stories or companies whose failings with AI perhaps can be learned from?
GC: Yeah, I think we're seeing some definitely interesting use cases, particularly generative AI use cases. We think about generative AI as, "Hey, is there an agent that can help me, support me in my tasks, or make me more effective?" Or, "Is there a way I can engage stakeholders?" Those stakeholders might be our customers, and there we're thinking about AI chatbots that go beyond the traditional chatbot we've seen for years. Those were decision-tree oriented, like if this, then that. With some of these language models, the chatbot can say, "I may have never seen this question, but I understand enough about the context, or I have enough document support, that I can answer it the first time I've seen it."
I think those types of use cases, we're seeing definite early success in engagement and people becoming aware of some of these capabilities. Other areas we're seeing are kind of around language, like document summarization. A great example we have is one of our clients is using this for grant applications, so being able to increase their throughput tenfold—their ability to respond to grants by both summarizing "What is it I have to answer as part of this response or this RFP or this grant?" and then, "What would a good answer be based on what I've responded in the past that was successful?"
Another use case we're seeing around AI is a good example because it bridges the old world and the new: churn. We've been predicting churn for a while. We can look at things like behaviors and demographics and say, "What's the likelihood this customer is going to renew their relationship with me?" But now, with generative AI, we can say, "All right, now that we have a good idea of who might defect, what's the best strategy to intervene or to compel them to renew?" And I can do that by running massive experiments and quickly trying lots of different engagement techniques, because I can rapidly create marketing content as opposed to one-off manual campaigns or little A/B tests. I can find all of these different segments or clusters of behavior and get really hyper-focused on the right compelling interaction.
Then, thinking about the failures we're seeing or the lessons learned, I think some of the challenges are just overhyped expectations about how quickly these tools are going to change the world, and ROI is one of those components. The other challenge we still see is that, fundamentally, these machines are still prediction machines, right? They are predicting what it is you're asking, what you're looking for, what the right answer would be, and they'll respond if they have enough confidence.
The challenge is, in most of these models, if the model is 51% confident, that means it will be right more often than not, but just barely. The other thing we're seeing is a lot of concern about when the models are wrong, when they give an answer that's not appropriate, or when they hallucinate. There's concern that, "Hey, maybe this isn't perfect." I think the challenge is recognizing, as we like to say, that all models are wrong, but some are useful. The idea is, "How can I get value out of something, recognizing it isn't perfect, when I do have a human in the loop and I do have some control?"
I think that's the other lesson learned around making sure we think about the impact of these applications and when we need to have quality control or review, or how they can still add value without necessarily being completely autonomous.
BH: A lot of times when a new technology comes along, the tendency can be to say, "It's there, so we should use it." But of course, it has to make sense for the individual companies. Whether it's from the report or your own personal learnings, what is the return on investment on implementing AI technology? What's in it for companies?
GC: Yeah, this is a great question. Very true, because I would tell you, for the last year and a half, most of my conversations start with, "Hey, our CEO said we should use AI," or "Our board is asking what we're doing with AI," and AI by itself shouldn't be the goal, right? We should still be thinking about value and think about, "Hey, what are the valuable problems to solve, and what would be useful?"
I think applying that in terms of ROI is a challenge. I think about this like physics: if I hold this pen up in the air, it has potential energy, but until I drop it, that energy hasn't converted to kinetic energy. With AI, we see potential ROI. If I say, "Hey, I have 100 employees, and I can save each of them 15 minutes a day," that's 1,500 minutes of potential savings a day. But it's only potential, right? Until I have something for them to do with that time, and I've redistributed that workload so they can go right to the next task, we see a lot of leakage there. Like, "Hey, I'm 15 minutes more effective, but I don't know what I'm doing with that time yet." So there really wasn't any ROI.
If I go at the end of the quarter and check my financials, I haven't seen anything change. I haven't reduced staff; I haven't been able to grow with less staff or change ratios yet. I think that's where we're seeing that some of these things might take longer to mature. Now that we've created the extra capacity or productivity, how do we use it or apply it?
I think the other thing we're seeing is companies asking how to stack these wins. They say, "Hey, it's a combination of these four agents I'm using, these scenarios, these use cases, these better predictions." When I start stacking them up over time, now I'm getting into dramatic ROI that can be measured and really applied to the business. We're seeing some of that at large-scale companies talking about their wins, but I think the mid-market is slower to adopt. We just have to be realistic about, "Hey, what's going to be the immediate return, and what's the longer-term return in terms of, 'Let's upskill, let's get good at solving these problems, let's get better at being data-driven,'" and understand there's a whole slew of benefits that come with that.
I think part of that is the promise. It's like, "Hey, what's the value or the ROI in your first year of college?" Most of the ROI comes when I've completed college. Well, I'm that much closer to completing. I think part of this is foundationally, people are going to get better with their data, better at organizing and capturing their data and curating it in a way to serve these use cases and these models. I think there's still some promise to come in terms of when we'll achieve that ROI. But I'm a big believer—sort of like, "What's the ROI of having electricity?" Well, it's just a given. I think as we look forward, it's going to be a given that we are going to need these types of capabilities. Getting good at this, understanding how to upskill your organization, your teams, your tools, is going to be important and valuable.
BH: That was George Casey, data science and AI practice leader for RSM US LLP.