2024 AI Outlook: Expert advice on navigating the AI economy

December 18, 2023
Artificial intelligence | Digital transformation

Harnessing AI’s opportunities and avoiding risks

AI has become every organization’s top challenge, and its top opportunity. Appian interviewed industry experts across top consulting firms, including our own George Casey.

George leads the Advanced Analytics practice at RSM. In this role, he advises clients on both strategic and technology issues important to delivering value with data science. George has been published in several professional and trade journals and is a frequent seminar speaker. He is a Microsoft Certified Trainer and has written several manuals for Microsoft on Reporting and Business Analytics.

With AI, everyone sees opportunity. I’ve yet to come across an industry that can’t take advantage of these techniques. It just depends on the industry.
George Casey, principal, data scientist, RSM US LLP

Q&A with George Casey

Q: How have you thought about AI over the past 5 or 10 years and how has it changed recently?

There are multiple ways of defining AI and its components. The one I like is that AI replaces some tasks we used to think a human was required for. Whether that’s driving a car, reading a text message, or interpreting a picture, it all starts with a prediction and an action. With ChatGPT, for example, it’ll see you put in a bunch of characters, then predict your intended meaning. And it’s doing that using natural language processing and the ability to understand, “oh, well, this looks like the English language and it looks like these words that I’ve seen. And when I see them in this pattern, this is typically what I can infer from that.”
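
To make that pattern-prediction idea concrete, here is a toy sketch of my own (not anything George or Appian builds): a bigram model that guesses the next word purely from word pairs it has already seen. Production systems like ChatGPT do this with vastly larger models and far more context, but the core move, predicting a likely continuation from observed patterns, is the same.

```python
# Toy illustration of "predict the next word from patterns seen before".
# The corpus and the bigram approach are illustrative assumptions only.
from collections import Counter, defaultdict

corpus = "the car ahead is slowing so the car brakes and the driver reacts".split()

# Count which word tends to follow each word in the corpus.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequently observed continuation, if any."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "car", the word seen most often after "the" here
```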

What’s more available today are massive amounts of data and scalable computing power. We can do these things in real time, in memory: if I’m driving my car, it can predict that I’m closing too fast on the car ahead, then act and apply the brakes. The ability to make that prediction has been around for 50 years. But before, it would take so long that by the time the prediction finished, you would have hit the car. So I think that’s why we’re seeing such an uptick in this web of disruptive technologies: interconnected devices and the growth of the Internet of Things, then massive available datasets and compute power.

Q: Where do you think we're seeing the most impact when it comes to AI?

Everyone sees opportunity. I’ve yet to come across an industry that can’t take advantage of these techniques. It just depends on the industry.

Take healthcare. I have colleagues who will say that if you don’t use AI as a large healthcare practice, you’re committing malpractice, because you’re not serving patients with the best state-of-the-art techniques. In life sciences, we’re seeing people do massive things around drug development and clinical trials. It’s changing the game in clinical trials: by using a simulation or a model, we can quickly evaluate compounds without all of the physical testing we used to have to do.

Industrial companies are seeing uptake on the shop floor, in what we’d call the “factory of the future” or “Industry 4.0.” They’re moving beyond basic automation. Now we introduce things like computer vision, where we can use cameras, video images, and data to infer things like shop floor safety, predictive maintenance, and quality, or even look at all the parts that come off the shop floor and say, “Hey, is this a good part or a bad part?” In the old days, they would have a sampling program with human inspection that hopefully caught most of the problems. Now, we can catch 100% of the issues because every part passes by a camera that instantly detects defects.
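
As a rough illustration of that good-part/bad-part decision (an assumed setup, not Appian’s or RSM’s actual implementation), the sketch below runs a tiny PyTorch classifier over a stand-in camera frame. In practice the model would be trained on labeled part images and fed real frames from the line.

```python
# Minimal sketch: a small CNN labels part images as "good" or "defect".
# The architecture, 64x64 grayscale input, and class mapping are assumptions.
import torch
import torch.nn as nn

class PartInspector(nn.Module):
    """Binary classifier over grayscale part crops (64x64)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, 2)  # 64x64 -> 16x16 after pooling

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = PartInspector().eval()     # in practice, load trained weights here
frame = torch.rand(1, 1, 64, 64)   # stand-in for one camera frame off the line
with torch.no_grad():
    label = model(frame).argmax(dim=1).item()  # assumed mapping: 0 = good, 1 = defect
print("good part" if label == 0 else "defect detected")
```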

Now we’re talking about saving a lot of money or creating additional value opportunities for organizations. It starts with understanding that they have a problem. Then it becomes, “How can we solve it?”
George Casey, principal, data scientist, RSM US LLP

Q: It seems like the world is focused on generative AI, but there are a lot of use cases around data and AI in general. Is that fair?

I 100% agree. It’s not about the technology—all innovation starts with a problem to be solved. When you look at opportunity that way, it’s important to focus on the “why?” Why would we do this? Why are we trying to solve a particular problem? What's in it for us? Where is there value? Answer those, and then we can start getting to the how.

For example, think about nonprofits and how they operate. Their challenge is predicting member or donor engagement, depending on their charter and structure. By better understanding the signals they get from members’ behavior and demographics, can they predict whether those members will renew? And if they can predict that a member won’t renew, can the nonprofit team design an intervention strategy?
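
To show what that kind of renewal prediction might look like, here is a minimal sketch using scikit-learn on synthetic member data. The column names, the synthetic labels, and the choice of logistic regression are illustrative assumptions, not a description of any RSM engagement.

```python
# Minimal churn/renewal sketch on made-up member data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
members = pd.DataFrame({
    "events_attended": rng.poisson(3, n),
    "emails_opened": rng.poisson(8, n),
    "years_as_member": rng.integers(1, 20, n),
})
# Synthetic label: more engaged members renew more often.
renew_prob = 1 / (1 + np.exp(-(0.4 * members["events_attended"]
                               + 0.1 * members["emails_opened"] - 2)))
members["renewed"] = rng.random(n) < renew_prob

X_train, X_test, y_train, y_test = train_test_split(
    members.drop(columns="renewed"), members["renewed"], random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Members with a low predicted renewal probability become outreach candidates.
at_risk = X_test[model.predict_proba(X_test)[:, 1] < 0.5]
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
print(f"{len(at_risk)} members flagged for an intervention campaign")
```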

Q: What big risks should people think about around AI?

The first one that people get concerned with is simply access to data. If I’m sharing data with an entity, whether that’s the computer or the organization that controls it… I don’t necessarily want them to have full access to that data.

The healthcare example we talked about before is illustrative, and it’s an area where a lot of people have concerns. The first risk is that many of these systems work based on access to massive datasets. How can you make sure you’re governing that data access appropriately to avoid misuse or misappropriation?

Another big risk is the data we use to train these models. The underlying data may contain bias, and with AI we may institutionalize that bias, because we base decisions on how the data was gathered, not necessarily on what’s appropriate or representative.

So it’s important to assess whether a dataset is appropriate to be used for a model or if it represents a specific bias that you wouldn’t want to be pervasive. That's a risk.

A third risk is around autonomous control, where you take humans out of the loop and the machines do more than you originally planned for. While that’s the most publicized worry, I think it’s a lower risk due to the controls we have as developers and consultants when implementing these systems. We can design around this risk. We just need to ask the right questions, assess what data the system is exposed to, and then decide what actions it’s allowed to take.

Q: One more question. How do you feel AI plays in the automation space?

You can look at companies like Appian and others that have applied AI. You’re not necessarily creating bespoke models, but you’re applying the technology in a low-code environment where you enable a citizen developer to take advantage of AI. That gives people a head start, or a leap ahead, rather than having to build all this from scratch. They can say, “Hey, there’s a specific process we want to automate, and we’re going to use some AI to take what was difficult before and help us make some of those decisions.” That approach is much easier to adopt than growing systems from the ground up.

Download the guide to read George's interview, and to see the insights from other AI professionals.
