••• SPONSORED SECTION •••
Billions of dollars are being invested in North Carolina for the development of the newest tool for business: Artificial Intelligence.
For example, computing giant Lenovo has partnered with N.C. State to develop geospatial AI, aiming to optimize agriculture applications. The university has a $20 million grant from the National Science Foundation to study AI’s impact on education.
Cerebras Systems, a Sunnyvale, California-based AI company, is working to develop one of the nation’s largest supercomputers in Asheville. Companies in healthcare, finance and virtually every other industry are seeking to integrate AI into their businesses. Duke and Microsoft recently announced a partnership aimed at using the tech giant’s Azure AI system to simplify and optimize every facet of the healthcare system.
In April, Cary-based SAS Institute introduced new AI capabilities to its customers and has said it’s spending $2 billion in the next few years on AI.
Business North Carolina recently gathered a group of industry leaders to discuss the issues surrounding artificial intelligence and what businesses can do to prepare for it. The conversation was moderated by Executive Editor Chris Roush. What follows is an edited transcript.
The discussion was sponsored by:
- Smith Anderson law firm
- SAS
WHAT ARE THE BIGGEST ISSUES SURROUNDING ARTIFICIAL INTELLIGENCE?
OLSON: Like any new technology, one of the biggest challenges is the mismatch between the hype and reality. I think we’re in this interesting time where there’s going to be an imbalance between those two competing forces, and I think navigating through it in the next few years is going to be very interesting, particularly at the pace of innovation we’re seeing with AI.
FRUTH: I work with AI in a number of ways, helping clients that are using AI in their business, offering services for oncology, imaging, even interviewing people to screen for jobs, which is crazy to me. And to me, the biggest issue, the one that is the most important, is who’s responsible when things go wrong.
GOLDSTEIN: Given the large number of people that I interact with on this topic, it’s probably no surprise that what I think is the biggest challenge in generative AI specifically is the people factor: the cultural changes required, the degree to which people have to reimagine their tasks. Some of them can’t deal with feelings of being threatened, concerns about being made obsolete. This cultural transformation is going to be very important.
MCCLURE: If I had to pinpoint one of the major challenges we have in front of us, particularly from an enterprise-wide business perspective, it’s getting data governance and AI governance right, and getting that right quickly. Now is the time to look at your governance strategies, to look across your business holistically, to partner with your legal team now more than ever, and to establish those practices and processes.
BOYD: I see a number of issues that are of great concern and of great opportunity today with AI. No. 1 is just separating AI from machine learning, from generative AI, from chatbots, all these different components. There are some great, low-hanging fruit opportunities in business and education and healthcare where we can save lives and do great things. I’m interested in helping people do those things first, and then let’s push some of the more difficult things out a little bit further.
CUKIC: This is not necessarily a new technology, but it’s kind of astonishing how it has become the center of the universe in the last couple of years. There are several challenges that I see with the technology. On one side, we are witnessing the early phases of a complete revolution in the workplace and automation. On the other side, this is one of those technologies with a very, very high barrier to entry. Training large models costs millions and millions of dollars, and it is really important not to let that kind of power end up in the hands of the very few companies that can afford it at this point in time. And because of the energy usage, we will have to decide between the level of progress in AI and the carbon-neutral future that we have to achieve. I don’t think we can achieve both together.
HOW DO WE OVERCOME THE FEAR THAT EMPLOYEES HAVE OF BEING REPLACED?
MCCLURE: My answer, with AI on the whole, is that generally speaking, you shouldn’t feel that it is going to immediately replace you. The human element cannot be overstated in all of this. I’m in marketing, right? So what are the skill sets that you bring to bear as a marketer? If I don’t know how to create derivative content based on the long-form content that we create ourselves, and how to use AI to do those things, and someone else knows how to do that, they’re going to replace me. It’s less about the technology itself being the sole replacement, and more about people who have better skills around AI.
BOYD: This is really the age-old battle between capital and labor that we’re right in the midst of, and it started with the Industrial Revolution. Two hundred years ago, 91% of us were working in agriculture. And then 100 years later, like in 1900, it was 41%. Today it’s 2%. There are more app developers and software engineers than there are farmers in the U.S. We found something else for people to do. We made those adjustments. The difference is we had 200 years to make that adjustment. What’s happening now is happening faster than most of our systems are designed to adjust to. I spent eight years on the board of trustees of Wake Tech, and I went to radiology saying, “Why are we still teaching this like this?” We already had machines that were better by several orders of magnitude.
GOLDSTEIN: My perspective is a little brighter because I’m more on Team Capital than Team Labor. If I’m putting together a business, I want to be able to do as many things as possible quickly. The key takeaway is that a number of roles are going to be transformed dramatically. When I was running my last startup, a lot of what we were struggling with in the marketing department was that the time to launch a campaign would be months. I was quite happy with the email campaigns and the like that ChatGPT was able to generate, in combination with image generators. I do think there’s a human role, but that human role is going to be less about creating the end item and more about evaluating what you’re getting and providing feedback. People are going to have to adjust and become students again and learn skills as if they were 18 or 20.
BOYD: The central problem of the century is figuring out what the right balance is between humans and automation to optimize the outcome. Anybody who gets it right is going to prosper. Anyone who doesn’t figure that out is going to have trouble.
OLSON: It’s hard for me to think of a practical way to overcome fear other than just continual exposure. The more we expose folks to it, the more they understand and appreciate and start adapting their own behaviors to how this can actually help them, not hurt them. I think that’s ultimately, fundamentally how it’s going to happen.
I don’t want to hire people out of college that have never used OpenAI or ChatGPT or any of these systems. I want folks that downloaded open-source models onto their laptops and played around with them and configured them for their own needs. I want people that are AI literate, and that eventually is going to come into the workforce. The second piece is long term. At the end of the day, all of our companies are getting measured on growth. Companies care about growth so they can save money in one area and plow it into more sales, maybe M&A, maybe R&D, that will continue to help them sustain their high growth.
The third piece is I’m a people person. I like being in the office. I like being around humans. I think this is going to give rise to a new set of jobs that are very service oriented. You’re going to see jobs that have yet to be invented, that are going to be service-oriented jobs because humans like being with other humans. Nothing’s going to automate relationships. I’m very pro human and optimistic.
HOW MANY STUDENTS ARE GETTING EXPOSED TO AI?
CUKIC: They are getting exposed whether they want it or not, and whether we want them to or not. What has to change is the way we educate them and put that into context. There are professions that are going to be minimally impacted – manual labor professions, construction, transportation, things like that. They have already gone through their automation cycles. But 90% of the workforce will need to have user-level familiarity and literacy with AI concepts. Everyone who finishes not just college but even high school will need to be trained in and be literate in artificial intelligence. For most of us, we will have to adapt to the new demands and the new position descriptions.
ARE LAW FIRMS USING AI TO WRITE BRIEFS?
FRUTH: There are some examples where it didn’t work out so well, because sometimes large language models make things up. My firm, Smith Anderson, is using AI for specific things, and I’m actually in charge of the task force there to figure out how we need to use it more and how to use it responsibly, and to find the tasks where maybe you could write draft briefs.
But what’s really helpful for us is when you have 100 examples of a contract you’ve negotiated. AI is really good at sorting that out and figuring out where your customers tend to push things. It would be really interesting if you end up having AI negotiate things for you. And you can imagine each side having kind of a playbook, and you could see a situation where those two AI agents might actually do a better job than lawyers, because they could have iterations, and they could really optimize each side in a way that after two or three times comes up with a compromise. Lawyers are expensive, and just imagine if people didn’t have to spend as much to get decent legal advice.
IS THERE AN INDUSTRY NOT LOOKING AT AI AS MUCH AS IT SHOULD?
BOYD: If you go into the doctor and they’re trying to diagnose you, why aren’t they looking at a big database of people that have the same sort of history of healthcare interventions, family history and genetic information, and asking, based on this massive database, what are the recommended next steps? Whether you’re coming out of college and you want to work with Pendo or you’re working in marketing, if you’re not augmenting yourself with that kind of understanding at machine speed, you’re at a disadvantage, and it’s just not sufficient anymore.
MCCLURE: We’re seeing certain types of technologies from a healthcare perspective. You have to be able to run a test and ultimately understand what the patient outcomes are going to look like. There needs to be that synthetic data layer to remove any confidential information. I would actually say there is a lot going on in health, but it’s mostly behind the scenes. So your doctors aren’t transforming, but the companies that provide them things are starting to do things.
In health, you have both Moderna and Oscar Health partnering with OpenAI, as well as programs run by both Google and Microsoft that they publish about all the time.
GOLDSTEIN: I think there’s an underestimation of what this is likely to mean for physical work like construction. It is very clear that the application of large language models to embodied robots has worked vastly better than anyone expected.
The target range that gets tossed around is about $30,000 for a humanoid robot, and at that kind of number it becomes very competitive, especially in settings like the modern oil rig. If the robot falls in the ocean, you’re out $30,000. That’s a lot better than an employee falling into the ocean, with the consequences both for the employee and their family and for the business. There’s a tendency to try and draw boxes and say, “This won’t impact here,” and I just don’t believe that’s true. It’s a general-purpose technology that’s going to sweep over everything, and it’s already making dramatic impacts day to day in my working life.
CUKIC: I do have a little bit of a problem with synthetic data, especially in medicine. IBM Watson failed primarily because it didn’t have enough data to train its models, so it used synthetic data. Let me give you another example. We are talking about how to make automated driving systems, and there’s lots of hype and marketing about the status of those systems. The vast majority of those systems are at level two, which explicitly says you have to pay attention to the road and keep your hands on the steering wheel. What is the actual saving of such a technology at the current time, when you cannot do anything else but drive the same way? I just want all of us to be a little bit hesitant, especially in areas where we are impacting human lives, where we are impacting critical situations.
GOLDSTEIN: I completely disagree. I would not go against this technology. Think of the data from the self-driving car. It knows when I grab the wheel and take control. That is the best data possible. Swiss Re published the death rates and the anticipated cost of auto insurance, and it’s a factor of 100 better with self-driving cars. The records are very clear. Level two is largely an artifact of people being – Luddites would be a strong way to put it – deeply fearful of this kind of technology. The idea that technology is worse than people at dealing with unexpected situations is a massive overestimate of how people behave.
BOYD: We’re going to look back at how we’re driving on Interstate 40 and just go, “That was ridiculous. I can’t believe we have a million accidents a year.” If you instrument the highway right to help the cars, and then the cars actually talk to each other, not just to themselves or to a system, you get a system that is way more safe than any human could possibly be.
I HAVE DRIVEN WITH FRIENDS IN THEIR TESLAS, AND WHEN THEY TURN ON THE AUTOMATIC, IT SCARES ME. HOW DO WE OVERCOME THAT?
GOLDSTEIN: It’s an interesting question. Look, I have a Tesla, and when I turn on automated driving, I get that same feeling. I’m uncomfortable, but I understand. It’s not based on anything but my own sort of need for control, right? And I ask myself, OK, if I were in a plane with some pilot who I’ve never met, why would I not feel nervous in a plane? And why do I feel nervous in a car? What I have to do is sort of adjust my behavior. That’s the challenge. We are all creatures of our experience.
HOW LONG IS THAT GOING TO TAKE?
FRUTH: I think one interesting aspect of that is when you have customizable AI agents, I feel like it needs to know a little bit about me. And let’s imagine it says, “OK, well, when you drive, I will drive like you.” The AI agent will observe the rules of the road. But I like to accelerate fast. I don’t like to be pokey. And then I think that question gets really interesting: is it the algorithm, or is it the user?
BOYD: It comes back to what the right balance is between humans and automation to optimize outcomes. And the central idea there is that we humans haven’t had an upgrade, by my reckoning, since the Pleistocene, whereas these machines are advancing at the rate of Moore’s Law continuously. We are going to be turning over more of these activities to automated systems.
CUKIC: We have to disagree in order to come to solutions and in order to come to natural conclusions. And there is nothing wrong with that. I also think we cannot extrapolate the rate of development and perfection of the technology based on the current rates. Eight years ago, we were all discussing the grim future of truck drivers who were going to be completely replaced by automated driving systems. And truck drivers are still very well employed and will be for quite some time. I think the last 5% of the performance is always the slowest, and it will take a little longer than we think. There is nothing wrong with hype, as long as the hype solves real problems. But humans will need to be working with the technology in order to assess how much they can trust it.
FRUTH: We may be overestimating the desire and the need for customization. If I’m on the road, I may not want you to drive like you because you may not be a very good driver, right? I certainly wouldn’t want cars generally to drive like me because I’m a mediocre driver.
LET’S END ON A HIGH NOTE. WHAT ARE YOU EXCITED ABOUT WITH AI?
CUKIC: I’m absolutely excited that the technology has reached a mediocre level of maturity. And there is a lot of development ahead of us that will make it so much better. And I’m really excited about the opportunities in the labor market and all the new opportunities for jobs that are going to emerge from this. I’m just cautiously optimistic because I think we cannot hype too much because that creates a backlash.
BOYD: This is the simulation century. All the stuff I did at Lockheed, stuff I did in gaming, making simulated models of the world: that’s where the next big breakthrough is going to come from. For example, you can bring AI and simulated digital twin models of the world into the service of trying to predict the future. Let’s decide what kind of North Carolina we want. What kind of Raleigh do we want? And then have the AI actually help us design toward that.
And in this political climate we’re in right now, if you’re running for governor, bring your data and your models and your policies, and let’s run them in the simulation and show me 10 years from now how we’re better off. And then you would say, “Let me see my opponent’s simulation.” That’s what the argument should be about – your data and your model. It’s not about personalities or feelings or any of that stuff. That’s what I want to make happen. Let’s put AI in the service of better health, more economic mobility, all those things.
MCCLURE: I think we are finally getting past the hype of it all. I’m excited to get past that hype and get into the reality of implementation. And I think the transparency aspect of this needs to be so critical, not just from a business perspective. When I’m on social media, when I am looking at an advertisement, is this something that has been AI generated? If we are not demanding that ourselves, then it’s going to be a slow roll in a lot of ways.
GOLDSTEIN: I’m much more excited about the transformation that’s happening today. Just today, the New England Journal of Medicine described how AI allowed us to predict 95 potential antibiotics, and 78 of them have turned out to be biologically active and effective. Producing new antibiotics has traditionally been at the level of about one every year or two. So that’s an enormous deal.
What I see coming down the pike is a transformation in the e-commerce experience – how you engage with the software systems that are all around us. I can say, “Here’s a picture of a dress I like. Can you find me shoes that go with it?” And it works. This stuff is already in demo at places like Google Cloud. You’re going to see these kinds of much more human-centric interfaces and tools that make human life fundamentally much, much easier, much, much better and much, much cheaper.
It’s pretty clear I’m the most enthusiastic on this panel about where we are, but I think it’s important to understand that I’m enthusiastic because I see the outcomes. I see the transformation in medicine. I see the possibility that we can cut our road deaths by multiple orders of magnitude. I see these things here today, not five or 10 years down the line.
FRUTH: I don’t know how we’re going to regulate all this stuff. I honestly don’t know what’s going to be effective. There’s the approach they’re taking in the EU, which is risk-based, and then there’s what we do in the U.S. and some of the states, focusing a lot on self-governance and voluntary standards. By the time you get done with the hearings, the technology has already changed, right? So it’s going to be a big challenge to figure out how to have regulations that don’t impede technology, because no one wants to slow things down too much, but that also make sure we meet these goals we’ve all identified. I’m excited to try to be part of figuring all this out.
OLSON: I think one of the overall outcomes of AI is just general growth in the economy, growth in jobs. I think this is going to be a growth opportunity for everyone. Every revolution we’ve had has resulted in growth. Every revolution started with fear, and they’ve all resulted in growth. It will be no different with this one. We’re going to basically take all the manual, tedious things out of our business that we have to do but none of us really love doing. We’re going to simplify, automate, and redirect and reallocate all those dollars to innovation, to building new things, to hiring more people.
And if I put my societal hat on, I think one of the things that we haven’t talked about on this panel is that AI levels the playing field. This tech levels the playing field for people from different backgrounds. Imagine if you’re not a great writer, but you want to write something and you want to sound polished and professional. You can now do this. Knowledge is the great equalizer in our economy, and this democratizes knowledge for a lot of people. That is super powerful. ■