As AI continues to dominate headlines, conversations about the right way to use these systems keep growing louder. Leaders know AI can transform their business, but concerns about privacy and potential biases hold some companies back from taking advantage of it. At the same time, other organizations are in a race to move forward at all costs—even if it means creating bigger risks down the line.
To separate AI facts from fiction, we gathered experts from Salesforce, Telefonica, and Fox Rothschild to shed light on the frameworks leaders need to harness these innovations ethically. All three speakers emphasized that we’re at a critical turning point in the evolution of ethical AI.
“This is the time where we’re still in the early enough stages to mitigate potential harm. We’re at the point where we can still pivot, redirect, and recalculate, so now is the time for a strong call to action,” says Kagan. “It’s really important that there’s a buzz now, because public opinion really is driving new regulations.”
Goldman expands on that point, encouraging executives not only to consider how ethical potential AI systems are but also to assess their internal implementation processes.
“When it comes to implementing things responsibly, leaders need to think about the safeguards they’ve set up and how they can test the data that’s going into AI systems. They need to consider the fairness of the models themselves, and then the controls over what data goes in and what comes out.”
Leaders who thoroughly assess potential AI systems, implementation processes, and internal data collection can feel confident that they’re taking the steps needed to ensure their organizations harness these innovations ethically.
While AI innovations have the potential to level the playing field, there’s also growing concern that these systems will cause some employees to get overlooked—an oversight that’s particularly detrimental as skill gaps widen. Learn how to approach AI innovation responsibly to drive better outcomes for your people and your business.
“There’s a lot of conversation about the impact of generative AI in the workplace. What I will say is that right now, generative AI is like a really good assistant. It has a lot of limitations. But I think there’s no doubt that there are really serious workforce transformation issues that we’re going to need to grapple with as a society, and you see that playing out already, almost faster than anyone might have anticipated.”
“Everybody agrees that generative AI will have an impact, but there’s uncertainty about the type of impact and how good or bad it will be. And I think that will affect talent management and workforce planning, because if the types of jobs shift over a short period of time, you’ll have to plan for that. But nobody knows yet in what direction they will shift.”
“There’s a saying, ‘If you pull one hair, then the whole body moves,’ and I think that applies here. AI strategy, data strategy, and digital transformation are all intertwined with each other. If you change one, it impacts the rest, and sometimes in a way that you can’t really tell immediately. They’re deeply intertwined, and the knock-on effects aren’t obvious; they might only manifest later.”
“Executives have a really important role in setting the culture and the cues for how things get done responsibly, and in making sure people know that it’s their job, and everyone’s job, to mind the ethical implications of these products and to understand what AI is and isn’t good at.”