As artificial intelligence’s influence grows across every aspect of business, from recruiting the right talent to planning succession for key roles, companies are looking for ways to make sure their AI is ethical. Whether that means limiting systemic biases so people are seen for their skills rather than their backgrounds, or connecting talent to the right opportunities within an organization at speed and scale, balancing the power of automation with the humanity of the workforce has become a priority for business leaders learning to use the technology effectively.
Sydney Coleman, Senior Product Inclusion & Equity Program Manager at Google; Nithya Vaduganathan, Managing Director and Partner at Boston Consulting Group; and Yoni Friedman, Gloat’s VP of Solutions Consulting tackled how AI can help companies create better work outcomes.
“I would define ethical AI as systems that respect and recognize the importance of human rights,” notes Sydney Coleman. “That spans from things like anti-discrimination to privacy and confidentiality, but it’s also about accessibility and keeping a human-centered focus in the development of AI technology.”
Recent surveys show that nearly 55% of companies are investing in recruitment automation, believing it will enhance efficiency, enable data-driven decisions, and reduce bias along the way. As that share continues to grow, it’s up to leaders to make sure they’re getting the most out of these systems without leaving talent behind, namely by considering the operational and cultural changes that must accompany AI-based systems.
D&I is increasingly recognized as a critical business function, but fully implementing it in today’s evolving technological landscape requires new approaches to what talent, experiences, and inclusion actually mean.
“In BCG’s latest research, what we found is that only 10% of new roles are filled by internal lateral hires, which is shocking because 60% of people who leave a company do so because they don’t find the right career opportunities within it,” says BCG’s Nithya Vaduganathan. “In that sense, the ability of internal marketplaces to create transparency of opportunity and to proactively suggest roles that people may not have otherwise considered is incredibly powerful.”
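To make those mechanics concrete, here is a minimal sketch of the kind of skills-overlap matching an internal talent marketplace might use to surface lateral moves. Everything here is illustrative: the `Employee` and `Role` structures, the `suggest_roles` helper, and the 50% coverage threshold are assumptions for the sketch, not any vendor’s actual implementation.

```python
# Hypothetical sketch of skills-based role suggestion for an internal
# marketplace. Names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Role:
    title: str
    required_skills: set[str]

@dataclass
class Employee:
    name: str
    skills: set[str]

def suggest_roles(employee: Employee, open_roles: list[Role],
                  min_overlap: float = 0.5) -> list[Role]:
    """Return open roles where the employee already covers at least
    `min_overlap` of the required skills, sorted by coverage."""
    scored = []
    for role in open_roles:
        if not role.required_skills:
            continue
        coverage = len(employee.skills & role.required_skills) / len(role.required_skills)
        if coverage >= min_overlap:
            scored.append((coverage, role))
    # Sort by coverage only, so ties never try to compare Role objects.
    return [role for _, role in sorted(scored, key=lambda pair: pair[0], reverse=True)]

# Example: surfacing a lateral move the employee may not have searched for.
roles = [
    Role("Data Analyst", {"sql", "python", "dashboards"}),
    Role("ML Engineer", {"python", "pytorch", "mlops"}),
]
candidate = Employee("Avery", {"sql", "python", "stakeholder comms"})
for match in suggest_roles(candidate, roles):
    print(match.title)  # -> Data Analyst (2 of 3 required skills covered)
```

The design point mirrors Vaduganathan’s observation: ranking by skill coverage rather than job titles can surface opportunities an employee might never have considered on their own.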
“We might have a technology that works really well for Black people holistically, but when we actually slice and dice the data, it’s not working for Black women or older Black people,” notes Sydney Coleman. “Think about how those intersections impact the experience of using a technology. Don’t always look at the data in aggregate; look at those intersections of identity and ask what the implications and unintended consequences are.
“Oftentimes we’re making technology with the best intentions, but there are unintended harms. We need to anticipate and mitigate them by going deep into the data and looking at those identity intersections.”
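Coleman’s advice about slicing the data corresponds to what fairness practitioners call disaggregated, intersectional evaluation. The pandas sketch below uses made-up toy data, with hypothetical `race`, `gender`, `age_band`, and `selected` columns, to show how an aggregate success rate can mask gaps that only appear at the intersections.

```python
# A minimal sketch of disaggregated evaluation. The DataFrame is toy,
# made-up data: `selected` stands in for a hypothetical model outcome.
import pandas as pd

df = pd.DataFrame({
    "race":     ["black", "black", "black", "black", "white", "white"],
    "gender":   ["woman", "man",   "woman", "man",   "woman", "man"],
    "age_band": ["55+",   "25-34", "25-34", "35-44", "25-34", "55+"],
    "selected": [0,       1,       1,       1,       1,       1],
})

# The aggregate view by race can look acceptable...
print(df.groupby("race")["selected"].mean())

# ...while intersectional slices reveal where the system underperforms,
# e.g. for Black women or for older Black candidates.
print(df.groupby(["race", "gender"])["selected"].mean())
print(df.groupby(["race", "age_band"])["selected"].mean())
```

The pattern is the whole point: the same metric, recomputed per intersectional subgroup, is what turns “the technology works holistically” into a claim you can actually check.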
“When I try to think about using AI to really drive change, or do something unique within the company I work for, I’m thinking about solutions that don’t just augment or automate, but enhance the process,” says Gloat’s Yoni Friedman.
“I try to bring in an additional creative element or capability, whether it’s superior computational ability or a streamlined step in the process. It has to move the needle, not just in terms of speed, but in terms of the complexity and cognitive breadth that AI can provide.”
“Underrepresented talent is often passive talent,” Sydney Coleman says. “To build a representative pipeline, we can leverage technology to examine the different factors that go into pipeline management and make representation a feature of the organization. Even seemingly insignificant details, like the optimal word count of a job posting, can have a major impact on representation in different roles.
“After a certain point, we know there’s bias in who will respond, not only to different terminology but to the sheer length of a posting,” Coleman continues. “A long list of requirements can be really deterring for candidates from certain backgrounds. Thinking about how to optimize that process using AI can be a really powerful tool for D&I efforts.”
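As a rough illustration of the kind of posting audit Coleman describes, the sketch below flags drafts that run long or pile up requirements. The 300-word and 10-requirement thresholds are placeholder assumptions for the example, not research-backed cutoffs.

```python
# Hedged sketch: flag job-posting drafts whose length or requirement
# count may deter candidates. Thresholds are illustrative assumptions.
import re

def audit_posting(text: str, max_words: int = 300,
                  max_requirements: int = 10) -> list[str]:
    """Return human-readable warnings for a job-posting draft."""
    warnings = []
    word_count = len(text.split())
    if word_count > max_words:
        warnings.append(
            f"Posting is {word_count} words; consider trimming below {max_words}."
        )
    # Count bullet-style requirement lines (starting with -, *, or •).
    requirements = [line for line in text.splitlines()
                    if re.match(r"^\s*[-*•]", line)]
    if len(requirements) > max_requirements:
        warnings.append(
            f"{len(requirements)} listed requirements; long lists can deter qualified candidates."
        )
    return warnings

# Example: a draft with a dozen bulleted requirements triggers a warning.
draft = "Data Analyst\n" + "\n".join(f"- requirement {i}" for i in range(12))
for warning in audit_posting(draft):
    print(warning)
```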
“I think there’s a perception that any kind of automation or AI is a black box. In many ways, especially as it pertains to recruiting, the human mind is arguably more of a black box than an algorithm. There could be countless unconscious biases, varying from person to person, inside someone’s brain, yet it’s the algorithm that draws the suspicion.
“Of course, the issue with AI-based algorithms is that if something is wrong, even unintentionally, it automatically scales to everyone instead of being contained to one or two cases.”