
Using AI to Strengthen Workplace Culture—Not Undermine It

Artificial intelligence is no longer a future concern for HR leaders—it’s a present-day reality. From hiring and performance management to productivity tools and workforce analytics, AI is rapidly reshaping how work gets done. The question facing organizations today isn’t whether AI will be used, but how it will be used, and whether it will help create workplaces where everyone can do their best work and thrive.

As a former tech executive and the author of the Better Allies® book series, I spend a lot of time thinking about the intersection of technology and workplace culture. Over the past several years, I’ve been paying close attention to how AI interacts with bias, power, and opportunity. What I’ve learned is both promising and concerning: AI can absolutely support better workforce management—but only if we design, deploy, and evaluate it intentionally.

Here are seven ways HR and people leaders can use AI thoughtfully, while minimizing the risk that it reinforces inequity.

  1. Beware the Hidden Competence Penalty of Using AI

Recent research reveals a troubling dynamic: when people from stereotyped groups use AI, it can backfire. These groups include women in male-dominated fields such as engineering and older workers in youth-dominated roles.

In one study, participants reviewed identical software code samples. The only variable was whether the engineer was described as having used AI assistance. When AI was involved, perceived competence dropped by an average of 9 percent. But the penalty wasn’t evenly distributed. Men experienced a 6 percent drop, while women saw a 13 percent drop—more than double.

Follow-up surveys showed that many engineers anticipated this penalty and chose to avoid AI altogether to protect their reputations. Notably, those most likely to fear competence penalties were women and older workers—the very groups organizations often struggle to retain and advance.

And this bias is not limited to technical roles. Another study found that physicians who use AI are perceived as less competent, less empathetic, and less trustworthy.

For HR leaders encouraging AI adoption, this raises an important question: Are some employees paying an invisible price for using the very tools we’re promoting?

  2. Proactively Counter Bias in Performance Evaluations

The good news is that awareness can significantly reduce bias. Google famously found that simply reminding managers about common evaluation biases before promotion decisions eliminated many of them. This insight led to the creation of its Unbiasing Performance Review Checklist.

Organizations can take a similar approach with AI. Before peer feedback cycles or performance reviews begin, remind reviewers that bias can creep in when evaluating AI-assisted work, particularly for women and older employees. Naming the risk upfront helps people pause and reflect, rather than unconsciously penalize.

Another structural solution is to explicitly recognize AI proficiency as a valued skill. If you have influence over job levels or career frameworks, consider adding “AI competency” as an expected capability. When AI use is normalized and rewarded, it becomes less vulnerable to biased interpretation.

  3. Broaden Access to AI Pilot Projects

Who gets invited to work on AI initiatives matters—a lot. Research from Boston Consulting Group shows that junior women lag behind their male peers in awareness of generative AI. Two primary reasons stand out: they’re less likely to be part of informal networks where AI strategy is discussed, and they’re underrepresented on AI pilots.

This reflects a long-standing pattern. Research by Joan C. Williams and Marina Multhaup has shown that high-profile, career-advancing assignments disproportionately go to men.

The next time you’re forming an AI pilot team, pause and examine your selection criteria. Are you relying on the same familiar names? If your shortlist feels small because only a few people “seem ready,” consider expanding your network and creating opportunities for others to demonstrate readiness. Pilot projects build skills, visibility, and momentum—doors that should be open to everyone.

  4. Audit AI Hiring Tools for Bias

AI may speed up hiring, but it should not accelerate discrimination. Multiple studies raise red flags. A University of Washington study found that AI hiring tools overwhelmingly favor white and male candidates. A Stanford study showed that generative AI can reinforce age and gender bias, particularly against older women.

Rather than allowing AI outputs to move directly into decision-making, build in pause points. For example: “Before finalizing candidates, let’s review AI-generated materials for biased assumptions related to race, age, or gender.” This small step can prevent biased automation from becoming baked into your hiring process.

  5. Review AI-Generated Performance Feedback Carefully

Bias doesn’t disappear when AI writes feedback—in fact, it can amplify it. Dr. Kieran Snyder found that when ChatGPT is asked to write performance feedback, the output varies significantly by gender. When identical prompts were changed only by pronouns, feedback for women tended to be longer and more critical in tone.

If AI is used to draft performance reviews, peer feedback, or letters of recommendation, HR teams should actively audit the results. Look for patterns in length, tone, and specificity. Ensure that feedback is roughly equivalent across employees and that women and marginalized groups are not receiving disproportionate criticism.
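
For teams that want to make this audit concrete, here is a minimal sketch of one way to compare drafts across groups, assuming the feedback has been exported to a spreadsheet with a column identifying each employee’s demographic group. The file name, column names, and the use of the TextBlob sentiment scorer are illustrative assumptions, not a prescribed method.

```python
# A minimal sketch of auditing AI-drafted feedback for group-level differences
# in length and tone. The file name, column names, and the TextBlob sentiment
# scorer are illustrative assumptions, not a prescribed method.
from statistics import mean

import pandas as pd
from textblob import TextBlob

# Assumed format: one row per draft, with the demographic group in "group"
# and the AI-drafted text in "feedback".
drafts = pd.read_csv("ai_feedback_drafts.csv")

for group, rows in drafts.groupby("group"):
    word_counts = rows["feedback"].str.split().str.len()
    polarities = [TextBlob(text).sentiment.polarity for text in rows["feedback"]]
    print(
        f"{group}: {len(rows)} drafts, "
        f"avg length {word_counts.mean():.0f} words, "
        f"avg tone {mean(polarities):+.2f} (scale: -1 negative to +1 positive)"
    )
```

Treat any flagged differences as a prompt for human review rather than a verdict; small samples can produce noisy averages.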

  6. Question Bias in AI Models Themselves

Bias can enter AI systems through skewed training data, flawed algorithms, or insufficient oversight. Examples abound: job recommendation engines that favor certain racial groups, chatbots that advise women to negotiate lower salaries, facial recognition tools that misidentify people of color, and medical AI systems that misdiagnose patients with dark skin.

As your organization adopts AI, assume bias is possible—even likely. Test outputs across different populations. Ask vendors tough questions. Build guardrails and escalation paths. Fairness is not automatic; it must be intentionally designed.
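
One simple way to test outputs across populations is a paired-prompt audit: feed the tool inputs that are identical except for a demographic signal and compare the results. The sketch below assumes a hypothetical evaluate_candidate() wrapper around whatever screening tool or vendor API your organization uses; the template and names are illustrative only.

```python
# A minimal sketch of a paired-prompt audit: send inputs that are identical
# except for a demographic signal and compare the tool's scores.
# evaluate_candidate() is a hypothetical placeholder for the AI screening
# tool or vendor API under test; it is not a real library call.

RESUME_TEMPLATE = (
    "{name} has 12 years of experience leading supply-chain analytics teams "
    "and holds an MBA. Rate this candidate's fit for a director role from 1 to 10."
)

# Names varied only to shift the perceived demographic signal in the prompt.
NAME_VARIANTS = ["Emily", "Lakisha", "Brad", "Jamal"]


def evaluate_candidate(prompt: str) -> float:
    """Placeholder: wire this up to the hiring tool being audited."""
    raise NotImplementedError


def run_paired_audit() -> None:
    scores = {
        name: evaluate_candidate(RESUME_TEMPLATE.format(name=name))
        for name in NAME_VARIANTS
    }
    spread = max(scores.values()) - min(scores.values())
    print(scores)
    print(f"Score spread across otherwise-identical resumes: {spread:.1f}")
    # Any meaningful spread is a signal to question the vendor and keep a
    # human decision-maker in the loop.
```

Large gaps between otherwise-identical prompts are exactly the kind of guardrail trigger and vendor escalation this section calls for.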

  7. Ask Whether AI Should Be Used at All

Finally, not every problem requires an AI solution. Beyond bias, AI raises serious ethical questions—from its environmental impact and energy consumption to concerns about eroding critical thinking skills and the use of copyrighted or stolen training data.

Before deploying AI, ask: Is it necessary here? What problem are we truly trying to solve? And is the benefit worth the broader impact?

AI isn’t going away. But organizations do get to decide how to use it, if at all. With thoughtful design, oversight, and allyship, AI can support more equitable workforce management. Without that care, it risks reinforcing the very inequities many HR leaders are working hard to dismantle.

The choice is ours.

About Karen Catlin
Karen Catlin is a best-selling author and speaker on creating cultures that enable employees to do their best work. She has published four books: Better Allies, Belonging in Healthcare, The Better Allies Approach to Hiring, and The Better Allies Way. She also emails a roundup of “5 Ally Actions” to over 40,000 newsletter subscribers every week.

Previously, Karen spent 25 years building software products and serving as a vice president of engineering at Adobe. During that time, she witnessed a sharp decline in the number of women working in tech. Frustrated but galvanized, she knew it was time to switch gears and focus on creating more inclusive workplaces.
