
Candidates are using ChatGPT to do assessments… cheating or fair game?


The explosion of ChatGPT and other generative AI tools has commoditized knowledge. But hang on, didn’t Google already do that? Surely Google brought the answer to virtually any question to our fingertips. That’s true, but ChatGPT goes a step further. It doesn’t just give us information, it writes the answer for us as if it were our own. It can even do it in hip hop if we ask nicely enough.

So now we can outsource almost any knowledge-based task to a robot in a personalized way. In other words, the robot can replace us if we ask it to.

For some use cases it’s a no-brainer. ChatGPT can write prospecting emails for sales reps, website messaging or even job advertisements. But in other cases it’s not as straightforward, particularly when human discretion or creativity is more important than knowing the right answer.

In recruitment, candidates and employers alike have been looking for an edge. Candidates want to stand out, and employers want to increase accuracy and reduce time and effort. Some candidates use career coaches and resume writers to help market themselves to companies. This is entirely legitimate.

But what happens when candidates use ChatGPT to help them write responses to an asynchronous interview or online assessment? Are they cheating?

This is happening as we speak.

When I was in law school, all the exams were “open book” exams. We could literally bring in whatever we wanted, including books and notes, but mobile phones and computers were banned. The idea was that knowledge alone was useless because what we knew wasn’t being examined. Rather, it was all about how we applied knowledge to different situations, each requiring thought and analysis. A book can’t help with that. Google might be able to help. ChatGPT can most definitely help.

And herein lies the problem.

When companies are evaluating candidates, they need to ask themselves what they care about. Is it knowledge in a specific domain? Is it resourcefulness and the ability to acquire knowledge quickly? Or is it critical thinking?

In the new world of work, using Google to look things up is commonplace. If we can find the answer on Google, why not? And if all a candidate brings is knowledge that can be looked up in seconds then, respectfully, that is not of great value. Companies need employees who can perform tasks at a high standard, be it making hamburgers, giving tax advice or building jet engines.

So here’s the real question. Do they want to test whether people can do those tasks unassisted? Or are they happy for prospective employees to use any means to learn – and perhaps even outsource – those tasks so long as the job is done?

Does a sales development rep need to be good at crafting emails, or is it OK for them to outsource that to ChatGPT? What about an accountant and financial analysis? Or a designer and logo design?

This raises an important question about AI in the workplace that goes well beyond hiring. What skills do we actually care about? And if we determine that an employee must possess a certain skill even though it could potentially be done by a robot, then what is the reason?

When we think about the rules of engagement in the hiring process – and cheating specifically – it’s important to clarify which skills must be “owned” by the candidate and which can be “artificially acquired”. Let me explain.

If we want to hire a salesperson, building rapport, performing a product walkthrough and commercial negotiation might be non-negotiable “owned” skills. We should therefore test those skills in a way that can’t be gamed. Candidates should be forced to prove their capability by performing tasks, and it should be made clear to them which resources they can avail themselves of, if any.

Conversely, email copy and prospecting might be skills that we consider capable of being “artificially acquired”, which means that we don’t care how the task is done so long as the outcome is good. If a candidate can hack their way to a good outcome, that’s perfectly legitimate. Impressive even. Therefore, when validating those skills we should allow candidates to use any means available, and doing so is not cheating.

As AI continues to evolve, knowledge-based tasks will continue to be commoditized. In those instances, the skill in demand is not performance of the underlying task, but rather the wherewithal to outsource that task to a high standard. And that is not cheating.
