People Analytics: To AI, Or Not to AI? The Future of Artificial Intelligence in People Analytics
Updated: Nov 16, 2020
People analytics aims to improve employee happiness, and thereby company-wide business performance, by utilizing big data about people to make better personnel decisions. The field has attracted a lot of attention, but many organizations have not yet established clear data governance or policies for handling data about employees and candidates, or for the recruitment and evaluation process. In some cases, trouble has occurred because AI was incorporated into hiring and evaluation without much thought given to the downstream effects.
However, it would be foolish to look only at those failure cases and hastily conclude that "data utilization and AI/machine learning should not be incorporated into the human resources area." HR work is horribly manual and routine, and poor decisions are often made along the way. Anyone with field experience will recognize that many tasks requiring objective support for decision making would be better performed by machines. In short, whether it is data or AI, what matters is not making a mistake in how you use it.
In this article, I would like to discuss the attitude toward data governance, and the AI traps (and possibilities), that must be kept in mind to perform people analytics effectively. Who owns the rights to the data? What attitude should we take to avoid trouble?
Obtaining prior agreement on these points is Step 1 of data governance. There are many other things companies need to check around data: What problem will be solved using the data? Who will access it, and when? How often should the policy be reviewed?
Recently, more and more companies seem to have added a data governance section to their employee guidebooks and employment agreements (or application forms), mentioning the right to use such data. Data governance also changes depending on the level at which your company's people analytics operates, so it is important to review your data policy each time.
Failure and success of AI adoption: the difference between light and dark is "how you use it"
The number of personnel services using AI (artificial intelligence) has been increasing in recent years. I think these services carry great potential and real danger at the same time. Both the providers who build them and the HR departments considering adoption must fully understand the pros and cons before developing and introducing them.
There is an example that illustrates this point very clearly. Two large overseas companies I know began developing and putting AI-based resume screening into practical use. Both are growing tech companies, popular enough to receive millions of applications annually. HR staff screen this volume of resumes by eye and select the right candidates for each job type. One person must compare thousands of resumes, one after another, sometimes for weeks. It is also resource-intensive to have multiple staff re-check each resume so that no one is missed. In such a situation, it is easy to imagine how readily human errors occur in the initial screening process.
Attention therefore turned to the idea of "mechanically sifting resumes using AI." Fortunately, both companies had collected millions of applications per year over multiple years, so even one company alone had sufficient training data. How much more efficient would the hiring process be if a model could capture this history and screen resumes the same way human recruiters do? Streamlining the hiring process with AI became a dream for many companies.
The two companies started working on AI screening at almost the same time, but the paths they took afterward diverged sharply.
The first company released AI resume screening, which mechanically evaluated candidates on a 5-point scale to decide whether they should be called in for a subsequent interview. About two years later, however, it suddenly abolished this process. The concern, it is said, was that female candidates were judged unfavorably in certain technical occupations. Because most past applicants were men, when the company trained its model on ten years of resume patterns, the system learned that names and expressions indicating men marked preferable candidates.
This is not uncommon; another company also found that its AI screening clearly chose white candidates over Black and Hispanic candidates. While a program can be modified to eliminate discrimination in these particular areas, there is no guarantee that it will not judge unfairly elsewhere. As a result, the company abolished AI screening.
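The mechanism behind these failures is easy to reproduce on synthetic data. The sketch below is entirely hypothetical (the tokens, proportions, and scoring rule are invented, not taken from either company): it builds a crude stand-in for a trained model, per-token pass rates, from a historical pool where most applicants were men and human screeners favored them. A token that merely reveals gender ends up scored as if it measured qualification.

```python
# Toy sketch (hypothetical data) of how a screening model trained on
# historically male-dominated hiring decisions learns to penalize a
# token that indicates gender, not qualification.
import random
from collections import defaultdict

random.seed(0)

def make_resume(qualified, male):
    tokens = ["python"] if qualified else ["misc"]
    # A token that reveals gender but says nothing about skill.
    tokens.append("chess_club" if male else "womens_chess_club")
    return tokens

# Historical outcomes: 90% of applicants were men, and human screeners
# passed qualified men more often than equally qualified women.
history = []
for _ in range(20_000):
    male = random.random() < 0.9
    qualified = random.random() < 0.5
    passed = qualified and random.random() < (0.9 if male else 0.5)
    history.append((make_resume(qualified, male), passed))

# "Training": per-token pass rates, a crude stand-in for learned weights.
seen, ok = defaultdict(int), defaultdict(int)
for tokens, label in history:
    for t in tokens:
        seen[t] += 1
        ok[t] += label
rate = {t: ok[t] / seen[t] for t in seen}

# The female-indicative token ends up with a much lower learned score,
# purely because of who was hired in the past.
print(f"chess_club:        {rate['chess_club']:.2f}")
print(f"womens_chess_club: {rate['womens_chess_club']:.2f}")
```

Fixing the one token you noticed is easy; the point of the example is that the bias lives in the historical labels, so any other proxy for gender in the data would be learned the same way.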
The second company, on the other hand, developed a similar system but was concerned from the beginning that AI could absorb human bias wholesale, so it applied the system somewhere other than initial screening. It employed a "drop-out prevention measure": AI screened the pool of candidates that human recruiters had already judged unqualified, and revived those who, based on past data, appeared highly suitable.
As I mentioned earlier, these popular companies collect far too many resumes for humans to check by eye, making the operation extremely prone to human error. I hear that the company has hired 100 additional qualified candidates each year as a result of this AI-based omission-prevention measure. That figure shows how many mistakes human judgment makes on its own, and at the same time, the impact of working well alongside machines.
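The second company's pattern can be sketched in a few lines. Everything below is hypothetical (the function names, threshold, and scoring rule are invented for illustration); the essential design choice is that the model never rejects anyone, it only surfaces human-rejected candidates for a second human look.

```python
# Sketch of the "drop-out prevention" pattern: AI runs only over the
# human-rejected pool and surfaces likely misses for re-review.
# All names, thresholds, and the scoring rule are hypothetical.
def revive_candidates(rejected_pool, model_score, threshold=0.8, limit=50):
    """Return human-rejected candidates the model scores highly,
    best first, for a second-pass human review."""
    ranked = sorted(rejected_pool, key=model_score, reverse=True)
    return [c for c in ranked if model_score(c) >= threshold][:limit]

# Toy scoring rule: fraction of required skills found on the resume.
required = {"python", "sql", "statistics"}
score = lambda cand: len(required & set(cand["skills"])) / len(required)

rejected = [
    {"name": "A", "skills": ["python", "sql", "statistics"]},  # likely miss
    {"name": "B", "skills": ["excel"]},
]
revived = revive_candidates(rejected, score)
print([c["name"] for c in revived])  # only "A" resurfaces for review
```

Because the output goes back to a human rather than into an automatic decision, a biased model here can only fail to rescue someone; it cannot reject anyone the humans would have passed.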
It may remain difficult for the time being to completely eliminate bias from AI, since biased human judgments are used as training data. However, if we understand AI's flaws and use it to cover the parts that humans cannot reach, the efficiency of the human resources department can be significantly improved.
To AI, or not to AI? That is the question
In what personnel fields is AI suitable, and in what fields is it not? I find it difficult to give a one-size-fits-all answer that applies to every company. What I can say with certainty is that we should gather multifaceted opinions from various experts so as to eliminate blind spots as much as possible.
Here is an experience from my time at the Human Resources Strategy Office at Google Headquarters. When a new job opened, we considered developing an engine that automatically picked out, from the past applicant database, candidates whose profiles resembled those who had passed for similar positions, and recommended them to hiring managers. It was devised by a team manager who was focused primarily on "efficiency" in the hiring process.
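The engine we considered can be sketched as a simple similarity search. This is my own illustrative reconstruction, not the actual design; the skill-keyword profiles and the Jaccard measure are assumptions made for the sketch.

```python
# Hypothetical sketch of the proposed engine: rank past successful
# candidates by profile similarity to the new opening. Jaccard
# similarity over skill keywords is an assumed, simplified measure.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(opening_skills, past_hires, top_k=3):
    """Return the names of past hires most similar to the opening."""
    ranked = sorted(past_hires,
                    key=lambda h: jaccard(opening_skills, h["skills"]),
                    reverse=True)
    return [h["name"] for h in ranked[:top_k]]

past = [
    {"name": "P1", "skills": ["java", "kubernetes", "grpc"]},
    {"name": "P2", "skills": ["python", "ml", "sql"]},
    {"name": "P3", "skills": ["python", "sql", "airflow"]},
]
print(recommend(["python", "sql", "ml"], past, top_k=2))  # ['P2', 'P3']
```

Note that everything the sketch optimizes is overlap with the past; none of the human factors discussed below (team balance, career path, diversity) appear anywhere in the scoring.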
In the general hiring process, immediately after a job opens, the recruiter interviews the hiring manager about the job and then holds a meeting to sort out the recruiting requirements. If, like Amazon's auto-suggestion feature, profiles of candidates drawn from similar past positions could be sent to the hiring manager the same day, the meeting between recruiter and hiring manager would become unnecessary, saving the recruiter's resources. The idea was that this would reduce the recruiter's workload and shorten hiring time, as well as improve the overall "experience" of hiring managers and candidates. The parable the inventor offered at the time was very impressive, and I still remember it clearly.
"For example, when you enter "I want new socks" into the Amazon search bar, would you prefer the shopping concierge to show you candidates that have been selected through a detailed consulting process, or the best one from your purchase history? Don't you think the latter is a better experience?"
Certainly, it sounds reasonable. However, after much deliberation, we consciously decided to forgo this project. Why? The answer is clear: employees are not socks.
Even if the job type and grade are identical on paper, as long as an organization is a group of people, a candidate must be considered in balance with the others. You have to clarify the team's requirements for the role while weighing factors such as the team's current situation, the characteristics of the project, and the mindset of the other members, and only then recruit and select candidates.
To extend the same parable humorously:
"What should I do with socks that I don't want to put in the drawer with other socks?" (Personal work style preferences and philosophies)
"What do you do with socks that you want to use as gloves in the future?" (Career path)
"The drawers are full of yellow socks, but shouldn't we have other colors so that we don't have to worry about formal occasions in the future?" (Team Diversity)
The conclusion was that human consideration was absolutely necessary.
In addition, the requirements-sorting meeting, often regarded at first glance as a troublesome process, is how the recruiter, as a "hiring expert," gives the hiring manager knowledge and another perspective, and builds a relationship through consultation. So there was also the view that it could not simply be eliminated in pursuit of efficiency.
In this way, it is necessary to consider whether "efficiency," "accuracy," "experience," "fairness," and so on can each be properly served by mechanical or AI means. As we in human resources become more and more involved with AI, we need to weigh, from many angles, whether a given task is really better performed by a machine.