- History of Artificial Intelligence
Born in 1912, Alan Turing is widely regarded as the forefather of Artificial Intelligence. An excellent mathematician, Turing was a leading participant in the breaking of German ciphers at Bletchley Park during World War II.
The United Kingdom declared war on Germany on September 3, 1939, and Turing reported to Bletchley Park the next day. Turing’s main focus was cracking “Enigma,” an enciphering machine used by the German armed forces to send messages securely. Polish mathematicians had already shared with the British their ability to read Enigma messages. At the outbreak of war, however, the Germans thwarted Poland’s successes by increasing Enigma’s security, changing the cipher system daily. Thus, breaking the code became more difficult and, to some, seemed impossible.
Turing played a key role in deciphering Enigma. Within weeks of arriving at Bletchley Park, he specified an electromechanical machine known as the Bombe, which could break Enigma more effectively than the Polish bomba kryptologiczna. The Bombe reduced the codebreakers’ work significantly and became one of the primary automated tools used to attack Enigma-enciphered messages. From mid-1940, German Air Force signals were being read at Bletchley, and the intelligence gained from them was helping the war effort.
- Turing’s Legacy to Artificial Intelligence
The study of mathematical logic led to Turing’s theory of computation, which suggested that a machine, by shuffling symbols as simple as “0” and “1,” could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the “Church-Turing thesis.” This, along with concurrent discoveries in neurobiology, information theory, and cybernetics, led researchers to consider the possibility of building an electronic brain. The first work now generally recognized as AI was McCulloch and Pitts’ 1943 formal design for Turing-complete “artificial neurons.”
- Artificial Intelligence and Human Resources
Before the Internet, most employers relied on local newspapers’ classified advertisements to attract viable candidates. The Internet officially became open to the public on January 1, 1983.[1] Employers no longer had to rely on classified ads. Human Resource professionals quickly realized that the Internet gave them an incredible tool to reach a larger pool of applicants than HR had ever seen before; yet the increased applicant pool created problems that HR hadn’t envisioned. HR went from 10-20 applicants to thousands of resumes and applications. How does an employer weed through these applicants?
- EEOC’s Recommendations
It didn’t take long for the EEOC to opine. In 2021, the EEOC launched an agency-wide initiative to ensure that the use of software, including artificial intelligence (AI), machine learning, and other emerging technologies in hiring and other employment decisions complies with federal law. The EEOC has opined that AI that screens out applicants without engaging in the interactive process required under the Americans with Disabilities Act (ADA) is problematic.

Similarly, employers should be wary of potential adverse impact issues. Employers must assess whether a selection procedure such as AI has an adverse impact on a protected group by checking whether use of the procedure causes a selection rate for individuals in that group that is “substantially” less than the selection rate for individuals in another group. The EEOC and courts generally use the “four-fifths rule” in determining adverse impact. Under this rule, one rate is substantially different from another if their ratio is less than four-fifths (80%). For example, if 60% of applicants in one group are selected but only 30% in another, the ratio is 50%, well below the 80% threshold, which suggests adverse impact; a sketch of this calculation appears below.

Employers may need to conduct their own adverse impact analysis even if they purchased a test developed by a vendor. According to the EEOC, employers should ask the vendor what steps it has taken to evaluate whether use of the tool causes a substantially lower selection rate for individuals with a characteristic protected by Title VII. Even if the vendor has taken these steps, the employer may still be liable if the vendor’s assessment of adverse impact is incorrect. If use of an algorithmic decision-making tool has an adverse impact on individuals due to a protected class, then use of the tool violates Title VII unless the employer can show that the use of AI is “job related and consistent with business necessity.”
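To make the four-fifths rule concrete, here is a minimal sketch in Python. The function names and the applicant counts are hypothetical illustrations, not an EEOC tool or a substitute for a formal adverse impact analysis.

```python
# Minimal sketch of a four-fifths (80%) rule check.
# All numbers and names below are hypothetical, for illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def fails_four_fifths(group_rate: float, comparison_rate: float) -> bool:
    """True if the group's rate is less than 4/5 (80%) of the
    comparison group's rate, suggesting potential adverse impact."""
    return group_rate / comparison_rate < 0.80

# Hypothetical pools: 48 of 80 selected in group A (60%),
# 12 of 40 selected in group B (30%).
rate_a = selection_rate(48, 80)  # 0.60
rate_b = selection_rate(12, 40)  # 0.30

print(f"ratio = {rate_b / rate_a:.2f}")                    # 0.50
print("potential adverse impact:", fails_four_fifths(rate_b, rate_a))  # True
```

In this hypothetical, the ratio of 50% falls well short of the 80% threshold, so an employer would need to examine the tool further and, if the impact is real, show that its use is job related and consistent with business necessity.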
- Conclusion
The use of AI in employment decisions is in its infancy. Employers should be cautious in using this valuable tool to avoid potential violations of discrimination laws, and should seek competent legal advice before doing so.