A few weeks ago my wife and I were discussing some of the new Gen AI tools that can create professional-quality headshots, work that was historically the realm of professional photographers. After mulling it over for a few days, she decided to try a solution a friend had recommended. We started by taking a few headshots she could upload to give the tool an understanding of her appearance. Once they were uploaded, we waited the 20-30 seconds it took to generate the results.
The results were mixed, to say the least. Some pictures were quite good, and I could see why people were using these tools. Then there were the others. The remaining pictures made it very clear that the model behind the tool had strong biases about what an average woman looks like. The tool had a tendency to lengthen her hair and change her eye color, and it even went so far as to invent the appearance of her body, despite being given only headshots.
The vast majority of the images were worth nothing more than a good laugh, but they left me wondering: how can we implement these tools in a responsible, ethical and, dare I say it, human way? While I don't expect this article to address every issue you will encounter, I am hopeful it provides some considerations and guidelines around responsible use as you embark on your own AI journey.
To help frame our understanding, I went to the source and asked ChatGPT 3.5 to define the responsible use of AI within the context of an HR Shared Services team. Here is what it produced:
“Responsible use of AI involves deploying technology in a manner that aligns with ethical standards, legal compliance, and respect for human rights. In the context of HR Shared Services, it means utilizing AI solutions to enhance efficiency, accuracy, and employee experiences while ensuring fairness, transparency, and compliance with privacy regulations.”
~ChatGPT 3.5
This definition is simple and intuitive, but it doesn't tell us how we should think about our responsibilities as an organization or a leader. I believe the responsibilities we have in HR Shared Services can be broken into four categories: Transparency and Accountability, Bias Mitigation, Security and Privacy Protection, and Employee Empowerment. It is important to note that, at the time of writing, no internationally accepted standards exist, but many companies (Deloitte, Google, Meta, etc.), consortiums (EqualAI, The Data and Trust Alliance, etc.) and governments are developing their own. Until a single standard emerges, organizations and their leaders will need to set up a program that fits their culture and their internal definition of responsible use.
Transparency and Accountability
AI can be a little like a magician pulling a rabbit out of a hat. You know it isn't really magic, and yet you can't readily explain how it happened unless you are familiar with the trick. The same can be said for an AI tool that doesn't offer a clear view into its design and use. Failure to provide full transparency will result in mistrust, limited adoption and, potentially, misuse. A few points to consider as you explore your opportunities:
- When creating and deploying a tool, there needs to be clear documentation of the training data and model used, the algorithms built, and the intended use cases. People need to understand how their data is being processed and utilized before they will trust the tool. Radical transparency should be the norm, not the exception.
- Put organizational structures and policies in place that determine who is responsible for the output and decisions of these systems. This is incredibly important and often overlooked. Take the example of a self-driving car: without proper accountability, how do we determine who or what is at fault when it crashes? Is it the person in the vehicle, the AI for the decision it made, the manufacturer, or the lack of regulation? Your accountability structures may never face a scenario this extreme, but they are no less important.
- Use RAG (Retrieval-Augmented Generation) to enhance your tool where accuracy is most critical (pay, benefits, etc.). RAG is a very technical name for a very simple concept: instead of retraining a Large Language Model (LLM), you improve its accuracy by retrieving relevant passages from trusted sources at question time and asking the model to answer from that material. Many of us already have curated knowledge bases in use. Combine this information with an LLM and some personal attributes from your HCM, and you have a 24x7 chatbot that can provide personalized and trusted responses at scale; the sketch below shows the basic shape.
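To make that concrete, here is a minimal sketch of the RAG pattern. Everything in it is invented for illustration: the knowledge base articles, the employee record, and the keyword-overlap retriever (production systems typically use embedding-based search). The final call to an LLM provider is deliberately left out, since that part is vendor-specific.

```python
# Minimal RAG sketch: retrieve trusted policy text, then ground the
# model's answer in it. All data here is a hypothetical placeholder.

# Hypothetical curated knowledge base (in practice, your HR articles).
KNOWLEDGE_BASE = {
    "pto-policy": "Full-time employees accrue 1.5 days of PTO per month.",
    "401k-match": "The company matches 401(k) contributions up to 4% of salary.",
    "direct-deposit": "Pay is issued biweekly via direct deposit.",
}

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; real systems use embeddings."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, employee: dict) -> str:
    """Combine retrieved passages and HCM attributes into one prompt."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say you don't know.\n"
        f"Context:\n{context}\n"
        f"Employee: {employee['name']}, {employee['country']}\n"
        f"Question: {question}"
    )

# The actual LLM call is provider-specific and omitted here.
prompt = build_prompt(
    "How much PTO do I earn each month?",
    {"name": "Alex", "country": "US"},
)
print(prompt)
```

The key design point is that the model is instructed to answer only from the retrieved context, which is what keeps responses anchored to your curated sources rather than to whatever the base model absorbed in training.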
Bias Mitigation
All humans are inherently biased. We cannot escape this fact, but we can acknowledge our biases, name them and work to improve ourselves. Why then would we expect a machine that is built and trained by humans to be unbiased?
Guarding against bias is a critical aspect of responsible AI implementation. In HR Shared Services, where decisions impact employees’ careers and well-being, it is imperative to mitigate bias in AI algorithms throughout the life cycle of the tool. Consider the following when implementing your solution:
- Regular audits and reviews of AI models should be conducted to identify and rectify any biases that may emerge. You won't find what you aren't actively looking for, so you need to spend time and resources on monitoring (the first sketch after this list shows one simple audit check). Sharing what you find, and how you are working to improve, will build trust in the tool.
- Use large and diverse datasets to train your tool. In the story about my wife, the issue wasn't the tool's ability to generate an image; the dataset it was trained on produced a biased response that was hard to ignore. Yours may not be as obvious. A large and diverse dataset will be less biased (not unbiased) because it is more representative of the overall population. Understanding the dataset you are using will go a long way toward showing you where to be on the lookout for bias.
- The power of an AI tool is that it learns over time, allowing it to evolve with the organization and solve new and challenging problems. The catch is that this ability to learn also causes "drift": the model learns new things and starts to move away from how it was originally trained. Sometimes drift is positive, increasing accuracy, helping solve new challenges or surfacing new solutions. The problem is that it can also produce biased and inaccurate responses, and depending on the inputs and the speed of learning, the shift can be quick and drastic. If you don't believe me, do a quick search for "Microsoft Tay" and you will understand how fast things can go bad. The second sketch below shows one common way to monitor for drift.
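On the audit point above, here is one illustrative check: the "four-fifths rule" long used in US employment analytics, which flags any group whose selection rate falls below 80% of the highest group's rate. The decision data below is entirely made up for the example.

```python
# Illustrative bias audit using the "four-fifths rule" from HR
# analytics. The outcome data below is invented for the example.
from collections import defaultdict

# (group, selected) pairs, e.g. from a screening tool's decisions.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, was_selected in decisions:
    totals[group] += 1
    selected[group] += was_selected  # True counts as 1

rates = {g: selected[g] / totals[g] for g in totals}
baseline = max(rates.values())
for group, rate in rates.items():
    ratio = rate / baseline
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths threshold
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} [{flag}]")
```

A real audit program would run checks like this against live decisions on a regular cadence and publish the results, in keeping with the transparency points earlier.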
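As for drift, one widely used monitoring approach (by no means the only one) is the Population Stability Index, which scores how far a model's live inputs have shifted from the distribution it was trained on. The bucketed distributions below are invented for illustration.

```python
# One common way to watch for drift: the Population Stability Index
# (PSI) compares today's input distribution to the training baseline.
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """PSI over pre-bucketed distributions (each sums to 1)."""
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0
    )

training_dist = [0.25, 0.50, 0.25]  # feature distribution at training time
live_dist     = [0.10, 0.45, 0.45]  # same feature observed in production

score = psi(training_dist, live_dist)
# Common rule of thumb: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate.
print(f"PSI = {score:.3f}")
```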
Security and Privacy Protection
We have access to a great deal of sensitive employee information, and privacy protection is paramount. Access, storage and processing of information in an AI tool should be held to the data privacy standards you have today and be flexible enough to accommodate new standards that will emerge. This includes clearly documenting intended use, retention, safeguards and controls. Some additional guidance to consider:
- Partner closely with your Security, Risk, Legal and Technology teams before you ever start work. Understand what you are trying to accomplish through the lens of a clear business case and build security and privacy into your governance structure.
- This is still a new and emerging area of risk for many organizations, and government oversight is limited but growing. Work with your Legal team to understand the legal and compliance risks of using an AI tool, especially when it is deployed across borders. Additional regulation is coming; expect changes and more oversight.
- Security and privacy controls should be included in any tool documentation you create, ideally alongside the model documentation for complete transparency. Giving users an understanding of how their data is used and protected builds trust that you have their best interests in mind.
Employee Empowerment
Responsible use of AI in HR Shared Services should empower employees rather than alienate them. AI is meant to assist, not replace, human decision-making, and yet there is a real possibility of displacing workers. We must be empathetic, thoughtful and supportive of employees and their growth opportunities.
- Have honest conversations with your employees about roles that could be impacted by the implementation of an AI tool. Treat people with respect and empathy, like the professionals they are, and they will respond in kind. Help them understand that their value isn't in the job itself, but in the potential they represent.
- Start building training and development programs now for the roles at greatest risk of displacement. Do this before it is critical; otherwise it will be too late. Some of the best companies out there are training their entire employee population on AI and data science to prepare them for the changes ahead.
- Be understanding of your team's fear. It is real and valid. There are a lot of stories out there about AI taking people's jobs, and much worse (see any AI movie Hollywood has ever produced). What our teams need to understand is that their fear is reasonable but largely unfounded, rooted in the unknown. Bring them on the journey and show them that, much as in previous industrial revolutions, new skills and jobs will emerge. There is a lot of opportunity for their career growth; they just need help seeing it and understanding how to take advantage of it.
This article covered a lot of ground and yet only scratched the surface. One area I mentioned but didn't cover in detail is the need to document your use cases and the business problem before you start. You need to plan carefully and find ways to deliver business value. AI investments are not a side project you can run with limited resources and a shoestring budget. These projects are resource-heavy, have high failure rates and may require years of preparation to align the people, technology and data needed to deliver the expected results.
Remember: "With great power comes great responsibility." Voltaire…or Spider-Man (depending on your preference). It is up to you as leaders to decide where best to use and wield the power that comes with AI. Will you build something that reduces bias, enhances productivity and delivers real business value, or will you be left explaining why your tool changes everyone's eyes to blue?