CAIO: Chief AI officer is the latest entrant into the C-suite
As they pursue AI initiatives, companies are appointing chief artificial intelligence officers (CAIO). Does your organization need one?
Say I hop into a large language model (LLM) to do cursory research for an article and the query, “What can universities do to eliminate bias in AI systems they use?” returns results that are irrelevant, off the mark, or inherently biased. No harm done, probably.
But what if a university uses a customized LLM-based AI to filter thousands of student applications and even a small percentage of the AI’s assessments are irrelevant, off the mark, or inherently biased? It can cost an institution some excellent students, and it can cost some excellent students an educational opportunity.

My LLM search on eliminating AI bias wasn’t harmful because a human being—I, the writer—vetted the results, recognizing nuggets of potential value to follow up on while disregarding the irrelevant (actually, the ChatGPT results were pretty good). That same sort of human vetting must happen with any university AI system. But who are the humans? And when and where does the AI vetting happen?
These are ultimately questions of AI governance, and governing AI behaviors is new for everyone. Here are a few best practices that have emerged in these early days.
First, establish an institutional artificial intelligence policy. It serves as a constitution for the ethical application of AI in all its forms. A document with guiding principles is critical to setting expectations and to providing a foundation for the organizations and activities involved in AI development, implementation, and operation.
On the governance end, establish an AI ethics office. It should convene an AI advisory board for big-picture direction on matters concerning AI. In addition, it should be home to an AI ethics steering committee of leaders to consult regularly on AI matters escalated from across the institution.
So, what sorts of matters get escalated? Your institution’s AI ethics policy helps determine that. What we call “red-line” cases shouldn’t need escalation—they should be well understood as nonstarters throughout the organization. These include AI involving human surveillance, discrimination, the deanonymization of already-anonymized data, deception and manipulation, the undermining of human debate, and environmental harm.
In contrast, “high-risk” cases can rise to the attention of the AI ethics steering committee. These include cases where AI may drive automated decision making or process personal data. Other high-risk cases involve possible infringement of fundamental rights or freedoms, or damage to individuals’ social wellbeing.
Building AI into high-risk IT applications also fits into this rubric, including, among others, applications involved in the management and operation of critical infrastructure, employment and HR, and health care.
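To make this rubric concrete, here is a minimal sketch of how an institution might encode the triage logic; the category names, the characteristics listed, and the escalation decisions are illustrative assumptions, not a prescribed standard.

```python
# Hypothetical sketch of an escalation triage; categories and names are illustrative.
from dataclasses import dataclass, field
from enum import Enum


class Decision(Enum):
    REJECT = "red-line: a nonstarter, no escalation needed"
    ESCALATE = "high-risk: route to the AI ethics steering committee"
    PROCEED = "standard review under the institutional AI policy"


# Red-line characteristics: understood as nonstarters throughout the organization.
RED_LINE = {
    "human_surveillance", "discrimination", "deanonymization",
    "deception_or_manipulation", "undermines_human_debate", "environmental_harm",
}

# High-risk characteristics: rise to the AI ethics steering committee.
HIGH_RISK = {
    "automated_decision_making", "processes_personal_data",
    "affects_fundamental_rights", "affects_social_wellbeing",
    "critical_infrastructure", "employment_and_hr", "health_care",
}


@dataclass
class AIUseCase:
    name: str
    characteristics: set = field(default_factory=set)


def triage(use_case: AIUseCase) -> Decision:
    """Map a proposed AI use case to an escalation decision."""
    if use_case.characteristics & RED_LINE:
        return Decision.REJECT
    if use_case.characteristics & HIGH_RISK:
        return Decision.ESCALATE
    return Decision.PROCEED


# Example: an LLM that screens student applications processes personal data and
# drives automated decisions, so it escalates to the steering committee.
screening = AIUseCase(
    name="LLM-based application screening",
    characteristics={"automated_decision_making", "processes_personal_data"},
)
print(triage(screening))  # Decision.ESCALATE
```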
Your AI ethics policy should cover the human validation and intervention points that must take place throughout the AI development process, from ideation through operations. Universities also must be able to vet the ethics of vendor-built AI systems.
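As one way to picture lifecycle-wide human validation, the sketch below maps hypothetical development stages to required human checkpoints and flags any stage that lacks a documented sign-off. The stage names and checkpoint descriptions are assumptions made for illustration, not requirements drawn from any particular framework.

```python
# Illustrative mapping of lifecycle stages to human checkpoints; stage names
# and sign-off descriptions are assumptions for the sake of example.
LIFECYCLE_CHECKPOINTS = {
    "ideation": "ethics office reviews the intended use against the AI policy",
    "data_collection": "data stewards vet provenance, consent, and representativeness",
    "model_training": "reviewers audit training data and evaluation results for bias",
    "deployment": "steering committee signs off on any high-risk application",
    "operations": "ongoing human review of outputs, with a clear escalation path",
}


def missing_signoffs(completed: set[str]) -> list[str]:
    """Return the lifecycle stages that still lack a documented human sign-off."""
    return [stage for stage in LIFECYCLE_CHECKPOINTS if stage not in completed]


# Example: a vendor-built system that documented only a training-time review.
print(missing_signoffs({"model_training"}))
# ['ideation', 'data_collection', 'deployment', 'operations']
```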
Humans distrust AI for many reasons, but without trust, AI can’t reach its full potential. Much of that trust begins with data.
The focus on data is not accidental. The black-box algorithms that LLM developers use to power generative AI get a lot of attention, and these LLMs are the rocket engines of today’s AI boom.
But data is the rocket fuel, and the “garbage-in, garbage-out” truism still applies regardless of how sophisticated the subsequent digital manipulation and refinement.
Said differently, unbiased AI depends overwhelmingly on unbiased data sets, and, for the foreseeable future, human governance and vetting will be indispensable to ensuring that data sets are unbiased.
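As a small illustration of what vetting a data set for bias might involve, the following sketch computes selection rates by applicant group in a toy historical admissions data set (the kind of data that could end up training an application-screening model) and flags groups whose rate falls below a four-fifths-rule threshold. The group labels, record layout, and 0.8 threshold are assumptions for this example; genuine vetting would go far beyond a single metric.

```python
# Toy bias check on a candidate training data set: compare each group's
# selection rate to the best-performing group's rate. Group labels, record
# layout, and the 0.8 ("four-fifths rule") threshold are illustrative assumptions.
from collections import defaultdict

applications = [
    # (applicant_group, admitted) -- synthetic records for illustration only
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]


def selection_rates(records):
    """Selection rate (admits / applicants) per group."""
    totals, admits = defaultdict(int), defaultdict(int)
    for group, admitted in records:
        totals[group] += 1
        admits[group] += int(admitted)
    return {group: admits[group] / totals[group] for group in totals}


def disparity_flags(records, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the best rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items() if rate / best < threshold}


print(selection_rates(applications))  # {'group_a': 0.75, 'group_b': 0.25}
print(disparity_flags(applications))  # {'group_b': 0.33...} -> escalate for human review
```

A flag like this wouldn’t prove bias on its own, but it is exactly the kind of signal that should trigger the human review and escalation described above.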
Given how broadly AI will be applied in higher education, that human vetting must be extensive and systematic, and universities must therefore establish deliberate governance grounded in clearly delineated ethical principles. The provenance and quality of training data should be the primary focus.
ChatGPT didn’t write that, but it would surely agree.