Who Should Manage AI?

Artificial intelligence is the premier technology initiative of most organizations, and it is entering the door through multiple departments in BYOT (bring your own technology), vendor, and home-built varieties. To manage this incoming technology, the trust, risk, and security measures for AI must be defined, implemented, and managed. Who does this? Most companies aren’t sure, but CIOs should get ready as the responsibility is likely to fall on IT. Here are some steps that chief information officers can take now. 

1. Meet with upper management and the board 

AI adoption is still in early stages, but we’ve already seen a series of embarrassing failures that have ranged from job discrimination that violated federal statutes, to the production of phony court documents, the failure of automated vehicles to recognize traffic hazards, and false retail promises presented to consumers that companies had to pay damages for.  Most of these disasters were inadvertent. They originated from users not checking the verity of their data and algorithms or using data that was misleading because it was wrong or incomplete. The end result was damage to company reputations and brands, which no CEO or board wants to deal with. 

This is the conversation that the CIO should have with the CEO and the board now, even though user departments (and IT) might already be in stages of AI implementation. The takeaway from discussions should be that the company needs a formal methodology for implementing, vetting, and maintaining AI — and that AI is a new risk factor that should be incorporated into the enterprise’s corporate risk management plan. 

Related:Forecast for Today’s CIOs Is Simple: Turbulence

2. Update the corporate risk management plan 

The corporate risk management plan should be updated to include AI as a new risk area that must be actively managed. 

3. Collaborate with purchasing 

Gartner predicted that 70% of new application development will be from user departments. Users are using low- or no-code tools that are AI-enabled. The rise of citizen development is a direct result of IT taking too long to fulfill user requests. It’s also generated a flurry of mini-IT budgets in user departments that bypass IT and go directly through the company’s purchasing function. 

The risk is that users can purchase AI solutions that aren’t properly vetted, and that can present risk to the company. 

One way that CIOs can help is by creating an active and collaborative relationship with purchasing that enables IT to perform its due diligence for AI offerings before they are ordered. 

Related:IT Leadership Is More Change Management Than Technical Management

4. Participate in user RFP processes for IT products 

Although many users are going off on their own when they purchase IT products, there is still room for IT to insert itself into the process by regularly engaging with users, understanding the issues users want to solve, and helping users solve them before products are purchased. Business analysts are in the best position to do this, since they regularly interact with users — and CIOs should encourage these interactions. 

5. Upgrade IT security practices 

Enterprises have upgraded perimeter and in-network security tools and methods for transactional systems, but AI applications and data present unique security challenges. An AI chat function on a website can be compromised by repetitive user or customer prompts that trick the chat function into taking wrong actions. The data AI operates on can be poisoned so as to deliver false results that the company acts on. Over time, AI models can also grow obsolete, generating false results. 

AI systems, whether hosted by IT or end users, can be improved by making revisions to the QA process so that systems undergo testing by users and/or IT trying to imagine every possible way that a hacker would try to break a system, and then trying these ways to see if the system can be compromised. An additional approach, known as red teaming, is when the company brings in an outside firm to perform the QA by trying to break the system. 

Related:Digital Transformation Is a Golden Opportunity for RGP

IT can install this new QA approach for AI, selling it to upper management and then making it a company requirement for the pre-release testing of any new AI solution, whether purchased by IT or end users. 

6. Upskill IT workers 

A new QA procedure to hacker-test AI solutions before they are released to production, or new tools for vetting and cleaning data before it is authorized for AI use, or methods to check the “goodness” of AI models and algorithms are all skills that will be needed in IT to achieve AI competence. Staff upskilling is an important directive, since less than one quarter of companies feel that they are ready for AI. Users are even less prepared, so would likely welcome an active partnership with a, AI- skilled IT department. 

7. Report monthly on AI 

The burden of AI management is likely to fall on IT, so the best thing for CIOs to do is to aggressively embrace AI from the top down. This means making AI management a regular topic in the monthly IT report that goes to the board, and also periodically briefing the board on AI. Some CIOs might be hesitant to assume this role, but it has its advantages. It clearly establishes IT as the enterprise’s AI focal point, which makes it easier for IT to establish corporate guidelines for AI investments and deployments. 

8. Clean data and vet data vendors 

IT is the data steward of the enterprise. It’s responsible for ensuring that data is of the highest quality, and it does this by using data transformation tools that clean and normalize data. IT also has a long history of vetting outside vendors for data quality. Quality data is essential to AI.  

9. Work with auditors and regulators 

Outside auditors and regulators can be extremely helpful in identifying AI best practices for IT, and in requiring AI practices for the enterprise which in turn can be presented to boards and users. Outside audit firms can assist in red team exercises that kick the tires of a new AI system in the many ways that a hacker-exploiter would, with the goal of finding all holes in the system so these holes can be closed. 

10. Develop an AI life cycle methodology 

To date, most companies have focused on building or purchasing AI systems and getting them implemented. Not much thought has been given to system maintenance or sustainability. Accordingly, an AI system life cycle should be defined, and IT is the one to do it.  

As part of this life cycle methodology, AI systems in production should be regularly monitored for accuracy against pre-established metrics. If a weather prediction system starts with 95% accuracy and degrades to 80% accuracy in the next nine months, a tune-up should be made to the system’s algorithms, data, or both — until it returns to its 95% accuracy level. 


source

Leave a Comment

Your email address will not be published. Required fields are marked *