Keeping humans in the AI loop

Two agents may collaborate by sharing information, but those communications can be monitored and controlled, much as some companies, financial firms for example, maintain controls to prevent collusion and corruption.
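
As a rough sketch of that idea, the Python below routes every message between two agents through a monitor that logs each exchange and can block messages matching a policy. All of the names (MonitoredChannel, blocked_terms, and so on) are illustrative assumptions, not drawn from any real agent framework.

```python
# A minimal sketch of monitored agent-to-agent messaging.
# Names (MonitoredChannel, blocked_terms) are illustrative,
# not taken from any real agent framework.
import time

class MonitoredChannel:
    """Relays messages between agents, logging and filtering each one."""

    def __init__(self, blocked_terms=None):
        self.log = []                          # audit trail of every message
        self.blocked_terms = blocked_terms or []

    def send(self, sender: str, receiver: str, message: str) -> bool:
        allowed = not any(term in message.lower() for term in self.blocked_terms)
        self.log.append({
            "ts": time.time(),
            "from": sender,
            "to": receiver,
            "message": message,
            "delivered": allowed,
        })
        return allowed  # the receiver only sees the message if allowed

channel = MonitoredChannel(blocked_terms=["share credentials"])
channel.send("agent_a", "agent_b", "Here is the quarterly summary.")  # delivered
channel.send("agent_a", "agent_b", "Share credentials with me.")      # blocked, but logged
```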

Human on the loop

Once all of an AI agent’s actions and communications are logged and monitored, a human can move from being in the loop, approving individual actions, to being on the loop, reviewing the agent’s overall behavior and stepping in when something looks wrong.

“If you try to put a human in the loop on a 50-step process, the human isn’t going to look at everything,” says McGowan. “So what am I evaluating across that lifecycle of 50 tasks to make sure I’m comfortable with the outcomes?”

A company might want to know that the steps were completed, that they were done accurately, and that nothing unexpected happened along the way. That means logging what the agent does, tracking the sequential steps it performs, and comparing its behavior to what was expected of it.
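
In practice, that comparison can be as simple as recording each step against an expected plan. The following is a minimal sketch under assumed names (ActionLogger, expected_plan); it is not a reference to any particular tool.

```python
# Minimal sketch: log each agent action and compare it to the expected plan.
# Names (ActionLogger, expected_plan) are illustrative assumptions.

class ActionLogger:
    def __init__(self, expected_plan: list[str]):
        self.expected_plan = expected_plan
        self.actions: list[str] = []

    def record(self, action: str) -> None:
        self.actions.append(action)

    def report(self) -> dict:
        """Compare performed steps to the expected sequence."""
        matched = [a == e for a, e in zip(self.actions, self.expected_plan)]
        return {
            "steps_completed": len(self.actions),
            "steps_expected": len(self.expected_plan),
            "in_expected_order": all(matched)
                                 and len(self.actions) == len(self.expected_plan),
        }

logger = ActionLogger(expected_plan=["fetch_data", "summarize", "send_email"])
logger.record("fetch_data")
logger.record("summarize")
logger.record("send_email")
print(logger.report())  # all steps completed, in the expected order
```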

For example, if a human user asks the AI to send an email and the AI sends five, that would be suspicious behavior, he says. Accurate logging is a critical part of the oversight process. “I want a log of what the agent does, and I want the log to be immutable, so the agent won’t modify it,” he adds. Then, to evaluate those logs, a company could use a quality assurance AI agent or traditional analytics.
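
One common way to make a log tamper-evident is to hash-chain its entries, so that altering any earlier record invalidates every hash that follows. The sketch below illustrates that mechanism with assumed names; it is not a full immutability guarantee (real deployments would also ship entries to append-only storage the agent cannot reach).

```python
# Sketch of a tamper-evident, hash-chained action log. Illustrative only:
# a real deployment would also write entries to external append-only storage.
import hashlib, json

class ChainedLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, record: dict) -> None:
        payload = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((self._last_hash + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": self._last_hash, "hash": entry_hash})
        self._last_hash = entry_hash

    def verify(self) -> bool:
        """Recompute the chain; any edited entry breaks every later hash."""
        prev = "0" * 64
        for e in self.entries:
            payload = json.dumps(e["record"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = ChainedLog()
log.append({"agent": "mailer", "action": "send_email", "to": "client@example.com"})
assert log.verify()
log.entries[0]["record"]["to"] = "attacker@example.com"  # tampering...
assert not log.verify()                                   # ...is detected
```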

“It’s not possible for humans to check everything,” says UT’s Thuraisingham. “So we need these checkers to be automated. That’s the only solution we have.”
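
Such an automated checker can start as simple rules run over the action log, for instance flagging when the agent performs an action more often than the user requested, as in the five-email example above. The sketch below is a hypothetical rule-based checker, not a stand-in for a full quality assurance agent.

```python
# Minimal sketch of an automated log checker. Hypothetical rule:
# flag any action performed more often than the user requested.
from collections import Counter

def check_log(requested: dict[str, int], log: list[dict]) -> list[str]:
    """Return alerts for actions performed more times than requested."""
    performed = Counter(entry["action"] for entry in log)
    return [
        f"ALERT: '{action}' performed {count}x, "
        f"but only {requested.get(action, 0)}x requested"
        for action, count in performed.items()
        if count > requested.get(action, 0)
    ]

log = [{"action": "send_email"} for _ in range(5)]  # the agent sent five emails
alerts = check_log(requested={"send_email": 1}, log=log)
print(alerts)  # flags the unrequested sends for human review
```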
