Anthropic's launch this week of an update to its Responsible Scaling Policy (RSP) — the risk governance framework it says it uses to “mitigate potential catastrophic risks from frontier AI systems” — is part of the company’s push to be perceived as an AI safety-first provider compared to competitors such as OpenAI, an industry analyst said Wednesday.
Thomas Randall, director of AI market research at Info-Tech Research Group, said that while the changes will not bring immediate business benefits, Anthropic’s founding was “grounded in two OpenAI executives leaving that company due to concerns about OpenAI’s safety commitment.”
In the executive summary of the updated RSP, Anthropic stated: “In September 2023, we released our Responsible Scaling Policy (RSP), a public commitment not to train or deploy models capable of causing catastrophic harm unless we have implemented safety and security measures that will keep risks below acceptable levels. We are now updating our RSP to account for the lessons we have learned over the last year. This updated policy reflects our view that risk governance in this rapidly evolving domain should be proportional, iterative, and exportable.”