By 2028, AI regulatory violations will result in a 30% increase in legal disputes for technology companies, according to Gartner.
The firm's survey of 360 IT leaders involved in rolling out generative AI (GenAI) tools found that more than 70% ranked regulatory compliance among the top three challenges to deploying GenAI productivity assistants widely across their organisation.
Only 23% of respondents were very confident in their organisation’s ability to manage the security and governance aspects of rolling out GenAI tools in enterprise applications.
“Global AI regulations vary widely, reflecting each country’s assessment of its appropriate alignment of AI leadership, innovation and agility with risk mitigation priorities,” said Lydia Clougherty Jones, senior director analyst at Gartner.
“This leads to inconsistent and often incoherent compliance obligations, complicating alignment of AI investment with demonstrable and repeatable enterprise value and possibly opening enterprises up to other liabilities.”
At the same time, the impact of the geopolitical climate is growing steadily, while organisations’ ability to respond lags behind.
In the same survey, 57% of non-US IT leaders said the geopolitical climate at least moderately affected their GenAI strategy and deployment, with 19% reporting a significant impact.
Yet nearly 60% of those respondents said they were unable or unwilling to adopt non-US GenAI tool alternatives.
In a recent webinar poll organised by Gartner, 40% of the 489 respondents described their organisation’s sentiment toward AI sovereignty as “positive” (hope and opportunity), while 36% described it as “neutral” (a “wait and see” approach).
In the same poll, 66% of respondents said they were proactive and/or engaged in responding to sovereign AI strategy, and 52% said their organisation was making strategic or operating-model changes as a direct result of sovereign AI.
Gartner urged IT leaders to strengthen output moderation by training models for self-correction, creating rigorous use-case review procedures that evaluate risk, and applying control testing to AI-generated speech.
It also urged them to increase model testing and sandboxing by building a cross-disciplinary fusion team of decision engineers, data scientists, and legal counsel to design pre-testing protocols and to test and validate model output against unwanted conversational behaviour.