Reports shed new light on AI monetization and the regulatory landscape
Two recently published reports shed new light on the current state of generative AI monetization and its regulatory landscape. The first study focuses on self-reported AI monetization figures from business and IT leaders at global organizations. The second examines the regulatory landscape of AI as of September 2023 and has recently regained relevance following President Biden's executive order aimed at monitoring and regulating the risks associated with AI. Together, the two reports lay the groundwork for a broader overview of AI in the workplace: they let us contrast the monetization study respondents' attitudes toward the risks and challenges of implementing AI in their organizations with the current state of global AI policy and governance, as reflected in the regulatory landscape report and in trendsetting events that have happened since.
The first study, commissioned by Microsoft, was carried out by the research firm IDC. Respondents at organizations that have already deployed AI solutions in the workplace report an average return of 3.5x on their investment: for each $1 spent, businesses are earning $3.50 back, a net gain of 250% on their original expenditure. Moreover, the report states that the top 5% of respondent organizations worldwide are reaping $8 for every $1 invested. Of the surveyed companies not yet using AI tools in the workplace, 22% planned to deploy AI solutions within the following 12 months.
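As a quick sanity check on these figures, the arithmetic behind the reported multiples can be sketched in a few lines (an illustration only; the function name is ours, not IDC's):

```python
def net_roi_percent(invested: float, returned: float) -> float:
    """Net return on investment, expressed as a percentage of the amount invested."""
    return (returned - invested) / invested * 100

# A 3.5x gross return means $1 invested comes back as $3.50, a 250% net gain.
print(net_roi_percent(1.00, 3.50))  # average case in the IDC report -> 250.0

# The top 5% of organizations reportedly see $8 back per $1 invested.
print(net_roi_percent(1.00, 8.00))  # -> 700.0
```

The distinction matters because "3.5x return" and "250% ROI" describe the same outcome, depending on whether the original dollar is counted in the total.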
Another remarkable conclusion is that among businesses that had already deployed AI tools, 92% of the deployments took 12 months or less, with 40% taking under six months. Organizations also reported realizing a return on their AI implementations within 14 months of deployment. A key takeaway is that the exceptional reported ROI, combined with the speed at which most respondents deployed their solutions and started seeing returns, suggests that enterprises can safely begin deploying AI solutions progressively if they have not done so already.
There are, however, limitations that anyone interested in implementing AI in an enterprise setting should keep in mind before and throughout the process. The first is that other studies have found less optimistic figures for the ROI of AI solutions. Some even discuss the difficulty of estimating ROI correctly: it is plausible that respondents to the IDC report omitted experiments that failed to yield the expected ROI, or answered under the influence of the current hype around generative AI. Only time and continued monitoring will tell whether the numbers reported in the IDC study hold up.
Beyond potentially skewed ROI estimates, the study asked respondents to name the most pressing obstacles they have faced while implementing AI tools at their organizations. A shortage of skilled workers was the most frequently cited concern, with cost, worries about data or IP loss through improper use, and a lack of AI governance and risk management as secondary issues. In an interview with VentureBeat, Ritu Jyoti, the Global AI Research Lead at IDC, noted that governance concerns already surrounded traditional AI, and the arrival of generative AI has only intensified apprehension about the lack of AI governance and risk mitigation.
The second report was conducted by Ernst & Young and analyzes the jurisdictions that have been most active in developing AI legislation and regulation: Canada, China, the European Union, Japan, Korea, Singapore, the United Kingdom, and the United States. Overall, these jurisdictions seem to share the same goals: most of their policies seek to mitigate risk while preserving the greatest benefit to society. Their actions also align with the OECD AI Principles, emphasizing human rights, transparency, and risk management.
On the downside, the report finds some divergence in the analyzed jurisdictions' approaches to AI governance, naming the European Union the most proactive jurisdiction following its proposal of the comprehensive AI Act. Because its publication predates President Biden's executive order, the report also claims that, in contrast with the EU, the US had adopted a light-touch approach focused on voluntary industry guidance and sector-specific rules. Furthermore, the EU and the US are no longer the only jurisdictions with substantial policy proposals: the UK has published an AI White Paper outlining its proposed framework for AI regulation, compatible with the EU's AI Act.
Overall, businesses and organizations looking to deploy AI solutions should proceed with their plans while accounting for the time deployment will take and the expected timeline to ROI. Given the current fragmentation and variety of approaches in the regulatory landscape, organizations should also exercise a healthy dose of caution when evaluating and implementing any AI-powered tool in the workplace. In particular, organizations operating across jurisdictions must deploy tools that satisfy the restrictions of every relevant jurisdiction, which is not always straightforward and may affect both the estimated time to deployment and the ROI obtained.
Moreover, as the EY report rightly states, one of the essential tasks for harnessing the potential of AI adoption while minimizing its risks is open dialog between policymakers, the private sector, and society. Until that happens, accurately measuring the positive impact of artificial intelligence will remain a challenge.