Quarterly Outlook
Peter Garnry
Chief Investment Strategist
Summary: Generative AI, hailed as a productivity boon, becomes a national security threat after a daring AI deepfake heist against a high-ranking official in a developed country. Governments crack down on AI with new regulations, puncturing the AI hype as VCs flee the industry. Public distrust in AI-generated news soars and governments impose new laws, allowing only a small group of entities to disseminate public news.
While everyone from McKinsey to the corporate sector and leading economists sees generative AI as a great productivity-enhancing tool that will lift growth in the decades to come, others see it as a potential new weapon. In a high-stakes game, a criminal group deploys the most deceptive generative AI deepfake the world has ever seen, phishing a high-ranking government official in a developed country into handing over top-secret state information. The daring heist and its success trigger the biggest national security crisis since WWII, ushering in a new era of far-reaching AI regulation.
In a historic move to deal with the catastrophic side effects of generative AI, the US and EU declare that all content produced by generative AI must carry the label ‘Made by AI’, with harsh penalties for non-compliance. In an even bigger blow to the generative AI hype, governments force OpenAI and Google to rein in third-party access to their foundational large language models on national security grounds, meaning that only government-approved entities are allowed to use these new generative AI systems. The new regulation kills the generative AI hype, as VC investments dry up on concerns that generative AI will be more difficult to commercialise.
The generative AI deepfake incident escalates from a national security crisis into full-blown public distrust of information delivered over the internet, as AI-produced content swells to 90% of all online information. Governments in developed countries pass strict laws stipulating that only government-approved news organisations may disseminate news to the public, a big blow to social media platforms and ‘non-compliant’ news organisations. Know-your-customer regulation, well known in the banking sector as a safeguard against money laundering, is imposed on technology companies delivering generative AI applications to prevent certain countries and entities from accessing the technology.
Market impact: Traditional media companies approved by their governments to disseminate public news soar in value, with shares in The New York Times Company doubling. Adobe shares plunge as governments penalise the company because the catastrophic deepfake was created using its software.