Managing Systemic Risks In Tech: Lessons From Finance
Last month, the heads of seven of America's largest AI companies walked out of the White House with a "self-regulatory" deal. Across the Atlantic, Europeans are debating the long-awaited EU AI legislation, the next major digital regulation after the Digital Services Act (DSA). The DSA aims to mitigate technology-related "systemic risks", including the "potentially rapid and widespread dissemination of illegal content and information" enabled by the scale of major Internet platforms.
These are radically different approaches to addressing the problems posed by AI. The risks associated with artificial intelligence have been debated for some time, including potential systemic risks to political systems or public health from misinformation and disinformation spread through recommender systems and deepfake technologies. Finding the right balance between encouraging innovation and ensuring safety is at the center of the debate.
Given the pace of innovation, addressing systemic technology risks requires rapid collaboration between regulators and industry. Fortunately, lessons can be drawn from other sectors without repeating costly mistakes such as over-reliance on self-regulation. The financial industry has spent decades, if not centuries, developing and refining mechanisms to contain, mitigate, and respond to broadly similar risks. These efforts can serve as a starting point for technology regulation.
Learning from finance
The financial sector faces the phenomenon of systemic risk: the risk that a shock to certain components of the financial system (such as individual banks) endangers the system as a whole. This is what happened in 2007-2008, when a shock in the US mortgage market escalated into the global financial crisis. The consequences went far beyond finance, affecting global migration and inequality within and between countries. The crisis was thus "systemic" in another sense: the collapse of one industry had serious consequences for the entire "world system". This is the risk many fear in the case of AI.
While both technology and finance can create systemic risk, they differ significantly in their approaches to risk management. The tech sector, as the relative newcomer, would benefit from drawing inspiration from the world of finance, given the similarities between AI and finance. Both sectors rely on opaque mathematical models built on large amounts of data and complex calculations. More importantly, these models are used by leaders in both industries who have only a limited understanding of them, while boards and regulators are even further removed from the models they are supposed to oversee. The parallel extends to other areas of risk, such as anti-money-laundering controls and the need for effective processes to handle so-called AI incidents.
Of course, there are also major differences between technology and finance. If the financial industry already suffers from a knowledge gap between market participants and regulators, in artificial intelligence this gap is even wider and likely to widen further over time. And while financial regulators know the specific types of risk their industry faces, AI introduces many unknowns that make risk harder to manage.
What can finance teach technology about risk management?
Self-regulation is necessary but not sufficient
Few attempts at technological self-regulation have been successful (with the possible exception of the gambling industry in Japan). Even sound enterprise-level risk management cannot contain system-wide risks. The tech sector must accept some outside oversight to secure what the financial world takes for granted: a role for regulators and independent third parties in protecting the public interest, and the long-term "social license" of its organizations.
Regulatory dialogue should cover the entire sector
Regulatory dialogue should primarily take place at the sectoral level and aim to balance maintaining an innovative, competitive industry with protecting society. The debate often revolves around whether regulators should ban a given "systemically important" actor. The real effectiveness, however, lies in industry and government working together to combat systemic risks. Interestingly, such partnerships are more developed in Canada and Scandinavia, where the culture is more collaborative and less individualistic.
Tech requires "interconnected" lines of defense.
While self-regulation is not enough, tech companies should still apply rigorous risk management practices, with checks and balances and a bank-like governance structure. In essence, this means giving artificial intelligence experts within the organization independent authority to assess whether the technology should be used in specific areas of the business. An "Artificial Intelligence Oversight Board", with real independence and authority, can allow companies that develop or use artificial intelligence to identify and implement rigorous internal risk management practices. Beyond these internal bodies, however, the technology industry must also answer to the relevant regulators in each jurisdiction.
There are many models to draw on, from licensing requirements in the banking and pharmaceutical industries to strict legal obligations on corporations. In addition, credible accountability processes and governance criteria should be established, covering organizational structure, board composition, disclosure requirements, contingency plans, and transparency. There will of course be product safety requirements, but the probabilistic nature of artificial intelligence systems calls for new processes, such as continuous monitoring.
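To make the idea of continuous monitoring concrete, here is a minimal sketch of what output-drift monitoring for a deployed model could look like. It assumes a hypothetical scoring model and a simple population-stability-index (PSI) check; real deployments would monitor many more signals (inputs, error rates, incident reports) and calibrate the threshold empirically.

# Minimal sketch of continuous output monitoring for a deployed AI model.
# Hypothetical setup: compare the live distribution of model scores against a
# fixed reference sample and raise an alert when drift exceeds a threshold.
from collections import deque

import numpy as np


class DriftMonitor:
    def __init__(self, reference_scores, window=1000, threshold=0.2):
        self.reference = np.asarray(reference_scores)
        self.recent = deque(maxlen=window)  # rolling window of recent model outputs
        self.threshold = threshold          # alert threshold; illustrative, needs tuning

    def _psi(self, live, bins=10):
        """Population Stability Index between reference and live score distributions."""
        edges = np.quantile(self.reference, np.linspace(0.0, 1.0, bins + 1))
        ref_counts, _ = np.histogram(self.reference, bins=edges)
        live_counts, _ = np.histogram(np.clip(live, edges[0], edges[-1]), bins=edges)
        ref_p = ref_counts / ref_counts.sum() + 1e-6    # small constant avoids log(0)
        live_p = live_counts / live_counts.sum() + 1e-6
        return float(np.sum((live_p - ref_p) * np.log(live_p / ref_p)))

    def observe(self, score):
        """Record one model output; return a drift report once the window is full."""
        self.recent.append(score)
        if len(self.recent) < self.recent.maxlen:
            return None
        psi = self._psi(np.asarray(self.recent))
        return {"psi": psi, "drift_alert": psi > self.threshold}


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    monitor = DriftMonitor(reference_scores=rng.normal(size=5000))
    # Simulate a deployment whose output distribution slowly shifts away from the baseline.
    for step in range(3000):
        score = rng.normal(loc=step / 2000.0)   # gradual drift in the mean
        report = monitor.observe(score)
        if report and report["drift_alert"]:
            print(f"step {step}: drift alert, PSI={report['psi']:.3f}")
            break

The design choice mirrors finance: define a reference baseline, measure deviation continuously, and escalate when a pre-agreed threshold is crossed, rather than waiting for a periodic audit.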
"Too big to fail" also applies to technology...
As in finance, certain technology firms, such as Facebook or X (formerly Twitter), matter to the entire system. Banks deemed "systemically important", whether domestically or globally, are subject to more stringent prudential and liquidity requirements. Tech giants could similarly face requirements around critical infrastructure for deploying artificial intelligence, mandatory stress testing, interpretability standards, and red-teaming. In fact, the DSA already imposes much stricter requirements on very large online platforms (defined as those with over 45 million monthly users in Europe).
... However, not all risks are related to size
As the industry has become more interconnected, financial regulators have come to realize that size alone is not a sufficient measure of risk. The recent failures of Silicon Valley Bank and Signature Bank illustrate this point. Although regulators moved quickly to prevent contagion, it was clear that the failure of these institutions posed a significant risk to the system, even though their size fell below the Federal Reserve's threshold for enhanced prudential requirements. The same applies to artificial intelligence. While foundational LLMs may come from large technology companies, applications built by smaller players in various industries can pose significant risks in certain areas, such as the security of critical infrastructure. Effective risk management requires a comprehensive view of the technology ecosystem, including sensitive applications built in, or by, non-technology organizations.
New global institutions and international coordination are essential
Large technology companies operate all over the world and must adapt to different regulatory frameworks. As with finance, global cooperation is essential to prevent "jurisdictional arbitrage" and to coordinate government responses to crises. Guidelines and their implementation should be consistent across regions and business models. For example, the financial system's safety net in a systemic crisis is designed to give G20 governments time (30 days in the case of the banking system) to coordinate their response. Accordingly, regulators require, through the liquidity coverage ratio, that systemic institutions be able to survive for 30 days when markets freeze.
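In broad terms, the Basel III requirement behind that 30-day window can be written as:

\[
\mathrm{LCR} \;=\; \frac{\text{stock of high-quality liquid assets}}{\text{total net cash outflows over the next 30 calendar days}} \;\geq\; 100\%
\]

As discussed further below, an AI analogue of such a requirement would likely need a much shorter window.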
Continuous innovation requires a balance between regulatory rigor and the profitability and competitiveness of the sector.
Finding a balance between strong regulation and sector profitability is important to ensure continued investment in new technologies, not least in ways of improving AI safety. For example, tighter EU banking regulation has hurt overall profitability compared with US banks. Such an imbalance in global financial markets is simply unsustainable: it creates the risk that European banks will be unable to effectively recycle their capital and contribute to the growth and stability of their economies, especially compared with their American counterparts. A parallel scenario in AI would carry the strategic cost of delayed technological development and could mirror the wide profitability gap between the US and European banking sectors. This is not a call to soften the rules, but to make them more thoughtful and flexible.
Study hard, work fast and stay cool
While the technology sector can learn valuable lessons from finance's sector-level oversight and international coordination, some of finance's practices should be avoided.
Technology requires a fast regulatory process
There is a significant difference in clock speed between technology and finance. Despite centuries of refinement, finance's fastest crisis-response window is 30 days. Most would agree that in a severe crisis, AI response times should be a day or less. This will require regulators and industry to agree on faster processes and protocols than those that currently exist in finance. The challenge is to strike a balance between fast and slow mechanisms, so as neither to destabilize the system nor to turn the regulator itself into a source of risk.
Technology probably requires a different interaction model.
While big banks matter, the concentration of power in technology, especially in artificial intelligence, is much higher. The system will rely on a smaller number of giants that control the intellectual property and critical resources underlying advanced AI products. Combined with regulators' limited understanding of the technology, this requires closer collaboration between big technology companies and regulators, and a stronger commitment by those companies to their responsibilities toward the public interest. Tech companies can help regulators build a robust regulatory framework based on principles rather than rules, which the rapidly evolving field of AI may well require.
Technology must be constantly aware of its unknowns.
Financial institutions and regulators can rely on quantitative risk models that draw on rich historical data from previous crises. As noted, finance has a clearer picture of what a crisis looks like, even if the root causes cannot always be identified in advance. In the age of artificial intelligence, things are very different: there is no reliable history and no data on past crises. Any attempt to simply replicate the "risk meters" used in finance therefore risks missing important sources of risk in a rapidly changing technological world.
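To illustrate how dependent those risk meters are on history, here is a minimal sketch of historical Value-at-Risk, one of finance's standard measures. The returns below are synthetic and purely illustrative; the point is that the method is only as good as the loss history behind it, and AI has no comparable record.

# Minimal sketch of historical Value-at-Risk, the kind of "risk meter" finance relies on.
# It works only because long series of past returns exist; there is no equivalent
# loss history for AI systems, which is the limitation discussed above.
import numpy as np

def historical_var(returns, confidence=0.99):
    """Loss threshold not exceeded with the given confidence, estimated from past returns."""
    returns = np.asarray(returns)
    return -np.quantile(returns, 1 - confidence)

# Hypothetical example: daily portfolio returns drawn for illustration only.
rng = np.random.default_rng(0)
daily_returns = rng.normal(loc=0.0005, scale=0.01, size=2500)  # roughly 10 years of trading days
print(f"1-day 99% VaR: {historical_var(daily_returns):.2%} of portfolio value")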
Collaborative learning is at the core of intelligence
Tech executives often advocate self-regulation for fear of stifling innovation. However, effective and flexible regulation need not lead to stagnation if it avoids unnecessary complexity. Imperfect policies and rules that evolve and improve over time are undoubtedly better than no rules at all.
If the technology industry takes one lesson from the financial sector, it should be this: even though not all risks can be eliminated or anticipated, they can be managed. Ultimately, the key lies in continuous learning, which is also at the heart of recent advances in artificial intelligence (machine learning). That should come as no surprise to the AI community.