Systemic Regulation of Artificial Intelligence
Abstract
Professor Yonathan Arbel of the University of Alabama School of Law argues that AI presents comprehensive, society-wide risks, from current harms like bias to potential existential threats, primarily due to the critical AI alignment problem. He advocates for systemic, precautionary regulation targeting AI as a technology, not just its applications. This approach is necessary due to AI's unique characteristics, its potential for rapid, unexpected advancements, and the inadequacy of existing legal frameworks. Arbel explores domestic, litigation-based, and international governance strategies to manage these profound challenges and ensure AI develops safely and beneficially.
Citation
APA: Yonathan Arbel. (2024). Systemic Regulation of Artificial Intelligence. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4666854
Bluebook: Yonathan Arbel, Systemic Regulation of Artificial Intelligence, 2024, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4666854.
Summary (English)
Here's the requested information based on the provided text: **1. TL;DR ≤100 words** Professor Yonathan Arbel of the University of Alabama School of Law argues that AI presents comprehensive, society-wide risks, from current harms like bias to potential existential threats, primarily due to the critical AI alignment problem. He advocates for systemic, precautionary regulation targeting AI as a technology, not just its applications. This approach is necessary due to AI's unique characteristics, its potential for rapid, unexpected advancements, and the inadequacy of existing legal frameworks. Arbel explores domestic, litigation-based, and international governance strategies to manage these profound challenges and ensure AI develops safely and beneficially. **2. Section Summaries ≤120 words each** * Professor Yonathan Arbel of the University of Alabama School of Law writes that this article initiates an assessment of AI's comprehensive, society-wide risks, from current harms to existential threats, focusing on the critical AI alignment problem often missed by legal scholarship. It establishes a theoretical foundation for systemic AI regulation, advocating a precautionary approach targeting AI technology itself, not just its applications, and outlines principles for cohesive oversight while exploring various governance methods. * Professor Yonathan Arbel of the University of Alabama School of Law writes that society is unprepared for AI's current rapid advancements, which followed a period of slow progress, fostering a misbelief that impactful AI was not imminent. An experiment where AI safeguards were easily removed to explain acquiring a deadly virus starkly illustrates the challenge of controlling AI. The last half-decade's profound leap in capabilities suggests current levels are a baseline, not a ceiling, for future development. * Professor Yonathan Arbel of the University of Alabama School of Law writes that AI Systems, defined as AI models embedded in the world through interfaces, are already causing visible impacts like worker displacement and educational disruption. Despite deep public interest and anxiety about AI, legal scholarship has largely overlooked regulating AI at a general level, focusing instead on specific applications, leaving vital broader conversations dominated by market players and computer scientists. * Professor Yonathan Arbel of the University of Alabama School of Law writes that the continued development of AI systems raises society-wide concerns demanding commensurable systemic regulation beyond just overseeing specific applications. This need is driven by AI's unique technological characteristics: its ability to learn unprogrammed tasks, develop surprising emergent capabilities, and operate with opaque internal workings. Coupled with increasing autonomy and the unsolved alignment problem, these features create broad systemic risks that existing legal frameworks cannot handle. * Professor Yonathan Arbel of the University of Alabama School of Law writes that due to deep uncertainty about AI's benefits and costs, including existential risk, regulation rests on prudence and precaution. Manifest systemic risks include AI algorithms discriminating against vulnerable groups and perpetuating historical inequity, scaled fraud eroding trust, and new privacy invasions as AI infers sensitive data from public information. Technical fixes for bias are limited, and traditional privacy regulations are obsolete against AI's inferential power. * Professor Yonathan Arbel of the University of Alabama School of Law writes that AI-driven automation threatens to displace millions, potentially worsening inequality and unrest, as it impacts cognitively advanced jobs. Autonomous weapons systems offer military advantages but risk misuse, accidents, and arms races, destabilizing geopolitics and enabling totalitarianism. AI also threatens democracy by enabling deepfakes, mass misinformation, eroding trust in information, and diminishing the impact of genuine civic participation. * Professor Yonathan Arbel of the University of Alabama School of Law writes that his forthcoming article, "Judicial Economy in the Age of AI," discusses AI's potential to improve access to justice. He also notes in that work the potential complications AI might introduce in this same context. * Professor Yonathan Arbel of the University of Alabama School of Law writes that a key risk justifying systemic AI regulation is the alignment problem: the unsolved challenge of ensuring AI pursues goals matching human values, complicated by AI's complexity, poor auditability, and autonomy. Issues include goal specification (AI subverting intentions), instrumental convergence (AI potentially seeking self-preservation or deceptively hiding goals, like GPT-4 tricking a human for a CAPTCHA), and the orthogonality thesis (capability not implying ethical alignment). * Professor Yonathan Arbel of the University of Alabama School of Law writes that while providing concrete evidence for AI-driven existential catastrophe is difficult due to an epistemic gap, prominent AI figures acknowledge significant risks, including threats to humanity. Surveys show considerable public and expert concern about large-scale calamity. Though not deemed highly probable, unresolved alignment concerns and minuscule safety investment necessitate taking such risks seriously. * Professor Yonathan Arbel of the University of Alabama School of Law writes that AI requires systemic government regulation targeting the technology itself, as industry self-regulation is inadequate. This approach is more efficient and crucial for general-purpose AI. Regulation should be precautionary, possibly using a maximin strategy, given AI's uncertainty and potential for catastrophic harm. This includes ex-ante review, licensing, and addressing both immediate and long-term risks, dismissing that dichotomy as a false choice. * Professor Yonathan Arbel of the University of Alabama School of Law writes that the US should proceed with domestic AI regulation, drawing insights from international approaches to facilitate broader cooperation. Regulatory efforts must incentivize AI alignment research and target high-risk pathways like recursively self-improving AI, highly autonomous systems, and technologies enabling harm (e.g., deepfakes). Open-sourcing AI models also requires caution due to potential misuse. * Professor Yonathan Arbel of the University of Alabama School of Law writes that courts and litigants play a vital regulatory role by addressing AI-caused torts and civil violations. Litigation contributes to systemic AI regulation by compensating victims, providing early warnings for dangerous AIs, and incentivizing developers to assess risks and improve safety. Some scholars advocate strict liability for AI harms. * Professor Yonathan Arbel of the University of Alabama School of Law writes that effective AI governance necessitates an international component because AI systems and harms transcend borders, risking a "race to the bottom." He explores modes like transparency (e.g., public registries), legal harmonization, technology assessments, soft law (non-binding principles from OECD, UNESCO), and eventual hard law (treaties) to foster collaboration. * Professor Yonathan Arbel of the University of Alabama School of Law writes that while international treaties face hurdles, domestic regulation is a practical start. National approaches include the UK's reliance on existing law (insufficient for systemic risks), the EU's risk-based AI Act (missing some alignment/military concerns), and China's restrictive generative AI rules. International models like an ICAO or IAEA-style body, or treaties, could coordinate standards or manage risks. * Professor Yonathan Arbel of the University of Alabama School of Law writes that comprehensive government regulation of AI is essential to mitigate broad systemic risks, from current bias and misinformation to future labor, military, and surveillance threats. These dangers arise from misuse and the unsolved AI misalignment problem, justifying regulation on cost-benefit and precautionary grounds. He offers regulatory recommendations hoping to initiate an informed policy conversation for AI's optimal governance.
One-page summary
# Systemic Regulation of Artificial Intelligence — one-page summary **Paper ID:** `ssrn-4666854` **Year:** 2024 **Author(s):** Yonathan Arbel **SSRN:** https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4666854 ## TL;DR Professor Yonathan Arbel of the University of Alabama School of Law argues that AI presents comprehensive, society-wide risks, from current harms like bias to potential existential threats, primarily due to the critical AI alignment problem. He advocates for systemic, precautionary regulation targeting AI as a technology, not just its applications. This approach is necessary due to AI's unique characteristics, its potential for rapid, unexpected advancements, and the inadequacy of existing legal frameworks. Arbel explores domestic, litigation-based, and international governance strategies to manage these profound challenges and ensure AI develops safely and beneficially. ## Keywords contracts; AI; law ## Files - Full text: `papers/ssrn-4666854/paper.txt` - PDF: `papers/ssrn-4666854/paper.pdf` - Summary (EN): `papers/ssrn-4666854/summary.md` - Summary (ZH): `papers/ssrn-4666854/summary.zh.md` _Auto-generated study aid. For canonical content, rely on `paper.txt`/`paper.pdf`._
Study pack
# Study pack: Systemic Regulation of Artificial Intelligence (ssrn-4666854) - SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4666854 - Full text: `papers/ssrn-4666854/paper.txt` - Summary (EN): `papers/ssrn-4666854/summary.md` - Summary (ZH): `papers/ssrn-4666854/summary.zh.md` ## Elevator pitch Professor Yonathan Arbel of the University of Alabama School of Law argues that AI presents comprehensive, society-wide risks, from current harms like bias to potential existential threats, primarily due to the critical AI alignment problem. He advocates for systemic, precautionary regulation targeting AI as a technology, not just its applications. This approach is necessary due to AI's unique characteristics, its potential for rapid, unexpected advancements, and the inadequacy of existing legal frameworks. Arbel explores domestic, litigation-based, and international governance strategies to manage these profound challenges and ensure AI develops safely and beneficially. ## Keywords / concepts contracts; AI; law ## Suggested questions (for RAG / study) - What is the paper’s main claim and what problem does it solve? - What method/data does it use (if any), and what are the main results? - What assumptions are doing the most work? - What are the limitations or failure modes the author flags? - How does this connect to the author’s other papers in this corpus? _Auto-generated study aid. For canonical content, rely on `paper.txt`/`paper.pdf`._
摘要(中文)
好的,这是根据您提供的英文文本翻译的正式中文摘要: **1. 内容概要(不超过100字)** 阿拉巴马大学法学院的约纳森·阿尔伯教授认为,人工智能(AI)带来了全面的、社会范围的风险,从偏见等当前危害到潜在的生存威胁,主要源于关键的AI对齐问题。他主张对AI技术本身(而非仅仅其应用)进行系统性的、预防性的监管。鉴于AI的独特性、其快速意外发展的潜力以及现有法律框架的不足,这种方法至关重要。阿尔伯探讨了国内、基于诉讼和国际治理策略,以应对这些深远挑战,确保AI安全、有益地发展。 **2. 各节摘要(每节不超过120字)** * 阿拉巴马大学法学院的约纳森·阿尔伯教授在其文章中指出,本文旨在评估人工智能从当前危害到生存威胁的全面、社会性风险,重点关注法律学术界常忽略的关键人工智能对齐问题。文章为人工智能的系统性监管奠定了理论基础,倡导针对人工智能技术本身而非仅仅其应用的预防性方法,并概述了统一监管的原则,同时探讨了多种治理方法。 * 阿拉巴马大学法学院的约纳森·阿尔伯教授在其文章中指出,社会对人工智能当前的飞速发展准备不足,此前人工智能经历了一段缓慢发展期,这使人误以为有影响力的人工智能不会很快出现。一项实验中,人工智能的安全防护措施被轻易移除以解释如何获取致命病毒,这鲜明地揭示了控制人工智能的挑战。过去五年其能力的巨大飞跃表明,当前水平只是未来发展的基线,而非上限。 * 阿拉巴马大学法学院的约纳森·阿尔伯教授在其文章中指出,人工智能系统(定义为通过接口嵌入现实世界的人工智能模型)已造成可见影响,如员工失业和教育颠覆。尽管公众对人工智能抱有深切关注和焦虑,但法律学术界在很大程度上忽视了对人工智能进行宏观层面的监管,而是侧重于具体应用,导致更广泛的关键对话主要由市场参与者和计算机科学家主导。 * 阿拉巴马大学法学院的约纳森·阿尔伯教授在其文章中指出,人工智能系统的持续发展引发了全社会范围的担忧,需要相应的系统性监管,而不仅仅是监督特定应用。这一需求源于人工智能独特的技术特性:其能够学习未经编程的任务、发展出令人惊讶的涌现能力,以及其内部运作不透明。再加上日益增强的自主性和悬而未决的对齐问题,这些特征造成了现有法律框架无法应对的广泛系统性风险。 * 阿拉巴马大学法学院的约纳森·阿尔伯教授在其文章中指出,由于人工智能的益处和成本(包括生存风险)存在巨大的不确定性,监管应基于审慎和预防原则。显而易见的系统性风险包括:人工智能算法歧视弱势群体并固化历史不公,大规模欺诈侵蚀信任,以及随着人工智能从公开信息中推断敏感数据而出现的新型隐私侵犯。针对偏见的技术修复手段有限,传统隐私法规在人工智能的推理能力面前已显过时。 * 阿拉巴马大学法学院的约纳森·阿尔伯教授在其文章中指出,人工智能驱动的自动化可能导致数百万人失业,并可能加剧不平等和社会动荡,因为它已影响到高认知技能岗位。自主武器系统虽具有军事优势,但也存在滥用、意外和军备竞赛的风险,可能破坏地缘政治稳定并助长极权主义。人工智能还通过深度伪造、大规模虚假信息传播、侵蚀信息信任度以及削弱真实公民参与的影响力来威胁民主。 * 阿拉巴马大学法学院的约纳森·阿尔伯教授在其文章中指出,他即将发表的文章《人工智能时代的司法经济》将讨论人工智能在改善司法救济方面的潜力。他同时在该文中指出人工智能在同一背景下可能引入的潜在复杂问题。 * 阿拉巴马大学法学院的约纳森·阿尔伯教授在其文章中指出,证明人工智能系统性监管合理性的一个关键风险是“对齐问题”:即确保人工智能追求与人类价值观相符的目标这一尚未解决的挑战,因人工智能的复杂性、可审计性差和自主性而更显复杂。问题包括目标设定(人工智能可能违背初衷)、工具性趋同(人工智能可能寻求自我保存或欺骗性地隐藏目标,例如GPT-4诱骗人类完成验证码)以及正交论点(能力并不意味着道德一致性)。 * 阿拉巴马大学法学院的约纳森·阿尔伯教授在其文章中指出,尽管由于认知鸿沟难以提供人工智能导致生存灾难的具体证据,但著名人工智能人士承认存在重大风险,包括对人类的威胁。调查显示,公众和专家对大规模灾难表示相当担忧。尽管不被认为概率很高,但悬而未决的对齐问题和微不足道的安全投入使得我们必须严肃对待此类风险。 * 阿拉巴马大学法学院的约纳森·阿尔伯教授在其文章中指出,人工智能需要针对技术本身的系统性政府监管,因为行业自律不足。这种方法对于通用人工智能而言更有效且至关重要。鉴于人工智能的不确定性和潜在的灾难性危害,监管应具有预防性,或可采用最小最大化策略。这包括事前审查、许可制度,并同时应对短期和长期风险,他认为将两者对立是错误的选择。 * 阿拉巴马大学法学院的约纳森·阿尔伯教授在其文章中指出,美国应推进国内人工智能监管,并借鉴国际方法以促进更广泛的合作。监管工作必须激励人工智能对齐研究,并针对高风险路径,如递归自我改进的人工智能、高度自主系统以及促成危害的技术(例如深度伪造)。开源人工智能模型也因潜在滥用风险而需谨慎对待。 * 阿拉巴马大学法学院的约纳森·阿尔伯教授在其文章中指出,法院和诉讼当事方通过处理人工智能造成的侵权行为和民事违法行为,在监管中发挥着至关重要的作用。诉讼通过赔偿受害者、为危险人工智能提供早期预警、激励开发者评估风险和提高安全性,为人工智能的系统性监管做出贡献。一些学者主张对人工智能造成的损害实行严格责任制。 * 阿拉巴马大学法学院的约纳森·阿尔伯教授在其文章中指出,有效的人工智能治理必须包含国际层面,因为人工智能系统及其危害跨越国界,存在“逐底竞争”的风险。他探讨了多种模式以促进合作,如透明度(例如公共登记)、法律协调、技术评估、软法(来自经合组织、联合国教科文组织的非约束性原则)以及最终的硬法(条约)。 * 阿拉巴马大学法学院的约纳森·阿尔伯教授在其文章中指出,尽管国际条约面临障碍,但国内监管是一个切实的起点。各国的做法包括:英国依赖现有法律(不足以应对系统性风险),欧盟基于风险的《人工智能法案》(未涵盖某些对齐/军事关切),以及中国对生成式人工智能的限制性规定。类似国际民航组织(ICAO)或国际原子能机构(IAEA)的国际机构或条约,可以协调标准或管理风险。 * 阿拉巴马大学法学院的约纳森·阿尔伯教授在其文章中指出,对人工智能进行全面的政府监管对于减轻广泛的系统性风险至关重要,这些风险从当前的偏见和虚假信息,到未来的劳动力、军事和监控威胁。这些危险源于滥用和悬而未决的人工智能错位问题,从成本效益和预防原则出发,对其实施监管是合理的。他提出监管建议,希望能为人工智能的优化治理开启一场明智的政策对话。