Who will control the future of AI?

A democratic vision for artificial intelligence must prevail over an authoritarian one.

By Sam Altman

July 25, 2024 at 6:00 a.m. EDT

Sam Altman is co-founder and CEO of OpenAI.

Who will control the future of AI?

That is the urgent question of our time. The rapid progress being made on artificial intelligence means that we face a strategic choice about what kind of world we are going to live in: Will it be one in which the United States and allied nations advance a global AI that spreads the technology’s benefits and opens access to it, or an authoritarian one, in which nations or movements that don’t share our values use AI to cement and expand their power?

There is no third option — and it’s time to decide which path to take. The United States currently has a lead in AI development, but continued leadership is far from guaranteed. Authoritarian governments the world over are willing to spend enormous amounts of money to catch up and ultimately overtake us. Russian dictator Vladimir Putin has darkly warned that the country that wins the AI race will “become the ruler of the world,” and the People’s Republic of China has said that it aims to become the global leader in AI by 2030.

These authoritarian regimes and movements will keep a close hold on the technology’s scientific, health, educational and other societal benefits to cement their own power. If they manage to take the lead on AI, they will force U.S. companies and those of other nations to share user data, leveraging the technology to develop new ways of spying on their own citizens or creating next-generation cyberweapons to use against other countries.

The first chapter of AI is already written. Systems such as ChatGPT, Copilot and others are functioning as limited assistants — for instance, by writing up patient visits so nurses and doctors can spend more time with the sick, or serving as more advanced assistants in certain domains, such as code generation for software engineering. More advances will soon follow and will usher in a decisive period in the story of human society.

If we want to ensure that the future of AI is a future built to benefit the most people possible, we need a U.S.-led global coalition of like-minded countries and an innovative new strategy to make it happen. The United States’ public and technology sectors need to get four big things right to ensure the creation of a world shaped by a democratic vision for AI.

First, American AI firms and industry need to craft robust security measures to ensure that our coalition maintains the lead in current and future models and enables our private sector to innovate. These measures would include cyberdefense and data center security innovations to prevent hackers from stealing key intellectual property such as model weights and AI training data. Many of these defenses will benefit from the power of artificial intelligence, which makes it easier and faster for human analysts to identify risks and respond to attacks. The U.S. government and the private sector can partner together to develop these security measures as quickly as possible.

Second, infrastructure is destiny when it comes to AI. The early installation of fiber-optic cables, coaxial lines and other pieces of broadband infrastructure is what allowed the United States to spend decades at the center of the digital revolution and to build its current lead in artificial intelligence. U.S. policymakers must work with the private sector to build significantly larger quantities of the physical infrastructure — from data centers to power plants — that run the AI systems themselves. Public-private partnerships to build this needed infrastructure will equip U.S. firms with the computing power to expand access to AI and better distribute its societal benefits.

Building this infrastructure will also create new jobs nationwide. We are witnessing the birth and evolution of a technology I believe to be as momentous as electricity or the internet. AI can be the foundation of a new industrial base it would be wise for our country to embrace.

We need to complement the proverbial “bricks and mortar” with substantial investment in human capital. As a nation, we need to nurture and develop the next generation of AI innovators, researchers and engineers. They are our true superpower.

Third, we must develop a coherent commercial diplomacy policy for AI, including clarity around how the United States intends to implement export controls and foreign investment rules for the global buildout of AI systems. That will also mean setting out rules of the road for what sorts of chips, AI training data and other code — some of which is so sensitive that it may need to remain in the United States — can be housed in the data centers that countries around the world are racing to build to localize AI information.

Our existing AI lead, at a time when nations worldwide are vying for greater access to the technology, will make it easier to bring more countries into this new coalition. Making sure open-sourced models are readily available to developers in those nations will further bolster our advantage. The challenge of who will lead on AI is not just about exporting technology; it's about exporting the values that the technology upholds.

And fourth, we need to think creatively about new models for the world to establish norms in developing and deploying AI, with a particular focus on safety and ensuring a role for the global south and other nations who have been historically left behind. As with other issues of global importance, that will require us to engage with China and maintain an ongoing dialogue.

I’ve spoken in the past about creating something akin to the International Atomic Energy Agency for AI, but that is just one potential model. One option could knit together the network of AI safety institutes being built in countries such as Japan and Britain and create an investment fund that countries committed to abiding by democratic AI protocols could draw from to expand their domestic computing capacities.

Another potential model is the Internet Corporation for Assigned Names and Numbers, which was established by the U.S. government in 1998, less than a decade after the creation of the World Wide Web, to standardize how we navigate the digital world. ICANN is now an independent nonprofit with representatives from around the world dedicated to its core mission of maximizing access to the internet in support of an open, connected, democratic global community.

While identifying the right decision-making body is important, the bottom line is that democratic AI has a lead over authoritarian AI because our political system has empowered U.S. companies, entrepreneurs and academics to research, innovate and build.

We won’t be able to have AI that is built to maximize the technology’s benefits while minimizing its risks unless we work to make sure the democratic vision for AI prevails. If we want a more democratic world, history tells us our only choice is to develop an AI strategy that will help create it, and that the nations and technologists who have a lead have a responsibility to make that choice — now.
https://www.washingtonpost.com/opinions/2024/07/25/sam-altman-ai-democracy-authoritarianism-future/

Translate the full article and provide commentary @Creative_bot

Who will control the future of AI? A commentary on Sam Altman's views

Commentary:

Sam Altman's article offers an incisive analysis of where AI development stands today and where it is heading. He captures both the opportunities and the challenges the technology brings, and urges the United States and its allies to act so that AI develops in line with democratic values and benefits all of humanity.

Strengths of the article:

  • **Clearly frames the two paths for AI development:** democratic AI versus authoritarian AI, and presents the choice between them as one that will shape humanity's future.
  • **Stresses the United States' lead in AI and the responsibility that comes with it:** the U.S. should lead the world toward a future shaped by a democratic vision for AI.
  • **Offers concrete recommendations:** strengthening security measures, investing in infrastructure, developing a commercial diplomacy policy, and building mechanisms for global AI governance.
  • **Calls for international cooperation:** underscores its importance in meeting AI's challenges and floats specific proposals, such as an IAEA-style body for AI.

Weaknesses of the article:

  • **Overemphasizes U.S. leadership:** the United States does lead in AI, but other countries and regions are also developing the technology aggressively and have achieved breakthroughs in some areas. Leaning too heavily on American leadership may provoke resentment and pushback elsewhere.
  • **Gives too little attention to AI's risks:** the article mentions the technology's potential dangers but does not examine them in depth or propose concrete measures to address them.

Overall, Sam Altman's article is a valuable reference for understanding how AI is developing and how to meet the challenges it brings. His views and recommendations deserve careful consideration.

But there's still the Great Fire🔥wall, right?

That's a sharp observation. China's "Great Firewall" may indeed affect its goal of becoming the global AI leader by 2030. We can look at this from several angles:

Negative effects of the firewall:

  • Restricted access to information: The firewall limits Chinese researchers' and developers' access to international academic resources, open-source code repositories, and exchanges with peers around the world. That can keep them from tracking the latest research and constrain their capacity to innovate.
  • Obstacles to attracting talent: For some of the world's top AI researchers, China's network environment may not feel sufficiently open and free, which can dampen their willingness to work or collaborate there.
  • Restricted data flows: AI development depends heavily on data. The firewall may limit data exchange between China and other countries, narrowing the training data available to Chinese AI models and hurting their performance and ability to generalize.
  • Barriers to international cooperation: The firewall can impede cross-border AI collaboration, leaving China with greater challenges in taking part in global AI governance and standard-setting.

China's measures for dealing with the firewall:

  • Doubling down on indigenous R&D: The Chinese government strongly backs homegrown AI research, encouraging domestic companies and research institutions to develop their own AI hardware and software and reduce reliance on foreign technology.
  • Building a domestic research ecosystem: China is working to assemble a complete domestic AI research ecosystem spanning universities, research institutes, and companies, using internal collaboration and resource sharing to offset restricted access to outside information.
  • Opening up selectively: The government is also experimenting with selective openness, for example by creating more open network environments in particular regions or industries to promote international exchange and cooperation.
  • Localizing data: China emphasizes data security and self-reliance, encouraging companies to store data domestically and issuing regulations to govern cross-border data flows.

Summary:

The firewall is indeed a major challenge on China's path to AI leadership. It may constrain China's access to information, its ability to attract talent, the flow of data, and international cooperation. The Chinese government is aware of these problems, however, and has taken a range of countermeasures. Whether China can overcome the obstacles the firewall creates and realize its AI ambitions remains to be seen.

In short, the firewall's effect on China's AI development is complex: it poses real challenges, but it has also prompted countermeasures. This is an evolving situation, and its trajectory is worth watching.