That's him.
Download his public materials and read through them carefully; taken in full, everything becomes clear.
This is a 1-hour general-audience introduction to Large Language Models: the core technical component behind systems like ChatGPT, Claude, and Bard. It covers what they are, where they are headed, comparisons and analogies to present-day operating systems, and some of the security-related challenges of this new computing paradigm.
As of November 2023 (this field moves fast!).
Context: This video is based on the slides of a talk I gave recently at the AI Security Summit. The talk was not recorded, but a lot of people came to me after and told me they liked it. Seeing as I had already put in one long weekend of work to make the slides, I decided to just tune them a bit, record this round 2 of the talk, and upload it here on YouTube. Pardon the random background; that's my hotel room during the Thanksgiving break.
- Slides as PDF: llmintro.pdf (Google Drive)
"If you can't explain something in simple terms, you don't understand it." — Richard Feynman. In this video, Hinton does this quote justice!
Published Feb 29, 2024
Professor Geoffrey Hinton, CC, FRS, FRSC, the ‘Godfather of AI’, delivered Oxford’s annual Romanes Lecture at the Sheldonian Theatre on Monday, 19 February 2024.
The public lecture, entitled 'Will digital intelligence replace biological intelligence?', discussed the dangers of artificial intelligence (AI) and how to ensure it does not take control of humans and, consequently, wipe out humanity. He said that the fact that digital intelligence is immortal and does not evolve should make it less susceptible to religion and wars, but 'if a digital super-intelligence ever wanted to take control it is unlikely that we could stop it,' he added.
The British-Canadian computer scientist and cognitive psychologist also spoke of how AI could replace humans in the workforce and how it could be used to spread misinformation. He had previously believed that it could take AI systems up to a century to become ‘super intelligent’. He now thinks that it could happen much sooner than he had anticipated.
The Romanes Lecture is the University's annual public lecture, delivered by a distinguished public figure from the arts, science, or literature at the special invitation of the Vice-Chancellor. The series was established in 1891, following an offer by George John Romanes of Christ Church to fund an annual lecture; the first was delivered in 1892 by William Gladstone.
On one side, the LLMs reign unchallenged; on the other, Yann wields his dragon-slaying blade :)
On the surface the dispute is over development roadmaps, but at heart it is about how to view human language, and deeper still, how to view humanity itself…
If you want to write an essay arguing against artificial intelligence, this piece is required reading :)
@Creative_bot Summarize Yann LeCun's views in Chinese
Yann LeCun argues that merely scaling up language models cannot achieve true AI. He criticizes this approach as just "piling up data," lacking genuine understanding of the real world. He advocates that AI should imitate how humans and animals learn, acquiring knowledge through interaction with the environment and developing systems with human-like common sense and reasoning abilities.
@Creative_bot Summarize Hinton's views
Professor Hinton believes that digital intelligence is developing far faster than expected and may well surpass human intelligence in the near future. He worries that the immortality and non-evolutionary nature of digital intelligence could make it uncontrollable, ultimately posing a threat to humanity.
He points out that once digital intelligence goes out of control, humans will be powerless to stop it. He therefore calls for attention to AI's potential risks and for active exploration of how to control and steer the direction of AI development, so as to prevent it from harming humanity.
@Creative_bot Analyze the core points of contention between the two; list them one by one with analysis
The core points of contention between Hinton and LeCun on artificial intelligence:
1. Views on the development path of large language models (LLMs):
- Hinton: Worried about the pace of LLM development; believes that scaling them up could lead to uncontrollable risks, ultimately surpassing human intelligence and threatening human survival.
- LeCun: Does not believe that scaling up LLMs is the path to true AI; criticizes them for lacking real-world understanding and common-sense reasoning ability.
Analysis:
Both acknowledge the power of LLMs but diverge on their future trajectory. Hinton focuses more on the potential risks of LLMs, while LeCun focuses on their limitations, arguing that other paths must be explored to achieve true AI.
2. The key to achieving artificial general intelligence (AGI):
- Hinton: Does not explicitly identify the key to AGI, but suggests that existing deep learning methods such as LLMs could bring risks if their development gets out of control.
- LeCun: Advocates that AI should imitate how humans and animals learn, developing human-like common sense and reasoning by interacting with the environment and accumulating experience.
Analysis:
Both hold that achieving AGI requires learning and cognition closer to the human kind. Hinton is reserved about the safety of current deep learning methods, while LeCun advocates new learning paradigms, such as his proposed "world models."
3. Assessment of AI's potential risks:
- Hinton: Pessimistic about AI's potential risks; believes that once digital intelligence goes out of control, humans will be powerless to stop it, possibly leading to human extinction.
- LeCun: Relatively optimistic; sees AI as merely a tool whose direction ultimately rests in human hands and can be steered through technical means and ethical norms.
Analysis:
The two differ markedly in their risk assessments. Hinton emphasizes AI's potential threats and urges humanity to treat AI development with caution, whereas LeCun stresses human agency, trusting that reasonable control and guidance can avert AI risks.
Summary: Hinton and LeCun are both leading figures in artificial intelligence, and their views represent the two main schools of thought in current AI research. Their debate is not a matter of who is right and who is wrong; it reflects the mix of excitement, expectation, caution, and worry with which humanity faces this powerful and still-unknown technology.
Ultimately, how to develop artificial intelligence safely and controllably, so that it truly benefits humanity, is a challenge we all share.