Bill Gates in Conversation with Sam Altman
If you ask people to name leaders in artificial intelligence, there’s one name you’ll probably hear more than any other: Sam Altman. His team at OpenAI is pushing the boundaries of what AI can do with ChatGPT, and I loved getting to talk to him about what’s next. Our conversation covered why today’s AI models are the stupidest they’ll ever be, how societies adapt to technological change, and even where humanity will find purpose once we’ve perfected artificial intelligence.
BILL GATES: My guest today is Sam Altman. He, of course, is the CEO of OpenAI. He’s been an entrepreneur and a leader in the tech industry for a long time, including running Y Combinator, which did amazing things like funding Reddit, Dropbox, and Airbnb.
A little while after I recorded this episode, I was completely taken by surprise when, at least briefly, he was let go as the CEO of OpenAI. A lot happened in the days after the firing, including a show of support from nearly all of OpenAI’s employees, and Sam is back. So, before you hear the conversation that we had, let’s check in with Sam and see how he’s doing.
[audio – Teams call initiation]
BILL GATES: Hey, Sam.
SAM ALTMAN: Hey, Bill.
BILL GATES: How are you?
SAM ALTMAN: Oh, man. It’s been so crazy. I’m all right. It’s a very exciting time.
BILL GATES: How’s the team doing?
SAM ALTMAN: I think, you know, a lot of people have remarked on the fact that the team has never felt more productive or more optimistic or better. So, I guess that’s a silver lining of all of this.
In some sense, this was a real moment of growing up for us; we are very motivated to become better, and to become a company ready for the challenges in front of us.
BILL GATES: Fantastic.
[audio – Teams call end]
[music]
So, we won’t be discussing that situation in the conversation; however, you will hear about Sam’s commitment to building a safe and responsible AI. I hope you enjoy the conversation.
Welcome to Unconfuse Me. I’m Bill Gates.
[music fades]
BILL GATES: Today we’re going to focus mostly on AI, because it’s such an exciting thing, and people are also concerned. Welcome, Sam.
SAM ALTMAN: Thank you so much for having me.
BILL GATES: I was privileged to see your work as it evolved, and I was very skeptical. I didn’t expect ChatGPT to get so good. It blows my mind, and we don’t really understand the encoding. We know the numbers, we can watch it multiply, but the idea of where Shakespeare is encoded? Do you think we’ll gain an understanding of the representation?
SAM ALTMAN: A hundred percent. Trying to do this in a human brain is very hard. You could say it’s a similar problem: there are these neurons, they’re connected, the connections are moving, and we’re not going to slice up your brain and watch how it’s evolving, but this we can perfectly x-ray. There has been some very good work on interpretability, and I think there will be more over time. I think we will be able to understand these networks, but our current understanding is low. The little bits we do understand have, as you’d expect, been very helpful in improving these things. Scientific curiosity aside, we’re all motivated to really understand them, but the scale of these is so vast. We could also ask: where in your brain is Shakespeare encoded, and how is that represented?
BILL GATES: We don’t know.
SAM ALTMAN: We don’t really know, but it somehow feels even less satisfying to say we don’t know yet in these masses of numbers that we’re supposed to be able to perfectly x-ray, watch, and run any tests we want on.
BILL GATES: I’m pretty sure that within the next five years we’ll understand it. In terms of both training efficiency and accuracy, that understanding would let us do far better than we’re able to do today.
SAM ALTMAN: A hundred percent. You see this in a lot of the history of technology, where someone makes an empirical discovery. They have no idea what’s going on, but it clearly works. Then, as the scientific understanding deepens, they can make it so much better.
BILL GATES: Yes, in physics, biology, it’s sometimes just messing around, and it’s like, whoa – how does this actually come together?
SAM ALTMAN: In our case, the guy that built GPT-1 sort of did it off by himself and solved this, and it was somewhat impressive, but there was no deep understanding of how it worked or why it worked. Then we got the scaling laws. We could predict how much better it was going to be. That was why, when we told you we could do a demo, we were pretty confident it was going to work. We hadn’t trained the model, but we were pretty confident. That has led us to a bunch of attempts and better and better scientific understanding of what’s going on. But it really came from a place of empirical results first.
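The scaling laws mentioned here are empirical power laws: loss falls smoothly as a power of training compute, so a family of small runs can be extrapolated to predict a much larger run before it is trained. A minimal sketch of the idea; the coefficients below are made up for illustration, not OpenAI’s actual fitted values:

```python
# Illustrative power-law scaling: loss ~ a * C^(-b), where C is training compute.
# Because the curve is smooth, fitting a and b on small runs lets you predict
# the loss of a far larger run before spending the compute on it.
def predicted_loss(compute_flops, a=1e3, b=0.05):
    """Toy scaling law with invented coefficients a and b."""
    return a * compute_flops ** -b

# Extrapolate across four orders of magnitude of compute.
for c in (1e18, 1e20, 1e22):
    print(f"compute={c:.0e}  predicted loss={predicted_loss(c):.2f}")
```

The point is not the numbers but the shape: a straight line on a log-log plot, which is what made the pre-training confidence Altman describes possible.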
BILL GATES: When you look at the next two years, what do you think some of the key milestones will be?
SAM ALTMAN: Multimodality will definitely be important.
BILL GATES: Which means speech in, speech out?
SAM ALTMAN: Speech in, speech out. Images. Eventually video. Clearly, people really want that. We’ve launched images and audio, and the response was much stronger than we expected. We’ll be able to push that much further, but maybe the most important areas of progress will be around reasoning ability. Right now, GPT-4 can reason in only extremely limited ways. Also reliability: if you ask GPT-4 most questions 10,000 times, one of those 10,000 answers is probably pretty good, but it doesn’t always know which one, and you’d like to get the best response of the 10,000 every time. That increase in reliability will be important.
Customizability and personalization will also be very important. People want very different things out of GPT-4: different styles, different sets of assumptions. We’ll make all that possible, and then also the ability to have it use your own data: the ability to know about you, your email, your calendar, how you like appointments booked, connected to other outside data sources. Those will be some of the most important areas of improvement.
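One common way to cash out the "best response of 10,000" idea is best-of-n sampling: draw several candidates and keep the one a scorer prefers. This is a toy sketch, not OpenAI’s method; `generate` and `score` are hypothetical stand-ins for a language model and a verifier or reward model:

```python
import random

def generate(prompt, seed):
    """Stand-in for sampling a model once; returns one candidate answer."""
    random.seed(seed)
    return f"answer with quality {random.random():.3f}"

def score(candidate):
    """Stand-in for a verifier/reward model; here it just reads the number."""
    return float(candidate.rsplit(" ", 1)[1])

def best_of_n(prompt, n=16):
    # Sample n candidates and keep the one the scorer likes best.
    candidates = [generate(prompt, seed=i) for i in range(n)]
    return max(candidates, key=score)

print(best_of_n("prove this lemma", n=16))
```

The catch, as the conversation notes, is that the model "doesn’t always know which one" is best; the quality of the scorer is what makes this reliable.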
BILL GATES: In the basic algorithm right now, it’s just feed-forward multiplication, so to generate every new word, it’s essentially doing the same thing. I’ll be interested to see if you ever get to the point where, as in solving a complex math equation, you might have to apply transformations an arbitrary number of times, so that the control logic for the reasoning has to be quite a bit more complex than what we do today.
SAM ALTMAN: At a minimum, it seems like we need some sort of adaptive compute. Right now, we spend the same amount of compute on each token, whether it’s a dumb one or it’s figuring out some complicated math.
BILL GATES: Yes, when we say, "Do the Riemann hypothesis …"
SAM ALTMAN: That deserves a lot of compute.
BILL GATES: It’s the same compute as saying, "The."
SAM ALTMAN: Right, so at a minimum, we’ve got to get that to work. We may need much more sophisticated things beyond it.
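Adaptive compute, in its simplest form, is an early-exit loop: keep refining until some confidence threshold is reached, so easy inputs cost fewer steps than hard ones. A toy illustration only; `refine` is a stand-in for a real model layer, and the "uncertainty" numbers are invented:

```python
# Toy adaptive compute: stop refining once "uncertainty" drops below a
# threshold, so an easy token exits after few steps and a hard one after many.

def refine(state):
    """Stand-in for one refinement step; pretend it halves the uncertainty."""
    return state * 0.5

def adaptive_steps(initial_uncertainty, threshold=0.1, max_steps=50):
    state, steps = initial_uncertainty, 0
    while state > threshold and steps < max_steps:
        state = refine(state)
        steps += 1
    return steps

print(adaptive_steps(0.2))    # an easy token exits quickly
print(adaptive_steps(100.0))  # a hard problem keeps computing
```

A fixed-depth transformer, by contrast, always runs the same number of layers per token, which is the mismatch Gates and Altman are pointing at.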
BILL GATES: You and I were both part of a Senate education session, and I was pleased that about 30 senators came to that, helping them get up to speed, since it’s such a big change agent. I don’t think we could ever say we did too much to draw the politicians in. And yet, when they say, "Oh, we blew it on social media, we should do better" – that remains an outstanding challenge with very negative elements, in terms of polarization. Even now, I’m not sure how we would deal with that.
SAM ALTMAN: I don’t understand why the government was not able to be more effective around social media, but it seems worth trying to understand as a case study for what they’re going to go through now with AI.
BILL GATES: It’s a good case study, and when you talk about regulation, is it clear to you what sort of regulations should be constructed?
SAM ALTMAN: I think we’re starting to figure that out. It would be very easy to put way too much regulation on this space, and you can look at lots of examples where that has happened before. But also, if we are right – and we may turn out not to be – but if we are right, and this technology goes as far as we think it’s going to go, it will impact society, the geopolitical balance of power, and so many other things. For these still-hypothetical but extraordinarily powerful future systems – not something like GPT-4, but something with 100,000 or a million times its compute – we have come around to the idea of a global regulatory body that looks at those super-powerful systems, because they do have such global impact. One model we talk about is something like the IAEA. For nuclear energy, we decided the same thing: because of the potential for global impact, it needs a global agency of some sort. I think that could make sense. There will also be a lot of shorter-term issues: what are these models allowed to say and not say? How do we think about copyright? Different countries are going to think about those differently, and that’s fine.
BILL GATES: Some people think that if there are models that are so powerful, we should be scared of them. The reason nuclear regulation works globally is basically that everyone, at least on the civilian side, wants to share safety practices, and it has been fantastic. When you get over to the weapons side of nuclear, you don’t have that same thing. If the key is to stop the entire world from doing something dangerous, you’d almost want global government, and today, for many issues, like climate and terrorism, we see that it’s hard for us to cooperate. People even invoke U.S.-China competition to say why any notion of slowing down would be inappropriate. Isn’t any idea of slowing down, or going slow enough to be careful, hard to enforce?
SAM ALTMAN: Yes, I think if it comes across as asking for a slowdown, that will be really hard. If it instead says, "Do what you want, but any compute cluster above a certain extremely high power threshold" – and given the cost here, we’re talking maybe five such clusters in the world – any cluster like that has to submit to the equivalent of international weapons inspectors. The model there has to be made available for safety audit, and has to pass some tests during training and before deployment. That feels possible to me. I wasn’t that sure before, but I did a big trip around the world this year and talked to heads of state in many of the countries that would need to participate in this, and there was almost universal support for it. That’s not going to save us from everything. There are still going to be things that go wrong with much smaller-scale systems, in some cases probably pretty badly. But I think that can help us with the biggest tier of risks.
BILL GATES: I do think AI, in the best case, can help us with some hard problems.
SAM ALTMAN: For sure.
BILL GATES: Including polarization, because potentially that breaks democracy, and that would be a super-bad thing. Right now, we’re looking at a lot of productivity improvement from AI, which is overwhelmingly a very good thing. Which areas are you most excited about?
SAM ALTMAN: First of all, I always think it’s worth remembering that we’re on this long, continuous curve. Right now, we have AI systems that can do tasks. They certainly can’t do jobs, but they can do tasks, and there’s a productivity gain there. Eventually, they will be able to do more of the things we think of as a job today, and we will, of course, find new and better jobs. I totally believe that if you give people way more powerful tools, it’s not just that they can work a little faster; they can do qualitatively different things. Right now, maybe we can speed up a programmer 3x. That’s about what we see, it’s one of the categories we’re most excited about, and it’s working super well. But if you make a programmer three times more effective, it’s not just that they can do three times more stuff; it’s that, at that higher level of abstraction, using more of their brainpower, they can now think of totally different things. Going from punch cards to higher-level languages didn’t just let us program a little faster; it let us do qualitatively new things. We’re really seeing that.
As we look at the next steps, at things that can do a more complete task, you can imagine a little agent you can say to, "Go write this whole program for me; I’ll ask you a few questions along the way, but it won’t just be writing a few functions at a time." That will enable a bunch of new stuff. And then, again, it will do even more complex stuff. Someday, maybe there’s an AI where you can say, "Go start and run this company for me." And then someday, maybe there’s an AI where you can say, "Go discover new physics." The stuff we’re seeing now is very exciting and wonderful, but I think it’s worth always putting it in the context of this technology that, at least for the next five or ten years, will be on a very steep improvement curve. These are the stupidest the models will ever be.
Coding is probably the single area, from a productivity-gain standpoint, that we’re most excited about today. It’s massively deployed and at scaled usage at this point. Healthcare and education are two other things coming up that curve that we’re very excited about too.
BILL GATES: The thing that is a little daunting is that, unlike previous technology improvements, this one could improve very rapidly, and there’s kind of no upper bound. It could achieve human levels in a lot of areas of work; even if it’s not doing unique science, it can do support calls and sales calls. I guess you and I do have some concern that, along with this good thing, it’ll force us to adapt faster than we’ve ever had to before.
SAM ALTMAN: That’s the scary part. It’s not that we have to adapt, or that humanity is not super-adaptable. We’ve been through massive technological shifts before. A massive percentage of the jobs people do can change over a couple of generations, and over a couple of generations, we seem to absorb that just fine. We’ve seen that with the great technological revolutions of the past. But each technological revolution has gotten faster, and this will be the fastest by far. That’s the part I find potentially a little scary: the speed with which society is going to have to adapt, and the way the labor market will change.
BILL GATES: One aspect of AI is robotics, or blue-collar jobs, once you get hands and feet that are at human-level capability. The incredible ChatGPT breakthrough has gotten us focused on the white-collar side, which is super appropriate, but I do worry that people are losing focus on the blue-collar piece. How do you see robotics?
SAM ALTMAN: Super-excited for that. We started robots too early, so we had to put that project on hold. It was hard for the wrong reasons; it wasn’t helping us make progress on the difficult parts of the ML research. We were dealing with bad simulators and breaking tendons and things like that. We also realized more and more over time that what we first needed was intelligence and cognition, and that we could then figure out how to adapt it to physicality. Given the way we built these language models, it was easier to start there. But we have always planned to come back to it.
We’ve started investing a little bit in robotics companies. On the physical hardware side, there are finally, for the first time that I’ve ever seen, really exciting new platforms being built. At some point, we will be able to use our models, as you were saying, with their language understanding and future video understanding, to say, "All right, let’s do amazing things with a robot."
BILL GATES: If the hardware guys who’ve done a good job on legs actually get the arms, hands, and fingers piece, and then we couple it with these models, and it’s not ridiculously expensive, that could change the job market for a lot of blue-collar work pretty rapidly.
SAM ALTMAN: Yes. Certainly the consensus prediction, if we rewind seven or ten years, was that the impact was going to hit blue-collar work first, white-collar work second, and creativity maybe never – but certainly last, because creativity was magic and human.
Obviously, it’s gone exactly the other direction. I think there are a lot of interesting takeaways about why that happened. For creative work, the hallucinations of the GPT models are a feature, not a bug; they let you discover some new things. Whereas if you have a robot moving heavy machinery around, you’d better be really precise. I think this is just a case of having to follow where the technology goes. You have preconceptions, but sometimes the science doesn’t want to go that way.
BILL GATES: So what application on your phone do you use the most?
SAM ALTMAN: Slack.
BILL GATES: Really?
SAM ALTMAN: Yes. I wish I could say ChatGPT.
BILL GATES: [laughs] Even more than e-mail?
SAM ALTMAN: Way more than e-mail. The only thing I thought might possibly beat it was iMessages, but yes, more than that.
BILL GATES: Inside OpenAI, there’s a lot of coordination going on.
SAM ALTMAN: Yes. What about you?
BILL GATES: It’s Outlook. I’m this old-style e-mail guy, either that or the browser, because, of course, a lot of my news comes through the browser.
SAM ALTMAN: I didn’t quite count the browser as an app. It’s possible I use it more, but I’d still bet on Slack. I’m on Slack all day.
BILL GATES: Incredible.
BILL GATES: Well, we’ve got a turntable here. I asked Sam, as I have other guests, to bring one of his favorite records. So, what have we got?
SAM ALTMAN: I brought The New Four Seasons - Vivaldi Recomposed by Max Richter. I like music with no words for working. This had the old comfort of Vivaldi and pieces I knew really well, but enough new notes that it was a totally different experience. There are pieces of music you form strong emotional attachments to because you listened to them a lot during a key period of your life. This was something I listened to a lot while we were starting OpenAI.
I think it’s very beautiful music. It’s soaring and optimistic, and just perfect for me to work to. I thought the new version was super great.
BILL GATES: Is it performed by an orchestra?
SAM ALTMAN: It is. The Chineke! Orchestra.
BILL GATES: Fantastic.
SAM ALTMAN: Should I play it?
BILL GATES: Yes, let’s.
[music – "The New Four Seasons – Vivaldi Recomposed: Spring 1" by Max Richter]
SAM ALTMAN: This is the intro to the sound we’re going for.
[music]
BILL GATES: Do you wear headphones?
SAM ALTMAN: I do.
BILL GATES: Do your colleagues give you a hard time about listening to classical music?
SAM ALTMAN: I don’t think they know what I listen to, because I do wear headphones. It’s very hard for me to work in silence. I can do it, but it’s not my natural state.
BILL GATES: It’s fascinating. Songs with words, I agree, I would find distracting, but this is more of a mood type of thing.
SAM ALTMAN: Yes, and I keep it quiet. I can’t listen to loud music either; it’s just somehow always been what I’ve done.
BILL GATES: It’s fantastic. Thanks for bringing it.
[music fades]
BILL GATES: Now, with AI, to me, if you do get to the incredible capability – AGI, AGI+ – there are three things I worry about. One is that a bad guy is in control of the system; if we have good guys with equally powerful systems, that hopefully minimizes the problem. There’s the chance of the system taking control; for some reasons, I’m less concerned about that, and I’m glad other people are. The one that sort of befuddles me is human purpose. I get a lot of excitement that, hey, I’m good at working on malaria and malaria eradication, and getting smart people and applying resources to that. When the machine says to me, "Bill, go play pickleball, I’ve got malaria eradication. You’re just a slow thinker," then it is a philosophically confusing thing. How do you organize society? Yes, we’re going to improve education, but education to do what, if you get to this extreme? We still have big uncertainty there, but for the first time, the chance that this might come in the next 20 years is not zero.
SAM ALTMAN: There are a lot of psychologically difficult parts of working on the technology, but this is, for me, the most difficult, because I also get a lot of satisfaction from that.
BILL GATES: You have real value added.
SAM ALTMAN: In some real sense, this might be the last hard thing I ever do.
BILL GATES: Our minds are so organized around scarcity – scarcity of teachers and doctors and good ideas – that I do wonder whether a generation that grows up without that scarcity will find the philosophical notion of how to organize society and what to do. Maybe they’ll come up with a solution. I’m afraid my mind is so shaped around scarcity that I even have a hard time thinking of it.
SAM ALTMAN: That’s what I tell myself too, and it’s what I truly believe: although we are giving something up here, in some sense we are going to have things that are smarter than us. If we can get into this world of post-scarcity, we will find new things to do, and they will feel very different. Maybe instead of solving malaria, you’re deciding which galaxy you like and what you’re going to do with it. I’m confident we’re never going to run out of problems, never going to run out of ways to find fulfillment and do things for each other, and never going to stop understanding how we play our human games for other humans; that’s going to remain really important. It is going to be different, for sure, but I think the only way out is through. We have to go do this thing. It’s going to happen. This is now an unstoppable technological course; the value is too great. And I’m pretty confident – very confident – we’ll make it work, but it does feel like it’s going to be so different.
BILL GATES: For applying this to certain current problems, like getting kids a tutor and helping motivate them, or discovering drugs for Alzheimer’s, I think it’s pretty clear how to do that. Whether AI can help us go to war less, or be less polarized – you’d think that as you drive intelligence up, not being polarized is kind of common sense, and not having war is common sense – but I do think a lot of people would be skeptical. I’d love to have people working on the hardest human problems, like whether we get along with each other. I think that would be extremely positive, if we thought the AI could contribute to humans getting along with each other.
SAM ALTMAN: I believe it will surprise us on the upside there. The technology will surprise us with how much it can do. We’ve got to find out and see, but I’m very optimistic. I agree with you: what a contribution that would be.
BILL GATES: In terms of equity, technology is often expensive, like a PC or an Internet connection, and it takes time for costs to come down. I guess for the costs of running these AI systems, it looks pretty good that the cost per evaluation is going to come down a lot?
SAM ALTMAN: It has come down an enormous amount already. GPT-3 is the model we’ve had out the longest and had the most time to optimize; in the three-and-a-bit years it has been out, we’ve been able to bring its cost down by a factor of 40. For three years’ time, that’s a pretty good start. For GPT-3.5, I would bet we’ve brought the cost down close to a factor of 10 at this point. GPT-4 is newer, so we haven’t had as much time to bring the cost down there, but we will continue to. I think we are on the steepest cost-reduction curve of any technology I know, way better than Moore’s Law. It’s not only that we figured out how to make the models more efficient; it’s also that, as we understand the research better, we can get more knowledge and more capability into a smaller model. I think we are going to drive the cost of intelligence down so close to zero that it will be a before-and-after transformation for society.
Right now, my basic model of the world is the cost of intelligence and the cost of energy. [Bill laughs] Those are the two biggest inputs to quality of life, particularly for poor people, but overall. If you can drive both of those down at the same time, the amount of stuff you can have and the amount of improvement you can deliver for people is enormous. We are on a curve, at least for intelligence, where we will really deliver on that promise. Even at the current price – which is the highest it will ever be, and far more than we expected – for $20 a month you get a lot of GPT-4 access, and way more than $20 of value. We’ve come down a lot.
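As a quick check on the earlier claim, annualizing a 40x cost drop over three years and comparing it with a Moore’s-Law doubling every two years shows how much steeper the curve is:

```python
# Back-of-the-envelope: annualize a 40x cost decline over 3 years and
# compare with Moore's law (~2x transistor density every 2 years).
gpt3_annual = 40 ** (1 / 3)   # ~3.4x cheaper per year
moore_annual = 2 ** (1 / 2)   # ~1.4x per year
print(f"GPT-3 cost decline: ~{gpt3_annual:.2f}x per year")
print(f"Moore's law:        ~{moore_annual:.2f}x per year")
```

So the claimed decline is more than twice the annual rate of Moore’s Law, consistent with "way better than Moore’s Law" above.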
BILL GATES: What about competition? Isn’t it an interesting thing to have so many people piling into this space at once?
SAM ALTMAN: It’s both annoying and motivating and fun. [Bill laughs] I’m sure you’ve felt something similar. It does push us to do things faster and better, and we’re very confident in our approach. There are a lot of people who I think are skating to where the puck is, while we’re skating to where the puck is going. That feels good.
BILL GATES: I think people would be surprised at how small OpenAI is. How many employees do you have?
SAM ALTMAN: About 500, so we’re a little bigger than we used to be.
BILL GATES: But that’s tiny, [laughs] by Google, Microsoft, Apple standards.
SAM ALTMAN: It is, and we not only have to run the research lab; we now also run a real business and two products.
BILL GATES: The scale-up of everything you do, including talking to everybody in the world and hearing from all the constituencies, that must be fascinating for you.
SAM ALTMAN: It’s very fascinating.
BILL GATES: Is it a company full of young employees?
SAM ALTMAN: Older than average.
BILL GATES: Okay.
SAM ALTMAN: It’s not a bunch of 24-year-old programmers here.
BILL GATES: Indeed, my perspective is a bit distorted, because I’m in my sixties now. I look at you and you’re younger than I am, but you’re right, a lot of your people are in their forties.
SAM ALTMAN: Thirties, forties, fifties.
BILL GATES: It’s not like early Apple or Microsoft, when we really were kids.
SAM ALTMAN: No, and I’ve reflected on that. I think companies have gotten older in general, and I don’t know what to make of it. I think it’s somehow a bad sign for society, but I tracked this at YC (Y Combinator): over time, the best founders have trended older.
BILL GATES: That’s interesting.
SAM ALTMAN: In our case, we’re even a little older than that average.
BILL GATES: The role you played at YC, helping those companies learn so much, I imagine that was good training for what you’re doing now. [laughs]
SAM ALTMAN: It was very helpful.
BILL GATES: Including seeing the mistakes.
SAM ALTMAN: Totally. OpenAI did a lot of things that are the opposite of standard YC advice. We took four and a half years to launch our first product. We started the company without any notion of a product, and we weren’t talking to users. I still don’t recommend that for most companies, but having learned the rules and seen them at YC, I felt I understood when, how, and why we could break them, and what we did really was very different from any other company I’d seen.
BILL GATES: The key was the talent you assembled and letting them focus on the big problems, rather than some short-term revenue question.
SAM ALTMAN: I don’t think Silicon Valley investors would have backed us at the level we needed, because we had to spend so much on research before we could ship a product. We just said, "Eventually the model will be good enough, and we know it will be valuable to people." But we’re very grateful for the partnership with Microsoft, because that kind of up-front investment is not what the venture capital industry is good at.
BILL GATES: It certainly isn’t, and the capital costs are quite significant, almost at the limit of what venture capital can bear.
SAM ALTMAN: Probably past it.
BILL GATES: It probably is. I give Satya a lot of credit for thinking through "how do you couple this brilliant AI organization with a big software company?" You could even say one plus one has turned out to be far more than two.
SAM ALTMAN: Yes, it’s been great. You’ve hit on something I also learned at YC: we said we would get the best people in the world at this. We would make sure we were all aligned on the goal and the mission of AGI. But beyond that, we would let people do their own thing. We knew it would go through some twists and turns and take a while.
We had a theory that turned out to be roughly right, but a lot of the tactics along the way turned out to be super wrong, and we just tried to follow the science.
BILL GATES: I remember seeing the demo and genuinely wondering, what’s the path to revenue for this? What does it look like? And in this frenzied era, you still hold an incredible team.
SAM ALTMAN: Yes, great people really want to work with great colleagues.
BILL GATES: That’s an attractive force.
SAM ALTMAN: There’s a deep center of gravity there. Also, and this sounds cliché because every company says it, people feel a deep sense of mission: everyone wants to be part of creating AGI.
BILL GATES: It must be exhilarating. I could feel the energy when you blew my mind with the demos again. I saw new people, new ideas, and you’re still moving at an incredible speed.
SAM ALTMAN: What’s the piece of advice you give most often?
BILL GATES: There are many forms of talent. Early in my career, I thought there was only pure IQ – engineering IQ, which, of course, you could apply to finance and sales. That turned out to be so wrong. Building a team with the right mix of skills is so important. Getting people to think, for their particular problem, about how to build a team with all the different skills is probably the advice I find most helpful. And yes, tell kids that math and science are cool, if they like them, but what has really surprised me is the mix of talents.
What about you? What advice do you give?
SAM ALTMAN: That most people are miscalibrated on risk. They’re afraid to leave the comfortable job to go do the thing they really want to do, when in fact, if they don’t, they’ll look back at their lives and think, "Man, I never went and started the company I wanted to start, or I never tried to become an AI researcher." I think that’s actually the riskier path.
Related to that: being clear about what you want to do, and asking people for what you want, goes a surprisingly long way. A lot of people get trapped spending their time on things they don’t want to do, and probably the advice I give most often is to figure a way out of that.
BILL GATES: If you can get people into a job where they feel a sense of purpose, it’s more fun. Sometimes that’s how they end up having a gigantic impact.
SAM ALTMAN: For sure.
BILL GATES: Well, thanks for coming. It was a fantastic conversation, and in the years ahead I’m sure we’ll get to talk more as we try to shape AI in the best possible way.
SAM ALTMAN: Thanks a lot for having me. I really enjoyed it.
BILL GATES: Unconfuse Me is a production of The Gates Notes. Special thanks to my guest today, Sam Altman.
BILL GATES: So, tell me, what was your first computer?
SAM ALTMAN: A Mac LC II.
BILL GATES: Good choice.
SAM ALTMAN: It was a good one. I still have it, and it still works.
If you ask people to name leaders in artificial intelligence, there’s one name you’ll probably hear more than any other: Sam Altman. His team at OpenAI is pushing the boundaries of what AI can do with ChatGPT, and I loved getting to talk to him about what’s next. Our conversation covered why today’s AI models are the stupidest they’ll ever be, how societies adapt to technological change, and even where humanity will find purpose once we’ve perfected artificial intelligence.
BILL GATES:My guest today is Sam Altman. He, of course, is the CEO of OpenAI. He’s been an entrepreneur and a leader in the tech industry for a long time, including running Y Combinator, that did amazing things like funding Reddit, Dropbox, Airbnb.
A little while after I recorded this episode, I was completely taken by surprise when, at least briefly, he was let go as the CEO of OpenAI. A lot happened in the days after the firing, including a show of support from nearly all of OpenAI’s employees, and Sam is back. So, before you hear the conversation that we had, let’s check in with Sam and see how he’s doing.
[audio – Teams call initiation]
BILL GATES:Hey, Sam.
SAM ALTMAN: Hey, Bill.
BILL GATES:How are you?
SAM ALTMAN:Oh, man. It’s been so crazy. I’m all right. It’s a very exciting time.
BILL GATES:How’s the team doing?
SAM ALTMAN:I think, you know a lot of people have remarked on the fact that the team has never felt more productive or more optimistic or better. So, I guess that’s like a silver lining of all of this.
In some sense, this was like a real moment of growing up for us, we are very motivated to become better, and sort of to become a company ready for the challenges in front of us.
BILL GATES:Fantastic.
[audio – Teams call end]
[music]
So, we won’t be discussing that situation in the conversation; however, you will hear about Sam’s commitment to build a safe and responsible AI. I hope you enjoy the conversation.
Welcome to Unconfuse Me. I’m Bill Gates.
[music fades]
BILL GATES:Today we’re going to focus mostly on AI, because it’s such an exciting thing, and people are also concerned. Welcome, Sam.
SAM ALTMAN:Thank you so much for having me.
BILL GATES:I was privileged to see your work as it evolved, and I was very skeptical. I didn’t expect ChatGPT to get so good. It blows my mind, and we don’t really understand the encoding. We know the numbers, we can watch it multiply, but the idea of where is Shakespearean encoded? Do you think we’ll gain an understanding of the representation?
SAM ALTMAN:A hundred percent. Trying to do this in a human brain is very hard. You could say it’s a similar problem, which is there are these neurons, they’re connected. The connections are moving and we’re not going to slice up your brain and watch how it’s evolving, but this we can perfectly x-ray. There has been some very good work on interpretability, and I think there will be more over time. I think we will be able to understand these networks, but our current understanding is low. The little bits we do understand have, as you’d expect, been very helpful in improving these things. We’re all motivated to really understand them, scientific curiosity aside, but the scale of these is so vast. We also could say, where in your brain is Shakespeare encoded, and how is that represented?
BILL GATES:We don’t know.
SAM ALTMAN:We don’t really know, but it somehow feels even less satisfying to say we don’t know yet in these masses of numbers that we’re supposed to be able to perfectly x-ray and watch and do any tests we want to on.
BILL GATES:I’m pretty sure, within the next five years, we’ll understand it. In terms of both training efficiency and accuracy, that understanding would let us do far better than we’re able to do today.
SAM ALTMAN:A hundred percent. You see this in a lot of the history of technology where someone makes an empirical discovery. They have no idea what’s going on, but it clearly works. Then, as the scientific understanding deepens, they can make it so much better.
BILL GATES:Yes, in physics, biology, it’s sometimes just messing around, and it’s like, whoa – how does this actually come together?
SAM ALTMAN:In our case, the guy that built GPT-1 sort of did it off by himself and solved this, and it was somewhat impressive, but no deep understanding of how it worked or why it worked. Then we got the scaling laws. We could predict how much better it was going to be. That was why, when we told you we could do a demo, we were pretty confident it was going to work. We hadn’t trained the model, but we were pretty confident. That has led us to a bunch of attempts and better and better scientific understanding of what’s going on. But it really came from a place of empirical results first.
BILL GATES:When you look at the next two years, what do you think some of the key milestones will be?
SAM ALTMAN:Multimodality will definitely be important.
BILL GATES:Which means speech in, speech out?
SAM ALTMAN:Speech in, speech out. Images. Eventually video. Clearly, people really want that. We’ve launched images and audio, and it had a much stronger response than we expected. We’ll be able to push that much further, but maybe the most important areas of progress will be around reasoning ability. Right now, GPT-4 can reason in only extremely limited ways. Also reliability. If you ask GPT-4 most questions 10,000 times, one of those 10,000 is probably pretty good, but it doesn’t always know which one, and you’d like to get the best response of 10,000 each time, and so that increase in reliability will be important.
Customizability and personalization will also be very important. People want very different things out of GPT-4: different styles, different sets of assumptions. We’ll make all that possible, and then also the ability to have it use your own data. The ability to know about you, your email, your calendar, how you like appointments booked, connected to other outside data sources, all of that. Those will be some of the most important areas of improvement.
BILL GATES:In the basic algorithm right now, it’s just feed forward, multiply, and so to generate every new word, it’s essentially doing the same thing. I’ll be interested if you ever get to the point where, like in solving a complex math equation, you might have to apply transformations an arbitrary number of times, that the control logic for the reasoning may have to be quite a bit more complex than just what we do today.
SAM ALTMAN:At a minimum, it seems like we need some sort of adaptive compute. Right now, we spend the same amount of compute on each token, a dumb one, or figuring out some complicated math.
BILL GATES:Yes, when we say, "Do the Riemann hypothesis …"
SAM ALTMAN:That deserves a lot of compute.
BILL GATES:It’s the same compute as saying, "The."
SAM ALTMAN:Right, so at a minimum, we’ve got to get that to work. We may need much more sophisticated things beyond it.
BILL GATES:You and I were both part of a Senate Education Session, and I was pleased that about 30 senators came to that, and helping them get up to speed, since it’s such a big change agent. I don’t think we could ever say we did too much to draw the politicians in. And yet, when they say, "Oh, we blew it on social media, we should do better," – that is an outstanding challenge that there are very negative elements to, in terms of polarization. Even now, I’m not sure how we would deal with that.
SAM ALTMAN:I don’t understand why the government was not able to be more effective around social media, but it seems worth trying to understand as a case study for what they’re going to go through now with AI.
BILL GATES:It’s a good case study, and when you talk about the regulation, is it clear to you what sort of regulations would be constructed?
SAM ALTMAN:I think we’re starting to figure that out. It would be very easy to put way too much regulation on this space. You can look at lots of examples of where that’s happened before.But also, if we are right, and we may turn out not to be, but if we are right, and this technology goes as far as we think it’s going to go, it will impact society, geopolitical balance of power, so many things, that for these, still hypothetical, but future extraordinarily powerful systems – not like GPT-4, but something with 100,000 or a million times the compute power of that, we have been socialized in the idea of a global regulatory body that looks at those super-powerful systems, because they do have such global impact. One model we talk about is something like the IAEA. For nuclear energy, we decided the same thing. This needs a global agency of some sort, because of the potential for global impact. I think that could make sense. There will be a lot of shorter term issues, issues of what are these models allowed to say and not say? How do we think about copyright? Different countries are going to think about those differently and that’s fine.
BILL GATES:Some people think if there are models that are so powerful, we’re scared of them –the reason nuclear regulation works globally, is basically everyone, at least on the civilian side, wants to share safety practices, and it has been fantastic. When you get over into the weapons side of nuclear, you don’t have that same thing. If the key is to stop the entire world from doing something dangerous, you’d almost want global government, which today for many issues, like climate, terrorism, we see that it’s hard for us to cooperate. People even invoke U.S.-China competition to say why any notion of slowing down would be inappropriate. Isn’t any idea of slowing down, or going slow enough to be careful, hard to enforce?
SAM ALTMAN:Yes, I think if it comes across as asking for a slowdown, that will be really hard. If it instead says, "Do what you want, but any compute cluster above a certain extremely high-power threshold" – and given the cost here, we’re talking maybe five in the world, something like that –any cluster like that has to submit to the equivalent of international weapons inspectors. The model there has to be made available for safety audit, pass some tests during training, and before deployment. That feels possible to me. I wasn’t that sure before, but I did a big trip around the world this year, and talked to heads of state in many of the countries that would need to participate in this, and there was almost universal support for it. That’s not going to save us from everything. There are still going to be things that are going to go wrong with much smaller-scale systems, in some cases, probably pretty badly wrong. But I think that can help us with the biggest tier of risks.
BILL GATES:I do think AI, in the best case, can help us with some hard problems.
SAM ALTMAN:For sure.
BILL GATES:Including polarization because potentially that breaks democracy and that would be a super-bad thing. Right now, we’re looking at a lot of productivity improvement from AI, which isoverwhelmingly a very good thing. Which areas are you most excited about?
SAM ALTMAN:First of all, I always think it’s worth remembering that we’re on this long, continuous curve. Right now, we have AI systems that can do tasks. They certainly can’t do jobs, but they can do tasks, and there’s productivity gain there. Eventually, they will be able to do more things that we think of like a job today, and we will, of course, find new jobs and better jobs. I totally believe that if you give people way more powerful tools, it’s not just that they can work a little faster, they can do qualitatively different things. Right now, maybe we can speed up a programmer 3x. That’s about what we see, and that’s one of the categories that we’re most excited about it. It’s working super-well. But if you make a programmer three times more effective, it’s not just that they can do three times more stuff, it’s that they can – at that higher level of abstraction, using more of their brainpower – they can now think of totally different things. It’s like going from punch cards to higher level languages didn’t just let us program a little faster, it let us do these qualitatively new things. We’re really seeing that.
As we look at these next steps of things that can do a more complete task, you can imagine a little agent that you can say, "Go write this whole program for me, I’ll ask you a few questions along the way, but it won’t just be writing a few functions at a time." That’ll enable a bunch of new stuff. And then again, it’ll do even more complex stuff. Someday, maybe there’s an AI where you can say, "Go start and run this company for me." And then someday, there’s maybe an AI where you can say, "Go discover new physics." The stuff that we’re seeing now is very exciting and wonderful, but I think it’s worth always putting it in context of this technology that, at least for the next five or ten years, will be on a very steep improvement curve. These are the stupidest the models will ever be.
Coding is probably the single area from a productivity gain we’re most excited about today. It’s massively deployed and at scaled usage at this point. Healthcare and education are two things that are coming up that curve that we’re very excited about too.
BILL GATES:The thing that is a little daunting is, unlike previous technology improvements, this one could improve very rapidly, and there’s kind of no upper bound. The idea that it achieves human levels on a lot of areas of work, even if it’s not doing unique science, it can do support calls and sales calls. I guess you and I do have some concern, along with this good thing, that it’ll force us to adapt faster than we’ve had to ever before.
SAM ALTMAN:That’s the scary part. It’s not that we have to adapt. It’s not that humanity is not super-adaptable. We’ve been through these massive technological shifts, and a massive percentage of the jobs that people do can change over a couple of generations, and over a couple of generations, we seem to absorb that just fine. We’ve seen that with the great technological revolutions of the past. Each technological revolution has gotten faster, and this will be the fastest by far. That’s the part that I find potentially a little scary, is the speed with which society is going to have to adapt, and that the labor market will change.
BILL GATES:One aspect of AI is robotics, or blue-collar jobs, when you get hands and feet that are at human-level capability. The incredible ChatGPT breakthrough has kind of gotten us focused on the white-collar thing, which is super appropriate, but I do worry that people are losing the focus on the blue-collar piece. So how do you see robotics?
SAM ALTMAN:Super-excited for that. We started robots too early, so we had to put that project on hold. It was hard for the wrong reasons. It wasn’t helping us make progress with the difficult parts of the ML research. We were dealing with bad simulators and breaking tendons and things like that. We also realized more and more over time that we first needed intelligence and cognition, and then we could figure out how to adapt it to physicality. It was easier to start with that with the way we built these language models. But we have always planned to come back to it.
We’ve started investing a little bit in robotics companies. On the physical hardware side, there’s finally, for the first time that I’ve ever seen, really exciting new platforms being built there. At some point, we will be able to use our models, as you were saying, with their language understanding and future video understanding, to say, "All right, let’s do amazing things with a robot."
BILL GATES:If the hardware guys who’ve done a good job on legs actually get the arms, hands, fingers piece, and then we couple it, and it’s not ridiculously expensive, that could change the job market for a lot of the blue-collar type work, pretty rapidly.
SAM ALTMAN:Yes. Certainly, the prediction, the consensus prediction, if we rewind seven or ten years, was that the impact was going to be blue-collar work first, white-collar work second, creativity maybe never, but certainly last, because that was magic and human.
Obviously, it’s gone exactly the other direction. I think there are a lot of interesting takeaways about why that happened. Creative work, the hallucinations of the GPT models is a feature, not a bug. It lets you discover some new things. Whereas if you’re having a robot move heavy machinery around, you’d better be really precise with that. I think this is just a case of you’ve got to follow where technology goes. You have preconceptions, but sometimes the science doesn’t want to go that way.
BILL GATES:So what application on your phone do you use the most?
SAM ALTMAN:Slack.
BILL GATES:Really?
SAM ALTMAN:Yes. I wish I could say ChatGPT.
BILL GATES:[laughs] Even more than e-mail?
SAM ALTMAN:Way more than e-mail. The only thing that I was thinking possibly was iMessages, but yes, more than that.
BILL GATES:Inside OpenAI, there’s a lot of coordination going on.
SAM ALTMAN:Yes. What about you?
BILL GATES:It’s Outlook. I’m this old-style e-mail guy, either that or the browser, because, of course, a lot of my news is coming through the browser.
SAM ALTMAN:I didn’t quite count the browser as an app. It’s possible I use it more, but I still would bet Slack. I’m on Slack all day.
BILL GATES:Incredible.
BILL GATES:Well, we’ve got a turntable here. I asked Sam, like I have for other guests, to bring one of his favorite records. So, what have we got?
SAM ALTMAN:I brought The New Four Seasons - Vivaldi Recomposed by Max Richter. I like music with no words for working. That had the old comfort of Vivaldi and pieces I knew really well, but enough new notes that it was a totally different experience. There are pieces of music that you form these strong emotional attachments to, because you listened to them a lot in a key period of your life. This was something that I listened to a lot while we were starting OpenAI.
I think it’s very beautiful music. It’s soaring and optimistic, and just perfect for me for working. I thought the new version is just super great.
BILL GATES:Is it performed by an orchestra?
SAM ALTMAN:It is. The Chineke! Orchestra.
BILL GATES:Fantastic.
SAM ALTMAN:Should I play it?
BILL GATES:Yes, let’s.
[music – "The New Four Seasons – Vivaldi Recomposed: Spring 1" by Max Richter]
SAM ALTMAN:This is the intro to the sound we’re going for.
[music]
BILL GATES:Do you wear headphones?
SAM ALTMAN:I do.
BILL GATES:Do your colleagues give you a hard time about listening to classical music?
SAM ALTMAN:I don’t think they know what I listen to, because I do wear headphones. It’s very hard for me to work in silence. I can do it, but it’s not my natural state.
BILL GATES:It’s fascinating. Songs with words, I agree, I would find that distracting, but this is more of a mood type thing.
SAM ALTMAN:Yes, and I have it quiet. I can’t listen to loud music either, but it’s just somehow always what I’ve done.
BILL GATES:It’s fantastic. Thanks for bringing it.
[music fades]
BILL GATES:Now, with AI, to me, if you do get to the incredible capability, AGI, AGI+, there are three things I worry about. One is that a bad guy is in control of the system. If we have good guys who have equally powerful systems that hopefully minimizes that problem. There’s the chance of the system taking control. For some reasons, I’m less concerned about that. I’m glad other people are. The one that sort of befuddles me is human purpose. I get a lot of excitement that, hey, I’m good at working on malaria, and malaria eradication, and getting smart people and applying resources to that. When the machine says to me, "Bill, go play pickleball, I’ve got malaria eradication.You’re just a slow thinker," then it is a philosophically confusing thing. How do you organize society? Yes, we’re going to improve education, but education to do what, if you get to this extreme, which we still have a big uncertainty. For the first time, the chance that might come in the next 20 years is not zero.
SAM ALTMAN:There’s a lot of psychologically difficult parts of working on the technology, but this is for me, the most difficult, because I also get a lot of satisfaction from that.
BILL GATES:You have real value added.
SAM ALTMAN:In some real sense, this might be the last hard thing I ever do.
BILL GATES:Our minds are so organized around scarcity; scarcity of teachers and doctors and good ideas that, partly, I do wonder if a generation that grows up without that scarcity will find the philosophical notion of how to organize society and what to do. Maybe they’ll come up with a solution. I’m afraid my mind is so shaped around scarcity, I even have a hard time thinking of it.
SAM ALTMAN:That’s what I tell myself too, and it’s what I truly believe, that although we are giving something up here, in some sense, we are going to have things that are smarter than us. If we can get into this world of post-scarcity, we will find new things to do. They will feel very different. Maybe instead of solving malaria, you’re deciding which galaxy you like, and what you’re going to do with it. I’m confident we’re never going to run out of problems, and we’re never going to run out of different ways to find fulfilment and do things for each other and understand how we play our human games for other humans in this way that’s going to remain really important. It is going to be different for sure, but I think the only way out is through. We have to go do this thing. It’s going to happen. This is now an unstoppable technological course. The value is too great. And I’m pretty confident, very confident, we’ll make it work, but it does feel like it’s going to be so different.
BILL GATES:The way to apply this to certain current problems, like getting kids a tutor and helping to motivate them, or discovering drugs for Alzheimer’s, I think it’s pretty clear how to do that. Whether AI can help us go to war less, or be less polarized; you’d think that as you drive intelligence up, not being polarized is kind of common sense, and not having war is common sense, but I do think a lot of people would be skeptical. I’d love to have people working on the hardest human problems, like whether we get along with each other. I think that would be extremely positive, if we thought the AI could contribute to humans getting along with each other.
SAM ALTMAN:I believe that it will surprise us on the upside there. The technology will surprise us with how much it can do. We’ve got to find out and see, but I’m very optimistic. I agree with you, what a contribution that would be.
BILL GATES:In terms of equity, technology is often expensive, like a PC or an Internet connection, and it takes time to come down in cost. I guess for the costs of running these AI systems, it looks pretty good that the cost per evaluation is going to come down a lot?
SAM ALTMAN:It’s come down an enormous amount already. GPT-3, which is the model we’ve had out the longest and the most time to optimize, in the three and a little bit years that it has been out, we’ve been able to bring the cost down by a factor of 40. For three years’ time, that’s a pretty good start. For 3.5, we’ve brought it down, I would bet, close to 10 at this point. Four is newer, so we haven’t had as much time to bring the cost down there, but we will continue to bring the cost down. I think we are on the steepest curve of cost reduction ever of any technology I know, way better than Moore’s Law. It’s not only that we figured out how to make the models more efficient, but also, as we understand the research better, we can get more knowledge, we can get more ability into a smaller model. I think we are going to drive the cost of intelligence down to so close to zero that it will be this before-and-after transformation for society.
Right now, my basic model of the world is cost of intelligence, cost of energy. [Bill laughs] Those are the two biggest inputs to quality of life, particularly for poor people, but overall. If you can drive both of those way down at the same time, the amount of stuff you can have, the amount of improvement you can deliver for people, it’s quite enormous. We are on a curve, at least for intelligence, we will really, really deliver on that promise. Even at the current cost, which again, this is the highest it will ever be and much more than we want, for 20 bucks a month, you get a lot of GPT-4 access, and way more than 20 bucks’ worth of value. We’ve come down pretty far.
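As a rough back-of-the-envelope sketch (my arithmetic, not part of the conversation), the "40x in a little over three years" figure for GPT-3 can be annualized and set against the classic Moore’s Law pace of roughly 2x every two years:

```python
# Annualize the cost-reduction figures mentioned above.
# Assumed inputs: GPT-3 inference cost fell ~40x over ~3 years (from the
# conversation); Moore's Law is taken as ~2x improvement every 2 years.

def annual_factor(total_factor: float, years: float) -> float:
    """Convert an overall improvement factor over a period into a per-year factor."""
    return total_factor ** (1 / years)

gpt3_annual = annual_factor(40, 3)   # roughly 3.42x cheaper per year
moore_annual = annual_factor(2, 2)   # roughly 1.41x per year

print(f"GPT-3 cost reduction: ~{gpt3_annual:.2f}x per year")
print(f"Moore's Law pace:     ~{moore_annual:.2f}x per year")
```

On these assumed numbers, the per-year pace of the cost decline is more than double Moore’s Law’s, which is the sense in which the curve is "way better."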
BILL GATES:What about the competition? Is that kind of a fun thing that many people are working on this all at once?
SAM ALTMAN:It’s both annoying and motivating and fun. [Bill laughs] I’m sure you’ve felt similarly. It does push us to be better and do things faster. We are very confident in our approach. We have a lot of people that I think are skating to where the puck was, and we’re going to where the puck is going. It feels all right.
BILL GATES:I think people would be surprised at how small OpenAI is. How many employees do you have?
SAM ALTMAN:About 500, so we’re a little bigger than before.
BILL GATES:But that’s tiny. [laughs] By Google, Microsoft, Apple standards –
SAM ALTMAN:It’s tiny. We have to not only run the research lab, but now we have to run a real business and two products.
BILL GATES:The scaling of all your capacities, including talking to everybody in the world, and listening to all those constituencies, that’s got to be fascinating for you right now.
SAM ALTMAN:It’s very fascinating.
BILL GATES:Is it mostly a young company?
SAM ALTMAN: It’s an older company than average.
BILL GATES:Okay.
SAM ALTMAN:It’s not a bunch of 24-year-old programmers.
BILL GATES:It’s true, my perspective is warped, because I’m in my 60s. I see you, and you’re younger, but you’re right. You have a lot of people in their 40s.
SAM ALTMAN:Thirties, 40s, 50s.
BILL GATES:It’s not the early Apple, Microsoft, which we were really kids.
SAM ALTMAN:It’s not, and I’ve reflected on that. I think companies have gotten older in general, and I don’t know quite what to make of that. I think it’s somehow a bad sign for society, but I tracked this at YC. The best founders have trended older over time.
BILL GATES:That’s fascinating.
SAM ALTMAN:Then in our case, it’s a little bit older than the average, even still.
BILL GATES:You got to learn a lot by your role at Y Combinator, helping these companies. I guess that was good training for what you’re doing now. [laughs]
SAM ALTMAN:That was super helpful.
BILL GATES:Including seeing mistakes.
SAM ALTMAN:Totally. OpenAI did a lot of things that are very against the standard YC advice. We took four and a half years to launch our first product. We started the company without any idea of what a product would be. We were not talking to users. I still don’t recommend that for most companies, but having learned the rules and seen them at YC made me feel like I understood when and how and why we could break them. We really did things that were just so different than any other company I’ve seen.
BILL GATES:The key was the talent that you assembled, and letting them be focused on the big, big problem, not some near-term revenue thing.
SAM ALTMAN:I think Silicon Valley investors would not have supported us at the level we needed, because we had to spend so much capital on the research before getting to the product. We just said, "Eventually the model will be good enough that we know it’s going to be valuable to people." But we were very grateful for the partnership with Microsoft, because this kind of way-ahead-of-revenue investing is not something that the venture capital industry is good at.
BILL GATES:No, and the capital costs were reasonably significant, almost at the edge of what venture would ever be comfortable with.
SAM ALTMAN:Maybe past.
BILL GATES:Maybe past. I give Satya incredible credit for thinking through ‘how do you take this brilliant AI organization, and couple it into the large software company?’ It has been very, very synergistic.
SAM ALTMAN:It’s been wonderful, yes. You really touched on it, though, and this was something I learned from Y Combinator. We said, we are going to get the best people in the world at this. We are going to make sure that we’re all aligned at where we’re going and this AGI mission. But beyond that, we’re going to let people do their thing. We’re going to realize it’s going to go through some twists and turns and take a while.
We had a theory that turned out to be roughly right, but a lot of the tactics along the way turned out to be super wrong. We just tried to follow the science.
BILL GATES:I remember going and seeing the demonstration and thinking, okay, what’s the path to revenue on that one? What is that like? In these frenzied times, you’re still holding on to an incredible team.
SAM ALTMAN:Yes. Great people really want to work with great colleagues.
BILL GATES:That’s an attractive force.
SAM ALTMAN:There’s a deep center of gravity there. Also, it sounds so cliche, and every company says it, but people feel the mission so deeply. Everyone wants to be in the room for the creation of AGI.
BILL GATES:It must be exciting. I can see the energy when you come up and blow me away again with the demos; I’m seeing new people, new ideas. You’re continuing to move at a really incredible speed.
SAM ALTMAN:What’s the piece of advice you give most often?
BILL GATES:There are so many different forms of talent. Early in my career, I thought, just pure IQ, like engineering IQ, and of course, you can apply that to financial and sales. That turned out to be so wrong. Building teams where you have the right mix of skills is so important. Getting people to think, for their problem, how do they build that team that has all the different skills, that’s probably the one that I think is the most helpful. Yes, telling kids, you know, math, science is cool, if you like it, but it’s that talent mix that really surprised me.
What about you? What advice do you give?
SAM ALTMAN:It’s something about how most people are mis-calibrated on risk. They’re afraid to leave the soft, cushy job behind to go do the thing they really want to do, when, in fact, if they don’t do that, they look back at their lives like, "Man, I never went to go start this company I wanted to start, or I never tried to go be an AI researcher." I think that’s much riskier.
Related to that, being clear about what you want to do, and asking people for what you want, goes a surprisingly long way. A lot of people get trapped spending their time in ways they don’t want to. Probably the most frequent advice I give is to try to fix that one way or another.
BILL GATES:If you can get people into a job where they feel they have a purpose, it’s more fun. Sometimes that’s how they can have gigantic impact.
SAM ALTMAN:That’s for sure.
BILL GATES:Thanks for coming. It was a fantastic conversation. In the years ahead, I’m sure we’ll get to talk a lot more, as we try to shape AI in the best way possible.
SAM ALTMAN:Thanks a lot for having me. I really enjoyed it.
[music]
BILL GATES:Unconfuse Me is a production of the Gates Notes. Special thanks to my guest today, Sam Altman.
BILL GATES:Remind me what your first computer was?
SAM ALTMAN:A Mac LC2.
BILL GATES:Nice choice.
SAM ALTMAN:It was a good one. I still have it; it still works.
This article is reprinted with permission from Bill Gates (ID: gatesnotes). For further reprinting, please contact the original author.