To get smarter, traditional AI models rely on exponential increases in the scale of data and computing power. Noam Brown, a leading research scientist at OpenAI, presents a potentially transformative shift in this paradigm. He reveals his work on OpenAI’s new o1 model, which focuses on slower, more deliberate reasoning — much like how humans think — in order to solve complex problems. (Recorded at TEDAI San Francisco on October 22, 2024)
If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas: https://ted.com/membership
Follow TED!
X: https://twitter.com/TEDTalks
Instagram: https://www.instagram.com/ted
Facebook: https://facebook.com/TED
LinkedIn: https://www.linkedin.com/company/ted-conferences
TikTok: https://www.tiktok.com/@tedtoks
The TED Talks channel features talks, performances and original series from the world’s leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design — plus science, business, global issues, the arts and more. Visit https://TED.com to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.
Watch more: https://go.ted.com/noambrown
TED’s videos may be used for non-commercial purposes under a Creative Commons License, Attribution–NonCommercial–NoDerivatives (CC BY-NC-ND 4.0 International) and in accordance with our TED Talks Usage Policy: https://www.ted.com/about/our-organization/our-policies-terms/ted-talks-usage-policy. For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request at https://media-requests.ted.com
#TED #TEDTalks #ai
22 Comments
@Gobberfisch
October 22, 2024
@kriss6997
Timestamps:
00:09 – AI's progress relies on scaling data and compute resources, not just algorithms.
01:46 – AI can learn complex strategies, as demonstrated by poker research.
03:22 – Human decision-making relies on reflective thinking unlike instant AI responses.
04:45 – Extending thinking time in AI significantly enhances performance.
06:14 – AI's success in games comes from strategic thinking time.
07:38 – Increased thinking time significantly enhances AI performance.
09:12 – AI models benefit from additional thinking time to enhance their performance.
10:38 – AI advancements are ongoing and can significantly impact critical issues.
@gunnerandersen4634
I don't know why, but this guy seems to be selling bullshit 😊 Ilya would beat you in the ASI race, wanna bet 😎?
@primingdotdev
In conclusion: buy my stock.
@tkonan
What a horrifically sensationalist promo video, with an outrageous clickbait tagline on the thumbnail: "…thinks like a human". How insulting to humans everywhere, as if it's fully understood how humans think anyway.
Don't ask the audience whether they'd pay a few dollars for a cancer cure – bring the cure with you.
@tomarmstrong1281
I will only start to fear AI when one has been built with a limbic system.
@Paul-rs4gd
There are so many ways AI can continue to improve. I think the time will come when 'thinking' will no longer produce intermediate results in human language, but rather a more efficient use of tokens (exploiting them as general high-dimensional vectors). I believe this has already started to happen when AIs train themselves via reinforcement learning. Of course, the intermediate thinking will no longer be comprehensible, and it's a nightmare for AI safety – but it may be much more effective.
I also think there is more parallelism to come. An LLM might well use multiple threads of system 2 thinking and combine their output into the final answer.
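A minimal sketch (not from the talk) of the parallel "system 2" idea this comment describes: run several independent reasoning passes and combine their outputs by majority vote, in the spirit of self-consistency sampling. The generate() function is a hypothetical stand-in for a model call, not any real API.

```python
# Hypothetical sketch: several independent "system 2" reasoning passes run in
# parallel, with the final answer chosen by majority vote over their outputs.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
import random


def generate(prompt: str, seed: int) -> str:
    """Stand-in for a slow, stochastic reasoning pass by a language model."""
    rng = random.Random(seed)
    # Pretend the model sometimes reasons its way to different final answers.
    return rng.choice(["42", "42", "41"])


def parallel_system2(prompt: str, n_threads: int = 8) -> str:
    """Run n independent reasoning threads and return the most common answer."""
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        answers = list(pool.map(lambda s: generate(prompt, s), range(n_threads)))
    # Self-consistency-style aggregation: majority vote over final answers.
    return Counter(answers).most_common(1)[0][0]


if __name__ == "__main__":
    print(parallel_system2("What is 6 x 7?"))
```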
@DatPuz
Not to dismiss this work, because it is interesting and worthwhile, but beating humans at games with extremely simple sets of rules, a simple objective function, and only a few inputs is never going to lead us to intelligent systems. I've been using o1 daily as a software engineer. It's not perceptibly better than ChatGPT was a year ago (GPT-4, I think it was then). They just made it slower at producing output while it pretends to "think," only to have it generate the same low-quality boilerplate bullshit it did before. If this is saving anyone days' worth of work, they are quite literally useless in their field.
@DatPuz
AI plateaued like a year ago, though, so he may want to connect whatever he is peddling to reality. The plateau is not measured by how much it costs to train AI, so why show a graph of skyrocketing costs? The thing that has already plateaued is the quality of its output.
@diliupg
😂😂😂😂😂😂
@legatodi3000
This guy likes to gamble, and his talk is trying to sell what his company is selling.
Well, I’m definitely buying this!
@kellynatalytwine
🎉 The only profitable application for AI is warfare: it's a weapon. Build and deliver the first Terminator to the battlefield before our enemies do. America must prevail. 😮😮
@TheDaimond8V
Grok3 proved this again: there's no wall. The key now is to achieve AGI as quickly as possible, then deploy millions of them to evolve themselves, using the ultimate version of them to drive groundbreaking scientific research.
@MotorDetroit
Please make an AI that calls out people on their baseless political claims. That's the AI we need right now. Who cares if it can make better solar panels or vaccines when people won't take them.
@aidanthompson5053
3:30
@motorheadmaximus
Neil?
@ismaelplaca244
Can't trust an OpenAI employee who's looking for funding.
@Moocow4576
Quantum computing will help if it can handle and work through data really quickly. However, the tech right now is still in its infancy. It can replace some jobs, but overall it's weak in its capabilities.
@alexandermoody1946
Artificial intelligence may not be affected by a wall if we can use acroprops to support the weight whilst we install a lintel and doorway.
Of course there are caveats: the difference between peering through windows and building working doorways requires a different kind of societal model, one that knows and interprets how to create usable data long term. The data-for-free model that was and is currently in use will have to end before we can walk through the doorway we build.
@doitforjohnny3502
Bro speaks like he's an AI 😂
@awsmith1007
It's literally still plateauing; we can see the graphs. They're literally on a log scale.
@bogusphone8000
In all of this, has anyone taken the time to ask if we should? While technical advancement is a positive, when it begins to offset the human element, we are wise to proceed cautiously.
As we aggregate capabilities and services, competition and human engagement will decrease. The champions of this change encourage the exploration of new markets and opportunities, and rightfully so. However, they rarely stop to consider the cost and impact of that shift: the time required to acquire new skills, the unemployment during the transition, the inability of some to adapt.
A society does well to have multiple opportunities, and tiers of those opportunities, for each member to engage, work, and relish the contribution they make.
A realistic anecdote at a recent tech conference: A vendor was presenting their AI integration and how it would surface key insights, accelerate data gathering and searching, and truly drive on-demand reporting. This was received with great celebration by the attendees. On this high, the presenter then stated "…and next we are working on AI to AI integration so that multiple systems can identify, triage, and resolve numerous issues and human needs." The room went silent. This announcement touched numerous attendees and their jobs. They saw the future where their expertise in the platform is no longer needed. In one moment, AI went from the great pinnacle to the greatest threat.