Our Distinguished Scientist John Jumper dropped out of his first PhD, but it led him to AI and biology – and a new doctorate along the way. 🧪 In the latest episode of our podcast with Hannah Fry, he shares his journey from his studies and helping develop #AlphaFold to being awarded @TheNobelPrize. As we continue our work, John explains how confidence in AI tools for science can come from seeing them perform reliably – like how ancient builders constructed safe bridges without knowing the equations of their engineering – and that we don’t yet need to perfectly understand each step for it to be useful. Hear their full discussion on our podcast ↓ https://goo.gle/3KkIGpy

This is a powerful analogy. So many of us get stuck trying to engineer the “perfect plan” before taking a step, especially in fields as complex as AI and science. 🤔 But progress rarely comes from perfect planning. It comes from momentum. You jump in, get real-world feedback, iterate, and let intuition help guide you with each rep. What we’re seeing with AI right now, especially breakthroughs like AlphaFold, is proof of that: the speed of improvement isn’t because we solved everything upfront. It’s because we started, learned fast, and kept moving. 📈 As someone transitioning into tech, that mindset has been huge for me too. Action builds clarity. Momentum builds mastery.

John’s journey is a reminder that breakthroughs rarely follow a straight line. Seeing AI tools earn trust the same way old builders trusted their bridges, through consistent results, is a powerful analogy. Excited to hear the full discussion.

Incredible story. John’s journey shows how nonlinear paths often lead to the most transformative breakthroughs. AlphaFold is a perfect example of how persistence + interdisciplinary thinking can change an entire field.

Just as humanity harnessed nature through experience before fully decoding its laws, we must embrace AI with the same empirical mindset. Applied to hardware architecture, this means trusting verified AI performance as the key to bridging digital intelligence with physical reality. This approach will accelerate the true synergy between AI and humanity.

John’s journey is such a powerful reminder that breakthroughs rarely follow a straight line. His analogy about ancient builders is spot on. Consistent, reliable performance often earns trust long before full theoretical understanding catches up. AlphaFold is a perfect example of how usefulness can lead the way for science.

This story highlights how unconventional paths can spark major breakthroughs when curiosity stays intact. It also shows how scientific trust grows from consistent performance long before every mechanism is fully understood.

John Jumper Everyone celebrates breakthroughs in AI that promise to reshape human history. But almost no one asks the question beneath the achievement: Breakthroughs toward what future, and for whom? Confidence in tools is powerful. But confidence without understanding has another name in history: Faith without accountability. Ancient builders constructed bridges without equations, but they were building for their own children with their own hands in their own communities. Today’s systems are built by a few, for billions of people who never asked for them, and who will bear the consequences long after the applause fades. We marvel at what’s newly possible. But possibility has a shadow: If we don’t understand the forces we unleash, are we building bridges… or constructing something we can’t escape? So here’s the question no one onstage ever wants spoken aloud: Are we advancing human life, or replacing it? Because history won’t measure the elegance of the models, or the brilliance of the equations. It will measure whether humanity survived the outcomes. We’ve built tools we don’t fully understand before. The world still carries the scars.

Google DeepMind AI in science: an exciting topic. I can understand that LLM-based AI can contribute creative ideas. But in this particular area, an AI tool must not behave emergently. Deterministic AI (it’s not called cognitive AI for nothing) is not responsible for creative ideas, but for evaluating and deterministically validating possibilities – in other words, it is a tool that is fully auditable and reproducible. https://www.linkedin.com/pulse/cognitive-ai-new-technical-class-take-back-your-data-ennbf

Another way to look at it is that we use our brain every day even though we still do not fully understand how it works.

If you’re waiting to understand every step before you start, you’ve already lost. Progress belongs to the ones who move while everyone else is overthinking.
