Episode #84: Three Breakthroughs: AlphaFold 3

Tech Optimist Podcast — Tech, Entrepreneurship, and Innovation

Written by Alumni Ventures

2 min read

Mike Collins and Naren Ramaswamy discuss three transformative AI advancements, including AI scaling debates, AlphaFold 3’s impact on molecular biology, and the development of AI agents that could revolutionize productivity and trust.



This week on the Tech Optimist podcast, join Alumni Ventures’ Mike Collins and Naren Ramaswamy as they spotlight three transformative innovations:

  1. The AI Scaling Debate – Whether more compute alone will keep producing better models, or whether returns will diminish and progress will depend on other breakthroughs.
  2. AlphaFold 3 – Its impact on molecular biology and drug discovery, now that DeepMind has open-sourced the model for academic use.
  3. AI Agents – Their promise to redefine productivity, and the trust users will need to place in them.

This episode offers an inspiring look at how these innovations are shaping the future of science, technology, and human collaboration.

Watch Time ~29 minutes

The show is produced by Alumni Ventures, which has been recognized as a “Top 20 Venture Firm” by CB Insights (’24) and as the “#1 Most Active Venture Firm in the US” by Pitchbook (’22 & ’23).

READ THE FULL EPISODE TRANSCRIPT

Creators and Guests

HOST

Mike Collins
CEO and Co-Founder, Alumni Ventures

Mike has been involved in almost every facet of venturing, from angel investing to venture capital, new business and product launches, and innovation consulting. He is currently CEO of Alumni Ventures Group, the managing company for our fund, and launched AV’s first alumni fund, Green D Ventures, where he oversaw the portfolio as Managing Partner and is now Managing Partner Emeritus. Mike is a serial entrepreneur who has started multiple companies, including Kid Galaxy, Big Idea Group (partially owned by WPP), and RDM. He began his career at VC firm TA Associates. He holds an undergraduate degree in Engineering Science from Dartmouth and an MBA from Harvard Business School.

GUEST

Naren Ramaswamy
Senior Principal, Spike & Deep Tech Fund, Alumni Ventures

Naren combines a technical engineering background with experience at startups and VC firms. Before joining AV, he worked with the investing team at venture firm Data Collective (DCVC) looking at frontier tech deals. Before that, he was a Program Manager at Apple and Tesla and has worked for multiple consumer startups. Naren received a BS and MS in mechanical engineering from Stanford University and an MBA from the Stanford Graduate School of Business. In his free time, he enjoys teaching golf to beginners and composing music.


Important Disclosure Information

The Tech Optimist Podcast is for informational purposes only. It is not personalized advice and is neither an offer to sell, nor a solicitation of an offer to purchase, any security. Such offers are made only to eligible investors, pursuant to the formal offering documents of appropriate investment funds. Please consult with your advisors before making any investment with Alumni Ventures. For more information, please see here.

One or more investment funds affiliated with AV may have invested, or may in the future invest, in some of the companies featured on the Podcast. This circumstance constitutes a conflict of interest. Any testimonials or endorsements regarding AV on the Podcast are made without compensation but the providers may in some cases have a relationship with AV from which they benefit. All views expressed on the Podcast are the speaker’s own. Any testimonials or endorsements expressed on the Podcast do not represent the experience of all investors or companies with which AV invests or does business.

The Podcast includes forward-looking statements, generally consisting of any statement pertaining to any issue other than historical fact, including without limitation predictions, financial projections, the anticipated results of the execution of any plan or strategy, the expectation or belief of the speaker, or other events or circumstances to exist in the future. Forward-looking statements are not representations of actual fact, depend on certain assumptions that may not be realized, and are not guaranteed to occur. Any forward-looking statements included in this communication speak only as of the date of the communication. AV and its affiliates disclaim any obligation to update, amend, or alter such forward-looking statements whether due to subsequent events, new information, or otherwise.

Episode Transcript

    Samantha Herrick:
    Attention listeners, please prepare for entry into the Tech Optimist Podcast by Alumni Ventures. This is your gateway to the cutting edge of innovation and visionary ideas shaping our future. Keep your mind open, your curiosity sharp, and your optimism fully engaged.

    Our guide today—me, my name is Samantha Herrick—will take us through the thrilling twists and turns of groundbreaking technologies and inspiring founders. The future starts now.

    Naren Ramaswamy:
    Imagine just every scientist in the world now armed with this resource—what’s possible in the field of healthcare and biology and drug development, drug discovery. It’s staggering.

    Samantha Herrick:
    This is Naren Ramaswamy, Senior Principal at Alumni Ventures.

    Mike Collins:
    We’re going to be really impacting almost all white-collar work in the ’25–’26 timeframe, and probably by 2030 have something that far exceeds human capabilities.

    Samantha Herrick:
    And this is Founder and CEO Mike Collins.

    And hello—that’s me. My name is Samantha Herrick, and I am the producer and host for this show.

    Welcome back to The Tech Optimist, everyone. I am your guide, Sam Herrick, and I’m here to connect the dots and unpack the tech shaping our world.

    Today’s episode is packed with innovation, debate, and a glimpse into what the future holds for AI. We’re joined by two incredible legends here behind the scenes at Alumni Ventures, Naren Ramaswamy and Mike Collins. Together, they’re going to explore transformative breakthroughs in artificial intelligence today. It’s a very AI-focused episode, which is fascinating.

    We’re going to dive into the concept of AI scaling—scaling in terms of how different models can be used for different things. I’ll let Mike and Naren take it from there, but that’s a general summary of what’s coming up.

    We’re also diving into AlphaFold 3, Google DeepMind’s latest milestone in protein modeling, and what its newly open-source capabilities mean for researchers in the industry. From AI-scaling challenges to game-changing advancements, we’re going to uncover how innovations like these redefine science and technology.

    Then the conversation turns to AI agents—the possibilities they unlock, the risks they carry, and the different visions for how they might shape the future.

    Mike and Naren don’t shy away from any controversial viewpoints, so you can expect some very frank discussion about what comes next for humanity in the age of AI.

    I’ll be here adding some extra context along the way, but for now, I’m going to turn the mics over to Mike and Naren. Really quick, we’ll hop into a disclaimer and an ad before we jump into the rest of the show. Hang tight, we’ll be right back.

    Speaker 4:
    Do you have a venture capital portfolio of cutting-edge startups? Without one, you could be missing out on enormous value creation and a more diversified personal portfolio. Alumni Ventures, ranked a top-20 VC firm by CB Insights, is the leading VC firm for individual investors. Believe in investing in innovation? Visit AV.VC/foundation to get started.

    Samantha Herrick:
    As a reminder, the Tech Optimist podcast is for informational purposes only. It is not personalized advice, and it is not an offer to buy or sell securities. For additional important details, please see the text description accompanying this episode.

    Mike Collins:
    Hi, welcome to Tech Optimist. This is where me and my teammates get together and talk about really cool things going on in venture capital, technology, and innovation. My name is Mike Collins. I’m the Founder and CEO of Alumni Ventures. I’m joined by Naren Ramaswamy today. Hi, Naren.

    Naren Ramaswamy:
    Hey, Mike. How are you?

    Mike Collins:
    I’m great. So, I’m going to kick it off today. There’s been a lot of heated discussion around a topic called AI scaling. My understanding is that there are two camps: one that basically says in order for AIs to become more powerful, we just need to throw more computers at it—the more, the better. Sam Altman and others are in that camp. It’s basically: if we throw enough compute, you don’t have to think through much else. The computer, the AI, will figure out everything. That’s how we get better models, more information, more utility, and ultimately artificial general intelligence.

    There’s another camp that’s equally convinced we’re going to asymptotically approach diminishing returns. Not that they’re saying it won’t happen, but that breakthroughs will come by improving other vectors of innovation. Way above my pay grade to say what those would be, but those things have implications for the rest of us, particularly around predictability and timing.

    So, if it’s just more compute—that we know we can measure—we can anticipate with pretty strong accuracy that we’ll be in the era of AI agents sometime in 2025. We’ll be really impacting almost all white-collar work in the ’25–’26 timeframe and probably, by 2030, have something that far exceeds human capabilities, to the point where it starts to get scary around 2029–2030.

    Then there’s the other camp that says this is more like inventing: you plateau out, then you have a breakthrough, make a ton of progress, and plateau again. This approach is a little less predictable and harder to time. Generally, it suggests that reaching AGI will take longer, though breakthroughs could accelerate progress unexpectedly.

    But again, I just know this is a hot topic because AI has such profound implications for our society, for venture capital, and for innovation. I wanted to share that this debate is going on and wondered what you knew about it or had to say about it, Naren.

    Naren Ramaswamy:
    Thanks for bringing that up. I’ve been hearing chatter about this in Silicon Valley for the past couple of months. After talking to a few experts, it seems like the jury is still out on where we’ll end up—whether one approach will dominate. Many people believe there might be both approaches for different use cases.

    You’ll have this AGI-type model that Sam Altman is raising $7 trillion for—

    Mike Collins:
    Yeah, right.

    Naren Ramaswamy:
    —which, by the way, is twice the GDP of the UK. He’s raising that from the Middle East.

    Mike Collins:
    Yeah, just two UKs. We’ll start using that—

    Naren Ramaswamy:
    Just two UKs.

    Mike Collins:
    —as the unit of measure.

    Naren Ramaswamy:
    Right.

    Mike Collins:
    And China’s doing five UKs. Yeah.

    Naren Ramaswamy:
    You mentioned the productivity and timing implications, but I’m also thinking about the macroeconomic implications of moving dollars at that scale. That’s going to be one piece of it.

    But for more specific enterprise use cases—say, in legal tech—where you’re looking at a set of documents you need to query, you might actually do better with a smaller model that’s focused on that dataset rather than the entire internet. The larger the scope, the greater the risk of errors and hallucinations. So, people believe we’ll see both approaches depending on the use case, and I agree with that.

    Mike Collins:
    Yeah, I think that intuitively makes sense and matches my experience. There’s often a niche need where you’re trying to do something very specific to a customer, a workflow, a particular experience, or a specific way humans want to interact. That requires a narrower, more targeted AI—like a rifle shot tailored to a company, customer type, or product.

    You don’t need the “big God” model for that. You need strong models, but ones focused and trained for the job you’re trying to achieve. At the end of the day, you’re still in an economy where you’re solving a problem through a product or service.

    So, thinking reasonably about the timeframe—in 2025, I believe a lot of value creation will come from these targeted use cases. I had the fortune of studying with Clayton Christensen, and one of his key frameworks for marketing was to really focus on the customer problem and understand what job the customer is hiring you to do. That’s always been a powerful approach for me as an innovator and venture capitalist.

    If you apply AI to deliver that value to this group of people, it can be a huge amplifier. But a lot goes into the data, the integration, the human understanding, trust, and where human intervention is needed.

    Intuitively, I believe that in the short term we’ll see many more focused, business-specific language models, while the large, heavyweight models continue to exist.

    Even with OpenAI’s suite of models, there are times when I’ll use GPT-4 or GPT-4 with the Canvas feature for writing, but for analytics tasks I’ll often use their mini reasoning model—it’s fast and gets the job done. Using the big model for something like marketing sometimes feels like bringing a gun to a knife fight.

    Naren Ramaswamy:
    Over-engineering.

    Mike Collins:
    Overdone.

    Naren Ramaswamy:
    And who knows what the compute costs are for each response on the backend? People have said that OpenAI uses what’s called a mixture-of-experts architecture under the hood of GPT-4, where different specialized models are routed to different queries to optimize cost for them.

    Mike Collins:
    Yeah.

    Naren Ramaswamy:
    And, as you remember, they used to have that 25-query limit, which they’ve now lifted if you’re on the paid license. So, it’s different approaches to achieving the same outcome.

    But this is a good segue, Mike, to the topic I wanted to bring up, which is a more specialized model related to biology called AlphaFold. We’ve talked about AlphaFold before—it came out of Google DeepMind—and recently they announced AlphaFold 3. The big breakthrough here is that they open-sourced the model, which has great positive implications. Previously, it was not open source, and there was criticism about that limited access. Now DeepMind has released the source code and model weights specifically for academic use, which is great. Anyone interested in researching how proteins fold, and now also how they interact with each other, can access it. That’s the real breakthrough here.

    Samantha Herrick:
    Imagine a tool so transformative, it’s reshaping the future of molecular biology and drug discovery. That’s exactly what AlphaFold 3 from Google DeepMind brings to the table. Recently made open source for academic use, this advancement marks a new chapter in how researchers can explore the building blocks of life.

    We’ve talked about AlphaFold 3 and AlphaFold in general a few times on this show, but here’s a quick refresher: AlphaFold 3 isn’t just another update—it’s a leap forward. This iteration models proteins interacting with RNA, DNA, and even drugs, pushing boundaries that were once considered unreachable. This capability opens doors for breakthroughs in areas like drug discovery, where understanding protein interaction is essential.

    The decision to open source AlphaFold 3 follows significant criticism of the limited access researchers had to earlier versions. DeepMind responded by making the source code available to everyone for non-commercial use and allowing academic researchers to request the powerful model weights. This step ensures scientists can deeply study protein structures without barriers, as long as they’re working in a non-commercial setting.

    The timing of this release is also notable. Just as Demis Hassabis and John Jumper received the 2024 Nobel Prize in Chemistry for AlphaFold’s pioneering impact, the world now has broader access to the technology.

    DeepMind hasn’t completely given up commercial control, though. The company is balancing accessibility and innovation through its spinoff, Isomorphic Labs, while ensuring no unauthorized commercial use of the technology.

    AlphaFold 3 is more than just a tool—it’s a shift in how science collaborates, advances, and pushes the frontier of what’s possible. So, what does this mean for the future of medicine and molecular biology? Stick around as we explore how this game-changing tool could revolutionize research and innovation globally. Mike and Naren, take it away.

    Naren Ramaswamy:
    AlphaFold 2 was great at predicting protein folding, and that’s what led to Demis from DeepMind winning the Nobel Prize this year in chemistry. But what’s fascinating to me is applying the technology behind large language models to something like protein folding.

    Just for our listeners: AI excels at pattern matching within a set of rules. That’s what language is, but that’s also how nature operates. You have the laws of physics and biology, and DeepMind has done an amazing job teaching this model to understand them.

    Mike Collins:
    Yeah, biochemistry.

    Naren Ramaswamy:
    Exactly. Imagine every scientist in the world now armed with this resource. What’s possible in healthcare, biology, drug development, and drug discovery is staggering.

    Mike Collins:
    Yeah. This is unlocking how humans work. At a fundamental level, proteins, amino acids, the folding and interactions of these components have largely been trial and error. Most therapeutic drugs discovered to date were happy accidents—from penicillin to Ozempic to blood pressure meds turning into Viagra. These discoveries were just accidents.

    The ability of these models to turn that process into an engineering problem is a game changer. Instead of laborious lab experiments and incremental research into pathways and cycles, which is really slow, you can now create a simulated human model that truly understands the enormous complexity of these subsystems.

    This allows for simulating drugs and solutions at orders-of-magnitude greater efficiency and speed. Yes, at the end of the day, there’s still a physical world to interact with and rigorous scrutiny and regulation for testing. But if you can do all this work up front, it’s far more efficient and effective. We can make many more drugs available, reduce time to market, and lower the number of drugs that look promising in Phase 1 trials but fail later. That has huge implications.

    So, yes—DeepMind is doing fantastic work. AlphaFold 3 is great. But more importantly, it points to where this is all heading. Over the next decade or two, there’s huge optimism about understanding the human condition and disease.

    Naren Ramaswamy:
    And Mike, this knowledge previously would’ve been locked in the minds of a few biology experts somewhere in a lab. Now it’s available to the entire world at their fingertips. Just like the internet democratized access to information, this is democratizing expert-level biological insight. It’s amazing.

    Mike Collins:
    I mean, drug discovery—we’ve seen this time and time again—the way academics have often been incentivized is very siloed. A lot of great breakthroughs happen when there’s cross-pollination between labs and offices. I actually think much of the work with AlphaFold and DeepMind has been very collaborative. But I believe this also promises to change the incentive structure toward being more open source and sharing information and learnings. Hopefully, this will accelerate progress so that we become less siloed.

    Naren Ramaswamy:
    Yeah, absolutely.

    Mike Collins:
    And there’s the power of people clustering together, as well as computers.

    Samantha Herrick:
    Commercial break incoming. Sit tight—we’ll be back.

    Speaker 4:
    Exceptional value creation comes from solving hard problems. Alumni Ventures’ Deep Tech Fund is a portfolio of 20 to 30 ventures run by exceptional teams tackling huge opportunities in AI, space, energy, transportation, cybersecurity, and more. These game-changing ventures have strong lead venture investors and practical approaches to creating shareholder value. If you’re interested in investing in the future of deep tech, visit av.vc/deeptech to learn more.

    Mike Collins:
    And that takes us back into the current state of AI, with a lot of advancements I believe will be released in the next three to six months related to agents. For those not following closely, agents are essentially software that can act on your behalf. Beyond just writing copy or emails, they can actually go out and interact with websites—essentially doing tasks for you online.

    Think about Expedia or a B2B salesperson using Salesforce. There might be 19 different workflows or tasks performed before or after a customer meeting. All of that is interacting with tools and sites. These new systems will be able to do that autonomously on your behalf.

    This capability is already in testing and coming fast. I think it will unleash a wave of enthusiasm, backlash, and concern—similar to what we saw when GPT-3 and GPT-4 first launched.

    We’ve got Anthropic with Claude, which seems to have an early API. OpenAI’s technology is rumored to be called Operator or Operations. Google is actively developing something called Jarvis, and Apple is working to catch up with Siri, leveraging their distribution advantage to make Siri more proactive.

    There’s clearly an arms race. The fact that there are multiple companies in this race suggests it’s only a matter of time before one makes a major breakthrough. Like with OpenAI’s earlier success, there’s going to be buzz for whoever is first to nail this. Inside those companies, I’m sure there are heated debates: Is it ready? Will it work? Could there be backlash? Or is it more dangerous to be late to the party?

    Naren Ramaswamy:
    I think this is similar to self-driving cars—the last 10–20% takes 80% of the time. The foundational technology is already here. Software can read a website and click buttons just like a human. Meanwhile, AI is already solving International Mathematical Olympiad problems better than humans.

    The challenge now is making it private, secure, and trustworthy. Inviting an agent to operate your computer and act on your behalf requires deep trust. Building that trust may take time.

    I’m curious to see how these companies differ in their approaches—not just technologically, but in how they gain user trust. Convincing hundreds of millions of people to let an AI handle real-world actions is a huge hurdle.

    Mike Collins:
    Yeah, the analogy to self-driving cars is excellent. The stakes there are life-and-death, so adoption is naturally slow. Similarly, with AI agents, trust will build gradually.

    At first, people might allow agents to pull information together but won’t hand over their credit card details right away. Over time, as trust grows, people will eventually wonder how they ever lived without these tools.

    Naren Ramaswamy:
    Exactly.

    Mike Collins:
    It’s going to be fascinating to watch different strategies unfold, and ultimately, the market will decide which approaches people trust first.

    Naren Ramaswamy:
    Exactly. It’s like having a virtual team available out of thin air.

    Mike Collins:
    Out of thin air.

    Naren Ramaswamy:
    Instead of being just one person navigating life, everyone will have 10 different AI agents doing tasks for them.

    Mike Collins:
    And we’ll all become orchestrators of our AI agents—directing them to handle the tasks we want done each day. That’ll take some adjustment and, as you said, a lot of trust.

    Great talking with you, Naren. Have a happy Thanksgiving, and we’ll chat again in a week or two.

    Naren Ramaswamy:
    Likewise. Thanks, Mike.

    Samantha Herrick:
    Thanks again for tuning into The Tech Optimist. If you enjoyed this episode, we’d really appreciate it if you’d give us a rating on whichever podcast app you’re using. And don’t forget to subscribe to stay updated with each new episode.

    The Tech Optimist welcomes any questions, comments, or segment suggestions. Please email us at [email protected] with your feedback, and be sure to visit our website at av.vc. As always, keep building.