
The most significant week in the AI sector: Google’s ascent, RL craze, and a celebration on a boat

by admin

Reinforcement learning (RL) is poised to be the next significant development, Google is on the rise, and the social events have spiraled out of control. These were the key themes from this year’s NeurIPS in San Diego.

NeurIPS, formally the “Conference on Neural Information Processing Systems,” began in 1987 as a purely academic affair. It has since grown, along with the hype surrounding AI, into a major industry gathering where research labs recruit talent and investors scout for the next generation of AI startups.

I regrettably could not attend NeurIPS this year, but I was eager to learn what discussions were taking place in San Diego over the past week. Therefore, I reached out to engineers, researchers, and founders to gather their insights. The following compilation of responses includes input from Andy Konwinski, cofounder of Databricks and head of the Laude Institute; Thomas Wolf, cofounder of Hugging Face; OpenAI’s Roon; and participants from Meta, Waymo, Google DeepMind, Amazon, among other organizations.

I asked everyone the same three questions: What was the hottest topic at the conference? Which labs seem to be thriving or struggling? Which event had the best party?

The general consensus was evident. “RL RL RL RL is dominating the scene,” Anastasios Angelopoulos, CEO of LMArena, remarked. The industry appears to be converging on the notion that optimizing models for specific applications, rather than merely enlarging the data used for pre-training, will catalyze the next wave of advancements in AI. From the inquiries regarding lab traction, it’s apparent that Google is experiencing a significant moment. “Google DeepMind is feeling optimistic,” shared Hugging Face’s Wolf.

The social scene was predictably intense. Konwinski’s Laude Lounge emerged as one of the week’s most popular venues — notable figures like Jeff Dean, Yoshua Bengio, and Ion Stoica, along with many other leading researchers, attended. Model Ship, a private cruise for 200 researchers, featured “an unprecedented dedication to the dance floor” for a conference event, noted Nathan Lambert, one of the cruise’s organizers. Roon expressed a cynical outlook on the festivities: “you can glean more from Twitter than being physically present … largely my on-the-ground impression was ‘this is overwhelming.’”

Here’s a summary of what attendees articulated about NeurIPS this year:

What was the hottest topic among attendees that you think will gain traction in 2026?

  • Andy Konwinski, founder of the Laude Institute: “Throughout the week, I conducted numerous interviews, and during discussions on what people deemed overhyped, I encountered responses about agentic AI, RL, and world models, while still recognizing RL and world models as burgeoning areas that are captivating and worthy of observation.”
  • Thomas Wolf, cofounder of Hugging Face: “AI x science, interpretability, RL long rollouts”
  • Roon, member of technical staff, OpenAI: “one can gain more insights from Twitter than from actually being there / the tweets indicate the buzz is around continual learning / That might be correct / uncertain / primarily my ground-level sentiment was ‘this is excess.’”
  • Maya Bechler-Speicher, research scientist at Meta: “I can’t assert with certainty what the most talked-about subject was — the conference is enormous, and my exposure was limited — but tabular foundation models were certainly gaining momentum, and I predict this trend will persist into 2026. After years of dominance by decision-tree–based methods in generalization on tabular data, we are starting to witness foundation-model approaches consistently outperforming them. Another area capturing significant attention is physical AI, which is still rife with open research inquiries and potential.”
  • Anonymous researcher at a prominent AI lab: “I may be biased, but it appears that AI focusing on the physical realm (robotics, engineering, etc., not merely AI for science) is on the verge of significant development.”
  • Nathan Lambert, senior researcher at the Allen Institute for AI: “It was widely acknowledged that [Ilya Sutskever]‘s remark on the Dwarkesh Podcast that we have now entered ‘The Age of Research’ instead of merely scaling is an apt phrase. There was no single most notable topic in the poster sessions or workshops (as last year’s NeurIPS was preoccupied with reinforcement learning and reasoning post-o1 launch). Some groups solemnly reflected on how this was the first NeurIPS since DeepSeek R1 and a year of open model transformation, but the overall conference did not seem actively engaged in it.”
  • Brian Wilt, head of data at Waymo: “The predominant theme among my acquaintances was how much research is underway in leading labs versus academic settings and is likely unpublished. From my perspective at Waymo, many of the (applied) challenges I need to address only emerge at scale (for instance, regarding data, performance). Nonetheless, there’s a prevailing sentiment that a critical breakthrough beyond merely scaling current architectures is necessary (as pointed out by Ilya/[Andrej] Karpathy/among others).”
  • Evgenii Nikishin, member of technical staff at OpenAI: “Certainly, continual learning was among the hottest topics. I am uncertain how many scientific advancements will transpire in 2026 — perhaps some, perhaps not many — but I anticipate that more discourse will emerge surrounding it.”
  • Paige Bailey, developer lead for Google DeepMind: “Sovereign open models are definitely in the spotlight, especially their deployment on-premises coupled with fine-tuning plus RL. I believe world models and robotics will dominate discussions in 2026.”
  • Sachin Dharashivkar, CEO of AthenaAgent: “The design of RL environments and agent training were the most frequently discussed themes.”
  • Ronak Malde, ex-DeepMind engineer and new founder of a stealth RL startup: “Continual learning. Supporting this forthcoming frontier will require novel architectures, new reward mechanisms, fresh data sources, and innovative data scalability models.”
  • Deniz Birlikci, researcher at Amazon: “Agents are a stack rather than a mere model. Consequently, RL for agents ought to train with the same tools/stacks intended for production use. More teams are contemplating the creation of a robust taxonomy and labeling system for their data, particularly in RL, and I find this crucial.”
  • Richard Suwandi, student ambassador for The Chinese University of Hong Kong: “Numerous discussions revolved around the feasibility of constructing genuinely creative AI systems (not merely optimizing within established boundaries, but capable of independently generating genuinely novel ideas and discoveries). I foresee this becoming a pivotal research area in 2026.”
  • Anastasios Angelopoulos, CEO of LMArena: “RL RL RL RL is dominating the scene”

Which labs appear to be gaining momentum, and which seem more uncertain?

  • Nathan Lambert (Allen Institute for AI): “The conversation surrounding which labs are excelling and which are lagging felt akin to rumors circulating out of SF in recent weeks. Gemini and Anthropic are on the rise at the expense of OpenAI. OpenAI was at least mentioned, whereas I did not hear anyone discussing the capabilities of xAI at all.”
  • Evgenii Nikishin (OpenAI): “The three major frontier labs (GDM, Anthro, OAI) are experiencing positive overall momentum, though each has its own unique strengths and weaknesses. In contrast, many LLM / imagen startups from 2022-2024 that provided similar propositions but lacked distinctive value seem to be quietly failing.”
  • Andy Konwinski (Laude Institute): “Surging labs: Alibaba/Qwen, Moonshot/Kimi, Arcee, Reflection AI, Human&, Prime Intellect have all made recent noteworthy announcements; Google with Gemini 3, Nano Banana, TPUv7”
  • Anonymous researcher: “Reflection managed to secure a substantial booth considering they are a very young startup – that’s certainly a fresh development.”
  • Brian Wilt (Waymo): “I was pleased that Alphabet/Google had the most accepted papers this year.”
  • Paige Bailey (Google DeepMind): “Periodic Labs and Reflection AI seem to be thriving; both possess compelling mission statements. I was also thrilled to witness Anna and Azalea launch a new venture (Ricursive Intelligence).”
  • Ronak Malde (stealth RL startup): “Several neolabs are on track to launch in 2026 that will disrupt research as we currently understand it. DeepMind continues to excel. Kimi Moonshot and Deepseek are doing well, too.”
  • Richard Suwandi (The Chinese University of Hong Kong): “One lab that distinctly seems to be gaining momentum is Google DeepMind. At NeurIPS, their drive for a new research agenda was palpable, with initiatives like Nested Learning and Titans/MIRAS suggesting a shift toward more continual, long-term memory rather than just larger transformers—this was a refreshing change in hallway dialogues.”
  • Thomas Wolf (Hugging Face): “Google DeepMind appears to be in a good place.”

What was the most enjoyable party you attended or felt you missed out on?

  • Nathan Lambert (Allen Institute for AI/Model Ship co-organizer): “The quintessential example of a NeurIPS party relevant to the current AI domain was Model Ship, an exclusive cruise with 200 leading researchers, investors, and figures in the AI realm. It featured custom merchandise, free-flowing discussions, and an unprecedented dedication to the dance floor at a conference event.”
  • Andy Konwinski (Laude Institute): “I regretted not being able to join events hosted by Robert Nishihara, Naveen Rao, and Nathan Lambert. I also felt disappointed to miss Rich Sutton and Yejin Choi’s keynotes (though I eventually interviewed Yejin, so we were able to discuss the topics she covered).”
  • Roon (OpenAI): “OpenAI’s events, a16z events / I particularly enjoyed the a16z gathering as I had the chance to meet Lex [Fridman], which was exciting / yet even the parties I generally sought to avoid kept turning into gatherings of around 750 guests in a house or something similar / what an overwhelming experience”
  • Maya Bechler-Speicher (Meta): “The Meta event was among the most impressive company functions I have attended. Moreover, G-Research invited a select group of researchers to a three-star Michelin restaurant, which wasn’t exactly a party, but was absolutely extraordinary.”
  • Brian Wilt (Waymo): “My favorite event was an intimate gathering at comma.ai (based in San Diego), which develops an open-source driver assistant. I utilize it in my personal vehicle; it’s perfect for when I’m not using Waymo in Phoenix. @yassineyousfi_ organized an online capture-the-flag challenge to gain entry. @realGeorgeHotz led us on a tour of their data center and production. I did feel a bit embarrassed when I entered their Wi-Fi password, ‘lidarisdoomed.’”
  • Evgenii Nikishin (OpenAI): “The OpenAI party 😎”
  • Paige Bailey (Google DeepMind): “Unfortunately, I had to leave late Friday/early Saturday, so I missed the concluding workshops of the conference. I felt intense FOMO regarding the ML for Systems workshop, not to mention the ‘Claude and Gemini Play Pokemon’ workshop — both seemed fantastic!”
  • Ronak Malde (stealth RL startup): “Radical VC gathering Jeff Dean and Geoffrey Hinton in a single venue was the highlight of my week.”
  • Anastasios Angelopoulos (LMArena): “Laude Lounge”
  • Thomas Wolf (Hugging Face): “The Hugging Face event where over 2.5k people signed up / I found the Prime-intellect gathering particularly enjoyable”
  • Dylan Patel, founder of SemiAnalysis: “Mine haha”

Indeed, some individuals perceived keynotes as parties. It seems academia continues to thrive at NeurIPS.
