
Cofounder Jack Clark, who will lead the new Anthropic Institute, said he had “no worries” about funding for research.
After a prolonged dispute with the Pentagon that has culminated in a blacklist and a lawsuit, Anthropic is reorganizing its executive leadership and research operations. The company announced on Wednesday that it is establishing a new internal think tank, the Anthropic Institute, which merges three of Anthropic’s existing research teams. The institute will explore the broad implications of AI, including “what changes to employment and economies occur, whether AI enhances safety or creates new risks, how its values may influence ours, and if we can maintain oversight,” according to the company.
The announcement coincides with a shuffle in executive roles. Anthropic cofounder Jack Clark is moving into a position leading the think tank; after more than five years as head of public policy, he will now hold the title of head of public benefit. The public policy division, which Anthropic says tripled in size in 2025, will now be run by Sarah Heck, formerly head of external affairs. Anthropic will also open its anticipated office in Washington, DC, where the public policy team will continue to work on topics such as national security, AI infrastructure, energy, and “democratic leadership in AI.”
Clark told The Verge that the launch of the Anthropic Institute had been planned for some time and that he had been considering the role change since November. Still, the timing comes shortly after Anthropic sued the US government over its classification as a supply-chain risk, a designation that would bar clients from using Anthropic’s technology in their work with the Department of Defense. The lawsuit alleges that the Trump administration unlawfully blacklisted the company for establishing “red lines” around mass domestic surveillance and fully autonomous lethal systems.
When asked about these developments, Clark said, “Working in AI at Anthropic is never tedious — there’s always activity … The rate of AI advancement isn’t decelerating due to external circumstances, and neither are we.” He said the situation has not “fundamentally altered” the planned research agenda but “has reinforced” Anthropic’s decision to offer the public greater transparency. “What we’ve been experiencing over the past few weeks illustrates the intense desire for a broader national discussion by the public regarding this technology,” he remarked.
The Anthropic Institute is launching with roughly 30 members, including Matt Botvinick, formerly of Google DeepMind; Anton Korinek, a professor on leave from the University of Virginia’s economics department; and Zoe Hitzig, a researcher who left OpenAI after its decision to put ads in ChatGPT. The think tank combines Anthropic’s societal impacts team, which studies AI’s effects on various sectors of society; its frontier red team, which probes AI systems for vulnerabilities; and its economic research group, which assesses AI’s effects on the economy and the job market. The Anthropic Institute also intends to “incubate” new teams, such as one led by Botvinick focused on AI’s implications for the legal system. Hitzig and Korinek will lead major economic research initiatives, and Clark expects the think tank’s headcount to double annually for the foreseeable future.
Pressure is mounting on high-valuation AI firms like Anthropic, which reportedly plans to go public this year. Court documents from Anthropic noted that the company has brought in more than $5 billion in total commercial revenue while spending $10 billion so far on model training and inference. The filings also said the company has “received inquiries from various external partners … expressing uncertainty about their obligations and concerns regarding their ability to continue collaborating with Anthropic” and that “numerous companies have reached out to Anthropic” for advice and “in certain cases, to comprehend their termination rights.” Anthropic said that, depending on how the government’s prohibitions are interpreted, “hundreds of millions of 2026 revenue is at risk” at minimum, and in extreme scenarios potentially billions.
Is Anthropic worried about investing more resources in long-term research while likely risking a portion of its short-term revenue? When The Verge asked Clark, he said he had “no worries.”
“People typically invest in trust,” Clark stated. “Many of our outputs are the types of research that enhance business confidence in us … Over the long haul, Anthropic has consistently regarded its commitment to safety — and the investigation and reporting of the safety of its systems — not as a cost center, but rather as a profit generator.”
Clark also said he believes advanced AI (essentially Anthropic’s own definition of AGI, or artificial general intelligence) will arrive by the end of this year or in early 2027, and that he chose to change roles primarily because of the “speed of AI advancements.” Reflecting on how he had spent the past year, he said he found himself focused more on policy issues, such as SB 53, than on the AI research and development questions he wanted to address. Anthropic said in a press release that the Anthropic Institute is specifically aimed at tackling the “most challenging questions brought forth by powerful AI.”
Of course, as The Verge noted in December, many tech companies advocate for transparency until it becomes detrimental to business. So how will the Anthropic Institute handle findings that could portray the company unfavorably?
Clark said Anthropic’s cofounders share “similar values” about the need for public disclosure, since the company is a public benefit corporation that can pursue missions “beyond simple fiduciary gain.” He said that in a conversation with CEO Dario Amodei last week, the two agreed on the importance of transparency, even if it brings public relations challenges.
However, the Anthropic Institute’s research may require substantial computational resources at a time when AI businesses are keen to emphasize commercial offerings. Clark said that aside from the compute assigned to frontier model pretraining, Anthropic allocates its computing power weekly based on “what appears most critical,” meaning no specific share has been earmarked for the institute, though he doesn’t anticipate conflicts.
The Anthropic Institute also plans to study people’s emotional reliance on AI, a concern that has gained momentum in public discourse over the past year. Clark said that so far, Anthropic’s research teams have analyzed the kinds of conversations people have with Claude and measured the technology’s ability to persuade users or behave sycophantically, but they have not engaged deeply with users about their personal experiences. He said the think tank aims to carry out extensive social science research, including using Anthropic’s AI to interview users.
“I perceive this as: Social media had a significant impact on society, and it wasn’t solely about what transpired on the social media platforms. It was, ‘How did social media usage transform individuals?’” Clark remarked. “We aim to understand, ‘How does AI usage modify individuals?’”