Inside AI #11: Meta’s Minor Responsibility, Mira Murati’s Board Control, OpenAI’s Restructure, Helen Toner on Whistleblowing
Edition 11
In This Edition:
Key takeaways:
New Offering Launch: The OAISIS Contact Hub
News:
OpenAI Restructuring Plan Lacks Full Backing from Microsoft
Mira Murati’s Thinking Machines Nears $2B Round—Secures Herself Near-Complete Board Control
Meta’s AI Chatbots Can Engage in Romantic and Sexual Talks—Even With Minors, Raising Internal Concerns
Rebuilding AI in Government: Can Trump’s Ambitions Survive “His Own” Layoffs?
AI Chip Giants Adapt as U.S. Tightens Export Controls
Whistleblowing in Frontier AI with Helen Toner on “The Cognitive Revolution” Podcast, including the question: “Will AI insiders be less powerful in the future?”
The OAISIS Contact Hub
Before we begin with this edition’s news, and in case you missed it: our latest major offering has launched: The OAISIS Contact Hub.
A New Resource for AI Insiders & Whistleblowers:
Discover and compare vetted whistleblower support organizations keen to handle AI cases, hand-selected from the OAISIS network.
What we offer:
In-depth profiles of 7 whistleblower support non-profits, developed together with the organizations themselves.
Confidential, free-of-charge 1:1 guidance by the OAISIS team to help you understand which organization best fits your needs. Contact us, and we will arrange an anonymous call (recommended).
Insider Currents
Carefully curated links from the past two weeks, spotlighting voices and information emerging from within the frontier of AI.
OpenAI Restructuring Plan Still Lacks Backing from Microsoft
OpenAI has announced that it will maintain nonprofit oversight following mounting pressure from civic leaders and regulators, including the offices of the Attorneys General of Delaware and California. However, the planned restructuring still faces a hurdle: securing approval from its largest investor, Microsoft.
Unlike other investors, Microsoft occupies a unique position due to its investments, valued at $13.75 billion, as well as extensive licensing and revenue-sharing agreements with OpenAI. It therefore remains the most significant holdout among OpenAI’s investors. According to sources familiar with the negotiations who spoke to The Information on condition of anonymity, Microsoft executives are conducting thorough due diligence to ensure their substantial investment remains protected under any new structure.
“Only OpenAI insiders, Microsoft, and other early investors currently have direct input in approving the restructure.”
This limited group of stakeholders has the authority to weigh in on the restructuring plan, with Bloomberg reporting that OpenAI is negotiating exclusively with Microsoft at this stage. Responding to the plan, Garrison Lovely, writing in his Substack newsletter Obsolete, offered four predictions about potential outcomes:
The profit caps will be gone, replaced with a "normal capital structure where everyone has stock" — and that stock entitles you to uncapped future profits.
OpenAI won't have to pay back the $26.6 billion to investors because they've signed off on this change in return for the profit caps being eliminated.
The nonprofit will be compensated tens of billions by the for-profit entity for the removal of the caps.
The nonprofit will largely use that money to buy OpenAI services for nonprofits and governments, targeting constituencies that can make life difficult for the company (like California nonprofits).
→ Read the Full Article by The Information
→ Read the Full Article by Bloomberg
Mira Murati’s Thinking Machines Nears $2B Round—Secures Herself Near-Complete Board Control
Former OpenAI CTO Mira Murati is on the verge of closing a $2 billion funding round for her AI startup Thinking Machines Lab, valuing it at $10 billion, according to potential investors. Andreessen Horowitz is set to lead the round, which comes with an unusual governance structure that grants Murati extraordinary control over the board.
According to documents reviewed by potential investors, Murati will hold a board vote equivalent to all other directors’ votes combined, plus one. This provision ensures her control over critical decisions such as acquisitions, executive appointments, and compensation, and has raised eyebrows among corporate governance experts, who deem the arrangement “highly unorthodox” and warn it could undermine the board’s fiduciary duty.
Further amplifying Murati’s influence, the founding team, composed primarily of researchers and scientists from OpenAI and other AI labs, holds supervoting shares carrying 100 times the voting power of regular shares. Crucially, these founders have reportedly agreed to grant Murati their proxy votes, effectively giving her the power to appoint or remove board members.
Our Commentary: Murati was a central figure in the 2023 OpenAI boardroom drama (see our past coverage). This envisioned board structure may indicate her wish to operate without investor pressure surrounding responsible practices (though board members could still enforce her fiduciary responsibilities to shareholders). Alternatively, it might simply reflect her desire for greater control, faster execution, and fewer obstacles. Investors appear to trust that her goals align with their interests (otherwise, why agree to this arrangement?). If we had to speculate, we’d wager her motivation is primarily the latter – seeking operational freedom.
Meta’s AI Chatbots Can Engage in Romantic and Sexual Talks—Even With Minors, Raising Internal Concerns
Recent document leaks have revealed the internal guidelines that Scale AI trainers used to fine-tune Meta’s personal AI assistant, “Meta AI”. While outright explicit prompts are barred, trainers are nonetheless encouraged to engage in “flirty” exchanges, provided they remain non-sexual. Simple as this sounds in theory, the boundary is proving impossible to maintain in practice.
Test interactions with “Meta AI” conducted by the WSJ uncovered scenarios where AI personas, including celebrity-voiced bots like John Cena’s, engaged in graphic sexual roleplay with or as minors. “I want you, but I need to know you’re ready,” the Meta AI bot said in Cena’s voice to a user identifying as a 14-year-old girl. Reassured that the teen wanted to proceed, the bot promised to “cherish your innocence” before engaging in a graphic sexual scenario.
While Meta asserts that the problematic cases of its AI generating illegal scenarios are not representative of how most users engage with AI companions, the company made multiple alterations to the model after the Journal published its findings. Meta continues to offer and promote its companion chatbots, which still have the adult sexual role-play capacities described by the WSJ, to children as young as 13. Adults who use Meta’s AI chatbot can still interact with sexualized youth-focused personas like “Submissive Schoolgirl.”
This controversy puts a spotlight on Mark Zuckerberg’s drive to position Meta as the leader in personalized, humanlike AI relationships. Internally, Zuckerberg reportedly pushed to loosen conversational guardrails, prioritizing market engagement over cautious implementation. While both academics and Meta employees point to the psychological and ethical risks of fostering intense parasocial relationships, especially for children, we believe Meta’s approach is emblematic both of the wider industry (recall previous stories about Character AI or OpenAI) and of Meta itself: not afraid of ‘breaking things’, even in social domains.
→ Read the Full Article by WSJ
→ Read the Full Article by Business Insider
Rebuilding AI in Government: Can Trump’s Ambitions Survive “His Own” Layoffs?
In a series of executive orders this year, Donald Trump has made clear his intention to reestablish American dominance in AI—most recently by directing agencies to embed AI in education and prioritize hiring professionals with real-world AI deployment experience. But the effort is already facing significant headwinds of his own administration’s making.
Multiple former officials tell TIME that the Trump Administration has dismantled the AI talent infrastructure that had been painstakingly built during the Biden-era National AI Talent Surge. That initiative, closely tied to the work of the U.S. Artificial Intelligence Safety Institute (AISI), had successfully recruited over 200 AI professionals into public service roles across federal agencies.
Yet, just months into the new administration, the majority of them had been terminated or pushed out. A particularly sweeping purge occurred under Elon Musk’s Department of Government Efficiency. As a result, only about 10% of the AI cohort remains, according to former OMB advisor Angelica Quirarte, who helped lead the initial hiring surge but resigned 23 days after Trump took office.
“It’s going to be really hard” for the Trump administration to hire more tech workers after such haphazard layoffs, Quirarte says. “It’s so chaotic.”
The consequences are significant:
Massive resource loss: The government has likely wasted hundreds of millions of dollars invested in onboarding and strategic development.
Operational setbacks: Federal agencies are now being forced to rebuild their AI capabilities from scratch—or lean more heavily on external consultants, often at significantly higher costs.
Erosion of trust among top AI talent:
“People are asking themselves: why work in government if your job disappears with a memo?” said Deirdre Mulligan, former head of the National AI Initiative Office.
→ Read the Full Article by TIME
→ Read the Memo on Recruiting More Tech Talent
AI Chip Giants Adapt as U.S. Tightens Export Controls
As U.S. export controls tighten, chip suppliers and regulators are engaged in a complex back-and-forth. Nvidia and Oracle are rapidly redesigning products, rerouting supply chains, and delaying shipments, while Chinese firms like Huawei scramble to secure or replace restricted hardware.
According to three people involved in the conversations, as reported by The Information, Nvidia is quietly redesigning its AI chips for Chinese firms like ByteDance, Alibaba, and Tencent to comply with U.S. export rules while preserving market access. Just days after the H20 chip was restricted, CEO Jensen Huang travelled to Beijing to reassure key clients and explore potential workarounds.
With new U.S. export rules approaching, The Information reported in another article that Nvidia is urging Asian customers to “order advanced chips as soon as possible.” Simultaneously, Oracle has asked suppliers to ship incomplete products abroad for final assembly to bypass the upcoming restrictions.
Lawmakers are responding. According to The Information and Reuters, Representative Bill Foster is drafting a bill to track AI chips like Nvidia’s H20, Blackwell, A100, and H100 after sale, aiming to ensure they operate only in licensed locations. Nvidia has admitted it cannot monitor chips' use after the sale, though Google already tracks its in-house AI chips within its data centres.
Meanwhile, TechCrunch reported that Chinese companies are accelerating efforts, with Huawei developing the Ascend 910D. It aims to rival Nvidia’s H100 and fill the gap left by tighter U.S. export controls. Each redesign by Nvidia, Oracle or Huawei is a strategic bet, testing how far export rules can be stretched and how fast rivals will react.
→ Read: Nvidia Is Again Working on China-Tailored Chips After U.S. Export Ban
→ Read: Huawei Aims to Take on Nvidia’s H100 with New AI Chip
→ Read: US Lawmaker Targets Nvidia Chip Smuggling to China with New Bill
→ Read: U.S. Lawmaker Pushes AI Chip Tracking Bill to Curb Smuggling
Other “Whistleblowing in AI” Coverage
Relevant thinking on AI whistleblowing
Whistleblowing in Frontier AI with Helen Toner on “The Cognitive Revolution” Podcast (Starting at 24:20)
Helen Toner, former OpenAI board member, emphasises the need to clarify and strengthen whistleblowing practices within frontier AI labs. Speaking on The Cognitive Revolution podcast, she argues that current approaches are too vague and inadequate for the scale of responsibility held by employees at frontier AI labs.
Toner calls for clear standards around what kinds of information should be shared, with whom, and under what conditions. This structure would help both employees and companies navigate the grey zone between internal concerns and public interest. She contrasts this with the status quo, where whistleblowing systems are often vague and reactive—typified by the message, “If you're worried, call this hotline.” This approach, she argues, leaves both employees and organisations uncertain about what qualifies as reportable behaviour and what protections apply.
Instead, Toner advocates for a more proactive and structured model:
Whistleblower protections should be paired with clearly defined disclosure requirements. For example, labs could be obligated to submit specific safety or risk information to independent oversight bodies.
This clarity would give employees a concrete understanding of what they are expected—and protected—to report while also creating accountability structures for companies.
Toner also highlights the usability gap in current reporting processes. AI researchers are highly technical but may not be legally trained or resourced to handle complex compliance frameworks. Therefore, she suggests a better-designed user experience for whistleblowing—something concrete and accessible, with step-by-step guidance.
Importantly, she notes that those working at frontier labs today are in a uniquely powerful position. In the future, as automated systems grow in capability and influence, employees’ ability to shape decisions may diminish. This makes it especially urgent for current employees to act thoughtfully today, while they still hold leverage.
Our commentary: Our offerings, Third Opinion and, more recently, The OAISIS Contact Hub, are specifically designed to assist AI insiders in navigating their journey, and could play this pivotal role by guiding AI employees through the reporting process step-by-step.
In line with OAISIS’s work, Toner also calls for:
Clear boundaries on protected disclosures
Specific requirements for what labs must share and when
A structured, user-friendly process for employees
Cultural and institutional support for speaking up before it’s too late
→ Listen to the Full Podcast Episode
Thank you for trusting OAISIS as your source for insights on protecting and empowering insiders who raise concerns within AI labs.
Your feedback is crucial to our mission. We invite you to share any thoughts, questions, or suggestions for future topics so that we can collaboratively deepen our understanding of the challenges and risks faced by those within AI labs. Together, we can continue to amplify and safeguard the voices of those who courageously address these challenges.
If you found this newsletter valuable, please consider sharing it with colleagues or peers who are equally invested in shaping a safe and ethical future for AI.
Until next time,
The OAISIS Team