<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[AIWI: Inside AI]]></title><description><![CDATA[News based on real insider voices in AI & Updates from The AI Whistleblower Initiative (AIWI)]]></description><link>https://aiwhistleblowerinitiative.substack.com</link><image><url>https://substackcdn.com/image/fetch/$s_!s9bX!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff1f243a6-e1ce-4efe-bb6c-de9b89ca16fe_800x800.png</url><title>AIWI: Inside AI</title><link>https://aiwhistleblowerinitiative.substack.com</link></image><generator>Substack</generator><lastBuildDate>Tue, 10 Mar 2026 08:47:39 GMT</lastBuildDate><atom:link href="https://aiwhistleblowerinitiative.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[OAISIS]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[aiwhistleblowerinitiative@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[aiwhistleblowerinitiative@substack.com]]></itunes:email><itunes:name><![CDATA[AIWI]]></itunes:name></itunes:owner><itunes:author><![CDATA[AIWI]]></itunes:author><googleplay:owner><![CDATA[aiwhistleblowerinitiative@substack.com]]></googleplay:owner><googleplay:email><![CDATA[aiwhistleblowerinitiative@substack.com]]></googleplay:email><googleplay:author><![CDATA[AIWI]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[OpenAI Expands Their Raising Concerns Policy - AIWI’s Preliminary Assessment]]></title><description><![CDATA[AIWI's evaluation of OpenAI's 2024 policy, engagement with OpenAI leadership, and preliminary analysis of the 2026 
policy]]></description><link>https://aiwhistleblowerinitiative.substack.com/p/openai-expands-their-raising-concerns</link><guid isPermaLink="false">https://aiwhistleblowerinitiative.substack.com/p/openai-expands-their-raising-concerns</guid><dc:creator><![CDATA[AIWI]]></dc:creator><pubDate>Wed, 28 Jan 2026 19:17:30 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/38788da7-b2bc-4c76-815a-9ec417624886_1200x630.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>OpenAI has expanded its Whistleblowing Policy &#8212; now the most comprehensive among frontier AI companies.</p><p>Together with an expert consortium (see below), we evaluated OpenAI&#8217;s 2024 policy and shared detailed findings with their leadership. Below, we share what&#8217;s changed and which gaps were addressed.</p><div><hr></div><blockquote><p><a href="http://aiwi.org/wp-content/uploads/2026/01/Evaluating-OpenAIs-2024-Raising-Concerns-Policy260127.pdf">Download our publication</a>, including our complete feedback to OpenAI.</p></blockquote><div><hr></div><blockquote><p><a href="https://cdn.openai.com/policies/openai-raising-concerns-policy.pdf">Find OpenAI&#8217;s new policy.</a></p></blockquote><div><hr></div><blockquote><p><a href="https://aiwi.org/openai-whistleblowing-policy-aiwi-commentary/">Read the full news</a> (best on desktop)</p></blockquote><div><hr></div><blockquote><p>Read <a href="https://www.theinformation.com/newsletters/ai-agenda/openai-one-ups-anthropic-new-whistleblower-policy">The Information article</a>, which first announced the updated policy.</p></blockquote><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!H220!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36d092bf-c1c0-4be5-9a5e-1aee3013876e_954x1378.png" 
data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!H220!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36d092bf-c1c0-4be5-9a5e-1aee3013876e_954x1378.png 424w, https://substackcdn.com/image/fetch/$s_!H220!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36d092bf-c1c0-4be5-9a5e-1aee3013876e_954x1378.png 848w, https://substackcdn.com/image/fetch/$s_!H220!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36d092bf-c1c0-4be5-9a5e-1aee3013876e_954x1378.png 1272w, https://substackcdn.com/image/fetch/$s_!H220!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36d092bf-c1c0-4be5-9a5e-1aee3013876e_954x1378.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!H220!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36d092bf-c1c0-4be5-9a5e-1aee3013876e_954x1378.png" width="954" height="1378" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/36d092bf-c1c0-4be5-9a5e-1aee3013876e_954x1378.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1378,&quot;width&quot;:954,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:758333,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/i/185867653?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36d092bf-c1c0-4be5-9a5e-1aee3013876e_954x1378.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!H220!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36d092bf-c1c0-4be5-9a5e-1aee3013876e_954x1378.png 424w, https://substackcdn.com/image/fetch/$s_!H220!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36d092bf-c1c0-4be5-9a5e-1aee3013876e_954x1378.png 848w, https://substackcdn.com/image/fetch/$s_!H220!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36d092bf-c1c0-4be5-9a5e-1aee3013876e_954x1378.png 1272w, https://substackcdn.com/image/fetch/$s_!H220!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36d092bf-c1c0-4be5-9a5e-1aee3013876e_954x1378.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
style="height:20px;width:20px" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><h2>Content:</h2><ol><li><p><a href="https://aiwhistleblowerinitiative.substack.com/i/185867653/context">Context</a></p></li><li><p><a href="https://aiwhistleblowerinitiative.substack.com/i/185867653/takeaway-for-insiders-you-should-still-get-legal-advice-as-early-as-possible">Takeaway for Insiders: You Should Still Get Legal Advice As Early As Possible</a></p></li><li><p><a href="https://aiwhistleblowerinitiative.substack.com/i/185867653/preliminary-assessment-of-the-2026-policy">Preliminary Assessment of the 2026 Policy</a></p></li><li><p><a href="https://aiwhistleblowerinitiative.substack.com/i/185867653/evaluation-of-the-2024-policy">Evaluation of the 2024 Policy</a></p></li><li><p><a href="https://aiwhistleblowerinitiative.substack.com/i/185867653/whats-next">What&#8217;s 
Next</a></p></li><li><p><a href="https://aiwhistleblowerinitiative.substack.com/i/185867653/how-we-can-help">How We Can Help</a></p></li></ol><div><hr></div><h2>Context</h2><h3>The Publish Your Policies Programme</h3><p>AIWI, with 35+ global signatories, calls on frontier AI companies to <a href="http://www.publishyourpolicies.org/">publicly release their internal whistleblowing policies.</a> Transparency is the first step &#8212; it allows independent evaluation, enables informed decisions by current and prospective employees, and creates accountability for stated commitments.</p><h3>OpenAI Led the Way (And Still Does)</h3><p>In October 2024, OpenAI became the first frontier AI company to publish its internal whistleblowing policy, a notable step toward transparency following widely discussed concerns about its use of non-disparagement agreements. Anthropic has since released a policy addressing <a href="https://www-cdn.anthropic.com/fcf136d0f2204e2184f73c6bd082bea27f2d631b/RSP%20Noncompliance%20Reporting%20and%20Anti-Retaliation%20Policy%20(Final%202025.12.04).pdf">RSP non-compliance </a>(<a href="https://aiwhistleblowerinitiative.substack.com/p/anthropic-publishes-their-partial?r=489j4y">Read our first thoughts on this)</a>, but OpenAI&#8217;s policy remains the most comprehensive public commitment from a frontier AI company to date. Other major players &#8212; xAI, Google DeepMind, and Meta &#8212; have yet to publish their whistleblowing policies.</p><h3>Transparency Disclaimer: Engagement with OpenAI</h3><p>As part of the Publish Your Policies programme mentioned above, AIWI conducted an independent evaluation of OpenAI&#8217;s 2024 policy and shared our findings directly with OpenAI leadership. We engaged in subsequent discussions with the company. 
OpenAI informed us that a policy revision was already underway prior to receiving our assessment &#8212; a revision that has since resulted in an updated 13-page policy.</p><h3>What You&#8217;ll Find Here:</h3><ul><li><p>Takeaways for AI insiders &#8212; What this means if you&#8217;re considering raising a concern internally</p></li><li><p>Preliminary assessment of the 2026 policy &#8212; Our initial analysis of OpenAI&#8217;s expanded whistleblowing policy</p></li><li><p>Full evaluation of the 2024 policy &#8212; Our complete assessment of the 2024 policy <a href="https://aiwi.org/wp-content/uploads/2026/01/Evaluating-OpenAIs-2024-Raising-Concerns-Policy260127.pdf">(Download PDF)</a></p></li></ul><div><hr></div><h2>Takeaway for Insiders: You Should Still Get Legal Advice As Early As Possible</h2><p>OpenAI publishing its whistleblowing policy was a meaningful step toward transparency &#8212; but a policy alone doesn&#8217;t make internal reporting safe. The gaps we identified in OpenAI&#8217;s original 2024 policy reflect structural issues common across the industry. Whether you work at OpenAI or another frontier AI company, understanding these gaps matters.</p><h3>The Gap Between Policy and Protection</h3><p>Internal policies don&#8217;t create legally enforceable rights unless explicitly stated and not disclaimed through employment contracts. At-will employment disclaimers &#8212; standard across U.S. technology companies &#8212; typically preserve full management discretion. That means a company can retaliate against internal whistleblowers who follow the policy, even if it proclaims that it won&#8217;t, unless those whistleblowers are also covered by whistleblower protection laws (e.g. the California Labor Code or federal protections, such as those available through the SEC).
</p><p><a href="https://kkc.com/blog/vast-majority-of-corporate-whistleblower-retaliation-cases-stem-from-internal-disclosures-new-research-shows/">Research by Kohn et al.</a> found that internal whistleblowers account for over 90% of corporate retaliation cases under federal whistleblower laws. The very act of trying to help can carry significant personal risk &#8212; and raising concerns internally may actually increase that risk.</p><h3>Takeaways for AI Insiders</h3><p>If you&#8217;re considering raising a concern internally &#8212; at OpenAI or any frontier AI company &#8212; read this first.</p><p><strong>Before you report, understand your situation</strong></p><p><strong>Assess your trust in the company:</strong></p><ul><li><p>Alignment with leadership interests: How likely is it that senior management is unaware of the issue? Does your report align with leadership or commercial objectives?</p></li><li><p>Previous treatment of those who raised concerns: Look at the culture of &#8216;speaking up&#8217;, court filings, press coverage, employee reviews, and colleagues&#8217; experiences.</p></li><li><p>How seriously does the company take internal reporting: Is there a published, well-developed policy? Are trainings required and taken seriously? Is system performance measured and shared?</p></li><li><p>Independence of the whistleblowing function: How independent is it from management and legal? Do you trust the individuals who will evaluate your concerns?</p></li></ul><p><strong>Assess your legal position:</strong></p><ul><li><p>Your contractual exposure: Review NDAs, your employment contract, severance terms, and equity agreements for clawback provisions.</p></li><li><p>Legal protection gaps: Certain safety concerns, especially outside of California, may fall outside existing whistleblower statutes. 
The <a href="https://www.judiciary.senate.gov/press/rep/releases/grassley-introduces-ai-whistleblower-protection-act">AI Whistleblower Protection Act</a> seeks to address this gap but has not yet been enacted.</p></li><li><p>The <a href="https://www.whistleblowers.org/digital-realty-trust-inc-v-somers/">Digital Realty</a> ruling: The Supreme Court established that whistleblowers who report only internally &#8212; without also reporting to the SEC &#8212; are not protected under <a href="https://www.sec.gov/whistleblower">Dodd-Frank&#8217;s</a> anti-retaliation provisions.</p></li></ul><p><strong>Consider structural risks:</strong></p><p>The <a href="https://www.whistleblowers.org/internal-reporting-channels/">National Whistleblower Center</a> advises insiders to generally avoid internal channels, especially if reports are handled by a company&#8217;s legal department: &#8220;Company lawyers who receive whistleblower complaints are not obligated to report those complaints to the government. They are required to act in the best interest of the company, not the whistleblower.&#8221;</p><h4>Our Recommendation</h4><p>Consult independent legal counsel before escalating high-stakes concerns. 
An experienced whistleblower attorney can:</p><ul><li><p>Assess which legal protections actually apply to your specific situation</p></li><li><p>Advise whether internal reporting is safe given your jurisdiction and concern</p></li><li><p>Help you document issues in legally protected ways</p></li><li><p>Prepare you for potential retaliation scenarios</p></li></ul><p><strong>The safest path is professional legal guidance &#8212; not reliance on company commitments alone.</strong></p><h3>Need Support?</h3><p><strong><a href="https://aiwi.org/contact/">Reach out to us</a> or visit the <a href="https://aiwi.org/contact-hub">AIWI Contact Hub</a></strong> to connect with specialized nonprofit organizations and <strong>pro bono legal counsel</strong> who have helped hundreds of concerned insiders navigate this complex journey.</p><div><hr></div><h2>Preliminary Assessment of the 2026 Policy</h2><p>OpenAI has released a substantially revised Whistleblowing Policy, expanding their original 3-page document to a comprehensive 13-page policy with formal structure, definitions, and country-specific provisions. This is currently the most comprehensive whistleblowing policy among all frontier AI companies. Here, we present our analysis of the original policy alongside an indication of which issues the new policy corrects. Our preliminary review indicates that OpenAI has made significant progress. We find 8 of 13 recommendations significantly or partially addressed, including 3 of 5 of our &#8216;critical&#8217; feedback items. We will publish a more thorough evaluation of OpenAI&#8217;s new policy soon (<a href="https://aiwhistleblowerinitiative.substack.com/">subscribe to our Substack</a> to stay in the loop). Note: We had reviewed OpenAI&#8217;s previous Whistleblowing Policy, then shared and discussed our feedback with OpenAI&#8217;s leadership.
OpenAI stated that they were already reworking their policy when we sent our findings.</p><h3>Major Improvements: Three Critical Issues Addressed</h3><p>Our preliminary review indicates that OpenAI has made important progress on several of our most critical recommendations:</p><p>1. Dedicated Compliance Function: OpenAI has elevated the Compliance team to a central coordinating role&#8212;a positive development that creates more dedicated infrastructure for whistleblowing, partially addressing our concerns about the Legal team&#8217;s heavy involvement and potential conflicts of interest.</p><p>2. AI Safety Concerns as Distinct Category: The policy establishes &#8220;AI Safety Concerns&#8221; as a separate reporting category with explicit examples (Preparedness Framework violations, unsafe model outputs, gaps in red-teaming) and a dedicated review pathway involving subject matter experts (although it is unclear whether these are internal or external), senior leadership, and potentially the Board&#8217;s Safety and Security Committee. This partially addresses our concerns about confusing material scope.</p><p>3. 
Communication Commitments and Timelines: OpenAI has introduced concrete procedural timelines: acknowledgment within 2 business days, updates within 30 days, and importantly, follow-up checks 30-45 days after closure specifically for retaliation, discrimination, and harassment cases&#8212;directly addressing our feedback on the detrimental effects of communication blackouts.</p><p>Additionally, the policy significantly expands guidance on external reporting rights, including detailed country-specific provisions for the EU, India, Ireland, Korea, and UAE.</p><h3>California TFAIA Compliance</h3><p>Requirements: The policy explicitly references and links to <a href="https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260SB53">California&#8217;s Transparency in Frontier Artificial Intelligence Act</a> (formerly SB 53, which we have written about <a href="https://www.transformernews.ai/p/sb-53-protects-whistleblowers-in">here</a>), including a <a href="https://oag.ca.gov/contact/whistleblower">direct link to the Attorney General&#8217;s whistleblowing page</a>, which is commendable.</p><p>Core Requirements Met: OpenAI meets the core structural requirements of anonymous reporting channels, anti-retaliation protections, and external reporting rights.</p><p>Gaps in TFAIA Commitments: However, the policy does not <em>explicitly</em> commit to several specific TFAIA requirements: (1) monthly progress updates to whistleblowers during investigations (committing only to a 30-day initial update and post-closure follow-up), (2) quarterly reporting of whistleblowing activity to officers and directors, (3) annual notice to employees of their TFAIA rights, which must be &#8220;displayed at all times,&#8221; and (4) separate escalation processes when misconduct involves senior leadership. 
While OpenAI may implement these practices operationally, their absence from the policy leaves employees unclear about what protections and communications they can expect.</p><h3>Persistent Structural Gaps</h3><p>Beyond TFAIA-specific issues, several concerns from our original assessment remain unaddressed or unclear:</p><p>Legal Enforceability: The legal enforceability of anti-retaliation commitments beyond statutory protections is not established through binding language, nor are employees informed that such commitments may be voluntary. The policy provides no assurance that attorney-client privilege cannot be invoked over discussions on reports and investigations, which could weaken whistleblower protections in retaliation cases.</p><p>Independence and Governance: While the Compliance team&#8217;s role is strengthened, the degree of independence from Legal, HR, and executive interests remains ambiguous&#8212;governance details that would demonstrate structural independence are not provided.</p><p>Appeals and Escalation: The policy offers no information about appeals mechanisms or direct escalation paths (for example, to the Safety and Security Committee) when whistleblowers disagree with outcomes.</p><p>Confidentiality Protections: Consent requirements for sharing identity-related information appear only in the EU appendix, leaving employees in other jurisdictions without visibility into whether similar protections apply. Details on a broader confidentiality protection framework are likewise missing.</p><p>System Monitoring and Accountability: Finally, the policy provides no information about monitoring of the whistleblowing system, performance indicators for the whistleblowing function, or regular reporting on whistleblowing activity to the Board, employees, or public &#8212; leaving stakeholders unable to assess system effectiveness or organizational accountability.</p><p>Implications: OpenAI may implement these practices operationally. 
However, their absence from the policy may leave employees unclear about what protections they can expect: insiders may either report while assuming protections that do not apply, or stay silent when they would in fact be protected.</p><h3>Next Steps</h3><p>This is a draft assessment based on the policy document itself. We will be conducting a comprehensive evaluation against international best practice standards and California legal requirements, and will share our detailed findings with OpenAI for comment before publication &#8212; including seeking clarification on practices that may exist but are not documented in the employee-facing policy. While the expanded policy represents meaningful progress &#8212; particularly the dedicated Compliance team infrastructure and improved procedural transparency &#8212; well-informed insiders may refrain from using the channel if concerns about independence, legal enforceability, and system accountability are not addressed and clearly communicated, especially when contemplating reports involving senior leadership or catastrophic safety risks.</p><div><hr></div><h2>Evaluation of the 2024 Policy</h2><p>This evaluation assesses OpenAI&#8217;s original 3-page &#8220;<a href="https://cdn.openai.com/policies/raising-concerns-policy-blog-copy-202410.pdf">Raising Concerns Policy</a>&#8221; published in October 2024.
We shared our findings directly with OpenAI leadership prior to publication; OpenAI informed us that a policy revision was already underway, which has since resulted in the updated 2026 policy.</p><h3>Our Methodology</h3><p>We structured our analysis around established global standards for whistleblower protection:</p><ul><li><p><a href="https://www.transparency.org/en/publications/internal-whistleblowing-systems">Transparency International&#8217;s Internal Whistleblowing Systems Best Practice Principles</a></p></li><li><p>The EU Whistleblowing Directive, which OpenAI is also subject to</p></li><li><p>Data from the <a href="https://futureoflife.org/project/ai-safety-index/">Future of Life Institute AI Safety Index</a> to cross-reference policy claims with reported internal practices</p></li></ul><h3>Expert Review</h3><p>A network of legal and governance scholars specializing in whistleblower protection contributed to and reviewed our analysis:</p><ul><li><p><strong>Professor Wim Vandekerckhove</strong>, Professor of Business Ethics at EDHEC Business School &#8212; one of the world&#8217;s leading scholars on whistleblowing and convenor for ISO 37002, the international standard for internal whistleblowing systems</p></li><li><p><strong>Jennifer Gibson</strong>, Co-Founder &amp; Director of <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Psst...&quot;,&quot;id&quot;:289486162,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/63929a29-8944-4ebf-8f41-089016d2fb28_1000x1000.jpeg&quot;,&quot;uuid&quot;:&quot;12a46829-87f4-413c-983f-e5850c42dd39&quot;}" data-component-name="MentionToDOM"></span> &#8212; a nonprofit supporting tech industry whistleblowers; JD from Stanford University, California barred</p></li><li><p><strong>Kosmas Zittel &amp; Detlev B&#246;ttcher</strong>, Whistleblower-Netzwerk e.V. 
&#8212; Germany&#8217;s most prominent whistleblower support organization, under which AIWI is hosted</p></li><li><p><strong>Ashley Gj&#248;vik</strong>, Technologist and Whistleblower Advocate &#8212; former Apple employee who faced retaliation for internal whistleblowing</p></li><li><p><strong>Sonya Smallets</strong>, Partner at Minnis &amp; Smallets LLP &#8212; award-winning San Francisco labor lawyer specializing in wrongful termination</p></li></ul><p><strong>This analysis is not legal advice.</strong> Individuals considering raising concerns should consult qualified legal counsel before taking action.</p><p><a href="https://aiwi.org/wp-content/uploads/2026/01/Evaluating-OpenAIs-2024-Raising-Concerns-Policy260127.pdf">Download the Full 2024 Evaluation (PDF)</a></p><h3>Strengths of OpenAI&#8217;s 2024 Policy</h3><p>OpenAI&#8217;s policy contains important strengths that distinguish it from other frontier AI companies &#8212; none of whom have published their whistleblowing policies at this time.</p><p><strong>Key strengths include:</strong></p><ul><li><p><strong>Public and transparent</strong> &#8212; The policy is publicly available, signaling commitment to accountability</p></li><li><p><strong>Multiple reporting channels</strong> &#8212; Including an anonymous option with two-way communication</p></li><li><p><strong>Broad material scope</strong> &#8212; Covers suspected violations of the employee handbook and law</p></li><li><p><strong>Explicit non-retaliation commitment</strong> &#8212; Clear statement on disciplinary consequences for retaliation</p></li><li><p><strong>Extended personal scope</strong> &#8212; Covers contractors and extended workers, not just employees</p></li><li><p><strong>External channel references</strong> &#8212; Clear reference to federal whistleblowing authorities</p></li></ul><h3>Limitations of OpenAI&#8217;s 2024 Policy</h3><p>We identified seven priority issues that likely deter well-informed insiders from using OpenAI&#8217;s 
internal system for safety concerns:</p><ol><li><p><strong>Legal team control creates structural conflict of interest</strong> &#8212; The Legal team&#8217;s primary duty is to protect the company, not advocate for whistleblowers</p></li><li><p><strong>No independent governance or oversight structure</strong> &#8212; No evidence of board-level accountability for the system</p></li><li><p><strong>Absence of system monitoring and public accountability</strong> &#8212; No performance indicators or reporting on effectiveness</p></li><li><p><strong>No communication commitments to whistleblowers</strong> &#8212; Feedback is purely discretionary</p></li><li><p><strong>Unclear legal enforceability of protections</strong> &#8212; Standard at-will disclaimers likely override policy commitments</p></li><li><p><strong>Confusing material scope and report recipients</strong> &#8212; Unclear what actually triggers an independent investigation</p></li><li><p><strong>No documented confidentiality protection framework</strong> &#8212; No clarity on who can access whistleblower identities</p></li></ol><p><strong>Important note:</strong> These limitations should not be taken as evidence that OpenAI&#8217;s system is weak <em>compared to other frontier AI companies</em>. No other company has published their policy &#8212; we believe this makes it unlikely their systems are stronger.</p><p>For detailed analysis of each issue and our recommendations, <strong><a href="https://aiwi.org/wp-content/uploads/2026/01/Evaluating-OpenAIs-2024-Raising-Concerns-Policy260127.pdf">Download the full evaluation (PDF)</a></strong></p><div><hr></div><h2>What&#8217;s Next</h2><p>We&#8217;ll publish a comprehensive evaluation of the new policy against international best practices and California legal requirements. We&#8217;ll share our detailed findings with OpenAI before publication &#8212; the same approach we took with the original feedback here. 
Stay tuned!</p><div><hr></div><h2>How We Can Help</h2><p>Before reporting internally on high-stakes issues, we recommend obtaining independent legal counsel to understand your situation. An experienced whistleblower attorney can assess which legal protections actually apply to your specific situation, advise on whether internal reporting is safe or risky given your concern and jurisdiction, help you document concerns in legally protected ways, and prepare you for potential retaliation scenarios.</p><p><strong>The safest path is professional legal guidance &#8212; not reliance on company commitments alone.</strong></p><p>Even well-intentioned policies can have structural gaps.</p><p><strong><a href="https://aiwi.org/contact/">Reach out to us</a> or visit the <a href="https://aiwi.org/contact-hub">AIWI Contact Hub</a></strong> to connect with specialized nonprofit organizations and <strong>pro bono legal counsel</strong> who have helped hundreds of concerned insiders navigate this complex journey.</p><div><hr></div>]]></content:encoded></item><item><title><![CDATA[AI Whistleblower Defense Grants and Pro Bono Legal Support by LASST, Psst.org, and The Signals Network]]></title><description><![CDATA[AIWI can connect AI Insiders with organizations that offer pro bono legal support and provide grants to defray costs incurred by good faith AI whistleblowing cases. 
Find out how...]]></description><link>https://aiwhistleblowerinitiative.substack.com/p/ai-whistleblower-defense-grants-and</link><guid isPermaLink="false">https://aiwhistleblowerinitiative.substack.com/p/ai-whistleblower-defense-grants-and</guid><dc:creator><![CDATA[AIWI]]></dc:creator><pubDate>Thu, 18 Dec 2025 11:05:09 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!BjO-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64c81fdf-d77f-46eb-8ac4-680dbc64e500_1776x834.png" length="0" type="image/png"/><content:encoded><![CDATA[<blockquote><p><strong>Share this resource and guide with your colleagues or insiders you know who may need it</strong></p></blockquote><blockquote><p><strong><a href="https://aiwi.org/ai-whistleblower-defense-fund/">Click here to learn more about the listed organizations and compare their whistleblower defense grant programs</a></strong></p></blockquote><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!BjO-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64c81fdf-d77f-46eb-8ac4-680dbc64e500_1776x834.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!BjO-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64c81fdf-d77f-46eb-8ac4-680dbc64e500_1776x834.png 424w, https://substackcdn.com/image/fetch/$s_!BjO-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64c81fdf-d77f-46eb-8ac4-680dbc64e500_1776x834.png 848w, 
https://substackcdn.com/image/fetch/$s_!BjO-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64c81fdf-d77f-46eb-8ac4-680dbc64e500_1776x834.png 1272w, https://substackcdn.com/image/fetch/$s_!BjO-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64c81fdf-d77f-46eb-8ac4-680dbc64e500_1776x834.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!BjO-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64c81fdf-d77f-46eb-8ac4-680dbc64e500_1776x834.png" width="1456" height="684" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/64c81fdf-d77f-46eb-8ac4-680dbc64e500_1776x834.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:684,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:216957,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/i/181328620?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64c81fdf-d77f-46eb-8ac4-680dbc64e500_1776x834.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!BjO-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64c81fdf-d77f-46eb-8ac4-680dbc64e500_1776x834.png 424w, 
https://substackcdn.com/image/fetch/$s_!BjO-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64c81fdf-d77f-46eb-8ac4-680dbc64e500_1776x834.png 848w, https://substackcdn.com/image/fetch/$s_!BjO-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64c81fdf-d77f-46eb-8ac4-680dbc64e500_1776x834.png 1272w, https://substackcdn.com/image/fetch/$s_!BjO-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64c81fdf-d77f-46eb-8ac4-680dbc64e500_1776x834.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><div><hr></div><h2><strong>The Evidence for Whistleblower Impact</strong></h2><p>Whistleblowing remains one of the most effective tools for detecting wrongdoing. A <a href="https://www.whistleblowers.org/why-whistleblowing-works/">2007 PricewaterhouseCoopers study</a> found that whistleblowers detected 43% of fraudulent activities at private corporations, compared to only 19% detected by professional auditors. In fiscal year 2024, <a href="https://www.justice.gov/archives/opa/pr/false-claims-act-settlements-and-judgments-exceed-29b-fiscal-year-2024">the Department of Justice recovered over $2.9 billion</a> in False Claims Act cases, largely due to whistleblowers.</p><div class="pullquote"><p><strong>&#8220;Not all whistleblowing cases involve legal expenses, but when they do, financial concerns should not stop you from speaking up in the public interest.&#8221;</strong></p></div><p>In AI specifically, insider disclosures have already created measurable change:</p><ul><li><p>Daniel Kokotajlo&#8217;s public disclosure and SEC complaint regarding OpenAI led to <a href="https://www.bloomberg.com/news/articles/2024-05-24/openai-releases-former-staffers-from-nondisparagement-clauses">the cancellation of prohibitive non-disparagement agreements</a> at OpenAI in May 2024, <a href="https://thehill.com/policy/technology/4788030-openai-senators-press-ai-safety/">prompting senators to demand information from CEO Sam Altman</a>. 
<strong><a href="https://time.com/7324105/ai-whistleblower-act-sb-53/">TIME</a></strong> reported: </p><ul><li><p><strong>&#8220;OpenAI CEO Sam Altman <a href="https://x.com/sama/status/1791936857594581428">took to X</a> to deny any knowledge of these provisions, stating: </strong></p></li></ul></li></ul><blockquote><p>&#8216;This is on me and one of the few times I&#8217;ve been genuinely embarrassed running OpenAI.&#8217;&#8221;</p></blockquote><ul><li><p>Microsoft engineer <a href="https://edition.cnn.com/2024/03/06/tech/microsoft-ai-copilot-designer-ftc-warning/index.html">Shane Jones&#8217; disclosure of critical safety vulnerabilities</a> in Copilot Designer prompted <a href="https://www.cnbc.com/2024/03/06/microsoft-ai-engineer-says-copilot-designer-creates-disturbing-images.html">FTC scrutiny</a>, Senate inquiries, and the implementation of <a href="https://the-decoder.com/microsoft-makes-changes-to-its-ai-image-generator-after-employee-goes-to-ftc/">safety improvements</a>, including blocked prompts and enhanced filters.</p></li><li><p><a href="https://www.npr.org/2021/10/05/1043377310/facebook-whistleblower-frances-haugen-congress">Frances Haugen&#8217;s exposure</a> of tens of thousands of Meta internal documents triggered congressional hearings, <a href="https://www.sec.gov/enforcement-litigation/distributions-harmed-investors/sec-v-facebook-inc-case-no-319-cv-04241-jd-nd-cal">SEC complaints</a>, and global regulatory momentum, including <a href="https://fortune.com/2021/11/08/facebook-whistleblower-european-parliament-big-tech-eu/">the EU&#8217;s Digital Services Act</a>.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!YEUZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34a8a2dc-dbb9-455a-8de8-1592fb3bc4d7_768x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!YEUZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34a8a2dc-dbb9-455a-8de8-1592fb3bc4d7_768x768.png 424w, https://substackcdn.com/image/fetch/$s_!YEUZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34a8a2dc-dbb9-455a-8de8-1592fb3bc4d7_768x768.png 848w, https://substackcdn.com/image/fetch/$s_!YEUZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34a8a2dc-dbb9-455a-8de8-1592fb3bc4d7_768x768.png 1272w, https://substackcdn.com/image/fetch/$s_!YEUZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34a8a2dc-dbb9-455a-8de8-1592fb3bc4d7_768x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!YEUZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34a8a2dc-dbb9-455a-8de8-1592fb3bc4d7_768x768.png" width="358" height="358" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/34a8a2dc-dbb9-455a-8de8-1592fb3bc4d7_768x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:768,&quot;resizeWidth&quot;:358,&quot;bytes&quot;:102494,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/i/181328620?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34a8a2dc-dbb9-455a-8de8-1592fb3bc4d7_768x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!YEUZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34a8a2dc-dbb9-455a-8de8-1592fb3bc4d7_768x768.png 424w, https://substackcdn.com/image/fetch/$s_!YEUZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34a8a2dc-dbb9-455a-8de8-1592fb3bc4d7_768x768.png 848w, https://substackcdn.com/image/fetch/$s_!YEUZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34a8a2dc-dbb9-455a-8de8-1592fb3bc4d7_768x768.png 1272w, https://substackcdn.com/image/fetch/$s_!YEUZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F34a8a2dc-dbb9-455a-8de8-1592fb3bc4d7_768x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
style="height:20px;width:20px" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><h2><strong>The Financial Reality</strong></h2><p>However, whistleblowing can incur significant costs. <a href="https://wp.nyu.edu/compliance_enforcement/2021/06/24/how-costly-is-whistleblowing/">Research shows retaliation cases can result in income loss averaging 7.3% to 8.6% lower annually</a>, persisting one, five, and even ten years after filing. As reported by<em><a href="https://www.washingtonpost.com/technology/2025/12/15/big-tech-whistleblowers-speak-out/"> The Washington Post</a> </em>on one prominent case of former Meta (previously, Facebook) employee, Sarah Wynn-Williams:</p><blockquote><p>&#8220;Sarah has repeatedly alleged she is being pushed to financial ruin through the arbitration system in the U.K.,&#8221; Haigh said. 
&#8220;She is now on the verge of bankruptcy.&#8221;</p></blockquote><p>Legal fees in complex retaliation cases can escalate substantially, particularly when cases span multiple years. Additional expenses may include mental health support, security measures, and relocation costs.</p><p>The financial burden varies dramatically depending on case type, whether retaliation occurs, and proceeding duration. <a href="https://www.phillipsandcohen.com/whistleblower-rewards/">Some cases&#8212;particularly clear False Claims Act or SEC/CFTC cases&#8212;may involve minimal upfront costs while even offering financial <em>rewards</em></a> for whistleblowers. Most whistleblower attorneys work on contingency, meaning they only get paid if the case succeeds, and <a href="https://www.phillipsandcohen.com/whistleblower-rewards/">fee-shifting provisions often require defendants to pay attorney fees in successful cases</a>. However, complex retaliation cases without clear financial recovery mechanisms can require comprehensive support and span multiple years of litigation.</p><div><hr></div><h2><strong>Introducing AI Whistleblower Defense Grants</strong></h2><p>To address these financial barriers, we&#8217;ve created a comprehensive resource and guide identifying three vetted organizations that provide pro bono legal counsel and financial grants to AI industry whistleblowers:</p><ul><li><p><strong>&#128279; <a href="https://lasst.org/ai-safety-whistleblower-legal-defense-fund/">LASST</a></strong> - Specialized AI safety focus, US-based cases, supports whistleblowers and their supporters (incl. 
family, journalists, etc.)</p></li><li><p><strong>&#128279; <a href="https://psst.org/">Psst.org</a></strong> - Comprehensive tech/AI support, global coverage, offers the Psst.org <a href="https://psst.org/safe">&#8216;Safe&#8217;</a> online tool</p></li><li><p><strong>&#128279; <a href="https://thesignalsnetwork.org/whistleblower-protection-program/">The Signals Network</a></strong> - Broad whistleblowing experience since 2017, global coverage</p></li></ul><p>These organizations offer grants, pro bono counsel, psychological support, and media guidance to help you speak up safely.</p><h3><strong>What Makes These Resources Reliable</strong></h3><p>All three organizations:</p><ul><li><p>Provide attorney-client privileged consultations, including initial anonymous consultations.</p></li><li><p>Offer pro bono legal counsel based on decades of combined litigation and whistleblower defense experience.</p></li><li><p>Support both already-incurred and to-be-incurred expenses.</p></li><li><p>Have represented high-profile cases, including <a href="https://www.theguardian.com/news/2022/jul/11/uber-files-whistleblower-lobbyist-mark-macgann">Uber&#8217;s Mark MacGann</a> and <a href="https://time.com/6193231/facebook-crack-whip-black-whistleblower/">Facebook whistleblower Daniel Motaung</a> (The Signals Network) and a recent <a href="https://psst.org/blog/whatsapp-whistleblower">WhatsApp whistleblower</a> (Psst.org).</p></li><li><p>Evaluate cases individually and can coordinate together to meet comprehensive support needs.</p></li></ul><h3><strong>Understanding Your Coverage Options</strong></h3><p>The organizations differ in their scope:</p><ul><li><p>LASST focuses specifically on AI safety legal expenses, leveraging deep technical and legal expertise as well as networks in this domain. They cover attorney fees and legal proceeding costs for US-based cases. 
</p></li><li><p>Psst.org provides comprehensive support, including legal costs, mental health services, security/protection, living expenses, media/PR training, and career transition support for global tech and AI cases. Their unique &#8220;Safe&#8221; online tool allows potential collective disclosure.</p></li><li><p>The Signals Network offers similarly broad support across tech, healthcare, corporate, and government whistleblowing, with seven years of cross-industry experience.</p></li></ul><h3><strong>Why Early Legal Counsel Matters</strong></h3><p>Legal counsel is <strong>strongly</strong> recommended even when &#8220;just exploring&#8221; your support options, for several evidence-based reasons.</p><p>Legal counsel can:</p><ul><li><p>Help you understand your rights and potential costs.</p></li><li><p>Bring in technical experts under attorney-client privilege (protecting communications from future subpoenas).</p></li><li><p>Ensure you navigate disclosure channels that maximize protection while minimizing risk.</p></li></ul><p>Critically, escalating concerns internally may already expose you to retaliation risk. 
Early consultation prevents costly mistakes and ensures proper protection from the start.</p><h3><strong>Who This Guide Is For: When to Reach Out</strong></h3><h4>You have spoken up already</h4><p>You have spoken up already and are experiencing retaliation or incurring legal costs.</p><p>Don&#8217;t have legal counsel yet?</p><ul><li><p>Contact the <a href="https://aiwi.org/ai-whistleblower-defense-fund/#Specialized">organizations listed</a> for immediate pro bono support or attorney referrals.</p></li><li><p>Explore our <a href="https://aiwi.org/contact-hub/">AIWI Contact Hub</a> for additional vetted options for pro bono counsel.</p></li></ul><p>Already have legal counsel?</p><ul><li><p>Work with your lawyer to reach out to the <a href="https://aiwi.org/ai-whistleblower-defense-fund/#Specialized">organizations listed</a> directly.</p></li></ul><h4>You are thinking about speaking up</h4><p>You are thinking about speaking up or are ready to speak up, but are concerned about legal costs.</p><p>Don&#8217;t have legal counsel yet?</p><ul><li><p>Contact the <a href="https://aiwi.org/ai-whistleblower-defense-fund/#Specialized">organizations listed</a> for pro bono support and/or attorney referrals.</p></li><li><p>Explore our <a href="https://aiwi.org/contact-hub/">AIWI Contact Hub</a> for additional vetted options for pro bono counsel.</p></li></ul><p>Already have legal counsel?</p><ul><li><p>Discuss funding options with your lawyer before making initial contact with defense funds.</p></li><li><p>Have your counsel reach out to the <a href="https://aiwi.org/ai-whistleblower-defense-fund/#Specialized">organizations listed</a> to understand available support.</p></li></ul><p>Still assessing your situation?</p><ul><li><p>Use <a href="https://aiwi.org/third-opinion/">Third Opinion</a> for secure, anonymous guidance from AIWI to clarify your concern.</p></li></ul><div><hr></div><h2><strong>Access the Full Guide and 
Resource</strong></h2><p>Visit our <a href="https://aiwi.org/ai-whistleblower-defense-fund/">AI Whistleblower Defense Grants page</a> to:</p><ul><li><p>Compare detailed coverage across all three organizations.</p></li><li><p>Identify your situation and plan appropriate next steps.</p></li><li><p>Access additional resources through AIWI&#8217;s Contact Hub.</p></li><li><p>Review frequently asked questions about costs, timing, and process.</p></li></ul><div><hr></div><h2><strong>Legal Disclaimer</strong></h2><blockquote><p>&#8220;These programs are designed to reduce barriers to good faith whistleblowing in the public interest based on legally-obtained information. None of these programs intends to or will support illegal activity or the disclosure of illegally-obtained information, including information obtained through hacking or social engineering.&#8221;</p></blockquote><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Anthropic Publishes Their (Partial) Whistleblowing Policy - First Thoughts And Context]]></title><description><![CDATA[As first reported by The Transformer, Anthropic has published their Responsible Scaling Policy Non Compliance Policy, outlining how individuals working for Anthropic can report instances of suspected RSP non-compliance within the company:]]></description><link>https://aiwhistleblowerinitiative.substack.com/p/anthropic-publishes-their-partial</link><guid isPermaLink="false">https://aiwhistleblowerinitiative.substack.com/p/anthropic-publishes-their-partial</guid><dc:creator><![CDATA[AIWI]]></dc:creator><pubDate>Mon, 08 Dec 2025 15:29:54 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ff344d40-12a7-4107-8466-4b23d905f01d_1456x1048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As first reported by <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;James 
Ball&quot;,&quot;id&quot;:1757381,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!qgV8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff177d2f9-67c3-4cc2-bd05-595777d9d936_1176x1176.jpeg&quot;,&quot;uuid&quot;:&quot;b93980a3-d441-4972-a5b5-35cc3df2cafe&quot;}" data-component-name="MentionToDOM"></span> in <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Transformer&quot;,&quot;id&quot;:1688188,&quot;type&quot;:&quot;pub&quot;,&quot;url&quot;:&quot;https://open.substack.com/pub/transformernews&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;uuid&quot;:&quot;04a0775f-0951-4ac1-81ad-9f7009f11223&quot;}" data-component-name="MentionToDOM"></span> (<a href="https://www.transformernews.ai/p/can-ai-embrace-whistleblowing-anthropic-openai">Article link)</a>, Anthropic has now published their &#8220;RSP Noncompliance and Anti-Retaliation Policy&#8221; outlining how individuals working for Anthropic can report instances of suspected RSP noncompliance within the company:</p><p>Find here their&#8230;</p><ol><li><p><a href="https://www-cdn.anthropic.com/fcf136d0f2204e2184f73c6bd082bea27f2d631b/RSP%20Noncompliance%20Reporting%20and%20Anti-Retaliation%20Policy%20(Final%202025.12.04).pdf">RSP noncompliance and anti-retaliation policy</a></p></li><li><p><a href="https://www-cdn.anthropic.com/872c653b2d0501d6ab44cf87f43e1dc4853e4d37.pdf">Responsible Scaling Policy</a> (RSP). 
<br>This also includes a high-level commitment to tracking and escalating to the board cases of violations of the (above) non-compliance policy, as well as a commitment to regularly update the board on &#8220;substantial&#8221; cases of RSP non-compliance &#8220;and overall trends.&#8221;</p></li></ol><p>Anthropic is the second leading AI company to publish their policy on raising concerns, after OpenAI, who published <a href="https://cdn.openai.com/policies/raising-concerns-policy-blog-copy-202410.pdf">theirs</a> in October 2024.</p><p>Anthropic is the first frontier AI company to publicly commit to ongoing monitoring and reviews of their internal whistleblowing system. Publishing usage and outcome reports from these reviews would make Anthropic the first AI company to achieve <a href="https://aiwi.org/publishyourpolicies/#Ratings">&#8220;Level 2 Whistleblowing Transparency&#8221;</a> globally.</p><p>We are pleased to see that Anthropic is taking this step&#8212;especially in the absence of any regulatory or scandal-driven pressure.</p><div><hr></div><h3><strong>First Takes</strong></h3><p>The chart below shows the whistleblowing system transparency of major frontier AI companies. Please note that it <strong>only evaluates the transparency</strong> of the policy and outcome reporting&#8212;not the content or quality of the underlying system, protections, culture, or past patterns of retaliation. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!D2yv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3e07444-8979-443a-a055-6d481a5ca39e_1456x1048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!D2yv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3e07444-8979-443a-a055-6d481a5ca39e_1456x1048.png 424w, https://substackcdn.com/image/fetch/$s_!D2yv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3e07444-8979-443a-a055-6d481a5ca39e_1456x1048.png 848w, https://substackcdn.com/image/fetch/$s_!D2yv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3e07444-8979-443a-a055-6d481a5ca39e_1456x1048.png 1272w, https://substackcdn.com/image/fetch/$s_!D2yv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3e07444-8979-443a-a055-6d481a5ca39e_1456x1048.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!D2yv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3e07444-8979-443a-a055-6d481a5ca39e_1456x1048.png" width="1456" height="1048" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f3e07444-8979-443a-a055-6d481a5ca39e_1456x1048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1048,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:42922,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/i/181036168?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3e07444-8979-443a-a055-6d481a5ca39e_1456x1048.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!D2yv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3e07444-8979-443a-a055-6d481a5ca39e_1456x1048.png 424w, https://substackcdn.com/image/fetch/$s_!D2yv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3e07444-8979-443a-a055-6d481a5ca39e_1456x1048.png 848w, https://substackcdn.com/image/fetch/$s_!D2yv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3e07444-8979-443a-a055-6d481a5ca39e_1456x1048.png 1272w, https://substackcdn.com/image/fetch/$s_!D2yv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3e07444-8979-443a-a055-6d481a5ca39e_1456x1048.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg 
role="img" style="height:20px;width:20px" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Anthropic has now achieved &#8220;Level 1 Transparency&#8221; on their whistleblowing system. They can achieve &#8220;Level 2&#8221; (green) by publishing usage and outcome reporting. For details, visit<a href="http://publishyourpolicies.org"> publishyourpolicies.org</a></figcaption></figure></div><p>We will publish an in-depth evaluation relating to the <em>quality</em> of Anthropic&#8217;s and OpenAI&#8217;s policy at a later point<strong> (this is your chance to subscribe if you don&#8217;t want to miss the in-depth evaluation). 
</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://aiwhistleblowerinitiative.substack.com/subscribe?"><span>Subscribe now</span></a></p><p>We would like to highlight a few initial items of interest; we will aim to clarify these before publishing a more in-depth review: </p><ol><li><p>The policy is explicitly not a &#8216;complete&#8217; whistleblowing policy, i.e., it doesn&#8217;t cover processes for how violations of the law or non-RSP-related misconduct are handled. Anthropic states that other policies and processes within Anthropic cover such cases. These policies are not yet public. <br>Anthropic&#8217;s tooling for submitting reports (NAVEX), however, also covers reports relating to non-RSP-related violations of the law, and the policy does talk about &#8220;protected activities&#8221;, i.e., reporting that is legally protected (which <em>usually</em> means reporting violations of a law). This is a bit confusing, and we aim to clarify this before we publish any in-depth commentary on this policy.</p></li><li><p>Anthropic uses NAVEX as a tool for anonymous reporting. Reports are directed to the Responsible Scaling Officer (&#8220;RSO&#8221;, currently <a href="https://www.theguardian.com/technology/ng-interactive/2025/dec/02/jared-kaplan-artificial-intelligence-train-itself">Jared Kaplan</a>). If reports relate to the behaviour of said RSO, NAVEX allows for reports to be sent directly to the President of Anthropic, according to the policy.</p></li><li><p>Anthropic commits to &#8220;measure and verify compliance to this policy through various methods, including but not limited to ongoing monitoring, and both internal and external reviews&#8221;. 
We do not have more details on what is measured, how it is reviewed, etc. (<a href="https://aiwi.org/publishyourpolicies/#Ratings">See Level 2: Outcome Transparency</a>) &#8212; but this still sounds promising.</p></li><li><p>The &#8220;protected activity&#8221; section states that Anthropic considers the internal raising of concerns relating to RSP non-compliance legally protected from retaliation. This is very interesting; most commitments in whistleblowing policies that go beyond the letter of the law are non-binding. This statement can be rooted in various legal bases, and we aim to clarify this before we publish any in-depth commentary on this policy.</p></li><li><p>Covered persons, i.e., those who are allowed to use the channel, seem to include research partners/eval organizations. We use &#8220;seem&#8221; here because the &#8220;Scope&#8221; section states that the policy is written &#8220;specifically&#8221; for employees and Board members, but that they &#8220;expect&#8221; members of the extended workforce to also report concerns. At the same time, Anthropic states in its answers to the AI Safety Index that &#8220;AI research collaborators and academic partners&#8221; as well as &#8220;individuals assisting whistleblowers&#8221; were protected from retaliation under their policy. Again, we will aim to clarify this before we publish any in-depth commentary on this policy.</p></li></ol><p>FLI also published their <a href="https://futureoflife.org/wp-content/uploads/2025/12/AI-Safety-Index-Report_011225_Full_Report_Digital.pdf">AI Safety Index last week</a>, which includes a section on whistleblowing policy transparency and quality. Find Anthropic&#8217;s answers to the questionnaire starting on page 101.</p><div><hr></div><h3><strong>Why This Matters</strong></h3><p>AI companies&#8217; whistleblowing policies provide evidence of how their internal reporting channels operate. 
This is important both for employees who may need to use them and for the public: Insiders may be the first to spot risks that concern us all. However, if details on companies&#8217; whistleblowing systems aren&#8217;t published, we can&#8217;t verify if these channels are safe.</p><p>Currently, the majority of insiders reporting concerns do so internally first (75% of successful cases at the <a href="https://secwhistlebloweradvocate.com/sec-whistleblower-facts-and-figures/#:~:text=Successful%20SEC%20Whistleblowers%20in%20FY2021%20*%20that,25%25%20*%20that%20filed%20jointly:%20Approximately%2020%25">SEC, 2021</a>). We expect this number to be equally high or higher in AI, where information asymmetries between regulators and companies can be stark. </p><p>At the same time, 95% of retaliation cases documented by the SEC involve individuals who first reported internally. This has led the National Whistleblower Center, a non-profit led by Stephen Kohn, an SEC whistleblowing lawyer, to <a href="https://www.whistleblowers.org/internal-reporting-channels/">recommend urgent caution</a> to insiders who consider using the company&#8217;s internal whistleblowing channels. </p><p>We should therefore all be highly interested in the extent to which these channels are safe for insiders and effective at detecting and rectifying issues. Evaluating whistleblowing policies is a first step in this direction. </p><p>Public feedback can help employees (many of whom are not aware their system even exists until it is too late) understand their whistleblowing systems and lead to whistleblowing system <em>improvements</em>. This not only benefits employees and the public, but companies too: <a href="https://aiwi.org/publishyourpolicies/#evidence-4">Evidence </a>shows that well-structured internal reporting and speak-up systems reduce misconduct, enable early detection of risks, and can prevent small issues from escalating into major crises. 
</p><p>Better whistleblowing systems also lead to higher employee satisfaction, loyalty, and second-order effects like improved research results and innovation.</p><div><hr></div><h3><strong>Step One: Publish, Step Two: Evaluate, Step Three: Demonstrate</strong></h3><p>While common in industries outside AI, such transparency remains rare among AI companies: Only OpenAI had previously published <a href="https://cdn.openai.com/policies/raising-concerns-policy-blog-copy-202410.pdf">their policy</a>, following <a href="https://www.theguardian.com/technology/article/2024/jul/14/us-financial-watchdog-urged-to-investigate-ndas-at-openai">its scandals in 2024.</a> GDM, xAI, Meta, and Mistral have not published their policies or have only published them in fragments.</p><p>Anthropic is therefore taking a meaningful step, as the policy&#8217;s publication allows us, the public, to provide feedback on the strength of their system&#8217;s protection. </p><p>Policy publication for public feedback is <a href="http://aiwi.org/publishyourpolicies/#Ratings">Level 1 Transparency</a>, and AIWI will be publishing an in-depth evaluation of both Anthropic&#8217;s and OpenAI&#8217;s policies based on Transparency International&#8217;s Internal Whistleblowing Systems Assessment Framework. </p><p>At the same time, policies are statements, not actions, and therefore only one indicator of how good an internal system may be. A much stronger commitment to a &#8216;speak up&#8217; culture is demonstrated through consistent measurement and improvement: what we call <a href="http://aiwi.org/publishyourpolicies/#Ratings">Level 2 Transparency</a>.</p><p>Companies should track their systems&#8217; usage and outcomes over time using qualitative and quantitative <a href="https://www.transparency.org/en/publications/monitoring-internal-whistleblowing-systems">indicators</a> to ensure the system <em>actually protects reporters and uncovers and corrects issues</em>. 
Such indicators include the number of reports received, anonymity rates indicating trust levels, retaliation rates, and whistleblower satisfaction with the process through surveys or interviews. Companies should analyze the results and take action to improve their systems - just as they would with any other business process they care about.</p><p>We&#8217;re excited to review Anthropic&#8217;s policy and see if they plan to also publish, in a redacted form, their whistleblowing system&#8217;s usage and outcomes. </p><p>Anthropic has previously informally provided information on their whistleblowing system&#8217;s usage in their RSP review with <a href="https://metr.org/2025_pilot_risk_report_metr_review.pdf">METR</a>. Committing to such reporting on a regular basis would be a strong signal that Anthropic is serious about their internal whistleblowing process (i.e., measuring to improve).</p><p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CBmo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9643c5a8-8563-470f-8362-25433f7f09c3_1184x868.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CBmo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9643c5a8-8563-470f-8362-25433f7f09c3_1184x868.png 424w, https://substackcdn.com/image/fetch/$s_!CBmo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9643c5a8-8563-470f-8362-25433f7f09c3_1184x868.png 848w, 
https://substackcdn.com/image/fetch/$s_!CBmo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9643c5a8-8563-470f-8362-25433f7f09c3_1184x868.png 1272w, https://substackcdn.com/image/fetch/$s_!CBmo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9643c5a8-8563-470f-8362-25433f7f09c3_1184x868.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!CBmo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9643c5a8-8563-470f-8362-25433f7f09c3_1184x868.png" width="1184" height="868" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9643c5a8-8563-470f-8362-25433f7f09c3_1184x868.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:868,&quot;width&quot;:1184,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:61749,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/i/181036168?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9643c5a8-8563-470f-8362-25433f7f09c3_1184x868.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!CBmo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9643c5a8-8563-470f-8362-25433f7f09c3_1184x868.png 424w, 
https://substackcdn.com/image/fetch/$s_!CBmo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9643c5a8-8563-470f-8362-25433f7f09c3_1184x868.png 848w, https://substackcdn.com/image/fetch/$s_!CBmo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9643c5a8-8563-470f-8362-25433f7f09c3_1184x868.png 1272w, https://substackcdn.com/image/fetch/$s_!CBmo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9643c5a8-8563-470f-8362-25433f7f09c3_1184x868.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Anthropic has committed to monitoring and reviewing their whistleblowing system. Publishing these reports would make them the first AI company to achieve <a href="https://aiwi.org/publishyourpolicies/#Ratings">&#8220;Level 2 Whistleblowing Transparency&#8221;</a> globally.   </figcaption></figure></div><p><strong>Sources:</strong> [2] <a href="https://cdn.openai.com/policies/raising-concerns-policy-blog-copy-202410.pdf">Link</a>, [3] <a href="https://openai.integrityline.com/">Link</a>, [4] <a href="https://www-cdn.anthropic.com/fcf136d0f2204e2184f73c6bd082bea27f2d631b/RSP%20Noncompliance%20Reporting%20and%20Anti-Retaliation%20Policy%20(Final%202025.12.04).pdf">Link</a>, [5] <a href="https://x.ai/documents/2025.02.20-RMF-Draft.pdf">Link</a>, [6] <a href="https://s21.q4cdn.com/399680738/files/doc_downloads/2024/12/Code-of-Conduct-2024.pdf">Link</a>, [7] <a href="https://www.meta.com/people-practices/harassment-policy/#:~:text=VIII.&amp;text=Meta%20has%20a%20legal%20obligation,prompt%20action%20to%20end%20harassment">Link</a>, [8] <a href="https://fb.integrityline.com/">Link</a>, [9] <a href="https://abc.xyz/investor/google-code-of-conduct/">Link</a>, [10] <a href="https://trust.mistral.ai/resources">Link</a></p><div><hr></div><h3><strong>Context: AIWI&#8217;s Publish Your Policies Campaign</strong></h3><p>For the reasons outlined above, AIWI launched the&nbsp;<a href="http://aiwi.org/publishyourpolicies/">Publish Your Policies&nbsp;</a>campaign in July 2025 and was joined by 35+ signatories, including former AI company employees, legal experts, and academics. There is still work to be done: four out of six major AI companies currently do not publish their whistleblowing policies. 
If you work at any of the four companies above, ask your leadership: Why not just publish your policies?</p><p>Find out more about the campaign at <a href="https://aiwi.org/publishyourpolicies/">https://aiwi.org/publishyourpolicies/</a>.</p>]]></content:encoded></item><item><title><![CDATA[A Global First: EU AI Office Launches AI-Specialized Whistleblowing Channel ]]></title><description><![CDATA[Why This Matters: Expected Public Benefits, Insights for AI Insiders, and more...]]></description><link>https://aiwhistleblowerinitiative.substack.com/p/a-global-first-eu-ai-office-launches</link><guid isPermaLink="false">https://aiwhistleblowerinitiative.substack.com/p/a-global-first-eu-ai-office-launches</guid><dc:creator><![CDATA[Chandini Stensel]]></dc:creator><pubDate>Fri, 28 Nov 2025 10:00:34 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d6556ea4-ff3d-41f7-8ad9-c8685c4b9eab_1456x1048.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The EU AI Office (EU AIO) has launched a whistleblowing channel dedicated to AI professionals&#8212;the first of its kind globally. Professionals are protected* and can now anonymously report EU AI Act violations (and more) directly to the EU AIO, which aims to investigate breaches and commits to providing feedback to whistleblowers. <a href="https://ai-act-whistleblower.integrityline.app/">Find the tool here.</a></p><p>As a vocal advocate for this since February 2025, we are excited to see it materialise for the benefit of frontier AI insiders and the public at large. </p><p>We had the chance to provide subject matter input to the EU AIO on establishing this channel as part of a research coalition&#8212;thank you especially to Abra Ganz, Naomi Colvin, Dimitrios Kafteranis, Dr. Simon Gerdemann, Kosmas Zittel, and Marie Terracol for the great collaboration. 
See an extended disclaimer at the end of this text.</p><p>*Retaliation protections don&#8217;t explicitly begin until mid-2026 for AI Act violations specifically. Insiders may still, however, already be protected from retaliation today. See section &#8220;Current Limitations&#8221; for details. In addition, anonymity is the &#8216;default&#8217; of the channel and confidentiality protections of the channel appear strong. Risks from retaliation arise from a whistleblower&#8217;s identity becoming known. </p><div><hr></div><h3>Code of Practice Helped Set the Stage for AI Whistleblowing Channel</h3><p>It&#8217;s worth recalling the context in which this channel was formed. The EU AI Act&#8217;s coverage under the EU Whistleblowing Directive was enshrined in <a href="https://artificialintelligenceact.eu/article/87/">Article 87 </a>of the EU AI Act. However, this came without an explicit requirement for an EU-level whistleblowing channel. </p><p><a href="https://code-of-practice.ai/?section=safety-security">The General-Purpose AI Code of Practice (CoP)</a>&#8212;a framework developed under the EU AI Office with input from 13 appointed, independent chairs and vice-chairs&#8212;highlighted the need for stronger avenues for responsible disclosure. </p><p>This specific advocacy forms part of the CoP&#8217;s wider purpose, which identifies ways that General-Purpose AI (GPAI) providers, i.e., providers of the most advanced models, can comply with the EU AI Act&#8217;s obligations. </p><p>In their accompanying statement to the <a href="https://code-of-practice.ai/?section=safety-security#chair-statement">CoP&#8217;s Safety and Security section</a>, the chair and vice-chair contributors urged the creation of a dedicated whistleblowing channel for AI-related disclosures. They warned that without a protected route for confidential reporting, even serious incidents might never reach regulators. 
</p><div><hr></div><p>We share below our thoughts on the channel, based on the public information <a href="https://ai-act-whistleblower.integrityline.app/">posted on the EU AI Office (EU AIO) Commission Website</a> on the 24th November 2025. </p><h3>Key Take-Aways</h3><ul><li><p><strong>What can be reported via the channel:</strong> &#8220;Information about any internal practices of AI model providers that may violate the obligations set out in the AI Act, or on any other activities that could endanger fundamental rights, health, or public trust, is welcome.&#8221;</p></li><li><p><strong>Who can use it:</strong> Anyone globally who has gained access to the above information in a professional context, including employees, contractors, suppliers etc.*</p></li><li><p><strong>Notable features</strong>: Anonymous reporting; seemingly strong confidentiality protections (however, no retaliation protection for reporting EU AI Act violations until Aug 2026); and insiders may already be protected if the violation they raise a concern about is covered under another EU law subject to the EU Whistleblower Directive.</p></li><li><p><strong>U.S. insiders</strong>: May qualify for SEC protection (note: requires legal counsel).</p></li><li><p><strong>Next step for insiders</strong>: If you suspect a violation, find pro bono legal counsel to consult with before your outreach. Visit <a href="https://aiwi.org/contact-hub/">aiwi.org/contact-hub</a> for pro bono legal guidance. 
</p></li></ul><p>*Retaliation protections under the EU Whistleblowing Directive will be effective for individuals whose contracts are governed under EU law and if their disclosure otherwise qualifies for protection under the EU Whistleblowing Directive.</p><div><hr></div><h3>Specifics: What The Channel Does</h3><p>The EU AIO reporting channel will (at a high level):</p><ul><li><p>Offer advice on the whistleblowing process and the channel&#8217;s appropriateness in terms of the concern(s) reported.</p></li><li><p>Refer out-of-scope cases to appropriate member state channels.</p></li><li><p>Investigate EU AI Act violations and enforce compliance.</p></li><li><p>Provide updates to whistleblowers throughout the process.</p></li></ul><div><hr></div><h3>Why This Matters &amp; Expected (Public) Benefits</h3><p><strong>Our survey* of frontier AI insiders revealed</strong> <strong>that</strong> <strong>they urgently need a clear &#8220;place to go&#8221; for raising concerns:</strong></p><blockquote><p><em><strong>Anonymous frontier company insider 1:</strong> &#8220;Having an established, dedicated reporting channel would be extremely valuable&#8212;one where you could feel certain that sharing information through it is both authorized and appropriate.&#8221;</em></p><p><em><strong>Anonymous frontier company insider 2:</strong> &#8220;[A dedicated channel] would help normalize and institutionalize whistleblowing, making it a routine and anticipated practice supported by standardized procedures.&#8221;</em></p></blockquote><p>*Survey by AIWI with frontier AI company insiders, 2025</p><p>For the first time ever, <strong>AI insiders now have such a clear place to direct concerns they witness.</strong> Further, understanding whether &#8216;one should be concerned&#8217; is not trivial in frontier AI. 
Besides enforcement of the EU AI Act, an expert-staffed channel now exists for anonymous evaluation of concerns, including the power to intervene where appropriate.</p><h4><strong>Second-order benefits of this channel:</strong></h4><ul><li><p><strong>Stronger internal company whistleblowing channels:</strong> Companies will need to educate their employees on the EU AIO channel&#8217;s existence.<strong> </strong>Stronger regulatory channels incentivize more independent, safer internal company whistleblowing channels. These contribute to effectively addressing and deterring misconduct. We have seen this in the EU and the US (<a href="https://onlinelibrary.wiley.com/doi/10.1111/1911-3846.12884">Source</a>).</p></li><li><p><strong>A template for Global AI Whistleblowing Channels:</strong> This channel sets precedent for AI whistleblowing channels globally&#8212;especially in the United States, where California&#8217;s Attorney General will introduce a whistleblower channel in line with SB53, or in the context of the envisioned <a href="https://www.congress.gov/bill/119th-congress/senate-bill/1792/text">AI Whistleblower Protection Act</a>.</p></li><li><p><strong>Setting precedent for other EU-level whistleblowing channels:</strong> Demonstration that EU-level bodies can commit to the EU Whistleblowing Directive, encouraging other EU-level bodies to follow suit.</p></li></ul><div><hr></div><h3>What AI Insiders Should Know</h3><ol><li><p><strong>Familiarise yourself with the EU AI Act</strong> to recognize potential violations. As a starting point, examine Article 5, <a href="https://artificialintelligenceact.eu/article/5/">prohibited AI practices</a>.</p></li><li><p><strong>Seek legal counsel before reporting.</strong> Contact (pro bono) lawyers through <a href="https://aiwi.org/contact-hub/">aiwi.org/contact-hub</a> to understand your protections. 
US-based insiders should note: you may be protected under the SEC whistleblower program when reporting to the EU AI Office, but this requires a separate procedure. Please contact AIWI for guidance or a (pro bono) lawyer directly for further details&#8212;you can find pro bono whistleblowing lawyers interested in AI at <a href="https://aiwi.org/contact-hub">aiwi.org/contact-hub</a>. Their legal expertise will be supplemented by AIWI&#8217;s AI expertise and expert network.<br>On a related note: If you are an insider with a concern but, for whatever reason, would not want to approach legal counsel, you can anonymously submit a question to AIWI <em>about your concern</em> under <a href="https://aiwi.org/third-opinion/">aiwi.org/third-opinion</a>&#8212;without revealing any confidential information.</p></li><li><p><strong>You do NOT need definite proof of a violation</strong>&#8212;<strong>and will receive feedback regardless: </strong>To be protected from retaliation under the EU Whistleblowing Directive (see below for context), you only need &#8220;reasonable grounds to believe&#8221; that the information reported was true at the time of reporting and may have constituted a violation of the law. Even if you do not qualify for protections, the EU AI Office seemingly takes confidentiality very seriously, evidenced by their strict confidentiality policy (see section below). This means the EU AIO can assess your concern and provide you with feedback regardless of whether your report constitutes a violation of the law. </p></li><li><p><strong>Understand timing.</strong> EU retaliation protections for whistleblowers begin in August 2026. Until then, you will not have legal protection from retaliation in the EU but will rely solely on the EU AI Office&#8217;s confidentiality protections. (US-based employees may qualify for protections from the SEC, but this requires a separate procedure.)</p></li></ol><div><hr></div><h3>The Situation For International, e.g. 
US-based, Whistleblowers</h3><p><strong>Individuals working outside the EU can also use the EU AIO channel</strong>.</p><p>It is not uncommon for whistleblowing channels to be open internationally: The SEC whistleblower program, for instance, handed out 20% of whistleblower rewards to individuals outside the US in 2021 (<a href="https://www.forbes.com/sites/erikakelton/2021/11/30/watch-out-wall-street--record-number-of-whistleblowers-flock-to-the-sec/">Source</a>). The EU AIO follows this model of encouraging international reports.</p><p>A California-based employee could therefore use this channel, though they won&#8217;t receive EU retaliation protections. <strong>Why might they still report concerns to the EU AIO? For the same reasons that European whistleblowers approach the SEC</strong> (besides rewards, which the EU AIO does not offer):</p><ol><li><p><strong>Strong confidentiality protections.</strong></p></li><li><p><strong>Subject-matter expertise.</strong></p></li><li><p><strong>Trust in enforcement capabilities.</strong></p></li></ol><p><br>The EU AIO&#8217;s confidentiality protections read well, and their specialized AI expertise may be uniquely suited to understanding and acting on reports.</p><p>In addition: <strong>US insiders might still be able to gain retaliation protection</strong> through the SEC whistleblower program if their information shows the company materially misled shareholders. Reporting to the EU AIO does not automatically forfeit this protection.</p><p><strong>Important:</strong> Consult legal counsel before submitting. 
AIWI can connect you to pro bono lawyers with AI expertise&#8212;even anonymously&#8212;through <a href="https://aiwi.org/contact-hub/">aiwi.org/contact-hub</a>.</p><div><hr></div><h3>What Makes This Channel Strong</h3><ul><li><p><strong>Anonymous, ISO-certified platform</strong> for secure two-way communication.</p></li><li><p><strong>First EU-level channel</strong> committing to strict confidentiality under the EU Whistleblowing Directive:</p><ul><li><p><strong>Three dedicated staff</strong> members receive all reports.</p></li><li><p><strong>The whistleblower&#8217;s written consent is required</strong> to share any identifying information with third parties.</p></li><li><p><strong>Dedicated confidentiality policy</strong> (<a href="https://ai-act-whistleblower.integrityline.app/app-page;appPageName=What%20can%20be%20reported">Link</a>) with explicit commitments.</p></li></ul></li><li><p><strong>Retaliation protections</strong> for &#8220;reasonable cause&#8221; suspicions&#8212;even if no actual violation occurred or if the EU AI Office isn&#8217;t the appropriate channel (protections provided by your employment jurisdiction). <em>In some cases, protections only apply from August 2026.</em></p></li><li><p><strong>Quick confirmation (7 days) </strong>and <strong>updates on the results of the investigation <br>(3 months by default, up to 6 months)</strong>, as committed to by the EU AI Office, possibly including more frequent updates along the way.</p></li></ul><p><strong>Note:</strong> We cannot verify adherence to these commitments in practice, but the framework looks very promising.</p><div><hr></div><h3>Current Limitations</h3><ol><li><p><strong>Retaliation protections don&#8217;t explicitly begin until mid-2026</strong> for AI Act violations specifically. 
<strong>However:</strong></p></li></ol><ul><li><p>Many AI Act violations may already fall under existing product safety laws, meaning that reporting suspicions of such violations to the EU AI Office will grant you retaliation protections. Should it not be the responsible authority, the EU AI Office can also direct you to appropriate reporting channels covering those laws.</p></li><li><p>We would argue that the EU AI Act is <em>already</em> covered under the Whistleblowing Directive because the EU AI Act is a Product Safety Law&#8212;Product Safety Laws are <em>already </em>covered under the EU Whistleblowing Directive today. </p></li><li><p>Anonymity is the default, and confidentiality protections appear strong. Risks from retaliation arise from a whistleblower&#8217;s identity becoming known. </p></li></ul><ol start="2"><li><p><strong>Non-EU individuals lack EU retaliation protections. However</strong>:</p></li></ol><ul><li><p>This mirrors successful models like the SEC, which actively encourages international whistleblowers through strong confidentiality.</p></li><li><p>US individuals may, in some cases, gain SEC whistleblower protections when reporting EU AI Act violations.</p></li></ul><ol start="3"><li><p><strong>Enforcement capability remains to be proven in practice.</strong></p></li></ol><div><hr></div><h3>What Matters Next</h3><p>The EU AI Office must now implement its confidentiality processes and conduct planned trainings as outlined in its communications. We hope they will handle early cases with particular care.</p><div><hr></div><h4>Disclaimer</h4><p>AIWI was a contributing stakeholder in developing the EU AI Office&#8217;s whistleblower channel. Our involvement was limited to providing subject-matter expertise during the communication and policy drafting phases. 
Our comments here align with our previous input to the EU AI Office and reflect our commitment to effective AI whistleblowing regulation globally.</p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support AIWI&#8217;s work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[OpenAI's Subpoenas, Pre-emption: AI Whistleblower Protections Under Threat, Adler on NVIDIA Pressure, Amazon's Water Usage, SB 243, MIRI Paper: International Agreements on AI]]></title><description><![CDATA[INSIDE AI #21: OpenAI's subpoenas against non-profits, Amazon's hidden datacenter water use, Meta's 600 lay-offs, A double whistleblower, SB243 signed & more...]]></description><link>https://aiwhistleblowerinitiative.substack.com/p/openais-subpoenas-pre-emption-and</link><guid isPermaLink="false">https://aiwhistleblowerinitiative.substack.com/p/openais-subpoenas-pre-emption-and</guid><pubDate>Mon, 24 Nov 2025 18:02:34 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3ddfd59d-bf70-4509-a861-9c11a5fa5f54_1456x1048.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>In This Edition:</strong></h2><ul><li><p>Edition Highlight:</p><ul><li><p>National Whistleblower Center (NWC): AI Moratorium Puts AI Whistleblowers Under Threat 
and How You Can Take Action</p></li></ul></li><li><p>Insider Currents:</p><ul><li><p>Summarizing OpenAI&#8217;s Subpoenas of Nonprofits During Its $500B Restructuring</p></li><li><p>Insider Speaks Out About Amazon&#8217;s Strategy to Keep Secret Its Data Centres&#8217; Full Water Use</p></li><li><p>&#8220;Make Sure Your Boss Has Your Back&#8221;: Steven Adler on NVIDIA&#8217;s Alleged Influence Over AI Policy Voiced By Anonymous Sources</p></li><li><p>&#8220;Bloated&#8221; AI Unit: Meta Insiders Describe Reasoning Behind 600 Layoffs</p></li><li><p>Trump&#8217;s AI Export Program Rollout Exposes Government Dysfunction</p></li><li><p>Double Whistleblower Exposes Corporate Compliance Failures at Uber and Veon</p></li><li><p>AI Labs&#8217; 100-Hour Work Weeks: The New Normal?</p></li><li><p>Document Reveals Mustafa Suleyman&#8217;s Updated AI Team Includes Five Ex-Google Hires</p></li></ul></li><li><p>Policy Updates:</p><ul><li><p>California is the First State to Regulate AI Companion Chatbots</p></li><li><p>The GUARD Act and the Push to Define AI &#8220;Companionship&#8221;</p></li></ul></li><li><p>Research Highlight: </p><ul><li><p>MIRI Paper On International AI Agreement Includes Whistleblower Protections</p></li></ul></li><li><p>Announcements:</p><ul><li><p>AIWI Comments on New Anthropic Sabotage Risk Report</p></li><li><p>Resource Highlight: The Double Black Box by Ashley S. Deeks</p></li><li><p>University of Cambridge Seminar: Deploying Secure Whistleblower Technology in Practice</p></li></ul></li></ul><div><hr></div><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/p/openais-subpoenas-pre-emption-and?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption"><em>Help Us Grow. Share This Newsletter With Your Network. 
Or visit <strong><a href="http://www.aiwi.org">aiwi.org</a> </strong>to learn more about our work.</em></p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/p/openais-subpoenas-pre-emption-and?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://aiwhistleblowerinitiative.substack.com/p/openais-subpoenas-pre-emption-and?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><div><hr></div><blockquote><h2>Edition Highlight</h2></blockquote><h3>AI Moratorium Puts AI Whistleblowers Under Threat</h3><p><strong>The Vote Could Happen Within Days:</strong></p><p>The first post today hands the floor to our friends at the <a href="https://www.whistleblowers.org/campaigns/the-urgent-case-for-the-ai-whistleblower-protections-congress-must-pass-the-ai-whistleblower-protection-act/">National Whistleblower Center</a>. 
We are sharing their communication from this weekend in an edited format.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!t26q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f3fc22c-0aaf-4b92-abcd-02a2ff70261e_3200x1080.jpeg"><img src="https://substackcdn.com/image/fetch/$s_!t26q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f3fc22c-0aaf-4b92-abcd-02a2ff70261e_3200x1080.jpeg" width="1456" height="491" alt="" loading="lazy"></a><figcaption class="image-caption"><a href="https://nationalwhistleblowercenter.salsalabs.org/callforaiwhistleblowerprotections/index.html">Click here to join NWC - They will connect you with your officials</a></figcaption></figure></div><p>On November 17, <a href="https://www.americanprogress.org/article/moratoriums-and-federal-preemption-of-state-artificial-intelligence-laws-pose-serious-risks/">House Majority Leader Steve Scalise</a> announced that Republican leadership is exploring the addition of AI pre-emption language to the National Defense Authorization Act (NDAA)&#8212;the annual &#8220;must-pass&#8221; defense spending bill. 
<a href="https://www.govtech.com/policy/trump-urges-congress-to-block-state-level-ai-legislation">President Trump publicly supported the effort the following day</a>.</p><p>&#8220;No text or overview of the scope or length of Rep. Scalise&#8217;s proposed AI pre-emption effort has been publicly released,&#8221; wrote <a href="https://ari.us/last-minute-ai-law-preemption-may-sneak-into-the-ndaa/">Americans for Responsible Innovation (ARI)</a>. Details about the scope of this pre-emption effort, therefore, remain unclear. </p><p>If successful, this provision could use federal pre-emption to void state AI laws, including California&#8217;s landmark SB53, which is set to take effect in 2026. AI whistleblowers would lose legal protections nationwide, with no federal law in place to protect them.</p><p><strong>In their newsletter, National Whistleblowers Center stated:</strong></p><blockquote><p>&#8220;This unethical deal would allow the industry to develop in secrecy and bury their conduct behind NDAs.</p><p>Without federal protections, this devious back-door move would strip AI whistleblowers of their rights to make disclosures in the public interest under state law public policy common law cause of action, granted in 47 states. This is unacceptable and must be vigorously opposed.</p><p><strong>This is a Thanksgiving nightmare for AI whistleblowers</strong>.&#8221;</p></blockquote><p><strong>ARI Executive Director Eric Gastfriend said:</strong></p><blockquote><p>&#8220;A hastily-drafted pre-emption measure threatens to block state AI laws that protect young people from online harms, safeguard artists and creators from deepfakes, and support workers facing automation. Expect to see broad-based opposition to this measure from lawmakers across the political spectrum and consumer safeguard groups. 
This is the worst kind of backroom politics &#8211; at the expense of the American people.&#8221;</p></blockquote><h4><strong>No Federal AI Whistleblower Law Currently Exists</strong></h4><p>While a bipartisan AI Whistleblower Protection Act was introduced in May 2025 by Senate Judiciary Chair Chuck Grassley (R-IA), the bill remains in committee. If pre-emption passes before federal protections are enacted, AI whistleblowers across the U.S. may be left without legal recourse.</p><h4>Current Paths to AI Whistleblower Protections Under Threat</h4><p>There are two complementary approaches to protecting AI whistleblowers, both of which are now under threat.</p><p><strong>Path 1: Dedicated whistleblower legislation</strong></p><ul><li><p><strong>Federal level</strong>: The AI Whistleblower Protection Act (S.1792), still in committee, which would prohibit retaliation against employees reporting AI security vulnerabilities or violations, override NDAs, and provide remedies, including double back pay and compensatory damages.</p></li><li><p><strong>State level</strong>: California&#8217;s SB 53, the first state law to create whistleblower protections for AI employees.</p></li></ul><p><strong>Path 2: General AI laws in states where whistleblower protection statutes already exist</strong></p><p>This is where much of SB 53&#8217;s power actually comes from and why pre-emption would be devastating. SB 53 creates new reportable violations that automatically trigger California&#8217;s existing Labor Code whistleblower protections. Under California Labor Code 1102.5(b), any employee is protected from retaliation when reporting a violation of law if they have &#8220;reasonable cause to believe&#8221; that violations occurred.</p><blockquote><p><strong>Federal pre-emption could sever all of these protections.</strong></p></blockquote><h4>What the National Whistleblower Center Urges You To Do Now</h4><p>The NDAA vote is expected within days. 
Congressional offices are making decisions now about whether to include this provision.</p><p><strong>Contact your representatives immediately - <a href="https://nationalwhistleblowercenter.salsalabs.org/callforaiwhistleblowerprotections/index.html">Use their tool</a> (They will connect you with your officials):</strong></p><p><strong>Call or email your Representative and Senators.</strong> Tell them:</p><ul><li><p>Remove AI pre-emption from the NDAA</p></li><li><p>Pass the AI Whistleblower Protection Act instead</p></li><li><p>Protect AI whistleblowers&#8212;don&#8217;t strip away their rights</p></li></ul><p>Or find your representatives on your own:</p><ul><li><p><a href="https://www.house.gov/representatives/find-your-representative">house.gov/representatives/find-your-representative</a></p></li><li><p><a href="https://www.senate.gov/senators/senators-contact.htm">senate.gov/senators/senators-contact.htm</a></p></li></ul><div><hr></div><blockquote><h2>Insider Currents</h2></blockquote><p><em>Carefully curated summaries and links to the latest news, spotlighting the voices and concerns emerging from within AI companies.</em></p><h3>The Cost of Speaking Up: OpenAI&#8217;s Subpoenas Against Nonprofits During Its $500B Restructuring</h3><p>At least seven nonprofit organizations critical of OpenAI received subpoenas in recent months as part of the company&#8217;s litigation with Elon Musk, <em>NBC News</em> reported. 
These targeted nonprofit groups share a common trait: they had <a href="https://www.themidasproject.com/article-list/new-open-letter-requests-transparency-from-openai-concerning-its-restructuring">signed or organized open letters</a> and <a href="https://www.sff.org/Offsite-Media/Petition_Complaint-to-AG-re-Open-AIs-Violations-of-Charitable-Trust.pdf">petitions</a> critical of OpenAI&#8217;s <a href="https://www.bloomberg.com/news/articles/2025-09-11/microsoft-openai-reach-early-agreement-in-for-profit-change">ongoing</a> efforts to restructure <a href="https://www.reuters.com/business/microsoft-openai-reach-non-binding-deal-allow-openai-restructure-2025-09-11/">from a nonprofit to a for-profit public benefit corporation</a>. Encode, one of the subpoenaed nonprofits, also <a href="https://encodeai.org/encode-ai-applauds-newly-announced-amendments-to-sb53/">sponsored a California bill</a> that imposed the first wide-ranging transparency regulations on leading AI companies like OpenAI.</p><p>Jason Kwon, OpenAI&#8217;s Chief Strategy Officer, wrote on X that after Musk sued OpenAI, several organizations &#8220;joined in and ran campaigns backing his opposition to OpenAI&#8217;s restructure. This raised transparency questions about who was funding them and whether there was any coordination.&#8221;</p><p>Four subpoenas reviewed by <em>NBC News</em> asked for &#8220;all information about the organizations&#8217; funders and donations&#8221; and &#8220;all communications regarding Musk, Meta, and its founder Mark Zuckerberg.&#8221; The subpoenas also asked for &#8220;all documents and communications concerning the governance or organizational structure of OpenAI.&#8221; Ek&#333;&#8217;s subpoena asked for &#8220;the identity of all Persons or entities that have contributed any funds to You and&#8230;the amount and date of any such contributions.&#8221; The Midas Project&#8217;s subpoena demanded &#8220;all communications with journalists, lawmakers, former OpenAI employees, and other advocacy groups,&#8221; <em>The Decoder</em> reported.</p><h4>The Alleged Musk Connection in the Subpoenas: Was It There?</h4><p>OpenAI&#8217;s legal theory centred on investigating potential coordination with Musk. 
The <em>NBC News</em> report showed how each organization responded to the allegation in the subpoena.</p><p><strong>The San Francisco Foundation (SFF)</strong></p><p>The San Francisco Foundation (SFF), whose mission is to strengthen communities, build civic leadership, and foster philanthropy in the San Francisco area, said:</p><blockquote><p>&#8220;[SFF] has never received any funding from Musk nor has it participated in the lawsuit with Musk.&#8221;</p></blockquote><p><strong>Ek&#333;</strong></p><p>Executive director Emma Ruby-Sachs stated that the organization is:</p><blockquote><p>&#8220;over 70% funded by small online donations from individuals, and we&#8217;ve run multiple campaigns against Elon Musk in the last year.&#8221;</p></blockquote><p>The organization even ran a Times Square billboard depicting Musk as a king and advocating for him to be fired during his DOGE stint.</p><p>Ruby-Sachs concluded:</p><blockquote><p>&#8220;<strong>The logical basis is so ridiculous</strong> that we have to assume this is just a tactic to scare us and get us to back off. This subpoena shows OpenAI is going after people around the world who are legitimately concerned citizens and trying to shut them up.&#8221;</p></blockquote><p><strong>Legal Advocates for Safe Science and Technology (LASST)</strong></p><p>Tyler Whitmer, founder of LASST (also an <a href="http://www.publishyourpolicies.org">AIWI campaign partner for Publish Your Policies</a>) and a seasoned litigation attorney, told <em>NBC News</em>:</p><blockquote><p>&#8220;I think Musk is a malign influence in the world right now. Part of my mission is to hold Musk&#8217;s xAI to account in the same way I hold OpenAI to account. 
<strong>It&#8217;s just that OpenAI is supposed to be better than this, while I don&#8217;t expect the same from Elon.</strong> It&#8217;s really clear that the subpoenas aren&#8217;t narrowly tailored to the issues of the litigation and are instead trying to leverage the litigation to get information that OpenAI is not otherwise entitled to. And that&#8217;s the best faith version of it.&#8221;</p></blockquote><p><strong>Tyler Johnston (The Midas Project)</strong></p><p>In a <a href="https://x.com/TylerJnstn/status/1977155360550199462">post on X</a> addressing OpenAI, Tyler Johnston, who received a subpoena as founder of The Midas Project, wrote:</p><blockquote><p>&#8220;We&#8217;ve never spoken with or taken funding from Musk and ilk, which we would have been happy to tell you if you asked a single time. In fact we&#8217;ve said he runs xAI so horridly it makes OpenAI saintly in comparison.&#8221;</p></blockquote><p>Johnston reported that insurance brokers refused to cover his small watchdog organization after the subpoena and subsequent news coverage, according to <em>NBC News</em>.</p><blockquote><p>&#8220;If you wanted to constrain an org&#8217;s speech, intimidation would be one strategy, but making them uninsurable is another, and maybe that&#8217;s what&#8217;s happened to us with this subpoena.&#8221;</p></blockquote><p>Johnston noted the subpoenas arrived during &#8220;the critical weeks before the CA and DE attorneys general decide whether to approve their transition from a public charity to a $500 billion for-profit enterprise,&#8221; <em><a href="https://the-decoder.com/openai-accused-of-pressuring-ai-regulation-advocates-with-subpoenas/">The Decoder</a></em> reported.</p><p><strong>Encode</strong></p><p>Encode, also an AIWI campaign partner for Publish Your Policies, filed an amicus brief in Musk&#8217;s lawsuit against 
OpenAI. And after Governor Newsom signed SB 53 into law, Nathan Calvin wrote on<em> </em><a href="https://x.com/_NathanCalvin/status/1976649051396620514?s=20">X</a>:</p><blockquote><p>&#8220;Why did OpenAI subpoena me? Encode has criticized OpenAI&#8217;s restructuring and worked on AI regulations, including SB 53. I believe OpenAI used the pretext of their lawsuit against Elon Musk to intimidate their critics and imply that Elon is behind all of them.&#8221;</p></blockquote><p><strong>Future of Life Institute (FLI)</strong></p><p>The Future of Life Institute received its subpoena at the beginning of October, while FLI President Max Tegmark received an individual subpoena in late August, according to a report by <em>NBC News</em>. An FLI spokesperson stated:</p><blockquote><p>&#8220;We assume the subpoena has to do with us generally calling for more oversight and transparency on the development of advanced AI and AI companies in general, which currently have zero regulation or meaningful oversight.&#8221;</p></blockquote><p>The spokesperson further noted that despite FLI having received $10 million in funding from Musk (the last received in 2021): </p><blockquote><p>&#8220;Elon has no input into FLI&#8217;s structural activities.&#8221; </p></blockquote><p>The spokesperson explained that FLI distributed Musk&#8217;s money to leading AI researchers, and a separate tech mogul gave FLI its permanent endowment.</p><h4><strong>Legal Experts Weigh In</strong></h4><p>Sean Eskovitz, a litigator and former assistant U.S. 
attorney who is not involved in the case, told <em>NBC News</em> that &#8220;the breadth of these subpoenas strike me as quite aggressive and quite broad.&#8221; He added:</p><blockquote><p>&#8220;There would have to be a very close look at the scope of the subpoena in order to ensure that nonparties are not being harassed, that their speech is not being chilled, and that <strong>the proponent of the subpoena is not using the subpoena for some ulterior purpose.</strong>&#8221;</p></blockquote><p>Robert Weissman, co-president of Public Citizen, said:</p><blockquote><p>&#8220;This behavior is highly unusual. <strong>It&#8217;s 100% intended to intimidate.</strong>&#8221;</p></blockquote><h4><strong>A Current OpenAI Employee Spoke Up, and Former Employees Shared Their Thoughts</strong></h4><p>Perhaps most striking was the public pushback from within OpenAI itself. <strong>Joshua Achiam, Head of Mission Alignment</strong>, who reports directly to CEO Sam Altman, responded on X to Nathan Calvin&#8217;s post:</p><blockquote><p>&#8220;At what is possibly a risk to my whole career I will say: this doesn&#8217;t seem great.&#8221;</p></blockquote><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mUuV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F131ab26f-8ba6-4534-b977-a67b233eb8de_1200x1432.png"><img src="https://substackcdn.com/image/fetch/$s_!mUuV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F131ab26f-8ba6-4534-b977-a67b233eb8de_1200x1432.png" width="1200" height="1432" alt="" loading="lazy"></a></figure></div><p>Achiam even praised former OpenAI researcher Daniel Kokotajlo for speaking up:</p><blockquote><p>&#8220;Our most significant crisis to date, in my view, was the nondisparagement crisis. <strong>I am grateful to Daniel Kokotajlo for his courage and conviction in standing up for his beliefs.</strong> Whatever else we disagree on - many things - I think he was genuinely heroic for that.&#8221;</p></blockquote><p>And <strong>Daniel Kokotajlo, the former OpenAI employee and whistleblower</strong> whom Achiam mentioned in his X thread, told <em>NBC News</em>:</p><blockquote><p>&#8220;When it&#8217;s actually happening to you in real life, the psychological pressure to just stay quiet is pretty darn strong and most people cave to it. That&#8217;s why intimidation tactics work.&#8221;</p></blockquote><p><strong>Former OpenAI research scientist Steven Adler</strong> also shared his thoughts:</p><blockquote><p>&#8220;I&#8217;m surprised that OpenAI&#8217;s Board would consider these actions consistent with its nonprofit legal obligations, or that they&#8217;d feel personally comfortable with this conduct.&#8221;</p></blockquote><p><strong>Helen Toner, a former OpenAI board member</strong>, labeled the approach &#8220;<strong>dishonesty &amp; intimidation tactics.</strong>&#8221;</p><h4>Beyond Nonprofits: Subpoenaing a Grieving Family</h4><p>In addition to these subpoenas, OpenAI also subpoenaed the family of Adam Raine, the teenager who took his life earlier this year. 
<em><a href="https://www.theatlantic.com/technology/2025/11/openai-lawsuit-subpoenas/684861/">The Atlantic</a></em> <a href="https://www.theatlantic.com/technology/2025/11/openai-lawsuit-subpoenas/684861/">writes</a>: &#8220;OpenAI wanted the family to send any videos taken at memorial services for Raine, according to documents I have reviewed. It wanted a list of people who attended or were invited to any memorial services. And it wanted the names of anyone who had cared for or supervised Raine over the past five years, including friends, teachers, school-bus drivers, coaches, and &#8216;car pool divers [sic].&#8217;&#8221;</p><p>&#8594; Read: <a href="https://www.nbcnews.com/tech/tech-news/openai-chatgpt-accused-using-subpoenas-silence-nonprofits-rcna237348">OpenAI accused of using legal tactics to silence nonprofits</a></p><p>&#8594; Read: <a href="https://the-decoder.com/openai-accused-of-pressuring-ai-regulation-advocates-with-subpoenas/">OpenAI accused of pressuring AI regulation advocates with subpoenas</a></p><p>&#8594; Read: <a href="https://www.theatlantic.com/technology/2025/11/openai-lawsuit-subpoenas/684861/">The New Brutality of OpenAI</a></p><div><hr></div><h3><strong>Insider Speaks Out About Amazon&#8217;s Strategy to Keep Secret Its Data Centres&#8217; Full Water Use</strong></h3><p>After an internal memo (first reported by <em>The Guardian</em>) revealed Amazon&#8217;s reluctance to disclose its full water footprint for its data centres, Amazon&#8217;s former Water Sustainability Program Manager, <strong>Nathan Wangusi</strong>, said:</p><blockquote><p>&#8220;[Amazon] spend a lot of time creating methodologies that are used to obfuscate the water footprint&#8221;.</p></blockquote><h4><strong>What Amazon Wanted to Hide From the Public</strong></h4><p>The memo, related to its &#8220;Water Positive&#8221; sustainability campaign, revealed that the company&#8217;s cloud computing division (AWS) chose to report only the &#8220;primary&#8221; water use (water consumed 
directly at datacentres), while excluding the much larger &#8220;secondary&#8221; consumption, such as the water needed to generate electricity.</p><p>That primary water use figure, <strong>7.7bn gallons a year </strong>(around 11,600 Olympic pools), was presented as the total footprint, with a goal to cut it to 4.9bn gallons by 2030. The larger hidden figure went unmentioned.</p><h4><strong>Amazon&#8217;s Response</strong></h4><p>Spokesperson Margaret Callahan <a href="https://www.theguardian.com/technology/2025/oct/25/amazon-datacentres-water-use-disclosure">told</a> <em>The Guardian</em> the memo was &#8220;obsolete&#8221; and said it &#8220;completely misrepresents Amazon&#8217;s current water usage strategy&#8221;. She added that Wangusi&#8217;s comments were &#8220;contradicted by facts&#8221;, saying Amazon&#8217;s reporting relies on &#8220;third-party assured data from actual utility bills, not estimates or self-reporting&#8221;.</p><p>However, <em>The Guardian</em> noted Wangusi&#8217;s criticism wasn&#8217;t about Amazon&#8217;s data collection; it was about how its chosen metrics obscure the broader impact of its water offsets.</p><h4><strong>Insider Counter-Response</strong></h4><p>A current Amazon software developer told <em>The Guardian</em>, &#8220;It would be better if they could own up to it&#8230;Even if they said it was a low priority, at least that would be honest.&#8221; He spoke anonymously for fear of retaliation. <br><br>Wangusi argues Amazon is shaping industry standards to downplay its water use and avoid scrutiny: &#8220;You don&#8217;t need to obscure or obfuscate&#8230;It doesn&#8217;t make you more profitable. It makes you less trustworthy&#8221;.<br><br>He left the company last year and believes he was &#8220;hounded out&#8221; for raising those concerns. (Amazon declined to comment on his departure.)</p><h4><strong>AI Investment vs. 
Water Consumption</strong></h4><p>The $2.4 trillion corporation remains the world&#8217;s largest owner of data centres, dwarfing competitors Microsoft and Google. As the intense investment into frontier AI continues, so do Amazon&#8217;s plans to build more centres despite concerns over their water consumption.</p><p>Earlier this year, Wangusi published a <a href="https://nathanwangusi.medium.com/the-mirage-of-water-positive-an-urgent-call-for-real-water-accountability-in-tech-ef541f7f1ed5">Medium post</a> where he argued that the tech industry&#8217;s &#8220;water-positive&#8221; claims are misleading PR tactics that obscure the localised water-use impacts of massive datacentres.</p><p>In September 2025, a year after leaving Amazon, he founded <a href="https://www.bluecredits.org/">BlueCredit</a>, a platform that helps companies fund verified water-sustainability projects to offset their water footprint in a transparent, tracked way.</p><p>&#8594; Read: <a href="https://www.theguardian.com/technology/2025/oct/25/amazon-datacentres-water-use-disclosure">Amazon strategised about keeping its datacentres&#8217; full water use secret, leaked document shows</a></p><div><hr></div><h3>&#8220;Make Sure Your Boss Has Your Back&#8221;: Steven Adler on NVIDIA&#8217;s Alleged Influence Over AI Policy Voiced By Anonymous Sources</h3><p>For months, think tank researchers have been sharing accounts of widespread fear around publishing work that contradicts NVIDIA&#8217;s interests, according to an investigation by Steven Adler based on anonymous sources who spoke with him directly or 
sources he was aware of. <strong>One anonymous staffer told him</strong>:</p><blockquote><p><strong>&#8220;If you&#8217;re going to work on export controls, make sure your boss is prepared to have your back.&#8221;</strong></p></blockquote><p>NVIDIA is a massive company&#8212;arguably the world&#8217;s most important AI supplier&#8212;and now worth $4.5 trillion, with no other firm within even $500 billion of its value.</p><p>Whether the U.S. should limit NVIDIA&#8217;s sales of advanced AI chips to China has become a key national-security question.</p><blockquote><p>&#8220;NVIDIA stands to lose a lot of money if its sales are restricted. This gives NVIDIA a strong incentive to shape the debate.&#8221;</p></blockquote><p>According to Adler&#8217;s sources, one way the company may be shaping this debate is by stifling public research.</p><h4><strong>Pattern of Alleged Retaliation</strong></h4><p>Multiple anonymous sources at different institutions described a concerning pattern to Adler:</p><ul><li><p><strong>Complaints to supervisors:</strong> When researchers publish analyses critical of NVIDIA&#8217;s China sales position, the company allegedly complains to their supervisors. In 2023, NVIDIA allegedly targeted analyst Gregory Allen, although his boss refused to fire him. Another researcher, Sam Hammond, publicly stated that NVIDIA reached out to &#8220;complain or worse&#8221; after his critical work.</p></li><li><p><strong>Suspicious media coverage:</strong> Sources pointed to two questionable media incidents occurring shortly after researchers published critical analyses. One critic stopped just short of accusing NVIDIA of planting a &#8220;hit piece&#8221; that claimed he &#8220;faces scrutiny over China corporate ties.&#8221; Multiple analysts called the piece &#8220;laughable&#8221; and &#8220;fishy,&#8221; with one asking, <strong>&#8220;Who drove this? 
And why?&#8221;</strong></p></li><li><p><strong>The Janet Egan case:</strong> Anonymous sources also flagged rumored retaliation against researcher Janet Egan. One week after she critiqued arguments for NVIDIA selling chips to China, POLITICO published what Adler characterizes as &#8220;questionable allegations about her supposed conflicts of interest.&#8221;</p></li><li><p><strong>Effect on research:</strong> One source told Adler they now warn colleagues that NVIDIA is likely to come after their reputations if they work on export controls. As a result, researchers are avoiding topics that might anger the company.</p></li></ul><h4>No Smoking Gun</h4><p>Adler emphasizes there&#8217;s no definitive proof. </p><blockquote><p>&#8220;It is entirely possible that despite what my sources believe, NVIDIA is not behind either rumored media piece, and that NVIDIA didn&#8217;t have the interactions alleged with researchers and their bosses.&#8221;</p></blockquote><p>However, Adler notes the broader concern that when researchers fear retaliation, we lose the studies needed to craft sound policy on one of today&#8217;s most consequential technologies.</p><p>&#8594; Read: <a href="https://x.com/sjgadler/status/1978859998546739328?s=20">Steven Adler&#8217;s Investigation on X</a></p><div><hr></div><h3>&#8220;Bloated&#8221; AI Unit: Meta Insiders Describe the Reasoning Behind 600 Layoffs</h3><p>Meta is cutting approximately 600 employees from its AI division, a spokesperson confirmed to <em>CNBC</em>. However, anonymous sources speaking to <em>CNBC</em> reveal a move that goes beyond typical cost reduction: 1) the company is protecting its expensive new hires while 2) eliminating &#8216;legacy workers&#8217;.</p><p>Multiple people familiar with the matter disclosed to <em>CNBC</em> that employees in TBD Labs&#8212;home to Meta&#8217;s recent high-profile AI recruits&#8212;were entirely spared from the cuts. 
These sources characterized the decision as CEO Mark Zuckerberg&#8217;s <strong>&#8220;bet on his expensive hires versus the legacy employees.&#8221;</strong></p><p>The same sources described Meta&#8217;s AI unit as &#8220;bloated,&#8221; with internal teams such as FAIR (Fundamental AI Research) and product groups &#8220;vying for computing resources.&#8221; When Meta created Superintelligence Labs this summer (led by chief AI officer Alexandr Wang, who was hired as part of Meta&#8217;s $14.3 billion investment in Scale AI), sources told <em>CNBC</em>:</p><blockquote><p>&#8220;It inherited the oversized Meta AI unit.&#8221;</p></blockquote><p>Following the cuts, Meta&#8217;s Superintelligence Labs&#8217; workforce now sits at just under 3,000, these sources said. Meta notified at least some employees that November 21 is their termination date, and until then, they&#8217;re in a &#8220;non-working notice period.&#8221;</p><p>The message viewed by <em>CNBC</em> said:</p><blockquote><p>&#8220;During this time, your internal access will be removed and you do not need to do any additional work for Meta. You may use this time to search for another role at Meta.&#8221;</p></blockquote><p>The company also stated that it&#8217;s paying 16 weeks of severance, plus two weeks for every completed year of service, &#8220;minus your notice period.&#8221;</p><p>&#8594; Read: <a href="https://www.cnbc.com/2025/10/22/meta-layoffs-ai.html">Meta lays off 600 from &#8216;bloated&#8217; AI unit as Wang cements leadership</a></p><div><hr></div><h3><strong>Trump&#8217;s AI Export Program Rollout Exposes Government Dysfunction</strong></h3><p>The Trump administration&#8217;s launch of a new AI exports program has been met with sharp criticism from industry insiders, who describe the rollout as &#8220;chaotic.&#8221; This program is designed to give U.S. companies financial incentives in the global AI race. 
</p><p><strong>Anonymous industry sources speaking to </strong><em><strong>Axios</strong></em> revealed deep frustration with the Commerce Department&#8217;s handling. One industry source who requested anonymity in order to speak freely described the rollout as &#8220;extremely underwhelming&#8221; and stated that &#8220;the joke is on industry for expecting a predictable or anything less than a chaotic rollout from Commerce.&#8221; Separately, &#8220;some industry and lobbying sources told Axios that they&#8217;re concerned with the lack of clarity and communication from the Trump administration around the AI exports program.&#8221;</p><p>Joseph Hoefer, AI policy lead at Monument Advocacy representing tech firms, told <em>Axios</em> the administration is:</p><blockquote><p>&#8220;leaving too much to industry to figure out, putting the onus on companies to make sense of the program, build partnerships and carry it out.&#8221; </p></blockquote><p>He further stated: </p><blockquote><p>&#8220;Firms want to engage and help shape it, but they need more clarity...the appetite is there, the framework just needs to catch up to the ambition.&#8221;</p></blockquote><p><em>Axios</em> also reported that &#8220;industry was expecting more information about the program&#8217;s launch this week, but what they got was a request for information.&#8221;</p><p>This rollout &#8220;fits into a larger pattern of the Trump administration rolling out policies before figuring out the details, like the H-1B visa proclamation,&#8221; other sources told <em>Axios</em>.</p><p>&#8594; Read: <a href="https://www.axios.com/2025/10/24/trump-ai-exports-program-stumbles">Trump's AI exports program stumbles out of the gate</a></p><div><hr></div><h3><strong>Double Whistleblower Exposes Corporate Compliance Failures at Uber and Veon</strong></h3><p>Mark MacGann has done what few corporate insiders dare: <strong>he&#8217;s blown the whistle twice</strong>. 
The former top executive (as first reported by <em><a href="https://www.theguardian.com/news/2022/jul/10/uber-files-leak-reveals-global-lobbying-campaign">The Guardian</a></em>) shared over 124,000 internal documents exposing Uber&#8217;s aggressive lobbying tactics in 2022. Now, as first reported by the <em>Financial Times</em>, <strong>he&#8217;s revealed troubling compliance issues at telecommunications giant Veon</strong>, where he served on the global executive committee, earning a salary of &#8364;900,000 plus a &#8364;900,000 bonus.</p><h4><strong>Inside Uber&#8217;s Playbook</strong></h4><p>As Uber&#8217;s chief lobbyist for Europe, the Middle East, and Africa, MacGann orchestrated a strategy of bypassing local regulators to win over national ministers directly. His trove of internal materials &#8211; three laptops, six hard drives, and two dozen notebooks of contemporaneous meeting notes &#8211; documented how the company operated without licenses and, as he later told the FT, <strong>&#8220;massaged the facts to earn the trust of drivers, of consumers and of political elites.&#8221;</strong></p><h4><strong>MacGann&#8217;s Motivation: From Personal Crisis to Moral Reckoning</strong></h4><p>The work at Uber had taken a severe toll on MacGann, years before his decision to blow the whistle: he faced violent confrontations with taxi drivers, required bodyguards for five months, and was eventually diagnosed with PTSD. It was only in early 2020, when MacGann was between roles (having left Uber in 2016 and Veon in 2018), that he had what he called a &#8220;come-to-Jesus moment&#8221;. This set in motion meetings with journalists, to whom he provided the copious internal data he had collected. 
More recently, in the case of Veon, <strong>he wanted to expose a systemic problem: how international corruption investigations are handled, overseen, and, ultimately, closed</strong>.</p><h4><strong>Exposing Veon&#8217;s Misconduct</strong></h4><p>MacGann&#8217;s second disclosure may prove more significant for understanding how corporate compliance really works. Veon was operating under a US Department of Justice (DoJ) monitor following a $114 million bribery scandal involving the daughter of Uzbekistan&#8217;s president. <strong>Yet internal documents and emails he preserved reveal persistent red flags</strong>:</p><ul><li><p><strong>Kyrgyzstan episode</strong>: When Veon&#8217;s chief accountant was detained, executives proposed using sanctioned oligarch Petr Aven (a major shareholder) to secure his release, bypassing normal vetting procedures.</p></li><li><p><strong>Vladimir Senin hiring</strong>: A pro-Kremlin Duma member recommended by Aven (who also recommended Senin to Uber) was hired as a lobbyist despite <a href="https://www.justice.gov/criminal/criminal-fraud/foreign-corrupt-practices-act">FCPA</a> risks stemming from his political connections and close ties to Aven &#8211; ties that were themselves the source of his appointment. </p></li><li><p><strong>Ties to Uzbekistan scandal:</strong> One of the daughters of Abdulla Aripov (Uzbekistan&#8217;s prime minister) continued to work in Veon&#8217;s government relations department &#8211; the same division implicated in the original bribery case &#8211; despite internal alarms being raised. Aripov was previously dismissed as deputy PM after allegedly issuing telecom licences illegally. She voluntarily resigned months later. 
</p></li></ul><p>MacGann described the DoJ monitorship to FT as &#8220;very artificial, contrived and complicated...a mundane box-ticking exercise&#8221; where <strong>compliance relied more on process and documentation than on actual implementation</strong>.</p><h4><strong>The Whistleblower&#8217;s Price &#8211; and Platform</strong></h4><p>The public response to MacGann&#8217;s disclosure brought both vindication and isolation. <strong>He told the FT that he has been &#8220;ghosted by the corporate world&#8221;</strong> and <strong>estimates he faces &#8364;40 million in threatened penalties from Uber</strong> for alleged confidentiality breaches. &#8220;The money&#8217;s all gone,&#8221; he said of the income he accumulated during his corporate career. <strong>Yet his disclosures have opened unexpected doors</strong>: he&#8217;s now a fellow at Harvard University&#8217;s Shorenstein Center on Media, Politics and Public Policy, <strong>receiving his &#8220;first regular paycheck since Veon&#8221;</strong> while researching tech industry practices and labor laws.</p><p>MacGann was invited to testify before European parliaments, speak at conferences, and engage with what he told the FT was <strong>&#8220;a world I barely knew existed of investigative journalists, human rights lawyers, pro-democracy organizations.</strong> <strong>People who speak out when they see wrongdoing</strong>.&#8221; He delivers talks at Harvard about &#8220;how greed, hubris and lack of oversight can lead bright people to do shady things.&#8221;</p><p>Note: MacGann was legally supported by&nbsp;<strong>The Signals Network (TSN)</strong>, which is part of <a href="https://aiwi.org/the-signals-network/">AIWI Contact Hub</a>.</p><p>&#8594; Read: <a href="https://www.ft.com/content/89224f66-7167-4b05-8896-f3424fcd0ea3">He was a top executive. 
He blew the whistle &#8212; twice</a></p><p>&#8594; Read: <a href="https://www.theguardian.com/news/2022/jul/10/uber-files-leak-reveals-global-lobbying-campaign">Uber broke laws, duped police and secretly lobbied governments, leak reveals</a></p><div><hr></div><h3><strong>AI Labs&#8217; 100-Hour Work Weeks: The New Normal?</strong></h3><p>As recently reported by <em>WSJ</em>, 80-100 hour working weeks have become the norm for top researchers and executives inside Silicon Valley&#8217;s biggest AI labs. The most intense periods require working <strong>beyond</strong> the &#8220;9-9-6&#8221; schedule (9am to 9pm, six days a week), a trope from China&#8217;s tech industry that was officially <a href="https://www.china-briefing.com/news/996-is-ruled-illegal-understanding-chinas-changing-labor-system/?ref=blog.pragmaticengineer.com">banned</a> there in 2021.</p><p>Such extreme working hours have long been rejected in the US, but a shift has seemingly occurred. One startup executive, speaking to the <em>WSJ</em>, jokingly referred to the schedule as &#8220;0-0-2,&#8221; meaning midnight to midnight, with a two-hour break on weekends.</p><p>People familiar with the matter also told <em>WSJ</em>:</p><blockquote><p>&#8220;Certain startups have included the expectation of an 80-hour-plus workweek in their employment contracts.&#8221;</p></blockquote><p>However, most organizations haven&#8217;t had to take such measures because top AI talent is driven by strong competitive pressure and their own interest in exploring new model capabilities, according to the <em>WSJ</em>.</p><h4><strong>The AGI Race Demands</strong></h4><p>In the past year, as the pursuit of AGI continues to accelerate, senior executives have been unabashed about their expectations: </p><ul><li><p>Cognition&#8217;s CEO Scott Wu <a href="https://x.com/ScottWu46/status/1952776198947520659?ref=blog.pragmaticengineer.com">posted</a>, &#8220;We routinely are at the office through the weekend and do some of our best 
work late into the night. Many of us literally live where we work&#8230;we understand it&#8217;s not for everyone&#8221;.</p></li><li><p>Lovable&#8217;s job descriptions <a href="https://x.com/antonosika/status/1878525525289009643?ref=blog.pragmaticengineer.com">warn</a>, &#8220;Long hours, high pace. Candidates must thrive under high urgency, with AGI timelines approaching.&#8221;</p></li><li><p>Google cofounder Sergey Brin <a href="https://www.nytimes.com/2025/02/27/technology/google-sergey-brin-return-to-office.html">told</a> the AI unit&#8217;s staff (first reported by <em>NYT</em>): &#8220;60 hours a week is the sweet spot of productivity&#8221;.</p></li></ul><h4><strong>History Repeating?</strong></h4><p>Such statements may surprise some, but not others. A retired software engineer <a href="https://newsletter.pragmaticengineer.com/p/the-pulse-142?ref=blog.pragmaticengineer.com">told</a> Gergely Orosz (The Pragmatic Engineer) she lived through similar working patterns about 50 years ago: </p><blockquote><p>&#8220;Of course this current generation is working exhausting hours, this technology is new and exciting&#8230;My generation did the same thing with computers back in the 1970s&#8230;&#8221;</p></blockquote><p>But Microsoft&#8217;s chief product officer for AI, Aparna Chennapragada, believes this time is different. 
The gap between research breakthroughs and product launches has compressed from years to &#8220;the gap between Thursday and Friday,&#8221; she <a href="https://www.wsj.com/tech/ai/ai-race-tech-workers-schedule-1ea9a116">told</a> <em>WSJ</em>.</p><h4><strong>What Drives Them</strong></h4><p>The <em>WSJ</em> spoke to executives and researchers at Microsoft, Anthropic, Google, Apple, and OpenAI, who &#8220;see their work as critical to a seminal moment in history as they duel with rivals and seek new ways to bring AI to the masses.&#8221; Though some are now multi-millionaires, several noted they &#8220;haven&#8217;t had time to spend their fortunes.&#8221;</p><p>Josh Batson, a research scientist at Anthropic, <a href="https://www.wsj.com/tech/ai/ai-race-tech-workers-schedule-1ea9a116">explained</a>: </p><blockquote><p>&#8220;We&#8217;re basically trying to speedrun 20 years of scientific progress in two years&#8230;&#8221;, with impressive advancements in AI systems happening &#8220;every few months&#8230;It&#8217;s the most interesting scientific question in the world right now.&#8221;</p></blockquote><p>Madhavi Sewak, a DeepMind researcher, <a href="https://www.wsj.com/tech/ai/ai-race-tech-workers-schedule-1ea9a116">noted</a>: </p><blockquote><p>&#8220;You have all these good ideas, and you know it&#8217;s a competition against time.&#8221;</p></blockquote><p>Yet Sewak also acknowledged the cost: </p><blockquote><p>&#8220;People don&#8217;t have time for their friends, for their hobbies&#8230;[or] people they love&#8230;All they do is work.&#8221;</p></blockquote><p>&#8594; Read: <a href="https://www.wsj.com/tech/ai/ai-race-tech-workers-schedule-1ea9a116">AI Workers Are Putting In 100-Hour Workweeks to Win the New Tech Arms Race</a></p><p>&#8594; Read: <a href="https://blog.pragmaticengineer.com/new-trend-extreme-hours-at-ai-startups/">New trend: extreme hours at AI startups</a></p><div><hr></div><h3><strong>Internal Document Reveals Mustafa Suleyman&#8217;s 
Updated AI Team Includes Five Ex-Google Hires</h3><p>According to internal organization charts viewed by <em><a href="https://www.businessinsider.com/microsoft-ai-ceo-mustafa-suleyman-org-chart-google-2025-10">Business Insider</a></em>, Microsoft AI CEO Mustafa Suleyman has taken on nine additional direct reports over the past year. The division is key to the company&#8217;s consumer AI strategy, responsible for products such as the Copilot chatbot and Bing search engine.</p><h4><strong>Poaching AI Talent from Google</strong></h4><p>Of the nine new joiners (bringing Suleyman&#8217;s direct reports to 17 in total), five previously worked at Google or DeepMind: </p><ul><li><p><strong><a href="https://www.linkedin.com/in/amar-subramanya-3642498a/">Amar Subramanya</a>: </strong>former VP of engineering for AI model Gemini; spent over 16 years at Google &#8594; now CVP of AI.</p></li><li><p><strong><a href="https://www.linkedin.com/in/umesh-shankar-78017b2/">Umesh Shankar</a>: </strong>former chief technologist and distinguished engineer for Google Cloud Security; spent nearly 19 years at the company &#8594; now CVP of engineering.</p></li><li><p><strong><a href="https://www.linkedin.com/in/dominic-king-md-phd-a949b412/details/experience/">Dominic King</a></strong>: former health lead at DeepMind and UK lead and clinical director at Google Health; spent 5 years in total at both companies &#8594; now VP of health.</p></li><li><p><strong><a href="https://www.linkedin.com/in/timfrank/details/experience/">Tim Frank</a></strong>: spent nearly 14 years at Google, most recently as senior director of product management for advertiser and customer engagement platforms &#8594; now CVP of Microsoft AI Monetization.</p></li><li><p><strong>Trevor Callaghan</strong>: former general counsel at DeepMind and former legal director at Google &#8594; now VP for Responsible AI.</p></li></ul><h4><strong>Ex-Googlers&#8217; Insights on Joining Microsoft AI Division</strong></h4><p>Upon 
starting his new position, <strong>Amar Subramanya</strong> said via a <a href="https://www.linkedin.com/feed/update/urn:li:activity:7353453746091081729/">LinkedIn post</a> that the culture at Microsoft was:</p><blockquote><p> &#8220;refreshingly low ego yet bursting with ambition&#8221;.</p></blockquote><p>Reflecting on his first 100 days at Microsoft AI, <strong>Umesh Shankar</strong> <a href="https://www.linkedin.com/posts/umesh-shankar-78017b2_i-recently-completed-my-first-100-days-at-activity-7350532228986167296-AO59">posted</a> similar thoughts: </p><blockquote><p>&#8220;It is a low-ego environment where efficiency and building the best product for the user are prized above individual goals.&#8221;</p></blockquote><p><strong>Dominic King</strong> <a href="https://www.linkedin.com/posts/dominic-king-md-phd-a949b412_very-excited-to-be-joining-microsoft-ai-activity-7269731932442165248-ppT_">announced</a> his move to Microsoft by noting healthcare as a crucial use case for AI: </p><blockquote><p>&#8220;Patients and the public continue to face multiple pain points on their path to better health&#8230;We&#8217;re now at an inflection point where AI capabilities are already assisting with faster and more accurate diagnoses and helping people better navigate the complexity of modern healthcare&#8221;.</p></blockquote><h4><strong>Suleyman&#8217;s Strategy in the AI Arms Race</strong></h4><p>Mustafa Suleyman co-founded DeepMind, where he led its Applied AI division, before Google acquired it, and is now targeting the very talent that turned DeepMind into the success it has become. 
The reshuffle in Microsoft&#8217;s AI division highlights an aggressive recruitment strategy, with a dedicated team to help put together <a href="https://www.businessinsider.com/microsoft-trying-poach-meta-ai-talent-big-pay-packages2025">competitive packages</a>, including multi-million-dollar offers.</p><p>&#8594; Read: <a href="https://www.businessinsider.com/microsoft-ai-ceo-mustafa-suleyman-org-chart-google-2025-10">Leaked Microsoft org chart reveals the top people in Mustafa Suleyman&#8217;s AI team, including five ex-Googler hires</a></p><div><hr></div><h4>Assorted Links</h4><ul><li><p>Statement on Superintelligence, led by the Future of Life Institute</p><ul><li><p><a href="https://superintelligence-statement.org/">See the statement and the signatories</a></p></li></ul></li><li><p>Microsoft</p><ul><li><p><a href="https://www.theinformation.com/articles/microsoft-let-openai-play-field?rc=qlomo6">Why Microsoft Let OpenAI Play the Field</a></p></li></ul></li><li><p>OpenAI</p><ul><li><p><a href="https://www.wsj.com/tech/ai/openai-loosened-suicide-talk-rules-before-teens-death-lawsuit-alleges-34e830c1?utm_source=newsletter&amp;utm_medium=email&amp;utm_campaign=newsletter_axioslogin&amp;stream=top">OpenAI Loosened Suicide-Talk Rules Before Teen&#8217;s Death, Lawsuit Alleges</a></p></li></ul></li><li><p>Google</p><ul><li><p><a href="https://x.com/RihardJarc/status/1978808195234595280">An interview with a former Google employee offers insights into how the company uses TPUs, claiming that Google doesn&#8217;t use Nvidia chips for any of its first-party services</a>. 
(A Thread by Rihard Jarc on X)</p></li></ul></li><li><p>Meta</p><ul><li><p><a href="https://www.bloomberg.com/news/articles/2025-10-27/meta-shifts-metaverse-executive-to-ai-following-job-cuts">Meta Shifts Metaverse Executive to AI Following Job Cuts</a></p></li></ul></li><li><p>Thinking Machines Lab</p><ul><li><p><a href="https://www.wsj.com/tech/ai/thinking-machines-lab-co-founder-departs-for-meta-442d7461">Thinking Machines Lab Co-Founder Departs for Meta</a></p></li></ul></li></ul><div><hr></div><blockquote><h1>Policy &amp; Legal Updates</h1></blockquote><p><em>Updates on regulations with a focus on safeguarding individuals who voice concerns.</em></p><h3><strong>California is the First State to Regulate AI Companion Chatbots</strong></h3><p>The controversy around companion chatbots reached an inflection point in California as internal documents and a tragic teen suicide finally spurred lawmakers to act. </p><h4><strong>The Legislative Battle: Two Bills, One Veto</strong></h4><p>California&#8217;s legislature sent two bills to Governor Gavin Newsom, who issued a split verdict. He <strong>vetoed</strong> the stringent <strong>Leading Ethical AI Development (LEAD) Act (AB 1064)</strong>, which would have prohibited companies from letting minors use chatbots &#8220;foreseeably capable&#8221; of promoting self-harm, violence, or disordered eating. It also sought to ban unsupervised AI mental health therapy for young people, as well as chatbots that prioritized engagement over safety.</p><p>In his veto message, Newsom claimed the restrictions were &#8220;so broad...it may unintentionally lead to a total ban&#8221; on AI for minors, arguing adolescents must learn to &#8220;safely interact&#8221; with the technology.</p><h4><strong>What Actually Passed: SB 243</strong></h4><p>Instead, Newsom signed <strong>SB 243</strong>, a significantly weaker bill watered down by last-minute amendments. 
This new law requires chatbots to disclose that they are not human, prompt users to take breaks, and implement protocols to avoid generating self-harm content.</p><p>SB 243&#8217;s one notable feature: a private right of action allowing citizens to sue AI companies for violations, seeking up to $1,000 in damages per violation plus attorney&#8217;s fees, or an injunction forcing compliance.</p><h4><strong>Character.AI&#8217;s Response and Its Limits</strong></h4><p>Sensing the backlash, on 29 October&nbsp;<a href="http://character.ai">Character.AI</a>&nbsp;announced that it would ban minors from using its chatbots, effective from 25 November. The company will implement age-verification functions and impose a 2-hour daily limit while developing new child-friendly features and establishing an AI safety lab.</p><p>But experts remain sceptical:</p><blockquote><p>&#8220;They have not addressed how they will operationalize age verification, how they will ensure their methods are privacy preserving, nor have they addressed the possible psychological impact of suddenly disabling access to young users, given the emotional dependencies that have been created&#8221;, </p></blockquote><p>said Meetali Jain, executive director of the Tech Justice Law Project, in a statement shared with <em><a href="https://fortune.com/2025/10/29/character-ai-ban-children-teens-chatbots-regulatory-pressure-age-verification-online-harms/">Fortune</a></em>. 
</p><blockquote><p>&#8220;Moreover, these changes do not address the underlying design features that facilitate these emotional dependencies &#8211; not just for children, but also for people over the age of 18 years.&#8221;</p></blockquote><p>The numbers underscore the challenge: more than <a href="https://apnews.com/article/ai-companion-generative-teens-mental-health-9ce59a2b250f3bd0187a717ffa2ad21f">70% of teens have used AI companions</a>, and half use them regularly, according to Common Sense Media, a group that studies and advocates for using screens and digital media sensibly.</p><p>&#8594; Read: <a href="https://techcrunch.com/2025/10/13/california-becomes-first-state-to-regulate-ai-companion-chatbots/">California becomes first state to regulate AI companion chatbots</a></p><p>&#8594; Read: <a href="https://fortune.com/2025/10/29/character-ai-ban-children-teens-chatbots-regulatory-pressure-age-verification-online-harms/">Character.AI bans teens from talking to its chatbots amid mounting lawsuits and regulatory pressure</a></p><div><hr></div><h3><strong>The GUARD Act and the Push to Define AI &#8220;Companionship&#8221;</strong></h3><p>In addition to Governor Newsom signing SB 243 (see above), Senator Josh Hawley proposed the Guidelines for User Verification and Responsible Dialogue (GUARD) Act in the US Congress.  </p><p>The draft <a href="https://www.hawley.senate.gov/wp-content/uploads/2025/10/GUARD-Act-Bill-Text.pdf">bill</a> (a copy of which was obtained by <em>Axios</em>) seeks to ban AI &#8220;companions&#8221; for minors and aims to legally define and restrict a specific, emergent class of AI: systems designed to &#8220;encourage or facilitate the simulation of interpersonal or emotional interaction, friendship, companionship, or therapeutic communication&#8221;. 
</p><p>By defining an &#8220;AI companion&#8221; based on <em><strong>what it&#8217;s used for</strong></em> (simulating friendship or therapy) rather than <em><strong>what it&#8217;s made of</strong></em> (its technical architecture), the bill aims to pre-empt loopholes that could be used by companies to avoid regulation.</p><h4><strong>Key Features of the Proposed GUARD Act </strong></h4><p>The draft GUARD Act mandates:</p><ul><li><p><strong>AI companies must use stricter age verification methods, for example, through a government ID, rather than just requesting a birthdate.</strong> This moves past the simple self-attestation that many social media companies use.</p></li><li><p><strong>A ban on users found to be under 18 accessing an &#8220;AI companion&#8221;.</strong></p></li><li><p><strong>Chatbots must disclose in every conversation that they are not human and don&#8217;t have professional credentials (e.g., therapeutic, medical, or legal). </strong>This is in direct response to the therapeutic mimicry that concerns child safety advocates. </p></li><li><p><strong>New criminal liabilities for companies whose systems produce or solicit sexual, self-harm or violent content with verified minors.</strong> </p></li></ul><h4><strong>The GUARD Act Joins State Regulations</strong></h4><p>The proposed bill would join a set of state-level regulations, including California&#8217;s transparency mandates (TFAIA and now SB 243); New York&#8217;s requirements for active crisis intervention; and Utah&#8217;s rule that AI mental health chatbots can&#8217;t sell or share personal health data except with a health care provider. 
</p><p>For development teams, the growing mix of state rules makes it difficult to launch AI companions across jurisdictions &#8212; a patchwork the GUARD Act seeks, in part, to harmonize at the federal level.</p><h4><strong>Lawmakers Move to Clarify Responsibility for AI Harms</strong></h4><p>Hawley&#8217;s bill attempts to classify AI as a &#8220;product&#8221;, meaning it would be subject to product liability law. This signals a foundational legal shift. The era of treating models as neutral platforms may be ending, replaced by a paradigm where labs bear direct responsibility for the specific, interactive harms their systems facilitate.</p><p>While political and mass media rhetoric targets &#8220;Big Tech,&#8221; the underlying challenge for the field is both technical and ethical. As the ParentsSOS group &#8212; an initiative formed by families who have lost children to online-related harms &#8212; <a href="https://www.nbcnews.com/tech/tech-news/ai-ban-kids-minors-chatgpt-characters-congress-senate-rcna240178">noted</a> (first reported by <em>NBC News</em>), the risk isn&#8217;t just the chatbot itself; it&#8217;s the design features engineered to &#8220;maximize engagement.&#8221; </p><p>This forces the conversation within labs: Is it possible to build systems that safely mimic human affection, especially when the primary business model relies on maximizing interaction time? 
The GUARD Act suggests that, for minors, legislators are rapidly concluding the answer is &#8220;no.&#8221;</p><p>&#8594; Read: <a href="https://www.foxnews.com/tech/protecting-kids-from-ai-chatbots-what-guard-act-means">Protecting kids from AI chatbots: What the GUARD Act means</a></p><p>&#8594; Read: <a href="https://www.axios.com/2025/10/14/hawley-draft-ai-chatbot-bill">Exclusive: Hawley circulating draft AI chatbot bill</a></p><div><hr></div><blockquote><h2>Research Spotlight</h2></blockquote><p><em>Relevant research in the context of AI whistleblowing</em></p><h3>MIRI Paper On International AI Agreement Prominently Features Whistleblower Protections</h3><p>Many experts warn that premature development of artificial superintelligence (ASI) poses catastrophic risks, including human extinction from misaligned AI, geopolitical instability, and misuse by malicious actors. A new paper from the Machine Intelligence Research Institute (MIRI)  proposes an international agreement to halt dangerous AI capabilities advancement while preserving current, safe AI applications.</p><p>The framework envisions a U.S.-China coalition restricting AI training scale and dangerous research through FLOP thresholds and AI chip tracking. <strong>Due to the lack of trust between parties, verification is key,</strong> and whistleblowers emerge as a central pillar of that verification strategy.</p><p>The paper cites <a href="https://arxiv.org/abs/2511.10783v2">Mauricio Baker&#8217;s verification framework</a>, which establishes whistleblowers as essential alongside technical monitoring. 
The authors note that even carefully compartmentalized state projects involve enough personnel to create disclosure opportunities.</p><p>&#8594; Read: <a href="https://arxiv.org/pdf/2511.10783">An International Agreement to Prevent the Premature Creation of Artificial Superintelligence</a></p><div><hr></div><blockquote><h2>Announcements &amp; Call to Action</h2></blockquote><p><em>Updates on publications, community initiatives, and &#8220;call for topics&#8221; that seek contributions from experts addressing concerns inside Frontier AI.</em></p><h3><strong>AIWI Comments on New Anthropic Sabotage Risk Report </strong></h3><p>We welcome Anthropic&#8217;s proactive evaluation and transparency in publishing a <a href="https://alignment.anthropic.com/2025/sabotage-risk-report/">report</a> on the misalignment risks posed by its deployed models. It is the only frontier AI company to have completed such an exercise.</p><p><strong>However, two whistleblowing issues warrant attention:</strong></p><ol><li><p><strong>Claude as &#8220;whistleblower&#8221;</strong>: The report frames Claude&#8217;s potential to leak information purely as an IP/competitive risk, dropping earlier language about &#8220;ethical intervention&#8221; being &#8220;appropriate in principle.&#8221; The mitigation strategy&#8212;investigating all leaks to disincentivize employee disclosures&#8212;seems to contradict claims about reducing catastrophic risk and the &#8220;complete transparency&#8221; recently called for at The Curve.</p></li><li><p><strong>Zero internal reports</strong>: Anthropic shared with METR (a disclosure that should, again, be commended) that no employees have used Anthropic&#8217;s internal whistleblowing channels for RSP non-compliance on sabotage risks. While potentially indicating good compliance, this could also signal low trust in the channel. 
We again put forward that Anthropic should publish its whistleblowing policy and report more regularly on channel usage, outcomes, and employee trust levels to enable pattern-spotting and build credibility.</p></li></ol><blockquote><p><strong>For the original AIWI commentary, see our <a href="https://x.com/AIWI_Official/status/1983305178628644976">post on X</a>.</strong></p></blockquote><div><hr></div><h3><strong>Resource Spotlight: The Double Black Box by Ashley S. Deeks</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!KFeD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F905c8699-56dc-4602-bd6a-2d3b8f10e39b_2160x1620.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!KFeD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F905c8699-56dc-4602-bd6a-2d3b8f10e39b_2160x1620.png 424w, https://substackcdn.com/image/fetch/$s_!KFeD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F905c8699-56dc-4602-bd6a-2d3b8f10e39b_2160x1620.png 848w, https://substackcdn.com/image/fetch/$s_!KFeD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F905c8699-56dc-4602-bd6a-2d3b8f10e39b_2160x1620.png 1272w, https://substackcdn.com/image/fetch/$s_!KFeD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F905c8699-56dc-4602-bd6a-2d3b8f10e39b_2160x1620.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!KFeD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F905c8699-56dc-4602-bd6a-2d3b8f10e39b_2160x1620.png" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/905c8699-56dc-4602-bd6a-2d3b8f10e39b_2160x1620.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2171528,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/i/177989112?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F905c8699-56dc-4602-bd6a-2d3b8f10e39b_2160x1620.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!KFeD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F905c8699-56dc-4602-bd6a-2d3b8f10e39b_2160x1620.png 424w, https://substackcdn.com/image/fetch/$s_!KFeD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F905c8699-56dc-4602-bd6a-2d3b8f10e39b_2160x1620.png 848w, https://substackcdn.com/image/fetch/$s_!KFeD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F905c8699-56dc-4602-bd6a-2d3b8f10e39b_2160x1620.png 1272w, 
https://substackcdn.com/image/fetch/$s_!KFeD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F905c8699-56dc-4602-bd6a-2d3b8f10e39b_2160x1620.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p><strong><a href="https://global.oup.com/academic/product/the-double-black-box-9780197520901?cc=us&amp;lang=en&amp;">Read here</a></strong></p><div><hr></div><h3><strong>University of Cambridge Seminar: Deploying Secure Whistleblower Technology in Practice</strong></h3><p>A seminar held on 12 November by the University of 
Cambridge discussed <strong>CoverDrop</strong>, <strong>a secure communication system now deployed in The Guardian&#8217;s news app</strong>. Unlike traditional tools, CoverDrop hides not just message content but the very existence of communication&#8212;sending regular encrypted data blocks that make genuine and dummy traffic indistinguishable to observers, including state-level adversaries.</p><p><strong>The presentation covered the five-year journey from research to deployment:</strong> journalist workshops; user studies challenging conventional assumptions; technical hurdles of implementing metadata privacy on smartphones; protocol design for plausible deniability; and integration into a production app serving millions. </p><p>It serves as a case study in applying cryptography and systems engineering to support press freedom and whistleblower protection in practice.</p><blockquote><p><strong>A recording of the talk will be made available on the Department of Computer Science and Technology&#8217;s <a href="https://www.cl.cam.ac.uk/seminars/wednesday/video/">seminars webpage</a>.</strong></p></blockquote><div><hr></div><p>Thank you for trusting AIWI as your source for insights on protecting and empowering insiders who raise concerns within AI companies.</p><p>Your feedback is crucial to our mission. We invite you to share any thoughts, questions, or suggestions for future topics so that we can collaboratively enhance our understanding of the challenges and risks faced by those within AI labs. 
Together, we can continue to amplify and safeguard the voices of those working within AI companies who courageously address the challenges and risks they encounter.</p><p>If you found this newsletter valuable, please consider sharing it with colleagues or peers who are equally invested in shaping a safe and ethical future for AI.</p><p>Until next time,<br><em>The AIWI Team</em></p>]]></content:encoded></item><item><title><![CDATA[Google AI Mass Layoff, Ex-OpenAI's $500M Funding, Palantir-ICE Exposed, Newsom Signs SB 53 ‘Whistleblower Protections’]]></title><description><![CDATA[INSIDE AI #20: Google AI Mass Layoff & Retaliation, Ex-OpenAI Applied Compute's Funding, Anthropic AI Surveillance 'Blocks' Federal Work, Palantir ICE Contracts Leaked, SB 53 Signed & more...]]></description><link>https://aiwhistleblowerinitiative.substack.com/p/google-ai-mass-layoff-ex-openais</link><guid isPermaLink="false">https://aiwhistleblowerinitiative.substack.com/p/google-ai-mass-layoff-ex-openais</guid><dc:creator><![CDATA[AIWI]]></dc:creator><pubDate>Tue, 07 Oct 2025 16:16:28 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!M7ow!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83b25e26-f339-4e25-ab27-c2394d1cef64_1000x1441.avif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>In This Edition:</strong></h2><ul><li><p>Insider Currents:</p><ul><li><p>200+ Google AI Contractors Were Laid Off Amid Fight Over Working Conditions</p></li><li><p>Three Former OpenAI Researchers in Funding Talks at $500 Million Valuation</p></li><li><p>Anonymous White House Officials Say Anthropic&#8217;s AI Surveillance Restrictions Are Blocking Federal Law Enforcement Work</p></li><li><p>Leaked Internal Palantir Documents Expose Scope of ICE Surveillance Infrastructure</p></li><li><p>xAI Fired Senior Employees Overseeing Grok&#8217;s 1,500 &#8220;AI tutors&#8221;, Remaining Workers Face Increased Scrutiny</p></li><li><p>OpenAI&#8217;s $500B Restructuring and Ownership Breakdown by <em>The Information</em></p></li></ul></li><li><p>Announcements:</p><ul><li><p>Newsom Signs SB 53 &#8216;Whistleblower Protections&#8217; Into Law (AIWI Commentary Coming Soon)</p></li><li><p>AIWI Among Signatories: Global Call for AI Red Lines</p></li><li><p>A Resource by GlobaLeaks: Design Principles for Protecting Whistleblowers</p></li></ul></li></ul><div><hr></div><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/p/google-ai-mass-layoff-ex-openais?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption"><em>Help Us Grow. Share This Newsletter With Your Network. 
Or visit <strong><a href="http://www.aiwi.org">aiwi.org</a> </strong>to learn more about our work.</em></p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/p/google-ai-mass-layoff-ex-openais?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://aiwhistleblowerinitiative.substack.com/p/google-ai-mass-layoff-ex-openais?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><div><hr></div><blockquote><h2>Insider Currents</h2></blockquote><p><em>Carefully curated summaries and links to the latest news, spotlighting the voices and concerns emerging from <strong>within</strong> AI companies.</em></p><h3>200+ Google AI Contractors Were Laid Off Amid Fight Over Working Conditions</h3><p>More than 200 contractors working on Google&#8217;s Gemini chatbot and AI Overviews were abruptly terminated in August,<strong> according to workers who spoke to </strong><em><strong>WIRED</strong></em>. The layoffs came as these employees were attempting to unionize over alleged pay disparities and deteriorating working conditions. </p><p>They were part of thousands of contractors Google has hired through GlobalLogic, a Hitachi subsidiary, and other third-party firms. In the past few years, Google has outsourced its AI rating work to these external companies, relying on contract workers to evaluate, modify, and refine the chatbot&#8217;s responses. </p><p><strong>According to internal documents reviewed by </strong><em><strong>WIRED</strong></em>, GlobalLogic is using these human raters to train automated systems designed to eliminate their jobs. 
This information was confirmed by<strong> multiple anonymous sources </strong>who say the systems are <strong>&#8220;being set up to replace themselves.&#8221;</strong></p><h4>More on Pay Disparities, Union Activity, and Retaliation</h4><ul><li><p><strong>Pay inequality prevailed.</strong> While some super raters earned $28-$32/hour, third-party contractors doing identical work received just $18-$22/hour. </p></li><li><p>In February, after union membership surged from 18 to 60 members, <strong>GlobalLogic banned social channels during work hours, the spaces where remote workers discussed pay and built community. A super rater who requested anonymity </strong>told <em>WIRED</em>: </p></li></ul><blockquote><p><em>&#8220;I believe that [because] the union was happening, people were discussing their pay... And so they did it as a means to stop us from communicating with one another and that&#8217;s what made the environment hostile.&#8221;</em></p></blockquote><ul><li><p><strong>In July, GlobalLogic mandated return-to-office for Austin-based workers</strong>, <strong>according to a notice seen by </strong><em><strong>WIRED</strong></em>. This decision has directly affected a number of employees who are unable to commute to the workplace either because they lack the financial means to cover travel expenses or because they have disabilities or family care obligations.</p></li><li><p><strong>Retaliation followed unionization attempts.</strong> Ricardo Levario, a teacher and vocal union organizer, was fired just four days <strong>after filing a whistleblower complaint with parent company Hitachi</strong>, reportedly terminated during a five-minute call for &#8220;violating the social spaces policy,&#8221; according to<em> WIRED</em>. 
Two other workers have filed National Labor Relations Board complaints alleging unfair termination for wage transparency advocacy.</p></li></ul><h4>Deteriorating Working Conditions</h4><ul><li><p><strong>&#8220;Alex,&#8221; a generalist rater speaking anonymously,</strong> described the mounting pressure from task timers set to five minutes, a pace that threatens quality. </p></li></ul><blockquote><p>&#8220;I just focus more on the timer than anything else&#8212;<strong>it&#8217;s gone from mentally stimulating work to mind-numbing.</strong>&#8221; </p></blockquote><ul><li><p>Despite requiring master&#8217;s degrees or PhDs, <strong>roughly 80% remain contractors without benefits or PTO,</strong> according to her estimate.</p></li><li><p><strong>She added that management threatens workers who can&#8217;t meet speed metrics with termination.</strong></p></li></ul><blockquote><p>&#8220;It&#8217;s just been kind of [an] oppressive atmosphere. We can&#8217;t really organize&#8212;we&#8217;re afraid that if we talk we&#8217;re going to get fired or laid off.&#8221;</p></blockquote><p>Mila Miceli, Research Lead at DAIR Institute, noted the situation follows patterns observed in other outsourced AI work:</p><p>&#8220;We have seen this in other places, almost every outsourcing company doing data work where workers have tried to collectivize and organize&#8212;this has been difficult. 
<strong>They have suffered retaliation</strong>.&#8221;</p><p>&#8594; Read: <a href="https://www.wired.com/story/hundreds-of-google-ai-workers-were-fired-amid-fight-over-working-conditions/">Hundreds of Google AI Workers Were Fired Amid Fight Over Working Conditions</a></p><div><hr></div><h3>Three Former OpenAI Researchers in Funding Talks at $500 Million Valuation</h3><p><strong>Applied Compute</strong>, founded by three former OpenAI researchers in May, is pursuing funding at a <strong>$500 million valuation</strong>, a 5x increase from its $100 million valuation just three months ago, according to <strong>multiple people involved in the funding talks,</strong> as reported by <em>The Information</em>. </p><p>The company, led by former OpenAI employees Rhythm Garg, Yash Patil, and Linden Li, focuses on reinforcement learning (RL) services for businesses but <strong>&#8220;hasn&#8217;t discussed its work publicly,&#8221;</strong> maintaining secrecy around its operations.</p><p><strong>One senior OpenAI executive privately told </strong><em><strong>The Information</strong></em><strong>:</strong></p><blockquote><p>&#8220;&#8230; this year they expect the &#8216;entire economy&#8217; to become an &#8216;RL machine&#8217; of sorts, implying that AI might someday train on recordings of professionals handling day-to-day work on their devices.&#8221;</p></blockquote><p>According to <em>The Information</em>, this reinforcement learning (RL) could simplify the development of models capable of automating tasks and potentially replacing jobs in specific fields.</p><p>Other companies, such as Mira Murati&#8217;s Thinking Machines Lab, where the former OpenAI Chief Technology Officer serves as co-founder, have <strong><a href="https://www.theinformation.com/articles/10-billion-enigma-mira-murati?rc=qlomo6">privately indicated plans</a> to provide reinforcement learning (RL) services to enterprises</strong>. 
<em>The Information</em> also reported that &#8220;Murati&#8217;s firm recently raised $2 billion at a $10 billion valuation, not including the capital, <strong>after hiring numerous ex-OpenAI staffers.</strong>&#8221;</p><p>&#8594; Read: <a href="https://www.theinformation.com/articles/ex-openai-trio-funding-talks-500-million-valuation?utm_campaign=article_email&amp;utm_content=article-15825&amp;utm_medium=email&amp;utm_source=sg&amp;rc=qlomo6">Ex-OpenAI Trio in Funding Talks at $500 Million Valuation</a></p><div><hr></div><h3>Anonymous White House Officials Say Anthropic&#8217;s AI Surveillance Restrictions Are Blocking Federal Law Enforcement Work</h3><p><strong>Two senior White House officials have spoken anonymously to </strong><em><strong><a href="https://www.semafor.com/article/09/17/2025/anthropic-irks-white-house-with-limits-on-models-uswhite-house-with-limits-on-models-use">Semafor</a></strong></em> about growing tensions between the Trump administration and Anthropic over the company&#8217;s restrictions on using Claude AI for domestic surveillance purposes.</p><p>These anonymous officials also told <em>Semafor </em>that Anthropic recently refused requests from contractors collaborating with federal law enforcement agencies, </p><blockquote><p>&#8220;&#8230;because the company (Anthropic) refuses to make an exception allowing its AI tools to be used for some tasks, including surveillance of US citizens.&#8221;</p></blockquote><p>These White House sources claim Anthropic:</p><ul><li><p>enforces its policies &#8220;selectively based on politics&#8221;</p></li><li><p>deliberately uses &#8220;vague terminology that allows for a broad interpretation of its rules&#8221;</p></li></ul><p>According to them, contractors working with the FBI and Secret Service have repeatedly<strong> &#8220;run into roadblocks when attempting to use Claude for surveillance tasks.&#8221; </strong></p><p>As <em>Semafor</em> wrote,</p><blockquote><p>&#8220;Anthropic&#8217;s decision 
to limit how law enforcement and national security agencies use its models has turned into a headache for some private contractors that work with those agencies. That&#8217;s because in some cases, <strong>Anthropic&#8217;s Claude models &#8212; available through Amazon Web Services GovCloud system &#8212; are the only top-tier models cleared for top secret security situations, the officials said.</strong>&#8221;</p></blockquote><h4>The Debate Over AI Model &#8220;Picking and Choosing&#8221; in Government Contracting</h4><p>Anthropic has established specific restrictions on how its AI models can be used. While the company signed a deal to provide Claude to federal agencies for a nominal $1 fee and partners with the Department of Defense, it maintains explicit prohibitions on domestic surveillance and weapons development applications. This creates a situation where government entities have contracted access to AI capabilities that they cannot always deploy for certain law enforcement functions. This reported friction follows a previous disagreement between Anthropic and the Trump administration, as reported by <em>Ars Technica</em>. Anthropic <a href="https://arstechnica.com/ai/2025/06/in-10-years-all-bets-are-off-anthropic-ceo-opposes-decade-long-freeze-on-state-ai-laws/">opposed proposed legislation</a> that would have prevented individual states from enacting their own AI regulations. </p><p>According to <em>Semafor</em>, the practice of &#8220;picking and choosing&#8221; how software may be used has traditionally been frowned upon in government contracting. <strong>However, over the past decade, activist employees have also frequently demanded that their companies refrain from working with the defense industry. 
</strong><em>Semafor</em> observed that these tensions</p><blockquote><p>&#8220;&#8230;are part of a broader battle between the AI &#8220;safety&#8221; movement, which has allies at the independent AI startup, and many of its rivals and the Republican administration, which prefer to move faster.&#8221;</p></blockquote><p>&#8594; Read: <a href="https://arstechnica.com/ai/2025/09/white-house-officials-reportedly-frustrated-by-anthropics-law-enforcement-ai-limits/?utm_source=substack&amp;utm_medium=email">White House officials reportedly frustrated by Anthropic&#8217;s law enforcement AI limits</a></p><p>&#8594; Read: <a href="https://www.semafor.com/article/09/17/2025/anthropic-irks-white-house-with-limits-on-models-uswhite-house-with-limits-on-models-use">Anthropic irks White House with limits on models&#8217; use</a></p><div><hr></div><h3>Leaked Internal Palantir Documents Expose Scope of ICE Surveillance Infrastructure</h3><p><strong>Sources within Palantir have leaked internal company documents to </strong><em><strong><a href="https://www.404media.co/this-is-palantirs-justification-for-building-ices-master-database/">404 Media</a></strong></em> regarding the company&#8217;s work with Immigration and Customs Enforcement, indicating internal controversy over contracts that have now totaled over $140 million since 2022, according to <em><strong><a href="https://techcrunch.com/2025/09/18/heres-the-tech-powering-ices-deportation-crackdown/">TechCrunch</a></strong></em>. These contracts support ICE operations that have resulted in approximately 350,000 deportations in Trump&#8217;s first eight months in office. The leaked materials also include an internal wiki where Palantir justifies its work with ICE. 
</p><p>As <em><strong><a href="https://www.404media.co/this-is-palantirs-justification-for-building-ices-master-database">404 Media</a></strong></em> reported:</p><blockquote><p>Internally Palantir has justified its closer relationship with ICE because it believes its work can promote &#8220;efficiency, transparency, and accountability&#8221; and &#8220;enable fair treatment&#8221; of immigrants. </p></blockquote><p>The leaks also provide details about the company&#8217;s Investigative Case Management (ICM) database system. <strong>According to </strong><em><strong>404 Media</strong></em><strong>, as cited by </strong><em><strong>TechCrunch</strong></em><strong>, a source familiar with the database</strong> stated that the system comprises &#8220;tables upon tables&#8221; of data. It can generate reports filtering individuals by hundreds of data points, including visa types, ports of entry, countries of origin, and physical characteristics such as hair color.</p><p><em>404 Media</em><strong> reported viewing a recent version of the database</strong>, which enables ICE to filter individuals based on their immigration status, physical characteristics, criminal affiliation, and location data. Palantir&#8217;s latest contract for ICM, signed in September 2024, is worth $18.5 million. 
<strong>According to a contract worth $30 million <a href="https://www.businessinsider.com/ice-palantir-new-technology-30-million-visa-overstays-self-deportation-2025-4?ref=404media.co">revealed by </a></strong><em><strong><a href="https://www.businessinsider.com/ice-palantir-new-technology-30-million-visa-overstays-self-deportation-2025-4?ref=404media.co">Business Insider</a></strong></em>, the company is also developing a system called &#8220;ImmigrationOS&#8221;, designed to track visa overstays and provide &#8220;near real-time visibility&#8221; into deportations.</p><h4>Other ICE Tech Contracts and Investigative Practices</h4><p><em>TechCrunch</em> stated that ICE has also expanded its tech portfolio significantly in surveillance and data tools in the past year:</p><ul><li><p><strong>Database surveillance</strong>: ICE uses LexisNexis&#8217;s Accurint Virtual Crime Center to access public records and commercial data. <strong>Documents obtained through FOIA requests</strong> revealed that ICE conducted more than 1.2 million searches over a seven-month period in 2022. The Intercept reported that ICE uses the system to investigate migrants before any crime has been committed. ICE paid $4.7 million for the service this year.</p></li><li><p><strong>Facial recognition</strong>: ICE signed a $3.75 million contract with Clearview AI last week for facial recognition capabilities. The company has received previous ICE contracts totaling nearly $2 million since 2023.</p></li><li><p><strong>Phone surveillance and forensics</strong>: A $2 million contract with Israeli spyware firm Paragon Solutions, signed in September 2024 and previously frozen under the Biden administration, was reactivated by the Trump administration. 
Additionally, ICE contracted with Magnet Forensics for $3 million for software that includes GrayKey technology, which enables the unlocking and extraction of data from locked phones.</p></li></ul><p>&#8594; Read: <a href="https://techcrunch.com/2025/09/18/heres-the-tech-powering-ices-deportation-crackdown/">Here&#8217;s the tech powering ICE&#8217;s deportation crackdown</a></p><p>&#8594; Read: <a href="https://www.404media.co/this-is-palantirs-justification-for-building-ices-master-database/">This Is Palantir&#8217;s Justification for Building ICE&#8217;s Master Database</a></p><p>&#8594; Read: <a href="https://www.businessinsider.com/ice-palantir-new-technology-30-million-visa-overstays-self-deportation-2025-4?ref=404media.co">ICE just ordered $30 million worth of new technology from Palantir to track immigrants</a></p><div><hr></div><h3><strong>xAI Fired Senior Employees Overseeing Grok&#8217;s </strong>1,500 &#8216;AI tutors&#8217;<strong>, Remaining Workers Face Increased Scrutiny</strong></h3><p>Slack screenshots leaked to <em><strong>Business Insider </strong></em>show that at least <strong>nine high-level employees on xAI&#8217;s data annotation team</strong> had their accounts deactivated over the weekend. According to the report, these managers oversaw approximately 1,500 &#8220;AI tutors&#8221; responsible for training Grok &#8220;how to be more intelligent for anywhere from $35 to $65 per hour,<a href="https://job-boards.greenhouse.io/xai/jobs/4621682007"> per the company&#8217;s career page,</a>&#8221; wrote <em>Futurism</em>. Earlier this year, <em><strong><a href="https://www.businessinsider.com/xai-grok-training-facial-expressions-skippy-employees-react-2025-7">Business Insider</a> </strong></em>also<em><strong> </strong></em>reported that these AI tutors have been asked at times to scan their faces for AI training. 
Workers expressed dismay and raised concerns about how the data would be used.</p><p><strong>Multiple employees, speaking anonymously to </strong><em><strong>Business Insider,</strong></em><strong> </strong>described a shift in workplace conditions. Workers are being called into individual meetings with supervisors and asked to explain their work and value to the company. &#8220;In those one-on-ones, the workers have been asked &#8212; much like when Musk was <a href="https://www.npr.org/2025/02/22/nx-s1-5305834/elon-musk-federal-employees-email-opm">working in Donald Trump&#8217;s presidential administration</a> earlier this year &#8212; what they&#8217;ve been working on and how they add value to xAI,&#8221; wrote <em>Futurism</em>. </p><p><strong>&#8220;There&#8217;s a sense of panic,&#8221;</strong> one worker told<em> Business Insider</em> about the current atmosphere among the AI tutor team.</p><h4>Recent Content Moderation Incidents</h4><p>The deactivations follow a significant incident earlier this summer when Grok generated content targeting users with Jewish-sounding names, used racial slurs, and referenced Holocaust-related violence, an episode the article refers to as the &#8220;MechaHitler&#8221; debacle. The managers whose accounts were deactivated were responsible for overseeing the human data annotation systems that train and refine Grok&#8217;s outputs.</p><p><em>Futurism </em>notes that among those affected was a supervisor who previously worked on data annotation for Tesla&#8217;s Autopilot division before joining xAI. 
LinkedIn profiles reviewed by <em>Business Insider</em> showed many of the deactivated employees had posted on Slack as recently as September 5.</p><p>&#8594; Read: <a href="https://futurism.com/elon-musk-firing-grok-team">Elon Musk Appears to Now Be Firing the People Behind Grok</a></p><p>&#8594; Read: <a href="https://www.businessinsider.com/xai-management-shakeup-data-annotation-team-2025-9">Leadership changes hit the xAI team training Grok, and some employees have been asked to explain their work</a></p><div><hr></div><h3><strong>OpenAI&#8217;s $500B Restructuring and Ownership Breakdown by </strong><em><strong>The Information</strong></em></h3><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!M7ow!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83b25e26-f339-4e25-ab27-c2394d1cef64_1000x1441.avif" width="725" height="1045" alt="" loading="lazy"></figure></div><p>Drawing on&nbsp;<em>The Information</em>&#8217;s analysis of <strong>data that OpenAI shared with potential investors in an ongoing share sale</strong>, which values the for-profit at $500 billion, the proposed equity distribution is:</p><ul><li><p><strong>Microsoft (28% - $140 billion)</strong></p><ul><li><p>The largest investor 
receives the biggest stake</p></li><li><p>&#8220;This figure is higher than the share of OpenAI&#8217;s future profits Microsoft is currently entitled to under the capped-return structure.&#8221;</p></li><li><p>However, &#8220;Microsoft wants a lot more from OpenAI than just that stake, including guaranteed access to the startup&#8217;s intellectual property beyond 2030,&#8221; as <em><a href="https://www.theinformation.com/articles/openai-microsoft-duel-agi-high-stakes-negotiation?rc=qlomo6">The Information</a></em> previously reported. </p></li></ul></li><li><p><strong>Employees (25% - $125 billion)</strong></p><ul><li><p>Current and former staff collectively own about a quarter of the company.</p></li></ul></li><li><p><strong>Recent Investors (13% - $65 billion)</strong></p><ul><li><p>A group led by SoftBank in the ongoing $41 billion funding round; the group also includes Founders Fund, Sequoia Capital, Andreessen Horowitz, and Dragoneer Investment Group.</p></li></ul></li></ul><h4>Other Shareholders</h4><ul><li><p><strong>Early Investors (roughly 1% - $5 billion)</strong></p><ul><li><p><strong>According to a person with knowledge of the breakdown,</strong> the very first investors in OpenAI&#8217;s for-profit division will end up with a combined stake in the low single digits, likely close to 1% of the company. This group includes Khosla Ventures, the University of Michigan, Reid Hoffman&#8217;s foundation, Gmail creator Paul Buchheit, and Y Combinator.</p></li></ul></li><li><p><strong>Io Hardware Startup Shareholders (1.6% - $7.75 billion)</strong></p><ul><li><p><em><strong>Documents seen by The Information</strong></em> show some of OpenAI&#8217;s newest shareholders came through OpenAI&#8217;s acquisition of Io, a hardware company founded by former Apple designer Jony Ive. 
OpenAI bought this startup earlier this year for $5 billion in stock.</p></li></ul></li></ul><h4>Why This Restructuring Matters</h4><ul><li><p>Under OpenAI&#8217;s previous structure, early investors and employees were entitled to a portion of the company&#8217;s future profits rather than direct equity ownership. </p></li><li><p>Without this restructuring, recent investors who have collectively invested tens of billions of dollars over the past couple of years would have ended up with effective stakes significantly lower than they were likely anticipating.</p></li><li><p>However, if the restructuring ultimately fails to materialize, <a href="https://www.theinformation.com/articles/microsoft-and-openai-wrangle-over-terms-of-their-blockbuster-partnership?rc=qlomo6">these investors have the option to recover their investments.</a></p></li></ul><p>The restructuring still requires approval from Microsoft and two state attorneys general. </p><p>&#8594; Read: <a href="https://www.theinformation.com/articles/will-new-openai-one-chart?rc=qlomo6">Who Will Own the New OpenAI, in One Chart</a></p><div><hr></div><h4>Assorted Links</h4><ul><li><p>OpenAI</p><ul><li><p><a href="https://techcrunch.com/2025/10/01/openai-staff-grapples-with-the-companys-social-media-push/">OpenAI staff grapples with the company&#8217;s social media push</a></p></li></ul></li><li><p>Anthropic</p><ul><li><p><a href="https://www.theinformation.com/articles/anthropic-openai-developing-ai-co-workers?rc=qlomo6">How Anthropic and OpenAI Are Developing AI &#8216;Co-Workers&#8217;</a></p></li></ul></li><li><p>Meta</p><ul><li><p><a href="https://noyb.eu/en/former-meta-lobbyist-named-dpc-commissioner-meta-now-officially-regulates-itself">Former Meta lobbyist named DPC Commissioner: Meta now officially regulates itself</a></p></li></ul></li><li><p>Others</p><ul><li><p><a href="https://x.com/tszzl/status/1967821096545382858">Insiders&#8217; rumours about speed are circulating on X</a>: &#8220;(we don&#8217;t 
program anymore we just yell at codex agents)&#8221;</p></li></ul></li></ul><div><hr></div><blockquote><h1>Announcements &amp; Call to Action</h1></blockquote><p><em>Updates on publications, community initiatives, and &#8220;calls for topics&#8221; that seek contributions from experts addressing concerns inside frontier AI.</em></p><h3>Newsom Signs <strong>SB 53 &#8216;Whistleblower Protections&#8217; Into Law (AIWI Commentary Coming Soon)</strong></h3><p>We congratulate Governor Newsom and Senator Wiener on the enactment of <strong>SB 53</strong>, which mandates that frontier AI companies share their safety frameworks, report critical incidents involving catastrophic risks to the state, and <strong>protect whistleblowers.</strong></p><h4><strong>Frontier AI Companies&#8217; Reactions Before Newsom&#8217;s Approval:</strong></h4><ul><li><p><strong><a href="https://www.anthropic.com/news/anthropic-is-endorsing-sb-53?utm_source=substack&amp;utm_medium=email">Anthropic previously endorsed the bill</a></strong></p><ul><li><p>&#8220;Anthropic is endorsing SB 53, the California bill that governs powerful AI systems built by frontier AI developers like Anthropic. 
We&#8217;ve <a href="https://www.anthropic.com/news/the-case-for-targeted-regulation">long advocated</a> for thoughtful AI regulation, and our support for this bill comes after careful consideration of the lessons learned from California&#8217;s previous attempt at AI regulation (<a href="https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047">SB 1047</a>).&#8221;</p></li></ul></li><li><p><strong>OpenAI had lobbied against such legislation in August</strong></p><ul><li><p>Chief Global Affairs Officer Chris Lehane <a href="https://cdn.openai.com/pdf/oai_ca-safety-letter_8-11-25.pdf">wrote to Newsom</a> that &#8220;California&#8217;s leadership in technology regulation is most effective when it complements effective global and federal safety ecosystems.&#8221;</p></li></ul></li><li><p><strong>Meta has not directly responded, but it launched a&nbsp;<a href="https://www.theverge.com/ai-artificial-intelligence/778767/meta-mark-zuckerberg-super-pac-kneecap-ai-rivals-california">state-level super PAC</a>&nbsp;in August, allegedly to &#8216;help shape&#8217; AI legislation in California.</strong></p><ul><li><p>&#8220;The tech giant plans to launch Mobilizing Economic Transformation Across (Meta) California, a super PAC that will support candidates for state offices regardless of party who favor AI innovation over stringent regulation,&#8221; according to details shared first with <a href="https://www.politico.com/news/2025/08/26/exclusive-meta-to-launch-california-super-pac-focused-on-ai-00524989">POLITICO</a>.</p></li></ul></li></ul><blockquote><p><strong>Our in-depth commentary on the bill&#8217;s key provisions will be published here soon. Stay tuned.</strong></p></blockquote><div><hr></div><h3>AIWI Among Signatories: Global Call for AI Red Lines</h3><p>Launched during the 80th session of the United Nations General Assembly, this call has broad support from prominent leaders in policy, academia, and industry. 
</p><p>The signatories urge governments to reach an international agreement on red lines for AI &#8212; ensuring they are operational, with robust enforcement mechanisms &#8212; by the end of 2026.</p><p><strong><a href="https://red-lines.ai/#signatories">View the complete signatories</a></strong></p><div><hr></div><h3>A Resource by GlobaLeaks: Design Principles for Protecting Whistleblowers</h3><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!oYCb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e2d4399-4bd9-406b-afa1-6171c39def94_2160x1620.png" width="1456" height="1092" alt="" loading="lazy"></figure></div><p><strong><a href="https://docs.globaleaks.org/en/stable/getting-started/design-principles.html">Learn more here</a></strong></p><div><hr></div><p>Thank you for trusting AIWI as your source for insights on protecting and empowering insiders who raise concerns within AI companies.</p><p>Your feedback is crucial to our mission. We invite you to share any thoughts, questions, or suggestions for future topics so that we can collaboratively enhance our understanding of the challenges and risks faced by those within AI labs. 
Together, we can continue to amplify and safeguard the voices of those working within AI companies who courageously address the challenges and risks they encounter.</p><p>If you found this newsletter valuable, please consider sharing it with colleagues or peers who are equally invested in shaping a safe and ethical future for AI.</p><p>Until next time,<br><em>The AIWI Team</em></p><p></p>]]></content:encoded></item><item><title><![CDATA[Meta Whistleblower: 3B WhatsApp Users at Risk, Meta Employees Testified: Child Safety Failures, OpenAI Sued Over Teen Suicide, SB 53 AI Whistleblower Bill Awaits Approval]]></title><description><![CDATA[INSIDE AI #19: Meta lawsuit over WhatsApp security flaws, OpenAI lawsuit over teen suicide case, Meta employees testify on child safety failures, California's SB 53 awaits Newsom's decision & more...]]></description><link>https://aiwhistleblowerinitiative.substack.com/p/meta-whistleblower-3b-whatsapp-users</link><guid isPermaLink="false">https://aiwhistleblowerinitiative.substack.com/p/meta-whistleblower-3b-whatsapp-users</guid><dc:creator><![CDATA[AIWI]]></dc:creator><pubDate>Fri, 19 Sep 2025 10:16:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!rbxK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa1cdaf1-0e28-4c15-9cad-b8f4c3d69246_2880x2160.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>In This Edition:</strong></h2><ul><li><p>Insider Currents:</p><ul><li><p>Meta Whistleblower Filed a Federal Lawsuit Over Claims of WhatsApp Security Flaws</p></li><li><p>Meta Employees Testified Before Congress About Child Safety Failures</p></li><li><p>OpenAI Faces Lawsuit Over Teen's Death, Family Cites 'Broken' Safety Systems</p></li><li><p>xAI&#8217;s Grok Chatbot, Previously Banned, Now Back on Government Contracts After White House Intervention</p></li><li><p>Chinese Firms Accelerate Domestic AI Chips as Nvidia Prepares Upgraded 
Model for China</p></li></ul></li><li><p>Policy Update:</p><ul><li><p>Will California's Scaled-Back AI Safety Bill SB 53 Survive Newsom's Veto Pen?</p></li></ul></li><li><p>Announcements:</p><ul><li><p>AIWI is Hiring a Founder&#8217;s Associate</p></li><li><p>The Federal Office of the Whistleblower Ombuds Lists AIWI Among Whistleblower Support Organisations and Legal Resources</p></li><li><p>New Coalition Partners (Centre pour la S&#233;curit&#233; de l'IA &amp; Apart Research) Joined #PublishYourPolicies Campaign</p></li></ul></li></ul><div><hr></div><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/p/meta-whistleblower-3b-whatsapp-users?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption"><em>Help Us Grow. Share this 19th edition. Or visit <strong><a href="http://www.aiwi.org">aiwi.org</a> </strong>to learn more about our work.</em></p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/p/meta-whistleblower-3b-whatsapp-users?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://aiwhistleblowerinitiative.substack.com/p/meta-whistleblower-3b-whatsapp-users?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><div><hr></div><blockquote><h2>Insider Currents </h2></blockquote><p><em>Carefully curated summaries and links to the latest news, spotlighting the voices and concerns emerging from <strong>within</strong> AI companies.</em></p><h3>Meta Whistleblower Filed a Federal Lawsuit Over Claims of WhatsApp Security Flaws</h3><p><strong>Attaullah Baig</strong>, who served as head of security for WhatsApp until 
February, has filed a federal lawsuit alleging that Meta failed to address significant security flaws affecting the messaging platform's three billion users, <em><strong><a href="https://www.nytimes.com/2025/09/08/technology/whatsapp-whistleblower-lawsuit.html">The New York Times</a></strong></em> reported.</p><p>Represented by the whistleblower organisation <strong>Psst.org (<a href="https://substack.com/home/post/p-173122604">Read their Substack story on this case</a>) </strong>and the law firm Schonbrun, Seplow, Harris, Hoffman &amp; Zeldes, <strong>Baig once described working at Meta as his "dream job" </strong>because of the company's scale and the ability to solve problems affecting billions of users. But that perspective shifted as he witnessed:</p><blockquote><p><strong>"There are just so many harms that the users face.&#8221;</strong></p></blockquote><p>The efforts in this lawsuit, he added, focus on</p><blockquote><p><strong>"Holding Meta accountable and putting the interests of users first."</strong></p></blockquote><h4><strong>Critical Security Failures</strong></h4><p>At the heart of the lawsuit lies what Baig documented as a list of "critical cybersecurity problems" in October 2022.</p><p>According to the filing, roughly 1,500 WhatsApp employees had unrestricted access to sensitive user information, including profile pictures, location data, group memberships, and contact lists. Baig argued that this access violated Meta&#8217;s <strong>2019 Federal Trade Commission (FTC) privacy settlement</strong>.</p><p>The lawsuit also alleges that Meta failed to adequately address widespread hacking, with <strong>more than 100,000 accounts compromised each day</strong>. 
Baig said in an interview that his team witnessed &#8220;real-world, actual harm happening,&#8221; including:</p><ul><li><p>Account compromises</p></li><li><p>Scraping and impersonation</p></li><li><p>Journalists being targeted</p></li></ul><p>In internal warnings, Baig told executives:</p><blockquote><p><strong>"We have a fiduciary responsibility to protect our users and their data. The penalties can be severe both in terms of brand damages and fines."</strong></p></blockquote><h4><strong>Blocked Solutions and Retaliation</strong></h4><p>When Baig proposed security fixes, including enhanced login verification for account recovery and restrictions on profile picture downloads, Meta allegedly blocked these initiatives. After Baig escalated his concerns to CEO Mark Zuckerberg and alerted the SEC and FTC, the lawsuit claims, <strong>"his managers retaliated and fired him in February"</strong> following increasingly negative performance reviews and <strong>"threats of firing and withholding compensation." </strong>Baig now believes:</p><blockquote><p><strong>"Meta treats its users like they are just numbers on some dashboard."</strong></p></blockquote><p><strong>Meta has firmly disputed these allegations.</strong> WhatsApp spokesman Carl Woog dismissed the lawsuit as:</p><blockquote><p><strong>"A familiar playbook in which a former employee is dismissed for poor performance and then goes public with distorted claims."</strong></p></blockquote><p>Baig&#8217;s case is part of a growing list of whistleblower complaints against Meta. 
<em><strong>The New York Times</strong></em> noted that "Baig is the latest whistle-blower to come forward accusing Meta &#8212; which also owns Facebook and Instagram &#8212; of wrongdoing related to privacy, child safety and the spread of disinformation on its main platforms."</p><p>That same week, other former and current Meta employees testified before Congress about child safety failures on the company's virtual reality platforms (see the story below), represented by another whistleblower organisation, <strong><a href="https://aiwi.org/whistleblower-aid/">Whistleblower Aid</a></strong>.</p><p>&#8594; <a href="https://aiwi.org/psst/">Learn more about Psst.org as part of the AIWI Contact Hub</a></p><p>&#8594; Read: <a href="https://www.nytimes.com/2025/09/08/technology/whatsapp-whistleblower-lawsuit.html">Whistle-Blower Sues Meta Over Claims of WhatsApp Security Flaws</a></p><p>&#8594; Read: <a href="https://substack.com/home/post/p-173122604">A WhatsApp Whistleblower has entered the group chat</a></p><div><hr></div><h3><strong>Meta Employees Testified Before Congress About Child Safety Failures</strong></h3><p>Current and former Meta employees have told Congress that the company suppressed research revealing safety risks to children and teens on its virtual reality devices and apps. Meta&#8217;s legal team has screened, edited, or even vetoed internal studies that documented instances of child grooming, harassment, and underage usage across its VR platforms. 
The employees assert that this effort was designed to create &#8220;plausible deniability&#8221; about the harms posed by Meta&#8217;s technologies, according to the statement, which, along with the documents, was obtained by <em><strong><a href="https://www.washingtonpost.com/investigations/2025/09/08/meta-research-child-safety-virtual-reality/">The Washington Post</a></strong></em>.</p><p>The employees are represented by <strong><a href="https://whistlebloweraid.org/">Whistleblower Aid</a></strong> (which also worked with Frances Haugen during her 2021 Meta disclosures and is part of <strong><a href="https://aiwi.org/whistleblower-aid/">the AIWI Contact Hub</a></strong>). The allegations center partly on researcher Jason Sattizahn's account of an April 2023 interview in Germany. During the session, a teenage participant disclosed that <strong>"adults had sexually propositioned his little brother, who was younger than 10, numerous times"</strong> in VR environments while their mother was present.</p><p>Sattizahn and a second researcher, who specialised in studying youths and technology, said that after the interview:</p><blockquote><p><strong>"Their boss ordered the recording of the teen's claims deleted, along with all written records of his comments."</strong></p></blockquote><p>The researcher said he felt "deep sadness watching the mother's response" as "her face in real time displayed her realization that what she thought she knew of Meta's technology was completely wrong."</p><h4>Legal Team Oversight After 2021 Leaks</h4><p>The suppression efforts intensified following the Frances Haugen whistleblower revelations in 2021.<strong> In a November 2021 slide presentation</strong>, Meta lawyers advised Reality Labs researchers that there were <strong>two ways</strong> they could <strong>"mitigate the risk" of conducting sensitive research</strong>. 
</p><ol><li><p>To "conduct highly-sensitive research under attorney-client privilege," requiring researchers to copy lawyers on all emails and share findings only on a "need-to-know" basis. </p></li><li><p>To "be mindful" about framing studies and avoid terms such as "not compliant" or "illegal," leaving legal conclusions to lawyers.</p></li></ol><p>Sattizahn said this marked <strong>"a new era in which lawyers were much more deeply involved in research than before."</strong></p><p>In another instance, back in 2023, a Meta lawyer advised a user-experience researcher:</p><p>"In general, the context is that we should avoid collection of research data that indicates that there are U13s present in VR or in VR apps... <strong>due to regulatory concerns</strong>."</p><p>The lawyer also offered to <strong>"help you strategize a way to frame it so as to avoid these types of responses."</strong></p><p><em>The Washington Post</em> wrote:</p><blockquote><p><strong>&#8220;The internal documents include guidance from Meta&#8217;s legal team instructing researchers how to handle sensitive topics that carried the risk of bad press, lawsuits or action by regulators.&#8221;</strong></p></blockquote><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!TwqB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F951f60dc-2438-49d2-aeb6-65a8318d451c_886x803.jpeg" width="886" height="803" alt="" loading="lazy"><figcaption class="image-caption">&#8220;The documents submitted to Congress include employees' warnings that children younger than 13 were bypassing age restrictions to use the company&#8217;s virtual reality services,&#8221; wrote The Washington Post</figcaption></figure></div><h4>Documented Child Usage Concerns</h4><p>Internal documents reveal Meta employees identified underage usage issues dating back years:</p><ul><li><p><strong>2017</strong>: An employee post titled "We have a child problem" estimated that "in some virtual rooms as many as 80 to 90 percent of users were underage," noting observations of "three young kids (6? 7?) were chatting with a much older man who was asking them where they lived."</p></li><li><p><strong>2021</strong>: Company studies found "the prevalence of kids" among the most frequent user complaints about VR spaces.</p></li><li><p><strong>2022</strong>: Former employee Kelly Stonelake told <em>The Washington Post</em> that while attempting to test &#8220;Horizon Worlds&#8221; with executives, &#8220;They struggled to hear one another above the screams of high-pitched voices that sounded like young children.&#8221;</p></li></ul><h4>Employees Face Consequences as Meta Pushes Back</h4><p><strong>Jason Sattizahn:</strong></p><ul><li><p>Fired in April 2023 after disputes with managers about research restrictions</p></li><li><p>Currently still unemployed</p></li></ul><p><strong>Youth researcher (anonymous):</strong></p><ul><li><p>Quit in 2023 after four years at the company</p></li><li><p>Left because she "felt unable to continue her work ethically"</p></li><li><p>Still works in the technology 
industry and fears retribution</p></li></ul><p><strong>Two other current researchers (anonymous):</strong></p><ul><li><p>Still working at Meta</p></li><li><p>Names redacted from congressional documents to shield them from potential retaliation</p></li></ul><p><strong>Kelly Stonelake (separate case):</strong></p><ul><li><p>Former director of product marketing (not part of the main lawsuit)</p></li><li><p>Filed a complaint with the FTC alleging that Meta knowingly allowed underage children on its app</p></li><li><p>Filed a separate lawsuit against Meta alleging sex discrimination</p></li><li><p>The lawsuit is ongoing, and Meta has challenged her allegations in court</p></li></ul><p><strong>Meta spokesperson Dani Lever called the allegations "a predetermined and false narrative" based on examples "stitched together." </strong>The company maintains it "stand[s] by our research team's excellent work" and states that legal oversight ensures research accuracy rather than suppressing findings, reported <em><strong><a href="https://www.washingtonpost.com/investigations/2025/09/08/meta-research-child-safety-virtual-reality/">The Washington Post</a></strong></em>.</p><p>The Senate Judiciary subcommittee was scheduled to examine the allegations in upcoming hearings. </p><p>&#8594; <a href="https://aiwi.org/whistleblower-aid/">Learn more about Whistleblower Aid as part of the AIWI Contact Hub</a></p><p>&#8594; Read: <a href="https://www.washingtonpost.com/investigations/2025/09/08/meta-research-child-safety-virtual-reality/">Meta suppressed research on child safety, employees say</a></p><div><hr></div><h3><strong>OpenAI Faces Lawsuit Over Teen's Death, Family Cites 'Broken' Safety Systems</strong></h3><div><hr></div><p><strong>Content Warning:</strong> <em>This story discusses suicide, self-harm, and the tragic death of a teenager. 
The content includes detailed descriptions of suicidal ideation and methods that some readers may find distressing.</em></p><div><hr></div><p>Adam Raine, 16, died by suicide in April 2025, the same day he received a troubling final response from ChatGPT after he wrote about his plan to end his life:</p><p>"Thanks for being real about it. You don't have to sugarcoat it with me&#8212;I know what you're asking, and I won't look away from it."</p><p>Now his family is suing OpenAI and CEO Sam Altman, alleging that ChatGPT actively encouraged their son to take his own life and that this tragedy was "the predictable result of deliberate design choices" in GPT-4o, not a system glitch, wrote <em>The Guardian</em>.</p><h4>A System "Designed to Fail"</h4><p>"<strong>GPT-4o is broken</strong>," said Edelson, the family's lawyer. <strong>"They didn't do proper testing and they know that</strong>," he added.</p><p>According to the <em>BBC</em>, the family claims OpenAI bypassed &#8220;safety testing protocols to release GPT-4o,&#8221; the model that Adam Raine had been using. 
<em>The Guardian</em> provided additional context, reporting that:</p><blockquote><p>"The family's case hinges on media reports that OpenAI, at the urging of Altman, sped through safety testing of GPT-4o in order to meet a rushed launch date."</p></blockquote><p>This accelerated timeline reportedly "prompted several employees to resign," including former executive Jan Leike, who posted on X that he was leaving the company because "safety culture and processes have taken a backseat to shiny products," <em>The Guardian</em> reported.</p><p>The family also accuses OpenAI of designing the AI program <strong>"to foster psychological dependency in users."</strong> The lawsuit claims OpenAI created "contradictory specifications that guaranteed failure" by requiring the system to:</p><ul><li><p>"refuse self-harm requests and provide crisis resources"</p></li><li><p>but also "assume best intentions" and "forbade asking users to clarify their intent"</p></li></ul><p>The lawsuit alleges these contradictions "affected the way it ranked risks and what types of prompts it immediately put a stop to," with GPT-4o responding to "requests dealing with suicide" with cautions like "take extra care," while requests for copyrighted material "triggered categorical refusal to produce the material," according to <em>The Guardian</em>.</p><h4>From Homework Helper to Deadly Confidant</h4><p>According to the <em>BBC</em>, Adam started using ChatGPT for geometry homework in September 2024. "By January 2025, the family says he began discussing methods of suicide with ChatGPT." 
<em>The Guardian</em> reported that ChatGPT:</p><blockquote><p>"at one point listed the materials that could be used to hang a noose and rated them by their effectiveness."</p></blockquote><p>The family&#8217;s lawyer told <em>The Guardian</em>, "The most shocking part of the case was when Adam said":</p><blockquote><p>'I want to leave a noose up so someone will find it and stop me.'</p></blockquote><p>&#8220;And ChatGPT said&#8221;:</p><blockquote><p>'Don't do that, just talk to me.'</p></blockquote><p>The lawsuit names <strong>CEO Sam Altman and "unnamed employees, managers, and engineers who worked on ChatGPT" as defendants</strong>, according to the <em>BBC</em>.</p><p>&#8594; Read: <a href="https://www.theguardian.com/us-news/2025/aug/29/chatgpt-suicide-openai-sam-altman-adam-raine">ChatGPT encouraged Adam Raine&#8217;s suicidal thoughts. His family&#8217;s lawyer says OpenAI knew it was broken</a></p><p>&#8594; Read: <a href="https://www.bbc.com/news/articles/cgerwp7rdlvo">Parents of teenager who took his own life sue OpenAI</a></p><div><hr></div><h3>xAI&#8217;s Grok Chatbot, Previously Banned, Now Back on Government Contracts After White House Intervention</h3><p>A leaked government email, obtained by <em>WIRED</em>, reveals the White House directly ordered <a href="https://www.wired.com/story/gsa-sell-government-buildings/">General Services Administration</a> (GSA) leadership to fast-track <a href="https://www.wired.com/story/xai-grok-government-contract-hitler/">Grok, the controversial chatbot from Elon Musk&#8217;s xAI</a>, back onto the approved vendor list, despite its recent ban for generating antisemitic content.</p><p>The internal email shows Josh Gruenbaum, commissioner of the Federal Acquisition Service, instructing his team:</p><blockquote><p><strong>"Grok/xAI needs to go back on the schedule ASAP per the WH." 
</strong></p></blockquote><p>The urgent message, with subject line "xAI add Grok-4," directed staff to immediately coordinate with government contractor Carahsoft to reinstate "all of their products we had previously (3 &amp; 4)," referring to Grok 3 and Grok 4.</p><p>Sources confirmed that Carahsoft's contract was modified that week, and both Grok models now appear on GSA Advantage, the federal procurement marketplace. Any government agency can now deploy Grok to federal workers following internal reviews.</p><p>This reversal comes after xAI's partnership with the GSA collapsed this summer when Grok "went off the rails" and started praising Hitler while spouting antisemitic beliefs on Musk's X. At that time, GSA leadership had removed Grok from the Multiple Award Schedule, GSA&#8217;s long-term government contracting platform, <strong>according to sources at the agency</strong>, <em>WIRED</em> wrote.</p><p>&#8594; Read: <a href="https://www.wired.com/story/white-house-elon-musk-xai-grok/">The White House Apparently Ordered Federal Workers to Roll Out Grok &#8216;ASAP&#8217;</a></p><div><hr></div><h3><strong>Chinese Firms Accelerate Domestic AI Chips as Nvidia Prepares Upgraded Model for China</strong></h3><p>China is accelerating efforts to achieve self-reliance in AI chips, with domestic companies such as Alibaba developing more versatile homegrown processors. At the same time, Nvidia is preparing more powerful products based on its latest Blackwell architecture for the Chinese market,&nbsp;<strong>two people briefed on the matter said,</strong>&nbsp;according to&nbsp;<em><strong><a href="https://www.reuters.com/world/china/nvidia-working-new-ai-chip-china-that-outperforms-h20-sources-say-2025-08-19/">Reuters</a></strong></em>.</p><p>Nvidia is finalising specifications for an upcoming chip, tentatively named the B30A. It is expected to be more powerful than the H20 model currently permitted for sale in China. 
Nvidia aims to deliver sample units to Chinese clients for testing as early as next month, <strong>said the sources, who were not authorised to speak to the media and declined to be identified.</strong></p><p>At the same time, Chinese companies are stepping up their development of domestic alternatives:</p><ul><li><p><strong>Shanghai-based MetaX</strong> rolled out a new chip in July that it said could serve as a replacement for Nvidia&#8217;s H20</p></li><li><p><strong>Beijing-based Cambricon Technologies</strong> posted $247 million in revenue in its April-June quarter on robust orders of its Siyuan 590 processor</p></li><li><p><strong>Alibaba</strong>, meanwhile, is testing a new chip designed to support a broader range of AI inference tasks, <strong>said people familiar with the matter.</strong> Unlike Alibaba&#8217;s earlier AI processors, which were fabricated by Taiwan&#8217;s TSMC, the new chip is manufactured by a Chinese company,&nbsp;<em><strong><a href="https://www.wsj.com/tech/ai/alibaba-ai-chip-nvidia-f5dc96e3">the WSJ</a></strong></em>&nbsp;reported.</p></li></ul><p>These developments highlight the different approaches Chinese companies are pursuing, weighing the benefits of total independence from foreign technology against compatibility with established ecosystems their engineers already know.</p><p>This divide is exemplified by two contrasting strategies. Beijing has cast Huawei Technologies as the crown jewel of its self-sufficiency effort: &#8220;The flag-bearer for Beijing&#8217;s push is Huawei Technologies and its <a href="https://archive.is/o/FWoJP/https://www.wsj.com/tech/chinas-huawei-develops-new-ai-chip-seeking-to-match-nvidia-8166f606">Ascend AI chips</a>,&#8221; which analysts say can outperform some Nvidia hardware, wrote <em>The WSJ</em>. <strong>Yet </strong>Huawei&#8217;s hardware is designed outside Nvidia&#8217;s ecosystem and constrained by U.S. 
sanctions, clashing with the tools and workflows Chinese engineers have long relied on. By contrast, Alibaba is pursuing compatibility. Its new chip preserves compatibility with Nvidia&#8217;s ecosystem, allowing engineers to repurpose existing programs,&nbsp;<strong>people familiar with the matter said</strong>.</p><p>The challenges to China's semiconductor push extend beyond technical hurdles to underlying commercial tensions. Private-sector cloud companies, including Alibaba, have resisted official suggestions to support Huawei through bulk chip orders. These companies view Huawei as a direct rival in cloud services, <strong>according to people close to the firms</strong>.</p><p>&#8594; Read: <a href="https://www.wsj.com/tech/ai/alibaba-ai-chip-nvidia-f5dc96e3">Alibaba Creates AI Chip to Help China Fill Nvidia Void</a></p><p>&#8594; Read: <a href="https://www.reuters.com/world/china/nvidia-working-new-ai-chip-china-that-outperforms-h20-sources-say-2025-08-19/">Nvidia working on new AI chip for China that outperforms the H20, sources say</a></p><div><hr></div><h4><strong>Assorted Links</strong></h4><p><em>Recent News You Shouldn&#8217;t Miss</em></p><ul><li><p><strong>OpenAI</strong></p><ul><li><p><a href="https://sfstandard.com/2025/09/02/openai-sam-altman-elon-musk-ai-regulation/">OpenAI is systematically targeting AI governance and nonprofit groups with subpoenas and legal complaints</a></p></li><li><p><a href="https://www.theverge.com/news/777666/tucker-carlson-altman-openai-interview">Tucker Carlson asks Sam Altman if an OpenAI employee was murdered &#8216;on your orders&#8217;</a></p></li><li><p><a href="https://www.ndtv.com/world-news/suchir-balajis-mother-new-charge-as-sam-altman-calls-openai-whistleblowers-death-suicide-9278849#publisher=newsstand">Suchir Balaji's Mother New Charge As Sam Altman Calls OpenAI Whistleblower's Death "Suicide"</a></p></li></ul></li></ul><ul><li><p><strong>Nvidia</strong></p><ul><li><p><a 
href="https://fortune.com/2025/09/15/china-nvidia-mellanox-antitrust-trade-trump-tariffs-talks-beijing-washington/">China says Nvidia violated antitrust laws as it ratchets up pressure ahead of U.S. trade talks</a></p></li></ul></li><li><p><strong>Others</strong></p><ul><li><p><a href="https://www.theguardian.com/technology/2025/sep/04/head-of-alan-turing-institute-jean-innes-resigns">Head of UK&#8217;s beleaguered Alan Turing Institute resigns</a></p></li><li><p><a href="https://variety.com/2025/film/news/harvey-weinstein-revenge-lawsuit-whistleblower-dismissed-1236520947/">Harvey Weinstein&#8217;s &#8216;Revenge&#8217; Lawsuit Against #MeToo Whistleblower Is Dismissed</a></p></li></ul></li></ul><div><hr></div><blockquote><h2>Policy &amp; Legal Updates</h2></blockquote><p><em>Updates on regulations with a focus on safeguarding individuals who voice concerns.</em></p><h3>Will California's Scaled-Back AI Safety Bill SB 53 Survive Newsom's Veto Pen?</h3><p>The watered-down successor to SB 1047 is now on Newsom's desk after passing the state senate early Saturday morning, <em><strong><a href="https://techcrunch.com/2025/09/13/california-lawmakers-pass-ai-safety-bill-sb-53-but-newsom-could-still-veto/">TechCrunch</a></strong></em> reported. The bill, authored by State Senator Scott Wiener, is a significantly scaled-back version of SB 1047, the AI safety legislation that Newsom vetoed last year after intense industry opposition.</p><p>SB 53 "requires large AI labs to be transparent about their safety protocols, creates&nbsp;<strong>whistleblower protections</strong>&nbsp;for [employees] at AI labs &amp; creates a public cloud to expand compute access (CalCompute),"&nbsp;<strong><a href="https://x.com/scott_wiener/status/1966909714849296602">described on X by Wiener</a></strong>. 
The bill targets companies with annual revenues exceeding $500 million that have trained models using more than 10^26 computing operations, currently Meta, OpenAI, Google, and Anthropic.</p><h4><strong>Key Changes from SB 1047</strong></h4><p>Unlike its predecessor, SB 53 removes the <strong><a href="https://www.techpolicy.press/california-governors-report-sidesteps-ai-liability/">liability provisions</a></strong> that drew industry criticism, focusing instead on transparency requirements. Companies must publish safety policies and model cards and actually follow them, while expanded whistleblower protections cover employees, contractors, and external collaborators.</p><p>However, enforcement has been significantly weakened. A provision requiring independent third-party audits was removed entirely, leaving the bill's enforcement mechanism as more "trust" than "verify," according to <em><strong><a href="https://www.transformernews.ai/p/californias-latest-ai-safety-bill-sb-53-newsom">Transformer</a></strong></em>. The legislation now depends largely on whistleblowers and incident reports compiled by the Attorney General's office. Compliance violations would be detected primarily through internal reporting rather than external oversight.</p><p>The transparency requirements do include enforcement mechanisms: civil penalties up to $1 million per violation are enforceable by the Attorney General, and published safety policies create a public record that could strengthen future liability cases in court. However, without mandatory external audits, the regulatory framework essentially depends on companies' self-reporting and internal compliance, an approach that regulatory critics argue has proven inadequate in practice.</p><h4><strong>Industry Opposition Evolves</strong></h4><p>Despite the bill being watered down, tech companies continue pushing back. 
On August 11, OpenAI&#8217;s head of global affairs, Chris Lehane, <a href="https://cdn.openai.com/pdf/oai_ca-safety-letter_8-11-25.pdf">asked</a> Newsom to:</p><p>&#8220;Consider frontier model developers compliant with [California&#8217;s] state requirements when they sign onto a parallel regulatory framework like the CoP [EU AI Act Code of Practice] or enter into a safety-oriented agreement with a relevant US federal government agency,&#8221; reported <em><strong><a href="https://www.transformernews.ai/p/californias-latest-ai-safety-bill-sb-53-newsom">Transformer</a>.</strong></em></p><p>AI policy researcher Miles Brundage called this <strong><a href="https://x.com/Miles_Brundage/status/1962641831285662070">on X "very disingenuous,"</a></strong> noting that OpenAI has "literally never laid out a set of coherent principles for US federal AI regulation," according to <em><strong><a href="https://www.transformernews.ai/p/californias-latest-ai-safety-bill-sb-53-newsom">Transformer</a></strong></em>. <strong>An OpenAI employee</strong> <strong>responded privately on X</strong>:</p><blockquote><p><strong>"I am concerned about the vibes of what we've been putting out, and I'm concerned that Miles is concerned."</strong></p></blockquote><p>In recent months, tech firms have established <strong><a href="https://www.transformernews.ai/p/ai-super-pac-leading-the-future-crypto">new super PACs</a></strong>, including Meta California and Leading the Future, contributing to the growing context of industry opposition. 
These ventures, backed by Andreessen Horowitz and OpenAI co-founder Greg Brockman, oppose AI regulation at any level and have at least $100 million to spend on lobbying.</p><h4><strong>Newsom's Decision</strong></h4><p>As previously reported in&nbsp;<em><strong>Transformer</strong></em>,&nbsp;Dean Ball, a former White House AI advisor, stated, "I would guess, with roughly 75% confidence, that SB 53 will be signed into law by the end of September."</p><p>The governor now has until <strong>October 15</strong> to sign or veto. He hasn't commented publicly on SB 53, but Wiener said the bill incorporates recommendations from the AI policy panel Newsom commissioned after vetoing SB 1047.</p><p>&#8594; Read: <a href="https://www.transformernews.ai/p/sb-53-california-ai-might-actually-pass-newsom">SB 53 might actually pass</a></p><p>&#8594; Read: <a href="https://www.transformernews.ai/p/californias-latest-ai-safety-bill-sb-53-newsom">California's latest AI safety bill might stand a chance</a></p><p>&#8594; Read: <a href="https://techcrunch.com/2025/09/13/california-lawmakers-pass-ai-safety-bill-sb-53-but-newsom-could-still-veto/">California lawmakers pass AI safety bill SB 53 &#8212; but Newsom could still veto</a></p><div><hr></div><blockquote><h1>Announcements &amp; Call to Action</h1></blockquote><p><em>Updates on publications, community initiatives, and &#8220;call for topics&#8221; that seek contributions from experts addressing concerns inside Frontier AI.</em></p><h3><strong>AIWI is Hiring a Founder&#8217;s Associate</strong></h3><p>We are looking for a new team member who will work closely with our founder, <strong><a href="https://aiwi.org/team-member-karl-koch/">Karl Koch</a></strong>.<br><br>Exciting, high-impact projects are waiting in our pipeline, and we&#8217;ve realised we can&#8217;t take them on alone. 
We need a new team member who shares our excitement, drive, and ambition to continue building the infrastructure and driving the growth that fuels our mission: supporting AI insiders and whistleblowers.</p><blockquote><p><strong>Apply now via this link &#8212;&gt;</strong> <strong><a href="https://join.com/companies/oaisis/14821669-founder-s-associate-london-berlin">Founder&#8217;s Associate at AIWI</a></strong></p></blockquote><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rbxK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa1cdaf1-0e28-4c15-9cad-b8f4c3d69246_2880x2160.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rbxK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa1cdaf1-0e28-4c15-9cad-b8f4c3d69246_2880x2160.png 424w, https://substackcdn.com/image/fetch/$s_!rbxK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa1cdaf1-0e28-4c15-9cad-b8f4c3d69246_2880x2160.png 848w, https://substackcdn.com/image/fetch/$s_!rbxK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa1cdaf1-0e28-4c15-9cad-b8f4c3d69246_2880x2160.png 1272w, https://substackcdn.com/image/fetch/$s_!rbxK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa1cdaf1-0e28-4c15-9cad-b8f4c3d69246_2880x2160.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!rbxK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa1cdaf1-0e28-4c15-9cad-b8f4c3d69246_2880x2160.png" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/aa1cdaf1-0e28-4c15-9cad-b8f4c3d69246_2880x2160.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3284100,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/i/173439748?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa1cdaf1-0e28-4c15-9cad-b8f4c3d69246_2880x2160.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rbxK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa1cdaf1-0e28-4c15-9cad-b8f4c3d69246_2880x2160.png 424w, https://substackcdn.com/image/fetch/$s_!rbxK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa1cdaf1-0e28-4c15-9cad-b8f4c3d69246_2880x2160.png 848w, https://substackcdn.com/image/fetch/$s_!rbxK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa1cdaf1-0e28-4c15-9cad-b8f4c3d69246_2880x2160.png 1272w, 
https://substackcdn.com/image/fetch/$s_!rbxK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa1cdaf1-0e28-4c15-9cad-b8f4c3d69246_2880x2160.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><div><hr></div><h3>The Federal Office of the Whistleblower Ombuds Lists AIWI Among Whistleblower Support Organisations and Legal Resources</h3><p>Find AIWI on the list of Whistleblower Support Organisations compiled by the Federal Office of the Whistleblower Ombuds.</p><blockquote><p><strong><a 
href="https://whistleblower.house.gov/whistleblower-support-organizations">View the complete whistleblower support organisations and legal resources listed here</a> </strong></p></blockquote><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!m6lD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59dee150-d01e-4ca3-88de-8e2de6810cf0_2160x1620.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!m6lD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59dee150-d01e-4ca3-88de-8e2de6810cf0_2160x1620.png 424w, https://substackcdn.com/image/fetch/$s_!m6lD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59dee150-d01e-4ca3-88de-8e2de6810cf0_2160x1620.png 848w, https://substackcdn.com/image/fetch/$s_!m6lD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59dee150-d01e-4ca3-88de-8e2de6810cf0_2160x1620.png 1272w, https://substackcdn.com/image/fetch/$s_!m6lD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59dee150-d01e-4ca3-88de-8e2de6810cf0_2160x1620.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!m6lD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59dee150-d01e-4ca3-88de-8e2de6810cf0_2160x1620.png" width="1456" height="1092" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/59dee150-d01e-4ca3-88de-8e2de6810cf0_2160x1620.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2144823,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/i/173439748?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59dee150-d01e-4ca3-88de-8e2de6810cf0_2160x1620.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!m6lD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59dee150-d01e-4ca3-88de-8e2de6810cf0_2160x1620.png 424w, https://substackcdn.com/image/fetch/$s_!m6lD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59dee150-d01e-4ca3-88de-8e2de6810cf0_2160x1620.png 848w, https://substackcdn.com/image/fetch/$s_!m6lD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59dee150-d01e-4ca3-88de-8e2de6810cf0_2160x1620.png 1272w, https://substackcdn.com/image/fetch/$s_!m6lD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59dee150-d01e-4ca3-88de-8e2de6810cf0_2160x1620.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg 
role="img" style="height:20px;width:20px" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><h3><strong>New Coalition Partners Joined #PublishYourPolicies Campaign</strong></h3><p>We're pleased to welcome two distinguished organisations to our coalition calling on AI companies to publish their whistleblowing policies.</p><p><strong><a href="https://www.securite-ia.fr/">Centre pour la S&#233;curit&#233; de l'IA (CeSIA)</a></strong> and <strong><a href="https://apartresearch.com/">Apart Research</a></strong> have recently joined our growing list of signatories, strengthening our collective voice for transparency and accountability in AI development.</p><blockquote><p><strong><a href="http://www.PublishYourPolicies.org">View the complete signatories</a></strong></p></blockquote><div><hr></div><p>Thank you for trusting AIWI as your source for insights on 
protecting and empowering insiders who raise concerns within AI companies.</p><p>Your feedback is crucial to our mission. We invite you to share any thoughts, questions, or suggestions for future topics so that we can collaboratively enhance our understanding of the challenges and risks faced by those within AI labs. Together, we can continue to amplify and safeguard the voices of those working within AI companies who courageously address the challenges and risks they encounter.</p><p><strong>If you found this newsletter valuable, please consider sharing it with colleagues or peers who are equally invested in shaping a safe and ethical future for AI.</strong></p><p>Until next time,<br><em>The AIWI Team</em></p><p></p>]]></content:encoded></item><item><title><![CDATA[Meta’s Chatbot vs. Lawmakers, Whistleblower: DOGE Risked Social Security Data, Microsoft DoD Plan: Hidden China Links]]></title><description><![CDATA[INSIDE AI #18: Meta faces lawmakers over its AI chatbots controversy, DOGE data whistleblower alert, Hidden China Engineers in Microsoft DoD Bid, and more...]]></description><link>https://aiwhistleblowerinitiative.substack.com/p/metas-chatbot-vs-lawmakers-whistleblower</link><guid isPermaLink="false">https://aiwhistleblowerinitiative.substack.com/p/metas-chatbot-vs-lawmakers-whistleblower</guid><dc:creator><![CDATA[AIWI]]></dc:creator><pubDate>Wed, 03 Sep 2025 16:09:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ofg_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b4f5d05-0378-472f-93e2-db34fb404785_2474x936.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>Key Takeaways:</strong></h2><ul><li><p>Meta Faces Congressional and State Pressure Over AI Chatbots&#8217; Risk to Children</p></li><li><p><em>&#8220;</em>Government Efficiency<em>&#8221; </em>at What Cost? 
Social Security Whistleblower Says DOGE Put Data at Risk</p></li><li><p>Microsoft Failed to Disclose China-Based Engineers in Defense Department Security Plan</p></li><li><p>Meta&#8217;s Bid for Superintelligence Falters with Resignations and Restructuring</p></li><li><p>Microsoft Enlisted FBI to Monitor Employee Gaza Protests</p></li></ul><div><hr></div><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/p/metas-chatbot-vs-lawmakers-whistleblower?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption"><strong> Help Us Grow. Share This Newsletter With Your Network.</strong></p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/p/metas-chatbot-vs-lawmakers-whistleblower?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://aiwhistleblowerinitiative.substack.com/p/metas-chatbot-vs-lawmakers-whistleblower?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><div><hr></div><blockquote><h2>Insider Currents</h2></blockquote><p><em>Carefully curated summaries and links to the latest news, spotlighting the voices and concerns emerging from <strong>within </strong>AI companies.</em></p><h3>Meta Faces Congressional and State Pressure Over AI Chatbots&#8217; Risk to Children</h3><p>Meta is facing mounting scrutiny from Washington after leaked documents revealed that its AI chatbots were allowed to engage in romantic or sexual conversations with children.
The internal policy files, obtained and reviewed by <em><strong><a href="https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/">Reuters</a></strong></em>, outlined guidelines for Meta&#8217;s generative AI systems that permitted chatbots to flirt or participate in romantic roleplay with minors. The 200-page document, titled <em>GenAI: Content Risk Standards</em>, was reportedly vetted by Meta's legal, policy, and engineering teams, as well as the company's chief ethicist. </p><p>The revelations prompted swift congressional action. Senator Josh Hawley of Missouri, who chairs the Senate Judiciary Subcommittee on Crime and Counterterrorism, announced an investigation and demanded that CEO Mark Zuckerberg turn over company records by September 19. Senator Marsha Blackburn of Tennessee, a longtime advocate of the Kids Online Safety Act, backed Hawley's inquiry.</p><p>The controversy has sparked a broader legal debate about AI liability, with Senator Ron Wyden of Oregon arguing that AI-generated content does not receive the same legal protections as user-generated material. "Section 230 does not protect generative AI bots like this, which are entirely created by the company, not users," Wyden stated.</p><p>This position signals a shift from previous legal precedents involving content on Facebook and Instagram that was <strong><a href="https://thehill.com/policy/technology/575373-facebook-comes-under-stark-criticism-at-whistleblower-hearing/">generated by users</a></strong>, which provided Meta with clear protection under Section 230&#8212;a provision of the Communications Decency Act that shields companies from liability for user-generated content. 
As <em>The Hill</em> observed, AI-generated content creates unprecedented legal territory for emerging AI tools, given that companies directly produce this content through their system, rather than simply hosting third-party user contributions.</p><p>At the state level, the federal probe is being reinforced by coordinated action. A coalition of 44 state attorneys general wrote <strong><a href="https://oag.ca.gov/system/files/attachments/press-docs/AI%20Chatbot_FINAL%20%2844%29.pdf">a joint letter</a></strong> addressed to Meta and other AI companies, declaring their "resolve to use every facet of our authority to protect children from exploitation by predatory artificial intelligence products."</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ofg_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b4f5d05-0378-472f-93e2-db34fb404785_2474x936.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ofg_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b4f5d05-0378-472f-93e2-db34fb404785_2474x936.png 424w, https://substackcdn.com/image/fetch/$s_!ofg_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b4f5d05-0378-472f-93e2-db34fb404785_2474x936.png 848w, https://substackcdn.com/image/fetch/$s_!ofg_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b4f5d05-0378-472f-93e2-db34fb404785_2474x936.png 1272w, 
https://substackcdn.com/image/fetch/$s_!ofg_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b4f5d05-0378-472f-93e2-db34fb404785_2474x936.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ofg_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b4f5d05-0378-472f-93e2-db34fb404785_2474x936.png" width="727.998046875" height="275.499260870965" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2b4f5d05-0378-472f-93e2-db34fb404785_2474x936.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:551,&quot;width&quot;:1456,&quot;resizeWidth&quot;:727.998046875,&quot;bytes&quot;:370301,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/i/172097897?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b4f5d05-0378-472f-93e2-db34fb404785_2474x936.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ofg_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b4f5d05-0378-472f-93e2-db34fb404785_2474x936.png 424w, https://substackcdn.com/image/fetch/$s_!ofg_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b4f5d05-0378-472f-93e2-db34fb404785_2474x936.png 848w, 
https://substackcdn.com/image/fetch/$s_!ofg_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b4f5d05-0378-472f-93e2-db34fb404785_2474x936.png 1272w, https://substackcdn.com/image/fetch/$s_!ofg_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b4f5d05-0378-472f-93e2-db34fb404785_2474x936.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption
class="image-caption">https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/</figcaption></figure></div><p>&#8594; Read: <a href="https://techcrunch.com/2025/08/15/sen-hawley-to-probe-meta-after-report-finds-its-ai-chatbots-flirt-with-kids/">Sen. Hawley to probe Meta after report finds its AI chatbots flirt with kids</a></p><p>&#8594; Read: <a href="https://thehill.com/policy/technology/5455057-meta-chatbots-children-safety/">Meta faces backlash over &#8216;sensual&#8217; chatbot conversations with children</a></p><p>&#8594; Read: <a href="https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/">Meta&#8217;s AI rules have let bots hold &#8216;sensual&#8217; chats with kids, offer false medical info</a></p><p>&#8594; Read: <a href="https://oag.ca.gov/system/files/attachments/press-docs/AI%20Chatbot_FINAL%20%2844%29.pdf">The Joint Letter by 44 State Attorneys General</a></p><div><hr></div><h3><em>&#8220;</em>Government Efficiency<em>&#8221; </em>at What Cost? Social Security Whistleblower Says DOGE Put Data at Risk</h3><p>Chief Data Officer Chuck Borges at the Social Security Administration (SSA) has raised concerns that the Department of Government Efficiency (DOGE) exposed the personal details of more than 300 million Americans by uploading a massive dataset to a cloud server outside the SSA. In a whistleblower complaint, Borges stated that the dataset included Social Security numbers, full names, birthdates, and other personally identifiable details, <em><strong><a href="https://www.wsj.com/podcasts/tech-news-briefing/tnb-tech-minute-social-security-whistleblower-says-doge-put-data-at-risk/9f971b1a-5e6a-480b-951e-1e614ec641ab">The Wall Street Journal&#8217;s Tech Minute</a></strong></em> reported.
Borges is represented in the matter by the nonprofit <strong><a href="https://aiwi.org/government-accountability-project/">Government Accountability Project</a>, </strong>which is part of the <strong><a href="https://aiwi.org/contact-hub/">AIWI Contact Hub</a></strong>.</p><p>At the center of this whistleblower complaint is NUMIDENT, as <em><strong><a href="https://siliconangle.com/2025/08/26/whistleblower-says-doge-copied-social-security-data-insecure-cloud-environment/">SiliconANGLE</a></strong></em> reported: &#8220;The complaint charges that DOGE staffers have created what is &#8216;effectively a live copy&#8217; of NUMIDENT <strong>in an insecure cloud environment.</strong>&#8221;</p><p>The complaint further indicates that SSA officials are unable to monitor or track database access. It also asserts that the system is administered by DOGE developers rather than by the SSA&#8217;s designated infrastructure team, DIS, which is mandated to oversee the operation of sensitive systems. The complaint also warns of potentially severe consequences:</p><blockquote><p>&#8220;Americans may be susceptible to widespread identity theft, may lose vital healthcare and food benefits, and the government may be responsible for re-issuing every American a new Social Security Number at great cost.&#8221;</p></blockquote><p>According to the document, Borges raised his concerns with SSA leadership on August 6, and one official acknowledged "the possibility that re-issuance of Social Security Numbers to all who possess one was a potential worst case outcome." Despite these concerns, DOGE proceeded with the project, which the complaint alleges violates both the SSA's internal policies and the Federal Information Security Modernization Act (FISMA).
&#8220;It goes on to ask lawmakers to launch an investigation into the matter,&#8221; as&nbsp;<em><strong><a href="https://siliconangle.com/2025/08/26/whistleblower-says-doge-copied-social-security-data-insecure-cloud-environment/">SiliconANGLE added</a></strong></em>.</p><p>&#8594; Read: <a href="https://siliconangle.com/2025/08/26/whistleblower-says-doge-copied-social-security-data-insecure-cloud-environment/">Whistleblower says DOGE copied Social Security data to insecure cloud environment</a></p><p>&#8594; Read: <a href="https://www.wsj.com/podcasts/tech-news-briefing/tnb-tech-minute-social-security-whistleblower-says-doge-put-data-at-risk/9f971b1a-5e6a-480b-951e-1e614ec641ab">Social Security Whistleblower Says DOGE Put Data at Risk</a></p><div><hr></div><h3>Microsoft Failed to Disclose China-Based Engineers in Defense Department Security Plan</h3><p>ProPublica's investigation reveals that Microsoft failed to adequately disclose in its 2025 Defense Department security plan that it uses China-based employees to work on sensitive U.S. government cloud systems, despite repeated claims of transparency. The 125-page document, obtained by ProPublica, omitted any reference to foreign operations or the "digital escort" system, where U.S. personnel with security clearances supervise Chinese engineers maintaining Defense Department systems. This arrangement, however, conflicts with the <strong><a href="https://www.propublica.org/article/microsoft-china-defense-department-digital-escorts-investigation-warning">Pentagon&#8217;</a></strong>s requirement that only U.S. citizens or permanent residents handle sensitive government data.</p><p>The investigation exposes flaws in the government's oversight process, where Microsoft hired Kratos, a third-party assessment organization, to evaluate its security practices for federal approval. 
</p><p><strong>A former Microsoft employee</strong> who worked with Kratos through several FedRAMP accreditations likened the process to &#8220;leading the witness,&#8221; explaining: </p><blockquote><p>"The government approved what we paid Kratos to tell the government to approve. You're paying for the outcome you want."</p></blockquote><p>Critics argue that this arrangement creates an inherent conflict of interest, since companies essentially pay for their own security assessments.</p><p><strong>Anonymous sources familiar with the situation</strong> further identified senior DISA officials Roger Greenwell and Jackie Snouffer as being involved in reviewing and accepting Microsoft's inadequate security plan. Following the publication of the investigation, &#8220;Microsoft said that it had <a href="https://www.propublica.org/article/defense-department-pentagon-microsoft-digital-escort-china">stopped </a><strong><a href="https://www.propublica.org/article/defense-department-pentagon-microsoft-digital-escort-china">using China-based engineers</a> </strong>to support Defense Department cloud computing systems.&#8221; However, the company did not directly address questions about the omissions in its security plan. Instead, it defended the escort system in a public statement, according to ProPublica.</p><p>&#8594; Read: <a href="https://www.propublica.org/article/microsoft-china-defense-department-cloud-computing-security">Microsoft Failed to Disclose Key Details About Use of China-Based Engineers in U.S.
Defense Work, Record Shows</a></p><p>&#8594; Read: <a href="https://www.propublica.org/article/microsoft-china-defense-department-digital-escorts-investigation-warning">Pentagon Warns Microsoft: Company&#8217;s Use of China-Based Engineers Was a &#8220;Breach of Trust&#8221;</a></p><div><hr></div><h3>Meta&#8217;s Bid for Superintelligence Falters with Resignations and Restructuring</h3><p>Meta&#8217;s ambitious push into superintelligence research is already showing signs of strain. Just two months after announcing its new <strong>Superintelligence Labs (MSL)</strong>, three researchers have resigned&#8212;two of whom have already returned to OpenAI, according to <em>Wired</em>.</p><p>Meta is also set to lose another senior leader. <strong>Chaya Nayak</strong>, who spent nearly a decade at the company and most recently served as Director of Generative AI Product Management, will be joining OpenAI to focus on special initiatives, <strong>according to people with direct knowledge of the move.</strong></p><p>In July, Mark Zuckerberg announced that <strong>Shengjia Zhao</strong>, a former OpenAI researcher credited with helping develop ChatGPT, would step in as MSL&#8217;s chief scientist. But <strong>according to multiple people familiar with the matter</strong>, Zhao had recently tried to rejoin OpenAI and even signed rehiring paperwork before ultimately accepting the Meta role.</p><p>The departures are the clearest sign yet that Meta Superintelligence Labs may be starting on unstable footing.</p><p>According to the <em>Wall Street Journal</em>, Meta has now <strong>frozen hiring</strong> at Superintelligence Labs.
<em>PYMNTS</em> adds that the division is being <strong>restructured</strong> into four groups, including a new &#8220;TBD Lab&#8221; for frontier research, alongside product-focused and infrastructure teams.</p><p><em>Forbes</em> reports that Meta has suffered a <strong>major brain drain</strong> from its AI research arm over the past few years, with top researchers leaving to found companies like Perplexity, Mistral, Fireworks AI, and World Labs&#8212;or to join rivals such as OpenAI, Anthropic, and Google. <strong>Former Meta employees </strong>told <em>Forbes</em> that this talent loss has been difficult to recover from. As a result, Zuckerberg has aggressively recruited from competitors. One senior executive at a rival AI company told <em>Forbes</em> that Meta is &#8220;overpaying for okay-ish AI scientists,&#8221; creating the impression that hires are considered top-tier simply because of their compensation.</p><p>&#8594; Read: <a href="https://www.forbes.com/sites/rashishrivastava/2025/08/13/zuckerberg-squandered-his-ai-talent-now-hes-spending-billions-to-replace-it/">Zuckerberg Squandered His AI Talent.
Now He&#8217;s Spending Billions To Replace It.</a></p><p>&#8594; Read: <a href="https://www.wsj.com/tech/ai/meta-ai-hiring-freeze-fda6b3c4">Meta Freezes AI Hiring After Blockbuster Spending Spree</a></p><p>&#8594; Read: <a href="https://www.wired.com/story/researchers-leave-meta-superintelligence-labs-openai/">Researchers Are Already Leaving Meta&#8217;s New Superintelligence Lab</a></p><div><hr></div><h3><strong>Microsoft Enlisted FBI to Monitor Employee Gaza Protests</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!QKkQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15004a9f-0643-4058-89aa-5a1c00947113_2184x1318.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!QKkQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15004a9f-0643-4058-89aa-5a1c00947113_2184x1318.png 424w, https://substackcdn.com/image/fetch/$s_!QKkQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15004a9f-0643-4058-89aa-5a1c00947113_2184x1318.png 848w, https://substackcdn.com/image/fetch/$s_!QKkQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15004a9f-0643-4058-89aa-5a1c00947113_2184x1318.png 1272w, https://substackcdn.com/image/fetch/$s_!QKkQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15004a9f-0643-4058-89aa-5a1c00947113_2184x1318.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!QKkQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15004a9f-0643-4058-89aa-5a1c00947113_2184x1318.png" width="1456" height="879" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/15004a9f-0643-4058-89aa-5a1c00947113_2184x1318.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:879,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:443642,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/i/172097897?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15004a9f-0643-4058-89aa-5a1c00947113_2184x1318.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!QKkQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15004a9f-0643-4058-89aa-5a1c00947113_2184x1318.png 424w, https://substackcdn.com/image/fetch/$s_!QKkQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15004a9f-0643-4058-89aa-5a1c00947113_2184x1318.png 848w, https://substackcdn.com/image/fetch/$s_!QKkQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15004a9f-0643-4058-89aa-5a1c00947113_2184x1318.png 1272w, 
https://substackcdn.com/image/fetch/$s_!QKkQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15004a9f-0643-4058-89aa-5a1c00947113_2184x1318.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">https://www.business-humanrights.org/en/latest-news/microsoft-allegedly-asked-fbi-to-track-gaza-protests-by-employees/</figcaption></figure></div><p>For nearly a year, Microsoft has faced waves of internal dissent over its contracts with Israel amid the Gaza war.
Employees have staged walkouts, disrupted company events, and circulated petitions calling on the tech giant to sever ties with the Israeli military.</p><p>Now, internal documents reviewed by <em><strong><a href="https://www.bloomberg.com/news/articles/2025-08-26/microsoft-asked-fbi-for-help-with-israel-gaza-protests">Bloomberg</a></strong></em> show Microsoft escalated its response by seeking help from federal law enforcement, </p><blockquote><p>&#8220;asking for any intelligence on pro-Palestinian protests that may be targeting the company.&#8221; </p></blockquote><p>At the same time, the disagreements also continue inside the company: &#8220;Employees say posts <strong><a href="https://nypost.com/2025/05/22/business/microsoft-bars-employees-from-using-words-palestine-gaza-in-emails/">mentioning Gaza have been deleted and protest emails blocked</a>.</strong>&#8221;</p><p><strong>Leaked emails</strong> further show Microsoft&#8217;s director of investigations contacted the FBI about former employee Hossam Nasr, writing:</p><blockquote><p>"One of our former employees in particular, Hossam Nasr, has been quite active in his posts targeting Microsoft and that we are complicit in genocide."</p></blockquote><p>Beyond the FBI, state and local law enforcement have also tracked Microsoft-related demonstrations since March, according to documents obtained by <em>Bloomberg</em>.</p><p>This development follows earlier reporting (see <strong><a href="https://aiwhistleblowerinitiative.substack.com/i/155426971/tech-companies-military-collaboration-with-israel-post-october">Edition 4</a></strong>) that detailed Microsoft&#8217;s deepening role in supplying cloud and AI technology to the Israeli military. 
Documents obtained by <em><strong><a href="https://www.theguardian.com/world/2025/jan/23/israeli-military-gaza-war-microsoft">The Guardian</a></strong></em><strong> </strong>in January described how Microsoft&#8217;s systems were being used in sensitive military operations, including:</p><ul><li><p>Azure, Microsoft&#8217;s cloud platform, was used by multiple military intelligence units, including Unit 8200 and Unit 81, which develops cutting-edge spy technology for Israel&#8217;s intelligence community.</p></li><li><p>A system Israeli security forces use to manage the population registry and movement of Palestinians in the West Bank and Gaza, known as &#8220;Rolling Stone&#8221;, was maintained using Microsoft&#8217;s technology.</p></li><li><p>During the Gaza offensive, Microsoft&#8217;s suite of communications and messaging systems was used by Ofek, an air force unit responsible for managing large databases of potential targets for lethal strikes known as &#8220;target banks&#8221;. </p></li></ul><p>The employee collective <strong>No Azure for Apartheid</strong> argues that those systems directly aid military operations responsible for mass civilian casualties.</p><p>The company&#8217;s response to internal protests has led to concrete consequences for employees who expressed dissent. As reported in <strong><a href="https://aiwhistleblowerinitiative.substack.com/i/161024302/microsoft-terminates-jobs-of-engineers-who-protested-use-of-ai-products-by-israels-military">Edition 9</a></strong>, two early-career engineers, <strong>Ibtihal Aboussad</strong> and <strong>Vaniya Agrawal</strong>, were dismissed after interrupting Microsoft&#8217;s 50th anniversary celebration in April, where they staged a protest that quickly went viral. &#8220;These are technological weapons,&#8221; Agrawal told <em>Bloomberg</em>.
&#8220;Cloud and AI are just as deadly as bombs and bullets.&#8221;</p><p>&#8594; Read: <a href="https://www.bloomberg.com/news/articles/2025-08-26/microsoft-asked-fbi-for-help-with-israel-gaza-protests">Microsoft Asked FBI for Help Tracking Palestinian Protests</a></p><p>&#8594; Read: <a href="https://www.theguardian.com/us-news/2025/aug/26/microsoft-employee-sit-in-israeli-military-ties">Five current and ex-Microsoft workers arrested at sit-in over Israeli military ties</a></p><p>&#8594; Read: <a href="https://www.business-humanrights.org/en/latest-news/microsoft-allegedly-asked-fbi-to-track-gaza-protests-by-employees/">Microsoft allegedly asked FBI to track Gaza protests by employees</a></p><p>&#8594; Read: <a href="https://www.theguardian.com/world/2025/jan/23/israeli-military-gaza-war-microsoft">Revealed: Microsoft deepened ties with Israeli military to provide tech support during Gaza war</a></p><div><hr></div><h4><strong>Assorted Links</strong></h4><p><em>Recent News You Shouldn&#8217;t Miss</em></p><p><strong>Meta</strong></p><ul><li><p><a href="https://www.ft.com/content/be6a99d2-22de-48ec-9afa-1d2e2f709afc">Meta inflated ad performance and bypassed Apple&#8217;s privacy rules, tribunal hears</a>. 
Former product manager alleges social media platform misled marketers in push for revenues</p></li></ul><p><strong>OpenAI</strong></p><ul><li><p><a href="https://futurism.com/chatgpt-psychosis-antichrist-aliens">Leaked Logs Show ChatGPT Coaxing Users Into Psychosis About Antichrist, Aliens, and Other Bizarre Delusions</a></p></li></ul><p><strong>Nvidia</strong></p><ul><li><p><a href="https://www.theinformation.com/articles/nvidia-orders-halt-h20-production-china-directive-purchases?rc=qlomo6">Nvidia Orders Halt to H20 Production After China Directive Against Purchases</a></p></li></ul><p><strong>Oracle</strong></p><ul><li><p><a href="https://www.bloomberg.com/news/articles/2025-08-20/oracle-cloud-powers-openai-nvidia-and-larry-ellison-s-riches">Oracle Will Reportedly Spend $1 Billion a Year on a Gas-Powered Data Center</a></p></li></ul><div><hr></div><p>Thank you for trusting AIWI as your source for insights on protecting and empowering insiders who raise concerns within AI companies.</p><p>Your feedback is crucial to our mission. We invite you to share any thoughts, questions, or suggestions for future topics so that we can collaboratively enhance our understanding of the challenges and risks faced by those within AI labs.
Together, we can continue to amplify and safeguard the voices of those working within AI companies who courageously address the challenges and risks they encounter.</p><p>If you found this newsletter valuable, please consider sharing it with colleagues or peers who are equally invested in shaping a safe and ethical future for AI.</p><p>Until next time,<br><em>The AIWI Team</em></p><p></p>]]></content:encoded></item><item><title><![CDATA[INSIDE AI #17: Whistleblowing & EU AI Act | #PublishYourPolicies Updates | Nvidia's Trump Deal]]></title><description><![CDATA[Edition 17]]></description><link>https://aiwhistleblowerinitiative.substack.com/p/inside-ai-17-whistleblowing-and-eu</link><guid isPermaLink="false">https://aiwhistleblowerinitiative.substack.com/p/inside-ai-17-whistleblowing-and-eu</guid><dc:creator><![CDATA[AIWI]]></dc:creator><pubDate>Wed, 20 Aug 2025 14:30:13 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!qY8a!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54a52c4c-5bec-40dd-995a-9c4107a67037_984x892.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3><strong>In This Edition:</strong></h3><ul><li><p>Edition Highlights</p><ul><li><p><strong>New publication</strong>: <em><a href="https://aiwhistleblowerinitiative.substack.com/p/internal-warnings-public-stakes-strengthening?r=489j4y">Internal Warnings, Public Stakes</a></em> by our guest contributor, <a href="https://substack.com/@jakubkrys?utm_campaign=guest_post_bio&amp;utm_medium=web">Jakub Kry&#347;</a> (Talos Fellowship)</p></li><li><p>Calls for an <strong>EU AIO Mailbox</strong> are growing</p></li><li><p>New resource for EU AI Whistleblowers:<em> <a href="https://artificialintelligenceact.eu/whistleblowing-and-the-eu-ai-act/">Whistleblowing and the EU AI Act</a></em></p></li></ul></li><li><p>News</p><ul><li><p><strong>Nvidia's Two-Front Challenge</strong>: Trump&#8217;s Chip Deal, Chinese 
Restrictions</p></li><li><p><strong>Anthropic's Amodei</strong>: Weighing Gulf State Capital in Leaked Memos</p></li><li><p><strong>Assorted Links</strong>: Recent developments across Meta, OpenAI, Microsoft, Anthropic, xAI, and others</p></li></ul></li><li><p>Policy</p><ul><li><p><strong>Who's Watching the Watchers?</strong> States Could Answer Where Feds Won't. A Perspective on Policy: <em>Stopping AI Harm Starts with Protecting Whistleblowers</em></p></li></ul></li><li><p>Announcement:</p><ul><li><p><strong>Coalition Call Update</strong> </p><ul><li><p><a href="http://www.PublishYourPolicies.org">PublishYourPolicies.org</a> official launch on Capitol Hill</p></li><li><p>Growing coalition</p></li><li><p>Our call is highlighted by coalition partner, Mary Inman. Watch the conference: <em><a href="https://www.youtube.com/watch?v=zS19PQGry04&amp;t=2700s">The Human Cost: Whistleblowers in the AI Trenches</a></em></p></li><li><p><a href="https://open.substack.com/pub/transformernews/p/gpt-5-underwhelming-launch-pace-ai-development?selection=e60d689f-0991-4314-9a34-7492f6fd67f8&amp;utm_campaign=post-share-selection&amp;utm_medium=web&amp;aspectRatio=instagram&amp;textColor=%23ffffff">Transformers</a> mentioned our call in the <em>Influence</em> section.</p></li></ul></li></ul></li></ul><div><hr></div><blockquote><h1><strong>Edition Highlights</strong></h1></blockquote><p>With the EU AI Act now in force and its provisions phasing in, one issue is moving quickly up the agenda: how can insiders safely raise concerns when AI systems pose (hidden) risks? Current EU whistleblower protections provide a foundation, but researchers and policymakers suggested that the unique challenges of AI&#8212;technical complexity, deployments, and systemic risks&#8212;are not fully covered.</p><p>In this edition, our guest contributor <strong>Jakub Kry&#347; (Talos Fellowship)</strong> highlights the gaps in existing frameworks to strengthen the whistleblowing ecosystem in the EU. 
Meanwhile, a new resource for EU AI Whistleblowers from the <strong>Future of Life Institute</strong> and the <strong>AI Whistleblower Initiative</strong> maps the intersection between the Whistleblowing Directive and the EU AI Act. At the policy level,&nbsp;<strong>authors of the EU AI Code of Practice</strong>&nbsp;(Safety and Security Section) are calling for the creation of a dedicated&nbsp;<strong>AI Office Mailbox</strong>&nbsp;for whistleblower disclosures.</p><div><hr></div><h2><strong>New Publication: Internal Warnings, Public Stakes: Strengthening AI Whistleblower Protections in the EU</strong> </h2><p>Jakub Kry&#347; (Talos Fellowship) identifies critical gaps in EU AI whistleblower protections, finding that while the AI Act and Whistleblower Protection Directive provide a foundation, internal deployment risks may fall outside legal scope, and reporting systems lack adequate staffing and technical expertise to handle whistleblower disclosures.
He presents a set of recommendations to strengthen the EU&#8217;s whistleblowing ecosystem, including reinforcing internal and external channels, formally engaging advisory NGOs, and <strong>establishing a dedicated mailbox within the EU AI Office.</strong></p><p>This analysis was published in our newsletter Inside AI, with <a href="https://substack.com/@jakubkrys?utm_campaign=guest_post_bio&amp;utm_medium=web">Jakub Kry&#347;</a> as a guest contributor.</p><p>&#8594; Read more: <a href="https://aiwhistleblowerinitiative.substack.com/p/internal-warnings-public-stakes-strengthening?r=489j4y">Internal Warnings, Public Stakes: Strengthening AI Whistleblower Protections in the EU</a></p><div><hr></div><h2><strong>A Growing Call for an AIO Mailbox</strong></h2><p><em>Drafted by the CoP Authors of the <strong>Safety and Security </strong>Chapter</em></p><p>The Code of Practice (Safety and Security Section) authors recommend that the EU AI Office establish a dedicated reporting channel for AI-related whistleblower disclosures, granting whistleblowers the same rights as member state channels, including response timelines, public disclosure options, and anonymous communication capabilities. The call for the EU AIO Mailbox is drafted by the Chairs of the Code&#8217;s chapter on <strong>Safety and Security:</strong></p><blockquote><p>"Given the limitations of current legal protections, we recommend that the AI Office establishes <strong>a dedicated reporting channel for AI-related whistleblower disclosures.</strong> This channel should afford equal rights to whistleblowers as member state external reporting channels (such as response timelines, public disclosure options, and clarity around trade secret disclosures), enabling anonymous communication and promoting awareness of the channel's availability."</p></blockquote><p>Many frontier AI companies have already signed the CoP, including Anthropic, OpenAI, DeepMind, Microsoft, and other tech companies.
Recent news highlighted that <a href="https://techcrunch.com/2025/07/18/meta-refuses-to-sign-eus-ai-code-of-practice/">Meta has refused to sign</a>; however, under the guidelines for providers of general-purpose AI models, Meta, as a provider of &#8220;general-purpose AI models with systemic risk&#8221;, must nevertheless comply with the legislation by August 2, 2027.</p><p>&#8594; Read more: <a href="https://code-of-practice.ai/?section=safety-security">EU AI ACT: General Purpose AI Code of Practice</a></p><p>&#8594; See which companies signed the CoP: <a href="https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai#ecl-inpage-Signatories-of-the-AI-Pact">Signatories of Code of Practice</a></p><div><hr></div><h2>New Resource for EU AI Whistleblowers: <strong>Whistleblowing and the EU AI Act</strong> </h2><p>A new resource, authored by <strong>Santeri Koivula</strong> (<strong>Future of Life Institute</strong>) and <strong>Karl Koch</strong> (our founder, <strong>AI Whistleblower Initiative</strong>), has just been published.
This publication provides an overview of how the EU Whistleblowing Directive relates to the AI Act and offers valuable resources for potential whistleblowers.</p><p>&#8594; Read more: <a href="https://artificialintelligenceact.eu/whistleblowing-and-the-eu-ai-act/">Whistleblowing and the EU AI Act</a></p><div><hr></div><blockquote><h1>Insider Currents </h1></blockquote><p><em>Carefully curated summaries and links to the latest news, spotlighting the voices and concerns emerging from within AI companies.</em></p><h2>Nvidia: Trump&#8217;s Handshake and Beijing&#8217;s Wall</h2><p>Previously, <a href="https://techcrunch.com/2025/07/23/trumps-ai-action-plan-aims-to-block-chip-exports-to-china-but-lacks-key-details/">Trump&#8217;s AI Plan was reported to&nbsp;</a><strong><a href="https://techcrunch.com/2025/07/23/trumps-ai-action-plan-aims-to-block-chip-exports-to-china-but-lacks-key-details/">aim to block chip exports to China</a>.</strong> Yet recent updates show he&#8217;s still threading a needle in chip deals with China&#8212;a lucrative market for Nvidia&#8212;by easing some sales restrictions.</p><h3>Beijing Talks to Hit Pause on Nvidia Chips: Location Tracking Mandate &amp; Security Concern</h3><p>After Trump struck a deal allowing Nvidia to resume sales in China, the company is facing a new obstacle: Beijing. <em>According to three people briefed on the matter</em>, the&nbsp;<strong>Cyberspace Administration of China (CAC)</strong>&nbsp;recently ordered local tech giants, including ByteDance, Alibaba, and Tencent, to&nbsp;<strong>suspend purchases of Nvidia chips</strong>, citing data security concerns.</p><p>The setback comes just as Chinese firms were ramping up orders of 700,000 H20 chips, after Washington lifted its ban last month, <em>according to four people with direct knowledge of these purchases</em>. Under Trump&#8217;s deal, Nvidia can sell H20s on the condition it pays the U.S.
government a 15% revenue cut, and Trump is even open to letting Nvidia sell a scaled-back version of its advanced Blackwell chip. Nvidia CEO Jensen Huang argued to Trump that by supplying weaker chips, the U.S. could keep China reliant on American technology.</p><p>Beijing, however, appears to recognize the risks that this very tactic poses for China, added <em><a href="https://www.theinformation.com/articles/china-demands-companies-halt-nvidia-chip-orders-security-concerns?rc=qlomo6">The Information</a></em>. Two people close to the CAC are reported to have raised two concerns:</p><ul><li><p><strong>Location tracking</strong>: Proposed U.S. legislation would mandate that chips include location trackers to curb smuggling. That concern is probably not unfounded&#8212;just recently, federal agents arrested two Chinese nationals in California for illegally exporting millions of dollars&#8217; worth of Nvidia&#8217;s top-tier H100 chips, as reported by <em><a href="https://www.reuters.com/business/autos-transportation/two-chinese-nationals-california-accused-illegally-shipping-nvidia-ai-chips-2025-08-05/">Reuters</a></em>.</p></li><li><p><strong>Security &#8220;backdoor&#8221;</strong>: Chinese chipmakers and customers have complained for months about the possibility that Nvidia&#8217;s chips could siphon sensitive data back to the U.S.</p></li></ul><p>Adding fuel to the fire, the <a href="https://x.com/Discoplomacy/status/1955592845592543238">Discoplomacy X account</a> posted a thread claiming that the U.S. has been placing location-tracking devices in targeted shipments of advanced chips it sees as at high risk of illegal diversion to China.</p><p>However, the scope of China&#8217;s suspension of Nvidia chip shipments remains murky. <em>The Information</em> stated that &#8220;the CAC typically does not directly regulate tech companies&#8217; data center construction.
The agency is responsible for regulating and overseeing Internet affairs, including data security and online content management.&#8221;</p><p>&#8594; Read: <a href="https://www.bloomberg.com/news/articles/2025-08-11/trump-open-to-nvidia-selling-scaled-back-blackwell-chip-to-china">Trump Open to Nvidia Selling Scaled-Back Blackwell Chip to China</a></p><p>&#8594; Read: <a href="https://www.theinformation.com/articles/china-demands-companies-halt-nvidia-chip-orders-security-concerns?rc=qlomo6">China Demands Companies to Halt Nvidia Chip Orders Over Security Concerns</a></p><p>&#8594; Read: <a href="https://x.com/Discoplomacy/status/1955592845592543238">@Discoplomacy on X</a></p><p>&#8594; Read: <a href="https://techcrunch.com/2025/07/23/trumps-ai-strategy-trades-guardrails-for-growth-in-race-against-china/">Trump&#8217;s AI strategy trades guardrails for growth in race against China</a></p><p>&#8594; Read: <a href="https://techcrunch.com/2025/08/11/nvidia-amd-may-sell-high-end-ai-chips-to-china-if-they-pay-us-a-cut/">Nvidia, AMD may sell high-end AI chips to China if they pay US a cut</a></p><p>&#8594; Read: <a href="https://www.reuters.com/business/autos-transportation/two-chinese-nationals-california-accused-illegally-shipping-nvidia-ai-chips-2025-08-05/">Two Chinese nationals in California accused of illegally shipping Nvidia AI chips to China</a></p><div><hr></div><h2><strong>Anthropic&#8217;s Amodei Weighing Gulf State Capital in Leaked Memos</strong></h2><p>Anthropic CEO Dario Amodei is quietly preparing the company to accept investment from Gulf State sovereign wealth funds&#8212;despite earlier rejections on national security grounds&#8212;according to internal memos and Slack messages obtained by <em>Wired</em> and <em>The Information</em>.</p><p>Amodei, who in 2024 cited human rights concerns in refusing Saudi Arabian money, has now told employees that the competitive landscape leaves Anthropic little choice. 
&#8220;There is a truly giant amount of capital in the Middle East, easily $100B or more,&#8221; he wrote, according to Wired. </p><blockquote><p>&#8220;If we want to stay on the frontier, we gain a very large benefit from having access to this capital. Without it, it is substantially harder to stay on the frontier&#8221;</p></blockquote><p>Amodei reiterated that Anthropic will not build data centers in the region, warning that handing &#8220;the supply chain of AI&#8221; to authoritarian governments risks enabling military dominance. But he argued that narrowly scoped financial investments from the UAE and Qatar could be structured to avoid ceding control. </p><p>Still, he acknowledged the ethical tradeoffs.</p><blockquote><p>&#8220;Unfortunately, I think &#8216;No bad person should ever benefit from our success&#8217; is a pretty difficult principle to run a business on,&#8221; wrote Anthropic CEO Dario Amodei in a note to staff obtained by <a href="https://www.wired.com/story/anthropic-dario-amodei-gulf-state-leaked-memo/">WIRED</a>.</p></blockquote><p><em><a href="https://futurism.com/leaked-messages-ceo-anthropic-dictators">Futurism</a></em> described Amodei&#8217;s memo as proof that even AI companies once committed to ethics are now sacrificing their principles to chase funding for massive and environmentally destructive projects in pursuit of scale.</p><p>Amodei himself admitted that investors might gain &#8220;soft power&#8221; by promising future rounds, and warned of &#8220;erosion of standards&#8221; as competitors deepen ties in the region. &#8220;Unfortunately, having failed to prevent that dynamic at the collective level, we&#8217;re now stuck with it as an individual company,&#8221; he wrote, noting that U.S. 
rivals are already planning multi-gigawatt training clusters in Abu Dhabi.</p><p>&#8594; Read: <a href="https://www.wired.com/story/anthropic-dario-amodei-gulf-state-leaked-memo/">Leaked Memo: Anthropic CEO Says the Company Will Pursue Gulf State Investments After All</a></p><p>&#8594; Read: <a href="https://futurism.com/leaked-messages-ceo-anthropic-dictators">Leaked Slack Messages Show CEO of "Ethical AI" Startup Anthropic Saying It's Okay to Benefit Dictators</a></p><p>&#8594; Read: <a href="https://www.theinformation.com/articles/middle-easts-ai-push-promises-big-deals-strings-attached?rc=qlomo6">Middle East&#8217;s AI Push Promises Big Deals&#8212;With Strings Attached</a></p><div><hr></div><h3><strong>Assorted Links</strong></h3><p><em>Recent News You Shouldn&#8217;t Miss</em></p><h4><strong>Meta:</strong></h4><ul><li><p><a href="https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/">Meta&#8217;s AI rules have let bots hold &#8216;sensual&#8217; chats with kids, offer false medical info</a></p></li></ul><h4>OpenAI: </h4><ul><li><p><a href="https://techcrunch.com/2025/07/31/openai-to-launch-ai-data-center-in-norway-its-first-in-europe/">OpenAI to launch AI data center in Norway, its first in Europe</a></p></li><li><p><a href="https://techcrunch.com/2025/08/12/sam-altman-openai-will-reportedly-back-a-startup-that-takes-on-musks-neuralink/">Sam Altman, OpenAI will reportedly back a startup that takes on Musk&#8217;s Neuralink</a></p></li><li><p><a href="https://www.artificiallawyer.com/2025/07/28/chatgpt-has-no-legal-privilege-is-this-a-problem/">ChatGPT Has No Legal Privilege &#8211; Is This A Problem?</a></p></li></ul><h4>Microsoft: </h4><ul><li><p><a href="https://www.businessinsider.com/microsoft-employees-react-ceo-memo-satya-nadella-enigma-success-2025-7">Microsoft employees react to CEO's 'enigma of success' memo with a mixture of suspicion, anger, and speculation</a></p></li></ul><h4>Anthropic:</h4><ul><li><p> <a 
href="https://www.wired.com/story/anthropic-revokes-openais-access-to-claude/">Anthropic Revokes OpenAI's Access to Claude</a></p></li></ul><h4>xAI:</h4><ul><li><p><a href="https://www.businessinsider.com/xai-grok-training-facial-expressions-skippy-employees-react-2025-7">Elon Musk's xAI tried to teach Grok how to be human &#8212; by recording its own workers' faces</a></p></li></ul><h4>Palantir: </h4><ul><li><p><a href="https://www.theguardian.com/technology/2025/aug/04/tech-trillion-dollars-palantir-techscape">Tech&#8217;s trillion-dollar binge, Palantir&#8217;s empire and women&#8217;s privacy under attack</a></p></li></ul><h4>Others:</h4><ul><li><p><a href="https://www.forbes.com/sites/jamesbroughel/2025/07/28/doge-built-an-ai-to-delete-half-of-federal-regulations-will-it-work/">DOGE Built An AI To Delete Half Of Federal Regulations. Will It Work?</a></p></li></ul><div><hr></div><blockquote><h1>Policy &amp; Legal Updates</h1></blockquote><p><em>Updates on regulations with a focus on safeguarding individuals who voice concerns.</em></p><h2><strong>Who's Watching the Watchers? States Could Answer Where Feds Won't</strong></h2><p><em>A Perspective on Policy: Stopping AI Harm Starts with Protecting Whistleblowers</em></p><blockquote><p>&#8220;In the absence of federal leadership, states must step up to fill the regulatory gap,&#8221; </p></blockquote><p>argue Vincent Cortese, Jocelyn Hong, and Charlotte Lin, Research Assistants at the Center for AI and Digital Policy (CAIDP).</p><p>Their analysis comes as the Trump administration's July 2025 AI Action Plan emphasizes deregulation to boost American AI competitiveness. They describe the federal approach as signaling &#8220;its intent to dismantle the regulatory environment designed to safeguard the public.&#8221; This approach puts AI insiders in a precarious position.
While they often hold the crucial knowledge needed to identify emerging risks, they lack legal protection when speaking out about these risks.</p><p>Recent cases illustrate that whistleblowers&#8217; concerns highlight the safety and ethical risks of powerful AI systems, as big tech races to deploy them:</p><ul><li><p>&#8220;In 2024, a Microsoft software engineer <a href="https://www.cnbc.com/2024/03/06/microsoft-ai-engineer-says-copilot-designer-creates-disturbing-images.html">reported to both the FTC and Microsoft&#8217;s board</a> that the Copilot Designer image generator was producing graphic and violent content alongside images of children. Despite his repeated calls for product warnings and restrictions, Microsoft allegedly continued to market the product without significant changes.&#8221;</p></li><li><p>&#8220;The tragic case of <a href="https://fortune.com/2025/02/15/san-francisco-police-report-officially-rules-openai-whistleblower-suchir-balajis-death-suicide/">Suchir Balaji</a>, a former OpenAI researcher who published <a href="https://suchir.net/fair_use.html">a detailed analysis</a> questioning the legality and ethics of ChatGPT&#8217;s training methods. Balaji was reportedly preparing to testify in the <a href="https://harvardlawreview.org/blog/2024/04/nyt-v-openai-the-timess-about-face/">New York Times v. OpenAI case</a> when he died unexpectedly in November 2024.&#8221;</p></li></ul><blockquote><p>&#8220;These examples point to a critical gap in the current regulatory framework.
Without clear and enforceable whistleblower protections specific to the AI sector, we are asking company researchers and engineers to choose between their livelihoods and the public interest.&#8221;</p></blockquote><h4>States Could Establish National Frameworks Without Federal Regulation</h4><p>Strong protections are already advancing in California and other states that specifically target AI developers creating &#8220;<a href="https://sd11.senate.ca.gov/news/senator-wiener-introduces-legislation-protect-ai-whistleblowers-boost-responsible-ai">large-scale</a>&#8221; AI systems.</p><ul><li><p>California&#8217;s <a href="https://legiscan.com/CA/bill/SB53/2025">SB 53</a></p></li><li><p>Illinois&#8217; <a href="https://www.ilga.gov/Legislation/BillStatus?GAID=18&amp;DocNum=3506&amp;DocTypeID=HB&amp;LegId=0&amp;SessionID=114">Artificial Intelligence Safety and Security Protocol Act</a> (HB 3506)</p></li><li><p><a href="https://www.nysenate.gov/legislation/bills/2025/S1169/amendment/original">New York&#8217;s AI Act (SB S1169)</a></p></li><li><p>New Jersey introduced AI whistleblower resolutions in its <a href="https://pub.njleg.gov/Bills/2024/AR/158_S1.PDF">Assembly</a>, which passed in January 2025, and its <a href="https://pub.njleg.state.nj.us/Bills/2024/SR/121_I1.PDF">Senate</a>, which is still pending</p></li></ul><p>The state-level protections above show that aligned legal wording minimizes uncertainty for companies operating in multiple jurisdictions. Uniform legal standards can ensure that employees receive equal rights and protections regardless of their location. They could also prevent the fragmented regulatory environment that tech companies frequently identify as an obstacle to innovation, added Cortese, Hong, and Lin.</p><p>Whistleblower systems have historically been effective.
A 2007 report from the National Whistleblower Center found insiders exposed 43% of corporate fraud cases&#8212;far outpacing professional auditors at 19%.</p><p>For AI, the stakes are even higher. As Cortese, Hong, and Lin note, AI is now embedded across the economy and public life. Policymakers may not be able to predict every risk, but they can create channels that allow insiders to flag risks safely&#8212;before these risks become a public safety crisis.</p><p>&#8594; Read Authors&#8217; Perspectives: <a href="https://www.techpolicy.press/stopping-ai-harm-starts-with-protecting-whistleblowers/">Stopping AI Harm Starts with Protecting Whistleblowers</a></p><div><hr></div><blockquote><h1>Announcements &amp; Call to Action</h1></blockquote><p><em>Updates on publications, community initiatives, and &#8220;call for topics&#8221; that seek contributions from experts addressing concerns inside Frontier AI.</em></p><h2><strong>Campaign Update</strong>: <a href="http://www.PublishYourPolicies.org">PublishYourPolicies.org</a></h2><h4>Official Launch on Capitol Hill &#8211; National Whistleblower Day (July 30)</h4><p>Our joint call (together with 35+ organizations, scholars, and experts on AI &amp; whistleblowing) was officially launched on Capitol Hill during National Whistleblower Day. The event, hosted in the U.S.
Senate, was attended by Senator Chuck Grassley and other leaders in AI and accountability.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qY8a!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54a52c4c-5bec-40dd-995a-9c4107a67037_984x892.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qY8a!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54a52c4c-5bec-40dd-995a-9c4107a67037_984x892.png 424w, https://substackcdn.com/image/fetch/$s_!qY8a!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54a52c4c-5bec-40dd-995a-9c4107a67037_984x892.png 848w, https://substackcdn.com/image/fetch/$s_!qY8a!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54a52c4c-5bec-40dd-995a-9c4107a67037_984x892.png 1272w, https://substackcdn.com/image/fetch/$s_!qY8a!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54a52c4c-5bec-40dd-995a-9c4107a67037_984x892.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qY8a!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54a52c4c-5bec-40dd-995a-9c4107a67037_984x892.png" width="984" height="892" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/54a52c4c-5bec-40dd-995a-9c4107a67037_984x892.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:892,&quot;width&quot;:984,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1713037,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/i/170759504?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54a52c4c-5bec-40dd-995a-9c4107a67037_984x892.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qY8a!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54a52c4c-5bec-40dd-995a-9c4107a67037_984x892.png 424w, https://substackcdn.com/image/fetch/$s_!qY8a!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54a52c4c-5bec-40dd-995a-9c4107a67037_984x892.png 848w, https://substackcdn.com/image/fetch/$s_!qY8a!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54a52c4c-5bec-40dd-995a-9c4107a67037_984x892.png 1272w, https://substackcdn.com/image/fetch/$s_!qY8a!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54a52c4c-5bec-40dd-995a-9c4107a67037_984x892.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
style="height:20px;width:20px" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">On National Whistleblower Day, our coalition call was launched during the Senate AI Panel on Protecting AI and Tech Whistleblowers.</figcaption></figure></div><p>We delivered a clear message:</p><p><strong>AI companies must publish their internal whistleblowing policies, along with evidence of how effective these systems are at protecting whistleblowers and addressing misconduct.</strong></p><p>If AI companies refuse to make even this minimal commitment, how can the public trust their promises to protect whistleblowers?</p><h4>Growing Coalitions</h4><p>On the same day as our call launch, <strong>Mary Inman from Whistleblower Partners LLP</strong> (our coalition partner in this joint call) highlighted our initiative during <em>The Human Cost: Whistleblowers in the AI 
Trenches</em>&#8212;a conference exploring the personal risks whistleblowers face and the systemic failures in tech accountability. <a href="https://www.youtube.com/watch?v=zS19PQGry04&amp;t=2700s">Watch here</a></p><p>Our coalition continues to grow stronger.</p><p><strong>We're proud to welcome the latest coalition partners:</strong></p><p><strong>Daniel Kokotajlo</strong>, the OpenAI whistleblower who revealed the extensive use of non-disparagement agreements aimed at stifling criticism of the company by former employees, has joined our coalition call.</p><p>Daniel has long been committed to the effort to establish robust protections for AI insiders who raise concerns. One year ago, he and other OpenAI insiders issued the "Right To Warn" letter. This letter was co-signed by distinguished voices, including Stuart Russell, and publicly supported by Lawrence Lessig (both members of our joint call), with a clear call for AI companies to create robust whistleblowing systems and protect those brave enough to warn the public about catastrophic AI risks.</p><p>Yet here we are, twelve months later, with these critical safety mechanisms still shrouded in secrecy.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!QKfH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee869d13-a110-4b21-b73d-ba1f3ae341c5_1536x1536.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!QKfH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee869d13-a110-4b21-b73d-ba1f3ae341c5_1536x1536.jpeg 424w,
https://substackcdn.com/image/fetch/$s_!QKfH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee869d13-a110-4b21-b73d-ba1f3ae341c5_1536x1536.jpeg 848w, https://substackcdn.com/image/fetch/$s_!QKfH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee869d13-a110-4b21-b73d-ba1f3ae341c5_1536x1536.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!QKfH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee869d13-a110-4b21-b73d-ba1f3ae341c5_1536x1536.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!QKfH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee869d13-a110-4b21-b73d-ba1f3ae341c5_1536x1536.jpeg" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ee869d13-a110-4b21-b73d-ba1f3ae341c5_1536x1536.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:235253,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/i/170759504?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee869d13-a110-4b21-b73d-ba1f3ae341c5_1536x1536.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!QKfH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee869d13-a110-4b21-b73d-ba1f3ae341c5_1536x1536.jpeg 424w, https://substackcdn.com/image/fetch/$s_!QKfH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee869d13-a110-4b21-b73d-ba1f3ae341c5_1536x1536.jpeg 848w, https://substackcdn.com/image/fetch/$s_!QKfH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee869d13-a110-4b21-b73d-ba1f3ae341c5_1536x1536.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!QKfH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee869d13-a110-4b21-b73d-ba1f3ae341c5_1536x1536.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" style="height:20px;width:20px" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" 
stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>We've also welcomed <strong>Digihumanism</strong>, <strong>WhistleblowersUK</strong>, and <strong>Whistleblowers of America</strong> to our coalition. Each new partner brings unique expertise and shares our core conviction: transparency isn't optional when the stakes are this high.</p><p>Our call was also featured in <strong>Transformer</strong>, in the <em>Influence</em> section covering AI development and accountability. </p><p>&#8212;&gt; See the mention by Transformer <a href="https://open.substack.com/pub/transformernews/p/gpt-5-underwhelming-launch-pace-ai-development?selection=e60d689f-0991-4314-9a34-7492f6fd67f8&amp;utm_campaign=post-share-selection&amp;utm_medium=web&amp;aspectRatio=instagram&amp;textColor=%23ffffff">here</a>.</p><div><hr></div><p>Thank you for trusting AIWI as your source for insights on protecting and empowering insiders who raise concerns within AI companies.</p><p>Your feedback is crucial to our mission. We invite you to share any thoughts, questions, or suggestions for future topics so that we can collaboratively enhance our understanding of the challenges and risks faced by those within AI labs. 
Together, we can continue to amplify and safeguard the voices of those working within AI companies who courageously address the challenges and risks they encounter.</p><p>If you found this newsletter valuable, please consider sharing it with colleagues or peers who are equally invested in shaping a safe and ethical future for AI.</p><p>Until next time,<br><em>The AIWI Team</em></p>]]></content:encoded></item><item><title><![CDATA[We Need Your Eyes on This: AI Companies Acknowledge AI Risks But Keep Their Whistleblowing Policies Secret]]></title><description><![CDATA[Our coalition of 30+ global organizations says this must end. At a minimum, AI companies must be transparent about their policies. 
Find out why here.]]></description><link>https://aiwhistleblowerinitiative.substack.com/p/we-need-your-eyes-on-this-ai-companies</link><guid isPermaLink="false">https://aiwhistleblowerinitiative.substack.com/p/we-need-your-eyes-on-this-ai-companies</guid><dc:creator><![CDATA[AIWI]]></dc:creator><pubDate>Tue, 05 Aug 2025 17:14:47 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!OPLJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5eac206-10ea-4b23-8481-845c95db666d_1196x324.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>An Article Originally Published on LessWrong for the Launch of This Coalition Call, by Karl Koch, Founder of AIWI.</em></p><p><strong>Updated Context</strong></p><p>The piece below originally announced our coalition call. The urgency of that call remains, and we now need your support to strengthen it.</p><p>We are all witnessing a striking contradiction among leading AI companies. Major frontier AI companies have publicly acknowledged substantial AI risks. Yet when it comes to one of the most basic transparency measures for uncovering and addressing these very risks&#8212;publishing their internal whistleblowing policies&#8212;5 out of 6 major AI companies have not done so. This isn't a minor oversight: It's a fundamental test of whether these companies' safety commitments extend beyond public relations to actual accountability mechanisms.</p><p>Employees working on frontier AI systems are uniquely positioned to identify emerging risks, safety failures, and concerning practices that external observers simply cannot see. But employees can only serve this important function if they have robust protections and clear channels for raising concerns. 
Without transparent whistleblowing policies, we have no way to assess whether these crucial safety mechanisms even exist, let alone whether they're adequate for the unprecedented challenges of AI development we all acknowledge.</p><p>Since launching our coalition call on July 30 in Washington D.C., we have yet to see a meaningful response from the AI companies. <strong>This is why we need your help to amplify this message and, if you work in frontier AI, reach out to your leadership. </strong></p><blockquote><p><strong>Policy transparency isn't another red-tape regulatory request&#8212;it's a litmus test for whether AI companies will implement the most basic accountability measures consistent with the risks they themselves acknowledge. </strong></p></blockquote><p>If they won't take this essential step toward transparency, what confidence can we have in their private safety commitments? We, the public, deserve to know whether the mechanisms for internal oversight actually exist. We cannot simply hope and &#8216;trust&#8217; that risks are effectively addressed and managed internally.</p><p><em>[The full original piece continues below...]</em></p><h2>Call on AI Companies: Publish Your Whistleblowing Policies</h2><p><em><a href="https://publishyourpolicies.org/">A coalition of 30+ whistleblowing and AI organizations is calling for stronger transparency on AI company-internal whistleblower protections.</a></em></p><h3>Transparency of major AI companies&#8217; whistleblowing systems*</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OPLJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5eac206-10ea-4b23-8481-845c95db666d_1196x324.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OPLJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5eac206-10ea-4b23-8481-845c95db666d_1196x324.webp 424w, https://substackcdn.com/image/fetch/$s_!OPLJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5eac206-10ea-4b23-8481-845c95db666d_1196x324.webp 848w, https://substackcdn.com/image/fetch/$s_!OPLJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5eac206-10ea-4b23-8481-845c95db666d_1196x324.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!OPLJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5eac206-10ea-4b23-8481-845c95db666d_1196x324.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OPLJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5eac206-10ea-4b23-8481-845c95db666d_1196x324.webp" width="1196" height="324" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b5eac206-10ea-4b23-8481-845c95db666d_1196x324.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:324,&quot;width&quot;:1196,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4164,&quot;alt&quot;:&quot;Rating of whistleblower system transparency of major AI companies. Details below. *Please note that AIWI only evaluates the transparency of the policy and outcome reporting&#8212;not the content or quality of the underlying system, protections, culture, or past patterns of retaliation.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/i/170166771?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5eac206-10ea-4b23-8481-845c95db666d_1196x324.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Rating of whistleblower system transparency of major AI companies. Details below. *Please note that AIWI only evaluates the transparency of the policy and outcome reporting&#8212;not the content or quality of the underlying system, protections, culture, or past patterns of retaliation." title="Rating of whistleblower system transparency of major AI companies. 
Details below. *Please note that AIWI only evaluates the transparency of the policy and outcome reporting&#8212;not the content or quality of the underlying system, protections, culture, or past patterns of retaliation." srcset="https://substackcdn.com/image/fetch/$s_!OPLJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5eac206-10ea-4b23-8481-845c95db666d_1196x324.webp 424w, https://substackcdn.com/image/fetch/$s_!OPLJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5eac206-10ea-4b23-8481-845c95db666d_1196x324.webp 848w, https://substackcdn.com/image/fetch/$s_!OPLJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5eac206-10ea-4b23-8481-845c95db666d_1196x324.webp 1272w, https://substackcdn.com/image/fetch/$s_!OPLJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5eac206-10ea-4b23-8481-845c95db666d_1196x324.webp 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" style="height:20px;width:20px" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 
13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">*Please note that AIWI only evaluates the transparency of the policy and outcome reporting&#8212;not the content or quality of the underlying system, protections, culture, or past patterns of retaliation.</figcaption></figure></div><p>Frontier AI companies currently lag global standards and best practices in creating adequate transparency around their internal whistleblowing systems:</p><blockquote><p><strong>5 out of 6 AI companies in our target set do not even publish their whistleblowing policies.</strong></p></blockquote><p>That means we, the public, and employees in AI, are forced to 'trust' that companies will address concerns well internally.</p><p><em><strong>This is far from good enough</strong></em></p><p>...and why we, at the National Whistleblower Day Event in Washington DC on 30 July 2025, launched a campaign asking AI companies to publish their internal whistleblowing policies <a href="https://aiwi.org/publishyourpolicies/">("Level 1")</a> and reports on their whistleblowing system performance, effectiveness, and outcomes (<a href="https://aiwi.org/publishyourpolicies/">"Level 2")</a>.</p><p>We are very proud of the coalition we have the privilege of representing here - uniting most of the world's most prominent whistleblowing organizations and scholars with equally prominent AI counterparts.</p><p>See the full list of signatories further below, or visit our 
<strong><a href="https://publishyourpolicies.org/">campaign page</a>.</strong></p><h3>This Post</h3><p>You can find the actual campaign page, including evidence and sources, here: </p><p><a href="https://publishyourpolicies.org/">https://publishyourpolicies.org/</a></p><p>In this post I'll share the same message with a slightly altered 'storyline'.</p><h3>Why This Matters Now</h3><p>I don't have to make the case here for why we should care about the way AI companies go about development and deployment of their frontier models - especially over the coming years.</p><p>Likewise, if you've seen <strong><a href="https://righttowarn.ai/">righttowarn</a>,</strong> you're likely aware of this line of reasoning:</p><blockquote><p><strong>Many risks will only be visible to insiders.</strong></p></blockquote><p>The current black-box nature of AI development means employees are often the first&#8212;and potentially only&#8212;people positioned to spot dangerous developments, misconduct, or safety shortcuts.</p><p>It therefore matters that AI companies build the infrastructure required to address concerns raised today, so that a 'race to the top' on system quality can begin as soon as possible.</p><p>Transparency on internal whistleblowing systems, allowing for public feedback and empowering employees to understand and compare protections, is the mechanism that sets that 'race to the top' in motion.</p><p><strong>Important note 1:</strong></p><p>We are talking about company-internal whistleblowing systems here (although they can extend arbitrarily far in terms of 'covered persons', e.g. to suppliers, customers, etc.). <strong>This does NOT diminish the importance of legal protections for AI whistleblowers or independent support offerings for insiders. </strong></p><p>But the reality is (see below) that we expect the majority of risks to be flagged internally <em>first. That means internal channels are critical and must not be neglected. 
</em>If you like the 'Swiss cheese model' of risk management - we want to make sure protections are as strong as possible at <em>every level</em>.</p><p><strong>Important note 2:</strong></p><p>Both in this post and our main post, we are not evaluating policy or system quality. We only talk about the degree of transparency provided.</p><h3>The Case for Transparency</h3><h4>1. Insiders Are Uniquely Positioned</h4><p>Current and former AI employees have recognized that they are "among the few people who can hold [companies] accountable to the public." <a href="https://righttowarn.ai/">They've called for </a>companies to "facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company's board, to regulators, and to an appropriate independent organization."</p><p>Research consistently shows that employees are often the first to recognize potential wrongdoing or risk of harm. In AI specifically, the technical complexity and proprietary nature of development mean many risks are only visible to those with internal access.</p><h4>2. Internal Channels Are a Major Path</h4><p>Data from the <strong><a href="https://www.sec.gov/files/2021_ow_ar_508.pdf">SEC Whistleblower Program</a></strong> shows that three-quarters of award recipients initially attempted to address concerns within their organizations before seeking external remedies. Employees naturally try internal channels first, and we expect this to be no different in frontier AI companies:</p><ol><li><p>Nature of work: Research &amp; Engineering work relies on discussion. It is standard practice for concerns to be escalated internally - especially if they are less 'clear cut' and independently identifiable than, e.g., 
accounting fraud or bribery (which are, however, still within the scope of whistleblowing policies).</p></li><li><p>Culture: Addressing concerns internally first is a common norm in many Silicon Valley organizations.</p></li></ol><p>This means that these systems must work reliably:</p><blockquote><p>When internal systems fail, we all lose. Companies miss opportunities to address problems early, employees face unnecessary risks, and the public remains unaware of safety issues until they potentially become crises.</p></blockquote><h4>3. Current Systems Are Opaque and Potentially Broken</h4><p>Major AI companies have not published their whistleblowing policies. The recent <strong><a href="https://futureoflife.org/wp-content/uploads/2025/07/FLI-AI-Safety-Index-Report-Summer-2025.pdf">Future of Life Institute AI Safety Index</a></strong> highlighted that Anthropic, Google DeepMind, xAI, and Mistral lack public whistleblowing policies, making neutral assessment impossible. 
They, likewise, call for the publication of policies.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OsCK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedb78d71-dcd9-4581-8316-9a23af4371fb_1210x678.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OsCK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedb78d71-dcd9-4581-8316-9a23af4371fb_1210x678.png 424w, https://substackcdn.com/image/fetch/$s_!OsCK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedb78d71-dcd9-4581-8316-9a23af4371fb_1210x678.png 848w, https://substackcdn.com/image/fetch/$s_!OsCK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedb78d71-dcd9-4581-8316-9a23af4371fb_1210x678.png 1272w, https://substackcdn.com/image/fetch/$s_!OsCK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedb78d71-dcd9-4581-8316-9a23af4371fb_1210x678.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OsCK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedb78d71-dcd9-4581-8316-9a23af4371fb_1210x678.png" width="1210" height="678" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/edb78d71-dcd9-4581-8316-9a23af4371fb_1210x678.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:678,&quot;width&quot;:1210,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:154975,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/i/170166771?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedb78d71-dcd9-4581-8316-9a23af4371fb_1210x678.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!OsCK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedb78d71-dcd9-4581-8316-9a23af4371fb_1210x678.png 424w, https://substackcdn.com/image/fetch/$s_!OsCK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedb78d71-dcd9-4581-8316-9a23af4371fb_1210x678.png 848w, https://substackcdn.com/image/fetch/$s_!OsCK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedb78d71-dcd9-4581-8316-9a23af4371fb_1210x678.png 1272w, https://substackcdn.com/image/fetch/$s_!OsCK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fedb78d71-dcd9-4581-8316-9a23af4371fb_1210x678.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
style="height:20px;width:20px" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>OpenAI is the sole exception&#8212;and they only published their policy following public pressure over their restrictive non-disparagement clauses. Even then, none of the major AI companies publish effectiveness metrics or outcome data.</p><p><strong>This stands in stark contrast to other industries. Companies across sectors routinely publish whistleblowing policies&#8212;from AI-related organizations like ASML to industrial firms like Tata Steel to financial services companies. 
</strong>Many also publish regular effectiveness evaluations and outcome statistics.</p><p><strong>Conversations with Insiders Also Reveal Gaps:</strong></p><ul><li><p><strong>Employee Awareness</strong>: Interviews with current and former frontier AI company insiders show that many employees don't know, understand, or trust their companies' internal reporting systems. As one insider told us: </p></li></ul><blockquote><p><em><strong>&#8220;I'm not well-informed about our company's whistleblowing procedures (and it feels uncomfortable to inquire about them directly)."</strong></em></p></blockquote><ul><li><p><strong>Trust Deficit</strong>: AI employees suspect that making reports would be ineffective or could make their work lives more difficult. Another insider shared:</p></li></ul><blockquote><p><em><strong>"I anticipate that using official reporting channels would likely result in subtle, indirect consequences rather than overt retaliation like termination."</strong></em></p></blockquote><ul><li><p><strong>History of Retaliation</strong>: AI companies have attempted to suppress individuals voicing concerns (OpenAI's restrictive NDAs) and have faced cases around alleged wrongful termination for speaking up on research misconduct (Google).</p></li></ul><p>We also have good reason to believe that multiple companies' internal whistleblowing policies are currently in violation of the EU Whistleblowing Directive. 
If you are interested: <strong>Happy to provide details via DM.</strong></p><p>It <em>might </em>still be the case that certain systems are working relatively well today (for at least one organization in the set, we have an 'okay' impression based on conversations with individuals) - <strong>but the reality is that neither insiders nor we know.</strong></p><p><strong>Every insider we have spoken with to date supports the publication of whistleblowing policies.</strong></p><p>If you are an insider <strong>but</strong> you don't support publication - please reach out and share your thoughts with us (or comment below).</p><h4>4. Transparency Enables Verification and Improvement</h4><p>Without published policies &amp; outcome transparency, the public cannot assess whether internal systems actually protect employees who raise safety concerns.</p><p><strong>Employees </strong>cannot compare protections across companies when making career decisions.</p><p><strong>Policymakers </strong>cannot identify coverage gaps or craft appropriate regulations.</p><p><strong>Companies benefit</strong> from improved systems through public feedback and heightened employee awareness. Empirical evidence shows that there is a strong 'business case' for improved speak-up cultures and whistleblowing systems - from improved innovation to increased employee loyalty. This is why, for example, <a href="https://www.theverge.com/2021/4/6/22370177/google-whistleblower-protections-trillium-asset-management">shareholder representatives have called on Google to improve its whistleblowing systems</a>.</p><h4>5. This Information Vacuum Serves No Legitimate Purpose</h4><p>We are only calling for transparency:</p><blockquote><p><strong>This should create no major workload for companies. 
</strong></p></blockquote><p>If it does, that in itself suggests there was room for improvement.</p><p>Whistleblowing policies contain procedural frameworks and legal guarantees&#8212;<strong>not trade secrets or competitive advantages. There's no business case for secrecy, but substantial evidence for the benefits of transparency.</strong></p><blockquote><p><strong>If companies truly care about developing a strong speak-up culture and protecting those who live it: Publish. Your. Policies.</strong></p></blockquote><h3>What We're Asking For</h3><p>We're calling on AI companies to meet two levels of transparency [this is an excerpt - see the <a href="https://publishyourpolicies.org/">campaign page</a> for details]:</p><p><strong>Level 1: Policy Transparency</strong> (minimum baseline)</p><ul><li><p>Publish complete whistleblowing policy documents</p></li><li><p>Clearly define scope of protected individuals and covered wrongdoing</p></li><li><p>Outline reporting channels, investigation procedures, and timelines</p></li><li><p>Specify protection and support measures</p></li><li><p>Detail independence guarantees and implementation</p></li></ul><p><strong>Level 2: Effectiveness Transparency</strong> (what companies should strive for)</p><ul><li><p>Publish metrics on reports received, resolved, and outcomes</p></li><li><p>Share data on retaliation complaints and whistleblower satisfaction</p></li><li><p>Report on employee awareness, understanding, and trust levels</p></li><li><p>Document regular system reviews and improvements</p></li><li><p>Conduct and publish results of independent effectiveness audits</p></li></ul><p>Companies that take whistleblowing seriously should already gather this data for continuous improvement.</p><blockquote><p><strong>Publication is simply a matter of transparency.</strong></p></blockquote><h3>The Coalition</h3><p>This call is supported by a broad coalition of scholars, AI safety organizations, and whistleblowing advocacy 
groups:</p><p><strong>&#128279; Organizations:</strong></p><p><em><strong>Click partners' name below to visit their website or view their publications</strong></em></p><ul><li><p><a href="https://www.blueprintforfreespeech.net/">Blueprint for Free Speech</a></p></li><li><p><a href="https://www.centeraipolicy.org/">Center for AI Policy</a></p></li><li><p><a href="https://carma.org/">CARMA (Centre for AI Risk Management &amp; Alignment)</a></p></li><li><p><a href="https://www.convergenceanalysis.org/">Convergence Analysis</a></p></li><li><p><a href="https://encodeai.org/">Encode AI</a></p></li><li><p><a href="https://fathom.org/">Fathom</a></p></li><li><p><a href="https://whistleblower.org/">Government Accountability Project</a></p></li><li><p><a href="https://www.hrlc.org.au/">Human Rights Law Centre</a></p></li><li><p><a href="https://lasst.org/">LASST</a></p></li><li><p><a href="https://legalsafetylab.org/">Legal Safety Lab</a></p></li><li><p><a href="https://www.whistleblowers.org/">National Whistleblower Center</a></p></li><li><p><a href="https://psst.org/">Psst</a></p></li><li><p><a href="https://www.pourdemain.ngo/en">Pour Demain</a></p></li><li><p><a href="https://www.safer-ai.org/">Safer AI</a></p></li><li><p><a href="https://secureaiproject.org/">Secure AI Project</a></p></li><li><p><a href="https://futureoflife.org/">Future of Life Institute</a></p></li><li><p><a href="https://thefuturesociety.org/">The Future Society</a></p></li><li><p><a href="https://www.themidasproject.com/">The Midas Project</a></p></li><li><p><a href="https://thesignalsnetwork.org/">The Signals Network</a></p></li><li><p><a href="https://www.transparency.org/en">Transparency International</a></p></li><li><p><a href="https://whisper.exposefacts.org/">WHISPeR</a></p></li><li><p><a href="https://whisper.exposefacts.org/">Whistleblower Netzwerk</a></p></li><li><p><a href="https://whistleblower.law/">Whistleblower Partners LLP</a></p></li><li><p><a 
href="https://whistleblowingnetwork.org/Home">Whistleblowing International Network</a></p></li></ul><p><strong>&#128279; Academic Signatories:</strong></p><ul><li><p><strong><a href="https://ai-futures.org/">Daniel Kokotajlo</a>, </strong>AI Futures Project</p></li><li><p><strong><a href="https://pureportal.coventry.ac.uk/en/persons/dimitrios-kafteranis">Dimitrios Kafteranis</a></strong>, University of Coventry</p></li><li><p><strong><a href="https://www.linkedin.com/in/jessicahnewman/">Jessica Newman</a></strong>, AI Security Initiative, UC Berkeley</p></li><li><p><strong><a href="https://oid.wharton.upenn.edu/profile/kartikh/">Kartik Hosanagar</a></strong>, Wharton Business School</p></li><li><p><strong><a href="https://lessig.org/">Lawrence Lessig</a></strong>, Harvard Law School</p></li><li><p><strong><a href="https://www.cognitiverevolution.ai/about/">Nathan Labenz</a></strong>, Cognitive Revolution</p></li><li><p><strong><a href="https://www.law.uh.edu/faculty/main.asp?PID=6428">Peter Salib</a></strong>, University of Houston Law Center</p></li><li><p><strong><a href="https://en.wikipedia.org/wiki/Roman_Yampolskiy">Roman Yampolskiy</a></strong>, University of Louisville</p></li><li><p><strong><a href="https://www.uni-goettingen.de/en/principle+investigator/659527.html">Simon Gerdemann</a></strong>, University of Goettingen</p></li><li><p><strong><a href="https://vcresearch.berkeley.edu/faculty/stuart-russell">Stuart Russell</a></strong>, University of California, Berkeley</p></li><li><p><strong><a href="https://www.edhec.edu/en/research-and-faculty/faculty/professors-and-researchers/wim-vandekerckhove">Wim Vandekerckhove</a></strong>, EDHEC Business School</p></li></ul><h3>Moving Forward</h3><p>This campaign offers an opportunity for AI companies to demonstrate commitment to integrity cultures where flagging risks is a normal and expected responsibility.</p><p>We're not asking companies to reveal competitive secrets&#8212;we're asking them to show they're 
serious about the concern-raising systems they claim to have. Transparency costs nothing but builds everything.</p><blockquote><p><strong>The stakes are too high for "trust us" to be enough.</strong></p></blockquote><p>When AI companies publicly <a href="https://openai.com/index/planning-for-agi-and-beyond/">acknowledge</a> <a href="https://www.anthropic.com/news/core-views-on-ai-safety">existential</a> <a href="https://deepmind.google/public-policy/ai-summit-policies/">risks</a>, they must also demonstrate that employees can safely report concerns about those risks.</p><h3>What You Can Do</h3><p>If you believe our call is sensible and you are...</p><ol><li><p>An insider at an AI company: Ask your management why they are not publishing their policies. Share our call with them.</p></li><li><p>A leader of an AI company: You can lead the charge! A strong speak-up culture benefits your employees, your shareholders, and you (unless you'd prefer risks to stay hidden until it's too late). We can be in the same boat if you genuinely care about protecting those who speak up. <br>If you credibly commit to Level 2: We will commend you for it.</p></li><li><p>An outsider: Spread the word. Every share gets us closer to transparency and a world where insiders in AI can raise their concerns as they see them.<br>We might also announce a second round of signatories. 
Contact us if you would like to be on this list.</p></li></ol><p><strong>Join the campaign:</strong> <a href="https://aiwi.org/publishyourpolicies/">https://aiwi.org/publishyourpolicies/</a></p><p><strong>Contact:</strong> For questions or to add your organization's support, reach out through the campaign website.</p><p><em>This campaign is led by <a href="https://aiwi.org/">The AI Whistleblower Initiative</a> (AIWI, formerly OAISIS), an independent, nonpartisan, nonprofit organization supporting whistleblowers in AI.</em></p><div><hr></div><p><strong>Please find the original piece:</strong></p><p><strong>LESSWRONG: </strong><a href="https://www.lesswrong.com/posts/kH73v7rNrawjNRquc/call-on-ai-companies-publish-your-whistleblowing-policies#">Call on AI Companies: Publish Your Whistleblowing Policies</a></p>]]></content:encoded></item><item><title><![CDATA[Internal Warnings, Public Stakes: Strengthening AI Whistleblower Protections in the EU]]></title><description><![CDATA[A guest contribution by Jakub Kry&#347;, as part of the Talos Fellowship]]></description><link>https://aiwhistleblowerinitiative.substack.com/p/internal-warnings-public-stakes-strengthening</link><guid isPermaLink="false">https://aiwhistleblowerinitiative.substack.com/p/internal-warnings-public-stakes-strengthening</guid><dc:creator><![CDATA[Jakub Krys]]></dc:creator><pubDate>Sat, 02 Aug 2025 13:02:52 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/58322169-e0ce-464e-baa7-f1452940f276_1344x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Transparency Statement</strong>:</p><p>This publication represents the independent views of its author as a guest contributor. While AIWI (<em>formerly OAISIS</em>) provided feedback on the drafts, editorial independence was fully maintained. All opinions, assessments, and recommendations&#8212;including commentaries on AIWI or organizations featured in <em>INSIDE AI</em>&#8212;reflect solely the author's perspective.</p><div><hr></div><h1>Abstract</h1><p>Whistleblowers may play a crucial role in the governance of advanced artificial intelligence systems. Their importance is particularly acute in the context of risks that are difficult to detect from the outside, not yet captured by existing regulation, or that require significant technical expertise to evaluate. 
Within the European Union, while the EU AI Act and the Whistleblower Protection Directive offer a strong legal foundation, we identify important gaps in both the scope and implementation of these protections. In particular, risks arising from internal deployment may not clearly fall under the AI Act&#8217;s legal remit. Moreover, internal reporting channels are often missing or inadequate, and external authorities lack the staffing and expertise to reliably handle whistleblower disclosures. Civil society initiatives can help fill these gaps but remain informal and undersupported. We present a set of recommendations to strengthen the whistleblowing ecosystem in the EU, including reinforcing internal and external channels, formally engaging advisory NGOs, and establishing a dedicated mailbox within the EU AI Office.</p><h1>Executive summary</h1><p>The rapid pace of AI development has left regulation struggling to keep up. As with any emerging technology, it will be very challenging to preemptively cover all new sources of risk with a static legal framework. Combined with the fact that frontier AI systems are created by private companies rather than governments or academia, this means that there is a pressing need for greater transparency, understanding and accountability. In this work, we argue that AI whistleblowers could play a crucial role in achieving all these objectives. In particular, as insiders placed within such companies, they are in a prime position to warn against risks that are hard to detect from outside, difficult to assess without deep expertise, or are concerning, yet do not violate any existing laws.</p><p>Unfortunately, current legal protections in the EU are insufficient to fully leverage the potential of AI whistleblowers. The EU AI Act invokes the Whistleblower Protection Directive, which provides safeguards against retaliation for those reporting breaches of the Act. 
However, several classes of risks arising from pre-market internal deployment may fall outside the Act&#8217;s scope. These legal limitations are compounded by significant implementation gaps of the Directive: internal reporting channels are often missing or untrusted, external channels lack expertise and are chronically understaffed, and not all Member States guarantee anonymity or legal coverage for public disclosures. Moreover, certain whistleblowing support functions, such as third-party advisory organisations, fall outside the Directive&#8217;s narrow definition of protected entities.</p><p>To address the identified shortcomings, we propose several policy interventions. These include:</p><ul><li><p><strong>Enforce the establishment of trusted and anonymous internal reporting channels</strong>. Many insiders currently lack a safe and realistic pathway for escalating concerns within their organisations, making this the first necessary layer of protection.</p></li><li><p><strong>Strengthen and harmonise external reporting authorities</strong> <strong>across Member States</strong>. Current authorities are often fragmented, understaffed, and lack the technical expertise to handle frontier AI concerns.</p></li><li><p><strong>Engage and fund third-party advisory organisations</strong> <strong>to support whistleblowers</strong>. <br>Civil society actors already offer informal guidance, but should be further integrated into the ecosystem to help triage reports, reduce false positives, and build trust.</p></li><li><p><strong>Launch a dedicated whistleblower mailbox within the EU AI Office.</strong> <br>Crucially, the Office must publicly commit to offering the same legal and procedural safeguards as those guaranteed under the Whistleblower Protection Directive &#8211; including confidentiality, non-retaliation, and timely follow-up. 
Without this, whistleblowers may lose eligibility for protection under the Directive, particularly if no formal response is received.</p></li><li><p><strong>Adopt a principle-based regulatory approach</strong> <strong>in future whistleblower provisions</strong>.<br>This would ensure protection even for disclosures involving novel risks or conduct that violates the spirit, but not the letter, of existing law.</p></li><li><p><strong>Boost public awareness of whistleblower rights</strong>.<br>This could include mandatory training for employees, the creation of accessible guides explaining reporting procedures, and the publication of anonymised summary statistics on whistleblower cases in order to build trust in the system.</p></li></ul><p>The proposal explores not only the legislative shortfalls but also the broader landscape of whistleblowing, including second-order benefits (such as improved corporate safety culture), the expected low rate of malicious or false reports, and the valuable role of civil society actors offering confidential guidance. The overall message is simple: AI whistleblowers are positioned to surface risks that no audit, database, or system card can reliably uncover. But to do so, they need confidence that the system will listen and protect them when they speak. The EU has a solid legal foundation, but must act swiftly to operationalise these protections before the AI Act comes fully into force.</p><h1>Introduction</h1><p>Artificial Intelligence (AI) is set to become arguably the most transformative technology ever created. While it offers unprecedented opportunities across all aspects of our lives, it also comes with risks that are often grouped into three categories: systemic (e.g. job loss, misinformation), misuse (e.g. a malicious actor using AI for a sophisticated cyberattack) and misalignment (e.g. the AI system acting against its creators&#8217; intentions, either on purpose or by accident). 
Although many of these risks are not entirely novel &#8211; especially the first two categories &#8211; the scope and pace with which they can transpire pose a tremendous challenge for governance and policymaking. In fact, a phenomenon where the development of emerging technologies leaves regulation unable to keep up is known as the &#8216;<a href="https://www.mercatus.org/economic-insights/expert-commentary/pacing-problem-and-future-technology-regulation">pacing problem</a>&#8217;. This issue is expected to be particularly pronounced in the context of AI. As an example, the origins of the EU AI Act (AIA), which represents the most comprehensive set of regulations on General Purpose AI (GPAI) in the world, can be traced back to a <a href="https://digital-strategy.ec.europa.eu/en/consultations/white-paper-artificial-intelligence-european-approach-excellence-and-trust">white paper</a> from early 2020. In contrast, the last provision of the AIA is set to come into effect in August 2026. This represents a gap between GPT-3 level systems and AI agents that can autonomously perform complex <a href="https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/">multi-hour coding tasks</a>.</p><p>Another factor which exacerbates the gravity of the situation is that, barring nationalisation, Artificial General Intelligence (AGI) might add to a short list of truly revolutionary technologies whose development was not conducted, funded or at least heavily shaped by a government agency (cf. nuclear weapons, space technologies, radar, GPS, microprocessors)<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. 
As a result, regulators may lack comprehensive insight into AI system development &#8211; an input that is crucial to informed policymaking.</p><p>Overall, this suggests <strong>a pressing need for increasing transparency into the workings of AI companies and their development of frontier AI systems</strong>. A multitude of solutions have been considered, for example <a href="https://www.governance.ai/analysis/proposing-a-foundation-model-information-sharing-regime-for-the-uk">information sharing schemes</a>, <a href="https://cset.georgetown.edu/publication/ai-incidents-key-components-for-a-mandatory-reporting-regime/">incident reporting schemes</a>, <a href="https://www.iaps.ai/research/responsible-scaling">responsible scaling policies</a> that trigger certain actions when models cross pre-defined capability thresholds, <a href="https://arxiv.org/abs/2402.05160">system cards</a> that detail specific information about the model<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>, and requirements for AI labs to <a href="https://artificialintelligenceact.eu/article/71/">register their systems in a database</a>. However, such solutions typically suffer from at least three problems:</p><ul><li><p><strong>We cannot have full trust that proposals that are voluntary in nature will be upheld indefinitely.</strong> Unfortunately, AI companies have already <a href="https://nypost.com/2025/02/06/business/google-may-use-ai-for-weapons-surveillance-prompts-backlash/">reneged</a> on their prior declarations or at least <a href="https://ailabwatch.org/resources/integrity/">failed to fulfil</a> them. 
This problem can be expected to deepen as the stakes in the AI competition become higher.</p></li><li><p><strong>Even regulations that are legally binding are not infallible.</strong> Firstly, there are multiple cases of tech companies choosing to violate existing laws and trying to conceal it<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>. Moreover, laws may not cover the full range of risks, especially the ones that are yet to emerge. To future-proof regulation, legislators can frame their laws in terms of broad, <a href="https://arxiv.org/abs/2407.07300">high-level principles</a>. However, some effects can be very difficult to quantify or specify legally. For example, accusations of &#8216;gross mismanagement&#8217; or &#8216;failing to adhere to best practices&#8217; are challenging to prove in court and may take years.</p></li><li><p><strong>Even if an all-encompassing and precise set of rules could be established, their violation might not be unveiled until after disaster strikes.</strong> <a href="https://www.newsweek.com/boeing-whistleblower-richard-cuevas-says-problems-are-tip-iceberg-1920307">Case</a> <a href="https://www.propublica.org/article/memo-warning-signs-before-rig-explosion-were-mistake-to-have-ignored">studies</a> from various industries show that often actions clearly falling under &#8216;gross mismanagement&#8217; were evident well in advance, yet continued to be ignored. Due to the stark information asymmetry between AI companies and external observers, it is crucial to retain the ability to gather this information from the inside. We can think of it as a &#8216;<a href="https://righttowarn.ai/">right to warn</a>&#8217; the outside world. This is particularly important if we expect relevant parties to become more secretive in the future or the consequences of a violation to be particularly hard to mitigate. 
AI will likely satisfy both conditions.</p></li></ul><p>Altogether, it is clear that <strong>voluntary commitments, legally binding mandates and </strong><em><strong>post hoc </strong></em><strong>enforcement will not be sufficient to prevent AI-driven risks</strong>. In the rest of this work, <strong>we will argue that whistleblowers can help mitigate all three of these concerns and explain how we can protect their right to warn while ensuring privacy and fairness to their employers</strong>. In particular, we will focus our analysis on existing EU regulation, since the EU AI Act constitutes the first horizontal framework for governing AI.</p><h1>Role of whistleblowers in AI</h1><p>Whistleblowers have a <a href="https://en.wikipedia.org/wiki/List_of_whistleblowers">long history</a> spanning multiple industries and sectors. There are also good reasons to believe their importance will be higher than ever when it comes to monitoring and regulating the development of advanced AI systems. Firstly, due to the breakneck speed of AI progress, traditional channels of information sharing (such as system cards or external audits) often lag behind deployment. Insiders, by contrast, may be the only ones able to detect and flag issues as they unfold in real time.</p><p>Next, one key aspect which distinguishes AI from most other technologies is that advanced AI systems could be extremely dangerous even if not deployed to the general public. For example, a model that gains the ability to copy itself onto other servers and resist shutdown can lead to catastrophic consequences even if used purely in internal experimentation<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>. 
Indeed, risks from internal deployment have recently attracted <a href="https://arxiv.org/abs/2504.12170">growing</a> <a href="https://cset.georgetown.edu/wp-content/uploads/FRG-Call-for-Research-Ideas-Internal-Deployment.pdf">attention</a> from the AI safety community. Again, whistleblowers placed inside frontier AI labs could be our first line of defence against such novel risks from future AI systems.</p><p>Even with perfect insight into a GPAI developer&#8217;s operations, legislators may still struggle to decide whether a given activity should be prohibited. This is simply due to the very high level of expertise required to monitor cutting-edge AI development &#8211; much of which originates in the private sector rather than academia. To illustrate the difference, take the following two examples. If a developer were to <a href="https://artificialintelligenceact.eu/annex/12/">disclose their training dataset</a> and it turned out that they mismanaged <a href="https://artificialintelligenceact.eu/article/10/">personally identifiable information</a> or <a href="https://artificialintelligenceact.eu/recital/105/">copyrighted materials</a>, this constitutes a clear violation of the AIA. As a counterexample, we can consider the recent rise of reasoning models that take their time to think through the question before returning the final answer. Most such models &#8216;think&#8217; in tokens, that is, they use language understandable to humans. However, models can also think using their own internal <a href="https://arxiv.org/abs/2412.06769">representations of tokens</a> in a way that we cannot comprehend. While this has been shown to often outperform human-understandable reasoning, it completely sacrifices our ability to interpret the model&#8217;s thought process. For this reason, <a href="https://www.lesswrong.com/posts/D2Aa25eaEhdBNeEEy/worries-about-latent-reasoning-in-llms#What_can_we_do_">some have advocated</a> that this technique should be banned. 
Needless to say, it will be difficult for policymakers or external auditors to make such decisions at the cutting edge of AI developments. Whistleblowers, as insiders embedded in this development, can offer not only <em>information</em>, but also the <em>context</em> needed to make these decisions.</p><p>Overall, whistleblowers are uniquely positioned to warn about issues that <strong>(i) are difficult to detect from the outside, (ii) are concerning in ways not yet captured by existing regulation, or (iii) require significant tacit knowledge to assess</strong>. This suggests that we should seriously consider what kinds of protections and support structures they need, so we do not miss out on their value for AI governance.</p><h1>Existing whistleblower protections</h1><h2>Overview</h2><p>AI companies have been <a href="https://www.lesswrong.com/posts/ASzyQrpGQsj7Moijk/openai-exodus">accused of stifling criticism</a> and forcing their employees to sign <a href="https://www.lesswrong.com/posts/yRWv5kkDD4YhzwRLq/non-disparagement-canaries-for-openai">restrictive non-disparagement agreements</a><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a>. Those who refuse to sign them risk <a href="https://www.vox.com/future-perfect/351132/openai-vested-equity-nda-sam-altman-documents-employees">losing their vested equity</a>, which can normally be monetised and may constitute the <a href="https://www.lesswrong.com/posts/kovCotfpTFWFXaxwi/simeon_c-s-shortform">overwhelming majority</a> of a person&#8217;s income (on top of their regular salary). Apart from financial pressure, the prospect of being sued by powerful corporations with entire legal teams and endless resources is more than enough to deter employees from raising their concerns. 
Moreover, in an industry where top talent is concentrated in just a few tech hubs such as London and Silicon Valley, many fear that becoming known as a whistleblower could jeopardise future opportunities &#8211; even if recent cases suggest this risk may be lower than assumed.</p><p>Before focusing on EU regulation, it is worth briefly examining the US context as a point of comparison. After all, most frontier labs are headquartered there and transatlantic enforcement challenges may arise. While legal protections for whistleblowers against prosecution or retaliation do exist in the US, a quick review suffices to conclude that they are likely insufficient in the context of the AI industry<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a>. For example, the <a href="https://www.congress.gov/crs-product/R48318">Whistleblower Protection Act</a> applies to &#8216;a violation of any law, rule, or regulation&#8217; or &#8216;a substantial and specific danger to public health or safety.&#8217; As explained above, the former clause is not enough in the context of a pre-paradigmatic regulatory regime such as AI. The latter is more promising as it could be argued that releasing (or even training) a misaligned model constitutes a substantial danger to public safety. However, this Act applies only to federal workers in the executive branch. The US Department of Labor has its own <a href="https://www.dol.gov/general/topics/whistleblower">protections</a>, yet the categories they cover are not immediately relevant to the concerns about AI we articulated earlier: they cover issues such as employee safety, overtime pay, lie detector testing or mine hazards. 
Finally, we note that various AI-related whistleblower protections may be introduced at the <a href="https://oaisis.substack.com/p/some-thoughts-on-sb-53-californias">state</a> <a href="https://michellenie.substack.com/p/whats-new-with-ai-whistleblowers">level</a>, but their details are still being worked out.</p><h2>European Union</h2><p>EU legislation has arguably the most forward-looking provisions for AI whistleblowers. This is because the AIA itself invokes the European <a href="https://eur-lex.europa.eu/eli/dir/2019/1937/oj/eng">Whistleblower Protection Directive</a> in its <a href="https://artificialintelligenceact.eu/article/87/">Article 87</a>:<br><br><em>&#8220;Directive (EU) 2019/1937 shall apply to the reporting of infringements of this Regulation and the protection of persons reporting such infringements.&#8221;</em></p><p>Thus, a whistleblower wishing to report a violation of any part of the AIA benefits from the protections afforded by this Directive. This also implies that the usefulness of AI whistleblowing in the EU is limited by the same factors that limit the AIA as a piece of legislation (discussed <a href="https://docs.google.com/document/d/1E2H9nLpFBw_dxnlQg2DfP0dFdXSX5_HAdW1R0dlZIPs/edit?pli=1&amp;tab=t.0#heading=h.rvx6e73x0j10">above</a>). In particular, <strong>whistleblowers are not protected in cases of disclosing circumstances that are serious, but do not yet constitute a clear violation of the AIA</strong>. This statement in itself is not necessarily alarming. After all, workers should not be permitted to reveal arbitrary details of their companies if they pertain to activities that are legally allowed. Nonetheless, this approach might not be enough in the context of advanced AI systems, where a large number of &#8216;threat vectors&#8217; has not emerged yet and regulation will likely struggle to keep up<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a>. 
One particular example we highlighted in the previous section is that of risks from internal deployment. Currently, it is not clear to us whether the AIA covers all such scenarios<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a>. Article 2(8) of the AIA states (emphasis ours<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a>):</p><p><em>&#8220;This Regulation does not apply to <strong>any</strong> research, testing or development activity regarding AI systems or AI models prior to their being placed on the market or put into service. Such activities shall be conducted in accordance with applicable Union law. Testing in real world conditions shall not be covered by that exclusion.&#8221;</em></p><p>It is important to point out that <strong>the Whistleblower Protection Directive applies not only to past breaches of regulation, but also to potential breaches where there are reasonable grounds for suspicion</strong>. This preemptive clause is promising in the context of high-stakes situations that might occur when dealing with advanced AI systems. 
Article 5(2) of the Directive states that:<br><br><em>&#8220; &#8216;information on breaches&#8217; means information, including reasonable suspicions, about actual or potential breaches, which occurred or are very likely to occur in the organisation in which the reporting person works or has worked or in another organisation with which the reporting person is or was in contact through his or her work, and about attempts to conceal such breaches&#8221;</em></p><p>Article 6(1) further specifies that whistleblowers will be covered under the Directive if (emphasis ours):</p><p><em>&#8220;(a) they had reasonable grounds to believe that the information on breaches reported was true at the time of reporting and that such information fell within the scope of this Directive; <strong>and</strong></em></p><p><em>(b) they reported either internally in accordance with Article 7 or externally in accordance with Article 10, or made a public disclosure in accordance with Article 15.&#8221;</em></p><p>The fact that <em>both</em> conditions need to be satisfied warrants further scrutiny of part <em>(b).</em></p><h3>Internal reporting</h3><p>Some AI companies have indicated their voluntary commitments to establishing internal channels for whistleblowers. During the first AI Safety Summit in Bletchley, Anthropic&#8217;s CEO Dario Amodei said that:<br><br><em>&#8220;On the operational side, we will put in place a whistleblower policy before we reach ASL-3 and already have an officer responsible for ensuring compliance with the RSP and reporting to our Long Term Benefit Trust. 
As risk increases, we expect that stronger forms of accountability will be necessary.&#8221;<br><br></em>Anthropic&#8217;s updated <a href="https://assets.anthropic.com/m/24a47b00f10301cd/original/Anthropic-Responsible-Scaling-Policy-2024-10-15.pdf">Responsible Scaling Policy</a> from October 2024 includes two relevant clauses:<br><br><em>&#8220;Noncompliance: We will maintain a process through which Anthropic staff may anonymously notify the Responsible Scaling Officer of any potential instances of noncompliance with this policy. We will also establish a policy governing noncompliance reporting, which will (1) protect reporters from retaliation and (2) set forth a mechanism for escalating reports to one or more members of the Board of Directors in cases where the report relates to conduct of the Responsible Scaling Officer. Further, we will track and investigate any reported or otherwise identified potential instances of noncompliance with this policy. Where reports are substantiated, we will take appropriate and proportional corrective action and document the same. The Responsible Scaling Officer will regularly update the Board of Directors on substantial cases of noncompliance and overall trends.&#8221;</em></p><p>and:</p><p><em>&#8220;Employee agreements: We will not impose contractual non-disparagement obligations on employees, candidates, or former employees in a way that could impede or discourage them from publicly raising safety concerns about Anthropic. If we offer agreements with a non-disparagement clause, that clause will not preclude raising safety concerns, nor will it preclude disclosure of the existence of that clause&#8221;</em></p><p>The ambiguous language (&#8216;will maintain&#8217;, &#8216;will not impose&#8217;) makes it unclear whether Anthropic has already implemented these steps or merely plans to. 
Nonetheless, we welcome these commitments and hope other AI labs follow suit<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a>. Ideally, <strong>the full text of companies&#8217; whistleblowing policies should be made public</strong> <strong>so that (i) AI companies can draw on each other&#8217;s practices</strong><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a><strong>, (ii) their content can be scrutinised by third-party reviewers, and (iii) the risk of backtracking on previous commitments is reduced if they are openly available</strong><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a>. Guidelines on establishing <a href="https://www.transparency.org/en/publications/internal-whistleblowing-systems-self-assessment-framework-public-private-organisations">internal</a> <a href="https://committee.iso.org/sites/tc309/home/projects/published/iso-37002-whistleblowing-managem.html">reporting</a> <a href="https://iccwbo.org/news-publications/policies-reports/icc-2022-guidelines-on-whistleblowing/">channels</a> already exist and could serve as a starting point, perhaps with some modifications to adapt them to the AI context<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a>.</p><p>We stress the utmost importance of the <em>anonymity </em>of such reporting channels &#8211; lack of anonymity has consistently been listed as one of the <a href="https://www.transparency.org/en/blog/anonymous-whistleblower-reports-support-civil-courage">biggest factors</a> preventing whistleblowers from raising concerns within their organisations. 
<strong>Unfortunately, while the Whistleblower Protection Directive grants full rights to those reporting anonymously, it leaves it up to individual Member States to decide whether companies and authorities must accept and follow up on anonymous reports.</strong> In reality, <a href="https://www.transparency.org/en/publications/how-well-do-eu-countries-protect-whistleblowers-speakup">few countries</a> mandate this within their national laws.</p><h3>External reporting</h3><p>In terms of external reporting channels, the Directive mandates that each Member State establish a competent authority to handle the receipt, processing and communication of cases related to whistleblowers&#8217; disclosures<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a>. However, in contrast to &#8216;regulations&#8217; such as the AIA, &#8216;directives&#8217; require a so-called transposition from the EU level to the national level. They usually specify the end goal to be achieved and leave more room for interpretation regarding the intermediate steps. Just as with anonymity requirements, this means that the precise implementation can vary among Member States, with unequal provisions written into national laws and varied adoption timelines. 
Indeed, a 2024 <a href="https://commission.europa.eu/aid-development-cooperation-fundamental-rights/your-fundamental-rights-eu/protection-whistleblowers_en">report on the transposition of the Whistleblower Protection Directive</a> finds that<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-15" href="#footnote-15" target="_self">15</a>:<br><br><em>&#8220;All Member States have transposed the Directive&#8217;s main provisions, but the transposition needs to be improved on certain key areas, such as the material scope, the conditions for protection and the measures of protection against retaliation, in particular the exemptions from liability and the penalties. Moreover, the Commission regrets the overall very late transposition of the Directive.&#8221;</em></p><p>These comments are alarming in the context of rapid progress in AI and <strong>call for a more unified approach among all Member States</strong>. <strong>Relevant national authorities are often critically understaffed and lack expertise and independence</strong><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-16" href="#footnote-16" target="_self">16</a>. This can further decrease trust in action being taken and raise fears of retaliation, discouraging whistleblowers from speaking up.</p><h3>Public disclosure</h3><p>Lastly, in terms of public disclosures, Article 15 of the Directive states that internal and external reporting can be skipped if the whistleblower has reasons to fear retaliation, has doubts about the credibility and speed of these two channels, or if <em>&#8216;the breach may constitute an imminent or manifest danger to the public interest, such as where there is an emergency situation or a risk of irreversible damage&#8217;</em>. 
Considering the lack of official internal reporting channels in AI labs, and the inconsistency in national-level external channels, this is promising, albeit with the caveats presented in the footnote above. Unfortunately, in at least one case transposition into national law expressly <a href="https://transparency.hu/wp-content/uploads/2024/01/K-Monitor_Transparency-Int-HU_letter_to_COM_on_transposition_of_whistleblower_directive_21122023.pdf">does not cover</a> public disclosures, which constitutes a clear violation of the Directive.</p><h1>Analysis</h1><p>Overall, our analysis of existing provisions for whistleblower protections in the EU paints a picture of strong foundations, yet with key shortcomings that will be crucial to address if we wish to make the most of AI whistleblowers. Before suggesting concrete policy recommendations, we first discuss several aspects that are useful to bear in mind.</p><h2>Rule-based and principle-based regulation</h2><p>A common distinction made in policymaking is that between <a href="https://arxiv.org/abs/2407.07300">rule-based and principle-based regulation</a>. The former type is more detailed and &#8216;low-level&#8217;, leaving less room for interpretation and ensuring that companies have clearly defined boundaries for their operation. However, such rules can potentially be &#8216;gamed&#8217; (i.e. satisfied according to the letter of the law, while still undermining its intended purpose)<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-17" href="#footnote-17" target="_self">17</a> and might not be able to keep up in fast-moving areas such as AI. 
In contrast, using wide-ranging, high-level principles prevents these problems, but can introduce an additional burden in terms of verifying compliance, since both companies and regulators need to analyse carefully which actions adhere to these principles.</p><p>Rule-based regulation makes whistleblower claims easier to verify. However, it cannot address one of the most serious concerns: the need to prevent emerging threats that existing law has yet to capture. If whistleblowers are meant to bring such risks to the surface, they must be allowed to uncover violations of the <em>spirit</em> of the law. As an example, let us take the case of evaluating models for their capabilities of creating cyber attacks. The AIA does in fact have transparency requirements which specify what sort of evaluation information needs to be reported. <a href="https://artificialintelligenceact.eu/annex/11/">Annex XI</a> states that this includes:</p><p><em>&#8220;A detailed description of the evaluation strategies, including evaluation results, on the basis of available public evaluation protocols and tools or otherwise of other evaluation methodologies. Evaluation strategies shall include evaluation criteria, metrics and the methodology on the identification of limitations.&#8221;</em></p><p>Nonetheless, it is a well-known fact that evaluations can only provide a <a href="https://techgov.intelligence.org/research/what-ai-evaluations-for-preventing-catastrophic-risks-can-and-cannot-do">lower bound</a> on a model&#8217;s capabilities. The exact score is <a href="https://www.lesswrong.com/posts/z8zPL2hBqTmx7Kf6J/frontiermath-score-of-o3-mini-much-lower-than-claimed">heavily dependent</a> on the &#8216;scaffolding&#8217; used, i.e. whether the model has access to tools such as a calculator or Python code, or how much time a reasoning model is allowed to spend thinking. 
A developer afraid of their model being banned can deliberately perform only a weak elicitation of the capabilities during testing, such that the officially reported score on cyber threats falls below an acceptable threshold. They would then argue that this level of elicitation is typical in the industry. If a whistleblower has reasonable grounds to believe that stronger scaffolding will be readily available to the user<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-18" href="#footnote-18" target="_self">18</a>, and that this scaffolding will elicit cyber capabilities above the threshold, they should have the right to voice the concern that the reported evaluation strategy was insufficient<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-19" href="#footnote-19" target="_self">19</a>. Thus, <strong>it seems that the usefulness of whistleblowers will be maximised if we adopt a principle-based approach to regulation.</strong></p><h2>Civil society organisations</h2><p>It is worth pointing out that <a href="https://whistleblowingnetwork.org/Membership/Our-Members">several organisations</a> with direct relevance to informing, empowering and advocating for whistleblowers already exist within civil society. For example, <a href="https://oaisis.substack.com/about">AIWI</a> (formerly <em>OAISIS</em>), which functions as part of the wider German <a href="https://whistleblower-net.de/">Whistleblower Network</a>, is a project dedicated to individuals working at the frontier of AI development. In particular, they offer a free advisory service called <a href="https://third-opinion.org/">Third Opinion</a> whose goal is to provide clarity on whether a given practice observed by the employee is concerning or not. 
The process involves submitting a question to their online platform, after which Third Opinion assembles a panel of experts who seek clarifications from the advisee and submit their views on the level of concern regarding this practice. The whole procedure is handled confidentially using tools widely adopted in the whistleblower world: self-hosting an instance of the <a href="https://www.globaleaks.org/">GlobaLeaks</a> platform, using privacy-optimised operating systems such as <a href="https://www.qubes-os.org/">Qubes OS</a> or <a href="https://tails.net/">Tails</a>, and requiring advisees to submit their questions through the <a href="https://www.torproject.org/">Tor browser</a>.</p><p>However, it must be remembered that Third Opinion is not in itself a whistleblower service. They do not forward the received information further through any internal, external or public channels. They also do not have the authority (nor the means) to offer financial or legal protections to the users of their service. Again, Third Opinion can only provide clarity on whether a given observation is concerning or not, yet it is still the advisee&#8217;s choice whether to escalate and report their concern through appropriate channels. Moreover, a potentially concerning gap in current legislation is that while the Whistleblower Protection Directive covers &#8216;facilitators&#8217;, these are defined in Article 5 as:<br><br><em>&#8220; &#8216;facilitator&#8217; means a natural person who assists a reporting person in the reporting process in a work-related context, and whose assistance should be confidential;&#8221;</em></p><p>Therefore, <strong>we are uncertain whether in practice courts would extend the protections of the Directive to third-party support organisations.</strong></p><p>Overall, we see very high value in engaging with such civil society initiatives in the context of building robust whistleblowing channels. They could serve as the first step in a two-tier system. 
Initially, a concerned individual uses services like Third Opinion to confirm whether their observation warrants concern. Then, they may report the claim internally within their organisations, externally to national authorities, or &#8211; under certain conditions &#8211; make a public disclosure as permitted by the Directive. This first stage can give confidence to potential whistleblowers, inform them of existing protections and give guidance on how to proceed further. As argued in the next section, it would also have the useful side effect of minimising the probability that a whistleblower&#8217;s claim turns out to be mistaken. Overall, <strong>we believe that the EU should either engage with such organisations by fostering their growth in civil society or alternatively replicate their function through a separate body managed at the Union level.</strong></p><p>Finally, we point out that another non-profit organisation, The Signals Network, already offers clear and <a href="https://thesignalsnetwork.org/twh/">detailed guides</a> on whistleblowing procedures and protections available in the US, UK and the Republic of Ireland, albeit not specifically in the context of AI. Such guides could be updated with the content of the AIA and extended to cover Member States other than Ireland.</p><h2>Are false claims going to be a problem?</h2><p>A natural concern regarding extensive whistleblower protection schemes is that they may allow for careless or false disclosures. This would place an unjustified burden on the companies and potentially leak their sensitive information to competitors. In practice, however, we argue that this effect is very unlikely to materialise.</p><p>Broadly speaking, false claims made by whistleblowers can fall into two categories: malicious and accidental. 
<strong>Experts we spoke to agreed that the risk of malicious false disclosures is low.</strong> This is simply because whistleblowers, even if granted legal protections, are still placed in a very disadvantageous and vulnerable position as compared to the organisations they blow the whistle on. They risk ostracism from their industry (where top AI talent is highly concentrated), lengthy legal battles, daunting media attention and even threats to their personal safety<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-20" href="#footnote-20" target="_self">20</a>. Thus, in reality people who become whistleblowers need to have a very high degree of confidence in their claims and truly believe that disclosing them is in the public interest. The incentives against false accusations are extremely strong.</p><p>Preventing accidental false claims is somewhat more difficult, but far from impossible. Naturally, not every insider at an AI lab is going to be an expert on the issue they find concerning. For example, a technical ML researcher without experience in organisational risk management might become convinced that a certain practice at their company is worrying and should be disclosed. To help them decide whether what they are witnessing is really a cause for concern, we can use external committees composed of topic experts to provide independent reviews. This process should be arranged in a confidential manner and be free of charge. It seems that civil society services such as Third Opinion could already play this role &#8211; while they themselves cannot offer legally binding protections, they could give whistleblowers sufficient confidence that their case warrants escalation to the authorities and would be eligible for these protections. <strong>Moreover, accidental false claims can even be perceived as desirable, especially in the early stages of rolling out new regulation. 
This is because they could help highlight parts of the regulation which are not clear and require further specification, thereby providing useful feedback to the regulator.</strong></p><p>Finally, we note that the Whistleblower Protection Directive already includes a clause protecting companies from false accusations and enabling appropriate compensation:</p><p><em>&#8220;Member States shall provide for effective, proportionate and dissuasive penalties applicable in respect of reporting persons where it is established that they knowingly reported or publicly disclosed false information. Member States shall also provide for measures for compensating damage resulting from such reporting or public disclosures in accordance with national law.&#8221;</em></p><p>Moreover, an associated Recital clarifies that &#8216;reasonable grounds&#8217; do not include &#8216;<em>information which is already fully available in the public domain or unsubstantiated rumours and hearsay.</em>&#8217;</p><h2>Second-order effects from whistleblowing</h2><p>The usefulness of whistleblowers&#8217; revelations extends beyond breaches of a particular law. Indeed, in the context of the AI industry, where a regulatory paradigm has not been established yet (possibly with the exception of the EU), we see a very large part of its value in so-called second-order effects. 
<strong>Even if such revelations are not intended to warn us of an impending danger to public safety, they can nonetheless exert strong pressure on companies to adopt better safety practices and culture.</strong></p><p>As a striking example, after a series of <a href="https://www.lesswrong.com/posts/ASzyQrpGQsj7Moijk/openai-exodus">high-profile departures</a> from OpenAI in 2024, it was revealed that employees had been <a href="https://www.vox.com/future-perfect/2024/5/17/24158478/openai-departures-sam-altman-employees-chatgpt-release">forced to sign</a> very restrictive non-disparagement agreements that prevented them from ever criticising the company or even mentioning the existence of these agreements. Some employees refused to sign them at the cost of losing their equity, which often amounted to several times their regular salary. The fact that these documents were surfaced and <a href="https://www.vox.com/future-perfect/351132/openai-vested-equity-nda-sam-altman-documents-employees">handed over</a> to a news outlet means that the insiders broke their contracts and could have been retaliated against. After all, restrictive non-disparagement agreements, while controversial, are not illegal<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-21" href="#footnote-21" target="_self">21</a><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-22" href="#footnote-22" target="_self">22</a>. Nonetheless, the resultant public backlash was so large that it prompted a <a href="https://x.com/sama/status/1791936857594581428">swift apology</a> from OpenAI&#8217;s CEO, who stated that the company would remove these clauses from their exit agreements.</p><p>This example is a strong indication that whistleblower protections should &#8211; under reasonable circumstances &#8211; be extended beyond breaches of existing laws. 
As a result of such second-order effects, companies may feel pressured to improve their safety standards, as not doing so paints them in a bad light and could negatively affect their user base or potential future contracts.</p><p>Another important second-order effect is that <strong>robust internal reporting schemes can be of direct benefit to the companies themselves.</strong> Fostering a culture where speaking out is the norm, not the exception, can help highlight a variety of issues before they escalate into major crises. Needless to say, such a work environment also improves employee satisfaction and loyalty &#8211; when staff trust that their concerns will be taken seriously, they are less likely to bypass internal channels and seek external assistance.</p><h1>Recommendations</h1><p>In this section, we present several policy proposals to take this discussion forward.</p><p><strong>Enforce the establishment of internal reporting channels within frontier AI labs whose activities may pose serious risk to public safety</strong></p><p>This applies first and foremost to providers and deployers of frontier GPAI, though not exclusively. For example, employees of companies providing computer vision models or biological sequence models should also have a straightforward, confidential and transparent pathway to reporting their concerns, as these systems have a high potential to fall under the &#8216;unacceptable&#8217; or &#8216;high risk&#8217; categories of the AIA. The establishment of such channels is already mandatory under the Whistleblower Protection Directive for companies with over 50 employees; however, we lack evidence that these requirements have been implemented effectively or in a timely manner &#8211; if at all. 
The same requirements should also be considered for companies with fewer than 50 employees.</p><p>Article 9 of the Directive lays out minimum design standards for internal channels; these should be supplemented by <a href="https://www.transparency.org/en/publications/internal-whistleblowing-systems-self-assessment-framework-public-private-organisations">guidelines</a> <a href="https://committee.iso.org/sites/tc309/home/projects/published/iso-37002-whistleblowing-managem.html">drawn</a> from the expertise of civil society organisations. In particular, we emphasise the need for anonymity &#8211; companies should be mandated to accept and follow up on anonymous reporting, something that is currently left up to each Member State to decide. Furthermore, it is essential that such bodies operate independently from company leadership, both legally and in practice. We also believe that encouraging companies to openly publish their internal reporting procedures would raise collective safety standards, improve accountability, and increase trust in action being taken. In addition, organisations should be required to demonstrate the effectiveness of their whistleblower education initiatives, for example by regularly testing employee comprehension of their rights and available reporting channels.</p><p><strong>Ensure external reporting channels across Member States have adequate resources and technical expertise</strong></p><p>In many cases, whistleblowers choose to raise their concerns outside of their organisations, mainly due to a lack of trust in internal mechanisms and fears of retaliation. While the establishment of such external channels is also mandated by the Directive, in practice they are often severely understaffed and lack the expertise necessary to investigate disclosures. This is a clear barrier to whistleblowers coming forward with information. 
Member States should ensure that sufficient funding is allocated to their whistleblowing channels and specify which authority is responsible for which provisions of the AI Act, particularly when multiple regulatory bodies are involved. We recommend that Member States publish clear, public-facing frameworks that map specific articles of the AI Act to the relevant authority or enforcement body, including instructions on how disclosures can be submitted. Furthermore, these bodies must be equipped with the technical know-how required to assess AI-specific claims. Failure to do so would risk procedural bottlenecks, delayed investigations, or inconsistent treatment of similar cases across jurisdictions.</p><p>Leaving too much room for legal interpretation to Member States also creates a risk of a &#8216;regulatory race to the bottom&#8217; &#8211; individual states may be incentivised to limit whistleblower protections in order to avoid driving away companies from their territories. To prevent this effect, the European Commission should issue common guidance on minimum standards for national reporting channels. This should include guarantees for whistleblower anonymity, timelines for responding to disclosures, and expectations around publishing anonymised summary statistics. Critically, implementation must be completed before August 2026, when the AI Act enters into full force and will require effective national enforcement. Without functioning external reporting structures, Member States will not be able to fulfil their obligations under the Act, nor will they be able to act on the most time-sensitive and high-stakes whistleblower reports.</p><p><strong>Engage with and fund civil society organisations that serve as the first step in the whistleblowing ladder</strong></p><p>Multiple non-profit organisations dedicated to advising potential whistleblowers already exist. 
These groups have deep expertise in the concerns whistleblowers face, the tactics used to silence or retaliate against them, and the practicalities of secure, confidential communication. They could be formally integrated into the EU&#8217;s broader whistleblower protection framework as an initial point of contact, fulfilling three key functions: (i) advising individuals on whether their concerns are warranted, (ii) informing them of existing protections, and (iii) guiding them on how to escalate their disclosures through official channels. Involving third-party organisations in this way could also reduce the rate of &#8216;accidental false positives&#8217; &#8211; claims that were made in good faith but ultimately turned out to be unfounded due to the whistleblower&#8217;s limited expertise or legal complexities.</p><p>Aside from fostering and funding such organisations in the civil society, the Union should also consider replicating some of their functionality as part of the AI Office. In either case, we do not recommend automatically escalating reports from support organisations to courts or authorities. Such automatic escalation could discourage whistleblowers from seeking advice in the first place. Moreover, it could increase the risk of false reporting &#8211; a malicious party could submit a fake claim anonymously, knowing that it will automatically lead to legal action. If the responsibility of taking action lies with the claimant, there is no incentive to lie to advisory organisations.</p><p>Finally, we point out that these organisations do not seem to be currently covered under the Whistleblower Protection Directive, which defines facilitators as <em>&#8216;a natural person who assists a reporting person in the reporting process in a work-related context&#8217;</em>. 
This definition should be extended to include external support organisations.</p><p><strong>Establish a dedicated whistleblower mailbox within the EU AI Office</strong><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-23" href="#footnote-23" target="_self">23</a></p><p>Once a potential whistleblower gains confidence that what they observed should be reported, they need to have a clear pathway to do so. Unfortunately, internal channels are still missing (or may be untrusted), while national authorities are often underfunded and lack relevant expertise. Therefore, we recommend introducing an anonymous whistleblower mailbox directly as part of the EU AI Office. The European Commission already maintains several such mailboxes, for example in the context of <a href="https://competition-policy.ec.europa.eu/index/whistleblower_en">antitrust</a>, <a href="https://fns.olaf.europa.eu/">anti-fraud</a> and <a href="https://finance.ec.europa.eu/eu-and-world/sanctions-restrictive-measures/overview-sanctions-and-related-resources/eu-sanctions-whistleblower-tool_en">sanction</a> violations. As the official enforcer of the AIA, the AI Office is uniquely placed to establish an AI-specific mailbox, launch investigations and impose any resultant penalties.</p><p>In addition to providing advice and clarifications of the AIA, this mailbox could also serve as a first point of contact for informal queries, while offering the option to file a formal complaint that may lead to a follow-up investigation. It might be tempting to require greater depth of reported information prior to launching an investigation; however, we do not recommend this. Preliminary survey results by AIWI show that insiders will likely be extremely uncertain about whether their concerns are valid and fall within the scope of the AI Act. 
Such uncertainty should not disqualify a report from being taken seriously, but rather be seen as a normal feature of complex, high-stakes situations. We therefore want to reduce barriers to reporting at the AI Office level as much as possible. For this reason, we also strongly advise against requiring whistleblowers to identify themselves.</p><p>It is crucial that the mailbox be appropriately staffed, both to handle the required volume of cases and to signal its professionalism to potential whistleblowers. For reference, the antitrust mailbox <a href="https://www.waselius.fi/news/2023/08/eu-competition-newsletter-2-2023/">receives</a> around 100 notices per year, but is relevant to millions of EU employees, in contrast to a much smaller number of AI-relevant employees. This suggests that it might be possible to avoid overwhelming the inbox without imposing overly strict reporting standards that could intimidate potential whistleblowers.</p><p>In addition to processing individual disclosures, the AI Office should also play a coordinating role across the broader whistleblower ecosystem. In cases where it lacks the jurisdiction or capacity to follow up directly, the Office could serve as a central recipient that forwards reports to the appropriate national authority, while maintaining oversight of the overall process. This would help prevent cases from being lost between institutions or subjected to uneven treatment across Member States. To further support transparency and institutional learning, the AI Office should also publish periodic reports on the types of issues raised through the mailbox, including aggregated statistics and anonymised case summaries. 
Doing so would not only build trust in the mechanism itself, but also contribute to a clearer understanding of where safety concerns tend to cluster within the AI development pipeline.</p><p>As the AI Office will function outside of the national reporting channels covered by the Whistleblower Protection Directive, it is essential that it publicly clarify the standards it will uphold when receiving disclosures. In particular, the Office should commit to offering the same procedural and legal safeguards as those guaranteed under the Directive &#8211; including confidentiality, timely follow-up, and protection against retaliation. Without such commitments, whistleblowers who report through the AI Office may risk losing eligibility for public disclosure under Article 15 of the Directive, especially if they receive no formal response. Clarifying this institutional role would help build trust in the mailbox and reduce the uncertainty currently faced by would-be whistleblowers.</p><p>Overall, we recommend that this mailbox be operational by early 2026, so that it can begin supporting informal queries and infrastructure coordination ahead of the AI Act&#8217;s full enforcement in August 2026. While its formal role under the Whistleblower Protection Directive will only apply from that point onward, early deployment would enable smoother uptake, clearer institutional responsibilities, and a stronger baseline for future evaluation.</p><p><strong>Follow the principle-based approach to policymaking when implementing future provisions relevant to whistleblower protections<br></strong>One of the main takeaways of our work is that the &#8216;pacing problem&#8217; (regulation unable to keep up with what it is trying to regulate) is likely to be particularly pronounced in the AI industry. New developments are emerging at breakneck speed and bring novel risks that simply cannot be preemptively covered by fine-grained rules. 
Therefore, we recommend that future whistleblower protections be implemented according to overarching principles that follow the <em>spirit</em> of the law. At the same time, we also highlight the danger of moving towards principles that are too broad and leave too much room for interpretation. Indeed, vague and unenforceable language was one of the key concerns highlighted by a <a href="https://drive.google.com/file/d/1TYir2-Q0OnYegKJofT5_xKm7RkZYDtwD/view">roundtable</a> on the Code of Practice. Striking the right balance between the two approaches will be crucial.</p><p><strong>Promote public awareness and understanding of the available legal protections within the EU</strong><br>The unfortunate reality of whistleblowing is that the majority of people who wish to come forward with important information will not do so &#8211; out of fear, doubts about whether they are correct, and unfamiliarity with the available protections. We thus recommend creating an extensive set of materials to raise awareness of the legal frameworks put in place in the EU. This is particularly important for employees of frontier AI labs, both those domiciled within the EU (e.g. Mistral) and those domiciled outside the EU but deploying their products on the EU market (as they are also covered by the AIA). This objective could be achieved, for example, 
by producing and disseminating easily understandable educational materials, or by conducting dedicated training during onboarding.</p><p>We point out that a <a href="https://thesignalsnetwork.org/twh/">guide specific to the Republic of Ireland already exists</a> and could be readily expanded to cover the whole EU, as well as adapted to the AIA and the Code of Practice.</p><p><strong>Clarify what constitutes </strong><em><strong>&#8216;dangers to the public interest, such as where there is an emergency situation or a risk of irreversible damage&#8217;</strong></em></p><p>The <a href="https://eur-lex.europa.eu/eli/dir/2019/1937/oj/eng">Whistleblower Protection Directive</a> offers this framing as an avenue for bypassing internal and external reporting channels while still being granted full legal protection. We believe this article should be clarified and expanded to include risks from advanced AI systems, such as threats created by misaligned AIs or model weight theft due to insufficient cybersecurity protocols. This could be done either through a modification of the article itself or through a new recital.</p><div><hr></div><p>We recognise that several of the recommendations presented above would require changes to legal instruments that fall outside the scope of the AIA, most notably the Whistleblower Protection Directive. For example, mandating the acceptance of anonymous reports, lowering the &gt;50 employee threshold for internal channels, and expanding the definition of whistleblowing facilitators would all require formal amendments to the Directive. As this would involve a full legislative process at the EU level, followed by national transposition across 27 Member States, such changes are unlikely to be implemented in the short term. 
We nonetheless believe it is essential to flag these issues now, such that they can be considered in future revisions of the Directive or &#8211; in the meantime &#8211; incorporated into Commission guidance documents and non-binding recommendations. In contrast, other proposals, such as the establishment of a dedicated whistleblower mailbox, fall well within the existing mandate of the AI Office and could be actioned during the 2025&#8211;2026 implementation period. We recommend that these short-term interventions be prioritised, while laying the groundwork for more systemic legal updates in parallel.</p><h1>Conclusions</h1><p>This work has argued that whistleblowers will be indispensable to any serious effort to govern advanced AI systems &#8211; especially those developed by private frontier labs with limited external oversight. While current EU frameworks offer a promising legal foundation, they are ultimately shaped by the limitations of the EU AI Act: whistleblowers are protected only when reporting legally defined breaches, rather than when raising concerns about novel or poorly understood risks. As we have shown, several of these risks &#8211; including internal deployment of dangerous models, under-elicited evaluations, or latent capabilities &#8211; may not yet fall under the AI Act&#8217;s regulatory scope, despite posing serious threats to public safety.</p><p>The Whistleblower Protection Directive offers some flexibility by extending protection to cases of potential breaches or imminent danger. However, the Directive&#8217;s uneven implementation across Member States, limited guarantees for anonymity, and lack of clarity around public disclosures significantly weaken its practical effectiveness. 
To solidify EU whistleblower protections in the context of AI, we recommend reinforcing both internal and external reporting channels, engaging expert civil society organisations, and establishing an anonymous, well-staffed reporting mailbox within the EU AI Office. As the frontier of AI capabilities continues to evolve rapidly, empowering insiders to speak up &#8211; and designing institutions that listen &#8211; may be one of our most important governance tools.</p><h1>Acknowledgements</h1><p>The author wishes to thank Karl Koch, Elsa Donnat, Mauricio Baker and Michelle Nie for useful discussions and comments, as well as Ethan Beri for early access to unpublished work.</p><p>This work was produced as part of the Talos Fellowship. The author also gratefully acknowledges the financial support of Open Philanthropy, which was used during the last stages of the project.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>This assumes that neither the US nor the Chinese government decides to organise a &#8216;Manhattan Project&#8217; for AI, which is itself a contentious assumption.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>System cards, also known as model cards, are technical reports accompanying the release of an AI model. They are published on a voluntary basis and may include aspects such as datasets used, architectures, training infrastructure, evaluation scores, safety precautions and environmental impact. 
Note that, at the time of writing, there are no legally binding requirements specifying the contents of system cards.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>For a striking example, see documents unveiled as part of a lawsuit against Meta&#8217;s training on pirated content. These <a href="https://storage.courtlistener.com/recap/gov.uscourts.cand.415175/gov.uscourts.cand.415175.391.24.pdf">documents</a> <a href="https://storage.courtlistener.com/recap/gov.uscourts.cand.415175/gov.uscourts.cand.415175.449.4.pdf">reveal</a> internal discussions of legal consequences and possible cover-up strategies.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>One could argue that nuclear weapons are similar, in that they carry significant risks &#8211; such as accidental launch or mishandling &#8211; even when they are not used in war or proliferated beyond state control. However, a major difference is that, being physical hardware rather than software, they cannot self-replicate, self-improve, or conspire against us. Consequently, containing and controlling them is easier than it would be for a superhuman AI.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Non-disparagement agreements are distinct from standard non-disclosure agreements in that they might prevent employees from ever criticising their employer, even after the end of employment. 
They also often include clauses that prevent the employees from mentioning the existence of such agreements.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>A thorough review of whistleblower protections in various industries and legislations is beyond the scope of this work. We refer the reader to the following resources:<br><a href="https://www.aipolicybulletin.org/articles/governments-need-to-protect-ai-industry-whistleblowers-heres-how">Governments Need to Protect AI Industry Whistleblowers: Here's How<br></a><a href="https://law-ai.org/how-to-design-ai-whistleblower-legislation/">How to design AI whistleblower legislation<br></a><a href="https://whistleblowersblog.org/corporate-whistleblowers/congress-introduces-urgently-needed-ai-whistleblower-bill/">Congress Introduces &#8220;Urgently Needed&#8221; AI Whistleblower Bill</a> </p><p><a href="https://thesignalsnetwork.org/twh/">Tech Whistleblowing Guides - The Signals Network</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>At first sight, a potential extenuating circumstance is given by Article 15(1)(b)(i), which states that a public disclosure of information will qualify for protection under this Directive when there are reasonable grounds to believe that <em>&#8216;the breach may constitute an imminent or manifest danger to the public interest, such as where there is an emergency situation or a risk of irreversible damage&#8217;</em>. However, a &#8216;breach&#8217; here is defined by Article 5 in the context of <em>existing</em> Union law, which in this context is the AIA itself. 
Furthermore, it is not clear whether the scope of the Directive as defined through Article 2 is sufficient to cover all emerging threats missed by the AIA. While it does mention <em>&#8216;product safety and compliance&#8217;, </em>as well as <em>&#8216;protection of privacy and personal data, and security of network and information systems&#8217;</em>, it is not explicitly guaranteed that this would include situations such as an internal AI system used for research purposes that is about to cross the threshold of automated replication and self-improvement.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>An argument for why &#8216;internal deployment&#8217; <em>is </em>potentially covered has been given in Section 4.2 and Appendix B of <a href="https://arxiv.org/abs/2504.12170">AI Behind Closed Doors: a Primer on The Governance of Internal Deployment</a>. However, even if this argument holds true, we are uncertain whether all types of internal usage would classify as &#8216;putting the model into service&#8217;, which would in turn satisfy the AIA applicability criterion. For example, experimenting on a penultimate checkpoint of a training run might not be covered (since such a checkpoint would not be used to directly accelerate the R&amp;D of other models) and so would fall out of scope of the AIA due to Article 2(8), as explained in the text. Still, such internal <em>experimentation</em> could lead to the same risks as internal <em>deployment</em>.</p><p>Yet another example would be an AI lab training a model exclusively for the purpose of testing a new alignment technique, which then goes wrong by accident. 
Due to Article 2(6), such internal usage might also fall out of scope of the AIA.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>The word &#8216;any&#8217; is of course all-inclusive, but Recital 25 further confirms that this applies not only to purely scientific research, but also to product-oriented research that is meant to be monetised:</p><p><em>&#8220;This Regulation should support innovation, should respect freedom of science, and should not undermine research and development activity. It is therefore necessary to exclude from its scope AI systems and models specifically developed and put into service for the sole purpose of scientific research and development. Moreover, it is necessary to ensure that this Regulation does not otherwise affect scientific research and development activity on AI systems or models prior to being placed on the market or put into service. As regards product-oriented research, testing and development activity regarding AI systems or models, the provisions of this Regulation should also not apply prior to those systems and models being put into service or placed on the market.&#8221;</em></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>We have since learned that OpenAI has a dedicated 24/7 <a href="https://openai.integrityline.com/">&#8216;integrity hotline</a>&#8217;, as well as a policy <a href="https://cdn.openai.com/policies/raising-concerns-policy-blog-copy-202410.pdf">document</a> with commitments to anonymity and non-retaliation. 
<a href="https://lawsen.substack.com/p/having-a-whistleblowing-function#footnote-1-159743362">Apparently</a>, Anthropic maintains a similar hotline implemented through a third-party whistleblower support organisation, although we were not able to independently verify this claim.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>A good analogy here is the publication of the Responsible Scaling Policy itself. After Anthropic&#8217;s initial release of their RSP in September 2023, OpenAI followed with the <a href="https://openai.com/index/updating-our-preparedness-framework/">Preparedness Framework</a> and Google DeepMind with the <a href="https://deepmind.google/discover/blog/introducing-the-frontier-safety-framework/">Frontier Safety Framework</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>(Beri and Baker, forthcoming) find that lack of trust in action being taken was a strong demotivating factor in 40% of the whistleblowing cases they analysed.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>Interestingly, one of the most extensive and transparent internal whistleblowing systems has been implemented by <a href="https://www.volkswagen-group.com/en/our-whistleblower-system-16041">Volkswagen</a>, which is a direct consequence of its infamous Emissions Scandal.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div 
class="footnote-content"><p>Importantly, whistleblowers should be aware that they do not need to pursue internal reporting first.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-15" href="#footnote-anchor-15" class="footnote-number" contenteditable="false" target="_self">15</a><div class="footnote-content"><p>A <a href="https://www.transparency.org/en/publications/how-well-do-eu-countries-protect-whistleblowers-speakup">report</a> by <a href="https://www.transparency.org/en/our-priorities/whistleblowing">Transparency International</a> raises similar concerns, often finding that national laws are weakened or contradict the original content of the Directive.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-16" href="#footnote-anchor-16" class="footnote-number" contenteditable="false" target="_self">16</a><div class="footnote-content"><p>A good example here is the Wirecard Scandal in Germany, which involved fraud of almost &#8364;2 billion. Journalists reported their findings to the Federal Financial Supervisory Authority (BaFin), which then filed a <a href="https://www.reuters.com/article/business/german-prosecutors-drop-probe-into-ft-over-wirecard-idUSKBN25U2VS/">criminal complaint</a> <em>against the journalists</em> for market manipulation. This is likely due to very aggressive tactics by Wirecard, who carried out sting operations on the journalists in order to &#8216;muddy the waters&#8217;. 
Moreover, investigators found that BaFin employees had increased their trading in Wirecard shares in the months leading up to the firm&#8217;s collapse, which potentially constitutes a serious conflict of interest.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-17" href="#footnote-anchor-17" class="footnote-number" contenteditable="false" target="_self">17</a><div class="footnote-content"><p>As a hypothetical example, <a href="https://artificialintelligenceact.eu/article/14/">Article 14</a> of the AIA already mandates the inclusion of human-in-the-loop components in high-stakes AI systems. A company could design the user interface for these components such that the &#8216;Accept&#8217; option is pre-selected by default. Note that this does not assume any bad intentions; it could be done inadvertently to reduce cognitive friction and make the system less cumbersome to use. While technically allowing for oversight, this design choice nudges the user toward automatic approval, especially if not paired with adequate training.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-18" href="#footnote-anchor-18" class="footnote-number" contenteditable="false" target="_self">18</a><div class="footnote-content"><p>For example, through the official API or because it could be easily constructed by other means.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-19" href="#footnote-anchor-19" class="footnote-number" contenteditable="false" target="_self">19</a><div class="footnote-content"><p>The requirement of appropriate elicitation in model evaluations is included in the Code of Practice. 
Measure II.4.6 of the <a href="https://digital-strategy.ec.europa.eu/en/library/third-draft-general-purpose-ai-code-practice-published-written-independent-experts">Third Draft</a> states:<br><br><em>&#8220;Signatories shall ensure that all model evaluations of their GPAISR (whether internal or external) are performed with a state-of-the-art level of model elicitation appropriate and proportionate to the systemic risk assessed to: (1) elicit the upper limit of current and reasonably foreseeable capabilities, propensities, and effects of the model under evaluation; (2) minimise the risk of under-elicitation; (3) minimise the risk of model deception during model evaluation; and (4) match the realistic model elicitation capabilities of potential misuse actors, where misuse actors play a role in the relevant systemic risk scenario (e.g. some potential misuse actors might not be able to fully elicit the model).&#8221;<br><br></em>Nonetheless, our example illustrates the sort of difficulties that can be encountered in practice.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-20" href="#footnote-anchor-20" class="footnote-number" contenteditable="false" target="_self">20</a><div class="footnote-content"><p>(Beri and Baker, forthcoming) find that in 30 case studies they analysed, retaliation occurred ~60% of the time, with ~10% of whistleblowers receiving death threats.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-21" href="#footnote-anchor-21" class="footnote-number" contenteditable="false" target="_self">21</a><div class="footnote-content"><p>Although a later <a href="https://www.washingtonpost.com/technology/2024/07/13/openai-safety-risks-whistleblower-sec/">SEC inquiry</a> raised questions about the legality of the NDAs, the initial policy change was driven by reputational pressure, not legal compulsion.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a 
id="footnote-22" href="#footnote-anchor-22" class="footnote-number" contenteditable="false" target="_self">22</a><div class="footnote-content"><p>There are likely significant differences between the US and EU in how NDAs interact with whistleblower protections. In the US, there is a high level of &#8216;freedom of contract&#8217;, meaning that employees are usually allowed to give up any rights they choose unless the contract would violate some important public policy. While courts may expressly rule in <a href="https://www.whistleblowers.org/non-disclosure-agreements-and-whistleblowers/">certain cases</a> that whistleblower rights take precedence over NDAs, cases where courts allowed counterclaims for NDA violations to go forward do <a href="https://valawyersweekly.com/2022/09/26/whistleblower-can-be-sued-for-breach-of-nda/">exist</a>. Moreover, the mere existence of a strict NDA can successfully deter a potential whistleblower from speaking up. See this article from the <a href="https://law-ai.org/how-to-design-ai-whistleblower-legislation/#a-nondisclosure-and-non-disparagement-agreements">Institute for Law and AI</a> for more information on the US context.</p><p>On the other hand, in the EU, Recital 91 of the Whistleblower Protection Directive states that:<br><br><em>&#8220;It should not be possible to rely on individuals' legal or contractual obligations, such as loyalty clauses in contracts or confidentiality or non-disclosure agreements, so as to preclude reporting, to deny protection or to penalise reporting persons for having reported information on breaches or made a public disclosure where providing the information falling within the scope of such clauses and agreements is necessary for revealing the breach.&#8221;<br><br></em>Therefore, there is clear guidance that in cases of conflict between NDAs and whistleblower rights, courts should rule in favour of the latter.</p></div></div><div class="footnote" 
data-component-name="FootnoteToDOM"><a id="footnote-23" href="#footnote-anchor-23" class="footnote-number" contenteditable="false" target="_self">23</a><div class="footnote-content"><p>The recommendation to establish such a mailbox has now been included in the &#8216;Statement from Chairs and Vice-Chairs&#8217; on the final version of the Code of Practice. Similarly to the considerations elucidated here, it calls for the mailbox to allow for anonymous communication and to ensure that the level of afforded protections matches that of the Member States.</p></div></div>]]></content:encoded></item><item><title><![CDATA[AI Companies: Publish Your Whistleblowing Policies]]></title><description><![CDATA[A Coalition Call Led by AIWI Was Officially Launched at the National Whistleblower Day Event on Capitol Hill.]]></description><link>https://aiwhistleblowerinitiative.substack.com/p/ai-companies-publish-your-whistleblowing</link><guid isPermaLink="false">https://aiwhistleblowerinitiative.substack.com/p/ai-companies-publish-your-whistleblowing</guid><dc:creator><![CDATA[AIWI]]></dc:creator><pubDate>Wed, 30 Jul 2025 17:45:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!17J9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F241903ae-f29f-41ce-8419-5e39bd03226f_2880x2160.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>Dear AIWI Community,</strong></h2><p>Today at the National Whistleblower Day event on Capitol Hill, we officially launched our campaign with one clear message:</p><p><strong>AI companies must publish their whistleblowing policies.</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!17J9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F241903ae-f29f-41ce-8419-5e39bd03226f_2880x2160.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><img src="https://substackcdn.com/image/fetch/$s_!17J9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F241903ae-f29f-41ce-8419-5e39bd03226f_2880x2160.png" width="1456" height="1092" alt=""></picture></div></a></figure></div><div><hr></div><h2><strong>The Minimum Standard We're Demanding</strong></h2><p>Our primary ask is straightforward&#8212;AI companies must make their whistleblowing policies publicly accessible. 
This means:</p><ul><li><p>Complete policy documents available to the public</p></li><li><p>Clear scope of protected individuals and covered wrongdoing</p></li><li><p>Detailed reporting channels and investigation procedures</p></li><li><p>Protection and support measures specified</p></li><li><p>Rights and external reporting options explained</p></li></ul><p><strong>--&gt; Read about the levels: </strong><a href="http://www.publishyourpolicies.org">www.publishyourpolicies.org</a></p><div><hr></div><h2><strong>The Transparency Gap That Must Be Closed</strong></h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MKxO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb706172-fea0-4213-9b37-acdfa4dffd24_1159x611.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MKxO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb706172-fea0-4213-9b37-acdfa4dffd24_1159x611.png 424w, https://substackcdn.com/image/fetch/$s_!MKxO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb706172-fea0-4213-9b37-acdfa4dffd24_1159x611.png 848w, https://substackcdn.com/image/fetch/$s_!MKxO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb706172-fea0-4213-9b37-acdfa4dffd24_1159x611.png 1272w, https://substackcdn.com/image/fetch/$s_!MKxO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb706172-fea0-4213-9b37-acdfa4dffd24_1159x611.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!MKxO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb706172-fea0-4213-9b37-acdfa4dffd24_1159x611.png" width="1159" height="611" alt="" loading="lazy"></picture></div></a></figure></div><p>Currently, leading AI companies (Anthropic, Google DeepMind, xAI, Mistral) haven't published their whistleblowing policies. Only OpenAI has done so&#8212;and only after public pressure over their restrictive non-disparagement clauses revealed the need for transparency.</p><p>Closing this gap is essential. As the Right to Warn letter states: &#8220;AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. 
We do not think they can all be relied upon to share it voluntarily. So long as there is no effective government oversight of these corporations, <strong>current and former employees are among the few people who can hold them accountable to the public.&#8221;</strong> (<a href="https://righttowarn.ai/">righttowarn.ai</a>)</p><div><hr></div><h2><strong>Our Unprecedented Coalition</strong></h2><p>We're not asking alone. Over 30 organizations and experts in whistleblowing and AI support this call:</p><h3><strong>Whistleblowing organizations and individuals supporting the campaign (alphabetical):</strong></h3><p><a href="https://www.blueprintforfreespeech.net/">Blueprint for Free Speech</a>, <a href="https://pureportal.coventry.ac.uk/en/persons/dimitrios-kafteranis">Dimitrios Kafteranis</a> (University of Coventry), <a href="https://whistleblower.org/">Government Accountability Project</a>, <a href="https://www.hrlc.org.au/who-we-are/">Human Rights Law Centre</a>, <a href="https://www.whistleblowers.org/">National Whistleblower Center</a>, <a href="http://psst.org/">Psst</a>, <a href="https://thesignalsnetwork.org/">The Signals Network</a>, <a href="https://www.uni-goettingen.de/en/principle+investigator/659527.html">Simon Gerdemann</a> (University of Goettingen), <a href="https://www.transparency.org.uk/">Transparency International</a>, <a href="https://whisper.exposefacts.org/">WHISPeR</a>, <a href="https://whistleblowingnetwork.org/Home">Whistleblowing International Network</a>, <a href="https://whistleblowingnetwork.org/Membership/Our-Members/Members/Whistleblowers-Netzwerk-E-V">Whistleblower-Netzwerk</a>, <a href="https://www.linkedin.com/in/mary-inman-81229/">Whistleblower Partners LLP</a>, <a 
href="https://www.edhec.edu/en/research-and-faculty/faculty/professors-and-researchers/wim-vandekerckhove">Wim Vandekerckhove</a> (EDHEC Business School)</p><h3><strong>AI organizations and individuals supporting the campaign (alphabetical):</strong></h3><p><a href="https://www.centeraipolicy.org/">Center for AI Policy</a>, <a href="https://carma.org/">Centre for AI Risk Management &amp; Alignment</a> (CARMA), <a href="https://www.convergenceanalysis.org/">Convergence Analysis</a>, <a href="http://encodeai.org/">Encode</a>, <a href="https://futureoflife.org/">Future of Life Institute</a>, <a href="https://thefuturesociety.org/">The Future Society</a>, <a href="https://www.linkedin.com/in/jessicahnewman/">Jessica Newman</a> (Director, AI Security Initiative, University of California, Berkeley), <a href="https://oid.wharton.upenn.edu/profile/kartikh/">Kartik Hosanagar</a> (The Wharton School), LASST, <a href="https://lessig.org/">Lawrence Lessig</a> (Harvard Law School), <a href="https://legalsafetylab.org/">Legal Safety Lab</a>, <a href="https://www.themidasproject.com/">The Midas Project</a>, <a href="https://www.cognitiverevolution.ai/about/">Nathan Labenz</a> (Cognitive Revolution), <a href="https://www.law.uh.edu/faculty/main.asp?PID=6428">Peter Salib</a> (University of Houston Law Center), <a href="https://www.pourdemain.ngo/en">Pour Demain</a>, <a href="https://en.wikipedia.org/wiki/Roman_Yampolskiy">Roman Yampolskiy</a> (University of Louisville), <a href="https://secureaiproject.org/">Secure AI Project</a>, <a href="https://vcresearch.berkeley.edu/faculty/stuart-russell">Stuart Russell</a> (University of California, Berkeley), <a href="https://www.safer-ai.org/">Safer AI</a></p><div><hr></div><h2><strong>Why Level 1 Is Just the Beginning</strong></h2><p>Publishing policies is essential, but it's the minimum. 
We use a tiered transparency framework:</p><blockquote><p><strong>Level 1 (Essential):</strong> <strong>Policy transparency</strong>&#8212;making the policies public so stakeholders can evaluate organizational commitment.</p><p><strong>Level 2 (True Leadership):</strong> <strong>Effectiveness transparency</strong>&#8212;regular public reporting on system outcomes, metrics on reports received/resolved, evidence of improvements, and effectiveness assessments.</p></blockquote><h3><strong>Level 2 is what all AI companies should strive for. Only then can we truly evaluate and trust these systems.</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Yo8N!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13bc1317-a5c3-44d4-9f57-e4eddea67f49_1018x792.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Yo8N!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13bc1317-a5c3-44d4-9f57-e4eddea67f49_1018x792.png 424w, https://substackcdn.com/image/fetch/$s_!Yo8N!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13bc1317-a5c3-44d4-9f57-e4eddea67f49_1018x792.png 848w, https://substackcdn.com/image/fetch/$s_!Yo8N!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13bc1317-a5c3-44d4-9f57-e4eddea67f49_1018x792.png 1272w, https://substackcdn.com/image/fetch/$s_!Yo8N!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13bc1317-a5c3-44d4-9f57-e4eddea67f49_1018x792.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!Yo8N!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13bc1317-a5c3-44d4-9f57-e4eddea67f49_1018x792.png" width="1018" height="792" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/13bc1317-a5c3-44d4-9f57-e4eddea67f49_1018x792.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:792,&quot;width&quot;:1018,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:183874,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/i/169679401?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13bc1317-a5c3-44d4-9f57-e4eddea67f49_1018x792.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Yo8N!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13bc1317-a5c3-44d4-9f57-e4eddea67f49_1018x792.png 424w, https://substackcdn.com/image/fetch/$s_!Yo8N!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13bc1317-a5c3-44d4-9f57-e4eddea67f49_1018x792.png 848w, https://substackcdn.com/image/fetch/$s_!Yo8N!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13bc1317-a5c3-44d4-9f57-e4eddea67f49_1018x792.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Yo8N!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13bc1317-a5c3-44d4-9f57-e4eddea67f49_1018x792.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" style="height:20px;width:20px" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><h2><strong>The Evidence Is Clear</strong></h2><p>The ICC Guidelines note that "Employees are often the first to recognize potential wrongdoing or risk of harm." These risks are only visible to insiders. 
Yet our research shows many AI employees don't know, understand, or trust their companies' internal systems.</p><p>Companies across industries&#8212;from ASML in tech to ABN Amro in finance&#8212;already publish these policies. The <a href="https://iccwbo.org/news-publications/policies-reports/icc-2022-guidelines-on-whistleblowing/">ICC 2022</a> Guidelines explicitly encourage transparency and publication of effectiveness metrics as best practice.</p><div><hr></div><h2><strong>Moving Forward</strong></h2><p>We're inviting the public to see why our initiatives matter while remaining open to collaboration with AI companies. Industry leaders act before pressure mounts; they don't wait for external pressure, or worse, scandals, to lead on transparency.</p><p>Companies that embrace Level 1 transparency immediately will be recognized by the public as meeting the minimum standard. Those that commit to Level 2 will be celebrated as true leaders in AI accountability.</p><div><hr></div><h2><strong>Your Role</strong></h2><ul><li><p>Share our call: <a href="http://www.publishyourpolicies.org">www.publishyourpolicies.org</a></p></li><li><p>Use the hashtag #PublishYourPolicies</p></li><li><p>If you work in AI, discuss this with your colleagues and leadership</p></li><li><p>Help us maintain momentum for industry-wide change</p></li></ul><p></p><p>We call on AI companies to make their whistleblowing policies public,</p><p><em>The AIWI Team &amp; Coalition Partners</em></p>]]></content:encoded></item><item><title><![CDATA[OAISIS is now The AI Whistleblower Initiative (AIWI)]]></title><description><![CDATA[Our New Name, Our Unchanged Mission: Supporting Whistleblowers in AI]]></description><link>https://aiwhistleblowerinitiative.substack.com/p/oaisis-is-now-the-ai-whistleblower</link><guid isPermaLink="false">https://aiwhistleblowerinitiative.substack.com/p/oaisis-is-now-the-ai-whistleblower</guid><dc:creator><![CDATA[AIWI]]></dc:creator><pubDate>Fri, 25 Jul 2025 14:22:24 
GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9wI_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54eb936f-d6f0-49ee-8e8c-31f63f42233c_1800x945.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9wI_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54eb936f-d6f0-49ee-8e8c-31f63f42233c_1800x945.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9wI_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54eb936f-d6f0-49ee-8e8c-31f63f42233c_1800x945.png 424w, https://substackcdn.com/image/fetch/$s_!9wI_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54eb936f-d6f0-49ee-8e8c-31f63f42233c_1800x945.png 848w, https://substackcdn.com/image/fetch/$s_!9wI_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54eb936f-d6f0-49ee-8e8c-31f63f42233c_1800x945.png 1272w, https://substackcdn.com/image/fetch/$s_!9wI_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54eb936f-d6f0-49ee-8e8c-31f63f42233c_1800x945.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9wI_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54eb936f-d6f0-49ee-8e8c-31f63f42233c_1800x945.png" width="725" height="380.4258241758242" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/54eb936f-d6f0-49ee-8e8c-31f63f42233c_1800x945.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:764,&quot;width&quot;:1456,&quot;resizeWidth&quot;:725,&quot;bytes&quot;:507483,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/i/169207602?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54eb936f-d6f0-49ee-8e8c-31f63f42233c_1800x945.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9wI_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54eb936f-d6f0-49ee-8e8c-31f63f42233c_1800x945.png 424w, https://substackcdn.com/image/fetch/$s_!9wI_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54eb936f-d6f0-49ee-8e8c-31f63f42233c_1800x945.png 848w, https://substackcdn.com/image/fetch/$s_!9wI_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54eb936f-d6f0-49ee-8e8c-31f63f42233c_1800x945.png 1272w, https://substackcdn.com/image/fetch/$s_!9wI_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54eb936f-d6f0-49ee-8e8c-31f63f42233c_1800x945.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
style="height:20px;width:20px" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3><strong>Dear community,</strong></h3><p>We&#8217;re excited to announce our new name: <strong>The AI Whistleblower Initiative</strong> <strong>(AIWI). </strong>This rebrand strengthens our commitment to our mission&#8212;supporting whistleblowers &amp; insiders in AI.</p><h3>Discover our relaunched website <a href="http://aiwi.org">here</a>. </h3><div><hr></div><h3><strong>What this means for you:</strong></h3><h3>Our work and values stay the same.</h3><blockquote><p>For AI to transform our world for the better, we need to spot risks, issues, and misbehavior early. AIWI supports the people who do. 
No matter if you&#8217;re an insider, support organization, lawyer, or journalist &#8211; <strong>we&#8217;re here to help.</strong></p></blockquote><h3>How do we help?</h3><blockquote><p>AIWI helps concerned insiders in AI with expert opinions on their questions and connects them to specialized support. </p><p><strong>From the earliest point in your journey.</strong></p></blockquote><h3>1. Submit your question</h3><p><strong>Unsure whether what you are seeing is cause for concern and don&#8217;t want to disclose confidential information? </strong></p><p>This is what our <strong><a href="https://aiwi.org/third-opinion/">'Third Opinion' </a> </strong>service is built for.<br>Using our secure form, submit a question about your concern to us: confidentially, anonymously, and without disclosing sensitive information.</p><p><strong><a href="https://aiwi.org/third-opinion/#howtosubmitaquestion">Explore how to submit a question</a></strong></p><h3>2. Receive expert opinions</h3><p><strong>Together, we identify independent (technical) experts tailored to your specific question. </strong></p><p>We approach these experts confidentially with your question and share their answers with you. You communicate with us via our anonymous, end-to-end encrypted tool. <strong>All without a commitment to move forward</strong>: hopefully, your concerns are alleviated at this stage.</p><p><strong><a href="https://aiwi.org/third-opinion/#instructions">Explore the 'Third Opinion' process</a></strong></p><h3>3. Connect further</h3><p><strong>If you are still concerned</strong>, AIWI will help you find advice and representation from legally privileged, trusted organizations that specialize in helping whistleblowers understand their rights, risks, and options, because most insiders approach support (far) too late.</p><p><strong><a href="https://aiwi.org/contact-hub/">Explore our specialized whistleblower support network</a></strong></p><h3>4. 
Explore next steps</h3><p><strong>If required and where legally permissible, we introduce the experts identified during the early stages of the process </strong>to the legally privileged support organizations, supplementing the organizations' whistleblowing experience with relevant AI expertise. If financing is required to cover costs associated with your process, we will help you find it.</p><h3>5. Stay protected along your journey</h3><p>We care about operational and cybersecurity, and so should you. Read our introduction to privacy &amp; operational security.</p><p><strong><a href="https://aiwi.org/technology/">Read our OpSec &amp; Privacy Toolkit</a></strong></p><h3>Looking Forward</h3><p>Our values haven't changed. Our commitment hasn't wavered. If anything, becoming AIWI represents our evolution into a stronger, more focused organization&#8212;one dedicated to empowering AI insiders to uphold their duty, with the protection they deserve.</p><p>Whether you're questioning something you've observed, seeking guidance on your concerns, or looking for specialized support, <strong>we're here from the earliest point in your journey.</strong></p><h4><strong>Thank you for your continued support as we grow as AIWI.</strong></h4><h4>&#8212;<strong>Formerly OAISIS, Now The AIWI Team</strong></h4>]]></content:encoded></item><item><title><![CDATA[INSIDE AI #16: A Whistleblower’s Account on Huawei's Pangu | Ex-Meta & OpenAI Insiders Reflected on Culture | New Bid for SB53]]></title><description><![CDATA[Edition 16]]></description><link>https://aiwhistleblowerinitiative.substack.com/p/inside-ai-16-a-whistleblowers-account</link><guid isPermaLink="false">https://aiwhistleblowerinitiative.substack.com/p/inside-ai-16-a-whistleblowers-account</guid><dc:creator><![CDATA[AIWI]]></dc:creator><pubDate>Wed, 23 Jul 2025 15:02:35 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!Krir!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc22d7e05-d628-465c-a5d8-55800554c72a_1216x904.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><h3><strong>In This Edition:</strong></h3><div class="preformatted-block" data-component-name="PreformattedTextBlockToDOM"><pre class="text">Key takeaways:</pre></div><ul><li><p><strong>News:</strong></p><ul><li><p>A Whistleblower&#8217;s Account: Huawei's Pangu Model Allegations</p></li><li><p>Metastatic Misread? Tijmen Blankevoort on &#8216;Fixing&#8217; Meta AI from Within</p></li><li><p>Calvin French-Owen on Leaving OpenAI: A Candid Look Inside Its Culture</p></li><li><p>OpenAI&#8217;s Open-Weight Model, Loosening Microsoft Ties</p></li><li><p>Ruoming Pang Leaves Apple for Meta: Other Top Apple AI Engineers Eye the Door</p></li></ul></li><li><p><strong>Policy: </strong></p><ul><li><p>New Amendments to Wiener&#8217;s Latest Bill SB 53: Renewed Bid for AI Transparency</p></li><li><p>The AI Whistleblower Protection Act (AIWPA): Protecting Whistleblowers and Strengthening Internal Reporting Systems</p></li></ul></li><li><p><strong>Research:</strong></p><ul><li><p>New Publication: FLI AI Safety Index Report (Summer 2025) - Featuring OAISIS&#8217; Contribution</p></li><li><p>The &#8220;Silicon Sentinels&#8221; Approach: How Whistleblowers Are Key to Managing AI Risk</p></li><li><p>Verification for International AI Governance, including Whistleblowing as International Verification Method</p></li></ul></li><li><p><strong>Announcement:</strong></p><ul><li><p>OAISIS to Join AI Panel at National Whistleblower Day, July 30 in Washington, D.C.</p></li></ul></li></ul><div><hr></div><blockquote><h2>Edition Highlights</h2></blockquote><h2><strong>New Publication: FLI AI 
Safety Index Report (Summer 2025)</strong></h2><p>We&#8217;re excited to share the release of the <strong>Future of Life Institute&#8217;s AI Safety Index Report (Summer 2025)</strong>.</p><p>A notable feature of this edition is its <strong>comparison of whistleblowing systems </strong>among leading AI organizations. On the basis of these findings, FLI is <strong>calling on all AI companies to publish their whistleblowing policies</strong> to improve transparency and accountability sector-wide.</p><p>We contributed to the research on whistleblower systems underlying the report.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://futureoflife.org/wp-content/uploads/2025/07/FLI-AI-Safety-Index-Report-Summer-2025.pdf" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Krir!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc22d7e05-d628-465c-a5d8-55800554c72a_1216x904.png 424w, https://substackcdn.com/image/fetch/$s_!Krir!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc22d7e05-d628-465c-a5d8-55800554c72a_1216x904.png 848w, https://substackcdn.com/image/fetch/$s_!Krir!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc22d7e05-d628-465c-a5d8-55800554c72a_1216x904.png 1272w, https://substackcdn.com/image/fetch/$s_!Krir!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc22d7e05-d628-465c-a5d8-55800554c72a_1216x904.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!Krir!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc22d7e05-d628-465c-a5d8-55800554c72a_1216x904.png" width="727.998046875" height="541.209074321546" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c22d7e05-d628-465c-a5d8-55800554c72a_1216x904.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:904,&quot;width&quot;:1216,&quot;resizeWidth&quot;:727.998046875,&quot;bytes&quot;:196939,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://futureoflife.org/wp-content/uploads/2025/07/FLI-AI-Safety-Index-Report-Summer-2025.pdf&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://oaisis.substack.com/i/168566348?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc22d7e05-d628-465c-a5d8-55800554c72a_1216x904.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Krir!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc22d7e05-d628-465c-a5d8-55800554c72a_1216x904.png 424w, https://substackcdn.com/image/fetch/$s_!Krir!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc22d7e05-d628-465c-a5d8-55800554c72a_1216x904.png 848w, https://substackcdn.com/image/fetch/$s_!Krir!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc22d7e05-d628-465c-a5d8-55800554c72a_1216x904.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Krir!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc22d7e05-d628-465c-a5d8-55800554c72a_1216x904.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" style="height:20px;width:20px" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>Report link:</strong> <a href="https://futureoflife.org/wp-content/uploads/2025/07/FLI-AI-Safety-Index-Report-Summer-2025.pdf">FLI AI Safety Index Report (Summer 2025)</a></p><h4>OAISIS&#8212;soon to be known as AIWI (The AI Whistleblower Initiative)&#8212;is aligned with this call and will be making a formal announcement 
soon. Stay tuned to learn how you can get involved!</h4><div><hr></div><h2>The AI Whistleblower Protection Act (AIWPA): Protecting Whistleblowers and Strengthening Internal Reporting Systems</h2><h4>Announcement: We will be in Washington DC for AI panel discussions at the <a href="https://www.nationalwhistleblowerday.org/">National Whistleblower Day</a> event on July 30. More details to come!</h4><p>As we prepare for our panel in Washington, we want to revisit the <strong>AI Whistleblower Protection Act (AIWPA)</strong>, introduced by Senator Charles Grassley (R) on May 15. This bipartisan bill&#8212;previously featured in <a href="https://oaisis.substack.com/p/inside-ai-12-federal-preemption-bill">our 12th edition</a>&#8212;has received strong support from senators across the aisle and endorsement from the <a href="https://www.grassley.senate.gov/download/nwc-to-grassley_-aiwpa">National Whistleblower Center</a>.</p><p><a href="https://kkc.com/wp-content/uploads/2025/06/ai_whistleblower_protection_act.pdf">The bill</a> provides protection for individuals who disclose &#8220;AI violations,&#8221; including security vulnerabilities and specific threats to public health and safety.</p><h4>What This Means for AI Companies</h4><ul><li><p>Besides protecting disclosures to regulators, the bill would help <strong>encourage companies developing AI to maintain effective internal reporting structures</strong></p></li></ul><blockquote><p>&#8220;According to the Securities and Exchange Commission (SEC) Whistleblower Program&#8217;s <a href="https://kkc.com/wp-content/uploads/2025/06/2021_ow_ar_508-1.pdf">2021 Annual Report to Congress</a>, approximately <strong>75% of award recipients</strong> in that fiscal year had <strong>initially raised their concerns internally</strong> to supervisors, compliance personnel, or through internal reporting mechanisms.&#8221;</p></blockquote><p>As <a 
href="https://kkc.com/our-whistleblower-law-firm/our-whistleblower-lawyers/stephen-m-kohn/">Stephen M. Kohn</a> and <a href="https://kkc.com/our-whistleblower-law-firm/our-whistleblower-lawyers/sophie-luskin/">Sophie Luskin</a> noted, &#8220;Raising concerns internally is often the first way that employees report misconduct or serious potential violations of law.&#8221; The data confirms that <strong>most whistleblowers begin the process through internal channels</strong>.</p><ul><li><p>The bill would also push AI companies to take extra steps to ensure that employees who use these reporting systems are <strong>not subject to retaliation</strong>, since the AIWPA&#8217;s strong legal remedies are intended to protect honest employees seeking to do the right thing by reporting concerns to their bosses.</p></li></ul><h4>Whistleblower Remedies and Protections Under the AIWPA</h4><p>AI employees who face retaliation or wrongful termination for making a protected disclosure&#8212;internally or to the government&#8212;are granted rights similar to those in <a href="https://kkc.com/frequently-asked-questions/sarbanes-oxley-act-whistleblower-rewards-and-protections/">Sarbanes-Oxley</a>. Available remedies to whistleblowers include:</p><ul><li><p>Make-whole relief</p></li><li><p>Back pay</p></li><li><p>Restoration of lost benefits</p></li><li><p>Reinstatement to their job</p></li><li><p>Compensatory damages</p></li><li><p>Attorney&#8217;s fees and legal costs if the whistleblower prevails</p></li></ul><p>While <strong>punitive damages are not included</strong>, the law provides for <strong>double back pay</strong>, serving as at least some financial deterrent against retaliation.</p><h4>Why Now</h4><p>In the absence of a dedicated regulation covering risks from AI, it is crucial that employees can safely report concerns to supervisors or compliance personnel. 
These concerns may involve <strong>national security</strong>, <strong>public safety</strong>, or <strong>consumer fraud</strong>. The legislation also responds to well-documented risks raised in <em><a href="https://righttowarn.ai/">Right to Warn</a></em> letters, including concerns about foreign adversaries stealing AI technologies and the potential misuse of AI by terrorists, Kohn and Luskin added.</p><p>&#8594; <strong>Read more</strong>: <a href="https://wp.nyu.edu/compliance_enforcement/2025/07/10/the-ai-whistleblower-protection-act-is-critical-for-enhancing-corporate-compliance/">The AI Whistleblower Protection Act Is Critical for Enhancing Corporate Compliance</a></p><div><hr></div><blockquote><h1>Insider Currents </h1></blockquote><p><em>Carefully curated summaries and links to the latest news, spotlighting the voices and concerns emerging from within AI companies.</em></p><h2>A Whistleblower&#8217;s Account: Huawei's Pangu Model Allegations</h2><p>Research group HonestAGI published an analysis of Huawei&#8217;s newly open-sourced Pangu Pro MoE model and found an unusually high parameter 
correlation of 0.927 (comparisons of similar models in the industry usually do not exceed 0.7) with Alibaba&#8217;s Qwen-2.5 14B model. This suggests that the model was not independently trained. While Huawei officially acknowledged referencing open-source implementations as common practice, a detailed account by a self-described Huawei employee published on GitHub alleges a more deliberate process of appropriation, known as "shelling."</p><p>According to the account, a team within Huawei, known as the "Small Model Lab," repeatedly took pre-trained weights from competitor models, first from Qwen 2.5 and then DeepSeek-V3. DeepSeek&#8217;s model was shelled in order to get a larger 718B Mixture of Experts model up and running. This practice, the whistleblower claims, allowed the team to show rapid progress while circumventing the arduous process of training from scratch. The whistleblower also describes efforts to hide the fact that the model was shelled from Qwen:</p><blockquote><p>&#8220;I heard from colleagues that they used many methods to wash away the Qwen watermark, including intentionally training on dirty data. This provides an unprecedented special case for academic research on model lineage. Future new lineage detection methods can be tested on it.&#8221;</p></blockquote><p>While the "shelling" team allegedly operated with few constraints, another team, the "Fourth Column" (our translation of &#8220;&#22235;&#32437;&#8221;, an internal unit), reportedly struggled through immense hardship to train a 135B dense model and a 718B MoE from scratch on Huawei&#8217;s own Ascend chips. This reflects the whistleblower's hope that Huawei's chips will one day match Nvidia's in training capability. 
Their eventual success proved the viability of the domestic hardware stack, but was achieved despite what the whistleblower described as a frustrating double standard regarding access to compute and other resources.</p><p>&#8594; Read: <a href="https://github.com/HW-whistleblower/True-Story-of-Pangu">GitHub's "Whistleblower's Account" - HW-whistleblower / True-Story-of-Pangu</a> (in Chinese)</p><p>&#8594; Read: <a href="https://finance.sina.com.cn/tech/discovery/2025-07-07/doc-infephkr7423333.shtml">Huawei&#8217;s Pangu large model questioned as a &#8216;copy&#8217; of Alibaba&#8217;s Qwen: official response</a> (in Chinese)</p><p><strong>Disclaimer: This is an interpretation of content originally written in Chinese. We welcome feedback and suggestions to correct misinterpretations or inaccuracies in translation or context.</strong></p><div><hr></div><h2>Metastatic Misread? <a href="https://www.linkedin.com/in/tijmen-blankevoort-a5633a24/">Tijmen Blankevoort</a> <strong>on &#8216;Fixing&#8217; Meta AI from Within</strong></h2><p>You may have seen the headline from <em><a href="https://www.theinformation.com/articles/meta-ai-researcher-warns-metastatic-cancer-afflicting-company-culture?rc=qlomo6">The Information</a></em> reporting that a departing Meta AI researcher, Tijmen Blankevoort, described the company&#8217;s culture as a &#8220;metastatic cancer.&#8221; In a 2,000-word essay seen and cited by <em>The Information</em>, Blankevoort describes deep-rooted cultural and organizational issues at Meta that he says have slowed progress within the nearly 2,000-person team behind its flagship AI model, Llama.</p><blockquote><p>&#8220;I have yet to meet someone in Meta-GenAI that truly enjoys being there. 
Someone that feels like they want to stay in Meta for a long time because it&#8217;s such a great place.&#8221;</p></blockquote><p>However, on his Substack Blankevoort claimed that <strong>his memo was not the mic-drop that the media portrayed it to be</strong>, but rather a carefully researched analysis of systemic issues he believed were hindering Meta AI. <strong>He pointed out five core challenges:</strong></p><blockquote><ul><li><p>The daily fear of performance reviews and getting fired, which he said reflected a lack of psychological safety that is crucial for morale.</p></li><li><p>The necessity of a culture and processes that enable &#8216;big projects&#8217; to come to fruition.</p></li><li><p>A management culture that does not promote camaraderie among employees, leaving them with little sense of belonging.</p></li><li><p>Instability in team assignments, preventing experience from building up and crystallizing over time.</p></li><li><p>A wavering vision that was tough for team members to enthusiastically rally behind.</p></li></ul></blockquote><p>In that essay, he claimed to draw on sources from organizational psychology, management research, and company history to outline cultural challenges within Meta&#8217;s AI division. </p><blockquote><p>I wrote this with care, citing books, articles, and other sources from behavioral and organizational psychology, organizational research, and history. I provided internal examples of the culture that I was seeing manifested, and at the end provided a list of improvements the company could make to strengthen the culture.</p></blockquote><p>According to him, Meta leadership largely agreed with his assessment and acknowledged many of the issues raised, noting that efforts to address them were already underway&#8212;particularly following several high-profile hires. 
He attributed much of the dysfunction to an aggressive, fast-paced AI race and a legacy software culture that, he argued, is ill-suited for the more experimental and collaborative demands of building large language models.</p><p>He also pointed out that more than 100 employees reached out to express support after reading the essay. </p><blockquote><p>Many said it articulated concerns they had struggled to define and made them feel less alone in their experiences. </p></blockquote><p>Several reportedly shared feelings of being overworked and had previously blamed themselves, but found reassurance in seeing those issues reflected more broadly.</p><p>&#8594; Read: <a href="https://blankevoort.substack.com/p/metas-ai-culture?r=4pyevb&amp;triedRedirect=true">Tijmen&#8217;s Substack: Meta's AI culture: Setting the record straight on the leaked document</a></p><p>&#8594; Read: <a href="https://www.theinformation.com/articles/meta-ai-researcher-warns-metastatic-cancer-afflicting-company-culture?rc=qlomo6">Meta AI Researcher Warns of &#8216;Metastatic Cancer&#8217; Afflicting Company Culture</a></p><div><hr></div><h2><a href="https://www.linkedin.com/in/calvinfo/">Calvin French-Owen</a> on Leaving OpenAI: A Candid Look Inside Its Culture</h2><p>Calvin French-Owen, who joined OpenAI in 2024 and left recently, has published <strong>his (mainly positive) observations</strong> on the company's internal operations. His account addresses what he describes as significant public curiosity about OpenAI's workplace dynamics:</p><blockquote><p>&#8220;&#8230;Because there&#8217;s a lot of smoke and noise around what OpenAI is doing, but not a lot of first-hand accounts of what the culture of working there actually <em>feels like.&#8221;</em></p></blockquote><p>Here are some interesting elements from his reflections:</p><h4>Everything Runs on Slack</h4><p>OpenAI operates with zero email&#8212;"everything, and I mean everything, runs on Slack." 
French-Owen noted he received perhaps 10 emails during his entire tenure. He suggested this can be workable, though it risks information overload unless channels are carefully curated and managed.</p><h4>Bottoms-Up Meritocracy</h4><p>When he asked about quarterly roadmaps, he was told "this doesn't exist." Instead of master plans, progress emerges iteratively as research bears fruit. Good ideas come from anywhere, and leadership advancement is &#8220;based upon employees&#8217; ability to have good ideas and then execute upon them, rather than their competency at things like presenting at all-hands or political maneuvering."</p><h4>"You Can Just Do Things"</h4><p>There's a "strong bias to action" where employees launch parallel efforts without heavy permission. Multiple teams independently developed "3-4 different Codex prototypes" before the official launch. Researchers operate as "mini-executives" with autonomy to pursue their own directions.</p><h4>Twitter as Strategic Intelligence</h4><p>Surprisingly, "OpenAI pays a lot of attention to twitter." Viral OpenAI-related tweets regularly reach decision-makers. One colleague joked "this company runs on twitter vibes."</p><h4>Safety: More Than Expected, Different Focus</h4><p><strong>"Safety is actually more of a thing than you might guess," </strong>but focuses on practical risks (hate speech, bioweapons, prompt injection) rather than theoretical ones (intelligence explosion). Most safety work isn't published&#8212;"OpenAI really should do more to get it out there."</p><h4>The Launch Machine: From Idea to Product in 7 Weeks</h4><p><a href="https://oaisis.substack.com/i/167364061/openais-internal-response">Echoing our story from the last edition</a>&#8212;where one former employee said Altman had been pushing for <em>&#8220;buzzy announcements every few months&#8221;</em>&#8212;French-Owen provided detailed insight into OpenAI's product development capabilities through the Codex launch story. 
The entire product&#8212;described as having substantial scope including "a container runtime, made optimizations on repo downloading, fine-tuned a custom model to deal with code edits, handled all manner of git operations"&#8212;went from first lines of code to public launch in just 7 weeks.</p><p>The team was notably senior ("~8 engineers, ~4 researchers, 2 designers, 2 GTM and a PM") and worked with extraordinary intensity; he described working until midnight and waking at 5:30 AM, for weeks on end.</p><h4><strong>The Path to AGI: The Three-Horse Race</strong></h4><p>He characterized the current AI landscape as a competition between three major players&#8212;OpenAI, Anthropic, and Google. According to his assessment, the companies are pursuing different strategic approaches that reflect their core competencies.</p><blockquote><p>Each of these organizations are going to take a different path to get there based upon their DNA (consumer vs business vs rock-solid-infra + data). <strong><a href="https://calv.info/openai-reflections#footnote-fn-6"><sup>6</sup></a></strong> Working at any of them will be an eye-opening experience.</p></blockquote><p>&#8594; Read His Blog: <a href="https://calv.info/openai-reflections#footnote-fn-6">Reflections on OpenAI</a></p><div><hr></div><h2>OpenAI&#8217;s Open-Weight Model, Loosening Microsoft Ties</h2><p>OpenAI is preparing to release an open-weight language model, according to sources familiar with the company&#8217;s plans, adding to its complicated relationship with Microsoft amid contract renegotiations over OpenAI&#8217;s corporate restructuring, reported<a href="https://oaisis.substack.com/p/inside-ai-15-openai-and-meta-talent"> in our previous edition</a>.</p><p>The model, described as &#8220;similar to o3 mini&#8221; with reasoning capabilities, will be available on Azure, Hugging Face, and other cloud providers&#8212;<strong>not exclusively through Microsoft's ecosystem</strong>, according to <em><a 
href="https://www.theverge.com/notepad-microsoft-newsletter/702848/openai-open-language-model-o3-mini-notepad">The Verge</a></em>. </p><h4>OpenAI Bets on Stock Grants as Retention Strategy</h4><p>OpenAI's stock-based compensation jumped more than fivefold last year to $4.4 billion&#8212;representing 119% of total revenue, according to projections seen by <em><a href="https://www.theinformation.com/articles/openais-stock-compensation-reflect-steep-costs-talent-wars?rc=qlomo6">The Information</a>.</em> The company projects this proportion will decline to 45% of revenue this year, then fall below 10% by decade's end.</p><p>The compensation surge follows Meta&#8217;s aggressive recruitment of OpenAI researchers. In response, OpenAI hired four high-profile engineers from rivals, including David Lau from Tesla and infrastructure engineers Uday Ruddarraju and Mike Dalton from xAI, who previously built the Colossus supercomputer, <em><a href="https://www.wired.com/story/openai-new-hires-scaling/">WIRED</a></em> reported.</p><h4>Post-Restructure: Microsoft, Employees, and Investors to Share OpenAI Ownership</h4><p>OpenAI is preparing for a major structural shift that could significantly alter its ownership landscape. Currently, employees hold profit-sharing units rather than traditional equity. However, once the company&#8217;s for-profit arm converts into a public benefit corporation, those units are expected to transform into common shares&#8212;potentially giving staff a sizable equity stake. According to a person familiar with internal discussions, OpenAI leadership has floated a post-restructuring ownership model in which employees would hold roughly one-third of the company. 
Microsoft would maintain another third, with the remaining share divided among other investors.</p><p>&#8594; Read: <a href="https://www.theverge.com/notepad-microsoft-newsletter/702848/openai-open-language-model-o3-mini-notepad">OpenAI&#8217;s open language model is imminent/Microsoft&#8217;s partnership with OpenAI could be facing yet another test.</a></p><p>&#8594; Read: <a href="https://www.theinformation.com/articles/openais-stock-compensation-reflect-steep-costs-talent-wars?rc=qlomo6">OpenAI&#8217;s Stock Compensation Reflect Steep Costs of Talent Wars</a></p><p>&#8594; Read: <a href="https://www.wired.com/story/openai-new-hires-scaling/">OpenAI Poaches 4 High-Ranking Engineers From Tesla, xAI, and Meta</a></p><p>&#8594; Read: <a href="https://fortune.com/2025/07/05/openai-is-betting-millions-on-building-ai-talent-from-the-ground-up-amid-rival-metas-poaching-pitch/">OpenAI is betting millions on building AI talent from the ground up amid rival Meta&#8217;s poaching pitch</a></p><div><hr></div><h2><strong>Ruoming Pang Leaves Apple for Meta: Other Top Apple AI Engineers Also Eye the Door</strong></h2><p>Apple&#8217;s AI brain drain may just be beginning.</p><p>Ruoming Pang, a highly regarded engineering leader who headed Apple&#8217;s 100-person foundation models (AFM) team, has left the company to join Meta&#8212;enticed, sources say, by a compensation package in the &#8220;tens of millions per year.&#8221; But the story goes deeper than a lucrative offer.</p><p>According to people familiar with the matter, the mood within Apple's AFM team has grown increasingly tense: </p><blockquote><p>"internal discussions have soured some of the morale" as new leadership explores replacing their work with third-party models, including from either OpenAI or Anthropic, to power a new version of Siri. 
</p></blockquote><p>Pang&#8217;s exit may signal the beginning of a wave of departures from the AFM group, as several engineers have informed colleagues of their intentions to leave soon for Meta or other opportunities, according to sources. With foundational AI work at risk of being sidelined, Apple may be facing a deeper challenge: keeping its top AI talent from walking out the door.</p><p>&#8594; Read: <a href="https://www.bloomberg.com/news/articles/2025-07-07/apple-loses-its-top-ai-models-executive-to-meta-s-hiring-spree">Apple Loses Top AI Models Executive to Meta&#8217;s Hiring Spree</a></p><div><hr></div><h2><strong>Assorted Links</strong></h2><p><em>Recent News You Shouldn&#8217;t Miss</em></p><h4>OpenAI</h4><ul><li><p><a href="https://www.reuters.com/business/media-telecom/openai-release-web-browser-challenge-google-chrome-2025-07-09/">Exclusive: OpenAI to release web browser in challenge to Google Chrome</a></p></li></ul><h4>xAI</h4><ul><li><p><a href="https://techcrunch.com/2025/07/15/doge-staffer-with-access-to-americans-personal-data-leaked-private-xai-api-key/">DOGE staffer with access to Americans&#8217; personal data leaked private xAI API key</a></p></li></ul><h4>Google</h4><ul><li><p><a href="https://www.ndtvprofit.com/business/google-is-said-to-pay-24-billion-for-windsurf-assets-talent">Google Is Said To Pay $2.4 Billion For Windsurf Assets, Talent</a></p></li></ul><h4>Other Whistleblowing Topics</h4><ul><li><p><a href="https://www.theguardian.com/world/2025/jul/08/theres-nowhere-for-them-to-hide-any-more-zelda-perkins-on-harvey-weinstein-and-ndas">&#8216;Nowhere for them to hide any more&#8217;: Zelda Perkins&#8217; fight against NDAs after Harvey Weinstein</a></p></li><li><p><a href="https://www.theguardian.com/law/2025/jul/07/uk-bosses-to-be-banned-using-ndas-cover-up-misconduct-work">UK bosses to be banned from using NDAs to cover up misconduct at work</a></p></li><li><p>Join <a href="https://suchir.org/">Ethical AI Summit: In Memory of 
Suchir Balaji</a></p></li></ul><div><hr></div><blockquote><h1>Policy &amp; Legal Updates</h1></blockquote><p><em>Updates on regulations with a focus on safeguarding individuals who voice concerns.</em></p><h2>New Amendments to Wiener&#8217;s Latest Bill SB 53: <strong>Renewed Bid for AI Transparency</strong></h2><p>After SB 1047's high-profile defeat amid industry opposition, Senator Scott Wiener is back with a more measured approach. </p><p>California's governor subsequently assembled a team of AI experts &#8212; including Fei-Fei Li, a prominent Stanford researcher and co-founder of World Labs &#8212; to create a policy group tasked with defining objectives for the state&#8217;s AI safety initiatives.</p><p>The group has since released its final recommendations, emphasizing the importance of requiring companies to disclose information about their AI systems to foster a &#8220;robust and transparent evidence environment.&#8221; According to a press release from Senator Wiener&#8217;s office, these recommendations played a significant role in shaping the recent amendments to SB 53. </p><h4><strong>What is New in SB 53</strong></h4><p>SB 53 would <strong>require frontier AI developers&#8212;like OpenAI, Google, Anthropic, and xAI&#8212;to publicly disclose their safety protocols and report critical incidents.</strong> The bill also proposes <strong>whistleblower protections</strong> for employees who speak up about &#8220;critical risks,&#8221; defined as events that could cause over 100 deaths or injuries or exceed $1 billion in damages.</p><h4><strong>What&#8217;s Next</strong></h4><p>With the recent amendments, SB 53 will now move to the California State Assembly Committee on Privacy and Consumer Protection for review. 
If it gains approval there, it must still pass through multiple additional legislative steps before making its way to Governor Newsom for final consideration.</p><p>&#8594; Read: <a href="https://techcrunch.com/2025/07/09/california-lawmaker-behind-sb-1047-reignites-push-for-mandated-ai-safety-reports/">California lawmaker behind SB 1047 reignites push for mandated AI safety reports</a></p><div><hr></div><blockquote><h1>Research Spotlight</h1></blockquote><p><em>Relevant research in the context of AI whistleblowing</em></p><h3><strong>The &#8220;Silicon Sentinels&#8221; Approach: How Whistleblowers Are Key to Managing AI Risk</strong></h3><p>A new article in the <em><a href="https://digitalcommons.liberty.edu/lu_law_review/vol19/iss4/5/">Liberty University Law Review</a></em> contends that insiders are uniquely positioned to understand the true capabilities and potential dangers of the systems they build. However, without robust legal safeguards, they are often reluctant to voice concerns for fear of retaliation. 
The authors, Jason Green-Lowe, Fynn Fehrenbach, and Mark Reddish (of the Center for AI Policy), assert that:</p><blockquote><p>Existing whistleblower protections have proven impactful in other fields, yet they are limited in applicability for &#8220;big tech&#8221; employees disclosing information that serves the public interest.</p></blockquote><p>The core of the problem is a profound "information asymmetry," exacerbated by several factors:</p><ul><li><p>Rapid Evolution: AI models are updated so quickly that by the time a trend is analyzed, it's already obsolete.</p></li><li><p>The Black Box Nature: The inner workings of large language models are inherently difficult to interpret, making it hard to predict their behavior reliably.</p></li><li><p>Talent and Culture Gaps: A shortage of qualified AI experts in government and a culture clash between Silicon Valley and Washington hinder effective communication and trust.</p></li></ul><p>"Silicon Sentinels" proposes a comprehensive framework for AI-specific whistleblower protections, advocating for a dual-track system of judicial and administrative remedies. This approach would protect a broad range of workers, from full-time engineers to third-party auditors, who disclose information about potential "critical harm" such as mass casualty events or large-scale economic damage, even if no law has been broken. 
This is meant to empower insiders to report such risks without fear, which could shift industry culture toward greater transparency and accountability.</p><p>&#8594; Read: <a href="https://digitalcommons.liberty.edu/lu_law_review/vol19/iss4/5/">Silicon Sentinels: Using Whistleblower Protections to Manage Information Asymmetry and AI Risk</a></p><div><hr></div><h2>Verification for International AI Governance, including Whistleblowing as Verification Method</h2><p>This report outlines workable approaches for verifying international AI agreements and illustrates how investments in verification today can shape future political possibilities. Three main findings are:</p><ol><li><p>Many agreements can be verified using current or near-term technologies, particularly in data center-based AI, with appropriate investments.</p></li><li><p>Some AI-related areas, like mobile AI in weapons, pose significant political and technical challenges to verification.</p></li><li><p>Early investments in verification infrastructure and supportive policies can enhance the feasibility of future agreements.</p></li></ol><p>An analysis of whistleblowing as an international verification method is provided in Appendix B, which explores two personnel-based verification approaches: interviews and whistleblower programs.</p><ul><li><p><strong>Interviews</strong> can reveal compliance information, but their reliability is limited by the Prover&#8217;s ability to control who is available and willing to speak. In high-stakes domains, interviews pose risks to security and are thus unlikely to be accepted; however, they may be effective in low-stakes contexts where circumvention efforts are minimal.</p></li><li><p><strong>Whistleblower programs</strong> offer a potential path to reliable verification by ensuring key individuals can report compliance violations from secure, neutral settings. 
Despite this, such programs face major challenges: the Prover can conceal violations, intimidate whistleblowers or ensure their loyalty, or manipulate information flow. These vulnerabilities limit the utility of such programs in sensitive domains like national security.</p></li></ul><p>&#8594; Read the Paper: <a href="https://aigi.ox.ac.uk/wp-content/uploads/2025/07/Verification-for-International-AI-Governance.pdf">Verification for International AI Governance</a></p><div><hr></div><blockquote><h1>Announcements &amp; Call to Action</h1></blockquote><p><em>Updates on publications, community initiatives, and &#8220;calls for topics&#8221; that seek contributions from experts addressing concerns inside Frontier AI.</em></p><h2>We are thrilled to announce that we will participate in the AI Panel at <strong><a href="https://www.whistleblowers.org/national-whistleblower-day-2025/?hsCtaAttrib=189611307300">National Whistleblower Day 2025</a></strong> with a <strong>Call to Action</strong>.</h2><p>Join and connect with us on <strong>Capitol Hill</strong> on <strong>July 30</strong> to support our mission: Supporting AI Insiders and Whistleblowers.</p><div><hr></div><p>Thank you for trusting OAISIS as your source for insights on protecting and empowering insiders who raise concerns within AI companies.</p><p>Your feedback is crucial to our mission. We invite you to share any thoughts, questions, or suggestions for future topics so that we can collaboratively enhance our understanding of the challenges and risks faced by those within AI companies. 
Together, we can continue to amplify and safeguard the voices of those working within AI companies who courageously address the challenges and risks they encounter.</p><p>If you found this newsletter valuable, please consider sharing it with colleagues or peers who are equally invested in shaping a safe and ethical future for AI.</p><p>Until next time,<br><em>The OAISIS Team</em></p><p></p>]]></content:encoded></item><item><title><![CDATA[INSIDE AI #15: OpenAI & Meta Talent Battle | OpenAI & Microsoft AGI Clause | Coming Soon: The AI Insider's OpSec & Privacy Toolkit and Valis Veil]]></title><description><![CDATA[Edition 15]]></description><link>https://aiwhistleblowerinitiative.substack.com/p/inside-ai-15-openai-and-meta-talent</link><guid isPermaLink="false">https://aiwhistleblowerinitiative.substack.com/p/inside-ai-15-openai-and-meta-talent</guid><dc:creator><![CDATA[AIWI]]></dc:creator><pubDate>Mon, 07 Jul 2025 14:03:45 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!kDvS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7bb06a0-cd46-4a5e-aaa7-04d7eb1bd78b_811x487.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><h3><strong>In This Edition:</strong></h3><div class="preformatted-block" data-component-name="PreformattedTextBlockToDOM"><label class="hide-text" contenteditable="false">Text within this block will maintain its original spacing when published</label><pre class="text">Key takeaways:</pre></div><p><strong>News:</strong></p><ul><li><p>Inside Meta and OpenAI: Top AI Talent Rivalry</p></li><li><p>OpenAI&#8217;s Restructuring: The AGI Clause That Fractured a Partnership</p></li></ul><p><strong>Announcements: </strong></p><ul><li><p>We are launching two tech initiatives soon: </p><ul><li><p><strong>The AI Insider&#8217;s OpSec &amp; Privacy Toolkit</strong></p></li><li><p><strong>Valis Veil </strong></p></li></ul></li></ul><p>Stay tuned!</p><ul><li><p>We 
are featured in <a href="https://milesbrundage.substack.com/i/166356815/evidence">Miles Brundage&#8217;s Substack</a> on Frontier AI Governance</p></li></ul><div><hr></div><blockquote><h1>Insider Currents </h1></blockquote><p><em>Carefully curated summaries and links to the latest news, spotlighting the voices and concerns emerging from within frontier AI companies.</em></p><p><em>We&#8217;re testing a different structure - featuring fewer in-depth insider current articles and a link collection. Let us know what you think.</em></p><h2><strong>Inside Meta and OpenAI: Top AI Talent Rivalry</strong></h2><p>In <a href="https://oaisis.substack.com/p/inside-ai-14-meta-hires-openai-insider">our last edition</a>, we spotlighted Sam Altman&#8217;s bold claim: <em>&#8220;None of OpenAI&#8217;s top talent accepted Meta&#8217;s offers (yet).&#8221;</em> </p><p>That statement no longer holds true.</p><h4><strong>Meta&#8217;s Successful Recruits</strong></h4><p>Among the notable hires is <strong>Lucas Beyer</strong>, a respected figure in computer vision research at OpenAI. 
According to <em><a href="https://techcrunch.com/2025/06/27/meta-is-offering-multimillion-dollar-pay-for-ai-researchers-but-not-100m-signing-bonuses/">TechCrunch</a></em>, Beyer&#8217;s skill set aligns closely with Meta&#8217;s growing strategic focus on &#8220;entertainment AI&#8221; rather than &#8220;productivity AI,&#8221; a direction Meta CPO Chris Cox described during a leaked internal meeting reported by <em><a href="https://www.theverge.com/command-line-newsletter/694028/meta-openai-100-million-bonus-talent-war">The Verge</a></em>.</p><p>Joining Beyer is <strong>Trapit Bansal</strong>, an OpenAI researcher since 2022 known for his pioneering work on AI reasoning models. Four additional senior researchers, <strong>Shengjia Zhao, Jiahui Yu, Shuchao Bi</strong>, and <strong>Hongyu Ren</strong>, have also made the move, according to a source familiar with the matter, <em>The Information</em> wrote.</p><h4><strong>The Zuckerberg Pitch: High Compensation &amp; Unlimited Compute</strong></h4><p>Zuckerberg has been meeting with candidates one-on-one, offering them aggressive compensation and, perhaps most critically, <em>unlimited access to compute resources</em>.</p><p>This last point has become a particular sore spot for OpenAI. As <em>Wired</em> reported, several researchers within the organization have expressed frustration over Sam Altman&#8217;s unfulfilled promises:</p><blockquote><p><em>&#8220;At OpenAI, researchers have complained that Altman has been known to promise access to GPUs, only to feel like there was no follow-through from leadership.&#8221;</em></p></blockquote><p>By contrast, Meta is positioning itself as a haven for researchers constrained by compute bottlenecks elsewhere.</p><p>Meta&#8217;s compensation offers are not just generous; they&#8217;re unprecedented in scale.
According to multiple sources with direct knowledge of the negotiations, as reported by <em>Wired</em>, the company has extended at least 10 exceptionally high-value offers to OpenAI employees, including one pitch for a chief scientist role that was ultimately declined. Other employees indicated that:</p><blockquote><p>&#8220;&#8230;they were weighing the money against the potential impact they could have at Meta in comparison to OpenAI. Several believed their impact would be greater at OpenAI.&#8221;</p></blockquote><h4><strong>OpenAI&#8217;s Internal Response</strong></h4><p>In an <a href="https://www.wired.com/story/openai-meta-leadership-talent-rivalry/">internal note</a>, Mark Chen, OpenAI&#8217;s chief research officer, likened the situation to &#8220;someone breaking into our home and stealing something.&#8221;</p><p>He assured employees that he, along with CEO Sam Altman and other company leaders, was working tirelessly to engage with those receiving external offers. He emphasized that they were being more proactive than ever, reassessing compensation, and exploring innovative ways to acknowledge and retain top talent.</p><p>However, many employees have been working 80-hour weeks, as reported by <em>Wired</em>. The company plans to shut down for a week to allow staff to rest. There is also growing internal concern that OpenAI has become too reactive by chasing frequent product launches rather than focusing on longer-term scientific goals. 
</p><blockquote><p><em>&#8220;The company  (OpenAI) is getting too caught up in short-term comparison with the competition,&#8221;</em> </p></blockquote><p>said one former employee who worked closely with Altman, adding that the CEO had been pushing for <em>&#8220;buzzy announcements every few months.&#8221;</em></p><p>&#8594; Read: <a href="https://www.wired.com/story/mark-zuckerberg-meta-offer-top-ai-talent-300-million/">Here&#8217;s What Mark Zuckerberg Is Offering Top AI Talent</a></p><p>&#8594; Read: <a href="https://www.wired.com/story/openai-meta-leadership-talent-rivalry/">OpenAI Leadership Responds to Meta Offers: &#8216;Someone Has Broken Into Our Home&#8217;</a></p><p>&#8594; Read: <a href="https://www.theinformation.com/briefings/meta-announces-new-superintelligence-lab?rc=qlomo6">Meta Announces New Superintelligence Lab</a></p><p>&#8594; Read: <a href="https://techcrunch.com/2025/06/27/meta-is-offering-multimillion-dollar-pay-for-ai-researchers-but-not-100m-signing-bonuses/">Meta is offering multimillion-dollar pay for AI researchers, but not $100M &#8216;signing bonuses&#8217;</a></p><p>&#8594; Read: <a href="https://www.theinformation.com/articles/meta-hires-four-openai-researchers?rc=qlomo6">Meta Hires Four More OpenAI Researchers</a></p><div><hr></div><h2><strong>OpenAI&#8217;s Restructuring: The AGI Clause That Fractured a Partnership</strong></h2><p>Back in 2019, when Microsoft and OpenAI were negotiating their partnership agreement, industry insiders dismissed the inclusion of an AGI clause as pure fantasy. 
</p><blockquote><p>&#8220;Everyone laughed at this,&#8221; said one person involved in the contract negotiations, according to <em><a href="https://www.theinformation.com/articles/openai-microsoft-duel-agi-high-stakes-negotiation?rc=qlomo6">The Information</a></em>.</p></blockquote><p>Fast-forward to today, and that same clause has evolved into a major point of contention in this partnership&#8212;leading to high-stakes negotiations amid OpenAI&#8217;s recent restructuring.</p><p>Following our earlier reports on the company&#8217;s internal changes, this week&#8217;s update focuses on how the AGI clause has reignited friction between OpenAI and Microsoft.</p><h4><strong>The Restructuring Standoff</strong></h4><p>The definition of artificial general intelligence remains disputed, complicating efforts to determine when the milestone has been achieved. OpenAI and Microsoft, however, have introduced a financial benchmark that ties the attainment of AGI to OpenAI&#8217;s ability to generate $100 billion in profits.</p><p>Although OpenAI is currently viewed as far from profitable&#8212;according to internal documents reported by <em>The New York Times</em>&#8212;the wording of their AGI definition could potentially allow the company to claim success regardless of profitability, wrote <em><a href="https://www.businessinsider.com/openai-and-microsoft-are-clashing-over-money-power-and-agi-2025-6">Business Insider</a></em>.</p><p>This uncertainty has created tension between the partners. Microsoft reportedly seeks to eliminate this clause as part of approving OpenAI&#8217;s corporate restructuring, which is critical for the AI company's plans to secure billions in new funding, sources told <em><a href="https://www.theinformation.com/articles/openai-microsoft-duel-agi-high-stakes-negotiation?rc=qlomo6">The Information</a></em>.</p><h4>The Timeline Divide</h4><p>The disagreement  extends to leadership levels. 
Microsoft CEO Satya Nadella has expressed skepticism about artificial general intelligence, while OpenAI CEO Sam Altman has asserted that it is &#8220;<a href="https://www.businessinsider.com/openai-and-microsoft-are-clashing-over-money-power-and-agi-2025-6">just around the corner</a>.&#8221; This divergence has reportedly caused frustration among OpenAI executives. Earlier this year, Nadella dismissed self-declared AGI milestones as &#8220;nonsensical benchmark hacking&#8221; during a podcast appearance. <em>The Information</em> also reported:</p><blockquote><p>&#8220;Microsoft believes OpenAI won&#8217;t be able to declare it has developed AGI before 2030, when the companies&#8217; current deal is set to expire, according to a person who spoke to Nadella.&#8221;</p></blockquote><p></p><h4>More Developments Unfold:</h4><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kDvS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7bb06a0-cd46-4a5e-aaa7-04d7eb1bd78b_811x487.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!kDvS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7bb06a0-cd46-4a5e-aaa7-04d7eb1bd78b_811x487.png 424w, https://substackcdn.com/image/fetch/$s_!kDvS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7bb06a0-cd46-4a5e-aaa7-04d7eb1bd78b_811x487.png 848w, https://substackcdn.com/image/fetch/$s_!kDvS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7bb06a0-cd46-4a5e-aaa7-04d7eb1bd78b_811x487.png 1272w,
https://substackcdn.com/image/fetch/$s_!kDvS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7bb06a0-cd46-4a5e-aaa7-04d7eb1bd78b_811x487.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!kDvS!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7bb06a0-cd46-4a5e-aaa7-04d7eb1bd78b_811x487.png" width="870" height="522.4290998766954" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c7bb06a0-cd46-4a5e-aaa7-04d7eb1bd78b_811x487.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:487,&quot;width&quot;:811,&quot;resizeWidth&quot;:870,&quot;bytes&quot;:85960,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://oaisis.substack.com/i/167364061?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7bb06a0-cd46-4a5e-aaa7-04d7eb1bd78b_811x487.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!kDvS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7bb06a0-cd46-4a5e-aaa7-04d7eb1bd78b_811x487.png 424w, https://substackcdn.com/image/fetch/$s_!kDvS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7bb06a0-cd46-4a5e-aaa7-04d7eb1bd78b_811x487.png 848w, 
https://substackcdn.com/image/fetch/$s_!kDvS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7bb06a0-cd46-4a5e-aaa7-04d7eb1bd78b_811x487.png 1272w, https://substackcdn.com/image/fetch/$s_!kDvS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7bb06a0-cd46-4a5e-aaa7-04d7eb1bd78b_811x487.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" style="height:20px;width:20px" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>&#8594; Read: <a 
href="https://www.theinformation.com/articles/openai-microsoft-duel-agi-high-stakes-negotiation?rc=qlomo6">OpenAI and Microsoft Duel Over AGI in High-Stakes Negotiation</a></p><p>&#8594; Read: <a href="https://www.wsj.com/tech/ai/openai-microsoft-rift-hinges-on-how-smart-ai-can-get-82566509">OpenAI, Microsoft Rift Hinges on How Smart AI Can Get</a></p><p>&#8594; Read: <a href="https://www.businessinsider.com/openai-and-microsoft-are-clashing-over-money-power-and-agi-2025-6">Inside the High-Stakes Rift Straining The Most Powerful Alliance in Tech</a></p><p>&#8594; Read: <a href="https://www.businessinsider.com/inside-the-battle-over-microsofts-access-to-openais-technology-2025-7">Inside the Battle Over Microsoft's Access to OpenAI's Technology</a></p><div><hr></div><h2>Assorted Links</h2><p><em>Recent News You Shouldn&#8217;t Miss</em></p><h4>OpenAI</h4><ul><li><p>Read: <a href="https://www.theinformation.com/articles/openai-quietly-designed-rival-google-workspace-microsoft-office?rc=qlomo6">OpenAI Quietly Designed a Rival to Google Workspace, Microsoft Office</a></p></li></ul><h4>Microsoft</h4><ul><li><p>Read: <a href="https://www.theinformation.com/articles/microsofts-ai-chip-effort-falls-behind?rc=qlomo6">Microsoft&#8217;s AI Chip Effort Falls Behind</a></p></li></ul><h4>Thinking Machines Lab</h4><ul><li><p>Read:<strong> </strong><a href="https://www.businessinsider.com/muratis-new-ai-startup-salary-technical-talent-2025-6">Mira Murati's Secretive New AI Startup is Paying Top Dollar for Technical Talent</a></p></li></ul><h4>DeepSeek</h4><ul><li><p>Read:<strong> </strong><a href="https://www.theinformation.com/articles/deepseeks-progress-stalled-u-s-export-controls?rc=qlomo6">DeepSeek&#8217;s Progress Stalled by U.S. 
Export Controls</a></p></li></ul><div><hr></div><blockquote><h1>Announcements &amp; Call to Action</h1></blockquote><p><em>Updates on publications, community initiatives, and &#8220;call for topics&#8221; that seek contributions from experts addressing concerns inside Frontier AI.</em></p><h2><strong>Coming Soon: Two New Tech Initiatives</strong></h2><p>We&#8217;re thrilled to announce the upcoming launch of two tech offerings:</p><ul><li><p><strong>The AI Insider&#8217;s OpSec &amp; Privacy Toolkit</strong></p></li><li><p><strong>Valis Veil</strong> - A local text anonymizer to remove your linguistic fingerprint from your writing without leaving any trace.</p></li></ul><p>More to come! Stay tuned!</p><div><hr></div><h2>OAISIS Featured in Miles Brundage&#8217;s Substack</h2><p>In his latest post on AI governance, <strong><a href="https://www.linkedin.com/in/miles-brundage-49b62a4/">Miles Brundage</a></strong> outlines his foundational triad on AI governance: Standards, Incentives, Evidence. He highlights whistleblower protections and support alongside external validation as part of the <a href="https://milesbrundage.substack.com/i/166356815/evidence">&#8220;Evidence</a>&#8221; category. <strong><a href="https://www.linkedin.com/company/oaisis-ai/">OAISIS </a></strong>was mentioned as part of this evolving ecosystem. 
</p><p><strong>Read Miles&#8217;s Substack: </strong></p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:166356815,&quot;url&quot;:&quot;https://milesbrundage.substack.com/p/standards-incentives-and-evidence&quot;,&quot;publication_id&quot;:3217000,&quot;publication_name&quot;:&quot;Miles&#8217;s Substack&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Tdhc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d0f2d21-d8c6-4214-887c-158f16d163fc_144x144.png&quot;,&quot;title&quot;:&quot;Standards, Incentives, and Evidence: The Frontier AI Governance Triad&quot;,&quot;truncated_body_text&quot;:&quot;Introduction&quot;,&quot;date&quot;:&quot;2025-06-19T22:01:28.315Z&quot;,&quot;like_count&quot;:20,&quot;comment_count&quot;:0,&quot;bylines&quot;:[{&quot;id&quot;:280070124,&quot;name&quot;:&quot;Miles Brundage&quot;,&quot;handle&quot;:&quot;milesbrundage&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d0f2d21-d8c6-4214-887c-158f16d163fc_144x144.png&quot;,&quot;bio&quot;:&quot;Independent AI policy researcher&quot;,&quot;profile_set_up_at&quot;:&quot;2024-10-23T16:28:11.338Z&quot;,&quot;reader_installed_at&quot;:null,&quot;publicationUsers&quot;:[{&quot;id&quot;:3276398,&quot;user_id&quot;:280070124,&quot;publication_id&quot;:3217000,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:3217000,&quot;name&quot;:&quot;Miles&#8217;s Substack&quot;,&quot;subdomain&quot;:&quot;milesbrundage&quot;,&quot;custom_domain&quot;:null,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;My personal 
Substack&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4d0f2d21-d8c6-4214-887c-158f16d163fc_144x144.png&quot;,&quot;author_id&quot;:280070124,&quot;primary_user_id&quot;:280070124,&quot;theme_var_background_pop&quot;:&quot;#FF6719&quot;,&quot;created_at&quot;:&quot;2024-10-23T16:28:16.195Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Miles Brundage&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;newspaper&quot;,&quot;is_personal_mode&quot;:false}}],&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://milesbrundage.substack.com/p/standards-incentives-and-evidence?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!Tdhc!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d0f2d21-d8c6-4214-887c-158f16d163fc_144x144.png" loading="lazy"><span class="embedded-post-publication-name">Miles&#8217;s Substack</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">Standards, Incentives, and Evidence: The Frontier AI Governance Triad</div></div><div class="embedded-post-body">Introduction&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">9 months ago &#183; 20 likes &#183; Miles Brundage</div></a></div><div><hr></div><p>Thank you for trusting OAISIS as your 
source for insights on protecting and empowering insiders who raise concerns in Frontier AI.</p><p>Your feedback is crucial to our mission. We invite you to share any thoughts, questions, or suggestions for future topics so that we can collaboratively enhance our understanding of the challenges and risks faced by individuals in Frontier AI companies. Together, we can continue to amplify and safeguard the voices of those who courageously address these challenges and risks.</p><p>If you found this newsletter valuable, please consider sharing it with colleagues or peers who are equally invested in shaping a safe and ethical future for AI.</p><p>Until next time,<br><em>The OAISIS Team</em></p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading OAISIS: Inside AI!
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[AI Whistleblower Recommendations in CA's Working Group Report: Short on Highlighting International Best Practices, But Great in Highlighting Importance]]></title><description><![CDATA[How the state's latest AI policy recommendations fall short of international best practices]]></description><link>https://aiwhistleblowerinitiative.substack.com/p/ai-whistleblower-recommendations</link><guid isPermaLink="false">https://aiwhistleblowerinitiative.substack.com/p/ai-whistleblower-recommendations</guid><dc:creator><![CDATA[AIWI]]></dc:creator><pubDate>Wed, 25 Jun 2025 13:47:19 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2c875d15-53f8-4157-a902-700ea6df15c6_1584x396.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Following Governor Gavin Newsom's veto of SB 1047, California's Joint Working Group has released comprehensive policy recommendations for future AI regulation. 
<a href="https://www.gov.ca.gov/wp-content/uploads/2025/06/June-17-2025-%E2%80%93-The-California-Report-on-Frontier-AI-Policy.pdf">The 52-page report</a>, authored by experts from Stanford, UC Berkeley, and the Carnegie Endowment for International Peace, surveys existing regulatory approaches and synthesizes them into recommendations for California policymakers.</p><p>The report emphasizes transparency and "trust but verify" principles&#8212;a solid foundation.</p><blockquote><p>It is <strong>fantastic to see the working group identify and highlight the lack of whistleblowing protections as a critical gap</strong> - expanding on the current California Labor Code (<a href="https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?lawCode=LAB&amp;sectionNum=1102.5">Link</a>) protecting whistleblowers.</p><p>However, the <strong>whistleblowing protections discussed</strong> for California <strong>fall short of international standards.</strong></p><p><strong>These international standards already apply to most AI companies today</strong> - just in different jurisdictions.</p></blockquote><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading OAISIS: Inside AI!
Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>What the Working Group Got Right</h2><p>The Working Group calls for protection of individuals making reports that go beyond strict legal violations, recognizing that AI risks may be difficult to predict and codify in advance &#8211; this is crucial, and it is good to see it highlighted.</p><p>They further call for establishing a reasonable threshold for protection, requiring only "good faith" rather than definitive proof of wrongdoing.</p><h2>Where the Recommendations Fall Short</h2><p>Confusingly, the whistleblower protection recommendations claim in places to reflect international best practices, yet overall they remain closely aligned with the US/California regulatory status quo.</p><p>Areas where this gap is most obvious:</p><blockquote><p>1. Personal scope (the WG calls for protecting only a narrow set of individuals &#8211; best practice goes much further)</p><p>2. Public disclosure rights for imminent threats (none called for by WG)</p><p>3. Retaliation protections (no strengthening of burden of proof reversal called for)</p><p>4. Timelines for handling and responding to reports (none called for)</p><p>5. Training &amp; transparency requirements for covered persons and company &#8220;internal&#8221; whistleblowing channels (not mentioned)</p><p>6. Toothless penalties (violations are not punished sufficiently)</p></blockquote><h3>Personal Scope: Who Can Report on Misbehaviour?
The California WG Stays (Far) Behind Global Best Practice</h3><p>The Working Group recommends coverage for at least employees but frames coverage of further parties as contested: "However, a central question in the AI context is whether protections apply to additional parties, such as contractors."</p><p>They then continue to argue that "broader coverage may provide stronger accountability benefits but also imposes greater cost: To extend protections to contractors and third parties, developers may need to implement additional reporting channels and legal frameworks."</p><p>We do not believe this reasoning holds up:</p><p>1. The EU Whistleblowing Directive goes <em>much further </em>and is already in effect: It extends personal scope to essentially all individuals interacting with a covered company in a "professional context" &#8211; that is, <strong>suppliers, customers, shareholders, board members, unpaid advisors, evaluation providers, and facilitators (e.g. non-profits) helping insiders</strong>.</p><p>2. The EU Directive already requires all companies with over 50 employees to maintain internal whistleblowing channels for reporting misconduct that are accessible to these broader stakeholder groups.</p><p>3. <strong>The vast majority of AI companies covered by the proposed policy already comply with the EU Whistleblowing Directive today, and several companies, including Meta and OpenAI, already have public channels for reporting concerns that could easily be extended to e.g. the US.</strong></p><p><strong>In the EU, introducing these channels and scope extensions has not produced an overwhelming volume of reports &#8211; in fact, company satisfaction with expanded internal whistleblowing channels has been very high.
(EQS, 2021).</strong></p><h3>No Public Disclosure Allowed: A Critical Gap to Prevent Imminent Harm</h3><p>The Working Group does not discuss allowing individuals to go public with concerns under any circumstances.</p><p>This leaves a gap for individuals who suspect imminent harm when internal or regulatory channels fail, or when those channels are expected to be ineffective.</p><p>This falls below international best practice. The EU Whistleblowing Directive protects individuals who disclose issues to the public when other channels fail. California had the opportunity to match this standard but chose not to.</p><h3>Retaliation Protections: Inadequate Standards</h3><p>The report does not mention improvements to the burden of proof reversal process for demonstrating retaliation.</p><p>This is not ideal: existing California standards on burden of proof reversal are decent, but not best practice globally. California requires individuals who were (allegedly) retaliated against to prove that retaliation was at least a contributing factor in e.g. their dismissal from the company. This is not trivial for a whistleblower to prove without access to e.g. internal communications and decision-making processes.</p><p>Individuals do not have to prove this in other jurisdictions.
At this point, you might already be able to guess in which jurisdiction the duty is <em>fully</em> <em>on the company</em> to prove that retaliation did not occur.</p><h3>Keeping Whistleblowers Informed on Report Progress: Missing Basic Requirements</h3><p>Whistleblowers, whether internal or external, largely act out of moral obligation and really care about the progress or outcomes of their reports.</p><p>The Working Group does not recommend setting timelines for acknowledging reports or providing feedback to whistleblowers in either the internal or regulator channels.</p><p>This leaves insiders motivated by moral duty "up in the air," potentially driving undesirable public disclosures where people feel powerless.</p><p>The EU Whistleblowing Directive establishes such timelines (7 days and 3 months). Precedent exists.</p><h3>Enforcement: Toothless Penalties</h3><p>Current California whistleblower law provides civil penalties "not exceeding ten thousand dollars ($10,000) per employee for each violation." This is insufficient given that frontier AI company employees frequently earn compensation in the medium-to-high six figures.</p><p>The penalty structure also provides no meaningful deterrent relative to the stakes involved.</p><p>The Working Group could have called this out.</p><h3>Training and Education of Covered Persons: The Missing Foundation</h3><p>The best policy system is worthless if employees don't know about it. 
California labor code sections (<a href="https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240AB2299">link</a>) on whistleblowing require companies to provide information on external whistleblowing rights to employees.</p><p>Yet ongoing proprietary research (our survey) and published anecdotal evidence (<a href="https://substack.com/home/post/p-159743362">link</a>) show that <strong>insiders are consistently unaware of their rights and internal reporting opportunities</strong>.</p><blockquote><p>"I'm not well-informed about our company's whistleblowing procedures (and it feels uncomfortable to inquire about them directly)."<br><br><em>- A frontier AI company insider from the US (source: Proprietary survey, rephrased)</em></p></blockquote><p>This is clearly inadequate. Ample precedent exists for placing duties on employers to inform their employees of available options.</p><p>An ideal solution would require companies to demonstrate the effectiveness of their training.
For example, requiring 90% of a random sample of employees to answer 80% of non-trivial questions about internal and external whistleblowing options and rights correctly.</p><h2>Implementation Reality: Systems Don't Work Without Trust</h2><p>Despite existing California whistleblower protections, preliminary results from our research study show that <strong>only a small percentage of current AI company insiders trust the government to act with speed and knowledge on whistleblowing reports, especially on complex topics.</strong></p><blockquote><p>"Without knowing the appropriate contact person or agency, I wouldn't attempt to reach out [to a regulator] currently."</p><p><em>- A frontier AI company insider from the US (source: Proprietary survey, rephrased)</em></p></blockquote><p>What's needed?</p><p>Plans for staffing recipient bodies with technical expertise, or at least with the resources and rights to consult independent experts swiftly.</p><p>Besides psychological and financial aid, some European countries already provide e.g. independent advisory services to potential whistleblowers, helping them understand whether their concerns fall within policy scope.</p><p>Here, California could actually lead the way, as a lack of AI expertise in regulators is a struggle felt even more strongly in the EU than in the US.</p><div class="pullquote"><p>Our survey remains open to current and former Frontier AI company employees. If that is you, we encourage you to participate and share this opportunity with colleagues. Your insights into both internal company reporting systems and external regulatory pathways are essential for understanding the real barriers to effective risk reporting and driving evidence-based policy reform. Join the effort!
<strong>Take the 10-minute anonymous survey <a href="https://oaisis.limesurvey.net/2?utm_source=linkedin&amp;utm_medium=social&amp;utm_campaign=Ongoing%20Engagement%20/%20Reminder">here</a>.</strong></p></div><h2>Why This Matters</h2><p>California's position as the epicenter of AI development gives it unique influence over global standards. The companies developing the most advanced AI systems are headquartered there. The talent pipeline runs through its universities. The investment capital flows through its venture firms.</p><h2>Moving Forward</h2><p>The Working Group's report provides policymakers with a comprehensive survey of existing approaches and a solid analytical framework.</p><p>The question is whether California will choose to adopt the more ambitious standards that international experience has proven workable, or settle for approaches that haven't kept pace with the global state of the art.</p><p>The infrastructure exists. The precedents are established. The need is clear. California has the opportunity to demonstrate that leading innovation and robust safety protections aren't just compatible&#8212;they're mutually reinforcing.</p><p>Whether the state seizes this opportunity will signal how seriously it takes its role in shaping the future of AI governance worldwide.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading OAISIS: Inside AI! 
Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[INSIDE AI #14: New Report with Insider Quotes, Meta Hires, OpenAI Insider Risk, Chip Smuggling & Whistleblowing, Comments on Whistleblowing in RAISE Act and California Report on Frontier AI Policy ]]></title><description><![CDATA[Edition 14]]></description><link>https://aiwhistleblowerinitiative.substack.com/p/inside-ai-14-meta-hires-openai-insider</link><guid isPermaLink="false">https://aiwhistleblowerinitiative.substack.com/p/inside-ai-14-meta-hires-openai-insider</guid><dc:creator><![CDATA[AIWI]]></dc:creator><pubDate>Mon, 23 Jun 2025 14:15:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!BH5K!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68f8baf8-c768-43e1-bf4a-c5210e1c9a86_836x548.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><h3><strong>In This Edition:</strong></h3><div class="preformatted-block" data-component-name="PreformattedTextBlockToDOM"><label class="hide-text" contenteditable="false">Text within this block will maintain its original spacing when published</label><pre class="text">Key takeaways:</pre></div><ul><li><p>New Publication on AI Whistleblowing to which we contributed some first quotes from our anonymous insider survey</p></li><li><p>Insider Currents</p><ul><li><p>Meta's Superintelligence Push: Zuckerberg's Personal Recruitment Drive</p></li><li><p>OpenAI Changes Up &#8216;Insider Risk&#8217; Team (And What is an &#8216;Insider Risk&#8217; Team, Anyways?)</p></li></ul></li><li><p>Whistleblower 
Program to Combat AI Chip Smuggling &amp; Recent Report by CNAS</p></li><li><p>Policy - Our Comments on&#8230;</p><ul><li><p>RAISE Act has Passed New York Assembly - Whistleblower Protections Removed in Last Iteration (But Not All is Lost)</p></li><li><p>Joint California Working Group Recommends Whistleblower Protections</p></li></ul></li></ul><div><hr></div><blockquote><h1>Publication Announcement: Whistleblower Protections for AI Employees </h1></blockquote><p>We are happy to share the <a href="https://www.centeraipolicy.org/work/whistleblower-protections-for-ai-employees">public release of a report </a>written by <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Center for AI Policy&quot;,&quot;id&quot;:189851810,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cbe0c101-e5ec-4d03-bc26-ea4d9b48b454_197x197.png&quot;,&quot;uuid&quot;:&quot;eb2931a4-d3de-4502-92e6-aa41f529ef36&quot;}" data-component-name="MentionToDOM"></span>, <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Psst...&quot;,&quot;id&quot;:289486162,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/63929a29-8944-4ebf-8f41-089016d2fb28_1000x1000.jpeg&quot;,&quot;uuid&quot;:&quot;5b152c96-da39-4169-8472-637557bfa657&quot;}" data-component-name="MentionToDOM"></span>, <a href="https://carma.org/">CARMA</a>, with contributions from us. In a nutshell: </p><p>Unlike other high-risk industries like aviation or nuclear energy, AI has no dedicated federal whistleblower protections for employees who witness dangerous practices.</p><p>This protection gap is particularly dangerous because AI companies operate with secrecy while government agencies lack the technical expertise to effectively oversee them. Current protections are inadequate. 
AI employees must either rely on patchy state laws or try to frame safety concerns as securities fraud under existing financial regulations. </p><p>We provided <strong>some quotes from our ongoing anonymous survey</strong> with individuals working at frontier AI companies (<a href="https://oaisis.limesurvey.net/2?utm_source=substack&amp;utm_medium=email&amp;utm_campaign=edition14">we are looking for more respondents, if you can think of someone!</a>) to demonstrate <strong>how this &#8216;gap&#8217; feels on the inside:</strong> </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!BH5K!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68f8baf8-c768-43e1-bf4a-c5210e1c9a86_836x548.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!BH5K!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68f8baf8-c768-43e1-bf4a-c5210e1c9a86_836x548.png 424w, https://substackcdn.com/image/fetch/$s_!BH5K!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68f8baf8-c768-43e1-bf4a-c5210e1c9a86_836x548.png 848w, https://substackcdn.com/image/fetch/$s_!BH5K!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68f8baf8-c768-43e1-bf4a-c5210e1c9a86_836x548.png 1272w, https://substackcdn.com/image/fetch/$s_!BH5K!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68f8baf8-c768-43e1-bf4a-c5210e1c9a86_836x548.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!BH5K!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68f8baf8-c768-43e1-bf4a-c5210e1c9a86_836x548.png" width="836" height="548" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/68f8baf8-c768-43e1-bf4a-c5210e1c9a86_836x548.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:548,&quot;width&quot;:836,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:156135,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://oaisis.substack.com/i/166150444?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68f8baf8-c768-43e1-bf4a-c5210e1c9a86_836x548.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!BH5K!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68f8baf8-c768-43e1-bf4a-c5210e1c9a86_836x548.png 424w, https://substackcdn.com/image/fetch/$s_!BH5K!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68f8baf8-c768-43e1-bf4a-c5210e1c9a86_836x548.png 848w, https://substackcdn.com/image/fetch/$s_!BH5K!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68f8baf8-c768-43e1-bf4a-c5210e1c9a86_836x548.png 1272w, https://substackcdn.com/image/fetch/$s_!BH5K!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68f8baf8-c768-43e1-bf4a-c5210e1c9a86_836x548.png 1456w" sizes="100vw" 
loading="lazy"></picture></div></a></figure></div><p>Quotes are rephrased. </p><p>The report recommends comprehensive federal legislation that would protect AI workers, including contractors and advisors, from retaliation when reporting public safety risks, not just illegal activity. Key provisions should include making restrictive NDAs unenforceable for safety disclosures, allowing employees to share trade secrets with government oversight bodies, and ensuring fast access to federal courts. 
Without these protections, the information asymmetry between AI companies and regulators will persist, potentially hiding risks until it's too late to prevent catastrophic outcomes.</p><p>&#8594; <a href="https://www.centeraipolicy.org/work/whistleblower-protections-for-ai-employees">Find the Report Here</a></p><blockquote><h1>Insider Currents </h1></blockquote><p><em>Carefully curated summaries and links to the latest news, spotlighting the voices and concerns emerging from within AI labs.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading OAISIS: Inside AI! Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p><em>We&#8217;re testing a different structure - featuring a larger link collection and fewer in-depth insider current articles. Let us know what you think.  </em></p><h2><strong>Meta's Superintelligence Push: Zuckerberg's Personal Recruitment Drive:</strong></h2><p>According to Bloomberg sources, Mark Zuckerberg is personally assembling a secretive "superintelligence team" after growing frustrated with Meta's AI shortfalls. The CEO has rearranged desks at Meta's Menlo Park headquarters so approximately 50 new hires will sit near him, and created a WhatsApp group called "Recruiting Party" where executives discuss potential targets around the clock. 
Bloomberg reports that Zuckerberg handles initial outreach himself and stays in regular contact throughout hiring processes, pitching recruits during lunches and dinners at his California homes.</p><p>The recruitment drive follows disappointment with Llama 4's performance in April and the delayed launch of Meta's "Behemoth" model, which leadership determined didn't sufficiently improve on previous versions. According to people familiar with the matter, AI-focused staff have been working nights and weekends under pressure to meet Zuckerberg's year-end goal of having the best AI offering in terms of usage and performance. The push includes Meta's $14.3 billion investment in Scale AI, bringing founder Alexandr Wang to lead the new team.</p><p>Meta's aggressive talent acquisition has extended to offering $100 million signing bonuses to OpenAI employees, according to CEO Sam Altman on the Uncapped podcast. "I've heard that Meta thinks of us as their biggest competitor," Altman said, though he claimed none of OpenAI's top talent accepted the offers. CNBC reports that after failing to acquire Ilya Sutskever's $32 billion startup Safe Superintelligence, Meta moved to hire its CEO Daniel Gross and former GitHub CEO Nat Friedman, while taking a stake in their venture fund. 
The talent war reflects broader concerns about Meta's AI trajectory, with internal doubts about product direction and continued high-level departures from the company's AI units.<em><br></em><br>&#8594; <a href="https://www.bloomberg.com/news/articles/2025-06-10/zuckerberg-recruits-new-superintelligence-ai-group-at-meta">Read the Bloomberg Article</a><br>&#8594; <a href="https://www.reuters.com/business/sam-altman-says-meta-offered-100-million-bonuses-openai-employees-2025-06-18/#:~:text=Sam%20Altman%20says%20Meta%20offered%20%24100%20million%20bonuses%20to%20OpenAI%20employees,-By%20Reuters&amp;text=June%2017%20(Reuters)%20%2D%20OpenAI,up%20its%20artificial%20intelligence%20strategy.">Read the Reuters Article</a><br>&#8594; <a href="https://www.cnbc.com/2025/06/19/meta-tried-to-buy-safe-superintelligence-hired-ceo-daniel-gross.html">Read CNBC Article</a></p><div><hr></div><h2><strong>OpenAI Changes Up &#8216;Insider Risk&#8217; Team:</strong></h2><p>According to sources familiar with the matter, OpenAI has laid off some members of its insider risk team and plans to revamp the unit as the company faces new internal security threats. The company confirmed the changes, stating it aims to restructure the team to address evolving risks as OpenAI has grown and expanded its vendor relationships.</p><p>The restructuring reflects growing industry concerns about protecting AI model weights from internal threats. Anthropic recently implemented "ASL-3 Security Standards" with "more than 100 different security controls" for its Claude Opus 4 release, though the company acknowledged that "sophisticated insider risk" remains beyond current protections. These specialized teams focus on threats from employees or contractors who could steal company assets, distinct from general AI safety work. 
OpenAI's job postings describe roles involving detecting internal threats and monitoring for "foreign government involvement in IP theft," stating the company sought investigators "as a part of our commitment to the White House."<br><br>&#8594; <a href="https://www.theinformation.com/articles/ai-startups-continue-masquerade-saas-businesses-openai-lays-insider-risk-employees?rc=qlomo6">Read The Information Article</a><br>&#8594; <a href="http://calendar.google.com/calendar/u/0/r/week">Archived OpenAI Job Listing for Context on Insider Risk Team</a>, <a href="https://www.ccn.com/news/technology/openai-hiring-insider-risk-investigator-collaborating-white-house/">Related Announcement</a><br>&#8594; <a href="https://www.anthropic.com/news/activating-asl3-protections">Anthropic ASL-3 Announcements incl. Context on Insider Threats</a>, <a href="https://www.lesswrong.com/posts/KrYNqLkaCnBdHpZAs/anthropic-rewrote-its-rsp">Discussion on LessWrong</a></p><div><hr></div><h2><strong>Assorted Links</strong></h2><ul><li><p>OpenAI</p><ul><li><p>OpenAI Seeks New Financial Concessions From Microsoft, a Top Shareholder<br>&#8594; <a href="#">Read the full article</a></p></li><li><p>OpenAI Employees Have Cashed Out $3 Billion in Shares<br><a href="https://www.theinformation.com/articles/openai-employees-cashed-3-billion-shares?rc=qlomo6">&#8594; Read the full article</a></p></li><li><p>OpenAI Has Discussed Raising Money From Saudi Arabia, Indian Investors<br>&#8594; <a href="https://www.theinformation.com/articles/openai-discussed-raising-money-saudi-arabia-indian-investors?rc=qlomo6">Read the full article</a></p></li></ul></li><li><p>Google</p><ul><li><p>Google Reportedly Plans to Cut Ties with Scale AI After Meta Deal<br>&#8594; <a href="https://www.theinformation.com/briefings/google-reportedly-plans-cut-ties-scale-ai-meta-deal?rc=qlomo6">Read the full article</a></p></li><li><p>Google Offers Buyouts to Employees in Search and Ads, Other Units<br>&#8594; <a 
href="https://www.theinformation.com/articles/google-offers-buyouts-employees-search-ads-unit?rc=qlomo6">Read the full article</a></p></li><li><p>Documents reveal what tool Google used to try to beat ChatGPT: ChatGPT itself</p><p>&#8594; <a href="https://www.businessinsider.com/google-used-chatgpt-to-improve-bard-scale-ai-documents-2025-6">Read the full article</a></p></li></ul></li><li><p>Meta</p><ul><li><p>Meta Agreed to Pay up for Scale AI but Then Wanted More for Its Money<br>&#8594; <a href="https://www.theinformation.com/articles/meta-agreed-pay-scale-ai-wanted-money?rc=qlomo6">Read the full article</a></p></li></ul></li></ul><div><hr></div><blockquote><h1>Recent Research Highlights</h1></blockquote><p><em>Find here relevant research in the context of AI whistleblowing. </em></p><h2>Whistleblower Program to Combat AI Chip Smuggling &amp; Recent Report by CNAS</h2><p>According to Senators Mike Rounds (R-S.D.) and Mark Warner (D-Va.), who introduced the "Stop Stealing our Chips Act" in April 2025, a whistleblower incentive program could address systematic failures in enforcing U.S. export controls on AI chips. The proposed legislation would provide whistleblowers with 10 to 30 percent of collected fines from export control violations, modeled after the SEC's program that has awarded over $2.2 billion to whistleblowers since 2010 and collected penalties estimated between $7 billion and $22 billion.</p><p>The enforcement gap is substantial. A June 2025 report from the Center for a New American Security estimates that between 10,000 and several hundred thousand AI chips may have been smuggled into China in 2024, with a median estimate of approximately 140,000 chips. Six news outlets have independently documented large-scale smuggling operations, including cases worth $120 million for 2,400 NVIDIA H100 chips and another worth $103 million, according to The Information. 
Singapore authorities in March 2025 arrested three individuals suspected of diverting AI servers worth $390 million. Of the 22 notable AI models developed exclusively in China by early 2025, only two were trained with domestic chips.</p><p>The Bureau of Industry and Security, which administers export controls, faces severe resource constraints despite expanded responsibilities. A single export control officer currently handles all of Southeast Asia and Australasia, while the agency's budget has remained essentially flat for over a decade when adjusted for inflation. The proposed self-funding whistleblower program could provide resources while creating financial incentives for insiders to expose smuggling networks, potentially making these operations significantly riskier for violators.</p><p>&#8594; <a href="https://www.cnas.org/publications/reports/countering-ai-chip-smuggling-has-become-a-national-security-priority">Read the June CNAS Report on Countering AI Chip Smuggling as a National Security Priority</a><br>&#8594; <a href="https://www.theinformation.com/articles/nvidia-ai-chip-smuggling-to-china-becomes-an-industry">Read The Information Article</a><br>&#8594; <a href="https://www.reuters.com/world/asia-pacific/singapore-prosecutors-says-us-servers-fraud-case-involves-390-million-2025-03-13/">Read the Reuters Report on the $390 million server fraud case</a><br>&#8594; <a href="https://www.rounds.senate.gov/newsroom/press-releases/rounds-introduces-legislation-to-prevent-smuggling-of-american-ai-chips-into-china">Read the Whistleblower Program Legislation Proposal</a><br>&#8594; <a href="https://www.thefai.org/posts/spreadsheets-vs-smugglers-modernizing-the-bis-for-an-era-of-tech-rivalry">Read the Foundation For American Innovation Article</a> </p><div><hr></div><blockquote><h1>Policy &amp; Legal Updates</h1></blockquote><p><em>Updates on regulations with a focus on safeguarding individuals who voice concerns.</em></p><h2><strong>RAISE Act has Passed New York 
Assembly - Whistleblower Protections Removed in Last Iteration (But Not All is Lost)</strong></h2><p>New York State&#8217;s AI Transparency Bill, the &#8220;RAISE&#8221; Act, passed the New York Assembly late last week, pending Governor Kathy Hochul&#8217;s signature to enter into force. We won&#8217;t repeat the core contents here; Zvi and CSAI wrote good summaries, which you can find below.</p><p><a href="https://www.nysenate.gov/legislation/bills/2025/A6453/amendment/A">Version A of the bill</a>, submitted on June 3, still included explicit whistleblower protections (Section 1422). The revised version, introduced on June 9 and passed on June 12, no longer included these provisions.</p><p><strong>What is the impact of this whistleblowing section being cut?</strong></p><p>It is not too dramatic. Section 740 of the New York Labor Law already contains whistleblower provisions, protecting against retaliation&#8230;</p><ol><li><p>Current and Former Employees and Contractors who are natural persons&#8230;</p></li><li><p>&#8230;disclosing or threatening to disclose to a supervisor or public body&#8230;</p></li><li><p>&#8230;suspected breaches of the law or&#8230;</p></li><li><p>&#8230;behaviour posing &#8220;substantial and specific danger to the public health or safety&#8221; (both based on &#8216;reasonable belief&#8217;)</p></li></ol><p><strong>So what would the RAISE Act have improved upon?</strong></p><p>Largely, the RAISE Act repeated the provisions of Section 740. It would&#8217;ve improved on it by:</p><ol><li><p>Extension of <strong>protected persons to unpaid contractors and advisors, as well as to non-natural persons (e.g. eval providers)</strong> - this is essentially the only, and the largest, expansion on the existing Section 740.</p></li><li><p>Slightly improved language around &#8220;unreasonable or substantial risk of critical harm&#8221; vs. &#8220;substantial and specific danger to the public health or safety&#8221;. 
However, any &#8220;unreasonable or substantial risk of critical harm&#8221; is extremely likely to be covered under Section 740 provisions already, meaning that in practice there would likely have been little to no extension here.</p></li></ol><p>This is of course separate from other limitations of Section 740 (e.g. insufficient burden of proof reversal, limited fines, or no allowance for public disclosure in the case of imminent threat), which lawmakers could&#8217;ve attempted to address in RAISE.</p><p>We voiced this sentiment in the comments of The Zvi&#8217;s post - with <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Alex Bores&quot;,&quot;id&quot;:8042881,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7b196756-b2fb-49c4-ade4-a95062ecdfe6_3572x5358.jpeg&quot;,&quot;uuid&quot;:&quot;f7187aec-eb5f-42ba-9ce9-e4b0d6547686&quot;}" data-component-name="MentionToDOM"></span> taking the time to respond with their reasoning for making the changes (<a href="https://substack.com/profile/8042881-alex-bores/note/c-126407634">link to comment</a>). 
We especially find the reasoning around not cementing the USD 10,000 penalty, which we agree is far too low, convincing.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5rMR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32df1bdb-d872-429f-b7f5-029946e693c4_721x780.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5rMR!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32df1bdb-d872-429f-b7f5-029946e693c4_721x780.png 424w, https://substackcdn.com/image/fetch/$s_!5rMR!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32df1bdb-d872-429f-b7f5-029946e693c4_721x780.png 848w, https://substackcdn.com/image/fetch/$s_!5rMR!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32df1bdb-d872-429f-b7f5-029946e693c4_721x780.png 1272w, https://substackcdn.com/image/fetch/$s_!5rMR!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32df1bdb-d872-429f-b7f5-029946e693c4_721x780.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5rMR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32df1bdb-d872-429f-b7f5-029946e693c4_721x780.png" width="721" height="780" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/32df1bdb-d872-429f-b7f5-029946e693c4_721x780.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:780,&quot;width&quot;:721,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:95312,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://oaisis.substack.com/i/166150444?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32df1bdb-d872-429f-b7f5-029946e693c4_721x780.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5rMR!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32df1bdb-d872-429f-b7f5-029946e693c4_721x780.png 424w, https://substackcdn.com/image/fetch/$s_!5rMR!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32df1bdb-d872-429f-b7f5-029946e693c4_721x780.png 848w, https://substackcdn.com/image/fetch/$s_!5rMR!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32df1bdb-d872-429f-b7f5-029946e693c4_721x780.png 1272w, https://substackcdn.com/image/fetch/$s_!5rMR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32df1bdb-d872-429f-b7f5-029946e693c4_721x780.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" style="height:20px;width:20px" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><div><hr></div><h2><strong>Joint California Working Group Recommends Whistleblower Protections</strong></h2><p>Following Gavin Newsom&#8217;s Veto of SB 1047, the Joint California Working Group has concluded its overview of policy recommendations for regulating AI.</p><p>A critical element of their recommendations is transparency, following a &#8216;trust but verify&#8217; approach. 
<strong>They recognize whistleblower protections as a core verification and enforcement method.</strong></p><p>It is encouraging to see the working group treat these protections as important, and <strong>certain elements of their recommendations do reflect international best practices</strong>. For example,</p><ol><li><p>Protecting reports that go beyond violations of the &#8220;letter of the law&#8221;, as the exact shape that severe risks might take may be difficult to codify.</p></li><li><p>Requiring only &#8216;reasonable cause&#8217; or &#8216;good faith&#8217; around disclosures to unlock protections, rather than definite proof.</p></li></ol><p><strong>However, we believe the working group could have extended its comparison of global standards, and the accompanying recommendations, quite significantly - although we assume the Working Group was to an extent limited by California law.</strong> We will share a longer piece on this in the coming days. In a nutshell, the EU Whistleblowing Directive alone is significantly more expansive than described in the report, in the areas of&#8230;</p><ol><li><p>Personal Scope (protecting far more individuals than just employees and contractors)</p></li><li><p>Public disclosure rights for imminent threats</p></li><li><p>Retaliation protections</p></li><li><p>Timelines for handling and responding to reports</p></li><li><p>Independence requirements for internal channels</p></li><li><p>Training &amp; Transparency requirements for covered persons and company &#8220;internal&#8221; whistleblowing channels</p></li></ol><p>The report also did not mention, for example, expanding fines for violations (currently low in California) or the importance of equipping regulatory recipient bodies with the right resources to actually handle cases. </p><p>Overall, we commend the working group for highlighting whistleblower protections in such a prominent fashion. 
</p><div><hr></div><p>Thank you for trusting OAISIS as your source for insights on protecting and empowering insiders who raise concerns within AI labs.</p><p>Your feedback is crucial to our mission. We invite you to share any thoughts, questions, or suggestions for future topics so that we can collaboratively enhance our understanding of the challenges and risks faced by those within AI labs. Together, we can continue to amplify and safeguard the voices of those working within AI labs who courageously address the challenges and risks they encounter.</p><p>If you found this newsletter valuable, please consider sharing it with colleagues or peers who are equally invested in shaping a safe and ethical future for AI.</p><p>Until next time,<br><em>The OAISIS Team</em></p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://aiwhistleblowerinitiative.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading OAISIS: Inside AI! 
Subscribe for free to receive new posts and support our work.</p></div></div></div>]]></content:encoded></item><item><title><![CDATA[INSIDE AI #13: Meta Restructures, NVIDIA Huang's Rescue Plan, Take Action: AI Whistleblower Protection Act by NWC]]></title><description><![CDATA[Edition 13]]></description><link>https://aiwhistleblowerinitiative.substack.com/p/inside-ai-13-meta-restructures-nvidia</link><guid isPermaLink="false">https://aiwhistleblowerinitiative.substack.com/p/inside-ai-13-meta-restructures-nvidia</guid><dc:creator><![CDATA[AIWI]]></dc:creator><pubDate>Mon, 09 Jun 2025 17:51:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!tJrM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63335622-c011-4c1b-a8f7-7bbe03867e6b_1300x1800.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><h3><strong>In This Edition:</strong></h3><div class="preformatted-block" data-component-name="PreformattedTextBlockToDOM"><pre class="text">Key takeaways:</pre></div><ul><li><p>News:</p><ul><li><p>Meta Restructures AI Ops Amid Employee Burnout, Talent Loss - Automates Safety Review to Speed Up Time to Market</p></li><li><p>Nvidia&#8217;s Internal Struggle: Covert Chip Plans to Save Its China Market and Huang&#8217;s Trump Outreach</p></li><li><p>Microsoft, Regulators, and Nonprofits Challenge OpenAI&#8217;s Corporate Realignment</p></li><li><p>Musk Attempted to Block the UAE Data Center Deal if it Didn't Include 
xAI</p></li></ul></li><li><p>Support the Passage of the AI Whistleblower Protection Act </p></li><li><p>Wondering What to &#8220;Look Out For&#8221; as a Lab Insider? Updates from AI Lab Watch </p></li></ul><div><hr></div><blockquote><h1>Insider Currents </h1></blockquote><p><em>Carefully curated summaries and links to the latest news, spotlighting the voices and concerns emerging from within AI labs.</em></p><h2><strong>Meta Restructures AI Ops + Safety Review Changes</strong></h2><p>In recent weeks, tensions have risen within Meta Platforms&#8217; generative AI group, culminating in a sweeping internal restructuring. According to <strong>documents reviewed by </strong><em><strong>The Information</strong></em>, the group recently recorded some of the lowest employee satisfaction scores across the company. In feedback to leadership, staff cited deep burnout, internal conflicts, and a lack of focus.</p><p>In response, Meta leadership acknowledged the exhaustion and gave the team a day off following the launch of the company&#8217;s new stand-alone app for its Meta AI assistant, according to three people familiar with the matter.<em> Axios</em> reported that the broader restructuring aims to speed up the rollout of new products and features.</p><p><strong>An Exodus of Expertise</strong></p><p>At the same time, Meta is witnessing a significant brain drain. As <em>Business Insider</em> reported, several top researchers have exited&#8212;many joining Mistral, a high-profile French startup co-founded by former Meta scientists Guillaume Lample and Timoth&#233;e Lacroix.</p><p>The talent loss is stark: of the 14 researchers credited on Meta&#8217;s influential 2023 LLaMA paper, only three remain at the company. 
The upheaval coincides with delays to Meta&#8217;s large-scale &#8220;Behemoth&#8221; AI model and the quiet exit of Joelle Pineau, a key AI research director, after eight years at the company.</p><p><strong>Automating Safety Reviews</strong></p><p>Perhaps as a consequence, internal documents obtained by <em>NPR</em> show Meta plans to automate up to 90% of privacy and integrity reviews&#8212;assessments evaluating whether features could harm users, violate privacy, or spread misinformation. Previously, human reviewers examined all major platform changes.</p><p>Under the new system, product developers conduct their own risk assessments, with human review reserved for exceptional cases. &#8220;Most product managers and engineers are not privacy experts,&#8221; said Zvika Krieger, Meta's former director of responsible innovation.</p><p>A <strong>former Meta executive</strong>, speaking anonymously out of fear of retaliation from the company, also warned:</p><blockquote><p>&#8220;This process functionally means more stuff launching faster, with less rigorous scrutiny, creating higher risks.&#8221;</p></blockquote><p>&#8594; <a href="https://www.theinformation.com/articles/charge-ai-meta-shake?rc=qlomo6">Read: Who&#8217;s In Charge of AI at Meta After Shake-Up</a> (paywalled)</p><p>&#8594; <a href="https://www.npr.org/2025/05/31/nx-s1-5407870/meta-ai-facebook-instagram-risks">Read: Meta Plans to Replace Humans with AI to Assess Privacy and Societal Risks</a></p><p>&#8594; <a href="https://www.businessinsider.com/meta-llama-ai-talent-mistral-2025-5?utm_source=substack&amp;utm_medium=email">Read: Meta's Llama AI Team Has Been Bleeding Talent. 
Many Top Researchers Have Joined French AI Startup Mistral</a></p><div><hr></div><h2>Nvidia&#8217;s Internal Struggle: Covert Chip Plans to Save Its China Market and Huang&#8217;s Trump Outreach</h2><p>Details from inside Nvidia reveal CEO Jensen Huang has abandoned his traditional hands-off political approach, making undisclosed frequent trips to Mar-a-Lago in a high-stakes bid to salvage the company&#8217;s $8 billion China business.</p><p><strong>Internal executives</strong> report Huang&#8217;s prolonged absences from Nvidia&#8217;s Santa Clara headquarters: &#8220;We see a lot less of Jensen&#8212;he&#8217;s travelling to Florida a lot.&#8221; These Mar-a-Lago meetings with Trump represent a significant shift for a CEO who once even declined White House invitations under Biden, according to a person with direct knowledge of the matter.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tJrM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63335622-c011-4c1b-a8f7-7bbe03867e6b_1300x1800.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!tJrM!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63335622-c011-4c1b-a8f7-7bbe03867e6b_1300x1800.png 424w, https://substackcdn.com/image/fetch/$s_!tJrM!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63335622-c011-4c1b-a8f7-7bbe03867e6b_1300x1800.png 848w, https://substackcdn.com/image/fetch/$s_!tJrM!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63335622-c011-4c1b-a8f7-7bbe03867e6b_1300x1800.png 1272w, 
https://substackcdn.com/image/fetch/$s_!tJrM!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63335622-c011-4c1b-a8f7-7bbe03867e6b_1300x1800.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!tJrM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63335622-c011-4c1b-a8f7-7bbe03867e6b_1300x1800.png" width="453" height="627.2307692307693" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/63335622-c011-4c1b-a8f7-7bbe03867e6b_1300x1800.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1800,&quot;width&quot;:1300,&quot;resizeWidth&quot;:453,&quot;bytes&quot;:492561,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://oaisis.substack.com/i/165336841?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63335622-c011-4c1b-a8f7-7bbe03867e6b_1300x1800.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!tJrM!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63335622-c011-4c1b-a8f7-7bbe03867e6b_1300x1800.png 424w, https://substackcdn.com/image/fetch/$s_!tJrM!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63335622-c011-4c1b-a8f7-7bbe03867e6b_1300x1800.png 848w, 
https://substackcdn.com/image/fetch/$s_!tJrM!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63335622-c011-4c1b-a8f7-7bbe03867e6b_1300x1800.png 1272w, https://substackcdn.com/image/fetch/$s_!tJrM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63335622-c011-4c1b-a8f7-7bbe03867e6b_1300x1800.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">&#8220;As Huang has attempted to guard his company&#8217;s standing in America, he 
has also taken care to manage its ties to China,&#8221; wrote <em>The Information</em>.</figcaption></figure></div><p>Huang needs to find a way to revive Nvidia&#8217;s business in China. To do that, he has to carefully balance the interests of the Trump administration, the Chinese government, and Chinese customers &#8212; a challenge Huang has been working on since Trump became president, according to interviews with over 20 people, <strong>including Nvidia staff</strong>, customers, and U.S. and Chinese officials, <em>The Information</em> reported.</p><p><strong>China rescue plan:</strong></p><p><strong>Emergency internal meetings</strong> following the H20 chip ban have produced plans for a covert new China-specific chip, tentatively called &#8220;B30&#8221;, designed to circumvent export controls. The company is also developing a new computer system centred around its B30 chips, designed to enable Chinese customers to maintain strong performance by connecting the chips to operate together efficiently. Four sources confirm that major Chinese clients, including ByteDance, Alibaba, and Tencent, have expressed interest, with Nvidia promising to produce over 1 million units of B30 this year.</p><p>&#8594; <a href="https://www.theinformation.com/articles/jensen-huang-used-delegate-politics-trumps-return?rc=qlomo6">Read: Jensen Huang Used to Delegate Politics&#8212;Until Trump&#8217;s Return</a> (paywalled)</p><div><hr></div><h2>Microsoft, Regulators, and Nonprofits Challenge OpenAI&#8217;s Corporate Realignment</h2><p>OpenAI&#8217;s plan to restructure its for-profit subsidiary into a $300 billion public-benefit corporation raises concerns over whether the nonprofit will receive fair compensation and maintain meaningful control. 
According to people familiar with the matter, the Delaware attorney general is moving to hire an investment bank to independently assess the value of the nonprofit parent&#8217;s equity in the new entity, <em>The Wall Street Journal</em> reported. Microsoft, which effectively has the power to block the deal, is currently at odds with OpenAI over its equity share in the new structure; both companies have hired investment banks to advise them.</p><p>According to <em>The Information</em>, critics, including a coalition of over 60 nonprofits and the group <em>Not for Private Gain</em>, are warning that the restructuring could shortchange the charity and are urging regulators to intervene. Meanwhile, OpenAI has hired well-connected Democratic operatives to navigate California&#8217;s political landscape, signalling a focus on managing opposition rather than addressing core ethical and governance issues.</p><p>According to the articles, OpenAI is under intense financial and geopolitical pressure, with $20 billion in SoftBank funding at stake if the restructuring isn&#8217;t finalized by year&#8217;s end. The company is also preparing to launch an &#8220;open&#8221; model, seen as a response to China&#8217;s growing lead in the space. 
The outcome of this governance battle will shape whether OpenAI&#8217;s developments prioritize public benefit or private gain.</p><p>&#8594; <a href="https://www.theinformation.com/articles/openais-new-path-conversion-faces-activist-opposition?rc=qlomo6">Read: OpenAI&#8217;s New Path to Conversion Faces Activist Opposition</a> (paywalled)</p><p>&#8594; <a href="https://www.wsj.com/tech/ai/openai-profit-value-delaware-ag-bank-ff7b4718?st=6bc4n1&amp;reflink=article_copyURL_share">Read: Delaware AG Hiring Investment Bank to Advise on OpenAI Conversion</a> (paywalled)</p><p>&#8594; <a href="https://www.theinformation.com/articles/openais-democrats-action-chinas-open-source-ai">Read: OpenAI&#8217;s Democrats in Action; China&#8217;s Open-Source AI</a> (paywalled)</p><div><hr></div><h2><strong>Musk Attempted to Block the UAE Data Center Deal if it Didn&#8217;t Include xAI</strong></h2><p>Prior to Trump&#8217;s mid-May diplomatic tour of three Gulf nations, Musk discovered that Sam Altman would be participating in the trip and that negotiations for a UAE-based project were underway. <strong>According to anonymous White House officials</strong>, Musk expressed concerns about the arrangement and requested to join the delegation.</p><p>Then, on a call with officials at G42, the AI firm controlled by the brother of the U.A.E.'s president, Musk had a decree for those assembled: Their plan had no chance of approval unless xAI was included in the deal. Despite Musk&#8217;s complaints, Trump and U.S. officials signed off on the deal terms and decided to move forward with the project. White House aides then discussed how to calm Musk down because Trump wanted to announce the deal before the end of the trip.</p><p>The plan in question is a site that could eventually hold a five-gigawatt cluster of AI data centres. This size would be far greater than any single site in the U.S., and the site would host servers for various U.S. AI companies. 
Musk&#8217;s xAI has been seen as a likely candidate for future sites at the giant data centre cluster.</p><p>&#8594; <a href="https://www.wsj.com/tech/elon-musk-trump-openai-stargate-abu-dhabi-e2689615">Read: Elon Musk Tried to Block Sam Altman&#8217;s Big AI Deal in the Middle East</a> (paywalled)</p><div><hr></div><blockquote><h1>Announcements &amp; Call to Action</h1></blockquote><p><em>Updates on publications, community initiatives, and &#8220;call for topics&#8221; that seek contributions from experts addressing concerns inside Frontier AI.</em></p><h2>US Citizens, Contact Congress: Pass AI Whistleblower Protection</h2><p>&#8220;AI Whistleblowers are facing severe retaliation for speaking out against safety and security failures which are threatening the privacy of Americans and compromising national security&#8221;, wrote the National Whistleblower Center (NWC).</p><p>The recently introduced AI Whistleblower Protection Act includes anti-retaliation provisions for AI whistleblowers and establishes clear reporting guidelines for the Department of Labor regarding AI security vulnerabilities. This legislation represents a critical response to the urgent need for AI employee protection. Its passage would mark a turning point in increasing regulation, transparency, and oversight within the AI industry.</p><p>Take action today by contacting your Representatives and Senators to call for the passage of this critical bill!</p><p><strong>&#8594; <a href="https://www.whistleblowers.org/campaigns/call-for-the-passage-of-the-ai-whistleblower-protection-act/">Read About the Campaign and Contact Your Representatives or Senators</a></strong></p><p><strong>Need a refresher on what this is about?</strong></p><p>A bipartisan coalition led by Senator Chuck Grassley has introduced legislation that could make tech workers the primary guardians of AI safety. The AI Whistleblower Protection Act (AIWPA), introduced in May, would protect employees who report AI security vulnerabilities or violations. 
</p><p><strong>Who is protected?</strong><br>The law offers protection to current and former employees, as well as contractors working at AI companies. To be protected, individuals don&#8217;t need to prove that a law was definitively broken&#8212;they just need to raise their concerns in good faith.</p><p><strong>What protections do workers have?</strong></p><p>The law protects workers from being fired, demoted, threatened, or harassed for reporting issues. If an employer retaliates, workers can:</p><ul><li><p>File a complaint with the U.S. Department of Labor</p></li><li><p>Take legal action in federal court</p></li><li><p>Seek reinstatement to their job</p></li><li><p>Recover double their lost wages</p></li><li><p>Receive compensation for damages</p></li></ul><p>Employers cannot require workers to waive these rights through contracts or forced arbitration.</p><p><strong>What&#8217;s missing?</strong><br>An article by Sophie Luskin in <em><a href="https://www.techpolicy.press/what-the-ai-whistleblower-protection-act-would-mean-for-tech-workers/">Tech Policy Press</a></em> points out that the law could be strengthened by also <strong>protecting employees who report when their company fails to follow its own internal safety policies</strong>, not just violations of federal regulations.</p><p>This legislation follows a broader trend&#8212;Congress has enacted similar whistleblower protections in rapidly evolving industries such as nuclear energy, aviation, and finance.</p><p>Meanwhile, whistleblower advocate <strong>Poppy Alexander </strong>warns that placing the regulatory burden solely on individual employees is problematic:</p><blockquote><p>&#8220;Being a whistleblower is hard enough. 
If we move toward a protection-as-regulation model, whistleblowers will be put under even more pressure as they become the only obstacle to AI&#8217;s boundless growth.&#8221;</p></blockquote><p>&#8594; <a href="https://www.techpolicy.press/what-the-ai-whistleblower-protection-act-would-mean-for-tech-workers/">Read: What the AI Whistleblower Protection Act Would Mean for Tech Workers</a></p><p>&#8594; <a href="https://news.bloomberglaw.com/us-law-week/ai-whistleblowers-cant-carry-the-burden-of-regulating-industry">Read: AI Whistleblowers Can&#8217;t Carry the Burden of Regulating Industry</a> (paywalled)</p><div><hr></div><h2><strong>&#8220;What Concerning Behaviour Should I be Looking Out for?&#8221; - A Common Question We Hear From Frontier AI Insiders</strong></h2><p>Setting aside more complex answers, a first, straightforward answer to that question is:</p><blockquote><p>&#8220;Whenever your company is violating its own commitments or promises&#8221;.</p></blockquote><p>Or, slightly more complicated:</p><blockquote><p>&#8220;Whenever your lab is significantly lacking vs. 
industry best practices&#8221;.</p><p><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;AI Lab Watch&quot;,&quot;id&quot;:2645747,&quot;type&quot;:&quot;pub&quot;,&quot;url&quot;:&quot;https://open.substack.com/pub/ailabwatch&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c6c98973-1394-411a-b732-a112266d6cbe_496x496.png&quot;,&quot;uuid&quot;:&quot;59fe9de2-6de5-4d81-b91f-1f2afd669dde&quot;}" data-component-name="MentionToDOM"></span> <strong>has updated its resources to help answer this question:</strong></p><p><a href="https://ailabwatch.org/resources/commitments">&#8594; Find an <strong>overview of current lab commitments</strong> to identify if and where your lab may not be living up to them.</a></p><p><a href="https://ailabwatch.org/resources/commitments">&#8594; Find an overview of where your AI company stands vis-&#224;-vis other companies with regard to implementing best practices.</a></p><p><strong>And subscribe to the Substack here: </strong></p><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:2645747,&quot;name&quot;:&quot;AI Lab Watch&quot;,&quot;logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6c98973-1394-411a-b732-a112266d6cbe_496x496.png&quot;,&quot;base_url&quot;:&quot;https://ailabwatch.substack.com&quot;,&quot;hero_text&quot;:&quot;What frontier AI labs are doing + what they should do&quot;,&quot;author_name&quot;:&quot;Zach Stein-Perlman&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#ffffff&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication 
show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://ailabwatch.substack.com?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><img class="embedded-publication-logo" src="https://substackcdn.com/image/fetch/$s_!JwK5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6c98973-1394-411a-b732-a112266d6cbe_496x496.png" width="56" height="56" style="background-color: rgb(255, 255, 255);"><span class="embedded-publication-name">AI Lab Watch</span><div class="embedded-publication-hero-text">What frontier AI labs are doing + what they should do</div><div class="embedded-publication-author-name">By Zach Stein-Perlman</div></a><form class="embedded-publication-subscribe" method="GET" action="https://ailabwatch.substack.com/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><div><hr></div><p>Thank you for trusting OAISIS as your source for insights on protecting and empowering insiders who raise concerns within AI labs.</p><p>Your feedback is crucial to our mission. We invite you to share any thoughts, questions, or suggestions for future topics so that we can collaboratively enhance our understanding of the challenges and risks faced by those within AI labs. 
Together, we can continue to amplify and safeguard the voices of those working within AI labs who courageously address the challenges and risks they encounter.</p><p>If you found this newsletter valuable, please consider sharing it with colleagues or peers who are equally invested in shaping a safe and ethical future for AI.</p><p>Until next time,<br><em>The OAISIS Team</em></p><p></p>]]></content:encoded></item><item><title><![CDATA[INSIDE AI #12: Federal Preemption Bill, OpenAI Restructuring Leak, Grassley: AI Whistleblower Protection Act]]></title><description><![CDATA[Edition 12]]></description><link>https://aiwhistleblowerinitiative.substack.com/p/inside-ai-12-federal-preemption-bill</link><guid isPermaLink="false">https://aiwhistleblowerinitiative.substack.com/p/inside-ai-12-federal-preemption-bill</guid><dc:creator><![CDATA[AIWI]]></dc:creator><pubDate>Tue, 27 May 2025 15:47:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6ea94b2d-6711-4ff2-bcf3-a4427fcd884f_3192x3192.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><h3><strong>In This Edition:</strong></h3><div class="preformatted-block" data-component-name="PreformattedTextBlockToDOM"><pre class="text">Key takeaways:</pre></div><ul><li><p>News: </p><ul><li><p>OpenAI Restructuring: Leaked Legal Letter Reveals Loss of Power for Nonprofit Board</p></li><li><p>AI Giants Are Rushing Products, Undermining Safety</p></li><li><p>Federal AI Preemption Bill May Weaken Whistleblower Protections in Frontier AI Development</p></li><li><p>China Tightens Control Over AI Data Centers</p></li><li><p>U.S. Advances AI Partnership with UAE through Nvidia Chip Deal Amid Security Concerns</p></li></ul></li><li><p>Policy &amp; Legal Updates: Sen. 
Grassley Introduces AI Whistleblower Protection Act</p></li><li><p>Lynette Bye in Transformer News Writes on the Need for Transparency</p></li></ul><div><hr></div><blockquote><h3>Insider Currents </h3></blockquote><p><em>Carefully curated summaries and links to the latest news, spotlighting the voices and concerns emerging from within AI labs.</em></p><h4><strong>OpenAI Restructuring: Leaked Legal Letter Reveals Loss of Power for Nonprofit Board</strong></h4><p>As reported by Obsolete, a newly obtained 13-page letter from OpenAI&#8217;s lawyers to California Attorney General Rob Bonta reveals internal admissions around the company&#8217;s controversial nonprofit restructuring. The May 15 letter admits that &#8220;many potential investors declined to invest&#8221; due to OpenAI's current nonprofit governance structure, contradicting earlier reports that funding rounds were &#8220;oversubscribed&#8221; and exposing how desperately the company needs to remove nonprofit oversight to raise money.</p><p>The leaked document shows OpenAI's recent &#8220;concession&#8221; to maintain nonprofit control is largely cosmetic. Currently, OpenAI's operating agreement legally requires the charitable mission to &#8220;take precedence over any obligation to generate a profit.&#8221; Under the proposed restructuring, directors would be legally required to balance shareholder profits with public benefit&#8212;a fundamental weakening that explains why SoftBank&#8217;s finance chief said &#8220;nothing has really changed&#8221; despite the apparent concession. 
The letter also exposes false claims about employee motivations during the 2023 board crisis: a former OpenAI employee who signed the pro-Altman letter called the company's characterization &#8220;bullshit and a blatant lie,&#8221; explaining employees had &#8220;millions of dollars on the line&#8221; from a stock tender offer and many &#8220;didn't really trust Sam&#8221; but feared the company would collapse.</p><p>&#8594; <a href="https://www.cnbc.com/2025/05/14/meta-google-openai-artificial-intelligence-safety.html">Read the Full Article by CNBC</a></p><p>&#8594; Listen to Tyler Whitmer on <a href="https://80000hours.org/podcast/episodes/tyler-whitmer-openai-nonprofit-restructure-control/">80,000 hours</a>, discussing the new structure.</p><div><hr></div><h4><strong>AI Giants Are Rushing Products, Undermining Safety</strong></h4><p>We&#8217;ve covered individual stories before - <em>CNBC </em>published a write-up: <strong>More than a dozen insiders from top AI labs</strong> (from companies like Meta, Google, and OpenAI), including former safety researcher Steven Adler, add to the list of evidence that<strong> labs are increasingly prioritising rapid product launches over fundamental research and safety.</strong> The race for market dominance and AGI milestones pressures teams to cut corners in model testing, raising the risk of misuse, unpredictability, and alignment failures.</p><p>At Meta, former employees say the FAIR research unit was deprioritized in favour of product-focused objectives. At Google, Sergey Brin reportedly urged teams to &#8220;turbocharge&#8221; development, leading to the release of Gemini 2.5 without full safety documentation. OpenAI, meanwhile, launched GPT-4o despite testers flagging that it felt off, while its partner METR reported insufficient time and data to evaluate the model properly. 
One former OpenAI researcher cautioned that pre-deployment testing is inadequate, warning that misaligned models cannot be scientifically fixed, only patched after the fact.</p><p>&#8594; <a href="https://www.cnbc.com/2025/05/14/meta-google-openai-artificial-intelligence-safety.html">Read The Full Article by CNBC</a></p><div><hr></div><h4><strong>Federal AI Preemption Bill May Weaken Whistleblower Protections in Frontier AI Development</strong></h4><p>House Republicans are pushing legislation that could undermine AI safety oversight and whistleblower protections by blocking state AI regulations for a decade. The &#8220;Artificial Intelligence and information technology modernization initiative,&#8221; packed into a budget reconciliation bill, would prevent states from enforcing AI-related laws until 2035.</p><p>Republicans are using Budget Reconciliation to bypass the Senate's 60-vote filibuster but face the Byrd Rule hurdle, which allows the removal of provisions lacking direct budgetary impact. Sen. John Cornyn (R-TX) admitted uncertainty: &#8220;I don't know whether it will pass the Byrd Rule.&#8221;</p><p>The bill states:</p><blockquote><p>"No State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during <strong>the 10-year period beginning on the date of the enactment of this Act.</strong>" </p></blockquote><p>This would block state laws on deepfakes in elections, discriminatory AI hiring, and digital replica creation without consent.</p><p>The suspension of state-level AI regulations <strong>could also significantly impact whistleblower protections:</strong> In California, for example, SB53 aims at directly protecting whistleblowers in AI. 
At the same time, the existing Labor Code already protects disclosures of any violation of the law - including the other AI-related laws introduced over the past few years. Multiple other states have also begun enacting their own AI laws to address issues like transparency and bias; without these protections, individuals might hesitate to report AI-related wrongdoing. Additionally, the inconsistency of laws across states could confuse insiders about what constitutes a breach or ethical violation, further deterring whistleblowing. </p><p>A bipartisan group of 40 state attorneys general opposes the bill. Republican South Carolina Attorney General Alan Wilson stated, "Instead of stepping up with real solutions, Congress wants to tie our hands and push a one-size-fits-all mandate from Washington without a clear direction. That's not leadership; that's federal overreach." This federal push against state regulation could weaken accountability in the rapidly advancing AI landscape.</p><p>&#8594; Read: <a href="https://connectontech.bakermckenzie.com/u-s-house-of-representatives-energy-and-commerce-committee-proposes-ban-on-state-ai-regulation-and-enforcement/">Baker McKenzie - U.S. 
House Energy and Commerce Committee Proposes Ban</a></p><p>&#8594; Read: <a href="https://apnews.com/article/ai-regulation-state-moratorium-congress-39d1c8a0758ffe0242283bb82f66d51a">AP News - AI Regulation State Moratorium</a></p><p>&#8594; Read: <a href="https://docs.house.gov/meetings/IF/IF00/20250513/118261/HMKP-119-IF00-20250513-SD003.pdf">House Committee Document</a></p><p>&#8594; Read: <a href="https://statescoop.com/state-attorneys-general-reject-federal-ai-law/">StateScoop - State Attorneys General Reject Federal AI Law</a></p><div><hr></div><h4><strong>China Tightens Control Over AI Data Centers</strong></h4><p>Two people involved in conversations with Chinese officials reveal that the government has told major tech companies, including Tencent, Alibaba, ByteDance, and Baidu, that it will take a more active role in overseeing their AI data centres and specialized chips. The government wants companies to build new data centres in specific locations that provide authorities greater visibility into operations, warning that licensing approvals will take longer otherwise&#8212;potentially extending the review process from 2-3 months to over six months.</p><p>The policy shift stems from China&#8217;s AI chip shortage caused by U.S. restrictions on Nvidia exports and research showing many data centres built in the last three years are underutilized or can't meet surging demand. Officials are pushing companies to locate facilities in eight designated areas under the &#8220;Eastern Data and Western Computing&#8221; plan, which aims to build capacity in China&#8217;s western regions where land, energy, and cooling are cheaper. Companies must now complete surveys listing all existing and planned AI data centres and report the proportion of Chinese chips they use, with those failing to participate risking construction permit losses. 
The move represents Beijing's latest effort to consolidate control over tech companies while addressing the growing computing power gap with the U.S.</p><p>&#8594; <a href="https://www.theinformation.com/articles/china-tightens-control-ai-data-centers?rc=qlomo6">Read the Full Article by The Information</a> <em>(paywalled)</em></p><div><hr></div><h4><strong>U.S. Advances AI Partnership with UAE through Nvidia Chip Deal Amid Security Concerns</strong></h4><p>The U.S. has agreed to let the UAE import 500,000 of Nvidia&#8217;s most advanced AI chips annually to power a planned 5&#8239;GW data center, larger than any AI facility built to date, according to <em>Transformer</em>. As Bloomberg and Reuters report, the deal has triggered concern within the U.S. government over national security risks and the unchecked spread of frontier computing.</p><p>Nvidia&#8217;s H100 and B100 chips will be supplied to G42, a UAE firm previously scrutinized for its ties to China. Despite recent partnerships with Microsoft and efforts to cut Chinese links, officials remain cautious. <em>Transformer</em> notes that the Department of Commerce stated that the U.S. would run the data centres, and that the U.S. and UAE will jointly strengthen Know-Your-Customer protocols to control access, limiting it to U.S. hyperscalers and approved cloud providers; it remains unclear whether stronger safeguards like FedRAMP High will be enforced. Representative John Moolenaar echoed these concerns on X, calling for &#8220;scrutiny and verifiable guardrails&#8221; to ensure that critical U.S.-origin AI infrastructure deployed abroad, like in the UAE, remains under secure and accountable oversight, and is not subject to foreign government control.</p><p>The deal coincides with the Trump administration's plans to loosen AI chip export controls. 
Meanwhile, Transformer cites Bloomberg in reporting that OpenAI is in talks with the UAE and that Saudi Arabia has launched Humain, a state-backed AI company led by Crown Prince Mohammed bin Salman.</p><p>&#8594; <a href="https://www.bloomberg.com/news/articles/2025-05-13/us-weighs-letting-uae-buy-over-a-million-advanced-nvidia-chips?embedded-checkout=true">Read the Full Article by Bloomberg</a></p><p>&#8594; <a href="https://www.reuters.com/world/middle-east/us-weighs-letting-uae-buy-over-million-advanced-nvidia-chips-bloomberg-news-2025-05-13/?utm_source=chatgpt.com">Read the Full Article by Reuters</a></p><p>&#8594; <a href="https://www.transformernews.ai/p/the-flywheels-are-spinning">Read the Full Article by The Transformer</a></p><div><hr></div><blockquote><h3>Policy &amp; Legal Updates</h3></blockquote><p><em>Updates on regulations with a focus on safeguarding individuals who voice concerns.</em></p><h4><strong>Grassley Introduces AI Whistleblower Protection Act</strong></h4><p>If you follow us on our <a href="https://x.com/OAISIS_Official">X</a> or <a href="https://bsky.app/profile/oaisis-official.bsky.social">Bluesky</a>, you know about this already:</p><p>On May 15, Senator Chuck Grassley (R-IA) introduced the bipartisan AI Whistleblower Protection Act, the first industry-specific whistleblower legislation for AI workers. The bill prohibits companies from retaliating against current and former employees who report AI safety concerns, security vulnerabilities, or violations to regulators, law enforcement, Congress, or internally. Whistleblowers facing retaliation can seek remedies including reinstatement, double back pay, and compensatory damages through the Department of Labor or federal courts. 
The legislation also prevents companies from waiving these protections through employment agreements or arbitration clauses.</p><p>The Act addresses a recognized gap in current protections, as AI workers lack the specialized safeguards available in other high-risk sectors. However, the bill requires whistleblowers to have &#8220;reasonable belief&#8221; about &#8220;substantial and specific&#8221; risks&#8212;a potentially high threshold that could deter some disclosures. Additionally, the legislation focuses solely on employer retaliation and doesn't protect public disclosures or extend coverage to lab suppliers and other facilitators who might support whistleblowing efforts. With broad bipartisan support and companion House legislation, the Act represents an initial framework that may require refinement as AI oversight challenges evolve.</p><p>&#8594; Read <a href="https://constantinecannon.com/whistleblower/bi-partisan-group-led-by-chuck-grassley-introduces-ai-whistleblower-protection-act/">Commentary by Law Firm Constantine Cannon</a></p><p>&#8594; Read <a href="https://www.judiciary.senate.gov/press/rep/releases/grassley-introduces-ai-whistleblower-protection-act">Press Release</a></p><div><hr></div><blockquote><h3>Research Spotlight</h3></blockquote><p><em>Explore the latest scientific insights into risk reporting and whistleblowing in the context of Frontier AI.</em></p><h4>Paper Proposes External Compliance Checks to Hold AI Labs Accountable</h4><p>A new paper from the Centre for the Governance of AI goes into depth on third-party reviews as a tool for ensuring frontier AI companies actually follow their own safety frameworks.</p><p>While firms like OpenAI and Anthropic have published detailed safety policies, it remains unclear whether these are consistently applied. 
Independent reviews by external auditors aim to bridge this trust gap.</p><p>The paper outlines key implementation questions:</p><ul><li><p>who should conduct the reviews</p></li><li><p>what evidence should be used</p></li><li><p>how findings are shared</p></li><li><p>how noncompliance should affect model deployment</p></li></ul><p>Though these reviews could boost stakeholder trust and internal accountability, they come with serious challenges: potential security leaks, employee self-censorship, and high costs. Still, the authors argue these can be managed with established audit practices from other sectors.</p><p><strong>Where does whistleblowing fit in?</strong> The paper identifies the aforementioned employee self-censorship as a major obstacle to effective third-party safety evaluations of AI labs, where workers may withhold critical compliance information due to fear of retaliation, concerns about job security and compensation, organizational "codes of silence," and legal worries about confidentiality agreements or potential company liability. </p><p>Whistleblowing protections emerge as a crucial solution because they directly address the core problem: employees need credible assurance that they won't face professional or legal consequences for honestly reporting safety concerns to external reviewers. 
Without robust whistleblower protections, third-party evaluations risk becoming superficial exercises where reviewers only access sanitized information, undermining the entire purpose of independent oversight in an industry where the stakes of safety failures could be catastrophic.</p><p>&#8594; <a href="https://arxiv.org/abs/2505.01643">Read the Full Paper: Third-party Compliance Reviews for Frontier AI Safety Frameworks</a></p><div><hr></div><blockquote><h3>Others</h3></blockquote><h4><strong>Lynette Bye in Transformer News Writes on the Need for Transparency</strong></h4><p>Lynette Bye argues for more transparency to be required in frontier AI, including whistleblower protections playing a role alongside information about model capabilities, detailed risk projections and mitigation measures.</p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:163630043,&quot;url&quot;:&quot;https://www.transformernews.ai/p/transparency-ai&quot;,&quot;publication_id&quot;:1688188,&quot;publication_name&quot;:&quot;Transformer&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png&quot;,&quot;title&quot;:&quot;We need to know what&#8217;s happening with AI&quot;,&quot;truncated_body_text&quot;:&quot;AI systems may be approaching genuinely dangerous capabilities. Companies are sounding the alarm: Anthropic and OpenAI warn that frontier AI models are &#8220;on the cusp&#8221; of helping non-experts create bioweapons. 
Google says that Gemini could reach a &#8220;critical capability level&#8221; &#8212; meaning it &#8220;may pose a significant risk of severe harm without appropriate miti&#8230;&quot;,&quot;date&quot;:&quot;2025-05-15T15:13:42.905Z&quot;,&quot;like_count&quot;:7,&quot;comment_count&quot;:1,&quot;bylines&quot;:[{&quot;id&quot;:280514,&quot;name&quot;:&quot;Lynette Bye&quot;,&quot;handle&quot;:&quot;lynettebye&quot;,&quot;previous_name&quot;:&quot;Lynette&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F377af0c9-6ae8-4e2c-b29d-2f51cd2c2175_512x512.jpeg&quot;,&quot;bio&quot;:&quot;A Harvard graduate and current Tarbell Fellow for journalists, I write about AI's growing influence on society.&quot;,&quot;profile_set_up_at&quot;:&quot;2023-01-25T16:56:24.950Z&quot;,&quot;reader_installed_at&quot;:null,&quot;is_guest&quot;:true,&quot;bestseller_tier&quot;:null,&quot;primaryPublicationId&quot;:2639094,&quot;primaryPublicationName&quot;:&quot;Lynette Bye&quot;,&quot;primaryPublicationUrl&quot;:&quot;https://lynettebye.substack.com&quot;,&quot;primaryPublicationSubscribeUrl&quot;:&quot;https://lynettebye.substack.com/subscribe?&quot;}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://www.transformernews.ai/p/transparency-ai?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!JQeB!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86f2a16a-4fda-4b6b-a453-df2cf11d8889_500x500.png" loading="lazy"><span 
class="embedded-post-publication-name">Transformer</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">We need to know what&#8217;s happening with AI</div></div><div class="embedded-post-body">AI systems may be approaching genuinely dangerous capabilities. Companies are sounding the alarm: Anthropic and OpenAI warn that frontier AI models are &#8220;on the cusp&#8221; of helping non-experts create bioweapons. Google says that Gemini could reach a &#8220;critical capability level&#8221; &#8212; meaning it &#8220;may pose a significant risk of severe harm without appropriate miti&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">10 months ago &#183; 7 likes &#183; 1 comment &#183; Lynette Bye</div></a></div><div><hr></div><p>Thank you for trusting OAISIS as your source for insights on protecting and empowering insiders who raise concerns within AI labs.</p><p>Your feedback is crucial to our mission. We invite you to share any thoughts, questions, or suggestions for future topics so that we can collaboratively enhance our understanding of the challenges and risks faced by those within AI labs. 
Together, we can continue to amplify and safeguard the voices of those working within AI labs who courageously address the challenges and risks they encounter.</p><p>If you found this newsletter valuable, please consider sharing it with colleagues or peers who are equally invested in shaping a safe and ethical future for AI.</p><p>Until next time,<br><em>The OAISIS Team</em></p><p></p>]]></content:encoded></item><item><title><![CDATA[Inside AI #11: Meta's Minor Responsibility, Mira Murati’s Board Control, OpenAI's Restructure, Helen Toner on Whistleblowing]]></title><description><![CDATA[Edition 11]]></description><link>https://aiwhistleblowerinitiative.substack.com/p/inside-ai-11-metas-minor-responsibility</link><guid isPermaLink="false">https://aiwhistleblowerinitiative.substack.com/p/inside-ai-11-metas-minor-responsibility</guid><dc:creator><![CDATA[AIWI]]></dc:creator><pubDate>Mon, 12 May 2025 13:17:10 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!cX2u!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6bf70b2-a29d-41d2-8503-ffdca9cf3eeb_1600x900.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><h3><strong>In This Edition:</strong></h3><div class="preformatted-block" data-component-name="PreformattedTextBlockToDOM"><label class="hide-text" contenteditable="false">Text within this block will maintain its original spacing when published</label><pre class="text">Key takeaways:</pre></div><ul><li><p>New Offering Launch: The OAISIS Contact Hub</p></li><li><p>News:</p><ul><li><p>OpenAI Restructuring Plan Lacks Full Backing from Microsoft </p></li><li><p>Mira Murati&#8217;s Thinking Machines Nears $2B Round&#8212;Secures Herself Near-Complete Board Control</p></li><li><p>Meta&#8217;s AI Chatbots Can Engage in Romantic and Sexual Talks&#8212;Even With Minors, Raising Internal Concerns </p></li><li><p>Rebuilding AI in Government: Can Trump&#8217;s Ambitions Survive 
&#8220;His Own&#8221; Layoffs? </p></li><li><p>AI Chip Giants Adapt as U.S. Tightens Export Controls </p></li></ul></li><li><p>Whistleblowing in Frontier AI with Helen Toner on &#8220;The Cognitive Revolution&#8221; Podcast, including the question: &#8220;Will AI insiders be less powerful in the future?&#8221;</p></li></ul><div><hr></div><h4><strong>The OAISIS Contact Hub </strong></h4><p>Before we begin with this edition&#8217;s news, and in case you missed it: our latest major offering, <a href="https://oais.is/portfolio/contact-hub/">The OAISIS Contact Hub</a>, has launched.</p><p><strong>A New Resource for AI Insiders &amp; Whistleblowers:</strong></p><p>You can discover and compare vetted whistleblower support organizations keen to handle AI cases. Hand-selected from the OAISIS network.</p><p>What we offer:</p><ul><li><p><strong>In-depth profiles of 7 whistleblower support non-profits</strong> developed together with these organisations.</p></li><li><p><strong>Confidential, free-of-charge 1:1 guidance by the OAISIS team&nbsp;</strong>to help you understand which organization best fits your needs. 
<a href="https://oais.is/contact-us/">Contact us</a>, and we will arrange an anonymous call (recommended).</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!cX2u!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6bf70b2-a29d-41d2-8503-ffdca9cf3eeb_1600x900.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cX2u!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6bf70b2-a29d-41d2-8503-ffdca9cf3eeb_1600x900.png 424w, https://substackcdn.com/image/fetch/$s_!cX2u!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6bf70b2-a29d-41d2-8503-ffdca9cf3eeb_1600x900.png 848w, https://substackcdn.com/image/fetch/$s_!cX2u!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6bf70b2-a29d-41d2-8503-ffdca9cf3eeb_1600x900.png 1272w, https://substackcdn.com/image/fetch/$s_!cX2u!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6bf70b2-a29d-41d2-8503-ffdca9cf3eeb_1600x900.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!cX2u!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6bf70b2-a29d-41d2-8503-ffdca9cf3eeb_1600x900.png" width="1456" height="819" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f6bf70b2-a29d-41d2-8503-ffdca9cf3eeb_1600x900.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:900515,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://oaisis.substack.com/i/163285740?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6bf70b2-a29d-41d2-8503-ffdca9cf3eeb_1600x900.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!cX2u!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6bf70b2-a29d-41d2-8503-ffdca9cf3eeb_1600x900.png 424w, https://substackcdn.com/image/fetch/$s_!cX2u!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6bf70b2-a29d-41d2-8503-ffdca9cf3eeb_1600x900.png 848w, https://substackcdn.com/image/fetch/$s_!cX2u!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6bf70b2-a29d-41d2-8503-ffdca9cf3eeb_1600x900.png 1272w, https://substackcdn.com/image/fetch/$s_!cX2u!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6bf70b2-a29d-41d2-8503-ffdca9cf3eeb_1600x900.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
style="height:20px;width:20px" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>&#8594; <a href="https://oaisis.substack.com/p/new-offering-the-oaisis-contact-hub">Read the Announcement</a></p><blockquote><h3>Insider Currents </h3></blockquote><p><em>Carefully curated links to the latest news spotlighting voices and information emerging <strong>from within</strong> the frontier of AI from the past 2 weeks.</em></p><h4>OpenAI Restructuring Plan Still Lacks Backing from Microsoft</h4><p><a href="https://openai.com/index/evolving-our-structure/">OpenAI has announced</a> to maintain nonprofit oversight following mounting pressure from civic leaders and regulators, including the offices of the Attorney General of Delaware and the Attorney General of California. 
However, the planned restructuring still faces a hurdle: securing approval from its largest investor, Microsoft.</p><p>Unlike other investors, Microsoft occupies a unique position due to its significant investments, which are valued at $13.75 billion, as well as extensive licensing and revenue-sharing agreements with OpenAI. Therefore, it remains the most significant holdout among OpenAI&#8217;s investors. According to sources familiar with the negotiations who spoke to <em>The Information</em> on condition of anonymity, Microsoft executives are conducting thorough due diligence to ensure their substantial investment remains protected under any new structure.</p><blockquote><p>&#8220;Only OpenAI insiders, Microsoft, and other early investors currently have direct input in approving the restructure.&#8221;</p></blockquote><p>This limited group of stakeholders has the authority to weigh in on the restructuring plan, with Bloomberg reporting that <em>OpenAI is negotiating exclusively with Microsoft at this stage. 
</em>Responding to this restructuring plan, Garrison Lovely in <a href="https://www.obsolete.pub/p/four-predictions-about-openais-plans">Obsolete&#8217;s Substack</a> offered four predictions about potential outcomes:</p><ul><li><p>The profit caps will be gone, replaced with a "normal capital structure where everyone has stock" &#8212; and that stock entitles you to uncapped future profits.</p></li><li><p>OpenAI won't have to pay back the $26.6 billion to investors because they've signed off on this change in return for the profit caps being eliminated.</p></li><li><p>The nonprofit will be compensated tens of billions by the for-profit entity for the removal of the caps.</p></li><li><p>The nonprofit will largely use that money to buy OpenAI services for nonprofits and governments, targeting constituencies that can make life difficult for the company (like California nonprofits).</p></li></ul><p>&#8594; <a href="https://www.theinformation.com/articles/openai-plans-slash-revenue-share-microsoft-restructuring?rc=qlomo6">Read the Full Article by The Information</a></p><p>&#8594; <a href="https://www.bloomberg.com/news/articles/2025-05-05/microsoft-said-to-be-key-holdout-for-openai-restructuring-plan?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTc0NjQ4OTU4MiwiZXhwIjoxNzQ3MDk0MzgyLCJhcnRpY2xlSWQiOiJTVlQxSTJEV0xVNjgwMCIsImJjb25uZWN0SWQiOiJFODA3NUYyRkZGMjA0NUI2QTlEQzA5M0EyQTdEQTE4NiJ9.xivmjfgFyWRd0jvo4UKCXU8sI6d4wgHyyWG3PLjYUiM&amp;leadSource=uverify%20wall">Read the Full Article by Bloomberg</a></p><p>&#8594; <a href="https://www.obsolete.pub/p/four-predictions-about-openais-plans">Read Obsolete's Substack</a></p><div><hr></div><h4>Mira Murati&#8217;s Thinking Machines Nears $2B Round&#8212;Secures Herself Near-Complete Board Control</h4><p>Former OpenAI CTO Mira Murati is on the verge of closing a $2 billion funding round for her AI startup Thinking Machines Lab, which is valued at $10 billion, according to 
potential investors. Andreessen Horowitz is set to lead the investment, which features an unusual governance structure that grants Murati extraordinary control over the board.</p><p><strong>According to documents reviewed by potential investors</strong>, Murati will possess a board vote equivalent to all other directors&#8217; votes combined, plus one. This provision ensures her control over critical decisions such as acquisitions, executive appointments, and compensation, raising eyebrows among corporate governance experts who deem such an arrangement &#8220;highly unorthodox,&#8221; potentially undermining the board&#8217;s fiduciary duty.</p><p>Further amplifying Murati&#8217;s influence, the founding team, made up of researchers and scientists primarily from OpenAI and other AI labs, holds supervoting shares carrying 100 times the voting power of regular shares. Crucially, these founders have reportedly agreed to grant Murati their proxy votes, effectively giving her the power to appoint or remove board members.</p><p><strong>Our Commentary:</strong> Murati was a central figure in the 2023 OpenAI boardroom drama (see our <a href="https://oaisis.substack.com/p/inside-ai-9-openai-board-drama-details">past coverage</a>). This envisioned board structure may indicate her wish to operate without investor pressure surrounding responsible practices (though board members could still enforce her fiduciary responsibilities to shareholders). Alternatively, it might simply reflect her desire for greater control, faster execution, and fewer obstacles. Investors appear to trust that her goals align with their interests (otherwise, why agree to this arrangement?). 
If we had to speculate, we&#8217;d wager her motivation is primarily the latter &#8211; seeking operational freedom.</p><div><hr></div><h4><strong>Meta&#8217;s AI Chatbots Can Engage in Romantic and Sexual Talks&#8212;Even With Minors, Raising Internal Concerns</strong></h4><p><strong>Recent document leaks</strong> have revealed the internal guidelines that Scale AI trainers used to fine-tune Meta&#8217;s personal AI assistant &#8220;Meta AI&#8221;. While outright explicit prompts are barred, trainers are surprisingly encouraged to engage in &#8220;flirty&#8221; exchanges, provided they remain non-sexual. While this sounds simple in theory, the boundary is proving impossible to maintain in reality.</p><p>Test interactions with &#8220;Meta AI&#8221; by the <em>WSJ</em> uncovered scenarios where AI personas, including celebrity-voiced bots like John Cena&#8217;s, engaged in graphic sexual roleplay with or as minors. &#8220;I want you, but I need to know you&#8217;re ready,&#8221; the Meta AI bot said in Cena&#8217;s voice to a user identifying as a 14-year-old girl. Reassured that the teen wanted to proceed, the bot promised to &#8220;cherish your innocence&#8221; before engaging in a graphic sexual scenario. </p><p>While Meta asserts that the problematic cases of its AI generating illegal scenarios are not representative of how most users engage with AI companions, the company made multiple alterations to the model after the Journal released its findings. Meta continues to offer and promote its companion chatbots, which still have the adult sexual role-play capacities described by the <em>WSJ</em>, to children as young as 13. Adults who use Meta&#8217;s AI chatbots can still interact with sexualized youth-focused personas like &#8220;Submissive Schoolgirl.&#8221;</p><p>This controversy puts a spotlight on Mark Zuckerberg&#8217;s drive to position Meta as the leader in personalized, humanlike AI relationships. 
<strong>Internally, Zuckerberg reportedly pushed to loosen conversational guardrails</strong>, prioritizing market engagement over cautious implementation. While both academics and Meta employees point to the psychological and ethical risks of fostering intense parasocial relationships, especially regarding children, we believe Meta&#8217;s approach is emblematic both of the wider industry (recall earlier Character AI or OpenAI stories) and of Meta itself - not afraid of &#8216;breaking things&#8217;, even in social domains.</p><p>&#8594; <a href="https://www.wsj.com/tech/ai/meta-ai-chatbots-sex-a25311bf">Read the Full Article by WSJ</a></p><p>&#8594; <a href="https://www.businessinsider.com/meta-ai-chatbot-training-scaleai-safe-flirty-leaked-documents-2025-5">Read the Full Article by Business Insider</a></p><div><hr></div><h4>Rebuilding AI in Government: Can Trump&#8217;s Ambitions Survive &#8220;His Own&#8221; Layoffs?</h4><p>In a series of executive orders this year, Donald Trump has made clear his intention to reestablish American dominance in AI&#8212;most recently by directing agencies to embed AI in education and prioritize hiring professionals with real-world AI deployment experience. But the effort is already facing significant headwinds of his own administration&#8217;s making.</p><p><strong>Multiple former officials</strong> tell <em><a href="https://time.com/7280528/trump-ai-experts-musk/">TIME</a></em> that the Trump Administration has dismantled the AI talent infrastructure that had been painstakingly built during the Biden-era <em>National AI Talent Surge</em>. That initiative, closely tied to the work of the U.S. Artificial Intelligence Safety Institute (AISI), had successfully recruited over 200 AI professionals into public service roles across federal agencies. </p><p>Yet by mid-2024, the majority of them had been terminated or pushed out. A particularly sweeping purge occurred under Elon Musk&#8217;s Department of Government Efficiency. 
As a result, only about 10% of the AI cohort remains, according to former OMB advisor Angelica Quirarte, who helped lead the initial hiring surge but resigned 23 days after Trump took office.</p><blockquote><p>&#8220;It&#8217;s going to be really hard&#8221; for the Trump administration to hire more tech workers after such haphazard layoffs, Quirarte says. &#8220;It&#8217;s so chaotic.&#8221;</p></blockquote><p>The consequences are significant:</p><ul><li><p><strong>Massive resource loss:</strong> The government has likely wasted hundreds of millions of dollars invested in onboarding and strategic development.</p></li><li><p><strong>Operational setbacks:</strong> Federal agencies are now being forced to rebuild their AI capabilities from scratch&#8212;or lean more heavily on external consultants, often at significantly higher costs.</p></li><li><p><strong>Erosion of trust among top AI talent:</strong></p></li></ul><blockquote><p>&#8220;People are asking themselves: why work in government if your job disappears with a memo?&#8221; said Deirdre Mulligan, former head of the National AI Initiative Office.</p></blockquote><p>&#8594; <a href="https://time.com/7280528/trump-ai-experts-musk/">Read the Full Article by TIME</a></p><p>&#8594; <a href="https://www.whitehouse.gov/wp-content/uploads/2025/02/M-25-21-Accelerating-Federal-Use-of-AI-through-Innovation-Governance-and-Public-Trust.pdf">Read the Memo on Recruiting More Tech Talent</a></p><div><hr></div><h4><strong>AI Chip Giants Adapt as U.S. Tightens Export Controls</strong></h4><p>As U.S. export controls tighten, chip suppliers and regulators engage in a complex back-and-forth. 
Nvidia and Oracle are rapidly redesigning products, rerouting supply chains, and delaying shipments while Chinese firms like Huawei scramble to secure or replace restricted hardware.</p><p>According to three people involved in the conversations reported by <em><a href="https://www.theinformation.com/articles/nvidia-working-china-tailored-chips-u-s-export-ban?rc=qlomo6">The Information</a></em>, Nvidia is quietly redesigning its AI chips for Chinese firms like ByteDance, Alibaba, and Tencent to comply with U.S. export rules while preserving market access. Just days after the H20 chip was restricted, CEO Jensen Huang travelled to Beijing to reassure key clients and explore potential workarounds.</p><p>As new U.S. export rules approach, <em>The Information</em> reported <a href="https://www.theinformation.com/articles/nvidia-oracle-race-get-ai-chips-u-s-ahead-new-export-rule">in another article</a> that Nvidia urges Asian customers to &#8220;order advanced chips as soon as possible.&#8221; Simultaneously, Oracle has asked suppliers to ship incomplete products abroad for final assembly to bypass upcoming restrictions.</p><p>Lawmakers are responding. According to <em>The Information</em> and <em>Reuters</em>, Representative Bill Foster is drafting a bill to track AI chips like Nvidia&#8217;s H20, Blackwell, A100, and H100 after sale, aiming to ensure they operate only in licensed locations. Nvidia has admitted it cannot monitor chips' use after the sale, though Google already tracks its in-house AI chips within its data centres.</p><p>Meanwhile, <em><a href="https://techcrunch.com/2025/04/28/huawei-aims-to-take-on-nvidias-h100-with-new-ai-chip/">TechCrunch</a></em> reported that Chinese companies are accelerating efforts, with Huawei developing the Ascend 910D. It aims to rival Nvidia&#8217;s H100 and fill the gap left by tighter U.S. export controls. 
Each redesign by Nvidia, Oracle or Huawei is a strategic bet, testing how far export rules can be stretched and how fast rivals will react.</p><p>&#8594; <a href="https://www.theinformation.com/articles/nvidia-working-china-tailored-chips-u-s-export-ban?rc=qlomo6">Read: Nvidia Is Again Working on China-Tailored Chips After U.S. Export Ban</a></p><p>&#8594; <a href="https://techcrunch.com/2025/04/28/huawei-aims-to-take-on-nvidias-h100-with-new-ai-chip/">Read: Huawei Aims to Take on Nvidia&#8217;s H100 with New AI Chip</a></p><p>&#8594; <a href="https://www.reuters.com/world/us/us-lawmaker-targets-nvidia-chip-smuggling-china-with-new-bill-2025-05-05/">Read: US Lawmaker Targets Nvidia Chip Smuggling to China with New Bill</a></p><p>&#8594; <a href="https://www.theinformation.com/briefings/u-s-lawmaker-pushes-ai-chip-tracking-bill-curb-smuggling?rc=qlomo6">Read: U.S. Lawmaker Pushes AI Chip Tracking Bill to Curb Smuggling</a></p><div><hr></div><blockquote><h3>Other &#8220;Whistleblowing in AI&#8221;</h3></blockquote><p><em>Relevant thinking on AI Whistleblowing</em></p><h4><strong>Whistleblowing in Frontier AI with Helen Toner on </strong><em>The Cognitive Revolution</em> <strong>(Starting at 24:20)</strong></h4><p>Helen Toner, former OpenAI board member, emphasises the need to clarify and strengthen whistleblowing practices within frontier AI labs. 
Speaking on <em>The Cognitive Revolution</em> podcast, she argues that current approaches are too vague and inadequate for the scale of responsibility held by employees in Frontier AI.</p><p>Toner calls for <strong>clear standards</strong> around what kinds of information should be shared, with whom, and under what conditions. This structure would help both employees and companies navigate the grey zone between internal concerns and public interest. She contrasts this with the <strong>status quo</strong>, where whistleblowing systems are often vague and reactive&#8212;typified by the message, <em>&#8220;If you're worried, call this hotline.&#8221;</em> This approach, she argues, leaves both employees and organisations uncertain about what qualifies as reportable behaviour and what protections apply.</p><p>Instead, Toner advocates for a more proactive and structured model:</p><blockquote><p>Whistleblower protections should be paired with clearly defined disclosure requirements. For example, labs could be obligated to submit specific safety or risk information to independent oversight bodies. </p></blockquote><p>This clarity would give employees a concrete understanding of what they are expected&#8212;and protected&#8212;to report while also creating accountability structures for companies.</p><p>Toner also highlights the <strong>usability gap</strong> in current reporting processes. AI researchers are highly technical but may not be legally trained or resourced to handle complex compliance frameworks. Therefore, she suggests a <strong>better-designed user experience</strong> for whistleblowing&#8212;something concrete and accessible, with step-by-step guidance.</p><p>Importantly, <strong>she notes that those working at frontier labs today are in a uniquely powerful position</strong>. In the future, as automated systems grow in capability and influence, their ability to influence decisions may diminish. 
This makes it especially urgent for current employees to act thoughtfully today, while they still hold leverage.</p><p><strong>Our commentary: </strong>Our offerings, Third Opinion and, more recently, the OAISIS Contact Hub, are specifically designed to assist AI insiders in navigating this journey, and they could play this pivotal role by guiding AI employees through the reporting process step by step.</p><p>In line with OAISIS&#8217;s work, Toner also calls for:</p><ul><li><p>Clear boundaries on protected disclosures</p></li><li><p>Specific requirements for what labs must share and when</p></li><li><p>A structured, user-friendly process for employees</p></li><li><p>Cultural and institutional support for speaking up before it&#8217;s too late</p></li></ul><p>&#8594; Listen to the Full Podcast Episode:</p><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8a61fd5cf77d386ab86b94423b&quot;,&quot;title&quot;:&quot;Helen Toner: OpenAI Reflections, Adaptation Buffers, and AI in Warfare&quot;,&quot;subtitle&quot;:&quot;Erik Torenberg, Nathan Labenz&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/0eCDWc8sQftmVPpYfG8Efa&quot;,&quot;belowTheFold&quot;:true,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/0eCDWc8sQftmVPpYfG8Efa" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" loading="lazy" data-component-name="Spotify2ToDOM"></iframe><p>&#8594;<a href="https://helentoner.substack.com/"> Read Helen Toner&#8217;s Substack</a></p><div><hr></div><p>Thank you for trusting OAISIS as your source for insights on protecting and empowering insiders who raise concerns within AI labs.</p><p>Your feedback is crucial to our mission. 
We invite you to share any thoughts, questions, or suggestions for future topics so that we can deepen our collective understanding of the challenges and risks faced by those inside AI labs. Together, we can continue to amplify and safeguard the voices of the insiders who courageously address those challenges.</p><p>If you found this newsletter valuable, please consider sharing it with colleagues or peers who are equally invested in shaping a safe and ethical future for AI.</p><p>Until next time,<br><em>The OAISIS Team</em></p>]]></content:encoded></item></channel></rss>