
AI Pioneer Robin Rowe Speaking at Free AI Ethics Summit on 30 July 2025

Join activists, policymakers, and tech leaders, in person in San Francisco or virtually online, as together we seek meaningful reforms in AI

BALTIMORE, MD, UNITED STATES, July 29, 2025 /EINPresswire.com/ -- "AI is much more capable today, for good or evil, than many people understand," says AI pioneer Robin Rowe.

• When: National Whistleblower Day on 30 July 2025 at 8am-5pm PDT (Robin Rowe 1:35-2:00 pm)
• What: Join activists, policymakers, and tech leaders, in person in San Francisco or virtually online, as together we seek meaningful reforms in AI transparency and accountability
• Why: AI is transforming society, redefining life, work, and the battlefield in ways that present substantial risks which, if left unchecked, can perpetuate and amplify prejudices, violate privacy, erode trust, create conflict, threaten human autonomy, and cost lives
• Where: San Francisco and virtually online (Zoom)
• Who: The AI Ethics Suchir Balaji Memorial Summit is hosted by the Suchir Balaji Foundation
• Free: Register to attend at https://conference.suchir.ai/

AI Is Already Dangerous and There’s No Safety Commission

On 5 June 2025, during an interview on CNBC's Squawk on the Street, Palantir CEO Alex Karp said, "My general bias on AI is it is dangerous." Karp speaks from the vantage point of a company that delivers what may be considered the most dangerous and lethal surveillance AI systems in existence. Military-grade AI surveillance systems are offered by many companies, not only Palantir. They are sold to governments worldwide to use as they see fit, whether to streamline healthcare, to help police rapidly solve crimes, to wage wars with hunter-killer drones, or to eliminate dissidents or "undesirables" with all-seeing-eye police-state surveillance, all available at the push of a button.

Karp says there is an artificial intelligence arms race between the U.S. and China, and that "either we win or China will win." However, the outcome of another arms race, the one for nuclear weapons, was that nobody won. Everyone lives under the constant threat of being one minute from doomsday, where any unstable world leader with dementia or a narcissistic impulse to end civilization can push the button, or threaten to do so as extortion.

Unlike with atomic weapons, there is no equivalent of the International Atomic Energy Agency (IAEA), the U.S. Nuclear Regulatory Commission (NRC), or the U.S. Nuclear Weapons Council (NWC) charged with overseeing the safety and security of AI, no body to prevent weapons of mass destruction, implemented as advanced AI surveillance models, from being used to identify, track, abduct, and kill opponents or dissidents wherever they may be in the world.

About Robin Rowe

Robin Rowe is executive director of the AI institute Fountain Abode and CEO of the AI design firm Heroic Robots. Working for DARPA, the research arm of the U.S. Department of Defense, Rowe created real-time crisis detection AI integrated into the U.S. national defense system, and sailed on the aircraft carrier USS Lincoln to test it at sea. Professor Rowe teaches AI and cybersecurity at CCBC, the largest defense workforce training college in the Washington, D.C. region, with 50,000 students enrolled annually. The CCBC cybersecurity program is designated by the NSA and DHS as a National Center of Academic Excellence in Cyber Defense.

AI Ethics Suchir Balaji Memorial Summit - Conference Schedule

July 30, 2025, Wednesday

• 8:00-8:10 AM Opening & Legacy
• 8:10-8:30 AM AI Ethics Keynote
• 8:30-9:30 AM Panel 1: The Cost of Truth
• 9:45-10:30 AM Panel 2: Global Accountability
• 10:30-11:30 AM Workshop: Drafting the Model AI Suchir Act
• 11:30 AM-12:30 PM "Broken Systems" Debate
• 12:30-1:15 PM Lunch & Networking
• 1:15-1:35 PM AI Copyright Keynote
• 1:35-2:00 PM Panel 3: Whistleblower Safeguards
• 2:00-2:45 PM Panel 4: Enforcing AI Transparency
• 2:45-3:30 PM Panel 5: Responsible AI in Practice
• 3:30-3:45 PM Break
• 3:45-4:15 PM Panel 6: Coalition Building
• 4:15-4:35 PM Legacy & Action Ceremony
• 4:35-4:45 PM Closing: Balaji Ramamurthy Call to Action

About AI Researcher Suchir Balaji

Suchir Balaji (November 21, 1998 – November 26, 2024) was an American artificial intelligence researcher who accused his former employer, OpenAI, of violating United States copyright law. OpenAI cofounder John Schulman, in a eulogy to Balaji, stated, "Suchir's contributions to this project were essential, and it wouldn't have succeeded without him." Schulman had hired Balaji shortly after Balaji graduated from the University of California, Berkeley. In four years at OpenAI, Balaji gathered and organized the Internet data used to train GPT-4 and ChatGPT. Balaji left the company in August 2024, saying, "If you believe what I believe, you have to just leave the company."

On 23 October 2024, The New York Times published an interview with Balaji. He stated that LLMs like ChatGPT violate United States copyright law because they are trained on the products of business competitors and may produce exact copies. The New York Times article referred to Balaji's self-published essay, "When does generative AI qualify for fair use?", which argues that reinforcement learning from human feedback (RLHF), a process used to reduce AI hallucinations, can go too far and plagiarize the training data by creating 1:1 copies that are no longer generative. At the time, OpenAI was being sued for copyright infringement by prominent authors and news publishers, including The New York Times.

On 18 November 2024, a court filing by The New York Times named Balaji as a potential witness who might have "relevant documents" in the copyright case against OpenAI. A month later, on 26 November 2024, Balaji, age 26, was dead. Balaji's death by gunshot drew widespread attention due to his whistleblower status and claims of foul play made by his parents and others. However, the final conclusion of the investigation by the San Francisco Police Department and the San Francisco Office of the Chief Medical Examiner was that Balaji shot himself, a suicide.

On 8 February 2025, Fortune magazine published an article, "An OpenAI whistleblower was found dead in his apartment. Now his mother wants answers," comparing the death of Balaji to the controversial death of John Barnett, the Boeing whistleblower who died on the third day of giving a deposition by what authorities concluded was a self-inflicted gunshot.

Links

https://conference.suchir.ai/
https://www.cnbc.com/2025/06/05/palantir-karp-ai-dangerous-china.html
https://en.wikipedia.org/wiki/Suchir_Balaji
https://suchir.net/fair_use.html
https://en.wikipedia.org/wiki/John_Barnett_(whistleblower)

Robin Rowe
Fountain Abode
+1 323-535-0952
robin.rowe@fountainabode.org
Visit us on social media:
LinkedIn


Distribution channels: Conferences & Trade Fairs, Human Rights, IT Industry, Technology, U.S. Politics

Legal Disclaimer:

EIN Presswire provides this news content "as is" without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.