 
 

ORCAA Collaborative Expert Assistance Network

We defend the public interest when algorithms don’t

 

The Need

AI is already harming people in devastating ways. We’ve seen ChatGPT-assisted suicides and psychosis, insurance AIs denying life-saving treatments, and biased hiring tools shutting people out of jobs they are qualified for. These aren’t abstract risks—they are real harms with life-or-death consequences.

And yet, the people most affected—children, the elderly, the poor, the disabled, and communities of color—have the fewest options for recourse. Powerful corporations can afford teams of experts and lawyers to defend their algorithms. The public cannot. Courts, meanwhile, are ill-prepared to scrutinize these complex systems, leaving victims with little chance of redress.

This imbalance has created a dangerous gap: those suffering the most harm from AI are also those least able to fight back. Without intervention, the technology will continue to advance unchecked, deepening inequities and eroding public trust. OCEAN was founded to close this gap and make sure accountability is within reach for everyone.

What We Do

OCEAN (ORCAA Collaborative Expert Assistance Network) defends the public interest when algorithms don’t. We put world-class algorithmic auditing expertise—honed over nearly a decade—into the hands of people and institutions fighting for accountability. OCEAN brings technical expertise to those who need it most, not just those who can pay the most.

Because federal and international regulation is lagging, we help fill the gap by:

  • Leveling the playing field in court through expert analysis in lawsuits on behalf of people harmed by AI.

  • Equipping the justice system by training judges and lawyers to interrogate complex AI systems with rigor and apply the laws and regulations that do exist.

  • Advancing stronger protections by transforming hard-won legal precedents into durable state laws and standards.

Our work shifts power: ensuring those harmed by AI aren’t left powerless—and that companies deploying dangerous technologies are held to account.

HOW WE DO IT

OCEAN pairs deep technical expertise with people-driven advocacy, making accountability possible across the courts, policymaking, and the public sphere. Our approach rests on four pillars:

  • Centering people, not just code. We use our Ethical Matrix framework to ask “for whom could this fail?”—engaging directly with those most affected by algorithms. This ensures audits measure and monitor real-world harms and create real-world positive impacts.

  • Strengthening legal action. We partner with public interest lawyers, nonprofits, and communities to expose algorithmic harms in court, help launch new cases, and equip judges and attorneys to interrogate AI systems effectively.

  • Turning insight into impact. We educate the public, policymakers, and regulators on algorithmic risks and solutions, and we translate lessons from lawsuits into stronger, lasting laws and rules.

  • Guiding standards and adoption. We convene experts to set best practices for algorithmic accountability, and we advise nonprofits and mission-driven organizations on whether and how to adopt AI tools responsibly.

Together, these pillars ensure that harmful AI can be challenged in real time, while also building the standards and safeguards needed for the long term.

OUR VISION

We envision a future where people—not algorithms—decide what justice looks like. A future where courts, regulators, and communities have the expertise to hold powerful companies accountable. 

In this future, accountability isn’t optional. AI and algorithmic systems are safe, fair, and ethical because strong laws and standards require them to be. The public interest—not corporate profit—guides how technology is designed and deployed.

OCEAN exists to make that future real: shifting power back to the people and ensuring that those most at risk of harm are never left without recourse.

NONPROFIT STATUS

Contributions to OCEAN, Inc. are tax-deductible to the extent permitted by law. We are a registered 501(c)(3) nonprofit organization. Form 1023 and IRS determination letter available upon request.

Take Action

We offer

Data expertise in plaintiff lawsuits on behalf of those harmed by algorithms

We are experts in drafting data calls in lawsuits where there is an opportunity for discovery. We start with a list of plain-English, non-technical statements that the legal team would want to make to the judge. We then work backwards to determine what evidence would support those statements, and backwards once more to infer what data would be needed to produce that evidence. We have experience requesting such data in a precise, concise format that elicits the highest-quality responses.

Education to build capacity in the legal system for investigating AI and algorithmic systems

Educational leadership is key to OCEAN’s mission. We educate judges, lawyers, and policymakers to make them aware of how algorithmic auditing works and what it can accomplish.

AI policy assistance at the state and city level

We help legislators and regulators develop robust AI policies and rules. This includes drafting model legislation, commenting on draft legislation, and other technical assistance.


We are OCEAN

Our board provides us with sector expertise and oversight.

Şerife Wong

Board Chair

Şerife Wong is an artist and researcher who investigates the complex interplay of power, narratives, and technology through her work at Icarus Salon. As an affiliate of ORCAA and an affiliate research scientist at Kidd Lab, UC Berkeley, she addresses the societal impacts of AI. Wong serves on the boards of Gray Area and Tech Inquiry, and as the AI governance lead at the Tech Diplomacy Network. 

Her work has been honored with many awards, including a residency at the Rockefeller Foundation Bellagio Center, a research fellowship at the Berggruen Institute, a Mozilla Creative Award, a Salzburg Global fellowship, and a Creative Capital award. She is a frequent collaborator with the Center for Advanced Study in the Behavioral Sciences at Stanford, worked at the Electronic Frontier Foundation, and served as a board member for Digital Peace Now.

Tom Adams

Executive Director

Thomas Adams has over twenty-five years of business and legal experience. He has represented banks, companies and individuals on corporate, securities and business law matters. He also provided strategic advice, litigation support and expert witness testimony on issues relating to the financial crisis.

Mr. Adams is an expert in solving problems and crafting solutions for complex financial and corporate transactions, and he has provided strategic advice and analysis to banks, insurance companies, private equity firms, hedge funds, and a variety of other companies. He graduated from Fordham Law School in 1989 and Colgate University in 1986. He is admitted to practice in New York.

Laura Strausfeld

Board Member

Laura Strausfeld specializes in constitutional law advocacy and policy reform. She is the Associate Director of Institutional Relations at the Brennan Center for Justice, where she manages foundation, corporation, and law firm fundraising. Her law and policy nonprofit, Period Law, continues the work she began at Period Equity, fighting for tax-free, toxin-free menstrual supplies that are freely available to everyone who needs them.

Strausfeld has a wide-ranging project-based background, including as a plaintiffs’ attorney; development strategist for nonprofits and companies including the Economic Hardship Reporting Project and Agenda Management + Production; writer, director, and producer of theater and film; and Anton Chekhov scholar at Columbia University’s Harriman Institute. She has a BA in history from Yale University and a JD from Columbia University.

Jacob Appel

Board Member / Officer

Jake is an algorithmic auditor with deep expertise in assessing how algorithms perform and how they affect stakeholders. As ORCAA's Chief Strategist for over six years, he has specialized in designing tests and analyses to evaluate algorithmic systems.

Before joining ORCAA he worked with the Behavioral Insights Team, where he advised state and local governments on incorporating behavioral science “nudges” into citizen-facing policies and programs, and testing them with randomized experiments. Jake holds a BA in Mathematics from Columbia University and an MPA from Princeton's Woodrow Wilson School of Public and International Affairs. He is co-author of More Than Good Intentions: How a new economics is helping to solve global poverty and Failing in the Field: What we can learn when field research goes wrong.

Andrew Smart

Board Member

Andrew Smart is a Senior Research Scientist at Google Research investigating the philosophical and social foundations of AI. His interests range from algorithmic auditing to social ontology. He is also a PhD candidate in philosophy at the Australian National University, where he is investigating the relationships between social ontology, causality, and estimating risks and impacts of machine learning in high-stakes domains. He is the author of two books and more than 30 peer-reviewed papers on AI, society, and ethics.

Prior to Google, Smart was a research scientist at Twitter, Novartis, and Honeywell Aerospace, working on data science, medical device safety, clinical research, and safety engineering in aviation. He holds a master's degree in cognitive science from Lund University and worked as a junior research scientist at NYU on brain imaging of human language.

Cathy O’Neil

Board Member / Officer

Cathy O'Neil has been an independent data science consultant since 2012, advising clients including the Illinois Attorney General's Office and Consumer Reports. She founded ORCAA, an algorithmic auditing company, and received her PhD in mathematics from Harvard. Her analysis was honed working as a quant at D.E. Shaw and a professor at Barnard College.

O'Neil is the author of Doing Data Science (2013), the bestselling Weapons of Math Destruction (2016), which won the Euler Book Prize and was longlisted for the National Book Award, and The Shame Machine (2022). She launched Columbia University's Lede Program for data journalism and is a regular contributor to Bloomberg Opinion.

OUR EXPERTISE

OCEAN leverages nearly a decade of experience gained through ORCAA (O’Neil Risk Consulting and Algorithmic Auditing), a consultancy founded in 2016 to set good standards for algorithmic auditing and protect people from harm. ORCAA is the clear leader in this field. It has:

  • Audited dozens of AI and other automated systems in high-stakes areas including credit, insurance, hiring, housing, and social media.

  • Assisted public officials investigating and overseeing AI and algorithmic systems, including

    • Two state Insurance Commissioners

    • Attorneys General in three states, as well as three multistate AG investigations

    • Two federal agencies

    • Four state agencies in two states

  • Met with and provided advice, input on draft rules or legislation, or other technical assistance to:

    • Five US Congressional offices,

    • Multiple Congressional committees and working groups,

    • Legislators and regulators in four US states,

    • Regulators from the EU and Canada.

Partnerships

Partnerships are essential to OCEAN’s work. Only by bringing together foundations, public interest groups, and communities can we leverage our combined expertise to hold technology and tech companies accountable. If you are working to ensure algorithmic tools benefit the public rather than harm them, get in touch.

Collaborators & Communities

Can OCEAN help you? Has AI harmed you or others in your community? Are you currently pursuing justice through the courts? Contact us for an initial consult.

Funders

OCEAN’s work is not possible without support from our funders. Thank you.

Get in Touch

For inquiries, please email hello@oceannetwork.net