 
 

ORCAA Collaborative Expert Assistance Network

We defend the public interest when algorithms don’t

 

Our Mission

OCEAN (ORCAA Collaborative Expert Assistance Network) is a new nonprofit fighting to ensure that AI and other automated decision systems are safe, fair, and accountable. We work as technologists in the public interest — bringing deep technical expertise to the people and institutions trying to rein in harmful algorithms.

What We Do

Alt text: Icon of scales of justice on a navy document, with magnifying glass

We support lawsuits on behalf of people harmed by algorithmic systems.

Alt text: Icon of two people shaking hands with yellow sparkles

We train and advise lawyers, judges, regulators, and policymakers to challenge and set standards for these systems.

Alt text: Icon of magnifying glass over charts

We conduct independent algorithmic audits.

Our Story


We launched OCEAN after nearly a decade at the forefront of algorithmic auditing. Through our consultancy ORCAA, we’ve worked across industries — insurance, hiring, housing, credit, social media — and technologies, from facial recognition to scoring systems to large language models. We’ve helped regulators like the Colorado Division of Insurance enforce groundbreaking AI laws, and worked with D.C.’s insurance commissioner to investigate racial bias in auto insurance.

But things are moving faster now. New AI systems are being built and deployed at breakneck speed, often in high-stakes areas — and people are getting hurt. For example, Robert Julian-Borchak Williams was wrongfully arrested due to faulty facial recognition. And Sewell Setzer III, a teenager, died by suicide after forming a relationship with an AI chatbot.

The rules can’t keep up. The regulators are underfunded. And in too many cases, the courts are the only line of defense.

That’s why we started OCEAN — to meet the moment. We’ll show up in courtrooms, support those on the front lines, and build legal and public sector muscle to confront algorithmic harm.

And we can’t do it alone. This fight is dangerously lopsided against the public interest. In 2024, big tech spent over $60 million on lobbying, employing one lobbyist for every two members of Congress. And that's a small fraction of what they spent on legal fees, settlements, and fines. Meanwhile, NIST, the federal agency tasked with AI safety, requested $45 million from Congress to create an AI safety institute and got just $10 million. Civil society has to step up to even the playing field.

Technology — and the companies behind it — must follow the same laws that protect the rest of us.

That’s what OCEAN is here to ensure. If you agree, and want to help, get in touch.


Our Approach

Education and Analysis

Educational leadership is key to OCEAN’s mission. We educate the public, governments, and regulators about the impacts of algorithms and artificial intelligence. We convene our experienced network of collaborators and advisors to set standards for algorithms and artificial intelligence that reflect best practices in our field.

Context-based Auditing

We believe algorithmic audits should start with a fundamental question: For whom could this fail? Our Ethical Matrix framework centers stakeholders—the real people whose lives are touched by AI systems—at the heart of every audit. Rather than focusing solely on technical metrics, we engage directly with those affected to understand their concerns about how systems might help or harm them. This transforms the overwhelming question "How could this AI system fail?" into specific, actionable insights that can be monitored and addressed.

Explainable Fairness

The Explainable Fairness framework is a process for defining and measuring whether an algorithm complies with an anti-discrimination law. The central question is: does the algorithm treat certain groups of people differently than others? The framework has three steps: choose a protected class, choose an outcome of interest, and measure and explain differences.
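The three steps can be sketched in code. This is a toy illustration only, not OCEAN’s actual methodology: the dataset, the group labels, and the simple rate-gap metric are all invented for demonstration, and a real audit would also explain any gap in terms of legitimate factors.

```python
# Toy sketch of the Explainable Fairness steps (all data invented):
# Step 1: choose a protected class (here, an "age_group" label).
# Step 2: choose an outcome of interest (here, loan approval).
# Step 3: measure differences in the outcome across groups.

records = [
    {"age_group": "under_40", "approved": True},
    {"age_group": "under_40", "approved": True},
    {"age_group": "under_40", "approved": False},
    {"age_group": "40_plus", "approved": True},
    {"age_group": "40_plus", "approved": False},
    {"age_group": "40_plus", "approved": False},
]

def approval_rate(rows, group):
    """Share of applicants in `group` with a positive outcome."""
    in_group = [r for r in rows if r["age_group"] == group]
    return sum(r["approved"] for r in in_group) / len(in_group)

rate_young = approval_rate(records, "under_40")
rate_older = approval_rate(records, "40_plus")

# The raw gap is only the starting point; the framework's final step
# is explaining it (e.g., by factors a law recognizes as legitimate).
gap = rate_young - rate_older
print(f"under_40: {rate_young:.2f}, 40_plus: {rate_older:.2f}, gap: {gap:.2f}")
```

In practice the outcome, the group definitions, and what counts as an acceptable explanation all depend on the specific anti-discrimination law at issue.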

Data Call Expertise

We are experts in drafting data calls for lawsuits where there is an opportunity for discovery. The process starts with a list of plain-language, non-technical statements the legal team would want to make to the judge. We then work backwards to figure out what evidence would support those statements, and one step further back to infer what data would be needed to produce that evidence. We have experience requesting such data in a precise, concise format that elicits the highest-quality responses.

Technical Support & Procurement

Many institutions working for the public interest and societal good both develop and deploy algorithms. We provide the technical assistance needed at every stage of the product process, integrating feedback from communities and employees, whose insights help guide our technical recommendations. Likewise, many nonprofit and community organizations are navigating which AI tools, if any, they should incorporate into their work. OCEAN can help assess the suitability of such tools and advise whether they will work as intended and whether they may carry unintended consequences.


We are OCEAN

Our board provides us with sector expertise and oversight.

Tom Adams

Executive Director

Thomas Adams has over twenty-five years of business and legal experience. He has represented banks, companies, and individuals on corporate, securities, and business law matters. He has also provided strategic advice, litigation support, and expert witness testimony on issues relating to the financial crisis.

Mr. Adams is an expert in creating solutions and solving problems for complex financial and corporate transactions and has provided strategic advice and analysis to banks, insurance companies, private equity companies, hedge funds and a variety of other companies. He graduated from Fordham Law School in 1989 and Colgate University in 1986. He is admitted to practice in New York.

Cathy O’Neil

Board Member

Cathy O'Neil has been an independent data science consultant since 2012, advising clients including the Illinois Attorney General's Office and Consumer Reports. She founded ORCAA, an algorithmic auditing company, and received her PhD in mathematics from Harvard. She honed her analytical skills working as a quant at D.E. Shaw and a professor at Barnard College.

O'Neil is the author of Doing Data Science (2013), the bestselling Weapons of Math Destruction (2016), which won the Euler Book Prize and was longlisted for the National Book Award, and The Shame Machine (2022). She launched Columbia University's Lede Program for data journalism and is a regular contributor to Bloomberg Opinion.

Jacob Appel

Board Member

Jake is an algorithmic auditor with deep expertise in assessing the performance of algorithms and their impacts on stakeholders. As ORCAA's Chief Strategist for over six years, he has specialized in designing tests and analyses to evaluate how algorithms perform.

Before joining ORCAA he worked with the Behavioral Insights Team, where he advised state and local governments on incorporating behavioral science “nudges” into citizen-facing policies and programs, and testing them with randomized experiments. Jake holds a BA in Mathematics from Columbia University and an MPA from Princeton's Woodrow Wilson School of Public and International Affairs. He is co-author of More Than Good Intentions: How a new economics is helping to solve global poverty and Failing in the Field: What we can learn when field research goes wrong.

Şerife Wong

Chairman of the Board

Şerife Wong is an artist and researcher who investigates the complex interplay of power, narratives, and technology through her work at Icarus Salon. As an affiliate of O'Neil Risk Consulting and Algorithmic Auditing and an affiliate research scientist at Kidd Lab, UC Berkeley, she addresses the societal impacts of AI. Wong serves on the boards of Gray Area and Tech Inquiry, and as the AI governance lead at the Tech Diplomacy Network. 


Her work has been honored with many awards, including a residency at the Rockefeller Foundation Bellagio Center, a research fellowship at the Berggruen Institute, a Mozilla Creative Award, a Salzburg Global fellowship, and a Creative Capital award. She is a frequent collaborator with the Center for Advanced Study in the Behavioral Sciences at Stanford, worked at the Electronic Frontier Foundation, and served as a board member for Digital Peace Now.

Laura Strausfeld

Board Member

Laura Strausfeld specializes in constitutional law advocacy and policy reform. She is the Associate Director of Institutional Relations at the Brennan Center for Justice, where she manages foundation, corporation, and law firm fundraising. Her law and policy nonprofit, Period Law, continues the work she began at Period Equity, fighting for tax-free, toxin-free menstrual supplies that are freely available to everyone who needs them.

Strausfeld has a wide-ranging project-based background, including as a plaintiffs’ attorney; development strategist for nonprofits and companies including the Economic Hardship Reporting Project and Agenda Management + Production; writer, director, and producer of theater and film; and Anton Chekhov scholar at Columbia University’s Harriman Institute. She has a BA in history from Yale University and a JD from Columbia University.

Andrew Smart

Board Member

Andrew Smart is a Senior Research Scientist at Google Research investigating the philosophical and social foundations of AI. His interests range from algorithmic auditing to social ontology. He is also a PhD candidate in philosophy at the Australian National University, where he is investigating the relationships between social ontology, causality, and estimating the risks and impacts of machine learning in high-stakes domains. He is the author of two books and more than 30 peer-reviewed papers on AI, society, and ethics.

Prior to Google, Smart was a research scientist at Twitter, Novartis, and Honeywell Aerospace, working on data science, medical device safety, clinical research, and safety engineering in aviation. He holds a master's degree in cognitive science from Lund University and worked as a junior research scientist at NYU on brain imaging of human language.

Partnerships

Partnerships are essential to OCEAN’s work. Only by bringing together foundations, public interest groups, and communities can we leverage our combined expertise to hold technology and tech companies accountable. If you are working to ensure algorithmic tools benefit the public rather than harm it, get in touch.

Collaborators & Communities

Can OCEAN help you?

Has an algorithmic system harmed you or your community? Are you a nonprofit that could use OCEAN’s expertise? Are you involved in a lawsuit against a tech company? Contact us for an initial consultation.

Funders

OCEAN’s work is not possible without support from our funders. Thank you.

Get in Touch

For general inquiries, please email hello@oceannetwork.net

For press inquiries, please email hello@oceannetwork.net