
ORCAA Consulting & Expertise Network

 

What does OCEAN do?

OCEAN is a new nonprofit fighting to ensure that AI and other automated decision systems are safe, fair, and accountable. We work as technologists in the public interest — bringing deep technical expertise to the people and institutions trying to rein in harmful algorithms.

We do three things:

  • We support lawsuits on behalf of people harmed by algorithmic systems.

  • We train and advise lawyers, judges, regulators, and policymakers to challenge and set standards for these systems.

  • We conduct independent algorithmic audits.

We launched OCEAN after nearly a decade at the forefront of algorithmic auditing. Through our consultancy ORCAA, we’ve worked across industries — insurance, hiring, housing, credit, social media — and technologies, from facial recognition to scoring systems to large language models. We’ve helped regulators like the Colorado Division of Insurance enforce groundbreaking AI laws, and worked with D.C.’s insurance commissioner to investigate racial bias in auto insurance.

But things are moving faster now. New AI systems are being built and deployed at breakneck speed, often in high-stakes areas — and people are getting hurt. For example, Robert Julian-Borchak Williams was wrongfully arrested due to faulty facial recognition. And Sewell Setzer III, a teenager, died by suicide after forming a relationship with an AI chatbot.

The rules can’t keep up. The regulators are underfunded. And in too many cases, the courts are the only line of defense.

That’s why we started OCEAN — to meet the moment. We’ll show up in courtrooms, support those on the front lines, and build legal and public sector muscle to confront algorithmic harm.

But we can’t do it alone. This fight is dangerously lopsided against the public interest. In 2024, big tech spent over $60 million on lobbying, employing one lobbyist for every two members of Congress. And that's a small fraction of what they spent on legal fees, settlements, and fines. Meanwhile, NIST, the federal agency tasked with AI safety, requested $45 million from Congress to create an AI safety institute and got just $10 million. Civil society has to step up to even the playing field.

Technology — and the companies behind it — must follow the same laws that protect the rest of us.
That’s what OCEAN is here to ensure. If you agree, and want to help, get in touch.


Data Call Expertise

We are experts in drafting data calls in lawsuits where there is an opportunity for discovery.

The process starts with a list of plain-English, non-technical statements the legal team would want to make to the judge. We then work backwards to determine what kind of evidence would support those statements, and take one more step backwards to infer what kind of data would be needed to produce that evidence.

We have experience requesting such data in a precise, concise format that elicits the highest-quality responses.
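As an illustration, here is a minimal sketch, in Python, of how that backwards mapping can be organized. The statements, field names, and data structure are hypothetical, chosen only to show the chain from claim to evidence to data; they are not a template drawn from an actual data call.

```python
# Hypothetical sketch: each statement the legal team wants to make is tied to
# the evidence that would support it, and to the data fields the call must request.
from dataclasses import dataclass


@dataclass
class DataCallItem:
    statement: str          # plain-English claim for the judge
    evidence: str           # analysis that would support the claim
    data_fields: list[str]  # records needed to produce that analysis


items = [
    DataCallItem(
        statement="Female applicants were screened out at a higher rate than male applicants.",
        evidence="Resume-stage screen-out rates broken out by gender.",
        data_fields=["applicant_id", "self_reported_gender", "resume_screen_result"],
    ),
    DataCallItem(
        statement="The difference is not explained by years of relevant experience.",
        evidence="Screen-out rates by gender after accounting for experience.",
        data_fields=["applicant_id", "years_relevant_experience"],
    ),
]

# The data call itself requests only the union of fields, each stated once.
requested_fields = sorted({field for item in items for field in item.data_fields})
print(requested_fields)
```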

Explainable Fairness

How do we know if an algorithm is fair?

We propose the Explainable Fairness framework: a process for defining and measuring whether an algorithm complies with an anti-discrimination law. The central question is: does the algorithm treat certain groups of people differently from others? The framework has three steps: choose a protected class, choose an outcome of interest, and measure and explain differences.

Example from hiring algorithms

Step 1: Identify protected stakeholder groups. Fair hiring rules prohibit employers from discriminating on the basis of gender, race, national origin, and disability, among other protected classes. Each of these could be considered a specific group for whom fairness needs to be verified.

Step 2: Identify outcomes of interest. In hiring, being offered a job is the obvious topline outcome. Other outcomes could also be considered employment decisions: for instance, whether a candidate is screened out at the resume stage or invited to interview. Who applies in the first place can matter too, since it might reflect bias in recruitment.

Step 3: Measure and Explain Loop. Measure the outcomes of interest for different categories of the protected class. For example, are fewer women getting interviews? If so, is there a legitimate factor that explains the difference? For instance, are the men who apply more likely to have a relevant credential or more years of experience? If so, account for those legitimate factors and remeasure the outcomes. If you end up with large unexplained differences, you have a problem.
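As a concrete illustration of the measure-and-explain loop, here is a minimal sketch in Python. The dataset, column names, and the use of a logistic regression to account for legitimate factors are assumptions made for this example, not a prescribed audit method.

```python
# Hypothetical sketch of Step 3: measure an outcome by protected class, then
# check whether the gap survives after accounting for a legitimate factor.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed file with columns: gender, years_experience, interviewed (0/1).
df = pd.read_csv("applicants.csv")

# Measure: raw interview rates by gender.
print(df.groupby("gender")["interviewed"].mean())

# Explain: control for a legitimate factor (years of relevant experience).
model = smf.logit("interviewed ~ C(gender) + years_experience", data=df).fit()
print(model.summary())

# A large, statistically significant gender effect that remains after
# controlling for legitimate factors is an unexplained difference -- a problem.
```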

The process can be applied more generally: choose a protected class, choose an outcome of interest, then measure differences in that outcome across groups and explain them with legitimate factors until the differences either disappear or are established as a problem.

Contact

 

We are OCEAN

 

Tom Adams

Executive Director

Thomas Adams has over twenty-five years of business and legal experience. He has represented banks, companies, and individuals on corporate, securities, and business law matters, and has provided strategic advice, litigation support, and expert witness testimony on issues relating to the financial crisis. Mr. Adams specializes in crafting solutions to complex financial and corporate transactions and has provided strategic advice and analysis to banks, insurance companies, private equity firms, hedge funds, and a variety of other companies. He graduated from Fordham Law School in 1989 and Colgate University in 1986. He is admitted to practice in New York.


Şerife (Sherry) Wong

Chairman of the Board

Şerife (Sherry) Wong is an artist and founder of Icarus Salon, an art and research organization exploring the societal implications of emerging technology. She is a researcher at the Berggruen Institute, where she focuses on the data economy for the Transformations of the Human program, serves on the board of directors of Digital Peace Now, and is a member of Tech Inquiry. She has been a resident on artificial intelligence at the Rockefeller Foundation Bellagio Center and a jury member at Ars Electronica for the European Commission, and she frequently collaborates on AI governance projects with the Center for Advanced Study in the Behavioral Sciences at Stanford. Previously, she created the Impact Residency at Autodesk’s Pier 9 Technology Center, where she worked with over 100 leading creative technologists exploring the future of robotics, AR/VR, engineering, computer-aided machining, and machine learning for product development. She has also worked at the Electronic Frontier Foundation.


Cathy O’Neil

Board Member

Cathy has been an independent data science consultant since 2012 and has worked for clients including the Illinois Attorney General’s Office and Consumer Reports. She wrote Doing Data Science (2013) and Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2016).


Jacob Appel

Board Member

Jake is ORCAA’s Chief Strategist. He conducts algorithmic audits, and specializes in designing tests and analyses to assess the performance of algorithms and their impacts on stakeholders. Before joining ORCAA he worked with the Behavioral Insights Team, where he advised state and local governments on incorporating behavioral science “nudges” into citizen-facing policies and programs, and testing them with randomized experiments. Jake received a BS in mathematics from Columbia University and an MPA from Princeton School of Public and International Affairs. He coauthored two books: More Than Good Intentions: How a new economics is helping to solve global poverty, and Failing in the Field: What we can learn when field research goes wrong.

Andrew Smart

Board Member

Andrew Smart is a Senior Research Scientist at Google Research investigating the philosophical and social foundations of AI. His interests range from algorithmic auditing to social ontology. He is also a PhD candidate in philosophy at the Australian National University, working on the epistemology and philosophy of science of AI. He is the author of two books and more than 30 peer-reviewed papers on AI, society, and ethics.

 

Laura Strausfeld

Board Member

Laura Strausfeld is a Chekhov scholar and the founder of the law and policy nonprofit Period Law. She has a background as a plaintiffs’ attorney and as a development strategist for nonprofits and companies, including the Economic Hardship Reporting Project. She is the associate director of institutional relations at the Brennan Center, where she manages foundation, corporation, and law firm fundraising.

 Our Generous Funders