How Zero-Knowledge Proofs Can Make AI Fair



Opinion: Rob Viglione, co-founder and CEO of Horizen Labs

Can you trust your AI to be fair? Recent research suggests the question is more complicated than it seems. Unfortunately, bias isn't just a bug; without proper cryptographic guardrails, it's a persistent feature.

A September 2024 study from Imperial College London shows how zero-knowledge proofs (ZKPs) can verify that a machine learning (ML) model treats all demographic groups equally, helping businesses demonstrate fairness while keeping model details and user data private.

A zero-knowledge proof is a cryptographic method that allows one party to prove to another that a statement is true without revealing anything beyond the statement's validity. Defining "fairness," however, opens a whole new can of worms.
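To make the idea concrete, here is a toy example: a Schnorr-style proof of knowledge of a discrete logarithm, made non-interactive with the Fiat-Shamir heuristic. This is a minimal sketch with deliberately tiny parameters, not the protocol used in any production fairness system.

```python
import hashlib
import secrets

# Toy Schnorr-style proof that the prover knows x with y = g^x (mod p),
# made non-interactive via the Fiat-Shamir heuristic. Parameters are
# deliberately tiny for readability; real systems use ~256-bit groups.
p = 2039   # safe prime, p = 2q + 1
q = 1019   # prime order of the subgroup generated by g
g = 4      # generator of that order-q subgroup

def challenge(y: int, r: int) -> int:
    # A hash of the public transcript stands in for the verifier's challenge.
    data = f"{g}:{y}:{r}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x: int) -> tuple[int, int, int]:
    y = pow(g, x, p)                    # public key; x itself stays secret
    k = secrets.randbelow(q)            # fresh one-time nonce
    r = pow(g, k, p)                    # commitment to the nonce
    s = (k + challenge(y, r) * x) % q   # response binds nonce, challenge, secret
    return y, r, s

def verify(y: int, r: int, s: int) -> bool:
    # Uses only public values: g^s must equal r * y^c (mod p).
    return pow(g, s, p) == (r * pow(y, challenge(y, r), p)) % p

x = secrets.randbelow(q)                # the secret the prover claims to know
print(verify(*prove(x)))                # True, yet x is never revealed
```

The same principle, scaled up enormously, lets a prover make verifiable claims about an entire ML model rather than a single secret number.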

Machine Learning Bias

In machine learning models, bias manifests in dramatically different ways. It can lead a credit-scoring service to rate a person differently based on the credit scores of their friends or community. It can also cause AI image generators to depict the Pope and ancient Greeks as people of various races, as Google's Gemini notoriously did last year.

Spotting an unfair ML model in the wild is easy. If the model denies people loans or credit because of who their friends are, that's discrimination. If it revises history or treats certain demographics differently to overcorrect in the name of fairness, that's also discrimination. Both scenarios undermine trust in these systems.

Consider a bank that uses an ML model for loan approvals. A ZKP could prove the model is not demographically biased without revealing sensitive customer data or proprietary model details. Using ZK and ML, banks could prove they do not systematically discriminate against any racial group. Such proofs would be real-time and continuous, replacing today's inefficient government audits.

The ideal ML model? One that neither rewrites history nor treats people differently based on their background. AI must also comply with anti-discrimination laws such as the American Civil Rights Act of 1964. The problem lies in baking that into an AI system and making it verifiable.

ZKPs provide a technical route to guarantee that compliance.

AI is biased (but it doesn't have to be)

When dealing with machine learning, any proof of fairness has to keep the underlying ML model and training data confidential. It must disclose just enough for users to know the model is not discriminatory, while protecting intellectual property and user privacy.

It's not an easy task. ZKPs offer a verifiable solution.

Zero-knowledge machine learning (ZKML) is the use of zero-knowledge proofs to verify that an ML model is what it says on the box. ZKML combines zero-knowledge cryptography with machine learning to create systems that can verify properties of an AI without exposing the underlying models or data. We can also borrow that concept and use ZKPs to identify ML models that treat everyone equally and fairly.
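As a rough sketch of how such a system fits together, the snippet below separates what a model provider keeps private from what it publishes. The `zk_prove` and `zk_verify` calls are hypothetical placeholders for a real proving backend, which would compile the fairness check into an arithmetic circuit; nothing here is an actual library API.

```python
from hashlib import sha256

# Conceptual ZKML flow. zk_prove/zk_verify are hypothetical placeholders
# for a real proving backend, not an actual library API.

def commit(model_weights: bytes) -> str:
    """Publish a hash that binds every future proof to one fixed model."""
    return sha256(model_weights).hexdigest()

# Private witness: never leaves the model provider.
witness = {
    "weights": b"\x00\x01...",   # proprietary model parameters (illustrative)
    "records": [],               # sensitive user data the model scored
}

# Public statement: the only thing users and regulators ever see.
statement = {
    "model_commitment": commit(witness["weights"]),
    "claim": "approval rates across protected groups differ by under 2%",
}

# proof = zk_prove(statement, witness)   # heavy, done once by the provider
# assert zk_verify(statement, proof)     # cheap, checkable by anyone
print(statement["model_commitment"][:16], "...")
```

The commitment is what stops a provider from quietly swapping in a different model after the proof is issued: every proof is tied to one fixed set of weights.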


Until recently, using ZKPs to prove AI fairness was severely limited because they could only focus on one phase of the ML pipeline. This made it possible for dishonest model providers to construct data sets that satisfied the fairness requirements even when the model failed them. The ZKPs also introduced unrealistic computational demands and long wait times to produce proofs of fairness.

In recent months, however, ZK frameworks have scaled ZKPs to prove end-to-end fairness for models with tens of millions of parameters, and to do so provably securely.

The trillion-dollar question: How do we measure whether an AI is fair?

Let's break down three of the most common definitions of group fairness: demographic parity, equality of opportunity and predictive equality.

Demographic parity means that the probability of a particular prediction is the same across different groups, such as race or gender. Diversity, equity and inclusion departments often use it as a measure when trying to mirror the demographics of the population within a company's workforce. It isn't the ideal fairness metric for ML models, because expecting every group to have the same outcomes is unrealistic.

Equality of opportunity is easy for most people to understand. It gives every group the same chance of a positive outcome, assuming its members are equally qualified. It does not optimize for outcomes; it only requires that every demographic have the same opportunity to get a job or a mortgage.

Predictive equality, similarly, measures whether an ML model makes predictions with the same accuracy across demographics, so no one is penalized simply for belonging to a group.
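For concreteness, here is how these three checks could be computed over a batch of binary decisions. This is a minimal NumPy sketch; the function and variable names are illustrative, not taken from the study.

```python
import numpy as np

# Minimal sketch of the three checks above, computed over a batch of
# binary decisions. y_true is ground truth, y_pred the model's decision,
# group a protected attribute; all names here are illustrative.
def demographic_parity(y_pred, group):
    # P(prediction = 1) should be roughly equal across groups.
    return {str(g): y_pred[group == g].mean() for g in np.unique(group)}

def equal_opportunity(y_true, y_pred, group):
    # Among the truly qualified (y_true == 1), approval rates should match.
    qualified = y_true == 1
    return {str(g): y_pred[qualified & (group == g)].mean()
            for g in np.unique(group)}

def predictive_equality(y_true, y_pred, group):
    # Per the description above: prediction accuracy should match per group.
    return {str(g): (y_pred[group == g] == y_true[group == g]).mean()
            for g in np.unique(group)}

# Toy batch: eight applicants in two groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity(y_pred, group))   # equal approval rates per group
```

In a ZKML setting, a check like this would run inside the proof circuit, so a verifier would learn only whether the gaps fall within tolerance, never the individual decisions themselves.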

In the latter two cases, the ML model is not putting a thumb on the scale for equity's sake but only ensuring that no group is systematically discriminated against. It is an eminently sensible fix.

Fairness is becoming the standard, one way or another

Over the past year, the US and other governments have issued statements and mandates around AI fairness to protect the public from ML bias. Now, the new US administration may approach AI fairness differently, shifting the focus toward equality of opportunity and away from equity.

As political landscapes shift, so do definitions of fairness in AI, moving between equity-focused and opportunity-focused paradigms. We welcome ML models that treat everyone equally without putting a thumb on the scale. Zero-knowledge proofs serve as an airtight way to verify that ML models are doing so without revealing private data.

While ZKPs have faced scalability challenges over the years, the technology is finally becoming affordable for mainstream use cases. We can use ZKPs to verify the integrity of training data, protect privacy and ensure the models we use are what their providers say they are.

As ML models become more woven into our daily lives, and our job prospects, university admissions and mortgages come to depend on them, we could use a little more reassurance that AI is treating us fairly. Whether we can all agree on a definition of fairness, however, is another question entirely.

Opinion: Rob Viglione, co-founder and CEO of Horizen Labs.

This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts and opinions expressed here are the author's alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.