# Verifiable Evaluation Attestations for Machine Learning Models

This repository introduces a method for verifiable evaluation of machine learning models using zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs) via the ezkl toolkit. The approach enables transparent evaluation of closed-source ML models, providing computational proofs of a model's performance metrics without exposing its internal weights. This technique ensures evaluation integrity and promotes trust in ML applications, and it applies to any standard neural network model, as demonstrated on a range of real-world examples.
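Concretely, a verifier who receives such an attestation never sees the weights: they check one zkSNARK inference proof per test input and then recompute the claimed metric from the proofs' public outputs. The sketch below illustrates that check; the `EvalAttestation` bundle and `verify_proof` helper are hypothetical illustrations, not this repository's actual API.

```python
# Hypothetical sketch of checking an evaluation attestation: verify one
# zkSNARK inference proof per test input, then recompute the claimed
# metric from the publicly committed outputs. Bundle format and the
# verify_proof callback are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class EvalAttestation:
    proof_paths: List[str]     # one inference proof per test input
    public_outputs: List[int]  # predicted labels committed to in each proof
    labels: List[int]          # public ground-truth labels
    claimed_accuracy: float    # the metric the model developer asserts

def check_attestation(att: EvalAttestation,
                      verify_proof: Callable[[str], bool]) -> bool:
    # Reject the attestation if any single inference proof fails to verify.
    if not all(verify_proof(p) for p in att.proof_paths):
        return False
    # Recompute accuracy from the verified public outputs; the recomputed
    # metric must meet the stated claim.
    correct = sum(o == y for o, y in zip(att.public_outputs, att.labels))
    return correct / len(att.labels) >= att.claimed_accuracy
```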

## The Paper

The paper accompanying this repo can be found on arXiv at https://arxiv.org/abs/2402.02675.

## Full Abstract

In a world of increasing closed-source commercial machine learning models, model evaluations from developers must be taken at face value. These benchmark results---whether over task accuracy, bias evaluations, or safety checks---are traditionally impossible to verify by a model end-user without the costly or impossible process of re-performing the benchmark on black-box model outputs. This work presents a method of verifiable model evaluation using model inference through zkSNARKs. The resulting zero-knowledge computational proofs of model outputs over datasets can be packaged into verifiable evaluation attestations showing that models with fixed private weights achieve stated performance or fairness metrics over public inputs. These verifiable attestations can be performed on any standard neural network model with varying compute requirements. For the first time, we demonstrate this across a sample of real-world models and highlight key challenges and design solutions. This presents a new transparency paradigm in the verifiable evaluation of private models.

## Build and run

  1. For any of this to work, you'll need to install and use the ezkl CLI and Python package. Installation details can be found at https://github.com/zkonduit/ezkl.git. The CLI and Python interfaces used here may be out of date against the latest ezkl versions, but newer versions offer improved proving speed.
  2. Simple model examples can be found in /src. They run using the CLI and PyTorch.
  3. Experiments to replicate the paper results can be found in /src/experiments.
  4. Choose a new model or dataset to run, then follow the example process to get an ONNX version of the model and generate inference proofs on it (a minimal sketch of this pipeline follows the list).
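For orientation, the sketch below exports a toy PyTorch model to ONNX and pushes a single input through ezkl's settings/compile/setup/prove/verify pipeline. It assumes the synchronous ezkl Python API; exact function signatures and argument orders vary across ezkl releases (several of these calls are async coroutines in recent versions), so defer to the ezkl documentation for your installed version.

```python
# Minimal sketch of the ONNX-export-and-prove pipeline, assuming a
# synchronous ezkl Python API; exact signatures vary between releases.
import json
import torch
import torch.nn as nn
import ezkl

# Toy stand-in network; replace with the model under evaluation.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2)).eval()
dummy = torch.randn(1, 4)  # shape of a single evaluation input

# 1. Export to ONNX so ezkl can compile the graph into a circuit.
torch.onnx.export(model, dummy, "network.onnx",
                  input_names=["input"], output_names=["output"])

# 2. Serialize one evaluation input in ezkl's JSON input format.
with open("input.json", "w") as f:
    json.dump({"input_data": [dummy.reshape(-1).tolist()]}, f)

# 3. Generate and calibrate circuit settings, then compile the circuit.
ezkl.gen_settings("network.onnx", "settings.json")
ezkl.calibrate_settings("input.json", "network.onnx", "settings.json", "resources")
ezkl.compile_circuit("network.onnx", "network.ezkl", "settings.json")

# 4. Fetch a structured reference string and derive proving/verifying keys.
ezkl.get_srs("settings.json")
ezkl.setup("network.ezkl", "vk.key", "pk.key")

# 5. Produce a witness and a proof for this one inference, then verify it.
ezkl.gen_witness("input.json", "network.ezkl", "witness.json")
ezkl.prove("witness.json", "network.ezkl", "pk.key", "proof.json", "single")
assert ezkl.verify("proof.json", "settings.json", "vk.key")
```

Repeating the witness/prove step over every input in a public evaluation set, with the same proving key, yields the per-input proofs that make up an evaluation attestation.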

## Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

## Cite this work

If you use this work or the ideas in it, please cite:

```bibtex
@misc{south2024verifiableevals,
      title={Verifiable evaluations of machine learning models using zkSNARKs},
      author={Tobin South and Alexander Camuto and Shrey Jain and Shayla Nguyen and Robert Mahari and Christian Paquin and Jason Morton and Alex 'Sandy' Pentland},
      year={2024},
      eprint={2402.02675},
      archivePrefix={arXiv}
}
```