
DPFL-FPGA-Accel is a framework for carrying out design space exploration (DSE) of differentially private federated learning (DPFL) on FPGAs. It enables users to design optimized FPGA accelerators for ML tasks that implement privacy-preserving federated learning with adjustable epsilon (privacy-budget) values.

Steps to implement the DPFL-FPGA-Accel-Framework

  1. The DPFL-FPGA-Accel-Framework_CompPro directory contains the complete DPFL-FPGA-Accel Framework flow for the pre-built accelerators AI-Accel-1, AI-Accel-2, AI-Accel-3, and AI-Accel-4 targeting the ZCU102 FPGA.
  2. The directory also contains the dependencies required by the DPFL-FPGA-Accel-Framework on the ZCU102 FPGA; make sure they are incorporated before running the flow.
  3. Each AI-Accel subdirectory contains a DPFL-FPGA-Accel_flow file to run the Design Space Exploration (DSE) over throughput, timing, accuracy, privacy, loss, the number of clients contributing to DPFL, the number of global rounds, the number of local epochs at each user, etc.
  4. The DSE helps develop an FPGA accelerator in the DPFL environment that meets the required performance and privacy targets.
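As a rough illustration of what the DSE above involves, the tunable knobs (epsilon, number of clients, global rounds, local epochs) can be enumerated as a parameter grid and each design point evaluated in turn. Everything below is a hypothetical sketch: `run_dpfl_point` and all parameter values are illustrative stand-ins for one framework run, not actual framework APIs.

```python
from itertools import product

# Hypothetical DSE knobs mirroring the list above (values are examples only).
EPSILONS = [0.5, 1.0, 2.0, 4.0]   # adjustable privacy budgets
NUM_CLIENTS = [2, 4, 8]           # clients contributing to DPFL
GLOBAL_ROUNDS = [5, 10]           # federated aggregation rounds
LOCAL_EPOCHS = [1, 3]             # local epochs per client per round

def run_dpfl_point(eps, clients, rounds, epochs):
    """Placeholder for launching the framework with one configuration
    and collecting throughput/timing/accuracy/privacy/loss metrics."""
    return {"epsilon": eps, "clients": clients,
            "rounds": rounds, "epochs": epochs}

# Sweep every combination of the knobs (4 * 3 * 2 * 2 = 48 design points).
results = [run_dpfl_point(*cfg)
           for cfg in product(EPSILONS, NUM_CLIENTS, GLOBAL_ROUNDS, LOCAL_EPOCHS)]
print(len(results))
```

In practice each design point would also record the measured metrics, so the sweep output can be filtered for the configurations that meet the target performance/privacy trade-off.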

Steps to regenerate the existing bit files built using FINN

  1. Set up a FINN Docker image on your system.
  2. Incorporate the DPFL_Dependencies into the image.
  3. Run the build files with your chosen settings of PE, SIMD, input-FIFO depth, and output-FIFO depth in the AI-AccelX_hw_config.json files.
  4. Place the generated .bit and .hwh files, together with the updated configuration file, in DPFL-FPGA-Accel-Framework_CompPro to run the DPFL-FPGA-Accel Framework and perform DSE on the updated bit file.
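For orientation, a FINN-style hardware/folding configuration typically maps per-layer names to parallelism and FIFO-depth fields. The layer name and exact keys in the fragment below are assumptions for illustration; check them against the shipped AI-AccelX_hw_config.json files before editing.

```json
{
  "Defaults": {},
  "MatrixVectorActivation_0": {
    "PE": 4,
    "SIMD": 8,
    "inFIFODepths": [64],
    "outFIFODepths": [64]
  }
}
```

Raising PE/SIMD increases layer parallelism (and resource use); FIFO depths affect buffering between layers.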

Steps to build bit files for a QDNN model setting other than those in the QDNNtoBit_Files of AI-Accel-1, 2, 3, and 4

  1. Update the QDNN model in the QDNNtoBit files.
  2. Train the model using the same file and follow its steps to generate the bit file for the new QDNN. Different PE, SIMD, input-FIFO depth, and output-FIFO depth values can be defined in the configuration file for the new QDNN.
  3. Replace the existing model and .json configuration file names in DPFL_Accel_5Ectopic_main with the new ones.
  4. Place the generated .bit and .hwh files, together with the updated configuration file, in DPFL-FPGA-Accel-Framework_CompPro to run the DPFL-FPGA-Accel Framework and perform DSE on the updated bit file.
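When picking new PE and SIMD values for step 2, note that FINN's folding generally requires SIMD to divide a layer's input dimension and PE to divide its output dimension. The sketch below checks that constraint; the layer shapes are illustrative, not those of the shipped AI-Accel models.

```python
def valid_folding(in_dim: int, out_dim: int, simd: int, pe: int) -> bool:
    """Return True when SIMD divides the layer's input dimension and
    PE divides its output dimension (FINN-style folding constraint)."""
    return in_dim % simd == 0 and out_dim % pe == 0

# Example fully connected layer: 64 inputs, 32 outputs (illustrative).
print(valid_folding(64, 32, simd=8, pe=4))   # valid folding
print(valid_folding(64, 32, simd=7, pe=4))   # SIMD=7 does not divide 64
```

Checking this before a build avoids failing late in the FINN flow with an invalid configuration.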

Steps to build bit files for applications other than the classification of cardiac arrhythmia

  1. Gather the dataset and develop a QDNN on it. Make sure the dataset is in float32 data type, and, to test the accelerator on the FPGA board, also generate a uint8 version of the test dataset.
  2. Update the QDNN model in the QDNNtoBit files.
  3. Train the model using the same file and follow its steps to generate the bit file for the new QDNN. Different PE, SIMD, input-FIFO depth, and output-FIFO depth values can be defined in the configuration file for the new QDNN.
  4. Replace the existing model and .json configuration file names in DPFL_Accel_5Ectopic_main with the new ones.
  5. Place the new dataset files in the same directory as the DPFL-FPGA-Accel flow and replace the dataset file references in DPFL_Accel_5Ectopic_main with the new ones.
  6. Place the generated .bit and .hwh files, together with the updated configuration file, in DPFL-FPGA-Accel-Framework_CompPro to run the DPFL-FPGA-Accel Framework and perform DSE on the updated bit file.
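Step 1's data-type requirements can be sketched as follows. The array shapes and the min-max rescale are illustrative assumptions, not the framework's exact preprocessing; match the quantization to how your own QDNN expects its inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dataset must be float32 for QDNN training (shapes are illustrative,
# e.g. fixed-length ECG beats for the arrhythmia case study).
X_train = rng.normal(size=(100, 187)).astype(np.float32)
X_test = rng.normal(size=(20, 187)).astype(np.float32)

# For on-board accelerator testing, produce a uint8 copy of the test set
# via a simple min-max rescale to the 0..255 range.
lo, hi = X_test.min(), X_test.max()
X_test_u8 = np.round((X_test - lo) / (hi - lo) * 255.0).astype(np.uint8)

print(X_train.dtype, X_test_u8.dtype)
```

The float32 arrays feed training, while the uint8 test set is what gets streamed to the accelerator on the board.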

Citation

The current implementation of the framework is based on the following publication. Please consider citing it if you find it useful.

Muhammad Shakeel Akram, Bogaraju Sharatchandra Varma, Dewar Finlay. DPFL-FPGA-Accel: Open Source Framework for Design Space Exploration of FPGA-Based Differential Private Federated Learning Accelerator: A Case Study with Cardiac Arrhythmia. TechRxiv. November 10, 2024. DOI: 10.36227/techrxiv.173121364.46610550/v1

@article{akram2024dpfl,
  title={DPFL-FPGA-Accel: Open Source Framework for Design Space Exploration of FPGA-Based Differential Private Federated Learning Accelerator: A Case Study with Cardiac Arrhythmia},
  author={Akram, Muhammad Shakeel and Varma, Bogaraju Sharatchandra and Finlay, Dewar},
  journal={Authorea Preprints},
  year={2024},
  publisher={Authorea}
}
