Repositories list
103 repositories
- algorithmic-efficiency (Public)
- mlperf-automations (Public): This repository contains automation scripts designed to run MLPerf Inference benchmarks. Originally developed for the Collective Mind (CM) automation framework, these scripts have been adapted to leverage the MLC automation framework, maintained by the MLCommons Benchmark Infrastructure Working Group.
- modelbench (Public)
- Collective Knowledge (CK), Collective Mind (CM) and Common Metadata eXchange (CMX): community-driven projects to facilitate collaborative and reproducible research and to learn how to run AI, ML, and other emerging workloads more efficiently and cost-effectively across diverse models, datasets, software, and hardware using MLPerf.
- Croissant is a high-level format for machine learning datasets that brings together four rich layers.
- mlcflow (Public): MLCFlow: Simplifying MLPerf Automations
- logging (Public)
- GaNDLF (Public): A generalizable application framework for segmentation, regression, and classification using PyTorch
- These are automated test submissions for validating the MLPerf inference workflows
- submissions_algorithms (Public)
- cm4mlperf-results (Public): CM interface and automation recipes to analyze MLPerf Inference, Tiny and Training results. The goal is to make it easier for the community to visualize, compare and reproduce MLPerf results and add derived metrics such as Performance/Watt or Performance/$.
- inference_policies (Public)
- A collection of portable, reusable and cross-platform CM automations for MLOps and MLPerf to simplify the process of building, benchmarking and optimizing AI systems across diverse models, data sets, software and hardware
- inference_results_v4.0 (Public)
- chakra (Public)
- policies (Public)