
Commit

Documentation
DiegoGarciaVega committed Dec 18, 2024
1 parent a4a2400 commit 5ab74de
Showing 6 changed files with 345 additions and 31 deletions.
8 changes: 4 additions & 4 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
![logo](https://github.com/QHPC-SP-Research-Lab/LazyQML/blob/main/docs/logo.jpg)
![LazyQML](./docs/logo.jpg)
---
[![Pypi](https://img.shields.io/badge/pypi-%23ececec.svg?style=for-the-badge&logo=pypi&logoColor=1f73b7)](https://pypi.python.org/pypi/lazyqml)
![GitHub Actions](https://img.shields.io/badge/github%20actions-%232671E5.svg?style=for-the-badge&logo=githubactions&logoColor=white)
@@ -29,15 +29,15 @@ With LazyQML, you can:
- Flexible & Modular: From basic quantum circuits to hybrid quantum-classical models—LazyQML has you covered.

## Documentation
For detailed usage instructions, API reference, and code examples, please refer to the official LazyQML documentation.
For detailed usage instructions, API reference, and code examples, please refer to the official LazyQML [documentation](https://qhpc-sp-research-lab.github.io/LazyQML/).

## Requirements

- Python >= 3.10

> [!CAUTION]
> ❗❗
> This library is only supported on Linux systems; it does not support Windows or macOS.
> Only CUDA-compatible devices are supported.
## Installation
To install lazyqml, run this command in your terminal:
90 changes: 90 additions & 0 deletions docs/api.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,90 @@

# LazyQML API Overview

Welcome to **LazyQML** – your quantum machine learning playground! LazyQML is a cutting-edge Python library designed to simplify the integration of quantum classifiers into your machine learning workflows. With LazyQML, you can explore quantum neural networks, quantum support vector machines, and other quantum models through a simple, easy-to-use interface.

At the heart of LazyQML is the **QuantumClassifier** – the Swiss Army knife of quantum machine learning. This easy-to-use class empowers you to train, evaluate, and fine-tune quantum classifiers on your data, whether you're a beginner or a seasoned quantum enthusiast.

## Key Features

LazyQML is packed with tools to streamline quantum classification. Below are the core features that set it apart from the crowd:

### 1. **QuantumClassifier: The Heart of LazyQML**

The **QuantumClassifier** class is the core of LazyQML, offering a variety of methods for training and evaluating quantum models. It provides an elegant and flexible interface for working with quantum circuits, allowing you to explore different types of classifiers, embeddings, and ansatz circuits. The goal? To make quantum classification as intuitive as possible.

### 2. **Variants of QuantumClassifier**

LazyQML provides **two exciting variants** of the **QuantumClassifier**, depending on which module you import. This gives you the freedom to choose the right quantum simulation backend for your specific needs:

- **State Vector Simulation** (imported from `lazyqml.st`): This variant simulates the full quantum state of your system, perfect for smaller systems or when you want a more intuitive understanding of quantum behavior.

- **Tensor Networks** (imported from `lazyqml.tn`): This variant uses tensor networks, providing higher scalability for larger quantum systems. It's optimized for more complex and larger datasets, helping you tackle big problems with ease.

#### Importing State Vector Simulation Variant:
```python
from lazyqml.st import *
```

- Use this import to access the **QuantumClassifier** based on **State Vector simulations**, simulating the full quantum state for an intuitive understanding.

#### Importing Tensor Network Variant:
```python
from lazyqml.tn import *
```
- Use this import to access the **QuantumClassifier** based on **Tensor Networks**, offering efficient simulation of larger quantum systems using approximate methods.
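
For illustration, here is a minimal sketch of switching between the two backends. It assumes, as in the examples below, that both modules expose the same `QuantumClassifier` class and enums.

```python
# Sketch only: choose a simulation backend at import time.
# Assumption: lazyqml.st and lazyqml.tn expose the same QuantumClassifier interface.

from lazyqml.st import *    # State Vector backend (exact, best for small systems)
# from lazyqml.tn import *  # Tensor Network backend (approximate, scales to larger systems)

clf = QuantumClassifier(nqubits={4}, classifiers={Model.QNN}, epochs=10)
```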

### 3. **Training and Evaluation Methods**

LazyQML offers you three robust methods to train and evaluate your quantum models. These methods are designed to give you complete control over the classification process:

#### **fit**
The **fit** method is where the magic happens. 🌟 It trains your quantum model on your dataset, selecting from different quantum classifiers, embeddings, and ansatz circuits. This method provides a simple interface to quickly train a model, view its results, and get on with your quantum journey.

- **When to use it?** Use **fit** when you want to quickly train and evaluate a quantum model with just a few lines of code.
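
A minimal quick-start sketch, using the Iris dataset and the hold-out split documented in the `QuantumClassifier` reference (method signatures assumed as documented there):

```python
from sklearn.datasets import load_iris
from lazyqml.st import *

# Load a small classical dataset
X, y = load_iris(return_X_y=True)

# Benchmark two quantum classifiers with a 60/40 hold-out split
clf = QuantumClassifier(nqubits={4}, classifiers={Model.QNN, Model.QSVM}, epochs=10)
clf.fit(X=X, y=y, test_size=0.4)
```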

#### **leave_one_out**
**Leave-One-Out Cross Validation (LOO CV)** is a robust technique where each data point is used as the test set exactly once. This method is fantastic for small datasets, providing a deeper understanding of your model’s performance.

- **When to use it?** Choose **leave_one_out** when working with small datasets and you need to evaluate every data point for a thorough assessment.
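
A hedged sketch of leave-one-out evaluation on a deliberately small, class-balanced subset (LOO trains one model per sample, so keep the dataset tiny):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from lazyqml.st import *

X, y = load_iris(return_X_y=True)
# Keep a small, stratified subset: LOO fits one model per remaining sample.
X_small, _, y_small, _ = train_test_split(X, y, train_size=30, stratify=y, random_state=0)

clf = QuantumClassifier(nqubits={4}, classifiers={Model.QSVM})
clf.leave_one_out(X=X_small, y=y_small, showTable=True)
```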

#### **repeated_cross_validation**
This method performs repeated k-fold cross-validation. It divides your dataset into k subsets, trains the model on k-1 subsets, and tests on the remaining fold. This process is repeated multiple times to provide a more accurate estimate of your model's performance.

- **When to use it?** Use **repeated_cross_validation** for a more comprehensive evaluation of your model, especially when working with larger datasets.
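
A hedged sketch of repeated k-fold evaluation; the parameter names follow the `repeated_cross_validation` signature documented in the `QuantumClassifier` reference:

```python
from sklearn.datasets import load_iris
from lazyqml.st import *

X, y = load_iris(return_X_y=True)

clf = QuantumClassifier(nqubits={4}, classifiers={Model.QNN}, epochs=10)
# 5 folds repeated 3 times -> 15 train/test evaluations per model
clf.repeated_cross_validation(X=X, y=y, n_splits=5, n_repeats=3, showTable=True)
```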

### 4. **Enums for Quantum Model Selection**

LazyQML gives you full control over your quantum model's architecture. With a rich set of enums, you can easily select the correct ansatz circuits, embedding strategies, and classification models. 🎯

#### **Ansatzs Enum**
Ansatz circuits define the structure of your quantum model. LazyQML provides a selection of ansatz types:

- `ALL`: All available ansatz circuits.
- `HCZRX`, `TREE_TENSOR`, `TWO_LOCAL`, `HARDWARE_EFFICIENT`: Popular ansatz circuits that are ideal for quantum machine learning.

#### **Embedding Enum**
Embeddings control how your classical data is encoded onto quantum states. LazyQML offers several types of embedding strategies:

- `ALL`: All available embedding circuits.
- `RX`, `RY`, `RZ`: Common qubit rotation embeddings.
- `ZZ`, `AMP`: Embedding strategies based on entanglement or amplitude encoding.

#### **Model Enum**
LazyQML supports a variety of quantum models, each suited for different tasks. Choose the model that best fits your data and problem:

- `ALL`: All available quantum models.
- `QNN`: Quantum Neural Network.
- `QNN_BAG`: Quantum Neural Network with Bagging.
- `QSVM`: Quantum Support Vector Machine.
- `QKNN`: Quantum k-Nearest Neighbors.
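
Putting the enums together, a minimal sketch that restricts the search space instead of relying on the `ALL` defaults (parameter names follow the `QuantumClassifier` reference; exact enum member spellings may differ in your installed version):

```python
from lazyqml.st import *

# Pin the architecture search space explicitly instead of using the ALL defaults
clf = QuantumClassifier(
    nqubits={4, 8},
    classifiers={Model.QNN, Model.QNN_BAG},
    ansatzs={Ansatzs.TWO_LOCAL, Ansatzs.TREE_TENSOR},
    embeddings={Embedding.RY, Embedding.AMP},
    epochs=20,
)
```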

---

## What's Next?

This overview introduces you to the powerful features of **LazyQML** and the **QuantumClassifier**. Whether you’re just getting started or you’re a quantum computing pro, LazyQML simplifies quantum machine learning. 🌐✨

For more detailed documentation on each function, parameter, and quantum algorithm, head over to the full documentation pages. Get ready to dive into the world of quantum classification with LazyQML – your quantum adventure begins here! 🛸
91 changes: 83 additions & 8 deletions docs/index.md
Original file line number Diff line number Diff line change
@@ -1,16 +1,91 @@
# Welcome to lazyqml
![LazyQML](./logo.jpg)
---
[![Pypi](https://img.shields.io/badge/pypi-%23ececec.svg?style=for-the-badge&logo=pypi&logoColor=1f73b7)](https://pypi.python.org/pypi/lazyqml)
![GitHub Actions](https://img.shields.io/badge/github%20actions-%232671E5.svg?style=for-the-badge&logo=githubactions&logoColor=white)
![NumPy](https://img.shields.io/badge/numpy-%23013243.svg?style=for-the-badge&logo=numpy&logoColor=white)
![Pandas](https://img.shields.io/badge/pandas-%23150458.svg?style=for-the-badge&logo=pandas&logoColor=white)
![PyTorch](https://img.shields.io/badge/PyTorch-%23EE4C2C.svg?style=for-the-badge&logo=PyTorch&logoColor=white)
![scikit-learn](https://img.shields.io/badge/scikit--learn-%23F7931E.svg?style=for-the-badge&logo=scikit-learn&logoColor=white)
![nVIDIA](https://img.shields.io/badge/cuda-000000.svg?style=for-the-badge&logo=nVIDIA&logoColor=green)
![Linux](https://img.shields.io/badge/Linux-FCC624?style=for-the-badge&logo=linux&logoColor=black)


[![image](https://img.shields.io/pypi/v/lazyqml.svg)](https://pypi.python.org/pypi/lazyqml)

LazyQML is a Python library designed to streamline, automate, and accelerate experimentation with Quantum Machine Learning (QML) architectures, right on classical computers.

**LazyQML benchmarking utility to test quantum machine learning models.**
With LazyQML, you can:
- 🛠️ Build, test, and benchmark QML models with minimal effort.

- ⚡ Compare different QML architectures and hyperparameters seamlessly.

- 🧠 Identify the most suitable architecture for your problem.

## ✨ Why LazyQML?

- Free software: MIT License
- Documentation: <https://DiegoGV-Uniovi.github.io/lazyqml>

- Rapid Prototyping: Experiment with different QML models using just a few lines of code.

## Features
- Automated Benchmarking: Evaluate performance and trade-offs across architectures effortlessly.

- TODO
- Flexible & Modular: From basic quantum circuits to hybrid quantum-classical models—LazyQML has you covered.

## Documentation
For detailed usage instructions, API reference, and code examples, please refer to the official LazyQML documentation.

## Requirements

- Python >= 3.10

> ❗❗
> This library is only supported on Linux systems; it does not support Windows or macOS.
> Only CUDA-compatible devices are supported.

## Installation
To install lazyqml, run this command in your terminal:

```
pip install lazyqml
```

This is the preferred method to install lazyqml, as it will always install the most recent stable release.

If you don't have [pip](https://pip.pypa.io) installed, this [Python installation guide](http://docs.python-guide.org/en/latest/starting/installation/) can guide you through the process.

### From sources

To install lazyqml from sources, run this command in your terminal:

```
pip install git+https://github.com/QHPC-SP-Research-Lab/LazyQML
```
## Example

```python
from sklearn.datasets import load_iris
from lazyqml.lazyqml import *

# Load data
data = load_iris()
X = data.data
y = data.target

classifier = QuantumClassifier(nqubits={4}, classifiers={Model.QNN, Model.QSVM}, epochs=10)

# Fit and predict
classifier.fit(X=X, y=y, test_size=0.4)
```

## Quantum and High Performance Computing (QHPC) - University of Oviedo
- José Ranilla Pastor - [email protected]
- Elías Fernández Combarro - [email protected]
- Diego García Vega - [email protected]
- Fernando Álvaro Plou Llorente - [email protected]
- Alejandro Leal Castaño - [email protected]
- Group - https://qhpc.uniovi.es

## Citing
If you used LazyQML in your work, please cite:
- García-Vega, D., Plou Llorente, F., Leal Castaño, A., Combarro, E.F., Ranilla, J.: LazyQML: A Python library to benchmark quantum machine learning models. In: 30th European Conference on Parallel and Distributed Processing (2024)

## License
- Free software: MIT License
154 changes: 154 additions & 0 deletions docs/lazyqml.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,154 @@
## QuantumClassifier Parameters:
#### Core Parameters:
- **`nqubits`**: `Set[int]`
  - Description: Set of qubit counts to benchmark; each value must be greater than 0.
- Validation: Ensures that all elements are integers > 0.

- **`randomstate`**: `int`
- Description: Seed value for random number generation.
- Default: `1234`

- **`predictions`**: `bool`
- Description: Flag to determine if predictions are enabled.
- Default: `False`

#### Model Structure Parameters:
- **`numPredictors`**: `int`
- Description: Number of predictors used in the QNN with bagging.
- Constraints: Must be greater than 0.
- Default: `10`

- **`numLayers`**: `int`
- Description: Number of layers in the Quantum Neural Networks.
- Constraints: Must be greater than 0.
- Default: `5`

#### Set-Based Configuration Parameters:
- **`classifiers`**: `Set[Model]`
- Description: Set of classifier models.
- Constraints: Must contain at least one classifier.
- Default: `{Model.ALL}`
- Options: `{Model.QNN, Model.QSVM, Model.QNN_BAG}`

- **`ansatzs`**: `Set[Ansatzs]`
- Description: Set of quantum ansatz configurations.
- Constraints: Must contain at least one ansatz.
- Default: `{Ansatzs.ALL}`
  - Options: `{Ansatzs.HCZRX, Ansatzs.TREE_TENSOR, Ansatzs.TWO_LOCAL, Ansatzs.HARDWARE_EFFICIENT}`

- **`embeddings`**: `Set[Embedding]`
- Description: Set of embedding strategies.
- Constraints: Must contain at least one embedding.
- Default: `{Embedding.ALL}`
  - Options: `{Embedding.RX, Embedding.RY, Embedding.RZ, Embedding.ZZ, Embedding.AMP}`

- **`features`**: `Set[float]`
- Description: Set of feature values (must be between 0 and 1).
- Constraints: Values > 0 and <= 1.
- Default: `{0.3, 0.5, 0.8}`

#### Training Parameters:
- **`learningRate`**: `float`
- Description: Learning rate for optimization.
- Constraints: Must be greater than 0.
- Default: `0.01`

- **`epochs`**: `int`
- Description: Number of training epochs.
- Constraints: Must be greater than 0.
- Default: `100`

- **`batchSize`**: `int`
- Description: Size of each batch during training.
- Constraints: Must be greater than 0.
- Default: `8`

#### Threshold and Sampling:
- **`threshold`**: `int`
  - Description: Decision threshold for execution dispatch: if the model is larger than this threshold, it runs on the GPU.
- Constraints: Must be greater than 0.
- Default: `22`

- **`maxSamples`**: `float`
  - Description: Maximum proportion of the dataset's samples to use.
- Constraints: Between 0 and 1.
- Default: `1.0`

#### Logging and Metrics:
- **`verbose`**: `bool`
- Description: Flag for detailed output during training.
- Default: `False`

- **`customMetric`**: `Optional[Callable]`
- Description: User-defined metric function for evaluation.
- Validation:
- Function must accept `y_true` and `y_pred` as the first two arguments.
- Must return a scalar value (int or float).
- Function execution is validated with dummy arguments.
- Default: `None`
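
For example, a metric satisfying the contract above might look like the following sketch, which simply delegates to scikit-learn's `f1_score` and returns a scalar (the import path mirrors the example in the index page):

```python
from sklearn.metrics import f1_score
from lazyqml.lazyqml import *

# A valid custom metric: y_true and y_pred come first, a scalar is returned.
def macro_f1(y_true, y_pred, **kwargs):
    return f1_score(y_true, y_pred, average="macro")

clf = QuantumClassifier(nqubits={4}, customMetric=macro_f1)
```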

#### Custom Preprocessors:
- **`customImputerNum`**: `Optional[Any]`
- Description: Custom numeric data imputer.
- Validation:
- Must be an object with `fit`, `transform`, and optionally `fit_transform` methods.
- Validated with dummy data.
- Default: `None`

- **`customImputerCat`**: `Optional[Any]`
- Description: Custom categorical data imputer.
- Validation:
- Must be an object with `fit`, `transform`, and optionally `fit_transform` methods.
- Validated with dummy data.
- Default: `None`
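
Tying the parameter groups together, a hedged configuration sketch: scikit-learn's `SimpleImputer` exposes `fit`, `transform`, and `fit_transform`, so it satisfies the custom-imputer contract described above.

```python
from sklearn.impute import SimpleImputer
from lazyqml.lazyqml import *

clf = QuantumClassifier(
    nqubits={4},
    classifiers={Model.QNN, Model.QSVM},
    numLayers=3,                 # depth of the quantum neural networks
    learningRate=0.05,
    epochs=50,
    batchSize=16,
    maxSamples=0.8,              # use at most 80% of the dataset's samples
    verbose=True,
    customImputerNum=SimpleImputer(strategy="mean"),
    customImputerCat=SimpleImputer(strategy="most_frequent"),
)
```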

## Functions:

### **`fit`**
```python
fit(self, X, y, test_size=0.4, showTable=True)
```
Fits classification algorithms to `X` and `y` using a hold-out approach. Predicts and scores on a test set determined by `test_size`.

#### Parameters:
- **`X`**: Input features (DataFrame or compatible format).
- **`y`**: Target labels (must be numeric, e.g., via `LabelEncoder` or `OrdinalEncoder`).
- **`test_size`**: Proportion of the dataset to use as the test set. Default is `0.4`.
- **`showTable`**: Display a table with results. Default is `True`.

#### Behavior:
- Validates the compatibility of input dimensions.
- Automatically applies PCA transformation for incompatible dimensions.
- Requires all categories to be present in training data.

### **`repeated_cross_validation`**
```python
repeated_cross_validation(self, X, y, n_splits=10, n_repeats=5, showTable=True)
```
Performs repeated cross-validation on the dataset using the specified splits and repeats.

#### Parameters:
- **`X`**: Input features (DataFrame or compatible format).
- **`y`**: Target labels (must be numeric).
- **`n_splits`**: Number of folds for splitting the dataset. Default is `10`.
- **`n_repeats`**: Number of times cross-validation is repeated. Default is `5`.
- **`showTable`**: Display a table with results. Default is `True`.

#### Behavior:
- Uses `RepeatedStratifiedKFold` for generating splits.
- Aggregates results from multiple train-test splits.

### **`leave_one_out`**
```python
leave_one_out(self, X, y, showTable=True)
```
Performs leave-one-out cross-validation on the dataset.

#### Parameters:
- **`X`**: Input features (DataFrame or compatible format).
- **`y`**: Target labels (must be numeric).
- **`showTable`**: Display a table with results. Default is `True`.

#### Behavior:
- Uses `LeaveOneOut` for generating train-test splits.
- Evaluates the model on each split and aggregates results.
