diff --git a/README.md b/README.md
index fb89f98f..ea41fa15 100644
--- a/README.md
+++ b/README.md
@@ -23,12 +23,13 @@
Inference, ingestion, and indexing – supercharged by Rust 🦀
- Explore the docs »
+ Python docs »
+ Rust docs »
View Demo
·
- Benches
+ Benches
·
Vector Streaming Adapters
.
@@ -74,10 +75,11 @@ EmbedAnything is a minimalist, highly performant, lightning-fast, lightweight, m
- **ONNX Models**: Works with ONNX models for BERT and ColPali
- **ColPali** : Support for ColPali in the GPU version
- **Splade** : Support for sparse embeddings for hybrid search
+- **ReRankers** : Support for reranking models for better RAG (see the sketch below).
- **Cloud Embedding Models**: Supports OpenAI and Cohere.
- **MultiModality** : Works with text sources like PDF, TXT, and MD, images like JPG, and audio like WAV
- **Rust** : All the file processing is done in Rust for speed and efficiency
-- **Candle** : We have taken care of hardware acceleration as well, with Candle.
+- **GPU support** : We have taken care of hardware acceleration on GPU as well.
- **Python Interface:** Packaged as a Python library for seamless integration into your existing projects.
- **Vector Streaming:** Continuously create and stream embeddings when resources are limited.
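The new ReRankers bullet is easiest to grasp with a concrete picture of where reranking sits in a RAG pipeline: a first-pass vector search returns candidate passages, and a second, more precise scorer reorders them before they reach the LLM. The sketch below is illustrative only and does not use EmbedAnything's API; the `rerank` helper and its word-overlap scorer are hypothetical stand-ins for a real reranking model such as a cross-encoder.

```python
# Minimal sketch of where a reranker fits in a RAG pipeline. This does NOT use
# EmbedAnything's API; the toy lexical scorer below is a stand-in for a real
# reranking model (e.g. a cross-encoder) so the example stays self-contained.

def rerank(query: str, candidates: list[str], top_k: int = 3) -> list[str]:
    """Re-score retrieved candidates with a (stand-in) reranker and keep the best."""

    def score(doc: str) -> float:
        # Stand-in scorer: word overlap between query and document.
        # A real reranker would score the (query, doc) pair with a trained model.
        q_terms = set(query.lower().split())
        d_terms = set(doc.lower().split())
        return len(q_terms & d_terms) / max(len(q_terms), 1)

    ranked = sorted(candidates, key=score, reverse=True)
    return ranked[:top_k]


if __name__ == "__main__":
    # Candidates as they might come back from a first-pass vector search.
    retrieved = [
        "Rust makes file processing fast and memory efficient.",
        "ColPali embeds document pages for visual retrieval.",
        "Sparse SPLADE embeddings complement dense vectors in hybrid search.",
    ]
    print(rerank("fast file processing in Rust", retrieved, top_k=2))
```

In a real deployment the stand-in scorer would be replaced by the reranking model the PR refers to, while the surrounding retrieve-then-rerank flow stays the same.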