A sophisticated Chrome extension that helps you take control of your social media experience through advanced content filtering and rating, powered by local AI processing.
- Real-time content analysis using local Llama models
- 0-100 rating scale with visual indicators
- Comprehensive rating components (a weighted-score sketch follows this feature list):
  - Content Quality (40%)
  - Emotional Impact (30%)
  - User Preferences (30%)
- Platform-specific content detection
- Customizable filtering thresholds
- Visual feedback through color-coded ratings
- Quick actions: Hide/Block content
- All processing happens locally on your device
- No data sent to external servers
- Local Llama model integration
- Complete control over AI model selection
- Efficient post detection and processing
- Background worker for AI processing
- Smart caching system
- Minimal impact on browsing experience
- Clean, intuitive interface
- Dark mode support
- Material Design components
- Responsive overlays and popups
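As a rough illustration of how the three rating components above could combine into the 0-100 score, here is a minimal TypeScript sketch. The `RatingComponents` shape and `computeOverallRating` helper are hypothetical, not the extension's actual API; only the 40/30/30 weights come from the feature list.

```typescript
// Hypothetical combination of the documented rating components.
interface RatingComponents {
  contentQuality: number;   // 0-100
  emotionalImpact: number;  // 0-100
  userPreferences: number;  // 0-100
}

// Weighted sum using the documented 40/30/30 split, clamped to 0-100.
function computeOverallRating(c: RatingComponents): number {
  const score = 0.4 * c.contentQuality + 0.3 * c.emotionalImpact + 0.3 * c.userPreferences;
  return Math.round(Math.min(100, Math.max(0, score)));
}

// Example: 0.4 * 80 + 0.3 * 50 + 0.3 * 70 = 68
console.log(computeOverallRating({ contentQuality: 80, emotionalImpact: 50, userPreferences: 70 }));
```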
- ✅ Basic extension structure
- ✅ TypeScript/React setup
- ✅ Post detection system
- ✅ Rating overlay UI
- ✅ Settings management
- 🚧 Local AI integration (in progress)
- Frontend: React, TypeScript, Material-UI
- Styling: TailwindCSS
- AI: Local Llama.cpp via WebAssembly
- Build: Webpack, PostCSS
Before you begin, ensure you have the following tools installed:
- Node.js and npm
  - Download and install from nodejs.org
  - Required version: 16.x or higher
- CMake
  - macOS: `brew install cmake`
  - Linux: `sudo apt-get install cmake`
  - Windows: Download the installer from cmake.org
  - Required version: 3.13 or higher
- Emscripten
  - Install using the following commands:
    ```bash
    git clone https://github.com/emscripten-core/emsdk.git
    cd emsdk
    ./emsdk install latest
    ./emsdk activate latest
    source ./emsdk_env.sh   # On Windows, use: emsdk_env.bat
    ```
  - Add to your PATH as instructed by the installer
  - Required version: 3.1.45 or higher
- Git
  - macOS: `brew install git`
  - Linux: `sudo apt-get install git`
  - Windows: Download from git-scm.com
- Clone the repository
  ```bash
  git clone https://github.com/yourusername/chrome-extension-hardcode-blackout.git
  cd chrome-extension-hardcode-blackout
  ```
- Install dependencies
  ```bash
  npm install
  ```
- Prepare the Llama model and WASM build
  ```bash
  npm run prepare-model
  ```
  This script will:
  - Clone and build llama.cpp with WASM support
  - Download the required model files
  - Set up the WASM integration
- Build the extension
  ```bash
  npm run build
  ```
- Load the extension in Chrome
  - Open Chrome and go to `chrome://extensions/`
  - Enable Developer mode
  - Click "Load unpacked"
  - Select the `dist` directory
- Start Development Server
  ```bash
  npm run dev
  ```
  This will:
  - Start Webpack in watch mode
  - Rebuild on file changes
  - Enable source maps for debugging
- Enable Chrome DevTools
  - Right-click the extension icon
  - Click "Inspect popup"
  - Use the Console and Network tabs for debugging
- Live Reload
  - Changes to content scripts require an extension reload
  - Click the refresh icon in `chrome://extensions/`
  - Or use Chrome's Extensions Reloader extension
- Debug Logging
  - Set `DEBUG=true` in your `.env` file
  - View logs in the background script console
  - Access it via `chrome://extensions/` -> Inspect views
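If you want a single switch for verbose logging, a minimal sketch like the following works, assuming the Webpack build injects `process.env.DEBUG` from the `.env` file (for example via `DefinePlugin` or `dotenv-webpack`); the `debugLog` helper and log prefix are illustrative:

```typescript
// Debug-gated logger; assumes the build replaces process.env.DEBUG at bundle time.
const DEBUG = process.env.DEBUG === 'true';

export function debugLog(...args: unknown[]): void {
  if (DEBUG) {
    // Appears in the background script console (chrome://extensions/ -> Inspect views).
    console.log('[extension]', ...args);
  }
}
```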
The project includes comprehensive testing at multiple levels:
Test individual components and services:
```bash
# Run all unit tests
npm run test:unit

# Run specific test file
npm run test:unit -- llama-service.test.ts

# Watch mode for development
npm run test:unit -- --watch
```
Test component interactions and DOM manipulation:
```bash
# Run all integration tests
npm run test:integration

# Run with coverage
npm run test:integration -- --coverage
```
Test the extension in a real browser environment:
```bash
# Install Playwright browsers
npx playwright install

# Run all E2E tests
npm run test:e2e

# Run specific browser tests
npm run test:e2e -- --project=chromium

# Show test report
npx playwright show-report
```
Generate and view test coverage reports:
```bash
# Generate coverage report
npm run test:coverage

# Open coverage report
open coverage/lcov-report/index.html
```
- Content Processing
  - Post detection and analysis
  - Rating calculation
  - Visual overlay rendering
- UI Components
  - Popup functionality
  - Settings page interactions
  - Dark/light theme switching
- Model Integration
  - WASM module loading
  - Model inference
  - Performance benchmarks
- Browser Integration
  - Extension installation
  - Chrome API interactions
  - Cross-platform compatibility
- Jest Tests
  ```bash
  # Run with detailed logging
  npm run test -- --verbose

  # Debug specific test
  node --inspect-brk node_modules/.bin/jest --runInBand path/to/test
  ```
- Playwright Tests
  ```bash
  # Run in debug mode
  npm run test:e2e -- --debug

  # Run with UI mode
  npm run test:e2e -- --ui
  ```
- Common Issues
  - WASM Loading: Ensure proper path resolution in tests
  - Chrome API Mocks: Verify that mock implementations match real behavior (see the sketch after this list)
  - Async Operations: Use proper wait and timeout values
  - DOM Events: Ensure proper event simulation and cleanup
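For the Chrome API mock issue above, a common approach is to install a small in-memory stand-in before the tests run. The sketch below assumes the code under test uses the promise-based `chrome.storage.local` API from Manifest V3; adapt it to callbacks if your code uses them:

```typescript
// In-memory mock of chrome.storage.local for Jest (illustrative, not exhaustive).
const store: Record<string, unknown> = {};

(globalThis as any).chrome = {
  storage: {
    local: {
      get: jest.fn(async (keys: string | string[]) => {
        const wanted = Array.isArray(keys) ? keys : [keys];
        // Like the real API, only return keys that are actually set.
        return Object.fromEntries(wanted.filter((k) => k in store).map((k) => [k, store[k]]));
      }),
      set: jest.fn(async (items: Record<string, unknown>) => {
        Object.assign(store, items);
      }),
    },
  },
};

afterEach(() => {
  jest.clearAllMocks();                                // reset call counts between tests
  Object.keys(store).forEach((k) => delete store[k]);  // clear persisted state
});
```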
- Type Checking
  ```bash
  npm run type-check
  ```
- Linting
  ```bash
  # Run linter
  npm run lint

  # Fix auto-fixable issues
  npm run lint -- --fix
  ```
- Pre-commit Hooks
  - Tests must pass
  - No TypeScript errors
  - No linting errors
  - Coverage thresholds met
- Writing Tests
  - Follow the AAA pattern (Arrange, Act, Assert); see the example after this list
  - Use meaningful test descriptions
  - Keep tests focused and isolated
  - Clean up after each test
- Mocking
  - Mock external dependencies
  - Use jest.spyOn for verification
  - Reset mocks between tests
  - Document mock behavior
- Performance
  - Group related tests
  - Reuse setup when possible
  - Mock heavy operations
  - Use snapshot testing wisely
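A small example of the AAA pattern and mocking guidance, written against a hypothetical `RatingService`; the class, its methods, and the import path are illustrative only:

```typescript
import { RatingService } from '../src/services/rating-service'; // hypothetical path

describe('RatingService', () => {
  afterEach(() => jest.restoreAllMocks());

  it('decides to hide posts that score below the threshold', async () => {
    // Arrange: stub the model call so no real inference runs.
    const service = new RatingService({ hideBelow: 40 });
    jest.spyOn(service, 'analyze').mockResolvedValue(25);

    // Act
    const decision = await service.decide('some post text');

    // Assert
    expect(service.analyze).toHaveBeenCalledWith('some post text');
    expect(decision).toBe('hide');
  });
});
```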
- Click the extension icon to open the popup
- Use the quick settings for basic filtering
- Open the full settings page for detailed configuration
- Select your preferred Llama model
- Configure inference settings
- Adjust processing parameters
- Posts are automatically rated (0-100)
- Color-coded indicators show content quality
- Use quick actions to hide or block content
- Customize thresholds in settings
We welcome contributions! Please see our Contributing Guidelines for details.
- Fork the repository
- Create your feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Llama.cpp for the WebAssembly integration
- Material-UI for the UI components
- The open-source community for inspiration and support
Feature Specification: Hardcore Blackout
Hardcore Blackout is a Chrome extension engineered to regulate social media content exposure through an advanced filtering pipeline. It combines deterministic keyword filtering with probabilistic, AI-driven content analysis to give users fine-grained control over their digital environment. Users can define precise filtering rules, delegate evaluation to OpenAI's natural language models, or run fully local inference through frameworks such as Ollama for privacy-centric computation. The extension also includes a browsing history sanitation module that automatically removes undesired entries according to user-defined heuristics. The project is maintained as an open-source initiative in a public GitHub repository under the permissive MIT License, encouraging community-driven contributions and iterative enhancement.
- Enables fine-grained specification of prohibited lexemes and phrases.
- Implements real-time content interception across multiple digital ecosystems, including Twitter, Facebook, and Reddit.
- Grants platform-specific configurability to ensure adaptive content management strategies.
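A minimal sketch of the deterministic side of this filtering, under the assumption that blocked terms are matched as normalized substrings; the helper names are illustrative:

```typescript
// Normalize text so keyword matching is case- and whitespace-insensitive.
function normalizeText(text: string): string {
  return text.toLowerCase().normalize('NFKC').replace(/\s+/g, ' ').trim();
}

// Return the first blocked term found in the post text, or null if none match.
function matchBlockedTerm(postText: string, blockedTerms: string[]): string | null {
  const haystack = normalizeText(postText);
  for (const term of blockedTerms) {
    if (haystack.includes(normalizeText(term))) return term;
  }
  return null;
}
```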
- Facilitates user-defined prompt-based filtering by interfacing with OpenAI models (e.g., GPT-4-turbo).
- Introduces an alternative local processing mechanism leveraging open-source LLM runtimes such as Ollama for enhanced privacy.
- Permits dynamic toggling between remote and local AI evaluation pipelines to optimize computational efficiency and cost management.
- Empowers users with a modular interface to activate or deactivate filtering across designated platforms.
- Enhances processing efficiency by narrowing operational scope to pertinent social media domains.
- Identifies and programmatically expunges history records encompassing filtered keywords.
- Implements a rule-based approach permitting manual validation or automated enforcement of deletion protocols.
- Allows persistent user-defined blacklists to refine and streamline future filtering operations.
- Employs computational linguistics techniques to ascertain the probability of textual AI synthesis.
- Incorporates statistical similarity analysis and entropy-based heuristics to discern machine-generated content.
- Displays real-time confidence metrics enabling user-informed interaction with AI-generated discourse.
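To make the entropy-based heuristic concrete, here is a sketch that computes character-level Shannon entropy and maps it to a rough confidence value. The thresholds and the assumption that lower entropy suggests machine-generated text are illustrative; this is not a validated detector:

```typescript
// Character-level Shannon entropy in bits per character.
function shannonEntropy(text: string): number {
  if (text.length === 0) return 0;
  const counts = new Map<string, number>();
  for (const ch of text.toLowerCase()) counts.set(ch, (counts.get(ch) ?? 0) + 1);
  let entropy = 0;
  for (const count of counts.values()) {
    const p = count / text.length;
    entropy -= p * Math.log2(p);
  }
  return entropy;
}

// Map entropy into a rough 0-1 "possibly AI-generated" confidence for display.
function aiConfidence(text: string, lowEntropy = 3.5, highEntropy = 4.5): number {
  const h = shannonEntropy(text);
  const score = (highEntropy - h) / (highEntropy - lowEntropy); // lower entropy -> higher score
  return Math.min(1, Math.max(0, score));
}
```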
- Utilizes cryptographic hashing functions to generate canonical fingerprints for encountered posts.
- Stores historical content signatures within an indexed local repository to facilitate redundancy assessment.
- Affords customizable visibility controls for duplicated content, encompassing highlighting, blurring, or removal.
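The fingerprinting step can be sketched with the Web Crypto API, which is available in extension contexts; the canonicalization rules and function name below are illustrative choices:

```typescript
// Canonicalize a post and derive a SHA-256 fingerprint for duplicate detection.
async function fingerprintPost(text: string): Promise<string> {
  const canonical = text
    .toLowerCase()
    .replace(/https?:\/\/\S+/g, '') // drop URLs, which often vary between reposts
    .replace(/\s+/g, ' ')
    .trim();

  const bytes = new TextEncoder().encode(canonical);
  const digest = await crypto.subtle.digest('SHA-256', bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
}

// A seen-post index can then be a set of fingerprints persisted via chrome.storage.local.
```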
- Soft Filtering: Implements opacity-based obfuscation, affording discretionary user visibility restoration.
- Hardcore Blackout: Executes absolute removal of designated content from rendered web pages.
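In a content script, the two modes could look roughly like this; the `FilterMode` type and `applyFilter` helper are assumptions, not the shipped implementation:

```typescript
type FilterMode = 'soft' | 'hardcore';

function applyFilter(post: HTMLElement, mode: FilterMode): void {
  if (mode === 'soft') {
    // Soft filtering: dim the post but let the user restore it with a click.
    post.style.opacity = '0.15';
    post.style.cursor = 'pointer';
    post.addEventListener('click', () => { post.style.opacity = '1'; }, { once: true });
  } else {
    // Hardcore blackout: remove the node from the rendered page entirely.
    post.remove();
  }
}
```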
- Web Content Manipulation: `chrome.webRequest` and `chrome.storage` are leveraged for content interception and storage operations.
- Browsing History Manipulation: `chrome.history.deleteUrl()` serves as the core API for deletion automation.
- Persistent Configurations: `chrome.storage.local` preserves user-defined preferences across sessions.
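A sketch of how these APIs fit together for history sanitation, using the promise-based Manifest V3 forms of `chrome.history` and `chrome.storage.local`; the storage key and result limit are illustrative:

```typescript
// Delete history entries whose URL or title contains a blocked keyword.
async function purgeHistoryMatching(keywords: string[]): Promise<void> {
  const results = await chrome.history.search({ text: '', maxResults: 1000, startTime: 0 });
  for (const item of results) {
    if (!item.url) continue;
    const haystack = `${item.url} ${item.title ?? ''}`.toLowerCase();
    if (keywords.some((k) => haystack.includes(k.toLowerCase()))) {
      await chrome.history.deleteUrl({ url: item.url });
    }
  }
}

// Persist the user's keyword blacklist across sessions.
async function saveKeywords(keywords: string[]): Promise<void> {
  await chrome.storage.local.set({ blockedKeywords: keywords });
}
```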
- Remote OpenAI Integration:
- Transmits textual data via API requests for model inference and response generation.
- Receives and processes sentiment-based or semantic filtration scores to inform removal decisions.
- Local AI Deployment:
- Implements on-device inferencing with locally hosted models (e.g., via Ollama) to ensure data sovereignty.
- Utilizes optimization strategies such as quantized inference for enhanced computational efficiency.
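For the remote path, a hedged sketch of calling OpenAI's Chat Completions endpoint and parsing a numeric score from the reply; the prompt, model name, and number-only response convention are illustrative assumptions:

```typescript
async function scoreWithOpenAI(postText: string, apiKey: string): Promise<number> {
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: 'gpt-4-turbo',
      messages: [
        { role: 'system', content: 'Rate the following post from 0 (block) to 100 (keep). Reply with a number only.' },
        { role: 'user', content: postText },
      ],
      temperature: 0,
    }),
  });

  const data = await res.json();
  const score = parseInt(data.choices?.[0]?.message?.content ?? '', 10);
  return Number.isNaN(score) ? 50 : score; // fall back to a neutral score on parse failure
}
```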
- Canonicalization Pipeline:
- Standardizes and tokenizes content before hash computation.
- Compares resultant hashes against a stored repository for duplicate detection.
- User Interface Enhancements:
- Provides an intuitive settings dashboard featuring per-platform toggle switches and dynamic input fields.
- Integrates a real-time status monitor displaying AI computation metrics and filtering efficacy.
- On-Page Controls:
- Empowers users to override or refine filtering results directly within the browser interface.
- ✅ Establish GitHub repository with structured documentation.
- ✅ Define modular project architecture to facilitate incremental development.
- 🔲 Implement foundational Chrome extension scaffolding and permission definitions.
- 🔲 Deploy deterministic keyword-based filtering logic.
- 🔲 Implement dynamic browsing history sanitation mechanisms.
- 🔲 Design and integrate a feature-rich user settings page.
- 🔲 Establish OpenAI API interfacing for remote AI inference.
- 🔲 Implement Ollama-based local AI inference pipeline.
- 🔲 Integrate probabilistic AI-generated content detection mechanisms.
- 🔲 Augment UI with interactive filtering status feedback.
- 🔲 Develop an on-screen content manipulation toolkit.
- 🔲 Refine in-page dynamic toggle and control mechanisms.
- 🔲 Optimize computational workload distribution for real-time filtering.
- 🔲 Conduct multi-platform testing to validate cross-environment compatibility.
- 🔲 Ensure compliance with Chrome extension security and permission policies.
- 🔲 Publish initial release on the Chrome Web Store.
- 🔲 Develop community contribution guidelines and best practices.
- 🔲 Solicit and incorporate feedback for iterative refinements.
- Computational Overhead: Optimizing inferencing latency while maintaining accuracy is a primary objective.
- Web Platform Compliance: Adapting to evolving social media content policies remains an ongoing necessity.
- Privacy Safeguards: Ensuring on-device computation alternatives fortifies user data security.
- OpenAI API Cost Management: Provisions for quota monitoring and alternative AI deployments mitigate financial overhead.
- Local AI Optimization: Refining memory-efficient models to sustain real-time processing efficacy.
- Adaptive AI Personalization: Continually evolving user-tailored filtration models via interactive feedback loops.
- Extended Multi-Model Integration: Expanding the AI processing ecosystem to encompass diversified LLM architectures.
- Cross-Device Interoperability: Facilitating synchronization across browser instances and mobile environments.
This document encapsulates a rigorous, methodologically structured approach to the development of Hardcore Blackout, ensuring an optimized, extensible, and research-driven solution for real-time social media content management.