Technology

cactus_destroy for all on-device inference

High-velocity weight pruning engine that cuts LLM memory footprints by 60 percent for edge deployment.

cactus_destroy strips redundant activation paths to enable high-speed inference on constrained hardware such as the iPhone 12 or Raspberry Pi 4. The engine prunes high-variance weights that contribute negligibly to output accuracy, reducing total memory footprint by 60 percent. By leveraging 4-bit sparse matrix conversion, it reaches 45 tokens per second on standard mobile NPU architectures and fits 7B-parameter models into a 4 GB RAM envelope. Just pass your PyTorch checkpoint through the CLI to generate optimized CoreML or ONNX files.
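The blurb above combines two standard compression steps: pruning away low-contribution weights, then quantizing the survivors to 4 bits. As a rough illustration of that pipeline (not cactus_destroy's actual implementation; the function name, its parameters, and the 60 percent default here are illustrative assumptions), a NumPy sketch:

```python
import numpy as np

def prune_and_quantize(w: np.ndarray, sparsity: float = 0.6, bits: int = 4):
    """Zero out the smallest-magnitude weights, then map survivors onto a
    signed `bits`-bit integer grid (symmetric, per-tensor scale).

    Illustrative sketch only -- cactus_destroy's real criterion and
    quantization scheme are not documented here.
    """
    flat = np.abs(w).ravel()
    k = int(len(flat) * sparsity)
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k)[k] if k > 0 else 0.0
    mask = np.abs(w) >= threshold
    pruned = np.where(mask, w, 0.0)

    qmax = 2 ** (bits - 1) - 1            # 7 for 4-bit signed
    scale = max(float(np.abs(pruned).max()) / qmax, 1e-8)
    q = np.round(pruned / scale).astype(np.int8)  # 4-bit values held in int8
    return q, scale, mask

# Dequantize at inference time with w_hat = q * scale
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
q, scale, mask = prune_and_quantize(w)
achieved_sparsity = 1.0 - mask.mean()
```

A sparse 4-bit layout like this is also roughly where the quoted 60 percent figure could come from: dropping 60 percent of weights and storing the rest in 4 bits instead of 16 shrinks the dense-FP16 footprint considerably, before CoreML or ONNX packing.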

https://github.com/cactus-destroy/inference

Recent Talks & Demos


No public projects found for this technology yet.