LFM2-VL-1.6B-GGUF
LFM2-VL is a new generation of vision-language models developed by Liquid AI, designed specifically for edge AI and on-device deployment. It sets a new standard in quality, speed, and memory efficiency.
Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2-VL-1.6B
How to run LFM2-VL
Example usage with llama.cpp:
Full precision (F16/F16):
llama-mtmd-cli -hf LiquidAI/LFM2-VL-1.6B-GGUF:F16
Fastest inference (Q4_0/Q8_0):
llama-mtmd-cli -hf LiquidAI/LFM2-VL-1.6B-GGUF:Q4_0
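To run an actual vision query rather than just load the model, an image and a prompt can be passed in the same call. A minimal sketch, assuming llama.cpp's standard `--image` and `-p` flags (the image path here is a placeholder, not part of the original commands):

```shell
# Quantized inference on a local image (photo.jpg is a placeholder path).
# The model and vision projector are fetched from the Hugging Face repo on first run.
llama-mtmd-cli -hf LiquidAI/LFM2-VL-1.6B-GGUF:Q4_0 \
  --image photo.jpg \
  -p "Describe this image."
```

The Q4_0 variant trades a small amount of quality for lower memory use and faster inference; swap in `:F16` for full precision.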
Model tree for aoiandroid/LFM2-VL-1.6B-GGUF
Base model: LiquidAI/LFM2-VL-1.6B