# command-r7b-arabic-heretic-abliterated GGUF
GGUF quantizations of the original model:
https://huggingface.co/Faisalkh/command-r7b-arabic-heretic-abliterated

These files work with llama.cpp, LM Studio, KoboldCpp, and any other runtime that supports the GGUF format.
## Available Files

### 16-bit
- Faisalkh-command-r7b-arabic-heretic-abliterated-F16.gguf

### 2-bit
- Faisalkh-command-r7b-arabic-heretic-abliterated-Q2_K.gguf

### 3-bit
- Faisalkh-command-r7b-arabic-heretic-abliterated-Q3_K_S.gguf
- Faisalkh-command-r7b-arabic-heretic-abliterated-Q3_K_M.gguf
- Faisalkh-command-r7b-arabic-heretic-abliterated-Q3_K_L.gguf

### 4-bit
- Faisalkh-command-r7b-arabic-heretic-abliterated-Q4_K_S.gguf
- Faisalkh-command-r7b-arabic-heretic-abliterated-Q4_K_M.gguf

### 5-bit
- Faisalkh-command-r7b-arabic-heretic-abliterated-Q5_K_S.gguf
- Faisalkh-command-r7b-arabic-heretic-abliterated-Q5_K_M.gguf

### 6-bit
- Faisalkh-command-r7b-arabic-heretic-abliterated-Q6_K.gguf

### 8-bit
- Faisalkh-command-r7b-arabic-heretic-abliterated-Q8_0.gguf
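To pick a file that fits your RAM/VRAM, a rough rule of thumb is that file size scales with bits per weight: size ≈ parameter count × bits / 8. The sketch below applies this to a ~7B-parameter model; the bits-per-weight figures are assumed nominal values, and real GGUF files run somewhat larger because K-quants store per-block scales and keep some tensors at higher precision.

```python
# Rough size estimate: params * bits_per_weight / 8, converted to GiB.
# PARAMS and the bits-per-weight values below are assumptions for
# illustration; actual file sizes vary by quant type and llama.cpp version.
PARAMS = 7e9  # ~7B weights (assumed)

BITS_PER_WEIGHT = {
    "Q2_K": 2.6,   # assumed nominal value
    "Q4_K_M": 4.8, # assumed nominal value
    "Q8_0": 8.5,   # 8-bit weights plus per-block scale
    "F16": 16.0,
}

def approx_gib(bits_per_weight: float, params: float = PARAMS) -> float:
    """Approximate on-disk size in GiB for a given quantization level."""
    return params * bits_per_weight / 8 / 2**30

for name, bpw in BITS_PER_WEIGHT.items():
    print(f"{name}: ~{approx_gib(bpw):.1f} GiB")
```

By this estimate, Q4_K_M lands around 4 GiB while F16 is roughly four times that, which is why the 4-bit and 5-bit K-quants are the usual starting point on consumer hardware.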
## Usage

Example with llama.cpp:

```shell
./llama-cli -m Faisalkh-command-r7b-arabic-heretic-abliterated-Q4_K_M.gguf
```
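A slightly fuller invocation might look like the following; treat it as a sketch, since flag availability and defaults depend on your llama.cpp build, and the prompt and token counts are arbitrary examples.

```shell
# Example run with an Arabic prompt (illustrative values, not recommendations):
#   -m  path to the GGUF file
#   -p  prompt text
#   -n  maximum number of tokens to generate
#   -c  context window size
./llama-cli \
  -m Faisalkh-command-r7b-arabic-heretic-abliterated-Q4_K_M.gguf \
  -p "اكتب فقرة قصيرة عن الذكاء الاصطناعي" \
  -n 256 \
  -c 4096
```

If your build has GPU support, offloading layers (for example with `-ngl`) typically speeds up generation considerably.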
## Original Model

Base model repository:
https://huggingface.co/Faisalkh/command-r7b-arabic-heretic-abliterated