RTX 5090
Pros
- Runs Devstral Small 2 24B at Q4 natively
- 32 GB VRAM — adequate headroom
21 consumer GPUs can run Devstral Small 2 24B at Q4 natively. Precise VRAM thresholds and benchmarks below.
Prices and availability may change · affiliate link
llama.cpp 0.2.x · CUDA 12 · ROCm 6 · updated monthly · methodology →
This model requires a mid-range GPU (16 GB VRAM).
Best picks by compatibility, VRAM headroom, and value — prices and availability may change.
Some links are Amazon affiliate links. We may earn a commission at no extra cost to you. Amazon cookies may last up to 24 hours after your click.
CPU vs GPU for Devstral Small 2 24B →
VRAM Calculator — instant compatibility check
RTX 5090
32 GB · Runs Q4 natively · Check availability
| Quantization | VRAM needed | Disk space | Quality |
|---|---|---|---|
| FP16 (max quality) | 52.8 GB | 48 GB | Maximum |
| Q8 (high quality) | 26.4 GB | 24 GB | Near-lossless |
| Q4 (recommended) | 13.2 GB | 12 GB | Best balance |
| Q2 (minimum) | 6.6 GB | 6 GB | Quality loss |
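The figures in this table follow a common rule of thumb: weights stored at the quantized bit-width, plus roughly 10% overhead for the KV cache and activations. A minimal sketch of that estimate (an illustration of the arithmetic, not this site's exact calculator):

```python
# Rough VRAM estimate: params * bits/8 for the weights (the "disk
# space" column), plus ~10% overhead for KV cache and activations
# (the "VRAM needed" column). The 10% figure is an assumption that
# happens to reproduce the table above.
PARAMS = 24e9  # Devstral Small 2 parameter count

def vram_gb(bits_per_weight, overhead=1.10):
    """Estimated VRAM in GB for a given quantization bit-width."""
    disk_gb = PARAMS * bits_per_weight / 8 / 1e9
    return round(disk_gb * overhead, 1)

for name, bits in [("FP16", 16), ("Q8", 8), ("Q4", 4), ("Q2", 2)]:
    print(f"{name}: {vram_gb(bits)} GB VRAM")
# FP16: 52.8, Q8: 26.4, Q4: 13.2, Q2: 6.6 -- matching the table
```

Real runtimes vary the overhead with context length, so treat the output as a lower bound rather than a guarantee.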
| Spec | Value |
|---|---|
| Developer | Mistral AI |
| Parameters | 24B |
| Context window | 256,000 tokens |
| License | Apache 2.0 |
| Use cases | coding, agentic, reasoning |
| Released | 2025-12 |
Install with Ollama:

```
ollama run devstral:24b
```

Hugging Face: mistralai/Devstral-Small-2-24B-Instruct-2512

Devstral Small 2 24B requires **13.2 GB VRAM** at Q4. 21 consumer GPUs meet this threshold; GPUs below it fall back to CPU offload with significant latency.
21 GPUs run Q4 natively · 18 run with offload
RTX 5090
32 GB VRAM
Check availability →
RTX 4090
24 GB VRAM
Check availability →
M4 Ultra
128 GB unified memory
Check availability →
Devstral Small 2 24B can run on CPU without a dedicated GPU — unusual for a 24B model. On an i7-13700K with llama.cpp Q4 it reaches 5 tok/s (slow but usable). With a GPU you get 4–6× more speed — check the VRAM calculator for specifics.
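What those figures mean in practice: at 5 tok/s on CPU versus a 4–6× GPU speedup (the numbers quoted above), a typical answer takes minutes versus seconds. A quick back-of-envelope check:

```python
# Back-of-envelope latency comparison using the figures from this
# page: ~5 tok/s on an i7-13700K at Q4, and a 4-6x GPU speedup.
cpu_tps = 5.0
gpu_tps = (cpu_tps * 4, cpu_tps * 6)  # 20-30 tok/s

def seconds_for(tokens, tps):
    """Time to generate `tokens` at a throughput of `tps` tok/s."""
    return tokens / tps

# Generating a 500-token coding answer:
print(f"CPU: {seconds_for(500, cpu_tps):.0f} s")
print(f"GPU: {seconds_for(500, gpu_tps[1]):.0f}-{seconds_for(500, gpu_tps[0]):.0f} s")
# CPU: 100 s; GPU: 17-25 s
```

This is why CPU-only inference is "slow but usable" for short interactions, while agentic workloads with long outputs strongly favor a GPU.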
Which GPU is worth it? Real specs and benchmarks side by side.
GPUs that run Devstral Small 2 24B at Q4 — sorted by AI performance score.
Similar models in the coding category with comparable VRAM footprints.
The VRAM Calculator tells you exactly which quantization your hardware can handle.