RTX 5090
Pros
- Runs Qwen3 1.7B at Q4 natively
- 32 GB VRAM — adequate headroom
40 consumer GPUs can run Qwen3 1.7B at Q4 natively. Precise VRAM thresholds and benchmarks below.
Prices and availability may change · affiliate link
llama.cpp 0.2.x · CUDA 12 · ROCm 6 · updated monthly · methodology →
This model requires an Entry GPU (8 GB VRAM)
Best picks by compatibility, VRAM headroom, and value — prices and availability may change.
Some links are Amazon affiliate links. We may earn a commission at no extra cost to you. The Amazon cookie can last up to 24 hours after the click.
CPU vs GPU for Qwen3 1.7B →
VRAM Calculator — instant compatibility check
RTX 5090
32 GB · Runs Q4 natively · Check availability
*Prices and availability may change. Some links are affiliate links.
| Quantization | VRAM needed | Disk space | Quality |
|---|---|---|---|
| FP16 (max quality) | 3.7 GB | 3.4 GB | Maximum |
| Q8 (high quality) | 1.9 GB | 1.7 GB | Near-lossless |
| Q4 (recommended) | 0.9 GB | 0.9 GB | Best balance |
| Q2 (minimum) | 0.5 GB | 0.4 GB | Quality loss |
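The VRAM figures above are consistent with a simple weights-plus-overhead estimate. As a rough sketch (the ~10% overhead factor for KV cache and activations is an assumption for illustration, not this site's published methodology):

```python
# Rough VRAM estimate: parameter count × bytes per weight, plus ~10%
# overhead (assumed) for KV cache and activations.
def estimate_vram_gb(params_billion: float, bits_per_weight: int,
                     overhead: float = 1.1) -> float:
    weight_gb = params_billion * bits_per_weight / 8  # weights alone
    return round(weight_gb * overhead, 1)

print(estimate_vram_gb(1.7, 4))   # Q4  → 0.9
print(estimate_vram_gb(1.7, 16))  # FP16 → 3.7
```

With these assumed constants the estimate reproduces the table rows above; real requirements also vary with context length.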
| Spec | Value |
|---|---|
| Developer | Alibaba |
| Parameters | 1.7B |
| Context window | 131,072 tokens |
| License | Apache 2.0 |
| Use cases | chat, reasoning |
| Released | 2025-04 |
Install with Ollama: `ollama run qwen3:1.7b`
Hugging Face: Qwen/Qwen3-1.7B
Qwen3 1.7B requires 0.9 GB VRAM at Q4. 40 consumer GPUs meet this threshold. With less free VRAM than that, you'll hit significant offload latency.
40 GPUs run Q4 natively · 0 require offload
| GPU | VRAM | Compatibility | Est. Speed | Action |
|---|---|---|---|---|
| RTX 5090 | 32GB | Optimal | 330 tok/s | Calculate → |
| RTX 4090 | 24GB | Optimal | 330 tok/s | Calculate → |
| M4 Ultra | 128GB | Optimal | 330 tok/s | Calculate → |
| RTX 5080 | 16GB | Optimal | 330 tok/s | Calculate → |
| M3 Ultra | 192GB | Optimal | 319 tok/s | Calculate → |
| RTX 4080 Super | 16GB | Optimal | 305 tok/s | Calculate → |
| RTX 5070 Ti | 16GB | Optimal | 326 tok/s | Calculate → |
| RTX 3090 | 24GB | Optimal | 329 tok/s | Calculate → |
| M4 Max 48GB | 48GB | Optimal | 227 tok/s | Calculate → |
| RX 7900 XTX | 24GB | Optimal | 330 tok/s | Calculate → |
| M4 Max 36GB | 36GB | Optimal | 227 tok/s | Calculate → |
| RTX 4070 Ti Super | 16GB | Optimal | 279 tok/s | Calculate → |
| RTX 3080 Ti | 12GB | Optimal | 323 tok/s | Calculate → |
| RX 7900 XT | 20GB | Optimal | 319 tok/s | Calculate → |
| RTX 5070 | 12GB | Optimal | 279 tok/s | Calculate → |
| RTX 3080 | 10GB | Optimal | 315 tok/s | Calculate → |
| M4 Pro | 24GB | Optimal | 113 tok/s | Calculate → |
| RX 7800 XT | 16GB | Optimal | 259 tok/s | Calculate → |
| RX 6800 XT | 16GB | Optimal | 213 tok/s | Calculate → |
| RTX 4070 | 12GB | Optimal | 209 tok/s | Calculate → |
| RTX 4060 Ti 16GB | 16GB | Optimal | 119 tok/s | Calculate → |
| RX 7700 XT | 12GB | Optimal | 179 tok/s | Calculate → |
| RTX 3070 Ti | 8GB | Optimal | 253 tok/s | Calculate → |
| RTX 4060 Ti | 8GB | Optimal | 119 tok/s | Calculate → |
| RTX 3070 | 8GB | Optimal | 186 tok/s | Calculate → |
| RX 6700 XT | 12GB | Optimal | 160 tok/s | Calculate → |
| M3 Pro | 18GB | Optimal | 63 tok/s | Calculate → |
| RTX 3060 Ti | 8GB | Optimal | 186 tok/s | Calculate → |
| RTX 2080 Ti | 11GB | Optimal | 186 tok/s | Calculate → |
| RTX 3060 | 12GB | Optimal | 149 tok/s | Calculate → |
| M2 Pro | 16GB | Optimal | 83 tok/s | Calculate → |
| RTX 4060 | 8GB | Optimal | 113 tok/s | Calculate → |
| Arc A770 16GB | 16GB | Optimal | 93 tok/s | Calculate → |
| M1 Pro | 16GB | Optimal | 83 tok/s | Calculate → |
| RX 7600 | 8GB | Optimal | 120 tok/s | Calculate → |
| RX 6600 XT | 8GB | Optimal | 113 tok/s | Calculate → |
| Arc A750 8GB | 8GB | Optimal | 85 tok/s | Calculate → |
| RX 6600 | 8GB | Optimal | 102 tok/s | Calculate → |
| RTX 3050 8GB | 8GB | Optimal | 93 tok/s | Calculate → |
| GTX 1660 Super | 6GB | Optimal | 139 tok/s | Calculate → |
RTX 5090
32 GB VRAM
Check availability →
RTX 4090
24 GB VRAM
Check availability →
M4 Ultra
128 GB VRAM
Check availability →
Qwen3 1.7B is an edge model that runs directly on the CPU, no GPU required. On an i7-13700K with llama.cpp at Q4 it reaches 35 tokens/second, enough for real-time chat. Even a GPU with just 6 GB VRAM pushes that to roughly 126 tok/s. Ideal for laptops and desktops without a dedicated graphics card.
Which GPU is worth it? Real specs and benchmarks side by side.
GPUs that run Qwen3 1.7B at Q4 — sorted by AI performance score.
Similar models in the chat category with comparable VRAM footprints.
The VRAM Calculator tells you exactly which quantization your hardware can handle.
RTX 5090
Prices change daily