€269
RTX 3060 12GB
For 7B–13B under €300
Pros
- 12 GB VRAM
- Llama 8B at 30 tok/s
- Best entry point
Exact VRAM requirements, real benchmarks, and compatible GPUs — no guesswork.
Stop guessing. Find the exact VRAM and GPU you need in under 5 minutes.
TFLOPS, CUDA cores, tensor ops… none of that tells you which models you can actually run. Real performance depends on memory bandwidth and quantization efficiency.
Buying the wrong GPU can limit you for years. VRAM is the ultimate bottleneck for LLMs — yet most consumer cards are underspecified for local inference.
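The rule of thumb behind that claim can be sketched in a few lines. This is an illustrative back-of-envelope estimate, not the site's diagnostic engine; the function name and overhead figure are assumptions:

```python
def estimate_vram_gb(params_b: float, bits_per_weight: int = 4,
                     overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate: model weights at the chosen quantization,
    plus a flat allowance for KV cache, activations, and runtime buffers.
    (Illustrative rule of thumb only; real usage varies by runtime.)"""
    weights_gb = params_b * bits_per_weight / 8  # 1B params at 8 bits ~ 1 GB
    return weights_gb + overhead_gb

# A 13B model at 4-bit quantization needs roughly 8 GB,
# so it fits comfortably in a 12 GB card.
print(round(estimate_vram_gb(13, 4), 1))  # → 8.0
```

This is why the marketing spec sheet misleads: two cards with identical TFLOPS but 8 GB vs 12 GB of VRAM support very different model sizes.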
Most guides are vague or outdated. By the time a tutorial ships, the model architectures and runtime optimizations have already evolved past it.
Our hardware diagnostic engine maps your machine’s exact capabilities against every model in the registry. No synthetic benchmarks — real inference on real hardware.
Optimized local inference begins with precise architecture matching.
1. Select your GPU and system specs.
2. Choose your use case: LLMs, image generation, audio, or coding AI.
3. Get exact compatibility and performance benchmarks.
Accuracy-First Catalog Signal
Hardware-fit guidance is calculated from 99 models and 40 GPU profiles, so every recommendation starts from live catalog evidence.
The most searched models on RunAIatHome. See requirements and compatible hardware.
Contains affiliate links. We may earn a commission from qualifying purchases at no extra cost to you.
€499
Sweet spot for 13B Q4
Pros
€1799
30B+ without compromises
Pros
Our free wizard analyzes your hardware and tells you exactly what you can run.
Start Free Assessment