Best budget GPU for local AI model training in 2024?

3 Posts
4 Users
0 Reactions
51 Views
0
Topic starter

Should I go with a used RTX 3060 12GB or just shell out the extra cash for the 16GB 4060 Ti if I want to actually train models locally without it taking forever? I'm honestly so stressed about making the wrong call here since my budget is strictly $500 and I need to get this rig running by next Tuesday for my final project here in Chicago.

I've been looking at these options:

  • RTX 3060 12GB (found one for $240, very tempting)
  • RTX 4060 Ti 16GB (new for $450, hits the budget hard)

The 3060 is super cheap, but everyone says VRAM is king for AI stuff, and I'm worried 12GB will just hit an out-of-memory (OOM) error the second I try to fine-tune Llama 3 or something similar. On the other hand, the 4060 Ti 16GB fits the budget, but I've read some weird stuff about its memory bus width being a bottleneck for training speeds. Does that actually matter more than the raw VRAM amount for local fine-tuning?

I even looked at used 3090s, but people are asking like $750, which is insane and way over my limit. If I go with the 4060 Ti, will I regret it because of the speed, or is the 16GB actually the way to go for a budget setup? I just don't want to buy something that's gonna be obsolete in three months...


3 Answers
12

Be careful with the NVIDIA GeForce RTX 4060 Ti 16GB, though. I bought one and that 128-bit bus really choked my training speeds... honestly, just grab the NVIDIA GeForce RTX 3060 12GB to save cash.
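If you want to sanity-check the bus complaint yourself, the raw bandwidth math is simple. A quick sketch (the bus widths and data rates below are the commonly listed specs, worth verifying against the exact card you're buying):

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    # Peak memory bandwidth (GB/s) = bus width in bytes * per-pin data rate
    return bus_width_bits / 8 * data_rate_gbps

# Commonly listed specs (double-check for your exact model):
rtx_3060_12gb = bandwidth_gbs(192, 15.0)  # 192-bit GDDR6 @ 15 Gbps
rtx_4060_ti   = bandwidth_gbs(128, 18.0)  # 128-bit GDDR6 @ 18 Gbps
print(f"RTX 3060 12GB:    {rtx_3060_12gb:.0f} GB/s")  # 360 GB/s
print(f"RTX 4060 Ti 16GB: {rtx_4060_ti:.0f} GB/s")    # 288 GB/s
```

So on paper the cheaper card actually has more raw bandwidth, which lines up with the bus "choking" training; the 4060 Ti partly compensates with a much larger L2 cache, but that helps gaming more than large sequential training reads.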


11

@Reply #2 - good point! Just a quick thought... I would be really careful about spending your full $500. A few years back, I bought a pricey card for a project and my PSU fried, but I was too broke to fix it. Maybe look for a used MSI Ventus GeForce RTX 3060 12GB to keep a cash cushion. TL;DR: Go with the 3060. Having backup money is better for a tight deadline.





1

Building on the earlier suggestion: I spent months wrestling with memory limits on smaller cards. VRAM is basically the only thing that matters when you're trying to fit a model like Llama 3. Even with the slower bus on the ASUS Dual GeForce RTX 4060 Ti 16GB GDDR6, having that extra 4GB of headroom saved me from so many OOM errors.

  • 16GB allows for larger batch sizes
  • Avoids the hardware wall of 12GB cards
  • Fits your $500 budget perfectly

Speed is secondary if the model won't even load, ya know? You'll do great on that project tho.
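To put rough numbers on the "won't even load" point, here's a back-of-envelope VRAM sketch. The bytes-per-parameter figures are typical assumptions, not measurements, and activations plus framework overhead come on top:

```python
def finetune_vram_gb(params_b: float, weight_bytes: float,
                     trainable_frac: float = 1.0,
                     grad_bytes: float = 2, optim_bytes: float = 8) -> float:
    """Very rough VRAM estimate (GB): weights + gradients + AdamW states.
    Ignores activations, KV cache, and framework overhead."""
    weights = params_b * weight_bytes                 # model weights
    grads = params_b * trainable_frac * grad_bytes    # gradients (bf16)
    optim = params_b * trainable_frac * optim_bytes   # two fp32 Adam moments
    return weights + grads + optim

full_ft = finetune_vram_gb(8, 2)        # Llama-3-8B full fine-tune in bf16
qlora = finetune_vram_gb(8, 0.5, 0.02)  # 4-bit QLoRA, ~2% trainable params
print(f"Full fine-tune: ~{full_ft:.0f} GB (no consumer card)")  # ~96 GB
print(f"QLoRA:          ~{qlora:.1f} GB + activations")         # ~5.6 GB
```

Full fine-tuning is off the table on any consumer card, so you'll be doing QLoRA either way; with the weights and optimizer at roughly 6 GB, everything left over goes to activations and batch size, and that's exactly where 16GB vs 12GB shows up.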

