Looking to upgrade my video card — Parallax Forums


GordonMcCombGordonMcComb Posts: 3,366
edited 2015-06-07 16:44 in General Discussion
I'm looking to upgrade the video card in my Dell T5500, which uses PCIe 2.x slots. I need a very specific type of video card: an NVIDIA with at least 4GB RAM and 750+ CUDA cores. All of the ones I've seen with these specs, like the GeForce 970, are designed for PCIe 3.x.

I understand from others I've spoken to (like Xanadu, who lives near me) that a 3.x card will still work in a 2.x bus, but with some diminished performance. What I'm looking for is a notion of how much of a performance hit that might entail.

My application is not gaming, which is the typical benchmark I'm finding on the Web. This is rendering through NVIDIA's Iray rendering engine, which makes use of their cards. The more CUDA cores, the better. I need at least 2GB to hold the kind of scenes I'm doing (now with CPU-only rendering), but I think 4GB would be better for future expansion.

I want to avoid having to upgrade my entire machine as well. That will eventually come, but I'm hoping to stave that off for a little while longer.

If anyone is familiar with the CUDA core pipeline performance between PCIe architectures -- or if it's not doable or advisable at all -- I'd appreciate any suggestions. I'm aware there are other issues with the T5500 (cooling, power supply cables for the graphics card) I will also need to deal with, but I'll take those on in due time.

Comments

  • evanhevanh Posts: 15,920
    edited 2015-06-07 15:55
    I'm looking to upgrade the video card in my Dell T5500, which uses PCIe 2.x slots. I need a very specific type of video card: an NVIDIA with at least 4GB RAM and 750+ CUDA cores. All of the ones I've seen with these specs, like the GeForce 970, are designed for PCIe 3.x.

    I understand from others I've spoken to (like Xanadu, who lives near me) that a 3.x card will still work in a 2.x bus, but with some diminished performance. What I'm looking for is a notion of how much of a performance hit that might entail.

    The card just steps down to the 2.0 spec is all. So it'll pull full 2.0 bus speed, which is as good as your CPU can do. The PCIe bus will rarely be the bottleneck, so you'll be fine.

    The latest well-priced GeForce 960 has 1024 cores, but most cards come with 2GB RAM, so getting it with 4GB might be difficult. It has a comfortable 120W max power rating, so it's an easy fit with smaller power supplies. BTW: it's well known that the 970 really only has 3.5GB of useful RAM. Splurging on a GeForce Titan X would give you 12GB!
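For a rough sense of what that step-down means in numbers, the theoretical one-direction x16 bandwidth can be worked out from each generation's transfer rate and encoding overhead. This is a back-of-envelope sketch; real-world throughput is lower:

```python
# Theoretical one-direction PCIe bandwidth per generation.
# PCIe 1.x/2.x use 8b/10b line encoding; 3.x uses 128b/130b.
GENERATIONS = {
    "1.x": (2.5, 8 / 10),     # (GT/s per lane, encoding efficiency)
    "2.x": (5.0, 8 / 10),
    "3.x": (8.0, 128 / 130),
}

def bandwidth_gbps(gen, lanes=16):
    """Theoretical bandwidth in GB/s for a given generation and lane count."""
    gt_per_s, efficiency = GENERATIONS[gen]
    return gt_per_s * efficiency * lanes / 8  # divide by 8: bits -> bytes

print(f"PCIe 2.x x16: {bandwidth_gbps('2.x'):.2f} GB/s")  # 8.00 GB/s
print(f"PCIe 3.x x16: {bandwidth_gbps('3.x'):.2f} GB/s")  # ~15.75 GB/s
```

So a 3.x card dropped into a 2.x slot has roughly half the theoretical bus bandwidth available, which only matters if the workload actually saturates the bus.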
  • Clock LoopClock Loop Posts: 2,069
    edited 2015-06-07 16:07
    Rendering isn't a heavy I/O process; it's heavy on the CUDA cores (or CPU cores).
    The difference between PCIe 3.0 and 2.0 is nil for non-gaming.

    In gaming, 120 frames a second on new HD monitors in dynamically changing environments requires PCIe bandwidth.

    If you use some kind of viewport rendering in a 3D program like 3ds Max, you most likely won't get a chance to peg PCIe 2.0 bandwidth; your CPUs would peg first.

    I run dual GTX 760s in SLI on a 120fps 3D monitor, and all four of my cores running at 4.2GHz are pushed to the max in a game like Battlefield 3. Pumping lots of data through the PCIe bus requires a fast CPU as well.
    Today, the only need for more PCIe bandwidth or more GPU RAM comes from gaming. That's because games are massive, dynamically changing environments with on-the-fly shadow mapping and ray tracing.

    With GPU monitor utilities you can see how much GPU and GPU RAM your program uses.
    Many programs have recommended hardware lists.

    The way GPUs use their RAM when rendering is different from CPU-RAM rendering.
    Most rendering programs still use main program RAM and the main CPU to serve data to the GPU as it's needed; the CUDA cores then store their own data in GPU RAM, so 2GB is probably good enough as well.
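One such monitor utility is NVIDIA's own nvidia-smi, which can emit per-GPU memory figures as CSV. A small sketch that parses that output (the sample string below stands in for a live call, and the card names and numbers are made up for illustration):

```python
# Parse the CSV produced by:
#   nvidia-smi --query-gpu=name,memory.used,memory.total --format=csv,noheader,nounits
# SAMPLE is hypothetical output; in practice you'd capture it via subprocess.
SAMPLE = "GeForce GTX 960, 412, 4096\nGeForce GTX 760, 1380, 2048"

def parse_gpu_memory(csv_text):
    """Return a list of (name, used_mib, total_mib) tuples."""
    gpus = []
    for line in csv_text.strip().splitlines():
        name, used, total = (field.strip() for field in line.split(","))
        gpus.append((name, int(used), int(total)))
    return gpus

for name, used, total in parse_gpu_memory(SAMPLE):
    print(f"{name}: {used} / {total} MiB ({100 * used / total:.0f}% used)")
```

Watching those numbers while a typical scene renders is a quick way to tell whether 2GB of VRAM is actually enough for your workload.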
  • GordonMcCombGordonMcComb Posts: 3,366
    edited 2015-06-07 16:44
    Thanks for both replies.

    For Iray I've read that the entire rendered scene has to fit in the VRAM of a single card, or else the drivers will chuck the whole thing out of the GPU(s) and you're back to a CPU render. This is a feature/limitation of Iray. Most of my scenes would fit into 2GB, but I'd rather leave some room to grow. Likewise, if you have two NVIDIA cards, the scene has to fit into each of them. The process can share CUDA cores, though, so 1,000 on one card and 500 on another makes for 1,500 total.

    Yes, a Titan X would be great, but if I could afford one of those, I would buy a new PC, too! As it is, my T5500 has only one CPU right now, but I'll be adding another soon. On eBay they're under $200, with the RAM stack. I am a little worried about the power supply, though, with the beefier card and the second CPU.
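The fit-or-fall-back rule Gordon describes can be sketched as a quick planning check. This is a sketch of the behavior as described in this thread, not Iray's actual scheduler, and the card specs are illustrative:

```python
# Per the thread: the whole scene must fit in each participating card's
# VRAM or that card drops out; if no card fits it, rendering falls back
# to the CPU. CUDA cores pool across all cards that do hold the scene.
def iray_plan(scene_mb, cards):
    """cards: list of (name, vram_mb, cuda_cores). Returns (mode, total_cores)."""
    usable = [card for card in cards if card[1] >= scene_mb]
    if not usable:
        return ("CPU fallback", 0)
    return ("GPU", sum(cores for _, _, cores in usable))

cards = [("card A", 4096, 1000), ("card B", 2048, 500)]
print(iray_plan(1800, cards))  # fits both cards: ('GPU', 1500)
print(iray_plan(3000, cards))  # only card A fits: ('GPU', 1000)
print(iray_plan(6000, cards))  # fits nothing: ('CPU fallback', 0)
```

This makes the trade-off concrete: extra cards add cores but never add scene capacity, so the smallest VRAM you plan around is the one on the card you expect to carry the scene.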