



🚀 Elevate Your Computing Game!
The NVIDIA HPE Tesla P40 24GB Computational Accelerator is a powerful, certified refurbished GPU designed for high-performance computing. With 12 TFLOPS of peak single-precision performance, 3,840 CUDA cores, and 24GB of GDDR5 memory, it handles demanding compute workloads. Backed by a 90-day warranty and compatible with HPE ProLiant DL380 Gen9 and XL190r systems, it is a reliable choice for professionals seeking serious computational power.
| Attribute | Value |
| --- | --- |
| ASIN | B07C63HP4V |
| Best Sellers Rank | #21,068 in Amazon Renewed; #1,720 in Computer Graphics Cards; #6,570 in Renewed Computers & Accessories |
| Brand | NVIDIA |
| Compatible Devices | Desktop |
| Customer Reviews | 4.1 out of 5 stars (19 ratings) |
| Display Resolution Maximum | 3840x2160 |
| Graphics Card Interface | PCI-Express x16 |
| Graphics Card Ram | 24 GB |
| Graphics Coprocessor | NVIDIA Tesla P40 |
| Graphics Description | NVIDIA Tesla P40 |
| Graphics Ram Type | GDDR5 |
| Manufacturer | Nvidia |
| Model Name | Tesla P40 |
| UPC | 656541882799 |
| Video Output Interface | None (headless compute card) |
| Video Processor | NVIDIA |
D**S
An old GOAT with a new lease on life.
Just about the oldest mining-era card you could possibly use for inference, but by God, I dare you to find 24GB of VRAM for cheaper. Even though the FLOPS are garbage and you have to jank up your own cooling solution, it does support flash attention, and with three of them you'll comfortably be running quantized 70B models with room for context, on the cheap. As long as you like running GGUFs you'll have a good time. Very satisfied.
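For readers who want to try this reviewer's setup, here is a minimal sketch of a multi-GPU quantized load using llama-cpp-python (a recent build with CUDA support); the model filename, split ratios, and context size are illustrative assumptions, not values from the review:

```python
# Sketch: splitting a quantized 70B GGUF across three P40s with
# llama-cpp-python. All paths and values below are placeholders,
# not the reviewer's actual configuration.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-70b.Q4_K_M.gguf",  # hypothetical quantized GGUF file
    n_gpu_layers=-1,                       # offload every layer to the GPUs
    tensor_split=[1.0, 1.0, 1.0],          # spread weights evenly across 3 cards
    flash_attn=True,                       # flash attention, as the review notes
    n_ctx=8192,                            # leave VRAM headroom for context
)

out = llm("Q: What is a Tesla P40?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```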
M**.
Needs an aftermarket cooler to work
It does the job, but it's only about one-third the speed of a 3060. I like that it has 24GB of VRAM. I had to add aftermarket cooling to keep temperatures under control.
D**N
Great 24GB card for AI
Works great for AI. CUDA drivers are supported and installed easily. The only downside is that these cards are intended for servers, so there is no fan. The card seems to be a bit over half the speed of my 3090s in LM Studio.
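To put numbers on speed comparisons like these, a rough tokens-per-second measurement is enough. This sketch assumes the same llama-cpp-python setup as above, with a placeholder model path:

```python
# Rough tokens-per-second benchmark for comparing cards; the model
# path is a placeholder.
import time
from llama_cpp import Llama

llm = Llama(model_path="model.Q4_K_M.gguf", n_gpu_layers=-1, verbose=False)

start = time.perf_counter()
out = llm("Explain what a GPU does, in one paragraph.", max_tokens=128)
elapsed = time.perf_counter() - start

n = out["usage"]["completion_tokens"]
print(f"{n} tokens in {elapsed:.1f}s -> {n / elapsed:.1f} tok/s")
```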
Z**S
One was good, one died right after the return period.
The one that works is great; the other one gets "infoROM is corrupted at gpu". It worked just long enough to get past the return date.
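If you buy one of these, it is worth checking for this exact failure while still inside the return window. One way, assuming the NVIDIA driver and nvidia-smi are installed, is to scan the full query output for infoROM warnings:

```python
# Scan `nvidia-smi -q` output for infoROM lines so a corrupted card is
# caught early. Assumes the NVIDIA driver and nvidia-smi are installed.
import subprocess

report = subprocess.run(
    ["nvidia-smi", "-q"], capture_output=True, text=True, check=True
)

for line in (report.stdout + report.stderr).splitlines():
    if "inforom" in line.lower():
        print(line.strip())
```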
N**B
Works with llama-cpp-python
It's great for LLM/ChatGPT-style workloads and whatnot. While it's only 11.76 TFLOPS FP32, 24GB of VRAM is a big help for loading models. You'll need to pay a bit more for electricity: 250W! :) Works with PrivateGPT.
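On the 250W point: the draw can be watched at runtime (and capped with `nvidia-smi -pl` if your workload allows). A minimal sketch using the pynvml bindings (`pip install nvidia-ml-py`); device index 0 is an assumption:

```python
# Watch the card's power draw (the P40 is rated at 250 W) via NVML.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU; adjust as needed

for _ in range(5):
    watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # API reports milliwatts
    print(f"power draw: {watts:.0f} W")
    time.sleep(1)

pynvml.nvmlShutdown()
```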
E**Y
Perfecto
I originally thought it wasn't working or wasn't compatible with my server until I changed the cable. I ordered another cable online just to try again, and wow, it worked perfectly. Thank you.
A**R
It works for me.
I installed it in a workstation and configured the necessary software stack to do inference runs on LLM models such as Llama 2 7B.
A**R
Need to enable Above 4G Decoding in the BIOS
The card did not work when first installed; the motherboard reported a GPU fault. I found a post that said to enable Above 4G Decoding in the BIOS. Enabled that and the card started working.
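After flipping that BIOS setting, you can confirm the card actually enumerates. A minimal check with the same pynvml bindings as above:

```python
# Confirm the GPU enumerates after enabling Above 4G Decoding in the BIOS.
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    # nvmlDeviceGetName returns bytes on older pynvml versions, str on newer
    print(f"GPU {i}: {pynvml.nvmlDeviceGetName(handle)}")
pynvml.nvmlShutdown()
```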