Depends where you are plugging them in. Yes, they are older gen, but despite this, 8xV100 will outperform most of what you can buy for that price simply by way of total memory and NVLink bandwidth. If you want to practically run a local model that needs ~200GB of memory (Devstral-2-123B-Instruct-2512, for example, or GPT-OSS-120B with a long context window) without resorting to aggressive GGUF quants or memory swapping, you don't have many cheaper options. You can also run several models in parallel on one node to get some additional throughput for bulk jobs.
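
If it helps, here's a minimal sketch of what the single-big-model case looks like, assuming vLLM as the serving stack (my choice for illustration, not the only option) and a made-up HF repo id for the model mentioned above:

```python
from vllm import LLM, SamplingParams

# Shard one large model across all 8 V100s via tensor parallelism.
llm = LLM(
    model="mistralai/Devstral-2-123B-Instruct-2512",  # hypothetical repo id
    tensor_parallel_size=8,       # split weights across the 8 GPUs over NVLink
    dtype="float16",              # V100 (compute 7.0) has no bfloat16 support
    gpu_memory_utilization=0.90,  # leave headroom for activations / KV cache
)

outputs = llm.generate(
    ["Explain NVLink in one sentence."],
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```

For the bulk-job case, the usual trick is to split the node with `CUDA_VISIBLE_DEVICES`: e.g. two 4-GPU replicas of a smaller model, each behind its own server port, with requests round-robined between them.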