We need to define what the benchmark for "decent" is. There are plenty of smaller models and workloads that, for a variety of reasons, benefit from running locally: audio transcription, much of image analysis, and so on. Most of it doesn't require a 250 W, $2000 GPU. You aren't going to run a large LLM on this, but loads of other stuff will work great.