Programming graphics, especially 2D stuff, is far more ergonomic and convenient in your native language, running directly on the CPU. If you can get away with it performance-wise, there's really no incentive to incur the myriad obnoxious bullshit inherent in GPU programming.
But with the advent of high-DPI displays, it's become problematic to do even simple 2D/UI rendering on the CPU, just because of the enormous quantity of pixels.
Once you pull in GPU support, you're stuck having to pick a backend (GL/Vulkan/D3D/Metal) or some compatibility layer to make some or all of them work. You have to write shaders, and you have to constantly move state in and out of the GPU across the GPU:CPU API boundary. It's just a total clusterfuck best avoided if possible.
I'm not familiar with modern game engines, but I'd be very surprised if any of them managed to eliminate the utterly unnatural reality of writing shaders vs. writing classical 2D rendering algorithms operating on a linear buffer of pixels in memory.
For concurrency reasons, shaders logically run on a single pixel. Gone are your longstanding algorithms for doing simple things like Bresenham line drawing, or something as simple as drawing a filled box like this:
    for (int y = box.y; y < box.y + box.h; y++)
        for (int x = box.x; x < box.x + box.w; x++)
            FB[y * FB_STRIDE + x] = box.color;
Nope, not happening in a shader. Every shader invocation basically executes in isolation on a single pixel, and you have to operate from a sort of dead-reckoning perspective. No more sequential loops iterating over rows and columns, the style that literally decades of graphics programming publications use to explain how to do things. Not to mention how natural it is to think that way, since it closely resembles drawing on paper.
In shaders you often end up doing things that feel utterly absurd in terms of overhead because of this "you run on an arbitrary pixel" perspective. Oftentimes you're writing some kind of distance function, where previously you would have written a loop iterating across rows and columns, advancing some state as you step through the pixels. In a shader it's as if the paper were covered with thousands of pencils that don't move, and the shader program just determines what color each pencil should be based on its location.
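To make that concrete, here is a minimal sketch of the inverted model in plain C rather than an actual shading language (shade_pixel, Box, draw_box, FB and FB_STRIDE are illustrative names, not any particular API). On a GPU, something like shade_pixel is what the fragment shader does, run for every pixel in parallel; on the CPU you still have to drive it with an outer loop.

    #include <stdint.h>

    typedef struct { int x, y, w, h; uint32_t color; } Box;

    /* The "stationary pencil": given its own coordinate, each pixel decides
       its color in isolation.  This is the role a fragment shader plays. */
    static uint32_t shade_pixel(int px, int py, Box box, uint32_t background)
    {
        if (px >= box.x && px < box.x + box.w &&
            py >= box.y && py < box.y + box.h)
            return box.color;
        return background;
    }

    /* CPU driver loop; a GPU effectively runs shade_pixel for all pixels at once. */
    static void draw_box(uint32_t *FB, int FB_STRIDE, int width, int height, Box box)
    {
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                FB[y * FB_STRIDE + x] =
                    shade_pixel(x, y, box, FB[y * FB_STRIDE + x]);
    }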
GPU programming is plain annoying, even without the GPU API fragmentation clusterfuck. Especially if you've been writing 2D stuff on the CPU for decades.
What you've described is a blit operation (copying a block of pixels from a source such as a texture, or in your case a solid color). Probably you wouldn't write this out but would write:
    blit(box.x, box.y, box.w, box.h, RED)
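On the CPU such a helper can stay a thin wrapper around the same loop. A minimal sketch, reusing FB and FB_STRIDE from the earlier snippet (their declarations and the RED value here are assumptions) and skipping clipping and bounds checks:

    #include <stdint.h>

    extern uint32_t *FB;          /* framebuffer from the earlier snippet */
    extern int FB_STRIDE;         /* pixels per row (assumed) */
    #define RED 0xFFFF0000u       /* assumed 32-bit ARGB solid red */

    /* Solid-color fill of a w*h block at (x, y); no clipping for brevity. */
    static void blit(int x, int y, int w, int h, uint32_t color)
    {
        for (int j = y; j < y + h; j++)
            for (int i = x; i < x + w; i++)
                FB[j * FB_STRIDE + i] = color;
    }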
In shader-land, this is equivalent to rendering a rectangle with a texture or a solid color as the source. Sure, it's more involved to implement this abstraction, since you need a mesh, you need to write a small shader, you have to be familiar with the render pipeline state, etc., but it also gives you some things trivially, like anti-aliasing and scaling support for the blit operation.
Many libraries, like pixi.js, already implement this stuff on top of WebGL.
While I don't doubt the creative possibilities of working with pixels directly, once you figure out how GPUs work, a lot of 2D stuff is actually pretty easy.
Both of the sibling comments describe quadratic Bezier curves (used often in font rendering because TrueType only supports quadratics), while graphics APIs and CFF font outlines often mandate support for cubic Beziers. Cubics are a lot more challenging to build a closed-form solution for, and they can also self-intersect, which complicates things further.
Most production renderers, sometimes even ones on the CPU, approximate cubic Bezier curves with a number of quadratic Bezier curves. This is a preprocessing step which is typically done on the CPU. While it could be done on the GPU, doing it in the pixel shader would be really wasteful.
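As a rough illustration of that preprocessing step (a sketch, not taken from any particular renderer): one common scheme approximates a cubic with a single quadratic whose control point is (3(P1 + P2) - P0 - P3) / 4, and splits the cubic in half when that guess would deviate too much. The names and tolerance handling below are made up for the example.

    #include <math.h>
    #include <stdio.h>

    typedef struct { double x, y; } Pt;

    static Pt lerp(Pt a, Pt b, double t)
    {
        return (Pt){ a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t };
    }

    /* Emit one quadratic segment (start, control, end); a real renderer would
       append it to an output buffer instead of printing. */
    static void emit_quad(Pt s, Pt c, Pt e)
    {
        printf("quad (%g,%g) (%g,%g) (%g,%g)\n", s.x, s.y, c.x, c.y, e.x, e.y);
    }

    /* Approximate the cubic p0..p3 with quadratics.  A single quadratic with
       control point (3(p1+p2) - p0 - p3)/4 is a standard degree-reduction guess;
       a standard bound for its worst-case error is sqrt(3)/36 * |p3 - 3p2 + 3p1 - p0|,
       so we split the cubic at t = 0.5 (de Casteljau) until that bound is small. */
    static void cubic_to_quads(Pt p0, Pt p1, Pt p2, Pt p3, double tol, int depth)
    {
        double ex = p3.x - 3*p2.x + 3*p1.x - p0.x;
        double ey = p3.y - 3*p2.y + 3*p1.y - p0.y;
        double err = sqrt(ex*ex + ey*ey) * (sqrt(3.0) / 36.0);

        if (err <= tol || depth >= 16) {
            Pt c = { (3*(p1.x + p2.x) - p0.x - p3.x) / 4.0,
                     (3*(p1.y + p2.y) - p0.y - p3.y) / 4.0 };
            emit_quad(p0, c, p3);
            return;
        }

        /* de Casteljau split at t = 0.5 */
        Pt p01 = lerp(p0, p1, 0.5), p12 = lerp(p1, p2, 0.5), p23 = lerp(p2, p3, 0.5);
        Pt p012 = lerp(p01, p12, 0.5), p123 = lerp(p12, p23, 0.5);
        Pt mid = lerp(p012, p123, 0.5);

        cubic_to_quads(p0, p01, p012, mid, tol, depth + 1);
        cubic_to_quads(mid, p123, p23, p3, tol, depth + 1);
    }

The subdivision converges quickly because the third-difference term, and with it the error bound, shrinks by a factor of eight with every split.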