Your example is not a problem with using an efficient execution model; it is a problem with django/wsgi. In fact, your example uses the exact same model as apache, it just sucks at it and makes you statically define the number of workers per app. You can easily run multiple web apps in a single application server, and the resource limits will be shared just like in a typical apache+php setup.
Note that in environments where this sort of thing is trivial to do (java for example), virtually nobody does it, preferring to run separate servers per application anyways.
The way I understand it, you either have a pool of interpreters per app or per set of apps. In the second case, life is easy: a simple system can allocate interpreters to apps on demand. In the first case, you need a more complex solution. Perhaps the process manager (in this case apache) could implement such a system, but so far it has not.
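The "on demand" case above can be read as one shared pool of workers that any app borrows from, instead of a fixed worker count reserved per app. A toy sketch of that idea in Go (all names and the app labels are invented for illustration, not any real server's API):

```go
package main

import "fmt"

// Pool is a single shared set of workers that serves requests for any
// app, rather than a statically sized pool per app.
type Pool struct {
	workers chan int
}

// NewPool creates n workers, all idle and available to any app.
func NewPool(n int) *Pool {
	p := &Pool{workers: make(chan int, n)}
	for i := 0; i < n; i++ {
		p.workers <- i
	}
	return p
}

// Handle borrows a free worker for one request of any app, then
// returns it to the shared pool when the request is done.
func (p *Pool) Handle(app string) string {
	w := <-p.workers // blocks until some worker is free
	defer func() { p.workers <- w }()
	return fmt.Sprintf("app %s served by worker %d", app, w)
}

func main() {
	p := NewPool(2)
	// Different apps draw from the same shared pool on demand.
	fmt.Println(p.Handle("blog"))
	fmt.Println(p.Handle("shop"))
}
```

The point of the sketch is just that capacity is allocated per request, not pre-partitioned per app.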
There's no need for sets of interpreters at all, that's what I'm saying. Python being worse at this than PHP doesn't mean PHP is good at it. Look at Go, for example: there's one app server running as many apps as you want.