Any given model has less fidelity than reality. An atlas map of the US has less detail than the actual terrain. The Planck scales represent the finest resolution our current models of physics can describe. We can’t model shorter timeframes or smaller sizes, so we can’t predict what happens at scales that small. Building equipment that can measure something so small is difficult too… how do you measure something when you don’t know what to look for?
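For a sense of just how small these scales are, the Planck length and Planck time can be computed directly from the fundamental constants (a quick sketch in Python; the CODATA values below are standard, and the formulas are the usual definitions):

```python
import math

# Fundamental constants (CODATA values)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 / (kg * s^2)
c = 2.99792458e8        # speed of light, m/s

# Planck length: sqrt(hbar * G / c^3)
planck_length = math.sqrt(hbar * G / c**3)

# Planck time: sqrt(hbar * G / c^5)
planck_time = math.sqrt(hbar * G / c**5)

print(f"Planck length ~ {planck_length:.3e} m")   # ~ 1.616e-35 m
print(f"Planck time   ~ {planck_time:.3e} s")     # ~ 5.391e-44 s
```

For comparison, a proton is about 10⁻¹⁵ m across, so the Planck length is roughly twenty orders of magnitude smaller than anything we can probe directly.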
It may be that one day we come up with a more refined model. But as of today, it’s not clear how that would happen or if it’s even possible.
Imagine going from 4K to 8K to 16K resolution and then beyond. At some point a “pixel” representing part of an image doesn’t make sense anymore, but what do you use instead? Nobody currently knows.
It may also be that "space" and "time" are emergent properties, much like an "apple" is "just" a description of a particular conglomeration of molecules. If we get past Planck scales it may turn out that there are no such things as "space" and "time" and the Planck scales are irrelevant. We currently don't know, but there _are_ a few theoretical frameworks that have yet to be empirically verified, like string theory.