One of the things not covered here is how to deal with versioning.
By default a monorepo will give you $current and nothing else.
A monorepo is not a bad idea, but you should think about either preventing breaking changes in some dependency from killing the build globally, or having some sort of artefact store that allows versioned libraries (both have problems; you'll need to work out which is better for you).
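A minimal sketch of the versioned-artefact option, assuming a Python shop with an internal package index (the URL and package name are made up):

    # consumers pin a published, versioned build instead of whatever is at HEAD
    pip install --index-url https://artifacts.example.com/simple mylib==1.4.2

Every pin like this is deferred integration work, which is exactly the trade-off below.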
I think a key idea often associated with the use of a monorepo is to encourage developers to do the integration/mitigation work at the point of change, rather than creating lots of integration debt in the form of versions (however you manage them).
You need to look at your development model as a whole and decide whether the happy path incentivises good or bad development practices.
Do you want to incentivise the creation of technical debt with a myriad of versioned dependencies, or do you want to incentivise designing code to be evolvable and reusable?
I worked at a startup with a "monorepo" (C++, CUDA and Python); it worked well and wasn't too hard to manage. Once someone bit the bullet and wrote some robust Bazel spells, it was brilliant to use, and multi-platform too.
Worked at a FAANG with a monorepo, and everything was partially broken most of the time. It's trivial to bring in dependencies, which is great: super fast re-use.
The problem is, it's trivial to add dependencies. That means that bringing in a library to manage messages somehow drags in a large amount of CUDA code as well.
A basic Python program would end up with something like >10k build items to go through on each build.
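If the build is Bazel, you can see this directly (the target labels here are made up, but the query functions are standard):

    # how big is the closure for this one binary?
    bazel query 'deps(//app:main)' | wc -l

    # why does a messaging tool depend on CUDA at all?
    bazel query 'somepath(//app:main, //third_party/cuda:runtime)'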
Great point - it's one of my pet peeves - automatic chained dependency management at the what-do-I-need-to-build-everything level (which is not the same as what I need for my particular use).
I think dependency management should be manual - make it intentional - and yes, slightly harder.
If you have a statically typed language (reflection-like mechanisms aside), you can make the compiler do the work of determining whether the right dependencies are there, and you can massively cut down the dependency trees.
i.e. there is a mismatch between the semantics of automatic dependency trees and what you actually need when you import.
So if I want to use library B from module A, I don't need the dependencies of B such that the whole of B compiles; I just need the dependencies of B that enable my very specific use of B.
So if you add module B to your project and run your compiler, it should tell you what further dependencies you actually need - rather than assuming you need all of C because B brought it in, and because you brought in C you also need D and E, etc.
If you don't have a compiler (or have dynamic loading anyway), then your test for finding missing dependencies becomes "does it run" rather than "does it compile".
Given you only add dependencies once, I don't think it's a big deal to force developers to spend 5 mins determining exactly what they need rather than importing the world.
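To make the mismatch concrete, here's a toy Python sketch (the module names and graph are invented): depending on "all of B" drags in B's GPU code, while a use-driven closure from the one piece you actually import does not.

    # Hypothetical dependency graph: module -> modules it needs directly.
    DEPS = {
        "B.messages": {"C"},        # the part of B we actually use
        "B.gpu_kernels": {"CUDA"},  # a part of B we never touch
        "C": {"D", "E"},
    }

    def closure(roots):
        """Transitive closure of a set of modules over DEPS."""
        seen, stack = set(), list(roots)
        while stack:
            mod = stack.pop()
            if mod in seen:
                continue
            seen.add(mod)
            stack.extend(DEPS.get(mod, ()))
        return seen

    # "Automatic" behaviour: depending on B pulls in everything B needs to build.
    print(sorted(closure({"B.messages", "B.gpu_kernels"})))
    # ['B.gpu_kernels', 'B.messages', 'C', 'CUDA', 'D', 'E']

    # Use-driven behaviour: only what my specific import actually reaches.
    print(sorted(closure({"B.messages"})))
    # ['B.messages', 'C', 'D', 'E']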
I have been approaching this by breaking a module out into its own repo when the time comes for that (enough resources to dedicate to maintaining it independently, having tests, and so forth).
When the folks working on the monorepo really need to slam through a change in the now-independent repo, we can use git submodules.
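Roughly the workflow (the URL and path are made up; the commands are standard git):

    # in the monorepo: pin the broken-out library at a known commit
    git submodule add https://example.com/mylib.git third_party/mylib
    git commit -m "Track mylib as a submodule"

    # later: pull its latest changes and re-pin
    git submodule update --remote third_party/mylib
    git commit -am "Bump mylib"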