This is about the impedance mismatch between building everything from source and using distro package management. Mixing and matching is painful, because package managers correlate a lot of package metadata in order to provide smooth integration of multiple interdependent packages.
Building everything from source "by hand" quickly runs into scaling problems, and admins start to roll their own "package databases" in order to cope. Fully-baked package management becomes the solution. The problem here is that distro packages are not the latest development snapshots (usually, and presumably, for good reasons). Still, there are times when newer-than-distro-repo software is required.
The author's Pandoc case may be one of those, but I doubt it. RHEL (and CentOS) packages should never (by definition) include showstopper bugs. It's more likely that bugfixes have been backported to the older, stable version. Still, it's possible that the author is correct, and the official package does not meet requirements or is otherwise unusable.
In that case, the best path forward is to build or reuse an updated package, rather than installing from development sources "by hand", e.g. `make install`. It's likely that someone else has already had this problem and has built an updated package, which you could then install, but let's assume not.
Now we're into the wonderful world of package building. We can avoid some of the author's pain here, but mostly there is a tradeoff for having to learn and deal with the packaging pipeline. The upshot is that package building may be scripted and synched with development sources, and you are keeping the metadata inside the packaging system where it belongs and can be managed (think: future updates).
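To make that concrete, here's a minimal, hypothetical RPM spec sketch for this kind of local rebuild. Everything in it (the version, URL, file list) is illustrative, and the build steps are generic autotools placeholders; a real pandoc spec would drive the Haskell/cabal toolchain instead. The point is that the version, sources, dependencies, and file manifest all live as metadata the package system can manage:

```
# Hypothetical spec for a newer-than-distro local rebuild.
# Version/URL/paths are illustrative, not an official package.
Name:           pandoc
Version:        1.13.2
Release:        1%{?dist}
Summary:        Universal document converter
License:        GPLv2+
URL:            http://pandoc.org
Source0:        %{name}-%{version}.tar.gz

%description
Local rebuild of pandoc, newer than the distro repository version.

%prep
%setup -q

%build
# Placeholder autotools flow; substitute the project's real build steps.
%configure
make %{?_smp_mflags}

%install
make install DESTDIR=%{buildroot}

%files
%{_bindir}/pandoc
```

Feed that to `rpmbuild -ba` and you get a .rpm that installs, upgrades, and uninstalls cleanly alongside everything else the package manager tracks.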
I can't help but compare this to OS X, where most applications are bundled together with any dependencies that you can't safely assume will exist because they aren't included in the OS.
Yes, it does waste some disk space. But the $5 worth of disk space that this approach costs me is more than justified by the $5,000 worth of gray hairs it saves me.
My only lament is that things still fall back to the annoying old-fashioned way of doing it whenever I need to install software that's more Unix than OS X. Which, given I'm a programmer, is still pretty much all of it.
Sure, and Windows is much the same wrt bundling. The tradeoff is that now you've multiplied the responsibility of library updates across everyone that bundled it. You probably won't care if simple library version updates are missed, if the app itself doesn't care. You might care more about bugs that require every app to update, especially if it's a security bug.
Sure there's a tradeoff, but for most people it's really rare to use software that isn't being actively maintained. So let the maintainers update their dependencies when there's a security flaw, the auto-update will pull in the changes, and I'll still benefit from OS X's 30-second install process.
The catkin and bloom tools developed for ROS (Robot Operating System) go a long way toward mitigating this pain: they make it much, much easier to maintain a source workspace of multiple federated packages, build them all together with dependency resolution and a single command invocation, and switch between the released (deb) and source version of a particular package.
Unfortunately, there isn't a good page which explains this all for the benefit of someone who doesn't care about ROS or robotics specifically, or who doesn't care about how catkin compares to the legacy system it replaced. But this is a start: http://wiki.ros.org/catkin/conceptual_overview
I think you're understating how hard it is for a beginner to create a spec for an RPM. As a n00b I recently did this for Sikuli on CentOS 6. The macros in the spec file gave me the most trouble, with the build process a close second, until I realized that specs were designed for maintainers, not developers.
Normally, someone has already written a spec that you can modify for your use case.
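On an RPM-based system the usual flow, assuming a source repo actually carries the package (paths and package name here are illustrative), looks roughly like:

```shell
# Requires the yum-utils and rpm-build packages.
# Grab the existing source package, tweak the spec, and rebuild.
yumdownloader --source pandoc        # fetch pandoc-*.src.rpm from the repos
rpm -ivh pandoc-*.src.rpm            # typically unpacks into ~/rpmbuild/{SPECS,SOURCES}
# ...edit ~/rpmbuild/SPECS/pandoc.spec: bump Version, drop stale patches...
rpmbuild -ba ~/rpmbuild/SPECS/pandoc.spec
```

Starting from the maintainer's spec means you inherit their dependency list and macros instead of reverse-engineering them yourself.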
Given the number of docker projects (looking at you redis) that build from source, docker seems to be moving into package management.
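The pattern looks something like this (the redis version and URL are illustrative); in effect the Dockerfile becomes the package recipe, with the image as the "package":

```dockerfile
# Sketch of the build-from-source pattern many images use.
FROM debian:jessie
RUN apt-get update && apt-get install -y build-essential curl \
 && curl -fsSL http://download.redis.io/releases/redis-2.8.19.tar.gz | tar xz \
 && make -C redis-2.8.19 && make -C redis-2.8.19 install \
 && rm -rf redis-2.8.19
CMD ["redis-server"]
```

You get reproducible builds and isolation, but the same bundling tradeoff discussed above: every image author is now responsible for rebuilding when their bundled dependency has a security fix.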
The real problem is that all of the major package managers require root, and you usually can't set the install prefix to a directory other than '/' (for example '~/.local/') without using chroot (which itself requires root).
It is true that some packages DO actually require root (like obviously the kernel), but most really don't.
A package manager that works like this would be portable to any distro. I think Nix and Guix are supposed to do this.
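The per-user prefix idea itself is nothing exotic; here's a trivial demonstration that "installs" a script under ~/.local without root. Real builds do the same thing via `./configure --prefix="$HOME/.local"` (or cmake's `-DCMAKE_INSTALL_PREFIX`); the `hello` script here is just a stand-in for a built binary:

```shell
# "Install" a trivial script under ~/.local instead of /usr/local,
# no root required.
mkdir -p "$HOME/.local/bin"
printf '#!/bin/sh\necho hello from userland\n' > "$HOME/.local/bin/hello"
chmod +x "$HOME/.local/bin/hello"
# Put the per-user bin directory on PATH so the "installed" tool resolves:
export PATH="$HOME/.local/bin:$PATH"
hello    # → hello from userland
```

What Nix and Guix add on top of this is exactly the metadata layer the thread is about: per-user profiles with real dependency tracking, rather than a bare prefix.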
Just to reinforce this: RHEL/CentOS version numbers cannot be directly compared with the upstream project's version numbers. The distro number is the upstream version that was picked as a base, and bug fixes are then backported to that version. In this case it is a single patch to deal with TeX support in CentOS 6, but for many other packages it can be dozens of patches to get a truly stable version.