
Does anyone else think this reflects badly on Python? The fact that the author has to use a bunch of different tools to manage Python versions/projects is intimidating.

I don't say this out of negativity for the sake of negativity. Earlier today, I was trying to resurrect an old Python project that was using pipenv.

"pipenv install" gave me an error about accepting 1 argument, but 3 were provided.

Then I switched to Poetry. Poetry kept detecting my Python 2 installation, but not my Python 3 installation. It seemed like I had to use pyenv, which I didn't want, since that's yet another tool to install and set up on different machines.

I gave up and started rewriting the project (web scraper) in Node.js with Puppeteer.



Granted, I'm just a scientific programmer, but my workplace has a full blown software team maintaining a multi million line codebase. That codebase is rebuilt every night, and as I understand it, you're not allowed to submit a change that breaks anything. And they have people whose job is to keep their tools working.

What people casually think of as "Python" is really a huge dynamic ecosystem of packages. Imagine that there are 60k packages, each with 1k lines of code... that's a 60 million line codebase, and it can't be checked for breaking changes. Short of continually testing your code against the latest versions of packages, you're going to hit some bumps if you haul old code out of the vault and try to fire it up on a new system.

I don't know how Javascript developers handle this.

I handle it by running Python inside an isolated environment -- WinPython does this for me -- and occasionally having to fix something if a new version of a package causes a breaking change.

The drawback of my method is deployment -- there is no small nugget of code that I can confidently share with someone. They have to install their own environment for running my stuff, or take their chances, which usually ends badly.


> you're going to hit some bumps if you haul old code out of the vault and try to fire it up on a new system.

Most package managers have lockfiles that allow for some degree of determinism. Of course if a library introduces breaking changes you're going to have to rewrite, but only when explicitly upgrading the dependency.
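In plain pip terms, the closest thing is pinning exact versions in a requirements.txt, typically captured with pip freeze; a hypothetical example:

```text
# requirements.txt -- exact pins act as a hand-rolled lockfile
# (package names and versions are illustrative)
requests==2.22.0
beautifulsoup4==4.8.1
lxml==4.4.1
```

Unlike a real lockfile, this doesn't pin transitive dependencies unless you freeze the whole environment, which is part of why Pipfile.lock and poetry.lock exist.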


You should try Dockerizing your project. Then other people just have to type docker run and it’ll work everywhere.
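For a typical Python project that could be a small Dockerfile along these lines (the file names, and the assumption that a requirements.txt and main.py exist, are illustrative):

```dockerfile
# Build an image with the project's dependencies baked in
FROM python:3.8-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "main.py"]
```

Then `docker build -t myproject .` once, and `docker run myproject` anywhere Docker runs.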


Well, in scientific projects you are often working on HPC platforms, where Docker doesn't exist, and there are many reasons for that (it is not just technical).


Singularity may offer a way forward (though I guess you are aware of this already)


And nowadays most HPC centers run on Spack or EasyBuild anyway. I adopted EasyBuild two years ago, have since switched to Spack, and see that our local HPC center is also using it for their new modules...


When I am working on large projects like this, with the myriad of problems that Python and its friends bring us, I tend to see whether projects like "nuitka" can solve them. For example, if you need to scale your jobs, just imagine how many openat syscalls a single Python script causes at startup, and then multiply that by the number of cores your job will have. Nuitka solves most of those problems by packaging everything together; it is almost like statically linking your code to run on the cluster.


I love Python. Throughout my life I tried learning many languages, and Python is the only one that really stuck, and was able to do useful things. Learning Python changed my life, in 5 years my salary more than doubled, and for the last 5 years I've been a full time developer. A coworker likes to say that I think in Python.

That said, I 100% agree. I don't have the answer, except that I wish that there was one official answer, for developers and deployment that was easy to explain to beginners.

For what it's worth, I've been using pipenv for over a year and it works well enough. I think npm's approach is better, but not perfect. I've heard good things about yarn. I know CPAN is the grandfather of them all. I've barely used gems, but they seem like magic, and Go, get your repo URI out of my code, please and thank you. :-) All fun aside, which languages have it right? And is there maybe a way to come up with a unified language dependency manager?


Basically this. Every tool does the same thing slightly differently and all run into different variations of the exact same problem. Is your interpreter in your path? Do you have the right version of your modules installed?

Technically, OP didn’t have to use pipenv for anything except knowing which versions of each dependency to install (Pipfile.lock) with good old pip. Those other tools are mere conveniences. Giving up on a language for that...that’s drastic.


> and is there maybe a way to come up with a unified language dependency manager?

For interpreter/compiler version management you can use asdf [0]. It works for all popular programming languages. You can use it to replace tools such as pyenv, nvm, gvm, etc.

[0]: https://asdf-vm.com/#/
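asdf picks versions from a `.tool-versions` file in the project root (or an ancestor directory), which can be committed to version control; a sketch with illustrative versions:

```text
python 3.8.0
nodejs 10.16.3
```

Running `asdf install` in that directory then fetches the listed versions.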


There is no link to asdf ( "[0]" ) in your comment


Whoops.. There is one now :)


This is why I manage every nontrivial project I do nowadays with Nix (Haskell, Go, Python, C & C++, bash ..anything)

Everything is pinned to exact source revisions. You can be relatively sure to be able to git clone and nix-shell and be off to the races.

You can even go the extra mile and provide working editor integration in your nix-shell (especially easy with emacs). So you can enable anyone to git clone, nix-shell, and open a working editor.

The biggest downside is becoming proficient in Nix isn't easy or even straightforward.


> Everything is pinned to exact source revisions.

While you're here, how do you do this with nix-pkgs?

I looked into using NixOS for Node.js deployments recently, and was amazed to find that the versions of node in the nixpkgs repo are just pinned to X.Y.0 releases, with no discernible way to update to a bugfix release after .0, so I don't see how this could possibly be used for production deployments?

https://nixos.org/nixos/packages.html?channel=nixpkgs-unstab...

What I really want is to be able to tell nix to give me nodejs at version 10.16.3 and have it do the right thing every time.

I'm happy to be wrong on any of this.


While the Nix package manager supports the coexistence of multiple versions of a package, the nixpkgs package collection does not contain every single version of every package. However, it does make it very easy to refer to past versions of a package by importing package specifications from an older version of nixpkgs.

I think this is a reasonable choice, considering that the main purpose of nixpkgs is to provide packages for the NixOS distribution. It's impossible to actively maintain every single version of every package.


I think it's a coincidence. Check the nodejs commit history: https://github.com/NixOS/nixpkgs/commits/f3282c8d1e0ce6ba5d9... there are minor versions there as well.


Right, so if I'm understanding correctly, I'd need to pin the whole nix-pkgs system to the particular git revision which contains the version I need?


That's it. You can also cherry-pick different software from different git revisions.


Do you have any public examples of doing this? Or docs?

Closest I found was to pin the nix-channel entirely, but not different components to different versions.


You could do something like:

    let pinnedPkgs = import (builtins.fetchTarball {...}) {};
        node = pinnedPkgs.nodejs-10_x;
    in node

to refer to a specific version of node.

https://nixos.wiki/wiki/FAQ/Pinning_Nixpkgs


Thanks that helps.

I've mostly stuck to nix-shell -p foo bar baz to set up my environments. It's holding me back and I should pull the trigger on a shell.nix file; just gotta learn the language.
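For reference, a pinned shell.nix might look roughly like this (the `<rev>` and `<sha256>` are placeholders you'd fill in with a real nixpkgs commit and its hash):

```nix
# shell.nix -- pin nixpkgs to an exact revision for reproducible shells
let
  pinnedPkgs = import (builtins.fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz";
    sha256 = "<sha256>";
  }) {};
in pinnedPkgs.mkShell {
  buildInputs = [ pinnedPkgs.nodejs-10_x ];
}
```

Then a plain `nix-shell` in the project directory replaces the `-p foo bar baz` incantation.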


Agreed. Super painful to learn and still learning, but once it clicks, it's totally worth it. Amazing tool.

Occasionally I still run into weird non-deterministic issues with builds, though: https://github.com/NixOS/nixpkgs/issues/71178

You'd think "pinning" a python version + channel would avoid this.


How do you specify Python dependencies and their versions in Nix?


A quick and dirty way would be:

$ nix-shell -p "python37.withPackages (ps: [ ps.pandas ps.statsmodels ])"

This would drop you into a shell with Python 3.7, pandas, and statsmodels installed.

For more complex use cases, you should specify your dependencies in a shell.nix file or write your own nix package.
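A minimal shell.nix along those lines might look like this (the package names are illustrative):

```nix
# shell.nix -- a Python 3.7 environment with pandas and statsmodels
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  buildInputs = [
    (pkgs.python37.withPackages (ps: [ ps.pandas ps.statsmodels ]))
  ];
}
```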


And then to expand, if you wish to pin one of those packages to a specific rev, the easiest way is to create an "overlay" (it's a Nix design pattern) that you apply to nixpkgs to override whatever version is in your nixpkgs version.


Ditto, roughly.


The big gap is management of the full dependency tree. With yarn I get a yarn.lock that ensures I'll have the exact same version of everything, with no unexpected changes, every time I run yarn install. I get the same thing in the Rust world with Cargo.

In Python it's a mess. Some packages specify their deps in setup.py; some in a requirements file, which may or may not be read in by their setup.py. It's not rare to have to have multiple 'pip install' commands to get a working environment, especially when installing fairly boutique extensions to frameworks like Django.

There just isn't a holistic, opinionated approach to specifying dependencies, especially at a project level. Which leaves unexpected upgrades of dependencies (occasionally leading to regressions) as a reality for Python devs.


There are two new tools in the Python ecosystem, which try to fill the gap left by cargo, npm, yarn & co.:

One is pipenv [0], which works similar to yarn & co. It uses Pipfile/Pipfile.lock to define and lock dependencies. Pipenv has a major flaw: It can't be used to publish packages on pypi.org (you still need twine & setup.py for that). It's also known for being slow and somewhat buggy. Despite all that pipenv is an "official" tool maintained by the "Python Packaging Authority".

The other one is poetry [1], which works exactly like yarn & co. It uses "pyproject.toml" to specify dependencies and "poetry.lock" to lock them. Poetry does most of the things "right", but it's still an underdog compared to pipenv.

Neither tool has fully matured yet, hence the many complaints.

[0]: https://github.com/pypa/pipenv

[1]: https://github.com/sdispater/poetry
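For a feel of the poetry side, here is a minimal pyproject.toml roughly like what `poetry init` generates (the name, author, and version ranges are illustrative):

```toml
[tool.poetry]
name = "my-project"
version = "0.1.0"
description = ""
authors = ["Someone <someone@example.com>"]

[tool.poetry.dependencies]
python = "^3.7"
requests = "^2.22"

[tool.poetry.dev-dependencies]
pytest = "^5.2"

[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"
```

`poetry add` and `poetry lock` then maintain a poetry.lock next to it.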


There's good news on that front: the Python Packaging Working Group has secured >$400K in grants to improve pip's dependency resolver: https://twitter.com/di_codes/status/1193980331004743680


I've worked on tons of small to medium-small Python projects over the years where I didn't fix dependency versions at all, not even major versions, just a requirements.txt with a list of package names (usually it's a list of maybe at most ten well-known libraries, resulting in at most twenty actual packages pulled from PyPI). Come back three years later, pull the latest versions of everything, code still works fine.

Now try that with JavaScript or Rust. If you don't fix versions, come back three months later and compatibility is usually fucked up beyond all recognition.

Some languages embraced better dependency locking because they absolutely couldn't not solve the problem.


I’ve only recently started working with Python, and I’ve already been bitten by TensorFlow v1 and v2 packages having different APIs, so the viability of that approach will depend heavily on which packages you use.

However, in SemVer a major version bump is how breaking changes are signaled, so seeing a v1-to-v2 change coupled with having to do some work to fix breakage is just expected, and may well be necessary for a project to make progress.


ML, pydata, etc. are really worlds apart from the more traditional Python ecosystems; Guido himself admitted a couple of years back that he had no idea about those silos and sat down with some leaders of those communities to hear their needs. Those communities tend to have their own recommendations and best practices.

My very brief exposure to TF seems to suggest that dev environments surrounding TF are way harder to set up than my “list of bare packages in a requirements.txt file” scenario which is sufficient for a lot of more traditional endeavors.


> With yarn I can get a yarn.lock

pipenv generates a Pipfile.lock - if it can, that is. I've used it primarily for Airflow, and some packages within Airflow have incompatible version ranges for the same dependency, which means it can't generate the lock file.


Oh, thanks - I'll give it a whirl!


Yes. As someone who has never dove deep into python, but has had some contact with it: the package manager ecosystem is the #1 thing keeping me away from it.

npm sucks and all, but at least it just works and doesn't get in my way as much.


Anecdotally, I've had more Node packages with native code in them fail to build for me when installed via npm, than Python packages with native code fail when installed via pip. That whole node-gyp thing is a huge mess.


> npm sucks and all, but at least it just works and doesn't get in my way as much.

I've used many package managers: pip, gem, go, npm, composer, etc. npm is the only one I have recent memories of having to kill, and the only one that makes the fans spin up (well, okay, most C/C++ builds do that too...).

Quite frankly, I'm surprised by what I am seeing about Python here. I have never been into the new stuff (pipenv, poetry, pipx, etc.); maybe that's where the bad experience is coming from? I don't even know when and why it got so complex...


What does npm do that python can’t? I’m curious.


npm is equivalent to combining pip and virtualenv into a single tool. This gives better ergonomics when switching between projects since you never have to "activate" your environment, it's always activated when standing in the project directory.
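A rough sketch of that always-activated flavor using only the standard venv module (the paths are illustrative):

```shell
# npm keeps dependencies in ./node_modules; the closest pip equivalent is an
# in-project environment whose binaries you call by path, no "activate" needed
python3 -m venv .venv

# address the environment's own tools directly, a bit like npx
# reaching into node_modules/.bin
.venv/bin/pip --version
.venv/bin/python -c 'import sys; print(sys.prefix)'   # a path inside .venv
```

Tools like pipenv and poetry mostly automate this create-and-address dance for you.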


Isn't this what Pipenv does? What has been a downer for me is that many of the cloud providers do not support pipfiles in their serverless app services (Elastic Beanstalk, App Engine etc.)


On second thought, at least on GCP I should be able to put the pipfiles into .gcloudignore and just update the requirements.txt file with each new commit using git hooks, build scripts or a ci/cd tool.


Yep, correct. Unfortunately I had quite a few issues with pipenv - mainly around its relationship with pip.


That does sound convenient. I wonder if the virtualenv aspect is relevant though, i.e. do people really deploy npm apps outside of a container/isolation layer?

I imagine if you're deploying docker, you probably should be developing in docker (e.g. using PyCharm's remote interpreter/docker interpreter integration).


Coming the opposite way: having started programming on python and only started doing node.js stuff recently, npm is awesome.


100% agree.

Coming from the scientific computing realm, I've only ever done "real" code in Python until a couple of weeks ago when I was forced to do some work in Typescript. Once I got over the initial worries about the best method to install npm to avoid Python-esque problems and gave nvm a try, I was very pleasantly surprised by the package management process and was able to just get on with the work.

I've tried various Python env management solutions in the past (mostly leaning towards conda), but had recently settled on just using separate LXC/LXD containers for each project.


Poetry does all that stuff, as discussed in the article.


Just spent ~1 hour trying to set up a working python environment .... so yes. It's in the classic phase where it has an ecosystem with a bunch of problems that are small enough that they aren't being tackled comprehensively by the core language, but large enough that n different solutions are being created in parallel by different groups. The result is an explosion in complexity for anybody just trying to get their job done and ... it's actually very unpythonic!


I've spent the whole weekend on dealing with Python, and now this evening and probably a few evenings to come.


The alternative is an opinionated build system defined by the language developers.

Like any dictatorship, that can be fine if those in charge are benevolent and competent. For programming languages, the first is almost always true, but the second is far from guaranteed. Skill at programming language development has no bearing on skill at developing a build, packaging, and distribution system.

Go is a prime example of this. The language is pleasant to use with few ugly surprises, but their build system has been awful for a decade, only now reaching a semi-decent state with modules (which are still pretty damn ugly).

With python, on the other hand, there's competition in this space, and as a result the tools are pretty nice, albeit fragmented.

But then there's rust, which has a nice language AND a nice build system. You take a big risk when building both the language and build system; sometimes it works, sometimes it doesn't. And you risk fragmentation if you don't. It's a tough choice.


The thing is, Golang and Python had the same problem with regard to dependencies: they punted on the problem, and the community came up with several competing products that confused users.


I never had any issue myself, but I guess that is because I use the standard tools:

python3 -m venv /tmp/foo

/tmp/foo/bin/pip install -U pip wheel

/tmp/foo/bin/pip install -r requirements.txt

I understand some might not like it, but really, it's simple and it works.


Note how you only use one version of Python above, which is not the case discussed.


Until now I've just dealt with that by always explicitly specifying the whole path to the python executable. Granted, my needs are very run-of-the-mill, but wouldn't that suffice for many people? I've tried several times over the last few months to get into all this Python environment/package stuff, but it all feels like yak shaving... much like the hours and hours I spent customizing vim years ago, which were fun (at the time) but weren't 'productive' in the sense that they'll never in my lifetime pay themselves off.


>Granted my needs are very run-of-the-mill, but wouldn't that suffice for many people?

For individual people perhaps. For companies with older projects, newer projects, greenfield stuff, etc, no.


I maintain a mix of old and new Python projects (sadly, still working on migrating some older 2.7 stuff to 3) and my setup is the same as TheChaplain's. I just keep a separate venv for each project and have it set up appropriately in VSCode. With VSCode, I don't even have to think about which venv I am working with outside of when I initially set the interpreter.


Until you have to deploy on a machine without internet access, and suddenly pip -r requirements is not enough, especially if you don't have a local pip mirror.


I develop a project that gets deployed to some users with locked down boxes (tight permissions, no internet), and it's really not that bad. You just download the dependencies using `pip download package_name` and bundle them with your project. Your install is basically the same; `pip install -r requirements.txt --no-index --find-links /path/localdeps/`.

It's not as nice as just doing a regular pip -r, but it works and isn't that much effort.


For me the biggest problem is C-based python modules that can't just be installed in your virtual environment but want to be part of the global installation.

Tkinter and mpi4py are the most recent ones I've had this problem with. I expect someone will tell me "it's trivial to install these in a venv, just do X", but X is not obvious to me.


It's trivial to install C programs with the appropriate tools, such as Nix and Spack. You might end up in Tcl hell in your new Python environment, but as Tcl is not that widely used anymore, one should be fine.

Having said that: you generally want these packages integrated with your system (which provides a self-consistent Tcl as well as MPI environment).


I've never quite understood the need for pyenv. Just keep a virtualenv with each project that you want to have an isolated environment.


Pyenv is for installing multiple versions of Python. Virtualenvs are a layer on top of that.

It’s super useful for maintaining static versions of python, like 2.7, 3.6 and 3.7 when you have many projects that have different python requirements.


I understand how this was needed historically, when using the official installer might overwrite the Python you already had installed. But as far as I know, you can download an installer for a new version, run it, and it doesn't touch your previous installation.

For example I've had 3.7 on my macOS system for a while, installed from the official installer, not through Homebrew. I just installed 3.8, which pointed my python3 to 3.8. But my 3.7 was still there; I created an alias python3.7 to point to that. So I can run or install anything I want against 3.7 or 3.8.

Why do we still need pyenv? I'm not asking that antagonistically, I really don't understand at this point and I'm wondering if I'm missing something.


Sometimes scripts will want to invoke Python as ‘python’ and it’s easier to use pyenv to set $PATH overrides.


Thank you, that’s the first concrete reason I’ve heard. It also explains why I haven’t run into issues yet using my own aliases.


Why do you need pyenv to set PATH?


Maybe you don’t, but it’s a way of doing it.

Specifically, pyenv will dynamically and programmatically link the $PATH for the Python executables (python, pip, etc.) to the desired version as defined either by an environment variable or by the contents of a .python-version file in CWD or an ancestor thereof. The .python-version file can be checked into version control.


Honestly I haven’t used official installations before so I can’t speak to that too much. Pyenv mostly uses official builds though so it’s mostly an automated frontend to manual installs.

I like being able to specify the global and local versions for my projects and the system as a whole. I also use it as a virtualenv manager. It works well with pipenv (which I still use in anger) and vscode.


I suspect, but could be wrong, that the disconnect here is because devs who are making open source packages need to make sure their code runs on multiple different versions of Python.

If you're working on closed-source code or have tight control of your environment, it's enough to develop and run on a single version, rendering pyenv and whatnot unnecessary.


It manages interpreters, not environments. So one environment can use python 3.5, while another can use 3.8.


Poetry will use the default “python” command found on the PATH. If you’re working on multiple Python interpreters for the same project, it’s very useful to combine Poetry with Pyenv.


Compared to other languages and ecosystems, it really is lagging behind. Dependency and version management were afterthoughts in Python. I dread having to maintain our Python projects.


Old virtualenv works. Pip works.

This post mostly reads as fad to me. It's an opinion; you'd most likely be fine bailing out.


We've stuck to virtualenv and pip mainly for the reason that we've got plumbing that works and we'd rather be doing other things than finding new plumbing.

Very few issues arise from our choice of build tooling. Not enough to consider switching at the moment. I suspect I'll try pyenv next time I have a new dev machine to set up, but only as it seems fairly painless to switch from virtualenv.


There's a lot of "shiny new toy syndrome" where people want to try the latest tool that is "allegedly" better but it's a pain in multiple other factors.

Pip works, virtualenv works. They might not be great tools, but they do the job.

I don't want to worry too much about my environment; that's why I'm skeptical of new tools. They might break when you least expect it.


Pretty much.

Virtualenv solves 90% of the problem, and it is not like you need to revisit your setup on a daily basis.

I would be more willing to spend time on my test setup than anything else, because that affects the development experience.


> The fact that the author has to use a bunch of different tools to manage Python versions/projects is intimidating.

It just shows that Python is used for a lot of purposes and there is no single tool that handles all use cases.

> I gave up and started rewriting the project (web scraper) in Node.js with Puppeteer.

And when you'll get to the point where you need to work with different node projects, you will also need several tools to manage node versions and different environments, so that doesn't help at all.


The thing is, if I need to switch node versions, I can use nvm. But I don't need to manage different environments in node, because the dependencies are contained in "node_modules" and not "attached" to a Python interpreter instance.


I'm a Java developer and I make fun of Maven and Gradle as much as anyone, but overall it seems like I am better off than I would be in the Python ecosystem for dependency management as well as managing the version of the language I compile and run with.


To be fair, he says in the article that his requirements are somewhat different from most Python developers, to wit:

* "I need to develop against multiple Python versions - various Python 3 versions (3.6, 3.7, 3.8, mostly), PyPy, and occasionally Python 2.7 (less and less often, thankfully)."

* "I work on many projects simultaneously, each with different sets of dependencies, so some sort of virtual environment or isolation is critical."


It's kind of shocking to hear these two quoted as "different than most" requirements – as a Ruby developer, this sounds like exactly a thing that any engineer supporting production systems would need routinely. RVM and bundler are standard developer tools and I would never question the need for supporting multiple versions on the same machine, unless in a very well-defined scenario where RVM was unneeded (like in a containerized environment packaged through a pipeline.)

So sure, there is more than one way to manage a Ruby runtime version, but are there any competitors to Ruby's Bundler? I feel like it's the undisputed champion of Ruby dependency management, working with Rubygems, another unopposed incumbent in its own space, and it never even occurred to me that my language should have more than one of either such tool. Can someone help me understand what drove the Python ecosystem to need more than one dependency manager, ELI5?

I am pretty cloistered as a Ruby developer, but my limited professional experience with other language runtimes tells me that the Ruby "developer environment" experience is really second to none, (change my mind.) Is there any tool which is nearly comparable to Ruby's Pry for Python or other interpreted languages? (I doubt that there is!)

Managing gemsets and multiple versions of Ruby is old-hat.


I think contemporary best practices start with the production environment and work backwards towards creating a development environment as close to the production environment as possible.

These days, most production environments are effectively isolated containers. If your production environment is a container, you probably should develop in a container as well. In that case you don't need much tooling for isolating an application's dependencies from other applications.

The tooling that you need is to build a Python application, which means (1) get the dependencies, (2) copy over some source code (or invoke the C compiler when building a C<->Python extension), (3) run tests. Python's builtin setuptools does that fine. It didn't strike me as amazingly simple, but it's not amazingly complex either. pip is essentially a convenience wrapper for setuptools, i.e. pip is to setuptools as apt is to dpkg.

Basically, I believe that because of Docker, isolation is an irrelevant criterion by which to judge a language/ecosystem.


Well then I guess you've reasoned yourself into a nice position from which you can claim the problem I routinely handle cleanly almost every day, is out of scope and unworthy of attention. This is the one important feature of Ruby that has enabled me to stay working without containers.

I'm a developer that supports multiple production applications, and I frequently need to make my development environment as similar to production as possible in order to reproduce a production issue for debugging purposes. I depend on that isolation to be able to do this. My work environment is such that I'm not generally permitted to use containers in production (yet). So it stands to reason through your argument that perhaps I shouldn't use them in development either. It sounds like if I were using Python as well instead of Ruby, I'd be having a much harder time.

Honestly, if I could use containers in dev and prod I would; I truly do believe the grass is greener ;) but I would not sacrifice this marvelous isolation tech. In fact, I'd prefer to take RVM with me into docker-land so that I can A/B test Ruby versions within the same container image, and be guaranteed that my cluster's worker nodes won't have to take extraordinary measures and carry both images just to ensure the application can boot without a download delay whenever we have to revert the canary (or whatever other minor, potentially reversible lifecycle event would normally force a node to download a new, expensive base set of image layers all over again).

This works really well: https://github.com/ms-ati/docker-rvm


Bundler is really nice and much better than virtualenv/pipenv/etc. I find this sadly ironic as I actually consider most of the Python ecosystem to have much more breadth and quality in terms of libraries; it’s just a pain in the ass to manage those dependencies once you actually want to use them.


Since you are familiar with nodejs:

poetry == npm

pyenv == nvm

pipx == npx

No big difference, IMO.


Right? It's literally the same set of issues.

Package Management - npm? bower? yarn? Which should I use this week?

Interpreter Versions - Revisiting a project I last touched 2 years ago on node 8 has a load of locked dependencies that only work on that node version. OK, let's bring in nvm so I can get a dev environment going.

Executable Packages - oh no I've got two different projects with different and incompatible versions of apollo, what do I do? Oh right, sure npx for executable isolation so we don't pollute the global namespace.

Every ecosystem has these problems, and if they don't it's probably because they're still relatively esoteric.


> Every ecosystem has these problems, and if they don't it's probably because they're still relatively esoteric.

Exactly! I'm not aware of any non-compiled language where (all) these issues are solved much better. I can be very productive with the tools I mentioned above and I'm glad that they work almost identical for both my main drivers (Python and JS/TS).


No. Using an ensemble of tools is in the tradition of Unix. I much prefer using small tools that do one thing and do it well.


I think it really comes down to Python not having a chosen way to handle package management as well as Python being dependent on the underlying C libraries and compilers for the given platform.

Since Python did not prescribe a way to handle it the community has invented multiple competing ways to solve the problem, most of which have shortcomings in one way or another.

To further add to the confusion, most Linux and Unix-based operating systems (Linux, MacOS, etc.) have their own system Python which can easily get fouled up if one is not careful.

This is one place where Java's use of a virtual machine REALLY shines. You can build an uberjar of your application and throw it at a JVM and (barring JNI or non-standard database drivers) it just works. There is also usually no "system Java", so there is nothing to break along those lines.


> barring JNI

Exactly. It's not Python's only problem, but far and away the most painful snags I've hit with packages are when they use C code, and thereby drag in the whole system. "I'll just `pip install` this- Oh, I need to install foo-dev? Okay, `apt install foo-dev`... oh, that's in Ubuntu but not Debian? Well this is gonna be fun..." Now I trend a bit more exotic in my systems (NixOS, Termux, *BSD, ...), but if my Python were just Python it would just work so long as I have a working Python install; in practice that's rarely enough.
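As a rough sanity check before relying on a package being portable, you can ask whether a module was loaded from compiled code (and thus drags in the system toolchain) or from pure Python. A sketch, where `is_c_extension` is a made-up helper name:

```python
import importlib
import importlib.machinery

def is_c_extension(modname: str) -> bool:
    # Compiled extension modules load from shared objects whose filenames
    # end in one of the platform's extension suffixes (e.g.
    # ".cpython-311-x86_64-linux-gnu.so" on Linux, ".pyd" on Windows).
    mod = importlib.import_module(modname)
    fname = getattr(mod, "__file__", "") or ""
    return fname.endswith(tuple(importlib.machinery.EXTENSION_SUFFIXES))

print(is_c_extension("json"))  # the stdlib json package is pure Python
```

This only inspects the top-level module, so it's a heuristic: a pure-Python package can still import compiled submodules underneath.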


You'd need to install foo-dev in the context of any language that supports C extensions.


> There is also usually no "system Java", so there is nothing to break along those lines.

Oh, but there is :( `jenv shell 1.8` is muscle memory for me now.


You can have multiple JVMs or JDKs installed, and therefore need to change environment variables depending on your use case, but I was referring to Java being part of the operating system in the same way that Python is part of some operating systems, for example several Linux distributions (Fedora, RHEL, and practically all derivatives).


Isn't it still just a package on Red Hat distros? A base system package, granted, because some system tools are written in Python.

But in any case, it just becomes one more version of Python to consider. If you're already dealing with multiple versions, what difference does it make?


It is "just a package" in the sense that there are RPMs for Python, but many system management tools are Python scripts that assume you have Python and specific Python libraries installed so that everything will run correctly.

If you have root privileges and you run "sudo pip" commands you might accidentally break the specific Python dependencies that the system scripts rely on. See https://developers.redhat.com/blog/2018/11/14/python-in-rhel...

There's no issue with using the system Python, but any Python packages should be installed via yum or similar Red Hat / Fedora tools and not pip.
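One way out of the `sudo pip` trap is to never install into the system interpreter at all: create a virtual environment and install there. A minimal sketch using only the standard library (the paths here are illustrative):

```python
import os
import tempfile
import venv

# Create a throwaway virtual environment; installing packages into it
# cannot break the Python libraries the distro's own tooling relies on.
envdir = os.path.join(tempfile.mkdtemp(), "env")
venv.create(envdir, with_pip=False)  # with_pip=True would also bootstrap pip

# The environment gets its own interpreter under bin/ (Scripts\ on Windows).
env_python = os.path.join(envdir, "bin", "python")
win_python = os.path.join(envdir, "Scripts", "python.exe")
print(os.path.exists(env_python) or os.path.exists(win_python))
```

In day-to-day use, `python3 -m venv env && env/bin/pip install ...` does the same thing from the shell, no root required.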

Note that the newer versions of RHEL have created developer-specific tool packages to separate the system packages from developer packages. This allows the developer packages to get upgraded quickly so developers have new, shiny tools without breaking the compatibility that the base system needs to keep running.


I'm not sure how people manage to "foul up" their system Python, but you are doing something extremely wrong when you give random dev tools root access to transform a typical (non-NixOS-and-friends) production environment (your workstation!) into a custom development environment.

Given that: are you sure the problems arise from pure Python packages (which generally have good enough forward compatibility), or from all the cool packages that embed compiled machine code (and there are a lot of them)? And yes, those might indeed break randomly when installed on systems from different time periods. But that's a problem all binary packages have!


It doesn't have to be this way, here's a simpler alternative: https://news.ycombinator.com/item?id=21513044


I don't understand... That looks very similar to the other blog post.


You'll find it's a much simpler setup. Thanks for the input however.


I see the same problems with the python ecosystem.

There are a lot of tools, and confusion between versions, especially because of the breaking changes between v2 and v3 (try explaining to someone why they have both, and why typing python gets them v2.7 by default).

I love the elegance and simplicity of the language and many tools written in it, but this is a point I'd very much like to see improved.

Because of that, I sometimes just rewrite something rather than fix it under 2.7. That's perhaps a bit more work, but it's not as frustrating as trying to get something running that is partly deprecated.
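A small defensive habit that helps with the 2-vs-3 confusion above is to fail fast with a clear message, instead of letting the script die on a SyntaxError somewhere deep inside when a bare `python` resolves to 2.7. A minimal sketch:

```python
import sys

# Guard old scripts explicitly instead of trusting whichever
# interpreter `python` happens to resolve to on this machine.
if sys.version_info < (3,):
    sys.exit("This script requires Python 3; run it with python3.")

print("Python %d.%d" % sys.version_info[:2])
```

The check has to live at the very top of the script and avoid any Python-3-only syntax, or 2.7 will choke before the guard ever runs.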


It occurs to me that, with respect to version dependency, you can think of Python and Java programming as similar to Smalltalk programming: you program and alter the environment.

In Smalltalk you change parts of the environment. In Python, Java (and Ruby?), you change the entire environment, as described in TFA.

https://www.quora.com/What-is-the-essence-of-Smalltalk


He has some specific workflows that go beyond the standard pip/virtualenv tooling, but for most developers that's likely all you'll ever need.


And not just Python, try writing a small script in Haskell or Clojure and you'll see how much burden there is to setup their environments.


People hate on Gradle endlessly, but the fact that 99% of my JVM based applications can be successfully launched including entirely self-contained dependencies with

    ./gradlew run
is a huge boon and one of the things that keeps me sticking with the ecosystem.


You kind of mentioned this yourself already, but this boon is more a feature of the JVM (the classpath) than of the dependency manager.

If Python had a similar concept, rather than depending on a global module location, we would be able to replicate the same developer ergonomics as we have on the JVM.
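Python can approximate a per-project "classpath" today by pointing sys.path (or the PYTHONPATH environment variable) at a project-local directory instead of a shared site-packages. A toy sketch, where `projdir` and the `greet` module are made up for illustration:

```python
import os
import sys
import tempfile

# Emulate a per-project "classpath": put modules in a project-local
# directory and prepend it to sys.path, instead of installing them
# into a global location shared by every project on the machine.
projdir = tempfile.mkdtemp()
with open(os.path.join(projdir, "greet.py"), "w") as f:
    f.write("def hello():\n    return 'hello from projdir'\n")

sys.path.insert(0, projdir)
import greet  # resolved from projdir, not from any global site-packages
print(greet.hello())
```

This is essentially what virtualenvs automate: they swap out the set of directories the interpreter searches, much like passing a different `-cp` to `java`.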


Well, with Haskell and Clojure, many people just use a pretty plain text editor plus a REPL. And for Haskell at least, that's super easy to install. I suppose there isn't One True Way, but each of the popular options (cabal, Nix+cabal, stack) is only a couple of steps.

Haskell with new-style cabal works like a charm with `cabal init` for package setup. And then ghci from there will give you...

- expression evaluation ofc

- type-at-point via type hole annotations

- Laziness inspector via :print

- Breakpoint debugger (someone just posted a nice Reddit text post about it today)

- Package information of terms via :info (and :load $MODULE to get any top-level term in $MODULE into repl scope)

- Docs via :doc (this is pretty new)

- Module export lists via :browse

off the top of my head


In Haskell that would be (given Stack is not shipped with your OS's package manager, which is extremely rare):

  $ curl -sSL https://get.haskellstack.org/ | sh
  $ stack install turtle

  $ cat > ./hello.hs <<EOL
  #!/usr/bin/env stack
  -- stack --resolver lts-14.14 script

  {-# LANGUAGE OverloadedStrings #-}

  import Turtle

  main = echo "Hello, world!"
  EOL

  $ chmod u+x ./hello.hs
  $ ./hello.hs
  Hello, world!

With Nix it would be even simpler.


I use mostly JavaScript for my day-to-day web stuff. I've been really turned off from using Python for more things because of these issues. My experience with managing dependencies in JS has been much easier than with Python--I'm really astonished that such a popular language has done such a bad job at this for so long.


I read it as a symptom of a very active project. It’s being taken to new places daily and independently. It might be somewhat like all the various Linux distributions. Somewhat overwhelming from the outside but in practice not so much.


Yes. As a non-Python programmer, I sometimes had to make software or dependencies written in Python work. It was always a long series of steps: install some package manager, set up a virtual environment, run another package manager, and so on. And of course, it would fail at some point before the whole thing was working.

On the contrary, I seldom had these kinds of issues with projects coded in C#, C, or C++: most of the time, the few steps to compile the project succeeded and produced a usable binary.


Really? With a decade of experience with C#, I have never found the NuGet package manager to be superior to pip. I don't think NuGet is necessarily worse either, but there are so many abandoned packages that died with some .NET version. Which means you're either building your own extensions or abandoning packages.

As far as virtual environments go, I actually kind of like them. They were containers before containers became a thing, and I've had far fewer issues with them than with, say, npm, but they are more burdensome than adding full binaries to your C# project.

Where compiled languages shine, and maybe I'm misunderstanding you, is when you need to distribute an executable to end users. C# is much better at that than Python, but we haven't actually done that at my shop in around 10 years.


It's admittedly bad, any Python dev who says otherwise isn't being honest.

That said, once you get it down, you're not burdened by it much/at all. You can start and begin work on a new project in seconds, which feels important for a language that prides itself on ease of use.

But yeah, not a great look for newcomers.



