And then, when compiling package A, an error occurs saying:
Error: Cannot satisfy the dependency on C 0.1.0 and C 0.2.0
This is problematic: if I want package A to use a newer version of package C, say 0.2.0, I have to first update package B to use C == 0.2.0, bump B to 0.2.1, and then have A require B == 0.2.1. It is not always possible to update B if it is someone else's repo. Even if I own package B, I may not want to update it immediately just for package A, because I know package B has already been tested and is stable. And this is a chain reaction: updating C requires updating B, which requires updating... the dependency graph could be large. This problem is especially obvious when package C is a utility library that serves as a base library.
Nimble allows specifying package requirements with >= and <=, but this only mitigates the problem a little. First, what should the upper bound be? C >= 0.1.0 < 0.2.0? The problem in the example above still persists. A half-open bound like C >= 0.1.0 is doomed to break in the future. Semver is only a convention for specifying compatibility, not a guarantee. Even if the interface is unchanged, a higher version does not necessarily mean a better one. It is not rare to see a newer version introduce new bugs, memory consumption issues, or behavioural changes. For production, a reproducible binary is highly desirable and the use of == is almost a must.
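To make the conflict concrete, here is a sketch of what the requires lines in the two .nimble files might look like. The package names and version numbers are the hypothetical A/B/C from above, not real packages:

```nim
# B.nimble (hypothetical) -- B pins C exactly, for reproducibility
requires "C == 0.1.0"

# A.nimble (hypothetical) -- A wants the newer C and also depends on B
requires "B == 0.2.0"
requires "C == 0.2.0"

# Resolving A now needs C 0.1.0 (via B) and C 0.2.0 (directly) at once:
#   Error: Cannot satisfy the dependency on C 0.1.0 and C 0.2.0
```

Loosening B's pin to a range like `C >= 0.1.0` would let this particular resolution succeed, but as argued above, that trades the error away for unreproducible builds.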
It seems there is no solution so far, but I could be wrong. I would like to hear about your experience and how the language development team thinks about this problem.
It also seems impossible for multiple versions of a package to co-exist so far, but I could be wrong.
I know of no good solution either. But I also think Rust/Cargo's solution of allowing duplicated libraries is terrible, so I'm not looking forward to the day where Nimble supports this scenario.
I personally consider semver to be little more than wishful thinking so here is my advice: Manage dependencies manually, use a tool (or scripts) to manage a set of git repositories and use git commit hashes for reproducible builds. Watch your dependency tree and try to minimize the number of your own git repositories.
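The pin-by-commit-hash workflow can be sketched in shell. The repository name and layout here are hypothetical, and a throwaway local repo stands in for a real remote, purely to show that checking out a recorded hash gives you back the exact known-good state:

```shell
set -eu
# Toy "upstream" dependency repo standing in for a real remote clone.
tmp=$(mktemp -d)
git init -q "$tmp/mylib"
git -C "$tmp/mylib" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "known-good state"
# Record the exact commit hash in your own repo (e.g. a DEPS file).
pin=$(git -C "$tmp/mylib" rev-parse HEAD)
# Upstream moves on...
git -C "$tmp/mylib" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "newer, untested state"
# A reproducible build checks out the recorded hash, not a branch or tag.
git -C "$tmp/mylib" checkout -q "$pin"
echo "building against $(git -C "$tmp/mylib" rev-parse HEAD)"
```

Unlike a tag or branch name, a commit hash cannot be silently moved, which is what makes this approach reproducible.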
But please note that this is my personal opinion; others are investing time and money in improving Nimble, and they won't stop until it's as good as Cargo. :-)
But I also think Rust/Cargo's solution of allowing duplicated libraries is terrible, so I'm not looking forward to the day where Nimble supports this scenario
Relevant discussion where I advise against copying fusion sources into nim, causing duplicate packages: https://github.com/nim-lang/fusion/issues/25, in particular https://github.com/nim-lang/fusion/issues/25#issuecomment-708099682
@Araq
allowing duplicated libraries is terrible
Why do you think that?
From experience with Node.js: even though allowing multiple versions of a package to co-exist results in a bloated node_modules for large applications, there is no more dependency-hell issue. A bloated project is the lesser evil because it is an easily solvable problem as long as the project size is within a few GB, not TB... For Nim, this may mean longer compile times, more memory consumption during builds, and larger binaries, but it shouldn't be unacceptably worse for small-to-mid size projects. One caveat,
On the other hand, it is hard to maintain a large application without
Why do you think that?
Because more often than not it's caused by semver, not by "incompatible" library versions. For example:
Say you have libraries A, B, C.
A seeks to depend on B and C v1.0. B depends on C v2.0, but only because C was at v2.0 when B was written and we strive to support up-to-date software; B could support C v1.0, but B's authors don't know that.
Now what happens with a good PM? You get duplicated libraries.
Now what happens when you don't use a PM? The thing compiles and works.
And if it doesn't compile, you get a meaningful error message like "unknown identifier: foo" instead of "Error: Cannot satisfy the dependency on C 0.1.0 and C 0.2.0"; Nim programmers understand Nim compiler error messages better than arbitrary version requirements.
A bloated project is the lesser evil because it is an easily solvable problem as long as the project size is within a few GB, not TB...
It's not easily solvable at all. More often than not in the end you simply live with the bloat for good.
Now what happens when you don't use a PM? The thing compiles and works.
And if it doesn't compile, you get a meaningful error message [...]
On the other hand, it's not only about whether the code compiles. A backward-incompatible change might not lead to compile-time problems. It could also be different semantics (for example different order in which code paths are tried, or different thread safety semantics).
Duplicated libraries are the final proof that semver simply doesn't work.
I wouldn't go that far. It may not solve dependency problems as they're described in this thread, at least not easily. But I think it makes it still a bit easier to determine "manually" where you could run into problems when you use a different library or multiple versions of it.
I think semver is still quite helpful. If I see a change in the major version number in a library I use, I can check the README/changelog for the release and think about how it would affect my software. On the other hand, if version numbers are chosen more arbitrarily, this makes it much more difficult to determine the effects of a version change.
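The convention being described, where a major-version bump signals potential breakage and smaller bumps claim compatibility, can be sketched as a small check. This is only an illustration of the semver convention, not of any Nimble internals:

```python
def parse(v: str) -> tuple:
    """Split 'X.Y.Z' into a tuple of ints, e.g. '1.2.0' -> (1, 2, 0)."""
    return tuple(int(p) for p in v.split("."))

def claims_compatible(old: str, new: str) -> bool:
    """Per the semver convention: a release claims backward compatibility
    when the major version is unchanged; for 0.x releases even a minor
    bump (0.1.0 -> 0.2.0) may break things. A convention, not a guarantee."""
    o, n = parse(old), parse(new)
    if o[0] == 0:                       # 0.x: minor bump may break
        return n[:2] == o[:2] and n >= o
    return n[0] == o[0] and n >= o

print(claims_compatible("1.2.0", "1.9.3"))  # True: minor bump, claims compatible
print(claims_compatible("1.9.3", "2.0.0"))  # False: major bump, read the changelog
print(claims_compatible("0.1.0", "0.2.0"))  # False: the C example in this thread
```

As the rest of the thread argues, even a True here only means the author *claims* compatibility; the changelog and your own tests are still the real check.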
It could also be different semantics (for example different order in which code paths are tried, or different thread safety semantics).
True, but not really relevant; for quality assurance I would rely on compiler error messages and tests (!). In practice this means that bugfix updates are risky, but then semver doesn't change that: updates are risky. Also, these subtle mystical threading changes would be covered in the changelog.
On the other hand, if version numbers are chosen more arbitrarily, this makes it much more difficult to determine the effects of a version change.
You can always read the changelog. And you should, because semver doesn't replace having to read the changelogs.
Here, https://youtu.be/tISy7EJQPzI?t=1128, it seems I'm not the only one who thinks this.
This is not a very Nim-specific problem, but about software lifecycle in general.
Various software distributions take a "snapshot" of the ecosystem at a given point in time and provide one given version of each library. This ensures that a specific set of libraries and tools is well tested together and can be used for a long enough time. This is often done by Linux distributions, but also in large projects like OpenStack, Kubernetes distributions, and various big closed-source platforms that need stability. Some Linux distributions provide backporting of security and stability fixes for 5+ years and reproducible builds for even longer. [ https://www.cip-project.org/ is planning to maintain backports for 25 years. ]
Unfortunately, this requires library developers to avoid strict dependencies, and library consumers to be a bit selective with what they use.