Moreover, the latter risks making things slower for the common case of having a single header value.
As a library author, I find it difficult to keep track of changes this way. It would be very useful to have a log of possible upcoming breaking changes, and some deprecation period.
I feel bitter about that topic.
The list of possible upcoming changes is simply every change to Nim's "devel" after a release. The deprecation period is "at least" the time between releases.
Use releases and not devel if you need to rely on your code working.
You could also have stayed on the commit you know works until you have time to fix your code. If your library needs or follows "devel", the library itself is highly unstable, and you should know that.
This is the reason why 1.0 (and its promise of bug fixes without changing features) is such a big deal in the first place. That is also why 1.0 will change everything.
I understand your point of view. Still, I am just trying to help the community, and I am doing the best I can in my spare time. There is really no need to remark that you feel bitter about this.
I have to follow Nim devel if I want my libraries to be of any use.
It is clear that everything could break. Still, deprecations have been used in the past, and they do work fine, at least for the most glaring changes (such as moving every method related to HTTP to use a type that did not exist before the refactoring).
I am not against you ... and I am in a similar spot because I live on devel too.
But I am against putting a burden on development because you (and I, and others) feel it is "stable enough" and build things on "devel".
I just wanted to point out that "stable enough" means nothing for devel. You can check it out and even get a version that does not compile. I simply can't agree that the devs should "inform" people about changes in devel.
You could now say: "Hey devs, the latest changes affect my code in a very bad way. Are you sure you want to change this or that without a deprecation cycle for the next release of Nim?" Maybe things even get reverted. But making lists of upcoming changes for devel seems the wrong way to me.
I am not particularly interested in a list of upcoming changes, but rather in any way for library authors to follow what is going on.
If there are a few machines available, I think it could be a good idea to set up a continuous integration server. It could work like this:
This would have the following benefits:
@andrea you just described what I've been building on CircleCI. A build is triggered by new packages being pushed to Nimble's package repository. All available packages are installed and a report is generated:
It also creates hot-linkable badges with the install test output and the package version:
https://circleci-tkn.rhcloud.com/api/v1/project/FedericoCeratto/packages/tree/circleci/latest/artifacts/<package_name>.svg
https://circleci-tkn.rhcloud.com/api/v1/project/FedericoCeratto/packages/tree/circleci/latest/artifacts/<package_name>.version.svg
The code is temporarily hosted at https://github.com/FedericoCeratto/packages/tree/circleci
Contributors are welcome - please ping me on IRC!
@federico great work, this seems to be exactly what I had in mind! I don't hang around much on IRC, so I am asking here.
What are you testing exactly? Are you doing just nimble install?
I guess many libraries come with tests, and one would want to run those. Actually, I never install libraries directly, but rather depend on them in other projects. I also guess that just installing would not compile generic code at all, since it only gets instantiated in tests.
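To make that concrete, here is roughly what I mean (just a sketch, with made-up names and versions; rosencrantz is only used as an example dependency): the consuming project declares the library in its own .nimble file, and the library's code is only compiled when that project builds.

    # myapp.nimble (hypothetical consumer project)
    version     = "0.1.0"
    author      = "Example Author"
    description = "An app that depends on a library instead of installing it"
    license     = "MIT"
    bin         = @["myapp"]

    # nimble fetches the dependency when building myapp; generic code in the
    # library is only instantiated here, when myapp itself is compiled
    requires "nim >= 0.13.0", "rosencrantz"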
Also, it would be very useful to make this opt-in, so that authors can leave an email address to receive notifications in case of breakage.
Finally, many libraries probably depend on C libraries, so it would be useful for them to declare such dependencies. Having each library declare a Dockerfile would probably be the simplest approach.
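As a side note, newer versions of nimble can at least declare external C dependencies in the .nimble file itself, via the distros module and foreignDep. This is informational only (nimble prints the requirement, it does not install it), so a Dockerfile or similar would still be needed to actually set up the system. A rough sketch, with the C package names made up:

    # in mylib.nimble (sketch; the actual C package names are assumptions)
    when defined(nimdistros):
      import distros
      if detectOs(Ubuntu):
        foreignDep "libblas-dev"   # Debian/Ubuntu system package
      else:
        foreignDep "blas"          # generic name shown on other systems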
@federico: what check are you doing to display the status? Just nimble install?
I checked, and - as expected - without a bin file, nothing is compiled when doing nimble install. This means that the check is empty. In fact, I do not see why one would ever want to nimble install a library, rather than depend on it.
I think the right approach would be for each author to manually propose their library for inclusion and provide the commands needed to test it, be it nimble tests or whatever.
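For example (just a sketch, and the test file path is an assumption), the author could ship a test task in the .nimble file, so the CI only ever has to run one well-known command:

    # in mylib.nimble (sketch)
    task test, "Runs the test suite":
      exec "nim c -r tests/all_tests.nim"

The buildbot would then only need to run nimble test after the install step, regardless of how each package organizes its tests.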
What do you think?
@andrea currently http://ci.nim-lang.org/ is doing only nimble install, indeed, but this is already catching some libraries that fail to install. Running full unit/functional test suites might be unnecessary for this use-case and too heavy on the buildbot. Having a standard, simple, smoke test to be run by nimble would be ideal.
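Something in that spirit could be as small as a file that just imports the package's top-level module, so the library at least has to compile. A hypothetical sketch (mylib is a placeholder name):

    # tests/smoke.nim (hypothetical)
    # importing the library forces it to be compiled, which already catches
    # more than a plain `nimble install` of a bin-less package
    import mylib

    echo "smoke test passed: mylib compiles and imports cleanly"

Driving it through a standard nimble task would keep the buildbot command uniform across packages.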
I'm thinking of creating a Seccomp sandbox with https://github.com/FedericoCeratto/nim-seccomp for each install/smoke test to protect the buildbot and prevent unwanted network connections.
@federico I think that creating a sandbox and letting package maintainers decide which code to execute is the way to go.
I don't get how nimble install is catching any errors. All the libraries I have written (memo, csvtools, rosencrantz, linalg, spills, teafiles, emmy, nimblas, nimcl, patty) are just that: libraries, with no executable attached. There are some test executables, but since these are tests they are explicitly excluded using excludeDirs. I guess most other libraries do this.
When you do nimble install on one of my libraries, the only action performed is checking out the code and copying some of the files (those not in excludeDirs). There is no compilation at all, because there is nothing to compile. Hence there is nothing that can reasonably fail.
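For reference, the .nimble file of such a library looks roughly like this (a sketch with made-up names and versions; I believe the field is spelled skipDirs in current nimble): there is no bin entry, so the install step has nothing to compile.

    # mylib.nimble (sketch of a bin-less library package)
    version     = "0.2.1"
    author      = "Example Author"
    description = "A pure-Nim library with no executable"
    license     = "MIT"

    skipDirs    = @["tests"]   # test sources are not even copied on install

    requires "nim >= 0.13.0"
    # no `bin = @[...]` entry, hence `nimble install` only copies files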
Still, I see that most of the libraries I listed above appear as failed in the report. I guess the only way this could possibly happen is a network glitch while checking out from GitHub. Probably this explains most of the errors that appear in the report.
Asking maintainers to register and manually submit new packages has another advantage: there is an email address to contact someone when the package fails.
I get that putting all the necessary infrastructure in place is a lot of work, but it would be much more valuable than just doing nimble install.