Hello, my Nimian Simians.
I would like to share with all of you my thoughts since I started using Nim.
First, some background. Python is my bread and butter, but I have been itching to learn something new and more rigorous. So far, Nim has attracted me the most. After going through the tutorials and half of the Nim in Action book, I decided to set out on making something. I had a lot of ideas, but ultimately chose a text user interface library to start with. Given the design difficulties and the troubles I've had over the past few months, I'm not sure I'll finish any time soon. So I figured I might as well post something now in case I lose steam and post nothing.
There's quite a bit I do have to say, but to guard against an omnibus of a post, I'll post some things separately and more fleshed out.
All right. Let's get into it.
The first thing I noticed about Nim is that error messages are correct, yet opaque and unhelpful in many situations. I've already seen a lot of acknowledgment of this on the repo. The one thing I haven't seen discussed is how the error messaging for varargs makes them awkward to use, but I'll make a separate post on that.
Tooling: My setup has been VSCodium with the nimsaem plugin. I've seen a file that compiles but shows no symbols in the outline. Formatting sometimes does nothing to code that is clearly misformatted. nimsuggest thrashes the CPU sometimes. I can flip the red/green state of tabs by saving files in a different order. Rename-symbol doesn't work across modules. Go-to-definition won't work if the body of the definition has errors. You cannot refactor module names. And a whole bunch of other things that suck that I'm sure the community is aware of.
Moving on to testing.
Testing on its own feels very strange, and I found myself incredibly frustrated as I tried to understand how the community tests its Nim projects. There seems to be a strong division between writing the tests and running them. I'll cover three tools.
Let's take testament first; the broken formatting in its documentation doesn't inspire confidence. Testament creates files all over your project with no documentation about them, leaving you to guess whether you are supposed to check them into VCS. Or you can look at another project that uses it and steal their .gitignore.
Balls: Feels much more polite than testament. It also follows the philosophy of avoiding config as much as humanly possible. This is the only tool whose behaviour matched my expectations when just running my tests. Unfortunately, it's slow as balls.
unittest2: The docs mention a new way of collecting tests that sounds just like pytest. That's cool and all, but I'm still confused as to how that helps when you can only run one Nim file at a time with it; at least, that's what the examples indicate. Is it implied that you use unittest2 with a runner? I do not know ¯\_(ツ)_/¯
What I ended up with was just using include to merge all my tests into one file and executing that. A macro generates the include statements so I do not have to update my tests/all.nim file by hand.
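To illustrate, here is a minimal sketch of that kind of macro. The names and layout are my assumptions (tests named t*.nim sitting next to tests/all.nim; includeAllTests is a made-up identifier), and it relies on walkDir working at compile time, which recent Nim versions support:

    # Sketch: a compile-time macro that emits an `include` statement for
    # every t*.nim file next to this module, so tests/all.nim never needs
    # hand-editing.
    import std/[macros, os, strutils]

    macro includeAllTests*(): untyped =
      result = newStmtList()
      # getProjectPath() is the directory of the main module being
      # compiled, i.e. tests/ when compiling tests/all.nim.
      for file in walkDir(getProjectPath()):
        let (_, name, ext) = splitFile(file.path)
        if ext == ".nim" and name.startsWith("t"):
          result.add nnkIncludeStmt.newTree(ident(name))

Then tests/all.nim reduces to a single includeAllTests() call.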
Before closing, I'll just speak about the one issue I have with the language itself. I've mostly programmed in a dynamically typed language, so the task of putting different types of things into a list doesn't require any more effort than a list that only holds one type. In Nim, we can solve this with variant objects or inheritance. Now, I'm not an OOP fanatic, but so far inheritance with dynamic dispatch beats variant objects in ergonomics. I do have a crazy idea for making variant objects easier to use, which I'll put in a separate post.
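For concreteness, here is a toy sketch of what I mean (my own example, not code from my library), putting different shapes into one seq both ways:

    # Variant object: one type with a discriminator; every operation
    # branches on `kind`, and adding a new kind touches every `case`.
    type
      ShapeKind = enum skCircle, skRect
      Shape = object
        case kind: ShapeKind
        of skCircle: radius: float
        of skRect: width, height: float

    func area(s: Shape): float =
      case s.kind
      of skCircle: 3.14159 * s.radius * s.radius
      of skRect: s.width * s.height

    # Inheritance with dynamic dispatch: each subtype owns its
    # behaviour, which is what I find more ergonomic so far.
    type
      ShapeRef = ref object of RootObj
      Circle = ref object of ShapeRef
        radius: float
      Rect = ref object of ShapeRef
        width, height: float

    method area(s: ShapeRef): float {.base.} = 0.0
    method area(s: Circle): float = 3.14159 * s.radius * s.radius
    method area(s: Rect): float = s.width * s.height

    let shapes = @[Circle(radius: 1.0).ShapeRef,
                   Rect(width: 2.0, height: 3.0)]
    for s in shapes:
      echo area(s)  # dispatches at runtime on the concrete type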
I'll move on to closing remarks now.
There is something about Nim that draws me. Perhaps it's the fine line it walks between expressiveness and performance. Nim doesn't seem over-hyped or dismissed, and, like Haskell, it doesn't appear to care much about success.
I do want Nim to succeed. When I scroll through the experimental section, I see a lot of cool toys. Cool toys that would make the language so powerful. I hope Nim gets these cool toys in a fully realised and complete fashion, because it would make Nim beyond amazing.
The core language is good. It's everything else that needs some TLC.
As somebody who tests their packages with both std/unittest (for snorlogue, nimword and tinypool) and testament, I agree that some TLC is needed, mostly on the docs front for testament in my eyes.
However, I'd note that the docs are present and understandable; I mean my statement more in the sense that testament could use an expansive tutorial that dives into every feature in detail, as it packs quite a punch in what it can do.
Honestly, std/unittest (I assume unittest2 acts identically) was pretty trivial to get going, though.
Just two examples:
https://github.com/PhilippMDoerner/TinyPool/blob/master/tests/tSqlitePool.nim
https://github.com/PhilippMDoerner/nimword/blob/main/tests/t_pbkdf2_sha256.nim
And you can run as many test files as you want simply by running nimble test (https://github.com/nim-lang/nimble#tests).
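For anyone unfamiliar: by default nimble test builds and runs every tests/t*.nim file, and you can override that with a custom task in the .nimble file. A sketch (the project and file names here are made up):

    # myproject.nimble (sketch of a custom test task; the implicit
    # default already runs every tests/t*.nim file if you omit this)
    task test, "Run the test suite":
      exec "nim c -r tests/t_example.nim"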
Glancing through unittest2, their docs also cover how to run all your tests: https://github.com/status-im/nim-unittest2/#testing-unittest2
So while I'll agree on the rest, the unittest part at least is pretty well covered.
> What I ended up with was just using include to merge all my tests together in one file and executing that. A macro generates the include statements so I do not have to update my tests/all.nim file.
I do the same, or even write small tests in the same file as the code. All you need is to define the test helper somewhere, say in mylib/test.nim:
    import std/os

    template test*(name: string, body: untyped) =
      # Only run when the "test" environment variable is set to "true".
      if getEnv("test", "false") == "true":
        body
And then use it whenever you want:
...
    test "range":
      check range(-1.0, 1.0, 4) == @[-1.0, -0.5, 0.0, 0.5, 1.0]
...