Progress. Generic inner procs are now moved to a position where lambda lifting can handle them.
In other words, this code now works:
import std/[syncio]

proc outer =
  var x = 120
  proc inner[T] = echo x
  inner[int]()

outer()
It also does not use the heap, as the closure does not escape.
https://github.com/nim-lang/nimony/issues/1478 lists "heap based exceptions".
About OOM: it's covered in my blog post; what do you want to know?
Well, seqs and strings and collections in general have an "empty" state that we can use should an allocation fail. Failures also trigger a callback. Only new(x) returns nil on OOM, and since we check for nil derefs at compile time, the compiler complains about it. Only if the proc already uses .raises does the compiler exploit this fact, and then the new operation cannot return nil. (Of course, this special behavior can later be exposed as a pragma for other new-like operations, but there is no experience with this feature yet, so why bother...)
In other words, usually allocations do not cause a .raises annotation.
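A minimal sketch of that behavior, assuming the nil-returning new described above (the Node type and tryBuild proc are made-up names for illustration):

type Node = ref object
  value: int

proc tryBuild(): bool =
  var n: Node
  new(n)             # as described above, `new` yields nil if the allocation fails
  if n == nil:       # the compile-time nil-deref check insists on this test
    return false     # recover locally, no exception and no .raises needed
  n.value = 42
  result = true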
But if the callback doesn't terminate the program, how can it enable recovery? It doesn't know where the failure happened (judging by its signature) or how to handle it.
Also, I wanted to share a thought about OOM handling. It is quite different from other failure modes: it almost never happens, and when it does, there is nothing to be done about it (not at runtime at least). That means most of the time the programmer can be oblivious to the fact that allocations can fail.
So perhaps it would make sense to make OOM tracking orthogonal to exception tracking. Every proc can raise OOM by default (unless the compiler inferred otherwise). If one wishes to stop it from crashing the program, one can annotate a proc with something like .oomSafe, and then the compiler forces one to handle it (similar to the .raises: [] trick).
So if the programmer doesn't use the .oomSafe annotation and doesn't handle OOM anywhere, he gets the Nim v2 experience (OOM is equivalent to a defect) and can stay oblivious to this failure mode entirely (which is fine most of the time IMO). If he wants control over how and when OOM is handled, he can get as much of it as he wants.
This could also be backported to Nim v2, I think.
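A purely hypothetical sketch of what that could look like (the oomSafe pragma does not exist, and catching OOM as OutOfMemDefect is just an assumption made for the example):

import std/strutils

proc parseAll(input: string): seq[string] =
  # default: OOM may propagate and abort the program; the programmer stays oblivious
  input.splitLines()

proc parseAllSafely(input: string): seq[string] {.oomSafe.} =
  # under the proposal the compiler would force OOM to be handled in here,
  # analogous to how {.raises: [].} forces exception handling today
  try:
    result = input.splitLines()
  except OutOfMemDefect:
    result = @[]     # fall back to the collection's "empty" state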
I will give my 2 cents on this from the perspective of someone with long experience in C/C++ and Object Pascal, some experience in PHP, and who took a long time even moving away from HTML5/AJAX+jQuery to CLJS, avoiding the JS framework churn.
The only real scripting language I've needed to use from a systems perspective is Lua. No complaints there; it is a super tool. Python is its own unique case, as a language that offered kid gloves to non-engineers in math and scientific fields, letting them focus on the problem domain rather than worrying about code. Pascal was the language that started it all for systems, though, and I always have a soft spot for Free Pascal.
Nim is the modern, cheerful, and powerful distillation of Free Pascal, Ada, and scripting, in my view. Free Pascal has an outstanding base for developing desktop apps, and has a comprehensive web framework that puts Java to shame: mORMot2. It can handle all modern workflows. It can go bare metal, without needing an OS, in embedded systems. At least in Europe, Object Pascal is a well-considered language in some MINT circles. Heck, FPC has something called LAMW that lets one create native, well-designed Android apps. Is it high level or low level or what? It is everything.
I haven't had much to do with Python, and my ML playground (classic ML with the Shogun toolbox, and a bit of mlpack) has been with C++. Shogun has a number of long-standing bindings, of which Lua is very popular. It can be made Nim-ready relatively easily. Wrapping mlpack is a different case, as it would need an idiomatic reimplementation of its algorithms using Arraymancer-specific operations and types. It only offers a C++ API, and all its dependencies are C++-related types and toolchain.
So, Nim really breaks the view that one needs multiple types of programming languages. It is systems level, it is for the web, it can be a first-class single language for ML/AI, and it is script-like. Its JS backend allows full-stack Nim apps to be PWAs made native on mobile. All Nim needs is maturity of libraries. So I'm an evangelist for the language and the good folks who work on its amazing libraries, because the world would be uglier without them.
Bear with me for a moment please, as I can only grasp a few tidbits of all the internals. At some point @Araq mentioned plug-ins in a way that stuck with me, but probably not in the right way.
So, if I were to write a ray-tracer, I would not write it in Nimony, but as a plug-in to/for Nimony. That would give the ray-tracer access to the full ecosystem of Nimony.
It would have its own scene description language (DSL) converted to NIF? I could script the ray-tracer scene in nimscript without boundaries and eventually compile it to "include files".
Probably skipped a few steps.
Does that make sense??
You could do that, but writing a ray-tracer in Nimony would still be the first step and the first choice. Also, I'm not sure if you got the "converted to NIF" part right.
A plugin needs to emit NIF as required by Nimony's data model. This model describes code, not data, so it might not be what you want for your scene description language. But yeah, it would work.
You need code to build the data; this could be as simple as positioning a cube, or more complex, like generating mesh lofting etc. This would be the scene description language. Should it emit NIF, I thought, or just be a macro-based DSL?
Then, while rendering, a render-time language is used for other functionality such as bending light rays, sub-pixel triangle subdivision, shaders. Again NIF, the same NIF?
I wonder, hmm, is a plugin just a thing on its own, with its own NIF and a way to convert X to C, or is it (also) an extension to Nimony? Something that gives Nimony ray-tracing powers, or sound?
Anyway, it feels powerful.
NIF is "JSON on steroids", but how exactly it looks like ("what tags do exist and what do they mean") isn't part of the specification. But a Nimony plugin is not "any NIF will do", it has an inherent structure. This structure is like Nim's macro AST (NimNode) and plugins are macros, albeit with many restrictions lifted. I doubt you want to run your raytracer at compile-time, that's just a gimmick. It seems more useful to forget the "plugin" aspect here and explore the benefits and downsides of a NIF based 3D world description.
My educated guess here is that NIF would excel at that: clear structure, text-based, with tooling so simple you can literally count the number of ( characters to get the number of compound nodes. Excellent "where does X come from" support, in the form of optional line information.
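As a toy illustration of that last point, a sketch of what a NIF-style scene could look like; the tags (scene, cube, light, ...) are invented here, since NIF deliberately leaves them to the application:

import std/strutils

const sceneNif = """
(scene
  (cube (pos 0 0 0) (size 2))
  (light (pos 10 5 3) (color 1 1 1)))
"""

# one compound node per opening parenthesis, as noted above
echo sceneNif.count('(')   # prints 7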