This thread is supposed to become a progress report for Nimony. Tonight the following program worked for the first time:
import std / syncio
echo "hi", "abc"
This might not look impressive, but it is (IMO): Nimony is a new Nim compiler that does not reuse much of the old codebase. A good dozen subsystems have been reimplemented from scratch.
Of course, this is still not ready for anybody. But once ARC has received more bugfixes, we will have a minimal Nim that offers seqs, objects and modularity.
For more information about its architecture, read: https://github.com/nim-lang/nimony/blob/master/doc/design.md
For more information about its planned design, read: https://github.com/nim-lang/nimony/discussions/529
The next milestone is to get our seq implementation to work. I hope to get there within the next two weeks, but that might be overly optimistic, as the interaction between generics and ARC hooks is hard.
Once seq works, our table implementation needs to compile. Once that is done, parseopt follows, and then you can write super simple CLI programs with it while we implement ref...
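For reference, the kind of "super simple CLI program" meant here might look like this (a sketch assuming std/parseopt keeps its current stdlib interface; nothing below is tested against Nimony):

```nim
import std / [parseopt, syncio]

# Tiny CLI skeleton: handles --name:<value> plus positional arguments.
var name = "world"
var p = initOptParser()
while true:
  p.next()
  case p.kind
  of cmdEnd: break
  of cmdShortOption, cmdLongOption:
    if p.key == "name": name = p.val
  of cmdArgument:
    echo "positional argument: ", p.key
echo "hello, ", name
```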
What's the tooling situation like in the new Nimony implementation? I understand the new incremental compilation will boost nimsuggest's performance and help solve some quirks, but are there other benefits?
Also, what about debugging: will the GDB debugging experience improve? Better support for the rr debugger could be a killer feature.
Nimony sounds really exciting, congrats to the team!
Well, there is no tooling yet, but since everything is based on NIF we have effectively implemented a "compiler database": a tool can simply process the NIF files in nimcache/. You can write your own tools, and NIF has a short spec.
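As an illustration, a third-party tool over this "compiler database" could be as small as walking nimcache/ and inspecting the NIF files directly (a sketch; the exact directory layout and file naming are assumptions based on the description above):

```nim
import std / [os, strutils, syncio]

# Enumerate every NIF file the compiler emitted; this is the raw
# material for custom tooling (indexers, linters, doc generators, ...).
for file in walkDirRec("nimcache"):
  if file.endsWith(".nif"):
    echo file, ": ", getFileSize(file), " bytes"
```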
As for debugging: all the transformations and tools keep column-precise line information and name-mangling information, so a debugger could be written to make good use of that. However, I expect that in practice we'll just use NIFC-to-LLVM instead of NIFC-to-C and get the typical debugging experience of all the other compiled languages.
I am curious about the replacement of macros through compiler plugins.
The way macros currently work in Nim is not without weaknesses, but I like that the only thing stopping one from writing a macro is pretty much an import macros, and that they sit seamlessly in a Nim source file next to related code. As they run in the VM, it is also possible to get very rapid iteration going with nim check.
How far would the workflow with compiler plugins differ from this? To exaggerate a bit, will we be going back to having a separate script that outputs C code and has to be added to the Makefile?
Slightly related: how does this fit with the previous plans of removing untyped?
It's just
template foobar(x, y: int): int {.plugin: "foobar.nim".}
And then in foobar.nim you have a program that receives two command-line parameters, the filenames <input.nif> and <output.nif>, and you use the NIF APIs (or APIs on top of that which emulate macros.nim) for the transformation logic.
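A minimal plugin program might then look roughly like this (a sketch only: the pass-through transform stands in for real NIF processing, and the eventual plugin API may well differ):

```nim
# foobar.nim: hypothetical plugin, invoked as  foobar <input.nif> <output.nif>
import std / [os, syncio]

proc transform(inputFile, outputFile: string) =
  # A real plugin would parse the NIF tree here, rewrite the
  # foobar(x, y) invocation and serialize the result. This sketch
  # just copies the input through unchanged.
  writeFile outputFile, readFile(inputFile)

when isMainModule:
  if paramCount() != 2:
    quit "usage: foobar <input.nif> <output.nif>"
  transform paramStr(1), paramStr(2)
```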
Instead, we already get zero-alloc, zero-copy lookups.
It's not just for strings but for any user-created (view) type; openArray is just a convenient way of explaining the concept. Overloading == with different types for lhs and rhs is already allowed, so this is just a natural extension of that mechanism to std/tables, one that you would expect to "just work". There is no need to artificially cripple it this way; C++, for instance, has had this feature forever.
The issue often appears when you work with caches, lazy loading and the like, where constructing a "full" key just for the sake of a lookup is expensive (for example, it involves a memory allocation). We've had numerous cases where we had to implement ugly hacks instead of using this very natural feature.
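To make the pain concrete, here is the shape of today's workaround versus the wished-for API (a sketch against current std/tables; the direct view lookup in the final comment is exactly what does not exist yet):

```nim
import std / [tables, syncio]

var cache = initTable[string, int]()
cache["alpha"] = 1

proc probe(t: Table[string, int]; key: openArray[char]): bool =
  # Today: materialize a full string just to look up the key,
  # i.e. one heap allocation per probe.
  var tmp = newString(key.len)
  for i in 0 ..< key.len: tmp[i] = key[i]
  result = t.hasKey(tmp)
  # Wished for: t.hasKey(key) directly, hashing and comparing the
  # view against the stored string keys with no allocation, much
  # like C++'s transparent comparators (std::map with std::less<>).

echo probe(cache, "alpha")
```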
Progress report. This program works:
import std / [syncio]

type
  BinaryTree = ref object
    le, ri: BinaryTree
    data: string

proc newNode*(data: sink string): BinaryTree = BinaryTree(data: data)

proc append*(root: var BinaryTree; n: BinaryTree) =
  # insert a node into the tree
  if root == nil:
    root = n
  else:
    var it = root
    while it != nil:
      var c = cmp(n.data, it.data)
      if c < 0:
        if it.le == nil:
          it.le = n
          return
        it = it.le
      else:
        if it.ri == nil:
          it.ri = n
          return
        it = it.ri

proc append*(root: var BinaryTree, data: sink string) =
  append(root, newNode(data))

proc toString(n: BinaryTree; result: var string) =
  if n == nil: return
  result.add n.data
  toString n.le, result
  toString n.ri, result

proc `$`*(n: BinaryTree): string =
  result = ""
  toString n, result

proc main =
  var x = newNode("abc")
  x.append "def"
  echo $x

main()
It does not leak memory either; recursive ref destructors seem to work well.
Fun work!
Will the new compiler still support compiling to Objective-C? I apologize if this isn't the place to ask (I don't know the right place).
Will the new compiler still support compiling to Objective-C?
It isn't planned, but not hard to do either.
It isn't planned, but not hard to do either.
So, can I ask what you're planning on targeting right now? Is it C, C++, and JavaScript?