I like this plan! I hope the goals are reached without too much hassle haha.
Besides IC are there any other ways of improving tooling?
Do the Cylons have a Plan to decide / to reason about the scope of a type? So, that type dependencies are made explicit?
If not, the Cylons might have a plan, but would ride a dead horse at the same time.
Do the Cylons have a Plan to decide / to reason about the scope of a type? So, that type dependencies are made explicit?
I don't know what that means. Same for your remark about 2nd order logic and CPS. Please link to the relevant books/papers.
I like this plan! I hope the goals are reached without too much hassle haha.
I've tried very hard to make the plan a modest one.
Besides IC are there any other ways of improving tooling?
Well, you can help nlvm so that it supports Nim 1.6; nlvm has the better debugging support. Or you can help the VSCode plugin authors.
More of the standard library should be moved to external Nimble packages.
Does this mean "less batteries included"? Will these external packages be "Nim approved" in some form?
Just being interested.
So if one of the important packages doesn't work with the latest release, will the release be postponed?
That's already true, important packages can block an upcoming release.
Maybe, but keep in mind how terrible I am at type theory.
No comment... I'm focusing on Nim only. For "generics", a core type theory is necessary anyway, because otherwise it is not sound.
You say "undecidable!", I say "don't care, I'll simply add a counter and bail out if it takes too long".
Unfortunately, the counter will not help. With a semi-algorithm, you will fail even when the problem is in, or can be kept in, first order.
Yes, and I know, but I still don't care. Even if it's decidable but takes too long, it's impractical.
The better solution here is to ensure that the algorithms used are decidable.
Compilers typically use decidable algorithms; otherwise, they could not run stably.
Yes and I know, but I still don't care.
This may sound as if I've just pulled out a (cheap) textbook argument, but that is not the case.
Type derivations should always be kept simple and computationally cheap; no counters needed (for exceptions, see below). This can be achieved with concepts:
Procedures should not only be declared, they can be defined too; they are then available "out of the box".
Concepts will then become the preferred instrument for almost anything. They will especially allow for faster compilation.
Now to the exceptions: if the programmer wants more, e.g. categorical operators like functors, applicatives, monads etc., the programmer has to annotate accordingly (e.g. an fmap etc.) and then we'll go for higher-order unification. For this particular purpose, you can have your counter again, why not.
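A hedged sketch of the "defined, not just declared" point above, using the concept syntax that current Nim already accepts (the body lists expressions that must compile, and statically evaluable booleans that must hold, for a type to match; `Readable`, `FileLike` and `dump` are illustrative names, not anything from the roadmap):

```nim
type
  Readable = concept r
    r.read() is string        # any type with a `read(): string` proc matches

type FileLike = object
  data: string

proc read(f: FileLike): string = f.data

# `dump` is defined once and works "out of the box"
# for every type matching the concept.
proc dump(r: Readable): string = r.read()

assert dump(FileLike(data: "hello")) == "hello"
```

Type checking a call like `dump(f)` here only requires compiling the concept body against the candidate type, which stays cheap and decidable.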
Looks like we agree, so what about:
type NTuple[N: static int] = concept
  requires c: # a 'requires' section for everything the declarative thing cannot handle. Allowed to use "more complex" algorithms (that run on the Nim VM and thus have a "counter")
    c.tupleLen == N
Well, there is much to say about it, the very first thing being the notation of N as a "free running parameter". But it is not as free as it seems, because it is now declared as part of NTuple; in fact, NTuple establishes its own namespace. I would write it (almost) as a repeat of your code:
type
  NTuple[N: static int] = NTuple # pure desperation here, but I'm not to blame!
    requires c:
      c.tupleLen == N
presuming that we are already in a concept/submodule, e.g. one with the name of the type itself.
There is a debate about whether unification could be extended with the assignment and/or equivalence operator, since unification works without them as well. Anyway, I would like to have a procedural representation of the macro too. And now, I would declare the NTuple as a type with a minor attachment NTuple N. The entire thing would come out as:
type
  NTuple N = NTuple[N: int]
    requires c:
      c.tupleLen == NTuple.N
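For comparison, the `requires` notation above is hypothetical, but the same N-tuple constraint can already be enforced in today's Nim with a static check inside a generic proc (`sumPair` is an illustrative name, not anything from the discussion):

```nim
import std/typetraits

# Accept only tuples of length 2; the length is checked at compile time,
# so a wrong tuple is rejected before the body is ever instantiated.
proc sumPair[T: tuple](t: T): int =
  static: doAssert tupleLen(T) == 2, "expected a 2-tuple"
  t[0] + t[1]

assert sumPair((1, 2)) == 3
# sumPair((1, 2, 3))   # would be rejected at compile time
```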
Maybe, but keep in mind how terrible I am at type theory. You say "undecidable!", I say "don't care, I'll simply add a counter and bail out if it takes too long".
Sometimes there are other ways to cut that Gordian knot. One of the classic undecidable problems of type theory was type inference with polymorphic recursion. OCaml uses explicit types in this case and some others (GADTs? It's been a while, I could be mistaken) to avoid undecidability. The counter approach is what C++ did to limit runaway template instantiation. So far Nim has avoided that since, like Rust, it uses macros instead of template metaprogramming for compile-time computing.
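To illustrate that last point with a hedged sketch: Nim runs ordinary procs in its compile-time VM (`const`, `static:`) and builds code with macros, rather than encoding computation in recursive template instantiations, so there is no C++-style instantiation-depth counter to hit here (`fib` and `plusOne` are illustrative names):

```nim
import std/macros

# An ordinary proc, evaluated by the compile-time VM when used in `const`.
proc fib(n: int): int =
  if n < 2: n else: fib(n - 1) + fib(n - 2)

const fib10 = fib(10)          # computed once, at compile time
static: doAssert fib10 == 55

# A macro manipulates the AST directly instead of abusing type-level recursion.
macro plusOne(e: untyped): untyped =
  result = infix(e, "+", newLit(1))   # builds the expression `e + 1`

assert plusOne(41) == 42
```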
OCaml uses explicit types in this case and some others (GADTs? It's been a while, I could be mistaken)
OCaml uses modules primarily, and GADTs are an add-on introduced late, in 2012. I'd go for GADTs indeed, specialized for module inclusion/membership.
We have:
type
  MA rows cols vty = MA[rows, cols: int; vty: type]
  SQma rows rows = MA
  QUadma 4 4 = MA
  IMa rows cols int = MA # or
  IMa rows cols vty = MA[vty = int]

export SQma, QUadma as MA

proc read(m: MA; nrow, ncol: int): m.vty = m[nrow][ncol]
proc write(m: MA; nrow, ncol: int; val: m.vty): m.vty =
  m[nrow][ncol] = val
proc getrows(m: MA): int = m.rows
proc getcolumns(m: MA): int = m.cols
proc issqma(m: MA): bool = false
proc issqma(m: SQma): bool = true
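The MA family above uses a speculative syntax. As a hedged point of comparison, today's Nim can already carry the row/column shape in static generic parameters; `MA`, `SQma` and the proc names are kept from the sketch, but everything else here is an assumption about how one might translate it:

```nim
type
  MA[rows, cols: static int; vty] = object
    data: array[rows, array[cols, vty]]
  SQma[n: static int; vty] = MA[n, n, vty]   # square matrices as a generic alias

proc read[r, c: static int; V](m: MA[r, c, V]; nrow, ncol: int): V =
  m.data[nrow][ncol]

proc write[r, c: static int; V](m: var MA[r, c, V]; nrow, ncol: int; val: V) =
  m.data[nrow][ncol] = val

proc getrows[r, c: static int; V](m: MA[r, c, V]): int = r
proc getcolumns[r, c: static int; V](m: MA[r, c, V]): int = c

# The shape is a compile-time fact, so squareness needs no overload pair.
proc issqma[r, c: static int; V](m: MA[r, c, V]): bool = r == c

var m: MA[2, 3, int]
m.write(0, 1, 42)
assert m.read(0, 1) == 42
assert m.getrows == 2 and m.getcolumns == 3
assert not m.issqma
```

Unlike the sketch, `issqma` here is a single proc comparing the static parameters, since the compiler already knows the shape at every call site.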