I've been a big fan of this idea ever since I got serious about programming (i.e. when I got my first IT job two years ago). I'm a "learnt from the Internet" programmer who has been doing different things with JavaScript since 2021 (and a computing hobbyist from about 2014, and a lawyer before that).
Since 2021 I've also been reading about various programming languages and technologies, and the idea that one could create a programming language that does it all has fascinated me.
However, a senior programming educator and former engineer told me that such a language is impossible due to the "Church-Turing thesis". All I know (after reading its Wikipedia page) is that the thesis relates to computability; I have had no formal education in mathematics beyond high school.
Is the thesis an obstacle to the goal of a programming language that can rule them all? Is such a language scientifically possible?
I ask this, of course, because of Nim's stated goal of being the one language to rule them all: https://nim-lang.org/blog/2022/12/21/version-20-rc.html
These days I prefer to phrase it as "good for everything" which captures the idea much better. Use the same programming language for OS development, scientific computing, scripting, game development, <insert domain here>.
It's not hard to design a language that does it; Nim is my attempt, but plenty of languages do the same: C#, C++, Rust, Julia, D, Golang come to mind. Some of these languages are still in "denial" and pretend to only cover a certain niche, but they are all sufficiently general purpose and could all tweak their runtimes/implementations to do what it takes to be "hard realtime" or whatever your domain might be.
All you need is a workable syntax that describes operations on a mutable dynamic graph and an associated cost model, plus some primitive for accessing memory directly, like ptr + cast or BASIC's peek and poke.
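For what it's worth, here is a minimal sketch of what that last primitive looks like in Nim today, using ptr plus cast to emulate BASIC-style peek/poke (the peek/poke helper names are just for illustration, not stdlib procs):

```nim
# Minimal sketch: direct memory access via `ptr` + `cast`.
# `peek`/`poke` are illustrative helper names, not part of the stdlib.
proc peek(address: uint): byte =
  cast[ptr byte](address)[]

proc poke(address: uint, value: byte) =
  cast[ptr byte](address)[] = value

var buffer: array[4, byte] = [1'u8, 2, 3, 4]
let base = cast[uint](addr buffer[0])
poke(base + 2, 42)      # write straight into the buffer's memory
echo peek(base + 2)     # 42
```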
However, a senior programming educator and a former engineer told me that such a language is impossible due to the "Church-Turing thesis".
For a counterpoint, see the "Turing tarpit": the idea that all languages are basically equivalent because they're Turing-complete ignores some really fundamental considerations in programming. There's a reason we're not all programming in assembly language (or COBOL). We're not looking for a language that can compute functions in previously uncomputable complexity domains (although some languages try to help a bit, like providing easy access to branch-and-bound optimisers, heuristic SAT solvers etc., and Araq's example of quantum computing).
What most of us are looking for is a language that makes it easier to express and understand solutions to the kinds of problems we've been trying to solve for decades. To maximise comprehensibility, extensibility, performance and compile-time + runtime safety, while minimising memory cost, build time, program size etc. All of that is really difficult and requires lots of tradeoffs where sometimes one goal is prioritised over another -- for example, Nim could use global type inference and reduce the amount of type annotations needed, which would make programs smaller and (sometimes) easier to write, but maybe not easier to understand, and it would certainly increase compilation time.
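To make that last tradeoff concrete, here is a tiny sketch of the current situation in Nim (to the best of my understanding): local let/var types are inferred, but proc parameters and return types must be annotated; global inference would let you drop those annotations, at the price of longer compile times.

```nim
# Explicit proc signature, inferred locals.
proc scale(xs: seq[float], factor: float): seq[float] =
  result = newSeq[float](xs.len)
  for i, x in xs:             # index + element via the default `pairs` iterator
    result[i] = x * factor

let data = @[1.0, 2.0, 3.0]   # element type inferred locally
echo scale(data, 2.5)         # @[2.5, 5.0, 7.5]
```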
Each of us will have our own feelings about the "true priority" of each consideration, so there can never really be "One language to rule them all". Some people might already argue that e.g. Rust is that one language because it avoids GC and has data race safety, which they consider more important than anything else. But another person (e.g. me) might disagree because the mental overhead, compilation speed and code verbosity are higher as a result.
Because of this, I'm glad to see people continually trying out new language models and exploring some of those tradeoffs, especially the ones that haven't been deeply explored yet. Regarding Nim, I think it's got a really nice balance in most of the priorities I mentioned, but other people will disagree and that's perfectly reasonable too.
The Church-Turing thesis says that certain theoretical models of computation compute the same functions (or, in simpler words, solve the same problems).
Unfortunately, these models are theoretical and assume unbounded memory. The Church-Turing thesis also doesn't concern itself with the "speed" of these models, or with how comfortable they are to use, as @DestyNova mentioned.
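As a toy illustration of that last point (not of the thesis itself): the two procs below compute exactly the same function, so they are "equivalent" in the computability sense, yet one is exponentially slower than the other.

```nim
# Same function, wildly different cost; computability alone can't tell them apart.
proc fibSlow(n: int): int =
  if n < 2: result = n
  else: result = fibSlow(n - 1) + fibSlow(n - 2)   # exponential time

proc fibFast(n: int): int =
  var a = 0
  var b = 1
  for _ in 0 ..< n:
    let next = a + b
    a = b
    b = next
  result = a                                       # linear time

echo fibSlow(30) == fibFast(30)   # true, but fibSlow takes far longer
```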
"good for everything"
Nim needs more pressure from higher-level app development. It could do better than it does now. All of these, a) needing to use fmt, b) {.base.}, c) tdiv, d) no CopyOnWrite, e) no type inference for proc/lambda args, f) no incremental compilation, g) poor IDE support, h) no eval (poor VM support), i) no interfaces (poor concept support), j) poor support for random order of declaration, k) etc., make it too hard to use and in the end not better than TS/Kotlin/C#.
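For concreteness, here is roughly how two of those friction points look in current Nim (a small sketch, nothing more):

```nim
import std/strformat

let name = "world"
echo fmt"hello {name}"    # a) string interpolation needs an explicit import

type Animal = ref object of RootObj
method speak(a: Animal): string {.base.} =
  result = "..."          # b) base methods need the {.base.} pragma
```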
So we need to prioritise things for you, got it.
make it too hard to use and in the end not better than TS/Kotlin/C#.
Yawn. Feel free to use TS instead.
"good for everything"
"good for getting things done" would be enough for me.
Nim is amazing and by far my preferred programming language, but there is at least one thing that it is not very good for at the moment: interactive / notebook-style programming (e.g. using Python with a Jupyter notebook). There is inim, but it is terribly slow and often doesn't work very well.
I believe that is one of the main reasons why Python took off the way it did in the scientific / AI domain (replacing MATLAB, which is also great at that kind of interactive programming).
Hopefully when (if?) incremental compilation is implemented, that will change?
Better (VS Code!) tooling would help too, but that is not a property of the programming language itself and does not restrict the kinds of things you can do with Nim. It only makes them a bit harder to do in practice.
Unrelated, but the time limit on being able to edit anything is kinda silly.
It is an anti-bot feature and it works. Bots would otherwise slip past us and be considered human; later on they edit their posts to be spam.
My bad (to @everyone here as well). The person said (I went back to our chat and re-read what he said) that this ("a language without tradeoffs") is not possible and that it was proved by Turing and Gödel; NOT the Church-Turing thesis.
Mea culpa.
that this ("a language without tradeoffs") is not possible and that it was proved by Turing and Godel
I agree with this, and I don't mind it either; what's bad about tradeoffs?
But where have Turing and Gödel proven anything like this?!
He was referring to how a single language cannot be fast at solving all problems.
So, is it possible? I mean, can one make a programming language that works efficiently in all domains?
So, is it possible? I mean, can one make a programming language that works efficiently in all domains?
I've already answered this question. Yes and no.
For yet another perspective, consider this question: what is the ARM instruction set bad at? It works for embedded systems and scales up to supercomputers. The same goes for the Linux kernel.
Got it.
Thanks @Araq.
Thank you for creating Nim!