Hi, my name is Kevin. I am currently in the middle of developing my final year project at university using C++. It is a 2D physics engine that offers multiple broadphase algorithms for the user to choose from.
I have looked at the Nim manual and I feel that Nim is a fun language. I like Nim's syntax better than C++'s, and there are a lot of other things that I like about Nim. I am considering switching my project to Nim. But before switching, I want to know if Nim will be as fast as C or C++ if I turn off the garbage collector. I really want to contribute to this community. Thank you.
I would say that for a 2D physics engine Nim is at least fast enough. I am not yet an expert in optimizing Nim code for performance, but there is nothing in the language that I would call a performance killer, and I have written my own rendering engines.
I used to write in a language that had a performance killer for graphics, and that was Scala. Scala is a JVM language with only garbage-collected heap objects. This basically killed the performance of code that used a lot of local temporary variables, because there was no way to tell the JVM to allocate those objects on the stack. Garbage collection was consuming most of the time :-/ But in Nim you do not have that problem, because you have true C-struct-like objects. You can even program as if you were using C, with non-garbage-collected pointers etc.
Nim is actually a C transpiler, so it is going to be the same. Nim generates C source files in the nimcache directory and then invokes the C compiler on them.
The GC is reference counting, but you can also use C pointers.
Exactly Mapdog.
I think the best answer to Kevin's question is that Nim's upper speed limit is determined by the C compiler of choice. To the extent that you make use of the higher level features such as garbage collected references, then you will take on overhead. However, depending on the application that can be a trivial factor of the total run time, and save you a lot of work.
One way to say it is that Nim's upper speed limit is the same as C's, and depending on how it is written, it will asymptotically approach that limit.
hi dom96!
you said: or even faster than C and C++
Can you give me example code where Nim is faster than C?
My opinion? If Nim translates code to C, how can it be faster than C?
Any example? Thanks!
@sam I wouldn't say that Nim could be faster than C in a direct sense, but many of the built-in structures and mechanisms can be faster than their counterparts in C or C++.
For example, len() for Nim strings is always O(1), unlike strlen(), which is O(N). Nim's multimethods may also be faster than C++'s virtual methods due to their underlying implementations (case statement vs vtable lookup).
Of course, if nlvm becomes sufficiently advanced, there's an even bigger chance for a performance boost.
So let's say that implementing your game in Nim instead of C++ means 20% larger binary sizes, 20% more RAM usage, and 20% more CPU/GPU usage.
Why should Nim be larger, more RAM-intensive, or need more CPU? That's just wrong to start with. Nim is compiled to C or C++, and there is nothing like a 20% (or whatever %) burden to using Nim.
There are, in my opinion, no "general" upsides or downsides to using Nim over C++ in those areas. But maybe that's what you wanted to say. Dunno.
And, if you've already invested in decent hardware, for many programs the performance difference is too minute for the human senses to notice.
I don't believe that this is true. There is no "Python can be as fast as C++ if you pay more for hardware", because you could spend that money on hardware for the C++ version too (or already have). And "something is fast enough" only holds when there is no timeframe for execution, which is seldom if ever the case. Most software runs in multitasking environments, and each slow program is a burden on all the other programs and the system.
Which leads to my impression that you ignore that we kinda reached the single core speed limit for CPUs, and that from that point of view the cost of using a slower-than-native approach is very high.
I personally even hate this "the hardware today is so fast, let's waste it with suboptimal coding because developer time costs more than hardware". This is simply not true, and we all pay for it with software that behaves like software from the 90s while running on computers that are 10 to 100 times faster.
Besides that, I bet that large projects in any language can be a debugging nightmare. I think this actually depends on things other than the language used. Tooling comes to mind, and no, I don't think language == tooling. Tooling is something that may be extended by users; the language is normally not within your reach.
I still don't think that the programming language is that important. Most things can be done in any language, but some things can be done more easily or elegantly. Nim has much of that, with nearly no artificial bottleneck to the machine. That's what I like, besides the metaprogramming. You can use it "everywhere" (considering NimScript and the different backends). It should be the real Java :)
Nim with the C backend can never be faster than C.
What we are actually comparing here is typical code written in C to typical code written in Nim. There are many examples where Nim can win out.
Why should Nim be larger, more RAM-intensive, or need more CPU?
I said those estimates were pessimistic. Playing "devil's advocate". Benchmarks vary (ex). And of course Nim's performance is more likely to see significant improvements in the future than C/C++, so the gap will narrow.
There is no "Python can be as fast as C++ if you pay more for hardware", because you could spend that money on hardware for the C++ version too
The money (time is money) you spend writing it in C/C++ instead of Python is money you don't spend on more execution efficiency (electricity / local hardware / more cloud hosting resources / compensating for potential customers lost due to minimum system requirements).
What I'm saying is that we should measure in economic units as a common denominator for all trade-offs. In terms of "total cost of ownership", hardware and electricity usually cost far less than development time. Nim likely offers a little less execution efficiency than C in exchange for a lot more value in developer time, flexibility, and safety.
And "something is fast enough" only holds when there is no timeframe for execution, which is seldom if ever the case.
I find that to be the case quite often.
If your game is for PS4, there's a barrier to entry - there's no such thing as a PS4 with less than 8GB RAM, so the difference between your game using 4GB and 5GB is inconsequential to your bottom line. Multitasking is not really an issue for games. If a Windows game runs slowly because the antivirus is doing a full scan in the background, you won't lose many sales for telling the user to pause the scan.
For the back end, there are often bundles: for example, I need 8GB of RAM to fit my database, and on DigitalOcean that also gets me, for free, a lot more CPU power and transfer than I need.
Which leads to my impression that you ignore that we kinda reached the single core speed limit for CPUs [...]
Nah. Terahertz Graphene (3D) chips are coming... :D
I agree that this doesn't mean execution efficiency is unimportant, but it's often a low priority. In most situations, human effort is the much greater cost.
I personally even hate this "the hardware today is so fast, let's waste it with suboptimal coding because developer time costs more than hardware".
So do I, but there are trade-offs. Software bloat is the price we pay for cheaper (as in free) software. Having software written in a higher-level language also means it's easier for me to read and tweak the code.
So let's say that implementing your game in Nim instead of C++ means 20% larger binary sizes, 20% more RAM usage, and 20% more CPU/GPU usage.
There is no reason it would be. It's not like Python or Java. You can allocate structures on the stack. You can allocate memory on the heap and do pointer arithmetic and casting if you want.
While I'm very happy that Nim is faster than anything on this benchmark, it is not a good benchmark, unless what you are measuring is how Nim's lowering makes it easy for the compiler to analyze tail calls: https://forum.nim-lang.org/t/4253
That said, in my experience in high-performance computing, image processing, ray tracing, and cryptography, any speed that can be achieved in C can be achieved in Nim, sometimes even matching pure assembly libraries, for example for matrix multiplication/linear algebra. And this with a straightforward 1-to-1 translation of the C code.
Regarding C++, it should be achievable, but the translation might not be 1-to-1 if a lot of classes and inheritance are used: you might want to use callbacks or object variants instead, as Nim inheritance always involves the GC at the moment, and methods might be slow if there are a lot of them.
Since this resurfaced, I will note that these statements are completely wrong:
Nim is actually a C transpiler, so it is going to be the same.
I think the best answer to Kevin's question is that Nim's upper speed limit is determined by the C compiler of choice.
Nim is an optimizing compiler that happens to use C (or C++ or JavaScript, or even native machine code via nlvm) as its "native" output language; it is not a transpiler, which simply does source code transformations between languages that are generally of the same semantic strength. If you don't believe or understand this, take a look at the compiler source code and the years of effort that went into it, compared to any transpiler.
And think about why programs compiled with -d:release run so much faster than those without.
That statement is not really helpful in this context. -d:release removes a lot of checks and turns on optimization (-O3) in the C backend, so -d:release cannot demonstrate that Nim optimizes anything itself.
Whoosh! The point is that the claim was that Nim programs will run as fast as C because the output language is C. The fact that they run at different speeds depending on the flags proves that this isn't true.
-d:release removes a lot of checks
Yes, I know ... and those checks slow the program down. So the claim that Nim programs will run as fast as C programs because the output language is C is absurd and a fundamental misunderstanding. What are those checks for? They provide the sort of safety that one gets with interpreted languages--which is a big part of the cost of interpreted languages.