As the title of this thread says, I’ve seen many recommendations in this forum to wrap your code in a top-level “main” procedure.
Why does that make it faster, and why can’t the compiler make that optimization automatically for you?
Sorry, I typed my previous message from my phone and did not notice the typo.
Couldn’t the compiler detect the case in which there is no main procedure, yet none of the global variables is used in any procedure, and optimize it automatically?
whenever somebody does a benchmark
So should we make -d:danger the default just in case a fool makes a benchmark?
Global code is generally used only for short scripts where nobody really cares about performance. And the benefit of a main proc is explained in most tutorials; in my book it is covered in section http://ssalewski.de/nimprogramming.html#_scoping_visibility_and_locality.
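The short version of why it matters: top-level code operates on global variables, and the C backend cannot keep globals in registers across calls the way it can locals. A minimal sketch of the difference (the names and the loop are illustrative only):

var total = 0                  # a module-level global, compiled to a C global
for i in 1 .. 100_000_000:
  total += i                   # harder for the C optimizer to keep in a register
echo total

versus

proc main =
  var total = 0                # a local: easily register-allocated
  for i in 1 .. 100_000_000:
    total += i
  echo total

main()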
We should also ask people to use a main proc even when they use when isMainModule, because that alone doesn't introduce any new scope. So we should advise doing something like:
proc main =
  echo "hi"

when isMainModule:
  main()
@Stefan_Salewski - The idea of defaulting to an optimizing mode has come up before. It seems contrary to what almost any user coming from a "compiled language" world would expect. For various reasons (debuggability, compile-time performance, etc.), almost all compilers default to a non-optimized (or very weakly optimized) output and allow various knobs to crank up optimization, as does the current Nim setup.
There is even a whole dark art of "best optimization flags" which can be taken to severe extremes. More simply/efficiently/some might say intelligently, you can often use PGO https://forum.nim-lang.org/t/6295 to get 1.25..2.0x boosts on object code generated from nim-generated C. Some flags like -ffast-math can even change semantics in subtle ways that can impact correctness or not depending on use cases.
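For reference, a hedged sketch of the PGO dance, assuming gcc as the backend (the flag names are gcc's, passed through via --passC/--passL; the program name and workload are placeholders, see the linked thread for details):

nim c -d:danger --passC:-fprofile-generate --passL:-fprofile-generate myprog.nim
./myprog typical_workload      # run once to collect profile data
nim c -d:danger --passC:-fprofile-use --passL:-fprofile-use myprog.nim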
I don't know what to do about people publicizing misleading benchmarks. That seems an ineliminable hazard, not only for Nim, but actually everywhere and all the time, and often not on purpose (though most "marketing" wants to do strawman comparisons on purpose). Besides compiling wrong, they could also use a bad algorithm, misrepresentative inputs, weird machines, benchmarks naive relative to intended measurements, and probably several other mistake categories. :-/ The best strategy may be trying to be supportive where & when we can, educating as we go, though I agree/admit that is a neverending battle.
The idea of defaulting to an optimizing mode has come up before.
Sorry, maybe I did not make my statement clear enough: of course it is fine that the compiler's default mode uses runtime checks and not -d:danger. @didlybom asked for an automatically generated main function so that newcomers making a benchmark would get good results. But generally these newcomers also forget to compile with -d:release, and they often use too many ref objects, too many string allocations, and other slow constructs.
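For instance (a minimal sketch), a value object lives inline and costs no allocation, while a ref object means a heap allocation per instance:

type
  Vec = object         # value type: no heap allocation, copied on assignment
    x, y: float
  VecRef = ref object  # heap-allocated and GC-managed; fine, but not free
    x, y: float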
So the recommendation is: before making benchmarks or videos, read a tutorial.
I'll admit people (including myself) are not great at writing or understanding the results of benchmarks. If the language environment has rough edges like this, it turns every small test into a head-scratching exercise. And after a few weeks of this, it leaves one with the feeling "If I can't even get simple 10-20 line programs to do what I want, does it make sense to use this for a 200K line program?" It makes me feel a bit like "to drive this car, I have to become a mechanic and know how to rebuild it first or I'll crash".
When Nim works, is fast, and uses reasonable amounts of memory, it's fantastic and I love it. But getting there seems more difficult than it should be.
As the person who added rightSize, I actually agree with those points. The only slight gotcha is that, doing as you suggest, old code that calls it with the correct power of two to avoid a resize (or maybe already with a rightSize result) would allocate the next higher power, wasting space. That might be a price worth paying ease-of-use wise.
We could call rightSize only if the argument is not a power of two, but it's probably simpler to just always use rightSize and tell people. Wasting space isn't quite a "failure". Anyway, you should feel free to file a GitHub issue and/or work on a pull request. Maybe I should have pushed harder at the time to change the parameter semantics.
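To make the gotcha concrete (a sketch against the stdlib as discussed here, where initTable takes a slot count and tables exports rightSize, which rounds an element count up to a sufficiently large power of two):

import tables

# Current semantics: the argument is the number of slots.
var a = initTable[string, int](1024)             # exactly 1024 slots

# Under the proposed semantics the argument would be an element count,
# so an old, deliberate 1024 would be routed through rightSize and
# allocate 2048 slots: the wasted space mentioned above.
var b = initTable[string, int](rightSize(1024))  # 2048 slots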
There is definitely a learning curve coming from other languages. There are definitely more examples of the stdlib not providing the "best in class" for various functionalities. There will likely be even more with time. Nim core only has so much manpower to write/maintain/support such things.
We discussed doing a "distribution" over at https://github.com/nim-lang/RFCs/issues/173 and elsewhere. This "fusion" repository (see end of that thread) seems to have been the output, but not much has been happening over there.
Anyway, I think if you persevere there are good chances you could be a happy Nimmer once you've learned your way around. You seem pretty capable of finding these issues, and you could probably help round off some of the sharp edges, if you can sustain the patience. There is probably room for a Porting to Nim From Python guidebook; maybe you could help write one!
Whenever you leave the C code that Python happens to use, e.g. by actually looping over an array yourself, it's slow. Python is in fact an abstraction inversion: primitive operations are slow and high-level operations are fast (because they can avoid Python's slow VM), so you need to use premade libraries for everything. In Nim it's the opposite: primitive operations are fast, and custom code wins over premade libraries quite often because custom code is specialized for the task at hand.
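To make that concrete (a trivial sketch): where Python pushes you toward sum(), NumPy, or another C-backed library to avoid the interpreted loop, the plain Nim loop compiles to tight C, and specializing it costs nothing:

let data = @[3, -1, 4, -1, 5, 9, -2, 6]
var total = 0
for x in data:
  if x > 0:          # task-specific tweak, no library support needed
    total += x
echo total           # 27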
That's the fundamental lesson to learn when coming from Python.
Surely we can and will improve the rough edges you encountered, but it won't change the fundamental lesson.
In my experience batteries are almost never fully charged and it's hard to get feedback if you only release a perfect cathedral. With no feedback, it's kind of unlikely you will build something popular.
To take a topical example, even after 30 years of tuning, the core language dict in Python still seems to have no easy way to "pre-size" an instance. There is no power of 2, no rightSize, nuthin'. So, one of your rough edges literally cannot arise due to an inflexibility/API design flaw (IMO).
Yes, there must be 3rd party replacements or ways to pre-size in some slow subclass or whatever, but you could also just write a proc initTab that always calls rightSize for your own code. What's at issue here is "out of the box" (and personally I think the Nim workaround is less onerous than workarounds in the Python world).
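Such a helper could look like this (a sketch; initTab is just the hypothetical name from above, and it assumes a stdlib version where tables exports rightSize):

import tables

proc initTab[A, B](count: Natural): Table[A, B] =
  ## Pre-size for `count` elements without thinking about powers of two.
  initTable[A, B](rightSize(count))

var t = initTab[string, int](1000)   # room for ~1000 entries, no resizes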
Do you have to learn your way around? Sure. A sales pitch of "just like Python but compiled/with native types" is definitely not quite right. That's Cython/similar. Analogies and oversimplifications help some people while others want the real story, and you strike me as the latter.
Nim gives programmers more power and flexibility, but responsibilities also come with that. Cue Spider-Man's Uncle Ben. ;-) It is still a comparatively tiny community. Bugs, workarounds, and rough edges come with that. Unused patterns like your & are just unused, unoptimized things waiting to be discovered and fixed. There was a time when C had no proper namespacing of struct members, which is why some core Unix types have prefixes like the st_* in struct stat (run "man 2 stat" on any Unix machine).
No one using Nim wants it to be hard, but there may also be good reasons some things are the way they are (often flexibility, performance, or safety, but yes sometimes neglect). I'm really sorry to hear your entry has been tough, but the flip side of that is you could probably make a HUGE impact to future similar-but-not-quite-you's, and I again encourage you to try! Even just documenting everything somewhere would probably help at least a few other Python Refugees. :-) Cheers and best wishes whatever you decide.