I believe it is in large part the deprecation of the random proc (in the random module), which is used to generate random tests.
Replacing that with rand should fix most of these, I expect.
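For reference, a minimal sketch of the migration (one gotcha: the deprecated random(max) excluded max, while rand(max) includes it, so ranges need adjusting):

import std/random

randomize()
# old, deprecated: let n = random(100)   # 0..99, upper bound exclusive
let n = rand(99)                         # 0..99, rand's bound is inclusive
echo n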
I can start to go through them (after actually doing the katas) and make my best effort to fix them.
Here is the first: https://www.codewars.com/kata/56541980fa08ab47a0000040/translations
Wasn't much to do (random -> rand) and I lightly refactored the code. If that is met with approval, I can continue and do some more. How best to coordinate with others?
Interestingly (disturbingly), Esolang interpreter #3 Custom Paintfuck Interpreter fails on 1.6 due to a perf regression.
There's a maximum limit of 12s for all the tests to run, and while previous solutions take maybe 9-10s, on 1.6 it times out.
i created an issue: https://github.com/codewars/runner/issues/194
It seems like compile times are included in that limit, and template expansion of all the unittest machinery is taking too long.
Maybe using something other than unittest would be possible? See the sketch below.
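For example, plain doAssert calls avoid unittest's template expansion entirely; a minimal sketch (the add proc is a made-up stand-in for a kata solution):

# hypothetical stand-in for a user's kata solution
proc add(a, b: int): int = a + b

# plain assertions: nothing to expand at compile time beyond the calls,
# and a failed doAssert still exits non-zero
doAssert add(1, 2) == 3, "add(1, 2) should be 3"
doAssert add(-1, 1) == 0, "add(-1, 1) should be 0"
echo "all tests passed"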
Hi - did you try running a test locally? It is fairly straightforward: copy codewars_output.nim and one of the tests I was working on. It was very fast to compile, despite generating the 100 random tests - approx one second. I've tried release, arc, orc - all take 1 second to compile, but 3 seconds the first time (is there some cache?). Sorry, I am still a noob. :)
Where did you find out they are running with -d:release? Just curious, I couldn't find that anywhere. I wondered what options they were using.
The issue with some of the tests I have seen is that they use check() inside a procedure call, which doesn't record a failure within Nim's unittest framework. So from the point of view of a user on Codewars, all the tests look like they are passing even though they clearly aren't supposed to. The check() isn't recording it as a test failure; the unittest framework only partially records the error, by returning a program exit code of 1.
So the user sees all their tests pass when they shouldn't, and at the end there is some obscure error about a return code. Why this has been allowed to happen for so long on Codewars I don't know, but it makes for a very poor user experience.
I can hack my way around each test by ensuring the check() doesn't happen inside the procedure calls, but that seems ugly. The question I am asking myself is: is the unittest framework broken in some way? Is it really supposed to not work if a check() is made inside a procedure call? I am looking at the code, but my macro-fu is nowhere near good enough to understand what is going on.
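Here is a minimal repro of what I mean (as far as I can tell from unittest's source, a failing check() falls back to only setting the exit code when no test status is in scope):

import unittest

proc helperCheck(x: int) =
  # check() expands here, outside any test body; on failure it cannot
  # update the enclosing test's status, so it only sets the exit code
  check x == 1

suite "repro":
  test "reports as passing despite the failing check":
    helperCheck(2)  # the test prints as OK, but the program exits with code 1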
Hey, yeah I did run a local test, and on my laptop I see something like 6-9s to compile and run with -d:release.
Yes there's a cache, so what I do is:
rm -rf ./cac/*; time nim c -r -d:release --nimcache:cac tests.nim
To detect compile flags I added the following in the tests:

when defined(release):
  echo "release"
else:
  echo "debug"
Update: they've switched from gcc to clang and it's dramatically improved compile times.
It's a bit of a bureaucratic nightmare to get trivial updates approved, however. I'm not sure my interest can hold out very long if the powers that be just deny simple updates with no discussion.