I want to use Nim to generate client-side JS to perform date functions. E.g. when I change a select box, I want to increase the date by the selected amount and then update the input fields that show the date.
What's the process to do something like this? Would I use Karax somehow?
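Roughly what that could look like with Karax, as a minimal untested sketch (the nested onchange handler and the n.value accessor are assumptions based on Karax's own README examples, not something I've verified):

# Minimal Karax sketch (untested; the onchange/n.value usage is assumed
# from Karax's examples): a select box adds N days to today's date,
# and an input shows the result. Karax redraws after event handlers run.
include karax/prelude
import std/[times, strutils]

var offsetDays = 0

proc createDom(): VNode =
  let shown = now() + days(offsetDays)
  result = buildHtml(tdiv):
    select:
      proc onchange(ev: Event; n: VNode) =
        offsetDays = parseInt($n.value)
      option(value = "0"): text "today"
      option(value = "1"): text "+1 day"
      option(value = "7"): text "+1 week"
    input(`type` = "text", value = kstring(shown.format("yyyy-MM-dd")))

setRenderer createDom

Compiling with nim js -d:release produces a single JS file to include in the page.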
There are also many PRs in fusion devoted to improving the JS backend.
With the XML modules from the stdlib you can create SVG too (see the sketch below), and raster graphics with millions of colors and transparency, client-side.
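A minimal sketch of the SVG part (my own illustration, using the <> constructor macro from std/xmltree):

# Build a tiny SVG document with std/xmltree; the serialized string
# can then be injected into the page client-side (e.g. via innerHTML).
import std/xmltree

let circle = <>circle(cx = "50", cy = "50", r = "40", fill = "teal")
let svg = <>svg(xmlns = "http://www.w3.org/2000/svg",
                width = "100", height = "100", circle)
echo $svg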
In reality, the reason why JS devs are so focused on JS code size is that all the other numbers are harder to obtain. And we need some numbers in order to fabricate (pseudo-)objectivity.
There's a real reason why JS size is important, usually much more important than performance. In most cases you can tolerate bad performance, because modern PCs and even phones are so insanely fast that it's almost impossible to slow them down unless you do something really stupid or have heavy interactions like games; so bad performance is forgiven and nobody cares too much about it. But latency and bandwidth are not so fast, and they won't forgive you.
JS size matters because it causes a direct loss of money. And that loss is visible and very easy to measure, and it is usually measured by the marketing department. So it is not JS devs who care about JS size; it's the marketing/sales/finance department who cares about it.
Sites spend money to attract new users to a site/app. Every millisecond of delay means a user may leave before the site is loaded (new users don't know your site and don't have much motivation to wait), but you still pay Google for that lost user.
The size of images is a different story, because they don't block user interaction.
JS size matters because it causes a loss of money. And that loss is visible and easy to measure, usually by the marketing department. So it is not just JS devs who care about JS size :).
The marketing department has no way of measuring these things for the program sizes we're talking about. Surely megabytes of bloatware will lose you users, but I never talked about multiple megabytes.
Sites spend money to attract new users to a site/app. Every millisecond of delay means a user may leave before the site is loaded (new users don't know your site and don't have much motivation to wait), but you still pay Google for that lost user.
A single millisecond is not noticeable to anybody. All I claim is that there is a limit below which things simply don't improve anymore. If you disagree, fine, keep looking; maybe you can find 20 bytes of overhead somewhere that then translates to 0.0001 more mouse clicks on your site.
A single millisecond is not noticeable to anybody.
At 200 kB/s, 1 MB is 5 seconds, which makes a pretty big difference.
If we're going to talk about size and such, then some meaningful parameters are necessary to characterize the problem.
In e-commerce, the milliseconds until the page is useful/responsive (time to first render) are a big deal, yes, but e-commerce itself has niches.
For bandwidth, @Araq is bang on: the JS should long since be cached, and images/content should dominate the network. Size, however, is a useful proxy metric for the first-run work done by the interpreter/JIT.
Back to latency: if we're talking about a cold cache (first page load) and high-latency networks (anything less than LTE), then you're looking at basically 2 MTU in the first round trip due to TCP slow start. If you can in fact meaningfully do anything in 2 MTU then super, but it's bonkers to push into this space unless you actually have those constraints.
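To put a rough number on that, here is a back-of-the-envelope model (my own illustration; the 1460-byte MSS, the initial window of 2 segments, and the doubling per round trip are assumptions of the model, not measurements):

# Crude TCP slow-start model (assumptions: 1460-byte MSS, initial
# congestion window of 2 segments, window doubles every round trip,
# no loss). Counts the round trips needed to deliver a payload.
proc roundTrips(payloadBytes: int; mss = 1460; initWindow = 2): int =
  var sent = 0
  var window = initWindow
  while sent < payloadBytes:
    inc result
    sent += window * mss
    window *= 2

echo roundTrips(250_000)  # ~7 round trips for a 250 kB bundle

Under those assumptions a 250 kB bundle needs about 7 round trips; at 300 ms per round trip on a bad mobile link, that's over two seconds before the payload has even fully arrived.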
You can avoid the blocking caused by downloading and parsing large JS payloads and other assets in the browser, which helps perceived performance, especially on esoteric user agents with terrible engines. That's a bigger deal in small niches (by volume of commerce, or specific use cases like auto/set-top/"embedded" scenarios).
So if you have some "basic" parameters to characterize your problem(s), then it's worth talking about how far Nim is off the mark, how much sloppiness it can absorb for someone, and how much they have to make up. I've written a bunch of JS interop code and it's much slimmer for sure, but you lose things along the way. Those tradeoffs might be entirely unreasonable for someone else, and it might not make sense to make them in the core.
With all that said, this is taking a rather everything-in-Nim approach, and when teams are converting an existing project or starting with something small within a greater whole, they're not going to like a sudden bump in size for a single widget. I can imagine a number of aborted trials that would otherwise have resulted in Nim adoption.
Indeed, it's the latency, not the size, that's the problem. While almost everything is cached for users today, most CDNs are configured in such a way that browsers still try to validate the cache often, paying almost all of the latency anyway.
One 250 kB file that contains everything (HTML, JS, possibly even images as base64 data: URIs) is, for many users, much faster than five 10 kB files each pulled from a different CDN/server (and possibly even than five files from the same server).
Also, making sure your cache is valid essentially forever (and busting it by embedding e.g. a hash in the URL) is very helpful and rarely practiced.
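A minimal sketch of that cache-busting trick as a Nim build step (my own illustration; the app.js bundle name is hypothetical):

# Content-hash the bundle and embed the hash in the filename; the
# hashed file can be served with a far-future max-age, and every new
# build yields a new URL, which invalidates the old cache entry.
import std/[md5, os]

let content = readFile("app.js")   # hypothetical bundle name
let hashedName = "app." & getMD5(content)[0 .. 7] & ".js"
copyFile("app.js", hashedName)
echo "<script src=\"/" & hashedName & "\"></script>"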
FYI, I made a quick test of the generated JS file size for the following code.
import times
echo now()
I compiled the code in debug, release, and release+danger mode. Also, I used the Google Closure Compiler (gcc) in simple mode for minification. In addition, I wrapped the danger build in a function; this allows gcc to rename top-level variables and remove unused local variables. Here are the results:
209746 bytes  test.debug.js
169579 bytes  test.debug.min.js
 57103 bytes  test.release.js
 46520 bytes  test.danger.wrap.js
 46502 bytes  test.danger.js
 31989 bytes  test.release.min.js
 25159 bytes  test.danger.min.js
 17661 bytes  test.danger.wrap.min.js
Run with the following bash script:
# compile the same program at each optimization level
nim js test.nim && mv test.js test.debug.js
nim js -d:release test.nim && mv test.js test.release.js
nim js -d:release -d:danger test.nim && mv test.js test.danger.js
# minify each build with the Closure Compiler in simple mode
java -jar closure-compiler-v20201207.jar test.debug.js > test.debug.min.js
java -jar closure-compiler-v20201207.jar test.release.js > test.release.min.js
java -jar closure-compiler-v20201207.jar test.danger.js > test.danger.min.js
# wrap inside a function
echo '(function(){' > test.danger.wrap.js
cat test.danger.js >> test.danger.wrap.js
echo '})()' >> test.danger.wrap.js
java -jar closure-compiler-v20201207.jar test.danger.wrap.js > test.danger.wrap.min.js
Nim's JS backend has the advantage that it compiles everything into one giant JS file with many top-level functions; if you wrap this in a function scope, gcc in simple mode should be good at minifying this pattern. This is unlike other bundlers, which usually wrap each file in a function and simulate import/export in some way, a pattern that gcc in simple mode cannot minify.