Hot code reloading has always been difficult and, in my opinion, can make or break an ecosystem for certain tasks such as game development.
In my personal work, I've had the opportunity to get much more acquainted with compilers in the world of JavaScript, and I've found them fascinating. Because of the inherent limitations imposed by working in a browser, JS is often split up into small pieces that are loosely linked together. As it turns out, this works very well for hot code reloading. Vite has a Hot Module Replacement (HMR) system that enables a great degree of freedom in how a module handles a reload. One of the most interesting parts of its system is the fact that modules explicitly register their reload handlers and can choose what to do with the new module. This allows, for example, a module to recalculate some of its data as needed.
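To make the handler-registration idea concrete, here is a minimal toy sketch in TypeScript of that pattern. This is not Vite's implementation; in real Vite code a module would call `import.meta.hot.accept(handler)` instead, and the `accept`/`hotUpdate` names below are invented for illustration:

```typescript
// Toy sketch of explicit reload-handler registration (not Vite's internals).
type Module = Record<string, unknown>;
type ReloadHandler = (next: Module) => void;

const handlers = new Map<string, ReloadHandler>();

// A module registers what should happen when a new version of it arrives.
function accept(id: string, handler: ReloadHandler): void {
  handlers.set(id, handler);
}

// The dev server calls this with the freshly re-evaluated module.
function hotUpdate(id: string, next: Module): void {
  handlers.get(id)?.(next); // the module decides: swap bindings, recalculate, etc.
}

// Example: a module that recomputes derived data from the new version.
let config = { scale: 1 };
let derived = config.scale * 100;

accept("config.ts", (next) => {
  config = next.config as typeof config;
  derived = config.scale * 100; // recalculate as needed
});

// Simulate an edit to config.ts arriving over the dev server.
hotUpdate("config.ts", { config: { scale: 2 } });
console.log(derived); // 200
```

The key design point is that the *module* owns its reload policy; the tooling only delivers the new version.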
I can see potential for such a system to be implemented as part of Nimony's proposed compiler plugin API. I also believe that if the compiler can output individual DLLs (or .js, .so, .wasm, etc.) for modules, the process can be quite customizable and enable more rapid development of tooling around GUIs and games. Outside of the JS world, it seems like Dart has done something similar in terms of module splitting: you can dynamically import any module, and the compiler splits it into deferred libraries that can be loaded on demand at runtime. This system appears to power its hot reloading as well.
I'm curious to hear what others think about this idea.
Well, it gets easier to develop when you view the problem as "just keep the globals and heap state" across process invocations. But even then you need to handle changes to the involved types (previously it was `var global: seq[string]` and now it is `var global: seq[(string, string)]`!), or at least detect them, not to mention schema migrations of the kind a database supports.
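One cheap way to at least *detect* the type change described above is to persist a schema tag next to each serialized global and compare tags on reload. A minimal TypeScript sketch, with invented names (`save`, `load`) and no actual migration (a mismatch just resets to the default):

```typescript
// Sketch: detect that a persisted global's type changed between runs by
// storing a schema fingerprint alongside the serialized value.
interface Saved {
  schema: string; // e.g. a type name or a hash of the type's layout
  json: string;
}

function save<T>(schema: string, value: T): Saved {
  return { schema, json: JSON.stringify(value) };
}

function load<T>(saved: Saved, schema: string, fallback: T): T {
  if (saved.schema !== schema) return fallback; // type changed: reset, don't migrate
  return JSON.parse(saved.json) as T;
}

// Previous run: the global was a seq[string].
const old = save("seq[string]", ["a", "b"]);

// New build: the global is now a seq of (string, string) pairs.
// The schema tags differ, so the state is reset instead of misinterpreted.
const migrated = load<[string, string][]>(old, "seq[(string, string)]", []);

// Same schema round-trips fine.
const kept = load<string[]>(old, "seq[string]", []);
```

A real system would hash the full type layout rather than compare names, but the detect-or-reset decision is the same.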
But it stays messy and there are other ways to get fast iteration times. For example, in former days game development added cheat codes. (These days a scripting language is added to an engine and that works out too.)
Eh, compiler support is unneeded for HCR. The one who knows best how to migrate a data type is the owner of the data type, not the compiler. That's why, when I wrote my HCR, I migrate everything (including `ref`s) except procs and pointers, leaving those pointing into the previous shared objects. This means you do need to reload proc pointers on code reload, but it also means automatic migration of resources like SDL windows. I just store the global state of any variables marked `{.persistent.}` in `JsonNode`s in the host program, then attempt to deserialize; if that fails (due to names differing or the data not being the same type), the variable takes its default value.
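The serialize-then-try-deserialize-with-fallback approach described above can be sketched in a few lines. This is TypeScript rather than the author's Nim, and the names (`snapshot`, `restore`) are invented for illustration; the shape check stands in for typed JSON deserialization failing:

```typescript
// Sketch: persist marked globals as JSON across reloads; if deserialization
// fails (renamed variable, changed type), fall back to the default value.
const snapshot = new Map<string, string>(); // variable name -> JSON from previous run

function restore<T>(name: string, fallback: T, isValid: (v: unknown) => v is T): T {
  const raw = snapshot.get(name);
  if (raw === undefined) return fallback; // no previous value (e.g. renamed)
  try {
    const parsed: unknown = JSON.parse(raw);
    return isValid(parsed) ? parsed : fallback; // type changed: use default
  } catch {
    return fallback; // corrupt or incompatible JSON: use default
  }
}

// Previous run saved a compatible value: it survives the reload.
snapshot.set("score", JSON.stringify(41));
const score = restore("score", 0, (v): v is number => typeof v === "number");

// Previous run saved a string, but the new build expects a point: default wins.
snapshot.set("pos", JSON.stringify("oops"));
const pos = restore(
  "pos",
  { x: 0, y: 0 },
  (v): v is { x: number; y: number } =>
    typeof v === "object" && v !== null && "x" in v && "y" in v
);
```

The appeal of this design is that failure is graceful by construction: anything that can't be carried over simply restarts from its initializer.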
A small SDL2 program using HCR
Vite is developed mainly by a small group of people.
Hot reload is enabled by JavaScript's runtime flexibility and the capabilities of the most common JavaScript engines (like V8 from Google and JavaScriptCore from Apple). These engines allow code to be dynamically evaluated, replaced, or re-executed without restarting the whole application. Vite and similar tools merely leverage these features to provide a fast development experience.
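The "dynamically evaluated, replaced, or re-executed" part is easy to demonstrate: in JavaScript, even a top-level binding can be swapped for a freshly evaluated implementation while the program keeps running. A minimal sketch (using `new Function` purely to stand in for "new source arriving from the dev server"):

```typescript
// Sketch: replace a live implementation without restarting the application.
let greet = (name: string) => `Hello, ${name}`;

// Later, "new source" arrives (in a real tool, over the dev server):
const newSource = `(name) => "Hi, " + name + "!"`;

// Evaluate it and swap the binding in place.
greet = new Function(`return ${newSource}`)() as typeof greet;

console.log(greet("world")); // Hi, world!
```

A statically compiled language has no built-in equivalent of this, which is exactly why its HCR story needs help from the compiler, the linker (shared libraries), or an embedded scripting layer.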
You're probably right. HCR would take a lot of deliberate work; it's better to offload that burden onto the framework you're using, which can provide more application-specific APIs for HCR.
I do think code splitting, at least, is more doable without HCR, and it's the more important feature in my opinion.