Currently the database modules (db_mysql, db_postgres, and db_sqlite) all rely on nil to represent NULL when passing data into or out of the module.
As most folks know, Nim is moving away from production support of nil, so it would be good to make changes to those libraries. I'm happy to help with this, but rather than arbitrarily deciding on a solution, I want to get opinions and/or direction from the community.
But, to get started, let's formally describe the current solution:
CURRENT
Rows of data are passed in/out using:
seq[string]
Where NULL values are represented by nil entries. All data types are represented by the string equivalent of the data.
So, if you have three columns of "name VARCHAR(50), desc VARCHAR(50), age INT" and "desc" is NULL, you would pass:
@["joe", nil, "30"]
I see five ways we could change this:
STRINGIFY
Rows of data are passed in/out using:
seq[string]
Where NULL values are represented by entries containing the string NULL. String values are quoted. All other data types are represented by the string equivalent of the data.
So, if you have three columns of "name VARCHAR(50), desc VARCHAR(50), age INT" and "desc" is NULL, you would pass:
@["'joe'", "NULL", "30"]
Notice the extra inner quotes around the real string value.
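For illustration, the encoding and decoding might look roughly like this (just a sketch; the helpers encodeValue and decodeValue are names made up for this example, not anything in the current modules):

proc encodeValue(v: string; isNull, isString: bool): string =
  # NULL becomes the bare word NULL; real strings get inner quotes.
  if isNull: "NULL"
  elif isString: "'" & v & "'"
  else: v

proc decodeValue(s: string): tuple[wasNull: bool, value: string] =
  # Strings come back without their surrounding quotes.
  if s == "NULL": (true, "")
  elif s.len >= 2 and s[0] == '\'' and s[^1] == '\'': (false, s[1 .. ^2])
  else: (false, s)

let row = @[encodeValue("joe", isNull = false, isString = true),
            encodeValue("", isNull = true, isString = false),
            encodeValue("30", isNull = false, isString = false)]
assert row == @["'joe'", "NULL", "30"]
assert decodeValue(row[1]).wasNull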
OPTION[STRING]
Rows of data are passed in/out using:
seq[Option[string]]
Where NULL values are represented by none(string) entries. All other data types are represented by the string equivalent of the data.
So, if you have three columns of "name VARCHAR(50), desc VARCHAR(50), age INT" and "desc" is NULL, you would pass:
@[some[string]("joe"), none(string), some[string]("30")]
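Consuming such a row would look something like this (just a sketch; the row is built by hand here rather than coming from an actual query):

import options

let row: seq[Option[string]] = @[some("joe"), none(string), some("30")]

for i, cell in row:
  if cell.isSome:
    echo "column ", i, " = ", cell.get
  else:
    echo "column ", i, " is NULL"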
TUPLES
Rows of data are passed in/out using:
tuple[data: seq[string], are_null: seq[bool]]
The data types are represented by the string equivalent of the data, and the null items are indicated in a separate sequence.
So, if you have three columns of "name VARCHAR(50), desc VARCHAR(50), age INT" and "desc" is NULL, you would pass:
(data: @["joe", "", "30"], are_null: @[false, true, false])
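Consuming the parallel-sequence representation would look roughly like this (again a hand-built row, purely for illustration):

let row = (data: @["joe", "", "30"], are_null: @[false, true, false])

for i in 0 ..< row.data.len:
  if row.are_null[i]:
    echo "column ", i, " is NULL"
  else:
    echo "column ", i, " = ", row.data[i]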
JSON
Rows of data are passed in/out using:
seq[JsonNode]
Where all data types are represented by their JSON equivalents.
So, if you have three columns of "name VARCHAR(50), desc VARCHAR(50), age INT" and "desc" is NULL, you would pass:
@[parseJson("""["joe", null, 30]""")]
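A sketch of consuming such a JSON-encoded row, using parseJson and JsonNode from the standard json module:

import json

let row = parseJson("""["joe", null, 30]""")

for i in 0 ..< row.len:
  let cell = row[i]
  case cell.kind
  of JNull:
    echo "column ", i, " is NULL"
  of JString:
    echo "column ", i, " = ", cell.getStr
  else:
    echo "column ", i, " = ", $cell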
NULLABLE
Rows of data are passed in/out using:
seq[nstring]
Where all data types are represented by string equivalents, and NULL values by a dedicated NULL value of the nullable type.
So, if you have three columns of "name VARCHAR(50), desc VARCHAR(50), age INT" and "desc" is NULL, you would pass:
@["joe", NULL, "30"]
Note: I've not actually finished this type library yet. In fact, I only have nint's type operations fully fleshed out.
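For the sake of discussion, here is a rough sketch of what an nstring could look like; this is not the actual (unfinished) library, the ns helper is made up, and the example above presumably also assumes an implicit converter from string:

type
  nstring = object      # stand-in for the library's nullable string type
    isNull: bool
    value: string

proc ns(v: string): nstring = nstring(value: v)
const NULL = nstring(isNull: true)

proc `$`(s: nstring): string =
  if s.isNull: "NULL" else: s.value

let row: seq[nstring] = @[ns("joe"), NULL, ns("30")]
for cell in row:
  echo cell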
Okay, and now:
NOTES:
Hi JohnAD,
There is already an issue open: https://github.com/nim-lang/Nim/issues/9453
The sqlite3 part is finished; if you'd like to dive in, please take a look at the sqlite3_examples folder of my project. You can "mix" my module with Nim's (db_sqlite3). The other modules are (unfinished) on my desk but not forgotten. Discussion and PRs are welcome.
Regarding dbNulls: that's a never-ending story. The best fit would be "do not handle it within the db layer"; just let the consumer handle it. It's a state and not a value, and unfortunately it's handled vendor-specifically, so always take a look into the vendor's manual regarding dbNulls :-) If you do a select null||'hello_nim' from dual you will be surprised :-)
The Option type would be the best fit for Nim, I think, but I would not enforce it: if you do bulk processing, the Option variant is significantly slower than raw handling. Regarding JSON on the database, I have my own (private) opinion :->
Thanks, @mikra!
I did a search in Nim issues and somehow didn't find that one.
Araq mentioned possibly moving ndb to the standard library if it can act as a replacement for the db_* modules. I'll look into volunteering to help with development of ndb. I like what I see so far, especially the handling of nulls and blobs with dbNull and dbBlob.