I checked the documentation and I think I understand that a Natural is a range over a type, specifically int in the example. But other than that I have no idea how to use it. I'm trying to use the readBytes proc in the system module, but I'm having a little difficulty understanding what I need to do to call it.
My use case is that I'm trying to use my .obj parser to write the data from a text file to a binary file, then read it back from the binary file; later I'll make a text-based scene format file.
Any help appreciated.
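Here's roughly what I'm attempting, as a minimal sketch (the file name and buffer size are just placeholders):

var
  f = open("model.bin", fmRead)
  buf = newSeq[uint8](1024)
# start and len are Natural parameters; plain non-negative int
# values convert to them implicitly
let n = f.readBytes(buf, 0, buf.len)
echo n, " bytes read"
f.close()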
Some definitions, including the standard ISO 80000-2, begin the natural numbers with 0, corresponding to the non-negative integers 0, 1, 2, 3, ...,
What is misleading?
Natural.high and Positive.high shouldn't really exist.
Yeah, and 1e-1000 shouldn't be 0.0. Unfortunately, computers have finitely many bits.
Coercing to a Natural is implicit and can raise an error at runtime. Even when code is declared with {.raises: [].}, the compiler doesn't seem to anticipate the possible error at compile time.
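For example (a minimal sketch; takesNat and demo are names I made up):

proc takesNat(n: Natural) = discard

proc demo(x: int) {.raises: [].} =
  takesNat(x) # implicit int -> Natural conversion, range-checked at runtime

demo(-1) # compiles despite {.raises: [].}, but dies with a RangeDefect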
Range checks are omitted with -d:danger, so making them catchable could break code.
(Positive 1) is Natural is false. (Positive 1) == (Natural 1) is true.
Is this really a rough edge? Positive and Natural are two different subrange types. The is operator checks whether the type (or the type of an expression, in this case) is the same type or part of a typeclass.
n in Natural doesn't work - afaict, to turn a range into a Slice, you have to do it manually: n in (Natural.low..Natural.high)
I agree that x in Positive is nice syntax sugar, but luckily it's not overly complex to make that an operator that compiles:
proc contains*[T](a: typedesc[range], b: T): bool = b in a.low..a.high
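With that proc in scope, the sugar should just work:

echo 5 in Positive # true
echo -3 in Natural # false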
To the other issues: Natural is not a new data type, it's a subrange type across the system integer range. Its goal, like any subrange type, is to easily give runtime and compile-time checks where needed and to make self-documenting code. In any case, the use of Natural and Positive isn't really encouraged. They just haven't specifically been removed or deprecated, because there isn't much of a reason to.
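A custom subrange illustrates both points (Percent and setOpacity are made-up names):

type Percent = range[0 .. 100]

proc setOpacity(p: Percent) = echo "opacity: ", p # the parameter documents itself

setOpacity(42) # fine
var x = 150
setOpacity(x) # compiles, but raises a RangeDefect at runtime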
They're not really useful either, because they only work for int. A proper implementation would look like this:
type
  PositiveOrZero[T: SomeNumber] = range[T(0)..high(T)]
  Positive[T: SomeInteger] = range[T(1)..high(T)]

proc foo(x: Positive[int]) = echo x
foo(1)
foo(-1) # error
But this doesn't compile due to some weird range generics bug, so we have to use templates:
template positiveOrZero(T: type SomeNumber): untyped = range[T(0)..high(T)]
template positive(T: type SomeInteger): untyped = range[T(1)..high(T)]
proc foo(x: positive int) = echo x
foo(1)
foo(-1) # error
@xigoi
Yeah, and 1e-1000 shouldn't be 0.0. Unfortunately, computers have finitely many bits.
Point taken. I'm not saying Natural should be arbitrary precision; just that low is a logical restriction from the concept of the naturals, while high is a physical restriction of the representation. I'm not sure of a better way to present this, though.
Range checks are omitted with -d:danger, so making them catchable could break code.
Maybe I misunderstand {.raises.}, or maybe I expect things from the compiler that should be done with static analysis tools like DrNim.
@ElegantBeef
it's not overly complex to make that an operator that compiles
Nice!
Is this really a rough edge? Positive and Natural are two different subrange types. The is operator checks whether the type (or the type of an expression, in this case) is the same type or part of a typeclass.
I definitely think so. I understand that this is a distinction at the type level, but also, it's an odd choice that numeric narrowing type conversions are otherwise explicit.
Here's another few head-scratchers:
# you can create a bad value without casts:
pred(Natural 0) # -1: Natural
var n = Natural(0)
n -= 1 # -1: Natural
Positive.default # 0: Positive
# the type differs depending on whether you use ordinal operations or integer operations
succ(Natural 1, Natural 1) # 2: Natural
(Natural 1) + (Natural 1) # 2: int
To the other issues: Natural is not a new data type, it's a subrange type across the system integer range. Its goal, like any subrange type, is to easily give runtime and compile-time checks where needed and to make self-documenting code.
I would rather have positive and natural be predicates that can be attached (statically or dynamically) to a value than be concrete types themselves.
I should probably "put up or shut up" and write my own refinement type implementation and see if I can make something both correct and ergonomic!
As per my examples, Nim doesn’t even check them all at runtime!
And the halting problem is a cop-out in this case. You can’t automatically detect all unsafe code, but you can enforce “trivial semantic properties” (a la Rice’s theorem) like type-safety and provide conspicuous escape hatches so unsafe code looks unsafe.
I would rather have positive and natural be predicates that can be attached (statically or dynamically) to a value than be concrete types themselves.
I actually have a relevant situation to this, but it's a bit long and probably derailing/not fit for the forum, and I'd risk embarrassment if the ideas are ill thought out; but I'll say it anyway. I'd have put it in <details> like on GitHub, but I can't find a way to do something similar in RST, so for now I'll just put it in a blockquote.
In my scripting language I've been trying to do something like this, where refinement types are "properties" over concrete types that you can use in both compile-time overloading and optimized dynamic dispatch. For example, the Positive property here has a predefined check of value > 0 that is used in dynamic dispatch:
# syntax is conceptual
Positive = property(check: x => x > 0)
foo(_: Int{Positive}) = "positive"
foo(_: Int{Not(Positive)}) = "not positive" # without Not() dynamic dispatch becomes complicated
x = 1{Positive} # attached to type
foo(x) # resolves positive overload at compile time
foo(1) # unknown at compile time if 1 is Positive or Not(Positive)
# so this is equivalent to:
if Positive.check(1), foo(1{Positive})
else if Not(Positive).check(1), foo(1{Not(Positive)})
It's not as freeform as pattern-based dispatch like in Erlang/Elixir or Haskell, but it abstracts away stuff like guard returns or IllegalArgumentException.
The roadblock of implementing this has been figuring out when and how to efficiently store/cache properties at runtime for specific values. For example I would like to implement distinct types this way (properties can also affect type relations) but that wouldn't work with dynamic dispatch without the property information being kept at runtime.
Macros are already implemented like this though, function types like (Context, Expression, Statement, Statement, Expression) -> Statement can have a property like Meta((Any, Int, Any, Any) -> String), where a symbol with such a type would act like Nim's macro foo(a: untyped, b: int, c: typed, d: untyped). The benefit is that this property is only considered during overloading and foo can be treated as just a regular function otherwise.
I wouldn't know if this is correct or ergonomic or sound though as admittedly it isn't very far along, I'm even considering moving function signatures to a property which is making me suspicious of the idea.
I'm guessing {.raises.} only cares about exceptions that are always present regardless of -d:danger.
Correct, defects are not tracked by the effects system (and defects in general are oddly used).
Here's another few head-scratchers:
Nim presently does not have any mechanism to set a default value for a type, so everything is defaulted to 0. As for Natural + Natural: Natural is a subrange type, not a distinct range type, so it uses the integer addition and does not return a ranged value. Consider range[2..4](2) + range[2..4](3): it's unsafe to just use the integer mechanisms and then assume the result is in the range. It's of course not ideal, but one can resolve this themselves with the following:
import std/[typetraits, macros]

type Range = concept r
  r is range

macro getBaseType*(r: Range): untyped = r.getTypeImpl[^1][1].getType

proc `+`*[T: Range](a, b: T): T =
  T(getBaseType(a)(a) + getBaseType(a)(b))

proc `-`*[T: Range](a, b: T): T =
  T(getBaseType(a)(a) - getBaseType(a)(b))

proc `*`*[T: Range](a, b: T): T =
  T(getBaseType(a)(a) * getBaseType(a)(b))

proc `div`*[T: Range](a, b: T): T =
  T(getBaseType(a)(a) div getBaseType(a)(b))

echo typeof(Positive(10) + Positive(1000))
@Hlaaftana yeah it makes sense that this was something that was just “good enough” at some point and could use better design.
In my scripting language I've been trying to do something like this, where refinement types are "properties" over concrete types …
Very cool to see how you’re approaching this. The way I’m trying to do it in Nim is similar - a wrapper type with a predicate and some value(s). There can be an explicit constructor to check the predicate at runtime and an implicit converter to weaken or forget the predicate.
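Roughly this shape, with the predicate hardcoded for brevity (Pos, pos, and toInt are placeholder names; the real version is generic over the predicate):

type Pos = object
  value: int # invariant: value > 0

proc pos(x: int): Pos =
  # explicit constructor: checks the predicate at runtime
  doAssert x > 0, "not positive"
  Pos(value: x)

converter toInt(p: Pos): int =
  # implicit converter: weakens/forgets the predicate
  p.value

let p = pos(5)
echo p + 1 # the converter lets a Pos be used wherever an int is expected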
@ElegantBeef Cool idea, but I don’t think it quite works. For instance, Naturals are closed under addition but not subtraction (indeed one way to define negative numbers is as equivalence classes of pairs of naturals).
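Concretely, with the - defined above in scope (my example, values picked arbitrarily):

echo Positive(10) - Positive(1) # 9, fine
echo Positive(1) - Positive(10) # int result is -9; converting it back to Positive raises a RangeDefect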
As for the default value of zero, I understand why it’s the case, it’s just unfortunate.
@demotomohiro Saying integers mod N is somewhat misleading too - subtraction and division are defined as per the integers, not the way you’d define them as residue classes. Naming things is hard. Also, yes, they are most suitable for parameters; it is nice to be able to declare parameter preconditions in code.