Thanks! Looks very good for point 1.
Re element access: is there any chance in Nim to overload setters and getters so you can write A[i, j] instead of A[i][j]?
proc `[]`[T](A: seq[seq[T]], i, j: int): T = A[i][j]
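The matching setter can be overloaded the same way. A minimal sketch for a matrix stored as seq[seq[T]] (only to illustrate the operator overloading, not how Arraymancer does it):

proc `[]`[T](A: seq[seq[T]], i, j: int): T = A[i][j]
proc `[]=`[T](A: var seq[seq[T]], i, j: int, val: T) = A[i][j] = val

var m = @[@[1, 2], @[3, 4]]
m[0, 1] = 7      # rewritten by the compiler to `[]=`(m, 0, 1, 7)
echo m[0, 1]     # rewritten to `[]`(m, 0, 1), prints 7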
Or roll your own; here is a tensor in 50 lines (a matrix is a tensor of rank 2):
# MIT License
# Copyright (c) 2018 Mamy André-Ratsimbazafy

## This file gives basic tensor library functionality, because yes we can
import strformat, macros, sequtils, random

type
  Tensor[Rank: static[int], T] = object
    ## Tensor data structure stored on Cpu
    ## - ``shape``: Dimensions of the tensor
    ## - ``strides``: Number of items to skip to get the next item along each dimension.
    ## - ``offset``: Offset to get the first item of the tensor. Note: offset can be negative, in particular for slices.
    ## - ``storage``: A data storage for the tensor
    ## - Rank is part of the type for optimization purposes
    ##
    ## Warning ⚠:
    ##   Assignment ``var a = b`` does not copy the data. Data modification on one tensor will be reflected on the other.
    ##   However modification on metadata (shape, strides or offset) will not affect the other tensor.
    shape: array[Rank, int]
    strides: array[Rank, int]
    offset: int
    storage: CpuStorage[T]

  CpuStorage*{.shallow.}[T] = object
    ## Data storage for the tensor, copies are shallow by default
    data*: seq[T]

template tensor(result: var Tensor, shape: array) =
  result.shape = shape
  var accum = 1
  for i in countdown(Rank - 1, 0):
    result.strides[i] = accum
    accum *= shape[i]

func newTensor*[Rank: static[int], T](shape: array[Rank, int]): Tensor[Rank, T] =
  tensor(result, shape)
  result.storage.data = newSeq[T](shape.product)

proc rand[T: object|tuple](max: T): T =
  ## A generic random function for any stack object or tuple
  ## that initializes all fields randomly
  result = max
  for field in result.fields:
    field = rand(field)

proc randomTensor*[Rank: static[int], T](shape: array[Rank, int], max: T): Tensor[Rank, T] =
  tensor(result, shape)
  result.storage.data = newSeqWith(shape.product, T(rand(max)))

func getIndex[Rank, T](t: Tensor[Rank, T], idx: array[Rank, int]): int {.inline.} =
  ## Convert [i, j, k, l, ...] to the memory location referred to by the index
  result = t.offset
  for i in 0 ..< t.Rank:
    {.unroll.} # I'm sad this doesn't work yet
    result += t.strides[i] * idx[i]

func `[]`[Rank, T](t: Tensor[Rank, T], idx: array[Rank, int]): T {.inline.} =
  ## Index tensor (read)
  t.storage.data[t.getIndex(idx)]

func `[]=`[Rank, T](t: var Tensor[Rank, T], idx: array[Rank, int], val: T) {.inline.} =
  ## Index tensor (write)
  t.storage.data[t.getIndex(idx)] = val
You can easily equip it with many operations in a few lines of code: https://github.com/SimonDanisch/julia-challenge/blob/b8ed3b6/nim/nim_sol_mratsim.nim#L224-L230
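For illustration, a minimal usage sketch of the tensor above (hypothetical code, assuming the snippet compiles as posted on your Nim version):

var a = newTensor[2, float]([2, 3])   # a 2x3 matrix, i.e. a rank-2 tensor
# for shape [2, 3] the strides are [3, 1]: moving down one row skips 3 items
a[[0, 1]] = 3.14                      # the indices are passed as an array, so this calls `[]=`
echo a[[0, 1]]                        # calls `[]` and prints 3.14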
In 2D or 3D graphics/physics/geometry you rarely use vectors longer than 4 or matrices larger than 4x4. In that case the following Nim libraries are more suitable than Arraymancer: https://github.com/treeform/vmath and https://github.com/stavenko/nim-glm
I don't know much about Arraymancer, but it seems to be designed for large vectors/matrices (dimension > 100). vmath and nim-glm use plain arrays to store the data, so the size of a vector/matrix must be determined at compile time, whereas in Arraymancer the size can change at runtime. If you use an array, the data can be placed on the stack or on the heap (a seq is always heap-allocated).
Here is how nim-glm implements matrix-vector multiplication: https://github.com/stavenko/nim-glm/blob/47d5f8681f3c462b37e37ebc5e7067fa5cba4d16/glm/mat.nim#L341
And here is how nim-glm implements the matrix[i, j] subscript operator: https://github.com/stavenko/nim-glm/blob/47d5f8681f3c462b37e37ebc5e7067fa5cba4d16/glm/mat.nim#L51 The two signatures are (bodies elided, see the link):

# This proc implements matrix[i, j] = a.
proc `[]=`*[M, N, T](v: var Mat[M, N, T]; ix, iy: int; value: T): void {.inline.}

# This proc implements a = matrix[i, j].
proc `[]`*[M, N, T](v: Mat[M, N, T]; ix, iy: int): T {.inline.}
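For readers who do not want to click through, here is a self-contained sketch of a small fixed-size, column-major matrix with such a subscript operator plus a matrix-vector product. The Vec/Mat names mirror nim-glm, but this is only an illustrative sketch, not nim-glm's actual code:

type
  Vec[N: static[int], T] = object
    arr: array[N, T]
  Mat[M, N: static[int], T] = object
    arr: array[M, Vec[N, T]]   # M columns, each holding N rows

proc `[]`*[M, N, T](v: Mat[M, N, T]; ix, iy: int): T {.inline.} =
  v.arr[ix].arr[iy]            # ix selects the column, iy the row

proc `[]=`*[M, N, T](v: var Mat[M, N, T]; ix, iy: int; value: T) {.inline.} =
  v.arr[ix].arr[iy] = value

proc `*`*[M, N, T](m: Mat[M, N, T]; x: Vec[M, T]): Vec[N, T] =
  ## Column-major matrix-vector product: result[r] = sum over c of m[c, r] * x[c]
  for c in 0 ..< M:
    for r in 0 ..< N:
      result.arr[r] += m[c, r] * x.arr[c]

var m: Mat[3, 2, float32]                            # 3 columns, 2 rows
m[0, 0] = 1.0'f32
m[1, 1] = 2.0'f32
let x = Vec[3, float32](arr: [1'f32, 1'f32, 1'f32])
echo m * x                                           # arr = [1.0, 2.0]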