I think I need this, to make sure I can safely put the data on the shared heap, and pass it around in Channels. I don't want to use local heap data that is copied by the Channel. Also, I need "any type" Channels, so I just want to (have to) pass pointers around. "Ownership" of the data must pass to the receiver thread. I'm trying to do this with the current Nim version, even if coming changes might make some of this easier/obsolete.
Do I need "concepts" to do the (negative) matching? Is there any example anywhere?
Later I'll need to "outlaw" the "int" and "float" types too (and probably also raw pointers), to make the format architecture neutral, but I'm still far from that.
For no. 1, maybe like this?
type
  TypeError = object of Exception

proc withmytype[T](a: T) =
  when T is ref or T is ptr:
    raise newException(TypeError, "Supply only primitive value type")
  else:
    discard

var a = newSeq[int]()
try:
  withmytype(a.addr)
except TypeError:
  echo getCurrentExceptionMsg()
withmytype(5)
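The same check can also be written directly as a generic constraint, so the call simply doesn't compile for ref/ptr arguments instead of raising at run time. A minimal sketch (`send` is a made-up name here, and I've added seq/string to the exclusion since those are GC'd too):

```nim
# Hypothetical `send`: the constraint rejects ref/ptr (and GC'd
# seq/string) at the call site; no runtime check needed.
proc send[T: not (ref | ptr | seq | string)](x: T) =
  echo "safe to share: ", x

send(5)       # compiles
send(3.14)    # compiles
# var s = @[1, 2, 3]
# send(s)     # rejected: type mismatch at compile time
```

So no concepts are needed for this kind of negative matching; a `not` type class on the generic parameter is enough.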
It's explained here: https://nim-lang.org/docs/manual.html#generics-is-operator
Also, maybe the typetraits module can help? https://nim-lang.org/docs/typetraits.html
In particular the genericHead proc.
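For example (a minimal sketch, assuming the current typetraits API; `Payload` and `Head` are made-up names):

```nim
import typetraits

type Payload = seq[int]

echo Payload.name      # the type's name as a string: "seq[int]"

# genericHead strips the generic parameters: genericHead(seq[int]) is seq
type Head = genericHead(Payload)
doAssert Head[float] is seq[float]
```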
Just a small addition from me:
proc withmytype[T](a: T) =
  when T is ref or T is ptr:
    {.fatal: "Supply only primitive value type".}
  else:
    discard

var a = newSeq[int]()
withmytype(a.addr) # generates an error at compile time
withmytype(5)
This way the error will always be reported at compile time.

Interesting. TypeError doesn't display the msg "Supply only primitive value type",
but if ObjectAssignmentError (or some other error type?) is used, then the message is displayed.
Then it comes down to {.fatal: ...} for compile-time checks, or raise newException for run-time checks.
Example here
GC'd types mess up OpenMP, so I need 2 versions of each higher-order function: one with no GC allocation, and one with GC allocation in the end result (and no OpenMP).
proc map2*[T, U; V: not (ref|string|seq)](t1: Tensor[T],
                                          f: (T, U) -> V,
                                          t2: Tensor[U]): Tensor[V] =
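For context on why the GC matters: Nim's built-in `||` iterator maps a loop onto an OpenMP `parallel for` (enabled with `--passC:-fopenmp --passL:-fopenmp`; without the flags it runs serially), and the loop body must stay free of GC allocations. A minimal sketch of the pattern, with the single GC allocation hoisted out of the parallel loop (`addSquares` is a made-up name):

```nim
proc addSquares(a, b: seq[float]): seq[float] =
  result = newSeq[float](a.len)   # one GC allocation, before the parallel region
  for i in 0 || (a.len - 1):      # OpenMP parallel for; the body does no GC allocation
    result[i] = a[i]*a[i] + b[i]*b[i]

echo addSquares(@[1.0, 2.0], @[3.0, 4.0])   # @[10.0, 20.0]
```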
I think it's a bit off-topic but oh well ;).
Let's be clear about vocabulary first:
Parallelism is good for CPU, memory or cache-bound algorithms. Parallelism is: "I want to do something, how best to distribute it across all my resources"
And there is concurrency (Actors, channels, CSP ...). Concurrency is "Many things want my attention, how should I split it so that everyone is satisfied". Concurrency is good for IO-bound operations so that while waiting for something, you don't block everything else (say waiting for a webpage to load, a file to transfer or be saved on disk ...)
OpenMP is a data parallel framework for shared memory parallelism on a single (multi-core) machine.
For task parallelism the reference is Intel TBB (Threading Building Blocks, used extensively in OpenCV).
For data parallelism on a distributed system (network, cluster), the standard is MPI (Message Passing interface).
OpenMP and MPI can be mixed, so that MPI distributes the load across the cluster nodes and OpenMP distributes each sub-load across all the cores of each node.
You can check very short examples of Matrix Multiplication using OpenMPI, Intel TBB, OpenMP and Intel Cilk Plus (an older alternative to TBB) here: http://blog.speedgocomputing.com/search/label/parallelization
If you want to continue on this, it would be best to create another thread with a clearer title, in my opinion.
@mratsim
I know it's off-topic anyway but... please correct me if I'm wrong: as far as I know, MPI-3 provides data parallelism across both distributed AND shared memory systems. It seems quite widely supported.