Hey all, I'd like to share the first post in a new series I'll be writing on applied linear algebra.
This first post introduces vectors and basic vector operations. After much thought, I decided to introduce Nim concepts non-linearly and on an "as needed" basis, since this better reflects how projects/work typically unfold when learning on the job (e.g., we talk about generics before talking about primitive types).
I want to focus on the bare minimum needed for "doing science" by applying concepts from linear algebra. The outcome of the series will be a basic linear algebra library that we co-create and use/update along the way for simple tasks like image segmentation, signal processing, linear regression, clustering, etc.
I think I will post a new article roughly every 10 days (about three per month) for the rest of the year, taking feedback on which areas to focus on as needed.
It is likely the series will jump between concepts, and any feedback is appreciated.
Hopefully this will be my first of many contributions to the Nim community!
Great article, can't wait for the next one!
Small mistake here:
white = @[255, 255, 255]
# Print out its length
echo white.len()
# Output -> 2
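For reference, a minimal corrected snippet (with a `let` added so it compiles on its own); the seq has three elements, so its length is 3:

```nim
# Corrected version: a three-element seq reports length 3.
let white = @[255, 255, 255]
echo white.len()
# Output -> 3
```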
Awesome idea and execution!
In the first code example, you didn't explain why array[0..1, float] is the same thing as array[2, float], which might be confusing.
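For readers seeing this for the first time, a minimal sketch of that equivalence: in Nim, `array[2, float]` is shorthand for `array[0..1, float]` (a fixed-size array indexed 0 through 1), so values of one type can be assigned directly to the other:

```nim
# array[2, float] is sugar for array[0..1, float]:
# both declare a fixed-size, 2-element array indexed 0..1.
var a: array[2, float] = [1.0, 2.0]
var b: array[0..1, float] = a  # assignment works: same type
echo b[0], " ", b[1]
```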
wierd -> weird
I liked reading it, but I noticed two minor typos. Planetis caught the first one (both instances of Output -> 2 are incorrect), but there was another one in the comments a bit later:
# Because we know we want to support vector addition for vectors
# of *bot* integers, and floats, and maybe other types like Natural numbers, etc,
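Since the quoted comment is about making vector addition generic over element types, here is a minimal sketch of what such a generic proc could look like (a hypothetical `add`, not necessarily the article's exact version):

```nim
# Hypothetical generic vector addition: works for any static size N
# and any element type T that supports `+` (ints, floats, ...).
proc add[N: static int, T](a, b: array[N, T]): array[N, T] =
  for i in 0 ..< N:
    result[i] = a[i] + b[i]

echo add([1, 2, 3], [4, 5, 6])    # integer vectors
echo add([1.0, 2.0], [0.5, 0.5])  # float vectors
```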
You were writing about array indexing and how Nim starts at 0, in contrast to math notation, which starts at 1. A concept I wanted to mention, though it may be better to leave out, is that you can define arrays with ranges instead of integers for the size.
var test: array[1..2, int]
test = [3, 4]
echo test[1]
# Output -> 3
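As a small extension of the example above (under the same assumption that a range-indexed array fits here), ranged arrays also report their bounds via `low` and `high`, which makes the 1-based, math-style indexing explicit:

```nim
# A 1-based array, mirroring the v_1..v_3 indexing common in math texts.
var v: array[1..3, float]
v = [1.0, 2.0, 3.0]
echo v.low   # first valid index: 1
echo v.high  # last valid index: 3
echo v[1]    # first element, math-style
```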
This is great - I've never used this feature before. I agree that this might not be the best place to introduce it, but it might come up in the modeling posts (kernels, convolutions, caching, ...).
Thanks for bringing this up!
It should be mentioned in the intro/outro that there are already some good linear algebra libraries out there, and that we don't have to reimplement vector operations ourselves in 2023.
-> icedquinn's icedgmath: https://sr.ht/~icedquinn/icedgmath/
-> SciNim's nimblas: https://github.com/SciNim/nimblas (if explaining in layman's terms what a BLAS implementation and Nim bindings are does not overload your tutorial)
-> mratsim's Arraymancer: https://github.com/mratsim/Arraymancer
Have you considered using https://github.com/pietroppeter/nimib to automatically check your tutorial's code?
I am looking forward to the more advanced tutorials you mentioned.
Hey @dlensoff, thanks for the notes!
Absolutely agree that one should not reimplement anything in 2023 (not even in 2015, actually), and the awesome libraries you mention are the ones I use on a day-to-day basis.
I think when looking for a tool to use, you should pick one that already exists. But when trying to understand the tools you already use, reinvention is a pretty good strategy.
So the aim isn't for everyone who follows along to publish their own lin-alg library, but to understand how those things work while also writing some Nim. A secondary goal is to increase the number of "hits" one gets when doing a Google search for something in Nim.
The series is probably not for everyone, especially not advanced users familiar with both lin-alg and Nim, and definitely not for those looking for a ready-made solution/tool.
I should probably mention that in the intro somewhere, like you said. I thought it was obvious, but I'm not so sure anymore.
And thanks for the nimib reminder, it totally slipped my mind. I will definitely consider using it.