A little bit of a light-hearted post here. I'll post two FizzBuzz solutions.
The FizzBuzz program: print the numbers from 1 to 100, each on a new line.
For each multiple of 3, print "Fizz" instead of the number.
For each multiple of 5, print "Buzz" instead of the number.
For numbers which are multiples of both 3 and 5, print "FizzBuzz" instead of the number.
My first attempt (shortest char count) https://play.nim-lang.org/#ix=28gV
for i in 1..100:
  var res = $i
  if i mod 3 == 0: res = "Fizz"
  if i mod 5 == 0:
    if res == "Fizz": res.add "Buzz"
    else: res = "Buzz"
  echo res
My second attempt https://play.nim-lang.org/#ix=28gW
proc fizzer(num: int, str: var string) =
  if str == "Fizz":
    if num mod 5 == 0:
      str.add "Buzz"
  else:
    if num mod 3 == 0:
      str = "Fizz"
      fizzer(num, str)
    elif num mod 5 == 0:
      str = "Buzz"
    else:
      str = $num

for i in 1..100:
  var str = ""
  fizzer(i, str)
  echo str
five characters saved:
for i in 1..100:
  if i mod 15==0:
    echo "FizzBuzz"
  elif i mod 5==0:
    echo "Buzz"
  elif i mod 3==0:
    echo "Fizz"
  else:
    echo $i
Of course, you could also remove all the fancy spacing in the if statement to make it shorter.
saved 5 more:
for i in 1..100:
  case i mod 15:
  of 0:
    echo "FizzBuzz"
  of 3,6,9,12:
    echo "Fizz"
  of 5,10:
    echo "Buzz"
  else:
    echo $i
I'm going to stop now. Down these paths lies OCD madness. :)
Here's one where you try to get the shortest character count:
for n in 1..100:echo [$n,"Fizz","Buzz","FizzBuzz"][(n mod 3==0).int+2*(n mod 5==0).int]
89 characters
@zevv, you're not even trying! 43 whitespace characters saved:
for i in 1..100:
  echo case i mod 15
    of 0: "FizzBuzz"
    of 3,6,9,12: "Fizz"
    of 5,10: "Buzz"
    else: $i
@miran: I was just hinting at using the case as an expression. I wonder, though, why you missed the obvious since you are trying so hard :)
for n in 1..100:echo [$n,"Fizz","Buzz","FizzBuzz"][int(n%%3<1)+2*int(n%%5<1)]
78
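The table-indexing trick in these one-liners works because each divisibility test converts to 0 or 1: the index is 0 for plain numbers, 1 for multiples of 3, 2 for multiples of 5, and 1 + 2 = 3 for multiples of 15. A quick sketch of the same idea (in Python, purely for illustration):

```python
# Index trick: (n % 3 == 0) contributes 1, (n % 5 == 0) contributes 2,
# so multiples of 15 land on index 3 ("FizzBuzz").
answers = [[str(n), "Fizz", "Buzz", "FizzBuzz"][(n % 3 == 0) + 2 * (n % 5 == 0)]
           for n in range(1, 101)]
print("\n".join(answers))
```

The Nim one-liners do exactly this, just spending a few characters on the explicit bool-to-int conversion.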
Well, not really creative but still, just use v (used @zevv's solution as a basis):
import v
vvvv vvvvv v vvvvv v vvvvv vvvvv vv vvv vv v v vvvvv v vvvv
vv v v vvvv vvvvv vvvv vvvvv v vvvv vv v v vv vvvv vvv vv vvv
vvvvv vv vvv vvvvv vv vvvv vvv vv vvvv vv vv vvvv vv vvv v vv
vvvv vvvv vvvvv vvvv vvvv vvv vvvv vvvvv vvv vvvvv v vvvvv vv
v v vvvv vv vvvvv vv v vvvvv vvvvv v vvvv vv vvv vvv vv v vvv
vvv vvv vvvv vvvv vvvvv vvvv vvvvv vvvv v vvvvv vvvv v vv v
vvv vv vvv vvv vv v vvv vvv vv vvvvv vvvvv vvv v vvvvv vvvv
v vvvvv vvvv v vv v vvv vv vvv vvv vv v vvv vvv vvv vvvv vvvv
vvvvv vvvv vvvvv vvvv v vvvvv vvvv v vvv vv vvvvv vvvvv vvv
v vvvvv vvvv v vvvvv vvvv v vv v vvv vvvv vvv vv vvvv vv vvvvv
vvvv vvvvv vvvv vvvvv v vvvv vvvvv vv vvvvv vv vv vvvv vvvvv
v vvvv vv vv v vv vv v vv vvvv vvvvv vvv v vvvv vv vvvv vvv
vv vv vvvvv vv vvv vv vv vvvv vvvv vv vvv v vvvv vvvvv vvvv
vvvvv v vvvv vvvvv vv vvvvv vv vv vvvv vvvvv v vvvv vv vv v
vv vv v vv vvvvv vv vvv v vvvv vv vvvv vvv vv vv vvvvv vvvv
vvv vv v v vvvv v v vvvv
Ok, something a bit more sophisticated: I've used v as a base and created a version of it which can represent any Nim code in "fizz", "buzz" and "fizzbuzz". The code works like this: we import v, and the v library "decodes" the code written with v's. That decoder is a modified (and minified) version of v which translates "fizz", "buzz" and "fizzbuzz" back into the original Nim code, which then runs the actual FizzBuzz by @zevv.
https://gist.github.com/Yardanico/c6495e6b8d0776ff07b6feabab0b1d17#file-lol-nim (all other files are here too)
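The real encoder/decoder lives in the gist above. Purely to illustrate the general idea (this is NOT @Yardanico's actual scheme), here is a hypothetical sketch that round-trips arbitrary source text through a stream of "fizz"/"buzz"/"fizzbuzz" tokens by writing each byte as six base-3 digits:

```python
# Hypothetical sketch only: represent arbitrary source code using just the
# three tokens "fizz", "buzz" and "fizzbuzz", one token per base-3 digit.
TOKENS = ["fizz", "buzz", "fizzbuzz"]

def encode(text: str) -> str:
    out = []
    for byte in text.encode("utf-8"):
        for _ in range(6):          # 3^6 = 729 >= 256, so six digits cover a byte
            out.append(TOKENS[byte % 3])
            byte //= 3              # least-significant digit first
    return " ".join(out)

def decode(tokens: str) -> str:
    words = tokens.split()
    data = bytearray()
    for i in range(0, len(words), 6):
        byte = 0
        for w in reversed(words[i:i + 6]):   # most-significant digit first
            byte = byte * 3 + TOKENS.index(w)
        data.append(byte)
    return data.decode("utf-8")
```

A real version, like the one in the gist, would additionally need to do the decoding at compile time (e.g. via Nim macros) so the decoded program actually runs; the sketch only shows the token representation.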
I have one in Arraymancer that uses a neural network trained on FizzBuzz on numbers between 101 and 1023 and then tested on 1 .. 100.
It seems to have learned division: https://github.com/mratsim/Arraymancer/blob/v0.6.0/examples/ex04_fizzbuzz_interview_cheatsheet.nim
# A port to Arraymancer of Joel Grus hilarious FizzBuzz in Tensorflow:
# http://joelgrus.com/2016/05/23/fizz-buzz-in-tensorflow/

# Interviewer: Welcome, can I get you a coffee or anything? Do you need a break?
# ...
# Interviewer: OK, so I need you to print the numbers from 1 to 100,
#              except that if the number is divisible by 3 print "fizz",
#              if it's divisible by 5 print "buzz", and if it's divisible by 15 print "fizzbuzz".

# Let's start with standard imports
import ../src/arraymancer, math, strformat

# We want to input a number and output the correct "fizzbuzz" representation.
# Ideally the input is represented by a vector of real values between 0 and 1.
# One way to do that is by using the binary representation of the number.
func binary_encode(i: int, num_digits: int): Tensor[float32] =
  result = newTensor[float32](1, num_digits)
  for d in 0 ..< num_digits:
    result[0, d] = float32(i shr d and 1)

# For the output, we distinguish 4 cases: nothing, fizz, buzz and fizzbuzz.
func fizz_buzz_encode(i: int): int =
  if i mod 15 == 0: return 3 # fizzbuzz
  elif i mod 5 == 0: return 2 # buzz
  elif i mod 3 == 0: return 1 # fizz
  else: return 0

# Next, let's generate training data. We don't want to train on 1..100, those are our test values.
# We can't tell the neural net the truth values, it must discover the logic by itself,
# so we use values between 101 and 1024 (2^10).
const NumDigits = 10

var x_train = newTensor[float32](2^NumDigits - 101, NumDigits)
var y_train = newTensor[int](2^NumDigits - 101)

for i in 101 ..< 2^NumDigits:
  x_train[i - 101, _] = binary_encode(i, NumDigits)
  y_train[i - 101] = fizz_buzz_encode(i)

# How many neurons do we need to change a light bulb, sorry, do a division? Let's pick ...
const NumHidden = 100

# Let's setup our neural network context, variables and model
let
  ctx = newContext Tensor[float32]
  X = ctx.variable x_train

network ctx, FizzBuzzNet:
  layers:
    hidden: Linear(NumDigits, NumHidden)
    output: Linear(NumHidden, 4)
  forward x:
    x.hidden.relu.output

let model = ctx.init(FizzBuzzNet)
let optim = model.optimizerSGD(0.05'f32)

func fizz_buzz(i: int, prediction: int): string =
  [$i, "fizz", "buzz", "fizzbuzz"][prediction]

# Phew, finally ready to train. Let's pick the batch size and number of epochs
const BatchSize = 128
const Epochs = 2500

# And let's start training the network
for epoch in 0 ..< Epochs:
  # Here I should probably shuffle the input data.
  for start_batch in countup(0, x_train.shape[0]-1, BatchSize):
    # Pick the minibatch
    let end_batch = min(x_train.shape[0]-1, start_batch + BatchSize)
    let X_batch = X[start_batch ..< end_batch, _]
    let target = y_train[start_batch ..< end_batch]

    # Go through the model
    let clf = model.forward(X_batch)
    # Go through our cost function
    let loss = clf.sparse_softmax_cross_entropy(target)

    # Backpropagate the errors and let the optimizer fix them.
    loss.backprop()
    optim.update()

  # Let's see how we fare:
  ctx.no_grad_mode:
    echo &"\nEpoch #{epoch} done. Testing accuracy"
    let y_pred = model
                   .forward(X)
                   .value
                   .softmax
                   .argmax(axis = 1)
                   .squeeze
    let score = y_pred.accuracy_score(y_train)
    echo &"Accuracy: {score:.3f}%"
    echo "\n"

# Our network is trained, let's see if it's well behaved.
# Now let's use it on what we really want to fizzbuzz: numbers from 1 to 100
var x_buzz = newTensor[float32](100, NumDigits)
for i in 1 .. 100:
  x_buzz[i - 1, _] = binary_encode(i, NumDigits)

# Wrap them for the neural net
let X_buzz = ctx.variable x_buzz

# Pass it through the network
ctx.no_grad_mode:
  let y_buzz = model
                 .forward(X_buzz)
                 .value
                 .softmax
                 .argmax(axis = 1)
                 .squeeze

  # Extract the answer
  var answer: seq[string] = @[]
  for i in 1..100:
    answer.add fizz_buzz(i, y_buzz[i - 1])
  echo answer
# @["1", "fizzbuzz", "fizz", "4", "buzz", "fizz", "7", "8", "fizz", "buzz",
# "11", "fizz", "13", "14", "fizzbuzz", "16", "17", "fizz", "19", "buzz",
# "21", "22", "23", "24", "buzz", "26", "fizz", "28", "29", "fizzbuzz",
# "31", "buzz", "33", "34", "buzz", "fizz", "37", "buzz", "fizz", "buzz",
# "41", "fizz", "43", "44", "fizzbuzz", "46", "47", "fizz", "49", "buzz",
# "51", "52", "53", "fizz", "buzz", "56", "fizz", "58", "59", "fizzbuzz",
# "61", "62", "fizz", "64", "65", "66", "67", "68", "69", "70",
# "71", "fizz", "73", "74", "fizzbuzz", "76", "77", "fizz", "79", "80",
# "81", "82", "83", "fizz", "buzz", "86", "fizz", "88", "89", "fizzbuzz",
# "91", "92", "93", "94", "buzz", "96", "97", "98", "fizz", "100"]
# I guess 100 neurons are not enough to learn multiplication :/.