OpenGPT seems to sort of work with Nim too.
>>> write a number guessing game in nim
resulted in:
import random

let secretNumber = random.randint(1, 10)
echo "I'm thinking of a number between 1 and 10. Can you guess what it is?"

while true:
  let guess = readInt("Enter your guess: ")
  if guess < secretNumber:
    echo "Your guess was too low. Try again!"
  elif guess > secretNumber:
    echo "Your guess was too high. Try again!"
  else:
    echo "Congratulations! You guessed the secret number!"
    break
It doesn't work perfectly: there is no randint in the random module. But it is quite a starting point.
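For reference, a minimal corrected sketch that does compile with the standard library: `rand(1..10)` (after `randomize()`) replaces the invented `randint`, and `parseInt(readLine(stdin))` replaces the invented `readInt`. Wrapping it in a proc is my choice, not part of the chatbot's output:

```nim
import random, strutils

proc guessingGame() =
  randomize()                    # seed the RNG, otherwise the sequence repeats
  let secretNumber = rand(1..10) # std random exposes rand, not randint
  echo "I'm thinking of a number between 1 and 10. Can you guess what it is?"
  while true:
    stdout.write "Enter your guess: "
    let guess = parseInt(readLine(stdin))  # no readInt; read a line and parse it
    if guess < secretNumber:
      echo "Your guess was too low. Try again!"
    elif guess > secretNumber:
      echo "Your guess was too high. Try again!"
    else:
      echo "Congratulations! You guessed the secret number!"
      break
```

Call `guessingGame()` to play.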
>>> write the negamax algorithm in nim
and I got:
import math

# The `Score` type is used to represent the score of a position in the game.
# A positive score means that the player to move is winning, while a negative
# score means that the player to move is losing. A score of 0 means that the
# position is a draw.
type Score = int

# The `negamax` function calculates the best move for the player to move in the
# given position. It returns a tuple containing the best move and the score of
# the position.
#
# `board` is the current state of the game.
# `depth` is the maximum depth to search. A depth of 0 means that the function
# should not recurse any further.
# `alpha` and `beta` are the bounds of the alpha-beta search.
# `player` is the player to move.
proc negamax(board: Board, depth: int, alpha: Score, beta: Score, player: Player): tuple[Move, Score] =
  # If the maximum search depth has been reached or the game is over, return
  # the score of the position.
  if depth == 0 or gameOver(board):
    return (nil, evaluate(board, player))

  # Initialize the best score and best move to negative infinity and a
  # sentinel value, respectively.
  let bestScore = -math.inf
  let bestMove: Move

  # Iterate over all possible moves.
  for move in getMoves(board):
    # Make the move and recurse.
    makeMove(board, move)
    let (, score) = negamax(board, depth - 1, -beta, -alpha, -player)
    unmakeMove(board, move)

    # Update the best score and best move if necessary.
    if -score > bestScore:
      bestScore = -score
      bestMove = move

    # Update the alpha bound.
    alpha = max(alpha, bestScore)

    # If the beta bound has been exceeded, stop searching.
    if alpha >= beta:
      break

  # Return the best move and score.
  return (bestMove, bestScore)
Damn. That is a good starting point. And it isn't a copy of my public version of negamax. It doesn't compile as-is, though: `bestScore` and `bestMove` are declared with `let` but reassigned, `-math.inf` is a float assigned where an int `Score` is expected, `alpha` is a parameter being mutated, and `let (, score)` is not valid tuple unpacking.
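To check that the structure is sound, here is a sketch with those issues fixed that actually compiles. The `Board`, `Move`, and game procs are my own stand-ins (a toy take-1-to-3-stones game where taking the last stone wins), not anything from the chatbot: it uses `var` where mutation is needed, an integer sentinel instead of `-math.inf`, a local copy of `alpha` (Nim parameters are immutable), and `Option[Move]` instead of `nil`:

```nim
import std/options

type
  Board = object
    stones: int        # toy game: a single pile of stones
  Move = int           # how many stones to take (1..3)
  Score = int

proc gameOver(b: Board): bool = b.stones == 0

proc evaluate(b: Board): Score =
  # From the point of view of the player to move: if the pile is empty,
  # the opponent took the last stone, so the side to move has lost.
  if b.stones == 0: -1 else: 0

proc getMoves(b: Board): seq[Move] =
  for take in 1 .. min(3, b.stones):
    result.add take

proc negamax(b: Board, depth: int, alpha, beta: Score): tuple[move: Option[Move], score: Score] =
  if depth == 0 or gameOver(b):
    return (move: none(Move), score: evaluate(b))
  var
    bestScore = low(Score) div 2   # integer sentinel; div 2 so negation can't overflow
    bestMove = none(Move)
    a = alpha                      # parameters are immutable, so mutate a copy
  for move in getMoves(b):
    # On an immutable value Board we recurse on a modified copy instead of
    # the make/unmake pair in the generated code.
    let child = Board(stones: b.stones - move)
    let (_, score) = negamax(child, depth - 1, -beta, -a)
    if -score > bestScore:
      bestScore = -score
      bestMove = some(move)
    a = max(a, bestScore)
    if a >= beta:                  # beta cutoff
      break
  return (move: bestMove, score: bestScore)
```

With this toy game, `negamax(Board(stones: 4), 10, -100, 100)` reports a losing position, as expected: whatever you take from 4, the opponent can take the rest.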
I also played a little with OpenGPT and Nim these days. I tried to see how good it could be at creating bindings:
The result won't work (there are too many failures), but it doesn't look too alien either.
I was wondering how well this sort of approach would work for bindings. There are some open-source options (GPT-J, GPT-Neo, ...) that could be fine-tuned for Nim, but the training process looks scary to me.
I am amazed to think how these things will look in 5-10 years.