For frontend, Karax is the way to go.
For backend, there are at least Jester and Rosencrantz.
There's no Nim-sponsored hosting solution; you can pick any provider you like. I used to use Zeit when they offered Docker deployments. People say Heroku is nice.
For future reference, here's a link to the Heroku buildpack for Nim: https://elements.heroku.com/buildpacks/vic/heroku-buildpack-nim
Haven't tested it yet
Found this in the "Curated Packages" list for Nim: https://github.com/planety/prologue
Has anybody here used Prologue? How does it compare to Jester?
Myself, I use Digital Ocean for hosting, using docker-compose. I keep the source code (w/o credentials) for each site on separate private repos and put the results on a separate repo for the docker instance.
Essentially, each website is compiled to a native executable via Nim's C backend. I wrote a move.sh bash script that copies only the important files, including the executable, to the distribution repo.
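A rough sketch of what such a move script might look like (the names `webapp`, `public`, and the `dist/ngo` path are illustrative assumptions, not the actual script):

```shell
#!/usr/bin/env bash
# Hypothetical move.sh sketch: copy only the deployable artifacts
# (compiled binary plus static assets) into the distribution repo.
# "webapp", "public", and "dist/ngo" are made-up names for illustration.
set -euo pipefail

SRC="${1:-.}"          # website source/build directory
DST="${2:-dist/ngo}"   # matching subdirectory in the distribution repo

mkdir -p "$DST"
for artifact in webapp public; do
  if [ -e "$SRC/$artifact" ]; then
    cp -r "$SRC/$artifact" "$DST/"
  fi
done
echo "synced artifacts to $DST"
```

Copying only build outputs keeps source and credentials out of the distribution repo.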
So, publishing changes from any of the websites just means re-running move.sh and pushing the distribution repo.
Each website has its own subdirectory in the distribution repo.
An example Dockerfile:

```dockerfile
FROM python:3.7
WORKDIR /ngosite
ADD . /ngosite
```
(Ignore the word "python" up there. I'm not running any Python; it's just a convenient starting image for me.)
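Since the container only needs to run a pre-built binary, a slimmer base image could work as well. A sketch, assuming the binary was compiled on a compatible glibc-based Linux (the image tag and CMD path are illustrative):

```dockerfile
# Sketch only: any small glibc-based image can host the pre-built binary.
FROM debian:stable-slim
WORKDIR /ngosite
ADD . /ngosite
CMD ["/ngosite/webapp"]
```

The docker-compose file below supplies a `command:` anyway, so the CMD line is optional here.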
In the "nginx" subdirectory, I have one conf file per website, similar to:

```nginx
# ngo.conf
server {
    listen 80;
    server_name nimgame.online.localtest.me;

    location / {
        proxy_pass http://ngo:5000/;
        proxy_set_header Host "nimgame.online";
    }
}

server {
    listen 80;
    server_name nimgame.online;
    return 301 https://nimgame.online$request_uri;
}

server {
    listen 443 ssl;
    server_name nimgame.online;

    ssl_certificate nimgame.online.crt;
    ssl_certificate_key nimgame.online.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://ngo:5000/;
        proxy_set_header Host "nimgame.online";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
My overall docker-compose.yaml file looks something like this:
```yaml
version: '3.3'

services:
  ngo:
    restart: always
    build: ./ngo
    ports:
      - "5000"
    volumes:
      - "./ngo:/ngosite"
    command: /ngosite/webapp
  tr:
    restart: always
    build: ./tr
    ports:
      - "5000"
    volumes:
      - "./tr:/trsite"
    command: /trsite/webapp
  ps:
    restart: always
    build: ./ps
    ports:
      - "5000"
    volumes:
      - "./ps:/pssite"
    command: /pssite/webapp
  nginx:
    restart: always
    build: ./nginx/
    ports:
      - "80:80"
      - "443:443"
    links:
      - ngo
      - tr
      - ps

volumes:
  datavolume:
```
I don't run any database myself. IMO, that is a good way to lose data unless you are a skilled DB admin and have built a full cluster. Instead I have a subscription with ScaleGrid for shared databases. In fact, I choose ScaleGrid database instances that are on the same AWS network as my Digital Ocean instances; there's no point in doing database queries across the open Internet backbone.
In fact, I generally never store important new data in the Docker containers. I do store cache data locally. For example, I store IP geo-lookup caches locally; it's okay if those get wiped out from time to time.
I'm not claiming my setups are ideal, but this can be a starting point.
I thought Digital Ocean wouldn't have been an option with Nim, but Docker seems to make it possible...
You don't need Docker to run Nim on Digital Ocean (or any VPS/server hosting provider). You've got a whole Linux machine to do with what you wish. Most of the time Docker is overkill.
@dom96
True. I run docker-compose for consistency and separation mostly.
consistency
It almost guarantees that the instance on my local laptop will be truly replicated on any instance I run on the cloud. My nginx configs always include *.localtest.me support.
Basically, I can test a website locally just by running docker-compose up and browsing to, e.g., nimgame.online.localtest.me.
I also use docker-compose's environment variable support to avoid storing passwords and credentials in repos. Mostly. I'm not always consistent about that :).
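A sketch of what that looks like (the `DB_URL` variable name is just an illustration): docker-compose substitutes the value from the shell environment or from an uncommitted .env file, so the secret never lands in the repo.

```yaml
# Sketch: pass a secret in via the environment instead of committing it.
# DB_URL is a made-up name; docker-compose reads it from the shell
# environment or from a .env file kept out of version control.
services:
  ngo:
    build: ./ngo
    environment:
      - DB_URL=${DB_URL}
```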
separation
I typically run a couple dozen low-traffic websites on a docker-compose instance. I don't want the websites messing with each other. Actually, with Nim, I suspect I could run many more than that. Nim-compiled websites are really fast and have a tiny footprint.
In fact, right now, one of my Nim-based websites is really messing up due to some obscure db-related bug. But the other 6 Nim sites on that same instance are running smooth and unaffected.
downside
The biggest downside, IMO, is the learning curve. If you are not comfortable with Docker, that is a non-trivial amount to learn. It has a lot of conceptual curveballs. It's like git: only easy once you already understand it. A catch-22.
But in general you are right. Anything that can run nginx and a compiled Unix program can host a Nim-based website. In fact, one would be hard-pressed to find systems that can't run a Nim website. I've put them up on Linode as well, where I don't use Docker (I could, but I mostly use Linode for experiments).