I am trying to write a stub service that answers with a random response after a small, random delay. I expected sleepAsync to be the thing to use, like this:
import asynchttpserver, asyncdispatch, strtabs, json, math

proc handler(req: Request) {.async.} =
  let
    r = random(2) == 0
    j = %{"valid": %r}
  await sleepAsync(random(2000))
  await req.respond(Http200, $j,
    {"Content-Type": "application/json"}.newStringTable)

randomize()
let server = newAsyncHttpServer()
waitFor server.serve(Port(8080), handler)
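For reference, this is how I compile and run it (assuming the file is called stub.nim; the name is just for illustration) and poke it from another shell:

nim c -r stub.nim
curl http://localhost:8080/    # -> {"valid":true} or {"valid":false} after a random delay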
Unfortunately, this seems to behave differently than I thought. I expected sleepAsync to park my coroutine so that other coroutines were free to be scheduled. The expected outcome would be that each request experiences a random latency, but they are all served promptly.
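To illustrate what I mean by "parking", here is a minimal sketch (independent of the HTTP server; the napper proc is just an illustrative name). If sleepAsync really yields to the dispatcher, two coroutines that each await sleepAsync(1000) should finish in roughly one second total, because the sleeps overlap:

import asyncdispatch, times

proc napper() {.async.} =
  await sleepAsync(1000)   # should suspend only this coroutine

let start = epochTime()
waitFor napper() and napper()          # run both sleeps concurrently
echo "elapsed: ", epochTime() - start  # ~1 s if sleepAsync yields, ~2 s if it blocks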
Instead, what I see is that during sleepAsync the whole server is blocked and other requests are not served. This means that, as long as requests arrive at a sufficient rate, the delays accumulate as if everything were synchronous.
What am I doing wrong?
I tested this code here on OS X and got:
time ab -n 100 -c 100 http://127.0.0.1:8888/test
This is ApacheBench, Version 2.3 <$Revision: 1663405 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient).....done
Server Software:
Server Hostname: 127.0.0.1
Server Port: 8888
Document Path: /test
Document Length: 14 bytes
Concurrency Level: 100
Time taken for tests: 2.042 seconds
Complete requests: 100
Failed requests: 47
(Connect: 0, Receive: 0, Length: 47, Exceptions: 0)
Total transferred: 8547 bytes
HTML transferred: 1447 bytes
Requests per second: 48.97 [#/sec] (mean)
Time per request: 2042.253 [ms] (mean)
Time per request: 20.423 [ms] (mean, across all concurrent requests)
Transfer rate: 4.09 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 1 10 5.6 10 21
Processing: 506 1238 584.3 1011 2020
Waiting: 506 1238 584.3 1011 2020
Total: 507 1248 584.7 1031 2041
Percentage of the requests served within a certain time (ms)
50% 1031
66% 1530
75% 2022
80% 2026
90% 2031
95% 2037
98% 2038
99% 2041
100% 2041 (longest request)
real 0m2.078s
user 0m0.015s
sys 0m0.027s
This looks perfectly fine and like the expected result to me.
EDIT: Not sure what the 47 length errors are though :)
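(My guess: ab takes the body length of the first response as the reference "Document Length" and counts every response with a different length as a Length failure. Since valid is random, {"valid":true} and {"valid":false} differ by one byte, so roughly half the responses get flagged. A quick check of the two payload sizes:)

import json

echo len($(%{"valid": %true}))   # 14 -> matches "Document Length: 14 bytes"
echo len($(%{"valid": %false}))  # 15 -> counted by ab as a Length failure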
user@localhost ~> ab -n 1000 -c 100 http://localhost:8080/
This is ApacheBench, Version 2.3 <$Revision: 1604373 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software:
Server Hostname: localhost
Server Port: 8080
Document Path: /
Document Length: 15 bytes
Concurrency Level: 100
Time taken for tests: 13.679 seconds
Complete requests: 1000
Failed requests: 476
(Connect: 0, Receive: 0, Length: 476, Exceptions: 0)
Total transferred: 85524 bytes
HTML transferred: 14524 bytes
Requests per second: 73.10 [#/sec] (mean)
Time per request: 1367.938 [ms] (mean)
Time per request: 13.679 [ms] (mean, across all concurrent requests)
Transfer rate: 6.11 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 0.7 1 3
Processing: 6 1245 579.0 1067 2138
Waiting: 6 1245 579.2 1067 2138
Total: 7 1246 579.0 1068 2140
Percentage of the requests served within a certain time (ms)
50% 1068
66% 1586
75% 1592
80% 1613
90% 2109
95% 2116
98% 2131
99% 2139
100% 2140 (longest request)
user@localhost ~>