Regarding the send syscall, I would say the 3rd overload is what I expect to use; 1, 2 and 4 are IMO sendall in semantics.
I know Nim has great overloading, but as a user I would expect clearer proc naming instead of having to figure out the meaning by looking at the parameters and return type.
Regarding the send syscall, I would say the 3rd overload is what I expect to use; 1, 2 and 4 are IMO sendall in semantics.
Note that's actually not correct.
It took me a while to figure out all the overloads of send. Though, as @araq points out, their names are clear and each has a purpose that's tied to the types you call it with.
What leads to the confusion with std/net (in my opinion) is the lack of an equivalent to Python's sendall. I believe it's easy for users to expect that the high-level send variant works similarly to Python's sendall. The docs aren't clear about it. I thought the same thing as @haoliang at first because I had grown accustomed to Nim generally just "doing the right thing". Right? ;)
Currently the send variants try to send all of the data in a single call. The "high-level" send just throws an exception on error, but doesn't ensure all the data actually gets sent. Generally this makes sense, as most OSes send all the bytes unless there's a socket error; in that case the user has to decide what to do (e.g. handle a closed socket).
Unfortunately, handling the edge cases properly requires using the low-level send. Mainly there's EINTR, which can cause partial writes to occur; it only happens under very heavy load, lots of small writes, etc.
I've been wanting to make a PR for a send variant that does have the semantics of Python's sendall. I'd written the code for something like it but deleted it since I wasn't sure about the API/contract. This post reminded me of the issue though. It seems that a sendAll, rather than yet another send overload, would be useful given the different semantics. There's precedent for similar procs in std/net, like recvLine. It'd make it easy to write correct socket programs using just std/net. Well, more correct socket programs.
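Roughly what I had in mind (just a sketch: the name sendAll is made up, the error handling is a placeholder, and it simply loops over the existing low-level Socket.send(pointer, size) overload while assuming a blocking socket):
import std/[net, os]

proc sendAll(socket: Socket, data: string) =
  ## Keeps calling the low-level send until every byte has been handed to
  ## the kernel, similar in spirit to Python's socket.sendall.
  ## Sketch only: assumes a blocking socket; EINTR/EAGAIN handling would
  ## still be needed for the non-blocking case.
  var written = 0
  while written < data.len:
    let res = socket.send(unsafeAddr data[written], data.len - written)
    if res <= 0:
      raiseOSError(osLastError())
    written.inc(res)
Whether it should raise on a disconnect or swallow it the way the SafeDisconn flag does is exactly the contract question I wasn't sure about.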
Alternatively, perhaps having a trySend that reports the bytes written would let the user re-send the remaining data. Perhaps something like:
var bytesWritten = 0
while not socket.trySend(data[bytesWritten..^1], bytesWritten):
  # check for socket closed and such
  discard
That seems kind of hacky though (and is probably incorrect).
Oh, I see. And yet:
Socket.send overloads 3 and 4 do the same thing as the send syscall, but 1 definitely means send all: "if one send did not send all data, send again".
Here is the first send's implementation:
proc send*(socket: AsyncFD, buf: pointer, size: int,
           flags = {SocketFlag.SafeDisconn}): owned(Future[void]) =
  var retFuture = newFuture[void]("send")
  var written = 0

  proc cb(sock: AsyncFD): bool =
    result = true
    let netSize = size-written
    var d = cast[cstring](buf)
    let res = send(sock.SocketHandle, addr d[written], netSize.cint,
                   MSG_NOSIGNAL)
    if res < 0:
      let lastError = osLastError()
      if lastError.int32 != EINTR and
         lastError.int32 != EWOULDBLOCK and
         lastError.int32 != EAGAIN:
        if flags.isDisconnectionError(lastError):
          retFuture.complete()
        else:
          retFuture.fail(newOSError(lastError))
      else:
        result = false # We still want this callback to be called.
    else:
      written.inc(res)
      if res != netSize:
        result = false # We still have data to send.
      else:
        retFuture.complete()
  # TODO: The following causes crashes.
  #if not cb(socket):
  addWrite(socket, cb)
  return retFuture
Socket.send overloads 3 and 4 do the same thing as the send syscall, but 1 definitely means "if one send did not send all data, send again", like Python's socket.sendall.
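So for async code, awaiting the high-level send already gives "send all" behaviour. A tiny sketch (untested, and push is just an example name) using asyncnet:
import std/[asyncdispatch, asyncnet]

proc push(sock: AsyncSocket, data: string) {.async.} =
  # The underlying future only completes once the callback above has
  # written all `size` bytes, so awaiting it behaves like Python's sendall.
  await sock.send(data)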
Ah yeah, the async versions do ensure they send all the data. Thanks, good to know (I don't use async).
It could be helpful for the docs on send#4 to clarify that the call doesn't ensure all the bytes are sent; currently they just say "sends data to a socket".
I made a test program that triggers a partial send on a socket (using socketpair). It can happen with TCP sockets as well, it's just harder to trigger.
I think that with send#4 it's not possible to properly recover from a partial write:
import posix
import nativesockets
import net
import os

when isMainModule:
  echo "running"

  # Create a connected pair of unix sockets to test with.
  var sv: array[2, cint]
  var res = socketpair(
    Domain.AF_UNIX.toInt,
    SockType.SOCK_STREAM.toInt, # or O_NONBLOCK.cint,
    Protocol.IPPROTO_IP.toInt,
    sv
  )
  echo "socketpair:res:", repr(res)
  echo "socketpair:val:", repr(sv)
  if res < 0:
    raise newException(OSError, "error")

  var
    sh0 = sv[0].SocketHandle
    sh1 = sv[1].SocketHandle
    s0 = sh0.newSocket(Domain.AF_UNIX, SockType.SOCK_STREAM, Protocol.IPPROTO_TCP, buffered = false)
    s1 = sh1.newSocket(Domain.AF_UNIX, SockType.SOCK_STREAM, Protocol.IPPROTO_TCP, buffered = false)

  # Non-blocking, so a full kernel buffer produces a short write instead
  # of blocking the sender.
  sh0.setBlocking(false)
  sh1.setBlocking(false)

  # Set the socket buffer size, then read it back.
  var sendbuff: cint =
    when hostOS == "linux":
      128*1028
    elif hostOS == "macosx":
      32*1028
  var optlen: SockLen = sizeof(sendbuff).SockLen
  echo "sock:sndbuff:size: ", sendbuff
  res = posix.setsockopt(sh0, SOL_SOCKET, SO_RCVBUF, sendbuff.addr.pointer, optlen)
  echo "sock:set:sndbuff:", repr(sendbuff), " sz: ", optlen.int, " res: ", res
  if res < 0:
    echo "osLastError(): ", osLastError().osErrorMsg()

  var sendbuffR: cint
  var optlenR: SockLen = sizeof(sendbuffR).SockLen # getsockopt needs the buffer size on input
  res = posix.getsockopt(sh0, SOL_SOCKET, SO_RCVBUF, sendbuffR.addr.pointer, optlenR.addr)
  echo "sock:get:sndbuff: ", repr(sendbuffR), " sz: ", optlenR.int, " res: ", res

  echo "\nsend: "
  # Try to send twice the configured buffer size.
  var tdata = newString(2*sendbuff)
  for i in tdata.low..tdata.high: tdata[i] = 'a'
  echo "sending:count: ", tdata.len()

  when defined(sendhighlevel):
    try:
      s0.send(tdata)
    except CatchableError:
      echo "error: couldn't send all data, how to recover?"
      echo getCurrentExceptionMsg()
  else:
    let scnt = s0.send(tdata.cstring, tdata.len())
    echo "sent:count: ", scnt
    if scnt != tdata.len():
      echo "error: couldn't send all data, but could recover"

  # regardless of the failure type, rdata gets some of the data
  var rdata = newString(2*sendbuff)
  echo "recv:buff:sz: ", rdata.len()
  let rcnt = s1.recv(rdata, rdata.len())
  echo "recv:bytes:got: ", $rcnt
Mainly there's EINTR, which can cause partial writes to occur; it only happens under very heavy load, lots of small writes, etc.
IIRC we disable EINTR on Nim's sockets.
IIRC we disable EINTR on Nim's sockets.
Oh, I'll dig into that. I've mostly gotten EAGAINs. In theory I think a TCP socket would exhibit the same issue as above when the kernel buffer fills up (when using setBlocking(false)). It's just harder to write an example. But I'll see what I can come up with and post it as a GitHub issue (bug, RFC?).
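For what it's worth, the workaround I'd reach for on a non-blocking socket is to wait for writability before retrying the low-level send. A rough sketch only (the proc names are made up, and this sidesteps whatever std/net does with errno internally):
import std/[net, os, posix]

proc waitWritable(fd: SocketHandle) =
  # Block until the kernel send buffer has room again.
  var writeSet: TFdSet
  FD_ZERO(writeSet)
  FD_SET(fd.cint, writeSet)
  discard select(fd.cint + 1, nil, addr writeSet, nil, nil)

proc sendAllNonBlocking(socket: Socket, data: string) =
  var written = 0
  while written < data.len:
    let res = socket.send(unsafeAddr data[written], data.len - written)
    if res > 0:
      written.inc(res)
    else:
      let err = osLastError()
      if err.int32 == EAGAIN or err.int32 == EWOULDBLOCK or err.int32 == EINTR:
        waitWritable(socket.getFd()) # buffer full or interrupted; wait and retry
      else:
        raiseOSError(err)
Something like that might be a starting point for the GitHub issue.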