Description
If a client call times out, the server throws a std::system_error when trying to write the answer back to the client.
I'm not sure whether the server should silently swallow this error, or whether an interface should be added so that user code can handle this kind of error.
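To illustrate the second option, here is a rough sketch of the kind of hook I have in mind; set_error_handler does not exist in rpclib, it is purely hypothetical:

// Hypothetical API, not part of rpclib today: let user code decide
// what happens when writing a response to a client fails.
server.set_error_handler([](const std::error_code &ec) {
    std::cerr << "transport error: " << ec.message() << '\n';
});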
A temporary workaround is to increase the client timeout, but that doesn't help if the client quits. When the server's run function is used, the exception can be caught and the server restarted (see the sketch below); this is not possible with the async_run function.
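For example, with run the workaround looks roughly like this (a minimal sketch; I have not verified that the port can always be rebound immediately after the failure):

#include <chrono>
#include <iostream>
#include <system_error>
#include <thread>
#include "rpc/server.h"

int main() {
    for (;;) {
        try {
            rpc::server server(9999);
            server.bind("bug", [] {
                using namespace std::chrono_literals;
                std::this_thread::sleep_for(2s);
            });
            server.run(); // blocks; throws when writing to a gone client fails
        } catch (std::system_error &e) {
            std::cerr << "restarting server after: " << e.what() << '\n';
        }
    }
}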
Note: I tried using rpc::server::suppress_exceptions, but it only catches exceptions thrown from within the bound callbacks.
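For reference, this is what I tried, right after constructing the server in the example below; it has no effect here because the std::system_error is thrown while writing the answer, not from within a handler:

rpc::server server(9999);
server.suppress_exceptions(true); // only affects exceptions thrown inside bound handlers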
The following is a minimal example.
Compilation and run
Running on Linux Mint 18.2.
server
$ clang++ server.cpp -o server -Iinclude -Llib -std=c++14 -lrpc -lpthread
$ ./server
terminate called after throwing an instance of 'std::system_error'
what(): shutdown: Bad file descriptor
Aborted
client
$ clang++ client.cpp -o client -Iinclude -Llib -std=c++14 -lrpc -lpthread
$ ./client
rpc::timeout: Timeout of 50ms while calling RPC function 'bug'
Sources
server.cpp
#include <thread>
#include <chrono>
#include "rpc/server.h"
int main() {
    rpc::server server(9999);

    // Sleep longer than the client's 50ms timeout, so the client gives up
    // before the server tries to write the answer back.
    server.bind("bug", [] {
        using namespace std::chrono_literals;
        std::this_thread::sleep_for(2s);
    });

    server.run();
}
client.cpp
#include <iostream>
#include "rpc/client.h"
#include "rpc/rpc_error.h"
int main() {
    rpc::client client("localhost", 9999);
    client.set_timeout(50); // milliseconds, far shorter than the 2s the call takes

    try {
        client.call("bug");
    } catch (rpc::timeout &e) {
        std::cerr << e.what() << '\n';
    }
}