Chunk large messages to avoid size limitations
The current communication solutions (websockets and gRPC) put limits on message size, effectively preventing models with many parameters from being shared over the network.
With websockets, this results in the client-side error `websockets.exceptions.ConnectionClosedError: sent 1009 (message too big); no close frame received`.
With gRPC, this results in the client-side error `grpc.aio._call.AioRpcError: ... grpc_status:8, grpc_message:"Received message larger than max (... vs. 4194304)"}`.
A quick-and-dirty solution might be to configure both applications not to limit message size (e.g. passing `max_size=None` when creating the websockets objects in the backend code). However, the proper way to tackle this issue would be to implement some sort of message-splitting method: either cutting the message content into chunks, or specifying that some fields (model weights, optim aux-vars...) are to be received through secondary exchanges with relaxed size constraints and/or chunking mechanisms.
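The chunk-cutting option could look like the following minimal sketch. All names (`split_message`, `join_message`) and the 1 MiB chunk size are illustrative assumptions, not part of any existing API; the sketch only covers splitting and reassembly, leaving out the transport-level exchanges and ordering guarantees a real implementation would need.

```python
"""Hypothetical sketch of a message-chunking layer.

Assumes messages are already serialized to bytes; the chunk size and
function names below are illustrative, not an existing API.
"""

# 1 MiB per chunk: safely below gRPC's 4 MiB default limit (illustrative).
CHUNK_SIZE = 2 ** 20


def split_message(data: bytes, chunk_size: int = CHUNK_SIZE) -> list:
    """Cut a serialized message into fixed-size chunks."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]


def join_message(chunks: list) -> bytes:
    """Reassemble received chunks back into the original message."""
    return b"".join(chunks)


# Round-trip check on a payload larger than the 4 MiB gRPC default.
payload = b"\x00" * (5 * 2 ** 20)
chunks = split_message(payload)
assert all(len(chunk) <= CHUNK_SIZE for chunk in chunks)
assert join_message(chunks) == payload
```

In practice the sender would transmit each chunk as a separate message (with some header indicating ordering and total count), and the receiver would buffer them until the last chunk arrives before deserializing.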