Design discussion #1
i'm feeling pretty good about an HTTP/JSON API with a unix-socket-based stream manager, where the client polls the configured directories in the filesystem for new incoming sockets
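A minimal sketch of what that client-side polling loop could look like in Go, assuming the daemon drops one unix socket per incoming stream into a configured directory; the directory path and poll interval are placeholders, not anything we've agreed on:

```go
package main

import (
	"fmt"
	"net"
	"os"
	"path/filepath"
	"time"
)

// pollForStreams scans the configured directory and attaches to any socket
// files it hasn't seen before. Each new socket represents a freshly opened stream.
func pollForStreams(dir string, seen map[string]bool) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return
	}
	for _, e := range entries {
		path := filepath.Join(dir, e.Name())
		if seen[path] {
			continue
		}
		seen[path] = true
		go handleStream(path)
	}
}

func handleStream(path string) {
	conn, err := net.Dial("unix", path)
	if err != nil {
		return
	}
	defer conn.Close()
	fmt.Println("attached to stream at", path)
	// ... read/write application data on conn here ...
}

func main() {
	seen := make(map[string]bool)
	for range time.Tick(500 * time.Millisecond) {
		pollForStreams("/tmp/p2pd/streams", seen) // hypothetical directory
	}
}
```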
Per discussion with bigs on zoom:
The protocol header must contain at minimum:
We settled on using delimited protobuf for the protocol header, as this saves the need to write custom serializers.
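For reference, a sketch of what varint-delimited protobuf framing looks like in Go; the actual header message isn't defined yet, so this just frames an arbitrary proto.Message:

```go
package framing

import (
	"bufio"
	"encoding/binary"
	"io"

	"google.golang.org/protobuf/proto"
)

// writeDelimited writes a varint length prefix followed by the marshalled message.
func writeDelimited(w io.Writer, msg proto.Message) error {
	buf, err := proto.Marshal(msg)
	if err != nil {
		return err
	}
	var lenBuf [binary.MaxVarintLen64]byte
	n := binary.PutUvarint(lenBuf[:], uint64(len(buf)))
	if _, err := w.Write(lenBuf[:n]); err != nil {
		return err
	}
	_, err = w.Write(buf)
	return err
}

// readDelimited reads a varint length prefix and then exactly that many bytes into msg.
func readDelimited(r *bufio.Reader, msg proto.Message) error {
	length, err := binary.ReadUvarint(r)
	if err != nil {
		return err
	}
	buf := make([]byte, length)
	if _, err := io.ReadFull(r, buf); err != nil {
		return err
	}
	return proto.Unmarshal(buf, msg)
}
```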
Before going into design considerations too much, let's flesh out our motivations so we get on the same page: #3
Notes on the current discussion:
If we are going JSON-less, let's not add another format -- we can use protobufs for the control protocol and multiplex in the UNIX socket.
For multi-tenant applications we might have the issue of who's handling the streams -- there can be only a single stream handler for each protocol.
Couple of points here:
For the multi-tenant mode, the master control plane could accept only two commands:
Just some initial brainstorming, really.
I think that requiring clients to implement yamux/secio is a huge burden for bindings implementors (speaking as one :)
Agree. Bindings should not perform multiplexing; that's precisely what the daemon does for them, tunnelling each stream onto a physical mapping atop a local transport. So there'd be a 1:1 mapping between a local resource (e.g. shm, socket) and a backing stream. Regarding secio, if we want multitenancy and isolation in the future, I guess we'll need an encryption mechanism to avoid apps cross-reading streams. But yeah, that complicates binding implementations. Alternatively, we could leverage OS resources like cgroups to provide the isolation.
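To make the 1:1 mapping concrete, here's a rough sketch in Go that pipes bytes between a per-stream unix socket and its backing stream; the backing stream is modelled as a plain io.ReadWriteCloser, and openStream is a hypothetical hook into the daemon's stream layer:

```go
package proxy

import (
	"io"
	"net"
)

// bridge ties one local unix connection to one backing stream for its lifetime.
func bridge(local net.Conn, stream io.ReadWriteCloser) {
	done := make(chan struct{}, 2)
	go func() { io.Copy(stream, local); done <- struct{}{} }()
	go func() { io.Copy(local, stream); done <- struct{}{} }()
	<-done // either side closing tears the mapping down
	local.Close()
	stream.Close()
}

// serve accepts one local connection per stream on a dedicated socket path,
// opening a new backing stream for each connection it accepts.
func serve(socketPath string, openStream func() (io.ReadWriteCloser, error)) error {
	l, err := net.Listen("unix", socketPath)
	if err != nil {
		return err
	}
	defer l.Close()
	for {
		conn, err := l.Accept()
		if err != nil {
			return err
		}
		stream, err := openStream() // hypothetical hook into the daemon's stream layer
		if err != nil {
			conn.Close()
			continue
		}
		go bridge(conn, stream)
	}
}
```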
Technically, we don't even need to do that as long as we can whitelist (although we may want to anyways).
My thinking here is that we'd use a super special local-only transport. That is, we build a SHM/unix domain socket transport that does 90% of the work on the server side. We could even have multiple: a simple one that uses a new file descriptor per stream and a fancy one that uses memory mapping and a single socket for control information. We also don't need to do any secio/encryption as it's all local and privacy/authentication can be enforced by the kernel. The real tricky part here would be key management because we currently expect all "peers" to be identified by a public key. We could do an authentication round using signatures but that feels like overkill. The only tricky part here would be peer IDs but, in theory,
Let's not have so many words and no code!
I'm sorry, but is an SHM transport really needed?
libp2p-daemon becomes a bottleneck. SHM will allow clients to send data at much higher speeds than libp2p-daemon will be able to handle and send over the network, because the network is much slower than SHM. In that case libp2p-daemon will need to hold big buffers to keep all the incoming packets, or block incoming clients until it is able to send the data over the network.
@cheatfate the spec has actually been formalized (in its current state) in
Yeah, as @bigs says SHM is just on the radar, but not an immediate priority. We're aware of the complexity, and it'll warrant much experimentation. I think your remark boils down to needing a mechanism for backpressure. Unix domain sockets inherently provide this (I believe). With SHM, it'll need to be part of the protocol agreed between both processes. I have a lot of investigating to do before I can provide better answers, but for now SHM is just in the wishlist ;-)
yep, exactly. i envision back pressure as existing through use of something like a ring buffer that can mark messages that have/have not been read.
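A rough sketch of that ring-buffer bookkeeping in Go, with writers blocking while the next slot is still unread; this keeps everything in ordinary process memory, while a real SHM version would lay the same structure out in a shared mapping (not shown here):

```go
package ring

import "sync"

type slot struct {
	data []byte
	full bool // true until the reader marks it as read
}

type ringBuffer struct {
	mu       sync.Mutex
	notFull  *sync.Cond
	notEmpty *sync.Cond
	slots    []slot
	head     int // next slot to write
	tail     int // next slot to read
}

func newRingBuffer(n int) *ringBuffer {
	rb := &ringBuffer{slots: make([]slot, n)}
	rb.notFull = sync.NewCond(&rb.mu)
	rb.notEmpty = sync.NewCond(&rb.mu)
	return rb
}

// Write blocks while the next slot is still unread -- this is the backpressure.
func (rb *ringBuffer) Write(p []byte) {
	rb.mu.Lock()
	defer rb.mu.Unlock()
	for rb.slots[rb.head].full {
		rb.notFull.Wait()
	}
	rb.slots[rb.head] = slot{data: append([]byte(nil), p...), full: true}
	rb.head = (rb.head + 1) % len(rb.slots)
	rb.notEmpty.Signal()
}

// Read waits for an unread slot, returns its data, and marks it as read.
func (rb *ringBuffer) Read() []byte {
	rb.mu.Lock()
	defer rb.mu.Unlock()
	for !rb.slots[rb.tail].full {
		rb.notEmpty.Wait()
	}
	data := rb.slots[rb.tail].data
	rb.slots[rb.tail].full = false
	rb.tail = (rb.tail + 1) % len(rb.slots)
	rb.notFull.Signal()
	return data
}
```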
at this point, implementation details are starting to settle a bit. i'm going to close this conversation for now.
Opening a thread for the discussion of the control API for our daemon.
Control API
Responsibilities
Implementation details
Two solid options:
Stream Proxy
Responsibilities
Implementation details
Editor's note: I think we can get away with polling the filesystem/shared memory where our streams are created, as opposed to polling the control API, which would make things simpler.