IPNS over PubSub Pinning #4
I definitely want to see an IPNS pinning service in some form ... it appears there may be multiple ways to implement such a service, e.g. republishing existing IPNS records in the DHT (not PubSub). Various related ideas have been around for a long time!
I recall some discussion (perhaps in the last IPNS Tiger team meeting?) about the meaning of the TTL in IPNS records, how it could possibly be extended, and how third-party peers might be able to republish IPNS keys to keep them alive in the DHT. Of course, the DHT is slow, so I'd like to understand better how IPNS interacts with PubSub and how the proposed persistence layer would work. There seem to be differing opinions on where the persistence might live on the network and in the code, so we might want to try implementing several different approaches to facilitate discussion and see what works best through testing (if we have the luxury of taking some time to get it right).
It is currently very difficult to find out what works and what does not. Pinning IPNS names still seems to just hang (all my daemons are started as …).
The definition of pinning that I am using has two components:
@marcinczenko if I understand correctly, `ipfs pin add` pins the CID (data) at the end of that path and not the pathing itself. More concretely, if QmKey is the IPNS key that points to QmData, then pinning `/ipns/QmKey` pins QmData, not the QmKey → QmData mapping. I'm not sure if you were running into issues with IPNS publishing over PubSub (e.g. …).

@jimpick An overview of IPNS over PubSub, both now and with some of the proposed changes, is below.

### Currently

Most of the go code for this is available at https://github.com/libp2p/go-libp2p-pubsub-router

#### First time publishing (per boot of the node)

When doing an IPNS publish, the publishing node both puts the latest IPNS record in the DHT and bootstraps the PubSub network for the particular IPNS key. The publisher then sends the IPNS record over the PubSub network.

#### PubSub Bootstrapping

Bootstrapping has two components: the node advertises that it supports PubSub for a particular topic/IPNS key, and the node discovers other nodes that support PubSub for the given topic/IPNS key. The advertising/discovery mechanism currently used for PubSub is to put/retrieve DHT provider records (i.e. the publisher's peerID and multiaddresses, like we do for IPFS) for the data.

#### Subsequent publishing

Publish the IPNS record to the DHT and over PubSub.

#### First time resolving (per boot of the node)

Do an IPNS resolve over the DHT and bootstrap PubSub for the particular IPNS key. The resolution occurs over the DHT since, unless you are lucky enough that a PubSub node publishes an update while you're in the middle of resolving, PubSub will have no records available.

#### Subsequent resolving

If messages came in over PubSub since the first resolution, then the latest message will be cached and waiting to be returned during a subsequent resolution.

### Proposed IPNS over PubSub Improvements

#### PubSub Bootstrapping

Make PubSub's bootstrapping mechanism compliant with anything following the discovery interface. See libp2p/go-libp2p-pubsub-router#28 for more details.
#### First time resolving (per boot of the node)

Most of this work is on-going at libp2p/go-libp2p-pubsub#171. Race the DHT against PubSub for resolving the data. However, this time it will be possible for us to receive data from PubSub, because as soon as we connect to a PubSub node subscribed to our topic/IPNS key, it will send us the latest version automatically. This seemingly small change both makes IPNS initial resolution much, much faster and gives us republishing basically for free.

### Where IPNS over PubSub Pinning comes in

Given that we have IPNS over PubSub improvements coming down the pipeline that will finally enable us to actually do pinning/non-author republishing, I'd like us to start working on how this pinning will work. There are some UX suggestions at ipfs/kubo#4435, but there's space to explore here, both in the UX and in connection/resource management.
@marcinczenko if you could open an issue in go-ipfs and tag me (and this issue) to continue this conversation, that would be great. Because the way you search through the DHT is based on your key, I'd recommend generating a few keys and seeing whether you get the same results, or whether some keys perform better than others. Also, if in the go-ipfs issue you could include the version of your binary, that would be great (there have been some fixes since 0.4.20, such as ipfs/kubo#6291). Thanks for the bug report and see you on the other issue.
I tried a couple of keys and it does not seem to have any impact; they are all pretty consistent. I will create an issue on go-ipfs.
Back on the original thread of this suggestion: @aschmahmann, do you have additional constraints and use cases you'd imagine this project solving? Fleshing out the UX of working with this command, and some of the requirements for how it needs to interface with other parts of the system, would add clarity to this proposal.
@momack2 @jimpick With IPNS over PubSub, memory-based pinning is now implicitly done any time a node calls …

A useful UX could be just copying the IPFS pin UX and putting it behind the name command. The most important thing to implement is persisting IPNS pins to disk and restarting the relevant PubSub + DHT advertisements on reload.

The next most important thing to worry about is scaling. This includes both vertical scaling, to minimize the resources (like connections) the pinner needs to allocate, and horizontal scaling (e.g. orchestrating a multi-node IPNS pinset that might need many connections open and some bandwidth, but otherwise has very low resource consumption). Once the persistence + CLI UX is available we can talk more in depth about which scaling direction to tackle first (my current thinking is horizontal).
As I read through this thread, I believe the proposal in question is in fact an upgrade to the IPFS protocol and not a "mini project". This conversation should be moved to ipfs/notes. I can do it if you agree; 👍 this comment to let me know :)
I'm fine with wherever we want this conversation to take place. However, if in the interest of time implementing this is easier as a separate libp2p-based …
There is on-going work to make IPNS over PubSub have a faster initial resolution time by adding persistence to PubSub (e.g. libp2p/go-libp2p-pubsub#171). This scheme works in p2p scenarios where various users are each subscribed to the PubSub channel named after the IPNS Key. However, another use case we'd like to support involves having dedicated pinning nodes that can each provide many IPNS Keys.
Audience: IPNS users that plan to provide many IPNS keys from a single machine. This may also include IPFS pinning services interested in providing IPNS support.
Impact: Will enable interactions such as users publishing content to IPNS and paying someone else to make sure it's available, all without giving away their private IPNS signing keys.
Stakeholders: go-ipfs team (and infra if we'd like to test deploying a pinner internally)