Decentralization is frequently sold as a defense against censorship: if we use a decentralized system such as IPFS, we don't need a DNS hierarchy to serve content, so blocking a particular domain is no longer a viable attack.
But as long as we have ISPs and a common communications architecture, if we start using a content-addressable system, doesn't that actually help the censor? As a censor, I go from having to block every domain that may serve a particular document (which is difficult, as The Pirate Bay shows) to just having to force ISPs to block URLs containing the hash of the document I want to censor.
So we go from having to jump among domains to having to jump among content hashes, which seems much less practical, doesn't it?
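To make the content-addressing point concrete, here's a toy sketch in Python. Plain SHA-256 stands in for a real IPFS CID (which actually wraps the digest in a multihash/CID encoding), but the property the argument relies on is the same: the identifier is derived from the bytes, so one blocklist entry covers every mirror of the same document.

```python
# Toy illustration of content addressing: the identifier is derived from the
# bytes themselves, so every mirror of the exact same document exposes the
# exact same identifier -- one blocklist entry covers them all.
# (Plain SHA-256 stands in for a real IPFS CID here.)
import hashlib

def content_id(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

doc = b"some document"
print(content_id(doc))         # identical bytes anywhere -> identical ID
print(content_id(doc + b"!"))  # change a single byte -> entirely different ID
```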
In my mind, until we have some sort of mesh network with efficient cache systems, the decentralization conversation seems (to me) to be answering the wrong questions.
I think by default IPFS doesn't even replicate content. So you don't even have to block hash lookups across the whole network; you can just take down the one host that currently holds the only replica.
Pretty sure it used to be that you had to deliberately pin content you want to keep sharing on your node. Otherwise, nodes that accessed it would throw it out of their cache once they no longer need it.
Maybe they've changed the behavior in the meantime, but in the past IPFS didn't permanently replicate uploaded content without some deliberate user action.
Yes, you have to pin content to keep it, but every time you load something you download it and automatically begin serving it. You can keep serving it for a while afterwards -- especially if you're not downloading much else in the meantime.
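Roughly how that plays out with the kubo (go-ipfs) CLI, driven from Python -- just a sketch: it assumes a local daemon is running, "document.pdf" is a placeholder filename, and the exact flags are from memory, so check `ipfs --help` before relying on it.

```python
# Rough sketch of pin vs. cache behavior using the kubo CLI from Python.
import subprocess

def sh(*args: str) -> str:
    return subprocess.run(args, capture_output=True, text=True, check=True).stdout.strip()

# Adding a file pins it on *your* node only; no other node replicates it.
cid = sh("ipfs", "add", "-q", "document.pdf")
print("pinned locally:", cid in sh("ipfs", "pin", "ls"))

# A node that merely fetched the content keeps (and re-serves) it only until
# garbage collection runs; once unpinned, it's eligible for removal.
sh("ipfs", "pin", "rm", cid)
sh("ipfs", "repo", "gc")  # unpinned blocks are now gone from the local repo
```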
The canonical way of getting content on IPFS isn't through some HTTP gateway, but by running a node yourself. In that case there's no way for an ISP to block anything.
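To illustrate the difference, here's a sketch (the CID is a hypothetical placeholder, and the local address is kubo's usual default gateway; adjust for your setup):

```python
# Fetching the same content two ways: via a public gateway vs. your own node.
from urllib.request import urlopen

def fetch(url: str) -> bytes:
    return urlopen(url).read()

cid = "<some-cid>"  # hypothetical placeholder: put a real CID here

# 1) Public HTTP gateway: plain HTTPS to a well-known domain, so an ISP can
#    filter it by domain or by the CID appearing in the URL path.
gateway_url = f"https://ipfs.io/ipfs/{cid}"

# 2) Your own node: the local gateway (127.0.0.1:8080 by default) serves the
#    same path, but the daemon fetches the blocks peer-to-peer, so there is
#    no central URL or domain for the ISP to block.
local_url = f"http://127.0.0.1:8080/ipfs/{cid}"

# fetch(local_url) returns the same bytes as fetch(gateway_url), just without
# relying on anyone else's HTTP endpoint.
```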