
> your isp probably won't let you host services but that's been the case for decades

How would your ISP know?

NNTP was originally store-and-forward over dialup-like links, no?

So, if I contact a different server and upload/download the changed data, how would they even know?

I guess the big issue would be having some "rendezvous" server in order to help with NAT punching as well as authentication.



No, NNTP is a TCP protocol. Early USENET moved messages over systems like UUCP.

NNTP is to USENET what ActivityPub is to Mastodon. USENET is a confederation of cooperating peers, currently using NNTP, but it could be any mechanism for moving the messages. You could wire it up using SCP, I imagine, if you were motivated to do so.

Specifically, the NNTP protocol does not directly address how USENET works. For example, you can find out how to exchange messages for particular topics using NNTP, but not how to actually create those topics. That's left to the actual software and administrators. USENET uses the concept of "Control Messages" to exchange information about things like newsgroup status, but the content and format of those messages are not specified in NNTP.
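
To make that concrete: a "newgroup" control message is just an ordinary article whose headers ask peers to create a group. A rough sketch (header names follow the classic control-message conventions; the group, addresses, and body text here are made up, and whether a peer honours the request is purely local policy):

    From: group-admin@example.org
    Newsgroups: example.admin
    Subject: cmsg newgroup example.test
    Control: newgroup example.test
    Approved: group-admin@example.org

    example.test is a test group for example.* readers.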

Within USENET, pretty much all peers are "equal". You host your own news server, you tell it what groups you're interested in, and it peers with another host and exchanges messages about those groups. At that level, everyone is on the same footing.

But just because you create a newsgroup on your system doesn't mean that group is instantly propagated across the planet. That's where USENET governance kicks in: who is going to peer and distribute new groups, or not. Each individual relationship within the peer group can be different.

I can't say exactly how a news client differs from a news server. It may not have any of the peering logic, instead posting to and reading groups directly from a server, which then handles the peering. News servers "peer"; clients are lighter weight.
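
For a concrete picture of the client side, here's a minimal sketch using Python's nntplib (in the stdlib up to 3.12, removed in 3.13; the server name is a placeholder). A reader just opens the NNTP port, selects a group, and pulls articles; all the peering and flooding machinery lives in the server:

    import nntplib

    # Plain reader session; real servers may require TLS and/or a login.
    with nntplib.NNTP("news.example.net") as srv:
        resp, count, first, last, name = srv.group("comp.lang.python")
        print(f"{name}: {count} articles ({first}-{last})")

        # Fetch overview data for the last few articles and print the subjects.
        resp, overviews = srv.over((max(first, last - 4), last))
        for artnum, fields in overviews:
            print(artnum, fields.get("subject", ""))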


The protocol is from the era when each protocol used a dedicated TCP port rather than "it's all JSON over HTTPS" like now.


Yeah. Amen to that

There were some bad ideas in the beginning; FTP is an example of several of those.


FTP splitting the data and control channels onto separate ports was a smart implementation. It allowed each port to be optimized (the data side for throughput) and meant that control could still occur even while long responses were happening on the data port.
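
As a rough sketch of what that out-of-band control looks like in practice, using Python's ftplib against a hypothetical anonymous server: the data connection streams the file while the control connection stays free for further commands such as ABOR.

    from ftplib import FTP

    # Hypothetical server and file name, purely for illustration.
    ftp = FTP("ftp.example.test")
    ftp.login()

    # transfercmd() opens the separate data connection and returns its socket;
    # the control connection stays free for further commands.
    data = ftp.transfercmd("RETR some-big-file.iso")
    first_chunk = data.recv(8192)

    # Decide we've seen enough: ABOR goes out on the control channel
    # while the data channel is still mid-transfer.
    ftp.abort()
    data.close()
    ftp.quit()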


I haven't tested it, but I imagine it allows you to trick an FTP server into making an HTTP request (without TLS).

It probably seemed like a good idea at the time, and there was no way to know the problems without trying it.

It also allows you to send a command to cancel an ongoing transfer.
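
For the curious, the "trick it into making an HTTP request" idea above is the classic FTP bounce: PORT lets the client name an arbitrary host and port for the data connection, so a file whose bytes happen to form an HTTP request can be "retrieved" straight at a web server. A hypothetical sketch with Python's ftplib (modern servers reject a PORT that doesn't point back at the client, so this is mostly of historical interest):

    import io
    from ftplib import FTP

    ftp = FTP("ftp.example.test")   # hypothetical, very permissive server
    ftp.login()

    # Stage a file whose contents are a bare HTTP request.
    ftp.storbinary("STOR req.txt", io.BytesIO(b"GET / HTTP/1.0\r\n\r\n"))

    # PORT arguments are h1,h2,h3,h4,p1,p2 -> host 203.0.113.10, port 0*256+80 = 80.
    ftp.sendcmd("PORT 203,0,113,10,0,80")

    # "Retrieving" the file now makes the FTP server open a data connection
    # to that web server and send our bytes to it.
    ftp.sendcmd("RETR req.txt")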


Well... RFC 959 specifically mentions using PORT to send a file to a line printer, so that seems intentional. [1]

For stream-mode STORe/RETRieve, the connection closes at EOF, so you could send the request but the response would be lost.

[1]

    It is possible for the user to specify an alternate data port by
    use of the PORT command.  The user may want a file dumped on a TAC
    line printer or retrieved from a third party host.


It also allowed for FXP, which was a godsend in early pirating/warez days ;-)

...or so I've heard
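
For anyone who hasn't "heard": FXP is just the third-party transfer from the RFC quote above, wired up between two servers. The client asks one server to listen with PASV, hands that address to the other with PORT, and the file then flows directly server-to-server without ever touching the client. A rough sketch with Python's ftplib (hypothetical hosts and filename; it only works if both servers still permit third-party addresses, which almost none do today):

    from ftplib import FTP

    src = FTP("ftp.source.test")   # server that has the file
    src.login()
    dst = FTP("ftp.dest.test")     # server that should receive it
    dst.login()

    # Ask the destination to listen and grab the host/port it announces.
    resp = dst.sendcmd("PASV")     # e.g. "227 Entering Passive Mode (198,51,100,7,195,80)"
    hostport = resp[resp.index("(") + 1 : resp.index(")")]

    # Point the source's data connection at the destination ...
    src.sendcmd("PORT " + hostport)

    # ... then start both ends; the data goes straight between the servers.
    dst.sendcmd("STOR release.zip")   # destination waits for the incoming connection
    src.sendcmd("RETR release.zip")   # source connects out and streams the file

    dst.voidresp(); src.voidresp()    # read the final 226 completion replies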


This could have been better accomplished by multiple connections to the same port.


A lot harder to implement QoS on that.


QoS didn’t exist when FTP was invented. The protocol was designed badly.




