I think trivial protocols like this are a good thing to start with for educational purposes, because implementing one correctly does require quite a bit of effort for someone who has had no experience with networking or RFCs.
Even for something as simple as QOTD the implementer has to consider things like message lengths and interpret terms like "should" (a recommendation, not an obligatory condition for compliance.) Observe that the standard also doesn't mandate that the message must change only once per day, so the implementation presented is compliant. :-)
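For concreteness, a minimal compliant QOTD server can look roughly like this in Ruby (just a sketch, not the article's code; the port and quote are placeholders). RFC 865 only says a short message is sent on connect, any input is discarded, and recommends keeping the quote under 512 printable ASCII characters:

    require 'socket'

    QUOTE = "Any sufficiently advanced bug is indistinguishable from a feature.\r\n"

    server = TCPServer.new(10017)   # 17 is the real QOTD port; a high port avoids needing root
    loop do
      client = server.accept
      client.write(QUOTE)           # RFC 865 recommends < 512 characters of printable ASCII
      client.close                  # anything the client sends is simply ignored
    end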
For TCP Echo, because TCP is a stream-oriented protocol and (AFAIK) you can't actually send and receive simultaneously in code - it's always a read or a write - the question of how much to echo back, and after how long, is also something to consider. Theoretically, an echo server could wait to send until several GB of data were received or the connection was closed, buffering the data limitlessly, and still be compliant. This also shows the importance of being clear and precise when writing standards or protocol specifications in general, should you ever need to do so.
I did consider that scenario, but I suppose what really happens is dependent upon the duplex of the medium, how the network stack handles it (there's certainly a nontrivial amount of synchronisation required...), and if the CPU is multicore. WiFi for sure is half-duplex so I think the two threads will just alternately run.
Late 90's I did firmware for print servers. The echo server was pretty important to us for testing our hand-rolled TCP/IP stack.
Print server management was done through a Telnet interface. We also supported LPD which was one of the stupider protocols ever to see the light of day.
I added a QOTD service to the firmware as an easter egg.
As I mentioned when someone brought up the history of UDP, the original idea was that datagram protocols would be implemented at the IP level, as seen here. UDP offers the same functionality, but one level higher. In BSD, it was easier to do things from user space at the UDP level rather than at the IP level, and adding new protocols directly above IP fell out of favor.
Try to get an IP packet that's not TCP, UDP, or ICMP through a consumer level Internet provider.
I've never had much difficulty with ESP (protocol 50), 6in4 (Protocol 41), or GRE (protocol 47). By and large, if it's IP, your packet will get to the destination without too much filtering in North America with most of the major ISPs (Comcast, AT&T, etc...)
As someone who just set up IPv6 tunnels to both home and the office, I had no issues with 6in4 (proto-41) traffic with both ISPs I used. This is in The Netherlands.
I just confirmed that I can route a raw IP packet from my home connection to my server, by way of my ISP and hosting provider.
On the server, I ran "socat IP4-RECV:254 STDOUT", and on the client I ran "socat STDIN IP4-SENDTO:theservername:254", then typed at the client. Came through just fine.
To clarify Animats' point about getting non-TCP/UDP/ICMP packets through, it should probably be pointed out that it is difficult at scale. Yes, you may be able to send it from your network-aware workplace straight to your home computer, but if you release some network product, an IM server/client perhaps, and switch it after a couple of years and some solid success to use SCTP only, you'd better bulk up your tech support staff first....
If nothing else, UDP provides ports, so multiple applications can be sending and receiving UDP datagrams at once without getting confused, or, indeed, even being aware of each other.
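A quick illustration of that point (hypothetical ports, loopback only): two sockets bound to different UDP ports each receive their own datagrams without ever seeing each other's.

    require 'socket'

    a = UDPSocket.new
    a.bind('127.0.0.1', 20001)        # "application A"
    b = UDPSocket.new
    b.bind('127.0.0.1', 20002)        # "application B"

    tx = UDPSocket.new
    tx.send('for A', 0, '127.0.0.1', 20001)
    tx.send('for B', 0, '127.0.0.1', 20002)

    msg_a, _ = a.recvfrom(512)        # => "for A"
    msg_b, _ = b.recvfrom(512)        # => "for B"
    puts msg_a, msg_b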
Many of these old protocols don't die easily and tend to linger around forever. Maybe there's a nostalgic element to keeping them alive for sysadmins :)
Take Shodan results with a grain of salt. When you look at the entire IPv4 space, you will find a little of everything.
In a decade of doing pen tests, both in a professional capacity and informally for friends, I have never seen echo or daytime, and saw QOTD once on a test box in the CS department of a university.
Of course, working with organizations who sought out someone to do a pentest probably self-selects out networks which would have this kind of nonsense. Reducing attack surface by turning off services or blocking them at various firewalls has been standard operating procedure for IT for at least 2 decades.
Yeah, out of 4 billion addresses there are ~20,000 QOTD servers so I'm not arguing that they're pervasive :) Just saying that they're not completely dead yet.
Pretty much every port below 1024 is reserved for one protocol or another, but many of them have been obsolete for years. It seems that whoever was in charge of assigning well-known ports back then just handed them out like candy.
Well, who am I kidding? This is the same IANA that used to hand out humongous blocks of IPv4 addresses to anyone who asked.
Should we try to deprecate dead protocols so that low ports can be put into better use? Or have we come to expect that all new technologies will simply reuse ports 80 & 443, so we have no need to set aside new well-known ports anymore?
Not everything has to be RFC approved. If I had the need for a new protocol, I'd just use one of the dead protocol ports anyway.
I suspect firewalls blocking everything but ports 80 and 443 has a lot more to do with why so many services these days are being stacked on top of them. I used to run a SOCKSv5 SSH tunnel home when I worked for a more restrictive employer, and of course I stuck it on port 443.
DNS is even more open than ports 80 and 443. Lots of the small WLAN appliances found in most internet cafés today could easily be bypassed by putting a VPN on the DNS port.
Enough administrative firewalls block non-80/443 ports that it's harder to deploy a protocol that uses them. This has gotten a bit better with UPnP and admin education, but it's the only reason absurdities like XMLRPC-over-HTTP got off the ground.
I'm actually psyched that, with Palo Alto's App-ID and Snort OpenAppID, firewalls may start allowing things through by behavior instead of by port. Then we can have the internet back the way it was designed.
BOFH-admins will configure to accept-and-bitbucket-default; that is, make the other party think it's gotten through, and then ignore everything it has to say.
Maybe throw in some fuzzing: accept-and-respond-with-gibberish-default.
Given that the definition[1] of the echo protocol works on UDP, you could potentially spoof the address to be coming from another echo server and have packets going back and forth indefinitely, correct?
Doesn't spoofing the IP portion of UDP require a compliant network provider? I thought most upstream links would look at a spoofed packet and say, "Hey... no."
Not that this makes it impossible, just more difficult.
> May 1983 [footnote]

Fwiw, RFC 2616, for HTTP, was published the same month, so at least some people were doing actual work in those days.
RFC 2616 was published in June 1999.
I don't know what Sir Tim was doing in May 1983, but I'm pretty sure he wasn't writing an RFC for a protocol that he wouldn't invent for six more years.
Also, if you send a large amount of data to the echo server, the server crashes. This is due to how data is read off the wire into a buffer. A suggestion is to use a fixed size buffer. I did test this earlier and I'm sorry that I crashed it.
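Roughly what I mean, as one way to do it in Ruby (just a sketch, not the article's code; arbitrary high port): read in fixed-size chunks with readpartial and echo each chunk as it arrives, rather than buffering the whole stream with read.

    require 'socket'

    server = TCPServer.new(10007)      # 7 is the real echo port
    client = server.accept
    begin
      loop do
        chunk = client.readpartial(4096)  # at most 4 KB held in memory at a time
        client.write(chunk)               # echo it straight back
      end
    rescue EOFError
      client.close                        # peer closed the connection
    end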
This actually brings up an annoyance with FF (well, Pale Moon, but same difference). If you try to open, say, pchs.co:17 with FF, it'll pop up a prompt saying "this address is restricted" - with no way of overriding it.
You have to go into the config and add a key (!) to actually be able to access it. And worse, there's no way I've seen to actually just straight disable the "feature". You have to add an individual port, or a range of ports, or a comma-separated list of ports or ranges.
(For those wondering, it's "network.security.ports.banned.override", with a value of a port, or range, or comma-separated list of ports or ranges. For example: "7,13,17".)
There are various security-related jiggery-pokeries you can perform with access to some of those old protocols as they interact with browser security. It's safer just to disable them. And, well, let's be frank, the inconvenience of not being able to hit "echo" servers through your browser is pretty minimal.
I, uh, don't even know what you're trying to say there. Is that some form of agreement or a claim that it's nonsense? If it's the latter, well, it's not. Security attacks against some of these old protocols were demonstrated. The blacklist, as I understand it, may be a bit larger than it needs to be because conservatively a few more things were blocked than were demonstrated, but there were demonstrations.
Well, at least you have a choice to send that information. Much unlike the majority of the web browsing experience, where you send quite a bit of information without having any choice at all. The author calls it a "toy", which basically means use at your own peril. Cf. Happy Fun Ball.
Interesting, it's like a layer-4 NAT. I'm not so optimistic about its practicality though, as we don't seem to have any sort of port shortage at the moment and a lot of new applications just get put on top of HTTP/HTTPS anyway.
I have actually used daytime for a "real" use: as a quick and dirty way of eliminating the possibility of guest clock drift when running benchmark scripts inside of emulated guests with unreliable timekeeping. Obviously a bad idea for benchmarks measured on the order of seconds, but probably fine for benchmarks running for hours. ntpdate -q would probably work just as well though.
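The client side is about as small as it gets; something like this (hypothetical host) is roughly all it takes:

    require 'socket'

    # Daytime (RFC 867, TCP port 13): the server sends a human-readable timestamp and closes.
    s = TCPSocket.new('daytime.example.com', 13)
    puts s.read
    s.close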
server.accept will block (wait) until a new connection happens. Once the callback (in the form of a Ruby block) completes, the thread will end. So it starts a potentially infinite number of threads, but only one per connection, and each one is terminated pretty quickly. This is a pretty common way to write a server that can handle multiple simultaneous connections.
Ruby (and most languages) evaluates the arguments before passing them through to the function. So it first evaluates server.accept, which blocks until a new connection, then passes the return value through to Thread.new which spawns the new thread.
The parameters to Thread.new are just passed straight through to the block.
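In other words, the pattern being described is roughly this (a sketch of the general shape, not the article's exact code):

    require 'socket'

    server = TCPServer.new(10007)          # arbitrary high port
    loop do
      # server.accept blocks here, in the main thread, until a client connects.
      # Its return value (the client socket) is passed to Thread.new, which
      # yields it into the block as `client`.
      Thread.new(server.accept) do |client|
        client.puts "hello"                # do the per-connection work...
        client.close                       # ...then the block ends and the thread exits
      end
    end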
I spoke to someone a few years ago who had an asymmetrical transit cost agreement between two companies. He joked that it might have been lucrative to just pipe /dev/random to their echo port 24/7.
I suspect that is one of the many reasons it's a dead protocol.
This isn’t about the protocol, but you should know my code for this is really sloppy because it was my first time attempting to use vim and literally everything was hard.
Ahh, Vim. It makes me happy to know that more seasoned developers than myself have issues with it as well.