Three Dead Protocols (annharter.com)
196 points by englishm on July 16, 2015 | hide | past | favorite | 75 comments


I think trivial protocols like this are a good thing to start with for educational purposes, because implementing one correctly does require quite a bit of effort for someone who has had no experience with networking or RFCs.

Even for something as simple as QOTD, the implementer has to consider things like message lengths and interpret terms like "should" (a recommendation, not an obligatory condition for compliance). Observe that the standard also doesn't mandate that the message change only once per day, so the implementation presented is compliant. :-)
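A compliant server can therefore be tiny. A minimal sketch (hypothetical, not the article's code) that serves one fixed quote per connection:

```ruby
require 'socket'

# Placeholder text; RFC 865 only says the quote "should" be under
# 512 characters of printable ASCII, and never requires it to change.
QUOTE = "An apple a day keeps the doctor away.\r\n"

# Accept one connection, write the quote, close. QOTD ignores any
# input the client might send.
def serve_one_qotd(server)
  client = server.accept
  client.write(QUOTE)
  client.close
end
```

Wrapping `serve_one_qotd` in a loop (and a thread per connection) gives a full server.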

For TCP Echo, because TCP is a stream-oriented protocol and AFAIK since you can't actually send and receive simultaneously in code - it's always read or write - the question of how much to echo back, and after how long, is also something to consider. Theoretically, an echo server could wait to send until several GB of data had been received or the connection was closed, buffering the data limitlessly, and still be compliant. This also shows the importance of being clear and precise when writing standards or protocol specifications in general, should you ever need to do so.
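One reasonable (hypothetical) answer to "how much, and after how long" is to echo each chunk as soon as it arrives, with a bounded read size, rather than buffering until close:

```ruby
require 'socket'

# Echo each chunk straight back. readpartial returns as soon as
# *some* data is available (at most 4096 bytes here), so nothing is
# buffered indefinitely; EOFError signals the peer has closed.
def echo_connection(client)
  loop do
    client.write(client.readpartial(4096))
  end
rescue EOFError
  client.close
end
```

The fixed read size also sidesteps the unbounded-buffer problem the comments below ran into.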


AFAIK since you can't actually send and receive simultaneously in code - it's always read or write

Sure you can, there's no problem having a thread writing while another reads in parallel.


I did consider that scenario, but I suppose what really happens depends upon the duplex of the medium, how the network stack handles it (there's certainly a nontrivial amount of synchronisation required...), and whether the CPU is multicore. WiFi for sure is half-duplex, so I think the two threads will just run alternately.


Late 90's I did firmware for print servers. The echo server was pretty important to us for testing our hand-rolled TCP/IP stack.

Print server management was done through a Telnet interface. We also supported LPD which was one of the stupider protocols ever to see the light of day.

I added a QOTD service to the firmware as an easter egg.

I'm going to go soak my teeth now.


As I mentioned when someone brought up the history of UDP, the original idea was that datagram protocols would be implemented at the IP level, as seen here. UDP offers the same functionality, but one level higher. In BSD, it was easier to do things from user space at the UDP level rather than at the IP level, and adding new protocols directly above IP fell out of favor.

Try to get an IP packet that's not TCP, UDP, or ICMP through a consumer level Internet provider.


I've never had much difficulty with ESP (protocol 50), 6in4 (protocol 41), or GRE (protocol 47). By and large, if it's IP, your packet will get to the destination without too much filtering in North America with most of the major ISPs (Comcast, AT&T, etc.).

I can't speak for other countries.


GRE tends to bugger off down a hole in a lot of ISPs in the UK from experience. Very annoying.


Is that a routing issue, or a fragmentation problem? Reducing your MTU on a GRE link greatly improves performance.

I'd be interested in hearing if there were any ISPs that didn't just forward GRE packets using normal IP routing conventions.


Absolutely no idea. AFAIK they just disappear into a void.

Used to be like this on Demon, Virgin Media and Easynet. The latter fixed their stuff circa 2007 however.


As someone who just set up IPv6 tunnels to both home and the office, I had no issues with 6in4 (proto-41) traffic with both ISPs I used. This is in The Netherlands.


I just confirmed that I can route a raw IP packet from my home connection to my server, by way of my ISP and hosting provider.

On the server, I ran "socat IP4-RECV:254 STDOUT", and on the client I ran "socat STDIN IP4-SENDTO:theservername:254", then typed at the client. Came through just fine.


Same here (Portuguese consumer level ISP).


To clarify Animats' point about getting non-TCP/UDP/ICMP packets through, it should probably be pointed out that it is difficult at scale. Yes, you may be able to send one from your network-aware workplace straight to your home computer, but if you release some network product, an IM server/client perhaps, and after a couple of years of solid success switch it to use SCTP only, you had better bulk up your tech support staff first....


If nothing else, UDP provides ports, so multiple applications can be sending and receiving UDP datagrams at once without getting confused, or, indeed, even being aware of each other.
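That demultiplexing is easy to see even on loopback: two sockets bound to different ports each receive only the datagrams addressed to them. A small sketch:

```ruby
require 'socket'

# Two independent UDP listeners on one host; the kernel routes each
# datagram purely by destination port, so neither listener sees the
# other's traffic.
a = UDPSocket.new
a.bind('127.0.0.1', 0)   # port 0 = let the OS pick a free port
b = UDPSocket.new
b.bind('127.0.0.1', 0)

sender = UDPSocket.new
sender.send("for a", 0, '127.0.0.1', a.addr[1])
sender.send("for b", 0, '127.0.0.1', b.addr[1])

msg_a, _addr = a.recvfrom(100)
msg_b, _addr = b.recvfrom(100)
```

Raw-IP protocols have no such field, so only one process per protocol number can sanely use them at once.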


UDP is very heavily used in many areas. Certainly nothing wrong with it.


These protocols may be deprecated, they may be unused and they may be out of sight but they aren't completely dead yet:

https://www.shodan.io/report/9xshqrdb

Many of these old protocols don't die easily and tend to linger around forever. Maybe there's a nostalgic element to keeping them alive for sysadmins :)


Take Shodan results with a grain of salt. When you look at the entire IP4 space, you will find a little of anything.

In a decade of doing pen tests, in a mix of professional capacity and informally for friends, I have never seen echo or daytime, and saw QOTD once on a test box in the CS department of a university.

Of course, working with organizations who sought out someone to do a pentest probably self-selects out networks which would have this kind of nonsense. Reducing attack surface by turning off services or blocking them at various firewalls has been standard operating procedure for IT for at least 2 decades.


Yeah, out of 4 billion addresses there are ~20,000 QOTD servers so I'm not arguing that they're pervasive :) Just saying that they're not completely dead yet.


Top products, second one: Windows QOTD. Trying to google it, and Google immediately suggests a third term for the query: exploit.

Well.


Hell, many of them are enabled by default or by poor decisions on the part of whoever builds the SOE.


Pretty much every port below 1024 is reserved for one protocol or another, but many of them have been obsolete for years. It seems that whoever was in charge of assigning well-known ports back then just handed them out like candy.

Well, who am I kidding? This is the same IANA that used to hand out humongous blocks of IPv4 addresses to anyone who asked.

Should we try to deprecate dead protocols so that low ports can be put into better use? Or have we come to expect that all new technologies will simply reuse ports 80 & 443, so we have no need to set aside new well-known ports anymore?


Not everything has to be RFC approved. If I had the need for a new protocol, I'd just use one of the dead protocol ports anyway.

I suspect firewalls blocking everything but ports 80 and 443 has a lot more to do with why so many services these days are being stacked on top of them. I used to run a SOCKSv5 SSH tunnel home when I worked for a more restrictive employer, and of course I stuck it on port 443.


DNS is even more open than ports 80 and 443. The small WLAN appliances found in most internet cafés today could easily be bypassed by putting a VPN on the DNS port.


And yet, the OpenBSD team was never able to get a protocol number for CARP (which I've used with great success).

https://en.wikipedia.org/wiki/Common_Address_Redundancy_Prot...


Enough administrative firewalls block non-80/443 ports that it's harder to deploy a protocol that uses them. This has gotten a bit better with UPnP and admin education, but it's the only reason absurdities like XMLRPC-over-HTTP got off the ground.


I'm actually psyched that, with Palo Alto's App-ID and Snort OpenAppID, firewalls may start allowing things through by behavior instead of by port. Then we can have the internet back the way it was designed.


"Looks like TLS". "Also looks like TLS". "That's funny, this one looks like TLS too".


This is very true. That's why you MITM everything with your own CA!


Not necessarily; presumably conservative admins will still configure it to deny-default. Especially if the traffic is encrypted and unfamiliar.


BOFH-admins will configure to accept-and-bitbucket-default; that is, make the other party think it's gotten through, and then ignore everything it has to say.

Maybe throw in some fuzzing: accept-and-respond-with-gibberish-default.

accept-and-spam-MX-record-always


How about accept-and-randomly-lose? I'm a big fan of RFC 748 [1]

[1] https://tools.ietf.org/html/rfc748


Just start working with 25, but invent new HELO verbs instead. It worked for HTTP...

Honestly, with 65K+ ports, why would people want to re-use old ones ?


Given that the echo protocol, per its definition[1], also works over UDP, you could potentially spoof the source address to be another echo server and have packets going back and forth indefinitely, correct?

https://tools.ietf.org/html/rfc862


This is the premise of an old bit of code called Pepsi.c. I recall having juvenile fun with it. Many Cisco routers at the time had these ports open. http://www.hoobie.net/security/exploits/hacking/pepsi.c


Source code written by teenagers is always such a joy.


Especially the greetz and rage. I knew quite a few on both sides 😂


Doesn't spoofing the IP portion of UDP require a compliant network provider? I thought most upstream links would look at a spoofed packet and say, "Hey... no."

Not that this makes it impossible, just more difficult.



> May 1983 [footnote] Fwiw, RFC 2616, for HTTP, was published the same month, so at least some people were doing actual work in those days.

RFC 2616 was published in June 1999.

I don't know what Sir Tim was doing in May 1983, but I'm pretty sure he wasn't writing an RFC for a protocol that he wouldn't invent for six more years.

https://www.ietf.org/rfc/rfc2616.txt


The first actual RFC on HTTP was RFC 1945[1] from 1996. However, HTTP had been in use on the Web for a couple of years already when it was published.

[1] https://tools.ietf.org/html/rfc1945


Sixteen, even.


I think your implementation of "RFC 862, the Echo Protocol" wouldn't work if the input doesn't end in a newline.


Also, if you send a large amount of data to the echo server, the server crashes. This is due to how data is read off the wire into a buffer. A suggestion is to use a fixed-size buffer. I did test this earlier and I'm sorry that I crashed it.


Oops, I should have read the comments, I too crashed it testing this theory.


Also, pressing ^D causes the server to infinitely loop sending newline characters.


This actually brings up an annoyance with FF (well, Pale Moon, but same difference). If you try to open, say, pchs.co:17 with FF, it'll pop up a prompt saying "this address is restricted" - with no way of overriding it.

You have to go into the config and add a key (!) to actually be able to access it. And worse, there's no way I've seen to actually just straight disable the "feature". You have to add an individual port, or a range of ports, or a comma-separated list of ports or ranges.

(For those wondering, it's "network.security.ports.banned.override", with a value of a port, or range, or comma-separated list of ports or ranges. For example: "7,13,17".)

Once you do, it works fine.


There are various security-related jiggery-pokeries you can perform with access to some of those old protocols as they interact with browser security. It's safer just to disable them. And, well, let's be frank, the inconvenience of not being able to hit "echo" servers through your browser is pretty minimal.


Pure applesauce.


I, uh, don't even know what you're trying to say there. Is that some form of agreement or a claim that it's nonsense? If it's the latter, well, it's not. Security attacks against some of these old protocols were demonstrated. The blacklist, as I understand it, may be a bit larger than it needs to be because conservatively a few more things were blocked than were demonstrated, but there were demonstrations.


I've been running a QOTD service on my server for the last few years:

    $ nc zx2c4.com 17
Source here: http://git.zx2c4.com/mulder-listen-daemon/tree/mulderd.c

I also run a toy telnet server:

    $ telnet zx2c4.com
:P


A toy telnet server that requires me to send my Google credentials to a random server, unencrypted over the internet? Nice!


Well, at least you have a choice to send that information. Much unlike most of the web browsing experience, where you send quite a bit of information without having any choice at all. The author calls it a "toy", which basically means use at your own peril. Cf. Happy Fun Ball.


Don't forget about TCPMUX listening on port 1 (RFC 1078). That's serious stuff that could see many applications even in today's world.


Interesting, it's like a layer-4 NAT. I'm not so optimistic about its practicality though, as we don't seem to have any sort of port shortage at the moment and a lot of new applications just get put on top of HTTP/HTTPS anyway.


I have actually used daytime for a "real" use: as a quick and dirty way of eliminating the possibility of guest clock drift when running benchmark scripts inside of emulated guests with unreliable timekeeping. Obviously a bad idea for benchmarks measured on the order of seconds, but probably fine for benchmarks running for hours. ntpdate -q would probably work just as well though.
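The trick is easy to reproduce locally: an RFC 867 daytime reply is just a human-readable timestamp followed by a connection close. A minimal sketch (hypothetical, not the commenter's setup):

```ruby
require 'socket'

# Serve one RFC 867 daytime request: write the current time in a
# human-readable form and close the connection. The RFC leaves the
# exact timestamp format unspecified.
def serve_one_daytime(server)
  client = server.accept
  client.write(Time.now.strftime("%a %b %e %H:%M:%S %Y\r\n"))
  client.close
end
```

Reading it from the guest then bounds the drift to the connection's round-trip time, which is why it only makes sense for hour-scale benchmarks.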


Wait, did she just start an infinite number of threads in a loop, or is ruby awesome in ways I didn't know?


server.accept will block (wait) until a new connection happens. Once the callback (in the form of a Ruby block) completes, the thread will end. So it starts a potentially infinite number of threads, but only one per connection, and each one is terminated pretty quickly. This is a pretty common way to write a server that can handle multiple simultaneous connections.


Soo the parameter to Thread.new is a function that blocks on the parent thread, before a new thread is even created?


Ruby (and most languages) evaluates the arguments before passing them through to the function. So it first evaluates server.accept, which blocks until a new connection, then passes the return value through to Thread.new which spawns the new thread.

The parameters to Thread.new are just passed straight through to the block.
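Put schematically (a sketch of the pattern, not the article's exact code): the accept happens in the parent thread, and only the resulting socket reaches the new thread.

```ruby
require 'socket'

# server.accept is evaluated in the parent thread and blocks until a
# client connects; only its return value (the connected socket) is
# passed through as the block parameter of the spawned thread.
def accept_loop(server, connections)
  connections.times do
    Thread.new(server.accept) do |client|   # accept already happened
      client.puts "handled"
      client.close
    end
  end
end
```

Passing the socket as a Thread.new argument (rather than closing over a shared variable) also avoids a race where the next accept overwrites it before the thread runs.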


Ahhh! I thought it was passing the accept function and not its result. Been too long, ruby! Thanks guys :)


Yep, that's exactly what's happening. A little confusing because ruby doesn't require brackets for zero argument function calls.


I spoke to someone a few years ago who has an asymmetrical transit cost agreement between two companies. He joked that it may have been lucrative to just pipe /dev/random to their echo port 24/7.

I suspect that is one of the many reasons that is a dead protocol.


That's like a byte a second, what's that going to do?


Nice little exercise. Just implemented the three servers in Node.js over lunch time.

[1] https://github.com/foliveira/echo-is-not-dead

[2] https://github.com/foliveira/qotd-is-not-dead

[3] https://github.com/foliveira/daytime-is-not-dead


RFC 2616 has been superseded by RFC 7230 et al.


This isn’t about the protocol, but you should know my code for this is really sloppy because it was my first time attempting to use vim and literally everything was hard.

Ahh, Vim. It makes me happy to know that more seasoned developers than myself have issues with it as well.


Hmmm, I run Q4TD[1] and now I’m thinking I should implement my own QOTD service…

I wonder if I could do that with Google App Engine talking to the blog and just picking random posts.

[1] http://q4td.blogspot.com/ http://www.twitter.com/q4td https://plus.google.com/u/0/110672212432591877153/posts http://www.facebook.com/quote4theday


Every time I look down the list of well-known port numbers, I imagine setting up a box with every protocol running.

A bit of an aside, how many people still use plain netcat? I switched to ncat years ago, and haven't looked back.


No mention of finger, port 79.

https://en.wikipedia.org/wiki/Finger_protocol


I suspect a few more implementations of these are going to spin up. I just did the qotd in Go: https://github.com/kyleterry/qotd


0.0.0.0 where's the IPv6 support?


The QOTD seems to just hang sometimes. Anyone have any guesses as to why?


For zero-based arrays, which Ruby has, it looks like the random_index passed to the CSV array can exceed the array's bounds due to the '+1':

    random_index = rand(quotes_array.length + 1)
    @quote_body = quotes_array[random_index]["Quote"]
    @quote_author = quotes_array[random_index]["Author"]
https://github.com/theaisforannie/qotd/blob/master/qotd.rb#L...
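For reference, the fix is to drop the "+ 1" (rand(n) already returns 0 through n-1), or to use Array#sample. A sketch with hypothetical stand-in data:

```ruby
# Hypothetical stand-in for the CSV rows loaded by the linked qotd.rb.
quotes_array = [
  { "Quote" => "First quote",  "Author" => "A" },
  { "Quote" => "Second quote", "Author" => "B" },
]

# rand(n) yields an integer in 0...n, so the array length itself is
# the right argument; the original "+ 1" could index one past the
# end, making quotes_array[random_index] nil.
random_index = rand(quotes_array.length)
quote_body   = quotes_array[random_index]["Quote"]
quote_author = quotes_array[random_index]["Author"]

# Idiomatic alternative: pick a random element directly.
row = quotes_array.sample
```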


Ah, and then it just silently throws an exc and never closes the connection. Nice.

Gracias Señor@!


someone should tell her about the fortune file :(



