
... That only works because there's a human driving the browser. I've yet to hear of a concrete example of how this would apply to API clients.


> ... That only works because there's a human driving the browser.

It works for unattended web clients (like Google's spider) too -- and not just for generating basic listings, but for extracting structured schema-based data to update the Knowledge Graph. That's one of the foundations of many new Google Search features from the last several years.
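As a minimal sketch of what an unattended crawler does here: pages embed schema.org structured data (commonly as JSON-LD), and the crawler parses it without any human driving a browser. The HTML snippet and entity fields below are made up for illustration.

```python
import json
import re

# Hypothetical page: sites embed schema.org JSON-LD so that crawlers
# can extract structured facts (entity type, name, URL) mechanically.
html = """
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Organization",
 "name": "Example Corp", "url": "https://example.com"}
</script>
</head></html>
"""

def extract_json_ld(page: str) -> list[dict]:
    """Pull every JSON-LD block out of the page and parse it."""
    pattern = r'<script type="application/ld\+json">(.*?)</script>'
    return [json.loads(m) for m in re.findall(pattern, page, re.DOTALL)]

facts = extract_json_ld(html)
print(facts[0]["name"])  # prints "Example Corp"
```

(A real crawler would use an HTML parser rather than a regex, but the point is the same: the structure is in the document itself.)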


It works because links are defined in the hypertext and discovered by clients (say, by the browser when a page is visited), and so are new features. A well-designed web app is therefore always up to date. In a native Android app, by contrast, the API URL(s) are almost always hardcoded from knowledge of the API at one particular moment. This auto-discovery mechanism works for a spider as well.

Auto-discovery does not mean that links are understood (@rel may help, but only up to a point), so you may still need a human to decide what to do with them.

Suppose a REST application lists links to its "services" on its home page, with each "service" page describing the service according to some standard. You could then have a bot that periodically checks the application for services you are interested in, notifies you when a new service becomes available, and lets you navigate to its page and possibly subscribe.
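The polling bot described above can be sketched in a few lines. The HAL-style "_links" shape and the rel name "service" are assumptions for illustration; in practice you'd fetch the home page over HTTP instead of using literals.

```python
# Hypothetical home-page responses from two successive polls: a REST app
# listing its services as links (HAL-ish shape is an assumption).
home_v1 = {
    "_links": {
        "self": {"href": "/"},
        "service": [
            {"href": "/services/payments", "title": "Payments"},
        ],
    }
}
home_v2 = {
    "_links": {
        "self": {"href": "/"},
        "service": [
            {"href": "/services/payments", "title": "Payments"},
            {"href": "/services/invoicing", "title": "Invoicing"},
        ],
    }
}

def service_hrefs(home: dict) -> set[str]:
    """Collect the href of every link with rel=service."""
    return {link["href"] for link in home["_links"].get("service", [])}

def new_services(previous: dict, current: dict) -> set[str]:
    """What a periodic bot would report: services added since the last poll."""
    return service_hrefs(current) - service_hrefs(previous)

print(new_services(home_v1, home_v2))  # {'/services/invoicing'}
```

The bot never hardcodes service URLs; it only knows the entry point and the rel name, which is exactly the auto-discovery property being claimed.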


Two points:

1. Why do you assume that REST APIs are only accessed by robots? A human developer can benefit from HATEOAS quite a lot, using the RESTful service's outputs as their own documentation: the developer can discover the features of the API by following the links provided in its responses.

2. An API client can check that it matches the interface specified by the API simply by comparing the URIs it accesses with the URIs provided by the HATEOAS part of the RESTful service. You can automatically detect changes, breakages, or the introduction of new features. This doesn't spare the client developer from having to update their client code, but it gives them a powerful tool for keeping up with changes to the service.
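Point 2 amounts to a set difference between the URIs the client was written against and the URIs the service currently advertises. A minimal sketch (the URI sets here are invented for illustration):

```python
# URIs the client code was written against (hypothetical).
client_uris = {"/orders", "/orders/{id}", "/customers"}

# URIs advertised in the service's HATEOAS links on the current deploy.
advertised_uris = {"/orders", "/orders/{id}", "/invoices"}

def diff_interface(expected: set[str], advertised: set[str]) -> dict:
    """Detect likely breakage (URIs the client uses that vanished)
    and new features (URIs the client doesn't know about yet)."""
    return {
        "missing": expected - advertised,   # likely breakage
        "added": advertised - expected,     # new features to investigate
    }

report = diff_interface(client_uris, advertised_uris)
print(report)  # {'missing': {'/customers'}, 'added': {'/invoices'}}
```

A check like this could run in CI or at client startup, flagging drift before it surfaces as a runtime failure.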


So basically, putting structured comments in the API output would have the same effect? Instagram does that: they don't update their API docs, and instead put comments in the results so you might stumble on them while debugging. But specifically for hyperlinks, I don't see the point. For instance, a search API might want to return a "next" link. They can do that with a URL, or they can just include a next token that I pass back to the search API. The latter is somewhat easier to program against, since you often abstract away the HTTP/URL parts.
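The two pagination styles being contrasted can be sketched side by side. The response shapes and the example URL are assumptions; the point is only where the URL-building knowledge lives.

```python
from urllib.parse import urlencode

# 1. Hypermedia style: the response carries a full "next" URL; the client
#    follows it as-is, without knowing how the server builds its URLs.
page_with_link = {"results": [1, 2, 3],
                  "next": "https://api.example.com/search?q=cats&cursor=abc123"}
next_url = page_with_link["next"]

# 2. Token style: the response carries an opaque token; the client rebuilds
#    the request itself, which suits clients that abstract away HTTP/URLs.
page_with_token = {"results": [1, 2, 3], "next_token": "abc123"}
next_url_from_token = ("https://api.example.com/search?"
                       + urlencode({"q": "cats",
                                    "cursor": page_with_token["next_token"]}))

# Both reach the same place; the difference is which side owns URL construction.
print(next_url)
print(next_url_from_token)
```

With the link style the server is free to change its URL scheme between pages; with the token style that knowledge is baked into the client, which is the trade-off the comment is pointing at.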



