Learning Prolog was one of the most exciting things about getting a C.S. degree. It's vastly underrated, though admittedly not very useful in real life.
The book Seven Languages in Seven Weeks has a chapter on Prolog. There is an interview in there with a guy who was doing research on dolphins and used Prolog to build a dolphin simulator. As they learned new things about the dolphins, they would add "facts" to the simulator and use it to try to discover more.
They could also use it for "debugging" training problems. He had a story about a dolphin who would apparently ignore a specific instruction and just sink down into the tank. They were able to use the simulator to figure out that the problem was not with the dolphin, but with both the instruction given and the surrounding environment at the time they would give the instruction. It turned out the dolphin was actually "following the rules", but the trainers failed to see it.
He also talks about using Prolog to build a scheduling system. Access to trained dolphins is a precious resource and it was important to find the optimal schedule for research. With a handful of facts and rules, Prolog was able to discover it for them. Pretty neat stuff.
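To give a flavor of that, here is a minimal sketch of what fact-and-rule scheduling looks like in Prolog. The dolphins, slots, and constraint are invented for illustration, not taken from the interview:

    % Hypothetical facts about resources and time slots.
    dolphin(ake).
    dolphin(phoenix).
    slot(morning).
    slot(afternoon).

    % A hypothetical constraint: phoenix is unavailable in the morning.
    unavailable(phoenix, morning).

    % A session assignment is valid when the dolphin is free in that slot.
    session(Dolphin, Slot) :-
        dolphin(Dolphin),
        slot(Slot),
        \+ unavailable(Dolphin, Slot).

Querying session(D, S) enumerates every valid assignment by backtracking, and adding a new fact or constraint changes the answers without touching the rule.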
If you want to learn about Prolog, the best book IMHO is _The Art of Prolog_ by Sterling and Shapiro. The newer edition is out of print and expensive (but worth it!); the previous edition is more like $6 and will give you a good taste. _Programming in Prolog_ by Clocksin & Mellish is a pretty straightforward overview of the language, but TAoP really gets at Prolog's potential.
I also liked _Clause and Effect_, which is somewhat like a "Little Schemer" for Prolog, with a couple fairly substantial projects at the end. "Learn Prolog Now" (http://www.learnprolognow.org/) is free online, but a bit basic.
CTM has a great chapter on relational programming and Prolog, too - Peter Van Roy worked on Aquarius Prolog (http://www.info.ucl.ac.be/~pvr/Peter.thesis/Peter.thesis.htm...), and he also follows up with constraint programming. I highly recommend using a Prolog with constraint programming libraries, because it will make a big difference when Prolog's default behavior is too naive.
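To make the constraint point concrete, here is a tiny sketch using SWI-Prolog's library(clpfd); the puzzle is invented, but it shows constraints pruning the domains before any enumeration happens, where plain Prolog would generate and test blindly:

    :- use_module(library(clpfd)).

    % Find two digits whose product is 24, with X below Y.
    solve(X, Y) :-
        [X, Y] ins 1..9,    % declare finite domains
        X * Y #= 24,        % constraint, propagated eagerly
        X #< Y,
        label([X, Y]).      % enumerate only what survives propagation

The query solve(X, Y) yields X = 3, Y = 8, then X = 4, Y = 6.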
I would recommend learning Prolog, because it's sufficiently different from every other language that it will change the way you think, and even if you never use it in production, it's an excellent prototyping language. As a bonus, learning Prolog will help with Erlang, and vice versa. (Erlang was initially a DSL on SICStus Prolog, and many Prolog idioms came along.)
> As a bonus, learning Prolog will help with Erlang, and vice versa. (Erlang was initially a DSL on SICStus Prolog, and many Prolog idioms came along.)
As far as Erlang goes, I have always been put off by the lack of backtracking. They took the best feature from Prolog and threw it out. Everything else is already strange and quite awkward, and those are the parts they kept, but they threw away backtracking. (I am looking at Erlang as a general-purpose language here; I know that there is no need for backtracking in a real-time telephony system.)
By the way, if you want to experiment with Prolog, try SWI-Prolog. It's open source, it has an IDE and a bunch of modules, including support for CHR (Constraint Handling Rules).
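For a quick taste of CHR, here is the classic less-than-or-equal solver from the CHR literature; it runs as-is in SWI-Prolog:

    :- use_module(library(chr)).
    :- chr_constraint leq/2.

    reflexivity  @ leq(X, X) <=> true.
    antisymmetry @ leq(X, Y), leq(Y, X) <=> X = Y.
    transitivity @ leq(X, Y), leq(Y, Z) ==> leq(X, Z).

A query like leq(A, B), leq(B, C), leq(C, A) ends with the rules unifying A, B and C; that is exactly the kind of constraint simplification you would otherwise hand-code.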
The (at the time de facto) standard for Prolog shifted between the two editions, and the latter has several more chapters with larger example projects. Still, the core of the book is about logic programming itself, with Prolog used as the example language.
Keep in mind you will not learn enough about any of these languages to run off and build something. Each chapter covers a different language and primarily discusses some of the features of each that you may not see in another. It's kind of like a programming language buffet where you can have a small taste of a bunch of different things.
CTM (http://www.info.ucl.ac.be/~pvr/book.html) is a similar treatment, but covers about a dozen programming models (paradigms' trade-offs, semantics) rather than languages. Comparisons between languages can often get bogged down in surface details.
I once worked with a guy who was using Prolog to develop a system that devised optimal cargo hold layouts for shipping barges. Seems fairly "real-life" to me.
Optimal packing problems are fun 1-hour programming contest fodder - so many problems that you might solve in Prolog are trivially written imperatively or functionally as combinatorial or permutational recursive searches with pruning - but they are such a small kernel of a real-world application that I don't think they justify the complexity of linking in another language, runtime, etc.
Where you're dynamically building up facts and clauses, and you need more general solving logic at run time, then I think Prolog is more warranted. But that's a smaller niche, IMO.
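A minimal sketch of that niche (the predicate names are invented): facts asserted at run time become immediately visible to rules written up front.

    :- dynamic observed/1.

    % A rule written ahead of time, over facts that do not exist yet.
    needs_review(Event) :-
        observed(Event),
        anomaly(Event).

    anomaly(sensor_spike).

    % New knowledge arrives while the system runs.
    learn(Event) :-
        assertz(observed(Event)).

After learn(sensor_spike), the query needs_review(E) starts succeeding, with no recompilation and no hand-rolled rule engine.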
> trivially written imperatively or functionally as combinatorial or permutational recursive searches with pruning
In other words: an ad hoc, informally-specified, bug-ridden, slow implementation of half of Prolog.
I agree that Prolog would probably fare better as an embedded language like Lua, though - it's best suited to things like rules engines and parsers, rather than whole applications.
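On the parsers point, a DCG makes a small parser-plus-evaluator nearly declarative. This sketch (grammar invented, SWI-Prolog syntax) parses and evaluates strings like "2+3*4", with * binding tighter than +:

    % Parse and evaluate digits combined with + and *.
    expr(V)           --> term(V0), expr_rest(V0, V).
    expr_rest(Acc, V) --> "+", term(T), { Acc1 is Acc + T }, expr_rest(Acc1, V).
    expr_rest(V, V)   --> [].

    term(V)           --> factor(V0), term_rest(V0, V).
    term_rest(Acc, V) --> "*", factor(F), { Acc1 is Acc * F }, term_rest(Acc1, V).
    term_rest(V, V)   --> [].

    factor(D)         --> [C], { code_type(C, digit(D)) }.

The query string_codes("2+3*4", Cs), phrase(expr(V), Cs) gives V = 14.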
No, you don't need a general-purpose solution for this kind of problem, so you don't need to implement an inference engine - so you wouldn't have an implementation of Prolog.
Most of these kinds of problems are only about 100 lines of code even in fairly verbose languages, depending on how complicated the set you need to permute / select from is, and how complicated your clauses are.
I speak from experience here, solving packing problems in programming competition questions. Pretty much any high school / college programming competition includes one of these kinds of problems, and the time budget is rarely a lot more than an hour, as it's usually one of the simpler questions.
If only reality were based on timed competitions that 15-year-olds win. However, the problem that my colleague was solving was a lot more complicated than stacking a bunch of static boxes in an idealized compartment. In fact, the types of "boxes" and the containment structures were not known at the time of the application's development. Instead, he only knew that each could be described along various parameters. For example, there are these things called Foo, and they can never be stacked on top of a Bar unless there is a Baz in between, but even then only if the transport could support that load. These were very complex logistics rules. I suppose you could have written that all in a C loop, but you'd be toast when the new Quuxes were released and new ships were developed, or yada yada yada. As it turns out, Prolog worked great.
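For what it's worth, rules like that map almost directly onto Prolog clauses. A hypothetical rendering, with all weights, names and capacities invented:

    % Invented facts about cargo types and transports.
    weight(foo, 10).  weight(bar, 8).  weight(baz, 2).
    capacity(barge_a, 25).

    % A stack (listed top to bottom) is legal if no Foo sits directly
    % on a Bar and the transport can carry the total load.
    legal_stack(Stack, Transport) :-
        \+ append(_, [foo, bar | _], Stack),
        capacity(Transport, Cap),
        total_weight(Stack, W),
        W =< Cap.

    total_weight([], 0).
    total_weight([Item | Rest], W) :-
        weight(Item, ItemW),
        total_weight(Rest, RestW),
        W is ItemW + RestW.

When the new Quuxes are released, you add a weight fact and whatever clauses govern them; the search machinery is untouched.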
I guess programming an open-ended system would have taken the 15-year-olds an extra 6-7 minutes.
The more open-ended the clauses are, the more complex it gets, for sure, because you end up embedding a DSL to encode the rules (e.g. expression trees with a simple interpreter); and the more you push up that complexity, the more sense a niche solution like Prolog makes.
What a lot of this ends up coming down to is the domain expertise of the code author. If you're uncomfortable with recursion, or with writing simple interpreters, then Prolog is a much bigger win. Otherwise, it has a higher hurdle to climb.
Interpreters themselves are pretty trivial. Writing a parser and interpreter for a simple expression language (no control flow or anything like that, just a predicate possibly with some arguments) should by itself take no more than an hour. I know from past experience it takes me about 25 minutes, as it's one of the first things I do when I learn a new language.
You know what? You're right, certainly more so than people are giving you credit for. Writing a backtracking, depth-first-search brute-forcer really isn't that hard (especially if you have continuations). It's once you've also added e.g. unification that you're well on the way to reimplementing Prolog.
Considering that it still requires a significant amount of code that the Prolog solution would not require, and that code will have to be tested and debugged, I don't consider it "trivial" in any way I use the word. If we were talking theory, maybe, but we're talking implementation.
Consider that we know, and have known for some time, proper ways to manage dynamic memory. We still consider doing it correctly in practice not trivial and consider automatic garbage collection a productivity win.
From what I can remember of being an academic researcher (it was a while ago) the term "trivial" was indeed used to indicate something that wasn't of any real research interest.
This does indeed annoy people who put a lot of work into building complex systems that actually do useful stuff.
I do systems research. We only use "trivial" if something is so easy as to be an afterthought. If you're talking theory, then, fine, solved problems are "trivial." But we're not talking theory.
I'm in grad school right now. That sounds about right to me. "Trivial" means "someone already did it," while "easy" means "my graduate student is working on it." :-)
Prolog in effect supplies the recursion into search trees, but for these kinds of problems, it doesn't give you much more than that. Recursion in imperative languages, on the other hand, is not exactly difficult, so long as you're not using an 80s-era BASIC interpreter or similar. The clauses largely turn into predicates, Boolean expressions in if-statements. For this kind of static problem, you might have 3x more code in an imperative language, but most of it would be bookkeeping for recursion and backtracking, and syntactic cruft with holes in the middle for your constraint and satisfaction-criteria predicates. For most static problems with a fixed set of rules, you really won't be looking at much extra code.
This kind of algorithm is normally expected to be written, tested, debugged and presented in one hour in programming competitions with fifteen-year-old contestants. And it's only the algorithm that Prolog would help you with in this case; it's the whole application around it where all the real polish goes, where the testing is more subtle. Writing the core algorithm of a packing problem, or a routing problem, is completely trivial compared to the work that goes into the interface for entering the data. Programming competitions work with a text input file to get rid of this complexity. I speak as one who has competed in and won such competitions.
And FWIW, I don't think GC is a productivity win because managing dynamic memory is not trivial. Managing dynamic memory is trivial, but it's tedious, and it warps your program design, forcing it to be more imperative, more mutable and less functional. The cost of manual management isn't in managing the memory per se, but in managing the complexity of an application that can't rely on GC to manage the memory.
IMO GC's productivity win comes from the lowered bookkeeping costs of more modular and expressive programming styles - most notably in the ability to return newly allocated structures as return values from functions without having to negotiate away the ambiguity as to who owns them. It lets you build data structures that may share substructure, and write transformations over them which may or may not rewrite parts of the substructure, and the transformation may or may not be memoized, etc. Doing this with GC is straightforward; handling all these options with manual memory management requires ownership handoff protocols, shared-ownership hacks (e.g. reference counts), inflexible conventions, a preference to mutate rather than return new values, etc., greatly increasing code complexity.
And I speak of GC as someone who these days normally writes in a non-GC language, but who has implemented garbage collectors, both on the compiler (metadata) and runtime (the marking and sweeping) sides.
"This kind of algorithm is normally expected to be written, tested, debugged and presented in one hour in programming competitions with fifteen year-old contestants."
Would you deploy this code in a production application?
"Programming competitions work with a text input file to get rid of this complexity. I speak as one who has competed in and won such competitions."
So are you then saying that these kinds of problems take about an hour to solve, assuming you are someone who competes in and wins timed programming competitions? Also, what will be the maintenance costs of changing requirements going forward, as compared to adding Prolog clauses?
Well, a lot of Prolog implementations also feature finite-domain and rational-domain solvers, and global constraints that I don't think your 15-year-old buddies would be able to code in an hour.
Having said that, most problems we solve here are NP-complete; most of the time the trivial pruning you are mentioning does not really extend the limits of the computation.
For sure, I was speaking in the context of the packing problems, and even simpler discrete combinatorial problems, which really are pretty simple. But I wouldn't dismiss trivial pruning lightly; many problems can have their search branches pruned early based on their costs exceeding the current best solution, and as that solution gets better, it will start pruning higher up the search tree.
The knapsack problem, as you know, is NP-complete.
After you've coded a bunch of programming competition problems, yes, they really are trivial. There's only a handful of techniques, primarily around what kind of set you're going to permute or select from; getting the correct results is easy, but usually exponential in time. Apply some constraints at each recursion to prune the depth, and perhaps memoize them if the substructure is similar, and you very quickly narrow in on efficient solutions. Use some heuristics, and you can sacrifice optimality for more speed.
It's definitely not rocket science. 15-year-old kids have been doing this for years in the IOI etc.
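For the record, the kernel under discussion is also tiny in Prolog itself. A sketch of subset selection with pruning (items and numbers invented); the W =< Limit test is the prune, abandoning a branch the moment it exceeds capacity:

    item(a, 4, 5).  item(b, 3, 4).  item(c, 2, 3).   % item(Name, Weight, Value)

    % pack(+Limit, +Items, -Chosen, -Value): pick a subset within Limit.
    pack(_, [], [], 0).
    pack(Limit, [item(I, W, V) | Rest], [I | Chosen], Value) :-
        W =< Limit,                  % prune: stop descending once over weight
        Remaining is Limit - W,
        pack(Remaining, Rest, Chosen, V0),
        Value is V0 + V.
    pack(Limit, [_ | Rest], Chosen, Value) :-   % or skip this item
        pack(Limit, Rest, Chosen, Value).

    best_pack(Limit, Chosen, Value) :-
        findall(item(N, W, V), item(N, W, V), Items),
        aggregate_all(max(Val, Sel), pack(Limit, Items, Sel, Val),
                      max(Value, Chosen)).

The query best_pack(5, Chosen, Value) gives Chosen = [b, c], Value = 7.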
Now we're mixing up two criteria. Contest problems are small enough that an optimal solution can be found in reasonable time; but since packing problems usually end up NP-complete or NP-hard, in the real world you'll end up using heuristics.
> Optimal packing problems are fun 1-hour programming contest fodder - so many problems that you might solve in Prolog are trivially written imperatively or functionally as combinatorial or permutational recursive searches with pruning
This is true when the problem is stable and well-specified, but when constraints are being added and removed, when there are "nice-to-have" constraints that save money but may become less tractable as the problem scales, when customers want you to experiment with novel constraints -- you do not want to maintain a homebrew imperative implementation. (I can't say anything about a functional version.) Making your implementation maintainable is tantamount to re-implementing a powerful, general-purpose system such as Prolog.