RE the learning curve: Do you think the APL family can exist without the terse syntax? Are there any aspects of the language that just wouldn't work well without it?
A comparison I can think of is that Lisp's whole code/data duality works well because the code is s-exprs, and that it would be pretty awkward to transform that if the syntax were more python-y
On the other hand, semicolons and braces have nothing to do with any Java features....
Yes, most of the examples look very limited compared to Lisp, but there is Rebol / Red (http://www.red-lang.org/), which is at least as powerful as Lisp in terms of homoiconicity (as well as other things), if not more. Red/Rebol don't use s-exprs, don't look like this (((())))), and are highly dynamic and homoiconic.
> RE the learning curve: Do you think the APL family can exist without the terse syntax?
So yes, I think that the APL family could exist without the terse syntax - but hey, everything has its tradeoffs, and all this is subjective. Everyone thinks differently; there might be someone in this world who is in love with the APL syntax :D
I do love APL's syntax. It's part of the appeal for me.
That said, Red looks like a godsend. I was looking for something I can use without much setup and learning costs, that is small, elegant, efficient, fast, gets the job done, and is long-term maintainable. Red seems to fit the bill. I'm still evaluating it and it still looks good.
> Do you think the APL family can exist without the terse syntax? Are there any aspects of the language that just wouldn't work well without it?
Considering the evolution of APL, yes.
Ken Iverson was teaching math at Harvard when he developed a mathematical notation to describe the subject formally. This was happening in the late 1950s, as computers were entering our lives. At some point it became reasonable to use that evolved notation as A Programming Language - or APL, for short.
However, the idea was to make understanding easier - all along! It's the same idea Alan Kay pursues in his work, and the same idea behind many good technologies when they are young - Java, JavaScript... The importance of notation - be it in math, or in formal-and-executable math - is the ideology of early APL. Nowadays people say "it's hard to beat the expressiveness of a whiteboard", yet they still reach for, say, an integral sign (and, similarly, one-letter variables), just as one would when showing an idea on a whiteboard. The history of handwritten math favored terse notation - and APL carries that on as executable math.
So... yes, terseness is important, and APLers will also say that the choice of symbols is important too. They feel strange only to beginners - pretty soon a programmer learns them, gets used to them, and refuses to substitute them with more readable, longer variants - even though that would make certain things better. You don't name variables on a whiteboard with long_names or camelCase - at best you use sub- and superscripts. Granted, you can have cups and power towers and other constructs, and those were really impractical with 1960s-level technology... yet at least you keep a single text direction for expressions, in editors or in print.
J makes certain things more logical and switches to ASCII. Maybe we'll invent an even better notation (to me, this - http://matt.might.net/articles/discrete-math-and-code/ - looks like a good starting point for thinking about basic building blocks), but so far the APL family languages are arguably the closest thing we have to math notation. To some, they are the shortest path imaginable between forming a solution in one's head and explaining it to a computer.
In agreement with @avmich above, and to turn the question around:
> Do you think the APL family can exist without the terse syntax? Are there any aspects of the language that just wouldn't work well without it?
Do you think mathematics would exist or be shared and done as efficiently if it did away with the terseness of its symbols, Greek letters, etc...?
Yes, in J you can define average as:
+/%#
so that,
(+/%#) 2 3 4 5
returns 3.5
Or,
avg=: +/%#
avg 2 3 4 5
returns 3.5
Or,
sum =: +/
divided_by =: %
tally =: #
So, if you want to take baby steps into J you can:
avg=: sum divided_by tally
avg 2 3 4 5
returns 3.5
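For non-J readers: the `sum divided_by tally` line works because three verbs written side by side form a J "fork" - `(f g h) y` computes `g(f(y), h(y))`. A rough Python sketch of that mechanism (the names `fork` and `avg` here are my own, not part of J or Python):

```python
def fork(f, g, h):
    # Model J's monadic fork (f g h): applied to y, it computes g(f(y), h(y)).
    return lambda y: g(f(y), h(y))

# avg =: +/ % #  -- "sum, divided by, tally"
avg = fork(sum, lambda a, b: a / b, len)

print(avg([2, 3, 4, 5]))  # 3.5
```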
But that defeats the purpose of being able to concisely manipulate abstractions to get work done, trading it away for readability - and ONLY for those people who refuse to learn the symbols, as in mathematics. Have you ever heard an adult with basic mathematics say, 'I have to keep looking up that Greek letter π'? It only takes me a very little while to review my concise code. I'd rather learn this sort of pattern recognition of short code when dealing with high abstractions than deal with the spaghetti-like appearance, to me, of Java or C (and I like C!). Strangely enough, I find Lisp clearer than C-like languages. It may be more than subjective preference - perhaps it's the order of reasoning in the syntax.
Most adults give up on maths around the level where the number of symbols starts to grow, or slightly before. (To be clear, I'm not claiming notation is the only cause of this - it certainly isn't - but it does mean we really don't know whether most people would manage to pick it up; personally, I think it is a major factor in making people struggle with maths.)
For my part, the notation was certainly part of it. It matters. I used to love maths but I started finding it impossible to read even as the concepts were still easy enough to understand. I'd write things out as programs instead to understand it without being hampered by the notation. I learned symbolic differentiation that way, for example.
But I quickly realised that this basically closed maths off to me as a viable subject, and I opted out of all optional maths courses other than boolean logic for my CS studies.
I get that for those who find mathematical notation easy to work with, it seems indispensable, but don't underestimate the amount of people for whom the notation is the barrier that basically makes mathematics inaccessible.
I've picked up quite a bit of maths since, but always by understanding the concepts through code rather than trying to parse mathematical notation.
I've had - and to a degree still have - similar trouble understanding math notation. Some 9 years ago a friend, a solid mathematician, gave me good advice: at least try to pronounce the math expression aloud. That forces you to pay attention to each symbol - instead of trying to immediately find word-like "phrases", failing at that, and missing the expression as a whole.
APLs are - as Arthur Whitney's code exemplifies - supposed to be read symbol by symbol, since each one represents a whole operation. That's why APL programs are so dense - you can get plenty of work done in a short line (though many APL writers prefer to keep lines short, to help with understanding). A screenful of APL can be a moderately sized program, with no need for scrolling... When you write J, you might be tempted to add more and more operations on the left (APL and J work from right to left), until it's hard to "explain" what the whole expression does - because it does so much. The alternative - short, understandable expressions - brings another problem, that of naming :) - giving a traditional, long-worded name to an intermediate result doesn't fit well with the rest of the style... So a good sense of balance is very valuable.
Another saying that may help in understanding APL's mindset: APLers spend 5 minutes writing a program, then spend the next hour writing the same program better and cleaner. That's what I understand as refactoring; the idea is to make the code more readable and the expression more obvious, more obviously correct, more reusable... Properly done, the expression is a pleasure to look at and well worth the effort to decipher symbol by symbol. Another great option is documentation - lots of comments, just as in good math texts, where the idea is explained in plain English and then succinctly put into code.
This way I see the math background behind the problem better. I certainly have an easier time changing something here and there - the changes required are so small. I also see similarities between different pieces of code when they are written as similar expressions close to each other, on the same screen. Yes, the notation is terse... but it has its good sides.
Musicians learn to read music. Yes, it is hard when notation is taught first, before playing or listening - but that's how it has (incorrectly) been taught. Same with mathematics. Learning mathematics by coding is perfectly fine, but when you start to climb to more abstract levels, you need abstract symbols. That can, and should, come later, I believe. Still, I would rather learn C = 2πr as standing for the circumference of a circle than write it out longhand. Not to mention, diagrams are good too, for understanding what a 'radius' or 'pi' is. The longhand version would be at least a paragraph - you just wouldn't write many pages of longhand to avoid symbols. Fear of mathematics is a result of poor teaching, not of symbols.
You are exactly right. My favourite is when resistance to APL-family languages comes from people who claim to be polyglots; you ask them which languages, and they list: C, C#, Java, JavaScript, VB, Ruby, Python, etc. Those are all really the same language with slightly different wording and details. Like German and English. The APL family is like Mandarin or Japanese. Lisp is like Latin (in fact, Lisp is really a mechanism for authoring your own language with s-exprs). Obviously, these languages won't be comprehensible to you until you put the effort into learning the abstractions... this isn't like transitioning from C++ to Java, where you can broadly carry the same concepts over.
There is nothing difficult about reading these languages for the people who use them, any more than it is difficult for a musician to read sheet music. In fact, APL languages have less ambiguous parsing and precedence rules, which I find makes them easier to read.
> Those are all really the same language with slightly different wording and details. Like German and English.
I'd argue you only think that because they've opted for a familiar syntax. E.g. the object model (or lack of one) is vastly different between these languages and they employ drastically different type systems.
Ruby is closer to Smalltalk than C in most respects other than syntax, for example, and provides most of the abilities lisp gives you, including the ability to define domain-specific languages. The major aspect you're not getting from Ruby would be homoiconicity.
Here's an article comparing Lisp and Ruby[1].
I'm not saying learning languages outside of this group isn't important, but that a lot of the reason why languages like Lisp and APL are seen as so different has more to do with syntax than semantics. Most people don't know what the semantic differences even are, because getting past the alien syntax is too much effort. When you do, the differences aren't all that huge.
> It's only to beginners that they feel strange - pretty soon programmer learns them, gets used to them and refuses to substitute them with more readable longer variants
That reminds me of how McCarthy always intended to replace S-expressions with M-expressions, but it never happened because programmers found they liked the S-expressions after all.
I believe Shen [1] creates m-expressions when certain expressions are passed through its compiler. Shen should appeal to Haskellers and Lispers alike. Deech has created a Shen port for elisp [2], and it exists on many other language platforms, because it runs on an enriched lambda calculus called Klambda, implemented as a small set of functions.
> Do you think the APL family can exist without the terse syntax?
I have yet to meet a programming model that can only exist in one specific syntax. The closest I've seen is lisp-style metaprogramming, which requires syntax that makes its nesting structure obvious (you're operating on pieces of syntax, so of course you need to know something about it); of course, you could use indentation instead of parens to show that structure if you like. APL-like operator lifting is entirely a semantic issue. It can exist just fine in s-expressions, ML-like syntax, Python-like syntax, etc.
APL-family languages typically even tie their own hands in terms of semantic flexibility because of their syntax. They have separate syntactic classes for (first-order) data, first-order functions, and second-order functions. The parser doesn't know what to do with an identifier unless it knows that identifier's definition, which makes compiling APL pretty awkward (ever seen a parser that tries to do data flow analysis?). It also limits their flexibility as functional programming languages. J, at least, by keeping a symbolic representation of all functions at run time, enables a sort of workaround where you can package up a vector of first-order functions, but this is much more awkward to work with than actual first-class functions.
But what really disappoints me about APL-style syntax is limiting its flexibility as an array-processing language. The language semantics offer this great ability to automatically "lift" any operation to work on arbitrarily high-dimensional arrays. Want the square root of every number in a 3-tensor? Go for it! Want to average every column in a matrix? Just average it! In more recent incarnations, you can even mix and match the shapes you're lifting up to, like adding a vector to every column in a matrix.
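For readers outside the APL family, NumPy offers the same kind of lifting in Python syntax, though less tersely. A sketch of the three examples above (assumes only NumPy; the variable names are mine):

```python
import numpy as np

t = np.arange(24.0).reshape(2, 3, 4)  # a 3-tensor
roots = np.sqrt(t)                    # sqrt lifts over every element

m = np.array([[1.0, 2.0],
              [3.0, 5.0]])
col_means = m.mean(axis=0)            # average every column -> [2.  3.5]

v = np.array([10.0, 20.0])
shifted = m + v[:, None]              # add a vector to every column
```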
But that flexibility stops short if your function needs three inputs. Infix-only syntax has no way to write function application with more than two arguments, so you're forced to pack some of those inputs together into a single argument. Now you can't choose to lift the operation to every argument separately: you're forced to put the same array structure around multiple arguments. If you try to get around this by writing a second-order function (rather like currying), you now have one argument you can't lift to because only first-order functions get the dimension lifting.
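To illustrate the contrast: with prefix application a three-argument function poses no such problem, and each argument can still be lifted to a different shape independently. A NumPy sketch using np.clip, which takes three arguments (array, lower bound, upper bound), each broadcast on its own:

```python
import numpy as np

x = np.array([[0.2, 1.7, -0.4],
              [0.9, 2.3,  0.1]])
lo = np.array([0.0, 0.5, 0.0])  # per-column lower bounds
hi = 2.0                        # a scalar upper bound

# Three arguments with three different shapes - each lifted separately.
clipped = np.clip(x, lo, hi)
```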
So not only is the programming model not inextricably tied to this particular syntax, I'm not even convinced it's the best syntax for the programming model.
The semantic core of APL is operations on arrays. These same operations could be expressed in any other language. SaC[1] was a research project that expressly brought APL functions into a C-like language. Yorick[2] is an interpreted, C-like, array-based language with a production quality implementation.