There is a difference between visual programming and natural language.
Natural languages already have tokens, syntax, and grammar; visual fields do not. All of those elements must be imposed onto a visual language before it can be translated into machine language (e.g. there is no obvious convention for visual commands or visual conditionals).
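To make the asymmetry concrete, here is a rough sketch in Python (the line being tokenized is a made-up example, not from any real system): text arrives with ready-made units that a stock tokenizer can pick apart, while there is no off-the-shelf routine that does the same for a drawing.

    import io
    import tokenize

    # Text already carries its own units: a standard tokenizer breaks a line
    # into NAME, OP, STRING, etc. without any new convention being agreed on.
    source = "if caller_pressed(1): play('menu')\n"  # hypothetical snippet
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        print(tok.type, repr(tok.string))

    # There is no analogous standard call that takes a picture of a box with
    # a branching arrow and yields "conditional" tokens; that mapping has to
    # be invented and imposed first.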
Well, sure there's a difference between visual programming and natural language. There's a difference between both of those and augmented reality, which also has remained firmly demoware. And those three are different from the Minority Report-style interface that's been talked about a lot lately courtesy of the Kinect.
Note in particular the differing levels of what you might call "eventual feasibility" for each of those things... but none of that changes the fact that they are all demoware, and another demo isn't really all that intrinsically interesting until they push it somewhere actually useful, and thereby new. Demoware isn't necessarily a permanent status, but another demo is not sufficient to escape it.
>"there is no obvious convention for visual commands or visual conditionals"
I don't know about that. Within a particular domain, you can certainly come close to representing these things, or presenting the right controls for a user to represent commands and conditionals. For an example, check out QuickFuse http://quickfuseapps.com
Before we built this, we thought about common ways people "drew" voice apps, and commands and conditionals both have certain natural visual representations in the "voice app" space, e.g., blocks with branching arrows and writing text to indicate spoken words.
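If it helps, here's a rough sketch (hypothetical; not QuickFuse's actual data model) of what a "block with branching arrows" amounts to underneath: each block carries a spoken prompt, and a conditional is just a set of labeled edges keyed on the caller's input.

    from dataclasses import dataclass, field

    @dataclass
    class Block:
        say: str                                      # text spoken to the caller
        branches: dict = field(default_factory=dict)  # keypress -> next Block

    menu = Block(say="Press 1 for sales, 2 for support.")
    menu.branches["1"] = Block(say="Connecting you to sales.")
    menu.branches["2"] = Block(say="Connecting you to support.")

    # Drawn out, this is two arrows labeled "1" and "2" leaving the menu box --
    # the visual convention for a conditional in the voice-app space.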
The convention is arbitrary (e.g. most people don't have receivers on their phones, and hanging up is done by pressing a button, not by placing a receiver in a horizontal position).
If there were an obvious visual convention, the start button would not need to say "start" and the hang up button would not need to say "hang up."
That's not to say that arbitrary visual conventions can't have great utility (the alphabet being a case in point). But to illustrate the issues with graphic conventions, QuickFuse does not use the long-established conventions for flowcharting. It uses natural language instead.
Why would you exclude text labels from visual programming? I think this is an artificial distinction. Sometimes they provide the best balance of space consumption vs. clarity. Labels on their own aren't natural language; they are at most terms put into context by the surrounding graphics. Would your criterion be that visual programs use icons for every single concept?
Also, you'll find that where we departed from conventions for flowcharting, we did it to save pixels or make the UI more accessible. There is a tradeoff in visual programming between ease of editing and clutter--compare with Max MSP, which has a stark UI at the cost of having you memorize certain textual commands. If you draw a bare flowchart, it's not obvious how to manipulate it until you draw other GUI controls on top of it or make some modifications. However, it is the general paradigm we are tapping into.
>"Why would you exclude text labels from visual programming?"
My initial comment was in response to "fully visual programming" in the ancestor comment and the implication that natural language programming faced similar challenges.
The expediency provided by labels is a result of the arbitrariness of graphic interpretation. I am not suggesting that the use of labels isn't helpful, only that the use of labels doesn't really differentiate "visual programming" as a subset of programming, i.e. in and of itself a text label is not significantly more visual than text on a terminal screen.
For example, the label is necessary because the visual convention for "start" is ambiguous. Left, right, top, and bottom are all used as starting positions depending on the arbitrary conventions of the context. Likewise, a Ma Bell-styled receiver, a green flag, a vertical stroke, or a hand with index finger extended as if to press a button may all be used to indicate "start."
I used your departure from flowcharting conventions as an illustration of the unique problems with graphical communication conventions. I am not implying that deviating from flow charting convention is a bad idea.
My point is that your deviation from flowcharting conventions is arbitrary in the sense that it was driven by factors irrelevant to the process of flow charting (i.e. the limitations of the medium on which the flowchart is presented rather than concerns about the mapping of graphic symbols to processes).
It is "qsort.m" which provides the context for understanding the icon below it. However, the icon isn't more representative of a quick sort than a bucket sort or a bubble sort. It's simply an arbitrary mapping of a textual token (qsort) to a graphic token (example of arbitrariness is that lighter colors on top maps to lowest values to the left).
The arbitrariness of the mapping between text tokens and graphic tokens quickly outstrips the ability to establish a convention (example: "Number of cheeseburgers shipped from Peoria to Toledo last month.").