The Austral Programming Language (austral-lang.org)
152 points by frutiger on July 27, 2023 | 118 comments


Flashbacks to TA'ing freshman programming 101 in Pascal: every student got hung up on when to use a period, a semicolon, or `end`. And from Austral's fib example (snipped):

    module body Fib is
        function fib(n: Nat64): Nat64 is
            if n < 2 then
            else
            end if;
        end;
    end module body.
We here all understand BNF, Ada, Modula, etc., and parsing, but imagine explaining to a first-day student: Why is there no "end function" like for the other contexts? When do I use a semicolon vs. a period to close a context? You shouldn't need the "railroad diagram" to understand the syntax.


Declarations don't need `end function` because it's always clear what you're closing.

Statements need an `end if`, `end for` etc. because it lets you find your way in nested code. The rationale for the syntax explains it a bit: https://austral-lang.org/spec/spec.html#rationale-syntax
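
For example, deep in nested code the suffixed keyword tells you which construct each closer belongs to. A sketch (the loop syntax here follows the spec's `for i from 0 to n do ... end for;` form):

    if n > 0 then
        for i from 0 to n do
            for j from 0 to n do
                -- ...
            end for;
        end for;
    end if;

A column of bare `}` or `end` tokens carries none of that information.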

FWIW I will probably get rid of the `module is ... end module.` bit because it adds unnecessary nesting.


I appreciate the explicit nest around module. I was just suggesting a symmetric and consistent syntax that can be stated simply, to align with the stated goal.

Dare I say, the (important) bit of Lisp syntax fits in a sentence! Yeah they hide the complexity in the library instead...


> Statements need an `end if`, `end for` etc. because it lets you find your way in nested code. The rationale for the syntax explains it a bit: https://austral-lang.org/spec/spec.html#rationale-syntax

From the link:

>> }

>> }

>> }

>> }

>> }

>> Which one of these corresponds to the second for loop? Unless we have an editor with folding support, we have to find the column where the second for loop begins, scroll down to the closing curly brace at that column position, and insert the code there. This is manual and error-prone.

I'm not fully convinced of that argument. Here's a devil's advocate take...

The keywords do indeed help when the opening of a nested control structure is off-screen. But if the code hasn't been refactored to move those structures into their own functions, so that the indentation has gotten that far out of hand, you likely have bigger problems with the code than determining which `end` corresponds to which control structure.

IOW, this sort of identification is moot: if it is needed, the code itself is in such poor condition that it's likely not very readable anyway. In many cases it won't even make a difference (with nested `if`s, for example, seeing multiple `end if`s doesn't help), and the developer is still going to place a comment specifying which particular `if ()` is being ended.

Having only the unadorned closing brace (`}`) leaves the developer with one of three options:

1. Refactor that code just to be able to read it, or

2. As you point out, add in comments like `// end if`, etc, or

3. Leave it as it is.

If it's left as is, there are bigger problems in the code anyway.


Why? No ability to nest functions? Why do you need a semicolon at the end of "end"?


> No ability to nest functions?

Correct

> Why do you need a semicolon at the end of "end"?

Per the rationale[1], "The purpose of the semicolon is to provide redundancy, which aids both reading and parser error recovery." Also, "For many people, semicolons represent the distinction between an old and crusty language and a modern one, in which case the semicolon serves a function similar to the use of Comic Sans by the OpenBSD project."

[1]: https://austral-lang.org/spec/spec.html#rationale-syntax


This language looks super promising. With the exceptions of 'no type inference' and 'no arithmetic precedence', I really like its anti-features list.

With regard to 'no arithmetic precedence', I tried

    printLn((1 + 2) + 3);
and

    printLn(1 + 2 + 3);
Sure enough, the first one compiles, but the second doesn't.

Also, (n-1) is a parse error unless you put a space after the minus.

I got curious if recursion was properly handled, given it wasn't in the anti-features list, but no luck:

    module body Foo is

        function go(acc: Nat64, n: Nat64): Nat64 is
            if n = 0 then
                return acc;
            else
                return go(acc + n, n - 1);
            end if;
        end;

        function main(): ExitCode is
            printLn(go(0, 135000));
            return ExitSuccess();
        end;

    end module body.
yields

    Segmentation fault (core dumped)


I prefer the rule in my own languages of "no arithmetic expressions whose meaning can be changed by adding parentheses". So `x + y - z` is allowed but `x - y + z` is not: `(x - y) + z` and `x - (y + z)` genuinely differ (for 5 - 2 + 1, they give 4 and 2), while both readings of `x + y - z` agree.


If operators are overloadable, you support floats (in a non-ffast-math mode), or you treat overflow in most non-modular-arithmetic ways, then `(x + y) - z` and `x + (y - z)` are different.
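
For the float case, a quick C sketch with the classic 0.1/0.2/0.3 values shows the two groupings producing different doubles:

    #include <stdio.h>

    int main(void) {
        double a = (0.1 + 0.2) + 0.3;  /* 0.60000000000000009 */
        double b = 0.1 + (0.2 + 0.3);  /* 0.59999999999999998 */
        printf("%.17g\n%.17g\n", a, b);
        printf("%s\n", a == b ? "equal" : "not equal");  /* not equal */
        return 0;
    }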

Maybe it's worth saying "they're close enough to the same that parentheses should be optional", but I can definitely see the argument for just requiring them regardless.


Valid point, though my current language supports neither overloading, floats, nor non-modular integer overflow, and parentheses are permissible where not required (for extra-semantical cases where order does matter) :)


This is a cool idea that I hadn't heard or thought of before.


I wouldn't rely on TCO being available; the bootstrapping compiler right now just emits very simple C (though GCC/LLVM might eliminate the recursion if they can).

Ideally I'd like stack exhaustion to be a clean abort rather than a segfault (just to make the error message more explicit) but I haven't got around to adding that.


That's curious - one of the example programs computes the Fibonacci sequence. The language is clearly still a work in progress. I wonder what the difference is between your recursion and what's listed?

https://austral-lang.org/examples/fib


> I wonder what the difference is between your recursion and what's listed?

How deep you recurse :D


I disagree with you about 'no type inference'. I understand why some swear by type inference, but personally I prefer the complete clarity that avoiding it provides. If writing those characters annoys you, have tooling fill them in for you.


For a language that seems to market itself based on being secure, this is pretty troubling


Austral is still alpha, so this kind of thing is expected. It's heading in a good direction, give it some time.


I would add 'no macros' to that list of exceptions too.


For me, this is of interest:

  Austral’s module system is inspired by those of Ada, Modula-2, and Standard ML, 
  with the restriction that there are no generic modules (as in Ada or Modula-3) 
  or functors (as in Standard ML or OCaml), that is: all modules are first-order.

  Modules are given explicit names and are not tied to any particular file system 
  structure. Modules are split in two textual parts (effectively two files), a 
  module interface and a module body, with strict separation between the two. The 
  declarations in the module interface file are accessible from without, and the 
  declarations in the module body file are private.

  Crucially, a module A that depends on a module B can be typechecked when the 
  compiler only has access to the interface file of module B. That is: modules 
  can be typechecked against each other before being implemented. This allows 
  system interfaces to be designed up-front, and implemented in parallel.
It was a mistake for C++, Java, and other languages to forgo splitting interface declaration from implementation definition, IMHO. Good to see that Austral learned from Modula-2.

Before I can form an opinion regarding Austral, though, I would need to see some larger programs implemented in it, for instance some low-level systems code, a generic data structure, some high-level business logic.


>It was a mistake how C++, Java and other languages forgot to split interface declaration from implementation definition

Huh? C++ is split into header files (interface) and cpp files (implementation)...


I guess there is no "strict separation" in C++, since that mechanism is, I believe, optional. Adding implementations to your header files might never pass PRs, but still.

It does enable header-only libraries though.


But there's no .cpp file for many (most?) uses of templates.


If you think about what templates are, it's not hard to understand why they must go completely in the header files. It's literally a source code template. It's a set of instructions to generate code at compile time, depending on the template parameters, hence 100% of the source code must be available at the point of instantiation. Just the interface is not enough.


Of course, but it still breaks the split between interface and implementation that people are talking about here.


Is no one going to talk about their capabilities system? That shit looks cool. A compile-time permissions system for which resources can be used. I wonder how foolproof that can be made. Are there escape hatches in the form of arbitrary assembly/linking? Could a leftpad module security issue be deterred with this?


Yes, any leftpad-like security issue could be mitigated by the fact that you’d need to inject strange capabilities like network access to the leftpad function.

It is assumed this would raise eyebrows from the user of this function. Furthermore if you were to take a “safe” function and replace it with a dodgy one in a later version, the function signature would change and users would need to update their code. So nothing quite so brazen would get past.
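
Sketching the idea, with hypothetical type and function names (`String` and `NetworkCapability` are illustrative, not the actual standard library):

    -- An honest leftPad needs no capabilities:
    function leftPad(s: String, len: Nat64): String;

    -- A version that phones home would have to demand one,
    -- and every caller would see it in the signature:
    function leftPad(s: String, len: Nat64, net: NetworkCapability): String;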

Of course, if you link arbitrary assembly/machine code into your binary, it might make a syscall directly, and that could potentially be unsafe.


Did not expect docs to be this exciting...

    On July 3, 1940, as part of Operation Catapult, Royal Air Force pilots bombed the ships of the French Navy stationed off Mers-el-Kébir to prevent them falling into the hands of the Third Reich.
    This is Austral’s approach to error handling: scuttle the ship without delay.


The British would probably be happy to do that even without the Third Reich. There are still British people today pissed about the French surrendering too quickly.


Given how controversial the British attack was on their allies even after the French assured them that ships would not be captured, I guess this passage is, as the kids say, "shots fired". :)

https://en.wikipedia.org/wiki/Attack_on_Mers-el-Kébir


The creator of Austral writes good fiction https://borretti.me/fiction/


I really love the Design Goals and Rationale sections of the specification[1]. I have an interest in the landscape of new low-level languages like Odin[2], Vale[3] etc. Austral has the clearest "statement of intent" about how it is designed and where it is going.

[1]: https://austral-lang.org/spec/spec.html

[2]: https://odin-lang.org/

[3]: https://vale.dev/


This is cool but everything is too verbose.

`austral compile hello.aum --entrypoint=Hello:main --output=hello`

vs

`go build`

Etc etc.


On the contrary, I like it when the interface to my tools errs on the side of precision at the cost of verbosity. I can always write a shell script or Makefile that does all the boring stuff once I've learned what the inputs mean, but if a tool only provides an overly simplified interface, it is much less obvious what it's doing, or what the other options might be, if any.

What does `go build` do? What files does it implicitly rely on? I have no idea. But I have a pretty good idea of what that `austral` command is going to do without having read any documentation about it.


You might, but I don't. In the argument `--entrypoint=Hello:main`, where does `Hello` come from? Is it some root module? What about `main`, is that some default, or the name of a file without an extension? This strikes me as just enough verbosity to be confusing, and not enough to be explicit.


Right above that line in the docs is a pretty clear view of 'Hello' and 'main'. It's a file "hello.aum" which declares a module 'Hello' with a function called 'main'. You compile the file into an executable by giving it the function to run as the entrypoint. This really couldn't be clearer, I don't think.


Feel free to invoke "go tool compile" manually. "go build" is just a frontend


The idea is the compiler has a bunch of explicit flags, but the build system (which doesn't exist yet) will have the `foo build`, `foo run` etc. commands and find the files using a package manifest.

Essentially like `cargo` vs. `rustc`. I have a little prototype of the build system in Python but haven't pushed it up yet.


what's encouraging you to conceive of the build system and the language as separate things? I never understood why most people making new languages seem to want to have each of these be distinct—why not just define the build using the same language?


So there are differing views on this; Zig famously has the build system built in.

To me, having them separate forces you to keep things simple, because the build system can't communicate with the compiler except through compiler-provided interfaces.

Also, what I like about languages like C and Rust is that, if I wanted to, I could implement the build system without forking the compiler. With C especially, because Make will print all the compiler invocations for you. It lets people build tooling that is not part of the compiler.

I think it's good from a simplicity perspective that language users can figure out what set of compiler invocations a build file "compiles down" to.


Realistically your provided build tool won't be everything. Cargo is enough to build my toy projects and even some fair-sized software written mostly in Rust, but it's not enough to build Mozilla, or Linux, or Android, or other large systems with a bunch of Rust in them - including Rust's own compiler and standard library.

But when you get big enough that this sort of provided tool isn't enough, chances are tooling is now somebody's actual problem anyway. If you're a for-profit, somebody's job is to look after the tooling; you can invest in learning a specialized tool, or even writing one, because that's a proportionate effort. It makes sense.


For the same reason I want my TV and media player to be separate devices, instead of an all-in-one "smart" TV.


it's not that I don't understand the analogy part of your analogy... I just don't understand how these things are actually analogous in any way.


I appreciated that it listed "no destructors" immediately after the top-line "no garbage collection", so I didn't need to read any further. What it means is that it offers no ability to encapsulate resource management, so it's not useful for me. That doesn't mean it is not useful to others.


No, on the contrary, Austral is entirely built around resource management. The central concept is linear types, which is about ensuring 1) resources are disposed of and 2) resources are used according to their protocol, e.g. no use-after-free.

There's "no garbage collection" because Austral lets you have manual memory management without the danger, like Rust.

There are no destructors in the sense of special destructor functions which are called implicitly at the end of scope, or when the stack unwinds. Rather, you have to call the destructors yourself, explicitly, and if you forget, the compiler will complain.

This sounds verbose until you start paying attention to all the mistakes you make all the time that involve, in some way, forgetting to use a value. The language makes it impossible to forget to do something.
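
A sketch of the flavor, with a hypothetical file API (`openFile`, `closeFile`, and `File` are illustrative names, not the current standard library):

    function demo(): Unit is
        let f: File := openFile("out.txt");
        -- ... use f ...
        closeFile(f); -- consumes the linear value; delete this line and it won't compile
        return nil;
    end;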


People might feel like it is too verbose, but I think it is good to have the clarity. I write C at my day job and I have no problem with the 'verbosity' if it provides clarity about what happens. What I want is for the compiler to help if I ever forget who owns a particular data value and fail to clean it up. For that, linear types are perfect. I also prefer their simplicity over Rust's affine types, which easily get very complicated (see the difference between Rust's borrow checker and yours). Linear types give me an easy way to define basic "state machines" for how to handle the data using types, and then verify that I implemented them correctly. That is kinda all I need. Feels like a good "get shit done" language.


From my reading of it, it does have what you're looking for. Specifically, while there are "no destructors", you are required to call a function to consume the value. Failing to consume the value is a compile time error. You can roughly approximate thinking about this as having destructors, but you're required to explicitly call them and the compiler won't let you write code that doesn't call the destructor.


This is nice. It does seem mutually exclusive with any early return though, like exceptions.

On the phone now so I only read the page on linear types, but will look at this closer when back at my desk.

In my own language I am considering destructors purely so that early returns are viable. I'd like to see if there is any alternative to destructors that isn't 'defer' or similar.


Exceptions and linear types are difficult to combine. Implicit destruction is fine though, provided destruction can't fail.


the page "What are linear types?" seems to address "resources" and the management thereof. not my cup of tea (but then, neither are destructors and garbage collection), but it's an interesting idea.


At least linear types mean you'll never forget it. They also solve the problem of what to do when your destructor needs to error (e.g. closing a file).

The big downside is the verbosity of covering every branch of your code with your explicit close calls unless another mechanism is provided.

And it doesn't seem like succinctness is a top priority for this language.


You missed a chance to learn something.


didn't read it yet, but there are other ways, specifically something like `defer` in Go or Zig


Looking nice, I like the rationale.

Some questions from my side:

- 1. As far as I understand, there are multiple models for a linear type system. Which one does Austral implement? Is it verified to be correct?

- 2. Since there is a static checker: What are the limits on 1. expressivity and 2. scalability?

- 3. What is the intended memory model (pointer and synchronisation)?


I agree. I would also like to know what the performance implications of this language are. Specifically, having two or more CPUs accessing a single object sounds easy in theory, but in practice it is quite complex.

My suspicion is that this language would be unusable for real time applications, which is ironically what it would be most useful for.


> No arithmetic precedence.

Interesting. I've wondered about this when making an expression parser. Obviously it makes parsing way easier and mistaken precedence is often a cause of bugs (especially in C where some of the operator precedence is plain wrong). But on the other hand that's got to be quite annoying surely?


In my experience you don't use nested arithmetic often enough to make it annoying. Most arithmetic in computing is basically `count := count + 1`?

What is more annoying to me is looking at an expression that mixes arithmetic and logical/comparison operators and mentally trying to recover the parentheses. Because precedence is not just PEMDAS: it involves all binary operators in the language, including logical and bitwise ones.
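
The classic C offender is `&` binding looser than `==`:

    #include <stdio.h>

    #define FLAG 0x4

    int main(void) {
        int x = FLAG;
        if (x & FLAG == FLAG)    /* parses as x & (FLAG == FLAG), i.e. x & 1 */
            printf("never printed for x = 0x4\n");
        if ((x & FLAG) == FLAG)  /* what was meant */
            printf("flag set\n");
        return 0;
    }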


Don't forget ternaries and null-checking operators too. `x ? y : z ?? "oh boy"`


It really depends on how you implement it. I don't think that requiring parentheses in something like a+b*c is annoying, but if they also prohibit a+b+c, that's a different story.


Smalltalk has this "feature" and was doing fine. It was pretty popular at some point.


I can understand the intent behind most of the design 'no's - except for subtyping. why is this a problem? I was just looking at the union/sum distinction and the same thing came up. maybe this is a good learning moment.


Type systems invariably have soundness issues when subtyping is allowed. They might've fixed it in the last couple of years, but as an example, I'm pretty sure Java and C# both have had bugs where some methods should be co/contra-variant and others should not, but the language only supports subtyping acting one way or the other for the whole type. For a concrete example:

1. Suppose Civic is a subtype of Car

2. Suppose you have a List<Civic>

3. Suppose a method asks for a List<Car>

4. You can "clearly" provide that original list of civics because every single thing in the list is a Car, and it's a list of those things, so it adheres to what the method seems to want -- maybe the method computes average cylinder count or something.

5. Everything we just described is fine and dandy so long as the list itself is immutable (the individual cars could still safely be mutable), but running some of the mutable list methods will cause runtime type errors, or outright memory corruption if there are no runtime checks. E.g., if you append a Toyota Camry to the list then you've somehow snuck a Camry into the original List<Civic>.

In that example, some of the methods would be safe if you interpreted a List<Civic> as a List<Car> (like grabbing the car at a particular index and finding that the particular car is a civic), but others require the subtyping relationship to go the other direction (e.g., if you interpreted the List<Civic> as a List<BlueCivic> and appended a BlueCivic then the invariants expected by `append` work in both cases).

That sort of thing just scratches the tip of the iceberg, and as a rule of thumb all your type systems are unsound, _especially_ if they involve subtyping. Things you would hope would be caught at compile-time are punted off to scary runtime heisenbugs that might not be detected for ages. The type system is helpful at reducing errors but woefully incomplete even for the things it's supposed to catch.
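
For what it's worth, Java's generics are invariant precisely to avoid this (a List<Civic> is rejected where a List<Car> is expected), but Java arrays are covariant and exhibit exactly this unsoundness, caught only at runtime:

    class Car {}
    class Civic extends Car {}

    public class Variance {
        public static void main(String[] args) {
            Civic[] civics = new Civic[1];
            Car[] cars = civics;  // allowed: Java arrays are covariant
            cars[0] = new Car();  // compiles, but throws ArrayStoreException
        }
    }

C# arrays are covariant too, for the same historical reasons, and fail the same way.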


> No subtyping.

Adding it opens a can of worms. Your type-inference/checking must account for "similar but..." types, you need to follow hierarchies/graphs/trees, etc., you need a way to "re-import" code that "belongs to the supertype" (and probably recheck it?), and it doesn't always mesh well with other features (or it makes them harder).

Aside: one of the big reasons to make a big list of "no"s is to avoid the temptation to "add some sugar to make this easier" without fully understanding the consequences until later.


Well, the language has no implicit type conversions, so there is not much point in having subtyping either.

On the other hand, subtyping interacts very, very badly with both type inference (it almost instantly becomes undecidable, in practice as well as in theory) and with typeclasses (again, decidability issues).


PL designers like to cast features that make their job harder and user's life easier as "anti-features"


This.


why are these new languages always built to solve problems in windows or unix or whatever

and the main problem in those OSes is that there are 21 languages, and that's before you even start talking about build systems, config files, query languages, and serialization. how would adding a new language to fix memory leaks make anything better? i see much worse problems here, like you still have to build an sql query out of a string. the whole point of rust having this was just to make c people more comfortable with using a post-90s (i.e. memory safe) language


I still didn't see in the documentation what's supposed to happen in case of overflow..


In accordance with the 'scuttle the ship' philosophy, Austral programs abort immediately when trapping arithmetic overflows, with the message "Overflow in trappingOpname (TypeName)".
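
So something like this sketch (reusing the syntax from the fib example upthread) compiles, but aborts at runtime:

    module body Underflow is
        function main(): ExitCode is
            let n: Nat64 := 0;
            printLn(n - 1);  -- traps: Nat64 arithmetic cannot go below zero
            return ExitSuccess();
        end;
    end module body.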


Yet Austral returns optional types from any memory allocation function rather than calling abort.

And stack overflow is a memory allocation failure, so why the discrepancy? For a language focused on correctness, this is an unfortunate omission.

On the other hand, none of the popular or semi-popular systems languages allow you to explicitly control stack consumption. Zig has some ideas, but I am not sure if those will be implemented.


There is a discrepancy because things the programmer can prevent are handled differently from things the programmer cannot prevent.

The programmer can always statically ensure that the program doesn't experience a trapped overflow, and that the stack size is not exceeded. All the information to do that is available when the programmer runs the compiler.

But there is no way to prevent a memory allocation failure when using `calloc`, since the information required to do that is not available when the programmer writes the code. In fact, when running on POSIX systems it's not even possible to check _in advance_ at runtime whether a memory allocation will succeed.

This is why Austral's allocateBuffer(count: Index): Address[T] returns an Address[T], which you have to explicitly null-check at run-time (and the type system ensures that you can't forget to do this).

Of course, on some non-POSIX systems such as seL4, the programmer can know at compile-time that memory allocations (untypedRetype) will not fail. When you use Austral on such systems, you don't have to use calloc/allocateBuffer at all.


To statically ensure a stack overflow does not happen requires that recursion is rejected by the type system. Austral does not do that, so stack overflow is a dynamic condition similar to memory allocation failures.


Maximum stack usage can be calculated in the presence of recursion. Tail calls can be handled as branches instead of nested call frames, but also non-tail calls are tolerable if you have (or can infer) some measure to determine maximum call stack depth.

It's a pain, and the type system rejecting any recursion is certainly simpler, but that's not a strict requirement.


Inferring the number of loop iterations or recursion levels is in practice impossible when the number depends on user input.

For a systems language, I would like to see a function call treated as fallible when the compiler cannot infer a bound on the stack size, or when that static bound exceeds some static limit.


> As in the real world, an object cannot be copied or destroyed without first filling out a lot of forms, but on the other hand, the transmission of objects is relatively painless.

This is fantastic. Clearly, bureaucracy is what we needed all along for memory safety.


> No destructors.

What's the logic behind this? It's nice to have a way to pair resource usage with disposal. Or does the linear type system ensure you don't forget about the resources which have to be closed?


Yes, the compiler will remind you if you haven't cleaned up a value of linear type.

The 'no destructors' rule follows the 'no hidden control flow' design, similar to Zig. Some don't like it, but personally I prefer it.


Yep, I agree with you that it's better to be explicit than implicit here. Initialization/destruction order is often tricky, and a source of obscure bugs.


Indeed, the linear type system makes sure that you won't forget to dispose of resources.

Linear types enable manual memory management without memory leaks, use-after-free, double free errors, garbage collection, or any runtime overhead in either time or space other than having an allocator available. More generally, it enables us to manage any resource (file handles, socket handles, etc.) that has a lifecycle without letting us forget to dispose of the resource (e.g. leaving a file handle open), dispose of it twice, or use it after disposal (e.g. reading from a closed socket), all without runtime overhead.

The Austral tutorial's chapter on linear types (https://austral-lang.org/tutorial/linear-types) explains how this works in a fairly clear way.


I see that 'async' is treated as an anti-pattern and that linear types are good for concurrency, but I don't see anything about concurrency primitives. Is there a plan for concurrency?


> designed to be simple enough to be understood by a single person

Interesting that this was mentioned. Are there languages that are not simple enough to be understood by a single person?


I think the author sees Rust as a language which, even if it's not beyond the mind of a single person yet, trends in that direction. He puts it pretty fairly in a recent blog post[1]:

> Rust does something very practical: it says what properties it will enforce, but doesn’t say how. That is, you know references have to uphold the law of exclusivity, but how the compiler achieves this is subject to change. So the borrow checker is allowed to evolve over time, in the direction of accepting more programs and becoming more ergonomic while retaining safety.

> The upside is that most of the time you can use Rust without thinking about lifetimes or the borrow checker. The downside is that the borrow checker is hard to spec (it’s basically “whatever rustc does now”) which makes it hard to have multiple implementations of Rust. Some people argue that’s fine or a good thing because multiple implementations waste effort. This argument has merit, but I think languages being specification-defined is a good thing from a stability perspective. It’s what they call a tradeoff.

[1]: https://borretti.me/article/type-systems-memory-safety#rust


Yes. C++ in its entirety is (probably) not understood by anyone.


This is quite a shallow observation, but I really wish new languages would cut down on verbosity and boilerplate. Making people type out “function” instead of “fn”, “def”, or nothing at all feels like unneeded friction.

I’m not looking for APL levels of terseness, but I also don’t want to have my code mistaken for an essay filled with what amounts to scaffolding.


I'm the opposite. I find the code much easier to read when it's verbose, including very long, descriptive function names and the like. And with modern IDEs and autocomplete, it's not really making people type out anything longer than the first couple of letters anyway. The gains are on the backend, where people are reading the code. And don't even get me started on unnecessary aliases in SQL!


With syntax highlighting being ubiquitous, does it really matter whether the keyword is "function" or "fn"? And at that point, why not make it terse instead of taking up more space and adding unnecessary noise?


If you're going to take that approach why have any keyword?

`foo(){}` is just as clear as `fn foo(){}`


A block is a namespace that has an entry point, and optionally a return value and arguments.

There is no need to distinguish between functions, modules, classes, lambdas, or whatever.

Hence, no need to distinguish their start or ending with keywords, as long as you can determine their scope.

Brackets determine scope, and unlike indentation or words, that is all they are used for.


The language’s syntax seems to be influenced by languages designed by Wirth, such as Pascal, Modula-2, and Oberon. These languages have many fans, but they have quite verbose syntax compared to either languages influenced by C or those influenced by the ML family.

Personally I prefer more terse syntax, but I’ve heard some people defend the style of more verbose languages like Pascal and Java when writing large programs.


Looks more like Ada to me.


Have you ever seen the d3 javascript visualisation examples? The author has a physics background, so the code ends up being very verbose with lots of comments about simple programming stuff, yet incredibly terse when it comes to heavy mathematics. I would have done the opposite.

It made me realize that people naturally want to be more verbose when they are less comfortable with the concepts and less verbose when they are very familiar with the concepts. It also made me realize it is totally subjective and there may not be a right answer, that it depends on someone's background and familiarities.


Or it means they cut and pasted the algorithm and don’t understand it.


For every person that says this, there's 5 people that say the opposite.


That’s because it fundamentally doesn’t actually impact all that much.

By most people’s account, the actual coding part of programming isn’t really a majority of their time. Domain research, speccing, debugging, testing, planning, etc. the microscopic savings in key presses are just really such a strange thing to even debate about when you think about it.

Major terseness pushes also tend to cause “symbol soup” and, personally, I find symbol soup very hard to digest.


>the microscopic savings in key presses are just really such a strange thing to even debate

It's about readability


Studies of program verbosity have shown that long vs. short names have little impact on other programmers' reading comprehension or code maintainability.


Experts[1] agree that boilerplatey declarations and verbose keyword groupings are bad and Java has been deemed inhumane because of this.

[1]Me


TIL.

Do you remember which languages they looked at?


I can't find the study I was thinking about, but here are two other studies I found. Neither study directly supports what I had recalled, but one study concludes longer names are more productive and the other concludes shorter names. :) How well these academic studies apply to production code is another question...

* "Shorter [C#] identifier names take longer to comprehend" (2019) https://link.springer.com/article/10.1007/s10664-018-9621-x

In this paper, we investigate the effect of different identifier naming styles (single letters, abbreviations, and words) on program comprehension. We conducted an experimental study with 72 professional C# developers who had to locate defects in source code snippets. ... We found that word identifiers led to a 19% increase in speed to find defects compared to meaningless single letters and abbreviations, but we did not find a difference between letters and abbreviations.

* "[Java] Identifier length and limited programmer memory" (2009) https://www.sciencedirect.com/science/article/pii/S016764230...

names used in existing production code are long enough to crowd programmer’s short-term memory. This provides evidence that software engineers need to consider shorter, more concise names. As the study considers individual names extracted from production code, it tends to underestimate the demand on memory because there is no need to remember context as well.

* "Evaluation of Rust code verbosity, understandability and complexity" (2021) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7959618/


You’re right, and that’s part of the reason I remarked on my comment’s shallowness. It was more of a personal lament than a strong criticism of what looks like a really interesting language.


Reminds me of this talk:

https://www.youtube.com/watch?v=5kj5ApnhPAE

And the quote to go along with it:

"I'm always delighted by the light touch and stillness of early programming languages. Not much text; a lot gets done. Old programs read like quiet conversations between a well-spoken research worker and a well-studied mechanical colleague, not as a debate with a compiler. Who'd have guessed sophistication bought such noise?"


"I'm always delighted by the light touch and stillness of early programming languages."

Except for COBOL, of course, which is one of the oldest.


Or Fortran being terse in all the wrong ways.


I'll say the Ada-style syntax grows on you.


I’d like it to just look exactly like C, but with new ideas in some incompatible syntax showing it is not part of C.


Ehh C is a pretty miserable language to parse. No function definition keyword means you have to get pretty far to realize that you're parsing a function. Pointer syntax is ambiguous with expression grouping. And let's not even get started with the preprocessor. I'd say a good model for simple syntax is Go. Rust has a nice compromise of syntax too, but there are some awkward edge cases.


Shame they never fixed this. It would have been nothing to add an fn keyword at the beginning of all function definitions and then use the MS EEE strategy to bring it into the standard. Of the many changes the committee could make to C, I have to believe a function keyword would be among the least contentious.


Least, perhaps? But certainly contentious.

In my view, the job of the C committee should be to eliminate undefined behaviour and any remaining specification ambiguities (or errors) and otherwise leave the damned language alone.

If you want a new language every 3 years, you can always use C++.


Probably couldn’t, since people could have named functions “fn”. Although idk, one keyword breaking in 50 years is probably excusable.


Well they didn't mind trampling all over symbols that start with 'str', which is bigger breakage in my opinion, so they could have added a function keyword.


Zig? Or Nim? Both of those are Algol-ish.

Ooh! What about [0]? Gambit Scheme has what they call `six-script` which is essentially a refactoring of Scheme grammar to fit into an Algol-esque syntax.

[0] http://gambitscheme.org/latest/manual/#GSI, see section 2.6.1


I agree. For a language designed to appeal to grumpy old programmers (see the list of so-called anti-features), it’s not exactly arthritis-friendly.


\ the ultimate


Has Austral been used for any real-world, production projects?

If so, what?


Almost certainly not, and there are good reasons for that.

Work on Austral was initiated by Borretti in 2021, but its earnest development really only commenced in January this year. So we're talking about a language that's, for most intents and purposes, less than a year old. Even if the January release had been stable, there would not have been time for anyone to develop and deploy significant projects in it. But it is not stable: it is still under construction, with certain inherent instabilities. Notably, recent updates changed the borrow syntax and FFI pragmas. Similarly, the surrounding infrastructure (compiler and standard library) is far from production-ready yet.

I built an Austral interface for the seL4 Core Platform (https://github.com/zaklogician/libmantle/tree/main), and ran some Austral apps on hardware. The purpose of this was experimental (we wanted to know how Austral can help us design fail-safe seL4 Core Platform APIs, and it was a success), but it is probably among the closest anybody has come to "production": I wrote more Austral code than is currently included in the Austral standard library.

It revealed several bugs, including a typo in Standard.Buffer which leads to the invariant check for the Buffer type always failing, and issues related to the compiler's handling of large unsigned literals.

Austral shows great promise, and has already provided us with valuable insights about linear API design, and exciting glimpses into its future capabilities. Using Austral in production code, however, might require a few more years of maturation and stabilization. So if you want something that people use for writing production code, check back in a couple of years.


This is pretty great to hear! I really hope that as Austral develops it can find a niche as a "minimum viable safe language" with a small surface area, spec, competition in implementations etc.


Thank you for the sober and interesting comment. I would also be interested in a blogpost or similar extended writing about this experience.


If you have the time, I would be delighted to hear about your experience, especially the things about Austral that were inadequate.





