There is no inherent performance penalty associated with try/catch. At least in JVM languages, try is free (it's a compile-time phenomenon) and throw is also cheap (essentially a jump); it's only the creation of the exception that is expensive, and this can be avoided by pre-allocating an exception (at the cost of losing stack-trace info).
Developers are better off in the end with proper exceptions because these sorts of Result objects don't scale. Haskell ultimately embraces proper exceptions, and I suspect Rust will get there too eventually. F# actually has proper exceptions, which makes you wonder why the author is reinventing the wheel badly.
There are different ways to implement exceptions with different performance trade-offs.
The mechanism used by CPython (and, IIRC, many C++ implementations) involves a per-thread "exception occurred" flag which is checked after every function return (this makes sense performance-wise when essentially every function contains an implicit try/finally in the form of a bunch of Py_DECREF calls or destructors of locally scoped variables).
The same idea can be implemented by returning instances of a special "ExceptionThrown" class and checking for that, which you can in fact do with "if argument is ExceptionThrown, return argument" in the prologue of every function, which is exactly the thing described in the article. (Early versions of dfsch did exactly this.)
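That prologue-check scheme is easy to sketch. Here's a minimal illustration in Rust (chosen for concreteness; the `Value` enum and `half` function are invented for the example): every function first checks whether its argument is already the sentinel and, if so, passes it through unchanged, so a "thrown" value propagates up through ordinary returns.

```rust
#[derive(Debug, Clone, PartialEq)]
enum Value {
    Num(i64),
    ExceptionThrown(String), // sentinel standing in for a raised exception
}

// Halve an even number; "raise" by returning the sentinel for odd input.
fn half(v: Value) -> Value {
    // The prologue check: if the argument is already the sentinel,
    // return it unchanged -- this is the propagation step.
    if let Value::ExceptionThrown(_) = v {
        return v;
    }
    match v {
        Value::Num(n) if n % 2 == 0 => Value::Num(n / 2),
        Value::Num(n) => Value::ExceptionThrown(format!("{} is odd", n)),
        other => other,
    }
}

fn main() {
    println!("{:?}", half(half(Value::Num(8)))); // Num(2)
    println!("{:?}", half(half(Value::Num(6)))); // 6 -> 3, then the sentinel propagates
}
```

The cost is exactly the per-call check the parent comment describes, paid whether or not anything ever "throws".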
Only when you know that in most cases you don't need to do significant cleanup on stack unwind (i.e. you have a tracing GC that directly scans the stack, or you don't allocate that much) does it start to make sense to implement stack unwinding with non-local jumps.
The typical implementation of that keeps a linked list of currently active try/catch/finally sites which has to be maintained at runtime (typically as part of activation records on the control stack). An extension of this idea is that such sites do not necessarily have to unwind the stack before running the recovery code (e.g. Windows' SEH). And beyond that, you can have two such lists, one for handling errors and the other for cleanup during stack unwind (e.g. the CL-style condition system).
So the author isn't reinventing the wheel. I should have explained that railway-oriented programming isn't supposed to replace try/catch; it's typically best suited for scenarios where errors are not exceptional. It works best when errors occur regularly, are part of the logic of the system, and where you want to take advantage of F#'s exhaustive pattern matching to ensure developers account for all the scenarios.
In this scenario, performance matters: you want the creation of the error (these aren't exceptions) to be relatively cheap, since it is a regular part of the system.
In these scenarios, using exceptions would be abusing exceptions, because exceptions are typically supposed to be exceptional.
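As a rough sketch of what that looks like in practice, here is the pattern in Rust, whose built-in `Result` mirrors F#'s union-based results (the `ValidationError` cases and `check_*` functions are invented for the example). Errors are plain values, cheap to construct, and the final match is exhaustive, so forgetting a case is a compile error:

```rust
#[derive(Debug, PartialEq)]
enum ValidationError {
    MissingName,
    InvalidEmail,
}

struct Input {
    name: String,
    email: String,
}

fn check_name(i: Input) -> Result<Input, ValidationError> {
    if i.name.trim().is_empty() {
        Err(ValidationError::MissingName)
    } else {
        Ok(i)
    }
}

fn check_email(i: Input) -> Result<Input, ValidationError> {
    if i.email.contains('@') {
        Ok(i)
    } else {
        Err(ValidationError::InvalidEmail)
    }
}

fn validate(i: Input) -> Result<Input, ValidationError> {
    // and_then chains the success track; the first Err short-circuits,
    // which is the "railway" switch described in the article.
    check_name(i).and_then(check_email)
}

fn main() {
    let bad = Input { name: "Ada".into(), email: "no-at-sign".into() };
    // The match must cover every variant, or the compiler rejects it.
    match validate(bad) {
        Ok(i) => println!("valid: {}", i.name),
        Err(ValidationError::MissingName) => println!("name is required"),
        Err(ValidationError::InvalidEmail) => println!("email is malformed"),
    }
}
```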
For a web server under high load, exceptions are extremely expensive because of the stack-trace capture mentioned below. The rule of thumb is to reserve exceptions for exceptional circumstances (OOM, unable to connect to the DB, etc.) and to use a pattern like the one in this article for un-exceptional failures such as required-field validation errors. Having done this cleanup a few times, I can say it makes a huge difference.
I've had a guy come in on an F# project that used exceptions in a relatively data intensive workflow and rewrite it to use Results like this.
The amount of boxing and unboxing as you go through the various switches can ultimately make this much less efficient than the exception-based flow. The code ended up something like 8 times as slow, and topped out the CPU on beastly boxes, which was surprising to me.
Performance myths like this are really interesting. A feature is usually not inherently responsible for bad performance; the way it is used is. Using such features can amplify the effects of bad software architecture, and sometimes a feature gets elected as the culprit when the real problem lies elsewhere.
If a little boxing and unboxing in a functional language takes 7 times as long as the rest of the code, something must be very very wrong. It's hard to believe that.
In this concrete case: a data-intensive workflow should really have few or no error situations, so the performance of exceptions should not matter at all. I would guess that reworking the code to avoid exceptions also involved reworking the program structure.
I don't believe the boxing/unboxing performance claim either. I built a large project in this style (in F# specifically) and the efficiency is not a problem. It's possible said person built a custom implementation that had issues, but if you use ILSpy to decompile how union types end up compiled, you'll see it comes down to a boolean check against an integer followed by a pointer dereference.
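The "tag check plus dereference" point isn't specific to .NET. The analogous claim is easy to check in Rust, for instance: matching a `Result` is a branch on a small discriminant, and the layout is compact, with no hidden boxing for unboxed payloads (sizes below are typical for 64-bit targets):

```rust
use std::mem::size_of;

fn main() {
    // An 8-byte payload plus a discriminant, padded to alignment
    // (typically 16 bytes on 64-bit targets):
    println!("{}", size_of::<Result<u64, u64>>());
    // With a niche (references are never null) the tag costs nothing,
    // so the Option is the same size as the bare pointer:
    println!("{}", size_of::<Option<&u8>>());
}
```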
OCaml has a very fast exception throwing/handling mechanism, which is apparently why some applications built in OCaml make heavy use of exceptions for control flow (which, as has been mentioned by others, is a nightmare to reason about if you push it too far). The appropriate strategy for porting such code to F#, for example (a highly similar language to OCaml), is to switch to unions in the return type.
The approach advocated by Scott is a general pattern around which you build an entire application. It does marvels when building a system surrounded by others that you cannot trust, against a database with brittle consistency (as most 15+ year old line-of-business databases eventually become).
As for some of the other suggestions advocated in the comments, note that composability becomes a factor. If you are dealing with a collection of entries, each of which might fail or succeed, the railway-oriented programming approach scales to handling collections of errors/successes gracefully, which so many error-handling strategies fail to do.
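To make the composability point concrete, here is one way (a sketch in Rust, not the article's code; `parse_age` is invented for the example) the pattern extends to a collection: validate every entry and partition the outcomes, rather than aborting at the first failure as an exception would.

```rust
// Parse one entry; the error is an ordinary value, not a thrown exception.
fn parse_age(s: &str) -> Result<u32, String> {
    s.parse::<u32>().map_err(|_| format!("'{}' is not a valid age", s))
}

fn main() {
    let inputs = ["34", "abc", "7", ""];
    // Run every entry through the validator, then split the outcomes:
    // no entry can short-circuit the others.
    let (oks, errs): (Vec<_>, Vec<_>) =
        inputs.iter().map(|s| parse_age(s)).partition(Result::is_ok);
    let ages: Vec<u32> = oks.into_iter().map(Result::unwrap).collect();
    let errors: Vec<String> = errs.into_iter().map(Result::unwrap_err).collect();
    println!("ages: {:?}", ages);     // the two entries that parsed
    println!("errors: {:?}", errors); // every failure, reported together
}
```

Reporting all failures at once (e.g. every invalid field on a form) is exactly the case where the collection-of-results shape pays off.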
I have tried designing an application around Haskell's error monads in the past, and it was an absolute disaster. There was so much typing. The types became really hard to understand and the program structure became really rigid. I don't know that it can't be done better, but I've heard prominent Haskell guys say that combining errors (or any monads at all) sucks in practice.
And I've got zero problems with just handling errors procedurally. I realize that errors simply combine very badly: if there are many possible error kinds, the best you can do in the end is usually to just quit. That's not what they promised on the tin :-)
So in the end I just write procedural code and I'm careful not to get into situations where many different kinds of errors can happen. I select a few error cases that I care enough about to handle. Because handling even a single kind of error means a lot of complexity on top of a software project.
Yes, I agree it can become unwieldy if it is used throughout an application. I think the sweet spot is to keep the pattern at the application boundary. You can use business objects the way Scott describes them for the internals without threading Either/Result monads throughout your application. Moving calls to services and databases to the application boundary also helps keep the internals (where most of the business logic will reside) clean of Either/Result. For the few things left that could fail in the internals, stick to the exception mechanism and avoid catching. I'm a big believer that bugs should make the application crash.
You're describing a highly sub-optimal solution if there was performance degradation; lots of boxing/unboxing is a code smell.
The biggest exception penalty is paid only once, when the exception code needs to be JITed, along with more minor performance issues. Dollars to donuts, any CLR language will be faster using not-exceptions, so a refactoring to model the exception path would have to do a fair bit of work in unrelated areas to end up slower.
I think you misunderstood his argument. He's stating that favouring a Result-type monad (as opposed to using exceptions that may never occur) leads to boxing of values inside a tag of a union type. Threading exception handling where exceptions are never thrown in practice typically costs little.
However, this is really different from boxing of value types in a situation where you don't have access to generic collections, for instance, and as such we're talking about a different cost. Unless your entire application is passing integers or floats around (and not proper objects/records), it's highly unlikely the pattern is causing the situation he described.