Not really… unless you have a different definition of the word ‘constraint’ than I do. Personally, I don’t see ‘constraints’ as having anything to do with monads: in fact, I see the ultimate goal of monads as allowing you to relax various constraints a language imposes on your code — things like ‘execution proceeds from one line to the next’, or in Haskell ‘expressions have no side-effects’.
It might be more helpful to think about this in terms of actual code samples. Consider a data type which can represent a computation which can fail. This data type has two possible states: ‘Failed’, and ‘Success(return_value)’ (where the ‘return_value’ field can be anything). Now, let’s say you have one result which might have failed, and you want to pass it to another fallible computation, which we can represent as a function from an input to a fallible output. How do you do it? Well, you might define a function something like this (in vaguely Rust-like pseudocode):
fn andThen(value, nextComputation) {
    match value {
        Success(return_value) => return nextComputation(return_value);
        Failed => return Failed;
    }
}
That is, continue on with the next computation if you have a result, and short-circuit otherwise.
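To make this concrete, here's a runnable Rust version of the sketch above. The `Fallible` enum and the `parse`/`halve` helpers are purely illustrative names, not anything standard — though Rust's own `Option::and_then` and `Result::and_then` follow exactly this shape.

```rust
// A concrete version of the pseudocode above, using a small custom enum.
#[derive(Debug, PartialEq)]
enum Fallible<T> {
    Failed,
    Success(T),
}

// Run `next` on a successful value; short-circuit on failure.
fn and_then<T, U>(value: Fallible<T>, next: impl FnOnce(T) -> Fallible<U>) -> Fallible<U> {
    match value {
        Fallible::Success(v) => next(v),
        Fallible::Failed => Fallible::Failed,
    }
}

fn main() {
    // Two fallible steps: parse a string, then halve it if it's even.
    let parse = |s: &str| s.parse::<i32>().map_or(Fallible::Failed, Fallible::Success);
    let halve = |n: i32| if n % 2 == 0 { Fallible::Success(n / 2) } else { Fallible::Failed };

    assert_eq!(and_then(parse("42"), halve), Fallible::Success(21));
    // A failure in the first step skips the second entirely.
    assert_eq!(and_then(parse("oops"), halve), Fallible::Failed);
}
```

Note that `halve` never has to check whether the previous step succeeded — `and_then` handles all the short-circuiting plumbing.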
Now let’s consider a different case: what if your computation is instead nondeterministic, and can return multiple results? In this case, you might instead have a nondeterministic value — let’s represent it as a list — which you got from one computation, and you need to supply it as input to another nondeterministic computation. Then you’ll need to write something like:
fn andThen_nondet(value, nextComputation) {
    mut result = [];
    foreach (v in value) {
        result.extend(nextComputation(v));
    }
    return result;
}
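Again in runnable Rust, with illustrative helper names of my own choosing — this is the same pattern the standard library exposes as `Iterator::flat_map`:

```rust
// The nondeterministic andThen: feed every result of one computation
// into the next, collecting all the outcomes into one list.
fn and_then_nondet<T, U>(values: Vec<T>, next: impl Fn(&T) -> Vec<U>) -> Vec<U> {
    let mut result = Vec::new();
    for v in &values {
        result.extend(next(v));
    }
    result
}

fn main() {
    // Each number nondeterministically becomes itself and its negation...
    let signs = |n: &i32| vec![*n, -*n];
    // ...then each of those is kept only if positive (zero or one results).
    let positive = |n: &i32| if *n > 0 { vec![*n] } else { vec![] };

    let step1 = and_then_nondet(vec![1, 2], signs); // [1, -1, 2, -2]
    let step2 = and_then_nondet(step1, positive);   // [1, 2]
    assert_eq!(step2, vec![1, 2]);
}
```

A computation that returns an empty list plays the same role `Failed` did above: it prunes that branch of the computation entirely.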
Or consider a third case: promises for asynchronous values, which in most languages have an ‘andThen’ method too, allowing you to take one asynchronous value and pass it to another asynchronous computation. I’ll skip an example for this one since it’s so well-known, but this ‘andThen’ method has roughly the same structure as the others I’ve shown.
Monads, therefore, are nothing special — they’re just data types with an ‘andThen’ method to let you sequence them. The method always has the same form: unwrap the input to temporarily separate any value(s) from their context, pass those value(s) on to the next computation, and return its output wrapped in the same kind of context. This gets neatly summarised in the Haskell type signature:
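```haskell
(>>=) :: Monad m => m a -> (a -> m b) -> m b
```

That is: given a wrapped value (`m a`) and a next computation (`a -> m b`), produce a wrapped result (`m b`). `Maybe` (the fallible type), lists (nondeterminism), and promise-like types are all just different choices of the `m` in this one pattern.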