> and (c) requires a language feature (sequence points) to disambiguate in which order side effects get executed. With Haskell, we just don’t have to care.
Reading up to this point, I had to chuckle a bit. I have struggled with the Haskell type system more than I care to admit; it's almost never "we just don't have to care" compared with most other popular languages.
That being said, this article does a nice job of gently introducing some of the basics that will trip up somebody who is casually looking through code wondering what *> and <*> and <* do. As usual, there is a steep learning curve because of stuff like this all over the codebase. If I walk away for a month, I need to revisit >>= vs >> and other common operators before I can be productive. It probably doesn't help that I never actually speak to a human about these concepts so in my head it's always ">>=" and not "bind."
I try to avoid >>= and >> (or *>) because I know it trips people up; do-notation is more than fine. The exception is when parsing with one of the parsecs where you get a lot of <* and *> usage and all the tutorials use those symbols.
But I like <|> , it feels very clear that it has a sort of "or" meaning.
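For anyone who hasn't seen those operators in a parser before, here's a small sketch using base's `ReadP` (the parsec family uses the same operators; the parser names below are made up for illustration):

```haskell
import Control.Applicative ((<|>))
import Text.ParserCombinators.ReadP

-- a parser for a run of digits
number :: ReadP Int
number = read <$> munch1 (`elem` "0123456789")

-- *> drops the left result, <* drops the right one:
-- parse "(42)" keeping only the number between the parens
paren :: ReadP Int
paren = char '(' *> number <* char ')'

-- <|> reads naturally as "or": a bare number or a parenthesised one
value :: ReadP Int
value = number <|> paren

main :: IO ()
main = mapM_ print [runP paren "(42)", runP value "7"]
  where runP p s = fst (last (readP_to_S p s))
```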
> I try to avoid >>= and >> (or *>) because I know it trips people up; do-notation is more than fine
Interesting. Probably it's just me, but I first learned monads using >>= (in OCaml), so at the beginning I found the Haskell do notation more confusing (and the indentation rules didn't help). >>= is just a function and I understand its signature well. On the other hand, "do" is syntactic sugar that I sometimes had to mentally desugar.
Funny, but I remember the difference between `>>=` and `>>` even though I haven't written Haskell in a couple of years.
`>>=` passes to the right the value coming from the left, while `>>` drops it.
To give an example, you use `>>=` after `readFile` to do something with the contents.
You use `>>` after `putStrLn` since `putStrLn` doesn't return a meaningful value.
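A minimal sketch of that distinction (the file name here is made up):

```haskell
main :: IO ()
main =
  writeFile "demo.txt" "hello" >>       -- writeFile returns (), so >> just sequences
  readFile "demo.txt" >>= \contents ->  -- >>= passes the file contents to the right
  putStrLn contents                     -- prints "hello"
```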
> It's almost never "we just don't have to care" when comparing to most other popular languages.
Struggling with the Haskell type system is not the experience of somebody who has developed an intuition about it. Granted, it is not a binary thing; you can have good intuition about some parts of it and struggle with others.
I think the way you put it is, while technically true, not fair. Those "most other" languages are very similar to one another. It is no achievement of C#'s that you don't struggle with its type system coming from Java.
This is like people struggling with Rust because of the borrow checker; well, they have probably never programmed with a borrow checker before.
I struggled with the borrow checker because I’m smarter than it is, not because I haven’t worked with one before. Mainly I say “I’m smarter” because I’ve worked on big projects without it and never had any issues. Granted, I’ve only gotten in a fight with it once before giving up on the language, mainly because it forced me to refactor half the codebase to get it to shut up and I had forgotten why I was doing it in the first place.
Let's be realistic: Rust is great, especially compared to C.
But it replaces the need to memorize undefined behavior with the need to memorize the borrow checker rules.
If you are dealing with common systems-level needs like doubly linked lists, Rust adds back the need for that superhuman memory of undefined behavior, because the borrow checker is limited to what static analysis can do.
IMHO, the best thing Rust could do right now is more clearly communicate those core limitations, and help build tools that help mitigate those problems.
Probably just my opinion, and I am not suggesting it is superior, but Zig-style lengths as part of the array type would mitigate most of what is problematic with C/C++.
Basically, the problem is a `char myArray[10];` really being `*myArray`: the array decays to a bare pointer and its length is lost.
Obviously the borrow checker removes that problem, but not once you need deques, treaps, etc.
If I could use Rust as my only or primary language, memorizing the borrow checker rules wouldn't be that bad.
But it becomes problematic for people who need to be polyglots, in a way that even Haskell doesn't.
I really think there's ways for Rust to grow into a larger role.
But at this point it seems that even mentioning the limits is forbidden, and people end up trying to invert interface contracts and leak implementation details when their existing systems are incompatible with the project's dogma.
It is great that they suggest limiting the size of unsafe code blocks etc., but the entire world cannot bend to decisions that ignore the real-world nuances of real systems.
Rust needs to grow into a language that can adjust to very real needs, and many real needs will never fit into what static analysis can do.
Heck, C would be safer if that were the real world.
I really do hope that the project grows into the role, but the amount of 'unsafe' blocks points to them not being there today, despite the spin.
In my particular case, I was absolutely sure it was safe without having to test it. Would it remain so forever? Probably not because software changes over time.
In those cases what you are supposed to do is write your logic using unsafe, and wrap it with lifetime-safe APIs. That way your "smartness" joins efforts with Rust's, and not-so-smart people can use your APIs without having to be as smart.
In a nutshell, first class effects and built in set of patterns for composing them get rid of boilerplate code. Combine that with type safety and you can churn out relatively bug free code very fast.
I always maintain that this is just familiarity, Haskell is in truth quite a simple language. It's just that the way it works isn't similar to the languages most people have started with.
I believe there's a strange boundary around the idea of simple vs easy (to quote Rich Hickey) and I don't know what to call it (or if somebody has named it before).
functional and logical languages are indeed very simple, small core, very general laws.. (logic, recursion, some types) but grokking this requires unplugging from a certain kind of reality.
Most people live in the land of tools, syntax, and features. These look paradoxically simpler than SML/Haskell, so people are seduced by them, yet they are more complex at the same time (class systems are often large and full of exceptions). But that also makes it feel like they're learning something advanced (and familiar, unlike Greek single-letter variables and categ-oids :).
People intuitively expect things to happen imperatively (and eagerly). Imperativeness is deeply ingrained in our daily experience, due to how we interact with the world. While gaining familiarity helps, I’m not convinced that having imperative code as the non-default case that needs to be marked specially in the code and necessitates higher-order types is good ergonomics for a general-purpose programming language.
> Understanding the map signature in Haskell is more difficult than any C construct.
This is obviously false. The map type signature is significantly easier to understand than pointers, referencing and dereferencing.
I am an educator in computer science. The former takes about 30-60 seconds to grok (even in Haskell, though it translates to most languages, and even to the fully generalised fmap), but it is a rare student who fully understands the latter within a full term of teaching.
Are the students who failed the pointer class the same ones in the fmap class?
I didn’t say “using map”, I said “understanding the type signature”. For example, after introducing map, can you write its type signature? That’s abstract reasoning.
Pointers are a problem in Haskell too. They exist in any random access memory system.
That’s an unfair comparison because these are two unrelated concepts. In many languages, pointers are abstracted away anyway. Something more analogous would be map vs a range loop.
I suppose an absolute beginner would need someone to explain that Haskell type signatures can be read by slicing at any of the top-level arrows, so `map :: (a -> b) -> [a] -> [b]` becomes either:
> Given a function from `a` to `b`, return a function from a `list of as` to a `list of bs`.
or:
> Given a function from `a` to `b` and a `list of as`, return a `list of bs`.
I find the first to be the more intuitive one: it turns a normal function into a function that acts on lists.
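With the full signature `map :: (a -> b) -> [a] -> [b]`, both readings can be checked directly (the names below are made up for illustration):

```haskell
-- reading 1: map turns a plain function into a function on lists
lengths :: [String] -> [Int]
lengths = map length

-- reading 2: map takes a function and a list, and returns a list
doubled :: [Int]
doubled = map (* 2) [1, 2, 3]

main :: IO ()
main = do
  print (lengths ["hi", "there"])  -- [2,5]
  print doubled                    -- [2,4,6]
```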
Anecdotally, I've actually found `map` to be one of the most intuitive concepts in all of programming. It was only weird until I'd played around with it for about 10 minutes, and since then I've yet to be surprised by its behavior in any circumstance. (Although I suppose I haven't tried using it over tricky stuff like `Set`.)
`fmap` is admittedly a bit worse...
`fmap :: Functor f => (a -> b) -> f a -> f b`
But having learned about `map` above, the two look awfully similar. Sure enough the same two definitions above still work fine if you replace `list` with this new weird `Functor` thing. Then you look up `Functor` you learn that it's just "a thing that you can map over" and the magic is mostly gone. Then you go to actually use the thing and find that in Haskell pretty much everything is a `Functor` that you can `fmap` over and it starts feeling magical again.
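To illustrate the "pretty much everything is a Functor" point, the very same fmap works across unrelated types:

```haskell
main :: IO ()
main = do
  print (fmap (+ 1) (Just 2))                       -- Just 3
  print (fmap (+ 1) [1, 2, 3])                      -- [2,3,4]
  print (fmap (+ 1) (Right 2 :: Either String Int)) -- Right 3 (maps the Right side)
```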
You and I have a math part of our brain that appreciates the elegance of the algebraic structure.
I’m saying that thing you did where you start representing concepts by letters which can be populated by concrete objects is not a skill most people have.
Maybe at its core, but Haskell in the wild is monstrously complex because of all the language extensions. Many different people use different sets of extensions so you have to learn them to understand what’s going on!
Not really, the vast majority of extensions just relax unnecessary restrictions. And these days it's easy to just enable GHC2021 or GHC2024 and be happy.
Accessibility is not an issue. It takes only a little bit of effort to get productive with a Haskell codebase. I think it's more of a mental block because the language is different from what one might be used to. What Haskell needs, and doesn't have, is a compelling reason for people to make that small effort (i.e. the killer usecase).
"Relatively bug free code very fast" sounds like a killer use case to me.
So why hasn't it happened? Some possibilities:
1. People are just ignorant/unenlightened.
2. Haskell is too hard to use for most people. I think that different programmers think in different ways, and therefore find different languages to be "natural". To those whom Haskell fits, it really fits, and they have a hard time understanding why it isn't that way for everyone, so they wind up at 1. But for those who it doesn't fit, it's this brick wall that never makes sense. (Yes, this is about the same as 1, just seen from the other side. It says the problem is the language, not the people - the language really doesn't fit most people very well, and we can change languages easier than we can change people.)
3. Haskell isn't a good fit for many kinds of programming. The kind of programs where it fits, it's like a superpower. The kinds where it doesn't, though, it's like picking your nose with boxing gloves on. (Shout out to Michael Pavlinch, from whom I stole that phrase.)
What kinds of programs fit? "If you can think of your program like a pipe" is the best explanation I've seen - if data flows in, gets transformed, flows out. What kind of program doesn't fit? One with lots of persistent mutable state. Especially, one where the persistent mutable state is due to the problem, not just to the implementation.
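A toy example of the pipe shape, assuming the job is just upper-casing lines:

```haskell
import Data.Char (toUpper)

-- data flows in, gets transformed, flows out: a pure pipeline
shout :: String -> String
shout = unlines . map (map toUpper) . lines

main :: IO ()
main = putStr (shout "hello\nworld")  -- prints HELLO then WORLD
```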
Haskell is superb at handling persistent mutable state. The only problem is that you may have analysis paralysis from choosing between all the choices (many of them excellent).
Off the top of my head: memory usage challenges for junior Haskellers (laziness footguns); the State monad being fundamentally flawed (there is no way to get at and log your application state just before a crash); bloated tooling; GHC frequently breaking existing code. Laziness and monadic code also make debugging painfully difficult.
I acknowledge that those things can be challenging, however I'd like to respond to some of the specific issues:
- Space leaks due to laziness are a solved problem. I explain the technique to solve it at: https://h2.jaguarpaw.co.uk/posts/make-invalid-laziness-unrep... This technique has not completely percolated throughout the community, but I am confident that it does actually resolve the "laziness causes space leaks issue"
- Flawed state monad: well, you point out the analysis of its flaws from the effectful documentation. That's correct. The solution is: just use effectful (or another similar effect system. I recommend my own: Bluefin)
- GHC breakage: I've been keeping an inventory of breakage caused by new GHC versions, since GHC 9.8: https://github.com/tomjaguarpaw/tilapia/ There has been very little! The Haskell Foundation Stability Working Group has had a massive effect in removing breakage from the ecosystem.
- Laziness and monadic code makes debugging painfully difficult: I mean, sort of, but if you're using monadic code in the style of a decent effect system like effectful or Bluefin this is a non-problem. It's hardly different from programming in, say, Python from the point of view of introducing debugging printfs or logging statements.
Thanks, I've followed along with a lot of your posting on the Haskell discourse. One thing regarding this matter:
>well, you point out the analysis of its flaws from the effectful documentation. That's correct.
Thinking more deeply about this: there is essentially no way to fix this issue with StateT, because the type choice, the monad, and the composability requirement all conspire so it cannot be undone. Does that signal something deeper that is wrong with the flexibility of Haskell, that we can progressively paint ourselves into a corner? Could it not happen again, with another late-breaking requirement, for effectful or Bluefin?
Basically, StateT over Either models either the application state or an error. So if your application throws, you lose the current state forever. They sort of painted themselves into a corner with this choice of type; there's no real way out now.
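A small sketch of the state loss, using transformers' StateT over Either (the names are illustrative):

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.State (StateT, put, runStateT)

-- an error thrown inside StateT s (Either e) discards the state built up so far
prog :: StateT Int (Either String) ()
prog = do
  put 42              -- update the application state
  lift (Left "boom")  -- crash: the 42 is gone for good

main :: IO ()
main = print (runStateT prog 0)  -- Left "boom" (no state left to log)
```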
I agree loosely with what haskman above says about creating relatively bug free applications, the guard rails are so robust. But those same guard rails mean you can paint yourself into a corner that it is harder to get out of without imperative state, case in point above.
Most functional languages give you so much more choice in managing persistent mutable state than the typical imperative languages. In the latter everything can be made into persistent mutable state so you have both those that are due to the problem and those that are due to the implementation.
Haskell gives you a wide range of tools, from simulated state like the State monad, to real ones like the ST monad and IORef inside the IO monad. For synchronization between threads you have atomic IORef, MVar, and TVar.
If your problem requires you to have persistent mutable state, Haskell helps you manage it so that you can truly separate the persistent mutable state that's due to the problem from the state that's due to the implementation.
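For instance, real mutable state with an IORef stays explicit and contained (a minimal sketch):

```haskell
import Data.IORef

main :: IO ()
main = do
  counter <- newIORef (0 :: Int)  -- a mutable cell, visible only where passed
  modifyIORef' counter (+ 1)
  modifyIORef' counter (+ 1)
  readIORef counter >>= print     -- 2
```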
The reasons are going to vary depending on who you ask. I personally don't agree with any of your reasons. In my opinion, as a long time user of Haskell, the practical reasons are the following -
1. Tooling has historically been a mess, though it's rapidly getting better.
2. Error messages are opaque. They make sense to someone familiar with Haskell, but others cannot make the leap from an error message to the fix easily.
3. It's a jack of all trades. The resulting binaries are not small. Performance can be very good but can be unpredictable. It doesn't compile nicely to the web. Doesn't embed well. There is basically no compelling reason to get into it.
4. The ecosystem is aging. You can find a library for almost any obscure usecase, but it would be many years old, and possibly require tweaking before it even compiles.
Oracle influenced / bought academia into teaching Java for a generation. See Dijkstra’s criticisms[1] from the time, from when his department was forced to stop teaching Haskell to undergrads for political reasons. Note that Haskell had not been too hard for Dijkstra’s undergrads.
Later, Python took its place, since people realized the Java ecosystem was way too complicated and was turning off would-be CS students. Python directly targeted the academic use case by having similarities to C, Java, and Bash; it was not a better language, it just made existing imperative and object-oriented assignments easier for classroom environments. Believe it or not, a lot of programmers and even academics sort of give up on exploring significantly unfamiliar directions after graduating.
Nah man, Oracle… here is a personal story. I was teaching at a Uni, introductory programming course. Dean hits me up and asks if I can teach introduction to web development as then current professor was going on maternity leave. I was like “heck yea, that sounds like fun.”
Before the first class I got an email from one student asking if they had to purchase the book for the class, since it cost $275 (this was years ago). I was taken aback: what kind of book costs $275? Even for a college textbook that was nuts. I told him not to purchase it until we met for the first class. I went to the office and saw my copy of the book: it was programming the web with Oracle Forms, from Oracle Press!!!! I talked to the Dean and he was like “yea, that is what we need to teach!” Needless to say, none of the kids bought the book, I did NOT teach Oracle Forms, and I was never given that class again :)
I don't think that's it. I know plenty of people who were taught lisp first thing at university, and as soon as someone handed them an imperative language, they never looked at lisp again. And lisp is way easier than Haskell IMO, as IO is just a function and not a philosophical concept.
I wouldn’t assume that your colleagues were less capable as undergrads than the students that Dijkstra encountered at UT Austin.
Imperative languages do offer many advantages over Haskell, in that most coursework and industry jobs use them and that, consequently, their ecosystems are much further developed. These advantages are a consequence of university programs' alignment with the imperative and object-oriented programming paradigms, to Oracle's benefit.
Your colleagues having never looked back at lisp is hardly evidence that Haskell would have been too difficult for them or that Oracle didn’t have a hand in this.
I don't think that holds water. We've had functional programming for longer than oracle or java have existed, and for far longer than oracle has owned java. Haskell itself has been around for longer than java or oracle-owned java.
Functional programming just seems harder for people to get into. Perhaps it's bad for everyone that people don't make that effort, but it doesn't seem like a conspiracy
My first CS class was in Scheme (R6 iirc), and the year after they switched to Python. Then came a thousand cries of failure to understand Python metaclasses. They are garbage at the REPL, and you have a distinct set of folks that edit their editors.
Most Python programmers don't really have to understand metaclasses or other advanced concepts like descriptors. The main metaclass they'd use would be to create abstract classes, and these days you can just subclass ABC.
He's asking for something that is really inappropriate. He lost in the CS department, and he wants the Budget Council to decide on what languages should be taught? Like they know anything about it!
He lost a political battle, and he's appealing it to the only place he can, and he's buttering them up to do it, but the people that actually know something about the topic decided against him already.
And, you're quoting only one side of the battle. One vocal and eloquent side, but only one side. Maybe look into why the UT CS department made that change? (And not why Dijkstra says they did.)
4. History. In those types of discussions, there are always "rational" arguments presented, but this one is missing.
> One with lots of persistent mutable state.
You mean like a database? I don't see a problem here. In fact, there is a group of programs, large enough that it cannot be 3, that Haskell fits nicely: REST/HTTP APIs. That is pretty much data goes in, data goes out.
No, I mean like a routing switcher for a TV station. You have a set of inputs and a set of outputs, and you have various sources of control, and you have commands to switch outputs to different inputs. And when one source of control makes a change, you have to update all the other sources of control about the change, so that they have a current view of the world. The state of the current connections is the fundamental thing in the program - more even than controlling the hardware is.
Thanks. This does sound like a state machine, though, but the devil is probably in the details. Yes, here Haskell is probably a bad choice, and something where direct memory manipulation is bread and butter should do better. Which is completely fine; Haskell is a high level language.
But in your example, PHP is also a bad choice, and alas, it dwarfs Haskell in popularity. I can't really think of where PHP is a great fit, but Haskell isn't.
Okay, might be definitional, but when I think of 'first class', I think of something baked in to the language. So in Haskell's case, in the Prelude I suppose.
The generalized version of ‘traverse/mapM’ that doesn’t just work for lists, but any ‘Traversable’ type is absolutely amazing and is useful in so many cases.
‘traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)’
And you can derive it for free for your own datatypes!
The amount of code I’ve manually written in other languages to get a similar effect is painfully large.
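For instance, validating a whole list of inputs collapses to one word (readMaybe is from base's Text.Read; parseAll is a made-up name):

```haskell
import Text.Read (readMaybe)

-- one failed parse anywhere makes the whole result Nothing
parseAll :: [String] -> Maybe [Int]
parseAll = traverse readMaybe

main :: IO ()
main = do
  print (parseAll ["1", "2", "3"])  -- Just [1,2,3]
  print (parseAll ["1", "oops"])    -- Nothing
```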
I had a generic report class that essentially fetched a bunch of database rows, did some stuff for each row, and then combined the results together into a report. (This was in Scala, I know Haskell doesn't have classes, but presumably similar situations can happen)
For one client, we needed to accumulate some extra statistics for each. For another, we needed to call their web API (so async I/O) to get some of the data used in the report. By making the generic superclass use a generic Applicative type, we could keep the report business logic clear and allow the client-specific subclasses to do these client-specific things and have them compose the right way.
Wanting custom applicative types is rarer than using a standard one, but it can be a good way to represent any kind of "secondary effect" or "secondary requirement" that your functions might have. E.g. "requires this kind of authorisation" or "must happen in a database transaction". But a lot of the time you can implement custom things using reader/writer/state, or free, rather than having to write a completely from-scratch applicative.
"This seems rather … procedural. Even though we get all the nice guarantees of working with side effectful functions in Haskell, the code itself reads like any other procedural language would. With Haskell, we get the best of both worlds."
Working with the IO monad is much more complex, especially if you want to use other monadic types inside that code.
> Working with the IO monad is much more complex, especially if you want to use other monadic types inside that code.
Yes, mixing other monadic types with IO can be a challenge. One possible solution is to just not use other monadic types inside that code, and use a general-purpose effect system instead. I recommend effectful or Bluefin (the latter is my own project).
I like Haskell, but I think it suffers from the same issues preventing most lisps to gain any traction: every codebase is different and reinvents its own DSL or uses different extensions.
Lispers hailing macros and metaprogramming cannot understand that power is also the very thing that makes jumping from project to project difficult, I have no intention of relearning your cleverly designed DSL or new extensions.
There's a reason why Java or PHP have plenty of widely used killer software, while monocle-wielding lispers and haskellers have very little to show after many decades.
It's not FP being the issue, it's just that their power attracts crowds interested in code more than the features/business/product.
I don't blame them, I love Haskell and Racket but I think very few teams can scale and make the compromise worth it.
The upside for haskell is that whenever you come to some new code, or return to your own old DSL-y code, the types are there as spikes in a cliff to help you move onward. With, e.g elisp, I always get a mild headache when I need to add some new feature to a codebase I myself wrote.
Perl was like this, though it did have wide adoption at one point. But people got sick of having 20 ways to do the same thing so Python was born. Is there an equivalent in the FP world?
Everyone else is responding with FOSS, so I'll respond with some companies:
Co-Star, the astrology SaaS, is apparently written with a Haskell backend. I'd love to have seen the casting call for that.
I believe the Mercury bank also runs most of their backend stuff on Haskell. Functional languages in general are surprisingly common among financial investment firms.
Some of Target's stuff is written in Haskell. I think there was at least one big Facebook project that was written in Haskell, but they may have moved away from it by now. Awake Security does some Haskell stuff.
One thing which might be surprising is Haskell is apparently quite strong for general backend web dev.
> Haskell is apparently quite strong for general backend web dev
Yep. Mostly because of the https://www.servant.dev/ framework (but see also IHP, Yesod, and other frameworks). Servant lets you declare your HTTP API at the type level, and then it will infer the correct types for your endpoint handlers. You can also extract OpenAPI specs from it, generate clients for Haskell or other languages, etc.
My current employer, Bellroy, uses Haskell for pretty much all new code and Servant for all new HTTP APIs. https://exploring-better-ways.bellroy.com/our-technology-sta... is an older post discussing the shift to Haskell. We've found Haskell code to be much more compact than the equivalent Ruby, and significantly more robust.
Going from Ruby to Haskell is, itself, quite a good signal of quality for me. Start strong and end stronger. Sounds like you've got a good thing going!
I maintain Haskell code for five different customers, some large projects, some smaller, projects of varying ages up to over a decade. All the projects do "server backend" stuff, some web frontend too. I love how secure I feel making changes to things I haven't touched in a while.
It does one of the things I find most annoying about Haskell programmers, which is that they think the language is magic just because it has ADTs.
> HASKELL MAKES ILLEGAL STATES UNREPRESENTABLE.
That's not true!
It's especially not true for numeric programming, Haskell really doesn't provide good support for that. Basically only Ada and dependent-type languages do.
It is most definitely true for numeric programming (and Haskell is _somewhat_ dependently typed). For example, look at this finite bit width concatenation operation:
How could I forget Serokell, too! An Estonian software development firm that uses Haskell and Nix as basic building blocks.
I think they were using Agda or something too for a while, but it appears I can't find what I'm thinking of on their site anymore. Really interesting guys if you're located in the Baltic states.
I've used Haskell several times for implementing isolated 'maths business logic' units in commercial backend applications.
One such system I built had the main (REST-API-exposing) backend implemented in Kotlin, with a separate application in Haskell applying a complex set of maths-driven business rules to GIS data to calculate area-specific prices.
The amount of IO on the Haskell side was fairly minimal and abstracted away quite nicely.
Haskell allowed expressing all complexity in a way that was easy to audit and translate from business/data analyst requirements.
Would do again :-) But only with the correct amount of isolation, so you can lean into Haskell's strong sides.
I've said for a while that Pandoc is one of the very few Haskell programs that isn't exclusively used by Haskell programmers.
There are plenty of interesting Haskell programs (e.g. Xmonad, Darcs), but a lot of people explicitly use them because they're Haskell.
Pandoc, on the other hand, is useful to pretty much anyone who has ever needed to convert documents. It's one of the first things I install on most computers.
There are stronger reasons to avoid Wayland. It's a protocol that manages to have all implementations slightly different, and creating an EGL context the way glxgears does results in a hard crash, with both AMD and NVIDIA cards. I assume I messed up the EGL context, but why does my entire desktop need to crash? Xkill is a much better UX, and that's kinda sad. An Xorg app failing dramatically doesn't murder my desktop session.
If you want to stay in the land of monads there is https://github.com/SimulaVR/Simula?tab=readme-ov-file "a VR window manager for Linux". Should've been called MetaMonad ;) but I guess that was already taken by the phylum metamonada, don't want to get on their bad side.
I think Pandoc too, but yeah it's a fairly accurate meme that Haskell isn't really used to make anything except Haskell compilers and tutorials. Last time I checked there was actually an exhaustive list of software written in Haskell somewhere, which they meant as a "look how successful it is - all these projects!" but is really "it's so unsuccessful we can actually write down every project using it".
I once had a hard to track down bug in some code making use of conduit[0], which is introduced using examples like `main = runConduit $ (yield 1 >> yield 2) .| mapM_C print`.
Dutifully replacing every occurrence of (>>) with (*>), because it was more modern, suddenly changed the semantics somewhere, due to the fact that (>>) is defined with fixity `infixl 1 >>` and (*>) as `infixl 4 *>` - i.e. both are left-associated operators, but (*>) binds tighter than (>>) and some of the myriad of other operators you may encounter.
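The parse difference can be reproduced without conduit, using a toy operator with the same fixity as conduit's (.|), which is `infixr 2` (the operator here is just list append, for illustration):

```haskell
infixr 2 .|
-- stand-in for conduit's (.|); the body doesn't matter, only the fixity
(.|) :: [Int] -> [Int] -> [Int]
(.|) = (++)

withThen, withStar :: [Int]
withThen = [1, 2] >> [3] .| [4]  -- parses as [1,2] >> ([3] .| [4])
withStar = [1, 2] *> [3] .| [4]  -- parses as ([1,2] *> [3]) .| [4]

main :: IO ()
main = do
  print withThen  -- [3,4,3,4]
  print withStar  -- [3,3,4]
```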
This is a bit surprising. Why replace `>>` with `*>` "because it's more modern"? Can't you just stick with what's working? The former hasn't been deprecated.
Haskell dilettante here… The “IO a” vs “a” reminded me of async vs sync - where the first one returns a promise/future to be awaited on, rather than a result.
Is there any parallel there, or is it an entirely wrong perception I got ?
Of course. Promise is a monad, .then is more or less equivalent to the >>= operator and await makes it look more imperative-ish just like <- in Haskell.
Note that in JS you'll need to be inside an async function to use await, just like in Haskell you'll need to be inside the do notation to use <-. Otherwise, you'll need to play with .then just like you would need to play with >>= in Haskell.
About other languages: one way to achieve asynchronous behavior without having it implemented in the language is using ReactiveX (https://reactivex.io/). It's hard to understand at first, but if you understand the IO Monad it becomes easy, as it's basically the same.
Yes, except in a lot of languages, a Promise is the representation of the result of a computation that may already be happening, whereas an IO is a computation that will have data injected into it by some interpreter. But it's a very close comparison, in the sense that both represent contexts that future computation can be appended on. Also, in some languages, the Promise type is truly monadic.
There’s a parallel because Promises in a language like JavaScript are “monad-like”, so they’re similar to the IO Monad here. I am not a functional wizard so I’m sure that was not a fair comparison in some way, but it’s how I have thought of it. They’re both a representation of a side effect and require that effect be respected before you can get to the value inside it
Not so much a representation of a side effect (after all, List is a Monad, as is addition on integers - no side effects anywhere in sight) as the reification of a particular category.
JavaScript's `Promise` is particularly interesting because it is a Monad over "all JS values which do not contain a method called `then` in their prototype chain" or as Dominic put it 12 years ago:
> Indeed, I like the way @medikoo phrases it. There's, practically speaking, nothing wrong with being a monad on the category of non-thenables.
> Is this really so different from IO everywhere in Haskell?
It's a starting point. You'll have to somehow chain (e.g. you want to print "def" after "abc", how would you do that?). You'll also need to put a value into your IO, e.g. if you are inside that IO, all the computations will need to produce IO. If you solve those things, you'll have a Monad.
But, wait! You're in JS, right? You have IO Monads there, you perhaps just don't know it yet. They are the JS Promises! Look at this:
// assuming that you have a sync input() function, that receives
// a string from stdin and returns it
let input_str = new Promise((resolve, reject) => resolve(input()));
let input_int = input_str.then(s => parseInt(s));
let square = input_int.then(n => n * n);
square.then(sq => console.log(sq));
If you want to proceed with your poor man's IO, you'll need to implement your "then". Going further, the Haskell equivalent to that (not the best code, though) chains each step with >>=.
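Roughly, the same pipeline written with explicit >>= might look like this (`square` is a helper name invented for the sketch):

```haskell
-- one >>= per .then in the Promise version above
square :: Int -> Int
square n = n * n

main :: IO ()
main =
  getLine >>= \input_str ->               -- like input_str.then(...)
  pure (read input_str :: Int) >>= \input_int ->
  pure (square input_int) >>= \sq ->
  putStrLn (show sq)
```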
Ok, but Haskell has a syntax for cleaning that, called do notation:
main = do
  input_str <- getLine
  let input_int = read input_str :: Int
  let square = input_int * input_int
  putStrLn (show square)
And JS? JS also has its own syntax, using await. Just as in Haskell, where <- only works inside do notation, in JS await will only work inside async functions:
let input_str = await new Promise((resolve, reject) => resolve(input()));
When you express it as "Haskell has the IO monad" and "Haskell has the Maybe monad", it's no biggie, because other languages have them. All Java objects might or might not be there (so they all implement Maybe). And all Java methods might or might not do IO, so they all implement IO.
The real sell is: Haskell objects which are not Maybe are definitely there. Functions which are not IO will always give the same output for the same input.
I think as long as the code sticks to the discipline of never actually doing I/O but only manipulating functions that perform them it would basically be doing the same thing as IO monads in Haskell.
So print(s) returns a function that when called prints s. Then there needs to be function that joins those functions, so print(a); print(b) evaluates to a function that once called prints out a and then b.
What makes Haskell special in my opinion is 1) it generalises this way of achieving "stateful" functions, 2) enforces such discipline for you and makes sure calling functions never produces side effects, and 3) some syntactic sugar (do, <-, etc) to make it easier to write this kind of code.
Also note that the above example only does output which would be the easier case. When it comes to code with input, there will suddenly be values which are unavailable until some side effects have been made, so the returned functions will also need to encode how those values are to be obtained and used, which complicates things further.
Am I the only one who never tried Haskell but who when reading discussion about it ends up thinking real-world (with GHC extensions) Haskell has way too much (sometimes historical) cruft? It really detracts me from it.
In practice it only detracts a little bit. You can enable GHC extensions project-wide and there are alternate standard libraries (preludes) that are more modern.
If you want a language that is very Haskell-like without the historical baggage or the laziness, PureScript is very good. It’s main compile target is JavaScript, so it’s built for different use cases.
Am I right in thinking that there are efforts to provide a better out-of-the-box experience, with some of that cruft dealt with for people who don't need the backwards compatibility? For myself, I found long prologues of extensions/options/whatever massively off-putting.
It's a thirty-year-old language; it's bound to have cruft. However, modern codebases tend to showcase a pretty efficient combination of language features, oftentimes optimised for productivity rather than research in lazy FP. Examples of such codebases are https://github.com/flora-pm/flora-server or https://github.com/change-metrics/monocle
- map is easier on newcomers. Once you understand Functor, you'll remember to use fmap anyway. Also, I still use map for lists after 10+ years.
- I don't fully understand this, but do you mean that every `IO a` function would be better off being `IO (Maybe a)`?
- AFAIK, there are no dependent types in Haskell yet, but say that the type level programming you can do today is what you mean then you are already in quite advanced territory. Yeah, I guess it could be more polished.
I mean that `head` should be `List a -> Maybe a` rather than `List a -> a`. If you really really want an `a`, you should provide `NonEmptyList a` or something.
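A minimal sketch of that idea; `safeHead` is the conventional name for it, not part of the standard Prelude (and `Data.List.NonEmpty` already provides a total `head` for provably non-empty lists):

```haskell
-- total version of head: failure is visible in the type
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

main :: IO ()
main = do
  print (safeHead ([] :: [Int]))  -- Nothing
  print (safeHead [1, 2, 3])      -- Just 1
```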
It's a very good explanation but I'd welcome more humility. Instead of using the phrase lie-to-kids, I would say that all the dance around effects in Haskell is just an implementation detail and/or a leaky abstraction.
> and (c) requires a language feature (sequence points) to disambiguate in which order side effects get executed. With Haskell, we just don’t have to care.
Reading up to this point, I had to chuckle a bit. I have struggled with the Haskell type system more than I care to admit; It's almost never "we just don't have to care" when comparing to most other popular languages.
That being said, this article does a nice job of gently introducing some of the basics that will trip up somebody who is casually looking through code wondering what *> and <*> and <* do. As usual, there is a steep learning curve because of stuff like this all over the codebase. If I walk away for a month, I need to revisit >>= vs >> and other common operators before I can be productive. It probably doesn't help that I never actually speak to a human about these concepts so in my head it's always ">>=" and not "bind."
I try to avoid >>= and >> (or *>) because I know it trips people up; do-notation is more than fine. The exception is when parsing with one of the parsecs where you get a lot of <* and *> usage and all the tutorials use those symbols.
But I like <|> , it feels very clear that it has a sort of "or" meaning.
One exception for me about >>=, is instead of this:
thing <- getThing
case thing of
writing this:
getThing >>= \case
Not so much because it is less code, but fewer variables to name.
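A small sketch of the pattern; `getThing` is a hypothetical action invented for the example, and the LambdaCase extension is required:

```haskell
{-# LANGUAGE LambdaCase #-}

-- hypothetical action standing in for whatever getThing does
getThing :: IO (Maybe Int)
getThing = pure (Just 42)

-- instead of:  thing <- getThing
--              case thing of ...
main :: IO ()
main = getThing >>= \case
  Just n  -> print n
  Nothing -> putStrLn "no thing"
```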
You can pipe into Elixir cases too, and the language I'm developing also allows for this and more.
The difference here seems to be that \case is a lambda?
Yeah `>>= \case` is a really nice pattern! I use it a lot.
> I try to avoid >>= and >> (or *>) because I know it trips people up; do-notation is more than fine
Interesting. Probably it's just me, but I first learned monads using >>= (in OCaml), so at the beginning I found the Haskell do notation more confusing (and indentation rules didn't help). >>= is just a function and I understand its signature well. On the other hand, "do" is a syntactic sugar that I sometimes had to "mentally desugar" in some cases.
Funny, but I remember the difference between `>>=` and `>>` even though I haven't written Haskell in a couple of years. `>>=` passes to the right the value coming from the left, while `>>` drops it.
To give an example, you use `>>=` after `readFile` to do something with the contents. You use `>>` after `putStrLn` since `putStrLn` doesn't return a meaningful value.
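A minimal sketch of that difference, using pure Maybe values first (so nothing needs to read a real file), then the same shape in IO with `pure` standing in for `readFile`:

```haskell
main :: IO ()
main = do
  -- (>>=) feeds the left value to the right:
  print (Just "contents" >>= Just . length)  -- Just 8
  -- (>>) throws the left value away:
  print (Just "contents" >> Just ())         -- Just ()
  -- same idea in IO, mirroring the putStrLn / readFile example:
  putStrLn "reading..." >> pure "contents" >>= putStrLn
```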
> It's almost never "we just don't have to care" when comparing to most other popular languages.
Struggling with the Haskell type system is not an experience of somebody who has developed an intuition about it. Granted, it is not a binary thing; you can have good intuition about some parts of it and struggle with others.
I think the way you put it is, while technically true, not fair. Those "most other" languages are very similar to one another. It is not a C# achievement that you don't struggle with its type system coming from Java.
This is like people struggling with Rust because of burrow checker, well, they have probably never programmed with burrow checker before.
> burrow
FYI it's "borrow" (as in when someone lends something to you) not "burrow" (which is a tunnel/hole)
Starcraft player detected
i'm sure burroughs liked checker
I struggled with the borrow checker because I’m smarter than it is, not because I haven’t worked with one before. Mainly I say “I’m smarter” because I’ve worked on big projects without it and never had any issues. Granted, I’ve only gotten in a fight with it once before giving up on the language, mainly because it forced me to refactor half the codebase to get it to shut up and I had forgotten why I was doing it in the first place.
> I’m smarter than it is
Given how many important projects maintained by smart people have dealt with bugs that safe Rust makes impossible, I'm inclined to doubt that.
Let's be realistic, rust is great, especially compared to C.
But it swaps the need to memorize undefined behavior for the need to memorize the borrow checker rules.
If you are dealing with common system-level needs like doubly linked lists, Rust adds back in the need for that superhuman memory of undefined behavior, because the borrow checker is limited to what static analysis can do.
IMHO, the best thing Rust could do right now is more clearly communicate those core limitations, and help build tools that mitigate those problems.
Probably just my opinion, and I am not suggesting it is superior, but Zig-style slices (length as part of the type) are what would mitigate most of what is problematic with C/C++.
Basically, a `char myArray[10]` really being `*myArray` (decaying to a bare pointer) is the main problem.
Obviously the borrow checker removes that problem, but not once you need deques, treaps, etc...
If I could use Rust as my only or primary language, memorizing the borrow checker rules wouldn't be that bad.
But it becomes problematic for people who need to be polyglots, in a way that even Haskell doesn't.
I really think there's ways for Rust to grow into a larger role.
But at this point it seems that even mentioning the limits is forbidden, and people end up trying to invert interface contracts and leak implementation details when their existing systems are incompatible with the project's dogma.
It is great that they suggest limiting the size of unsafe code blocks etc., but the entire world cannot bend to decisions that ignore the real-world nuances of real systems.
Rust needs to grow into a language that can adjust to very real needs, and many real needs will never fit into what static analysis can do.
Heck C would be safer if that was the real world.
I really do hope that the project grows into the role, but the amount of 'unsafe' blocks points to them not being there today, despite the spin.
In my particular case, I was absolutely sure it was safe without having to test it. Would it remain so forever? Probably not because software changes over time.
In those cases what you are supposed to do is write your logic using unsafe, and wrap it with lifetime-safe APIs. That way your "smartness" joins efforts with Rust's, and not-so-smart people can use your APIs without having to be as smart.
Did you rewrite the code base into C++ to avoid the borrow checker?
No, I just stopped contributing to the project and never touched Rust again.
I gave a talk that goes more into what makes imperative programming better with Haskell compared to traditional imperative languages. https://speakerdeck.com/ajnsit/supercharged-imperative-progr...
In a nutshell, first class effects and built in set of patterns for composing them get rid of boilerplate code. Combine that with type safety and you can churn out relatively bug free code very fast.
Thanks for sharing, I really liked that!
With the downside of 99% of all devs not understanding anything. Sure, Haskell may be a great language, but an accessible language is greater still.
I always maintain that this is just familiarity, Haskell is in truth quite a simple language. It's just that the way it works isn't similar to the languages most people have started with.
I believe there's a strange boundary around the idea of simple vs easy (to quote rich hickey) and I don't know how to call it.. (or if somebody named it before)
functional and logical languages are indeed very simple, small core, very general laws.. (logic, recursion, some types) but grokking this requires unplugging from a certain kind of reality.
Most people live in the land of tools, syntax and features .. they look paradoxically both simpler than sml/haskell so people are seduced by them, yet more complex at the same time (class systems are often large and full of exceptions) but that also makes it like they're learning something advanced, (and familiar, unlike greek single variables and categ-oids :).
People intuitively expect things to happen imperatively (and eagerly). Imperativeness is deeply ingrained in our daily experience, due to how we interact with the world. While gaining familiarity helps, I’m not convinced that having imperative code as the non-default case that needs to be marked specially in the code and necessitates higher-order types is good ergonomics for a general-purpose programming language.
Familiarity is a part, but abstract reasoning is fundamentally harder than concrete.
Understanding the map signature in Haskell is more difficult than any C construct. Now do IO monad.
> Understanding the map signature in Haskell is more difficult than any C construct.
This is obviously false. The map type signature is significantly easier to understand than pointers, referencing and dereferencing.
I am an educator in computer science - the former takes about 30-60 seconds to grok (even in Haskell, though it translates to most languages, and even the fully generalised fmap), but it is a rare student that fully understands the latter within a full term of teaching.
Are the students who failed the pointer class the same ones in the fmap class?
I didn’t say “using map”, I said understanding the type signature. For example, after introducing map, can you write its type signature? That’s abstract reasoning.
Pointers are a problem in Haskell too. They exist in any random access memory system.
That’s an unfair comparison because these are two unrelated concepts. In many languages, pointers are abstracted away anyway. Something more analogous would be map vs a range loop.
`map` ain't so bad:

    map :: (a -> b) -> [a] -> [b]

I suppose an absolute beginner would need someone to explain that Haskell type signatures can be read by slicing at any of the top-level arrows, so that becomes either:

> Given a function from `a` to `b`, return a function from a `list of as` to a `list of bs`.
or:
> Given a function from `a` to `b` and a `list of as`, return a `list of bs`.
I find the first to be the more intuitive one: it turns a normal function into a function that acts on lists.
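Both readings in code, with `shout` as an invented example of the first one:

```haskell
-- reading 1: map lifts a function on strings to a function on lists
shout :: [String] -> [String]
shout = map (++ "!")

main :: IO ()
main = do
  print (shout ["hi", "ho"])       -- ["hi!","ho!"]
  -- reading 2: map applied to a function and a list at once
  print (map length ["hi", "ho"])  -- [2,2]
```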
Anecdotally, I've actually found `map` to be one of the most intuitive concepts in all of programming. It was only weird until I'd played around with it for about 10m, and since then I've yet to be surprised by it's behavior in any circumstance. (Although I suppose I haven't tried using it over tricky stuff like `Set`.)
`fmap` is admittedly a bit worse...

    fmap :: Functor f => (a -> b) -> f a -> f b

But having learned about `map` above, the two look awfully similar. Sure enough, the same two definitions above still work fine if you replace `list` with this new weird `Functor` thing. Then you look up `Functor` and learn that it's just "a thing that you can map over" and the magic is mostly gone. Then you go to actually use the thing and find that in Haskell pretty much everything is a `Functor` that you can `fmap` over, and it starts feeling magical again.

You and I have a math part of our brain that appreciates the elegance of the algebraic structure.
I’m saying that thing you did where you start representing concepts by letters which can be populated by concrete objects is not a skill most people have.
Oh. Yeah that's a good point.
Maybe at its core, but Haskell in the wild is monstrously complex because of all the language extensions. Many different people use different sets of extensions so you have to learn them to understand what’s going on!
Not really, the vast majority of extensions just relax unnecessary restrictions. And these days it's easy to just enable GHC2021 or GHC2024 and be happy.
Accessibility is not an issue. It takes only a little bit of effort to get productive with a Haskell codebase. I think it's more of a mental block because the language is different from what one might be used to. What Haskell needs, and doesn't have, is a compelling reason for people to make that small effort (i.e. the killer usecase).
I don't have a math background. Without a math background, you will have a bad time with Haskell.
There are many successful Haskellers without a math background.
"Relatively bug free code very fast" sounds like a killer use case to me.
So why hasn't it happened? Some possibilities:
1. People are just ignorant/unenlightened.
2. Haskell is too hard to use for most people. I think that different programmers think in different ways, and therefore find different languages to be "natural". To those whom Haskell fits, it really fits, and they have a hard time understanding why it isn't that way for everyone, so they wind up at 1. But for those who it doesn't fit, it's this brick wall that never makes sense. (Yes, this is about the same as 1, just seen from the other side. It says the problem is the language, not the people - the language really doesn't fit most people very well, and we can change languages easier than we can change people.)
3. Haskell isn't a good fit for many kinds of programming. The kind of programs where it fits, it's like a superpower. The kinds where it doesn't, though, it's like picking your nose with boxing gloves on. (Shout out to Michael Pavlinch, from whom I stole that phrase.)
What kinds of programs fit? "If you can think of your program like a pipe" is the best explanation I've seen - if data flows in, gets transformed, flows out. What kind of program doesn't fit? One with lots of persistent mutable state. Especially, one where the persistent mutable state is due to the problem, not just to the implementation.
Myself, I lean toward a combination of 2 and 3.
Haskell is superb at handling persistent mutable state. The only problem is that you may have analysis paralysis from choosing between all the choices (many of them excellent).
>So why hasn't it happened?
Off the top of my head: space-leak footguns from laziness tripping up junior Haskellers, the State monad being fundamentally flawed (there is no way to get at and log your application state just before a crash), bloated tooling, and GHC frequently breaking existing code. Laziness and monadic code make debugging painfully difficult.
I like Haskell for Advent of Code only.
I acknowledge that those things can be challenging, however I'd like to respond to some of the specific issues:
- Space leaks due to laziness are a solved problem. I explain the technique to solve it at: https://h2.jaguarpaw.co.uk/posts/make-invalid-laziness-unrep... This technique has not completely percolated throughout the community, but I am confident that it does actually resolve the "laziness causes space leaks" issue.
- Flawed state monad: well, you point out the analysis of its flaws from the effectful documentation. That's correct. The solution is: just use effectful (or another similar effect system. I recommend my own: Bluefin)
- GHC breakage: I've been keeping an inventory of breakage caused by new GHC versions, since GHC 9.8: https://github.com/tomjaguarpaw/tilapia/ There has been very little! The Haskell Foundation Stability Working Group has had a massive effect in removing breakage from the ecosystem.
- Laziness and monadic code makes debugging painfully difficult: I mean, sort of, but if you're using monadic code in the style of a decent effect system like effectful or Bluefin this is a non-problem. It's hardly different from programming in, say, Python from the point of view of introducing debugging printfs or logging statements.
Thanks, I've followed along with a lot of your posting on the Haskell discourse. One thing regarding this matter:
>well, you point out the analysis of its flaws from the effectful documentation. That's correct.
Thinking deeper about this: given that there is essentially no way to fix this issue with StateT (the type choice, the monad, and the composability requirement all conspire together and cannot be undone), does that signal something deeper that is wrong with the flexibility of Haskell, namely that we can progressively paint ourselves into a corner? Could it not happen again, with another late-breaking requirement, for effectful or Bluefin?
> State monad being fundamentally flawed
Could you explain a bit about what you mean here? In what way is it fundamentally flawed?
I'm not saying you're wrong; I just want to understand.
Basically, StateT layered over an error monad like Either gives you back either a result together with the final state, or just the error. So if your application throws, you lose the current state forever. They sort of painted themselves into a corner with this choice of type; there's no real way out now.
This explains in more detail:
https://github.com/haskell-effectful/effectful/blob/master/t...
I agree loosely with what haskman above says about creating relatively bug-free applications; the guard rails are so robust. But those same guard rails mean you can paint yourself into a corner that is harder to get out of without imperative state; case in point above.
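A minimal sketch of that state-loss behaviour, using the transformers library directly rather than effectful:

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.State (StateT, modify, runStateT)

-- StateT over Either: a thrown error discards the state so far
step :: StateT Int (Either String) ()
step = do
  modify (+ 1)        -- state becomes 1 ...
  lift (Left "boom")  -- ... and is lost the moment we throw

main :: IO ()
main = print (runStateT step 0)  -- Left "boom": the Int is gone
```

There is no `Right` branch carrying the partially updated state; the error swallows it, which is exactly the crash-logging problem described above.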
Most functional languages give you so much more choice in managing persistent mutable state than the typical imperative languages. In the latter everything can be made into persistent mutable state so you have both those that are due to the problem and those that are due to the implementation.
Haskell gives you a wide range of tools, from simulated state like the State monad, to real ones like the ST monad and IORef inside the IO monad. For synchronization between threads you have atomic IORef, MVar, and TVar.
If your problem requires you to have persistent mutable state, Haskell helps you manage it so that you can truly separate the persistent mutable state that's due to the problem from the state that's due to the implementation.
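A small sketch of the explicit-mutable-state side, using `IORef` from base:

```haskell
import Data.IORef

-- real mutable state, but explicit and confined to IO
bumpTwice :: IO Int
bumpTwice = do
  counter <- newIORef 0        -- allocate a mutable Int cell
  modifyIORef' counter (+ 1)
  modifyIORef' counter (+ 1)
  readIORef counter

main :: IO ()
main = bumpTwice >>= print  -- 2
```

The type `IO Int` advertises the mutation; a pure `Int -> Int` function could not touch the cell at all.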
The reasons are going to vary depending on who you ask. I personally don't agree with any of your reasons. In my opinion, as a long time user of Haskell, the practical reasons are the following -
1. Tooling has historically been a mess, though it's rapidly getting better.
2. Error messages are opaque. They make sense to someone familiar with Haskell, but others cannot make the leap from an error message to the fix easily.
3. It's a jack of all trades. The resulting binaries are not small. Performance can be very good but can be unpredictable. It doesn't compile nicely to the web. Doesn't embed well. There is basically no compelling reason to get into it.
4. The ecosystem is aging. You can find a library for almost any obscure usecase, but it would be many years old, and possibly require tweaking before it even compiles.
Oracle influenced / bought academia into teaching Java for a generation. See Dijkstra’s criticisms[1] from the time, from when his department was forced to stop teaching Haskell to undergrads for political reasons. Note that Haskell had not been too hard for Dijkstra’s undergrads.
Later, Python took its place, since people realized the Java ecosystem was way too complicated and was turning off would-be CS students. Python directly targeted the academic use case by having similarities to C, Java, and Bash; it was not a better language, it just made existing imperative and object-oriented assignments easier for classroom environments. Believe it or not, a lot of programmers and even academics sort of give up on exploring significantly unfamiliar directions after graduating.
[1] https://www.cs.utexas.edu/~EWD/OtherDocs/To%20the%20Budget%2...
> Oracle influenced / bought academia into teaching Java for a generation
I think you probably mean Sun? Or maybe that Oracle continued what Sun started.
Nah man, Oracle… here is a personal story. I was teaching at a uni, an introductory programming course. The dean hits me up and asks if I can teach introduction to web development, as the then-current professor was going on maternity leave. I was like “heck yea, that sounds like fun.”
Before the first class I get an email from one student asking if they must purchase the book for the class, since it was $275 (this was years ago), and I was taken aback; what kind of book costs $275 - even for a college textbook that was nuts. I told him not to purchase it until we met for the first class. I go to the office and see my copy of the book: it is Programming the Web with Oracle Forms, from Oracle Press!!!! I talked to the dean and he was like “yea, that is what we need to teach!” Needless to say, none of the kids bought the book, I did NOT teach Oracle Forms, and I was never given that class again :)
I don't think that's it. I know plenty of people who were taught Lisp first thing at university, and as soon as someone handed them an imperative language, they never looked at Lisp again. And Lisp is way easier than Haskell IMO, as IO is just a function and not a philosophical concept.
I wouldn’t assume that your colleagues were less capable as undergrads than the students that Dijkstra encountered at UT Austin.
Imperative languages do offer many advantages over Haskell, in that most coursework and industry jobs use them and that, consequently, their ecosystems are much further developed. These advantages are a consequence of university programs' alignment with the imperative and object-oriented programming paradigms, to Oracle's benefit.
Your colleagues having never looked back at lisp is hardly evidence that Haskell would have been too difficult for them or that Oracle didn’t have a hand in this.
I don't think that holds water. We've had functional programming for longer than oracle or java have existed, and for far longer than oracle has owned java. Haskell itself has been around for longer than java or oracle-owned java.
Functional programming just seems harder for people to get into. Perhaps it's bad for everyone that people don't make that effort, but it doesn't seem like a conspiracy
My first CS class was in Scheme (R6 iirc), and the year after they switched to Python. Then a thousand cries of failure to understand Python metaclasses. They are garbage at the REPL, and you have a distinct set of folks that edit their editors.
Most Python programmers don't really have to understand metaclasses or other advanced concepts like descriptors. The main metaclass they'd use would be to create abstract classes, and these days you can just subclass ABC.
Re Dijkstra:
He's asking for something that is really inappropriate. He lost in the CS department, and he wants the Budget Council to decide on what languages should be taught? Like they know anything about it!
He lost a political battle, and he's appealing it to the only place he can, and he's buttering them up to do it, but the people that actually know something about the topic decided against him already.
And, you're quoting only one side of the battle. One vocal and eloquent side, but only one side. Maybe look into why the UT CS department made that change? (And not why Dijkstra says they did.)
> So why hasn't it happened?
4. History. In those types of discussions, there are always "rational" arguments presented, but this one is missing.
> One with lots of persistent mutable state.
You mean like a database? I don't see a problem here. In fact, there is a group of programs, large enough that it cannot be 3, that Haskell fits nicely: REST/HTTP APIs. This is pretty much your data goes in, data goes out.
HTTP APIs are just an interface. The application behind it can be almost anything.
> > One with lots of persistent mutable state.
> You mean like a database?
No, I mean like a routing switcher for a TV station. You have a set of inputs and a set of outputs, and you have various sources of control, and you have commands to switch outputs to different inputs. And when one source of control makes a change, you have to update all the other sources of control about the change, so that they have a current view of the world. The state of the current connections is the fundamental thing in the program - more even than controlling the hardware is.
Thanks. This does sound like a state machine, though the devil is probably in the details. Yes, here Haskell is probably a bad choice, and something where direct memory manipulation is bread and butter should do better. Which is completely fine; Haskell is a high-level language.
But in your example, PHP is also a bad choice, and alas, it dwarfs Haskell in popularity. I can't really think of where PHP is a great fit, but Haskell isn't.
Sounds like you want green threads, a concurrent runtime, and best-in-class synchronisation primitives.
> first class effects
There's not really first class effects though, ultimately just IO.
Effect systems like effectful and Bluefin absolutely provide first class effects, ultimately not too dissimilar to Koka.
Okay, might be definitional, but when I think of 'first class', I think of something baked in to the language. So in Haskell's case, in the Prelude I suppose.
They’re first class in the sense that they can be described, stored, computed, separated etc from pure functions. Fair to call them first class.
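A minimal illustration of that first-class flavour in plain Haskell, without any effect-system library: IO actions are ordinary values that can be stored and combined before anything runs.

```haskell
-- a list of effects: stored, not yet executed
greetings :: [IO ()]
greetings = [putStrLn "one", putStrLn "two"]

main :: IO ()
main = sequence_ (greetings ++ greetings)  -- computed with pure code, run here
```

Libraries like effectful and Bluefin build on the same idea with finer-grained effect types than bare `IO`.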
Koka has first class effects, but I think we have different interpretations of the statement above.
Could you please clarify what you mean by 'stored'?
When will we get the best of both worlds where the imperative parts of a Haskell program run at C++/Rust speeds?
We've had Scala for about 20 years. (Is the JVM quite C++/Rust speed? No. But it's close enough for any realistic use case)
Probably never since Haskell is always copying and GC'ing data...
Well, you could implement (in Haskell) a separate heap for (all or some of the) imperative code.
The generalized version of ‘traverse/mapM’ that doesn’t just work for lists, but any ‘Traversable’ type is absolutely amazing and is useful in so many cases.
‘traverse :: Applicative f => (a -> f b) -> t a -> f (t b)’
And you can derive it for free for your own datatypes!
The amount of code I’ve manually written in other languages to get a similar effect is painfully large.
The fully qualified type is

    traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)

and deriving it for your own types is as simple as

    deriving (Functor, Foldable, Traversable)

Correct! I simply copied the definition from the type class, but the context is important.
Can you kindly give a real-world example, beyond the usual Maybe or Either ones, that uses a user-defined data type?
I understand how Applicative works, but I don’t know how to apply (pun intended) to my data types.
I had a generic report class that essentially fetched a bunch of database rows, did some stuff for each row, and then combined the results together into a report. (This was in Scala, I know Haskell doesn't have classes, but presumably similar situations can happen)
For one client, we needed to accumulate some extra statistics for each. For another, we needed to call their web API (so async I/O) to get some of the data used in the report. By making the generic superclass use a generic Applicative type, we could keep the report business logic clear and allow the client-specific subclasses to do these client-specific things and have them compose the right way.
Wanting custom applicative types is rarer than using a standard one, but it can be a good way to represent any kind of "secondary effect" or "secondary requirement" that your functions might have. E.g. "requires this kind of authorisation" or "must happen in a database transaction". But a lot of the time you can implement custom things using reader/writer/state, or free, rather than having to write a completely from-scratch applicative.
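As a concrete sketch of such a custom Applicative (all names here are hypothetical, not from the thread): a `Validation` type that accumulates errors instead of stopping at the first one, which is what `Either`'s instance does:

```haskell
-- A hand-rolled Validation Applicative that collects *all* errors.
data Validation e a = Failure e | Success a
  deriving (Show, Eq)

instance Functor (Validation e) where
  fmap _ (Failure e) = Failure e
  fmap f (Success a) = Success (f a)

instance Monoid e => Applicative (Validation e) where
  pure = Success
  Failure e1 <*> Failure e2 = Failure (e1 <> e2)  -- accumulate both errors
  Failure e  <*> _          = Failure e
  _          <*> Failure e  = Failure e
  Success f  <*> Success a  = Success (f a)

data User = User String Int deriving (Show, Eq)

checkName :: String -> Validation [String] String
checkName n | null n    = Failure ["name is empty"]
            | otherwise = Success n

checkAge :: Int -> Validation [String] Int
checkAge a | a < 0     = Failure ["age is negative"]
           | otherwise = Success a

main :: IO ()
main = do
  print (User <$> checkName "ada" <*> checkAge 36)
  print (User <$> checkName ""    <*> checkAge (-1))  -- reports both errors
```

Note there is deliberately no Monad instance: with `>>=` the second check could depend on the first result, so errors could not be accumulated.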
It is definitely not procedural.
"This seems rather … procedural. Even though we get all the nice guarantees of working with side effectful functions in Haskell, the code itself reads like any other procedural language would. With Haskell, we get the best of both worlds."
Working with the IO monad is much more complex, especially if you want to use other monadic types inside that code.
> Working with the IO monad is much more complex, especially if you want to use other monadic types inside that code.
Yes, mixing other monadic types with IO can be a challenge. One possible solution is to just not use other monadic types inside that code. Just use a general-purpose effect system instead. I recommend effectful or Bluefin (the latter is my own project).
Somewhat tangential, but the only software I know of that's written in Haskell is ImplicitCAD https://implicitcad.org/
I like Haskell, but I think it suffers from the same issues preventing most lisps to gain any traction: every codebase is different and reinvents its own DSL or uses different extensions.
Lispers hailing macros and metaprogramming cannot understand that power is also the very thing that makes jumping from project to project difficult, I have no intention of relearning your cleverly designed DSL or new extensions.
There's a reason why Java or PHP have plenty of widely used killer software, while monocle-wielding lispers and haskellers have very little to show after many decades.
It's not FP being the issue, it's just that their power attracts crowds interested in code more than the features/business/product.
I don't blame them, I love Haskell and Racket but I think very few teams can scale and make the compromise worth it.
The upside for haskell is that whenever you come to some new code, or return to your own old DSL-y code, the types are there as spikes in a cliff to help you move onward. With, e.g elisp, I always get a mild headache when I need to add some new feature to a codebase I myself wrote.
Perl was like this, though it did have wide adoption at one point. But people got sick of having 20 ways to do the same thing so Python was born. Is there an equivalent in the FP world?
Everyone else is responding with FOSS, so I'll respond with some companies:
Co-Star, the astrology SaaS, is apparently written with a Haskell backend. I'd love to have seen the casting call for that.
I believe the Mercury bank also runs most of their backend stuff on Haskell. Functional languages in general are surprisingly common among financial investment firms.
Some of Target's stuff is written in Haskell. I think there was at least one big Facebook project that was written in Haskell, but they may have moved away from it by now. Awake Security does some Haskell stuff.
One thing which might be surprising is Haskell is apparently quite strong for general backend web dev.
> Haskell is apparently quite strong for general backend web dev
Yep. Mostly because of the https://www.servant.dev/ framework (but see also IHP, Yesod, and other frameworks). Servant lets you declare your HTTP API at the type level, and then it will infer the correct types for your endpoint handlers. You can also extract OpenAPI specs from it, generate clients for Haskell or other languages, etc.
My current employer, Bellroy, uses Haskell for pretty much all new code and Servant for all new HTTP APIs. https://exploring-better-ways.bellroy.com/our-technology-sta... is an older post discussing the shift to Haskell. We've found Haskell code to be much more compact than the equivalent Ruby, and significantly more robust.
Oh, hey, I purchased one of your leather phone cases recently. Big fan!
Going from Ruby to Haskell is, itself, quite a good signal of quality for me. Start strong and end stronger. Sounds like you've got a good thing going!
I maintain Haskell code for five different customers, some large projects, some smaller, projects of varying ages up to over a decade. All the projects do "server backend" stuff, some web frontend too. I love how secure I feel making changes to things I haven't touched in a while.
> Co-Star, the astrology SaaS, is apparently written with a Haskell backend. I'd love to have seen the casting call for that.
They have a page about it: https://www.costarastrology.com/why-haskell/
It does one of the things I find most annoying about Haskell programmers, which is that they think the language is magic just because it has ADTs.
> HASKELL MAKES ILLEGAL STATES UNREPRESENTABLE.
That's not true!
It's especially not true for numeric programming, Haskell really doesn't provide good support for that. Basically only Ada and dependent-type languages do.
It is most definitely true for numeric programming (and Haskell is _somewhat_ dependently typed). For example, look at this finite bit width concatenation operation:
https://hackage.haskell.org/package/clash-prelude-1.8.2/docs...
How it compares to Ada though, I could not say.
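For a flavour of the "illegal states unrepresentable" style at the type level (a minimal hand-rolled sketch, not the Clash API linked above): a length-indexed vector where taking the head of an empty vector is a compile-time error rather than a crash:

```haskell
{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

-- Type-level natural numbers.
data Nat = Z | S Nat

-- A vector indexed by its length.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Only accepts provably non-empty vectors; `vhead VNil` won't typecheck.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x

main :: IO ()
main = print (vhead (VCons (1 :: Int) VNil))  -- prints 1
```

The empty-vector case doesn't even need a pattern match: GHC knows `VNil` can't inhabit `Vec ('S n) a`.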
You’re correct that Mercury uses Haskell for its backend: https://serokell.io/blog/haskell-in-production-mercury
How could I forget Serokell, too! An Estonian software development firm that uses Haskell and Nix as basic building blocks.
I think they were using Agda or something too for a while, but it appears I can't find what I'm thinking of on their site anymore. Really interesting guys if you're located in the Baltic states.
I've used Haskell several times for implementing isolated 'maths business logic' units in commercial backend applications.
One such system I built had the main (REST API exposing) backend implemented in Kotlin, with a separate application in Haskell running a complex set of maths-driven business rules against GIS data to calculate area-specific prices.
The amount of IO on the Haskell side was fairly minimal and abstracted away quite nicely.
Haskell allowed expressing all complexity in a way that was easy to audit and translate from business/data analyst requirements.
Would do again :-) But only with the correct amount of isolation so you can lean into Haskell's strong sides.
https://hledger.org/
hledger - a plain text accounting tool
Don't forget Pandoc!
I had no idea it was Haskell. So much for the “no practical use” I'd heard. Pandoc seems like a very practical real-life application to me.
I've said for awhile that Pandoc is one of the very few Haskell programs that isn't exclusively used by Haskell programmers.
There are plenty of interesting Haskell programs (e.g. Xmonad, Darcs), but a lot of people explicitly use them because they're Haskell.
Pandoc, on the other hand, is useful to pretty much anyone who has ever needed to convert documents. It's one of the first things I install on most computers.
And Pandoc's author is not even a software professional. He's a philosophy professor. :^)
I worked on Cardano which is probably one of the larger Haskell projects (and Agda, which lets you write proofs and compile into Haskell!)
PostgREST and Hasura (before rewrite) are written in Haskell.
https://simplex.chat/
Private messenger for desktop and mobile platforms. It's mostly written in Haskell except for UI.
Also the server backend for the Wire messenger https://github.com/wireapp/wire-server
Pity that Wire never took off. It combined all the advantages of messaging apps for a while (available on many platforms, e2ee, etc.).
Goes to show that success is not determined by technology.
Having a quick look through their repos, it looks like they don't use Haskell for the mobile platforms?
> cpp-for-mobile
> Template for cross-platform mobile app with native UIs and C++ core
https://github.com/simplex-chat/simplex-chat/blob/stable/fla...
Their flake.nix indicates they use Haskell to generate cross compiled shared library for Android, iOS, Mac, Linux and Windows.
I am no expert in Nix, but at a high level I can see they are indeed using Haskell.
ah that makes sense thanks!
https://tidalcycles.org/ – think ImplicitCAD but for live music performances?
also http://detexify.kirelabs.org/classify.html was surprisingly useful in university
Man I really want to learn a good code-driven CAD app sometime. Last time I tried to look at ImplicitCAD the website was down so I just gave up lol
CADQuery/build123d is the other big one I'm interested in.
Shellcheck is another useful one (linter for shell scripts)
https://www.shellcheck.net/
Haskell is an amazing type checker that’s occasionally used for writing software. ;-)
This thing is awesome!
I tried it on a couple of one-liners and it found a potential problem in each of them.
ShellCheck is an essential part of my projects' CI, couldn't do without it!
The aura package manager for AUR, as well as the xmonad tiling WM.
The other ones most people point to are https://pandoc.org and https://shellcheck.net
Xmonad, though most people have moved on to other window managers at this point.
I still use it, it's the main reason I don't want to switch to Wayland. Everything looks and feels like it did 15 years ago, it's great!
There are stronger reasons to avoid Wayland. It's a protocol that manages to have every implementation behave slightly differently, and creating an EGL context, as glxgears does, results in a hard crash with both AMD and NVIDIA cards. I assume I messed up the EGL context, but why does my entire desktop need to crash? Xkill is a much better UX, and that's kinda sad. An Xorg app failing dramatically doesn't murder my desktop session.
https://github.com/waymonad/waymonad (no idea if it's usable)
Not me, because I hate change!
Still on XMonad, what are some good alternatives?
If you want to stay in the land of monads there is https://github.com/SimulaVR/Simula?tab=readme-ov-file "a VR window manager for Linux". Should've been called MetaMonad ;) but I guess that was already taken by the phylum metamonada, don't want to get on their bad side.
I think Pandoc too, but yeah it's a fairly accurate meme that Haskell isn't really used to make anything except Haskell compilers and tutorials. Last time I checked there was actually an exhaustive list of software written in Haskell somewhere, which they meant as a "look how successful it is - all these projects!" but is really "it's so unsuccessful we can actually write down every project using it".
Here you go :)
https://joyful.com/Haskell#What+Haskell+apps+are+out+there+%...
Oh yeah I was thinking of this list:
https://wiki.haskell.org/Haskell_in_industry
Imagine a "Go in industry" page. Even a "Rust in industry" would be laughable at this point.
Ok I'm maybe being a little unfair, but you get the point.
Haskell has strong monads, missed this in the article.
It makes Haskell even more procedural: you can use intermediate variables in do blocks.
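A tiny sketch of that (the values are hypothetical), showing both `<-` binds and pure `let` intermediates in a do block:

```haskell
main :: IO ()
main = do
  contents <- pure "hello"   -- stand-in for a real action such as getLine
  let n = length contents    -- pure intermediate variable, no IO involved
  r <- pure (n * 2)          -- bind the result of another action
  print r                    -- prints 10
```

Unlike `<-`, the `let` line involves no monadic machinery at all; it's ordinary pure binding that happens to sit inside a do block.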
http://blog.sigfpe.com/2023/08/what-does-it-mean-for-monad-t...
> >> is an old name for *>
I once had a hard to track down bug in some code making use of conduit[0], which is introduced using examples like `main = runConduit $ (yield 1 >> yield 2) .| mapM_C print`.
Dutifully replacing every occurrence of (>>) with (*>), because it was more modern, suddenly changed the semantics somewhere, due to the fact that (>>) is defined with fixity `infixl 1 >>` and (*>) as `infixl 4 *>` - i.e. both are left-associative operators, but (*>) binds tighter than (>>) and some of the myriad of other operators you may encounter.
-- [0] - https://github.com/snoyberg/conduit
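To make the fixity trap concrete, here is a minimal sketch (using Maybe rather than conduit) where swapping (>>) for (*>) changes the result, because (<|>) sits at `infixl 3`, between the two:

```haskell
import Control.Applicative ((<|>))

main :: IO ()
main = do
  -- (>>) is infixl 1, looser than (<|>), so this parses as
  -- Nothing >> (Just 1 <|> Just 2):
  print (Nothing >> Just 1 <|> Just 2 :: Maybe Int)  -- Nothing
  -- (*>) is infixl 4, tighter than (<|>), so this parses as
  -- (Nothing *> Just 1) <|> Just 2:
  print (Nothing *> Just 1 <|> Just 2 :: Maybe Int)  -- Just 2
```

The two operators are semantically identical on lawful monads; only the parsing of surrounding expressions differs.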
This is a bit surprising. Why replace `>>` with `*>` "because it's more modern"? Can't you just stick with what's working? The former hasn't been deprecated.
Maybe I'm not so bad for using "superfluous" braces quite often, although this is not specifically the reason why I do.
Haskell dilettante here… The “IO a” vs “a” reminded me of async vs sync - where the first one returns a promise/future to be awaited on, rather than a result.
Is there any parallel there, or is it an entirely wrong perception I got ?
> Is there any parallel there
Of course. Promise is a monad, .then is more or less equivalent to the >>= operator and await makes it look more imperative-ish just like <- in Haskell.
Note that in JS you'll need to be inside an async function to use await, just like in Haskell you'll need to be inside the do notation to use <-. Otherwise, you'll need to play with .then just like you would need to play with >>= in Haskell.
About other languages, one way to achieve asynchronous behavior without having it implemented in the language is using ReactiveX (https://reactivex.io/). It's hard to understand at first, but if you understand the IO Monad it becomes easy, as it's basically the same.
Yes, except in a lot of languages, a Promise is the representation of the result of a computation that may already be happening, whereas an IO is a computation that will have data injected into it by some interpreter. But it's a very close comparison, in the sense that both represent contexts that future computation can be appended on. Also, in some languages, the Promise type is truly monadic.
There’s a parallel because Promises in a language like JavaScript are “monad-like”, so they’re similar to the IO Monad here. I am not a functional wizard so I’m sure that was not a fair comparison in some way, but it’s how I have thought of it. They’re both a representation of a side effect and require that effect be respected before you can get to the value inside it
Not so much a representation of a side effect (after all, List is a Monad, as is addition on integers; no side effects anywhere in sight) as the reification of a particular category.
JavaScript's `Promise` is particularly interesting because it is a Monad over "all JS values which do not contain a method called `then` in their prototype chain" or as Dominic put it 12 years ago:
> Indeed, I like the way @medikoo phrases it. There's, practically speaking, nothing wrong with being a monad on the category of non-thenables.
https://github.com/promises-aplus/promises-spec/issues/101#i...
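A small sketch of why the restriction to non-thenables matters: Promise's recursive flattening means `resolve` is not a faithful monadic unit for values that happen to have a `then` method:

```javascript
// A plain object that merely *looks like* a thenable.
const thenable = { then: (resolve) => resolve(42) };

// Promise.resolve recursively unwraps thenables, so we can never
// construct a Promise *of* this object -- it gets flattened to 42.
Promise.resolve(thenable).then((v) => {
  console.log(v); // 42, not the { then: ... } object itself
});

// For non-thenables, resolve behaves like a proper monadic `return`.
Promise.resolve("plain value").then((v) => console.log(v)); // "plain value"
```

This auto-flattening is also why `.then` collapses `Promise<Promise<T>>` into `Promise<T>`, conflating `map` and `bind` into one method.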
In other languages we can do a poor man’s IO with a function like:
Is this really so different from IO everywhere in Haskell?

> Is this really so different from IO everywhere in Haskell?
It's a starting point. You'll have to somehow chain (e.g. you want to print "def" after "abc", how would you do that?). You'll also need to put a value into your IO, e.g. if you are inside that IO, all the computations will need to produce IO. If you solve those things, you'll have a Monad.
But, wait! You're in JS, right? You have IO monads there, perhaps you just don't know it yet. They are the JS Promises! Look at this:
    // assuming that you have a sync input function, that receives a string from stdin and returns it
    let input_str = new Promise((resolve, reject) => resolve(input()));
    let input_int = input_str.then(s => parseInt(s));
    let square = input_int.then(n => n * n);
    square.then(sq => console.log(sq));
If you want to proceed with your poor man's IO, you'll need to implement your own "then". Going further, the equivalent in Haskell (not the best code, though) is:

    main = result
      where
        input_str = getLine
        input_int = fmap read input_str :: IO Int
        square = fmap (\n -> n * n) input_int
        result = square >>= print

Ok, but Haskell has a syntax for cleaning that up, called do notation:

    main = do
      input_str <- getLine
      let input_int = read input_str :: Int
      let square = input_int * input_int
      print square

And JS? JS also has its own syntax, using await. Just like in Haskell, where it only works inside do notation, in JS it will only work inside async functions:

    let input_str = await new Promise((resolve, reject) => resolve(input()));
    let input_int = parseInt(input_str);
    let square = input_int * input_int;
    console.log(square);
When you express it has "Haskell has the IO monad" and "Haskell has the Maybe monad", it's no biggie because other languages have them. All Java Objects might or might not be there (so they all implement Maybe). And all Java methods might or might not do IO, so they all implement IO.
The real sell is: Haskell objects which are not Maybe are definitely there. Functions which are not IO will always give the same output for the same input.
I think as long as the code sticks to the discipline of never actually doing I/O but only manipulating functions that perform them it would basically be doing the same thing as IO monads in Haskell.
So print(s) returns a function that when called prints s. Then there needs to be function that joins those functions, so print(a); print(b) evaluates to a function that once called prints out a and then b.
What makes Haskell special in my opinion is 1) it generalises this way of achieving "stateful" functions, 2) enforces such discipline for you and makes sure calling functions never produces side effects, and 3) some syntactic sugar (do, <-, etc) to make it easier to write this kind of code.
Also note that the above example only does output which would be the easier case. When it comes to code with input, there will suddenly be values which are unavailable until some side effects have been made, so the returned functions will also need to encode how those values are to be obtained and used, which complicates things further.
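A sketch of that discipline in JavaScript (all names are made up): actions are thunks that do nothing until the single impure entry point finally runs them:

```javascript
// An "action" is a zero-argument function; *building* one has no side effects.
const printAction = (s) => () => console.log(s);

// Sequence two actions, discarding the first result (like Haskell's >>).
const andThen = (a, b) => () => { a(); return b(); };

// Use the first action's result to choose the next action (like >>=).
const bind = (a, f) => () => f(a())();

// Nothing is printed while we *describe* the program...
const program = andThen(printAction("a"), printAction("b"));

// ...only when the single impure entry point runs it.
program(); // prints "a" then "b"
```

The input case the comment mentions falls out of `bind`: the continuation `f` receives the value produced by the first action, so the description of "what to do with the input" is built before any input exists.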
Am I the only one who has never tried Haskell but who, when reading discussions about it, ends up thinking real-world Haskell (with GHC extensions) has way too much (sometimes historical) cruft? It really puts me off.
In practice it only detracts a little bit. You can enable GHC extensions project-wide and there are alternate standard libraries (preludes) that are more modern.
If you want a language that is very Haskell-like without the historical baggage or the laziness, PureScript is very good. It’s main compile target is JavaScript, so it’s built for different use cases.
Am I right in thinking that there are efforts to provide a better out-of-the-box experience, with some of that cruft dealt with for people who don't need the backwards compatibility? For myself, I found long prologues of extensions/options/whatever massively off-putting.
Sure, use
It's a thirty year-old language, it's bound to have cruft. However modern codebases tend to showcase a pretty efficient combination of language features, oftentimes optimised for productivity rather than research in lazy FP. Such codebases are https://github.com/flora-pm/flora-server or https://github.com/change-metrics/monocle
What cruft?
The standard library is atrocious, mainly I believe for historical reasons:
- map only works on Lists (one needs fmap for functors)
- head throwing exceptions instead of returning Maybe
- dependent types bolted on later: they're much nicer in Idris
> head throwing exceptions instead of returning Maybe
The choice is keep it is it is, or change it. Changing it would break vast amounts of existing code. Which option do you prefer?
- map is easier on newcomers. Once you understand Functor, you'll remember to use fmap anyway. Also, I still use map for lists after 10+ years.
- I don't fully understand this, but do you mean that every `IO a` function would be better off being `IO (Maybe a)`?
- AFAIK, there are no dependent types in Haskell yet, but say that the type level programming you can do today is what you mean then you are already in quite advanced territory. Yeah, I guess it could be more polished.
I mean that `head` should be `List a -> Maybe a` rather than `List a -> a`. If you really really want an `a`, you should provide `NonEmptyList a` or something.
Haskell provides https://hackage.haskell.org/package/base-4.21.0.0/docs/Data-...
I agree! The current `head` should be renamed `unsafeHead` (or something), and the new `head` should have the type you wrote.
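A sketch of both options, assuming only `base` (the total head already exists as `Data.List.NonEmpty.head`; `safeHead` is the hypothetical renamed Prelude version):

```haskell
import qualified Data.List.NonEmpty as NE
import Data.List.NonEmpty (NonEmpty (..))

-- The Maybe-returning head the thread asks for:
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

main :: IO ()
main = do
  print (safeHead ([] :: [Int]))  -- Nothing, instead of a runtime exception
  print (safeHead [1, 2, 3])      -- Just 1
  -- NonEmpty's head is total: the type itself rules out the empty case.
  print (NE.head (1 :| [2, 3]))   -- 1
```

Both styles make the partiality visible in the type, either as a `Maybe` the caller must handle or as a `NonEmpty` precondition the caller must prove.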
It's a very good explanation, but I'd welcome more humility. Instead of using the phrase lie-to-kids, I could say that all the dance around effects in Haskell is just an implementation detail and/or a leaky abstraction.