As far as I can tell, the main difference is that this uses proxies to trigger side effects, as opposed to virtual DOM diffing. Perf-wise, I like that people are pushing the boundaries, but obviously React is still very popular, and that indicates its level of performance is good enough for a lot of devs </shrug>.
In terms of features:
- As was already pointed out in another comment, Aberdeen doesn't seem to be JSX compatible, but it supports a similar hyperscript-like syntax (e.g. `$('div.foo:hi')` vs `m('.foo', 'hi')`; see the sketch after this list)
- It's unclear what the level of support is for various features that exist in Mithril, e.g. the router appears to be an add-on/example, and I don't see anything about HTTP request utils or SVG/MathML. Mithril is meant to be sort of a one-stop shop for SPAs; it could just be that the focus of the two libraries is slightly different, and that's ok if that's the case.
- Mithril has hooks to access the raw DOM and uses RAF (requestAnimationFrame) for batching; it's not clear to me how integration w/ other DOM-touching libs (e.g. dragula) would work here.
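To make that syntax comparison concrete, here's roughly the same one-liner in both styles. The Mithril calls are real API; the Aberdeen side is just my reading of its examples, so treat the exact semantics as an assumption:

```js
// Mithril: m() builds a virtual DOM node; rendering diffs it against
// the previous tree and patches the real DOM.
import m from 'mithril';
m.render(document.body, m('.foo', 'hi'));  // <div class="foo">hi</div>

// Aberdeen (assumed, based on its examples): $ emits real DOM nodes
// directly; 'div.foo:hi' packs tag, class, and text into one string.
import { $ } from 'aberdeen';
$('div.foo:hi');  // <div class="foo">hi</div>
```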
I like the idea of using proxies as a way of implementing reactivity in principle, and I even played around with that idea. IME, there were a couple of things that made it problematic w/ POJO models. The first is that proxies are not friendly to console.log debugging, and the second is that implementing reactivity in collections (Map/Set/Weak*) was getting a bit too complex for my minimalist tastes.
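For anyone who hasn't played with the approach, here's a minimal sketch of proxy-based reactivity over a POJO (the general idea only, not Aberdeen's implementation):

```js
// Reads inside an effect are recorded as subscriptions; writes re-run
// the effects subscribed to that key.
let activeEffect = null;
const subscribers = new Map();  // property key -> Set of effects

function reactive(obj) {
    return new Proxy(obj, {
        get(target, key) {
            if (activeEffect) {
                if (!subscribers.has(key)) subscribers.set(key, new Set());
                subscribers.get(key).add(activeEffect);
            }
            return target[key];
        },
        set(target, key, value) {
            target[key] = value;
            for (const effect of subscribers.get(key) ?? []) effect();
            return true;
        },
    });
}

function observe(effect) {
    activeEffect = effect;
    effect();  // the first run records which keys the effect reads
    activeEffect = null;
}

// Usage:
const state = reactive({ count: 0 });
observe(() => console.log('count is', state.count));  // logs "count is 0"
state.count = 1;  // logs "count is 1"
```

Note that even this naive version hints at the collection problem: Map/Set methods read and write through internal slots rather than property gets/sets, so plain Proxy traps like these never see them.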
Thanks for your input! Although I've never had a good excuse to use it yet, I've glanced at mithril.js quite a few times, and I really appreciate the apparent simplicity and the batteries-included philosophy.
Indeed, the main difference with Aberdeen is in the way change tracking and updates are handled. This is actually a pretty significant difference, both in terms of internals and in the way you use the framework.
Fun fact: I once created another library (for educational purposes) that's a lot more similar to mithril (though not as fully featured): https://www.npmjs.com/package/glasgow
As I said, I'm a fan of mithril's batteries-included approach. The router is part of Aberdeen, but not imported by default. SVG support is on my todo-list. And I'll be exploring nice ways to integrate with some backends.
Aberdeen allows (but discourages) you to access the raw DOM as well (through `getParentElement()`), as long as you don't detach/move nodes it created.
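For instance, hooking up a DOM-touching lib like dragula could plausibly look like the sketch below. The content-function form of `$` and the import path are assumptions on my part, not confirmed Aberdeen API:

```js
import { $, getParentElement } from 'aberdeen';  // import path assumed
import dragula from 'dragula';

$('div.list', () => {
    const container = getParentElement();  // raw DOM node Aberdeen created
    // Populate the container with nodes we create ourselves, so dragula
    // only ever detaches/moves nodes that Aberdeen did *not* create.
    for (const label of ['one', 'two', 'three']) {
        const item = document.createElement('div');
        item.textContent = label;
        container.appendChild(item);
    }
    dragula([container]);  // dragula's real API: an array of containers
});
```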
I don't know (and can't find) what you mean by RAF batching. Aberdeen batches all updates (using a `setTimeout(..., 0)`), and then reruns all dirty scopes in the order they were created. That also means parent scopes go before child scopes, removing the need to separately update the latter.
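A minimal sketch of that scheduling model, as I've just described it (my own illustration, not Aberdeen's code):

```js
// A "scope" is just a rerunnable render function tagged with a creation id.
let nextId = 0;
const dirtyScopes = new Set();
let flushScheduled = false;

function createScope(rerun) {
    return { creationId: nextId++, destroyed: false, rerun };
}

function markDirty(scope) {
    dirtyScopes.add(scope);
    if (!flushScheduled) {
        flushScheduled = true;
        setTimeout(flush, 0);  // batch everything that happened this tick
    }
}

function flush() {
    flushScheduled = false;
    // Creation order: parents were created before their children, so a
    // parent scope always reruns first.
    const scopes = [...dirtyScopes].sort((a, b) => a.creationId - b.creationId);
    dirtyScopes.clear();
    for (const scope of scopes) {
        // A parent rerun may have already destroyed/recreated its children.
        if (!scope.destroyed) scope.rerun();
    }
}
```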
It seems that console.log debugging of proxied objects works pretty well (now, in Chrome at least). It shows that it's a proxied object, and it shows the contents, without subscribing, which is usually what you'd want for debugging. If you do want to subscribe, a `console.log({...myObj})` does the trick. Or use `clone()` (provided by Aberdeen) for recursively subscribing to deep data structures.
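In code form, the three options described above (`myObj` is a hypothetical Aberdeen-proxied object, with `clone` imported from the library):

```js
// Plain logging: Chrome shows that it's a Proxy plus its contents,
// without subscribing the current scope to changes.
console.log(myObj);

// Spreading reads each top-level property through the proxy, so this
// does subscribe (shallowly).
console.log({...myObj});

// clone() (provided by Aberdeen) reads the structure recursively,
// subscribing to deep changes as well.
console.log(clone(myObj));
```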
I apparently have the same minimalist taste, having recently removed Map support to simplify things. :-) I'll probably have another go at adding Map/Set/Weak* once I figure out how to do it with only a tiny amount of code.
There are many pockets in Toronto with a high density of Chinese commerce; the area around Steeles east of the IBM campus is one. Pacific Mall, mentioned in the article, is only one of the commercial centers in the area. There's also Metro Square a few blocks west, the T&T plaza next door, Skycity on Midland/Finch, and various other small plazas around the area, full of Chinese food goodness.
Which, if I recall correctly, means they are only profitable due to a one-time revenue boost of something like hundreds of millions, right?
And that one-time "revenue" boost was them asserting that a company they own is now valued higher than it was last year. And they are calling that "revenue".
I took a look at a Motley Fool article. If you exclude the unrealized return from an investment they made, their profits are in the single-digit millions. Not really confidence-inspiring, but maybe I'm wrong and they could indeed make their business sustainable.
It implies that their margin is really low. We're talking about revenues in the hundreds of millions, and they're only able to make single-digit millions.
Next quarter, their margin might be entirely wiped out by a slight downturn in business. If they can increase their margin and make a profit consistently, then I'll change my mind.
Is a particularly big percentage of their costs fixed? Because that affects the math a lot. If we consider a generic slightly-profitable company facing a 25% revenue drop, there's a huge gulf between a world where costs drop 10% and a world where costs drop 22%.
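To put made-up numbers on it: say revenue is $100M against $98M of costs, for a $2M profit. After a 25% revenue drop to $75M, costs falling only 10% (to $88.2M) means a $13.2M loss, while costs falling 22% (to $76.4M) means roughly a $1.4M loss. Same revenue shock, nearly a 10x difference in damage, driven entirely by how much of the cost base is fixed.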
>One of the pre-requisites for that is being profitable on a GAAP basis.
Uber lost money for 22 straight quarters, then made money for the last two. Bit optimistic to think it's smooth sailing from here, in my opinion. I enjoy Uber, the product, but they are middle-manning a couple of low-margin industries. Hard to imagine it being a multi-hundred-billion-dollar company.
> That's a somewhat outdated narrative to still be parroting.
It's not. Most of those "innovators" are still posting losses in the hundreds of millions.
> Uber just got included into the SP 500. One of the pre-requisites for that is being profitable on a GAAP basis.
Because you can somehow lose a billion dollars a year for 10 years, become a publicly traded company with a statement that "we don't even know if we'll ever turn a profit", still continue operating at a huge loss for several years, write off 6 billion in losses, and finally become profitable enough to be included in the S&P.
In any other, sane world, Uber would be gone after two years of losing a billion dollars a year, not crawling into the S&P after 10 years of unsustainable losses.
> it seems as if Uber’s losses were self-evidently sustainable.
With the unlimited free investor money. Same goes for the rest of the "amazing startup innovators" of recent years (e.g. YCombinator's startups). The flow of money has now stopped/slowed, and we now see mass layoffs and a wave of bankruptcies.
Losing a billion dollars a year for 10 years is not a sustainable business. But somehow it has become the norm in IT.
Uber has four office buildings in SF Mission Bay. One was never used, and the other three were underutilized, so employees are going to be moved/consolidated into two. These buildings also have street-front retail lease spaces, so presumably having more people coming into the area increases foot traffic for those spaces as well.
Uber has several different APIs for users. A naive purist might think that's silly until you realize a rider is a user, a driver is a user, a courier is a user, a restaurant owner is a user, a line cook is a user, a doctor's secretary is a user, an Uber employee is a user, a freight broker is a user, an advertising manager is a user... people can simultaneously be multiple types of users and have multiple profiles as a single type of user, and did I mention that you have to properly secure PII due to being in a highly regulated industry? And that's just users.
Don't even get me started on anything money related :)
Plus, there's a surprisingly high floor on the number of APIs a large company needs for basic stuff like "set up new hires automatically in all the systems they need".
Where does a line cook's use case fit into this? From what I know, Uber Eats sends an order to a restaurant, an employee manually punches the order into their POS system, and the order ticket goes to the kitchen.
You don't think there are a variety of people at a restaurant who might interact with the system? Is this particular detail so very important to the point of the parent comment?
I think the parent comment tried to prove a point by making an extremely frivolous claim, naming every person they could think of as a "user", which means they are either wrong or they failed to adequately make whatever point they were trying to make. Uber doesn't need an API for line cooks, so using them as a justification for a large number of microservices was not rhetorically sound.
I love seeing language projects that implement their own compiler backends. There really aren't enough of these around. Anything that helps bridge the knowledge gap between newbies and production grade compilers is a net positive in my books.
And extra brownie points for writing it in C. Think whatever you like about memory safety and language age, but C code is abundantly explicit about what's going on, which is something I really appreciate when approaching a new codebase to learn from.
I saw a video on YouTube recently from an independent content creator who went to look for these mythical sea nomads, and apparently what he found was that the idyllic scene of a remote tribe living off the land with highly refined but primitive fishing techniques was largely fabricated.
The locals were allegedly just going along with the Western filmmakers, partly because, in their eyes, participating in a mockumentary was an amusing opportunity that doesn't come along every day.
He then tried to get them to show him how they really fish, and it turns out they use modern gear like flippers and wetsuits, but also do incredibly dangerous things like breathing through a tube connected to a machine on a small boat, in the middle of the night.
The video doesn't get into the exact science, but it looks like the fishing isn't sustainable either, as divers reported needing to progressively dive deeper and take bigger risks over time to find high yields.
Well, it's entirely possible both are true: this mutation allowed them to fish more efficiently for thousands of years, and they've since shifted to modern techniques for convenience and out of competition.
Yes, this is most likely. The biological advantage was useful, before humans invented modern technology that allows for actually unsustainable, industrial fishing.
I believe something similar happened with Margaret Mead. In "Coming of Age in Samoa", Mead made the wild (and completely bogus) claim that, in the "state of nature", adultery is not a problem. It's all free love, man. She ostensibly drew on what Samoans were telling her, but what they were telling her was apparently the result of their own impish sense of humor. If you look at what actually happened, you find reports of broken jaws and stabbings whenever, say, a Samoan husband discovered his wife in bed with another man.
We can presume that the problem here was a remarkable sloppiness and credulity on Mead's part, but other scholars note that she was having (or had recently been in) an adulterous affair herself. A guilty conscience needs resolution, and resolution begins with admission of guilt and remorse, but when pride is in the mix, this cannot be. So projection and rationalization become very tempting. Mead appears to have chosen the latter.
(Aldous Huxley made a similar point when remarking why he had, in his younger years, taken a nihilistic position with respect to meaning, namely, that he wanted a way to rationalize sexual revolution, licentiousness and lust. If nothing means anything, then why not sleep around and indulge what would otherwise be recognized as depraved sexual desires? Much of ideology involves some pathetic and embarrassingly domestic and mundane moral failing at its core that's been rationalized into a pompous edifice. Communism, for example, sprouts from envy, while rapacious capitalism from greed.)
Communism sprouts from envy? It sprouts from the majority being economically oppressed by the minority. One need not "envy" the boot on one's neck to fight it off.
It has had its obvious and drastic implementation issues, but its provenance is not envy.
JS has two top-level grammars: one defaults to loose mode and the other to strict mode, among other nuances.
Possibly the most devious nuance is whether the spec's Annex B applies, which affects whether HTML comment syntax is valid (yes, this is a thing). The HTML comment token can therefore be parsed either as a comment or as a series of operators, depending on the grammar being used.
Effectively, this means it's possible to craft a program that does different things in CJS vs ESM mode.
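Here's a sketch of such a program (using Node's CJS vs ESM distinction to select the grammar):

```js
// pitfall.js: the same source text, different meaning per grammar.
let x = 1, y = 2;
let result = x
<!-- y;
console.log(result);

// As a classic script (Annex B applies): "<!--" starts an HTML-like
// single-line comment, ASI terminates the declaration after "x",
// and this logs 1.
//
// As an ES module (no Annex B comments): "<!--" lexes as the operators
// "<", "!", "--", so the statement is `let result = x < !(--y);`
// and this logs false.
```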
The "boring" way is go do some sales and get clients to pay for something, then scale up as cash flow increases. That way, if it doesn't work out, at worst you just lost time instead of being half a million in the red on a business loan.
I'm stumbling into this thread right after experiencing what appears to be a pretty catastrophic failure of Google's main product. As I write this, the search results for "google stock" (among other queries) return zero results ("Your search - google stock - did not match any documents").
I'm not really sure what to make of these discussions about how good or bad Google engineering is while the production service is broken for an end user like me.
The decline of Google's search engine has been really dramatic. It's honestly just a poor product at this point, and I find myself having to use Yandex, Bing, and a variety of other tools now to find what I'm looking for.
My guess is that between SEO companies and Google just trying to maximize ad profits, the product is in terminal decline.
They really don't give a shit how many search engineers they drive away with 50+-hour weeks and their endless criticism. When it became uncool to have Google Search on your resume in 2018, I left.
Searching for "Google stock" shows me correct results. A stock chart followed by various search results.
There's no news of a widespread Google failure. Maybe you have a browser extension interfering? Or there's some kind of very localized hiccup.
In any case, your experience right now isn't even close to representative. For its scale and complexity, Google search is probably one of the most reliable services ever built.
Small update: it's definitely not extensions; it's giving the same result on two different devices (mobile and laptop). I've narrowed it down to something going on with being logged into specific accounts. On my work account, I get no results (and this is a query that used to return results under the exact same setup just last week). Trying an old personal Gmail account, I'm getting the UI localized to what seems to be Mandarin, for who knows what reason (I don't speak Mandarin, and don't even use this account on a regular basis).
As for why this happens, I have no idea. I've had Google Maps completely black out on me and then eventually magically fix itself many months later.
As for reliability, I would probably have agreed if it was a "simple" system (which the original Google was). Today, I'm not so sure. I at least understand that Google today is made up of a large number of subsystems, and subsystem failures like the ones I'm experiencing (and bad search results as others have also reported) do in fact erode my trust in the product. "Your 99.9% is not my 99.9%" feels like an apt quote here.
Possible. I noticed recently that Google search no longer works with NoScript. It used to work. Not sure when this changed, since I don't often enable NoScript.
Don't know why you are getting downvoted - search quality has declined drastically.
I've had multiple occasions where Google reproducibly fails to find exact matches in the page title (no problem for Bing). This cannot be explained by mysterious AI ranking or Unicode issues since Google gave me zero results, the website is non-political, and the title is just plain ASCII.
This never happened ten years ago. Whatever they are doing now, they are seriously screwing things up.
Are you really sure your machine isn't running malware that intercepts queries?
The query [ google stock ] would never return zero results.
If this is really happening to you, please post an actual screenshot demonstrating that a standard browser in incognito (not logged in) mode on a standard OS returns no results for [ google stock ].
I'm not saying Google's core product hasn't slipped but that's one query I run every day.