Hacker News | urxvtcd's comments

We found an ancient tablet, dated it, reconstructed a long-dead language well enough to read it, reconstructed the night sky on that day, five and a half thousand years ago, found the orbit of this thing, and connected it to a geological formation thousands of kilometers away. Humans can do some amazing stuff.

Seems like it's no longer considered to have anything to do with a meteorite impact. It's hard to find a good source; this is the best I found: https://en.wikipedia.org/wiki/List_of_possible_impact_struc...

I think this paper's abstract claims that wooden debris from the landslide has been dated to 5000 years older than the Sumerian tablet: https://www.researchgate.net/publication/329153343_The_produ...


If you're looking for a source on the landslide, another commenter here posted this, which seems more reliable than Wikipedia. Searching for the Köfels impact, rather than the landslide, brings up nonsense because there's only pseudo-evidence for that.

https://www.sciencedirect.com/science/article/abs/pii/S01695...

It dates the landslide to about 9400 years ago (BP), so this article about the star map putting it at 5500 years ago seems to be a colourful fabrication (my bad). The author of the meteor theory apparently even tries to connect it to Sodom and Gomorrah being hit by the passing heat! Lol

Finding reliable info on this "planisphere" tablet isn't easy. As far as I can tell it was untranslated and kept a low profile until this impact story.


>> It dates the landslide to about 9400 years ago (BP), so this article about the star map putting it at 5500 years ago seems to be a colourful fabrication (my bad).

Don't feel bad. Genuinely exciting if it were true.


Eh, so it was too good to be true.

Yeah, it was quite a compelling story, and it's at least a genuinely beautiful and intriguing tablet. The author Hempsell does have some talent though, in seemingly getting a reputable university to publish his book... I'm thinking he was quite canny in finding this attractive untranslated tablet with little else written about it, and then employing enough knowledge about a combination of different subjects (ancient Sumerian, asteroid orbits, Alpine geology) that no single reviewer was able or motivated to properly evaluate all the arguments. Or he just had a friend at the press.

There are true stories that don't involve asteroids but are just as compelling. Anything by Irving Finkel, such as:

https://youtu.be/LUxFzh8r384


There is software working on:

- finding correspondences (solving parts of the puzzle) and reconstructing at least parts of certain documents ( https://arkeonews.net/new-ai-tool-fragmentarium-brings-ancie... , https://virtualcuneiform.org/ )

- somewhat 'inferring' some missing parts ( https://voices.uchicago.edu/ochre/project/deepscribe/ )

- 'normalizing' glyphs (finding their 'standard' form) ( https://news.cornell.edu/stories/2025/03/ai-models-make-prec... )


Erich von Däniken comes to mind.

I find it absolutely amazing (note I did not use ‘incredible’ on purpose: I consider this explanation very credible indeed). We have a credible record of a meteor impact dated exactly 29 June 3123 BC. That’s 1,880,145 days ago as of today. It simply boggles my mind.

"The astronomers made an accurate note of its trajectory relative to the stars, which to an error better than one degree is consistent with an impact at Köfels."

---------

This is what I find most amazing: Sub-degree accuracy in a measurement from before chariots. The people of this time had donkey-pulled battle carts that were so slow they had to be abandoned if there was a retreat, but they were able to record and measure astronomical events this accurately.

It's also mind-boggling to consider why they were making such observations. It was all about omens that could determine the success of harvests or battles. There is certainly some of what we might now consider scientific thought going on here. They produced omen tables that exhaustively covered every combination of events they could think of, not yet realizing that some combinations were impossible (e.g. a lunar eclipse at high noon).

Omens sound silly today, but the fundamental motivation of early astronomers was to make sense of what was going on in the heavens in order to help make better decisions on the ground. If everyone believed in these omens, they had real power and the predictions these astronomers made may have had large impacts.


Yes, it’s absolutely amazing that they were making and recording such accurate empirical measurements for entirely the wrong reasons. “As in heaven, so below.” I wonder how many of the theoretical foundations we now consider bedrock will be overruled by entirely incompatible paradigms by the 72nd century (or however they will refer to it). “Like: aww look, they came up with this weird idea of a Higgs Boson and measured its mass five thousand two hundred years ago using a crude instrument they called a ‘particle accelerator’, little did they know that…”.

Although the reasoning has changed, the motivation was very similar to today. They were meticulous in making observations, made records that will probably still be around when most of ours have dissolved into entropy, and all because they thought it might help them make better decisions.

I'd like to think future scientists (or whatever we might become) will look back on scientists of today and see kindred souls toiling under a different set of conditions.


And of course they weren’t misguided in looking to the heavens for predictive potential: astronomical configuration foretold seasonal changes and indicated when it would be propitious to sow crops or harvest them. They were simply a little over-ambitious in terms of correlating one-off events to terrestrial domains (until a falling rock torches a bunch of proto-Austrians, that is).

Humanity is awesome because we are naturally constrained in semantic space, making it relatively straightforward to reverse engineer things that ancient humans made even if we share basically zero overlap in culture.

Not true! We loved beer then and we definitely still love it now.

> reconstructed a long-dead language well enough to read it

We "reconstructed" Sumerian through the fairly intuitive process of finding reference works describing the language, and reading them.


That's cool, isn't it? Even to the Akkadians, Sumerian was an ancient language (prehistoric!) that became sacred.

Aren't there also bilingual texts that are used for learning it? Or maybe I'm thinking of different versions of stories, in Sumerian and later Akkadian or Babylonian.

I'm curious how the modern pronunciation is arrived at. Is that a lot of convention and guess work or is it reasonably secure through knowing (approximately) Akkadian pronunciation via other Semitic languages?


> I'm curious how the modern pronunciation is arrived at. Is that a lot of convention and guess work or is it reasonably secure through knowing (approximately) Akkadian pronunciation via other Semitic languages?

I would also be interested in material on this. The pronunciation is clearly not obvious; our first attempt at reading the name "Gilgamesh" came out "Izdubar". But it's also not simply lost, the way, say, Old Chinese pronunciation information is.

Note that our knowledge of Akkadian pronunciation is quite a bit better than our knowledge of other old Afroasiatic languages, because Akkadian is written with vowels.

A fun example is that we know the vowel in the name of the Egyptian god conventionally called "Ra" because he is mentioned in an Akkadian text. (That "a" in the English version of the name represents an Egyptian consonant, not a vowel.)


Or… we are very good at telling amazing stories that make sense.

And then they make TikTok.

Baba Is You is literally a Sokoban for software engineers, highly recommended. It's quite difficult though.

Isn’t a knowledgeable person incentivized to find vulnerabilities but not disclose them?


I don't understand your question, sorry.


Yeah, sorry for not being clear enough. I just struggle to see how a good-faith market can even exist. I immediately start thinking about how participants would be incentivized to cheat by neglecting or even introducing vulnerabilities to win. Maybe I’m just a bit too cynical and/or should do more reading on the topic.


A few weeks ago I'd have disagreed with you, but recently I've been struggling with concentration and motivation, and now I'm kind of trying to embrace coding with AI. I guide it pretty strictly, try to stick with pure functions, and always read the output thoroughly. In a couple of places requiring some care, I wrote executable pseudocode (Python) and had the AI translate it to the more boilerplate-y target language.

I don't know if I'm any faster than I would be if I was motivated, but I'm A LOT more productive in my current state. I still hope for the next AI winter though.


First time reading this. It's actually funny how disliking exceptions seemed crazy then but it's pretty normal now. And writing a new programming language for a certain product, well, it could turn out to be pretty cool, right? It's how we get all those Elms and so on.


    > disliking exceptions seemed crazy then but it's pretty normal now
Help me to clarify. Are you saying that when Joel posted (~20 years ago), disliking exceptions was considered crazy? And, now it is normal to dislike exceptions?

Assuming my interpretation is correct, I assume you are a low-level systems programmer -- C, C++, Rust, etc.? Maybe even Golang? If you are doing bog-standard enterprise programming with Python, Java, or C#, exceptions are everywhere and unavoidable. I am confused. If anything, the last 20 years have cemented the fact that people should be able to choose a first-class citizen (language) that either has exceptions or not. The seven languages that I mentioned are all major and have billions of lines of legacy code in companies and open source projects. They aren't going anywhere soon. C++ is a bit special because you can use a compiler flag to disable exceptions... so C++ can be both. (Are there other languages like that? I don't know any. Although, I think that Microsoft has a C language extension that allows throw/catch!)


I wasn't around back then, but it must've been at least a bit crazy, considering Atwood threw an exception (heh) high enough to write a blog entry about it. What I think has happened is that with functional programming concepts sort of permeating the mainstream, and with the advent of languages like Go and Rust (which I wouldn't exactly call low-level, for different reasons), treating errors as values is nothing unorthodox in principle. I'm not sure how real or prevalent this really is, just a guess.

I'm not trying to advocate going against the stream and not using exceptions in languages based around them, but I can see it being pulled off by a competent team, which I'm certain Joel could put together.
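
For anyone who hasn't seen the "errors as values" style, here's a minimal Python sketch of the idea; the Ok/Err/parse_port names are made up for illustration, and the match syntax needs Python 3.10+:

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Ok:
        value: int

    @dataclass
    class Err:
        message: str

    # A lightweight stand-in for Rust's Result / Haskell's Either.
    Result = Union[Ok, Err]

    def parse_port(raw: str) -> Result:
        """Return success or failure as a value instead of raising."""
        if not raw.isdigit():
            return Err(f"not a number: {raw!r}")
        port = int(raw)
        if not 0 < port < 65536:
            return Err(f"out of range: {port}")
        return Ok(port)

    # The caller handles the error path explicitly.
    match parse_port("8080"):
        case Ok(value):
            print(f"listening on {value}")
        case Err(message):
            print(f"bad config: {message}")

Nothing forces you to check the result the way Rust does, but a type checker like mypy will complain if you try to use the value without narrowing the union first.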


That's how we got Rust.


If AI is such a competitive advantage, why are AI companies even trying to sell it? Wouldn't it bring in more money to use a bleeding-edge internal model and just vibe a couple of facebooks at a fraction of the cost and profit like crazy?


It seems the people who think they can just tell computers to write code for them are also the people who are inclined to tell other people to build apps for them.

We are hurtling towards a brave new world where only 10% of humans have to work, and the other 90% form the bureaucracy on top.


Every time I see an article like this I ask the same question: show me the money?

If it’s so good why hasn’t it made you rich?


May I ask how you model your functional code in Python, in the absence of Haskell's algebraic data types?


Full algebraic data types wouldn't have added much here. Product types are already everywhere, and we didn't need sum or exponential types.
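
To make that concrete, here's a rough Python sketch of both kinds (Rect and Circle are invented for illustration): product types are just dataclasses, and a sum type can be approximated with a Union plus explicit dispatch.

    from dataclasses import dataclass
    from typing import Union
    import math

    # Product types: values that bundle several fields together.
    @dataclass(frozen=True)
    class Rect:
        width: float
        height: float

    @dataclass(frozen=True)
    class Circle:
        radius: float

    # Sum type, approximated: a value that is exactly one of the alternatives.
    Shape = Union[Rect, Circle]

    def area(shape: Shape) -> float:
        if isinstance(shape, Rect):
            return shape.width * shape.height
        if isinstance(shape, Circle):
            return math.pi * shape.radius ** 2
        raise TypeError(f"unexpected shape: {shape!r}")

    print(area(Rect(2.0, 3.0)))   # 6.0
    print(area(Circle(1.0)))      # 3.141592653589793

The main thing you give up compared to Haskell is compiler-enforced exhaustiveness checking.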

Splitting IO and pure code was just routine refactoring, not a full redesign. Our app logic wasn't strictly pure because it generates pseudorandom numbers and logs events, but practically speaking, splitting the IO and shell from the relatively pure app logic made for much cleaner code.

In retrospect, I consider FCIS a good practice that I first learned with Haskell. It's valuable elsewhere, even when used in a less formal way than Haskell mandates.
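
If it helps, here's a tiny Python sketch of the kind of split I mean; the discount rule and names are invented for the example, not taken from our actual app:

    from dataclasses import dataclass

    # Functional core: pure, deterministic, trivially unit-testable.
    @dataclass(frozen=True)
    class Order:
        total: float
        loyalty_years: int

    def discounted_total(order: Order) -> float:
        """Pure business rule: 1% off per loyalty year, capped at 20%."""
        discount = min(order.loyalty_years * 0.01, 0.20)
        return round(order.total * (1 - discount), 2)

    # Imperative shell: all IO (input, printing, logging, randomness) stays out here.
    def main() -> None:
        total = float(input("Order total: "))
        years = int(input("Loyalty years: "))
        print(f"Price to charge: {discounted_total(Order(total, years))}")

    if __name__ == "__main__":
        main()

The core can be tested with plain asserts and no mocks, while the shell stays thin enough that it barely needs tests at all.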


I have written a small system in Elixir adhering to FCIS. Not being used to the approach, I was pretty slow and sometimes it felt like jumping through hoops set by myself, lol, but I loved it: the code was very clean, testable, and refactorable. Highly recommend it as an exercise; it was surprising just how much state and IO can be pushed out.


Yeah, N++ is super floaty, there's A LOT of inertia. It might feel off at the beginning, but when you get the hang of it, it's just beautiful. It's the opposite of twitchy. You work to preserve the momentum through jumps and corners and evasion maneuvers, it's got that sleek race-y feel. I get it, it's not for everyone, but for me it's bonkers good.


I’ll go one step further: what makes you think it’s an average auto-translate job? I didn’t notice anything weird, felt like your average, slightly ranty HN post. I’m not a native speaker though.

