Hacker News: drzaiusx11's comments

I'd wager it's largely disruptive and dangerous in a highly localized way due to the small percentage of folks doing it. Doesn't make it an acceptable practice though. One person "rolling coal" can temporarily blind 3 or 4 cars back and several across depending on wind conditions, etc.

I live in a progressive state and unfortunately encounter "coal rolling" regularly. I also assume that's the point. Someone has to "own all the libs", as it were.

However, I do agree that there aren't enough folks "rolling coal" in aggregate to really move any needles on planet-scale environmental impacts. Just VERY unpleasant to be caught behind.


Oil in the exhaust in quantities high enough to produce acrid white smoke is extremely common on a number of ICE engines, like blown head gaskets on EJ25s (found in most Subarus before their Toyota involvement in 2010), for example.

Having a viable alternative to the KHTML-lineage engines (blink/webkit) besides Gecko will be a boon for the web.

I haven't been super happy with Mozilla's management of Firefox, although it's my daily driver and a great browser. I just don't have super high confidence it'll be viable long term from a corporate standpoint, especially since it's largely been bankrolled by Google, which makes Blink, so having another real alternative would be great. Having a sustainable, grassroots community project in Servo gives me hope again (after Mozilla dropped the ball on them...)


Not Mozilla-related, but there's also the Ladybird project if you're looking for sustainable alternatives: https://ladybird.org/

I had a home-brewed RAM expansion board (still do actually, in a box somewhere...). I powered everything up a couple years ago when my kids found it and asked what the heck it was. Still works.

My original VIC 20 machine that I had in the 80s still works as well, but a few things have been replaced along the way. I still have the same 24K expansion cart that my Dad built 40 years ago and it also still works.

Sounds authentic

Do you mean specifically the Frogger game? Or in general? If you mean in general, it hopefully does sound very close to the original VIC 20. I actually reverse engineered the VIC chip schematic from photos of the silicon chip, so the sound emulation is based on what I worked out from that reverse-engineered schematic. Some of my discussion on that is covered here:

https://sleepingelephant.com/ipw-web/bulletin/bb/viewtopic.p...


That's awesome! I had access to an electron microscope at my last makerspace and always wanted to decap a simple chip for reverse engineering. A close friend of mine did that with a SID and recreated it in Verilog with good results, which I always found fascinating. Great work on the VIC20 side! And yeah, my VIC20 was never the most stable machine, crashing frequently, but I also had a bunch of janky expansion boards attached, so ymmv...

I'm also using the knowledge gained from reverse engineering the VIC chip to contribute to a soon to be released VIC chip replacement device called the PIVIC, powered internally by a Pico 2 chip.

Sounds cool! Similar to the SID project, assuming you're also aiming for pin compatibility.

I'm curious why you chose the Pico platform over something like a TinyFPGA, which could be near 100% gate-level compatible, versus a Pico with software emulation. I bet the < $3 iCE40 has enough gates?

I haven't really looked at the Pico 2 yet; maybe it's one of those new hybrid ARM+FPGA designs and you'd have the best of both worlds?

EDIT: sadly no CPLD/FPGA on the Pico 2 front, at least according to [1]. The Pico 2 does add a new RISC-V core (as a coprocessor? I only skimmed...). So I guess you'd have to do a bunch of timer interrupts to keep your emulator clock-aligned if you're going pin compatible.

1. https://pip-assets.raspberrypi.com/categories/1214-rp2350/do...


The new graphics driver stack they're touting (capable of running unmodified modern windows display drivers) along with support for x86_64 landing may result in increased interest in the project. They've already made a lot of progress with almost no resources as is. It's truly an impressive project.

I haven't heard much about the ArduCopter (and ArduPilot) projects for a decade; are those projects still at it? I used to run a quadrotor I made myself a while back, until I crashed it in a tree and decided to find cheaper hobbies...

Well at least crashing drones into trees has never been cheaper hahaha. So it's super easy to get into nowadays, especially if it's just to play around with flight systems instead of going for pure performance.

They're alive and well and producing some pretty impressive software.

Crashing your drone is a learning experience ;)

Remote NSH over MAVLink is interesting: your drone is flying and you are talking to the controller in real time. Just don't type 'reboot'!


Fellow old here, I had several 56k baud modems but even my USR (the best of the bunch) never got more than half way to 56k throughput. Took forever to download shit over BBS...

The real analog copper lines were kind of limited to approx 28K, more or less the Nyquist limit. However, the lines at the time were increasingly replaced with digital 64Kbit lines that sampled the analog tone. So the 56k standard aligned itself to the actual sample times, and that allowed it to reach a 56kbps rate (some time/error tolerance still eats away at your bandwidth).

If you never got more than 24-28k, you likely still had an analog line.
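For anyone who wants to check the arithmetic, a quick sketch (8 kHz sampling at 8 bits per sample are the standard DS0 channel parameters; the 7-usable-bits figure is where 56k comes from):

```python
# A DS0 digital voice channel: 8000 samples/s at 8 bits per sample
sample_rate_hz = 8000
bits_per_sample = 8
ds0_bps = sample_rate_hz * bits_per_sample
print(ds0_bps)  # 64000

# V.90 could only reliably recover 7 of the 8 bits per sample downstream
usable_bits_per_sample = 7
v90_downstream_bps = sample_rate_hz * usable_bits_per_sample
print(v90_downstream_bps)  # 56000
```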


56k was also unidirectional, you had to have special hardware on the other side to send at 56k downstream. The upstream was 33.6kbps I think, and that was in ideal conditions.

The special hardware was actually just a DSP at the ISP end. The big difference was before 56k modems, we had multiple analog lines coming into the ISP. We had to upgrade to digital service (DS1 or ISDN PRI) and break out the 64k digital channels to separate DSPs.

The economical way to do that was integrated RAS systems like the Livingston Portmaster, Cisco 5x00 series, or Ascend Max. Those would take the aggregated digital line, break out the channels, hold multiple DSPs on multiple boards, and have an Ethernet (or sometimes another DS1 or DS3 for more direct uplink) with all those parts communicating inside the same chassis. In theory, though, you could break out the line in one piece of hardware and then have a bunch of firmware modems.


The asymmetry of 56k standards was 2:1, so if you got a 56k6 link (the best you could get in theory IIRC) your upload rate would be ~28k3. In my experience the best you would get in real world use was ~48k (so 48kbps down, 24kbps up), and 42k (so 21k up) was the most I could guarantee would be stable (bearing in mind "unstable" meant the link might completely drop randomly, not that there would be a blip here-or-there and all would be well again PDQ afterwards) for a significant length of time.

To get 33k6 up (or even just 28k8 - some ISPs had banks of modems that supported one of the 56k6 standards but would not support more than 28k8 symmetric) you needed to force your modem to connect using the older symmetric standards.


Yeah, 28k sounds closer to what I got when things were going well. I also forget if they were tracking in lower case 'k' (x1000) or upper case 'K' (x1024) units/s, which obviously has an effect as well.

The lower case "k" vs upper case "K" is an abomination. The official notation is lower case "k" for 1000 and "Ki" for 1024. It's an abomination too, but it's the correct abomination.

That's a newer representation, mostly because storage companies always (mis)represented their storage... I don't think any ISPs really misrepresent k/K in kilobits/kilobytes.

Line speed is always base 10. I think everything except RAM (memory, caches etc.) is base 10 really.
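For concreteness, the two conventions side by side (a trivial sketch):

```python
# SI decimal prefix: used for line speeds and disk capacities
k = 1000       # 1 kbit = 1000 bits
# IEC binary prefix: used for RAM, caches
Ki = 1024      # 1 KiB = 1024 bytes

print(56 * k)  # 56000 bits/s for a "56k" modem
print(64 * Ki) # 65536 bytes for "64 KiB" of RAM
```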

> 56k baud modems but even my USR (the best of the bunch) never got more than half way to 56k throughput

56k modem standards were asymmetric, the upload rate being half that of the download. In my experience (UK based, calling UK ISPs) 42kbps was usually what I saw, though 46 or even 48k was stable¹ for a while sometimes.

But 42k down was 21k up, so if I was planning to upload anything much I'd set my modem to pretend it was a 36k6 unit: that was more stable and up to that speed things were symmetric (so I got 36k6 up as well as down, better than 24k/23k/21k). I could reliably get a 36k6 link, and it would generally stay up as long as I needed it to.

--------

[1] sometimes a 48k link would last many minutes then die randomly, forcing my modem to hold back to 42k resulted in much more stable connections


Even then, it required specialized hardware on the ISP side to connect above 33.6kbps at all, and almost never reliably so. I remember telling most of my friends just to get/stick with the 33.6k options, especially considering the overhead a lot of those faster modems took; most of them were "winmodems" that used a fair amount of CPU overhead instead of an actual COM/serial port. It was kind of wild.

Yep. Though I found 42k reliable and a useful boost over 36k6 (14%) if I was planning on downloading something big¹. If you had a 56k capable modem and had a less than ideal line, it was important to force it to 36k6 because failure to connect using the enhanced protocol would usually result in fallback all the way to 28k8 (assuming, of course, that your line wasn't too noisy for even 36k6 to be stable).

I always avoided WinModems, in part because I used Linux a lot, and recommended friends/family do the same. “but it was cheaper!” was a regular refrain when one didn't work well, and I pulled out the good ol' “I told you so”.

--------

[1] Big by the standards of the day, not today!


> several 56k baud modems

These were almost definitely 8k baud.


In case anyone else is curious, since this is something I was always confused about until I looked it up just now:

"Baud rate" refers to the symbol rate, that is the number of pulses of the analog signal per second. A signal that has two voltage states can convey two bits of information per symbol.

"Bit rate" refers to the amount of digital data conveyed. If there are two states per symbol, then the baud rate and bit rate are equivalent. 56K modems used 7 bits per symbol, so the bit rate was 7x the baud rate.


Not sure about your last point, but in serial comms there are start and stop bits, and sometimes parity. We generally used 8 data bits with no parity, so in effect there are 10 bits per character including the start and stop bits. That pretty much matched up with file transfer speeds achieved using one of the good protocols that used sliding windows to remove latency. To calculate expected speed, just divide baud by 10 to convert from bits per second to characters per second; then there is a little efficiency loss due to protocol overhead. This is direct without modems; once you introduce those, the speed could be variable.
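The divide-by-10 rule of thumb above, sketched out (8N1 framing: 1 start bit + 8 data bits + 1 stop bit per character):

```python
def chars_per_second(bps, data_bits=8, parity_bits=0, start_bits=1, stop_bits=1):
    """Effective character throughput on an async serial link (default 8N1)."""
    frame_bits = start_bits + data_bits + parity_bits + stop_bits
    return bps / frame_bits

print(chars_per_second(9600))   # 960.0 characters/s on a 9600 bps 8N1 link
print(chars_per_second(57600))  # 5760.0, before protocol overhead
```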

Yes, except that modern infra, e.g. WiFi 6, uses 1024-QAM, which is to say there are 1024 states per symbol, so you can transfer up to 10 bits per symbol.

Yes, because at that time, a modem didn't actually talk to a modem over a switched analog line. Instead, line cards digitized the analog phone signal, the digital stream was then routed through the telecom network, and then converted back to analog. So the analog path was actually two short segments. The line cards digitized at 8kHz (enough for 4kHz analog bandwidth), using a logarithmic mapping (u-law? a-law?), and they managed to get 7 bits reliably through the two conversions.

ISDN essentially moved that line card into the consumer's phone. So ISDN "modems" talked directly digital, and got to 64kbit/s.
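The logarithmic mapping the parent is half-remembering is μ-law in North America and A-law in Europe (both standardized in ITU-T G.711). A rough sketch of the continuous μ-law compression curve, not the exact segmented 8-bit codec the line cards implemented:

```python
import math

MU = 255  # mu-law companding parameter

def mu_law_compress(x):
    """Map a sample in [-1, 1] logarithmically: small signals keep more resolution."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

# Quiet samples are boosted relative to loud ones before 8-bit quantization
print(round(mu_law_compress(0.01), 3))  # ~0.23
print(round(mu_law_compress(0.5), 3))   # ~0.88
```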


An ISDN BRI (basic copper) actually had two 64kbps B channels; for POTS dialup as an ISP you typically had a PRI with 23 B channels and 1 D channel.

56k only allowed one analog/digital conversion from provider to customer.

When I was troubleshooting clients, the problem was almost always on the customer side of the demarc with old two line or insane star junctions being the primary source.

You didn't even get 33k on analog switches, but at least US West and GTE had ISDN-capable switches backed by at least DS# by the time the commercial internet took off. LATA tariffs in the US killed BRIs for the most part.

T1 CAS was still around, but in-channel CID etc… didn't really work for their needs.

33.6k still depended on DS# backhaul, but you could be pots on both sides, 56k depended on only one analog conversion.


56k relied on the TX modem to be digitally wired to the DAC that fed the analog segment of the line.

Confusing baud and bit rates is consistent with actually being there, though.

As someone that started with 300/300 and went via 1200/75 to 9600 etc - I don't believe conflating signalling changes with bps is an indication of physical or temporal proximity.

I think it was a joke implying you'd be old enough to forget because of age, which in my case is definitely true...

Oh, I got the implication, but I think it was such a common mistake back then, that I don't think it's age-related now - it's a bit of a trope, to assume baud and bps mean the same thing, and people tend to prefer to use a more technical term even when it's not fully understood. Hence we are where we are with terms like decimate, myriad, nubile, detox etc, forcefully redefined by common (mis)usage. I need a cup of tea, clearly.

Anyway, I didn't think my throw-away comment would engender such a large response. I guess we're not the only olds around here!


No, just that confusing the two was ubiquitous at the time 14.4k, 28k, and 56k modems were the standard.

Like it was more common than confusing Kbps and KBps.

I mean, the 3.5" floppy disk could store 1.44 MB... and by that people meant the capacity was 1,474,560 bytes = 1.44 * 1024 * 1000. Accuracy and consistency in terminology has never been particularly important to marketing and advertising, except marketing and advertising is exactly where most laypersons first learn technical terms.
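The floppy arithmetic, spelled out (2880 sectors of 512 bytes is the standard high-density 3.5" geometry):

```python
# High-density 3.5" floppy: 80 tracks * 2 sides * 18 sectors * 512 bytes
bytes_total = 80 * 2 * 18 * 512
print(bytes_total)                  # 1474560

# "1.44 MB" mixes bases: 1440 binary KiB, which is neither MB nor MiB
print(bytes_total == 1440 * 1024)   # True
print(bytes_total / (1000 * 1000))  # 1.47456 decimal MB
print(bytes_total / (1024 * 1024))  # 1.40625 binary MiB
```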


I started out with a 2400 baud US Robotics modem with my "ISP" being my local university to surf gopher and BBS. When both baud rates and bits per second were being marketed side by side I kinda lost the thread tbh. Using different bases for storage vs transmission rates didn't help.

Yeah, I got baud and bit rates confused. I also don't recall any Hayes commands anymore either...

If only search engine AI output didn't constantly hallucinate nonexistent APIs, it might be a net productivity gain for me... but it's not. I've been bitten enough times by their false "example" output for it to be a significant net time loss vs using traditional search results.

Gemini hallucinated a method on a rust crate that it was trying to use and then spent ten minutes googling 'method_name v4l2 examples' and so on. That method doesn't exist and has never existed; there was a property on the object that contained the information it wanted, but it just sat there spinning its wheels convinced that this imagined method was the key to its success.

Eventually it gave up and commented out all the code it was trying to make work. Took me less than two minutes to figure out the solution using only my IDE's autocomplete.

It did save me time overall, but it's definitely not the panacea that people seem to think it is and it definitely has hiccups that will derail your productivity if you trust it too much.


My favorite with ChatGPT is:

"Tell me how to do X" (where X was, for one recent example, creating a Salt stanza to install and configure a service).

I do as it tells me, which seems reasonable on the face of it. But it generates an error.

"When creating X as you described, I get error: Z. Why?"

"You're absolutely correct and you should expect this error because X won't work this way. Do Y instead."

Gah... "I told you to do X, and then I'm going to act like it's not a surprise that X doesn't work and you should do something else."


You're absolutely right

it's not just that you are absolutely correct but you are also absolutely right

It's even worse when LLM eats documentation for multiple versions of the same library and starts hallucimixing methods from all versions at the same time. Certainly unusable for some libraries which had a big API transition between versions recently.

Using ChatGPT and phrasing it like a search seems like a better way? “Can you find documentation about an API that does X?”

It will often literally just make up the documentation.

If you ask for a link, it may hallucinate the link.

And unlike a search engine where someone had to previously think of, and then make some page with the fake content on it, it will happily make it up on the fly so you'll end up with a new/unique bit of fake documentation/url!

At that point, you would have been way better off just... using a search engine?


How is it hallucinating links? The links are direct links to the webpage that they vectorized or whatever as input to the LLM query. In fact, in almost all LLM responses on DuckDuckGo and Google, the links are right there as cited sources that you can click on (I know because I'm almost always clicking on the source link to read the original details, and not the made-up one).

I would imagine links can be hallucinated because the original URLs in the training data get broken up into tokens, so it's not hard to come up with a URL that has the right format (say https://arxiv.org/abs/2512.01234, which I just made up but looks like a real paper URL) and a plausible-sounding title.

Yeah, but the current state of ChatGPT doesn’t really do this. The comment you’re replying to explains why URLs from ChatGPT generally aren’t constructed from raw tokens.

You are absolutely right! The current state of ChatGPT was not in my training data.

How do you explain it then, when it spits out the link, that looks like it surprisingly contains the subject of your question in the URL, but that page simply doesn't exist and there isn't even a blog under that domain at all?

Near as I can tell, people just don't actually check and go off what it looks like it's doing. Or they got lucky, and when they did check once it was right. Then they assume it will always be right.

Which would certainly explain things like hallucinated references in legal docs and papers!

The reality is that for a human to make up that much bullshit requires a decent amount of work, so most humans don’t do it - or can’t do it as convincingly. LLMs can generate nigh infinite amounts of bullshit for cheap (and often more convincing sounding bullshit than a human can do on their own without a lot of work!), making them perfect for fooling people.

Unless someone is really good at double checking things, it's a recipe for disaster. Even worse, doing the right amount of double checking often makes using them even more exhausting than just doing the work yourself in the first place.


I’ve used Claude code to debug and sometimes it’ll say it knows what the issue is, then when I make it cite a source for its assertions, it will do a web search and sometimes spit out a link whose contents contradict its own claim.

One time I tried to use Gemini to figure out 1950s construction techniques so I could understand how my house was built. It made a dubious sounding claim about the foundation, so I had it give me links and keywords so I could find some primary sources myself. I was unable to find anything to back up what it told me, and then it doubled down and told me that either I was googling wrong or that what it told me was a historical “hack” that wouldn’t have been documented.

These were both recent and with the latest models, so maybe they don’t fully fabricate links, but they do hallucinate the contents frequently.


> maybe they don’t fully fabricate links

Grok certainly will (at least as of a couple months ago). And they weren't just stale links either.


After getting beaten for telling the truth so frequently, who wouldn’t start lying?

I haven't seen this happen in ChatGPT thinking mode. It actually does a bunch of web searches and links to the results.
