
It's terrible, I love it. I also love that all the setup scripts are badly-written Perl scripts, and there's even some custom C code just to change the mixer rather than use alsactl. This is 1000% something I would have made years ago. Edit: Holy moly, the configs are CSV files.... that encode JSON?!

And hey, Slack 15 is out! Now that they've done a new release, they won't need to release again for another 6 years!



"badly-written Perl scripts"

They do appear to be badly written on purpose, like this "hotdog-aboutDrives.pl" one:

https://github.com/arthurchoung/HOTDOG/blob/master/hotdog-ab...

The entire script is this:

  #!/usr/bin/perl
  $str = `df`;
  print $str;


The rationale: https://hotdoglinux.com/PerlAndUseStrict/index.html

"Most tutorials about Perl will tell you to always 'use strict' or something to that effect. However, the general rule for HOT DOG Linux is to not 'use strict'. The reasoning behind this is that each Perl script should be as simple as possible. It should be simple enough to not need 'use strict'. If it gets to the point where the script would be easier to deal with if it did 'use strict', then that is an indicator that the script is too complicated and should be broken up."

I tried to come up with a comparison for this but failed. It's sort of like saying, "If you need to wear a helmet to do some activity, you should probably ......... break up the activity into three shorter activities ........."


Since the smaller scripts run in separate processes, there is in fact a safety benefit to the break-up, which compensates in some way for the lack of strict. Processes are isolated; you know that a function call in one script will not misuse a function in another. Or that a global variable defined in one script can't be modified or accessed in the other. The scripts can only communicate via external mechanisms: command line arguments, environment variables, files/pipes/sockets, more rarely shared memory.

Unix was built using an extremely unsafe language. Yet it reached a decent level of reliability in just a few years, a lot of which was owed to the system applications being small programs isolated into separate processes.
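A minimal sketch of that isolation (the script names are made up for the example): two tiny programs that can only talk through a pipe, so neither can touch the other's variables, memory, or file descriptors.

```shell
# Hypothetical pair of small scripts composed only via a pipe.
cat > emit-sizes.sh <<'EOF'
#!/bin/sh
# Everything this process allocates dies with it on exit.
printf '%s\n' 30 10 20
EOF
cat > pick-max.sh <<'EOF'
#!/bin/sh
# Reads only stdin; it cannot reach into emit-sizes.sh's state.
sort -n | tail -n 1
EOF
chmod +x emit-sizes.sh pick-max.sh
./emit-sizes.sh | ./pick-max.sh   # prints 30
```

If pick-max.sh crashed, emit-sizes.sh would be unaffected; the kernel, not programmer discipline, enforces the boundary.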


Simple is beautiful, less is more.


You just move the complexity one layer up, at the composition of all those small utilities.


> You just move the complexity one layer up, at the composition of all those small utilities.

Out of curiosity, what would the name of the metric for measuring this tradeoff be? Average lines of code per script/program? Average scripts/programs to accomplish a given task?

I feel like it'd be really good to talk more about this tradeoff, e.g. having smart programs that do a lot but are bloated (OpenVPN, OpenSSL, Docker all come to mind) versus smaller programs that do less, but chain together (most GNU tools with the piping mechanism) and the extreme ends of this scale.

Yet, i don't even know how much research has been done into this, what the status quo is and what terms to even look up. It's like talking about the difference between a monolith application or a microservices application, an abstraction that would be applied to tasks and the ways to do them, much like we have SLoC or cyclomatic complexity in regards to reasoning about code (though neither is exactly perfect either).


Average number of interfaces/solution. If you only have 1 program, you have 1 interface (1 set of command-line arguments, 1 set of environment variables, 1 STDIN, 1 STDOUT). If you have 50 programs, you can have 50 interfaces. So, more interfaces, but.... more interfaces.

Composeability requires many different interfaces, but not every solution needs composeability.


Fair point, but doesn't that also kind of muddy the waters, because interfaces are also a regular programming construct? E.g. you might have 50 libraries with 50 interfaces that still go in one very large program, no? And in practice that would be very different from chaining 50 different scripts/simple tools together.


That is false. When a small utility terminates, I'm assured that any file descriptors which it opened are closed, that any memory it allocated is gone, and that it didn't touch any data structures of the adjacent programs I'm composing it with. That's a whole lot of complexity that didn't move to the next layer.

Debugging complexity is reduced also because if something causes an abnormal termination, only the containing utility will die, not the entire composition.


How do you compose those small utilities?


War is peace, ignorance is bliss.


If a brick could fall on you and kill you, just use smaller bricks.


If a crib needs a gate it should be lower down.


> CSV files.... that encode JSON?!

We need a new schema language with the ability to specify such cross-format …formats.



I use jsonlines on those occasions https://jsonlines.org/examples/


This is nice. But less space-efficient than CSV when it's strictly tabular, since CSV has a column legend on the first row allowing 'pure' rows, whereas JSON has to key every field, on every row.


Your use case is literally the first item in the link I shared (and the reason I shared that link too):

  ["Name", "Session", "Score", "Completed"]
  ["Gilbert", "2013", 24, true]
  ["Alexa", "2013", 29, true]
  ["May", "2012B", 14, false]
  ["Deloise", "2012A", 19, true]


I don't understand why people keep using CSV today when SQLite is public domain and can be used everywhere, has a very portable and lightweight implementation, has a stable file format with long-term commitment, and a good compromise on the type system that gives enough flexibility to be on par with CSV if some entries happen to have a different type...


Are you suggesting sending around SQLite dbs as a data interchange format? Or to replace some other csv use case?


Yes exactly. As a data interchange format I think it is much better than CSV (more compact, faster, more efficient, less error-prone, etc.)
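A rough sketch of what that migration looks like with the sqlite3 CLI (file names are made up; assumes sqlite3 is installed). `.import` into a nonexistent table takes column names from the CSV header row, and imported columns end up as TEXT, so numeric comparisons need a cast:

```shell
# Hypothetical CSV ingested into a single-file SQLite database.
cat > scores.csv <<'EOF'
name,session,score
Gilbert,2013,24
Alexa,2013,29
EOF
# .mode csv + .import: header row becomes the column names.
sqlite3 scores.db ".mode csv" ".import scores.csv scores"
# Columns imported this way are TEXT, hence the CAST.
sqlite3 scores.db "SELECT name FROM scores WHERE CAST(score AS INTEGER) > 25;"
```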


Installed base. You can bet there are thousands of AS/400s and similar architectures putting out CSVs. Also, you can't get more lightweight than CSV; I have several microcontroller projects that output CSV.


Switch from the developer world to the business world and everybody has Excel to open the CSV files with the article information, the sales numbers and so on and can work with that. How do you even read data from SQLite to Excel? VBA? Some obscure connector? With CSV it's "import" or even "open file".


Ironically, Excel's implementation of CSV is terrible. It's constantly destroying data (e.g. large numeric and pseudo-numeric fields), not to mention the whole issue around any cell beginning with an equals sign being converted into a formula.


It totally is, but that is what most people know, next to the XLS/XLSX files themselves.


And some localizations of Excel use a semicolon instead of a comma! So no interoperability.


Haha. Wait until you receive a file made in a different locale (for example, an Excel file with formulas created on a Japanese-language computer).


SQLite has an ODBC driver that Excel can use.


Because if I want to make a graph from some data, it's much, much easier to open the CSV in Excel (or LO Calc) and create a graph from a sum of a subset of three columns vs. a fourth column than it is to write an SQL query.


Does Excel support SQL? Because I feel confident that Excel+SQL would 100% replace Tableau and save us a million dollars


You have been able to include the results of SQL queries in Excel for decades.


Plus Excel includes PowerQuery with the M programming language which lets you do everything you can in SQL, just slower and with more verbosity.


Excel can't open SQLite files


Yes it can, using the SQLite ODBC driver.


Does that work with File > Open with a stock Excel install? Does editing work seamlessly?


No, but that's not why you would move your data to sqlite. I was only responding to the question of whether Excel could read it, which it can.


He said open, not read. They're subtly different terms but the distinction is important.


You can have JSON Lines where each line is a JSON array.


That's like a file with S-expressions, only worse (because those already existed, whereas this had to be made up as a new thing, without offering any improvement).


You could make a similar argument about the redundancy of JSON or even XML too. S-expressions predate all of them and can represent any of their data structures too.

The problem is S-expressions are still pretty niche, whereas JSON support is widespread, and supporting jsonlines when you already support JSON is trivial.

If you’re old enough to be a LISPer then you should be old enough to realise that there’s always going to be a multitude of competing standards and sometimes we have to make pragmatic choices.


Yes, in fact I frequently make that argument about JSON and XML. I'm definitely being consistent on that one.


JSON lines has a few nice properties.

The biggest advantage is that 1 line = 1 record, so you can use unix tools like head, tail, grep, sed to work on it.

It's also easy to sync state in case of errors, since a newline always means record boundary.

In CSV and normal JSON a record can span multiple lines, which sucks, so you almost always have to write custom code to work on them.
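A sketch of that property in practice (file name made up for the example): because each record is exactly one physical line, ordinary line-oriented tools work without any JSON parser.

```shell
# Hypothetical JSON Lines file, one complete record per line.
cat > scores.jsonl <<'EOF'
["Gilbert", "2013", 24, true]
["Alexa", "2013", 29, true]
["May", "2012B", 14, false]
EOF
grep -c '"2013"' scores.jsonl    # count the 2013 records: 2
tail -n 1 scores.jsonl           # the last line is a complete record
```

The same grep or tail on pretty-printed JSON, or on a CSV with embedded newlines in quoted fields, would split records mid-way.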


An S-expression file with a newline after each top-level expression has the same property. So that's not really a problem.


Clojure's EDN is pretty decent for this.

ingy and I came up with p3rl.org/JSONY as a "JSON with less ceremony" compromise that I often find useful (it has a PEG grammar that's a superset of JSON so pretty easy to re-implement elsewhere)


The hyphen-prefixed arrays are very weird. It looks like YAML but behaves differently. Doesn't help that YAML is a superset of JSON too.

I also don’t like space (char 32) used as a field delimiter because that opens up entire categories of bugs that are already a daily annoyance for shell scripting.

I respect your thought processes but I can see this particular specification being more confusing than productive to most other people.


The hyphen prefixed arrays were an ingyism. I don't use those at all but since he wrote the grammar I didn't feel like it was particularly fair to bitch about him sneaking in a single feature he wanted.

I use it basically like:

  { key1 value1 key2 [ value2a value2b value2c ] }
i.e. treating JSONY as "JSON but with as much of the syntactical noise as possible optional" which is what I always wanted it to be.

Plus because JSONY isn't at all indentation sensitive pasting chunks of JSON in becomes IMO a lot easier to comprehend.

A valuable thing for me is that if e.g. I'm writing a daemon that communicates via newline delimited JSON over a TCP or UNIX socket I can have a development mode where it uses JSONY as the parser so it's faster/easier to do quick interactive tests with socat.


I'm not going to criticize personal usage, but for the point of accuracy the indentation-sensitive syntax of YAML is entirely optional. Like I said, it's a superset of JSON, which means the following is valid YAML:

   { key1: value1, key2: [ value2a, value2b, value2c ] }
Granted that's still got a few additional control characters (comma and colon), but personally I think that aids readability, because it wouldn't be clear to me what the intent of the parameters was in your example if you hadn't named them "key" and "value". But as I said, I'm not going to criticize personal usage.


"badly-written Perl scripts..."

You are much too kind to the monkeys who put this together. The config is nice, but Chicago? Chicago font? I thought that was buried deep, long ago.


Am I weird for liking Chicago?


Chicago is a great font. It’s also strangely popular in Japan, used in video games and I’ve seen several stores’ signs set in it:

https://www.alamy.com/stock-photo-exterior-shop-front-of-cop...

Not so great kerning though.


Sincerely, no, it is a good low PPI font with a lot of (ugh actually unintended pun) character for its historical use.


I like Chicago too, but only the original 12 pt. bitmap version. For some reason it never looks quite right when scaled, I wonder why.


Probably because Chicago was originally created by Susan Kare in just the 12pt bitmap form for the original Macintosh. The scalable TrueType version was created years later for System 7 by different people, and I'm afraid they really didn't capture the spirit very well.


I like Chicago too, but fonts are a contentious topic! I really like Computer Modern too but apparently some people properly hate it.


Do you enjoy casserole that calls itself pizza, and roast beef sub-sandwiches that demand dipping?


Why yes I enjoy both of those things. But the former is better classified as a lasagna. And the latter is just a good way to keep your sandwich intact with such a moist condiment.


I like when they put weird shit on hot dogs


If you ever visit Vancouver (or apparently LA or Santa Monica) you have to try out Japadog! Seriously, check these out. Can never go wrong with the Terimayo (or even better, Spicy Cheese Terimayo)... http://www.japadog.com/menu_En.html haha :D


Sport peppers ftw!!


I’ve only ever had a “Chicago dog” in Minnesota, and the pepper relish was neon colored and over-seasoned with who knows what. But I’ve been told this is not a faithful representation, so I’ll try a real one next time I visit actual Chicago.


No. I've never really liked anything about Chicago the city (although I have some fond memories living in a nearby suburb for a few years as a child.) I don't like the food, I don't like the politics etc.

I do like the font though. It's a bit like the Plan9 font, it just feels comfortable.



