It's terrible, I love it. I also love that all the setup scripts are badly written Perl scripts, and that there's even some custom C code just to change the mixer rather than using alsactl. This is 1000% something I would have made years ago. Edit: Holy moly, the configs are CSV files... that encode JSON?!
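I can only guess at the actual layout, but a "CSV that encodes JSON" config presumably looks something like this sketch (the `key`/`value` column names and the mixer entry are my invention, Python just for illustration):

```python
import csv
import io
import json

# Hypothetical example of a CSV config whose value column holds JSON.
# Note the CSV-style doubled quotes around the embedded JSON strings.
raw = 'key,value\nmixer,"{""channel"": ""Master"", ""volume"": 80}"\n'

config = {}
for row in csv.DictReader(io.StringIO(raw)):
    # Each CSV "value" field is itself a complete JSON document.
    config[row["key"]] = json.loads(row["value"])

print(config["mixer"]["volume"])  # 80
```

The part that makes it feel cursed is the quoting: every quote inside the JSON has to be doubled to survive the CSV layer.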
And hey, Slack 15 is out! If they do a new release they won't need to release again for another 6 years!
" Most tutorials about Perl will tell you to always 'use strict' or something to that effect. However, the general rule for HOT DOG Linux is to not 'use strict'. The reasoning behind this is that each Perl script should be as simple as possible. It should be simple enough to not need 'use strict'. If it gets to the point where the script would be easier to deal with if it did 'use strict', then that is an indicator that the script is too complicated and should be broken up."
I tried to come up with a comparison for this but failed. It's sort of like saying, "If you need to wear a helmet to do some activity, you should probably ......... break up the activity into three shorter activities ........."
Since the smaller scripts run in separate processes, there is in fact a safety benefit to the break-up, which compensates somewhat for the lack of strict. Processes are isolated: you know that a function call in one script can't misuse a function in another, and that a global variable defined in one script can't be modified or accessed by another. The scripts can only communicate via external mechanisms: command-line arguments, environment variables, files/pipes/sockets, and more rarely shared memory.
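To make the isolation concrete, here's a minimal sketch (Python rather than Perl, purely for illustration): the parent and child each define a global named `counter`, and neither can touch the other's; the only channel between them is stdin/stdout.

```python
import subprocess
import sys
import textwrap

# The child process has its own global `counter`; it lives and dies
# with that process and is invisible to the parent.
child_src = textwrap.dedent("""
    import sys
    counter = 0
    for line in sys.stdin:
        counter += 1
    print(counter)
""")

counter = 999  # same name in the parent; completely unrelated storage

result = subprocess.run([sys.executable, "-c", child_src],
                        input="a\nb\nc\n", capture_output=True, text=True)

print(result.stdout.strip())  # "3" -- the child's counter, via the pipe
print(counter)                # still 999 -- the child never saw it
```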
Unix was built using an extremely unsafe language. Yet it reached a decent level of reliability in just a few years, a lot of which was owed to the system applications being small programs isolated into separate processes.
> You just move the complexity one layer up, at the composition of all those small utilities.
Out of curiosity, what would the name of the metric for measuring this tradeoff be? Average lines of code per script/program? Average scripts/programs to accomplish a given task?
I feel like it'd be really good to talk more about this tradeoff, e.g. having smart programs that do a lot but are bloated (OpenVPN, OpenSSL, Docker all come to mind) versus smaller programs that do less, but chain together (most GNU tools with the piping mechanism) and the extreme ends of this scale.
Yet I don't even know how much research has been done into this, what the status quo is, or what terms to even look up. It's like talking about the difference between a monolith application and a microservices application: an abstraction that would be applied to tasks and the ways to do them, much like we have SLoC or cyclomatic complexity for reasoning about code (though neither is exactly perfect either).
Average number of interfaces/solution. If you only have 1 program, you have 1 interface (1 set of command-line arguments, 1 set of environment variables, 1 STDIN, 1 STDOUT). If you have 50 programs, you can have 50 interfaces. So, more interfaces, but.... more interfaces.
Composeability requires many different interfaces, but not every solution needs composeability.
Fair point, but doesn't that also kind of muddy the waters, since interfaces are also a regular programming construct? E.g. you might have 50 libraries with 50 interfaces that still go into one very large program, no? And in practice that would be very different from chaining 50 different scripts/simple tools together.
That is false. When a small utility terminates, I'm assured that any file descriptors which it opened are closed, that any memory it allocated is gone, and that it didn't touch any data structures of the adjacent programs I'm composing it with. That's a whole lot of complexity that didn't move to the next layer.
Debugging complexity is reduced also because if something causes an abnormal termination, only the containing utility will die, not the entire composition.
This is nice, but less space-efficient than CSV when the data is strictly tabular: CSV carries a column legend on the first row, allowing 'pure' data rows, whereas JSON has to key every field on every row.
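A quick sketch of that overhead (the rows and column names are invented), serializing the same table as CSV and as JSON Lines:

```python
import csv
import io
import json

rows = [
    {"name": "alice", "age": 30, "city": "Toronto"},
    {"name": "bob", "age": 25, "city": "Vancouver"},
    {"name": "carol", "age": 41, "city": "Montreal"},
]

# CSV: the header is written once, then the rows are "pure" data.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "age", "city"])
writer.writeheader()
writer.writerows(rows)
csv_bytes = len(buf.getvalue())

# JSON Lines: every key is repeated on every row.
jsonl_bytes = len("\n".join(json.dumps(r) for r in rows))

print(csv_bytes, jsonl_bytes)  # the CSV form is smaller
```

The gap obviously grows with the number of rows, since the per-row key overhead in JSON Lines is paid every time while the CSV header is paid once.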
I don't understand why people keep using CSV today when SQLite is public domain and can be used everywhere, has a very portable and lightweight implementation, has a stable file format with a long-term commitment, and has a good compromise on the type system that gives enough flexibility to be on par with CSV if some entries happen to have a different type...
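That type-system flexibility is easy to demonstrate from Python's stdlib `sqlite3` module (table and values invented for the example): leave a column's type undeclared and it stores whatever each row gives it, much like a CSV field.

```python
import sqlite3

# SQLite's dynamic typing: a column with no declared type can hold
# mixed types, row by row, much like a CSV field can.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE entries (k TEXT, v)")  # v has no declared type
con.executemany("INSERT INTO entries VALUES (?, ?)",
                [("count", 42), ("label", "hello"), ("ratio", 3.5)])

stored = {k: v for k, v in con.execute("SELECT k, v FROM entries")}
con.close()

print(stored)  # values come back with their original types intact
```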
Installed base. You can bet there are thousands of AS/400s and similar architectures putting out CSVs. Also, you can't get more lightweight than CSV; I have several microcontroller projects that output CSV.
Switch from the developer world to the business world and everybody has Excel to open the CSV files with the article information, the sales numbers and so on and can work with that. How do you even read data from SQLite to Excel? VBA? Some obscure connector? With CSV it's "import" or even "open file".
Ironically, Excel's implementation of CSV is terrible. It's constantly destroying data (e.g. large numeric and pseudo-numeric fields), not to mention the whole issue of any cell beginning with an equals sign being converted into a formula.
Haha. Wait until you receive a file made in a different language (for example, an Excel file with formulas created on a Japanese-language computer).
Because if I want to graph some data, it's much, much easier to open the CSV in Excel (or LO Calc) and create a graph from a sum of a subset of three columns vs. a fourth column than it is to write an SQL query.
That's like a file with S-expressions, only worse (because those already existed, whereas this had to be made up as a new thing, without offering any improvement).
You could make a similar argument about the redundancy of JSON or even XML. S-expressions predate all of them and can represent any of their data structures too.
The problem is S-expressions are still pretty niche, whereas JSON support is widespread, and supporting JSON Lines when you already support JSON is trivial.
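"Trivial" really is the right word here. A sketch in Python (the records are invented): once you have a JSON parser, JSON Lines support is one loop.

```python
import json

# JSON Lines: one complete JSON document per line, so parsing the
# whole stream is just json.loads applied line by line.
text = '{"event": "start", "id": 1}\n{"event": "stop", "id": 2}\n'

records = [json.loads(line) for line in text.splitlines() if line.strip()]

print(records[1]["event"])  # stop
```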
If you’re old enough to be a LISPer then you should be old enough to realise that there’s always going to be a multitude of competing standards and sometimes we have to make pragmatic choices.
ingy and I came up with p3rl.org/JSONY as a "JSON with less ceremony" compromise that I often find useful (it has a PEG grammar that's a superset of JSON so pretty easy to re-implement elsewhere)
The hyphen-prefixed arrays are very weird. It looks like YAML but behaves differently. It doesn't help that YAML is also a superset of JSON.
I also don’t like space (char 32) used as a field delimiter because that opens up entire categories of bugs that are already a daily annoyance for shell scripting.
I respect your thought processes but I can see this particular specification being more confusing than productive to most other people.
The hyphen prefixed arrays were an ingyism. I don't use those at all but since he wrote the grammar I didn't feel like it was particularly fair to bitch about him sneaking in a single feature he wanted.
I use it basically like:
{ key1 value1 key2 [ value2a value2b value2c ] }
i.e. treating JSONY as "JSON but with as much of the syntactical noise as possible optional" which is what I always wanted it to be.
Plus because JSONY isn't at all indentation sensitive pasting chunks of JSON in becomes IMO a lot easier to comprehend.
A valuable thing for me is that if e.g. I'm writing a daemon that communicates via newline delimited JSON over a TCP or UNIX socket I can have a development mode where it uses JSONY as the parser so it's faster/easier to do quick interactive tests with socat.
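That dev-mode parser swap is just a pluggable-parser pattern. A sketch of the idea in Python (the `jsony` module here is hypothetical, standing in for whatever JSONY implementation you'd actually wire up, so this sketch falls back to plain `json.loads`):

```python
import json

def make_parser(dev_mode=False):
    """Return a line-parser callable: strict JSON in production,
    optionally a more forgiving JSONY parser in development."""
    if dev_mode:
        try:
            import jsony  # hypothetical module name, assumption for this sketch
            return jsony.loads
        except ImportError:
            pass  # no JSONY implementation available; fall back to JSON
    return json.loads

# In production mode, each newline-delimited message is strict JSON.
parse = make_parser(dev_mode=False)
msg = parse('{"cmd": "ping"}')
print(msg["cmd"])  # ping
```

The daemon's read loop stays identical either way; only the callable returned by `make_parser` changes, which is what makes interactive testing over socat cheap.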
I'm not going to criticize personal usage, but for the point of accuracy, the indentation-sensitive syntax of YAML is entirely optional. Like I said, it's a superset of JSON, which means the following is valid YAML:

{ key1: value1, key2: [ value2a, value2b, value2c ] }
Granted, that's still got a few additional control characters (comma and colon), but personally I think that aids readability, because it wouldn't be clear to me what the intent of the parameters was in your example if you hadn't named them "key" and "value". But as I said, I'm not going to criticize personal usage.
Probably because Chicago was originally created by Susan Kare in just the 12pt bitmap form for the original Macintosh. The scalable TrueType version was created years later for System 7 by different people, and I'm afraid they really didn't capture the spirit very well.
Why yes I enjoy both of those things. But the former is better classified as a lasagna. And the latter is just a good way to keep your sandwich intact with such a moist condiment.
If you ever visit Vancouver (or apparently LA or Santa Monica) you have to try out Japadog! Seriously, check these out. Can never go wrong with the Terimayo (or even better, Spicy Cheese Terimayo)... http://www.japadog.com/menu_En.html haha :D
I’ve only ever had a “Chicago dog” in Minnesota, and the pepper relish was neon colored and over-seasoned with who knows what. But I’ve been told this is not a faithful representation, so I’ll try a real one next time I visit actual Chicago.
No. I've never really liked anything about Chicago the city (although I have some fond memories of living in a nearby suburb for a few years as a child). I don't like the food, I don't like the politics, etc.
I do like the font though. It's a bit like the Plan9 font, it just feels comfortable.