Good alert deduplication and dependency rules are worth so much. "Dear alerting, don't start throwing a fit about those 600 systems over there if you can't even reach the firewall all traffic to those systems goes through". Suddenly you don't get throttled by your SMS provider for the volume of alerts it tries to send, and instead just get one very spicy message.
Snark aside, this also impacts resolution time, because done well, it instantly points out the most critical problem instead of all the consequences of one big thing breaking. "Dear operator, don't worry about the hundreds of apps, the database cluster is down".
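For illustration, a tiny sketch of what such a dependency rule boils down to — host names and the topology map below are made up, and real tools (Nagios host parents, Alertmanager inhibition rules, etc.) do this with far more nuance:

    # Minimal sketch of dependency-aware alert suppression.
    # Topology map and alert fields are illustrative, not from any specific tool.

    # Which upstream device each system depends on (hypothetical names).
    DEPENDS_ON = {
        "app-01": "fw-edge-1",
        "app-02": "fw-edge-1",
        "db-01": "fw-edge-1",
    }

    def suppress_downstream(alerts):
        """Drop alerts for systems whose upstream dependency is itself down."""
        down = {a["host"] for a in alerts if a["check"] == "host_unreachable"}
        kept = []
        for alert in alerts:
            upstream = DEPENDS_ON.get(alert["host"])
            if upstream in down and alert["host"] != upstream:
                continue  # the root cause (the firewall) will page on its own
            kept.append(alert)
        return kept

    alerts = [
        {"host": "fw-edge-1", "check": "host_unreachable"},
        {"host": "app-01", "check": "host_unreachable"},
        {"host": "app-02", "check": "host_unreachable"},
    ]
    print(suppress_downstream(alerts))  # only the firewall alert survives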
They used to, but I wouldn't want to go back to that. Believe me, compilers that continue and try their best are a massive improvement in many cases, allowing you to fix more issues between compilation attempts.
Perhaps; I don't really program much C/C++. But in my experience, most of the subsequent errors are due to the first error. So even where there might be several places I could fix the code, my standard operating practice is to find the first error, fix that, and see what it cleans up.
But like I said, I am not much of a C programmer. The compiler authors feel strongly about pushing past all possible errors and keep doing it, so perhaps there is merit to this practice. But it bugs the heck out of me.
98% of the time those lengthy messages are useless, but the other 2% of the time they're critical to tracking down the problem.
A year or two ago Visual Studio added a pop-up that parses such lengthy compiler messages into a clickable tree list. I found it annoying at first, until I discovered I could dock it to the side, ignore it 98% of the time, but still go look at the details when relevant. This is an idea other compilers should copy.
Maybe ships should copy this approach too: issue fewer warnings, but provide a list of warning details for review when necessary.
Also, an EV is only as green as the grid. Hamburg's public transportation is investing heavily in electric buses, because a bus is expected to function for 10 - 15 years. Meaning, a diesel bus built today will be as polluting in 2035 as it is today, though they are also looking at alternatives there. But an electric bus will become cleaner and cleaner over time.
> For example, Azure Standard_E192ibds_v6 is 96 cores with 1.8 TB of memory and 10 TB of local SSD storage with 3 million IOPS.
That's a well-stocked Dell server going for ~50 - 60K capex without storage, before the RAM prices exploded. I'm wondering a bit about the CPU in there, but the storage + RAM is fairly normal and nothing crazy. I'm pretty sure you could have that in a rack for 100k in hardware pricing.
> When they tell their base managers to crack the whip and force them to give the whole “you are not working hard enough, tighten up. Shorter lunches, clock in 5 minutes early, etc” speech to the base employees, they will absolutely feel resentment and do LESS work, not more.
The most influential question from team lead trainings over the years has been: do you trust your employees to want to complete the task and purpose they have, or do you feel you need to control them? There are a few names for this, mainly Theory X and Theory Y.
And don't be snide and just say that the current economy forces people to work for wages. A lot of people I know would just create their own creative work if they had all the money in the world. So yeah, I think if you frame a person's job and purpose in the company right, you can trust them to work. This may not work in all industries, but in tech it seems to hold.
An example where this has, in my experience, been good guidance: someone starts slipping on their metrics, whatever those are. Comes in late, is hard to reach remotely. Naturally they should have the book thrown at them, right? Nah. If you assume they want to work well, the first question should be: why, what is going on?
In a lot of cases, there will be something going on in their private life they are struggling with. If you help them with that, or at least help them navigate work around this, you will end up with a great team member.
Like, one guy on the team recently had some trouble during the last legs of building a house and needed more flexible time. We could've been strict and told him to push through and take his entire annual vacation to manage that, even though he just needed to be able to jump away for an hour or two here and there. Instead we made sure to schedule simple work for him, had him focus more on educating his sidekicks, tracked the total time away and then booked it as 3-4 days at the end. Now it's a fun story in the team's lore that they are fond of, having navigated that together, instead of one guy sulking about having lost all of his vacation in that nonsense.
> In a lot of cases, there will be something going on in their private life they are struggling with. If you help them with that, or at least help them navigate work around this, you will end up with a great team member.
Note that there already has to be a pretty high level of trust between that employee and their manager for this to work; if I don't feel like I can trust my manager, I will absolutely keep my lips zipped about anything not directly work related.
Oh absolutely, and it would be my responsibility to build this. In fact, I don't even need details. I just prefer to know about a team member's situation and have a plan around it before clients, internal customers, our boss and HR start coming knocking with hard questions, or worse.
I'm now somewhat interested in the study to see how they accounted for possible hidden factors.
If a team lead or manager spends the time to track birthdays and takes time out of their day to have a 10-minute chat with someone on their birthday, they probably exhibit a number of other behaviors that could be summarized as "treating their employees as humans". That's the kind of boss people tend to like working with and possibly go the extra mile for.
If tolerating your boss during a normal day takes 9 of your 12 spoons of energy for the day, it takes very little further push to be spiteful. At worst, they may force you to find another workplace with a better boss.
This is a study from an elite institution published in a respectable journal in the social sciences. Certainly they took the time to perform a controlled experiment and assigned managers at random to deliver the birthday cards late or on time. That would be cheap to do and minimally invasive for the human subjects.
[Reads abstract]
They didn't? It's a purely observational study showing that one measure of sloppiness in the organisation correlates with another? What do we pay these guys for?
Per the abstract it's a "dynamic difference-in-differences" analysis, which likely means they look at whether employee behavior changes after the event. But establishing causation with it still requires quite a few assumptions.
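For anyone unfamiliar, the basic two-period, two-group version of that estimator is just a double subtraction; a toy sketch with made-up numbers:

    # Basic 2x2 difference-in-differences estimator (illustrative numbers only).
    def did_estimate(treated_pre, treated_post, control_pre, control_post):
        """Change in the treated group minus the change in the control group."""
        return (treated_post - treated_pre) - (control_post - control_pre)

    # Hypothetical average productivity before/after a late birthday card.
    print(did_estimate(treated_pre=100, treated_post=90,
                       control_pre=100, control_post=98))  # -8

The "dynamic" variant essentially estimates this per period relative to the event, and reading it causally hinges on assumptions like parallel trends between the groups.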
PNAS is kinda known for headline-grabbing research with, at times, a bit less rigorous methodology.
> Certainly they took the time to perform a controlled experiment and assigned managers at random to deliver the birthday cards late or on time. That would be cheap to do and minimally invasive for the human subjects.
If the results are true, it would be actually quite expensive because of the drop in productivity. It could also be a bit of a nightmare to push through ethical review.
They could start by observing the rate at which birthday cards are delivered on time, and not vary too much from that.
I suppose the impact on productivity isn't known in advance, and it might be that failing to receive a birthday card from a normally diligent manager costs the company more in productivity than it gains from a sloppy manager unexpectedly giving one on time.
However, if at some point it somehow shines through that this is just another checklist being ticked off, without actual sincerity behind it, this all goes down the drain, and the time would be better spent on actual work environment improvements rather than wet handshakes and a pseudo "we are a family".
This has been my understanding for e.g. European chips as well:
First you subsidize and support the creation of on-shore chip fabs that are not currently commercially viable. Literally handing companies money, under some obligation stretching into the future.
Eventually the on-shore chips are produced, but they have a higher total cost of ownership for the users: logistics may be cheaper due to shorter distances, or more expensive because the paths are not well-trodden yet. Production costs like labor, water and energy could be higher. And the chips could simply have a higher failure rate, because problems in the new processes still need to be ironed out.
But to get local consumers to switch to these chips, one applies tariffs to other sources of chips so the on-shore chips artificially become more competitive, until they become actually cheaper and competitive on their own.
The way tariffs are being threatened here doesn't fit that use case at all, from my limited understanding.
I'm a broken record about this by now, but stories like these keep reminding me how broken the law is for ethical hackers in Germany. If an ethical hacker found something like this in Germany, it would, as far as I know, not be clear whether entering an empty password counts as "circumventing or breaking a security barrier". "No password barrier" has recently been clarified in the courts, but "static password" hasn't.
And once you break a security barrier, you're breaking the law. Even the GDPR doesn't help you there - it just ensures more people are breaking different laws. And this can get all your devices seized, land you in jail, end your career, and cause thousands of euros in equipment loss, because the new laptop naturally gets lost in the return process after 6 - 12 months.
And thus, many people with the skill to find such problems and report them silently to get them closed do ... nothing. Until bad people find these holes and what the article describes happens. And Europe has hacker groups who could turn our cybersecurity upside down in a good way. Very frustrating topic.
> At the end of the trial, however, this had little impact on the verdict. The presiding judge stated for the record that the mere fact that the [publicly available] software had set a password for the connection meant that viewing the raw data of the [publicly available] program and subsequently connecting to the [publicly available] Modern Solution database constituted a criminal offense under the hacker paragraph.
Yes, taking publicly available data verbatim (no ROT13, nothing) and talking to a publicly available server on the internet can in fact be a criminal offense.
Thank you for providing an example that shows exactly how messed up this is:
> Der Vorsitzende Richter gab zu Protokoll, dass alleine die Tatsache, dass die Software ein Passwort für die Verbindung gesetzt habe, bedeute, dass ein Blick in die Rohdaten des Programms und eine anschließende Datenbankverbindung zu Modern Solution den Straftatbestand des Hackerparagrafen erfülle
> The presiding judge stated for the record that the mere fact that the software had set a password for the connection meant that looking at the program's raw data and subsequently connecting to the Modern Solution database fulfilled the criminal offense of the hacker paragraph.
So yes, entering an empty password can cause all of your electronic devices in all your registered residences to be seized as evidence.
Note that the decompilation is on the complexity level of "strings $binary".
Germany is the most contradictory country I know of, and such a huge warning flag to anywhere else. For decades, half of children's education has been spent on hammering in "Never Again". Surely there are two huge lessons to learn there: 1. Do not judge the value of people based on the biological characteristics they were born with. 2. "I was just following orders" is not an excuse; one needs to instead do what is right regardless of protocol.
There is no European country which does a worse job at both of these. Germany is easily the number one country in the world for "protocol is everything". It doesn't matter how detrimental and damaging the rules are; the rules are the rules, and they must be followed. This case is the millionth example. The rules are interpretable as it being illegal to access data with a publicly available password using this password, so we're going to apply them, despite it being patently absurd. For the first point, Germany's response to Gaza (the slowest in all of the West) said everything.
> The rules are interpretable as it being illegal to access data with a publicly available password using this password, so we're going to apply them, despite it being patently absurd.
I very much agree. I do think that this kind of ethical hacking should have a legal framework around it, to protect both sides during such an access. But this should be more on the basis of responsibly minimizing access to protected data as well as minimizing foreseeable damage.
For example - running a select on a database may show you private and protected data, but if this is done to validate a problem, fine. Start digging for data on specific persons? Touch something called "Pump Controls"? Not fine. This would however require technologically competent judges, and those are rare.
As I said, a frustrating topic and it will become very interesting if a hostile state starts pushing on this.
The German government and courts are as opportunistic as everywhere else. The German government ignores EU laws (e.g. water protection), its own courts (e.g. air pollution court orders, time record keeping for teachers), and worker protections (e.g. false self-employment of music teachers).
PostgreSQL has 2 memory-related parameters you need to set for larger instances - work_mem and shared_buffers - as these need to be set to a percentage of the VM's memory to utilize it well. However, pretty much every PostgreSQL setup guide names these two values, and on a managed PostgreSQL hosting I'd expect them to be set.
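For a rough feel of the usual starting points (shared_buffers around a quarter of RAM, work_mem budgeted per connection) - the ratios below are common rules of thumb, not authoritative numbers, and the helper is just a sketch:

    # Rough rule-of-thumb sizing for the two main PostgreSQL memory parameters.
    # The ratios are common starting points, not authoritative recommendations.
    def suggest_memory_settings(total_ram_gb, max_connections=100):
        shared_buffers_gb = total_ram_gb * 0.25          # often cited: ~25% of RAM
        # work_mem applies per sort/hash node, so budget conservatively per connection.
        work_mem_mb = (total_ram_gb * 1024 * 0.25) / max_connections
        return {
            "shared_buffers": f"{shared_buffers_gb:.0f}GB",
            "work_mem": f"{work_mem_mb:.0f}MB",
        }

    print(suggest_memory_settings(64))  # {'shared_buffers': '16GB', 'work_mem': '164MB'}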
Outside of memory, log_duration, temp_file_limit, a good query plan visualizer, and some backup and replication tooling (e.g. PGBackrest and Patroni) are also generally recommended if self-hosting. Patroni doesn't even need an external config store anymore, which is great since you can just run it on 3-4 nodes and get a high-quality, easy-to-manage HA PostgreSQL cluster.
But those two parameters are pretty much all it takes to have PostgreSQL process thousands of transactions per second without further tuning. Even our larger DBs backing simple REST applications (as opposed to ETL/data warehousing) had to grow quite a lot before further configuration was necessary, if at all.
Checkpointing probably becomes the next issue then, but modern PostgreSQL actually has great logging there -- "Checkpoints occur too frequently, consider these 5 config parameters to look at". And don't touch VACUUM jobs -- as a consultant once joked, he sometimes earns thousands of dollars just to say "You killed a VACUUM job? Don't do that".
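As a toy example of leaning on that logging - the pattern below matches the usual "checkpoints are occurring too frequently" warning, but treat the exact wording as an assumption since it can vary by version:

    # Toy scan of a PostgreSQL log for the frequent-checkpoint warning,
    # which usually hints at raising max_wal_size.
    import re
    import sys

    PATTERN = re.compile(r"checkpoints are occurring too frequently \((\d+) seconds? apart\)")

    def count_frequent_checkpoints(log_path):
        hits = 0
        with open(log_path, encoding="utf-8", errors="replace") as log:
            for line in log:
                if PATTERN.search(line):
                    hits += 1
        return hits

    if __name__ == "__main__":
        print(count_frequent_checkpoints(sys.argv[1]))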
So yeah, actually running PostgreSQL takes a few considerations, but compared to 10 - 15 years ago, you can get a lot with little effort.
And now that I've read that the second time, this is very close to various kinds of therapy.
For example, anxiety exists and sometimes occurs, and it means parts of me are trying to be very careful and precise about something. This can be a problem at times if it overcomes you, but it can also be leveraged into a strength once you figure out why it's flaring up at the moment.
Another example: travel used to be a nuisance, but now I've set up, and keep refining, packing and preparation checklists for trips of varying lengths. Being well-prepared for a short work trip is now a no-brainer, and I'm usually very calm about it.
Unless you replace the entire workforce, you'd be surprised how much organizational work and how many soft skills are involved in running infrastructure at scale.
Like sure, there is a bunch of stuff like monitoring and alerting that tells us a database is filling up its disk. This is already automated. It could also have automated remediation with tech from the 2000s and some simple rule-based systems (so you can understand why those misbehaved, instead of entirely opaque systems that just do whatever).
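Something like this is roughly all that "simple rule-based" remediation needs to look like - the metric names, thresholds and actions below are hypothetical:

    # Sketch of 2000s-style rule-based remediation: explicit, auditable rules
    # instead of an opaque system. Metrics, thresholds and actions are hypothetical.
    RULES = [
        {
            "name": "db-disk-nearly-full",
            "condition": lambda m: m["disk_used_pct"] > 90,
            "action": "expand_volume",
            "reason": "disk usage above 90%",
        },
        {
            "name": "db-disk-warning",
            "condition": lambda m: m["disk_used_pct"] > 80,
            "action": "notify_oncall",
            "reason": "disk usage above 80%",
        },
    ]

    def evaluate(metrics):
        """Return the first matching rule so the operator can see *why* it fired."""
        for rule in RULES:
            if rule["condition"](metrics):
                return rule["name"], rule["action"], rule["reason"]
        return None

    print(evaluate({"disk_used_pct": 93}))  # ('db-disk-nearly-full', 'expand_volume', ...)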
The thing is though, very often the problem isn't the disk filling up or fixing that.
The problem is rather figuring out what silly misbehavior the devs introduced, whether a PM had a strange idea they did not validate, whether this is backed by a business case and warrants more storage, whether your upstream software has a bug, or whatever else. And then more stuff happens and you need to open support cases with your cloud provider because they just broke their API to resize disks, ...
And don't even get me started on trying to organize access management with a minimally organized project consulting team. Some ADFS config resulting from that is the trivial part.