
> I'm increasingly tolerant of other technologies.

I would like to think I am, but recently I have noticed that everyone seems to want to jump on the NoSQL bandwagon. A lot of the jobs I see that I would be a good fit for want experience with MongoDB (or something similar).

I have been looking to learn them, but having read up on the technologies, I can't find a reasonable use case for any of them in my own work, at least not one that wouldn't be equally or better served by Postgres or MySQL (which I know well).

I have come to the conclusion that 90% of people using NoSQL don't actually need it, and are throwing away a lot of useful features just so they can play with the latest fad.



NoSQL DBs definitely have their place. For instance, my current project uses one to store and run limited queries on data from other systems without needing a full schema for that data. You could do it in SQL by breaking out the columns we need and storing the rest in an XML blob, but it would be annoying. One rule of thumb might be: any time you want to stick an XML blob into SQL, consider NoSQL instead.
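
To make that concrete, here's a minimal sketch of the pattern, assuming a local MongoDB instance and the pymongo driver (the database, collection, and field names are all made up):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    events = client.integration.events  # hypothetical "integration" DB, "events" collection

    # Documents from different upstream systems can carry different fields;
    # nothing needs a schema migration when a new system shows up.
    events.insert_one({"source": "crm", "type": "lead", "payload": {"name": "Ada"}})
    events.insert_one({"source": "billing", "type": "invoice", "total": 42.50})

    # Limited querying still works on whatever fields happen to be present.
    for doc in events.find({"source": "crm"}):
        print(doc)

The same data in SQL would force you to either predeclare every field or shove the variable parts into that XML blob.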

That said, I think you're right that 90% of the time SQL is a better solution. You don't fully realize all the great stuff a real SQL engine does for you until you have to go without it.


Perhaps with age comes increasing tolerance and an increasing ability to discern hype from useful technologies?

I'm a pretty 'young' person, and a much younger programmer, so I can't quite say I've got the 'oo shiny lemme use that' thing out of my system, but I'm okay with that (for now it doesn't negatively impact my work, and I learn tons of stuff). But in other areas of life, I've noticed that I've generally become less dichotomous in my approach to new things, and more discerning too.


NoSQL DBs have real advantages - e.g. when you have a huge production relational DB and want to make schema changes, you have to take your app offline for hours.


No. You can migrate big production relational DBs with much less than hours of downtime; sometimes with no downtime at all. Please either qualify your statement or stop speaking in such absolute terms.


Would you mind expanding on how to minimize downtime when applying migrations to large DBs? It's clear the grandparent isn't familiar enough with the relevant techniques to realize their statement needed qualification, so they could probably benefit from the explanation.


Sure, it can be accomplished in a variety of ways: swapping a migrated replica; breaking big schema changes into multiple incremental migrations that minimize the amount of time the database is locked and back-filling data; or spinning up an entirely new cluster, migrating everything, and then switching traffic over from the old cluster to the new cluster at the load balancer. Any number of these techniques can be mixed and matched to accomplish minimal and/or zero downtime depending on your constraints. [1] The most important thing is making sure your application code is able to operate with multiple schema versions.
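
To make the incremental option concrete, here's a rough sketch of a batched back-fill, assuming Postgres and psycopg2 (the table and column names are hypothetical):

    import time
    import psycopg2

    conn = psycopg2.connect("dbname=app")
    conn.autocommit = True  # each UPDATE commits on its own, keeping locks short

    BATCH = 5000
    with conn.cursor() as cur:
        while True:
            # Back-fill a few thousand rows at a time so no single
            # statement holds row locks for long.
            cur.execute(
                "UPDATE users SET email_normalized = lower(email) "
                "WHERE id IN (SELECT id FROM users "
                "WHERE email_normalized IS NULL LIMIT %s)",
                (BATCH,),
            )
            if cur.rowcount == 0:
                break  # nothing left to back-fill
            time.sleep(0.1)  # yield to foreground traffic between batches

While this runs, the app writes to both the old and new columns, which is exactly why it has to tolerate multiple schema versions.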

I used to have a presentation from Facebook where they spoke about their zero-downtime migration process, but I can't find it, so this SO post [2] will do. Of course, there's a bunch more information out there to read up on.

[1] https://www.honeybadger.io/blog/2013/08/06/zero-downtime-mig...

[2] http://stackoverflow.com/questions/6740856/how-do-big-compan...


If you're taking down a SQL DB to make schema changes then you're doing it wrong. Even resource-intensive changes can be done online by applying the changes in stages.
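
For example, here's roughly how the stages might look for adding a required column in Postgres (a sketch using psycopg2; the table and constraint names are illustrative):

    import psycopg2

    conn = psycopg2.connect("dbname=app")
    conn.autocommit = True  # run each DDL statement in its own transaction

    with conn.cursor() as cur:
        # Stage 1: adding a nullable column is a cheap catalog change;
        # it does not rewrite the table.
        cur.execute("ALTER TABLE users ADD COLUMN email_normalized text")

        # Stage 2: back-fill in batches while the app writes both columns
        # (see the loop sketched upthread).

        # Stage 3: enforce the rule without a long exclusive lock.
        # NOT VALID makes the ALTER itself near-instant...
        cur.execute(
            "ALTER TABLE users ADD CONSTRAINT email_normalized_present "
            "CHECK (email_normalized IS NOT NULL) NOT VALID"
        )
        # ...and VALIDATE scans the table under a weaker lock that
        # lets normal reads and writes continue.
        cur.execute("ALTER TABLE users VALIDATE CONSTRAINT email_normalized_present")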



