Considering the Bible is not even a cohesive work with a single style, language, or even writer, it never really stood a chance of standing next to the Iliad as a work of literature. Its importance, beyond the obvious importance of a particular translation itself, is solely historical and cultural.
This is so tendentious a thing to say as to just be weird. Do Northrop Frye, Harold Bloom, M. H. Abrams, and pretty much every other scholar who has written about it, not know what literature is?
If they think it stands on its own literary merits, then they are deluded. The Iliad is far more sophisticated.
If we want to talk about importance then that is another story, but merely on the quality of writing there is hardly a comparison between the two.
"The Bible" is a curated though disorganized collection of confused and contradictory prose and poetry, many of which were evolved rather than written, filled with massive sections of irrelevant tedium (Have you read Numbers? I mean honestly...). Together it only forms a cohesive work with logic-defying "interpretation".
The King James translation has literary importance for its influence on the English language, and the collection in general obviously has immense historic importance, but other than that? Yeah... no.
On the other hand, the Iliad is a masterpiece of epic poetry and a fine example of writing by all standards. No excuses need to be made for it, and that really is the most telling thing to realize.
Thanks for expanding. But doesn't this boil down to hair-splitting over the definition of "literature"? Does folklore count as literature? Many scholars would say yes, and have given it serious literary study, yet it doesn't even exist as a canonical text.
>Tell me more about how you read the GoF book, and applied the patterns in a language that does not even have interfaces or type-hinting. What a joke.
I don't see any joke. Jokes are of the type "Knock knock, who's there", etc. Using GoF patterns in a language without interfaces or type hinting? Not so much.
Most of the design patterns can be applied in any language; some languages even have inherent support for some of them. They have little to do with types or interfaces (beyond those offering extra checks).
Facade, Decorator, Flyweight, Iterator, Proxy, etc. are equally at home in Java, C++, Smalltalk, Python or Ruby (rough sketch below).
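To make that concrete, here's a minimal Decorator sketch in plain Python (class names made up purely for illustration). Duck typing stands in for the "Component" interface the GoF book spells out in C++/Smalltalk terms; no interfaces or type hints required:

    # Decorator pattern with duck typing: anything with a render() method
    # can be wrapped, no interface declaration needed.
    class PlainText:
        def render(self):
            return "hello"

    class BoldDecorator:
        def __init__(self, wrapped):
            self.wrapped = wrapped      # any object exposing render()

        def render(self):
            return "<b>" + self.wrapped.render() + "</b>"

    class ItalicDecorator:
        def __init__(self, wrapped):
            self.wrapped = wrapped

        def render(self):
            return "<i>" + self.wrapped.render() + "</i>"

    print(ItalicDecorator(BoldDecorator(PlainText())).render())
    # <i><b>hello</b></i>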
>On a recent article, another "Ruby expert" (who writes and sells programming books) was shocked by the fact that you can reuse code with inheritance. You should read it.
>Wouldn't it be more important to have people to come up with brilliant solutions on problems they are working on, rather than hard problems in general that have already been solved?
Most of the time, the guys who can come up with "brilliant solutions" to new problems are those who know the solved problems inside and out.
The other guys just keep reinventing the wheel (only worse).
The difference being that novel solutions require actual domain knowledge rather than an n-levels-deep memorization of the entirety of your Algorithms 1 and 2 courses from university.
It's not either/or: you need to understand the algorithms, not (just) memorize them, well enough to see where they could be applied to or adapted for the domain in question.
Similarly, I could quip that nontrivial novel solutions require "actual" algorithms knowledge rather than exhaustive domain knowledge. Not surprisingly, in reality it's a bit of both.
There is this Shakespearean scholar who doesn't remember any of the works of Shakespeare. Ask him to recite a sonnet and his response is that they're written down, so he doesn't have to memorize them. He doesn't even own a print copy; Google, after all, right? And he is trying to come up with a new understanding of the works of Shakespeare.
You have to have the assumptions down cold before you can challenge them. You have to have the basics down cold before you can reshape them. Having the fundamentals down so cold that you can apply and work with them is the beginning of the path not the end.
Ok, you make a certain point, and I'm going to correct myself.
Google does not interview based on the material from Algorithms 1 & 2 in university. Merely having the fundamentals taught in undergrad down pat will not actually help much in a Google interview.
Google interviews based on a database of Interview Questions, which are largely unique to the BigCo recruiting process. They're not real problems I've ever seen anyone actually have to solve; they're basically checks on whether you have studied the subject of BigCo coding interviews itself.
To repeat my example from below: how do you find all the identical anagrams of words in a text, given the text? This is not a problem taught in Algorithms class, and in fact there are several different ways to apply Algorithms 1 & 2 knowledge to it. Only one of those (tokenize, then sort each token's characters into lexical order to get a "canonical" form of the string, then build your own key-value map from canonical strings to bags of words) will get you the job, and in fact it's the one that involves pretending you can't build sets of characters and use them as keys to a hash table. In fact, if you could hash character-sets, it would be much cheaper to do so than to employ a counting sort followed by a key-value store.
Actually, as I remember, we didn't (and don't) do much with key-value data structures in Algorithms class at all, not as I took it, and not as I've TA'd it. The question is used precisely because it tests your knowledge of How to Play the Interview Game rather than your fundamental computing knowledge.
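To make the "winning" answer concrete, here's a rough sketch in Python (my own illustrative code, and note that it leans on a built-in dict for the key-value map, which this particular style of interviewer would presumably also object to):

    from collections import defaultdict

    def group_anagrams(text):
        # Canonical form of a word = its characters sorted into lexical order;
        # two words are anagrams exactly when their canonical forms match.
        groups = defaultdict(list)
        for word in text.lower().split():
            key = "".join(sorted(word))     # "listen", "silent" -> "eilnst"
            groups[key].append(word)
        # keep only keys with more than one word, i.e. real anagram groups
        return [words for words in groups.values() if len(words) > 1]

    print(group_anagrams("listen silent enlist stone notes"))
    # [['listen', 'silent', 'enlist'], ['stone', 'notes']]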
Err, plenty of answers would get you the job besides sorting, including order-insensitive hash functions and tables, neat automata-based solutions, etc.
FWIW: I've hired people who never even got an "optimal" answer, but had brilliant ways of thinking about it, even if those didn't turn out to be better/faster.
In any case, they deliberately ask questions that require applying knowledge, and designing an algorithm, rather than memorizing the way to reverse a circular linked list or whatever. So they are questions designed to test whether you can solve problems you haven't seen before by applying basic computer science knowledge.
I get/can afford to spend 45 minutes with a candidate. If you've got a better way to understand whether a candidate can apply computer science knowledge, I'd love to hear it.
Most problems being worked on for real would take at least 20-30 minutes just to introduce (and probably a lot longer to explain constraints, etc.).
Nobody is going to have interesting insights in fifteen minutes, and engineers are too busy to spend hours per candidate.
If you can't see how the anagrams question tests basic data structure and algorithms knowledge, I don't know what to tell you.
> Err, plenty of answers would get you the job besides sorting, including order-insensitive hash functions and tables, neat automata-based solutions, etc.
I actually gave hashing/hash-table of a small character-bag of my own making as the first answer. The interviewer then told me to solve the problem "without any [standard-library] data-structures". I had to question that, and was told I had to come up with everything myself, as if writing my own standard library in C. The interviewer was then satisfied when I gave the sorting answer, and I did get the positive call-back.
It was a matter of giving him what he wanted, not necessarily just solving the problem.
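For reference, the character-bag answer I gave first looks roughly like this (my own reconstruction; the "bag" is just a hashable multiset of letter counts, so it's order-insensitive by construction):

    from collections import Counter, defaultdict

    def group_anagrams_by_bag(text):
        # Key on a hashable bag of (character, count) pairs instead of a
        # sorted string; anagrams produce identical bags.
        groups = defaultdict(list)
        for word in text.lower().split():
            bag = frozenset(Counter(word).items())
            groups[bag].append(word)
        return [words for words in groups.values() if len(words) > 1]

    print(group_anagrams_by_bag("listen silent enlist stone notes"))
    # [['listen', 'silent', 'enlist'], ['stone', 'notes']]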
You seem to make the assumption that an interview question should be representative of the work you would do on the job. You could also see such questions as measuring a variable of interest to the interviewers; in this case that would be creative problem solving. The fact that it's neither something you did in algorithms class nor something you will do on the job is on purpose, it forces you to come up with something new.
A Shakespearean scholar who doesn't remember any of the works of Shakespeare? Does he have some kind of memory problem? He's spent years of his life examining the works of Shakespeare in detail, reading around the subject, probably watching play after play, travelling to see new interpretations being put on, and he can't quote me a single sonnet? Either he's incompetent, or he has some kind of memory problem.
Now extend to algorithms. I wouldn't hire a programmer like this Shakespearean scholar.
Yes, but the algorithms relevant to a domain are not Algorithms 1&2 class algorithms. For example, writing a network protocol requires knowledge of finite-state machines, writing a type-inference engine requires knowledge of unification, writing a thread scheduler requires knowledge of various scheduling algorithms.
Knowing the most efficient way to find all the anagrams in a text is not relevant to any of them, though. And yes, I was given that problem in an interview last week. I got it right, but the actual position involved more knowledge of architecture, assembly, and compilation than of Algorithms Guru Yoga.
Well, I hope you weren't asked that at Google, since it's a banned question. If you were, let me know privately :)
I assume you understand the goal of these questions is not to figure out if you can perform memorization of algorithms 1 + 2, but to understand how you think about problems and watch you do it?
Note that, at least at Google, they still try to hire generalists (in most cases), so you would still get that "kind" of question there, even for a job involving architecture and assembly. This is a deliberate choice.
Of course, I'm sure you also realize that a lot of the algorithms in "architecture, assembly, and compilation" were developed by people who started out more as "Algorithms Guru Yoga" folks.
Gregory Chaitin, who developed graph coloring for register allocation, was definitely not a compiler guy.
Only half the folks on the original conference paper for Static Single Assignment form for compilers were compiler people actually trying to solve that problem.
Robert Tarjan was not a compiler guy, but his scheme for computing dominators is still the main one used today, and his union-find algorithms (he also proved the upper bound on the already-known versions) are what are used for a lot of type unification algorithms.
The list goes on of "Algorithms Guru Yoga" folks who came up with the premier solutions to problems in the field you are talking about.
So when you say "novel solutions require actual domain knowledge rather than an n-levels-deep memorization of the entirety of your Algorithms 1 and 2 courses from university.", it's a bit hard to take that seriously.
Novel solutions require both domain level knowledge, and the understanding you learned in Algorithms 1 and 2.
Even the question you got about anagrams does not require any n-level rote memorization. It requires only the most basic understanding of data structures, and an understanding of how to design algorithms.
That's because this one is very widely known and shows up in books on interviewing at Google, etc., so it fails to be useful for evaluation anymore.
At the very least, I can assure you it wasn't Google asking that question. It was, however, a well-known BigCo who employ quite a lot of software engineers and are generally known as the best at what they do.
Now, in answer to the rest of your post: you are broadly overestimating the curriculum of Algorithms 1 & 2. That is, Algorithms 1 & 2 + knowing you need to allocate registers is not necessarily going to lead you to graph coloring.
> I assume you understand the goal of these questions is not to figure out if you can perform memorization of algorithms 1 + 2, but to understand how you think about problems and watch you do it?
I have heard this explanation; I simply don't see the evidence in its favor. Basically, if you were looking for real problem-solving skill, you would have to entirely abandon the 45-minute interview format, whether it's on the phone or in person.
That is, real problem-solving, as I've seen it and done it, tends to involve some kind of real problem domain (so you know what sort of solution is desirable), a real resource allocation (so you know what performance trade-offs to make), a less "academic exam" set-up (so you can experiment with solutions and see how they look), and more time to think than 20-35 minutes.
Interview environments are: relatively domainless, have no set resource allocation constraints other than "do well at Big-O performance measures", come as "pass/fail tests" rather than collaborative solving efforts, and are given with heavy time-pressure.
"My normal methods are useless here" -- those being to sketch out many solutions in a notebook over a period of hours or even days. Worse: the effect is almost to punish people who've had the audacity to spend significant portions of our careers in specific domains, from web-dev to type theory.
Also worse: interviewers tend to be unsatisfied until given their favored solution to their chosen question. In the example cited above, my interviewer was unhappy with my choice of a hash-table for a key-value store and actually asked me to go back and redo the problem "without using preexisting data structures". He ended up happy (I was called to do a second interview) when I told him we could make an insertion-sorted linked list of key-value mappings.
Why didn't I think of that in the first place? Because an insertion-sorted linked list performs worse than a hash-table as a key-value mapping data structure. A hash-table is, as far as I know, writing calmly and without pressure, the go-to key-value map for real programming. But most interviewers I've met are somewhat disappointed at seeing a hash-table, because using low-level data structures from the standard library is what stupid people do. Of course, it's also what experienced working programmers do.
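For what it's worth, the structure he wanted was roughly this (my reconstruction after the fact, not his exact code): an insertion-sorted singly linked list of (key, value) pairs, where both lookup and insertion are O(n) versus the amortized O(1) of a hash-table.

    class Node:
        def __init__(self, key, value, nxt=None):
            self.key, self.value, self.next = key, value, nxt

    class SortedListMap:
        # Key-value map kept as a linked list sorted by key.
        # get() and put() are both O(n); a hash-table does both in O(1) amortized.
        def __init__(self):
            self.head = None

        def put(self, key, value):
            prev, cur = None, self.head
            while cur is not None and cur.key < key:
                prev, cur = cur, cur.next
            if cur is not None and cur.key == key:
                cur.value = value               # overwrite existing key
                return
            node = Node(key, value, cur)        # splice in before cur
            if prev is None:
                self.head = node
            else:
                prev.next = node

        def get(self, key):
            cur = self.head
            while cur is not None and cur.key < key:
                cur = cur.next
            return cur.value if cur is not None and cur.key == key else None

    m = SortedListMap()
    m.put("eilnst", ["listen", "silent"])
    print(m.get("eilnst"))   # ['listen', 'silent']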
Again, this one wasn't at Google, but it's a fairly good example of how BigCo interviewers come off as trying to play a round of Who's Mister Clever? more than actually engage in a problem-solving process. Other favored tactics include dinging you for what programming language you use, or questioning the syntax of your white-boarded code (had this happen to me in the same interview, turned out I was right when we asked one of the interviewer's colleagues).
Here's a suggestion: wake up, look at the clock. Now make a decision about whether it's time to get up, then take action based on that.
This is new? This is news? This is helpful? This is of deep interest?
None of the above - it was a waste of time.
But it's not just this one thing. I see so many people intent on sharing their personal revelations that it's now impossible to find genuine insights that are more widely applicable than that one person's experience. This is the age of crowd-sourced "wisdom", where everyone thinks their own discoveries will apply universally. Anecdote over data, experience over research.
However, I am going to learn something from it. There's a lesson that I'm taking away for my own personal growth and edification. I'll share it here with you in case you want to learn from it as well: some stuff on HN (and other sites) is a complete waste of time. Be ruthless about what you read - you only have one life to spend. Time spent cannot be regained.
I agree with you that this is not HN worthy. However, I think you're taking the wrong approach to stuff like this. Obviously the best thing to do is not read it. However, as others have pointed out these are recipes. You'll need to do your own tweaking, testing, changing, and perhaps even rewrite most of it. This is how you get started, not where you jump off.
>So I still have to wake, look at the clock, make a decision about whether it's time to get up, then take action based on that.
Yes, no magic unicorns and fairies spreading pixie dust to wake you up at just the right moment, I'm afraid.
The message of the story, apparently indecipherable to you, is: don't go back to sleep if you happen to wake up shortly before the alarm clock. Just get up.
Now, if you woke yourself up at 3:00, as you ask in your previous comment, that doesn't fall within the "shortly before" window for a 6:30 alarm, so by all means go back to sleep. I mean, oh, those hard decisions...
>But it's not just this one thing. I see so many people intent on sharing their personal revelations that it's now impossible to find genuine insights that are more widely applicable than that one person's experience. This is the age of crowd-sourced "wisdom", where everyone thinks their own discoveries will apply universally. Anecdote over data, experience over research.
I think that what you write above ONLY APPLIES TO YOU. Perhaps you should do some more research into the matter, instead of shoving your personal anecdotes on to us.
> The indecipherable to you message of the story is: don't go back to sleep if you happen to wake up shortly before the alarm clock. Just wake up.
Yes, I actually got that. Of course, it took reading the whole article to realize that it was telling me that, so I'm left wondering why it was posted to HN.
>> But it's not just this one thing. I see so many people intent on sharing their personal revelations that it's now impossible to find genuine insights that are more widely applicable than that one person's experience. This is the age of crowd-sourced "wisdom", where everyone thinks their own discoveries will apply universally. Anecdote over data, experience over research.
> I think that what you write above ONLY APPLIES TO YOU. Perhaps you should do some more research into the matter, instead of shoving your personal anecdotes on to us.
OK, so I'll start my research. Has this blog post taught you anything? Was this worth the time you spent reading it? I'll go ask a few more people, and if I get statistically significant results I might post about it.
>Building responsively is an investment for the future. You can bet good money that there will be more and more diversity of screen sizes as we keep going.
I find responsive design a cheap copout, jack of all screen resolutions, master of none.
Just create a good desktop design, and make sure it plays on a modern smartphone with tap-to-zoom. Tablets will work automatically.
I don't see "more diversity of screen sizes" affecting this at all. New screen sizes will just be bigger than a 2010 iPhone/Android phone, and those can play well with desktop-sized designs anyway.
And even if you have a 30" monitor, you still don't need more than 1000-1200 pixels of webpage real estate anyway. For one, people can't easily read or follow freakishly long lines of text.
>When you use this manipulation to get your way with your kids, the kid watches you, and you find yourself the lab rat of a psychological manipulator far more capable than yourself. And the kid uses tricks you never heard of to shape your behaviour. Which causes a feedback loop of who can trick the other into doing what they want better.
It's not a manipulation or a trick.
It's a "reward system". You get X only if you do Y. Period.
It has worked wonders for ages. Still does in most parts of the world. The US has a messed up system that ends with annoying, over-self-indulged, entitled children and adults.
Walk outside. My god, yesterday some guy didn't like me walking in the crosswalk when he wanted to make a right turn, so he swerved into the oncoming traffic lane to make his right (and cut me off). This, versus waiting 5 seconds.
Annoying, over-self-indulged and entitled. Also damn dangerous.
Where I live in America, Seattle, has an amazing reputation for pedestrians being highly obedient of walk signals and drivers being highly observant of crosswalks (and generally polite).
But in fact everyone I know who commutes on foot has been hit by a car in the last 5 years. No one's been hurt badly, thankfully. (I don't think anyone's even gone to the hospital in that time period - unlike the bicycle commuters I know.)
I know that's a small sample size - and a poorly chosen one - but it still suggests bad things.
Well, yeah. Human beings are not equipped to operate motor vehicles safely, and there is a mountain of evidence to support that fact. It doesn't have anything to do with parenting, though.
Road rage? People going on rampages with guns and killing tens of innocents? Fatso kids screaming in restaurants and behaving badly in public? The most BS self-entitled complaints in restaurant/hotel/etc. reviews, always written by Americans? People posting pictures of their cats? Do any of these things ring a bell?
Are you asking if I've heard of these things, or if I know their relative prevalences in different countries and what that's supposed to mean? Because yes, and no, although my intuition is that the road rage, shootings, and racist YouTube comments look way worse other places.
That you think that unsourced opinion piece constitutes "secondary corroboration" indicates that perhaps you do not, in fact, understand when citation is needed.
Tens of thousands of top-notch literary critics, philosophers, and intellectuals beg to differ...