1. Become aware that something I'm interested in learning exists
2. Watch and skim a bunch of videos at 2x speed around the idea of the thing (usually keynotes or videos created by the author) to get hyped up
3. Go through the documentation's getting started guide while following along
4. Immediately start building something with the new thing
Treat everything beyond this as question-driven development[0], or basically JIT (just-in-time) learning.
For context, the first 3 steps usually take no more than a weekend or a few days.
I do this loop all the time and it hasn't really failed yet for learning all sorts of things (5+ programming languages, a bunch of stuff about Linux, Ansible, Docker, Vim, Terraform, Kubernetes, video production, and the list goes on). That's learning things very quickly, at a level where you can comfortably bill out freelance work or get employed.
The trouble I've found with JIT learning and project driven learning is that it doesn't cover things I don't know I don't know. Often, there's more than one way to solve a problem so I'll solve it with what I already know how to do, meanwhile there's actually a simpler and more efficient solution that I would know if I'd used traditional learning resources ahead of time.
Yep, this topic is covered in the blog post; search the post for "Expectations of Question Driven Development".
The TL;DR is that until you write a lot of code you'll have no idea what you're doing, so it's fully expected you'll be writing, refactoring, and deleting code as you go. It's only after you've taken that much action that you discover the simpler and more efficient solutions.
Front-loading all the reading and research isn't going to help you get there sooner. IMO the best time to read a book or take a course on a subject is months after you've built something real, because then you can apply and understand all of the efficient solutions and end up with a bunch of takeaways to improve what you've done, things that might take you 1-2 days to apply after you've read the book. This is also covered near the end of the blog post, btw.
If you ever want to revisit the debate about SWEs not being engineers, here's an example.
It's typically not feasible to "wing it".
Cost, time, and effort make it so you need to be correct the first time. Mistakes do happen, but since I switched from design engineering to SWE, I'm allowed significantly more of them. These can be inefficiencies, incorrect outcomes, bugs/errors, etc. No big deal, because I can recompile. You can't do that with half a million dollars in steel molds and production on a set date.
Additionally, in engineering I'd have 4 layers of management review and sign-off. With programming, it's subjective code reviews.
Inexpensive/inconsequential mistakes simply enable bigger mistakes to be made. Systems grow more complicated as long as they're allowed to.
You can recompile away a syntax error, but even in software, you can't backtrack on things like choice of language / tech stack / architectural design decisions without wasting millions of dollars.
For traditional industries, there's (presumably) some best practice or de-facto standard for many things, since most things have already been invented in the past.
For software, you have contradicting best practices with people arguing convincingly both ways. Everybody has their favorite tech stack and some end up being fads. New technologies come out every couple years, and if your project is successful enough and runs long enough, you either get stuck with old tech or spend millions of dollars figuring out an upgrade path. Sometimes the "upgrade" is actually another fad, but sometimes the upgrade is crucial to your future business.
Not sure whether these things fall under "engineering", but I don't think it's inherently less "hard" than what traditional engineers do. Sure, your junior dev is not going to do this, but the people who make these decisions are often called software engineers ("senior", "lead", whatever).
This seems like gatekeeping. You’re describing the awesome power of software: it can be updated and infinitely replicated and is therefore more tolerant of mistakes. Just because the field has this boon doesn’t mean that those working in it aren’t “real engineers” or whatever.
Suppose technology was sufficiently advanced to make steel mold production nearly free and instant. Are the mold designers no longer engineers then? (Watch out for 3D printing, by the way...)
This is an interesting topic. "Engineer" is a word that people respect. "Developer" is a meh word. You'd rather be a Software Engineer than a Software Developer, even though some companies call you the former and others the latter despite your responsibilities and everything else being exactly the same.
Laypeople respect classical engineers more because they can see that they are building something tangible. And many definitions of "engineer" seem to exclude software, along the lines of "a person who designs, builds, or maintains engines, machines, or structures."
Fun story: when I was doing a medical checkup for work, the old lady saw I was marked as "software engineer", didn't like it, and asked me for an alternative.
I believe this confusion arises from conflating the valid definition of the act of "engineering" with another, equally valid definition of the profession of engineering, which, like most professions, requires some sort of formal degree.
You could also be winging it when you are inventing something or building something experimental, provided the thing isn't meant to last several years. E.g. when inventing the light bulb for the first time, you would be iterating on it and "winging it".
Learning fundamentals doesn't mean you don't code as you learn; it just means you code to learn rather than code to finish a project. Coding to learn is deliberate practice and is the best way to learn fundamentals. Just learning whatever you need to get a specific project done means you will always have a lot of holes; you'd likely get more done in the medium to long run by practicing and learning things properly from the beginning instead.
The goal isn't to code to finish the project and ignore any questionable code you've written because "it works".
It's to write a lot of code, which is practice in the end. It's expected you'll be doing an ongoing combination of writing code and looking things up as you go, but you're not looking for the first working solution to copy/paste before moving on. The topics you end up researching will lead you to write good code if you put in the effort.
There have been plenty of examples of this where I spent hours going over a few functions because I wanted to make sure I was doing things nicely, and the journey from the original version to the end version led to learning a lot. That might have meant reading through 5 pages' worth of search results, skimming as many videos as I could find, and maybe even asking an open question somewhere, which yielded code examples written by people who have been working with the language for years.
You can then use all of that as input to guide your code. Throughout the process I may have written a few versions and ultimately landed on one based on what feels good when using it, isn't too clever, is easy to test, easy to understand, runs quickly, etc. Basically all of the properties that make code good.
I use a combination of katas (repeating a task multiple times - though not necessarily with the exact same code - to polish my understanding of how to do the task) and diving into rabbit holes deliberately to stumble upon things I don't know I don't know.
I'd like to implement something like this. What sort of tasks do you define in your katas? I'm thinking of adding them to an Anki deck to repeat them regularly.
I did this to learn JS and React Native and it let me get started quickly, but wow was my code horrible. It was a good year before I stopped doing really stupid things in JS, because I only ever had learned enough JS to "Get Things Done".
This was in stark contrast to how I learned Java and C#: I deep-dived into the internals of the languages and their runtimes (and at one point even worked on a version of the .NET CLR!), and all the code I wrote was in a mindset where I was cognizant of exactly what the runtime was doing to my code, how the memory layout looked, how the GC works, and so on and so forth.
I would've spent a year being 100% more productive if I'd spent just another week or two learning JS properly.
Any advice for dealing with tool choice paralysis?
I've been interested in getting into video editing and music production, among other things, but I tend to get stuck trying to decide between different tooling and it saps my drive. I get the idea that I should just pick one and run with it, but get stuck on trying to pick the "best". Which I guess I tend to define nebulously, since some tools might be easier to learn but others are more powerful/featureful, etc.
This is probably simpler than you'd want to hear, but there is no such thing as "best". If you search for the best of anything, you'll find tons of posts/articles/whatever, all suggesting different tools are the best. That's the beauty of the internet: you'll always find something to substantiate your assumptions, or in this case make you doubt them.
I can only speak to your music production ambition, but I've spent way too much time looking for the best DAW, the best compressor/effect/synth plugin, the best hardware. All those things are distractions. See which ones are popular and available for your platform and pick one. Learn to produce music first; it doesn't matter what tool you do that in, and if a year from now you decide you don't like your tool, you can still change it. It's like worrying about which language to program in without even being able to write simple programs. Just pick one.
Even if you like one, you should (_after_ you've finished learning the first) also go through the other alternatives, for perspective. Even though you've found one that works for you, that doesn't mean another won't click even better.
The only way to truly evaluate the choice between tools is to be familiar with all of them. But yes, just start with one arbitrary tool that seems fine instead of trying to learn them all before deciding. In the end, though, you should know something about all of them.
My rule of thumb is: if it's free, choose the most popular one; if it's paid, pick the perceived second-best one (usually better value for your money).
Version control system? Git (most popular)
Linux Distro? Ubuntu (most popular)
CPU? AMD (perceived second best)
Rideshare? Lyft (perceived second best)
DAW? FL Studio (perceived second best)
This system has generally worked out well for me. Note that the popular choice is just to get me started because it usually has the best troubleshooting/community resources, years later when I'm more experienced sometimes I'll switch to something less popular (for example I no longer use Ubuntu in favor of openSUSE Tumbleweed).
Man, honestly, I'm not too sure about this. Picking the "second best" is pretty random and can affect your choice of stack/methods in the long run.
Take the DAW example: I chose FL Studio to start with for the same reason, the "best" one being pretty complicated.
Unfortunately, when I did switch to Ableton and saw what the industry standard was, my whole year of progress pretty much went to s*t because of how oversimplified FL is.
Just an opinion.
I remember going through this and ended up learning Nuendo and Ableton. Then I realised I knew all this technical stuff without a grasp of the fundamentals. I reckon if I'd had that, I'd have been extraordinarily more productive in FL Studio than I ever was in Ableton without it.
Those might not be optimal choices—or they might be—but they are very unlikely to make the difference between project success and failure. Anything you can do with Vue you can do with React, and vice versa, with a modicum more or less effort, and similarly for Vim and Emacs.
On the contrary, if you choose MooTools or YUI over React and Vue, or Notepad or Microsoft Word over Vim and Emacs, that might cause project failure.
(I prefer Emacs over Vim for programming, but it's probably easier to get started with Vim nowadays as a new user. SO has 17208 #emacs questions and 26937 #vim questions.)
It does depend on the situation, and how easily you are able to change tools down the road. However analysis paralysis can be a never ending cycle, so at a certain point you need to accept that you just need to pick one.
If it's a low-cost, easy-to-switch tool, I'll force myself to stop over-analyzing and just dive into it. There's no better way to learn a tool's shortcomings than actually using it.
A high-cost tool is a lot more difficult. Personally, I assign myself a deadline to pick the tool (e.g. this week I'll research, next Monday I'll purchase) and then I must follow through on that day. Otherwise I'll just keep over-analyzing every single comparison until neither tool looks attractive.
There are situations where you're going to pick the wrong tool. It happens. For example, I started music production in Logic Pro X, hated it, and ended up switching to Ableton. I spent a lot of time researching the two, but it was only once I started using them that I realized which tool suited me better.
Very literally: just pick one, even if it's at random.
People have this awesome ability to work around all kinds of inefficiencies and stumbling blocks. If you had nothing but pen and paper, which you then submitted to another person who somehow fed it to a machine, you would still find ways to achieve what you want efficiently.
Remember what people were able to do with punch cards!
Once you have a better understanding of the thing you're learning you'll be able to make your own reasoned choices about tools.
It's my biggest weakness, so it's definitely not like I have this fully under control.
The problem I have around tool paralysis is mostly related to permanence, especially when each choice is technically really good but has its own known flaws.
When the thing I'm implementing doesn't take long, I'll fearlessly try a bunch of tools out, implement the same project in each one, compare my results, and then pick one based on what feels best to me.
Editing a specific video is a great example of something with little permanence. You can try out a few audio/video editing tools in a few days while you actively do your editing, then pick the one that best meshes with your brain and stick with it until it becomes a problem. Each video might take a few hours to edit from beginning to end, and is a self-contained unit for the most part. That's what makes it feel pretty temporary to me.
But for me, building a long-lived web app, or the idea of a SaaS app, is one of the hardest things to pick a language or web framework for, because you can easily talk yourself into thinking "well, this is potentially my life; it needs to be the right tech choice or I'm stuck with the wrong decision forever".
But this is an illusion. I know it's an illusion, I've proven to myself multiple times that it's an illusion, and there are countless examples online showing it's an illusion, but damn it, this permanence magician is really good, so I often find myself going back and forth, implementing bits and pieces in one thing but never feeling motivated to finish because I always think there's something better just around the corner.
With that said, the thing that pushes me over the edge is usually the act of making a decision and running with it, knowing full well it isn't a perfect solution, because a perfect tool or tech stack doesn't exist. It's just picking the thing that checks off the most boxes of what you like and prefer and then embracing it.
In the end, nothing beats your own personal experience. If 7 people say a tool sucks and 3 people say the same tool is amazing, that doesn't mean the tool sucks. It means 3 people found a tool they really like that's working for them. Don't let reviews or others fully control your decision. Absolutely take their feedback into account, but never follow the crowd for the sake of following the crowd.
So, there are two possible reasons it's hard to make a choice.
One is that you're lacking crucial information that would make it clear one choice is much better than the other. For example, if you've never programmed before and you want to make some interactive web pages, it might not be obvious whether Python, C++, or JavaScript is the better language to learn. Picking Python would be disastrous, but you don't yet know enough to understand why. In this situation, you have several possible courses of action:
1. Ask someone who knows about the problem. Better, two or three people. Follow their advice.
2. Snoop on conversations between people who are doing what you want to do. Imitate what they're doing until you know enough to have opinions of your own. Nowadays with the internet this is usually pretty easy, although the web forums where leading-edge hardware designers chat mostly require not only knowledge of Chinese but also actual mainland Chinese friends to vouch for you in case you start posting stuff about Tiananmen Square and the Uighurs. This is not a new problem; my aunt had to learn German to get her chemistry degree.
3. Try several courses of action briefly, then abandon the ones that were least fruitful.
The other possible reason is that, although you basically have most of the relevant information, the difference between the possible choices is very small, so it's hard to tell which one is really best. You have become Buridan's Ass; toss a coin before you starve to death in the midst of plenty.
I do this same loop but I’ve given up trying to learn anything quickly.
When it comes to tech I don’t use for my day job I’ll do an iteration like this every 3-6 months and after 2 or 3 years I’ll have become comfortable with it.
The nice thing is that while I won’t remember everything, each loop feels easier and I can go a bit deeper into the subject because the total number of brand new things I have to wrap my head around is on a downward trend.
> If you only follow guided resources, you'll wind up in tutorial hell.
I disagree very strongly here. You can wind up in tutorial hell if you only read surface level small to medium sized posts.
My "career superpower" is that I'm always working my way through one textbook or another. You get serious depth this way and avoid getting stuck with a surface level understanding. Plus, with depth you can end up with really strong fundamentals, which make learning the next thing that much easier.
I know a lot of people who read tons of blog posts on topics but never crack open a textbook, or people who watch youtubers explain academic papers yet never open up the actual paper.
Maybe my "learn stuff quickly" trick is... don't. Spend a decade accumulating deep knowledge slowly, and it'll add up. The road to "tutorial hell" is paved with blogposts and missing fundamentals.
Although a lot of computer books in the last couple of decades have bought into the "let's teach the language through building a project" structure, which often means they skip key stuff or shoehorn it into the project in ways that don't make sense.
The term "career superpower" is great. I find that the ability to relate a current problem to a large collection of notes, taken on a daily basis, works extremely well.
A lot of the information for these notes comes from in-depth articles and blog posts, and since they are online resources, it is trivial to add them as notes.
Microsoft OneNote has been the go-to tool to keep track of all the notes.
Textbooks do have their place for hard problems, e.g. algorithms or AI. But these problems tend to be hard to solve quickly, if at all.
> The road to "tutorial hell" is paved with blogposts and missing fundamentals
Even worse is that blog posts can have incorrect, bad, or even dangerous information. If you've just found it in a search, you have no idea of the skill level of the author.
I recently came across a post about a JavaScript WYSIWYG editor which stated that you didn't need to worry about sanitizing the HTML output from it because it took care of that for you. An attacker could of course send malicious data straight to the server or manipulate the client code however they want, so without also sanitizing on the server (also a hard problem), this opens a huge security hole. Ten years ago I probably would have naively followed this and thought everything was okay.
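To make the failure mode concrete, here's a minimal sketch of server-side sanitization in Python using the bleach library; the tag and attribute allowlists are illustrative choices, not a recommendation for any particular app:

```python
# Minimal sketch: never trust HTML from the client, even if the
# editor claims to sanitize it. Re-sanitize server-side against
# an explicit allowlist before storing or rendering.
import bleach

ALLOWED_TAGS = ["p", "br", "b", "i", "em", "strong", "a", "ul", "ol", "li"]
ALLOWED_ATTRS = {"a": ["href", "title"]}

def sanitize_submission(dirty_html: str) -> str:
    # strip=True drops disallowed tags instead of escaping them
    return bleach.clean(dirty_html, tags=ALLOWED_TAGS,
                        attributes=ALLOWED_ATTRS, strip=True)

# A <script> an attacker POSTs directly, bypassing the editor entirely,
# is stripped; only its inner text survives as plain text.
print(sanitize_submission("<p>hi</p><script>steal()</script>"))
```

(Sanitizing rich HTML well is still a hard problem, as noted above; the point is only that it has to happen on the server.)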
Tutorials are often useful, but they also cannot be trusted, especially without other resources. I feel I've learned the most from reading in-depth sources like official docs, specs, RFCs, or the source code of high-quality, well-maintained libraries. It can be harder at first, but I learn things I wouldn't have thought to look for, and there is something of a consistency in presentation (at least by comparison) that makes each additional one easier to go through.
> It's often said that the internet has democratized education: the sum of human knowledge is only a Google search away!
Mediocre to bad information is a Google search away (unless it's Google Scholar). Generally speaking, published knowledge is orders of magnitude more valuable; don't waste your time and pollute your thinking with what you find on the open Internet.
To be fair, I've skimmed a lot of tech books that were absolute trash. I would say the level of quality of blog posts and books is about the same; it's just easier to filter out trash books by reading reviews. (Unfortunately, I've worked in a lot of niche areas where it didn't matter what a book's review score was; I just needed some resource to push me further.)
What types of books do you recommend? E.g. the ones I've found most useful are from the CS curriculum; the industry ones are hit and miss, even the O'Reilly ones.
Oh man, it's tough to answer this one well. I just counted up the books I've worked through on my shelf. After college, I've gone through about 15 completely, and partially worked through another 15 or so. Plus a couple reference texts that have been handy. Plus a huge number of papers. They take up nearly twice the space my old college texts do, which is kind of wild to look back on!
Are you curious about anything in particular? Or if you're just wondering what kind of books I'm talking about, highlights include CLRS, Probabilistic Graphical Models, Mostly Harmless Econometrics, and Characters & Viewpoint. Next up on my list are more creative writing books and a couple theoretical stats books (I'm working towards a book on semiparametrics, but first need a better foundation to follow a book I've been recommended).
It feels like everyone either hasn't read the rest of the article, or conveniently ignored the "Mixing guided and unguided learning" section, then went on to cherry-pick quotes to trash the article. The irony...
"With software development, though, mistakes are free! If we make a mistake, we can tab back to our editor, change the code, and try again. We even have helpful error messages that can (sometimes) point us in the right direction. This is an incredible luxury, and not one that we take advantage of enough."
This is definitely underrated, and partly why I think computer science education is still sort of broken at some universities. Trial and error should be embraced as a part of the process of building things. Instead, many curricula use exams to test your knowledge without any sort of debugger or console; you just have to go off what you know, getting things as close to correct as you can on the first pass.
All evaluations in CS should be built around some type of project-based work; it's a huge luxury other fields don't have. Students studying architecture or mechanical engineering, for example, simply can't build a functioning bridge as a test, due to the cost of raw materials and how fatal an error could be (that would be really cool, though). It's an advantage that we literally can build the bridge equivalent as a project in the CS world.
No debugger is going to help you understand most of the concepts in a good CS curriculum. It isn’t coding school, it’s learning how computers work at the most fundamental levels.
Incorporating the "learn X the hard way" approach, which is about typing the code rather than just copying and pasting it, also aids quick learning and gives the lesson more staying power. The best part is the mistakes you're more likely to make by typing, which force you to look more closely at the original code so you can retype it correctly. I remember much more that way. Beyond that, making such mistakes may draw more attention to the object in error, and consequently you learn more about that object and how it fits into what you're trying to do.
Came here to mention typing vs. copy/pasting. When going through a tutorial, don't copy and paste; type out the code. I turn off code completion in my editor as well.
Diversity of learning also helps me: if you're thrashing on a problem without making progress, take a break to focus on something else, even another skill you're learning in parallel, like cooking, writing, or something physical. It's almost like background processing: you load up the context, let your brain chew on the problem a bit, and when you return to it a solution will usually present itself.
When it comes to software, one approach that has worked really well for me is "Build your own X". Re-implement a tool from scratch (minimal feature set, not production-quality code) and learn how it works under the hood.
I've done this myself for Git, Docker, Redis, SQLite, Shell and more - it requires patience, but I've found the approach very effective.
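To give a sense of how small the first step can be, here's a sketch (Python is just my choice for the example) of the very beginning of a from-scratch Git: computing the object ID Git assigns to a file's contents, which is a SHA-1 over a tiny header plus the raw bytes.

```python
# First step of a "build your own Git": Git names a blob by hashing
# the header "blob <size>\0" followed by the file's raw bytes.
import hashlib

def git_blob_id(data: bytes) -> str:
    header = b"blob %d\x00" % len(data)
    return hashlib.sha1(header + data).hexdigest()

# Prints the same ID as `echo -n hello | git hash-object --stdin`
print(git_blob_id(b"hello"))
```

From there, zlib-compressing the object and writing it under .git/objects is another small step, and each step teaches you a piece of the real tool's design.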
Walk into any bookstore, and you'll see how to Teach Yourself Java in 24 Hours alongside endless variations offering to teach C, SQL, Ruby, Algorithms, and so on in a few days or hours. The Amazon advanced search for [title: teach, yourself, hours, since: 2000] found 512 such books. Of the top ten, nine are programming books (the other is about bookkeeping). Similar results come from replacing "teach yourself" with "learn" or "hours" with "days."
Because programming languages are not the only thing I want to learn. In fact, I know that the time required to really learn all the things I'd like to learn far exceeds my lifetime. So if I could learn a bunch of them quickly and then decide which ones to focus on later, that would be awesome.
I agree, but the concern is that desperation is not a good guide. I appreciate that not everyone has the privilege of sitting comfortably for a few years immersing themselves in the knowledge of a deep subject, but I fear that without it, the goal of earning a decent standard of living won't quite be achieved.
What I mean is that I've had programming jobs that didn't need a lot of deep knowledge on my part and they weren't jobs I wanted to keep. I wanted to keep the jobs where I was valued for knowledge and skills that I had worked hard to get. Those jobs gave me a decent standard of living, including the improvement to quality of life gained by the appreciation of one's value by one's colleagues, and the self-esteem and confidence that this provides.
I thought this was going to be about how the brain functions, as in how new neuron connections are created and then reinforced. Although the author talks about his approach to using guided and unguided/independent learning, it could be reduced to: 1) use guided learning to establish the initial connections in the brain, and 2) do things independently as a way of recall, which strengthens the connections and causes new ones to form. A third option would be to teach others; nothing like having to explain something to someone to trigger the recall of material and exercise those neuron connections.
I struggle with learning new domains or skills where I don't know what to focus on or where to start.
Example: I've always been fascinated by space and have been determined to invest more time into this hobby by learning more. However, with such a giant array of disciplines (astronomy, cosmology, physics, astrobiology), I don't know where to focus, so I read a bunch of random information and feel like I don't retain 90% of it.
It's easier, in my opinion, to learn a skill that has more of a practical application (like the author's Blender 3D modeling education).
I think the fact that it slides in slowly is what makes it creepy. I was scrolling/skimming the article and all of a sudden I saw movement to the side and it freaked me out.
I think learning is a personal thing. It will definitely differ from person to person based on their passion.
Off topic - I like the template/theme used on this site. Initially I thought it could be a Hugo or Jekyll theme, but I couldn't find more details. After some time using it, I felt this was a custom-made site, animations and all.
> Off topic - I like the template/theme used on this site. Initially I thought it could be a Hugo or Jekyll theme, but I couldn't find more details. After some time using it, I felt this was a custom-made site, animations and all.
The site is indeed custom built on top of Next.js. You may be interested in the "How I Built My Blog"[^1] article on the same site.
Immersion into the topic and developing my opinions and taste first is what makes learning things quicker for me. I can be my own guide if I know the territory and have my preferences, and if I know what to do before I learn how to do it. Key for me is to spend enough time exploring a topic and getting passionate about it.
One idea I've been using recently is to start writing unit tests around a piece of code or functionality as I'm trying to use it. That technique narrows the scope to a single thing, and I'm forced to be explicit about what I expect. That, and reading the source code, are my go-tos.
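For what it's worth, a minimal sketch of what such a "learning test" might look like (pytest-style, against Python's standard library purely as an illustration):

```python
# Learning tests: pin down, in executable form, what I *expect* a
# library to do while I'm still figuring it out. If my mental model
# is wrong, the failing assertion tells me exactly where.
from datetime import date, timedelta

def test_adding_a_day_rolls_over_the_month():
    assert date(2023, 1, 31) + timedelta(days=1) == date(2023, 2, 1)

def test_isoformat_zero_pads_months_and_days():
    assert date(2023, 5, 7).isoformat() == "2023-05-07"
```

Run them with `pytest`; when the library surprises you, the failing test documents the surprise.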
And an obligatory counterpoint: the course is incredibly shallow.
1. Pomodoro
2. Test often
3. Just start learning, you'll start liking it
There. You've just taken the course.
Of course this comment is reductionist. But to me, the course could be a medium-length article with the same effectiveness (but much less feel-good, "you can do it" content).
Can you please share more details on how it changed your life? Are you a fast learner now? Can you also draw the contrast between your earlier and current learning approach?
I do think that all of us have a different take on how we should learn stuff quickly. Some of us learn through visualization; some of us learn by doing the stuff itself. It's just a matter of finding the best method for you to learn different things.
[0]: https://nickjanetakis.com/blog/learning-a-new-web-framework-...