1. Check that the code is addressing the problem at hand and is fulfilling a business need (mostly making sure it’s linked to a reasonably-written ticket)
2. Align style and make code look consistent across repos/teams (which includes pointing out similar work which could be reused)
3. Force other people to read and understand the change, so the original author isn’t on the hook for all support/oncall questions
4. Highlight hard-to-understand sections that might be unclear to someone newly-arriving to the code, suggesting those parts need more explanation or a simpler implementation
5. Copy editing on error messages and docs to remove typos/passive-aggressiveness/make important information easier to see
6. Find bugs by having a human run static analysis in their head (“this will race/deadlock”) and provide pointers to resources to help the reviewee learn (“see <some link> for why and how to avoid it”); there’s a sketch of the kind of race I mean just after this list
7. Human-run linting (if the automatic linters miss/mess up stuff)
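To make (6) a bit more concrete, here’s a minimal hypothetical sketch (Go, everything made up) of the sort of thing a reviewer might flag with “this will race”:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var wg sync.WaitGroup
        counter := 0 // shared mutable state, no lock or atomic
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                counter++ // data race: unsynchronized read-modify-write from many goroutines
            }()
        }
        wg.Wait()
        fmt.Println(counter) // often prints less than 100; `go run -race` reports the race
    }

A review comment like “counter++ from multiple goroutines needs a mutex or atomic, see the memory model docs” is the teaching moment I mean, rather than just “this is wrong”.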
I’ve also seen it used as a way to bully other less-knowledgeable devs: drive-by nitpicking and issue-raising from someone who wasn’t on the original list of reviewers and who leaves criticism without any constructive feedback (“this is wrong”)
Rarely have we spotted bugs in code review: that’s what tests, oncall, and a robust rollback mechanism are for
mostly making sure it’s linked to a reasonably-written ticket
This one seems to come up quite often recently, but to me sounds an awful lot like "satisfying the urge for process and structure". Yes, code people write in professional environments should generally be satisfying user/business requirements, but that's something that can be discussed periodically, perhaps with your manager -- it doesn't seem like something that needs somebody looking over your shoulder all the time.
Ticketing systems can be useful tools, especially in environments with lots of user-submitted requests, but I'm suspicious of trying to tie every single piece of work to a ticket. That seems rather like a tool which should be your servant becoming master.
Edit: if you can't have a sensible conversation, potentially with a not-especially-technical person, about what value you've created in the past few months, then you might have a problem. One of my worries about ticket-focussed environments is that it's possible to focus on the small-scale and lose sight of what you're creating.
The point of the tickets is to have traceability from requirements down to the code. In some industries, this is mandatory so they can satisfy e.g. safety standards.
Sure, it's process which is mandated in some fields.
I still think it's net harmful, and can lead to over-emphasising the small/local vs understanding the application and its value as a whole.
In cases where "you really need to understand requirement OBTUSE-1234 before you even think about touching this function" really does apply, a comment in the code addresses this much better, and will be seen by future developers even if they aren't religiously reading the VCS blame output for everything they edit (or if it's been clobbered by reformatting or other changes).
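For instance (a hypothetical function; OBTUSE-1234 is just the made-up ticket ID from above), the requirement travels with the code:

    package billing

    // Per requirement OBTUSE-1234: the downstream billing system expects
    // truncation toward zero, not rounding. Don't "simplify" this to
    // rounding without re-reading that requirement first.
    func toWholeUnits(cents int64) int64 {
        return cents / 100
    }

The ticket is still there for anyone who wants the full history; the comment is the part a future maintainer will actually see while editing.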
A lot of these points are highly subjective and therefore dangerous points of friction. If you're going to start having a discussion about the "business need" when some presumably competent developer needs something they have already written merged, there's going to be a problem and it's not the code in question.
Again, I'm presuming competence. Code reviews can help with tutoring new arrivals, but at some point they need to graduate to a responsible, self-directed actor. At that point, chances are their code reviews will get signed off with a superficial "lgtm" anyway.
This is mostly to make it easier to understand why the hell someone did something in a particular way and whether it’s safe to change it to a more contemporary style years later. The reviewer should trust the author and the business requirements that led to the change being created, and just check that it’s linked to a ticket that would make sense to that future maintenance programmer.