One of the key skills of management is delegation. But delegation is not simply passing work on to your employees. Have you ever given someone an assignment only to have it come back late and nothing like what you wanted? Maybe the employee isn’t as good as you, doesn’t pay attention to detail like you do, or doesn’t work as diligently as you. These things may be true. More likely, though, the problem was the way you delegated the assignment.

If you want it done right, learn how to enable your employees to do it right. If you want it done right, you don’t have to do it yourself. How? You must be organized. You must plan. And when delegating, you must be very specific. You have a specific outcome in mind for an assignment, but your employees can’t read your mind. You must explicitly describe the feature in detail, in writing. Verbal communication is a horrible way to communicate requirements. For big projects, break the assignment down into smaller tasks and provide meaningful checkpoints where you can receive feedback and give guidance. The more knowledgeable and senior the employee, the less guidance you need to give. For new hires, you should make the decisions. Once the new hire has experience, let them make decisions but keep you informed, so you can intervene or overrule where needed. For trusted and experienced senior engineers, decisions can be made and acted upon solely by the engineer. This reliance on employees making their own decisions is pure delegation, freeing up your time for other activities, and it is the ultimate goal of delegation. It also gives employees autonomy, which is a key factor in employee happiness. Finally, the importance of the task must be clearly communicated, preferably with a defined due date.

As manager, with proper preparation and organization, assignments with specified features, a precise priority and due date, and the appropriate amount of interaction based on the employee’s ability, you should be able to delegate effectively. If you feel like you have to do everything yourself to achieve the quality you desire, you need to change the way you delegate. As boss, be humble and take responsibility. If the outcome is not what you desired, it is likely that you are more at fault than the employee. Delegation is a sign of a good manager: it allows people to grow, and it allows the organization to grow.


Timing and Sizes

A break from process. I find this table to be quite useful for comparison and as a rule-of-thumb reference.

0.5 ns Fetch from L1 Cache
1   ns Execute typical instruction
5   ns Branch misprediction
7   ns Fetch from L2 Cache
25  ns Mutex lock/unlock
100 ns Fetch from RAM
250 ns System call overhead*
500 ns Context switch**
20  us Send 2KB on network
50  us Fetch from SSD
250 us Read 1MB from RAM
500 us Round trip packet send in same data center
5   ms Fetch from HDD
20  ms Read 1MB sequentially from HDD
150 ms Round trip packet to Europe

* Linux caches the results of some system calls in user space, e.g. getpid, which then takes ~5 ns.
** Context switch time has many dependencies and may take 10x longer. Switching to a new process is slightly more expensive: a new process causes a TLB flush and thus L1 cache misses (the L1 cache stores virtual addresses).

64  B Cache Line
4   K RAM Page Size
64  K L1 Cache
512 K L2 Cache
8   M L3 Cache
8   G RAM
1   T HDD
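These rule-of-thumb numbers compose nicely for back-of-envelope estimates. A small sketch, units and arithmetic only, no actual measurement:

```python
# Back-of-envelope arithmetic with the rule-of-thumb numbers above.
# All values are in nanoseconds.
US = 1_000          # one microsecond in ns
MS = 1_000_000      # one millisecond in ns

read_1mb_ram = 250 * US   # 250 us per MB from RAM
read_1mb_hdd = 20 * MS    # 20 ms per MB sequentially from HDD

# Scanning 1 GB sequentially:
ram_scan_ns = 1024 * read_1mb_ram   # ~0.26 s
hdd_scan_ns = 1024 * read_1mb_hdd   # ~20 s

print(f"RAM scan: {ram_scan_ns / 1e9:.2f} s, HDD scan: {hdd_scan_ns / 1e9:.1f} s")
```

Two lines of arithmetic and you know a full-table scan that fits in RAM is roughly 80x faster than one that hits the disk.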

Fewer Mandatory Processes

Expanding on last week’s post about no mandatory code reviews, let me describe another scenario that I frequently run into, one that also argues for common sense over mandatory process.

You are making a change or fixing a bug and, in the process, you discover something unrelated that is broken, either by testing the product or just by looking at the code. You make a quick code change to see if your thinking is correct, and it turns out that yes, you just found and resolved an issue unrelated to the one you were working on. At this point, you’ve found an issue, root-caused it, made a change, and tested it. Most likely, with a minute or two more of investigation, you can be confident that your fix is correct and check it in to trunk. That’s right, two more minutes and you are done with this bug. Write a meaningful check-in comment to git and go back to concentrating on your original issue. However, most companies employ mandatory processes: all changes must be on a branch, all changes must be tracked in JIRA, and all changes must be code reviewed. Thus: copy your change, create a branch, paste your change, rebuild to verify nothing broke, create a JIRA ticket, describe the issue and resolution, fill in all the mandatory fields (most of which are probably useless at this point since you’ve already resolved the issue), create a code review, go do something else while you wait for all the reviewers to review the changes, come back to the change, merge it back to trunk, rebuild to verify nothing broke, and commit. All in all, what could have taken a few minutes now took an extra hour of process. Ask yourself, was that extra hour really worth the ROI, i.e. backtracking to fill in the process items for an issue already root-caused and fixed? I do not think so. Plus, as mentioned here, there are the side effects of task switching (it slows progress and creates bugs), and the common case of engineers skipping readability fixes (incorrect comments, poor variable names) because the process is too cumbersome.
It’s also common for engineers to bundle unrelated changes into a single commit because of the heavyweight process, though git has a good workaround for this (subversion does not).

Overall, mandatory process takes a toll on software development. It’s a process-over-people mentality. For the reasons already stated, the result is lower productivity and less readable code. In today’s environment, being slow is death. Companies cannot afford to spend an hour on tasks that can be done in minutes. Like last week’s article, this is a controversial topic, and many smart engineers will say my ideas lead to buggy code, but through experience I know this is not true. With a people-first mentality and properly trained engineers, you can have a more productive team delivering more features and delighting customers.

No Mandatory Code Reviews

I am now going to present one of my more controversial views: my thoughts on code reviews. Yup, I do not believe all code should be code reviewed. What!? No code reviews? How can you ensure quality code? You will surely introduce many bugs. That’s disastrous! In my mind, my view on code reviews should not be controversial, but talking with other engineers, very bright and successful engineers, I know that it is.

Beginning where we should: what is the purpose of code reviews? To find bugs. To share knowledge about the code. To share coding style and practices. But primarily, it’s to find bugs. One of my biggest complaints about engineers is their blindness to the business, specifically when it comes to ROI. Did you find a bug in the code review? Yes, so it was worthwhile. Well, maybe not. If it took you 30 minutes to find a cosmetic bug, it was not worthwhile. If it took 30 minutes to review the code, you found no bugs, and you’re still confused about the code, it was definitely not worthwhile. Taking ROI into account, code reviews need not happen under certain conditions.

To answer the question of which conditions make code reviews unnecessary, it’s easier to identify the conditions under which code reviews should occur. Mandatory code reviews should occur for new employees, for code in which the author is not the expert, and for any changes committed close to the delivery date. When delivery is months away and you are the expert in the code, there is no need for a mandatory code review. The key word here is mandatory. If the code is particularly tricky and you want someone to review it, great, create a code review. Also, all changes to the code base should be communicated to the entire team. I prefer to have a hook so that a commit to git automatically sends an email to the team. The email has the commit information and which files changed. Thus, the whole team, not just specific team members, has a chance to review the code on their own time.
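The email hook itself can be tiny. Here is a minimal sketch of such a notification hook in Python; the team address is a placeholder and the actual mail send is stubbed with a print, since SMTP details vary by shop:

```python
# Sketch of a git commit-notification hook (hypothetical addresses).
# In a real setup this would live in .git/hooks/post-commit (or, better, a
# server-side post-receive hook) and actually send mail; here the send step
# is a print so the formatting logic stands on its own.
import subprocess

TEAM = "dev-team@example.com"   # placeholder mailing list

def format_commit_email(sha: str, author: str, subject: str, files: list) -> str:
    """Build a plain-text notification body from commit metadata."""
    lines = [f"Commit {sha} by {author}", subject, "", "Changed files:"]
    lines += [f"  {path}" for path in files]
    return "\n".join(lines)

def notify_latest_commit() -> None:
    """Gather HEAD's metadata via git and 'send' the notification."""
    sha, author, subject = subprocess.run(
        ["git", "log", "-1", "--format=%h%n%an%n%s"],
        capture_output=True, text=True, check=True).stdout.splitlines()[:3]
    files = subprocess.run(
        ["git", "diff-tree", "--no-commit-id", "--name-only", "-r", "HEAD"],
        capture_output=True, text=True, check=True).stdout.split()
    body = format_commit_email(sha, author, subject, files)
    print(f"To: {TEAM}\n{body}")   # stand-in for the actual SMTP send
```

The point is the low ceremony: every commit produces one email with the commit message and file list, and anyone on the team can choose to dig deeper.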

The advantages of non-mandatory code reviews are many. They allow engineers to concentrate on a different task without going back and forth between tasks, since code reviews can stretch over days (do not underestimate the slowdown and the inadvertent bugs created by multitasking). The whole team, not just those invited to the review, participates in the process. Email hooks notify the team of, and record, all changes. Changes land in trunk sooner, so developers can integrate them easily and immediately. Changes get to QA sooner for testing, which is important for effective code base turnaround and bug resolution, as discussed here.

The other main advantages of non-mandatory code reviews come from avoiding the disadvantages of certain code reviews. Busy engineers can delay code reviews for many days. When the author is the expert in the area, it is extremely unlikely that other reviewers will find meaningful bugs simply by reading the code. It is also very likely that the reviewers will not fully understand the code. So, you say, the reviewer should learn the code? No. You’re living in a fantasy world. I have been asked to join code reviews where, had I truly performed a valid review, it would have taken me hours and hours, probably close to a whole day, all in the name of learning code that I am not familiar with, will probably never touch, and will likely forget after 3 months. So, no, the ROI is not worthwhile, not even close. The bugs recorded in these types of reviews are almost always cosmetic and pedantic. I’ve found the best bugs are found by actually testing the code, not by looking at the code. Sure, certain bugs like race conditions and deadlocks are most easily found by inspecting the code, but these often require thorough knowledge of the system to find, so again they are best suited for the expert in the area.

In sum, while all code reviews are *valuable*, not all code reviews are *worthwhile*. The benefits of completing the change, being able to switch to another task, and allowing QA and other developers immediate access to the code all weigh in on whether a code review is worthwhile. In my experience, limiting mandatory code reviews to new employees, to non-expert code changes, and to changes close to the ship date, while performing optional code reviews on everything else, provides the most effective environment for software development.

Vertical Slicing

As I mentioned in Don’t Miss Deadlines, an important key to delivering on time is to tackle the tasks that have cross-team interaction first. More broadly, this is referred to as vertical slicing or vertical integration. While there are some very specific concepts within vertical slicing, I like to stick to the general idea: for any feature that touches different sections of code, different teams, or different physical components, define a minimal, bare-bones end-to-end operation and implement it first so that it is operational. Then build out the features horizontally. Any issues, especially issues that necessitate a change to the design or architecture, are almost always found in the linking between the disparate components. The team will be much more likely to succeed and deliver on time if these major issues are identified early in the project.

Vertical slicing sounds easy in concept, but it is not. As a team lead, you must really push the various teams to aim for that initial deliverable, that initial vertical slice. Take for instance a new feature with components A, B, and C that need implementation in the database layer, the business logic layer, and the user interface. The standard approach would be to complete components A, B, and C in the database layer, while in parallel completing components A, B, and C in the business logic layer, and again in parallel in the UI. Each layer conforms to a spec, so when you glue all the pieces together, everything “should just work”. Instead, for vertical slicing, finish component A end to end first. Then work on component B, followed by component C. Working toward a vertical slice will likely be less efficient; completing a slice first creates some waste. Sometimes there will even be portions of code written solely for the vertical slice that are then thrown away. Thus, it’s up to the team lead to use their experience and common sense to define how the vertical slice is carved up. Personally, I have found the initial vertical slice to be the most important, after which tasks can be worked on horizontally. In the above example, you could finish component A end to end (getting the all-important initial vertical slice), then complete components B and C together rather than finishing B end to end before working on C. Each project will be slightly different.
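To make the idea concrete, here is a toy sketch in which the layers, class names, and component A’s behavior are all hypothetical: component A runs end to end through all three layers while component B remains a stub until the slice works.

```python
# Hypothetical sketch of an initial vertical slice: component A is wired end
# to end through all three layers; other components stay stubbed for now.
class DatabaseLayer:
    def save_a(self, value):            # component A: implemented
        self.stored_a = value
        return True
    def save_b(self, value):            # component B: stub until the slice works
        raise NotImplementedError

class BusinessLayer:
    def __init__(self, db):
        self.db = db
    def process_a(self, raw):           # component A: minimal, bare-bones logic
        return self.db.save_a(raw.strip())

class UILayer:
    def __init__(self, logic):
        self.logic = logic
    def submit_a(self, text):           # the end-to-end path a demo exercises
        return "saved" if self.logic.process_a(text) else "error"

ui = UILayer(BusinessLayer(DatabaseLayer()))
print(ui.submit_a("  hello  "))   # the slice is operational: prints "saved"
```

The slice is deliberately skeletal, but it is demoable, and any mismatch in how the layers link together surfaces now rather than at final integration.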

While the initial, common-sense instinct is to break up tasks horizontally by component, because this is often how teams are organized, an initial vertical slice is an important milestone for identifying issues early and achieving working features. Vertical slicing is a natural result of following agile methodologies, and the first time a vertical slice is demoed to a customer and an issue is identified early, you’ll quickly realize its value.

Out of Control

Let’s do a fun exercise. A lot of companies find themselves with legacy code, sometimes decades old, and a recent surge in customers, revenue, and now feature requests. Implementing the new features takes much longer than expected and introduces a lot of bugs. Also, when bugs are found, they often take a *long* time to debug and fix. In these cases you will often claim, and probably rightly so, that the team is resource constrained and overworked. How do you solve this problem where things feel out of control?

There are two paths. 1) Live with it. Do this if the legacy software is truly gargantuan and still relevant, i.e. most of the code is still used and you have a massive customer base with lots of customization. Joel Spolsky advocates this approach. The other option is 2) refactor the code. Do this when you risk losing market leadership because of an inability to keep up with competitors. This is the approach I generally prefer and the one I address below.

To start, clearly identify why bugs are introduced with changes to the code. For legacy code, it’s usually classic coding issues, most notably dependencies between modules that should be independent, referred to as spaghetti code. The spaghetti can be exacerbated by difficult-to-change interfaces, say with a server or embedded device your code communicates with, i.e. to fix issues, not only must your code be fixed, but other code owned by another team (or even another company) must be updated. So where to begin actually solving the problem?

To fix this issue:

1) Create ownership of portions/modules of the code. Do not create a team where everyone owns everything and nobody is responsible for anything. With ownership you create responsibility and accountability.
2) Perform any quick fixes (aka low-hanging fruit) that drastically improve the debuggability of the code. This will allow the team to keep fixing the critical items needed to keep the business in business.
3) Prioritize the refactoring of the code. Do not try to obtain approval from management to set aside time to refactor the code base. In my opinion, this need not be “approved”; it should be part of your job in managing the team’s workload.
4) Start defining the interfaces of these modules, both the current interface and what the interface *should* look like. This is probably the most important part of refactoring. It doesn’t have to be perfect, but you’ll find refactoring fruitless if you do not have a defined interface.
5) Create axioms. See my blog post on axioms for details.
6) With a clearly defined interface, consider writing test cases. However, do not over-emphasize this part. Test cases are great, but it’s easy to get caught up spending too much time here. It may be good to time box this chore.
7) De-emphasize the current list of bugs and prioritize the refactoring effort. The refactoring should, as a consequence, fix almost all the outstanding bugs and should include several new features. This resolution of bugs and addition of new features is also why you need not receive management approval for refactoring; instead, emphasize these real user-facing benefits.
8) Create a culture of no fear. There will be a period of time where nothing seems to work and large swaths of features are broken. Fantastic. Part of the process of breaking things is learning. That area of legacy code that nobody wants to touch because nobody understands it, and because updates to it always create bugs? Well, now you have somebody who fully understands it. That’s progress.
9) Keep QA actively testing. With a highly available code base and tight coupling between developers and QA (more details on my QA blog), bugs will be caught early!
10) Get to work. Do not over-emphasize planning. I’ve seen teams produce bottom-up refactoring plans of the entire code base in beautifully documented, planned-in-stages PowerPoint slides and UML flowcharts. It can look impressive; everybody likes a well-planned venture, especially one that involves such drastic change. However, the team is inadvertently performing the waterfall approach.

These ten steps are an outline of how to approach refactoring.
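As a concrete illustration of defining a module’s target interface, here is a sketch in Python; the module name and methods are hypothetical, and the in-memory dict stands in for the real legacy internals:

```python
# Sketch of pinning down what a module's interface *should* look like, even
# before the legacy internals are untangled. Names here are hypothetical.
from abc import ABC, abstractmethod

class StorageModule(ABC):
    """The target interface for the legacy storage code."""
    @abstractmethod
    def load(self, key: str) -> bytes: ...
    @abstractmethod
    def store(self, key: str, data: bytes) -> None: ...

class LegacyStorageAdapter(StorageModule):
    """Wraps the tangled legacy calls behind the defined interface, so
    callers can migrate now while the internals are refactored later."""
    def __init__(self):
        self._blobs = {}          # stand-in for the real legacy state
    def load(self, key):
        return self._blobs[key]
    def store(self, key, data):
        self._blobs[key] = data
```

Once callers go through the defined interface, the spaghetti behind it can be replaced module by module without another ripple through the code base.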

Sure, the refactoring I have described is simplified. What I have outlined is hypothetical common sense, and specific issues must be addressed. But it’s good to have a simple yet complete plan of attack. Emphasis on simple, as described in step 10. Refactoring has huge benefits. A cleaner code base *drastically* improves productivity. There are fewer bugs. When bugs are found, they are fixed faster. Features are introduced faster and with fewer side effects. Then there are the additional benefits: engineers are happier. Engineers much prefer to work in clean code, where they are more productive. It requires a team effort, a culture of no fear, and a lot of hard work, but the benefits are enormous.

Quality Assurance (Part 2)

Since the last post was growing too long, I broke it into two parts. Here’s the remainder:

4) Engineering does not provide step-by-step test instructions
I mention this item because it is a required process at my current job, one I have recommended be discontinued. It requires that each JIRA bug and feature have step-by-step instructions on how to verify the issue. Step-by-step instructions from engineering to QA defeat the whole purpose of QA. QA should have a high-level understanding of the feature and test it using their own methods, hopefully with the mindset of trying to “break” the feature and find bugs. Step-by-step instructions are not only ineffective, they’re counter-productive. Talking with the QA engineers, they often simply follow the instructions, and if it works, close the JIRA ticket and move on. To me, that means the issue was never truly tested; worse, the time spent testing was more or less wasted, as the engineer likely ran those exact same steps during their own testing. So it goes back to relying on QA to have a good understanding of the feature and to test it effectively with the axioms, not with step-by-step instructions.

5) Good balance between automated testing and ad hoc testing
I love automated testing. It’s great for catching regressions and can save time in the long run, but there seems to be a trend toward over-reliance on it. I recently manually tested a feature that was set up for automated testing. I found over 10 bugs, some of them major. QA had become so reliant on their automated tests that they simply ran the tests, and if they passed, assumed everything was working. QA must also perform manual testing; when you have great QA engineers with attention to detail, they will develop inventive testing methods and find some really subtle but important bugs, bugs that automated testing would never find.

6) Don’t test everything
Some things are not worth testing. Build breakages and minor or cosmetic changes to the source code are just some examples of things that may be logged and fixed in JIRA (or whatever bug tracking software you use) but need not be verified by QA. Basically, use common sense to determine when QA should spend their time testing. Don’t just mindlessly send every ticket to QA; make sure their time is well used.

Overall, it’s all about trusting QA and using common sense. Do not treat QA like robots that simply follow prescribed testing steps. If you do, you’ll soon find your best engineers leaving the company. Trust that they are knowledgeable about the product and will use good testing practices to ensure the product ships with good quality. You can trust them if they are logging lots of bugs and actively asking questions. If they are silent, something is wrong. Basically, it comes back to the Agile Manifesto principle of People > Process. Process is important, but enable and develop your People, and that emphasis will be evident in the final product.

Quality Assurance (Part 1)

I’ve seen highly effective quality teams and quite ineffective ones, and I have noted vast differences in the processes, culture, and mindset of how these teams operated. While this is not a thorough analysis across the industry (I’m working with two data points here), the reasons why the one team was ineffective seem very logical. Thus, it’s worth talking about.

The number one goal of QA is to find bugs. You could say the goal is to ensure the quality of a product, and while that may be true, let’s put it to the side, because I have found that message to be an excuse QA gives when they aren’t finding as many bugs as they should. Thus, using the number of bugs found as a metric, you should be able to measure QA effectiveness. These are the key practices that I believe make QA most effective: 1) quick code base turnaround and bug resolution, 2) lots of interaction and questions with the engineering team, 3) good common sense when testing rather than following elaborately detailed test plans, 4) engineering does not provide step-by-step test instructions, 5) a good balance between automated testing and ad hoc testing, and 6) not every bug or feature ticket requires QA testing.

1) Quick code base turnaround and bug resolution
I had an experience with the highly effective quality team that I will never forget. QA logged a bug on a feature that I had completed earlier that day. I knew exactly what the issue was when I saw the bug, and I fixed and tested it within 5 minutes. I checked in the change and resolved the bug. Our system was set up for QA to immediately pull the new change; the QA engineer then tested the fix and found it still was not working. The QA engineer reopened the JIRA ticket and assigned it back to me. Duh, I had my engineering blinders on and missed something obvious (this is the whole point of having QA: to find those bugs that engineers are sometimes blind toward). I fixed it correctly this time and sent it back to QA. They tested it and closed the JIRA ticket. This whole process occurred in less than 20 minutes. In other companies with much worse processes, which operate on dev branches pushed to QA only once per week, this whole process could have taken half a month, and the overall time consumed would be *much* greater than 20 minutes due to the context switching of both the developer and QA. For a QA team to be highly effective, they need to be able to work off the same code base as the engineers.

2) Collaboration
QA needs to understand how features work, and anytime they have any doubt, the development team is there to help. The key, however, is for QA to ask lots of questions. Axioms serve as the documented explanation of features between engineering and QA. On the flip side, engineering needs to ensure they respond and are available when QA needs assistance.

3) Common Sense > Elaborate Test Plan
I recently looked into our QA team’s test documents, and they are filled with redundant, mind-numbingly obvious test steps. As part of testing feature X on the customer website, each test portion of Feature X began with the same two steps: “test step 1) log into the customer site”, “expected result: the customer site shall appear”, “test step 2) click on feature X”, “expected result: this should open the feature X page”. Seriously, I couldn’t believe it. It’s as if there is a mindset that test steps should be written for people with zero knowledge of the product. If that were the case, we should outsource testing to a much cheaper QA team. And can you imagine if Feature X changes slightly, becoming a sub-feature of Feature Y? Then all the test steps on how to navigate to Feature X have to change. This is inefficient and does not make for an enjoyable work experience for the testers. Any decent QA engineer with basic knowledge of the feature will know how to test it (and if they don’t, they should seek help). Taking this a step further for this specific example regarding the customer website, *all* features on any user-facing page should be self-evident as to their purpose, especially to the tester whose job it is to test the website. If the features are not obvious, you should really look into improving the website, and the QA engineer should note the non-obvious feature. I don’t want to get too sidetracked on the specific case of a user-facing feature; the overall point is: enable your QA engineers with the tools and knowledge they need to properly test the product, and rely on them, not on elaborately detailed test plans, to find bugs.



Axioms

Features and the behavior of the product should be documented. The PM, QA, Marketing, Sales, and Docs teams all need to understand how the product behaves without being overburdened by too much detail. Even for developers, it’s nice to have a reference and overview of how a feature works. Most requirements docs I have seen are horribly over-complicated: they include too much detail, cross-reference JIRA tickets, cross-reference QA test cases, and are often incomplete, dated, or just plain wrong. The cross-references are bad because they are almost always incomplete. The documentation becomes incomplete, dated, and wrong because it is complicated and/or the procedure to update the documents and have them approved is too process-intensive. In short order, the whole effort becomes a waste of time. Thus, let me introduce you to axioms.

To resolve the issue, 1) documentation must be simple, 2) it must be easy to update and maintain, and 3) it must convey the necessary information. I have found what I call axioms to be a good compromise for the problem described. So, what is an axiom? An axiom is a simple truth. It is a statement of how a feature behaves. No flow charts, no graphs, just a statement. For example, “Processing a database that does not match the engine’s version will result in deletion and regeneration of the database from the original contents”. Axioms assume general knowledge of the product. Axioms provide sufficient information for the QA team to produce tests. Any feature self-evident when using the product is not included in the list of axioms. For example, if there is a button in the UI called “print document”, there need not be an axiom stating “the product should provide a button that allows the user to print the document”. Avoiding self-evident items keeps the axioms from being cluttered. Another example is “The ‘go’ button shall be colored blue”. There is no reason to document this as a feature. If the ‘go’ button changes to green, the color was almost always changed by design. Again, the reason to avoid such axioms is to prevent clutter. So yes, the inclusion of axioms is subjective, with the rule “self-evident axioms are not documented”. Axioms should be stored in a wiki that is easily accessible and editable by everyone, in which people can receive notifications when updates occur, which maintains a revision history, and which is searchable. An alternative to a wiki is simple .txt files in a central repository or under source control, though this is less ideal. Whatever you do, don’t put them in a Word document or other difficult-to-access object. One possible addition is tagging each axiom with a version, so readers know when features were introduced into the product.
To produce axioms for an existing product without too much effort, simply tag existing behavior as “legacy” or “<= v12” if the current release is version 12. The key point is to keep the procedure simple.
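If you do go the plain-text route, keeping the format trivially parseable helps with tooling. A sketch, using a file format of my own invention (one axiom per line, an optional version tag in brackets):

```python
# Sketch of axioms kept as version-tagged lines in a plain .txt file.
# The format and tags are my own illustration, not a prescribed standard.
def parse_axioms(text: str) -> list:
    """Parse lines like '[<=v12] Processing a mismatched database ...'
    into (version_tag, statement) pairs, skipping blanks and comments."""
    axioms = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("[") and "]" in line:
            tag, _, statement = line.partition("]")
            axioms.append((tag[1:].strip(), statement.strip()))
        else:
            axioms.append(("legacy", line))   # untagged lines default to legacy
    return axioms

sample = """
# engine axioms
[<=v12] A version-mismatched database is deleted and regenerated from source.
[v13] Regeneration is logged to the audit trail.
"""
```

With a format this simple, edits take seconds, diffs under source control serve as the revision history, and a grep answers “when did this behavior appear?”.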

I have yet to see a non-mission-critical company keep accurate, up-to-date requirements. As such, I like to use axioms. The key behind axioms is to keep the process simple; otherwise, the documentation will soon become a waste of time as it loses value to inaccuracies. It should take an engineer seconds to make changes, and an axiom should not be created if it is self-evident, to avoid clutter. Overall, axioms provide a nice compromise between having no requirements at all and having an overly burdensome process for creating them.

Architecture Astronauts

Joel Spolsky criticized the senior engineers who architect projects or platforms from a very generalized, abstracted, high-level viewpoint. But doesn’t this abstracted viewpoint lead to modular, scalable, maintainable code? Aren’t those the qualities we are taught are good aspects of programming? Was Joel wrong? No, no he wasn’t. In fact, this is one of Joel’s best points in his blog. Beware the architecture astronaut’s project!

The issue fundamentally boils down to whether the project is worth it. First, is the problem actually *useful* to solve? Second, is the ROI worth it? If the end result is an improvement but it results in a garbled, over-engineered, overly complicated product or platform, then no, it’s not worth it. Also, if it took 18 months to design this “beautiful”, abstract framework that could have been done by a python script in less than a day, then no, it’s not worth it. Is it borderline genius? Perhaps. But if it doesn’t solve real problems simply, then I’m not impressed. Be careful, as architecture astronauts are generally incredibly smart engineers with strong opinions who are often very persuasive, and, unfortunately, they are also often quite arrogant. Their projects are easy to identify, as they are often hailed as revolutionary, breakthrough, and game-changing. Meet the engineer who claims the Linux interface is poorly designed and has too many gotchas. He then seeks to design a framework to improve all of Linux by himself … because Linux, through all its history and amazing engineers, is substandard. That’s arrogance. Good, pragmatic programmers are humble enough not to claim such things and definitely do not force their white whale projects onto others. Good engineers often know that the better solution is incremental change. Boring, simple, incremental improvements.

Joel extols pragmatism well in his summary of architecture astronauts: “Tell me something new that I can do that I couldn’t do before or stay up there in space and don’t waste any more of my time.”