Being able to compare estimated time versus actual time can be useful. I’m not sure that “velocity” - the ratio between estimated and actual time spent on tasks - is particularly helpful, because in my experience estimates are not consistently wrong by a constant factor. It’s knowing what work you’re bad at estimating that’s helpful. Do you fail to appreciate the risks involved in adding new features, or do you tend to assume all bug fixes are trivially simple? — 25: 315-318
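To make the point concrete, here is a minimal sketch (with invented task data and a hypothetical log format) of the kind of per-category breakdown that reveals which work you are bad at estimating, rather than a single overall velocity:

```python
from collections import defaultdict

# Hypothetical task log: (category, estimated_hours, actual_hours).
tasks = [
    ("feature", 8, 20),
    ("feature", 5, 12),
    ("bugfix", 2, 2),
    ("bugfix", 3, 14),   # the "trivially simple" fix that wasn't
    ("refactor", 4, 5),
]

# Accumulate estimated and actual time per category of work.
totals = defaultdict(lambda: [0, 0])
for category, estimated, actual in tasks:
    totals[category][0] += estimated
    totals[category][1] += actual

# A single overall ratio hides where the estimates go wrong; a
# per-category breakdown shows which kinds of work you misjudge.
for category, (estimated, actual) in totals.items():
    print(f"{category}: estimated {estimated}h, actual {actual}h, "
          f"ratio {actual / estimated:.1f}x")
```

With these made-up numbers, features run over by a fairly steady factor while bug fixes are wildly inconsistent, which is exactly the distinction a single velocity number would hide.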
I’d recommend that even readers who consider themselves experienced object-oriented programmers read Object-Oriented Programming: An Evolutionary Approach and Object-Oriented Software Construction. — 50: 693-696
The goal of testing is to identify opportunities for the project team to exploit. — 56: 780-780
Remember that the tester is trying to find differences between expected and actual behaviour: discovering their causes is something that only needs to be done once the team has decided a code fix is appropriate. — 57: 798-799
Testing software and writing software share this property: it’s not doing them that’s beneficial, it’s having done them. — 61: 861-862
What does a software architect do? A software architect is there to identify risks that affect the technical implementation of the software product, and to address those risks, preferably before they stop or impede the development of the product. — 77: 1096-1098
User personas give the impression of designing for users, when in fact the product team has merely externalised their impression of what they want the software to be. It’s easy to go from “I want this feature” to “Bob would want this feature” when Bob is a stock photo pinned to a whiteboard; Bob won’t join in with the discussion, so he won’t tell you otherwise. The key thing is to get inside the fictitious Bob’s head and ask “why” he’d want that feature. Some teams I’ve been on that use personas nominate someone to be a persona’s advocate during discussions. This gives that person licence to challenge attempts to put words in the persona’s mouth; not quite the same as having a real customer involved, but still useful. — 96: 1373-1378
The article Three Schools of Thought on Enterprise Architecture explores the effects of these boundaries on considering the systems involved. — 96: 1382-1384
You need to know what you’re building for, so you need to have some understanding of the problem domain. Yes, this is asymmetric. That’s because the situation is asymmetric; you’re building the software to solve a problem, the problem hasn’t been created so that you can write some software. That’s just the way it is, and compromises must come more from the software makers than from the people we’re working for. The better you understand the problem you’re trying to solve, the more you can synthesise ideas from that domain and the software domain to create interesting solutions. In other words, you can write better software if you understand what it is that software will do. That’s hopefully not a controversial idea. — 100: 1439-1444
As with pair coaching, this is a situation where acting like a petulant toddler can be to your advantage. The domain experts are likely to have particular ways of doing things; finding out why is what’s going to uncover the stuff they didn’t think to tell you. It’ll be frustrating. Some things we don’t have real reasons for doing; they’re just “best” practice or the way it gets done. Probing on these things will set up a cognitive dissonance which can lead people to get defensive; it’s important to let them know that you’re asking because you’re aware how much of an expert they are at this stuff and that you just need to understand the basics in order to do a good job by them. — 102: 1462-1467
Satisfying human needs is what Herzberg deems a hygiene factor: people must have their basic needs met before they can be motivated to pursue other goals. — 106: 1535-1536
This section really reiterates what came before; you should be building software that your users need in preference to what they want. That’s the ideology, anyway. Reality has this annoying habit of chipping in with a “well, actually” at this point. — 108: 1567-1569
There’s a lot of good material out there on these other aspects of coding. When it comes to organisation, for example, even back when I was teaching myself programming there were books out there that explained this stuff and made a good job of it: The Structure and Interpretation of Computer Programs; Object-Oriented Programming: an evolutionary approach; Object-Oriented Software Construction. — 119: 1719-1723
What all of this means is that there is still, despite 45 years of systematic computer science education, room for multiple curricula on the teaching of making software. That the possibility of helping the next generation of programmers avoid the minefields that we (and the people before us, and the people before them) blundered into is open. That the “heroic effort” of rediscovery described at the beginning of this section need only be done a small number of times. — 121: 1755-1758
You don’t necessarily have to write your reflections down, although I find that keeping a journal or a blog does make me structure my thoughts more than entirely internal reflection does. In a way, this very book is a reflective learning exercise for me. I’m thinking about what I’ve had to do in my programming life that isn’t directly about writing code, and documenting that. Along the way I’m deciding that some things warrant further investigation, discovering more about them, and writing about those discoveries. — 122: 1777-1781
hard to sell you on their way of thinking. — 123: 1788-1788
You could imagine an interpretation in the form — 133: 1932-1932
According to some researchers in the field of disaster response, there are five considerations in risk estimation, leading to five different ways to get risk management wrong: incorrect evaluation of probability (usually presented as optimism bias, the false belief that nothing can go wrong); incorrect evaluation of impact (again, usually assuming optimistically that the damage won’t be too great); statistical neglect (ignoring existing real data in forecasting future outcomes, usually in favour of folklore or other questionable heuristics); solution neglect (not considering all options for risk reduction, thus failing to identify the optimal solution); and external risk neglect, in which you fail to consider factors outside the direct scope of the project that could nonetheless have an impact. — 135: 1966-1972
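Two of these failure modes can be made concrete in a short sketch. All figures below are invented for illustration: it computes expected loss as probability times impact, then compares several mitigations side by side, the comparison that solution neglect skips:

```python
# Hypothetical figures for one project risk: losing a week of one
# developer's time to a flaky build server.
probability = 0.3          # chance the failure happens this quarter
impact = 5 * 8 * 100.0     # a 40-hour week at a notional £100/hour

expected_loss = probability * impact

# Solution neglect: evaluating only one mitigation can miss a cheaper one,
# so every candidate is scored on the same basis.
mitigations = {
    "do nothing": (0.0, probability),        # (cost, residual probability)
    "duplicate the server": (2000.0, 0.05),
    "nightly config backup": (300.0, 0.15),
}

for name, (cost, residual) in mitigations.items():
    total = cost + residual * impact
    print(f"{name}: expected total cost {total:.0f}")
```

With these numbers the cheapest-looking option is not "do nothing": the modest backup beats both inaction and the expensive fix, which is the kind of result that only appears once all the options are costed.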
You could say “here’s the problem as I see it, this is what we want to get out of solving it, and here is the solution”. Now your colleagues and partners are left in no doubt as to why you believe in the approach you present, and you’ve set a precedent for how they should present their views if they still disagree. The conversation can focus on the problems facing the project, not the imbeciles around the table.

An aside on persuasion

Framing an argument in this way is a well-known rhetorical technique. First, people identify themselves as facing the problem you describe, so that when you describe the benefits of a solution your audience agrees that it would help. When you finally present your proposed solution, people already know that they want it. Nancy Duarte’s talk at TEDxEast goes into more depth on this theme. — 148: 2170-2178
In a column called “Mood” in Communications of the ACM, Peter J. Denning investigates the ways that moods can affect our interactions with each other, even transmitting the moods socially between members of a team. He notes that when everybody is positive, collaboration is easy; when everybody is negative, the outcome is likely to be bad so it’s best to avoid what will most likely become confrontational. — 173: 2562-2565
The Association for Computing Machinery’s code of ethics and professional conduct is a short document, comprising 24 ethical imperatives that members are expected to follow, one of which is that membership of the Association is contingent on abiding by the other imperatives. — 178: 2634-2636
I have certainly never been asked in an interview whether I’ve ever acted unethically. I’ve been asked what I know of Perl, and how I interact with other people on a team, but never whether I’ve failed to respect the privacy of others. — 180: 2658-2659
As the manuscript for this book came together, I realised that a lot of the content was based on a limited and naive philosophy of software creation. I was outlining this philosophy as it applied to each chapter, then explaining what the various relevant tasks were and how they fit into that philosophy. Here it is, written explicitly and separately from other considerations in the book: Our role as people who make software is to solve problems, and only incidentally to make software. Making software for its own sake is at best a benign waste of time and money, or at worst detrimental to those exposed to it. Our leading considerations at all times must be the people whose problems we are solving, and the problems themselves. — 185: 2731-2737
Advanced project funders will consider protected revenue (how many customers will not jump to a competing product if this feature is added) and opportunity cost (what work could we be doing if we decline this work), factoring those into the decisions about the project. — 192: 2834-2836
Are new features always bigger and more expensive than bug fixes? No. Do bug fixes always cost us money, and never attract or protect income? No. Are new features sometimes snuck into maintenance? Yes. Are bug fixes sometimes held off until new project releases? Yes. Then why aren’t they budgeted together? It could be for ethical reasons: perhaps programmers feel that maintenance problems are mistakes they should own up to and correct free of charge. But remember that one of Lehman’s Laws says that the satisfaction derived from software will decay as the social environment evolves. Not all bugs were bugs at the time of writing! You should not have to apologise for work you did correctly before a change in the environment. To me, this suggests a need for a more nimble economic model, one that treats any change equally regardless of whether it’s a bug fix, feature addition or internal quality clean-up. Forget what we’ve already spent and made on this product (for that way lies the sunk cost fallacy): what will the proposed change cost? What will it get us? How risky is it? What else could we be doing instead? What alternatives do we have? — 192: 2846-2855
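One way to picture such a model is a sketch like the following, with invented names, costs and benefits, that scores every proposed change by the same forward-looking questions, never asking whether it is "maintenance" or "new work":

```python
# A sketch of the "nimble" model suggested above: evaluate every proposed
# change, bug fix or feature alike, on the same forward-looking questions.

def score(change):
    """Expected net value: benefit weighted by chance of success, less cost."""
    return change["benefit"] * (1.0 - change["risk"]) - change["cost"]

proposals = [
    {"name": "fix crash on import", "cost": 2, "benefit": 10, "risk": 0.1},
    {"name": "add CSV export",      "cost": 8, "benefit": 15, "risk": 0.3},
    {"name": "clean up data layer", "cost": 5, "benefit": 9,  "risk": 0.2},
]

# Rank by expected net value; the category of the change never enters
# into the comparison, only what it costs, gets us, and risks.
for change in sorted(proposals, key=score, reverse=True):
    print(f"{change['name']}: net {score(change):.1f}")
```

Any real scoring function would be richer than this (protected revenue and opportunity cost, as above, belong in it too), but the shape of the comparison is the point: one ranking over all candidate work.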
I’ll start with a hypothesis: that what’s being traded is not the software itself, but capability first, and time second. Given a problem, the desire but not the ability to do something, anything that solves the problem by enabling that thing is valued. Accepting the first workable solution is what economist Herbert Simon described as satisficing, a consequence of our bounded rationality. So a first solution discovered, whether ideal or not, is still valued. This already explains why the infinite supply “problem” is not real: on discovering that a product can be purchased that meets their needs, a consumer is likely to settle for making the purchase as a satisficing solution; many will not spend the extra time on researching a pirated version of the app. — 196: 2899-2904
and error. The Software Engineering Body of Knowledge can be thought of as a guide to what to learn from the published literature on software engineering. — 203: 3001-3002
I have evaluated my own knowledge of computer science against the Programmer Competency Matrix for the last few years, and in the course of writing this text created the Programmer Courtesy Matrix to summarise the material here. — 204: 3015-3017
For novice programmers, self-taught, apprenticed and educated alike, the course from hobbyism to professional software making - whatever the context in which that software is made, and whatever the specific definition of “professional” we choose - starts with awareness of software as a means to solve problems, not as an end in itself. The next step is awareness of the gap between their novice competence and the current state of the art. How they choose to close that gap is less important than awareness of the gap’s existence. — 205: 3035-3039
Organisation of this book

There are three parts to this story.

The first, and necessarily the longest, antithesis: a deconstruction of the state of OOP as it exists today. To get to the kernel of a good idea, you have to crack a few nuts. Part one is the agitation that necessarily precedes revolution.

The second part, thesis: a reconstruction of OOP using only the parts that were left over after the antithesis. Part two is the manifesto: once we’ve seen that the last few decades of status quo haven’t been working for us, we can evaluate something that will.

The third, synthesis: a discussion of the ideas from OOP that aren’t being provided by today’s object systems, and the ideas and problems that OOP doesn’t yet address at all. These are the next steps to take to pursue the ideas behind object thinking. Part three is the call to action.

This is not a pure takedown, a suggestion that we have been monotonically doing it wrong for three decades: the antithesis part of this book questions, rejects and destroys a lot of built aspects of OOP, but by no means all of them. And by no means purely the later ones, either: the message is not that Smalltalk was created in some computational garden of Eden and that Sun tasted of the forbidden fruit which doomed us all to Java. Belief in a primaeval wisdom (urwissenheit) leads to an uncritical “tradition for tradition’s sake” in the same way that belief in primaeval stupidity (urdummheit) leads to an uncritical “novelty for novelty’s sake”.

Rather this is an attempt to find a consistent philosophy, a way of thinking about software, and to find the threads in the narrative and dialectic history of the making of software that are supportive and unsupportive of that way of thinking. Because OOP is supposed to be a paradigm, a pattern of thought, and if we want to adopt that paradigm then we have to see how different tools or techniques support, damage, or modify our thoughts. — 6: 60-74
It means relinquishing the traditional process-centered paradigm with the programmer-machine relationship at the center of the software universe in favor of a product-centered paradigm with the producer-consumer relationship at the center. — 12: 156-158
This comes from a good intention - inheritance was long seen as the object-oriented way to achieve reuse - but promotes thinking about reuse over thinking about use. — 19: 274-275
Bertrand Meyer’s principle of Command-Query Separation, in which a message either instructs an object to do something (like add an element to a list) or asks the object for information (like the number of elements in a list) but never does both. — 24: 350-352
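A minimal sketch of the principle in Python (the Playlist class is a made-up example, not one of Meyer's):

```python
class Playlist:
    """A small illustration of Command-Query Separation."""

    def __init__(self):
        self._tracks = []

    def add(self, track):
        """Command: changes state and deliberately returns nothing."""
        self._tracks.append(track)

    def count(self):
        """Query: answers a question without changing any state."""
        return len(self._tracks)
```

Because `count` has no side effects, callers can invoke it as often as they like (in assertions, logs, or debuggers) without altering behaviour; because `add` returns nothing, nobody is tempted to treat a state change as if it were an answer.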
A similar partial transfer of ideas can be seen in Test-Driven Development. A quick summary (obviously if you want the long version you could always buy my book) is — 44: 665-667
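For readers who haven't met it, a minimal red-green-refactor sketch (my own illustration, not the book's summary; the price-formatting example is invented) might look like this:

```python
import unittest

# Red: first write a failing test that states the behaviour we want.
class TestPriceFormatting(unittest.TestCase):
    def test_formats_pence_as_pounds(self):
        self.assertEqual(format_price(1250), "£12.50")

# Green: then write just enough code to make the test pass.
def format_price(pence):
    return f"£{pence / 100:.2f}"

# Refactor: with a passing test in place, the implementation can be
# restructured safely; the test guards the behaviour while the code changes.

# Normally a test runner performs this step:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestPriceFormatting)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The discipline is in the ordering: the test exists, and fails, before the code it describes, so every line of implementation is there to satisfy a stated need.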
Behaviour-Driven Development marries the technical process of Test-Driven Development with the design concept of the ubiquitous language, by encouraging developers to collaborate with the rest of their team on defining statements of desired behaviour in the ubiquitous language and using those to drive the design and implementation of the objects in the solution domain. In that way, the statement of what the Goal Donor needs is also the statement of sufficiency and correctness - i.e. the description of the problem that needs solving is also the description of a working solution. This ends up looking tautological enough not to be surprising.

Constructing Independent Objects

The theme running through the above is that sufficiency is sufficient. When an object has been identified as part of the solution to a problem, and contributes to that solution to the extent needed (even if for now that extent is “demonstrate that a solution is viable”), then it is ready to use. There is no need to situate the object in a taxonomy of inherited classes - but if that helps to solve the problem, then by all means do it. There is no need to show that various objects demonstrate a strict subtype relationship and can be used interchangeably, unless solving your problem requires that they be used interchangeably. There is no need for an object to make its data available to the rest of the program, unless the problem can be better solved (or solved more cheaply, or with some other desirable property) by doing so.

I made quite a big deal above of the open-closed principle, and its suggestion that the objects we build be “open to extension”. Doesn’t that mean anticipating the ways in which a system will change and making it possible for the objects to flex in those ways? To some extent, yes, and indeed that consideration can be valuable.
If your problem is working out how much to bill snooker players for their time on the tables in your local snooker hall, then it is indeed possible that your solution will be used in the same hall on the pool tables, or in a different snooker hall. But which of those will happen first, and will either happen soon? Those are questions to work with the Goal Donor and the Gold Owner (the person paying for the solution) on answering. Is it worth paying to solve this related problem now, or not? Regardless of the answer, the fact is that the objects are still ready to go to work as soon as they address the problem you have now. And there are other ways to address related problems anyway, which don’t require “future-proofing” the object designs to anticipate the uses to which they may be put. Perhaps your SnookerTable isn’t open to the extension of representing a pool table too, but the rest of the objects in your solution can send messages to a PoolPlayer in its stead. As the variant on the Open-Closed Principle above showed, these other objects could be ignorant of the game played on the table. Some amount of planning is always helpful, whether or not the plan turns out to be. The goal at every turn should be to understand how we get to what we now want from what we have now, not to already have that which we will probably want sometime. Maybe the easiest thing to do is to start afresh: so do that. — 52: 782-807
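The point about other objects being ignorant of the game can be sketched in Python; the class names and rates here are hypothetical stand-ins for the tables discussed above:

```python
# The billing code sends messages to whatever it is given; it never
# asks whether it is dealing with snooker or pool.

class SnookerTable:
    hourly_rate = 9.0

class PoolTable:
    hourly_rate = 6.0

def bill(table, hours):
    # Ignorant of the game played: any object answering hourly_rate will
    # do, so a pool table works in a snooker table's stead without
    # SnookerTable ever having been designed for extension.
    return table.hourly_rate * hours

print(bill(SnookerTable(), 2))  # 18.0
print(bill(PoolTable(), 2))     # 12.0
```

No common superclass or declared subtype relationship was needed; `bill` was written against the problem at hand, and the related problem was solved later by another object that happens to answer the same message.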