September Writing Challenge, Final Post: Okay, Not Quite 30

So: I missed a few days toward the end of this month. Life got a little intense, I got a head cold, and I wobbled.

On the upside: I did more writing on this blog in this past month than I have in many months prior – around 13,000 words. (There was also some other writing that did not turn up here.)

This was valuable enough that I’m going to do it again. For October, I’m either going to do a sketch a day or another blog-a-day. I haven’t decided which yet.

#31days

September Writing Challenge, Post 25: Some Productivity Omphaloskepsis

I’ve read before about how willpower and attention are akin to finite resources that get depleted and need to be allowed to recover, and I think that model has helped me realize something about my own cycles of productivity.

My usual behavior around non-work projects seems to go something like this:

  1. Starting from a relatively fallow period, something catches my interest (it could be anything – software, music, art), and I dive in.
  2. I have early successes, and this adds to my general level of energy and excitement, and I take on one or more other projects that interest me, thinking that I’ll ride this wave of motivation.
  3. If I have not been careful or realistic about how much stuff I voluntarily take on, it rapidly gets to the point where I can’t possibly make progress on everything. If I have been careful and realistic, it doesn’t matter, because something else will come along that I must take on, and it rapidly gets to the point where I can’t make progress on everything.
  4. Suddenly I feel like I’m failing at half or more of the stuff I’ve taken on, and things get set aside, sometimes indefinitely.
  5. Lather, rinse, repeat.

This can happen on a time scale anywhere from two weeks to three months.

If you’re a software process nerd (or possibly a general productivity nerd), you may have heard of Kanban, a method of process control. One of Kanban’s central tenets is “limit your work in progress”. In Kanban, that’s usually expressed at the task level, but I think for some of us (read: me) it might be wise to apply it at a higher level, and limit the number of projects we try to handle at once.

This is not necessarily a new insight, in general or for me personally, but I clearly need to be reminded.

September Writing Challenge, Post 24: Three Things I Wish American Tech Culture Would Learn

Note: I’ve had a couple things holding my attention this week, and as a result missed a couple of days of the writing challenge. I’ll catch up.

One more note: I’m having a slightly ranty day. Bear with me.

There are a bunch of things that could be done to make the tech culture more sane and humane. Here are three that rank highly on my list:

1. Working more hours does not necessarily make you more productive. In fact, it may make you far, far less so. We work in one of the few professions where it is possible to do negative work on a daily basis – that is, to leave the code worse than we found it. We are more likely to do this when we work long hours. Unfortunately, both American work culture and the tech subculture seek to twist overwork into a virtue. It’s not. Overwork leads to bad decisions. If your boss doesn’t understand this, give him the slide deck I linked earlier in this paragraph (which contains a ton of researched information on productivity topics beyond just hours). If he willfully ignores the facts and says he doesn’t believe it, go work for someone smarter, and let him fail on someone else’s broken back. Also: If you think you’re somehow the exception to this, you’re not. There’s ample research out there – I urge you to look it up.

2. Trading sleep for work just makes you dumber, not more productive. This goes hand-in-hand with the issue of long hours; as with overwork, our culture makes a badge of honor out of sleep deprivation. (I was guilty of this myself when I was younger.) When we don’t get enough sleep, it degrades the quality of our work, and our ability to notice how much our work has degraded. This may be a reason so many people think they’re exceptional in this regard. Spoiler: They’re not. Again, there’s loads of research; Google is your friend.

3. The software profession is not a meritocracy. At least, it’s not if you’re black or a woman. This is made worse by the fact that white guys in the profession often think they’re too smart to have unconscious biases about race, gender, sexuality, &c. It’s made worse still by the fact that most of us in the profession who are any good at it actually did work hard to get there, and feel there’s merit in the rewards we’ve gathered. But if it’s not a meritocracy for everyone, it’s not a meritocracy for anyone, and those of us on the inside need to check our privilege and start examining our own behavior.

</rant>

September Writing Challenge, Post 23: Five Things That Make My Workday Far More Productive

There are a few things I find it almost impossible to get through a workday without:

  1. The Internet: This seems so obvious that it almost feels like cheating to include it. SDK documentation, SDK bugs that are not in the documentation, algorithms, computing language tricks, example code, security alerts, third party libraries… And, for break time, everything else.
  2. A good pair of headphones: Music, binaural beats, pink noise, phone calls, Google hangouts… Many days, I spend more time with the phones on my head than off.
  3. A zipper-front hoodie: I’m sensitive to temperature when I’m working. There’s a lot of benefit in being able to regulate my insulation.
  4. The Dvorak keyboard layout: 60% less finger travel. QWERTY is so 19th century.
  5. A door that I can close: ‘Nuff said.

September Writing Challenge, Post 22: The Customer Is Always Right

If you read yesterday’s post, the title of today’s might seem odd.

If you are designing or writing software for someone other than yourself, you are ethically bound to give your client or employer your best advice on how to meet the project’s goals. (Being persuasive in this is one of the big reasons you should cultivate your communication skills.) Whoever is paying for your skills will then take your advice, or they will not. It could go either way, for reasons that are largely out of your hands.

Whichever way that goes, you are then ethically bound to do it the way the customer wants. Their project, their money. Sometimes, that means implementing things – often user experiences, but sometimes deeper technical details – in a way you know to be somewhere on the scale from “suboptimal” to “imbecilic”. Sometimes, deliberately delivering less than the best possible product* at someone else’s request can fall on an emotional scale from “mildly annoying” to “soul-eroding”.

If that’s too much for you, you could go and make your own product. If you don’t have the savings cushion to take that leap, you could work on a side project on your own time – that can be a real sanity-saver (and a great way to sharpen your skills). And when all else fails, learn this mantra: “Not my circus, not my monkeys.” However you do it, it can be valuable to learn to distance yourself emotionally from work that does not belong to you, when the need arises. Software and business are both complex endeavors, and their intersection will always involve compromise.

Give your customer your best advice, then build the very best version of the thing they are paying for, and know that at the end they are getting what they asked for and deserve the results, for good or ill.

And, of course, make sure that best advice you gave earlier in the project is documented somewhere, with your name attached. Couldn’t hurt.


* If you’re working on medical devices or air traffic control software, and someone could end up maimed or dead, the advice in this post is less applicable. Buck up and push your case harder, in that event.

September Writing Challenge, Post 21: The Customer Doesn’t Know Shit

If you are designing or writing software for someone other than yourself, you’ll spend some amount of time wanting to roll your eyes at people thinking they know how to do your job. Non-technical product managers will suggest specific technical solutions that they’ve heard of but don’t clearly understand. (MongoDB seems to turn up a lot in this context, for no reason I can discern.) Salespeople with no special background in UX design will prescribe inappropriate or outdated UI idioms. (Hamburger menus, a.k.a. pancake menus, still retain a lot of mindshare.)

Your natural and completely appropriate reaction to this sort of thing might be to want to start hitting people with a shovel, but this is bad for repeat business, and not legal in some places.

The thing you have to remember is that these nice people hired or contracted you (or your employer) because they don’t know how to do what you do, even if they sometimes forget this during requirements definition. You’re the expert. When a client falls into buzzword glossolalia, thinking he’s offering a valid technical solution, a better approach is to holster your shovel and say something like: “Let’s not get bogged down in technical details this early. Why don’t we take a step back, and you tell me what you want to accomplish by doing that, and then I can do a proper assessment of whether X is the right tool for the job.”

Gently draw the non-technical client’s attention away from the technical questions best left to you; bring your focus and theirs to the business problem they want to solve, and about which you may reasonably hope they know something. I’ve yet to have a client object to me caring about their problems and wanting to choose the best way to solve them. (Of course, then you’re on the hook to actually solve them, but that’s a topic for another post.)

There is, of course, a flip side to this, but I’ll write about that tomorrow.

September Writing Challenge, Post 20: Of Bikes and Lungs

Some years ago – let’s call it twenty – I was having an espresso at Sonsie on Newbury St. in Boston. The big front windows were open to the sidewalk. A guy in his twenties went by on a bicycle, smoking as he rode. Not only was he smoking, but there was an ashtray attached to his handlebars by a bit of hose clamp. This was, for obvious reasons of aerodynamics, not going to hold any ash or butts – it was clearly more of a statement. I suspected he thought he was pretty hardcore.

A few days ago, at the Starbucks on Capitol St. in Indianapolis, I saw another guy on a bike. He was in his sixties or later. He had oxygen tanks hanging off his pannier rack, connected to a nasal cannula that he was wearing as he rode. It did not look like a statement.

That guy was pretty hardcore.

September Writing Challenge Post 19: Five Books by Female Authors

Warren Ellis, who is one of my favorite (male) authors, wrote a blog post about a woman on Tinder who refused contacts from men who could not name five books by female authors which they had read. (You can surely guess the results if you’re… well, awake.) Since I haven’t patted myself on the back for being a New Age sensitive guy in a couple days, I thought I’d give it a go.

I won’t use Boneshaker by Cherie Priest, because Ellis used it in his list, and I don’t want to appear to be cheating. Sticking with works of fiction that I enjoyed:

  1. The Left Hand of Darkness by Ursula K. Le Guin: Le Guin is an astute writer of stories about the future that are not about the future.
  2. The Handmaid’s Tale by Margaret Atwood: I could say the same of Atwood that I just said of Le Guin. Possibly more so.
  3. Frankenstein by Mary Shelley: This reformulation of the tale of Prometheus is widely credited as the first science fiction novel, and is still among the best.
  4. Like Water for Chocolate by Laura Esquivel: Someday I’ll make it through the whole thing in the original Spanish. I still recommend the translation.
  5. The Temeraire novels by Naomi Novik: The Napoleonic Wars, but with dragons. Yeah, it’s brain candy, but it’s well-written, engaging brain candy.

Bonus non-fiction pick: Cracking the Coding Interview by Gayle Laakmann McDowell, which I recommend to anyone – especially non-CS majors – interviewing for a software gig.

September Writing Challenge, Post 18: The D in SOLID

Note: This is the fifth of five posts I’m writing on the SOLID principles of object-oriented programming. Part 1: S, Part 2: O, Part 3: L, Part 4: I

Any discussion of the Dependency Inversion Principle should start by answering the question: What, exactly, is being inverted?

A lot of object-oriented systems start with classes mapping to the higher-level requirements or entities; these get broken down and individual capabilities get drawn out into other classes, and so on. The tendency is to wind up with a pyramidal dependency structure, where any change in the lower reaches of the pyramid tends to bubble up, touching higher and higher-level components.

As an example, let’s think about the Service/Endpoint/Marshaller classes I discussed in my earlier post on the Single Responsibility Principle. It would be very easy to start writing the service class, decide to break out the Endpoint class, and do so in a way that made assumptions that you were calling an HTTP web service – for example, you might assume that all responses from the service would have a valid HTTP response code, or that parameters had to be packaged as a URL-encoded query string.

So what happens if your requirements change such that you must directly call a remote SQL store using a different protocol? You’re going to have to change at least two classes, because of assumptions you made about the nature of your data source.

With the Dependency Inversion Principle, we are told that first, we should not write high-level code that depends on low-level concretions – we should connect our components via abstractions; and second, that these abstractions should not depend on implementation details, but vice versa. I’ve seen the “inversion” part of DIP explained a few different ways, but what I see being inverted is the naïve design’s primacy of implementation over interface.

When you start thinking about how to break down subcomponents, take a step back and think about the interfaces between components, and do your best to sanitize them – remove anything that might bite you if implementation details change.

In the case of the Endpoint, that might mean writing an interface that takes a dictionary of parameter names and values, with no special encoding, and providing for success and failure callbacks. A success callback could give you some generic string or binary representation of the data you requested (which can be passed to a parser/marshaller next). The argument to the failure callback would be a generic error representation (most platforms have one), with an appropriate app-specific error code and message – not an HTTP status code, or anything else dependent on your data source.
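
As a sketch, such an Endpoint interface might look like this in Objective-C (all names here are hypothetical, not drawn from any real SDK):

```objc
@protocol Endpoint <NSObject>
// Plain parameter names and values; the conforming class applies
// whatever encoding its transport actually needs.
- (void)requestResource:(NSString *)resourceName
             parameters:(NSDictionary *)parameters
                success:(void (^)(NSData *payload))success
                failure:(void (^)(NSError *error))failure;
@end
```

An HTTPEndpoint and a SQLEndpoint could each conform to this protocol, and the Service class would neither know nor care which one it was talking to.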

DIP is a key way of limiting technical risk; in this example, after we have changed the interface to be generic with respect to the data source being called, a change to the Endpoint class requirements necessitates little or no corresponding change to the Service class, and vice versa.

The Obligatory Recap

Over these past five posts, I’ve covered five principles for building resilient object-oriented systems, with resiliency being defined as resistance to common classes of errors, low cost of change, and high comprehensibility (i.e., well-managed complexity).

Here are all five once more, not with their canonical formulations (you could get that from the Wikipedia page on SOLID), but with my own distillation of the core lesson (IMHO) from each:

  • Single Responsibility Principle: Give each class one thing to do, and no more.
  • Open/Closed Principle: Extend components, rather than modifying them.
  • Liskov Substitution Principle: Stay aware of the promises your classes make, and don’t break those promises in subclasses.
  • Interface Segregation Principle: Classes should advertise capabilities discretely and generically.
  • Dependency Inversion Principle: Details should depend on abstractions, never the other way around.

Go forth, and write some awesome software.

Next, I’ll post something lighter and non-technical. Promise.

September Writing Challenge, Post 17: The I in SOLID

Note: This is the fourth of five posts I’m writing on the SOLID principles of object-oriented programming. Part 1: S, Part 2: O, Part 3: L

The Interface Segregation Principle is probably the easiest of the five SOLID principles for most programmers to grasp, if only because anyone working in an object-oriented language has been exposed to it constantly. The ISP says that small, single-purpose interfaces are to be preferred to large, omnibus interfaces.

Finding examples is easy. Here are a few lines pulled from the Cocoa Foundation headers:
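
The following is a representative reconstruction of those declarations (current SDKs add lightweight generics and a few more protocols, but the shape is the same):

```objc
@interface NSArray : NSObject <NSCopying, NSMutableCopying, NSSecureCoding, NSFastEnumeration>
@interface NSDictionary : NSObject <NSCopying, NSMutableCopying, NSSecureCoding, NSFastEnumeration>
@interface NSString : NSObject <NSCopying, NSMutableCopying, NSSecureCoding>
@protocol NSSecureCoding <NSCoding>
```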

For those of you who aren’t fluent in Objective-C: In each of those lines, the identifier immediately before the colon is the name of a class or protocol being declared (as distinct from being defined), an identifier immediately after the colon but outside the angle brackets is the parent class of the class being declared, and identifiers inside the angle brackets are protocols to which the class or protocol being declared will conform.

If you’re a native Java speaker, @protocol is very similar to interface; if C++ is your thing, @protocol is akin to a pure abstract base class. In all three cases, it’s all contract and no implementation.

What contracts are these interfaces expressing?

  • NSCopying exists “for providing functional copies of an object.”
  • NSMutableCopying is “for providing mutable copies of an object.”
  • NSSecureCoding offers all the NSCoding methods for archiving an object, and additionally allows an object to assert that it unarchives securely.
  • NSFastEnumeration is “implemented by objects wishing to make use of a fast and safe enumeration style.”

…and of course, each class has the methods that make it special: an NSArray has the the operations you’d expect for an ordered, randomly-accessible collection of objects; NSString allows you to search for substrings, and so on.

Each interface defines a very specific capability – you could almost call them atoms of functionality (or promised functionality).

So why do we break up our object declarations into these separate interfaces?

First, it offers you a certain amount of protection. NSCoder (non-Cocoa heads: it archives objects complying with the NSCoding protocol) only needs to know about those methods relating to object serialization. Someone writing an NSCoder subclass doesn’t know and doesn’t need to know about copying or enumeration or any of the other things Foundation objects commonly do, and therefore can’t do anything surprising to an object that is passed into that subclass (like mutate it unexpectedly via a method having nothing to do with archiving). It allows you to expose only those methods a particular caller should care about, and in that way avoid surprises.

Second, it allows you more freedom in how you express the capabilities of a class. Imagine modeling a bird in Objective-C:
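
A minimal sketch of that naïve design (the method names here are just illustrative):

```objc
@interface Bird : NSObject
- (void)fly;
- (void)layEgg;
- (void)buildNest;
@end
```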

This looks straightforward, but what about subclasses that don’t need all of those capabilities? Should Ostrich or Penguin throw an exception when you call -fly? Should it be a no-op? What is it reasonable for calling code to expect? You could make Bird a protocol instead of a base class, and make flying-related operations optional:
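
One way to sketch that protocol version (again, illustrative names):

```objc
@protocol Bird <NSObject>
- (void)layEgg;
- (void)buildNest;

@optional
- (void)fly;
@end
```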

…but then what do you do when it comes time to model a Bat? Flying is a very similar operation, but all the code you wrote that needs -fly is expecting a Bird. You don’t want to duplicate the same code for a Bat, and you certainly don’t want to start checking types and casting, because you’re eventually going to have to implement FlyingSquirrel, and FlyingFish, and who knows what else, and that code will turn into an error-prone hairball. If the -fly operation is used in the same way on each class, the calling code shouldn’t care about the specific type, only whether -fly is implemented.

With interface segregation, we can declare all of these things very flexibly:
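
Here is one sketch of how that might look (class and protocol names are illustrative):

```objc
@protocol Flying <NSObject>
- (void)fly;
- (CGFloat)altitude;
@end

@protocol EggLaying <NSObject>
- (void)layEgg;
@end

// Birds lay eggs; only some of them fly.
@interface Bird : NSObject <EggLaying>
@end

@interface Sparrow : Bird <Flying>
@end

@interface Penguin : Bird // no Flying here
@end

// A Bat is no kind of Bird, but it can still be passed
// anywhere an id<Flying> is expected.
@interface Bat : NSObject <Flying>
@end
```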

Behavior that is shared across class hierarchies is broken out into a special-purpose interface. A method to check the altitude of a flying animal doesn’t need to know whether it’s a flying bird or a bat; the method signature - (CGFloat)checkAltitude:(id<Flying>)flyingAnimal; makes it clear that this code cares only about flying animals. You can’t even pass a Penguin to this method. (Another note for the non-ObjC-ers: id<Flying> means any object that conforms to the Flying protocol.)

Going back to the Foundation classes I referenced at the beginning of this post: It might be tempting to say that most of the classes need most of the same functionality, so why not put all the copying, archiving, and enumeration methods on NSObject, or make a subclass or protocol called NSFoundationObject that offers all the relevant methods?

That would work fine for the collection classes, all of which implement all the interfaces. Then we get to NSString… What does it mean to enumerate a string? Our first naïve thought might be to treat the string as a collection of characters, but nothing in NSFastEnumeration says anything about a character encoding, so that’s not going to work. Someday, someone is going to try to enumerate a string, and it will… crash? Throw an exception? Behave like an empty collection? Doing nothing isn’t even an option, because the lone method on NSFastEnumeration has a return value. (And the answer is not to have NSFastEnumeration’s method take a character encoding enum, and have classes that don’t need it ignore it – that’s making the problem worse, not better.)

It gets even sillier with NSNumber. What does it mean to enumerate over an atomic value type? What does it mean to have a mutable copy of it? It would be senseless for NSNumber to claim that it offers these capabilities.

So, it doesn’t. Every type advertises only those capabilities that are meaningful to it, with interfaces that describe those capabilities minimally and generically.

And for those of you diving into Swift, this mode of thinking is a precursor of the new hotness, Protocol-Oriented Programming.

Come back tomorrow for the thrilling conclusion of the SOLID series: the Dependency Inversion Principle.