September Writing Challenge, Post 16: The L in SOLID

Note: This is the third of five posts I’m writing on the SOLID principles of object-oriented programming. Part 1: S, Part 2: O

The Liskov Substitution Principle is probably the deepest and most “academic” of the SOLID principles of object-oriented design. It states that replacing an object with an instance of a subtype of that object should not alter the correctness of the program.

If you read the Wikipedia page for the Liskov Substitution Principle, you’ll see that there is a whole lot packed into that word “correctness”. It touches on programming by contract, const correctness, and a lot of terms that will have limited meaning to people who don’t have a degree in computer science or mathematics. It can also be difficult to see how some of the more academic-sounding constraints apply to the real-world systems that we write. I’m going to try to back into it with a couple of “ferinstances” that motivate a more practically applicable (if slightly less rigorous) formulation of the LSP.

The “classic” LSP example is the square/rectangle problem. It’s natural for us to think of a square as a “specialization” of a rectangle; if you say, “a square is just like a rectangle except that all its sides are of equal length”, most people won’t object.

When you try to bring this abstraction to an object design, however, things break down. Let’s lay out this object hierarchy in Swift – where I had to jump through a couple of hoops to get the square’s constraint to work as needed:
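```swift
// A minimal sketch of the hierarchy. The names and the property-observer
// trick are illustrative guesses at the "hoops", not the original gist.
class Rectangle {
    var width: Double
    var height: Double

    init(width: Double, height: Double) {
        self.width = width
        self.height = height
    }

    var area: Double {
        return width * height
    }
}

class Square: Rectangle {
    // The "hoops": observers on the inherited properties keep the sides
    // equal by mutating the other dimension whenever one is set.
    override var width: Double {
        didSet { if height != width { height = width } }
    }

    override var height: Double {
        didSet { if width != height { width = height } }
    }

    init(side: Double) {
        super.init(width: side, height: side)
    }
}

let rect: Rectangle = Square(side: 4)
rect.width = 10
// rect.height is now 10 as well, which code written against Rectangle
// has no reason to expect.
```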

Calling code that expects a rectangle’s height and width to vary independently, or that has expectations about any derived quantity (like area or the position of a vertex), is now at risk of breaking.

What’s the general principle we can draw from this? It might help to restate the square/rectangle relationship: “A square satisfies all of the constraints of a rectangle, and adds the constraint that its sides must be of equal length.” For the operation of setting width, the Rectangle allowed us to expect that its height would be invariant. The Square breaks that expectation – because of its extra constraint, its property setters mutate state that the parent class’s setters don’t touch. This is part of what the Wikipedia article means when it says that “Invariants of the supertype must be preserved in a subtype.”

There are other kinds of constraints that break expectations of calling code. You might be writing an object in a payroll system that has a method to compute compensation, with a signature like Currency computeCompensation(Employee emp, Timesheet latestTimesheet). That’s a very specific contract made with the calling code, and a subclass must not add a constraint by, for example, demanding that emp be an instance of the subclass OvertimeEligibleEmployee. Calling code has the reasonable expectation that it may pass in any Employee object or any instance of a subclass of Employee, and further constraining the type of emp breaks that expectation – so badly, in fact, that every OO language I’ve worked in (which isn’t all of them, by any means, but it’s a fair sample of the common ones) disallows changes to overridden method signatures. You could get around it in the child class’s overridden method by downcasting to OvertimeEligibleEmployee. If you’ve ever been warned against downcasting, this is exactly why: you’re basically saying, “the caller says this is an instance of Employee, but I know better.” Sometimes you’ll be right, but at some point you’re going to be wrong about that and introduce a crash or a hard-to-trace logic error.
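To make the smell concrete, here’s a sketch in Swift (the types, names, and numbers are stand-ins for illustration, not code from a real payroll system):

```swift
// Stub types, invented for illustration.
class Employee {}
class OvertimeEligibleEmployee: Employee {
    var overtimeHours = 10.0
}
struct Timesheet {}
typealias Currency = Double

class PayrollCalculator {
    func computeCompensation(_ emp: Employee, latestTimesheet: Timesheet) -> Currency {
        return 1_000.0 // base pay calculation elided
    }
}

class OvertimePayrollCalculator: PayrollCalculator {
    override func computeCompensation(_ emp: Employee, latestTimesheet: Timesheet) -> Currency {
        // The override can't narrow the parameter type, so the extra
        // constraint sneaks in as a downcast: "the caller says this is
        // an Employee, but I know better." Pass a plain Employee here
        // and this crashes.
        guard let eligible = emp as? OvertimeEligibleEmployee else {
            fatalError("expected an OvertimeEligibleEmployee")
        }
        let base = super.computeCompensation(eligible, latestTimesheet: latestTimesheet)
        return base + eligible.overtimeHours * 20.0
    }
}
```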

This, to me, is the core of the Liskov Substitution Principle: it’s all about constraints and expectations. If your child class introduces a constraint that would break any plausible expectation of the code calling an instance of the parent class, you’re breaking the LSP, and you may or may not be breaking your program.

The LSP is the most restrictive of the five SOLID principles and the easiest to break, either unintentionally, or because you decided that a downcast or an extra property mutation in a child class is okay just this one time. And the LSP gets broken all the time in production code and even well-regarded framework code, sometimes productively. For you Cocoa heads: You’ve seen mutable subtypes of immutable types – NSMutableArray is a subtype of NSArray, NSMutableString is a subtype of NSString… How does that stack up against the “history constraint” cited in the Wikipedia article? Bonus question (that might lead you to drink): How would you change this hierarchy of types to “fix” that?

I encourage you to do some reading on it, and to develop a feel for the innocuous-seeming changes in the lower reaches of your class hierarchies that might break expectations of code written against your parent classes – and likewise for the times when you can profitably but deliberately break the LSP to get things done.

September Writing Challenge, Post 15: The O in SOLID

Note: This is the second of five posts I’m writing on the SOLID principles of object-oriented programming. Part 1: S

The Open/Closed Principle says that software components should be open for extension, but closed to modification.

Sometimes you will see this described in terms of inheritance. Drawing from the examples in my post yesterday, take the case of a retail product that has multiple possible JSON representations exposed by a web API. The product detail API call will give you everything you need to fill in a product information page. Looking up order history, though, you’ll have some part of that same information, plus a quantity, an order price (which may be different from the current purchase price), &c. For the sake of argument, let’s also say you have a well-tested ProductMarshaller class that handles data received from the product detail endpoint. In Swift, you might have something like (assuming the JSON has been parsed into a dictionary):
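```swift
// A sketch: the model's fields and the JSON keys are invented for
// illustration.
class Product {
    var productID = ""
    var name = ""
    var price = 0.0
}

class ProductMarshaller {
    func marshal(_ json: [String: Any]) -> Product {
        let product = Product()
        populate(product, from: json)
        return product
    }

    // Factored out so subclasses can reuse the common fields.
    func populate(_ product: Product, from json: [String: Any]) {
        product.productID = json["id"] as? String ?? ""
        product.name = json["name"] as? String ?? ""
        product.price = json["price"] as? Double ?? 0
    }
}
```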

But what about the order API response, which needs something extra? We could go and alter the existing ProductMarshaller… but that feels uncomfortably close to a violation of the Single Responsibility Principle. It also feels icky to muck around in tested, working code. But we don’t want to duplicate everything from the existing marshaller either, because duplication is bad, m’kay?

We don’t need to alter the stable, working portion of the system in this case, we just need to extend it. In OO languages, one way to do that is with inheritance:
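```swift
// A sketch continuing the ProductMarshaller example above; the extra
// fields and JSON keys are invented for illustration.
class OrderedProduct: Product {
    var quantity = 0
    var orderPrice = 0.0
}

class OrderedProductMarshaller: ProductMarshaller {
    override func marshal(_ json: [String: Any]) -> Product {
        let product = OrderedProduct()
        populate(product, from: json) // reuse the tested, working part untouched
        product.quantity = json["quantity"] as? Int ?? 0
        product.orderPrice = json["orderPrice"] as? Double ?? product.price
        return product
    }
}
```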

We have closed off a stable part of the system to changes, while taking advantage of the fact that it remains open for extension. We’ve reused the common behavior while specifying only the extra bits we need for our special case.

Inheritance isn’t right for every case, though. What if you’re working on a payroll system that requires you to draw employee data from both a sexy, new JSON-based API and an old and busted XML-based API? Inheritance doesn’t buy us much here – there’s no obvious, natural way to draw common functionality out of the tasks of extracting data from these two formats… But the one thing they do have in common is that you’re passing in a string representation and getting back an Employee model object. Most modern OO languages allow you some way to define an interface – that is, a sort of contract for your object’s behavior – without specifying an implementation. In Java, that might look like:
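```java
// Minimal stubs so the sketch stands alone; names are illustrative.
class Employee {
    String name;
}

class MarshallingException extends Exception {
    MarshallingException(String message, Throwable cause) {
        super(message, cause);
    }
}

// The contract: a string representation in, an Employee out, errors
// reported out-of-band as an exception.
interface EmployeeMarshaller {
    Employee marshal(String representation) throws MarshallingException;
}
```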

And your classes would look like:
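```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

// A sketch of the legacy XML implementation; the element name is an
// invented example.
class XMLEmployeeMarshaller implements EmployeeMarshaller {
    @Override
    public Employee marshal(String representation) throws MarshallingException {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(representation)));
            NodeList names = doc.getElementsByTagName("name");
            if (names.getLength() == 0) {
                throw new MarshallingException("missing <name> element", null);
            }
            Employee employee = new Employee();
            employee.name = names.item(0).getTextContent();
            return employee;
        } catch (Exception e) {
            throw new MarshallingException("could not parse employee XML", e);
        }
    }
}
```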

…and similarly for the JSON marshaller.

In both cases, when we say the class implements an interface, we’re forcing it to commit to a contract: “I’m going to take in a string representation and return an Employee object and send you errors out-of-band.” The interface is the part that we have closed to modification; we’re saying that all our employee marshallers must commit to this contract. Implementation is left open; the newer JSON implementation (which, being new, might still be seeing changes to its requirements) can change freely without impacting the legacy XML implementation.

If you read articles about SOLID and Open/Closed on the web, you’ll see both interpretations of the principle, inheritance-based and interface-based (with the interface-based interpretation being newer and hipper). Both are useful. There may be other interpretations, as our thinking and our computing languages evolve. The most profitable way to think about the Open/Closed Principle is to ask yourself: What are the parts of the system that are likely to change very slowly, or not at all? Whether those are interfaces or implementations, those are the things that you should figure out early and close off to modification. These stable, closed parts of your system should be the coupling points between the “open” parts of your system. With interfaces especially, those closed portions of your object design can be used to segregate regions of change and technical risk, so that they remain tractable in the face of requirements changes and debugging.

And speaking of interfaces, they’ll figure heavily into tomorrow’s post on the Liskov Substitution Principle.

September Writing Challenge, Post 14: The S in SOLID

Note: This is the first of five posts I’m writing on the SOLID principles of object-oriented programming, and the fourteenth of 30 posts that I’m writing in September.

If you write object-oriented code, but have never heard the phrase “SOLID principles”, we should talk.

SOLID is a mnemonic for these five things that you should pay attention to when doing object design:

  • Single Responsibility Principle
  • Open/Closed Principle
  • Liskov Substitution Principle
  • Interface Segregation Principle
  • Dependency Inversion Principle

Come back over the course of the week to see the last four, but today I’m going to ramble a little about the Single Responsibility Principle – or as I sometimes call it, the Small, Dumb Object Principle.

In code calling web services, there’s a pattern I commonly see: Someone will write a class that takes some query parameters, calls the network endpoint, pulls out the JSON (or an error) from the response, interprets the JSON into a model object, and returns it to the user (usually asynchronously, via a block).

I think that’s a fine interface to show to calling code – request params in, model objects out – but I don’t think much of it as an object design. There’s too much going on in that one class. I would break it up like so:

  • There should be an Endpoint class that does nothing but turn the parameters into an HTTP request, and blindly report the result (successful JSON response or an error state).
  • There should be a Marshaller class that does nothing but consume JSON and spit out model objects.
  • There should be a Service class that coordinates the other classes, and perhaps does some manipulation/saving of the models or interpretation of the error state before reporting them to the user.

To some people, that seems like overkill – three classes when you could have one. I’d say they’re not thinking deeply enough about software requirements and how they tend to change.

The Single Responsibility Principle states that a class should have only one responsibility. Put another way, there should only be one category of system requirement that forces each class to change.

What happens when your web service gets versioned? What if you have to handle multiple versions of the API in your code? If you’ve coupled your JSON marshalling code tightly to the output of your network call, it’s going to be a pain to tease that out. It’s wiser to separate your concerns, keep the network code dumb about how its output will be used, and be able to swap in the right Endpoint object as needed.

Keeping the Marshaller separate has benefits, too. What if you’re working on a dating app that has identical user objects come down as part of login, search results, messages, and a local newsfeed? You’ll want that marshalling code packaged reusably, decoupled from any one API call, but accessible to them all.

Another problematic implementation I see is packaging the marshalling code inside the model class – why shouldn’t the model object create itself? The appeal is obvious: you have one less object, and it doesn’t break the reusability case I described above.

What if you’re working on a retail app that has different product representations come down from different API calls? The product detail call is going to have a lot more information than the search results, and will also look different from products in a shopping cart (which will at least have a quantity added), or the products in order history (which will have a quantity, an order price separate from the current price reported by the detail call, possibly a link to the order…). If you try to handle all these cases in the model object, three quarters of your model code is going to be devoted to translating these different JSON representations into model instances. Does that make sense? Does that match with the “data and business logic” description of the M in MVC? On the other hand, a small class hierarchy of Marshallers can handle this cleanly while keeping the model class ignorant of the details of its own serialized representation.

Because the Endpoint and Marshaller objects are ignorant of each other, they’ll need help to get anything done. So I write a Service object that passes parameters to an Endpoint, passes its output to the Marshaller, and returns the results to the user.

Is the Endpoint pulling from an HTTP web service? A SQL data store? An interface to a fax line? The Service doesn’t care; it just knows that the Endpoint takes parameters and asynchronously spits out data and errors. Is that data in JSON? XML? Packed binary format? Cuneiform? The Service doesn’t care; that’s the Marshaller’s problem. It just knows to feed the Marshaller data and collect the model objects that fall out – at most, it has to choose the right Marshaller based on a data type or other out-of-band signal from the Endpoint.
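Sketched in Swift, with illustrative names and signatures, the shape of it might look like this:

```swift
import Foundation

protocol Endpoint {
    // Parameters in; raw response data or an error out.
    // Knows nothing about model objects.
    func fetch(parameters: [String: String],
               completion: @escaping (Result<Data, Error>) -> Void)
}

struct Product {
    let name: String
}

protocol Marshaller {
    // Response data in; model objects out. Knows nothing about networking.
    func marshal(_ data: Data) throws -> [Product]
}

// The Service coordinates the other two, and all it knows about either
// one is the small interface above.
final class ProductService {
    private let endpoint: Endpoint
    private let marshaller: Marshaller

    init(endpoint: Endpoint, marshaller: Marshaller) {
        self.endpoint = endpoint
        self.marshaller = marshaller
    }

    func products(matching parameters: [String: String],
                  completion: @escaping (Result<[Product], Error>) -> Void) {
        endpoint.fetch(parameters: parameters) { result in
            completion(result.flatMap { data in
                Result { try self.marshaller.marshal(data) }
            })
        }
    }
}
```

Swap in a SQL-backed Endpoint or an XML Marshaller, and the Service doesn’t change.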

Because each class is small and limited in scope, such classes will tend to be very testable in isolation – which is great if you’re doing TDD or any other automated testing practice. Because each class is small and limited in scope, it will be very easy to look at the code and reason about what the object is doing – and that makes your life so much easier when you’re debugging.

The most common complaint I’ve heard about this way of doing things is that it leads to a proliferation of small classes and class files, and I have to admit, that leaves me a little baffled about the logic behind the objection. Reducing the number of files in a project is not a reason to do anything. Is business value somehow inversely proportional to the number of files in a project? Does such an objection refute any of the practical points about reusability, complexity, or coupling that the Single Responsibility Principle addresses? As Edward Tufte has famously said: “There is no such thing as information overload. Only bad design.” I’d extend that to say that there is no such thing as too many class files in a project – there are only poorly organized projects.

Small, dumb classes with standard interfaces are the building blocks of stable systems. This is one of the key ways we manage and mitigate complexity in object-oriented programming.

Tomorrow, I’ll share some thoughts about the Open/Closed Principle, which is one of the ways we manage change in OOP.

September Writing Challenge, Post 13: The Cake Is a Lie

Ever played Portal (or its sequel)? It’s a great game that combines characteristics of the first-person shooter genre with spatial relations puzzles in a really engaging way. I don’t think I’m giving too much away when I say that the plot driver in the game is a maliciously insane artificial intelligence. She promises you cake if you can complete all the puzzles.

Later in the game, you might run across this graffito:

You were hoping for cake, weren't you?

It’s a narrative moment that manages to be disheartening without being surprising.

If you write software (or do interactive design, or any one of the jobs related to the field), you may, especially when being made an offer by a startup or early-stage company, be asked to take stock in lieu of salary. The company has limited funds, and needs to “extend its runway” or “manage its burn rate”, or whatever buzzphrase the kids are using these days to rationalize paying sub-market salaries. They’ll ask: wouldn’t you rather have the upside from a good exit next year (or the one after, or the one after that) than a few tens of thousands of dollars in your paycheck this year (or next year, or the one after that)? There will probably be emotional manipulation as well, centered around the assertion that they want to work with people who believe in what they’re doing. You may hear the phrase “team player”.

Ignore everything they tell you that isn’t in the paperwork.

I’m not saying don’t take equity. It’s a very nice thing to have, if the company actually works out. Taking it in lieu of salary, though, can be a major mistake. You’re basically agreeing to get paid in lottery tickets.

There are a few reasons why trading money for stock is a dangerous idea (unless you’re a founder, which is a whole different matter):

The company probably won’t make it. Most don’t. That’s a bummer, but it’s a fact, and no matter how much you actually like the company or what it’s doing or the people working there, it’s a fact you shouldn’t avoid when negotiating your compensation.

You’re almost certainly getting common stock. Google around about the difference between common and preferred stock. The latter is what founders and investors get, and one of the things it means is that if the company folds, people with preferred stock get something out of the company when its assets are liquidated. You don’t. And if the company doesn’t fold…

A successful exit doesn’t necessarily mean that you get rewarded. You might make out, but a lot depends on the ethics of the company’s investors. A new investment round that you think should raise the value of your shares above the exercise price might not have that effect. Investors will sometimes reapportion the outstanding shares or otherwise fiddle with paper and numbers such that all the reward accrues to them, while the value of your shares stays flat.

You have very little control over the outcome. One of the rationales for offering equity rather than salary is that, as a key employee in a young venture, you should share in both the risks and rewards of the venture. In reality, though, even if you do everything perfectly, you have very little influence over whether the company succeeds (unless you’re a founder or executive). You’re giving up some chunk of your compensation to chance and the behavior of others.

Add to this that a lot of these companies want you to work insane hours (and I have a whole different rant about why that’s a bad idea, for you and the company)… Well, I don’t think the ethics of that are talked about enough. Fodder for another post or three.

By my way of thinking, if a company can’t pay market salaries for the professionals it needs to do business, then it’s not well-capitalized enough to do business. In the best case, this means they’ll cut corners on other stuff like tools, work environment, support staff, and other things that might have supported you in doing your best work. In the worst case, they know they’re low-balling you and the stock will probably be worth less than the cereal box tops you used to collect for your school, but for their own reasons they’re okay with gambling with your livelihood and financial well-being.

It’s often made worse by the fact that the person trying to sell you this deal may actually think it’s a good thing, and won’t understand if you don’t think they’re doing you a huge favor. (There are reasons why company founders are sometimes delusionally optimistic, and that can even be a Good Thing™ – but again, that’s fodder for another post.) No matter how smart, ethical, and sincere the person is who’s trying to get you to take their lottery tickets instead of the money your skills are worth, this is not a person who is rationally serving your interests. They’re serving their own interests, and those interests of their investors that they are contractually obligated to serve – and rationality is a crapshoot.

Research the salary range you can command in your market. Read the paperwork and understand what you’re being offered when someone offers you equity in a company (or other non-monetary compensation) as part of your compensation package. Know yourself and how much risk you can tolerate, knowing that any upside to your equity is unlikely to materialize and unpredictable in its magnitude if it does. Get a lawyer if you need help understanding the paperwork. Don’t be shy about negotiating – as I write this, it’s still a seller’s labor market in the software world.

Most of all, make your own choices for your own reasons. Focus on the facts of your offer, and not on what the person making the offer thinks the upside will be (it will always be huge), or what they think the risk will be (they will minimize it), or their reasons that you should take the deal. “Team player” is too often a code phrase for someone who will be loyal to the company’s interests without having that loyalty reciprocated.

Please don’t misunderstand me: I think you should take risks, but I think they should be smart risks that serve you. I don’t think that every employer is out to screw you, but a few definitely are, and even for those that aren’t the onus is always on you to watch out for your own interests.

And I’m certainly not saying you shouldn’t take equity in a company – but I am saying that almost any deal that would be unfair without the equity is still unfair with it, given the uncertainties. Demand a fair deal, and don’t feel bad about walking away from an offer that doesn’t work for you.

September Writing Challenge Post 12: TIL, Radix Sort Edition

It occurred to me over coffee this morning that I’d never implemented my own radix sort; so, while the caffeine kicked in, I did that, and I learned a few things.

(Everything below applies only to my radix sort implemented in Objective-C in Xcode 6.4 on a machine with an Intel Core i7 processor, sorting a list of a million pseudorandom positive integers with values in the interval [0, 10^8). YMMV.)

Once I got the sort implemented and performance tests set up, the first thing I did was to parametrize it by radix. When you see a radix sort demonstrated in books, you always see it done in base 10. Unless the numbers were represented as strings, that always just seemed weird and unnatural to me, so I experimented with different radices. For my less-than-rigorous test, a base 2 sort took about 3 times longer than a base 10 sort, which was slightly slower than a base 16 sort. Base 100 was faster still.

This makes sense: Radix sort time complexity is O(kN), with N as the number of elements to sort and k as the maximum number of digits a value has in the radix you’ve chosen. Radix goes up, k goes down. Gist and code:
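```c
#include <stdlib.h>
#include <string.h>

/* A sketch of the idea rather than the original gist: an LSD radix sort
   parametrized by radix, in plain C (which drops into an Objective-C file
   unchanged). Assumes non-negative values and a radix of at least 2. */
void radix_sort(unsigned int *values, size_t count, unsigned int radix) {
    unsigned int *buffer = malloc(count * sizeof *buffer);
    size_t *buckets = malloc((radix + 1) * sizeof *buckets);

    /* One stable counting-sort pass per digit, least significant first. */
    for (unsigned long long divisor = 1; ; divisor *= radix) {
        memset(buckets, 0, (radix + 1) * sizeof *buckets);

        int more_digits = 0;
        for (size_t i = 0; i < count; i++) {
            unsigned long long d = (values[i] / divisor) % radix;
            buckets[d + 1]++;
            if (values[i] / divisor >= radix) more_digits = 1;
        }
        /* Prefix-sum the counts into starting offsets for each digit. */
        for (unsigned int d = 0; d < radix; d++) {
            buckets[d + 1] += buckets[d];
        }
        /* Scatter into the buffer in stable order, then copy back. */
        for (size_t i = 0; i < count; i++) {
            unsigned long long d = (values[i] / divisor) % radix;
            buffer[buckets[d]++] = values[i];
        }
        memcpy(values, buffer, count * sizeof *values);

        if (!more_digits) break; /* every value fit in the digits so far */
    }

    free(buckets);
    free(buffer);
}
```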

With that finding, I wondered how much better you could do choosing a power-of-2 radix, and doing a little bit bashing – Objective-C is still a superset of C, and things like the C bit-shift operators are native instructions on Intel processors, so I figured I might be able to squeeze a little more performance out of my sort by eschewing % and / in favor of >> and &. I was correct – I can get almost a 2x improvement using bit manipulation over arithmetic operations in the base 2 and base 16 sorts. Gist and code:
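```c
/* The same digit extraction for a power-of-2 radix, trading the divide
   and modulo above for a shift and a mask. For radix (1 << bits): */
static inline unsigned int digit_at(unsigned int value,
                                    unsigned int pass,
                                    unsigned int bits) {
    return (value >> (pass * bits)) & ((1u << bits) - 1);
}
/* e.g. base 16 is bits = 4, so digit_at(v, pass, 4) replaces
   (v / divisor) % 16 */
```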


  • I realize I am discovering nothing new here. These were just my own learnings from poking around over coffee.
  • It should not come as a surprise that bad things happen when you add negative numbers to your test set (at least with this implementation).
  • Moving to base 256 with signed int test data and the range of numbers I was working with led to more bad things happening.
  • Maybe I’ll have some more coffee.

September Writing Challenge, Post 11: Oops.

So, I spaced out on my post for the September writing challenge yesterday. I had a relatively juicy topic in mind that was going to take more than 5 minutes – which was outside the parameters of the challenge, but I’ve been playing pretty loose with that. I was running late in the morning, so I figured I’d get it done in the evening; then I got home to a bunch of high-priority stuff, and then there may have been some cocktails in there somewhere, and then I went to bed.

It occurs to me today: There was no guideline built into the challenge for what to do if I missed a day. Have I blown it? Do I skip a day? Do I write an extra post to make up lost ground? Are the Writing Challenge Police going to kick in my door? All of these things are possibilities.

Well, it’s my challenge, so my rule for a single missed day shall be: I’ll get right back to it (because why not?), I’ll do an extra post to make up (you’re reading it), and I’ll examine what happened (again, you’re reading it). What follows may only be interesting to those currently obsessing about habit formation.

[Commence omphaloskepsis]

First, the up side: This has been the first time in ages that I’ve written anything but Facebook status updates or the most mundane of work-related emails for 10 days in a row. So, yay.

I’ve highlighted the value of keeping the challenge parameters tractably small. If the post hadn’t grown large enough to tempt me to procrastinate, I wouldn’t have missed the day. This affects my thinking about what I’m going to attempt for a challenge next month.

It’s difficult for me to get discretionary stuff done in the evenings. After work, I still have personal and household responsibilities, and on top of that I’m trying to make progress on multiple side projects. Also, cocktails.

Related: I tend to commit near or slightly above my ability to execute. That’s something I’ve struggled with for as long as I can recall. I would like to a) find a way to make some of the important things habitual so that they take less attention and willpower to do consistently, and b) teach myself to set projects aside for later, without feeling like I’m losing or abandoning something.

It might help to have a plan about what to do when I do miss a day (because it’ll happen again). This is a tidbit I picked up from that Charles Duhigg book you see around all the time, The Power of Habit: People in surgical recovery (I believe the specific surgery was a joint replacement) did much better when they had put some thought into what they’d do when they hit problems in their recovery. The therapy to recover from that kind of surgery can be long, painful, and complex; most of my projects are not so physically painful, but I’d still probably benefit from planning for something besides the happy path.

Okay – off to the showers, and then I’ll come back and choose a topic for another (probably shorter) post.

September Writing Challenge, Post 10: If You Can’t Teach It, You Don’t Know It

The title of this post was more than long enough already, but it could probably be clarified a little with one more phrase: If You Can’t Teach It, You Don’t Know It Nearly as Well as You Could.

This is something I figured out back when I was teaching first- and second-year college physics: I never knew my subject as well as when I was able to successfully convey it to someone else.

I’ve come to think that teaching a concept is an important stage in learning it. It’s not that everyone must be a teacher, though it is a valuable and important skill. It’s not that you can’t successfully use a concept you haven’t taught – because of course you can. Successful teaching, though, involves expressing a concept clearly and succinctly, and often from multiple directions. Being forced into that level of clarity and circumspection has the effect of solidifying a concept for the teacher as well as the student.

I believe in this idea enough that I’ve used it to choose topics for conference talks or lunch-and-learn sessions. I’ll be working on something, I’ll need to use a certain tool, and I’ll notice that I don’t know nearly enough about the tool – so I’ll commit to a talk and create a situation where I must then learn enough about the tool to teach someone else to use it, and to answer the kinds of questions I anticipate will be asked (roughly 50% of which are questions I also have). It doesn’t make me an expert off the bat, but it provides a solid foundation that, combined with repeated use, will get me there.

September Writing Challenge, Post 9: Brief Thoughts on Today’s Apple Event

There was a lot of cool stuff introduced today. I think we’re all (“we” = “Apple geeks”) looking forward to performance improvements and a more powerful SDK on watchOS 2. The iPad Pro would be seriously tempting to me if I hadn’t just bought an 11″ MBA (and even so). I really wish Pencil worked with other iPad models, because I’d like one. The new Apple TV has some cool stuff in it (and another SDK to learn! yay!), but the killer feature for me is the multi-channel search; going through the siloed apps on my “smart” TV for each of the services I subscribe to – and I do use a few – is a PITA.

The most interesting news on the iPhone, to me, was 3D Touch. The snap consensus on Twitter seems to be that it has discoverability issues (which is true) and that a lot of people will have trouble explaining it to their parents (which they will). The same has been said, though – also correctly, to varying degrees – of double-click, right-click, pull to refresh, long press, swipe, long swipe, et al. It strikes me as another thing that people are just going to have to poke at for a while, then they’ll get used to the idiom, then they’ll all use it without thinking about it.

And my favorite thing about both iOS 9 & El Capitan is not any particular OS feature (though there are a few cool ones, especially on the iPad). It’s that I get to use Swift 2, with try/catch, defer, guard, protocol extensions, and API availability checking. Aside from the dynamic language features of Objective-C (I do miss method swizzling sometimes) and a few stupid preprocessor tricks, Swift has caught up to Objective-C as a tool on Apple platforms, and surpassed it in many respects as a general-purpose language. And now that it’s going open source, there’s a chance it may even replace Ruby in my affections.

September Writing Challenge, Post 8: What I Care About in a Technical Interview

If I’m interviewing you for a technical position, I’ll ask a bunch of questions. Most of them are warm-ups, or checking off a list of basic information I’m collecting. There are two questions that take up most of the interview time, and contribute the most to my interest (or lack thereof) in your candidacy.

First, I will ask you to describe the technical details of a recent project. I’ll get down to the object design level, and I’ll question decisions like your choice of data store. If you are articulate about the details of the software you’ve written, and you have reasons for the decisions you’ve made, that works in your favor. It also lets me poke at how well you know the SDK we’ll be using, but that’s secondary.

I also have a pet algorithms & data structures question (which I will not share here). I’ve asked it of a few dozen people; maybe two have taken it to a solution that is optimized in time or memory, and only one pulled out multiple good solutions. There is no single correct solution, and you’re not expected to get there anyway. While I definitely use this question to probe CS fundamentals, that’s not the most important part of your answer – I’ve worked with some sharp people who didn’t know Big O Notation from Maxwell’s Equations. It’s much more important to me how creative you are in solving the problem, how many paths you see to go down, what hidden assumptions creep in, and what tools you reach for.

Comment fodder: Does anyone want to share favorite interview techniques? How about good or bad interview questions you’ve been asked?

September Writing Challenge, Post 7: Five Songs in Heavy Rotation on My iPhone

Here are five songs I have been listening to a lot, lately:

  1. Fight to Win by Goodie Mob feat. Cee Lo Green: This is just one of those songs that makes you feel like a badass. Sometimes you need that in the morning.
  2. MVC by James Dempsey and the Breakpoints: One of five songs I’m committed to learning how to play before my next CocoaConf.
  3. Patterns of a Diamond Ceiling by Marnie Stern: Because she can shred, and because it’s a really interestingly constructed song.
  4. Howlin’ for You by The Black Keys: Sometimes I just need some rock & roll. Not prog rock (which I love and listen to a ton of), not pop rock (I have Fun. on my phone just like everyone else), but straight up & down rock and fucking roll.
  5. Oopa by The Orb: I’ve been listening to a lot of ambient & abstract electronic music lately, mostly to listen to how it’s constructed, and to see if I could make something similar.