Geeks are people, too. Sort of.

Dependency Injection and the YAGNI Principle

One of my new personal projects involves writing an application using Node.js and MongoDB. It’s going to have a RESTful interface for the services, with MongoDB as the database (I haven’t decided what I’m going to use for the front-end, but Angular.js is a strong contender). I had looked at using an existing full-stack framework to build it, but after playing with it for a while, it seemed to bring in too many things I wouldn’t end up needing, so I decided to build it all from scratch using Restify for the RESTful services.

I’m writing it in an MVC-ish way (really, since it’s REST, it’s more like MRC - Model - Route - Controller) where the top-level module requires routing modules from whatever Javascript files are in a particular directory (lib/routes), and calls them as functions, passing in the Restify server instance so the routes can be set up. Each route would call a corresponding “controller”, which would bring in a corresponding “model”, and set up the functions that each route would use to do its thing. So, the question became: how do I pass the database connection down to the model?

When I looked at how that framework does things, I noticed that it uses a dependency injection package called “dependable”. The way it works is that you create a “container” using dependable and register dependencies in it, then the module that needs them grabs the container and reads the dependencies out of it.

First of all, this isn’t dependency injection, because the module needing the dependencies has to actively retrieve the dependencies from a container, rather than having them injected (hence the term “dependency injection”). Of course, it doesn’t really matter what you call it, the package performs the basic function well - removing the coupling between the module creating the dependency and the module using it. However, I question whether it makes sense for the stack to use it in this case, for the simple reason that I believe this is a case where coupling is fine, in that the top-level module is already coupled to the routing modules, which are coupled to the controllers, which are coupled to the models, so pretending that they don’t know anything about one another is kind of silly.

This speaks to another issue I’ve seen a lot of (especially in the Java world), which is a lack of understanding of the role YAGNI (“You Ain’t Gonna Need It”) should play in programming.

One of the most egregious examples of this (which I’ve been guilty of myself) is the idea, in Java, that any class which is a dependency of another class should have an interface which the depending class uses as a proxy for it. In other words, if I have a class A which contains class B, I feel like I have to create a BInterface interface which A really contains, but which is implemented using B. The idea behind this is: what if, some day, I create a new class C which implements BInterface, but with a different implementation. Then, I don’t have to change A’s declaration, I just change where it creates B to create C.

This is fundamentally the same idea as the one where you have to dependency inject everything - you want everything to be perfectly flexible, so you don’t have to make changes when the implementation changes. However, the hidden expense here is that you now have twice as many classes/modules that you have to manage. Admittedly, half of them are interfaces, so they don’t do much, but it’s still adding a lot of complexity where it isn’t needed, because, chances are, YAGNI. And, in those places where you do need it, creating a new interface is trivial (especially in something like Eclipse), so you can do it then.

Flexibility always comes with a cost. The key to a good design is knowing where to add flexibility, and where not to. In the case of my Node.js code, I decided to pass the database connection through the route, simply because I think it’s fine that the top-level module understands that the routes might need to pass it down to their dependencies - that’s what having it as a parameter means - either “I’m going to need it”, or “one of my children will need it”. Either way, it doesn’t make the design any less clean, and, if something changes where I do need dependency injection, well, I can always add it later. YAGNI.
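In code, that decision looks something like this (a sketch with hypothetical names - the point is just that the connection is an explicit parameter all the way down):

```javascript
// lib/routes/users.js - the route module takes the server and the db connection
function userRoutes(server, db) {
  var controller = makeUserController(db);
  server.get('/users/:id', controller.getUser);
}

// The "controller" closes over the model, which closes over the db
function makeUserController(db) {
  var model = makeUserModel(db);
  return {
    getUser: function(req, res, next) {
      model.findById(req.params.id, function(err, user) {
        if (err) return next(err);
        res.send(user);
        next();
      });
    }
  };
}

// The "model" is the only layer that actually touches the db
function makeUserModel(db) {
  return {
    findById: function(id, cb) {
      db.collection('users').findOne({ _id: id }, cb);
    }
  };
}
```

No container, no registry - just parameters.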

Logging Using EventEmitters in Node.js

I’ve been working a lot in node.js lately for a work project. Javascript as a language is an odd duck (pun not intended). It’s got all these incredibly powerful features - dynamic typing, inheritance-by-prototype, functions as first-class objects - but it has some really odd anachronisms, including having to use the C-style for (var i = 0; i < length; i++) loop to iterate over arrays. Node adds a lot to the language as well, such as EventEmitters, which are very powerful. This afternoon I found a nifty new use for them: logging.

One of the issues that always seems to come up is the fact that logging is one of those things that is both local and global - it’s local, in that you want to do the logging at the place where the event you’re logging occurs, but global in that you want your logging configured globally - you don’t want each and every class/module to have to “know” about logging. One consequence of this, especially with respect to testing, is that you end up having to configure loggers for your unit tests, which, in a way, makes them no longer “unit” tests at all. Optimally, what you want is a situation where you’re logging locally, but if logging hasn’t been set up by anyone, the logs just go into /dev/null. Basically, you want your logging to involve sending out logging “events”, which are either captured by something, or not. EventEmitters give that to you for free.

EventEmitters, for those of you unfamiliar with them (but familiar with OO terminology), are an implementation of the observer pattern - a really lightweight and easy-to-use one. I won’t go into details on how it works; if you’re interested, look at the docs (or, even better, check out eventemitter2, which adds wildcards and namespaces). What I want to talk about is how to leverage it to make logging nicer.

For instance, if you have a logging package that you’re using (I’m using winston), you just create a module containing a class that is an EventEmitter:

var EventEmitter = require('events').EventEmitter;
var util = require('util');

function LogEmitter() {
  EventEmitter.call(this);
}

util.inherits(LogEmitter, EventEmitter);

Then, create an instance of your emitter, and you make your logging method emit an event:

var logEmitter = new LogEmitter();

module.exports.log = function(level, message) {
  logEmitter.emit('logging', level, message);
};

This just emits a logging event when log() is called, passing the parameters along.

Finally, you also have something listening if someone initializes logging:

var initialized = false;
module.exports.initialize = function() {
  if (!initialized) {
    initialized = true;
    logEmitter.on('logging', function(level, message) {
      // Log the message through your logging package (e.g., winston) here
    });
  }
};

Then, you’re good to go. Just distribute logging.log() calls throughout your code. If something in the code calls logging.initialize(), great, your messages get logged. If not (like, say, in a unit test), the messages go into the bitbucket.

Inheritance in Functional Languages

There’s a tendency amongst proponents of functional languages, like Javascript, to consider inheritance an anachronism of older (and, by implication, worse) OO languages and OO design. One example is this discussion by Mikito Takada. This is what he says:

I think classical inheritance is in most cases an antipattern in Javascript. Why?

There are two reasons to have inheritance:

  1. to support polymorphism in languages that do not have dynamic typing, like C++. The class acts as an interface specification for a type. This provides the benefit of being able to replace one class with another (such as a function that operates on a Shape that can accept subclasses like Circle). However, Javascript doesn’t require you to do this: the only thing that matters is that a method or property can be looked up when called/accessed.
  2. to reuse code. Here the theory is that you can reuse code by having a hierarchy of items that go from an abstract implementation to a more specific one, and you can thus define multiple subclasses in terms of a parent class. This is sometimes useful, but not that often.

The disadvantages of inheritance are:

  1. Nonstandard, hidden implementations of classical inheritance. Javascript doesn’t have a builtin way to define class inheritance, so people invent their own ones. These implementations are similar to each other, but differ in subtle ways.
  2. Deep inheritance trees. Subclasses are aware of the implementation details of their superclasses, which means that you need to understand both. What you see in the code is not what you get: instead, parts of an implementation are defined in the subclass and the rest are defined piecemeal in the inheritance tree. The implementation is thus sprinkled over multiple files, and you have to mentally recombine those to understand the actual behavior.

I favor composition over inheritance:

  • Composition - Functionality of an object is made up of an aggregate of different classes by containing instances of other objects.
  • Inheritance - Functionality of an object is made up of its own functionality plus functionality from its parent classes.

Now, I don’t disagree with him in principle here - inheritance can be an anti-pattern when overused, which it is, even in the world of OO languages (I’ve even seen it implemented, and then overused, in C, which, well, you’ve really got to see to believe). That being said, what concerns me about this quote in particular, and the corresponding attitude in the functional-programming community in general, is a basic misunderstanding of what inheritance (“classical” or prototypical) is, and a further misunderstanding of one of the fundamental ideas behind software design.

What is Inheritance For?

To begin with, we need to distinguish clearly between interface inheritance and implementation inheritance, because they are very different things. Interface inheritance is about defining a contract for an interface - no sharing of implementation is involved at all. It’s basically how you achieve both polymorphism and a well-defined interface in statically-typed languages. Since it doesn’t share any code, talking about “composition vs. inheritance” or “code reuse” really has nothing to do with it. What most such discussions are really talking about is implementation inheritance.

The question then becomes: what is implementation-type inheritance for? After all, it’s fairly trivial to see that you can achieve the same thing with composition or, in languages that support it, mixins. Why have it at all? The answer has to do with one of the fundamental ideas behind the design of anything: communication.

Design is Communication

In his book The Design of Everyday Things (which, IMHO, should be required reading for any software developer), Donald Norman gives an example of the design of a door, and what it communicates to those who want to use the door. If the door swings only one way, then placing a pull handle on the side that opens in and a push plate on the side that opens out communicates, almost at a subconscious level, what the person approaching the door needs to do in order to open the door.

Similarly, the software we write should be communicating something about its use to the next developer who reads it. That developer, if we do our job right, should have an almost intuitive understanding of how to use or modify the code with very little effort. What does this have to do with inheritance? Glad you asked.

On a superficial level, what inheritance does is communicate what classes belong to a particular group (the classical “is-a” test for inheritance). But, what inheritance really communicates, and what differentiates it from composition, is that it tells you what behavior is required for a class to be a particular thing. In other words, inheritance communicates the required behavior for a class, whereas composition communicates optional behavior.

To see this, imagine that you have to write a custom implementation for some third-party framework which you’ve never seen the code for before. It already has classes that have other implementations for the same sort of thing you’re trying to accomplish, so you crack one of them open to see how they did it. You immediately notice that, in this one implementation, they’re passing in a reference to another class and using it in their implementation. Would your immediate conclusion (before looking at anything else) be that you would need to also use that same class in your implementation, or would you think that it’s probably just being used by this particular implementation? In other words, would you think this was required behavior for any class trying to implement this type of class, or would you think it was optional? What if you saw that it was being inherited instead of being composed?

As another example (and, really, the one that brought this to my attention in the first place), I’m attempting to write a DAO framework in Node.js for a set of applications my company is building. This framework would have different implementations depending on what underlying data store was being used. However, one of the things I wanted it to be able to do is to read from an in-memory cache, if one is provided, before going to the data store. This is behavior I wanted to be part of the framework and any DAO class within it. The question becomes, should I use inheritance or composition? In my opinion, using composition would be a mistake, because it would communicate that this is optional behavior that might not be used by some implementations, whereas I want it to be used by all. Hence, I would use inheritance.

This sort of thing, this ability to communicate the functionality of code intuitively, is the difference, IMHO, between well-designed code (i.e., code that takes little or no effort to modify or augment) and badly-designed code (i.e. code that is a pain to modify, and that you have to read through extensively to use).

When Not To Use Inheritance

Mixu is very much right when he talks about the misuse of inheritance - people have been using it inappropriately for pretty much the entire time it’s been available. One anti-pattern for inheritance is, of course, using it for code reuse between classes which have no real connection to each other. Another is to use it as a dumping ground for common behavior - this is typically seen when you have a base class that exists for valid reasons, but then lazy programmers use it to add common functionality which should be extracted into a composed or utility class. Both are valid anti-patterns of inheritance use.

However, the reverse case is also true - it is an anti-pattern to use composition where inheritance is called for. The “code smell” for inappropriate composition is when you see the same boilerplate code around the use of a particular composed class in a bunch of classes doing the same thing - a sure sign of using composition where inheritance is required.

I am also concerned where he says that he uses inheritance, “but not that often”. That would imply, to me, that either he almost never writes polymorphic code, or if he does, the code almost never shares behavior. In my experience, polymorphic classes often do share behavior, at least at some level, and so for a developer to say he doesn’t use inheritance very often implies to me that he might be either unconsciously writing a lot of boilerplate code, or making his code bend over backwards in order to avoid using the dreaded inheritance. Either way, I’m sceptical.

Inheritance in Javascript

That being said, I think Mixu has a very good point about the design of Javascript inheritance - it sucks. It’s shoddy, it invites, as he says, nonstandard implementations, and is error-prone. Even Node.js’ solution - util.inherits() - is kludgey at best. It should have been made a keyword in the language so that, again, it’s clear what’s going on, rather than having to hunt around for certain coding structures which imply it. However, that’s a problem with Javascript, not inheritance.
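For reference, util.inherits() amounts to a couple of lines of prototype fiddling - roughly this (a sketch of the idea, not Node’s exact source):

```javascript
// Roughly what Node's util.inherits() does: rewire the prototype chain
// and stash a reference to the parent constructor.
function inherits(ctor, superCtor) {
  ctor.super_ = superCtor;
  ctor.prototype = Object.create(superCtor.prototype, {
    constructor: { value: ctor, enumerable: false, writable: true, configurable: true }
  });
}
```

The fact that this is something you have to hunt for in a utility library, rather than express with a language keyword, is exactly the problem.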

Using a Raspberry Pi as an iBeacon

There’s an excellent blog post by James Nebeker and David G. Young about simulating an iBeacon using a Raspberry Pi and a bluetooth dongle. Since I already had both, I thought I’d give it a try. It worked really well, and I’ve even put together the files necessary to do it in my GitHub repository.

You’ll need to install BlueZ as well for this to work. Once you get it all installed and working, you can use one of the mobile iBeacon apps, such as iBeacon Locate, to test it. It’s a good way to develop mobile apps that access iBeacons without having to wait for actual beacon hardware to come out.

Also, it’s just fun to play with.

More Fun With BeagleBone Black

I’ve discovered that owning a BeagleBone Black (or, in my case, two) is kind of like owning one of those really, really nice cars that you buy but you can almost never drive because it spends so much time in the shop. It’s a lovely piece of hardware, it really is, but the people who created it seem to have gone to great lengths to make sure that all you can do is sit back and admire it, because you certainly won’t be able to do anything useful with it.

My next foray into BBB craziness was attempting to get a wifi dongle to work with it, so I don’t have to constantly hook it up to my wireless adapter. Should be easy, right? Of course not, this is the BeagleBone Black we’re talking about. Nothing is easy.

I bought the Edimax EW-7811-UN, which is recommended for the Raspberry Pi and the BBB. Unfortunately, what I didn’t realize is that, while it used to work with the BBB, some changes were made to the OS so that it now takes a lot of wrangling to get it to work, if you can get it to work at all (I couldn’t).

First of all, it seems that, at some point, the drivers weren’t really available, so you had to build them yourself. I think the drivers are available now, but I went ahead and compiled the drivers per these instructions. The next hurdle was getting it to work with connman, the connection manager, which is basically almost completely undocumented - fun! (Note to developers everywhere, in the words of a former boss: “if it ain’t documented, it doesn’t exist”.)

However, from what I can gather, connman uses wpa_supplicant for the WPA transactions (if you’re using WPA, which, um, you should be). So, after fiddling around for a while, here’s what my /var/lib/connman/settings file ended up looking like (the important part being the section enabling WiFi):

[global]
OfflineMode=false

[WiFi]
Enable=true

And here’s my /var/lib/connman/wifi.config file:

[service_home]
Type = wifi
Name = my_ssid
Passphrase = my_encrypted_passphrase
Security = wpa2-psk

where my_ssid is my SSID name, and my_encrypted_passphrase is the result of running wpa_passphrase <ssid> <password>.

Finally, my /etc/wpa_supplicant.conf - I’m running WPA2-Personal on my wireless router, so this is, as far as I can tell, the way you set up wpa_supplicant:

network={
        ssid="my_ssid"
        psk=my_encrypted_passphrase
        proto=RSN
        key_mgmt=WPA-PSK
        pairwise=CCMP TKIP
        group=CCMP TKIP
}

Now, just getting to this point took a lot of tweaking and messing around with various settings. However, it’s still not connecting. It’s finding the access point, and even getting its MAC address, but it’s unable to authenticate. Here’s what happens when I run wpa_supplicant manually in debug mode:

# wpa_supplicant -iwlan0 -c/etc/wpa_supplicant.conf -d
  (irrelevant parts omitted)
wlan0: New scan results available
wlan0: Selecting BSS from priority group 0
wlan0: 0: 5c:96:xx:xx:xx:83 ssid='my_ssid' wpa_ie_len=0 rsn_ie_len=20 caps=0x11 level=87
wlan0:    selected based on RSN IE
wlan0:    selected BSS 5c:96:xx:xx:xx:83 ssid='my_ssid'
wlan0: Request association: reassociate: 0  selected: 5c:96:xx:xx:xx:83  bssid: 00:00:00:00:00:00  pending: 00:00:00:00:00:00  wpa_state: SCANNING
wlan0: Trying to associate with 5c:96:xx:xx:xx:83 (SSID='my_ssid' freq=2462 MHz)
wlan0: Cancelling scan request
wlan0: WPA: clearing own WPA/RSN IE
wlan0: Automatic auth_alg selection: 0x1
RSN: PMKSA cache search - network_ctx=(nil) try_opportunistic=0
RSN: Search for BSSID 5c:96:xx:xx:xx:83
RSN: No PMKSA cache entry found
wlan0: RSN: using IEEE 802.11i/D9.0
wlan0: WPA: Selected cipher suites: group 16 pairwise 16 key_mgmt 2 proto 2
wlan0: WPA: clearing AP WPA IE
WPA: set AP RSN IE - hexdump(len=22): 30 14 01 00 00 0f ac 04 01 00 00 0f ac 04 01 00 00 0f ac 02 00 00
wlan0: WPA: using GTK CCMP
wlan0: WPA: using PTK CCMP
wlan0: WPA: using KEY_MGMT WPA-PSK
WPA: Set own WPA IE default - hexdump(len=22): 30 14 01 00 00 0f ac 04 01 00 00 0f ac 04 01 00 00 0f ac 02 00 00
wlan0: No keys have been configured - skip key clearing
wpa_driver_wext_set_operstate: operstate 0->0 (DORMANT)
netlink: Operstate: linkmode=-1, operstate=5
Limit connection to BSSID 5c:96:xx:xx:xx:83 freq=2462 MHz based on scan results (bssid_set=0)
wlan0: Setting authentication timeout: 10 sec 0 usec
EAPOL: External notification - EAP success=0
EAPOL: Supplicant port status: Unauthorized
EAPOL: External notification - EAP fail=0
EAPOL: Supplicant port status: Unauthorized
EAPOL: External notification - portControl=Auto
EAPOL: Supplicant port status: Unauthorized
RSN: Ignored PMKID candidate without preauth flag

and basically it cycles through this over and over. Now, I’m sure this all means something to whoever wrote this code, but to me there’s absolutely nothing helpful here. It’s written for someone trying to debug the program, not to help someone troubleshoot a problem with their setup. And, yet, this is, as far as I can tell, the only troubleshooting tool these fine folks have seen fit to provide. Nice.

So, after wading through all this garbage, and trying a bunch of different things, I finally gleaned (with absolutely no help from whoever wrote this crappy software, thank you very much!) that it’s failing authentication. Why? Who knows. Really, there’s a special circle of hell for people who write error messages like this for their users to have to decode. When you get there, you spend eternity debugging Windows NT BSOD error messages.

So, I tried looking up the incredibly clear message EAPOL: Supplicant port status: Unauthorized. Now, there are a number of posts that imply that you need to have eth0 disconnected in order for this to work (which makes absolutely no sense to me whatsoever - since when are you only able to connect to one network interface at a time?), but doing that seems to make no difference. I’ve also seen a lot of descriptions of procedures people use to get this to work that are the technological equivalent of holding a dead chicken over the machine and praying. This didn’t work for me either (well, okay, I didn’t try the chicken…yet).

Which brings up my gripe: why is this so $@%^!&@ hard? Why do you have to go through this hell to just connect to a wireless access point - something that they do in the Mac and Windows worlds all the time? Would it be so bleeping hard to just spit out some relatively helpful messages - something along the lines of “authentication failed”, or even “bad passphrase”? Something.

If you are guilty of writing software like this, do me a favor: go to the mirror, grab yourself by the collar and slap yourself repeatedly, saying “Bad Programmer! Bad!” over and over, because it’s exactly what I’d be doing if I were there with you.

btle.js 0.2.0 Released

I just (okay, a few days ago) released version 0.2.0 of btle.js to npm. This has a bunch of API changes - the connect method now gives you a Device object, instead of a Connection. All the ATT methods are now on the Device object, but, in addition, you can access all the GATT functionality by querying the device for services, which returns Service objects, and services for characteristics, which return Characteristics objects.

I’m also working on API docs for the whole thing.

Fun With BeagleBone Black

So, I’ve decided to try to play with ZigBee, and since I have a couple of BeagleBone Blacks hanging around doing nothing, I thought I’d try setting it up on them.

First thing I came across was that BBBs seem to have issues with accessing their UARTs. Even via bonescript, there seem to be issues.

So, first thing I did was to upgrade to the latest firmware, and then do opkg update followed by opkg upgrade to get all the latest stuff. However, when I tried to run a bonescript program (the one from here), I got module bonescript not found!!!. WTF?

Since I had seen in the bonescript docs that the pinMode call doesn’t really work until bonescript 0.2.3, I figured I’d be smart and just try to upgrade to the latest npm version of bonescript.

Long story short: bad idea. After going through all sorts of hell to make the npm install work (including editing the node-gyp configuration file to avoid a bug in the python version check), I finally got bonescript 0.2.3 working. So, I tried my test program and…the network connection died. Every time I ran the program, the same thing happened.

Turns out, this is an issue with the latest bonescript. So, I ended up having to back down to the previous version of bonescript via opkg. Once I investigated the module bonescript not found issue, it turned out that, for some reason, the bonescript module in /usr/lib/node_modules wasn’t getting written, so I had to do an opkg remove bonescript followed by an opkg install bonescript to make it all work.

All this, and I haven’t even tried to get the XBee stuff working yet. Oy.

Node.js Bit Operations

I was working on trying to get the barometer readings from my TI SensorTag using Node.js when I came across this problem. See, the user’s guide has two code examples for the algorithm for the pressure - one in C, and one in Java. The one in C uses primarily bit shift operations, whereas the one in Java uses Math.pow() to do the same thing. Naturally, I tended towards the bit shift operations since it makes the code a bit clearer as to what it’s doing (ultimately, from a performance perspective, it doesn’t matter since Math.pow(2, x) probably ultimately resolves to bit shifts anyway).

However, when I did this, I kept on getting pressure values that were all over the place. When I broke it down, it looked like the “offset” and “scale” factors, which rely heavily on bit shifts, were bouncing all over the place.

So, I did some looking, and discovered that, although Node.js stores all its numbers as 64-bit floating point values, it performs bit operations on them as 32-bit integers. A quick test in the REPL showed that this is true:

> a = Math.pow(2, 31) - 1
2147483647
> a >> 10
2097151
> a = Math.pow(2, 32) - 1
4294967295
> a >> 10
-1
> a >> 2
-1

Notice the sharp transition once you cross that 32-bit border. What that meant was that any time I was doing bit operations, if the value I was operating on fit in a signed 32-bit integer, I got the right answer; otherwise I got garbage. Fun.

So, when I switched to the Java algorithm (modified, of course, because they seem to have forgotten to divide those values by 100 at the end) everything worked.
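The truncation is easy to demonstrate: any value of 2^32 or more gets wrapped to 32 bits before the shift even happens, while plain division by a power of two operates on the full double:

```javascript
var big = Math.pow(2, 40); // 1099511627776

// Bitwise operators first convert the operand to a 32-bit integer,
// and 2^40 mod 2^32 === 0, so the value vanishes before the shift
console.log(big >> 4);             // 0

// Division works on the full 64-bit float, so it gives the right answer
console.log(big / Math.pow(2, 4)); // 68719476736 (i.e., 2^36)
```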

Good. To. Know.

Published Bluetooth LE Module

Well, I published my Node.js module for Bluetooth LE, btle.js (pronounced “Beetle Juice”) to npm. Even though it’s labeled version 0.1.0, it’s got most of the functionality that’s necessary for Bluetooth LE - reading attributes, writing commands and requests, and listening for notifications. I’m hoping to add more functionality over the next few weeks/months.

The main reason I was doing this was to get my TI SensorTag working with my Raspberry Pi and Bluetooth LE dongle, which it now does. I’ve even got the beginnings of a Node.js module for the sensortag, sensortag.js, which is built on top of btle.js. I’ve got everything working except the Barometric Pressure sensor readings and the Gyroscope readings.

The nice thing about btle.js is that it’s purely native C++ code, talking directly to the Linux Bluetooth stack - it’s not having to shell out to run gatttool, for instance, which is pretty nice.

For anyone wanting to write some Bluetooth LE code for Linux in a non-Node.js environment, I extracted the low-level I/O code from the Bluez project, removing the dependency on glib so that it’s a true, low-level Linux I/O package. The higher-level code is dependent on libuv, of course - I couldn’t figure out any good way to separate them - but it shouldn’t be too difficult to extract what you need from there as well.

What to Do?

So, I haven’t posted in almost 2 months. Part of that is the fact that the last two months have been very busy - work is crazy busy trying to do all the work and get more, and I’ve been busy with the family doing end-of-summer stuff. However, part of the problem is, basically, organization.

I’m, basically, a fairly organized guy, in that I manage to prioritize things and get the high-priority things done. However, I’m also not particularly big on multi-tasking, so that, quite often, my high-priority tasks get all the attention, and all the little, lower-priority things (like this blog) get very little.

One thing I’m constantly searching for is a tool to help me manage all my projects, personal and professional. I’ve tried every todo list/organizational tool out there. Some I stick with for months at a time, others go away almost instantly, but I’ve never found one that’s quite right.

The problem is that none of them work the way I work, or organize things the way my mind organizes things, so whenever I use them, there’s always this cognitive dissonance going on, which gets worse and worse over time until…I find suddenly that I haven’t used it in a while, and everything’s out of date.

I think part of the problem is that every one I’ve tried is basically designed like a todo list on steroids - lots of bells and whistles, but at the end of the day, it’s just a list of things to get done. Unfortunately, that’s not how things work with me. Projects very rarely gel into discrete things to do, at least none but the most trivial of projects. For me, a project is fairly amorphous until I’m working on it, and the things to do suddenly appear to me - but by that time, I’m either doing them or about to do them. I very rarely need to keep track of them at that point - usually, at that point, they’re fairly obvious.

What’s worse, a task list is actually detrimental to my usual creative process. There’s nothing that can strangle creativity faster than having a laundry list of tasks to complete. What I need is not “Do A then B then C” but “Here’s what I was thinking the direction might be. Does that make sense now, or should I take it in another direction?”

For me, the difficulty lies in keeping track of all the projects, primarily, keeping track of where I am with the project, and when it was I last did something on it, and finally, my thoughts on the project - where I think I’ll be going next with it. In a way, it’s really more like notes to my future self about where my present self left things, rather than a set of todos.

Anyway, writing this down has helped me figure out at least a minimal set of requirements:

  1. I need something which will keep track of my projects
  2. Within the projects, I need some place to keep notes on the project, both notes about what I’m doing, and also a way to talk about where I think the project could go
  3. I also need a way of capturing the things that, in a general sense, I think need to be done. These aren’t necessarily tasks so much as general ideas about what might need doing.

The more I think about this, the more it seems like I need something GitHub-ish. For example, I just found this post about managing projects with GitHub, and, while they’re talking mainly about development projects, maybe there’s a way to leverage GitHub for my other projects, too.