Musings on designing experiences & (re)engineering complexity
You can’t improve what you don’t measure
In a recent conversation with the executive director of a youth-mentoring program, we talked about developing applets (small apps) and process changes that let one mentor spend less time on the administrative tasks of mentoring and more time on the direct "jobs to be done": seeing a direct correlation between activity and outputs. The director's face said more than any of the words that followed in that vein of the conversation. It wasn't just the attention to something innovative the organization had already wished for; it was that the solution made the immediate value of what was and wasn't being tracked easy to understand, and that made it ideal.
We shouldn't be surprised by such moments. At every stop, there's something of a "working yourself out of a job" quality to each role. That focus comes partly from an innate drive to find new challenges in a role, and partly from the leverage brought into those roles from other disciplines, focuses that academic and professional fields are starting to recognize are not as siloed as they'd been taught and compensated to be.
These applets and process tweaks are great, but what's the point? We can admit to doing them to avoid dealing with the finite details of processes or communications that the tools and behaviors of the age have likely already addressed. Successes here have opened items to a much larger audience, and those kinds of organizational changes end up rippling into other spaces. But if you are similar to us here, these are largely personal pieces. It might read as selfishness, but no one aims to improve another's job unless it makes their own job much easier first.
How do you decide what to do and not do? Look at friction points. Where are the gaps in operational or behavioral items where a rethink has been needed, but the resources or capacity weren't there? Can you take some off-the-shelf items and architect something differently enough that the friction and effort in existing methods are shown to hide opportunities? Are you intuitive enough to simply guess? It's never an accident when an off-the-wall guess takes you down some interesting roads. Many times, the decision to just do it leads to lessons that might not get employed until tools, technologies, or behaviors evolve. That's an okay return on the investment, though not so appealing to patience, budgets, or maybe even the role itself.
What kinds of metrics are pulled from these experiments, then? That depends entirely on the engagement. In one experiment, the question of how much time was spent on a program had been answered by someone's best guess. A purpose-built project tracker didn't just show the time engaged on the project, but how often the project manager updated the tracker (fuller engagement), what resources were developed (a centralized location versus email threads), and how that project compared to all the others (leading to some open budget conversations, among other things). A time-logging applet turns into a multi-calendar, invoice-generating, report-creating system. The information gleaned from such work isn't always an accidental find. The numbers are always there; exploring their value takes a different kind of effort.
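To make that concrete, here is a minimal sketch of how secondary metrics like those could fall out of raw tracker entries. The entry format, project names, and field names are invented for illustration; they are not from the actual tracker described above.

```python
from collections import defaultdict
from datetime import date

# Hypothetical tracker entries: each update logs a project, a date, and hours worked.
entries = [
    {"project": "mentoring-portal", "date": date(2021, 3, 1), "hours": 2.0},
    {"project": "mentoring-portal", "date": date(2021, 3, 3), "hours": 1.5},
    {"project": "newsletter", "date": date(2021, 3, 2), "hours": 4.0},
]

def tracker_metrics(entries):
    hours = defaultdict(float)   # time engaged per project
    updates = defaultdict(int)   # how often the tracker was updated (a proxy for engagement)
    for e in entries:
        hours[e["project"]] += e["hours"]
        updates[e["project"]] += 1
    total = sum(hours.values())
    # Each project's share of total tracked time enables the cross-project
    # comparison that opens up budget conversations.
    return {
        p: {"hours": h, "updates": updates[p], "share": h / total}
        for p, h in hours.items()
    }
```

The point of the sketch is that none of these numbers required new data entry: time engaged, update frequency, and cross-project share all come from the same log the manager was already keeping.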
That effort is the value of metrics. You can go down the line collecting numbers, but if you aren't validating the approach, or even the energy expended, then whatever metrics you've been holding to mean little to nothing. If you don't take the chance to explore, you won't know whether the metrics you are using are valid at all. In the experiments noted above, we found holes in a process that were better addressed by going outside of existing frameworks. This was innovative, but only because the normative behavior made clear there was no other way to define what success looked like. We had to go outside the norm to see there were better metrics, and better value to be pulled into the business, the technology, and the customer experience.
A company can hire for skills, but it cannot hire for innovation (to steal a recent tweet-quote). Innovation's value comes from effort toward some ideal state. What is that ideal state? Is it measurable today, or will something new need to be invented to get there? Are you maturing the capacity to do work differently? Or are you still working out the tasks to be done so that getting to the "jobs to be done" actually happens and your product or process is validated?
Ask these kinds of questions. Explore down these roads. And then see which metrics matter, and what you are actually doing to get there.