Tag Archives: usability

Monkigras 2014: Sharing craft

After Monkigras 2013, I was really looking forward to Monkigras 2014. The great talks about developer culture and creating usable software, the amazing buzz and friendliness of the event, the wonderful lack of choice over which talks to go to (there’s just one track!!), and (of course) the catering:


The talks at Monkigras 2014

The talks were pretty much all great so I’m just going to mention the ones that were particularly relevant to me.

Rafe Colburn from Etsy talked about how to motivate developers to fix bugs (IBMers, read ‘defects’) when there’s a big backlog of bugs to fix. They’d tried many strategies, including bug rotation, but none worked. The answer, they found, was to ask their support team to help prioritise the bugs based on the problems that users actually cared about. That way, the developers fixing the bugs weren’t overwhelmed by the sheer numbers to choose from. Also, when they’d done a fix, the developers could feel that they’d made a difference to the user experience of the software.

Rafe Colburn from Etsy

While I’m not responsible for motivating developers to fix bugs, my job does involve persuading developers to write articles or sample code for WASdev.net. So I figure I could learn a few tricks.

A couple of talks that were directly applicable to me were Steve Pousty‘s talk on how to be a developer evangelist and Dawn Foster‘s on taking lessons on community from science fiction. The latter was a quick look through various science fiction themes and novels applied to developer communities, which was a neat idea though I wished I’d read more of the novels she cited. I was particularly interested in Steve’s talk because I’d seen him speak last year about how his PhD in Ecology had helped him understand communities as ecosystems in which there are sometimes surprising dependencies. This year, he ran through a checklist of attributes to look for when hiring a developer evangelist. Although I’m not strictly a developer evangelist, there’s enough overlap with my role to make me pay attention and check myself against each one.

Dawn Foster from Puppet Labs

One of the risks of TED Talk-style talks is that if you don’t quite match up to the ‘right answers’ espoused by the speakers, you could come away from the event feeling inadequate. The friendly atmosphere of Monkigras, and the fact that some speakers directly contradicted each other, meant that this was unlikely to happen.

It was still refreshing, however, to listen to Theo Schlossnagle basically telling people to do what they find works in their context. Companies are different and different things work for different companies. Similarly, developers are people and people learn in different ways so developers learn in different ways. He focused on how to tell stories about your own failures to help people learn and to save them from having to make the same mistakes.

Again, this was refreshing to hear because speakers often tell you how you should do something and how it worked for them. They skim over the things that went wrong and end up convincing you that if only you immediately start doing things their way, you’ll have instant success. Or that feeling of inadequacy kicks in, like when you read certain people’s Facebook statuses. Theo’s point was that it’s far more useful from a learning perspective to hear about the things that went wrong for them. Not in a morbid, defeatist way (that way lies only self-pity and fear) but as a story in which things go wrong but are righted by the end. I liked that.

Theo Schlossnagle from Circonus

Ana Nelson (geek conference buddy and friend) also talked about storytelling. Her point was more about telling the right story well so that people believe it rather than believing lies, which are often much more intuitive and fun to believe. She impressively wove together an argument built on various fields of research including Psychology, Philosophy, and Statistics. In a nutshell, the kind of simplistic headlines newspapers often publish are much more intuitive and attractive because they fit in with our existing beliefs more easily than the usually more complicated story behind the headlines.

Ana Nelson from Brick Alloy

The Gentle Author spoke just before lunch about his daily blog in which he documents stories from local people. I was lucky enough to win one of his signed books, which is beautiful and engrossing. Here it is with my swagbag:

After his popular talk last year, Phil Gilbert of IBM returned to give an update on how things are going with Design@IBM. Theo’s point about the importance of a company’s context is especially relevant when trying to change the culture of such a large company. He introduced a new card game that you can use to help teach people what it’s like to be a designer working within the constraints of a real software project. I heard a fair amount of interest from non-IBMers who were keen for a copy of the cards to be made available outside IBM.

Phil Gilbert’s Wild Ducks card game

On the UX theme, I loved Leisa Reichelt‘s talk about introducing user research to the development teams at GDS. While all areas of UX can struggle to get taken seriously, user research (eg interviewing participants and usability testing) is often overlooked because it doesn’t produce visual designs or code. Leisa’s talk was wonderfully practical in how she related her experiences at GDS of proving the worth of user research to the extent that the number of user researchers has greatly increased.

And lastly I must mention Project Andiamo, which was born at Monkigras 2013 after watching a talk about laser scanning and 3D printing old railway trains. The project aims to produce medical orthotics, like splints and braces, by laser scanning the patient’s body and then 3D printing the part. This not only makes the whole process much quicker and more comfortable; it also costs a fraction of what traditionally made orthotics cost.

Samiya Parvez & Naveed Parvez of Project Andiamo

If you can help in any way, take a look at their website and get in touch with them. Samiya and Naveed’s talk was an amazing example of how a well-constructed story can get a powerful message across to its listeners:

After Monkigras 2014, I’m now really looking forward to Monkigras 2015.


Monkigras 2013: Scaling craft

The work of William Morris, my GCSE history teacher said, was a bit of a moral dilemma. Morris was a British designer born during the Industrial Revolution. British (and then world) industry was moving rapidly towards mass production by replacing traditional, cottage-industry production processes with more efficient, and therefore more profitable, machines. One casualty of this move to mass production was decoration and quality, which lost out to function and quantity. Morris reacted against this by designing and producing decorations like wallpaper and textiles using the traditional craft techniques of skilled craftspeople. My history teacher’s point was that although Morris, a passionate socialist, was able to create high-quality goods by using smaller-scale production methods, only wealthy people could afford to buy his designs, which was hardly equality in action. On the other hand, the skills of craftspeople were being retained, quality goods were being produced, and the craftspeople were getting paid for the quality of their work.

My pretty, handcrafted latte
My pretty, handcrafted latte

Monkigras 2013, in London last week, took on this theme of ‘scaling craft’ in the context of beer, coffee, and software, all of which can benefit hugely from a focus on quality over quantity. Before I went to Monkigras, I wasn’t really sure what to expect from a tech event advertised as having a lot of beer. It did have a lot of beer (and coffee) available but if you didn’t want it you could avoid it (several people I talked to said they didn’t usually drink beer). And no one seemed to get ridiculously drunk. And there were a lot of very cool talks.

The beer was also a fun analogy to apply to software development. Despite pubs in the UK closing hand over fist at the moment, microbreweries are on the rise. Microbrewing is about producing beer in small quantities on a commercial basis so that quality can be maintained whilst remaining viable as a business. One of the things we learnt from a brewer at Monkigras is that the taste of water varies according to where it comes from. Water is a major component of beer so if the taste of your water supply changes, the taste of your beer changes. To maintain the quality of the beer you brew, you must work within the natural resources available to you and not over-expand. Similarly, quality comes from skilled and knowledgeable people who need to be paid for their skill. If you take on cheaper staff and train them less so that you can make more profit, you will end up with a poorer quality product. You get the idea.

Handcrafting a wooden spoon.
Handcrafting a wooden spoon.

This principle applies to all areas of craft, whether it’s producing quality coffee, a quality wooden spoon, quality conference food, or organising a quality conference, you have to focus on quality and ensure that if you scale what you do so that it’s more readily available to more people, you don’t sacrifice quality at the same time. And, importantly, that you know when to stop. Bigger doesn’t necessarily mean better.

Software is misleadingly easy to produce. Unlike making physical objects, there is very little initial cost to producing software; you can make copies and then distribute them to customers over the Internet at very little cost. Initially, at least, it’s all in the skill of the craftspeople and their ability to identify their target users and market. If they can’t make what people will buy, they will go out of business very quickly. As software development companies get larger, the people who make the software become further removed from the selling of that software to their customers. So they become more focused on what they are close to: the technology, not the people who will use it.

Phil Gilbert on IBM Design Thinking
Phil Gilbert on IBM Design Thinking

Phil Gilbert, IBM’s new General Manager of Design, comes from a 30-year career in startups, most recently Lombardi, where design was core to their culture. IBM has a portfolio of 3000 software products so, when Lombardi was acquired by IBM, Phil set about simplifying the IBM Business Process Management portfolio of products, reducing 21 different products to just four and kicking off a cultural change to bring design and thinking about users to the centre of product development. Whilst praising IBM’s history of design and a recent server product design award, he also acknowledged at Monkigras: “We are rethinking everything at IBM. Our portfolio is a mess today and we need to get better”. Changing a culture like IBM’s isn’t easy but I’ve seen and experienced a big difference already. Phil’s challenge is to scale the high-quality user-focused design values of a startup to a century-old global corporation.

One of the things that struck me most at Monkigras, and appealed to me most as a social scientist, was the focus on the human side. Despite it being a developer conference, I remember seeing only one slide that contained code. The overriding theme was about people and culture, not technology; how to maintain quality by maintaining a culture that respects its craftspeople and how to retain both even if the organisation gets bigger, even if that naturally limits how much the organisation can grow. Personal analogy was also a big thing…

Laser-scanned model of the engine
Laser-scanned model of the engine

Cyndi Mitchell from Logspace talked about her family’s hog farm and working within the available resources. Shanley Kane from Basho used Dante’s spheres to describe best product management practices. Steve Citron-Pousty from RedHat used his background as an ecologist to manage communities and ‘developer ecosystems’ (don’t just call it an ecosystem; treat it like one). Diane Mueller from ActiveState talked about her 20%-time project to build a crowdsourced database of totem poles and the challenges of understanding what gets people to want to contribute to such projects. Elco Jacobs talked about his BrewPi project: automatically managing the temperature of his homebrewing fridge using a Raspberry Pi-based controller, and how he has open-sourced it to build a community and kick-start it as a potential small business. Rafe Colburn from Etsy more directly makes the link between craft and software engineering in his slides.

3D printer making a spoon
3D printer making a spoon

I don’t know much about William Morris so I don’t know which presentations he would have enjoyed or disagreed with. Morris was a preservationist and started the Society for the Protection of Ancient Buildings to ensure that old buildings get repaired and not restored to an arbitrary point in the past. So maybe he would have found laser-scanning and 3D printing interesting. Chris Thorpe is a model train geek and likes to hand-make his own models of real-life objects. He too is interested in alternatives to mass manufacturing and has started to look at how to make model kits. He uses a laser to scan the objects and a 3D printer to prototype the models. He can then send the model to a commercial company who can make it into kits for him to sell. He has recently used his laser-scanning technique to scan a rediscovered old Welsh railway engine to preserve it, virtually at least, in the state in which it was found.

I had a great time with lots of cool and fun people. Well done to @monkchips for scaling a conference to just the right level of intimacy and buzz. The last thing I saw before I left was the craftsman making a wooden spoon pitted in competition against the 3D printer making a plastic spoon.

You can find many of the slide presentations and more about the conference on Lanyrd.

How do you help the user decide?

One thing that I often debate with developers is why the error message “An unexpected error has occurred.” isn’t a good message. After all, to the developer, the error *is* unexpected; otherwise, they’d have created a better error message for it.

From the user’s perspective (and it is the user the software is written for, after all), they don’t want to know that the error was not expected by the people who wrote the code. The user wants to know that the software (especially when it’s important to their job/finances/life) is in control and knows *exactly* what’s going to happen when they press a certain button. The user has to be able to trust the software and trust that the developer(s) of that software knew what they were doing when they wrote it.
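To make the contrast concrete, here’s a minimal sketch of the difference between catching everything with a generic “unexpected error” and reporting failures in the user’s own terms. The function name, file path, and `--init` flag are made up for illustration:

```python
# Hypothetical sketch: replacing "An unexpected error has occurred."
# with messages that say what went wrong AND what the user can do next.

def load_config(path):
    """Read a configuration file, reporting failures in the user's terms."""
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        # Name the problem and suggest a recovery action.
        raise SystemExit(
            f"Could not find the configuration file '{path}'. "
            "Check that the file exists, or run with --init to create one."
        )
    except PermissionError:
        raise SystemExit(
            f"You don't have permission to read '{path}'. "
            "Try adjusting the file's permissions or asking your administrator."
        )
```

The point isn’t the specific wording; it’s that each failure mode the developer can anticipate gets its own message, so “unexpected” is reserved for the genuinely unanticipated.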

So it gets difficult for the developer/designer when they have to make a call that potentially risks breaking that trust. For instance, supposing you (as a developer) were to provide a new feature that is really beneficial to the target user but there is a small risk that something will go wrong in a big way for that user if they try to use that feature. As developer, what do you do?

Typically, I’d predict, you would make the feature optional so that you aren’t forcing the user to use a feature that could potentially (however unlikely) cause them serious problems. If the user does try to use the feature, you provide scary warnings of what could occur in certain circumstances. Hopefully, that will put them off unless they really know what they’re doing.

Okay, that’s the developer’s perspective. And it’s entirely understandable and even laudable that the developer is doing what they can to keep the user safe.

So, switch now to the user’s perspective. The target user is computer literate but had no knowledge of the development of this feature or who developed it. This user could benefit greatly from this new feature but when they attempt to use it, they get a scary warning message which, as intended, makes them think twice about whether to use the feature or not.

Now what does the user decide? Granted, risk is all about weighing up the costs and benefits, and to one person the relative benefit will outweigh the possible cost. To make a decision, however, a person needs as much information as they can possibly get. In this case, the only (and therefore critical) information is provided by the developer; that is, how informative and/or scary the warning message is.

If the developer provides a lot of information that makes the feature look useful, the user might just choose to use it. But if the developer makes the warning message as scary as possible, the user will probably opt not to use it.

The developer wants users to use the new feature because they’ve made the effort to develop it and it really could benefit many users. The developer, however, doesn’t (understandably) want the responsibility of trashing someone else’s laptop in some way. So the result is that the developer pushes that responsibility off on to the user, when in fact the developer has far more information available to help them make that decision than the user has.

If you’re the user, though, how are you supposed to make that call?

For example…

Computer Janitor is a utility that was introduced in Ubuntu Intrepid (I think) so that you could run it to clean up old kernels that are no longer needed, and other bits and pieces of packages that are no longer used. When I first tried to use it, I raised this bug, which, it turned out, had this duplicate.

Essentially, CJ could potentially remove packages that you might need. So when you try to use it, it tries to scare you into deciding whether you really, really want to risk it. I raised the bug because the scary words don’t actually help you decide: if you aren’t easily scared by such things, the message only determines how scared you are, not how well informed you are to make a decision…and it isn’t going to help when you break your computer.

What would maybe be more helpful is if CJ used a stricter set of criteria when selecting which files to remove. In this case, CJ might leave on your system some files or packages that could be removed, instead of the reverse, where it might incorrectly remove files or packages you need. The former is surely the preferable outcome for the majority of users (who would rather have a few unneeded files on their machine than a broken machine).

It would also be possible then, for the minority of users who really really know what they’re doing, to selectively delete the files that probably can be removed but CJ isn’t certain about. In this case, users are only presented with a decision to make if they actually seek it out but the majority of users are still able to benefit from the safer (if slightly less effective) behaviour. In fact, it would be better overall if CJ ran automatically during an Ubuntu upgrade so that the user really doesn’t have to care about it (unless they really really want to).
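The safer-by-default behaviour could be sketched as a simple partition: removal candidates are split into those that strict criteria say are certainly unused (acted on automatically) and those the tool isn’t sure about (surfaced only to users who seek them out). This is purely illustrative; the package names and the `definitely_unused` check are hypothetical, not Computer Janitor’s actual logic:

```python
# Hypothetical sketch of safe-by-default clean-up: only remove what the
# strict criteria are certain about; leave the rest for expert review.

def partition_candidates(candidates, definitely_unused):
    """Split removal candidates into 'safe to remove' and 'uncertain'.

    candidates: package names proposed for removal.
    definitely_unused: set of packages known, by strict criteria, to be unused.
    """
    safe, uncertain = [], []
    for pkg in candidates:
        (safe if pkg in definitely_unused else uncertain).append(pkg)
    return safe, uncertain

# Default behaviour removes only the safe list; the uncertain list is
# shown only to users who explicitly ask for it.
safe, uncertain = partition_candidates(
    ["old-kernel-3.2", "libfoo2", "libbar1"],
    definitely_unused={"old-kernel-3.2"},
)
```

The design choice is that the cost of a false negative (a few unneeded files left behind) is far lower than the cost of a false positive (a broken machine), so the default errs on the side of keeping things.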

This is not intended as a dig at Computer Janitor as I think it’s a useful feature in general and I’m kind of surprised that this kind of clean-up wasn’t being done already whenever you do an upgrade of Ubuntu. Also, I think the bugs I’ve linked to above have caused a bit of a headache for the developers.

This issue of forcing users to make ill-informed decisions is a very common occurrence throughout software development and is certainly not specific to Ubuntu; it’s just that Ubuntu is a public development effort and provides examples that are relatively easy to explain. 🙂 So please don’t be offended if you are part of the development teams for either Computer Janitor or Ubuntu!

So, if you, as a developer/designer, find that you’re having to give scary messages to make a user *really* decide if they want to continue, consider stepping back, thinking about the possible decisions the user could make, and what the consequences of those decisions are. Even talk to some of your target users and find out what decisions they’d make. Just because you can successfully scare them off doesn’t make it a successful feature; if the feature is potentially useful to the user, they should be able to use it safely (no matter what level of fear you instill in them). Then look at the bigger picture, think about it in a different way, and see if the decision can be made for the user, or even removed altogether.

I’m sure that it’s not as easy as it might sound, and it’s not always easy to recognise situations like this. I’m hoping, though, that having thought this through while writing this post, I’ll actually remember it the next time a similar issue occurs for me.