In my career, I have heard the quote “Consistency is the hobgoblin of little minds” whenever a client or business owner wanted to do something new, often for the sake of doing something new. The full quote from Ralph Waldo Emerson is “A foolish consistency is the hobgoblin of little minds.” There is a very important distinction there.

Consistency is essential to reducing the cognitive load of your interface – the mental effort required to complete a task. When a design is consistent, every interaction feels smooth and frictionless. When it is inconsistent, the user must expend unnecessary effort figuring out the interface instead of completing the work.

The field of user experience design has roots in human factors and ergonomics, a discipline that, since the late 1940s, has focused on the interaction between human users, machines, and their contextual environments in order to design systems that address the user’s experience. With the proliferation of workplace computers in the early 1990s, user experience became an important concern for designers. It was Donald Norman, a user experience architect, who coined the term and brought user experience to wider attention.

I invented the term because I thought human interface and usability were too narrow. I wanted to cover all aspects of the person’s experience with the system, including industrial design, graphics, the interface, the physical interaction, and the manual. Since then the term has spread widely, so much so that it is starting to lose its meaning.

—Donald Norman

The term also has more recent connections to user-centered design and human–computer interaction, and it incorporates elements from other, similar user-centered fields.

The thing about best practices is that they never stay the same.

Long ago, best practices told us that fixed-width websites using table-based design were the way to ensure a consistent experience for users (of course, all users were surfing on desktop computers, and you had to design for 800×600 resolution to reach all of them). Best practice also led us to the era of “looks best in Internet Explorer” or Netscape Navigator. Back then, I thought I was keeping up with the trends to help anyone who came to my site see things the way I intended.

My problem, and the problem shared by the people who created and popularized those best practices, was that I’d chosen my own familiar, comfortable context for the sites I built. I was building websites for my context: the browsing conditions I was used to. I was doing my work on a fast computer with a modern browser, a large high-resolution monitor, and a high-speed internet connection. That’s what the web was, to me.

We have to change our context from providing the web the way we intend to allowing visitors to consume our content the way they desire. That could mean on a mobile device, in any of a variety of browsers, or over a 3G connection. Our web resources have to be flexible enough to adjust to the context of the visitor, instead of allowing ourselves to set the ground rules. That means keeping up with the latest best practices, and being willing to challenge or even reject “best practices” that don’t serve our visitors.
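As a minimal sketch of that flexibility, consider a hypothetical page with a .content column and a .sidebar (the class names and values here are illustrative assumptions, not a prescription). A few lines of CSS let the same page adapt to a phone or a desktop instead of demanding one fixed width:

/* Fluid by default: fill whatever viewport the visitor brings,
   but cap the line length for readability on large screens. */
.content {
  width: 100%;
  max-width: 60rem;
  margin: 0 auto;
  padding: 1rem;
}

/* On narrow screens, stack the sidebar below the content
   rather than assuming a wide desktop monitor. */
@media (max-width: 600px) {
  .sidebar {
    float: none;
    width: 100%;
  }
}

The point isn’t these particular values; it’s that the stylesheet responds to the visitor’s context instead of dictating it.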

I teach orienteering and GPS navigation to Boy Scouts, and hear a lot from older Scouters on both sides. The map-and-compass guys tell me they would never trust themselves to a battery-powered device in the wilderness, and the GPS guys scoff at sets of directions like “go 342 degrees for 120 feet” and can’t believe anyone still does that. If you use a map and compass, you have to have a better feel for the topography around you and understand each step (pardon the pun) of getting from point A to point B. If you use a GPS, it doesn’t matter where you start: you can get to point B as long as you trust the technology implicitly and it doesn’t fail.

I see parallels in the discussions I have at work sometimes. There are older coders (like myself) who grew up with computers we programmed from the ground up, working through DOS and Unix commands, working intimately with the file system, and understanding HTML from the foundational levels. That gives us a strong background for understanding how the software performs and what could be wrong when things don’t work as expected. On the other hand, younger coders don’t care that you used to have a DOS layer with Windows on top and a browser above that. They have been digital natives their whole lives and can take for granted much of the early years of computing, because they rely on tools and frameworks that shield them from the minutiae. They have difficulty when the code doesn’t perform as expected, because they don’t really understand everything it is supposed to do in the first place. That said, they can also keep pace with change better, because they don’t have a ton of bad habits to break, and they don’t have the same blinders on about what is possible and what isn’t. Not knowing something is impossible is often the first step to making it possible.

I am grateful I grew up when I did and have an understanding of what came before, while being able to take advantage of tools that don’t require me to code everything by hand anymore. It’s a balance every generation strikes, and I’m sure years from now the young coders of today will be complaining that the new coders of the day don’t understand the CSS and JavaScript libraries underpinning the next generation of technology. To quote Battlestar Galactica: “All of this has happened before, and all of this will happen again.” So say we all.