Let's face it: the last time I thought about the scientific method was probably in high school, back when the chairs for those heavy, black-topped lab tables were too tall for your knees to fit underneath, and wearing safety goggles meant peering through the dust and film left on the plastic by years of science classes. I never practiced anything close to science after high school, so I don't have an intimate knowledge of the process, but I'd like to think there's something to be gleaned from a proven methodology.
Beyond all the awkwardness of teenage misunderstanding, one thing I never got to do in science class was use the scientific method as it was intended: as iteration. It builds in an understanding that failure is almost certain and that we're never going to prove our hypothesis correct the first time (unless we're superhuman or cheating).
A traditional waterfall approach is much like the scientific method of high school: we go through the motions, but there's never a chance to go back, change our hypothesis, and test again. Once one 'deliverable' is signed off on, it's impossible to go back and change that part of the equation. If we're lucky, some testing may be done on wireframes or visual design comps, but once that phase is complete, it's very difficult to change anything. Waterfall protects teams from ever-changing scopes and clients that can't decide what they want, sure, but it also holds teams back from creating the best products.
Yet agile doesn't work for every project. So how can we adapt our traditional design process to factor in the near certainty that we will fail, not at everything, but at something? I'd like to think we're all better designers tomorrow than today, and there have definitely been projects where I could've gone back and fixed aspects of a design based on insights learned later on. Perhaps we should be treating both user experience and visual design in a frame much like the scientific method, so we can iterate and bring more rigor to the design process.
Let's take a look at the steps of the scientific method and see how to use them in terms of our own design process.
First comes the question. These usually end up being 'How' questions more than 'Why' questions. Many times, they come out of our kickoff meetings with clients, where we go through a few different exercises to prioritize goals. In one of our recent kickoffs, the top priorities were to increase conversions and to simplify and automate specific parts of the CMS. Let's take that one step further by turning those priorities into questions: How can we increase conversions from the homepage, getting more people to sign up for the service? How can we simplify the CMS so bulk content can be added and maintained automatically?
Next, the hypotheses. These are the educated guesses we make when we take design decisions: Having only one button on the homepage will increase clicks through to that page. Friendly, family-oriented, candid photography resonates with our top personas, making it more likely they'll explore the website. People would rather fill out a contact form than send an email to an anonymous address.
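A hypothesis like the single-button one only earns its keep if it's measurable. Here's a minimal sketch of how that might look as an A/B test, in Python with only the standard library; the visitor and conversion counts are invented for illustration, and the 1.96 cutoff is the usual 95% significance threshold.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-statistic: how far apart two conversion
    rates are, measured in standard errors under the pooled null
    hypothesis that both variants convert at the same rate."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: control homepage with several calls to
# action vs. the single-button variant our hypothesis proposes.
z = two_proportion_z(conv_a=48, n_a=1000, conv_b=74, n_b=1000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests the lift is real
```

The point isn't the statistics so much as the habit: writing the hypothesis down as a comparison forces you to decide, before designing, what counts as success.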
Then the experiment: design is executed and tested. My preference is to test as close to the production format as possible (usually code), but some testing is better than none. Any time I've done user testing, it quickly became clear whether the assumptions I made while designing were correct or ended up confusing users. Asking the questions before designing also makes testing easier, since the things to test for already exist.
Finally, analysis. This is where all the testing results are sorted through to determine whether our hypotheses were correct or incorrect. New hypotheses, execution, and testing may be needed, and should be planned for now.
We should assume that some of our hypotheses were wrong and build at least one phase of iteration into our design timeline before showing clients. Why? Because it makes any design decision easier to defend to a client: it's been tested and proven. Instead of shrugging or reaching for some arbitrary design-speak bullshit to defend a design, a designer can point to specific tests as the reasons behind decisions.
Well, for starters, testing gets built into the process instead of tacked onto the end of a phase. Instead of letting assumptions surface mid-design, hypotheses get created specifically to answer our key questions before any design starts. This narrows the focus of the design phase down to the main goals identified for the build. When testing starts, the specific things to test for are already spelled out in our key questions and hypotheses. And the analysis of those tests provides strong documentation and defense for any design decision, in both UX and visual design.
Adding this methodology to a timeline may lengthen it, so any team needs to seriously consider where to cut the fat. (Personally, I think it exists in visual comp deliverables, but knowing code, I can be very biased.) What it does is strengthen and defend designs, helping them become better by building in iteration and testing through specific questions and assumptions.