A couple of days ago I watched a video created by Google to show Webmasters the tools that Google provides to help them make their sites as good as possible. The video contains tons of great information and the tools that Google provides for free look fantastic.
One of the coolest tools, and one that I didn’t know about before this video, allows Webmasters to do A/B testing of their sites. If you aren’t familiar with A/B testing, the general concept is that each visitor is shown one of two versions of your site, an A version or a B version, chosen at random but in equal proportions over the same period of time. So for example, over the next week half the visitors to my site might see version A with a big green button for proceeding to the next step, while the other half see version B with a big orange button. The point of the tool is to see whether A or B is the better design based on what percentage of users reach the next step with each.
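The mechanics described above are simple enough to sketch in a few lines. This is just an illustration of the idea, not Google's actual tool: visitors are randomly split between two variants in equal proportion, and at the end of the test period you compare what fraction of each group reached the next step. The function names and data shapes here are my own invention.

```python
import random

def assign_variant(rng):
    # Each visitor is randomly shown version A or B, in equal proportion.
    return "A" if rng.random() < 0.5 else "B"

def conversion_rates(visits):
    # visits: a list of (variant, reached_next_step) pairs collected
    # over the same period of time for both variants.
    shown = {"A": 0, "B": 0}
    converted = {"A": 0, "B": 0}
    for variant, reached_next_step in visits:
        shown[variant] += 1
        if reached_next_step:
            converted[variant] += 1
    # Percentage of users who got to the next step for each variant.
    return {v: converted[v] / shown[v] for v in shown if shown[v] > 0}

# Example: four recorded visits over the test period.
visits = [("A", True), ("A", False), ("B", True), ("B", True)]
rates = conversion_rates(visits)
# rates["A"] == 0.5, rates["B"] == 1.0 for this tiny sample
```

In practice a real test would also need enough traffic for the difference between the two rates to be statistically meaningful, which is part of what a hosted tool handles for you.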
The Google presenter describing this tool talked about how difficult it is to know whether a design is truly good just by looking at it. He recommends this kind of “design by data” as the optimal way to create any site.
I want very badly to disagree with this. I have a gut feeling that it is wrong, but it is very difficult to argue against Google’s data-centered reasoning.
My gut says that designers will use this data as a crutch – as a way to defend lazy design that looks exactly like the design they’ve put out previously. “No, we can’t do it that way, remember we tested it.”
I think another step needs to be added to the testing process: figuring out why it works. Why does design A work better than design B? If you can’t find out why, you can’t add anything to your stockpile of design wisdom. Once you know why, you can iterate on that reason to find an entirely new set of designs. You can use that data point to spark creativity rather than inadvertently stifle it.