In May, after what seemed like an eternity, I finished my graduate work. The university I attended was research-heavy and encouraged its graduate students to publish and conduct experimental research. Over three years of seemingly endless lecture hours, as professors urged us to publish, they repeated what seemed like a mantra: “you learn something from failure.”
Each time this was repeated I thought, “Yes, if you’re the first one to conduct research in a field, or if you’re trying to replicate someone’s results for the first time, you learn something from failure. But if you do all the necessary research, your hypothesis and your results should align.”
Then I failed.
A few months ago I sat in my graduate advisor’s office, my head in my hands, staring at an SPSS readout. “This should not be happening,” I said. “I have twenty-odd studies that say this is not the expected result.”

“The results are not as expected,” he said. “There are variables we may not have accounted for, but what you have here is still valuable.”
A little rankled at the suggestion that I might not have controlled well enough, I said, “I’ve read absolutely everything out there. I have three massive portfolios and a full USB drive of research. I’ve even contacted the researchers myself to ask if there’s anything they thought I should review. How did I miss something?”
“Then we both missed something,” he said, and went on to explain that my proposal had been reviewed by four other professors in the department and deemed sound. Seeing my growing frustration, he repeated the line: “Remember, you always learn something from failure.”
This was the first time I truly understood what my professors had been trying to tell me. Together, my advisor and I outlined three variables that could have influenced my results, included them in my discussion section, and placed a heavy caveat on my findings.
Flash forward to last week, when a vendor sent me a report on a website with several key metrics missing. I asked the vendor to revise the report with the metrics included, to which they responded that they only liked to report positive metrics to clients. My first thought upon reading the email was, “you always learn something from failure.”
As the vendor had stated, the omitted metrics weren’t favorable; they pointed to likely deficits in the project. But leaving them out painted an inaccurately rosy picture and would have kept the client from seeing the issues that needed to be addressed.
Including those metrics in the report allowed us to say to the client, “users are really engaging here, here and here, but there are some weaknesses we should consider addressing.”
This is why I’m a stickler for honesty and transparency in reporting web and social analytics. Omitting results because they’re unfavorable isn’t OK in any context; we, and our clients, need to know where we may be ‘failing’ so we can learn and fix the issues going forward.
As I was once told by several wise men and women, you always learn something from failure.