Have you ever had a retail experience that starts with a bang and ends with a thud?
That’s what happened to me recently at Wachovia Bank, and I share the story because it illustrates a core practice of Customer Satisfaction surveys that needs re-engineering.
I had occasion to open an account at Wachovia and the experience was the best I have had in retail banking for many years. It was influenced by two key factors:
- To my good fortune, the branch was in a small community and oriented to personal service
- The staff were all old school bankers who take time to know their customers and listen to their needs
The very next day, I received a call at home (yes, it was at dinner time) as a follow-up to my visit to the branch. Impressed with the quick follow-up, I agreed to participate in a live survey that was to take “3-5 minutes”. The questions were general and asked for ratings on a 1-7 scale. I made a mental note that many questions were repetitive and, like many Customer Satisfaction surveys, seemed to be crafted to elicit as many “7’s” as possible.
When we wrapped up, the operator asked me if I would be willing to participate in an additional “3-5 minute” survey that would home in on branch-specific questions. I agreed because I wanted to give props to the branch personnel. When the automated survey devolved into no more than a digital repetition of the first survey and gave me no opportunity to recognize the people who provided such good service in the branch, I disconnected.
Just like on Gilligan’s Island, when they started out for a “3 hour tour, a 3 hour tour”, I felt that I had been invited to spend 3-5 minutes and then duped into a 12-15 minute experience that was artificial and frustrating.
I really do not understand how corporate executives continue to be lulled into attaching importance to Customer Satisfaction surveys. Most surveys are prefaced with a reminder that “only a 10 or 7 will indicate full satisfaction”, and the questions are too generic to yield any real insight. When consumers are lulled to sleep by too many questions with too few discernible differences between them, responses become less and less meaningful. Face it: in that circumstance, consumers just want to wrap it up and couldn’t care less what number rating is offered up.
This is one more item of evidence that cements my conviction that Customer Satisfaction surveys are not reliable indicators of future customer loyalty or intent to repurchase. The survey I would like to conduct would gauge the level of skepticism associated with Customer Satisfaction awards, the ones you often see featured in automobile advertisements. Given the customer experience I had with survey execution, why should we really believe that one of those trophies means the car next to it is worth our hard-earned money?
It is time to rethink, restructure, and rewrite how these surveys are executed if they are to continue to have relevance in the market.