Is “Feedback” Genuine Listening?

We should not assume that a group asking for “feedback” is really listening. Listening is a cultivated and individual skill.  Feedback is typically less refined and subject to organizational filters.   

Recently I noticed that the New York Times seems to have stopped publishing letters in its Sunday Magazine and Book Review. Not an earthshaking change, maybe. But this deletion of readers’ opinions struck me as odd when juxtaposed with the paper’s fall-over-backwards requests for feedback after something as simple as reporting a missing paper. The single checkmark notification is a nanosecond act, yet it provoked a request to know how satisfying or difficult the experience was. Their priorities seemed upside down. Why dismiss readers’ comments while keeping a useless exercise about a simple matter? I suspect this is a kind of irrationality that grows out of an automated system which doesn’t know what matters. We are in the midst of similar requests for feedback from CX (Customer Experience) teams responsible for designing the “customer journey” in retail. They can satisfy themselves by signaling concern for customers without setting up the tools needed to fully follow through. Listening is a demanding intellectual exercise; responding to a set of a priori questions is not.

It’s worth remembering that the term “feedback” arose as a name for noise or interference produced by an electrical circuit back onto itself. The deafening growl of a public address system is an example. We get a double dose of aural unpleasantness if Uncle Fred gets his karaoke microphone too close to the speakers.

To be sure, I’m an outlier for still expecting a newspaper to be in the driveway each morning. But this simple example suggests a growing trend in how we are asked to interact with agencies, businesses and organizations. Our communications with these entities seem less about the specifics of a response, and more about creating a running tally of stock compliments, complaints, or experiences that can be processed into data-driven marketing. “How did we do?” asks the online store. “Did we answer your question?” a tech website wants to know. The answers will only need a simulacrum of listening, without anyone knowing enough to learn much from the answer.

With some exceptions the idea of “customer care” now amounts to the creation of a digital interface between an increasingly impatient live body on one end, and a digital “bot” with a set of closed-option questions on the other. Companies like Bizrate specialize in setting up such systems for clients. But rarely do organizations allow a customer with a specific question to frame their issue in their own way. Speaking broadly, as a culture we are under the paradoxical impression that we need to appear consumer-driven, but we don’t need to hear that much. Surely customer comments can do some good. But we are already so overtaxed with incoming messages that these pre-formed exchanges seem like they hardly matter.

More often than not, an organization’s repertoire of stock “answers” cannot easily match the particular variables embedded in a question. Hence, no one is really “chatting.” We have all ended up at the top of a phone tree when none of the options seem right. To change metaphors, more times than I can count I’ve ended an exchange with a chatbot feeling like I got pushed onto the wrong train. Try dealing with your cable supplier, and you will likely come away feeling like you ended up going to Duluth rather than Dallas.

What is both ingenious and perverse in these end-of-transaction questions is how convincingly an organization can pretend that it is listening. The problem, of course, is that prompts generated by algorithms cost practically nothing to produce. And they may actually yield some data that can satisfy the performance expectations of management. The marketing department seems to be growing while the service department has been hollowed out. Odds are that an organization really doesn’t want to hear you on your terms.

The Scourge of Closed Option Questionnaires

Most organizations are disinclined to invest in the labor to directly address a consumer question or complaint. Their pattern of not wanting to authentically listen mirrors our modern malady of wanting to be heard more than we want to hear.

Organizations now operate with the perceived need to survey customers about the quality of the service they received. The impulse is fine. They want satisfied consumers. And they would welcome high ratings that can be part of their advertising and marketing campaigns. Then, too, many consumers now understand a thing or two about the logic of consumer behavior. They know that a failing business may only see a customer once, especially if competitors are just a click away.

Savvy customers and attentive businesses are all good. But the instruments for measuring customer satisfaction are often facile. The best tool for learning about a customer’s experience is a live representative ready to troubleshoot a problem. But person-to-person contact is increasingly rare. Most organizations are not inclined to invest in the staff it would require. Their reluctance to authentically listen mirrors our modern malady of wanting to frame a conversation before the other can respond. I have noticed that even at auto service departments, agents are often too busy keying in routine data about my car, such as mileage, to ask what kind of service it may need. Most don’t even ask why I made the appointment.

But the worst offender in the measurement of customer satisfaction is the online, phone, or mail questionnaire. Most are written to be tallied and converted into a number for each item. You know the drill:

“How would you rank your service experience?”

Very Good      Good        Fair          Poor

Would you recommend this product to others?       Yes         No

For obvious reasons these are called “closed option” questions. Your attitude is to be gleaned from whichever adjective you select on the form.

I recently completed a multi-page questionnaire for a newly purchased car. And, incredibly, even with all the questions, the carmaker failed to set up a form that would let me clearly state my reasons for the purchase. (If you are interested, key controls were not on a fussy touch screen.)

Closed option questions appear to be good for the organization because they can be tabulated, hence quantified, hence assumed “objective.” The bean counters among us love them. Even as a college teacher, I was required to give out these uniform questionnaires. But much of the feedback is coated in a thick fog of ambiguity. For a student, useful feedback to a professor is not the judgment that class lectures were only “fair,” but the reasons for circling that term.

If any organization asks a really good question about its service (e.g., “What was most disappointing about your experience?”), the organization might learn something, but this kind of open-option question cannot be numerically tallied. And A.I. technology is not that smart. A person within the company would have to read the statement and engage in some active problem solving (especially if the same problem is mentioned by others). That’s an interpretive act: the kind of creative analysis we are squeezing out of routine consumer practices. To be sure, a car manufacturer will get a great deal of attention from a company representative who wants to sell them a zillion tires. The consumer looking for just a good set of four? Not so much.