VOICES In the Trenches
How reliable are your portfolio’s customer satisfaction surveys?
By Chris D’Ascenzo, PgMP, PfMP
Chris D’Ascenzo, DBA, PgMP, PfMP, is founder of
Ascendt Group LLC, Green Lane, Pennsylvania, USA.
He can be reached at firstname.lastname@example.org.
USING CUSTOMER SATISFACTION SURVEYS in a
portfolio of IT projects and programs presents complex challenges. Portfolio managers must balance strategic objectives against external customers’ demands
spelled out in survey results. And in large organizations, portfolio managers have to align portfolios to
strategies that serve different customer cultures.
But one of the biggest challenges is even more fundamental: determining how useful these surveys are
and finding ways to improve them.
As an executive in the national security and defense
industry, I once led a portfolio containing over 100
projects, task orders and program elements, and more
than 1,000 IT employees. With so many moving parts, I realized we needed a customer satisfaction discipline to assess how our customers saw us. But I also knew it would not be enough on its own to gauge customer affinity or to drive investment decisions. Favorable customer satisfaction tends not to be a strong
predictor of retaining customers in the face of factors
like IT service delivery model efficiencies, migration to
cloud architectures and value pricing.
For four years, my portfolio team conducted
detailed customer satisfaction surveys addressing
responsiveness, teamwork, conflict resolution, back-
office support, agility and overall satisfaction. We also
developed a value measurement framework to add
context to the customer satisfaction data-collection
process. We implemented guidelines on tangible and
intangible benefits recognition, expected values of
benefits on a project or program basis, and a host of
approaches for customer engagement and analysis.
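A guideline like the one described above, recognizing tangible and intangible benefits and assigning them expected values per project, can be sketched in code. This is a minimal illustration, not the author's actual framework; the project name, benefit values and probabilities are assumptions invented for the example.

```python
# Hypothetical sketch of a benefits-recognition guideline: each benefit
# carries a dollar value and a probability of realization, and a project's
# expected benefit is the probability-weighted sum. All figures below are
# illustrative assumptions, not real portfolio data.

def expected_benefit(benefits):
    """Sum of value * probability-of-realization across listed benefits."""
    return sum(b["value"] * b["probability"] for b in benefits)

project = {
    "name": "Help desk consolidation",  # illustrative project
    "tangible": [
        {"value": 250_000, "probability": 0.8},  # e.g., labor savings
    ],
    "intangible": [
        # Intangibles (goodwill, responsiveness) estimated in dollar terms
        {"value": 50_000, "probability": 0.5},
    ],
}

ev = expected_benefit(project["tangible"]) + expected_benefit(project["intangible"])
print(f"{project['name']}: expected benefit = ${ev:,.0f}")
```

Separating tangible from intangible benefits, as the guidelines did, keeps the softer estimates visible rather than buried in a single number.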
This customer satisfaction survey discipline worked.
Still, we could not rely on survey data alone to assume we would garner new business or retain long-term contracts during a market downturn.
While it provided good feedback, it did not yield a reli-
able leading indicator of repeat business.
So my organization put the customer satisfaction information to more strategic use. To shape the portfolio, we began using the satisfaction metrics themselves to look past the raw scores and identify potential gaps in our competitive differentiation.
For example, the surveys showed that although
clients gave us excellent ratings in the majority of the
portfolio accounts, it was getting cheaper for them
to switch to alternative IT services suppliers. This
prompted us to evaluate more aggressive and creative
pricing and staffing solutions. Additionally, I noticed
over time that several clients gave us excellent overall
satisfaction ratings, but their emphasis on back-office support rose steadily as client budgets grew more aggressive. Ignoring this trend, excellent satisfaction scores notwithstanding, would have dulled our competitive value proposition going forward.
Looking beyond the obvious customer satisfaction data also allowed us to identify trends in customer priorities. We surfaced these trends by running regression analyses on the satisfaction data. In several instances, subtle shifts in customer strategic emphasis, such as teleworking and the evolution of social media infrastructure, were detectable, just enough to help us develop counterstrategies to shape the portfolio and ward off competition.
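The trend detection described above can be sketched with a simple least-squares slope per survey dimension over successive survey periods. The dimension names, scores and flag threshold below are invented for illustration; the article does not specify the regression model or data used.

```python
# Hypothetical sketch: detecting shifts in customer priorities by fitting a
# least-squares trend line to each survey dimension's scores over time.
# Dimension names, scores and the 0.1 threshold are illustrative assumptions.

def trend_slope(scores):
    """Ordinary least-squares slope of scores vs. survey period (0, 1, 2, ...)."""
    n = len(scores)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(scores) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, scores))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

# Quarterly 1-5 ratings for two dimensions (illustrative values).
history = {
    "overall_satisfaction": [4.6, 4.7, 4.6, 4.7],  # flat: no action signal
    "back_office_support":  [3.9, 4.1, 4.4, 4.7],  # rising emphasis
}

for dimension, scores in history.items():
    slope = trend_slope(scores)
    flag = "watch" if abs(slope) >= 0.1 else "stable"
    print(f"{dimension}: slope per quarter = {slope:+.2f} ({flag})")
```

The point of the slope is exactly the one made in the passage: a dimension can carry an excellent absolute score while its trajectory signals a shifting priority worth a counterstrategy.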
In the case of customer satisfaction measurement,
we learned valuable lessons about collecting data and
looking beyond it. I continue to try to find more leading indicators of portfolio performance. PM