
The Vanishing Deliverables


1. Monday Morning Panic

Lisa Hughes stared at the screen.
Her project tracker looked like a crime scene: amber flags turning red, tasks overdue, owners missing. The “Digital Citizen Portal” — a key part of the council’s transformation programme — was supposed to go live in three weeks.

Instead, deliverables were quietly vanishing.

She scrolled again, heart sinking. No updates from content design, no sign-off from procurement, and the supplier’s milestone report hadn’t arrived. Yet every meeting last week ended with the same cheery line: “We’re on track.”

Lisa wasn’t new to trouble. In ten years of delivery management across local government, she’d seen enough projects drift. But…

Something about this one felt slippery — like trying to hold water in her hands.

Everyone seemed busy, yet nothing moved.


2. The Stand-Up That Went Sideways

At the daily stand-up, the team gathered in their usual horseshoe:

  • Tom from IT, muttering about API dependencies.
  • Priya from Comms, juggling content approvals.
  • Ahmed, the supplier’s PM, camera off on Teams as always.

“Okay,” Lisa started, trying to sound calm. “Let’s go around — what’s blocking progress?”

Silence. A few shuffled papers.

Then Tom said, “We can’t move forward on integration until the service owner signs off the data fields.”
Priya frowned. “I thought that was approved?”
“It was,” Tom said, “but Legal raised something new about GDPR.”
“That’s news to me,” Priya said. “Who’s dealing with Legal now?”
“Not me,” said Tom. “That’s Ops.”
“No,” said Lisa, “Ops said they only handle the data hosting, not content capture.”

The conversation spiralled. By the time they finished, they’d agreed to “pick it up offline” — which, Lisa knew, meant no one would.

She closed the call and opened her notebook. In big letters she wrote:

Who actually owns what?


3. Rewinding the Tape

Later that afternoon, Lisa met with her programme manager, Hannah.

“Everyone’s working hard,” Lisa said, “but no one’s accountable. We’ve got three teams, four departments, and two suppliers. Each thinks the other’s got it.”

Hannah nodded. “Classic council matrix. We set up governance last year, remember? There’s a steering group.”

“Right,” said Lisa. “But the steering group thinks they’re ‘advisory’. The delivery teams think the steering group owns decisions. It’s a hall of mirrors.”

“Okay,” said Hannah. “Map it. Who’s responsible for what?”

That word — map — sparked something. Lisa opened her laptop and sketched a grid. Across the top: “Responsible, Accountable, Consulted, Informed.” Down the side: every deliverable she could name — procurement plan, API integration, content approval, accessibility testing.

It was messy, but it was a start.


4. The RACI Revelation

By the next morning, Lisa’s desk was covered in scribbled RACI notes.

She’d colour-coded roles — blue for supplier, green for council staff, yellow for the PMO. Patterns leapt out immediately.

  • Procurement tasks: nobody marked as “Accountable.”
  • Content design: two people marked “Responsible.”
  • Data governance: six different “Consulted” parties, but no decision-maker.

No wonder everything was stalling. Every decision had dissolved into endless consultation.
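The pattern-spotting Lisa did by eye can be captured as a simple rule: every deliverable needs exactly one Accountable and at least one Responsible. A minimal sketch in Python — the chart data below is illustrative, not taken from any real council system:

```python
# Minimal RACI sanity check: flag deliverables with missing or
# duplicated ownership. The chart data is illustrative.

def raci_issues(chart):
    """Return a list of ownership problems found in a RACI chart.

    chart maps each deliverable to a dict of {person: role}, where
    role is one of "R", "A", "C", "I".
    """
    issues = []
    for deliverable, assignments in chart.items():
        roles = list(assignments.values())
        accountable = roles.count("A")
        responsible = roles.count("R")
        if accountable == 0:
            issues.append(f"{deliverable}: no one Accountable")
        elif accountable > 1:
            issues.append(f"{deliverable}: {accountable} people Accountable")
        if responsible == 0:
            issues.append(f"{deliverable}: no one Responsible")
        elif responsible > 1:
            issues.append(f"{deliverable}: {responsible} people Responsible")
    return issues

chart = {
    "Procurement plan": {"PMO": "C", "Supplier": "R"},           # no "A"
    "Content approval": {"Priya": "R", "Tom": "R", "Hannah": "A"},
    "Data governance": {"Legal": "C", "Ops": "C", "IT": "C"},    # no owner at all
}

for issue in raci_issues(chart):
    print(issue)
```

Run against a real chart export, the same check turns a wall of sticky notes into an instant ownership audit.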

She shared the chart with Hannah. “It’s worse than I thought. Everyone’s touching everything, but no one’s driving it.”

Hannah exhaled slowly. “Okay. Let’s fix ownership first. Then we’ll worry about deadlines.”

They called a meeting with all leads. Lisa projected the RACI on the screen.

“I’m not here to blame anyone,” she said. “But we need clarity. Who’s actually on the hook for each deliverable?”

There was awkward laughter. Someone joked about “too many cooks.” But as they went line by line, the mood shifted. People started volunteering to take clear roles.

By the end, they had a new version. For the first time, it looked sane.


5. When One Fire Hides Another

Two days later, another issue surfaced.

The supplier missed yet another milestone. Ahmed finally joined the call, apologetic but vague. “There’s a dependency on your infrastructure team,” he said.

Lisa frowned. “Which one?”
“The data services group,” he said. “They haven’t provisioned the new environment.”
“I wasn’t aware that was blocking you,” she said.
“Well, we raised it in our internal report,” Ahmed replied.

Lisa opened the shared folder. No sign of any report.

After the call, she sat back and thought: Every time we fix one issue, another appears. It felt reactive, like treating symptoms instead of curing the disease.

So she pulled out a blank page and wrote a question she often used when problems got slippery:

“Why?”

Then below it:

“Why is the supplier missing milestones?”

Because dependencies aren’t managed.

Why aren’t they managed?
Because no one has visibility.
Why not?
Because updates are trapped in private reports.
Why are reports private?
Because no one agreed how progress should be shared.

Four “whys” later, she had the root cause: lack of a shared information flow.
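A chain of whys is easy to capture as data, which makes the exercise repeatable the next time a problem gets slippery. A minimal sketch — the helper name is invented for illustration, and the entries mirror Lisa's notes:

```python
# Five Whys as a simple cause chain; the final answer is the
# working root cause. Entries mirror Lisa's notebook.

five_whys = [
    ("Why is the supplier missing milestones?",
     "Dependencies aren't managed."),
    ("Why aren't they managed?",
     "No one has visibility."),
    ("Why not?",
     "Updates are trapped in private reports."),
    ("Why are reports private?",
     "No one agreed how progress should be shared."),
]

def root_cause(chain):
    """The deepest answer in the chain is the working root cause."""
    return chain[-1][1]

# The chain stops when the answer points at something the team can
# actually change -- here, agreeing a shared information flow.
print(root_cause(five_whys))
# -> No one agreed how progress should be shared.
```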


6. Drawing the Map

Lisa booked a half-day workshop called “How the Work Really Works.”

Using a whiteboard, she drew circles for each group: supplier, IT, Comms, Legal, PMO, and Service Owner. She asked each to mark the teams they interacted with most often. Lines criss-crossed the board until it looked like a spider’s web.

Then she asked, “Where does information get stuck?”

That question changed everything.

The Legal team admitted they only heard about data decisions at the eleventh hour.
Comms revealed they didn’t know when supplier updates were due.
The supplier said they never saw PMO dashboards, so they couldn’t align milestones.

In an hour, they’d built a living Stakeholder Map — showing not hierarchy, but flow.

They agreed to create a shared “single source of truth” dashboard and new update rhythm. For the first time, the fog began to lift.
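The "where does information get stuck?" question has a direct computational analogue: model the stakeholder map as a directed graph of update flows and look for groups that receive nothing. A small sketch, with illustrative groups and edges rather than the team's actual map:

```python
# Stakeholder map as a directed graph of information flow.
# An entry flows[a] = [b, ...] means "a routinely shares updates
# with b". Groups and edges are illustrative.

flows = {
    "Supplier": ["PMO"],
    "PMO": ["Service Owner"],
    "IT": ["PMO"],
    "Comms": [],
    "Legal": [],
    "Service Owner": ["Legal"],
}

def blind_spots(flows):
    """Groups that receive updates from no one: the information
    sinks that only hear about decisions at the eleventh hour."""
    receivers = {b for targets in flows.values() for b in targets}
    return sorted(g for g in flows if g not in receivers)

print(blind_spots(flows))
# -> ['Comms', 'IT', 'Supplier']
```

The output is the redesign agenda: each blind spot needs a route into the shared dashboard or update rhythm.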


7. Turning the Tide

Two weeks later, the difference was visible.

The dashboard updates ran every Friday. Each owner posted their progress openly. There was friction at first — nobody likes exposure — but soon the transparency became normal.

When blockers emerged, people tackled them fast instead of burying them.

At one meeting, Tom said, “Before, I didn’t know who to chase. Now I can see exactly who’s responsible for what.”
Priya added, “And it’s not personal anymore — it’s just clear.”

The supplier hit its next milestone on time. The portal’s beta launch date stabilised. For the first time in months, the delivery tracker looked like it might tell the truth.

Lisa didn’t relax — she’d learned that projects can slip quietly again if you stop watching — but she felt control returning.


8. The Quiet Victory

On launch morning, Lisa arrived early.

The new Digital Citizen Portal went live at 9:00 a.m. sharp.
There were a few teething issues — a broken link here, a login glitch there — but nothing catastrophic.

By midday, the Chief Information Officer sent an email: “Congratulations — solid delivery. Thank you all.”

Lisa smiled at her screen. She knew it wasn’t perfect, but the win wasn’t the portal itself. It was that the team had finally built trust and visibility — foundations that would last longer than any launch.


Reflection: What Really Happened Here

The problem looked like missed deliverables, but the real issue was unclear ownership and hidden communication flows.

Lisa’s story shows how easily complex projects drift when responsibility blurs and updates vanish into silos.

The tools that helped her turn it around were simple — but powerful when applied together:

  • RACI Matrix: surfaced the lack of clear accountability.
  • Five Whys: uncovered the deeper cause — a missing information-sharing structure.
  • Stakeholder Mapping: made the invisible relationships visible, showing where communication needed redesign.

The turnaround didn’t come from heroics. It came from seeing the system clearly — and then deliberately redesigning how people worked within it.


Author’s Note

This story illustrates how structured clarity tools — like the RACI Matrix, Five Whys, and Stakeholder Mapping — can shift a struggling project from confusion to control.
They sit within the Failure Hackers problem-solving lifecycle, moving from “symptom recognition” through “diagnosis” to “countermeasure.”

When projects feel chaotic, start by making ownership visible, ask why until you find the real barrier, and then map the relationships that keep information moving.
That’s how small moments of clarity turn into sustainable success.


The Disappearing Customers – A Story About Solving a Customer Retention Problem

1. A Puzzling Decline

The first sign came on a Tuesday morning.

James Patel, Customer Service Lead at the mid-sized online retailer EconHome Direct, opened the weekly dashboard expecting steady numbers. Instead, the “repeat customers” graph had nosedived — down 18% in a single month.

He checked the complaints inbox. It was flooded: “Order missing.” “Refund not received.” “Still waiting for my delivery!”

At first glance, it looked like normal post-Christmas backlog noise. But as the complaints piled up, James noticed a pattern — most came from returning customers, the ones who used to praise their service.

He flagged it to Sophie, the Operations Manager. “We’re losing our best people,” he said. “They’re not just unhappy — they’re gone.”

2. The Quick Fix That Failed

Sophie’s first reaction was procedural: “Maybe it’s the courier.”

She emailed their delivery partner, who promised to “investigate delayed consignments.” Then she told IT to check whether the new CRM integration was “causing slow updates.”

Weeks passed. The courier blamed the warehouse. The warehouse blamed IT. IT blamed “legacy data.”

Meanwhile, cancellations climbed, refund requests grew, and repeat orders kept dropping.

The managing director finally called an emergency meeting:

“We can’t keep blaming logistics,” she said. “Something systemic is off. We need to find out where customers are dropping out — and fix it before next quarter.”

3. Mapping the Journey

James suggested a workshop. He’d used Process Mapping before in a logistics role and thought it might reveal what reports were hiding.

They booked a spare meeting room, printed the customer journey steps, and stuck them to the wall with masking tape. Each department added its part:

  • Customer Service: “Order placed → confirmation email sent → dispatch confirmation → delivery.”
  • Warehouse: “Pick item → pack item → print label → scan barcode → hand to courier.”
  • IT: “Sync data from warehouse system to CRM → push update to customer portal.”

By the end, the wall was covered in coloured sticky notes, each representing a step or system interaction. Then they walked through a few actual orders — tracing them from “placed” to “delivered.”

Halfway through the second example, the pattern hit them.

“Look here,” said Sophie, pointing to a gap between “refund processed” and “confirmation sent.”

“That step’s automated,” said IT.

“But the emails stopped sending two weeks ago,” replied James.

A quiet pause settled over the room.

4. The Hidden Break

The culprit wasn’t the courier.

It wasn’t even logistics.

It was a silent failure in the CRM-to-email sync process — the system responsible for confirming refunds and returns.

Whenever a customer requested a return, the warehouse processed it, but the confirmation never reached the customer. To them, it looked like silence.

So they called support. When they couldn’t get through, they posted negative reviews or charged back their payments.

“It’s not a delivery failure,” Sophie said. “It’s a trust failure.”

The insight reframed everything. The problem wasn’t “late packages.” It was “customers losing confidence because they didn’t hear from us.”
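The wall-walk the team did by hand — tracing real orders step by step until one goes quiet — can be sketched in a few lines. The step names and order logs below are illustrative:

```python
# Walking real orders through the mapped steps: for each refund
# case, find the first expected step that never happened, then
# count where orders stall. Data is illustrative.
from collections import Counter

EXPECTED_STEPS = [
    "order placed",
    "refund requested",
    "refund processed",
    "confirmation sent",   # the step that silently stopped
]

def first_missing_step(events):
    """Return the first expected step absent from an order's event log."""
    for step in EXPECTED_STEPS:
        if step not in events:
            return step
    return None

orders = {
    "A1001": ["order placed", "refund requested", "refund processed"],
    "A1002": ["order placed", "refund requested", "refund processed"],
    "A1003": ["order placed", "refund requested", "refund processed",
              "confirmation sent"],
}

stall_counts = Counter(
    step for step in (first_missing_step(ev) for ev in orders.values())
    if step is not None
)
print(stall_counts.most_common(1))
# -> [('confirmation sent', 2)]
```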

5. Digging Deeper — The Fishbone Workshop

To stop the bleeding, Sophie asked the team to run a short Fishbone Diagram session — a structured way to explore causes across multiple dimensions.

They drew a large fish skeleton on the whiteboard, with the “problem” written at the head:

Customers not returning after service issue.

The bones represented categories: People, Process, Technology, Policy, and Measurement.

Then they filled it in:

  • People: Lack of communication between warehouse and support.
  • Process: Refund confirmation steps not monitored.
  • Technology: Email automation failure.
  • Policy: No standard operating procedure for missed notifications.
  • Measurement: No alert when email success rates dropped.

Within an hour, they’d visualised how a single system glitch rippled across the organisation. It wasn’t one mistake; it was five small weaknesses interacting.

James looked at the board and said, “We thought we had a delivery problem. What we really have is a feedback loop problem.”
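A fishbone diagram translates naturally into a small data structure, which makes it easy to keep alongside the incident record. The entries below mirror the workshop board:

```python
# Fishbone (Ishikawa) diagram captured as data: the problem at the
# head, one bone per category. Entries mirror the whiteboard.

fishbone = {
    "problem": "Customers not returning after service issue",
    "causes": {
        "People": ["Lack of communication between warehouse and support"],
        "Process": ["Refund confirmation steps not monitored"],
        "Technology": ["Email automation failure"],
        "Policy": ["No standard procedure for missed notifications"],
        "Measurement": ["No alert when email success rates dropped"],
    },
}

# Not one mistake, but several small weaknesses interacting:
total = sum(len(c) for c in fishbone["causes"].values())
print(f"{total} contributing causes across {len(fishbone['causes'])} categories")
# -> 5 contributing causes across 5 categories
```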

6. Following the Evidence

The next morning, IT traced the technical fault.

Two months earlier, a CRM update had changed the email authentication protocol. Nobody noticed — the test emails still worked, but the bulk send queue silently failed.

At the same time, Customer Service switched to a new shared inbox. Messages from the refund system were being delivered to the wrong address.

Each small change made sense on its own. Together, they created a perfect gap — where communication simply vanished.

The failure wasn’t malicious. It was invisible maintenance drift — a classic symptom in complex organisations.

(You can read about this kind of failure propagation in the Symptom Sensing section of the Failure Hackers problem-solving lifecycle.)

7. Testing the Fix

Sophie insisted on a two-week pilot before a full relaunch.

They:

  • Restored the CRM email connection.
  • Set up a dashboard to monitor “confirmation success rate.”
  • Added a human check for refunds over £50.
  • Automated a weekly test of all transactional emails.
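The last measure — an automated weekly test of transactional emails — might look something like this in miniature; the threshold and figures are illustrative, and a real check would hook into whatever email service the team actually uses:

```python
# Sketch of the weekly transactional-email check: compare messages
# sent with messages delivered and alert when the confirmation
# success rate drops below a threshold. Figures are illustrative.

SUCCESS_THRESHOLD = 0.98  # illustrative alert level

def check_confirmation_rate(sent, delivered, threshold=SUCCESS_THRESHOLD):
    """Return (rate, ok). A silent bulk-send failure shows up here
    long before customers start phoning support."""
    rate = delivered / sent if sent else 0.0
    return rate, rate >= threshold

rate, ok = check_confirmation_rate(sent=1250, delivered=1118)
print(f"confirmation success {rate:.1%} -> {'OK' if ok else 'ALERT'}")
# -> confirmation success 89.4% -> ALERT
```

The point of the sketch is the measurement bone of the fishbone: the original failure was invisible precisely because nothing watched this rate.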

Then James ran follow-up calls with ten customers who’d complained earlier.

“I got my refund confirmation instantly this time,” one said.

Another replied, “You fixed it — I thought you’d gone bust!”

Small victories — but powerful.

Within three weeks, repeat order volume began to climb. The trust metrics in customer feedback rose 22%. For the first time in months, the curve turned green.

8. Writing the Story Down

Sophie asked the team to capture everything they’d done — not as a technical ticket, but as a story.

They used the Root Cause Analysis format to document what happened, why it happened, and what they’d changed.

Their summary looked like this:

  • Symptom: Decline in returning customers and rising complaints.
  • Problem Definition: Customers not receiving refund confirmations after returns.
  • Root Cause: CRM email integration failure + unmonitored process dependency.
  • Countermeasures: Technical fix, monitoring dashboard, process ownership clarification.
  • Learning: Technical errors often mask process design gaps; map flows end-to-end.

That final line — “map flows end-to-end” — became the mantra for their next project.

9. The Bigger Lesson

Three months later, Sophie reflected with her director.

“Funny thing,” she said. “We used to think problems lived inside departments. But every time something breaks, it’s between them.”

He nodded. “The work happens in the gaps.”

The company began applying Process Mapping routinely for every customer-facing change — not as bureaucracy, but as insurance. They called it “Seeing the flow before touching the system.”

By summer, repeat orders had recovered fully. But more importantly, the organisation learned to think systemically. The crisis had become their classroom.

Reflection: The Power of Seeing the System

This case highlights a common trap — treating customer symptoms as isolated issues instead of tracing them back through the system.

What worked here wasn’t heroics. It was structured curiosity — combining observation, mapping, and collective sensemaking.

The team used:

  • Process Mapping: traced real orders end-to-end and exposed the silent gap between “refund processed” and “confirmation sent.”
  • Fishbone Diagram: showed how five small weaknesses across people, process, technology, policy, and measurement interacted.
  • Root Cause Analysis: documented the symptom, cause, and countermeasures so the lesson outlived the fix.

The turning point wasn’t fixing an email glitch — it was seeing the invisible structure that made the glitch matter.

(For more on diagnosing underlying system issues, explore Symptom Sensing in the Failure Hackers lifecycle.)

Author’s Note

This story, although fictional, demonstrates how operational issues can reveal deep system design flaws — and how visual tools like Process Maps and Fishbone Diagrams convert confusion into clarity.

It sits within the Failure Hackers problem-solving lifecycle, bridging diagnosis and workaround design.

Whether you’re managing a digital project, a retail operation, or a public service, the lesson remains the same:

Problems rarely live where they appear.

The map always tells a deeper story.