Connected Intelligence Can't Fix Your Broken Knowledge Management
By David Barry
Text below includes excerpts from an article originally published by Reworked.
Knowledge management initiatives have fallen short for thirty years running. Every new technology wave promises to fix that. Communities of practice would unlock tacit knowledge. Expertise directories would connect people to answers. Collaboration platforms would break down silos.
Now comes AI-powered knowledge management — what Cisco calls Connected Intelligence — and the pitch is identical: knowledge flows across human and machine boundaries, decisions happen at unprecedented speed and collaboration occurs without friction. The only difference is that artificial intelligence does the heavy lifting this time.
Whether that changes anything depends on a question no vendor wants to answer: What if the technology wasn’t the problem?
Automating the Easy Parts, Calling It Progress
Nobody struggles to store information anymore. Enterprises are drowning in it. "Storing information isn't hard. What's hard is keeping the context intact and making past experience useful when real decisions are on the table," said Yancey Sanford, chief information and research officer at MSTRO.

This is why knowledge management keeps failing despite ever-better technology. Organizations capture everything and make almost none of it useful. Context evaporates, confidence drops and repositories become graveyards.

AI knowledge management tools promise to change this dynamic. Mike Clifton, co-CEO at Alorica, described how his company abandoned "static repositories" for "dynamic in-the-moment knowledge delivery" through its Knowledge IQ platform. Instead of hunting through wikis, Alorica's 100,000 employees receive context-aware knowledge embedded into customer service workflows. The company reports improvements in handle time and first-call resolution.
Ambiguity and Complexity Require Human Judgment
Even this success story reveals limits. "AI can't interpret conflicting policies, understand cultural nuance or resolve ambiguous or emotionally charged situations," Clifton acknowledged. Retrieval, summarization, validation and compliance checking can be automated, but ambiguity and complexity require human judgment.
Tools Fail Because People Do. AI Doesn't Change That
Cisco's Connected Intelligence promises to strengthen all three relationship types: people to people, people to AI, and AI to AI. But if people-to-people knowledge sharing has been broken for decades, bolting AI onto broken processes creates broken processes with better dashboards.

Braksiek's research also suggests AI strengthens people-to-people sharing, but only "when paired with strong, intentional knowledge management practices." Without guardrails, AI weakens human connection by reducing informal interactions or encouraging people to bypass colleagues on the assumption that the technology has the answers.

The theory sounds reasonable: when AI handles retrieval and error detection, people spend their time on judgment, empathy and problem-solving instead of system archaeology. Experts coach and mentor instead of correcting avoidable mistakes. Clifton calls this building superhumans, not replacing them.
Layering Sophisticated AI on Poor Knowledge Management Practices
Here's what nobody wants to confront: if Alorica needs to train 100,000 people to work alongside AI effectively, and most organizations can't manage basic change management, scalability becomes a fantasy. "Technology amplifies the environment it's placed in," warned Clifton. If your culture supports knowledge sharing, AI will accelerate it. If not, AI will simply expose the gaps more quickly.

AI-powered knowledge management works only when tools support how people learn, decide and work together daily. That requires treating knowledge as something that grows as the organization learns, not static content filed away and forgotten. The technology enables that shift. Organizational politics and misaligned incentives prevent it, and AI doesn't change that.
The Need for Algorithmic Accountability
Clifton described Alorica's "human-in-command design," in which every AI action includes confidence scores, source transparency and audit trails. "Algorithms facilitate knowledge flows, but accountability never moves away from people," he said.
Solving the Knowledge Management People Problem
Three decades of knowledge management failure have taught us that the problems are rarely technological. Organizations struggle with silos created by structure and incentives, cultures that don't reward sharing, strategic ambiguity that makes it unclear what knowledge matters and the challenge of preserving context when experience transfers between people. These are human and organizational problems that don't yield to better software.

The new generation of knowledge management software is intended to accelerate knowledge flows and reduce friction. AI excels at both. But accelerate flows in an organization that doesn't know where it's going, and you get chaos at higher speed. Reduce friction in systems where people aren't moving in useful directions anyway, and nothing meaningful changes; it just happens faster and more efficiently.