The document was technically perfect. Every API endpoint documented. Every parameter explained. Every error code cataloged.

It was also completely useless as training.

I'd just reviewed what an engineer at one of my clients considered "training documentation" for a new feature rollout. He'd spent weeks on it. The information was accurate, comprehensive, and organized exactly how the system architecture was organized.

Which is precisely why nobody could learn from it.

Engineers know HOW things work. Learners need to know WHEN and WHY to use them.

Over 12 years—from Kraken's crypto trading platforms to Instacart's delivery systems to Delta's airline operations—I've watched this same problem sink feature after feature. Not because the engineering was bad. Because the translation never happened.

Why Perfect Documentation Makes Terrible Training

Here's what happens at most tech companies.

A team builds a feature. Engineering documents how it works. That documentation gets handed to a learning team (if one exists) or just published directly to users as "training materials."

Then support tickets flood in. Users can't figure out how to apply the feature to their actual work. The feature that took months to build sits unused because nobody understands when they'd need it.

I watched traders at Kraken struggle with a technically perfect trading automation feature. The documentation explained the algorithm beautifully. It never explained the market conditions where you'd actually use it.

At Delta, operations staff got a 47-page guide to a new scheduling system that documented every button but never explained the workflow.

Same pattern. Different industries. Identical root cause.

The expertise curse. When you deeply understand something, you forget what it's like not to know it. Research on cognitive load theory shows that experts unconsciously assume others share their mental models. They optimize for technical completeness because that's how they think about the system.

Learners don't have those mental models yet. They need to understand the moment of need before they can absorb technical details.

Three Questions Engineers Never Ask (But You Must)

Most instructional designers skip straight to "What does this feature do?" That's the engineer's question.

The learner's questions are different.

When does this actually matter in their workflow?

Not "what can this feature do" but "at what moment in someone's actual workday do they need this?"

At Instacart, we built training for a route optimization feature. The engineers wanted to start with the algorithm—how it calculated the most efficient delivery sequence. Fascinating if you're an engineer. Useless if you're a delivery driver.

What drivers needed: "When you're 20 minutes from your last stop and dispatch sends you three new orders, this feature tells you which order to deliver first so you're not backtracking."

That's the moment. The specific situation where this feature solves a problem they're experiencing right now.
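
When you do your own homework on a feature like this (more on that below), it helps to see how small the core logic can be. Here's a minimal sketch of a nearest-stop heuristic in TypeScript. It's purely illustrative: the Stop shape, the nextStop function, and the straight-line distance are my assumptions for the sketch, not Instacart's actual routing algorithm.

    // Illustrative only: a nearest-stop heuristic, not Instacart's
    // actual routing algorithm.
    interface Point {
      lat: number;
      lng: number;
    }

    interface Stop extends Point {
      orderId: string;
    }

    // Straight-line distance is enough to illustrate the idea; a real
    // router would use road-network travel times.
    function distance(a: Point, b: Point): number {
      const dLat = a.lat - b.lat;
      const dLng = a.lng - b.lng;
      return Math.sqrt(dLat * dLat + dLng * dLng);
    }

    // Answers the driver's actual question in the moment: "Which order
    // do I deliver first so I'm not backtracking?"
    // Assumes at least one pending stop.
    function nextStop(current: Point, pending: Stop[]): Stop {
      return pending.reduce((best, stop) =>
        distance(current, stop) < distance(current, best) ? stop : best
      );
    }

Notice that none of this belongs in the driver's training. It belongs in yours, so you can ask the engineer sharper questions.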

The exercise I use: Have engineers describe a user's typical day hour by hour. Then identify exactly where this feature fits. Not "it helps them work more efficiently" but "at 2 PM when they're juggling five competing priorities, this tells them which one to handle first."

Common mistake: Starting with "Here's what this feature does" instead of "Here's the moment you need this."

What's the business value they should care about?

If you can't articulate clear value in one sentence, it might not deserve formal training yet.

The "so what?" test. What can users do NOW that they couldn't do BEFORE? If the answer is vague ("enhanced visibility" or "improved efficiency"), you don't have training content. You have a feature that might not be ready for broad rollout.

At one company, stakeholders insisted we create training for a new dashboard. I asked the value question. After 20 minutes of discussion, we realized the dashboard showed the same data as the existing system, just with different graphs.

We didn't build that training. We questioned whether the dashboard itself was needed.

Compare these:

  • "The new dashboard provides enhanced visibility" (vague, meaningless)
  • "See at-risk customers 3 days earlier" (specific, time-bound, actionable)

The second version works because a user can immediately picture how this changes their work.

Sometimes you discover during this process that a feature doesn't have clear value. That's when you need to tell stakeholders that training won't fix a product problem.

Better to have that conversation before spending weeks building unusable training.

What will they try first?

Most users won't read documentation. They'll open the tool and try something.

Your job: Make sure the first thing they try works and builds confidence.

At ClickDimensions, we built a marketing automation tool with 47 different configuration options. Engineers wanted comprehensive training covering every option. Classic expertise curse—assuming users need to understand the complete system.

What actually happened: 83% of users set up one type of campaign in their first session.

We focused initial training entirely on that single use case. We made advanced features discoverable through contextual help instead of force-feeding them upfront.

The 30-second confidence builder: Design training so users accomplish something meaningful in their first 30 seconds. That success creates the motivation to learn more.

What we left out of initial training:

  • Edge cases affecting less than 5% of users
  • Advanced configurations requiring prerequisite knowledge
  • Feature combinations for power users

How we made advanced features discoverable: Contextual tooltips that appeared when users reached certain milestones. "You've created five campaigns. Want to learn about advanced targeting?"
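
The mechanics of that can be simple. Here's a minimal sketch in TypeScript of milestone-triggered prompts; the Milestone and UserStats shapes are assumptions for illustration, not the actual ClickDimensions implementation.

    // Illustrative sketch of milestone-triggered contextual help.
    interface UserStats {
      campaignsCreated: number;
      seenPrompts: Set<string>;
    }

    interface Milestone {
      id: string;
      isReached: (stats: UserStats) => boolean;
      prompt: string;
    }

    const milestones: Milestone[] = [
      {
        id: "advanced-targeting",
        isReached: (s) => s.campaignsCreated >= 5,
        prompt:
          "You've created five campaigns. Want to learn about advanced targeting?",
      },
    ];

    // Called after each meaningful user action. Returns at most one new
    // prompt, so help stays contextual instead of overwhelming.
    function nextPrompt(stats: UserStats): string | undefined {
      const due = milestones.find(
        (m) => m.isReached(stats) && !stats.seenPrompts.has(m.id)
      );
      if (!due) return undefined;
      stats.seenPrompts.add(due.id);
      return due.prompt;
    }

The one-prompt-at-a-time rule is the design choice that matters: each nudge fires once, at the moment the user has demonstrated they're ready for it.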

The danger of comprehensive training: You overwhelm beginners trying to help experts. The result is training that serves neither group well.

Now, those three questions give you a framework. But there's still the practical problem of actually extracting this information from engineers who think in systems, not user moments.

Do Your Homework Before You Waste the Engineer's Time

Most instructional designers start by scheduling time with the engineer.

That's backwards.

Before talking to any subject matter expert, I spend 2-3 hours building my own understanding:

  • Reading whatever documentation engineering has already written
  • Clicking through the feature myself, the way a first-time user would
  • Scanning support tickets to see where people already get stuck

By the time I schedule that SME interview, I should know 70% of what I need. The meeting then focuses on the 30% only they can answer—edge cases, design decisions, error states.

This changes the conversation completely. Instead of "Can you explain how this works?" it's "I understand it does X in most cases. What happens when Y occurs?"

Engineers respect this approach. You're not wasting their time asking them to repeat documentation you could have read. You're asking questions that demonstrate you actually understand the system.

The SME interview done right

Come with specific questions only the engineer can answer.

At Kraken, I learned that crypto trading platforms have unique error states that aren't obvious from documentation. The engineer could walk through "What happens when the market moves faster than the order can execute?"

Record the session (with permission) so you can focus on conversation, not note-taking. The most valuable insights come from follow-up questions you can only ask when you're fully engaged.

Questions that reveal what actually matters:

  • "What happens when this fails partway through?"
  • "Why did you design it this way instead of the alternatives?"
  • "What are you assuming users already know?"

That last question is gold. It reveals the expertise curse in action. The things engineers assume are self-evident are usually the exact points where users get confused.

Test your understanding with someone who doesn't know anything

Find someone who wasn't in any of the meetings. Hand them your draft. Watch them try to use the feature based only on what you wrote.

Do not help them.

This is painful. You'll watch them struggle and want to jump in with clarification.

Don't.

Their confusion reveals exactly what you missed in translation. At Delta, I watched a pilot stare at training I'd written for a new scheduling system. After 90 seconds of reading, he closed the document and opened the old system instead.

That taught me more than any feedback session could have. The moment you lose users isn't when they don't understand—it's when they decide it's not worth the effort to figure out.

The Button Location Trap (And Three Other Ways to Waste Your Time)

Teaching button locations instead of concepts

Buttons move. User interfaces get redesigned. Concepts don't change.

Bad training: "Click the blue button in the upper right corner"
Better training: "Open the export menu"

At one company, we rebuilt the entire UI six months after launch. Training that focused on button locations became instantly obsolete. Training that focused on concepts remained useful.

Assuming technical terminology is understood

Your "API call" is their "huh?"

At Instacart, engineers referred to "batching orders." Delivery drivers called it "stacking deliveries." Same concept, different language. Training that used the engineer's terminology confused the exact people who needed to use the feature.

The translation: Use the language your users actually speak, even if it's less technically precise.

Explaining every possible scenario

The 80/20 rule applies brutally to training. Eighty percent of users will encounter twenty percent of scenarios.

Build training for that 20%. Make the other 80% of scenarios searchable reference material, not required learning.

Building comprehensive training for features that will change next month

At fast-moving companies, features evolve rapidly. Spending three weeks building comprehensive training for version 1.0 means it's obsolete when version 1.1 ships.

Better approach: Build minimal viable training. Cover the core use case. Update when the feature stabilizes.

Translation Isn't Dumbing Down

It's optimizing for learning.

Engineers will thank you when support tickets decrease. Users will actually adopt features you spent months building. Products will deliver the value they were designed to create.

This skill translates across every tech company you'll work at. The platforms change. The technologies evolve. But the gap between technical accuracy and learning effectiveness remains constant.

The mindset shift: Think like a user who's 30 seconds into trying this for the first time, not a builder who spent six months designing it.

That's the translation that matters.