Leadership & Strategy

Daniel Dopler
Oct 24, 2025

Data Over Instinct: How We Increased Team Leaders by 40%
On my last platoon before moving into leadership, I watched two talented EOD technicians get stuck in qualification limbo. They were close to becoming Team Leaders, the critical supervisory role that allows operators to lead missions independently. But every time I followed up after they had run three to five more training drills, I got the same vague response: "They're just not ready yet."
I asked what specifically they needed to improve. The answer was always some version of "It's a big responsibility to grant the qualification. They just need more seasoning."
I never got a clear answer about what "ready" actually meant. Without objective criteria, I couldn't help them prepare better. I couldn't identify specific weaknesses to address. And neither could they.
Years later, when I became Director of Training at EOD Mobile Unit Eleven, I remembered that frustration. I built a system to ensure no one else would be trapped in that same limbo.
The Problem: When Gut Feelings Drive Career Decisions
EOD Team Leader is one of the most important qualifications in our field. It means you're trusted to lead missions with at least one other operator, working independently and making decisions that protect personnel and property. Teams with more qualified Team Leaders have more operational flexibility. Individuals with the qualification have more career opportunities.
For as long as anyone could remember, the qualification was granted based on instinct. Senior leaders watched operators perform, and when it "felt right," they granted the designation. When it was an obvious yes or no, the system worked fine. But when someone was close to qualifying, the subjective nature created problems.
Some of the old-guard senior leaders would pick favorites and make the people they didn't like work harder for the same qualification. The favoritism wasn't always intentional, but personality and relationships influenced decisions as much as competence did.
More importantly, talented operators were making career decisions based on these subjective evaluations. I had seen specific cases where people chose to leave the service because they felt they were being held back unfairly. Without objective standards, there was no way to prove whether the assessment was accurate or biased.
As I moved into the training coordination role before becoming Director of Training, I was evaluating EOD teams and noticed we had no objective way to measure performance for certain positions. Team Leader was the most glaring gap.
The frustration I had experienced as a young technician, watching my teammates stuck without clear guidance, came back. But now I was in a position to fix it.
The Solution: Building the 85% Standard
I created a standardized grade sheet with 20 specific performance points covering all aspects of EOD operations. Every training scenario would be graded using the same criteria, creating a consistent measurement across all operators and all mission areas.
We started using the grade sheets on every training event. Many scenarios were being conducted with two-man teams, which meant we were collecting data on anyone who led a scenario, not just people already being considered for Team Leader qualification.
I worked with the training department and coordinated with other Mobile Units to make the grading as objective as possible, reducing personality-driven decisions. Once we had refined the system, we presented it to the Commanding Officer for input and buy-in.
The standard we established: to become a Team Leader, an operator needed to pass at least 85% of their performance points on at least 15 scenarios across all mission areas. It wasn't about perfection. It was about demonstrating consistent competence across diverse situations.
If someone was close but not quite meeting the standard, we could review their grade sheets, identify specific shortfalls, and tailor training to address those weaknesses.
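The qualification rule itself is simple enough to sketch in a few lines. The following is a hypothetical reconstruction for illustration, not the actual grade-sheet software (the real system lived on paper grade sheets and spreadsheets); the names `Scenario`, `qualifies`, and `required_areas` are mine.

```python
from dataclasses import dataclass

PASS_THRESHOLD = 0.85   # pass at least 85% of the graded performance points
MIN_SCENARIOS = 15      # on at least 15 scenarios, across all mission areas

@dataclass
class Scenario:
    mission_area: str
    points_passed: int      # of the 20 performance points on the grade sheet
    points_total: int = 20

def qualifies(scenarios: list[Scenario], required_areas: set[str]) -> bool:
    """True if the operator passed >= 85% of points on >= 15 scenarios
    and those passing scenarios cover every required mission area."""
    passing = [s for s in scenarios
               if s.points_passed / s.points_total >= PASS_THRESHOLD]
    areas_covered = {s.mission_area for s in passing}
    return len(passing) >= MIN_SCENARIOS and required_areas <= areas_covered
```

The point of encoding the rule this way is that a borderline case stops being a debate about "readiness" and becomes a list of specific scenarios and mission areas still outstanding.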
The Data Reveals Hidden Talent
After running the system for eight months, we reviewed the results. By our objective standard, many more people met the qualification criteria than had ever been granted the designation under the old gut-feeling method.
I created spreadsheets for each team tracking their drills and scenarios. If someone's performance felt borderline, we could run additional drills, but if they maintained above 85%, they would be awarded Team Leader. The data removed the ambiguity.
Before the system, we had 29 to 30 qualified Team Leaders across the command. After eight months of data-driven qualification, we had over 40. That's roughly a 40% increase in qualified supervisors, not because standards were lowered, but because objective measurement revealed talent that had been overlooked.
This wasn't just about numbers. Each team gained additional Team Leaders, making them more flexible in operational settings. Individuals who might have left the service out of frustration now had clear pathways to advancement.
The Power of Specific Feedback
One of the most valuable aspects of the data-driven system was the ability to provide targeted coaching based on objective observations.
We had an operator who consistently rushed through his decision-making process. On easy drills and scenarios, he would move quickly without thinking through each step, skipping safety procedures that, in a real situation, could cause an unintended detonation. His scores were borderline because he would excel on complex problems, where he naturally slowed down, but fail on simple scenarios, where his speed became reckless.
I reviewed his grade sheets and identified the pattern through the evaluators' notes and his performance trends. The problem wasn't competence. It was pacing.
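Surfacing a pattern like this from grade-sheet data can be as simple as grouping scores by scenario difficulty and comparing the averages. A minimal sketch, assuming scores are recorded as (difficulty, pass-rate) pairs; the data shape and the function name `average_by_difficulty` are illustrative, not the actual spreadsheet layout.

```python
from collections import defaultdict
from statistics import mean

def average_by_difficulty(records):
    """records: iterable of (difficulty, pass_rate) pairs, e.g. ("easy", 0.75).
    Returns the mean pass rate per difficulty level, which makes an inverted
    pattern (worse on easy drills than on complex ones) easy to spot."""
    buckets = defaultdict(list)
    for difficulty, pass_rate in records:
        buckets[difficulty].append(pass_rate)
    return {d: mean(rates) for d, rates in buckets.items()}
```

An operator whose "easy" average sits below his "complex" average is not short on competence; he has a pacing problem, which is exactly what the coaching then targets.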
I coached him on tactical pauses, moments where he needed to slow down and use his reference cards to ensure he didn't skip critical steps. We worked on first identifying when these pauses were necessary, then taking a moment to inventory the situation, review his procedures, prepare his process, and then execute.
He was using speed as a motivator, but he needed to create space and time to make correct decisions. After a couple of days of focused practice on this specific weakness, he crushed the scenarios. He went from borderline to clearly qualified because we could identify the exact problem and address it directly.
That kind of targeted coaching was impossible under the old system. "You're not ready yet" gives no actionable feedback. "You're skipping tactical pauses and safety steps on simple scenarios" gives a clear path to improvement.
The Operational Impact: Flexibility Under Pressure
The true value of having more qualified Team Leaders became clear during my last deployment before retiring. We had two platoons, 16 people total, providing regional response, conducting exercises, and maintaining readiness for immediate response to threats against U.S. Government property in Bahrain.
The operational demands were constant and overlapping:
At any moment, we needed two people in Bahrain for immediate response
A typical exercise would take 4 to 8 people, at least one of whom had to be a Team Leader
If the Exploitation Lab was conducting explosives training with a partner nation, we needed to send a Team Leader and a Demolition Operations Supervisor
Multiple times, 3 to 5 EOD techs were requested for multi-week Visit, Board, Search, and Seizure missions with 5th Fleet vessels and Coast Guard Maritime Security Response Teams. These missions required an EOD Team Leader and Demolition Operations Supervisor.
On at least four occasions, we were down to only two people on the island for immediate response, one of whom was an EOD Team Leader. We could augment with personnel from other teams if needed, but the baseline was thin.
Having more supervisory-qualified personnel let me send operators to the more challenging, rewarding missions more often. Teams could support multiple exercises simultaneously. We could fulfill VBSS requests without compromising immediate response capability.
This operational flexibility existed because we had invested in building depth through objective qualification standards. Instead of hoarding qualifications for a select few, we had identified and developed everyone who met the standard.
The Broader Lesson: Data Doesn't Replace Judgment, It Improves It
I won over most of the skeptics because they genuinely wanted clear guidelines they could use to train more effectively. Senior leaders weren't trying to be arbitrary. They just didn't have tools to make consistent, defendable decisions in borderline cases.
The 85% standard didn't eliminate judgment. Evaluators still had to assess performance on each of the 20 criteria. But it provided a framework that made those judgments more consistent, more defensible, and more focused on actual performance rather than personality or relationships.
The system also created accountability in both directions. Operators knew exactly what they needed to demonstrate. Evaluators knew they had to document specific performance gaps, not just deliver vague assessments.
Most importantly, the data revealed that our instincts about who was "ready" for Team Leader had been systematically underestimating talent. We weren't lowering standards. We were measuring them more accurately.
The Insight
Subjective standards protect the evaluator. Objective standards protect the mission. When you can't describe what "ready" looks like, you're not protecting quality — you're protecting the status quo.
Civilian Translation
Every organization has a version of this problem. Performance reviews built on "culture fit" and "leadership potential" rather than defined behaviors. Promotion decisions driven by who mentors like rather than who performs. Hiring processes that measure confidence better than competence.
The 40% increase in qualified team leaders didn't come from lowering the bar. It came from defining the bar clearly enough that people could actually clear it — and evaluators could actually measure it.
In corporate terms: your highest performers aren't always the ones who get the most visibility. Some of your best people are stuck in qualification limbo right now, performing at the required level and waiting for someone to define what "ready" actually means. Build the rubric. Measure the outcome. The talent was there the whole time.
The Takeaway: Measure What Matters
Subjective evaluations feel more sophisticated than checklists and spreadsheets. Experience and intuition matter in leadership. But when gut feelings determine career advancement and operational capability, you lose talent and create resentment.
The 40% increase in qualified Team Leaders didn't happen because we made qualification easier. It happened because we made it clearer. Objective measurement revealed people who had been performing at the required level but hadn't been recognized because they didn't fit someone's mental model of what a Team Leader looked like.
Building the grading system required time, coordination across units, and buy-in from leadership. Tracking the data in spreadsheets for 102 to 105 personnel across multiple teams was administrative work that didn't feel glamorous. But the operational impact, 40% more qualified supervisors enabling more complex mission support, proved the investment was worth it.
In my next role, I'm looking for organizations facing similar challenges: important decisions being made on instinct, talented people being overlooked by subjective processes, and operational capability being limited by unclear qualification standards. The principles that increased Team Leaders by 40% apply anywhere evaluation criteria can be made more objective and talent development can be made more systematic.
When you measure what matters and make standards clear, you don't just improve fairness. You unlock capability you didn't know you had.





