How a European tech leader cut assessment time from 60 minutes to 10
11 support agents. 10 minutes each. A pilot that gave managers the first clear picture of where communication gaps were affecting customer experience.
One of Europe's largest IoT security manufacturers needed to assess soft skills across its global customer support team. The existing approach was manual, subjective, and couldn't scale.
SoftTrainer replaced their slow evaluation process with structured, scenario-based assessment. For the first time, managers could see team-wide patterns at a glance.
At a Glance
A fundamentally broken evaluation process
The challenge
This company operates a 24/7/365 multi-channel support operation serving partners, installers, and end users across 180+ countries. As the company expanded rapidly (3,000+ employees, millions of protected users), its quality assessment process for customer-facing teams couldn't keep up.
Manual evaluation of a single support agent took around 60 minutes. It depended entirely on the evaluator's judgment. There was no consistent way to compare agents, no team-wide view of communication gaps, and no way to know if the evaluator was even measuring the right things.
HR knew there were problems. They just couldn't measure them.
Before
- One-by-one expert evaluation
- Subjective, evaluator-dependent
- No team-wide comparison possible
- Qualitative impressions only
- Scheduling bottleneck
After — with SoftTrainer
- Scenario-based, async assessment
- Structured, consistent scoring
- Team-wide patterns visible instantly
- Quantified skill gaps with specifics
- Everyone assessed in parallel
A fundamentally different approach to assessment
Why SoftTrainer
Scenario-based, not opinion-based
Agents handled realistic customer interactions, not abstract questionnaires. Angry customers, technical complaints, de-escalation under pressure.
Structured, comparable scoring
Every agent assessed against the same competency framework. Results are quantified, consistent, and directly comparable across the team.
Fully asynchronous
No scheduling, no disruption to 24/7 shifts. Agents completed assessments on their own time without coordination overhead.
Team-wide visibility
Managers received a single view of skill gaps across the entire team, broken down by competency, by person, and by pattern.
11 agents, 10 minutes, real results
The pilot
11 customer support agents completed realistic, scenario-based assessments through SoftTrainer. Each simulation was designed around their actual work: handling angry customers, de-escalating complaints, navigating complex technical conversations.
The simulations measured 4 key competencies critical for a support team of this caliber: Problem Solving, Tension Reduction, Clarity & Politeness, and Support & Trust.
11 agents assessed in a single pilot cycle
4 competencies measured per agent
The numbers that led to action
Key results
Less Assessment Time
60 min → 10 min per person
66% L&D Efficiency Gain
Less diagnosing, more acting on findings
Critical Skill Gaps Found
Invisible to manual review
What the data actually showed
The findings
The pilot uncovered recurring weaknesses that managers had sensed but couldn't measure consistently. When the data came back, the patterns were specific and actionable.
Skill Gaps Identified
% of team showing each gap
- Did not confirm understanding before responding
- Chaotic answers, hard for customers to follow
- Biggest growth area across the team
- Bureaucratic language reducing empathy
Competencies Measured
Team average scores across the 4 key areas: Problem Solving, Tension Reduction, Clarity & Politeness, and Support & Trust
What changed after the pilot
Impact
Speed
Assessment time dropped from 60 to 10 minutes per person. The entire team was assessed in the time it previously took to evaluate two agents manually.
Visibility
For the first time, managers had a team-wide view of skill gaps. Not just who needs help, but exactly where. The data was broken down by competency, by person, and by pattern.
Actionability
L&D efficiency improved by 66%. Expert time shifted from diagnosing problems to acting on clear findings. The skill gap data gave the team a concrete starting point for targeted training.
Scale
The process became fully asynchronous. No scheduling, no coordination headaches. Agents completed assessments on their own time without disrupting their 24/7 shift rotation.
Beyond the pilot: structured development
What happened next
Clear baseline established
The company had its first objective, comparable skill baseline across the entire support team, broken down by competency and individual.
Training priorities identified
Instead of guessing where to invest, L&D could target the specific gaps that data revealed: paraphrasing, response structure, and trust-building.
Scalable process proven
The pilot demonstrated that structured assessment could work asynchronously at scale, without disrupting operations or requiring expert evaluators.
Based on a pilot with one of Europe's largest IoT security manufacturers (3,000+ employees, 180+ countries, millions of protected users). Company name anonymized.
See what SoftTrainer could reveal about your team
We'll build a custom assessment around your scenarios, just like we did for this team.