Scaling Human Empathy:

A Case Study on Empower Work’s AI Assistant for Peer Counselors

When workplace stress or job loss strikes, people turn to Empower Work for free, confidential, real-time career counseling over text chat. Amid rising economic and political uncertainty and new workplace partnerships, demand on Empower Work’s text line was growing, and we wanted to proactively ensure that our peer counselor volunteer base could match the need.

The result: a peer counseling AI assistant designed to amplify, rather than replace, the human connection at the heart of our service. Adopted by 65% of our peer counselors, it streamlines routine workflows, helps counselors when they’re stuck, and lets them handle more simultaneous conversations.

The Challenge

Demand on Empower Work’s text line doubled in 2024 and continued to rise in 2025 amid economic uncertainty. At the same time, our research made two things clear:

  • Our own user experience research showed that help seekers strongly prefer human support for emotionally complex work issues.

  • According to peer-reviewed findings in Nature on AI empathy in challenging situations, human counseling responses produced stronger immediate and long-term positive outcomes and higher perceived empathy than AI responses when the source was disclosed. The explanation is an AI empathy gap: while AI can demonstrate cognitive empathy (accurately recognizing someone’s state), it cannot demonstrate affective empathy (truly feeling with someone) or motivational empathy (concern that compels real effort to help).

So, our guiding question became: How do we use AI to scale the real human empathy at the heart of our service without replacing it? On shifts, our peer counselors faced specific constraints that limited their efficiency and bandwidth:

  • Capacity ceiling (~2 simultaneous chats): Most counselors could comfortably manage about two conversations at once without sacrificing quality. Each chat averages ~50 minutes, requiring counselors to hold evolving details in working memory.

  • “Stuck” moments under a 3-minute clock: Volunteers sometimes felt uncertain about how to respond, and our standard of < 3 minutes per reply made that uncertainty especially stressful.

  • Manual, time-intensive workflows: Writing end-of-session summaries, catching up on prior messages during handoffs, and finding resources in a complex template library were all highly manual and slowed counselors down.

The Solution

Empower Work built an AI Peer Counselor Assistant, an embedded plugin right at the point of use. It reviews the live counseling conversation in real time and supports counselors with three fixed actions (a minimal sketch follows the list):

  • Next response guidance: Offers a small set of editable suggestions aligned to Empower Work’s nine-skill pedagogy, helping counselors craft empathetic, forward-moving messages within the 2–3 minute reply window.

  • Relevant resources: Surfaces targeted options from our vetted library of career, training, mental health, financial, and legal resources, so counselors can quickly share the most aligned support without hunting through templates.

  • Quick summary: Generates concise end-of-session summaries and instant catch-up snapshots for shift changes and handoffs.
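Under the hood, each feature is a fixed action run over the live transcript. Below is a minimal sketch of that shape, assuming an OpenAI-style chat-completions client; the prompt text, model, and function names are illustrative, not our production code.

```python
# Sketch of the assistant's three fixed actions (illustrative, not production).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPTS = {
    "next_response": (
        "You support a peer counselor in a live text conversation. "
        "Suggest a few short, empathetic, editable next messages "
        "aligned to the counseling skills described in the context."
    ),
    "resources": (
        "From the vetted resource list in the context, pick the options "
        "most relevant to the help seeker's situation."
    ),
    "summary": (
        "Write a concise summary of the conversation so far, suitable "
        "for an end-of-session note or a shift handoff."
    ),
}

def run_action(action: str, transcript: str, context: str = "") -> str:
    """Run one fixed action over the live transcript. A counselor always
    reviews and edits the output before anything reaches a help seeker."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPTS[action] + "\n\n" + context},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```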

Our Approach

1. Responsible, human-centered AI. We built directly with peer counselor needs and voice, minimizing any new risks to counselors or help seekers. This included:

  • Co-design with an AI Peer Counselor Advisory Council: Early testers provided weekly feedback that shaped prompts and UI from day one.

  • Human-in-the-loop: The assistant only suggests; counselors review, personalize in their own voice, and screen every message before sending.

2. Lean UX. We prioritized impact/cost trade-offs, testing small changes, measuring outcomes, and iterating quickly. This included:

  • Context engineering over fine-tuning: Instead of costly fine-tuning, we used Retrieval-Augmented Generation (RAG) with high-quality, tagged conversations and our vetted resource library (see the retrieval sketch after this list). This gave each use case just enough context for accuracy and tone at a fraction of the cost, and it avoided the operational burden of re-training models whenever providers update them.

  • Continuous refinement based on UXR: We built feedback capture directly into the assistant and traced each output to the rating it received, so we could continuously refine prompts based on ongoing counselor input.
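To illustrate the retrieval step, the sketch below ranks tagged resources by embedding similarity to the live conversation and returns the top matches to inject into the prompt. The embedding model and record fields are assumptions, not our actual pipeline.

```python
# Illustrative retrieval over the vetted resource library.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    """Embed text with a hosted embeddings endpoint (model name illustrative)."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.asarray(resp.data[0].embedding)

def top_resources(conversation: str, resources: list[dict], k: int = 4) -> list[dict]:
    """Rank resources by cosine similarity to the conversation; each record
    is assumed to look like {"title", "tags", "text", "embedding"}."""
    q = embed(conversation)
    q = q / np.linalg.norm(q)

    def score(r: dict) -> float:
        v = np.asarray(r["embedding"])  # precomputed when the library is indexed
        return float(q @ (v / np.linalg.norm(v)))

    return sorted(resources, key=score, reverse=True)[:k]
```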

The Outcomes

We built the AI Assistant to augment people, not replace them. Six months in, the evidence suggests that’s exactly what’s happening: it’s saving time on routine tasks, nudging overall capacity upward, and becoming part of the normal shift workflow, so counselors can be more present and human.

Engagement & adoption — a tool that sticks, not a novelty

  • Adoption: ~65% of counselors who have shifts with active conversations use the assistant at least once during a shift.

  • Retention/daily active users: Sustained daily usage grew steadily after launch with no novelty drop-off. From early June through early November, daily active counselors more than tripled. In short, usage expanded and stabilized at a higher level, consistent with a tool that’s becoming part of the normal shift workflow.

  • Helpfulness ratings: 93% report the assistant is helpful; 70% call it very to extremely helpful.

It’s helped me get unstuck during challenging conversations.
— Dana, Empower Work Peer Counselor

Capacity—from two-at-a-time to (nearly) three-at-a-time with increased impact

  • Counselors are moving from handling about 2 simultaneous conversations on average without the assistant to almost 3 on average with it, at the same or higher levels of impact.

  • Just as important, the share who can handle 3+ at once grew from 27% to 53%, a hugely meaningful shift that lets us help more people with continued strong outcomes.

Speed & workflow efficiency—time back where it matters

The assistant is shaving minutes off repeated tasks, letting counselors spend more time on empathy and human connection in their conversations.

  • Resource sharing: moved from 2.40 min without the assistant to 1.43 min with the assistant (about 1 min less time; ~41% faster).

  • Session summaries: moved from 4.80 min without the assistant to 1.93 min with the assistant (about 3 minutes less time; ~60% faster).

  • Handoff catch-up: catching up on a conversation during a shift handoff moved from 5.33 min without the assistant to 2.25 min with the assistant (about 3 minutes less time; ~58% faster).

It’s like having an extra set of eyes watching the road, yet I am still driving.
— Cyn, Empower Work Peer Counselor

What We Learned

When less is more: why we eliminated free-form open chat 

  • In our early mockups, we included an open-response chat and three prompt buttons (Next Response Guidance, Relevant Resources, Summary). In practice, the open chat created problems in a fast-paced environment where counselors aim to respond within 2–3 minutes: it increased cognitive load and slowed replies. It also introduced a real risk of accidentally sending an AI prompt to the help seeker—damaging rapport. We removed the open chat and kept fixed prompts only.

The Goldilocks level of AI optionality: ~4 choices

  • While we chose not to allow open-ended prompting, we did want counselors to exercise judgment. For both Next Response Guidance and Relevant Resources, we found that about four options is the sweet spot. Four provides enough variety that—even if one or two suggestions miss—there’s usually at least one or two genuinely useful choices to build on. It also nudges counselors to mix, match, and adapt rather than defaulting to a single “correct” answer, reinforcing human discernment without creating choice overload.
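In practice, pinning the option count is as simple as asking for it in the prompt and parsing defensively. A hypothetical fragment:

```python
import json

# Illustrative instruction block; our production prompt differs.
NEXT_RESPONSE_INSTRUCTIONS = """\
Propose exactly 4 candidate replies the counselor could send next.
Make them meaningfully different from one another (e.g. a reflection,
an open question, a validation, a gentle next step).
Keep each under two sentences. Return a JSON list of strings.
"""

def parse_options(raw: str, expected: int = 4) -> list[str]:
    """Parse the model's JSON list, truncating if it over-delivers."""
    options = json.loads(raw)
    return [str(o) for o in options][:expected]
```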

Low initial adoption ≠ low interest

  • About three months post-launch, ~45% of counselors who’d had a shift still hadn’t tried the assistant. A follow-up survey revealed almost all were interested—they just forgot it was available during shifts. We began sending personalized day-before reminders with a short tutorial and peer quotes about how they were using it. 65% of nudge recipients went on to use the assistant on their next shift, and many of those became consistent users. 

Bigger models are not always better

  • After extensive testing, we kept GPT-4o mini for Next Response Guidance rather than upgrading to larger GPT-4- or GPT-5-class models. The larger models were only slightly better in quality but added ~10 to ~60 seconds of latency per prompt, an unacceptable trade-off in a live, time-boxed setting. In our context, speed is a huge part of quality.
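A back-of-the-envelope version of that latency test looks like the sketch below; the model names and sample prompt are placeholders.

```python
# Rough latency comparison between candidate models (illustrative).
import time
from openai import OpenAI

client = OpenAI()

def mean_latency(model: str, prompt: str, runs: int = 5) -> float:
    """Average end-to-end seconds for one completion on the given model."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        total += time.perf_counter() - start
    return total / runs

for model in ("gpt-4o-mini", "gpt-4o"):
    print(model, f"{mean_latency(model, 'Suggest a supportive next reply.'):.1f}s")
```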

Conclusions and next steps

The AI assistant proved what we hoped: we can scale human empathy without replacing it. On the strength of those results, we’re extending AI into wraparound support beyond live sessions—while keeping every conversation itself firmly human-first.

For counselors—post-session AI feedback

We’re adding post-session AI feedback so support doesn’t end when the chat does. The assistant reviews a completed session and reflects back what went well, where there’s room to grow, and a few concrete ideas to try next time. Early prototypes have been met with enthusiasm, and we’re moving to an MVP launch this fall.
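As a sketch of what that feedback prompt could look like (the wording and structure here are assumptions, not the shipped prompt):

```python
# Hypothetical post-session feedback prompt; illustrative only.
FEEDBACK_PROMPT = """\
You are reviewing a completed peer-counseling text session.
Using the nine-skill pedagogy provided below, reflect back to the counselor:
1. What went well (two or three specific moments, briefly quoted).
2. Where there is room to grow (one or two areas, framed supportively).
3. A few concrete ideas to try next time.
Address the counselor directly and keep the tone encouraging.
"""
```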

For help seekers—post-session AI follow-up support

We’ve validated strong interest from help seekers in an AI follow-up after their sessions: one that distills key takeaways, shares relevant resources, suggests small, doable next steps, and checks in on those steps to help keep them on track. We hope to start work on a proof of concept soon.

Across both tracks, the goal remains the same: use AI to amplify the human core of Empower Work—making it easier for volunteers to show up at their best and for help seekers to carry support forward into their daily lives.


We’re eager to connect with partners, collaborators, and thought-partners in responsible AI. Reach us at team@empowerwork.org.