Helping Support Managers track volume, capacity and performance in real-time

Context

Intercom is a complete customer service platform, focused on providing best-in-class support tools for companies looking to drive faster growth through better relationships with their users.

I joined the team in August 2021 and was tasked with leading the design of our first real-time data tool. Our objective was to provide managers in support organisations with a way to monitor conversation volume, capacity, and team performance in real-time, ensuring they have the information they need to provide good customer service and to tackle any problems that arise in the day-to-day operations of a support organisation.

Understanding the problem

Through customer feedback surveys and validation calls, my team learned that the lack of real-time metrics was a common challenge when using and adopting Intercom as a primary support tool. Intercom had already built tools for historical data and reporting, but these were geared towards a different set of user jobs; the primary jobs of monitoring volume and capacity in real-time were not being met.

I worked closely with a Product Manager and a Design Manager to build a panel of companies we could collaborate with to understand the problem more deeply and validate early solutions. Through multiple validation calls we learned that a critical responsibility of support managers is to monitor conversation volume and supervise agents in real-time.

Support managers struggled to understand the real-time health of their support organisation and to assess the availability of support agents on a day-to-day basis; they had to piece this picture together from limited information scattered across Intercom, third-party apps, and private apps.

Defining jobs and areas of focus

It was important to clearly define the jobs our customers were trying to get done, both to anchor our decisions as we built different aspects of the product and to define success when talking to users to validate our solution. After multiple conversations with customers we defined two main jobs:

Monitor volume: “When I monitor conversations, help me quickly understand conversation volume in the inbox and where conversations are waiting the longest, so that I can adjust capacity if needed.”
Monitor capacity: “When I monitor teams, help me quickly understand who is online/away/logged in/out and for how long, so that I can better estimate how long it’ll take us to get through the backlog and move capacity if needed.”

Defining these two jobs was crucial to the success of the project and helped us ground product, design, and engineering decisions. The jobs impacted both the ultimate layout of the tool and the metrics we chose to present to users. Answering common questions in the project, such as "Is this the right priority?", became much easier if we always anchored the answer around solving the jobs our customers were trying to get done.

Ideating, testing early and using “high-touch betas”

We mocked up early concepts to present to our user panel, and the feedback was promising. Anchoring the layout of the tool around both the volume and capacity jobs resonated with users. We also learned that the metrics people looked at depended on the job they were trying to get done.

When looking at an inbox to understand its volume, managers reached for metrics like “total open conversations” or “total conversations waiting for a reply”. When reviewing an agent, the questions were entirely different, such as “time since shift started” or “conversations resolved per hour”.

A good understanding of the jobs managers were trying to achieve and the questions they were trying to answer determined much of the solution and its metrics. We also learned that introducing a way to “drill down” into specific metrics was valuable. It was not enough to tell managers that 10 conversations were waiting for a reply; the key insight they were looking for was which of those 10 had been waiting the longest, so they could take immediate action. A summary dashboard also allowed support managers to understand how their team was performing in real-time and to take action when needed.
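
To illustrate how these two kinds of questions could translate into a data model, here is a minimal sketch in TypeScript; all names and shapes are hypothetical and are not taken from Intercom's actual implementation.

```typescript
// Illustrative only: hypothetical types sketching the two levels of
// real-time metrics described above; not Intercom's actual data model.

// Inbox-level metrics answer the "monitor volume" job.
interface InboxSnapshot {
  inboxId: string;
  openConversations: number;               // e.g. "total open conversations"
  waitingForReply: number;                 // "total conversations waiting for a reply"
  longestWaitingConversationIds: string[]; // drill-down: which conversations have waited longest
}

// Teammate-level metrics answer the "monitor capacity" job.
interface TeammateSnapshot {
  teammateId: string;
  status: "online" | "away" | "logged_out";
  statusSince: Date;                       // how long they have held that status
  resolvedPerHour: number;                 // "conversations resolved per hour"
}

// The summary view combines both, so a manager can scan the dashboard
// at a glance and then drill down into a specific inbox or teammate.
interface DashboardSnapshot {
  generatedAt: Date;
  inboxes: InboxSnapshot[];
  teammates: TeammateSnapshot[];
}
```

In this framing, the summary dashboard reads from the top-level snapshot, while the drill-down is simply a sorted view of the longest-waiting conversations or a single teammate's metrics.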

Throughout the development process, we collaborated with various companies to test and validate our ideas. Our early introduction of a "beta" program was crucial for the project's success. This program allowed us to test new concepts and assumptions on a weekly basis, and to evaluate our product in more realistic scenarios using real data. The high-touch beta program was feasible because the dashboard did not interfere with existing workflows. Instead, it served as an additional layer of information that managers could use while performing their work. This allowed us to communicate with them frequently and gather detailed qualitative feedback, while validating our assumptions.

Final solution and outcomes

I worked closely with a full cross-functional team and other designers and engineers across Intercom to build a simple dashboard that helped managers get a snapshot of their support organisation in real-time and allowed them to quickly understand volume and capacity.

The dashboard became a one-stop shop for managers to monitor their inbox volume on a daily basis. For some of the companies we worked with, it became a key part of using Intercom as a Support Manager. We launched the project in December 2021 and, after a few months, met our customer NPS target. We also saw an increase in engagement, and customers told us they stayed with Intercom thanks to having all the relevant information in one place.

Read more about the launch of the real-time dashboard here