Efficient Internet Chat Services for Help Desk Agents

Zon-Yin Shae1, Dinesh Garg2, Rajarshi Bhose2, Ritabrata Mukherjee3, Sinem Güven1, Gopal Pingali1
1 IBM T.J. Watson Research Center, 2 IBM India Research Lab, 3 IBM Global Business Services
{zshae, sguven, gpingali}@us.ibm.com, {dingarg1, rajbhose, rito_mukherjee}@in.ibm.com

Abstract
This paper presents a model for "Help Desk chat" which is distinct from the "buddy-list" model of conventional collaborative chat. Help desk chat requires several distinct capabilities, including scheduling and routing functionality, archival of problem resolution sessions, integration with ticketing databases, unification with knowledge management systems, and efficient interfaces for agents to effectively handle and multi-task several chat sessions. The main motivation is to provide an alternative channel to voice calls that increases help desk agent efficiency while improving end user satisfaction. By implementing an end-to-end help desk chat system and piloting it in a large global enterprise, we demonstrate that help desk chat indeed meets these goals. Analysis of numerous help desk chat transcripts quantitatively shows the effectiveness of chat over voice across key help desk performance indicators, including first call resolution, average speed to answer, average call duration, extent of multi-tasking, and end user satisfaction.

There are two principal models of Internet chat: the Buddy-list model and the Help Desk model. The Buddy-list model and Instant Messaging (IM) have grown enormously over the last several years; in this model, users select a peer from a buddy list based on the peer's presence status within IM. There are an estimated 100 million Internet chat users, where a user is defined as a unique name on one of the major chat networks: AOL Instant Messenger (AIM), Microsoft Messenger (MSN), Yahoo! Messenger (YMSG), or Internet Relay Chat (IRC) [7]. Recently, the Help Desk model of Internet chat has also gained popularity as an alternative to the voice call channel. In this model, users cannot select the peer to chat with; instead, the system routes the chat to an appropriate help desk agent based on business rules. To the best of our knowledge, there is little literature in the public domain investigating the system protocol, architecture, and effectiveness of chat models. A recent publication [8] analyzes the Buddy-list chat protocol. This paper complements [8] by addressing the system architecture and effectiveness of the Help Desk model.

1. Introduction
Today, we all live in a "post manufacturing" economy which is based on services. The service sector has grown over the last 50 years to dominate economic activity in most advanced industrial economies, yet the scientific understanding of modern services is rudimentary [9]. Information technology has played a key role in revolutionizing the service industry. Advances in IT have not only lowered the cost of service; they have also created avenues to enhance revenue through services and, more importantly, given rise to a new service paradigm, namely IT Service Delivery.

This paper presents an integrated enterprise system that enables Internet chat for help desks. The paper argues that Internet chat for help desks needs several distinct capabilities, including scheduling and routing functionality, archival of problem resolution sessions, integration with ticketing databases, unification with knowledge management systems, and efficient interfaces for agents to effectively handle and multi-task several chat sessions. We develop a system architecture for providing and unifying these various functionalities. We also describe a pilot application of the architecture in a real enterprise help desk setting. We demonstrate that chat is indeed a more efficient alternative by comparing chats with phone calls across major help desk key performance indicators, including first call resolution, average speed to answer, average call duration, extent of multi-tasking, and end user satisfaction.

In a typical IT Service Delivery business, help desks contribute a significant component of the cost of delivering services to enterprises that strategically outsource their IT management or other business processes. Today's help desks are dominated by phone calls as the primary channel through which end users reach the help desk. To survive in a competitive world, every IT service delivery organization is looking at alternative help channels and mechanisms to improve efficiency and reduce costs relative to the cost of servicing phone calls. One such channel of interest is Internet chat.


The paper is organized as follows. Section 2 outlines the system architecture for Help Desk Chat Services and Section 3 describes the web chat user interface. Section 4 discusses a unified chat agent interface. Section 5 discusses analysis of chat transcripts and Section 6 presents results on real chat data. Section 7 concludes the paper.

2. Chat Services System Architecture
Figure 1 shows the system architecture of a help desk chat service. It consists of the following components. A chat portlet runs in a portal server and provides the GUI to end users; the portal server includes a large set of existing portlets for enterprise applications and services that can easily be integrated and composed along with the chat service. The chat server is an intelligent queue-based routing engine: all users' chat requests are stored in the chat server queue, and the chat server routes them to available agents based on their skills and preset business rules. These routing business rules determine how the help desk operates, for example, how many simultaneous chat sessions a particular agent can accept, or the maximum number of times an agent can reply before a session is automatically redirected to another agent. LDAP is used for verifying users' entitlement to the chat service. The ticket system interface supports communication with a problem ticket system; the example shows the IBM eESM problem ticket database. The Enterprise Information System refers to enterprise back-end databases and applications.
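As a concrete illustration of the queue-based routing just described, the sketch below implements skill matching plus a maximum-simultaneous-sessions rule in Python. The class names, the session limit, and the request format are our own illustrative assumptions; the actual chat server's business rules are richer (e.g., redirecting a session after a maximum number of agent replies).

```python
from collections import deque

MAX_SESSIONS = 3  # illustrative business rule: max simultaneous chats per agent

class Agent:
    def __init__(self, name, skills):
        self.name = name
        self.skills = set(skills)
        self.sessions = []  # chat requests currently assigned to this agent

    def available_for(self, category):
        # Skill-based availability, capped by the simultaneous-session rule.
        return category in self.skills and len(self.sessions) < MAX_SESSIONS

def route(queue, agents):
    """Pop queued chat requests and assign each to the first available,
    skill-matched agent; unroutable requests go back on the queue."""
    routed, still_waiting = [], deque()
    while queue:
        request = queue.popleft()
        agent = next((a for a in agents
                      if a.available_for(request["category"])), None)
        if agent:
            agent.sessions.append(request)
            routed.append((request["user"], agent.name))
        else:
            still_waiting.append(request)  # waits until an agent frees up
    queue.extend(still_waiting)
    return routed
```

A request that no agent can take simply remains queued, mirroring the paper's description of requests waiting in the chat server queue until routed.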

[Figure 1 omitted: it depicts Users reaching a Chat Portlet in a web browser and Agents at a Unified Agent Desktop, connected through the Chat Server; LDAP provides entitlement checks; a Ticket System Interface with ticket popup connects to the eESM Ticket Database, where chat transcripts are stored automatically; the Enterprise Information System sits behind the agent desktop.]

Figure 1: Chat Service System Architecture for the Help Desk

The service flow is as follows. A user starts a chat by using an Internet web browser to access the chat service URL. A problem ticket is opened for the chat and a chat request is automatically created. The request is then routed to an agent. At the end of the chat session, the chat transcript is stored automatically in the eESM ticket database. Each agent can support multiple simultaneous chat sessions, which is an important cost-saving factor for the help desk. The unified agent desktop interface enables agents to access and integrate the Enterprise Information System directly from the chat interface.

3. Web Chat User Interface
The chat portlet is the front-end interface, which serves two purposes: a) taking inputs from the user and relaying them to the chat server, and b) retrieving agents' responses and displaying them to the user. The basic chat portlet architecture consists of a chat Servlet that runs in the same context as the portlet and an AJAX engine that executes in the client browser. When the chat portlet loads for the first time, the view JSP of the portlet renders the AJAX engine in the client browser. The AJAX engine uses the XMLHTTPRequest object and is compatible with both MS and Netscape browsers. The chat portlet also uses the Websphere Portal Server (WPS) SPI to get user credentials such as the user's first name, last name, email address, etc.

The chat server takes the user inputs and buffers the agent responses. There are thus two separate processes: one sends messages to the chat server, and the other fetches from the chat server any responses the agent has generated. The use of AJAX lets the portlet update itself without requiring a page re-render, and creates the asynchronous communication between the sending and polling sequences.

4. Enabling Back End Utilities into Unified Agent Chat Interface
The current help desk approach to searching for a solution document to help the customer involves moving away from the user's main contact channel (call, chat, or email), identifying potential keywords that describe the customer's problem, and using a separate search tool to look up relevant solution documents. This approach is not very effective, since it breaks the continuity of the agent's interaction with the current context (e.g., helping a customer by chat) by requiring the agent to switch between separate tools to identify the solution and then help or advise the customer. Further, it makes it difficult for agents to multi-task, since they have to not only keep track of each user's context separately and switch between them, but also switch between different tools (the chat interface and the solution search interface). To address these issues, we provide a unified platform for agents to access the relevant tools and information easily. Integration of the agent chat interface with our Agent Desktop Authoring Tool [1] can provide a unified interface for agents to administer all key help desk functionality, such as problem ticket creation, solution search and presentation, as well as self-help portal content authoring.

Another important aspect of our unified platform is its ability to automate context definition and solution extraction to help agents context-switch more effectively. The idea is that, since we already have the ability to analyze chat transcripts and extract context-specific keywords, such keywords can be automatically fed into our solution DB to extract and display relevant solution documents to the agent. Although this helps greatly for solution document extraction, plain keyword extraction alone does not necessarily capture the customer's problem context. To help narrow the potential solution documents down to more meaningful context-specific matches, we use a data mining approach, described next.

The Agent Desktop Authoring Tool provides text mining capabilities for contact center problem tickets. We mine the unstructured problem descriptions in the problem ticket records to find the most frequently occurring phrases. We then use these terms extracted from problem tickets as query terms to search and retrieve relevant knowledge documents. Figure 2 shows an overview of our approach.

Figure 2: Problem Ticket Mining and Solution Retrieval

We analyze the unstructured text of problem descriptions to extract noun phrases and associated verb phrases. This approach extracts useful information from problem ticket records without relying on grammatically sound descriptions. We also maintain the frequencies of extracted phrases over all problem records for any selected period of time, and enable retrieval of these phrases along with their frequencies through an index server. We then use these extracted terms to retrieve relevant documents from the database of solutions. To improve this retrieval [2], we obtain a glossary of domain-specific terms (noun and verb phrases) in the knowledge base. These can be easily matched with the noun and verb phrases extracted from problem tickets, and are also more likely to appear in problem ticket records than non-domain-specific terms. Following the approach in [3], we determine the domain specificity of each extracted term as the ratio of the probability of the term appearing in a general corpus to the probability of the term appearing in the domain-specific corpus. To further improve retrieval, we search on the original extracted terms from problem descriptions as well as their synonyms (determined through WordNet).

The Agent Desktop Authoring Tool thus automatically identifies the most frequently occurring (phrase, action verb) pairs and their associated solution documents, and displays them for the agent's use (Figure 3). When an agent is engaged in a chat session with a customer, automatically extracted keywords can be checked against the most commonly occurring phrases and action verbs to see whether relevant solution documents have already been identified for this particular problem context. If such a match exists, the retrieved solution document is much more refined and context-specific; otherwise, plain keyword matches can retrieve and display the relevant solution documents. The resulting application provides several improvements over existing solution document retrieval and display mechanisms. First, the agent does not have to context-switch between the chat interface and the solution DB interface. Moreover, the agent does not have to identify keywords and look up solution documents manually; all the information they need to help the current customer is provided in a unified platform, available at a glance. Significant time savings can thus be obtained through automation and context extraction.

5. Chat Statistics Portlet
The basic idea behind the Chat Statistics Portlet is to provide a single window for analysis of the chat functionality: to understand chat usage and draw inferences about the agents' ability to handle user sessions and the efficiency of the overall chat system in handling customer issues and providing solutions. This is particularly important, as help desks typically monitor a variety of statistics for the conventional voice call channel, such as average speed to answer, average call duration, and first call resolution. To estimate the efficacy of web chats, it is important to monitor similar metrics for chats.

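For illustration, aggregate statistics of the kind reported later in Table 2 can be computed directly from per-session records. The record fields and function name below are our own simplification; in particular, we follow the paper's definition of total busy time as the sum of chat durations, even though multi-tasked sessions overlap in wall-clock time.

```python
def aggregate_stats(sessions):
    """Compute ASA, ACD, TBT and sessions-per-busy-hour from per-session
    records; each record holds WAITTIME and CHATTIME in seconds,
    mirroring the per-transcript fields described later in Section 6."""
    n = len(sessions)
    total_chat = sum(s["CHATTIME"] for s in sessions)
    asa = sum(s["WAITTIME"] for s in sessions) / n   # avg speed to answer, sec
    acd = total_chat / n / 60.0                      # avg chat duration, min
    tbt = total_chat / 3600.0                        # total busy time, hrs
    return {"ASA": asa, "ACD": acd, "TBT": tbt, "N_per_busy_hour": n / tbt}
```
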

In addition, chat has its own set of distinct metrics: for example, the average and peak number of simultaneous chat sessions handled by an agent, average first-response time, average and peak length of chat sessions, and time between interactions. The chat statistics portlet is an individual component, separate from the chat portlet. It provides role-based access and is accessible only to authorized users, i.e., there is no provision for anonymous users. The current functionalities that the statistics portlet provides are (a) statistical analysis over common parameters drawn over the duration of transcripts, (b) generation of graphs for visual representation of the statistics, (c) a fully functional search over the chat transcripts, (d) generation of reports over the analysis, and (e) extraction of keywords.

Figure 3: Agent Desktop Interface for Automatically Identifying Relevant Solution Documents

This portlet has been implemented based on the following components (Figure 4): JFreeChart (chart generator), Jasper Reports (report generator), Lucene (text-based search engine), and JAXB (parsers for XML data bindings).

The architecture of the Chat Statistics Portlet is as follows. The XML data is generated in memory by reading the chat transcript file. After the XML is generated, it is populated into a table. The XML data can be retrieved from the table by SQL queries and parsed using XSLT or any XML parser to get back the original text, for display in the chat history or for further data analysis as required. The JFreeChart [4] component is used to draw graphs and charts based on the data analysis from the database. This is not real time, meaning it can only produce results based on predefined existing values; in our case the data repository is the database. Jasper Reports [5] can be used to prepare reports on the data. Lucene [6] is a Java-based text search engine; it is used to provide keyword-based search over all the chat transcripts when looking for a particular item.

Currently the transcripts are delivered manually in the form of text files. Each text file contains a set of transcripts; a single delivered file may contain N transcripts, where each may be a full, complete chat session or a representation of an abandoned session. Once these text files are received, they are placed into directories created date-wise, i.e., according to the date on which the text files were created. Once the text transcripts are placed in the proper directory structure, a Text2XML component creates XML representations of the individual transcripts.
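A minimal sketch of what a Text2XML-style component might do, assuming (as in the sample in Table 1) that each transcript in a delivered file begins with a header line starting with "/////". The function names and the two-element XML shape are illustrative; the real component extracts far more structure.

```python
import xml.etree.ElementTree as ET

def split_transcripts(text):
    """Split a delivered text file into individual transcripts,
    assuming each transcript starts with a '/////' header line."""
    blocks, current = [], []
    for line in text.splitlines():
        if line.startswith("/////"):
            if current:
                blocks.append("\n".join(current))
            current = [line]
        elif current:
            current.append(line)
    if current:
        blocks.append("\n".join(current))
    return blocks

def to_xml(transcript):
    """Wrap one transcript in a minimal XML element; the production
    Text2XML component applies business logic to extract many fields."""
    root = ET.Element("transcript")
    header, _, body = transcript.partition("\n")
    ET.SubElement(root, "header").text = header
    ET.SubElement(root, "body").text = body
    return ET.tostring(root, encoding="unicode")
```
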


[Figure 4 omitted: it shows the chat portlet and portal logs feeding date-wise directories (Dir:14Aug, Dir:15Aug, Dir:16Aug) of transcript text files (1.txt, 2.txt, 3.txt), which are converted to XML and loaded into a DB serving the Chat History Portlet and, via Lucene, JFreeChart, and Jasper, the Chat Stats Portlet.]

Figure 4: Chat Context Statistic Analysis

The idea behind converting the textual transcript format to an XML representation is to capture the relevant data in XML, remove the unnecessary data, and rearrange the data so that XML tags represent the contents implicitly. Text2XML starts with the root directory of the transcripts and splits each text file into individual transcripts. It then converts the unstructured text into XML, using the open source XML Writer module for the conversion. The JAXB parser reads each individual XML file produced by Text2XML, extracts the data by applying business logic, and finally populates the data into the database. In the next section, we show the results obtained from the Chat Statistics Portlet in more detail with the help of an example.

6. Transcript Analysis Results

Chat transcript analysis is crucial for measuring the chat service performance and comparing this service with the phone call service. As discussed earlier, each chat session between an agent and a user is logged in the form of a chat transcript, which is a semi-structured text document. A sample chat transcript is shown in Table 1 below. For privacy reasons, we have replaced the names of the individuals, organizations, products, etc.

Table 1: A sample chat transcript with a rich set of meta data

/////Jan 3, 7:13:57 PM Mathew:::TicketID:::30477801:::start:::6:40:06 PM:::InQ:::6:40:06 PM:::OutQ:::6:40:11 PM:::END:::7:13:57 PM
[6:40:11 PM] Help Desk Agent ('Agent') has joined the session
[6:40:11 PM] User: \"archived notes for some reason are not going into the folders they are contained in when I manually archive them(when I select multiple notes, not when I do a single note).\n\n\n\nSelect a product: Lotus Notes and Applications\nWhat version of Notes are you using? Lotus Notes 7\nWhat area of Lotus Notes do you need help with? Archiving\n\"
[6:40:43 PM] Agent: Hi
[6:41:23 PM] User: Hi - I'm wondering if I may have some parm set incorrectly when archiving as I asked a coworker to try the exact same thing and it works?
[6:41:39 PM] Agent: I am currently working with another customer and will be with you ASAP. I will get back to you within 2 minutes
[6:41:49 PM] User: thanks
[6:57:35 PM] Agent: Hi
[6:57:40 PM] Agent: Sorry for keep you waiting
[6:58:15 PM] User: so I have been playing around more with the archiving tool and I can't seem to find a reason why it does this?
[6:58:52 PM] User: I just archived several notes 4 more times and 3 out of 4 times it worked great. 1 more time it proceeded to not put the documents where I expected them to go?
[7:00:47 PM] Agent: Can you close your lotus notes
[7:01:15 PM] User: done
[7:01:57 PM] Agent: Now click on my computer and then go to c drive and open the notes folder
[7:02:29 PM] User: I'm in the C:\\notes directory
[7:03:01 PM] User: however my archive is in another directory that gets backed up if that matters
[7:03:38 PM] Agent: now open the data folder and then try to delete the file cache.ndk and log.nsf files from the data folder
[7:04:23 PM] User: done
[7:04:44 PM] Agent: now open your notes
[7:04:54 PM] Agent: and try if you still have the same issue
[7:04:55 PM] User: dia
[7:06:19 PM] User: well the notes that are archived aren't in the correct folder, and being that the problem is some what intermittent how will I know if this fixed the problem?
[7:06:48 PM] User: Is there a way from the archive log to get the notes that I archived(~300) into the right folder?
[7:10:32 PM] Agent: let me check this for you
[7:10:52 PM] User: I really need to leave the office soon. Unless you have something quick is there a way you could email me some ideas?
[7:11:23 PM] Agent: Sure will try to look into some options and try to Email you
[7:12:00 PM] Agent: Or else you can always call us with the ticket number so that we can help you with your issue
[7:12:20 PM] User: That would be great thanks. I know the emails are getting moved to my 'archive' file. Its just without them being organized in their original folders it makes them very hard to be useful.
[7:12:33 PM] User: Please just send me a note if you have any ideas and I can try later. Thanks.
[7:12:54 PM] Agent: Sure will definitely do that
[7:13:00 PM] MathewL: Thanks.
[7:13:28 PM] Agent: Apart from this issue is there any thing else I can assist you with?
[7:13:41 PM] User: not at this moment. Thx

The chat transcript is captured with a very rich set of related meta data for our study. The first line of the transcript records the overall system time stamps as the chat request flows through the system queue from the end user to the agent: for example, the time stamps of the chat entering and exiting the chat server queue, and the time stamp of each chat message. This meta data is useful for analyzing system dynamics such as session length and system performance.
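A transcript body in the format shown in Table 1 can be tokenized with a small parser like the following illustrative sketch (the regular expression and function name are ours, not the production extractor); system notices without a "Speaker:" prefix are skipped.

```python
import re

# Matches lines of the form '[h:mm:ss AM] Speaker: text'.
LINE_RE = re.compile(r"^\[(\d{1,2}:\d{2}:\d{2} [AP]M)\]\s*(\w[\w ]*?):\s*(.*)$")

def parse_lines(transcript_body):
    """Parse time-stamped chat lines into (time, speaker, text) tuples;
    non-matching lines (e.g. '... has joined the session') are skipped."""
    out = []
    for line in transcript_body.splitlines():
        m = LINE_RE.match(line)
        if m:
            out.append(m.groups())
    return out
```

Tuples in this form are the natural input for computing the per-session line counts and timing fields described below.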

We also record the problem ticket number created for this chat request, which enables the help desk operator to trace back and forth between a chat transcript and the associated problem ticket record in the problem ticket database system. Users initiate a chat request from a portal, upon which they are first prompted to enter their problem description. This problem description is automatically recorded in the problem ticket database, and also appears in the chat transcript sent to the agent's chat session window. All the chat transactions within a chat session are time stamped. These time stamps are useful for analyzing the detailed dynamics of the agent handling the chat. We extract the following fields from these chat transcripts and put them in the form of an XML file.

TICKET ID: The unique problem ticket number created in the database for this chat session.
DATE: The date on which the chat took place.
CHATAGENT: The name of the help desk agent who serviced the chat request.
CHATUSER: The name of the user who initiated the chat service request.
INQTIME: The time when the chat request entered the chat server queue. A user request stays in the queue until it is routed to an available agent.
OUTQTIME: The time when the request is routed to an agent. At this time, the chat session request pops up on the agent's desktop.
ENDQTIME: The time when the chat session is closed.
FIRSTSENTENCETIME: The time when the first sentence is typed, by either agent or user.
LASTSENTENCETIME: The time when the last sentence is typed, by either agent or user.
WAITTIMEInQ: The time the user waited in the queue. WAITTIMEInQ = OUTQTIME - INQTIME.
WAITTIMEforAgent: The time from OUTQTIME until the agent enters the first answer.
WAITTIME: The time the customer waited until the first answer was obtained from an agent. This is used to calculate the chat equivalent of the common voice call metric ASA (average speed to answer). WAITTIME = WAITTIMEInQ + WAITTIMEforAgent.
CHATTIME: How much time was spent in the actual chatting.
TOTALTIME: WAITTIME + CHATTIME.
AGENTLINES: Total number of chat lines typed by the agent.
USERLINES: Total number of chat lines typed by the user.
AGENTKEYSTROKES: Total number of characters keyed in by the agent during the whole session. We maintain a list of canned responses, which helps us count the canned responses in any given chat transcript.
USERKEYSTROKES: Total number of characters keyed in by the user during the whole session. This indicates how much effort the user has to make to express their problem and get a solution.
CANNEDRESPONSES: The number of canned responses used by the agent during the whole session.
PROBLEM: The problem description for which the user is asking help.
SUPPORTED: In this pilot, not all problem categories are supported by chat. This field indicates whether the problem is supported.
TRANSFERRED: Indicates whether the agent could not provide the complete solution to the problem and passed the ticket to the next level of support. Note that a problem that is not supported by the help desk is never passed on to the next level of support; a ticket passed to the next level is one where the problem is supported by the help desk through eChat but the agent was not able to provide the complete solution.
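The timing fields above can be derived directly from the transcript's header time stamps. The sketch below uses Python's datetime; the argument names are ours, and it assumes a session does not cross midnight and that FIRSTSENTENCETIME-style endpoints follow the definitions just given (we use the agent's first line as the first answer).

```python
from datetime import datetime

FMT = "%I:%M:%S %p"  # clock strings like '6:40:06 PM'

def timing_metrics(inq, outq, first_agent_line, end):
    """Derive the queue/wait metrics defined above from four clock-string
    time stamps; results are in seconds."""
    t = {k: datetime.strptime(v, FMT) for k, v in
         dict(inq=inq, outq=outq, first=first_agent_line, end=end).items()}
    wait_in_q = (t["outq"] - t["inq"]).total_seconds()
    wait_for_agent = (t["first"] - t["outq"]).total_seconds()
    wait = wait_in_q + wait_for_agent        # chat analogue of voice ASA
    chat = (t["end"] - t["first"]).total_seconds()
    return {"WAITTIMEInQ": wait_in_q, "WAITTIMEforAgent": wait_for_agent,
            "WAITTIME": wait, "CHATTIME": chat, "TOTALTIME": wait + chat}
```

Applied to the header in Table 1 (InQ 6:40:06 PM, OutQ 6:40:11 PM, first agent line 6:40:43 PM, END 7:13:57 PM), this yields a 5-second queue wait and a 37-second total wait.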

As explained earlier, the eChat Statistics Portlet displays a wide range of statistics and business artifacts about these data. Before we can explain these statistics, we need to define the following quantities:
N: Total number of problem tickets received through eChat during a particular period.
N1: Number of supported problem tickets that were closed, i.e., not transferred.
N2: Number of unsupported tickets.
N3: Number of supported tickets that remained open, i.e., were transferred.
FCR (First Call Resolution): The fraction of supported tickets received in a month that were resolved without transferring them to the next level of support. By definition, FCR = N1/(N1+N3).
FTF (First Time Fix): The fraction of tickets received in a month that were not transferred to the next level of support. By definition, FTF = (N1+N2)/N.
%Xfr: The percentage of tickets received in a month that were transferred to the next level of support. By definition, %Xfr = (100*N3)/N.
The transcript analysis results of a 3-month pilot study with the eChat service system are shown in Table 2. In this pilot, the chat service was offered as an option to end users in a global enterprise through a web portal. For pilot purposes, the chat service was offered for a limited but randomly selected set of problem categories. For these categories, the end user is offered chat after they attempt self-help options on the portal and do not find an answer. The purpose of the pilot was to a) test the overall functioning of the help desk chat system; b) determine end users' acceptance of the chat channel as an alternative to the voice call channel; c) study the ability of help desk agents to handle chats in general and simultaneous multiple chats in particular; and d) compare chat with phone calls along key performance indicators to determine whether chat is indeed a viable alternative to phone calls.
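These definitions translate directly into code. The sketch below assumes every ticket falls into exactly one of the three buckets, so N = N1 + N2 + N3; the function name is ours.

```python
def helpdesk_kpis(n1, n2, n3):
    """KPIs as defined above: n1 = supported tickets closed at the help
    desk, n2 = unsupported tickets, n3 = supported tickets transferred."""
    n = n1 + n2 + n3
    fcr = n1 / (n1 + n3)      # First Call Resolution
    ftf = (n1 + n2) / n       # First Time Fix
    xfr = 100 * n3 / n        # % transferred to the next level
    return fcr, ftf, xfr
```

Note that FCR excludes unsupported tickets from its denominator while FTF counts them as "fixed", which is why FTF can exceed FCR.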


Symbol

Description

Nov 2006

Dec 2006 Jan 2007

FCR

N1/(N1+N3)

89.78%

90.69%

91.93%

FTF

(N1+N2)/N

90.39%

90.92%

92.23%

%Xfr

%Tickets Transferred to Level 2 Desk ((100*N3)/N)

9.61%

9.07%

7.77%

ASA

Average Speed to Answer

18.50 Sec

15.58 Sec 21.56 Sec

ACD

Average Chat Duration

19.2 Min

19.47Min 17.63 Min

TBT

Total Busy Time of All the Agents (Total Chat Duration of All the Chats)

136.75110 196.6625 219.1405 Hrs. Hrs. 5 Hrs.

N / TBT

Avg. no. of Chat Sessions Handled Per Busy Hour

3.122

3.081

increased before performance degradation is observed. Another interesting observation in Figure 5 is that there are times when the agent has no chat request. At these times, they were handling other requests coming from online ticket/email submissions of problems. In Table 2, we also compare some of the metrics with the counter parts of phone calls. We discover that ASA for chat is about 20 seconds compared to about 70 seconds for phone calls in the same help desk over the same study period. However, the chat duration is longer than the phone call (19.36 minutes versus 11.4 minutes). Chats have an about 90% FCR which is about the same as Voice call even at the multi-tasking among 3 simultaneous chat sessions (See Figure 5). This shows that chat can reduce the cost of a contact center dramatically without sacrificing its service performance. The ASA for chat is especially much better than voice calls. This is related to the agent’s multi-tasking in chat, so can dramatically reduce the user’s waiting time.

3.404

Table 2: Results of Transcript Analysis Table 2 shows several interesting aspects resulting from the analysis of this real data. The analysis discovered that agent can constantly handle 3 simultaneous chat sessions without any degradation of the ASA. Although in the Buddy-list IM model users get used to open many simultaneous chat sessions, the situation in help desk service model is significantly different. In help desk Internet Chat service, the agent needs to concentrate on searching and finding a solution to each user’s problem. Hence, it is not obvious that agent can be multi-tasking on simultaneous chat sessions. We believe that this is the first time in the published literature that we are able to quantitatively demonstrate that agents can handle simultaneous chat sessions (at least up to three sessions) without degradation in average chat duration.

Typing Efforts of Users and Agents 120

N o o f S e n te n c e s T y p e d

100

80

User

60

Agent 40

20

0

1 3 5

7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57 59 61 63 65 67 69 71 73 75 77 79 81 83

Session ID

Figure 6: Number of Chat Sentences Typed by the User (purple) Versus the Agent (blue) Figure 6 shows another result of the chat transcript analysis that provides more interesting insights. It compares the number of sentences typed by the end user with the number of sentences typed by the agent across numerous chat sessions. Interestingly, the agent consistently types fewer sentences than the end user. This is an interesting characteristic of help desk chat sessions. It appears that the agents tend to be more terse compared to end users who tend to be more vocal in describing their problem, responding to agents’ questions etc. In other words, more time is spent by the end users than agents on chat sessions. This is a plausible explanation of the agents’ ability to multi-task across multiple user sessions. It is also important to note that this also gives agents a distinct advantage in chats where they have this multi-tasking option compared to phone calls where they are forced to stay on a single session even if the user is doing most of the talking. Finally, Figure 6 does include even those sentences which were not explicitly typed by the agent but resulted from the agent clicking on a canned response Canned responses further reduce the time spent by the agent in a chat session.

Figure 5: Agent’s Workload Diagram Figure 5 shows the typical variation of a chat agent’s workload. As can be seen, the agents are handling multiple chat sessions quite frequently. As discussed earlier, one of the system parameters is the maximum number of simultaneous sessions. This is fixed based on the help desk policy. In this case, this number was fixed at three. Hence, Figure 5 shows only a maximum of three sessions happening at any time. One of the important results of our pilot is that the agents are, indeed, able to handle the peak load of three without a problem. This also indicates that it may be worth exploring how high that number can be

Finally, we randomly conducted user satisfaction surveys on numerous users to understand the acceptability of the chat channel. As seen in Figure 7, the user survey feedback indicates that customer satisfaction with chat is high across virtually all areas: the wait time for chat to begin, the wait time between responses, the help provided by the agent, and the willingness of the user to use chat to contact the help desk.

In addition, all chat transactions within a chat session are time stamped. These micro time stamps are useful for analyzing the detailed dynamics of how an agent handles a chat: for example, how long within a session the agent is in a polite chat mode and how long the agent is actually resolving the problem. A rich set of user and agent interaction dynamics can be studied further by analyzing the text of the chat along with these micro time stamps. We conclude that Internet chat systems for help desks are a fertile area for further research.
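As a concrete illustration of this kind of micro-time-stamp analysis, the sketch below splits a session into "polite chat" and "problem resolution" time. The keyword heuristic, phase labels, and sample messages are all assumptions for the example; a real analysis would likely classify messages with a trained model rather than a word list.

```python
# Hedged sketch: attribute inter-message time to "polite" vs. "solving"
# phases using per-message time stamps. Heuristic and data are hypothetical.
from datetime import datetime, timedelta

PLEASANTRIES = ("hello", "hi", "thank", "welcome", "bye")

def phase_durations(messages):
    """messages: chronological list of (iso_timestamp, text).
    Returns (polite, solving) timedeltas for the session."""
    polite = solving = timedelta()
    stamps = [(datetime.fromisoformat(ts), text) for ts, text in messages]
    # time between message i and i+1 is attributed to message i's phase
    for (t0, text), (t1, _) in zip(stamps, stamps[1:]):
        if any(w in text.lower() for w in PLEASANTRIES):
            polite += t1 - t0
        else:
            solving += t1 - t0
    return polite, solving

msgs = [
    ("2006-01-04T10:00:00", "Hello, how may I help you?"),
    ("2006-01-04T10:00:40", "My password expired and the reset fails."),
    ("2006-01-04T10:05:40", "Reset link sent, please retry."),
    ("2006-01-04T10:06:10", "It works now, thank you!"),
]
polite, solving = phase_durations(msgs)
print(polite, solving)
```

Aggregated over many transcripts, such per-phase durations would quantify how much agent time goes to pleasantries versus actual problem solving.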

CSAT surveys till 4th January, 2006

Attribute                            Total Surveys   # of Responses   Satisfied %
Wait time (for chat to begin)             233              229             79
Wait time (between responses)             233              229             76
Help provided to you by the agent         233              225             72
Using chat to ask for help                233              228             80

Figure 7: Results of Customer Satisfaction Surveys
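The "Satisfied %" column in Figure 7 can be read as the share of answered surveys that reported satisfaction, rounded to a whole percent. The sketch below reproduces the first row; the raw count of positive responses (181) is an inferred figure, since the table reports only the percentage.

```python
# Hedged sketch of the Satisfied % computation behind Figure 7.
# The positive-response counts are back-calculated assumptions.
def satisfied_percent(responses, satisfied):
    """responses: surveys actually answered; satisfied: of those, positive."""
    return round(100 * satisfied / responses)

# "Wait time (for chat to begin)": 229 responses, ~181 positive -> 79%
print(satisfied_percent(229, 181))
```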

7. Summary

We presented an integrated enterprise system that enables Internet Chat for help desks, which is significantly different from traditional collaborative chat systems. We developed a system architecture providing distinct help desk chat capabilities, including scheduling and routing functionality, archival of problem resolution sessions, integration with ticketing databases, unification with knowledge management systems, and efficient interfaces for agents to effectively handle and multi-task several chat sessions. We also described a pilot application of the developed Internet chat architecture in a real enterprise help desk setting. The analysis revealed that agents can handle three simultaneous chat sessions without degradation in their effectiveness in solving users' problems. Since a help desk chat agent needs to concentrate on finding solutions to users' problems, it was not obvious that agents would be able to handle simultaneous chats effectively. We believe this is the first time in the published literature that the help desk agents' ability to handle multiple simultaneous chat sessions has been quantified. Although the average chat length is about two times the voice call length, the ability to conduct three simultaneous chat sessions significantly reduces the cost of a chat-based help desk. In addition, the average speed to answer (ASA) of chat is 3.5 times faster than that of voice calls, which can result in better user satisfaction. Customer satisfaction surveys over a large user population indicated excellent CSAT for chat along multiple dimensions. Finally, our experience with help desks indicates that there is room for even greater gains in productivity, for example through further analysis and mining of chat sessions, increased automation in chat responses, and real-time translation capabilities. Chat bots can be introduced to aid the agent in rapidly determining the problem being posed by the user, prompting appropriate solutions to the agent, and even automatically responding to some user requests.

8. References

[1] S. Güven, M. Podlaseck, and G. Pingali, "Transforming Contact Center Processes to Facilitate Agent Efficiency and End User Enablement", to be submitted for publication.
[2] G. Pingali, G. Chandalia, N. Modani, R. Gupta, L. Mignet, T. Syeda-Mahmood, G. Lohman, M. Podlaseck, and S. Güven, "Multimodal Mining for Contact Centers: Experiences, Opportunities, Challenges", In Proc. IJCAI 2007 (Workshop on Multimodal Retrieval and Applications), Hyderabad, India, 6 January 2007.
[3] Y. Park, R. Byrd, and B. K. Boguraev, "Automatic glossary extraction: beyond terminology identification", In Proc. International Conference on Computational Linguistics, 2003, 1-7.
[4] http://www.jfree.org/jfreechart/
[5] http://jasperforge.org/sf/projects/jasperreports
[6] http://lucene.apache.org/java/docs/
[7] J. Oikarinen and D. Reed, "Internet Relay Chat Protocol", IETF RFC 1459, May 1993.
[8] R. Jennings, E. Nahum, D. Olshefski, D. Saha, Z.-Y. Shae, and C. Waters, "A Study of Internet Instant Messaging and Chat Protocols", IEEE Network, July/August 2006.
[9] H. Chesbrough and J. Spohrer, "A Research Manifesto for Services Science", Communications of the ACM, 49(7): 35-40, July 2006.
[10] R. T. Rust and C. Miu, "What Academic Research Tells Us About Service", Communications of the ACM, 49(7): 49-54, July 2006.
