Relevance of the study: contact centers are one of the primary channels of communication between customers and businesses, and maintaining service quality and communication standards is a key factor in building customer loyalty and corporate reputation, directly affecting customer satisfaction and service efficiency. Manual quality control of conversations cannot provide full coverage: typically only 10–15% of dialogues are reviewed. Moreover, such evaluations are often subjective, depending on the individual skills of quality controllers and the frequency of team calibration. Large language models make it possible not only to automate dialogue analysis but also to increase its objectivity and scalability without additional personnel. These factors determine the relevance and practical significance of the study.
Purpose of the study: to improve the process of communication quality control in contact centers by developing a prototype of an automated evaluation system based on a multi-level dialogue analysis model and large language models (LLMs).
Research tasks:
1. to analyze modern approaches to communication quality control in contact centers;
2. to develop a conceptual model of a quality control system;
3. to design a multi-level communication evaluation model that includes criteria, subcriteria, and formalized indicators of violations (a structural sketch follows this list);
4. to substantiate the selection of algorithms and architectural solutions for automated dialogue analysis;
5. to develop, implement, and experimentally validate an MVP of an automated quality control system by comparing LLM-based evaluation results with expert assessments;
6. to determine directions for further system improvement and scaling.
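As an illustration of task 3, the sketch below shows one possible formalization of the multi-level model as nested criteria, subcriteria, and violation indicators. All class names and the sample "Contact opening" criterion are hypothetical and serve only to convey the intended structure, not the thesis's actual catalogue.

```python
from dataclasses import dataclass, field

@dataclass
class ViolationIndicator:
    """A formalized, checkable sign of a violation within a subcriterion."""
    code: str          # machine-readable identifier, e.g. "GRT-01"
    description: str   # what the evaluator should look for in the dialogue
    weight: float      # contribution to the subcriterion penalty

@dataclass
class Subcriterion:
    name: str
    indicators: list[ViolationIndicator] = field(default_factory=list)

@dataclass
class Criterion:
    name: str
    subcriteria: list[Subcriterion] = field(default_factory=list)

# Hypothetical fragment of the evaluation model: one criterion with
# one subcriterion and two formalized violation indicators.
greeting = Criterion(
    name="Contact opening",
    subcriteria=[
        Subcriterion(
            name="Greeting",
            indicators=[
                ViolationIndicator("GRT-01", "Agent does not greet the customer", 1.0),
                ViolationIndicator("GRT-02", "Agent does not introduce themselves", 0.5),
            ],
        )
    ],
)
```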
Object of the research: the process of communication quality control in contact centers, including the informational, software, and organizational components of automated evaluation systems.
Subject of the research: methods and algorithmic principles of automated communication quality analysis in contact centers based on a multi-level evaluation model using large language models to detect and interpret violations in dialogues.
Scientific novelty of the research:
- improvement of dialogue analysis methods through the formation of an evidence-based evaluation framework;
- application of a knowledge base and an instruction retrieval mechanism for large language models (a retrieval sketch follows this list);
- substantiation of an evaluation accuracy verification methodology based on comparison with reference results and predefined instruction parameters.
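As a minimal sketch of the instruction retrieval mechanism, assuming a knowledge base that maps criteria to evaluation instructions, the example below selects the instructions most relevant to a dialogue by naive keyword overlap before assembling the LLM prompt. A production system would likely use embedding-based retrieval; all names here are hypothetical.

```python
def retrieve_instructions(dialogue_text: str,
                          knowledge_base: dict[str, str],
                          top_k: int = 3) -> list[str]:
    """Rank knowledge-base instructions by keyword overlap with the
    dialogue and return the top_k most relevant ones."""
    words = set(dialogue_text.lower().split())
    scored = sorted(
        knowledge_base.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

# Hypothetical usage: the selected instructions are inserted into the
# LLM prompt so the model evaluates only against relevant rules.
kb = {
    "greeting": "Check that the agent greets the customer and introduces themselves.",
    "closing": "Check that the agent summarizes the resolution and says goodbye.",
    "tone": "Check that the agent remains polite and does not interrupt.",
}
instructions = retrieve_instructions("hello thanks for calling how can I help", kb, top_k=2)
prompt = "Evaluate the dialogue against these rules:\n" + "\n".join(instructions)
```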
Practical value of the work:
- a prototype of an automated communication quality control system has been developed;
- detection of violations and generation of explanatory feedback have been implemented (a comparison sketch follows this list);
- the proposed architecture is suitable for integration into corporate systems and scalable across different communication channels;
- a multi-level dialogue evaluation model has been proposed.
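As a hedged illustration of the experimental validation in task 5, the sketch below compares binary violation flags produced by the LLM with expert reference labels using raw agreement and Cohen's kappa. The flag values and function names are hypothetical and do not reproduce the thesis's actual experimental data.

```python
def cohens_kappa(llm_flags: list[int], expert_flags: list[int]) -> float:
    """Chance-corrected agreement between two binary annotators."""
    n = len(llm_flags)
    observed = sum(a == b for a, b in zip(llm_flags, expert_flags)) / n
    # Expected agreement under independence of the two annotators.
    p_llm = sum(llm_flags) / n
    p_exp = sum(expert_flags) / n
    expected = p_llm * p_exp + (1 - p_llm) * (1 - p_exp)
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0

# Hypothetical per-dialogue violation flags (1 = violation detected)
# for one indicator, from the LLM and from the expert reference.
llm =    [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]
expert = [1, 0, 1, 1, 1, 0, 0, 0, 0, 0]
agreement = sum(a == b for a, b in zip(llm, expert)) / len(llm)
print(f"raw agreement = {agreement:.2f}, kappa = {cohens_kappa(llm, expert):.2f}")
```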
Research advisor: A. Protasov


