Data quality is central to data management and, more broadly, to business decision-making: it is an essential prerequisite for reliable analyses and, consequently, for effective business strategies. The growing complexity and heterogeneity of data flows make data quality management an increasingly critical challenge in preserving the accuracy and consistency of extracted information. In this context, efficiency and timeliness become fundamental requirements; however, data quality processes are often resource-intensive and demand specialized expertise. From this perspective, the adoption of generative artificial intelligence models, particularly Large Language Models (LLMs), opens new opportunities to support data management and quality control activities. This study proposes a conceptual framework for modelling the use of an LLM agent to assist Data Quality and Application Maintenance teams. The framework is designed to facilitate the analysis of anomalies and malfunctions detected within the systems responsible for data quality controls. Furthermore, the agent can suggest a set of candidate technical solutions, specifying their potential impacts on the existing infrastructure and on scheduled processes. This approach provides a solid foundation for future applications, promoting more efficient anomaly management and strategic use of existing technical knowledge.
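To make the intended workflow concrete, the following is a minimal illustrative sketch, not part of the framework itself: an agent receives an anomaly record from a data quality control system, assembles a prompt, and returns solution proposals annotated with their potential impacts. All names here (`Anomaly`, `build_prompt`, `call_llm`, `triage`) and the anomaly schema are hypothetical; `call_llm` is a stand-in for whatever chat-completion backend an implementation would actually use.

```python
from dataclasses import dataclass, field

@dataclass
class Anomaly:
    """One anomaly reported by a data quality control (hypothetical schema)."""
    check_name: str          # the quality rule that failed, e.g. a completeness check
    table: str               # dataset on which the rule failed
    description: str         # error message produced by the control system
    sample_rows: list = field(default_factory=list)  # small evidence sample

def build_prompt(anomaly: Anomaly) -> str:
    """Assemble the context the agent sends to the LLM."""
    return (
        "You assist a Data Quality and Application Maintenance team.\n"
        f"Control '{anomaly.check_name}' failed on table '{anomaly.table}'.\n"
        f"Error description: {anomaly.description}\n"
        f"Evidence sample: {anomaly.sample_rows}\n"
        "Propose candidate technical solutions and, for each, state the "
        "potential impact on existing infrastructure and scheduled jobs."
    )

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM client; returns a canned response for the demo."""
    return "(LLM response with proposed solutions and impact notes)"

def triage(anomaly: Anomaly) -> str:
    """End-to-end pass: anomaly record in, annotated solution proposals out."""
    return call_llm(build_prompt(anomaly))

if __name__ == "__main__":
    a = Anomaly(
        check_name="null_rate_check",
        table="sales.orders",
        description="NULL rate on customer_id rose from 0.1% to 12%",
        sample_rows=[{"order_id": 42, "customer_id": None}],
    )
    print(triage(a))
```

In a production setting, the `call_llm` stub would be replaced by a real provider client, and the agent's output would feed a human review step before any change reaches infrastructure or scheduled processes.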