
Marketing Communications with Large Language Models (LLMs) and Deep Learning for Real-Time Personalized Content Selection


Paper ID: EIJTEM_2026_13_1_136-148

Author's Name: Kasi Viswanath Kommana

Volume: 13

Issue: 1

Year: 2026

Page No: 136-148

Abstract:

Large Language Models (LLMs) and deep learning have advanced rapidly to enable real-time, personalised content selection at scale, transforming marketing communications. As traditional marketing strategies often struggle to keep pace with evolving consumer preferences and engagement patterns, AI-driven solutions are becoming increasingly necessary. This paper investigates how deep learning architectures, together with LLMs such as GPT-4, LLaMA, and Falcon, automate and optimise tailored marketing content across multiple digital channels. We examine how transformer-based LLMs and Natural Language Processing (NLP) improve consumer sentiment analysis, intent identification, and contextual content generation. We also apply retrieval-augmented generation (RAG) and reinforcement learning to build adaptive marketing strategies that continuously refine content based on real-time user interactions and behavioural data. Key challenges in AI-driven marketing communications, including bias in AI-generated content, ethical concerns, data privacy, and security, are also addressed. Solutions such as adversarial training, differential privacy, and federated learning are investigated to ensure compliant, secure AI-driven marketing automation. Through empirical analysis and real-world case studies, we evaluate LLM-driven content personalisation against conventional marketing automation systems, measuring its impact on engagement metrics, conversion rates, and customer retention. The results show that LLM-powered marketing communications substantially improve audience engagement, content relevance, and overall marketing effectiveness. This paper offers companies a systematic framework for deploying scalable AI-driven marketing solutions while upholding ethical AI practices and data protection.
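To illustrate the retrieval step of the RAG-style personalisation pipeline sketched in the abstract, the following is a minimal, self-contained example, not the paper's implementation: candidate marketing snippets are ranked against a user-interest vector by cosine similarity, and the top matches would then be injected into an LLM prompt for content generation. The catalogue, toy embedding vectors, and function names (`cosine`, `retrieve_top_k`) are illustrative assumptions; in practice the embeddings would come from a trained encoder.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_top_k(user_vec, catalog, k=2):
    """Rank candidate content snippets by similarity to the user profile
    and return the names of the k best matches."""
    scored = sorted(catalog.items(),
                    key=lambda kv: cosine(user_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

# Toy catalogue: snippet name -> embedding of its content.
catalog = {
    "running-shoes-promo":  [0.9, 0.1, 0.0],
    "yoga-mat-discount":    [0.2, 0.8, 0.1],
    "coffee-subscription":  [0.0, 0.1, 0.9],
}
# User-interest vector, e.g. derived from recent browsing behaviour.
user_vec = [0.8, 0.3, 0.0]

print(retrieve_top_k(user_vec, catalog))
# -> ['running-shoes-promo', 'yoga-mat-discount']
```

The retrieved snippets ground the LLM's output in approved marketing material; the reinforcement-learning component discussed in the paper would then adjust rankings from observed engagement signals.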

Keywords: Large Language Models (LLMs), Deep Learning, Personalized Marketing, AI-Powered Content Selection

References:

1. Zhang, W.; Qin, J.; Guo, W.; Tang, R.; He, X. Deep Learning for Click-Through Rate Estimation. arXiv 2021, arXiv:2104.10584.
2. Reddy, S.; Beg, H.; Overwijk, A.; O’Byrne, S. Sequence Learning: A Paradigm Shift for Personalized Ads Recommendations. 2024. Available online: https://engineering.fb.com/2024/11/19/data-infrastructure/sequence-learning-personalized-ads-recommendations/ (accessed on 19 November 2024).
3. Viktoratos, I.; Tsadiras, A. A Machine Learning Approach for Solving the Frozen User Cold-Start Problem in Personalized Mobile Advertising Systems. Algorithms 2022, 15, 72.
4. Wang, R.; Shivanna, R.; Cheng, D.; Jain, S.; Lin, D.; Hong, L.; Chi, E. DCN V2: Improved Deep & Cross Network and Practical Lessons for Web-scale Learning to Rank Systems. In Proceedings of the WWW ’21: The Web Conference 2021, Ljubljana, Slovenia, 19–23 April 2021; Volume 2, pp. 1785–1797.
5. Zhao, F.; Huang, C.; Xu, H.; Yang, W.; Han, W. RGMeta: Enhancing Cold-Start Recommendations with a Residual Graph Meta-Embedding Model. Electronics 2024, 13, 3473.
6. Ye, Z.; Zhang, D.J.; Zhang, H.; Zhang, R.; Chen, X.; Xu, Z. Cold Start to Improve Market Thickness on Online Advertising Platforms: Data-Driven Algorithms and Field Experiments. Manag. Sci. 2022, 69, 3838–3860.
7. Ouyang, W.; Zhang, X.; Ren, S.; Li, L.; Zhang, K.; Luo, J.; Liu, Z.; Du, Y. Learning Graph Meta Embeddings for Cold-Start Ads in Click-Through Rate Prediction. In Proceedings of the SIGIR ’21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Online, 11–15 July 2021; Volume 1, pp. 1157–1166.
8. Liu, Y.; Ma, L.; Wang, M. GAIN: A Gated Adaptive Feature Interaction Network for Click-Through Rate Prediction. Sensors 2022, 22, 7280.
9. Wang, Z.; She, Q.; Zhang, P.; Zhang, J. ContextNet: A Click-Through Rate Prediction Framework Using Contextual Information to Refine Feature Embedding. arXiv 2017.
10. Dilbaz, S.; Saribas, H. STEC: See-Through Transformer-based Encoder for CTR Prediction. arXiv 2023.
11. Hu, E.J.; Shen, Y.; Wallis, P.; Allen-Zhu, Z.; Li, Y.; Wang, S.; Wang, L.; Chen, W. LoRA: Low-Rank Adaptation of Large Language Models. arXiv 2021, arXiv:2106.09685.
12. Tan, Z.; Liu, Z.; Jiang, M. Personalized Pieces: Efficient Personalized Large Language Models through Collaborative Efforts. arXiv 2024, arXiv:2406.10471.
13. Lewis, P.; Perez, E.; Piktus, A.; Petroni, F.; Karpukhin, V.; Goyal, N.; Küttler, H.; Lewis, M.; Yih, W.; Rocktäschel, T.; et al. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. Adv. Neural Inf. Process. Syst. 2020, 33, 9459–9474.
14. Salemi, A.; Kallumadi, S.; Zamani, H. Optimization Methods for Personalizing Large Language Models through Retrieval Augmentation. In SIGIR 2024, Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, Washington, DC, USA, 14–18 July 2024; Association for Computing Machinery, Inc.: New York, NY, USA, 2024; pp. 752–762.
15. Chen, J.; Liu, Z.; Huang, X.; Wu, C.; Liu, Q.; Jiang, G.; Pu, Y.; Lei, Y.; Chen, X.; Wang, X.; et al. When Large Language Models Meet Personalization: Perspectives of Challenges and Opportunities. World Wide Web 2024, 27, 42.
16. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. BMJ 2021, 372, 71.
17. Xing, M.; Zhang, R.; Xue, H.; Chen, Q.; Yang, F.; Xiao, Z. Understanding the Weakness of Large Language Model Agents within a Complex Android Environment. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Barcelona, Spain, 25–29 August 2024; pp. 6061–6072.
18. Uchida, S. Using early LLMs for corpus linguistics: Examining ChatGPT’s potential and limitations. Appl. Corpus Linguist. 2024, 4, 100089.
19. Fan, W.; Ding, Y.; Ning, L.; Wang, S.; Li, H.; Yin, D.; Chua, T.S.; Li, Q. A Survey on RAG Meeting LLMs: Towards Retrieval-Augmented Large Language Models. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Barcelona, Spain, 25–29 August 2024; pp. 6491–6501.
20. Wang, H.; Li, Y.F. Large Language Model Empowered by Domain-Specific Knowledge Base for Industrial Equipment Operation and Maintenance. In Proceedings of the 2023 5th International Conference on System Reliability and Safety Engineering (SRSE), Beijing, China, 20–23 October 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 474–479.
21. Tipparapu, S. IAM-Based Audit Framework to Enhance and Protect the Critical Infrastructure for Distributed Systems. J. Inf. Syst. Eng. Manag. 2025, 10(23s). e-ISSN: 2468-4376. DOI: https://doi.org/10.52783/jisem.v10i23s.3772
