KUBS News

Interview with Professor Kyuhan Lee – Recipient of the SK Paper Award 

2025.08.06 | International Office

In 2025, Professor Kyuhan Lee of Korea University Business School co-authored a paper titled "Leveraging Large Language Models for Hate Speech Detection: Multi-Agent, Information-Theoretic Prompt Learning for Enhancing Contextual Understanding" (with Sudha Ram), which has been accepted for publication in the Journal of Management Information Systems (forthcoming). The paper was honored with the prestigious SK Paper Award. This study proposes a novel AI-based approach to hate speech detection that configures multiple large language models as autonomous agents, which generate a variety of prompts and select the optimal one using an entropy-based criterion. The paper has been highly recognized in the academic community for expanding the potential of leveraging technology to realize social value in the field of information systems.


Q1. First of all, congratulations on receiving the SK Award. Could you share your thoughts on receiving this honor? 
A1. Thank you very much. Not many universities have systems that directly reward research achievements like this, so I truly appreciate Korea University’s strong support and encouragement for academic research. This award serves as a great motivation for me, and I take it as a meaningful reminder to continue working hard and striving to produce impactful research. 

Q2. Could you briefly introduce the research for which you received the award? 
A2. My research focuses on designing and developing technological systems within the broader field of business studies. While much of the research in business emphasizes the interaction between humans and technology, my approach centers on building the technology itself. This paper proposes an AI-based system to address the growing problem of hate speech on online platforms. Given the massive volume of content generated in real time on social media, it is practically impossible for human moderators to review everything manually. This makes automated content moderation systems essential. In this study, we positioned large language models not just as classifiers but as "autonomous agents" capable of engaging in deliberation. By allowing multiple AIs to discuss and reach a collective judgment, we explored a new approach that offers the potential to detect hate speech more accurately than traditional methods.

Q3. What inspired you to start this research? 
A3. This research does not directly lead to commercial profit in the traditional sense. I have always considered the question, “What kind of positive impact can this research have on society?” to be a key guiding principle in my work. Hate speech is not just a matter of discomfort—it is also linked to the broader issue of social contagion. If technology can help prevent its spread, I believe that alone makes the research meaningful. That belief was the primary motivation behind initiating this study. 

Q4. What do you think is the potential impact of this research on society or industry? 
A4. From a business perspective, companies may hesitate to implement social features on their platforms if they risk becoming spaces filled with hate speech—something that advertisers tend to avoid. In this context, automated content moderation systems become critical for protecting brand image and sustaining advertising revenue. On a societal level, since hate speech tends to spread like a contagion, technologies capable of effectively detecting and blocking it can make a meaningful contribution to the health and integrity of online communities. I hope this research can support efforts to use technology to reduce social conflict and promote a healthier, more constructive communication culture. 

Q5. What are your future research plans or areas of interest? 
A5. Moving forward, I want to go beyond just building systems and start analyzing AI technologies themselves. For example, can AI truly be considered a “thinking” entity? Can the outputs generated by AI be called “creative”? I aim to explore such philosophical questions from a technical perspective. Although technology continues to advance rapidly, how we understand and internalize it remains an area that requires deeper reflection and discussion. By combining technical analysis with philosophical inquiry, I hope to contribute to more meaningful and thought-provoking discourse on the role of AI in society. 

Q6. Finally, do you have any message for students? 
A6. I believe that diverse experiences are essential for developing strong problem-solving skills. Of course, studying your major and engaging in research are important, but activities that may seem unrelated at first can often lead to unexpected insights. For example, if you’re preparing for law school, it’s not just about mastering legal knowledge — a broad understanding of society is equally important. Even if something doesn’t interest you right away, I encourage you to step out of your comfort zone and explore new areas. These experiences will ultimately serve as a valuable foundation for shaping your own perspective. 

Summary of Professor Kyuhan Lee’s Paper 
Leveraging Large Language Models for Hate Speech Detection: Multi-Agent, Information-Theoretic Prompt Learning for Enhancing Contextual Understanding 
This study proposes a novel AI-based system for detecting hate speech by configuring large language models (LLMs) as autonomous agents that interact with one another to reach a decision. The researchers developed a method for automatically generating a variety of prompts and applied an information-theoretic, entropy-based criterion to select the optimal prompt, thereby enhancing contextual understanding and improving detection accuracy. This multi-agent approach addresses the limitations of traditional hate speech detection methods and demonstrates the potential for more sophisticated and efficient content moderation in online environments where vast volumes of content are generated in real time. Notably, the study presents a promising strategy for technologically controlling the social diffusion of hate speech and contributing to the development of healthier digital communities. As such, it has been recognized as a significant case of extending the potential for realizing social value through information systems.
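The summary above describes prompt candidates being scored with an information-theoretic, entropy-based criterion. As a rough illustration only (this is a minimal sketch, not the authors' implementation: the agent functions, the binary labels, and the choice to minimize mean label entropy across agents are all assumptions), one way such a criterion could work is to prefer the prompt on which the agents' judgments are most consistent:

```python
import math
from collections import Counter

def label_entropy(labels):
    """Shannon entropy (in bits) of a list of categorical labels."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def select_prompt(candidate_prompts, agents, posts):
    """Score each candidate prompt by asking every agent to label every post,
    then pick the prompt whose agent labels disagree the least
    (lowest mean entropy, i.e. the most consistent collective judgment)."""
    best_prompt, best_score = None, float("inf")
    for prompt in candidate_prompts:
        entropies = []
        for post in posts:
            # Each agent is a callable returning e.g. "hate" or "not_hate"
            votes = [agent(prompt, post) for agent in agents]
            entropies.append(label_entropy(votes))
        score = sum(entropies) / len(entropies)
        if score < best_score:
            best_prompt, best_score = prompt, score
    return best_prompt, best_score
```

Under this sketch, a prompt on which all agents agree scores an entropy of 0, while maximal disagreement between two agents scores 1 bit; the actual criterion in the paper may differ in how agent outputs and uncertainty are measured.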