
KUBS News

“The interpretation and diagnosis of an AI model are a task for humans”: CDTB colloquium concludes successfully

2023.03.28 | International Office

 


 

A discussion of “explainable AI,” backed by formal proofs

“Will bring an even greater impact than ChatGPT”

 

The Korea University Business School’s Center for Digital Transformation & Business (CDTB; Director: Prof. Jae-hwan Kim) hosted a colloquium on the subject of “the explainability of AI models” at Korea University’s LG-POSCO Hall last Friday, the 22nd.

Professor Yu-sung Park of the Department of Statistics at Korea University was invited as the sole speaker for the event.

In his opening remarks, Professor Jae-hwan Kim expressed his gratitude to the students who attended the event despite their busy schedules.


He said, “There has been growing demand for the explainability of AI-driven results, given the current situation in which we are so directly exposed to the significant role and influence of AI,” and expressed hope that “this serves as an opportunity to consider the utility it could provide to our research.”

He added a word of encouragement, saying, “I believe this lecture will be a great help to the students who took the time to attend in developing their graduation theses.”

The event took place both online and offline. 

Under the main theme of “the explainability of AI models,” the colloquium discussed ways to explain an AI model’s behavior based on the outputs the trained model produces. “Explainability of AI models” refers to the set of processes and methodologies that allow human users to understand and trust the results and outputs of machine learning algorithms.


The lecture began with the concept of explainer libraries and the main types of explainers, giving a detailed account of specific methods such as SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), Anchors, and counterfactual explanations, along with the underlying principles by which each produces its explanations. Professor Park introduced some of the best-known open-source libraries that implement and visualize these explainers, such as ELI5, Alibi, SHAP, and LIME, noting that this ecosystem is still under active development. He further aided the audience’s understanding by explaining the theory behind each library and presenting statistical demonstrations he had conducted himself, together with detailed visual materials on the process.

Professor Park stressed the significance of the topic, stating, “While the prediction accuracy of ensemble learning or deep learning models consisting of numerous hidden layers is much better than that of traditional linear models, it is difficult to explain how they internally arrive at their predictions… we must be able to intuitively understand the logic of these models and explain AI models on that basis.”
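
For readers less familiar with these tools, the sketch below illustrates the kind of workflow the lecture described, applying the open-source shap library to a scikit-learn tree-ensemble model. The dataset, model, and plotting call are illustrative assumptions, not the materials Professor Park presented.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train an "opaque" ensemble model on a built-in tabular dataset
# (a stand-in for the kinds of models discussed in the lecture).
data = load_diabetes()
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data)

# Each row of shap_values attributes one prediction to the input features;
# the summary plot visualizes these attributions across the whole dataset.
shap.summary_plot(shap_values, data.data, feature_names=data.feature_names)
```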

 

The lecture covered not only the academic underpinnings of these models but also a proposal and discussion of practical use cases relevant to modern society. It introduced machine learning programs useful for real-life problems such as health screenings and housing prices, for example, answering “which health-screening indicators should be improved, and by how much, to bring the risk of cancer below 5 percent,” along with the reasoning process behind the answer.
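
The health-screening question above is, in essence, a counterfactual query. As a rough illustration of the idea only (not the program demonstrated in the lecture), the following sketch searches for the smallest single-feature change that pushes a classifier’s predicted risk below a 5 percent threshold; the dataset, model, and greedy search strategy are all simplifying assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def simple_counterfactual(model, x, feature_grid, target_risk=0.05):
    """Greedy single-feature counterfactual search (illustrative only).

    Tries candidate values for each listed feature and returns the smallest
    single change (feature index, new value) that drives the model's
    predicted risk below target_risk, or None if no single edit suffices.
    """
    best = None
    for i, candidates in feature_grid.items():
        for v in candidates:
            x_cf = x.copy()
            x_cf[i] = v
            # Class 0 (malignant) plays the role of "risk" in this dataset.
            risk = model.predict_proba(x_cf.reshape(1, -1))[0, 0]
            if risk < target_risk:
                change = abs(v - x[i])
                if best is None or change < best[2]:
                    best = (i, v, change)
    return None if best is None else best[:2]

# Illustrative usage on a stand-in dataset (not real screening data).
data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)
x = data.data[0]
grid = {0: np.linspace(data.data[:, 0].min(), data.data[:, 0].max(), 50)}
print(simple_counterfactual(model, x, grid))
```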

 

At the end of the lecture, Professor Park repeatedly emphasized the implications of the explainability of AI models, noting that there can always be individual statistical patterns that AI fails to recognize. “Therefore,” he argued, “it is the task of humans to diagnose the logical consistency of the AI model and propose solutions, regardless of the accuracy of the model itself.”


After the lecture, the Q&A session with students centered on how “explainable AI” can be meaningfully utilized. Concluding the event, Professor Park examined the prospects of AI-model explainability and its potential for development. Stating that “the explainability of AI means it can serve as a fundamental basis for hypotheses when companies posit that certain attributes of a product will affect sales,” he explained that “the explainability of AI provides fundamental evidence for the usefulness of such hypotheses by visualizing the impact of a single factor on the overall result.” He closed on an optimistic note, predicting that “this provides a significant hint for the development of new marketing theory, and thus holds breakthrough potential great enough to bring an even greater impact than the recent issue of ChatGPT.”
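
As a concrete, hypothetical illustration of “visualizing the impact of a single factor,” the sketch below draws a partial-dependence plot with scikit-learn on synthetic sales data; the feature names and data are invented for the example and are not drawn from the lecture.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

# Synthetic data standing in for product/sales records (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))  # columns: price, ad_spend, rating
sales = 100 - 40 * X[:, 0] + 25 * X[:, 1] + 10 * X[:, 2] + rng.normal(size=500)

model = GradientBoostingRegressor(random_state=0).fit(X, sales)

# Partial dependence isolates the average effect of one factor (here "price")
# on the predicted outcome: the kind of visual evidence described above.
PartialDependenceDisplay.from_estimator(
    model, X, features=[0], feature_names=["price", "ad_spend", "rating"]
)
```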