Dr. Chihmao Hsieh’s contribution to the Maeil Business Newspaper
Can a Focus on Empathy and Trust Guide Our Taste in Choosing the Right Problems to Solve?

Written by Chihmao Hsieh

In his inauguration speech earlier this year, Lee Kwang-hyung, the new president of KAIST, remarked that "KAIST must now focus on finding, defining, and solving the problems facing humanity." This is a fundamental issue in entrepreneurship: the opportunities that we find depend not only on how we search for solutions to a problem, but also on how we formulate problems in the first place. For example, we all know that it's not easy to park a car in Seoul. If we formulate the problem as a lack of parking spaces, then maybe we should build more parking garages. However, if the problem is that there are too many idle cars, then we should better optimize public transportation, ride sharing, or the taxi system. Or maybe we decide that cars are currently too big, and smaller cars will be easier to navigate in tight spaces. These three different problem formulations all lead to very different technological directions. Indeed, our taste in the problems that we identify today affects the decisions and socio-technological environment that we have tomorrow.

These days, society is addressing various problems with the aid of rapidly advancing artificial intelligence and robotics. Recent innovations certainly save money. Coffee barista robots don't get tired. Digital news anchors don't need health insurance. And self-driving trucks don't ask for pensions. While jobs are at risk of being lost, new jobs will be created. In the near term, for example, we still need people to maintain the robots that make the coffee or fix the trucks that deliver our packages. Technological advances are fundamentally entangled with the changing human tasks and jobs on this planet.

Set aside the idea of universal basic income for now. We face three issues: which problems to solve, what technologies to develop, and how to identify and design the tasks and jobs eventually performed by humans. There is no magic equation that explains how this all actually unfolds. Indeed, new scientific technologies often emerge so quickly that they arrive before we know what problems they should solve. With increasingly advanced AI, whose applications are so broad and dramatic, we should stop to think: how do we formulate and choose which problems to solve? One approach would require us to work backwards: ask ourselves how we want future society and work culture to reflect core values of humanity, and then use that vision to help constrain and guide the kinds of technologies we develop and the kinds of problems that we address.

Recent research on service industries suggests that AI will tend to first replace mechanically-oriented jobs, then analysis-oriented jobs, then intuition-oriented jobs, and finally empathy-oriented jobs, in that order. Opinions differ regarding the overall timeline. However, building and refining a stronger culture for empathy now will help preserve a basis of humanity at work and in society, protecting ourselves from the most damaging economics of AI. Human empathy is not particularly scalable, but maybe we also should not be trying to make it scalable. Marketing staff, news reporters, nurses, teachers, and countless other occupations today can make valuable use of human empathy, and we should support that kind of sociocultural direction. On a broad scale, more products and services can be created to enable people to exhibit, share, and promote empathy, at work and at home. In the future, hopefully still more kinds of empathy-oriented jobs can emerge.

Besides empathy, we should also consider the importance of preserving trust in interactions between humans. This issue hasn't been addressed much; instead, most scholars and policymakers today are worried about how we humans could best learn to trust AI. But there's an important conceptual distinction between how humans trust AI and how they trust other humans. On one hand, trust between humans requires an element of vulnerability to others' self-interest: if a father promises his child that he will drive her to an evening hagwon exam, she trusts that he won't get drunk while partying with co-workers and then arrive late to pick her up. On the other hand, machines and robots have no self-interest; we merely trust them to function reliably and make fewer errors than us. As our interactions become increasingly mediated by computerized technology and AI, we position ourselves to learn to "trust" and "forgive" computers and robots more than we maintain our ability and capacity to trust and forgive each other. As a civilized society, shouldn't we be developing technology and policy that ultimately helps humans trust and empathize with each other more, not less?

Recent developments offer potential case studies. Aria, for example, is an AI from SK Telecom that has been shown to empathetically stimulate senior citizens' cognition and delay the onset of dementia, while also doubling their daily travel distances. But does the AI-based replication of the late Kim Kwang-Seok's voice and singing help to foster empathy or fulfill egos? Are recently developed AI-based 'digital girlfriends' more likely to support or cheapen cultural norms of human trust and empathy? While it's true that business ventures should often be excused for generating negative externalities, we should be actively avoiding those negative externalities that fundamentally discount any core value of humanity.

Formulating the right problems, developing supportive technological solutions, and fostering meaningful jobs should be simultaneous considerations, and they will require business and government to have good taste. But good taste in the value of human empathy and trust is not straightforward. For example, empathy should not completely stifle competitive spirit, and sometimes trust between business partners should not cut off valuable exploration of the business environment. As countries, including South Korea, race to become world leaders in commercializing AI-based technology, they should all respect the responsibility of choosing which entrepreneurial directions to take. Overall, having good taste in humanity is simply becoming more valuable than ever. Hopefully AI won't become an expert in that too.
2021.06.17