When the Algorithm Learns From You: The Human Question at the Heart of Digital Advertising
A major South Korean academic conference asks whether consumers are trapped in automated feedback loops—or empowered by them
SEOUL — Every click, every scroll, every second you linger on an ad is now a data point. That data trains an algorithm. The algorithm refines the next ad you see. And the cycle repeats—endlessly, invisibly, efficiently.
This feedback loop, powered by artificial intelligence, has become the foundation of modern advertising. But as algorithms grow more sophisticated at predicting what consumers want, a fundamental question is emerging: Are people still autonomous agents making free choices, or have they become components in a self-perpetuating machine?
The Korea Society of Advertising is confronting that question head-on. At its spring conference on April 4 at Korea University in Seoul, researchers and industry leaders will gather under the theme "Consumer in the Loop," a concept borrowed from AI ethics that asks what happens when human beings are embedded inside the systems designed to influence them.
"This isn't just about better targeting or personalization," said Professor Seungchul Yoo of Ewha Womans University's School of Communication and Media, a leading voice in the field of AI ethics in marketing. "It's about whether we're still designing for human dignity—or just optimizing for efficiency."
The Loop That Never Stops
In traditional advertising, consumers were seen as targets: passive recipients of messages crafted by brands. But in today's algorithmic ecosystem, consumers are active participants—whether they realize it or not.
Their behavior—what they watch, buy, share, or ignore—feeds machine learning models that continuously adjust what content appears next. The result is a system where consumers are no longer outside the persuasion process. They are inside it, shaping it with every interaction.
This shift has profound implications. If an algorithm learns that a user responds to certain emotional triggers, it will serve more of them. If a platform discovers that outrage drives engagement, it will amplify outrage. The consumer's agency—the ability to choose freely—becomes entangled with the system's imperative to optimize.
"Personalization used to mean care," Professor Yoo noted. "Now it can just as easily mean surveillance."
The Trust Problem
The rise of generative AI has only intensified these tensions. Brands can now produce hyper-personalized content at scale, reaching consumers faster and more precisely than ever. But speed and precision do not guarantee trust.
A 2025 study cited by the Korea Society of Advertising found that while consumers appreciate relevant recommendations, they are increasingly wary of how their data is used. The line between "helpful" and "creepy" has never been thinner.
"Automated optimization can deliver short-term performance," said Professor Yoo. "But if consumers feel manipulated, the long-term cost to brand trust can be catastrophic."
This is not a hypothetical concern. Scandals involving algorithmic bias, discriminatory targeting, and opaque data practices have eroded public confidence in digital platforms worldwide. In South Korea—a country with one of the world's most digitally connected populations—these issues are especially urgent.
A Design Challenge, Not a Technology Debate
The Korea Society of Advertising is framing the problem not as a question of whether AI should be used in advertising, but how.
"Consumer in the Loop" is not a call to halt innovation. It is a call to redesign it—with human values at the center.
That means asking hard questions: Should consumers be able to see and challenge the data that shapes what they see? Should there be limits on how predictive models are used to exploit psychological vulnerabilities? Who is accountable when an algorithm amplifies harm?
These are not questions for academics alone. They demand collaboration among researchers, industry practitioners, policymakers, and the public.
A Moment of Reckoning
As artificial intelligence reshapes every dimension of modern life, advertising stands at a crossroads. It can continue down the path of ever-more-granular optimization, extracting value from consumer attention with increasing efficiency. Or it can pause to ask whether that optimization serves not just brands, but people.
The Korea Society of Advertising's spring conference is an invitation to have that conversation—before the loop closes entirely.
"We're not anti-technology," Professor Yoo said. "We're pro-human. And right now, the most important question we can ask is: What does it mean to respect a consumer in an age when algorithms know them better than they know themselves?"
For inquiries or further information, contact:
Professor Seungchul Yoo
Ewha Womans University, School of Communication and Media
Office: +82-2-3277-2240
Email: [email protected]
About the Conference
The Korea Society of Advertising Spring Conference will be held on April 4, 2026, at Korea University's Hyundai Motor Hall in Seoul. The event brings together researchers, brand strategists, platform executives, and policymakers to examine the ethical, strategic, and regulatory dimensions of algorithmic advertising.
Theme: "Consumer in the Loop: Rethinking Human Agency in Algorithmic Advertising"