South Korea has set a precedent as the world's second jurisdiction to introduce laws governing artificial intelligence, but the legislation appears to be a balancing act between promoting AI and keeping it in check, writes Brian Yap
As South Korea propels itself into a new era of regulated AI development, senior legal experts caution that it is essential for law firms to carefully select AI services and their providers. Against a backdrop of political uncertainty following then South Korean president Yoon Suk Yeol's martial law declaration and his subsequent impeachment, the National Assembly passed the Act on the Development of Artificial Intelligence and Establishment of Trust (AI Basic Act) on 26 December 2024.
Set to take effect in January 2026, the act – which consolidates 19 separate AI bills – marks the world's second piece of comprehensive AI regulatory legislation, following the EU's Artificial Intelligence Act of May 2024.
Hwan Kyoung Ko, a partner in the technology, media and telecoms group at Lee & Ko in Seoul, and Kyoungjin Choi, a professor of law and director of the Centre for AI Data and Policy at Gachon University in Seongnam, were among a number of legal experts at a public hearing on the AI Basic Act held at the National Assembly in September last year.
Ko tells Asia Business Law Journal that although law firms are unlikely to be subject to the AI Basic Act merely for using AI to provide legal advice or services to clients, careful diligence remains essential when working with AI service providers in order to protect client information.
"AI services are expected to deliver significant efficiency gains for professional users, including law firms, sole practitioners and in-house counsel," says Ko. "However, certain challenges persist, such as the hallucination issue in generative AI, limitations due to insufficient legal training data, and concerns about the reliability of such services."
Choi, who is also president of the Korea Association for Artificial Intelligence and Law, warns that compliance with the new law "becomes a challenge" for law firms and lawyers if their clients or affiliated companies develop or provide high-impact AI-related products or services.
"It has therefore become necessary for law firms and lawyers to understand and adhere to the various obligations imposed by the AI Basic Act," says Choi.
Like the EU AI Act, South Korea's AI Basic Act divides AI systems into high-impact and generative AI categories.
A high-impact AI system is one that may have a significant impact on, or pose risks to, the lives, physical safety and fundamental rights of individuals when used in any one of 10 specified areas. These include the supply of energy and the development of medical devices.
A generative AI system is defined as one that mimics input data to generate outputs such as text, sound, images and other creative content.
Under the act, businesses that develop or use AI and provide related products and services to clients must give users advance notice if such products and services are powered by high-impact or generative AI. They must ensure the safety and reliability of their AI systems, and create risk management plans, impact assessments and user protection measures.
However, there is currently no specific interpretation or government guidance on the definition of high-impact AI. This is to be clarified by presidential decree and related subordinate legislation, expected to be released before the act comes into force.
The Ministry of Science and ICT website states that subordinate laws are set to be completed within the first half of 2025. But many questions remain for companies, particularly those in the technology sector.
"Notably, high-impact AI operators are required to take specific measures to ensure safety and reliability," says Kum Sun Kim, a senior corporate counsel at Microsoft Korea in Seoul. "However, since the term 'AI business operator' encompasses both AI developers and AI-using business operators, it is unclear which specific operators will bear these obligations."
Kim argues that caution is needed, as violations of these provisions may result in fact-finding investigations, corrective orders and fines imposed by the Ministry of Science and ICT. She says that copyright issues related to AI model training, which emerged during discussions on the introduction of an AI act, also remain unresolved.
Kim points to a need for in-house counsel to thoroughly analyse whether the law applies to their companies, identifying the AI systems their companies develop or use and assessing the associated risks. If it is determined that the law does apply, in-house counsel must establish comprehensive regulatory compliance plans and take the necessary steps to ensure adherence to the new rules, she says.
Finding balance
The AI Basic Act has its roots in AI's rapid advancement in recent years. The speed of development has prompted extensive discussion about preparing for potential AI-related risks.
Ko, of Lee & Ko, points to growing concerns over issues such as the difficulty of regulating AI under existing laws, and excessive reliance on AI without human intervention in high-risk areas.
The previous absence of a clear regulatory framework for AI meant reliance on the application of individual laws. This raised concerns about reduced legal stability and predictability, and the possibility of deterring proactive business investment in AI-related infrastructure.
The AI Basic Act primarily aims to prevent excessive regulation of AI while incorporating transparency obligations to mitigate the misuse of AI technologies, such as deepfakes, and establishing self-regulatory measures to ensure AI safety and reliability.
Gachon University's Choi told ABLJ that South Korea considered both the EU's AI Act and the US self-regulatory approach as key models when drafting its AI Basic Act.
The US does not have any comprehensive federal laws regulating AI, or specifically banning or restricting its use. Instead, the country governs AI through existing federal laws and guidelines, while relying on federal and state governments, industries and courts to regulate it.
But Choi says that South Korea, lacking the US' level of AI competitiveness and having a different legal system, found it challenging to adopt the US approach directly.
While South Korea's legal system is similar to the EU's civil law system, there were concerns that the EU's stringent rules might hinder the development of South Korea's AI industry. This resulted in notable differences in regulatory frameworks, sanction levels and specific provisions between the EU act and South Korea's AI Basic Act.
"South Korea opted to introduce a legal regulatory framework that is less stringent than the EU's while promoting autonomous AI development akin to the US approach," says Choi.
The EU AI Act adopts a risk-based approach to artificial intelligence, categorising AI into prohibited AI, high-risk AI and specific types of AI, and imposing separate rules for general-purpose AI models.
It also imposes comprehensive and differentiated obligations on providers, deployers, importers and distributors of AI. Non-compliance can result in fines of up to 7% of global annual turnover, or EUR35 million (USD36.2 million), for providing prohibited AI services within the EU.
In contrast, South Korea's AI Basic Act does not include provisions explicitly prohibiting certain types of AI. Instead, it focuses on ensuring the safety and reliability of high-impact – not high-risk – AI, transparency obligations for generative AI, and the use of high-impact AI. Violations of these provisions are subject to penalties such as fines of up to KRW30 million (USD20,500).
Ko explains that the decision not to adopt the EU's comprehensive regulatory framework and stringent sanctions also stems from differences in societal acceptance of AI technology. "It also reflects South Korea's positive outlook on AI technology and industry development, and its national strategies and policies tailored to the country's unique AI ecosystem," he says.
Opportunity knocks
South Korean law firms are already being approached by domestic and international companies from different industries seeking legal advice on complying with the act.
Tae Uk Kang, a partner in the intellectual property practice group and data protection team at Bae Kim & Lee (BKL) in Seoul, says most inquiries so far focus on preparations for each company's specific circumstances – such as regulatory compliance measures and legal risk management. "In addition, there have been many detailed questions about the specific meaning and implications of individual provisions within the AI Basic Act, as well as their practical applicability," says Kang.
Keun Woo Lee, a partner and deputy head of the new projects group at Yoon & Yang in Seoul, specialises in intellectual property and new technology, including AI. Clients have been knocking on his door since the passage of the act, including those from sectors such as semiconductors, secondary batteries, cloud services and gaming.
"Some clients wish to analyse and respond to how this law will affect the cloud business," says Lee. "Similarly, gaming companies want to analyse and respond to how it affects current game development."
Lee says clients are asking for evaluations of, and comprehensive responses to, matters such as how AI-driven legal technology affects the work of in-house counsel, strategies for companies to prepare for developments in AI technology, and the ethical and legal issues companies should be cautious of when adopting AI.