Seoul, Sep 15 (IANS) A North Korea-linked hacking group has carried out a cyberattack on South Korean organisations, including a defence-related institution, using artificial intelligence (AI)-generated deepfake images, a report showed on Monday.
Kimsuky group, a hacking unit believed to be sponsored by the North Korean government, attempted a spear-phishing attack on a military-related organisation in July, according to the report by the Genians Security Center (GSC), a South Korean security institute, reports Yonhap news agency.
Spear-phishing is a targeted cyberattack, often conducted through personalised emails that impersonate trusted sources.
The report said the attackers sent an email carrying malicious code, disguised as correspondence about ID issuance for military-affiliated officials. The ID card image used in the attempt was presumed to have been produced by a generative AI model, marking a case of the Kimsuky group applying deepfake technology.
Typically, AI platforms, such as ChatGPT, reject requests to generate copies of military IDs, citing that government-issued identification documents are legally protected.
However, the GSC report noted that the hackers appear to have bypassed restrictions by requesting mock-ups or sample designs for “legitimate” purposes, rather than direct reproductions of actual IDs.
The findings follow a separate report published in August by US-based Anthropic, developer of the AI service Claude, which detailed how North Korean IT workers have misused AI.
That report said the workers generated manipulated virtual identities to undergo technical assessments during job applications, part of a broader scheme to circumvent international sanctions and secure foreign currency for the regime.
GSC said such cases highlight North Korea’s growing attempts to exploit AI services for increasingly sophisticated malicious activities.
“While AI services are powerful tools for enhancing productivity, they also represent potential risks when misused as cyber threats at the level of national security,” it said.
“Therefore, organisations must proactively prepare for the possibility of AI misuse and maintain continuous security monitoring across recruitment, operations and business processes.”
—IANS