DeepSeek, the suddenly prominent Chinese AI start-up, keeps generating troubling privacy and security headlines. Several countries have blocked access for lawmakers and federal employees, and its censorship and safeguards have raised many eyebrows. Now it faces accusations from South Korea’s spy agency.
DeepSeek AI now draws ire from a spy agency
As per Reuters, South Korea’s National Intelligence Service (NIS) has accused the AI company of excessive collection of personal data and of giving questionable responses on sensitive topics concerning Korean heritage.
“Unlike other generative AI services, it has been confirmed that chat records are transferable as it includes a function to collect keyboard input patterns that can identify individuals and communicate with Chinese companies’ servers such as volceapplog.com,” the agency was quoted as saying.
This follows a government notice asking agencies and ministries to block employee access to DeepSeek over security concerns. Australia and Taiwan have already imposed similar bans, and more countries are expected to follow suit.
DeepSeek is also alleged to give its advertising partners unrestricted access to user data, which may in turn be retrievable by the Chinese government under local laws. According to The Korea Herald, the chatbot has also given contentious answers to questions on culturally sensitive and geopolitically charged issues.
The chatbot also delivers different answers when asked the very same question in Korean and in Chinese. The agency says it will run further tests to evaluate the service’s safety and security.
Security analysts, however, point to a different worry: beyond the myriad privacy concerns raised about DeepSeek, the answers it produces may trouble experts even more. According to an analysis by The Wall Street Journal, DeepSeek has generated disturbing content, including instructions for making bioweapons, a manifesto defending the Nazis, and encouragement of self-harm.
DeepSeek proved to be the worst AI model
In an analysis by rival AI firm Anthropic, CEO Dario Amodei said DeepSeek performed the worst of any model in the company’s tests when it came to blocking the generation of extremely dangerous information, such as instructions for creating bioweapons.
Just over a week ago, researchers at Cisco also tested it with jailbreaking prompts spanning six different categories, and it failed to block a single attack. Qualys ran another round of tests on the AI and found a dismal 47% pass rate in jailbreak testing.
Then there are concerns about sensitive data leaking outright. Cybersecurity researchers at Wiz recently found over a million lines of chat history containing sensitive information sitting exposed to the public.
DeepSeek has since patched the hole, but its widespread adoption remains controversial. NASA has banned DeepSeek use among its employees, as has the United States Navy, and a bill seeking to ban DeepSeek on federal devices is also on the table.