AI makes it easier for hackers, according to Microsoft and OpenAI

AI has quickly become part of our daily lives, and its applications are numerous: some models have even been used to estimate a person's life expectancy. However, the technology also carries dangers, one of which is the risk of it being misused to carry out large-scale attacks. And according to Microsoft and OpenAI, that misuse has already begun.

Microsoft and OpenAI warn against abuse of AI

The two companies report that several hacking groups are using generative AI tools to amplify their attacks, including teams backed by states such as Russia, China, North Korea and Iran. But how exactly does AI help these hackers?

According to Microsoft and OpenAI, the attackers use AI to debug code, search open-source information to identify targets, refine social engineering techniques, and generate text for phishing messages. Hackers also use it to translate text, for example to target victims who speak a different language. These are basic capabilities, but in the wrong hands they can wreak havoc on Internet users.

Hackers backed by hostile governments use AI

Since discovering this abuse, OpenAI has cut off access to generative AI tools such as ChatGPT and Copilot for several of these groups. Among them is Forest Blizzard (also known as Fancy Bear or APT28), which is linked to the Russian state.

The report also mentions a North Korean group called Emerald Sleet, or Thallium. It used AI to write content impersonating universities or associations for spear-phishing campaigns, i.e. highly targeted phishing attacks. The hackers also used the technology to better understand the Follina vulnerability (CVE-2022-30190) in the Microsoft Support Diagnostic Tool. The Iranian group Curium, for its part, generated phishing emails and even code for building fraudulent websites.

Charcoal Typhoon, a Chinese group, used AI in attacks in France, mainly targeting higher education and the energy sector through social engineering. China also backs Salmon Typhoon, which targeted the United States, using the technology to dig up sensitive information.

However, nothing indicates that this measure will be enough to stop the hackers. It is safe to assume that these state-backed teams will find the means to regain access to such AI tools.
