From OCR to self-learning malware, hackers are now leaning on AI to bypass security systems.

  • Cybercrime is a lucrative activity and one that’s getting easier to enter
  • Threats are becoming more widespread and sophisticated, as attackers increasingly lean on AI to bypass security systems

In an age where everything is becoming connected and data is regarded as a business’s most valuable commodity, cybersecurity continues to diversify in a hyper-competitive marketplace.

Projected to be worth US$248 billion by 2023, the sector’s prosperity stems from the constant growth and mutation of cyberthreats, which each year demand higher-caliber weaponry with either better precision or a wider spread.

Today, cybercrime is where the money is, and the tools to enact it are widely available even to non-technical individuals. Anyone can get their hands on exploit kits of varying levels of sophistication, starting from a couple of hundred dollars and running up to tens of thousands.

A report by Business Insider revealed that a hacker seeding ransomware this way could make around US$84,000 a month on average.

This is both a massively lucrative and ‘accessible’ activity, so it’s certainly not going to subside. It’s predicted that, in the future, all our connected devices will be under attack constantly as cyberattacks become harder to detect, incessant, and ever more sophisticated.

The risks for businesses, of course, include serious damage from information loss, revenue loss, and a potential halt to business operations, if not a crippling fine, injury, or even loss of life.

The cybersecurity market will continue to grow as a result, with vendors offering an expansive and sophisticated arsenal. At the same time, these companies and their customers will be locked in a constant race, with their defenses only as good as the next iteration of malware.

On both sides of this war, emerging technologies will continue to play a key role, and artificial intelligence (AI) is no exception.

Cybercriminals can take AI designed for legitimate use cases and adapt it to illegal schemes. Readers will be familiar with CAPTCHA, a tool that has defended against credential stuffing for decades by challenging non-human bots to read distorted text. As far back as a few years ago, however, a Google study found that machine learning-based optical character recognition (OCR) technology could solve 99.8% of these challenges.

Criminals are also using AI to crack passwords faster. Brute-force attacks can be sped up using deep learning: researchers have fed purpose-built neural networks tens of millions of leaked passwords and asked them to generate hundreds of millions of new candidates, which in one trial achieved a 26% success rate.
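The idea can be illustrated in vastly simplified form with a character-level Markov model trained on a few example passwords. This is a minimal sketch, not the neural approach the research used (systems like PassGAN train deep networks on tens of millions of leaks); the sample "leaked" passwords below are invented for illustration.

```python
import random
from collections import defaultdict

def train(passwords, order=2):
    """Record which character follows each `order`-length context in the corpus."""
    model = defaultdict(list)
    for pw in passwords:
        padded = "^" * order + pw + "$"  # ^ marks the start, $ the end
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def generate(model, order=2, max_len=16):
    """Sample a new candidate password one character at a time."""
    out = "^" * order
    while len(out) < max_len + order:
        choices = model.get(out[-order:])
        if not choices:
            break
        ch = random.choice(choices)
        if ch == "$":  # end-of-password marker sampled
            break
        out += ch
    return out[order:]

random.seed(0)  # reproducible sampling
leaked = ["password1", "passw0rd", "letmein", "dragon123", "sunshine1"]
model = train(leaked)
candidates = [generate(model) for _ in range(5)]
print(candidates)
```

Because the model only recombines patterns seen in real leaks, its guesses are biased toward human password habits, which is exactly why learned generators outperform naive brute force.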

Within the black market for cybercriminal tools and services, AI can be used to make operations more efficient and profitable. As well as identifying targets for attacks, cybercriminals can start and cease attacks involving millions of transactions in just minutes, thanks to fully automated infrastructure.

According to Malwarebytes’ paper When Artificial Intelligence Goes Awry, AI technology could soon bring us into the unwelcome age of ‘malware 2.0’. While there are currently no examples of AI-powered malware ‘in the wild’, if the technology opened new avenues for profit, “threat actors will be standing in line to buy kits on the dark market or use GitHub open-source […]”

The biggest concern regarding AI’s use in malware is that new strains would be able to learn from detection events. If a strain of malware were able to determine what caused its detection, it could avoid the same behavior or characteristic the next time around. If a worm’s code were the reason for its compromise, for example, automated malware authors could rescript it. If attributes of its behavior caused its detection, randomness could be added to foil pattern-matching rules.
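Why randomization defeats exact-signature matching can be seen with a toy example: a hash-based scanner fingerprints one exact byte sequence, so any inert padding produces a brand-new hash. This is a benign conceptual sketch (the "payload" is just a placeholder string), not a depiction of real malware or a real scanner.

```python
import hashlib
import random

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# A scanner that "detects" samples by exact hash signature.
original = b"drop_table; exfiltrate();"  # stand-in for a known-bad payload
KNOWN_BAD = {sha256(original)}

def scanner_flags(sample: bytes) -> bool:
    return sha256(sample) in KNOWN_BAD

print(scanner_flags(original))  # True: exact signature match

# The same payload with random, functionally inert padding appended
# hashes to a completely different value, so the signature misses it.
random.seed(42)  # reproducible for the demo
padding = bytes(random.randrange(256) for _ in range(16))
mutated = original + b"\x00" + padding
print(scanner_flags(mutated))  # False: signature no longer matches
```

This is why modern defenses layer behavioral and heuristic rules on top of static signatures: the behavior persists even when the bytes mutate.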

The use of AI could also enhance a technique already used by certain Trojan variants, which create new versions of their files to fool detection routines.

Faced with this fast-moving and evolving threat, cybersecurity will increasingly leverage the power of AI itself.

Advanced antivirus tools can leverage machine learning to identify programs exhibiting unusual behavior, scan emails for indications of phishing attempts, and automate the analysis of system and network data for continuous monitoring.
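As a simplified illustration of the "unusual behavior" idea, the sketch below flags activity that deviates sharply from a learned baseline. Production tools use far richer features and trained models; the metric, sample values, and threshold here are all illustrative assumptions.

```python
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn mean/stdev of a benign activity metric, e.g. file writes per minute."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the baseline."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Baseline: file writes per minute observed while a process behaves normally.
normal_activity = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]
baseline = fit_baseline(normal_activity)

print(is_anomalous(14, baseline))   # typical activity: not flagged
print(is_anomalous(900, baseline))  # ransomware-like write burst: flagged
```

The same pattern — learn what "normal" looks like, then alert on statistical outliers — underlies the continuous monitoring the article describes, just scaled to thousands of signals.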

Given that the cybersecurity industry is facing a widening skills gap, we can reasonably expect investments in ‘intelligent’ cybersecurity systems to be the next best course of action.

Full Story: https://techhq.com/2020/09/how-hackers-are-weaponizing-artificial-intelligence/
