It’s in every organization’s best interest to implement security measures that counter threats in order to protect artificial intelligence investments.

Security and privacy concerns are the top barriers to the adoption of artificial intelligence, and for good reason. Both benign and malicious actors can threaten the performance, fairness, security, and privacy of AI models and data.

This isn’t something enterprises can ignore as AI becomes more mainstream and promises them an array of benefits. In fact, on the recent Gartner Hype Cycle for Emerging Technologies, 2020, more than a third of the technologies listed were related to AI.


At the same time, AI also has a dark side that often goes unaddressed, especially since the current machine learning and AI platform market has not produced consistent or comprehensive tooling to defend organizations. This leaves organizations on their own. What's worse, according to a Gartner survey, consumers believe it is the organization using or providing AI that should be held accountable when it goes wrong.

Threats and attacks against AI compromise not only AI model and data security but also model performance and outcomes.

Criminals commonly attack AI in two ways, and there are actions technical professionals can take to mitigate such threats. But first, let's explore the three core risks to AI.

Security, liability, and social risks of AI

Organizations that use AI are subject to three types of risks. Security risks are rising as AI becomes more prevalent and embedded in critical enterprise operations. There might be a bug in the AI model of a self-driving car that leads to a fatal accident, for instance.

Liability risks are increasing as decisions affecting customers are increasingly driven by AI models using sensitive customer data. As an example, incorrect AI credit scoring can hinder consumers from securing loans, resulting in both financial and reputational losses.

Social risks are increasing as “irresponsible AI” causes adverse and unfair consequences for consumers by making biased decisions that are neither transparent nor readily understood. Even slight biases can result in significant algorithmic misbehavior.

How criminals commonly attack AI

The above risks can result from the two common ways that criminals attack AI: malicious inputs (perturbations) and query attacks.

Malicious inputs to AI models can come in the form of adversarial AI, manipulated digital inputs, or malicious physical inputs. Adversarial AI may take the form of social engineering with an AI-generated voice, which can be used for many types of crime and is considered a “new” form of phishing. For example, in March 2019, criminals used an AI-synthesized voice to impersonate a CEO and demand a fraudulent transfer of $243,000 to their own accounts.
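
To make “perturbation” concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one well-known way to craft adversarial digital inputs. The linear model and random tensors are toy stand-ins, not anything from the article.

```python
# Minimal FGSM sketch: nudge an input in the direction that increases
# the model's loss, so a small change can alter the prediction.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Return a copy of x perturbed to maximize the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), label)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

# Toy usage: a linear "classifier" over 10 features.
model = nn.Linear(10, 2)
x, label = torch.randn(1, 10), torch.tensor([0])
x_adv = fgsm_perturb(model, x, label)
print(model(x).argmax(1).item(), "->", model(x_adv).argmax(1).item())
```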

Query attacks involve criminals sending queries to an organization’s AI models to figure out how they work, and may be black box or white box. Specifically, a black-box query attack determines the uncommon, perturbed inputs that produce a desired output, such as financial gain or avoided detection. Some academics have been able to fool leading translation models by manipulating the input, producing an incorrect translation.
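
A black-box probe can be as simple as repeatedly querying the model with small random perturbations until the output changes. The sketch below assumes a hypothetical `predict` query API and a toy threshold model; it illustrates the attack pattern, not any specific published attack.

```python
# Black-box query attack sketch: the attacker sees only predict()'s
# answers and searches for a small input change that flips the output.
import numpy as np

def black_box_probe(predict, x, step=0.05, budget=1000, seed=0):
    rng = np.random.default_rng(seed)
    original = predict(x)
    for _ in range(budget):                 # each iteration costs one query
        candidate = x + step * rng.standard_normal(x.shape)
        if predict(candidate) != original:  # output flipped: success
            return candidate
    return None                             # query budget exhausted

# Toy victim: a threshold rule the attacker cannot inspect.
predict = lambda v: int(v.sum() > 0)
x = np.full(4, 0.01)                        # input barely on the "1" side
print(black_box_probe(predict, x))
```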

A white-box query attack regenerates a training dataset in order to reproduce a similar model, which can result in valuable data being stolen. In one example, a voice-recognition vendor fell victim to a new foreign vendor that counterfeited its technology and then sold it, allowing the foreign vendor to capture market share based on stolen IP.
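
Model extraction of this kind can be illustrated in a few lines: the attacker labels inputs of their own choosing with the victim’s API and fits a surrogate that imitates it. The victim model and data below are toy stand-ins assumed for the sketch.

```python
# Model-stealing sketch: fit a surrogate on (query, response) pairs
# harvested from the victim's prediction API.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
victim = lambda X: (X @ np.array([1.5, -2.0]) > 0).astype(int)  # hidden model

X_queries = rng.standard_normal((500, 2))   # attacker-chosen probe inputs
y_stolen = victim(X_queries)                # responses harvested via queries

surrogate = LogisticRegression().fit(X_queries, y_stolen)
X_test = rng.standard_normal((200, 2))
print("agreement:", (surrogate.predict(X_test) == victim(X_test)).mean())
```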

New security pillars to make AI trustworthy

It is paramount for IT leaders to acknowledge the threats against AI in their organizations in order to assess and shore up both the existing security pillars (human-focused and enterprise security controls) and the new pillars (AI model integrity and AI data integrity).

AI model integrity encourages organizations to explore adversarial training for their models and to reduce the attack surface through enterprise security controls. The use of blockchain for provenance and tracking of the AI model and the data used to train it also falls under this pillar as a way for organizations to make AI more trustworthy.
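
As a sketch of what adversarial training looks like in practice, the toy loop below perturbs each batch with the FGSM step from the earlier sketch and trains on clean and perturbed examples together; the model, data, and hyperparameters are illustrative assumptions.

```python
# Adversarial training sketch: harden a model by training it on
# adversarially perturbed copies of each batch.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):                        # toy training loop
    x = torch.randn(32, 10)
    y = (x.sum(dim=1) > 0).long()           # synthetic labels
    x_adv = x.clone().requires_grad_(True)  # craft FGSM perturbations
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + 0.03 * x_adv.grad.sign()).detach()
    opt.zero_grad()                         # train on clean + adversarial
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()
```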

AI data integrity focuses on data anomaly analytics, like distribution patterns and outliers, as well as data protection, like differential privacy or synthetic data, to combat threats to AI.
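
One hedged example of data anomaly analytics: screen incoming training records against a vetted baseline before they reach the pipeline. IsolationForest is one common outlier detector; the data and the 5% contamination threshold are illustrative assumptions.

```python
# Data-integrity sketch: quarantine incoming rows that look anomalous
# relative to trusted historical training data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
trusted = rng.normal(0, 1, size=(1000, 5))             # vetted baseline data
incoming = np.vstack([rng.normal(0, 1, size=(95, 5)),  # normal rows
                      rng.normal(8, 1, size=(5, 5))])  # 5 poisoned rows

detector = IsolationForest(contamination=0.05, random_state=0).fit(trusted)
flags = detector.predict(incoming)                     # -1 marks anomalies
print("rows quarantined for review:", int((flags == -1).sum()))
```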

To secure AI applications, technical professionals focused on security technology and infrastructure should do the following:

  • Minimize the attack surface for AI applications during development and production by conducting a threat assessment and applying strict access control and monitoring of training data, models and data processing components.
  • Augment the standard controls used to secure the software development life cycle (SDLC) by addressing four AI-specific aspects: threats during model development, detection of flaws in AI models, dependency on third-party pretrained models and exposed data pipelines.
  • Defend against data poisoning across all data pipelines by protecting and maintaining data repositories that are current, high-quality and inclusive of adversarial samples. An increasing number of open-source and commercial solutions can be used for improving robustness against data poisoning, adversarial inputs and model leakage attacks; a simple query-monitoring wrapper in that spirit is sketched after this list.
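
As one concrete (and deliberately simple) illustration of such monitoring, the wrapper below counts queries per client and throttles bursts of probing like the query attacks described earlier. The class and its limits are hypothetical, not a specific product’s API.

```python
# Query-monitoring sketch: rate-limit and track calls to a model's
# prediction function so extraction-style probing stands out.
import time
from collections import defaultdict

class QueryMonitor:
    def __init__(self, predict_fn, max_per_minute=60):
        self.predict_fn = predict_fn
        self.max_per_minute = max_per_minute
        self.history = defaultdict(list)      # client_id -> request times

    def predict(self, client_id, x):
        now = time.time()
        recent = [t for t in self.history[client_id] if now - t < 60]
        if len(recent) >= self.max_per_minute:
            raise RuntimeError(f"query budget exceeded for {client_id}")
        recent.append(now)
        self.history[client_id] = recent
        return self.predict_fn(x)

# Usage: monitored = QueryMonitor(model.predict); monitored.predict("u1", x)
```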

It’s hard to prove when an AI model was attacked unless the fraudster is caught red-handed and the organization performs forensics on the fraudster’s system afterward. At the same time, enterprises aren’t going to simply stop using AI, so securing it is essential to operationalizing AI successfully in the enterprise. Retrofitting security into any system is much more costly than building it in from the outset, so secure your AI today.

Full Story: https://www.informationweek.com/big-data/ai-machine-learning/dark-side-of-ai-how-to-make-artificial-intelligence-trustworthy/a/d-id/1338782?
