
Gartner’s Top 10 Strategic Tech Trends In 2020: AI Security

From augmentation to AI, Gartner’s Top 10 Strategic Technology Trends for 2020 have covered a wide array of topics, and we’ve now reached the end of the list. The final trend – AI security – starts from the observation that “evolving technologies such as hyperautomation and autonomous things offer transformational opportunities in the business world.” However, with great innovation comes great responsibility: AI also brings heightened security risks. Security teams therefore need to understand AI technologies and how they affect security, and your business should aim to cover its weak spots before attackers exploit them.

According to a Gartner report, AI security has three key perspectives:

Protecting AI-Powered Systems

According to Gartner, application leaders should closely monitor ML algorithms and the data they consume to determine whether “there are extant or potential corruption (“poisoning”) issues.” Once training data is poisoned, the resulting manipulation is very likely to compromise data-driven decisions, for which data quality, integrity, confidentiality and privacy are an absolute must.

There are five phases of the ML pipeline that require protection: data ingestion; preparation and labeling; model training; inference validation; and production deployment. Each phase carries its own risks, and enterprises need to ensure that they’re well prepared and armed with knowledge. Gartner predicts that through 2022, 30 percent of all AI cyberattacks will leverage training-data poisoning, AI model theft or adversarial samples to attack AI-powered systems.
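As a concrete illustration of protecting the earliest phase, here is a minimal sketch of an integrity check at data ingestion, assuming a hypothetical `manifest.json` that maps each data file to a trusted SHA-256 digest recorded when the data was approved. Files that have since changed, or gone missing, are flagged before they ever reach training.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_ingested_data(data_dir: str, manifest_path: str) -> list:
    """Compare each ingested file against a trusted manifest of hashes.

    Returns the files whose contents no longer match the manifest --
    candidates for tampering or corruption (paths are illustrative).
    """
    manifest = json.loads(Path(manifest_path).read_text())
    suspicious = []
    for name, expected in manifest.items():
        path = Path(data_dir) / name
        if not path.exists() or sha256_of(path) != expected:
            suspicious.append(name)
    return suspicious

if __name__ == "__main__":
    flagged = verify_ingested_data("data/ingest", "data/manifest.json")
    if flagged:
        print(f"Integrity check failed for: {flagged}")
```

A check like this only covers the ingestion phase; the later phases call for their own controls, some of which are sketched in the sections below.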

Training-data poisoning

Hackers could gain unauthorized access to training data and feed the model incorrect or compromised examples, which can lead to the failure of an AI system. Data poisoning is more common in online learning models and in ML systems trained on user-provided data. To mitigate the risk of data poisoning, Gartner advises limiting “the amount of training data each user contributes” and examining “output for shifts in predictions after each training cycle.”
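As a sketch of how those two mitigations might look in code, the snippet below caps the number of samples accepted from any one contributor and measures how far predictions on a fixed holdout set shift between training cycles. The cap and the alert threshold are illustrative assumptions that would need tuning per application.

```python
from collections import Counter

import numpy as np

MAX_CONTRIBUTIONS_PER_USER = 100  # assumed cap, tune per application
SHIFT_ALERT_THRESHOLD = 0.05      # assumed tolerable fraction of flipped predictions

def cap_user_contributions(samples, max_per_user=MAX_CONTRIBUTIONS_PER_USER):
    """Keep at most `max_per_user` samples from any single contributor.

    `samples` is an iterable of (user_id, features, label) tuples.
    """
    seen = Counter()
    kept = []
    for user_id, features, label in samples:
        if seen[user_id] < max_per_user:
            seen[user_id] += 1
            kept.append((user_id, features, label))
    return kept

def prediction_shift(prev_preds, new_preds):
    """Fraction of holdout predictions that changed between training cycles."""
    prev, new = np.asarray(prev_preds), np.asarray(new_preds)
    return float(np.mean(prev != new))

# After each retraining cycle, score the same holdout set with the old and
# new models (hypothetical objects here) and alert on unexpected movement:
#
#   shift = prediction_shift(old_model.predict(X_holdout),
#                            new_model.predict(X_holdout))
#   if shift > SHIFT_ALERT_THRESHOLD:
#       raise RuntimeError(f"Prediction shift {shift:.1%}; inspect the new data")
```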

Model theft

Competitors have the ability to “reverse-engineer ML algorithms or implement their own AI systems to use the output of your algorithms as training data.” If successful, competitors can leverage that data to build their own ML models. According to researchers, model theft poses a higher risk to certain deep learning algorithms. To identify model theft, start by looking at queries. Is the number of queries out of the ordinary? Is there a wider variety of queries than usual? Gartner says that the best way to protect the prediction machines is to block the attackers and develop a backup plan.
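One way to operationalize that query monitoring is sketched below: a sliding-window log per client that flags unusually high query volume or a suspiciously wide variety of distinct inputs, both common signatures of systematic model extraction. The thresholds are assumptions for illustration and would need calibrating against your real traffic baseline.

```python
import time
from collections import defaultdict, deque

MAX_QUERIES_PER_WINDOW = 1_000   # assumed volume threshold
MAX_DISTINCT_INPUT_RATIO = 0.95  # near-100% unique inputs suggests probing

class QueryMonitor:
    """Flag clients whose prediction-API usage looks like model extraction."""

    def __init__(self, window_seconds=3600):
        self.window = window_seconds
        self.events = defaultdict(deque)  # client_id -> deque of (time, input_hash)

    def record(self, client_id, input_hash):
        """Record one prediction request; return True if the client looks suspicious."""
        now = time.time()
        q = self.events[client_id]
        q.append((now, input_hash))
        while q and now - q[0][0] > self.window:  # drop events outside the window
            q.popleft()
        volume = len(q)
        distinct_ratio = len({h for _, h in q}) / volume
        return (volume > MAX_QUERIES_PER_WINDOW
                or (volume > 100 and distinct_ratio > MAX_DISTINCT_INPUT_RATIO))
```

A flagged client can then be throttled or blocked, which is exactly the “block the attackers” response Gartner describes.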

Adversarial samples

Classifiers are vulnerable to adversarial samples: inputs altered just enough that an AI classifier misclassifies them, while the changes go unnoticed by a human observer. Linear classifiers, support vector machines, decision trees, boosted trees, random forests, neural networks and nearest-neighbor models are all susceptible to adversarial samples. To ensure the highest level of protection, Gartner recommends deploying “a diverse set of prediction machines” and advises that you “generate adversarial samples and include them in your training dataset.”
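To make that last recommendation concrete, here is a self-contained toy example using the standard Fast Gradient Sign Method (FGSM) against a simple logistic-regression classifier; it is a sketch of the idea, not a production recipe, and real systems would apply the same pattern to their own models through an autodiff framework. Because logistic regression has a closed-form input gradient, NumPy alone suffices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, lr=0.1, epochs=300):
    """Plain gradient-descent training of a logistic-regression classifier."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def fgsm_samples(X, y, w, b, eps=0.3):
    """FGSM: nudge each input in the direction that most increases the loss.

    For logistic regression the input gradient is (p - y) * w, so no
    autodiff is needed.
    """
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad_x)

# Toy two-blob dataset standing in for real training data.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 400).astype(float)
X = rng.normal(size=(400, 2)) + 1.5 * y[:, None]

w, b = train(X, y)
X_adv = fgsm_samples(X, y, w, b)

# Adversarial training: fold the perturbed samples, with their true
# labels, back into the training set and retrain.
w2, b2 = train(np.vstack([X, X_adv]), np.concatenate([y, y]))

def accuracy(Xs, ys, wv, bv):
    return np.mean((sigmoid(Xs @ wv + bv) > 0.5) == ys)

print(f"original model on adversarial inputs: {accuracy(X_adv, y, w, b):.2%}")
print(f"hardened model on adversarial inputs: {accuracy(X_adv, y, w2, b2):.2%}")
```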

Leveraging AI to Enhance Cybersecurity Defense

It’s difficult to stay ahead of attackers when it comes to cybersecurity defense, as attacks are occurring at an increasing rate and the types of attacks are constantly evolving. According to Gartner, security tool vendors are exploiting ML to make improvements to their tools, decision support and response operations.

Thanks to research, frameworks and compute power, vendors can more easily access well-designed ML, provided they can obtain large quantities of relevant, high-quality training data. Gartner advises to “assess solutions and architecture, and challenge vendors on the latest ML-related attack techniques, including data poisoning, adversarial inputs, generative adversarial networks and other security-relevant innovations in ML.”
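As a small taste of the kind of ML that shows up in defensive tooling, the sketch below trains an unsupervised anomaly detector, scikit-learn’s IsolationForest, on synthetic per-session features; the feature names, numbers and contamination rate are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for security telemetry: one row per session with
# simple numeric features (e.g., bytes sent, ports touched, failed logins).
rng = np.random.default_rng(42)
normal = rng.normal(loc=[500, 3, 0.2], scale=[100, 1, 0.5], size=(1000, 3))
odd = rng.normal(loc=[5000, 40, 10], scale=[500, 5, 2], size=(10, 3))
sessions = np.vstack([normal, odd])

# Isolation forests isolate points that are easy to separate from the
# bulk of the data; those tend to be the anomalies worth reviewing.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(sessions)  # -1 = anomaly, 1 = normal

print(f"flagged {np.sum(labels == -1)} of {len(sessions)} sessions for review")
```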

Anticipating Nefarious Use of AI by Attackers

The emergence of new AI techniques is exciting, but unfortunately, attackers are starting to use them with dangerous intent. Attackers are especially delving into ML to fuel their attacks, carefully studying its security weak points. This nefarious activity is facilitated by the “commoditization of ML tools and the availability of training data,” says Gartner.

Gartner also notes that attackers will leverage ML to enhance targeting and exploits, discover new vulnerabilities, design new payloads and evade detection. Attackers also use ML to accelerate innovation in their techniques. On the bright side, researchers have begun carefully curating the release of AI publications and code in response to the heightened risk of malicious use.

The technological frontier is exciting and the future looks bright. Yet technological innovations also come with potential pitfalls. Know the risks, and invest in your AI security.

Give Stefanini a call today to learn best practices when it comes to securing your company.

References:

Gartner: Top 10 Strategic Technology Trends

Gartner: Gartner Top 10 Strategic Technology Trends for 2020

