Our view at Stack: our team loves using Miro as an online workspace for innovation, enabling distributed teams to dream, design, and build together. With a full set of collaboration capabilities, it simplifies cross-functional teamwork, meetings, and workshops. Teams can create concepts, map user stories, and conduct roadmap planning in real time.
Developments in Generative Artificial Intelligence (GenAI) have had a huge impact on content creation, innovation, and collaboration. Everything from creating text, images, and videos to sorting unstructured information and generating or analyzing software code is easy to accomplish with the help of GenAI. But what can GenAI do when it comes to security?
When “AI” and “security” are used in the same sentence, there’s often a negative connotation. For example, we’ve seen attackers use GenAI to create deepfakes to scam people for financial gain, taking social engineering into a completely new dimension. What’s more, it’s never been simpler for threat actors to cheaply and quickly create content that imitates other companies or people for phishing purposes. GenAI can even create malicious code or look for security vulnerabilities within open-source code to exploit.
While it’s important that all AI users — and especially leaders of organizations full of AI users — are aware of the technology’s security risks, there are also a number of ways in which AI-powered tools can improve enterprise security. This article focuses on those opportunities.
Classification and identification of different types of information
With recent developments in GenAI, it is now far simpler to identify and classify different types of information — and to impose protective measures for this information. While in the past we relied on matching static patterns of information, we can now distinguish unstructured information based on a much wider context, similar to the way a human mind can recognize information.
This will continue to have a significant impact on data protection and data leak prevention, especially when it comes to multimodal types of information such as documents, emails, and canvases. Models can even recognize when distinct pieces of information become sensitive in combination with other information. Previously, only individual parts could be identified and classified; this broader view helps prevent the leaking of information that an attacker could piece together to form a complete picture.
AI can also adapt to changes in the information and data based on input, so it can learn which information is considered sensitive. This is an advantage over models that need to be explicitly told and hard-coded, and it makes AI particularly well-suited to protect highly variable sets of information — which are common during innovation processes.
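To make this concrete, here’s a minimal sketch of zero-shot sensitivity classification, assuming the OpenAI Python SDK and a hypothetical four-label scheme; any chat-capable model and label set could stand in.

```python
# Minimal sketch: zero-shot classification of unstructured text into
# sensitivity labels. Assumes the OpenAI Python SDK (pip install openai)
# with OPENAI_API_KEY set; the labels and model choice are illustrative.
from openai import OpenAI

client = OpenAI()

LABELS = ["public", "internal", "confidential", "restricted"]

def classify_sensitivity(text: str) -> str:
    """Ask the model to pick the closest sensitivity label for a snippet."""
    prompt = (
        "Classify the following content into exactly one of these labels: "
        f"{', '.join(LABELS)}. Reply with the label only.\n\n{text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
        temperature=0,         # deterministic output for classification
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in LABELS else "unclassified"  # fail closed

print(classify_sensitivity("Q3 revenue forecast: $4.2M. Board eyes only."))
```

Because the model reads the whole snippet in context, it can flag content that no static pattern would match, and the same approach extends to documents, emails, and canvases.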
Anomaly detection
Baselining for anomaly detection through statistical methods has been around for ages — flagging deviations, whether subtle or substantial, against preset thresholds — but Machine Learning (ML) enabled the learning of new patterns and the inclusion of a wider range of data. With GenAI, we can detect anomalies in more complex data (such as text or images) much more effectively, because its models are able to consume larger amounts of unstructured data and multimodal information.
Hybrid approaches — GenAI-assisted anomaly detection with both supervised and unsupervised algorithms — have been shown to be more accurate and to adjust more effectively to higher variance in the data (whereas traditional approaches need to be recalibrated). Most importantly, they can add more dimensions and forms of information into the anomaly detection, which is a significant game changer.
These capabilities can be leveraged for everything from user behavior anomaly detection to identifying compromised accounts, spotting suspicious network patterns, detecting financial fraud, and much more.
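As an illustration of the hybrid idea, the sketch below embeds unstructured log lines with a language model and runs a classic unsupervised detector over the resulting vectors; the model name, sample logs, and contamination rate are illustrative assumptions, not recommendations.

```python
# Hybrid sketch: language-model embeddings feed a classic unsupervised
# detector. Assumes sentence-transformers and scikit-learn are installed;
# all log lines are invented.
from sentence_transformers import SentenceTransformer
from sklearn.ensemble import IsolationForest

encoder = SentenceTransformer("all-MiniLM-L6-v2")

baseline_logs = [
    "user alice logged in from office VPN",
    "user alice opened quarterly report",
    "user bob logged in from office VPN",
] * 20  # repeat so the detector has a baseline to learn from

# Fit on embeddings of normal activity (the unsupervised step).
detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(encoder.encode(baseline_logs))

# Score new, unseen events; a prediction of -1 flags an outlier.
new_events = [
    "user alice logged in from office VPN",
    "user alice exported entire customer database at 03:14",
]
flags = detector.predict(encoder.encode(new_events))
for event, flag in zip(new_events, flags):
    print("ANOMALY" if flag == -1 else "ok     ", event)
```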
Threat intelligence
The introduction of Large Language Models (LLMs), as well as the image and speech analysis and translation capabilities of AI models, allows for significant advancements in threat intelligence processing.
More sources with more varied contexts can be processed and synthesized for easier consumption, making larger data sets viable and actionable. As with anomaly detection, sifting through large, varied data allows AI to pick up subtle patterns that humans might otherwise easily miss. Effectively gathering threat intelligence allows enterprises to protect themselves against attacks that have already occurred elsewhere.
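As a minimal sketch of this kind of synthesis, the snippet below asks an LLM to condense a batch of mixed-format reports into a short, actionable list; it assumes the OpenAI Python SDK, and the feed entries are invented placeholders.

```python
# Sketch: LLM-assisted threat intelligence triage. Assumes the OpenAI
# Python SDK; the raw reports below are invented placeholders.
from openai import OpenAI

client = OpenAI()

raw_reports = [
    "Vendor bulletin: phishing wave spoofing invoice emails with PDF lures",
    "Paste site dump mentions leaked VPN credentials for several SaaS tenants",
    "Blog post (translated): new infostealer targets browser password stores",
]

prompt = (
    "You are a threat intelligence analyst. For each report below, extract "
    "the threat type, the likely target, and one concrete defensive action. "
    "Return a numbered list.\n\n" + "\n---\n".join(raw_reports)
)

summary = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)
print(summary.choices[0].message.content)
```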
Contextual detection
A threat intelligence feed combined with anomaly detection creates an opportunity to automate contextual detection. Simply put, GenAI can take a large, rich data set from a threat intelligence feed and detect minute, subtle nuances in a smaller-scale environment that may otherwise go unnoticed.
Newer AI models also offer a huge opportunity to build data structures, such as vector databases, that make gathered data easier to match against other sources. Learnings from multiple sources can be collected and compared against streams of different data, allowing a threat intelligence feed to be better operationalized, much as a model tuned on one data set can be applied to analyze other sets of information.
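The sketch below illustrates the matching idea, with plain cosine similarity over embeddings standing in for a real vector database; it assumes sentence-transformers, and both the feed entries and the local event are invented.

```python
# Sketch: match local telemetry against an embedded threat intel feed.
# Cosine similarity over the embedding vectors stands in for a vector
# database; assumes sentence-transformers is installed.
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# "Index" the threat intelligence feed once, up front.
feed = [
    "Campaign uses HR-themed emails with macro-enabled .docm attachments",
    "Actor abuses OAuth consent prompts to gain persistent mailbox access",
]
feed_vecs = encoder.encode(feed, normalize_embeddings=True)

def match_intel(event: str, threshold: float = 0.5):
    """Return feed entries whose embedding sits close to the local event."""
    vec = encoder.encode([event], normalize_embeddings=True)[0]
    scores = feed_vecs @ vec  # cosine similarity: vectors are normalized
    return [(feed[i], float(s)) for i, s in enumerate(scores) if s >= threshold]

print(match_intel("User reported a 'benefits update' email with a .docm file"))
```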
Task automation
Another key strength of AI is its capacity to automate tedious, repetitive tasks, consequently freeing up skilled personnel for higher-level analysis. This is particularly impactful in the cybersecurity space, where the resources needed to take on the monumental task of protecting companies’ vast data and systems are often scarce.
GenAI reduces the expertise and experience needed to set up an automated task or query. Natural language can be used to instruct a GenAI model to generate the query and to iterate quickly towards a desired result. Prompting GenAI is undoubtedly a skill, but it’s less complex and less dependent on strict syntax, and it isn’t limited by the no-/low-code abstractions otherwise needed to accomplish the same task.
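As a rough illustration, the sketch below asks a model to translate an analyst’s plain-English request into SQL against a hypothetical log table; any generated query should, of course, be reviewed before it is run against production data.

```python
# Sketch: natural-language query generation. The schema is a hypothetical
# authentication log table; assumes the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()

SCHEMA = "auth_logs(user TEXT, src_ip TEXT, action TEXT, ts TIMESTAMP)"

def nl_to_sql(request: str) -> str:
    """Translate an analyst's plain-English request into a single SQL query."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                f"Table schema: {SCHEMA}\n"
                f"Write a single SQL query for: {request}\n"
                "Return only the SQL, no explanation."
            ),
        }],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

print(nl_to_sql("failed logins per user in the last 24 hours, highest first"))
```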
Perhaps AI’s greatest benefit is its adaptability. Just as the contextual aspects can be integrated with greater efficiency, so too can the models adjust and adapt to continuous changes or cater to higher degrees of variation. This can both increase the efficiency of security staff and lower the skill level required to get started, while greatly improving the coverage of preventative measures and reducing vulnerabilities and weaknesses.
Assistants
Assistants and copilots have been among the biggest evolutions in GenAI. By simplifying programming tasks, GenAI models can better support engineers. Specialized tasks such as vulnerability patching become simpler because the model provides a larger body of knowledge and expertise for developers to draw on when solving security challenges, while repetitive tasks can be performed across a larger set of code.
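A minimal sketch of this assistant-style workflow, assuming the OpenAI Python SDK and a deliberately toy vulnerable snippet, might look like this:

```python
# Sketch: ask the model to patch a snippet with a known weakness and
# explain the fix. Assumes the OpenAI Python SDK; the snippet is toy code.
from openai import OpenAI

client = OpenAI()

vulnerable_snippet = '''
def get_user(db, username):
    return db.execute(f"SELECT * FROM users WHERE name = '{username}'")
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            "This Python function is vulnerable to SQL injection. "
            "Rewrite it using a parameterized query and briefly explain "
            "the fix:\n" + vulnerable_snippet
        ),
    }],
    temperature=0,
)
print(response.choices[0].message.content)
```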
Code and code analysis
With the advent of GenAI and, more specifically, LLMs, we can now more easily comment on or summarize code, making it more accessible to engineers. What’s more, these models can boost capabilities to analyze code for security vulnerabilities, as well as suggest patches and mitigations.
The source code of most monolithic systems is vast, and functions can be scattered across it. There can even be dormant, unused code, which can confuse engineers unless it is appropriately commented or removed. AI-assisted analysis allows engineers to traverse a large code base more effectively.
While static code analysis has been around for a long time, it has often struggled with large numbers of false positives, and just as many false negatives. Because source code varies so widely, it has also been difficult to consistently pinpoint where a vulnerability is introduced. AI can sift through this vast amount of code and handle the many different ways the same function can be written. The result is a reduced rate of false positives and more accurate detection of previously unnoticed vulnerabilities in code scanning.
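One way this could look in practice is LLM-assisted triage of a scanner’s output: the sketch below pairs an invented finding with its code and asks the model for a verdict. It assumes the OpenAI Python SDK; a real pipeline would parse the scanner’s own report format.

```python
# Sketch: LLM-assisted triage of a static-analysis finding. The finding
# format is invented; assumes the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()

finding = {
    "rule": "possible-sql-injection",
    "file": "orders.py",
    "line": 42,
    "snippet": 'cur.execute("SELECT * FROM orders WHERE id = %s", (order_id,))',
}

verdict = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            f"Static analysis flagged rule '{finding['rule']}' at "
            f"{finding['file']}:{finding['line']}.\nCode:\n{finding['snippet']}\n"
            "Is this a true positive? Answer TRUE_POSITIVE or FALSE_POSITIVE "
            "with a one-sentence reason."
        ),
    }],
    temperature=0,
)
print(verdict.choices[0].message.content)
```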
Security challenges with AI technologies
AI’s ability to process information makes it an attractive solution for addressing the growing volume and sophistication of cyber threats. However, organizations must approach AI-powered security with a clear understanding of its challenges. While this article focuses on the many opportunities that GenAI brings to the security space, it is important to also consider the challenges, such as:
- Jailbreaking and prompt injection
- Data poisoning and manipulation
- Model inversion and attackers inferring sensitive information from training data
- Privacy risks of leaking personal information from training data
- Bias and fairness
- Supply chain risks
- Accuracy and faulty information
To learn more about the security risks associated with LLMs and AI models, a good place to start is the OWASP Top 10 for Large Language Model Applications.
As you think about the next steps in securing your enterprise, consider using AI-powered security tools developed by trusted partners whose platforms are already integrated into your organization’s infrastructure. Be sure to take note of how they openly demonstrate their AI principles and practices. Enterprise Guard, for example, is an industry-first, advanced security and compliance layer for Miro that uses unique machine learning detection models to automatically find, classify, and secure sensitive and confidential content on Miro boards.
Ultimately, a balanced approach in which AI is deployed to augment the skills and judgment of human experts will be the most successful framework to harness the benefits of this transformative technology.
If Miro is of interest and you'd like more information, please do get in touch or take a closer look here.
Credit: Original article published here.