What do GenAI and cybersecurity have in common?
SCMagazine has the story, "Gartner Security Summit: 3 takeaways":
“Generative AI (GenAI) has emerged as a game-changer in the cybersecurity industry, offering both opportunities and risks. Jeremy D’Hoinne, a research vice president for security operations and infrastructure protection at Gartner, addressed this topic in his presentation the first day of the conference. On one hand, cybersecurity professionals can leverage GenAI to automate threat detection, analyze vast amounts of data, and generate actionable insights.”
There will be a showdown of sorts between the “GoodGenAI” and the “EvilGenAI”
“AI-driven methodologies can help organizations transition to advanced security frameworks, such as zero-trust, promising comprehensive protection against evolving threats. By integrating AI into threat detection, investigation, and response technologies, organizations can meet the demands of hybrid cloud environments and empower security analysts to work more effectively.
Moreover, AI products can identify vulnerabilities, monitor for abnormalities in data access, and alert cybersecurity professionals about potential threats in real-time. This proactive approach can save valuable time in detecting and remediating issues, enhancing overall security posture.”
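The quoted idea of monitoring for abnormalities in data access can be sketched with a simple baseline check. This is only an illustration of the concept: the log format, thresholds, and z-score approach here are my own assumptions, not any vendor's implementation.

```python
from statistics import mean, stdev

def flag_access_anomalies(daily_counts, today_count, z_threshold=3.0):
    """Flag today's data-access count if it deviates sharply from the baseline.

    daily_counts: historical per-day access counts for one user (illustrative).
    Returns True when today's count is more than z_threshold standard
    deviations above the historical mean.
    """
    if len(daily_counts) < 2:
        return False  # not enough history to establish a baseline
    mu = mean(daily_counts)
    sigma = stdev(daily_counts)
    if sigma == 0:
        return today_count > mu
    return (today_count - mu) / sigma > z_threshold

# A user who normally reads ~100 records suddenly reads 5,000:
history = [95, 102, 99, 110, 97, 105, 100]
print(flag_access_anomalies(history, 5000))  # flags the spike
print(flag_access_anomalies(history, 101))   # normal day, no alert
```

Real AI-driven products build far richer behavioral models, but the core loop is the same: learn a baseline, then alert on deviations in real time.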
What should we do? We have to change with the times and prepare for GenAI in both attack and defense:
Assessing Privacy Risks of Generative AI
Generative AI, particularly large language models (LLMs), presents new privacy and data risks when used in business-critical processes. Here are some key points to consider when assessing the privacy risks associated with generative AI (drawing on four sources: https://www.prompt.security/ , the AuditBoard blog at Auditboard.com/blog , a Medium post on generative AI in DevSecOps, and PwC's "Managing Generative AI Risks"):
1. Potential Privacy and Data Risks:
- The growing use of public generative AI services by employees outside of sanctioned internal GenAI deployments introduces new privacy and data risks. This includes the inadvertent disclosure of sensitive enterprise data, potential data breaches, and regulatory violations
- There are risks of intellectual property theft if proprietary information is used in prompts to public AI models, since those prompts can effectively become training data for the models
- Integrating GenAI models into business-critical processes such as content creation, customer-facing chatbots, software development, and decision support systems introduces new classes of risk specific to AI models
2. Compliance and Sensitive Data Exposure:
- The likelihood of sharing confidential data has escalated with the rise in GenAI tool usage, leading to potential unauthorized access to sensitive data, intellectual property, privacy violations, and other security breaches
- Mitigating the risk of violating regulatory standards and preventing employees from exposing sensitive or confidential information to GenAI tools is crucial
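One practical control for the exposure risk above is scanning outbound prompts for obvious sensitive patterns before they ever reach a public GenAI tool. This is a minimal sketch: the three patterns below (email addresses, SSN-style IDs, API-key-like strings) are an illustrative subset I chose, not a complete DLP policy.

```python
import re

# Illustrative patterns only -- a real deployment would use a vetted DLP ruleset.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def is_safe_to_send(prompt: str) -> bool:
    """Block the prompt if any sensitive pattern matched."""
    return not scan_prompt(prompt)

print(scan_prompt("Summarize the contract for jane.doe@example.com"))
print(is_safe_to_send("Rewrite this paragraph about our Q3 roadmap"))
```

A gateway like this sits between employees and the GenAI service, so blocked prompts never leave the network.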
3. Privacy Protection Measures:
- Organizations need to implement periodic risk assessments, effective privacy protection measures, obtain informed consent, and implement data anonymization measures to prevent compliance violations and data breaches
- Tools and platforms such as Lakera Guard, AIShield.GuArdIan, MLFlows AI Gateway, PrivateGPT, NeMo Guardrails, and Skyflows GenAI Privacy Vault aim to align LLM usage with privacy laws, ethical norms, and organizational policies through input scanning, output monitoring, and access controls
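The "data anonymization" measure mentioned above can be as simple as pseudonymizing direct identifiers before a record is ever placed in a prompt. This is a minimal sketch assuming a salted-hash scheme; the tools listed above offer far more complete approaches (format-preserving tokenization, vaulted de-identification, and so on).

```python
import hashlib

def pseudonymize(record: dict, fields: tuple, salt: str) -> dict:
    """Replace direct identifiers with salted-hash pseudonyms.

    The same input always maps to the same pseudonym, so records can
    still be joined and analyzed without exposing the raw identifier.
    """
    out = dict(record)
    for field in fields:
        if field in out:
            digest = hashlib.sha256((salt + str(out[field])).encode()).hexdigest()
            out[field] = "anon_" + digest[:12]
    return out

customer = {"name": "Jane Doe", "email": "jane@example.com", "plan": "enterprise"}
safe = pseudonymize(customer, fields=("name", "email"), salt="rotate-me-regularly")
print(safe)  # 'plan' survives; 'name' and 'email' become stable pseudonyms
```

Note that salted hashing is pseudonymization, not full anonymization: whoever holds the salt can re-link records, so the salt itself must be protected and rotated.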
4. Security Operations and Automation:
- GenAI agents, powered by LLMs, can automate repetitive tasks associated with detection, investigation, and response in security operations, improving timeliness without sacrificing accuracy or completeness
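The triage loop such an agent automates can be sketched as below. Everything here is an illustrative assumption: the alert fields and rules are invented, and the classify() stub stands in for what would in practice be an LLM call followed by analyst review.

```python
# Sketch of the triage loop a GenAI agent could automate. The alert schema
# and the classify() stub are illustrative assumptions; in production the
# stub would be replaced by an LLM assessment plus human oversight.

def classify(alert: dict) -> str:
    """Stand-in for an LLM-based severity assessment."""
    if alert.get("failed_logins", 0) > 50:
        return "escalate"
    if alert.get("source") == "known_scanner":
        return "suppress"
    return "investigate"

def triage(alerts: list[dict]) -> dict:
    """Group alerts by the action the agent recommends."""
    queues = {"escalate": [], "investigate": [], "suppress": []}
    for alert in alerts:
        queues[classify(alert)].append(alert["id"])
    return queues

alerts = [
    {"id": "a1", "failed_logins": 120},
    {"id": "a2", "source": "known_scanner"},
    {"id": "a3", "failed_logins": 3},
]
print(triage(alerts))
```

The value of the LLM is in the classify step: it can read free-text log context a fixed rule never could, while the surrounding loop keeps the workflow auditable.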
5. Code Analysis and Review Processes:
- Generative AI can bolster code analysis and review processes in DevSecOps, automatically identifying potential code issues, security vulnerabilities, and best practice violations, thus reducing manual effort and ensuring higher-quality code
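The kind of automated pre-screen a GenAI reviewer would augment looks roughly like this. The rule list is a tiny subset I chose for illustration; real reviewers, human or AI-assisted, cover far more.

```python
import re

# A tiny illustrative rule set -- the automated pre-screen pattern is what
# matters, not these three specific checks.
RULES = [
    ("use of eval()", re.compile(r"\beval\s*\(")),
    ("hardcoded password", re.compile(r"password\s*=\s*['\"]")),
    ("shell=True in subprocess call", re.compile(r"shell\s*=\s*True")),
]

def review_code(source: str) -> list[tuple[int, str]]:
    """Return (line_number, finding) pairs for lines matching a rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for description, pattern in RULES:
            if pattern.search(line):
                findings.append((lineno, description))
    return findings

snippet = 'password = "hunter2"\nresult = eval(user_input)\n'
for lineno, finding in review_code(snippet):
    print(f"line {lineno}: {finding}")
```

Where an LLM adds value over fixed regexes is in explaining each finding and spotting logic flaws that no pattern list anticipates; the findings still need human review before merge.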
6. Legal and Ethical Considerations:
- Not thoroughly reviewing generative AI outputs can result in inaccuracies, compliance violations, breach of contract, copyright infringement, erroneous fraud alerts, faulty internal investigations, harmful communications with customers, and reputational damage
- GenAI applications could exacerbate data and privacy risks, making it crucial for legal teams to have a deeper technical understanding to challenge and defend GenAI-related issues
Of course, the details are what matter: which actual firewalls, SIEM products, or IDS software should you deploy in your network?
This is something you need to ask when evaluating replacements or initial installs.
Included in your assessments should be testing of the AI software itself.
You should definitely have AI in mind when shopping for new software, because the software will need to keep up with new attack tools (for example, Palo Alto Networks' Next-Generation Firewall, which has IDS capabilities built in).
An older image of a traditional firewall
Although these days traffic often goes straight to cloud services, almost bypassing the traditional firewall.
It is best to review your firewall and IDS (intrusion detection software), along with whatever else your situation requires.
It is best to be proactive with cybersecurity these days, as the attacks just keep coming…
Contact me to discuss your situation.
I also have my book “Too Late You’re Hacked” that will get you on the way to setting up a security policy.