
2 posts tagged with "cyber-security"


The Future of Application Security - Integrating LLMs into AppSec

· 4 min read
thilaknath
Product Security Specialist @ SAP
Key Takeaway

AI and Large Language Models (LLMs) are revolutionizing Application Security by automating routine tasks, enabling AppSec teams to scale their efforts without additional personnel.

The Challenge: Traditional AppSec Limitations

In today's fast-evolving software landscape, Application Security (AppSec) teams face mounting pressures:

  • Limited resources and budget constraints
  • Increasing need for proactive security
  • Manual processes that can't scale
  • Time-consuming security assessments

However, advancements in Artificial Intelligence (AI) and Large Language Models (LLMs) provide a promising solution.

The Traditional AppSec Challenge

Historically, AppSec teams have engaged with development teams to identify and remediate vulnerabilities early in the Software Development Lifecycle (SDLC). While these efforts are critical, they are typically manual and time-consuming. Tasks like risk classification, threat modeling, code reviews, and security assessments depend on human expertise and are subject to individual variability.

info

As organizations scale, it becomes impractical to expect AppSec teams to manually assess every component, and this is where AI offers a solution.

The Opportunity with Generative AI and LLMs

LLMs like OpenAI's models are reshaping the way we approach software development, providing capabilities to automate repetitive, labor-intensive tasks. By understanding and generating human-like text, these models can simplify complex security tasks, making AppSec more efficient.

Imagine a scenario where LLMs could automate routine security reviews and handle tasks that were previously too minor to warrant manual oversight, thereby expanding the coverage of an AppSec team without additional personnel.

Introducing AI-based "Security Oracles"

Using frameworks like Retrieval-Augmented Generation (RAG), organizations can implement AI-based "Security Oracles." These AI agents can:

  • Query best practices
  • Access security policies
  • Analyze organizational data
  • Provide contextual security insights

For example, a "SecurityGPT"-style assistant could answer questions, generate tailored recommendations, and produce security documentation by leveraging an organization's existing resources.
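
A minimal sketch of how such an oracle could be wired up is shown below. The `PolicyDoc` structure, the keyword-overlap retrieval, and the `call_llm` wrapper are all illustrative placeholders; a production setup would use the organization's actual model endpoint and a proper vector store.

```python
from dataclasses import dataclass

@dataclass
class PolicyDoc:
    title: str
    text: str

# Hypothetical organizational knowledge base; real content would come from
# policy repositories, wikis, and prior assessments.
POLICIES = [
    PolicyDoc("Authentication standard", "All externally reachable services must use SSO with MFA."),
    PolicyDoc("Encryption standard", "Data at rest must be encrypted with AES-256; TLS 1.2+ in transit."),
    PolicyDoc("Input validation guideline", "Validate, normalize, and encode all untrusted input."),
]

def retrieve(question: str, docs: list, k: int = 2) -> list:
    """Naive retrieval by keyword overlap; a real deployment would use
    embeddings and a vector store."""
    terms = set(question.lower().split())
    return sorted(docs, key=lambda d: -len(terms & set(d.text.lower().split())))[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for the organization's LLM endpoint (assumption);
    swap in a real client here."""
    raise NotImplementedError

def security_oracle(question: str) -> str:
    """Answer a security question grounded in retrieved organizational policy."""
    context = "\n\n".join(f"# {d.title}\n{d.text}" for d in retrieve(question, POLICIES))
    prompt = (
        "You are an application security assistant. Answer using only the "
        "organizational policies below and name the policy you relied on.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```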

High-Level Workflow: AI-Enhanced AppSec Activities

Here's a breakdown of how AI agents could streamline a security review process, using the Security Review Process Funnel as a model:

1. Risk Classification

  • AI-powered risk scoring based on technical specifications
  • Alignment with organization's framework
  • Automation of lower-risk assessments
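
As an illustration of this step, here is a rough sketch of rule-based risk scoring feeding a review-tier decision. The attributes, weights, and thresholds are invented for this example and would need to mirror the organization's real risk framework; an LLM could fill in the attribute values from a free-text technical specification.

```python
# Illustrative only: attributes, weights, and thresholds are placeholders and
# would have to mirror the organization's actual risk framework.
RISK_WEIGHTS = {
    "internet_facing": 3,
    "handles_pii": 3,
    "privileged_access": 2,
    "third_party_dependencies": 1,
}

def classify(spec: dict) -> tuple:
    """Score a technical specification and map it to a review tier."""
    score = sum(w for attr, w in RISK_WEIGHTS.items() if spec.get(attr))
    if score >= 6:
        tier = "full manual security review"
    elif score >= 3:
        tier = "AI-assisted review with human sign-off"
    else:
        tier = "automated review only"
    return score, tier

print(classify({"internet_facing": True, "handles_pii": True}))
# -> (6, 'full manual security review')
```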

2. Rapid Risk Assessment

  • Integration with Mozilla's RRA guide
  • Automated impact analysis
  • Instant report generation
  • Standardized evaluations
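
A sketch of the report-generation step might look like the following. The impact scale follows the LOW/MEDIUM/HIGH/MAXIMUM levels used in Mozilla's RRA guide, but the dimensions and report layout here are assumptions chosen for illustration; in practice an LLM would propose the impact ratings and this code would only standardize the output.

```python
from datetime import date

# Impact scale from Mozilla's RRA guide; dimensions and layout are assumptions.
IMPACT_LEVELS = ("LOW", "MEDIUM", "HIGH", "MAXIMUM")

def rra_report(service: str, impacts: dict) -> str:
    """Render a standardized Rapid Risk Assessment summary."""
    assert all(level in IMPACT_LEVELS for level in impacts.values())
    overall = max(impacts.values(), key=IMPACT_LEVELS.index)
    lines = [f"Rapid Risk Assessment: {service}",
             f"Generated: {date.today().isoformat()}",
             ""]
    lines += [f"  {dim}: {level}" for dim, level in impacts.items()]
    lines += ["", f"Overall risk: {overall}"]
    return "\n".join(lines)

print(rra_report("payments-api", {
    "Confidentiality": "HIGH",
    "Integrity": "MEDIUM",
    "Availability": "LOW",
}))
```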

3. Security Review Types

Standard Review

LLMs provide general recommendations on:

  • Authentication
  • Authorization
  • Encryption
  • Input Validation

Custom AI-Powered Review

Using RAG for:

  • Deep analysis
  • Customized recommendations
  • Expert-level insights
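
In practice, the difference between the two review types largely comes down to what goes into the prompt. The sketch below contrasts a generic checklist prompt with a RAG-augmented one; both templates are hypothetical, and the `retrieved_context` argument stands in for whatever retrieval pipeline the organization runs.

```python
CHECKLIST = ["authentication", "authorization", "encryption", "input validation"]

def standard_review_prompt(design: str) -> str:
    """Generic review: fixed checklist, no organizational context."""
    return (
        f"Review the following design for issues related to {', '.join(CHECKLIST)}. "
        "List each finding with a severity and a remediation suggestion.\n\n"
        f"{design}"
    )

def custom_review_prompt(design: str, retrieved_context: list) -> str:
    """RAG-augmented review: the same task, grounded in retrieved internal standards."""
    context = "\n\n".join(retrieved_context)
    return (
        "Review the following design against the internal standards provided. "
        "Flag any deviation from a standard explicitly and cite the standard.\n\n"
        f"Internal standards:\n{context}\n\nDesign:\n{design}"
    )
```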

Implementing AI Agents in AppSec: Key Benefits

Benefit                   Description
Scalability               Automate repetitive and low-risk assessments
Consistency               Reduce variability in risk assessments
Proactivity               Monitor code changes and identify vulnerabilities early
Resource Optimization     Maximize impact of existing security engineers

Envisioning the Future of AppSec

Future Perspective

As LLM technology continues to advance, we may see AppSec workflows where AI and human expertise work seamlessly together. Security teams can focus on higher-order analysis while AI handles the foundational tasks, creating a proactive, resilient approach to security.

Conclusion

Integrating AI into AppSec marks a revolutionary shift in security practices, enabling organizations to scale their security efforts without adding personnel. While manual oversight remains crucial, the combination of human expertise and AI-driven automation offers a future where AppSec is:

  • ✅ Faster
  • ✅ More consistent
  • ✅ Ultimately more effective
Remember

The goal is not to replace human expertise but to augment it with AI capabilities for better security outcomes.

Memory Forensics!

· 3 min read
thilaknath
Student @ Concordia University

Have you ever wondered what happens when, as a forensic examiner, you cannot retrieve data from a suspect's system because it is protected by whole-disk encryption software? That question was on our minds when we started a project on recovering cryptographic keys from memory; given the limited time available, we targeted TrueCrypt specifically.

The integration of strong encryption into operating systems has created a challenge for forensic examiners, potentially preventing them from recovering any digital evidence from a whole-disk-encrypted system. Because strong encryption cannot be circumvented without a key or passphrase, forensic examiners may be unable to access data after a computer is shut down. Whole-disk encryption software such as PGP and TrueCrypt enables file-level encryption, as well as disk-level encryption that may be mounted as a volume and used to store data. TrueCrypt encrypts the whole disk using a selected encryption algorithm and hash algorithm, generating a master key and a secondary key based on multiple criteria.

The software needs to keep the key in RAM, so a dump of the RAM can reveal it. Dumping the RAM of a running system from another computer, without altering the integrity of the disk, can be performed over FireWire (the IEEE 1394 bus). Another technique, the cold boot attack, involves physically shutting the machine down, cooling the DRAM chips, and inserting them into another computer; this can introduce memory integrity errors and therefore requires additional recovery operations on the key. With the FireWire approach used here, the dump of the RAM is error-free.

Our project was divided into three main phases of development, described below:

  1. Dump the RAM over FireWire using Inception. Inception is an open-source tool written in Python, built to exploit the FireWire connection to gain access to RAM. It establishes a Direct Memory Access (DMA) connection with the target, initiates a dump of the memory, and writes it to a local file.

  2. Develop key-extraction software for TrueCrypt 7.

AES is a cipher based on a substitution-permutation (SP) network that works with 128-, 192-, or 256-bit keys. TrueCrypt uses the 256-bit variant when encrypting with AES. Because this cipher is widely used, fast in both software and hardware, and regarded as the de facto standard in most new cryptographic applications, we focused on it in this project; a sketch of the key-schedule search this phase relies on is given after this list.

  3. Prepare an automated procedure to demonstrate the success of the attack on a running system.
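
As a concrete illustration of phase 2, the sketch below implements the classic key-schedule search used by tools such as aeskeyfind: every aligned 32-byte window of the RAM image is expanded into an AES-256 key schedule and compared against the bytes that follow it, since disk-encryption drivers keep the expanded schedules in memory while a volume is mounted. This is a simplified reconstruction for illustration, not our original project code; a real tool would prefilter candidates (for example by entropy) for speed and tolerate a few bit errors.

```python
import sys

def rotl8(x, n):
    """Rotate an 8-bit value left by n bits."""
    return ((x << n) | (x >> (8 - n))) & 0xFF

def build_sbox():
    """Generate the AES S-box from GF(2^8) inverses plus the affine transform."""
    sbox = [0] * 256
    p = q = 1
    while True:
        p = (p ^ (p << 1) ^ (0x1B if p & 0x80 else 0)) & 0xFF   # p *= 3 in GF(2^8)
        q = (q ^ (q << 1)) & 0xFF                               # q /= 3 (i.e. q *= 0xF6)
        q = (q ^ (q << 2)) & 0xFF
        q = (q ^ (q << 4)) & 0xFF
        if q & 0x80:
            q ^= 0x09
        sbox[p] = q ^ rotl8(q, 1) ^ rotl8(q, 2) ^ rotl8(q, 3) ^ rotl8(q, 4) ^ 0x63
        if p == 1:
            break
    sbox[0] = 0x63                                              # 0 has no inverse
    return sbox

RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40]               # round constants for AES-256

def expand_key_256(key, sbox):
    """Expand a 32-byte candidate key into the 240-byte AES-256 key schedule."""
    w = [list(key[i:i + 4]) for i in range(0, 32, 4)]            # 8 initial words
    for i in range(8, 60):
        temp = list(w[i - 1])
        if i % 8 == 0:
            temp = [sbox[b] for b in temp[1:] + temp[:1]]        # RotWord + SubWord
            temp[0] ^= RCON[i // 8 - 1]
        elif i % 8 == 4:
            temp = [sbox[b] for b in temp]                       # SubWord only
        w.append([w[i - 8][j] ^ temp[j] for j in range(4)])
    return bytes(b for word in w for b in word)

def find_aes256_keys(dump, step=4):
    """Report offsets whose following bytes equal the expanded schedule of the
    32 bytes at that offset -- the signature of an in-memory AES key schedule."""
    sbox = build_sbox()
    hits = []
    for off in range(0, len(dump) - 240, step):
        schedule = expand_key_256(dump[off:off + 32], sbox)
        if schedule[32:] == dump[off + 32:off + 240]:
            hits.append((off, dump[off:off + 32].hex()))
    return hits

if __name__ == "__main__":
    with open(sys.argv[1], "rb") as f:                           # RAM image, e.g. from Inception
        image = f.read()
    for offset, key_hex in find_aes256_keys(image):
        print(f"possible AES-256 key at offset {offset:#x}: {key_hex}")
```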