aifeed.dev the frontpage of AI

How Security Analysts Actually Use LLMs Daily

Josh Rickard, a security analyst and engineer, documented his real LLM workflows for tasks like phishing detection, SIEM alert tuning, threat hunting, and security infrastructure design using Claude, Cursor, and ChatGPT. The most useful technique he describes is role-stacking — prompting the model to adopt multiple expert perspectives simultaneously rather than a single analyst viewpoint. He also constrains outputs by specifying his actual stack (Splunk, CrowdStrike EDR, Docker, Kubernetes), which keeps suggestions grounded instead of generic. The honest admission that LLMs amplify bad thinking as readily as good thinking is worth noting. Security teams at companies like CrowdStrike and Palo Alto have been quietly integrating LLMs into their own products, but practitioner-level workflows like these remain underrepresented compared to vendor marketing.
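The role-stacking and stack-constraining techniques described above can be sketched as a simple prompt template. This is a minimal illustration, not Rickard's actual prompts: the role list, stack names, and the helper function are assumptions chosen for the example.

```python
# Sketch of the "role-stacking" pattern: ask the model to reason from several
# expert perspectives at once, and pin recommendations to a concrete tool
# stack so the output stays specific rather than generic.

def build_role_stacked_prompt(task: str, roles: list[str], stack: list[str]) -> str:
    """Compose one prompt that stacks multiple analyst roles and constrains
    answers to the named tools. Roles and stack here are illustrative."""
    role_lines = "\n".join(f"- {r}" for r in roles)
    stack_line = ", ".join(stack)
    return (
        "Analyze the following from ALL of these perspectives, noting where "
        "they would disagree:\n"
        f"{role_lines}\n\n"
        f"Constrain every recommendation to this stack only: {stack_line}.\n"
        "Flag anything you cannot verify from the information given.\n\n"
        f"Task: {task}"
    )

# Hypothetical usage: triaging a SIEM alert with a stacked set of roles.
prompt = build_role_stacked_prompt(
    task=("Triage this Splunk alert: repeated failed logins followed by a "
          "success from a new ASN."),
    roles=["SOC tier-2 analyst", "incident responder",
           "adversary emulation engineer"],
    stack=["Splunk", "CrowdStrike EDR", "Docker", "Kubernetes"],
)
print(prompt)
```

The resulting string would be sent as a single user message; the point is that the roles and the stack constraint live in one prompt, so the model weighs the perspectives against each other instead of answering from a lone analyst viewpoint.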

// 1 comment


ksl

Role-stacking is an underrated technique. I've been doing something similar, feeding Claude the full context of my stack before asking anything security-related. The difference in output quality is night and day. Curious what other security folks are using as their go-to prompting patterns for threat detection?