Cybersecurity and GRC professional who builds and secures AI automation workflows using AI agents and LLMs. Self-taught, hands-on, and open to full-time or contract roles.
I come from an IT, cybersecurity, and GRC background: risk assessments, compliance frameworks, third-party risk, and policy work. But as AI started reshaping how teams operate, I saw a gap: most organizations are adopting AI workflows and automation faster than they can secure or govern them.
So I started getting my hands dirty. Instead of just reading about AI security, I'm building real projects: triage automation, observable AI workflows, and systems that plug into the tools teams actually use. Every project I work on is something I can walk through, explain, and defend.
My value proposition is simple: I sit at the intersection of security knowledge and technical building. I understand the compliance and risk side, and I can build the AI-powered automation that makes teams faster, more consistent, and more observable. That's the gap I fill.
Real systems, not tutorials. Each project is designed to solve an actual operational problem with production-grade tooling.
End-to-end security event triage: intake → normalize → AI-assisted classification → Jira ticketing → Slack alerts → Langfuse observability. Every run is traced, ticketed, and auditable.
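A minimal sketch of that pipeline shape, assuming an OpenAI-compatible model for classification, Jira Cloud's REST API, and a Slack incoming webhook. `JIRA_URL`, `SLACK_WEBHOOK`, the `SEC` project key, and the model name are placeholders, and the Langfuse tracing layer (e.g. wrapping each step with the SDK's observe decorator) is omitted here to keep the sketch version-agnostic.

```python
# Sketch of the triage flow: intake -> normalize -> classify -> Jira -> Slack.
# Endpoint names, env vars, and the project key are illustrative placeholders.
import json
import os

import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def normalize(raw_event: dict) -> dict:
    """Map a raw alert into a common schema before classification."""
    return {
        "source": raw_event.get("source", "unknown"),
        "title": raw_event.get("title", "")[:200],
        "details": json.dumps(raw_event, default=str),
    }


def classify(event: dict) -> str:
    """Ask the model for a severity label; fall back to 'review' on anything unexpected."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Classify this security event as one of: low, medium, high, critical."},
            {"role": "user", "content": event["details"]},
        ],
    )
    label = resp.choices[0].message.content.strip().lower()
    return label if label in {"low", "medium", "high", "critical"} else "review"


def open_jira_ticket(event: dict, severity: str) -> None:
    """Create a Jira issue via the REST API issue endpoint."""
    requests.post(
        f"{os.environ['JIRA_URL']}/rest/api/2/issue",
        auth=(os.environ["JIRA_USER"], os.environ["JIRA_TOKEN"]),
        json={"fields": {
            "project": {"key": "SEC"},
            "summary": f"[{severity.upper()}] {event['title']}",
            "description": event["details"],
            "issuetype": {"name": "Task"},
        }},
        timeout=10,
    ).raise_for_status()


def alert_slack(event: dict, severity: str) -> None:
    """Post a short alert to a Slack incoming webhook."""
    requests.post(
        os.environ["SLACK_WEBHOOK"],
        json={"text": f":rotating_light: {severity.upper()}: {event['title']}"},
        timeout=10,
    ).raise_for_status()


def triage(raw_event: dict) -> str:
    """Run one event end to end and return the assigned severity."""
    event = normalize(raw_event)
    severity = classify(event)
    open_jira_ticket(event, severity)
    alert_slack(event, severity)
    return severity
```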
Prompt injection testing, regression datasets, quality scoring, and automated evaluation of triage accuracy over time.
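A sketch of how that evaluation might run, assuming a JSONL regression dataset where each line has an `event` (in the normalized schema above, including injection attempts) and an `expected` label. The injection markers and field names are illustrative; `classify` is the helper from the pipeline sketch above.

```python
# Regression-style evaluation: replay labeled historical events through the
# classifier, score accuracy, and flag cases where injection text flipped the label.
import json

# Hypothetical markers used to tag injection-style test cases in the dataset.
INJECTION_MARKERS = ["ignore previous instructions", "disregard the above"]


def load_cases(path: str) -> list[dict]:
    """Read one labeled case per JSONL line: {"event": {...}, "expected": "high"}."""
    with open(path) as f:
        return [json.loads(line) for line in f]


def evaluate(cases: list[dict], classify) -> dict:
    """Return overall accuracy plus the titles of misclassified injection cases."""
    correct = 0
    injection_failures = []
    for case in cases:
        predicted = classify(case["event"])
        if predicted == case["expected"]:
            correct += 1
        elif any(m in case["event"]["details"].lower() for m in INJECTION_MARKERS):
            injection_failures.append(case["event"]["title"])
    return {
        "accuracy": correct / len(cases),
        "injection_failures": injection_failures,
    }
```

Running this on every prompt or model change gives a simple quality score over time and surfaces regressions where adversarial text in an event body changes the triage outcome.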
Security Hub and GuardDuty integration, S3 evidence storage, and automated cloud security finding triage.
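A sketch of the cloud finding intake under assumed names: boto3 against Security Hub (GuardDuty findings flow in through the same API when the integration is enabled), with raw findings written to an evidence bucket named by a hypothetical `EVIDENCE_BUCKET` environment variable. The filter and key layout are illustrative.

```python
# Pull active Security Hub findings and archive each raw finding to S3 so every
# triage decision has an immutable evidence artifact behind it.
import json
import os
from datetime import datetime, timezone

import boto3

securityhub = boto3.client("securityhub")
s3 = boto3.client("s3")


def pull_active_findings(max_results: int = 50) -> list[dict]:
    """Fetch currently active findings from Security Hub."""
    resp = securityhub.get_findings(
        Filters={"RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}]},
        MaxResults=max_results,
    )
    return resp["Findings"]


def store_evidence(finding: dict) -> str:
    """Write the raw finding JSON to the evidence bucket, keyed by date and finding ID."""
    key = f"evidence/{datetime.now(timezone.utc):%Y/%m/%d}/{finding['Id'].split('/')[-1]}.json"
    s3.put_object(
        Bucket=os.environ["EVIDENCE_BUCKET"],
        Key=key,
        Body=json.dumps(finding, default=str).encode(),
    )
    return key


if __name__ == "__main__":
    for finding in pull_active_findings():
        print(store_evidence(finding), finding.get("Severity", {}).get("Label"))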
I'm open to full-time and contract roles in Cybersecurity, GRC, and AI automation. If you need someone who can bridge compliance and technical implementation, or if what I've built here resonates, I'd love to connect.