Purdue University · Doctor of Technology

Dennis Mercer

Building Explainable AI Systems for Cyber Threat
Intelligence and Security Analytics

01

About

I am a cybersecurity researcher and doctoral student in the Doctor of Technology program at Purdue University. My academic background includes a degree in Computer Engineering Technology, a Master of Science in Information Technology Management with an emphasis in Information Assurance, and a dual Master of Business Administration. These academic experiences are complemented by more than twenty years of professional experience in threat intelligence, security operations, and enterprise cyber defense.

My research focuses on the intersection of artificial intelligence, knowledge representation, and cyber threat intelligence. In particular, I investigate the use of neuro-symbolic AI and fuzzy logic inference to make AI-generated threat intelligence more reliable, trustworthy, and explainable. I also explore how AI-assisted ontology engineering can enhance the ability of security systems to interpret and reason about complex threat data.

02

Research Interests

03

Dissertation Focus

"Enhancing Cyber Threat Intelligence through Explainable AI: A Design Science Approach Using Knowledge Graphs, Ontologies, and Large Language Models"

My doctoral research addresses a critical gap in cyber threat intelligence (CTI) operations: the lack of transparency and explainability in AI-driven threat detection. Using a Design Science Research methodology, I am developing a framework that integrates knowledge graphs, ontology alignment, and large language models to improve network threat detection and attribution. The framework embeds explainable AI directly into the intelligence construction process, enabling analysts to understand not just what a model detected, but why it reached that conclusion. Central to this work is the use of imbalanced learning techniques to handle the highly skewed class distributions common in real-world threat data, alongside structured representations drawn from MITRE ATT&CK and STIX to ground the system in operational standards.
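As a simplified illustration of the imbalanced-learning component, the sketch below computes inverse-frequency class weights (the same "balanced" heuristic used by scikit-learn's compute_class_weight). The labels and counts are hypothetical; this is one common technique for skewed threat data, not the framework itself.

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency so that rare classes
    (e.g. true intrusions) are not drowned out by benign traffic."""
    counts = Counter(labels)
    total, n_classes = len(labels), len(counts)
    # total / (n_classes * count): the "balanced" heuristic from scikit-learn
    return {cls: total / (n_classes * c) for cls, c in counts.items()}

# Hypothetical flow labels: 95 benign, 5 malicious -- the minority class
# receives roughly 19x the weight of the majority class.
labels = ["benign"] * 95 + ["malicious"] * 5
weights = inverse_frequency_weights(labels)
```

These weights can then be passed to a cost-sensitive classifier so that misclassifying a rare malicious flow costs far more than misclassifying a common benign one.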

This research is conducted under the advisement of Dr. Julia Rayz, Professor and Associate Department Head in the Department of Computer and Information Technology at Purdue University, and a CERIAS Fellow specializing in natural language understanding, knowledge representation, and fuzzy logic.

04

Publications & Presentations

Publications and conference presentations will be listed here as they are completed.

05

Projects

Framework

AI-Driven CTI Framework (XCTIF)

An explainable cyber threat intelligence framework that integrates LLMs, knowledge graphs, and ontologies for automated threat analysis and attribution.

  • Python
  • Knowledge Graphs
  • LLMs
  • MITRE ATT&CK

Research

Ontology-Based Reasoning Models

Developing reasoning systems that align CTI ontologies (STIX, ATT&CK, D3FEND) for cross-framework threat intelligence correlation and analysis.

  • Ontology Design
  • STIX/TAXII
  • Semantic Web
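At its simplest, cross-framework correlation rests on an alignment table asserting that concepts in different ontologies refer to the same notion. The sketch below is illustrative only: the concept identifiers and mappings are placeholders, not an authoritative STIX/ATT&CK/D3FEND alignment.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    framework: str  # e.g. "ATTACK", "STIX", "D3FEND"
    ident: str      # identifier within that framework
    label: str

# Placeholder alignment table: pairs of concepts asserted to be related.
# The D3FEND mapping shown here is hypothetical.
ALIGNMENTS = [
    (Concept("ATTACK", "T1566", "Phishing"),
     Concept("STIX", "attack-pattern", "Phishing")),
    (Concept("ATTACK", "T1566", "Phishing"),
     Concept("D3FEND", "D3-XXXX", "Email Sender Analysis")),
]

def correlate(framework, ident):
    """Return concepts in other frameworks aligned with the given one."""
    matches = []
    for a, b in ALIGNMENTS:
        if (a.framework, a.ident) == (framework, ident):
            matches.append(b)
        elif (b.framework, b.ident) == (framework, ident):
            matches.append(a)
    return matches
```

A reasoning system built on such a table can answer cross-framework queries, e.g. retrieving candidate defensive techniques for an observed ATT&CK technique.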

Evaluation

LLM Evaluation Pipelines

Building systematic evaluation frameworks for assessing LLM performance on cybersecurity-specific tasks, including threat report summarization and indicator extraction.

  • NLP
  • Benchmarking
  • Python
  • Prompt Engineering
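One stage such a pipeline might include is scoring an LLM's extracted indicators against an analyst-labeled gold set. The sketch below uses standard set-based precision/recall/F1; the indicator values are hypothetical examples drawn from documentation address ranges.

```python
def score_indicator_extraction(predicted, gold):
    """Precision/recall/F1 for extracted indicators (IPs, domains, hashes)
    compared against an analyst-labeled gold set."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)  # true positives: indicators in both sets
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    denom = precision + recall
    f1 = (2 * precision * recall / denom) if denom else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical model output vs. gold annotations for one threat report
model_out = ["198.51.100.7", "evil.example.com", "203.0.113.9"]
gold = ["198.51.100.7", "evil.example.com", "malware.example.net"]
scores = score_indicator_extraction(model_out, gold)
```

Aggregating such per-report scores across a benchmark corpus gives a systematic picture of where a model over-extracts (low precision) or misses indicators (low recall).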

06

Contact

Interested in collaboration, speaking opportunities, or discussing research? Reach out below.