Cogniguard explores a problem that is simple to state but difficult to solve: detecting cognitive overload in real time using noisy physiological signals, and translating that information into something a person can actually understand and act on.
The project combines EEG-based signal analysis, machine learning, and an AI-driven interface to estimate cognitive fatigue and surface it through a clean, real-time dashboard. Rather than chase raw signal accuracy alone, we set out to explore how biosensing systems might support awareness and intervention without overwhelming the user.
The core question behind Cogniguard was:
How can we detect cognitive overload early, using imperfect biosignals, and present that information in a way that feels helpful rather than technical?
EEG data is inherently noisy, especially in non-clinical settings. Instead of treating this as a flaw to eliminate entirely, we treated it as a design constraint—something the system had to work with, not against.
My work focused on the data and modeling side of the system.
Because EEG signals are noisy and highly variable, I used a transfer learning approach: training models first on cleaner, more structured datasets, then fine-tuning them to detect markers associated with cognitive fatigue. This helped stabilize learning and improve robustness when working with simulated or lower-fidelity EEG data.
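A minimal PyTorch sketch of this fine-tuning pattern (layer sizes, feature counts, and variable names here are illustrative, not the project's actual architecture): a pretrained encoder is frozen, and only a small fatigue-classification head is trained on the noisier data.

```python
import torch
import torch.nn as nn

# Encoder assumed pretrained on a cleaner, more structured dataset
# (the pretraining loop is omitted here).
encoder = nn.Sequential(
    nn.Linear(64, 32),  # e.g. 64 EEG features in (band powers per channel)
    nn.ReLU(),
)

# Freeze the pretrained encoder so fine-tuning only adapts the new head.
for param in encoder.parameters():
    param.requires_grad = False

# New head fine-tuned on fatigue-labelled, lower-fidelity EEG data.
head = nn.Linear(32, 2)  # two classes: normal load vs. elevated load
model = nn.Sequential(encoder, head)

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3
)
loss_fn = nn.CrossEntropyLoss()

# One illustrative fine-tuning step on synthetic data.
x = torch.randn(8, 64)         # batch of 8 feature vectors
y = torch.randint(0, 2, (8,))  # synthetic fatigue labels
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

Freezing the encoder is what keeps learning stable on small, noisy fine-tuning sets: only the head's few parameters move, so the representation learned from cleaner data is preserved.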
When the system detects elevated cognitive load, an AI agent powered by the Cortex API translates the technical output into a simple, user-facing alert rather than exposing raw metrics.
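As a sketch of that translation step, the logic below maps the model's overload probability to a coarse level and a plain-language message. The function names, thresholds, and wording are hypothetical; in Cogniguard the final phrasing comes from the Cortex-powered agent, whose API call is not shown here.

```python
from typing import Optional

def load_level(prob_overload: float) -> str:
    """Map the model's overload probability to a coarse level (illustrative cutoffs)."""
    if prob_overload >= 0.75:
        return "high"
    if prob_overload >= 0.5:
        return "elevated"
    return "normal"

def build_alert(prob_overload: float) -> Optional[str]:
    """Return a simple user-facing alert, or None when no intervention is needed."""
    level = load_level(prob_overload)
    if level == "normal":
        return None
    if level == "elevated":
        return "Your focus may be dipping. A short break could help."
    return "You appear to be under heavy cognitive load. Consider stepping away."
```

The key design choice is that raw metrics never reach the user: `build_alert(0.3)` returns `None`, so nothing is shown at all unless the load estimate crosses a threshold worth acting on.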
The full pipeline includes:
- EEG signal processing and feature extraction in Python
- Machine learning models built with PyTorch
- A real-time Streamlit dashboard visualizing cognitive load and simulated brain activity
- Snowflake as a backend data warehouse to store sessions and analyze trends over time
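The first stage of that pipeline can be sketched with a plain NumPy periodogram: average power in a few standard EEG frequency bands, which then serve as model features. The sampling rate, band edges, and synthetic test signal below are assumptions for illustration, not the project's exact configuration.

```python
import numpy as np

FS = 256  # assumed sampling rate, Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal: np.ndarray, fs: int = FS) -> dict:
    """Average spectral power in each EEG band, via a plain FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {
        name: float(psd[(freqs >= lo) & (freqs < hi)].mean())
        for name, (lo, hi) in BANDS.items()
    }

# Synthetic 10 Hz "alpha" oscillation plus noise: alpha power should dominate.
rng = np.random.default_rng(0)
t = np.arange(0, 2, 1.0 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))
features = band_powers(eeg)
```

Band-power ratios like these are a common, interpretable starting point for fatigue-related features, which fits the project's emphasis on outputs a person can reason about.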
The emphasis was on clarity and interpretability, not just prediction.
I worked primarily on:
- EEG data handling and preprocessing
- Model design and transfer learning strategy
- Integration between ML outputs and the AI agent
- Designing how cognitive load would be surfaced to the user
I also contributed to shaping the overall system flow and how technical outputs were translated into human-readable feedback.
This project reinforced my interest in neuro-adjacent systems that sit between raw biosignals and human experience. It raised questions about how much signal fidelity is actually necessary for meaningful intervention, and how adaptive systems might respond before users consciously recognize overload themselves.