This project focused on analyzing Steady-State Visual Evoked Potential (SSVEP) EEG data to explore how different signal processing and analysis techniques affect classification and interpretability.
We implemented an end-to-end EEG processing pipeline (MATLAB → Python/MNE → training-ready datasets):
- Data format: 11 channels (time + 8 EEG + trigger + LDA), sampled at 256 Hz
- Preprocessing: 1–40 Hz bandpass + 50 Hz notch filtering
- Event labeling: extracted trigger events and assigned frequency labels for a 4-class SSVEP task (9, 10, 12, 15 Hz)
- Epoching: stimulus-locked epochs (2 s for batch training; we also tested longer 8 s epochs for analysis)
- Segmentation: sliding windows (e.g., a 2.0 s window with a 0.2 s step, i.e., ~90% overlap) to simulate near real-time classification
This let us move from raw recordings → labeled windows we could actually train on.
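The filtering and epoching steps above can be sketched in plain NumPy/SciPy (the project itself used MNE; function names, filter orders, and the single-sample trigger convention here are my illustrative choices, not the project's actual code):

```python
import numpy as np
from scipy import signal

FS = 256  # sampling rate (Hz)

def preprocess(eeg, fs=FS):
    """1-40 Hz bandpass plus 50 Hz notch, applied along the time axis."""
    b_bp, a_bp = signal.butter(4, [1, 40], btype="bandpass", fs=fs)
    b_n, a_n = signal.iirnotch(50, Q=30, fs=fs)
    filtered = signal.filtfilt(b_bp, a_bp, eeg, axis=-1)
    return signal.filtfilt(b_n, a_n, filtered, axis=-1)

def epoch(eeg, trigger, fs=FS, tmax=2.0):
    """Cut stimulus-locked epochs at rising edges of the trigger channel."""
    onsets = np.flatnonzero(np.diff((trigger > 0).astype(int)) == 1) + 1
    n = int(tmax * fs)
    return np.stack([eeg[:, s:s + n] for s in onsets if s + n <= eeg.shape[1]])
```

In MNE the same steps map to `raw.filter`, `raw.notch_filter`, `mne.find_events`, and `mne.Epochs`; the sketch above just makes the data flow explicit.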
I contributed to the parts that make or break EEG projects:
- Built preprocessing + event/epoch extraction in MNE (RawArray construction, trigger event detection, epoch generation)
- Implemented segmentation strategies (fixed windows + sliding windows) and saved datasets for training (.npz)
- Helped test modeling approaches (baseline CNNs + time–frequency features via STFT)
- Supported communication: visuals + explaining why the signal behaved the way it did (noise, subject variability, etc.)
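A minimal sketch of the sliding-window segmentation described above (the function name and the `groups` bookkeeping are my additions; keeping a per-window epoch index is what later enables leak-free splits):

```python
import numpy as np

def sliding_windows(epochs, labels, fs=256, win_s=2.0, step_s=0.2):
    """Slice each epoch into overlapping windows; every window keeps its epoch's label."""
    win, step = int(win_s * fs), int(step_s * fs)
    X, y, groups = [], [], []
    for i, (ep, lab) in enumerate(zip(epochs, labels)):
        for start in range(0, ep.shape[-1] - win + 1, step):
            X.append(ep[:, start:start + win])
            y.append(lab)
            groups.append(i)  # remember the source epoch for group-aware splits
    return np.stack(X), np.array(y), np.array(groups)
```

The result can then be persisted for training with something like `np.savez("ssvep_windows.npz", X=X, y=y, groups=groups)` (filename hypothetical).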
Working with SSVEP highlighted several issues:
- Sensitivity to noise and electrode placement
- Variability across subjects
- Tradeoffs between model complexity and interpretability
These challenges shaped both our analysis approach and how we presented results.
We ran multiple model baselines. Some performed near chance at first (honestly, a useful reality check); performance improved once we moved to longer segments and time–frequency features.
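One way such time–frequency features can be computed is a log-power STFT per channel, flattened into a feature vector (the helper name and `nperseg` value here are my choices, not the project's configuration):

```python
import numpy as np
from scipy import signal

def stft_features(windows, fs=256, nperseg=128):
    """Log-power STFT per channel, flattened into one feature vector per window.

    windows: array of shape (n_windows, n_channels, n_samples).
    """
    _, _, Z = signal.stft(windows, fs=fs, nperseg=nperseg, axis=-1)
    power = np.log1p(np.abs(Z) ** 2)  # log compression tames EEG's 1/f spectrum
    return power.reshape(windows.shape[0], -1)
```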
Important note: sliding-window overlap can inflate performance if windows from the same original epoch end up in both train and test. In the next iteration, I'd evaluate using epoch-level or session-level splits to measure true generalization.
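The epoch-level split mentioned above can be done with a few lines of NumPy, using the per-window epoch index as the grouping key (a sketch of the idea; scikit-learn's `GroupShuffleSplit` offers the same behavior off the shelf):

```python
import numpy as np

def group_split(groups, test_frac=0.2, seed=0):
    """Split window indices so all windows from one epoch land on the same side."""
    rng = np.random.default_rng(seed)
    uniq = rng.permutation(np.unique(groups))
    n_test = max(1, int(len(uniq) * test_frac))
    test_mask = np.isin(groups, uniq[:n_test])
    return np.flatnonzero(~test_mask), np.flatnonzero(test_mask)
```

Because overlapping windows from one epoch are near-duplicates, a random window-level split lets the model effectively see test data during training; grouping by epoch (or by session) removes that shortcut.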
Beyond data analysis, I also helped with:
- Visual design and production of the final project video
- Communicating technical results in a clear, accessible way


If I continued this project, I'd:
- Add a classical SSVEP baseline like CCA / filter-bank CCA for comparison
- Track performance vs window length + include harmonics (SSVEP often benefits from that)
- Add a simple noise/quality metric (SNR proxy) to explain why some sessions perform better than others
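The classical CCA baseline mentioned above is compact enough to sketch: for each candidate frequency, build sin/cos reference signals (including harmonics) and pick the frequency whose references best correlate canonically with the multichannel window. This is a standard SSVEP-CCA formulation, not the project's code; function names and the number of harmonics are my choices:

```python
import numpy as np

def canon_corr(X, Y):
    """Largest canonical correlation between two (time x features) matrices."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def cca_classify(window, freqs=(9, 10, 12, 15), fs=256, n_harm=2):
    """Pick the stimulus frequency whose sin/cos reference set (with harmonics)
    correlates best with the (channels x samples) window."""
    t = np.arange(window.shape[-1]) / fs
    scores = []
    for f in freqs:
        ref = np.column_stack([fn(2 * np.pi * f * h * t)
                               for h in range(1, n_harm + 1)
                               for fn in (np.sin, np.cos)])
        scores.append(canon_corr(window.T, ref))
    return freqs[int(np.argmax(scores))]
```

This needs no training at all, which is exactly why it makes a good sanity-check baseline against learned models; filter-bank CCA extends it by scoring several bandpassed sub-bands and combining them.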