About Me

I’m an AI researcher specializing in multimodal learning and LLM integration, with a strong focus on time-series modeling, medical imaging, and wearables. My work centers on designing scalable deep learning architectures that align diverse data types within unified vision-language and multimodal foundation models.

Beyond model development, I emphasize rigorous statistical evaluation to ensure models remain reliable in real-world clinical settings. I bring an interdisciplinary foundation in computer science, mathematics, and biomedical sciences, and I thrive in collaborative research environments at the intersection of AI, medicine, and human health.

Research Interests

Large Multimodal Models in Medical Imaging

Designing deep learning architectures that process diverse data types, including chest X-rays (CXR), CT, echocardiograms, ECG, and clinical text.

AI Techniques in Wearable Technologies

Developing time-series models that leverage wearable sensor data to improve remote patient monitoring.

Time-Series Models for Healthcare Monitoring

Applying deep learning to health monitoring data, including irregularly sampled, multivariate time series.