Machine Learning
I use machine learning to study a wide range of problems, drawing on neuroimaging data, text data, and intensive longitudinal smartphone data.
Project 1: Prediction of memory scores from brain data
I used resting-state fMRI data from 540 individuals in the Human Connectome Project to test whether machine learning could predict individual differences in recognition memory from individual differences in the organization of the recognition-memory network. Elastic-net regularized linear regression, random forests, and k-nearest-neighbors regression all failed to predict performance.
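The exact connectivity features and tuning procedure are not described above, so the snippet below is only a minimal sketch of the model comparison, with simulated data standing in for the HCP connectivity features and memory scores; the sample split, feature dimensions, and hyperparameters are illustrative assumptions.

```r
library(glmnet)        # elastic-net regularized regression
library(randomForest)  # random forests
library(FNN)           # k-nearest-neighbors regression

set.seed(1)
n <- 540; p <- 200                       # 540 subjects, p connectivity features (illustrative)
X <- matrix(rnorm(n * p), n, p)          # stand-in for functional connectivity features
y <- rnorm(n)                            # stand-in for recognition-memory scores

train <- sample(n, 400)                  # simple train/test split
X_tr <- X[train, ];  y_tr <- y[train]
X_te <- X[-train, ]; y_te <- y[-train]

# Elastic net: alpha = 0.5 mixes L1 and L2 penalties; lambda chosen by cross-validation
enet <- cv.glmnet(X_tr, y_tr, alpha = 0.5)
pred_enet <- predict(enet, X_te, s = "lambda.min")

# Random forest regression
rf <- randomForest(x = X_tr, y = y_tr, ntree = 500)
pred_rf <- predict(rf, X_te)

# k-nearest-neighbors regression
pred_knn <- knn.reg(train = X_tr, test = X_te, y = y_tr, k = 10)$pred

# Out-of-sample correlation between predicted and observed scores for each model
sapply(list(enet = pred_enet, rf = pred_rf, knn = pred_knn),
       function(p) cor(as.numeric(p), y_te))
```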
Project 2: Automatically identifying, classifying, and counting details in memories with natural language processing
For a detailed description, see my Natural Language Processing page.
Project 3: Individualized prediction of suicidal thoughts using intensive longitudinal data and machine learning
In collaboration with Shirley Wang, I apply machine learning to smartphone data (e.g., several surveys a day plus continuous GPS, movement, and other data) to predict the intensity of suicidal thoughts after discharge from the hospital. To improve prediction, we train a separate LASSO model for each individual rather than a single group-level model (for similar personalized prediction approaches, see Fisher & Soyster, 2019). We find that dynamic features of the time-series data (e.g., recent rapid fluctuations in suicidal thinking) are most predictive of subsequent suicidal thoughts. Initial results were presented at the Association for Behavioral and Cognitive Therapies (2021).
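As a rough illustration of the idiographic modeling idea (not our actual pipeline; the simulated data, lag structure, and dynamic features below are assumptions), a LASSO model can be fit separately for each participant on lagged and dynamic features of that person's own time series:

```r
library(glmnet)
set.seed(2)

# Simulate one participant's repeated suicidal-ideation ratings
# (a stand-in for the real EMA time series, which is not shown here)
simulate_person <- function(n_obs = 120) {
  as.numeric(arima.sim(model = list(ar = 0.6), n = n_obs)) + 5
}

# Fit a person-specific LASSO predicting the next rating from
# dynamic features of that person's recent history
fit_person_lasso <- function(y, lags = 4) {
  E <- embed(y, lags + 1)                        # row: y_t, y_{t-1}, ..., y_{t-lags}
  outcome   <- E[, 1]
  lag_feats <- E[, -1, drop = FALSE]
  recent_change <- lag_feats[, 1] - lag_feats[, 2]   # most recent change
  recent_sd     <- apply(lag_feats, 1, sd)           # recent variability
  X <- cbind(lag_feats, recent_change, recent_sd)
  cv.glmnet(X, outcome, alpha = 1)               # alpha = 1 gives the LASSO penalty
}

# One model per participant, rather than a single group-level model
participants <- replicate(5, simulate_person(), simplify = FALSE)
models <- lapply(participants, fit_person_lasso)

# Predict the next rating for the first participant from their latest observations
latest <- rev(tail(participants[[1]], 4))        # y_t, y_{t-1}, y_{t-2}, y_{t-3}
x_new  <- c(latest, latest[1] - latest[2], sd(latest))
predict(models[[1]], matrix(x_new, nrow = 1), s = "lambda.min")
```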
Project 4: Machine learning course redesign
I obtained a grant from Harvard University’s Graduate School of Arts and Sciences to design sections for a machine learning course aimed at PhD students in the psychology department. Each section combined programming in R with a discussion of conceptual material (sometimes a review, sometimes new material). I designed material for sections on clustering, Markov chain models, principles of supervised learning, regularization (e.g., LASSO), SVMs, tree-based approaches (e.g., random forests), the basics of natural language processing, and the basics of neural networks.
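As a flavor of the kind of short in-section exercise (a hypothetical illustration, not taken from the actual course materials), a clustering section might pair a conceptual discussion of k-means with a hands-on example like this:

```r
# k-means clustering on a built-in dataset, with k chosen by eye from an elbow plot
data(iris)
X <- scale(iris[, 1:4])                     # standardize the four measurements

# Total within-cluster sum of squares for k = 1..8
wss <- sapply(1:8, function(k) kmeans(X, centers = k, nstart = 25)$tot.withinss)
plot(1:8, wss, type = "b", xlab = "k", ylab = "Within-cluster SS")

# Fit with k = 3 and compare the clusters to the species labels
km <- kmeans(X, centers = 3, nstart = 25)
table(km$cluster, iris$Species)
```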