At Ahu Tongariki, Easter Island
My first task was to build a Hadoop and HBase cluster to process big data (news, blogs, tweets). I then taught myself to write MapReduce jobs for text analysis. This changed the way my team processed text data: from handling several tens of thousands of documents on a few separate machines to systematically analyzing millions of documents per day on a cluster and saving the results to a distributed database. Next I worked on the Named Entity Recognition and Event Extraction modules while simultaneously studying machine learning techniques. I also developed solvers for binary SVM, structural SVM, one-class SVM, and ranking SVM in C++ and Java.

Project Participation
During the internship, I explored the potential of applying deep learning methods to health care problems, specifically predicting future heart failure diagnoses. Applying stacked denoising autoencoders to heart failure prediction enabled a sophisticated analysis of the relation between patient features and heart failure diagnosis. Furthermore, by combining the word embedding technique with recurrent neural networks, I was able to improve heart failure prediction performance from 0.81 AUC to 0.86 AUC. This work was published in JAMIA.

Internship at Research, Development and Dissemination (RD&D), Sutter Health, California, May 2016 - August 2016
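The embedding-plus-RNN approach from the first Sutter internship described above can be sketched as a toy forward pass. This is only an illustration: the sizes are hypothetical, the weights are random stand-ins for learned parameters, and a plain tanh RNN stands in for the published architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a small vocabulary of medical codes and tiny dimensions.
n_codes, emb_dim, hid_dim = 50, 8, 16

# Randomly initialized parameters stand in for learned weights.
E   = rng.normal(scale=0.1, size=(n_codes, emb_dim))   # code embeddings
W_x = rng.normal(scale=0.1, size=(emb_dim, hid_dim))   # input-to-hidden
W_h = rng.normal(scale=0.1, size=(hid_dim, hid_dim))   # hidden-to-hidden
w_o = rng.normal(scale=0.1, size=hid_dim)              # output weights

def predict_risk(visits):
    """visits: list of visits, each a list of medical-code indices.
    Each visit is embedded (its code vectors summed), the visit sequence is
    fed through a vanilla RNN, and the final hidden state is mapped to a
    risk score in (0, 1)."""
    h = np.zeros(hid_dim)
    for codes in visits:
        x = E[codes].sum(axis=0)               # visit-level embedding
        h = np.tanh(x @ W_x + h @ W_h)         # recurrent update
    return 1.0 / (1.0 + np.exp(-(h @ w_o)))   # sigmoid risk score

# Three hypothetical visits, each holding a few code indices.
risk = predict_risk([[3, 17], [5], [3, 22, 41]])
```

In the real setting the embedding matrix and RNN weights would be trained end-to-end on longitudinal patient records rather than drawn at random.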
In my second internship at Sutter Health, I focused on developing interpretable deep learning models for predictive healthcare. Specifically, using the neural attention mechanism combined with an RNN and an MLP, I designed RETAIN, a sequence prediction model that demonstrated AUC similar to an RNN's while being completely interpretable: the model allows precise calculation of how much each diagnosis, medication, and procedure in the past visits contributed to the final prediction.

Research Internship at DeepMind, London, U.K., Feb 2017 - May 2017
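The per-code contribution property of the RETAIN-style model described above comes from its additive form. Below is a toy sketch of that decomposition only: the visit weights `alphas` and feature gates `betas` are made-up fixed values here, whereas RETAIN derives them from two RNNs, and all sizes and weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n_codes, emb = 30, 6
E = rng.normal(scale=0.1, size=(n_codes, emb))   # code embeddings (random stand-ins)
w = rng.normal(scale=0.1, size=emb)              # output-layer weights

def attention_contributions(visits, alphas, betas):
    """With a scalar visit weight alpha_i and a per-visit feature gate beta_i,
    the logit w . (sum_i alpha_i * beta_i * v_i), where v_i is the sum of the
    visit's code embeddings, splits exactly into one additive contribution per
    code occurrence (keyed by visit index and code index)."""
    contribs = {}
    logit = 0.0
    for i, codes in enumerate(visits):
        for k in codes:
            c = alphas[i] * w.dot(betas[i] * E[k])
            contribs[(i, k)] = c
            logit += c
    return logit, contribs

visits = [[2, 7], [11], [2, 19]]
# Made-up attention outputs; in RETAIN these come from two RNNs.
alphas = np.array([0.2, 0.3, 0.5])
betas  = rng.normal(scale=0.5, size=(3, emb))
logit, contribs = attention_contributions(visits, alphas, betas)
```

Because the decomposition is exact, the contributions sum back to the model's logit, which is what makes the prediction auditable code by code.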
My first project was to train an embodied agent to identify the heaviest object in a virtual environment. This extended the "Which is heavier?" experiment from Learning to Perform Physics Experiments via Deep Reinforcement Learning (Denil et al., ICLR 2017). The agent was equipped with a hammer to probe the objects, and a positive reward was given when the hammer was in contact with the heaviest object (hence the project name Pinata). The agent successfully learned to interact with the objects and stick to the heaviest one (example video 1, example video 2). My second project was related to language and communication.

Research Internship at Google Research, Mountain View, California, May 2017 - Aug 2017
I was a member of the project team named FluidNets. The objective was to automatically learn the structure of neural networks under a given resource constraint (e.g., the number of parameters or FLOPs) using various regularization methods.
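One common way regularization can select network structure under a resource constraint, sketched here generically (this is not necessarily the exact FluidNets regularizer), is to place an L1 penalty on per-channel scale factors (e.g., batch-norm gammas) and prune the channels the penalty drives to zero:

```python
import numpy as np

# Hypothetical per-channel scale factors of one layer (e.g., batch-norm gammas).
gammas = np.array([0.9, 0.02, 0.5, 0.01, 0.7])

def l1_prox_step(g, lam, lr):
    """One proximal step for the L1 penalty lam * |g|: soft-thresholding.
    Scales whose task gradient cannot overcome the penalty shrink to exactly 0."""
    return np.sign(g) * np.maximum(np.abs(g) - lr * lam, 0.0)

for _ in range(3):                       # a few penalty-only steps for illustration
    gammas = l1_prox_step(gammas, lam=0.05, lr=0.2)

kept = np.nonzero(gammas)[0]             # surviving channels define the learned width
```

In a full training loop the task-loss gradient would also update the scales, so only channels that genuinely help the objective survive the shrinkage; the penalty weight can be tied to the resource being constrained (parameters or FLOPs per channel).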
This is an extension of the "Which is heavier?" experiment from Learning to Perform Physics Experiments via Deep Reinforcement Learning (Denil et al., ICLR 2017). Object densities are shown at the bottom right.

Video 1: Agent trained with 15-second episodes.
Video 2: Agent trained with 50-second episodes (time ticks 3 times faster, hence the 16-second video length).
The embedding matrix codeEmb.npy has the shape 27523 by 200. Each row represents a specific medical concept (a diagnosis, medication, or procedure code) as a 200-dimensional vector. int2str.p is a Python dictionary that maps a row index of the embedding matrix to the string code of the medical concept; for example, the first row of codeEmb.npy maps to the string code "D_401.9". The first letter of a string code is D, R, or P, which stand for diagnosis, medication, and procedure, respectively. str2desc.p is a Python dictionary that maps a string code to the actual description of the medical concept; for example, the string code "D_401.9" maps to the string description "Unspecified essential hypertension".
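A minimal sketch of how the three files fit together. To keep the snippet self-contained it uses tiny synthetic stand-ins with the same layout as the released files ("R_1234" and "P_99.04" are made-up illustrative codes); the commented-out lines show how you would load the real ones instead.

```python
import numpy as np

# In practice, load the released files:
#   codeEmb = np.load('codeEmb.npy')                          # shape (27523, 200)
#   import pickle
#   int2str  = pickle.load(open('int2str.p', 'rb'))           # row index -> string code
#   str2desc = pickle.load(open('str2desc.p', 'rb'))          # string code -> description
# Tiny synthetic stand-ins with the same structure:
codeEmb = np.random.default_rng(0).normal(size=(3, 200))
int2str = {0: 'D_401.9', 1: 'R_1234', 2: 'P_99.04'}           # last two codes are made up
str2desc = {'D_401.9': 'Unspecified essential hypertension'}

def describe(row):
    """Map a row of the embedding matrix to its code, concept type, and description."""
    code = int2str[row]
    kind = {'D': 'diagnosis', 'R': 'medication', 'P': 'procedure'}[code[0]]
    return code, kind, str2desc.get(code, '(no description)')

code, kind, desc = describe(0)
vec = codeEmb[0]          # the 200-dimensional embedding of that concept
```

Nearest-neighbor lookups in the embedding space (e.g., by cosine similarity between rows) then surface medical concepts that the model treats as related.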