Dengue in Data
Automated Analysis of Microscopy Images
Given a set of breast cancer microscopy images, classify each image into one of 12 classes, where a class is the mechanism of action of the drug treatment administered to the cells in that image.
From the training set, learn a set of image features sufficient to reconstruct any image (the "dictionary"). Not every image uses every feature in its reconstruction, so the hope is that classes can be distinguished by the relative importance (weight) of each image feature.
About 35-85% accuracy, depending on run parameters and fold. Not too bad, given a naive baseline of about 8% (random selection among 12 classes).
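The dictionary-learning approach described above can be sketched with scikit-learn. This is an illustrative toy, not the thesis code: the data here are random stand-ins for real microscopy patches, and the atom count, sparsity penalty, and classifier choice are assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.random((200, 64))      # stand-in: 200 flattened 8x8 image patches
y_train = rng.integers(0, 12, 200)   # stand-in: 12 mechanism-of-action classes

# Learn a dictionary of 32 atoms; each image is then represented by
# its sparse-code weights over those atoms.
dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
codes = dico.fit_transform(X_train)  # shape (200, 32): one weight per atom

# Classify images by the relative weight each dictionary atom receives.
clf = LogisticRegression(max_iter=1000).fit(codes, y_train)
print(clf.score(codes, y_train))
```

On real data, the patches would be extracted from the microscopy images and the per-patch codes pooled into a per-image feature vector before classification.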
Graduate thesis for MSc (Bioinformatics), 2017, at the University of Melbourne, supervised by Juan Nunez-Iglesias. Data furnished by the Broad Institute.
Linked to a dummy database of randomly generated users
Built with Node.js + Express
Challenge from Nils Fisher: could it be built with queryable API requests instead? (Or rather, as he put it: "why would anyone do that, instead of having an additional SQL layer after the API GET request?")
Identifying Metastatic Cancer from Small Image Scans
Data are a set of about 220,000 labelled training images and 57,000 unlabelled test images. Each training image is annotated with 0 or 1, for tumour absent or present respectively.
Using a convolutional neural network, classify a given image into class 0 or 1.
Achieved about 85% accuracy quite easily, with a total runtime of under an hour using only a quarter of the dataset. The highest public scores are about 95%, achieved with DenseNet-169.
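The core of the CNN approach is a stack of convolutions feeding a binary (tumour present/absent) output. Below is a minimal NumPy sketch of one forward pass, assuming a single convolutional layer, ReLU, global average pooling, and a logistic output; the input and weights are random placeholders, and the real model would be trained with a deep-learning framework rather than hand-rolled like this.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid cross-correlation of a 2-D image with a 2-D kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def predict(img, kernels, weights, bias):
    # conv -> ReLU -> global average pool -> logistic output
    feats = np.array([np.maximum(conv2d(img, k), 0).mean() for k in kernels])
    return 1.0 / (1.0 + np.exp(-(feats @ weights + bias)))

rng = np.random.default_rng(0)
img = rng.random((32, 32))               # stand-in for a small scan patch
kernels = rng.standard_normal((4, 3, 3)) # 4 untrained 3x3 filters
w, b = rng.standard_normal(4), 0.0
p = predict(img, kernels, w, b)          # probability that a tumour is present
print(round(p, 3))
```

A trained network would learn the filter and output weights from the labelled images; this sketch only shows the shape of the computation that maps a scan to a class-1 probability.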