Back in 2009, a project launched by University of Illinois Urbana-Champaign professor Fei-Fei Li showed that how well Artificial Intelligence (AI) could be trained depended heavily on the size of the dataset it learned from. The project’s result was ImageNet, a dataset ambitious enough to aim at mapping out the entire world of objects.
In the nearly ten years since ImageNet reached the AI research community, major technology companies and universities alike have raced to build ever-larger datasets and test the limits of AI.
Last week, Stanford University published a research paper suggesting that AI algorithms can be trained to near-human accuracy. Its authors used more than 100,000 x-ray images, released by the National Institutes of Health (NIH) in late September of this year, to compare their algorithm’s output against diagnoses made by radiologists. The results? The Stanford team reports that its algorithm detects pneumonia with accuracy comparable to that of four trained radiologists.
Even more impressive: the team also tested the algorithm on the 14 diseases labeled in the NIH dataset, such as hernias and fibrosis. Here the AI actually surpassed the existing benchmark: compared to the NIH team’s own published results, its output had fewer false positives and false negatives for each of the 14 diseases.
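To make the idea concrete, here is a minimal sketch of the kind of setup such a study implies: fine-tuning an ImageNet-pretrained convolutional network to predict multiple chest pathologies at once. Everything below is an illustrative assumption rather than the Stanford team’s actual recipe: the DenseNet-121 backbone, the hyperparameters, and the random placeholder batch standing in for the NIH images.

```python
# Illustrative sketch only: fine-tune an ImageNet-pretrained network for
# multi-label chest x-ray classification. Architecture and hyperparameters
# are assumptions, not the Stanford paper's exact pipeline.
import torch
import torch.nn as nn
from torchvision import models

NUM_PATHOLOGIES = 14  # the NIH dataset labels 14 diseases (pneumonia, fibrosis, hernia, ...)

# Start from an ImageNet-pretrained backbone and swap in a new classifier head.
model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
model.classifier = nn.Linear(model.classifier.in_features, NUM_PATHOLOGIES)

# Multi-label setup: each x-ray can show several pathologies at once, so each
# output is an independent sigmoid trained with binary cross-entropy, rather
# than one softmax over mutually exclusive classes.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Placeholder batch standing in for real NIH images: 8 images of 3x224x224,
# each with 14 binary disease labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, NUM_PATHOLOGIES)).float()

model.train()
logits = model(images)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"one training step, loss = {loss.item():.4f}")
```

The multi-label framing matters here: because a single x-ray can carry several of the 14 disease labels at once, each disease gets its own yes/no prediction instead of the network being forced to pick exactly one.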
Andrew Ng, founder of Google Brain, served as a co-author of the paper. Earlier this year, he told MIT Technology Review, “I think health care 10 years from now will use a lot more AI and will look very different than it does today.”
One thing we can expect for sure? The algorithms will keep getting better. Accuracy on the ImageNet benchmark climbed from 75% to 95% in a mere five years. Look out, radiologists.