NYTimes.com article yesterday on the deluge of data from DNA sequencing
Yesterday’s NYTimes.com article on the deluge of data from DNA sequencing raised a couple of interesting issues for me.
Here’s one important item I noticed:
“The cost of sequencing a human genome — all three billion bases of DNA in a set of human chromosomes — plunged to $10,500 last July from $8.9 million in July 2007, according to the National Human Genome Research Institute.
“That is a decline by a factor of more than 800 over four years. By contrast, computing costs would have dropped by perhaps a factor of four in that time span.”
This example highlights an important point: the cost to perform useful computations is driven by more than Moore’s law. It is also a function of our cleverness in designing efficient algorithms and in framing problems in the most effective ways, and that kind of cleverness can improve our ability to do useful computations far faster than trends in raw computing horsepower alone would indicate.
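To make the comparison concrete, here is a quick back-of-the-envelope calculation. This is just a sketch: the dollar figures come from the NHGRI numbers quoted above, and the two-year cost-halving period for computing is my own assumption, chosen so that it reproduces the article’s “factor of four” over four years.

```python
# Back-of-the-envelope comparison of sequencing-cost decline vs. computing-cost decline.
# Dollar figures are from the NHGRI numbers quoted in the article; the two-year
# halving period for computing cost is an assumption matching the article's
# "factor of four" over four years, not a figure from the article itself.

cost_2007 = 8.9e6    # cost to sequence a human genome, July 2007 (USD)
cost_2011 = 10_500   # cost to sequence a human genome, July 2011 (USD)
years = 4.0

seq_factor = cost_2007 / cost_2011        # overall sequencing improvement (~848x)
compute_factor = 2 ** (years / 2.0)       # cost halving every 2 years -> 4x

print(f"Sequencing cost fell ~{seq_factor:.0f}x over {years:.0f} years")
print(f"Computing cost fell only ~{compute_factor:.0f}x in the same span")
print(f"Implied sequencing improvement: ~{seq_factor ** (1 / years):.1f}x per year")
```

Running this shows sequencing costs improving at roughly 5x per year, against about 1.4x per year for computing hardware, which is the gap that algorithmic and methodological cleverness has to account for.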
Now on to my second point. The big constraint in DNA research is fast becoming our ability to make sense of the voluminous data generated by the new sequencing machines, and that takes human thinking; it’s not just a computational task. As in many other areas likely to see an explosion in data generation (driven by the revolution in ultra-low-power mobile information technology), there will be big opportunities for those who can combine careful critical thinking with information technology to sort through vast piles of data and help people generate actionable information. This is also one of the conclusions of the recently released ebook by Brynjolfsson and McAfee titled “Race Against the Machine”, which I highly recommend.