NHGRI’s Oral History Collection: Interview with George Church

I’m George Church, and I’m a professor of
genetics at Harvard Medical School. I’m also a member of the Broad Institute and
the Wyss Institute. I was born on MacDill Air Force Base on 28
August, 1954 in Tampa, Florida. Yeah, there’s one teacher that I’ve acknowledged
several times publicly, which is Craden Bedford in ninth grade and eleventh grade. He was my math teacher. And he essentially let me off halfway through
the year, both years, because [laughs] some combination of my narcolepsy and the fact
that I was — I knew all the — I seemed to know all the material. So, he was a big influence on me. Also, my photography teacher from tenth grade,
John Snyder, was an amazing influence. And then — well, plenty of them in college
and graduate school. But notably, Sung-Hou Kim, who helped me
transfer from my sophomore year in college to graduate school. Yeah. Well, I graduated in my sophomore year after
working with him for a year and then tried graduate school at Duke with him, but then
I flunked out because I wasn’t paying attention to courses I had already taken. Kind of like high school, you know? [laughs] And I worked as a technician for a year with
Sung Hou. So, I was with him as undergraduate, graduate,
and technician. And then he said, “You probably want a PhD,”
so I went to Harvard to work with Wally Gilbert. Yeah, so I worked on crystallography of the
first folded nucleic acid, transfer RNA, which was the key to the genetic code. And wrote some software during that time that
was in use for 30 years thereafter. But I also did some experiments, but mostly
software. Well, so some of the software I wrote as a
crystallographer I rewrote as a rotation student at Harvard. Again, crystallography lab because they were
the only labs that had scanners which could scan films, and all data was collected on
films at the time, both X ray and sequencing data. And then, that software was — well, I was
consultant for Bio-Rad for a brief while, and then it was completely rewritten for Genome
Therapeutics in the late ’80s, early ’90s. And they and others used it to — for part
of the early stages of the genome project. I think it’s a mystery why Harvard would accept
me after flunking out of Duke. Usually, it’s the other way around or nothing. I think it’s because I was accepted at Harvard
for an application in previous years, and also, I had published five papers during the
time I was flunking out. I had — and on transfer RNA and also on methods
— sorry, a model for DNA protein interactions. Oh, and then Wally — he had come to Duke
for a day at the invitation of graduate students, and I spent almost the whole day with him
because I thought of an excuse for why I should be at every one of his meetings. I don’t think it made much of an impression on him. He wasn’t on the admissions committee or anything
like that, but it made a big impression on me. I was pretty sure even before he showed up
that I wanted to work in his lab and do DNA sequencing or possibly crystallography. But I decided on DNA sequencing. Well, yes. So, I was doing the crystallography and tRNA,
I — there was one point we wanted to ask how general our crystal structure was, and
did it apply to other tRNAs, so I typed in all the sequences available at the time, which
is not something you would do today — it took about an afternoon to type. Almost all the sequences were tRNAs, and I
typed them all in, including the modified bases. And I folded them up in the computer and I
said, “Wow, this is really cool. You know, I could get hundreds of 3D structures
from one 3D structure and a lot of sequences. And so, sequencing must be a lot easier and
it’s just as good.” And so, I knew somebody who was doing RNA
sequencing at the time at Duke, a senior graduate student. And I decided that I wanted to do something
much better than — much more high throughput than that. Maybe sequence, you know, every person’s genome. And so I didn’t really do the math right away,
but it seemed like it was plausible in the same way I went to Wally’s lab, which — at
the time, they were just beginning to do DNA sequencing, and so at that time, you know,
50 base pairs was a big deal. So, we had a ways to go [laugh]. So, the main one that was in use, the one
that I typed in all the sequences from was RNA sequencing where you would digest RNA
— you would label it and digest it and run it out on paper chromatography or paper electrophoresis. And there was a lot of radioactivity, typically. And a lot of high voltage for the paper electrophoresis. And a kerosene-like substance — so “high
voltage,” paper, and kerosene were in a room which had an automatically closing door and
a bunch of CO2 jets to put out the fire, should that ever happen. And so, then — and then we started converting
over to gel electrophoresis. Did work with Tom Maniatis and then later
Wally’s lab. That was in the mid ’70s, ’77, yeah. We were well into the era of gel electrophoresis. Well, so I think the average student would
flounder around for a few months trying to look — even in a lab where you had protocols
working. And in labs where you didn’t have protocols
working, it takes maybe longer. But then once you were up to speed, Greg Sutcliffe,
the person who taught me sequencing — he was a sixth-year student when I was a first-year
student, he knocked off 4,000 base-pair, pBR322 with a little bit of help from me in about
a year. So, 4,000 base pairs a year. And he was extraordinary. Fred Sanger was also extraordinary, but there
weren’t that many at that time. We almost never talked about cost, other than
when we ran out of acrylamide. And we used so much acrylamide that it was
actually a big deal. It was like, a $13,000 purchase order because
we got it in quantity and high purity, and we used a ridiculously large amount of it
until Fred Sanger published his paper on how to use thin gels with thin lanes and low percentage
of acrylamide. And he clearly was being cost-conscious, but
you wouldn’t call it like, a technology improvement. It was, like, three obvious cost-saving things
that, by the way, also helped increase the read length. And when I said that 4,000 was a typical length,
I mean, most people were satisfied with, like, one run-up of — on the chest X-ray film. And so, you might get 60 base pairs, and that
would be their thesis. At the time that I entered the lab, you know
— but in terms of community, there really weren’t that many. There were little communities centered around
Wally Gilbert’s lab and Fred Sanger’s. And their trainees would slowly percolate
out, and they’d send — or Wally’s lab, at least, would send out these photocopied and
very colorful paper, like green and pink paper that they would send out so that people — I
guess so people could easily find them on their bench amongst the clutter, you know. Well, in Wally’s lab, since it was one of
the two centers, it was easy to find out what other people were doing. There was a little bit of traffic in between,
very, very little. In fact, I was — I think I might have been
the first person in Wally’s lab to do Sanger method. And a number of people said, “You’re going
to be thrown out of the lab.” You know, because that’s the enemy. And I said, “Well, we’ll see. I don’t think so.” [laughs] And he was fine with it. But that was kind of the limit of the flow
of information. Well, I mean, they were pretty independent
as far as I could tell. They both noticed the polyacrylamide electrophoresis
on the DNA sequencing gels that Tom [Maniatis] had developed. And they tried it out with different ways
of doing end labeling, meaning — so if you just draw up — where you’d get a ladder,
where each base was longer by one base pair, each — so the ladder could either be terminated
by a chemical cleavage or by the polymerase falling off. And they resulted in the same sort of thing. But they were wildly different methods. And Gilbert’s chemical sequencing took off
early. It was a little easier to implement on double-stranded
DNA, which was what most people had — plasmids, like pBR322. And Fred Sanger’s was kind of restricted to
single-stranded DNA, of which Phi X 174 and G4 and M13 were a few examples. So, anyone, almost everyone had a double-stranded
piece of DNA they could label at one end. So, this chemistry took off for a while, but
ultimately, the dideoxy sequencing — once they had — once Joe Messing [spelled phonetically]
introduced single-stranded vectors that people could use, that took off because it was slightly
higher quality. And no toxic chemicals. Still had to use ³⁵S, though. Right, yeah. So, as I was leaving Duke, I was thinking
about ways to change sequencing radically. And, of course, this is not healthy for an
incoming student, because you’re supposed to just learn the protocols. But the ideas I had kind of in a vacuum because
I was in transition from one lab to another. They had to do with multiplexing — how you can
mix lots of things together, so the same volume would do multiple reactions. And I tried a little bit during my rotation
with Greg, and he said, “No, just finish your project on this plasmid.” And I did. It was fine. I was happy with that. And then I did one other project on RNA splicing. And then finally, I had a moment where I had
to decide whether I was going to continue to do RNA splicing or really follow my dream
of technology development. And I decided that RNA splicing I was doing
would maybe impact 3 or 4 people worldwide, while sequencing could affect more. And so, I tried the — I figured the ultimate
multiplexing would be to do the whole genome in one tube, right? So, all the reactions could be done together,
running the gel — you could have the whole genome in one gel lane. And the problem was just, you know, like,
de-multiplexing it. You’d sort of multiplexed it and so the idea
was to transfer it to a flat surface. We tried a number of different — many different
flat surfaces that would work — and then probe it and then image it and probe it and
image it. And that whole cycle of probing and imaging
was the first inkling of next-gen sequencing. I mean, this was in ’83 or something like
that, published in ’84. So, people were excited about it. I mean, we had no idea what was coming with
next-gen sequencing. They were excited, because you could do it
without cloning or PCR. You could do it without amplification, so
you could get things like methylation and then protein footprinting. So, simultaneously, we gave a way to do sequencing,
methylation and protein footprinting all from — essentially from nuclei of cells. Yeah. So multiplexing, part of it I had had as a
backup plan, that if I couldn’t sequence the whole genome in one tube, in one lane — or
— then I would reduce the complexity of the mixture by mixing a bunch of plasmids that
had your favorite inserts in them. And then you could do — maybe say 20 inserts,
which is what I settled on later. But as it turned out, the whole genome sequencing
did work, and it had many applications. Somebody wrote a book about it, a how-to manual. And then I said, “Well, multiplexing might
have certain advantages.” Even though it’s not the whole genome, it
will be more sensitive. I could now use non-radioactive methods, even though
they were not very sensitive, because by reducing the genome size, I could increase
the fraction of each target. And so, that was multiplex sequencing — was
the first paper I published after becoming a professor. In between, I had a short time as a post-doc
at Biogen and at UCSF, where I worked mainly on stem cells. And then — but then that 1988 multiplex paper
— along the way to that, I developed colorimetric sequencing, which was later used in, I think,
all of the high schools in Seattle, thanks to Lee Hood. Ironically, using my colorimetric method rather
than his fluorescent method. Both of which were non-radioactive, but I guess
one was less expensive. It was early days. And then I developed chemiluminescent [unintelligible]
— yeah, chemiluminescent, which was the main one that was used at Genome Therapeutics. And then — Peter Richterich coming into
the lab was key in that. And that changed us from being the biggest
radioactivity user at Harvard to the lowest radioactivity user, because we finally came up with a molecular
biology tool, a major one, that was non-radioactive. And then fluorescence was the third method
that we used that was non-radioactive. So, the multiplexing was a step towards next-gen
sequencing and was a step away from radioactivity. And it was combined with automation immediately. Even the genomic sequencing, we really didn’t
automate it, even though I had developed the automation software back in ’77, ’78. It wasn’t super-popular. In fact, Greg, when he first heard that I
was doing this, he said, “What do you want to do that for? That’s like the only fun thing of DNA sequencing
is sitting down with your coffee and reading the gels.” So, I kind of put that on the side in ’78
but came back out in ’88 when we did the multiplexing. Right. Well, I heard about it from myself. I was one of the first people to talk about
it. But the first meeting — the first meeting
where it was discussed that I know of, was indeed the Alta 1984 meeting in Alta, Utah. I wouldn’t call it an organizational meeting. It was aimed at a different topic. It was kind of a hijacked, shanghaied meeting
where there was a very small number — it was like 10 scientists were invited. I was the youngest. And — It was about — it was estimating mutation
rates that might in some way be the consequence of atomic energy or other atomic bombs or
so forth. And — or even non-atomic and other energy
sources. Anything could cause it. And so, there was a presidential mandate to
estimate this. We concluded in the first five minutes that
it was not feasible at the moment, certainly. And the best we could do was maybe, at a
dollar a base, we could sequence human genomes — or actually “a human genome” was the way it
was phrased. And that would eventually lead to some other
estimate of error induced by energy. But the point was we could do this. And, you know, we didn’t know whether anyone
was going to listen or not, because we knew we weren’t answering the mandate of the meeting,
which also was sponsored not just by DOE, but also by the Office of Science and Technology. It wasn’t — it was always — I think back
then it was OSTP. And it went back to DOE and went back to the
Office of Science and Technology and, at DOE, they just started — they got excited about
it. And they started writing checks. Mostly internally. DOE had two — three small advantages: They
were already up to speed on fluorescence-activated chromosome sorting, so they could make chromosome-specific
libraries. They were good at bioinformatics, because
George Bell had done more or less what I had done, except I did a whole lot more sequences. And then, they were also good at robotics,
because they had all kinds of robotics for handling radioactive substances and so forth. And then NIH had strengths, as well, mostly
in mapping — human genome mapping — and in model organisms. But at that point, nobody was talking about
model organisms. It seemed like — I was the only person at
the first three meetings, one of which was not organizational, and you could argue the
next two were — so it was the DOE and then the Santa Cruz, which was sort of independent,
you know, sort of university-run thing. And then DOE again in Santa Fe. And all of them were talking about THE human
genome. It was really just A human genome. Wasn’t even a diploid human genome, it was
just 3 billion base pairs. Pretty much all of them came to the conclusion
of a dollar a base, pulled out of the air. Because I know the average student could not
pull it off for a dollar a base at that time. And certainly, it was completely neglecting
the issues of repeats and scale and so forth. But they figured it was a dollar a base, so
it was almost — there was essentially no automation. There were little murmurs of automation in Japan,
and Watson was, like, being very nationalistic. Jim took me aside at a wedding in Cold Spring
Harbor and said, you know, “The Japanese are going to just, you know, eat our lunch. They’re going to take all of our, you know,
economic wherewithal in genomics if we don’t do something about it.” Which is sequence E. coli. And I said, “Well, you know, great.” [laughs] You know? I’d love to have them join. But anyway, they had — Fuji was involved,
and they were building sort of an automated process for making films, which were gels,
so gels that were, like, automated, built more or less the way that film, photographic
film was made. So, it seemed like a natural thing for them. And then they had a lot of other robotics
manufacturers who were interested. And I went over to Japan around that time,
and it was quite impressive. It was like shock and awe of the day. But they dropped out. Fuji decided it was too flaky, it would hurt
their reputation if they had bad quality of any sort. So, they dropped out. The robotic manufacturers, I think, realized
that having a robot do exactly what a human does is actually not cheaper than a human
at that time. In fact, still to this day if you want to
do something with a robot, it’s best to do it in a way that’s quite different from the
way a human does it. So, when we got to next-gen sequencing — when
we developed the first next-gen sequencing device, it didn’t look like a robot at all. There was no robotics involved, because everything
was essentially, again, in one tube. Whereas this — and spread out over a slide,
and it was essentially microscopy. You had a microscope with a lot of moving
parts. So anyway, that was sort of ’84, ’86 when
those first three meetings occurred. Right. So, in ’87, it seemed like it was going to
be a real thing because in ’87, they started giving out grants. I think I might have been the first grant,
the first genome project grant in ’87. It was very modest. In fact, everybody was joking about how I
didn’t know how to ask for a DOE grant because it was, like — I think it was like $100,000
a year or something like that, and most DOE grants were in the many millions of dollars. So, it really — and — but it seemed like
a real thing then. And then NIH started getting excited about
it. And then the whole biological community kind
of said, “No, we can’t let this happen. We can’t let NIH get involved.” They weren’t so worried about DOE, because
DOE was already this kind of Byzantine, mostly intramural thing. And there wasn’t really any way for an NIH
researcher to get into the DOE. I mean, I came in straight out of the blue. I wasn’t even — didn’t even have a lab when
I first started talking to them. And so when NIH started getting interested,
then people freaked out and there was almost a letter every week to Science or Nature saying
why this is a bad idea, you know. I think if it had been put to a vote, it probably
would have been 99 to 1. But fortunately, Jim Watson was involved. And he was pretty charismatic, and he went
— not the word that everybody would have used to describe Jim, but anyway — he went
and talked to a lot of congressmen and it became a separate line item, as I recall. There was also, around that time — I think
it was in ’87 — there was an NRC — a NAS/NRC committee — to look into this. And Maynard and I were outside consultants. We weren’t part of the committee. Maynard became part of the committee later. But that was — I think that was part of the
process of figuring out whether you could do it or not. And, you know, eventually NIH felt that they
couldn’t let Department of Energy run what would be the biggest and best biology project
in decades. And I think that — so they started working
together. And then kind of — then it seemed like — DOE
was, I think, more technology-oriented, but NIH, mostly their strength was model organisms
and mapping. It became a model organism mapping project,
for better or for worse. And I think a lot of the mapping stuff was
distracting. But they eventually got back to sequencing. Sequencing was in the — so the first round
of proposals — I was involved in three of the NIH and two of the DOEs. So, in addition to mine, there was another
one with Ray Gesteland’s lab in Utah for the DOE. So, we were the two DOEs that I can recall. And then NIH — oh, four — Jen-i Mao [spelled
phonetically] was at Collaborative Research, which later became Genome Therapeutics. They were doing Mycobacterium leprae and
tuberculosis. And then Wally Gilbert was doing Mycoplasma,
which is different from Mycobacterium, at Harvard. And then at Stanford was David Botstein and
Ron Davis. And I’ve had many interactions with both
of them — all of these people — since then. And then the fourth one was Eric Lander. So, he was — it was his first grant, my second
grant. And these were all in the same stack, and
they all got funded. And he did his on mouse, and it was mostly
mapping. It was like, four out of five specifically
for mapping. All the others — three from NIH and two from DOE —
were all about sequencing. And then he had one sequencing section, which
was written by Lindy Garant [spelled phonetically] and me. And even that was only partially sequencing. It was mostly cDNAs and yeast, something like
that. Just a little bit of sequencing in it. But that — as soon as we got the grant, that
started to, like, take over. At least it would do all the mapping; we would
do it by sequencing if we could. And then in groups from mouse to human. So, that was — so I would work with them
because of proximity, but I would work with the other teams because I was interested in
sequencing. And it ended up most of the sequencing was
done with my methodology. There were — all five — five of the six
published using the multiplex sequencing at some point. But the one that did the most, that was Collaborative
Research, which became Genome Therapeutics — they actually sequenced several whole genomes
with it. So I knew Eric well before that. He was in the Harvard bio lab. So, when I was a graduate student, he was
kind of a visiting professor from the business school, I think. And he was hanging out with Peter Churvis
[spelled phonetically] and Bill Gelbart. Bill later went on to have a major role in
databases, even though at the time he was a fly geneticist with, I think, very little
interest in computers. But anyway, Eric would hang out with Bill
and learn genetics. And then I went off and did my post-doc and
didn’t have much contact with him again until I came back as an assistant professor. Then we started talking again. Around that time, he was doing a lot of work. He had done a series of rotations, you know,
even though he was a lecturer at the Harvard Business School, he had worked not only with
Bill and Peter, but also with Bob Horvitz and with David Botstein. And I think the one with David was the most
successful of his rotations, where he did a lot of math behind ideas that David Botstein
had had, I think, for years. And they, together, were able to implement
these things in either math or software or both. And so, he was well-known already, even though
he had not done any experiments. He may have still not done any experiments,
as far as I know. But anyway, so when we developed the center
together, I think he was one of the biggest. And most of the other ones were picking a
human chromosome and, I think, it was smart to pick a whole genome because, as it turned
out, whole genome approach was a better approach than the chromosome approach. And in the end, the human genome was done
by — whole genome. And, in fact, in a way, we weren’t aggressive
enough. It should have been whole genome shotgun from
the beginning, which was what I was advocating all the way through. But at that time, around late ’80s and early
’90s, almost every genome meeting you would go to, all the technologists — I mean, there
were very few technologists; all the people who would call themselves technologists
were aiming for one-X coverage. That was, like, the holy grail: one-X coverage. And I would just shake my head and say, “Are
you kidding me? The holy grail should be bringing the cost
down so you can do any X coverage you want to.” Right? But that was just a very unpopular sentiment
at the time, even though Fred Sanger and them had already developed shotgun sequencing. But to some extent, people considered — Fred
was not a major component of these conversations. I mean, he was on his way to retirement, and
I think most people considered him, like, a freak of nature that could do things nobody
else could do, and just — who knows. And he was much more of a technician than
a leader, in a sense. I mean, he certainly had leadership skills
in a sense of setting an example, but he didn’t have a big lab. He had, like, typically one technician in
the lab and the occasional visitor his whole career. Even though he got two Nobel prizes. But anyway, he would come to visit our lab,
he would come sit in the room that I worked in, and he would spend all his time talking
to the technician in the room. They’d be talking about how much TEMED they
used and, you know, what percentage gels and all this sort of stuff. That was — he was very focused. So, anyway, so Fred’s shotgun sequencing did
not have a big impact at the beginning. And one by one, each of these labs had to
learn the value of 7-x coverage or more. Model organisms were great. I was super — in fact, I think I was the
first one at the very first — every — all the first three meetings, I said, “No, we
need genome sequence comparisons.” Comparisons means you have to have something
to compare it to. You have two genomes. Some will be closer to it, some won’t. I said, “We might as well start with small
ones so that we get some payoff early on rather than spending 15 years and then hopefully
having a payoff at the end. Let’s get a bunch …” And that was not popular,
either. So, most of the things I was proposing were
not super popular. But what did happen is it was almost — in
order to deal with critics between ’87 and ’90, between the DOE and the NIH starting,
it was very politically adept to bring in model organisms. That’s what helped change David Botstein from
a critic into a supporter of the sequencing. He was always high on genomics and genetics,
but to specifically support the genome project, it helped to have yeast as part of the game. And then worms were obvious because Sulston
and Waterston had already done a lot of mapping, I mean, basically had all the clones in hand
to go. And that’s also what influenced the enthusiasm
for mapping, is because it worked so well on C. elegans. It didn’t work so well in humans. And so, those were obvious model organisms. Flies came in surprisingly late in the game,
and then bacteria got in as an organism. It was very funny that we called it THE bacterium,
you know? And it meant a whole bunch of bacteria: Mycobacteria, E. coli, Haemophilus, Mycoplasma. Both Wally’s lab and Craig’s and so forth. So they were all lumped in together as if
it was one organism, because they were so small. Yeah. So yeah, he was very significant, I think. So when those 1990 grants went in, I worked
with Jen-i Mao, and Ron Davis and David Botstein were together. And then Wally and Eric. The Botstein/Davis grant had — and it was
a beautiful grant. I think it was the best of all the grants
that I saw. It got the worst score. I mean, it was essentially rejected. But it eventually — it was overridden. But it projected pretty accurately, in a detailed
manner, all of functional genomics. It was all in that grant. You know, just all this beautiful biology
and technology, and with yeast as the perfect way to test it out. It was at least a eukaryote, which was better than
all these bacterial genomes. It was just a beautiful grant and just got
trashed. I mean, Maynard and I were there for the site
visit, and we could tell from the very beginning of the site visit that they had just come
in loaded — with loaded shotguns ready to take us out. And Ron — so at the time, there were two
ways of doing microarrays. One of them was Pat Brown’s, the other was
Affymetrix. And Ron had bet on the Affymetrix, which I
think in the end was the correct bet — in fact, totally correct bet. It was not arrays at all for sequencing or for
RNA analysis. Still useful for SNP analysis. But anyway, Ron had — just went on to do
all kinds of innovative technology. And that — the legacy of that center was
Ron’s technology center, which is still in existence today. And just so many interesting technologies
have come out of there. Well, like I said, I mean, he was big on nationalism. He was persuasive on getting Congress to vote
for it, or getting NIH up to speed as — at that time, it was not an institute, it was
a center. And I think he led it to something that was
a serious enough effort that it had to become an institute. And I think he recognized the value of the
genome from very early on. I don’t remember him being one of the doubters
that had to be convinced. Yeah. He seemed pretty compelled from the beginning. But other than that, I don’t know. I don’t really remember much about his role. I think Cold Spring Harbor was one of the
places where annual meetings would occur. And Cold Spring Harbor was also some of the
grantees. But I don’t think that had anything to do
with why he did it. He felt it was sort of the next big thing
to do. Cold Spring Harbor had a good meeting structure
already for courses or meetings, mainly during the summer time, including meetings that he
had gone to when he was a postdoctoral fellow. And he announced some of his work on the DNA
structure, in the early days of molecular biology. So, he had a warm feeling with Cold Spring
Harbor and became the head many years later. And it had an infrastructure for having exciting
meetings — relaxed meetings. So, I think that and Santa Fe became the two
main meetings. There were also meetings held at Hilton Head,
GSAC – the Genome Sequencing and Analysis Conference. They tended to be a little more focused on the
sequencing a little bit earlier, while the other meetings were mapping and sequencing. But I went to all those meetings for a while
until they got into the — heavy into production sequencing, and I kind of lost interest and
went to do other things. Yeah, very, very rarely. Almost not at all. I mean, I was not an NIH grantee, right? So, I was a DOE grantee — I still am, since
1987. I was not a — I never got an NIH grant of
my own until 2004. The CEGS grant, yeah. So, I think that might have been part of it. I mean, I, in a certain sense, had one through
my collaborations with collaborative research and Stanford and MIT and Wally’s at Harvard. So, in a certain sense, I had four. But I got zero dollars out of any of those
four. I mean, to my lab, interestingly. So, Eric got 19 million, and I got zero from
that first grant. So, it shows how good a businessman I was. [laughs] But I was very dedicated, and I worked
hard on any one of those who wanted me to work on it. But it didn’t land me as an obvious adviser
to NIH. So, I was an adviser to the NRC, the NASNRC
in ’87. But I didn’t really get that heavily involved
in any NIH efforts until I got the CEGS. And also, when the $1,000 genome launched
roughly around there during Zerhouni’s era. I think Francis got the bug of — they weren’t
called grand challenges. What were they called? Road map. Anyway, there were like, 11 different possible
road maps. And I showed up — I did advise on that one. I think it was around — Resequencing the biome? Is that — No, no. That was one of them, but there was another
one on technology. Oh. And I was — The big one, right? It was one of the few times I was pretty emphatic. Actually, I was pretty emphatic all the way
through the project. I mean, my meek version of emphatic — which
was they should invest more in technology, because it was my gut feeling that the return
on investment in technology would be bigger than the return on investment of the sequence
itself. Both of which were significant, but I think
once they got serious about technology, the price plummeted by 3 million-fold. And almost none of that price plummeting was
due to — was traceable to anything that happened in any of the genome project. It — in other words, there were a lot of
things, a lot of technology development during the genome project, kind of minor, incremental
things. But they all just went out the window as soon
as next gen came in, because the only thing that we really use from the old days was shotgun
sequencing which we use big time in Next-gen, but shotgun sequencing predated the genome
project — and in fact, was mostly ignored in the first four years of the genome project. So anyway, I think that there should have
been more technology development from day one, and finally, that was in 2002 or 2003
when they started thinking about the $1,000 genome project, and even $1,000 was too radical —
they had to couch it in terms of a $100,000 project and a $1,000 project. Yeah. So, I probably like Craig more than most — people. Throughout the years, we keep kind of tackling
the same problems, I think mostly independently. So, we both were attracted to small genomes
early on, and you know, arguably the collaborative research did on the first small genome which
was the helicobacter — but Craig and Ham did the first really peer reviewed one which
was Haemophilus, which was 1994/1995. So, that is one example; we also both got
the photosynthesis bug about the same time, as far as DOE. We both also did — Craig and I; and Ham
Smith is part of this, because he was essentially second in command to Craig. Ham did a lot
of the real technology side of things, and was the one who made all the shotgun libraries,
and he was the one who proposed synthesizing a genome and he got DOE funding for that,
even though they had not put out an RFA for it, and so then I did the same thing about
the same time. And the personal genome project was very similar
to Craig sequencing himself; the difference was I got the approval and he did not. But there were the same kind of ideas that
you could have an identified individual. He started out not identifying himself but
then later did. We started out identifying ourselves. Anyway, there were many times where we would
do similar things. The biggest difference that he used to acknowledge
publicly is that I was much more into technology development. He would say at meetings, “George is about
technology and we’ll use it”, which was I thought very gracious of him, and I tried
to return the compliment as much as I could. And so, you know, and he was, he was an early
adopter of everything, and the thing that really distinguished him early, the thing
that really was the first thing, was in 1987 I think, he had an NIH lab full of ABI equipment. He was a protein chemist, and so mostly ABI
equipment was protein, and he had a big budget, but very little room for people. I mean a typical NIH budget; you can get equipment,
but not people. So, he had to have a like, really automated
equipment, so when they came out with a DNA sequencer, he did not really feel that he
absolutely needed one but he felt, why not? It is another ABI machine, so he got one of
the first ones. He even got one before Lee Hood did, even
though Lee’s lab had developed it, they did not have the budget to buy one, as I understand
it, but Craig did. He got one — he was a pretty good chemistry
know-how, so he like optimized protocols a little bit, and then he did something pretty
clever; he is still at NIH. He ordered the cDNA clones from Clone Tech. So, cDNA was a good choice first of all, because
it was — got rid of all the junk DNA problems, so 100 times smaller problem in principle. He then sent the cDNA clones from Clone Tech
to Collaborative Research which is where I had my center, and they would take in laundry. They would do contract work for other people. So, they made plasopress [spelled phonetically]
for him, and would send the plasopress to NIH where ABI technician together with one
of his technicians would run it on the ABI machines, he essentially was using a third
company to do sequencing — and then he would run it without — I mean he would run it without
annotation, without proof reading straight into NCBI. And so, the whole thing was like almost a
paper project. It is like Clontech, Collaborative Research,
ABI, NCBI, and to some extent you just have to, like, do a little quality
check to make sure everything is flying okay, and I thought it was brilliant. But a lot of people got — I was one of the
first people that got angry — before they even got angry about the patents, which was
not his fault, they got angry about quality, because a lot of the reads going in
were not even human. They were unreadable, they were — the good
ones had a three percent error rate, right? And so, a lot of people said, “Ah, it should
not even be in the database”, but other people — like, I think, Bert Vogelstein
and several of the cancer people — went in there and they poked around
and found a bunch of cool oncogenes, and cancer-related things like mismatch repair and so
forth; and so the people who had a prepared mind knew how to use it. So anyway, then the NIH did what a lot of
public and private institutions did — they started patenting these because it seemed like
it was a good thing, and I am not sure whether Craig — I do not think Craig initiated that;
I think he went along with it. And then that got press, and it was like ‘oh
this is horrible’, you know, like we are patenting a [unintelligible] genome, you know
these low-quality cDNA reads which did not have an application, and you really need to
have — if you have a patent, it has to be not obvious but useful. And a cDNA read was not — and other than
what people were doing which was searching databases with it, and so that got — the
interesting aspect of people getting upset about things, is that it usually results in
the opposite of what they want, right? The more you raise the alarm in public, the
more likely it is to get more money. So, three examples that I was very close to
were the recombinant DNA [controversy], that attracted so much attention that it basically sparked
investors to invest in Genentech and Biogen and Cetus
and Amgen, and so forth. It just, boom! Out of nowhere, they got excited about biotech
and then cDNA debacle and then the stem cell — for eight years, NIH was not funding stem
cells, so California raised $3 billion. I mean I am not sure there would have been
$3 billion spent for the whole United States on stem cells if there had not been this raucous. So anyway, so finishing off, you know, where
I feel Craig’s role in all this was, was that that was the start. I mean he was really not well known prior
to that, but then he formed these related nonprofit and for-profit, which were TIGR and HGS,
and that was a brilliant business deal right off the bat, because HGS funded TIGR. All TIGR had to do was give them the cDNA data.
So eventually Craig got tired of that relationship and he bought back TIGR so it could be an
independent institute. By that time he had lost the grants, and then
he — then I think he had this race for the genome which I did not think was, you know,
just using available technology; he had an inside track with ABI, which was a good thing
obviously. ABI has got a better chance of winning, fighting
with its customers, because it has an unfair [advantage] — so that was brilliant too, as business strategy. And then after the genome project was over,
then he got disinterested in human genetics, or sequencing, pretty much for a while, and
opted for synthetic biology. Using sequencing as leverage to get people
interested in his synthetic interests, you know, to sequence, you know, metagenomes and
things like that. But anyway, so in terms of the human genome,
I am not sure, it probably was a good thing to kind of force shotgun down everybody’s
throat. I mean he did that, I tried to do that, but
I was completely ineffective. He was very effective, and there was a period
of time where people were doubtful that you could do whole, you know, shotgun on even
a mouse. In fact, I was part of the paper on how to do that, a computational paper — Gene
Myers and James Weber from Marshfield did
a paper. I was a co-author on that, but they took my
name off at the last minute and I just — I do not know why, I regret it but — I believed
you could do shotgun on any size genome and anyway, so he forced people to agree — and
even after he did it — he went on to the mouse genome — there were still people arguing that it was
not feasible, or you know, and they asked — there was a PNAS paper written by Eric and,
I think it was, Bob and John, where they kind of, like, critiqued the whole process, and I
said, “Why doesn’t your critique address the mouse genome? Because there really
was not much mouse mapping or scaffolding at the time he did that, and that kind of showed
that you could do it”. And they just did not want to have any part
of that, and at the time, the reason they asked, or the reason Eric asked me was at
the time I was the only person who had both the Venter and I guess it was the Santa Cruz
golden path or golden gate, I cannot remember, version in my lab — I mean, the two
sides were not sharing sequences and they were both sharing them with me, and so Nature
asked me to compare the two. So, they wanted to know whether I felt that
the one was a derivative of the other, and in my opinion it was not because when I went
and got the Santa Cruz genome it was actually in shambles. And this is — I think it is documented
in our paper but not very aggressively. So what we did was, we went to NCBI, where they
had quietly put up an FTP site, with no fanfare whatsoever, with an alternative assembly of the
human genome. This is in 2001 when there is a bunch of papers
and we looked at the two and said, “Ah man, this NCBI one is so much higher quality, we
do not want the public to look bad, we want to put the best foot forward, we will just
kind of call it the public genome.” And I do not know if you saw the movie Contact, with
Carl Sagan — and they build this machine that allows interdimensional travel, and some
activists destroy it. I hope I am not ruining it — for people who
are listening to this, but there is a second machine that is built in some — this like
a rainy island in Japan. Anyway, that is what this is like. This was like the second genome, and the reason
the Santa Cruz one was so messed up, well first of all they rushed it with the small
staff. Basically, one guy, who was a hero — who
was painted as a hero. And — but also there were some bits that
were flipped, that made the assembly hard, that were NCBI discs that they had misinterpreted
what they meant, but anyway. We compared the two and they looked pretty
good, they were very similar, but clearly independent, so anyway, the critics did not
want to hear that and did not really care. And I think the answer is now everything is
done by shotgun. De novo sequencing, and resequencing.

Male Speaker:
Right, right, right. And do you think, if there had not been, you know, Celera in 1998, would
the public project have taken longer and cost more?

George Church:
It is hard to say because I think the main innovation there was the 3700 instruments. And it is possible that would have like galvanized
everybody to go forward. It probably would have taken a little bit
longer, but when you consider the goal was a polished genome, I think we might have gotten
a polished genome faster, might have gotten a draft slower, so rather than a draft in
2001 and a polished one in 2004, we might have gotten a polished one in 2003; actually,
faster rather than slower. And the polished one got almost no attention
whatsoever, I mean, and the other thing that might have happened, is it might not have
been such a panic — we were just warming up to the idea of technology development —
if there had not been a panic, then it might have been fewer 3700s bought and more
alternatives sought, and then — and we might have tried to finish the genome rather than
produce a draft. I think all of those were possibly unintended
consequences, negative unintended consequences of a race. So — and quite a bit I think was stimulated
by — now, there was a round of review where there were five centers: one in Cambridge,
Massachusetts; one in St. Louis; one at Baylor in Texas; one at Collaborative Research; and
one at TIGR. And they dropped the two that were not classical
academic laboratories, right. So, TIGR was the non-profit, but it clearly
was part of a for-profit institution — and CRI was definitely for-profit. They dropped us too, and I thought those could
have resulted in more innovation or at least a different way of thinking, and also that
stimulated Craig to go off and do his own thing. So, I think that decision of dropping from
five centers to three was a poor management decision; maybe with hindsight. But I think if there had not been a race,
there might have been more — the one-upmanship would be who has got the better genome rather
than who has got the first genome, and we still do not have, you know — it is 16 years
later, and we still have not finished a single human genome anywhere, ever. I hope to fix that soon, but — no one has
really encouraged — no one has been particularly encouraging of that, right? So, you have to kind of do it on your own. Yeah, I think we should remind ourselves that
it is not finished, and some of the most interesting parts of the genome, I think —
the centromeres — could easily be involved in aneuploidies, in abortions, low birth weight,
cancer; all of that has to do with segregation of chromosomes. That is what centromeres
are all about. So I think it can be done. In fact I think the method that will get the
centromeres and the other gaps will be the method that will displace all the other
methods, just like Next-Gen did. Yeah, so I knew Francis from his work on chromosome
jumping, clone jumping, and also from cystic fibrosis and a number of things he had done
before he came. I did not expect he was necessarily going
to be a fantastic manager, that you know, it just seemed like he was a regular post
doc like I was, and — but I know Eric was quite in favor of it and I think convinced
Francis to try for the position. And it turned out, I think he was really the
leader that we needed. He was sufficiently into science that he knew
what was right and wrong, and could help steer things, but was not so micro managing that
he would interfere with things. I think overall the NIH staff played a bigger
role in those grants than almost any NIH staff in history, would be my guess. I could be wrong, but in terms of extramural
staff, they were often co-authors on papers, I mean that is pretty deep involvement, and
Francis kept very active research of his own, which I think was great. Now, you know, I
do not know what it would have been like with somebody else. Clearly, you know, I think Jim might have
been the perfect person to get Congress to vote for it because he just, I mean he just
had more credentials than Francis, I mean he was a Nobel Prize winner at a very young age,
and head of Cold Spring Harbor and so forth and he had some credentials. So, if Francis had come in at the time Jim
had come in, that probably would not have been so good, even though it would have been less rocky
— there would not have been the rocky exit. But I think Francis is the right one for that
point onward, yeah. So, I have already mentioned some of the false
starts were the mapping — the mapping turned out to be less interesting than it seemed
at the time, and consumed almost all of our resources for the first five years. So even though we ended up ahead of schedule,
we would have been maybe a little bit more ahead. And then the race I consider a false start
because it discouraged us from going for quality and it discouraged us from technology development. I think the Next-Gen sequencing — so, who was
involved: Sydney Brenner developed this MPSS method, which used beads, and I think
the ultimate answer was in flat surfaces, not beads, but he was clearly a pioneer, and
around 1994 he published, or patented, MPSS and formed a company called Lynx. Lynx then licensed some of my technology
that is a multiplex tagging strategy. And I consulted for them briefly and then
they merged with Solexa and Illumina, and at the time, what was interesting, is when
those three companies merged they did not use any of their technologies, so Illumina
was basically a bead SNP company. Lynx was a bead ligation company, or cleavage
and ligation, and then Solexa was a single-molecule sequencing company in its original
incarnation. But basically, throughout all of this, they
maybe kept a little bit of the microscopy idea behind Lynx, and then in-licensed some
chemistry and amplification methods and then just started running with it, and there is
still some dispute as to who invented the stuff that they had licensed from somebody,
maybe other than the inventor. So, Jingyue Ju was
clearly a pioneer in all of the Next-Gen, in developing good reversible terminators
— so getting peer-reviewed articles, I think we have to remember, the peer review
can get so entangled with patents — reversible terminators for fluorescent sequencing
and later reversible terminators for nanopore sequencing, which I think is still in the works,
but it is promising. So, the whole idea of sequencing by synthesis
was greatly helped by having these terminators. What else? For quality, most of the quality came in with
haplotyping, oddly enough. And because in a way it could have gone down
with short reads, because mostly errors have to do with placements. If you try to put something that is mildly
repetitive onto a scaffold, it does not know where to go. But then haplotyping — and I think the first
really good haplotyping was Complete Genomics in 2012, where they figured out how to, like,
break five to ten cells up into, like, fractional genomes, into 300-odd separate wells, and then each
of those would be read, which was kind of a mixture of 100 kb fragments, and it was
as if you had done BACs — which, by the way, was also an innovative thing going on in the genome
project: BAC cloning for sequencing. The YACs were a bit of a distraction, but BACs
were really the thing. We had an excellent BAC library from Pieter
de Jong, very early on, and for some reason it was sidelined, did
not use it. I think it was because it was Pieter de Jong’s genome, and so he was a
known individual, but we could have gotten IRB approval for that. Instead they tried to make a diverse set of
BAC libraries, which more or less failed, and ended up with a non-diverse single person
again, just as identifiable a single person, which I think is not necessarily a plus, but
anyway, the BACs were good; not having to clone in BACs is even better, because
there are some cloning artifacts you get and some sequences that were not easy to go into
BACs. If you just fragment up the genome and then put it into an in vitro amplification, you
end up retaining the whole genome. So anyway, that was 2012. Complete Genomics published a previous paper
in 2009, which was really the first, I think, a truly inexpensive genome, they had consumables,
meaning reagents and supplies and equipment amortization, all on the order of $1,500 per
person back in 2009. It ranged up to $4,000, but it was in that — it
was not quite a $1000 genome but — who else was — I think that is it for innovations
and false starts off the top of my head, you know. I think there is relatively little logic because
there are relatively few labs that do it, so in academic labs it is much more tempting
to be really an adopter. You get labelled as a technologist if you
are really a developer. And in industrial labs there is also a great
emphasis on incremental growth and in licensing things invented elsewhere. Typically, that elsewhere will be some kind
of hole in the wall technology group that developed one particularly cool thing, you
know, it might be a [unintelligible] group that developed Sequenase that was
— that made its way into the genome project for a while, or somebody developed electrophoresis
and that got brought into ABI. So, it is a lot of this stuff — they were
not labs that were really full time doing technology development, they might be doing
mostly, you know, like chemistry or mostly biochemistry and they would have an insight
and then get it licensed. So, there were a few labs that were full time
technology obviously. Ron Davis’ lab, Lee
Hood’s, and mine were arguably three of those labs. It was hard to be academic or industry; you
had to be kind of at the interface a lot, which is hard. In terms of pattern — so patterns within
that — the general pattern — most technologies get displaced; sometimes without a trace other
than the history. It might last about a decade; a decade is
a good length of time for technology to last, so our multiplex sequencing lasted about 13
years. Sanger’s radioactive sequencing lasted a
similar period of time but — depending on how you count, the fresh start with fluorescent
sequencing it lasted a bit longer, but it all kind of depends on where you — I would
consider almost a totally new method, right because, you know, you are not taking the
plates apart and slapping an x-ray film on it, you know. There is just a whole lot of differences,
so you are running everything in one lane — rather than four lane — I mean, just so
different. The only thing they had in common was — even
the [unintelligible] were not the same, there were the dye terminators and other innovations. So — so, you know, 10 years is kind of — it
takes about five years to go from a concept and maybe a preliminary paper to an instrument,
that is a rule of thumb, then that whole technology will last about a decade, and another will
come in. Sometimes we will build on top of it like
ABI did, sometimes we will completely displace the sequencing until there is not almost a
trace left of the electrophoretic era, you know, and probably the next one might be nanopores. That does not quite fit the pattern perfectly
in the sense that it has taken a whole lot more than five years to get from a concept
— from, like, an ’88 concept to barely working in 2016 — so the first human genome was essentially
sequenced with nanopore in 2016, first bacterial maybe a year before that. And that is kind of the gold standard now,
is can you sequence a human genome at fairly high accuracy. So that took, you know, from 1998 to 2016
to arrive, and I do not know how long it will last, but I think it has the highest probability
of displacing the current big iron because, you know, in one shift you could benefit — well
you could already fit eight million sequencers, nanopore sequencers, and that is just as cheap
and reusable, so you can imagine, and I could scale that up to billions the way Ion Torrent
did. So anyway, those are patterns that I can see. Well, part of it was single molecule is harder
than multi-molecule, so it was pretty noisy, and it really required a complete re-do, I mean, both in the way people thought about
electrophysiology, so when I did the first patent on nanopores I was thinking patch-clamp. So I went in there and patch-clamp does not
scale at all, so the idea of changing — I mean just the idea of changing the sequencing made
me think: from electrophoresis to a flat format, and from
patch-clamping this sort of artisanally
crafted pore to something where you could have millions on a flat surface. And in both cases we were trying to ape, mimic,
microfabrication in electronics, where we have these super flat layers and you put — you use some
combination of photolithography and self-assembly to put down
the nanopores. But anyway, even with hindsight, that probably
is going to take a long time. And it probably is not that big of a loss, because even with all the computer technology we have right now, a great limiting point for the Genia nanopores is dealing with the data, because you are getting terabytes of data from a little chip in short periods of time, and so we need to — if we had had those terabytes of data back in the '80s, it would not even be conceivable what to do with them, right? Now at least we can kind of trim it off; we can start building field-programmable gate arrays and GPUs right onto the chip and do a lot of the data processing on the chip. A lot of it is — if you do it really cleverly,
it is incompressible data. Yeah, so the short answer is I think the $1,000
Genome was distinctly better than everything else for technology development, and I was not a big fan of chips — even though I was one of the first adopters of chips for a few demonstration articles, I was never — to me it was — I was just playing with it to see if there was something there. The main thing that I got out of chips was stripping the DNA off the chip, so I used them in a very perverse way: rather than having a lifely [spelled phonetically] order, stripping them off and sort of building big pieces of DNA out of them, or using them for making libraries, and to this day that is the only thing I use chips for — synthetic biology, not analytics, which was the point of the NHGRI efforts. I think NHGRI did not get on to functional
genomics early enough. I think the ENCODE project, once it got started, was terrific; in a way, it encouraged technology development — again, not as aggressively as the $1,000 genome program. I think the CEGS grant program was the best of the best. It encouraged out-of-the-box thinking, it encouraged interdisciplinary multi-teams — not just multiple collaborators, multiple people, but actual multiple teams, put together innovatively — and it encouraged innovation, which is something you do not see often enough. And as it happens, the CEGS grant happened
to fund me for next-gen sequencing before the $1,000 genome grant program started, so when it started I did not apply for the $1,000 genome grants because I already had the grant, and I did not want to do double dipping. But I thought the two of those were complementary, and if I had to choose, I would have at that point picked the CEGS — I mean, I had already picked the CEGS before I knew the $1,000 genome grant program was going. But I still would have picked it — I think it was more intellectually exciting. I went to all of the — almost all of the $1,000 genome sequencing technology meetings; I was on the grant review committee — I was chair of the grant review committee for a few years — and I loved them, you know, because they were full of physicists and engineers. But the CEGS meetings were full of, you know, innovation and biology, so they were very complementary. But I think those were the two real jewels
in the crown — not just in NHGRI but in all of NIH. The only things that come close to those two programs, in my opinion, are the Transformative Awards and the Pioneer Awards, because they
allow you to do things that would not normally get funded in a particular institute. Most of the really cool things that people
celebrate, or at least that I celebrate, are these interdisciplinary things that cut across
all the NIH institutes. Well, like I said, they had it sitting on
their plate with the Botstein-Davis grant and the fact that the grant got rejected by peer review, fair and square — just the wrong peer review — and then it got re-instantiated, but greatly reduced in budget. I think that was the first sign, and then when I was on the grant reviews for some of the genome — early genome project grants — we were given strict instructions: no biology. I think that was partly to develop a distinct portfolio for NHGRI relative to the other NIH institutes, and later they started having, like, joint grants; that was brilliant, it allowed more biology to come in and it also effectively created a bigger fund for NHGRI-related projects. But I think it was partly that zoning away from the other institutes that prevented NHGRI from traditional [unintelligible]. And there were always clever grantees who would figure out how to sneak it in, but — and it has certainly been quite healthy now for many
years, but it took a while. So, nanopores in terms of technology; in terms of biology and technology, I would say in situ sequencing and synthetic biology. I remember we were in a CEGS meeting at Stanford — it was one of the first CEGS meetings — and Francis was there, and there was a Q&A period where people got up and asked Francis questions, and I think Roger Brent asked a question: within NIH, would NHGRI consider funding synthetic biology? And Francis gave a pretty quick answer, which was "no." [laughs] I raised my hand and said, you know, well, what if we had a program where you would do — make targeted mutations to see what variants of unknown significance — what their physiological effects would be? So that would be sort of synthetic biology, but it would be testing the hypotheses flowing, you know, out of the genome project. And he gave a quick answer, which was "yes." So, it is really kind of the way it was
framed, and that was the basis essentially for our second CEGS. So, the first one fell into sequencing, starting in 2004, and the second one was on testing hypotheses using genome engineering, which we proposed with zinc fingers; by the time we got the grant, we were already into TALENs and [unintelligible]. So, in both cases we kind of exceeded what was going on. So anyway, I think synthetic biology is going to have more and more impact on testing hypotheses and moving genomics — you know, you find them, and often, to really understand them, you have to fiddle with them. And then in situ sequencing, I think, is another one, more back on the analytic side, but working together with the synthetic side as we build more, you know, complex systems like organoids and organs, possibly for transplant, for testing hypotheses, testing drugs. If you could test the real thing — you would
have to understand how the genome plays out in every tissue of the body, and still many things are limited by the fact that we do not know what all the cells are making. You know, we do not have a cell atlas — that is a rallying cry that is happening now. I think it has to be a really good cell atlas, ideally in situ rather than, you know, approximating every cell as an isolated sphere not having any neighbors. In situ you get the non-random distribution of proteins and nucleic acids throughout the body — whatever cells surround it, you know; morphology is important. I think those are going to be the two big things: synthetic biology, testing variants so you know what you are up against [unintelligible] the clinic — and you can consider gene therapy as a branch of synthetic biology, or a sister of it — and then in situ sequencing, which hopefully will not take too long. Yeah, well, I think the main barrier to PMI's
success is sharing the data, not the quality. I think we do need to address quality. I think democratization does not necessarily result in lower quality, you know — for example, the quality of cell phones is much higher now than when only rich people could afford them. Now there are seven billion cell phones and seven billion people, roughly — not quite one to one. I think that raw data can be quite poor and
the consistency can be quite good. So, for example, PacBio, even though it has the worst-quality raw data, has the best contigs. And so, when we sequence, you know, de novo, we use PacBio, because Illumina results in 300 contigs and PacBio will put it into one contig per chromosome right away. And the consistency is pretty good. Same thing for the nanopores — I think one of their advantages is going to be the long reads. So long reads get you high quality, just
like haplotyping gets you good quality. Anyway, I think it is not quite democratized yet; when everybody's cell phone has a sequencer on it, then I will consider it democratized, because then you will be reading out your environment as you walk through life, and that will be reported out to the cloud, and sharing data is the key thing for medicine. I need to know, when I walk into this room, everything that is in the air, everything that is in my food — allergens, pathogens, non-pathogens, etcetera — and more than that, and I think we are heading
there. I mean, we have got sort of the $1,000 genome now, complete with interpretation and genetic counseling. It will probably be $100 soon because of companies like BGI and Illumina moving in that direction. They will only move as quickly as competition forces them to move, because they run a monopoly, and then the nanopores will be pushing both of those out of the way with, potentially, you know, $10 or less. Once it gets to a certain low level, it becomes monopolized and it is free. You know, Google Maps is free; a lot of Google services are free to the consumer. Some of them do not even require you to look at ads, right? I think that is — we are going there very, very quickly now. At this point, even at $1,000 you can imagine a lot of companies making money by making it freely available.
