Gas station without pumps

2019 February 17

Full-genome sequencing pricing


In the comments on Dante Labs is a scam, there has been some discussion on pricing of whole-genome sequencing.  There are a lot of companies out there with different business models, different pricing schemes, and subtly different offerings—all of which is undoubtedly confusing to consumers.  I’ve been trying to collect pricing information for the past year, and I’m still often confused by the offerings.

Consumers buy sequencing for two main purposes: to find out about their ancestry and to find out about the genetic risks to their health.

For ancestry, there is no real need for sequencing—the information from DNA microarrays (as used by companies like 23andme or ancestry.com) is more than sufficient, and those companies have big proprietary databases that allow more precise ancestry information than the public databases accessible to companies that do full sequencing.  The microarray approach is currently far cheaper than sequencing, though the difference is shrinking.

The major, well-documented risk factors for health are also covered by the DNA microarrays, but there are thousands of new risk factors being discovered and published every year, and the DNA microarray tests need to be redesigned and rerun on a regular basis to keep up. If whole-genome sequencing is done, almost all of the data needed for analysis is collected at once, and only the analysis needs to be redone.  (This is not quite true—long-read sequencing is beginning to provide information about structural rearrangements of the genome that are not visible in the older short-read technologies, and some of these structural rearrangements are clinically significant, though usually only in cancer tumors, not in the germ line.)

For most consumers mildly interested in ancestry and genetic risks, the 23andme $200 package is all they need.  If they are just interested in ancestry, there are even cheaper options ($100 from 23andme or ancestry.com—I have no idea which is better).

My interest in my genome is to try to figure out the genetics of my inherited low heart rate.  It is not a common condition, and it seems to be beneficial rather than harmful (at any rate, my ancestors who had it were mostly long-lived), so the microarrays are not looking for variants that might be responsible.  Whole genome sequencing would give me a much larger pool of variants to examine to try to track down the cause.  To get high probability of seeing every variant, I would need 30× sequencing of my whole genome.  If I thought that the problem was in a protein-coding gene, I could get 100× exome sequencing instead.

The problem with whole-genome sequencing is that everybody has about a million variants, almost all of which are irrelevant to any specific health question.  The variants that have already been studied and well documented are not too hard to deal with, but most of them are already in the DNA microarrays, so whole-genome sequencing doesn’t offer much more on them.  Looking for a rare variant that has not been well studied is much harder—which of the millions of base changes matters?

The popular, and expensive, approach in recent genomics literature is to do genome-wide association studies (GWAS).  These take a large population of people with and without the phenotype of interest, then look for variants that reliably separate the groups.  If there are many possible hypotheses (generally in the thousands or millions), a huge population is needed to separate out the real signal from random noise.  Many of the early GWAS papers were later shown to have bogus results, because the researchers did not have a proper appreciation of how easy it was to fool themselves.

Earlier studies focussed on families, where there is a lot of common genetic background, and each additional person in the study cuts the candidate hypothesis pool almost in half.  To narrow down from a million candidate variants to only one would take a little over 20 closely related people, since 2^20 is about a million (assuming that the phenotype was caused by just a single variant—always a dangerous assumption).  I can probably get 4 or 5 of my relatives to participate in a study like this, but probably not 20.  I don’t think I want to pay for 20 whole-genome sequencing runs out of my own pocket anyway.

I have some hope of working with a smaller number of samples, though, as there has been an open-access paper on inherited bradycardia implicating about 16 genes.  If I have variants in those genes or their promoters, they are likely to be the interesting variants, even if no one has previously seen or studied them.  Of course, the combined size of those regions means I’m likely to have about 80 variants there just by chance, so I’ll still need some of my relatives’ genomes to narrow down the possibilities, but 8 or 9 relatives may be enough to get a solid conjecture.  (Proving that the variant is responsible would be more difficult—I’d either need a much larger cohort or someone would have to do genetic experiments in animal models.)
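To get a feel for those numbers, here is a rough back-of-the-envelope sketch in Python; the per-gene region size and the assumption that each informative relative halves the candidate pool are illustrative guesses, not measurements:

import math

# Rough estimate of chance variants in the candidate regions and of how many
# relatives it takes to narrow them down.  All numbers here are illustrative
# assumptions, not measured values.
genome_size = 3.2e9            # haploid human genome, in bases
variants_per_person = 1e6      # "about a million variants" per person
candidate_region = 16 * 16e3   # 16 genes plus promoters, assumed ~16 kb each

chance_variants = variants_per_person * candidate_region / genome_size
print(chance_variants)         # about 80 variants expected just by chance

# If each informative relative cuts the candidate pool roughly in half:
relatives_needed = math.ceil(math.log2(chance_variants))
print(relatives_needed)        # about 7 in the ideal case; 8 or 9 allows some slack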

How expensive is the whole-genome sequencing anyway?  It can be hard to tell, as different labs offer different packages and many require more than the advertised price.

A university research lab like UC Davis will do the DNA library prep and 30× sequencing for about $1000, but not the extraction of the DNA from a spit kit or cheek swabs.  That is a fairly cheap procedure (about $50, I think), but arranging for one lab to do the extraction and ship to another lab increases the complexity of the logistics, to the point where I don’t think I’d ever get around to doing it.  Storing the sequencing results (FASTQ files), mapping the reads to a reference genome to get BAM files, and calling variants to get VCF files add to the cost, though cloud-based systems are available that make this reasonably cheap (I think about $50 a year for storage and about $50 for the analysis).  Interpreting the VCF files can be aided by using Promethease for $12 to find relevant entries in SNPedia.
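For concreteness, the analysis path from raw reads to variants looks roughly like the sketch below; the tool choices (bwa, samtools, bcftools), thread count, and file names are my assumptions about a typical pipeline, not what any particular lab or cloud service actually runs:

import subprocess

# Minimal sketch of the FASTQ -> BAM -> VCF path, assuming bwa, samtools, and
# bcftools are installed and the reference genome has been bwa-indexed.
# File names are placeholders.
ref = "GRCh38.fa"
r1, r2 = "reads_R1.fastq.gz", "reads_R2.fastq.gz"

def run(cmd):
    # Run one shell pipeline; check=True raises if the last command fails.
    subprocess.run(cmd, shell=True, check=True)

# Map reads to the reference and sort the alignments (FASTQ -> BAM).
run(f"bwa mem -t 8 {ref} {r1} {r2} | samtools sort -o sample.bam -")
run("samtools index sample.bam")

# Call variants against the reference (BAM -> VCF).
run(f"bcftools mpileup -f {ref} sample.bam | bcftools call -mv -Oz -o sample.vcf.gz")

The resulting VCF file is what Promethease (or a manual SNPedia search) starts from; keeping the FASTQ and BAM files around is what drives most of the ongoing storage cost.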

Fullgenomes.com offers packages from $545 to $2900, with an extra $250 for analysis.  The most relevant package for what I want would be the 30× sequencing package for $1295, probably without their $250 analysis, which I suspect is not much more than a consumer-friendly rewrite of the results from Promethease (which can be very hard to read, so most consumers would need the rewrite).  Their pricing is a little weird, as the 15× sequencing is less than half the price of the 30×, while the underlying technology should make the 30× cheaper per base.  I’ll have to check on exactly what is included in the $1295 package, as that is looking like the best deal I can find right now.

BGI advertises bulk whole-genome sequencing at low prices for researchers, but never responded to my email (from my university account) trying to get actual prices.  A lot of other companies (like Novogene) also have “request a quote” buttons.  My usual reaction to that is that if you have to ask the price, you can’t afford it.  Secret pricing is almost always ridiculously high pricing, and I prefer not to deal with companies that have secret pricing.

Dante Labs advertises very low prices, but does not deliver results—they seem to be a scam.

Veritas Genetics offers a low price ($999), but that does not include giving you back your data—they want to hang onto it and sell you additional “tests” that cost ridiculously large amounts.  I believe they will sell the VCF file (but not the BAM or FASTQ files it is based on) for an additional fee.

Most of the other companies I’ve seen have 30× whole-genome sequencing priced at over $2000, which is a little out of my price range.

 

2015 April 23

Very long couple of days

Yesterday and today have been draining.

Yesterday, I had three classes each 70 minutes long: banana slug genomics, applied electronics for bioengineers, and a guest lecture for another class on protein structure.  I also had my usual 2 hours of office hours, delayed by half an hour because of the guest lecture.

The banana-slug-genomics class is going well.  My co-instructor (Ed Green) has done most of the organizing and has either arranged guest lectures or taught classes himself. This week and part of next we are getting preliminary reports from the 5 student groups on how the assemblies are coming.  No one has done an assembly yet, but there has been a fair amount of data cleanup and prep work (adapter removal, error correction, and estimates of what kmer sizes will work best in the de Bruijn graphs for assembly).  The data is quite clean, and we have about 23-fold coverage currently, which is just a little low for making good contigs.  (See https://banana-slug.soe.ucsc.edu/data_overview for more info about the data.) Most of the data is from a couple of lanes of HiSeq sequencing (2×100 bp) from 2 libraries (insert sizes around 370 and 600 bp), but some is from an early MiSeq run (2×300 bp), used to confirm that the libraries were good before the HiSeq run.  In class, we decided to seek a NextSeq run (2×250 bp), either with the same libraries or with a new one, so that we could get more data quickly (we can get the data by next week, rather than waiting 2 or 3 weeks for a HiSeq run to piggyback on).  With the new data, we’ll have more than enough shotgun data for making the contigs.  The mate-pair libraries for scaffolding are still not ready (they’ve been failing quality checks and need to be redone), or we would run one of them on the NextSeq run.  We’ll probably also do a transcriptome library (in part to check quality of scaffolding, and in part to annotate the genome), and possibly a small-RNA library (a UCSC special interest).
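For anyone wanting to check the coverage arithmetic, fold coverage is just total sequenced bases divided by genome size; the genome size and read-pair counts below are rough assumed figures for illustration, not the project’s actual numbers:

# Rough fold-coverage estimate: total sequenced bases / genome size.
# The genome size and read-pair counts are assumed figures, not measurements.
genome_size = 2.3e9                       # assumed banana slug genome size, bases

def fold_coverage(read_pairs, read_length):
    # Paired-end data: each pair contributes two reads of read_length bases.
    return 2 * read_pairs * read_length / genome_size

print(fold_coverage(250e6, 100))          # a hypothetical 2x100 bp HiSeq lane: ~22x
print(fold_coverage(130e6, 250))          # a hypothetical 2x250 bp NextSeq run: ~28x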

The applied electronics lecture had a lot to cover, because the material on hysteresis that was not covered on Monday needed to be done before today’s lab, plus I had to show students how to interpret the 74HC14N datasheet for the Schmitt trigger, as we run them at 3.3V, but specs are only given for 2V, 4.5V, and 6V.  I also had to explain how the relaxation oscillator works (see last year’s blog post for the circuit they are using for the capacitance touch sensor).
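The frequency calculation for the relaxation oscillator is short enough to sketch here; the threshold voltages are interpolated guesses for 3.3V operation (the datasheet only specifies them at 2V, 4.5V, and 6V), and the R and C values are just examples:

from math import log

# Relaxation oscillator built from an inverting Schmitt trigger: the capacitor
# charges toward Vdd until it crosses the upper threshold, then discharges
# toward 0V until it crosses the lower threshold.  Thresholds are interpolated
# guesses for 3.3V operation, not datasheet values.
Vdd = 3.3         # supply voltage (V)
Vt_hi = 1.8       # assumed upper threshold (V)
Vt_lo = 1.0       # assumed lower threshold (V)
R = 1e6           # feedback resistor (ohms), example value
C = 10e-12        # roughly the scale of a touch-sensor capacitance (F)

t_charge = R * C * log((Vdd - Vt_lo) / (Vdd - Vt_hi))
t_discharge = R * C * log(Vt_hi / Vt_lo)
frequency = 1.0 / (t_charge + t_discharge)
print(frequency)  # on the order of 100 kHz for these example values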

Before getting to all the stuff on hysteresis, I had to finish up the data analysis for Tuesday’s lab, showing them how to fit models to the measured magnitude of impedance of the loudspeakers using gnuplot.  The fitting is fairly tricky, as the resistor has to be fit in one part of the curve, the inductor in another, and the RLC parameters for the resonance peak in yet another.  Furthermore, the radius of convergence is pretty small for the RLC parameters, so we had to do a lot of guessing of reasonable initial values and checking whether the fit converged.  (See my post of 2 years ago for models that worked for measurements I made then.)
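For anyone who wants to try the fit outside gnuplot, here is a rough Python equivalent; the model form (a series resistor and inductor plus a parallel RLC section for the resonance peak), the data file name, and the initial guesses are assumptions for illustration, not the values from the lab handout:

import numpy as np
from scipy.optimize import curve_fit

# Loudspeaker impedance-magnitude model: series R and L for the overall trend,
# plus a parallel RLC section for the resonance peak.  File name and initial
# guesses are placeholders.
def z_mag(f, R, L, Rp, Lp, Cp):
    w = 2 * np.pi * f
    z_series = R + 1j * w * L
    z_resonance = 1.0 / (1.0 / Rp + 1.0 / (1j * w * Lp) + 1j * w * Cp)
    return np.abs(z_series + z_resonance)

freq, z_measured = np.loadtxt("loudspeaker_impedance.txt", unpack=True)

# As in the lab, the radius of convergence is small, so reasonable initial
# guesses for the RLC parameters matter a lot.
p0 = [8.0, 0.5e-3, 20.0, 20e-3, 300e-6]
params, _ = curve_fit(z_mag, freq, z_measured, p0=p0)
print(params)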

After the overstuffed electronics lecture, I had to move to the next classroom over and give a guest lecture on protein structure.  For this lecture I did some stuff on the chalk board, but mostly worked with 3D Darling models. When I did the guest lecture last year, I prepared a bunch of PDB files of protein structures to show the class, but I didn’t have the time or energy for that this year, so decided to do it all with the physical models.  I told students that the Darling models (which are the best kits I’ve seen for studying protein structure) are available for check out at the library, and that I had instructions for building protein chains with the Darling models plus homework in Spring 2011 with suggestions of things to build.  The protein structure lecture went fairly well, but I’m not sure how much students learned from it (as opposed to just being entertained).  The real learning comes from building the models oneself, but I did not have the luxury of making assignments for the course—nor would I have had time to grade them.

Speaking of grading, right after my 2 hours of office hours (full, as usual, with students wanting waivers for requirements that they had somehow neglected to fulfill), I had a stack of prelab assignments to grade for the hysteresis lab.  The results were not very encouraging, so I rewrote a section of my book to try to clarify the points that gave the students the most difficulty, adding in some scaffolding that I had thought would be unnecessary.  I’ve got too many students who can’t read something (like the derivation of the oscillation frequency for a relaxation oscillator on Wikipedia) and apply the same reasoning to their slightly different relaxation oscillator.  All they could do was copy the equations (which did not quite apply).  I put the updated book on the web site at about 11:30 p.m., emailed the students about it, ordered some more inductors for the power-amp lab, made my lunch for today, and crashed.

This morning, I got up around 6:30 a.m. (as I’ve been doing all quarter, though I am emphatically not a morning person), to make a thermos of tea, and process my half-day’s backlog of email (I get 50–100 messages a day, many of them needing immediate attention). I cycled up to work in time to open the lab at 10 a.m., then was there supervising students until after 7:30 p.m. I had sort of expected that this time, as I knew that this lab was a long one (see Hysteresis lab too long from last year, and that was when the hysteresis lab was a two-day lab, not just one day).  Still, it made for a very long day.

I probably should be grading redone assignments today (I have a pile that were turned in Monday), but I don’t have the mental energy needed for grading tonight.  Tomorrow will be busy again, as I have banana-slug genomics, a visiting collaborator from UW, the electronics lecture (which needs to be about electrodes, and I’m not an expert on electrochemistry), and the grad research symposium all afternoon. I’ll also be getting another stack of design reports (14 of them, about 5 pages each) for this week’s lab, to fill up my weekend with grading. Plus I need to update a couple more chapters of the book before students get to them.

2012 June 21

Crowdfunding genome project


Manuel Corpas is trying to get the genomes of 5 members of his family sequenced, so that he can release the data for public analysis and development of genome analysis tools.

[Crowdfunding Genome Project] Day 2: BGI Officially Agrees Sequencing « Manuel Corpas’ Blog.

Donations Sought For Whole Genome Sequencing: 40 Days To Go!

He previously released the genotyping of the same 5 members of his family, so you know that he is serious about doing a public release of the data.

2012 May 17

Performance of benchtop sequencers


I just read a recent article in Nature Biotechnology about the new small “benchtop” sequencing machines: Performance comparison of benchtop high-throughput sequencing platforms.  The authors compared the sequencers on de novo assembly of a pathogenic E. coli genome.

Unfortunately, since the article is published by Nature Publishing Group, it is hidden behind an expensive paywall ($32 for the article if your library does not subscribe).

The bottom line of the article is well summarized in the abstract, though:

The MiSeq had the highest throughput per run (1.6 Gb/run, 60 Mb/h) and lowest error rates. The 454 GS Junior generated the longest reads (up to 600 bases) and most contiguous assemblies but had the lowest throughput (70 Mb/run, 9 Mb/h). Run in 100-bp mode, the Ion Torrent PGM had the highest throughput (80–100 Mb/h). Unlike the MiSeq, the Ion Torrent PGM and 454 GS Junior both produced homopolymer-associated indel errors (1.5 and 0.38 errors per 100 bases, respectively).
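One quick way to make sense of those throughput figures is to convert them into implied run times; this is my own back-of-the-envelope arithmetic, not numbers reported in the paper:

# Implied run times from the abstract's throughput figures (per-run yield
# divided by per-hour throughput).  My arithmetic, not values from the paper;
# the PGM is omitted because the abstract gives no per-run yield for it here.
machines = {
    "MiSeq": (1600, 60),           # Mb per run, Mb per hour
    "454 GS Junior": (70, 9),
}
for name, (mb_per_run, mb_per_hour) in machines.items():
    print(f"{name}: about {mb_per_run / mb_per_hour:.0f} hours per run")
# MiSeq: about 27 hours per run; 454 GS Junior: about 8 hours per run.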

The MiSeq generally came out looking best on most of the measures, because of its low error rate and large amount of data.  Its short reads caused some problems in placing repeats, resulting in somewhat shorter contigs than when two 454 GS Junior runs were used.  The MiSeq was the only one of the instruments run with paired ends (none were run with mate pairs), and there are repeats longer than the read lengths, so none of the assemblies got down to one contig per replicon.

The error rate on the Ion Torrent was very high, though I understand that the company has come out with more improvements since the experiment was done, so the numbers may not be representative of results you would get today.

I look forward to a similar comparison of long-read sequencers later this year, when the Oxford Nanopore machine can be compared to the PacBio machine, and to the benchtop short-read machines tested in this paper.

2011 December 16

PacBio artifacts


I recently was given some PacBio read data to assemble to figure out a repeat-rich area of a genome, and I’m curious about some of the artifacts I saw in the data.  I had much more data than I needed to assemble the region of interest, so the artifacts are not important for this particular project, but I’m wondering if anyone has done an analysis of errors other than indels in PacBio reads.

I’m working with the fasta files that are output from the “Secondary Analysis” step of the PacBio pipeline, and I have no access to the PacBio tools themselves, so I don’t know if the artifacts I’m seeing are the result of the secondary analysis or are in the movies of the sequencing itself.

The first artifact I looked for was the obvious one that I expected: remnants of the adapter that had not been caught and removed by the secondary analysis.  There was a little of this contamination, but less than I expected.  Out of over 250,000 reads, only about 400 had adapter sequences detectable by megablast (and half of those were only detectable by looking for double adapters).  These numbers are so small as to be negligible.

An artifact I was not expecting was for several reads to have a “fold” in them.  That is, the sequence would advance along one strand in the normal way, then turn around and go back along the other strand.  I’ve not counted how many of these there were, but they occurred often enough to pose some risk of contaminating the assembly. [UPDATE 17 Dec 2011: about 3% of the reads map to both strands using megablast.  Of course, these reads tend to be longer, so are more enriched in the set of reads that I used for assembly.] I first noticed them because the gene I was sequencing had a strong A vs. T imbalance, and the suggested insertions were appearing with the wrong letter enriched.  When I looked at how the reads mapped to the consensus so far, I found megablast hits like this:

# Fields: Query id, Subject id, % identity, alignment length, mismatches, gap openings, q. start, q. end, s. start, s. end, e-value, bit score
xxx      ua-try16        89.70   2156    28      180     2       2088    3       2033    0.0     2321
xxx      ua-try16        87.97   1214    19      114     2160    3342    1997    880     0.0     1122

Note that the read (which I’ve renamed “xxx”) matches in the forward direction for the first 2000 bases, then matches backwards from there for the next 1200 bases. I had many such reads, and it took some reading for me to realize that these artifacts were caused by the “circular consensus” library preparation protocol, which ligates a hairpin onto each end of double-stranded DNA.  The PacBio analysis is supposed to recognize the sequence of the hairpins and use the two halves to build a better consensus, but it clearly failed to do this in many cases.
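Flagging these folded reads from the megablast tabular output is easy to sketch; the file name below is a placeholder, and the field order is the one shown in the header line above (a minus-strand hit has subject start greater than subject end):

from collections import defaultdict

# Flag reads whose megablast hits land on both strands of the same subject,
# the signature of a "folded" read.  File name is a placeholder; fields follow
# the tabular format shown above.
strands_seen = defaultdict(set)
with open("reads_vs_consensus.megablast") as hits:
    for line in hits:
        if line.startswith("#") or not line.strip():
            continue
        fields = line.split()
        query, subject = fields[0], fields[1]
        s_start, s_end = int(fields[8]), int(fields[9])
        strands_seen[(query, subject)].add("+" if s_start <= s_end else "-")

folded = [pair for pair, strands in strands_seen.items() if len(strands) == 2]
print(len(folded), "query/subject pairs map to both strands")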

[Figure: The technique for generating circular consensus libraries for the PacBio instrument. Image from PacBio literature: http://www.pacificbiosciences.com/assets/files/pacbio_technology_backgrounder.pdf]

I could probably extract more information from a set of reads by looking for the reverse complement mapping pairs and splitting the affected reads, so that the two halves could be independently mapped (in opposite directions) and both contribute to the consensus.

Incidentally, the megablast parameters are not right for aligning PacBio reads, as the gap openings should be much more frequent but the %identity much higher. I did not bother to figure out how to tweak the megablast costs to get better scoring, but in an alignment of a few thousand reads to my final consensus (confirmed by Sanger sequencing), using a different method, I got essentially no base errors, but short insertions and deletions were frequent. The average run length for matches was only 6.88, and the average run lengths for inserts and deletes were 1.28 and 1.12, respectively.  Inserts were about 3.82 times as common as deletes.  Of course, some of these statistics are artifacts of an alignment method that preferred opening gaps to mismatching bases and preferred many short gaps to fewer longer ones.  Still, this suggests a match-match transition probability of 0.85, match-delete of 0.03, match-insert of 0.12, insert-insert of 0.22, and delete-delete of 0.11 (which are not the parameters I used in making the alignment).
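The conversion from average run lengths to those transition probabilities is just the geometric-distribution relation p = 1 - 1/(mean run length); the sketch below redoes the arithmetic, splitting the probability of leaving the match state between inserts and deletes in the observed 3.82:1 ratio:

# Convert average run lengths into the transition probabilities quoted above.
# A geometrically distributed run with "stay" probability p has mean 1/(1-p),
# so p = 1 - 1/mean.
match_run, insert_run, delete_run = 6.88, 1.28, 1.12
insert_to_delete_ratio = 3.82

p_match_match = 1 - 1 / match_run        # about 0.85
p_insert_insert = 1 - 1 / insert_run     # about 0.22
p_delete_delete = 1 - 1 / delete_run     # about 0.11

# Split the probability of leaving the match state between opening an insert
# and opening a delete, in the observed 3.82:1 ratio.
p_leave = 1 - p_match_match
p_match_insert = p_leave * insert_to_delete_ratio / (insert_to_delete_ratio + 1)   # about 0.12
p_match_delete = p_leave / (insert_to_delete_ratio + 1)                            # about 0.03
print(p_match_match, p_match_insert, p_match_delete, p_insert_insert, p_delete_delete)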

