Gas station without pumps

2018 March 13

Cabrillo College Robotics

Filed under: Robotics — gasstationwithoutpumps @ 13:56

I just donated to the Cabrillo College Robotics Club, to help them send students to the NASA Swarmathon this year.

I am not affiliated with Cabrillo College in any way (except as a resident of the county which they serve), but I’ve been impressed with their recent attempts to better serve the community, with an extensive Extension program of non-credit courses and a new Makerspace. So I look for small ways to support Cabrillo College.

The Cabrillo College Robotics Club looks like a good opportunity. They are trying to raise $7000 in a month, which may be difficult given the resources available to community-college students.  The goal is to send the team to the NASA Swarmathon in April.  They won the 2016 NASA Swarmathon Virtual Challenge, and they are hoping to win the 2018 in-person competition, but first they need the funds to go there.


2018 March 2

Long-term care insurance

Filed under: Uncategorized — gasstationwithoutpumps @ 20:59

Every year, I get a letter from the State of California’s Department of Health Care Services urging me to get long-term care insurance, but the reasoning in the letter never makes sense to me.  They say

Most of us have fire insurance on our homes, even though only 1 out of 1,000 home owners will ever have a serious fire. Consider that 700 out of 1,000 of us over the age of 65 will need some type of long-term care.

OK, I’ve considered it, and their argument makes no sense at all. Insurance is a trade-off—you pay somewhat more than the expected cost of something in order to reduce the variance.  So for fire insurance, if you have a 1/1000 chance over 40 years of a $1 million loss, then the expected cost per year is only about $25, but people are willing to pay about $1000 a year to reduce that loss to a few thousand if it does occur.  The variance is now quite small (nearly constant cost whether or not there is a fire), instead of a high probability of no cost combined with a low probability of an enormous cost.
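To make the trade-off concrete, here is a small sketch using the illustrative fire-insurance numbers above; the premium and deductible are rough placeholders, not real actuarial figures.

```python
# Expected cost and spread of self-insuring vs. buying fire insurance,
# using the illustrative (not actuarial) numbers from the paragraph above.
from math import sqrt

p_fire_per_year = 0.001 / 40   # 1-in-1000 lifetime chance spread over ~40 years
loss = 1_000_000               # cost of a serious fire
premium = 1_000                # rough annual premium (placeholder)
deductible = 3_000             # "a few thousand" out of pocket if a fire occurs

def mean_and_sd(p, cost_if_fire, cost_if_no_fire):
    mean = p * cost_if_fire + (1 - p) * cost_if_no_fire
    var = p * (cost_if_fire - mean) ** 2 + (1 - p) * (cost_if_no_fire - mean) ** 2
    return mean, sqrt(var)

uninsured = mean_and_sd(p_fire_per_year, loss, 0)
insured = mean_and_sd(p_fire_per_year, premium + deductible, premium)

print("uninsured: mean ${:.0f}/yr, sd ${:.0f}".format(*uninsured))  # ~$25, ~$5000
print("insured:   mean ${:.0f}/yr, sd ${:.0f}".format(*insured))    # ~$1000, ~$15
```

The insured case costs far more on average, but the year-to-year spread nearly vanishes, which is the point of insurance.  When the insured event is nearly certain, as the letter claims long-term care is, the premium has to approach the full cost of care and that variance-reduction benefit mostly disappears.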

But if long-term care is almost always needed, then insurance is not the right vehicle for dealing with it—savings is.  The cost of the insurance would be more than you would need to save (otherwise the insurance company would make no profit), and if you turn out not to need the long-term care, you would still have the savings, but the insurance premiums would be gone.

The only justifications I could see for long-term care insurance are

  • if there is a very small probability of needing very expensive care, so the insurance reduces the probability of a disastrous outcome, or
  • as a forced savings plan, because people can’t be trusted not to spend the money that they have set aside for the possibility of long-term care.

Whole-life insurance used to be used as a form of forced savings, which turned out to be very profitable for insurance companies but a very bad deal for consumers. (Separate term insurance and other forms of savings were much better financially.)  So I don’t trust insurance companies to devise good forced-savings plans.

That only leaves the very small probability of needing very expensive care as a possible justification, and the letter sent out each year implies just the opposite—that there is a very high probability of needing long-term care.  I wish that they provided real information—like the probability distribution of cost of long-term care for the population, so that I could figure out how much risk was really being covered by the insurance.

2018 February 28

Twenty-eighth weight progress report

Filed under: Uncategorized — gasstationwithoutpumps @ 22:43

This post is yet another weight progress report, continuing the previous one in a long series that I started in January 2015.

My weight has fluctuated a lot in the last year, but I’ve stayed pretty much in my target range for February.

It is probably about time for me to pick new breakpoints for long-term trend lines. I can’t really pretend that my “maintain” period is modeled well by a single straight line.

My goal is to stay in the middle of my target range (~158 lbs), but this may be harder to achieve than I had originally envisioned three years ago when I started on the weight-control effort.  I’ve been skipping lunch entirely lately, which makes me a bit hungry on the days when it is 14 hours between breakfast and dinner (mostly Tuesdays and Thursdays when I have late lab classes to teach).

February had me riding an average of 4.78 miles a day—about normal for a 5-day-a-week commute with a tiny bit of weekend errand-running.  I have had huge grading loads most weekends, so sometimes I don’t even get time to do a short errand like biking down to Trader Joe’s for soy milk and cereal.

I’m still thinking that I’ll try doing some running this summer, not for weight control but to start training for running a marathon when I retire.  The longest distance I’ve ever run was 15km, and that was about 47 years ago.  I was recently told that Bike Santa Cruz County (the new name for the “People Power” group that used to meet in my living room, back when I was a bike activist) sponsors a 12km run on 26 August 2018, mostly on flat dirt trails at Wilder Ranch.

If I plan on building up to longer runs by increasing the length by 1km each week, then I’ll have to start training around Memorial Day. My initial training will be very easy, perhaps on the UCSC 800m track or the Santa Cruz High 400m track (both distances are approximate measurements from Google Maps).  Later in the summer I will probably alternate long days (leading up to 12km by the end of August) and short days (1–2km).  On short days, I should probably do some cycling to maintain some cardiovascular fitness—cycling up to the UCSC track to do two laps would probably make a decent short day.
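As a quick sanity check on that timeline, a few lines of Python lay out the weekly build-up; the 1km starting distance is an assumption, and Memorial Day 2018 falls on May 28.

```python
# Check the training timeline: build up by 1 km per week to reach 12 km
# by the 26 August 2018 run, starting around Memorial Day (28 May 2018).
from datetime import date

race_day = date(2018, 8, 26)
start = date(2018, 5, 28)      # Memorial Day 2018
goal_km = 12
first_long_run_km = 1          # assumed easy starting distance

weeks_available = (race_day - start).days // 7
weeks_needed = goal_km - first_long_run_km
print(f"{weeks_available} weeks available, {weeks_needed} weekly 1 km increases needed")

# Alternating schedule: long runs grow by 1 km each week, short days stay at 1-2 km.
for week in range(weeks_needed + 1):
    print(f"week {week + 1}: long run {first_long_run_km + week} km, short day 1-2 km or cycling")
```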




2018 February 25

Weekend off!

Filed under: Uncategorized — gasstationwithoutpumps @ 15:43

I had only 2 hours of grading to do this weekend (but next weekend will make up for that, with more than 30 hours of grading), so I got a chance to do some other things for a change:

  • Buy groceries at Trader Joe’s.  (“Groceries” is misleading here, as I generally view Trader Joe’s as a beverage store—I bought soy milk, mineral water, hard cider, beer, port, and whiskey, plus cereal, chocolate, and prunes.  I don’t drink whiskey or mineral water and my wife doesn’t drink port or soy milk, but the cider and beer are for both of us.)
  • Do a protein structure prediction for a microbiology colleague.  I no longer use my own tools for protein-structure prediction, as they have succumbed to the changes in C++ and operating systems, so that they can no longer be compiled or run.  I’ve also not maintained the template library for several years.  Because the only predictions I get asked to make these days are ones for which there are good templates, I just use HHpred and Modeller on-line.  For that sort of prediction, they are quick and do an adequate job.  The goal of this prediction was to get a good guess of binding-site residues for a chemosensor, to guide site-directed mutagenesis.  Unfortunately, the available structures did not have ligands bound, and for most of them no one knows what the real ligand is anyway, so I had to make guesses based on the structure without solid evidence for how ligands bind to them.
  • Check whether the nFET and pFET we’ll be using next quarter have small enough gate capacitances to be driven directly from a comparator, or whether we’ll still need to use 74AC04 inverters as digital amplifiers.  We could probably just barely get away with using the comparators, but the chips end up running rather warm, so I’m still going to recommend using the digital amplifier.  One inverter for both the nFET and pFET gates seems to be fine, though—the rise and fall times are short enough that we don’t need a separate inverter for each gate.  (A rough version of this timing estimate is sketched after this list.)
  • Review courses for the Committee on Courses of Instruction meeting tomorrow—I only had 13 courses to review this time, and I’d already looked at half of them.
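Here is the sort of back-of-the-envelope timing estimate mentioned in the FET-driver item above.  The component values (gate capacitances, driver output resistances) are placeholder guesses for illustration, not the actual parts for the course; the point is just the RC scaling.

```python
# Rough gate-drive timing: treat the driver as a resistor charging the FET
# gate capacitance, so the 10%-90% rise/fall time is about 2.2*R*C.
# All component values below are illustrative guesses, not the course parts.

def swing_time(r_out_ohms, c_gate_farads):
    """Approximate 10%-90% rise/fall time of a simple RC charging curve."""
    return 2.2 * r_out_ohms * c_gate_farads

c_nfet = 1e-9        # assumed nFET input capacitance (1 nF)
c_pfet = 1.5e-9      # assumed pFET input capacitance (1.5 nF)
drivers = {
    "comparator":      300,   # assumed effective output resistance (ohms)
    "74AC04 inverter":  30,   # assumed effective output resistance (ohms)
}

for name, r_out in drivers.items():
    t = swing_time(r_out, c_nfet + c_pfet)   # one driver charges both gates
    print(f"{name}: ~{t * 1e9:.0f} ns to swing both gates")
```

With these placeholder values the inverter switches about ten times faster than the comparator; the real decision above turned as much on how warm the comparator chips run as on edge speed.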

I still have this evening—maybe I’ll repot the free live Christmas tree my wife picked up yesterday.  We gave our old one away in January, because it was getting pot-bound and we did not want to transfer it to a larger pot—the current pot was as heavy as we could haul up the steps.  The new tree is tiny, but should last us several years before it gets to be too big.  Today might also be a good day to put the Christmas ornaments back in the attic—we’ll probably have to rebox some of them, as Marcus (our kitten) has shredded some of the boxes.

2018 February 24

Direct-to-consumer genome sequencing

Filed under: Uncategorized — gasstationwithoutpumps @ 10:31

I’ve been thinking of getting my whole genome sequenced, along with my father’s and perhaps my siblings’ and my son’s (assuming they consent).  I’m primarily interested in seeing whether I can determine what causes my Dad and me to have low resting heart rates (mine is around 50bpm).

The condition is known as bradycardia, which just means slow heart rate, and is applied to any resting heart rate under 60bpm.  It is not a dangerous condition, unless it gets extreme, in which case it can lead to passing out or falling asleep at inappropriate times. Most bradycardia is due to aging (collagen buildup in the heart or scarring from heart attacks), some is due to extreme exercise, but for a small fraction of cases it is inherited.  Since it affects me, my Dad, and some other relatives, despite very different diets and exercise levels, and I’ve had it since becoming an adult, it is almost certainly genetic, not environmental.

Treatment, if needed, is to use a pacemaker to maintain a minimum heart rate. I expect that I will need to get a pacemaker sometime in the next 10–20 years, but with that support, I don’t expect any shortening of my lifespan.

My dad had to get a pacemaker installed in his 70s or 80s, but he is still going strong with it at 92.  There are debates about whether to set his minimum level at what his resting heart rate was for decades (around 50, as mine is) or around 70 (more typical for adults)—there may be some tradeoff between more alertness at the higher heart rate and better sleep at the lower heart rate.  Pacemakers have already gotten fairly sophisticated and try to distinguish between sleeping, resting, and active states.  I suspect that they will continue to get better, borrowing techniques from exercise-tracking wearables.

A survey article, “Inherited bradyarrhythmia: A diverse genetic background” by Taisuke Ishikawa, Yukiomi Tsuji, and Naomasa Makita (Journal of Arrhythmia 32 (2016) 352–358), lists sixteen genes that have been associated with bradycardia, with most of the variants being autosomal dominant loss-of-function mutations.  Many of the genes encode ion channels or calcium-handling proteins.

I’d like to check my own genome to see whether I have variants in or near any of these genes known to be involved in heart-rate regulation.  There are several different sorts of genetic tests available:

  • Clinical tests for specific genetic variants (like the BRCA breast-cancer genes or Tay–Sachs). These are generally old technology and provide small amounts of information.  They are often quite expensive.
  • DNA microarray panels. These look for a large set of known genetic variants, generally ones that are either fairly common or have been associated with genetic diseases.  This is what 23andme offers, as it is the cheapest technology.  Because inherited bradycardia is not common, and because it can have many different causes, there are no individual markers common enough to appear in standard SNP panels like the tests used by 23andme.
  • Transcriptome sequencing.  These sequence only the messenger RNA currently being produced for translation to proteins.  This method tells a lot about the state of the cells, but misses regulatory regions (which don’t code for the proteins) and any protein not currently being made.  The results are very different depending on what tissue is being sampled.  It is commonly done in research (including cancer/normal cell comparisons), but I don’t know any direct-to-consumer companies offering it, as the results are difficult to interpret outside the scope of specific experiments.
  • Exome sequencing.  This is targeted DNA sequencing that tries to cover all the protein-coding portions of the genome.  It is probably the most common approach for direct-to-consumer sequencing, and is offered by several companies (including Helix, Novogene, …).  Some regulatory regions may be included in the sequencing, but most of the data is for protein-coding regions.  Helix has brought the cost of exome sequencing well below $1000, but they have a business model that makes the sequencing cheap and sells analysis apps at very high prices.  It is possible to buy the variant-call data (though not the raw sequence data) from them and run standard analysis pipelines on Amazon’s cloud, but you need to be a bioinformatician to figure out how to run the analyses—and interpretation is still a problem.  The best price on whole-exome sequencing that includes getting all the data is probably from Dante Labs: $495.
  • Whole-genome sequencing.  This is currently the most expensive approach, and it tries to cover most of the genome (highly repetitive regions like the centromeres and telomeres produce data, but the reads can’t be mapped to a reference genome because of the repetition).  It is also the only approach that can uncover novel variants in regulatory regions.  So far, the best price I’ve seen on whole-genome sequencing is from Dante Labs: $695.

Because I don’t know whether the variants are in the genes or in regulatory regions nearby, I’m considering getting whole-genome sequencing.  The Dante Labs website provides the most technical data of any of the direct-to-consumer sites I’ve seen: they do 30X sequencing and return the raw data (FASTQ format), alignment to a reference genome (BAM format), and variant calls (gVCF format).  They don’t document what pipelines they use for mapping and variant calling (information needed for publication these days).  They also don’t provide much interpretation of the variants, so far as I can tell from their website, just a run through SnpEff, which is a reasonable first cut.  They do provide all the data in their price (many sites charge extra for you to get the data), and point to third-party websites for interpretation.

With the gVCF file, I could do standard searches against variant databases such as dbSNP and OMIM (though I believe that SnpEff already does that) to get information about known variants, and I could also use a genome browser to look for variants that are near the genes known to be involved in bradycardia.  If I get the data for n very closely related genomes (me, my Dad, my siblings, my son, …), rather than just mine, I should be able to reduce the number of candidate variants by a factor of roughly 2ⁿ (from an expected 3 million to about 190,000 for 4 genomes).  Proximity to the known cardiac pacemaker genes should reduce the candidates to around 240 with a single genome and around 25 with 4 or more genomes, even if the mutation is idiosyncratic to our family and not one of the already-known variants related to bradycardia.
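Those counts are easy to reproduce with back-of-the-envelope assumptions (my own ballpark figures, not numbers from the survey article): roughly 3 million variants per genome, a ~3.2 Gbp genome, and about 15 kb of relevant sequence per candidate gene once nearby regulatory regions are included.

```python
# Rough sanity check of the candidate-variant estimates above.
# Assumptions are ballpark figures for illustration, not from the survey article.
GENOME_BP = 3.2e9          # approximate genome size
TOTAL_VARIANTS = 3_000_000 # typical variants per genome vs. the reference
GENES = 16                 # candidate bradycardia genes
BP_PER_GENE = 15_000       # coding plus nearby regulatory sequence per gene

# Sharing among n close relatives cuts the candidate pool roughly in half
# per genome, the factor of 2**n used above.
n = 4
shared = TOTAL_VARIANTS / 2 ** n
print(f"shared by {n} relatives: ~{shared:,.0f} variants")          # ~190,000

# Restricting a single genome to windows around the 16 known genes:
near = TOTAL_VARIANTS * (GENES * BP_PER_GENE) / GENOME_BP
print(f"within ~15 kb of the {GENES} genes: ~{near:.0f} variants")  # ~240

# Combining the two filters helps less than the product of the factors,
# because variants near the same gene tend to be inherited together
# (see the linkage note below).
```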

Note: more than 4 genomes will reduce the overall pool of candidate variants, but not the number near known genes, because variants near each other on the genome will be genetically linked—either both variants will be inherited or neither will be.  With 4 or 5 genomes, I can probably narrow down the candidates to just one of the 16 known cardiac pacemaker genes, but not to a specific variation near that gene, unless I get lucky and there is either an already known variant or there is a mutation in the coding region that would obviously disrupt function.  Of course, if I’m that lucky, I might be able to guess the relevant variant from my genome alone.
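When it comes to actually pulling out the near-gene variants, a short script over the gVCF should suffice.  The sketch below uses pysam and assumes the gVCF is bgzipped and tabix-indexed; the gene coordinates are made-up placeholders (the real intervals would come from a genome annotation or a genome browser), so treat it as an outline rather than a working analysis.

```python
# Sketch: list variants within 15 kb of candidate genes in a gVCF, using pysam.
# Requires a bgzipped, tabix-indexed file; coordinates below are placeholders.
import pysam

WINDOW = 15_000  # bp of flanking sequence around each gene

# Hypothetical intervals; look up the real ones in a genome annotation.
candidate_genes = {
    "HCN4":  ("chr15", 73_290_000, 73_340_000),
    "SCN5A": ("chr3",  38_540_000, 38_650_000),
}

vcf = pysam.VariantFile("my_genome.g.vcf.gz")
for gene, (contig, start, end) in candidate_genes.items():
    for rec in vcf.fetch(contig, max(0, start - WINDOW), end + WINDOW):
        # Skip reference-only gVCF blocks; keep records with a real ALT allele.
        if rec.alts and rec.alts[0] not in (None, "<NON_REF>"):
            print(gene, rec.contig, rec.pos, rec.id, rec.ref, rec.alts)
```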

I think that my process will probably be to get my own genome sequenced and see what I can do with the data, then ask for my Dad’s genome.  After that, I can ask my siblings and my son (and perhaps my nephews and nieces) for more data, to see whether we can pin down the variant.  I find it interesting that this sort of analysis, which used to require million-dollar grants, is now accessible to citizen scientists at a price less than many spend on their hobbies.  The software has to be made more user-friendly and more easily accessible, but I think that is coming.
