Gas station without pumps

2012 March 20

Petridish, another science crowd-funder

Filed under: Uncategorized — gasstationwithoutpumps @ 11:08

Thanks to a post on the New Zealand blog misc.ience (Petridish – the new kid on the science crowdfunding block), I’ve found out about another crowd-funding service, in addition to SciFund, that I blogged about before. Petridish was created specifically for science funding, and I’m not yet sure what its advantages and disadvantages are compared to SciFund.

As with any funding source, the important questions include

  • How much money can be raised?
  • What is the probability of getting the funding?
  • How much effort is involved in trying to get that funding?
  • What strings are attached to the funding?

SciFund charges 4%, plus another 4% for credit-card processing. I believe that they are also a for-profit company, since they don’t mention tax deductions anywhere.

Petridish is a for-profit company, and they take 5% of all donations (the research projects are also responsible for credit-card fees, which I believe run another 3–5% depending on the card used, and can be much higher for tiny transactions, due to fixed minimum fees). Petridish is looking into ways to make (part of) donations tax-deductible, but they are unlikely to be successful at that.

SciFund is a keep-it-all funder: the person requesting the funding gets everything that is raised (minus fees), whether or not they reach their funding goal.  This allows setting a higher goal, though there are some incentives in place for keeping goals realistic. Many projects reach their funding deadline without coming close to their initial funding goals, and some go well over. Funding amounts seem to be in the range $10–$10000 ($300–$3000 if you remove a few outliers), almost independent of what the researcher requested, with a median of about $1000.

PetriDish is an all-or-nothing funder: “Projects will only be funded if they reach their goal before the deadline set by the researcher.”  That means that researchers have to guess how successful the crowd-funding will be when setting their goals, despite having no access to information about how many people visit the site, nor what the success rate is for other projects. (That information may become available once PetriDish has some history to share.)
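The practical difference between the two models can be sketched in a few lines of Python (illustrative only: the function names are mine, and the flat 8% fee is a rough stand-in for the combined platform and credit-card fees discussed above):

```python
def keep_it_all_payout(pledged, fee_rate=0.08):
    """SciFund-style: the researcher keeps everything pledged
    (minus fees), whether or not the goal is reached."""
    return pledged * (1 - fee_rate)

def all_or_nothing_payout(pledged, goal, fee_rate=0.08):
    """Petridish-style: the researcher gets nothing unless
    pledges reach the goal before the deadline."""
    return pledged * (1 - fee_rate) if pledged >= goal else 0.0

# A researcher who sets a $5000 goal but attracts only $3000 in pledges
# keeps about $2760 under keep-it-all, and nothing under all-or-nothing.
print(keep_it_all_payout(3000))
print(all_or_nothing_payout(3000, 5000))
```

Under all-or-nothing, an over-optimistic goal costs the researcher everything, which is why guessing the goal right matters so much.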

Researchers who guess wrong are unlikely to get a second chance: “We hand select the most interesting and meaningful projects we find to be featured on our site and then allow you to get involved.”  So not only do researchers have to guess at the tastes of the general public, but they also have to guess at the tastes of an unknown review panel.  The panel may be easier to please than a typical funding agency panel, though, as PetriDish is not risking any money by accepting a proposal—just a little bit of credibility if the project is bad.

I think that the keep-it-all funding of SciFund makes more sense for science funding.   Crowd-funding will rarely pay for a complete project—it will almost always be a small add-on that will enable doing a little more, not making or breaking a project.  Forcing the scientists to gamble on how much to ask for seems silly in that context.

What strings are attached?  Projects must offer rewards to the individuals funding the project, just like SciFund:

Every reward is unique to its project. Some rewards offered on Petridish include:

  • Souvenirs from the field, like a rock from the highest peak in Madagascar or a vial of water from 400 feet below the surface.
  • Talks or dinners with famous researchers
  • Limited edition photographs or artistic renditions of the subject matter
  • Acknowledgements in journals
  • Naming rights for new discoveries, like new species
  • In person participation in a field project

In my earlier post about SciFund, I discussed the possibility of using it to get some funding for banana slug genomics—a project that has some potential for being achievable with only about $5000 or $10000 in funds (as long as no one is paid from the funds—even one quarter of grad student funding costs too much).  The expensive part of scientific research is nearly always the personnel, and I don’t see any way that crowd-funding will make the slightest dent in that cost.

I see SciFund and Petridish as more an opportunity for outreach and publicity for cool projects than as serious sources of funding for science. In that context, I’m seriously tempted to put together a funding request for banana slug genomics, which has a “coolness” factor that few of my other projects have.  What’s stopping me is mainly my fear of the University bureaucracy, which might prohibit me from attempting crowd-funding, soak up any money that comes in as “overhead”, or just make it so difficult to use the money that it would be less painful to fund things out of my retirement savings.

2011 November 24

Harry Potter’s World—junk science at NLM

Filed under: Uncategorized — gasstationwithoutpumps @ 00:06

I was recently pointed to a site at the U.S. National Library of Medicine that uses a popular literary figure to inspire kids to learn real science: Harry Potter’s World: Renaissance Science, Magic, and Medicine.  They have both an English-class lesson plan (7th–10th grade) and a science-class lesson plan (7th–11th grade). I was prepared to praise them for this integrated curriculum, which seems to me like an excellent way to try to bridge C.P. Snow’s two cultures in academia.

But I glanced quickly down their list of resources and saw Human Mendelian Traits and Human Mendelian Traits for Teachers. A quick look revealed that both were propagating serious myths about human genetics—myths that have been comprehensively debunked at Myths of Human Genetics.

Unfortunately, the myths form a core part of the lesson, and so there is not a lot salvageable once the myths are removed.  I think that it may be appropriate for the NLM to take this lesson plan off their site until they can rework it into something consistent with what is known about human genetics.  They are not doing anyone a favor by putting their brand name on junk science.

2011 November 3

SciFund crowd-sourcing science funding

Filed under: Uncategorized — gasstationwithoutpumps @ 02:25

For those science projects that don’t need a lot of money, but that you can’t get Federal funding agencies interested in (or for which the time and effort it would take to write a proposal and get it funded are out of proportion to the money needed), there is now an alternative: crowd-funding.

Here’s how it works at SciFund (part of RocketHub):

A “creative” proposes a project, describing it with text, pictures, and/or videos.  There are 3 required components: a funding goal, a deadline, and rewards for the “fuelers” who donate money.  The rewards can be anything legal, except investment opportunities, lotteries, revenue share, or equity.  They can be tangible (like t-shirts, copies of artistic works, … ) or experiences (like seminars, opportunities to participate in research, … ).

RocketHub collects a percentage of all donations (4% if you make your target goal, 8% if you don’t) and passes on shares of credit-card fees (for another 4%), thus keeping 8–12% of money collected, which is not a bad overhead (according to Charity Navigator, the median for charities is about 10%).
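That fee schedule is easy to check with a quick sketch (the function name is my own; the percentages are the ones quoted above, with the credit-card share folded in as a flat 4%):

```python
def rockethub_net(raised, goal):
    """Net amount passed on to the project after RocketHub's cut:
    4% if the goal is met, 8% if not, plus roughly another 4%
    in credit-card fees either way."""
    platform_fee = 0.04 if raised >= goal else 0.08
    card_fee = 0.04
    return raised * (1 - platform_fee - card_fee)

# Raising $1000 against a $1000 goal nets about $920 (8% total overhead);
# raising $1000 against a $2000 goal nets about $880 (12% total overhead).
print(rockethub_net(1000, 1000))
print(rockethub_net(1000, 2000))
```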

Ariolimax dolichophallus at UCSC. Image via Wikipedia.

I’ve been wondering if it would be worthwhile to put together a funding request for reagents for finishing the banana slug sequencing.  I think that we need between $5000 and $10000 for that, which may be in the range of crowd-funding.  I’d have to pin down the amount better and get commitments from volunteers to do the sequencing if the money comes through.  Rewards could be banana-slug genomics t-shirts or coffee mugs (though the donations would have to be big enough to pay for the extra cost of the rewards).   We could also offer a webinar about the banana slug and its sequencing for any level of donation.

Of course, one problem with this idea is that the most likely people to contribute to a crowd-funding campaign for sequencing Ariolimax dolichophallus are UCSC alumni who are proud of the unusual mascot, but the dean of the School of Engineering has told faculty not to contact alumni about the silly idea of sequencing the banana slug (or at least, I’ve heard rumors to that effect—I’m sure that the development office wants to keep a tight handle on the reins of any fund-raising).

We’d have to come up with a way to convince people that sequencing the banana slug is really cool, as the competition for “fuelers” is pretty stiff, and some of the projects on SciFund sound pretty cool.

2011 July 29

Bad news for Matter and Interactions fans

Filed under: Uncategorized — gasstationwithoutpumps @ 10:10

One of the memes I’ve seen running through physics teacher blogs lately is teaching physics with computational modeling.  The Matter and Interactions text is often mentioned (for example, by John Burk in My favorite texts: Matter and Interactions). I’ve previously posted wondering whether it would be a good way for my son (and me) to learn physics, since he loves to program in Python already.

Today, Mark Guzdial announced on his blog results from a new Ph.D. thesis addressing a related question: whether students in general learned more physics from a computational modeling physics class or a traditional one.  The results are summarized in the blog title: Adding computational modeling in Python doesn’t lead to better Physics learning: Caballero thesis, part 1.

The details of the experiment are not in the blog post, so I don’t know such important things as how well the classes were matched for student demographics and teacher competence.  Usually in education research the matching of student demographics is relatively easy, but controlling for things like teacher experience and competence is very hard.  Since the goal was to measure how good a particular curriculum was (not how good a particular teacher or student was), controlling for teaching skill is essential, but nearly impossible.

Here’s the dilemma: a physics course based on computational modeling requires a different set of knowledge and skills than a traditional physics course.  Not only does the teacher have to know physics and how to teach it, but they also need to know how to program and how to teach programming.  Thus the computational-modeling curriculum inherently requires more of the teacher.

Matching skill at teaching physics (say, by taking physics teachers with comparable results in traditional classes) is not enough.  One approach that could be tried is to have the same teacher teach parallel courses, one using a computational-modeling approach, the other a traditional approach.  Of course, you then end up with the teacher doing better at whichever method they prefer, and the results generally just confirm whatever bias the teacher had initially.

The usual approach in such cases is to do many replicates of the experiment, with different teachers and different groups of students, in the hopes that the errors due to uncontrollable variables will be independent and average out.  It is rare, though, that a PhD student can set up and fund such a huge study, and even rarer that a curriculum seller would do so (why risk showing your product is not as good as you think, when you only need a tiny, easily-manipulated study to sell things as research-based?).  So, without even having seen the thesis I already have some doubts about how well the results generalize.

Comparing the computational modeling course to a traditional course has another problem: the students have all the prereqs for a traditional course (calculus, in this case), but have no prior experience in programming.  This is like setting up a calculus-based course versus an algebra-based course without having the students take calculus and complaining that the students in the calculus-based course learned less physics, because the teacher had to spend so much time on calculus fundamentals.

OK, so I’ve dumped on the research methods of the education experimenters without even having seen the thesis, but I haven’t told you even the little that Mark Guzdial summarized from Caballero’s thesis.  Basically, students were tested on the Force Concept Inventory after taking either a traditional physics class or a class using Matter and Interactions, and the students in the traditional course did a lot better.  An analysis was done of what was taught in the courses, and the reason for the difference made sense: the traditional course spent a lot more time on the topics tested on the exam, while the computational-modeling course spent a lot of time teaching how to write simulations, which the FCI does not test.

So far as I can tell, the students were not assessed afterwards for their programming ability, though Mark Guzdial expresses some doubt that they achieved much skill at that also.

Of course, nothing in this study addresses my personal question: “Would the Matter and Interactions book be a good one for my son and me to invest some time in?” Neither of us is anywhere near typical of the students in physics classes.

One study that would be more relevant for me (though perhaps not of general enough interest for anyone to actually do it) would be to repeat Caballero’s experiment but with classes consisting entirely of computer science majors who had already had at least a year of programming classes.  That would address the question of whether computational modeling is a better way to teach physics if the students already know how to program.  Just as physics has math prerequisites, so that physics teachers don’t have to spend all their time teaching algebra and calculus, perhaps physics needs to have computational prerequisites as well.

Of course, it could just be that the computational-modeling people are fooling themselves, and that even with students who already know how to program, less physics is learned that way.  That seems unlikely to me, but not entirely impossible.



2011 July 13

Yet another open access journal

Filed under: Uncategorized — gasstationwithoutpumps @ 09:31

With hundreds of journals and journal publishers starting up open-access journals, why would I bother picking out one to write about?

As a researcher without funding, I can’t afford to pay $3000 for every paper I want to or ought to publish, and the University is not likely to provide funds for that (they have all those administrators to pay outsize salaries to and all that bonded debt to start paying down, after all), so the author-pays model for publishing does not work well for me, though I appreciate the value of free access to scientific literature.

I’ve blogged before about an AAUP article on open-access publication, ISCB open-access policies, and IEEE open-access publishing, as well as passing on an announcement of an advertiser-pays open-access journal.

There is a new journal coming out (name still undecided, so far as I can tell), that is trying a different model: direct subsidy of the journal by funding agencies.  A press release from HHMI announced that Randy Schekman, a cell biologist and HHMI investigator at the University of California, Berkeley will be the senior editor and gave a few details about the publication:

… their fundamental goals: publication of highly significant research; an independent editorial team comprised of active, practicing scientists; and a rapid and transparent peer review.

Expected to launch in about a year, the journal will be online and open access. Schekman says he does not expect the journal to hold the copyright to the literature, but to utilize Creative Commons licenses so that the data can be widely shared.

Schekman reports that editors will be appropriately compensated, noting for example that senior editors will be expected to devote 20 percent of their time to the journal and would be paid accordingly.

For the first three to four years, to help establish the journal, no fees will be charged to authors. Once the journal is established, it is anticipated that authors will be charged an article processing fee to cover some of the ongoing costs of publication.

So they almost got it right.  They’ll have direct funding of the journal for 3 or 4 years, but after that they switch to author-pays.  I’d rather see less “appropriate compensation” for editors and a promise to directly fund publication for longer.  I suspect that Schekman’s experience at PNAS (where he pushed a little in the direction of better science, but did not eliminate the old-boys’ network submission policies) will lead him to gold-plate the new journal and run through the funding-agency money without leaving a lasting legacy of a free-for-authors, free-for-readers journal.

I think that part of the reviewer and editor fatigue that makes it hard to recruit reviewers is not that they aren’t paid, but that there is a huge revenue stream which they are not part of.  People don’t mind doing volunteer work, but they hate someone else making money off their volunteer work.  I don’t like reviewing for the journals owned by the publishers who are making millions ripping-off libraries, and I’m reluctant to support a “vanity press” where the reviewers’ comments to reject are ignored in the quest for more author fees.  I would be much more willing to do volunteer reviewing work for a subsidized journal which made no money and had no expensive paid editors.

This new journal had the potential to be a low-cost, high-quality journal, but I think it just misses the mark, and by paying editors “appropriately” and paying referees a retainer, they’ll end up being too expensive to continue.  Funding agencies are in the habit of starting things, then killing them off a few years later by discontinuing funding.  Planning to switch to an author-pays model after a few years is essentially setting this up to be just like other open-access journals, but with much higher costs, and so higher author fees. If this high-cost model is followed, then it will end up being a colossal waste of money, when for about the same amount of funding, the journal could have been set up as a free-for-authors, free-for-readers journal for at least a decade.

