Gas station without pumps

2014 October 25

Grading based on a fixed “percent correct” scale is nonsense

Filed under: Uncategorized — gasstationwithoutpumps @ 10:12

On the hs2coll@yahoogroups.com mailing list for parents home-schooling high schoolers to prepare for college, parents occasionally discuss grading standards.  One parent commented that grading scales can vary a lot, with the example of an edX course in which 80% or higher was an A, while they were used to scales like those reported by Wikipedia, which gives

The most common grading scales for normal courses and honors/Advanced Placement courses are as follows:

Grade   “Normal” courses           Honors/AP courses
        Percentage   GPA           Percentage   GPA
A       90–100       3.67–4.00     93–100       4.5–5.0
B       80–89        2.67–3.33     85–92        3.5–4.49
C       70–79        1.67–2.33     77–84        2.5–3.49
D       60–69        1.0–1.33      70–76        2.0–2.49
E / F   0–59         0.0–0.99      0–69         0.0–1.99

Because exams, quizzes, and homework assignments can vary in difficulty, there is no reason to suppose that 85% on one assessment has any meaningful relationship to 85% on another assessment.  At one extreme we have driving exams, which are often set up so that 85% right is barely passing—people are expected to get close to 100%.  At the other extreme, we have math competitions: the AMC 12 math exams have a median score around 63 out of 150, and the AMC 10 exams around 58 out of 150.  Getting 85% of the total points on the AMC 12 puts you in the top 1% of test takers.  (AMC statistics from http://amc-reg.maa.org/reports/generalreports.aspx ) The Putnam math prize exam is even tougher—the median score is often 0 or 1 out of 120, with top scores in the range 90 to 120. (Putnam statistics from http://www.d.umn.edu/~jgallian/putnam.pdf) The point of the math competitions is to make meaningful distinctions among the top 1–5% of test takers in a relatively short time, so questions that the majority of test takers can answer are just time wasters.
I’ve never seen the point of having a fixed percentage correct used institution-wide for setting grades—the only point of such a standard is to tell teachers how hard to make their test questions.  Saying that 90% or 95% should represent an A merely says that test questions must be easy enough that top students don’t have to work hard, and that distinctions among top students must be buried in the test-measurement noise.  Putting the pass level at 70% means that most of the test questions are being used to distinguish between different levels of failure, rather than different levels of success. My own quizzes and exams are intended to have a mean around 50% of possible points, with a wide spread, to maximize the amount of information I get about student performance at all levels—but I tend to err on the side of making the exams a little too tough (35% mean) rather than much too easy (85% mean), so I generally learn more about the top half of the class than the bottom half.
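The intuition that a mean around 50% maximizes information can be illustrated with a toy model: if we pretend each question is an independent yes/no outcome that a student gets right with probability p, the Shannon entropy per question peaks at p = 0.5. This is only a sketch under that (very simplistic) independence assumption, not a real psychometric analysis:

```python
import math

def entropy_bits(p):
    """Shannon entropy (in bits) of a single question that a student
    answers correctly with probability p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Information per question at various expected scores: 50% is the peak,
# and a too-easy exam (85% or 95% mean) yields far less per question.
for p in (0.35, 0.50, 0.85, 0.95):
    print(f"mean score {p:.0%}: {entropy_bits(p):.2f} bits/question")
```

The asymmetry also shows up here: a 35% mean loses much less information than an 85% mean, which is one way to justify erring on the hard side.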
I’m OK with knowing more about the top half than the bottom half, but my exams also have a different problem: too often the distribution of results is bimodal, with a high correlation between the points earned on different questions. The questions are all measuring the same thing, which is good for measuring overall achievement, but not very useful for diagnosing what individual students have or have not learned.  This result is not very surprising, since I’m not interested in whether students know specific factoids, but in whether they can pull together the knowledge that they have to solve new problems.  Those who have developed that skill can often show it on many rather different problems, while those who haven’t struggle on any new problem.

Lior Pachter, in his blog post Time to end letter grades, points out that different faculty members have very different understandings of what letter grades mean, resulting in noticeably different distributions of grades for their classes. He looked at very large classes, where one would not expect enormous differences in the abilities of students from one class to another, so large differences in grading distributions are more likely due to differences in the meaning of the grades than to differences between the cohorts of students. He suggests that some sort of normalization be applied, so that raw scores are translated in a professor- and course-specific way to a common scale that has a uniform meaning.  (That may be possible for large classes that are repeatedly taught, but is unlikely to work well in small courses, where year-to-year differences in student cohorts can be huge—I get large year-to-year variance in my intro grad class of about 20 students, with the top of the class some years being only at the performance level of the median in other years.)  His approach at least recognizes that the raw scores themselves are meaningless out of context, unlike the approach of people who insist that “90% or better is an A”.

People who design large exams professionally generally have training in psychometrics (or should, anyway).  Currently, the most popular approach to designing exams that need to be taken by many people is item-response theory (IRT), in which each question gets a number of parameters expressing how difficult it is and (for the most common 3-parameter model) how well it distinguishes high-scoring from low-scoring people and how much to correct for guessing.  Fitting the 3-parameter model for each question on a test requires a lot of data (certainly more than could be gathered in any of my classes), but provides a lot of information about the usefulness of a question for different purposes.  Exams for go/no-go decisions, like driving exams, should have questions concentrated in difficulty near the decision threshold that distinguish well between those above and below the threshold.  Exams for ranking large numbers of people with no single threshold (like SAT exams for admissions to many different colleges) should have questions whose difficulty is spread out over the range of thresholds.  IRT can be used for tuning a test (discarding questions that are too difficult, too easy, or that don’t distinguish well between high-performing and low-performing students), as well as for normalizing results to a uniform scale despite differences in question difficulty.  With enough data, IRT can be used to get uniform-scale results from tests in which individuals are not all presented the same questions (as long as there is enough overlap that the difficulty of the questions can be calibrated fairly), which permits adaptive testing that takes less testing time to reach the same level of precision.
Unfortunately, the model fitting for IRT is somewhat sensitive to outliers in the data, so very large sample sizes are needed for meaningful fitting, which means that IRT is not a particularly useful tool for classroom tests, though it is invaluable for large exams like the SAT and GRE.
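For readers unfamiliar with IRT, the 3-parameter logistic model has a standard form: the probability of a correct answer for an examinee of ability θ is c + (1−c)/(1+e^{−a(θ−b)}), with discrimination a, difficulty b, and guessing floor c. A small sketch (the parameter values here are made up for illustration, not fitted to any real exam):

```python
import math

def p_correct_3pl(theta, a, b, c):
    """3PL item-response model: probability that an examinee of ability theta
    answers correctly, given discrimination a, difficulty b, and guessing
    parameter c (the floor from blind guessing)."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# An easy, sharply discriminating item (driving-test style) versus a hard
# item (competition style), evaluated at low, middle, and high ability.
for theta in (-2, 0, 2):
    easy = p_correct_3pl(theta, a=2.0, b=-1.0, c=0.25)
    hard = p_correct_3pl(theta, a=1.0, b=2.0, c=0.20)
    print(f"ability {theta:+d}: easy item {easy:.2f}, hard item {hard:.2f}")
```

The easy item separates examinees near θ = −1 and tells you nothing about strong ones; the hard item does the reverse—which is why a test's purpose should dictate where its items sit on the difficulty scale.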
The bottom line for me is that the conventional grading scales used in many schools (with 85% as a B, for example) are uninterpretable nonsense, conveying no useful information to teachers, students, parents, or anyone else.  Without a solid understanding of the difficulty of a given assessment, the scores on it mean almost nothing.

2014 October 13

Say this, not that

Filed under: Uncategorized — gasstationwithoutpumps @ 17:00

This summer I bought my son a book to prepare him for college: Say This, NOT That to Your Professor: 36 Talking Tips for College Success. He read most of it, and found it to be reasonably well written, somewhat poorly copy-edited, and worth reading once. He felt that most of the advice in the book was just common sense, but that only means he has been raised in an academic culture.  What the child of a professor sees as common sense in dealing with professors may seem arcane to someone coming from a different culture—perhaps the first in their family to go to college.

For the past 3 years, over half of our admitted students have been the first in their family to go to college. So what my son finds “common sense” may be the cultural knowledge of academia that many of the students at UCSC are missing.

After my son left for college, I decided to read the book for myself, to see if it was worth recommending to students at UCSC.

The author, Ellen Bremen, apparently teaches communication at a two-year college (Highline Community College in Des Moines, WA, about an hour and a half south of the University of Washington by public transit), and some of the advice she gives seems more directed at two-year college students than at research-university students.  For example, she provides no advice on how to ask a faculty member whether you can join their research group, because most 2-year-college faculty have no time to do research, but she provides a lot of information about what to do when you miss half a quarter’s classes.

Her example students also seem to be a bit more clueless than the students I see at the University of California.  Perhaps this is because of the stricter admission criteria to UC, or perhaps she has selected the most extreme cases to use as illustrations. Or maybe I just haven’t dealt with enough freshmen—I generally see students in their sophomore through senior years, after they’ve had a chance to get acculturated to academia.

About 3/4 of Bremen’s book is dedicated to what students do wrong, and the last quarter to how students can deal with professors who screw up—about the right ratio for a book like this. Although the actual incidence of student mistakes and faculty mistakes is a larger ratio (more like 10:1 or 20:1), the student mistakes tend to fall into the same sorts of things over and over, with only the players changing names, so a 3:1 ratio is reasonable.

The advice she gives is generally good, though she recognizes only the teaching role for faculty, and assumes that all faculty have as much time and desire to meet one-on-one with students as she does.  At UC, many of the professors see their research role as more important than their teaching role (and the promotion process, summer salary, and publicity about faculty activity clearly favor this belief), so faculty are a little less willing to dedicate 10 hours a week to office hours or meet with students at random times outside office hours. I’m doing a lot of additional appointments this quarter, and it really does break up the day so that I can’t carve out a chunk of time for writing papers or programming.  In previous years I’ve kept one day a week free for working from home, free from student interruptions and meetings all over campus, but this quarter I’ve not been able to do that, so my research time and book-writing time has dropped to almost nothing.  Just coping with the pile of email from students every few hours eats up my day.  I find that a lot of student requests can be handled more efficiently by e-mail than by scheduling meetings—the extra non-verbal communication that Ellen Bremen is so fond of often gets in the way of the actual business that needs to be transacted.

Overall, I think that Bremen’s book is a good one, even if some of the advice is slightly different from what I would give.  I think that she would do well to work with a second author (from a research university) on a subsequent edition, to cover those situations that don’t come up much at 2-year colleges.  Despite those holes, I still recommend the book for UC students, particularly first-in-family students.


2014 June 26

What you do in college may matter more than where you go

Filed under: Uncategorized — gasstationwithoutpumps @ 00:48

Back in May, I read a blog post (Life in College Matters for Life After College) that pointed to the Gallup-Purdue Index Report 2014. I finally got the time to download the report and look at it.

The report has a rather ridiculous interpretation of copyright on its copyright page: “It is for your guidance only and is not to be copied, quoted, published, or divulged to others.” This is particularly ridiculous for a report that they are distributing for free—I think they have a piece of boilerplate that they put on all their reports, written by lawyers who want to claim far more than copyright law really provides.  They’ve got deeper pockets than I do, though, so the threat is effective—I won’t directly quote them in my blog, but will just summarize what I see as the main points.  If I mangle their message, they have only their own over-zealous lawyers to blame.

What the report is ostensibly about is whether college prepares students for an “engaging” job and a good life.  They looked at whether graduates were engaged in their jobs and at five measures of well-being that Gallup has used in other studies: purpose, social, financial, community, and physical. They also looked at how attached alumni were to their alma maters (which, of course, is primarily what Purdue was interested in, as that determines how much money they can extract from alumni).

Basically, they started with the assumption that the point of college is to get a “great job” and a “great life” (a debatable point, but a widely held belief).  They then tried to determine what produced these outcomes, by interviewing 30,000 graduates.  Note that they did not interview those who quit or were kicked out of college—they considered only those whom the colleges thought had succeeded.  It might be interesting for them to look at the outcomes for those who dropped or failed out also, to see whether the things they think mattered in college also affected the students who left without a degree.  (I suspect that the effects would be even stronger, because of the higher variance in the outcomes, but guessing about sociology is not one of my strengths.)

Their main result was that it didn’t really matter much where people went to college (other than that results were consistently worse at for-profit schools)—what mattered is what they encountered there.  Having an inspiring professor who cared about them, excited them about learning, and encouraged them doubled the odds of their being engaged at work after college. Internships in which they applied their learning, multiple-term projects, and being extremely active in extracurricular activities also doubled the odds of their being engaged at work.

(They use the term “odds” rather than “probability” consistently, so I’m not sure if they mean the probability p or the odds ratio \frac{p}{1-p}. If p is small, these are almost the same, but the overall engagement at work for college grads was reported as 39%, so it makes a difference here.  At one point in the report they mention that 40% of students finishing in 4 years or less are engaged in their jobs compared to 34% of those who took five and a half or more years, claiming that completing in 4 years doubles the odds of engagement.  I can’t come up with any definition of “odds” that makes this more than a 30% difference.)
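Neither reading of “odds” gets anywhere near a doubling from those two rates. A quick check, using only the 40% and 34% figures reported:

```python
def odds(p):
    """Convert a probability to odds, p/(1-p)."""
    return p / (1 - p)

p_4yr, p_5plus = 0.40, 0.34   # engaged-at-work rates from the report

print(f"probability ratio: {p_4yr / p_5plus:.2f}")              # about 1.18
print(f"odds ratio:        {odds(p_4yr) / odds(p_5plus):.2f}")  # about 1.29
```

The odds ratio of roughly 1.29 is the ~30% difference mentioned above; calling it "double" is off by a wide margin under either definition.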

I think that UCSC does manage to provide some engaging faculty—most of the students I talked to in senior exit interviews had at least one faculty member who excited them about learning (but that’s fairly common—63% of graduates reported that in the Gallup-Purdue survey).  I don’t know that we do as well at providing professors who show that they care about the students or providing mentors who encourage students to pursue their dreams—those are hard to provide at scale, as they rely on matching personalities as well as having enough faculty time to spend. Indeed, in the Gallup survey only about 27% of graduates felt that professors cared about them as a person and only 22% felt they had a mentor who encouraged them, so we’re not alone in finding this difficult to supply.  I suspect that students doing senior theses get more mentoring than those doing group projects, but a lot depends on the student and whoever is supervising the work.

One thing that the Jack Baskin School of Engineering at UCSC is doing right: all the students in bioengineering, computer engineering, electrical engineering, and computer game design are required to do 2-quarter or 3-quarter-long capstone projects.  (That alone should multiply the odds of being engaged at work by 1.8×, and only 32% of students in the survey reported having that experience.)  Our students do not do so well on “extreme extracurricular activity”, though, as few engineering students feel they have time for much in the way of extracurriculars.  Internships are something that UCSC could be much better at—there is a huge industry base only 40 miles away in Silicon Valley, but students are left on their own to find internships, and not very many do.

The two strongest predictors of engagement were not really what the college did, but what students thought about the college:  if they thought “the college prepared me well for life outside college” or that the college was “passionate about the long-term success of its students”.  These raised the odds of engagement at work by 2.6× and 2.4× respectively. Causality is not clear here, as these attitudes may have resulted from their engagement at work, rather than being causes of it.

The report is very sloppy about confounding variables:  they report that women are more engaged at work than men, and that arts, humanities, and social science majors are more engaged than science or business majors.  But they don’t seem to have done anything to determine which of the two highly correlated variables is the causal one here: gender or major.  Their sample is large enough that they should have been able to get at least a strong hint, despite the correlation.

One unsurprising result: those who took out large loans as students were much less likely to be thriving in all 5 areas of well-being than those who took out small loans or no loans. Since financial well-being is one of the areas, and large loans make it difficult to achieve financial well-being, this is hardly a surprising result.  It would have been more interesting if they had reported differences in just the other four areas—did the large loans have any effects other than the obvious financial one?  They’ve got the data, but they didn’t do the analysis (or they’re not sharing it in the free report, which seems more likely—I’m sure they’ll share it for a hefty consulting fee).

Given that there was almost no difference in well-being based on public vs. private or selective vs. non-selective colleges, the big negative correlation of large loans with well-being sounds like a strong argument to go to a college you can afford, rather than taking out large loans. (Again, the report did not attempt to look at confounding variables for the for-profit schools—how much of their poor performance was due to the large loans they encouraged their students to take out?)

The results for alumni attachment were much stronger than for well-being or job engagement, probably because the background level of alumni attachment was fairly low—only about 18% of college graduates were emotionally attached to their colleges by the criteria used in the poll.  The biggest drivers for emotional attachment were whether they felt the college had prepared them well and whether they felt it was passionate about the long term success of the students.  Again, I question the causality here—it seems likely that those who are emotionally attached are more likely to hold these beliefs, irrespective of what the college actually did.

I’m also confused by their “odds” again, where they report 48% of a group being emotionally attached as 6.1× the odds of another group where 2% are emotionally attached.  I don’t see how they are computing their “odds”—it is a very odd computation indeed! Update: perhaps the odds they mean are \frac{p(x | y)}{p(x | \neg y)}, in which case they are comparing the 48% to some unprovided number, probably a little lower than the background 18%.  I’m still having a hard time making that 6.1.  Maybe \frac{p(x | y)(1-p(x|\neg y))}{(1-p(x|y))p(x | \neg y)}?  I can’t seem to make anything match their numbers.
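A quick check of the candidate computations, using the reported 48%, the reported 2%, and the 18% background rate; the 13% comparison rate at the end is my own back-solved guess, not a number from the report:

```python
def odds(p):
    """Convert a probability to odds, p/(1-p)."""
    return p / (1 - p)

p_group, p_other, p_background = 0.48, 0.02, 0.18

print(p_group / p_other)                   # risk ratio vs. the 2% group: 24
print(odds(p_group) / odds(p_other))       # odds ratio vs. the 2% group: ~45
print(odds(p_group) / odds(p_background))  # odds ratio vs. 18% background: ~4.2
# Back-solving: the comparison rate would have to be about 13% for an
# odds ratio of 6.1 against the 48% group.
print(odds(p_group) / odds(0.13))          # ~6.2
```

Neither the stated 2% group (odds ratio around 45) nor the 18% background (around 4.2) yields 6.1, so whatever comparison they used remains a mystery.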

Although the basic conclusions of the study seem reasonable to me (that what happens to you in college is more important than where you go to college, and that large loans make you miserable), the survey seems rather sloppily done: confusing correlation with causality, not attempting to disentangle confounding variables, and doing some sort of arithmetic that seems completely inconsistent, so that the “odds” they report are incomprehensible. Also, they asked few questions, and every question they asked seemed to have about the same effect on the odds, so I don’t know whether the survey was actually measuring anything (no negative controls).

I’d hesitate to invest money or make academic planning decisions based on this report.  I think that Purdue wasted a lot of money on a load of crap (unless they got a private report with a lot better data and analysis).


2014 March 13

Suggestions for changes to biomed training

Filed under: Uncategorized — gasstationwithoutpumps @ 09:56

Yesterday I attended a discussion led by Henry Bourne (retired from UCSF) about problems in the training system for biologists in the US.  His points are summarized fairly well in his article A fair deal for PhD students and postdocs and the two articles it cites that preceded it:

In a recent essay I drew attention to five axioms that have helped to make the biomedical research enterprise unsustainable in the US (Bourne, 2013a). This essay tackles, in detail, the dangerous consequences of one of these axioms: that the biomedical laboratory workforce should be largely made up of PhD students and postdoctoral researchers, mostly supported by research project grants, with a relatively small number of principal investigators leading ever larger research groups. This axiom—trainees equal research workforce—drives a powerful feedback loop that undermines the sustainability of both training and research. Indeed, unless biomedical scientists, research institutions and the National Institutes of Health (NIH) act boldly to reform the biomedical research enterprise in the US, it is likely to destroy itself (Bourne, 2013b).

I’m basically in agreement with him that the very long PhD-plus-postdoc training now standard in biology in the US is fundamentally broken, and that the postdoc “holding tank” is not a sustainable system.

I also agree with him that one of the biggest problems in the system is paying for education through research grants. Grad student support should be provided directly, either as fellowships or training grants (I prefer individual fellowships like the NSF fellowships, he prefers training grants). By separating support for PhD training from research support, we can effectively eliminate the conflict of interest in which students are kept as cheap labor rather than being properly trained to become independent scientists (or encouraged to find a field that better fits their talents). By limiting the number of PhD students we can stop pumping more people into the postdoc holding tank faster than we can drain the tank by finding the postdocs real jobs.

I disagreed with one of his suggestions, though. He wants to see the PhD shrunk to an average of 4.5 years, followed by a 2–4-year postdoc. I’d rather keep the PhD at 6.5 years and eliminate the postdoc holding tank entirely. In engineering fields, researchers are hired into permanent positions immediately after their PhDs—postdoc positions are rare.  It is mainly because NIH makes hiring postdocs so very, very “cost-effective” that the huge postdoc holding tank has grown. If NIH changed their policies to eliminate support for postdocs on research grants, allowing only permanent staff to be paid, that would help quite a bit.

Draining the postdoc holding tank would probably take a decade or more even with rational policies, but current policies of universities and industry (only hiring people in bio after 6 years or more of postdoc) and of the NIH (providing generous funding for postdocs but little for permanent researchers) make the postdoc holding tank likely to grow rather than shrink.

He pointed out that NIH used to spend a much larger fraction of their funding on training students than they do now—they’ve practically abandoned education, in favor of a low-pay, no-job-security research workforce (grad students and postdocs).

A big part of the problem is that research groups have changed from being a professor working with a handful of students to huge groups with one PI and dozens of postdocs and grad students. Under the huge-group model, one PI needs to have many grants to keep the group going, so competition for research grant money is much fiercer, and there is much less diversity of research than under a small-group model.

The large-group model necessitates few PIs and many underlings, making it difficult for postdocs to move up to becoming independent scientists (there are few PI positions around), as well as making it difficult for new faculty to compete with grant-writing machines maintained by the large groups.

A simple solution would be for NIH to institute a policy that they will not fund any PI with more than 3 grants at a time, and study sections should be told how much grant funding each PI has, so that they can compare productivity to cost (they should also be told when grants expire, so that they can help PIs avoid gaps in funding that can shut down research).  The large groups would dissolve in a few years, as universities raced to create more PIs to keep the overhead money coming in.  The new positions would help drain the postdoc holding tank and increase the diversity of research being pursued.

Of course, the new positions would have to be real ones, not “soft-money” positions that have no more job security than a postdoc. NIH could help there too, by refusing to pay more than 30% of a PI’s salary out of Federal funds.

Of course, any rational way of spending the no-longer-growing NIH budget will result in some of the bloated research groups collapsing (mainly in med schools, which have become addicted to easy money and have built empires on “soft-money” positions).

I think that biology has been over-producing PhDs for decades—more than there are permanent positions for in industry and academia combined. That, combined with the dubious quality of much of the PhD training (which has often been just indentured servitude in one lab, with no training in teaching or in subjects outside a very narrow focus on the needs of the PhD adviser’s lab), has resulted in a situation where a PhD in biology is not worth much—necessitating further training before the scientist is employable and providing a huge pool of postdoc “trainees”, many of whom will never become independent scientists.

Tightening the standards for admission to PhD programs and providing more rigorous coursework in the first two years of PhD training (rather than immediately shoving them into some PI’s lab) would help a lot in increasing the value of the PhD.

Unfortunately, I see our department going in the opposite direction—moving away from the engineering model of training people to be independent immediately after the PhD and towards a model where they are little more than hands in the PI’s labs (decreasing the required coursework, shrinking the lab rotations, and getting people into PI labs after only 2 quarters). I gave up being grad director for our department, because I was not willing to supervise this damage to the program, nor could I explain to students policies that I did not agree with.

One thing we are trying to do that I think is good is increasing the MS program, so that there is a pool of trained individuals able to take on important research tasks as permanent employees, rather than as long-term PhDs or postdocs. Again, the engineering fields have developed a much better model than the biomedical fields, with the working degree for most positions being the BS or MS, with only a few PhDs needed for academic positions and cutting-edge industrial research. Note that a PhD often has less actual coursework than an MS—PhD students have been expected to learn by floundering around in someone’s lab for an extra 5 years taking no courses and often not even going to research seminars, which is a rather slow way of developing skills and deadly to gaining a breadth of knowledge. Biotech companies would probably do well to stop hiring PhDs and postdocs for routine positions, and start hiring those with an MS in bioengineering instead.

2014 March 11

Why few women in engineering?

Filed under: Uncategorized — gasstationwithoutpumps @ 11:33
The Washington Post recently published an opinion piece by Catherine Rampell with a somewhat unusual but plausible explanation of why some fields end up with more men than women (as most of the engineering fields do). The theory is that women are more discouraged by a B in an entry-level course than men are (she cites some data from econ courses that supports the theory, though it shows only correlation, not necessarily causation).
Plenty has been written about whether hostility toward female students or a lack of female faculty members might be pushing women out of male-dominated majors such as computer science. Arcidiacono’s research, while preliminary, suggests that women might also value high grades more than men do and sort themselves into fields where grading curves are more lenient.
As parents and teachers we encourage children to pursue fields that they enjoy, that they are good at, and that can support them later in life. It may be that girls are getting the “that they are good at” message more strongly than boys are, or that enjoyment is more related to grades for girls. These habits of thought can become firmly set by the time students become men and women in college, so minor setbacks (like getting a B in an intro CS course) may have a larger effect on women than on men.
I’m a little wary of putting too much faith in this theory, though, as the author exhibits some naiveté:
But I fear that women are dropping out of fields such as math and computer science not because they’ve discovered passions elsewhere but because they fear delivering imperfection in the “hard” fields that they (and potential employers) genuinely love. Remember, on net, many more women enter college intending to major in STEM or economics than exit with a degree in those fields. If women were changing their majors because they discovered new intellectual appetites, you’d expect to see greater flows into STEM fields, too.
It is very difficult for students, male or female, to transfer into STEM majors late—the number of required courses and prerequisite chains are too long.  As long as the humanities majors have few, unchained requirements and STEM majors have many, chained requirements, the transfer out of STEM will be far larger than the transfer into STEM. Expecting equal flow in both directions is naive.
But there is, I believe, a greater proportional loss of women from STEM fields in college than men, and most of the interventions trying to reduce that loss have not been very effective.  (Harvey Mudd has had some success, attributed to various causes.) If the theory put forth by Rampell is valid, what interventions might be useful? Here are a few I thought of:
  • Higher grades in beginning classes. Engineering courses generally average 0.4 or 0.5 grade points lower than the massively inflated grades in humanities courses. I doubt, somehow, that many engineering faculty will be comfortable with the humanities approach of giving anyone who shows up an A, no matter how bad their work. So I don’t think that this idea has any merit.
  • Lower entry points. One of the things that Harvey Mudd did was to require every freshman to take CS and to introduce a lower-level CS course for those who did not have previous programming. By having some lower-level courses, students could get high grades in their first course without teachers having to water down existing classes or engage in grade inflation. By requiring the course of all students, students who avoided the subject for fear of not being able to compete are given a chance to discover an interest in the field (and, apparently, many women at Harvey Mudd do discover an interest in CS as a result of the required course).
  • Extra tutoring help for B students in entry-level courses. Almost all the “help” resources at the University seem to be aimed at getting students from failing to passing—but the students who are barely passing after massive help do not make good engineering majors, and are likely to fail out of the major later on. It would be far more productive to try to turn the Bs into As, retaining more women (and minorities) in the field. Of course, this means that the assistance has to be at a higher level than it often is now—the tutors need to know the material extremely well and be able to assist others to achieve that expertise.  Basic study skills and generic group help may be good for getting from failing to passing, but may not be enough to get from B to A.
  • More information to students about the feasibility and desirability of continuing with a B. This sort of encouragement probably has to happen one-on-one from highly trusted people (more likely peers than adults).

These ideas are definitely half-baked—I’m not even fully convinced that the theory behind them is valid, much less that they would have the desired effect. I welcome comments and suggestions from my readers.
