Gas station without pumps

2017 January 29

Thermistor lab graded

Filed under: Circuits course — gasstationwithoutpumps @ 22:14

I just spent my entire weekend grading 37 design reports for the thermistor lab—it has not been a fun weekend.  The coming week or two will be grading hell, as I have homework due for the 72-person class Monday, Wednesday, and Friday (with another lab report due next Monday), and no grader or TA.

This lab report was the first of the quarter, so there were a lot more problems with the submissions than I expect to see on future lab reports.  I’ve tried to collect some of my notes on the more common writing errors for this blog post, with the intent of trying to work them into the chapter on lab reports in the textbook:

  • Some students had wordy introductions. I want reports to start with a clear, concise statement of the engineering goal, not a dump of any random factoid that might be vaguely related to the report.
  • Reports should be standalone—not referring to homework. If something in the homework is needed, incorporate it!
  • Use paragraphs with one topic each. Every paragraph should start with a topic sentence, and the rest of the paragraph (if there is any) should support and amplify that topic sentence. It is better to have one-sentence paragraphs than to ramble from topic to topic without a paragraph break.
  • Fit your model to your data, not your data to a model. You should never be changing your data to make it fit your theory—you should be changing your theory to fit your data. If you say you are fitting your data to your model, you are claiming to commit scientific fraud.
  • Best-fit curves are not necessarily lines—students don’t have a “line of best fit” in this lab, because the models we’re fitting are nonlinear (see the fitting sketch after this list).
  • Figure captions should be paragraphs below the figure, not noun phrases above it. Any anomalies or interesting features of the figure should be pointed out in the caption.  Most of the crucial content of the report should be in the figures and captions, because that is all 90% of readers ever look at in a science or engineering paper.
  • Refer to figures and equations by number, rather than “schematic below” or “equation above”.
  • Don’t use screenshots for schematics or gnuplot output—export graphics properly as PDF files and incorporate them into the report so that they can be printed at full resolution even when scaled.
  • Many students use way too much passive voice.  Using passive voice is a way to hide who did something or deny responsibility (see Nixon’s “mistakes were made”) and should not be necessary in a design report.
  • Use past tense for things that have been done, not present tense.  Also, “would” is not some formal version of the past tense—it is a marker for the subjunctive mood in English, which has a whole lot of different uses.  In technical writing, the most common use of subjunctive is for “contrary to fact”.  If you say “I would put the thermometer in the water”, I immediately want to know why you don’t—I expect to see the sentence continue with “, but I won’t because …”
  • “Software” is an uncountable noun, which means that it can’t be used with the indefinite article “a”.  There are a lot of uncountable nouns in English, and there isn’t much sense to which words are countable and which aren’t—even closely related languages with similar notions of countable and uncountable nouns mark different nouns as uncountable.  I’ve only found one dictionary that marks countability of English nouns—the Oxford Dictionary of American English, which is available used for very little money.
  • Equations are part of a sentence (as a noun phrase), not random blobs that can be sprinkled anywhere in the paper.  No equation should appear without a textual explanation of its meaning, and the meaning of its variables.
  • There was a lot of misuse of “directly proportional” and “inversely proportional”: a directly proportional relationship plots as a straight line through zero. The voltage output in the thermistor lab is not directly proportional to temperature—it is increasing with temperature, but the function is sigmoidal, not linear.  Similarly, an inversely proportional relationship between x and y is a direct proportionality between 1/x and y, and it plots as a hyperbola. The resistance of a thermistor is not inversely proportional to temperature, as the resistance is proportional to e^{B/T}, not to B/T.
  • Read the data sheet carefully!  A lot of students claimed that their thermometers were good to 150°C, but the data sheet said that the thermistor they were using had an operating temperature range of –40°C to 105°C—a maximum of 105°C, not 150°C.
  • Students need to use the right metric prefixes.  For example, “kilo” is a lower-case “k” not an upper-case “K”.  This becomes even more urgent for “micro” (µ), “milli” (m), and “mega” (M).  At least one report needs to be redone because the students claimed a value around 200MΩ, when they (probably) meant 200mΩ.  What’s a factor of a billion between friends?
  • Some students are clearly not used to using the prefixes, because I saw a lot of values around 0.0001kΩ, which should have been written 0.1Ω (or even 100mΩ).  Even worse, a lot of students just wrote 0.0001, with no indication what the units were (that triggered a number of “redo” grades on the reports).
  • “Lastly” is not a word—“last” is already an adverb. The same goes for “first”, “second”, and “third”. Perhaps it is easier to keep this in mind if you think of “next”, which is in the same class of words that are both adjectives and adverbs. For some reason, students never write “nextly”.
  • The × symbol (\times in LaTeX) is only used for cross products, not for scalar multiplication (except in elementary school). The normal way to show scalar multiplication is juxtaposition of the variables, with no operator symbol.
  • “Before” and “after” make no sense in the voltage divider circuit. You can sometimes use those terms in a block diagram that has a clearly directed information flow from inputs to output, but not for talking about the two legs of a voltage divider.
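
Since several of these items come back to the same point—the models in this lab are nonlinear, and you adjust the model parameters to fit your data—here is a minimal sketch of such a fit.  It uses Python with scipy rather than the gnuplot we use in class, and the measurements and initial parameter guesses are made up for illustration, so treat it as a sketch of the technique, not as the course’s fitting script:

    # Minimal sketch: fit the nonlinear thermistor model to (T, R) data.
    # All numbers below are made up for illustration.
    import numpy as np
    from scipy.optimize import curve_fit

    def thermistor_resistance(T, R_inf, B):
        """Simplified thermistor model R(T) = R_inf * exp(B/T), T in kelvin."""
        return R_inf * np.exp(B / T)

    # Hypothetical measurements: temperatures in kelvin, resistances in ohms.
    T_data = np.array([273.15, 283.15, 293.15, 303.15, 313.15, 323.15])
    R_data = np.array([33700., 20240., 12570., 8060., 5300., 3590.])

    # Fit the model to the data (never the data to the model): curve_fit
    # adjusts the parameters R_inf and B to minimize the residuals.
    (R_inf_fit, B_fit), _ = curve_fit(thermistor_resistance, T_data, R_data,
                                      p0=(0.02, 4000.))  # rough initial guesses
    print(f"R_inf = {R_inf_fit:.3g} ohms, B = {B_fit:.4g} K")

    # The voltage-divider output V_out = V_dd * R_series / (R_series + R(T))
    # is then sigmoidal in temperature, not a straight line through zero.

The fitted parameters then belong in the report with their units, alongside the model written out as a numbered equation.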


2016 November 3

Writing feedback

Filed under: Uncategorized — gasstationwithoutpumps @ 09:15

In his post Omics! Omics!: 10 Years of Omicing!, which reflects on the influences on his writing, Keith Robison says:

The other person who deserves nearly infinite credit for making me think about my word choices is my father.  Sometimes he strays into being a pedant and enforcing rules which have fallen by the wayside, but he did make me think when I spoke and wrote.  I’ve seen some guidelines for helping students that counsel picking only a few major errors to mark, for fear of scarring the psyche of young writers.  Dad didn’t subscribe to that viewpoint in the least, and I’m the better for it.  In high school I treasured getting back a draft with red ink all over it; it’s a service I missed in college and beyond.  That meant he had read it and thought about it, and my work was always better for it.

I think that this attitude is one that we need to see more of, both among students and among faculty. I put a lot of time into trying to provide thorough feedback on student writing, even though I know that it is not always appreciated.  I also know a number of faculty who bemoan the low quality of student writing, but spend almost no time giving detailed feedback so that the students can improve.

There are times for triage—concentrating on the students whose work could benefit most from editing, while providing only minimal feedback to those who produce word salad or whose writing is very good—but I prefer to try to provide similar amounts of feedback for all students.  For the word-salad students, my comments are mainly on sentence structure and paragraph structure, to try to have their writing make sense at least at a local level. The students in the middle get a mixture of comments, from punctuation to the overall structure of the paper, while the top students mainly get comments on trivial little details that can polish their already good writing.

2016 September 29

GRE Analytic Writing favors bullshitters

Filed under: Uncategorized — gasstationwithoutpumps @ 22:33

My son recently took the GRE exam to apply for grad school in computer science.  The test has changed since I took it in 1973, but it still looks a lot like the SAT exam, which has also changed since I took it in 1970.  The multiple-choice section is still primarily 9th and 10th grade material, so it is a bit surprising that only 5.5% of CS students, 11.4% of physics students, and 15.3% of math students get 170, the highest possible score, on the quantitative reasoning section. [All data in this post from https://www.ets.org/s/gre/pdf/gre_guide_table4.pdf]

The “quantitative reasoning” questions are primarily algebra and reading simple graphs, so the banking and finance students do best with 15.5% getting 170. The scores would be more valuable for STEM grad school admissions if they included some college-level math (calculus, ODEs, combinatorics, statistics, … ), but the general GRE has always been based on an extremely low math level.

The verbal scores are perhaps less surprising, with philosophy being the only major with over 3% getting a 170 (5.1%), and with some of the education and business majors doing the worst—except for computer science, where 8% get the lowest scores (130–134), with the next worst major being accounting with 2.7% having 130–134.  I wonder how much of the difference here is due to the number of native and non-native speakers, as computer science certainly attracts a lot more foreign students than most majors.

I was most interested in looking at the “Analytical Writing” scores, since I’ve not seen much correlation between them and the quality of student writing on the grad school applications I’ve reviewed over the last decade.  I was interested in two things: the mean score and the fraction that got 4.5 or better (the fraction getting 5.5 or better is too small to be worth looking at).  Again computer science and electrical engineering stand out as having extremely low means and small fractions of students having 4.5 or better.  I have not found any analyses of the GRE scores that separate native speakers of English from non-native ones—I wonder how much of the effect we see here is due to being native speakers and how much is due to curricular differences.

Here is the table of all the broad categories in the order that ETS provided them:

Subject | Mean writing | %ile ≥4.5
Computer and Information Sciences | 3.1 | 8.8
Electrical and Electronics | 3.1 | 6.7
ENGINEERING | 3.3 | 12.6
Civil | 3.3 | 13.2
Industrial | 3.3 | 9.8
Mechanical | 3.3 | 12.3
PHYSICAL SCIENCES | 3.4 | 17.3
Accounting | 3.4 | 12.3
Banking and Finance | 3.4 | 10.7
Natural Sciences — Other | 3.5 | 14.8
Materials | 3.5 | 19.4
BUSINESS | 3.5 | 15.2
Other | 3.5 | 14.7
Agriculture, Natural Res. & Conservation | 3.6 | 18.0
Mathematical Sciences | 3.6 | 21.0
Chemical | 3.6 | 21.6
Early Childhood | 3.6 | 16.0
Student Counseling and Personnel Srvcs | 3.6 | 17.3
Business Admin and Management | 3.6 | 17.8
Health and Medical Sciences | 3.7 | 19.0
Chemistry | 3.7 | 23.8
Other | 3.7 | 23.1
Other | 3.7 | 21.3
Arts — Performance and Studio | 3.7 | 24.3
Administration | 3.7 | 21.9
Elementary | 3.7 | 21.3
Special | 3.7 | 19.5
Other | 3.7 | 23.7
LIFE SCIENCES | 3.8 | 21.3
Biological & Biomedical Sciences | 3.8 | 26.0
Earth, Atmospheric, and Marine Sciences | 3.8 | 25.4
Physics and Astronomy | 3.8 | 26.8
Economics | 3.8 | 27.8
Sociology | 3.8 | 28.2
EDUCATION | 3.8 | 23.9
Curriculum and Instruction | 3.8 | 21.4
Evaluation and Research | 3.8 | 23.6
SOCIAL SCIENCES | 3.9 | 29.1
Psychology | 3.9 | 26.6
Higher | 3.9 | 29.7
Anthropology and Archaeology | 4.0 | 34.7
Foreign Languages and Literatures | 4.0 | 37.2
Secondary | 4.0 | 33.9
Political Science | 4.1 | 42.9
ARTS AND HUMANITIES | 4.1 | 40.8
Arts — History, Theory, and Criticism | 4.1 | 38.5
History | 4.1 | 40.4
Other | 4.1 | 38.6
English Language and Literature | 4.2 | 45.2
Philosophy | 4.3 | 52.7
OTHER | |
Architecture and Environmental Design | 3.4 | 13.1
Communications and Journalism | 3.7 | 23.3
Family and Consumer Sciences | 3.7 | 20.7
Library and Archival Sciences | 4.0 | 34.3
Public Administration | 3.8 | 23.7
Religion and Theology | 4.2 | 46.5
Social Work | 3.6 | 16.7

The table is more interesting in sorted order (say by %ile ≥4.5 on Analytical Writing):

Subject | Mean writing | %ile ≥4.5
Electrical and Electronics | 3.1 | 6.7
Computer and Information Sciences | 3.1 | 8.8
Industrial | 3.3 | 9.8
Banking and Finance | 3.4 | 10.7
Mechanical | 3.3 | 12.3
Accounting | 3.4 | 12.3
ENGINEERING | 3.3 | 12.6
Architecture and Environmental Design | 3.4 | 13.1
Civil | 3.3 | 13.2
Other | 3.5 | 14.7
Natural Sciences — Other | 3.5 | 14.8
BUSINESS | 3.5 | 15.2
Early Childhood | 3.6 | 16.0
Social Work | 3.6 | 16.7
PHYSICAL SCIENCES | 3.4 | 17.3
Student Counseling and Personnel Srvcs | 3.6 | 17.3
Business Admin and Management | 3.6 | 17.8
Agriculture, Natural Res. & Conservation | 3.6 | 18.0
Health and Medical Sciences | 3.7 | 19.0
Materials | 3.5 | 19.4
Special | 3.7 | 19.5
Family and Consumer Sciences | 3.7 | 20.7
Mathematical Sciences | 3.6 | 21.0
Other | 3.7 | 21.3
Elementary | 3.7 | 21.3
LIFE SCIENCES | 3.8 | 21.3
Curriculum and Instruction | 3.8 | 21.4
Chemical | 3.6 | 21.6
Administration | 3.7 | 21.9
Other | 3.7 | 23.1
Communications and Journalism | 3.7 | 23.3
Evaluation and Research | 3.8 | 23.6
Other | 3.7 | 23.7
Public Administration | 3.8 | 23.7
Chemistry | 3.7 | 23.8
EDUCATION | 3.8 | 23.9
Arts — Performance and Studio | 3.7 | 24.3
Earth, Atmospheric, and Marine Sciences | 3.8 | 25.4
Biological & Biomedical Sciences | 3.8 | 26.0
Psychology | 3.9 | 26.6
Physics and Astronomy | 3.8 | 26.8
Economics | 3.8 | 27.8
Sociology | 3.8 | 28.2
SOCIAL SCIENCES | 3.9 | 29.1
Higher | 3.9 | 29.7
Secondary | 4.0 | 33.9
Library and Archival Sciences | 4.0 | 34.3
Anthropology and Archaeology | 4.0 | 34.7
Foreign Languages and Literatures | 4.0 | 37.2
Arts — History, Theory, and Criticism | 4.1 | 38.5
Other | 4.1 | 38.6
History | 4.1 | 40.4
ARTS AND HUMANITIES | 4.1 | 40.8
Political Science | 4.1 | 42.9
English Language and Literature | 4.2 | 45.2
Religion and Theology | 4.2 | 46.5
Philosophy | 4.3 | 52.7
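
Re-sorting the table took no real effort—here is a minimal sketch in Python with pandas, assuming the ETS data has been typed into a DataFrame (the column names are mine, not ETS’s, and most rows are elided):

    import pandas as pd

    # A few rows of the ETS table; the rest are elided here.
    scores = pd.DataFrame(
        [("Computer and Information Sciences", 3.1, 8.8),
         ("Electrical and Electronics", 3.1, 6.7),
         ("Philosophy", 4.3, 52.7)],
        columns=["subject", "mean_writing", "pct_at_least_4_5"])

    # Sort by the fraction of test-takers scoring 4.5 or better.
    print(scores.sort_values("pct_at_least_4_5").to_string(index=False))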

Note that all the fields that call for precise, mathematical reasoning do poorly on this test, while those that call for fuzzy, emotional arguments with no mathematical foundation do well—the test is designed to favor con men. I believe that this is partly baked into the prompts (see the pool of issue topics and, to a lesser extent, the pool of argument topics), partly the result of the writing being done entirely without access to facts (benefitting those who BS over those who prefer reasoning supported with well-sourced facts), and partly the result of having graders who are easily swayed by con men.

I believe that most of the graders are trained in the humanities, and so are more swayed by familiar vocabulary and rhetoric.  If ETS had science and engineering professors doing the grading (which they would have a hard time getting at the low rates they pay the graders), I think that the writing scores would come out quite different.

Of course, there are curricular differences, and science and engineering faculty are mostly not paying enough attention to their students’ writing (and I can well believe that CS and EE are the worst at that). But I don’t think that even engineering students who do very, very good engineering writing will necessarily score well on the GRE analytical writing test, which seems to favor rapid writing in only one style.

I will continue to give relatively little weight to Analytical Writing GRE scores in graduate admissions. The untimed essays that the students write for their applications are much closer to the sort of writing that they will be expected to do in grad school, and so are much more indicative of whether their writing skills are adequate to the job. I will continue to interpret low GRE scores as a warning sign to look more closely at the essays for signs that the students are not up to the task of writing a thesis, but high GRE writing scores are not a strong recommendation—I don’t want grad students who are good at bullshitting.

2016 September 15

Research Report | Siemens Competition

Filed under: Uncategorized — gasstationwithoutpumps @ 12:23

I was reading the guidelines for a research report for the Siemens competition for high-school science projects.  Overall, the guidelines are good, but I have one quibble with their description of the first section:

Introduction: the “why” section (2-3 pages)

  • Start with a broad picture of the problem you have chosen to study and why it is interesting. Provide a brief review of pertinent scientific literature, describe what information is missing and how your work addresses this gap in the literature. Previous relevant publications and patents must be properly cited in the text of the Research Report and included in the Reference section of your report.
  • Describe the specific problem to be solved, the research question to be answered, the hypothesis(es) to be tested, or the product to be developed (if any). Provide a brief rationale for the research and why the work is important.

I believe that they are encouraging a common mistake: burying the lede. Theses, grant proposals, student projects, and papers should start with a direct statement of the research question or design goal of the project, then provide the “broad picture” and “why it is interesting”. I’m very tired of wading through a page or more of mush trying to find out what a student project (or published research paper) is about.

Swapping the two points that they put in the first section would improve the quality immensely.

2016 July 6

Outcomes assessment

Filed under: Uncategorized — gasstationwithoutpumps @ 21:47

In Confessions of a Community College Dean: The Whole and the Sum of Its Parts, “Dean Dad” wrote about outcomes assessment:

The first was a discussion on campus of the difference between the “course mapping” version of outcomes assessment and the “capstone” version.  Briefly, the first version implies locating each desired student outcome in a given class—“written communication” in “English composition,” say—and then demonstrating that each class achieves its role.  The idea is that if a student fulfills the various distribution requirements, and each requirement is tied to a given outcome, then the student will have achieved the outcomes by the end of the degree.

Except that it doesn’t always work that way.  Those of us who teach (or taught) in disciplines outside of English have had the repeated experience of getting horrible papers from students who passed—and even did well in—freshman comp.  For whatever reason, the skill that the requirement was supposed to impart somehow didn’t carry over.  Given that the purpose of “general education” is precisely to carry over, the ubiquity of that experience suggests a flaw in the model.  The whole doesn’t necessarily equal the sum of the parts.

In a “capstone” model, students in an end-of-sequence course do work that gets assessed against the desired overall outcomes.  Can the student in the 200 level history class write a paper showing reasonable command of sources?  The capstone approach recognizes that the point of an education isn’t the serial checking of boxes, but the acquisition and refinement of skills and knowledge that can transfer beyond their original source.  

I have certainly experienced the phenomenon of students doing well in freshman writing courses, but being unable to write reasonably in upper-division and graduate engineering courses—indeed, that is why I insisted on UCSC’s computer engineering curriculum requiring a tech writing course 30 years ago (and teaching it for about 14 years). I continue to teach writing-intensive courses—my current main course, Applied Electronics for Bioengineers, requires about 5–10 pages of writing from each pair of partners each week (though that load will drop by half next year, as the course is split into two quarters). The writing level of students increases noticeably during the quarter, though a number of students continue to have problems with organization, with paragraph structure, with grammatical sentences, and with punctuation (particularly commas).

But evaluating writing just once in a capstone course is no solution—that just invites a lowering of standards so that the failure rate is not too high. Nor can one guarantee that capstones will necessarily be a good check of all the desired outcomes. Indeed, one of the capstone options for the bioengineering degree at UCSC does not involve major writing—the results are presented orally, and oral presentations are frequent in the course.

I recently wrote an evaluation of the “Program Learning Outcomes” (PLOs) for the bioinformatics program at UCSC (and refused to write one for the bioengineering program—it was hard enough getting the info needed for the bioinformatics assessment).  The assessment falls more in the “various distribution requirements” camp than in the “capstone” camp.  We did not have much trouble showing that the PLOs were assessed thoroughly, largely because the PLOs were chosen to be things that the faculty really cared about and included in their course designs, rather than “wouldn’t it be nice if students magically acquired this” outcomes.

Here is the report (minus any attachments):
Program Learning Outcome Assessment
Bioinformatics BS program
Spring 2016
The bioinformatics program was asked to assess at least one of our Program Learning Outcomes (PLOs) [https://www.soe.ucsc.edu/departments/biomolecular-engineering/programs/bs-bioinformatics]:

A bioinformatics student completing the program should

  • have a detailed knowledge of statistics, computer science, biochemistry, and genetics;
  • be able to find and use information from a variety of sources, including books, journal articles, and online encyclopedias;
  • be able to design and conduct computational experiments, as well as to analyze and interpret data;
  • be able to apply their knowledge to write programs for answering research questions in biology;
  • be able to communicate problems, experiments, and design solutions in writing, orally, and as posters; and
  • be able to apply ethical reasoning to make decisions about engineering methods and solutions in a global, economic, environmental, and societal context.

Because the program graduates so few students, it did not seem very productive to examine the output of just the most recent graduating class—we would need a decade’s worth of output to get statistical significance, and any information about changes in the curriculum would be lost in viewing such a long time scale. Instead, we asked the faculty in our department who teach the required courses of the major how they assess the students for the objectives that they cover.

A Google form was used to collect the information.  The faculty were prompted:

Please complete a separate form for each course that you teach that is a required or elective course for the Bioinformatics major (select from list below).  Only courses that are required (or are part of a small list of constrained choices) matter here, since we are looking for guarantees that all students are meeting the PLOs, not that there is some elective path that would cover them.

Please provide a sentence or two describing how your course(s) provide evidence that the student has met the outcome.  Be explicit (what assignment in what course provides the evidence)!  Your course does not have to provide evidence for all the PLOs—one or two PLOs supported strongly in a course is more convincing.

Responses were collected for 7 courses: BME 80G Bioethics, BME 110 Computational Biology Tools, BME 130 Genomes, BME 185 Technical Writing for Bioengineers, BME 205 Bioinformatics models & algorithms, BME 211 Computational Systems Biology, and BME 230/L Computational Genomics.  Each of these courses is required, except BME 185 (bioinformatics students may take CMPE 185 instead), and only one of BME 211 and BME 230/L is required. Our hope is that all the PLOs are assessed thoroughly in these courses, so that we do not need to rely on courses outside our control for outcome assessment.

The responses to the questions are attached as a CSV file and will be summarized here [I’m not including the attachments in this blog version]. Despite the prompt, faculty did not always explain exactly how the outcome was assessed.

detailed knowledge of statistics, computer science, biochemistry, and genetics

All courses except BME 80G (bioethics) test some aspect of this objective. Most of the assignments in most of the courses depend heavily on this content knowledge, and the faculty are convinced that this objective is being adequately addressed.  Note that we did not include the courses from other departments that actually teach the fundamental material—just the courses within our department that rely on it.

able to find and use information from a variety of sources, including books, journal articles, and online encyclopedias;

All the courses rely on students gathering information from a variety of sources, with different levels of search and different levels of interpretation needed in each course. All courses have at least one assignment that assesses students’ ability to use information from a variety of sources, and most have several.  Again, because of the pervasive nature of the objective in all our courses,  the faculty have no concern that the outcome is being inadequately assessed.

able to design and conduct computational experiments, as well as to analyze and interpret data;

All the courses except Bioethics require some data analysis, and several require computational experiments, but only BME 211 and 230/L have the students doing extensive design of the experiments.

able to apply their knowledge to write programs for answering research questions in biology;

BME 80G (Bioethics) and BME 110 (Bioinformatics tools) do not require programming, and BME 185 (Technical writing) has minimal programming, but all the other courses require writing computer programs, and the programming tasks are all directly related to research questions.  In BME 211 and BME 230/L the questions are genuine open research questions, not just classroom exercises.

able to communicate problems, experiments, and design solutions in writing, orally, and as posters; and

All courses except BME 110 (Bioinformatics tools) require written reports, and several of the courses require oral presentation. Only BME 185 (Technical writing) requires poster presentation, so we may want to institute a poster requirement in one of the other courses, to provide more practice at this form of professional communication, as posters are particularly important at bioinformatics conferences.

able to apply ethical reasoning to make decisions about engineering methods and solutions in a global, economic, environmental, and societal context.

BME 80G (Bioethics) is specifically focused on this PLO and covers it thoroughly, with all the assessments in the course testing students’ ability to apply ethical reasoning.  There is also coverage of research and engineering ethics in BME 185 (Technical Writing).  Although most of the courses do not teach ethics, the writing assessment in each of the courses holds students to high standards of research citation and written acknowledgement of collaboration.

Overall, the faculty feel that the PLOs are more than adequately assessed by the existing courses, even without looking at assessments in obviously relevant courses for the objectives from outside the department (such as AMS 132 for statistical reasoning). Because so many of the objectives are repeatedly assessed in multiple courses, they see no point to collecting portfolios of student work to assess the objectives in yet another process.

Only poster presentation and ethical reasoning are assessed in a single course each, and since practical research ethics is assessed in almost every course, poster presentation is the one skill that might need to be reinforced in improvements to the curriculum.
