Gas station without pumps

2020 September 11

Edition 1.1 released today!

Filed under: Circuits course — gasstationwithoutpumps @ 11:49

I finally released the new version of the textbook today (https://leanpub.com/applied_analog_electronics)! The book is only slightly longer than the previous edition:

659 pages
337 figures
14 tables
515 index entries
162 references

The chapter on design report guidelines is available free as a separate publication:
https://leanpub.com/design_report_guidelines

At the same time as I released the new edition, I eliminated my COVID-19 sale, so the minimum price is now $7.99. I will still provide coupons for free copies to instructors who are considering using the textbook for a course.

I may have to do another version before January, as I have not checked the labs for BME 51A yet to see what modifications are needed for doing the labs at home. For example, I haven’t decided whether it is worth buying more blood-pressure cuffs and extra tubing, to have enough to ship one to everyone. I’ll probably have to give up on the drill-press instruction. I’d rather not skip the micrometer instruction, but that would mean buying a lot more micrometers, as we generally share 5 for the whole class.

One nice thing about selling through Leanpub is that purchasers get all future editions published through Leanpub as part of the price—the company is trying to encourage authors to publish book drafts through them, rather than waiting until the book is completely polished. That means that students who got earlier versions of the book will get this release for free, and anyone who buys now will get the benefit of future releases.

 

2020 September 9

Checked tandem-duplicate words in book

Filed under: Uncategorized — gasstationwithoutpumps @ 16:54

I got all the spelling checks done in the book today, and I noticed a “the the”, so I looked for all occurrences of that pair of words in the LaTeX files and fixed them.  I then decided to write a tandem-word finder and look for all tandem duplicate words in the LaTeX files.  There were about ten others.  I was only checking a line at a time, though, so I decided to also convert the PDF file to text and check that.  That found another 5 or 6 tandem duplicate words (which had crossed line boundaries in the LaTeX files, but not in the output PDF file).

There were a lot of false positives in the PDF file, because “the Thévenin” somehow got treated as if it had “the the” with a word boundary after the second “the”.  There were also a lot of false positives in tables where numbers were duplicated, and in description lists where the item head repeated the first word of the description.

What I’ve not decided yet is whether it is worth rewriting the program to look for duplicate words that cross line boundaries—the program would be a bit more powerful, but I’d need to keep better track of the position in the file to be able to pinpoint where each error occurs, as I would not want to point to a full page as the location of the error.  (A rough sketch of that approach appears after the program below.)

Here is the code I wrote (edited 2020 Sept 10 to include page or line numbers):

#!/usr/bin/env python3
"""Report tandem duplicate words (like "the the") in text, LaTeX, or PDF files."""

import io
import re
import sys

import pdftotext	# requires installing poppler and pdftotext

# two identical whitespace-separated words in a row, case-insensitive
tandem_str = r"\b(\S+\b)\s+\b\1\b"
tandem_re = re.compile(tandem_str, re.IGNORECASE)

def lines_of_input(filenames):
    """Yield (location, line) pairs from stdin or from each named file.
    PDF files are converted to text and scanned a page at a time;
    other files are scanned a line at a time.
    """
    if not filenames:
        for line in sys.stdin:
            yield "--stdin", line
    else:
        for filename in filenames:
            if filename.endswith(".pdf"):
                with open(filename, "rb") as file:
                    pdf = pdftotext.PDF(file)
                    for pagenum, page in enumerate(pdf):
                        for line in io.StringIO(page):
                            yield f'{filename} page {pagenum}', line
            else:
                with open(filename, 'r') as file:
                    for linenum, line in enumerate(file):
                        yield f'{filename} line {linenum}', line


for filename, line in lines_of_input(sys.argv[1:]):
    # print("DEBUG:", filename, line, file=sys.stderr)
    if tandem_re.search(line) is not None:
        print(filename, ":", line.strip())
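For anyone curious, here is a rough sketch (untested, and not something I have adopted) of how the cross-line version might work: read each file whole, record the offset at which each line starts, and use bisect to map each match back to a line number.

#!/usr/bin/env python3
# Sketch of a cross-line tandem checker (hypothetical, not the program above):
# reading the whole file catches duplicates that span line breaks, and the
# line-start offsets let each match be mapped back to a line number.

import bisect
import re
import sys

tandem_re = re.compile(r"\b(\S+\b)\s+\b\1\b", re.IGNORECASE)

for filename in sys.argv[1:]:
    with open(filename) as file:
        text = file.read()
    # offsets of the first character of each line, for offset-to-line lookup
    line_starts = [0]
    for offset, char in enumerate(text):
        if char == "\n":
            line_starts.append(offset + 1)
    for match in tandem_re.finditer(text):
        linenum = bisect.bisect_right(line_starts, match.start())   # 1-based line number
        print(f"{filename} line {linenum}:", match.group(0))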

2020 September 6

Checked URLs in book

Filed under: Uncategorized — gasstationwithoutpumps @ 12:20

I got all the URLs in my book checked yesterday.  Writing a program to extract the links and test them was not very difficult, though some of the links that work fine from Chrome or Preview mysteriously would not work from my link-checking program.

As it turns out, my son was writing me a link-checking program at the same time. His program used pdfminer.six instead of PyPDF2 and relied on newer features of Python (I still had Python 3.5.5 installed on this laptop, and f-strings only came in with Python 3.6), so I had to install a new version of Python with Anaconda to get his program to run. One difference between our programs is that he collected all the URLs and reduced them to a set of unique URLs (reducing 259 to 206), while I processed the URLs as they were encountered. His program is faster, but mine let me keep track of where in the book each URL occurred.
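A compromise between the two approaches would be to check each unique URL only once but still remember every page on which it appears. Here is a minimal sketch of that idea (hypothetical code, not either of our actual programs; it reuses the pdf_to_urls generator defined in the program below, and book.pdf is a placeholder filename):

#!/usr/bin/env python3
# Sketch: check each unique URL once, but keep the list of pages it appears on
# so that failures can still be reported with their locations in the book.
from collections import defaultdict

import requests

def url_seems_ok(url):
    """Loose check: try head, fall back to get; treat any status below 400 as OK."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        if resp.status_code >= 400:
            resp = requests.get(url, allow_redirects=True, timeout=10)
        return resp.status_code < 400
    except requests.RequestException:
        return False

pages_for_url = defaultdict(list)
for page_num, url in pdf_to_urls("book.pdf"):   # pdf_to_urls as defined in the program below
    pages_for_url[url].append(page_num)

for url, pages in sorted(pages_for_url.items()):
    if not url_seems_ok(url):
        print(url, "failed; appears on pages", pages)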

The checks we did are slightly different, so the programs picked up slightly different sets of bad URLs. He did just a “get” with a specified agent and stream set to True, while I tried “head”, then “get” if “head” failed, then “post” if “get” failed, but with default parameter values.  We also had different ways of detecting redirection (he used the url field of the response, while I used headers[“location”]), which got different redirection information. It might be worthwhile to write a better check program that does more detailed checking, but this pair of programs was enough to check the book, and I don’t want to waste more time on it.
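For what it’s worth, the two redirection checks report different things, roughly like this (a sketch with a made-up URL, not code from either program): the location header of an unfollowed redirect gives only the next hop, while the url field of a followed request gives the final destination.

import requests

# With redirects not followed, the Location header gives only the next hop.
resp = requests.get("http://example.com/old-page", allow_redirects=False, timeout=10)
if resp.is_redirect:
    print("Location header:", resp.headers["location"])

# With redirects followed (the default for get), resp.url is the final URL.
resp = requests.get("http://example.com/old-page", timeout=10)
print("final URL:", resp.url)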

I had to modify a number of the URLs for sites that had moved—in some cases having to Google some of the content in order to find where it had now been hidden. I wasted a lot of time trying to track one source of information back to a primary source, and finally gave up, relying on the (moved) secondary source that I had been citing before.

A surprising number of sites are only accessible with http and not https, and I ended up with eight URLs that I could not get to work in the link-check program, but that worked fine from the PDF file and from Chrome. Some of them worked from my son’s program as well, but his failed on some where mine succeeded.
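One plausible explanation for links that work from a browser but not from a script is that some servers reject the default python-requests user agent (my son’s program already passed a specified agent).  Passing a browser-like agent with requests is easy; the header string below is just an example:

import requests

# Pretend to be an ordinary browser; some servers refuse the default
# "python-requests/x.y.z" user agent but serve the same page to browsers.
headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36"}
resp = requests.get("http://example.com/", headers=headers, stream=True, timeout=10)
print(resp.status_code, resp.url)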

Here is the code I wrote:

#!/usr/bin/env python3

import argparse
import sys

import PyPDF2
import requests


def parse_args():
    """Parse the options and return what argparse does:
        a structure whose fields are the possible options
    """
    parser = argparse.ArgumentParser( description= __doc__, formatter_class = argparse.ArgumentDefaultsHelpFormatter )
    parser.add_argument("filenames", type=str, nargs="*",
            default=[],
            help="""names of files to check
            """)
    options=parser.parse_args()
    return options

    
def pdf_to_urls(pdf_file_name):
    """Yields (page_number, url) pairs for the hyperlinks in the file named by pdf_file_name.
    """
    pdf = PyPDF2.PdfFileReader(pdf_file_name)
    for page_num in range(pdf.numPages):
        pdfPage = pdf.getPage(page_num)
        pageObject = pdfPage.getObject()
        if '/Annots' in pageObject.keys():
            ann = pageObject['/Annots']
            for a in ann:
                u = a.getObject()
                if '/URI' in u['/A']:
                    yield (page_num, u['/A']['/URI'])


# HTTP status codes from https://developer.mozilla.org/en-US/docs/Web/HTTP/Status
HTTP_codes = {
    100:"Continue"
    , 101:"Switching Protocol"
    , 102:"Processing (WebDAV)"
    , 102:"Early Hints"
    , 200:"OK"
    , 201:"Created"
    , 202:"Accepted"
    , 203:"Non-Authoritative Information"
    , 204:"No Content"
    , 205:"Reset Content"
    , 206:"Partial Content"
    , 207:"Multi-Status (WebDAV)"
    , 208:"Already Reported (WebDAV)"
    , 226:"IM Used (HTTP Delta encoding)"
    , 300:"Multiple Choice"
    , 301:"Moved Permanently"
    , 302:"Found"
    , 303:"See Other"
    , 304:"Not Modified"
    , 305:"Use Proxy (deprecated)"
    , 306:"unused"
    , 307:"Temporary Redirect"
    , 308:"Permanent Redirect"
    , 400:"Bad Request"
    , 401:"Unauthorized"
    , 402:"Payment Required"
    , 403:"Forbidden"
    , 404:"Not Found"
    , 405:"Method Not Allowed"
    , 406:"Not Acceptable"
    , 407:"Proxy Authentication Required"
    , 408:"Request Timeout"
    , 409:"Conflict"
    , 410:"Gone"
    , 411:"Length Required"
    , 412:"Precondition Failed"
    , 413:"Payload Too Large"
    , 414:"URI Too Long"
    , 415:"Unsupported Media Type"
    , 416:"Range Not Satisfiable"
    , 417:"Expectation Failed"
    , 418:"I'm a teapot"
    , 421:"Misdirected Request"
    , 422:"Unprocessable Entity (WebDAV)"
    , 423:"Locked (WebDAV)"
    , 424:"Failed Dependency (WebDAV)"
    , 425:"Too Early"
    , 426:"Upgrade Required"
    , 428:"Precondition Required"
    , 429:"Too Many Requests"
    , 431:"Request Header Fields Too Large"
    , 451:"Unavailable for Legal Reasons"
    , 500:"Internal Server Error"
    , 501:"Not Implemented"
    , 502:"Bad Gateway"
    , 503:"Service Unavailable"
    , 504:"Gateway Timeout"
    , 505:"HTTP Version Not Supported"
    , 506:"Variant Also Negotiates"
    , 507:"Insufficient Storage (WebDAV)"
    , 508:"Loop Detected (WebDAV)"
    , 510:"Not Extended"
    , 511:"Network Authentication Required"
    }



options=parse_args()
for pdf_name in options.filenames:
    print("checking",pdf_name,file=sys.stderr)
    for page_num,url in pdf_to_urls(pdf_name):
        print ("checking page",page_num, url, file=sys.stderr)
        req = None
        try:
            req = requests.head(url, verify=False)      # don't check SSL certificates
            if req.status_code in [403,405,406]: raise RuntimeError(HTTP_codes[req.status_code])
        except:
            print("--head failed, trying get",file=sys.stderr)
            try: 
                req = requests.get(url)
                if req.status_code in [403,405,406]: raise RuntimeError(HTTP_codes[req.status_code])
            except: 
                print("----get failed, trying post",file=sys.stderr)
                try: req = requests.post(url)
                except: pass

        if req is None:
            print("page",page_num, url, "requests failed with no return")
            print("!!!", url, "requests failed with no return", file=sys.stderr)
            continue

        if req.status_code not in (200,302):
            try: 
               code_meaning = HTTP_codes[req.status_code]
            except: 
               code_meaning = "Unknown code!!"
            
            try:
                new_url = req.headers["location"]
            except:
                new_url=url
            
            if url==new_url:
                print("page",page_num, url, req.status_code, code_meaning)
                print("!!!", url, req.status_code, code_meaning, file=sys.stderr)
            else:
                print("OK? page",page_num, url, "moved to", new_url, req.status_code, code_meaning)
                print("!!!", url, "moved to", new_url, req.status_code, code_meaning, file=sys.stderr)

2020 September 3

Shakespeare cookies (whole wheat)

Filed under: Uncategorized — gasstationwithoutpumps @ 17:27

I last baked Shakespeare cookies 11 months ago (for the Santa Cruz Shakespeare trip to the Ashland Shakespeare Festival), using version 7 of the cookie cutters I designed:

Version 7 of the Shakespeare cookie cutter uses a simple outline for the cutter and a separate stamp for adding the facial features.

I’m going to make some more today using a similar recipe (using whole-wheat pastry flour rather than white pastry flour is the only change, other than shrinking the size of the batch):

½ cup butter
1 cup whole-wheat pastry flour
¼ cup powdered sugar

Sift the flour and sugar together.  Soften butter slightly in microwave, beat into flour-sugar mixture with a fork, and shape the dough into a smooth ball by hand. I refrigerated the dough for a few hours to reharden the butter, but this turned out to be a mistake—I had to warm the dough with my hands to make it soft enough to roll out.

On a silicone baking sheet, roll out dough to 6mm thick (using cookie sticks to set the thickness).  Cut the cookie outlines and remove dough between cookies.  Stamp the facial features. Put silicone sheet on an aluminum baking sheet. Bake at 300°F for about 60 minutes.

I made 9 cookies with this recipe (plus a little bit left over to make a small rectangular cookie).

Here are the best 3 of the 9 whole-wheat shortbread cookies. They taste a bit like the digestive biscuits we used to be sent from England.

2020 September 2

Santa Cruz Shakespeare’s Richard III

Santa Cruz Shakespeare is ending their season with a free Zoom reading of Richard III (after 9 weeks of doing Henry VI Parts 1, 2, and 3) on Wed 2020 September 9, 6:30–9 p.m.  They spent a lot getting a Zoom license for 1000 viewers, and they’ve been running around 500 viewers for the Henry VI plays, so they’d like to double that for the more popular Richard III.  This is also the only one of the plays that they are doing as a single installment—the others were broken up into three evenings, with scholarly discussion after each third of each play.

To get the Zoom link for the play, register for the free webinar at

https://ucsc.zoom.us/webinar/register/WN_p_11ndXkRsq7G_zsFnjN4Q

I’ve been watching the Henry VI readings, and they have been doing a good job of using the limited capabilities of Zoom to present these rarely performed plays.

Last weekend I saw a rather different use of Zoom (and OBS and YouTube) by SFShakes to do a full live performance of King Lear.  That was technically much more ambitious, with each actor having their own camera and green screen, and one person with a lot of monitor space busy compositing them live onto the appropriate backgrounds. Much of their rehearsal time went into blocking and marking positions and sightlines, as the actors could not see each other when performing.  There is a good behind-the-scenes blog post at https://sfshakes.wordpress.com/2020/0/. Unfortunately, the performances were all in one weekend, so there was no way to get word-of-mouth advertising out for the performance after seeing it. King Lear continues (I don’t know for how long) as live performances on https://www.youtube.com/user/SFShakes, Sat at 7pm, Sun at 4pm, and Mon at 4pm.

Santa Cruz Shakespeare went for a broader sweep (4 tightly coupled history plays), but more modest production (seated actors doing a reading directly on Zoom).  Their rehearsal time seems to have been spent more on understanding the lines and verbal delivery, with minimal props and costuming.

Later this week I’ll be seeing UCSB’s Naked Shakes performance of Immortal Longings (a combination of Julius Caesar with Antony and Cleopatra). Free tickets from https://www.theaterdance.ucsb.edu/news/event/826.
