Gas station without pumps

2015 January 22

Kernel density estimates

Filed under: Uncategorized — gasstationwithoutpumps @ 22:29

In the senior thesis writing course, I suggested to the class that they replace the histograms that several students were using with kernel density estimates, as a better way to approximate the underlying probability distribution.  Histograms are designed to be easy to make by hand, not to convey the best possible estimate or picture of the probability density function. Now that we have computers to draw our graphs for us, we can use computational techniques that are too tedious to do by hand, but that provide better graphs: both better looking and less prone to artifacts.

The basic idea of kernel density estimation is simple: every data point is replaced by a narrow probability density centered at that point, and all the probability densities are averaged.  The narrow probability density function is called the kernel, and we are estimating a probability density function for the data, hence the name kernel density estimation.  The most commonly used kernel is a Gaussian distribution, which has two parameters: µ and σ. The mean µ is set to the data point, leaving the standard deviation σ as a parameter that can be used to control the estimation.  If σ is made large, then the kernels are very wide, and the overall density estimate will be very smooth and slowly changing. If σ is made small, then the kernels are narrow, and the density estimate will follow the data closely.
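
Written out for one-dimensional data points x_1, …, x_n, the Gaussian-kernel estimate is just the average of n Gaussian densities, one centered at each data point:

\hat{f}(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{\sigma\sqrt{2\pi}} e^{-(x - x_i)^2/(2\sigma^2)}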

The scipy Python package has a built-in function for creating kernel density estimates from a list or numpy array of data (in any number of dimensions). I used this function to create some illustrative plots of the differences between histograms and kernel density estimates.
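
For those who want to try it before reading the full script below, a minimal sketch of the call looks like this (the exponential sample here is just a stand-in for real data; scipy picks the bandwidth automatically by Scott's rule unless you override it):

from scipy import stats
import numpy as np

data = np.random.exponential(scale=100., size=100000)   # stand-in for real samples
kde = stats.gaussian_kde(data)                   # bandwidth chosen by Scott's rule
# kde = stats.gaussian_kde(data, bw_method=0.1)  # or set the bandwidth explicitly
grid = np.linspace(data.min(), data.max(), 500)
density = kde(grid)                              # estimated probability density on the grid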

This plot has two histograms and two kernel density estimates for a sample of 100,000 points. The blue dots are a histogram with bin width 1, and the bar graph uses bins slightly narrower than 5. The red line is the smooth curve from Gaussian kernel density estimation, and the green curve results from Gaussian kernel density estimation on transformed data, ln(x+40). Note that the kde plots are smoother than the histograms and less susceptible to boundary artifacts (most of the almost-5-wide bins contain 5 integers, but some contain only 4). Rescaling before computing the kde makes the effective smoothing width larger for large x values, where there are fewer data points.

With only 1000 points, the histograms get quite crude, but kde estimates are still quite good, particularly the “squished kde” which rescales the x axis before applying the kernel density estimate.

With even more data points from the simulation, the right-hand tail can be seen to be well approximated by a single exponential (a straight line on these semilog plots), so the kernel density estimates are doing a very good job of extrapolating the probability density estimates down to the region where there is only one data point every 10 integers.

Here is the source code I used to create the plots. Note that the squishing requires a compensation to the output of the kernel density computation to produce a probability density function that integrates to 1 on the original data space.
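
The compensation is the usual change-of-variables rule for probability densities: if y = g(x) = \ln(x+40) is the squishing function and \hat{f}_Y is the kernel density estimate computed on the squished values, then the estimate on the original scale is

\hat{f}_X(x) = \hat{f}_Y(g(x))\, g'(x) = \frac{\hat{f}_Y(\ln(x+40))}{x + 40}

which is exactly what the squish() and dsquish() functions below compute.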

#!/usr/bin/env python3

""" Reads a histogram from stdin
and outputs a smoothed probability density function to stdout
using Gaussian kernel density estimation

Input format:
  # comment lines are ignored
  First two columns are numbers:
	value	number_of_instances
  remaining columns are ignored.

Output format three columns:
  value	 p(value)  integral(x>=value)p(x)
"""

from __future__ import division, print_function

from scipy import stats
import numpy as np
import sys
import itertools
import matplotlib
import matplotlib.pyplot as plt

# values and counts are input histogram, with counts[i] instances of values[i]
values = []
counts = []
for line in sys.stdin:
    line=line.strip()
    if not line: continue
    if line.startswith("#"): continue
    fields = line.split()
    counts.append(int(fields[1]))
    values.append(float(fields[0]))

counts=np.array(counts)
values=np.array(values)

squish_shift = 40. # amount to shift data before taking log when squishing

def squish(data):
    """distortion function to make binning correspond better to density"""
    return np.log(data+squish_shift)

def dsquish(data):
    """derivative of squish(data)"""
    return 1./(data+squish_shift)

instances = np.fromiter(itertools.chain.from_iterable( [value]*num for value, num in zip(values,counts)), float)
squish_instances = np.fromiter(itertools.chain.from_iterable( [squish(value)]*num for value, num in zip(values,counts)), float)
num_points = len(squish_instances)

# print("DEBUG: instances shape=", instances.shape, file=sys.stderr)

min_v = min(values)
max_v = max(values)

squish_smoothed = stats.gaussian_kde(squish_instances)
smoothed = stats.gaussian_kde(instances)

step_size=0.5
grid = np.arange(max(step_size,min_v-10), max_v+10, step_size)

# print("DEBUG: grid=",grid, file=sys.stderr)

plt.xlabel("Length of longest ORF")
plt.ylabel("Probability density")
plt.title("Esitmates of probability density functions")

plt.ylim(0.01/num_points, 0.1)
plt.semilogy(values, counts/num_points, linestyle='None',marker=".", label="histogram bin_width=1")
plt.semilogy(grid,squish_smoothed(squish(grid))*dsquish(grid), label="squished kde")
plt.semilogy(grid,smoothed(grid), label="kde")
num_bins = int(5*num_points**0.25)
plt.hist(values, weights=counts, normed=True,log=True, bins=num_bins,
	label="histogram {} bins".format(num_bins))

plt.legend(loc="upper right")
plt.show()
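
The script expects pre-binned data on stdin (one value and one count per line). If you are starting from raw values, a small helper along these lines can produce that input (write_histogram is my own name for illustration, not part of the script above):

# Hypothetical helper: turn a list of raw integer values into the
# "value count" lines the script above reads from stdin.
from collections import Counter
import sys

def write_histogram(values, out=sys.stdout):
    for value, count in sorted(Counter(values).items()):
        out.write("{}\t{}\n".format(value, count))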

2013 July 20

Plotting histograms

Filed under: Uncategorized — gasstationwithoutpumps @ 17:10

In any sort of experimental work, I or my students end up plotting a lot of histograms (of alignment scores, cost functions, segment lengths, … ). What I usually want to see is a probability density function (so the scaling is independent of the number of data points sampled or the bin sizes used). Most of the students end up using some crude built-in histogram plotter (in R or Matplotlib), that ends up with difficult-to-interpret axes and bad bin sizes.

I spent a couple of days experimenting with different approaches to making a Python module that can convert a generic list of numbers into a list of points to plot with gnuplot or Matplotlib. I ended up with 3 different things to control: the number of bins to use, how wide to make each bin, and whether to plot the estimated density function as a step function or linearly interpolated.

If I have n samples, I set the default number of bins to num_bins = sqrt(n) + 1, which seems to give a good tradeoff between having so few bins that resolution is lost and so many that the shape of the distribution is buried in the sampling noise. The “+1” is just to ensure that truncating the estimate never results in 0 bins.

Most of my experimenting was in adjusting the bin widths. I came up with three main approaches, and one minor variant:

fixed-width
This is the standard histogram technique: the full range of values is split into num_bins equal intervals, and the instances of numbers are counted in the corresponding bins. This approach is very simple to program and very fast, as there is no need to sort the data (if the range is known) and bin selection is a trivial subscript computation. Since I’m projecting the range out a little from the largest and smallest numbers (based on the second largest and second smallest), I ended up sorting anyway. The fixed-width approach is very good for discrete distributions, as it can have lots of bins with 0 counts.

fixed-count
The fixed-count approach tries to make each bin have the same number of samples in it. The intent is to have finer resolution where there are a lot of samples, and coarser resolution where there are few samples. I implemented this approach by sorting the numbers and setting thresholds in a single sweep across the data. The fixed-count approach gives good resolution at the peaks of the density, but gets very coarse where the density is low. It does not leave any empty bins, so it is not as good for discrete distributions as the fixed-width approach.

tapered
The tapered approach is like the fixed-count approach, except that the desired counts taper off in the first and last few bins of the range. This was a rather clumsy attempt to get better resolution in the tails of a distribution.
fixed-area
This approach is a compromise between the other two, trying to keep the product of the count and the bin width roughly constant; a simplified sketch appears just after this list. I again implemented this by doing a sweep across the sorted numbers. The fixed-area approach provides a useful compromise, giving reasonable resolution both at the peaks and on the tails of continuous distributions, but (like the fixed-count method) it does not handle discrete distributions very well, since the density estimate can’t go to zero inside the range of the data.
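
Here is a rough, simplified sketch of the fixed-area idea (the full module, which also projects the end boundaries and implements the other methods, is in the update below): a single sweep over the sorted data closes a bin whenever count times width for the pending bin reaches its share of the remaining area. The function name fixed_area_bins is mine, for illustration only.

def fixed_area_bins(numbers, num_bins):
    """Simplified sketch of the fixed-area method: choose bin boundaries so that
    (count in bin) * (bin width) is roughly constant.  Returns (boundaries, counts)."""
    xs = sorted(numbers)
    boundaries = [xs[0]]
    counts = []
    remaining_area = float(xs[-1] - xs[0]) * len(xs)
    remaining_bins = num_bins
    in_bin = 0
    for i, x in enumerate(xs):
        in_bin += 1
        width = x - boundaries[-1]
        # close the bin when its count*width reaches its share of the remaining area
        if remaining_bins > 1 and in_bin * width >= remaining_area / (remaining_bins * remaining_bins):
            boundaries.append(x)
            counts.append(in_bin)
            remaining_area = float(xs[-1] - x) * (len(xs) - i - 1)
            remaining_bins -= 1
            in_bin = 0
    boundaries.append(xs[-1])
    counts.append(in_bin)
    return boundaries, counts

The density estimate for bin i is then counts[i] / (n * (boundaries[i+1] - boundaries[i])), so narrow, well-populated bins near the peaks and wide, sparse bins in the tails both give usable density values.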

I made up some test data and tried the different approaches on a number of test cases. Here are a couple of illustrative plots using 3000 points: 1000 each from 2 Gaussian distributions and 1000 from a Weibull (extreme-value) distribution:

Plot of the real density function and the reconstructed ones using each of the approaches to setting bin boundaries. I used a log scale on the y axis so that the tails of the distribution could be analyzed (something I often do when looking at a distribution to estimate p-values). Note that the fixed-width bins get very noisy once the expected number of counts per bin gets below 1, and the fixed-count method has an extremely coarse estimate for the right-hand tail, but the fixed-area estimate is pretty good.

Only the fixed-width estimate drops off as fast as the narrow Gaussian peaks, but it goes all the way to 0. If we used pseudocounts to prevent estimates of 0 probability, then the fixed-width method would have a minimum value, determined by the pseudocount (a pseudocount of 1 would give a minimum density of about 1.6E-3 for this data, about where the single-count bins on the right-hand tail are).

Here is a detailed look at the narrow central Gaussian peak. Because I’m interested in the peak here, rather than the tail, I used a linear scale on the y axis. Note that the fixed-width bins are too wide here, broadening the peak and making it difficult to recognize as a Gaussian. The fixed-count methods have a very fine resolution (too fine, really, as they are picking up a lot of sampling noise). The fixed-area method seems to do the cleanest job of picking up the shape of the peak without excessive noise.

I would release the code on my web site at work, except that we had yet another power failure on campus last night (Friday night seems to be the popular time for power failures and server failures), and the file server where I plan to keep the file will not be rebooted until Monday.

Update 2013 July 25: The file is now available at http://users.soe.ucsc.edu/~karplus/pluck-scripts/density-function.py and I’ve added it to this post:

#!/usr/bin/env python2.7
"""
Thu Jul 18 01:36:21 PDT 2013 Kevin Karplus

The density-function.py file is mainly intended for use as a module
(rather than as a standalone program).
The main entry points are
    cumulative  converts a list of sortable items into an iterator over
                sorted pairs of (item, cumulative count)

    binned_cumulative
                converts a list of numbers into a sorted list of
                (threshold, cumulative count) pairs where the thresholds are
                bin boundaries (not equal to any numbers on the list).

                The first pair has a cumulative count of 0 and a
                threshold less than any number on the input list. The
                last pair has a cumulative count equal to the length
                of the input list and a threshold larger than any
                element of the list.

                Users specify the number of bins, and get
                (approximately) one more pair on the list than the
                specified number.

        Different binning methods can be specified:

                width   fixed-width bins
                count   fixed-count bins
                tapered fixed-count bins with smaller counts
                        near the beginning and end
                area    fixed width*count bins

    density_function
                takes the output of binned_cumulative
                and produces a plottable series of pairs
                (threshold, probability density)

        Output format may be

                steps   two points per bin to make step-wise function
                lines   linear interpolation between bin centers

As a stand-alone program, the file converts a list of numbers into a
table of pairs that can be plotted as a probability density function.

The gnuplot command
    plot '<density-function -c 1 -n 6 < example_file' with lines
would plot a 6-bin density function estimate from the first column of a file
called "example_file".

"""

# to ensure compatibility with python3
from __future__ import absolute_import, division, generators, unicode_literals, print_function, nested_scopes, with_statement

import sys
import argparse
from math import sqrt
from itertools import izip,islice

def cumulative(numbers):
    """Takes a list of numbers and yields (x,cum_count) pairs representing
    the cumulative counts of numbers <= x.
    The set of first values of the pairs is exactly the set of numbers.
    (Actually, list can have any sortable items, not just numbers.)
    """
    if len(numbers)==0: return
    sorted_samples = sorted(numbers)

    old_x = None
    for i,x in enumerate(sorted_samples):
        if x!=old_x:
            if old_x is not None:
                yield (old_x, i)
            old_x = x
    yield (x,len(numbers))

def binned_cumulative(numbers, num_bins=None, method="area"):
    """Takes a list of numbers and returns a sorted list of (x,cum_count) pairs representing
    the cumulative counts of numbers < x.
    An extra pair is included at each end (with 0 count difference from the real ends), projecting
    an approximate end point out from the real ends.
    The first values of the pairs are between values of numbers.
    The list is thinned to try to get num_bins+1 pairs.
    """
    count = len(numbers)
    if num_bins is None:
        num_bins = int(sqrt(count)+1)
    assert(num_bins>0)

    cum_pairs = [ x for x in cumulative(numbers)]
    if len(cum_pairs)==0:
        return [(0,0), (1,0)]
    first_x = cum_pairs[0][0]
    if len(cum_pairs)==1:
        return [(first_x-0.5,0), (first_x+0.5,cum_pairs[0][1])]

    # project out bin boundaries past real data, using first 2 and last values
    cum_pairs.insert(0,  tuple( (1.5*first_x-0.5*cum_pairs[1][0],  0)) )
    cum_pairs.append( tuple( (1.5*cum_pairs[-1][0]-0.5*cum_pairs[-2][0], count)) )

    if num_bins==1:
        return [ cum_pairs[0], cum_pairs[-1] ]

    total_width = cum_pairs[-1][0] - cum_pairs[0][0]

    if method=="width":
        # use fixed-width bins (total interval/num_bins)
        counts = [0]*num_bins
        bin_width = total_width/num_bins
        bin_scale = num_bins/total_width
        start=cum_pairs[0][0]
        oldc=0
        for x,c in cum_pairs:
            subscript=int( (x-start)*bin_scale )
            if (subscript<num_bins):    # avoid rounding error on last, empty count
                counts[ subscript ] += c-oldc
            oldc=c
        for i in xrange(1,num_bins):
            counts[i] += counts[i-1]
        return [(start,0)] +[(start+(k+1)*bin_width,c) for k,c in enumerate(counts)]

    # For methods other than fixed-width bins, we currently have no
    # way of producing empty bins, so the number of bins is at most
    # the number different values in "numbers"
    if num_bins>len(cum_pairs)-2:
        num_bins=len(cum_pairs)-2

    if method=="count":
        # Use bins that are approximately equal counts.
        # This method does a low-to-high sweep setting boundaries,
        # which may result in target counts getting lower towards the end,
        # as earlier boundaries overshoot their target counts.
        remaining_count = count
        remaining_bins = num_bins
        cum_to_find = remaining_count/remaining_bins

        thinned = [cum_pairs[0]]    # zero count at beginning
        for x,y in izip(cum_pairs,islice(cum_pairs,1,None)):
            if x[1]>= cum_to_find:
                thinned.append( (  (x[0]+y[0])/2,   x[1] ) )
                remaining_count = count-x[1]
                remaining_bins -=1
                if remaining_bins==0: break
                cum_to_find = x[1] + remaining_count/remaining_bins
        #    print("DEBUG: cum_pairs=",cum_pairs, file=sys.stderr)
        if thinned[-1][1] == cum_pairs[-1][1]:
            # the last bin covered all counts,
            # but may not include the extension
            thinned = thinned[:-1]
        thinned.append(cum_pairs[-1])
        return thinned

    if method=="tapered":
        # This method is like "count" but tapers the bin sizes towards the ends

        approx_bin_count = count/(num_bins-1)       # size for middle bins

        # num_end_bins is how many bins on each end to ramp up size over.
        # The total count for the first num_end_bins is about half the middle bins.
        # (Same for the last num_end_bins)
        num_end_bins = min(num_bins//10, (len(cum_pairs)-num_bins)//2)

        if num_end_bins==0:
            bin_size = count/num_bins
            cum_to_find = [round(i*bin_size) for i in xrange(1,num_bins+1)]
        else:
            effective_num_bins = num_bins-2*num_end_bins +1
            bin_size = count/effective_num_bins         # average count for middle bins

            first_size = bin_size/num_end_bins

            cum_to_find = []
            cum=0
            size = first_size
            # ramp up from the beginning
            cum_to_find = [ int(round(bin_size/(num_end_bins+1) *i*(i+1)/2)) for i in xrange(1,num_end_bins+1)]
            # fill in the middle
            cum_so_far = cum_to_find[-1]
            from_end = [count] + [count-x for x in cum_to_find[0:num_end_bins]]

            middle_bin_size = (count-2*cum_so_far)/ (num_bins-2*num_end_bins)
            # print("DEBUG: cum_so_far=", cum_so_far, " middle_bin_size=",middle_bin_size, file=sys.stderr)
            cum_to_find.extend( [ int(round(middle_bin_size*(i-num_end_bins+1)+cum_so_far))
                    for i in xrange(num_end_bins, num_bins-num_end_bins-1)])

            # make the second half by reversing the first half and counting from the end
            # print("DEBUG: from_end=", from_end, " len(cum_to_find)=",len(cum_to_find), file=sys.stderr)
            cum_to_find.extend(from_end[::-1])
        # print("DEBUG: num_bins=", num_bins, " len(cum_to_find)=",len(cum_to_find), " cum_to_find=",cum_to_find, file=sys.stderr)

        thinned = [cum_pairs[0]]    # zero count at beginning
        bin=0
        for x,y in izip(cum_pairs,islice(cum_pairs,1,None)):
            if x[1]>= cum_to_find[bin]:
                # print("DEBUG: x=",x," y=",y, file=sys.stderr)
                thinned.append( (  (x[0]+y[0])/2,   x[1] ) )
                bin+=1
        #    print("DEBUG: cum_pairs=",cum_pairs, file=sys.stderr)
        if thinned[-1][1] == cum_pairs[-1][1]:
            # the last bin covered all counts, but may not include the extension
            thinned = thinned[:-1]
        thinned.append(cum_pairs[-1])
        return thinned

    if method=="area":
    #  This method scales the bins so that
        #  the product of the count and the binwidth are roughly constant.

        remaining_area = (cum_pairs[-1][0] - cum_pairs[0][0])*count
        remaining_bins = num_bins
        thinned = [cum_pairs[0]]    # zero count at beginning
        for x,y in izip(cum_pairs,islice(cum_pairs,1,None)):
            boundary = (x[0]+y[0])/2
            bin_count = x[1]-thinned[-1][1]
            width = boundary - thinned[-1][0]
            if bin_count*width >= remaining_area/(remaining_bins*remaining_bins):
                thinned.append( ( boundary,   x[1] ) )
                remaining_area = (cum_pairs[-1][0] - boundary)*(count-x[1])
                remaining_bins -= 1
                if remaining_bins == 0: break

        if thinned[-1][1] == cum_pairs[-1][1]:
            # the last bin covered all counts,
            # but may not include the extension
            thinned = thinned[:-1]
        thinned.append(cum_pairs[-1])
        #    print("DEBUG: num_bins=",num_bins," len(thinned)=",len(thinned), file=sys.stderr)
        return thinned

def density_function(cum_pairs,smoothing="steps"):
    """
     This is a generator that yields points.
     Converts a cumulative pair list [ (x0,0) ... (xn,total_count)]
     into a probability density function for plotting.
     Note: x0< x1< ... <xn required.

     Output can be steps or lines between bin centers.
    """
    assert(len(cum_pairs)>=2)
    count = cum_pairs[-1][1]
    if count<=0:
        return

    if smoothing=="steps" or len(cum_pairs)==2:
        yield (cum_pairs[0][0], 0)
    else:
        yield ( 1.5*cum_pairs[0][0] - 0.5*cum_pairs[1][0], 0)

    for old_pair,pair in izip(cum_pairs,islice(cum_pairs,1,None)):
        old_threshold = old_pair[0]
        threshold = pair[0]
        level = (pair[1]-old_pair[1])/(count*(threshold-old_threshold))
        if smoothing=="steps":
            yield (old_threshold,  level)
        yield (threshold,  level)
        else:
            yield ( (threshold+old_threshold)/2, level)
    if smoothing=="steps" or len(cum_pairs)==2:
    yield (cum_pairs[-1][0],0)
    else:
        yield ( 1.5*cum_pairs[0][-1] - 0.5*cum_pairs[1][-2], 0)

# ---------------------------------------------------------
# Below this line are functions primarily for testing or using the
# module as a stand-alone program.

def positive_int(string):
    """Type converter for argparse allowing only int > 0 """
    value = int(string)
    if value<=0:
        msg = "{} is not a positive integer".format(string)
        raise argparse.ArgumentTypeError(msg)
    return value

def parse_args(argv):
    """parse the command-line options.
    Returning all as members of a class
    """
    parser = argparse.ArgumentParser( description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter)
    parser.set_defaults(   column=1
                        , num_bins=None
                        , method="area"
                        , smoothing="steps"
                        )
    parser.add_argument("--column","-c", action="store", type=positive_int,
        help="""Which (white-space separated) column of each line to read.
        One-based indexing.
        """)
    parser.add_argument("--num_bins","-n", action="store", type=positive_int,
        help="""Number of bins to use for "tapered" variant.
        Approximate number of bins for "area" variant.
        Default is sqrt(count)+1.
        """)
    parser.add_argument("--method","-m",
        choices=["width","count","tapered","area"],
        help="""Different algorithms for choosing bin widths:
        width     fixed-width bins (the classic method for histograms)
        count     roughly fixed-count bins, giving finer resolution where
              the probability density is higher.
        tapered   has roughly equal counts in the middle,
                  but reduces the counts towards the two ends,
                  to get better resolution in the tails, where counts are low
        area      has roughly equal count*bin_width throughout,
              providing good resolution in both high-density
                  and low-density
        Default is area.
        """)
    parser.add_argument("--smoothing","-s",
        choices=["steps","lines"],
        help="""Different ways of output the density function:
        steps   step for each bin (two points per bin)
        lines   straight lines between bin centers
        """)
    return parser.parse_args(argv[1:])

def column(file_obj, col_num=0):
    """yields one number from each line of a file, ignoring blank
    lines or comment lines whose first non-white-space is #
    Columns are numbered with zero-based indexing.
    """
    for line in file_obj:
        line = line.strip()
        if line=="" or line.startswith("#"):
            continue
        fields = line.split()
        yield float(fields[col_num])

def main(args):
    """Example of using the density_function and binned_cumulative functions.
    This function (and the parse_args function) are not used when density_function.py is used as module.
    """
    options=parse_args(sys.argv)
    numbers = [x for x in column(sys.stdin,options.column-1)]

    cum_bins = binned_cumulative(numbers,
        num_bins=options.num_bins,method=options.method)
    # print ("DEBUG: cum_bins=",cum_bins,file=sys.stderr)
    for x,cum in density_function(cum_bins,smoothing=options.smoothing):
        print (x, "\t", cum)

if __name__ == "__main__" :
    try :
        sys.exit(main(sys.argv))
    except EnvironmentError as (errno,strerr):
        sys.stderr.write("ERROR: " + strerr + "\n")
        sys.exit(errno)
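
As a module, the main entry points can be used directly from another script. Here is a minimal usage sketch, assuming the file has been saved under an importable name such as density_function.py (the hyphen in density-function.py prevents a direct import) and is run under Python 2.7 like the module itself:

# Minimal usage sketch; assumes the code above was saved as density_function.py.
import random
from density_function import binned_cumulative, density_function

samples = [random.gauss(0.0, 1.0) for _ in range(3000)]
cum_bins = binned_cumulative(samples, num_bins=55, method="area")
for threshold, density in density_function(cum_bins, smoothing="lines"):
    print("{}\t{}".format(threshold, density))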

2012 October 13

When is a line graph not a line graph?

Filed under: Uncategorized — gasstationwithoutpumps @ 21:39

I recently discovered that elementary school teachers have taken to calling histograms “line plots”, and that this definition has gotten quite widespread:

A line plot is a graph that shows frequency of data along a number line. It is best to use a line plot when comparing fewer then 25 numbers. It is a quick, simple way to organize data. [http://ellerbruch.nmu.edu/classes/cs255w03/cs255students/nsovey/p5/p5.html]

A line plot shows data on a number line with x or other marks to show frequency. [http://www.icoachmath.com/math_dictionary/Line_Plot.html]

A line plot is a graph that shows frequency of data along a number line. It is best to use a line plot when comparing fewer than 25 numbers. It is a quick, simple way to organize data. [http://www.mathplanet.com/education/algebra-2/equations-and-inequalities/line-plots-and-stem-and-leaf-plots]

This page contains worksheets with line plots, a type of graph that shows frequency of data along a number line. [http://www.superteacherworksheets.com/line-plots.html]

Of course, no one outside the elementary school teachers uses that term, which is confusingly similar to the standard term “line graphs”.  Even the superteacherworksheets site acknowledges the terrible confusion that the “line plot” term generates:

Line Graph Worksheets Line graphs (not to be confused with line plots) have plotted points connected by straight lines.

“Line graph” is a common term, even among educators:

Line graph is a graph that uses line segments to connect data points and shows changes in data over time. [http://www.icoachmath.com/math_dictionary/line_graph.html]

Line graph: A graph that uses points connected by lines to show how something changes in value (as time goes by, or as something else happens). [http://www.mathsisfun.com/definitions/line-graph.html]

line graph definition: a diagram of lines made by connected data points which represent successive changes in the value of a variable quantity or quantities. [http://dictionary.reference.com/browse/line+graph]

Was it just because they couldn’t spell “histogram” that elementary school teachers had to invent a new term confusingly close to an existing standard term?  I feel sorry for the kids subjected to this poor choice of nomenclature, as they will have to do more unlearning before taking high school or college tests, where they will be expected to know what a histogram and a line graph are, but not anything about “line plots”.
