Where in an undergraduate STEM major is the technique of Lagrangian multipliers for solving constrained optimization problems taught?

Doing a search for it on my campus found a few classes, all of which are upper-division or graduate classes: MATH 140 (Industrial Math), Econ 100A (Intermediate Microeconomics), Econ 205C (Advanced Macroeconomic Theory III), ISM 206 (Optimization Theory and Applications), Physics 105 (Classical Mechanics), CMPS 242 (Machine Learning), CMPS 290C (Advanced Topics in Machine Learning), and BME 205 (Bioinformatics: Models and Algorithms). Students in my class had a vague recollection of having heard of Lagrangian multipliers, but had never actually used them (except for a couple of exchange students from Europe).

My personal belief is that such a fundamental, useful technique should be included in the lower-division multi-variable differential calculus class. Now, this isn’t a curmudgeonly complaint about how students nowadays don’t learn things the way we did, back in the good old days of slide rules and tables of logarithms (and, yes, I am just barely old enough to have learned with those tools). I didn’t learn about Lagrangian multipliers until I was a professor (having completed a B.S. and M.S. in math and a Ph.D. in computer science). I was introduced to the method by an engineering grad student (from India, if I remember correctly), who had had a much more practical math education than mine. In fact, most of my practical math education came either from my father or was self-taught. Very little of my math education in school or college turned out to have much lasting value to me, other than allowing me the confidence to learn on my own the math I actually needed.

For those of you completely unfamiliar with Lagrangian multipliers, I recommend the Wikipedia page, Lagrangian multipliers. For bioinformatics students, I will try to present one example, using the technique to derive the Baum-Welch algorithm, which is the expectation-maximization algorithm used for training hidden Markov models (HMMs). These are the notes for one lecture in my graduate class in bioinformatics, coming after we have developed the forward and backward algorithms for HMMs. For those who want to learn this stuff on their own, I recommend Chapter 3 of *Biological Sequence Analysis* by Durbin, Eddy, Krogh, and Mitchison, though I don't remember them using Lagrangian multipliers.
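As a warm-up, here is the technique on a toy problem of my own invention (not from the book): maximize $xy$ subject to $x + y = 1$. A sketch using sympy, with all the symbol names made up here:

```python
import sympy as sp

x, y, lam = sp.symbols("x y lam")

# Lagrangian for: maximize x*y subject to x + y = 1.
L = x * y + lam * (1 - x - y)

# Setting every partial derivative to zero gives the critical points.
sols = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)
# The unique critical point is x = y = 1/2, the constrained maximum.
```

The new variable $\lambda$ converts the constrained problem into an unconstrained system of equations, which is exactly the move we make below for the HMM parameters.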

*[I’m presenting these notes in my blog for two reasons: 1) I wanted to plead for more teachers to teach Lagrangian multipliers, and 2) my presentation of it in class sucked. In my defense, I had only had four hours of sleep the night before, and live-action math requires more alertness than I had with so much sleep debt.]*

Our goal is to choose the parameters of the hidden Markov model to maximize the probability of the HMM generating a particular observed set of data. The EM (Expectation Maximization) algorithm is an iterative approximation algorithm. It starts with a setting of the parameters, then chooses a new setting of parameters that increases the probability, and repeats. What we are working on here is doing a single step: given a setting of the parameters, how do we find a better setting?

The parameters are $e_S(c)$, the probability of emitting character $c$ in state $S$, and $a_{ST}$, the probability of making a transition to state $T$ from state $S$ (I'm using the notation of the book here, though it is awkward to mix subscript and function notation, and it would have been cleaner to use $e_{S,c}$). It is common to refer to the parameters collectively as a vector $\Theta$, so that we can concisely refer to functions of $\Theta$. We have the constraints that $\sum_c e_S(c) = 1$ and $\sum_T a_{ST} = 1$, that is, that the emission probabilities for a state sum to one, and so do the transition probabilities to a new state. (There are also constraints that all probabilities be non-negative, but those constraints don't need to be represented explicitly in solving the problem, as the solution we'll come up with satisfies them automatically.)

The probability of a sequence $x = x_1, \ldots, x_n$ is the sum over all paths $\pi$ of length $n$ through the HMM of the probability of taking that path and emitting $x$, that is

$P(x \mid \Theta) = \sum_\pi \prod_{i=1}^{n} a_{\pi_{i-1}\pi_i}\, e_{\pi_i}(x_i)$,

where $\pi_0$ is the start state. The products are a bit messy to take derivatives of, because some of the parameters may be repeated, but we can simplify them by grouping together repeated factors:

$P(x \mid \Theta) = \sum_\pi \left( \prod_{S,c} e_S(c)^{E_{S,c}} \right) \left( \prod_{S,T} a_{ST}^{A_{S,T}} \right)$,

where $E_{S,c}$ is the number of times $\pi_i = S$ and $x_i = c$ along a particular path and $A_{S,T}$ is the number of times $\pi_{i-1} = S$ and $\pi_i = T$. Both of these new count variables are specific to a particular path, so should really have the vector $\pi$ as a subscript, but we've got too many subscripts already.
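To make the sum over paths concrete, here is a brute-force computation of $P(x \mid \Theta)$ for a tiny two-state HMM (every number below is invented purely for illustration), cross-checked against the forward algorithm we developed earlier:

```python
import itertools
import numpy as np

# Toy 2-state, 2-character HMM; all values are made up for illustration.
a = np.array([[0.7, 0.3],      # a[S, T]: probability of transition S -> T
              [0.4, 0.6]])
e = np.array([[0.9, 0.1],      # e[S, c]: probability of emitting c in state S
              [0.2, 0.8]])
start = np.array([0.5, 0.5])   # initial state distribution
x = [0, 1, 1, 0]               # observed sequence (character indices)
n = len(x)

# P(x): explicit sum over all 2**n state paths.
p_brute = 0.0
for pi in itertools.product(range(2), repeat=n):
    p = start[pi[0]] * e[pi[0], x[0]]
    for i in range(1, n):
        p *= a[pi[i - 1], pi[i]] * e[pi[i], x[i]]
    p_brute += p

# The same quantity from the forward algorithm, in O(n) rather than O(2**n).
f = start * e[:, x[0]]
for i in range(1, n):
    f = (f @ a) * e[:, x[i]]
p_forward = f.sum()
```

The enumeration is only feasible for toy sequences, which is the whole point of the forward algorithm.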

If we try to maximize this function without paying attention to the constraints, we find that all the parameters go to infinity: making the parameters bigger makes the product bigger. To keep the constraints relevant, we use Lagrangian multipliers, adding one new variable for each constraint. I'll use $\lambda_S$ for the constraint on the emission table of state $S$ and $\mu_S$ for the constraint on the transitions out of state $S$. The objective function to take derivatives of is thus

$P(x \mid \Theta) + \sum_S \lambda_S \left(1 - \sum_c e_S(c)\right) + \sum_S \mu_S \left(1 - \sum_T a_{ST}\right)$.

Let’s look at the partial derivatives of our objective function with respect to the emission parameters:

$\frac{\partial}{\partial e_S(c)}\left[ P(x \mid \Theta) + \sum_{S'} \lambda_{S'} \Big(1 - \sum_{c'} e_{S'}(c')\Big) + \sum_{S'} \mu_{S'} \Big(1 - \sum_T a_{S'T}\Big) \right] = \frac{1}{e_S(c)} \sum_\pi E_{S,c}\, P(x, \pi \mid \Theta) - \lambda_S$.

If the partials are set to zero, we get

$\lambda_S\, e_S(c) = \sum_\pi E_{S,c}\, P(x, \pi \mid \Theta)$.

The sum over all paths on the right is the expected count of the number of times that state $S$ produces character $c$. Note that this happens only for positions $i$ where $x_i = c$, so we can rewrite it as the sum over all such positions of the probability of being in state $S$ at that position, which is precisely what the forward-backward algorithm computed for us:

$\sum_\pi E_{S,c}\, P(x, \pi \mid \Theta) = \sum_{i:\, x_i = c} P(\pi_i = S, x \mid \Theta) = \sum_{i:\, x_i = c} f_S(i)\, b_S(i)$.
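This identity can be checked numerically: the path-weighted count equals the forward-backward sum. A sketch, again with an invented two-state HMM:

```python
import itertools
import numpy as np

# Toy 2-state HMM; all numbers invented for illustration.
a = np.array([[0.7, 0.3], [0.4, 0.6]])
e = np.array([[0.9, 0.1], [0.2, 0.8]])
start = np.array([0.5, 0.5])
x = [0, 1, 0]
n, S, c = len(x), 0, 0   # check the count for state 0 emitting character 0

# Left side: sum over every path of (times S emits c on the path) * P(x, path).
lhs = 0.0
for pi in itertools.product(range(2), repeat=n):
    p = start[pi[0]] * e[pi[0], x[0]]
    for i in range(1, n):
        p *= a[pi[i - 1], pi[i]] * e[pi[i], x[i]]
    lhs += sum(pi[i] == S and x[i] == c for i in range(n)) * p

# Right side: forward-backward, summed over positions where x[i] == c.
f = np.zeros((n, 2))
f[0] = start * e[:, x[0]]
for i in range(1, n):
    f[i] = (f[i - 1] @ a) * e[:, x[i]]
b = np.ones((n, 2))
for i in range(n - 2, -1, -1):
    b[i] = a @ (e[:, x[i + 1]] * b[i + 1])
rhs = sum(f[i, S] * b[i, S] for i in range(n) if x[i] == c)
```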

We can now impose the constraint $\sum_c e_S(c) = 1$, by adding all these equations:

$\lambda_S = \sum_c \sum_{i:\, x_i = c} f_S(i)\, b_S(i) = \sum_i f_S(i)\, b_S(i)$.

Finally we get the new value of the emission parameters:

$e_S(c) = \frac{\sum_{i:\, x_i = c} f_S(i)\, b_S(i)}{\sum_i f_S(i)\, b_S(i)}$.

We can do a very similar series of operations to get update equations for the transition parameters also:

$a_{ST} = \frac{\sum_i f_S(i)\, a_{ST}\, e_T(x_{i+1})\, b_T(i+1)}{\sum_{T'} \sum_i f_S(i)\, a_{ST'}\, e_{T'}(x_{i+1})\, b_{T'}(i+1)}$,

where the $a$ and $e$ values on the right-hand side are the old parameter values.
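Putting the two update equations together, a minimal single-sequence re-estimation step might look like the sketch below. This is an illustration only: the function and variable names are mine, there is no scaling to prevent underflow on long sequences, and the start distribution is held fixed rather than re-estimated.

```python
import numpy as np

def baum_welch_step(a, e, start, x):
    """One EM re-estimation step for an HMM on a single sequence.
    a[S, T]: transition probabilities; e[S, c]: emission probabilities;
    start: initial state distribution (held fixed); x: character indices."""
    n, S = len(x), a.shape[0]

    # Forward: f[i, s] = P(x[0..i], state at position i is s).
    f = np.zeros((n, S))
    f[0] = start * e[:, x[0]]
    for i in range(1, n):
        f[i] = (f[i - 1] @ a) * e[:, x[i]]

    # Backward: b[i, s] = P(x[i+1..n-1] | state at position i is s).
    b = np.ones((n, S))
    for i in range(n - 2, -1, -1):
        b[i] = a @ (e[:, x[i + 1]] * b[i + 1])

    # Expected emission counts: add f*b at each position to column x[i].
    E = np.zeros_like(e)
    for i in range(n):
        E[:, x[i]] += f[i] * b[i]

    # Expected transition counts: f_S(i) a_ST e_T(x[i+1]) b_T(i+1).
    A = np.zeros_like(a)
    for i in range(n - 1):
        A += f[i][:, None] * a * (e[:, x[i + 1]] * b[i + 1])[None, :]

    # Normalize each row -- exactly the Lagrangian-multiplier denominators.
    return A / A.sum(axis=1, keepdims=True), E / E.sum(axis=1, keepdims=True)

# Invented toy parameters, just to exercise the function.
a0 = np.array([[0.6, 0.4], [0.3, 0.7]])
e0 = np.array([[0.5, 0.5], [0.1, 0.9]])
start = np.array([0.5, 0.5])
x = [0, 0, 1, 0, 1, 1]
a1, e1 = baum_welch_step(a0, e0, start, x)
```

Each call returns a new parameter setting that does not decrease $P(x \mid \Theta)$; iterating these steps to convergence is the full Baum-Welch algorithm.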

Note that the optimization we did uses the old parameter values in the forward-backward algorithm, so we haven’t really found “the optimum”, just a better setting of the parameters; we need to iterate the whole process, recomputing the forward-backward matrices after resetting all the parameters.
