# Machine learning

The Center for the Study of Complex Systems at the University of Michigan hosted an intensive day-long training on some of the basics of machine learning for graduate students and interested faculty and staff. Jake Hofman, a Microsoft researcher who also teaches this subject at Columbia University, was the instructor, and the session was both rigorous and accessible (link). Participants were asked to install R, a free software environment for statistical computing widely used in machine learning and applied statistics, and numerous data sets were used as examples throughout the day. (Here is a brief description of R; link.) Thanks, Jake, for an exceptionally stimulating workshop.

So what is machine learning? Most crudely, it is a handful of methods through which researchers can sift through a large collection of events or objects, each of which has a very large number of properties, in order to arrive at a predictive sorting of the events or objects into a set of categories. The objects may be email texts or hand-printed numerals (the examples offered in the workshop), the properties may be the presence/absence of a long list of words or the presence of a mark in a bitmap grid, and the categories may be “spam/not spam” or the numerals between 0 and 9. But equally, the objects may be Facebook users, the properties “likes/dislikes” for a very large list of webpages, and the categories “Trump voter/Clinton voter”. There is certainly a lot more to machine learning — for example, these techniques don’t shed light on the ways that AI Go systems improve their play. But it’s good to start with the basics. (Here is a simple presentation of the basics of machine learning; link.)

Two intuitive techniques form the core of basic machine learning. The first uses measured conditional probabilities in conjunction with Bayes' theorem to assign the probability that an object belongs to category Phi given that it possesses properties x_i (the approach behind naive Bayes classifiers). The second uses regressions over a very large number of factors to calculate the probability that the object is Phi from estimated regression coefficients c_i.
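The first of these techniques can be made concrete with a small sketch. The workshop itself used R; here is a minimal naive Bayes spam classifier in Python instead, with invented word lists and labels purely for illustration (the Laplace smoothing step is a standard addition, not something specified above):

```python
from collections import defaultdict

# Toy training data: each "document" is a set of words plus a label.
# All words and labels here are invented for illustration.
training = [
    ({"win", "money", "now"}, "spam"),
    ({"win", "prize"}, "spam"),
    ({"meeting", "schedule"}, "ham"),
    ({"project", "meeting", "now"}, "ham"),
]

def train(docs):
    """Count classes and word occurrences per class."""
    class_counts = defaultdict(int)
    word_counts = defaultdict(lambda: defaultdict(int))
    vocab = set()
    for words, label in docs:
        class_counts[label] += 1
        for w in words:
            word_counts[label][w] += 1
            vocab.add(w)
    return class_counts, word_counts, vocab

def classify(words, class_counts, word_counts, vocab):
    """Pick the class maximizing P(class) * prod_i P(word_i | class)."""
    total = sum(class_counts.values())
    best, best_p = None, 0.0
    for label, n in class_counts.items():
        p = n / total  # prior P(class)
        for w in vocab:
            # Conditional probability with Laplace (add-one) smoothing.
            p_w = (word_counts[label][w] + 1) / (n + 2)
            p *= p_w if w in words else (1 - p_w)
        if p > best_p:
            best, best_p = label, p
    return best

cc, wc, vocab = train(training)
print(classify({"win", "money"}, cc, wc, vocab))       # -> spam
print(classify({"meeting", "project"}, cc, wc, vocab)) # -> ham
```

The "learning" here is nothing more than counting how often each word co-occurs with each category; the prediction is Bayes' theorem applied to those counts.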

Another basic technique is to treat the classification problem spatially, as in nearest-neighbor classification. Use the large number of variables to define an n-dimensional space; then classify the object according to the average or majority value of its m closest neighbors. (The neighbor number m might range from 1 to some manageable number such as 10.)
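A minimal sketch of this spatial approach, again in Python with invented two-dimensional data (real applications would use many more dimensions and a chosen distance metric):

```python
import math
from collections import Counter

# Toy data: points in a 2-D feature space with category labels (invented).
points = [
    ((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
    ((4.0, 4.0), "B"), ((4.2, 3.9), "B"), ((3.8, 4.1), "B"),
]

def knn_classify(x, data, m=3):
    """Label x by majority vote among its m nearest neighbors
    (Euclidean distance)."""
    by_distance = sorted(data, key=lambda p: math.dist(x, p[0]))
    votes = Counter(label for _, label in by_distance[:m])
    return votes.most_common(1)[0][0]

print(knn_classify((1.1, 1.0), points))  # -> A
print(knn_classify((4.1, 4.0), points))  # -> B
```

Nothing is estimated in advance here: the entire training set is stored, and classification is deferred to the moment a new object is presented.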

There are many issues of methodology and computational technique raised by this approach to knowledge. But these are matters of technique, and smart data science researchers have made great progress on them. More interesting here are epistemological issues: how good and how reliable are the findings produced by these approaches to the algorithmic treatment of large data sets? How good is the spam filter or the Trump voter detector when applied to novel data sets? What kinds of errors would we expect this approach to be vulnerable to?

One important observation is that these methods are explicitly anti-theoretical. There is no place for discovery of causal mechanisms or underlying explanatory processes in these calculations. The researcher is not expected to provide a theoretical hypothesis about how this system of phenomena works. Rather, the techniques are entirely devoted to the discovery of persistent statistical associations among variables and the categories of the desired sorting. This is as close to Baconian induction as we get in the sciences (link). The approach is concerned with classification and prediction, not explanation. (Here is an interesting essay where Jake Hofman addresses the issues of prediction versus explanation of social data; link.)

A more specific epistemic concern that arises is the possibility that the training set of data may have had characteristics that are importantly different from comparable future data sets. This is the familiar problem of induction: will the future resemble the past sufficiently to support predictions based on past data? Spam filters developed in one email community may work poorly in an email community in another region or profession. We can label this as the problem of robustness.

Another limitation of this approach has to do with problems where our primary concern is with a singular event or object rather than a population. If we want to know whether NSA employee John Doe is a Russian mole, it isn't especially useful to know that his nearest neighbors in a multi-dimensional space of characteristics are moles; we need to know more specifically whether Doe himself has been corrupted by the Russians. If we want to know whether North Korea will explode a nuclear weapon against a neighbor in the next six months, the techniques of machine learning seem to be irrelevant.

The statistical and computational tools of machine learning are indeed powerful, and seem to lead to results that are both useful and sometimes surprising. One should not imagine, however, that machine learning is a replacement for all other forms of research methodology in the social and behavioral sciences.

(Here is a brief introduction to a handful of the algorithms currently in use in machine-learning applications; link.)