Science, Models and Machine Learning

3 September, 2014

guest post by David Tweed

The members of the Azimuth Project have been working on both predicting and understanding the El Niño phenomenon, along with writing expository articles. So far we’ve mostly talked about the physics and data of the El Niño, along with looking at one method of actually trying to predict El Niño events. Since there’s going to be more data exploration using methods more typical of machine learning, it’s a good time to briefly describe the mindset and highlight some of the differences between different kinds of predictive models. Here we’ll concentrate on the concepts rather than the fine details and particular techniques.

We also stress there’s not a fundamental distinction between machine learning (ML) and statistical modelling and inference. There are certainly differences in culture, background and terminology, but in terms of the actual algorithms and mathematics used there’s a great commonality. Throughout the rest of the article we’ll talk about ‘machine learning models’, but could equally have used ‘statistical models’.

For our purposes here, a model is any object which provides a systematic procedure for taking some input data and producing a prediction of some output. There’s a spectrum of models, ranging from physically based models at one end to purely data driven models at the other. As a very simple example, suppose you commute by car from your place of work to your home and you want to leave work in order to arrive home at 6:30 pm. You can tackle this by building a model which takes as input the day of the week and gives you back a time to leave.

• There’s the data driven approach, where you try various leaving times on various days and record whether or not you get home by 6:30 pm. You might find that the traffic is lighter on weekend days so you can leave at 6:10 pm, while on weekdays you have to leave at 5:45 pm, except on Wednesdays when you have to leave at 5:30 pm. Since you’ve just crunched the data you have no idea why this works, but it’s a very reliable rule when you use it to predict when you need to leave.

• There’s the physical model approach, where you attempt to infer how many people are doing what on any given day and then figure out what that implies for the traffic levels and thus what time you need to leave. In this case you find out that there’s a mid-week sports game on Wednesday evenings which leads to even higher traffic. This not only predicts that you’ve got to leave at 5:30 pm on Wednesdays but also lets you understand why. (Of course this is just an illustrative example; in climate modelling a physical model would be based upon actual physical laws such as conservation of energy, conservation of momentum, Boyle’s law, etc.)

There are trade-offs between the two types of approach. Data driven modelling is a relatively simple process. In contrast, by proceeding from first principles you’ve got a more detailed framework which is equally predictive, but at the cost of having to investigate a lot of complicated underlying effects. Physical models have one interesting advantage: nothing in a data driven model prevents it from violating physical laws (e.g., failing to conserve energy), whereas a physically based model obeys those laws by design. This is seldom a problem in practice, but worth keeping in mind.

The situation with data driven techniques is analogous to one of those American medication adverts: there’s the big message about how “using a data driven technique can change your life for the better” while the voiceover gabbles out all sorts of small print. The remainder of this post will describe some of the basic principles in that small print.

Preprocessing and feature extraction

There’s a popular misconception that machine learning works well when you simply collect some data and throw it into a machine learning algorithm. In practice that kind of approach often yields a model that is quite poor. Almost all successful machine learning applications are preceded by some form of data preprocessing. Sometimes this is simply rescaling so that different variables have similar magnitudes, are zero centred, etc.
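
Here is a minimal sketch, with invented data and scales, of that simplest kind of preprocessing: standardising each variable so it is zero centred with similar magnitudes.

import numpy as np

# Toy data: each row is an observation, each column a variable measured
# on a very different scale (e.g. temperature in K, pressure differences).
rng = np.random.default_rng(0)
data = np.column_stack([
    rng.normal(300.0, 5.0, size=200),     # values in the hundreds
    rng.normal(0.001, 0.0002, size=200),  # values in the thousandths
])

# Standardise: subtract the mean and divide by the standard deviation,
# column by column, so every variable is zero-centred with unit spread.
means = data.mean(axis=0)
stds = data.std(axis=0)
standardised = (data - means) / stds

print(standardised.mean(axis=0))  # close to [0, 0]
print(standardised.std(axis=0))   # close to [1, 1]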

However, there are often steps that are more involved. For example, many machine learning techniques have what are called ‘kernel variants’ which involve (in a way whose details don’t matter here) using a nonlinear mapping from the original data to a new space which is more amenable to the core algorithm. There are various kernels with the right mathematical properties, and frequently the choice of a good kernel is made either by experimentation or knowledge of the physical principles. Here’s an example (from Wikipedia’s entry on the support vector machine) of how a good choice of kernel can convert a not linearly separable dataset into a linearly separable one:



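The figure itself isn’t reproduced here, but a small sketch along the same lines (assuming scikit-learn is available; the dataset and parameters are merely illustrative) shows a dataset that no straight line can separate being handled easily once a nonlinear kernel is used:

from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings of points: not separable by any straight line.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

# A linear SVM struggles, while an RBF-kernel SVM implicitly maps the
# points into a space where a linear separation *is* possible.
for kernel in ["linear", "rbf"]:
    clf = SVC(kernel=kernel).fit(X, y)
    print(kernel, "training accuracy:", clf.score(X, y))
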
An extreme example of preprocessing is explicitly extracting features from the data. In ML jargon, a feature “boils down” some observed data into a directly useful value. For example, in the work by Ludescher et al that we’ve been looking at, they don’t feed all the daily time series values into their classifier but take the correlations between different points over a year as the basic features to consider. Since the individual days’ temperatures are incredibly noisy and there are so many of them, extracting features from them gives more useful input data. While these extraction functions could theoretically be learned by the ML algorithm, they are quite complicated functions to learn. By explicitly choosing to represent the data using this feature, the amount the algorithm has to discover is reduced and hence the likelihood of it finding an excellent model is dramatically increased.
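
As a rough sketch of that style of feature extraction (the array shapes and the one-year window below are invented for illustration, not taken from Ludescher et al’s actual pipeline), one can replace a year of daily values with one correlation per pair of measurement points:

import numpy as np

# Invented example: daily temperature anomalies at 5 grid points over 3 years.
rng = np.random.default_rng(1)
n_days, n_points = 3 * 365, 5
series = rng.normal(size=(n_days, n_points))

window = 365  # use the most recent year of data
recent = series[-window:]

# One feature per unordered pair of points: their correlation over the window.
corr = np.corrcoef(recent, rowvar=False)  # (n_points, n_points) matrix
iu = np.triu_indices(n_points, k=1)       # indices of the upper triangle
features = corr[iu]                       # 10 numbers instead of 365 * 5

print(features.shape)  # (10,)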

Limited amounts of data for model development

Some of the problems that we describe below would vanish if we had unlimited amounts of data to use for model development. However, in real cases we often have a strictly limited amount of data we can obtain. Consequently we need methodologies to address the issues that arise when data is limited.

Training sets and test sets

The most common way to work with collected data is to split it into a training set and a test set. The training set is used in the process of determining the best model parameters, while the test set—which is not used in any way in determining those model parameters—is then used to see how effective the model is likely to be on new, unseen data. (The test and training sets need not be equally sized. There are some fitting techniques which need to further subdivide the training set, so that having more training than test data works out best.) This division of data acts to further reduce the effective amount of data used in determining the model parameters.
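
A minimal sketch of such a split (the 80/20 ratio, the synthetic data and the least-squares model are all placeholders):

import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=1000)

# Shuffle once, then carve off the last 20% as a test set that the
# fitting procedure never gets to see.
perm = rng.permutation(len(X))
split = int(0.8 * len(X))
train_idx, test_idx = perm[:split], perm[split:]

X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]

# Fit on the training set only (ordinary least squares here), then report
# the error on the held-out test set as the estimate of real-world performance.
coeffs, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
test_error = np.mean((X_test @ coeffs - y_test) ** 2)
print("held-out mean squared error:", test_error)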

After we’ve made this split we have to be careful how much of the test data we scrutinise in any detail, since once it has been investigated it can’t meaningfully be used for testing again, although it can still be used for future training. (Examining the test data is often informally known as burning data.) That only applies to detailed inspection however; one common way to develop a model is to look at some training data and then train the model (also known as fitting the model) on that training data. It can then be evaluated on the test data to see how well it does. It’s also then okay to purely mechanically train the model on the test data and evaluate it on the training data to see how “stable” the performance is. (If you get dramatically different scores then your model is probably flaky!) However, once we start to look at precisely why the model failed on the test data—in order to change the form of the model—the test data has now become training data and can’t be used as test data for future variants of that model. (Remember, the real goal is to accurately predict the outputs for new, unseen inputs!)

Random patterns in small sample sets

Suppose we’re modelling a system which has a true probability distribution P . We can’t directly observe this, but we have some samples S obtained by observing the system, which hence come from P . Clearly there are problems if we generate this sample in a way that biases which part of the distribution we sample from: it wouldn’t be a good idea to get training data featuring heights in the American population by only handing out surveys in the locker rooms of basketball facilities. But if we take care to avoid sampling bias as much as possible, then we can make various kinds of estimates of the distribution that we think S comes from.

Let’s consider the estimate P' implied for S by some particular technique. It would be nice if P = P' , wouldn’t it? And indeed many good estimators have the property that as the size of S tends to infinity P' will tend to P . However, for finite sizes of S , and especially for small sizes, P' may have some spurious detail that’s not present in P .

As a simple illustration of this, my computer has a pseudorandom number generator which generates essentially uniformly distributed random numbers between 0 and 32767. I just asked for 8 numbers and got

2928, 6552, 23979, 1672, 23440, 28451, 3937, 18910.

Note that we’ve got one subset of 4 values (2928, 6552, 1672, 3937) within the interval of length 5012 between 1540 and 6552 and another subset of 3 values (23440, 23979 and 28451) in the interval of length 5012 between 23440 and 28451. For this uniform distribution the expectation of the number of values falling within a given range of that size is about 1.2. Readers will be familiar with how, in a small sample, an observed count will show a large amount of variation around its expectation, variation which only shrinks as the sample size increases, so this isn’t a surprise. However, it does highlight that even completely unbiased sampling from the true distribution will typically give rise to extra ‘structure’ within the distribution implied by the samples.

For example, here are the results from one way of estimating the probability from the samples:



The green line is the true density while the red curve shows the probability density obtained from the samples, with clearly spurious extra structure.
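
Here is a sketch of the same effect, using an invented uniform distribution and a kernel density estimate purely for illustration: a density estimated from a handful of samples has bumps that the true, flat density does not.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
samples = rng.uniform(0, 32768, size=8)   # 8 draws from a flat distribution

# Estimate a density from just those 8 samples.
kde = gaussian_kde(samples)
xs = np.linspace(0, 32768, 9)
estimated = kde(xs)

# The true density is perfectly flat, but the estimate is lumpy wherever
# the few samples happened to cluster.
true_density = 1.0 / 32768
for x, e in zip(xs, estimated):
    print(f"x={x:7.0f}  estimated={e:.2e}  true={true_density:.2e}")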

Generalization

Almost all modelling techniques, while not necessarily estimating an explicit probability distribution from the training samples, can be seen as building functions that are related to that probability distribution.

For example, a ‘thresholding classifier’ for dividing input into two output classes will place the threshold at the optimal point for the distribution implied by the samples. As a consequence, one important aim in building machine learning models is to estimate the features that are present in the true probability distribution while not learning such fine details that they are likely to be spurious features due to the small sample size. If you think about this, it’s a bit counter-intuitive: you deliberately don’t want to perfectly reflect every single pattern in the training data. Indeed, specialising a model too closely to the training data is given the name over-fitting.

This brings us to generalization. Strictly speaking generalization is the ability of a model to work well upon unseen instances of the problem (which may be difficult for a variety of reasons). In practice however one tries hard to get representative training data so that the main issue in generalization is in preventing overfitting, and the main way to do that is—as discussed above—to split the data into a set for training and a set only used for testing.
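
As a small sketch of over-fitting (polynomial regression on invented data, chosen only because the effect is easy to see), a very flexible model reproduces its ten training points almost perfectly yet does worse on fresh data than a simpler one:

import numpy as np

rng = np.random.default_rng(4)

def make_data(n):
    x = rng.uniform(-1, 1, size=n)
    y = np.sin(3 * x) + rng.normal(scale=0.1, size=n)
    return x, y

x_train, y_train = make_data(10)   # small training set
x_test, y_test = make_data(200)    # fresh, unseen data

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_err:.4f}, test error {test_err:.4f}")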

One factor that’s often related to generalization is regularization, which is the general term for adding constraints to the model to prevent it being too flexible. One particularly useful kind of regularization is sparsity. Sparsity refers to the degree to which a model has empty elements, typically represented as 0 coefficients. It’s often possible to incorporate a prior into the modelling procedure which will encourage the model to be sparse. (Recall that in Bayesian inference the prior represents our initial ideas of how likely various different parameter values are.) There are some cases where we have various detailed priors about sparsity for problem specific reasons. However the more common case is having a ‘general modelling’ belief, based upon experience in doing modelling, that sparser models have a better generalization performance.

As an example of using sparsity promoting priors, we can look at linear regression. For standard regression with E examples of y^{(i)} against P dimensional vectors x^{(i)} we’re considering the total error

\min_{c_1,\dots, c_P} \frac{1}{E}\sum_{i=1}^E (y^{(i)} - \sum_{j=1}^P c_j x^{(i)}_j)^2

while with the l_1 prior we’ve got

\min_{c_1,\dots, c_P} \frac{1}{E} \sum_{i=1}^E (y^{(i)} - \sum_{j=1}^P c_j x^{(i)}_j)^2 + \lambda \sum_{j=1}^P |c_j|

where the c_j are the coefficients to be fitted and \lambda is the prior weight. We can see how the prior weight affects the sparsity of the c_j s:



On the x -axis is \lambda while the y -axis is the coefficient value. Each line represents the value of one particular coefficient as \lambda increases. You can see that for very small \lambda – corresponding to a very weak prior – all the weights are non-zero, but as it increases – corresponding to the prior becoming stronger – more and more of them have a value of 0.
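
A hedged sketch of the same experiment, assuming scikit-learn is available (its Lasso estimator calls the prior weight alpha, and the synthetic data, in which only 3 of the 10 inputs matter, is invented): as the prior weight grows, more and more coefficients become exactly zero.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
E, P = 200, 10
X = rng.normal(size=(E, P))
true_c = np.zeros(P)
true_c[:3] = [2.0, -1.5, 0.7]          # only 3 of the 10 inputs matter
y = X @ true_c + rng.normal(scale=0.1, size=E)

# Refit with an increasingly strong l_1 prior and count how many
# coefficients are driven to exactly zero.
for alpha in [0.001, 0.01, 0.1, 1.0]:
    model = Lasso(alpha=alpha).fit(X, y)
    n_zero = int(np.sum(model.coef_ == 0))
    print(f"prior weight {alpha}: {n_zero} of {P} coefficients are exactly zero")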

There are a couple of other reasons for wanting sparse models. The obvious one is speed of model evaluation, although this is much less significant with modern computing power. A less obvious reason is that one can often only effectively utilise a sparse model, e.g., if you’re attempting to see how the input factors should be physically modified in order to affect the real system in a particular way. In this case we might want a good sparse model rather than an excellent dense model.

Utility functions and decision theory

While there are some situations where a model is sought purely to develop knowledge of the universe, in many cases we are interested in models in order to direct actions. For example, having forewarning of El Niño events would enable all sorts of mitigation actions. However, these actions are costly so they shouldn’t be undertaken when there isn’t an upcoming El Niño. When presented with an unseen input the model can either match the actual output (i.e., be right) or differ from the actual output (i.e., be wrong). While it’s impossible to know in advance if a single output will be right or wrong – if we could tell that we’d be better off using that in our model – from the training data it’s generally possible to estimate the fractions of predictions that will be right and will be wrong in a large number of uses. So we want to link these probabilities with the effects of actions taken in response to model predictions.

We can do this using a utility function and a loss function. The utility maps each possible output to a numerical value proportional to the benefit from taking actions when that output was correctly anticipated. The loss maps outputs to a number proportional to the loss from the actions when the output was incorrectly predicted by the model. (There is evidence that human beings often have inconsistent utility and loss functions, but that’s a story for another day…)

There are three common ways the utility and loss functions are used:

• Maximising the expected value of the utility (for the fraction where the prediction is correct) minus the expected value of the loss (for the fraction where the prediction is incorrect).

• Minimising the expected loss while ensuring that the expected utility is at least some value.

• Maximising the expected utility while ensuring that the expected loss is at most some value.

Once we’ve chosen which one we want, it’s often possible to actually tune the fitting of the model to optimize with respect to that criterion.
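
As a sketch of the first criterion (the utility and loss numbers, the validation data, and the use of a simple probability threshold are all invented for illustration), one can pick the decision threshold that maximizes expected utility minus expected loss on held-out predictions:

import numpy as np

rng = np.random.default_rng(6)

# Invented validation data: true outcomes (1 = event, 0 = no event) and the
# model's estimated probability of an event for each case.
truth = rng.integers(0, 2, size=500)
prob = np.clip(truth * 0.6 + rng.normal(0.2, 0.25, size=500), 0, 1)

# Invented utilities/losses: benefit of a correct prediction and cost of a
# wrong one, for each of the two possible outputs.
utility = {0: 1.0, 1: 5.0}   # correctly predicting "no event" / "event"
loss = {0: 4.0, 1: 2.0}      # wrongly predicting "no event" / "event"

def expected_value(threshold):
    pred = (prob >= threshold).astype(int)
    correct = pred == truth
    gain = sum(utility[p] for p in pred[correct])
    cost = sum(loss[p] for p in pred[~correct])
    return (gain - cost) / len(truth)

thresholds = np.linspace(0.05, 0.95, 19)
best = max(thresholds, key=expected_value)
print("best threshold:", best, "expected value per case:", expected_value(best))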

Of course sometimes when building a model we don’t know enough details of how it will be used to get accurate utility and loss functions (or indeed know how it will be used at all).

Inferring a physical model from a machine learning model

It is certainly possible to take a predictive model obtained by machine learning and use it to figure out a physically based model; this is one way of performing data mining. However, in practice there are several reasons why it’s necessary to take some care when doing this:

• The variables in the training set may be related by some non-observed latent variables which may be difficult to reconstruct without knowledge of the physical laws that are in play. (There are machine learning techniques which attempt to reconstruct unknown latent variables but this is a much more difficult problem than estimating known but unobserved latent variables.)

• Machine learning models have a maddening ability to find variables that are predictive due to the way the data was gathered. For example, in a vision system aimed at finding tanks all the images of tanks were taken during one day on a military base when there was accidentally a speck of grime on the camera lens, while all the images of things that weren’t tanks were taken on other days. A neural net cunningly learned that to decide if it was being shown a tank it should look for the shadow from the grime.

• It’s common to have some groups of very highly correlated input variables. In that case a model will generally learn a function which utilises an arbitrary linear combination of the correlated variables, and an equally good model would result from using any other linear combination. (This is an example of the statistical problem of ‘identifiability’; see the sketch after this list.) Certain sparsity encouraging priors have the useful property of encouraging the model to select only one representative from a group of correlated variables. However, even in that case it’s still important not to assign too much significance to the particular division of model parameters in groups of correlated variables.

• One can often come up with good machine learning models even when physically important variables haven’t been collected in the training data. A related issue is that if all the training data is collected from a particular subspace, factors that aren’t important there won’t be found. For example, if in a collision system to be modelled all data is collected at low speeds, the machine learning model won’t learn about relativistic effects that only have a big effect at a substantial fraction of the speed of light.
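
Returning to the correlated-variables point above, here is a small sketch with invented data: two nearly identical inputs and an outcome that depends only on their common signal. Repeated fits split the weight between the two inputs in essentially arbitrary ways, even though every fit predicts equally well.

import numpy as np

rng = np.random.default_rng(7)

def fit_once():
    n = 200
    signal = rng.normal(size=n)
    # Two inputs that are almost perfect copies of one another.
    x1 = signal + rng.normal(scale=0.01, size=n)
    x2 = signal + rng.normal(scale=0.01, size=n)
    y = signal + rng.normal(scale=0.1, size=n)
    X = np.column_stack([x1, x2])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# Each run gives a different split of weight between x1 and x2, but the
# coefficients sum to roughly 1 -- only the combination is identified.
for _ in range(5):
    c1, c2 = fit_once()
    print(f"c1={c1:+.2f}  c2={c2:+.2f}  c1+c2={c1 + c2:.2f}")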

Conclusions

All of the ideas discussed above are really just ways of making sure that work developing statistical/machine learning models for a real problem is producing meaningful results. As Bob Dylan (almost) sang, “to live outside the physical law, you must be honest; I know you always say that you agree”.


Unreliable Biomedical Research

13 January, 2014

An American drug company, Amgen, that tried to replicate 53 landmark studies in cancer was able to reproduce the original results in only 6 cases—even though they worked with the original researchers!

That’s not all. Scientists at the pharmaceutical company Bayer were able to reproduce the published results in just a quarter of 67 studies!

How could things be so bad? The picture here shows two reasons:

If most interesting hypotheses are false, a lot of positive results will be ‘false positives’. Negative results may be more reliable. But few people publish negative results, so we miss out on those!

And then there’s wishful thinking, sloppiness and downright fraud. Read this Economist article for more on the problems—and how to fix them:

Trouble at the lab, Economist, 18 October 2013.

That’s where I got the picture above.


Levels of Excellence

29 September, 2013

Over on Google+, a computer scientist at McGill named Artem Kaznatcheev passed on this great description of what it’s like to learn math, written by someone who calls himself ‘man after midnight’:

The way it was described to me when I was in high school was in terms of ‘levels’.

Sometimes, in your mathematics career, you find that your slow progress, and careful accumulation of tools and ideas, has suddenly allowed you to do a bunch of new things that you couldn’t possibly do before. Even though you were learning things that were useless by themselves, when they’ve all become second nature, a whole new world of possibility appears. You have “leveled up”, if you will. Something clicks, but now there are new challenges, and now, things you were barely able to think about before suddenly become critically important.

It’s usually obvious when you’re talking to somebody a level above you, because they see lots of things instantly when those things take considerable work for you to figure out. These are good people to learn from, because they remember what it’s like to struggle in the place where you’re struggling, but the things they do still make sense from your perspective (you just couldn’t do them yourself).

Talking to somebody two levels above you is a different story. They’re barely speaking the same language, and it’s almost impossible to imagine that you could ever know what they know. You can still learn from them, if you don’t get discouraged, but the things they want to teach you seem really philosophical, and you don’t think they’ll help you—but for some reason, they do.

Somebody three levels above is actually speaking a different language. They probably seem less impressive to you than the person two levels above, because most of what they’re thinking about is completely invisible to you. From where you are, it is not possible to imagine what they think about, or why. You might think you can, but this is only because they know how to tell entertaining stories. Any one of these stories probably contains enough wisdom to get you halfway to your next level if you put in enough time thinking about it.

What follows is my rough opinion on how this looks in a typical path towards a Ph.D. in math. Obviously this is rather subjective, and makes math look too linear, but I think it’s a useful thought experiment.

Consider the change that a person undergoes in first mastering elementary algebra. Let’s say that that’s one level. This student is now comfortable with algebraic manipulation and the idea of variables.

The next level may come somewhere during a first calculus course. The student now understands the concept of the infinitely small, of slope at a point, and can reason about areas, physical motion, and optimization.

Many stop here, believing that they have finally learned math. Those who do not stop, might proceed through multivariable calculus and perhaps a basic linear algebra course with the tools they currently possess. Their next level comes when they find themselves suffering through an abstract algebra course, and have to once again reshape their whole thought process just to squeak by with a C.

Once this student masters all of that, the rest of the undergraduate curriculum at their university might be a breeze. But not so with graduate school. They gain a level their first year. They gain another their third year. And they are horrified to discover that they are expected to gain a third level before they graduate. This level is the hardest of them all, because it is the first one that consists in mastering material that has been created largely by the student.

I don’t know how many levels there are after that. At least three.

So, the bad news is, you never do see the whole picture (though you see the old picture shrink down to a tiny point), and you can’t really explain what you do see. But the good news is that the world of mathematics is so rich and exciting and wonderful that even your wildest dreams about it cannot possibly compare. It is not like seeing the Matrix—it is like seeing the Matrix within the Matrix within the Matrix within the Matrix within the Matrix.

As he points out, this talk of ‘levels’ is too linear. You can be much better at algebraic geometry than your friend, but way behind them in probability theory. Or even within a field like algebraic geometry, you might be able to understand sheaf cohomology better than your friend, yet still way behind in some classical topic like elliptic curves.

To have worthwhile conversations with someone who is not evenly matched with you in some subject, it’s often good for one of you to play ‘student’ while the other plays ‘teacher’. Playing teacher is an ego boost, and it helps organize your thoughts – but playing student is a great way to amass knowledge and practice humility… and a good student can help the teacher think about things in new ways.

Taking turns between who is teacher and who is student helps keep things from becoming unbalanced. And it’s especially fun when some subject can only be understood with the combined knowledge of both players.

I have a feeling good mathematicians spend a lot of time playing these games—we often hear of famous teams like Atiyah, Bott and Singer, or even bigger ones like the French collective called ‘Bourbaki’. For about a decade, I played teacher/student games with James Dolan, and it was really productive. I should probably find a new partner to learn the new kinds of math I’m working on now. Trying to learn things by yourself is a huge disadvantage if you want to quickly rise to higher ‘levels’.

If we took things a bit more seriously and talked about them more, maybe a lot of us could get better at things faster.

Indeed, after I passed on these remarks, T.A. Abinandanan, a professor of materials science in Bangalore, pointed out this study on excellence in swimming:

• Daniel Chambliss, The mundanity of excellence.

Chambliss emphasizes that in swimming there really are discrete levels of excellence, because there are different kinds of swimming competitions, each with their own different ethos. Here are some of his other main points:

1) Excellence comes from qualitative changes in behavior, not just quantitative ones. More time practicing is not good enough. Nor is simply moving your arms faster! A low-level breaststroke swimmer does very different things than a top-ranked one. The low-level swimmer tends to pull her arms far back beneath her, kick the legs out very wide without bringing them together at the finish, lift herself high out of the water on the turn, and fail to go underwater for a long ways after the turn. The top-ranked one sculls her arms out to the side and sweeps back in, kicks narrowly with the feet finishing together, stays low on the turns, and goes underwater for a long distance after the turn. They’re completely different!

2) The different levels of excellence in swimming are like different worlds, with different rules. People can move up or down within a level by putting in more or less effort, but going up a level requires something very different—see point 1).

3) Excellence is not the product of socially deviant personalities. The best swimmers aren’t “oddballs,” nor are they loners—kids who have given up “the normal teenage life”.

4) Excellence does not come from some mystical inner quality of the athlete. Rather, it comes from learning how to do lots of things right.

5) The best swimmers are more disciplined. They’re more likely to be strict with their training, come to workouts on time, watch what they eat, sleep regular hours, do proper warmups before a meet, and the like.

6) Features of the sport that low-level swimmers find unpleasant, excellent swimmers enjoy. What others see as boring – swimming back and forth over a black line for two hours, say – the best swimmers find peaceful, even meditative, or challenging, or therapeutic. They enjoy hard practices, look forward to difficult competitions, and try to set difficult goals.

7) The best swimmers don’t spend a lot of time dreaming about big goals like winning the Olympics. They concentrate on “small wins”: clearly defined minor achievements that can be rather easily done, but produce real effects.

8) The best swimmers don’t “choke”. Faced with what seems to be a tremendous challenge or a strikingly unusual event such as the Olympic Games, they take it as a normal, manageable situation. One way they do this is by sticking to the same routines. Chambliss calls this the “mundanity of excellence”.

I’ve just paraphrased chunks of the paper. The whole thing is worth reading! I can’t help wondering how much these lessons apply to other areas. He gives an example that could easily apply to mathematics—a

more personal example of failing to maintain a sense of mundanity, from the world of academia: the inability to finish the doctoral thesis, the hopeless struggle for the magnum opus. Upon my arrival to graduate school some 12 years ago, I was introduced to an advanced student we will call Michael. Michael was very bright, very well thought of by his professors, and very hard working, claiming (apparently truthfully) to log a minimum of twelve hours a day at his studies. Senior scholars sought out his comments on their manuscripts, and their acknowledgements always mentioned him by name. All the signs pointed to a successful career. Yet seven years later, when I left the university, Michael was still there—still working 12 hours a day, only a bit less well thought of. At last report, there he remains, toiling away: “finishing up,” in the common expression.

In our terms, Michael could not maintain his sense of mundanity. He never accepted that a dissertation is a mundane piece of work, nothing more than some words which one person writes and a few other people read. He hasn’t learned that the real exams, the true tests (such as the dissertation requirement) in graduate school are really designed to discover whether at some point one is willing just to turn the damn thing in.


Why Most Published Research Findings Are False

11 September, 2013

My title here is the eye-catching—but exaggerated!—title of this well-known paper:

• John P. A. Ioannidis, Why most published research findings are false, PLoS Medicine 2 (2005), e124.

It’s open-access, so go ahead and read it! Here is his bold claim:

Published research findings are sometimes refuted by subsequent evidence, with ensuing confusion and disappointment. Refutation and controversy is seen across the range of research designs, from clinical trials and traditional epidemiological studies to the most modern molecular research. There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims. However, this should not be surprising. It can be proven that most claimed research findings are false. Here I will examine the key factors that influence this problem and some corollaries thereof.

He’s not really talking about all ‘research findings’, just research that uses the

ill-founded strategy of claiming conclusive research findings solely on the basis of a single study assessed by formal statistical significance, typically for a p-value less than 0.05.

His main interests are medicine and biology, but many of the problems he discusses are more general.

His paper is a bit technical—but luckily, one of the main points was nicely explained in the comic strip xkcd:


If you try 20 or more things, you should not be surprised when an event with probability less than 0.05 = 1/20 happens at least once! It’s nothing to write home about… and nothing to write a scientific paper about.
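
A sketch of that arithmetic as a simulation (no real data involved): with 20 independent tests of true null hypotheses at the 0.05 level, the chance of at least one ‘significant’ result is about 1 - 0.95^20, roughly 64%.

import numpy as np

rng = np.random.default_rng(8)

n_experiments = 10000   # repeat the whole "20 jelly bean colours" study many times
n_tests = 20            # 20 colours tested per study
alpha = 0.05

# Under the null hypothesis, each test's p-value is uniform on [0, 1].
p_values = rng.uniform(size=(n_experiments, n_tests))
at_least_one_hit = (p_values < alpha).any(axis=1).mean()

print("fraction of studies with >= 1 'significant' result:", at_least_one_hit)
print("theoretical value:", 1 - (1 - alpha) ** n_tests)  # about 0.64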

Even researchers who don’t make this mistake deliberately can do it accidentally. Ioannidis draws several conclusions, which he calls corollaries:

Corollary 1: The smaller the studies, the less likely the research findings are to be true. (If you test just a few jelly beans to see which ones ‘cause acne’, you can easily fool yourself.)

Corollary 2: The smaller the effects being measured, the less likely the research findings are to be true. (If you’re studying whether jelly beans cause just a tiny bit of acne, you can easily fool yourself.)

Corollary 3: The more quantities there are to find relationships between, the less likely the research findings are to be true. (If you’re studying whether hundreds of colors of jelly beans cause hundreds of different diseases, you can easily fool yourself.)

Corollary 4: The greater the flexibility in designing studies, the less likely the research findings are to be true. (If you use lots and lots of different tricks to see if different colors of jelly beans ‘cause acne’, you can easily fool yourself.)

Corollary 5: The more financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true. (If there’s huge money to be made selling acne-preventing jelly beans to teenagers, you can easily fool yourself.)

Corollary 6: The hotter a scientific field, and the more scientific teams involved, the less likely the research findings are to be true. (If lots of scientists are eagerly doing experiments to find colors of jelly beans that prevent acne, it’s easy for someone to fool themselves… and everyone else.)

Ioannidis states his corollaries in more detail; I’ve simplified them to make them easy to understand, but if you care about this stuff, you should read what he actually says!

The Open Science Framework

Since his paper came out—and many others on this general theme—people have gotten more serious about improving the quality of statistical studies. One effort is the Open Science Framework.

Here’s what their website says:

The Open Science Framework (OSF) is part network of research materials, part version control system, and part collaboration software. The purpose of the software is to support the scientist’s workflow and help increase the alignment between scientific values and scientific practices.

Document and archive studies.

Move the organization and management of study materials from the desktop into the cloud. Labs can organize, share, and archive study materials among team members. Web-based project management reduces the likelihood of losing study materials due to computer malfunction, changing personnel, or just forgetting where you put the damn thing.

Share and find materials.

With a click, make study materials public so that other researchers can find, use and cite them. Find materials by other researchers to avoid reinventing something that already exists.

Detail individual contribution.

Assign citable, contributor credit to any research material – tools, analysis scripts, methods, measures, data.

Increase transparency.

Make as much of the scientific workflow public as desired – as it is developed or after publication of reports. Find public projects here.

Registration.

Registering materials can certify what was done in advance of data analysis, or confirm the exact state of the project at important points of the lifecycle such as manuscript submission or at the onset of data collection. Discover public registrations here.

Manage scientific workflow.

A structured, flexible system can provide efficiency gain to workflow and clarity to project objectives, as pictured.

CONSORT

Another group trying to improve the quality of scientific research is CONSORT, which stands for Consolidated Standards of Reporting Trials. This is mainly aimed at medicine, but it’s more broadly applicable.

The key here is the “CONSORT Statement”, a 25-point checklist saying what you should have in any paper about a randomized controlled trial, and a flow chart saying a bit about how the experiment should work.

What else?

What are the biggest other efforts that are being made to improve the quality of scientific research?


The Selected Papers Network (Part 4)

29 July, 2013

guest post by Christopher Lee

In my last post, I outlined four aspects of walled gardens that make them very resistant to escape:

• walled gardens make individual choice irrelevant, by transferring control to the owner, and tying your one remaining option (to leave the container) to being locked out of your professional ecosystem;

• all the competition are walled gardens;

• walled garden competition is winner-take-all;

• even if the “good guys” win (build the biggest walled garden), they become “bad guys” (masters of the walled garden, whose interests become diametrically opposed to those of the people stuck in their walled garden).

To state the obvious: even if someone launched a new site with the perfect interface and features for an alternative system of peer review, it would probably starve to death both for lack of users and lack of impact. Even for the rare user who found the site and switched all his activity to it, he would have little or no impact because almost no one would see his reviews or papers. Indeed, even if the Open Science community launched dozens of sites exploring various useful new approaches for scientific communication, that might make Open Science’s prospects worse rather than better. Since each of these sites would in effect be a little walled garden (for reasons I outlined last time), their number and diversity would mainly serve to fragment the community (i.e. the membership and activity on each such site might be ten times less than it would have been if there were only a few such sites). When your strengths (diversity; lots of new ideas) act as weaknesses, you need a new strategy.

SelectedPapers.net is an attempt to offer such a new strategy. It represents only about two weeks of development work by one person (me), and has only been up for about a month, so it can hardly be considered the last word in the manifold possibilities of this new strategy. However, this bare bones prototype demonstrates how we can solve the four ‘walled garden dilemmas’:

Enable walled-garden users to ‘levitate’—be ‘in’ the walled garden but ‘above’ it at the same time. There’s nothing mystical about this. Think about it: that’s what search engines do all the time—a search engine pulls material out of all the world’s walled gardens, and gives it a new life by unifying it based on what it’s about. All selectedpapers.net does is act as a search engine that indexes content by what paper and what topics it’s about, and who wrote it.

This enables isolated posts by different people to come together in a unified conversation about a specific paper (or topic), independent of what walled gardens they came from—while simultaneously carrying on their full, normal life in their original walled garden.

Concretely, rather than telling Google+ users (for example) they should stop posting on Google+ and post only on selectedpapers.net instead (which would make their initial audience plunge to near zero), we tell them to add a few tags to their Google+ post so selectedpapers.net can easily index it. They retain their full Google+ audience, but they acquire a whole new set of potential interactions and audience (trivial example: if they post on a given paper, selectedpapers.net will display their post next to other people’s posts on the same paper, resulting in all sorts of possible crosstalk).

Some people have expressed concern that selectedpapers.net indexes Google+, rightly pointing out that Google+ is yet another walled garden. Doesn’t that undercut our strategy to escape from walled gardens? No. Our strategy is not to try to find a container that is not a walled garden; our strategy is to ‘levitate’ content from walled gardens. Google+ may be a walled garden in some respects, but it allows us to index users’ content, which is all we need.

It should be equally obvious that selectedpapers.net should not limit itself to Google+. Indeed, why should a search engine restrict itself to anything less than the whole world? Of course, there’s a spectrum of different levels of technical challenges for doing this. And this tends to produce an 80-20 rule, where 80% of the value can be attained by only 20% of the work. Social networks like Google+, Twitter etc. provide a large portion of the value (potential users), for very little effort—they provide open APIs that let us search their indexes, very easily. Blogs represent another valuable category for indexing.

More to the point, far more important than technology is building a culture where users expect their content to ‘fly’ unrestricted by walled-garden boundaries, and adopt shared practices that make that happen easily and naturally. Tagging is a simple example of that. By putting the key metadata (paper ID, topic ID) into the user’s public content, in a simple, standard way (as opposed to hidden in the walled garden’s proprietary database), tagging makes it easy for anyone and everyone to index it. And the more users get accustomed to the freedom and benefits this provides, the less willing they’ll be to accept walled gardens’ trying to take ownership (i.e. control) of the users’ own content.

Don’t compete; cooperate: if we admit that it will be extremely difficult for a small new site (like selectedpapers.net) to compete with the big walled gardens that surround it, you might rightly ask, what options are left? Obviously, not to compete. But concretely, what would that mean?

☆ enable users in a walled garden to liberate their own content by tagging and indexing it;

☆ add value for those users (e.g. for mathematicians, give them LaTeX equation support);

☆ use the walled garden’s public channel as your network transport—i.e. build your community within and through the walled garden’s community.

This strategy treats the walled garden not as a competitor (to kill or be killed by) but instead as a partner (that provides value to you, and that you in turn add value to). Moreover, since this cooperation is designed to be open and universal rather than an exclusive partnership (concretely, anyone could index selectedpapers.net posts, because they are public), we can best describe this as public data federation.

Any number of sites could cooperate in this way, simply by:

☆ sharing a common culture of standard tagging conventions;

☆ treating public data (i.e. viewable by anybody on the web) as public (i.e. indexable by anybody);

☆ drawing on the shared index of global content (i.e. when the index has content that’s relevant to your site’s users, let them see and interact with it).

To anyone used to the traditional challenges of software interoperability, this might seem like a tall order—it might take years of software development to build such a data federation. But consider: by using Google+’s open API, selectedpapers.net has de facto established such a data federation with Google+, one of the biggest players in the business. Following the checklist:

☆ selectedpapers.net offers a very simple tagging standard, and more and more Google+ users are trying it;

☆ Google+ provides the API that enables public posts to be searched and indexed. Selectedpapers.net in turn assures that posts made on selectedpapers.net are visible to Google+ by simply posting them on Google+;

☆ Selectedpapers.net users can see posts from (and have discussions with) Google+ users who have never logged into (or even heard of) selectedpapers.net, and vice versa.

Now consider: what if someone set up their own site based on the open source selectedpapers.net code (or even wrote their own implementation of our protocol from scratch). What would they need to do to ensure 100% interoperability (i.e. our three federation requirements above) with selectedpapers.net? Nothing. That federation interoperability is built into the protocol design itself. And since this is federation, that also means they’d have 100% interoperation with Google+ as well. We can easily do so also with Twitter, WordPress, and other public networks.

There are lots of relevant websites in this space. Which of them can we actually federate with in this way? This divides into two classes: those that have open APIs vs. those that don’t. If a walled garden has an API, you can typically federate with it simply by writing some code to use their API, and encouraging its users to start tagging. Everybody wins: the users gain new capabilities for free, and you’ve added value to that walled garden’s platform. For sites that lack such an API (typically smaller sites), you need more active cooperation to establish a data exchange protocol. For example, we are just starting discussions with arXiv and MathOverflow about such ‘federation’ data exchange.

To my mind, the most crucial aspect of this is sincerity: we truly wish to cooperate with (add value to) all these walled garden sites, not to compete with them (harm them). This isn’t some insidious commie plot to infiltrate and somehow destroy them. The bottom line is that websites will only join a federation if it benefits them, by making their site more useful and more attractive to users. Re-connecting with the rest of the world (in other walled gardens) accomplishes that in a very fundamental way. The only scenario I see where this would not seem advantageous, would be for a site that truly believes that it is going to achieve market dominance across this whole space (‘one walled garden to rule them all’). Looking over the landscape of players (big players like Google, Twitter, LinkedIn, Facebook, vs. little players focused on this space like Mendeley, ResearchGate, etc.), I don’t think any of the latter can claim this is a realistic plan—especially when you consider that any success in that direction will just make all other players federate together in self-defense.

Level the playing field: these considerations lead naturally to our third concern about walled gardens: walled garden competition strongly penalizes new, small players, and makes bigger players assume a winner-takes-all outcome. Concretely, selectedpapers.net (or any other new site) is puny compared with, say, Mendeley. However, the federation strategy allows us to turn that on its head. Mendeley is puny compared with Google+, and selectedpapers.net operates in de facto federation with Google+. How likely is it that Mendeley is going to crush Google+ as a social network where people discuss science? If a selectedpapers.net user could only post to other selectedpapers.net members (a small audience), then Mendeley wins by default. But that’s not how it works: a selectedpapers.net user has all of Google+ as his potential audience. In a federation strategy, the question isn’t how big you are, but rather how big your federation is. And in this day of open APIs, it is really easy to extend that de facto federation across a big fraction of the world’s social networks. And that is a level playing field.

Provide no point of control: our last concern about walled gardens was that they inevitably create a divergence of interests for the winning garden’s owner vs. the users trapped inside. Hence the best of intentions (great ideas for building a wonderful community) can truly become the road to hell—an even better walled garden. After all, that’s how the current walled garden system evolved (from the reasonable and beneficial idea of establishing journals). If any one site ‘wins’, our troubles will just start all over again. Is there any alternative?

Yes: don’t let any one site win; only build a successful federation. Since user data can flow freely throughout the federation, users can move freely within the federation, without losing their content, accumulated contacts and reputation, in short, their professional ecosystem. If a successful site starts making policies that are detrimental to users, they can easily vote with their feet. The data federation re-establishes the basis for a free market, namely unconstrained individual freedom of choice.

The key is that there is no central point of control. No one ‘owns’ (i.e. controls) the data. It will be stored in many places. No one can decide to start denying it to someone else. Anyone can access the public data under the rules of the federation. Even if multiple major players conspired together, anyone else could set up an alternative site and appeal to users: vote with your feet! As we know from history, the problem with senates and other central control mechanisms is that given enough time and resources, they can be corrupted and captured by both elites and dictators. Only a federation system with no central point of control has a basic defense: regardless of what happens at ‘the top’, all individuals in the system have freedom of choice between many alternatives, and anybody can start a new alternative at any time. Indeed, the key red flag in any such system is when the powers-that-be start pushing all sorts of new rules that hinder people from starting new alternatives, or freely migrating to alternatives.

Note that implicit in this is an assertion that a healthy ecosystem should contain many diverse alternative sites that serve different subcommunities, united in a public data federation. I am not advocating that selectedpapers.net should become the ‘one paper index to rule them all’. Instead, I’m saying we need one successful exemplar of a federated system, that can help people see how to move their content beyond the walled garden and start ‘voting with their feet’.

So: how do we get there? In my view, we need to use selectedpapers.net to prove the viability of the federation model in two ways:

☆ we need to develop the selectedpapers.net interface to be a genuinely good way to discuss scientific papers, and subscribe to others’ recommendations. It goes without saying that the current interface needs lots of improvements, e.g. to work past some of Google+’s shortcomings. Given that the current interface took only a couple of weeks of hacking by just one developer (yours truly), this is eminently doable.

☆ we need to show that selectedpapers.net is not just a prisoner of Google+, but actually an open federation system, by adding other systems to the federation, such as Twitter and independent blogs. Again, this is straightforward.

To Be or Not To Be?

All of which brings us to the real question that will determine our fates. Are you for a public data federation, or not? In my view, if you seriously want reform of the current walled garden system, federation is the only path forward that is actually a path forward (instead of to just another walled garden). It is the only strategy that allows the community to retain control over its own content. That is fundamental.

And if you do want a public data federation, are you willing to work for that outcome? If not, then I think you don’t really want it—because you can contribute very easily. Even just adding #spnetwork tags to your posts—wherever you write them—is a very valuable contribution that enormously increases the value of the federation ecosystem.

One more key question: who will join me in developing the selectedpapers.net platform (both the software, and federation alliances)? As long as selectedpapers.net is a one-man effort, it must fail. We don’t need a big team, but it’s time to turn the project into a real team. The project has solid foundations that will enable rapid development of new federation partnerships—e.g. exciting, open APIs like REST—and of seamless, intuitive user interfaces—such as the MongoDB noSQL database, and AJAX methods. A small, collaborative team will be able to push this system forward quickly in exciting, useful ways. If you jump in now, you can be one of the very first people on the team.

I want to make one more appeal. Whatever you think about selectedpapers.net as it exists today, forget about it.

Why? Because it’s irrelevant to the decision we need to make today: public data federation, yes or no? First, because the many flaws of the current selectedpapers.net have almost no bearing on that critical question (they mainly reflect the limitations of a version 0.1 alpha product). Second, because the whole point of federation is to ‘let a thousand flowers bloom’— to enable a diverse ecology of different tools and interfaces, made viable because they work together as a federation, rather than starving to death as separate, warring, walled gardens.

Of course, to get to that diverse, federated ecosystem, we first have to prove that one federated system can succeed—and liberate a bunch of minds in the process, starting with our own. We have to assemble a nucleus of users who are committed to making this idea succeed by using it, and a team of developers who are driven to build it. Remember, talking about the federation ideal will not by itself accomplish anything. We have to act, now; specifically, we have to quickly build a system that lets more and more people see the direct benefits of public data federation. If and when that is clearly successful, and growing sustainably, we can consider branching out, but not before.

For better or worse, in a world of walled gardens, selectedpapers.net is the one effort (in my limited knowledge) to do exactly that. It may be ugly, and annoying, and alpha, but it offers people a new and different kind of social contract than the walled gardens. (If someone can point me to an equivalent effort to implement the same public data federation strategy, we will of course be delighted to work with them! That’s what federation means).

The question now for the development of public data federation is whether we are working together to make it happen, or on the contrary whether we are fragmenting and diffusing our effort. I believe that public data federation is the Manhattan Project of the war for Open Science. It really could change the world in a fundamental and enduring way. Right now the world may seem headed the opposite direction (higher and higher walls), but it does not have to be that way. I believe that all of the required ingredients are demonstrably available and ready to go. The only remaining requirement is that we rise as a community and do it.

I am speaking to you, as one person to another. You as an individual do not even have the figleaf of saying “Well, if I do this, what’s the point? One person can’t have any impact.” You as an individual can change this project. You as an individual can change the world around you through what you do on this project.


The Selected Papers Network (Part 3)

12 July, 2013

guest post by Christopher Lee

A long time ago in a galaxy far, far away, scientists (and mathematicians) simply wrote letters to each other to discuss their findings.

In cultured cities, they formed clubs for the same purpose; at club meetings, particularly juicy letters might be read out in their entirety. Everything was informal (bureaucracy-to-science ratio around zero), individual (each person spoke only for themselves, and made up their own mind), and direct (when Pierre wrote to Johan, or Nikolai to Karl, no one yelled “Stop! It has not yet been blessed by a Journal!”).

To use my nomenclature, it was a selected-papers network. And it worked brilliantly for hundreds of years, despite wars, plagues and severe network latency (ping times of 10^9 msec).

Even work we consider “modern” was conducted this way, almost to the twentieth century: for example, Darwin’s work on evolution by natural selection was “published” in 1858, by his friends arranging a reading of it at a meeting of the Linnean Society. From this point of view, it’s the current journal system that’s a historical anomaly, and a very recent one at that.

I’ll spare you an essay on the problems of the current system. Instead I want to focus on the practical question of how to change the system. The nub of the question is a conundrum: how is it, that just as the Internet is reducing publication and distribution costs to zero, Elsevier, the Nature group and other companies have been aggressively raising subscription prices (for us to read our own articles!), in many cases to extortionate levels?

That publishing companies would seek to outlaw Open Access rules via cynical legislation like the “Research Works Act” goes without saying; that they could blithely expect the market to buy a total divorce of price vs. value reveals a special kind of economic illogic.

That illogic has a name: the Walled Garden—and it is the immovable object we are up against. Any effort we make must be informed by careful study of what makes its iniquities so robust.

I’ll start by reviewing some obvious but important points.

A walled garden is an empty container that people are encouraged to fill with their precious content—at which point it stops being “theirs”, and becomes the effective property of whoever controls the container. The key word is control. When Pierre wrote a letter to Johan, the idea that they must pay some ignoramus $40 for the privilege would have been laughable, because there was no practical way for a third party to control that process. But when you put the same text in a journal, it gains control: it can block Pierre’s letter for any reason (or no reason); and it can lock out Johan (or any other reader) unless he pays whatever price it demands.

Some people might say this is just the “free market” at work—but that is a gross misunderstanding of the walled garden concept. Unless you can point to exactly how the “walls” lock people in, you don’t really understand it. For an author, a free market would be multiple journals competing to consider his paper (just as multiple papers compete for acceptance by a journal). This would be perfectly practical (they could all share the same set of 2-3 referee reports), but that’s not how journals decided to do it. For a reader or librarian, a free market would be multiple journals competing to deliver the same content (same articles): you choose the distributor that provides the best price and service.

Journals simply agree not to compete, by inserting a universal “non-compete clause” in their contract; not only are authors forced to give exclusive rights to one journal, they are not even permitted to seek multiple bids (let more than one journal at a time see the paper). The whole purpose of the walled garden is to eliminate the free market.

Do you want to reform some of the problems of the current system? Then you had better come to grips with the following walled garden principles:

• Walled gardens make individual choice irrelevant, by transferring control to the owner, and tying your one remaining option (to leave the container) to being locked out of your professional ecosystem.

• All the competition are walled gardens.

• Walled garden competition is winner-take-all.

• Even if the “good guys” win and become the biggest walled garden, they become “bad guys”: masters of the walled garden, whose interests become diametrically opposed to those of the people stuck in their walled garden.

To make these ideas concrete, let’s see how they apply to any reform effort such as selectedpapers.net.

Walled gardens make individual choice irrelevant

Say somebody starts a website dedicated to such a reform effort, and you decide to contribute a review of an interesting paper. But a brand-new site, by definition, starts with essentially none of the relevant audience.

Question: what’s the point of writing a review, if it affects nothing and no one will read it? There is no point. Note that if you still choose to make that effort, this will achieve nothing. Individuals choosing to exile themselves from their professional ecosystem have no effect on the Walled Garden. Only a move of the whole ecosystem (a majority) would affect it.

Note this is dramatically different from a free market: even if I, a tiny flea, buy shares of the biggest, most traded company (AAPL, say), on the world’s biggest stock exchange, I immediately see AAPL’s price rise (a tiny bit) in response; when I sell, the price immediately falls in response. A free market is exquisitely sensitive to an individual’s decisions.

This is not an academic question. Many, many people have already tried to start websites with similar “reform” goals as selectedpapers.net. Unfortunately, none of them are gaining traction, for the same reasons that Diaspora has zero chance of beating Facebook.

(If you want to look at one of the early leaders, an open-source effort backed by none other than the Nature Publishing Group, check out Connotea.org. Or, on the flip side, consider the fate of Mendeley.)

For years after writing the Selected-Papers Network paper, I held off from doing anything, because at that time I could not see any path for solving this practical problem.

All the competition are walled gardens

In the physical world, walls do not build themselves, and they have a distressing (or pleasing!) tendency to fall down. In the digital world, by contrast, walls are not the exception but the rule.

A walled garden is simply any container whose data do not automatically interoperate with the outside world. Since it takes very special design to achieve any interoperability at all, nearly all websites are walled gardens by default.

More to the point, if websites A and B are competing with each other, is website A going to give B its crown jewels (its users and data)? No, it’s going to build the walls higher. Note that even if a website is open source (anyone can grab its code and start their own site), it’s still a walled garden because its users and their precious data are stored only on its own site, and cannot get out.

The significance of this for us is that essentially every “reform” solution being pushed at us, from Mendeley on out to idealistic open source sites, is unfortunately in practice a walled garden. And that means users won’t own their own content (in the crucial sense of control); the walled garden will.

Walled garden competition is winner-take-all

All this is made worse by the fact that walled garden competition has a strong tendency towards monopoly. It rewards consolidation and punishes small market players. In social networks, size matters. When a little walled garden tries to compete with a big walled garden, all advantages powerfully aid the big incumbent, even if the little one offers great new features. The whole mechanism of individuals “voting with their feet” can’t operate when the only choice available to them is to jump off a cliff: that is, leave the ecosystem where everyone else is.

Even if you win the walled garden war, the community will lose

Walled gardens intrinsically create a divergence of interests between their owners and their users. By giving the owner control and locking in the users, a walled garden gives the owner a powerful incentive to expand and exploit his control, at the expense of users, with very little recourse for them. For example, I think my own motivations for starting selectedpapers.net are reasonably pure, but if—for the purpose of argument—it were to grow to dominate mathematics, I still don’t think you should let me (or anyone else) own it as a walled garden.

First of all, you probably won’t agree with many of my decisions; second, if Elsevier offers me $100 million, how can you know I won’t just sell you out? That’s what the founders of Mendeley just did. Note this argument applies not just to individuals, but even to the duly elected representatives of your own professional societies. For example, in biology some professional societies have been among the most reactionary in fighting Open Access—because they make most of their money from “their” journals. Because they own a walled garden, their interests align with Elsevier, not with their own members.

Actually that’s the whole story of how we got in this mess in the first place. The journal system was started by good people with good intentions, as the “Proceedings” of their club meetings. But because it introduced a mechanism of control, it became a walled garden, with inevitable consequences. If we devote our efforts to a solution that in practice becomes a walled garden, the consequences will again be inevitable.

Why am I dwelling on all these negatives? Let’s not kid ourselves: this is a hard problem, and we are by no means the first to try to crack it. Most of the doors in this prison have already been tried by smart, hard-working people, and they did not lead out. Obviously I don’t believe there’s no way out, or I wouldn’t have started selectedpapers.net. But I do believe we all need to absorb these lessons, if we’re to have any chance of real success.

Roll these principles over in your mind; wargame the possible pathways for reform and note where they collide with one of these principles. Can you find a reliable way out?

In my next post I’ll offer my own analysis of where I think the weak link is. But I am very curious to hear what you come up with.


Quantitative Reasoning at Yale-NUS College

27 June, 2013

What mathematics should any well-educated person know? It’s rather rare that people have a chance not just to think about this question, but to do something about it. But it’s happening now.

There’s a new college called Yale-NUS College starting up this fall in Singapore, jointly run by Yale College and the National University of Singapore. The buildings aren’t finished yet. Faculty are busily setting up the courses and indeed the whole administrative structure of the university, and I’ve had the privilege of watching some of this and even helping out a bit.

It’s interesting because you usually meet an institution when it’s already formed—and you encounter and learn about only those aspects that matter to you. But in this case, the whole institution is being created, and every aspect discussed. And this is especially interesting because Yale-NUS College is designed to be a ‘liberal arts college for Asia for the 21st century’.

As far as I can tell, there are no liberal arts colleges in Asia. Creating a good one requires rethinking the generally Eurocentric attitudes toward history, philosophy, literature, classics and so on that are built into the traditional idea of the liberal arts. Plus, the whole idea of a liberal arts education needs to be rethought for the 21st century. What should a well-educated person know, and be able to do? Luckily, the faculty of Yale-NUS College are taking a fresh look at this question, and coming up with some new answers.

I’m really excited about the Quantitative Reasoning course that all students will take in the second semester of their first year. It will cover topics like this:

• innumeracy, use of numbers in the media.
• visualizing quantitative data.
• cognitive biases, operationalization.
• qualitative heuristics, cognitive biases, formal logic and mathematical proof.
• formal logic, mathematical proofs.
• probability, conditional probability (Bayes’ rule), gambling and odds.
• decision trees, expected utility, optimal decisions and prospect theory.
• sampling, uncertainty.
• quantifying uncertainty, hypothesis testing, p-values and their limitations, statistical power and significance levels, evaluating evidence.
• correlation and causation, regression analysis.

The idea is not to go into vast detail or to bombard the students with sophisticated mathematical methods, but to help students:

• learn how to criticize and question claims in an informed way;

• learn to think clearly, to understand logical and intuitive reasoning, and to consider appropriate standards of proof in different contexts;

• develop a facility and comfort with a variety of representations of quantitative data, and practical experience in gathering data;

• understand the sources of bias and error in seemingly objective numerical data;

• become familiar with the basic concepts of probability and statistics, with particular emphasis on recognizing when these techniques provide reliable results and when they threaten to mislead us.

They’ll do some easy calculations using R, a programming language optimized for statistics.
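
To give a sense of what such an easy calculation might look like, here is a minimal sketch in Python (the course itself will use R) of one topic from the list above: Bayes’ rule applied to a diagnostic test. All the numbers are made up purely for illustration.

    # A made-up diagnostic test, purely for illustration:
    # 1% of people have a condition; the test catches 95% of true cases
    # (sensitivity) and wrongly flags 5% of healthy people (false positives).
    prior = 0.01            # P(condition)
    sensitivity = 0.95      # P(positive | condition)
    false_positive = 0.05   # P(positive | no condition)

    # Bayes' rule: P(condition | positive)
    #   = P(positive | condition) * P(condition) / P(positive)
    p_positive = sensitivity * prior + false_positive * (1 - prior)
    posterior = sensitivity * prior / p_positive

    print(round(posterior, 3))   # prints 0.161

Even this toy example makes the kind of point the course is after: a test that sounds very accurate still leaves the chance of actually having the condition at only about 16%, because the condition is rare to begin with.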

Most exciting of all to me is how the course will be taught. There will be about 9 teachers. It will be ‘team-based learning’, where students are divided into (carefully chosen) groups of six. A typical class will start with a multiple choice question designed to test the students’ understanding of the material they’ve just studied. Then each team will discuss their answers, while professors walk around and help out; then they’ll take the quiz again; then one professor will talk about that topic.

This idea is called ‘peer instruction’. Some studies have shown this approach works better than the traditional lecture style. I’ve never seen it in action, though my friend Christopher Lee now uses it in his bioinformatics class, and he says it’s great. You can read about its use in physics here:

• Eric Mazur, Physics Education.

I’ll be interested to see it in action starting in August, and later I hope to teach part-time at Yale-NUS College and see how it works for myself!

At the very least, it’s exciting to see people try new things.

