Standard models of observational learning in settings of sequential choice have two key features. The first is that players make decisions by using Bayes' rule to update their beliefs about payoffs from a common prior. The second is that each agent's decision rule is common knowledge, so that subsequent players can draw inferences about unobserved private signals from observed actions. In this paper, I relax the first assumption while maintaining the second. Specifically, I study observational learning by players who choose between two actions and estimate payoffs using nonparametric methods. When players are identical and make inferences using the maximum score method, an informational cascade and herd must result. When players of different payoff types use kernel or nearest-neighbor methods, there are cases in which a cascade need not arise; if one does occur, it must be one in which all players, regardless of type, choose the same action. In some situations, these alternative learning rules perform better than Bayesian updating.
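To fix ideas, the kernel-based decision rule mentioned above can be sketched as follows. This is an illustrative sketch only, not the paper's model: the function names, the Gaussian kernel, the bandwidth, and the scalar signal space are all assumptions made for concreteness. Each player estimates the expected payoff of each action via a Nadaraya-Watson kernel regression on previously observed (signal, payoff) pairs, then chooses the action with the higher estimate.

```python
import numpy as np

def kernel_payoff_estimate(signal, past_signals, past_payoffs, bandwidth=0.5):
    """Nadaraya-Watson kernel estimate of the expected payoff at `signal`.

    Illustrative only: the paper's signal space and payoff structure
    are not reproduced here; a Gaussian kernel is assumed.
    """
    past_signals = np.asarray(past_signals, dtype=float)
    past_payoffs = np.asarray(past_payoffs, dtype=float)
    # Gaussian kernel weights on the distance between the current
    # signal and each previously observed signal.
    w = np.exp(-0.5 * ((signal - past_signals) / bandwidth) ** 2)
    return float(np.dot(w, past_payoffs) / w.sum())

def choose_action(signal, history):
    """Pick whichever of two actions has the higher estimated payoff.

    `history` maps each action (0 or 1) to a pair
    (signals observed, payoffs realized) when that action was taken.
    """
    estimates = {a: kernel_payoff_estimate(signal, s, y)
                 for a, (s, y) in history.items()}
    return max(estimates, key=estimates.get)
```

A nearest-neighbor rule would differ only in the weighting step: instead of smooth kernel weights, it would place equal weight on the k observations with signals closest to the current one.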