The Politics of Settlement Choice on the Northwest Coast: Cognition, GIS, and Coastal Landscapes

Author(s): Herbert D. G. Maschner

The reasons why evolutionary ecology and, more specifically, optimal foraging theory do not work in many archaeological situations are varied. Most important among them, however, is our lack of understanding of basic human decision-making processes in societies intermediate between bands and states. From evolutionary ecology, we can predict some foraging behavior and thus explain some of the settlement behavior of foraging societies (Mithen 1991; Smith 1991). In states and empires, we can use modern microeconomic theory to predict settlement, trade, and political organization. We have very little understanding, however, of how to predict behavior in societies that fall between these two extremes. One of the basic assumptions of modern economic, geographical, and cultural ecological studies is that humans are energy maximizers. Ecologists view this capacity for economic efficiency as a product of our adaptive evolutionary history (Jochim 1981; Krebs and Davies 1991; Smith and Winterhalder 1992; Stephens and Krebs 1986; Winterhalder and Smith 1981). Support for this assumption is clearly seen in studies of small, mobile foraging societies, where individuals and kin-based groups tend to maximize their economic return through subsistence and settlement behaviors that most would agree are adaptive in that particular context (Jochim 1981; Mithen 1991; Smith 1991). For sedentary communities with more complex political organizations (tribes and simple chiefdoms), however, this is not the case, and the discrepancy is visible archaeologically in settlement and subsistence strategies that do not conform to predictions derived from optimal foraging theory. Thus, an underlying assumption in ecological studies is that models of subsistence economizing behavior and studies of subsistence efficiency will work well for hunter-gatherers (Keene 1981; Winterhalder and Smith 1981) or small-scale horticulturalists (Keegan 1986), but will decrease in explanatory power with increasing social and political complexity. Although this has not been specifically tested, the contention is supported by the fact that optimal foraging theory is less effective in explaining behavior in agricultural and sedentary hunter-gatherer societies (Maschner 1992) and is rarely applied to chiefdoms and states at all.
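To make the foraging-theory baseline concrete, the sketch below implements the standard prey-choice (diet-breadth) model from the behavioral-ecology literature the abstract cites (e.g., Stephens and Krebs 1986): prey types are ranked by profitability (net energy per unit handling time) and added to the diet as long as the next type's profitability exceeds the overall return rate of the current diet. This is a minimal illustration only; the prey names and parameter values are hypothetical and are not drawn from any study discussed here.

```python
# A minimal sketch of the classic prey-choice (diet-breadth) model
# from optimal foraging theory (cf. Stephens and Krebs 1986).
# All prey names and parameter values are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Prey:
    name: str
    energy: float          # net energy per item (kcal)
    handling: float        # handling time per item (hours)
    encounter_rate: float  # encounters per hour of search

    @property
    def profitability(self) -> float:
        # on-encounter return rate: energy gained per hour of handling
        return self.energy / self.handling

def optimal_diet(prey: list[Prey]) -> list[Prey]:
    """Add prey types in rank order of profitability while the next
    type's profitability exceeds the overall return rate of the
    current diet (the standard inclusion rule)."""
    ranked = sorted(prey, key=lambda p: p.profitability, reverse=True)
    diet: list[Prey] = []
    energy_rate = 0.0    # running sum of lambda_i * e_i
    handling_rate = 0.0  # running sum of lambda_i * h_i
    for p in ranked:
        current_rate = energy_rate / (1.0 + handling_rate)
        if p.profitability <= current_rate:
            break  # every lower-ranked type is also excluded
        diet.append(p)
        energy_rate += p.encounter_rate * p.energy
        handling_rate += p.encounter_rate * p.handling
    return diet

# Hypothetical example: three coastal resources.
prey = [
    Prey("salmon", energy=2000, handling=0.5, encounter_rate=0.2),
    Prey("shellfish", energy=300, handling=0.25, encounter_rate=2.0),
    Prey("roots", energy=150, handling=0.5, encounter_rate=3.0),
]
for p in optimal_diet(prey):
    print(f"{p.name}: profitability {p.profitability:.0f} kcal/hr")
```

With these invented parameters the model includes salmon and shellfish but excludes the abundant, low-return roots: exactly the kind of prediction that, per the abstract, tracks the behavior of small mobile foraging groups but breaks down for sedentary communities with more complex political organization.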

Nature, 1977, Vol. 268 (5621), pp. 583–584
Author(s): John Krebs

2016, Vol. 112, pp. 127–138
Author(s): Dahlia Foo, Jayson M. Semmens, John P.Y. Arnould, Nicole Dorville, Andrew J. Hoskins, ...

2017, Vol. 8 (1)
Author(s): Emily Lena Jones, David A. Hurley

The use of optimal foraging theory in archaeology has been criticized for focusing heavily on “negative” human-environmental interactions, particularly anthropogenic resource depression, in which prey populations are reduced by foragers’ own activities. In addition, some researchers have suggested that the focus on resource depression is more common in the zooarchaeological literature than in the archaeobotanical literature, indicating fundamental differences in the ways zooarchaeologists and archaeobotanists approach the archaeological record. In this paper, we assess these critiques through a review of the literature between 1997 and 2017. We find that studies identifying resource depression occur at similar rates in the archaeobotanical and zooarchaeological literature. In addition, while earlier archaeological applications of optimal foraging theory did focus heavily on the identification of resource depression, the literature published between 2013 and 2017 shows a wider variety of approaches.
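The abstract's central comparison, whether resource-depression studies occur at similar rates in two bodies of literature, is the kind of claim one could check with a simple two-sample proportion test. The sketch below shows one conventional way to do so; the study counts are invented for illustration and are not the counts reported by Jones and Hurley.

```python
# A hypothetical sketch of testing whether resource-depression studies
# occur at similar rates in two literatures, via a chi-square test of
# independence. Counts are invented, NOT Jones and Hurley's data.

from scipy.stats import chi2_contingency

# rows: [identifies resource depression, does not]
# cols: [zooarchaeological, archaeobotanical]
counts = [[34, 12],
          [88, 30]]

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
# A large p-value is consistent with similar rates across the two
# literatures; a small one would suggest a genuine difference.
```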

