This list is an attempt to collect and categorize a growing critical literature on algorithms as social concerns. The work included spans sociology, anthropology, science and technology studies, geography, communication, media studies, and legal studies, among others. Our interest in assembling this list was to catalog the emergence of “algorithms” as objects of interest for disciplines beyond mathematics, computer science, and software engineering.
Nudging carries risks at two opposite extremes, both tied to data and how it is used. The first is the danger of ignoring variance in the data: valuable elements that could shape our understanding of the underlying phenomenon and the design of the intervention, such as diverse information that is difficult to capture, can be overlooked. At the other extreme, academia may be flirting with discrimination by using group attributes to generalize patterns across individuals who merely share features with one or more categories. Algorithms pick out data points that make up a small (e.g., high school GPA, major, hometown, residence, financial aid status) or large (e.g., race, socioeconomic status, marital status, gender) portion of an individual’s experience, but should these data points become a factor in the types of nudges used?
In fact, algorithms are now so widespread, and so subtle, that some sociologists worry they function as a form of “social control.” (That, at least, is the title of a keynote at an upcoming academic conference called Theorizing the Web, where technologists and sociologists will discuss “algorithms as a type of social engineering.”)