Nudging opens up risks at two opposite extremes, both linked to data and how that data is used. The first is the danger of ignoring variance in the data: valuable elements that may shape our understanding of the underlying phenomenon and the design of the intervention, such as diverse information that is difficult to capture, can be overlooked. At the other extreme, academia may be flirting with discrimination by using group attributes to generalize patterns across individuals who happen to share features with one or more categories. Algorithms pick out data points that make up a small (e.g., high school GPA, major, hometown, residence, financial aid status) or large (e.g., race, socioeconomic status, marital status, gender) portion of an individual’s experience, but should these data points become a factor in the types of nudges used?