Our culture is currently biased toward science, math, and engineering as modes of producing new knowledge, often at the expense of the humanities.
These scientific models are good at describing the physical world, but they fall short when it comes to explaining phenomena that aren't easily captured in quantitative data—including human behaviour. Focusing too narrowly on these models—what Christian Madsbjerg calls algorithmic thinking or "scientism"—creates blind spots that inhibit our intuition about people and culture. As Muller argues, we've come to conflate "accountability" with transparent, standardized measurement; in the process, we've devalued human judgment as a way of assessing success. Algorithmic thinking, according to Madsbjerg, gives us a "view from nowhere."
Madsbjerg argues that Silicon Valley is driven by an ideology obsessed with scientism: it rejects the notion of cumulative knowledge and assumes that every new insight means making a clean, disruptive break with what came before. Each new paradigm holds until it is disproved by another, which in turn becomes dominant. Silicon Valley therefore rejects the humanities and their emphasis on recovering knowledge and on understanding how knowledge and power are intertwined.
Many big data evangelists take as their gospel Chris Anderson's 2008 Wired magazine article "The End of Theory," which argued that models and hypotheses have been made obsolete by a superabundance of data—numbers that can explain themselves without any need for induction or interpretation.
Rory Sutherland observes that relying too heavily on scientific models can lead us to overlook "psycho-logic": modes of decision-making that are not rationally optimal but are still useful to human beings.
- Qualitative and quantitative research represent different philosophies of knowledge.
- Qualitative research excels at explanation.
- Quantitative data is lossy: it strips away context in the act of measurement.
- Quantitative analysis is not inherently more reliable than qualitative analysis.
- Qualitative and quantitative research methods each have trade-offs.