For startups and big tech firms alike, the concept of success has long been synonymous with one thing: hockey-stick growth. To scale is to be capable of rapid user and revenue growth. However, in a recent paper, Google senior research scientist Alex Hanna and independent researcher Tina Park argue that companies interested in purposes beyond profit need to consider approaches beyond rapid growth.
The paper argues that scale thinking is not just a way to grow a company but a mindset that permeates every part of one, actively inhibits participation in technology and society, and renders certain forms of involvement extractive or exploitative labor.
Scale thinking is all-encompassing, whether individuals are conscious of it or not. It is not just a characteristic of one’s product, service, or company, but frames how one thinks about the world (what it is and how it can be observed and measured), its problems (what counts as an issue worth solving versus not), and the potential technical fixes for those issues, the paper reads.
The paper goes on to suggest that businesses rooted in scale thinking are unlikely to be as successful at deep, structural change as their proponents imagine. Instead, approaches that reject scale thinking are required to dismantle the structural mechanisms that lie at the root of social inequality.
An approach that rejects scale as a necessity runs counter to what is now core orthodoxy at big tech companies such as Facebook and Google, and to how media and analysts frequently measure the importance of emerging startups.
A congressional antitrust investigation published earlier this month cites scale as part of the recipe for the anti-competitive practices by which Big Tech corporations preserve and perpetuate monopolies throughout the digital economy. A complaint brought by the Department of Justice against Google on Tuesday, the first against a major tech firm in two decades, also names the scale achieved through algorithms and the collection of personal user data as a significant part of why the government is suing the Alphabet subsidiary.
Scale evangelists include Y Combinator co-founder Paul Graham, AWS CTO Werner Vogels, and former Google CEO Eric Schmidt, who is quoted in the DOJ lawsuit as calling scale “the key” to Google’s power in search.
The belief that scalability is a moral good, and that solutions that do not scale are morally abject, is rooted in scale thinking, Hanna and Park argue. The authors say that’s part of why artificial intelligence is so highly regarded by major tech firms.
Large tech companies spend much of their time recruiting engineers who can envision ideas that can be implemented algorithmically. Code and algorithms that scale poorly are seen as undesirable and inefficient. Many of Big Tech’s most innovative infrastructure advancements have been those that improve scalability, such as the Google File System (and subsequently the MapReduce computing model) and distributed and federated machine learning models, the paper reads.
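For readers unfamiliar with the pattern the paper cites, here is a minimal, illustrative sketch of MapReduce-style word counting in Python. It is not drawn from the paper, and a real MapReduce system would distribute these phases across many machines; the appeal to scale thinking is that the same small functions work unchanged whether they run on one laptop or ten thousand servers.

```python
from collections import defaultdict
from itertools import chain

# Illustrative toy only: a single-process version of the MapReduce pattern.
# In a distributed deployment, a framework would run many map and reduce
# workers in parallel and handle the shuffle between them.

def map_phase(document):
    # Emit a (word, 1) pair for each word in one document.
    return [(word, 1) for word in document.split()]

def shuffle_phase(pairs):
    # Group values by key, as the framework would between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Combine all counts for one word.
    return key, sum(values)

documents = ["scale thinking frames the world", "scale is the key"]
pairs = chain.from_iterable(map_phase(d) for d in documents)
counts = dict(reduce_phase(k, v) for k, v in shuffle_phase(pairs).items())
print(counts)  # {'scale': 2, 'thinking': 1, 'frames': 1, ...}
```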
Scale thinking is often short-sighted because it requires treating resources and people as interchangeable units, and it pushes those who collect user data to “find ways to rationalize the person into legible data points.” This approach means systems are not built to represent everyone, and it can have a detrimental effect on the lives of people who fall outside the universality that scaling assumes.
Hanna and Park also call scale thinking an ineffective way to improve the recruiting and retention of employees from diverse backgrounds at Big Tech companies. Since the killings of Black Americans like Breonna Taylor and George Floyd earlier this year led to demands for racial justice, a number of big tech firms have renewed diversity targets, but change has been nearly undetectable for years. Examples presented in the paper include an emphasis on the number of bias seminars held, or similar metrics of inclusion, rather than on the experiences of marginalized people within a business.
Instead of scale thinking, the authors advocate alternatives such as mutual aid, which takes an interdependent approach, assumes responsibility for meeting people’s immediate material needs, and rejects the scaling or categorization of people as a North Star. The inspiration for mutual aid as an alternative came in part from the kinds of support networks that have arisen since the COVID-19 global pandemic began.
“While scale thinking emphasizes abstraction and modularity, mutual aid networks foster concretization and interaction,” the paper reads. “While mutual aid is not the only mechanism by which we can suggest a move away from collective work arrangements focused on scale thinking, we find it a fruitful one to theorize and explore.”
In addition to exploring mutual aid, the paper urges developers to ask certain questions about the systems they build: whether a system legitimizes or expands social structures that people are attempting to dismantle, whether it facilitates participation, and whether it centralizes power or distributes it to developers and users.
The paper’s suggestions are in line with a number of ethically centered alternative ways of building technology and AI proposed in recent months by the AI fairness and transparency research community. Others include anti-colonial AI principles that oppose algorithmic colonialism and data colonization, queer machine learning, data feminism, and the construction of AI based on the African philosophy of Ubuntu, which focuses on the interconnectedness of people and the natural world.
Also released earlier this month was a Data & Society primer, “Good Intentions, Bad Inventions,” which seeks to debunk common misconceptions about healthy ways to build technology and what developers can do to promote user well-being.
The paper was highlighted this week at a workshop of the Computer-Supported Cooperative Work and Social Computing (CSCW) conference titled ‘Against Scale: Provocations and Resistances to Scale Thinking’. Before writing critically about scale, Hanna and colleagues at Google published a paper in late 2019 suggesting that the algorithmic fairness community should look to critical race theory as a way to interrogate AI systems and how they influence human lives.