Cite as: Orit Halpern, ‘Models, Markets, and Artificial Intelligence: A Brief History of our Speculative Present’, in Breaking and Making Models, ed. by Christoph F. E. Holzhey, Marietta Kesting, and Claudia Peppel, Cultural Inquiry, 33 (Berlin: ICI Berlin Press, 2025), pp. 201–15 <https://doi.org/10.37050/ci-33_08>

Models, Markets, and Artificial Intelligence: A Brief History of our Speculative Present*

Orit Halpern

Abstract

Over the past four decades, the idea that both digital machines and human agents are networked intelligences and parts of self-organizing systems has not only shaped financial markets, but has also been incorporated into economic thinking and artificial intelligence. This has led to what Halpern calls the ‘financialization of cognition’, an economy of attention that reconfigures human agency and decision-making based on a model of contemporary finance and the digital economy.

Keywords: financial market; model; artificial intelligence; neoliberal economics; neural networks; machine learning; algorithm; Hayek, Friedrich

Research for this article was supported by the Mellon Foundation, the Digital Now Project at the Center for Canadian Architecture (CCA), and by the staff and archives at the CCA. Further funding was given by the Swiss National Science Foundation, Sinergia Project, Governing through Design. A somewhat different version of this article has appeared in e-flux Architecture, as part of the Models special issue: ‘Financializing Intelligence: On the Integration of Machines and Markets’, e-flux Architecture (March 2023) <https://www.e-flux.com/architecture/on-models/519993/financializing-intelligence-on-the-integration-of-machines-and-markets/>.

In 1997 the New York Stock Exchange (NYSE) embarked on a failed quest to build a virtual-reality trading floor. Fantasizing an interface beyond actuarial graphs and numbers, the designers wanted to produce a new world, a born-digital virtual world that would surpass the original. There is something telling in this aspiration for a virtual trading floor. In excavating the history of this project, one can unearth a genealogy of contemporary forms of attention, economy, and technology. The model of the market came to reformulate ideas of human and machine decision-making in a manner that was co-produced with new ideas of ‘artificial’ intelligence and neural networks. In the leftover digital detritus of architectural renderings of the NYSE are infrastructures for our present media environments.

But to set the stage. Starting in the 1960s, and particularly after the 1970s, the New York Stock Exchange and most other financial exchanges became digital, adopting new financial instruments such as derivatives and, later, algorithmic trading. While physical trading persisted until the late 1980s, by the 1990s, the situation had changed. The rise of dot-coms, new electronic consumer trading platforms, and new financial instruments had increased the velocity, volume, and automation of trading. Human bodies could no longer register trades fast enough. As a result, the runners and clerks previously managing trades were replaced by traders and ‘quants’ behind Bloomberg terminals and other electronic platforms. These were largely flat screens with graphs and statistical information geared towards actuarial visualization of the market. The need to maintain a ‘place’ for financial exchange was disappearing.

Fig. 1. A Bloomberg Terminal on Display at Bloomberg L.P. (2016). Photo by Travis Wise, licensed under Creative Commons Attribution 2.0 Generic <https://commons.wikimedia.org/wiki/File:Bloomberg_Terminal_Museum.jpg>.

Under such conditions, the idea of using architecture to create the ‘space’ of a market appeared contradictory to dominant trends. Despite this, in 1997 the Securities Industry Automation Corporation (SIAC), which oversees the technical operations of the NYSE, approached the young architecture firm Asymptote Architecture (founded by Lise Anne Couture and Hani Rashid) to create a ‘virtual’ trading floor, a model or representation of the NYSE that would recreate the exchange space. Asymptote’s response to the seeming immateriality of trading was ‘to turn a physical space into a multidimensional interactive cinematic space’. They sought to put ‘the walls of the Virtual Stock Exchange in motion’, as well as, in Cartesian fashion, allow its users to ‘look at the entire trading floor and fly around it to observe or correlate real-time data’.1 Asymptote’s project struggles to integrate the human sensorium into the increasingly immaterial market and offers a new cognitive-perceptual landscape attuned to the imagined future of immersive computing and ubiquitous digital media. Yet users do not have full information about this landscape; its geography is not clearly demarcated and located.

The market that Asymptote modelled possessed some interesting characteristics. Unconsciously, perhaps, the architects had internalized the idea of the market as a disembodied networked intelligence culling data from elsewhere and making decisions autonomously. Their vision of the trading floor suggested a market without humans. In all the images of the mock-ups, people are absent on the floor. Numbers move and monetary transactions are shown, but everything happens autonomously. The users (we presume traders) were on the one hand given a recognizable representation of a space they had known and been in. There are trading posts, stock tickers, and clearly represented stock symbols — presumably data — conveying the impression that the market information they received was complete and suitable to ground reasoned decisions.

But this Cartesian perspective was only a conduit to be channelled into immersion in the exchange. Other mock-ups of the virtual trading floor show fly-throughs, or situate the viewer within the exchange without a clear vantage point. One struggles to understand where, in fact, economic activity is happening. The movements and efforts to depict market transactions suggest that it is possible that the traders themselves, as market actors, have become part of the data visualizations — part of the data set for the system. The simulations suggest users interpolated into the scene as consumers or perhaps trainees of its intelligence. This model of subjectivity, which competes with an older assumption of liberal economics that individuals make reasoned decisions based on full information, drove the designers to take inspiration from stories of divinity, such as Hieronymus Bosch’s The Garden of Earthly Delights. There is heaven, hell, and purgatory in a market, Asymptote argued, with the exchange itself being ‘a giant, churning sea of hoarding and wasting’.2

While the project was not initially framed as modelling an exchange of waste, excess, and information, it has proved prophetic in an age of trading apps, where finance is democratized and YOLOing (You Only Live Once) and HODLing (Hold On for Dear Life) with options are (or until recently were) standard practice: that is, high-risk futures speculation on securities and assets of questionable value. The story of the first virtual 3-D trading floor interface thus demonstrates an emergent and evolving relationship between computing, ideas of networked intelligence, and finance. It offers traces of an attempt to make the market visible to human observers while also developing ways to engage and interact with the market without full information. Asymptote’s designs embody an understanding of the market as a flow of networked information, where agency and decision-making are coordinated by machines.

The project also exemplifies the idea, most clearly suggested by Milton Friedman, that economic models ‘are engines not cameras’.3 One way to read that statement is that the model does not represent the world, but creates it. Models make markets. Models are technologies such as a derivative pricing equation or an algorithm for high-speed trading. Within these techniques for betting there are also models of markets. There are built-in assumptions about gathering data, comparing prices, betting, selling, and timing bets, but not about whether the information is correct or ‘true’, or whether the market is mapped or shown in its entirety. These models let people create markets by arbitraging price differences without necessarily knowing everything about the market or asset. As a result, models are also not plans. They are understood as ways to act without having to actually represent a market. For proponents of this idea of models as machines, building markets without representation was part of a broader ideology negating the possibility of a state or other organization ever planning a market.

Over the past four decades, the idea that both digital machines and human agents are networked intelligences and parts of self-organizing systems has not only shaped financial markets, but has also been incorporated into economic thinking and artificial intelligence. This has led to what I call the ‘financialization of cognition’, an economy of attention that reconfigures human agency and decision-making based on a model of contemporary finance and the digital economy. In what follows, I briefly trace some historical precedents for this development and ask about its implications for how we understand our relationships to other people and to the future through finance.

Networked Intelligence

Throughout the middle of the twentieth century, increased trading volumes caused clerks to fall behind on transaction tapes, often failing to record specific prices and transactions. Human error and slowness came to be understood as untenable and ‘non-transparent’, or arbitrary in assigning price. The NYSE also needed ways to manage and monitor labour, particularly lower paid clerical work. As a result, computerized trading desks were introduced in the 1960s. These were understood as algorithmic and rule-bound. The more automated the market, the thinking went, the more rule-bound it would become. Officials also thought computing would save the securities industry from regulation: if computers followed the rules algorithmically, there would be no need for oversight.4

This belief in the rationality and self-regulation of algorithms derives from a longer neoliberal tradition that reimagined human intelligence as machinic and networked. According to Austrian-born economist Friedrich Hayek, writing in 1945:

The peculiar character of the problem of a rational economic order is determined precisely by the fact that the knowledge of the circumstances of which we must make use never exists in concentrated or integrated form, but solely as the dispersed bits of incomplete and frequently contradictory knowledge which all the separate individuals possess. The economic problem of society is thus not merely a problem of how to allocate ‘given’ resources — if ‘given’ is taken to mean given to a single mind which deliberately solves the problem set by these ‘data’. It is rather a problem of how to secure the best use of resources known to any of the members of society, for ends whose relative importance only these individuals know. Or, to put it briefly, it is a problem of the utilization of knowledge not given to anyone in its totality.5

Human beings, Hayek believed, were subjective, incapable of reason, and fundamentally limited in their attention and cognitive capacities. The idea that no single subject, mind, or central authority can fully represent and understand the world was crucial to how he conceived the market. He argued that ‘the “data” from which the economic calculus starts are never for the whole society “given” to a single mind […] and can never be so given’.6 Instead, only markets can learn at scale and suitably evolve to coordinate dispersed resources and information in the best way possible.

Responding to what he understood as the failure of democratic populism, resulting in fascism and communism, Hayek disavowed centralized state planning. Instead, he turned to another model of human agency and markets. First, Hayek posited that markets are not about matching supply and demand but about coordinating information.7 Second, his model of learning and ‘using knowledge’ is grounded in the idea of a networked intelligence embodied in the market, which enables the creation of knowledge outside and beyond the purview of individual humans: ‘The whole acts as one market, not because any of its members survey the whole field, but because their limited individual fields of vision sufficiently overlap so that through many intermediaries the relevant information is communicated to all.’8 And third, the market therefore embodies a notion of cognition and decision I would call ‘environmental intelligence’, in which the data that such a calculating machine processes is dispersed throughout society, and where decision-making is a population-based activity derived from but not congruent with individual bodies and thoughts.

Hayek inherited his idea of environmental intelligence directly from Canadian psychologist Donald O. Hebb, known as the inventor of the neural network model and the theory that ‘cells [neurons] that fire together wire together’.9 In 1949, Hebb published The Organization of Behavior, a text that popularized the idea that the brain stores knowledge in complex networks or ‘populations’ of neurons.10 Today, his research is famous for presenting a new concept of functional neuroplasticity, developed through working with soldiers and other individuals who had lost limbs or been injured, blinded, or rendered deaf from proximity to blasts. Hebb noted that while these individuals had suffered changes to their sensory order, the loss of a limb or a sense could be compensated for through training. He thus began to suspect that neurons might rewire themselves to accommodate trauma and create new capacities.

The rewiring of neurons is not just a matter of sensory perception but also memory. Hebb theorized that brains do not store inscriptions or exact representations of objects but patterns of neurons firing. For example, if a baby sees a cat, a certain group of neurons fires. The more cats the baby sees, the more a certain set of stimuli become related to this animal, and the more the same set of neurons will fire when a cat enters the field of perception. This is the basis for contemporary ideas of learning in neural networks. It was also an inspiration to Hayek, who in his 1952 book The Sensory Order credited Hebb with providing a key model for imagining human cognition.11 Hayek used the idea that the brain is composed of networks to remake the liberal subject. The subject is not one of reasoned objectivity, but rather is subjective, with limited information and incapable of making objective decisions.
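Hebb’s principle can be sketched as a simple update rule (a hypothetical toy illustration in Python, not drawn from Hebb’s or Hayek’s texts): each co-activation of a stimulus and a response strengthens the connection between them, so the weight, not any stored image of the cat, carries the ‘memory’.

```python
def hebbian_update(w, pre, post, rate=0.1):
    """Hebb's rule: a connection strengthens when the pre- and
    postsynaptic units are active together (delta_w = rate * pre * post)."""
    return w + rate * pre * post

w = 0.0                          # initial connection strength
for exposure in range(10):       # the baby sees ten cats
    pre, post = 1.0, 1.0         # stimulus and response co-occur
    w = hebbian_update(w, pre, post)

# After repeated exposure the connection is stronger, so the same
# stimulus is more likely to drive the response ("wire together").
print(round(w, 2))  # prints 1.0
```

No representation of ‘cat’ appears anywhere in this sketch; only the statistical trace of repeated co-occurrence does.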

The concept of algorithmic, replicable, and computational decision-making forwarded in the Cold War was not that of conscious, affective, and informed decision-making privileged since the democratic revolutions of the eighteenth century.12 But if Cold War technocrats were still experts with authority and predictive capacities, the ignorant and partially informed individual Hayek presented us with is not. He reconceptualized freedom not as the freedom to exercise reasoned decision-making and sovereignty, but as the freedom to become part of the market or network. Hayek elaborated that freedom was not wilful agency but freedom from coercion. While this could be understood as necessitating legal and humane infrastructures to allow all individuals access to the mythic market, neoliberal thinking, evinced in the deregulatory policies of many nations in the 1980s, as with Reaganomics and Thatcherism, did not interpret it this way.

Machines

If markets and minds are engines, as Milton Friedman implied, then what technical forms might they come to embody?

In 1956, a group of computer scientists, psychologists, and other researchers embarked on a project to develop machine forms of learning. In a proposal for a workshop at Dartmouth College in 1955, John McCarthy labelled this new concept ‘artificial intelligence’. While many of the participants, including Marvin Minsky, Nathaniel Rochester, Warren McCulloch, Ross Ashby, and Claude Shannon, focused on symbolic and linguistic processes, one concentrated on the neuron. A psychologist, Frank Rosenblatt, proposed that learning, whether in non-human animals, humans, or computers, could be modelled on artificial cognitive devices that in turn were based on the basic architecture of human neurons.13

In his initial Dartmouth paper detailing the idea of a ‘perceptron’, Rosenblatt distanced himself from his peers. These others, he claimed, had been ‘chiefly concerned with the question of how such functions as perception and recall might be achieved by a deterministic system of any sort, rather than how this is actually done by the brain’.14 This approach, he argued, fundamentally ignored the question of scale and the emergent properties of biological systems. Instead, Rosenblatt based his approach on the theory of statistical separability, which he attributed to Hebb and Hayek, and a new conception of networked perception-cognition.15 According to Rosenblatt, neurons are mere switches or nodes in a network that classifies cognitive input, and intelligence emerges only at the level of the population and through the patterns of interaction between neurons.

Contemporary neural networks operate on these principles. Repeatedly exposed to the same stimuli, groups of nodes are trained to eventually fire together. Each exposure increases the statistical likelihood that the net will fire together and ‘recognize’ the object. In supervised ‘learning’, nets can be corrected by comparing their output against a known target. The key feature is that the input does not need to be ontologically defined or represented, meaning that a series of networked machines can come to identify a cat without having to be told what a cat ‘is’. Only through patterns of affiliation does sensory response emerge. The key to learning is therefore exposure to a ‘large sample of stimuli’, which Rosenblatt stressed meant approaching learning ‘in terms of probability theory rather than symbolic logic’.16 The perceptron model suggests that machine systems, like markets, might be able to perceive what individual subjects cannot.17 While each individual human is limited to a specific set of external stimuli they are exposed to, a computer perceptron can draw on data resulting from the judgements and experiences of large populations of humans.18
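Rosenblatt’s learning procedure can be loosely sketched in a few lines of Python (a minimal modern reconstruction, not Rosenblatt’s own formulation): a unit ‘fires’ when a weighted sum of its inputs crosses a threshold, and supervised corrections nudge the weights until the unit classifies a pattern it was never symbolically ‘told’ about — here the toy pattern is logical AND.

```python
# A minimal Rosenblatt-style perceptron. The machine is never given
# a definition of "AND"; it receives only examples and corrections.
def predict(weights, bias, x):
    # The unit "fires" (outputs 1) when the weighted sum crosses zero.
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train(samples, epochs=20, rate=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            # Supervised correction: nudge weights toward the target.
            weights = [w + rate * error * xi for w, xi in zip(weights, x)]
            bias += rate * error
    return weights, bias

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(samples)
print([predict(weights, bias, x) for x, _ in samples])  # prints [0, 0, 0, 1]
```

The classification emerges only from the statistics of exposure and correction, which is precisely the sense in which learning here belongs to ‘probability theory rather than symbolic logic’.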

Adaptation versus Consciousness

For Rosenblatt and Hayek, and their predecessors in psychology, notions of learning forwarded the idea that systems can change and adapt non-consciously, or automatically. The central feature of these models was that small operations done on parts of a problem might agglomerate into something greater than the sum of its parts and solve problems not through representation but action. Both Hayek and Rosenblatt draw upon theories of communication and information, particularly cybernetics, which conceives communication in terms of thermodynamics and argues that systems at different scales are only probabilistically related to their parts. These approaches to calculating the future behaviour of systems assume that calculating individual components cannot represent or predict the actions of the entire system.19 This disavowal of ‘representation’ continues to fuel the desire for ever larger data sets and unsupervised learning in neural nets that would, at least in theory, be driven by the data.

Hayek himself espoused an imaginary of this data-rich world that could be increasingly calculated without (human) consciousness. He was apparently fond of quoting Alfred North Whitehead’s remark that ‘it is a profoundly erroneous truism […] that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations we can perform without thinking about them.’20 The perceptron is the technological manifestation of the reconfiguration and reorganization of human subjectivity, physiology, psychology, and economy that this theory implies. And, as a result of the belief that technical decision-making at the level of populations rather than through governments might remedy the danger of populism or the errors of human judgement, the neural net became the embodiment of an idea (and ideology) that could scale from the mind to planetary electronic trading platforms and global markets.

Historical notions of machine intelligence and networked markets merged in the work of Fischer Black and Myron Scholes, and the publication of the Black-Scholes model for options pricing in 1973.21 This model applied cybernetic communication theories and Brownian motion to market models and exponentially facilitated the automation and computerization of trading in futures and options. In his famous article ‘Noise’, Black posited that investors trade and profit from misinformation and information overload. This vision of the market is not one of Cartesian mastery or fully informed decision-makers. Rather, falsity (noise) is the very infrastructure for value. Noisiness creates chances, probabilities, and volatility that can be bet on or arbitraged:

Noise in the sense of a large number of small events is often a causal factor much more powerful than a small number of large events can be. Noise makes trading in financial markets possible, and thus allows us to observe prices for financial assets.22

Black’s statement refracts and consolidates thirty years of research in computing, psychology, and economics that reconfigured ideas of decision-making away from liberal or enlightenment reason. In 1990, Andrei Shleifer and Lawrence H. Summers drew the conclusion that, from the market’s perspective, truth or reality might not only be impossible to represent but irrelevant — a conclusion that drove the use of derivative instruments.23 It is important to note that this theory of networked noisiness and speculation on volatility came with the end of Bretton Woods, decolonization, post-Fordism, and the OPEC oil crisis, to name but a few of the transformations at the time.

The derivative pricing equation emerged, then, as a way to tame or circumvent extreme volatility in politics, currency, and commodity markets. New financial technologies and institutions such as computer-driven trading and hedge funds were created in order to literally ‘hedge’ bets: to ensure that risks were reallocated, decentralized, and networked. Through the likes of short bets, credit swaps, and futures markets, dangerous bets could be combined with safer ones and dispersed across multiple territories and temporalities. Corporations, governments, and financiers flocked to these techniques of uncertainty management in the face of unnameable, and unquantifiable, risks.24 In this world, volatility, noise, and chance were no longer ‘devils’, in the words of cybernetician Norbert Wiener, but rather media with value.25 The impossibility of prediction, the subjective nature of human decision-making, and the electronic networking of global media systems, all became infrastructures for new forms of betting on futures.
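The derivative pricing equation in question has a well-known closed form for a European call option, which can be sketched as follows (a standard textbook implementation in Python; the parameter values are illustrative, not drawn from the sources discussed here):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal cumulative distribution via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call: spot S, strike K,
    risk-free rate r, volatility sigma, time to expiry T (in years)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative example: at-the-money call, 5% rate, 20% volatility, 1 year.
price = black_scholes_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0)
print(round(price, 2))  # prints 10.45
```

Note what the formula does not contain: any claim about the ‘true’ value of the underlying asset. Volatility (sigma) itself is the input that carries value, which is precisely the sense in which noise becomes a medium to be priced and traded.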

Models, Machines, and Infernos

Neoliberal economics often theorizes the world as a self-organizing adaptive system to counter the idea of planned and perfectly controllable political (and potentially totalitarian) orders. Within this ideology, the market takes on an almost divine, or perhaps biologically determinist, capacity for chance and emergence, but never through consciousness or planning.26 Evolution is imagined against willed action and the reasoned decisions of individuals. More critically, against the backdrop of civil rights and calls for racial, sexual, and queer forms of justice and equity, the negation of any state intervention or planning (say, affirmative action) becomes naturalized in the figure of the neural net — a model of mind and market that appears to make human-built institutions and organizations (such as the NYSE) appear as evolutionary, biological necessities. Any effort to address structural injustice becomes a conspiracy against emergence, economy, and intelligence.27

We have become attuned to this model of the world where our machines and markets are syncopated with one another. As an ideology, a model of mind and markets, and a technology, neural nets might have cyborg potentials in Donna Haraway’s sense of the term. As cultural theorist Randy Martin has argued, rather than separating itself from social processes of production and reproduction, algorithmic finance actually demonstrates the increased interrelatedness, globalization, and socialization of debt and precarity.28 By tying together disparate actions and objects into a single assemblage of reallocated risks for trading, new market machines have made more people more indebted to each other, as for example in the case of 2008, where middle-class homeowners and poor homeowners were tied together through financial instruments of debt. The political and ethical question then becomes: How might we activate this mutual indebtedness in new ways, ways less amenable to the strict market logics of neoliberal economics?

The future lies in recognizing what our machines have finally made visible, and what has perhaps always been there: the socio-political nature of our seemingly natural thoughts and perceptions. Every market crash, every subprime mortgage event reveals the social constructedness and the work — aesthetic, political, economic — it takes to maintain our belief in markets as forces of nature or divinity. And if not aesthetically smoothed over through media and narratives of inevitability, they also make it possible to recognize how our machines have linked so many of us together in precarity. The potential politics of these moments has not yet been realized, but there have been efforts, whether in Occupy or more recently in movements for civil rights, racial equity, and environmental justice such as Black Lives Matter or the Chilean anti-austerity protests of 2019.

If we consider that all computer systems are programmed, and therefore planned, we are also forced to contend with the intentional and therefore changeable nature of how we think and perceive economy. Asymptote’s failed efforts to build a visualization of the market birthed new modes of interactivity, making us recognize the historically situated and socially specific nature of both the economy and perception. ‘We are now at the threshold of an uncharted landscape,’ wrote Asymptote, ‘well beyond the sanctuary of order and reason. Here concealed beyond places inviting yearning and anticipation, we discover an architecture that perseveres.’29 This architecture that might produce relations and futures other than those of capital did not appear on the virtual trading floor, but perhaps it still might in other forms and practices. Architecture must attend to its own aesthetic political economy.

Notes

  1. Research for this article was supported by the Mellon Foundation, the Digital Now Project at the Center for Canadian Architecture (CCA), and by the staff and archives at the CCA. Further funding was given by the Swiss National Science Foundation, Sinergia Project, Governing through Design. A somewhat different version of this article has appeared in e-flux Architecture, as part of the Models special issue: ‘Financializing Intelligence: On the Integration of Machines and Markets’, e-flux Architecture (March 2023) <https://www.e-flux.com/architecture/on-models/519993/financializing-intelligence-on-the-integration-of-machines-and-markets/>.
  2. Lise Anne Couture, Hani Rashid, and Gregg Lynn, Asymptote Architecture: NYSE Virtual Trading Floor; Oral History, ed. by Greg Lynn (Montreal: Canadian Center for Architecture, 2015), p. 40.
  3. Ibid., p. 44.
  4. Donald A. MacKenzie, An Engine, Not a Camera: How Financial Models Shape Markets (Cambridge, MA: MIT Press, 2006) <https:/​/​doi.org/​10.7551/​mitpress/​9780262134606.001.0001>.
  5. Devin Kennedy, ‘The Machine in the Market: Computers and the Infrastructure of Price at the New York Stock Exchange, 1965–1975’, Social Studies of Science, 47.6 (2017), pp. 888–917 <https://doi.org/10.1177/0306312717739367>.
  6. Friedrich A. Hayek, ‘The Use of Knowledge in Society’, The American Economic Review, 35.4 (1945), pp. 519–30 (pp. 519–20) <https://doi.org/10.1142/9789812701275_0025>.
  7. Ibid., p. 519.
  8. A critical first step towards contemporary notions of information economies, as historians such as Philip Mirowski have noted. See Philip Mirowski, Machine Dreams: Economics Becomes a Cyborg Science (Cambridge: Cambridge University Press, 2002) <https:/​/​doi.org/​10.1017/​CBO9780511613364>; Philip Mirowski, ‘Twelve Theses Concerning the History of Postwar Neoclassical Price Theory’, History of Political Economy, 38 (2006), pp. 344–79 <https://doi.org/10.1215/00182702-2005-029>.
  9. Hayek, ‘The Use of Knowledge in Society’, p. 526.
  10. See Carla J. Shatz, ‘The Developing Brain’, Scientific American, 267.3 (1992), pp. 60–67 (p. 64) <https://doi.org/10.1038/scientificamerican0992-60>.
  11. Donald O. Hebb, The Organization of Behavior: A Neuropsychological Theory (New York: Wiley, 1949).
  12. Friedrich A. Hayek, The Sensory Order: An Inquiry into the Foundations of Theoretical Psychology (Chicago: University of Chicago Press, 1952).
  13. Paul Erickson and others, How Reason Almost Lost its Mind: The Strange Career of Cold War Rationality (Chicago: University of Chicago Press, 2015).
  14. Frank Rosenblatt, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms (Washington, DC: Spartan, 1962) <https:/​/​doi.org/​10.21236/​AD0256582>.
  15. Frank Rosenblatt, ‘The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain’, Psychological Review, 65.6 (1958), pp. 386–408 (p. 388) <https://doi.org/10.1037/h0042519>.
  16. Ibid.
  17. Ibid.
  18. Ibid., pp. 388–89.
  19. Rosenblatt, Principles of Neurodynamics, pp. 19–20.
  20. For more on the influence of cybernetics and systems theories on producing notions of non-conscious growth and evolution in Hayek’s thought, see Paul Lewis, ‘The Emergence of “Emergence” in the Work of F. A. Hayek: A Historical Analysis’, History of Political Economy, 48.1 (2016), pp. 111–50 <https://doi.org/10.1215/00182702-3452315>; Gabriel Oliva, ‘The Road to Servomechanisms: The Influence of Cybernetics on Hayek from “The Sensory Order” to the Social Order’, Research in the History of Economic Thought and Methodology 34 (2016), pp. 161–98 <https://doi.org/10.2139/ssrn.2670064>.
  21. Alfred Moore, ‘Hayek, Conspiracy, and Democracy’, Critical Review, 28.1 (2016), pp. 44–62 (p. 50) <https://doi.org/10.1080/08913811.2016.1167405>. I am indebted to Moore’s excellent discussion for much of the argument surrounding Hayek, democracy, and information. This quote is from Hayek, ‘The Use of Knowledge in Society’, p. 528.
  22. Fischer Black and Myron Scholes, ‘The Pricing of Options and Corporate Liabilities’, The Journal of Political Economy, 81.3 (1973), pp. 637–54 <https://doi.org/10.1086/260062>.
  23. Fischer Black, ‘Noise’, The Journal of Finance, 41.3 (1986), pp. 529–43 <https://doi.org/10.1111/j.1540-6261.1986.tb04513.x>.
  24. Andrei Shleifer and Lawrence H. Summers, ‘The Noise Trader Approach to Finance’, The Journal of Economic Perspectives, 4.2 (1990), pp. 19–33 <https://doi.org/10.1257/jep.4.2.19>.
  25. For an excellent summary of these links and of the insurance and urban planning fields, see Kevin Grove, Resilience (New York: Routledge, 2018) <https://doi.org/10.4324/9781315661407>.
  26. For an extensive discussion of thermodynamics, stochastic processes, and control, see the introduction to Norbert Wiener, Cybernetics, or Control and Communication in the Animal and the Machine (New York: MIT Press, 1961) <https://doi.org/10.1037/13140-000>. For further discussion, see also Orit Halpern, ‘Dreams for Our Perceptual Present: Temporality, Storage, and Interactivity in Cybernetics’, Configurations, 13.2 (2005), pp. 283–319 <https://doi.org/10.1353/con.2007.0016>; Peter Galison, ‘The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision’, Critical Inquiry, 21 (1994), pp. 228–66 <https://doi.org/10.1086/448747>.
  27. Joshua Ramey, ‘Neoliberalism as a Political Theology of Chance: The Politics of Divination’, Palgrave Communications, 1 (2015) <https://doi.org/10.1057/palcomms.2015.39>.
  28. Michael Schaus, ‘Narrative & Value: Authorship in the Story of Money’ (unpublished master’s thesis, OCAD University, 2017) <https://openresearch.ocadu.ca/id/eprint/2719/> [accessed 27 September 2024].
  29. Randy Martin, ‘What Difference Do Derivatives Make? From the Technical to the Political Conjuncture’, Culture Unbound, 6 (2014), pp. 189–210 <https://doi.org/10.3384/cu.2000.1525.146189>.
  30. Hani Rashid and Lise Anne Couture, Asymptote: Architecture at the Interval (New York: Rizzoli, 1995), p. 49.

Bibliography

  1. Black, Fischer, ‘Noise’, The Journal of Finance, 41.3 (1986), pp. 529–43 <https://doi.org/10.1111/j.1540-6261.1986.tb04513.x>
  2. Black, Fischer, and Myron Scholes, ‘The Pricing of Options and Corporate Liabilities’, The Journal of Political Economy, 81.3 (1973), pp. 637–54 <https://doi.org/10.1086/260062>
  3. Couture, Lise Anne, Hani Rashid, and Greg Lynn, Asymptote Architecture: NYSE Virtual Trading Floor; Oral History, ed. by Greg Lynn (Montreal: Canadian Center for Architecture, 2015)
  4. Erickson, Paul, and others, How Reason Almost Lost its Mind: The Strange Career of Cold War Rationality (Chicago: University of Chicago Press, 2015)
  5. Galison, Peter, ‘The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision’, Critical Inquiry, 21 (1994), pp. 228–66 <https://doi.org/10.1086/448747>
  6. Grove, Kevin, Resilience (New York: Routledge, 2018) <https://doi.org/10.4324/9781315661407>
  7. Halpern, Orit, ‘Dreams for Our Perceptual Present: Temporality, Storage, and Interactivity in Cybernetics’, Configurations, 13.2 (2005), pp. 283–319 <https://doi.org/10.1353/con.2007.0016>
  8. Hayek, Friedrich A., The Sensory Order: An Inquiry into the Foundations of Theoretical Psychology (Chicago: University of Chicago Press, 1952)
  9. Hayek, Friedrich A., ‘The Use of Knowledge in Society’, The American Economic Review, 35.4 (1945), pp. 519–30 <https://doi.org/10.1142/9789812701275_0025>
  10. Hebb, Donald O., The Organization of Behavior: A Neuropsychological Theory (New York: Wiley, 1949)
  11. Kennedy, Devin, ‘The Machine in the Market: Computers and the Infrastructure of Price at the New York Stock Exchange, 1965–1975’, Social Studies of Science, 47.6 (2017), pp. 888–917 <https://doi.org/10.1177/0306312717739367>
  12. Lewis, Paul, ‘The Emergence of “Emergence” in the Work of F. A. Hayek: A Historical Analysis’, History of Political Economy, 48.1 (2016), pp. 111–50 <https://doi.org/10.1215/00182702-3452315>
  13. MacKenzie, Donald A., An Engine, Not a Camera: How Financial Models Shape Markets (Cambridge, MA: MIT Press, 2006) <https://doi.org/10.7551/mitpress/9780262134606.001.0001>
  14. Martin, Randy, ‘What Difference Do Derivatives Make? From the Technical to the Political Conjuncture’, Culture Unbound, 6 (2014), pp. 189–210 <https://doi.org/10.3384/cu.2000.1525.146189>
  15. Mirowski, Philip, Machine Dreams: Economics Becomes a Cyborg Science (Cambridge: Cambridge University Press, 2002) <https://doi.org/10.1017/CBO9780511613364>
  16. Mirowski, Philip, ‘Twelve Theses Concerning the History of Postwar Neoclassical Price Theory’, History of Political Economy, 38 (2006), pp. 344–79 <https://doi.org/10.1215/00182702-2005-029>
  17. Moore, Alfred, ‘Hayek, Conspiracy, and Democracy’, Critical Review, 28.1 (2016), pp. 44–62 <https://doi.org/10.1080/08913811.2016.1167405>
  18. Oliva, Gabriel, ‘The Road to Servomechanisms: The Influence of Cybernetics on Hayek from “The Sensory Order” to the Social Order’, Research in the History of Economic Thought and Methodology, 34 (2016), pp. 161–98 <https://doi.org/10.2139/ssrn.2670064>
  19. Ramey, Joshua, ‘Neoliberalism as a Political Theology of Chance: The Politics of Divination’, Palgrave Communications, 1 (2015) <https://doi.org/10.1057/palcomms.2015.39>
  20. Rashid, Hani, and Lise Anne Couture, Asymptote: Architecture at the Interval (New York: Rizzoli, 1995)
  21. Rosenblatt, Frank, ‘The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain’, Psychological Review, 65.6 (1958), pp. 386–408 <https://doi.org/10.1037/h0042519>
  22. Rosenblatt, Frank, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms (Washington, DC: Spartan, 1962) <https://doi.org/10.21236/AD0256582>
  23. Schaus, Michael, ‘Narrative & Value: Authorship in the Story of Money’ (unpublished master’s thesis, OCAD University, 2017) <https://openresearch.ocadu.ca/id/eprint/2719/> [accessed 27 September 2024]
  24. Shatz, Carla J., ‘The Developing Brain’, Scientific American, 267.3 (1992), pp. 60–67 <https://doi.org/10.1038/scientificamerican0992-60>
  25. Shleifer, Andrei, and Lawrence H. Summers, ‘The Noise Trader Approach to Finance’, The Journal of Economic Perspectives, 4.2 (1990), pp. 19–33 <https://doi.org/10.1257/jep.4.2.19>
  26. Wiener, Norbert, Cybernetics, or Control and Communication in the Animal and the Machine (New York: MIT Press, 1961) <https://doi.org/10.1037/13140-000>