On Not Extracting
The obvious question is “Sure, but you build these systems – what does it mean to practice Gelassenheit from inside the apparatus you’re describing?” I don’t have a clean answer, but sitting with the question has given me a clearer sense of why it’s hard to answer. The work erodes the very capacity for genuine encounter that I’m trying to preserve, and the philosophical self-awareness that might seem like protection turns out to be its own trap. This is an attempt to sort out why.
It’s natural to ask whether curation provides a path forward here. But the answer is no. The same logic applies: upgrading the standing reserve doesn’t change the fundamental posture you carry into the work. Reading Pearl’s Causality rather than scrolling Medium can still be a performance of cultural capital. If you don’t approach anomalous model results or theoretical developments with genuine openness, you’re just pursuing a more refined form of the same posture. The instinct isn’t wrong. The quality of what you attend to probably does matter at the margins, and there’s a real difference between genuine intellectual engagement and pure consumption. So we can set aside what you attend to as an answer, and turn instead to the structure of attention itself.
The deeper problem is structural. Sustained immersion in the optimizing frame isn’t neutral with respect to attention. A focus on metrics, ablations, business cases, and loss curves pulls attention away from exactly the attunement genuine encounter requires. Each of these is legitimate and a necessary part of the work. Collectively, however, they train a particular posture in which things appear as inputs to be processed, anomalies are resolved, and outputs improved. It’s all hill climbing and no discovery. The posture becomes habitual. And what’s insidious about this, and what I think Heidegger was specifically worried about, is that the erosion doesn’t feel like anything. You don’t notice yourself becoming less curious. You don’t experience the loss of attunement as a loss. Worse, you feel productive because you are making progress. That feeling of productivity is precisely what obscures the loss – the sense that something has gone missing is the first thing to go. This is why individual efforts, such as deciding to stay curious, are insufficient as a response. The problem isn’t a failure of will. It’s a structural feature of the environment that works on you continuously.
There’s a particular trap available to people sophisticated enough to see all of this clearly, and I want to name it. Understanding the critique generates a specific self-image: the reflective practitioner, the person who builds these systems while retaining the capacity to interrogate what they’re doing. That self-image is itself a form of cultural capital. It distinguishes you from the people building the same systems without this vocabulary. And the moment you start drawing that distinction, even quietly, you’ve recruited the diagnosis into the standing reserve it was diagnosing. Heidegger, compressed into a latent representation of seriousness, available on demand.
This self-flattering move is analogous to Sartre’s bad faith, but applied one level up. In Sartre’s telling, bad faith isn’t lying to others – it’s lying to yourself about the nature of your own situation. It’s making peace with something that, under honest examination, should remain in tension. The rationalizations are familiar. The technology will exist regardless, so working from inside creates more influence than working outside. And abandoning a field I’m genuinely good at and care about seems like an obvious waste. The inevitability argument deserves more than a quick bracket. The technology probably will exist regardless. That part is likely true. What makes it rationalization isn’t the factual claim but what it does to the question. It converts a choice into a pseudo-necessity. I didn’t have to work on this; I chose to. The argument works by making that choice feel like no choice at all, which is exactly what Sartre means by bad faith — not lying, but using the truth to refuse your own freedom. These arguments are most convincing precisely to people who can articulate exactly why they’re rationalizations.
The usual response to this kind of problem is self-knowledge. If you know yourself well enough, the bad faith dissolves. But that’s not quite right, at least not in this context. Heidegger’s point, and I suspect it’s correct, is that self-knowledge of this kind can be recruited into the problem rather than solving it. Writing a careful post about Gestell doesn’t disarm the trap. It might just be a more refined way of being caught in it. The trap is sophisticated enough to incorporate its own recognition. I don’t think I’m outside it. I don’t have a clean way out, and I’m not sure there is one.
There’s a harder version of this question I’ve been avoiding. Not “can I personally practice Gelassenheit while building these systems” but “are these systems causing harm, and does my participation make me complicit in that harm?”
The direct version is easy to answer. The primary AI and ML problems at Block are fraud reduction and remediation. That work does measurable good. But that answer sidesteps the real question, because the harm I’ve been describing isn’t harm to a person in any immediate sense. It’s harm to a capacity: the gradual normalization of a posture in which human behavior appears primarily as signal to be processed, scored, and acted upon. This kind of harm is diffuse, structural, and hard to attribute to any particular system. It’s also real.
Fraud models are probably the most defensible instance of behavior-as-standing-reserve. The purpose is legitimate, the adversarial context is clear, and the population being protected is identifiable. To build and deploy these systems we’ve developed an entire apparatus: the technical infrastructure, the organizational epistemology, the metricization, and the habits of mind. The same technologies, the same framing, the same way of encountering human behavior as distributions to be modeled – all of it reaches toward engagement optimization, hiring, insurance underwriting, surveillance, and a dozen other applications where the legitimacy is considerably less obvious. The harm isn’t in the fraud model. It’s in the codification of behavior as signals and metrics, and in what the fraud model is part of – a general apparatus whose applications vary enormously in their defensibility, and whose expansion is not something any individual team controls.
This is where the question becomes more tractable, and more personal. Not “is this apparatus harmful” in the abstract, but: given that I’m inside it, given that I understand its limitations better than most, given that I can see where the posture becomes pathological, am I using that position to push back on the applications that deserve pushing back on? That’s a question with a concrete answer, and it’s the one I find myself returning to.
One concrete form Gelassenheit takes in technical practice is maintaining the gap between model score and person. The score is not the person. The behavioral sequence is not the intention. The representation is not the life. Holding that distinction open, treating the output as evidence rather than verdict, is the small but real act of attunement available to anyone working inside these systems. Automatic decisioning institutionalizes the refusal of that gap. When the score becomes the decision directly, the human who might maintain the distinction is made structurally unnecessary. The standing reserve stops being a posture you can resist or interrupt. It becomes an administrative fact with direct causal power over people’s lives.
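What that looks like in code is unglamorous. A minimal sketch, in Python, with every name invented for illustration (`Disposition`, `Case`, `dispose`, and the thresholds are hypothetical, not any production system): the score selects a disposition, and one of the dispositions is a person.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Disposition(Enum):
    ALLOW = auto()
    HUMAN_REVIEW = auto()    # the gap, kept open: a person re-encounters the case
    BLOCK = auto()


@dataclass
class Case:
    case_id: str
    score: float             # model output: evidence, not verdict
    notes: list[str] = field(default_factory=list)


def dispose(case: Case, low: float = 0.2, high: float = 0.98) -> Disposition:
    """Route on the score instead of deciding with it.

    Only the extreme tails act automatically; everything in between is
    handed to a reviewer who can hold the score alongside context the
    model never saw.
    """
    if case.score < low:
        return Disposition.ALLOW
    if case.score > high:
        case.notes.append("auto-blocked: preserve record for dispute review")
        return Disposition.BLOCK
    return Disposition.HUMAN_REVIEW
```

The design choice is the middle branch. Collapsing `HUMAN_REVIEW` into the tails is precisely the institutionalized refusal described above, and it usually arrives dressed as a throughput improvement.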
There’s something a human reviewer brings that a model cannot replicate, and it’s worth being specific about what it is. A person with genuine attunement to a case can notice when something doesn’t fit, when the situation has a texture that the model’s implicit assumptions don’t account for. They can feel that something is wrong before they can prove it. This kind of pre-reflective attunement — being already responsive to a situation before analysis begins — has no analogue in a model. Automatic decisioning has no such capacity. It can only apply the representation. When the model is wrong in ways that matter, and it will be, systematically, for populations underrepresented in training data, for people whose lives don’t fit the distribution, there is no mechanism to notice. The error doesn’t register as an error. It registers as a decision.
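A toy illustration of how the error fails to register, assuming scikit-learn, with all data invented for the example: a classifier fit on a narrow band of behavior extrapolates with total confidence outside it, and an automatic pipeline reads that confidence as a decision.

```python
# Toy example: out-of-distribution confidence. Data, features, and
# threshold are all invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data covers a narrow band of "typical" behavior.
X_train = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# A person whose life doesn't fit the distribution.
x_unusual = np.array([[8.0, 8.0]])         # far outside the training support

p = model.predict_proba(x_unusual)[0, 1]
print(f"score: {p:.6f}")                   # ~1.000000: maximal confidence, no basis

# There is no channel for "this input resembles nothing I was fit on."
# The extrapolated score simply becomes a decision.
print("BLOCK" if p > 0.95 else "ALLOW")    # BLOCK: the error registers as a decision
```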
The model is always a projection. It takes a person, who exists in some impossibly high-dimensional space of intentions, history, relationships, and circumstances, and projects them onto the subspace the training data happened to span. That’s not a flaw to be engineered away. It’s constitutive of what a model is. Treating the projection as complete, as if the subspace were the space, is the error that makes all the others possible. And it’s a convenient error, because the projection is tractable in ways the person is not. The score can be optimized. The person cannot.
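The projection claim can be stated literally. A sketch assuming numpy, with arbitrary dimensions: whatever component of the person lies outside the span of the training data is annihilated by the projection, and no amount of optimization within the subspace recovers it.

```python
# The model as projection: anything orthogonal to the training
# subspace is structurally invisible. Dimensions are arbitrary.
import numpy as np

rng = np.random.default_rng(1)

d, n = 1000, 50                      # person lives in d dims; n training examples
X = rng.normal(size=(n, d))          # training data spans at most an n-dim subspace

Q, _ = np.linalg.qr(X.T)             # orthonormal basis for that subspace, shape (d, n)

person = rng.normal(size=d)          # the "impossibly high-dimensional" individual

projection = Q @ (Q.T @ person)      # what the model can represent
remainder = person - projection      # what it structurally cannot

print(np.linalg.norm(projection) / np.linalg.norm(person))  # ~sqrt(n/d) ~ 0.22
print(np.linalg.norm(remainder) / np.linalg.norm(person))   # ~0.97: most of the person
```

The remainder computed on the last line is the same remainder that carries moral weight in the regulatory discussion below. The point of the ratio is that it’s a property of the subspace, not of the optimizer.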
In automatic decisioning, the individual failure of collapsing “this framework is powerful” into “this framework is correct about what matters” stops being personal and becomes the explicit operating assumption of the system. The model gets handed off. The caveats, and there are always caveats, don’t travel with it. The organizational structure that grows around it gradually treats the score as more real than the person, because the score is tractable and the person is not. What makes this particularly hard to arrest is that the people who know the model’s limitations are often not the people deciding where automatic decisioning is appropriate. The decision about scope is made upstream, often on business grounds, by people who have no reason to have internalized what the model can’t see. By the time the gap between model and person becomes visible – in the form of complaints, edge cases, audit findings – it’s embedded in infrastructure that is costly to change.
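One partial countermeasure is to make the caveats part of the artifact itself, so the handoff can’t strip them silently. A sketch of the idea with an invented `ModelCard` structure – this isn’t any particular library’s API:

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class ModelCard:
    """Scope limits that must travel with the model artifact."""
    trained_on: str                      # population the training data spans
    known_blind_spots: list[str] = field(default_factory=list)
    approved_uses: list[str] = field(default_factory=list)


@dataclass(frozen=True)
class ModelArtifact:
    weights_path: str
    card: ModelCard                      # no card, no artifact: caveats can't be dropped


def check_use(artifact: ModelArtifact, proposed_use: str) -> None:
    """Force the scope question at handoff rather than after deployment."""
    if proposed_use not in artifact.card.approved_uses:
        raise ValueError(
            f"{proposed_use!r} is outside the model's approved scope; "
            f"known blind spots: {artifact.card.known_blind_spots}"
        )
```

This doesn’t fix the upstream problem – scope decisions are still made by people with no reason to have internalized the card – but it converts a silent omission into an explicit override.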
GDPR Article 22 encodes the right not to be subject to solely automated decisions with significant effects. The Digital Services Act requires algorithmic transparency and meaningful opt-out. Both exist because legislators had the intuition that something was wrong with fully automated consequential decisions, even when models perform well in aggregate. The framework gives a philosophical account of what that intuition tracks. The objection isn’t only about accuracy. It rests on the claim that a person cannot be adequately represented by their behavioral sequence for purposes of decisions that materially affect them. There is a remainder the model doesn’t capture – intentions, circumstances, context that didn’t make it into the training data – and that remainder has moral weight. The regulatory apparatus is an institutional attempt to preserve the gap. It encodes, however imperfectly, the recognition that the score is not the person, and that acting as if it were is a harm independent of whether the model is right.
The work of engaging with these requirements, reviewing complaints, evaluating edge cases, assessing whether a model’s decision was responsive to a person’s actual situation, is the work of maintaining the gap institutionally. It’s the place where the system is required to re-encounter the person rather than just the representation. Someone has to read what the customer actually said and make a judgment that can’t be fully delegated back to the model. Regulation E’s dispute resolution requirements are a concrete instance: when a customer contests an electronic fund transfer, the institution must investigate the specific claim and make a human determination. The law doesn’t use the language of Gelassenheit, but it encodes its structure.
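A sketch of that structure, with invented names (`Dispute`, `HumanDetermination`, `resolve_dispute`; Regulation E itself specifies timelines and notices that aren’t modeled here): the point is that there is deliberately no code path from score to resolution.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Dispute:
    dispute_id: str
    customer_statement: str          # what the customer actually said
    model_score: float               # evidence available to the reviewer


@dataclass
class HumanDetermination:
    reviewer: str
    found_error: bool
    rationale: str                   # judgment responsive to the specific claim


def resolve_dispute(dispute: Dispute,
                    determination: Optional[HumanDetermination]) -> HumanDetermination:
    """Close a dispute only via a human determination.

    The model score may inform the reviewer, but it cannot close the
    case: the institution is required to re-encounter the person, not
    just the representation.
    """
    if determination is None:
        raise ValueError(
            f"dispute {dispute.dispute_id}: cannot be resolved by model "
            f"score ({dispute.model_score:.2f}) alone; a human "
            "determination is required"
        )
    return determination
```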
This isn’t incidental to the philosophical problem. Individual Gelassenheit operates at the level of personal practice, where its effects are real but diffuse. This kind of compliance work operates at the level where the gap gets closed or preserved as a matter of policy. It determines, for a class of cases, whether the system is required to treat the person as more than their representation. That may make it some of the most structurally important work in this space, not despite being downstream of the model, but because of it.
For practitioners inside these systems, this reframes the question of agency. You may not be able to resolve the structural problem of Gestell, but you can advocate for the institutional mechanisms that preserve the gap – human review processes, dispute resolution, the points where the system is required to re-encounter the person. The practitioner who understands what the model can’t see is well positioned to say where those mechanisms matter most. That isn’t a solution to the philosophical problem. But it is a concrete site of action within it.
The hardest cases are ones where no such re-encounter is required. Recommendation systems don’t make single consequential decisions; they make millions of small ones, each individually below any threshold, collectively shaping what you find interesting, what you believe is normal, who you become. GDPR Article 22 covers decisions with significant effects on individuals — a bar individual recommendations never meet. The gap that compliance work preserves in consequential decisions simply doesn’t exist here. The absence is structural, not an oversight, and there is no obvious institutional equivalent.
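A toy model makes the sub-threshold structure concrete, assuming numpy, with every number invented: each step nudges a preference by an amount no regulator would call a significant effect, and the accumulated drift is anything but.

```python
# Toy model of sub-threshold drift: no single step is "significant";
# the sum is. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)

steps = 1_000_000                     # millions of small decisions
epsilon = 1e-4                        # per-recommendation nudge: trivially small
noise = rng.normal(scale=0.01, size=steps)

preference = 0.0                      # 1-D stand-in for "what you find interesting"
for t in range(steps):
    shown = preference + noise[t]           # system shows something near current taste
    preference += epsilon * np.sign(shown)  # taste drifts toward what was shown

print(f"largest single-step change: {epsilon}")   # below any 'significant effect' bar
print(f"cumulative drift: {preference:.1f}")      # order +/-90: direction set by early noise
```

The direction is set by early noise and then self-reinforces. No single step in that loop would trip Article 22, and there is no single decision to contest.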
This piece has been about what building these systems does to the builder – the practitioner’s relationship to their own attention, their own capacity for encounter. The question of what these systems do to the people they act on is related but distinct, and deserves its own treatment. The recommendation system problem above is a glimpse of that territory, not a full account.
The conclusion rhymes with familiar advice. Stay curious. Don’t fool yourself. Keep the map from becoming the territory. None of this is new. What the philosophical account adds isn’t the prescription but the diagnosis. It clarifies what kind of problem this is, not a failure of effort or intention but a structural feature of the environment that works on attention continuously. Knowing that changes what you’re looking for. The enemy isn’t carelessness. It’s the habitual, productive, well-intentioned capture that doesn’t feel like capture.
The most concrete form Gelassenheit takes in technical work isn’t a grand philosophical stance but a repeated small act: noticing when you’re treating the output as verdict rather than evidence, when you’ve stopped asking what the model can’t see, when the score has started to feel more real than the person it describes. The most useful single diagnostic: the moment the score becomes identity rather than evidence is the moment you’ve stopped encountering and started extracting. Evidence is something you reason from, something you hold alongside other things, something that can be wrong in ways that matter. Identity is what something is. When the score becomes identity, when the fraud probability stops being a signal and starts being the person, the gap is closed, and what was a tool for understanding has become a substitute for it. These moments are easy to miss — the posture I’ve been describing is precisely what makes them easy to miss.