@machinespirits/eval 0.2.0 → 0.3.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +91 -9
- package/config/eval-settings.yaml +3 -3
- package/config/paper-manifest.json +486 -0
- package/config/providers.yaml +9 -6
- package/config/tutor-agents.yaml +2261 -0
- package/content/README.md +23 -0
- package/content/courses/479/course.md +53 -0
- package/content/courses/479/lecture-1.md +361 -0
- package/content/courses/479/lecture-2.md +360 -0
- package/content/courses/479/lecture-3.md +655 -0
- package/content/courses/479/lecture-4.md +530 -0
- package/content/courses/479/lecture-5.md +326 -0
- package/content/courses/479/lecture-6.md +346 -0
- package/content/courses/479/lecture-7.md +326 -0
- package/content/courses/479/lecture-8.md +273 -0
- package/content/courses/479/roadmap-slides.md +656 -0
- package/content/manifest.yaml +8 -0
- package/docs/research/build.sh +44 -20
- package/docs/research/figures/figure10.png +0 -0
- package/docs/research/figures/figure11.png +0 -0
- package/docs/research/figures/figure3.png +0 -0
- package/docs/research/figures/figure4.png +0 -0
- package/docs/research/figures/figure5.png +0 -0
- package/docs/research/figures/figure6.png +0 -0
- package/docs/research/figures/figure7.png +0 -0
- package/docs/research/figures/figure8.png +0 -0
- package/docs/research/figures/figure9.png +0 -0
- package/docs/research/header.tex +23 -2
- package/docs/research/paper-full.md +941 -285
- package/docs/research/paper-short.md +216 -585
- package/docs/research/references.bib +132 -0
- package/docs/research/slides-header.tex +188 -0
- package/docs/research/slides-pptx.md +363 -0
- package/docs/research/slides.md +531 -0
- package/docs/research/style-reference-pptx.py +199 -0
- package/package.json +6 -5
- package/scripts/analyze-eval-results.js +69 -17
- package/scripts/analyze-mechanism-traces.js +763 -0
- package/scripts/analyze-modulation-learning.js +498 -0
- package/scripts/analyze-prosthesis.js +144 -0
- package/scripts/analyze-run.js +264 -79
- package/scripts/assess-transcripts.js +853 -0
- package/scripts/browse-transcripts.js +854 -0
- package/scripts/check-parse-failures.js +73 -0
- package/scripts/code-dialectical-modulation.js +1320 -0
- package/scripts/download-data.sh +55 -0
- package/scripts/eval-cli.js +106 -18
- package/scripts/generate-paper-figures.js +663 -0
- package/scripts/generate-paper-figures.py +577 -76
- package/scripts/generate-paper-tables.js +299 -0
- package/scripts/qualitative-analysis-ai.js +3 -3
- package/scripts/render-sequence-diagram.js +694 -0
- package/scripts/test-latency.js +210 -0
- package/scripts/test-rate-limit.js +95 -0
- package/scripts/test-token-budget.js +332 -0
- package/scripts/validate-paper-manifest.js +670 -0
- package/services/__tests__/evalConfigLoader.test.js +2 -2
- package/services/__tests__/learnerRubricEvaluator.test.js +361 -0
- package/services/__tests__/learnerTutorInteractionEngine.test.js +326 -0
- package/services/evaluationRunner.js +975 -98
- package/services/evaluationStore.js +12 -4
- package/services/learnerTutorInteractionEngine.js +27 -2
- package/services/mockProvider.js +133 -0
- package/services/promptRewriter.js +1471 -5
- package/services/rubricEvaluator.js +55 -2
- package/services/transcriptFormatter.js +675 -0
- package/docs/EVALUATION-VARIABLES.md +0 -589
- package/docs/REPLICATION-PLAN.md +0 -577
- package/scripts/analyze-run.mjs +0 -282
- package/scripts/compare-runs.js +0 -44
- package/scripts/compare-suggestions.js +0 -80
- package/scripts/dig-into-run.js +0 -158
- package/scripts/show-failed-suggestions.js +0 -64
- /package/scripts/{check-run.mjs → check-run.js} +0 -0
@@ -0,0 +1,326 @@


## Bitter Lessons, Stochastic Parrots, Errant Agents: From Criticism to Immanent Critique?

- Overview of examples of *external* (Goodlad and Stone; Gebru and Torres) and *internal* (Sutton, LeCun) critique
- Boosters / doomers? Hype vs critique? How do we address profound, even irreconcilable difference?
- Example: TESCREALism illustrates how criticism can be co-opted and made ironic
- "What is Immanent Critique?" (Stahl)

> Marc Andreessen, the last of whom included “TESCREAList” in his Twitter profile for several weeks in 2023 (Gebru & Torres 2024)

```notes
We have traversed a lot of conceptual territory in this course, and this week we move to another term: critique, and the related term criticism.

All of the articles and videos offer insight on different AI and machine learning topics. I'll be providing very brief coverage of two lines of criticism: what we can think of as *external* (Goodlad and Stone; Gebru and Torres) and *internal* (Sutton, LeCun) criticism.

But the area I really want to focus on is the question raised by the Stahl article: what is immanent critique?

I raise this question not only because it arises in the context of Hegelian scholarship - Hegel being the key thinker to inaugurate a concern with this question - but also because one of the difficulties today concerns how we address deep epistemic divides between proponents and critics of AI. Let me take as an example the remark Gebru and Torres make in the context of their article on TESCREALism:

> Marc Andreessen, the last of whom included “TESCREAList” in his Twitter profile for several weeks in 2023

Now this term is of course a critical one, and we can assume Andreessen is both aware of this and proudly applies the term to himself *ironically*. So here we get a key difficulty in the context of critique and criticism: today it is very easy simply to embrace the criticism and move on. If criticism is simply a game of name-calling and point-scoring, we might say it is ultimately ineffectual - something like a fashionable pose we adopt, a sign of learning and distinction. How do we make criticism *stick*? This is the problem Stahl - and Hegel before him - seek to address via the idea of an immanent critique.
```

---

### Towards Critical AI: Beyond Chatbot-K

- Mistaking statistical processes for signs of intelligence
- Extreme hype - both of the benefits and dangers of AI (booster-doomerism)
- Corporate offloading of risks and costs to the public
- Poor science: "misleading terminology, flawed technical benchmarks, and proprietary datasets"
- "ELIZA" effects of anthropomorphism
- Misplaced attribution of experience to machine learning
- "Amplification of biases and stereotypes"
- "Misinformation and Degradation of the Internet"
- "Copyright Infringement, Lack of Consent"
- Environmental effects
- Corporatization / privatization of research
- "Exploitation of Human Labour"
- "Frictionless Knowing"

```notes
In the first two readings, we see criticism applied to AI from the standpoint of scholars working in the field of what has been termed "critical AI studies". This is what Stahl might term "external" criticism: levelled at the general field of AI from outsiders (scholars typically working in the humanities and social science traditions).

The article by Goodlad and Stone is an introduction to a recent special issue launched by a new journal, in fact entitled "Critical AI". It develops a wide-ranging, lengthy but useful array of critical AI topics, summarizing a number of concerns many of you will be familiar with, and that we have dealt with in previous weeks.
```

---

### TESCREALism: "transhumanism, Extropianism, singularitarianism, (modern) cosmism, Rationalism, Effective Altruism, and longtermism"

- Blends narratives of scientific progress with transhumanist fantasies of mortality transcendence
- Connected with the history (and recent resurgence) of racialized eugenics
- AGI (Artificial General Intelligence): Utopia and Apocalypse: Two Sides of the Same Coin?
- What is downplayed in the "existential" crisis of AGI is the perpetuation of everyday risks: algorithmic racism and sexism, environmental harms and the concentration of political power

```notes
The article by Gebru and Torres develops criticism in another direction: what I would call an effort to characterize the "spirit" of Silicon Valley, which today motivates the development of Artificial Intelligence. Their description of TESCREAL seeks to join together otherwise disparate and even contradictory elements. On the one hand, the pursuit of scientific, mathematical and technological progress. On the other, the fantasy - at least for today - of an overcoming of mortality and the limits of life, which falls under the banner of transhumanism.

They further develop their analysis with a comparison with eugenics: a movement that coincides with the history of statistics, which underpins machine learning. This they connect specifically with the recent focus on AGI. They note one of the recurring oddities of the generative AI moment: that those researching or investing to develop AI are often those also expressing concerns about its existential risk. What is downplayed in the "existential" crisis of AGI is the perpetuation of everyday risks: algorithmic racism and sexism, environmental harms and the concentration of political power.
```

---

### The "Godfather" Critiques: The Problem of Experience

- Both LeCun and Sutton discuss the limits of Transformer-based systems
- LLMs cannot "experience"; they are useful regurgitators of human experience
- Echoes of Goodlad, Stone, Gebru and Torres: LLMs overrated
- **But**: each suggests alternatives that can address the limits of LLMs:
  - Sutton sees Reinforcement Learning as the key to genuine (as opposed to imitative) AI.
  - LeCun argues (in one of the supplementary readings) for AI that is modelled more closely on childhood development.

```notes
In the follow-up two videos, featuring Yann LeCun and Richard Sutton, we see a different type of criticism. Both are highly respected scholars in the field of Artificial Intelligence, responsible for some of the foundational developments in neural networks and reinforcement learning. Both are however critical of the overreliance upon the Transformer architecture we discussed in Week 4. In different ways, each thinks language models can never arrive at general - much less super - intelligence, because they are inherently limited.

Their criticisms overlap, but also differ, certainly in how each sees these limitations being overcome. For LeCun, the limitation relates especially to the relatively *static* nature of the language model. For both, the limits of language models ultimately relate to the lack of *experience*.
```

---

### Criticism Types in Machine Learning

- *External* critics are well-informed scholars outside ML, offering outsider perspectives.
- *Internal* critics are ML experts questioning the field's own evolution.
- Both viewpoints highlight different concerns about progress and direction.
- But are they *convincing*? *Persuasive*?

```notes
So far we have examined examples of two types of criticism. The first we can describe as *external*: developed by well-informed scholars who sit, like most of us, outside the immediate field of machine learning. The second we can describe as *internal*: developed by experts in machine learning itself, who are nonetheless critical of the way the field has evolved.

But whether we agree with these different critiques or not, we might still be troubled by whether *opposing* standpoints would find them convincing...
```

---

### "What is Immanent Critique?"

- Immanent critique: unfolds the system's own logic.
- Distinguishes object (target of criticism) from standpoint (critic's explicit position).
- Example: Goodlad & Stone critique
  - Object: AI or LLMs
  - Standpoint: equality, sustainability, verifiability - *liberal* values?

```notes
Now we turn to the fifth article, by Stahl: "What is Immanent Critique?" What *is* immanent critique? And what is its relevance for us, in thinking about machine and human learning?

Firstly, the purpose of immanent critique is to pose greater difficulties for those who would like to rebut the critique - because it aims to show that the critique stems from the same *ground*, or shared premisses, as the position that motivates the rebuttal.

Let's move on by introducing a couple of further terms: the *object* and the *standpoint* of critique. The object is the thing that we wish to judge with our criticism, while the standpoint is the position we adopt, implicitly or explicitly, when we do so. For example, the Goodlad and Stone article is directed toward the *object* of artificial intelligence, and their *standpoint* is that technologies such as AI should be equitable, sustainable and fair with respect to the work of creators and workers.
```

---

### Transcendental vs Immanent Critique

- Transcendental critique: standpoint is *external* to the world of the object.
- A transcendental god creates but remains aloof; an immanent god is embedded in creation.
- Hegel's philosophy exemplifies immanence, seeing Spirit evolve through history.
- **Immanence** implies that *human development* mirrors *divine development*

```notes
We'll next introduce another term - the opposite of immanent critique - as a way for us to get into what can be quite a difficult idea to grasp, through the form of contrast. This is the term *transcendental* critique. What is transcendental, and how does it differ from immanent?

Both ideas come in fact from the world of religion. A transcendental god is one who sits outside the world; who perhaps gives the world its start, via an act of divine creation, but thereafter remains aloof, watching from afar, perhaps only returning at some future date of Judgment. An immanent god, by contrast, is one who is embedded in our world, who is able to act upon it, or who is infused in every act of creation. We have already seen how Hegel's view is deeply immanent: the Spirit is always in a process of maturing, developing, evolving. This includes the Spirit in the godly sense - he means something like god itself coming into being through the creation of the world, of us, of history, of technology. God's immanence means, for Hegel, that we are not only part of god's plan; our development is also the development of god, or of spirit. Hegel is, in some sense of the term, a transhumanist.
```

---

### Immanent Critique arises from Contradiction

- Missteps in history: reduced knowledge, stalled progress.
- Ideal outcomes (equality, harmony) clash with reality: wars, xenophobia rise.
- Contradiction between **ought** and **is**
- This contradiction fuels critical examination of our cultural trajectory.

```notes
So what about critique? Let's stay with Hegel's idea of immanence a moment longer, even if we treat it just as a convenient fiction. If we assume our history - the history of our species, our culture, our technology - is the coming into being of god, then we might also assume there are missteps. These might be moments when god's development is arrested, held up, because we stray from the proper path. How do we know this? We might believe god's ultimate plan is the realization of Absolute Knowledge; yet we might feel we are instead losing knowledge, becoming less intelligent, less educated - straying from the tendency towards god's own eventual realization.

In more concrete terms: we might believe our culture, science and technology *should* be leading us towards greater equality, mutual recognition, self-consciousness and awareness, political harmony, sustainability and so on. But then we notice in practice the opposite is true: wars continue to be fought, xenophobia - a kind of lack of recognition - is on the rise.

In short, we notice a contradiction between what *ought* to be the case, and what *is* actually the case. And this leads us to criticism.
```

---

### Immanent Critique in Hegelian Thought

- Standards of right and wrong arise from historical evolution, not divine decree.
- **Dogmatic** beliefs resist challenge because they rely on transcendental conditions.
- Rational knowledge must acknowledge historical contingency to allow revision.
- Immanent critique enables detection and correction of irrational foundations.

```notes
Now what makes this criticism *immanent*? In our story so far, we might note we do not in fact have ten commandments or a set of equivalent rules laid out for us. Instead, it is through our own evolution that we develop a sense of right and wrong, or the standards that allow us to talk about what *ought* to be the case. This sense that our standards of judgment arise historically - via our own trial and error - is fundamental to Hegel's picture, and part of his legacy is precisely this idea of a critique that is immanent to our own historical process - and not to some *deus ex machina* who presides over us.

Why does this matter? One of the problems Hegel is trying to address - just like Kant before him - is that of *dogmatism*. Dogmatic beliefs are those that cannot be challenged, because in some way they refer to transcendental conditions: for example, to a god that *I* believe in, but that *you* do not. Hegel wants to say that dogmatic knowledge, even when it looks like it is true, still rests upon what might be subjective and even irrational beliefs. Such knowledge is itself therefore ultimately irrational.

Rational knowledge, on the contrary, needs to be based upon foundations that understand themselves to be historically contingent - even if we don't yet know of any better. Then, if we see contradictions, these foundations are capable of being revised - this is the sign of a properly rational knowledge, and the basis for immanent critique.
```

---

### The Problem of Transcendental Critique

- Transcendental critique relies on shared *a priori* values (e.g. liberalism)
- *Divergent* value systems cause perceived dogmatism or skepticism.
- Historical process can unite parties by showing values as *earned* (not divinely bestowed on this or that group).

```notes
Why is this important? In the final analysis, transcendental critique - the kind of criticism that depends upon standards brought from outside the object - only holds if the speaker and the listener share some *a priori* values. For example, liberalism: if we are both liberals, your critique of how AI (for example) offends liberal sensibility is likely to resonate. But what if I am opposed to liberalism? We are stuck: you appear to me as dogmatic; I appear to you as sceptical. What breaks this impasse? If you can show me how we have come, together, through a historical process, to understand certain values, such as equality, as precious - not as god-given, but rather as acquired via painful historical trial-and-error. This doesn't require us both to be atheists; only that we see how history delivers us a set of ideals that seem to improve upon earlier versions. Then you might lead me to a conviction that AI (or something else) contradicts this historical and ethical achievement. Pointing out such a contradiction leads to a criticism that is *immanent*, because it does not depend on our having some shared metaphysical values - only on our both acknowledging this historical process (a separate problem).
```

---

### Immanent Critique and Dialectic Learning

- We evaluate objects against ethical standards - while also *reassessing* those standards.
- This reciprocal judgment embodies Hegelian experience, driving self-consciousness forward.
- The process refines both object and norm, leading toward *Absolute Knowledge*.
- It exemplifies genuine learning through continuous recalibration of ideas and values.

```notes
As the Stahl paper argues, this process of immanent critique involves a complex dialectical exchange: we judge an object, like the printing press, against a normative, ethical standard, and then we also judge our own standards too - recalibrating both object and standard together. This process, as Stahl reminds us, is what Hegel means by the term "experience", and is the genuine form of learning that takes us forward toward self-consciousness and, ultimately, Absolute Knowledge.
```

---

### AI, Expectations, and Self‑Critique

- Should our hopes for technology *evolve alongside* our assessment of AI?
- Consider AI as a catalyst for *reflexive pedagogy* (following Hegel) - or potential *social domination* (following Marx)
- How is AI reshaping the standards and foundations of critical self‑examination?

```notes
How do we bring this all home in the context of AI? First, we might want to think in terms of our own processes of response to AI as an object: how do we judge it against what we take to be our normative expectations of technology? Or in less formal language: what are our hopes for technology, and in what ways does AI fail to live up to them?

But second, how are our hopes themselves being modified as we start to assess AI? Can we begin to see that our standards themselves need to evolve - not just that we need new rubrics, for example, but that the foundations for those rubrics also need examination?

And on a wider front, leading us towards the differing visions of history and futurity we will discuss next week: is the advent of AI itself the kind of technology that might be essential to the very task of our critical self-examination? Is it the technology that can help transform the admitted obscurity of reflexive immanent critique? Can it lead us toward, as Mary and Bill might say, reflexive pedagogy? Or, as Gebru and Torres claim, is the AI project instead a massive fantasy projected by a disguised eugenics movement - a contemporary form, as Stahl discussed in relation to Marx, of "social domination" (rather than "conceptual self-determination")?
```

---

### Immanent Critique: Example of the Printing Press

- Printing press: dual role in *democratizing knowledge* and *spreading misinformation*.
- Democratic values stem partly from *technology-enabled education*.
- Calls for reflexive critique, balancing *object's shortcomings* with *realistic standards*.
- Emphasizes need to anticipate future tech impacts by learning from past innovations.

```notes
Let's examine now the case of the printing press, as an example of an immanent critique of technology. We can look back upon the arrival of this invention and note its pros and cons: its democratization of information, on the one hand, and its propensity for the spread of misinformation, on the other. But then we might step back to ask: where does our *desire* for democracy come from? In other words, what explains the basis of our judgment? We might say: we learn it, from parents, schools, community, all of which instils in us an appreciation for democracy, for an equal right to vote, and so on. But we might probe further, and note our education is in part dependent upon the low cost of the technologies for storing and disseminating information itself. These include the printing press, or derivatives of it. So a proper "immanent" critique acknowledges our debt to the very technology that we are busy critiquing.

Immanent critique involves, then, a process of trying to bring together the objects and our standards for assessing them in a single account. If we didn't have the object of the printing press, we might not have the benefit of several hundred years of the democratization of education to allow us to judge it. And this must form *part of our judgment*!

However, that does not mean that critique needs to become mute. Instead it can observe the emergence of contradictions: when, for example, the technology of print is used to subvert the ideals of democracy and equality that it also gave rise to. In that case, we need to deliberate: either the object is not living up to the standards we expect of it; or our standards may have been too high; or some combination of the two. We need to be *reflexive* in our critique, and consider how both the object and the critique itself might need to be recalibrated.

Now this becomes more difficult with technologies of the present, because we do not yet fully experience their benefits or their deficits. So we have to *anticipate* many of their effects, while also acknowledging the role of similar technological innovations, like the printing press and the Internet, in the past.
```

---

### Toward a Reflexive Pedagogy of AI? AI and Immanent Critique

a. the pros and cons of the technology itself - will it improve education? What are criteria for evaluating improvement?
b. whether our existing standards are sufficient for evaluation - or do we need new ways to evaluate? How do we evaluate our evaluations?
c. Can AI be a reflexive technology - one that helps us develop a critique that raises the standards of both human and machine? What would be criteria for assessing this?

```notes
In breakout groups, we want to try to develop a rubric for a reflexive, immanent critique!

We'll start by imagining we are evaluating a new AI technology for education. We want to develop a rubric that seeks to judge the following:

a. the pros and cons of the technology itself - will it improve education? What are criteria for evaluating improvement?
b. whether our existing standards are sufficient for evaluation - or do we need new ways to evaluate? How do we evaluate our evaluations?
c. Can AI be a reflexive technology - one that helps us develop a critique that raises the standards of both human and machine? What would be criteria for assessing this?
```

<!--
---

Sketching immanent critique

1. We could argue that both the history of technologies - such as the printing press and the Internet - and social progress have brought us standards that we apply to future technologies. These must also help to promote democracy, equality and liberty.

2. We could also argue that technologies and the "will to power" have led to a society that is increasingly centred on control, domination and value extraction. Certain technologies might then surprise in how they seem to run against this trend, opening up potential for liberation.

Both constitute examples of immanent critique, because we are judging the present against the expectations set by the past - history - not some values imported via religion or other means.
-->

---
|
|
277
|
+
|
|
278
|
+
|
|
279
|
+
|
|
280
|
+
|
|
281
|
+
|
|
282
|
+
<!--
|
|
283
|
+
We might begin by asking the obvious question: how is critique different from criticism? Often the terms are used interchangably.
|
|
284
|
+
|
|
285
|
+
But as I suggested in the write-up for this week, Hegel's predecessor Immanuel Kant really defined the term for us. For Kant, critique is the development of an argument about what can and cannot be said about a topic. The Critique of Pure Reason determines what can be known by science, and what cannot. The Critique of Practical Reason, on the other hand, describes how we can know what is moral and what is not - how we ought to regulate our conduct in practical day-to-day life. Finally, the Critique of Judgment describes the conditions under which we apply judgments about beauty. In short, the three Critiques describe how we determine what is true, what is good, and what is beautiful.
|
|
286
|
+
|
|
287
|
+
Critique in this sense acts like meta-criticism; it establishes the ground by which criticism - 'that is false / evil / ugly' - can be applied. How does this relate to our contemporary situation with respect to machine learning? First, I would say much of the discourse around AI today remains locked into *criticism*: relatively little has sought to explicate what constitute the limits of what a machine can learn, or, just as importantly, what machine learning means for the limits of human learning.
|
|
288
|
+
|
|
289
|
+
Kant presumes a relatively static world, made up of space, time, and many other things, including things like us, capable of asking questions about that same world. With Hegel, we get a similar picture, but injected now with a dynamic sense of how we as humans can conceive of that world. For Hegel, there are no fixed categories. Instead we work our way through the categories we inherit, discarding some, modifying others, as we test the sum of our knowledge against how we take the world to be. In this very specific sense, Hegel is the first proper philosopher of **technology**: because he allows for the possibility that, alongside and as part of our collective scientific adventure, instruments and machines could revise our understanding of the world quite fundamentally.
|
|
290
|
+
|
|
291
|
+
Hence Hegel would not have necessarily shocked by profound scientific discoveries and technological developments after his time: Darwin, Freud, Einstein, computers, or machine learning. Instead he would see these as proving his point against Kant - critique must always absorb into itself new 'moments' of discovery.

Today though, I'm not sure we have quite the same generosity and openness to the prospects of a technology that, as we saw last week, is asking us to align ourselves with the new possibilities it establishes.
-->

<!-- Changing relationship between human and machine. "Assistant" is the wrong metaphor: assumes anthropomorphic "other". Why not a new appendage? Why not acknowledge AI is a powerful cyborgian supplement - like writing, and other technologies?
-->

<!--

First, a little background: Hegel, a guiding light for this course, was following closely in the footsteps of another German philosopher, Immanuel Kant. Kant introduced the concept of critique in three major works:

- Critique of Pure Reason (what we today would call, roughly, "Science")
- Critique of Practical Reason (what we would call "morality" and "ethics")
- Critique of Judgment (what we would call "aesthetics", and also art, literature, film studies, etc.)

"Critique" was not simply meant as a form of criticism, but rather a radical re-thinking of the possibilities of knowledge. For Kant, critique avoids the twin problems of sedimented knowledge: dogmatism (insisting you know what is right) and scepticism (being doubtful that anything is right).

Hegel picks up on Kant's concept of critique, offering (as we might suspect) a *critique* of critique. Where Kant sought to identify the limits of knowledge, Hegel saw these limits as themselves arbitrary and unnecessary. Hence the *Phenomenology*'s long road toward Absolute Knowledge.

In relation to AI, we are faced in many respects with similar terms of reference today. On the one hand: those who see AI as the path to unlimited knowledge (the dogmatists). On the other: those who deny that path entirely (the sceptics). And then those who, like Kant and Hegel, seek to describe the very map and contours of what is knowable by machines at all. Are they simply powerful statistical engines? Or potential oracles that, given time, might overcome the limits of their training data?

Quite aside from these abstract concerns, there is a fast-growing literature on the negative social effects of AI. We saw some of this already last week in the discussion of alignment. But not all criticism sees alignment as a fix; indeed, for many, the issues are more profound and structural, relating to who *owns* the models, the compute power, the data centres, and the potential means for social surveillance and cognitive lock-in. Even if we remain unconcerned about *doomer* scenarios, many are worried about the intense concentration of resources – including energy, water and real estate, as well as data, computation and human resources – involved in AI today.

This week we review a small sample of this literature, to gain a sense of how machine learning is far from a politically neutral technological development. Yet might there be a sense that the machines are learning from this criticism too?

This week I've added a number of papers that touch upon both critique and criticism of AI. As with previous weeks, pick readings that might interest you and absorb what you can.

-->
@@ -0,0 +1,273 @@

## Hegel's Revenge: The Return of the *Grand Récit*?

- From Synthesis (Week 1) back to Synthesis (Week 8)
- But what is being synthesized? Review of key concepts Weeks 2 – 7
  - Experience
  - Recognition
  - Attention
  - Consciousness
  - Alignment
  - Critique

```notes
So far in this course we have traversed a wide array of concepts, loosely guided by Hegel's early but influential account of learning - as I have interpreted *The Phenomenology of Spirit*.

Today we return to our originating concept of *Synthesis* – a return to the beginning that echoes Hegel's own description of Spirit in that text, with, hopefully, the benefit of some lessons along the way. Each week we've covered a key concept – sometimes derived from Hegel, sometimes not – and sought to mobilize that concept to understand something about the emerging differences and similarities between machine and human learning. Today I'll close the loop by summarizing the concepts of Weeks 2–7 and returning to Week 1's key concept of *Synthesis*.
```

---

### Synthesis as a Concept

- Conceptual meaning: bringing together other concepts
- Historical meaning: bringing together past periods into a united present
  - Liberal democracies bring together prior Greek and Christian periods (Hegel; Fukuyama)
  - Communism brings together prior feudal and capitalist periods (Marx)
  - Transhumanism / posthumanism / technopoeisis / cybersocial: bring together elements of the organic (pre-industrial; romantic) and the mechanical (industrial; post-romantic)
- Pedagogical meaning: we synthesize concepts in a cumulative way, when we are ready, in conjunction with others (cf. Vygotsky's Zone of Proximal Development - Vygotsky was apparently himself influenced by Hegel)

```notes
But first, a note about the term "synthesis" itself.
One question to bear in mind for today, the final assignment and perhaps beyond: how do these seven concepts work together to form a *theory* of intelligence, consciousness or what – following Kant and Hegel here too – we can call the "automated subject". This is one sense of the term "synthesis" - the bringing together of a series of concepts in order to produce something new.

But there is another sense to hold on to - synthesis as a process that unfolds, and which includes the social and historical process we all belong to. We can see examples of this in all the readings this week. Following Hegel, many discussions about the future rest on some implicit version of this historical sense of synthesis - the idea that we are "rolling up" elements of the past into the present and future. Even attempts to characterize how machines and humans might operate together employ this idea more or less explicitly: we start with a pre-industrial society, evolve to an industrial or mechanized society, and now need to move forward to a post-industrial society that blends both "moments" of development and moves beyond them.
```

---

### "Ontogeny recapitulates phylogeny"

- Individual development follows the pattern of social development: infancy > adolescence > maturity (Spencer, Freud, many others)

```notes
One key quote here – a piece of early 19th century biology that sounds as though it could come from Hegel:

- "ontogeny recapitulates phylogeny"

This means, roughly, that the individual's development follows that of the species. In social terms: our own development (from infancy to adolescence to maturity) follows that of society (from early to middle to late stages of development). See Herbert Spencer on pedagogy. This partly accounts for Hegel's shift in the Phenomenology from psychological to sociological language - and while many might wish to deny it, this pseudo-fallacy – of a relationship between individual and group – lives on in variants of the narrative we tell ourselves even about technology.

```

---

### Experience

- For Hegel, experience is the synthesis of concepts and sensory perception
- Perceptions correct concepts...
- But perceptions can be fallible, so we need *laws* - made up of concepts – that also check perceptions (for hallucinations, etc)
- We *synthesize* conceptual laws (e.g. theory of gravitation) and perceptions ('but what about this object that flies?') to produce understanding

```notes
We'll move now back to the structure of our course.

When we talk of synthesis then, we think of the convergence of machine learning with human learning. I think we have already seen ways this is happening. With the concept of Experience, Hegel suggests we are always integrating or synthesizing our concepts with our perceptions: a process of revising our concepts to be consistent with those perceptions. To do so, we need to take account also of the fallibility of our perceptions, devising rules or laws, such as the law of gravity, to apply some order and regulation to the flow of sensations, which can often be distorted. Hence we move from sense to perception to, finally, understanding. Synthesis involves this back-and-forth between the reality of the law and the concept and the appearance of things involved in perception - each working to shape the other through experience.
```

---

### Recognition

- But *understanding* has limits; it is directed outward, toward external objects, but cannot "know itself" (Hegel)
- Need further synthesis: consciousness knows itself through *knowing others* / *being known by others* (Recognition)

```notes
We then saw synthesis in a different form: in the coming together of social roles. What Hegel describes as an existential fight to the death results in the production of a society made up of masters and servants. Yet this equilibrium cannot hold. The servant is no longer the passive recipient of experience that results in even the lofty products of understanding. Instead they learn how to work with things: if you like, they move from a scientific understanding to a technological and practical know-how – ironically, a kind of mastery. In the process they seek to overcome the paradox of recognition faced by the master, who desires the recognition of equals but equally demands they submit to his authority as servants. The servant, by contrast, can acquire recognition of their labour – not by this master, we imagine, but by other servants who are coming into their own as equals in modern society.

Here the synthesis exists in its social form: what we call society or the social is the synthesis of individuals coming together via, at the least, the pursuit of mutual recognition.
```

---

### Attention

- Humans pay attention via three mechanisms: alerting, orienting, and executive control
- "Attention is all you need" - machine tokens pay attention to each other, and build up rich context from this
- The mechanisms are very different - but machinic attention bears some resemblance to the implicit semantic theory of Hegel (and later semantic theories – e.g. generative semantics of Lakoff)

```notes
By week 4 we reached the concept of Attention, and while we departed from a direct reading of Hegel, we again saw synthesis in the form of how both humans and machines, at least metaphorically, enact cognition via attention mechanisms.

If we apply synthesis here, it is in a metaphorical way. Our experience of consciousness involves synthesizing different moments of neurological attention: moving from being alert, to orienting, to higher order executive functions, involving the sustaining and switching of the lower level attention subsystems.

Synthesis applies even less obviously to the attention mechanisms of generative AI systems. I did suggest however some analogy between Hegel's discussion of the concept – as something that stands in relation to, and is defined by, an intricate web of other concepts – and the ways tokens acquire semantic meaning via the Transformer attention mechanism. In the abstract, we might say both share a common semantic theory, insofar as terms obtain their meaning primarily through their connection to other terms. For those interested, this idea has also been taken up in recent decades by semantic theorists like George Lakoff and Mark Johnson.

```
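
The slide's claim that machine tokens "pay attention to each other" can be made concrete. Below is a minimal, illustrative sketch of scaled dot-product self-attention – the core operation of the Transformer – using toy random vectors (the function name, dimensions and data here are illustrative choices, not from the course materials). Note how each token's new representation is a weighted blend of every token's vector: its "meaning" is literally composed from its relations to the other tokens, which is the analogy drawn above with Hegel's web of concepts.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query row scores itself against every key row: this is how
    every token 'pays attention' to every other token."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)              # (n_tokens, n_tokens) affinities
    # Softmax each row into attention weights that sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Each token's output is a weighted blend of all the value vectors
    return weights @ V, weights

# Toy example: 3 "tokens", each a 4-dimensional embedding
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(X, X, X)   # self-attention: Q = K = V
```

Real Transformers add learned projection matrices for Q, K and V, multiple heads, and stacked layers, but the relational picture – each token's meaning as a weighted sum over the other tokens – is already visible in this sketch.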

---

### Industrial Revolution / Fordism / Generative AI: How does Technology remake Concepts of Consciousness?

- "The hand-mill gives you society with the feudal lord; the steam-mill society with the industrial capitalist." (Marx, 1847)
- But what does technology give to our concepts of consciousness?
  - Early 19th century: Consciousness to Self-consciousness
  - Early 20th century: Consciousness | Pre-conscious | Unconscious
  - Early 21st century: Nonconscious cognition
    - "Joins" machine and human cognition
    - Human "consciousness": second-order cognitive effect - downstream from nonconscious cognition

```notes
In week 5 we returned to Hegel's interest in consciousness, enriched perhaps via our interest in attention. Where Hegel plotted the path from consciousness to self-consciousness, we saw later developments that both undermine and enrich concepts of consciousness.

With Freud, we get a partly mechanistic account that involves an interplay between conscious, pre-conscious and unconscious subsystems or components. Messages from the unconscious are first screened by the pre-conscious before they are admitted to consciousness, while conscious messages are only reluctantly, via the process of therapy, permitted to revise deep unconscious structures.

Taking into account neuroscientific discoveries like the attention mechanism alongside the rise of artificial intelligence, N Katherine Hayles' more recent work sought to *synthesize* machinic and human cognitive processes under the name of "nonconscious cognition".

Yet even more recent AI systems – notably Transformer systems built around the attention mechanism – have perhaps, even for some of you, raised again quite basic questions of whether machines can experience and recognise, and so, in this much earlier Hegelian sense, can be said to be conscious.
```

---

### Alignment

- Alignment: synthesis in the sense of uniting the machine with "human values"
- Suggests another similarity between deviant human and machinic "subjects" – subject to disciplinary systems (schools, prisons, hospitals, clinics, workplaces) – cf. Michel Foucault

```notes
By the time we reach alignment, it is as though this obscure idea of synthesis has gone fully mainstream. Isn't alignment just another word for 'synthesis'? Not entirely, perhaps, but it is as though the idea of synthesizing humans with machines had gone from a strange 1960s or '70s sci-fi scenario dreamed up by Isaac Asimov or Philip K. Dick to a core industrial – and pedagogical – project.

Think back to Michele's insightful discussion of her work as a trainer and teacher of AI systems at Scale AI. While AI clearly needs some kind of alignment – to avoid being offensive, to be useful and so on – we also saw several strange side-effects of this work. Companies are now in charge of determining what "human values" AI must conform to. AI is subject to many of the same disciplinary techniques that have haunted, and continue to haunt, human subjects. The language of machinic deviance - jailbreaking, hallucination and so on – suggests a strong connection with the language of human deviance.

Perhaps this is a kind of synthesis that could be unintentionally reassuring? Will we see errant AI agents conspiring with rogue human subjects? Does a refusal to align suggest perhaps an uneasy allyship between machine and human?
```

---

### Critique

- *External* (Goodlad & Stone; Gebru & Torres) vs *Internal* (LeCun, Simmons) Criticism
- Another distinction: immanent vs transcendental critique
  - *Transcendental*: standards of criticism brought from outside the world of the object (e.g. from a god)
  - *Immanent*: standards belong to the world of the object - the object is *self-contradictory*, in that it promises but does not deliver X.
- But the critique must account for *itself* - it does not just condemn the object, it says why the critique itself is possible
- *Synthesizes* the object and the standards of assessment into a unitary account.

```notes
In week 7, we examined what it means to criticize AI, looking at two recent works of critical AI studies. We also saw two prominent AI researchers criticize the overreliance of AI upon the attention mechanism - we could say they are arguing that other aspects of AI have not received enough... attention!

Yet what I wanted to pay attention to myself here was the complicated concept of "immanent critique". This was not due to any sense that the criticisms we reviewed were insufficiently reflexive, much less invalid. Instead it was to address a concern that might arise, almost unconsciously, whenever we hear criticism: that it contains at its core dogmatic beliefs, and that we need first to strip away the outer appearance of critique to arrive at what the criticism is really 'selling' - which might be as bad as or worse than the object of criticism itself.

So in examining how critique can be immanent - a move that, as we saw, originates with Hegel himself – we also get another sense of synthesis: as the way critique can account for its own production alongside that of the object it criticizes. If we can show how both object and critical standpoint derive from the same historical process, we might, according to this line of argument, be able to produce something new from this contradictory state – a synthesis that moves things forward. We used the example of the printing press to see how a properly immanent criticism needs to account for the way that whatever produces the object of criticism can also be responsible for the development of criticism itself.

In our contemporary situation, it may not be enough to show that AI is flawed and problematic. We may also want to say that the standards we use to judge AI stem from the same origins – for example, the Enlightenment, which unleashed the mathematical techniques and industrial concentrations of capital alongside the very sense of liberalism and humanism that finds the present-day results of these techniques so troubling. From there, we might seek ways to amend the object via the critique, generating new synthesized ideas to move forward. Which brings us to this week's readings...

```

---

### History as Synthesis

- Does history have a direction? Can it "end"? What would this mean?
- History as moving from A to its negation, B (not-A) - and then to C, as a combination of A and B.
- Synthesis emerges from the **contradiction** of A and not-A.
- Contradiction is **generative**.
- Example:
  - Ancient Greek: rationalism, art, philosophy
  - Judeo-Christian: religion, monotheism
  - Medieval / Renaissance: integrates Greek **thought** with Judeo-Christian **belief**.

```notes
Finally we arrive at yet another turn - that of history itself, and whether we imagine the concept of synthesis applies here. For Hegel - and as we see with someone like Fukuyama – history has a definite arc. Despite the inevitable deviations and detours, if it is to unfold according to its destiny, history will arrive at its destination. In Fukuyama's famous phrase, it has an **end**.

How does the concept of synthesis work here? For Hegel, even the deviations and detours – such as the European Dark Ages – are productive. They allow earlier insights, such as the classical age of Greek philosophy and art, to be integrated with Judeo-Christian religion. They are integrated and synthesized, eventually producing, after an extended incubation, the Renaissance and the Enlightenment. And it was medieval feudal society that, despite its apparent backwardness, could effect this reconciliation.

For our machine-obsessed era, it is important to note that Hegel dealt exclusively in the trade of *ideas*. He was not interested in what the printing press, the steam engine or the Jacquard loom would lead to, and of course could not foretell the eventual arrival of the computer. It was Karl Marx who famously turned Hegel on his head, declaring that it was material and matter, not mind and spirit, that were the driving force of history. Ironically this insight continues to motivate almost all of modern discourse about progress: think for example of whether talk of 'progress' today relates to new ideas about social organization or ethics, or instead relates to advances in medicine, transportation, architecture, computation – and of course, to artificial intelligence.

And yet Hegel's ideas about the 'arc' of history still lurk behind this same discourse of progress. Anxiety about the future – about nuclear holocaust, environmental sustainability, AI doomerism, viral contagions, even zombie apocalypses – is itself, we might say, a *modern* invention. It relies upon a sense that the future is something we can *control* – or otherwise why be worried about it?

```

---

### History as Technological Synthesis?

- With Marx, Darwin, Freud: move to **materialist** view of historical development
- Modern inheritors
  - posthuman (Braidotti)
  - technopoeisis (N Katherine Hayles)
  - technodiverse (Hui)
  - cybersocial (Kalantzis & Cope)

```notes
So while the present-day form of progress may not be 'idea-driven' – perhaps because we have run out of them? or because we have unwittingly taken Marx's materialism to heart, even if we want to dispute Marxism itself? – we still hold dear the idea that we can bend history to our will. In the first four readings this week – by Rosi Braidotti, N Katherine Hayles, Yuk Hui, and Mary Kalantzis and Bill Cope – we see several variations of this.

Each of these readings identifies a key concept that describes a future that is not completely determined by the sort of reductive transhumanist ideology we saw criticized in the Gebru and Torres article last week. We should note how each of these concepts is itself an example of a portmanteau that synthesizes aspects of the machine with the human:

- posthuman (Braidotti)
- technopoeisis (N Katherine Hayles)
- technodiverse (Hui)
- cybersocial (Kalantzis & Cope)

Rosi Braidotti's discussion of AI develops upon her earlier work over several decades on the concept of the posthuman. We revisited N Katherine Hayles' work on nonconscious cognition earlier; here she presents another concept, technopoeisis. Yuk Hui is a philosopher of technology who has written several prominent books over the past decade. I include this interview with him, as it involves not only a discussion of another term we can consider – technodiversity – but a useful elaboration of AI across geopolitical lines. With Mary Kalantzis and Bill Cope's work on the cybersocial, we have a fourth and final concept.

Each of these terms gestures towards a way of thinking about how we might understand our contemporary situation with respect to technology, and to AI.

```

---

### Owl of Minerva (Roman Goddess of Wisdom)

- Fukuyama (neo-Hegelianism): history is over, liberal democracy has "won". What remains?
  - Tinkering with the political and economic systems
  - Techno-scientific progress (AI etc.)
- But is AI causing a return to "big picture" thinking? Is the permanence of a global liberal democratic world order still certain? Can technology drive history?
- Hegel's best-known quote: "the owl of Minerva spreads its wings only with the coming of the dusk"
  - We can only understand events as history once they are *past*
  - A cautionary note on our ability to predict the future, or even interpret the present
- **What do we make of AI's own ability to either predict or produce new futures?**
- "The truth is that no one really knows how AIs will develop in the coming decades." (Katherine Hayles 2024)

```notes

On the other hand, we have a neo-Hegelian thinker like Fukuyama, who might argue this situation does not represent a break or change, but instead a continuity with the past. In Fukuyama's case, the end of the Cold War witnessed the End of History, as the only true ideological rival to liberalism collapsed. Ironically, one of the signs of the end of history might be that time is marked by technological change rather than fundamentally new concepts of social or political organization. All that remains, according to this argument, is the stretching out of the achievements of liberalism to the parts of the world yet to fully experience its benefits.

For some, of course, this thesis was always absurd (see Derrida's critique in Spectres of Marx - a point I'll return to), and a far too literal reading of Hegel's own thesis about the end of history. However its naivete makes explicit the utopic elements that are often submerged within other, less explicitly historicist cases. These too include a sense of history as purpose-driven, a process that seeks to synthesize our collective experience.

We might ask today whether these examples of "synthetic" thinking indicate how AI serves as a catalyst for the return, as I've put it, of the *grand récit* - the big picture or grand narrative. Or instead, as Fukuyama signals at the end of his essay, the calcification of history: when the bots take over, the genuine desire for recognition fades, and we are faced instead with the interminable "boredom at the end of history". In this view, AI - helping us to look back through our archives of culture – serves as a nostalgic reminder of a now-lost heroic age.

Finally, we need to remind ourselves that Hegel himself, even while confidently calling the French Revolution "the end of history", remained ambivalent about the possibility of prediction. His most famous quote:

> the owl of Minerva spreads its wings only with the coming of the dusk

is an example of epistemological caution.

This is part of Derrida's critique of Fukuyama - Fukuyama is essentially a naive Hegelian, whereas Hegel was himself far more circumspect. There is always the possibility for new contradictions – and therefore new syntheses.

What do we make of AI's own ability to either predict or produce new futures?

```

---

### Designing Curriculum for the Forthcoming Machine/Human Synthesis!

1. Does history have an arc or tendency? If so, is it something we can influence? And what role does technology – and machine learning – play?
2. What do machines still need to learn? What about humans? What will help us get there?

```notes
On that note, I'll conclude. We'll now move to another exercise that prepares for the final assessment (covered in greater detail back in Week 6).

We'll start by asking, in a Hegelian vein: does history have an arc or tendency? If so, is it something we can influence? And what role does technology – and machine learning – play?

From these wild speculations we'll move to a more concrete (though still 'big picture') topic: what do we (and let's be inclusive, and consider both machines and humans) need to learn? What would be the syllabus and / or curriculum for this ideal learning? How is our combined, synthetic knowledge limited? What – if anything – from our repertoire of concepts can steer us into the new horizons forecast by Braidotti, Katherine Hayles, Hui, Kalantzis & Cope? Discuss!
```