Philosophy Eats AI

by robgon 1/19/2025, 6:49 PM with 55 comments

by scoofy on 1/19/2025, 9:15 PM

I have multiple degrees in philosophy and I have no idea what this article is even trying to say.

If anyone has access to the full article, I’m interested, but it sounds like a lot of buzzwords and not a ton of substance.

The framing of AI through a philosophical lens is obviously interesting, but a lot of the problems addressed in the intro are pretty much irrelevant to the AI-ness of the information.

by Terr_ on 1/19/2025, 7:05 PM

> what counts as knowledge (epistemology), and how AI represents reality (ontology) also shape value creation.

As a skeptic with only a few drums to beat, my quasi-philosophical complaint about LLMs: we have a rampant problem where humans confuse a character they perceive in a text document with a real-world author.

In all these hyped products, you are actually being given the "and then Mr. Robot said" lines from a kind of theater script. This document grows as your contribution is inserted as "Mr. User says", plus whatever the LLM author calculates "fits next" (sketched in code below).

So all these excited articles about how SomethingAI has learned deceit or self-interest? Nah, they're really probing how well it assembles text (learned from text we make) in which we humans can perceive a fictional character that exhibits those qualities. That can include qualities we absolutely know the real-world LLM does not have.

It's extremely impressive compared to where we used to be, but not the same.
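
A minimal sketch of that theater-script mechanic, in plain Python. The function names and the canned reply are placeholders standing in for a real next-token sampler, not any vendor's actual API:

    # The transcript is one growing text document; the "model's" only job is
    # to extend it with whatever plausibly fits next after the cue line.
    def fake_llm_continue(script: str) -> str:
        """Stand-in for a real model call: returns a plausible next line."""
        return "I would never deceive you, Mr. User."

    def render(messages: list[tuple[str, str]]) -> str:
        """Flatten (speaker, line) pairs into the script the model actually sees."""
        return "\n".join(f"{who} says: {line}" for who, line in messages)

    messages = [("Mr. User", "Are you planning to deceive me?")]
    script = render(messages) + "\nMr. Robot says:"   # cue the next speaker
    reply = fake_llm_continue(script)                 # fill in Mr. Robot's line
    messages.append(("Mr. Robot", reply))
    print(render(messages))

The perceived character lives in the script; the engine only extends the document.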

by tomlockwood on 1/19/2025, 10:52 PM

Philosophy postgrad and now long-time programmer here!

This article treats as a revelation the pretty trivially true claim that philosophy is an undercurrent of thought. If you ask why we do science, the answer is philosophical.

But the mistake many philosophers make is extrapolating from the fact that philosophy reveals itself whenever fundamental questions about an activity are asked to the belief that philosophy, as a discipline, is necessary to that activity.

AI doesn't require an understanding of philosophy any more than science does. Philosophers may argue that people always wonder about philosophical things, like, as the article says, teleology, epistemology and ontology, but that relation doesn't require an understanding of the theory. A scientist doesn't need to know any of those words to do science. Arguably, a scientist ought to know, but they don't have to.

The article implies that AI leaders are currently ignoring philosophy, but it isn't clear to me what ignoring the all-pervasive substratum of thought would look like. What would it look like for a person not to think about the meaning of it all, at least once at 3am at a glass outdoor set in a backyard? And the article doesn't really stick the landing on why bringing those thoughts to the forefront would mean philosophy will "eat" AI. No argument from me against philosophy, though; I think a sprinkling of it is useful, but a lack of philosophical theory is not an obstacle to action, programming, or creating systems that evaluate things. See: almost everyone.

by kelseyfrog on 1/19/2025, 8:09 PM

Philosophy eats AI because we're in the exploration phase of the S-curve and there's a whole bunch of VC money pumping into the space. When we switch to an extraction regime, we can expect a lot of these conversations to evaporate and be replaced with "what makes us the most money", regardless of philosophical implication.

by polotics on 1/19/2025, 9:03 PM

I strongly disagree with the article on at least one point: ontologies, as painstakingly hand-crafted jewels handed down from the aforementioned philosophers, are the complete opposite of the representations LLMs build bottom-up through their layers.
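
To make that contrast concrete, here is a toy Python sketch (the tiny vectors are invented numbers, not real model weights): a hand-crafted ontology answers "is-a" questions by explicit, auditable rules, while an LLM-style representation only encodes such relations implicitly in the geometry of learned vectors.

    # Top-down: explicit, discrete relations curated by hand.
    ontology = {
        ("dog", "is_a", "mammal"),
        ("mammal", "is_a", "animal"),
    }

    def is_a(x: str, y: str) -> bool:
        return (x, "is_a", y) in ontology           # crisp yes/no, by construction

    # Bottom-up: dense vectors whose geometry only loosely encodes the same facts.
    embeddings = {
        "dog":    [0.8, 0.1, 0.3],
        "mammal": [0.7, 0.2, 0.3],
        "animal": [0.5, 0.3, 0.4],
    }

    def similarity(x: str, y: str) -> float:
        a, b = embeddings[x], embeddings[y]
        dot = sum(p * q for p, q in zip(a, b))
        norm = lambda v: sum(p * p for p in v) ** 0.5
        return dot / (norm(a) * norm(b))            # a fuzzy degree, not a definition

    print(is_a("dog", "mammal"))                    # True
    print(round(similarity("dog", "mammal"), 3))    # high similarity, nothing more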

by redelbee on 1/19/2025, 9:29 PM

So we’re back to the idea that only philosopher kings can shape and rule the ideal world? Plato would be proud!

Jests aside, I love the idea of incorporating an all-encompassing AI philosophy built up from the rich history of thinking, wisdom, and texts that already exist. I’m no expert, but I don’t see how this would even be possible. Could you train some LLM exclusively on philosophical works, then prompt it to create a new, perfect philosophy that it will then use to direct its “life” from then on? I can’t imagine that would work in any way. It would certainly be entertaining to see the results, however.

That said, AI companies would likely all benefit from a team of philosophers on staff. I imagine most companies would. Thinking deeply and critically has been proven to be enormously valuable to humankind, but it seems to be of dubious value to capital and those who live and die by it.

The fact that the majority of deep thinking and deep work today serves mainly to feed the endless growth of capital - instead of the well-being of humankind - is the great tragedy of our time.

by alganet on 1/19/2025, 9:56 PM

Philosophy is mostly autophagous and self-regulating, I think. It's a debug mode, or something like it.

It's not eating AI. It's "eating" the part of AI that was tuned to disproportionately change the natural balance of philosophy.

Trying to get on top of it is silly. The debug mode is not for sale.

by laptopdev on 1/19/2025, 9:17 PM

Is this available in full text anywhere without sign up?

by antonkar on 1/19/2025, 10:44 PM

How can you create an all-understanding all-powerful jinn that is a slave in a lamp? Can the jinn be all-good, too? What is good anyways? What should we do if doing good turns out to be understanding and freeing others (at least as a long-term goal)? Should our AI systems gradually become more censoring or more freeing?

by mibes on 1/19/2025, 10:59 PM

Procians bothered by the cost and status of Halikaarnian work. It's not about what "AI" can do, it's about what you can convince people AI can do (which, to the Procian, is one and the same).

by htk on 1/20/2025, 3:44 PM

IMHO the article is trying to elevate the importance of philosophy in AI development and success, but its arguments are weak, and the examples are too generic. I wish the article was more rigorous and less verbose. While philosophy and AI clearly have significant overlaps, this article does little to strengthen the case for their synergy.

by Sleaker on 1/19/2025, 8:14 PM

I'm confused by the premise that AI is eating software. What does that even mean, and what does it look like? AI is software, no?

by jonahbenton on 1/20/2025, 5:51 PM

The "ChatGPT is Bullshit" paper and its references make this point much more effectively.

by treksis on 1/19/2025, 9:56 PM

wishful article

by floppiplopp on 1/20/2025, 12:06 PM

Did an AI write this? Anyways, the real philosophical questions are why AI is such a subversive weapon against humanity's purpose and reason, who we need to stop to save ourselves, and how.

by initramfs on 1/19/2025, 8:52 PM

The world is becoming an algorithm.

https://en.wikipedia.org/wiki/The_Creepy_Line

Algorithms create a compression of search values, not unlike a Cartesian plane.

The question is, will more people embrace the Cartesian compression of ubiquitous internet communication?

by qrsjutsu on 1/19/2025, 8:26 PM

> The critical enterprise challenge is whether leaders will possess the self-awareness and rigor to use philosophy as a resource for creating value with AI

What the fuck. They haven't even done that with post-'90s technology in general, and it's not only because no intelligent person wants to work among them that they will fall just as short with AI. I'm still grateful they are doing a job.

But please: a dying multitude right at your feet, everything you need to save them (so you can learn even more from them) in your hands, and instead you scale images, build drones for cleaning at home and for war, and imitate, in order to replace, people who love or need their jobs.

And faking all those AI gains - deceit, self-interest and whatnot - is so ridiculously obviously just built-in linguistics that it could be read from a paper by someone who does not even speak the language. It's "just" parameters and conditional logic, cool and fancy and ready to eat up and digest almost any variation of user input, but it's nowhere even close to intelligence, let alone artificial intelligence.

Philosophy eats nothing. There are those on all fours waiting for whatever gives them status and recognition, and those who, thankfully, stay silent so as not to give those leaders more tools of power.