JP Hastings-Spital 2024-06-20 08:26:52 +01:00
parent 5bf9480c55
commit 078ea46828
3 changed files with 112 additions and 0 deletions

@@ -0,0 +1,110 @@
---
title: Bullshit, not hallucination
date: "2024-06-20T07:20:01Z"
emoji: "\U0001F4A9"
publishDate: "2024-06-08T00:00:00Z"
bookmarkOf: https://link.springer.com/article/10.1007/s10676-024-09775-5
references:
  bookmark:
    url: https://link.springer.com/article/10.1007/s10676-024-09775-5
    type: entry
    name: ChatGPT is Bullshit (from Ethics and Information Technology)
    summary: 'Ethics and Information Technology - Recently, there has been considerable
      interest in large language models: machine learning systems which produce human-like
      text and dialogue. Applications of...'
    author: Slater, Joe
tags:
  - AI
  - Philosophy
---
A well-reasoned paper on why AI-generated falsehoods should be called bullshit, not hallucinations.
The rise of generative AI must be an absolute joy for philosophers in that space; what a wealth of new concepts and ways to consider what it means to be human! The authors of this paper seem to have had some fun _and_ provided an excellent new viewpoint too.
In short: if bullshit is delivering information without concern for its truth, then ChatGPT (and its ilk) are bullshitters. All those who use LLM output without working to correct any & all inaccuracies are, by the transitive property of “(not) being concerned about the truth”, also bullshitters.
### Highlights
> at minimum, the outputs of LLMs like ChatGPT are soft bullshit:
I am fully convinced by the arguments put forward for this in the paper, so I’ll be referring to erroneous LLM output as ~~hallucination~~ bullshit from now on.
---
> ChatGPT is a bullshit machine
---
> The very same process occurs when its outputs happen to be true.
An excellent point: if an inaccuracy in an LLM’s output is called a hallucination, why should an accurate output be called anything different?
---
> these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as _bullshit_ in the sense explored by Frankfurt (On Bullshit, Princeton, 2005):
---
> because they are designed to produce text that _looks_ truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit.
---
> Descriptions of new technology, including metaphorical ones, guide policymakers and the public’s understanding of new technology;
This is much more accurate and relevant than, I think, many people understand; I’m _certain_ it’s why the word “hallucination” was originally adopted (by AI companies, I’d wager) to describe the easily detected falsehoods LLMs produce.
In the word’s widespread pre-LLM use, a person is _affected by_ hallucinations (desirably or otherwise); they are external, and can sometimes be “not your fault”. “Bullshit”, by contrast, is intrinsic: _you_ chose to bullshit when asked a simple question.
Neither of these words really describes what’s going on inside an LLM when a falsehood arises, but the associations of “hallucination” are much more favourable to the profitability of an LLM company than “bullshit” could ever be.
---
> We draw a distinction between two sorts of bullshit, which we call hard and soft bullshit, where the former requires an active attempt to deceive the reader or listener as to the nature of the enterprise, and the latter only requires a lack of concern for truth.
---
> we call bullshit on ChatGPT.
---
> ChatGPT may indeed produce hard bullshit: if we view it as having intentions (for example, in virtue of how it is designed), then the fact that it is designed to give the impression of concern for truth qualifies it as attempting to mislead the audience about its aims, goals, or agenda.
---
> The problem here isn’t that large language models hallucinate, lie, or misrepresent the world in some way. It’s that they are not designed to represent the world at all; instead, they are designed to convey convincing lines of text.
---
> I entered a pun competition and because I really wanted to win, I submitted ten entries. I was sure one of them would win, but no pun in ten did.
I love that the authors worked this superb Dad joke into this paper 😂
---
> Frankfurt understands bullshit to be characterized not by an intent to deceive but instead by a reckless disregard for the truth.
---
> ### Bullshit (general)
>
> Any utterance produced where a speaker has indifference towards the truth of the utterance.
>
> ### Hard bullshit
>
> Bullshit produced with the intention to mislead the audience about the utterer’s agenda.
>
> ### Soft bullshit
>
> Bullshit produced without the intention to mislead the hearer regarding the utterer’s agenda.
---
> whether or not ChatGPT has agency, its creators and users do. And what they produce with it, we will argue, is bullshit.
---
> if something is bullshit to start with, then its repetition “is bullshit as he \[or it\] repeats it, insofar as it was originated by someone who was unconcerned with whether what he was saying is true or false”
---
> Calling these inaccuracies bullshit rather than hallucinations isn’t just more accurate (as we’ve argued); it’s good science and technology communication in an area that sorely needs it.

@@ -0,0 +1 @@
{"interactions":[{"guid":"webmentions.io#1835164","emoji":"♥️","url":"https://www.jvt.me/mf2/2024/06/eqh14/","author":{"name":"Jamie Tanna","url":"https://www.jvt.me"},"timestamp":"2024-06-19T07:50:00+01:00"}]}

@@ -0,0 +1 @@
{"interactions":[{"guid":"webmentions.io#1834981","emoji":"♥️","url":"https://bsky.app/profile/byjp.me/post/3kv6m2uq6uc2y#liked_by_did:plc:wl566elrfi4b45iamk5v5zlu","author":{"name":"Adrien Lemaire ドリ","url":"https://bsky.app/profile/tagu.fr"},"timestamp":"2024-06-18T07:18:54Z"},{"guid":"webmentions.io#1834984","emoji":"♥️","url":"https://bsky.app/profile/byjp.me/post/3kv6m2uq6uc2y#liked_by_did:plc:rhvbdduyfafpigc72blx5mqz","author":{"name":"Wallpaper𝕏🆙🦋","url":"https://bsky.app/profile/yuplovewallpq.bsky.social"},"timestamp":"2024-06-18T07:52:38Z"}]}