30000th poster gets a cookie (cookie thread (Part 7)) (Part 10)

So, like, why do these things not apply to AI? Why is this one a cost-benefit analysis where the research is unclear, while T3 is expressing unacceptably callous disregard for human life by trying to make the same analysis, when AI is way newer and it’s way harder to get confident research on its effects rather than a bunch of individual case studies?

2 Likes

yeah you’re probably correct about the analogy

i am not going to give all my money to global health and development charities, because i am selfish. also i have no money to give

The social media case studies are just as horrific, often much more so, and nearly normal these days, and the benefits of mass social media are just about equally characterisable as frivolous

I guess I’m saying that the best applications of AI and similar technologies are being de-prioritized in favor of applications that have a lot of very real negative externalities. An ML model for protein folding never told someone to kill themselves (probably).

1 Like


THINK AGAIN

3 Likes

oh youbie my friend i love plenty of stories and keep it entirely to myself

2 Likes

(This is an XRD structure so no it didn’t)

JOKE’S ON YOU, I DON’T GET WHAT THAT MEANS

well, i’m not sure that i agree that useful AI applications are being deprioritized… there are enormous amounts of government funding and private investment going to ‘useful’ AI applications

Oh wait

I think that the applications have negative externalities, but that doesn’t mean they’re net negatives. Whether or not they are is still super up in the air, and I think you’re being overly decisive in calling them net negatives on the basis of a couple of case studies and very early research on cognitive offloading

is there some smaller percent of your income that would feel more achievable than ‘literally all of it’? (I guess if you have literally 0 income this would be $0 but it works if you have small but nonzero income)

Same

I think people who talk about AI like this tend to be kind of in a loop that goes roughly:

  1. These applications aren’t at all useful, so any cost is too great
  2. The cost is too great, so any useful applications don’t matter

And I think it’s silly to think we know anywhere near enough about the applications and the cost of “AI in general” or “LLMs” or “fine-tuned LLMs that are sold as general-purpose assistants, like ChatGPT”. People on social media tend to make very confident declarations about the applications and the cost of this technology, but you can see people saying similar things about every new technology, no matter what use or cost it ends up having

well, i have limited income (i am a student), but i definitely have some excess money that i spend on frivolous stuff

and on a conscious level i understand that donating the money is clearly better than buying a Celsius and a protein bar every day, but that doesn’t really translate into action

And the reason I keep bringing up ESMFold and RFDiffusion is that models invested in for “frivolous” purposes, these LLMs and image diffusion networks and shit, can have unexpected applications that end up making them SUPER worth it, and any developing field has lots of cases like this

1 Like

what would need to be different for it to translate into action?

And I think that most of the things people say about AI can be translated pretty directly into something you could say about computers or social media in the early days and sound smart and substantiated.

And you could link news stories of parents attributing their children’s suicide to online forums, of which there are many, and point out that forums are mostly used for dumb arguments and games, rarely have serious applications, and when they do, those could be replaced by in-person networks, and you could make the exact same points about the internet that you make about AI. And you could sound smart. But I don’t think you’d be right. It’d be a premature judgement based on insufficient evidence.

2 Likes

I dunno I guess I just also don’t really have the brain circuits that make me think about questions like “was social media a mistake?”, I’m not very into counterfactuals. I don’t really care whether it was a mistake.

Maybe I care about whether the government should ban/restrict it (I don’t think they should to any meaningful degree); maybe I care about whether I should use it (but the type of arguments people use are typically unconvincing); I care about what we should do to improve/replace it (but “everyone stops using it altogether” is not a plausible option).

But I don’t really care about whether it’s Good or Bad. I don’t care whether it’s a Mistake. That’s not a meaningful object to me

2 Likes

thank you for giving this kind of insight though

i think it’s somewhat meaningful, because it could help the people working on these kinds of things think about the possibility of things going wrong with a future invention involving ai