30000th poster gets a cookie (cookie thread (Part 7)) (Part 10)

this is indeed martini

1 Like

or figure out ways to solve problems within ai. if you name it you can solve it

or social media. both can be pretty similar. sorry i went slightly off topic it is the mashup of ai and social media related posts

I’m not happy with the status quo regarding social media. Blaming it for the much larger increase in suicidality does not seem to hold up (assuming I’m remembering the data correctly, which could be wrong!). Social media use definitely increases suicidality in some areas but also seems to lower it in others, and it’s difficult to quantify the net effect. Banning it for reasons related to suicidality may not make sense. I also think that social media can be really unhealthy and mentally corrosive. My use of it is pretty limited for that reason.

I’m not happy with ChatGPT and similar AI bots. I also don’t think the technology should be banned wholesale, given things like protein folding. We have instances of very real harm, and largely speculative benefits, for AI chat bots. It’s a difficult question whether the convenience and speed of using them for search is actually beneficial, because the output is disconnected from its sources even when it finds them (it will still bullshit), and there are probably costs to not doing that work yourself. A lot of what is touted as the best practical use cases of gen AI chat bots like ChatGPT is of unclear benefit, because it’s usually offloading reasoning to a machine that does not reason (caveat: I haven’t delved into what ā€œreasoningā€ means in reasoning models, but even those can be confounded in tests they should be able to pass if they could actually reason, so I have my doubts). And that’s in the best case scenarios.

How Marissa uses ChatGPT to study could be fine, but a lot of students seem to be using it to do a large amount of their work. Yes, people have cheated forever, and you could probably pay someone to write an essay for you for as long as we’ve had universities, but the difference is the scale and ease of access to offloading the parts of school that would actually educate you. How much undermining of education does ChatGPT have to do before it offsets any potential gains in the same domain? Even if you could replace junior developers with Claude, is it actually good if a lot of companies do that and we suddenly don’t have enough new senior developers?

Like I’m not even trying to defend social media. It’s bad if we’re blaming it for a lot of real harm it isn’t causing; that’s the point I was trying to make more than anything. To me, saying the productivity gains from stuff like ChatGPT are worth the harm is frustrating because I’m skeptical that the underlying utilitarian argument is correct (or that it would lead to a better world even if it does increase productivity, because the specifics matter). It’s also frustration at magical thinking about AI and other bullshit I don’t want to get into. So, it’s different because I think it’s an apples to oranges comparison in the first place, and I’m more willing to throw out machines that can automate systemic and individual harm at scale than I am communication between people on the internet.

1 Like

blud missed the to me part of the post is this man truly blind

towards myself btw. but in the meantime this is lowkey history coded i should lock into history

i like throwing long posts into it and then telling it to simplify and summarize 6 times in a row and using whatever it spits out to strawman the original point

its really funny

3 Likes

pro tip: read slowly. i chronically do this

I guess the main thing I’m coming back to is that I think you were extremely harsh with T3 and reached for emotional digs (accusing the post of being callous, telling him to say that to their parents, etc.) at something that’s ultimately a question of, like, moral calculus, while having made essentially the same claim (ā€œthe benefits of something can outweigh the costs, even if those costs are suicidesā€) about something you just happen to think is more useful

2 Likes

I’m not particularly interested in having a conversation about this cost-benefit analysis, because it’s not that interesting to me (it’s largely dependent on how the technology develops), but I think it’s important to recognize that this is a sliding scale, and that this:

is not a logical reaction to ā€œthe aggregate benefit of LLMs will probably save livesā€, a claim which is logically justifiable by somebody with perfectly fine regard for human life, even if you don’t believe the argument

5 Likes

I have thoughts on the new HSR character’s design and its implications for the metagame.

2 Likes

In the social media example, it was a direct comparison of suicidality vs. suicidality, the same measurement on both sides. It’s like a trolley problem with one person tied to each side of the track: pulling the lever just changes who gets run over.

In the LLM example, we have people who have died on one side and productivity gains on the other. These are not directly comparable. Saying, as a general matter, that an increase in productivity is morally equivalent to preventing someone from dying is just reckless; it’s trivial to disprove with counterexamples, since a productivity gain in a given domain could plausibly have positive, negative, or neutral effects on life expectancy. The specifics matter. Mining cobalt with children is more productive than using, say, adults. It’s also more productive to not worry about their safety or how long they’re working. Smartphones are made using cobalt from mines operated in these conditions, but is that justifiable because smartphones make us more productive? How much harm can we justify doing to the children of the DRC because we can blithely say they increase productivity, likely elsewhere in the world? How much money is it justifiable to divert from charitable organizations that distribute mosquito nets to prevent the spread of malaria to OpenAI because of the potential productivity gains they promise? How much of the global south can be sacrificed now for marginal productivity gains so those of us in the rest of the world can be better off in the future?

I don’t think it matters if you sincerely believe that throwing away lives now is worth increased productivity, because the latter often does not translate into the former, and even when it does, it may not benefit the same category of people it harmed. ā€œThe lives lost now are okay because the number of Snickers produced by the Mars Bar company will go up next quarter.ā€ Sincerity of belief in that statement does not make it a moral one without a substantial amount of empirical evidence showing that yeah, that is likely to save more lives, and even then it could be ethically dubious. ā€œIt’s okay if 100 brown children die each year if 1000 old white people live for another 10 years, because those white people have better education, so they are going to be more productive, which saves even more lives if you think about it.ā€ Someone could sincerely believe that last statement, and it may be perfectly logically justifiable to them given their beliefs, but it doesn’t follow that my reaction can’t be ā€œyou’re a eugenicist asshole.ā€ I don’t think it’s good to just assert that productivity increases are worth the loss of lives without giving that a lot of due consideration and serious skepticism, and he did not come off like he had given it much of any.

1 Like

it feels unfair to draw a connection between believing AI investment could potentially save more lives than it takes and, like, literal eugenics

you have to see how far you’re stretching it if you’re doing this. you can argue why LLMs are or are not going to help people or how they are or are not hurting people but talking about cobalt mines and eugenics to prove your point is insane

3 Likes

The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create ā€œsafe AGIā€ that is ā€œbeneficial for all of humanity.ā€ We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like ā€œAGIā€ cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of ā€œsafetyā€ and ā€œbenefiting humanityā€ to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI.

— The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence | First Monday

in my defense, I am referencing a paper

AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAa

WAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

ok why are you babying im the baby

1 Like

is this the appropriate type of babying according to your styleguide

1 Like


My styleguide dictates that any babying should be done in the form of emotional manipulation and gaslighting.

2 Likes