this is indeed martini
or figure out ways to solve problems within ai. if you name it you can solve it
or social media. both can be pretty similar. sorry i went slightly off topic, it is the mashup of ai and social media related posts
I'm not happy with the status quo regarding social media. Blaming it for the much larger increase in suicidality does not seem to hold up (assuming I'm remembering the data correctly, which could be wrong!). Social media use definitely increases suicidality in some areas but also seems to lower it in others, and it's difficult to quantify the net effect. Banning it for reasons related to suicidality may not make sense. I also think that social media can be really unhealthy and mentally corrosive. My use of it is pretty limited for that reason.
I'm not happy with ChatGPT and similar AI bots. I also don't think the technology should be banned wholesale, like with protein folding. We have instances of very real harm, and largely speculative benefits for AI chatbots. It's a difficult question whether the convenience and speed of using one for search is actually beneficial, because the output is disconnected from sources even when it finds them (it will still bullshit), and there are probably costs to not doing that work yourself. For a lot of what is touted as the best practical use cases of gen AI chatbots like ChatGPT, it's unclear whether they're really all that beneficial, because they usually offload reasoning to a machine that does not reason (caveat here is that I haven't delved into what "reasoning" means in reasoning models, but even those can be confounded by tests they should be able to pass if they could actually do some kind of reasoning, so I have my doubts). And that's in the best-case scenarios.
How Marissa uses ChatGPT to study could be fine, but a lot of students seem to be using it to do a large amount of the work. Yes, people have cheated forever, and you've probably been able to pay someone to write an essay for you for as long as we've had universities, but the difference is the scale and ease of access to offloading the parts of school that would actually educate you. How much undermining of education does ChatGPT have to do before it offsets any potential gains in the same domain? Even if you could replace junior developers with Claude, is it actually good if a lot of companies do that and we suddenly don't have enough new senior developers?
Like, I'm not even trying to defend social media. It's bad if we're blaming it for a lot of real harm it isn't causing. That's the point I was trying to make more than anything when it came to that. To me, saying the productivity gains from stuff like ChatGPT are worth the harm is frustrating because I'm skeptical that the underlying utilitarian argument is correct (or would lead to a better world even if it does increase productivity, because the specifics matter). It's also frustration at magical thinking about AI and other bullshit I don't want to get into. So, it's different because I think it's an apples-to-oranges comparison in the first place, and I'm more willing to throw out machines that can automate systemic and individual harm at scale than I am communication between people on the internet.
blud missed the "to me" part of the post is this man truly blind
towards myself btw. but in the meantime this is lowkey history coded i should lock into history
i like throwing long posts into it and then telling it to simplify and summarize 6 times in a row and using whatever it spits out to strawman the original point
it's really funny
pro tip: read slowly. i chronically do this
I guess the main thing I'm coming back to is that I think you were extremely harsh with T3 and dug in with emotional claims (accusing the post of being callous, telling him to say that to their parents, etc.) at something that's ultimately a question of, like, moral calculus, while having made essentially the same claim ("the benefits of something can outweigh the costs, even if those costs are suicides") about something you just happen to think is more useful
I'm not particularly interested in having a conversation about this cost-benefit analysis, because it's not that interesting to me and is largely dependent on how the technology develops, but I think it's important to recognize that this is a sliding scale, and that this:
is not a logical reaction to "the aggregate benefit of LLMs will probably save lives", a claim which is logically justifiable by somebody with perfectly fine regard for human life, even if you don't believe the argument
I have thoughts on the new HSR character's design and its implications for the metagame.
In the social media example, it was a direct comparison of a measurement of suicidality vs. suicidality. It's like a trolley problem with one person tied to each side of the track. Pulling the lever just affects who gets run over.
In the LLM example, we have people who have died and productivity gains. These are not directly comparable. Saying, as a general claim, that an increase in productivity is morally equivalent to preventing someone from dying is just reckless. It's trivial to disprove with a counterexample, since productivity gains exist in domains with positive, negative, and neutral effects on life expectancy. The specifics matter. Mining cobalt with children is more productive than using, say, adults. It's also more productive to not worry about safety or how long they're working. Smartphones are made using cobalt from mines operated under these conditions, but is that justifiable because smartphones make us more productive? How much harm can we justify doing to the children of the DRC because we can blithely say they increase productivity, likely elsewhere in the world? How much money is it justifiable to divert from charitable organizations that distribute mosquito nets to prevent the spread of malaria to OpenAI because of the potential productivity gains they promise? How much of the global south can be sacrificed now for marginal productivity gains so those of us in the rest of the world can be better off in the future?
I don't think it matters if you sincerely believe that throwing away lives now is worth increased productivity, because the latter often does not translate to the former, and even when it does, it may not benefit the same category of people it harmed. "The lives lost now are okay because the number of Snickers produced by the Mars Bar company will go up next quarter." Sincerity of belief in that statement does not make it a moral one without a substantial amount of empirical evidence to show that, yeah, it is likely to save more lives, and even then it could be ethically dubious. "It's okay if 100 brown children die each year if 1000 old white people live for another 10 years, and because those white people have better education they are going to be more productive, which saves even more lives if you think about it." Someone could sincerely believe that last statement, and it may be perfectly logically justifiable to them given their beliefs, but it doesn't follow that my reaction can't be "you're a eugenicist asshole." I don't think it's good to just assert that productivity increases are worth the loss of lives without giving that a lot of due consideration and serious skepticism, and he did not come off like he had given it much of any.
it feels unfair to draw a connection between believing AI investment could potentially save more lives than it takes and, like, literal eugenics
you have to see how far you're stretching it if you're doing this. you can argue why LLMs are or are not going to help people or how they are or are not hurting people but talking about cobalt mines and eugenics to prove your point is insane
The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create "safe AGI" that is "beneficial for all of humanity." We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like "AGI" cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of "safety" and "benefiting humanity" to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI.
in my defense, I am referencing a paper
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAa
WAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ok why are you babying im the baby
is this the appropriate type of babying according to your styleguide
My styleguide dictates that any babying should be done in the form of emotional manipulation and gaslighting.
