30000th poster gets a cookie (cookie thread (Part 7)) (Part 10)

this sums up my thoughts on ai but like, in general
there are things it’s useful for, but let’s be real there are also tons of things it’s actively bad for and people are treating AI like the hammer to end all hammers
if we could just use the things it’s useful for? absolutely awesome, only a plus. if it were only a few bad things? still pretty fucking awesome ngl. but imo the bad far outweighs the good

4 Likes

Idk I guess it’s like cigarettes. Cigarettes aren’t strictly negative. There are some effects that are positive. However, they’re still cigarettes. The downsides massively outweigh the upsides. So, it’s hard for me to examine these narrow questions like “is it useful for learning” when I think that overall they’re probably cigarettes. We don’t know they are for sure, but that doesn’t mean we should tell everyone to smoke up, because we absolutely don’t know they’re not.

3 Likes

there’s a difference: cigarettes actually kinda look cool when u use them

2 Likes

we’ve found the arete of ai schools

1 Like

fwiw I have sometimes found LLMs useful in limited forms for things that are annoying to find but easy to verify – e.g. at one point I was trying to find a specific primary source that I’d read a couple years ago, but the majority of the details I remembered were not incredibly easy to Google, so I asked an AI and it gave me an answer and then I went and looked up that source and it was in fact the right one. But, like, the step where I checked to make sure it hadn’t lied to me was important. (I probably could have found it on my own eventually, it just would have taken way longer)

4 Likes

this is bait

5 Likes

I was Trapper and my trap got triggered by 2 Bodyguards and a Doctor :wowee:

2 Likes

Is this martini

1 Like

Happy birthday notgrayorgrey’s wife

Yeah lmao I think it had an abysmal price before deepseek entered the market

I just heard that from a wired video

I tried to analyze the thinking capabilities of chatGPT, deepseek and Gemini when it comes to solving BoTC games me and a friend were in. Gemini was the only one that got it right twice on setups that aren’t too complicated; the other 2 just failed, and for Gemini we didn’t even have to explain how BoTC works

1 Like

Yeah it’s a situation where you need:

  1. The “finding information” step to be slow
  2. The “verifying potentially inaccurate information” step to be fast
  3. The rate of accuracy of the LLM to be high enough that this saves you time

I end up with a decent number of cases where all three hold, but not everybody does (rough math below)
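
a quick back-of-the-envelope version of that tradeoff (all the numbers are made-up placeholders, just to show the shape of it):

```python
def llm_saves_time(t_find: float, t_verify: float, accuracy: float) -> bool:
    """t_find: minutes to find the answer yourself.
    t_verify: minutes to check an LLM answer.
    accuracy: chance the LLM answer is actually right."""
    # If the LLM is wrong, you pay the verification cost and then have to
    # go find the answer yourself anyway.
    expected_with_llm = t_verify + (1 - accuracy) * t_find
    return expected_with_llm < t_find

print(llm_saves_time(t_find=30, t_verify=2, accuracy=0.8))  # True: slow to find, fast to check
print(llm_saves_time(t_find=3, t_verify=2, accuracy=0.5))   # False: finding it yourself is already fast
```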

5 Likes

this was a rlly useful article that helped me maximize the amount of value i got from learning with LLMs

currently i’m using it to teach me direct preference optimization and it’s incredibly helpful. i don’t think it’s very good at teaching me something from scratch; it works better as a supplemental resource to a course/youtube video/paper

4 Likes

typically i’ll ask ChatGPT to essentially rewrite a section of a paper in terms that i already know (e.g. i’ll tell it that i have limited machine learning background beyond the basics of transformer models and CNNs, but have 3 years of a math degree). i will then 1. make Anki flashcards for definitions/methods that i do not know, and 2. paraphrase the rewritten paper sections back to ChatGPT and get it to score my understanding

the Anki flashcards are helpful for me because after a month i’ll usually have forgotten most of the paper, but Anki makes sure that even months later i’ll remember a definition. so if i want to read through the paper again it won’t be hard
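
tangent, but if anyone wants to script the flashcard step: the genanki Python library can build decks you import straight into Anki. a minimal sketch (the example card is a placeholder, not from an actual paper):

```python
import genanki

# Basic front/back note type. The IDs are arbitrary fixed integers;
# genanki uses them so re-imports update cards instead of duplicating them.
model = genanki.Model(
    1607392319,
    'Paper Definitions',
    fields=[{'name': 'Term'}, {'name': 'Definition'}],
    templates=[{
        'name': 'Card 1',
        'qfmt': '{{Term}}',
        'afmt': '{{FrontSide}}<hr id="answer">{{Definition}}',
    }],
)

deck = genanki.Deck(2059400110, 'Paper flashcards')
deck.add_note(genanki.Note(
    model=model,
    fields=['DPO', 'Direct Preference Optimization: fine-tunes a model directly '
                   'on preference pairs instead of training a separate reward model'],
))
genanki.Package(deck).write_to_file('paper_flashcards.apkg')
```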

4 Likes

ChatGPT works by mimicking and trying to come up with the most likely word to spout next, but it can’t think. The most famous example was the stack of red-yellow-blue-green books: if you’d put the green book below blue, it couldn’t give you the correct answer.

2 Likes

well, i don’t think that’s entirely correct. i am not a neuroscientist, just someone interested in this stuff, so if there are any professionals here they can correct me if i’m wrong.

as far as i understand, what we call “thinking” is an emergent property of billions of neurons firing away in response to various stimuli. the human brain is (partially) a prediction engine (similar to an LLM). our brain is constantly predicting its own inputs, and perception/thought/action (at an abstract level) is the process of minimizing prediction error. humans consciously understand the world in terms of discrete objects and engage in chains of thought to minimize both prediction error and compute time. LLMs do an analogous (but different) thing too - they build an internal world-model (objects, physics, people’s intentions)
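
to make “minimizing prediction error” concrete, here’s a toy numeric cartoon of the idea (a single scalar prediction nudged toward noisy input - obviously nothing like a real brain or LLM):

```python
import random

# One scalar prediction, repeatedly nudged toward noisy stimuli by
# gradient steps on squared prediction error.
prediction = 0.0
learning_rate = 0.1
for _ in range(200):
    stimulus = 5.0 + random.gauss(0, 0.5)  # noisy input around a "true" value of 5
    error = stimulus - prediction          # prediction error
    prediction += learning_rate * error    # gradient step on (error ** 2) / 2

print(round(prediction, 2))  # settles near 5
```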

1 Like

personally i think that in the hands of ~most people, LLMs are a net positive for both personal productivity and happiness. for example, i use Gemini to give me book recommendations - i gave it a list of books that i like, and then i got it to Deep Research 200 books and score them from 0 to 100, the formula being an arbitrary score from 0 to 50 based on how much the book aligns with my personal taste, plus the Goodreads rating times 10.

i’ve very consistently found Gemini’s book ratings to be better than any human or Amazon/Goodreads algorithm. it really shines in its ability to recommend books in diverse genres that i’ll like based on the writing style and themes. previously i arbitrarily read books that a friend or an algorithm recommended, which led me to read large numbers of books in the same 2 or 3 genres. the LLM is actually able to understand exactly why i, specifically, liked a book, and recommend me books with similar abstract qualities. this has meaningfully increased my personal happiness and led me to generate better ideas
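
the scoring formula from that setup, spelled out (the candidate books and taste scores here are made-up placeholders, not real Gemini output):

```python
def book_score(taste_alignment: float, goodreads_rating: float) -> float:
    """taste_alignment: 0-50, assigned by the LLM based on my stated taste.
    goodreads_rating: the usual 0-5 Goodreads average."""
    return taste_alignment + goodreads_rating * 10  # total is 0-100

# (title, taste alignment, Goodreads rating) - hypothetical examples
candidates = [
    ('Book A', 42, 4.1),
    ('Book B', 30, 4.6),
    ('Book C', 48, 3.2),
]
for title, taste, rating in sorted(candidates, key=lambda b: -book_score(b[1], b[2])):
    print(f'{title}: {book_score(taste, rating):.0f}/100')
```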

there is a subset of people with various issues for whom LLMs are very harmful. however i’m not convinced that the negative effects outweigh the increases in productivity

2 Likes

They’re steady-state. Once trained, a model does not change. Context windows are limited pools of memory that literally store the entire conversation history and essentially prepend it to each new prompt, which makes it seem like it’s learning, but it’s not. Something like searching is done with a specialized, domain-dependent tool: the system generates a search query for the tool based on the prompt, parses the output, scores it for relevance, and then spits back a response.
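
For the curious, that “prepend the whole history” pattern looks roughly like this in code (using the OpenAI Python client purely as one concrete example; the client and model name are my assumptions, not something named above):

```python
from openai import OpenAI

client = OpenAI()
history = [{'role': 'system', 'content': 'You are a helpful assistant.'}]

def ask(user_text: str) -> str:
    # The model itself is stateless: every turn re-sends the entire history,
    # which is what creates the illusion of memory.
    history.append({'role': 'user', 'content': user_text})
    reply = client.chat.completions.create(model='gpt-4o-mini', messages=history)
    answer = reply.choices[0].message.content
    history.append({'role': 'assistant', 'content': answer})
    return answer
```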

When you asked it for book recommendations, it’s very important to understand that it did not read anything. It either used a specialized tool (unlikely for this use case) or it used the model. Assuming it used the model, it transformed your input into tokens, probably ran it through some kind of semantic analysis algorithm to refine it into something easier to process, and produced an output that was statistically likely to seem like the kind of response a similar input would have received in its training data. That’s all that’s going on. I’m not a neuroscientist, but I can tell you fairly confidently that this process is distinct from the way humans think (someone once tried to have ChatGPT argue against me on this point, but failed to realize ChatGPT was agreeing with me while framing it like its response was a dunk).

Anyway, you’re using Gemini, and famously Google has an agreement with reddit which led to some funny outputs, so there’s a chance that you basically got book recommendations from redditors who had used a reasonable intersection of keywords when describing their tastes. Again, there’s randomness in the output, so it also may have just made things up some of the time. The point I’m really trying to drive home here is that the LLM doesn’t actually have an opinion or know anything about you or your tastes. It fundamentally cannot. It is, at most, using some amount of personalized frontmatter to affect output weights. It is not coherent. It’s just pretty reliable at seeming coherent.

4 Likes

for happiness i pretty strongly disagree. For productivity… I agree in limited scenarios but I know too many people who are just lowkey mentally cooked as a result of chatgpt. My dad used chatgpt to divorce my mom because he couldn’t do it himself lol. and chatgpt to “help” with thanksgiving dinner. and to tell me my grandma died. etc. etc. he is a particularly unique person but like

it’s lowkey large-scale brainrot, like the equivalent of wearing a posture brace permanently. The short-term effect is better posture, but in the long term you won’t be able to stand up straight without it.

I don’t doubt you are using it in a way that isn’t brainrot-inducing and is actually helpful for learning (I feel my use falls under the same category!) but generalizing to “most people” is where i disagree

5 Likes