this sums up my thoughts on ai but like, in general
there are things it's useful for, but let's be real there are also tons of things it's actively bad for and people are treating AI like the hammer to end all hammers
if we could just use the things it's useful for? absolutely awesome, only a plus. if it was like only a few bad things? still pretty fucking awesome ngl. but imo the bad far outweighs the good
Idk I guess it's like cigarettes. Cigarettes aren't strictly negative. There are some effects that are positive. However, they're still cigarettes. The downsides massively outweigh the upsides. So, it's hard for me to examine these narrow questions like "is it useful for learning" when I think that overall they're probably cigarettes. We don't know they are for sure, but that doesn't mean we should tell everyone to smoke up because we absolutely don't know they're not.
theres a difference: cigarettes actually kinda look cool when u use them
we've found the arete of ai schools
fwiw I have sometimes found LLMs useful in limited forms for things that are annoying to find but easy to verify — e.g. at one point I was trying to find a specific primary source that I'd read a couple years ago, but the majority of the details I remembered were not incredibly easy to Google, so I asked an AI and it gave me an answer and then I went and looked up that source and it was in fact the right one. But, like, the step where I checked to make sure it hadn't lied to me was important. (I probably could have found it on my own eventually, it just would have taken way longer)
this is bait
I was Trapper and my trap got triggered by 2 Bodyguards and a Doctor 
Is this martini
Happy birthday notgrayorgrey's wife
Yeah lmao I think it had an abysmal price before deepseek entered the market
I just heard that from a Wired video
I tried to analyze the thinking capabilities of ChatGPT, deepseek and Gemini when it comes to solving BoTC games me and a friend were in. Gemini was the only one who got it right twice on setups that aren't too complicated, the other 2 just failed, and for Gemini we didn't even have to explain how BoTC works
Yeah it's a situation where you need:
- The "finding information" step to be slow
- The "verifying potentially inaccurate information" step to be fast
- The rate of accuracy of the LLM to be high enough that this saves you time
I end up with a decent number of cases where all three hold, but not everybody does (rough back-of-envelope below)
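Roughly the mental math, just to show the shape of the tradeoff (every number here is a made-up placeholder, not a measurement):
```python
# back-of-envelope: does asking an LLM save time vs. digging it up yourself?
# all of these numbers are placeholders, not measurements
t_ask = 1        # minutes to write the prompt and read the answer
t_verify = 3     # minutes to check the answer against the actual source
t_manual = 30    # minutes to find it the slow way
p_correct = 0.7  # how often the LLM actually points at the right thing

# if it's wrong you still pay the ask+verify cost and then fall back to manual
expected_with_llm = t_ask + t_verify + (1 - p_correct) * t_manual
print(f"{expected_with_llm:.0f} min expected with LLM vs {t_manual} min manual")
# -> 13 min vs 30 min, but only because verifying is cheap and the search is slow
```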
this was a rlly useful article that helped me maximize the amount of value i got from learning with LLMs
currently i'm using it to teach me direct preference optimization and it's incredibly helpful. i don't think it's very good at teaching me something from scratch; it works better as a supplemental resource to a course/youtube video/paper
typically i'll ask ChatGPT to essentially rewrite a section of a paper in terms that i already know (e.g. i'll tell it that i have limited machine learning background beyond the basics of transformer models and CNNs but have 3 years of a math degree). i will then 1. make Anki flashcards for definitions/methods that i do not know, and 2. paraphrase the rewritten paper sections back to ChatGPT and get it to score my understanding
the Anki flashcards are helpful for me because after a month i'll usually have forgotten most of the paper, but Anki makes sure that even months later i'll remember a definition. so if i want to read through the paper again it won't be hard
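if anyone wants to script the "rewrite it at my level" step instead of pasting into the chat UI, a rough sketch with the openai python client would look something like this (the model name, file path, and background blurb are just stand-ins for whatever you actually use):
```python
# sketch of the "rewrite this paper section for my background" step;
# model name, file path, and the background text are all placeholders
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

background = (
    "limited machine learning background beyond the basics of transformer "
    "models and CNNs, but 3 years of a math degree"
)
section = open("dpo_section.txt").read()  # the paper section to rewrite

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": f"Rewrite technical text for a reader with: {background}."},
        {"role": "user",
         "content": f"Rewrite this section in terms I already know:\n\n{section}"},
    ],
)
print(response.choices[0].message.content)
```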
ChatGPT works by mimicking and trying to come up with the most likely word to spout next, but it can't think. The most famous example was the stack of red-yellow-blue-green books: if you'd put the green book below the blue one, it couldn't give you the correct answer.
well, i don't think that's entirely correct. i am not a neuroscientist, just someone interested in this stuff, so if there are any professionals here they can correct me if i'm wrong.
as far as i understand, what we call "thinking" is an emergent property of billions of neurons firing away in response to various stimuli. the human brain is (partially) a prediction engine (similar to an LLM). our brain is constantly predicting its own inputs, and perception/thought/action (at an abstract level) is the process of minimizing prediction error. humans consciously understand the world in terms of discrete objects and engage in chains of thought to minimize both prediction error and compute time. LLMs do an analogous (but different) thing too - they build an internal world-model (objects, physics, people's intentions)
personally i think that in the hands of ~most people, LLMs are a net positive for both personal productivity and happiness. for example, i use Gemini to give me book recommendations - i gave it a list of books that i like, and then i got it to Deep Research 200 books and score them from 0 to 100, the formula being an arbitrary score from 0 to 50 based on how much the book aligns with my personal taste, plus the Goodreads rating times 10. i've very consistently found Gemini's book ratings to be better than any human or Amazon/Goodreads algorithm. it really shines in its ability to recommend books in diverse genres that i'll like based on the writing style and themes. previously i arbitrarily read books that a friend or an algorithm recommended, which led me to read large numbers of books in the same 2 or 3 genres. the LLM is actually able to understand exactly why i, specifically, liked a book, and recommend me books with similar abstract qualities. this has meaningfully increased my personal happiness and led me to generate better ideas
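written out, the formula i gave it is just this (taste_score is whatever 0-50 number the model assigns; the function name and example numbers are mine):
```python
# the 0-100 scoring formula, made explicit:
# taste_score = model's 0-50 judgement of fit, goodreads_rating = the 0-5 public rating
def book_score(taste_score: float, goodreads_rating: float) -> float:
    return taste_score + goodreads_rating * 10  # max 50 + 5*10 = 100

print(book_score(42, 4.1))  # e.g. 83.0 for a strong fit with a 4.1 on Goodreads
```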
there's a subset of people who have various issues for whom LLMs are very harmful. however i'm not convinced that the negative effects outweigh the increases in productivity
They're steady-state. Once trained, a model does not change. Context windows are limited pools of memory that literally store the entire conversation history and essentially prepend it to each new prompt, which makes it seem like it's learning, but it's not. Something like searching is done with a specialized tool depending on the domain: the model generates a search query for the tool based on the prompt, the results get parsed and scored for relevance, and then it spits back output.
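To make that concrete, the "memory" is just the client re-sending everything on every turn (toy sketch, not any vendor's actual API):
```python
# why a chat model "seems" to remember: the client resends the whole
# conversation with every request; the frozen model never changes
CONTEXT_LIMIT = 20  # max messages that fit in the pretend context window
history = []

def fake_model(messages):
    # stand-in for a call to a real, frozen model
    return f"(reply based only on the last {len(messages)} messages)"

def ask(user_message):
    history.append({"role": "user", "content": user_message})
    reply = fake_model(history[-CONTEXT_LIMIT:])  # entire history, truncated to fit
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("remember that my favorite book is Dune"))
print(ask("what's my favorite book?"))  # only "remembers" because Dune got re-sent
```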
When you asked it for book recommendations, it's very important to understand that it did not read anything. It either used a specialized tool (unlikely for this use case) or it used the model. Assuming it used the model, it transformed your input into tokens, probably ran it through some kind of semantic analysis algorithm to refine it into something easier to process, and produced an output that was statistically likely to seem like the kind of response a similar input would have received in its training data. That's all that's going on. I'm not a neuroscientist, but I can tell you fairly confidently that this process is distinct from the way humans think (someone once tried to have ChatGPT argue against me on this point, but failed to realize ChatGPT was agreeing with me while framing its response like it was a dunk).
Anyway, you're using Gemini, and famously Google has an agreement with reddit which led to some funny outputs, so there's a chance that you basically got book recommendations from redditors who had used a reasonable intersection of keywords when describing their tastes. Again, there's randomness in the output, so it also may have just made things up some amount of the time. The point I'm really trying to drive home here is that the LLM doesn't actually have an opinion or know anything about you or your tastes. It fundamentally cannot. It is, at most, using some amount of personalized frontmatter to affect output weights. It is not coherent. It's just pretty reliable at seeming coherent.
for happiness i pretty strongly disagree. For productivity… I agree in limited scenarios, but I know too many people who are just lowkey mentally cooked as a result of chatgpt. My dad used chatgpt to divorce my mom because he couldn't do it himself lol. and chatgpt to "help" with thanksgiving dinner. and to tell me my grandma died. etc. etc. he is a particularly unique person but like
it's lowkey large-scale brainrot, like the equivalent of wearing a posture brace permanently. The short-term effect is better posture, but in the long term you won't be able to stand up straight without it.
I don't doubt you are using it in a way that isn't brainrot-inducing and is actually helpful for learning (I feel my use falls under the same category!) but generalizing to "most people" is where i disagree