yeah that’s fair. i have spent frankly an inordinate amount of time reading about it, reading scholarly papers about it, and i took a class on machine learning and neural networks. the core algorithms modern LLMs rely on date back decades and haven’t fundamentally changed. i have a pretty good idea of how chatgpt works, excluding the part related to conversational context.

however, i still think summarization is an area ripe for hallucination, given that its output generation is stochastic: it samples the next token from a pool of likely candidates. so it irks me to say it’s doing any kind of synthesis at all; it isn’t. it’s a seeming of synthesis, a semblance of coherence. it fundamentally does not do what humans do when we synthesize information.

i also think that divorcing someone even further from actual sources is harmful, and that its authoritative tone and reinforcement of biases are harmful. and if a lot of academic literature was wrong about something for years and years and we only recently corrected it, there could be a huge corpus of just plain wrong text it has some probability of pulling from. in that case it could easily be more likely to give the wrong answer. etc. etc. etc.
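
to be concrete about the “pool of likely candidates” part, here’s a minimal, purely illustrative sketch of stochastic next-token sampling. the vocabulary and the logit numbers are made up, and real models sample over tens of thousands of tokens with extra filtering (top-k, nucleus, etc.), but the core step of softmax over scores followed by a random draw looks roughly like this:

```python
import numpy as np

# toy vocabulary and made-up "logits" (scores a model might assign
# to each candidate next token) -- purely illustrative numbers
vocab = ["Paris", "Lyon", "France", "the", "banana"]
logits = np.array([4.0, 2.0, 1.5, 0.5, -3.0])

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample one token index from a softmax over the logits."""
    rng = rng or np.random.default_rng()
    scaled = logits / temperature          # temperature reshapes the distribution
    probs = np.exp(scaled - scaled.max())  # softmax, shifted for numerical stability
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

# run it a few times: the highest-scoring token usually wins,
# but lower-probability tokens can and do get picked
for _ in range(5):
    idx = sample_next_token(logits, temperature=1.0)
    print(vocab[idx])
```

the point of the sketch is just that the output is a draw from a distribution, not a lookup of a verified fact, which is why the same prompt can yield different (and sometimes wrong) continuations.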