• 2 Posts
  • 77 Comments
Joined 11 months ago
Cake day: August 8th, 2023


  • Some of it has to do with CAFE standards using vehicle footprint to determine the target MPG. Some of it is because of better safety standards. Some of it is just because that’s what a certain portion of the market wants, and the profit margins on the large vehicles are higher, so they spend more money marketing them (creating more demand).









  • I’ve tried a couple of rolling distros (including Arch), and they always “broke” after ~6 months to a year. Both times it was an update messing something up with my proprietary GPU drivers, IIRC. And both times I just installed a different distro, because it probably would’ve taken me longer to figure out what the issue was and fix it. I’m currently just using Debian stable, lol.






  • AfD is far right. They are ethno-nationalists who believe only ethnic Germans belong in Germany. One of their leaders has defended the Nazi SS. They have discussed “remigrating” German citizens out of Germany. How do you compromise with people who would like to carry out an ethnic cleansing? Only forcibly relocate Muslims for now, and wait until next year to expel the Jews?

    Most far-right politicians do not debate or operate politically in good faith. IDK about the people who vote for them. I think it usually takes years of slow progress for people to move away from extremist positions, and it takes a change in their environment to start the process (a new social circle, life experiences, media consumption habits, etc.).


  • A lot of the “elites” (OpenAI board, Thiel, Andreessen, etc.) are on the effective-accelerationism grift now. The idea is to disregard all negative effects of pursuing technological “progress,” because techno-capitalism will solve all problems. They support burning fossil fuels as fast as possible because that will enable “progress,” which will solve climate change (through geoengineering, presumably). I’ve seen some accelerationists write that it would be OK if AI destroyed humanity, because it would be the next evolution of “intelligence.” I dunno if they’ve fallen for their own grift or not, but it’s obviously a very convenient belief for them.

    The accelerationist philosophy it’s built on goes back to Nick Land, who appears to be some kind of fascist.




  • We’re close to peak using current NN architectures and methods. All this started with the introduction of the transformer architecture in 2017. Advances in architecture and methods have been fairly small and incremental since then. The improvements in performance have mostly come from throwing more data and compute at the models, and diminishing returns have been observed. GPT-3 cost something like $15 million to train. GPT-4 is a little better and cost something like $100 million to train. If the next model costs $1 billion to train, it will likely be only a little better (rough sketch of the scaling math below).
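
    Rough sketch of that diminishing-returns point, using the parametric loss fit from the Chinchilla paper (Hoffmann et al. 2022). The constants are the published fit; the FLOP budgets and the ~20-tokens-per-parameter split are illustrative assumptions on my part, not real figures for GPT-3/4 or any future model:

    ```python
    # Toy illustration of diminishing returns from scale, using the parametric
    # loss fit from Hoffmann et al. 2022 ("Chinchilla"):
    #   L(N, D) = E + A / N**alpha + B / D**beta
    # N = parameters, D = training tokens. Constants are the published fit;
    # the FLOP budgets and the ~20-tokens-per-parameter split are rough
    # assumptions, not what any GPT model actually used.

    E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28

    def predicted_loss(n_params: float, n_tokens: float) -> float:
        """Predicted pretraining loss for a model of n_params trained on n_tokens."""
        return E + A / n_params**ALPHA + B / n_tokens**BETA

    def split_budget(flops: float) -> tuple[float, float]:
        """Compute-optimal-ish split: roughly 20 tokens per parameter, C ~ 6*N*D."""
        n_params = (flops / 120) ** 0.5
        return n_params, 20 * n_params

    prev = None
    for flops in (1e23, 1e24, 1e25, 1e26):  # each step is 10x more compute
        n, d = split_budget(flops)
        loss = predicted_loss(n, d)
        gain = "" if prev is None else f"  (gain over previous step: {prev - loss:.3f})"
        print(f"{flops:.0e} FLOPs -> predicted loss {loss:.3f}{gain}")
        prev = loss
    ```

    Each 10x jump in compute buys a smaller drop in loss than the last one, which is the pattern the training-cost numbers above are gesturing at.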


  • LLMs do sometimes hallucinate even when giving summaries, i.e. they put things in the summaries that were not in the source material. Bing did this often the last time I tried it. In my experience, LLMs seem to do very poorly when their context is large (e.g. when “reading” large or multiple articles). With ChatGPT, its output seems more likely to be factually correct when it just generates “facts” from its model instead of “browsing” and adding articles to its context (toy sketch below).
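
    To make that last point concrete: “browsing” mode typically just pastes the retrieved articles into the prompt, so the context the model has to stay grounded in grows with every article. A toy sketch below; the placeholder articles, the prompt wording, and the ~4-characters-per-token estimate are all made up for illustration:

    ```python
    # Toy sketch of how "browsing" inflates the context window: each retrieved
    # article is pasted into the prompt the model attends over while writing
    # the summary. Placeholder text and a crude ~4 chars/token estimate;
    # a real tokenizer (e.g. tiktoken) would count differently.

    def rough_token_count(text: str) -> int:
        """Crude heuristic: roughly 4 characters per token for English prose."""
        return len(text) // 4

    articles = {
        "article_1": "lorem ipsum " * 1500,  # stand-in for a ~3,000-word article
        "article_2": "lorem ipsum " * 2500,
        "article_3": "lorem ipsum " * 4000,
    }

    prompt = "Summarize the following articles, and only state things they actually say."
    for name, body in articles.items():
        prompt += f"\n\n### {name}\n{body}"
        print(f"after adding {name}: ~{rough_token_count(prompt):,} tokens of context")

    # Every claim in the summary now has to stay grounded somewhere in this
    # blob; when the model loses track of it, it fills gaps from its weights
    # instead, which is where the invented "facts" tend to come from.
    ```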