• 0 Posts
  • 25 Comments
Joined 1 year ago
Cake day: June 21st, 2023






  • Lemvi@lemmy.sdf.org to Memes@lemmy.ml: Know the difference.
    5 months ago

    No, I just have very different ideas of what progress is.

    Progress in my eyes is made when a society becomes more democratic, and when we solve conflicts without bloodshed.

    In that sense, sure, the GDR was a step in the right direction, but Nazi Germany didn’t exactly set the bar very high.

    The idea of socialism is nice, but you hardly have any progress if the system (be it built on free markets or planned economies) doesn’t work to improve ordinary citizens’ lives, but only to keep the powerful in power.

    Personally, I don’t care much about free markets or planned economies. I think the best approach, as so often, is a kind of blend: a social market economy that allows independent companies within a framework that protects workers, consumers and the environment.

    Thing is, the specifics of the economic system aren’t important. What matters is that the people are the ones who decide them.

    There is nothing wrong with pursuing a utopian society, but ultimately you have no control over what happens in the far future (nor should you; future societies need to be ruled by future people).

    The only thing you can control is the present and the near future, so what really matters aren’t the ends you strive for, but the means you employ while doing so.


  • Lemvi@lemmy.sdf.org to Memes@lemmy.ml: Know the difference.
    5 months ago

    Ah yes, my grandparents, the landlords. Wait hol’ up, they were working people, not landlords. GDR fucked them regardless.

    “bUt tHAT wASn’T rEaL ComMunIsM” If neither the USSR nor China could achieve true Communism, then maybe it isn’t so much a realistic goal as a utopian ideal, a convenient justification for all kinds of crimes against humanity that occur in its pursuit.



  • Technological progress reduces the amount of work required to perform certain tasks. In any just system, this would improve the lives of the general population, either by reducing the amount of work required to make a living, or by increasing the amount and range of products and services.

    If technological progress does not do that, and instead makes the rich richer and the poor poorer, the problem isn’t technological progress, but the system in which it is applied.

    So what I’m saying is this: AI isn’t the problem. AI replacing employees isn’t the problem. The problem is that with a class divide into investors and workers, the ones profiting the most from technological progress are the investors.


  • I compared these numbers to the general population (Source: https://www.wahlrecht.de/umfragen/ )

    Support for the far-right AfD among young people is about 5 percentage points lower than among the general population (12% vs 17%)

    For the conservative CDU/CSU it is 10 pp lower (20% vs 30%)

    For the Social Democrats it is 3 pp lower (12% vs 15%)

    For the liberal FDP it is 4 pp higher (8% vs 4%)

    For the Greens it is about 4 pp higher (18% vs 14%)

    For the Wagenknecht alliance, a weird mix of far right and far left, it is about the same (5%)

    Unfortunately this article doesn’t mention the socialist left, which for the general population sits at around 3%
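    The percentage-point differences quoted above can be double-checked with a few lines of arithmetic (the abbreviations SPD and BSW are my shorthand for the Social Democrats and the Wagenknecht alliance):

```python
# Poll figures quoted above, in percent (youth survey vs general population).
youth   = {"AfD": 12, "CDU/CSU": 20, "SPD": 12, "FDP": 8, "Greens": 18, "BSW": 5}
general = {"AfD": 17, "CDU/CSU": 30, "SPD": 15, "FDP": 4, "Greens": 14, "BSW": 5}

# Difference in percentage points (youth minus general population).
diff = {party: youth[party] - general[party] for party in youth}
print(diff)  # {'AfD': -5, 'CDU/CSU': -10, 'SPD': -3, 'FDP': 4, 'Greens': 4, 'BSW': 0}
```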

    So, to conclude (and from my own experience): youths in Germany don’t deviate that much from the general population in terms of their political views. They tend to be less conservative and less xenophobic. Most of them are somewhere in the center, with slightly more liberal tendencies than the general population.





  • LLMs are absolutely complex; neural nets ARE somewhat modelled after human brains, after all, and trying to understand transformers or LSTMs for the first time is a real pain. However, what a NN can do, or rather what it has been trained to do, depends almost entirely on the loss function used. The complexity of the architecture and the training dataset don’t change what an LLM can do, only how good it is at doing that, and how well it generalizes. The loss function used for the training of LLMs simply evaluates whether the predicted tokens match the actual ones. That means that an LLM is trained to predict words from other words, or in other words, to model language.

    The loss function does not evaluate the validity of logical statements, though. All reasoning that an LLM is capable of, or seems to be capable of, emerges from its modelling of language, not an actual understanding of logic.
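    As a sketch of that loss, here is a toy version of the cross-entropy objective used to train language models (the probabilities are made up; a real model produces them over a vocabulary of many thousands of tokens):

```python
import math

def cross_entropy(predicted_probs, actual_token):
    # Standard language-modelling loss: the negative log-probability the
    # model assigned to the token that actually came next in the text.
    # Nothing here checks whether the continuation is logically valid.
    return -math.log(predicted_probs[actual_token])

# Hypothetical model output after the prompt "strawberries are":
predicted = {"delicious": 0.5, "red": 0.3, "fruits": 0.19, "airplanes": 0.01}

loss_plausible = cross_entropy(predicted, "delicious")    # low loss
loss_implausible = cross_entropy(predicted, "airplanes")  # high loss
print(loss_plausible < loss_implausible)  # True
```

    The loss only rewards assigning high probability to the observed continuation; "valid" and "invalid" reasoning are scored identically if they are equally common in the data.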



  • Ok, maybe I didn’t make my point clear: Yes, they can produce a text in which they reason. However, that reasoning mimics the reasoning found in the training data. The arguments an LLM makes and the stance it takes will always reflect its training data. It cannot reason counter to that.

    Train an LLM on a bunch of English documents and it will suggest nuking Russia. Train it on a bunch of Russian documents and it will suggest nuking the West. In both cases it has learned to “reason”, but it can only reason within the framework it has learned.

    Now if you want to find a solution for world peace, I’m not saying that AI can’t do that. I am saying that LLMs can’t. They don’t solve problems, they model language.


  • LLMs are trained to see parts of a document and reproduce the other parts; that’s why they are called “language models”.

    For example, they might learn that the words “strawberries are” are often followed by the words “delicious”, “red”, or “fruits”, but never by the words “airplanes”, “bottles” or “are”.

    Likewise, they learn to mimic the reasoning contained in their training data. They learn the words and structures involved in an argument, but they also learn the conclusions they should arrive at. If the training dataset consists of 80 documents arguing for something and 20 arguing against it (assuming nothing else, like length, differentiates those documents), the LLM will adopt the standpoint of the 80 documents and argue for that thing. If those 80 documents contain flawed logic, so will the LLM’s reasoning.
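    The 80/20 situation can be illustrated with a toy “corpus” and simple counting; this is a stand-in for what the training objective rewards, not real LLM code:

```python
from collections import Counter

# Toy "training corpus": 80 documents conclude one way, 20 the other.
corpus = ["X is good"] * 80 + ["X is bad"] * 20

# A pure language model learns which continuation of "X is" is more
# frequent, not which one is logically sound.
continuations = Counter(doc.split()[-1] for doc in corpus)

# Greedy decoding: emit the most frequent continuation.
prediction = continuations.most_common(1)[0][0]
print(prediction)  # "good" -- the majority stance, regardless of its validity
```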

    Of course, you could train an LLM on a carefully curated selection of documents free of logical fallacies. Perhaps such a model would be capable of actual logical reasoning (though it would still be biased towards the conclusions contained in the training dataset).

    But to train an LLM you need vast amounts of data. Filtering out documents containing flawed logic not only requires a lot of effort, it also reduces the size of the training dataset.

    Of course, that is exactly what the big companies are currently researching, and I am confident that LLMs will only get better over time. But the LLMs of today are trained on large datasets rather than perfect ones, and their architecture and training prioritize language modelling, not logical reasoning.


  • It should be mentioned that those are language models trained on all kinds of text, not military specialists. They string together sentences that are plausible given the input they get; they do not reason. These models mirror the opinions most commonly found in their training datasets. The issue is not that AI wants war, but rather that humans do, or at least that the majority of the training dataset’s authors do.