• 3 Posts
  • 56 Comments
Joined 1 year ago
Cake day: July 2nd, 2023

  • Nah, it’s a subscription service, but it’s got a few notable YouTubers and they tend to drop extra content there. PhilosophyTube is on there, 12Tone, a bunch of people. As a platform it’s a lot less bullshit, but it’s also obviously less content.

    Though now I realize you actually have to get referred by one of the other members in order to start posting, so I’m not really sure they stand to benefit that much. It kind of explains why the content has been lacking. It certainly won’t ever have the diversity of content that YouTube has with that approach.

    Honestly learning that it’s more of a market stall than a garden makes me less enthusiastic. It’s there to curate what’s already on YouTube without YouTube’s limitations, not to create a better alternative that’s actually sustainable.




  • millie@beehaw.org to Technology@beehaw.org · The problem with GIMP (edited, 17 days ago)

    You have literally no idea who I am or what I do.

    I used GIMP to make a mock-up of a sign for a restaurant just yesterday. Is it going to be the tool I use for the final product? No, because that’ll be in vector, but it’s a lot easier to slap something together in GIMP than in Inkscape or Krita.

    ‘Killer apps’ are meaningless in comparison to useful apps. I’m an artist who needs usable tools for her work. GIMP qualifies. Personally, I find it way easier and more intuitive to navigate than Krita, Inkscape, or any of Adobe’s suite. It may not be for you, and that’s cool.

    But what isn’t cool is to pretend you know about other people’s lives and what they need. Speak for yourself; you’re perfectly capable of doing that. If you don’t like GIMP’s UI, that’s fine. If you think GIMP’s UI is absolutely horrible for every user and nobody would ever use it for professional work… you’re literally just completely wrong.





  • I think when people think of the danger of AI, they think of something like Skynet or the Matrix. It either hijacks technology or builds it itself and destroys everything.

    But what seems much more likely, given what we’ve seen already, is corporations pushing AI that they know isn’t really capable of what they say it is and everyone going along with it because of money and technological ignorance.

    You can already see the warning signs. Cars that run pedestrians over, search engines that tell people to eat glue, customer support AIs that have no idea what they’re talking about, endless fake reviews and articles. It’s already hurt people, but so far only on a small scale.

    But the profitability of pushing AI early, especially if you’re just pumping and dumping a company for quarterly profits, is massive. The more that gets normalized, the greater the chance one of them gets put in charge of something important, or becomes a barrier to something important.

    That’s what’s scary about it. It isn’t AI itself, it’s AI as a vector for corporate recklessness.


  • millie@beehaw.org to Technology@beehaw.org · The problem with GIMP (edited, 24 days ago)

    Cool condescension, but I’ve been using Photoshop on and off since 2005, have occasionally used Illustrator, and used to spend an absurd amount of time with Flash. In addition to GIMP, I currently have Krita and Inkscape installed.

    I literally prefer GIMP’s UI. It doesn’t have extra shit, it doesn’t try to force me into a single window, and it goes really, really well with a multi-monitor setup. I don’t care that it doesn’t automatically edit non-destructively, because my workflow is adapted to it. Layers and folders are plenty.

    No one piece of software is going to be the ideal solution for everyone. That’s capitalistic exceptionalism infecting the rational analysis of what tool suits which user best. Photoshop may suit you better, but I’d take the sleek usefulness of GIMP over the bloat that accompanies all that extra stuff I don’t need any day.

    Why do I need an AI strapped to my tool for pixel art, pathing, and masking?






  • I drive a cab. Yesterday I was trying to take a credit card payment and the Square app kept screwing up, so I literally had to drive somewhere else for a signal and reset my phone. While waiting for it to reset, my fare and I commiserated about how much easier it would be if I could just take an imprint with a carbon sheet.

    It made me think about all the ways that it’s not necessarily a great idea to digitize everything and make it all dependent on technology functioning properly. There’s a lot of stuff that simply didn’t need power 30 years ago that absolutely requires it to function at all now.

    I keep notes in my phone for my taxi fares. I’ve convinced myself that it’s easier because I don’t need to keep track of a notepad, but I’m realizing that it’s not. It’s actually easier, and preferable, to have a single-purpose analogue device than to pull out a device full of other conflicting distractions and open my notebook app. Then I have to wait for it to load and sync, then I have to wait for my keyboard to come up. Then, depending on how my phone is feeling, I have to wait for it to catch up with my typing.

    It’s good for the same reason it’s nice to have knobs with dedicated functions, or extra buttons. Dedicated inputs are simpler for repeated tasks than elaborate articulation of existing multi-purpose inputs.

    It’s needless complexity and bottlenecking at a single device, and the more complicated the phone gets, the worse it seems to get at actually being a phone. If your phone were a program, jamming all this functionality into one place and running a bunch of dedicated Chromium instances for some reason, I’m not sure you could reliably guess it was created on the same planet that came up with the single responsibility principle.

    You can’t enshittify ink and paper.

    Maybe we don’t want everything to be hackable, traceable, power-dependent, and susceptible to data loss.





  • Granted, I don’t assume that LLMs are currently equivalent to a lesser general AI, but like, won’t we always be able to say that they’re just generating the next token? Like, what level of complexity of ‘choice’ determines the difference between an LLM and general AI? Or is that not the criterion?

    Are we talking some internal record of tracking specific reasoning? A long-term record that it can access between sessions? Some prescribed degree of autonomy within the systems it’s connected to? Introspection?

    Because to me “find the most reasonable next token for the current context” sounds a lot like how animals work. We make our way through a complex sea of sensory information and stored information to produce our next action, over and over again.
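
    Just to make concrete what I mean by ‘come up with the next token’: the whole generation process is one loop that repeatedly asks “given everything so far, what comes next?”. Here’s a purely illustrative Python sketch of that loop; the scoring function is a made-up stand-in, not any real model or library.

    # A toy illustration of autoregressive generation: each step just picks
    # the "most reasonable" next token for the current context. Real LLMs
    # score candidates with a neural network, but the outer loop is the same shape.
    def score(context, candidate):
        # Hypothetical stand-in for a model: prefer the token that follows the last one.
        follows = {"the": "cat", "cat": "sat", "sat": "down"}
        return 1.0 if follows.get(context[-1]) == candidate else 0.0

    def generate(context, vocab, max_tokens=5):
        context = list(context)
        for _ in range(max_tokens):
            # Pick the best-scoring next token for the current context, then repeat.
            next_token = max(vocab, key=lambda t: score(context, t))
            context.append(next_token)
        return context

    print(generate(["the"], vocab=["cat", "sat", "down", "the"]))
    # -> ['the', 'cat', 'sat', 'down', 'cat', 'sat']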

    I was watching Dr Kevin Mitchell discuss free will with Adam Conover recently, and a lot of their discussion touched on consciousness as basically the choice-making process itself. It’s worth watching, and I won’t try to summarize it, but it does make me wonder how big of a gap there is between ‘come up with the next token’ and ‘live’.

    It does make me suspect that some iteration of LLMs may form the foundation of a more complex proper AI that’s not just choosing the next token, but has some form of awareness of the process behind it.