
  • The question is always: What do you want to use it for?

    When the Raspberry Pi launched, the landscape was very different. Small computer boards were expensive; now there’s the N100 if you need a tiny cheap computer. Microcontrollers were really dumb and unconnected; now there’s the ESP32, which has WiFi and Bluetooth and decent performance. Right in the middle of this wide spectrum sits the Raspberry Pi and its clones.

    This is a very different situation from the introduction era, when PCs were heavy and expensive and microcontrollers were dumb; the Raspberry Pi had a much wider niche then. For a small server I would now get a $100 N100 from AliExpress. For embedded electronics I would grab a $10 ESP32. Only in the middle is the Raspberry Pi, but the problem is, it’s only in the middle in terms of performance, not price. A Raspberry Pi with case, PSU, storage etc. costs more than a decked-out N100, while actually being slower.

    The only remaining use case I see for a Pi 5 would be an electronics project where you need more compute than a microcontroller can provide, like a machine vision project. Otherwise:

    • Do you want to make some electronics/IoT thingy? Get an ESP32 (see the sketch below).
    • Do you want a small, light computer or server? Get an N100.
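
    For the ESP32 route, this is the scale of thing I mean: a minimal MicroPython sketch that joins WiFi and posts a sensor reading. It assumes a MicroPython ESP32 build with urequests bundled; the SSID, password, endpoint URL and pin number are all made-up placeholders.

    ```python
    # Minimal ESP32 "IoT thingy" in MicroPython: join WiFi, report a reading.
    import time
    import network
    import urequests  # bundled with most MicroPython ESP32 builds
    from machine import ADC, Pin

    WIFI_SSID = "my-network"                      # placeholder
    WIFI_PASS = "my-password"                     # placeholder
    ENDPOINT = "http://192.168.1.10:8000/report"  # placeholder

    def connect_wifi():
        wlan = network.WLAN(network.STA_IF)
        wlan.active(True)
        wlan.connect(WIFI_SSID, WIFI_PASS)
        while not wlan.isconnected():
            time.sleep(0.5)

    connect_wifi()
    adc = ADC(Pin(34))  # any ADC-capable GPIO
    while True:
        urequests.post(ENDPOINT, json={"value": adc.read()}).close()
        time.sleep(60)
    ```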





  • I don’t think it will.

    Microsoft’s endgame is being the lord and master of AI. AI thrives on knowing more data about the user. What good is an assistant if it doesn’t know your habits, your wishes and desires, your schedule and your attitude towards each person in your life?

    This is not really a feature primarily aimed at helping the user directly (even though it’s currently marketed as such), but at having the AI build up a repository of knowledge about you. Hopefully that repository is used locally only. For now this seems to be the case, but knowing Microsoft, once they have established themselves as the leading product they will start monetising it in every way possible.

    Of course I’m very unhappy with this too. I’d like to have an AI assistant, but it has to be FOSS, and owned and operated by me. I don’t trust Microsoft in any way. I’m already playing around with Ollama, RAG scripting etc. It won’t be as good as simply signing up with OpenAI, Google or Microsoft, but at least it will be mine.
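
    My Ollama/RAG tinkering is about at this level: embed a few personal notes, retrieve the closest one, and stuff it into the prompt. It’s a sketch against Ollama’s local HTTP API; the model names are just whatever you’ve pulled (llama3 and nomic-embed-text here), and the notes are invented examples.

    ```python
    # Minimal local RAG sketch against the Ollama HTTP API (localhost:11434).
    import math
    import requests

    OLLAMA = "http://localhost:11434"

    def embed(text):
        r = requests.post(f"{OLLAMA}/api/embeddings",
                          json={"model": "nomic-embed-text", "prompt": text})
        return r.json()["embedding"]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    # My "knowledge base": a handful of personal notes (invented examples).
    notes = [
        "The heating is scheduled to turn off at 22:00.",
        "Backups run every night at 03:00 to the NAS.",
        "The garden sensor battery was last replaced in March.",
    ]
    index = [(note, embed(note)) for note in notes]

    def ask(question):
        q = embed(question)
        best = max(index, key=lambda item: cosine(q, item[1]))[0]
        prompt = f"Context: {best}\n\nQuestion: {question}\nAnswer briefly."
        r = requests.post(f"{OLLAMA}/api/generate",
                          json={"model": "llama3", "prompt": prompt,
                                "stream": False})
        return r.json()["response"]

    print(ask("When do my backups run?"))
    ```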







  • Hmmm weird. I have a 4090 / Ryzen 5800X3D and 64GB and it runs really well. Admittedly it’s the 8B model because the intermediate sizes aren’t out yet and 70B simply won’t fly on a single GPU.

    But it really screams. Much faster than I can read. PS: Ollama is just llama.cpp under the hood.

    Edit: Ah, wait, I know what’s going wrong here. The 22B-parameter model is probably too big for your VRAM. Then it gets extremely slow, yes.
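
    Rough napkin maths of why that happens: quantized weights take roughly params × bits/8 bytes, and once that (plus KV cache and runtime overhead) exceeds VRAM, llama.cpp starts offloading layers to system RAM and token generation falls off a cliff. A sketch of the arithmetic, using the 24 GB of my 4090 as the reference point:

    ```python
    # Back-of-envelope: will a quantized model's weights fit in VRAM?
    # Rule of thumb only: real usage adds KV cache and runtime overhead.
    def weights_gb(params_billion, bits_per_weight):
        return params_billion * 1e9 * bits_per_weight / 8 / 1e9

    for params in (8, 22, 70):
        for bits in (4, 8, 16):
            gb = weights_gb(params, bits)
            fits = "fits" if gb < 24 else "spills to RAM -> slow"
            print(f"{params}B @ {bits}-bit: ~{gb:.0f} GB "
                  f"({fits} on a 24 GB 4090)")
    ```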





    The LLMs for text are also based on “theft”. They’re just much better at hiding it because they have vastly more source material. Still, it does sometimes happen that they quote a source article verbatim.

    But yeah, basically they’re just really good copy/paste engines that use statistical analysis to determine the most likely answer based on what’s written on basically the whole internet :P It’s sometimes hard to explain to people who think that the AI really “thinks”. I always say: if that were the case, why is the response to a really complicated question just as fast as to a simple one? The wait depends only on the length of the output.
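
    A toy way to see the “constant time per token” point: a completely fake autoregressive loop where each output token costs one fixed-price forward pass, so total time tracks answer length, not question difficulty. (Sketch only, not a real model.)

    ```python
    # Toy illustration: generation time scales with output length only.
    import random
    import time

    def fake_forward_pass(context):
        time.sleep(0.01)           # stand-in for one fixed-cost forward pass
        return random.choice("abcdefgh ")

    def generate(prompt, n_tokens):
        out = []
        for _ in range(n_tokens):  # one forward pass per output token
            out.append(fake_forward_pass(prompt + "".join(out)))
        return "".join(out)

    for question, answer_len in [("What is 2+2?", 5),
                                 ("Explain quantum gravity", 200)]:
        start = time.time()
        generate(question, answer_len)
        print(f"{question!r}: {answer_len} tokens "
              f"in {time.time() - start:.2f}s")
    ```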

    In terms of the “theft”, though, I think it’s ethically similar to Google’s cache.


  • A lot of translation and summarisation. ChatGPT is extremely good at absorbing a whole mix of comments in different languages and summarising them in English (or whatever other language).

    For programming I don’t use it so much anymore because it hallucinates too much, calling APIs that don’t even exist. And when I lower the temperature, the output is too sparse.
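
    For the summarisation use, the kind of call I mean looks like this (using the official openai Python package’s v1 client with an API key from the environment; the comments and model name are just examples). temperature is the knob I mentioned:

    ```python
    # Multilingual comments in, English summary out.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    comments = [
        "Das Update hat bei mir alles kaputt gemacht.",
        "Très bonne version, aucun problème ici.",
        "La interfaz nueva es confusa pero rápida.",
    ]

    resp = client.chat.completions.create(
        model="gpt-4o",          # any chat model works here
        temperature=0.3,         # lower = more conservative output
        messages=[
            {"role": "system",
             "content": "Summarise the following comments in English."},
            {"role": "user", "content": "\n".join(comments)},
        ],
    )
    print(resp.choices[0].message.content)
    ```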

    I’m also trying to build an assistant that can communicate proactively (I intend to auto-prompt it when things happen and have it evaluate whether that should trigger a message to me). But I need to get a local LLM going for that, because running it all through the ChatGPT API would be too costly.
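
    The auto-prompting idea, sketched against a local Ollama again; the event text and model name are invented, and the NOTIFY/IGNORE convention is just one simple way to let the model gate the decision:

    ```python
    # When an event fires, auto-prompt a local model and let it decide
    # whether the event is worth a message to me.
    import requests

    OLLAMA = "http://localhost:11434"

    def should_notify(event):
        prompt = (
            "You are my assistant. An event occurred:\n"
            f"{event}\n"
            "Reply with exactly NOTIFY or IGNORE, depending on whether "
            "this is worth interrupting me for."
        )
        r = requests.post(f"{OLLAMA}/api/generate",
                          json={"model": "llama3", "prompt": prompt,
                                "stream": False})
        return "NOTIFY" in r.json()["response"].upper()

    if should_notify("Disk usage on the NAS crossed 90%"):
        print("ping the user")  # stand-in for an actual Matrix/email message
    ```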

    Also, a replacement for some of my web searches. Sometimes I just want to know something, and it’s refreshing that it can give me a direct answer (even though it needs to be validated, that’s much easier to do when you know what you’re looking for!)


  • That’s true. They actually stopped supporting Nginx recently, which really bothered me too because I want to keep using self-signed certs (my server is only reachable internally and I do not want to expose it to the internet). And the new server they use (I forgot which) doesn’t really have that option. So right now I’m locked out of updating until I fix that.

    And yes it is totally feasible to use upstream! Not a problem at all.

    I would recommend using the Docker images though, as the whole Debian thing becomes a bit of a mess with different Python requirements for some of the bridges. I tried that in a long-forgotten past and there is a reason I’m trying to forget it 🤭

    Like you, I know the Ansible playbook has its limits (one other thing I run into is that I want to run several instances of the same bridge, to bridge e.g. two WhatsApp accounts!), but I do think Docker is the way to go. I’m interested to hear how you’re faring though, as it’s been a long time since I tried that.