You still had a 4GB memory limit per process, as well as a total memory limit of 64GB (via PAE). Especially the first one was a problem for Java apps before AMD introduced 64-bit extensions, and a reason to use Sun servers for that.
I was referring to work setups with the overengineering - if I had a cent for every time I had to argue with somebody at work not to make things more complex than we actually need, I’d have retired a long time ago.
Unless you are gunning for a job in infrastructure you don’t need to go into kubernetes or terraform or anything like that.
Even then, knowing when not to use k8s or similar tools is often more valuable than having deep knowledge of them - a lot of the setups where I see k8s used don’t have the uptime requirements to warrant the complexity. If I have something that just should be up during working hours, and I have reliable monitoring plus the ability to re-deploy it via ansible within 10 minutes if it goes poof, then putting a few additional layers that can blow up in between maybe isn’t the best idea.
Everything is deployed via ansible - including nameservices. So I already have the description of my infra in ansible, and the rest is just a matter of writing scripts to pull it into a more readable form, and maybe adding a few comment labels that also get extracted for easily forgettable admin URLs.
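To make the label-extraction idea concrete, here is a minimal sketch - the `#@admin-url:` label format and the `playbooks/` path are made up for the example, not what I actually use:

```shell
# Hypothetical convention: ansible files carry comments like
#   #@admin-url: https://grafana.internal.example/
# Pull all of those labels out of playbooks/ into a quick cheat sheet:
grep -rh '#@admin-url:' playbooks/ | sed 's/.*#@admin-url:[[:space:]]*//'
```

The same one-liner works for any label you standardize on - the point is just that the source of truth stays in the ansible files themselves.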
As a non-Windows user I see that as a good thing. LLMs are not going away - but that kind of nonsense will at least make sure all PCs eventually have cheap and reasonably fast AI acceleration. Which is required for killing off centrally hosted LLMs (plus Nvidia’s cash grabbing).
Intel is well known for requiring a new board for each new CPU generation, even if it is the same socket. AMD on the other hand is known to push stuff to its physical limits before they break compatibility.
I nowadays manage my private stuff with the ansible scripts I develop for work - so mostly my own stuff is a development environment for work, and therefore doesn’t need to be done on private time.
Generally yes, but you still need hardware support (mostly kernel and mesa). They do upstream their work - but currently you generally want packages built from their git for that.
Also the installer is very mac hardware specific.
A lot of the Zen-based APUs don’t support ECC. The next question is whether a part takes registered or unregistered modules - everything up to Threadripper is unregistered (though I think some of the Pro parts are registered), Epycs are registered.
That makes a huge difference in how much RAM you can add, and how much you pay for it.
Not just that - intel responded to AMD’s dual-core CPUs by gluing two cores together into one package. Which is pretty funny when you look at intel’s 2017 campaign of discrediting Ryzen by calling it a glued-together CPU.
AMD’s Opteron was wiping the floor with intel’s stuff for years - but not every vendor offered systems, as they were getting paid off by intel. I remember helping a friend build a kernel for one of the first available Opteron setups - that thing was impressive.
And then there’s the whole 64-bit thing, which intel eventually had to license from AMD.
Most of the big CPU innovations (at least in x86 space) of the last decade were by AMD - and the chiplet design of ryzen is just another one.
That’s already the friendly variant. Traditional find has a mandatory path as its first argument, so to search the current directory you need to run find . explicitly.
It also doesn’t know if the offending argument really is a path - it just flags that as the likely error. You might just have messed up quoting an argument.
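To make the difference concrete (the file names and pattern are just for the example):

```shell
# GNU find assumes "." when no path is given:
find -maxdepth 1 -name '*.conf'

# Traditional/POSIX find requires the starting path:
find . -maxdepth 1 -name '*.conf'

# The messed-up-quoting case: an unquoted pattern gets expanded by the
# shell first, so with a.conf and b.conf present this becomes
# "find . -name a.conf b.conf" - and find then complains about "b.conf"
# as if it were a misplaced path:
# find . -name *.conf
```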
Is it a ‘death by quantity’ thing?
Pretty much that - those companies rely on open projects to sort it out for them, so they’re pretty much scraping open databases and selling the good data they pull from there. That’s why they were complaining about the kernel stuff - the information required was already there, you just needed to put the effort in, so they were asking for CVEs. Now they got their CVEs - but to profit from them they’d still need to put in the same effort as they would have without CVEs in place.
Short version: a bunch of shitty companies have a business model of selling access to open vulnerability databases to companies that want to track security vulnerabilities - at pretty much zero effort to themselves. So they’ve been bugging the kernel folks to start issuing CVEs and do impact analysis so they have more to sell - and the kernel folks just went “it is the kernel, everything is critical”.
tl;dr: this is pretty much an elaborate “go fuck yourself” towards shady ‘security’ companies.
Making an exception for one organisation, pressured by politicians, would be harmful. BBC has the following policy about neutral reporting:
We don’t use loaded words like “evil” or “cowardly”. We don’t talk about “terrorists”. And we’re not the only ones to follow this line. Some of the world’s most respected news organisations have exactly the same policy.
One fascinating example is one owner who replaced the DC barrel jack with a USB-C port, so they could utilize USB-PD for external power.
Oddly enough that’s also an example of bad design in that notebook: the barrel jack is soldered in. With a module that plugs into the board it’d be significantly easier to replace - and it would also provide strain relief against power-jack abuse. All my old thinkpads were trivial to move to USB-C PD because they use a separate power jack with an attached cable.
The transparent bottom also isn’t very functional - it is pretty annoying to remove and put back due to the large number of screws required. For a notebook designed for tinkering I’d have wanted some kind of quick release for that. Also annoying is the lack of USB ports on the board - there’s enough space to integrate a USB hub, and just doing that on the board and providing extra ports would’ve been way more sensible.
The CPU module also is a bit of a mixed bag - it pretty much is designed for the first module they developed, and later modules don’t have full support for the existing ports. I was expecting that, though - many projects trying to offer that kind of modular upgrade path run into that sooner or later, and for that kind of small project with all its teething problems ‘sooner’ was to be expected. It still is very interesting for some prototyping needs - but that’s mostly companies or very dedicated hackers, not the average linux user.
No, most companies also have mostly incompetent engineers.
Roku always was a company with great engineers and shitty, money-grabbing management. New-user creation always requested data not necessary for basic operation.
I find this situation rather entertaining. It shows yet again how important it is to educate people on the basics of how LLMs work, including how they are executed - I’m guessing that with just a tiny bit more knowledge it’d have been obvious nonsense to you too.
Not entirely sure about that. I have a bunch of systems with the current 8cx, and performance-wise that’s pretty much 10 years behind Apple, while being similar in heat and power consumption. It is perfectly fine for the average office and web-browsing workload, though - a 10-year-old mobile i7 is still an acceptable CPU for that nowadays, and the more problematic area of IO speed is better on the Snapdragon. (That’s also the reason why Apple gets away with that 8GB thing - the performance impact it causes still leaves a usable system for the average user. The lie is not that it doesn’t work - the lie is that it doesn’t have an impact.)
From the articles I see about the Snapdragon Elite it seems to have something like double the multicore performance of the 8cx - which is a nice improvement, but still quite a bit away from catching up to the Apple chips. You could have a large percentage of office workers use them and be happy - but for demanding workloads you’d still need to go intel/AMD/Apple. I don’t think many companies will go for Windows-on-ARM when they can’t really switch everybody over. Plus, the deployment tools for ARM are not very stable yet - big parts of what you’d need for doing deployments in an organization have only been available for ARM for a few months now (I’ve been waiting for that, but didn’t have time to evaluate whether they work).
The problem with renewables is the fluctuation, so you need something you can quickly spin up or down to compensate. You can do that with nuclear reactors to some extent - but they barely break even at current energy prices, and they keep incurring the same high costs while idle.
So a combination of grid storage and power plants that are cheap while idle (like hydro) is the way to go for now.