Intel's wifi cards and drivers are their saving grace for laptops. Atheros/Qualcomm drivers are pitiful in comparison on Linux.
Perhaps Apple Silicon is the way to solve the dilemma... :-)
> At work I need to drive two 4K displays at 60Hz and I don't think there are many motherboards that will do that with onboard graphics
MSI has such motherboards in their PRO range, the UEFI allows you to set the primary GPU. The otherwise anemic Intel UHD 770 graphics are able to drive two 4K monitors (144 and 60Hz, DP and HDMI respectively) quite smoothly.
Besides performance and idle power consumption, the onboard NIC and wireless should also be taken into consideration. Intel's are pretty well supported across operating systems, and were the deciding factor in my case.
I took the plunge and got the latest X1 Carbon, and I've been rather happy. Battery life with the latest kernels is quite OK (although obviously not MacBook level) and nearly everything seems to work nicely.
The only weird part is the slow throttle-up after sleep: the machine seems stuck at 400MHz for half a minute before actually waking up.
This was me ~5y ago, but at that time I'd still consider them for the datacenter.
Now? I actively avoid the Intel models for things, but sometimes I'm solving a particular problem with various constraints and for whatever reason the vendors only have a handful of AMD SKUs. Feels gross.
Although I intensely hate some anonymous Intel employees, buying AMD instead of Intel is not an emotional decision for me.
I hate many things done by Intel, but none more than whoever conceived, around 1994, the market segmentation between the Pentium and the Pentium Pro. Those people are the reason why today the majority of non-server computers lack protection against memory errors, and why ECC memory modules cost far more than their production cost justifies.
While this market segmentation scheme has increased the profits of both CPU and memory manufacturers, it will never be possible to estimate the financial losses it has inflicted on the rest of the world, through things like undetected data corruption and computer crashes misattributed to software bugs.
However, that is not the reason why six years have passed since I last bought Intel CPUs (early 2019, a couple of Intel NUCs), despite having previously been a big spender on them.
The programs that matter most to me are either programs I write myself or programs I compile from source with whatever compilation options I choose. Therefore, buying any x86 CPU today that does not support AVX-512 would be unacceptable.
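To illustrate what I mean by compilation options (my own toy example, not any particular program): with GCC or Clang, picking an -march level that includes AVX-512, such as x86-64-v4 or znver4, lets the compiler auto-vectorize an ordinary loop with 512-bit registers, while the same source built for the baseline falls back to SSE2.

    /* saxpy.c - toy example; build with e.g.
     *   gcc -O3 -march=x86-64-v4 -c saxpy.c
     * x86-64-v4 implies the AVX-512 F/BW/CD/DQ/VL subsets, so the
     * compiler is free to vectorize this loop with zmm registers;
     * plain -march=x86-64 limits it to SSE2. */
    #include <stddef.h>

    void saxpy(float *restrict y, const float *restrict x,
               float a, size_t n) {
        /* dependence-free loop: an ideal auto-vectorization target */
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }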
AVX-512 is a much better vector ISA than AVX, and its current names, AVX-512 or AVX10, are actually an insult, because AVX-512 was not an evolution of AVX. The Larrabee New Instructions (the initial name of AVX-512) were developed, and also implemented, slightly earlier than AVX: the first product with AVX, Sandy Bridge, launched in 2011, while the first product with the Larrabee New Instructions had been delivered (in development systems) a year earlier.
AVX should never have existed, but it was a creation of Intel's Team A, while Larrabee was a creation of outsiders together with Team C or D. As launched in 2011, AVX was much worse, a completely obsolete vector ISA on the day of its launch, but it improved a lot in 2013, when Haswell added to AVX many instructions taken from Larrabee, including FMA3 and gather.
Now, the Intel CPUs that support AVX-512 are too expensive in comparison with the AMD alternatives, while the decently priced Intel CPUs lack AVX-512 support, so their performance is much lower than it should be in any application that can benefit from array operations. This shortfall is masked by legacy applications, which have avoided AVX-512 because most of the installed base of CPUs lacked support, but I do not care about legacy applications.
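As a rough sketch of the kind of array operation I have in mind (the function name and the masked-tail approach are my choices, nothing canonical): with AVX-512 intrinsics the same loop handles 16 floats per instruction, and the remainder is handled with a mask instead of a scalar epilogue, which is one of the things that makes it a nicer ISA than AVX2.

    #include <immintrin.h>
    #include <stddef.h>

    /* Hypothetical AVX-512 version of the loop above: 16 floats per
     * iteration, masked remainder instead of a scalar tail.
     * Requires a CPU with AVX-512F (e.g. Zen 4/5, older Intel server). */
    void saxpy_avx512(float *y, const float *x, float a, size_t n) {
        __m512 va = _mm512_set1_ps(a);
        size_t i = 0;
        for (; i + 16 <= n; i += 16) {
            __m512 vx = _mm512_loadu_ps(x + i);
            __m512 vy = _mm512_loadu_ps(y + i);
            _mm512_storeu_ps(y + i, _mm512_fmadd_ps(va, vx, vy));
        }
        if (i < n) {  /* masked tail covers the last n % 16 elements */
            __mmask16 m = (__mmask16)((1u << (n - i)) - 1);
            __m512 vx = _mm512_maskz_loadu_ps(m, x + i);
            __m512 vy = _mm512_maskz_loadu_ps(m, y + i);
            _mm512_mask_storeu_ps(y + i, m, _mm512_fmadd_ps(va, vx, vy));
        }
    }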
Currently AMD does not have CPUs optimized for low power levels, i.e. for TDPs of 6 to 7 W, like Intel Twin Lake/Amston Lake, or of 15 to 17 W, like Intel Lunar Lake. At such low power, either ARM-based or Intel CPUs are suitable, while AMD is the best choice at any size equal to or greater than that of an Intel NUC or of a not-too-thin tablet/notebook. (For the classic Intel NUC size, i.e. a volume of 0.5 liter with active cooling, a CPU TDP of 28 to 35 W is optimal, e.g. an AMD Strix Point CPU; Lunar Lake is too weak a CPU for the Intel NUC size, while Arrow Lake H would have been powerful enough, but unlike the AMD parts it does not support AVX-512.)
So after the Lunar Lake launch I planned for some time to buy a Lunar Lake computer for a low-power application, also because I was curious to test the new Lion Cove and Skymont CPU cores. However, I abandoned those plans after it became known that Lunar Lake has a bug that makes MONITOR/MWAIT unreliable. For me this is a critical feature and a great advantage of x86 CPUs; with it broken, I am better off getting an ARM-based SBC for that application.
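For context, a sketch of the pattern I rely on (illustrative only; the names are mine): MONITOR arms a hardware watch on a cache line and MWAIT puts the core into a low-power sleep until another core writes that line. They are ring-0 instructions, so in real life this lives in a kernel or hypervisor idle loop; the point is the cheap, spin-free cross-core wakeup that the Lunar Lake bug makes unreliable.

    #include <pmmintrin.h>   /* _mm_monitor / _mm_mwait (SSE3) */
    #include <stdint.h>

    /* Illustrative only: MONITOR/MWAIT execute at ring 0, so this is
     * the shape of a kernel idle/wakeup path, not runnable user code.
     * A waiter arms a monitor on a cache line and sleeps; any write to
     * that line by another core wakes it, without busy-spinning. */
    static volatile uint64_t wake_flag;

    static void wait_for_wakeup(void) {
        while (!wake_flag) {
            _mm_monitor((const void *)&wake_flag, 0, 0); /* arm monitor */
            if (!wake_flag)        /* re-check to avoid a lost wakeup */
                _mm_mwait(0, 0);   /* sleep until the line is written */
        }
    }

    static void send_wakeup(void) {
        wake_flag = 1;             /* the store itself is the doorbell */
    }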
Intel's chips have become so awful in the last few years that I have no desire to buy one, even in laptops, where power efficiency is so important. Maybe I'm just yelling at clouds, but the whole P-core and E-core architecture seems off to me (and clearly to Intel too), and having to implement new schedulers for virtually zero performance gain (just power efficiency) is really annoying. Especially as a Linux user, where power efficiency isn't really the priority and battery life tends to suck anyway.
> and may have better single-core burst performance
Is this even still true?
And in any case there is the 9950X3D.
I don't worry about things like games or 3D. The big advantage of Intel, for me, is the open-source graphics drivers. As a long-time Linux user (on everything), I don't need to worry about graphics driver issues after kernel upgrades.