SQL.
It's a joke, but an SQL engine really can be massively parallel. You just don't see it; it just gives you what you asked for. And in many ways the operations resemble what you'd do in, for example, CUDA.
A CUDA backend for DuckDB or Trino would be one of my go-to projects if I were laid off.
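The analogy can be sketched in plain Python: a SQL aggregate like `SELECT SUM(x * x) FROM t` is just a data-parallel map followed by a reduction, which is exactly the kernel-then-reduce shape of a CUDA program (a minimal stdlib-only sketch, using threads as a stand-in for GPU lanes):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # The "kernel": applied independently to every row, so it can run
    # on as many workers (or GPU threads) as are available.
    return x * x

def sum_of_squares(rows):
    # Equivalent of SELECT SUM(x * x) FROM t: a parallel map over the
    # rows, then a sequential reduction over the partial results.
    with ThreadPoolExecutor() as pool:
        return sum(pool.map(square, rows))

print(sum_of_squares([1.0, 2.0, 3.0]))  # 14.0
```

The engine is free to pick the degree of parallelism; the query (like the code above) only states *what* to compute.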
Raph and I also talked about this subject here: https://www.popovit.ch/interviews/raph-levien-simd The discussion covers things at a relatively basic level, as we wanted it to be accessible to a wide audience, so we explain SIMD vs. SIMT, predication, multiversioning, and more.
Raph is a super nice guy and a pleasure to talk to. I'm glad we have people like him around!
There were a few languages designed specifically for parallel computing spurred by DARPA's High Productivity Computing Systems project. While Fortress is dead, Chapel is still being developed.
It seems like there are two sides to this problem, both of which are hard and go hand in hand. There is the HCI problem of designing abstractions rich enough to handle problems like parsing and scheduling on the GPU. Then there is the sufficiently-smart-compiler problem of lowering those abstractions to the GPU. But of course there's a limit to how smart a compiler can be, which loops back to the abstraction design.
Overall, it seems to be a really interesting problem!
I was expecting the author to at least mention Halide https://halide-lang.org/.
I think a good parallel language will be one that takes your code written with tasks and channels, understands its logic, and rewrites and compiles it in the most efficient way. I don't feel that I should have to write anything harder than that as a mere human.
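The "tasks and channels" style the comment describes can be sketched in Python, with `queue.Queue` standing in for a channel and threads for tasks (a minimal illustration, not a proposal for the compiler itself):

```python
import queue
import threading

def producer(chan, items):
    for item in items:
        chan.put(item)      # send each work item down the channel
    chan.put(None)          # sentinel: no more work coming

def consumer(chan, results):
    # Receive until the sentinel arrives, processing each item.
    while (item := chan.get()) is not None:
        results.append(item * item)

chan = queue.Queue()
results = []
t1 = threading.Thread(target=producer, args=(chan, [1, 2, 3]))
t2 = threading.Thread(target=consumer, args=(chan, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [1, 4, 9]
```

The hypothetical compiler in the comment would take logic written like this and decide the actual parallel schedule itself.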
Bend comes to mind as an attempt at this: https://github.com/HigherOrderCO/Bend
Disclaimer: I haven't watched the video yet.
What about burla.dev ?
Or basically a generic nestable `remote_parallel_map` for python functions over lists of objects.
I haven't had a chance to fully watch the video yet / I understand it focuses on lower levels of abstraction / GPU programming. But I'd love to know how this fits into what the speaker is looking for / what it's missing (other than it obviously not being a way to program GPUs) (also, full disclosure, I am a co-founder).
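As I understand the pitch, the core primitive is roughly this shape (a local stand-in built on `concurrent.futures`, not burla's actual implementation; the nesting is just to illustrate what "nestable" means here):

```python
from concurrent.futures import ThreadPoolExecutor

def remote_parallel_map(fn, inputs):
    # Local stand-in: the real service would fan the calls out to
    # remote machines instead of local threads.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(fn, inputs))

# Nestable: the mapped function can itself call remote_parallel_map.
def row_sum(row):
    return sum(remote_parallel_map(lambda x: x * x, row))

totals = remote_parallel_map(row_sum, [[1, 2], [3, 4]])
print(totals)  # [5, 25]
```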
Went in thinking "Have you heard of Go?"... but this turned out to be about GPU computing.
A lower-level programming language, which is either object-oriented like Python, or where after compilation a real-time system transposition would assemble the microarchitecture onto an x86 chip.
VHDL?
The audio is weirdly messed up
prolog?
ctrl-f Erlang
Nothing yet? Damn...
Was trying to remember where I recognised this name: Raph Levien worked on Ghostscript, created Advogato, and helped in the fight to legalize cryptography https://en.wikipedia.org/wiki/Raph_Levien
Unfortunately his microphone did not cooperate.
So he wants a good parallel language? What's the issue? I haven't had problems with concurrency, multiplexing, and promises. They've solved all the parallelism tasks I've needed to do.
Interesting talk. He mentions Futhark a few times but fails to point out that his ideal way of programming is almost 1:1 how it would be done in Futhark.
His example is:
It would be written in Futhark something like this: