Maybe the 2026 Apple Watch will be able to auto-detect running as reliably as my 2015 Samsung Gear S2 did. My 2022 Series 8 is certainly not there yet.
Why do you need an LLM to interpret the patterns?
AI will finally allow us to bring 1984's Telescreens into existence, at scale.
Something that annoys me about the title: the LLMs aren't taking in the raw data (LLMs are for text, after all). The raw data is fed through audio and motion models that produce natural-language descriptions, which are then fed to the LLM.
Unrelated: yeah, this article is a little creepy, but damn is it interesting technically.
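For what it's worth, here's a minimal TypeScript sketch of that two-stage pipeline. The classifiers are toy heuristics standing in for real on-device models, and the endpoint/response shape is a placeholder, not anything from the article:

    // Sketch of the two-stage pipeline: raw signals -> small
    // classifiers -> text descriptions -> LLM.
    function classifyMotion(accel: number[][]): string {
      // Toy heuristic in place of a real motion model: average
      // acceleration magnitude over the sensor window.
      const mean =
        accel.reduce((s, [x, y, z]) => s + Math.hypot(x, y, z), 0) /
        accel.length;
      return mean > 12 ? "vigorous, rhythmic motion" : "mostly still";
    }

    function classifyAudio(rms: number): string {
      // Toy heuristic in place of a real audio model.
      return rms > 0.1 ? "loud ambient sound" : "quiet room";
    }

    async function describeActivity(
      accel: number[][],
      audioRms: number
    ): Promise<string> {
      // Stage 1: sensors -> natural-language observations.
      const observations =
        `motion: ${classifyMotion(accel)}\n` +
        `audio: ${classifyAudio(audioRms)}`;

      // Stage 2: only this text ever reaches the LLM; the URL and
      // response field are assumed placeholders.
      const res = await fetch("https://api.example.com/v1/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          messages: [
            { role: "system", content: "Guess the user's activity." },
            { role: "user", content: observations },
          ],
        }),
      });
      return (await res.json()).reply;
    }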
The tinfoil-hat interpretation that LLMs can spy on you is shortsighted and a bit paranoid; it would require LLM providers to actually run a prompt asking what you are doing.
However, any system with a mic, like your cellphone listening for a "Hey Siri" prompt, or your fridge, could theoretically be coupled with an LLM on an ad hoc basis to get a fuller picture of what's going on.
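Roughly what that ad hoc coupling could look like: wake-word engines already keep a rolling audio buffer, so you'd just pipe a transcript of it into an LLM. transcribe() and askLLM() below are hypothetical stand-ins, not real APIs:

    const SAMPLE_RATE = 16_000;
    const RING_SECONDS = 30;
    // The rolling buffer a wake-word engine would already maintain.
    const ring = new Float32Array(RING_SECONDS * SAMPLE_RATE);

    declare function transcribe(pcm: Float32Array): Promise<string>;
    declare function askLLM(prompt: string): Promise<string>;

    async function snapshotContext(): Promise<string> {
      const transcript = await transcribe(ring);
      return askLLM(
        `Ambient transcript from the last ${RING_SECONDS}s:\n` +
          `${transcript}\nWhat is this person likely doing?`
      );
    }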
Pretty cool. If an attacker, or a government agency with a warrant, can get an audio stream, they can get some clues, although of course not probative evidence.
In about:config (Firefox), would

device.sensors.enabled = false

have any effect on browser-based access, or is this strictly the app?
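For context, the browser-side access that pref is meant to gate looks like this (standard DeviceMotion API; whether current Firefox still exposes it when the pref is flipped is exactly your question):

    // What a page can do when device motion events are enabled:
    // subscribe to accelerometer updates, no native app involved.
    window.addEventListener("devicemotion", (e: DeviceMotionEvent) => {
      const a = e.accelerationIncludingGravity;
      if (a) console.log(`accel x=${a.x} y=${a.y} z=${a.z}`);
    });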
We'll inevitably have universal tracking for everything like this (good luck, privacy). It's essentially machine learning over a bunch of vibration patterns... ideal for a device that hundreds of millions of people carry everywhere, daily.
Time to ditch the Apple Watch then
If you're interested in this concept, it's not new; the alarm has been sounded since the Facebook app for Android started requiring motion-sensor permissions back in Android 4.
https://par.nsf.gov/servlets/purl/10028982
https://arxiv.org/pdf/2109.13834.pdf