You don't need GPS to figure out the correction for this. Inertial navigation systems in aircraft (which use very stabilised platforms with a lot of math involved) worked before GPS was available.
It helps to have a rough indication of the current latitude on startup, but you can also figure it out from the gyro outputs. Just takes longer.
With modern sensors (solid state laser gyroscopes) it has all become a lot smaller, so if you really want to, you can do this in a camera. It's just probably going to be too expensive for what it brings, because 6+ stops of stabilisation is a lot already.
> The second solution is much more plausible, but still very difficult. The user would have to be pointing the camera at the subject for long enough such that the drift in their aim at the subject is smaller than the drift from the rotation of the earth. This is also implausible. What is concerning though, is that this second method is one that could work very well to cancel out Earth’s rotation on the CIPA specified stabilization test apparatus.
So, basically dieselgate but for image stabilization
Can somebody ELI5 this to me?
The image with the two Earths... that only works if the camera is not also on the ground, but it is? How is the rotation of the object and the camera not identical? Why would it rotate ‘upwards’?
Also, if the issue is relative motion or rotation between camera and object, wouldn’t two sensors, one on the camera and one on the subject, be able to solve this, since we can see if their rotations/movements match up or not?
This can be fixed in software:
You can back-calculate orientations with high-pass filtered gyro data, use those to rotate the unfiltered gyro data into the current reference frame, then low-pass the rotation-corrected (but otherwise unfiltered) gyro data to get the earth's rotation axis in the current reference frame. From that you can estimate the expected rotation that should be ignored.
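A rough sketch of that pipeline (the sample rate, filter constants, and integration scheme are assumptions for illustration, not anything from the article):

    # High-pass the gyro to keep hand shake only, integrate that to track
    # orientation, rotate the raw samples into a common frame, and low-pass
    # there to pick out the (nearly constant) earth-rate vector.
    import numpy as np
    from scipy.spatial.transform import Rotation as R

    FS = 200.0       # assumed gyro sample rate, Hz
    DT = 1.0 / FS
    A_HP = 0.999     # high-pass pole: passes shake, drops near-DC terms
    A_LP = 0.9999    # very slow low-pass for the earth-rate estimate

    orientation = R.identity()   # body-to-"world" rotation from high-passed gyro
    prev_in = np.zeros(3)
    hp_out = np.zeros(3)
    earth_world = np.zeros(3)    # slowly converging earth-rate estimate

    def corrected_rate(gyro_body):
        """Return the gyro sample (rad/s) with the estimated earth rate removed."""
        global orientation, prev_in, hp_out, earth_world
        gyro_body = np.asarray(gyro_body, dtype=float)
        # 1) first-order high-pass: hand shake passes, constant terms are removed
        hp_out = A_HP * (hp_out + gyro_body - prev_in)
        prev_in = gyro_body
        # 2) integrate the high-passed rate to keep a shake-only orientation
        orientation = orientation * R.from_rotvec(hp_out * DT)
        # 3) rotate the *unfiltered* sample into the common frame and low-pass it
        earth_world = A_LP * earth_world + (1 - A_LP) * orientation.apply(gyro_body)
        # 4) bring the earth-rate estimate back into the current body frame, subtract
        return gyro_body - orientation.inv().apply(earth_world)

Whether this converges fast enough in practice depends on how well the hand shake is really band-limited; the constants above are placeholders only.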
Solution (2) as written seems to imply that the camera can only use the gyroscope signal while the camera is pointed at the subject, but I cannot see why that is a strong limitation.
In theory, you can take the last N seconds of data from the gyroscope (I assume it is running while the camera is active) to get the overall drift, even if the camera is tumbling around for a while before being pointed at the subject... assuming the tumbling has enough periods that are correlated with the earth's rotation (e.g. someone carrying it, not pointing it at an aircraft or something moving east-west for the whole window in a way that is anticorrelated with the rotation).
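A minimal sketch of that windowed estimate (window length and sample rate are assumed values, not from the article): average the last N seconds of body-frame gyro samples; if the camera has stayed in roughly the same orientation over the window, the mean is approximately the gyro bias plus the earth-rate term, which can then be subtracted.

    import numpy as np
    from collections import deque

    FS = 200.0          # assumed gyro sample rate, Hz
    WINDOW_S = 30.0     # assumed averaging window, seconds
    buf = deque(maxlen=int(FS * WINDOW_S))

    def drift_estimate(gyro_sample):
        # Mean of the last N seconds of gyro data (rad/s, body frame).
        # Only meaningful if the orientation stayed roughly constant over the
        # window; otherwise the earth-rate term smears across axes, as noted above.
        buf.append(np.asarray(gyro_sample, dtype=float))
        return np.mean(buf, axis=0)

    # per-sample correction: gyro_sample - drift_estimate(gyro_sample)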
> The first isn’t a good solution for many reasons. Don’t have GPS signal? Shooting next to a magnet? Your system won’t work.
These seem trivial to work around. Just store the last known position and use that. It's rare that you'll be without a GPS signal or beside a magnet, and you certainly won't be traveling long distances in those conditions. And since when do magnets block GPS signals?
Version 2 sounds to me like the probable reason for the ability of cameras like the OM-1 Mark II to go over 8 stops. Yes, it is probably not a simple task to measure the earth's drift with the gyroscopes, but one thing might help: the frequency of that drift is exactly known - it is the speed of the earth's rotation. So it should be possible to tune a very narrow filter to that frequency and only analyze the gyroscope signal at that frequency. With that one could at least partially compensate for the drift.
Nikon claims 8.0 stops of "VR image stabilization" for their Zf camera (released late in 2023).
https://www.nikonusa.com/p/z-f/1761/overview
("Based on CIPA standards; when using the telephoto end of the NIKKOR Z 24-120mm f/4 S" - for clarity, that lens does not have optical VR in the lens itself, so this is all based on in-body stabilization.)
On the other hand, that should be awesome for astrophotography.
Perhaps in some camera firmware bug database there's a closed bug marked: "Won't fix. Tested working in orbit."
This is analogous to astro-photography problems with keeping stars as points rather than as blurred lines in long exposures. If you think about it, if a long exposure at night has a static landscape but moving stars, the IBIS equivalent would have static stars and a moving landscape :)
You should be able to calibrate this out by telling the user to press a button and then not rotate the camera away.
Right?
Might just not be practical at all.
On the other hand, shouldn't the earth rotate fast enough to figure this out in a short timeframe while the photographer starts looking through the finder?
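Rough numbers for how much signal a short "hold still" window actually gives (pure geometry, nothing about any particular sensor):

    import math

    OMEGA_EARTH = 2 * math.pi / 86164.091   # rad/s, sidereal rate
    DEG_PER_S = math.degrees(OMEGA_EARTH)    # ~0.0042 deg/s

    for t in (1, 2, 5, 10):                  # seconds of holding the camera still
        print(f"{t:>2} s -> {t * DEG_PER_S:.4f} deg of earth rotation to detect")

So a few seconds of looking through the finder only gives a few hundredths of a degree to pick out of the gyro's own bias and noise; whether that is enough depends entirely on the sensor.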
Why not stabilize optically?
I am probably missing something huge, but if the goal is a stable image, why use gyros? Use the image itself to apply the correction factor to the final integration, sort of the same way videos are stabilized.
I'm curious how the OM-1 MK2 gets around this to achieve 8.5 stops.
I still don't quite follow the explanation. The duck and I are on the surface of the same body and are rotating together, maintaining a constant distance... why does Earth rotation need to be corrected for?
You should be able to exceed 6.3 stops if you are pointing north/south rather than east/west, right? Maybe they are just measuring it pointing north/south.
Would it be possible to correct for the rotation by counter rotating if the orientation of the camera is known (or determined by GPS + compass)?
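If latitude and orientation were known, the counter-rotation rate follows directly. A sketch assuming an east-north-up local frame and made-up yaw/pitch/roll conventions (a real implementation would have to pin these down):

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    OMEGA_EARTH = 7.2921159e-5   # rad/s

    def earth_rate_in_camera_frame(lat_deg, yaw_deg, pitch_deg, roll_deg=0.0):
        # Earth's rotation vector expressed in the camera body frame.
        # ENU components of earth rotation: no east part, cos(lat) north, sin(lat) up.
        lat = np.radians(lat_deg)
        omega_enu = OMEGA_EARTH * np.array([0.0, np.cos(lat), np.sin(lat)])
        # Assumed convention: camera-to-ENU rotation built from compass/level angles.
        cam_to_enu = R.from_euler("zyx", [yaw_deg, pitch_deg, roll_deg], degrees=True)
        return cam_to_enu.inv().apply(omega_enu)

    # The IBIS loop would counter-rotate by this vector, on top of the
    # hand-shake correction it already applies.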
Bullshit. It's ITAR: they don't want parts floating around in the world that can make a dead-nuts-accurate INS (inertial navigation system), as this enables weapons we don't want in the wild.
You can stabilize out everything and account for the rotation by simply watching the vector of gravity over time.
Nikon has 8 stops so they somehow beat physics
6.3 stops is a lot, though. That's basically the fully usable aperture range of a kit zoom lens.
Is a plain phone gyroscope enough to detect Earth rotation? Is there an app for that?
Yet another example of b0rked / unescaped TeX, specifically log vs \log in this case. Blows my mind that nobody sees it...
Well, if we're nitpicking here, it is not 86,400 s/day (24 hours * 3600 s/hour) and 7.27x10^-5 radians/s, but 86,164.091 s and 7.29x10^-5 radians/s.
24 hours is the time it takes the sun to return to the same spot in the sky: the earth has to rotate for another 3m 56s to make up for the angle gained by revolving around the sun in the same direction as its rotation. This applies to the other planets that also rotate and revolve in the same direction - Mercury, Earth, Mars, Jupiter, Saturn, and Neptune. A sidereal day, 23h 56m 4.091s, is the time it takes for distant stars to return to the same spot in the sky.
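Working out both figures (just the arithmetic behind the numbers above):

    import math
    print(2 * math.pi / 86400)        # 7.272e-05 rad/s, 24 h solar day
    print(2 * math.pi / 86164.091)    # 7.292e-05 rad/s, 23 h 56 m 4.091 s sidereal day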
Damn, I knew that is why I botched my 6-stop exposure at my daughter's graduation! She can't blame me now! Thank you HN!