I'm sharing my project to control 3D models with voice commands and hand gestures:
- use voice commands to change interaction mode (drag, rotate, scale, animate), as in the first sketch below
- use hand gestures to control the 3D model
- drag/drop to import other models (only GLTF format supported for now), as in the second sketch below
Created using Three.js, MediaPipe, the Web Speech API, Rosebud AI, and Quaternius 3D models
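The voice control side leans on the Web Speech API. Here's a minimal sketch of the idea (the mode keywords match the list above; everything else, including `currentMode`, is illustrative rather than the project's actual code):

```js
// Minimal sketch: continuous speech recognition that switches the
// interaction mode when a keyword is heard.
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new SpeechRecognition();
recognition.continuous = true;      // keep listening between phrases
recognition.interimResults = false; // only act on final transcripts

const MODES = ["drag", "rotate", "scale", "animate"];
let currentMode = "drag"; // hypothetical app state

recognition.onresult = (event) => {
  const transcript = event.results[event.results.length - 1][0].transcript
    .trim()
    .toLowerCase();
  // Switch modes if the phrase contains one of the known keywords.
  const mode = MODES.find((m) => transcript.includes(m));
  if (mode) {
    currentMode = mode;
    console.log(`Switched to ${mode} mode`);
  }
};

recognition.start();
```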
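Drag/drop import is standard Three.js GLTFLoader territory. A simplified sketch (it assumes a `scene` object already set up elsewhere, and skips error handling):

```js
// Simplified sketch: import a dropped GLTF/GLB file into a three.js scene.
import { GLTFLoader } from "three/addons/loaders/GLTFLoader.js";

const loader = new GLTFLoader();

window.addEventListener("dragover", (e) => e.preventDefault());
window.addEventListener("drop", async (e) => {
  e.preventDefault();
  const file = e.dataTransfer.files[0];
  if (!file) return;
  const buffer = await file.arrayBuffer();
  // parse() loads from memory instead of a URL; the empty string is
  // the base path used to resolve any external resources.
  loader.parse(buffer, "", (gltf) => scene.add(gltf.scene));
});
```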
GitHub repo: https://github.com/collidingScopes/3d-model-playground
Demo: https://xcancel.com/measure_plan/status/1929900748235550912
I'd love to get your feedback! Thank you
Here's a quick video demo showing how it works: https://x.com/measure_plan/status/1929900748235550912
Very cool! I like the different modes. I've always been fascinated with this space and products like Leap Motion: https://www.youtube.com/watch?v=zXghYjh6Gro
Amazing! Maybe use specific finger positions/gestures to trigger the rotation and scale functions (index finger up and within a bounding box of the model, perhaps, for rotation; similarly, a two-finger pinch to scale).
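Something like this could work for the pinch part, using MediaPipe's hand landmarks (indices 4 and 8 are the thumb and index fingertips; the threshold is a guess that would need tuning):

```js
// Rough sketch: pinch detection from MediaPipe Hands landmarks.
// Landmark coordinates are normalized to [0, 1] image space.
const PINCH_THRESHOLD = 0.05; // tune empirically

function isPinching(landmarks) {
  const thumbTip = landmarks[4]; // THUMB_TIP
  const indexTip = landmarks[8]; // INDEX_FINGER_TIP
  return Math.hypot(thumbTip.x - indexTip.x, thumbTip.y - indexTip.y)
    < PINCH_THRESHOLD;
}

// With @mediapipe/hands, landmarks arrive once per frame:
// hands.onResults((results) => {
//   for (const landmarks of results.multiHandLandmarks ?? []) {
//     if (isPinching(landmarks)) { /* enter scale mode */ }
//   }
// });
```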
Awesome, nice work! This type of tech opens up a world of physical games.
Slightly on topic: does anyone remember Leap Motion, and is anyone aware of any current support for it? I found an original one in a drawer when I was having a clearout the other day.
Sounds very cool, but I could not make sense of the on-screen instructions. Some images or animations would go a long way toward explaining the controls.
Great job! Looks very useful for interactive content creation and product showcasing. I'll definitely be testing it more. Thanks for sharing.
I understand you need your face in the videos for the demos. But I want to mention that you should make sure your system works with your hands in your lap. As shown, the user is going to experience "gorilla arm" fatigue very quickly.