Using raycast to sonify point cloud data and geometry of shapes
Hello community,
I am working with the blind and the visually impaired to assist with object identification.
I am looking for development advice on the best approach to accurately sonifying shapes in a 3D Unity scene at runtime.
I have been using existing datasets of simple objects, as well as an iPad Pro, and have seen the geometry mesh and point cloud information that can be captured. I am really interested in the community's opinion on how sonification of this data could be approached. I have a procedural and spatial audio background, but I am struggling to get the data into a usable format for audio implementation.
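For context, here is roughly how far I have got with flattening the captured geometry into something audio code could query. This is just a sketch under my own assumptions: I am assuming the scan arrives as a standard Unity Mesh (e.g. via ARFoundation's ARMeshManager), and the class name is purely illustrative.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical helper: flattens a scanned mesh into a list of world-space
// points that audio code can query. Assumes the scan is available as a
// standard Unity Mesh; names and structure are illustrative only.
public class ScannedPointSampler : MonoBehaviour
{
    public MeshFilter scannedMesh;   // assumed source of the scan data
    public List<Vector3> WorldPoints { get; } = new List<Vector3>();

    void Start()
    {
        // Mesh vertices are stored in local space; transform them into
        // world space so they can be compared against the listener's pose.
        Vector3[] vertices = scannedMesh.sharedMesh.vertices;
        Transform t = scannedMesh.transform;
        foreach (Vector3 v in vertices)
            WorldPoints.Add(t.TransformPoint(v));
    }
}
```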
I have looked into volumetric audio; however, surfaces don't have great definition with this method, and if blindfolded you could not detect the object's shape or the detail of its edges.
I feel that a focused raycast, almost an audio-scrubbing method, cast from the first-person view and then spatialised, would possibly be the best approach, although I am unsure how to implement this dev-wise and would like to hear your thoughts. It would be great to scan through the point cloud or geometry data and extract parameters from it that could drive sound. I have attempted to draw this: https://share.icloud.com/photos/0o81uJF3vmaz6YM5vOm-efzzQ
I would need to leverage the raw point cloud data or geometry to sonify; I am just really unsure about this and would appreciate any advice. A rough sketch of the probe idea follows below.
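To make the probe idea concrete, here is a minimal sketch: a single raycast from the first-person camera, with a spatialised AudioSource parked at the hit point. I am assuming the scanned geometry has mesh colliders attached (raycasts only hit colliders), and the distance-to-pitch/volume mappings are arbitrary placeholders.

```csharp
using UnityEngine;

// Minimal sketch of the "audio scrubbing" probe: one raycast from the
// first-person camera, with a spatialised AudioSource placed at the hit
// point. The parameter mappings are placeholders, not a final design.
[RequireComponent(typeof(AudioSource))]
public class AudioProbe : MonoBehaviour
{
    public Camera probeCamera;   // the FPS camera doing the "scrubbing"
    public float maxRange = 5f;  // how far the probe reaches, in metres

    AudioSource source;

    void Start()
    {
        source = GetComponent<AudioSource>();
        source.spatialBlend = 1f;   // fully 3D so the hit point is localisable
        source.loop = true;
        source.Play();
    }

    void Update()
    {
        Ray ray = new Ray(probeCamera.transform.position, probeCamera.transform.forward);
        if (Physics.Raycast(ray, out RaycastHit hit, maxRange))
        {
            // Park the sound on the surface so spatialisation conveys direction.
            transform.position = hit.point;

            // Closer surfaces sound louder and higher pitched (arbitrary mapping).
            float proximity = 1f - hit.distance / maxRange;
            source.volume = proximity;
            source.pitch = 0.5f + proximity;

            // hit.normal could additionally drive timbre, e.g. to bring out
            // edges by comparing normals of neighbouring rays (not shown).
        }
        else
        {
            source.volume = 0f;   // silence when the probe hits nothing
        }
    }
}
```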
This video is unusual but quite interesting in terms of approach; if a user/player had control of this, it would be an interesting way for a blind person to inspect a complicated object. https://vimeo.com/195054847
I would want to start with boxes and spheres first, and the case of someone moving around the object also needs consideration.
Thanks in advance
R
Wow, interesting. I'm not super familiar with the data one gets back from ARKit etc., but one approach might be to have three separate sounds, one anchored to the left ear, one to the right, and one forward, and modulate them in some way based on corresponding raycasts. For example, create a truncated cone or something similar extending in each of those three directions, adjust the sound volume or pitch according to how many points fall inside it, and adjust some other property, like repeat rate, based on how close the nearest points are. Hmm... actually, I guess I'm thinking of head-mounted devices. For a handheld device, I think your idea of a single probe makes sense.
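Very roughly, something like the sketch below. I'm assuming you already have the scan as a list of world-space points (e.g. from something like the sampler you sketched); every mapping here is a placeholder, and I've used pitch rather than repeat rate for proximity, since repeat rate would need extra pulse-timing logic.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the three-direction idea: left, right, and forward "cones"
// from the listener's head, each driving its own AudioSource. Volume
// follows how many scanned points fall inside a cone; pitch follows the
// proximity of the nearest point. All mappings are placeholders.
public class TriConeSonifier : MonoBehaviour
{
    public Transform head;                 // listener pose (camera or HMD)
    public AudioSource leftSource, rightSource, forwardSource;
    public float range = 3f;               // cone length in metres
    public float halfAngle = 25f;          // cone half-angle in degrees

    // Assumed to be filled elsewhere with world-space scan points.
    public List<Vector3> points = new List<Vector3>();

    void Update()
    {
        Sonify(-head.right, leftSource);
        Sonify(head.right, rightSource);
        Sonify(head.forward, forwardSource);
    }

    void Sonify(Vector3 direction, AudioSource source)
    {
        int count = 0;
        float nearest = range;
        foreach (Vector3 p in points)
        {
            Vector3 toPoint = p - head.position;
            if (toPoint.magnitude <= range &&
                Vector3.Angle(direction, toPoint) <= halfAngle)
            {
                count++;
                nearest = Mathf.Min(nearest, toPoint.magnitude);
            }
        }
        // Denser regions sound louder; nearer points raise the pitch.
        source.volume = Mathf.Clamp01(count / 200f);
        source.pitch = 1f + (1f - nearest / range);
    }
}
```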