This is an automated archive made by the Lemmit Bot.
The original was posted on /r/artificial by /u/NuseAI on 2024-03-25 05:23:57.
- Apple researchers are investigating the use of AI to identify when a user is speaking to a device without requiring a trigger phrase like ‘Siri’.
- The study trained a large language model on both speech transcripts and acoustic data to detect patterns indicating that the user is addressing the device (see the illustrative sketch after this list).
- The model showed promising results, outperforming both audio-only and text-only models as its size increased.
- Eliminating the ‘Hey Siri’ prompt could raise concerns about privacy and constant listening by devices.
- Apple’s handling of audio data has faced scrutiny in the past, leading to policy changes regarding user data and Siri recordings.
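A minimal sketch of the kind of multimodal classifier the summary describes: fusing an acoustic feature vector with text-token embeddings to predict whether an utterance is directed at the device. This is not Apple's actual architecture; all module choices, dimensions, and names here are illustrative assumptions.

```python
# Toy sketch (assumptions, not Apple's system): combine acoustic features with
# ASR token embeddings and output P(utterance is directed at the device).
import torch
import torch.nn as nn


class DeviceDirectedSpeechClassifier(nn.Module):
    """Illustrative multimodal classifier: token IDs + audio features -> probability."""

    def __init__(self, vocab_size=10_000, text_dim=128, audio_dim=64, hidden_dim=256):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, text_dim)
        self.text_encoder = nn.GRU(text_dim, hidden_dim, batch_first=True)
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)  # project acoustic features
        self.head = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # logit for "user is addressing the device"
        )

    def forward(self, token_ids, audio_features):
        # token_ids: (batch, seq_len) int64; audio_features: (batch, audio_dim) float32
        _, text_state = self.text_encoder(self.token_embed(token_ids))
        fused = torch.cat([text_state[-1], self.audio_proj(audio_features)], dim=-1)
        return torch.sigmoid(self.head(fused)).squeeze(-1)


# Usage with random placeholder inputs
model = DeviceDirectedSpeechClassifier()
tokens = torch.randint(0, 10_000, (2, 12))  # fake ASR token IDs
audio = torch.randn(2, 64)                  # fake acoustic embeddings
print(model(tokens, audio))                 # one probability per utterance
```

The design point the summary hints at is fusion: combining both modalities lets the model catch cues (prosody, phrasing) that neither an audio-only nor a text-only model sees alone, consistent with the reported result that the multimodal model improved with scale.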