The iPhone can show a message saying it has detected a sound that might be a doorbell. This is one of the accessibility notifications designed for people with hearing impairments. Apple has been rolling out many of these features lately, and Google has been doing something similar on Android.
The iPhone can detect a variety of sounds, including sirens, smoke and fire alarms, cats and dogs, appliances, doorbells and door knocks, kettles, and running water. If the phone is already listening for the "Hey Siri" command, you might expect it could simply add a few more sounds to listen for. Instead, turning on listening for these extra sounds deactivates the "Hey Siri" voice command, and it is not clear why.
What if sound recognition could be customized to perform operational and core IT tasks? It would be a way to make the phone respond more precisely to your environment. As in the classic predictive-maintenance example from machine learning, could the phone listen to a huge piece of heating equipment and tell whether the XYZ part is starting to fail?
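Apple does not expose Sound Recognition for custom tasks like this, so purely as a thought experiment, here is a minimal sketch of the underlying idea: summarize an audio clip as a spectral profile, compare it against the profile of known "healthy" machine recordings, and flag clips that drift too far. The function names, the band count, and the synthetic 120 Hz "hum" versus 900 Hz "whine" are all hypothetical illustrations, not anything from a real product.

```python
import numpy as np

def spectral_features(signal, n_bands=8):
    """Summarize a 1-D audio signal as normalized energy across frequency bands."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spectrum, n_bands)
    feats = np.array([band.mean() for band in bands])
    return feats / feats.sum()  # normalize so overall loudness doesn't dominate

def sounds_anomalous(reference_clips, new_clip, threshold=0.1):
    """Compare a new clip's spectral profile against the average 'healthy' profile."""
    healthy = np.mean([spectral_features(c) for c in reference_clips], axis=0)
    distance = np.linalg.norm(spectral_features(new_clip) - healthy)
    return bool(distance > threshold)

# Synthetic demo: a 'healthy' motor hums at 120 Hz; a failing one adds a 900 Hz whine.
t = np.linspace(0, 1, 8000, endpoint=False)  # one second at 8 kHz
healthy_clips = [np.sin(2 * np.pi * 120 * t + phase) for phase in (0.0, 0.5, 1.0)]
failing_clip = np.sin(2 * np.pi * 120 * t) + 0.8 * np.sin(2 * np.pi * 900 * t)

print(sounds_anomalous(healthy_clips, healthy_clips[0]))  # False
print(sounds_anomalous(healthy_clips, failing_clip))      # True
```

A production system would use learned classifiers rather than a hand-tuned distance threshold, but the shape of the problem is the same: turn sound into features, then ask whether the features match what a working machine should produce.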
Perhaps the feature could do more than detect someone coming through the door: "Alert! Someone from Legal is approaching. Hide now."
What about a voice identifier? The phone could be trained on users' voices to display a caller's name when caller ID isn't available. It could also serve as a management tool, alerting someone if no keypad clicks are heard for a certain period. (It's well known that videoconferencing systems keep listening even when a participant mutes the microphone.) In short, a smartphone could be personalized to identify any sound a user desires, in whatever way helps that user's business. And what if your phone could identify who is actually speaking? Some systems do offer this, but it isn't universal and doesn't work with all systems.
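The keypad-monitoring idea above is simple to state precisely: record a timestamp whenever the sound-recognition layer classifies a sound as a keystroke, and raise a flag once the silence exceeds a threshold. The class below is a hypothetical sketch of that logic; the names and the five-minute timeout are illustrative assumptions, and a real implementation would be driven by an actual sound classifier rather than manual calls.

```python
import time

class InactivityMonitor:
    """Flags when no keypad clicks have been detected for `timeout_seconds`.

    Hypothetical sketch: `record_click` would be called by whatever
    sound-recognition layer classifies a sound as a keystroke.
    """

    def __init__(self, timeout_seconds=300.0):
        self.timeout = timeout_seconds
        self.last_click = time.monotonic()

    def record_click(self, at=None):
        # Timestamps are injectable so the logic is testable without real clocks.
        self.last_click = at if at is not None else time.monotonic()

    def is_idle(self, now=None):
        now = now if now is not None else time.monotonic()
        return (now - self.last_click) > self.timeout

monitor = InactivityMonitor(timeout_seconds=300)
monitor.record_click(at=1000.0)
print(monitor.is_idle(now=1100.0))  # False: only 100 s of silence
print(monitor.is_idle(now=1400.1))  # True: more than five minutes without a click
```

Whether employers *should* deploy something like this is a separate question from whether they *could*, which is precisely the point of the speculation above.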
Imagine if the phone could listen to a speaker and deliver a slower, more precise interpretation. It could show a live transcript on the display, but that is hard for others not to notice; prompts delivered through earbuds would be more discreet.
There could also be real-time alerts for voice-based lie detection. Imagine being in a conversation and hearing in your earbud, "That's probably a lie." The same idea could be supportive during audience or board presentations: "Listening for high volumes of sighs or yawns. You're losing people." A great speaker will know this instinctively, but a speaker focused on complex material can easily fail to notice the listeners becoming distracted.
Google, Apple, and other companies deserve credit for improving accessibility features, but these devices could do much more.