This is an automated archive made by the Lemmit Bot.
The original was posted on /r/artificial by /u/NuseAI on 2024-04-02 17:32:54.
- Apple researchers have developed an AI system called ReALM that understands on-screen context and resolves ambiguous references, improving interactions with voice assistants.
- ReALM reconstructs the screen by converting parsed on-screen entities into a textual representation that a language model can reason over (see the sketch after this list), and it outperforms GPT-4 at resolving such references.
- Apple is investing in making Siri more conversational and context-aware through this research.
- However, automated parsing of screens has limitations, especially with complex visual references.
- Apple is catching up in AI research but faces stiff competition from tech rivals like Google, Microsoft, Amazon, and OpenAI.
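For anyone curious what "reconstructing the screen as text" might look like in practice, here's a minimal sketch in Python. The entity format, the row-bucketing heuristic, and the function names below are my own assumptions for illustration; the post doesn't describe Apple's actual implementation.

```python
# Hypothetical sketch: turn parsed on-screen entities into a textual
# screen representation, as the post describes ReALM doing. The data
# format and layout heuristic are assumptions, not Apple's code.

from dataclasses import dataclass

@dataclass
class Entity:
    text: str   # the entity's visible text (e.g. a phone number)
    x: float    # left edge of its bounding box, normalized 0..1
    y: float    # top edge of its bounding box, normalized 0..1

def screen_to_text(entities: list[Entity], row_height: float = 0.05) -> str:
    """Render entities as lines of text, top-to-bottom, left-to-right.

    Entities whose vertical positions fall into the same bucket are
    treated as one visual row, approximating the screen's spatial
    layout in plain text so a language model can reason over it.
    """
    rows: dict[int, list[Entity]] = {}
    for e in entities:
        rows.setdefault(int(e.y / row_height), []).append(e)

    lines = []
    for _, row in sorted(rows.items()):
        lines.append("  ".join(e.text for e in sorted(row, key=lambda e: e.x)))
    return "\n".join(lines)

if __name__ == "__main__":
    screen = [
        Entity("Contact Us", x=0.1, y=0.02),
        Entity("555-0123", x=0.1, y=0.30),
        Entity("support@example.com", x=0.5, y=0.30),
    ]
    # The textual screen plus the user's utterance would then go to the
    # model, so "call that number" can be resolved to 555-0123.
    print(screen_to_text(screen))
```

The appeal of this approach is that an off-the-shelf language model can handle the resolved text directly, with no image input needed, though as the post notes it breaks down when references depend on complex visual layout.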
Source: