Siri Is More Effective: Apple Asserts Its ReALM Outperforms OpenAI's GPT-4 at This Task

By Consultants Review Team | Wednesday, 03 April 2024

In a research paper released this week, Apple's AI researchers offered insight into the company's AI ambitions for Siri. The study describes Reference Resolution As Language Modeling (ReALM), a conversational AI system designed to improve reference resolution.

How does ReALM work?

ReALM could help Siri comprehend the context of a conversation, interpret onscreen material more effectively, and recognize activity running in the background.

Additionally, the system converts conversational, onscreen, and background entities into a text format that large language models (LLMs) can process.
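As a rough illustration of the idea, not Apple's actual implementation, reference resolution as language modeling can be thought of as serializing candidate entities into a plain-text list that a language model then reasons over to pick the one the user means. The entity types, fields, and prompt format in the sketch below are hypothetical.

```python
# Hypothetical sketch: candidate entities (conversational, onscreen, background)
# are serialized into numbered plain text so a language model can identify the
# entity a user request refers to. Names, fields, and the prompt wording are
# illustrative only, not Apple's API.

from dataclasses import dataclass

@dataclass
class Entity:
    kind: str    # "conversational", "onscreen", or "background"
    label: str   # human-readable description of the entity

def build_prompt(user_request: str, entities: list[Entity]) -> str:
    """Serialize candidate entities into a numbered list and append the request."""
    lines = ["Candidate entities:"]
    for i, entity in enumerate(entities, start=1):
        lines.append(f"{i}. [{entity.kind}] {entity.label}")
    lines.append(f'User request: "{user_request}"')
    lines.append("Answer with the number of the entity the user is referring to.")
    return "\n".join(lines)

# Example: the language model would receive this text and reply with an index.
prompt = build_prompt(
    "Call the one at the bottom",
    [
        Entity("conversational", "pharmacy mentioned earlier in the conversation"),
        Entity("onscreen", "phone number shown at the bottom of the screen"),
        Entity("background", "song currently playing in Music"),
    ],
)
print(prompt)
```

In this framing, resolving a reference becomes an ordinary text-completion task, which is what allows an LLM to handle onscreen and background context alongside the conversation itself.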

What did the researchers say when comparing ReALM to OpenAI's models?

The researchers compared ReALM models against OpenAI's GPT-3.5 and GPT-4, the LLMs that power the free ChatGPT and the paid ChatGPT Plus, respectively. "Across various reference types, we demonstrate large improvements over an existing system with similar functionality, with our smallest model obtaining absolute gains of over 5% for onscreen references," the researchers said in the report. They added that their larger models significantly outperformed GPT-4, while their smallest model performed comparably to it.

How did ReALM do when compared to GPT-4?

"We show that ReALM outperforms previous approaches, and performs roughly as well as the state of the art LLM today, GPT-4, despite consisting of far fewer parameters," the report said in relation to ReALM's performance.

Apple's WWDC in June

According to Greg Joswiak, Apple's senior vice president of worldwide marketing, AI may be the main topic of discussion at the company's upcoming annual Worldwide Developers Conference (WWDC), which is set to begin on June 10.
