Scientists have taken a novel approach to training artificial intelligence (AI): teaching a system through the eyes of a six-month-old baby named Sam, with the aim of better understanding how humans develop. The AI learned much as the child did, by observing the environment, interacting with nearby people, and forming connections between visual and auditory stimuli.
During the experiment, roughly 60 hours of Sam's daily activities were captured at intervals by a head-mounted camera. The AI model, dubbed Child's View for Contrastive Learning (CVCL), used a vision encoder and a text encoder to interpret the images and the accompanying language obtained through Sam's headset.
The study examined the link between words and their visual referents, investigating how children come to associate a term like 'ball' with specific visual attributes. Brenden Lake, an assistant professor at NYU's Center for Data Science and Department of Psychology, emphasized that studying language learning in children through AI models helps resolve debates about which ingredients are essential for word acquisition.
Even though the footage did not always pair words directly with what was on screen, the CVCL model still learned to recognize meanings. Using a contrastive learning approach, in which the model learns to predict which images and text belong together, it achieved a 61.6 percent classification accuracy and correctly identified unseen examples, such as 'apple' and 'dog,' 35 percent of the time.
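The contrastive approach described above pairs each camera frame with the utterance heard at the same moment and trains the model to match true pairs against mismatched ones within a batch. The following is a minimal sketch of such a symmetric contrastive (InfoNCE-style) loss in NumPy; the pre-computed `img_emb` and `txt_emb` arrays are illustrative assumptions, not outputs of the actual CVCL encoders.

```python
import numpy as np

def l2_normalize(x):
    # Project each embedding onto the unit sphere so dot products are cosine similarities.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired embeddings.

    Matching image/text pairs lie on the diagonal of the similarity
    matrix; the loss pushes those similarities above all mismatches.
    """
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    logits = img @ txt.T / temperature       # (batch, batch) similarity scores
    targets = np.arange(len(logits))         # i-th image matches i-th caption

    def cross_entropy(lg):
        # Numerically stable log-softmax over each row, scored at the diagonal.
        lg = lg - lg.max(axis=1, keepdims=True)
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[targets, targets].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

# Hypothetical embeddings standing in for encoder outputs.
rng = np.random.default_rng(0)
img_emb = rng.normal(size=(8, 16))
txt_emb_aligned = img_emb.copy()      # perfectly matched pairs
txt_emb_shuffled = img_emb[::-1]      # every pair mismatched
```

With aligned pairs the diagonal dominates and the loss is near zero; shuffling the captions breaks the pairing and drives the loss up, which is the signal the model trains on.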
While acknowledging that the collected data is imperfect, Lake stressed its uniqueness: it offers an unprecedented window into a child's learning environment. The researchers next plan to replicate the early language learning of children around two years old, further advancing the understanding of how far AI can emulate human cognitive processes.