Yonkers Observer

From Baby Talk to Baby A.I.

by Yonkers Observer Report
April 30, 2024
in Technology

We ask a lot of ourselves as babies. Somehow we must grow from sensory blobs into mobile, rational, attentive communicators in just a few years. Here you are, a baby without a vocabulary, in a room cluttered with toys and stuffed animals. You pick up a Lincoln Log and your caretaker tells you, “This is a ‘log.’” Eventually you come to understand that “log” does not refer strictly to this particular brown plastic cylinder or to brown plastic cylinders in general, but to brown plastic cylinders that embody the characteristics of felled, denuded tree parts, which are also, of course, “logs.”

There has been much research and heated debate around how babies accomplish this. Some scientists have argued that most of our language acquisition can be explained by associative learning, as we relate sounds to sensibilia, much like dogs associate the sound of a bell with food. Others claim that there are features built into the human mind that have shaped the forms of all language, and are crucial to our learning. Still others contend that toddlers build their understanding of new words on top of their understanding of other words.

The debate played out on a recent Sunday morning, as Tammy Kwan and Brenden Lake delivered blackberries from a bowl into the mouth of their twenty-one-month-old daughter, Luna. Luna was dressed in pink leggings and a pink tutu, with a silicone bib around her neck and a soft pink hat on her head. A lightweight, GoPro-style camera was attached to the front of the hat.

“Babooga,” she said, pointing a round finger at the berries. Dr. Kwan gave her the rest, and Dr. Lake looked at the empty bowl, amused. “That’s like $10,” he said. A light on the camera blinked.

For an hour each week over the past 11 months, Dr. Lake, a psychologist at New York University whose research focuses on human and artificial intelligence, has been attaching a camera to Luna and recording things from her point of view as she plays. His goal is to use the videos to train a language model using the same sensory input that a toddler is exposed to — a LunaBot, so to speak. By doing so, he hopes to create better tools for understanding both A.I. and ourselves. “We see this research as finally making that link, between those two areas of study,” Dr. Lake said. “You can finally put them in dialogue with each other.”

© 2025 Yonkers Observer or its affiliated companies.