At MIT, Machines Learn Language the Same Way Kids Do

Teaching language to an artificial intelligence system is child's play, thanks to MIT. Researchers at the Massachusetts Institute of Technology have developed a parser that imitates children's learning process: observing the environment and making connections. The system studies captioned videos and learns to associate words with objects and actions.

"We don't give the system the meaning for the sentence. We say, 'There's a sentence and a video. The sentence has to be true of the video. Figure out some intermediate representation that makes it true of the video,'" study co-author and CSAIL researcher Andrei Barbu said in a statement.

When given a new sentence, the parser can accurately predict its meaning, even without visual clues.

This "weakly supervised" approach, which requires limited training input, could expand the types of usable data and reduce the effort needed to train parsers. Annotations can certainly speed up the process, but they aren't needed for learning. "People talk to each other in partial sentences, run-on thoughts, and jumbled language," said Barbu, a researcher in the McGovern Institute's Center for Brains, Minds, and Machines (CBMM) at MIT.

Ideally, an android equipped with MIT's parser could constantly observe its environment, reinforcing its understanding of spoken commands, even when they aren't fully grammatical or clear. "You want a robot in your home that will adapt to their particular way of speaking … and still figure out what they mean," he added.

The parser could also shed light on how young children learn language. "A child has access to redundant, complementary information from different modalities, including hearing parents and siblings talk about the world, as well as tactile information and visual information, [which help him or her] to understand the world," said co-author Boris Katz, a principal research scientist and head of the CSAIL InfoLab Group. "It's an amazing puzzle, to process all this simultaneous sensory input," he continued. "This work is part of [a] bigger piece to understand how this kind of learning happens in the world."

The parser is described in a paper presented at this week's Empirical Methods in Natural Language Processing conference. Moving forward, the researchers are interested in modeling interactions, not just passive observations.
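MIT's actual parser learns compositional sentence representations, which is far more involved than anything shown here. But the core weak-supervision idea, pairing whole sentences with whole scenes and providing no per-word labels, can be illustrated with a toy cross-situational learner that grounds words in visual concepts purely through co-occurrence counting. Everything below (the function, the data, the concept labels) is invented for illustration and is not from the paper.

```python
from collections import defaultdict

def learn_groundings(pairs):
    """Count how often each word co-occurs with each visual concept
    across (sentence, observed_concepts) training pairs, then map
    each word to its most frequent co-occurring concept."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence, concepts in pairs:
        for word in sentence.lower().split():
            for concept in concepts:
                counts[word][concept] += 1
    return {word: max(c, key=c.get) for word, c in counts.items()}

# Toy "captioned videos": each sentence is paired only with the set of
# concepts visible in the clip -- no word is individually annotated.
pairs = [
    ("the dog runs",   {"dog", "running"}),
    ("the dog sleeps", {"dog", "sleeping"}),
    ("the cat runs",   {"cat", "running"}),
]

groundings = learn_groundings(pairs)
print(groundings["dog"])   # "dog" co-occurs with the dog concept twice
print(groundings["runs"])  # "runs" co-occurs with running twice
```

Because "dog" appears in two clips that both contain the dog concept but only one clip each with running or sleeping, the counts disambiguate it, which is the same kind of redundancy across situations that lets the real system learn without explicit annotations.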
