Meta’s new learning algorithm can teach AI to perform multiple tasks

Data2vec is part of a big trend in AI towards models that can learn to understand the world in multiple ways. “It’s a smart idea,” said Ani Kembhavi of the Allen Institute for AI in Seattle, who works on vision and language. “It’s promising progress when it comes to generalized learning systems.”

An important caveat: although the same learning algorithm can be used for different skills, it can only learn one skill at a time. Once it has learned to recognize images, it must start from scratch to learn to recognize speech. Giving an AI multiple skills at once is hard, but that is something the Meta AI team plans to tackle next.

The researchers were surprised to find that their approach actually outperformed existing techniques at image and speech recognition, and that it performed as well as leading language models at text comprehension.

Mark Zuckerberg is already devising potential metaverse applications. “This will all eventually get built into AR glasses with an AI assistant,” he posted to Facebook today. “It could help you cook dinner, noticing if you miss an ingredient, prompting you to turn down the heat, or more complex tasks.”

For Auli, the main takeaway is that researchers should step out of their silos. “Hey, you don’t have to focus on one thing,” he says. “If you have a good idea, it could help across the board.”
