Mark Zuckerberg wants to create a metaverse voice assistant for Facebook that outdoes Alexa and Siri

Meta, the company formerly known as Facebook, has shifted its long-term strategy away from its social networking apps and toward the metaverse, a virtual world where people wearing augmented and virtual reality headsets can talk to each other, play games, hold meetings, and otherwise engage in social activities.

This has raised many questions: what does the shift mean for a company that has focused on social media for nearly two decades, will Meta succeed in its new goal of building a metaverse future, and what will that future look like for the billions of people who use Meta’s products every day? On Wednesday, Meta CEO Mark Zuckerberg revealed some answers during a keynote presentation on the company’s latest developments in AI.

One of Meta’s main goals is to develop advanced AI voice assistant technology – think Alexa or Siri, but smarter – which the company plans to use in its AR/VR products, such as its Quest headsets (formerly Oculus), Portal smart displays, and Ray-Ban smart glasses.

“The kinds of experiences you will have in the metaverse go beyond what is possible today,” Zuckerberg said. “This will require progress in a whole range of areas, from new hardware devices to software for building and exploring worlds. And the key to unlocking many of these advances is AI.”

The presentation comes at one of the most challenging moments in the company’s history. Meta’s stock price has fallen sharply, its advertising model has been shaken by Apple’s privacy changes on mobile devices, and it faces the looming threat of political regulation.

It is therefore logical for the company to look to the future, one in which Meta hopes to introduce sophisticated artificial intelligence for language processing.

Mark Zuckerberg (left) – in the form of a virtual reality avatar – demonstrates how his company’s new AI tools allow you to create virtual environments by saying what you want to see.
Meta

This was the first time Meta has held an event dedicated exclusively to its artificial intelligence work, a company spokesperson said. Accordingly, the company acknowledged that much of this AI is still under development and not yet in use. The demonstrations are exploratory; Meta’s demo videos on Wednesday included a disclaimer at the bottom noting that many images and examples are for illustrative purposes only and are not actual products. Also: avatars in the metaverse still don’t have legs.

However, if Meta encourages its world-class computer science researchers to develop these tools, there is a good chance that it will succeed. And if fully realized, these technologies could change the way we communicate, both in real life and in virtual reality. These developments also pose significant privacy concerns about how more personal data collected from AI-powered wearables is stored and shared.

Here are a few things to know about how Meta is building a voice assistant using new AI models, as well as the privacy and ethics issues raised by a metaverse powered by super powerful artificial intelligence.

Meta is building its own ambitious voice assistant for AR/VR

On Wednesday, it became clear that Meta sees voice assistants as a key part of the metaverse and knows that its voice assistant needs to be more conversational than what we have now. For example, most voice assistants can easily answer the question, “What time is it today?” But if you ask a follow-up question, such as, “Is it warmer than last week?” the voice assistant is likely to be confused.
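
To see why that follow-up trips assistants up, here is a minimal sketch in Python. It is entirely hypothetical – the function and its canned answers are invented for illustration, not Meta’s code – but it shows the basic idea: a stateless assistant has nothing to resolve the second question against, while one that remembers even a small amount of dialogue context can.

```python
# Minimal, hypothetical sketch of why follow-up questions confuse stateless
# assistants: without remembered dialogue context, the second query has
# nothing to resolve against. All names and canned answers are invented.

def answer(query: str, context: dict) -> str:
    q = query.lower()
    if "what time" in q:
        context["topic"] = "conditions today"   # remember what we talked about
        return "It's 3:00 pm."
    if "warmer than last week" in q:
        if "topic" not in context:
            # A stateless assistant effectively lands here and gets confused.
            return "Sorry, I don't understand."
        # With context, we know the follow-up still refers to today.
        return "Yes, today is about 4 degrees warmer than last week."
    return "Sorry, I don't understand."

context = {}
print(answer("What time is it today?", context))
print(answer("Is it warmer than last week?", context))   # resolved via context
```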

Meta wants its voice assistant to be better at picking up contextual clues in conversation, along with other signals it can collect about our physical bodies, such as our gaze, facial expressions, and hand movements.

“To support true creation and exploration of the world, we need to move beyond the current state of the art for smart assistants,” Zuckerberg said on Wednesday.

While Meta’s Big Tech competitors – Amazon, Apple, and Google – already have popular voice assistant products, either on mobile devices or as standalone hardware like Alexa devices, Meta does not (aside from some limited voice command features on its Ray-Ban, Oculus, and Portal devices).

“When we have glasses on our faces, it will be the first time that the artificial intelligence system will be able to really see the world from our perspective – to see what we see, hear what we hear and much more,” Zuckerberg said. “So the ability and expectations we have of the AI system will be much higher.”

To meet those expectations, the company says it is developing a project called CAIRaoke, an AI neural model (a statistical model loosely inspired by biological neural networks in the human brain) to power its voice assistant. The model uses self-supervised learning, which means that instead of being trained on large labeled datasets like many other AI models, the AI can essentially teach itself.

“Before, all the blocks were built separately, and then you kind of glued them together,” Meta AI research director Joëlle Pineau told Recode. “As we move to self-supervised learning, we have the opportunity to learn the whole conversation.”
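
Pineau’s “blocks glued together” remark describes the classic modular assistant pipeline – understanding, state tracking, policy, generation – as opposed to a single model trained on the whole conversation. The sketch below is a purely illustrative contrast under that reading; every function name is an invented placeholder, and nothing here reflects CAIRaoke’s actual architecture, which Meta has not published as code.

```python
# Illustrative contrast between a modular pipeline and an end-to-end model.
# Every function here is an invented placeholder, not Meta's API.

def nlu_block(utterance: str) -> dict:
    """Placeholder natural-language understanding: guess an intent."""
    return {"intent": "set_reminder" if "remind" in utterance.lower() else "unknown"}

def state_tracker_block(state: dict, nlu_out: dict) -> dict:
    """Placeholder dialogue-state tracker: merge the new intent into state."""
    return {**state, **nlu_out}

def policy_block(state: dict) -> str:
    """Placeholder policy: pick the next system action."""
    return "confirm" if state.get("intent") == "set_reminder" else "clarify"

def nlg_block(action: str) -> str:
    """Placeholder natural-language generation."""
    return "OK, I'll remind you." if action == "confirm" else "Could you rephrase that?"

def pipeline_assistant(utterance: str, state: dict) -> str:
    # "Blocks built separately and glued together": each stage is trained and
    # tuned on its own, then hand-wired into a pipeline.
    return nlg_block(policy_block(state_tracker_block(state, nlu_block(utterance))))

def end_to_end_assistant(conversation: list) -> str:
    # End-to-end alternative: a single model reads the whole conversation and
    # produces the reply directly (stubbed with a canned response here).
    return "OK, I'll remind you."

print(pipeline_assistant("Remind me to call mom", {}))
print(end_to_end_assistant(["Remind me to call mom"]))
```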

As an example of how this technology could be applied, Zuckerberg – in the form of a virtual reality avatar – demonstrated a tool the company is working on called BuilderBot, which lets you describe what you want to see in your virtual environment. For example, saying “I want to see a palm tree there” could make an AI-generated palm tree appear where you want it, based on what you say, your view, your controllers or hands, and general contextual awareness, according to the company.
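
As a rough illustration of what “based on what you say, your view, your controllers or hands” could mean mechanically, the hypothetical sketch below fuses a spoken command with a gaze direction to pick a spot for the requested object. The parsing and geometry are deliberately naive stand-ins invented for this example; Meta has not described BuilderBot’s actual interface.

```python
# Toy, hypothetical illustration of fusing a spoken command with a gaze
# direction to decide where a requested object should appear in a scene.
# Not BuilderBot's interface; everything here is a naive stand-in.

import numpy as np

def place_from_command(command, head_position, gaze_direction, distance=5.0):
    """Return (object_word, position): place the object a fixed distance
    along the user's gaze ray, starting from their head position."""
    words = command.lower().split()
    # Crude stand-in for speech understanding: take the word after "a".
    obj = words[words.index("a") + 1] if "a" in words else "object"
    unit_gaze = gaze_direction / np.linalg.norm(gaze_direction)
    return obj, head_position + distance * unit_gaze

obj, pos = place_from_command(
    "I want to see a palm tree over there",
    head_position=np.array([0.0, 1.7, 0.0]),    # user's head in the scene
    gaze_direction=np.array([0.0, 0.0, 1.0]),   # looking straight ahead
)
print(obj, pos)   # -> palm [0.  1.7 5. ]
```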

Meta still has more research to do to make this possible, and it is studying what it calls “egocentric perception” – understanding the world from a first-person perspective – to build on. The company is currently testing the model’s technology in its Portal smart displays.

Ultimately, the company also hopes to be able to capture inputs beyond speech – such as a user’s movement and position – to build even smarter virtual assistants that can predict what users want.

AI in the metaverse will present ethical challenges

Privacy concerns and failures have haunted Meta and other major technology companies because their business models are built around collecting user data: our browsing histories, interests, personal communications, and more.

That concern is even greater with AR/VR, privacy experts say, because these devices can track even more sensitive data, such as eye movements, facial expressions, and body language.

Some AR/VR and AI ethicists are concerned about how personal these data streams may become, what predictions AI may make from them, and how the data will be shared.

“Eye tracking data, sight data, literally being able to quantify whether you’re feeling a stimulus of sexual arousal or a loving look – it’s all worrying,” said Kavya Pearlman, founder of the XR Safety Initiative, a nonprofit that advocates for the ethical development of technologies such as VR. “Who has access to this data? What are they doing with this data?”

For now, the answers to those questions are not entirely clear, although Meta says it is committed to addressing such concerns.

Zuckerberg said the company is working with human rights, civil rights and privacy experts to build “systems based on fairness, respect and human dignity”.

But given the company’s record of privacy breaches, some tech ethicists are skeptical.

“From a purely scientific perspective, I’m really excited. But since it’s Meta, I’m afraid,” Pearlman said.

Responding to people’s concerns about privacy in the metaverse, Meta’s Pineau said that giving users control over the data they share can help alleviate those concerns.

“People are willing to share information when there is value they derive from it. And so, if you look, the notion of autonomy, control, and transparency is what really allows users to have more control over the way their data is used,” she said.

In addition to privacy concerns, some Meta AR/VR users worry that if the AI-powered metaverse takes off, it may not be accessible or safe for everyone. Already, some women have complained of sexual harassment in the metaverse; in one case, a beta tester of Meta’s social VR app Horizon Worlds reported being virtually groped by another user. Meta has since introduced a 4-foot virtual safety bubble around avatars to prevent “unwanted interactions.”

If Meta achieves its goal of using AI to make its AR/VR environments even more immersive and seamless in our daily lives, more problems with accessibility, safety, and discrimination are likely to arise. And although Facebook says it is thinking about these issues from the outset, its track record with other products is not reassuring.


