Google presents AI Overviews feature for Search at I/O 2024 conference

Last May, Google CEO Sundar Pichai said the company would use artificial intelligence to reimagine all of its products.

But because the new generative AI technology posed risks, such as the spread of false information, Google was cautious in applying the technology to its search engine, which is used by more than two billion people and which last year generated revenue of $175 billion.

On Tuesday, at Google's annual conference in Mountain View, California, Pichai showed how the company's aggressive work on artificial intelligence has finally made its way to the search engine. Starting this week, he said, U.S. users will see a feature, AI Overviews, that generates summaries of information on top of traditional search results. By the end of the year, more than a billion people will have access to the technology.

AI Overviews are likely to add to concerns that web publishers will see less traffic from Google Search, putting more pressure on an industry already reeling from rifts with other technology platforms. On Google, users will see longer summaries of a topic, which could reduce the need to visit other websites, although Google has downplayed these concerns.

“Links included in AI Overviews get more clicks” from users than if they were presented as traditional search results, Liz Reid, Google's vice president of search, wrote in a blog post. “We will continue to focus on sending valuable traffic to publishers and creators.”

The company also unveiled a number of other initiatives, including a lightweight AI model, new chips and so-called agents that help users perform tasks, in a bid to gain the upper hand in an AI slugfest with Microsoft and OpenAI, the creator of ChatGPT.

“We are in the early days of AI platform change,” Pichai said Tuesday at Google's I/O developer conference. “We want everyone to benefit from what Gemini can do,” including developers, startups and the public.

When ChatGPT was released in late 2022, some tech industry insiders considered it a serious threat to Google's search engine, the most popular way to get information online. Since then, Google has worked aggressively to regain its lead in artificial intelligence, releasing a family of technologies called Gemini, including new AI models for developers and chatbots for consumers. It has also integrated the technology into YouTube, Gmail, and Docs, helping users create videos, emails, and drafts with less effort.

Meanwhile, the competition between Google and OpenAI, along with its partner Microsoft, has continued. The day before Google's conference, OpenAI presented a new version of ChatGPT that behaves more like a voice assistant.

(The New York Times sued OpenAI and Microsoft in December for copyright infringement of news content related to artificial intelligence systems.)

At its event in Silicon Valley, Google showed how it would involve artificial intelligence more deeply in users' lives. The company presented Project Astra, an experiment to see how artificial intelligence could act like an agent, conversing with users by voice and responding to images and videos. Some of the features will be available to users of Google's Gemini chatbot later this year, Demis Hassabis, CEO of DeepMind, Google's artificial intelligence lab, wrote in a blog post.

DeepMind also unveiled Gemini 1.5 Flash, an AI model designed to be fast and efficient but lighter than Gemini 1.5 Pro, the mid-tier model that Google has built into many of its consumer services. Dr. Hassabis wrote that the new model was "highly capable" of reasoning and was good at summarizing information, chatting and captioning images and videos.

The company announced another AI model, Veo, which generates high-definition videos based on simple text instructions, similar to OpenAI's Sora system. Google said some creators may preview Veo and that others may join a waitlist to access it. Later this year, the company plans to bring some of Veo's capabilities to YouTube Shorts, the video platform's TikTok competitor, and other products.

Google also showed off the latest versions of its music generation tool, Lyria, and its image generator, Imagen 3. In February, Google's Gemini chatbot was criticized by users on social media for refusing to generate images of white people and for presenting inaccurate images of historical figures. The company said it would pause the ability to generate images of people until it fixed the problem.

Over the past three months, more than a million users have signed up for Gemini Advanced, the version of Google's chatbot available through a $20 monthly subscription, the company said.

In the coming months, Google will add Gemini Live, which will give users a way to speak to the chatbot via voice commands. The chatbot will respond in natural-sounding voices, Google said, and users will be able to interrupt Gemini to ask clarifying questions. Later this year, users will be able to use their cameras to show Gemini Live the physical world around them and have conversations about it with the chatbot.

In addition to AI overviews, Google's search engine will feature AI-organized search results pages, with generated titles highlighting different types of content. The feature will start with meal and recipe results and will later be offered for questions about shopping, travel and entertainment.

Ms. Reid, the head of search, said in an interview before the conference that she expected the search updates to save users time because Google “can do more work for you.”

Pichai said he expects the vast majority of people will interact with Gemini AI technology through Google's search engine.

“We will make it increasingly seamless for people to interact with Gemini,” Pichai said in a briefing before the conference.
