10 minutes to read - Jun 26, 2023

Every major AI feature announced at Google I/O 2023
As Google plays catch-up with the likes of Microsoft and OpenAI, the company unveiled a series of AI advancements aimed at regaining ground in the AI race.

Google was a leader in developing advanced artificial intelligence and machine learning models long before the generative AI craze began.

However, the company's generative AI efforts have paled in comparison to those of competitors such as OpenAI and Microsoft, which currently dominate the space with ChatGPT and Bing Chat. Even Google's direct answer to those services, Google Bard, has been underwhelming and fallen short of expectations.

To accelerate its growth and hopefully bridge the innovation gap, Google unveiled new AI models and software at Google I/O. Some of the highlights included a much-needed new and improved Bard, a brand new large language model, and a generative AI developer interface.

Here's a roundup of what Google announced and how you can take advantage.

1. PaLM 2

Google unveiled PaLM, the company's advanced large language model, in April 2022. Since then, developers have used the PaLM API for many different generative AI applications, including chatbots and content generation.

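For developers who want a feel for what building on the PaLM API looks like, here is a minimal sketch, assuming the google-generativeai Python client library and an API key from MakerSuite; the model name, parameters, and key shown are illustrative and may differ in your environment.

```python
# Minimal sketch of a PaLM API text-generation call, assuming the
# google-generativeai client (pip install google-generativeai) and a
# MakerSuite API key. Model names and parameters are illustrative.
import google.generativeai as palm

palm.configure(api_key="YOUR_API_KEY")  # placeholder key

response = palm.generate_text(
    model="models/text-bison-001",  # a PaLM text model; exact names may vary
    prompt="Write a two-sentence product description for a smart mug.",
    temperature=0.7,
    max_output_tokens=128,
)

print(response.result)  # the generated text, or None if the request was blocked
```
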
At I/O, Google unveiled PaLM 2, a model that is expected to surpass the capabilities of its highly successful predecessor. The LLM is lightweight, easy to deploy, and has more advanced technical capabilities.

The LLM is so versatile that Google announced over 25 products and features powered by PaLM 2 at Google I/O. It comes in four model sizes, Gecko, Otter, Bison, and Unicorn, which can be used for different purposes.

PaLM 2 will support more than 100 languages and excels at a range of technical tasks, including coding, writing, and mathematics.

2. Bard, but smarter and for everyone

In the past, Google typically held onto its AI models until the company was sure they were fully ready to be released to the public. But the rapid growth of ChatGPT caused Google to take a different approach.

After seeing ChatGPT's success, the company rushed to release its own chatbot, Google Bard, well before it was ready to deliver real value to customers. In an interview, Google CEO Sundar Pichai even compared Bard to "a souped-up Civic" next to other AI models.

To make Bard smarter and better at tasks such as coding, math, and logic, Google upgraded it to a much more capable model: PaLM 2.

Bard's coding abilities have been significantly improved. It can now help with code debugging, collaboration, and exploration. It also supports more than 20 programming languages and automatically provides citations for the code it generates.

Bard can now function in more languages, including Japanese and Korean, and is on track to support 40 more languages soon.

Some updates geared toward improving the user experience include a new dark theme for Bard and a new export feature that lets users send chats to Gmail and Docs.

New visual features are also coming to Bard soon. For example, when you ask a question, the response will be able to include an image, a table, or a map.

Google Lens is also coming to Bard, allowing users to upload photos and ask questions about them. The example in the demo involved uploading a photo of dogs and asking for a caption.

Extensions will allow Bard to tap into services from external partners. Google's Adobe Firefly extension will arrive in Bard in the next couple of months. Through this collaboration, users can ask Bard to create any image they'd like and have it generated directly in the chat.

Other extensions planned for the platform include Kayak, OpenTable, Instacart, Wolfram, and Khan Academy.

Google also removed Bard's waitlist, making the chatbot available in over 180 countries across the globe.

3. Google Search's AI integration

It had been rumored that Google was working on integrating AI features into its own search engine to keep up with Bing Chat. At Google I/O, the company unveiled generative AI in Search through the new Search Generative Experience (SGE).

The new Search features AI-powered snapshots, which provide users with concise, informative, and conversational answers to search queries. The snapshot also surfaces additional sources users can visit to learn more about the topic.

SGE will also help users make shopping decisions. When researching what to buy, users will see a series of items in the Search snapshot, presented in a table that compares the products' features, prices, reviews, and more.

These new SGE features will be available in Search Labs, a new program for accessing early experiments, in the coming weeks. If you're interested, you can join the waitlist starting today.

This is a very smart move for Google, which holds the biggest share of the search engine market and is responsible for about 90% of all search queries worldwide. By integrating AI into its search engine, Google can leverage its dominant position in search to propel itself in the AI space too.

4. AI upgrades to Workspace

Back in March, Google announced the arrival of AI advances to Google Workspace. This meant that all of Google's widely used productivity tools, including Gmail, Google Docs, and Slides, were getting a generative AI facelift.

Google announced further AI integrations in Gmail, Maps, Photos, and more.

A new feature for Gmail called "Help me write" will let people use generative AI to draft replies to emails and tweak them to best meet their needs. The feature launched to trusted testers in March.

The new Sheets will feature a "Help me organize" entry where users can ask for information to be organized for them in the sheet through a simple text prompt.

Similarly, Google Slides will get a "Help me visualize" entry where users can use prompts to get AI-generated images. Starting next month, trusted testers will be able to use these two features as well as other generative AI features in Workspace.

Google Maps will have a new AI feature, "Immersive View for Routes", which gives users a bird's-eye view of their entire route in a multidimensional experience. This feature will begin rolling out over the summer and come to 15 cities by the end of the year.

Google Photos will get a new feature, "Magic Editor", which lets users edit various aspects of a photo, including adjusting lighting, moving people, removing unwanted objects, and more, using generative AI. It will roll out later this year.

The timing is especially good: although Microsoft announced a similar AI revamp for its Office 365 apps, it has yet to release it to the public. Google has every chance to land the first punch in this AI bout.

5. Android personalization with AI

Android is getting a series of brand-new personalization features enabled by generative AI.

Magic Compose is a new feature coming to Google Messages that allows users to add more personality to their messages. Users just type their message as they usually would and can then select different tones to rewrite their text in, such as "professional", "funny", "chill" and even "Shakespeare".

In Android 14, users can also add new personalizations to their phone layout. New customizable aspects of the lock screen include clocks and shortcuts to choose from. A new emoji wallpaper feature was also unveiled; through it, users can choose from emojis, patterns, colors, and more to generate a custom wallpaper.

On top of that, Google demoed a new cinematic wallpaper feature that lets users take any photo and transform it into a 3D cinematic photo for the home screen. Both the emoji and cinematic wallpapers will come to Pixel devices next month.

Lastly, you will soon be able to have generative AI create a new wallpaper for you from scratch. All you have to do is type in a prompt describing what you'd like your wallpaper to look like, and the phone will do the rest. This feature will be coming to Pixel phones next fall.

6. Duet AI for Google Cloud

On the event website, Google teased that it would be unveiling "a new suite of tools that make it easy for developers to build on top of our (Google's) best models".

Duet AI is an AI-powered development interface that includes code and chat assistance for developers on Google Cloud.

With code assistance, cloud developers will be able to take advantage of AI-driven code completion and generation. As programmers write code on Google Cloud, the code assistant can complete it in real time based on comments and function definitions. It can also identify errors and suggest fixes.

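To picture what comment-driven completion looks like in practice, here is a purely hypothetical sketch of the kind of body an AI code assistant could fill in from a comment, signature, and docstring; it is not captured Duet AI output, and the function name is made up for illustration.

```python
# Hypothetical illustration: the developer writes the comment, the
# signature, and the docstring; an assistant fills in the body.

def average_latency_ms(latencies_ms: list[float]) -> float:
    """Return the mean request latency in milliseconds, or 0.0 for an empty list."""
    # --- assistant-style completion below (illustrative only) ---
    if not latencies_ms:
        return 0.0
    return sum(latencies_ms) / len(latencies_ms)
```
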
For specific cloud questions, such as how to use certain cloud services, users can ask the chat assistant in simple natural language and get a response.

Lastly, the no-code AppSheet platform uses generative AI to help users create apps from simple prompts, enabling even users who don't know how to code to develop an app.

Briefly mentioned during the event was Duet AI for Google Workspace, which will assist users across the many different Workspace apps.

Duet AI for Google Cloud is currently limited to trusted testers by invitation only, but a larger rollout is expected.

7. Gemini

Gemini is a large language model created by Google DeepMind and revealed at Google I/O. It is still in its early phases but is intended to be used in the same way PaLM 2 is used now. The LLM will likely compete with GPT-4, OpenAI's latest and most advanced model.
