On stage at the Shoreline Amphitheatre in Mountain View, California, Google CEO Sundar Pichai announced the latest developments in Google Assistant, Google Photos, Android, Google Maps, artificial intelligence, and more.
Google is set to release a raft of new consumer features built on AI and machine learning. Among them is the next phase of Google Assistant, which goes well beyond a simple voice assistant.
Google used its I/O Keynote as an opportunity to detail a number of features baked into Android P, including Adaptive Applications, Adaptive Battery, and Adaptive Brightness – three useful new tools that take advantage of artificial intelligence (AI) to alter system functions when the algorithm sees fit. Here’s the low-down:
- App Actions and Slices – Android P introduces two ambitious UI changes: Actions and Slices. The former is analogous to Actions on Google Assistant, while the latter surfaces core features of an application directly in device-wide search results.
- Digital Wellbeing – There’s a Dashboard application baked into Android P that lets you monitor your usage and set restrictions. You can, for example, tell it to restrict access to Netflix after using it for two hours. Or even to strip color from your handset after a specific time to encourage you to put it down.
- Do Not Disturb (DND) – Do Not Disturb is far more aggressive in Android P. When enabled, notifications will now disappear altogether. The only way an alert will come through is if it’s triggered by a starred contact, and even then it’s restricted to a phone call; no text messages.
- Navigation – Google has done away with the standard on-screen navigation buttons in Android P, in favor of a lone Home button that relies on multitasking to navigate; sliding the Home button to the right, for example, lets you cycle through recent applications.
- Overview – There’s a new Overview screen built into Android P that’s essentially a multitasking hub. There’s a Search bar at the top and a row of predictive applications at the bottom, generated using the aforementioned Adaptive Applications algorithm.
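The Do Not Disturb behavior described above boils down to a simple filtering rule: with DND on, only phone calls from starred contacts get through. Here is a minimal Python sketch of that rule; the `Notification` type and the starred-contact set are hypothetical stand-ins for illustration, not the actual Android APIs:

```python
from dataclasses import dataclass

@dataclass
class Notification:
    sender: str  # contact name
    kind: str    # "call" or "text"

def should_alert(note: Notification, starred: set, dnd_enabled: bool) -> bool:
    """With DND off, everything gets through; with DND on,
    only phone calls from starred contacts do (no text messages)."""
    if not dnd_enabled:
        return True
    return note.kind == "call" and note.sender in starred

starred = {"Alice"}
print(should_alert(Notification("Alice", "call"), starred, dnd_enabled=True))  # True
print(should_alert(Notification("Alice", "text"), starred, dnd_enabled=True))  # False
print(should_alert(Notification("Bob", "call"), starred, dnd_enabled=True))    # False
```

Even a starred contact's text messages are suppressed; only their calls break through, which is exactly why Google calls this version of DND more aggressive.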
Google Assistant has been treated to a huge overhaul. There are six new voices to choose from, as well as a revised user interface that will be available on both Android and iOS when it launches later in 2018. It’s a lot smoother than the existing interface, putting AI at the center of the experience.
But that isn’t the best bit. Google Assistant will soon (no word on when) be able to make calls for you. And no, we aren’t kidding.
Google showed a demonstration of Assistant calling a hair salon to book an appointment during its I/O Keynote. Assistant had a back-and-forth conversation with the receptionist, speaking like a human; it processed questions on the spot and offered up intelligent and unnervingly normal-sounding responses within a matter of seconds.
The firm will also use the feature to update business listings in Search.
If you aren’t convinced the opening hours for an establishment listed on Google Maps are correct, for example, you’ll have the option to command the company’s in-house Assistant to contact the business to verify the information. It will then update the listing accordingly for the whole world to see. Crazy stuff, right?
Last, but not least, Google is giving Assistant the ability to hold a natural back-and-forth conversation. The feature will arrive in the coming weeks, letting you ask follow-up questions after your initial request. You’ll also be able to ask multiple questions in a single request.
Google announced that a slew of new features is headed to Google Photos at I/O 2018, the most notable of which is the option to add realistic-looking color, using AI, to black-and-white pictures. There’s also a new Smart Actions tool that can determine who’s in an image, offering to share it with those people.
Oh, and it will soon be able to recognize documents and convert them to PDFs, and let you know when it thinks an image needs to be brightened or rotated in order to make it look better.
A new version of Google News was showcased during Google’s I/O Keynote. It’s designed to merge the client’s core features with the best of the standalone Newsstand application, while at the same time placing emphasis on images and YouTube videos.
The biggest change, however, is that News is now driven by AI, which aims to weed out fake news and understand your interests better.
Google says it uses artificial intelligence to not only surface stories you’re interested in, but to ensure you only see relevant and high-quality content. That’s further reinforced by the introduction of Snapchat Stories-style Newscasts, which you can swipe through for key snippets before you actually open an article.
Google I/O 2018 Explained – What is Google I/O?
Google I/O is the name of Google’s annual Developer Conference, where the firm discusses the future of the various products and services in its ecosystem. This year, it’s being held at the Shoreline Amphitheatre in Mountain View, California, but it’s also live-streamed for loyal fans around the world to tune in and watch.
Judging by these mind-boggling features, it’s clear Google is racing ahead to deliver a fantastic user experience and put its latest innovations in the palm of users’ hands.
Source – trustedreviews.com | techcrunch.com | whatech.com