Google Pixel 2's AI image technology is open source; it powers the Pixel 2's portrait mode feature
Search engine and software giant Google has open-sourced its artificial intelligence-based ‘Semantic Image Segmentation’ technology, which is used in the Pixel 2 and Pixel 2 XL portrait mode to achieve a shallow depth-of-field effect without the need for a secondary camera.
“Today, we are excited to announce the open-source release of our latest and best-performing semantic image segmentation model, DeepLab-v3+, implemented in TensorFlow. This release includes DeepLab-v3+ models built on top of a powerful convolutional neural network (CNN) backbone architecture for the most accurate results, intended for server-side deployment.
As part of this release, we are additionally sharing our TensorFlow model training and evaluation code, as well as models already pre-trained on the Pascal VOC 2012 and Cityscapes benchmark semantic segmentation tasks,” Google stated in a blog post.
In the blog post, Google explained what the technology is and how it works. According to the post, semantic image segmentation assigns a semantic label, such as road, sky, person or dog, to every pixel in an image. Assigning these labels requires pinpointing the outline of each object, and thus imposes much stricter localisation accuracy requirements than other visual entity recognition tasks such as image-level classification or bounding box-level detection.
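As a toy illustration (not Google's code) of what such a per-pixel label map looks like, and how a single class mask can be pulled out of it to treat the subject differently from the background in a portrait-style effect, consider the following; the class numbering follows the Pascal VOC convention, where 15 denotes "person".

import numpy as np

# Hypothetical 4x6 label map: 0 = background, 15 = person (Pascal VOC numbering).
seg_map = np.array([
    [0,  0, 15, 15,  0, 0],
    [0, 15, 15, 15, 15, 0],
    [0, 15, 15, 15, 15, 0],
    [0,  0, 15, 15,  0, 0],
])

person_mask = (seg_map == 15)   # True wherever a pixel is labelled "person"
print(person_mask.astype(int))  # the outline of the subject, pixel by pixel
print('person pixels:', person_mask.sum(), 'of', seg_map.size)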
