6 Liberty Sq PMB 95052 Boston, MA 02109
+1 617-945-8567

Artificial Intelligence at Adobe – Two Current Use-Cases

Adobe was founded in 1982 and named for a creek that ran behind a garage belonging to John Warnock, one of its cofounders. Adobe’s first products included specialized printing software and digital fonts. The company went public in 1986 and launched Illustrator, its first consumer software offering, in 1987.

Today, Adobe Inc. is a multinational computer software company based in San Jose, California. The company claims that more than 90% of creative professionals around the world use Photoshop and that its Creative Cloud mobile apps have been downloaded more than 449 million times. Adobe trades on the Nasdaq; as of January 2022, its market cap was approximately $240 billion. For the fiscal year ended December 3, 2021, Adobe reported total revenues of $15.8 billion.

Adobe established Adobe Research, located at its corporate headquarters, to combine “cutting-edge academic discovery with industry impact” and “shape early-stage ideas into innovative technologies.” Its research areas include AI and machine learning, computer vision, content intelligence, intelligent agents and assistants, and natural language processing.

In this article, we’ll look at how Adobe has explored AI applications for its business and industry through two unique use-cases:

  • Finding the Right User-Generated Video Content — Adobe customers can use the Smart Tags technology, a computer vision solution powered by Adobe Sensei, to automate the search for the right user-generated video content and save both time and resources.
  • Making Image Search More Precise — Adobe uses the artificial intelligence and machine learning in Adobe Sensei to bring an expanded iteration of Visual Search to its vast library of Adobe Stock assets.

We will begin by examining how Adobe relies on computer vision technology to help marketers sort through the vast amounts of user-generated content that is uploaded to social media platforms every day.

Finding the Right User-Generated Video Content

Sixty percent of marketers believe that their audience engages more when they see user-generated content (UGC) in their marketing assets, according to Tint research published in its 2022 State of User-Generated Content Report. The same share of marketers plan to grow their UGC investments in 2022.

On YouTube, user-generated videos get ten times as many views as branded content. But with 2.3 billion users who upload more than 700,000 hours of video every day, how can marketers efficiently find UGC that’s worth curating and using in their marketing assets? After all, watching a single day’s uploads would take approximately 80 years of nonstop viewing.
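The 80-year figure is straightforward arithmetic and checks out; here is the back-of-the-envelope calculation, using only the upload volume cited above:

```python
# Back-of-the-envelope check: how long would it take to watch one day's
# worth of YouTube uploads, viewing around the clock with no breaks?
hours_uploaded_per_day = 700_000       # upload volume cited above
hours_per_year = 24 * 365              # nonstop viewing

years_to_watch = hours_uploaded_per_day / hours_per_year
print(f"{years_to_watch:.1f} years")   # -> 79.9 years
```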

To help creatives sort through the vast and ever-growing amounts of UGC, Adobe turned to its Smart Tags technology, a computer vision solution powered by Adobe Sensei’s artificial intelligence technology. Smart Tags automates the scanning of videos and images, identifying:

  • Objects
  • Categories
  • Aesthetic properties

These attributes are then used to create the tags that marketers can use to locate content relevant to their needs.
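Adobe does not publish the internals of Smart Tags, but the pattern it describes, running each asset through trained vision models and keeping high-confidence labels as searchable tags, is standard auto-tagging. Below is a minimal sketch of that pattern using an off-the-shelf torchvision classifier as a stand-in for Adobe’s proprietary Sensei models; the file path is hypothetical.

```python
import torch
from PIL import Image
from torchvision import models

# Illustrative stand-in only: a generic ImageNet classifier, not Adobe's
# Sensei models, which use a far larger taxonomy plus aesthetic scoring.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

def smart_tags(path: str, threshold: float = 0.2) -> list[tuple[str, float]]:
    """Return (tag, confidence) pairs for every label above the threshold."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)           # shape (1, 3, H, W)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    return [(labels[i], float(p)) for i, p in enumerate(probs) if p >= threshold]

print(smart_tags("beach_photo.jpg"))  # hypothetical path; e.g. [("seashore", 0.61)]
```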

While Smart Tags have helped marketers find images and text more quickly, Adobe claims, the true value of the technology may come when sourcing user-generated video content.

“Curating video content has been laborious since a user needs to manually watch lots of videos to find relevant footage. Videos are much heavier and have a temporal dimension, making them more challenging than images to classify, filter and curate,” an Adobe product manager writes on the Adobe Tech blog.

The Sensei-powered Smart Tags technology delivers two sets of tags for videos, one that describes objects, scenes, and attributes from a list of some 150,000 possibilities, and another that describes actions, the Adobe Tech blog post continues.
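A two-track tagger of the kind the post describes could pair a per-frame object/scene classifier (as in the sketch above) with a clip-level action-recognition model. The model choice below is our own illustrative assumption, not Adobe’s published architecture:

```python
import torch
from torchvision import models
from torchvision.io import read_video

# Illustrative stand-in: a Kinetics-400-trained action model supplies the
# second, action-oriented tag set; the object/scene/attribute track would
# run an image classifier over sampled frames, as in the previous sketch.
weights = models.video.R3D_18_Weights.DEFAULT
action_model = models.video.r3d_18(weights=weights).eval()
action_labels = weights.meta["categories"]
preprocess = weights.transforms()

def action_tags(path: str, top_k: int = 3) -> list[str]:
    """Return the top-k action labels for a short video clip."""
    frames, _, _ = read_video(path, pts_unit="sec", output_format="TCHW")
    clip = preprocess(frames).unsqueeze(0)        # shape (1, C, T, H, W)
    with torch.no_grad():
        probs = action_model(clip).softmax(dim=1)[0]
    return [action_labels[int(i)] for i in probs.topk(top_k).indices]
```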

To develop the video auto-tagging technology, Adobe trained the models on images and videos from Adobe Stock, a dataset of some 200 million assets sourced from contributors and photo agencies spanning 180 countries.

Delivered as part of its Adobe Experience Manager (AEM) product, the Smart Tags capability “can automatically scan and identify objects in videos, helping marketers to search and filter videos easily,” Adobe claims in a December 2018 press release.

Although Adobe touts the potential reduction in marketing costs, increased campaign efficiencies, and scale that its Smart Tags technology can deliver, our research did not identify any Adobe disclosures relating to the specific financial results of its Sensei-powered video smart-tagging technology.

However, Adobe does list its Adobe Experience Manager product among the customer solutions that make up the strategic growth pillars identified in its 2021 10-K, an indication that the technology figures in Adobe’s long-term strategic plans.

Making Image Search More Precise

Nearly half (48%) of content produced by marketers contains visuals, according to Venngage research. And almost three of every ten marketers surveyed by the design platform named stock photos as the visual they use most often. With some 200 million assets, Adobe Stock has developed into a major player in this market.

Adobe Stock’s vast library of assets is continually sourced from both individual contributors and photo agencies in 180 countries. But with such a large library, how do creatives find the right image for their message?

To help creatives find the image they need, Adobe turned to its Sensei artificial intelligence technology to add Image Similarity functionality to its Visual Search capabilities.

The idea is simple. By applying Adobe Sensei, the company’s AI and machine learning framework, to its library of images, Adobe hopes to remove much of the time and effort needed to find the right image so that creatives can focus on how to best integrate it into their messaging and strategy.

According to Adobe, when a creative finds an image that’s close to, but not quite, what they need, Image Similarity Search allows them to use the image itself to find others like it, instead of losing time trying to convert what they like about the image into a text-based search query.

Using Visual Search, creatives can drag an image into their search bar or select an image in their search results to find similar images. The platform’s aesthetic filters then allow users to narrow down their stock photo options.
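Adobe does not detail how Sensei implements this, but image-to-image search is typically built as nearest-neighbor lookup over learned feature embeddings: every library image is embedded once, and a query image is embedded and matched against its closest vectors. A generic sketch under that assumption, with hypothetical file paths:

```python
import numpy as np
import torch
from PIL import Image
from sklearn.neighbors import NearestNeighbors
from torchvision import models

# Generic feature extractor: ResNet-50 with its classifier head removed,
# leaving the 2048-d pooled features as the image embedding.
weights = models.ResNet50_Weights.DEFAULT
backbone = models.resnet50(weights=weights)
backbone.fc = torch.nn.Identity()
backbone.eval()
preprocess = weights.transforms()

def embed(path: str) -> np.ndarray:
    with torch.no_grad():
        vec = backbone(preprocess(Image.open(path).convert("RGB")).unsqueeze(0))
    vec = vec[0].numpy()
    return vec / np.linalg.norm(vec)     # unit-normalize for cosine search

# Index the library once, then query with the image that is "close, but not quite".
library = ["sunset_beach.jpg", "sunset_city.jpg", "office_desk.jpg"]  # hypothetical
index = NearestNeighbors(metric="cosine").fit(np.stack([embed(p) for p in library]))
_, idx = index.kneighbors(embed("almost_right.jpg")[None, :], n_neighbors=2)
print([library[i] for i in idx[0]])      # the two most similar library images
```

At Adobe Stock’s 200-million-asset scale, the exact kneighbors call would give way to an approximate nearest-neighbor index, but the principle is the same.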

The first iteration of Adobe’s Similarity Search relied on a system of descriptive tags that the company’s AI created by analyzing each image’s metadata, pixels, and RGB values to discern its component elements, Adobe claims on its Adobe Tech blog.

However, the post continues, users wanted to be shown more than just images with similar objects in them. Users wanted images that “felt” the same too. Beyond an image’s content, they wanted to search by:

  • Color
  • Composition
  • Location of an object within a photo
  • A specific object within a photo

Through Adobe Sensei, Adobe aimed to upgrade its Visual Search function so that users could surface images that matched the search image not just in what it showed, but also in attributes like how it was composed, the colors it included, whether an object was on the left or right side of the frame, and even the physical appearance of a subject, e.g., a breed of dog or a particular person.
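One common way to support this kind of “feels the same” matching, sketched here under our own assumptions rather than any Adobe disclosure, is to score candidates on several signals at once: a content embedding, a color histogram for palette, and a coarse spatial grid for object placement, blended with tunable weights.

```python
import numpy as np
from PIL import Image

def color_histogram(img: Image.Image, bins: int = 8) -> np.ndarray:
    """Normalized joint RGB histogram: captures the image's overall palette."""
    pixels = np.asarray(img.convert("RGB")).reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=[(0, 256)] * 3)
    return (hist / hist.sum()).ravel()

def layout_grid(mask: np.ndarray, grid: int = 3) -> np.ndarray:
    """Fraction of the subject mask falling in each cell of a 3x3 grid:
    a crude proxy for whether an object sits left, right, or center."""
    h, w = mask.shape
    cells = [mask[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid].mean()
             for i in range(grid) for j in range(grid)]
    return np.array(cells)

def blended_score(query: dict, candidate: dict,
                  w_content=0.5, w_color=0.3, w_layout=0.2) -> float:
    """Blend content, palette, and layout similarity; the dicts hold the
    three precomputed vectors under 'embedding', 'histogram', 'layout'."""
    cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return (w_content * cos(query["embedding"], candidate["embedding"])
            + w_color * cos(query["histogram"], candidate["histogram"])
            + w_layout * cos(query["layout"], candidate["layout"]))
```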

While our research did not identify detailed metrics on the impact of the Sensei-enabled additions to Visual Search, Adobe does claim that these innovations save users both time and resources in finding the right image for their needs. This will only become more important as Adobe continues to grow its Adobe Stock library, which has added 100 million assets and doubled in size since 2018.