Computational Photography | Popular Photography
Founded in 1937, Popular Photography is a magazine dedicated to all things photographic.

The Apple iPhone 14 Pro has as many megapixels as a full-frame camera
https://www.popphoto.com/news/apple-iphone-14-pro-camera/ | September 7, 2022
Apple's new flagship phones feature massively overhauled cameras.

Apple's flagship phone still sports a three-camera array. Apple

‘Tis the season for product announcements, and right on cue, the new iPhones are here. As we’ve come to expect (with some exceptions), the press event came with some big news; the most notable, for us, being the camera specs. The iPhone 14 and 14 Plus camera updates are rather modest, though there are still some noteworthy changes. The main excitement comes from the iPhone 14 Pro and Pro Max, which Apple CEO Tim Cook calls the “most innovative pro lineup yet.”

iPhone 14 Pro & Pro Max camera

The iPhone 14 Pro comes in four colors: Space Black, Silver, Gold, and Deep Purple. Apple

Apple has long been pushing the boundaries of phone photography, striving for better cameras and smarter computational photography. There have been years with lackluster changes, but we're glad to see that isn't the case this year, especially with the two Pro phones.

48-megapixel sensor

The biggest news from the event is the new 48-megapixel camera on the iPhone 14 Pro and Pro Max. That’s a massive jump from the previous 12-megapixel sensor found in the iPhone 13 Pro. But pixels aren’t everything: the new sensor is also 65% larger than its predecessor, which should result in far superior light-gathering capabilities. The main camera also gets Apple’s latest, second-gen optical image stabilization.

It utilizes a quad-pixel design and takes advantage of pixel binning—the process of grouping individual pixels together to act like larger ones—for better low-light performance. The binning process means that most photos will be a more standard 12 megapixels, but you can take advantage of the full 48-megapixel resolution with Apple’s ProRAW format.
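
To make the binning idea concrete, here's a minimal sketch in Python using NumPy. It averages each 2x2 block into one output pixel; the frame shape and the simple averaging scheme are illustrative assumptions, not Apple's actual pipeline (real quad-Bayer binning combines same-color pixels within the color filter array).

```python
import numpy as np

def bin_2x2(sensor: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of pixels into one "larger" pixel."""
    h, w = sensor.shape
    return sensor.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A 48MP-shaped, single-channel stand-in frame.
frame = np.ones((8000, 6000), dtype=np.float32)
binned = bin_2x2(frame)
print(binned.shape)  # (4000, 3000) -> a 12MP output
```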

Better low-light photos & new flash

Low-light performance was clearly an area of focus for Apple in these new phones. The phone utilizes Apple’s “Photonic Engine,” which leans on the powers of computational photography to provide better color and preserve details, even in minimal light.

Overall, Apple claims that low-light performance, compared to the previous generation, will be 2x better with the main camera, 3x better with the 13mm ultra-wide angle, and 2x better with the telephoto. They’ve also greatly improved “Night mode,” taking full advantage of the main camera’s increased light-gathering powers. 

Apple says the main camera has 2x better low-light performance than its predecessor. Apple

Related: What is computational photography?

Also relevant to low-light situations is the new flash. It’s been redesigned with nine LEDs that change patterns based on the focal length you’re using. It’s twice as bright as before and should enable much more dramatic images with the built-in flash.

More zoom flexibility

There is also a new 2x zoom feature when shooting with the main camera. It uses a crop from the middle 12 megapixels of the quad-pixel sensor to give you a 48mm equivalent field of view. The new ultra-wide-angle camera also boasts improved macro abilities.
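
As a rough illustration of that crop-based zoom, here's a hedged sketch; the frame dimensions are assumptions for a 48-megapixel capture, not Apple's exact implementation.

```python
import numpy as np

def center_crop_2x(frame: np.ndarray) -> np.ndarray:
    """Return the central quarter of the frame (half the width and height).

    Halving the field of view doubles the equivalent focal length, so a
    24mm-equivalent main camera yields a roughly 48mm-equivalent view.
    On a 48MP sensor, the cropped region still holds 12MP of real pixels.
    """
    h, w = frame.shape[:2]
    return frame[h // 4: h // 4 + h // 2, w // 4: w // 4 + w // 2]

frame = np.zeros((8000, 6000))       # stand-in for a 48MP capture
print(center_crop_2x(frame).shape)   # (4000, 3000) -> 12MP
```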

A better front-facing camera & 4K Cinematic mode

At the front of the phone is a new “TrueDepth” camera that provides autofocus for the first time. It utilizes a faster f/1.9 aperture for, you guessed it, better low-light performance. 

Video users also get some exciting features, including a new “Action mode” for smoother results while moving. And “Cinematic mode” is now available in 4K at 30 and 24 fps.

The screen on the iPhone 14 Pro is twice as bright as its predecessor’s. Apple

iPhone 14 & 14 Plus camera

The iPhone 14 and 14 Plus have more modest camera updates, some of which are borrowed from the Pro versions. For example, the front-facing TrueDepth camera has been upgraded with an f/1.9 aperture and newly added autofocus. They also provide access to Action mode for smoother video and an improved “TrueTone” flash. Apple also included its Photonic Engine for improved mid- and low-light performance: up to 2x on the ultra-wide camera, 2x on the TrueDepth camera, and 2.5x on the main camera.

As for updates exclusive to these phones, the main camera now has a larger f/1.5 aperture, and there’s a new ultra-wide-angle camera for more sweeping views.

Additional updates

All four phones still utilize Apple’s Super Retina XDR display, but all are now brighter. The 14 and 14 Plus have 1200 nits of peak HDR brightness and a 2,000,000:1 contrast ratio, while the Pro versions reach up to 2000 nits, twice as bright as the iPhone 13 Pro. That will make viewing the screen in bright conditions much easier. They all use Apple’s “Ceramic Shield” front cover to protect against falls and bumps.

The Pro options get the new A16 Bionic chip, which Apple says is the fastest chip ever in a smartphone. The display can also drop to a 1Hz refresh rate, enabling an Always-On lock screen so that you can see widgets and notifications with a quick glance. This isn’t anything new for Android users but is new to Apple phones.

Preorders for the iPhone 14 Pro and Pro Max begin on September 9. Apple

Pricing & availability

The iPhone 14 and 14 Plus will be priced at $799 and $899, respectively, and are available for preorder starting September 9. The iPhone 14 will be available on September 16, while those wanting the 14 Plus will have to wait until October 7.

The iPhone 14 Pro starts at $999, and the Pro Max at $1,099. Preorders also begin September 9, with full availability on September 16.

The state of AI in your favorite photo editing apps
https://www.popphoto.com/how-to/ai-photo-editing-apps/ | August 30, 2022
From Lightroom to Luminar Neo, we surveyed the field; these are the most powerful AI-enhanced photo editing platforms.

ON1’s forthcoming Super Select AI feature automatically detects various subjects/elements (shown in blue), allowing users to quickly create an editable mask. ON1

Artificial Intelligence (AI) technologies in photography are more widespread than ever before, touching every part of the digital image-making process, from framing to focus to the final edit. But they’re also widespread in the sense of being spread wide, often appearing as separate apps or plug-ins that address specific needs.

That’s starting to change. As AI photo editing tools begin to converge, isolated tasks are being added to larger applications, and in some cases, disparate pieces are merging into new utilities.

This is great for photographers because it gives us improved access to capabilities that used to be more difficult, such as dealing with digital noise. From developers’ perspectives, this consolidation could encourage customers to stick with a single app or ecosystem instead of playing the field.

Let’s look at some examples of AI integration in popular photo editing apps.

ON1 Photo RAW

ON1 currently embodies this approach with ON1 Photo RAW, its all-in-one photo editing app. Included in the package are tools that ON1 also sells as separate utilities and plug-ins, including ON1 NoNoise AI, ON1 Resize AI, and ON1 Portrait AI.

The company recently previewed a trio of new features it’s working on for the next major versions of ON1 Photo RAW and the individual apps. Mask AI analyzes a photo and identifies subjects; in the example ON1 showed, the software picked out a horse, a person, foliage, and natural ground. You can then click a subject and apply an adjustment, which is masked solely to that individual/object.

In this demo of ON1’s Mask AI feature under development, the software has identified subjects such as foliage and the ground. ON1

Related: Edit stronger, faster, better with custom-built AI-powered presets

ON1’s Super Select AI feature works in a similar way, while Tack Sharp AI applies intelligent sharpening and optional noise reduction to enhance detail.

Topaz Photo AI

Topaz Labs currently sells its utilities as separate apps (which also work as plug-ins). That’s great if you just need to de-noise, sharpen, or enlarge your images. In reality, though, many photographers buy the three utilities in a bundle and then bounce between them during editing. But in what order? Is it best to enlarge an image and then remove noise and sharpen it, or do the enlarging at the end?

Topaz is currently working on a new app, Photo AI, that rolls those tools into a single interface. Its Autopilot feature looks for subjects, corrects noise, and applies sharpening in one place, with controls for adjusting those parameters. The app is currently available as a beta for owners of the Image Quality bundle with an active Photo Upgrade plan.

Topaz Photo AI, currently in beta, combines DeNoise AI, Sharpen AI, and Gigapixel AI into a single app. Jeff Carlson

Luminar Neo

Skylum’s Luminar was one of the first products to really embrace AI technologies at its core, albeit with a confusing rollout. Luminar AI was a ground-up rewrite of Luminar 4 to center it on an AI imaging engine. The following year, Skylum released Luminar Neo, another rewrite of the app with a separate, more extensible AI base.

Now, Luminar Neo is adding extensions, taking tasks that have been spread among different apps by other vendors, and incorporating them as add-ons. Skylum recently released an HDR Merge extension for building high dynamic range photos out of several images at different exposures. Coming soon is Noiseless AI for dealing with digital noise, followed in the coming months by Upscale AI for enlarging images and AI Background Removal. In all, Skylum promises to release seven extensions in 2022.

With the HDR Merge extension installed, Luminar Neo can now blend multiple photos shot at different exposures. Jeff Carlson

Adobe Lightroom & Lightroom Classic

Adobe Lightroom and Lightroom Classic are adding AI tools piecemeal, which fits the platform’s status of being one of the original “big photo apps” (RIP Apple Aperture). The most significant recent AI addition was the revamped Masking tool that detects skies and subjects with a single click. That feature is also incorporated into Lightroom’s adaptive presets.

Lightroom Classic generated this mask of the fencers (highlighted in red) after a single click of the Select Subject mask tool. Jeff Carlson

It’s also worth noting that because Lightroom Classic has been one of the big players in photo editing for some time, it has the advantage of letting developers, like the ones mentioned so far, offer their tools as plug-ins. So, for example, if you primarily use Lightroom Classic but need to sharpen beyond the Detail tool’s capabilities, you can send your image directly to Topaz Sharpen AI and then get the processed version back into your library. (Lightroom desktop, the cloud-focused version, does not have a plug-in architecture.)

What does the consolidation of AI photo editing tools mean for photographers?

As photo editors, we want the latest and greatest editing tools available, even if we don’t use them all. Adding these AI-enhanced tools to larger applications puts them easily at hand for photographers everywhere. You don’t have to export a version or send it to another utility via a plug-in interface. It keeps your focus on the image.

It also helps to build brand loyalty. You may decide to use ON1 Photo RAW instead of other companies’ tools because the features you want are all in one place. (Insert any of the apps above in that scenario.) There are different levels to this, though. From the looks of the Topaz Photo AI beta, it’s not trying to replace Lightroom any time soon. But if you’re an owner of Photo AI, you’ll probably be less inclined to check out ON1’s offerings. And so on.

More subscriptions

Then there’s the cost. It’s noteworthy that companies are starting to offer subscription pricing instead of just single purchases. Adobe went all-in on subscriptions years ago, and it’s the only way to get any of its products except for Photoshop Elements. Luminar Neo and ON1 Photo RAW offer subscription pricing or one-time purchase options. ON1 also sells standalone versions of its Resize AI, NoNoise AI, and Portrait AI utilities. Topaz sells its utilities outright, but you can optionally pay to activate a photo upgrade plan that renews each year.

AI-enhanced photo editing tools come in many forms, from standalone apps to plugins to built-in features in platforms like Lightroom. Getty Images

Subscription pricing is great for companies because it gives them a more stable revenue stream, and they’re hopefully incentivized to keep improving their products to keep those subscribers over time. And subscriptions also encourage customers to stick with what they’re actively paying for.

For instance, I subscribe to the Adobe Creative Cloud All Apps plan, and use Adobe Audition to edit audio for my podcasts. I suspect that Apple’s audio editing platform, Logic Pro, would be a better fit for me, based on my preference for editing video in Final Cut Pro versus Adobe Premiere Pro, but I’m already paying for Audition. My audio-editing needs aren’t sophisticated enough for me to really explore the limits of each app, so Audition is good enough.

In the same way, subscribing to a large app adds the same kind of blanket access to tools, including new AI features, when needed. Having to pay $30-$70 for a focused tool suddenly feels like a lot (even though it means the tool is there for future images that need it).

The wrap

On the other hand, investing in a large application means relying on its continued support and development. If the software stagnates or is retired (again, RIP Aperture), you’re looking at considerable time and effort to migrate your images and their edits to another platform.

Right now, the tools are still available in several ways, from single-task apps to plug-ins. But AI convergence is also happening quickly.

These landscape ‘photos’ were generated by an AI
https://www.popphoto.com/news/stability-ai-generated-landscapes/ | August 19, 2022
Artists are feeding prompts—both real and ethereal—into the Stable Diffusion AI to churn out landscapes and more.

This landscape doesn't really exist. Stability AI, generated by Aurel Manea

Romanian photographer and artist Aurel Manea has used a new text-to-image AI to create beautiful, almost-photorealistic, landscape images. 

First noticed by PetaPixel, Manea used Stability AI’s Stable Diffusion—a DALL-E 2-like text-to-image generation tool—to make the series of incredible landscape “photographs.” By using prompts like “landscape photography by Marc Adamus, glacial lake, sunset, dramatic lighting, mountains, clouds, beautiful,” he was able to create shots of entirely made-up places. (You can also see other photorealistic images generated by the AI in the Stable Diffusion Facebook group.)

However, unlike DALL-E 2, Stable Diffusion has limited content filters. That’s partly why it is able to create such realistic scenes, but it also raises a few troubling concerns.

How do these AIs work?

Most of the text-to-image generation AIs that are popular at the moment, like DALL-E 2, Google’s Imagen, and even TikTok’s AI Greenscreen feature, are based on the same underlying technique: diffusion models. The deep-down mathematics are complicated, but the general idea is pretty simple. 

Diffusion models work by tapping huge databases of images paired with text descriptions. Stable Diffusion, for example, uses more than five billion image-text pairs from the LAION-5B database. When given a prompt, the models start with a field of random noise and gradually edit it until it begins to resemble the written target. The random nature of the initial noise is part of what allows each model to generate multiple results for the same prompt.

In other words, every pixel in an image created by one of these models is original. They’re not copying and pasting random parts of different images in a database to generate something, but subtly shaping random noise to resemble a target prompt. This is why so many objects often appear swirly or slightly misshapen—even Van Gogh-esque.
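
For readers who want to see the shape of that process, here's a toy sketch; `denoiser` stands in for a trained network conditioned on the text prompt, and the step count, sizes, and update rule are simplifying assumptions rather than Stable Diffusion's actual sampler.

```python
import numpy as np

def generate(denoiser, prompt_embedding, steps=50, size=(64, 64)):
    """Toy reverse diffusion: start from pure noise, then let the model
    nudge the image toward the prompt, one small step at a time."""
    rng = np.random.default_rng()
    image = rng.standard_normal(size)  # the random starting field
    for t in reversed(range(steps)):
        # The model predicts the noise present at step t; removing a
        # fraction of it makes the image resemble the target a bit more.
        predicted_noise = denoiser(image, t, prompt_embedding)
        image = image - predicted_noise / steps
    return image

# A do-nothing denoiser keeps the sketch runnable for illustration.
out = generate(lambda img, t, emb: np.zeros_like(img), prompt_embedding=None)
print(out.shape)  # (64, 64)
```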

The problem with no filters

Most text-to-image generation models either have high-level content filters—like DALL-E 2—or are limited to researchers—like Imagen. What’s most unusual about Stable Diffusion is that it has relatively limited content filters, and Stability AI plans to make it available to the general public. This raises a couple of potential issues.

To prevent DALL-E 2 from being used to generate misinformation, OpenAI blocks people from creating images of real people. Stable Diffusion has no such filter. Over on TechCrunch you can see images of Barack Obama and Boris Johnson (the soon-to-be-former British Prime Minister) wielding various weapons, and a portrait of Hitler. While they aren’t quite photorealistic yet, the technology is heading that way and could soon be open to abuse.

The other issue is bias. Every machine learning tool is at the mercy of its dataset. DALL-E 2 has had its issues and, most recently, Meta had to shut down its chatbot after it started spouting antisemitic election fraud conspiracies. TechCrunch notes that the LAION-400M database, the precursor to the one Stable Diffusion uses, “was known to contain depictions of sex, slurs and harmful stereotypes.”

To counter that, Stability AI has created the LAION-Aesthetics database, but it is not yet clear whether it is truly free from bias.

Are these even photos?

For the past while at PopPhoto, we’ve been discussing how computational photography changes the nature of photographs. These types of generated images are just another outgrowth of the same kinds of research. The question here in particular is: If an AI can one day generate a realistic image of a real place—or even of an imagined place—then what does it mean for landscape photography? 

Obviously we don’t know yet, but we’re going to have fun discussing and debating it from here on out. 

How can I try Stable Diffusion?

If you want to try Stable Diffusion, you can apply on Stability AI’s website. Right now, it’s just open to researchers and beta testers.

1 million lucky creators will beta test AI image generator DALL-E
https://www.popphoto.com/news/text-to-image-synthesizer-dall-e-in-testing/ | July 21, 2022
It's a portal to a Surrealist world—but can its creators address potential ethical and moral concerns?

Using DALL-E, the prompt "An astronaut playing basketball with cats in space in a watercolor style" will return an image like this. DALL-E

Text-to-image synthesizers have the power to make anyone into an artist, no art skills required. All you need is a little imagination and the power of artificial intelligence to create something spectacular (or truly bizarre). Now, the folks behind the much-hyped text-to-image synthesizer DALL-E (pronounced: Dalí, like Salvador) have invited one million beta testers, pulled from the DALL-E waitlist, to start creating on the platform. Here are the details.

Related: How to use AI to tag and organize your photo library

“A bowl of soup as a planet in the universe as a 1960s poster.” DALL-E

How it works

Over the coming months, DALL-E will invite up to a million people to beta test its platform. Testers will receive free monthly credits to create as they please and, should their enthusiasm take over, they’ll have the option to purchase additional credits at $15 for 115. 

Each credit allows for one prompt and the choice between a generation, which returns four images, and an edit/variation, which returns three images. The edit/variation prompt will either modify an existing image created by or uploaded to DALL-E, or it will create a new piece of artwork based on the inspiration. The company has stated that it will be collecting user feedback to make improvements.

Additionally, artists and creators will have the ability to commercialize their work, including reprinting, selling, and merchandising. The company reports that some creators already have children’s book illustrations, newsletters, games, and movie storyboards in the works. 

“Teddy bears mixing sparkling chemicals as mad scientists in a steampunk style.” DALL-E

Is it really safe?

Of course, all the buzz about AI image synthesizers has come with plenty of concerns about safety and ethics. This is one of the reasons why Google’s Imagen synthesizer is not available to the public. The opportunity for misinformation, bias, and abuse abounds, with no clear solutions and mitigation policies in place. However, DALL-E’s creators claim they have implemented the following measures to combat any misuse.

First, the platform will reject image uploads of realistic faces, so users won’t be able to recreate the likeness of public figures, such as celebrities and politicians. DALL-E will also not produce photorealistic images of real people. 

The content filters will also guard against violent, adult, political, and other subjects that violate its content policy. When developing the system, the team also removed explicit content from the training data to limit the AI’s exposure. DALL-E’s creators also claim they have implemented a system-level technique to fight bias, allowing it to generate “images of people that more accurately reflect the diversity of the world’s population.” There is also automated and human monitoring in place to evaluate content. 

While the concept is novel and fun, it remains to be seen if DALL-E will successfully surmount the moral and ethical risks the technology poses. We’re keen to see the beta testers’ creations and how the platform will navigate the balance of freedom of expression with the possible repercussions.

Picsart’s new ‘AI Enhance’ tool promises to upscale your images and improve IQ
https://www.popphoto.com/news/picsart-ai-enhance/ | June 29, 2022
But can it compete with upscaling solutions like Topaz Labs' Gigapixel AI and Photoshop's Super Resolution mode?

A Picsart-provided example of the new 'AI Enhance' tool in action. Picsart

Picsart, the self-declared “world’s leading digital creation platform”, has announced a new artificial intelligence-powered tool called “AI Enhance.” According to the press release, the tool—which is coming to both Picsart’s mobile app, available for iOS and Android, and API service for businesses—“removes noise, upscales images and improves overall quality.” Here’s what you need to know. 

What does AI Enhance do?

According to Picsart, AI Enhance’s “capabilities include advanced image enhancement and upscaling that improves the overall quality of an image and resolution for printing or sharing online.” To achieve all this, “It uses advanced AI models to remove or blur pixelated effects, add pixels, and sharpen and restore scenes and objects, including faces.”

In short, Picsart is selling this as a scale-up-and-sharpen tool on steroids. 

Is it any good? 

A low-resolution portrait of News Editor Dan Bracaglia’s grandparents. Dan Bracaglia
The same image, run through Picsart’s “HD Portrait” tool. Dan Bracaglia

According to Picsart’s blog, a version of AI Enhance is currently available in the iOS app as “HD Portrait” under the “Retouch Tool.” We checked it out and while the tool exists, it didn’t increase the size of my low-resolution image. What it did do was smooth the skin, somewhat aggressively, especially with the slider set to 100 percent. 

A more general version of the AI Enhance tool is also set to launch in the “Editor” section of the app sometime this summer. More likely than not, it’s this tool that will perform the upscaling. We’ve requested access to the beta and will update this story when we’ve had the chance to try it out. But we’re hoping the results are a little more impressive than the HD Portrait tool’s.

Haje Jan Kamps, over at TechCrunch, had a bit more success testing the beta of AI Enhance. However, it still seemingly declares noise the enemy, and slightly over-sharpens faces and other areas of high detail it detects. 

Even the company’s own example images, like the one at the top of the page, show somewhat similar results. The handful of sample files available take a low-quality and very (artificially!?) noisy original, smooth things out, sharpen faces, and call it a day.

We also tested out the tool on a portrait of our good pal, Barry O. Dan Bracaglia
The skin-smoothing effect on our 44th president’s face is a bit over the top. Dan Bracaglia

How does this stack up to other upscaling tools?

While we’ll withhold judgment until the feature is more widely available, Picsart isn’t the only app offering AI-powered upscaling tools. 

Topaz Labs has Gigapixel AI, which can upscale images up to 600%. Since it costs $99.99 just for the upscaling tool (or $199.99 for the upscaling, denoise, and sharpening tools), we suspect Topaz Labs is the company Picsart is swinging at when it says “this type of technology has been limited to expensive software with limited quality.” With that said, it’s hard to deny that the results from Gigapixel AI are significantly more natural-looking.

Photoshop also has a “Super Resolution” feature that is powered by its Sensei AI. It’s very much still in development, but also quite effective at scaling up low-resolution images. It is a lot more conservative when it comes to noise reduction which, as a photographer, I much prefer. Picsart feels like it’s trying to do too much with just one tool. 

What does Picsart cost? 

The HD Portrait tool in the iOS app was locked behind the Picsart Gold subscription ($55.99 for a year or $11.99 per month). If you’re just looking for image upscaling (or even image editing) this feels a little steep. Adobe’s Photography bundle that includes Photoshop and Lightroom, both of which have multiple ways of upscaling images, AI-powered or not, is $9.99 monthly. If you are prepared to pay for Picsart for a year upfront, it’s cheaper—but not if you subscribe month to month. 

Excire Foto 2022 can analyze and keyword your entire photo library using AI
https://www.popphoto.com/how-to/excire-foto-2022/ | June 24, 2022
Tidy up your image database with just a few clicks of the mouse.

Thanks to tools like automatic keywording and duplicate detection, metadata management can take little effort. Getty Images


I sometimes feel like the odd-photographer-out when it comes to working with my photo library. I’ve always seen the value of tagging images with additional metadata such as keywords and the names of people who appear (I’ve even written a book about the topic). 

However, many people just don’t want to bother. It’s an extra step—an impediment really—before they can review, edit, and share their images. It requires switching into a word-based mindset instead of an image-based mindset. And, well, it’s boring.

And yet, there will come a time when you need to find something in your ever-growing image collection, and as you’re scrolling through thumbnails and trying to remember dates and events from the past, you’ll think, “Maybe I should have done some keywording at some point.”

In an earlier column, I took a high-level look at utilities that use AI technologies to help with this task. One of the standouts was Excire Foto, which has just been updated to version 2.0 (and branded Excire Foto 2022). I was struck by its ability to automatically tag photos, and also the granularity you can use when searching for images. Let’s take it for a spin.

Related: The best photo editor for every photographer

A few workflow notes

Excire Foto is a standalone app for macOS or Windows, which means it serves as your photo library manager. You point it at existing folders of images; you can also use the Copy and Add command to read images from a camera or memory card and save them to a location of your choice. If you use a managed catalog such as Lightroom or Capture One that tracks metadata in its own way, Excire Foto won’t work as well. A separate product, Excire Search 2, is a plug-in for Lightroom Classic.

Or, Excire Foto could be the first step in your workflow: import images into it, tag and rate them, save the metadata to a disk (more on that just ahead), and then ingest the photos into the managed photo editing app of your choice.

Since the app manages your library, it doesn’t offer any photo editing features. Instead, you can send an image to another app, such as Photoshop, but its edits are not round-tripped back to Excire Foto.

For my testing, I copied 12,574 photos (593 GB) from my main photo storage to an external SSD connected to my 2021 16-inch MacBook Pro, which is configured with an M1 Max processor. Importing them into Excire Foto took about 38 minutes, which entailed adding the images to its database, generating thumbnail previews, and analyzing the photos for content. Performance will depend on hardware, particularly in the analysis stage, but it’s safe to say that adding a large number of photos is a task that can run while you’re doing something else or overnight. Importing a typical day’s worth of 194 images took less than a minute.

Automatic keywording

Review and rate photos in Excire Foto 2022. Jeff Carlson

To me, those numbers are pretty impressive, considering the software is using machine learning to identify objects and scenes it recognizes. But still, do you really care how long an app takes to import images? Probably not.

But this is what you will care about: In many other apps, the next step after importing would be to go through your images and tag them with relevant terms to make them easier to find later. In Excire Foto, at this point all the images include automatically generated keywords—much of the work is already done for you. You can then jump to reviewing the photos by assigning star ratings and color labels, and quickly pick out the keepers.

I know I sound like a great big photo nerd about this, but it’s exactly the type of situation where computational photography can make a big difference. To not care about keywords and still get the advantages of tagged photos without any extra work? Magic. 

The keywords in blue were created by the app, while keywords in gray were ones I added manually. Jeff Carlson

I find that Excire Foto does a decent-to-good job of identifying objects and characteristics in the photos. It doesn’t catch everything, and occasionally adds keywords that aren’t accurate. That’s where manual intervention comes in. You can manually delete keywords or add new ones to flesh out the metadata with tags you’re likely to search for later. For example, I like to add the season name so I can quickly locate autumn or winter scenes. Tags that the software applies appear with blue outlines, while tags you add show up with gray outlines. It’s also easy to copy and paste keywords among multiple images.

All of the metadata is stored in the app’s database, not with the images themselves, so you’re not cluttering up your image directories with app-specific files (a pet peeve of mine, perhaps because I end up testing so many different ones). If you prefer to keep the data with the files, you can opt to always use sidecar files, which writes the information to standard .XMP text files. Or, you can manually store the metadata in sidecar files for just the images you want.
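
If you're curious what a sidecar actually contains, here's a hedged sketch that writes keywords into a minimal .XMP file with Python. The `dc:subject` bag is the standard home for keywords in XMP, but this pared-down template is an illustration; real apps write considerably more fields.

```python
from pathlib import Path

XMP_TEMPLATE = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description xmlns:dc="http://purl.org/dc/elements/1.1/">
   <dc:subject><rdf:Bag>{items}</rdf:Bag></dc:subject>
  </rdf:Description>
 </rdf:RDF>
</x:xmpmeta>"""

def write_sidecar(image_path: str, keywords: list[str]) -> Path:
    """Write keywords to an .xmp sidecar next to the image file."""
    items = "".join(f"<rdf:li>{kw}</rdf:li>" for kw in keywords)
    sidecar = Path(image_path).with_suffix(".xmp")
    sidecar.write_text(XMP_TEMPLATE.format(items=items))
    return sidecar

write_sidecar("DSCF3161.jpg", ["autumn", "lake", "foliage"])
```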

Search that takes search seriously

Explore the keyword hierarchy tree to perform specific term searches. Jeff Carlson

The flip side of working with keywords and other metadata is how quickly things can get complicated. Most apps try to keep the search as simple as possible to appeal to the most people, but Excire Foto embraces multiple ways to search for photos.

A keyword search lets you browse the existing tags and group them together; as you build criteria, you can see how many matches are made before running the search. The search results panel also keeps recent searches available for quick access.

You can get pretty darn specific with your searches. Jeff Carlson

Or consider the ability to find people in photos. The Find Faces search gives you options for the number of faces that appear, approximate ages, the ratio of male to female, and a preference for smiling or not smiling expressions.

The Find Faces interface allows you to search for particular attributes. Jeff Carlson

Curiously, the people search lacks the ability to name individuals. To locate a specific person you must open an image in which they appear, click the Find People button, select the box on the person’s face, and then run the search. You can save that search as a collection (such as “Jeff”), but it’s not dynamically updated. If you add new photos of that person, you need to manually add them to the collection.

Search for a person by first opening an image in which they appear and selecting their face identifier. Jeff Carlson

It appears that the software isn’t necessarily built for identifying specific people; instead, it’s looking for shared characteristics based on whichever source image is chosen. Some searches on my face brought up hundreds of results, while others drew fewer hits.

Identifying potential duplicates

New in Excire Foto 2022 is a feature for locating duplicate photos. This is a tricky task because what you and I think of as a duplicate might not match what the software identifies. For instance, in my library, I was surprised that performing a duplicate search set to find exact duplicates brought up only 10 matches.

That’s because this criterion looks for images that are the exact same file, not just visually similar. Those photos turned out to be shots that were imported twice for some reason (indicated by their file names: DSCF3161.jpg and DSCF3161-2.jpg).
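
As a generic illustration of how "exact duplicate" detection typically works (not Excire's actual algorithm, which isn't documented here), hashing each file's bytes groups files that are byte-for-byte identical regardless of name:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_exact_duplicates(folder: str) -> list[list[Path]]:
    """Group files whose contents are byte-identical, whatever their names.

    DSCF3161.jpg and DSCF3161-2.jpg would land in the same group; a
    visually similar but re-encoded JPEG would not, since its bytes differ.
    """
    groups: dict[str, list[Path]] = defaultdict(list)
    for path in Path(folder).rglob("*.jpg"):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        groups[digest].append(path)
    return [paths for paths in groups.values() if len(paths) > 1]
```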

How duplicates like this get into one’s library will forever be a mystery. Jeff Carlson

When I performed a duplicate search with the criteria set to Near Duplicates: Strict, I got more of what I expected. In the 1007 matches, many were groups of burst photos and also a selection of image files where I’d shot in Raw+JPEG mode and both versions were imported. The Duplicate Flagging Assistant includes the ability to reject non-Raw images, or in the advanced options you can drill down and flag photos with more specific criteria such as JPEGs with the short edges measuring less than 1024 pixels, for example.

Choose common presets for filtering possible duplicates, or click Advanced Settings to access more specific criteria. Jeff Carlson

As with all duplicate finding features, the software’s job is primarily to present you with the possible matches. It’s up to you to review the results and determine which images should be flagged or tossed.

End thoughts

It’s always tempting to jump straight to editing images, but ignoring metadata catches us out at some point. When a tool such as Excire Foto can shoulder a large portion of that work, we get to spend more time on editing, which is the more exciting part of the post-production process, anyway.

Meet Apple’s powerful new M2 MacBook Air
https://www.popphoto.com/how-to/apple-wwdc-announcements-2022/ | June 8, 2022
Plus: a first look at macOS 13 Ventura, iOS 16, and more.

Photoshop running on the new MacBook Air. Apple

Apple’s Worldwide Developer Conference (WWDC) kicked off this week with the announcement of a new MacBook Air and first looks at macOS 13 Ventura, iOS 16, iPadOS 16, and watchOS 9. It’s a giant stew of features and technologies meant to excite developers and prepare them for the software releases later this year.

But what about photographers? Several photo-related changes are coming, including improvements that take advantage of computational photography. Given this column’s interest in AI and ML technologies, that’s what I’m mostly going to focus on here.

Keep in mind that the operating system releases are currently available only as betas to developers, with full versions coming likely in September or October. As such, it’s possible that some announced features may be delayed or canceled before then. Also, Apple usually saves some details in reserve, particularly regarding the hardware capabilities of new iPhone models.

That said, here are the things that stood out to me.

The M2-Based MacBook Air and MacBook Pro

Photographers’ infamous Gear Acquisition Syndrome isn’t limited to camera bodies and lenses. The redesigned MacBook Air was the noteworthy hardware announcement, specifically because it’s powered by a new M2 processor.

The new MacBook Air uses Apple’s latest M2 chip. Apple

Related: Testing the advantages of Apple’s ProRAW format

In short, the M2 is faster and better than the M1, which itself was a stark improvement over the Intel processors Apple used before transitioning to its own silicon. A few standout specs will interest photographers: memory bandwidth is 100 GB/s, 50 percent more than the M1, which will speed up operations in general. (The M-series architecture uses a unified pool of memory for CPU and GPU operations instead of discrete chipsets, increasing performance; up to 24 GB of memory is available on the M2.)

The M2’s 20 billion transistors need more space than the M1’s dimensions. Apple

Photographers and videographers will also see improvements due to 10 GPU cores, compared to 8 on the M1, and an improved onboard media engine that supports high bandwidth 8K H.264 and HEVC video decoding, a ProRes video engine enabling playback of multiple 8K and 4K video streams, and a new image signal processor (ISP) that offers improved image noise reduction.

In short, the M2 offers more power while also being highly efficient and battery-friendly. (The battery life I get on my 2021 MacBook Pro with M1 Max processor is unreal compared to my 2019 Intel-based model, and I’ve heard the fan spin up only on a handful of occasions over the past 6 months.)

The MacBook Air’s design reflects the new MacBook Pro’s flattened profile—goodbye to the distinctive wedge shape that defined the Air since its introduction—and includes two Thunderbolt ports and a MagSafe charging port. The screen is now a 13.6-inch Liquid Retina display that supports 1 billion colors and can go up to 500 nits of brightness.

The MacBook Air is just as slim as its predecessor and available in four colors. Apple

Apple also announced a 13-inch MacBook Pro with an M2 processor in the same older design, which includes a Touch Bar but no MagSafe connector. The slight advantage of this model over the new MacBook Air is the inclusion of a fan for active cooling, which allows for longer sustained processing.

The M2 MacBook Air starts at $1199, and the M2 MacBook Pro starts at $1299. The M1-powered MacBook Air remains available as the $999 entry-level option.

Continuity Camera

Next on my list of interests is the Continuity Camera feature. Continuity refers to technologies that let you pass information between nearby Apple devices, such as copying text on the Mac and pasting it on an iPad. The Continuity Camera lets you use an iPhone 11 or later as a webcam.

Using a phone as a webcam isn’t new; I’ve long used Reincubate Camo software for this (and full disclosure, wrote a few articles for them). Apple brings its Center Stage technology for following subjects in the frame and Portrait Mode for artificially softening the background. It also features a Studio Light setting that boosts the exposure on the subject (you) and darkens the background to simulate external illumination like a ring light. Apple does these things by using machine learning to identify the subject.

But more intriguing is a new Desk View mode: It uses the iPhone’s Ultra-Wide camera and likely some AI technology to apply extreme distortion correction to display what’s on your desk as if you’re looking through a down-facing camera mounted above you. Other participants on the video call still see you in another frame, presumably captured by the normal Wide camera at the same time.

Continuity Camera uses the iPhone’s cameras as webcams to show a top-down view of the desktop. Apple

Acting on Photo Content

A few new features take advantage of the software’s ability to identify content within images and act on it.

The iPhone in iOS 16 will have a configurable lock screen with options for changing the typeface of the current time and including widgets for getting quick information at a glance. If the wallpaper image includes depth information, such as a Portrait Mode photo of someone, the screen automatically places the time behind them (a feature introduced in last year’s watchOS 8 update). It can also suggest photos from your library that would work well as lock screen images.

Awareness of subjects in a photo enables the new iOS 16 lock screen to simulate depth by obscuring the time. Apple

Another clever bit of subject recognition is the ability to lift a subject from the background. You can touch and hold a subject, which is automatically identified and extracted using machine learning, and then drag or copy it to another app, such as Messages.

Touch to select a subject and then drag it to another app. Apple

The previous iOS and iPadOS updates added Live Text, which lets you select any text that appears in an image. In the next version, you can also pause any frame of video and interact with the text. Developers will be able to add quick actions to do things like convert currency or translate text.

Photos App Improvements

Apple’s Photos app has always occupied an odd space: it’s the default place for saving and organizing images on each platform, but needs to have enough broad appeal that it doesn’t turn off average users who aren’t looking for complexity. I suspect many photographers turn to apps such as Lightroom or Capture One, but we all still rely on Photos as the gatekeeper for iPhone photos.

In the next update, Apple is introducing iCloud Shared Photo Library, a way for people with iCloud family plans to share a separate photo library with up to six members. Each person can share and receive all the photos, bringing photos from family events together in one library without encroaching on individual personal libraries.

An iCloud Shared Library collects photos from every family member. Apple

You can populate the library manually, or use person recognition to specify photos where two or more people are together. Or, you can set it up so that when family members are together, photos will automatically be sent to the shared library.

Other Photos improvements include a way to detect duplicates in the Photos app, the ability to copy and paste adjustments between photos or in batches, and more granular undo and redo options while editing.

Reference Mode on iPad Pro

The last thing I want to mention isn’t related to computational photography, but it’s cool nonetheless. Currently, you can use the Sidecar feature in macOS to use an iPad as an additional display, which is great when you need more screen real estate.

In macOS Ventura and iPadOS 16, an iPad Pro can be set up as a reference monitor to view color-consistent photos and videos as you edit. The catch is that according to Apple’s footnotes, only the 12.9-inch iPad Pro with its gorgeous Liquid Retina XDR display will work, and the Mac must have an M1 or M2 processor. (I added “gorgeous” there; it’s not in the footnotes.)

Use the 12.9-inch M1 iPad Pro as a color-accurate reference monitor. Apple

Speaking of screen real estate, iPadOS 16 finally—finally!—enables you to connect a single external display (up to 6K resolution) and use it to extend the iPad desktop, not just mirror the image. Again, that’s limited to models with the M1 processor, which currently includes the iPad Pro and the iPad Air. But if you’re the type who does a lot of work or photo editing on the iPad, external display support will give you more breathing room.

Extend the iPad Pro’s desktop by connecting an external display. Apple

A new feature called Stage Manager breaks apps out of their full-screen modes to enable up to four simultaneous app windows on the iPad and on the external display. If you’ve ever felt constrained running apps like Lightroom and Photoshop side-by-side in Split View on the same iPad screen, Stage Manager should open things up nicely. Another feature, Display Zoom, can also increase the pixel density to reveal more information on the M1-based iPad’s screen.

More to Come

I’ve focused mostly on features that affect photographers, but there are plenty of other new things coming in the fall. If nothing else, the iPad finally has its own Weather app and the Mac has a full Clock app. That may not sound like much, but it helps when you’re huddled in your car wondering if the rain will let up enough to capture dramatic clouds before sundown, or when you want a timer to remind you to get to bed at a respectable hour while you’re lost in editing.

Me(w)ow! Samsung’s new 200-megapixel smartphone sensor shot this larger-than-life cat print
https://www.popphoto.com/news/samsung-200-megapixel-cat-print/ | June 7, 2022
The colossal banner is just a whisker under 7,000 square feet—and the new sensor used to capture it is set to debut in smartphones next month.

The feline in question. Samsung

For the folks over at Samsung Semiconductor, the mighty megapixel reigns supreme. Not content to rest on the laurels of the 108-megapixel sensor it co-developed with Xiaomi back in 2019, the company launched the record-breaking 200-megapixel ISOCELL HP1 sensor late last year. In the process, it set a new benchmark for smartphone camera resolution.

As the ISOCELL HP1 sensor nears retail—the rumor mill currently has it landing next month in a new Motorola handset—its maker is looking for ways to emphasize the level of quality available from the chip. And as everyone knows, if you want the internet’s attention, it’s usually helpful to stick a cat in there somewhere. That’s precisely what Samsung’s marketing team has done, using an early sample of the new sensor to take an exceptionally high-res feline photo. They then turned it into an absolutely epic banner and wrapped it around a Korean high-rise building.

Capturing the shot and creating the print

Of course, since the ISOCELL HP1 hasn’t yet reached the market, Samsung couldn’t use it in an actual phone as none have been officially revealed. Hence, it used a camera module containing the new sensor mounted on a development board instead. Multiple engineers were needed to handle the framing, exposure, focus, and cat-wrangling.

Initially, the company says it used various SLR camera lenses in front of the sensor, and indeed the video shows an unidentified lens in use, although curiously it appears to be exposing the lens built into the camera module, rather than a naked sensor. Either way, the company says that the final image was taken without using an accessory lens.

Samsung says it initially tried mounting the ISOCELL HP1 behind SLR lenses, but took the final shot with the naked camera module as shown here. Samsung

Creating the banner was no less of an ordeal. With dimensions of approximately 92 feet wide by 72 feet tall, it is roughly the same length as a basketball court, and a fair bit wider to boot. (The total area works out to roughly 6,630 square feet.) Clearly, it couldn’t be printed in one go. Instead, Samsung printed off a dozen strips, each containing a 7.5-foot wide slice of the final image. These then had to be sewn together to create the end result, and a crane was needed to lift the completed banner into place before the big reveal.

Impressive resolution, but being forced to view from afar helps

There’s no denying that the amount of detail in the image is impressive, and it seems to hold up to examination even from a relatively close distance. With that said, Samsung is also taking advantage of the fact that it’s impossible to get truly up close to the print. With a sensor resolution of 16,384 by 12,288 pixels and assuming no cropping, the final output resolution would be roughly 14 to 15 pixels per inch, making each pixel large enough to be easily visible from very near.
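
That pixels-per-inch figure is easy to verify with a quick back-of-the-envelope computation, assuming the full 16,384-pixel width is spread across the banner's roughly 92-foot width:

```python
sensor_width_px = 16384
banner_width_in = 92 * 12                 # banner width in inches
print(sensor_width_px / banner_width_in)  # ~14.8 pixels per inch
```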

Still, that would be true regardless of the capture device. A sensor resolution of 200 megapixels goes far beyond any DSLR or mirrorless camera, which would have required multiple shots to yield the same output resolution without additional interpolation. Of course, most of the time that much resolution is going to be overkill. There’s not much call for basketball court-sized prints from the general public, and so you may be wondering what the point of all this is.


You mostly won’t use the full 200-megapixel resolution directly

One of the key advantages of such a high-resolution sensor is the ability to use pixel binning to improve light-gathering capability. Pixel binning combines information from groups of adjacent pixels together so that they act as one much larger pixel. This in turn allows them to gather more light than they would otherwise be able to individually. More light = better image quality. This process has the benefit of reducing the resolution of the output image to a more manageable size.

The 200-megapixel resolution additionally allows for a higher-quality “zoom” to be achieved by cropping the image and then downsampling slightly less for the final shot. That zoom can operate entirely silently, making it great not just for stills but also video. The extra resolution also provides finer-grained data on which focus, exposure, and AI algorithms can potentially operate as needed.
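
Putting rough numbers on both techniques for this particular chip (the 16-to-1 binning factor and the simple center crop are assumptions about how such sensors are typically used, not confirmed HP1 behavior):

```python
full_w, full_h = 16384, 12288     # the full ~200MP resolution

# 4x4 binning: 16 pixels act as one, trading resolution for light.
print(full_w // 4 * (full_h // 4) / 1e6)  # ~12.6 -> a ~12.5MP output

# 2x "zoom" by cropping the central quarter of the frame:
print(full_w // 2 * (full_h // 2) / 1e6)  # ~50.3MP left to downsample
```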

Future product plans have yet to be revealed

So where can we expect to see this device show up first? Some sources predict that Lenovo-owned Motorola will likely be the first to employ the ISOCELL HP1, debuting it with the launch of the Motorola Frontier next month. It’s also pretty much a given that the chip will feature in one of Samsung’s phones, just as we saw for the previous-generation chip. Presuming those rumors aren’t too far off the mark, watch this space for more news in the not-too-distant future!

Google’s Imagen text-to-image synthesizer creates strikingly accurate ‘photos’
https://www.popphoto.com/news/google-imagen-text-to-image/ | June 6, 2022
corgi in sushi house google imagen
With Google's text-to-image synthesizer, Imagen, users can dream up any possibility, including a corgi in a sushi house. Google Research, Brain Team

However, the technology also poses moral and ethical dilemmas.

The post Google’s Imagen text-to-image synthesizer creates strikingly accurate ‘photos’ appeared first on Popular Photography.


A cute corgi lives in a house made of sushi. A dragon fruit wearing a karate belt in the snow. A brain riding a rocket ship heading towards the moon. These are just a few of the AI-generated images produced by Google’s Imagen text-to-image diffusion model, and the results are incredibly accurate—sometimes humorously so. Researchers from Google recently unveiled these results in a paper published last month—and discussed the moral repercussions that come with using this latest technology.

Google’s Imagen beats the competition  

In their research paper, Google’s computer scientists confirmed that large language models pretrained on text alone are surprisingly effective at encoding text prompts for image generation. With Imagen, they scaled up the size of the language model and found that doing so improved the fidelity of the results more than enlarging the image-generation component itself.

Imagen’s FID score ranked well above other text-to-image synthesizers. Google Research, Brain Team

Related: When AI changes its mind

To measure results, the team evaluated Imagen on the Common Objects in Context (COCO) dataset, an open-source compendium of labeled images on which companies and researchers commonly train and benchmark image-recognition algorithms. Models are scored with the Frechet Inception Distance (FID), which measures how closely the statistics of generated images match those of real ones. A lower score indicates greater similarity between the real and generated images, with a perfect score being 0.0. Google’s Imagen diffusion model can create 1024-by-1024-pixel sample images and achieves an FID score of 7.27 on COCO.
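For reference, the standard FID formula (general background; the article doesn't spell it out) compares the mean and covariance of Inception-network features computed over the real and generated image sets:

```latex
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2
             + \operatorname{Tr}\!\left( \Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2} \right)
```

Here (μ_r, Σ_r) and (μ_g, Σ_g) are the feature mean and covariance for the real and generated images, respectively; identical distributions yield the perfect score of 0.0.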

According to the research paper, Imagen tops the charts with its FID score when compared to other models including DALL-E 2, VQ-GAN+CLIP, and Latent Diffusion Models. Findings indicated that Imagen was also preferred by human raters.

A dragon fruit wearing a karate belt is just one of the many images Imagen is capable of creating. Google Research, Brain Team

“For photorealism, Imagen achieves 39.2% preference rate indicating high image quality generation,” Google computer scientists report. “On the set with no people, there is a boost in the preference rate of Imagen to 43.6%, indicating Imagen’s limited ability to generate photorealistic people. On caption similarity, Imagen’s score is on-par with the original reference images, suggesting Imagen’s ability to generate images that align well with COCO captions.”

In addition to the COCO dataset, the Google team also created their own benchmark, which they call DrawBench. It consists of rigorous scenarios that test different models’ ability to synthesize images based on “compositionality, cardinality, spatial relations, long-form text, rare words, and challenging prompts,” going beyond the more limited COCO prompts.

Though fun, the technology presents moral and ethical dilemmas. Google Research, Brain Team

Related: How to use AI to edit your photos faster

Moral implications of Imagen and other AI text-to-image software

There’s a reason why none of the sample images contain people. In their conclusion, the Imagen team discusses the potential moral repercussions and societal impact of the technology, which are not always for the best. Already, the program exhibits a Western bias and viewpoint. And while the researchers acknowledge the potential for endless creativity, there are, unfortunately, also those who may attempt to use the software for harm. It is for this reason, among others, that Imagen is not available for public use—but that could change.

“On the other hand, generative methods can be leveraged for malicious purposes, including harassment and misinformation spread, and raise many concerns regarding social and cultural exclusion and bias,” the researchers write. “These considerations inform our decision not to release code or a public demo. In future work, we will explore a framework for responsible externalization that balances the value of external auditing with the risks of unrestricted open-access.”

The researchers acknowledge that more work is required before Imagen can be responsibly released to the public. Google Research, Brain Team

Additionally, the researchers noted that due to the available datasets on which Imagen is trained, the program exhibits bias. “Dataset audits have revealed these datasets tend to reflect social stereotypes, oppressive viewpoints, and derogatory, or otherwise harmful, associations to marginalized identity groups.”

While the technology is certainly fun (who wouldn’t want to whip up an image of an alien octopus floating through a portal while reading a newspaper?), it’s clear that more work and research are required before Imagen (and other programs) can be responsibly released to the public. Some, like DALL-E 2, have deployed safeguards, but their efficacy remains to be seen. The Imagen team acknowledges the gargantuan, though necessary, task of thoroughly mitigating negative consequences.

“While we do not directly address these challenges in this work, an awareness of the limitations of our training data guides our decision not to release Imagen for public use,” they finish. “We strongly caution against the use of text-to-image generation methods for any user-facing tools without close care and attention to the contents of the training dataset.”

The post Google’s Imagen text-to-image synthesizer creates strikingly accurate ‘photos’ appeared first on Popular Photography.


Turbocharge your wedding edits with the help of AI https://www.popphoto.com/how-to/edit-wedding-photos-faster-ai/ Fri, 27 May 2022 12:00:00 +0000 https://www.popphoto.com/?p=172905
Carol Harrold

Here's how AI tools in Lightroom, Photoshop, and Luminar Neo can help speed up the time it takes to edit a wedding gallery.



Photographing someone’s Big Day is a beautiful—and stressful—job, especially if you’re not a seasoned pro. This week, PopPhoto is serving up our best advice for capturing that special kind of joy.

A typical wedding day photoshoot can result in thousands of images. After the photographer has spent hours actively capturing the event, hours of culling and editing still loom ahead of them. In an earlier Smarter Image column, I offered an overview of apps designed to sort and edit your photos faster. For this installment, I want to look at the editing side and how AI tools can shave off some of that time.

Consider this situation: You’ve done your initial sort and now you have a series of photos of the bride. They were made in the same location, but the bride strikes different poses and the framing is slightly different from shot to shot. They could all use some editing, and because they’re all similar they’d get the same edits.

This is where automation comes in. In many apps, you can apply edits to one of the images and then copy or sync those edits to the rest. However, that typically works globally, applying the same tone and color adjustments evenly across each full image. What if the overall photo is fine but you want to increase the exposure on just the bride to make her stand out against the backdrop? Well, then you’re back to editing each image individually.

But not necessarily. The advantage of AI-assisted processing is that the software identifies objects within a scene. When the software can pick out the bride and apply edits only to her—even if she moves within the frame—it can save a lot of time and effort.
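To make that pattern concrete before diving into the individual apps, here’s a minimal sketch in Python. The threshold-based detect_subject stand-in and the bride_series folder are purely hypothetical; each app below uses its own trained segmentation model. What matters is the structure: detect, mask, adjust, repeat per file.

```python
from pathlib import Path

import numpy as np
from PIL import Image

def detect_subject(img: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for an AI subject selector. Real apps use a
    trained segmentation model; this crude brightness threshold just
    lets the example run end to end."""
    luma = img.mean(axis=2)
    return (luma > np.median(luma)).astype(np.float32)

def brighten_subject(path: Path, gain: float = 1.3) -> Image.Image:
    img = np.asarray(Image.open(path).convert("RGB")).astype(np.float32)
    mask = detect_subject(img)[..., None]   # H x W x 1, values 0..1
    out = img * (1 + (gain - 1) * mask)     # boost only inside the mask
    return Image.fromarray(np.clip(out, 0, 255).astype(np.uint8))

# The key point: the mask is recomputed for every frame, so the edit
# follows the subject even as she shifts position from shot to shot.
for p in sorted(Path("bride_series").glob("*.jpg")):  # hypothetical folder
    brighten_subject(p).save(p.with_name(p.stem + "_edited.jpg"))
```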

For this task I’m looking specifically at three apps: Adobe Photoshop, Adobe Lightroom Classic (the same features appear in the cloud-based Lightroom desktop app), and Skylum Luminar Neo. These apps can identify people and make selective edits on them, and batch-apply those edits to other images.

First, let’s look at the example photos I’m working with to identify what they need. Seattle-based photographer Carol Harrold of Carol Harrold Photography graciously allowed me to use a series of photos from a recent wedding shoot. These are Nikon .NEF Raw files, straight out of the camera.

An unedited set of six similar photos of the bride. Carol Harrold

The bride is in shadow to avoid harsh highlights on a sunny day; as a consequence, she would benefit from additional exposure. Although she’s posing in one spot, she faces two different directions and naturally appears in slightly different positions within each shot, so a single mask copied between the images wouldn’t be accurate. For the purposes of this article, I’m focusing only on the bride’s exposure and not making other adjustments.

Adobe Photoshop

One of Photoshop’s superpowers is the Actions panel, which is where you can automate all sorts of things in the app. And for our purposes, that includes the ability to use the new Select Subject command in an automation.

In this case, I’ve opened the original Raw files, which processes them through the Adobe Camera Raw module; I kept the settings there unchanged. Knowing that I want to apply the same settings to all of the files, I’ll open the Actions panel and click the [+] button to create a new action, name it, and start recording. 

Next, I’ll choose Select > Subject, which selects the bride and adds that as a step in the action.

Selecting the subject while recording an action inserts the Select > Subject command as a step. Carol Harrold

To adjust the exposure within the selection, I’ll create a new Curves adjustment layer. Doing so automatically makes a mask from the selection, and when I adjust the curve’s properties to lighten the bride, the effect applies only in that selection.

I’m using a Curves adjustment to increase exposure on the bride in the first photo, though I could use other tools as well. Carol Harrold

In the interests of keeping things simple for this example, I’ll stick to just that adjustment. In the Actions panel, I’ll click the Stop Recording button. Now I have an action that will select any subject in a photo and increase the exposure using the curve adjustment.

To apply the edits to the set of photos, I’ll choose File > Automate > Batch, and choose the recorded action to run. Since all the images are currently open in Photoshop, I’ll set the Source as Opened Files and the Destination as None, which runs the action on the files without saving them. I could just as easily point it at a folder on disk and create new edited versions.

It’s not exciting looking, but the Batch dialog is what makes the automation possible between images.

When I click OK, the action runs and the bride is brightened in each of the images.

In a few seconds, the batch process applies the edits and lightens the bride in the other photos. Carol Harrold

The results can seem pretty magical when you consider the time saved by not processing each photo individually, but as with any task involving craftsmanship, make sure to check the details. It’s great that Photoshop can detect the subject, but we’re also assuming it’s detecting subjects correctly each time. If we zoom in on one, for example, part of the bride’s shoulder was not selected, leading to a tone mismatch.

Watch for areas the AI tool might have missed, like this section of the bride’s shoulder. Carol Harrold

The upside is that the selection exists as a mask on the Curves layer. All I have to do is select the missed area using the Quick Selection tool and fill it with white to make the adjustment appear there; I could also paint it in with the Brush tool. So you may need to apply some touch-ups here and there.

Filling in that portion of the mask fixes the missed selection. Carol Harrold

Lightroom Classic and Lightroom

Photographers who use Lightroom Classic and Lightroom are no doubt familiar with the ability to sync Develop settings among multiple photos—it’s a great way to apply a specific look or LUT to an entire set that could be a signature style or even just a subtle softening effect. The Lightroom apps also incorporate a Select Subject command, making it easy to mask the bride and make our adjustments.

With the bride masked, I can increase the exposure just on her. Carol Harrold

In Lightroom Classic, with one photo edited, I can return to the Library module, select the other similar images, and click the Sync Settings button, or choose Photo > Develop Settings > Sync Settings. (To do the same in Lightroom desktop, select the edited photo in the All Photos view; choose Photo > Copy Edit Settings; select the other images you want to change; and then choose Photo > Paste Edit Settings.)

However, there’s a catch: the Select Subject mask needs to be recomputed before it’s applied. In Lightroom Classic, when you click Sync Settings, the dialog that appears leaves the Masking option deselected and includes the message “AI-powered selections need to be recomputed on the target photo.”

Lightroom Classic needs to identify the subject in each image that is synced from the original edit. Carol Harrold

That requires an additional step. After selecting the mask(s) in the dialog and clicking Synchronize, I need to open the next image in the Develop module, click the Masking button, and click the Update button in the panel. 

It’s an extra step, but all you have to do is select the mask and click Update. Carol Harrold

Doing so reapplies the mask and the settings I made in the first image. Fortunately, with the filmstrip visible at the bottom of the screen, clicking to the next image keeps the focus in the Masking panel, so I can step through each image and click Update. (The process is similar in the Edit panel in Lightroom desktop.)

As with Photoshop, you’ll need to take another look at each image to ensure the mask was applied correctly, and add or remove portions as needed.

Luminar Neo

I frequently cite Luminar’s image syncing as a great example of how machine learning can do the right thing between images. Using the Face AI and Skin AI tools, you can quickly lighten a face, enhance the eyes, remove dark circles, and apply realistic skin smoothing, and then copy those edits to other photos. From the software’s point of view, you’re not asking it to make a change to a specific area of pixels; it knows that in each photo it should first locate the face, and then apply those edits regardless of where in the frame the face appears.

I can still do that with these photos, but it doesn’t help with the exposure of the bride’s entire body. So instead, I’ll use the Relight AI tool in Luminar Neo and increase the Brightness Near value. The software identifies the bride as the foreground subject, increasing the illumination on her without affecting the background.

Luminar Neo’s Relight AI tool brightens the bride, which it has identified as the foreground object. Carol Harrold
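Skylum hasn’t published how Relight AI works internally, but conceptually a “Brightness Near” control can be modeled as a gain weighted by estimated depth rather than by a hard subject mask; here is a sketch under that assumption:

```python
import numpy as np

def relight_near(img: np.ndarray, depth: np.ndarray,
                 near_gain: float = 1.4) -> np.ndarray:
    """Brighten pixels in proportion to estimated nearness.

    `depth` is a per-pixel map scaled to 0..1 (1 = nearest), such as the
    output of a monocular depth-estimation model. Unlike a binary mask,
    the boost falls off smoothly with distance."""
    gain = 1 + (near_gain - 1) * depth[..., None]
    return np.clip(img.astype(np.float32) * gain, 0, 255).astype(np.uint8)

img = np.full((4, 4, 3), 100, dtype=np.uint8)
depth = np.linspace(0, 1, 16).reshape(4, 4)  # toy near-to-far ramp
out = relight_near(img, depth)
print(out[3, 3], out[0, 0])  # [140 140 140] (near) vs [100 100 100] (far)
```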

Returning to the Catalog view, we can see the difference in the bride’s exposure in the first photo compared to the others. 

Before syncing in Luminar Neo. Carol Harrold

To apply that edit to the rest, I’ll select them all, making sure the edited version is selected first (indicated by the blue selection outline), and then choose Image > Adjustments > Sync Adjustments. After a few minutes of processing, the other images are updated with the same edit. 

After syncing, the image series now features the lightened bride. Carol Harrold

The results are pretty good, with some caveats. On a couple of the shots, the edges are a bit harsh, requiring a trip back to the Relight AI tool to increase the Dehalo control. I should also point out that the results you see above were from the second attempt; on the first try the app registered that it had applied the edit, but the images remained unchanged. I had to revert the photos to their original states and start over.

The latest update to Luminar Neo adds Masking AI technology, which scans the image and makes the individual areas it finds selectable as masks, such as Human, Flora, and Architecture. I thought that it would allow me to identify a more specific mask, but instead, it did the opposite when synced to the rest, applying the adjustment to what appears to be the same pixel area as the source image.

Unfortunately, the Masking AI feature doesn’t work correctly when syncing adjustments between photos. Carol Harrold

The AI Assistant

Wedding photographers often work with one or more assistants, so think of these AI-powered features as another assistant. Batch processing shots with software that can intelligently target adjustments lets you turn around a large number of images in a short amount of time.

The post Turbocharge your wedding edits with the help of AI appeared first on Popular Photography.

