Trying Generative Fill in Photoshop

The year is 2023. Machine learning is all the rage (for good reason, I believe).

Computer-generated graphics is one area where the computing industry is experimenting with ML. I recently installed the Photoshop beta, which gives access to this tech through its new "generative fill" feature. I tested the feature out on some recent photos of mine.

I'll show my results on a photo I took of an Osprey. Here's the original frame I captured, shown with a white border:

As you can see, I wasn't able to fit the bird's entire wingspan into the shot. I used generative fill after expanding the image, and picked my favorite from three ML-drawn options:

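For anyone who'd rather see the canvas-expansion step as code than as Photoshop's Canvas Size dialog, it boils down to something like this Pillow sketch (the filename and padding amounts here are made up, not what I actually used):

```python
from PIL import Image

# Hypothetical filename; open the original frame.
img = Image.open("osprey.jpg")

# Expand the canvas roughly 20% on each side so generative fill has
# room to draw the missing wingtips. The new pixels start out white.
pad_w, pad_h = img.width // 5, img.height // 5
canvas = Image.new("RGB", (img.width + 2 * pad_w, img.height + 2 * pad_h), "white")
canvas.paste(img, (pad_w, pad_h))
canvas.save("osprey_expanded.jpg")
```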
The results look great shrunk down to 15% for display here. The length and positioning of the feathers look nice, and the added sky background blends in fine. At 100% scale, though, it isn't so perfect. The first problem I noticed was that the drawn-in feather tips appear out of focus. The rest of the wings are fairly sharp, so I'm not sure why Photoshop drew the tips this way:

I also noticed that the background didn't blend in as smoothly as I'd hoped. I could see the faint outline of where the original frame was. By applying a Camera Raw filter that increases "Texture," "Clarity," and "Dehaze," we can see how the generated sky area is subtly different from the existing sky:

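If you want to run a similar seam check outside of Photoshop, a crude local-contrast boost does the job. The sketch below uses a strong unsharp mask in Pillow; it is not Adobe's Texture/Clarity/Dehaze math, just an exaggeration of fine tonal differences that reveals the seam in a similar way (the filename is hypothetical):

```python
from PIL import Image, ImageFilter

# Hypothetical filename for the generative-fill result.
img = Image.open("osprey_filled.jpg")

# A heavy-handed local-contrast boost: a large-radius, high-percent
# unsharp mask. Subtle differences between the original sky and the
# generated sky get amplified, making the old frame edge visible.
revealed = img.filter(ImageFilter.UnsharpMask(radius=25, percent=300, threshold=0))
revealed.save("osprey_seam_check.jpg")
```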
This means that further edits could reveal the discontinuity between the original sky pixels and the drawn-in ones. That's less of an issue if generative fill is applied as the final step, but I didn't think the generated sky was good enough to use in the final image. Instead, I created an entirely new sky gradient from the original sky colors, added some noise, and pasted the Osprey on top of it:

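Here's roughly what that sky rebuild looks like as a NumPy/Pillow sketch. The two gradient colors, the canvas size, and the cutout filename are placeholders, not the actual values I sampled:

```python
import numpy as np
from PIL import Image

W, H = 3000, 2000  # canvas size (placeholder)

# Sky colors sampled from the top and bottom of the original sky
# (these RGB values are placeholders, not the real samples).
top = np.array([96, 148, 205], dtype=np.float64)
bottom = np.array([168, 200, 235], dtype=np.float64)

# Vertical linear gradient: interpolate per row, broadcast across x.
t = np.linspace(0.0, 1.0, H)[:, None, None]    # shape (H, 1, 1)
sky = top * (1.0 - t) + bottom * t             # shape (H, 1, 3)
sky = np.broadcast_to(sky, (H, W, 3)).copy()

# A little Gaussian noise so the gradient doesn't band and roughly
# matches the grain of the photo. Added before the 8-bit conversion.
sky += np.random.normal(scale=2.0, size=sky.shape)
sky_img = Image.fromarray(np.clip(sky, 0, 255).astype(np.uint8))

# "osprey_cutout.png" is a hypothetical RGBA cutout of the bird;
# its alpha channel serves as the paste mask.
bird = Image.open("osprey_cutout.png")
sky_img.paste(bird, (0, 0), bird)
sky_img.save("osprey_new_sky.jpg")
```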
To fix the blurred feather tips, I brought the image into Topaz Sharpen AI and used its masking feature to sharpen just those areas:

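The same selective-sharpening idea can be sketched without Topaz's ML model: sharpen the whole frame with a plain unsharp mask, then composite the sharpened pixels back only where a hand-painted mask is white. The filenames here are hypothetical:

```python
from PIL import Image, ImageFilter

# Hypothetical filenames: the composited image, plus a hand-painted
# grayscale mask that is white over the blurry feather tips.
img = Image.open("osprey_new_sky.jpg")
mask = Image.open("feather_tips_mask.png").convert("L")

# Sharpen everything, then keep sharpened pixels only where the mask
# is white. Plain unsharp masking, not Sharpen AI's ML model, but the
# selective-masking idea is the same.
sharpened = img.filter(ImageFilter.UnsharpMask(radius=3, percent=200, threshold=2))
result = Image.composite(sharpened, img, mask)
result.save("osprey_final.jpg")
```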
If you know where to look on the feathers, you can still see where the original image ends and the ML-drawn pixels begin, but it isn't really noticeable when the image is viewed as a whole:

Overall I'm happy with, but not blown away by, the results this new feature generated. I tried it out on a few other images in a recent gallery. (Of the three Osprey shots I tried it on, this one worked the best.) Generative fill did work very well for replacing some objects (birds) with background content, and you can see those results in the linked gallery. I think painting over unwanted areas is the best initial use of this feature.

I also think generative fill will perform much better within a few years. It will surely be able to generate perfectly plausible canvas extensions like the one I attempted here. I'm not sure what that means for the future of photography, but it'll be fun to see where this goes.