Photoshop adds a new kind of generative manipulation
Adobe’s newest Photoshop trick is not simply another background fill or one-click compositing tool. The new Rotate Object feature, available in Photoshop version 27.6 according to ZDNET’s hands-on report, lets users rotate a 2D object in three-dimensional space and then uses AI to generate the portions of the image the camera never captured. It is the kind of feature that would have sounded implausible in mainstream image editing not long ago. Now it sits inside a familiar menu.
What makes the feature notable is not just the novelty of the effect, but the way it extends Photoshop’s role. The software is no longer limited to adjusting or combining captured views. It is increasingly willing to infer new ones. Rotate Object pushes Photoshop from editing pixels toward synthesizing plausible visual geometry.
More than a simple rotation command
Traditional rotation tools in image editors work in two dimensions. They let users spin, tilt, or reposition an object on the page, but they do not truly create a new perspective. ZDNET’s description of Rotate Object makes the distinction clear. The tool allows rotation around the X and Y axes, perspective adjustment, and rotation around a center point, while the AI generates the missing visual information needed to support the new view.
That is a significant change in what the user is asking the software to do. Instead of merely transforming what is visible, the tool must imagine what was hidden. A side of a laptop lid, the back edge of an object, or the shifted angle of a surface may never have existed in the original photograph. The model works from a lower-resolution preview state, fills in those gaps, and then cleans up the image.
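To make the geometric half of the operation concrete, here is a minimal sketch of what rotating a flat object around the X and Y axes and then reprojecting it through a virtual camera involves. This is not Adobe's implementation, and the generative inpainting of newly exposed surfaces has no analogue this simple; the function, parameter names, and camera model are all illustrative assumptions.

```python
import math

def rotate_project(points, yaw, pitch, f=1.0, z0=3.0):
    """Rotate 2D image-plane points around the Y (yaw) and X (pitch) axes,
    then perspective-project them back to 2D.

    This is an illustrative sketch, not Photoshop's actual pipeline.
    points: list of (x, y) corners of the flat object
    yaw, pitch: rotation angles in radians
    f: focal length of the virtual camera
    z0: distance from the camera to the object's pivot
    """
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    out = []
    for x, y in points:
        # Rotate around the Y axis: x picks up a depth (z) component
        x1, z1 = cy * x, sy * x
        # Rotate around the X axis: y and z mix
        y2, z2 = cp * y - sp * z1, sp * y + cp * z1
        # Push the point away from the camera and perspective-divide
        z = z0 + z2
        out.append((f * x1 / z, f * y2 / z))
    return out

square = [(-1, -1), (1, -1), (1, 1), (-1, 1)]
# A 30-degree yaw foreshortens the square asymmetrically: the edge
# turned away from the camera projects smaller than the nearer edge.
rotated = rotate_project(square, yaw=math.radians(30), pitch=0.0)
```

The projective math determines where each visible pixel lands; everything that falls on a newly exposed surface is exactly the region the generative model must invent.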
The result, at least in the hands-on account, is compelling enough to feel like a major creative shortcut. But it is also imperfect in a way that says a lot about where these tools stand today.
The promise is real, but so are the limits
ZDNET’s key takeaway is not that Rotate Object works flawlessly. It is that the feature can do “really cool things,” especially when paired with Photoshop’s Harmonize feature, while still requiring human skill, judgment, and cleanup. That caveat is important. The AI can generate details the camera never saw, but it does not actually know what the hidden side of an object looked like. It is making a plausible guess.
This matters because plausibility is not the same thing as truth. In design, advertising, concept work, or rough compositing, plausible may be enough. In documentary, product-accurate, or evidentiary contexts, it may not be. The software’s ability to invent unseen image regions increases creative flexibility, but it also increases the burden on users to understand when the result is interpretive rather than faithful.
The report notes that Adobe does not currently allow the user to seed the rotation with more than one view of an object. That limitation makes sense as a near-term product simplification, but it also exposes the guesswork. More reference views could help constrain the model. Without them, the output remains part transformation and part hallucinated reconstruction.
Why this matters for creative workflows
Even with those limits, the workflow implications are substantial. Editors, designers, and compositors often spend significant time searching for alternate angles, reshooting assets, or manually painting perspective changes that were never photographed. A tool that can create a usable rotated view from a single image may compress those steps dramatically in many everyday scenarios.
Paired with Harmonize, which adjusts color, lighting, and shadows to match a background, Rotate Object becomes even more powerful. ZDNET’s emphasis on their combined effect points to Adobe’s broader strategy: not just isolated AI gimmicks, but a stack of generative features that work together. One tool adjusts orientation, another integrates lighting and tone, and together they bring the composite closer to something visually coherent with less manual labor.
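Harmonize's internals are not public, but a classic, far simpler stand-in for this kind of tone matching is statistical color transfer in the spirit of Reinhard et al.: shift each foreground channel's mean and spread toward the background's. The sketch below is purely illustrative of that general idea, not Adobe's method; the function name and value ranges are assumptions.

```python
import math

def harmonize_channel(fg, bg):
    """Shift a foreground channel's mean and spread toward the background's.

    Illustrative mean/std color transfer, not Adobe's Harmonize algorithm.
    fg, bg: lists of pixel values in [0, 255] for one color channel.
    """
    def stats(vals):
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        return mean, math.sqrt(var) or 1.0  # avoid dividing by zero spread

    f_mean, f_std = stats(fg)
    b_mean, b_std = stats(bg)
    # Re-center on the background mean, re-scale to its spread, clamp
    return [min(255.0, max(0.0, (v - f_mean) * (b_std / f_std) + b_mean))
            for v in fg]
```

A real harmonization pass also has to reconcile lighting direction and cast shadows, which is where a learned model earns its keep over per-channel statistics.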
That does not remove the editor from the process. If anything, it changes the nature of editing skill. The value shifts away from executing every transformation by hand and toward directing, evaluating, and correcting model output.
Photoshop is becoming a judgment-first tool
One of the most useful lines in the report is also the simplest: great results still require Photoshop skill, judgment, and cleanup. That observation cuts through a lot of AI-product hype. The most capable creative tools are not making expertise irrelevant. They are making discernment more central.
Users still need to decide when the generated perspective looks believable, when structural details break down, and how much retouching is needed before the image is fit for purpose. In that sense, Rotate Object is less an autopilot than a force multiplier. It expands what can be attempted quickly, but it does not erase the need for craft.
A glimpse of where image editing is heading
Rotate Object is a strong example of a broader transition already underway in creative software. Editing platforms are steadily moving from adjustment and arrangement toward synthesis and reconstruction. Instead of asking, “How do I modify this image?” users increasingly ask, “Can the software generate the version of this image I wish I had?”
Adobe’s answer, at least here, is increasingly yes. But the answer comes with a condition: you still need to know what a good result looks like. That tension is likely to define the next stage of professional creative tools. The AI can invent. The human still has to decide whether the invention works.
For Photoshop users, that is both the attraction and the caution. Rotate Object opens a new lane for fast visual experimentation. It also blurs the line between editing what was captured and constructing what never existed. That is powerful and, if used carelessly, potentially misleading. For now, Adobe appears to be betting that its users will want the power first and manage the responsibility themselves.
This article is based on reporting by ZDNET; the original article was published on zdnet.com.