A few months ago, I was sitting with a jewelry designer, walking her through some AI image tools. She’d been in the business for years. Beautiful work. A drawer full of sketches for pieces she’d imagined but never made.
I asked why some of them never got produced.
“Manufacturing is expensive,” she said. “And I never know which ones will actually sell.”
That’s a real constraint. You design something, have it made, list it — and then find out whether people want it. The risk is front-loaded. If the piece doesn’t sell, you’ve already spent the money.
I showed her a different way to think about it.
The Demo
AI image generation has gotten good enough that you can take a design concept — even a rough sketch, even a written description — and produce something that looks like a professional product photo. Not an obvious render. Not a drawing. Something that looks like the real object, shot under good lighting, on a clean background.
I’ve done this with architecture: a client had hand-drawn sketches of spaces they were trying to sell, and we turned them into photorealistic room renders that looked like finished design photography. The prospective buyers responded to the renders completely differently from the sketches — because the renders felt real.
The same logic applies to any product category.
For jewelry, you start with a description or a sketch. You feed it into Gemini with some reference images for style and scale. You iterate until you have something that looks like a real piece, photographed on a hand or against a clean surface. Then you take that image and put it to work.
The Business Model Shift
Here’s where it gets interesting from a business standpoint.
Traditional product development: design it → manufacture it → list it → find out if it sells.
Risk is at step two. Once you’ve manufactured, you’re committed.
AI-enabled product development: design it → generate a photorealistic image → list it on a pre-order or waitlist page → find out if it sells → then manufacture.
Now the risk sits on the far side of market signal: you don’t spend money on production until you know people want the piece.
When I walked the jewelry designer through this, I described it this way: imagine you post the image, run a little traffic to it, and collect pre-orders. If you get fifty people who want this piece — make it. If you get three — that tells you something. Move on to the next design.
She immediately realized something: she had a full drawer of designs she could test this way. Not sequentially, not expensively — by putting real-looking images in front of real customers and watching what happened.
Why This Works Beyond Jewelry
The principle applies across product categories:
Apparel. Design a jacket, generate a model shot in that jacket, run a pre-order page, cut the pattern only if you hit a threshold.
Home goods. A candle line, a ceramic piece, a textile design — all of these can be visualized realistically before the first prototype is made.
Food and specialty products. Package design, label design, new flavor concepts — you can test the visual and the concept before production.
Accessories, gadgets, niche hardware. Anything where tooling or initial production runs are expensive enough that a wrong bet hurts.
The constraint used to be that you needed a real product to photograph. Now you need a clear enough concept for the image tools to work from. That’s a fundamentally different starting point.
How to Actually Do This
The workflow isn’t complicated, but there are a few things that matter for getting images that look genuinely professional rather than obviously AI-generated.
Start with good reference images. Give the model examples of the style you’re going for — other products in your category that have the look and feel you want. The more visual context you provide, the more accurately it can match your intent.
Be specific about materials and settings. “A gold ring with a small diamond” produces generic results. “A 14k yellow gold band with a 0.3ct round brilliant diamond, shot on a marble surface, natural window light from the left” gives the model enough to work from.
Iterate in fresh sessions. Image quality degrades in long chat threads as the model loses track of what you established at the start. When you’ve found a direction that’s working, start a new session and rebuild from your best prompts.
Use the output as a proof of concept, not a final asset. The pre-order page image doesn’t need to be perfect — it needs to be clear enough that a real buyer would understand what they’re ordering and feel confident about it. You can invest in proper photography once you have confirmed demand.
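To make the specificity point concrete, here is a minimal sketch of a prompt builder that assembles the kind of detailed, photo-style prompt described above. The function name, fields, and phrasing are illustrative assumptions, not part of any image tool’s API:

```python
# Minimal prompt-builder sketch. All names and example values are
# illustrative assumptions; adapt the fields to your product category.

def build_product_prompt(item: str, materials: str, setting: str, lighting: str) -> str:
    """Assemble a specific, photo-style prompt from concrete details."""
    return (
        f"Professional product photo of {item}, {materials}, "
        f"shot {setting}, {lighting}. Clean background, shallow depth of field."
    )

# The generic "gold ring with a small diamond" becomes:
prompt = build_product_prompt(
    item="a ring",
    materials="14k yellow gold band with a 0.3ct round brilliant diamond",
    setting="on a marble surface",
    lighting="natural window light from the left",
)
print(prompt)
```

The point isn’t the code, it’s the structure: forcing yourself to fill in materials, surface, and lighting separately is what turns a vague request into something the model can render convincingly.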
The Bigger Shift
What’s actually changed here is where the risk sits in the product development process.
For most of the history of making things, you had to commit capital before you had signal. Manufacturing required it. Distribution required it. The businesses that survived were the ones that had good instincts about demand, or enough margin to absorb mistakes.
AI images change the cost of getting signal. Generating a photorealistic product image is cheap. Running a pre-order page is cheap. Finding out whether fifty people want a thing before you make it — that used to be expensive. Now it’s fast and nearly free.
The jewelry designer had a drawer full of designs she’d been sitting on for years. Some of them probably would have sold. Some of them probably wouldn’t. She had no cheap way to tell the difference.
Now she does.
Thanh Pham is the founder of Asian Efficiency. He teaches AI fluency through workshops and the 4-Day AI Sprint — designed to take people from occasional AI use to real, running workflows.
