Image to Image

Edit and iterate on images using AI. Upload references to control composition, change styles, or generate consistent variations of characters and products.

Image to Image (I2I) Generation

Image to Image allows you to guide the AI's generation process using both a text prompt and an input image. This makes it easy to control the composition and color palette of the output.

How It Works (Technical)

Technically, I2I starts the diffusion denoising process not from random Gaussian noise (like Text-to-Image), but from a noised version of your uploaded image.
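The "noised version" mentioned above is produced by the standard forward-diffusion formula, which blends the clean image with Gaussian noise according to a cumulative schedule value ᾱ_t. A minimal sketch (the schedule value here is illustrative, not tied to any specific model):

```python
import numpy as np

def noise_image(x0, alpha_bar_t, seed=None):
    """Forward diffusion step: mix the clean image x0 with Gaussian
    noise. alpha_bar_t is the cumulative schedule value at timestep t
    (1.0 = no noise added, 0.0 = pure noise)."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps

# Illustrative: a tiny 2x2 grayscale "image", moderately noised.
x0 = np.array([[0.2, 0.8], [0.5, 0.1]])
xt = noise_image(x0, alpha_bar_t=0.7, seed=0)
```

With `alpha_bar_t=1.0` the input passes through unchanged; lowering it toward 0 replaces more of the image with noise, which is what gives the denoiser room to reinterpret the content.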

The Denoising Strength (typically a value between 0 and 1) determines how much of the original image is preserved.

  • High Strength: The AI changes the image significantly (concept change).
  • Low Strength: The AI changes only texture and lighting (refinement).
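In practice, many open-source diffusion pipelines implement strength by deciding how many of the scheduled denoising steps actually run. A sketch of that mapping (the function name is ours, not a real library API):

```python
def steps_to_run(num_inference_steps: int, strength: float) -> int:
    """Map denoising strength (0..1) to the number of scheduled
    denoising steps that actually execute. strength=1.0 runs the full
    schedule (the input is mostly discarded); strength near 0 runs
    only the last few steps, preserving the input."""
    return min(int(num_inference_steps * strength), num_inference_steps)

# With a 30-step schedule:
steps_to_run(30, 0.5)  # refinement: 15 steps
steps_to_run(30, 1.0)  # full regeneration: 30 steps
```

This is why low-strength runs are also faster: fewer denoising steps execute, and the output stays closer to the uploaded pixels.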

Core Models

  • Flux I2I: High fidelity, preserves structure.
  • Stable Diffusion XL (SDXL) Refiner: Great for enhancing details.

Use Cases

1. Sketch to Rendering

Architects and designers can upload a basic line drawing or whiteboard sketch and prompt "photorealistic building" to get a high-quality visualization while keeping the sketched shape.

2. Variation Generation

E-commerce sellers can upload a product photo and prompt "on a wooden table in a garden" to change the background without needing a photoshoot.

3. Character Consistency

Upload a portrait of a character and prompt different styles (e.g., "as a Pixar character", "as a Van Gogh painting") to see the same person in different universes.

Step-by-Step Guide

  1. Reference Upload: Drag and drop your source image.
  2. Prompting: Describe what you want the FINAL image to look like.
    • Tip: If you want to keep the original subject, describe it in the prompt too.
  3. Ratio Selection: Usually, you want to match the aspect ratio of the uploaded image to avoid cropping.
  4. Generate: Produce variations.
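The ratio-selection step above can be automated by picking, from the available output ratios, the one closest to the uploaded image. The candidate list below is a hypothetical example, not the product's actual set of options:

```python
# Hypothetical set of output aspect ratios a UI might offer, as (w, h).
CANDIDATES = [(1, 1), (4, 3), (3, 4), (16, 9), (9, 16)]

def closest_ratio(width: int, height: int) -> tuple:
    """Return the candidate aspect ratio closest to the source image,
    minimizing the absolute difference of width/height ratios."""
    target = width / height
    return min(CANDIDATES, key=lambda wh: abs(wh[0] / wh[1] - target))

closest_ratio(1920, 1080)  # a 16:9 photo maps to (16, 9)
```

Matching the ratio this way avoids the cropping or letterboxing that occurs when the output canvas disagrees with the upload's shape.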