Questions for Better Results

  1. It seems that emphasis using parentheses is not working. Has it been changed?

  2. Can I use a Midjourney-style parameter like ‘–no’? If not, what should I do if I want to erase something specific?
    ex) removing the buildings beside it

  3. What is the best way to represent glass windows? Does it help to make the glass highly transparent in Ghosted mode? It seems to work better to remove the glass entirely.

4.1. I’m not sure which view (display mode) works best (I think Render > Material > Arctic > Shaded). Or does it read better if I enable ‘Surface Edge’?

4.2. Can you tell me which ‘Render Elements / Channels’ Veras uses?

  5. Are the lights rendered randomly? The placement markers for the lights are read as part of the image. (As in the reference image, maple leaves and firecrackers are rendered in the shape of the sphere lights.)

My Final Image:

These are great questions. I ran some tests on my end using the base image you posted. However, without the model, my testing was more limited.

  1. Emphasis still works. See the example, where the reflective pond is emphasized:
    2024-05-08 - 08-05-09 - chrome

  2. We don’t have a ‘-no’ equivalent. To erase something, you can use the Render Selection feature under the EDIT tab. However, if the model has geometry and linework in an area, Veras will generally try to fit something there depending on the prompt. In that case, you would have to erase or hide that geometry in your model and re-sync the image.
    2024-05-08 - 08-24-47 - Veedub64

  3. For the glass, it’s best to hide it almost completely, or set it to a very low opacity, so that the objects inside can be interpreted.

  4. Currently, it works best to have the edges visible. Arctic has less geometry retention in general, but can produce appealing results. This will not matter in the upcoming version 2 where we are normalizing the inputs across all plugins. Check out this post for reference: Veras: Issue with Rendering sharp edges in 3D Model / bad geometries
    4.2. v1 uses the image data and generates the additional channels from it. This is why the display mode matters.
    4.3. If you’d like to share any models for our testing, or get early access, let me know, as this will improve the v2 release.

  5. Similar to the answer for #4.1, the metadata from the lights is not currently read. The lights are interpreted as structural data rather than symbolic data. This will change with v2.

Your final image looks amazing, btw!!