I am trying to take image_a and map it onto image_b, which is a render from a 3D application, using a UV map rendered out of the same application. I am 90% of the way there with the following code, but there are a few things I could use help with.
The image comes out rotated 90 degrees after I map it with the UV coordinates. I do not know why, but I was able to fix it by just rotating the image before mapping it. I want to make sure that I am not doing something wrong here...
The mapped image comes out slightly distorted when compared to the same image mapped in a 3D application as "ground truth". It is close, but looks a little "smushed" to the bottom-left of the mapping coordinates (if that makes sense...). I suspect this has to do with the gamma correction that I applied to the UV image, but I am not sure what is wrong and thus how to fix it.
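To illustrate what I mean about the gamma: if the UV pass is already linear (as I believe 16-bit utility passes usually are, though I am not certain for my renderer), then applying an extra 2.2 power pulls every coordinate value toward 0, which would compress the sampling toward one corner:

```python
import numpy as np

# A mid-range UV coordinate before and after an extra 2.2 "decode".
u = np.float32(0.5)
u_gamma = np.power(u, 2.2)  # ~0.218, i.e. pulled noticeably toward 0
```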
I am a little bit confused about how cv2's remap function works. Through trial and error I got this to work, but I am not sure that I am doing it "right". For example, I found I need to resize image_a to be square for it to map properly; is that correct?
As always, any suggestions to optimize performance are welcome. For example, I am blending the images with cv2; would that be faster in NumPy? Can I reduce the number of steps anywhere along the line, or combine them, to get a speed gain?
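For reference, this is the pure-NumPy blend I was considering as a replacement for the cv2.multiply/cv2.add steps (assuming uint8 BGR images and a uint8 grayscale matte of the same height and width):

```python
import numpy as np

def blend(remapped_image, image_b, image_matte):
    # Normalize the matte to [0, 1] and add a trailing axis so it
    # broadcasts across the three color channels.
    alpha = (image_matte.astype(np.float32) / 255.0)[..., None]
    out = remapped_image * alpha + image_b * (1.0 - alpha)
    return out.astype(np.uint8)
```

One thing I noticed while writing this: the astype(np.uint8) on my normalized matte in the code below truncates fractional alpha values to 0 or 1, so this float version should also give smoother matte edges.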
import cv2
import numpy as np

image_a = cv2.imread('tex.png')
image_b = cv2.imread('rgb.png')
image_uv = cv2.imread('uv.png', cv2.IMREAD_UNCHANGED)
image_matte = cv2.imread('matte.png', cv2.IMREAD_GRAYSCALE)

# resize input image - not sure why it has to be square...
height = image_b.shape[1]
image_a = cv2.resize(image_a, (height, height), interpolation=cv2.INTER_CUBIC)

# decode the 16-bit UV pass and scale it to pixel coordinates
gamma = 2.2
uv_map_normalized = image_uv.astype(np.float32) / 65535.0
uv_map_normalized = np.power(uv_map_normalized, gamma)
uv_map_u = uv_map_normalized[:, :, 1] * image_a.shape[1]
uv_map_v = uv_map_normalized[:, :, 2] * image_a.shape[0]

# why do I have to rotate clockwise?
remapped_image = cv2.remap(cv2.rotate(image_a, cv2.ROTATE_90_CLOCKWISE),
                           uv_map_u, uv_map_v, cv2.INTER_CUBIC)

# composite the remapped texture over the render using the matte
alpha_normalized = image_matte.astype(np.float64) / 255.0
alpha_normalized = np.stack((alpha_normalized,) * 3, axis=-1).astype(np.uint8)  # convert to 3-channel image
result_1 = cv2.multiply(remapped_image, alpha_normalized)
result_2 = cv2.multiply(image_b, 1 - alpha_normalized)
blended_image = cv2.add(result_1, result_2)