I am trying to obtain an image shaped (1080, 1920, 1) from one shaped (1080, 1920, 3). This is what I have been trying, without success:
```python
import os

import cv2
import numpy as np
from PIL import Image

# fr_lst: list of frame filenames, frame_root: directory containing them
for fr in fr_lst:
    frame = cv2.imread(os.path.join(frame_root, fr))
    # SPLIT CHANNELS (after converting BGR -> RGB)
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    r, g, b = cv2.split(frame)
    r = np.expand_dims(r, axis=2)
    print(r.shape)
    frame = Image.fromarray(r)
```
When I print the shape of r I get (1080, 1920, 1), but Image.fromarray(r) raises:

```
TypeError: Cannot handle this data type: (1, 1, 1), |u1
```
If I don't expand the dimensions, r has shape (1080, 1920) and Image.fromarray(r) runs successfully.
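Here is a minimal sketch of the two cases, using a dummy uint8 array as a placeholder for my actual red channel:

```python
import numpy as np
from PIL import Image

# placeholder standing in for the red channel of one 1080x1920 frame
r = np.zeros((1080, 1920), dtype=np.uint8)

Image.fromarray(r)                      # works, gives a mode "L" image
r_expanded = np.expand_dims(r, axis=2)  # shape (1080, 1920, 1)
Image.fromarray(r_expanded)             # TypeError: Cannot handle this data type: (1, 1, 1), |u1
```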
I also tried expanding the dimensions of the PIL image instead:

```python
frame = np.expand_dims(frame, axis=2)
```

which seems to return the appropriate result, but behaves strangely:
If frames holds the (1080, 1920, 3) version and I run size = frames[0].size, I get size = (1920, 1080), which is what I want. But if frames holds the (1080, 1920, 1) version, the same call gives size = 2073600.
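A sketch of the two calls I am comparing (frame_rgb and frame_1ch are hypothetical stand-ins for the two kinds of frames described above):

```python
import numpy as np
from PIL import Image

# hypothetical stand-ins for the two kinds of frames
frame_rgb = Image.fromarray(np.zeros((1080, 1920, 3), dtype=np.uint8))  # a PIL Image
frame_1ch = np.expand_dims(np.asarray(frame_rgb.convert("L")), axis=2)  # a numpy array, shape (1080, 1920, 1)

print(frame_rgb.size)  # (1920, 1080) -> PIL's Image.size, i.e. (width, height)
print(frame_1ch.size)  # 2073600      -> numpy's ndarray.size, the total element count (1080*1920*1)
```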
My goal is to get size = (1920, 1080) when passing a frame of shape (1080, 1920, 1).
What am I doing wrong or not understanding?
Thank you