
Adding additional functionality to a basic movement tracking script in Python


For my dissertation project I am looking at tadpole movement patterns. I have a Python script written by a previous PhD student. Currently the script takes a video and tracks the tadpole as it swims around the tank, and at the end it outputs a text file with the total distance travelled in pixels. At the start of the script I am prompted with a window to designate the tracking arena (image: draggable square used to designate the tracking arena): I can drag a square to wherever I want, and the draggable square has a numbered grid inside it. Then I get another window to draw a square around an individual tadpole, which is used to set the minimum and maximum size that the script detects when tracking movement.

For my dissertation I need to modify the script to separately track the distance travelled within the outer 16 squares of the numbered grid (the squares shown when designating the tracking arena) and the distance travelled within the middle 9 squares.
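From reading the script, I think this reduces to classifying each frame's centroid into one of the 25 grid squares and labelling it inner or outer. Below is my sketch of that idea (untested; classify_zone is a name I made up, not something already in the script). It assumes the same names the script uses: top_left_pt1 for the arena's top-left corner and grid_width/grid_height for the size of one square.

# Sketch only: classify a centroid into the 5x5 grid and label it inner/outer.
# classify_zone is a hypothetical helper, not part of the existing script.
def classify_zone(cx, cy, top_left_pt1, grid_width, grid_height, grid_rows=5, grid_cols=5):
    col = (cx - top_left_pt1[0]) // grid_width
    row = (cy - top_left_pt1[1]) // grid_height
    if col < 0 or col >= grid_cols or row < 0 or row >= grid_rows:
        return None, None  # centroid fell outside the arena
    square_number = row * grid_cols + col + 1  # same 1-25 numbering the arena window draws
    # the middle 9 squares are rows 1-3 and columns 1-3 (0-indexed), i.e. squares
    # 7, 8, 9, 12, 13, 14, 17, 18, 19; the 16 border squares are "outer"
    if 1 <= row <= 3 and 1 <= col <= 3:
        return square_number, "inner"
    return square_number, "outer"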

The whole script is split into two files: one which I run from the command line, and another containing some of the functions it needs.

This is the main code which I run.

# Importing all the packages needed and the tracking functions
import cv2
import os
import numpy as np
from collections import deque
import Tracking_functions1 as tc

# Information to change
sub_varThreshold = 90  # values usually work between 90-150 - the pixel threshold value to be counted as a "moving tadpole" object
sub_learn_rate = -1  # -1 is the default, 0.05 also works well
blur_kernal = (1, 1)  # use smaller kernal sizes (1,1) for small tadpoles and larger kernals (5,5) for big tadpoles
min_area = 1  # initialise min area as a number before setting by drawing a rectangle
max_area = 1  # initialise max area as a number before setting by drawing a rectangle
large_movement_threshold = 1  # initialise threshold as a number before setting by drawing a rectangle
start_position = 2  # where we start
start_frame = start_position  # if the start position is in seconds, this takes us to frames
min_area_multiplier = 0.3  # the min area we will track will be 0.3 of the rectangle drawn around the individual
max_area_multiplier = 1.5  # the max area we will track will be 1.5 of the rectangle drawn around the individual
key = None  # initialise
cap = None  # initialise
dragging = False  # initialise (the mouse callback reads this before the first click)
frame_count = 0  # Variable to keep track of the frame number

# Global variables for tracking arena dimensions
arena_width = 900
arena_height = 900
grid_rows = 5
grid_cols = 5
grid_width = arena_width // grid_cols
grid_height = arena_height // grid_rows

# Create dictionaries to store window statuses
windows = {'Draw Tracking Arena': False, 'Draw Area to Track': False}

# Create a function to create and show windows
def create_window(window_name, image):
    cv2.imshow(window_name, image)
    windows[window_name] = True

# Create a function to destroy windows
def destroy_window(window_name):
    cv2.destroyWindow(window_name)
    windows[window_name] = False

# Function to handle mouse events for dragging the square
def drag_square(event, x, y, flags, param):
    global top_left_pt1, dragging, offset_x, offset_y, bottom_right_pt1, arena_width, arena_height
    if event == cv2.EVENT_LBUTTONDOWN:
        if top_left_pt1[0] <= x <= bottom_right_pt1[0] and top_left_pt1[1] <= y <= bottom_right_pt1[1]:
            dragging = True
            offset_x = x - top_left_pt1[0]
            offset_y = y - top_left_pt1[1]
    elif event == cv2.EVENT_MOUSEMOVE:
        if dragging:
            new_top_left_x = x - offset_x
            new_top_left_y = y - offset_y
            bottom_right_pt1 = (new_top_left_x + arena_width, new_top_left_y + arena_height)
            top_left_pt1 = (bottom_right_pt1[0] - arena_width, bottom_right_pt1[1] - arena_height)
    elif event == cv2.EVENT_LBUTTONUP:
        dragging = False

# Function to calculate grid points
def calculate_grid_points():
    grid_points = []
    for i in range(grid_rows):
        for j in range(grid_cols):
            x = j * grid_width
            y = i * grid_height
            grid_points.append((x, y))
    return grid_points

# Function to draw grid lines on the tracking arena
def draw_grid(image):
    for i in range(1, grid_rows):
        cv2.line(image, (0, i * grid_height), (arena_width, i * grid_height), (255, 0, 0), 2)
    for j in range(1, grid_cols):
        cv2.line(image, (j * grid_width, 0), (j * grid_width, arena_height), (255, 0, 0), 2)
    return image

# Function to draw the draggable square with grid and labels on the tracking arena window
def draw_tracking_arena_with_grid(image, top_left_pt1):
    # Draw grid lines on the image and label each square
    for i in range(grid_rows):
        for j in range(grid_cols):
            x1 = top_left_pt1[0] + j * grid_width
            y1 = top_left_pt1[1] + i * grid_height
            x2 = top_left_pt1[0] + (j + 1) * grid_width
            y2 = top_left_pt1[1] + (i + 1) * grid_height
            cv2.rectangle(image, (x1, y1), (x2, y2), (255, 255, 255), 2)
            square_number = i * grid_cols + j + 1
            cv2.putText(image, str(square_number), (x1 + 10, y1 + 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)
    # Draw the draggable square
    cv2.rectangle(image, top_left_pt1, bottom_right_pt1, (0, 255, 0), 2)  # Draw a green rectangle
    return image

# Create a numpy array to place over images; I am creating ROI here because I seem to need to do that to clear ROI between batches
img = np.zeros((1080, 1920, 3), np.uint8)
ROI = np.zeros_like(img)

# Open and read the txt file of video names
myfile = open('G:/Aug1/A1/video_info.txt', encoding="ISO-8859-1")
myfile = myfile.readlines()
batch_numbers = set()
for line in myfile[1:]:
    line = line.strip()
    line = line.split()
    BatchID = 0
    TadpoleID = 1
    TrialID = 2
    columnvid = 3
    LR = 4
    batch_numbers.add(line[BatchID])

for batch in batch_numbers:
    ROI.fill(0)  # clearing ROI between batches
    print("Is ROI cleared:", np.count_nonzero(ROI) == 0)
    print("ROI dimensions (width, height):", ROI.shape)
    top_left_pt1 = (0, 0)  # resetting tracking area and arena
    bottom_right_pt1 = (arena_width, arena_height)
    top_left_pt2 = (0, 0)
    bottom_right_pt2 = (0, 0)
    for line in myfile[1:]:
        line = line.strip()
        line = line.split()
        BatchID = 0
        if line[BatchID] != batch:
            continue
        TadpoleID = 1
        TrialID = 2
        columnvid = 3
        LR = 4
        Behav = 5
        vidx = (line[columnvid])
        # Create a unique window name for each tadpole based on TadpoleID
        tracking_window_name = f'Tracking Arena for Individual {line[TadpoleID]} on the {line[LR]} {line[Behav]}'
        area_window_name = f'Area of Individual {line[TadpoleID]} to Track on the {line[LR]}  {line[Behav]}'
        results_fn = ('_'.join(('Track_Results/Batch', batch, 'Individial', line[TadpoleID], 'Trial', line[TrialID], 'Behav', line[Behav], line[LR])))
        suffix = '.txt'
        results_txt = os.path.join(results_fn + suffix)
        myresults = open(results_txt, 'a')
        printheader = print('TadpoleID', 'TrialID', 'FrameNo', 'Xcentroid', 'Ycentroid', 'DistX', 'DistY', 'PixelDist', 'CumulPixelDistTrvalled', 'CumulPixelDistInculded', 'InculsionStatus', sep='\t', file=myresults)
        myresults.close()
        myresults = open(results_txt, 'a')
        # Capture the first frame of the video
        cap = cv2.VideoCapture(vidx)
        ret, first_frame = cap.read()
        if not ret:
            print("Error: Failed to read the first frame")
            break
        # Create a window to display the tracking arena
        cv2.namedWindow(tracking_window_name)
        cv2.setMouseCallback(tracking_window_name, drag_square)
        while True:
            square_image1 = first_frame.copy()
            overlay = draw_tracking_arena_with_grid(square_image1, top_left_pt1)
            cv2.imshow(tracking_window_name, overlay)
            key = cv2.waitKey(1) & 0xFF
            if key == ord('q'):
                break
        cap.release()
        cv2.destroyAllWindows()
        square_image1 = first_frame.copy()
        draw_tracking_arena_with_grid(square_image1, top_left_pt1)
        print("Window 'Draw Tracking Arena' opened.")
        cap = cv2.VideoCapture(vidx)
        # Skip the first 49 frames to reach the 50th frame
        for i in range(49):
            ret, _ = cap.read()
            if not ret:
                print("Error: Failed to read frame ", i)
                break
        # Read the 50th frame for square_image2
        ret, square_image2 = cap.read()
        if not ret:
            print("Error: Failed to read the 50th frame for square_image2")
            break
        create_window(area_window_name, square_image2)
        print("Window 'Draw Area' opened.")

        def draw_square2(event, x, y, flags, param):
            global square_image2, drawing2, top_left_pt2, bottom_right_pt2
            if event == cv2.EVENT_LBUTTONDOWN:
                drawing2 = True
                top_left_pt2 = (x, y)
            elif event == cv2.EVENT_LBUTTONUP:
                drawing2 = False
                bottom_right_pt2 = (x, y)
                cv2.rectangle(square_image2, top_left_pt2, bottom_right_pt2, (255, 255, 255), -1)

        drawing2 = False
        cv2.setMouseCallback(area_window_name, draw_square2)

        def check_for_keys():
            key = cv2.waitKey(1) & 0xFF
            return key

        while True:
            cv2.imshow(area_window_name, square_image2)
            key = check_for_keys()
            if key == ord('q'):
                break
            if top_left_pt1[0] < bottom_right_pt1[0] and top_left_pt1[1] < bottom_right_pt1[1]:
                ROI = cv2.rectangle(img, top_left_pt1, bottom_right_pt1, (255, 255, 255), -1)
            if top_left_pt2[0] < bottom_right_pt2[0] and top_left_pt2[1] < bottom_right_pt2[1]:
                drawn_area = abs(bottom_right_pt2[0] - top_left_pt2[0]) * abs(bottom_right_pt2[1] - top_left_pt2[1])
                width = abs(bottom_right_pt2[0] - top_left_pt2[0])
                height = abs(bottom_right_pt2[1] - top_left_pt2[1])
                longest_side_length = max(width, height)
                min_area = drawn_area * min_area_multiplier
                max_area = drawn_area * max_area_multiplier
                large_movement_threshold = 3.5 * longest_side_length
                print(large_movement_threshold)
        cap.release()
        cv2.destroyAllWindows()

        prev_x = deque(maxlen=1)
        prev_y = deque(maxlen=1)
        prev_xy = deque(maxlen=1)
        prev_x.append(0)
        prev_y.append(0)
        prev_xy.append(None)
        pts = deque(maxlen=100)
        cxpts = deque(maxlen=2)
        cypts = deque(maxlen=2)
        dist_travelled = deque()
        cap = cv2.VideoCapture(vidx)
        fps = int(cap.get(cv2.CAP_PROP_FPS))
        cap.set(1, 1)  # 1 == cv2.CAP_PROP_POS_FRAMES: start reading from frame 1
        end_frame = int(700 * fps)  # stop after 700 seconds of video
        framecount = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        sub_history = 100
        subtractor = cv2.createBackgroundSubtractorMOG2(history=sub_history, varThreshold=sub_varThreshold, detectShadows=False)
        while True:
            frame = cap.read()
            frame = frame[1]
            frame_pos = cap.get(cv2.CAP_PROP_POS_FRAMES)
            if frame_pos > end_frame:
                myresults.close()
                break
            if frame is None:
                myresults.close()
                break
            frame = cv2.bitwise_and(frame, ROI)
            mask, blur, eq = tc.create_mask(frame, subtractor, sub_learn_rate, blur_kernal)
            c, cx, cy, cxcy = tc.detect_contours(frame, blur, mask, eq, min_area, max_area, prev_x, prev_y, prev_xy)
            print(cx, cy)
            ix, iy, Pixel_dist, cumul_dist_travelled, cumul_dist_included, inclusion_status = tc.calculate_distance(cx, cy, cxpts, cypts, dist_travelled, large_movement_threshold)
            pts = tc.draw_lines(cxcy, pts, frame, blur, mask)
            printresults = print(line[TadpoleID], line[TrialID], int(frame_pos), cx, cy, ix, iy, Pixel_dist, cumul_dist_travelled, cumul_dist_included, inclusion_status, sep='\t', file=myresults)  # print results to file
            frame, blur, mask, eq = tc.HUD_info(frame, blur, mask, eq, line, TadpoleID, frame_pos, cumul_dist_travelled, inclusion_status)
            output = tc.output(frame, mask, blur, eq, mode="frame+new_mask")  # note: "frame+new_mask" is not a mode handled by tc.output(), so this call displays nothing
            key = tc.frame_display_time(fps, mode="veryfast_speed")
            cv2.imshow('tadpole_tracker', frame)
            if key == ord('q'):
                cv2.destroyAllWindows()
                break
cap.release()
cv2.destroyAllWindows()
myresults.close()
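My current thinking is that inside the tracking while loop, just after tc.calculate_distance() returns, I could accumulate two running totals, something like the sketch below. Again this is untested: zone_dist would need to be reset to {"inner": 0.0, "outer": 0.0} before the loop, and classify_zone is the made-up helper from my sketch above.

# Hypothetical addition inside the tracking loop, after calculate_distance():
square_number, zone = classify_zone(cx, cy, top_left_pt1, grid_width, grid_height)
if zone is not None and inclusion_status == "Included":
    zone_dist[zone] += Pixel_dist  # add this frame's movement to the matching zone total
# and once the video ends, the two totals could be written to the results file:
# print('InnerDist', zone_dist["inner"], 'OuterDist', zone_dist["outer"], sep='\t', file=myresults)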

This is the file containing some of the functions called from the main script.

import cv2
import imutils
import numpy as np
import math


def create_mask(frame, subtractor, sub_learn_rate, blur_kernal):
    """
    This function retrieves a video frame and preprocesses it for
    object tracking.
    The code converts the image to greyscale (gray, eq), blurs the image (blur) to reduce noise and returns
    a thresholded image (mask) which is determined by min and max threshold values (minthresh and maxthresh).
    The mask is dilated to remove small spots appearing in the mask.

    Parameters
    ----------
    frame: ndarray, shape(n_rows, n_cols, 3)
        source image with 3 colour channels
    subtractor: cv2 function
        the type of subtractor used to create mask
    sub_learn_rate: float
        how many previous frames are used to estimate tadpole's next position
    blur_kernal: array, int
        how many pixels are combined to create the blurred image
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eq = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, blur_kernal, 0)
    mask = subtractor.apply(blur, learningRate=sub_learn_rate)
    minthresh = 60
    maxthresh = 255
    mask = cv2.threshold(mask, minthresh, maxthresh, cv2.THRESH_BINARY)[1]
    mask = cv2.dilate(mask, None, iterations=3)
    return mask, blur, eq


def detect_contours(frame, blur, mask, eq, min_area, max_area, prev_x, prev_y, prev_xy):
    """
    This function detects contours (binary mask images), thresholds them based on area and draws them.

    Parameters
    ----------
    frame: ndarray, shape(n_rows, n_cols, 3)
        source image with 3 colour channels
    blur: ndarray, shape(n_rows, n_cols, 1)
        blurred image with 1 colour channel
    mask: ndarray, shape(n_rows, n_cols, 1)
        binarised image with 1 colour channel
    eq: ndarray, shape(n_rows, n_cols, 1)
        source image converted from colour to grey - 1 colour channel
    min_area: int
        the minimum area a contour should be to be considered a tadpole and not noise, e.g. specks of food left in tank
    max_area: int
        the maximum area a contour should be to be considered a tadpole and not noise, e.g. water ripples
    prev_x: int
        previous x contour coordinate
    prev_y: int
        previous y contour coordinate
    prev_xy: array, int
        previous xy contour coordinate
    """
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # finds the contours (outlines) of the binary mask image
    cnts = imutils.grab_contours(cnts)  # select these contours
    for c in cnts:
        area = cv2.contourArea(c)
        (x, y, w, h) = cv2.boundingRect(c)
        extent = area / float(w * h)
        if area < min_area or area > max_area:
            cv2.drawContours(mask, [c], -1, 0, -1)  # erase contours outside the area limits
    mask = cv2.bitwise_and(mask, mask, mask=mask)
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # find the contours again on the cleaned mask
    cnts = imutils.grab_contours(cnts)
    for c in cnts:
        c = sorted(cnts, key=cv2.contourArea, reverse=True)[-1]  # [-1] takes the smallest contour after sorting largest-first
        M = cv2.moments(c)  # allows calculation of centroid
        if M["m00"] != 0:  # guard against division by zero (the original `M["m00"] or M["m10"] != 0` could still divide by zero)
            cx = int(M["m10"] / M["m00"])
            cy = int(M["m01"] / M["m00"])
            cxcy = (cx, cy)
        else:
            cx = 0
            cy = 0
            cxcy = (0, 0)
        if cx == 319 and cy == 179:
            cx = 0
            cy = 0
            cxcy = (319, 179)
        prev_x.appendleft(cx)
        prev_y.appendleft(cy)
        prev_xy.appendleft(cxcy)
        (cx, cy, w, h) = cv2.boundingRect(c)  # Define the xy coordinates and width+height of the contour
        for img in (frame, blur, mask, eq):
            cv2.rectangle(img, (cx, cy), (cx + w, cy + h), (0, 255, 0), 1)  # define colour and shape of rectangle and place on each image
    else:
        # for/else with no break: this block always runs after the loop, re-reading the stored
        # centroid; when cnts is empty it supplies the previous centroid as a fallback
        for x in prev_x:
            cx = x
        for y in prev_y:
            cy = y
        for xy in prev_xy:
            cxcy = xy
        c = None
    return c, cx, cy, cxcy


def calculate_distance(cx, cy, cxpts, cypts, dist_travelled, large_movement_threshold):
    """
    This function calculates the distance the tracked object moves.
    The distance is calculated as consecutive and cumulative distance travelled.

    Parameters
    ----------
    cx: float
        The object's current location on the x axis (its x centroid) in the current frame
    cy: float
        The object's current location on the y axis (its y centroid) in the current frame
    cxpts: array, float
        A list of x centroids from previous frames
    cypts: array, float
        A list of y centroids from previous frames
    dist_travelled: float
        distance the object has moved between this frame and the previous frame
    """
    inclusion_status = "Unknown"  # Give inclusion status a definition before the if block
    # step 1: append xy coordinates to a list
    cxpts.appendleft(cx)  # append x coordinates to cxpts list
    cypts.appendleft(cy)  # append y coordinates to cypts list
    # step 2: calculate distance between consecutive x and y points
    # calculate the distance between consecutive x points and call the value ix
    ix = 0  # unless otherwise stated ix = 0
    for ix in range(1, len(cxpts)):  # for values in the cxpts list
        if cxpts[ix] == 0 or cxpts[ix - 1] == 0:  # if the current or previous cx value is a zero, ix = 0
            ix = 0
        else:  # otherwise perform the following to calculate ix
            ix = (cxpts[ix] - cxpts[ix - 1])  # ix = current cx value - previous cx value
    # calculate distance between consecutive y points
    iy = 0  # unless otherwise stated iy = 0
    for iy in range(1, len(cypts)):  # for values in the cypts list
        if cypts[iy] == 0 or cypts[iy - 1] == 0:  # if the current or previous cy value is a zero, iy = 0
            iy = 0
        else:  # otherwise perform the following to calculate iy
            iy = (cypts[iy] - cypts[iy - 1])  # iy = current cy value - previous cy value
    # step 3: calculate the distance travelled between consecutive points (in pixels)
    square_ix = (ix * ix)
    square_iy = (iy * iy)
    # calculate the distance as sqrt of the square_ix + square_iy values
    Pixel_dist = math.sqrt(square_ix + square_iy)
    # round the value to 2 decimal places
    Pixel_dist = round(Pixel_dist, 2)
    # step 4: calculate cumulative distance travelled (in pixels)
    # append distance travelled calculations to a list
    dist_travelled.appendleft(Pixel_dist)
    # unless otherwise stated cumulative distance travelled = 0
    cumul_dist_travelled = 0
    cumul_dist_included = 0
    # sum each value found in the dist_travelled list
    for cumul_dist_travelled in range(1, len(dist_travelled)):
        cumul_dist_travelled = sum(dist_travelled)  # Update cumulative distance regardless of inclusion status
    if Pixel_dist < large_movement_threshold:
        inclusion_status = "Included"
    else:
        inclusion_status = "Excluded"  # Frame distance is too large
    # Calculate cumulative distance for included frames
    if inclusion_status == "Included":
        cumul_dist_included = sum([d for d in dist_travelled if d < large_movement_threshold])
    # round the values to 2 decimal places
    cumul_dist_travelled = round(cumul_dist_travelled, 2)
    cumul_dist_included = round(cumul_dist_included, 2)
    return ix, iy, Pixel_dist, cumul_dist_travelled, cumul_dist_included, inclusion_status


def draw_lines(cxcy, pts, frame, blur, mask):
    """
    This function draws a line between consecutive locations the object moves

    Parameters
    ----------
    cxcy: array, float
        The object's current location (its centroid) in the current frame
    pts: deque
        The previous consecutive locations to draw lines for
    frame: ndarray, shape(n_rows, n_cols, 3)
        source image with 3 colour channels
    blur: ndarray, shape(n_rows, n_cols, 1)
        blurred image with 1 colour channel
    mask: ndarray, shape(n_rows, n_cols, 1)
        binarised image with 1 colour channel
    """
    if cxcy != (319, 179):
        pts.appendleft(cxcy)
    for i in range(1, len(pts)):
        if pts[i - 1] is None or pts[i] is None:
            continue
        line_thickness = 1
        for img in (frame, blur, mask):
            cv2.line(img, pts[i - 1], pts[i], (0, 0, 255), line_thickness)  # place the centroid trail on each image
    return pts


def HUD_info(frame, blur, mask, eq, line, columnID, frame_pos, cumul_dist_travelled, inclusion_status):
    """
    This function provides the Heads Up Display on the image to show details of the current tadpole being tracked

    Parameters
    ----------
    frame: ndarray, shape(n_rows, n_cols, 3)
        source image with 3 colour channels
    blur: ndarray, shape(n_rows, n_cols, 1)
        blurred image with 1 colour channel
    mask: ndarray, shape(n_rows, n_cols, 1)
        binarised image with 1 colour channel
    eq: ndarray, shape(n_rows, n_cols, 1)
        source image converted from colour to grey - 1 colour channel
    line: str
        reads the information in the correct ACT_video_info.txt file
    columnID: str
        gets tadpole ID from ACT_video_info.txt file under the column name ID
    frame_pos: int
        the current frame number of source image
    cumul_dist_travelled: float
        the total distance the object has been recorded to move, in pixels
    """
    for img in (frame, blur, mask):
        cv2.rectangle(frame, (10, 225), (110, 240), (255, 255, 255), -1)  # white background box
        cv2.putText(frame, "TadpoleID: {}".format(line[columnID]), (15, 235), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 0))  # tadpole ID label
        cv2.rectangle(frame, (10, 250), (110, 265), (255, 255, 255), -1)
        cv2.putText(frame, "Frame No: {}".format(int(frame_pos)), (15, 260), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 0))  # retrieve frame number from cap and place in rectangle
        cv2.rectangle(frame, (10, 275), (135, 290), (255, 255, 255), -1)
        cv2.putText(frame, "Dist (pixels): {}".format(cumul_dist_travelled), (15, 285), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 0))  # cumulative distance
        # Display inclusion status in the HUD
        if inclusion_status:
            text_color = (0, 0, 255) if inclusion_status == "Excluded" else (0, 255, 0)
            cv2.putText(frame, f'Inclusion Status: {inclusion_status}', (15, 310), cv2.FONT_HERSHEY_SIMPLEX, 0.35, text_color)
    return frame, blur, mask, eq


def output(frame, mask, blur, eq, mode="frame", inclusion_status=None, pause_on_excluded=False):
    # argument order (frame, mask, blur, eq) matches the call in the main script
    """
    This function provides different ways of viewing the output of the tracked object

    Parameters
    ----------
    frame: ndarray, shape(n_rows, n_cols, 3)
        source image with 3 colour channels
    blur: ndarray, shape(n_rows, n_cols, 1)
        blurred image with 1 colour channel
    mask: ndarray, shape(n_rows, n_cols, 1)
        binarised image with 1 colour channel
    eq: ndarray, shape(n_rows, n_cols, 1)
        source image converted from colour to grey - 1 colour channel
    mode: str
        choose from
        "frame" : shows tracked object in source image
        "mask" : shows tracked object in binarised mask image
        "blur" : shows tracked object in blurred image
        "frame+mask" : shows tracked object in source image and binarised mask image
        "frame_blur" : shows tracked object in source image and blurred image
        "blur+eq" : shows tracked object in blurred image and greyscale image
        different modes are useful when diagnosing problems with the tracker
    inclusion_status: str or None
        Inclusion status for the frame ("Included" or "Excluded"). If None, no inclusion status is displayed.
    pause_on_excluded: bool
        If True, the function will pause on excluded frames.
    """
    if mode == "frame":
        cv2.imshow('output', frame)
    if mode == "mask":
        cv2.imshow('output', mask)
    if mode == "blur":
        cv2.imshow('output', blur)
    if mode == "frame+mask":
        frame_plus_mask = cv2.bitwise_and(frame, frame, mask=mask)
        hstack_frame_plus_mask = np.hstack((frame, frame_plus_mask))
        cv2.imshow("output", hstack_frame_plus_mask)
    if mode == "frame_blur":
        blur_2_colour = cv2.cvtColor(blur, cv2.COLOR_GRAY2BGR)
        hstack_frame_blur = np.hstack((frame, blur_2_colour))
        cv2.imshow('output', hstack_frame_blur)
    if mode == "blur+eq":
        hstack_gray_plus_eq = np.hstack((blur, eq))
        cv2.imshow("output", hstack_gray_plus_eq)
    if pause_on_excluded and inclusion_status == "Excluded":
        cv2.waitKey(100000)


def frame_display_time(fps, mode="slow_speed"):
    """
    This function provides ways of changing the speed the output is displayed at

    Parameters
    ----------
    fps: int
        The number of frames per second the output is displayed
    mode: str
        choose from (times assume 25 fps filming, i.e. 40ms per frame)
        "nat_speed" : natural filming speed of 25 frames per second (1000ms / 25 frames = each frame displayed for 40ms)
        "veryfast_speed" : each frame displayed for 1ms = 40ms/40
        "fast_speed" : each frame displayed for 4ms = 40ms/10
        "med_speed" : each frame displayed for 8ms = 40ms/5
        "slow_speed" : each frame displayed for 200ms = 40ms*5
        useful for when wanting to speed up or slow down the tracking process
    """
    nat_speed = int(1000 / fps)
    veryfast_speed = int((1000 / fps) / 40)
    fast_speed = int((1000 / fps) / 10)
    med_speed = int((1000 / fps) / 5)
    slow_speed = int((1000 / fps) * 5)
    if mode == "nat_speed":
        key = cv2.waitKey(nat_speed) & 0xFF
    if mode == "veryfast_speed":
        key = cv2.waitKey(veryfast_speed) & 0xFF
    if mode == "fast_speed":
        key = cv2.waitKey(fast_speed) & 0xFF
    if mode == "med_speed":
        key = cv2.waitKey(med_speed) & 0xFF
    if mode == "slow_speed":
        key = cv2.waitKey(slow_speed) & 0xFF
    return key
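Since the main script already writes Xcentroid and Ycentroid for every frame to the results file, I've also wondered whether the inner/outer split could be done afterwards by re-reading that file instead of changing the tracking loop. Here is a rough sketch of that approach (untested; it assumes the tab-separated header the script writes, and that I note down where the arena square ended up, since top_left_pt1 is not currently saved anywhere):

# Post-processing sketch: split the per-frame distances into inner/outer zones
# by re-reading one of the results .txt files. arena_top_left has to be supplied
# by hand because the script does not record the arena position. The default
# square size of 180 assumes the script's 900x900 arena divided into a 5x5 grid.
import csv

def split_distances(results_txt, arena_top_left, grid_width=180, grid_height=180):
    totals = {"inner": 0.0, "outer": 0.0}
    with open(results_txt) as f:
        reader = csv.DictReader(f, delimiter='\t')
        for row_data in reader:
            if row_data['InculsionStatus'] != 'Included':  # column name as the script spells it
                continue
            col = (int(row_data['Xcentroid']) - arena_top_left[0]) // grid_width
            row = (int(row_data['Ycentroid']) - arena_top_left[1]) // grid_height
            if 0 <= col <= 4 and 0 <= row <= 4:
                zone = "inner" if (1 <= row <= 3 and 1 <= col <= 3) else "outer"
                totals[zone] += float(row_data['PixelDist'])
    return totals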

I tried using ChatGPT to modify the code. I've managed to add some things that make the script easier to use, like the windows for designating the tracking arena and the individual's area.

I'm realising I'm in over my head: I study biology and don't have any coding experience apart from data analysis in R. I do realise the code is not well written; it has been heavily modified with ChatGPT by me and a different PhD student.

I would appreciate any help possible.

