How can I make my Python script run faster?


I have a Python script that takes a screenshot of part of the screen and clicks an object when the sampled RGB values match. It works fine for stationary objects, but when the object moves, the click misses because of the delay. The object moves in a random direction, so I can't compensate by offsetting the click coordinates.

Do you have any suggestions? Is there a better way to write this script? Are there any tricks to click the found coordinates faster, or to find the target coordinates faster? Object recognition doesn't work well here because of the background.

Thanks!

import pyautogui
import keyboard
import time
import win32api, win32con

time.sleep(2)

def click(x,y):
    win32api.SetCursorPos((x,y))
    win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN,0,0)
    win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP,0,0)



while 1:
    pic = pyautogui.screenshot(region=(265,220,200,200))
    width, height = pic.size

    for x in range(0,width,10):
        for y in range(0,height,10):
            r, g, b = pic.getpixel((x,y))

            if g in range(90,95):
                click(x+265,y+220)
                time.sleep(1)
                break

python bots screenshot pyautogui
2 Answers

0 votes

Try object detection with OpenCV, using SIFT or SURF feature matching. Something like this:

import cv2
import numpy as np
import pyautogui
import time
import win32api, win32con

# Load the reference image of the object you want to detect
ref_img = cv2.imread('object_reference.jpg', cv2.IMREAD_GRAYSCALE)

# Initialize the SIFT detector
# (use cv2.xfeatures2d.SIFT_create() on older OpenCV builds)
sift = cv2.SIFT_create()

# Detect keypoints and descriptors in the reference image
kp1, des1 = sift.detectAndCompute(ref_img, None)

# Set up the mouse click function
def click(x,y):
    win32api.SetCursorPos((x,y))
    win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN,0,0)
    win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP,0,0)

# Set the region of interest where you want to search for the object
roi = (265, 220, 200, 200)

# Lowe's ratio-test threshold used to filter matches
threshold = 0.7

# Set the delay between clicks
delay = 1

# Wait for 2 seconds to let the user move the object into the region of interest
time.sleep(2)

# Continuously search for the object in the region of interest
while True:
    # Take a screenshot of the region of interest
    img = pyautogui.screenshot(region=roi)

    # Convert the screenshot to grayscale
    img_gray = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2GRAY)

    # Detect keypoints and descriptors in the screenshot
    kp2, des2 = sift.detectAndCompute(img_gray, None)

    # Skip this frame if too few features were found to run the ratio test
    if des2 is None or len(des2) < 2:
        continue

    # Match the descriptors between the reference image and the screenshot
    bf = cv2.BFMatcher()
    matches = bf.knnMatch(des1, des2, k=2)

    # Apply ratio test to filter out false matches
    good_matches = []
    for m, n in matches:
        if m.distance < threshold * n.distance:
            good_matches.append(m)

    # If enough good matches are found, assume the object is detected and click on it
    if len(good_matches) > 10:
        # Get the center of the detected object by averaging the keypoints
        obj_center = np.mean(np.array([kp2[m.trainIdx].pt for m in good_matches]), axis=0)

        # Click on the center of the detected object
        click(int(obj_center[0]) + roi[0], int(obj_center[1]) + roi[1])

        # Wait for a short delay before searching for the object again
        time.sleep(delay)

0 votes

A few suggestions to try:

# (uses numpy as np plus the imports and click() from the question)
roi = (265, 220, 200, 200)       # pyautogui regions are (left, top, width, height)
width, height = roi[2], roi[3]
range_x = range(0, width, 10)
range_y = range(0, height, 10)

while 1:
    pic = pyautogui.screenshot(region=roi)

    img_g = np.array(pic)[:, :, 1]             # green channel
    g_in_range = (90 <= img_g) & (img_g < 95)  # precompute the whole mask at once
    for x in range_x:
        for y in range_y:
            if g_in_range[y, x]:               # numpy indexing is [row, col], i.e. [y, x]
                click(x + roi[0], y + roi[1])
                time.sleep(1)
                break
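The per-pixel double loop can be dropped entirely: once the boolean mask is built, `np.argwhere` returns every matching coordinate in one vectorized call. A minimal sketch (the helper name `first_match` is just illustrative):

```python
import numpy as np

def first_match(img_rgb, lo=90, hi=95):
    """Return (x, y) of the first pixel whose green channel is in [lo, hi), or None.

    img_rgb: H x W x 3 uint8 array, e.g. np.array(pyautogui.screenshot(...)).
    """
    g = img_rgb[:, :, 1]
    hits = np.argwhere((g >= lo) & (g < hi))  # rows of (y, x), in scan order
    if hits.size == 0:
        return None
    y, x = hits[0]
    return int(x), int(y)
```

`first_match(np.array(pic))` would then replace the whole nested loop; offset the returned coordinates by `roi[0]`/`roi[1]` before clicking.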


If you need even more of a speed-up: try Cython, as described in this article.
