How to use the drawClickedPoint method in fMBT

Best Python code snippets using fMBT_python
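
In fMBT's eyenfinger module, drawClickedPoint(inputfilename, outputfilename, clickedXY) draws a marker at the clicked screen coordinates on a screenshot and saves the annotated image. You rarely call it directly: the deprecated iClickBox(), iClickWindow() and iClickScreen() helpers shown below call it for you whenever you pass a capture filename. The following is a minimal sketch of that path, assuming an X11 session with xte (from the xautomation package) and the ImageMagick-style drawing backend eyenfinger's helpers rely on; the window title and output filename are made up, and iRead() (not shown in the excerpt) is assumed to take the screenshot that gets annotated:

import eyenfinger

eyenfinger.iUseWindow("Text Editor")   # hypothetical window title
eyenfinger.iRead()                     # grab a screenshot of the window
# Passing capture=<filename> makes iClickWindow() call drawClickedPoint()
# internally, saving a screenshot with the clicked point highlighted.
eyenfinger.iClickWindow((0.5, 0.5), capture="clicked.png")

Note that this whole API is marked DEPRECATED in its docstrings in favour of fmbtx11.Screen.tap and tapItem; the snippets remain a useful reference for how the capture images are produced.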

eyenfinger.py

Source: eyenfinger.py (GitHub)


...
                                  bbox, _g_windowOffsets[windowId],
                                  (clickedX, clickedY)))
    if capture:
        drawWords(_g_origImage, capture, [word], _g_words)
        drawClickedPoint(capture, capture, (clickedX, clickedY))
    return ((score, matching_word), (clickedX, clickedY))

def iClickBox((left, top, right, bottom), clickPos=(0.5, 0.5),
              mouseButton=1, mouseEvent=1, dryRun=None,
              capture=None, _captureText=None):
    """
    DEPRECATED - use fmbtx11.Screen.tapItem instead.
    Click coordinates relative to the given bounding box, default is
    in the middle of the box.
    Parameters:
        (left, top, right, bottom)
                     coordinates of the box inside the window.
                     (0, 0) is the top-left corner of the window.
        clickPos     (offsetX, offsetY) position to be clicked,
                     relative to the given box. (0, 0) is the
                     top-left, and (1.0, 1.0) is the lower-right
                     corner of the box. The default is (0.5, 0.5),
                     that is, the middle point of the box. Values
                     smaller than 0 and bigger than 1 are allowed,
                     too.
        mouseButton  mouse button to be synthesized on the event, default is 1.
        mouseEvent   event to be synthesized, the default is MOUSEEVENT_CLICK,
                     others: MOUSEEVENT_MOVE, MOUSEEVENT_DOWN, MOUSEEVENT_UP.
        dryRun       if True, does not synthesize events. Still returns
                     coordinates of the clicked position and illustrates
                     the clicked position on the capture image if
                     given.
        capture      name of file where the last screenshot with
                     clicked point highlighted is saved. The default
                     is None (nothing is saved).
    Returns pair (clickedX, clickedY)
                     X and Y coordinates of clicked position on the
                     screen.
    """
    clickWinX = int(left + clickPos[0]*(right-left))
    clickWinY = int(top + clickPos[1]*(bottom-top))
    (clickedX, clickedY) = iClickWindow((clickWinX, clickWinY),
                                        mouseButton, mouseEvent,
                                        dryRun, capture=False)
    if capture:
        if _captureText == None:
            _captureText = "Box: %s, %s, %s, %s" % (left, top, right, bottom)
        drawIcon(_g_origImage, capture, _captureText, (left, top, right, bottom))
        drawClickedPoint(capture, capture, (clickedX, clickedY))
    return (clickedX, clickedY)

def iClickWindow((clickX, clickY), mouseButton=1, mouseEvent=1, dryRun=None, capture=None):
    """
    DEPRECATED - use fmbtx11.Screen.tap instead.
    Click given coordinates in the window.
    Parameters:
        (clickX, clickY)
                     coordinates to be clicked inside the window.
                     (0, 0) is the top-left corner of the window.
                     Integer values are window coordinates. Floating
                     point values from 0.0 to 1.0 are scaled to window
                     coordinates: (0.5, 0.5) is the middle of the
                     window, and (1.0, 1.0) the bottom-right corner of
                     the window.
        mouseButton  mouse button to be synthesized on the event, default is 1.
        mouseEvent   event to be synthesized, the default is MOUSEEVENT_CLICK,
                     others: MOUSEEVENT_MOVE, MOUSEEVENT_DOWN, MOUSEEVENT_UP.
        dryRun       if True, does not synthesize events. Still
                     illustrates the clicked position on the capture
                     image if given.
        capture      name of file where the last screenshot with
                     clicked point highlighted is saved. The default
                     is None (nothing is saved).
    Returns pair (clickedX, clickedY)
                     X and Y coordinates of clicked position on the
                     screen.
    """
    # Get the size of the window
    wndSize = windowSize()
    (clickX, clickY) = _coordsToInt((clickX, clickY), wndSize)
    # Get the position of the window
    wndPos = windowXY()
    # If coordinates are given as percentages, convert to window coordinates
    clickScrX = clickX + wndPos[0]
    clickScrY = clickY + wndPos[1]
    iClickScreen((clickScrX, clickScrY), mouseButton, mouseEvent, dryRun, capture)
    return (clickScrX, clickScrY)

def iClickScreen((clickX, clickY), mouseButton=1, mouseEvent=1, dryRun=None, capture=None):
    """
    DEPRECATED - use fmbtx11.Screen.tap instead.
    Click given absolute coordinates on the screen.
    Parameters:
        (clickX, clickY)
                     coordinates to be clicked on the screen. (0, 0)
                     is the top-left corner of the screen. Integer
                     values are screen coordinates. Floating point
                     values from 0.0 to 1.0 are scaled to screen
                     coordinates: (0.5, 0.5) is the middle of the
                     screen, and (1.0, 1.0) the bottom-right corner of
                     the screen.
        mouseButton  mouse button to be synthesized on the event, default is 1.
        mouseEvent   event to be synthesized, the default is MOUSEEVENT_CLICK,
                     others: MOUSEEVENT_MOVE, MOUSEEVENT_DOWN, MOUSEEVENT_UP.
        dryRun       if True, does not synthesize events. Still
                     illustrates the clicked position on the capture
                     image if given.
        capture      name of file where the last screenshot with
                     clicked point highlighted is saved. The default
                     is None (nothing is saved).
    """
    _DEPRECATED()
    if mouseEvent == MOUSEEVENT_CLICK:
        params = "'mouseclick %s'" % (mouseButton,)
    elif mouseEvent == MOUSEEVENT_DOWN:
        params = "'mousedown %s'" % (mouseButton,)
    elif mouseEvent == MOUSEEVENT_UP:
        params = "'mouseup %s'" % (mouseButton,)
    else:
        params = ""
    clickX, clickY = _coordsToInt((clickX, clickY))
    if capture:
        drawClickedPoint(_g_origImage, capture, (clickX, clickY))
    if dryRun == None:
        dryRun = _g_defaultClickDryRun
    if not dryRun:
        # use xte from the xautomation package
        _runcmd("xte 'mousemove %s %s' %s" % (clickX, clickY, params))

def iGestureScreen(listOfCoordinates, duration=0.5, holdBeforeGesture=0.0, holdAfterGesture=0.0, intermediatePoints=0, capture=None, dryRun=None):
    """
    DEPRECATED - use fmbtx11.Screen.drag instead.
    Synthesizes a gesture on the screen.
    Parameters:
        listOfCoordinates
                     The coordinates through which the cursor moves.
                     Integer values are screen coordinates. Floating
                     point values from 0.0 to 1.0 are scaled to screen
                     coordinates: (0.5, 0.5) is the middle of the
                     screen, and (1.0, 1.0) the bottom-right corner of
                     the screen.
        duration     gesture time in seconds, excluding
                     holdBeforeGesture and holdAfterGesture times.
        holdBeforeGesture
                     time in seconds to keep mouse down before the
                     gesture.
        holdAfterGesture
                     time in seconds to keep mouse down after the
                     gesture.
        intermediatePoints
                     the number of intermediate points to be added
                     between each of the coordinates. Intermediate
                     points are added to straight lines between start
                     and end points.
        capture      name of file where the last screenshot with
                     the points through which the cursors passes is
                     saved. The default is None (nothing is saved).
        dryRun       if True, does not synthesize events. Still
                     illustrates the coordinates through which the cursor
                     goes.
    """
    _DEPRECATED()
    # The params list to be fed to xte
    params = []
    # The list of coordinates through which the cursor has to go
    goThroughCoordinates = []
    for pos in xrange(len(listOfCoordinates)):
        x, y = _coordsToInt(listOfCoordinates[pos])
        goThroughCoordinates.append((x,y))
        if pos == len(listOfCoordinates) - 1:
            break # last coordinate added
        nextX, nextY = _coordsToInt(listOfCoordinates[pos+1])
        (x,y), (nextX, nextY) = (x, y), (nextX, nextY)
        for ip in range(intermediatePoints):
            goThroughCoordinates.append(
                (int(round(x + (nextX-x)*(ip+1)/float(intermediatePoints+1))),
                 int(round(y + (nextY-y)*(ip+1)/float(intermediatePoints+1)))))
    # Calculate the time (in micro seconds) to sleep between moves.
    if len(goThroughCoordinates) > 1:
        moveDelay = 1000000 * float(duration) / (len(goThroughCoordinates)-1)
    else:
        moveDelay = 0
    if not dryRun:
        # Build the params list.
        params.append("'mousemove %d %d'" % goThroughCoordinates[0])
        params.append("'mousedown 1 '")
        if holdBeforeGesture > 0:
            params.append("'usleep %d'" % (holdBeforeGesture * 1000000,))
        for i in xrange(1, len(goThroughCoordinates)):
            params.append("'usleep %d'" % (moveDelay,))
            params.append("'mousemove %d %d'" % goThroughCoordinates[i])
        if holdAfterGesture > 0:
            params.append("'usleep %d'" % (holdAfterGesture * 1000000,))
        params.append("'mouseup 1'")
        # Perform the gesture
        _runcmd("xte %s" % (" ".join(params),))
    if capture:
        intCoordinates = [ _coordsToInt(point) for point in listOfCoordinates ]
        drawLines(_g_origImage, capture, intCoordinates, goThroughCoordinates)
    return goThroughCoordinates

def iGestureWindow(listOfCoordinates, duration=0.5, holdBeforeGesture=0.0, holdAfterGesture=0.0, intermediatePoints=0, capture=None, dryRun=None):
    """
    DEPRECATED - use fmbtx11.Screen.drag instead.
    Synthesizes a gesture on the window.
    Parameters:
        listOfCoordinates
                     The coordinates through which the cursor moves.
                     Integer values are window coordinates. Floating
                     point values from 0.0 to 1.0 are scaled to window
                     coordinates: (0.5, 0.5) is the middle of the
                     window, and (1.0, 1.0) the bottom-right corner of
                     the window.
        duration     gesture time in seconds, excluding
                     holdBeforeGesture and holdAfterGesture times.
        holdBeforeGesture
                     time in seconds to keep mouse down before the
                     gesture.
        holdAfterGesture
                     time in seconds to keep mouse down after the
                     gesture.
        intermediatePoints
                     the number of intermediate points to be added
                     between each of the coordinates. Intermediate
                     points are added to straight lines between start
                     and end points.
        capture      name of file where the last screenshot with
                     the points through which the cursors passes is
                     saved. The default is None (nothing is saved).
        dryRun       if True, does not synthesize events. Still
                     illustrates the coordinates through which the cursor
                     goes.
    """
    screenCoordinates = [ _windowToScreen(*_coordsToInt((x,y),windowSize())) for (x,y) in listOfCoordinates ]
    return iGestureScreen(screenCoordinates, duration, holdBeforeGesture, holdAfterGesture, intermediatePoints, capture, dryRun)

def iType(word, delay=0.0):
    """
    DEPRECATED - use fmbtx11.Screen.type instead.
    Send keypress events.
    Parameters:
        word is either
             - a string containing letters and numbers.
               Each letter/number is using press and release events.
             - a list that contains
               - keys: each key is sent using press and release events.
               - (key, event)-pairs: the event (either "press" or "release")
                 is sent.
               - (key1, key2, ..., keyn)-tuples. 2n events is sent:
                 key1 press, key2 press, ..., keyn press,
                 keyn release, ..., key2 release, key1 release.
             Keys are defined in eyenfinger.Xkeys, for complete list
             see keysymdef.h.
        delay is given as seconds between sent events
    Examples:
        iType('hello')
        iType([('Shift_L', 'press'), 'h', 'e', ('Shift_L', 'release'), 'l', 'l', 'o'])
        iType([('Control_L', 'Alt_L', 'Delete')])
    """
    _DEPRECATED()
    args = []
    for char in word:
        if type(char) == tuple:
            if char[1].lower() == 'press':
                args.append("'keydown %s'" % (char[0],))
            elif char[1].lower() == 'release':
                args.append("'keyup %s'" % (char[0],))
            else:
                rest = []
                for key in char:
                    args.append("'keydown %s'" % (key,))
                    rest.insert(0, "'keyup %s'" % (key,))
                args = args + rest
        else:
            # char is keyname or single letter/number
            args.append("'key %s'" % (char,))
    usdelay = " 'usleep %s' " % (int(delay*1000000),)
    _runcmd("xte %s" % (usdelay.join(args),))

def iInputKey(*args, **kwargs):
    """
    DEPRECATED - use fmbtx11.Screen.pressKey instead.
    Send keypresses using Linux evdev interface
    (/dev/input/eventXX).
    iInputKey(keySpec[, keySpec...], hold=<float>, delay=<float>, device=<str>)
    Parameters:
        keySpec is one of the following:
             - a string of one-character-long key names:
               "aesc" will send four keypresses: A, E, S and C.
             - a list of key names:
               ["a", "esc"] will send two keypresses: A and ESC.
               Key names are listed in eyenfinger.InputKeys.
             - an integer:
               116 will press the POWER key.
             - "_" or "^":
               only press or release event will be generated
               for the next key, respectively.
             If a key name inside keySpec is prefixed by "_"
             or "^", only press or release event is generated
             for that key.
        hold         time (in seconds) to hold the key before
                     releasing. The default is 0.1.
        delay        delay (in seconds) after key release. The default
                     is 0.1.
        device       name of the input device or input event file to
                     which all key presses are sent. The default can
                     be set with iSetDefaultInputKeyDevice(). For
                     instance, "/dev/input/event0" or a name of a
                     device in /proc/bus/input/devices.
    """
    _DEPRECATED()
    hold = kwargs.get("hold", 0.1)
    delay = kwargs.get("delay", 0.1)
    device = kwargs.get("device", _g_defaultInputKeyDevice)
    inputKeySeq = []
    press, release = 1, 1
    for a in args:
        if a == "_": press, release = 1, 0
        elif a == "^": press, release = 0, 1
        elif type(a) == str:
            for char in a:
                if char == "_": press, release = 1, 0
                elif char == "^": press, release = 0, 1
                else:
                    inputKeySeq.append((press, release, _inputKeyNameToCode(char.upper())))
                    press, release = 1, 1
        elif type(a) in (tuple, list):
            for keySpec in a:
                if type(keySpec) == int:
                    inputKeySeq.append((press, release, keySpec))
                    press, release = 1, 1
                else:
                    if keySpec.startswith("_"):
                        press, release = 1, 0
                        keySpec = keySpec[1:]
                    elif keySpec.startswith("^"):
                        press, release = 0, 1
                        keySpec = keySpec[1:]
                    if keySpec:
                        inputKeySeq.append((press, release, _inputKeyNameToCode(keySpec.upper())))
                        press, release = 1, 1
        elif type(a) == int:
            inputKeySeq.append((press, release, a))
            press, release = 1, 1
        else:
            raise ValueError('Invalid keySpec "%s"' % (a,))
    if inputKeySeq:
        _writeInputKeySeq(_deviceFilename(device), inputKeySeq, hold=hold, delay=delay)

def _deviceFilename(deviceName):
    if not _deviceFilename.deviceCache:
        _deviceFilename.deviceCache = dict(_listInputDevices())
    if not deviceName in _deviceFilename.deviceCache:
        return deviceName
    else:
        return _deviceFilename.deviceCache[deviceName]
_deviceFilename.deviceCache = {}

def _listInputDevices():
    nameAndFile = []
    for l in file("/proc/bus/input/devices"):
        if l.startswith("N: Name="):
            nameAndFile.append([l.split('"')[1]])
        elif l.startswith("H: Handlers=") and "event" in l:
            try:
                eventFilename = re.findall("(event[0-9]+)", l)[0]
                nameAndFile[-1].append("/dev/input/%s" % (eventFilename,))
            except:
                _log('WARNING: Could not recognise event[0-9] filename from row "%s".'
                     % (l.strip(),))
    return nameAndFile

def _writeInputKeySeq(filename, keyCodeSeq, hold=0.1, delay=0.1):
    if type(filename) != str or len(filename) == 0:
        raise ValueError('Invalid input device "%s"' % (filename,))
    fd = os.open(filename, os.O_WRONLY | os.O_NONBLOCK)
    for press, release, keyCode in keyCodeSeq:
        if press:
            bytes = os.write(fd, struct.pack(_InputEventStructSpec,
                                             int(time.time()), 0, _EV_KEY, keyCode, 1))
            if bytes > 0:
                bytes += os.write(fd, struct.pack(_InputEventStructSpec,
                                                  0, 0, 0, 0, 0))
            time.sleep(hold)
        if release:
            bytes += os.write(fd, struct.pack(_InputEventStructSpec,
                                              int(time.time()), 0, _EV_KEY, keyCode, 0))
            if bytes > 0:
                bytes += os.write(fd, struct.pack(_InputEventStructSpec,
                                                  0, 0, 0, 0, 0))
            time.sleep(delay)
    os.close(fd)

def findWord(word, detected_words = None, appearance=1):
    """
    Returns pair (score, corresponding-detected-word)
    """
    if detected_words == None:
        detected_words = _g_words
        if _g_words == None:
            raise NoOCRResults()
    scored_words = []
    for w in detected_words:
        scored_words.append((_score(w, word), w))
    scored_words.sort()
    if len(scored_words) == 0:
        raise BadMatch("No words found.")
    return scored_words[-1]

def findText(text, detected_words = None, match=-1):
    def biggerBox(bbox_list):
        left, top, right, bottom = bbox_list[0]
        for l, t, r, b in bbox_list[1:]:
            left = min(left, l)
            top = min(top, t)
            right = max(right, r)
            bottom = max(bottom, b)
        return (left, top, right, bottom)
    words = text.split()
    word_count = len(words)
    detected_texts = [] # strings of <word_count> words
    if detected_words == None:
        detected_words = _g_words
        if _g_words == None:
            raise NoOCRResults()
    # sort by numeric word id
    words_by_id = []
    for word in detected_words:
        for wid, middle, bbox in detected_words[word]:
            # change word id from "word_2_42" to (2, 42)
            int_wid = [int(n) for n in wid[5:].split("_")]
            words_by_id.append(
                (int_wid, word, bbox))
    words_by_id.sort()
    scored_texts = []
    if word_count > 0:
        for i in xrange(len(words_by_id)-word_count+1):
            detected_texts.append(
                (" ".join([w[1] for w in words_by_id[i:i+word_count]]),
                 biggerBox([w[2] for w in words_by_id[i:i+word_count]])))
        norm_text = " ".join(words) # normalize whitespace
        for t in detected_texts:
            scored_texts.append((_score(t[0], norm_text), t[0], t[1]))
        scored_texts.sort()
    elif match == 0.0:
        # text == "", match == 0 => every word is a match
        for w in words_by_id:
            detected_texts.append((w[1], w[2]))
        scored_texts = [(0.0, t[0], t[1]) for t in detected_texts]
    else:
        # text == "", match != 0 => no hits
        detected_texts = []
        scored_texts = []
    return [st for st in scored_texts if st[0] >= match]

def _score(w1, w2):
    closeMatch = {
        '1l': 0.1,
        '1I': 0.2,
        'Il': 0.2
        }
    def levenshteinDistance(w1, w2):
        m = [range(len(w1)+1)]
        for j in xrange(len(w2)+1):
            m.append([])
            m[-1].append(j+1)
        i, j = 0, 0
        for j in xrange(1, len(w2)+1):
            for i in xrange(1, len(w1)+1):
                if w1[i-1] == w2[j-1]:
                    m[j].append(m[j-1][i-1])
                else:
                    # This is not part of Levenshtein:
                    # if characters often look similar,
                    # don't add full edit distance (1.0),
                    # use the value in closeMatch instead.
                    chars = ''.join(sorted(w1[i-1] + w2[j-1]))
                    if chars in closeMatch:
                        m[j].append(m[j-1][i-1]+closeMatch[chars])
                    else:
                        # Standard Levenshtein continues...
                        m[j].append(min(
                            m[j-1][i] + 1,   # delete
                            m[j][i-1] + 1,   # insert
                            m[j-1][i-1] + 1  # substitute
                            ))
        return m[j][i]
    return 1 - (levenshteinDistance(w1, w2) / float(max(len(w1),len(w2))))

def _hocr2words(hocr):
    rv = {}
    hocr = hocr.replace("<strong>","").replace("</strong>","").replace("<em>","").replace("</em>","")
    hocr.replace("&#39;", "'")
    for name, code in htmlentitydefs.name2codepoint.iteritems():
        if code < 128:
            hocr = hocr.replace('&' + name + ';', chr(code))
    ocr_word = re.compile('''<span class=['"]ocrx?_word["'] id=['"]([^']*)["'] title=['"]bbox ([0-9]+) ([0-9]+) ([0-9]+) ([0-9]+)["';][^>]*>([^<]*)</span>''')
    for word_id, bbox_left, bbox_top, bbox_right, bbox_bottom, word in ocr_word.findall(hocr):
        bbox_left, bbox_top, bbox_right, bbox_bottom = \
            int(bbox_left), int(bbox_top), int(bbox_right), int(bbox_bottom)
        if not word in rv:
            rv[word] = []
        middle_x = (bbox_right + bbox_left) / 2.0
        middle_y = (bbox_top + bbox_bottom) / 2.0
        rv[word].append((word_id, (middle_x, middle_y),
                         (bbox_left, bbox_top, bbox_right, bbox_bottom)))
    return rv

def _getScreenSize():
    global _g_screenSize
    _, output = _runcmd("xwininfo -root | awk '/Width:/{w=$NF}/Height:/{h=$NF}END{print w\" \"h}'")
    s_width, s_height = output.split(" ")
    _g_screenSize = (int(s_width), int(s_height))

def iUseWindow(windowIdOrName = None):
    global _g_lastWindow
    if windowIdOrName == None:
        if _g_lastWindow == None:
            _g_lastWindow = iActiveWindow()
    elif windowIdOrName.startswith("0x"):
        _g_lastWindow = windowIdOrName
    else:
        _g_lastWindow = _runcmd("xwininfo -name '%s' | awk '/Window id: 0x/{print $4}'" %
                                (windowIdOrName,))[1].strip()
        if not _g_lastWindow.startswith("0x"):
            raise BadWindowName('Cannot find window id for "%s" (got: "%s")' %
                                (windowIdOrName, _g_lastWindow))
    _, output = _runcmd("xwininfo -id %s | awk '/Width:/{w=$NF}/Height:/{h=$NF}/Absolute upper-left X/{x=$NF}/Absolute upper-left Y/{y=$NF}END{print x\" \"y\" \"w\" \"h}'" %
                        (_g_lastWindow,))
    offset_x, offset_y, width, height = output.split(" ")
    _g_windowOffsets[_g_lastWindow] = (int(offset_x), int(offset_y))
    _g_windowSizes[_g_lastWindow] = (int(width), int(height))
    _getScreenSize()
    return _g_lastWindow

def iUseImageAsWindow(imageFilename):
    global _g_lastWindow
    global _g_screenSize
    if not eye4graphics:
        _log('ERROR: iUseImageAsWindow("%s") called, but eye4graphics not loaded.' % (imageFilename,))
        raise EyenfingerError("eye4graphics not available")
    if not os.access(imageFilename, os.R_OK):
        raise BadSourceImage("The input file could not be read or not present.")
    _g_lastWindow = imageFilename
    imageWidth, imageHeight = imageSize(imageFilename)
    if imageWidth == None:
        _log('iUseImageAsWindow: Failed reading dimensions of image "%s".' % (imageFilename,))
        raise BadSourceImage('Failed to read dimensions of "%s".'
                             % (imageFilename,))
    _g_windowOffsets[_g_lastWindow] = (0, 0)
    _g_windowSizes[_g_lastWindow] = (imageWidth, imageHeight)
    _g_screenSize = _g_windowSizes[_g_lastWindow]
    return _g_lastWindow

def iActiveWindow(windowId = None):
    """ return id of active window, in '0x1d0f14' format """
    if windowId == None:
        _, output = _runcmd("xprop -root | awk '/_NET_ACTIVE_WINDOW\(WINDOW\)/{print $NF}'")
        windowId = output.strip()
    return windowId

def drawBboxes(inputfilename, outputfilename, bboxes):
    """
    Draw bounding boxes
    """
    if inputfilename == None:
        return
    draw_commands = []
    for bbox in bboxes:
        left, top, right, bottom = bbox
        color = "green"
        draw_commands += ["-stroke", color, "-fill", "blue", "-draw", "fill-opacity 0.2 rectangle %s,%s %s,%s" % (
                          left, top, right, bottom)]
    _runDrawCmd(inputfilename, draw_commands, outputfilename)

def drawBbox(inputfilename, outputfilename, bbox, caption):
    """
    Draw bounding box
    """
    if inputfilename == None:
        return
    draw_commands = []
    left, top, right, bottom = bbox
    color = "green"
    draw_commands += ["-stroke", color, "-fill", "blue", "-draw", "fill-opacity 0.2 rectangle %s,%s %s,%s" % (
                      left, top, right, bottom)]
    draw_commands += ["-stroke", "none", "-fill", color, "-draw", "text %s,%s '%s'" % (
                      left, top, _safeForShell(caption))]
    _runDrawCmd(inputfilename, draw_commands, outputfilename)

def drawWords(inputfilename, outputfilename, words, detected_words):
    """
    Draw boxes around words detected in inputfilename that match to
    given words. Result is saved to outputfilename.
    """
    if inputfilename == None:
        return
    draw_commands = []
    for w in words:
        score, dw = findWord(w, detected_words)
        left, top, right, bottom = detected_words[dw][0][2]
        if score < 0.33:
            color = "red"
        elif score < 0.5:
            color = "brown"
        else:
            color = "green"
        draw_commands += ["-stroke", color, "-fill", "blue", "-draw", "fill-opacity 0.2 rectangle %s,%s %s,%s" % (
                          left, top, right, bottom)]
        draw_commands += ["-stroke", "none", "-fill", color, "-draw", "text %s,%s '%s'" % (
                          left, top, _safeForShell(w))]
        draw_commands += ["-stroke", "none", "-fill", color, "-draw", "text %s,%s '%.2f'" % (
                          left, bottom+10, score)]
    _runDrawCmd(inputfilename, draw_commands, outputfilename)

def drawIcon(inputfilename, outputfilename, iconFilename, bboxes, color='green', area=None):
    if inputfilename == None:
        return
    if type(bboxes) == tuple:
        bboxes = [bboxes]
        show_number = False
    else:
        show_number = True
    draw_commands = []
    for index, bbox in enumerate(bboxes):
        left, top, right, bottom = bbox[0], bbox[1], bbox[2], bbox[3]
        draw_commands += ["-stroke", color, "-fill", "blue", "-draw", "fill-opacity 0.2 rectangle %s,%s %s,%s" % (left, top, right, bottom)]
        if show_number:
            caption = "%s %s" % (index+1, iconFilename)
        else:
            caption = iconFilename
        draw_commands += ["-stroke", "none", "-fill", color, "-draw", "text %s,%s '%s'" % (
                          left, top, _safeForShell(caption))]
    if area != None:
        draw_commands += ["-stroke", "yellow", "-draw", "fill-opacity 0.0 rectangle %s,%s %s,%s" % (area[0]-1, area[1]-1, area[2], area[3])]
    _runDrawCmd(inputfilename, draw_commands, outputfilename)

def drawClickedPoint(inputfilename, outputfilename, clickedXY):
    """
    clickedXY contains absolute screen coordinates
    """
    if inputfilename == None:
        return
    x, y = clickedXY
    x -= _g_windowOffsets[_g_lastWindow][0]
    y -= _g_windowOffsets[_g_lastWindow][1]
    draw_commands = ["-stroke", "red", "-fill", "blue", "-draw", "fill-opacity 0.2 circle %s,%s %s,%s" % (
                     x, y, x + 20, y)]
    draw_commands += ["-stroke", "none", "-fill", "red", "-draw", "point %s,%s" % (x, y)]
    _runDrawCmd(inputfilename, draw_commands, outputfilename)

def _screenToWindow(x,y):
    """...
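
As the listing shows, drawClickedPoint() itself only annotates an image: it subtracts the current window offset from the given screen coordinates and lets _runDrawCmd() draw a red circle and a point there. If you only want the annotation, without synthesizing any mouse event, you can call it directly once a window (or an image standing in for one) has been selected. A sketch, assuming fMBT's eye4graphics library is available for iUseImageAsWindow(); the filenames and coordinates are made up:

import eyenfinger

# Treat a saved screenshot as the "window"; this sets the window offset to
# (0, 0), so the clickedXY coordinates below are used as-is.
eyenfinger.iUseImageAsWindow("screen.png")
eyenfinger.drawClickedPoint("screen.png", "screen-clicked.png", (120, 240))

Passing dryRun=True together with a capture filename to iClickWindow() or iClickScreen() has a similar effect: the clicked point is drawn, but no events are sent.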


hand_gesture_modeling.py

Source: hand_gesture_modeling.py (GitHub)


...
pose = getLabelName('./datasets/hand_pose/pose_name.json')
gmodel = tf.keras.models.load_model('./models/hand_gesture.h5', compile=False)
pmodel = tf.keras.models.load_model('./models/hand_pose.h5', compile=False)
hands = mp.solutions.hands.Hands(min_detection_confidence=0.5, min_tracking_confidence=0.5, max_num_hands=1)

def drawClickedPoint(landmark, w, h):
    global circles
    pos = landmark[8]
    circles.append((int(pos.x*w), int(pos.y*h)))
    if len(circles) > 20:
        circles = circles[1:]

def predict(frame, key, videoCap):
    global lastFrameTime, sequence
    h, w, c = frame.shape
    ratio = h/w
    dt = time.time() - lastFrameTime
    lastFrameTime = time.time()
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_handedness:
        draw_landmark(frame, results)
        landmark = results.multi_hand_landmarks[0].landmark
        landmark_list = landmark_to_list(landmark, ratio)
        vlandmark = [vectorize_landmark(landmark_list)]
        pinputs = tf.split(vlandmark, num_or_size_splits=5, axis=-1)
        index = tf.argmax(pmodel([pinputs]), axis=1).numpy()[0]
        pred_pose = pose[index]
        if pred_pose == 'pointer1':
            f1_landmark = landmark_to_list_norm(landmark, ratio)
            f1_landmark += [dt]
            sequence.append(f1_landmark)
            if len(sequence) > 20:
                sequence = sequence[1:]
            input = tf.constant([sequence])
            confidence = tf.nn.sigmoid(gmodel(input)).numpy()[0][0]
            if confidence > 0.993:
                print('Clicked')
                drawClickedPoint(landmark, w, h)
                sequence.clear()
        elif len(sequence) != 0:
            sequence.clear()

        cv2.putText(frame, pred_pose, (int(w/4),30), cv2.FONT_HERSHEY_COMPLEX, 1, (0,255,0), 2)

    cv2.putText(frame, str(dt), (0,10), cv2.FONT_HERSHEY_COMPLEX, 0.5, (0,255,0), 1)
    for center in circles:
        frame = cv2.circle(frame, center, 2, (0,255,0), 2)
    return frame
...
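
This drawClickedPoint() is unrelated to eyenfinger's: it records where the index fingertip (MediaPipe hand landmark 8) was when a click gesture is recognized, keeping the last 20 points in a global circles list that predict() then draws onto every frame. Below is a sketch of the module-level state and capture loop the excerpt appears to assume; the initial values are assumptions, and the helpers draw_landmark(), landmark_to_list(), landmark_to_list_norm(), vectorize_landmark() and getLabelName() are defined elsewhere in the original project:

import time
import cv2

# Module-level state used by drawClickedPoint() and predict() above
# (initial values are assumptions).
circles = []                  # recent fingertip positions, drawn as green circles
sequence = []                 # sliding window of landmarks fed to the gesture model
lastFrameTime = time.time()

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = predict(frame, None, cap)   # key and videoCap are unused in the excerpt
    cv2.imshow('hand gesture', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()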


Automation Testing Tutorials

Learn to execute automation testing from scratch with the LambdaTest Learning Hub: from setting up the prerequisites and running your first automation test, to following best practices and diving deeper into advanced test scenarios. The LambdaTest Learning Hub compiles step-by-step guides to help you become proficient with different test automation frameworks such as Selenium, Cypress, and TestNG.

YouTube

You can also refer to the video tutorials on the LambdaTest YouTube channel for step-by-step demonstrations from industry experts.

Run fMBT automation tests on the LambdaTest cloud grid

Perform automation testing on 3000+ real desktop and mobile devices online.

Try LambdaTest now!

Get 100 automation testing minutes free!

