How to use crop_element method in toolium

Best Python code snippets using toolium

autoencoderInvertible.py

Source: autoencoderInvertible.py (GitHub)


...
            return x[:, :, :, :, start:end]
        return Lambda(func)
    except IndexError:
        print("Sorry, your index is out of range.")

def crop_element(k: tuple, r) -> tuple:
    """
    **A function taking out specific values.**

    + param **k**: tuple object to be processed, type `tuple`.
    + param **r**: value to be removed, type `int, float, string, None`.
    + return **k2**: cropped tuple object, type `tuple`.
    """
    k2 = list(k)
    k2.remove(r)
    return tuple(k2)

def invertible_subspace_dimension2D(units: int):
    """
    **A helper function converting dimensions into 2D convolution shapes.**
    This function works only for a quadratic dimension size. It reshapes the data
    according to an embedding with the same dimension, represented by a `2D` array.
    + param **units**: number of units, type `int`.
    + return **embedding**: `2D` convolution shape, type `tuple`.
    """
    embedding = (int(math.sqrt(units)), int(math.sqrt(units)), 1)
    return embedding

def dense_group(
    _input: np.ndarray,
    units: int,
    alpha: float = 5.5,
    useBias: bool = True,
    kernelInitializer: str = "uniform",
    biasInitializer: str = "zeros",
    padding: None = None,
    filterPower: None = None,
    kernelSize: None = None,
):
    """
    **This group can be extended for deep learning models and is a sequence of dense layers.**
    The dense layer is used with a `LeakyReLU` activation function. After the activation
    function, batch normalization is performed by default to take care of the covariate shift.
    + param **_input**: data from the previous layer, type `np.ndarray`.
    + param **filterPower**: multiple of the filters per layer, type `int`.
    + param **alpha**: parameter for the `LeakyReLU` activation function, default `5.5`, type `float`.
    + param **kernelSize**: size of the `2D` kernel, default `(2,2)`, type `tuple`.
    + param **kernelInitializer**: keras kernel initializer, default `uniform`, type `str`.
    + param **padding**: padding for convolution, default `same`, type `str`.
    + param **useBias**: whether or not to use the bias term throughout the network, type `bool`.
    + param **biasInitializer**: initializing distribution of the bias values, type `str`.
    + return **data**: data processed by the neural layers, type `np.ndarray`.
    """
    _dense = Dense(
        units=units,
        use_bias=useBias,
        kernel_initializer=kernelInitializer,
        bias_initializer=biasInitializer,
    )(_input)
    _activ = LeakyReLU(alpha=alpha)(_dense)
    _norm = BatchNormalization()(_activ)
    return _norm

def convolutional_group(
    _input: np.ndarray,
    filterPower: int = 2,
    alpha: float = 5.5,
    useBias: bool = True,
    kernelSize: tuple = (2, 2),
    kernelInitializer: str = "uniform",
    biasInitializer: str = "zeros",
    padding: str = "same",
    units: None = None,
):
    """
    **This group can be extended for deep learning models and is a sequence of convolutional layers.**
    The convolution is a `2D` convolution and uses a `LeakyReLU` activation function. After the
    activation function, batch normalization is performed by default to take care of the covariate
    shift. The padding is set to `same` to avoid difficulties with the convolution.
    + param **_input**: data from the previous convolutional layer, type `np.ndarray`.
    + param **filterPower**: multiple of the filters per layer, type `int`.
    + param **alpha**: parameter for the `LeakyReLU` activation function, default `5.5`, type `float`.
    + param **kernelSize**: size of the `2D` kernel, default `(2,2)`, type `tuple`.
    + param **kernelInitializer**: keras kernel initializer, default `uniform`, type `str`.
    + param **padding**: padding for convolution, default `same`, type `str`.
    + param **useBias**: whether or not to use the bias term throughout the network, type `bool`.
    + param **biasInitializer**: initializing distribution of the bias values, type `str`.
    + return **data**: data processed by the neural layers, type `np.ndarray`.
    """
    _conv = Conv2D(
        filterPower,
        kernelSize,
        kernel_initializer=kernelInitializer,
        use_bias=useBias,
        bias_initializer=biasInitializer,
        padding=padding,
    )(_input)
    _activ = LeakyReLU(alpha=alpha)(_conv)
    _norm = BatchNormalization()(_activ)
    return _norm

def group_loop(
    group: Callable,
    element: np.ndarray,
    units: int = None,
    filterPower: int = 2,
    kernelSize: tuple = (2, 2),
    groupLayers: int = 1,
    kernelInitializer: str = "uniform",
    biasInitializer: str = "zeros",
    useBias: bool = True,
) -> np.ndarray:
    """
    **This callable is a loop over a group specification.**
    The group is a stacking of neural network layers into a deep learning system.
    This function stacks a certain amount of dense layers. The dense layers are kept
    equally dimensioned within the group, to respect its mathematical sense.
    + param **group**: a callable that sets up the neural architecture, type `Callable`.
    + param **element**: data, type `np.ndarray`.
    + param **groupLayers**: depth of the neural network, type `int`.
    + param **units**: units for dense groups, type `int`.
    + param **useBias**: whether or not to use the bias term throughout the network, type `bool`.
    + param **biasInitializer**: initializing distribution of the bias values, type `str`.
    + return **data**: data processed by the neural network, type `np.ndarray`.
    """
    data = element
    for i in range(0, groupLayers + 1):
        try:
            if i == groupLayers:
                data = group(
                    data,
                    units=units,
                    filterPower=1,
                    kernelSize=kernelSize,
                    kernelInitializer=kernelInitializer,
                    biasInitializer=biasInitializer,
                    useBias=useBias,
                )
            else:
                data = group(
                    data,
                    units=units,
                    filterPower=filterPower ** (groupLayers - i),
                    kernelSize=kernelSize,
                    kernelInitializer=kernelInitializer,
                    biasInitializer=biasInitializer,
                    useBias=useBias,
                )
        except TypeError:
            exit("TypeError on your convolutional layer size.")
    return data

def dense_invertible_layer(
    data: np.ndarray,
    units: int = None,
    groupLayers: int = 1,
    alpha: float = 5.5,
    croppingNumber: int = 2,
    kernelInitializer: str = "uniform",
    biasInitializer: str = "zeros",
    useBias: bool = True,
    kernelSize: None = None,
    padding: None = None,
) -> np.ndarray:
    """
    **Returns an invertible dense neural network layer.**
    This neural network layer learns invertible subspaces, parameterized by higher dimensional
    functions with a trivial invertibility. The higher dimensional functions are also neural
    subnetworks, trained during the learning process.
    + param **data**: data from the previous layer, type `np.ndarray`.
    + param **units**: units for dense groups, type `int`.
    + param **groupLayers**: depth of the neural network, type `int`.
    + param **alpha**: parameter for the `LeakyReLU` activation function, default `5.5`, type `float`.
    + param **useBias**: whether or not to use the bias term throughout the network, type `bool`.
    + param **croppingNumber**: some quotient of the dimension, type `int`.
    + param **biasInitializer**: initializing distribution of the bias values, type `str`.
    + param **kernelInitializer**: keras kernel initializer, default `uniform`, type `str`.
    + return **data**: processed data, type `np.ndarray`.
    """
    data_shape = crop_element(K.int_shape(data), None)
    crop = int(data_shape[0] / croppingNumber)

    def _splitLayer(tensor):
        if (2 * crop) == data_shape[0]:
            return tf.split(tensor, [crop, crop], axis=1)
        else:
            return tf.split(tensor, [crop, crop + 1], axis=1)

    partVectorOne, partVectorTwo = Lambda(_splitLayer)(data)
    firstGroup = group_loop(
        group=dense_group,
        units=crop,
        groupLayers=groupLayers,
        element=partVectorTwo,
        kernelInitializer=kernelInitializer,
        useBias=useBias,
        biasInitializer=biasInitializer,
    )
    firstMultiplication = multiply([partVectorOne, firstGroup])
    firstAddition = add([firstMultiplication, firstGroup])
    # Inverse process of learning.
    secondGroup = group_loop(
        group=dense_group,
        units=crop,
        groupLayers=groupLayers,
        element=firstAddition,
        kernelInitializer=kernelInitializer,
        useBias=useBias,
        biasInitializer=biasInitializer,
    )
    secondMultiplication = multiply([partVectorTwo, firstGroup])
    secondAddition = add([secondMultiplication, secondGroup])
    decoded_layer = Concatenate()([firstAddition, secondAddition])
    decoded_layer = Reshape(data_shape)(decoded_layer)
    return decoded_layer

def convolutional_invertible_layer(
    data: np.ndarray,
    groupLayers: int,
    alpha: float = 5.5,
    kernelSize: tuple = (2, 2),
    kernelInitializer: str = "uniform",
    filterPower: int = 2,
    croppingFactor: int = 2,
    useBias: bool = True,
    biasInitializer: str = "zeros",
) -> np.ndarray:
    """
    **Returns an invertible neural network layer.**
    This neural network layer learns invertible subspaces, parameterized by higher dimensional
    functions with a trivial invertibility. The higher dimensional functions are also neural
    subnetworks, trained during the learning process.
    + param **data**: data from the previous convolutional layer, type `np.ndarray`.
    + param **alpha**: parameter for the `LeakyReLU` activation function, default `5.5`, type `float`.
    + param **groupLayers**: depth of the neural network, type `int`.
    + param **kernelSize**: size of the kernels, type `tuple`.
    + param **filterPower**: multiple of the filters per layer, type `int`.
    + param **croppingFactor**: should be a multiple of the strides length, type `int`.
    + param **useBias**: whether or not to use the bias term throughout the network, type `bool`.
    + param **biasInitializer**: initializing distribution of the bias values, type `str`.
    + return **data**: processed data, type `np.ndarray`.
    """
    data_shape = crop_element(K.int_shape(data), None)
    crop = int(data_shape[0] / croppingFactor)
    partVectorOne = Cropping2D(cropping=((0, 0), (0, crop)))(data)
    partVectorTwo = Cropping2D(cropping=((crop, 0), (0, 0)))(data)
    # Storing the shapes for dynamic adaptation of parameters.
    partVectorOneShape = crop_element(K.int_shape(partVectorOne), None)
    partVectorTwoShape = crop_element(K.int_shape(partVectorTwo), None)
    # Check each axis separately (the second test uses kernelSize[1]).
    if crop - ((groupLayers + 1) * (kernelSize[0] - 1)) <= 0:
        exit("The group layers will cause negative dimensions due to convolution in axis = 0.")
    elif crop - ((groupLayers + 1) * (kernelSize[1] - 1)) <= 0:
        exit("The group layers will cause negative dimensions due to convolution in axis = 1.")
    # Compute the dimension reduction caused by the convolutional layer.
    sizeReductionPerDimension = []
    for i in range(0, len(kernelSize)):
        sizeReductionPerDimension.append((kernelSize[i] - 1) * groupLayers)
    # First function for invertibility.
    dataDimension = np.prod(np.array(partVectorOneShape))
    dataReduced = dataDimension - np.prod(np.array(sizeReductionPerDimension))
    firstGroup = group_loop(
        group=convolutional_group,
        groupLayers=groupLayers,
        element=partVectorTwo,
        filterPower=filterPower,
        kernelSize=kernelSize,
        kernelInitializer=kernelInitializer,
        useBias=useBias,
        biasInitializer=biasInitializer,
    )
    firstMultiplication = multiply(
        [partVectorOne, Reshape(partVectorOneShape)(firstGroup)]
    )
    firstAddition = add([firstMultiplication, Reshape(partVectorOneShape)(firstGroup)])
    # Inverse process of learning.
    secondGroup = group_loop(
        group=convolutional_group,
        groupLayers=groupLayers,
        element=firstAddition,
        filterPower=filterPower,
        kernelSize=kernelSize,
        kernelInitializer=kernelInitializer,
        useBias=useBias,
        biasInitializer=biasInitializer,
    )
    secondMultiplication = multiply(
        [partVectorTwo, Reshape(partVectorTwoShape)(firstGroup)]
    )
    secondAddition = add(
        [secondMultiplication, Reshape(partVectorTwoShape)(secondGroup)]
    )
    # Storing the shapes for dynamic adaptation of parameters.
    outOneShape = crop_element(K.int_shape(firstAddition), None)
    outTwoShape = crop_element(K.int_shape(secondAddition), None)

    def _dot(tensors):
        return K.stack(tensors, axis=1)

    decoded_layer = Lambda(_dot)([firstAddition, Reshape(outOneShape)(secondAddition)])
    decoded_layer = Reshape(data_shape)(decoded_layer)
    return decoded_layer

def dense_invertible_subspace_autoencoder(
    data: np.ndarray,
    units: int,
    invertibleLayers: int,
    alpha: float = 5.5,
    kernelInitializer: str = "uniform",
    biasInitializer: str = "zeros",
    groupLayers: int = 1,
    useBias: bool = True,
):
    """
    **A function returning an invertible dense autoencoder model.**
    This is a fully invertible dense autoencoder. The amount of hidden layers is specified through
    the `groupLayers` argument. Please use only an even embedding dimension, so that no problems
    with the tensor transformations occur.
    + param **data**: data, type `np.ndarray`.
    + param **units**: units for dense layers, type `int`.
    + param **invertibleLayers**: amount of invertible layers in the middle of the network, type `int`.
    + param **alpha**: parameter for the `LeakyReLU` activation function, default `5.5`, type `float`.
    + param **kernelInitializer**: initializing distribution of the kernel values, type `str`.
    + param **groupLayers**: depth of the neural network (counts from `0`), type `int`.
    + param **useBias**: whether or not to use the bias term throughout the network, type `bool`.
    + param **biasInitializer**: initializing distribution of the bias values, type `str`.
    + return **output**: an output layer for keras neural networks, type `np.ndarray`.
    """
    firstLayerFlattened = Flatten()(data)
    dataDimension = crop_element(K.int_shape(firstLayerFlattened), None)[0]
    data_shape = crop_element(K.int_shape(data), None)
    firstLayer = Dense(
        units=units,
        use_bias=True,
        kernel_initializer="glorot_uniform",
        bias_initializer="zeros",
    )(firstLayerFlattened)
    firstActivation = LeakyReLU(alpha=alpha)(firstLayer)
    firstNorm = BatchNormalization()(firstActivation)
    for i in range(0, invertibleLayers):
        firstNorm = add(
            [
                firstNorm,
                dense_invertible_layer(
                    data=firstNorm,
                    units=units,
                    alpha=alpha,
                    kernelInitializer=kernelInitializer,
                    groupLayers=groupLayers,
                    useBias=useBias,
                    biasInitializer=biasInitializer,
                ),
            ]
        )
    lastLayer = Dense(
        units=dataDimension,
        use_bias=True,
        kernel_initializer="glorot_uniform",
        bias_initializer="zeros",
    )(firstNorm)
    lastActivation = LeakyReLU(alpha=alpha)(lastLayer)
    lastNorm = BatchNormalization()(lastActivation)
    output = Reshape(data_shape)(lastNorm)
    return output

def convolutional_invertible_subspace_autoencoder(
    data: np.ndarray,
    units: int,
    invertibleLayers: int,
    alpha: float = 5.5,
    kernelSize: tuple = (2, 2),
    kernelInitializer: str = "uniform",
    groupLayers: int = 1,
    filterPower: int = 2,
    useBias: bool = True,
    biasInitializer: str = "zeros",
):
    """
    **A function returning an invertible convolutional autoencoder model.**
    This model works only with a quadratic number as units. The convolutional embedding
    dimension in `2D` is determined, for the quadratic matrix, as the square root of the
    respective dimension of the dense layer. This module is for testing purposes and not
    meant to be part of a productive environment.
    + param **data**: data, type `np.ndarray`.
    + param **units**: projection dimension into the lower dimension by the dense layer, type `int`.
    + param **invertibleLayers**: amount of invertible layers in the middle of the network, type `int`.
    + param **alpha**: parameter for the `LeakyReLU` activation function, default `5.5`, type `float`.
    + param **kernelSize**: size of the kernels, type `tuple`.
    + param **kernelInitializer**: initializing distribution of the kernel values, type `str`.
    + param **groupLayers**: depth of the neural network (counts from `0`), type `int`.
    + param **filterPower**: multiple of the filters per layer, type `int`.
    + param **useBias**: whether or not to use the bias term throughout the network, type `bool`.
    + param **biasInitializer**: initializing distribution of the bias values, type `str`.
    + return **output**: an output layer for keras neural networks, type `np.ndarray`.
    """
    firstLayerFlattened = Flatten()(data)
    dataDimension = crop_element(K.int_shape(firstLayerFlattened), None)[0]
    data_shape = crop_element(K.int_shape(data), None)
    firstLayer = Dense(
        units=units,
        use_bias=True,
        kernel_initializer=kernelInitializer,
        bias_initializer=biasInitializer,
    )(firstLayerFlattened)
    firstActivation = LeakyReLU(alpha=alpha)(firstLayer)
    firstNorm = BatchNormalization()(firstActivation)
    firstShape = crop_element(K.int_shape(firstLayerFlattened), None)
    shape = invertible_subspace_dimension2D(units)
    reshapedLayer = Reshape(shape)(firstNorm)
    for i in range(0, invertibleLayers):
        reshapedLayer = add(
            [
                reshapedLayer,
                convolutional_invertible_layer(
                    data=reshapedLayer,
                    alpha=alpha,
                    kernelSize=kernelSize,
                    kernelInitializer=kernelInitializer,
                    groupLayers=groupLayers,
                    filterPower=filterPower,
                    useBias=useBias,
...
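
Note that in this snippet `crop_element` has nothing to do with images: it is a small tuple helper that removes one value, typically the `None` batch dimension that `K.int_shape` reports, so the remaining shape can be passed to a `Reshape` layer. A minimal, self-contained illustration of that behaviour:

def crop_element(k: tuple, r) -> tuple:
    # Return a copy of the tuple k with the first occurrence of r removed.
    k2 = list(k)
    k2.remove(r)
    return tuple(k2)

# K.int_shape on a Keras tensor usually yields (None, height, width, channels);
# dropping the None batch entry leaves a concrete shape for Reshape layers.
print(crop_element((None, 28, 28, 1), None))  # -> (28, 28, 1)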


visual_test.py

Source: visual_test.py (GitHub)


...
    img = Image.open(BytesIO(self.driver_wrapper.driver.get_screenshot_as_png()))
    img = self.remove_scrolls(img)
    img = self.mobile_resize(img)
    img = self.exclude_elements(img, exclude_web_elements)
    img = self.crop_element(img, web_element)
    img.save(output_file)
    DriverManager.visual_number += 1
    # Determine whether we should save the baseline image
    if self.save_baseline or not os.path.exists(baseline_file):
        # Copy screenshot to baseline
        shutil.copyfile(output_file, baseline_file)
        if self.driver_wrapper.config.getboolean_optional('VisualTests', 'complete_report'):
            self._add_result_to_report('baseline', report_name, output_file, None, 'Screenshot added to baseline')
        self.logger.debug("Visual screenshot '%s' saved in visualtests/baseline folder", filename)
    else:
        # Compare the screenshots
        self.compare_files(report_name, output_file, baseline_file, threshold)

def get_scrolls_size(self):
    scroll_x = 0
    scroll_y = 0
    if (self.driver_wrapper.config.get('Driver', 'type').split('-')[0] in ['chrome', 'iexplore'] and
            not self.driver_wrapper.is_mobile_test()):
        scroll_height = self.driver_wrapper.driver.execute_script("return document.body.scrollHeight")
        scroll_width = self.driver_wrapper.driver.execute_script("return document.body.scrollWidth")
        window_height = self.driver_wrapper.driver.execute_script("return window.innerHeight")
        window_width = self.driver_wrapper.driver.execute_script("return window.innerWidth")
        scroll_size = 21 if self.driver_wrapper.config.get('Driver', 'type').split('-')[0] == 'iexplore' else 17
        scroll_x = scroll_size if scroll_width > window_width else 0
        scroll_y = scroll_size if scroll_height > window_height else 0
    return {'x': scroll_x, 'y': scroll_y}

def remove_scrolls(self, img):
    scrolls_size = self.get_scrolls_size()
    if scrolls_size['x'] > 0 or scrolls_size['y'] > 0:
        new_image_width = img.size[0] - scrolls_size['y']
        new_image_height = img.size[1] - scrolls_size['x']
        img = img.crop((0, 0, new_image_width, new_image_height))
    return img

def mobile_resize(self, img):
    if self.driver_wrapper.is_ios_test() or self.driver_wrapper.is_android_web_test():
        scale = img.size[0] / self.utils.get_window_size()['width']
        if scale != 1:
            new_image_size = (int(img.size[0] / scale), int(img.size[1] / scale))
            img = img.resize(new_image_size, Image.ANTIALIAS)
    return img

def get_element_box(self, web_element):
    if not self.driver_wrapper.is_mobile_test():
        scroll_x = self.driver_wrapper.driver.execute_script("return window.pageXOffset")
        scroll_x = scroll_x if scroll_x else 0
        scroll_y = self.driver_wrapper.driver.execute_script("return window.pageYOffset")
        scroll_y = scroll_y if scroll_y else 0
        offset_x = -scroll_x
        offset_y = -scroll_y
    else:
        offset_x = 0
        offset_y = self.utils.get_safari_navigation_bar_height()
    location = web_element.location
    size = web_element.size
    return (int(location['x']) + offset_x, int(location['y'] + offset_y),
            int(location['x'] + offset_x + size['width']), int(location['y'] + offset_y + size['height']))

def crop_element(self, img, web_element):
    if web_element:
        element_box = self.get_element_box(web_element)
        # Reduce element box if it is greater than image size
        element_max_x = img.size[0] if element_box[2] > img.size[0] else element_box[2]
        element_max_y = img.size[1] if element_box[3] > img.size[1] else element_box[3]
        element_box = (element_box[0], element_box[1], element_max_x, element_max_y)
        img = img.crop(element_box)
    return img

def exclude_elements(self, img, web_elements):
    if web_elements and len(web_elements) > 0:
        img = img.convert("RGBA")
        pixel_data = img.load()
        for web_element in web_elements:
            element_box = self.get_element_box(web_element)
...
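
In toolium itself, `crop_element` is one internal step of the visual-testing pipeline shown above: `assert_screenshot` captures a screenshot, strips scrollbars, resizes mobile captures, masks excluded elements, and only then crops the image down to the target element before comparing it against the baseline. You would normally reach it through the public API rather than calling it directly. A minimal sketch, assuming an already-configured toolium `driver_wrapper` and a located `login_button` element (both placeholders here):

from toolium.visual_test import VisualTest

# Internally this runs the pipeline above, including crop_element(), so only
# login_button's bounding box is compared against the stored baseline image.
VisualTest(driver_wrapper).assert_screenshot(login_button, filename='login_button')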


process_dataset.py

Source: process_dataset.py (GitHub)


...
    ymin += -min(0, ymin)
    xmax += -min(0, xmin)
    xmin += -min(0, xmin)
    return image, xmin, xmax, ymin, ymax

def crop_element(image_path, output_path):
    """Crop off the whitespace in UI sketches to capture only the UI element's sketch
    Arguments:
        image_path {string} -- File path of input image file
        output_path {string} -- File path to store the cropped image
    """
    original_image = cv2.imread(image_path)
    # morph close kernel size is 10% of image width & crop offset is 1% of width
    height, width, _ = original_image.shape
    kernel_size = int(width * 0.1)
    offset = int(width * 0.01)
    # Convert original image to grayscale for further processing
    grayscale_image = cv2.cvtColor(original_image, cv2.COLOR_BGR2GRAY)
    # Copy the grayscale image for later reuse
    image = grayscale_image.copy()
    # Threshold to convert image black/white - remove all grays & colors
    _, thresh_binary_image = cv2.threshold(grayscale_image, 220, 255, cv2.THRESH_BINARY)
    # Apply gaussian blur on thresh binary image to remove noise
    denoised_image = cv2.GaussianBlur(thresh_binary_image, (7, 7), 0)
    # Find edges in the denoised image
    edged_image = cv2.Canny(denoised_image, 10, 250)
    # Close the edge detected image to form one combined element blob
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    blob_image = cv2.morphologyEx(edged_image, cv2.MORPH_CLOSE, kernel)
    # Find all the contours
    (_, contours, _) = cv2.findContours(blob_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Pick only the largest contour based on area, crop it and save it in processed folder
    if contours:
        contour = max(contours, key=cv2.contourArea)
        xmin, ymin, width, height = cv2.boundingRect(contour)
        bndbox = (xmin - offset, ymin - offset, xmin + width + offset, ymin + height + offset)
        # Identify the regions of interest and save them
        roi = crop_image(image, bndbox)
        cv2.imwrite(output_path, roi)

if __name__ == "__main__":
    PARSER = argparse.ArgumentParser(description="Automatically crop labelled UI sketch elements.")
    PARSER.add_argument(
        "-i",
        "--input",
        required=True,
        dest="input_folder",
        help="Input folder containing labelled folders of UI sketches",
    )
    PARSER.add_argument(
        "-o",
        "--output",
        required=True,
        dest="output_folder",
        help="Output folder of cropped images",
    )
    ARGS = PARSER.parse_args()
    INPUT_FOLDER = ARGS.input_folder
    INPUT_FOLDER = INPUT_FOLDER.strip(os.sep)
    print(f"Input folder: {INPUT_FOLDER}")
    OUTPUT_FOLDER = ARGS.output_folder
    OUTPUT_FOLDER = OUTPUT_FOLDER.strip(os.sep)
    print(f"Output folder: {OUTPUT_FOLDER}")
    print("Creating folder structure similar to input folder in output folder.....")
    for folder in os.listdir(INPUT_FOLDER):
        if os.path.isdir(os.path.join(INPUT_FOLDER, folder)):
            os.makedirs(os.path.join(OUTPUT_FOLDER, folder), exist_ok=True)
    print("File structure cloned in output folder.")
    FILES = glob.glob(f"{INPUT_FOLDER}/**/*.jpg")
    print(f"Cropping {len(FILES)} images....")
    for image_file in FILES:
        output_file = image_file.replace(INPUT_FOLDER, OUTPUT_FOLDER)
        crop_element(image_file, output_file)
...
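
Here `crop_element` is a standalone OpenCV routine rather than a toolium method: it thresholds the sketch, closes the detected edges into one blob, and crops to the largest contour plus a margin of 1% of the image width. The `__main__` block wires it to a folder walk, but it can also be called directly; a short sketch with hypothetical paths:

# Equivalent to one iteration of the script's main loop, e.g. after
# running: python process_dataset.py -i sketches -o processed
crop_element("sketches/button/001.jpg", "processed/button/001.jpg")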


Automation Testing Tutorials

Learn to execute automation testing from scratch with the LambdaTest Learning Hub: right from setting up the prerequisites to run your first automation test, to following best practices and diving deeper into advanced test scenarios. The LambdaTest Learning Hub compiles step-by-step guides to help you become proficient with different test automation frameworks, e.g. Selenium, Cypress and TestNG.

YouTube

You can also refer to the video tutorials on the LambdaTest YouTube channel for step-by-step demonstrations from industry experts.

Run toolium automation tests on LambdaTest cloud grid

Perform automation testing on 3000+ real desktop and mobile devices online.
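
Toolium reads its remote-execution settings from conf/properties.cfg, so pointing the [Server] section at a cloud grid is usually all that is needed to run the same tests remotely. A minimal sketch, assuming a Selenium-compatible hub at hub.lambdatest.com; the credentials are placeholders:

[Driver]
type: chrome

[Server]
enabled: true
host: hub.lambdatest.com
port: 80
username: YOUR_LT_USERNAME
password: YOUR_LT_ACCESS_KEY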

