How to use the ensure_readable method in localstack

Best Python code snippet using localstack_python

__init__.py

Source: __init__.py (GitHub)


...
        fails an exception is raised, but you can set :attr:`force` to
        :data:`True` to continue with backup rotation instead (the default is
        obviously :data:`False`).

        .. seealso:: :func:`Location.ensure_exists()`,
                     :func:`Location.ensure_readable()` and
                     :func:`Location.ensure_writable()`

        .. _#18: https://github.com/xolox/python-rotate-backups/issues/18
        """
        return False

    @cached_property(writable=True)
    def include_list(self):
        """
        Filename patterns to select specific backups (a list of strings).

        This is a list of strings with :mod:`fnmatch` patterns. When it's not
        empty :func:`collect_backups()` will only collect backups whose name
        matches a pattern in the list.

        :see also: :attr:`exclude_list`
        """
        return []

    @mutable_property
    def io_scheduling_class(self):
        """
        The I/O scheduling class for backup rotation (a string or :data:`None`).

        When this property is set (and :attr:`~Location.have_ionice` is
        :data:`True`) then ionice_ will be used to set the I/O scheduling class
        for backup rotation. This can be useful to reduce the impact of backup
        rotation on the rest of the system.

        The value of this property is expected to be one of the strings 'idle',
        'best-effort' or 'realtime'.

        .. _ionice: https://linux.die.net/man/1/ionice
        """

    @mutable_property
    def prefer_recent(self):
        """
        Whether to prefer older or newer backups in each time slot (a boolean).

        Defaults to :data:`False` which means the oldest backup in each time
        slot (an hour, a day, etc.) is preserved while newer backups in the
        time slot are removed. You can set this to :data:`True` if you would
        like to preserve the newest backup in each time slot instead.
        """
        return False

    @mutable_property
    def removal_command(self):
        """
        The command used to remove backups (a list of strings).

        By default the command ``rm -fR`` is used. This choice was made because
        it works regardless of whether the user's "backups to be rotated" are
        files or directories or a mixture of both.

        .. versionadded: 5.3
           This option was added as a generalization of the idea suggested in
           `pull request 11`_, which made it clear to me that being able to
           customize the removal command has its uses.

        .. _pull request 11: https://github.com/xolox/python-rotate-backups/pull/11
        """
        return DEFAULT_REMOVAL_COMMAND

    @required_property
    def rotation_scheme(self):
        """
        The rotation scheme to apply to backups (a dictionary).

        Each key in this dictionary defines a rotation frequency (one of the
        strings 'minutely', 'hourly', 'daily', 'weekly', 'monthly' and
        'yearly') and each value defines a retention count:

        - An integer value represents the number of backups to preserve in the
          given rotation frequency, starting from the most recent backup and
          counting back in time.

        - The string 'always' means all backups in the given rotation frequency
          are preserved (this is intended to be used with the biggest frequency
          in the rotation scheme, e.g. yearly).

        No backups are preserved for rotation frequencies that are not present
        in the dictionary.
        """

    @mutable_property
    def strict(self):
        """
        Whether to enforce the time window for each rotation frequency (a boolean, defaults to :data:`True`).

        The easiest way to explain the difference between strict and relaxed
        rotation is using an example:

        - If :attr:`strict` is :data:`True` and the number of hourly backups to
          preserve is three, only backups created in the relevant time window
          (the hour of the most recent backup and the two hours leading up to
          that) will match the hourly frequency.

        - If :attr:`strict` is :data:`False` then the three most recent backups
          will all match the hourly frequency (and thus be preserved),
          regardless of the calculated time window.

        If the explanation above is not clear enough, here's a simple way to
        decide whether you want to customize this behavior:

        - If your backups are created at regular intervals and you never miss
          an interval then the default (:data:`True`) is most likely fine.

        - If your backups are created at irregular intervals then you may want
          to set :attr:`strict` to :data:`False` to convince
          :class:`RotateBackups` to preserve more backups.
        """
        return True

    @mutable_property
    def timestamp_pattern(self):
        """
        The pattern used to extract timestamps from filenames (defaults to :data:`TIMESTAMP_PATTERN`).

        The value of this property is a compiled regular expression object.
        Callers can provide their own compiled regular expression which
        makes it possible to customize the compilation flags (see the
        :func:`re.compile()` documentation for details).

        The regular expression pattern is expected to be a Python compatible
        regular expression that defines the named capture groups 'year',
        'month' and 'day' and optionally 'hour', 'minute' and 'second'.

        String values are automatically coerced to compiled regular expressions
        by calling :func:`~humanfriendly.coerce_pattern()`, in this case only
        the :data:`re.VERBOSE` flag is used.

        If the caller provides a custom pattern it will be validated
        to confirm that the pattern contains named capture groups
        corresponding to each of the required date components
        defined by :data:`SUPPORTED_DATE_COMPONENTS`.
        """
        return TIMESTAMP_PATTERN

    @timestamp_pattern.setter
    def timestamp_pattern(self, value):
        """Coerce the value of :attr:`timestamp_pattern` to a compiled regular expression."""
        pattern = coerce_pattern(value, re.VERBOSE)
        for component, required in SUPPORTED_DATE_COMPONENTS:
            if component not in pattern.groupindex and required:
                raise ValueError("Pattern is missing required capture group! (%s)" % component)
        set_property(self, 'timestamp_pattern', pattern)

    def rotate_concurrent(self, *locations, **kw):
        """
        Rotate the backups in the given locations concurrently.

        :param locations: One or more values accepted by :func:`coerce_location()`.
        :param kw: Any keyword arguments are passed on to :func:`rotate_backups()`.

        This function uses :func:`rotate_backups()` to prepare rotation
        commands for the given locations and then it removes backups in
        parallel, one backup per mount point at a time.

        The idea behind this approach is that parallel rotation is most useful
        when the files to be removed are on different disks and so multiple
        devices can be utilized at the same time.

        Because mount points are per system :func:`rotate_concurrent()` will
        also parallelize over backups located on multiple remote systems.
        """
        timer = Timer()
        pool = CommandPool(concurrency=10)
        logger.info("Scanning %s ..", pluralize(len(locations), "backup location"))
        for location in locations:
            for cmd in self.rotate_backups(location, prepare=True, **kw):
                pool.add(cmd)
        if pool.num_commands > 0:
            backups = pluralize(pool.num_commands, "backup")
            logger.info("Preparing to rotate %s (in parallel) ..", backups)
            pool.run()
            logger.info("Successfully rotated %s in %s.", backups, timer)

    def rotate_backups(self, location, load_config=True, prepare=False):
        """
        Rotate the backups in a directory according to a flexible rotation scheme.

        :param location: Any value accepted by :func:`coerce_location()`.
        :param load_config: If :data:`True` (so by default) the rotation scheme
                            and other options can be customized by the user in
                            a configuration file. In this case the caller's
                            arguments are only used when the configuration file
                            doesn't define a configuration for the location.
        :param prepare: If this is :data:`True` (not the default) then
                        :func:`rotate_backups()` will prepare the required
                        rotation commands without running them.
        :returns: A list with the rotation commands
                  (:class:`~executor.ExternalCommand` objects).
        :raises: :exc:`~exceptions.ValueError` when the given location doesn't
                 exist, isn't readable or isn't writable. The third check is
                 only performed when dry run isn't enabled.

        This function binds the main methods of the :class:`RotateBackups`
        class together to implement backup rotation with an easy to use Python
        API. If you're using `rotate-backups` as a Python API and the default
        behavior is not satisfactory, consider writing your own
        :func:`rotate_backups()` function based on the underlying
        :func:`collect_backups()`, :func:`group_backups()`,
        :func:`apply_rotation_scheme()` and
        :func:`find_preservation_criteria()` methods.
        """
        rotation_commands = []
        location = coerce_location(location)
        # Load configuration overrides by user?
        if load_config:
            location = self.load_config_file(location)
        # Collect the backups in the given directory.
        sorted_backups = self.collect_backups(location)
        if not sorted_backups:
            logger.info("No backups found in %s.", location)
            return
        # Make sure the directory is writable, but only when the default
        # removal command is being used (because custom removal commands
        # imply custom semantics that we shouldn't get in the way of, see
        # https://github.com/xolox/python-rotate-backups/issues/18 for
        # more details about one such use case).
        if not self.dry_run and (self.removal_command == DEFAULT_REMOVAL_COMMAND):
            location.ensure_writable(self.force)
        most_recent_backup = sorted_backups[-1]
        # Group the backups by the rotation frequencies.
        backups_by_frequency = self.group_backups(sorted_backups)
        # Apply the user defined rotation scheme.
        self.apply_rotation_scheme(backups_by_frequency, most_recent_backup.timestamp)
        # Find which backups to preserve and why.
        backups_to_preserve = self.find_preservation_criteria(backups_by_frequency)
        # Apply the calculated rotation scheme.
        for backup in sorted_backups:
            friendly_name = backup.pathname
            if not location.is_remote:
                # Use human friendly pathname formatting for local backups.
                friendly_name = format_path(backup.pathname)
            if backup in backups_to_preserve:
                matching_periods = backups_to_preserve[backup]
                logger.info("Preserving %s (matches %s retention %s) ..",
                            friendly_name, concatenate(map(repr, matching_periods)),
                            "period" if len(matching_periods) == 1 else "periods")
            else:
                logger.info("Deleting %s ..", friendly_name)
                if not self.dry_run:
                    # Copy the list with the (possibly user defined) removal command.
                    removal_command = list(self.removal_command)
                    # Add the pathname of the backup as the final argument.
                    removal_command.append(backup.pathname)
                    # Construct the command object.
                    command = location.context.prepare(
                        command=removal_command,
                        group_by=(location.ssh_alias, location.mount_point),
                        ionice=self.io_scheduling_class,
                    )
                    rotation_commands.append(command)
                    if not prepare:
                        timer = Timer()
                        command.wait()
                        logger.verbose("Deleted %s in %s.", friendly_name, timer)
        if len(backups_to_preserve) == len(sorted_backups):
            logger.info("Nothing to do! (all backups preserved)")
        return rotation_commands

    def load_config_file(self, location):
        """
        Load a rotation scheme and other options from a configuration file.

        :param location: Any value accepted by :func:`coerce_location()`.
        :returns: The configured or given :class:`Location` object.
        """
        location = coerce_location(location)
        for configured_location, rotation_scheme, options in load_config_file(self.config_file, expand=False):
            if configured_location.match(location):
                logger.verbose("Loading configuration for %s ..", location)
                if rotation_scheme:
                    self.rotation_scheme = rotation_scheme
                for name, value in options.items():
                    if value:
                        setattr(self, name, value)
                # Create a new Location object based on the directory of the
                # given location and the execution context of the configured
                # location, because:
                #
                # 1. The directory of the configured location may be a filename
                #    pattern whereas we are interested in the expanded name.
                #
                # 2. The execution context of the given location may lack some
                #    details of the configured location.
                return Location(
                    context=configured_location.context,
                    directory=location.directory,
                )
        logger.verbose("No configuration found for %s.", location)
        return location

    def collect_backups(self, location):
        """
        Collect the backups at the given location.

        :param location: Any value accepted by :func:`coerce_location()`.
        :returns: A sorted :class:`list` of :class:`Backup` objects (the
                  backups are sorted by their date).
        :raises: :exc:`~exceptions.ValueError` when the given directory doesn't
                 exist or isn't readable.
        """
        backups = []
        location = coerce_location(location)
        logger.info("Scanning %s for backups ..", location)
        location.ensure_readable(self.force)
        for entry in natsort(location.context.list_entries(location.directory)):
            match = self.timestamp_pattern.search(entry)
            if match:
                if self.exclude_list and any(fnmatch.fnmatch(entry, p) for p in self.exclude_list):
                    logger.verbose("Excluded %s (it matched the exclude list).", entry)
                elif self.include_list and not any(fnmatch.fnmatch(entry, p) for p in self.include_list):
                    logger.verbose("Excluded %s (it didn't match the include list).", entry)
                else:
                    try:
                        backups.append(Backup(
                            pathname=os.path.join(location.directory, entry),
                            timestamp=self.match_to_datetime(match),
                        ))
                    except ValueError as e:
                        logger.notice("Ignoring %s due to invalid date (%s).", entry, e)
            else:
                logger.debug("Failed to match time stamp in filename: %s", entry)
        if backups:
            logger.info("Found %i timestamped backups in %s.", len(backups), location)
        return sorted(backups)

    def match_to_datetime(self, match):
        """
        Convert a regular expression match to a :class:`~datetime.datetime` value.

        :param match: A regular expression match object.
        :returns: A :class:`~datetime.datetime` value.
        :raises: :exc:`exceptions.ValueError` when a required date component is
                 not captured by the pattern, the captured value is an empty
                 string or the captured value cannot be interpreted as a
                 base-10 integer.

        .. seealso:: :data:`SUPPORTED_DATE_COMPONENTS`
        """
        kw = {}
        captures = match.groupdict()
        for component, required in SUPPORTED_DATE_COMPONENTS:
            value = captures.get(component)
            if value:
                kw[component] = int(value, 10)
            elif required:
                raise ValueError("Missing required date component! (%s)" % component)
            else:
                kw[component] = 0
        return datetime.datetime(**kw)

    def group_backups(self, backups):
        """
        Group backups collected by :func:`collect_backups()` by rotation frequencies.

        :param backups: A :class:`set` of :class:`Backup` objects.
        :returns: A :class:`dict` whose keys are the names of rotation
                  frequencies ('hourly', 'daily', etc.) and whose values are
                  dictionaries. Each nested dictionary contains lists of
                  :class:`Backup` objects that are grouped together because
                  they belong into the same time unit for the corresponding
                  rotation frequency.
        """
        backups_by_frequency = dict((frequency, collections.defaultdict(list)) for frequency in SUPPORTED_FREQUENCIES)
        for b in backups:
            backups_by_frequency['minutely'][(b.year, b.month, b.day, b.hour, b.minute)].append(b)
            backups_by_frequency['hourly'][(b.year, b.month, b.day, b.hour)].append(b)
            backups_by_frequency['daily'][(b.year, b.month, b.day)].append(b)
            backups_by_frequency['weekly'][(b.year, b.week)].append(b)
            backups_by_frequency['monthly'][(b.year, b.month)].append(b)
            backups_by_frequency['yearly'][b.year].append(b)
        return backups_by_frequency

    def apply_rotation_scheme(self, backups_by_frequency, most_recent_backup):
        """
        Apply the user defined rotation scheme to the result of :func:`group_backups()`.

        :param backups_by_frequency: A :class:`dict` in the format generated by
                                     :func:`group_backups()`.
        :param most_recent_backup: The :class:`~datetime.datetime` of the most
                                   recent backup.
        :raises: :exc:`~exceptions.ValueError` when the rotation scheme
                 dictionary is empty (this would cause all backups to be
                 deleted).

        .. note:: This method mutates the given data structure by removing all
                  backups that should be removed to apply the user defined
                  rotation scheme.
        """
        if not self.rotation_scheme:
            raise ValueError("Refusing to use empty rotation scheme! (all backups would be deleted)")
        for frequency, backups in backups_by_frequency.items():
            # Ignore frequencies not specified by the user.
            if frequency not in self.rotation_scheme:
                backups.clear()
            else:
                # Reduce the number of backups in each time slot of this
                # rotation frequency to a single backup (the oldest one or the
                # newest one).
                for period, backups_in_period in backups.items():
                    index = -1 if self.prefer_recent else 0
                    selected_backup = sorted(backups_in_period)[index]
                    backups[period] = [selected_backup]
                # Check if we need to rotate away backups in old periods.
                retention_period = self.rotation_scheme[frequency]
                if retention_period != 'always':
                    # Remove backups created before the minimum date of this
                    # rotation frequency? (relative to the most recent backup)
                    if self.strict:
                        minimum_date = most_recent_backup - SUPPORTED_FREQUENCIES[frequency] * retention_period
                        for period, backups_in_period in list(backups.items()):
                            for backup in backups_in_period:
                                if backup.timestamp < minimum_date:
                                    backups_in_period.remove(backup)
                            if not backups_in_period:
                                backups.pop(period)
                    # If there are more periods remaining than the user
                    # requested to be preserved we delete the oldest one(s).
                    items_to_preserve = sorted(backups.items())[-retention_period:]
                    backups_by_frequency[frequency] = dict(items_to_preserve)

    def find_preservation_criteria(self, backups_by_frequency):
        """
        Collect the criteria used to decide which backups to preserve.

        :param backups_by_frequency: A :class:`dict` in the format generated by
                                     :func:`group_backups()` which has been
                                     processed by :func:`apply_rotation_scheme()`.
        :returns: A :class:`dict` with :class:`Backup` objects as keys and
                  :class:`list` objects containing strings (rotation
                  frequencies) as values.
        """
        backups_to_preserve = collections.defaultdict(list)
        for frequency, delta in ORDERED_FREQUENCIES:
            for period in backups_by_frequency[frequency].values():
                for backup in period:
                    backups_to_preserve[backup].append(frequency)
        return backups_to_preserve


class Location(PropertyManager):

    """:class:`Location` objects represent a root directory containing backups."""

    @required_property
    def context(self):
        """An execution context created using :mod:`executor.contexts`."""

    @required_property
    def directory(self):
        """The pathname of a directory containing backups (a string)."""

    @lazy_property
    def have_ionice(self):
        """:data:`True` when ionice_ is available, :data:`False` otherwise."""
        return self.context.have_ionice

    @lazy_property
    def have_wildcards(self):
        """:data:`True` if :attr:`directory` is a filename pattern, :data:`False` otherwise."""
        return '*' in self.directory

    @lazy_property
    def mount_point(self):
        """
        The pathname of the mount point of :attr:`directory` (a string or :data:`None`).

        If the ``stat --format=%m ...`` command that is used to determine the
        mount point fails, the value of this property defaults to :data:`None`.
        This enables graceful degradation on e.g. Mac OS X whose ``stat``
        implementation is rather bare bones compared to GNU/Linux.
        """
        try:
            return self.context.capture('stat', '--format=%m', self.directory, silent=True)
        except ExternalCommandFailed:
            return None

    @lazy_property
    def is_remote(self):
        """:data:`True` if the location is remote, :data:`False` otherwise."""
        return isinstance(self.context, RemoteContext)

    @lazy_property
    def ssh_alias(self):
        """The SSH alias of a remote location (a string or :data:`None`)."""
        return self.context.ssh_alias if self.is_remote else None

    @property
    def key_properties(self):
        """
        A list of strings with the names of the :attr:`~custom_property.key` properties.

        Overrides :attr:`~property_manager.PropertyManager.key_properties` to
        customize the ordering of :class:`Location` objects so that they are
        ordered first by their :attr:`ssh_alias` and second by their
        :attr:`directory`.
        """
        return ['ssh_alias', 'directory'] if self.is_remote else ['directory']

    def ensure_exists(self, override=False):
        """
        Sanity check that the location exists.

        :param override: :data:`True` to log a message, :data:`False` to raise
                         an exception (when the sanity check fails).
        :returns: :data:`True` if the sanity check succeeds,
                  :data:`False` if it fails (and `override` is :data:`True`).
        :raises: :exc:`~exceptions.ValueError` when the sanity
                 check fails and `override` is :data:`False`.

        .. seealso:: :func:`ensure_readable()`, :func:`ensure_writable()` and :func:`add_hints()`
        """
        if self.context.is_directory(self.directory):
            logger.verbose("Confirmed that location exists: %s", self)
            return True
        elif override:
            logger.notice("It seems %s doesn't exist but --force was given so continuing anyway ..", self)
            return False
        else:
            message = "It seems %s doesn't exist or isn't accessible due to filesystem permissions!"
            raise ValueError(self.add_hints(message % self))

    def ensure_readable(self, override=False):
        """
        Sanity check that the location exists and is readable.

        :param override: :data:`True` to log a message, :data:`False` to raise
                         an exception (when the sanity check fails).
        :returns: :data:`True` if the sanity check succeeds,
                  :data:`False` if it fails (and `override` is :data:`True`).
        :raises: :exc:`~exceptions.ValueError` when the sanity
                 check fails and `override` is :data:`False`.

        .. seealso:: :func:`ensure_exists()`, :func:`ensure_writable()` and :func:`add_hints()`
        """
        # Only sanity check that the location is readable when its
        # existence has been confirmed, to avoid multiple notices
        # about the same underlying problem.
        if self.ensure_exists(override):
            if self.context.is_readable(self.directory):
                logger.verbose("Confirmed that location is readable: %s", self)
                return True
            elif override:
                logger.notice("It seems %s isn't readable but --force was given so continuing anyway ..", self)
            else:
                message = "It seems %s isn't readable!"
                raise ValueError(self.add_hints(message % self))
        return False

    def ensure_writable(self, override=False):
        """
        Sanity check that the directory exists and is writable.

        :param override: :data:`True` to log a message, :data:`False` to raise
                         an exception (when the sanity check fails).
        :returns: :data:`True` if the sanity check succeeds,
                  :data:`False` if it fails (and `override` is :data:`True`).
        :raises: :exc:`~exceptions.ValueError` when the sanity
                 check fails and `override` is :data:`False`.

        .. seealso:: :func:`ensure_exists()`, :func:`ensure_readable()` and :func:`add_hints()`
        """
        # Only sanity check that the location is readable when its
        # existence has been confirmed, to avoid multiple notices
        # about the same underlying problem.
        if self.ensure_exists(override):
            if self.context.is_writable(self.directory):
                logger.verbose("Confirmed that location is writable: %s", self)
                return True
            elif override:
                logger.notice("It seems %s isn't writable but --force was given so continuing anyway ..", self)
            else:
                message = "It seems %s isn't writable!"
                raise ValueError(self.add_hints(message % self))
        return False

    def add_hints(self, message):
        """
        Provide hints about failing sanity checks.

        :param message: The message to the user (a string).
        :returns: The message including hints (a string).

        When superuser privileges aren't being used a hint about the
        ``--use-sudo`` option will be added (in case a sanity check failed
        because we don't have permission to one of the parent directories).

        In all cases a hint about the ``--force`` option is added (in case the
        sanity checks themselves are considered the problem, which is obviously
        up to the operator to decide).

        .. seealso:: :func:`ensure_exists()`, :func:`ensure_readable()` and :func:`ensure_writable()`
        """
        sentences = [message]
        if not self.context.have_superuser_privileges:
            sentences.append("If filesystem permissions are the problem consider using the --use-sudo option.")
        sentences.append("To continue despite this failing sanity check you can use --force.")
        return " ".join(sentences)

    def match(self, location):
        """
        Check if the given location "matches".

        :param location: The :class:`Location` object to try to match.
        :returns: :data:`True` if the two locations are on the same system and
                  the :attr:`directory` can be matched as a filename pattern or
                  a literal match on the normalized pathname.
        """
...
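
The snippet above shows where Location.ensure_readable() fits into the rotate-backups workflow: collect_backups() calls it with self.force before scanning a directory for timestamped backups. Below is a minimal sketch of calling it directly; it assumes coerce_location() is importable from the rotate_backups package (as used in the snippet) and uses a hypothetical local directory.

from rotate_backups import coerce_location

# Hypothetical backup directory; coerce_location() turns a pathname
# (or an SSH expression) into a Location object.
location = coerce_location("/mnt/backups")

# ensure_readable() raises ValueError when the directory is missing or
# unreadable; with override=True it logs a notice and returns False
# instead (mirroring the --force command line option).
if location.ensure_readable(override=True):
    print("Backups in %s can be read." % location.directory)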


copy_bot.py

Source: copy_bot.py (GitHub)


...
        return False
    else:
        logger.debug("The file path " + str(file) + " exists.")
        return True

# ensure_readable(): Helper function that ensures that a file or directory is
#   readable (has the +r bit). Returns a True or False.
def ensure_readable(file):
    logger.debug("Entered function ensure_readable().")
    if not os.access(file, os.R_OK):
        logger.debug("The file path " + str(file) + " cannot be read.")
        return False
    else:
        logger.debug("The file path " + str(file) + " is readable.")
        return True

# ensure_writable(): Helper function that ensures that a file or directory is
#   writable (has the +w bit). Returns a True or False.
def ensure_writable(file):
    logger.debug("Entered function ensure_writable().")
    if not os.access(file, os.W_OK):
        logger.debug("The file path " + str(file) + " cannot be written to.")
        return False
    else:
        logger.debug("The file path " + str(file) + " is writable.")
        return True

# copy_files(): Function that takes as its argument a hash table containing two
#   filespecs, one a set of one or more files to copy (or a directory to copy
#   the contents of), the other a directory to copy them into. Returns
#   a message to the user.
def copy_files(filespecs):
    logger.debug("Entered function copy_files().")
    # Message for the user.
    message = ""
    # List of files in the source directory to copy.
    source_files = []
    # List of files that could not be copied.
    uncopied_files = []
    # Normalize the file paths so they are internally consistent.
    source_path = normalize_file_path(filespecs['from'])
    destination_path = normalize_file_path(filespecs['to'])
    # Ensure the source filepath exists.
    if not ensure_exists(source_path):
        message = "ERROR: The source path " + str(source_path) + " does not exist. I can't do anything."
        return message
    # Ensure the source can be read. Bounce if it isn't.
    if not ensure_readable(source_path):
        message = "ERROR: The source path " + str(source_path) + " is not readable."
        return message
    # Ensure the destination directory exists.
    if not ensure_exists(destination_path):
        message = "ERROR: The destination path " + str(destination_path) + " does not exist. I can't do anything."
        return message
    # Ensure that the destination is writable. Note that it doesn't have to be
    # readable. Bounce if it isn't.
    if not ensure_writable(destination_path):
        message = "ERROR: The destination path " + str(destination_path) + " cannot be written to."
        return message
    # Build a list of one or more files to copy in the source directory.
    if os.path.isfile(source_path):
        source_files.append(source_path)
    else:
        source_files = os.listdir(source_path)
    # Roll through the list of files, test them, and copy them. If they can't
    # be copied push them onto the list of files that errored out. We don't
    # test if the files exist because we pulled their filenames out of the
    # directory listing.
    for file in source_files:
        source_file = os.path.join(source_path, file)
        if not ensure_readable(source_file):
            uncopied_files.append(file)
            continue
        try:
            logger.debug("Attempting to copy " + str(file) + " to " + str(destination_path) + ".")
            shutil.copy2(source_file, destination_path)
        except:
            uncopied_files.append(file)
    # Build the message to return to the user.
    message = "Contents of directory " + source_path + " are as copied as I can make them."
    if uncopied_files:
        message = message + "\nI was unable to copy the following files:\n"
        message = message + str(uncopied_files)
        logger.debug("Files that could not be copied: " + str(uncopied_files))
    # Return the message.
...
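
This bot's ensure_readable() and ensure_writable() helpers are thin wrappers around os.access() from the standard library. A stripped-down, self-contained version of the same checks is sketched below, with hypothetical paths and print() in place of the bot's logger:

import os

def ensure_readable(path):
    # True when the current process can read the file or directory.
    return os.access(path, os.R_OK)

def ensure_writable(path):
    # True when the current process can write to the file or directory.
    return os.access(path, os.W_OK)

# Hypothetical source file and destination directory.
if ensure_readable("/etc/hosts") and ensure_writable("/tmp"):
    print("Safe to copy /etc/hosts into /tmp.")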


common.py

Source: common.py (GitHub)


from localstack import config

# TODO: remove imports from here (need to update any client code that imports these from utils.common)
from localstack.utils.archives import get_unzipped_size, is_zip_file, untar, unzip  # noqa

# TODO: remove imports from here (need to update any client code that imports these from utils.common)
from localstack.utils.collections import (  # noqa
    DelSafeDict, HashableList, PaginatedList, ensure_list, is_list_or_tuple,
    is_none_or_empty, is_sub_dict, items_equivalent, last_index_of, merge_dicts,
    merge_recursive, remove_attributes, remove_none_values_from_dict,
    rename_attributes, select_attributes, to_unique_items_list,
)

# TODO: remove imports from here (need to update any client code that imports these from utils.common)
from localstack.utils.crypto import (  # noqa
    PEM_CERT_END, PEM_CERT_START, PEM_KEY_END_REGEX, PEM_KEY_START_REGEX, generate_ssl_cert,
)

# TODO: remove imports from here (need to update any client code that imports these from utils.common)
from localstack.utils.files import (  # noqa
    TMP_FILES, chmod_r, chown_r, cleanup_tmp_files, cp_r, disk_usage, ensure_readable,
    file_exists_not_empty, get_or_create_file, is_empty_dir, load_file, mkdir,
    new_tmp_dir, new_tmp_file, replace_in_file, rm_rf, save_file,
)

# TODO: remove imports from here (need to update any client code that imports these from utils.common)
from localstack.utils.functions import (  # noqa
    call_safe, empty_context_manager, prevent_stack_overflow, run_safe,
)

# TODO: remove imports from here (need to update any client code that imports these from utils.common)
from localstack.utils.http import (  # noqa
    NetrcBypassAuth, _RequestsSafe, download, get_proxies, make_http_request,
    parse_request_data, replace_response_content, safe_requests,
)

# TODO: remove imports from here (need to update any client code that imports these from utils.common)
from localstack.utils.json import (  # noqa
    CustomEncoder, FileMappedDocument, JsonObject, assign_to_path, canonical_json,
    clone, clone_safe, extract_from_jsonpointer_path, extract_jsonpath, fix_json_keys,
    json_safe, parse_json_or_yaml, try_json,
)

# TODO: remove imports from here (need to update any client code that imports these from utils.common)
from localstack.utils.net import (  # noqa
    PortNotAvailableException, PortRange, get_free_tcp_port, is_ip_address,
    is_ipv4_address, is_port_open, port_can_be_bound, resolve_hostname,
    wait_for_port_closed, wait_for_port_open, wait_for_port_status,
)

# TODO: remove imports from here (need to update any client code that imports these from utils.common)
from localstack.utils.numbers import format_bytes, format_number, is_number  # noqa

# TODO: remove imports from here (need to update any client code that imports these from utils.common)
from localstack.utils.objects import (  # noqa
    ArbitraryAccessObj, Mock, ObjectIdHashComparator, SubtypesInstanceManager,
    fully_qualified_class_name, get_all_subclasses, keys_to_lower, not_none_or,
    recurse_object,
)

# TODO: remove imports from here (need to update any client code that imports these from utils.common)
from localstack.utils.platform import (  # noqa
    get_arch, get_os, in_docker, is_debian, is_linux, is_mac_os, is_windows,
)

# TODO: remove imports from here (need to update any client code that imports these from utils.common)
from localstack.utils.run import (  # noqa
    CaptureOutput, ShellCommandThread, get_os_user, is_command_available, is_root,
    kill_process_tree, run, run_for_max_seconds,
)

# TODO: remove imports from here (need to update any client code that imports these from utils.common)
from localstack.utils.strings import (  # noqa
    base64_to_hex, camel_to_snake_case, canonicalize_bool_to_str, convert_to_printable_chars,
    first_char_to_lower, first_char_to_upper, is_base64, is_string, is_string_or_bytes,
    long_uid, md5, short_uid, snake_to_camel_case, str_insert, str_remove,
    str_startswith_ignore_case, str_to_bool, to_bytes, to_str, truncate,
)

# TODO: remove imports from here (need to update any client code that imports these from utils.common)
from localstack.utils.sync import (  # noqa
    poll_condition, retry, sleep_forever, synchronized, wait_until,
)

# TODO: remove imports from here (need to update any client code that imports these from utils.common)
from localstack.utils.tail import FileListener  # noqa

# TODO: remove imports from here (need to update any client code that imports these from utils.common)
from localstack.utils.threads import (  # noqa
    TMP_PROCESSES, TMP_THREADS, FuncThread, cleanup_threads_and_processes,
    parallelize, start_thread, start_worker_thread,
)

# TODO: remove imports from here (need to update any client code that imports these from utils.common)
from localstack.utils.time import (  # noqa
    TIMESTAMP_FORMAT, TIMESTAMP_FORMAT_MICROS, TIMESTAMP_FORMAT_TZ, epoch_timestamp,
    isoformat_milliseconds, mktime, now, now_utc, parse_timestamp, timestamp,
    timestamp_millis,
)

# TODO: remove imports from here (need to update any client code that imports these from utils.common)
from localstack.utils.urls import path_from_url  # noqa

# TODO: remove imports from here (need to update any client code that imports these from utils.common)
from localstack.utils.xml import obj_to_xml, strip_xmlns  # noqa


# TODO: move somewhere sensible (probably localstack.runtime)
class ExternalServicePortsManager(PortRange):
    """Manages the ports used for starting external services like ElasticSearch, OpenSearch,..."""

    def __init__(self):
        super().__init__(config.EXTERNAL_SERVICE_PORTS_START, config.EXTERNAL_SERVICE_PORTS_END)


external_service_ports = ExternalServicePortsManager()

# TODO: replace references with config.get_protocol/config.edge_ports_info
get_service_protocol = config.get_protocol

# TODO: replace references to safe_run with localstack.utils.run.run
...
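
In localstack itself, ensure_readable lives in localstack.utils.files and is only re-exported through localstack.utils.common for backwards compatibility (hence the repeated TODO comments above). The sketch below is a hedged usage example: it relies only on names visible in the import list above (new_tmp_file, save_file, ensure_readable, rm_rf) and assumes ensure_readable accepts a single file path and makes sure that path can be read before you open it.

from localstack.utils.files import ensure_readable, new_tmp_file, rm_rf, save_file

path = new_tmp_file()            # create a temporary file
save_file(path, "hello world")   # write some content to it
ensure_readable(path)            # assumption: fixes up permissions if the file isn't readable
print(open(path).read())
rm_rf(path)                      # clean up the temporary file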


Automation Testing Tutorials

Learn to execute automation testing from scratch with the LambdaTest Learning Hub, from setting up the prerequisites and running your first automation test to following best practices and diving deeper into advanced test scenarios. The LambdaTest Learning Hub compiles step-by-step guides to help you become proficient with different test automation frameworks such as Selenium, Cypress, and TestNG.

LambdaTest Learning Hubs:

YouTube

You can also refer to the video tutorials on the LambdaTest YouTube channel for step-by-step demonstrations from industry experts.

Run localstack automation tests on LambdaTest cloud grid

Perform automation testing on 3000+ real desktop and mobile devices online.

