How to use the set_doc method in autotest

Best Python code snippets using autotest_python
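
The snippets below come from the ChainerX code base, where _docs.set_doc(obj, docstring) attaches documentation to classes, methods, and functions that are defined in a C++ extension module and therefore cannot carry inline Python docstrings. As a minimal, hedged sketch of what such a helper typically does (the real chainerx._docs.set_doc may differ, for example by only assigning docs when documentation building is enabled), it boils down to assigning the string to the target's __doc__ attribute. The set_doc function and Greeter class below are illustrative only, not part of any real library:

# Minimal sketch of a set_doc-style helper; set_doc and Greeter are
# illustrative names, not part of chainerx or autotest.
def set_doc(obj, docstring):
    # Attach a docstring to an object after it has been defined, e.g. a
    # class or function exposed from a C/C++ extension module.
    try:
        obj.__doc__ = docstring
    except (AttributeError, TypeError):
        # Some built-in attributes reject __doc__ assignment; skip them so
        # documentation setup never breaks the import.
        pass


class Greeter:
    def hello(self):
        return 'hello'


set_doc(Greeter, 'Greeter()\nToy class documented after its definition.')
set_doc(Greeter.hello, 'hello()\nReturns the greeting string.')
print(Greeter.__doc__)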

array.py

Source: array.py (GitHub)


import chainerx
from chainerx import _docs


def set_docs():
    ndarray = chainerx.ndarray
    _docs.set_doc(
        ndarray,
        """ndarray(shape, dtype, device=None)
Multi-dimensional array, the central data structure of ChainerX.
This class, along with other APIs in the :mod:`chainerx` module, provides a
subset of NumPy APIs. This class works similarly to :class:`numpy.ndarray`,
except for some differences including the following noticeable points:
- :class:`chainerx.ndarray` has a :attr:`device` attribute. It indicates on
  which device the array is allocated.
- :class:`chainerx.ndarray` supports :ref:`Define-by-Run <define_by_run>`
  backpropagation. Once you call :meth:`require_grad`, the array starts
  recording the operations applied to it recursively. Gradient of the result
  with respect to the original array can be computed then with the
  :meth:`backward` method or the :func:`chainerx.backward` function.
Args:
    shape (tuple of ints): Shape of the new array.
    dtype: Data type.
    device (~chainerx.Device): Device on which the array is allocated.
        If omitted, :ref:`the default device <chainerx_device>` is chosen.
.. seealso:: :class:`numpy.ndarray`
""")
    _docs.set_doc(
        ndarray.data_ptr,
        """int: Address of the underlying memory allocation.
The meaning of the address is device-dependent.
""")
    _docs.set_doc(
        ndarray.data_size,
        'int: Total size of the underlying memory allocation.')
    _docs.set_doc(
        ndarray.device, '~chainerx.Device: Device on which the data exists.')
    _docs.set_doc(ndarray.dtype, 'Data type of the array.')
    # TODO(beam2d): Write about backprop id.
    _docs.set_doc(
        ndarray.grad,
        """~chainerx.ndarray: Gradient held by the array.
It is ``None`` if the gradient is not available.
Setter of this property overwrites the gradient.
""")
    _docs.set_doc(
        ndarray.is_contiguous,
        'bool: ``True`` iff the array is stored in the C-contiguous order.')
    _docs.set_doc(ndarray.itemsize, 'int: Size of each element in bytes.')
    _docs.set_doc(
        ndarray.nbytes,
        """int: Total size of all elements in bytes.
It does not count skips between elements.""")
    _docs.set_doc(ndarray.ndim, 'int: Number of dimensions.')
    _docs.set_doc(
        ndarray.offset,
        'int: Offset of the first element from the memory allocation in bytes.'
    )
    _docs.set_doc(
        ndarray.shape,
        """tuple of int: Lengths of axes.
.. note::
    Currently, this property does not support a setter.""")
    _docs.set_doc(ndarray.size, 'int: Number of elements in the array.')
    _docs.set_doc(ndarray.strides, 'tuple of int: Strides of axes in bytes.')
    _docs.set_doc(
        ndarray.T,
        """~chainerx.ndarray: Shape-reversed view of the array.
New array is created at every access to this property.
``x.T`` is just a shorthand of ``x.transpose()``.
""")
    _docs.set_doc(
        ndarray.__getitem__,
        """__getitem__(self, key)
Returns self[key].
.. note::
    Currently, only basic indexing is supported, not advanced indexing.
""")

    def unary_op(name, s):
        _docs.set_doc(getattr(ndarray, name), '{}()\n{}'.format(name, s))

    unary_op('__bool__', 'Casts a size-one array into a :class:`bool` value.')
    unary_op('__float__',
             'Casts a size-one array into a :class:`float` value.')
    unary_op('__int__', 'Casts a size-one array into an :class:`int` value.')
    unary_op('__len__', 'Returns the length of the first axis.')
    unary_op('__neg__', 'Computes ``-x`` elementwise.')

    def binary_op(name, s):
        _docs.set_doc(getattr(ndarray, name), '{}(other)\n{}'.format(name, s))

    binary_op('__eq__', 'Computes ``x == y`` elementwise.')
    binary_op('__ne__', 'Computes ``x != y`` elementwise.')
    binary_op('__lt__', 'Computes ``x < y`` elementwise.')
    binary_op('__le__', 'Computes ``x <= y`` elementwise.')
    binary_op('__ge__', 'Computes ``x >= y`` elementwise.')
    binary_op('__gt__', 'Computes ``x > y`` elementwise.')
    binary_op('__iadd__', 'Computes ``x += y`` elementwise.')
    binary_op('__isub__', 'Computes ``x -= y`` elementwise.')
    binary_op('__imul__', 'Computes ``x *= y`` elementwise.')
    binary_op('__itruediv__', 'Computes ``x /= y`` elementwise.')
    binary_op('__iand__', 'Computes ``x &= y`` elementwise.')
    binary_op('__ior__', 'Computes ``x |= y`` elementwise.')
    binary_op('__ixor__', 'Computes ``x ^= y`` elementwise.')
    binary_op('__add__', 'Computes ``x + y`` elementwise.')
    binary_op('__sub__', 'Computes ``x - y`` elementwise.')
    binary_op('__mul__', 'Computes ``x * y`` elementwise.')
    binary_op('__truediv__', 'Computes ``x / y`` elementwise.')
    binary_op('__and__', 'Computes ``x & y`` elementwise.')
    binary_op('__or__', 'Computes ``x | y`` elementwise.')
    binary_op('__xor__', 'Computes ``x ^ y`` elementwise.')
    binary_op('__radd__', 'Computes ``y + x`` elementwise.')
    binary_op('__rsub__', 'Computes ``y - x`` elementwise.')
    binary_op('__rmul__', 'Computes ``y * x`` elementwise.')
    binary_op('__rand__', 'Computes ``y & x`` elementwise.')
    binary_op('__ror__', 'Computes ``y | x`` elementwise.')
    binary_op('__rxor__', 'Computes ``y ^ x`` elementwise.')

    # TODO(beam2d): Write about as_grad_stopped(backprop_ids, copy) overload.
    _docs.set_doc(
        ndarray.as_grad_stopped,
        """as_grad_stopped(copy=False)
Creates a view or a copy of the array that stops gradient propagation.
This method behaves similarly to :meth:`view` and :meth:`copy`, except that
the gradient is not propagated through this operation (internally, this
method creates a copy or view of the array without connecting the computational
graph for backprop).
Args:
    copy (bool): If ``True``, it copies the array. Otherwise, it returns a view
        of the original array.
Returns:
    ~chainerx.ndarray:
        A view or a copy of the array without propagating the gradient on
        backprop.
""")
    _docs.set_doc(
        ndarray.argmax,
        """argmax(axis=None)
Returns the indices of the maximum elements along a given axis.
See :func:`chainerx.argmax` for the full documentation.
""")
    _docs.set_doc(
        ndarray.argmin,
        """argmin(axis=None)
Returns the indices of the minimum elements along a given axis.
See :func:`chainerx.argmin` for the full documentation.
""")
    _docs.set_doc(
        ndarray.astype,
        """astype(dtype, copy=True)
Casts each element to the specified data type.
Args:
    dtype: Data type of the new array.
    copy (bool): If ``True``, this method always copies the data. Otherwise,
        it creates a view of the array if possible.
Returns:
    ~chainerx.ndarray: An array with the specified dtype.
""")
    _docs.set_doc(
        ndarray.backward,
        """backward(backprop_id=None, enable_double_backprop=False)
Performs backpropagation starting from this array.
This method is equivalent to ``chainerx.backward([self], *args)``.
See :func:`chainerx.backward` for the full documentation.
""")
    # TODO(beam2d): Write about backprop id.
    _docs.set_doc(
        ndarray.cleargrad,
        """cleargrad()
Clears the gradient held by this array.
""")
    _docs.set_doc(
        ndarray.copy,
        """copy()
Creates an array and copies all the elements to it.
The copied array is allocated on the same device as ``self``.
.. seealso:: :func:`chainerx.copy`
""")
    _docs.set_doc(
        ndarray.dot,
        """dot(b)
Returns the dot product with a given array.
See :func:`chainerx.dot` for the full documentation.
""")
    _docs.set_doc(
        ndarray.fill,
        """fill(value)
Fills the array with a scalar value in place.
Args:
    value: Scalar value with which the array will be filled.
""")
    # TODO(beam2d): Write about backprop_id argument.
    _docs.set_doc(
        ndarray.get_grad,
        """get_grad()
Returns the gradient held by the array.
If the gradient is not available, it returns ``None``.
""")
    # TODO(beam2d): Write about backprop_id argument.
    _docs.set_doc(
        ndarray.is_backprop_required,
        """is_backprop_required()
Returns ``True`` if gradient propagates through this array on backprop.
See the note on :meth:`require_grad` for details.
""")
    # TODO(beam2d): Write about backprop_id argument.
    _docs.set_doc(
        ndarray.is_grad_required,
        """is_grad_required()
Returns ``True`` if the gradient will be set after backprop.
See the note on :meth:`require_grad` for details.
""")
    _docs.set_doc(
        ndarray.item,
        """item()
Copies an element of an array to a standard Python scalar and returns it.
Returns:
    z:
        A copy of the specified element of the array as a suitable Python
        scalar.
.. seealso:: :func:`numpy.item`
""")
    _docs.set_doc(
        ndarray.max,
        """max(axis=None, keepdims=False)
Returns the maximum along a given axis.
See :func:`chainerx.amax` for the full documentation.
""")
    _docs.set_doc(
        ndarray.min,
        """min(axis=None, keepdims=False)
Returns the minimum along a given axis.
See :func:`chainerx.amin` for the full documentation.
""")
    # TODO(beam2d): Write about backprop_id argument.
    _docs.set_doc(
        ndarray.require_grad,
        """require_grad()
Declares that a gradient for this array will be made available after backprop.
Once calling this method, any operations applied to this array are recorded for
later backprop. After backprop, the :attr:`grad` attribute holds the gradient
array.
.. note::
    ChainerX distinguishes *gradient requirements* and *backprop requirements*
    strictly. They are strongly related, but different concepts as follows.
    - *Gradient requirement* indicates that the gradient array should be made
      available after backprop. This attribute **is not propagated** through
      any operations. It implies the backprop requirement.
    - *Backprop requirement* indicates that the gradient should be propagated
      through the array during backprop. This attribute **is propagated**
      through differentiable operations.
    :meth:`require_grad` sets the gradient requirement flag. If you need to
    extract the gradient after backprop, you have to call :meth:`require_grad`
    on the array even if the array is an intermediate result of differentiable
    computations.
Returns:
    ~chainerx.ndarray: ``self``
""")
    _docs.set_doc(
        ndarray.reshape,
        """reshape(newshape)
Creates an array with a new shape and the same data.
See :func:`chainerx.reshape` for the full documentation.
""")
    _docs.set_doc(
        ndarray.set_grad,
        """set_grad(grad)
Sets a gradient to the array.
This method overwrites the gradient with a given array.
Args:
    grad (~chainerx.ndarray): New gradient array.
""")
    _docs.set_doc(
        ndarray.squeeze,
        """squeeze(axis=None)
Removes size-one axes from an array.
See :func:`chainerx.squeeze` for the full documentation.
""")
    _docs.set_doc(
        ndarray.swapaxes,
        """swapaxes(axis1, axis2)
Interchanges two axes of an array.
See :func:`chainerx.swapaxes` for the full documentation.
""")
    _docs.set_doc(
        ndarray.repeat,
        """repeat(repeats, axis=None)
Constructs an array by repeating a given array.
See :func:`chainerx.repeat` for the full documentation.
""")
    _docs.set_doc(
        ndarray.sum,
        """sum(axis=None, keepdims=False)
Returns the sum of an array along given axes.
See :func:`chainerx.sum` for the full documentation.
""")
    _docs.set_doc(
        ndarray.take,
        """take(indices, axis)
Takes elements from the array along an axis.
See :func:`chainerx.take` for the full documentation.
""")
    _docs.set_doc(
        ndarray.to_device,
        """to_device(device, index=None)
Transfers the array to the specified device.
Args:
    device (~chainerx.Device or str): Device to which the array is transferred,
        or a backend name. If it is a backend name, ``index`` should also be
        specified.
    index (int): Index of the device for the backend specified by ``device``.
Returns:
    ~chainerx.ndarray:
        An array on the target device.
        If the original array is already on the device, it is a view of that.
        Otherwise, it is a copy of the array on the target device.
""")
    _docs.set_doc(
        ndarray.transpose,
        """transpose(axes=None)
Creates a view of an array with permuted axes.
See :func:`chainerx.transpose` for the full documentation.
""")
    _docs.set_doc(
        ndarray.view,
        """view()
Returns a view of the array.
The returned array shares the underlying buffer, though it has a different
identity as a Python object....
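
As a quick sanity check of what all these calls accomplish, here is a hedged usage sketch: once set_docs() has run (ChainerX normally performs this while the package is being imported), the strings passed to _docs.set_doc show up as ordinary docstrings. The inspection below is plain Python; whether the docstrings are actually populated in a particular chainerx build is an assumption.

import chainerx

# After set_docs() has run, the string passed to _docs.set_doc for the
# ndarray class is exposed as its regular __doc__.
doc = chainerx.ndarray.__doc__
print(doc.splitlines()[0] if doc else 'docstrings not populated in this build')

# help() then shows the same text for individual methods, e.g.:
# help(chainerx.ndarray.argmax)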


device.py

Source: device.py (GitHub)


import chainerx
from chainerx import _docs


def _set_docs_device():
    Device = chainerx.Device
    _docs.set_doc(
        Device,
        """Represents a physical computing unit.
""")
    _docs.set_doc(
        Device.synchronize,
        """Synchronizes the device.
""")
    _docs.set_doc(
        Device.name,
        """Device name.
It is the backend name and the device index concatenated with a colon, e.g.
``native:0``.
Returns:
    str: Device name.
""")
    _docs.set_doc(
        Device.backend,
        """Backend to which this device belongs.
Returns:
    ~chainerx.Backend: Backend object.
""")
    _docs.set_doc(
        Device.context,
        """Context to which this device belongs.
Returns:
    ~chainerx.Context: Context object.
""")
    _docs.set_doc(
        Device.index,
        """Index of this device.
Returns:
    int: Index of this device.
""")


def set_docs():
    _set_docs_device()
    _docs.set_doc(
        chainerx.get_device,
        """get_device(*device)
Returns a device specified by the arguments.
If the argument is a single :class:`~chainerx.Device` instance, it's simply
returned.
Otherwise, there are three ways to specify a device:
.. testcode::
    # Specify a backend name and a device index separately.
    chainerx.get_device('native', 0)
    # Specify a backend name and a device index in a single string.
    chainerx.get_device('native:0')
    # Specify only a backend name. In this case device index 0 is chosen.
    chainerx.get_device('native')
Returns:
    ~chainerx.Device: Device object.
""")
    _docs.set_doc(
        chainerx.get_default_device,
        """get_default_device()
Returns the default device associated with the current thread.
Returns:
    ~chainerx.Device: The default device.
.. seealso::
    * :func:`chainerx.set_default_device`
    * :func:`chainerx.using_device`
""")
    _docs.set_doc(
        chainerx.set_default_device,
        """set_default_device(device)
Sets the given device as the default device of the current thread.
Args:
    device (~chainerx.Device or str): Device object or device name to set as
        the default device.
.. seealso::
    * :func:`chainerx.get_default_device`
    * :func:`chainerx.using_device`
""")
    _docs.set_doc(
        chainerx.using_device,
        """using_device(device)
Creates a context manager to temporarily set the default device.
Args:
    device (~chainerx.Device or str): Device object or device name to set as
        the default device during the context. See :data:`chainerx.Device.name`
        for the specification of device names.
.. seealso::
    * :func:`chainerx.get_default_device`
    * :func:`chainerx.set_default_device`...
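
The device documentation follows the same pattern: Device and the device helper functions live in the compiled core, so their docstrings are attached from Python after the fact. A short, hedged sketch of exercising the documented API follows; it assumes a chainerx installation where the 'native' backend is available.

import chainerx

# The docstring attached via _docs.set_doc is now the regular __doc__.
doc = chainerx.get_device.__doc__
print(doc.splitlines()[0] if doc else 'docstrings not populated in this build')

# Using the documented calls themselves (assumes the 'native' backend exists):
device = chainerx.get_device('native:0')
with chainerx.using_device(device):
    print(chainerx.get_default_device())  # native:0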


Automation Testing Tutorials

Learn to execute automation testing from scratch with the LambdaTest Learning Hub, right from setting up the prerequisites to run your first automation test through following best practices and diving deeper into advanced test scenarios. The LambdaTest Learning Hubs compile step-by-step guides to help you become proficient with different test automation frameworks such as Selenium, Cypress, and TestNG.

LambdaTest Learning Hubs:

YouTube

You can also refer to the video tutorials on the LambdaTest YouTube channel for step-by-step demonstrations from industry experts.

Run autotest automation tests on the LambdaTest cloud grid

Perform automation testing on 3000+ real desktop and mobile devices online.

Try LambdaTest Now!

Get 100 automation test minutes FREE!

Next-Gen App & Browser Testing Cloud
