How to use the bench_range method of the Minitest package

Best Minitest (Ruby) code snippets using Minitest::Benchmark.bench_range

Source: benchmark.rb (GitHub)


...
  # Defaults to exponential growth from 1 to 10k by powers of 10.
  # Override if you need different ranges for your benchmarks.
  #
  # See also: ::bench_exp and ::bench_linear.

  def self.bench_range
    bench_exp 1, 10_000
  end

  ##
  # Runs the given +work+, gathering the times of each run. Range
  # and times are then passed to a given +validation+ proc. Outputs
  # the benchmark name and times in tab-separated format, making it
  # easy to paste into a spreadsheet for graphing or further
  # analysis.
  #
  # Ranges are specified by ::bench_range.
  #
  # Eg:
  #
  #   def bench_algorithm
  #     validation = proc { |x, y| ... }
  #     assert_performance validation do |n|
  #       @obj.algorithm(n)
  #     end
  #   end

  def assert_performance validation, &work
    range = self.class.bench_range

    io.print "#{self.name}"

    times = []

    range.each do |x|
      GC.start
      t0 = Minitest.clock_time
      instance_exec(x, &work)
      t = Minitest.clock_time - t0

      io.print "\t%9.6f" % t
      times << t
    end
    io.puts

    validation[range, times]
  end

  ##
  # Runs the given +work+ and asserts that the times gathered fit to
  # match a constant rate (eg, linear slope == 0) within a given
  # +threshold+. Note: because we're testing for a slope of 0, R^2
  # is not a good determining factor for the fit, so the threshold
  # is applied against the slope itself. As such, you probably want
  # to tighten it from the default.
  #
  # See http://www.graphpad.com/curvefit/goodness_of_fit.htm for
  # more details.
  #
  # Fit is calculated by #fit_linear.
  #
  # Ranges are specified by ::bench_range.
  #
  # Eg:
  #
  #   def bench_algorithm
  #     assert_performance_constant 0.9999 do |n|
  #       @obj.algorithm(n)
  #     end
  #   end

  def assert_performance_constant threshold = 0.99, &work
    validation = proc do |range, times|
      a, b, rr = fit_linear range, times
      assert_in_delta 0, b, 1 - threshold
      [a, b, rr]
    end

    assert_performance validation, &work
  end

  ##
  # Runs the given +work+ and asserts that the times gathered fit to
  # match an exponential curve within a given error +threshold+.
  #
  # Fit is calculated by #fit_exponential.
  #
  # Ranges are specified by ::bench_range.
  #
  # Eg:
  #
  #   def bench_algorithm
  #     assert_performance_exponential 0.9999 do |n|
  #       @obj.algorithm(n)
  #     end
  #   end

  def assert_performance_exponential threshold = 0.99, &work
    assert_performance validation_for_fit(:exponential, threshold), &work
  end

  ##
  # Runs the given +work+ and asserts that the times gathered fit to
  # match a logarithmic curve within a given error +threshold+.
  #
  # Fit is calculated by #fit_logarithmic.
  #
  # Ranges are specified by ::bench_range.
  #
  # Eg:
  #
  #   def bench_algorithm
  #     assert_performance_logarithmic 0.9999 do |n|
  #       @obj.algorithm(n)
  #     end
  #   end

  def assert_performance_logarithmic threshold = 0.99, &work
    assert_performance validation_for_fit(:logarithmic, threshold), &work
  end

  ##
  # Runs the given +work+ and asserts that the times gathered fit to
  # match a straight line within a given error +threshold+.
  #
  # Fit is calculated by #fit_linear.
  #
  # Ranges are specified by ::bench_range.
  #
  # Eg:
  #
  #   def bench_algorithm
  #     assert_performance_linear 0.9999 do |n|
  #       @obj.algorithm(n)
  #     end
  #   end

  def assert_performance_linear threshold = 0.99, &work
    assert_performance validation_for_fit(:linear, threshold), &work
  end

  ##
  # Runs the given +work+ and asserts that the times gathered curve
  # fit to match a power curve within a given error +threshold+.
  #
  # Fit is calculated by #fit_power.
  #
  # Ranges are specified by ::bench_range.
  #
  # Eg:
  #
  #   def bench_algorithm
  #     assert_performance_power 0.9999 do |x|
  #       @obj.algorithm
  #     end
  #   end

  def assert_performance_power threshold = 0.99, &work
    assert_performance validation_for_fit(:power, threshold), &work
  end

  ##
  # Takes an array of x/y pairs and calculates the general R^2 value.
  #
  # See: http://en.wikipedia.org/wiki/Coefficient_of_determination

  def fit_error xys
    y_bar  = sigma(xys) { |_, y| y } / xys.size.to_f
    ss_tot = sigma(xys) { |_, y| (y - y_bar) ** 2 }
    ss_err = sigma(xys) { |x, y| (yield(x) - y) ** 2 }

    1 - (ss_err / ss_tot)
  end

  ##
  # To fit a functional form: y = ae^(bx).
  #
  # Takes x and y values and returns [a, b, r^2].
  #
  # See: http://mathworld.wolfram.com/LeastSquaresFittingExponential.html

  def fit_exponential xs, ys
    n     = xs.size
    xys   = xs.zip(ys)
    sxlny = sigma(xys) { |x, y| x * Math.log(y) }
    slny  = sigma(xys) { |_, y| Math.log(y) }
    sx2   = sigma(xys) { |x, _| x * x }
    sx    = sigma xs

    c = n * sx2 - sx ** 2
    a = (slny * sx2 - sx * sxlny) / c
    b = (n * sxlny - sx * slny) / c

    return Math.exp(a), b, fit_error(xys) { |x| Math.exp(a + b * x) }
  end

  ##
  # To fit a functional form: y = a + b*ln(x).
  #
  # Takes x and y values and returns [a, b, r^2].
  #
  # See: http://mathworld.wolfram.com/LeastSquaresFittingLogarithmic.html

  def fit_logarithmic xs, ys
    n     = xs.size
    xys   = xs.zip(ys)
    slnx2 = sigma(xys) { |x, _| Math.log(x) ** 2 }
    slnx  = sigma(xys) { |x, _| Math.log(x) }
    sylnx = sigma(xys) { |x, y| y * Math.log(x) }
    sy    = sigma(xys) { |_, y| y }

    c = n * slnx2 - slnx ** 2
    b = (n * sylnx - sy * slnx) / c
    a = (sy - b * slnx) / n

    return a, b, fit_error(xys) { |x| a + b * Math.log(x) }
  end

  ##
  # Fits the functional form: a + bx.
  #
  # Takes x and y values and returns [a, b, r^2].
  #
  # See: http://mathworld.wolfram.com/LeastSquaresFitting.html

  def fit_linear xs, ys
    n   = xs.size
    xys = xs.zip(ys)
    sx  = sigma xs
    sy  = sigma ys
    sx2 = sigma(xs)  { |x| x ** 2 }
    sxy = sigma(xys) { |x, y| x * y }

    c = n * sx2 - sx ** 2
    a = (sy * sx2 - sx * sxy) / c
    b = (n * sxy - sx * sy) / c

    return a, b, fit_error(xys) { |x| a + b * x }
  end

  ##
  # To fit a functional form: y = ax^b.
  #
  # Takes x and y values and returns [a, b, r^2].
  #
  # See: http://mathworld.wolfram.com/LeastSquaresFittingPowerLaw.html

  def fit_power xs, ys
    n       = xs.size
    xys     = xs.zip(ys)
    slnxlny = sigma(xys) { |x, y| Math.log(x) * Math.log(y) }
    slnx    = sigma(xs)  { |x| Math.log(x) }
    slny    = sigma(ys)  { |y| Math.log(y) }
    slnx2   = sigma(xs)  { |x| Math.log(x) ** 2 }

    b = (n * slnxlny - slnx * slny) / (n * slnx2 - slnx ** 2)
    a = (slny - b * slnx) / n

    return Math.exp(a), b, fit_error(xys) { |x| (Math.exp(a) * (x ** b)) }
  end

  ##
  # Enumerates over +enum+ mapping +block+ if given, returning the
  # sum of the result. Eg:
  #
  #   sigma([1, 2, 3])                # => 1 + 2 + 3 => 6
  #   sigma([1, 2, 3]) { |n| n ** 2 } # => 1 + 4 + 9 => 14

  def sigma enum, &block
    enum = enum.map(&block) if block
    enum.inject { |sum, n| sum + n }
  end

  ##
  # Returns a proc that calls the specified fit method and asserts
  # that the error is within a tolerable threshold.

  def validation_for_fit msg, threshold
    proc do |range, times|
      a, b, rr = send "fit_#{msg}", range, times
      assert_operator rr, :>=, threshold
      [a, b, rr]
    end
  end
end # class Benchmark
end # module Minitest

module Minitest
  ##
  # The spec version of Minitest::Benchmark.

  class BenchSpec < Benchmark
    extend Minitest::Spec::DSL

    ##
    # This is used to define a new benchmark method. You usually don't
    # use this directly and it is intended for those needing to write new
    # performance curve fits (eg: you need a specific polynomial fit).
    #
    # See ::bench_performance_linear for an example of how to use this.

    def self.bench name, &block
      define_method "bench_#{name.gsub(/\W+/, "_")}", &block
    end

    ##
    # Specifies the ranges used for benchmarking for that class.
    #
    #   bench_range do
    #     bench_exp(2, 16, 2)
    #   end
    #
    # See Minitest::Benchmark#bench_range for more details.

    def self.bench_range &block
      return super unless block

      meta = (class << self; self; end)
      meta.send :define_method, "bench_range", &block
    end

    ##
    # Create a benchmark that verifies that the performance is linear.
    #
    #   describe "my class Bench" do
    #     bench_performance_linear "fast_algorithm", 0.9999 do |n|
    #       @obj.fast_algorithm(n)
    #     end
    #   end

    def self.bench_performance_linear name, threshold = 0.99, &work
      bench name do
        assert_performance_linear threshold, &work
      end
    end
...
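The curve-fitting helpers in the source above are ordinary public instance methods, so they can be exercised directly without running any benchmark. A minimal sketch (minitest ships with Ruby as a default gem; the instance name "fit-demo" is arbitrary): feeding fit_linear exactly linear data, y = 1 + 2x, recovers intercept a = 1.0, slope b = 2.0, and an R^2 of 1.0 because the fit is perfect.

```ruby
require "minitest/benchmark"

# fit_linear returns [a, b, r^2] for the least-squares line a + bx.
# On exact data y = 1 + 2x the residuals are zero, so r^2 is 1.0.
bm = Minitest::Benchmark.new "fit-demo" # name is arbitrary here
a, b, rr = bm.fit_linear [1, 2, 3, 4], [3.0, 5.0, 7.0, 9.0]

puts format("a=%.2f b=%.2f r^2=%.2f", a, b, rr)
```

This is the same machinery assert_performance_linear uses internally, with your benchmark's range as xs and the measured times as ys.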


bench_range

Using AI Code Generation


(1..n).to_a
(1..n).to_a
(1..n).to_a



Minitest::Benchmark.bench_exp(2, 10_000)
n.times { "a" * n }
n.times { "a" * n }
Minitest::Benchmark.bench_linear(1, 10_000)
n.times { "a" * n }
n.times { "a" * n }
Minitest::Benchmark.bench_exp(2, 10_000)
n.times { "a" * n }
n.times { "a" * n }
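The snippet above leans on the two range helpers. They are plain class methods, so you can call them directly to inspect the ranges they produce (a quick check using the minitest that ships with Ruby; the arguments here are illustrative):

```ruby
require "minitest/benchmark"

# bench_exp grows by powers of the base (default 10); bench_linear
# steps by a fixed increment. These arrays are what assert_performance
# iterates over, one timed run per element.
p Minitest::Benchmark.bench_exp(1, 10_000)   # the stock bench_range default
p Minitest::Benchmark.bench_linear(2, 16, 2) # min, max, step
```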



def bench_range(bm, range)
  bench_range(bm, (1..10))

for-loop: 1    0.000000   0.000000   0.000000 (  0.000010)
for-loop: 2    0.000000   0.000000   0.000000 (  0.000006)
for-loop: 3    0.000000   0.000000   0.000000 (  0.000005)
for-loop: 4    0.000000   0.000000   0.000000 (  0.000004)
for-loop: 5    0.000000   0.000000   0.000000 (  0.000005)
for-loop: 6    0.000000   0.000000   0.000000 (  0.000004)
for-loop: 7    0.000000   0.000000   0.000000 (  0.000005)
for-loop: 8    0.000000   0.000000   0.000000 (  0.000004)
for-loop: 9    0.000000   0.000000   0.000000 (  0.000005)
for-loop: 10   0.000000   0.000000   0.000000 (  0.000004)
for-loop: 1    0.000000   0.000000   0.000000 (  0.000004)



Minitest::Benchmark.bench_exp(1, 100_000, 10_000)
n.times { "a" * n }

    1      2.481k (± 8.0%) i/s -  12.400k in 5.026932s
10000    148.827 (± 5.2%) i/s -  751.000 in 5.053477s
20000     83.672 (± 6.0%) i/s -  420.000 in 5.039677s
30000     55.752 (± 5.4%) i/s -  280.000 in 5.043335s
40000     41.516 (± 4.4%) i/s -  209.000 in 5.051614s
50000     33.082 (± 4.8%) i/s -  167
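Throughput figures like the ones above fall off roughly as a power of n, which is what fit_power detects. A small sketch on exact data y = 3x^2 (again using the bundled minitest; the instance name is arbitrary): the fit runs a linear regression in log-log space and recovers the coefficient and exponent.

```ruby
require "minitest/benchmark"

# fit_power fits y = a * x**b. On exact data y = 3 * x**2 it should
# recover a ~= 3, b ~= 2, and r^2 ~= 1 (up to floating-point error).
bm = Minitest::Benchmark.new "fit-demo"
a, b, rr = bm.fit_power [1, 2, 3, 4], [3.0, 12.0, 27.0, 48.0]

puts format("a=%.3f b=%.3f r^2=%.3f", a, b, rr)
```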



@hash = {}
assert_equal(1, @bm.array.size)
assert_equal(1, @bm.hash.size)
n.times { @bm.insert_array }
n.times { @bm.insert_hash }
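Fragments like the one above only make sense inside a Minitest::Benchmark subclass. Here is a complete sketch tying the pieces together; the class name, method name, and workload are illustrative (not from the snippet), and bench_range is overridden exactly as the source docs describe. Add require "minitest/autorun" to actually run it.

```ruby
require "minitest/benchmark"

# Hypothetical benchmark class: overrides the default bench_range,
# then asserts that the timed work scales linearly with n.
class BenchStringBuild < Minitest::Benchmark
  def self.bench_range
    bench_linear 10, 50, 10 # => [10, 20, 30, 40, 50]
  end

  def bench_string_build
    assert_performance_linear 0.99 do |n|
      n.times { "a" * 10 }  # toy workload; substitute real code
    end
  end
end

p BenchStringBuild.bench_range
```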




