Benchmarking Results

The following are benchmarking results for the training process, obtained by running equivalent programmes in both repositories. These programmes use ~10 thousand training images at 19 x 19 pixels each.

| Language of Implementation | Commit | Run Time in Seconds | Number of Allocations | Memory Usage |
| --- | --- | --- | --- | --- |
| Python | 8772a28 | 480.0354 | —ᵃ | —ᵃ |
| Julia | 6fd8ca9e | 19.9057 | 255600105 | 5.11 GiB |

ᵃ I have not yet figured out how to benchmark memory usage in Python.
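
The Julia columns (run time, number of allocations, and memory usage) are the quantities reported by a tool such as BenchmarkTools.jl, which is presumably what the benchmark script uses. Below is a minimal sketch of how such a measurement could be taken; the `train` function and the synthetic `images` data are placeholders, not the repository's actual code.

```julia
using BenchmarkTools

# Placeholder standing in for the repository's actual training entry point;
# swap in whatever function the benchmark script calls.
train(images) = sum(sum, images)

# Placeholder data standing in for the ~10,000 19 x 19 training images.
images = [rand(Float64, 19, 19) for _ in 1:10_000]

# @benchmark reports a run time, a number of allocations, and a memory
# estimate: the three quantities tabulated above.
@benchmark train($images)
```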

These results were run on this machine:

```
julia> versioninfo()
Julia Version 1.5.2
Commit 539f3ce943 (2020-09-23 23:17 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin18.7.0)
  CPU: Intel(R) Core(TM) i5-6360U CPU @ 2.00GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-9.0.1 (ORCJIT, skylake)
```

1.6 Update

A few months after the release of Julia 1.6, I revisited performance (1.6 already brings quite a few nice features). These are the updated benchmarking results (see benchmark/basic.jl):

| Language of Implementation | Commit | Run Time in Seconds | Number of Allocations | Memory Usage |
| --- | --- | --- | --- | --- |
| Julia | ??? | 8.165 | 249021919 | 5.01 GiB |
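
To quantify the change between the Julia 1.5.2 and 1.6 runs, BenchmarkTools.jl's `judge` can compare two saved trials. This is only a sketch under the assumption that both trials were saved to JSON with `BenchmarkTools.save`; the file names below are hypothetical and not part of the repository.

```julia
using BenchmarkTools

# Hypothetical file names; not part of the repository.
old = BenchmarkTools.load("trial-julia-1.5.json")[1]  # Trial from Julia 1.5.2
new = BenchmarkTools.load("trial-julia-1.6.json")[1]  # Trial from Julia 1.6

# judge classifies the difference in time and memory between the two
# estimates as an improvement, a regression, or invariant.
judge(minimum(new), minimum(old))
```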