Unified representation

We now have the pieces in place to demonstrate machinable's unified experiment representation that we advertised in the introduction.

A Monte Carlo experiment

Consider the following experiment that estimates how many Monte Carlo samples are needed to approximate the circle constant PI to a certain level of accuracy. The implementation brings together the essential concepts covered so far.

montecarlo.py

```py
import math
from dataclasses import dataclass
from random import random

from machinable import Experiment


class EstimatePi(Experiment):
    @dataclass
    class Config:
        acceptable_error: float = 0.01

    @property
    def result(self):
        return self.load_data("result.json", default={"samples": 0, "pi": 0})

    @property
    def pi(self):
        return self.result["pi"]

    @property
    def samples(self):
        return self.result["samples"]

    def on_execute(self):
        samples = 10
        while abs(math.pi - self.pi) > self.config.acceptable_error:
            # monte-carlo simulation
            count = 0
            for _ in range(samples):
                x, y = random(), random()
                count += int((x**2 + y**2) <= 1)
            pi = 4 * count / samples

            self.save_data(
                "result.json",
                {"samples": samples, "pi": pi},
            )

            # double the number of samples
            # in case we have to try again
            samples *= 2
```

The experiment defines a number of helper properties to keep track of the current result and then runs the Monte Carlo simulation until the estimate falls within the acceptable error.
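The sampling logic itself is independent of machinable and can be sketched as a plain function. The following is our illustration of the same double-until-accurate loop; the function name and signature are ours, not part of the experiment above:

```python
import math
from random import random, seed


def estimate_pi(acceptable_error: float = 0.01, start_samples: int = 10):
    """Double the sample count until the estimate is within the error bound."""
    samples = start_samples
    pi = 0.0
    while abs(math.pi - pi) > acceptable_error:
        # count points that fall inside the quarter circle of radius 1
        count = sum(int(random() ** 2 + random() ** 2 <= 1) for _ in range(samples))
        pi = 4 * count / samples
        samples *= 2
    # `samples` was doubled after the successful run,
    # so the number actually used is samples // 2
    return samples // 2, pi


seed(0)  # fix the RNG so repeated runs behave the same
samples, pi = estimate_pi()
```

The ratio of points inside the quarter circle to all sampled points approaches PI/4, which is why multiplying by 4 recovers the estimate.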

Running the experiment

Using the experiment, we can write an analysis script that answers our question of how many samples are required.

```py
from machinable import Experiment

experiment = Experiment.singleton("montecarlo").execute()

print(
    f"We need {experiment.samples} samples to approximate"
    f" PI as {experiment.pi}"
    f" (< {experiment.config.acceptable_error} error)"
)
```
Output

```
We need 2560 samples to approximate PI as 3.15 (< 0.01 error)
```

The first thing to notice here is that, to print the result, we could conveniently re-use the properties that the experiment implementation itself used during the simulation. This is an immediate benefit of using the experiment class both to generate and to retrieve results. Such re-use is common in practice, since values computed during a simulation are often the very quantities of interest in the analysis script.

Result-driven scripts

More importantly, however, we can tweak and iteratively develop the analysis script above without re-running the underlying monte-carlo simulation.

This is because `Experiment.singleton("montecarlo").execute()` executes the experiment only once; on subsequent calls it simply retrieves the existing experiment from the storage.

What this gives us is a unified representation where we express how results should be displayed, and the same code happens to trigger the generation of the results if they have not been produced yet.
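Stripped of machinable specifics, the underlying pattern is compute-once-then-retrieve. The following is a hypothetical sketch of that idea, not the machinable implementation; `get_or_execute` and the file-based storage are our illustration:

```python
import json
import tempfile
from pathlib import Path


def get_or_execute(storage: Path, compute):
    """Run `compute` only if no stored result exists; otherwise load it."""
    if storage.exists():
        return json.loads(storage.read_text())  # retrieve the existing result
    result = compute()
    storage.write_text(json.dumps(result))  # persist for future calls
    return result


path = Path(tempfile.mkdtemp()) / "result.json"
calls = []


def simulate():
    calls.append(1)  # record that the expensive computation actually ran
    return {"pi": 3.14}


first = get_or_execute(path, simulate)
second = get_or_execute(path, simulate)  # served from storage; simulate() is not rerun
```

Because the second call is answered from storage, the display code can be edited and re-run freely without paying for the simulation again.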

While you would typically write the simulation first and a plot script to display its results later, in this paradigm you can start with the plot script and use it to launch the required simulation.
