"# Let us start by looking again at the Prey-Predator model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"load(library:examples/lotka_volterra/LVi.bc)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"list_model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### SSA means Stochastic Simulation Algorithm (from Gillespie)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"numerical_simulation(method: ssa).\n",
"plot."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### SPN is a Stochastic Petri Net, i.e., SSA without time"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"numerical_simulation(method: spn).\n",
"plot."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### SBN is a Stochastic Boolean Net, i.e., a stochastic boolean simulation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"numerical_simulation(method: sbn).\n",
"plot."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"Now let us look at different ways to approach PAC learning for this model.\n",
"\n",
"First, the biocham command: `pac_learning(Model, #Initial_states, Time_horizon)`\n",
"it will read the file `Model` and generate `#Initial_states` random initial states from which it will run simulations for `Time_horizon`.\n",
"\n",
"You can add options for the simulation, notably: `boolean_simulation: yes` to go from default `ssa` to `sbn` method,\n",
"and `cnf_clause_size: 2` to change the size of the disjuncts considered from the default `3`."
]
},
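{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, here is a sketch of a full call on the Lotka-Volterra model loaded above, with 25 random initial states and a time horizon of 2. The placement of the two options as extra arguments of `pac_learning` is an assumption here, by analogy with other Biocham commands such as `numerical_simulation(method: ssa)`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pac_learning(library:examples/lotka_volterra/LVi.bc, 25, 2, boolean_simulation: yes, cnf_clause_size: 2)."
]
},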
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Question 1\n",
"\n",
"Compare the results of trying to learn a model from traces of the above `library:examples/lotka_volterra/LVi.bc` model in the 3 following conditions:\n",
"\n",
"1. A single boolean simulation of length 50\n",
"2. 25 boolean simulations of length 2\n",
"3. 50 stochastic simulations of length 1\n",
"\n",
"Explain what you observe"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Question 2\n",
"\n",
"In the output, the `h` corresponds to Valiant's precision parameter. What we know (see François' slides) is that with $L(h, s)$ samples we have probability higher than $1 - h^{-1}$ to find our approximation, and its total amount of false negatives has measure $< h^{-1}$\n",
"\n",
"How did we turn this into an estimate of the number of samples needed for a given $h$?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Question 3\n",
"\n",
"Why do we have to provide a `cnf_clause_size` to learn CNF formulae of size less than `K`?\n",
"\n",
"What does it represent \"biologically\"?\n",
"\n",
"Could we have used the DNF learning algorithm here? why?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"Let us now consider a bigger model coming from L. Mendoza (Biosystems 2006), and made Boolean by the same author with Remy et al. (Dynamical Roles and Functionality of Feedback Circuits, Springer 2006).\n",
"\n",
"The model is about the control and differentiation of Th (lymphocyte) cells."