Commit 3abe47ef authored by SOLIMAN Sylvain's avatar SOLIMAN Sylvain

Moar questions

parent 74bcb390
%% Cell type:markdown id: tags:
# Let us start by looking again at the Prey-Predator model
%% Cell type:code id: tags:
```
load(library:examples/lotka_volterra/LVi.bc).
```
%% Cell type:code id: tags:
```
list_model.
```
%% Cell type:markdown id: tags:
### SSA means Stochastic Simulation Algorithm (from Gillespie)
%% Cell type:code id: tags:
```
numerical_simulation(method: ssa).
plot.
```
%% Cell type:markdown id: tags:
### SPN is a Stochastic Petri Net, i.e., SSA without time
%% Cell type:code id: tags:
```
numerical_simulation(method: spn).
plot.
```
%% Cell type:markdown id: tags:
### SBN is a Stochastic Boolean Net, i.e., a stochastic Boolean simulation
%% Cell type:code id: tags:
```
numerical_simulation(method: sbn).
plot.
```
%% Cell type:markdown id: tags:
---
Now let us look at different ways to approach PAC learning for this model.
First, the Biocham command `pac_learning(Model, #Initial_states, Time_horizon)`
reads the file `Model` and generates `#Initial_states` random initial states, from each of which it runs a simulation for `Time_horizon`.
You can add options for the simulation, notably `boolean_simulation: yes` to switch from the default `ssa` method to `sbn`,
and `cnf_clause_size: 2` to change the maximum size of the disjuncts considered from the default `3`.
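%% Cell type:markdown id: tags:
As a minimal sketch of the syntax described above (the number of initial states and the time horizon are purely illustrative values, not recommendations), a call on the Lotka-Volterra model could look like:
%% Cell type:code id: tags:
```
pac_learning(library:examples/lotka_volterra/LVi.bc, 10, 5).
```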
%% Cell type:markdown id: tags:
## Question 1
Compare the results of trying to learn a model from traces of the above `library:examples/lotka_volterra/LVi.bc` model in the 3 following conditions:
1. A single boolean simulation of length 50
2. 25 boolean simulations of length 2
3. 50 stochastic simulations of length 1
Explain what you observe.
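%% Cell type:markdown id: tags:
As a starting point, and assuming the `pac_learning` signature and options described above, the three conditions could be launched as follows (condition 3 keeps the default `ssa` method); use the empty cells below for your own runs and observations:
%% Cell type:code id: tags:
```
pac_learning(library:examples/lotka_volterra/LVi.bc, 1, 50, boolean_simulation: yes).
pac_learning(library:examples/lotka_volterra/LVi.bc, 25, 2, boolean_simulation: yes).
pac_learning(library:examples/lotka_volterra/LVi.bc, 50, 1).
```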
%% Cell type:code id: tags:
```
```
%% Cell type:code id: tags:
```
```
%% Cell type:code id: tags:
```
```
%% Cell type:markdown id: tags:
## Question 2
In the output, `h` corresponds to Valiant's precision parameter. What we know (see François' slides) is that with $L(h, s)$ samples, we have probability higher than $1 - h^{-1}$ of finding our approximation, and that the total measure of its false negatives is $< h^{-1}$.
How did we turn this into an estimate of the number of samples needed for a given $h$?
%% Cell type:code id: tags:
```
```
%% Cell type:code id: tags:
```
```
%% Cell type:markdown id: tags:
## Question 3
Why do we have to provide a `cnf_clause_size` to learn CNF formulae of size less than `K`?
What does it represent "biologically"? Where can you see that in the model?
Could we have used the DNF learning algorithm here? Why?
%% Cell type:code id: tags:
```
```
%% Cell type:code id: tags:
```
```
%% Cell type:markdown id: tags:
---
Let us now consider a bigger model, coming from L. Mendoza (Biosystems 2006) and made Boolean by the same author with Remy et al. (Dynamical Roles and Functionality of Feedback Circuits, Springer 2006).
![Th lymphocyte differentiation](RemyEtAl06.png)
The model is about the control and differentiation of Th (lymphocyte) cells.
Before "learning" it, we will try to understand it a bit…
%% Cell type:code id: tags:
```
load(library:examples/Th_lymphocytes/lympho.bc).
list_model.
```
%% Output
%% Cell type:code id: tags:
```
draw_influences.
```
%% Cell type:markdown id: tags:
Basically, Th0 cells differentiate either into
Th1 cells (marked by the activity of the TBet transcription factor) under the effect of IFNγ,
or into
Th2 cells under the effect of IL4, which binds to its receptor to activate STAT6 and GATA3…
%% Cell type:code id: tags:
```
list_stable_states.
```
%% Cell type:markdown id: tags:
## Question 4
Why do we have 6 stable states instead of 3?
```
STAT4,TBet -> IFNg.
/ STAT4 -< IFNg.
/ TBet -< IFNg.
GATA3 / STAT1 -> IL4.
/ GATA3 -< IL4.
STAT1 -< IL4.
IFNg / SOCS1 -> IFNgR.
/ IFNg -< IFNgR.
SOCS1 -< IFNgR.
IL4 / SOCS1 -> IL4R.
/ IL4 -< IL4R.
SOCS1 -< IL4R.
IL12 / STAT6 -> IL12R.
/ IL12 -< IL12R.
STAT6 -< IL12R.
IFNgR -> STAT1.
/ IFNgR -< STAT1.
IL4R -> STAT6.
/ IL4R -< STAT6.
IL12R / GATA3 -> STAT4.
/ IL12R -< STAT4.
GATA3 -< STAT4.
STAT1 -> SOCS1.
TBet -> SOCS1.
/ STAT1,TBet -< SOCS1.
STAT6 / TBet -> GATA3.
STAT1 / GATA3 -> TBet.
TBet / GATA3 -> TBet.
GATA3 -< TBet.
/ STAT1,TBet -< TBet.
/ STAT6 -< GATA3.
TBet -< GATA3.
```
Hint: the picture of the graph might help…
%% Cell type:code id: tags:
```
```
%% Cell type:code id: tags:
```
```
%% Cell type:code id: tags:
```
```
%% Cell type:code id: tags:
```
```
%% Cell type:markdown id: tags:
## Question 5
Could we have used the DNF learning algorithm? Why?
%% Cell type:code id: tags:
```
```
%% Cell type:code id: tags:
```
```