Commit 7836f3b8 authored by POTTIER Francois

Import from the CVS repository.

.exclude 0 → 100644
.exclude
CVS
.cvsignore
go
Notes
NOTES
TODO
test
paper
talk
doc
*.tar.gz
spec.godiva
.#*
GNUmakefile
blurbs
interactive
*.tar.gz
Changes 0 → 100644
20061014:
Incompatible change: there is now one distinct [UnboundIdentifier] exception
per atom sort.
20061003:
Added [AtomSet.remove] and [AtomMap.remove].
Added [AtomSet.iterator] and [AtomMap.iterator].
Added [AtomSet.print] and [AtomMap.print].
Added [AtomMap.domain].
20060927:
Added [AtomMap.mem].
20060922:
New function at every type: "equal".
New function at every abstraction type: "open2".
The syntax of the "container" declaration changes -- it now requires
fold2 in addition to map and fold.
20060920:
New functions at every type: "flatten" and "unflatten".
20060915:
Fixed a bug in the code generator that caused incorrect computation
of sets of free atoms.
Modified the code generator in order to avoid warnings about unused
variables in the generated code.
Performance improvements in [Patricia], [AtomSet], [AtomMap].
20060906:
Various changes in [Patricia], [GSet] and [GMap] so as to bring
these modules up to sync with their counterparts in Menhir.
Sped up an assertion in [AtomIdMap.add].
20060522:
Added [cardinal] and [choose] in [AtomSet] and [AtomMap].
Added [AtomSet.element].
Added [AtomSet.compare].
20060511:
Added [AtomMap.strict_add] and [AtomMap.mapi].
20060504:
Changed the default renaming scheme in [Atom.String]. The
new scheme should make renamings slightly less frequent
and more obvious.
20060421:
Added AtomMapId.iter and AtomMapId.fold.
20060419:
Added AtomMap.iter and AtomMap.fold.
Added find as a synonym for lookup.
20051129:
Updated library/Makefile for compatibility with ocaml 3.09.
20050609:
Initial release.
-include Makefile
.PHONY: test package export godi wc reinstall
# De-installing and re-installing.
reinstall:
$(MAKE) PREFIX=/usr/local uninstall install
# Test suite.
test: tool library
$(MAKE) -s -C test/good $(MFLAGS)
$(MAKE) -s -C test/bad $(MFLAGS)
# Package file creation.
# The version number is automatically set to the current date.
DATE := $(shell /bin/date +%Y%m%d)
PACKAGE := alphaCaml-$(DATE)
package:
# Copy the current directory (that is, the working copy) to /tmp.
/bin/rm -rf /tmp/$(PACKAGE)
/bin/cp -r `pwd` /tmp/$(PACKAGE)
# Clean up.
$(MAKE) -C /tmp/$(PACKAGE) clean
# Copy the new version number into the relevant files.
echo version = \"$(DATE)\" >> /tmp/$(PACKAGE)/library/META
echo let version = \"$(DATE)\" >> /tmp/$(PACKAGE)/tool/version.ml
echo '\gdef\acversion{$(DATE)}' >> /tmp/$(PACKAGE)/doc/version.tex
# Build the documentation.
$(MAKE) -C /tmp/$(PACKAGE)/doc -rs clean pdf
/bin/mv /tmp/$(PACKAGE)/doc/main.pdf /tmp/$(PACKAGE)/$(DOC)
# Create the archive.
cd /tmp && tar -cvz --exclude-from $(PACKAGE)/.exclude -f $(PACKAGE).tar.gz $(PACKAGE)
RSYNC := scp -p -B -C
TARGET := yquem.inria.fr:public_html/alphaCaml/
export: package
# Copy the archive and the documentation to yquem.
(cd /tmp && $(RSYNC) $(PACKAGE).tar.gz $(TARGET))
(cd /tmp/$(PACKAGE) && $(RSYNC) $(DOC) $(TARGET))
# Build the paper and export it to yquem.
# (cd paper && ./export)
wc:
ocamlwc -p tool/*.{ml,mll,mli,mly}
ocamlwc -p library/*.{ml,mli}
# -------------------------------------------------------------------------
# Creating a GODI package.
# This entry assumes that "make package" and "make export" have been
# run on the same day. It must be run with sufficient rights to write
# into the local GODI hierarchy.
GODINAME := godi/godi-alphacaml
GODIWORK := /home/fpottier/dev/godi-build
GODISVN := $(GODIWORK)/trunk/$(GODINAME)
GODIH := $(GODI_HOME)/build/$(GODINAME)
GODIPACK := $(GODIWORK)/pack
GODIMAP := $(GODIPACK)/release.3.09.map
godi:
@ if [ `whoami` != "root" ] ; then \
echo "make godi must be run with root privileges." ; \
exit 1 ; \
fi
@ sed -e s/VERSION/$(DATE)/ < spec.godiva > $(GODISVN)/spec.godiva
@ cd $(GODIWORK) && svn up
@ cd $(GODISVN) && \
godiva -refetch -localbase $(GODI_HOME) spec.godiva && \
rsync -v -r $(GODIH)/ $(GODISVN) && \
chown -R fpottier.fpottier $(GODISVN)
@ echo "Do you wish to proceed and commit changes to GODI (yes or no)?"
@ read answer && if [ "$$answer" != "yes" ] ; then \
echo Aborting. ; \
exit 0 ; \
fi
@ cd $(GODISVN) && svn commit -m "Changes to alphaCaml package."
@ echo "Now editing $(GODIMAP)..."
@ echo "Please add or edit a line of the form trunk/$(GODINAME) <version number>."
@ emacs $(GODIMAP)
@ echo "Do you wish to proceed and commit changes to GODI (yes or no)?"
@ read answer && if [ "$$answer" != "yes" ] ; then \
echo Aborting. ; \
exit 0 ; \
fi
@ cd $(GODIPACK) && svn commit -m "Updated release map for alphaCaml."
@ echo "You may now open GODI's release tool at"
@ echo " https://gps.dynxs.de/godi_admin"
@ echo "and proceed as directed."
INSTALL 0 → 100644
HOW TO INSTALL ALPHACAML
There are two components to install: the tool (alphaCaml) and the library
(alphaLib). Installation of the library requires Gerd Stolpmann's findlib
to be installed.
The installation steps are as follows:
1. Make sure findlib is installed on your system. (If it is, then the
command "ocamlfind" should be available and in the PATH.) If it is
not, download and install it.
http://www.ocaml-programming.de/packages/
2. (Need not run as root.) Compile the tool and the library.
make
3. (May need to run as root.) Install.
The tool is copied into $(PREFIX)/bin.
The library is installed via ocamlfind.
The documentation is copied into $(PREFIX)/doc/alphaCaml.
make PREFIX=/usr/local install
4. (Optional. Need not run as root.) Compile the demos. (This requires
step 3 to have succeeded.)
make demos
LICENSE 0 → 100644
Copyright (c) 2005, François Pottier (INRIA) All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer. Redistributions in binary
form must reproduce the above copyright notice, this list of conditions and
the following disclaimer in the documentation and/or other materials provided
with the distribution. Neither the name of INRIA nor the names of its
contributors may be used to endorse or promote products derived from this
software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.
Makefile 0 → 100644
.PHONY: all clean tool library demos
all: tool library
# Compiling alphaCaml.
tool:
$(MAKE) -s -C tool $(MFLAGS)
# Compiling alphaLib.
library:
$(MAKE) -s -C library $(MFLAGS)
# Compiling the demos. Requires the tool and library to have been installed.
demos:
$(MAKE) -s -C demos $(MFLAGS)
# Installation.
DOC=alphaCaml.pdf
install: tool library
$(MAKE) -s -C tool $(MFLAGS) $@
$(MAKE) -s -C library $(MFLAGS) $@
mkdir -p $(PREFIX)/doc/alphaCaml/
install $(DOC) $(PREFIX)/doc/alphaCaml/$(DOC)
uninstall:
$(MAKE) -s -C tool $(MFLAGS) $@
$(MAKE) -s -C library $(MFLAGS) $@
/bin/rm -rf $(PREFIX)/doc/alphaCaml
# Cleaning up.
clean:
/bin/rm -f *~ .*~
$(MAKE) -s -C tool clean
$(MAKE) -s -C library clean
if [ -d test ] ; then $(MAKE) -s -C test/good clean ; fi
$(MAKE) -s -C demos clean
if [ -d paper ] ; then $(MAKE) -s -C paper clean ; fi
if [ -d doc ] ; then $(MAKE) -s -C doc clean ; fi
Notes 0 → 100644
AlphaCaml is a tool that turns a so-called "binding specification" into an
Objective Caml compilation unit. A binding specification resembles an
algebraic data type declaration, but also includes information about names and
binding. AlphaCaml is meant to help writers of interpreters, compilers, or
other programs-that-manipulate-programs deal with alpha-conversion in a safe
and concise style.
AlphaCaml is described in [this paper](http://cambium.inria.fr/~fpottier/publis/fpottier-alphacaml.pdf).
AlphaCaml has not been updated since 2006.
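To give a rough flavour of what the generated compilation unit looks like, here is a hypothetical OCaml sketch of the kind of interface one might expect, inferred only from the identifiers used in the interactive demo included in this import (Syntax.Raw, Var.AtomMap, and so on); the signatures actually produced by alphaCaml are described in the documentation and may well differ.

(* Hypothetical sketch, for illustration only: the rough flavour of the
   interface generated from a binding specification, inferred from the
   identifiers used in the interactive demo below. The signatures that
   alphaCaml actually generates may differ. *)
module type GENERATED = sig

  (* One submodule per atom sort; here, a single sort of variables. *)
  module Var : sig
    type atom
    exception UnboundIdentifier of string
    module AtomMap : sig
      type 'a t
      val empty : 'a t
      val add : atom -> 'a -> 'a t -> 'a t
      val lookup : atom -> 'a t -> 'a
    end
  end

  (* Raw terms carry string identifiers, as produced by a parser. *)
  module Raw : sig
    type expr =
      | EVar of string
      | EConst of int
      | EAdd of expr * expr
      | EFail
  end

  (* Internal terms carry atoms; conversion functions between the raw
     and internal forms (import/export) are generated automatically. *)
  type expr =
    | EVar of Var.atom
    | EConst of int
    | EAdd of expr * expr
    | EFail

end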
TODO 0 → 100644
* Fix a bug in alphaLib: is_id sometimes returns false for substitutions
  whose semantics is the identity (but which are not the empty tree).
  Change the spec, or forbid such trees?
* Bug: the aeq functions return true on values of the OCaml type [].
* Check that the generated files go through ocamlweb cleanly.
* Generate a class that checks that a term is closed, collecting the
  free atoms and their locations (or failing at the first one).
* Rather than using modules to distinguish sorts, use a parameterized
  (phantom) atom type. This would make for lighter polymorphism.
  (See the sketch after this list.)
* Use the integer code of atoms to record their sort, which avoids
  having to tag them on the fly and will simplify quite a few things.
  Thank Nicolas in the documentation.
* Remove the raw form? It would suffice to offer a memoizing fresh
  function (for parsing) and a traversal class that maintains an
  atomidmap.
* Front-end: require equality between binds clauses only on the sorts
  that actually appear live in the subterm. Otherwise (Nicolas) one
  ends up in a situation where one must declare that a certain sort is
  bound in the subterm, and then gets a warning because it is not live
  there.
* Offer a callback on export as well? For import, document the fact
  that the callback must memoize, or else implement the memoization
  ourselves.
* Module Location: parameterize over the type of locations and the
  dummy value.
* Front-end: improve error messages?
* Generate a map-fold, and consider using self as the accumulator?
* Generate a left-to-right map.
* Make the generated map class preserve physical sharing.
* Introduce a high-level intermediate language for code generation,
  followed by pretty-printing through camlp4.
* Instead of carrying around a variable number of environments
  depending on the context, introduce a record type that always has
  the same number of fields, with the same field names.
* Export the bvi_ and import functions on patterns so that Nicolas can
  implement his toplevel.
* Remove the basename function (too dangerous)? Instead, introduce an
  injective function atom -> string (combining the basename and the
  identity) that does not use an environment (unlike atomidmap).
* Implement certain functions (export) on pattern types as well.
* Add the fold2 class for Yann.
* Remove the internal distinction between expr and pat, which causes
  far too much duplication; allow a more flexible, more orthogonal
  language (independence of sorts).
* Disallow names that are OCaml keywords or whose lowercase version is
  one.
* Problem with type/type versus type/and (Julien Signoles).
* Modify the demos to check that patterns are linear (use strict_add).
* Remove Atom.basename and Atom.identity to keep people from doing
  silly things with them?
* Consider producing a functor parameterized by whatever is desired.
* Allow more anonymous types, e.g. add
  factor ::= factor container
  factor ::= factor * factor
  etc. A lot of work for little gain?
* Think about memoizing the computation of free and bound variables at
  every node (or at every abstraction).
* An incremental implementation (a toplevel) might require that
  import/export/subst be able to suspend themselves on a special
  "hole" node and return the context in which the hole must be
  interpreted... so that one can then continue.
  Cf. the message from Randy Pollack (2005/06/20);
  cf. also the desiderata of Nicolas Pouillard (2006/12/15).
  Such a mechanism also makes it possible to implement "dirty"
  features such as "#load" or "open": if the client has control over
  the environment, it can tweak it as it sees fit before continuing
  with a modified environment.
* Figure out whether it is possible to distinguish input and output
  parameters in the definition of the map class; this would require
  the methods that handle EEscape factors to be left abstract?
* If one wants to generate fresh names while still in external format
  (e.g. to eliminate certain syntactic sugar on the fly during
  parsing), then one is stuck. Solution: do not do it, but that may
  force complicating the abstract syntax tree to reflect the
  constructions that could not be eliminated.
* Produce a hash function compatible with alpha-equivalence, so that
  terms can be used as keys in a hash table. Memoize the hash code at
  abstractions.
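As a rough illustration of the phantom-typed atoms idea mentioned in the list above, here is a minimal, purely hypothetical OCaml sketch; the names (atom, var_sort, tyvar_sort, fresh, equal) are invented for this example and are not part of alphaLib.

(* Hypothetical sketch of the "phantom-typed atoms" idea above: a single
   atom type parameterized by a phantom sort, instead of one module per
   sort. All names here are invented for illustration. *)
type 'sort atom = { id : int; basename : string }

(* Phantom sorts: never inhabited, used only as type parameters. *)
type var_sort
type tyvar_sort

let counter = ref 0

let fresh (basename : string) : 'sort atom =
  incr counter;
  { id = !counter; basename = basename }

(* Functions such as equality now work uniformly at every sort. *)
let equal (a : 'sort atom) (b : 'sort atom) : bool =
  a.id = b.id

(* The type checker keeps sorts apart without per-sort modules:
   a var_sort atom cannot be used where a tyvar_sort atom is expected. *)
let x : var_sort atom = fresh "x"
let t : tyvar_sort atom = fresh "t"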
.PHONY: all clean poplmark mixins interactive
all: poplmark mixins interactive
poplmark:
$(MAKE) -s -C poplmark $(MFLAGS)
mixins:
$(MAKE) -s -C mixins $(MFLAGS)
interactive:
if [ -d interactive ] ; then $(MAKE) -s -C interactive $(MFLAGS) ; fi
clean:
/bin/rm -f *~ .*~
$(MAKE) -s -C poplmark $(MFLAGS) clean
$(MAKE) -s -C mixins $(MFLAGS) clean
if [ -d interactive ] ; then $(MAKE) -s -C interactive $(MFLAGS) clean ; fi
.PHONY: all clean
# This Makefile requires alphaLib to have been installed via ocamlfind. If so
# desired, this requirement could be removed by directly using our master copy
# of the library in ../../library. Still, this method has more pedagogical
# value, since it illustrates the normal way of using alphaLib.
# Similarly, we assume that alphaCaml has been installed and is available in
# the PATH.
ALPHACAML := alphaCaml
CAMLC := ocamlfind ocamlc -g -package alphaLib
CAMLOPT := ocamlfind ocamlopt -package alphaLib
CAMLDEP := ocamlfind ocamldep
CAMLLEX := ocamllex
CAMLYACC := ocamlyacc
OBJECTS = $(MODULES:=.cmo)
XBJECTS = $(MODULES:=.cmx)
all: $(EXECUTABLE) $(EXECUTABLE).opt
$(EXECUTABLE): $(OBJECTS)
$(CAMLC) -o $(EXECUTABLE) -linkpkg $(OBJECTS)
$(EXECUTABLE).opt: $(XBJECTS)
$(CAMLOPT) -o $(EXECUTABLE).opt -linkpkg $(XBJECTS)
.depend: $(wildcard *.ml *.mli) $(GENERATED)
@/bin/rm -f $@
$(CAMLDEP) *.ml *.mli > $@
ifneq ($(MAKECMDGOALS),clean)
-include .depend
endif
clean:
/bin/rm -f $(EXECUTABLE) $(EXECUTABLE).opt $(GENERATED)
/bin/rm -f *.cmi *.cmx *.cmo *.ppo *.o *.obj *~ .*~
/bin/rm -f .depend
%.cmi: %.mli
$(CAMLC) -c $<
%.cmo: %.ml
$(CAMLC) -c $<
%.cmx %.o: %.ml
$(CAMLOPT) -c $<
%.ml %.mli: %.mly
@/bin/rm -f $*.ml $*.mli
$(CAMLYACC) $<
@/bin/chmod -w $*.ml $*.mli
%.ml: %.mll
@/bin/rm -f $@
$(CAMLLEX) $<
@/bin/chmod -w $@
%.ml %.mli: %.mla
@/bin/rm -f $*.ml $*.mli
$(ALPHACAML) $<
@/bin/chmod -w $*.ml $*.mli
These demos are intended to illustrate the use of alphaCaml. They are not very
useful or very polished. They have not been seriously tested and probably
contain bugs. Suggestions for improvement are welcome.
The demos are:
lambda/
Not a full demo. Only a sample signature file for an untyped
lambda-calculus.
poplmark/
A typechecker and evaluator for core System Fsub with records.
mixins/
A step-by-step evaluator for a calculus of mixin modules.
main
main.opt
syntax.ml
syntax.mli
.depend
parser.ml
parser.mli
lexer.ml
GENERATED := parser.ml parser.mli lexer.ml syntax.ml syntax.mli
MODULES := suspension syntax parserFix parser lexer main
EXECUTABLE := main
include ../Makefile.shared
SUMMARY
An interactive desk calculator with global declarations. Each
declaration should be of the form <variable> = <expression> on a
single line. Expressions include variables, integer constants,
addition, and a special "fail" keyword, which simulates an
evaluation failure.
The code is set up in a somewhat contrived way so as to allow
the import and export functions generated by alphaCaml to be
used, even though these functions are normally applied to
complete terms, not to terms that are incomplete because they
are being read incrementally from the standard input channel.
The key trick is to rely on infinite (lazy) streams of
declarations.
Although this setup is definitely fun, it is not yet clear whether it is
reasonable. (Consult the caveat at the end of Main.) Perhaps a simpler
approach, consisting in *not* treating globally bound names as bound atoms,
but instead as mere strings, would be preferable?
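For concreteness, the following sketch guesses at the shape of the lazy declaration streams, based only on the constructors D and ED used in parser.mly and main.ml below; the real types are generated by alphaCaml from syntax.mla (and also exist in an internal, atom-based form), so this is illustrative only.

(* Illustrative sketch of the lazy stream types, inferred from the
   constructors D and ED used below; the real types are generated by
   alphaCaml from syntax.mla. Component types are left abstract here,
   and Suspension is the module provided by suspension.ml. *)
type identifier = string   (* raw identifiers, as read from the input *)
type expr                  (* expressions (raw or internal form) *)
type value                 (* evaluation results *)

(* Each cell carries one declaration together with a suspension that,
   when forced, reads and parses more input to produce the next cell. *)
type declarations =
  | D of identifier * expr * declarations Suspension.t

(* The evaluated counterpart additionally records the computed value. *)
type evaluated_declarations =
  | ED of identifier * expr * value * evaluated_declarations Suspension.t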
SAMPLE COMMAND LINE
make
./main
Here is a sample session:
# x = 3
x is now bound to the value of 3, namely 3.
# y = x + x
y is now bound to the value of x + x, namely 6.
# z = foo
Sorry -- the identifier "foo" is unbound.
# 3r7y28fh
Sorry -- parse error: expected "<variable> = <expression>".
# x = /
Sorry -- lexical error at character 4.
# y = z
Sorry -- the identifier "z" is unbound.
# z = y + 2
z is now bound to the value of y + 2, namely 8.
# x = z + fail
Sorry -- evaluation error.
# w = x
w is now bound to the value of x, namely 3.
INTERESTING FILES
suspension.ml(i) A trivial implementation of delayed computations.
syntax.mla The abstract syntax. Relies on suspensions to
implement infinite lazy streams of declarations.
parserFix.ml Contains a forward reference that allows
tying a recursive knot between Parser and Main.
parser.mly The parser. Trivial, except in the way it invokes
Main.parse_declarations through ParserFix.
main.ml The main module. Each declaration is read in, parsed,
imported, evaluated, exported, and printed. Everything
is done in a lazy style. Lexical errors, parse errors,
unbound identifiers, and evaluation errors are properly
detected and dealt with.
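Since the contents of suspension.ml are not displayed in this import, here is a minimal sketch of what such a delayed-computation module might look like, assuming only the interface suggested by the calls Suspension.create, Suspension.force, and Suspension.map in main.ml; the actual implementation may differ (for instance, in whether it memoizes).

(* Minimal sketch of a delayed-computation module, assuming only the
   interface suggested by the calls in main.ml; the real suspension.ml
   may differ. This version does not memoize: forcing a suspension
   twice re-runs its computation, which appears consistent with the
   retry behaviour described in the comments of [process] below. *)
type 'a t = unit -> 'a

let create (f : unit -> 'a) : 'a t =
  f

let force (s : 'a t) : 'a =
  s ()

let map (f : 'a -> 'b) (s : 'a t) : 'b t =
  fun () -> f (force s)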
{
open Parser
exception Lexical_error of int
}
rule main = parse
| [' ' '\009' '\012' '\r' '\n' ]+
{ main lexbuf }
| "="
{ EQUAL }
| "+"
{ PLUS }
| "fail"
{ FAIL }
| (['a'-'z' '_'] ['A'-'Z' 'a'-'z' '_' '0'-'9' '\'']*) as str
{ VAR str }
| (['0'-'9']+) as str
{ CONST (int_of_string str) }
| eof
{ EOF }
| _
{ raise (Lexical_error (Lexing.lexeme_start lexbuf)) }
(* To begin, set up a function that reads a line of input and parses
it. If lexing or parsing fails, the function displays an error
message and starts over, so that failure is not observable. *)
let rec parse_declarations () : Syntax.Raw.declarations =
(* Print a prompt. *)
print_string "# ";
flush stdout;
(* Read a line. This will raise [End_of_file] if the user hits ^D. *)
let lexbuf = Lexing.from_string (input_line stdin) in
(* Parse that line, and, if successful, return the raw parse tree. *)
try
Parser.declarations Lexer.main lexbuf
with
(* Handle errors by printing an error message and retrying. *)
| Parsing.Parse_error ->
print_string "Sorry -- parse error: expected \"<variable> = <expression>\".\n";
flush stdout;
parse_declarations()
| Lexer.Lexical_error c ->
Printf.printf "Sorry -- lexical error at character %d.\n" c;
flush stdout;
parse_declarations()
(* Tie the recursive knot. The function [parse_declarations] above
is the one that the parser should refer to when building
a suspension that reads more input. So, write this function
into the appropriate reference. *)
let () =
ParserFix.declarations := parse_declarations
(* An evaluator for expressions. This code is perfectly normal. The
language is so simple that, in the absence of [EFail], no failure
would be possible. This explains why I introduced this special
expression, whose effect is to artificially cause an evaluation
failure. *)
exception EvaluationFailure
type env = Syntax.value Syntax.Var.AtomMap.t
let rec eval (env : env) : Syntax.expr -> Syntax.value = function
| Syntax.EVar x ->
Syntax.Var.AtomMap.lookup x env
| Syntax.EConst k ->
Syntax.VConst k
| Syntax.EAdd (e1, e2) ->
begin
match eval env e1, eval env e2 with
| Syntax.VConst k1, Syntax.VConst k2 ->
Syntax.VConst (k1 + k2)
end
| Syntax.EFail ->
raise EvaluationFailure
(* Now comes the main processing loop. This is where expressions are
evaluated under an evaluation environment that grows with every
declaration. *)
type suspension = Syntax.declarations Suspension.t
type esuspension = Syntax.evaluated_declarations Suspension.t
(* [process env suspension] turns a stream of declarations,
represented by the parameter [suspension], into a stream of
evaluated declarations.
The parameter [env] is an evaluation environment, that is, a
mapping of atoms to values, as defined above. It reflects the
declarations that were successfully parsed, imported, and
evaluated.
The parameter [suspension] produces a stream of declarations in
internal form or possibly fails with an [UnboundIdentifier]
exception emanating from [Syntax.import_declarations]. Elements in
the input stream that fail when evaluated are dropped in the output
stream. *)
let rec process (env : env) (suspension : suspension) : esuspension =
Suspension.create (fun () ->
try
(* Attempt to evaluate the head of the input stream. This
parses a line of input using [parse_declarations]
and imports it using [Syntax.import_declarations].
Execution of both of these functions gets suspended
again after the head of the stream has been obtained. *)
match Suspension.force suspension with
(* Success. Continue by examining the head of the stream,
which is now available. *)
| Syntax.D d ->
(* Look at the first declaration in the stream. Open it
to get a fresh atom. *)
let x, e, new_suspension = Syntax.open_declaration d in
(* Attempt to evaluate it. *)
let v = eval env e in
(* Success. Produce a new output stream cell, whose tail is
obtained by processing the tail of the input stream in
an extended environment. *)
let env = Syntax.Var.AtomMap.add x v env in
Syntax.ED (Syntax.create_evaluated_declaration (
x, e, v, process env new_suspension
))
with
| Syntax.Var.UnboundIdentifier x ->
(* Failure of [Syntax.import_declarations]. Print an error
message and continue by starting afresh with the same
parameters. This, in particular, has the effect of resuming
the execution of [Syntax.import_declarations] where it
left off, without extending its (to us, hidden) mapping
of identifiers to atoms.*)
Printf.printf "Sorry -- the identifier \"%s\" is unbound.\n" x;
flush stdout;
Suspension.force (process env suspension)
| EvaluationFailure ->
(* Failure of [eval], handled similarly. *)
Printf.printf "Sorry -- evaluation error.\n";
flush stdout;
Suspension.force (process env suspension)
)
(* Printers for raw values, expressions, and evaluated declarations. *)
let print_value c = function
| Syntax.Raw.VConst k ->
Printf.fprintf c "%d" k
let rec print_expr c = function
| Syntax.Raw.EVar x ->
Printf.fprintf c "%s" x
| Syntax.Raw.EConst k ->
Printf.fprintf c "%d" k
| Syntax.Raw.EAdd (e1, e2) ->
(* Don't care about parentheses. *)
Printf.fprintf c "%a + %a" print_expr e1 print_expr e2
| Syntax.Raw.EFail ->
Printf.fprintf c "fail"
let rec print_evaluated_declarations suspension =
match Suspension.force suspension with
| Syntax.Raw.ED (x, e, v, suspension) ->
Printf.printf "%s is now bound to the value of %a, namely %a.\n" x print_expr e print_value v;
flush stdout;
print_evaluated_declarations suspension
(* Reverse application. *)
let (/) x f =
f x
(* Use [parse_declarations] to obtain a stream of raw declarations.
Pipe it into [Syntax.import_declarations] to obtain a stream of
declarations in internal form. Pipe that into [process] to obtain a
stream of evaluated declarations in internal form. Pipe that next
into [Syntax.export_evaluated_declarations] to obtain a stream of
evaluated declarations in raw form. Finally, print that stream.
The whole setup is lazy. The printer plays the role of an eager
consumer that drives the pipeline. *)
let () =
try
Suspension.create parse_declarations
/ Suspension.map (Syntax.import_declarations Syntax.Identifier.Map.empty)
/ process Syntax.Var.AtomMap.empty
/ Suspension.map (Syntax.export_evaluated_declarations Syntax.Var.AtomIdMap.empty)
/ print_evaluated_declarations
with End_of_file ->
()
(* TEMPORARY
We have a problem if the identifiers chosen at export do not coincide
with the original identifiers typed in by the user. In that case, the
reports provided by the printer can state that "x1 is now bound ...",
whereas it is really "x" that is now bound. In other words, in the
input term that is being parsed, it is really the original identifier
that is bound, while in the output term that is being printed, another
identifier has been chosen, and it is really that new identifier that
is being bound. That is, we are reasoning with two different namespaces
at once.
# x = 3
x is now bound to the value of 3, namely 3.
# x = x + 1
x1 is now bound to the value of x + 1, namely 4.
# x = x + 1
x2 is now bound to the value of x1 + 1, namely 5.
# y3 = 0
y is now bound to the value of 0, namely 0.
It is not yet clear to me how to avoid this phenomenon. Checking that
the user does not define the same name twice would pollute the code
and would not be a sufficient condition, as illustrated by the case
of "y3".
It is also not clear to me whether this mind-boggling setup is at all
a good idea.
*)
%{
open Syntax.Raw
%}
%token <string> VAR
%token <int> CONST
%token EQUAL PLUS EOF FAIL
%left PLUS
%type <Syntax.Raw.declarations> declarations
%start declarations
%%
expr:
| VAR
{ EVar $1 }
| CONST
{ EConst $1 }
| expr PLUS expr
{ EAdd ($1, $3) }
| FAIL
{ EFail }
declarations:
| VAR EQUAL expr EOF
{ D ($1, $3, Suspension.create (fun () -> !ParserFix.declarations())) }
/* We read just one declaration, of the form <variable> = <expression>,
and expect to find no more input, because the token stream represents
just one line of input and we expect one declaration per line.
The rest of the declarations is represented by a suspension, which,
when forced, will do whatever is necessary to read more input. The
function that defines this suspension needs to call the parser, so
there is a circularity. We break this circularity by going through
a reference in module [ParserFix]. */