//: Go from an address to the payload it points at using /lookup.
//:
//: The tests in this layer use unsafe operations so as to stay decoupled from
//: 'new'.

:(scenario copy_indirect)
def main [
  1:address:num <- copy 10/unsafe
  10:num <- copy 34
  # This loads location 1 as an address and looks up *that* location.
  2:num <- copy 1:address:num/lookup
]
+mem: storing 34 in location 2
:(before "End Preprocess read_memory(x)")
canonize(x);
//: similarly, write to addresses pointing at other locations using the
//: 'lookup' property
:(scenario store_indirect)
def main [
  1:address:num <- copy 10/unsafe
  1:address:num/lookup <- copy 34
]
+mem: storing 34 in location 10
:(before "End Preprocess write_memory(x, data)")
canonize(x);
//: writes to address 0 always loudly fail
:(scenario store_to_0_fails)
% Hide_errors = true;
def main [
  1:address:num <- copy 0
  1:address:num/lookup <- copy 34
]
-mem: storing 34 in location 0
+error: can't write to location 0 in '1:address:num/lookup <- copy 34'
//: attempts to /lookup address 0 always loudly fail
:(scenario lookup_0_fails)
% Hide_errors = true;
def main [
  1:address:num <- copy 0
  2:num <- copy 1:address:num/lookup
]
+error: main: tried to /lookup 0 in '2:num <- copy 1:address:num/lookup'
:(code)
void canonize(reagent& x) {
  if (is_literal(x)) return;
  // Begin canonize(x) Lookups
  while (has_property(x, "lookup"))
    lookup_memory(x);
}
void lookup_memory(reagent& x) {
  if (!x.type || x.type->atom || x.type->left->value != get(Type_ordinal, "address")) {
    raise << maybe(current_recipe_name()) << "tried to /lookup '" << x.original_string << "' but it isn't an address\n" << end();
    return;
  }
  // compute value
  if (x.value == 0) {
    raise << maybe(current_recipe_name()) << "tried to /lookup 0\n" << end();
    return;
  }
  lookup_memory_core(x, /*check_for_null*/true);
}
void lookup_memory_core(reagent& x, bool check_for_null) {
  if (x.value == 0) return;
  trace("mem") << "location " << x.value << " is " << no_scientific(get_or_insert(Memory, x.value)) << end();
  x.set_value(get_or_insert(Memory, x.value));
  drop_from_type(x, "address");
  if (check_for_null && x.value == 0) {
    if (Current_routine)
      raise << maybe(current_recipe_name()) << "tried to /lookup 0 in '" << to_original_string(current_instruction()) << "'\n" << end();
    else
      raise << "tried to /lookup 0\n" << end();
  }
  drop_one_lookup(x);
}
:(before "End Preprocess types_strictly_match(reagent to, reagent from)")
if (!canonize_type(to)) return false;
if (!canonize_type(from)) return false;

:(before "End Preprocess is_mu_array(reagent r)")
if (!canonize_type(r)) return false;

:(before "End Preprocess is_mu_address(reagent r)")
if (!canonize_type(r)) return false;

:(before "End Preprocess is_mu_number(reagent r)")
if (!canonize_type(r)) return false;
:(before "End Preprocess is_mu_boolean(reagent r)")
if (!canonize_type(r)) return false;

:(before "End Preprocess is_mu_character(reagent r)")
if (!canonize_type(r)) return false;

:(after "Update product While Type-checking Merge")
if (!canonize_type(product)) continue;
:(before "End Compute Call Ingredient")
canonize_type(ingredient);
:(before "End Preprocess NEXT_INGREDIENT product")
canonize_type(product);
:(before "End Check RETURN Copy(lhs, rhs)")
canonize_type(lhs);
canonize_type(rhs);

:(before "Compute Container Size(reagent rcopy)")
if (!canonize_type(rcopy)) return;
:(before "Compute Container Size(element, full_type)")
assert(!has_property(element, "lookup"));
:(before "Compute Exclusive Container Size(element, full_type)")
assert(!has_property(element, "lookup"));
:(code)
bool canonize_type(reagent& r) {
  while (has_property(r, "lookup")) {
    if (!r.type || r.type->atom || !r.type->left || !r.type->left->atom || r.type->left->value != get(Type_ordinal, "address")) {
      raise << "cannot perform lookup on '" << r.name << "' because it has non-address type " << to_string(r.type) << '\n' << end();
      return false;
    }
    drop_from_type(r, "address");
    drop_one_lookup(r);
  }
  return true;
}
void drop_one_lookup(reagent& r) {
  for (vector<pair<string, string_tree*> >::iterator p = r.properties.begin(); p != r.properties.end(); ++p) {
    if (p->first == "lookup") {
      r.properties.erase(p);
      return;
    }
  }
  assert(false);
}
//: Tedious fixup to support addresses in container/array instructions of previous layers.
//: Most instructions don't require fixup if they use the 'ingredients' and
//: 'products' variables in run_current_routine().

:(scenario get_indirect)
def main [
  1:address:point <- copy 10/unsafe
  10:num <- copy 34
  11:num <- copy 35
  2:num <- get 1:address:point/lookup, 0:offset
]
+mem: storing 34 in location 2
:(scenario get_indirect2)
def main [
  1:address:point <- copy 10/unsafe
  10:num <- copy 34
  11:num <- copy 35
  2:address:num <- copy 20/unsafe
  2:address:num/lookup <- get 1:address:point/lookup, 0:offset
]
+mem: storing 34 in location 20
:(scenario include_nonlookup_properties)
def main [
  1:address:point <- copy 10/unsafe
  10:num <- copy 34
  11:num <- copy 35
  2:num <- get 1:address:point/lookup/foo, 0:offset
]
+mem: storing 34 in location 2
:(after "Update GET base in Check")
if (!canonize_type(base)) break;
:(after "Update GET product in Check")
if (!canonize_type(product)) break;
:(after "Update GET base in Run")
canonize(base);
:(scenario put_indirect)
def main [
  1:address:point <- copy 10/unsafe
  10:num <- copy 34
  11:num <- copy 35
  1:address:point/lookup <- put 1:address:point/lookup, 0:offset, 36
]
+mem: storing 36 in location 10
:(after "Update PUT base in Check")
if (!canonize_type(base)) break;
:(after "Update PUT offset in Check")
if (!canonize_type(offset)) break;
:(after "Update PUT base in Run")
canonize(base);
:(scenario put_product_error_with_lookup)
% Hide_errors = true;
def main [
  1:address:point <- copy 10/unsafe
  10:num <- copy 34
  11:num <- copy 35
  1:address:point <- put 1:address:point/lookup, x:offset, 36
]
+error: main: product of 'put' must be first ingredient '1:address:point/lookup', but got '1:address:point'
:(before "End PUT Product Checks")
reagent/*copy*/ p = inst.products.at(0);
if (!canonize_type(p)) break;  // error raised elsewhere
reagent/*copy*/ i = inst.ingredients.at(0);
if (!canonize_type(i)) break;  // error raised elsewhere
if (!types_strictly_match(p, i)) {
  raise << maybe(get(Recipe, r).name) << "product of 'put' must be first ingredient '" << inst.ingredients.at(0).original_string << "', but got '" << inst.products.at(0).original_string << "'\n" << end();
  break;
}
:(scenario new_error)
% Hide_errors = true;
def main [
  1:num/raw <- new number:type
]
+error: main: product of 'new' has incorrect type: '1:num/raw <- new number:type'

:(after "Update NEW product in Check")
canonize_type(product);
:(scenario copy_array_indirect)
def main [
  10:array:num:3 <- create-array
  11:num <- copy 14
  12:num <- copy 15
  13:num <- copy 16
  1:address:array:num <- copy 10/unsafe
  2:array:num <- copy 1:address:array:num/lookup
]
+mem: storing 3 in location 2
+mem: storing 14 in location 3
+mem: storing 15 in location 4
+mem: storing 16 in location 5
:(scenario create_array_indirect)
def main [
  1:address:array:num:3 <- copy 1000/unsafe  # pretend allocation
  1:address:array:num:3/lookup <- create-array
]
+mem: storing 3 in location 1000

:(after "Update CREATE_ARRAY product in Check")
if (!canonize_type(product)) break;
:(after "Update CREATE_ARRAY product in Run")
canonize(product);
:(scenario index_indirect)
def main [
  10:array:num:3 <- create-array
  11:num <- copy 14
  12:num <- copy 15
  13:num <- copy 16
  1:address:array:num <- copy 10/unsafe
  2:num <- index 1:address:array:num/lookup, 1
]
+mem: storing 15 in location 2
:(before "Update INDEX base in Check")
if (!canonize_type(base)) break;
:(before "Update INDEX index in Check")
if (!canonize_type(index)) break;
:(before "Update INDEX product in Check")
if (!canonize_type(product)) break;
:(before "Update INDEX base in Run")
canonize(base);
:(before "Update INDEX index in Run")
canonize(index);
:(scenario put_index_indirect)
def main [
  10:array:num:3 <- create-array
  11:num <- copy 14
  12:num <- copy 15
  13:num <- copy 16
  1:address:array:num <- copy 10/unsafe
  1:address:array:num/lookup <- put-index 1:address:array:num/lookup, 1, 34
]
+mem: storing 34 in location 12
:(scenario put_index_indirect_2)
def main [
  1:array:num:3 <- create-array
  2:num <- copy 14
  3:num <- copy 15
  4:num <- copy 16
  5:address:num <- copy 10/unsafe
  10:num <- copy 1
  1:array:num:3 <- put-index 1:array:num:3, 5:address:num/lookup, 34
]
+mem: storing 34 in location 3
:(scenario put_index_product_error_with_lookup)
% Hide_errors = true;
def main [
  10:array:num:3 <- create-array
  11:num <- copy 14
  12:num <- copy 15
  13:num <- copy 16
  1:address:array:num <- copy 10/unsafe
  1:address:array:num <- put-index 1:address:array:num/lookup, 1, 34
]
+error: main: product of 'put-index' must be first ingredient '1:address:array:num/lookup', but got '1:address:array:num'
:(before "End PUT_INDEX Product Checks")
reagent/*copy*/ p = inst.products.at(0);
if (!canonize_type(p)) break;  // error raised elsewhere
reagent/*copy*/ i = inst.ingredients.at(0);
if (!canonize_type(i)) break;  // error raised elsewhere
if (!types_strictly_match(p, i)) {
  raise << maybe(get(Recipe, r).name) << "product of 'put-index' must be first ingredient '" << inst.ingredients.at(0).original_string << "', but got '" << inst.products.at(0).original_string << "'\n" << end();
  break;
}
:(scenario dilated_reagent_in_static_array)
def main [
  {1: (array (address number) 3)} <- create-array
  5:address:num <- new number:type
  {1: (array (address number) 3)} <- put-index {1: (array (address number) 3)}, 0, 5:address:num
  *5:address:num <- copy 34
  6:num <- copy *5:address:num
]
+run: creating array of size 4
+mem: storing 34 in location 6
:(before "Update PUT_INDEX base in Check")
if (!canonize_type(base)) break;
:(before "Update PUT_INDEX index in Check")
if (!canonize_type(index)) break;
:(before "Update PUT_INDEX value in Check")
if (!canonize_type(value)) break;
:(before "Update PUT_INDEX base in Run")
canonize(base);
:(before "Update PUT_INDEX index in Run")
canonize(index);
:(scenario length_indirect)
def main [
  10:array:num:3 <- create-array
  11:num <- copy 14
  12:num <- copy 15
  13:num <- copy 16
  1:address:array:num <- copy 10/unsafe
  2:num <- length 1:address:array:num/lookup
]
+mem: storing 3 in location 2
:(before "Update LENGTH array in Check")
if (!canonize_type(array)) break;
:(before "Update LENGTH array in Run")
canonize(array);
:(scenario maybe_convert_indirect)
def main [
  10:number-or-point <- merge 0/number, 34
  1:address:number-or-point <- copy 10/unsafe
  2:num, 3:bool <- maybe-convert 1:address:number-or-point/lookup, i:variant
]
+mem: storing 1 in location 3
+mem: storing 34 in location 2
:(scenario maybe_convert_indirect_2)
def main [
  10:number-or-point <- merge 0/number, 34
  1:address:number-or-point <- copy 10/unsafe
  2:address:num <- copy 20/unsafe
  2:address:num/lookup, 3:bool <- maybe-convert 1:address:number-or-point/lookup, i:variant
]
+mem: storing 1 in location 3
+mem: storing 34 in location 20
:(scenario maybe_convert_indirect_3)
def main [
  10:number-or-point <- merge 0/number, 34
  1:address:number-or-point <- copy 10/unsafe
  2:address:bool <- copy 20/unsafe
  3:num, 2:address:bool/lookup <- maybe-convert 1:address:number-or-point/lookup, i:variant
]
+mem: storing 1 in location 20
+mem: storing 34 in location 3
:(before "Update MAYBE_CONVERT base in Check")
if (!canonize_type(base)) break;
:(before "Update MAYBE_CONVERT product in Check")
if (!canonize_type(product)) break;
:(before "Update MAYBE_CONVERT status in Check")
if (!canonize_type(status)) break;
:(before "Update MAYBE_CONVERT base in Run")
canonize(base);
:(before "Update MAYBE_CONVERT product in Run")
canonize(product);
:(before "Update MAYBE_CONVERT status in Run")
canonize(status);
:(scenario merge_exclusive_container_indirect)
def main [
  1:address:number-or-point <- copy 10/unsafe
  1:address:number-or-point/lookup <- merge 0/number, 34
]
+mem: storing 0 in location 10
+mem: storing 34 in location 11

:(before "Update size_mismatch Check for MERGE(x)")
canonize(x);
//: abbreviation for '/lookup': a prefix '*'

:(scenario lookup_abbreviation)
def main [
  1:address:number <- copy 10/unsafe
  10:number <- copy 34
  3:number <- copy *1:address:number
]
+parse: ingredient: {1: ("address" "number"), "lookup": ()}
+mem: storing 34 in location 3
:(before "End Parsing reagent")
{
  while (starts_with(name, "*")) {
    name.erase(0, 1);
    properties.push_back(pair<string, string_tree*>("lookup", NULL));
  }
  if (name.empty())
    raise << "illegal name '" << original_string << "'\n" << end();
}
//:: helpers for debugging

:(before "End Primitive Recipe Declarations")
_DUMP,
:(before "End Primitive Recipe Numbers")
put(Recipe_ordinal, "$dump", _DUMP);
:(before "End Primitive Recipe Implementations")
case _DUMP: {
  reagent/*copy*/ after_canonize = current_instruction().ingredients.at(0);
  canonize(after_canonize);
  cerr << maybe(current_recipe_name()) << current_instruction().ingredients.at(0).name << ' ' << no_scientific(current_instruction().ingredients.at(0).value) << " => " << no_scientific(after_canonize.value) << " => " << no_scientific(get_or_insert(Memory, after_canonize.value)) << '\n';
  break;
}
//: grab an address, and then dump its value at intervals
//: useful for tracking down memory corruption (writing to an out-of-bounds address)
:(before "End Globals")
int Bar = -1;
:(before "End Primitive Recipe Declarations")
_BAR,
:(before "End Primitive Recipe Numbers")
put(Recipe_ordinal, "$bar", _BAR);
:(before "End Primitive Recipe Implementations")
case _BAR: {
  if (current_instruction().ingredients.empty()) {
    if (Bar != -1) cerr << Bar << ": " << no_scientific(get_or_insert(Memory, Bar)) << '\n';
    else cerr << '\n';
  }
  else {
    reagent/*copy*/ tmp = current_instruction().ingredients.at(0);
    canonize(tmp);
    Bar = tmp.value;
  }
  break;
}