mu/033exclusive_container.cc

//: Exclusive containers contain exactly one of a fixed number of 'variants'
//: of different types.
//:
//: They also implicitly contain a tag describing precisely which variant is
//: currently stored in them.
:(before "End Mu Types Initialization")
//: We'll use this container as a running example; it holds either a number
//: or a point.
{
type_ordinal tmp = put(Type_ordinal, "number-or-point", Next_type_ordinal++);
get_or_insert(Type, tmp); // initialize
get(Type, tmp).kind = EXCLUSIVE_CONTAINER;
get(Type, tmp).name = "number-or-point";
get(Type, tmp).elements.push_back(reagent("i:number"));
get(Type, tmp).elements.push_back(reagent("p:point"));
}
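//: For instance (see the first scenario below): storing the 'p:point' variant
//: (34, 35) in this container occupies three consecutive locations: the tag 1
//: (the index of the point variant among the elements above), then 34, then
//: 35. Storing the 'i:number' variant 23 would occupy two locations: the tag
//: 0, then 23.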
//: Tests in this layer often explicitly set up memory before reading it as a
//: container. Don't do this in general. I'm tagging such cases with /unsafe;
//: they'll be exceptions to later checks.
:(scenario copy_exclusive_container)
# Copying an exclusive container copies all its contents along with the extra location for the tag.
def main [
1:num <- copy 1 # 'point' variant
2:num <- copy 34
3:num <- copy 35
4:number-or-point <- copy 1:number-or-point/unsafe
]
+mem: storing 1 in location 4
+mem: storing 34 in location 5
+mem: storing 35 in location 6
:(before "End size_of(type) Special-cases")
if (t.kind == EXCLUSIVE_CONTAINER) {
// size of an exclusive container is the size of its largest variant
// (So, like containers, it can't contain dynamically-sized arrays.)
int result = 0;
for (int i = 0; i < SIZE(t.elements); ++i) {
reagent tmp;
tmp.type = new type_tree(*type);
int size = size_of(variant_type(tmp, i));
if (size > result) result = size;
}
// ...+1 for its tag.
return result+1;
}
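//: For the running example: the 'i:number' variant occupies 1 location and
//: the 'p:point' variant occupies 2, so size_of(number-or-point) is
//: max(1, 2) + 1 = 3.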
//:: To access variants of an exclusive container, use 'maybe-convert'.
//: It yields two products: a copy of the variant you asked for, and a
//: boolean status indicating whether the conversion succeeded. It fails when
//: the container currently holds a different variant, in which case the
//: first product is left untouched.
//: 'maybe-convert' requires a literal in ingredient 1. We'll use a synonym
//: called 'variant'.
:(before "End Mu Types Initialization")
put(Type_ordinal, "variant", 0);
:(scenario maybe_convert)
def main [
12:num <- copy 1
13:num <- copy 35
14:num <- copy 36
20:point, 22:bool <- maybe-convert 12:number-or-point/unsafe, 1:variant
]
# boolean
+mem: storing 1 in location 22
# point
+mem: storing 35 in location 20
+mem: storing 36 in location 21
:(scenario maybe_convert_fail)
def main [
12:num <- copy 1
13:num <- copy 35
14:num <- copy 36
20:num, 21:bool <- maybe-convert 12:number-or-point/unsafe, 0:variant
]
# boolean
+mem: storing 0 in location 21
# number: no write
:(before "End Primitive Recipe Declarations")
MAYBE_CONVERT,
:(before "End Primitive Recipe Numbers")
put(Recipe_ordinal, "maybe-convert", MAYBE_CONVERT);
:(before "End Primitive Recipe Checks")
case MAYBE_CONVERT: {
const recipe& caller = get(Recipe, r);
if (SIZE(inst.ingredients) != 2) {
raise << maybe(caller.name) << "'maybe-convert' expects exactly 2 ingredients in '" << to_original_string(inst) << "'\n" << end();
break;
}
reagent/*copy*/ base = inst.ingredients.at(0);
// Update MAYBE_CONVERT base in Check
if (!base.type) {
raise << maybe(caller.name) << "first ingredient of 'maybe-convert' should be an exclusive-container, but got '" << base.original_string << "'\n" << end();
break;
}
const type_tree* base_type = base.type;
// Update MAYBE_CONVERT base_type in Check
if (!base_type->atom || base_type->value == 0 || !contains_key(Type, base_type->value) || get(Type, base_type->value).kind != EXCLUSIVE_CONTAINER) {
raise << maybe(caller.name) << "first ingredient of 'maybe-convert' should be an exclusive-container, but got '" << base.original_string << "'\n" << end();
break;
}
if (!is_literal(inst.ingredients.at(1))) {
raise << maybe(caller.name) << "second ingredient of 'maybe-convert' should have type 'variant', but got '" << inst.ingredients.at(1).original_string << "'\n" << end();
break;
}
if (inst.products.empty()) break;
if (SIZE(inst.products) != 2) {
raise << maybe(caller.name) << "'maybe-convert' expects exactly 2 products in '" << to_original_string(inst) << "'\n" << end();
break;
}
reagent/*copy*/ product = inst.products.at(0);
// Update MAYBE_CONVERT product in Check
reagent& offset = inst.ingredients.at(1);
populate_value(offset);
if (offset.value >= SIZE(get(Type, base_type->value).elements)) {
raise << maybe(caller.name) << "invalid tag " << offset.value << " in '" << to_original_string(inst) << "'\n" << end();
break;
}
const reagent& variant = variant_type(base, offset.value);
if (!types_coercible(product, variant)) {
raise << maybe(caller.name) << "'maybe-convert " << base.original_string << ", " << inst.ingredients.at(1).original_string << "' should write to " << to_string(variant.type) << " but '" << product.name << "' has type " << to_string(product.type) << '\n' << end();
break;
}
reagent/*copy*/ status = inst.products.at(1);
// Update MAYBE_CONVERT status in Check
if (!is_mu_boolean(status)) {
raise << maybe(get(Recipe, r).name) << "second product yielded by 'maybe-convert' should be a boolean, but tried to write to '" << inst.products.at(1).original_string << "'\n" << end();
break;
}
break;
}
:(before "End Primitive Recipe Implementations")
case MAYBE_CONVERT: {
reagent/*copy*/ base = current_instruction().ingredients.at(0);
// Update MAYBE_CONVERT base in Run
int base_address = base.value;
if (base_address == 0) {
raise << maybe(current_recipe_name()) << "tried to access location 0 in '" << to_original_string(current_instruction()) << "'\n" << end();
break;
}
int tag = current_instruction().ingredients.at(1).value;
reagent/*copy*/ product = current_instruction().products.at(0);
// Update MAYBE_CONVERT product in Run
reagent/*copy*/ status = current_instruction().products.at(1);
// Update MAYBE_CONVERT status in Run
// optimization: write results directly to memory (skip the automatic
// product write) so the variant's data is copied only when the tag matches
write_products = false;
if (tag == static_cast<int>(get_or_insert(Memory, base_address))) {
const reagent& variant = variant_type(base, tag);
trace("mem") << "storing 1 in location " << status.value << end();
put(Memory, status.value, 1);
if (!is_dummy(product)) {
// Write Memory in Successful MAYBE_CONVERT in Run
for (int i = 0; i < size_of(variant); ++i) {
double val = get_or_insert(Memory, base_address+/*skip tag*/1+i);
trace("mem") << "storing " << no_scientific(val) << " in location " << product.value+i << end();
put(Memory, product.value+i, val);
}
}
}
else {
trace("mem") << "storing 0 in location " << status.value << end();
put(Memory, status.value, 0);
}
break;
}
:(code)
const reagent variant_type(const reagent& base, int tag) {
return variant_type(base.type, tag);
}
const reagent variant_type(const type_tree* type, int tag) {
assert(tag >= 0);
const type_tree* root_type = type->atom ? type : type->left;
assert(contains_key(Type, root_type->value));
assert(!get(Type, root_type->value).name.empty());
const type_info& info = get(Type, root_type->value);
assert(info.kind == EXCLUSIVE_CONTAINER);
reagent/*copy*/ element = info.elements.at(tag);
// End variant_type Special-cases
return element;
}
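//: For the running example, variant_type(number-or-point, 0) is the element
//: {i: "number"} and variant_type(number-or-point, 1) is {p: "point"}.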
:(scenario maybe_convert_product_type_mismatch)
% Hide_errors = true;
def main [
12:num <- copy 1
13:num <- copy 35
14:num <- copy 36
20:num, 21:bool <- maybe-convert 12:number-or-point/unsafe, 1:variant
]
+error: main: 'maybe-convert 12:number-or-point/unsafe, 1:variant' should write to point but '20' has type number
:(scenario maybe_convert_dummy_product)
def main [
12:num <- copy 1
13:num <- copy 35
14:num <- copy 36
_, 21:bool <- maybe-convert 12:number-or-point/unsafe, 1:variant
]
$error: 0
//:: Allow exclusive containers to be defined in Mu code.
:(scenario exclusive_container)
exclusive-container foo [
x:num
y:num
]
+parse: --- defining exclusive-container foo
+parse: element: {x: "number"}
+parse: element: {y: "number"}
:(before "End Command Handlers")
else if (command == "exclusive-container") {
insert_container(command, EXCLUSIVE_CONTAINER, in);
}
//: arrays are disallowed inside exclusive containers unless their length is
//: fixed in advance
:(scenario exclusive_container_contains_array)
exclusive-container foo [
x:@:num:3
]
$error: 0
:(scenario exclusive_container_disallows_dynamic_array_element)
% Hide_errors = true;
exclusive-container foo [
x:@:num
]
+error: container 'foo' cannot determine size of element 'x'
//:: To construct exclusive containers out of variant types, use 'merge'.
:(scenario lift_to_exclusive_container)
exclusive-container foo [
x:num
y:num
]
def main [
1:num <- copy 34
2:foo <- merge 0/x, 1:num # tag must be a literal when merging exclusive containers
4:foo <- merge 1/y, 1:num
]
+mem: storing 0 in location 2
+mem: storing 34 in location 3
+mem: storing 1 in location 4
+mem: storing 34 in location 5
//: type-checking for 'merge' on exclusive containers
:(scenario merge_handles_exclusive_container)
exclusive-container foo [
x:num
y:bar
]
container bar [
z:num
]
def main [
1:foo <- merge 0/x, 34
]
+mem: storing 0 in location 1
+mem: storing 34 in location 2
$error: 0
:(scenario merge_requires_literal_tag_for_exclusive_container)
% Hide_errors = true;
exclusive-container foo [
x:num
y:bar
]
container bar [
z:num
]
def main [
1:num <- copy 0
2:foo <- merge 1:num, 34
]
+error: main: ingredient 0 of 'merge' should be a literal, for the tag of exclusive-container 'foo' in '2:foo <- merge 1:num, 34'
:(scenario merge_handles_exclusive_container_inside_exclusive_container)
exclusive-container foo [
x:num
y:bar
]
exclusive-container bar [
a:num
b:num
]
def main [
1:num <- copy 0
2:bar <- merge 0/a, 34
4:foo <- merge 1/y, 2:bar
]
+mem: storing 0 in location 5
+mem: storing 34 in location 6
$error: 0
:(before "End check_merge_call Special-cases")
case EXCLUSIVE_CONTAINER: {
assert(state.data.top().container_element_index == 0);
trace("transform") << "checking exclusive container " << to_string(container) << " vs ingredient " << ingredient_index << end();
// easy case: exact match
if (types_strictly_match(container, inst.ingredients.at(ingredient_index)))
return;
if (!is_literal(ingredients.at(ingredient_index))) {
raise << maybe(caller.name) << "ingredient " << ingredient_index << " of 'merge' should be a literal, for the tag of exclusive-container '" << container_info.name << "' in '" << to_original_string(inst) << "'\n" << end();
return;
}
reagent/*copy*/ ingredient = ingredients.at(ingredient_index); // unnecessary copy just to keep this function from modifying caller
populate_value(ingredient);
if (ingredient.value >= SIZE(container_info.elements)) {
raise << maybe(caller.name) << "invalid tag at " << ingredient_index << " for '" << container_info.name << "' in '" << to_original_string(inst) << "'\n" << end();
return;
}
const reagent& variant = variant_type(container, ingredient.value);
trace("transform") << "tag: " << ingredient.value << end();
// replace union with its variant
state.data.pop();
state.data.push(merge_check_point(variant, 0));
++ingredient_index;
break;
}
:(scenario merge_check_container_containing_exclusive_container)
container foo [
x:num
y:bar
]
exclusive-container bar [
x:num
y:num
]
def main [
1:foo <- merge 23, 1/y, 34
]
+mem: storing 23 in location 1
+mem: storing 1 in location 2
+mem: storing 34 in location 3
$error: 0
:(scenario merge_check_container_containing_exclusive_container_2)
% Hide_errors = true;
container foo [
x:num
y:bar
]
exclusive-container bar [
x:num
y:num
]
def main [
1:foo <- merge 23, 1/y, 34, 35
]
+error: main: too many ingredients in '1:foo <- merge 23, 1/y, 34, 35'
:(scenario merge_check_exclusive_container_containing_container)
exclusive-container foo [
x:num
y:bar
]
container bar [
x:num
y:num
]
def main [
1:foo <- merge 1/y, 23, 34
]
+mem: storing 1 in location 1
+mem: storing 23 in location 2
+mem: storing 34 in location 3
$error: 0
:(scenario merge_check_exclusive_container_containing_container_2)
exclusive-container foo [
x:num
y:bar
]
container bar [
x:num
y:num
]
def main [
1:foo <- merge 0/x, 23
]
$error: 0
:(scenario merge_check_exclusive_container_containing_container_3)
% Hide_errors = true;
exclusive-container foo [
x:num
y:bar
]
container bar [
x:num
y:num
]
def main [
1:foo <- merge 1/y, 23
]
+error: main: too few ingredients in '1:foo <- merge 1/y, 23'
:(scenario merge_check_exclusive_container_containing_container_4)
exclusive-container foo [
x:num
y:bar
]
container bar [
a:num
b:num
]
def main [
1:bar <- merge 23, 24
3:foo <- merge 1/y, 1:bar
]
$error: 0
//: Since the different variants of an exclusive-container might have
//: different sizes, relax the size mismatch check for 'merge' instructions.
:(before "End size_mismatch(x) Special-cases")
if (current_step_index() < SIZE(Current_routine->steps())
&& current_instruction().operation == MERGE
&& !current_instruction().products.empty()
&& current_instruction().products.at(0).type) {
reagent/*copy*/ x = current_instruction().products.at(0);
// Update size_mismatch Check for MERGE(x)
const type_tree* root_type = x.type->atom ? x.type : x.type->left;
assert(root_type->atom);
if (get(Type, root_type->value).kind == EXCLUSIVE_CONTAINER)
return size_of(x) < SIZE(data);
}
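//: For example, in the scenario below 'merge 0/x, 1:num' supplies only 2
//: values (the tag and one number) for a 'bar' product that occupies 3
//: locations; the usual exact-size check would reject that, so for exclusive
//: containers we only require that the product is at least as large as the
//: data supplied.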
:(scenario merge_exclusive_container_with_mismatched_sizes)
container foo [
x:num
y:num
]
exclusive-container bar [
x:num
y:foo
]
def main [
1:num <- copy 34
2:num <- copy 35
3:bar <- merge 0/x, 1:num
6:bar <- merge 1/foo, 1:num, 2:num
]
+mem: storing 0 in location 3
+mem: storing 34 in location 4
# bar always occupies 3 locations, so location 5 is skipped
+mem: storing 1 in location 6
+mem: storing 34 in location 7
+mem: storing 35 in location 8