mu/archive/1.vm/055shape_shifting_container.cc

774 lines
23 KiB
C++
Raw Permalink Normal View History

//:: Container definitions can contain 'type ingredients'
3309 Rip out everything to fix one failing unit test (commit 3290; type abbreviations). This commit does several things at once that I couldn't come up with a clean way to unpack: A. It moves to a new representation for type trees without changing the actual definition of the `type_tree` struct. B. It adds unit tests for our type metadata precomputation, so that errors there show up early and in a simpler setting rather than dying when we try to load Mu code. C. It fixes a bug, guarding against infinite loops when precomputing metadata for recursive shape-shifting containers. To do this it uses a dumb way of comparing type_trees, comparing their string representations instead. That is likely incredibly inefficient. Perhaps due to C, this commit has made Mu incredibly slow. Running all tests for the core and the edit/ app now takes 6.5 minutes rather than 3.5 minutes. == more notes and details I've been struggling for the past week now to back out of a bad design decision, a premature optimization from the early days: storing atoms directly in the 'value' slot of a cons cell rather than creating a special 'atom' cons cell and storing it on the 'left' slot. In other words, if a cons cell looks like this: o / | \ left val right ..then the type_tree (a b c) used to look like this (before this commit): o | \ a o | \ b o | \ c null ..rather than like this 'classic' approach to s-expressions which never mixes val and right (which is what we now have): o / \ o o | / \ a o o | / \ b o null | c The old approach made several operations more complicated, most recently the act of replacing a (possibly atom/leaf) sub-tree with another. That was the final straw that got me to realize the contortions I was going through to save a few type_tree nodes (cons cells). Switching to the new approach was hard partly because I've been using the old approach for so long and type_tree manipulations had pervaded everything. Another issue I ran into was the realization that my layers were not cleanly separated. Key parts of early layers (precomputing type metadata) existed purely for far later ones (shape-shifting types). Layers I got repeatedly stuck at: 1. the transform for precomputing type sizes (layer 30) 2. type-checks on merge instructions (layer 31) 3. the transform for precomputing address offsets in types (layer 36) 4. replace operations in supporting shape-shifting recipes (layer 55) After much thrashing I finally noticed that it wasn't the entirety of these layers that was giving me trouble, but just the type metadata precomputation, which had bugs that weren't manifesting until 30 layers later. Or, worse, when loading .mu files before any tests had had a chance to run. A common failure mode was running into types at run time that I hadn't precomputed metadata for at transform time. Digging into these bugs got me to realize that what I had before wasn't really very good, but a half-assed heuristic approach that did a whole lot of extra work precomputing metadata for utterly meaningless types like `((address number) 3)` which just happened to be part of a larger type like `(array (address number) 3)`. So, I redid it all. I switched the representation of types (because the old representation made unit tests difficult to retrofit) and added unit tests to the metadata precomputation. I also made layer 30 only do the minimal metadata precomputation it needs for the concepts introduced until then. In the process, I also made the precomputation more correct than before, and added hooks in the right place so that I could augment the logic when I introduced shape-shifting containers. == lessons learned There's several levels of hygiene when it comes to layers: 1. Every layer introduces precisely what it needs and in the simplest way possible. If I was building an app until just that layer, nothing would seem over-engineered. 2. Some layers are fore-shadowing features in future layers. Sometimes this is ok. For example, layer 10 foreshadows containers and arrays and so on without actually supporting them. That is a net win because it lets me lay out the core of Mu's data structures out in one place. But if the fore-shadowing gets too complex things get nasty. Not least because it can be hard to write unit tests for features before you provide the plumbing to visualize and manipulate them. 3. A layer is introducing features that are tested only in later layers. 4. A layer is introducing features with tests that are invalidated in later layers. (This I knew from early on to be an obviously horrendous idea.) Summary: avoid Level 2 (foreshadowing layers) as much as possible. Tolerate it indefinitely for small things where the code stays simple over time, but become strict again when things start to get more complex. Level 3 is mostly a net lose, but sometimes it can be expedient (a real case of the usually grossly over-applied term "technical debt"), and it's better than the conventional baseline of no layers and no scenarios. Just clean it up as soon as possible. Definitely avoid layer 4 at any time. == minor lessons Avoid unit tests for trivial things, write scenarios in context as much as possible. But within those margins unit tests are fine. Just introduce them before any scenarios (commit 3297). Reorganizing layers can be easy. Just merge layers for starters! Punt on resplitting them in some new way until you've gotten them to work. This is the wisdom of Refactoring: small steps. What made it hard was not wanting to merge *everything* between layer 30 and 55. The eventual insight was realizing I just need to move those two full-strength transforms and nothing else.
2016-09-10 01:32:52 +00:00
//: pre-requisite: extend our notion of containers to not necessarily be
//: atomic types
:(after "Update GET base_type in Check")
base_type = get_base_type(base_type);
:(after "Update GET base_type in Run")
base_type = get_base_type(base_type);
:(after "Update PUT base_type in Check")
base_type = get_base_type(base_type);
:(after "Update PUT base_type in Run")
base_type = get_base_type(base_type);
:(after "Update MAYBE_CONVERT base_type in Check")
base_type = get_base_type(base_type);
:(after "Update base_type in element_type")
base_type = get_base_type(base_type);
:(after "Update base_type in skip_addresses")
base_type = get_base_type(base_type);
:(replace{} "const type_tree* get_base_type(const type_tree* t)")
const type_tree* get_base_type(const type_tree* t) {
const type_tree* result = t->atom ? t : t->left;
if (!result->atom)
raise << "invalid type " << to_string(t) << '\n' << end();
return result;
}
3309 Rip out everything to fix one failing unit test (commit 3290; type abbreviations). This commit does several things at once that I couldn't come up with a clean way to unpack: A. It moves to a new representation for type trees without changing the actual definition of the `type_tree` struct. B. It adds unit tests for our type metadata precomputation, so that errors there show up early and in a simpler setting rather than dying when we try to load Mu code. C. It fixes a bug, guarding against infinite loops when precomputing metadata for recursive shape-shifting containers. To do this it uses a dumb way of comparing type_trees, comparing their string representations instead. That is likely incredibly inefficient. Perhaps due to C, this commit has made Mu incredibly slow. Running all tests for the core and the edit/ app now takes 6.5 minutes rather than 3.5 minutes. == more notes and details I've been struggling for the past week now to back out of a bad design decision, a premature optimization from the early days: storing atoms directly in the 'value' slot of a cons cell rather than creating a special 'atom' cons cell and storing it on the 'left' slot. In other words, if a cons cell looks like this: o / | \ left val right ..then the type_tree (a b c) used to look like this (before this commit): o | \ a o | \ b o | \ c null ..rather than like this 'classic' approach to s-expressions which never mixes val and right (which is what we now have): o / \ o o | / \ a o o | / \ b o null | c The old approach made several operations more complicated, most recently the act of replacing a (possibly atom/leaf) sub-tree with another. That was the final straw that got me to realize the contortions I was going through to save a few type_tree nodes (cons cells). Switching to the new approach was hard partly because I've been using the old approach for so long and type_tree manipulations had pervaded everything. Another issue I ran into was the realization that my layers were not cleanly separated. Key parts of early layers (precomputing type metadata) existed purely for far later ones (shape-shifting types). Layers I got repeatedly stuck at: 1. the transform for precomputing type sizes (layer 30) 2. type-checks on merge instructions (layer 31) 3. the transform for precomputing address offsets in types (layer 36) 4. replace operations in supporting shape-shifting recipes (layer 55) After much thrashing I finally noticed that it wasn't the entirety of these layers that was giving me trouble, but just the type metadata precomputation, which had bugs that weren't manifesting until 30 layers later. Or, worse, when loading .mu files before any tests had had a chance to run. A common failure mode was running into types at run time that I hadn't precomputed metadata for at transform time. Digging into these bugs got me to realize that what I had before wasn't really very good, but a half-assed heuristic approach that did a whole lot of extra work precomputing metadata for utterly meaningless types like `((address number) 3)` which just happened to be part of a larger type like `(array (address number) 3)`. So, I redid it all. I switched the representation of types (because the old representation made unit tests difficult to retrofit) and added unit tests to the metadata precomputation. I also made layer 30 only do the minimal metadata precomputation it needs for the concepts introduced until then. In the process, I also made the precomputation more correct than before, and added hooks in the right place so that I could augment the logic when I introduced shape-shifting containers. == lessons learned There's several levels of hygiene when it comes to layers: 1. Every layer introduces precisely what it needs and in the simplest way possible. If I was building an app until just that layer, nothing would seem over-engineered. 2. Some layers are fore-shadowing features in future layers. Sometimes this is ok. For example, layer 10 foreshadows containers and arrays and so on without actually supporting them. That is a net win because it lets me lay out the core of Mu's data structures out in one place. But if the fore-shadowing gets too complex things get nasty. Not least because it can be hard to write unit tests for features before you provide the plumbing to visualize and manipulate them. 3. A layer is introducing features that are tested only in later layers. 4. A layer is introducing features with tests that are invalidated in later layers. (This I knew from early on to be an obviously horrendous idea.) Summary: avoid Level 2 (foreshadowing layers) as much as possible. Tolerate it indefinitely for small things where the code stays simple over time, but become strict again when things start to get more complex. Level 3 is mostly a net lose, but sometimes it can be expedient (a real case of the usually grossly over-applied term "technical debt"), and it's better than the conventional baseline of no layers and no scenarios. Just clean it up as soon as possible. Definitely avoid layer 4 at any time. == minor lessons Avoid unit tests for trivial things, write scenarios in context as much as possible. But within those margins unit tests are fine. Just introduce them before any scenarios (commit 3297). Reorganizing layers can be easy. Just merge layers for starters! Punt on resplitting them in some new way until you've gotten them to work. This is the wisdom of Refactoring: small steps. What made it hard was not wanting to merge *everything* between layer 30 and 55. The eventual insight was realizing I just need to move those two full-strength transforms and nothing else.
2016-09-10 01:32:52 +00:00
5001 - drop the :(scenario) DSL I've been saying for a while[1][2][3] that adding extra abstractions makes things harder for newcomers, and adding new notations doubly so. And then I notice this DSL in my own backyard. Makes me feel like a hypocrite. [1] https://news.ycombinator.com/item?id=13565743#13570092 [2] https://lobste.rs/s/to8wpr/configuration_files_are_canary_warning [3] https://lobste.rs/s/mdmcdi/little_languages_by_jon_bentley_1986#c_3miuf2 The implementation of the DSL was also highly hacky: a) It was happening in the tangle/ tool, but was utterly unrelated to tangling layers. b) There were several persnickety constraints on the different kinds of lines and the specific order they were expected in. I kept finding bugs where the translator would silently do the wrong thing. Or the error messages sucked, and readers may be stuck looking at the generated code to figure out what happened. Fixing error messages would require a lot more code, which is one of my arguments against DSLs in the first place: they may be easy to implement, but they're hard to design to go with the grain of the underlying platform. They require lots of iteration. Is that effort worth prioritizing in this project? On the other hand, the DSL did make at least some readers' life easier, the ones who weren't immediately put off by having to learn a strange syntax. There were fewer quotes to parse, fewer backslash escapes. Anyway, since there are also people who dislike having to put up with strange syntaxes, we'll call that consideration a wash and tear this DSL out. --- This commit was sheer drudgery. Hopefully it won't need to be redone with a new DSL because I grow sick of backslashes.
2019-03-13 01:56:55 +00:00
:(code)
void test_ill_formed_container() {
Hide_errors = true;
run(
"def main [\n"
" {1: ((foo) num)} <- copy 0\n"
"]\n"
);
// no crash
}
//: update size_of to handle non-atom container types
5001 - drop the :(scenario) DSL I've been saying for a while[1][2][3] that adding extra abstractions makes things harder for newcomers, and adding new notations doubly so. And then I notice this DSL in my own backyard. Makes me feel like a hypocrite. [1] https://news.ycombinator.com/item?id=13565743#13570092 [2] https://lobste.rs/s/to8wpr/configuration_files_are_canary_warning [3] https://lobste.rs/s/mdmcdi/little_languages_by_jon_bentley_1986#c_3miuf2 The implementation of the DSL was also highly hacky: a) It was happening in the tangle/ tool, but was utterly unrelated to tangling layers. b) There were several persnickety constraints on the different kinds of lines and the specific order they were expected in. I kept finding bugs where the translator would silently do the wrong thing. Or the error messages sucked, and readers may be stuck looking at the generated code to figure out what happened. Fixing error messages would require a lot more code, which is one of my arguments against DSLs in the first place: they may be easy to implement, but they're hard to design to go with the grain of the underlying platform. They require lots of iteration. Is that effort worth prioritizing in this project? On the other hand, the DSL did make at least some readers' life easier, the ones who weren't immediately put off by having to learn a strange syntax. There were fewer quotes to parse, fewer backslash escapes. Anyway, since there are also people who dislike having to put up with strange syntaxes, we'll call that consideration a wash and tear this DSL out. --- This commit was sheer drudgery. Hopefully it won't need to be redone with a new DSL because I grow sick of backslashes.
2019-03-13 01:56:55 +00:00
void test_size_of_shape_shifting_container() {
run(
"container foo:_t [\n"
" x:_t\n"
" y:num\n"
"]\n"
"def main [\n"
" 1:foo:num <- merge 12, 13\n"
" 3:foo:point <- merge 14, 15, 16\n"
"]\n"
);
CHECK_TRACE_CONTENTS(
"mem: storing 12 in location 1\n"
"mem: storing 13 in location 2\n"
"mem: storing 14 in location 3\n"
"mem: storing 15 in location 4\n"
"mem: storing 16 in location 5\n"
);
}
void test_size_of_shape_shifting_container_2() {
run(
// multiple type ingredients
"container foo:_a:_b [\n"
" x:_a\n"
" y:_b\n"
"]\n"
"def main [\n"
" 1:foo:num:bool <- merge 34, true\n"
"]\n"
);
CHECK_TRACE_COUNT("error", 0);
}
void test_size_of_shape_shifting_container_3() {
run(
"container foo:_a:_b [\n"
" x:_a\n"
" y:_b\n"
"]\n"
"def main [\n"
" 1:text <- new [abc]\n"
// compound types for type ingredients
" {3: (foo number (address array character))} <- merge 34/x, 1:text/y\n"
"]\n"
);
CHECK_TRACE_COUNT("error", 0);
}
void test_size_of_shape_shifting_container_4() {
run(
"container foo:_a:_b [\n"
" x:_a\n"
" y:_b\n"
"]\n"
"container bar:_a:_b [\n"
// dilated element
" {data: (foo _a (address _b))}\n"
"]\n"
"def main [\n"
" 1:text <- new [abc]\n"
" 3:bar:num:@:char <- merge 34/x, 1:text/y\n"
"]\n"
);
CHECK_TRACE_COUNT("error", 0);
}
void test_shape_shifting_container_extend() {
run(
"container foo:_a [\n"
" x:_a\n"
"]\n"
"container foo:_a [\n"
" y:_a\n"
"]\n"
);
CHECK_TRACE_COUNT("error", 0);
}
void test_shape_shifting_container_extend_error() {
Hide_errors = true;
run(
"container foo:_a [\n"
" x:_a\n"
"]\n"
"container foo:_b [\n"
" y:_b\n"
"]\n"
);
CHECK_TRACE_CONTENTS(
"error: headers of container 'foo' must use identical type ingredients\n"
);
}
void test_type_ingredient_must_start_with_underscore() {
Hide_errors = true;
run(
"container foo:t [\n"
" x:num\n"
"]\n"
);
CHECK_TRACE_CONTENTS(
"error: foo: type ingredient 't' must begin with an underscore\n"
);
}
2016-07-06 21:25:23 +00:00
:(before "End Globals")
// We'll use large type ordinals to mean "the following type of the variable".
2016-04-29 04:39:24 +00:00
// For example, if we have a generic type called foo:_elem, the type
// ingredient _elem in foo's type_info will have value START_TYPE_INGREDIENTS,
// and we'll handle it by looking in the current reagent for the next type
// that appears after foo.
extern const int START_TYPE_INGREDIENTS = 2000;
:(before "End Commandline Parsing") // after loading .mu files
assert(Next_type_ordinal < START_TYPE_INGREDIENTS);
:(before "End type_info Fields")
map<string, type_ordinal> type_ingredient_names;
//: Suppress unknown type checks in shape-shifting containers.
:(before "Check Container Field Types(info)")
if (!info.type_ingredient_names.empty()) continue;
:(before "End container Name Refinements")
if (name.find(':') != string::npos) {
trace(101, "parse") << "container has type ingredients; parsing" << end();
if (!read_type_ingredients(name, command)) {
// error; skip rest of the container definition and continue
slurp_balanced_bracket(in);
return;
}
}
:(code)
bool read_type_ingredients(string& name, const string& command) {
string save_name = name;
istringstream in(save_name);
name = slurp_until(in, ':');
map<string, type_ordinal> type_ingredient_names;
2016-07-06 21:25:23 +00:00
if (!slurp_type_ingredients(in, type_ingredient_names, name)) {
return false;
}
if (contains_key(Type_ordinal, name)
&& contains_key(Type, get(Type_ordinal, name))) {
const type_info& previous_info = get(Type, get(Type_ordinal, name));
// we've already seen this container; make sure type ingredients match
if (!type_ingredients_match(type_ingredient_names, previous_info.type_ingredient_names)) {
raise << "headers of " << command << " '" << name << "' must use identical type ingredients\n" << end();
return false;
}
return true;
}
// we haven't seen this container before
if (!contains_key(Type_ordinal, name) || get(Type_ordinal, name) == 0)
put(Type_ordinal, name, Next_type_ordinal++);
type_info& info = get_or_insert(Type, get(Type_ordinal, name));
info.type_ingredient_names.swap(type_ingredient_names);
return true;
}
2016-07-06 21:25:23 +00:00
bool slurp_type_ingredients(istream& in, map<string, type_ordinal>& out, const string& container_name) {
int next_type_ordinal = START_TYPE_INGREDIENTS;
while (has_data(in)) {
string curr = slurp_until(in, ':');
2016-07-06 21:25:23 +00:00
if (curr.empty()) {
raise << container_name << ": empty type ingredients not permitted\n" << end();
return false;
}
2016-09-12 01:07:29 +00:00
if (!starts_with(curr, "_")) {
2016-07-06 21:25:23 +00:00
raise << container_name << ": type ingredient '" << curr << "' must begin with an underscore\n" << end();
return false;
}
if (out.find(curr) != out.end()) {
2016-07-06 21:25:23 +00:00
raise << container_name << ": can't repeat type ingredient name'" << curr << "' in a single container definition\n" << end();
return false;
}
put(out, curr, next_type_ordinal++);
}
return true;
}
bool type_ingredients_match(const map<string, type_ordinal>& a, const map<string, type_ordinal>& b) {
if (SIZE(a) != SIZE(b)) return false;
2016-10-20 05:10:35 +00:00
for (map<string, type_ordinal>::const_iterator p = a.begin(); p != a.end(); ++p) {
if (!contains_key(b, p->first)) return false;
if (p->second != get(b, p->first)) return false;
}
return true;
}
:(before "End insert_container Special-cases")
// check for use of type ingredients
else if (is_type_ingredient_name(type->name)) {
type->value = get(info.type_ingredient_names, type->name);
}
:(code)
bool is_type_ingredient_name(const string& type) {
2016-09-12 01:07:29 +00:00
return starts_with(type, "_");
}
:(before "End Container Type Checks")
if (type->value >= START_TYPE_INGREDIENTS
&& (type->value - START_TYPE_INGREDIENTS) < SIZE(get(Type, type->value).type_ingredient_names))
return;
5001 - drop the :(scenario) DSL I've been saying for a while[1][2][3] that adding extra abstractions makes things harder for newcomers, and adding new notations doubly so. And then I notice this DSL in my own backyard. Makes me feel like a hypocrite. [1] https://news.ycombinator.com/item?id=13565743#13570092 [2] https://lobste.rs/s/to8wpr/configuration_files_are_canary_warning [3] https://lobste.rs/s/mdmcdi/little_languages_by_jon_bentley_1986#c_3miuf2 The implementation of the DSL was also highly hacky: a) It was happening in the tangle/ tool, but was utterly unrelated to tangling layers. b) There were several persnickety constraints on the different kinds of lines and the specific order they were expected in. I kept finding bugs where the translator would silently do the wrong thing. Or the error messages sucked, and readers may be stuck looking at the generated code to figure out what happened. Fixing error messages would require a lot more code, which is one of my arguments against DSLs in the first place: they may be easy to implement, but they're hard to design to go with the grain of the underlying platform. They require lots of iteration. Is that effort worth prioritizing in this project? On the other hand, the DSL did make at least some readers' life easier, the ones who weren't immediately put off by having to learn a strange syntax. There were fewer quotes to parse, fewer backslash escapes. Anyway, since there are also people who dislike having to put up with strange syntaxes, we'll call that consideration a wash and tear this DSL out. --- This commit was sheer drudgery. Hopefully it won't need to be redone with a new DSL because I grow sick of backslashes.
2019-03-13 01:56:55 +00:00
:(code)
void test_size_of_shape_shifting_exclusive_container() {
run(
"exclusive-container foo:_t [\n"
" x:_t\n"
" y:num\n"
"]\n"
"def main [\n"
" 1:foo:num <- merge 0/x, 34\n"
" 3:foo:point <- merge 0/x, 15, 16\n"
" 6:foo:point <- merge 1/y, 23\n"
"]\n"
);
CHECK_TRACE_CONTENTS(
"run: {1: (\"foo\" \"number\")} <- merge {0: \"literal\", \"x\": ()}, {34: \"literal\"}\n"
"mem: storing 0 in location 1\n"
"mem: storing 34 in location 2\n"
"run: {3: (\"foo\" \"point\")} <- merge {0: \"literal\", \"x\": ()}, {15: \"literal\"}, {16: \"literal\"}\n"
"mem: storing 0 in location 3\n"
"mem: storing 15 in location 4\n"
"mem: storing 16 in location 5\n"
"run: {6: (\"foo\" \"point\")} <- merge {1: \"literal\", \"y\": ()}, {23: \"literal\"}\n"
"mem: storing 1 in location 6\n"
"mem: storing 23 in location 7\n"
"run: return\n"
);
// no other stores
CHECK_EQ(trace_count_prefix("mem", "storing"), 7);
}
:(before "End variant_type Special-cases")
2016-11-06 20:00:39 +00:00
if (contains_type_ingredient(element))
replace_type_ingredients(element.type, type->right, info, " while computing variant type of exclusive-container");
5001 - drop the :(scenario) DSL I've been saying for a while[1][2][3] that adding extra abstractions makes things harder for newcomers, and adding new notations doubly so. And then I notice this DSL in my own backyard. Makes me feel like a hypocrite. [1] https://news.ycombinator.com/item?id=13565743#13570092 [2] https://lobste.rs/s/to8wpr/configuration_files_are_canary_warning [3] https://lobste.rs/s/mdmcdi/little_languages_by_jon_bentley_1986#c_3miuf2 The implementation of the DSL was also highly hacky: a) It was happening in the tangle/ tool, but was utterly unrelated to tangling layers. b) There were several persnickety constraints on the different kinds of lines and the specific order they were expected in. I kept finding bugs where the translator would silently do the wrong thing. Or the error messages sucked, and readers may be stuck looking at the generated code to figure out what happened. Fixing error messages would require a lot more code, which is one of my arguments against DSLs in the first place: they may be easy to implement, but they're hard to design to go with the grain of the underlying platform. They require lots of iteration. Is that effort worth prioritizing in this project? On the other hand, the DSL did make at least some readers' life easier, the ones who weren't immediately put off by having to learn a strange syntax. There were fewer quotes to parse, fewer backslash escapes. Anyway, since there are also people who dislike having to put up with strange syntaxes, we'll call that consideration a wash and tear this DSL out. --- This commit was sheer drudgery. Hopefully it won't need to be redone with a new DSL because I grow sick of backslashes.
2019-03-13 01:56:55 +00:00
:(code)
void test_get_on_shape_shifting_container() {
run(
"container foo:_t [\n"
" x:_t\n"
" y:num\n"
"]\n"
"def main [\n"
" 1:foo:point <- merge 14, 15, 16\n"
" 4:num <- get 1:foo:point, y:offset\n"
"]\n"
);
CHECK_TRACE_CONTENTS(
"mem: storing 16 in location 4\n"
);
}
void test_get_on_shape_shifting_container_2() {
run(
"container foo:_t [\n"
" x:_t\n"
" y:num\n"
"]\n"
"def main [\n"
" 1:foo:point <- merge 14, 15, 16\n"
" 4:point <- get 1:foo:point, x:offset\n"
"]\n"
);
CHECK_TRACE_CONTENTS(
"mem: storing 14 in location 4\n"
"mem: storing 15 in location 5\n"
);
}
void test_get_on_shape_shifting_container_3() {
run(
"container foo:_t [\n"
" x:_t\n"
" y:num\n"
"]\n"
"def main [\n"
" 1:num/alloc-id, 2:num <- copy 0, 34\n"
" 3:foo:&:point <- merge 1:&:point, 48\n"
" 6:&:point <- get 1:foo:&:point, x:offset\n"
"]\n"
);
CHECK_TRACE_CONTENTS(
"mem: storing 0 in location 6\n"
"mem: storing 34 in location 7\n"
);
}
void test_get_on_shape_shifting_container_inside_container() {
run(
"container foo:_t [\n"
" x:_t\n"
" y:num\n"
"]\n"
"container bar [\n"
" x:foo:point\n"
" y:num\n"
"]\n"
"def main [\n"
" 1:bar <- merge 14, 15, 16, 17\n"
" 5:num <- get 1:bar, 1:offset\n"
"]\n"
);
CHECK_TRACE_CONTENTS(
"mem: storing 17 in location 5\n"
);
}
void test_get_on_complex_shape_shifting_container() {
run(
"container foo:_a:_b [\n"
" x:_a\n"
" y:_b\n"
"]\n"
"def main [\n"
" 1:text <- new [abc]\n"
" {3: (foo number (address array character))} <- merge 34/x, 1:text/y\n"
" 6:text <- get {3: (foo number (address array character))}, y:offset\n"
" 8:bool <- equal 1:text, 6:text\n"
"]\n"
);
CHECK_TRACE_CONTENTS(
"mem: storing 1 in location 8\n"
);
}
:(before "End element_type Special-cases")
replace_type_ingredients(element, type, info, " while computing element type of container");
:(before "End size_of(type) Non-atom Special-cases")
assert(type->left->atom);
if (!contains_key(Type, type->left->value)) {
raise << "no such type " << type->left->value << '\n' << end();
return 0;
}
type_info t = get(Type, type->left->value);
if (t.kind == CONTAINER) {
// size of a container is the sum of the sizes of its elements
int result = 0;
for (int i = 0; i < SIZE(t.elements); ++i) {
// todo: strengthen assertion to disallow mutual type recursion
if (get_base_type(t.elements.at(i).type)->value == get_base_type(type)->value) {
raise << "container " << t.name << " can't include itself as a member\n" << end();
return 0;
}
result += size_of(element_type(type, i));
}
return result;
}
if (t.kind == EXCLUSIVE_CONTAINER) {
// size of an exclusive container is the size of its largest variant
// (So like containers, it can't contain arrays.)
int result = 0;
for (int i = 0; i < SIZE(t.elements); ++i) {
reagent tmp;
tmp.type = new type_tree(*type);
int size = size_of(variant_type(tmp, i));
if (size > result) result = size;
}
// ...+1 for its tag.
return result+1;
}
:(code)
5001 - drop the :(scenario) DSL I've been saying for a while[1][2][3] that adding extra abstractions makes things harder for newcomers, and adding new notations doubly so. And then I notice this DSL in my own backyard. Makes me feel like a hypocrite. [1] https://news.ycombinator.com/item?id=13565743#13570092 [2] https://lobste.rs/s/to8wpr/configuration_files_are_canary_warning [3] https://lobste.rs/s/mdmcdi/little_languages_by_jon_bentley_1986#c_3miuf2 The implementation of the DSL was also highly hacky: a) It was happening in the tangle/ tool, but was utterly unrelated to tangling layers. b) There were several persnickety constraints on the different kinds of lines and the specific order they were expected in. I kept finding bugs where the translator would silently do the wrong thing. Or the error messages sucked, and readers may be stuck looking at the generated code to figure out what happened. Fixing error messages would require a lot more code, which is one of my arguments against DSLs in the first place: they may be easy to implement, but they're hard to design to go with the grain of the underlying platform. They require lots of iteration. Is that effort worth prioritizing in this project? On the other hand, the DSL did make at least some readers' life easier, the ones who weren't immediately put off by having to learn a strange syntax. There were fewer quotes to parse, fewer backslash escapes. Anyway, since there are also people who dislike having to put up with strange syntaxes, we'll call that consideration a wash and tear this DSL out. --- This commit was sheer drudgery. Hopefully it won't need to be redone with a new DSL because I grow sick of backslashes.
2019-03-13 01:56:55 +00:00
void test_complex_shape_shifting_exclusive_container() {
run(
"exclusive-container foo:_a [\n"
" x:_a\n"
" y:num\n"
"]\n"
"def main [\n"
" 1:text <- new [abc]\n"
" 3:foo:point <- merge 0/variant, 34/xx, 35/xy\n"
" 10:point, 20:bool <- maybe-convert 3:foo:point, 0/variant\n"
"]\n"
);
CHECK_TRACE_CONTENTS(
"mem: storing 1 in location 20\n"
"mem: storing 35 in location 11\n"
);
}
bool contains_type_ingredient(const reagent& x) {
return contains_type_ingredient(x.type);
}
bool contains_type_ingredient(const type_tree* type) {
if (!type) return false;
3309 Rip out everything to fix one failing unit test (commit 3290; type abbreviations). This commit does several things at once that I couldn't come up with a clean way to unpack: A. It moves to a new representation for type trees without changing the actual definition of the `type_tree` struct. B. It adds unit tests for our type metadata precomputation, so that errors there show up early and in a simpler setting rather than dying when we try to load Mu code. C. It fixes a bug, guarding against infinite loops when precomputing metadata for recursive shape-shifting containers. To do this it uses a dumb way of comparing type_trees, comparing their string representations instead. That is likely incredibly inefficient. Perhaps due to C, this commit has made Mu incredibly slow. Running all tests for the core and the edit/ app now takes 6.5 minutes rather than 3.5 minutes. == more notes and details I've been struggling for the past week now to back out of a bad design decision, a premature optimization from the early days: storing atoms directly in the 'value' slot of a cons cell rather than creating a special 'atom' cons cell and storing it on the 'left' slot. In other words, if a cons cell looks like this: o / | \ left val right ..then the type_tree (a b c) used to look like this (before this commit): o | \ a o | \ b o | \ c null ..rather than like this 'classic' approach to s-expressions which never mixes val and right (which is what we now have): o / \ o o | / \ a o o | / \ b o null | c The old approach made several operations more complicated, most recently the act of replacing a (possibly atom/leaf) sub-tree with another. That was the final straw that got me to realize the contortions I was going through to save a few type_tree nodes (cons cells). Switching to the new approach was hard partly because I've been using the old approach for so long and type_tree manipulations had pervaded everything. Another issue I ran into was the realization that my layers were not cleanly separated. Key parts of early layers (precomputing type metadata) existed purely for far later ones (shape-shifting types). Layers I got repeatedly stuck at: 1. the transform for precomputing type sizes (layer 30) 2. type-checks on merge instructions (layer 31) 3. the transform for precomputing address offsets in types (layer 36) 4. replace operations in supporting shape-shifting recipes (layer 55) After much thrashing I finally noticed that it wasn't the entirety of these layers that was giving me trouble, but just the type metadata precomputation, which had bugs that weren't manifesting until 30 layers later. Or, worse, when loading .mu files before any tests had had a chance to run. A common failure mode was running into types at run time that I hadn't precomputed metadata for at transform time. Digging into these bugs got me to realize that what I had before wasn't really very good, but a half-assed heuristic approach that did a whole lot of extra work precomputing metadata for utterly meaningless types like `((address number) 3)` which just happened to be part of a larger type like `(array (address number) 3)`. So, I redid it all. I switched the representation of types (because the old representation made unit tests difficult to retrofit) and added unit tests to the metadata precomputation. I also made layer 30 only do the minimal metadata precomputation it needs for the concepts introduced until then. In the process, I also made the precomputation more correct than before, and added hooks in the right place so that I could augment the logic when I introduced shape-shifting containers. == lessons learned There's several levels of hygiene when it comes to layers: 1. Every layer introduces precisely what it needs and in the simplest way possible. If I was building an app until just that layer, nothing would seem over-engineered. 2. Some layers are fore-shadowing features in future layers. Sometimes this is ok. For example, layer 10 foreshadows containers and arrays and so on without actually supporting them. That is a net win because it lets me lay out the core of Mu's data structures out in one place. But if the fore-shadowing gets too complex things get nasty. Not least because it can be hard to write unit tests for features before you provide the plumbing to visualize and manipulate them. 3. A layer is introducing features that are tested only in later layers. 4. A layer is introducing features with tests that are invalidated in later layers. (This I knew from early on to be an obviously horrendous idea.) Summary: avoid Level 2 (foreshadowing layers) as much as possible. Tolerate it indefinitely for small things where the code stays simple over time, but become strict again when things start to get more complex. Level 3 is mostly a net lose, but sometimes it can be expedient (a real case of the usually grossly over-applied term "technical debt"), and it's better than the conventional baseline of no layers and no scenarios. Just clean it up as soon as possible. Definitely avoid layer 4 at any time. == minor lessons Avoid unit tests for trivial things, write scenarios in context as much as possible. But within those margins unit tests are fine. Just introduce them before any scenarios (commit 3297). Reorganizing layers can be easy. Just merge layers for starters! Punt on resplitting them in some new way until you've gotten them to work. This is the wisdom of Refactoring: small steps. What made it hard was not wanting to merge *everything* between layer 30 and 55. The eventual insight was realizing I just need to move those two full-strength transforms and nothing else.
2016-09-10 01:32:52 +00:00
if (type->atom) return type->value >= START_TYPE_INGREDIENTS;
return contains_type_ingredient(type->left) || contains_type_ingredient(type->right);
}
void replace_type_ingredients(reagent& element, const type_tree* caller_type, const type_info& info, const string& location_for_error_messages) {
if (contains_type_ingredient(element)) {
if (!caller_type->right)
raise << "illegal type " << names_to_string(caller_type) << " seems to be missing a type ingredient or three" << location_for_error_messages << '\n' << end();
2016-11-06 20:00:39 +00:00
replace_type_ingredients(element.type, caller_type->right, info, location_for_error_messages);
}
}
// replace all type_ingredients in element_type with corresponding elements of callsite_type
2016-11-06 20:00:39 +00:00
void replace_type_ingredients(type_tree* element_type, const type_tree* callsite_type, const type_info& container_info, const string& location_for_error_messages) {
2015-11-04 21:14:41 +00:00
if (!callsite_type) return; // error but it's already been raised above
if (!element_type) return;
3309 Rip out everything to fix one failing unit test (commit 3290; type abbreviations). This commit does several things at once that I couldn't come up with a clean way to unpack: A. It moves to a new representation for type trees without changing the actual definition of the `type_tree` struct. B. It adds unit tests for our type metadata precomputation, so that errors there show up early and in a simpler setting rather than dying when we try to load Mu code. C. It fixes a bug, guarding against infinite loops when precomputing metadata for recursive shape-shifting containers. To do this it uses a dumb way of comparing type_trees, comparing their string representations instead. That is likely incredibly inefficient. Perhaps due to C, this commit has made Mu incredibly slow. Running all tests for the core and the edit/ app now takes 6.5 minutes rather than 3.5 minutes. == more notes and details I've been struggling for the past week now to back out of a bad design decision, a premature optimization from the early days: storing atoms directly in the 'value' slot of a cons cell rather than creating a special 'atom' cons cell and storing it on the 'left' slot. In other words, if a cons cell looks like this: o / | \ left val right ..then the type_tree (a b c) used to look like this (before this commit): o | \ a o | \ b o | \ c null ..rather than like this 'classic' approach to s-expressions which never mixes val and right (which is what we now have): o / \ o o | / \ a o o | / \ b o null | c The old approach made several operations more complicated, most recently the act of replacing a (possibly atom/leaf) sub-tree with another. That was the final straw that got me to realize the contortions I was going through to save a few type_tree nodes (cons cells). Switching to the new approach was hard partly because I've been using the old approach for so long and type_tree manipulations had pervaded everything. Another issue I ran into was the realization that my layers were not cleanly separated. Key parts of early layers (precomputing type metadata) existed purely for far later ones (shape-shifting types). Layers I got repeatedly stuck at: 1. the transform for precomputing type sizes (layer 30) 2. type-checks on merge instructions (layer 31) 3. the transform for precomputing address offsets in types (layer 36) 4. replace operations in supporting shape-shifting recipes (layer 55) After much thrashing I finally noticed that it wasn't the entirety of these layers that was giving me trouble, but just the type metadata precomputation, which had bugs that weren't manifesting until 30 layers later. Or, worse, when loading .mu files before any tests had had a chance to run. A common failure mode was running into types at run time that I hadn't precomputed metadata for at transform time. Digging into these bugs got me to realize that what I had before wasn't really very good, but a half-assed heuristic approach that did a whole lot of extra work precomputing metadata for utterly meaningless types like `((address number) 3)` which just happened to be part of a larger type like `(array (address number) 3)`. So, I redid it all. I switched the representation of types (because the old representation made unit tests difficult to retrofit) and added unit tests to the metadata precomputation. I also made layer 30 only do the minimal metadata precomputation it needs for the concepts introduced until then. In the process, I also made the precomputation more correct than before, and added hooks in the right place so that I could augment the logic when I introduced shape-shifting containers. == lessons learned There's several levels of hygiene when it comes to layers: 1. Every layer introduces precisely what it needs and in the simplest way possible. If I was building an app until just that layer, nothing would seem over-engineered. 2. Some layers are fore-shadowing features in future layers. Sometimes this is ok. For example, layer 10 foreshadows containers and arrays and so on without actually supporting them. That is a net win because it lets me lay out the core of Mu's data structures out in one place. But if the fore-shadowing gets too complex things get nasty. Not least because it can be hard to write unit tests for features before you provide the plumbing to visualize and manipulate them. 3. A layer is introducing features that are tested only in later layers. 4. A layer is introducing features with tests that are invalidated in later layers. (This I knew from early on to be an obviously horrendous idea.) Summary: avoid Level 2 (foreshadowing layers) as much as possible. Tolerate it indefinitely for small things where the code stays simple over time, but become strict again when things start to get more complex. Level 3 is mostly a net lose, but sometimes it can be expedient (a real case of the usually grossly over-applied term "technical debt"), and it's better than the conventional baseline of no layers and no scenarios. Just clean it up as soon as possible. Definitely avoid layer 4 at any time. == minor lessons Avoid unit tests for trivial things, write scenarios in context as much as possible. But within those margins unit tests are fine. Just introduce them before any scenarios (commit 3297). Reorganizing layers can be easy. Just merge layers for starters! Punt on resplitting them in some new way until you've gotten them to work. This is the wisdom of Refactoring: small steps. What made it hard was not wanting to merge *everything* between layer 30 and 55. The eventual insight was realizing I just need to move those two full-strength transforms and nothing else.
2016-09-10 01:32:52 +00:00
if (!element_type->atom) {
3663 - fix a refcounting bug: '(type)' != 'type' This was a large commit, and most of it is a follow-up to commit 3309, undoing what is probably the final ill-considered optimization I added to s-expressions in Mu: I was always representing (a b c) as (a b . c), etc. That is now gone. Why did I need to take it out? The key problem was the error silently ignored in layer 30. That was causing size_of("(type)") to silently return garbage rather than loudly complain (assuming 'type' was a simple type). But to take it out I had to modify types_strictly_match (layer 21) to actually strictly match and not just do a prefix match. In the process of removing the prefix match, I had to make extracting recipe types from recipe headers more robust. So far it only matched the first element of each ingredient's type; these matched: (recipe address:number -> address:number) (recipe address -> address) I didn't notice because the dotted notation optimization was actually representing this as: (recipe address:number -> address number) --- One final little thing in this commit: I added an alias for 'assert' called 'assert_for_now', to indicate that I'm not sure something's really an invariant, that it might be triggered by (invalid) user programs, and so require more thought on error handling down the road. But this may well be an ill-posed distinction. It may be overwhelmingly uneconomic to continually distinguish between model invariants and error states for input. I'm starting to grow sympathetic to Google Analytics's recent approach of just banning assertions altogether. We'll see..
2016-11-11 05:39:02 +00:00
if (element_type->right == NULL && is_type_ingredient(element_type->left)) {
int type_ingredient_index = to_type_ingredient_index(element_type->left);
if (corresponding(callsite_type, type_ingredient_index, is_final_type_ingredient(type_ingredient_index, container_info))->right) {
// replacing type ingredient at end of list, and replacement is a non-degenerate compound type -- (a b) but not (a)
replace_type_ingredient_at(type_ingredient_index, element_type, callsite_type, container_info, location_for_error_messages);
return;
}
}
2016-11-06 20:00:39 +00:00
replace_type_ingredients(element_type->left, callsite_type, container_info, location_for_error_messages);
replace_type_ingredients(element_type->right, callsite_type, container_info, location_for_error_messages);
3309 Rip out everything to fix one failing unit test (commit 3290; type abbreviations). This commit does several things at once that I couldn't come up with a clean way to unpack: A. It moves to a new representation for type trees without changing the actual definition of the `type_tree` struct. B. It adds unit tests for our type metadata precomputation, so that errors there show up early and in a simpler setting rather than dying when we try to load Mu code. C. It fixes a bug, guarding against infinite loops when precomputing metadata for recursive shape-shifting containers. To do this it uses a dumb way of comparing type_trees, comparing their string representations instead. That is likely incredibly inefficient. Perhaps due to C, this commit has made Mu incredibly slow. Running all tests for the core and the edit/ app now takes 6.5 minutes rather than 3.5 minutes. == more notes and details I've been struggling for the past week now to back out of a bad design decision, a premature optimization from the early days: storing atoms directly in the 'value' slot of a cons cell rather than creating a special 'atom' cons cell and storing it on the 'left' slot. In other words, if a cons cell looks like this: o / | \ left val right ..then the type_tree (a b c) used to look like this (before this commit): o | \ a o | \ b o | \ c null ..rather than like this 'classic' approach to s-expressions which never mixes val and right (which is what we now have): o / \ o o | / \ a o o | / \ b o null | c The old approach made several operations more complicated, most recently the act of replacing a (possibly atom/leaf) sub-tree with another. That was the final straw that got me to realize the contortions I was going through to save a few type_tree nodes (cons cells). Switching to the new approach was hard partly because I've been using the old approach for so long and type_tree manipulations had pervaded everything. Another issue I ran into was the realization that my layers were not cleanly separated. Key parts of early layers (precomputing type metadata) existed purely for far later ones (shape-shifting types). Layers I got repeatedly stuck at: 1. the transform for precomputing type sizes (layer 30) 2. type-checks on merge instructions (layer 31) 3. the transform for precomputing address offsets in types (layer 36) 4. replace operations in supporting shape-shifting recipes (layer 55) After much thrashing I finally noticed that it wasn't the entirety of these layers that was giving me trouble, but just the type metadata precomputation, which had bugs that weren't manifesting until 30 layers later. Or, worse, when loading .mu files before any tests had had a chance to run. A common failure mode was running into types at run time that I hadn't precomputed metadata for at transform time. Digging into these bugs got me to realize that what I had before wasn't really very good, but a half-assed heuristic approach that did a whole lot of extra work precomputing metadata for utterly meaningless types like `((address number) 3)` which just happened to be part of a larger type like `(array (address number) 3)`. So, I redid it all. I switched the representation of types (because the old representation made unit tests difficult to retrofit) and added unit tests to the metadata precomputation. I also made layer 30 only do the minimal metadata precomputation it needs for the concepts introduced until then. In the process, I also made the precomputation more correct than before, and added hooks in the right place so that I could augment the logic when I introduced shape-shifting containers. == lessons learned There's several levels of hygiene when it comes to layers: 1. Every layer introduces precisely what it needs and in the simplest way possible. If I was building an app until just that layer, nothing would seem over-engineered. 2. Some layers are fore-shadowing features in future layers. Sometimes this is ok. For example, layer 10 foreshadows containers and arrays and so on without actually supporting them. That is a net win because it lets me lay out the core of Mu's data structures out in one place. But if the fore-shadowing gets too complex things get nasty. Not least because it can be hard to write unit tests for features before you provide the plumbing to visualize and manipulate them. 3. A layer is introducing features that are tested only in later layers. 4. A layer is introducing features with tests that are invalidated in later layers. (This I knew from early on to be an obviously horrendous idea.) Summary: avoid Level 2 (foreshadowing layers) as much as possible. Tolerate it indefinitely for small things where the code stays simple over time, but become strict again when things start to get more complex. Level 3 is mostly a net lose, but sometimes it can be expedient (a real case of the usually grossly over-applied term "technical debt"), and it's better than the conventional baseline of no layers and no scenarios. Just clean it up as soon as possible. Definitely avoid layer 4 at any time. == minor lessons Avoid unit tests for trivial things, write scenarios in context as much as possible. But within those margins unit tests are fine. Just introduce them before any scenarios (commit 3297). Reorganizing layers can be easy. Just merge layers for starters! Punt on resplitting them in some new way until you've gotten them to work. This is the wisdom of Refactoring: small steps. What made it hard was not wanting to merge *everything* between layer 30 and 55. The eventual insight was realizing I just need to move those two full-strength transforms and nothing else.
2016-09-10 01:32:52 +00:00
return;
}
3663 - fix a refcounting bug: '(type)' != 'type' This was a large commit, and most of it is a follow-up to commit 3309, undoing what is probably the final ill-considered optimization I added to s-expressions in Mu: I was always representing (a b c) as (a b . c), etc. That is now gone. Why did I need to take it out? The key problem was the error silently ignored in layer 30. That was causing size_of("(type)") to silently return garbage rather than loudly complain (assuming 'type' was a simple type). But to take it out I had to modify types_strictly_match (layer 21) to actually strictly match and not just do a prefix match. In the process of removing the prefix match, I had to make extracting recipe types from recipe headers more robust. So far it only matched the first element of each ingredient's type; these matched: (recipe address:number -> address:number) (recipe address -> address) I didn't notice because the dotted notation optimization was actually representing this as: (recipe address:number -> address number) --- One final little thing in this commit: I added an alias for 'assert' called 'assert_for_now', to indicate that I'm not sure something's really an invariant, that it might be triggered by (invalid) user programs, and so require more thought on error handling down the road. But this may well be an ill-posed distinction. It may be overwhelmingly uneconomic to continually distinguish between model invariants and error states for input. I'm starting to grow sympathetic to Google Analytics's recent approach of just banning assertions altogether. We'll see..
2016-11-11 05:39:02 +00:00
if (is_type_ingredient(element_type))
replace_type_ingredient_at(to_type_ingredient_index(element_type), element_type, callsite_type, container_info, location_for_error_messages);
}
const type_tree* corresponding(const type_tree* type, int index, bool final) {
for (const type_tree* curr = type; curr; curr = curr->right, --index) {
assert_for_now(!curr->atom);
if (index == 0)
return final ? curr : curr->left;
}
assert_for_now(false);
}
bool is_type_ingredient(const type_tree* type) {
return type->atom && type->value >= START_TYPE_INGREDIENTS;
}
int to_type_ingredient_index(const type_tree* type) {
assert(type->atom);
return type->value-START_TYPE_INGREDIENTS;
}
void replace_type_ingredient_at(const int type_ingredient_index, type_tree* element_type, const type_tree* callsite_type, const type_info& container_info, const string& location_for_error_messages) {
if (!has_nth_type(callsite_type, type_ingredient_index)) {
2016-11-06 20:00:39 +00:00
raise << "illegal type " << names_to_string(callsite_type) << " seems to be missing a type ingredient or three" << location_for_error_messages << '\n' << end();
return;
}
3309 Rip out everything to fix one failing unit test (commit 3290; type abbreviations). This commit does several things at once that I couldn't come up with a clean way to unpack: A. It moves to a new representation for type trees without changing the actual definition of the `type_tree` struct. B. It adds unit tests for our type metadata precomputation, so that errors there show up early and in a simpler setting rather than dying when we try to load Mu code. C. It fixes a bug, guarding against infinite loops when precomputing metadata for recursive shape-shifting containers. To do this it uses a dumb way of comparing type_trees, comparing their string representations instead. That is likely incredibly inefficient. Perhaps due to C, this commit has made Mu incredibly slow. Running all tests for the core and the edit/ app now takes 6.5 minutes rather than 3.5 minutes. == more notes and details I've been struggling for the past week now to back out of a bad design decision, a premature optimization from the early days: storing atoms directly in the 'value' slot of a cons cell rather than creating a special 'atom' cons cell and storing it on the 'left' slot. In other words, if a cons cell looks like this: o / | \ left val right ..then the type_tree (a b c) used to look like this (before this commit): o | \ a o | \ b o | \ c null ..rather than like this 'classic' approach to s-expressions which never mixes val and right (which is what we now have): o / \ o o | / \ a o o | / \ b o null | c The old approach made several operations more complicated, most recently the act of replacing a (possibly atom/leaf) sub-tree with another. That was the final straw that got me to realize the contortions I was going through to save a few type_tree nodes (cons cells). Switching to the new approach was hard partly because I've been using the old approach for so long and type_tree manipulations had pervaded everything. Another issue I ran into was the realization that my layers were not cleanly separated. Key parts of early layers (precomputing type metadata) existed purely for far later ones (shape-shifting types). Layers I got repeatedly stuck at: 1. the transform for precomputing type sizes (layer 30) 2. type-checks on merge instructions (layer 31) 3. the transform for precomputing address offsets in types (layer 36) 4. replace operations in supporting shape-shifting recipes (layer 55) After much thrashing I finally noticed that it wasn't the entirety of these layers that was giving me trouble, but just the type metadata precomputation, which had bugs that weren't manifesting until 30 layers later. Or, worse, when loading .mu files before any tests had had a chance to run. A common failure mode was running into types at run time that I hadn't precomputed metadata for at transform time. Digging into these bugs got me to realize that what I had before wasn't really very good, but a half-assed heuristic approach that did a whole lot of extra work precomputing metadata for utterly meaningless types like `((address number) 3)` which just happened to be part of a larger type like `(array (address number) 3)`. So, I redid it all. I switched the representation of types (because the old representation made unit tests difficult to retrofit) and added unit tests to the metadata precomputation. I also made layer 30 only do the minimal metadata precomputation it needs for the concepts introduced until then. In the process, I also made the precomputation more correct than before, and added hooks in the right place so that I could augment the logic when I introduced shape-shifting containers. == lessons learned There's several levels of hygiene when it comes to layers: 1. Every layer introduces precisely what it needs and in the simplest way possible. If I was building an app until just that layer, nothing would seem over-engineered. 2. Some layers are fore-shadowing features in future layers. Sometimes this is ok. For example, layer 10 foreshadows containers and arrays and so on without actually supporting them. That is a net win because it lets me lay out the core of Mu's data structures out in one place. But if the fore-shadowing gets too complex things get nasty. Not least because it can be hard to write unit tests for features before you provide the plumbing to visualize and manipulate them. 3. A layer is introducing features that are tested only in later layers. 4. A layer is introducing features with tests that are invalidated in later layers. (This I knew from early on to be an obviously horrendous idea.) Summary: avoid Level 2 (foreshadowing layers) as much as possible. Tolerate it indefinitely for small things where the code stays simple over time, but become strict again when things start to get more complex. Level 3 is mostly a net lose, but sometimes it can be expedient (a real case of the usually grossly over-applied term "technical debt"), and it's better than the conventional baseline of no layers and no scenarios. Just clean it up as soon as possible. Definitely avoid layer 4 at any time. == minor lessons Avoid unit tests for trivial things, write scenarios in context as much as possible. But within those margins unit tests are fine. Just introduce them before any scenarios (commit 3297). Reorganizing layers can be easy. Just merge layers for starters! Punt on resplitting them in some new way until you've gotten them to work. This is the wisdom of Refactoring: small steps. What made it hard was not wanting to merge *everything* between layer 30 and 55. The eventual insight was realizing I just need to move those two full-strength transforms and nothing else.
2016-09-10 01:32:52 +00:00
*element_type = *nth_type_ingredient(callsite_type, type_ingredient_index, container_info);
}
3309 Rip out everything to fix one failing unit test (commit 3290; type abbreviations). This commit does several things at once that I couldn't come up with a clean way to unpack: A. It moves to a new representation for type trees without changing the actual definition of the `type_tree` struct. B. It adds unit tests for our type metadata precomputation, so that errors there show up early and in a simpler setting rather than dying when we try to load Mu code. C. It fixes a bug, guarding against infinite loops when precomputing metadata for recursive shape-shifting containers. To do this it uses a dumb way of comparing type_trees, comparing their string representations instead. That is likely incredibly inefficient. Perhaps due to C, this commit has made Mu incredibly slow. Running all tests for the core and the edit/ app now takes 6.5 minutes rather than 3.5 minutes. == more notes and details I've been struggling for the past week now to back out of a bad design decision, a premature optimization from the early days: storing atoms directly in the 'value' slot of a cons cell rather than creating a special 'atom' cons cell and storing it on the 'left' slot. In other words, if a cons cell looks like this: o / | \ left val right ..then the type_tree (a b c) used to look like this (before this commit): o | \ a o | \ b o | \ c null ..rather than like this 'classic' approach to s-expressions which never mixes val and right (which is what we now have): o / \ o o | / \ a o o | / \ b o null | c The old approach made several operations more complicated, most recently the act of replacing a (possibly atom/leaf) sub-tree with another. That was the final straw that got me to realize the contortions I was going through to save a few type_tree nodes (cons cells). Switching to the new approach was hard partly because I've been using the old approach for so long and type_tree manipulations had pervaded everything. Another issue I ran into was the realization that my layers were not cleanly separated. Key parts of early layers (precomputing type metadata) existed purely for far later ones (shape-shifting types). Layers I got repeatedly stuck at: 1. the transform for precomputing type sizes (layer 30) 2. type-checks on merge instructions (layer 31) 3. the transform for precomputing address offsets in types (layer 36) 4. replace operations in supporting shape-shifting recipes (layer 55) After much thrashing I finally noticed that it wasn't the entirety of these layers that was giving me trouble, but just the type metadata precomputation, which had bugs that weren't manifesting until 30 layers later. Or, worse, when loading .mu files before any tests had had a chance to run. A common failure mode was running into types at run time that I hadn't precomputed metadata for at transform time. Digging into these bugs got me to realize that what I had before wasn't really very good, but a half-assed heuristic approach that did a whole lot of extra work precomputing metadata for utterly meaningless types like `((address number) 3)` which just happened to be part of a larger type like `(array (address number) 3)`. So, I redid it all. I switched the representation of types (because the old representation made unit tests difficult to retrofit) and added unit tests to the metadata precomputation. I also made layer 30 only do the minimal metadata precomputation it needs for the concepts introduced until then. In the process, I also made the precomputation more correct than before, and added hooks in the right place so that I could augment the logic when I introduced shape-shifting containers. == lessons learned There's several levels of hygiene when it comes to layers: 1. Every layer introduces precisely what it needs and in the simplest way possible. If I was building an app until just that layer, nothing would seem over-engineered. 2. Some layers are fore-shadowing features in future layers. Sometimes this is ok. For example, layer 10 foreshadows containers and arrays and so on without actually supporting them. That is a net win because it lets me lay out the core of Mu's data structures out in one place. But if the fore-shadowing gets too complex things get nasty. Not least because it can be hard to write unit tests for features before you provide the plumbing to visualize and manipulate them. 3. A layer is introducing features that are tested only in later layers. 4. A layer is introducing features with tests that are invalidated in later layers. (This I knew from early on to be an obviously horrendous idea.) Summary: avoid Level 2 (foreshadowing layers) as much as possible. Tolerate it indefinitely for small things where the code stays simple over time, but become strict again when things start to get more complex. Level 3 is mostly a net lose, but sometimes it can be expedient (a real case of the usually grossly over-applied term "technical debt"), and it's better than the conventional baseline of no layers and no scenarios. Just clean it up as soon as possible. Definitely avoid layer 4 at any time. == minor lessons Avoid unit tests for trivial things, write scenarios in context as much as possible. But within those margins unit tests are fine. Just introduce them before any scenarios (commit 3297). Reorganizing layers can be easy. Just merge layers for starters! Punt on resplitting them in some new way until you've gotten them to work. This is the wisdom of Refactoring: small steps. What made it hard was not wanting to merge *everything* between layer 30 and 55. The eventual insight was realizing I just need to move those two full-strength transforms and nothing else.
2016-09-10 01:32:52 +00:00
const type_tree* nth_type_ingredient(const type_tree* callsite_type, int type_ingredient_index, const type_info& container_info) {
2016-11-11 05:36:51 +00:00
bool final = is_final_type_ingredient(type_ingredient_index, container_info);
3309 Rip out everything to fix one failing unit test (commit 3290; type abbreviations). This commit does several things at once that I couldn't come up with a clean way to unpack: A. It moves to a new representation for type trees without changing the actual definition of the `type_tree` struct. B. It adds unit tests for our type metadata precomputation, so that errors there show up early and in a simpler setting rather than dying when we try to load Mu code. C. It fixes a bug, guarding against infinite loops when precomputing metadata for recursive shape-shifting containers. To do this it uses a dumb way of comparing type_trees, comparing their string representations instead. That is likely incredibly inefficient. Perhaps due to C, this commit has made Mu incredibly slow. Running all tests for the core and the edit/ app now takes 6.5 minutes rather than 3.5 minutes. == more notes and details I've been struggling for the past week now to back out of a bad design decision, a premature optimization from the early days: storing atoms directly in the 'value' slot of a cons cell rather than creating a special 'atom' cons cell and storing it on the 'left' slot. In other words, if a cons cell looks like this: o / | \ left val right ..then the type_tree (a b c) used to look like this (before this commit): o | \ a o | \ b o | \ c null ..rather than like this 'classic' approach to s-expressions which never mixes val and right (which is what we now have): o / \ o o | / \ a o o | / \ b o null | c The old approach made several operations more complicated, most recently the act of replacing a (possibly atom/leaf) sub-tree with another. That was the final straw that got me to realize the contortions I was going through to save a few type_tree nodes (cons cells). Switching to the new approach was hard partly because I've been using the old approach for so long and type_tree manipulations had pervaded everything. Another issue I ran into was the realization that my layers were not cleanly separated. Key parts of early layers (precomputing type metadata) existed purely for far later ones (shape-shifting types). Layers I got repeatedly stuck at: 1. the transform for precomputing type sizes (layer 30) 2. type-checks on merge instructions (layer 31) 3. the transform for precomputing address offsets in types (layer 36) 4. replace operations in supporting shape-shifting recipes (layer 55) After much thrashing I finally noticed that it wasn't the entirety of these layers that was giving me trouble, but just the type metadata precomputation, which had bugs that weren't manifesting until 30 layers later. Or, worse, when loading .mu files before any tests had had a chance to run. A common failure mode was running into types at run time that I hadn't precomputed metadata for at transform time. Digging into these bugs got me to realize that what I had before wasn't really very good, but a half-assed heuristic approach that did a whole lot of extra work precomputing metadata for utterly meaningless types like `((address number) 3)` which just happened to be part of a larger type like `(array (address number) 3)`. So, I redid it all. I switched the representation of types (because the old representation made unit tests difficult to retrofit) and added unit tests to the metadata precomputation. I also made layer 30 only do the minimal metadata precomputation it needs for the concepts introduced until then. In the process, I also made the precomputation more correct than before, and added hooks in the right place so that I could augment the logic when I introduced shape-shifting containers. == lessons learned There's several levels of hygiene when it comes to layers: 1. Every layer introduces precisely what it needs and in the simplest way possible. If I was building an app until just that layer, nothing would seem over-engineered. 2. Some layers are fore-shadowing features in future layers. Sometimes this is ok. For example, layer 10 foreshadows containers and arrays and so on without actually supporting them. That is a net win because it lets me lay out the core of Mu's data structures out in one place. But if the fore-shadowing gets too complex things get nasty. Not least because it can be hard to write unit tests for features before you provide the plumbing to visualize and manipulate them. 3. A layer is introducing features that are tested only in later layers. 4. A layer is introducing features with tests that are invalidated in later layers. (This I knew from early on to be an obviously horrendous idea.) Summary: avoid Level 2 (foreshadowing layers) as much as possible. Tolerate it indefinitely for small things where the code stays simple over time, but become strict again when things start to get more complex. Level 3 is mostly a net lose, but sometimes it can be expedient (a real case of the usually grossly over-applied term "technical debt"), and it's better than the conventional baseline of no layers and no scenarios. Just clean it up as soon as possible. Definitely avoid layer 4 at any time. == minor lessons Avoid unit tests for trivial things, write scenarios in context as much as possible. But within those margins unit tests are fine. Just introduce them before any scenarios (commit 3297). Reorganizing layers can be easy. Just merge layers for starters! Punt on resplitting them in some new way until you've gotten them to work. This is the wisdom of Refactoring: small steps. What made it hard was not wanting to merge *everything* between layer 30 and 55. The eventual insight was realizing I just need to move those two full-strength transforms and nothing else.
2016-09-10 01:32:52 +00:00
const type_tree* curr = callsite_type;
2016-10-20 05:10:35 +00:00
for (int i = 0; i < type_ingredient_index; ++i) {
3309 Rip out everything to fix one failing unit test (commit 3290; type abbreviations). This commit does several things at once that I couldn't come up with a clean way to unpack: A. It moves to a new representation for type trees without changing the actual definition of the `type_tree` struct. B. It adds unit tests for our type metadata precomputation, so that errors there show up early and in a simpler setting rather than dying when we try to load Mu code. C. It fixes a bug, guarding against infinite loops when precomputing metadata for recursive shape-shifting containers. To do this it uses a dumb way of comparing type_trees, comparing their string representations instead. That is likely incredibly inefficient. Perhaps due to C, this commit has made Mu incredibly slow. Running all tests for the core and the edit/ app now takes 6.5 minutes rather than 3.5 minutes. == more notes and details I've been struggling for the past week now to back out of a bad design decision, a premature optimization from the early days: storing atoms directly in the 'value' slot of a cons cell rather than creating a special 'atom' cons cell and storing it on the 'left' slot. In other words, if a cons cell looks like this: o / | \ left val right ..then the type_tree (a b c) used to look like this (before this commit): o | \ a o | \ b o | \ c null ..rather than like this 'classic' approach to s-expressions which never mixes val and right (which is what we now have): o / \ o o | / \ a o o | / \ b o null | c The old approach made several operations more complicated, most recently the act of replacing a (possibly atom/leaf) sub-tree with another. That was the final straw that got me to realize the contortions I was going through to save a few type_tree nodes (cons cells). Switching to the new approach was hard partly because I've been using the old approach for so long and type_tree manipulations had pervaded everything. Another issue I ran into was the realization that my layers were not cleanly separated. Key parts of early layers (precomputing type metadata) existed purely for far later ones (shape-shifting types). Layers I got repeatedly stuck at: 1. the transform for precomputing type sizes (layer 30) 2. type-checks on merge instructions (layer 31) 3. the transform for precomputing address offsets in types (layer 36) 4. replace operations in supporting shape-shifting recipes (layer 55) After much thrashing I finally noticed that it wasn't the entirety of these layers that was giving me trouble, but just the type metadata precomputation, which had bugs that weren't manifesting until 30 layers later. Or, worse, when loading .mu files before any tests had had a chance to run. A common failure mode was running into types at run time that I hadn't precomputed metadata for at transform time. Digging into these bugs got me to realize that what I had before wasn't really very good, but a half-assed heuristic approach that did a whole lot of extra work precomputing metadata for utterly meaningless types like `((address number) 3)` which just happened to be part of a larger type like `(array (address number) 3)`. So, I redid it all. I switched the representation of types (because the old representation made unit tests difficult to retrofit) and added unit tests to the metadata precomputation. I also made layer 30 only do the minimal metadata precomputation it needs for the concepts introduced until then. In the process, I also made the precomputation more correct than before, and added hooks in the right place so that I could augment the logic when I introduced shape-shifting containers. == lessons learned There's several levels of hygiene when it comes to layers: 1. Every layer introduces precisely what it needs and in the simplest way possible. If I was building an app until just that layer, nothing would seem over-engineered. 2. Some layers are fore-shadowing features in future layers. Sometimes this is ok. For example, layer 10 foreshadows containers and arrays and so on without actually supporting them. That is a net win because it lets me lay out the core of Mu's data structures out in one place. But if the fore-shadowing gets too complex things get nasty. Not least because it can be hard to write unit tests for features before you provide the plumbing to visualize and manipulate them. 3. A layer is introducing features that are tested only in later layers. 4. A layer is introducing features with tests that are invalidated in later layers. (This I knew from early on to be an obviously horrendous idea.) Summary: avoid Level 2 (foreshadowing layers) as much as possible. Tolerate it indefinitely for small things where the code stays simple over time, but become strict again when things start to get more complex. Level 3 is mostly a net lose, but sometimes it can be expedient (a real case of the usually grossly over-applied term "technical debt"), and it's better than the conventional baseline of no layers and no scenarios. Just clean it up as soon as possible. Definitely avoid layer 4 at any time. == minor lessons Avoid unit tests for trivial things, write scenarios in context as much as possible. But within those margins unit tests are fine. Just introduce them before any scenarios (commit 3297). Reorganizing layers can be easy. Just merge layers for starters! Punt on resplitting them in some new way until you've gotten them to work. This is the wisdom of Refactoring: small steps. What made it hard was not wanting to merge *everything* between layer 30 and 55. The eventual insight was realizing I just need to move those two full-strength transforms and nothing else.
2016-09-10 01:32:52 +00:00
assert(curr);
assert(!curr->atom);
//? cerr << "type ingredient " << i << " is " << to_string(curr->left) << '\n';
curr = curr->right;
}
3309 Rip out everything to fix one failing unit test (commit 3290; type abbreviations). This commit does several things at once that I couldn't come up with a clean way to unpack: A. It moves to a new representation for type trees without changing the actual definition of the `type_tree` struct. B. It adds unit tests for our type metadata precomputation, so that errors there show up early and in a simpler setting rather than dying when we try to load Mu code. C. It fixes a bug, guarding against infinite loops when precomputing metadata for recursive shape-shifting containers. To do this it uses a dumb way of comparing type_trees, comparing their string representations instead. That is likely incredibly inefficient. Perhaps due to C, this commit has made Mu incredibly slow. Running all tests for the core and the edit/ app now takes 6.5 minutes rather than 3.5 minutes. == more notes and details I've been struggling for the past week now to back out of a bad design decision, a premature optimization from the early days: storing atoms directly in the 'value' slot of a cons cell rather than creating a special 'atom' cons cell and storing it on the 'left' slot. In other words, if a cons cell looks like this: o / | \ left val right ..then the type_tree (a b c) used to look like this (before this commit): o | \ a o | \ b o | \ c null ..rather than like this 'classic' approach to s-expressions which never mixes val and right (which is what we now have): o / \ o o | / \ a o o | / \ b o null | c The old approach made several operations more complicated, most recently the act of replacing a (possibly atom/leaf) sub-tree with another. That was the final straw that got me to realize the contortions I was going through to save a few type_tree nodes (cons cells). Switching to the new approach was hard partly because I've been using the old approach for so long and type_tree manipulations had pervaded everything. Another issue I ran into was the realization that my layers were not cleanly separated. Key parts of early layers (precomputing type metadata) existed purely for far later ones (shape-shifting types). Layers I got repeatedly stuck at: 1. the transform for precomputing type sizes (layer 30) 2. type-checks on merge instructions (layer 31) 3. the transform for precomputing address offsets in types (layer 36) 4. replace operations in supporting shape-shifting recipes (layer 55) After much thrashing I finally noticed that it wasn't the entirety of these layers that was giving me trouble, but just the type metadata precomputation, which had bugs that weren't manifesting until 30 layers later. Or, worse, when loading .mu files before any tests had had a chance to run. A common failure mode was running into types at run time that I hadn't precomputed metadata for at transform time. Digging into these bugs got me to realize that what I had before wasn't really very good, but a half-assed heuristic approach that did a whole lot of extra work precomputing metadata for utterly meaningless types like `((address number) 3)` which just happened to be part of a larger type like `(array (address number) 3)`. So, I redid it all. I switched the representation of types (because the old representation made unit tests difficult to retrofit) and added unit tests to the metadata precomputation. I also made layer 30 only do the minimal metadata precomputation it needs for the concepts introduced until then. In the process, I also made the precomputation more correct than before, and added hooks in the right place so that I could augment the logic when I introduced shape-shifting containers. == lessons learned There's several levels of hygiene when it comes to layers: 1. Every layer introduces precisely what it needs and in the simplest way possible. If I was building an app until just that layer, nothing would seem over-engineered. 2. Some layers are fore-shadowing features in future layers. Sometimes this is ok. For example, layer 10 foreshadows containers and arrays and so on without actually supporting them. That is a net win because it lets me lay out the core of Mu's data structures out in one place. But if the fore-shadowing gets too complex things get nasty. Not least because it can be hard to write unit tests for features before you provide the plumbing to visualize and manipulate them. 3. A layer is introducing features that are tested only in later layers. 4. A layer is introducing features with tests that are invalidated in later layers. (This I knew from early on to be an obviously horrendous idea.) Summary: avoid Level 2 (foreshadowing layers) as much as possible. Tolerate it indefinitely for small things where the code stays simple over time, but become strict again when things start to get more complex. Level 3 is mostly a net lose, but sometimes it can be expedient (a real case of the usually grossly over-applied term "technical debt"), and it's better than the conventional baseline of no layers and no scenarios. Just clean it up as soon as possible. Definitely avoid layer 4 at any time. == minor lessons Avoid unit tests for trivial things, write scenarios in context as much as possible. But within those margins unit tests are fine. Just introduce them before any scenarios (commit 3297). Reorganizing layers can be easy. Just merge layers for starters! Punt on resplitting them in some new way until you've gotten them to work. This is the wisdom of Refactoring: small steps. What made it hard was not wanting to merge *everything* between layer 30 and 55. The eventual insight was realizing I just need to move those two full-strength transforms and nothing else.
2016-09-10 01:32:52 +00:00
assert(curr);
if (curr->atom) return curr;
if (!final) return curr->left;
if (!curr->right) return curr->left;
return curr;
}
2016-11-11 05:36:51 +00:00
bool is_final_type_ingredient(int type_ingredient_index, const type_info& container_info) {
for (map<string, type_ordinal>::const_iterator p = container_info.type_ingredient_names.begin();
p != container_info.type_ingredient_names.end();
++p) {
if (p->second > START_TYPE_INGREDIENTS+type_ingredient_index) return false;
}
return true;
}
:(before "End Unit Tests")
2016-02-19 07:47:16 +00:00
void test_replace_type_ingredients_entire() {
run("container foo:_elem [\n"
" x:_elem\n"
2016-09-17 17:28:25 +00:00
" y:num\n"
"]\n");
reagent callsite("x:foo:point");
2016-04-30 17:09:38 +00:00
reagent element = element_type(callsite.type, 0);
2016-09-09 23:37:42 +00:00
CHECK_EQ(to_string(element), "{x: \"point\"}");
}
2016-02-19 07:47:16 +00:00
void test_replace_type_ingredients_tail() {
run("container foo:_elem [\n"
" x:_elem\n"
"]\n"
"container bar:_elem [\n"
" x:foo:_elem\n"
"]\n");
reagent callsite("x:bar:point");
2016-04-30 17:09:38 +00:00
reagent element = element_type(callsite.type, 0);
2016-09-09 23:37:42 +00:00
CHECK_EQ(to_string(element), "{x: (\"foo\" \"point\")}");
}
2016-02-19 07:47:16 +00:00
void test_replace_type_ingredients_head_tail_multiple() {
run("container foo:_elem [\n"
" x:_elem\n"
"]\n"
"container bar:_elem [\n"
" x:foo:_elem\n"
"]\n");
reagent callsite("x:bar:address:array:character");
2016-04-30 17:09:38 +00:00
reagent element = element_type(callsite.type, 0);
2016-09-09 23:37:42 +00:00
CHECK_EQ(to_string(element), "{x: (\"foo\" \"address\" \"array\" \"character\")}");
}
2016-02-19 07:47:16 +00:00
void test_replace_type_ingredients_head_middle() {
run("container foo:_elem [\n"
" x:_elem\n"
"]\n"
"container bar:_elem [\n"
2016-09-17 17:28:25 +00:00
" x:foo:_elem:num\n"
"]\n");
reagent callsite("x:bar:address");
2016-04-30 17:09:38 +00:00
reagent element = element_type(callsite.type, 0);
2016-09-09 23:37:42 +00:00
CHECK_EQ(to_string(element), "{x: (\"foo\" \"address\" \"number\")}");
}
void test_replace_last_type_ingredient_with_multiple() {
run("container foo:_a:_b [\n"
" x:_a\n"
" y:_b\n"
"]\n");
reagent callsite("{f: (foo number (address array character))}");
2016-04-30 17:09:38 +00:00
reagent element1 = element_type(callsite.type, 0);
2016-09-09 23:37:42 +00:00
CHECK_EQ(to_string(element1), "{x: \"number\"}");
2016-04-30 17:09:38 +00:00
reagent element2 = element_type(callsite.type, 1);
2016-09-09 23:37:42 +00:00
CHECK_EQ(to_string(element2), "{y: (\"address\" \"array\" \"character\")}");
}
3663 - fix a refcounting bug: '(type)' != 'type' This was a large commit, and most of it is a follow-up to commit 3309, undoing what is probably the final ill-considered optimization I added to s-expressions in Mu: I was always representing (a b c) as (a b . c), etc. That is now gone. Why did I need to take it out? The key problem was the error silently ignored in layer 30. That was causing size_of("(type)") to silently return garbage rather than loudly complain (assuming 'type' was a simple type). But to take it out I had to modify types_strictly_match (layer 21) to actually strictly match and not just do a prefix match. In the process of removing the prefix match, I had to make extracting recipe types from recipe headers more robust. So far it only matched the first element of each ingredient's type; these matched: (recipe address:number -> address:number) (recipe address -> address) I didn't notice because the dotted notation optimization was actually representing this as: (recipe address:number -> address number) --- One final little thing in this commit: I added an alias for 'assert' called 'assert_for_now', to indicate that I'm not sure something's really an invariant, that it might be triggered by (invalid) user programs, and so require more thought on error handling down the road. But this may well be an ill-posed distinction. It may be overwhelmingly uneconomic to continually distinguish between model invariants and error states for input. I'm starting to grow sympathetic to Google Analytics's recent approach of just banning assertions altogether. We'll see..
2016-11-11 05:39:02 +00:00
void test_replace_last_type_ingredient_inside_compound() {
run("container foo:_a:_b [\n"
" {x: (bar _a (address _b))}\n"
"]\n");
reagent callsite("f:foo:number:array:character");
reagent element = element_type(callsite.type, 0);
CHECK_EQ(names_to_string_without_quotes(element.type), "(bar number (address array character))");
}
void test_replace_middle_type_ingredient_with_multiple() {
run("container foo:_a:_b:_c [\n"
" x:_a\n"
" y:_b\n"
" z:_c\n"
"]\n");
reagent callsite("{f: (foo number (address array character) boolean)}");
2016-04-30 17:09:38 +00:00
reagent element1 = element_type(callsite.type, 0);
2016-09-09 23:37:42 +00:00
CHECK_EQ(to_string(element1), "{x: \"number\"}");
2016-04-30 17:09:38 +00:00
reagent element2 = element_type(callsite.type, 1);
2016-09-09 23:37:42 +00:00
CHECK_EQ(to_string(element2), "{y: (\"address\" \"array\" \"character\")}");
2016-04-30 17:09:38 +00:00
reagent element3 = element_type(callsite.type, 2);
2016-09-09 23:37:42 +00:00
CHECK_EQ(to_string(element3), "{z: \"boolean\"}");
}
void test_replace_middle_type_ingredient_with_multiple2() {
run("container foo:_key:_value [\n"
" key:_key\n"
" value:_value\n"
"]\n");
reagent callsite("{f: (foo (address array character) number)}");
2016-04-30 17:09:38 +00:00
reagent element = element_type(callsite.type, 0);
2016-09-09 23:37:42 +00:00
CHECK_EQ(to_string(element), "{key: (\"address\" \"array\" \"character\")}");
}
void test_replace_middle_type_ingredient_with_multiple3() {
run("container foo_table:_key:_value [\n"
2016-09-17 20:00:39 +00:00
" data:&:@:foo_table_row:_key:_value\n"
"]\n"
"\n"
"container foo_table_row:_key:_value [\n"
" key:_key\n"
" value:_value\n"
"]\n");
reagent callsite("{f: (foo_table (address array character) number)}");
2016-04-30 17:09:38 +00:00
reagent element = element_type(callsite.type, 0);
2016-09-09 23:37:42 +00:00
CHECK_EQ(to_string(element), "{data: (\"address\" \"array\" \"foo_table_row\" (\"address\" \"array\" \"character\") \"number\")}");
}
:(code)
bool has_nth_type(const type_tree* base, int n) {
2015-11-04 21:14:41 +00:00
assert(n >= 0);
2016-06-18 00:14:15 +00:00
if (!base) return false;
2015-11-04 21:14:41 +00:00
if (n == 0) return true;
return has_nth_type(base->right, n-1);
}
5001 - drop the :(scenario) DSL I've been saying for a while[1][2][3] that adding extra abstractions makes things harder for newcomers, and adding new notations doubly so. And then I notice this DSL in my own backyard. Makes me feel like a hypocrite. [1] https://news.ycombinator.com/item?id=13565743#13570092 [2] https://lobste.rs/s/to8wpr/configuration_files_are_canary_warning [3] https://lobste.rs/s/mdmcdi/little_languages_by_jon_bentley_1986#c_3miuf2 The implementation of the DSL was also highly hacky: a) It was happening in the tangle/ tool, but was utterly unrelated to tangling layers. b) There were several persnickety constraints on the different kinds of lines and the specific order they were expected in. I kept finding bugs where the translator would silently do the wrong thing. Or the error messages sucked, and readers may be stuck looking at the generated code to figure out what happened. Fixing error messages would require a lot more code, which is one of my arguments against DSLs in the first place: they may be easy to implement, but they're hard to design to go with the grain of the underlying platform. They require lots of iteration. Is that effort worth prioritizing in this project? On the other hand, the DSL did make at least some readers' life easier, the ones who weren't immediately put off by having to learn a strange syntax. There were fewer quotes to parse, fewer backslash escapes. Anyway, since there are also people who dislike having to put up with strange syntaxes, we'll call that consideration a wash and tear this DSL out. --- This commit was sheer drudgery. Hopefully it won't need to be redone with a new DSL because I grow sick of backslashes.
2019-03-13 01:56:55 +00:00
void test_get_on_shape_shifting_container_error() {
Hide_errors = true;
run(
"container foo:_t [\n"
" x:_t\n"
" y:num\n"
"]\n"
"def main [\n"
" 1:foo:point <- merge 14, 15, 16\n"
" 10:num <- get 1:foo, 1:offset\n"
"]\n"
);
CHECK_TRACE_CONTENTS(
"error: illegal type \"foo\" seems to be missing a type ingredient or three while computing element type of container\n"
);
// todo: improve error message
}
void test_typos_in_container_definitions() {
Hide_errors = true;
run(
"container foo:_t [\n"
" x:adress:_t # typo\n"
"]\n"
"def main [\n"
" local-scope\n"
" x:address:foo:num <- new {(foo num): type}\n"
"]\n"
);
// no crash
}
void test_typos_in_recipes() {
Hide_errors = true;
run(
"def foo [\n"
" local-scope\n"
" x:adress:array:number <- copy null # typo\n"
"]\n"
);
// shouldn't crash
}
2017-06-24 16:23:42 +00:00
3309 Rip out everything to fix one failing unit test (commit 3290; type abbreviations). This commit does several things at once that I couldn't come up with a clean way to unpack: A. It moves to a new representation for type trees without changing the actual definition of the `type_tree` struct. B. It adds unit tests for our type metadata precomputation, so that errors there show up early and in a simpler setting rather than dying when we try to load Mu code. C. It fixes a bug, guarding against infinite loops when precomputing metadata for recursive shape-shifting containers. To do this it uses a dumb way of comparing type_trees, comparing their string representations instead. That is likely incredibly inefficient. Perhaps due to C, this commit has made Mu incredibly slow. Running all tests for the core and the edit/ app now takes 6.5 minutes rather than 3.5 minutes. == more notes and details I've been struggling for the past week now to back out of a bad design decision, a premature optimization from the early days: storing atoms directly in the 'value' slot of a cons cell rather than creating a special 'atom' cons cell and storing it on the 'left' slot. In other words, if a cons cell looks like this: o / | \ left val right ..then the type_tree (a b c) used to look like this (before this commit): o | \ a o | \ b o | \ c null ..rather than like this 'classic' approach to s-expressions which never mixes val and right (which is what we now have): o / \ o o | / \ a o o | / \ b o null | c The old approach made several operations more complicated, most recently the act of replacing a (possibly atom/leaf) sub-tree with another. That was the final straw that got me to realize the contortions I was going through to save a few type_tree nodes (cons cells). Switching to the new approach was hard partly because I've been using the old approach for so long and type_tree manipulations had pervaded everything. Another issue I ran into was the realization that my layers were not cleanly separated. Key parts of early layers (precomputing type metadata) existed purely for far later ones (shape-shifting types). Layers I got repeatedly stuck at: 1. the transform for precomputing type sizes (layer 30) 2. type-checks on merge instructions (layer 31) 3. the transform for precomputing address offsets in types (layer 36) 4. replace operations in supporting shape-shifting recipes (layer 55) After much thrashing I finally noticed that it wasn't the entirety of these layers that was giving me trouble, but just the type metadata precomputation, which had bugs that weren't manifesting until 30 layers later. Or, worse, when loading .mu files before any tests had had a chance to run. A common failure mode was running into types at run time that I hadn't precomputed metadata for at transform time. Digging into these bugs got me to realize that what I had before wasn't really very good, but a half-assed heuristic approach that did a whole lot of extra work precomputing metadata for utterly meaningless types like `((address number) 3)` which just happened to be part of a larger type like `(array (address number) 3)`. So, I redid it all. I switched the representation of types (because the old representation made unit tests difficult to retrofit) and added unit tests to the metadata precomputation. I also made layer 30 only do the minimal metadata precomputation it needs for the concepts introduced until then. In the process, I also made the precomputation more correct than before, and added hooks in the right place so that I could augment the logic when I introduced shape-shifting containers. == lessons learned There's several levels of hygiene when it comes to layers: 1. Every layer introduces precisely what it needs and in the simplest way possible. If I was building an app until just that layer, nothing would seem over-engineered. 2. Some layers are fore-shadowing features in future layers. Sometimes this is ok. For example, layer 10 foreshadows containers and arrays and so on without actually supporting them. That is a net win because it lets me lay out the core of Mu's data structures out in one place. But if the fore-shadowing gets too complex things get nasty. Not least because it can be hard to write unit tests for features before you provide the plumbing to visualize and manipulate them. 3. A layer is introducing features that are tested only in later layers. 4. A layer is introducing features with tests that are invalidated in later layers. (This I knew from early on to be an obviously horrendous idea.) Summary: avoid Level 2 (foreshadowing layers) as much as possible. Tolerate it indefinitely for small things where the code stays simple over time, but become strict again when things start to get more complex. Level 3 is mostly a net lose, but sometimes it can be expedient (a real case of the usually grossly over-applied term "technical debt"), and it's better than the conventional baseline of no layers and no scenarios. Just clean it up as soon as possible. Definitely avoid layer 4 at any time. == minor lessons Avoid unit tests for trivial things, write scenarios in context as much as possible. But within those margins unit tests are fine. Just introduce them before any scenarios (commit 3297). Reorganizing layers can be easy. Just merge layers for starters! Punt on resplitting them in some new way until you've gotten them to work. This is the wisdom of Refactoring: small steps. What made it hard was not wanting to merge *everything* between layer 30 and 55. The eventual insight was realizing I just need to move those two full-strength transforms and nothing else.
2016-09-10 01:32:52 +00:00
//:: 'merge' on shape-shifting containers
5001 - drop the :(scenario) DSL I've been saying for a while[1][2][3] that adding extra abstractions makes things harder for newcomers, and adding new notations doubly so. And then I notice this DSL in my own backyard. Makes me feel like a hypocrite. [1] https://news.ycombinator.com/item?id=13565743#13570092 [2] https://lobste.rs/s/to8wpr/configuration_files_are_canary_warning [3] https://lobste.rs/s/mdmcdi/little_languages_by_jon_bentley_1986#c_3miuf2 The implementation of the DSL was also highly hacky: a) It was happening in the tangle/ tool, but was utterly unrelated to tangling layers. b) There were several persnickety constraints on the different kinds of lines and the specific order they were expected in. I kept finding bugs where the translator would silently do the wrong thing. Or the error messages sucked, and readers may be stuck looking at the generated code to figure out what happened. Fixing error messages would require a lot more code, which is one of my arguments against DSLs in the first place: they may be easy to implement, but they're hard to design to go with the grain of the underlying platform. They require lots of iteration. Is that effort worth prioritizing in this project? On the other hand, the DSL did make at least some readers' life easier, the ones who weren't immediately put off by having to learn a strange syntax. There were fewer quotes to parse, fewer backslash escapes. Anyway, since there are also people who dislike having to put up with strange syntaxes, we'll call that consideration a wash and tear this DSL out. --- This commit was sheer drudgery. Hopefully it won't need to be redone with a new DSL because I grow sick of backslashes.
2019-03-13 01:56:55 +00:00
void test_merge_check_shape_shifting_container_containing_exclusive_container() {
run(
"container foo:_elem [\n"
" x:num\n"
" y:_elem\n"
"]\n"
"exclusive-container bar [\n"
" x:num\n"
" y:num\n"
"]\n"
"def main [\n"
" 1:foo:bar <- merge 23, 1/y, 34\n"
"]\n"
);
CHECK_TRACE_CONTENTS(
"mem: storing 23 in location 1\n"
"mem: storing 1 in location 2\n"
"mem: storing 34 in location 3\n"
);
CHECK_TRACE_COUNT("error", 0);
}
void test_merge_check_shape_shifting_container_containing_exclusive_container_2() {
Hide_errors = true;
run(
"container foo:_elem [\n"
" x:num\n"
" y:_elem\n"
"]\n"
"exclusive-container bar [\n"
" x:num\n"
" y:num\n"
"]\n"
"def main [\n"
" 1:foo:bar <- merge 23, 1/y, 34, 35\n"
"]\n"
);
CHECK_TRACE_CONTENTS(
"error: main: too many ingredients in '1:foo:bar <- merge 23, 1/y, 34, 35'\n"
);
}
void test_merge_check_shape_shifting_exclusive_container_containing_container() {
run(
"exclusive-container foo:_elem [\n"
" x:num\n"
" y:_elem\n"
"]\n"
"container bar [\n"
" x:num\n"
" y:num\n"
"]\n"
"def main [\n"
" 1:foo:bar <- merge 1/y, 23, 34\n"
"]\n"
);
CHECK_TRACE_CONTENTS(
"mem: storing 1 in location 1\n"
"mem: storing 23 in location 2\n"
"mem: storing 34 in location 3\n"
);
CHECK_TRACE_COUNT("error", 0);
}
void test_merge_check_shape_shifting_exclusive_container_containing_container_2() {
run(
"exclusive-container foo:_elem [\n"
" x:num\n"
" y:_elem\n"
"]\n"
"container bar [\n"
" x:num\n"
" y:num\n"
"]\n"
"def main [\n"
" 1:foo:bar <- merge 0/x, 23\n"
"]\n"
);
CHECK_TRACE_COUNT("error", 0);
}
void test_merge_check_shape_shifting_exclusive_container_containing_container_3() {
Hide_errors = true;
run(
"exclusive-container foo:_elem [\n"
" x:num\n"
" y:_elem\n"
"]\n"
"container bar [\n"
" x:num\n"
" y:num\n"
"]\n"
"def main [\n"
" 1:foo:bar <- merge 1/y, 23\n"
"]\n"
);
CHECK_TRACE_CONTENTS(
"error: main: too few ingredients in '1:foo:bar <- merge 1/y, 23'\n"
);
}