//: Introduce a new transform to perform various checks in instructions before
//: we start running them. It'll be extensible, so that we can add checks for
//: new recipes as we extend 'run' to support them.
//:
//: Doing checking in a separate part complicates things, because the values
//: of variables in memory and the processor (current_recipe_name,
//: current_instruction) aren't available at checking time. If I had a more
//: sophisticated layer system I'd introduce the simpler version first and
//: transform it in a separate layer or set of layers.
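//:
//: For example, a later layer that adds a new primitive recipe can hang its
//: checks off the "End Primitive Recipe Checks" waypoint below. A hypothetical
//: sketch (ADD and its exact checks are stand-ins from a later layer, not part
//: of this one):
//:
//:   :(before "End Primitive Recipe Checks")
//:   case ADD: {
//:     // e.g. insist that every ingredient is a number
//:     break;
//:   }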
:(before "End Checks")
Transform.push_back(check_instruction);  // idempotent
:(code)
void check_instruction(const recipe_ordinal r) {
  trace(101, "transform") << "--- perform checks for recipe " << get(Recipe, r).name << end();
  map<string, vector<type_ordinal> > metadata;
  for (int i = 0;  i < SIZE(get(Recipe, r).steps);  ++i) {
    instruction& inst = get(Recipe, r).steps.at(i);
    if (inst.is_label) continue;
    switch (inst.operation) {
      // Primitive Recipe Checks
      case COPY: {
        if (SIZE(inst.products) > SIZE(inst.ingredients)) {
          raise << maybe(get(Recipe, r).name) << "too many products in '" << to_original_string(inst) << "'\n" << end();
          break;
        }
        for (int i = 0;  i < SIZE(inst.products);  ++i) {
          if (!types_coercible(inst.products.at(i), inst.ingredients.at(i))) {
            raise << maybe(get(Recipe, r).name) << "can't copy '" << inst.ingredients.at(i).original_string << "' to '" << inst.products.at(i).original_string << "'; types don't match\n" << end();
            goto finish_checking_instruction;
          }
        }
        break;
      }
      // End Primitive Recipe Checks
      default: {
        // Defined Recipe Checks
        // End Defined Recipe Checks
      }
    }
    finish_checking_instruction:;
  }
}

void test_copy_checks_reagent_count() {
  Hide_errors = true;
  run(
      "def main [\n"
      "  1:num, 2:num <- copy 34\n"
      "]\n"
  );
  CHECK_TRACE_CONTENTS(
      "error: main: too many products in '1:num, 2:num <- copy 34'\n"
  );
}

void test_write_scalar_to_array_disallowed() {
  Hide_errors = true;
  run(
      "def main [\n"
      "  1:array:num <- copy 34\n"
      "]\n"
  );
  CHECK_TRACE_CONTENTS(
      "error: main: can't copy '34' to '1:array:num'; types don't match\n"
  );
}

void test_write_scalar_to_array_disallowed_2() {
  Hide_errors = true;
  run(
      "def main [\n"
      "  1:num, 2:array:num <- copy 34, 35\n"
      "]\n"
  );
  CHECK_TRACE_CONTENTS(
      "error: main: can't copy '35' to '2:array:num'; types don't match\n"
  );
}

void test_write_scalar_to_address_disallowed() {
  Hide_errors = true;
  run(
      "def main [\n"
      "  1:&:num <- copy 34\n"
      "]\n"
  );
  CHECK_TRACE_CONTENTS(
      "error: main: can't copy '34' to '1:&:num'; types don't match\n"
  );
}

void test_write_address_to_character_disallowed() {
  Hide_errors = true;
  run(
      "def main [\n"
      "  1:&:num <- copy 12/unsafe\n"
      "  2:char <- copy 1:&:num\n"
      "]\n"
  );
  CHECK_TRACE_CONTENTS(
      "error: main: can't copy '1:&:num' to '2:char'; types don't match\n"
  );
}

void test_write_number_to_character_allowed() {
  run(
      "def main [\n"
      "  1:num <- copy 97\n"
      "  2:char <- copy 1:num\n"
      "]\n"
  );
  CHECK_TRACE_COUNT("error", 0);
}

:(code)
// types_match with some leniency
bool types_coercible(reagent/*copy*/ to, reagent/*copy*/ from) {
  // Begin types_coercible(reagent to, reagent from)
  if (types_match_sub(to, from)) return true;
  if (is_real_mu_number(from) && is_mu_character(to)) return true;
  // End types_coercible Special-cases
  return false;
}
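
// A minimal unit test of the leniency above, in the style of the unit tests
// further down. Hypothetical addition: it assumes the primitive type names
// ("number", "character") registered in earlier layers.
void test_number_coercible_to_character() {
  reagent to("x:character");
  reagent from("y:number");
  CHECK(types_coercible(to, from));
  CHECK(!types_coercible(from, to));  // the coercion is one-way
}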
bool types_match_sub(const reagent& to, const reagent& from) {
  // to sidestep type-checking, use /unsafe in the source.
  // this will be highlighted in red inside vim. just for setting up some tests.
  if (is_unsafe(from)) return true;
  if (is_literal(from)) {
    if (is_mu_array(to)) return false;
    // End Matching Types For Literal(to)
    if (!to.type) return false;
    // allow writing null to any address
    if (is_mu_address(to)) return from.name == "null";
    return size_of(to) == 1;  // literals are always scalars
  }
  return types_strictly_match_sub(to, from);
}

// variant for others to call
bool types_match(reagent/*copy*/ to, reagent/*copy*/ from) {
  // Begin types_match(reagent to, reagent from)
  return types_match_sub(to, from);
}

//: copy arguments for later layers
bool types_strictly_match_sub(const reagent& to, const reagent& from) {
  if (to.type == NULL) return false;  // error
  if (is_literal(from) && to.type->value == Number_type_ordinal) return true;
  // to sidestep type-checking, use /unsafe in the source.
  // this will be highlighted in red inside vim. just for setting up some tests.
  if (is_unsafe(from)) return true;
  // '_' never raises type error
  if (is_dummy(to)) return true;
  if (!to.type) return !from.type;
  return types_strictly_match(to.type, from.type);
}

// variant for others to call
bool types_strictly_match(reagent/*copy*/ to, reagent/*copy*/ from) {
  // Begin types_strictly_match(reagent to, reagent from)
  return types_strictly_match_sub(to, from);
}
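//: The wrappers above take reagents by value so that later layers can rewrite
//: the copies (say, to expand type abbreviations) before the comparison, by
//: inserting code at the "Begin ..." waypoints. A hypothetical sketch
//: (canonize_type here is a stand-in for whatever rewriting a later layer
//: needs):
//:
//:   :(after "Begin types_strictly_match(reagent to, reagent from)")
//:   canonize_type(to);
//:   canonize_type(from);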
bool types_strictly_match(const type_tree* to, const type_tree* from) {
  if (from == to) return true;
  if (!to) return false;
  if (!from) return to->atom && to->value == 0;
  if (from->atom != to->atom) return false;
  if (from->atom) {
    if (from->value == -1) return from->name == to->name;
    return from->value == to->value;
  }
  if (types_strictly_match(to->left, from->left) && types_strictly_match(to->right, from->right))
    return true;
  // fallback: (x) == x
  if (to->right == NULL && types_strictly_match(to->left, from)) return true;
  if (from->right == NULL && types_strictly_match(to, from->left)) return true;
  return false;
}
void test_unknown_type_does_not_match_unknown_type() {
  reagent a("a:foo");
  reagent b("b:bar");
  CHECK(!types_strictly_match(a, b));
}

void test_unknown_type_matches_itself() {
  reagent a("a:foo");
  reagent b("b:foo");
  CHECK(types_strictly_match(a, b));
}

void test_type_abbreviations_match_raw_types() {
  put(Type_abbreviations, "text", new_type_tree("address:array:character"));
  // a has type (address buffer (address array character))
  reagent a("a:address:buffer:text");
  expand_type_abbreviations(a.type);
  // b has type (address buffer address array character)
  reagent b("b:address:buffer:address:array:character");
  CHECK(types_strictly_match(a, b));
  delete Type_abbreviations["text"];
  put(Type_abbreviations, "text", NULL);
}

//: helpers

bool is_unsafe(const reagent& r) {
  return has_property(r, "unsafe");
}

bool is_mu_array(reagent/*copy*/ r) {
  // End Preprocess is_mu_array(reagent r)
  return is_mu_array(r.type);
}

bool is_mu_array(const type_tree* type) {
  if (!type) return false;
  if (is_literal(type)) return false;
  if (type->atom) return false;
  if (!type->left->atom) {
    raise << "invalid type " << to_string(type) << '\n' << end();
    return false;
  }
  return type->left->value == Array_type_ordinal;
}

bool is_mu_boolean(reagent/*copy*/ r) {
  // End Preprocess is_mu_boolean(reagent r)
  if (!r.type) return false;
  if (is_literal(r)) return false;
  if (!r.type->atom) return false;
  return r.type->value == Boolean_type_ordinal;
}

bool is_mu_number(reagent/*copy*/ r) {
  if (is_mu_character(r.type)) return true;  // permit arithmetic on unicode code points
  return is_real_mu_number(r);
}

bool is_real_mu_number(reagent/*copy*/ r) {
  // End Preprocess is_mu_number(reagent r)
  if (!r.type) return false;
  if (!r.type->atom) return false;
  if (is_literal(r)) {
    return r.type->name == "literal-fractional-number"
        || r.type->name == "literal";
  }
  return r.type->value == Number_type_ordinal;
}

bool is_mu_character(reagent/*copy*/ r) {
  // End Preprocess is_mu_character(reagent r)
  return is_mu_character(r.type);
}

bool is_mu_character(const type_tree* type) {
  if (!type) return false;
  if (!type->atom) return false;
  if (is_literal(type)) return false;
  return type->value == Character_type_ordinal;
}