It requires more than 1GB to fill the screen with a chessboard pattern
using the definition in shell/iterative-definitions.limg.
I also speed up the chessboard program by clearing the screen up front
and then only rendering the white pixels.
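The optimization above can be sketched in Python. This is a hypothetical model, not Mu's actual screen API; `Screen`, `fill_rect`, and `chessboard` here are stand-in names. The idea is that one bulk clear covers all the black squares, so the loop only renders the white ones:

```python
class Screen:
    # stand-in for a pixel framebuffer; pixels[y][x] holds a color
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.pixels = [[0] * width for _ in range(height)]

    def clear(self, color):
        self.pixels = [[color] * self.width for _ in range(self.height)]

def fill_rect(screen, x1, y1, x2, y2, color):
    # fill the half-open rectangle [x1, x2) x [y1, y2)
    for y in range(y1, y2):
        for x in range(x1, x2):
            screen.pixels[y][x] = color

def chessboard(screen, side, white=15):
    # one bulk clear up front takes care of all the black squares...
    screen.clear(0)
    # ...so we only have to render the white ones
    for y in range(0, screen.height, side):
        for x in range(0, screen.width, side):
            if (x // side + y // side) % 2 == 1:
                fill_rect(screen, x, y,
                          min(x + side, screen.width),
                          min(y + side, screen.height), white)
```

This halves the number of `fill_rect` calls in the loop, and the clear itself can be a single cheap bulk operation.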
It took me _way_ too long to realize that I'm not checking for errors within
the loop. That causes any error to manifest as an infinite loop, since inner
evaluations silently fail to run.
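A minimal Python model of the failure mode (hypothetical names; Mu's evaluator differs in detail): once an error is flagged, later evaluations return a garbage value instead of running, so unless the loop checks the flag, the condition stays "truthy" while the body's increment never executes.

```python
class Evaluator:
    # toy evaluator: the first exception sets a sticky error flag
    def __init__(self):
        self.error = None

    def eval(self, thunk):
        if self.error:
            return 1            # garbage result; the real work never runs
        try:
            return thunk()
        except Exception as e:
            self.error = e
            return None

def run_loop(ev, cond, body, check_errors, max_iters=100):
    # drive `while cond: body`, optionally checking for errors each iteration
    iters = 0
    while ev.eval(cond):
        ev.eval(body)
        if check_errors and ev.error:
            break               # the check that was missing
        iters += 1
        if iters >= max_iters:  # guard so this sketch can't itself hang
            break
    return iters
```

Without the error check, a failing body spins until the guard trips; with it, the loop exits as soon as the first evaluation fails.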
Debugging notes, for posterity:
* printing one row of a chessboard pattern over a fake screen with
  (chessboard screen 4 0 0 15) gets stuck in an infinite loop halfway through
* debug pattern during the infinite loop: VWEX. It's still in the loop, but
  it's not executing the body
* raw (fill_rect screen 16 0 20 4 15) works fine
* the same number of calls to fill_rect works fine
* replacing calls to fill_rect with pixel inside chessboard2 works fine
* at the point of the infinite loop it's repeatedly going through the hline
  loop -- BUT it never executes the check of the loop (< lo hi) with lo=20,
  hi=20. Something is returning 1, but it's not inside <
* stream optimization is not implicated
* a simple test case with a single loop:
(
(globals . (
(foo . (fn () (screen i n)
(while (< i n)
(pixel screen 4 4 i)
(pixel screen 5 4 i)
(pixel screen 6 4 i)
(pixel screen 7 4 i)
(set i (+ i 1)))))
))
(sandbox . (foo screen 0 100))
)
A simpler one (if you reset the cursor position before every print):
(
(globals . (
(foo . (fn () (screen i n)
(while (< i n)
(print screen i)
(set i (+ i 1)))))
))
(sandbox . (foo screen 0 210))
)
I now believe it has nothing to do with the check. The check always works.
Sometimes no body is evaluated. And so the set has no effect.
All over the Mu code I reflexively initialize all variables, just to keep
unsafe SubX easy to debug. However, I don't really need to do this for safe
Mu code, since type- and memory-safety already ensure we can't read from
streams beyond what we've written to them. For now I'll continue mostly
with the same approach, but with one exception for streams of bytes.
Mu programs often emit traces, and in doing so they often use temporary
streams of bytes that can grow quite long. I'm hoping that avoiding the
initialization of KBs of data all over the place will measurably speed up
the Mu shell.
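A sketch of the invariant that makes this safe, in Python with hypothetical names: a stream tracks how many bytes have been written, and every read is bounds-checked against that write index, so bytes past it (the "uninitialized" region) are unobservable.

```python
class ByteStream:
    def __init__(self, capacity):
        self.data = bytearray(capacity)  # stands in for uninitialized memory
        self.write = 0                   # bytes in [0, write) are valid
        self.read = 0                    # next byte to read; read <= write

    def write_byte(self, b):
        assert self.write < len(self.data), 'stream full'
        self.data[self.write] = b
        self.write += 1

    def read_byte(self):
        # the bounds check: never read past what was written
        assert self.read < self.write, 'read past end of stream'
        b = self.data[self.read]
        self.read += 1
        return b
```

Because every read checks `read < write`, zeroing the whole buffer up front buys no extra safety; it only costs time proportional to the buffer's capacity.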
All highly experimental. Current constraints:
* No tail recursion elimination
* No heap reuse
* Keep implementation simple
So it's slow, and I don't want to complicate it to speed it up. Instead I'm
investing in affordances to help deal with the slowness. In the process,
however, I've taken the clean abstraction of a trace ("all you need to do
is add to the trace") and bolted on call counts and debug-prints as independent
mechanisms.
Before: we always drew pixels atop characters, and we only drew pixels
that were explicitly requested.
After: we always draw pixels atop characters, and we only draw pixels that
don't have color 0.
Both semantics should be identical as long as pixels are never drawn atop
characters.
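The new rule can be sketched as a tiny Python function (hypothetical; the real compositing lives in the Mu screen code): for each position, a non-zero pixel wins over the character, and color 0 means "no pixel here".

```python
def render_cell(char_color, pixel_color):
    # color actually displayed for one screen position
    if pixel_color != 0:
        return pixel_color   # pixels always draw atop characters...
    return char_color        # ...but color 0 lets the character show through
```

Under the old semantics the equivalent check was "was this pixel explicitly requested?"; the two only diverge when a color-0 pixel is explicitly drawn over a character.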
Even if higher-level checks duplicate lower-level ones, they're an opportunity
for better error messages. Any time I see a hard-to-debug error message, I
should be asking myself, "what higher-level primitive should catch and improve
this error?"
Filling pixels isn't a rare corner case. I'm going to switch to a dense
rather than sparse representation for pixels, but callers will have to
explicitly request the additional memory.
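One way to sketch the opt-in dense representation in Python (`FakeScreen` and its methods are hypothetical names, not Mu's API): the screen starts with no pixel memory, and a caller that wants pixel graphics must explicitly ask for the dense per-pixel buffer.

```python
class FakeScreen:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.pixels = None               # no pixel memory by default

    def enable_pixels(self):
        # caller explicitly requests the additional memory:
        # one byte per pixel, dense rather than a sparse list of triples
        self.pixels = bytearray(self.width * self.height)

    def set_pixel(self, x, y, color):
        assert self.pixels is not None, 'call enable_pixels first'
        self.pixels[y * self.width + x] = color
```

A sparse representation (a list of `(x, y, color)` triples) is cheap when few pixels are set, but filling regions makes it grow without bound; the dense buffer has a fixed, predictable cost that only callers who need it pay.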