Longer name, but it doesn't lie. We have no data structure right now for
combining multiple code points. And it makes no sense for the notion of
a grapheme to be conflated with its Unicode encoding.
Unix text-mode terminals transparently support utf-8 these days, and so
I treated utf-8 sequences (which I call graphemes in Mu) as fundamental.
I then blindly carried that state of affairs over to bare-metal Mu, where
it makes no sense. If you don't have a terminal handling font rendering
for you, fonts are most often indexed by code points, not by utf-8
sequences.
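To make the distinction concrete, here is a minimal sketch in C (not Mu's
actual code) of decoding one utf-8 sequence into a code point before
looking it up in a font; the font table mentioned at the end is
hypothetical.

  #include <stdint.h>

  // Decode one utf-8 sequence starting at s into a Unicode code point.
  // Stores the number of bytes consumed in *len. Validation of overlong
  // or truncated sequences is omitted for brevity.
  static uint32_t decode_utf8(const uint8_t* s, int* len) {
    if (s[0] < 0x80) {
      *len = 1;
      return s[0];
    }
    if ((s[0] & 0xe0) == 0xc0) {
      *len = 2;
      return ((uint32_t)(s[0] & 0x1f) << 6) | (s[1] & 0x3f);
    }
    if ((s[0] & 0xf0) == 0xe0) {
      *len = 3;
      return ((uint32_t)(s[0] & 0x0f) << 12)
           | ((uint32_t)(s[1] & 0x3f) << 6)
           | (s[2] & 0x3f);
    }
    *len = 4;
    return ((uint32_t)(s[0] & 0x07) << 18)
         | ((uint32_t)(s[1] & 0x3f) << 12)
         | ((uint32_t)(s[2] & 0x3f) << 6)
         | (s[3] & 0x3f);
  }

  // A font wants the decoded code point as its index, not the raw bytes:
  //   const uint8_t* glyph = font[decode_utf8(s, &n)];  // hypothetical table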
We'll need this when rendering 16-bit glyphs. They'll occupy two
8x16 display units on screen, but the grapheme is a single unit as far
as fake screens are concerned.
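A rough sketch of the bookkeeping this implies, in C; the width rule
below (a single threshold) is a deliberately crude stand-in for real
wide-character tables.

  #include <stdint.h>

  // Illustrative only: how many 8x16 display cells a glyph occupies.
  // Real wide-character rules need proper tables; 0x2e80 is just a
  // stand-in threshold for CJK and other double-width ranges.
  static int glyph_width_in_cells(uint32_t code_point) {
    return (code_point >= 0x2e80) ? 2 : 1;
  }

  // On the real screen the cursor advances by 8 or 16 pixels per glyph,
  // while a fake screen still counts each grapheme as one unit:
  //   real_x   += 8 * glyph_width_in_cells(cp);
  //   fake_col += 1;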
Font rendering now happens off the real screen, which provides the effect
of double-buffering.
Apps can now also use convert-graphemes-to-pixels for more traditional
double-buffering.
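The overall shape of that double-buffering, sketched in C; the buffer
sizes and the copy_to_screen name are assumptions, and the real code
works on Mu's screen objects rather than flat byte arrays.

  #include <stdint.h>
  #include <string.h>

  enum { SCREEN_WIDTH = 1024, SCREEN_HEIGHT = 768 };

  // Hypothetical flat framebuffers: one off-screen, one backing the display.
  static uint8_t back_buffer[SCREEN_WIDTH * SCREEN_HEIGHT];
  static uint8_t real_screen[SCREEN_WIDTH * SCREEN_HEIGHT];

  // Font rendering and other drawing write into back_buffer first...
  static void render_frame(void) {
    // ... glyphs become pixels here instead of on the real screen ...
  }

  // ...then the whole frame is published at once, so the user never sees
  // a half-drawn screen.
  static void copy_to_screen(void) {
    memcpy(real_screen, back_buffer, sizeof back_buffer);
  }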
Before: we always drew pixels atop characters, and we only drew pixels
that were explicitly requested.
After: we always draw pixels atop characters, and we only draw pixels that
don't have color 0.
Both semantics should be identical as long as pixels are never drawn atop
characters.
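In C, the "after" rule amounts to the loop below; the cell dimensions
and buffer layout are assumptions, but the color-0-means-transparent
convention is exactly the change described above.

  #include <stdint.h>

  enum { CELL_W = 8, CELL_H = 16, COLS = 128, ROWS = 48 };

  // Dense per-pixel overlay; color 0 means "no pixel here".
  static uint8_t pixels[ROWS * CELL_H][COLS * CELL_W];
  // Simplified framebuffer, one byte of color per pixel.
  static uint8_t screen[ROWS * CELL_H][COLS * CELL_W];

  // Composite one character cell. The glyph is assumed to have been drawn
  // into `screen` already; pixels with a non-zero color are then drawn on
  // top of it. Under the old rule every explicitly-requested pixel was
  // drawn, even color 0; under the new rule color 0 lets the glyph show
  // through.
  static void composite_cell(int col, int row) {
    for (int dy = 0; dy < CELL_H; ++dy) {
      for (int dx = 0; dx < CELL_W; ++dx) {
        int x = col * CELL_W + dx;
        int y = row * CELL_H + dy;
        if (pixels[y][x] != 0)
          screen[y][x] = pixels[y][x];
      }
    }
  }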
Even if higher-level checks duplicate lower-level ones, we have an
opportunity for better error messages. Any time I see a hard-to-debug
error message, I should be asking myself, "what higher-level primitive
should catch and improve this error?"
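As one hypothetical illustration of the principle (not code from Mu): a
higher-level text-drawing primitive re-checking a bound that the
cell-level primitive below it would also catch, purely to produce a
message that names the caller's intent.

  #include <stdio.h>
  #include <stdlib.h>

  enum { SCREEN_COLS = 128, SCREEN_ROWS = 48 };

  // Low-level primitive: terse, hard-to-interpret failure.
  static void set_cell(int col, int row, char c) {
    if (col < 0 || col >= SCREEN_COLS || row < 0 || row >= SCREEN_ROWS) {
      fprintf(stderr, "set_cell: out of bounds\n");
      exit(1);
    }
    (void)c;  // ... write c into the character grid ...
  }

  // Higher-level primitive: duplicates the bounds check, but the error
  // now includes the offending values and where they came from.
  static void draw_text(int col, int row, const char* text) {
    if (row < 0 || row >= SCREEN_ROWS) {
      fprintf(stderr, "draw_text: row %d is outside the screen (0..%d)\n",
              row, SCREEN_ROWS - 1);
      exit(1);
    }
    for (int i = 0; text[i]; ++i)
      set_cell(col + i, row, text[i]);
  }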
Filling pixels isn't a rare corner case. I'm going to switch to a dense
rather than sparse representation for pixels, but callers will have to
explicitly request the additional memory.
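A sketch of what that dense representation could look like; the struct
layout and the request_pixels helper are assumptions for illustration,
not Mu's actual types.

  #include <stdint.h>
  #include <stdlib.h>

  enum { CELL_W = 8, CELL_H = 16 };

  // A fake screen: the character grid is always allocated; per-pixel data
  // exists only if the caller asks for it.
  typedef struct {
    int cols, rows;
    char*    cells;   // cols*rows characters, always present
    uint8_t* pixels;  // (cols*CELL_W) * (rows*CELL_H) colors, or NULL
  } screen;

  // Callers that draw pixels must explicitly request the (much larger)
  // dense buffer; everyone else pays only for the character grid.
  static void request_pixels(screen* s) {
    if (s->pixels == NULL)
      s->pixels = calloc((size_t)(s->cols * CELL_W) * (s->rows * CELL_H), 1);
  }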