in this section we start exploring the visual aspects of the varvara computer: we talk about the fundamentals of its screen device so that we can start drawing on it!
even though uxn is a computer that works natively with 8-bit words (bytes), there are several occasions in which the amount of data that can be stored in one byte is not enough.
when we use 8 bits, we can represent 256 different values (2 to the power of 8). at any given time, one byte will store only one of those possible values.
65536, or 2 to the power of 16, is the number of different values that can be represented using two bytes, or 16 bits: a "short". that quantity is also known as 64K, where 1K corresponds to 1024, or 2 to the power of 10.
besides expressing addresses in main memory, today we will see another case where 256 values are not always enough: the x and y coordinates of the pixels on our screen.
for these and other cases, using shorts instead of bytes will be the way to go.
counting from right to left, the 6th bit of a byte that encodes an instruction for the uxn computer is a binary "flag" that corresponds to what is called the short mode.
whenever this flag is set, i.e. when that bit is 1 instead of 0, the uxn cpu will perform the instruction given by the first 5 bits (the opcode) but using pairs of bytes instead of single bytes.
the byte that is deeper inside the stack will be the "high" byte of the short, and the byte that is closer to the top of the stack will be the "low" byte of the short.
in uxntal, we indicate that we want to set this flag by adding the digit '2' to the end of an instruction mnemonic.
let's see some examples!
## short mode examples
### LIT2
first of all, let's recap. the following code will push the number 02 down onto the stack, then push the number 30 (in hexadecimal) down onto the stack, and finally add them together, leaving the number 32 on the stack:
```
#02 #30 ADD
```
final state of the stack:
```
32 <- top
```
in the previous section we said that this was equivalent to using the LIT instruction instead of the literal hex rune (#).
that's right! if we run `#0004 #0008 ADD`, the stack will end up with the following values, because we are pushing 4 bytes down onto the stack, ADDing the two of them closest to the top, and pushing the result down onto the stack.
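here, ADD only operates on the top two bytes, 00 and 08:

```
#0004 #0008 ADD
```

final state of the stack:

```
00 04 08 <- top
```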
now, what about `#0004 #0008 ADD2`? in this case we are pushing the same 4 bytes down onto the stack, but ADD2 performs the following actions:
* take the top element of the stack (08), and store it as the low byte of the first short
* take the new top element of the stack (00), and store it as the high byte of the first short, that is now 0008
* take the new top element of the stack (04), and store it as the low byte of the second short
* take the new top element of the stack (00), and store it as the high byte of the second short, that is now 0004
* add the two shorts (0004 + 0008), getting a result of 000c
* push the high byte of the result (00) down onto the stack
* push the low byte of the result (0c) down onto the stack
the stack ends up looking as follows:
```
00 0c <- top
```
we might not need to think too much about the per-byte manipulations of arithmetic operations: we can consider that in short mode they do "the same as before", but using pairs of bytes instead of single bytes, without really changing their order.
in any case, it's useful to keep them in mind for some behaviors we might need later :)
### DEO2, DEI, DEI2
let's talk now about the DEO (device out) instruction we discussed earlier, as its short mode implies something special.
the DEO instruction needs a value (1 byte) to output, and an i/o address (1 byte) in the stack, in order to output that value to that address.
the DEI (device in) instruction, on the other hand, needs an i/o address (1 byte) in the stack, and it will push down onto the stack the value (1 byte) that corresponds to reading that input.
in the case of the short mode of DEO and DEI, the short aspect applies to the value to output or input and not to the address.
remember that one byte already covers all 256 i/o addresses, so using a short for them would be redundant: the high byte would always be 00.
therefore, the DEO2 instruction needs a value (1 short) and an i/o address (1 byte) in the stack, and it will output that short value to that address. similarly, the DEI2 instruction needs an i/o address (1 byte) in the stack, and it will push down onto the stack the value (1 short) that corresponds to reading that input.
next, we will see some examples where we'll be able to use these instructions.
the 'write' output of the console device has a size of 1 byte, so we can't really use these instructions with it in a meaningful way.
the system device is the varvara device with an address of 00. its output addresses (starting at address 08) correspond to three different shorts: one called red, another called green, and the last one called blue.
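for example, we can use DEO2 to set each of these three shorts in one go. this sketch assumes the System device labels have been declared in the zero page as before; the red and green values here are arbitrary sample choices, while 2ce5 is the blue short we use in this section:

```
#2ce9 .System/r DEO2 ( set the red components )
#01c0 .System/g DEO2 ( set the green components )
#2ce5 .System/b DEO2 ( set the blue components )
```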
* there are some lines and a crosshair in the center, drawn with color 2
* at the top left, there are four rows of eight bytes each, represented in hexadecimal and drawn with color 1; these 32 bytes show the deeper contents of the stack, with the stack "top" highlighted using color 2.
* below, there is a single byte drawn with color 2: it corresponds to the address of the top of the return stack (we'll talk about it on day 5)
* finally, there is another set of 32 bytes, drawn with color 3; these show the contents of the first section of the zero page in the main memory.
2ce5 is the short we assigned to the blue components of the system colors, and 0c is the i/o address of the short corresponding to .System/b! (can you tell what the numerical addresses of each of the color components in the system device are?)
we can think of the highlight on the leftmost 2c as an arrow pointing leftwards to the "top" of the stack. its current position implies that the stack is empty, as there are no elements to its left.
if we think of the highlight as an arrow pointing left towards the top of the stack, we'll see that its position corresponds with the result that we wrote before!
the highlighted 00, and the 08 to its right, correspond to the 0008 of our second operand. they were already used by the ADD2 instruction, but they are left behind in the stack memory: they will stay there until overwritten.
we mentioned already that the screen device can only show four different colors at a given time, and that these colors are numbered from 0 to 3. we set these colors already with the system device.
in order to do this, we need to set a pair of x,y coordinates where we want the pixel to be drawn, and we need to set the pixel byte to a value to actually perform the drawing.
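as a minimal sketch, assuming the Screen device labels (x, y, pixel) have been declared in the zero page as usual, drawing one pixel could look like this:

```
#0008 .Screen/x DEO2  ( set the x coordinate to 0008 )
#0008 .Screen/y DEO2  ( set the y coordinate to 0008 )
#01 .Screen/pixel DEO ( draw a pixel with color 1 in the background )
```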
here's another question for you: how would you write a macro ADD-X that allows you to increment the x coordinate by an arbitrary amount you put in the stack?
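one possible answer, as a sketch: read the current x coordinate, ADD2 it to the amount that is already on the stack, and write the result back (this assumes the usual Screen labels):

```
%ADD-X { .Screen/x DEI2 ADD2 .Screen/x DEO2 } ( amount* -- )
```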
each byte corresponds to a row of the tile, and each bit in a row corresponds to the state of a pixel from left to right: it can be either "on" (1) or "off" (0).
it's worth noting (or remembering) that a group of four bits corresponds to a nibble, and each possible combination in a nibble can be encoded as a hexadecimal digit.
to make sure that these bytes are not read as instructions by the uxn cpu, it's a good practice to precede them with the BRK instruction: this will interrupt the execution of the program before arriving here, leaving uxn "waiting" for inputs.
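for example, a 1bpp tile for a square outline could be stored like this; the label name and the specific bytes are just an illustration:

```
BRK
@square ff81 8181 8181 81ff
```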
if you observe carefully, you might see some pattern: each bit in the high nibble of the sprite byte corresponds to a different aspect of this behavior.
the following shows the meaning of each of these bits in the high nibble, counting the bits of the byte from right to left, from 0 to 7:
* bit 4: flip horizontally (0 is no, 1 is yes)
* bit 5: flip vertically (0 is no, 1 is yes)
* bit 6: layer (0 is background, 1 is foreground)
* bit 7: mode (0 is 1bpp, 1 is 2bpp)
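for example, a sprite byte of 51 has a high nibble of 5, that is 0101 in binary: 1bpp mode, foreground layer, no vertical flip, and a horizontal flip; its low nibble selects color 1:

```
( draw the current tile on the foreground, flipped horizontally )
#51 .Screen/sprite DEO
```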
additionally, the values 5, a and f in the low nibble will draw the pixels that are "on" but leave the ones that are "off" as they are: this allows you to draw over something that has been drawn before, without erasing it completely.
each one of these states can be encoded with a combination of two bits. these states can be assigned different combinations of the four system colors, by using appropriate values in the screen color byte.
the chr encoding needs some interesting manipulation of those bits: we can think of each pair of bits as having a high bit on the left and a low bit on the right.
we separate our tile into two different squares, one for the high bits and the other for the low bits:
``` two 8x8 squares corresponding to dividing the previous square in its high and low bits
00000000 00000001
01111100 01111111
01111100 01111011
01111100 01110011
01111100 01100011
01111100 01000011
00000000 01111111
00000000 11111111
```
now we can take each of these squares as 1bpp sprites, and encode them in hexadecimal as we did before:
``` the two previous 8x8 squares with their corresponding hexadecimal encoding
00000000: 00 00000001: 01
01111100: 7c 01111111: 7f
01111100: 7c 01111011: 7b
01111100: 7c 01110011: 73
01111100: 7c 01100011: 63
01111100: 7c 01000011: 43
00000000: 00 01111111: 7f
00000000: 00 11111111: ff
```
## storing the sprite
in order to write this sprite into memory, we first store the square corresponding to the low bits, and then the square corresponding to the high bits. each of them, from top to bottom:
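following that order, and reading off the encodings from the figure above, the data could be written like this (the label name is our choice):

```
@square-2bpp 017f 7b73 6343 7fff 007c 7c7c 7c7c 0000
```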
=> ./img/screenshot_uxn-tiles-2bpp.png screenshot of the output of the program, showing 16 squares colored with different combinations of outline and fill.
the following code will show our sprite in the 16 different combinations of color. there's some margin in between the tiles in order to appreciate them better:
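the structure of such a program could be sketched as follows, assuming the tile is stored under a square-2bpp label; the sprite bytes 80 to 8f all select 2bpp mode, each with one of the 16 possible color combinations in the low nibble:

```
;square-2bpp .Screen/addr DEO2 ( set the sprite address )

( first row of tiles, sprite bytes 80 to 83 )
#0010 .Screen/x DEO2 #0010 .Screen/y DEO2 #80 .Screen/sprite DEO
#0020 .Screen/x DEO2 #81 .Screen/sprite DEO
#0030 .Screen/x DEO2 #82 .Screen/sprite DEO
#0040 .Screen/x DEO2 #83 .Screen/sprite DEO
( ...and so on: move y down and repeat for sprite bytes 84 to 8f )
```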
nasu is a tool by 100R, written in uxntal, that makes it easier to design and export 2bpp sprites.
=> https://100r.co/site/nasu.html 100R - nasu
besides using it to draw with colors 1, 2, 3 (and erasing to get color 0), you can use it to find your system colors, to see how your sprites will look with the different color modes (aka blending modes), and to assemble assets made of multiple sprites.
you can export and import chr files, that you can include in your code using a tool like hexdump.
the last thing we'll cover today has to do with the assumptions varvara makes about its screen size, and some code strategies we can use to deal with them.
for example, the virtual computer also runs on the nintendo ds, with a resolution of 256x192 pixels (32x24 tiles), and on the teletype, with a resolution of 128x64 pixels (16x8 tiles).
as programmers, we are expected to decide what to do with these: our programs can adapt to the different screen sizes, they might have different modes depending on the screen size, and so on.
## changing the screen size
as of today, the way of changing the screen size in uxnemu is by editing its source code.
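in the uxnemu source there is a pair of definitions that look more or less like this (the exact file and form may differ between versions):

```
#define WIDTH 64 * 8
#define HEIGHT 40 * 8
```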
those two numbers, 64 and 40, are the default screen size in tiles, as we mentioned above.
you can change those, save the file, and then re-run the build.sh script to have uxnemu working with this new resolution.
## reading and adapting to the screen size (the basics)
as you may recall from the device addresses mentioned above, the screen allows us to read its width and height, as shorts.
if we wanted to, for example, draw a pixel in the middle of the screen regardless of the screen size, we could translate into uxntal an expression like the following:
```
x = screenwidth/2
y = screenheight/2
```
### uxntal division
for this, let's introduce the MUL and DIV instructions: they work like ADD and SUB, but for multiplication and division:
* MUL: take the top two elements from the stack, multiply them, and push down the result ( a b -- a*b )
* DIV: take the top two elements from the stack, divide them, and push down the result ( a b -- a/b )
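for example:

```
#03 #05 MUL ( result: 0f )
#0f #05 DIV ( result: 03 )
```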
using DIV2, our translated expression for the case of the x coordinate could look like:
```
.Screen/width DEI2 ( get screen width into the stack )
#0002 DIV2 ( divide by 2 )
.Screen/x DEO2 ( take the result from the stack and output it to Screen/x )
```
### bitwise shifting
if what we want is to divide or multiply by powers of two (like in this case), we can also use the SFT instruction.
this instruction takes a number and a "shift value" that indicates the amount of bits to shift to the right, and/or to the left.
the low nibble of the shift value tells uxn how many bits to shift to the right, and the high nibble expresses how many bits to shift to the left.
in order to divide a number by 2, we'd need to shift its bits one space to the right.
for example, dividing 10 (in decimal) by 2 could be expressed as follows:
```
#0a #01 SFT ( result: 05 )
```
0a is 0000 1010 in binary, and 05 is 0000 0101 in binary.
to multiply by 2, we shift one space to the left:
```
#0a #10 SFT ( result: 14 in hexadecimal )
```
14 in hexadecimal (20 in decimal), is 0001 0100 in binary.
in short mode, the number to shift is a short, but the shift value is still a byte.
for example, the following will divide the screen width by two, using bitwise shifting:
```
.Screen/width DEI2
#01 SFT2
```
### HALF macros
in order to keep illustrating the use of macros, we could define HALF and HALF2 macros, using either DIV or SFT.
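as a sketch, the SFT versions could look like this:

```
%HALF { #01 SFT }   ( a -- a/2 )
%HALF2 { #01 SFT2 } ( a* -- a*/2 )
```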
note that the HALF2 macro using SFT2 would require one byte less than the one using DIV2: the byte literal #01 assembles to 2 bytes, while the short literal #0002 takes 3. this may or may not be important depending on your priorities :)
* MUL: take the top two elements from the stack, multiply them, and push down the result ( a b -- a*b )
* DIV: take the top two elements from the stack, divide them, and push down the result ( a b -- a/b )
* SFT: take a shift value and a number to shift with that value, and shift it. the low nibble of the shift value indicates the shift to the right, and the high nibble the shift to the left ( number shift -- shiftednumber )
we also covered the short mode, which indicates to the cpu that it should operate with words that are 2 bytes long.