I have a question. On the Hack platform, constants are introduced by the @constant command (the A-instruction). This command is 16 bits wide, but its most significant bit is always 0, so constants are effectively 15 bits wide. When I wrote my assembler, its binary conversion function emitted constants of this kind. However, the book (p. 130) defines the -1 constant as 0xFFFF (16 bits). This is presented as the way to represent "true", but I guess it also applies to the two's-complement representation of negative numbers.
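Just to make the range issue concrete, here is what I mean (a sketch, assuming the standard @value syntax, not something taken from the book):

    @32767   // largest legal constant: 0 111111111111111 (15 value bits)
    // There is no @65535 or @-1, so the 0xFFFF pattern for -1
    // cannot be loaded directly with an A-instruction.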
So, what is the right way to go? Is two's complement computed over 16 bits, or over the 15-bit constants?
I just noticed that the book suggests obtaining the -1 constant by decrementing 0 in a register (as in the sketch below), so I guess the answer to my own question is that 16 bits are used both for the true constant and for computing two's complements. Am I right?
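Something like this, I suppose (a sketch assuming the standard Hack mnemonics, not copied from the book):

    @0
    D=A      // D = 0
    D=D-1    // D = -1, stored as 1111111111111111 (16-bit two's complement)
    // or, in a single C-instruction: D=-1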