When representing signed binary values in octal or hexadecimal, we pay no attention to the binary format and just convert the bits as we would with an unsigned binary number. Hence, we never see a '-' sign with octal or hexadecimal. The '-' sign is only used to indicate negative decimal values.

Octal and hexadecimal are therefore nothing more than compressed representations of binary. If given an octal or hexadecimal number, we must also be told what the binary format is in order to determine its value.

Likewise, to convert a signed decimal value to octal or hexadecimal, we must be told the binary format and the number of bits. For example, in 8 bits:

-7_10 = 11111000_2 (one's complement) = F8_16, but
-7_10 = 11111001_2 (two's complement) = F9_16.
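As a sketch, the two 8-bit encodings of -7_10 above can be reproduced with small helper functions (the function names are illustrative, not from any library):

```python
def to_twos_complement(value, bits):
    """Encode a signed integer as an n-bit two's-complement bit pattern."""
    if value < 0:
        value += 1 << bits          # wrap negatives into the unsigned range
    return value

def to_ones_complement(value, bits):
    """Encode a signed integer as an n-bit one's-complement bit pattern."""
    if value < 0:
        value = ~(-value) & ((1 << bits) - 1)   # invert the magnitude's bits
    return value

n = to_ones_complement(-7, 8)
print(f"{n:08b} = {n:02X}")   # 11111000 = F8
m = to_twos_complement(-7, 8)
print(f"{m:08b} = {m:02X}")   # 11111001 = F9
```

Once the bit pattern is in hand, printing it in octal or hexadecimal requires no knowledge of the format, which is exactly the point made above.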

The main caveat is that we must be aware of how many bits the value contains in case it differs from the number of digits shown. Whatever octal or hex digits are shown are to be converted to binary. Any digits not shown are assumed to be 0. The examples below should clarify why this is important.

177_8 in 7-bit two's complement = 1 111 111 = -1_10
177_8 in 8-bit two's complement = 01 111 111 = +127_10
377_8 in 8-bit two's complement = 11 111 111 = -1_10
377_8 in 16-bit two's complement = 0 000 000 011 111 111 = +255_10
9F_16 in 8-bit two's complement = 1001 1111 = -(0110 0000 + 1) = -97_10
9F_16 in 8-bit bias-127 = 1001 1111 = +32_10
9F_16 in 8-bit one's complement = 1001 1111 = -0110 0000 = -96_10
9F_16 in 16-bit two's complement = 0000 0000 1001 1111 = +159_10
9F_16 in 16-bit one's complement = 0000 0000 1001 1111 = +159_10
1001000_2 in 7-bit two's complement = 48_16 = 110_8
1001000_2 in 7-bit unsigned binary = 48_16 = 110_8
1001000_2 in 7-bit bias-63 = 48_16 = 110_8
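The decoding direction in the examples above can be sketched as a single function that takes the bit pattern, the bit width, and the format. This is a minimal illustration (the `decode` name and the format tags are assumptions for this sketch, not a standard API):

```python
def decode(pattern, nbits, fmt):
    """Interpret an nbits-wide bit pattern (given as an unsigned int)
    under the named signed format: 'unsigned', 'twos', 'ones',
    or ('bias', k)."""
    if fmt == 'unsigned':
        return pattern
    if fmt == 'twos':
        # MSB set means subtract 2^n
        return pattern - (1 << nbits) if pattern >> (nbits - 1) else pattern
    if fmt == 'ones':
        # MSB set means subtract 2^n - 1
        return pattern - ((1 << nbits) - 1) if pattern >> (nbits - 1) else pattern
    if isinstance(fmt, tuple) and fmt[0] == 'bias':
        return pattern - fmt[1]
    raise ValueError(f"unknown format: {fmt}")

print(decode(0o177, 7, 'twos'))        # -1
print(decode(0o177, 8, 'twos'))        # 127
print(decode(0x9F, 8, 'twos'))         # -97
print(decode(0x9F, 8, ('bias', 127)))  # 32
print(decode(0x9F, 16, 'twos'))        # 159
```

Note how the same hex pattern 9F produces four different values depending on the width and format supplied, which is why the format must always accompany the digits.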

Perform the following conversions:

1001101 sign-magnitude to octal

1001101 two's complement to octal

1001101 bias-127 to octal

73_16 to 8-bit two's comp

73_16 to 8-bit bias-63

73_16 7-bit two's comp to binary and decimal

73_16 8-bit two's comp to binary and decimal