Some folks think in Binary, some in Decimal, others in Hexadecimal.

I like Binary (when dealing with, say, up to 8 bits), because in my mind it immediately relates to those BIT positions in the PIC's registers.

It doesn't matter which you use, or if you chop and change throughout your code - the compiler doesn't care. The notation is there purely for YOUR convenience.

The biggest confusion is when folks come along and say "What's inside the BYTE - is it Decimal or Hex or Binary?". It takes some explaining to convince people the answer is YES - It's ALL OF THOSE simultaneously! It's simply a matter of how you perceive it YOURSELF.
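For instance (a quick sketch, using the same notation as the examples below - MyByte is just a made-up variable name), all three of these lines load exactly the same bit pattern into the variable:

MyByte = %00100000   ' Binary
MyByte = 32          ' Decimal
MyByte = $20         ' Hexadecimal

Look at the register afterwards and the eight bits are identical every time - only the way YOU wrote the value differs.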

If you wanted to extract, say, just BIT 5 out of a BYTE... which of these makes it easier to visualise what is going on?

Example A.

NewByte=OldByte & %00100000

or...

Example B.

NewByte=OldByte & 32

or...

Example C.

NewByte=OldByte & $20

All three examples are EXACTLY the same. Use whichever one turns you on.
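One follow-up, as a sketch (assuming your dialect supports the >> shift operator, as PICBASIC PRO does): the masked result above is either 32 or 0. If you want a clean 0 or 1 instead, shift the masked value back down:

NewByte = (OldByte & %00100000) >> 5   ' NewByte = 1 if BIT 5 was set, 0 if it was clear

And again, write that mask as %00100000, 32 or $20 - the compiled code is identical.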