You can't know for sure; that's the whole point. Some encodings have a recognizable byte-level signature, and some define a byte order mark (BOM), but nothing requires that signature to be present, and nothing prevents the same byte sequences from appearing in non-text files. This is part of the reason file extensions exist: to indicate what type of file you're dealing with.
For example:
ASCII uses 7 bits to represent one character.
UTF-8 uses one to four octets per character (the original design allowed up to six, but RFC 3629 restricts it to four). The initial octet indicates how many continuation octets follow and also carries part of the character value. A UTF-8 file may optionally begin with the BOM EF BB BF, but many UTF-8 files omit it, so its absence proves nothing.
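To make the heuristic concrete, here's a small Python sketch. It checks for the optional BOM and then attempts a strict UTF-8 decode. Note this is only a plausibility test: bytes that happen to decode as UTF-8 don't prove the file is text (the function name is mine, not a standard API).

```python
def looks_like_utf8(data: bytes) -> bool:
    """Heuristic: True if the bytes form valid UTF-8 (BOM optional)."""
    if data.startswith(b"\xef\xbb\xbf"):  # strip the optional BOM
        data = data[3:]
    try:
        data.decode("utf-8", errors="strict")
        return True
    except UnicodeDecodeError:
        return False

print(looks_like_utf8("héllo".encode("utf-8")))  # True
print(looks_like_utf8(b"\xff\xfe\x00"))          # False: 0xFF never occurs in UTF-8
```

Pure ASCII also passes this test, since ASCII is a subset of UTF-8; that's another reason a positive result doesn't pin down the encoding.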
And while UTF-8 is relatively easy to spot, there's nothing to distinguish ASCII other than checking that every byte falls in the 7-bit range (0x00 to 0x7F). Any byte of 0x80 or above means the file is not ASCII. Control characters are a weaker signal: a null byte (00) is technically valid ASCII, but it rarely appears in real text, so its presence usually indicates a binary file rather than an ASCII-encoded one.
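That ASCII check can be sketched in a few lines of Python (again, a heuristic with a name I've made up, not a definitive detector):

```python
def looks_like_ascii(data: bytes) -> bool:
    """Heuristic: True only if every byte is in the 7-bit range 0x00-0x7F.

    This can't prove the file is text, but a single byte >= 0x80
    proves it is *not* ASCII.
    """
    return all(b < 0x80 for b in data)

print(looks_like_ascii(b"plain old text\n"))      # True
print(looks_like_ascii("héllo".encode("utf-8")))  # False: é encodes as 0xC3 0xA9
```

On Python 3.7+ the built-in `bytes.isascii()` method performs the same check.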