Rives- Bit depth is fairly simple to understand. Imagine a photograph as a 3-D chessboard with each square representing a single pixel (just for the sake of simple explanation). A color is represented as a combination of zeroes and ones (just like all other digital information). The more bits of information (the more ones and zeroes), the more colors that can be represented, and the greater the nuance/subtlety between shades/tones/colors. So on our three-dimensional chessboard, the bit depth is simply how many layers of zeroes and ones, beneath each square, are used to represent a single square (pixel) of color. The greater the bit depth, the more colors that can be represented, and the greater the nuance and subtlety that can be achieved.
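To make the "layers beneath each square" idea concrete, here is a minimal sketch of the arithmetic: each added bit doubles the number of distinct values a pixel can take. The function name is just illustrative.

```python
# Sketch: how bit depth maps to the number of distinct values
# a single pixel can represent. Each extra bit (each extra "layer"
# of zeroes and ones under the chessboard square) doubles the count.
def colors_for_bit_depth(bits_per_pixel):
    return 2 ** bits_per_pixel

for bits in (8, 24, 48):
    print(f"{bits}-bit: {colors_for_bit_depth(bits):,} possible values")
# 8-bit gives 256 values; 24-bit ("true color") gives 16,777,216.
```

This is why a scanner advertising 48-bit capture can distinguish far finer gradations in shadows and highlights than a 24-bit one, even before any other quality factor comes into play.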
An inexpensive scanner will not create nearly as detailed and, to use an audio term which may be a good parallel, 'liquid' an image as a high-end scanner capable of greater bit depth and optical resolution.
To clarify something which I'm not sure you are necessarily understanding, judging from what you said in your post: a 35mm slide may as easily produce a 100mb file as a 5mb file, depending on how you scan it. Just as with the sample rate in the digital audio domain, there is a point where your eye can no longer distinguish further detail at a given viewing distance, and where scanning to a higher resolution may have little, if any, effect on improving an image (again, for a given size and viewing distance). If I recall from my college days, the size of the dot/pixel/film grain at which your eye visually blends everything into smooth detail is called the "circle of confusion". You can create a 100mb file from a 35mm slide with an inexpensive scanner, and it will not render the nuances or details in shadows and highlights that a more expensive scanner capable of greater bit depth can pull from the same slide at the same 100mb file size.
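To show how the same slide can yield a 5mb or a 100mb file, here is a rough back-of-the-envelope sketch. The only assumptions are the standard 35mm frame dimensions (36 x 24 mm) and uncompressed storage; the dpi/bit-depth pairings are just examples.

```python
# Rough sketch: uncompressed file size for scanning a 35mm frame
# (36 x 24 mm) at a given resolution (dpi) and bit depth.
def scan_size_mb(dpi, bits_per_pixel, width_mm=36.0, height_mm=24.0):
    width_px = width_mm / 25.4 * dpi    # 25.4 mm per inch
    height_px = height_mm / 25.4 * dpi
    total_bytes = width_px * height_px * bits_per_pixel / 8
    return total_bytes / (1024 * 1024)

for dpi, bits in ((1200, 24), (4000, 48)):
    print(f"{dpi} dpi at {bits}-bit: ~{scan_size_mb(dpi, bits):.0f} MB")
# A 1200 dpi, 24-bit scan lands around 5-6 MB; a 4000 dpi,
# 48-bit scan of the very same frame is well over 100 MB.
```

Note that the file size says nothing about quality: the cheap scanner can be told to produce the same pixel count and bit depth on paper, but its optics and sensor won't capture the shadow and highlight nuance the better unit does.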
Marco