It turns out that sRGB is a reasonably good way to store photographic images, because it is approximately perceptually uniform. This means it is an efficient encoding: the spacing of the stored levels matches the human eye's ability to perceive differences in luminance. This is described in great detail in Charles Poynton's Gamma FAQ.
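To make the encoding concrete, here is a sketch of the sRGB transfer functions as defined in IEC 61966-2-1 (the function names are mine; the constants are from the standard):

```python
def srgb_encode(linear):
    """Convert a linear light value in [0, 1] to an sRGB-coded value."""
    if linear <= 0.0031308:
        return 12.92 * linear          # linear segment near black
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(coded):
    """Convert an sRGB-coded value in [0, 1] back to linear light."""
    if coded <= 0.04045:
        return coded / 12.92
    return ((coded + 0.055) / 1.055) ** 2.4
```

Note how the curve spends many more codes on dark values: half of linear light (0.5) encodes to roughly 0.735, well past the middle of the code range.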

The fact that this is a good encoding is a huge coincidence (or it is an indication that our eyes and the electronics designed in the 1920s are both subject to similar laws of physics).

Most images are stored with 8 bits per channel, giving 256 possible levels of gray. We want to choose the 256 levels of gray stored in these slots that give us the best image. In this example I have used only 32 slots so the differences are more obvious:

The left column is what you get if you do the "obvious" thing: multiply the number in the file by a constant to get the light level. This is called linear coding. As you can see, the first color above black is quite bright; you would have a hard time making smooth dark areas in such an image. You also cannot distinguish the top colors at all, meaning the bits used to store the differences between them are totally wasted.

The second column shows 32 levels evenly distributed on the sRGB curve. Here you can see each step, and each step appears about the same size to your eye and brain. This means it is more perceptually uniform, even though if you took a light meter and measured the luminance of each row, you would find the first column is a straight line while the second is a curve.
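A rough numeric version of this comparison (a sketch, reusing the sRGB decode function from the standard; `levels` matches the 32 slots in the example):

```python
def srgb_decode(coded):
    """Piecewise sRGB decode (IEC 61966-2-1)."""
    if coded <= 0.04045:
        return coded / 12.92
    return ((coded + 0.055) / 1.055) ** 2.4

levels = 32
linear_lum = [i / (levels - 1) for i in range(levels)]
srgb_lum = [srgb_decode(i / (levels - 1)) for i in range(levels)]

# The first step above black is enormous under linear coding,
# tiny under sRGB coding.
print(linear_lum[1])  # ~0.032, over 3% of full brightness in one jump
print(srgb_lum[1])    # ~0.0025, more than ten times finer
```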

Some people, in particular Timo Autiokari, have argued that sRGB is not perfectly perceptually uniform. This is true: the right column shows a logarithmic coding, where each level is the luminance of the previous one multiplied by a constant, and it is obviously more uniform. However, there has never been a claim that sRGB is perfect. sRGB's other advantage is that it matches how existing computer displays work, which means virtually every digital image in existence today is stored in this format. For this reason the minor improvement you would get from a more uniform encoding does not appear to be worth it. Logarithmic encoding also has problems: there is no way to represent zero light, and it is not obvious how to increase the bit depth of a log encoding.
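A sketch of such a logarithmic coding (the dynamic range of 1000:1 is an assumption for illustration, not something the text specifies):

```python
levels = 32
dynamic_range = 1000.0  # assumed: brightest / darkest representable level

# Each level is the previous one multiplied by a constant ratio.
ratio = dynamic_range ** (1 / (levels - 1))
log_lum = [(1 / dynamic_range) * ratio ** i for i in range(levels)]

# The ratio between adjacent levels is constant, so every step looks
# equally big to the eye. But the darkest code is 1/1000, not zero:
# this coding has no way to represent zero light.
print(log_lum[0], log_lum[-1])
```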

(You can read about Timo's arguments with Charles Poynton here, or read Mr. Poynton's responses.) Timo seems to have altered his pages recently; they now have many examples, similar to this page, showing that calculations in linear space are better. However, he still mistakenly believes this requires linear storage. As this paper tries to demonstrate, you can convert from linear to non-linear storage quite efficiently.

Lots of people think the solution is to stay linear but put more bits in the file; years ago 12- and 16-bit storage was very common. But this is a waste: all it does is insert the same number of new slots between each pair of existing ones. In the example above it should be obvious that the linear encoding needs no new values at the top, but many new values near the bottom. In fact you would have to go to 13 bits before the steps near black are as fine as they are in an 8-bit sRGB file! And if you used 13 bits with sRGB, you would again be way ahead of the linear encoding in quality.
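You can check this claim yourself (a sketch; the exact bit count depends on whether you use the piecewise sRGB curve, as here, or a pure power-law gamma, which gives a much finer first step and therefore a higher count):

```python
import math

def srgb_decode(coded):
    """Piecewise sRGB decode (IEC 61966-2-1)."""
    if coded <= 0.04045:
        return coded / 12.92
    return ((coded + 0.055) / 1.055) ** 2.4

# Luminance of the first code above black in an 8-bit sRGB file.
first_srgb_step = srgb_decode(1 / 255)

# Bits a linear encoding needs before its smallest step matches that.
bits = math.ceil(math.log2(1 / first_srgb_step))
print(bits)  # ~12-13 with the piecewise curve; a pure 2.2 gamma needs more
```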

A lot of other people think that linear floating-point values should be stored. This does make sense, and we do it extensively. But it turns out that the popular method of storing floating-point numbers (mantissa + exponent) is in fact somewhat logarithmic: the steps between adjacent representable numbers are much finer near black than near white.
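You can see this directly by measuring the spacing between adjacent representable doubles (the "ulp") at a dark value and at white. A sketch using Python's standard library:

```python
import math

# Spacing between adjacent representable floats grows with magnitude,
# so a mantissa+exponent format is roughly logarithmic: steps are much
# finer near black (small values) than near white (1.0).
step_near_black = math.ulp(0.001)
step_near_white = math.ulp(1.0)

print(step_near_white / step_near_black)  # 1024.0: steps near white are
                                          # three orders of magnitude coarser
```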