Moore's Law states that the number of transistors on a chip rises exponentially, and this has held fairly steadily. A related observation concerns performance, which for CPUs has doubled every two years or so. Another is that CPU performance per dollar has doubled every 18 months or so. In practice, the cost version implies you can buy more and better CPUs as time goes on. Finally, screen resolution has been on its own curve, doubling the number of pixels every 10 years or so.
What does this mean for rendering? About 27 years ago, Turner Whitted ray traced a 640x480 image at about 1 ray per pixel. At 18 months per doubling, the same money budget now buys about 18 doublings of performance, while pixel counts have gone through only 3 doublings or so. So now we should be able to render about 2.5 million pixels, each with 2^(18-3) = 2^15 = 32k rays.
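The arithmetic above can be checked with a quick sketch. The 27-year span, the 18-month doubling period, and the 640x480 baseline are the numbers from the text; everything else follows from them:

```python
# Back-of-envelope ray budget, following the numbers in the text.
# Assumes 18 months per performance-per-dollar doubling and the
# 640x480, 27-years-ago baseline from Whitted's renderer.

years = 27
perf_doublings = years * 12 // 18       # 18 doublings of performance per dollar
baseline_pixels = 640 * 480             # ~307k pixels
pixel_doublings = 3                     # resolution doublings over the same span

pixels_now = baseline_pixels * 2**pixel_doublings       # ~2.5 million pixels
rays_per_pixel = 2**(perf_doublings - pixel_doublings)  # 2^15 = 32768

print(pixels_now, rays_per_pixel)  # → 2457600 32768
```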
A caveat is that we also want to use more objects. Fortunately, with a spatial acceleration structure, per-ray cost in practice grows with the log of the number of objects.
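To see why the log matters, here is a tiny illustration. Assuming per-ray cost scales roughly with log2 of the object count (as with a balanced spatial hierarchy), a thousandfold increase in scene complexity only a little more than doubles the cost:

```python
import math

# Illustrative log-cost scaling: relative per-ray cost ~ log2(object count).
# Going from 1k to 1M to 1B objects roughly doubles and triples the cost,
# rather than multiplying it by 1000x each step.
for n in (1_000, 1_000_000, 1_000_000_000):
    print(n, round(math.log2(n), 1))
```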
What does this all mean? The rendering community should raise its intuition for how many rays per pixel are practical. In academia, where it is good to think 5-10 years out, we should be looking at another 6 doublings or so, and a million rays per pixel should then be reasonable.
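That extrapolation is simple compounding. Assuming the 18-month doubling continues (so 6 more doublings is roughly 9 years out) on top of today's ~32k rays per pixel:

```python
# Extrapolation sketch: 6 more performance doublings on top of ~32k rays/pixel.
rays_today = 2**15               # ~32k rays per pixel today
rays_future = rays_today * 2**6  # ~2 million, i.e. on the order of a million
print(rays_future)  # → 2097152
```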
Another way to compute a ray budget is to assume that 30 Hz interactive ray tracing will become practical at 1 ray per pixel. An 8-hour batch run on the same infrastructure can then afford 30*60*60*8 = 864k samples per pixel.
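The batch budget is just the frame rate integrated over the run, using the 30 Hz and 8-hour figures from the text:

```python
# Batch sample budget from the interactive-rate assumption:
# 30 frames/second at 1 sample per pixel, accumulated over 8 hours.
frames_per_second = 30
seconds = 60 * 60 * 8                  # an 8-hour batch run
samples_per_pixel = frames_per_second * seconds
print(samples_per_pixel)  # → 864000
```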
Bottom line: batch rendering algorithms should assume that a million samples per pixel or so are available, which should make new algorithmic possibilities practical.