I can see a few reasons:

  • automated tests on single frames
  • batch renders on a server (e.g. for stills or cutscenes)
  • comparisons across GPU architectures - it could essentially serve as the reference, the “standard” for how a scene should be rendered

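For the testing and cross-architecture comparison cases above, the basic idea is to diff a GPU-rendered frame against the CPU reference with a small per-channel tolerance (exact equality is too strict, since GPUs differ in rounding). A minimal sketch, where the function name, pixel format (RGB tuples), and tolerance value are all illustrative assumptions:

```python
def frames_match(reference, candidate, tolerance=2):
    """Return True if every pixel channel of `candidate` is within
    `tolerance` of the CPU-rendered `reference` frame.

    Frames are flat lists of (R, G, B) tuples; a real harness would
    decode actual image files first."""
    if len(reference) != len(candidate):
        return False  # resolution mismatch can never pass
    return all(
        abs(r - c) <= tolerance
        for ref_px, cand_px in zip(reference, candidate)
        for r, c in zip(ref_px, cand_px)
    )

# Example: a frame that drifts by 1 per channel passes, a 10-off pixel fails
reference = [(10, 20, 30), (0, 0, 0)]
close     = [(11, 19, 31), (0, 1, 0)]
off       = [(20, 20, 30), (0, 0, 0)]
print(frames_match(reference, close))  # True
print(frames_match(reference, off))    # False
```

In practice you would likely use a perceptual metric (SSIM or similar) rather than raw per-channel differences, but the tolerance-based compare captures the idea of treating the CPU render as ground truth.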
And of course, maybe some CPU manufacturer will build in an accelerator so that lower-end GPUs (say, APUs) could offer reasonable raytracing in otherwise GPU-limited games (I don’t know enough about modern game pipelines to say whether that’s feasible).

Or the final reason, which may be the most important of all: why not?

I’ll add one to this: optimization. A lot of clever optimization techniques tend to come out of projects like this; necessity is the mother of invention.
