I'm wondering what optimizations are possible when running a single, full-screen graphical application written in OpenGL, i.e. FreeBSD should boot straight into this application.
This is all it should do. No widgets, no need for 'true' client-server support, nothing fancy at all.
The application is supposed to run on a platform with an NVIDIA GPU and must use the proprietary NVIDIA driver.
Would it be possible to strip Xorg down - without too much hassle for a competent C programmer - so that it provides only the minimal set of operations the NVIDIA driver needs, and thereby improve boot times?
I know there is documentation, and also other threads, on optimizing the FreeBSD kernel itself - so that is not the information I'm looking for.
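For context on the "boot into the application" part: even without stripping Xorg, a common kiosk-style setup is to start a bare X server with no display manager and no window manager, running the application as the only client. The following is an untested sketch; the path /usr/local/bin/myapp is a placeholder, and the exact Xorg flags available depend on the Xorg version shipped with the NVIDIA driver.

```shell
# /root/.xinitrc - hypothetical sketch: run the application as the sole X
# client; when it exits, the X session ends.
exec /usr/local/bin/myapp   # placeholder path for the OpenGL application

# /etc/rc.local - start X at boot with no display manager or window manager.
# -nocursor hides the pointer (supported by reasonably recent Xorg servers).
/usr/local/bin/xinit /root/.xinitrc -- /usr/local/bin/Xorg :0 -nocursor
```

This only removes userland components around Xorg, not Xorg's own startup cost, but it is a useful baseline before deciding how much further stripping the server itself would buy.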