How to add two integers using the power of the GPU and not the CPU

I want to write a program that adds two integers, but uses the GPU rather than the CPU to perform the actual computation (any language is OK).
[PS: I don't want to do artificial intelligence on the GPU, nor fuzzy logic, nor image recognition, nor quantum computing; I just want to add two integers.]
 
This for your question:

This for details:

This as starting point for your next question:
No CUDA on FreeBSD yet, at least not on Nvidia cards:

Edit: CUDA only works on Nvidia cards, so no, it's not possible.
 
How to Use and Teach OpenGL Compute Shaders

Basically, you write an OpenGL program and use a GLSL (OpenGL Shading Language) compute shader to do the calculation. I can't elaborate much on this since I have never tried them myself; my graphics card is too old and doesn't support such shaders.
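To give an idea of what such a shader could look like, here is a minimal sketch, kept as a C string so it can later be handed to glShaderSource. The buffer layout (inputs a and b plus a result slot) and the name computeShaderSrc are just assumptions for illustration; compute shaders require OpenGL/GLSL 4.3 or newer.

```c
/* Minimal sketch of a GLSL 4.30 compute shader that adds two integers.
 * The SSBO layout (a, b, result) is an assumption for illustration. */
static const char *computeShaderSrc =
    "#version 430\n"
    "layout(local_size_x = 1) in;\n"              /* a single invocation is enough */
    "layout(std430, binding = 0) buffer Data {\n"
    "    int a;\n"
    "    int b;\n"
    "    int result;\n"
    "};\n"
    "void main() {\n"
    "    result = a + b;\n"                       /* the actual addition, on the GPU */
    "}\n";
```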

You need to know how to create your OpenGL context. As a context creation framework I recommend GLFW: it comes with a nice license and is lightweight, simple, and well documented. Of course, there are many others.
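For example, a context-creation sketch with GLFW might look roughly like this; it requests an OpenGL 4.3 core profile (the first version with compute shaders) and keeps the window invisible, since nothing has to be drawn. The function name createContext and the window size are arbitrary.

```c
#include <GLFW/glfw3.h>

/* Sketch: create a hidden window just to obtain an OpenGL 4.3 context. */
GLFWwindow *createContext(void)
{
    if (!glfwInit())
        return NULL;
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);       /* no on-screen window needed */
    GLFWwindow *window = glfwCreateWindow(64, 64, "compute", NULL, NULL);
    if (window)
        glfwMakeContextCurrent(window);
    return window;
}
```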

Note that you need to use an OpenGL extension loading library such as GLAD (recommended) or GLEW to work with shaders. A plain (fixed-function pipeline) OpenGL program won't do it.
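Putting the pieces together, a host-side sketch with GLAD could look roughly like the following: compile the shader string from above, put the two integers into a shader storage buffer, dispatch a single work group, and read the sum back. The function name addOnGpu is made up, and compile/link error checking is omitted for brevity.

```c
#include <glad/glad.h>    /* must be included before the GLFW header */
#include <GLFW/glfw3.h>

/* Sketch: assumes a current OpenGL 4.3 context and the computeShaderSrc
 * string shown earlier. Returns a + b, computed on the GPU. */
int addOnGpu(int a, int b)
{
    /* GLAD loads the OpenGL function pointers once a context is current. */
    if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress))
        return 0;

    GLuint shader = glCreateShader(GL_COMPUTE_SHADER);
    glShaderSource(shader, 1, &computeShaderSrc, NULL);
    glCompileShader(shader);

    GLuint program = glCreateProgram();
    glAttachShader(program, shader);
    glLinkProgram(program);

    /* Buffer contents match the shader's SSBO layout: a, b, result. */
    GLint data[3] = { a, b, 0 };
    GLuint ssbo;
    glGenBuffers(1, &ssbo);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
    glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(data), data, GL_DYNAMIC_COPY);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);

    glUseProgram(program);
    glDispatchCompute(1, 1, 1);                      /* run one work group */
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);  /* wait for the shader write */

    glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(data), data);
    return data[2];                                  /* the GPU-computed sum */
}
```

With the context from the GLFW sketch current, addOnGpu(2, 3) should return 5, and the addition itself happens on the graphics card.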
 