@guidryp
Most AI processing is going to be in running the models, not training them. First you need to get the data into a format the model can consume. For example, you might need to do image processing to detect the relevant elements of an image, or, for ChatGPT, transform text into tokens. Afterwards, you may need to convert the result, or even just render it if the output is an image.
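To make the preprocess → model → postprocess flow concrete, here's a toy sketch. The vocabulary, the trivial "model", and the function names are all made up for illustration; a real system like ChatGPT uses a learned subword tokenizer and a neural network, not a word-level lookup table.

```python
# Toy sketch of an inference pipeline:
# preprocess (text -> tokens) -> model -> postprocess (tokens -> text).
# The vocabulary and "model" here are invented for illustration only.

VOCAB = {"hello": 0, "world": 1, "gpu": 2, "<unk>": 3}
INV_VOCAB = {i: w for w, i in VOCAB.items()}

def tokenize(text):
    """Preprocessing: map each word to an integer token id."""
    return [VOCAB.get(w, VOCAB["<unk>"]) for w in text.lower().split()]

def model(token_ids):
    """Stand-in for the neural network: just echoes its input."""
    return token_ids

def detokenize(token_ids):
    """Postprocessing: map token ids back to words."""
    return " ".join(INV_VOCAB[i] for i in token_ids)

print(detokenize(model(tokenize("Hello GPU"))))  # hello gpu
```

The point is that the model itself is only the middle stage; the conversions on either side are real work that has to run somewhere too.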
For training, you need to process huge amounts of data, so even though you won't have the postprocessing step, you'll still have to do preprocessing to get the data into the right format. Depending on the kind of preprocessing, a GPU may be far faster and more efficient at it than a CPU.
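A minimal sketch of why this kind of preprocessing suits a GPU: batch normalization of images is embarrassingly parallel, since every pixel is transformed independently. The example below runs on the CPU with NumPy; CuPy deliberately mirrors NumPy's API, so swapping the import for `import cupy as np` runs the same code on a GPU. The mean/std values are the commonly used ImageNet statistics.

```python
import numpy as np

def normalize_batch(images, mean, std):
    """Scale uint8 images to [0, 1] and standardize per channel.

    Every pixel is processed independently -- exactly the kind of
    data-parallel work a GPU excels at.
    """
    x = images.astype(np.float32) / 255.0
    return (x - mean) / std

# A fake batch of 8 RGB images, 224x224, as they might come off disk.
batch = np.random.randint(0, 256, size=(8, 224, 224, 3), dtype=np.uint8)
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)  # ImageNet stats
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)

out = normalize_batch(batch, mean, std)
print(out.shape)  # (8, 224, 224, 3)
```

On a GPU, keeping preprocessing like this on-device also avoids a round trip over PCIe before training or inference starts.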
See:
- developer.nvidia.com: "This post is the first in a series about optimizing end-to-end AI. The great thing about the GPU is that it offers tremendous parallelism; it allows you to perform many tasks at the same time."
- developer.nvidia.com: "Training deep learning models with vast amounts of data is necessary to achieve accurate results. Data in the wild…"