The Hardware of the Digital Economy: Why Do Data Scientists Like Graphics Cards?

26 February 2020 14:40, UTC
Aleksandre B

From their first appearance in the seventies until recently, games were the reason graphics accelerators existed and the stimulus for their development. Developers sought to create ever more sophisticated devices that would make digital images more attractive to players.

And they succeeded. Graphics accelerators made it possible to play beautiful and realistic games. Users were inspired by manufacturers' promises and eagerly awaited new models that could implement even more advanced graphics techniques. Player interest pushed the whole market to grow, not only in the PC field but also in the game console segment. The game industry flourished: the opportunities provided by powerful graphics accelerators made it possible to create complex, colorful, and fascinating worlds that attracted paying users and players, which in turn stimulated developers to improve the technology further.

The whole process went smoothly until about the start of the new millennium, when it became evident that buying ever more powerful hardware is not necessary to play good games. The growing and thriving smartphone market showed that, for a large number of users, the ability to play anytime and anywhere matters more than overwhelming graphics. Mobile hardware improved as well: smartphones gained built-in graphics accelerators that brought picture quality closer to perfection. Of course, the GPUs built into smartphones could not compete with powerful graphics cards, but mobile convenience gained the upper hand.

Fortunately for manufacturers, as some users departed for mobile games, video cards found other consumers with more serious tasks. The era of parallel computing had arrived, spurred on by the heyday of big data technology and AI.

Why did this happen? How did video cards become so sought after by the creators of speech analysis, autonomous driving, and face recognition systems? To answer this question, it is worth recalling how GPUs came to exist as processing units separate from the CPU. The reason lay in the sequential principle of the central processor's operation: each calculation is handled one after another.

A great many operations take place in the bowels of a computer: accessing memory, writing data, sending and receiving information over buses, and so on. No matter how fast and perfect the central processor is, all the other devices it controls are always waiting in line. The developers of graphics processors realized that the CPU could be freed from the numerous, resource-consuming, and rather uniform operations that put an image on the screen.

This separation benefited the whole computer system: the central processor gained extra time for all other operations, while the graphics processor, designed specifically to process a lot of similar data at once, got its own memory, its own bus, and a separate instruction set. That instruction set was smaller than the CPU's, which made it easier to refine and optimize.

By the time data scientists and cryptocurrency miners turned their attention to the computing power of video cards, these devices had acquired impressive capabilities for processing many dense data streams at the same time. This is exactly what parallel computing is.
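To make that concrete, here is a minimal sketch in CUDA C++ (the kernel name `scale` and the array size are illustrative, not taken from any particular product). Instead of a CPU loop visiting a million numbers one after another, the GPU launches a million lightweight threads, each of which handles a single element:

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Kernel: each GPU thread multiplies one element by a factor.
// Thousands of these threads execute simultaneously, whereas a
// CPU loop would visit the elements one after another.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;                 // one million floats (illustrative size)
    std::vector<float> host(n, 1.0f);

    float *dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements in parallel.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(dev, 2.0f, n);

    cudaMemcpy(host.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("first element after scaling: %f\n", host[0]);  // prints 2.000000

    cudaFree(dev);
    return 0;
}
```

Applying one simple operation to a huge array of values is the pattern behind both 3D rendering and neural network training, which is why hardware built for the former turned out to suit the latter so well.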

The interest of such a serious industry forced manufacturers to reconsider their product lines and address a previously untouched market segment: graphics accelerators that are not actually meant for graphics appeared. The most powerful GPUs released to date will be discussed in the next article.

Image courtesy of: Newegg.com