Intel made several announcements as part of its launch of OneAPI Gold and a new GPU product aimed at the video processing / Android cloud gaming market.
First up: the new Intel Server GPU. This new instantiation of the Xe-LP architecture uses four discrete graphics chips that work in tandem to process video streaming workloads, video transcoding workloads, and Android game streaming. Xe-LP is the same architecture Intel is shipping in the Xe Max and inside Tiger Lake. Performance reviews of Tiger Lake chips have put it somewhat ahead of AMD's Ryzen 4000 integrated GPU, and the Xe Max is the same chip that's inside TGL laptops, just running at a higher clock speed.
“We are on a journey from CPU to XPU,” Intel SVP and manager of quite a lot of everything (architecture, graphics, and software) Raja Koduri told ExtremeTech on a recent call. “Our CPU architecture has grown Intel and it also played its part to enable the entire world of computing, but we know the workloads have evolved and we are striving for mastery of more XPU architectures that are super-efficient for graphics, media, and AI, memory, security, and networking.”
Like Xe Max in laptops, the Xe-LP chip Intel is launching today is aimed at media workloads more than gaming, though in this case, the company is talking up the solution as an Android gaming platform. Apparently, Intel is working with several companies, including Tencent and GameStream, to bring Android streaming services to market. Intel's argument is that the Xe-LP makes an attractive partner to Xeon, allowing customers to standardize on all-Intel solutions for these products.
Intel supports 120 streams in a two-card configuration and potentially up to 160 simultaneous players, depending on the game in question. Since each card has four GPUs, Intel is supporting 15 Android gamers per GPU at 30fps. Android games are not nearly as hardware-intensive as PC titles, but 15 streams per GPU at 30fps still works out to a cumulative total of 450 frames per second. It'd be interesting to know how the company distributes the disparate streaming workloads across the card and between the two cards. It's also possible that the client-count claims here are a bit optimistic, of course.
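A quick back-of-the-envelope check of those figures (this is just illustrative arithmetic; the constant names are ours, not Intel's):

```python
# Sanity-check Intel's Android streaming figures for the Server GPU.
GPUS_PER_CARD = 4       # each card carries four Xe-LP GPUs
STREAMS_PER_GPU = 15    # claimed Android gamers per GPU
FPS_PER_STREAM = 30     # target frame rate per stream

cards = 2  # the quoted two-card configuration
total_streams = cards * GPUS_PER_CARD * STREAMS_PER_GPU
frames_per_second_per_gpu = STREAMS_PER_GPU * FPS_PER_STREAM

print(total_streams)              # 120 concurrent streams
print(frames_per_second_per_gpu)  # 450 frames rendered per second, per GPU
```

The numbers line up: two cards of four GPUs each at 15 streams apiece gives the quoted 120 streams, with each GPU rendering a cumulative 450 frames per second.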
OneAPI Goes Gold
Intel’s other major announcement concerns its OneAPI initiative. OneAPI is a cross-architecture programming model intended to standardize code development across various system components, including GPUs, FPGAs, and CPUs. Intel has long offered its own libraries and code development tools, and the company is leveraging that expertise to build an overarching framework for product development. Support for features like AVX-512 and DL Boost is baked in, as you'd expect.
Intel has been talking about OneAPI for several years now, but the project has only just gone gold, with toolkit availability expected in December. Free versions will be available both locally and in the cloud, and Intel will promptly transition its Parallel Studio XE and Intel System Studio software suites to OneAPI products.
OneAPI has generated a fair bit of news for Intel over the past few years, but it isn't quite clear if it's changed Intel's overall fortunes in the AI market. Until its high-end Xe cards are available, Intel is operating with one hand tied behind its back, as far as AI performance is concerned. Even with AVX-512 and DL Boost, a modern Xeon server can't match the raw performance of a GPU.
With OneAPI, Intel is building an infrastructure backend it hopes will attract programmers to its platforms as new GPUs (not to mention consumer CPUs with AVX-512 support) roll out in mobile and desktop through 2021 and beyond. For those of you concerned about CUDA support, there's already a backend built into the Data Parallel C++ compiler that allows DPC++ code to run on top of CUDA GPUs. DPC++ is Intel's unifying language, implemented in OneAPI, that can target FPGAs, CPUs, and GPUs, plus any other compatible accelerators anyone brings to market.
This announcement appears to conclude Intel's plans for GPU launches in 2020, which means we'll have to wait until 2021 to see what the company brings to the table in consumer cards. Tiger Lake has impressed in mobile, which hopefully implies good things about the larger cards Intel will ship in 2021.