[bitsandchips]: Pascal to not have improved Async Compute over Maxwell


Mahigan

Senior member
Aug 22, 2015
573
0
0
NVidia/Intel just need to sponsor games with ROV/CR the way AMD sponsors Async, and we'd have the opposite situation.

Even AMD said that no GPU yet supports all DX12 features, and it will be a long time before we see any that does. It will be post Pascal/GCN1.3. MS is already adding new features to DX12 that will be introduced by the end of the year.

And looking at future GPU performance, we may end up buying new cards for features rather than performance, from both discrete IHVs.
Polaris will support ROV and CR. That's what the "Primitive Discard Acceleration" is all about.

ROV is a performance reducing feature btw.

What is clear to me is that Maxwell 3.0 (Pascal) needs HBM2 to shine. This is why AMD GCN3 (Fiji) is able to keep up with a GTX 980 Ti (GM200) at 4K despite having a lower ROp count and clockspeed.

NVIDIA went from an 8:1 ratio of ROps to 64-bit memory controllers in Kepler (GK110) to a 16:1 ratio with GM20x. In other words, Kepler (GK104/GK110) had 8 ROps per 64-bit memory controller.

So a 256-bit memory interface would give us 4 x 8 or 32 ROps and a 384-bit memory interface would give us 6 x 8 or 48 ROps.

What NVIDIA did was boost that to 16 ROps per 64-bit memory interface with GM20x. So a 256-bit memory interface now powers 64 ROps and a 384-bit memory interface now powers 96 ROps.
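To make the ratio math concrete, here's a rough sketch in Python (my own illustration, simply restating the figures above):

```python
# ROP count = number of 64-bit memory controllers x ROPs per controller.
def rop_count(bus_width_bits, rops_per_64bit_controller):
    controllers = bus_width_bits // 64
    return controllers * rops_per_64bit_controller

# Kepler (GK104/GK110): 8 ROps per 64-bit controller
print(rop_count(256, 8))    # 32 ROps (256-bit interface)
print(rop_count(384, 8))    # 48 ROps (384-bit interface)

# Maxwell (GM20x): 16 ROps per 64-bit controller
print(rop_count(256, 16))   # 64 ROps (GM204, 256-bit)
print(rop_count(384, 16))   # 96 ROps (GM200, 384-bit)
```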

To make up for the lack of memory bandwidth, NVIDIA added delta color compression, which helps with coherent (colored) pixels and texels but not with random ones. It helped out a bit, but still couldn't keep up with GCN2's 64 ROps under random scenarios or GCN3's ROps under both random and colored scenarios.
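Here's a toy illustration of the general principle (not NVIDIA's actual compression scheme): deltas between neighbouring pixels are tiny when the data is smooth and large when it's random, which is why the technique helps coherent color data but not random pixels.

```python
# Toy delta encoding: store the first value, then differences between
# neighbouring pixels. Small deltas compress well; large/random ones don't.
import random

def delta_encode(pixels):
    return [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]

smooth = [100 + i for i in range(16)]                  # gentle gradient
noise  = [random.randint(0, 255) for _ in range(16)]   # random pixels

print(delta_encode(smooth))  # base value, then deltas of 1 -> highly compressible
print(delta_encode(noise))   # large, random deltas -> little to gain
```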


What we're looking at, then, is NVIDIA's initial Pascal offerings being somewhat nice but not delivering the performance people seem to think they will. GP100, paired with HBM2, will be able to deliver the needed bandwidth for NVIDIA's bandwidth-starved 96 ROps (Z testing, pixel blending, anti-aliasing, etc. devour immense amounts of bandwidth). Therefore I don't think we're going to see more than 96 ROps in GP100. What we're instead likely to see are properly fed ROps.

If the "GTX 1080" comes with 10 Gbps GDDR5x memory on a 256-bit memory interface then we'd be looking at the same 64 ROps that the GTX 980 sports and the same 16:1 ratio (8 ROps/64-bit memory controller) but with 320GB/s memory bandwidth as opposed to 224GB/s on the GTX 980. So the GTX 1080 (320GB/s) should deliver a similar performance/clk as a GTX 980 Ti (336GB/s) at 4K despite sporting 64 ROps to the GTX 980 Ti's 96.

NVIDIA will likely set the reference clocks on the GTX 1080 higher in order to obtain faster performance than a reference-clocked GTX 980 Ti. So, as far as 4K performance goes, the increase of a GTX 1080 over a GTX 980 Ti will likely come down to higher reference clocks.

I also think that the GTX 1080 will sport the same, or around the same, number of CUDA cores as a GTX 980 Ti (2,816). I could be entirely off but that's what I think.

As for FP64, NVLink, FP16 support, those are nice for a data centre but mean absolutely nothing for Gamers... Sad I know.

So what we're looking at from NVIDIA, initially, is GTX 980 Ti performance (or slightly higher performance) at a lower price point with GP104. The real fun will start with GP100 by end of 2016/beginning of 2017.


On the RTG/AMD front..

RTG replaced the geometry engines with new geometry processors. One notable new feature is primitive discard acceleration, which is something GCN1/2/3 lacked. This allows future GCN4 (Polaris/Vega) to prevent certain primitives from being rasterized. Unseen tessellated meshes are "culled" (removed from the rasterizer's workload). Primitive Discard Acceleration also means that GCN4 will support Conservative Rasterization.

Basically, RTG have removed one of their weaknesses in GCN.

As for the hardware scheduling, GCN still uses an Ultra Threaded Dispatcher which is fed by the Graphics Command Processor and ACEs.


AMD replaced the Graphics Command Processor and increased the size of the command buffer (section of the frame buffer/system memory dedicated to keeping many to-be executed commands). The two changes, when coupled together, allow for a boost in performance under single threaded scenarios.

How? My opinion is that if the CPU is busy handling a complex simulation or other CPU-heavy work under DX11, you generally get a stall on the GPU side, where the GPU idles, waiting on the CPU to finish the work it is doing so that it can continue to feed the GPU.

By increasing the size of the command buffer, more commands can be placed in-waiting so that while the CPU is busy with other work, the Graphics Command Processor still has a lot of buffered commands to execute. This averts a stall.
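A crude toy model (my own sketch, not AMD's implementation) of how a deeper command buffer absorbs a CPU stall:

```python
# GPU consumes one buffered command per tick; the CPU is stalled for the
# first cpu_stall_ticks and only refills the buffer after that. A deeper
# buffer means fewer ticks where the GPU has nothing to execute.
def gpu_idle_ticks(buffer_depth, cpu_stall_ticks, total_ticks=100):
    buffered = buffer_depth          # assume the buffer starts full
    idle = 0
    for tick in range(total_ticks):
        cpu_busy = tick < cpu_stall_ticks
        if not cpu_busy and buffered < buffer_depth:
            buffered += 1            # CPU queues another command
        if buffered > 0:
            buffered -= 1            # GPU executes a buffered command
        else:
            idle += 1                # nothing queued: GPU sits idle
    return idle

print(gpu_idle_ticks(buffer_depth=8,  cpu_stall_ticks=32))   # 24 idle ticks
print(gpu_idle_ticks(buffer_depth=64, cpu_stall_ticks=32))   # 0 - stall absorbed
```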

So 720p/900p/1080p/1440p performance should be great under DX11 on Polaris/Vega.


Another nifty new feature is instruction prefetching. Instruction prefetch is a technique used in central processing units to speed up the execution of a program by reducing wait states (here, GPU idle time).

Prefetching occurs when a processor requests an instruction or data block from main memory before it is actually needed. Once the block comes back from memory, it is placed in a cache (and GCN4 has increased its Cache sizes as well). When the instruction/data block is actually needed, it can be accessed much more quickly from the cache than if it had to make a request from memory. Thus, prefetching hides memory access latency.
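A simple back-of-the-envelope model of that latency hiding (hypothetical cycle counts, not real GPU timings):

```python
# Each work item needs a block fetched from memory (mem_latency cycles away)
# followed by compute_cycles of work. Without prefetch the fetch and the work
# are serialized; with a prefetch issued one item ahead, the fetch for item
# i+1 overlaps the compute for item i.
def total_cycles(n_items, mem_latency, compute_cycles, prefetch):
    if not prefetch:
        return n_items * (mem_latency + compute_cycles)
    # only the first fetch is exposed; the rest hide behind compute
    return mem_latency + n_items * max(compute_cycles, mem_latency)

print(total_cycles(100, mem_latency=300, compute_cycles=400, prefetch=False))  # 70000
print(total_cycles(100, mem_latency=300, compute_cycles=400, prefetch=True))   # 40300
```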

In the case of a GPU, the prefetch can take advantage of the spatial coherence usually found in the texture mapping process. In this case, the prefetched data are not instructions, but texture elements (texels) that are candidates to be mapped on a polygon.

This could mean that GCN4 (Polaris/Vega) will be boosting texturing performance without needing to rely on more texturing units. This makes sense when you consider that Polaris will be a relatively small die containing far fewer CUs (Compute Units) than Fiji and that Texture Mapping Units are found in the CUs. By reducing the texel fetch wait times, you can get a more efficient use out of the Texture Mapping Units on an individual basis. Kind of like higher TMU IPC.

On top of all this we have the new L2 cache, improved CUs for better shader efficiency, new memory controllers, etc.


So what we're looking at from AMD is FuryX performance (or slightly more) at a reduced price point for DX12 and higher than FuryX performance for DX11. Just like with NVIDIA, the real fun starts with Vega by end of 2016/beginning of 2017.


In conclusion,

We have Maxwell 3.0 facing off against a refined GCN architecture. Micron just announced that mass production of GDDR5x is set for this summer so both AMD and NVIDIA are likely to use GDDR5x. It will be quite interesting to see the end result.

So does lacking Asynchronous compute + graphics matter? Absolutely.
 

kagui

Member
Jun 1, 2013
78
0
0
[Mahigan's post above quoted in full]
I have come to the same conclusion coming from another vector (less technical, more PR): making GPUs more like CPUs, and the future is full compute.
 

ultima_trev

Member
Nov 4, 2015
148
66
66
Meh, ACEs or no, nVidia will find a way to come out ahead; They always do. Currently AMD offerings provide much better shading and INT8 texture throughput, but nVidia made sure games are programmed to be ROP and FP16 texture limited (don't let synthetic benchmarks fool you), and most likely this will not change when Direct X 12 games become mainstream. So AMD really needs to double up on ROP and FP16 texture capacity, simple as that.
 

airfathaaaaa

Senior member
Feb 12, 2016
692
12
81
Meh, ACEs or no, nVidia will find a way to come out ahead; They always do. Currently AMD offerings provide much better shading and INT8 texture throughput, but nVidia made sure games are programmed to be ROP and FP16 texture limited (don't let synthetic benchmarks fool you), and most likely this will not change when Direct X 12 games become mainstream. So AMD really needs to double up on ROP and FP16 texture capacity, simple as that.
How did they make sure of that? Little by little, everyone that wants a piece of the cake on the consoles will optimise their games towards AMD, and unless Nvidia pays for the entire recompile process of a game on PC to suit them, I don't see this happening.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Meh, ACEs or no, nVidia will find a way to come out ahead; They always do. Currently AMD offerings provide much better shading and INT8 texture throughput, but nVidia made sure games are programmed to be ROP and FP16 texture limited (don't let synthetic benchmarks fool you), and most likely this will not change when Direct X 12 games become mainstream. So AMD really needs to double up on ROP and FP16 texture capacity, simple as that.

nVidia can only afford to pay off so many companies. There is already a sizable list of new games that are siding with AMD. Maybe AMD offered more than nVidia, or maybe nVidia put a limit on how many games they will be pushing.

Either way, I think we may see a swing of things with DX12 becoming the standard in the next year or so.

Plus, the PS4K and the new Xbox One most likely having a newer version of GCN will further push devs to optimize for AMD's stuff first.
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,421
1,755
136
nVidia can only afford to pay off so many companies.

That's not how it works. What nV offers the typical game dev shop is not money, but help in optimizing and debugging. (Really big name AAA titles might be different, but I don't know about those.) Basically, the typical game studio absolutely does not have a good dev with lots of experience optimizing shader and engine code. nV has a bunch of those working for them as a part of their dev relations program, and whenever a partner has a problem, they can email them for advice and help. This is really valuable not just because of the money it saves, but because it can save a lot of time.

I don't currently work in gamedev, but if AMD is suddenly getting a lot of partner titles, I think they probably finally properly copied the nV dev relations setup. Oh, and now their dev relations engineers can probably also help the game devs optimize the console versions...
 

thesmokingman

Platinum Member
May 6, 2010
2,307
231
106
That's not how it works. What nV offers the typical game dev shop is not money, but help in optimizing and debugging. (Really big name AAA titles might be different, but I don't know about those.) Basically, the typical game studio absolutely does not have a good dev with lots of experience optimizing shader and engine code. nV has a bunch of those working for them as a part of their dev relations program, and whenever a partner has a problem, they can email them for advice and help. This is really valuable not just because of the money it saves, but because it can save a lot of time.

I don't currently work in gamedev, but if AMD is suddenly getting a lot of partner titles, I think they probably finally properly copied the nV dev relations setup. Oh, and now their dev relations engineers can probably also help the game devs optimize the console versions...


Are you privy to the trail of money? How would you know where the money is going? Let's assume what you wrote is true... Do you realize what the meaning behind what you wrote is? Devs are willingly letting gameworks crap into and breaking their games? You mean they want this on purpose w/o any compensation on the backend and as a result they get broken games full of black box code? The end result like for ubifail for ex. are heavy losses year after year?

Really? This doesn't make any fiscal sense to me sorry.
 

C@mM!

Member
Mar 30, 2016
54
0
36
Are you privy to the trail of money? How would you know where the money is going? Let's assume what you wrote is true... Do you realize what the meaning behind what you wrote is? Devs are willingly letting gameworks crap into and breaking their games? You mean they want this on purpose w/o any compensation on the backend and as a result they get broken games full of black box code? The end result like for ubifail for ex. are heavy losses year after year?

Really? This doesn't make any fiscal sense to me sorry.

The reality lies somewhere in between.

It is true that Nvidia maintains a large number of devs to optimise game code. The kicker comes when you reach out to Nvidia and they go "Hey game guy, we have this middleware suite that will let you do xyz without any code time on your behalf".

And that's where we start ending up with shitty Gameworks titles.

On a side note, I'm getting somewhat worried about Nvidia. We've seen little movement in the mobile GPU and low-end GPU markets. It feels like Nvidia may be chasing a big-die strategy in order to win the 'performance crown', so that people who usually only have the cash for a small-die GPU go "Nvidia has the fastest card" and buy some gimped thing that AMD outperforms.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
I'll be holding off on any GPU upgrade until this whole thing runs its course. AMD isn't an option for me because I'm running a G-Sync monitor, but Pascal may not be an option either if it's lacking.
 

Dribble

Platinum Member
Aug 9, 2005
2,076
611
136
Are you privy to the trail of money? How would you know where the money is going? Let's assume what you wrote is true... Do you realize what the meaning behind what you wrote is? Devs are willingly letting gameworks crap into and breaking their games? You mean they want this on purpose w/o any compensation on the backend and as a result they get broken games full of black box code? The end result like for ubifail for ex. are heavy losses year after year?

Really? This doesn't make any fiscal sense to me sorry.

If you left your AMD echo box you'd realise Gameworks isn't crap; it's easily the best set of libraries for adding these additional effects to games. In addition, Nvidia will provide dev support to integrate those effects into your game, and they have huge testing suites to check that games work with Nvidia cards. Outside of forums like these, no one thinks Gameworks is bad - those extra effects sell games because they make the game look cool.

Nvidia don't need to pay devs anything - the libraries and the support/testing are worth a fortune to most devs, who don't have time to develop those features and don't have time to test because the game needed to be released yesterday. Nor would Nvidia want to spend anything - they are in it to make money. If they wanted to spend anything they'd reduce the price of their cards, which have plenty of margin on them; that would kill AMD sales faster than trying to buy out a few devs.

Anyway back on topic - because Nvidia has the better software support, they have significantly larger R&D spend, they have the stronger dev relations - you know when Pascal comes out it will run games well. Doesn't matter how many Async threads get started and how many times the echo "Async will kill Nvidia" is heard, it won't matter a bit - the cards will get released and they will end up the fastest again.
 

caswow

Senior member
Sep 18, 2013
525
136
116
If you left your AMD echo box you'd realise Gameworks isn't crap; it's easily the best set of libraries for adding these additional effects to games. In addition, Nvidia will provide dev support to integrate those effects into your game, and they have huge testing suites to check that games work with Nvidia cards. Outside of forums like these, no one thinks Gameworks is bad - those extra effects sell games because they make the game look cool.


I mean, we have a mass of benchmarks and videos, and people like you still insist that Nvidia is doing everything to make "Gameworks effects run well on their hardware"? What the hell D:

Your whole post reads like it was written by a PR person.
 

96Firebird

Diamond Member
Nov 8, 2010
5,712
316
126
Are you privy to the trail of money?

Are you? I realize it fits the agenda to assume Nvidia is throwing money at all these companies, but when you wake up in the real world you will realize what Tuna-Fish is saying is much closer to reality.
 

airfathaaaaa

Senior member
Feb 12, 2016
692
12
81
Are you? I realize it fits the agenda to assume Nvidia is throwing money at all these companies, but when you wake up in the real world you will realize what Tuna-Fish is saying is much closer to reality.
The last RotR is actually the epitome of what he is saying... unless of course someone is to believe that they jumped onto the Gameworks train overnight because of... reasons.
 

tential

Diamond Member
May 13, 2008
7,355
642
121
I mean, we have a mass of benchmarks and videos, and people like you still insist that Nvidia is doing everything to make "Gameworks effects run well on their hardware"? What the hell D:

Your whole post reads like it was written by a PR person.
Nvidia has a huge dev team! Yet they only optimize for Maxwell and can still end up behind AMD's older hardware.

This just sounds bad no matter how the Nvidia hopefuls try to spin it.
 

xthetenth

Golden Member
Oct 14, 2014
1,800
529
106
Doesn't matter how many Async threads get started and how many times the echo "Async will kill Nvidia" is heard, it won't matter a bit - the cards will get released and they will end up the fastest again.

For how long will the cards be the fastest this time before getting passed in performance? After all, the 970's getting passed by a card over a year its elder because somehow all that time wasn't enough to get NV a forward looking architecture. With the way games look now, I rather doubt that it's possible to make a stripped down shortsighted architecture perform the same way the last few have for NV just because optimizing for DX 12 games is optimizing for GCN.
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,421
1,755
136
Are you privy to the trail of money? How would you know where the money is going?

Was an employee-partial owner of a small private company that participated in TWIMTBP. (Gameworks is after my time.) Yes, I had access to our financials. We never got or wanted a cent for it. We got something far more valuable.

Let's assume what you wrote is true... Do you realize what the meaning behind what you wrote is? Devs are willingly letting gameworks crap into and breaking their games? You mean they want this on purpose w/o any compensation

The compensation is that their games which used to be more broken, get less broken. There are some very weird ideas about game dev around here. Most game dev studios work on basically shoestring budget, always close to a deadline which must be met because after it there is no money to pay salaries. Games are not shipped broken because devs are lazy, they are shipped broken because there is no time to fix. Also, most game dev studios don't employ brilliant GPU programmers who know the intimate details of making good shader code. The guys writing that are mostly just the same programmers who do everything else. What nV offers (offered?) was help optimizing and debugging from professionals who intimately knew what they were doing, and testing setups on basically all nV hardware ever.

Many game companies have been literally saved because an nV engineer answered an email with a detailed explanation of why performance had seemingly inexplicably cratered to a level that was just unplayable. DX and OpenGL are both complicated to the point that only people who specialize in them exclusively can really understand all the pitfalls.

on the backend and as a result they get broken games full of black box code? The end result like for ubifail for ex. are heavy losses year after year?

Ubisoft would be one of those big companies making huge AAA titles that can hire specialists and possibly play by different rules.
 

Erenhardt

Diamond Member
Dec 1, 2012
3,251
105
101
For how long will the cards be the fastest this time before getting passed in performance? After all, the 970's getting passed by a card over a year its elder because somehow all that time wasn't enough to get NV a forward looking architecture. With the way games look now, I rather doubt that it's possible to make a stripped down shortsighted architecture perform the same way the last few have for NV just because optimizing for DX 12 games is optimizing for GCN.

Funnily enough, we had a thread here recently where a guy wanted to upgrade from a 780 Ti to a 390X. The 780 Ti was 10% faster than Hawaii at release. Now it is 10-20% slower.

Having the async advantage will be a third or fourth second wind for AMD cards. (Learning to play with English.)
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
Was an employee-partial owner of a small private company that participated in TWIMTBP. (Gameworks is after my time.) Yes, I had access to our financials. We never got or wanted a cent for it. We got something far more valuable.



The compensation is that their games which used to be more broken, get less broken. There are some very weird ideas about game dev around here. Most game dev studios work on basically shoestring budget, always close to a deadline which must be met because after it there is no money to pay salaries. Games are not shipped broken because devs are lazy, they are shipped broken because there is no time to fix. Also, most game dev studios don't employ brilliant GPU programmers who know the intimate details of making good shader code. The guys writing that are mostly just the same programmers who do everything else. What nV offers (offered?) was help optimizing and debugging from professionals who intimately knew what they were doing, and testing setups on basically all nV hardware ever.

Many game companies have been literally saved because a nV engineer answered to an email with a detailed explanation why performance seemingly inexplicably cratered to a level that was just unplayable. DX and OpenGL are both complicated to the point that only people who specialize in only them can really understand all the pitfalls.



Ubisoft would be one of those big companies making huge AAA titles that can hire specialists and possibly play by different rules.

Yeah I'm not sure what world people are living in here, but programmers with enough skill and knowledge to optimize games at low levels (e.g. write assembly for CPUs and minimize bottlenecks on massively parallel GPU code) are rare and extremely well paid as they are massively in demand. In case anyone hadn't noticed, the actual computer science of realistic computer generated graphics is literally some of the most advanced technology we've ever come up with as a human race. Most people in game dev are more regular joe types who can use the tools (e.g. art, sound, etc.) and generalized programmers who have to be jack of many trades.

If anyone here wants a really well paid job, go learn Fortran or COBOL well enough to write high performance code and work for financial institutions. There's hardly anyone left to support these real time financial systems the world runs on, and they're written in COBOL. Sure, you'll want to dig your eyes out with a spoon working on COBOL. But they clear six figures starting and $200k is not unheard of.
 

Headfoot

Diamond Member
Feb 28, 2008
4,444
641
126
It's like saying that 10k Rolex you got is not actually about money. Not at all.

A spade, a shovel.

So when you have expensive, complicated equipment you need to make work, you never call the manufacturer for help? This goes for literally any machine you can buy.
 

thesmokingman

Platinum Member
May 6, 2010
2,307
231
106
Was an employee-partial owner of a small private company that participated in TWIMTBP. (Gameworks is after my time.) Yes, I had access to our financials. We never got or wanted a cent for it. We got something far more valuable.


Ubisoft would be one of those big companies making huge AAA titles that can hire specialists and possibly play by different rules.


About your experience, I think you're speaking in classical terms, of how things once were done: the days when devs dev'd their own games and got help when they needed it. You seem to have an idealized memory of it, though that's not what I take issue with. However, as you stated, you have no experience with GW, and you admit that companies like ubifail can be working under a different model.

I'd like to ask, if I may, whether you would allow black box code into your games, knowing now how some companies have done so and suffered losses? Where does the motivation to use black box code come from in the face of huge losses? The results of these partnerships usually end up worse for the game and its developer.

The gaming landscape is littered with examples of GW fails. Far Cry 4 vs Far Cry Primal is a perfect example of using proprietary black box code vs not using it.


2016-2014 losses
http://www.gameinformer.com/b/news/...sts-losses-for-first-half-of-fiscal-2016.aspx
http://www.gamesradar.com/despite-delays-losses-ubisoft-just-fine/
http://www.gameinformer.com/b/news/...-net-loss-for-year-sales-down-20-percent.aspx
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
If you left your AMD echo box you'd realise Gameworks isn't crap; it's easily the best set of libraries for adding these additional effects to games. In addition, Nvidia will provide dev support to integrate those effects into your game, and they have huge testing suites to check that games work with Nvidia cards. Outside of forums like these, no one thinks Gameworks is bad - those extra effects sell games because they make the game look cool.

How are you accurately gauging public opinion if not by forums?

Furthermore, outside of forums like these, console gaming > PC gaming. Does that make it true?
 

linkgoron

Platinum Member
Mar 9, 2005
2,334
857
136
If you left your AMD echo box you'd realise Gameworks isn't crap; it's easily the best set of libraries for adding these additional effects to games. In addition, Nvidia will provide dev support to integrate those effects into your game, and they have huge testing suites to check that games work with Nvidia cards. Outside of forums like these, no one thinks Gameworks is bad - those extra effects sell games because they make the game look cool.

...and that's why Gameworks kills performance, because it's so optimized. That's why a 980ti just barely gets 60fps average @ witcher 3 with HairWorks @1080p (!!!), Titan X got less than 50fps average @ Project cars at launch @ 1080p (I couldn't find newer benchmarks, although I've seen that the MSI lightning 980ti gets 47 FPS at 1440p @[h])

Seriously, you can't say that GameWorks titles are optimized, even for Nvidia cards. They're not. Those so-called "cool" features are basically unplayable, so what do they really give you? At launch, the 980 (at the time the best card out) got 56 FPS @1080p with a 28 FPS minimum with HW on; the 970 had 43 FPS with a 21 FPS minimum @1080p. The cards gained around 20 FPS on their averages when HairWorks was turned off. That just doesn't sound very optimized.

BTW, I'm not really sure how this GW discussion is related to the original reason of the thread.

Number sources:
http://in.ign.com/nvidia-geforce-gt...a-geforce-gtx-980-ti-hands-on-with-benchmarks
http://www.techspot.com/review/1000-project-cars-benchmarks/page2.html
http://www.techspot.com/review/1006-the-witcher-3-benchmarks/page6.html
http://www.hardocp.com/article/2015...ti_lightning_video_card_review/5#.Vvv0SuJ96Uk
 

Glo.

Diamond Member
Apr 25, 2015
5,761
4,666
136
If you left your AMD echo box you'd realise Gameworks isn't crap; it's easily the best set of libraries for adding these additional effects to games. In addition, Nvidia will provide dev support to integrate those effects into your game, and they have huge testing suites to check that games work with Nvidia cards. Outside of forums like these, no one thinks Gameworks is bad - those extra effects sell games because they make the game look cool.

Does Gameworks make games better in any way, shape or form? Does it make gameplay more enjoyable? Does Gameworks brand bring more pleasure to the experience of the game?

No.

The same questions can be asked about the Gaming Evolved initiative.
 