Jos55efina
Junior Member
- May 10, 2013
what the hell is that?
If you are referring to the poster right above you, then it's just the daily spam.
In the PC world, developers do not spend much time on optimization because hardware is continuously evolving.
If you ask me, hardware is evolving so fast these days that I doubt the new consoles will survive for 7 years. I think that a five-year span is more realistic.
It's always seemed to me that the companies making the GPU drivers are the ones that end up iterating to improve performance. Although, to be fair, I don't have insider access to what goes on, and there's probably quite a bit of communication between the studios and the hardware manufacturers to help increase performance.
Hardware has been evolving even during the last console generation, and that didn't stop them from squeezing as much life as they could from it.
I would actually argue that hardware is evolving a lot slower than it was at the beginning of the last console cycle, especially CPUs.
Maybe for desktops, but low-power mobile tech is moving much faster. Like I said earlier in the thread, phones/tablets will probably match the CPU power of the PS4 before it is halfway through its cycle.
For those who are interested, there are some publications about Killzone's tech over at Guerrilla Games' site:
http://www.guerrilla-games.com/publications/
I think we were running on all the different cores for all the different systems, including AI and gameplay, which we've never done before. We've never utilised the CPU power we've had like this. We had about sixty guys running around in the demo, which is far over what we've been able to do before; that's about three or four times more.
Animation networking quality was much higher. I think it's just basically been proven that the amount of time it takes AI programmers to get their code up and running in parallel is so much easier that it just enables us to do much more. Of course we were optimising towards 30fps, making sure we didn't drop a frame, or that we dropped a few frames but not very many, basically just making sure it ran smooth. And this is a launch title, we've just got new hardware and we weren't using some of the hardware acceleration for stuff like audio at the time we did the demo, which we have now done. So I think there's a lot more left in the system.
They only used 3 GB of RAM in Killzone Shadow Fall.
The following interview, emphasizing the performance, is interesting as well:
http://www.edge-online.com/features/killzone-shadow-fall-and-the-power-of-playstation-4/
LOL... they used only 6 cores, and the audio was done on the CPU.
Looks like Sony should have axed some cores, IMO, and brought more shaders.
Most of the time something uses "only" x amount of ram is because that's all the ram it needs. It doesn't mean it's going to magically perform better if it used more.
Wrong. It uses 'x' amount of RAM because it was developed with that in mind. You can offload more assets to RAM... hell, you can put the whole game into RAM and never read data from the HDD. But the average gamer doesn't have enough memory, and games being ported from a console with 256 MB of RAM can't support such a thing.
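As a toy sketch of that idea (a hypothetical "assets" directory and helper names, not anything from a real engine), you can preload every file into RAM once at startup, so that later reads never touch the HDD:

```python
from pathlib import Path

# Hypothetical asset directory; in a real game this would be the full data set.
ASSET_DIR = Path("assets")

def preload_all(asset_dir: Path) -> dict[str, bytes]:
    """Read every asset into RAM once, so gameplay never touches the disk."""
    return {p.name: p.read_bytes() for p in asset_dir.glob("*") if p.is_file()}

cache = preload_all(ASSET_DIR)

def load_asset(name: str) -> bytes:
    # Served straight from memory; the HDD is only hit during the preload.
    return cache[name]
```

The catch is exactly the one described above: the whole data set has to fit in memory, which the average gamer (and a port from a 256 MB console) doesn't have.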
BTW: Metro: Last Light uses less than 2 GB for everything. So using a lot of VRAM does not mean that it is necessary.
That may be the reason why it tanks so hard...
Is that RAM or VRAM or both?
The relationship between RAM and performance is not linear. They will likely hit a GPU bottleneck long before they can use all that RAM (8 GB).
"I think it was for more than a year that we knew the main ingredients and there was just discussion after discussion trying to find a bottleneck," he added. "We actually had parts of both Killzone 3 and very early Killzone 4 art assets running through simulators to try to find out how it would behave on our speculative hardware that didn’t exist – trying to find bottlenecks in the hardware that we could fix before we could even think about the chip."
Developers Won't Find a Performance Bottleneck
We’ve never utilised the CPU power we’ve had like this.
GPU is really fast!
Try to run Crysis 2 on 256 MB of RAM + 256 MB of VRAM (as in the old consoles) and then tell me about the massive performance loss on PC.
3 GB of VRAM. 1.5 GB system memory.
Who said anything to you about it being linear? Exactly... nobody.
Nope, the more VRAM there is, the less work the GPU has to do. For example, texture compression.
Can you explain this to me? I always thought texture compression used very few resources (similar to how you can use ultra textures on a low-end card as long as you have enough VRAM). Many reviews have shown no performance gain for GPUs with additional VRAM unless VRAM is being maxed out (a 4 GB 680 performs the same as the 2 GB version, a 1 GB 7850 the same as the 2 GB version, as long as you are not VRAM-limited).
you can use ultra textures on a low-end card as long as you have enough VRAM
You just answered yourself. Ultra textures come with close to no performance penalty compared to 'high' textures, as long as you can offload them without doing much decompression. It works something like this:
You take matrix 'A' and multiply it by matrix 'B'. A is your compressed data, and B is the decoding algorithm.
[A] × [B] = [C]
Depending on the sizes of matrices 'A' and 'B', the resulting matrix 'C' has the combined dimensions:
Matrix A is 'n' by 'm'
Matrix B is 'm' by 'p'
then Matrix C is 'n' by 'p'
You want your matrix C (the decompressed data) to be bigger than A and B. To achieve that, the dimensions have to be:
n > m and p > m.
If you have enough RAM (video memory as well as system memory), you don't have to process as much, because you can keep the already-decompressed matrix C stored in memory.
Well... I don't speak English well enough to explain that better. I hope it is readable.
Here is a picture that explains what I'm talking about:
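To make the analogy concrete, here is a minimal NumPy sketch of the trade-off being described (the dimensions are illustrative only, not from any real codec): either keep the compressed A and the decoder B and pay a multiply on every use, or spend the memory to cache the decompressed C once.

```python
import numpy as np

# Dimensions chosen so that n > m and p > m, as described above:
# the decompressed result C is larger than the compressed inputs.
n, m, p = 1024, 64, 1024

A = np.random.rand(n, m)  # 'A' - the compressed data (n x m)
B = np.random.rand(m, p)  # 'B' - the decoding matrix (m x p)

# Decompressing on the fly costs an n*m*p multiply every time C is needed.
C = A @ B                 # 'C' - the decompressed data (n x p)

compressed_footprint = A.size + B.size  # n*m + m*p =   131,072 floats
decompressed_footprint = C.size         # n*p       = 1,048,576 floats

# With enough RAM/VRAM you store C once and never multiply again;
# with little memory you keep only A and B and redo the multiply
# (i.e., decompress) whenever the data is needed.
print(compressed_footprint, decompressed_footprint)
```

With these numbers, A and B together take an eighth of the memory of C, which is exactly the space-versus-work trade the post is describing.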
The gaming PC that Epic and other developers are targeting for the first PS4 demos is an i7 + 16 GB + GTX 680 (2 GB).
You just answered yourself.
Texture compression, however, is standardized, uses small fixed-size blocks, and has dedicated hardware in modern GPUs, all inside the chips. At this point, it is free bandwidth savings, except for the pixel-level quality loss (which is generally not perceptible).
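For a sense of scale, a back-of-the-envelope sketch (my numbers, assuming BC1/DXT1, which packs each 4x4 pixel block into a fixed 8 bytes that the GPU's texture units decode in hardware):

```python
# Footprint of a 2048x2048 texture: raw 32-bit RGBA vs. BC1/DXT1 blocks.
width, height = 2048, 2048

uncompressed = width * height * 4          # 4 bytes per pixel
bc1_blocks = (width // 4) * (height // 4)  # one 4x4 block -> 8 bytes
bc1 = bc1_blocks * 8

print(uncompressed / 2**20, bc1 / 2**20)   # 16.0 MiB vs. 2.0 MiB
```

That 8:1 saving applies to every texture fetch, which is why the bandwidth win is effectively free once the decode is done in fixed-function hardware.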