SteveGrabowski
> I agree. But it will really only be a problem for 12GB GPUs when games start getting designed for PS6/XB-Next.

Hope so, since I'm on a 12GB GPU.
> Hope so, since I'm on a 12GB GPU.

Same. Since I was the one who specced out these computers, it would be funny (not really) if my older brother's 3080 Ti with 12 GB has less staying power than my little brother's RX 6800, even though the 3080 Ti has roughly a 25-30% performance advantage at 4K. The only saving grace for the 3080 Ti at this point is if every game my older brother plays is either 1) not a modern AAA title that gobbles VRAM, or 2) has DLSS so that it's effectively only 1440p internally.
As I mentioned in the post above, X GB with high-speed asset streaming is superior to a flat X GB. The better assumption is that you will need more VRAM on a PC graphics card for an equal experience.
> That technology is being added to DirectX so it won't matter that much in terms of console vs. PC. More specifically, it was only going to see extensive use in first or second party Sony exclusives. It's a great idea, though, which is why Microsoft adding it to DirectX is a big deal, but it shouldn't be anything to consider for most games.

Do games have to be designed for it, or can it be added retroactively?
> The consoles have at least 12 GB plus high speed streaming. This is superior to a flat 12 GB on the PC, as you appear to be equating.

Also, PCs have access to Ultra quality while consoles typically use a mix of medium/high settings. Ultra = more VRAM.
> Now that the 4070 is out, it seems that one makes a ton more sense than the 3060 Ti I thought I wanted when I opened this thread. However, many good arguments were made here (and many, many more by the review stampede on YouTube) for the 6800 XT and the 6950 XT as no-brainer alternatives. Here is my question: for NON-GAMING use, what if any issues will I have with one of those two AMD cards if I go that route? I'm not into anything that taxing, so I doubt I would run into any limits, but I'd like to know what ones are lurking out there that I can't see due to my lack of experience.
>
> ANY of these cards are overkill for me (currently), but I do want a capable desktop, and I probably will get back into more serious gaming with a computer that can handle it (my current machine has a first-gen Intel i7 CPU and a 660 Ti GPU). If I start working with large RAW files from a DSLR or do some drone video editing, am I going to run into any debilitating issues with an AMD card mentioned above?

I think it depends on what you are using for video editing. My understanding is that GPU encoding is faster, but CPU encoding is higher quality. Many just use the CPU for video editing, though it can be done with GPU rendering support, depending on the application and what is supported. I'm not that knowledgeable about video editing, though, so maybe someone else could give a recommendation for video editing software.
> I think it depends on what you are using for video editing. My understanding is that GPU encoding is faster, but CPU encoding is higher quality.

You've got it right. Dedicated card encoders, like the new AMD one with Xilinx tech, work like a video card but use other accelerators to produce output as close to software encoding as possible, only faster. It's the logical next step in hardware-based encoding that slowly trickles down to consumer-priced hardware. The new card is around $1,500, and it may take another 2-4 generations before we see similar tech in Intel and AMD CPUs.
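To make the speed-versus-quality tradeoff concrete, here is a minimal sketch that runs the same clip through a hardware (NVENC) encode and a software (libx264) encode with ffmpeg. Assumptions: ffmpeg is on your PATH and built with NVENC support, and "input.mp4" is a placeholder filename.

```python
# Sketch: GPU (NVENC) vs. CPU (libx264) encodes of the same clip.
# Assumes ffmpeg is on PATH and compiled with NVENC support;
# "input.mp4" is a placeholder source file.
import subprocess

SOURCE = "input.mp4"  # placeholder input clip

# GPU encode: runs on the card's fixed-function encoder block,
# so it finishes fast and barely touches the CPU.
subprocess.run(
    ["ffmpeg", "-y", "-i", SOURCE, "-c:v", "h264_nvenc", "gpu_encode.mp4"],
    check=True,
)

# CPU encode: much slower at this preset, but the software encoder
# generally gives better quality per bit, as described above.
subprocess.run(
    ["ffmpeg", "-y", "-i", SOURCE,
     "-c:v", "libx264", "-preset", "slow", "-crf", "18",
     "cpu_encode.mp4"],
    check=True,
)
```

At similar bitrates, the libx264 pass at a slow preset should generally look better, while the NVENC pass will typically finish several times faster.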
AI has become a buzzword, but there is a real market for it. The Kuwaitis have presented an AI model reading the news in Arabic, but it's not good; the voice doesn't match the person. If there is one company that can pull it off, it's Nvidia.

The coolest stuff I've seen since then has been Meta's live translation AI that reformulates the speaker's voice into the language of preference. Very cool stuff. Remember in the '80s how you'd go places if you knew enough Japanese? It's like that now, but for the company that can develop the best hot product. Ya gotta love capitalism and smart people, baby!
> The big AI splash lately is obviously ChatGPT, which seems able to do anything you ask of it in the text medium. Write you an essay or short story? Check. It can even write source code in the computer language of your choice. Write a script for a TV show. Write your resume, etc.
>
> But things like ChatGPT are large models trained on massive data sets; you probably won't run one on your local machine, though I could see targeted versions running locally, like a programmer's assistant that checks your source code and writes common routines to lift productivity.
>
> A lot of the generative art AI like Stable Diffusion lets you run your own local version.
>
> I would really like the extra AI HW in my next GPU.

I know what one of those services is, and from my own experience trying it on a coworker's phone, it's terrible.
> ?? ChatGPT is widely regarded as kind of amazing. Even if there are rough edges, it's easy to see how this is revolutionary technology.
>
> Many tech leaders are appealing for a pause on potentially disruptive technology:

I used it during the 2nd round; I see it's on round 4 now. I'll have to try it out one day, but I'm not expecting anything remarkable. I know from reading that some like it because it's more direct than googling a question, but I think most people are too daft to know how to google. Not people here on this forum, but most people out there: the average person who thinks magic goes on inside their computer, little elves and a litany of other dark forces.
> You can literally have it write software for you.

I doubt it would help someone who has no idea about code or compilation. It's impressive as a tech demo, but it's still in its infancy. We are far, far away from HAL 9000.
> Here is my question: for NON-GAMING use, what if any issues will I have with one of those two AMD cards if I go that route?

If you feel the need to ask this question, and I say this with no malice or ill intent, don't bother exploring the AMD side and go with an Nvidia card, as you are looking for the familiar experience and safety of the brand you know. If the 4070 suits your budget, go with this SKU. It's a decent card, and even if it may run out of steam in a couple of years (in terms of VRAM), it will happen gradually; you'll probably lose RT first and cruise along for a while longer, whereas on the AMD side RT would be lost even sooner.
> If I start working with large RAW files from a DSLR or do some drone video editing, am I going to run into any debilitating issues with an AMD card mentioned above?

Anything you buy that has been discussed in this thread would be able to handle your content creation needs. You can check some benchmarks here, with the caveat that your needs are more likely at the bottom of the charts than at the top (in the sense that you don't need a semi-professional setup, just as you don't need a high-end gaming rig).
Using the base models with 16-bit data, for example, the best you can do with a card that has 24GB of VRAM is to run the model with seven billion parameters (LLaMa-7b). That's a start, but very few home users are likely to have such a graphics card, and it runs quite poorly. Thankfully, there are other options.

Loading the model with 8-bit precision cuts the RAM requirements in half, meaning you could run LLaMa-7b with many of the best graphics cards — anything with at least 10GB of VRAM could potentially suffice. Even better, loading the model with 4-bit precision halves the VRAM requirements yet again, allowing LLaMa-13b to work on 10GB of VRAM. (You'll also need a decent amount of system memory, 32GB or more most likely — that's what we used, at least.)

Getting the models isn't too difficult, but they can be very large. LLaMa-13b, for example, consists of a 36.3 GiB download for the main data, and then another 6.5 GiB for the pre-quantized 4-bit model. Do you have a graphics card with 24GB of VRAM and 64GB of system memory? Then the 30 billion parameter model is only a 75.7 GiB download, and another 15.7 GiB for the 4-bit stuff. There's even a 65 billion parameter model, in case you have an Nvidia A100 40GB PCIe card handy, along with 128GB of system memory (well, 128GB of memory plus swap space). Hopefully the people downloading these models don't have a data cap on their internet connection.
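If you're curious how those 8-bit and 4-bit options look in practice, here's a minimal sketch using the Hugging Face transformers + bitsandbytes stack. That's one common route, not necessarily the tooling the excerpt above used, and the model path is a placeholder for wherever your weights live.

```python
# Sketch: loading a LLaMA-style checkpoint at reduced precision to
# shrink VRAM use. Assumes the transformers and bitsandbytes packages
# are installed; "path/to/llama-13b" is a placeholder path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/llama-13b"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Rough weight-memory math for a 13B-parameter model:
#   16-bit: 13B params x 2 bytes ~= 26 GB (too big for a 24GB card)
#    8-bit: ~13 GB
#    4-bit: ~6.5 GB, which is how 13B squeezes into 10GB of VRAM.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spill layers to system RAM if VRAM runs out
    load_in_4bit=True,   # 4-bit quantization via bitsandbytes
)

prompt = "Quantization saves memory because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```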
> Where is the evidence that a 12GB card won't handle the bad console ports? Consoles can access more than 8GB of VRAM, but they can't access more than 12GB.

I have heard there is a configuration setting for the Series X which games can use that increases the allocation for the VRAM pool from 10 to 13GB. I don't know any details about how it works; I just heard about it from someone who used it. You'd probably never know which games are using it or not.
Here is the memory allocation for an XBSX:

Xbox Series X Allocates 13.5 GB of Memory to Games (gamingbolt.com)

That is 10GB of GPU-optimal memory (IOW, VRAM) and 3.5GB of regular RAM for the rest of the game.
So sure, a bad port with easy access to 10GB of VRAM on consoles can lead to problems on 8GB cards, but not on 12GB cards.
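For context, the arithmetic behind the 13.5 GB figure (pulling from Microsoft's published Series X specs, which aren't quoted in this thread):

16 GB GDDR6 total - 2.5 GB reserved for the OS = 13.5 GB available to games
13.5 GB = 10 GB "GPU optimal" (560 GB/s) + 3.5 GB standard (336 GB/s)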
I'd bet on the 4070 easily beating the 6800 XT, on top of having the usual Nvidia ecosystem advantages.
Definitely worth waiting 3 days for some reviews...
> If you feel the need to ask this question, and I say this with no malice or ill intent, don't bother exploring the AMD side and go with an Nvidia card, as you are looking for the familiar experience and safety of the brand you know.

Why? It only matters if he uses software that has specific Nvidia optimizations.
> Why? It only matters if he uses software that has specific Nvidia optimizations.

Because the bias is already there. If somebody comes to you for advice and then keeps asking for reassurance, they're not asking only for advice. At the end of the day the OP must be comfortable with their purchase, and sometimes the arguably better choice is the one that makes the user happier, not necessarily the objectively superior one.
> Because the bias is already there. If somebody comes to you for advice and then keeps asking for reassurance, they're not asking only for advice.

Some people don't even know if the video card plays a role. I do some video and photo editing, but neither program is accelerated by Nvidia, so it doesn't matter which video card is in the system. If we were to give the best advice, we would need to know which software he uses and how often he uses it.
As a side note, I took a quick look at prices in my local Eastern European market: the 4070 mostly matches the 6800 XT in price here. AMD has very low availability for all N21-based cards. The one exception is a store selling a Sapphire 6950 XT Nitro+ at around a $60 premium over the 6800 XT/4070 level. That's probably the best deal I've seen locally since Black Friday, when I bought my 6800 XT.