Ryzen 3000 series
Old 11th September 2019
  #571
Gear Nut
 

Be Quiet! Dark Rock or Noctua U12S? Which one do you think will run quieter under load?
Old 11th September 2019
  #572
Lives for gear
 
Pictus's Avatar
 

Quote:
Originally Posted by Tom Barnaby View Post
Be Quiet! Dark Rock or Noctua U12S? Which one do you think will run quieter under load?
https://www.tweaktown.com/reviews/90...ew/index6.html
Old 11th September 2019
  #573
Gear Nut
 

Thanks a lot.
Old 11th September 2019
  #574
Gear Nut
 
MindMemories's Avatar
 

Hey!

I have the Noctua NH-U12S SE2. Very quiet operation. You get an extra pair of mounting clips for installing a second fan, as an exhaust fan of course. However, the official "second fan" is screamingly expensive. With the second set of clips you can use any other 120mm fan; as long as the specs match or surpass the original fan's, it's all good I think.

Anyway, Noctua is very popular in any air-cooler round-up. I like 'em.

Old 11th September 2019
  #575
Gear Nut
 

Quote:
Originally Posted by MindMemories View Post
Hey!

I have the Noctua NH-U12S SE2. Very quiet operation. You get an extra pair of mounting clips for installing a second fan, as an exhaust fan of course. However, the official "second fan" is screamingly expensive. With the second set of clips you can use any other 120mm fan; as long as the specs match or surpass the original fan's, it's all good I think.

Anyway, Noctua is very popular in any air-cooler round-up. I like 'em.

Thank you. I just noticed that there is the U12S and the U12A, which has a different fan and seems to be much more expensive.
Old 11th September 2019
  #576
Gear Addict
 
PitchSlap's Avatar
 

Quote:
Originally Posted by wr41th View Post
@ Coolers.

It's true the DRP4 and Noctua D15[S] are top of their line, but like the other big coolers (Scythe Ninja etc.) they won't fit the RAM that, to my understanding, was mentioned in this thread as needed to hit the required latencies / sweet spot. There seems to be neither 3200/CL14 nor 3600/CLx RAM low-profile enough to fit under them.
Thanks, this is important to know. I'll be sure to check carefully.

I guess the hard part is finding the right compromise between cooling/noise and RAM performance.
Old 12th September 2019
  #577
Lives for gear
 
Lesha's Avatar
Techpowerup review of Agesa 1.0.0.3 ABBA

Quote:
The results of our clock-speed analysis and performance tests bode well for AMD, beginning with the CPU tests. In applications that don't scale across cores, such as a single process of web-rendering as tested in Google Octane, we see a significant 6 percent increase. Other web-renderers, such as Mozilla Kraken and WebXprt, post 2 percent gains, each. SuperPi is a good example of a test that doesn't scale beyond a single core and posts roughly a 2.5 percent gain. Similar gains of 0.5 to 2 percent were noticed across the board—with a few notable exceptions, where performance was reduced.

Cinebench 1T is a surprise here as performance remains flat and is in fact reduced by a fraction of a percent. Same goes for 7-zip decompress and Photogrammetry. Microsoft Excel and Adobe Photoshop posted performance losses ranging between 3-4 percent. Overall, AGESA 1.0.0.3ABBA belts out a net performance gain of 0.88 percent and has a marginally positive impact on performance in CPU tests.

Gaming presents a different set of results. In the academically important resolution for CPU testing, 720p, which highlights CPU bottlenecks, we see 6 out of 10 games post frame-rate gains ranging between 1.5-2.25 percent, and none of the games losing any performance, leaving the average at a positive 1.1 percent. The popular Full HD (1080p) sees 2 out of 10 titles lose performance by 1 percent, one of the titles gain 2 percent, while the rest stay mostly flat, giving us an average of 0.36 percent for this resolution. 1440p remains dead flat with no performance losses posted and a net gain of 0.44 percent. This is a resolution where the RTX 2080 Ti is still in its element. At 4K Ultra HD, where the RTX 2080 Ti is pushed to the wall, performance is mostly flat with no performance losses, averaging +0.62 percent.

Power-consumption of the Ryzen 9 3900X with the new AGESA 1.0.0.3ABBA comes as a pleasant surprise. For the kind of clock-speed and performance gains we're seeing, small as they are, they do not come at significant cost of power. Idle power draw is cut by 1 W, and single-threaded tests, which is where we were expecting the most deviation in power, post a mere 1 W gain in power draw (from 92 W to 93 W for the whole system). Multi-threaded power-draw increases by 7 W for the whole system, which is probably because all 24 threads on this chip are being boosted slightly higher, which add up. There's a similar 13 W increase in gaming power draw. This is probably because games tend to tell the processor to stay on its toes and be ready to boost all cores. Under a stress test, where the processor has reached its thermal and electrical limits, the new BIOS makes no difference, which reflects in the power draw staying the same.

AMD has succeeded in delivering on the advertised maximum boost frequencies with elevated clock speeds across all cores, which results in tiny performance gains at negligible increases in power draw. We do recommend AGESA 1.0.0.3ABBA over your existing BIOS provided you know how to update your motherboard BIOS and are willing to do it at your own risk. We appreciate AMD constantly listening to PC enthusiasts and coming out with solutions, rather than basing their customer feedback on some passive data-collection program that's been pushed down users' throats by OEMs.


Old 12th September 2019
  #578
Lives for gear
 
juiseman's Avatar
 

Quote:
Originally Posted by dseetoo View Post
I came to the 3900X from an 8700K OC'ed to 5.2GHz. I use Sequoia as my DAW, and I use a fair amount of iZotope RX. The 3900X wins on every count, especially ASIO buffer size. My interface is an RME UFX. Working on a 96kHz project, I am at the smallest ASIO buffer of 96 allowed by the RME driver; the Intel 8700K couldn't do that. Sequoia is not great with multi-core support, but the 3900X wins anyway. On iZotope RX, which has great multi-core support in its noise reduction, the 3900X runs 225-250% faster.

On other apps, such as Lightroom, 3900X is also dramatically better.
Thanks for sharing that. This kind of info is very useful. Raw facts...
Same software, drivers, interface... a 3900X beating an 8700K OC, with better performance at low buffers.

I was under the impression AMD was still at a slight disadvantage at lower buffer settings. That clearly is not the case.

Could you share your exact system build specs? Like RAM brand and speed, motherboard, storage options? And which Windows 10 version are you on? (I'm assuming 10.) But there are still some Windows 7 and 8.1 renegades like myself.


EDIT: I still use and like Win 10 for all my casual PCs; just not my main production rig. I didn't want to start that kind of argument here...
Old 13th September 2019
  #579
Gear Addict
 

Quote:
Originally Posted by juiseman View Post
Thanks for sharing that. This kind of info is very useful. Raw facts...
Same software, drivers, interface... a 3900X beating an 8700K OC, with better performance at low buffers.

I was under the impression AMD was still at a slight disadvantage at lower buffer settings. That clearly is not the case.

Could you share your exact system build specs? Like RAM brand and speed, motherboard, storage options? And which Windows 10 version are you on? (I'm assuming 10.) But there are still some Windows 7 and 8.1 renegades like myself.


EDIT: I still use and like Win 10 for all my casual PCs; just not my main production rig. I didn't want to start that kind of argument here...


My 3900X rig is nothing fancy: it has an ASRock Phantom Gaming 4 motherboard and 64GB of G.Skill RAM running at 3000 with timings of 16. The video card is an Nvidia GTX 1060 6GB. The boot drive is an old Samsung 850 EVO 500GB SSD; the OS is Windows 10, 1903 build. The CPU is clocked at 4.2GHz all-core; Vcore is 1.26-something, less than 1.3V. The power supply is an old Seasonic fanless 430W unit. What I found was that the stock Wraith cooler is not capable of keeping the CPU below 95C, and the computer would just shut down once the critical temp was reached. So I replaced it with a Noctua NH-D15, which lowered the temp by 20C, a truly amazing difference. The computer has been completely stable since the cooler was replaced. The best part is that it now runs totally silent.

I would have gotten 128GB of RAM had I realized the motherboard can support that amount. I love to use a RAMDisk as work space. It is so much faster than any SSD.
Old 13th September 2019
  #580
Here for the gear
The AMD Ryzen 3950X is set for a 30th Sept. release date, due to the new BIOS that has to be implemented by motherboard manufacturers.
It's released alongside the AMD Monitoring SDK, which lets third-party developers add 30+ metrics to their CPU monitoring software.
source link: > https://www.pcgamesn.com/amd/ryzen-c...uency-bios-fix

I haven't checked on this thread for quite a while, since early July IIRC.
But I guess most of you know by now how AMD's Infinity Fabric (IF) works in relation to RAM speeds and CPU overclocking headroom.

e.g. the DDR4 G.Skill Neo 3800MHz CL14 kit set to release is the sweet spot,
running the IF at the max 1900MHz of its 1:1 ratio (both IF and RAM clock at 1900MHz),
resulting in the lowest latency out of the box: AMD specifies 67ns, 66.3ns verified by G.Skill.
Reference: > https://www.gskill.com/community/150...-X570-Platform

Yet the 2:1 ratio with higher-speed RAM is quite underestimated.
Though it adds a bit more latency, 67ns rising to 71ns for a 4800MHz CL18 kit on an MSI Godlike board for example, there's much more going on.
The IF remains at 1900MHz, but the higher-speed RAM processes data at a higher rate (4800/2 = 2400).
Albeit 4ns added latency, there's less stress on the memory controller.
The RAM CL timing is what stresses the memory controller; CL18 is less stressful than CL14 or "tighter" timings.
"Loose" timings give the controller room to breathe-
[The world-record holder in RAM back in June achieved 5886MHz with a CL latency in its upper 20s as I recall,
and a few days ago on 10 Sept a new world record was set over 6GHz at a CL31 timing.
reference: https://www.gskill.com/community/150...-Record-Speed]

-and "seems" to give the CPU more room to OC.
It's evident in the records set so far on the 3950X by overclockers.
When OC-ing their engineering-sample CPUs they have that factored in.
Take a look at their memory speeds & timings.
reference1 : 5ghz on all cores > https://www.guru3d.com/news-story/am...l-core-i9.html
reference2 : 5.4ghz World Record > https://wccftech.com/amd-ryzen-9-395...cinebench-r15/
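The 1:1 vs 2:1 arithmetic above can be sketched in a few lines. This is a rough illustration of the numbers quoted in this post, not an AMD specification; in particular, holding the fabric clock at 1900MHz in decoupled mode is an assumption taken from the post itself.

```python
# Rough sketch of Ryzen 3000 memory/fabric clock ratios as described above.
# Assumption: in decoupled (2:1) mode the fabric clock is set independently
# and held at 1900 MHz, as this post describes.
def clocks_mhz(ram_mt_s, coupled=True, fclk=1900.0):
    """Return (fabric, memory-controller, memory) clocks in MHz."""
    mclk = ram_mt_s / 2                   # DDR: clock is half the transfer rate
    uclk = mclk if coupled else mclk / 2  # controller runs half-speed when decoupled
    return (mclk if coupled else fclk), uclk, mclk

print(clocks_mhz(3800))                 # 1:1 sweet spot: everything at 1900 MHz
print(clocks_mhz(4800, coupled=False))  # 2:1: RAM clock 2400, controller 1200
```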

I'm sure people will discover more in due time.
Higher-clocked RAM optimized for AMD will surely release in the near future too, just as the 3800 CL14 now has.
Just wanted to give you all a heads-up not to dismiss the 2:1 ratio with 4000MHz+ RAM speeds, as it does make a difference.
As we audio people rely more on real-time single-thread processing, surely we'd sacrifice a mere 4ns of latency for a 500MHz surplus in memory processing speed,
if it "could" result in a better single-thread boost compared to default. Right?

That being said, you can always clock down to 3800 to get the max out of the 1:1 ratio.
I checked these top boards & their RAM speeds, verified in the QVL:
MSI > Godlike up to 4800 on RAM, Prestige Creation up to 4600 RAM
AORUS > Extreme & Master, up to 4400 RAM
ASUS > Formula + Hero up to 4600 RAM

Anyway, with fine-tuning software on board for the IF and RAM, and the ability to adjust every core on the die separately, we can configure a system to our liking.
Not to mention using the speedy RAM as a disk to run the DAW/plugins from, like I've seen people do in recent years, using RAMDisk from Dataram etc.
reference to tweakable software & how-to > https://www.youtube.com/watch?v=7w1EGPZUESU

Also good to know about the layout/signal flow of the CPU die: the higher-speed 10Gbps USB is linked directly inside it, favoring low latency.
> https://cdn.techreport.com/wp-conten...endiagram2.jpg

Though the 10Gbps bandwidth within the die refers to the USB 3.1 Gen2 spec,
I've not yet seen it mentioned how boards that provide USB 3.2 Gen2+2, which is 20Gbps,
will handle it: via the CPU (4x 10Gbps) or the X570 chipset (8x 10Gbps).
My best guess atm is it will fall back to 10Gbps to accommodate.

In case you're not up to date with recent bandwidth spec numbers,
here's my recently made list:
USB 2.0 = 480 Mbps
USB 3.0 (Gen1) = 4.8 Gbps
USB 3.1 (Gen2) = 10 Gbps (check motherboard spec on port 3.1 & use correct version 3.1 & gen2 cable)
USB 3.2 (Gen2+2) = 20 Gbps (check motherboard spec on port 3.2 & use correct version 3.2 & gen2+2 cable)
USB 4 = 40Gbps (spec released on 29aug 2019, compatible with Thunderbolt V3)
DisplayPort 1.1 = 10.8 Gbps
DisplayPort 1.2 = 21.6 Gbps
DisplayPort 1.3 / 1.4 = 32.4 Gbps
Thunderbolt V1 = 10 Gbps
Thunderbolt V2 = 20 Gbps
Thunderbolt V3 = 40 Gbps
Gigabit Ethernet = 1 Gbps
10Gigabit Ethernet = 10 Gbps
100Gigabit Ethernet = 100 Gbps
PCI Express 2.0 / x16 = 8 GBps - unidirectional
PCI Express 2.0 / x8 = 4 GBps - unidirectional
PCI Express 2.0 / x4 = 2 GBps - unidirectional
PCI Express 3.0 / x16 = 16 GBps - unidirectional
PCI Express 3.0 / x8 = 8 GBps - unidirectional
PCI Express 3.0 / x4 = 4 GBps - unidirectional
PCI Express 3.0 / x1 = 1 GBps - unidirectional
PCI Express 4.0 / x16 = 32 GBps - unidirectional
PCI Express 4.0 / x8 = 16 GBps - unidirectional
PCI Express 4.0 / x4 = 8 GBps - unidirectional
PCI Express 4.0 / x1 = 4 GBps - unidirectional
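For anyone wanting to re-derive the PCIe rows above, the per-lane math is transfer rate times line-code efficiency (8b/10b for PCIe 2.0, 128b/130b for 3.0/4.0). A quick sketch; the listed table values are these figures rounded up:

```python
# Unidirectional PCIe payload bandwidth from transfer rate and encoding overhead.
GT_PER_LANE = {"2.0": 5.0, "3.0": 8.0, "4.0": 16.0}  # gigatransfers/s per lane
ENCODING = {"2.0": 8 / 10, "3.0": 128 / 130, "4.0": 128 / 130}

def pcie_gbps(gen, lanes):
    """Approximate unidirectional bandwidth in GB/s (decimal)."""
    return GT_PER_LANE[gen] * ENCODING[gen] * lanes / 8  # bits -> bytes

print(round(pcie_gbps("2.0", 16), 1))  # 8.0  -> the 8 GBps row above
print(round(pcie_gbps("3.0", 16), 1))  # 15.8 -> listed rounded to 16 GBps
print(round(pcie_gbps("4.0", 16), 1))  # 31.5 -> listed rounded to 32 GBps
```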

edited* Thanks to MediaGary for his keen eye.
All seems correct now.
Turns out my source, Tom's Hardware, was wrong in stating PCIe 4.0 x16 as 64GBps.
https://www.tomshardware.com/news/wh...ie4,39063.html


While I'm on bandwidth data again, it reminds me:
the 3950X integrated memory controller is dual-channel,
with a nominal max of DDR4-3200, for a max bandwidth of 51.2GB/s (47.68GiB/s).
source > https://en.wikichip.org/wiki/amd/ryzen_9/3950x

With a capable motherboard, using the higher-clocked RAM mentioned earlier would theoretically give:
dual-channel 3800 = 60.8GB/s max bandwidth (+18.75%)
dual-channel 4800 = 76.8GB/s max bandwidth (+50%)
(used Memory Bandwidth Calculator)
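The dual-channel figures above follow directly from MT/s times 8 bytes per transfer per channel times the number of channels; a minimal check:

```python
# DDR4 theoretical peak bandwidth: 64-bit (8-byte) transfers per channel.
def ddr4_gb_per_s(mt_per_s, channels=2):
    return mt_per_s * 8 * channels / 1000  # decimal GB/s

for rate in (3200, 3800, 4800):
    print(rate, "->", ddr4_gb_per_s(rate), "GB/s")
# 3200 -> 51.2, 3800 -> 60.8, 4800 -> 76.8, matching the post
```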

As for occupying 2 DIMMs vs 4 DIMMs in dual channel:
it's known that 2 DIMMs yields better benchmark results and a latency advantage,
at a cost of stress/stability. IIRC this is also indicated as a 1T command rate,
compared to 4 DIMMs at 2T, which lends itself to better stability at a performance compromise.
I read it on a different forum once long ago, didn't save my source, sorry.
Notice that RAM overclockers usually use a single DIMM for such reasons.
After all, they're experts at tweaking the most out of that memory controller.

I've been kicking myself since I found out about the 2-DIMM vs 4-DIMM advantage,
as I didn't know it when I built my system in 2011 and opted to occupy all 4 slots
with the more expensive 4-DIMM dual-channel 16GB kit, instead of the slightly cheaper 2-DIMM dual-channel 16GB at the same speeds.
They're G.Skill Ripjaws and haven't failed or had issues (knock on wood), so all's good.
Knowing it now, going with 2 DIMMs is a no-brainer for me, especially as 2-DIMM sets are now available in 16GB, 32GB and soon 64GB capacities.
I was actually considering getting two sets, as I almost couldn't choose between 2 DIMMs of 16GB/32GB at 3800 vs 16GB at 4800.
Get both and swap them out depending on the task? Too much gearslutiness?
Probably throwing money away needlessly again and kicking myself again in another 5 years.
Okay, enough writing from me this time around, hahah.
Happy Friday the 13th & have a good weekend, y'all. Cheers

Last edited by Victor Valiant; 13th September 2019 at 09:23 PM.. Reason: incorrect data +additional info
Old 13th September 2019
  #581
Here for the gear
Oh, almost forgot to ask you guys about something I just can't figure out.
Within Cubase 10 we can set the "processing precision" to either 32-bit float or 64-bit float, i.e. running plugins at 64-bit precision if they're coded to support it.

Enabling it is quite heavy on DAW performance, even with the longest buffer settings.
Is it utilizing single-thread or multi-thread CPU performance?
I don't know what it's choking on, or whether its processing sits in the single-thread or multi-core/threaded domain.
It drives me nuts not knowing what it's relying on.
I'd appreciate it if anyone could tell me whether it's single- or multi-thread dependent. Cheers
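Not an answer to the single- vs multi-thread question, but as a rough illustration of one cost of 64-bit processing: every audio buffer doubles in size, so memory traffic roughly doubles too. The figures below are plain arithmetic, not a Cubase measurement:

```python
# Bytes moved per minute of stereo 96 kHz audio at 32- vs 64-bit float.
seconds, sample_rate, channels = 60, 96_000, 2
samples = seconds * sample_rate * channels
mib32 = samples * 4 / 2**20  # 4 bytes per 32-bit float sample
mib64 = samples * 8 / 2**20  # 8 bytes per 64-bit float sample
print(f"{mib32:.0f} MiB vs {mib64:.0f} MiB per minute of audio")
```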
Old 13th September 2019
  #582
Lives for gear
Quote:
Originally Posted by Victor Valiant View Post
...
in case you're not up to date with recent bandwidth specs numbers,
here's my recent made list cheers
USB 2.0 480*MBps
USB 3.0 (Gen1) 4.8 GBps
USB 3.1 (Gen2) 10 GBps (check motherboard spec on port 3.1 & use correct version 3.1 & gen2 cable)
USB 3.2 (Gen2+2) 20 Gbps (check motherboard spec on port 3.2 & use correct version 3.2 & gen2+2 cable)
DisplayPort 1.1 10.8 GBps
DisplayPort 1.2 21.6 GBps
DisplayPort 1.3 / 1.4 32.4 GBps
Thunderbolt V1 10 GBps
Thunderbolt V2 20 Gbps
Thunderbolt V3 40 Gbps
Gigabit Ethernet 1 GBps
10Gigabit Ethernet 10 GBps
100Gigabit Ethernet 100 GBps
PCI Express 2.0 / x16 16 GBps
PCI Express 2.0 / x8 8 GBps
PCI Express 2.0 / x4 4 GBps
PCI Express 3.0 / x16 32 GBps
PCI Express 3.0 / x8 16 GBps
PCI Express 3.0 / x4 8 GBps
PCI Express 3.0 / x1 2 GBps
PCI Express 4.0 / x16 64 GBps
PCI Express 4.0 / x8 32 GBps
PCI Express 4.0 / x4 16 GBps
PCI Express 4.0 / x1 8 GBps
I am impressed with the amount of good information you've decided to compress into a single post. I read through it and love most of it.

Down at the list of gigabit/sec and gigabyte/sec numbers, many things went sideways with capitalization and calculations. Also, I think it's best to quote PCIe numbers for their unidirectional capacity just as all the other link types are quoted as a unidirectional number. I think it keeps things neater and easier to compare.

The edits that I'm offering are below:

USB 2.0 480 Mbps
USB 3.0 (Gen1) 4.8 Gbps
USB 3.1 (Gen2) 10 Gbps (check motherboard spec on port 3.1 & use correct version 3.1 & gen2 cable)
USB 3.2 (Gen2+2) 20 Gbps (check motherboard spec on port 3.2 & use correct version 3.2 & gen2+2 cable)
DisplayPort 1.1 10.8 Gbps
DisplayPort 1.2 21.6 Gbps
DisplayPort 1.3 / 1.4 32.4 Gbps
Thunderbolt V1 10 Gbps
Thunderbolt V2 20 Gbps
Thunderbolt V3 40 Gbps
Gigabit Ethernet 1 Gbps
10Gigabit Ethernet 10 Gbps
100Gigabit Ethernet 100 Gbps
PCI Express 2.0 / x16 8 GBps - unidirectional
PCI Express 2.0 / x8 4 GBps - unidirectional
PCI Express 2.0 / x4 2 GBps - unidirectional
PCI Express 3.0 / x16 16 GBps - unidirectional
PCI Express 3.0 / x8 8 GBps - unidirectional
PCI Express 3.0 / x4 4 GBps - unidirectional
PCI Express 3.0 / x1 1 GBps - unidirectional
PCI Express 4.0 / x16 32 GBps - unidirectional
PCI Express 4.0 / x8 16 GBps - unidirectional
PCI Express 4.0 / x4 8 GBps - unidirectional
PCI Express 4.0 / x1 4 GBps - unidirectional

Last edited by MediaGary; 13th September 2019 at 05:19 PM.. Reason: embedded rather than PM
Old 13th September 2019
  #583
Here for the gear
Quote:
Originally Posted by MediaGary View Post
I am impressed with the amount of good information you've decided to compress into a single post. I read through it and love most of it.

Down at the list of gigabit/sec and gigabyte/sec numbers, many things went sideways with capitalization and calculations. I'll fix it in a PM to you later today so you can edit, if you don't get to it first.
Thank you, Gary; doing my part in sharing info like I get from many of you here.
Yes, I see now what you mean; I hadn't noticed. I also pasted from an Excel sheet, so it didn't line up properly either.
Hope it's not too much trouble, but if you could please send in your fix, I'll edit it later on. Thank you.
Old 13th September 2019
  #584
Here for the gear
Quote:
Originally Posted by dseetoo View Post
My 3900X rig is nothing fancy: it has an ASRock Phantom Gaming 4 motherboard and 64GB of G.Skill RAM running at 3000 with timings of 16. The video card is an Nvidia GTX 1060 6GB. The boot drive is an old Samsung 850 EVO 500GB SSD; the OS is Windows 10, 1903 build. The CPU is clocked at 4.2GHz all-core; Vcore is 1.26-something, less than 1.3V. The power supply is an old Seasonic fanless 430W unit. What I found was that the stock Wraith cooler is not capable of keeping the CPU below 95C, and the computer would just shut down once the critical temp was reached. So I replaced it with a Noctua NH-D15, which lowered the temp by 20C, a truly amazing difference. The computer has been completely stable since the cooler was replaced. The best part is that it now runs totally silent.

I would have gotten 128GB of RAM had I realized the motherboard can support that amount. I love to use a RAMDisk as work space. It is so much faster than any SSD.
Nice rig! Stock Wraith cooler at 95C and -20C on the Noctua at 75C, that's quite a difference on a hot turkey.
Then again, you said it's all 12 cores running at 4.2GHz, very impressive.
Great tweaking, man.
I'm wondering why you didn't choose to OC 2 to 6 cores instead,
leaving the rest at a bit lower clocks in favor of better temps.
Do you need all 12 cores maxed for your workflow? Or other performance concerns?
Got me curious, as I'm on the fence between the Noctua and an AIO for my next system.
But I'm impressed the Noctua can keep it at 75C with 12 cores blazing; even running silent, it's just awesome!

Your 430W fanless PSU seems very worrying to me though; there's heavy power draw when OCing,
and a fanless PSU can't cool its internal components when running at 75% of its rating.

With a 430W PSU you're running a tight ship,
as the GTX 1060 6GB card is rated to consume 120W at base, with a 400W PSU recommended as a minimum.
reference specs > https://www.nvidia.com/en-us/geforce...orce-gtx-1060/
And the 12-core AMD 3900X draws 105W at base clock, and up to 142W with OC.
reference > https://www.anandtech.com/show/14605...ing-the-bar/19

With other devices hanging off it, the PSU can be hard-pushed to handle the system load; a PSU is rated optimal up to ~75% load (430*0.75 = 322W effective),
or a bit higher on the premium Bronze-Gold ones.
Bus-powered USB devices and an external 3.5" HDD (around 20W) add up too.
Just saying, be careful; monitor your wattage with HWMonitor for example, because it's easy to overshoot.
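As a back-of-the-envelope version of the 430*0.75 warning above (the CPU and GPU figures are the ones quoted in this post; the "everything else" lump sum is an assumption):

```python
# Rough PSU headroom check; not a substitute for measuring at the wall.
psu_rated_w = 430
usable_w = psu_rated_w * 0.75        # stay under ~75% sustained load
draw_w = {
    "Ryzen 9 3900X (OC)": 142,       # figure quoted above
    "GTX 1060 6GB": 120,             # figure quoted above
    "board, drives, fans, USB": 50,  # assumed lump sum, not measured
}
total_w = sum(draw_w.values())
print(f"{total_w} W estimated vs {usable_w:.0f} W usable")
```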

I've had an OEM-built system incident in the past, with too low a PSU wattage to power it.
20W short, to be exact: a 380W PSU with an overload capacity of 125%, and a 400W peak system draw.
They thought it would cover it, since it could overload past 400W.
Within 5 months, the board & all other components were unsalvageable.
Got a full refund from the shop & I've built my own rigs ever since.
Just want to prevent such an experience happening to anyone else if I can.
Stay awesome, man. Cheers

BTW, when running a RAMDisk, what are the consequences for RAM performance?
I'm looking forward to trying that.
Old 14th September 2019
  #585
Lives for gear
 
b0se's Avatar
Great post Victor, answered some questions I had.

Thanks for sharing!
Old 14th September 2019
  #586
Gear Addict
 

Quote:
Originally Posted by Victor Valiant View Post
Nice rig! Stock Wraith cooler at 95C and -20C on the Noctua at 75C, that's quite a difference on a hot turkey.
Then again, you said it's all 12 cores running at 4.2GHz, very impressive.
Great tweaking, man.
I'm wondering why you didn't choose to OC 2 to 6 cores instead,
leaving the rest at a bit lower clocks in favor of better temps.
Do you need all 12 cores maxed for your workflow? Or other performance concerns?
Got me curious, as I'm on the fence between the Noctua and an AIO for my next system.
But I'm impressed the Noctua can keep it at 75C with 12 cores blazing; even running silent, it's just awesome!

Your 430W fanless PSU seems very worrying to me though; there's heavy power draw when OCing,
and a fanless PSU can't cool its internal components when running at 75% of its rating.

With a 430W PSU you're running a tight ship,
as the GTX 1060 6GB card is rated to consume 120W at base, with a 400W PSU recommended as a minimum.
reference specs > https://www.nvidia.com/en-us/geforce...orce-gtx-1060/
And the 12-core AMD 3900X draws 105W at base clock, and up to 142W with OC.
reference > https://www.anandtech.com/show/14605...ing-the-bar/19

With other devices hanging off it, the PSU can be hard-pushed to handle the system load; a PSU is rated optimal up to ~75% load (430*0.75 = 322W effective),
or a bit higher on the premium Bronze-Gold ones.
Bus-powered USB devices and an external 3.5" HDD (around 20W) add up too.
Just saying, be careful; monitor your wattage with HWMonitor for example, because it's easy to overshoot.

I've had an OEM-built system incident in the past, with too low a PSU wattage to power it.
20W short, to be exact: a 380W PSU with an overload capacity of 125%, and a 400W peak system draw.
They thought it would cover it, since it could overload past 400W.
Within 5 months, the board & all other components were unsalvageable.
Got a full refund from the shop & I've built my own rigs ever since.
Just want to prevent such an experience happening to anyone else if I can.
Stay awesome, man. Cheers

BTW, when running a RAMDisk, what are the consequences for RAM performance?
I'm looking forward to trying that.



The 430W PSU is more than big enough for my needs. I don't play any video games; the load on the GPU is so low the fan never runs, and the card itself stays at room temperature. The only time the video card is ever used is when I do some adjustment in Lightroom. The entire rig never pulls more than 200W from the wall socket; I checked. Each spinner I connect to the system adds 10W; an SSD adds less than 5W. The Seasonic PSU is so efficient that it runs very cool; it's rated at something over 90% efficiency. Actually, with the way I use my computer, I hit the most efficient part of the PSU's power envelope. A bigger-wattage unit wouldn't reach that part of its envelope at a 200W draw.

I use iZotope RX a lot; its noise-reduction module uses as many cores as the CPU has. The 3900X has 12 cores and runs 225-250% faster than the computer it replaced, an Intel 6-core 8700K OC'ed to 5.2GHz. The same task that used to take 4'28" on Intel now runs in just under 2 minutes. What a difference.

A RAMDisk is very nice if you have enough RAM installed. The software locks up a certain amount of RAM and presents that amount to the OS as a hard drive. I mainly use it as a scratch disk for any apps that use one. The second use is to mount an entire project on the RAMDisk; it is so fast it makes a 100+ track multitrack project run like a single stereo pair. No SSD can touch it.

I know I don't have to lock all the cores to 4.2GHz, the CPU firmware should get there automatically, but what the heck. The computer is more responsive, at least it feels that way, when all the cores are clocked up. Oh, and the ASIO driver loves high clock frequency for low latency.
Old 14th September 2019
  #587
Gear Addict
 
PitchSlap's Avatar
 

Nice tip on RAMDisk.
I was originally going to go with 32GB of RAM, but if I'm going all out everywhere else, 64GB makes more sense, since with a new 12-16 core Ryzen that could be a bottleneck.

As for power supplies, I've had a few fail, and I always thought it's better to have more than you need?

What's the advantage of having barely enough? Lower noise and power consumption? It hardly seems like the place to save a few bucks.

I'm looking at the EVGA SuperNOVA 850 G5, 80 Plus Gold 850W.
Old 15th September 2019
  #588
Gear Addict
 

Quote:
Originally Posted by PitchSlap View Post
Nice tip on RAMDisk.
I was originally going to go with 32GB of RAM, but if I'm going all out everywhere else, 64GB makes more sense, since with a new 12-16 core Ryzen that could be a bottleneck.

As for power supplies, I've had a few fail, and I always thought it's better to have more than you need?

What's the advantage of having barely enough? Lower noise and power consumption? It hardly seems like the place to save a few bucks.

I'm looking at the EVGA SuperNOVA 850 G5, 80 Plus Gold 850W.


An X570 mobo can take 128GB of RAM. You might want to install all of that. I regret I didn't.


The Seasonic X-400 fanless power supply is not exactly cheap, but it can actually output 600W. It is silent, though. I have had it for a couple of years.

https://www.hardwaresecrets.com/seas...supply-review/
Old 15th September 2019
  #589
Lives for gear
 
Pictus's Avatar
 

Quote:
Originally Posted by dseetoo View Post
430W PSU is more than big enough for my need. I don’t play any video games, the load on GPU is so low the fan never runs and yet the card itself is always at room temperature. The only time the video card is ever used is when I do some adjustment in Lightroom.
If you use ACR/Lightroom "Enhanced Details" the GPU usage will go to 100%.



--------------------------------------------------------


Quote:
Originally Posted by PitchSlap View Post
As for power supplies, I've had a few fail, and I always thought it's better to have more than you need?

What's the advantage of having barely enough? Lower noise and power consumption? It hardly seems like the place to save a few bucks.

I'm looking at the EVGA SuperNOVA 850 G5, 80 Plus Gold 850W.
How much power the system consumes doesn't depend on PSU wattage, but on PSU efficiency.

With a semi-passive PSU, the bigger units have more range to stay passive/low-noise.
On the Corsair RM850x (2018), the fan stays off up to roughly 250 W, and
from there up to 600 W the noise is very low, only about 10 dBA.
https://www.tomshardware.com/reviews...su,5568-5.html
Old 15th September 2019
  #590
Gear Addict
 

Quote:
Originally Posted by Pictus View Post
If you use ACR/Lightroom "Enhanced Details" the GPU usage will go to 100%.



--------------------------------------------------------




You are correct. However, when you use this feature, the CPU isn't doing anything and thus isn't pulling any real power.
Old 15th September 2019
  #591
Gear Addict
 
PitchSlap's Avatar
 

Quote:
Originally Posted by dseetoo View Post
X570 Mobo can take 128GB of RAM. You might want to install all of that. I regret I didn't.
Good point. If I'm quadrupling the number of cores, I'd want more than double the RAM I have now (16GB).

32GB sticks are pretty limited, and have slower speeds and worse latency than the 16GB sticks.
The best thing I can see at Newegg.ca is the CORSAIR Vengeance LPX DDR4-3000 CAS 16 64GB (2x32GB), and all 32GB modules are out of stock.

I'll have to take another look at the X570 RAM benchmarks and decide whether to go with 128GB of what I can get now, or settle for 32GB and upgrade to 128GB later when availability is better (there should be lots coming out in the next year).

My understanding is it's always best to keep the RAM matched, which probably isn't different with X570, so I couldn't keep 2x16GB and add 2x32GB later for 96GB total?

While I plan to use Omnisphere and other large sample libraries, I assume disk streaming from an EVO 970 should be fast enough in the meantime (getting a 2TB one for my most important samples)...

According to Tom's
Quote:
Think thrice About 32GB-per slot. While 32GB DIMMs have been around for a year, poor availability has given firmware developers an excuse to deprioritize its support. That makes it essential to check each motherboard model's compatibility list (at the manufacturer's website), and to be certain that compatible motherboards have the correct firmware installed.
Old 15th September 2019
  #592
Lives for gear
 
ponzi's Avatar
It doesn't cost much more to add wattage to a PSU; I imagine most are bigger than needed. Recently I've gone in a different direction, spending more to get a higher-quality PSU. It's only about $100 more to go from a bottom-line PSU to a really good one.
Old 15th September 2019
  #593
Gear Head
 

Hey everyone! Thought I'd share that my 3900X FINALLY arrived so I'm now typing this on my new build:

AMD Ryzen 9 3900X
ASRock X570 Taichi Motherboard
ASUS Rog Strix 2060S Graphics Card
2x 16GB DDR4-3200 CL14 G.Skill Ripjaws V RAM
Corsair RM750x PSU
Other boring stuff like M2 SSD, Noctua CPU fan, Fractal R6 case etc

I DO NOT RECOMMEND THE TAICHI if you have a large graphics card! In PCIe slot 1 it will block the chipset fan and the front-panel USB header. I learned this the hard way.

Other than that, all good!

Looking into overclocking the RAM, but after googling around I have NO idea what I'm doing.

Last edited by mahoobley; 15th September 2019 at 01:19 PM.. Reason: Fractal R6 not R9
Old 15th September 2019
  #594
Lives for gear
 
Lesha's Avatar
Quote:
Originally Posted by mahoobley View Post
Fractal R9 case
That's a new one

Quote:
Originally Posted by mahoobley View Post
Looking in to overclocking the RAM but googling around I have NO idea what I'm doing.
Start here: https://www.overclock.net/forum/13-a...ram-bench.html
Old 15th September 2019
  #595
Gear Head
 

6, 9 ... what's the difference? :D

I did read that or a similar guide recently, but got scared off, tbh. There were lots of things it wasn't telling me, and too much risk. E.g., I don't know what chip type my RAM is, and IIRC a bunch of other fields were asking for information that I couldn't answer with certainty.
Old 15th September 2019
  #596
Here for the gear
 

Quote:
Originally Posted by dseetoo View Post
I went through the 2600K, 4770K, 6700K, 7700K and 8700K, all OC'ed to some extent. Each generation got better, not a lot but 10-20% better. The 3900X is so much better than the 8700K, it is up to 250% better. I have not found anything that runs slower on the 3900X than the 8700K, single or multi thread. I probably will buy the 3950X on its first day of release.
I tested the 4790K (stock) against the 2200G (stock) in a bunch of different projects.
I am shocked that the 2200G has only around 7-10% more load (less headroom to work with) at buffer sizes of 256-1024.

I haven't tested below a 256 buffer size because I almost never use a lower value.

Also, a "rough" comparison between the 8700K and 4790K showed around 10% more headroom for the 8700K (off the top of my head).

The projects I used for comparison have around 8-20 synths and a lot of processing. Some have only 20 busses, others around 50.

So, I am a bit worried that the 3900X won't give me that much of an advantage over the 4790K in my use case. Maybe 10-15%. If you measure it per clock cycle, it's of course going to be a much greater improvement.

I would love to hear your thoughts and results/comparisons in your projects.

BTW: balancing the load across multiple busses didn't result in any magic...
Old 15th September 2019
  #597
Lives for gear
 
Lesha's Avatar
Quote:
Originally Posted by mahoobley View Post
6, 9 ... what's the difference? :D

I did read that or a similar guide recently, but got scared off tbh. Lots of things it wasn't telling me, and too much risk. E.g, I dont know what chip type my RAM is, and iirc a bunch of other fields were asking for information that I couldn't answer with certainty.
Yours should be Samsung B-die; here is a guide on how to determine which one it is: https://youtu.be/_2iZavugowE
Old 15th September 2019
  #598
Gear Addict
 

Quote:
Originally Posted by Kamusica View Post
I tested the 4790K (stock) against the 2200G (stock) in a bunch of different projects.
I am shocked that the 2200G has only around 7-10% more load (less headroom to work with) at buffer sizes of 256-1024.

I haven't tested below a 256 buffer size because I almost never use a lower value.

Also, a "rough" comparison between the 8700K and 4790K showed around 10% more headroom for the 8700K (off the top of my head).

The projects I used for comparison have around 8-20 synths and a lot of processing. Some have only 20 busses, others around 50.

So, I am a bit worried that the 3900X won't give me that much of an advantage over the 4790K in my use case. Maybe 10-15%. If you measure it per clock cycle, it's of course going to be a much greater improvement.

I would love to hear your thoughts and results/comparisons in your projects.

BTW: balancing the load across multiple busses didn't result in any magic...

The improvement I am talking about, achieved by the 3900X over the Intel 8700K, is based on fully multi-core-aware apps, such as iZotope RX and its noise reduction module. The DAW I use is Sequoia, which has pretty poor multi-core/multi-thread support. Frankly, I don't see nearly as much of a difference between the two platforms in Sequoia. In the task manager, you see half the CPU usage on the 3900X, and that is about it. I don't load down the DAW much for what I do; most of the time, I don't even use EQs. The track count does go up in my mostly classical music projects, but it is still trivial compared to pop projects. The new Ryzen 3900X rig does allow me to reduce the ASIO buffer size to the minimum setting provided in the driver, which is 96 samples at 96kHz. However, that could be a function of lower X570 mobo latency, although I have not fully tested latency on both platforms to confirm one way or the other. In general, the Ryzen 3900X gives me a better, more responsive day-to-day computing experience than my older Intel 8700K. Given that, I might go out and upgrade my rig to 3950X at the end of the month.
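For anyone wondering what that 96-sample buffer at 96kHz means in practice, the per-buffer latency is just buffer size divided by sample rate. A quick sketch (the 256 @ 44.1kHz line is a hypothetical comparison point, not a setting from the post):

```python
# One-way latency contributed by one audio buffer:
# buffer size in samples divided by the sample rate, in milliseconds.
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    return buffer_samples / sample_rate_hz * 1000

print(buffer_latency_ms(96, 96000))   # -> 1.0 ms per buffer
print(round(buffer_latency_ms(256, 44100), 1))  # -> 5.8 ms, a more typical setting
```

Note this is only the per-buffer figure; actual round-trip latency adds the interface's input/output buffering and converter delays on top.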
Old 15th September 2019
  #599
Lives for gear
 
Lesha's Avatar
Quote:
Originally Posted by dseetoo View Post
Given that, I might go out and upgrade my rig to 3950X at the end of the month.
I would wait for the reviews; the 3950X's base clock is lower.
Old 15th September 2019
  #600
Gear Addict
 

Quote:
Originally Posted by Lesha View Post
I would wait for the reviews; the 3950X's base clock is lower.
Understood.

But having 4 more cores would reduce my rendering time by up to 25%. There is about a 5% all-core clock reduction, so the net speed gain might be 20%. Still worth it, wouldn't you think?
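The arithmetic above checks out under a best-case assumption. A back-of-the-envelope sketch, assuming render speed scales linearly with cores times clock (in reality Amdahl's law will eat into this for anything that isn't perfectly parallel):

```python
# Estimate relative render speed, assuming speed scales with cores x clock.
def net_speedup(old_cores: int, new_cores: int, clock_change: float) -> float:
    return (new_cores / old_cores) * (1 + clock_change)

s = net_speedup(12, 16, -0.05)  # 3900X -> 3950X: 4 extra cores, ~5% lower all-core clock
print(round(s, 3))              # ~1.267x faster
print(round(1 - 1 / s, 3))      # ~21% shorter render time
```

So roughly a 20% reduction in render time, as estimated in the post, before accounting for any non-parallel portions of the workload.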