DMA activity check-up

Hello guys!

I'm interested in seeing whether DMA is in use during the execution of the dd(8) command:

dd if=/dev/da4s1 of=/dev/da5s1/imagefile.dd

I'm trying with:
tail -f /var/log/messages | grep DMA

but I see nothing on the terminal.

Can you help me find the correct way?

Thanks a lot.

Bye.
 
Hello Sir!

OK... so, is there another way to see whether a DMA transfer is running? And for which devices?
 
It is very likely being done with DMA at the bottom of the stack, where the disk device driver operates. Nearly all network and storage device drivers today use PCI memory accesses: the driver tells the device where the data is in memory, and the device fetches or moves it itself. "Programmed I/O" (where the CPU pushes each byte to the device) is de facto nonexistent. To be 100% sure, you'd have to read the source code of the device driver.
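
Short of reading driver source, one possible indirect check (a sketch only: the device name ada0 is an example, and the identify page applies to ATA disks attached via the ada(4) driver) is to ask the drive which transfer modes were negotiated, or to watch the kernel probe messages:

camcontrol identify ada0 | grep -i dma
dmesg | grep -i dma

On an ATA disk the identify page lists the supported and enabled DMA modes (e.g. UDMA6); if a driver had fallen back to programmed I/O, you would normally see it complain in dmesg.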
 
Very important!!
It is very likely being done with DMA at the bottom of the stack, where the disk device driver operates. Nearly all network and storage device drivers today use PCI memory accesses: the driver tells the device where the data is in memory, and the device fetches or moves it itself. "Programmed I/O" (where the CPU pushes each byte to the device) is de facto nonexistent.


Ok guys!!
I've understood. :mad: No memory logs... PCI memory access.

Let me try to reformulate the question.

A device driver has to work according to the hardware characteristics of the peripherals (otherwise it would not be a device driver).

Which super-speedy processor with a super-big cache, motherboard with a super-fast bus, and ultra-speedy RAM do I need to buy and assemble in order to image a 1 TB hard disk in minutes?
The most powerful in the world!!!
Please, can you give me the list so tomorrow I can buy all of it?

:);)

Thanks guys.

Bye!
 
Operating at its theoretical maximum bandwidth, a SATA 3.0 link can transfer 1 TB of data in about half an hour, so there's your hard limit unless you're looking at more enterprise-grade (12 Gb/s SAS, for example) systems, and even those only move that bar by a factor of two. Enterprise systems get to high throughput by using width: a number of drives across a number of links.

But a single physical HDD typically operates at a fraction of that; likely closer to two hours for 1 TB if you're doing flat-out sequential writes.

Note that these rates are at least an order of magnitude slower than the memory bandwidth of modern CPUs. If you're driving one HDD, it would be hard to build a system that couldn't saturate the drive (assuming you're not requiring compute load to generate the data, but have it available in some fashion). (Not counting low-power systems like the Raspberry Pi; I mean a traditional desktop AMD/Intel system.)
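
As a back-of-the-envelope check of those figures (illustrative numbers only: 600 MB/s is roughly the SATA 3.0 payload ceiling, 150 MB/s a typical HDD sequential rate):

echo "scale=1; 1000000 / 600 / 60" | bc    # 1 TB at 600 MB/s: ~27.8 minutes
echo "scale=1; 1000000 / 150 / 60" | bc    # 1 TB at 150 MB/s: ~111 minutes, close to two hours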

Here's an extensive listing of interface rates: https://en.wikipedia.org/wiki/List_of_interface_bit_rates

Now, if you have a number of drives to image, you should be able to set up a large system with multiple drives and links in parallel, such that you have enough throughput to image multiple drives in an hour, but each individual drive is not going to be imaged in "minutes."
 
Now, if you have a number of drives to image, you should be able to set up a large system with multiple drives and links in parallel, such that you have enough throughput to image multiple drives in an hour, but each individual drive is not going to be imaged in "minutes."
"in minutes" was a joke, obviously :)

Your reply is instead very interesting and suggests a great solution for me.
My real goal is to reduce the imaging time, because I have to image a lot of drives.
So it is very important to be able to set up a large number of drives in parallel and image multiple drives in an hour.

My apologies for the question:
could I achieve this goal using a professional data-backup system (like the CHIEFTEC CBP-3141SAS backplane) and a SATA acquisition card with enough throughput to image multiple drives in (about) an hour?
Is that right?

Thanks in advance.
 
A modern enterprise-grade nearline spinning disk can do about 250 MByte/s between the head and the platter. That's at the outside of the platter; on the inside it is about half that.

A modern 64-bit x86 processor can copy about 20-30 GByte/s from the PCI bus to memory. It can sustain copying 18 GByte/s from disk to memory and from there back to the network, while also performing checksum and RAID parity calculations. This was measured on a high-end file server machine. Meaning, in rough numbers, a modern computer is about 100x faster than a single disk drive.
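
If you want to see that ~100x gap on your own hardware, here is a rough sketch (the device name ada0 is an example; reading the raw device needs root, and dd prints the transfer rate when it finishes):

dd if=/dev/ada0 of=/dev/null bs=1m count=4096     # sequential read rate of one disk
dd if=/dev/zero of=/dev/null bs=1m count=65536    # in-memory copy rate, for comparison

The second number will dwarf the first.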

Worrying about "DMA" (which in modern computers is really a more complicated concept) is insane. The bottleneck is not at all the computer or the SATA interface. The bottleneck in your case is solely the disk drive. No amount of extra SAS and SATA cards will fix that.
 
The bottleneck is not at all the computer or the SATA interface. The bottleneck in your case is solely the disk drive. No amount of extra SAS and SATA cards will fix that.
Hi ralph!!
It seemed to me that there was a reduction in speed somewhere.
Now, with your explanation, the reason is much clearer: the bottleneck is the disk drive. Perfect!!

Thank you.
 
To be clear, if you are trying to image drives en masse, then yes, setting up a system with multiple drives and links will enable you to have a higher throughput, and you could conceivably have multiple drives “imaged per hour”, but each individual drive will still take just as long.
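
As a concrete sketch of that kind of parallelism (not a tested recipe: the source devices da1..da4 and the destination directory /images are assumptions), each dd below streams one drive independently, so every source runs at its own native speed as long as the destination can absorb the combined write load:

for d in da1 da2 da3 da4; do
  dd if=/dev/${d} of=/images/${d}.img bs=1m conv=noerror,sync &    # image each drive in the background
done
wait    # block until all four images are complete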
 
Exactly my thought. You're only as fast as your slowest drive. So the drive absorbing the disk image is the slowest member, and that's the fastest you can go, unless the drives are staggered like Eric A. Borisch mentions.

I just showed the Supermicro rig in jest. Just to outfit it with drives would be over $15K USD.
You would need a robot staggering drives to keep up with the Supermicro rig,
from hard-drive shipping tray to disk-imaging dock and back to tray, like the old tape-loader robots.
 
OK, here is a recipe to consider.
One NVMe M.2 consumer drive is 3-4x faster than any single SATA3 drive.
So you could simultaneously image 3 or 4 drives at a time at little cost with modern hardware.
1 M.2 NVMe 512 GB = ~$200 USD
4 eSATA hard-drive docking ports = ~$100 USD (number of docks = number of free SATA3 ports on your mainboard)
1 NVMe paddle card for a PCIe 3.0 slot = ~$10

You might end up needing more eSATA docks, depending on speeds.
You will need to break the SATA ports out externally to eSATA.
You will need a PCIe 3.0 slot for the NVMe, so newer hardware is required.

Most of the external hard-drive docks are combination USB and eSATA.
They are complete garbage for saturating 550 MB/s SATA3 (at least all that I tried):
http://www.highpoint-tech.com/USA_new/series_RS5322-overview.htm

You also need to check your motherboard to see if its BIOS allows hot-swapping SATA drives. Most server boards do.
Otherwise you might need to power down after each image-writing cycle to disconnect drives.
This comes back to why most docks use USB: the disk-disconnect method is standard there.

FreeBSD can boot off the NVMe, so you can use all your SATA3 ports for docks.
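
If the board does support SATA hot-swap, one possible way to pick up a freshly docked drive without rebooting is the standard camcontrol(8) pair below (whether insert events are detected automatically depends on the AHCI controller):

camcontrol rescan all    # re-probe the buses for newly attached drives
camcontrol devlist       # list the drives FreeBSD currently sees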
 
I've greatly appreciated your effort to arrange a recipe, and fundamentally the list describes what I was looking to build.
Obviously the NVMe technology gives high performance.
Surely you know the Toshiba OCZ RD400 Series Solid State Drive PCIe NVMe M.2 512GB well, and this encourages me to buy it.
Surely this recipe is a very good starting point for me... in the future I'll probably be able to adopt the Supermicro workstations. One step at a time.


Please, can you kindly assist me with the following questions (so I can fill my Amazon cart)?

My mainboard is an ASUS B85M-PLUS/BM6AF/DP_MB. In the attached picture you can see where the PCIe slots are.
It has 2 PCI Express 3.0/2.0 x16 slots and 1 PCI Express 2.0 x1 slot.
Again: it supports the UEFI configuration of hot-swap SATA drives.

QUESTION 1: can I use the PCIe 3.0 x16 slot for the Toshiba OCZ RD400, or does it need a PCIe 3.0 x4 slot? In other words: can a PCIe x16 slot accept a PCIe x4 card providing only four lanes without problems (electrical compatibility, bandwidth reduction, ...)?

QUESTION 2: are you sure that the Toshiba OCZ RD400 is fully supported by FreeBSD 11.1-RELEASE (no boot problems with UEFI or during normal operation of the OS)? My apologies for this question; surely you have already tested this SSD and know how it performs with FreeBSD.

QUESTION 3: I've not found a better hard-disk rack than the CHIEFTEC CBP-3141SAS backplane. As you wrote, it has a number of docks = number of free SATA3 ports on my mainboard, and it also provides space for four 3.5" or 2.5" hard drives or SSDs.

I hope to hear from you soon.

Thanks a lot for your time.

Bye.
 

Attachments

  • IMG_20180513_175415.jpg (835.7 KB)
Re: NVMe

You can buy NVMe drives in a PCIe card form factor instead of the M.2 I showed.
This is simply the cheapest route I found. Many people sell their nearly new NVMe drives on eBay when they upgrade, so you can save some money.

1) Yes, an x4 card will work in a PCIe x8 or x16 slot. PCIe 3.0 is needed. The blue slot on your motherboard is fine. (Once the card is installed, you can verify the negotiated link width with pciconf(8); see the sketch at the end of this post.)

2) I was showing the Toshiba drive as an example. I use an OEM version found in laptops, the Toshiba XG3; the retail version is the OCZ RD400. I cannot say with 100% certainty, but I own a very similar model and it does work. I feel confident enough that I used it. The only drive I have heard of that had issues was an earlier Samsung NVMe. The rest should work fine.

3) The reason I would use a dock instead of the Chieftec 5.25" bays is durability.
You want to image drives in large numbers.
Those cheap drive caddies really don't take many insertion cycles before they are junk.
The dock has no moving parts. Once again, purely my personal opinion.
On top of the durability issue, don't most drive trays screw the drives into the tray?
That sounds like an extra step if your goal is speedy disk imaging in numbers.

The problem with my dock idea is that I can't find any eSATA-only drive docks now. I bought my HighPoint RocketStor 3-4 years ago.
I don't see them for sale anymore. Maybe the USB 3.0 + eSATA docks are better these days.
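
Regarding question 1, a possible way to confirm what the slot actually negotiated (a sketch: recent pciconf accepts the kernel device name, e.g. nvme0 for the first NVMe controller; otherwise use the pciN:B:S:F selector that pciconf -l prints):

pciconf -lc nvme0 | grep -i express

The PCI-Express capability line reports the current link width and speed, something like "link x4(x4) speed 8.0(8.0)" for a Gen3 x4 card.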
 
I should add that the newer Toshiba NVMe OEM drive is the XG5, and it is pushing 3000 MB/s.
Notice that the write speed drops as the drive size gets smaller, so for the XG5 you would need the 1 TB model to get the maximum potential.
For the XG3, the write speed of the 512 GB model was the same as the flagship 1 TB model's. That was the 2016 model.
This article has some details on the newer drive:
https://www.anandtech.com/show/11663/the-toshiba-xg5-1tb-ssd-review
For writing disk images you only need fast reads.
So if you wanted to save some money you could get a smaller drive with slower writes.
 
I should mention that FreeBSD 11.1 NVMe disk speeds are not nearly as fast as under Windows; I am seeing roughly half that in diskinfo -t. FreeBSD -CURRENT seems to offer more like 2000 MB/s, so there is some improvement.
That was why I used the 3x-4x faster-than-SATA3 figure.
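
For reference, diskinfo(8)'s built-in benchmark is what produced those numbers (nvd0 is the usual device name for the first NVMe drive; adjust to yours):

diskinfo -t nvd0    # runs a simple sequential transfer benchmark and prints the rates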
 
Interesting news!
It's probably not the truth, but it seems to me that the industry tends (in a way) to optimize its products in the Windows direction (probably that's only my own crazy opinion).
 
Phishfry,

please, what about:

the Samsung MZ-V6E500BW SSD 960 EVO, 500 GB, M.2, NVMe?
Its performance (speed) seems very high.

and:

the Samsung MZ-V6P512BW SSD 960 PRO, 512 GB, M.2, NVMe?
It seems developed to support high workloads (in fact, it's much more expensive).


Possible issues with FreeBSD?

Bye!

(I'm at dinner... chicken and fries in front of the computer, analyzing the NVMe SSDs.)
 
I only own the one NVMe.
There is a thread I started when I got my NVMe in which a user mentioned that his Samsung NVMe did not work.
I really don't know the details on Samsung. I really won't pay their price premium; that is how I ended up with an XG3.

I really do not like making specific product recommendations. My drive is 2 years young, but I don't know that I would buy something that old now. The write-up on the XG5 sounded blah too, with the 3D RAM blah blah.
I would spend the money and get one that has equal writes and reads.
When you run your FreeBSD install on it you will be amazed at the linear increases. Compile time = 3x faster.
There is no downside that I can see, except a new drive interface to learn.
On the first generation of motherboards with PCIe 3.0 slots (Ivy Bridge era), I could not boot off the NVMe due to the BIOS.
Boards after Ivy Bridge should support booting off the NVMe. So what is your CPU?
 
Ok!
I've found the XG3 on eBay. (I've already read your thread and the possible issues with Samsung NVMe.)
I'll buy the XG3 so as to be on the safe side (the XG5 is much more expensive).

Thanks, Phishfry!
 
On the first generation of motherboards with PCIe 3.0 slots (Ivy Bridge era), I could not boot off the NVMe due to the BIOS.
Boards after Ivy Bridge should support booting off the NVMe. So what is your CPU?
Intel Core i3 ...
UEFI boot ...

However, in the afternoon when I get back to my office, I'll send you all the detailed information about the CPU (I'll take a picture) and the UEFI (firmware version) to check whether it supports booting off the NVMe. The XG3 costs 300 euros and I want to avoid making a wrong purchase.

Thank you very much for your kind assistance.

Bye!
 