Sharing of DTrace scripts

I am opening this topic for people to share their DTrace scripts, so we can perfect our scripts and introduce DTrace to others.

You can visit DTrace - FreeBSD Wiki to see how to enable DTrace in FreeBSD and to get more information on DTrace. There are already seven example scripts in DTrace/Examples - FreeBSD Wiki. The scripts shared in this thread are:
  1. disk-io.d (unstable, 9.0) measures disk I/O latencies and sizes
  2. vfs-io.d (stable, 9.0) measures VFS I/O latencies and sizes
  3. crypto-lat.d (unstable, 9.0) measures crypto latencies
  4. vfssnoop.d (stable, 9.0) observes file activities (open, close, read, etc.) on a running system
  5. vfsstat.d (stable, 9.0) provides statistics for various VFS operations
  6. diskio2.d (unstable, 9.0) measures disk I/O latencies and sizes with another approach
  7. malloc.d (stable, 9.0) traces memory usage
  8. em-intr-lat.d (unstable, 9.0) measures the latencies of em(4) handling incoming and outgoing traffic
  9. syscallrwbytypes.d (stable, 9.0) measures whole-system read and write throughput via syscall(2)s (network traffic included)
  10. diskflushlat.d (unstable, 9.0) measures disk cache flush latencies and logs anomalies to /var/log/messages

Addendum: The stable and unstable in the parentheses refer to whether the script uses stable or unstable providers, and the number refers to the branch of FreeBSD on which the script was tested.
 
disk-io.d

I got this script by accident. I was trying to find a structure equivalent to devinfo_t and stumbled on struct devicestat. With a lot of digging, the following script was born.

Code:
#!/usr/sbin/dtrace -s

#pragma D option quiet
#pragma D option switchrate=10hz

/* Only devstat_start_transaction_bio corresponds to actual disk
 * I/O. Also, the following latency is only an approximation,
 * since we use devstat.start_count as one of the keys. A disk
 * may receive several requests before any I/O is completed on it,
 * so devstat.end_count may not correspond to the original
 * request. For reference, see <sys/devicestat.h>.
 */

fbt::devstat_start_transaction_bio:entry
{
        dv[args[0], args[0]->start_count] = timestamp;
        @start = count();
}

fbt::devstat_end_transaction:entry
/this->ts = dv[args[0], args[0]->end_count]/
{
        /* device_name corresponds to da, ada, etc. args[3]
         * shows whether the operation is a read, a write or
         * something else. args[1] is the number of bytes read
         * or written. The start and end counts let us check
         * whether there are any missing IOPs.
         */
        this->dn = stringof(args[0]->device_name);
        this->un = args[0]->unit_number;
        this->op = args[3] == 1 ? "Read" : args[3] == 2 ? "Write" : "Other";
        @lat[this->dn, this->un, this->op] = quantize(timestamp - this->ts);
        @iosize[this->dn, this->un, this->op] = quantize(args[1]);
        dv[args[0], args[0]->end_count] = 0;
        @end = count();
}

tick-1s
{
        printf("Latencies (ns)\n\n");
        printa("%s%d %s %@d\n", @lat);
        printf("IO sizes (bytes)\n\n");
        printa("%s%d %s %@d\n", @iosize);
        printf("Booking keeping\n");
        printa(@start, @end);
        printf("------------------------------------------------\n\n");
        trunc(@lat);
        trunc(@iosize);
        trunc(@start);
        trunc(@end);
}

Sample output:
Code:
Latencies (ns)

da6 Read
           value  ------------- Distribution ------------- count
          131072 |                                         0
          262144 |@@@@@@@@@@@@@@@@                         2
          524288 |@@@@@@@@@@@@@@@@@@@@@@@@                 3
         1048576 |                                         0

ada1 Read
           value  ------------- Distribution ------------- count
          131072 |                                         0
          262144 |@@@@@@@@@@@@@@@@@@@@                     3
          524288 |@@@@@@@@@@@@@@@@@@@@                     3
         1048576 |                                         0

ada0 Read
           value  ------------- Distribution ------------- count
          262144 |                                         0
          524288 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 8
         1048576 |                                         0

da1 Read
           value  ------------- Distribution ------------- count
         2097152 |                                         0
         4194304 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1
         8388608 |                                         0

da2 Read
           value  ------------- Distribution ------------- count
         4194304 |                                         0
         8388608 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1
        16777216 |                                         0

da5 Read
           value  ------------- Distribution ------------- count
         4194304 |                                         0
         8388608 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1
        16777216 |                                         0

da3 Read
           value  ------------- Distribution ------------- count
         8388608 |                                         0
        16777216 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1
        33554432 |                                         0

ada0 Write
           value  ------------- Distribution ------------- count
           32768 |                                         0
           65536 |@                                        1
          131072 |                                         0
          262144 |@                                        1
          524288 |@@                                       4
         1048576 |@@@@                                     8
         2097152 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@          59
         4194304 |@@                                       3
         8388608 |                                         0

da6 Write
           value  ------------- Distribution ------------- count
          262144 |                                         0
          524288 |@                                        3
         1048576 |                                         0
         2097152 |@@                                       5
         4194304 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@    112
         8388608 |                                         0

IO sizes (bytes)

da1 Read
           value  ------------- Distribution ------------- count
           16384 |                                         0
           32768 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1
           65536 |                                         0

da2 Read
           value  ------------- Distribution ------------- count
           16384 |                                         0
           32768 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1
           65536 |                                         0

da3 Read
           value  ------------- Distribution ------------- count
           16384 |                                         0
           32768 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1
           65536 |                                         0

da5 Read
           value  ------------- Distribution ------------- count
           16384 |                                         0
           32768 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1
           65536 |                                         0

da6 Read
           value  ------------- Distribution ------------- count
           65536 |                                         0
          131072 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5
          262144 |                                         0

ada1 Read
           value  ------------- Distribution ------------- count
           65536 |                                         0
          131072 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 6
          262144 |                                         0

ada0 Read
           value  ------------- Distribution ------------- count
           65536 |                                         0
          131072 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 8
          262144 |                                         0

ada0 Write
           value  ------------- Distribution ------------- count
            8192 |                                         0
           16384 |@                                        1
           32768 |                                         0
           65536 |                                         0
          131072 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  75
          262144 |                                         0

da6 Write
           value  ------------- Distribution ------------- count
           65536 |                                         0
          131072 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 120
          262144 |                                         0

Booking keeping

              219              219
------------------------------------------------
 
vfs-io.d

The following script uses the stable vfs provider (see VFS(9)) to give a view of filesystem I/O. While disk I/O and filesystem I/O are related, they are different beasts. For further reading, you may want to visit When iostat Leads You Astray.

Code:
#!/usr/sbin/dtrace -s

#pragma D option quiet
#pragma D option bufsize=8m
#pragma D option switchrate=10hz
#pragma D option dynvarsize=16m

/* See /usr/src/sys/kern/uipc_mqueue.c for vop_read_args.
 * Also see sys/uio.h.
 */

vfs::vop_read:entry, vfs::vop_write:entry
{
        self->ts[stackdepth] = timestamp;
        this->size = args[1]->a_uio->uio_resid;
        this->name = probefunc == "vop_read" ? "Read" : "Write";
        @iosize1[execname, this->name] = quantize(this->size);
}

vfs::vop_read:return, vfs::vop_write:return
/this->ts = self->ts[stackdepth]/
{
        this->name = probefunc == "vop_read" ? "Read" : "Write";
        @lat1[execname, this->name] = quantize(timestamp - this->ts);
        self->ts[stackdepth] = 0;
}

tick-1s
{
        printf("Latencies (ns)\n\n");
        printa("%s %s\n%@d\n", @lat1);
        printf("IO sizes (bytes)\n\n");
        printa("%s %s\n%@d\n", @iosize1);
        printf("--------------------------------------------\n\n");
        trunc(@lat1);
        trunc(@iosize1);
}

Sample output:
Code:
Latencies (ns)

transmission-qt Read
           value  ------------- Distribution ------------- count
            4096 |                                         0
            8192 |@@@@@@@@@@@@@@@@@                        52
           16384 |@@@@@@@@@@@@@@@                          45
           32768 |@@@@@                                    14
           65536 |@                                        3
          131072 |                                         1
          262144 |                                         0
          524288 |@@                                       5
         1048576 |                                         1
         2097152 |                                         0

IO sizes (bytes)

transmission-qt Read
           value  ------------- Distribution ------------- count
            8192 |                                         0
           16384 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 121
           32768 |                                         0

-------------------------------------

Some observations: this script ran concurrently with disk-io.d. We see such low latencies (most are less than 50 us) because of ARC caching. Also, we can use the following one-liner to see the typical read I/O sizes of applications.

# dtrace -n 'vfs::vop_read:entry { @[execname, args[1]->a_uio->uio_resid] = count(); }'

Changing vop_read into vop_write will get you the typical write I/O sizes of applications.
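For instance, the write-side one-liner would be:

# dtrace -n 'vfs::vop_write:entry { @[execname, args[1]->a_uio->uio_resid] = count(); }'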
 
crypto-lat.d

The following script measures latencies of crypto(9) operations.

Code:
#!/usr/sbin/dtrace -s

#pragma D option quiet

/* Please refer to man 9 crypto and /usr/src/sys/opencrypto/. */

fbt::crypto_dispatch:entry
{
        self->ts[stackdepth] = timestamp;
        @entry = count();
}

fbt::crypto_dispatch:return
/this->ts = self->ts[stackdepth]/
{
        @lat = quantize(timestamp - this->ts);
        @return = count();
        self->ts[stackdepth] = 0;
}

tick-1s
{
        printf("Crypto latencies:\n\n");
        printa(@lat);
        printa("\n\nBooking keeping: %@d %@d\n", @entry, @return);
        printf("--------------------------------------------------\n");
        trunc(@entry);
        trunc(@return);
        trunc(@lat);
}

The script was tested on an E3-1230 with software-based encryption (Blowfish-CBC) and hardware-based encryption (AES-XTS) respectively.

Sample outputs (a 100MB file is dd'ed onto the resulting encrypted UFS filesystem):

Code:
Jun 21 08:15:37 aaa kernel: GEOM_ELI: Device zvol/myzfs/test.eli created.
Jun 21 08:15:37 aaa kernel: GEOM_ELI: Encryption: Blowfish-CBC 128
Jun 21 08:15:37 aaa kernel: GEOM_ELI:     Crypto: software


Crypto latencies:



           value  ------------- Distribution ------------- count
          262144 |                                         0
          524288 |@@@                                      14
         1048576 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@       186
         2097152 |@@@                                      16
         4194304 |                                         0



Booking keeping: 223 216
--------------------------------------------------
Crypto latencies:



           value  ------------- Distribution ------------- count
           32768 |                                         0
           65536 |@@@@                                     2572
          131072 |@@@@@@@@@@@@@@@@@@@@@@@@                 15213
          262144 |@@@@@@@                                  4488
          524288 |@@@@                                     2478
         1048576 |@                                        597
         2097152 |                                         33
         4194304 |                                         22
         8388608 |                                         6
        16777216 |                                         1
        33554432 |                                         0



Booking keeping: 25403 25410
--------------------------------------------------

Code:
Jun 21 08:23:25 aaa kernel: GEOM_ELI: Device zvol/myzfs/test.eli created.
Jun 21 08:23:25 aaa kernel: GEOM_ELI: Encryption: AES-XTS 128
Jun 21 08:23:25 aaa kernel: GEOM_ELI:     Crypto: hardware


Crypto latencies:



           value  ------------- Distribution ------------- count
           16384 |                                         0
           32768 |@@@@@                                    166
           65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@      1068
          131072 |                                         3
          262144 |                                         1
          524288 |                                         0



Booking keeping: 1239 1238
--------------------------------------------------
Crypto latencies:



           value  ------------- Distribution ------------- count
            8192 |                                         0
           16384 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@            15684
           32768 |@@@@@@@@                                 4059
           65536 |@@@                                      1732
          131072 |                                         1
          262144 |                                         0



Booking keeping: 21476 21476
--------------------------------------------------
Crypto latencies:



           value  ------------- Distribution ------------- count
            8192 |                                         0
           16384 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 2910
           32768 |                                         1
           65536 |                                         1
          131072 |                                         0



Booking keeping: 2911 2912
--------------------------------------------------
 
vfssnoop.d

Here comes a big one. It monitors activities (read, write, open, close, create, remove, etc.) of the VFS. Depending on how busy your system is, the output can be huge; just instrumenting the launch and shutdown of thunderbird-bin produces a 1.3MB output file. The script can show up to six levels of the file system hierarchy. The script is attached as vfssnoop.txt since the site does not support uploading .d files.
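For anyone who wants a quick taste without downloading the attachment, here is a minimal sketch of the same idea. It is not the attached script: it assumes the vfs provider's entry probes pass the vnode as args[0] (as vfsstat.d below does), and it only prints the mount point rather than reconstructing the full path.

Code:
#!/usr/sbin/dtrace -s

#pragma D option quiet
#pragma D option switchrate=10hz

/* Minimal vfssnoop sketch: prints the mount point only, without the
 * path reconstruction done in the attached vfssnoop.txt.
 */

vfs::vop_open:entry, vfs::vop_close:entry,
vfs::vop_read:entry, vfs::vop_write:entry,
vfs::vop_create:entry, vfs::vop_remove:entry
{
        printf("%d %d %d %-16s %-12s %s\n", timestamp, uid, pid,
            execname, probefunc,
            args[0]->v_mount ?
                stringof(args[0]->v_mount->mnt_stat.f_mntonname) : "<none>");
}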

Example output:

Code:
# dtrace -Cs vfssnoop.d -c date
cc1: warning:  is shorter than expected
TIMESTAMP           UID    PID PROCESS          CALL             SIZE PATH/FILE
183094996488299       0  62044 date             vop_inactive        - /usr
183094996504985       0  62044 date             vop_inactive        - /usr/share
183094996517854       0  62044 date             vop_inactive        - /usr/share/locale
183094996538129       0  62044 date             vop_inactive        - /usr/share/locale/en_US.UTF-8/LC_TIME
(my local time here)
183094996554607       0  62044 date             vop_inactive        - /usr/share/locale/en_US.UTF-8
183094996566965       0  62044 date             vop_inactive        - /usr/share/locale
183094996579836       0  62044 date             vop_inactive        - /usr/share/locale/en_US.ISO8859-1
183094996603387       0  62044 date             vop_open            - /usr/share/locale/en_US.ISO8859-1/LC_TIME
183094996632357       0  62044 date             vop_getattr         - /usr/share/locale/en_US.ISO8859-1/LC_TIME
183094996663619       0  62044 date             vop_inactive        - /etc
183094996728473       0  62044 date             vop_read         377B /usr/share/locale/en_US.ISO8859-1/LC_TIME
183094996772162       0  62044 date             vop_close           - /usr/share/locale/en_US.ISO8859-1/LC_TIME
183094996791850       0  62044 date             vop_inactive        - /usr/share/locale/en_US.ISO8859-1/LC_TIME
183094996821157       0  62044 date             vop_inactive        - /etc
183094996830776       0  62044 date             vop_inactive        - /etc/localtime
183094996842400       0  62044 date             vop_inactive        - /etc
183094996851484       0  62044 date             vop_open            - /etc/localtime
183094996862231       0  62044 date             vop_getattr         - /etc/localtime
183094996881230       0  62044 date             vop_read          40K /etc/localtime
183094996893759       0  62044 date             vop_close           - /etc/localtime
183094996903022       0  62044 date             vop_inactive        - /etc/localtime
183094996925315       0  62044 date             vop_inactive        - /usr
183094996934383       0  62044 date             vop_inactive        - /usr/share
183094996945368       0  62044 date             vop_inactive        - /usr/share/zoneinfo
183094996958397       0  62044 date             vop_open            - /usr/share/zoneinfo/posixrules
183094996972237       0  62044 date             vop_getattr         - /usr/share/zoneinfo/posixrules
183094996987813       0  62044 date             vop_read          40K /usr/share/zoneinfo/posixrules
183094997007550       0  62044 date             vop_close           - /usr/share/zoneinfo/posixrules
183094997020243       0  62044 date             vop_inactive        - /usr/share/zoneinfo/posixrules
183094997073382       0  62044 date             vop_getattr         - /dev/<unknown>
183094997141234       0  62044 date             vop_inactive        - /bin/date
 

Attachments

  • vfssnoop.txt (5.7 KB)
vfsstat.d

belon_cfy asked in this thread for a command like Solaris' fsstat on FreeBSD. I just realized I had all the ingredients to get something similar. So here it is: vfsstat.d.

Code:
#!/usr/sbin/dtrace -s

/*
 * vfsstat emulation
 */

#pragma D option quiet
#pragma D option defaultargs
#pragma D option switchrate=10hz
#pragma D option bufsize=8m
#pragma D option dynvarsize=32m

/*
 * Refer to man 9 VFS and the files <sys/vnode.h>,
 * </usr/src/sys/kern/vfs_default.c> and various
 * information on vnode in each fs.
 */

/*
 * This script only records successful operations. However, it
 * is trivial to modify it so that failures are also recorded.
 * This script is intended to be used with your favourite scripting
 * language to process the output into something like the
 * Solaris fsstat. Also, note that the read and write byte counts
 * can differ significantly from the disk I/O numbers due to I/O
 * inflation or deflation.
 */

vfs::vop_read:entry, vfs::vop_write:entry
{
        self->bytes[stackdepth] = args[1]->a_uio->uio_resid;
}

vfs::vop_read:return, vfs::vop_write:return
/this->delta = self->bytes[stackdepth] - args[1]->a_uio->uio_resid/
{
        this->fi_mount = args[0]->v_mount ?
                stringof(args[0]->v_mount->mnt_stat.f_mntonname) :
                        "<none>";
        @bytes[this->fi_mount, probefunc] = sum(this->delta);
        @ops[this->fi_mount, probefunc] = count();
}

vfs::vop_read:return, vfs::vop_write:return
{
        self->bytes[stackdepth] = 0;
}

/* You may add or remove operations of interest here. */

vfs::vop_rename:return, vfs::vop_create:return, vfs::vop_remove:return,
vfs::vop_getattr:return, vfs::vop_access:return, vfs::vop_open:return,
vfs::vop_close:return, vfs::vop_setattr:return,
vfs::vop_mkdir:return, vfs::vop_rmdir:return, vfs::vop_readdir:return,
vfs::vop_lookup:return, vfs::vop_cachedlookup:return
/errno == 0/
{
        this->fi_mount = args[0]->v_mount ?
                stringof(args[0]->v_mount->mnt_stat.f_mntonname) :
                        "<none>";
        @ops[this->fi_mount, probefunc] = count();
}

tick-1s
{
        printf("Number of operations\n");
        printf("%-40s %-18s %20s\n", "FILESYSTEM", "OPERATIONS", "COUNTS");
        printa("%-40s %-18s %20@d\n", @ops);
        printf("\nBytes read or write\n");
        printf("%-40s %-18s %20s\n", "FILESYSTEM", "OPERATIONS", "BYTES");
        printa("%-40s %-18s %20@d\n", @bytes);
        printf("\n--------------------------------------------------\n\n");
}

Sample output:
Code:
Number of operations
FILESYSTEM                               OPERATIONS                       COUNTS
/                                        vop_cachedlookup                      1
/dev                                     vop_getattr                           1
/usr/home/a                              vop_close                             3
/usr/home/a                              vop_open                              3
/usr/home/a                              vop_read                              3
/var                                     vop_close                             4
/tmp                                     vop_lookup                            6
/var                                     vop_read                              8
/tmp                                     vop_access                            9
/usr/home/a/b                            vop_close                             9
/usr/home/a/b                            vop_open                              9
/usr/home/a                              vop_getattr                          10
/                                        vop_close                            40
/                                        vop_open                             45
/                                        vop_read                             46
/usr/local                               vop_open                             53
/usr/local                               vop_read                             53
/usr/local                               vop_close                            54
/usr/home/a                              vop_lookup                           62
/usr/home/a                              vop_access                           68
/                                        vop_getattr                          86
/usr/local                               vop_getattr                         146
/usr/home/a/b                            vop_lookup                          152
/usr/home/a/b                            vop_access                          161
/usr/local                               vop_lookup                          172
/usr/local                               vop_access                          226
/usr/home/a/b                            vop_read                            683
/                                        vop_lookup                          707
/usr/home/a/b                            vop_getattr                         721
/                                        vop_access                          752

Bytes read or write
FILESYSTEM                               OPERATIONS                        BYTES
/usr/home/a                              vop_read                            162
/var                                     vop_read                           1040
/                                        vop_read                         343619
/usr/local                               vop_read                         392099
/usr/home/a/b                            vop_read                       11190272

--------------------------------------------------

Parsing the above output with a scripting language should give you something similar to fsstat.
 
diskio2.d

Zare, thanks for your compliment.

The following is an alternative to the disk-io.d script, using the fbt::g_disk_start probes instead of fbt::devstat_start_transaction_bio.

Code:
#!/usr/sbin/dtrace -s

#pragma D option quiet
#pragma D option switchrate=2997hz
#pragma D option dynvarsize=8m

/*
 * This script traces disk I/O from g_disk_start to g_disk_done.
 * Both functions have only one argument, of type struct bio *.
 * But since we only enter g_disk_done after the disk finishes
 * reading the data and generates an interrupt, there is no
 * guarantee that the two bio's are the same. After a lot of
 * experiments, I ended up with the following identifiers.
 * Also, note that while this->bio->bio_to->name and
 * this->bio->bio_disk->d_geom->name may both refer to the same
 * da2, they only compare equal when we take stringof them.
 * The high switchrate of this script is due to the high
 * collision rate of the array with our current keys.
 * See <sys/bio.h>, <geom/geom.h> and <geom/geom_disk.h>.
 */

BEGIN
{
        bio_cmd[1] = "Read";
        bio_cmd[2] = "Write";
        bio_cmd[4] = "Delete";
        bio_cmd[8] = "Getattr";
        bio_cmd[16] = "Flush";
}

fbt::g_disk_start:entry
{
        this->bio = (struct bio *)arg0;
}

fbt::g_disk_start:entry
{
        this->s_name = stringof(this->bio->bio_to->name);
        ddn[this->bio->bio_data, this->s_name, this->bio->bio_cmd] = timestamp;
        @start = count();
}

fbt::g_disk_done:entry
/(this->bio = (struct bio *)arg0) &&
(this->name = this->bio->bio_disk->d_geom->name) &&
(this->ts = ddn[this->bio->bio_data, stringof(this->name), this->bio->bio_cmd])/
{
        this->op = bio_cmd[this->bio->bio_cmd];
        @lat[stringof(this->name), this->op] = quantize(timestamp - this->ts);
        @iosize[stringof(this->name), this->op] = quantize(this->bio->bio_bcount);
        ddn[this->bio->bio_data, stringof(this->name), this->bio->bio_cmd] = 0;
        @end = count();
}

tick-1s
{
        printf("Latencies (ns)\n\n");
        printa("%s %s %@d\n", @lat);
        printf("IO sizes (bytes)\n\n");
        printa("%s %s %@d\n", @iosize);
        printf("Booking keeping\n");
        printa(@start, @end);
        printf("------------------------------------------------\n\n");
        trunc(@lat);
        trunc(@iosize);
        trunc(@start);
        trunc(@end);
}
Snippet of sample output:

Code:
Latencies (ns)

da4 Write
           value  ------------- Distribution ------------- count
          131072 |                                         0
          262144 |@@@@@@@@@@@@@                            21
          524288 |@@@@@@@@@@@@@@@@@                        28
         1048576 |@@@@@                                    8
         2097152 |@@@@@                                    8
         4194304 |                                         0

da3 Flush
           value  ------------- Distribution ------------- count
         8388608 |                                         0
        16777216 |@@@@@@@@@@@@@@@@@@@@                     1
        33554432 |@@@@@@@@@@@@@@@@@@@@                     1
        67108864 |                                         0

da0 Write
           value  ------------- Distribution ------------- count
          131072 |                                         0
          262144 |@@@@@@@                                  12
          524288 |@@@@@@@@@@@@@@@@@@                       33
         1048576 |@@@@@@@@@@@@@                            24
         2097152 |@@                                       3
         4194304 |                                         0

da2 Flush
           value  ------------- Distribution ------------- count
        16777216 |                                         0
        33554432 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 2
        67108864 |                                         0

da4 Flush
           value  ------------- Distribution ------------- count
        16777216 |                                         0
        33554432 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 2
        67108864 |                                         0

da5 Flush
           value  ------------- Distribution ------------- count
        16777216 |                                         0
        33554432 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 2
        67108864 |                                         0

da0 Flush
           value  ------------- Distribution ------------- count
         8388608 |                                         0
        16777216 |@@@@@@@@@@@@@@@@@@@@                     1
        33554432 |                                         0
        67108864 |@@@@@@@@@@@@@@@@@@@@                     1
       134217728 |                                         0

da1 Flush
           value  ------------- Distribution ------------- count
        16777216 |                                         0
        33554432 |@@@@@@@@@@@@@@@@@@@@                     1
        67108864 |@@@@@@@@@@@@@@@@@@@@                     1
       134217728 |                                         0

da5 Write
           value  ------------- Distribution ------------- count
           65536 |                                         0
          131072 |@@@@                                     8
          262144 |@@@@@@@@@@@@@@@@@@@@@                    39
          524288 |@@@@@@@@@                                17
         1048576 |@@                                       3
         2097152 |                                         0
         4194304 |@                                        1
         8388608 |                                         0
        16777216 |@@@                                      5
        33554432 |                                         0

da2 Write
           value  ------------- Distribution ------------- count
           65536 |                                         0
          131072 |@@@@@@@@@                                15
          262144 |@@@@@@@@@@@@@@@@@@@@@@@                  39
          524288 |@@@@@                                    9
         1048576 |                                         0
         2097152 |                                         0
         4194304 |                                         0
         8388608 |                                         0
        16777216 |@                                        2
        33554432 |@@                                       4
        67108864 |                                         0

IO sizes (bytes)

ada0 Flush
           value  ------------- Distribution ------------- count
              -1 |                                         0
               0 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1
               1 |                                         0

da0 Flush
           value  ------------- Distribution ------------- count
              -1 |                                         0
               0 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 2
               1 |                                         0

da1 Flush
           value  ------------- Distribution ------------- count
              -1 |                                         0
               0 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 2
               1 |                                         0

da2 Flush
           value  ------------- Distribution ------------- count
              -1 |                                         0
               0 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 2
               1 |                                         0

In the above snippet, we can see that there are 2 flush commands issued to each drive, da0-da5, in a ZFS pool. They are attached via the mps(4) driver. However, there is only one flush command issued to ada0, which uses the ahci(4) driver. So, for spindles, the extra flush command introduces at least 16ms of latency. These are things that cannot be observed with iostat(8) alone.
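If you only want to count how many flush commands each device receives, without the latency distributions, a one-liner along these lines should do; it reuses the bio_cmd == 16 (Flush) test from the scripts above:

Code:
# dtrace -n 'fbt::g_disk_start:entry /((struct bio *)arg0)->bio_cmd == 16/ { @[stringof(((struct bio *)arg0)->bio_to->name)] = count(); }'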
 
malloc.d

Here is a simple script on malloc(9). This is to show what could be instrumented with DTrace, but I could not make head or tail of much of the result. Running this script on my system with ZFS shows that memory usage increases for a time, and then suddenly a few hundred MB of memory are freed.

Code:
#!/usr/sbin/dtrace -s

#pragma D option quiet
#pragma D option defaultargs

/* See man 9 malloc and man 9 free for information on malloc and free. */

dtmalloc:::malloc
/execname == $$1 || $$1 == NULL/
{
        @malloc[probefunc] = quantize(args[3]);
        @total = sum(args[3]);
}


dtmalloc:::free
/execname == $$1 || $$1 == NULL/
{
        @free[probefunc] = quantize(args[3]);
        @total = sum(-args[3]);
}

tick-1s,
END
{
        printf("Memory allocate\n\n");
        printa("%-30.30s %@d\n", @malloc);
        printf("\nMemory free\n\n");
        printa("%-30.30s %@d\n", @free);
        printa("\nTotal memory allocated: %@d\n", @total);
        printf("---------------------------------------------\n\n");
        trunc(@malloc);
        trunc(@free);
}

Sample output:

Code:
#dtrace -s malloc.d -c date
(my local time here)
Memory allocate

mbuf_tag
           value  ------------- Distribution ------------- count
              32 |                                         0
              64 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 2
             128 |                                         0

sysctl
           value  ------------- Distribution ------------- count
               8 |                                         0
              16 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 9
              32 |                                         0

cred
           value  ------------- Distribution ------------- count
              32 |                                         0
              64 |@@@@@@@@@@@@@@@@@@@@                     1
             128 |                                         0
             256 |@@@@@@@@@@@@@@@@@@@@                     1
             512 |                                         0

ioctlops
           value  ------------- Distribution ------------- count
               8 |                                         0
              16 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@        27
              32 |                                         0
              64 |@@@@                                     3
             128 |@                                        1
             256 |                                         0
             512 |@@                                       2
            1024 |                                         0

solaris
           value  ------------- Distribution ------------- count
               8 |                                         0
              16 |@@@@@@@                                  4
              32 |                                         0
              64 |@@@@@@@@@@                               6
             128 |@@@@@                                    3
             256 |@@@@@@@@@@                               6
             512 |                                         0
            1024 |@@@                                      2
            2048 |                                         0
            4096 |                                         0
            8192 |                                         0
           16384 |@@@@@                                    3
           32768 |                                         0


Memory free

mbuf_tag
           value  ------------- Distribution ------------- count
              32 |                                         0
              64 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 2
             128 |                                         0

sysctl
           value  ------------- Distribution ------------- count
               8 |                                         0
              16 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 9
              32 |                                         0

cred
           value  ------------- Distribution ------------- count
              32 |                                         0
              64 |@@@@@@@@@@@@@@@@@@@@                     1
             128 |                                         0
             256 |@@@@@@@@@@@@@@@@@@@@                     1
             512 |                                         0

filedesc
           value  ------------- Distribution ------------- count
             256 |                                         0
             512 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1
            1024 |                                         0

ioctlops
           value  ------------- Distribution ------------- count
               8 |                                         0
              16 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@        27
              32 |                                         0
              64 |@@@@                                     3
             128 |@                                        1
             256 |                                         0
             512 |@@                                       2
            1024 |                                         0

solaris
           value  ------------- Distribution ------------- count
               8 |                                         0
              16 |@@@@@@@                                  4
              32 |                                         0
              64 |@@@@@@@@@@                               6
             128 |@@@@@                                    3
             256 |@@@@@@@@@@                               6
             512 |                                         0
            1024 |@@@                                      2
            2048 |                                         0
            4096 |                                         0
            8192 |                                         0
           16384 |@@@                                      2
           32768 |                                         0


Total memory allocated: 24064
---------------------------------------------
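If you only care about which malloc types account for the most bytes, a rough one-liner like this (using the same dtmalloc probes and the args[3] size argument as the script above) keeps a running total per type:

# dtrace -n 'dtmalloc:::malloc { @[probefunc] = sum(args[3]); }'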
 
em-intr-lat.d

This script measures the interrupt-handling latencies of the em(4) network card. It only measures when there is something to receive, handled by em_msix_rx, or when there is something to send, handled by em_msix_tx or em_handle_tx. For other network cards, you have to instrument the corresponding driver and functions.
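To find the corresponding functions for another driver, listing its fbt probes is a good starting point. For example, for igb(4) (used here purely as an illustration):

# dtrace -l -n 'fbt::igb_*:entry'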

Code:
#!/usr/sbin/dtrace -s

#pragma D option quiet
#pragma D option switchrate=10hz

/*
 * Refer to /usr/src/sys/dev/e1000/if_em.c. From the 9.0 release
 * to some point in 9.0-STABLE, transmissions are handled by
 * both em_handle_tx and em_msix_tx. Afterwards, it is solely
 * the work of em_msix_tx. Of course, it may just be that
 * I was not able to trigger em_handle_tx because my connection
 * peaks at 4MB/s. @c counts the number of entries and will be
 * compared to @d, which counts the number of returns. @s and @v
 * measure the time required for each operation and the time
 * actually spent on the CPU for each operation, respectively.
 */

fbt::em_msix_rx:entry,
fbt::em_msix_tx:entry,
fbt::em_handle_tx:entry
{
        self->ts[stackdepth] = timestamp;
        self->vs[stackdepth] = vtimestamp;
        @c[probefunc] = count();
}

fbt::em_msix_rx:return,
fbt::em_msix_tx:return,
fbt::em_handle_tx:return
/this->ts = self->ts[stackdepth]/
{
        this->delta = timestamp - this->ts;
        this->delta2 = vtimestamp - self->vs[stackdepth];
        @q[probefunc] = quantize(this->delta);
        @s[probefunc] = sum(this->delta);
        @v[probefunc] = sum(this->delta2);
        @d[probefunc] = count();
        self->ts[stackdepth] = 0;
        self->vs[stackdepth] = 0;
}

tick-1s
{
        normalize(@s, 1000000);
        normalize(@v, 1000000);
        printa(@q, @s, @v, @c, @d);
        trunc(@q);
        trunc(@s);
        trunc(@v);
        trunc(@c);
        trunc(@d);
}

Sample output:

Code:
#dtrace -s em-intr-lat.d

  em_msix_tx
           value  ------------- Distribution ------------- count
            1024 |                                         0
            2048 |@@@@@@@@@@@@@@@@@@@@@                    663
            4096 |@@@@@@@@@@@@@@                           440
            8192 |@                                        34
           16384 |@@@@                                     126
           32768 |                                         2
           65536 |                                         0
          131072 |                                         7
          262144 |                                         0
                8                3             1272             1272
  em_msix_rx
           value  ------------- Distribution ------------- count
            1024 |                                         0
            2048 |@@@@                                     120
            4096 |                                         9
            8192 |@@@@@@@                                  194
           16384 |@@@@@@@@@                                249
           32768 |@@@@@@@@@@@@@@@@@                        474
           65536 |@@@                                      81
          131072 |                                         6
          262144 |                                         0
               35               30             1133             1133

As you can see, transmitting data is much faster than receiving data.
 
syscallrwbytypes.d

This script measures the read and write (including network traffic) latencies and sizes of a system. It only covers processes that go through the syscalls read(2), write(2), send(2), recv(2) and their variants. Results are then grouped into different categories. Sample output is in the next post.
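Before running the full script below, a quick way to see which processes issue the most of these syscalls is a rough count by execname, for example:

# dtrace -n 'syscall::read:entry, syscall::write:entry, syscall::recvfrom:entry, syscall::sendto:entry { @[execname, probefunc] = count(); }'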

Code:
#!/usr/sbin/dtrace -s

#pragma D option quiet
#pragma D option switchrate=10hz
#pragma D option bufsize=8m
#pragma D option dynvarsize=8m

/*
 * We first define the types here. Refer to <sys/file.h>.
 * Note that I have defined some new types to distinguish
 * between AF_UNIX (Unix domain sockets) and AF_INET.
 */

BEGIN
{
        ftype[1,0] = "File";
        ftype[2,0] = "Socket";
        ftype[3,0] = "Pipe";
        ftype[4,0] = "Fifo";
        ftype[5,0] = "Event queue";
        ftype[6,0] = "Crypto";
        ftype[7,0] = "Messages queue";
        ftype[8,0] = "Shm";
        ftype[9,0] = "Semaphore";
        ftype[10,0] = "PTY";
        ftype[11,0] = "dev";
        ftype[2,1] = "AF_UNIX";
        ftype[2,2] = "AF_INET";
        ftype[2,28] = "AF_INET6";
}

syscall::read:entry, syscall::pread:entry,
syscall::readv:entry, syscall::preadv:entry,
syscall::write:entry, syscall::pwrite:entry,
syscall::writev:entry, syscall::pwritev:entry,
syscall::recvfrom:entry, syscall::recvmsg:entry,
syscall::sendto:entry, syscall::sendmsg:entry
{
        this->fp = curthread->td_proc->p_fd->fd_ofiles[arg0];
        self->fi_type[stackdepth] = this->fp != 0 ? this->fp->f_type : -1;
        @entry[ftype[self->fi_type[stackdepth], 0]] = count();
        self->ts[stackdepth] = timestamp;
}

/* Here we distinguish between AF_UNIX and AF_INET. */

syscall::read:entry, syscall::pread:entry,
syscall::readv:entry, syscall::preadv:entry,
syscall::write:entry, syscall::pwrite:entry,
syscall::writev:entry, syscall::pwritev:entry,
syscall::recvfrom:entry, syscall::recvmsg:entry,
syscall::sendto:entry, syscall::sendmsg:entry
/self->fi_type[stackdepth] == 2/
{
        this->sp = (struct socket *)this->fp->f_data;
        self->family[stackdepth] = this->sp->so_proto->pr_domain->dom_family;
}

syscall::read:return, syscall::pread:return,
syscall::readv:return, syscall::preadv:return,
syscall::recvfrom:return, syscall::recvmsg:return
/this->delta = timestamp - self->ts[stackdepth]/
{
        this->rw = "Read";  /* These are reads. */
}

syscall::write:return, syscall::pwrite:return,
syscall::writev:return, syscall::pwritev:return,
syscall::sendto:return, syscall::sendmsg:return
/this->delta = timestamp - self->ts[stackdepth]/
{
        this->rw = "Write";  /* These are writes. */
}

syscall::read:return, syscall::pread:return,
syscall::readv:return, syscall::preadv:return,
syscall::recvfrom:return, syscall::recvmsg:return,
syscall::write:return, syscall::pwrite:return,
syscall::writev:return, syscall::pwritev:return,
syscall::sendto:return, syscall::sendmsg:return
/self->ts[stackdepth]/
{
        this->type = self->fi_type[stackdepth];
        this->family = self->family[stackdepth];
        @return[ftype[this->type, this->family]] = count();
        this->bytes = (arg1 != -1) ? arg1 : 0;
        @lat[ftype[this->type, this->family], this->rw] = quantize(this->delta);
        @bytes[ftype[this->type, this->family], this->rw] = quantize(this->bytes);
        @sum[ftype[this->type, this->family], this->rw] = sum(this->bytes);
}

syscall::read:return, syscall::pread:return,
syscall::readv:return, syscall::preadv:return,
syscall::recvfrom:return, syscall::recvmsg:return,
syscall::write:return, syscall::pwrite:return,
syscall::writev:return, syscall::pwritev:return,
syscall::sendto:return, syscall::sendmsg:return
{
        self->ts[stackdepth] = 0;
        self->fi_type[stackdepth] = 0;
        self->family[stackdepth] = 0;
}

tick-1s
{
        printf("%-15s %8s\n", "FILE TYPE", "COUNT");
        printa("%-15s %8@d\n", @return);
        /* Uncomment to check the numbers add up. */
        /* printa("%-15s %8@d\n", @entry); */
        printf("\nLatancies in ns:\n");
        printa(@lat);
        printf("\nFile sizes in bytes:\n");
        printa(@bytes);
        printf("\n\n%-15s %-5s %15s\n", "FILE TYPE", "OPS", "SUM (in bytes)");
        printa("%-15s %-5s %15@d\n", @sum);
        printf("\n------------------------------------------------------------\n");
        trunc(@entry); trunc(@return); trunc(@sum); trunc(@lat); trunc(@bytes);
}
 
Sample output (a few lines are cut):

Code:
FILE TYPE          COUNT
Pipe                   4
AF_UNIX               49
File                 613
AF_INET             2175

Latencies in ns:

  AF_UNIX                                             Write
           value  ------------- Distribution ------------- count
            1024 |                                         0
            2048 |@@@@@@@@@@@@@@@@@@@@@@@@@@@              2
            4096 |                                         0
            8192 |                                         0
           16384 |                                         0
           32768 |                                         0
           65536 |@@@@@@@@@@@@@                            1
          131072 |                                         0

  AF_UNIX                                             Read
           value  ------------- Distribution ------------- count
             512 |                                         0
            1024 |@@@                                      4
            2048 |@@@@@@@@@@@@@@@@@@                       21
            4096 |@@@@@@@@                                 9
            8192 |                                         0
           16384 |@@@@@                                    6
           32768 |@@@@@                                    6
           65536 |                                         0

  File                                                Write
           value  ------------- Distribution ------------- count
            1024 |                                         0
            2048 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@          317
            4096 |                                         4
            8192 |                                         0
           16384 |@@@@@@@@                                 85
           32768 |                                         4
           65536 |                                         0
  AF_INET                                             Read
           value  ------------- Distribution ------------- count
            1024 |                                         0
            2048 |@@@@@@@@@@@@@@@@@@@@@@@@                 538
            4096 |@@@@@                                    116
            8192 |                                         3
           16384 |@@@@                                     89
           32768 |@@@@@@                                   138
           65536 |                                         0

  File                                                Read
           value  ------------- Distribution ------------- count
            4096 |                                         0
            8192 |@@@@@@                                   30
           16384 |                                         0
           32768 |@@@@@@@@@@@@@@@@@@@@@@@@@@               131
           65536 |@@@@@@                                   32
          131072 |                                         1
          262144 |                                         0
          524288 |@@                                       9
         1048576 |                                         0

  AF_INET                                             Write
           value  ------------- Distribution ------------- count
            1024 |                                         0
            2048 |@                                        20
            4096 |@@@@@@@@@@                               318
            8192 |@@@                                      85
           16384 |@@@@@@                                   201
           32768 |@@@@@@@@@@@@@@@@@@@@                     630
           65536 |@                                        36
          131072 |                                         1
          262144 |                                         0


File sizes in bytes:

  Pipe                                                Read
           value  ------------- Distribution ------------- count
               0 |                                         0
               1 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 2
               2 |                                         0

  Pipe                                                Write
           value  ------------- Distribution ------------- count
               0 |                                         0
               1 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 2
               2 |                                         0

  AF_UNIX                                             Write
           value  ------------- Distribution ------------- count
               8 |                                         0
              16 |@@@@@@@@@@@@@@@@@@@@@@@@@@@              2
              32 |                                         0
              64 |                                         0
             128 |                                         0
             256 |                                         0
             512 |                                         0
            1024 |@@@@@@@@@@@@@                            1
            2048 |                                         0

  AF_UNIX                                             Read
           value  ------------- Distribution ------------- count
              -1 |                                         0
               0 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@   44
               1 |                                         0
               2 |                                         0
               4 |                                         0
               8 |                                         0
              16 |                                         0
              32 |@                                        1
              64 |                                         0
             128 |                                         0
             256 |                                         0
             512 |                                         0
            1024 |@                                        1
            2048 |                                         0

  File                                                Write
           value  ------------- Distribution ------------- count
               0 |                                         0
               1 |@@@@@                                    55
               2 |                                         1
               4 |@                                        9
               8 |@@@                                      28
              16 |@@@@@@@@@@@@@                            135
              32 |@@@@@@@@@@@@@@@@                         166
              64 |@@                                       16
             128 |                                         0

  AF_INET                                             Read
           value  ------------- Distribution ------------- count
              -1 |                                         0
               0 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@           671
               1 |                                         0
               2 |                                         0
               4 |                                         2
               8 |                                         7
              16 |@@@@@@                                   139
              32 |@                                        14
              64 |@                                        24
             128 |                                         10
             256 |@                                        15
             512 |                                         2
            1024 |                                         0

  AF_INET                                             Write
           value  ------------- Distribution ------------- count
              -1 |                                         0
               0 |@                                        35
               1 |                                         0
               2 |                                         0
               4 |                                         0
               8 |                                         3
              16 |                                         7
              32 |                                         7
              64 |                                         8
             128 |                                         4
             256 |@                                        32
             512 |                                         10
            1024 |@@@@@@@@@@                               321
            2048 |@@@@@@@@@@@@@@@@@@@@@@@@                 767
            4096 |@@@                                      89
            8192 |                                         3
           16384 |                                         1
           32768 |                                         3
           65536 |                                         1
          131072 |                                         0

  File                                                Read
           value  ------------- Distribution ------------- count
            8192 |                                         0
           16384 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 203
           32768 |                                         0



FILE TYPE       OPS    SUM (in bytes)
Pipe            Read                2
Pipe            Write               2
AF_UNIX         Read             1596
AF_UNIX         Write            1596
File            Write           12550
AF_INET         Read            13815
File            Read          3325952
AF_INET         Write         3552585
 
diskflushlat.d

This script is a variant of diskio2.d. It only measures the latencies of the disk cache flush command (identified via bio->bio_cmd), and if a latency exceeds some threshold, it logs the event via logger(1).

Code:
#!/usr/sbin/dtrace -s

#pragma D option quiet
#pragma D option destructive

/*
 * Disk cache flush (due to SYNC) is one of the costliest disk I/O ops.
 * Here we log to syslog all cache flushes that last longer than a
 * threshold.
 */

BEGIN
{
        ms = 1000000; /* 1 millisecond = 1000000 nanoseconds */
        thresh["ada0"] = 10 * ms; /* ssd cache flush less than 10 ms */
        thresh["da0"] = 200 * ms; /* hd cache flush less than 200 ms */
        thresh["da1"] = 200 * ms; /* hd cache flush less than 200 ms */
        thresh["da2"] = 200 * ms; /* hd cache flush less than 200 ms */
        thresh["da3"] = 200 * ms; /* hd cache flush less than 200 ms */
        thresh["da4"] = 200 * ms; /* hd cache flush less than 200 ms */
        thresh["da5"] = 200 * ms; /* hd cache flush less than 200 ms */
}

fbt::g_disk_start:entry
{
        this->bio = (struct bio *)arg0;
}

fbt::g_disk_start:entry
/this->bio->bio_cmd == 16/
{
        flush[stringof(this->bio->bio_to->name), 16] = timestamp;
}

fbt::g_disk_done:entry
/(this->bio = (struct bio *)arg0) && (this->bio->bio_cmd == 16)/
{
        this->name = stringof(this->bio->bio_disk->d_geom->name);
        this->delta = timestamp - flush[this->name, this->bio->bio_cmd];
        flush[stringof(this->name), this->bio->bio_cmd] = 0;
}

fbt::g_disk_done:entry
/this->delta > thresh[this->name]/
{
        system("logger -t CACHE_FLUSH %5s %d ms", this->name,
                this->delta / 1000000);
}

Sample output in /var/log/messages (with threshold set to 0):

Code:
Jul 13 22:46:54 aaa CACHE_FLUSH: ada0 0 ms
 
Hi t1066!

This was a fantastic read, thank you!

Have you considered trying to incorporate this work into net-mgmt/collectd5 to make FreeBSD able to report disk latency statistics through DTrace?

/Sebulon

Thanks for your compliment. Sorry that it took me several days to reply.

First of all, I made this thread more than two years ago, and DTrace has advanced a lot since then. New providers have been introduced, and most of the above scripts would need to be rewritten to reflect those changes.
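To see which providers a given system offers, something along these lines will list them:

# dtrace -l | awk 'NR > 1 { print $2 }' | sort -u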

As for incorporating this work, my C knowledge is rudimentary, so I am not really the right person to try it. Also, while disk latency is important, VFS latency may be more important. And the presentation of latency is also a problem: latency without the associated I/O sizes may paint a false picture.
 
Thanks for your compliment. Sorry that it took me several days to reply.

No hard feelings ;)

First of all, I made this thread more than two years ago, and DTrace has advanced a lot since then. New providers have been introduced, and most of the above scripts would need to be rewritten to reflect those changes.

I just tested disk-io.d and diskio2.d on a FreeBSD-9.3-RELEASE system and they worked out of the box. Just FYI.

As for incorporating this work, my C knowledge is rudimentary, so I am not really the right person to try it. Also, while disk latency is important, VFS latency may be more important. And the presentation of latency is also a problem: latency without the associated I/O sizes may paint a false picture.

Mmm, I wasn't really thinking about doing it in C as a first step. net-mgmt/collectd5 has a way to just run a command as an Exec statement: https://collectd.org/wiki/index.php/Plugin:Exec.

That way, you could just fire off the D script and be able to graph the output. Or maybe use a wrapper shell script that checks whether the DTrace modules are loaded, runs # kldload dtraceall if not, and then starts the script?
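Such a wrapper could be as small as the following sketch (the script path and the module check are assumptions to adjust for your setup):

Code:
#!/bin/sh
# Sketch of a collectd Exec wrapper: make sure the DTrace modules are
# loaded, then hand over to the D script. The path is an example only.
kldstat -q -m dtraceall || kldload dtraceall
exec /usr/sbin/dtrace -s /usr/local/share/dtrace-scripts/vfsstat.d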

Stupid question perhaps, but is it possible to write a DTrace script that prints the output and then exits? The reason I'm asking is that all the scripts I've run go on printing endlessly until you press CTRL+C.

About sizes, yeah, I agree that latency without sizes shows a false picture, but if you just show the latency for e.g. 4k, which is a rather standard I/O size to measure, it may, over time, at least give a curve that should show anomalies in operations. How much (if at all) services are affected by those changes isn't necessarily related, but it may be, which is why this information is at least better than no information, wouldn't you agree?

/Sebulon
 
Stupid question perhaps, but is it possible to write a DTrace script that prints the output and then exits? The reason I'm asking is that all the scripts I've run go on printing endlessly until you press CTRL+C.

This one is easy. Just add the following lines to the end of a script.
Code:
tick-120s
{
    exit(0);
}
Then the script will exit after 120 seconds. Or better yet, you can add
Code:
tick-$1
{
    exit(0);
}
Then run the program as dtrace -s yourscript.d 120s to achieve the same result. Replacing 120s with 1h would mean the script will run for an hour before exiting.

About sizes, yeah, I agree that latency without sizes shows a false picture, but if you just show the latency for e.g. 4k, which is a rather standard I/O size to measure, it may, over time, at least give a curve that should show anomalies in operations. How much (if at all) services are affected by those changes isn't necessarily related, but it may be, which is why this information is at least better than no information, wouldn't you agree?

/Sebulon

DTrace is extremely flexible. When using it as a monitoring tool, I am not sure which part to focus on and how detailed the data should be. Or, in other words, I cannot decide what to monitor.
 
Run this script with the -C flag, as in the first line of the code.

Code:
#!/usr/sbin/dtrace -Cs

Tried that. It just gives this on our FreeBSD 11.0 systems:

> dtrace: failed to compile script vfssnoop.d: line 44: cannot find type for "&": struct *

Seems to run fine on FreeBSD 11.1 though, so no big deal.

- Peter
 