Areca and kernel error "g_vfs_done"

Ever since I started using my Areca ARC-1280 controller with a 12-disk RAID 6 array, my NAS has been generating error messages. At the moment it doesn't seem to affect performance or system stability, and no corrupt files are being generated, but I would like to find out what is going on. The errors don't happen frequently, just a couple of times per day.

The RAID 6 array and the Areca card are properly configured (as far as I can see). No errors are reported by the Areca internal event viewer, so it appears to be a FreeBSD-side error. I'm using the latest firmware and FreeBSD drivers for the controller.

Areca RAID 6 array overview:
Code:
SCSI Ch/Id/Lun	0/0/0
Raid Level	Raid 6
Stripe Size	128KBytes
Block Size	512Bytes
Member Disks	12
Cache Mode	Write Back
Tagged Queuing	Enabled
Volume State	Normal

Boot message FreeBSD 8.1 (dmesg.today):
Code:
ARECA RAID ADAPTER0: Driver Version 1.20.00.17 2010-07-21
ARECA RAID ADAPTER0: FIRMWARE VERSION V1.48 2009-12-31

Error message in FreeBSD 8.1 (messages):
Code:
Dec  4 08:15:06 030-NAS kernel: g_vfs_done():da0p1[WRITE(offset=3416332353536, length=16384)]error = 16
Dec  4 08:45:20 030-NAS kernel: g_vfs_done():da0p1[WRITE(offset=3416349081600, length=2048)]error = 16
Dec  5 00:12:46 030-NAS kernel: g_vfs_done():da0p1[WRITE(offset=3416332353536, length=16384)]error = 16
Dec  5 03:01:15 030-NAS kernel: g_vfs_done():da0p1[READ(offset=6979584, length=2048)]error = 16
Dec  5 14:19:41 030-NAS kernel: g_vfs_done():da0p1[WRITE(offset=3416349081600, length=2048)]error = 16

The Areca controller has a total of 13 disks: 12 of them are in the RAID 6 array and 1 is configured as a pass-through disk. The errors above are only generated by the RAID 6 array (da0), not by the pass-through disk (da1).
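For what it's worth, "error = 16" seems to map to EBUSY ("Device busy") in the FreeBSD errno headers. This is just how I looked the number up (assuming stock include files):
Code:
# Assumption: stock FreeBSD headers; errno 16 should resolve to EBUSY
grep -w 16 /usr/include/sys/errno.h
# expected output: #define EBUSY 16 /* Device busy */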

Geom name: da0 (RAID 6 array)
Code:
fwheads: 255
fwsectors: 63
last: 19531248606
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da0p1
   Mediasize: 9999999269376 (9.1T)
   Sectorsize: 512
   Mode: r1w1e1
   rawtype: 516e7cb6-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 9999999269376
   offset: 17408
   type: freebsd-ufs
   index: 1
   end: 19531248606
   start: 34
Consumers:
1. Name: da0
   Mediasize: 9999999303680 (9.1T)
   Sectorsize: 512
   Mode: r1w1e2


Geom name: da1 (Pass-through)
Code:
fwheads: 255
fwsectors: 63
last: 1953525134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da1p1
   Mediasize: 1000204851712 (932G)
   Sectorsize: 512
   Mode: r1w1e1
   rawtype: 516e7cb6-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 1000204851712
   offset: 17408
   type: freebsd-ufs
   index: 1
   end: 1953525134
   start: 34
Consumers:
1. Name: da1
   Mediasize: 1000204886016 (932G)
   Sectorsize: 512
   Mode: r1w1e
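
The GEOM details above look like gpart list output; for reference, this is how such a listing can be reproduced (device names assumed):
Code:
# gpart list prints the Providers/Consumers details shown above (da0/da1 assumed)
gpart list da0
gpart list da1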

I have looked everywhere for a solution, but no luck. Hopefully you can help me with this error. Let me know if you need additional information or log files. Thanks in advance.
 
I found another Areca-related error in my 'messages' file.

Part of /var/log/messages file:
Code:
........
module_register: module pci/arcmsr already exists!
Module pci/arcmsr failed to register: 17
arcmsr0: <Areca SATA Host Adapter RAID Controller (RAID6 capable)
ARECA RAID ADAPTER0: Driver Version 1.20.00.17 2010-07-21
ARECA RAID ADAPTER0: FIRMWARE VERSION V1.48 2009-12-31
arcmsr0: [ITHREAD]
........
(probe16:arcmsr0:0:16:0): inquiry data fails comparison at DV1 step
da0 at arcmsr0 bus 0 scbus0 target 0 lun 0
da0: <Areca 030-Storage-#1 R001> Fixed Direct Access SCSI-5 device
da0: 166.666MB/s transfers (83.333MHz, offset 32, 16bit)
da0: Command Queueing enabled
da0: 9536742MB (19531248640 512 byte sectors: 255H 63S/T 1215763C)
pass2 at arcmsr0 bus 0 scbus0 target 16 lun 0
pass2: <Areca RAID controller R001> Fixed Processor SCSI-0 device
da1 at arcmsr0 bus 0 scbus0 target 0 lun 1
da1: <Seagate ST31000528ASQ R001> Fixed Direct Access SCSI-5 device
da1: 166.666MB/s transfers (83.333MHz, offset 32, 16bit)
da1: Command Queueing enabled
da1: 953869MB (1953525168 512 byte sectors: 255H 63S/T 121601C)
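
On a side note, the "failed to register: 17" looks like EEXIST (errno 17), which would fit the "module pci/arcmsr already exists" line just above it, i.e. the driver apparently gets registered twice. A quick way to check whether it is both compiled into the kernel and loaded again from /boot/loader.conf (assuming a standard setup):
Code:
# Assumption: standard FreeBSD setup; if arcmsr shows up both in the loaded
# kernel modules and in loader.conf, it is being loaded twice
kldstat -v | grep -i arcmsr
grep -i arcmsr /boot/loader.conf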
 