Solved No space left on device

After buying a new device for backing up my 500GB problem disk, I'm having a hard time trying to create a copy. The latest problem I ran into: after a successful copy of 62 GB, which took all of six hours, the system stopped, saying

filesystem is full.
No space left on device

This is on a 2TB device formatted with a single freebsd-ufs (GPT) partition.

df -h shows that 1.8T out of 1.8T is used and -151G is available.
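For what it's worth, a negative Avail by itself is normal on UFS: newfs holds back minfree (8% by default) for root, and df reports Avail as free space minus that reserve. A sketch of the arithmetic, with the 1.8T size taken from the df output above:

```shell
# Sketch: why df can show negative Avail on a full UFS filesystem.
# UFS keeps minfree (default 8%) of the blocks in reserve; df computes
# Avail = free - reserve, which goes negative once root's writes dip
# into the reserve.
size_gb=1843        # ~1.8T as reported by df (assumed value)
minfree_pct=8       # newfs default; matches the dumpfs "minfree" field
reserve_gb=$(( size_gb * minfree_pct / 100 ))
echo "root reserve: ${reserve_gb}G"   # prints 147G, the same order as -151G
```

So the -151G figure is consistent with an 8% reserve on a filesystem of this size; it does not by itself point at a hardware problem.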

ls shows a single 62G file is on the partition.

Looks like I have hit some limit somewhere.

What to do?


 
Thanks. Which version of FreeBSD, exactly?

freebsd-version -kru ; uname -aKU

Someone else recently found a problem with the file system. Repairs were made, details unknown.
 
Did you newfs /dev/da0? Normally it should not be /dev/da0 but /dev/da0p1, /dev/da0s1a, or something similar. If you are mounting the file system on /mnt, run dumpfs -m -s /mnt.
Apologies, not familiar with this command.

Here's the first part of the output:

Code:
magic    19540119 (UFS2)
last mounted time    Thu May  9 12:59:37 2024
last modified time    Sun May 12 06:09:39 2024
superblock location    65536    id    [ 5d52b726 143792cd ]
ncg    154    size    24641536    blocks    23866583
bsize    32768    shift    15    mask    0xffff8000
fsize    4096    shift    12    mask    0xfffff000
frag    8    shift    3    fsbtodb    3
minfree    8%    optim    time    symlinklen 120
maxbsize 32768    maxbpg    4096    maxcontig 4    contigsumsize 4
nbfree    919569    ndir    149249    nifree    10874375    nffree    431597
bpg    20035    fpg    160280    ipg    80256    unrefs    0
nindir    4096    inopb    128    maxfilesize    2252349704110079
sbsize    4096    cgsize    32768    csaddr    5056    cssize    4096
sblkno    24    cblkno    32    iblkno    40    dblkno    5056
cgrotor    6    fmod    0    ronly    0    clean    0
metaspace 6408    avgfpdir 64    avgfilesize 16384
flags    soft-updates+journal
check hashes    cylinder-groups
fsmnt    /
volname        swuid    0    providersize    24641536

There are over 19,000 lines of output. Hope the above gives some idea about what is wrong.
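The size and fsize fields above can be cross-checked: capacity = size × fsize. Note that this comes to roughly 94G rather than 1.8T, and fsmnt reads /, so this superblock may belong to the root filesystem rather than the 2TB backup disk:

```shell
# Sketch: capacity implied by the dumpfs fields quoted above.
size=24641536     # total fragments (dumpfs "size" field)
fsize=4096        # bytes per fragment (dumpfs "fsize" field)
cap_gib=$(( size * fsize / 1024 / 1024 / 1024 ))
echo "capacity: ${cap_gib}G"   # prints 94G, nowhere near 1.8T
```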

gpart show da0 shows

da0 GPT (1.9T) [CORRUPT]

fsck does not find the filesystem superblock.
 
Thanks. Which version of FreeBSD, exactly?

freebsd-version -kru ; uname -aKU

Someone else recently found a problem with the file system. Repairs were made, details unknown.
13.2-RELEASE-p2
13.2-RELEASE-p2
13.2-RELEASE-p2
FreeBSD X1 13.2-RELEASE-p2 FreeBSD 13.2-RELEASE-p2 GENERIC amd64 1302001 1302001
 
Please, is there a known issue with using an entire device for UFS?

There's no BUGS section in newfs(8).
No, but in this case, if /dev/da0 was used, dumpfs /dev/da0 should have reported the details of the filesystem. Also, if a GPT table is created (gpart create -s gpt da0), newfs'ing /dev/da0 will break the table. If you want to newfs the entire /dev/da0, you should not create a GPT table; otherwise, create a partition covering the whole free space and use /dev/da0p1.
 
Initially I ran

gpart destroy -F da0
gpart create -s gpt da0
gpart add -t freebsd-ufs da0
newfs /dev/da0p1

It seems as though the device is screwed up now so I can start again.

I only want to use the device for data, so do I need to partition it?

Having just tried, it looks like I can mount an unpartitioned device; however, how can I tell if there is some underlying hardware problem?
 
Is it possible that you maybe tried to copy a sparse file? In theory, you can create a sparse file that is larger than the disk that contains it.

Example:
Code:
user@system:~/ % dd if=/dev/zero bs=1 count=1 seek=1000T of=test
1+0 records in
1+0 records out
1 bytes transferred in 0.000067 secs (14993 bytes/sec)
user@system:~/ % ls -lh test
-rw-r--r--  1 user  group   1.0P May 12 18:57 test
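Building on that example, a quick way to tell whether a file is sparse is to compare its apparent size (ls) with the space it actually occupies (du); a sketch, with the file name assumed:

```shell
# Sketch: a sparse file shows a large apparent size but few used blocks.
dd if=/dev/zero of=sparsetest bs=1 count=1 seek=1G 2>/dev/null
ls -lh sparsetest    # apparent size: ~1.0G
du -h sparsetest     # actual usage: a few KB at most
rm sparsetest
```

If the du figure is far smaller than the ls figure, the file is sparse, and a naive copy can expand it to its full apparent size on the target.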
 
Please, is there a known issue with using an entire device for UFS?
It is a bad idea, and recommended against. It will work though.

Except in the case of the OP, who seems to be confusing /dev/da0 with /dev/da0p1; given that he probably ran a lot of other commands, he probably has destroyed the file system.
 
Is it possible that you maybe tried to copy a sparse file? In theory, you can create a sparse file that is larger than the disk that contains it.
In practice, sparse files are rare. Exceedingly sparse files (with terabyte holes) are exceedingly rare.
 
It is a bad idea, and recommended against. It will work though.

Except in the case of the OP, who seems to be confusing /dev/da0 with /dev/da0p1; given that he probably ran a lot of other commands, he probably has destroyed the file system.
I was using /dev/da0p1 when I got the 'No space' error after I created a 62GB file on the 2TB partition.

I suspect that the disk is faulty but not sure how to tell.

I can't use smartctl because I need a specific '-d' parameter.
 
May be off-topic... but why do I sometimes get a "No space..." message when dd if=/dev/zero of=/dev/somedrive bs=1M finishes?
Good drives just finish without that message.
But some drives finish with that message after being fully overwritten.
And bad drives fail with this message at any point between 0% and 100%...
Any idea why this happens?

And in case of the OP, might it be a bad target drive?
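For what it's worth, the message itself is ordinary: dd reads from /dev/zero until a write fails, and writing past the end of a block device fails with ENOSPC, which dd reports as "No space left on device", so seeing it at 100% is normal. The same errno can be reproduced without touching a disk via /dev/full (present on Linux and recent FreeBSD); a minimal sketch:

```shell
# Sketch: "No space left on device" is dd reporting ENOSPC from write(2).
# /dev/full fails every write with ENOSPC, so it reproduces the message
# without involving a real disk:
dd if=/dev/zero of=/dev/full bs=1M count=1 2>&1 | grep 'No space'
```

A failure before the end of the device, by contrast, usually shows up as an I/O error rather than ENOSPC.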
 
I was using /dev/da0p1 when I got the 'No space' error after I created a 62GB file on the 2TB partition.
Well, to be accurate, you were using a file system, which was mounted on something, here /dev/da0p1.

I suspect that the disk is faulty but not sure how to tell.
Errors in the log files, errors on the console, I/O errors to applications. Getting a "no space" error due to a disk error is somewhere between far-fetched and impossible.

I can't use smartctl because I need a specific '-d' parameter.
Smartctl is a useful tool, but it is not a "go-nogo" gauge to check whether a disk is OK or bad. In reality, that question is much more complex and full of nuance. But if you had disk errors, you would have seen them, as mentioned above.

General comment: People cry "bad disk" way too often. In reality, most storage / file system errors are human error, not hardware error.
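Following the advice above about where disk errors actually show up, one non-destructive way to exercise the drive is simply to read every block and watch the kernel log (da0 is an assumed device name; run as root):

```shell
# Sketch: whole-surface read test; a bad sector surfaces as an I/O error
# here and as CAM errors in the kernel log (da0 is assumed).
dd if=/dev/da0 of=/dev/null bs=1M   # read every block, destroys nothing
dmesg | grep -i da0                 # check for READ/CAM errors afterwards
diskinfo -t da0                     # optional: transfer/seek benchmark
```

If the full read completes with no errors in dmesg, a media fault is unlikely to be the cause of the "no space" message.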
 
Initially I ran

gpart destroy -F da0
gpart create -s gpt da0
gpart add -t freebsd-ufs da0
newfs /dev/da0p1
OK, you did the right thing so far. Then where did you mount it, and what command did you use to copy the data? Please show me the exact commands you ran.
Also, I want to see the output of the mount command with no arguments.

I suspect you did something that corrupted the GPT table, so I want to know exactly what you did, based on the commands you issued.
 
Well, to be accurate, you were using a file system, which was mounted on something, here /dev/da0p1.


Errors in the log files, errors on the console, I/O errors to applications. Getting a "no space" error due to a disk error is somewhere between far-fetched and impossible.

I don't know the cause of the error, that's why I'm asking. It happened twice using the same device on two different machines.


Smartctl is a useful tool, but it is not a "go-nogo" gauge to check whether a disk is OK or bad. In reality, that question is much more complex and full of nuance. But if you had disk errors, you would have seen them, as mentioned above.

General comment: People cry "bad disk" way too often. In reality, most storage / file system errors are human error, not hardware error.
How do I eliminate the possibility of a 'bad disk'?

I know I can't eliminate 'operator (human) error'.
 
OK, you did the right thing so far. Then where did you mount it, and what command did you use to copy the data? Please show me the exact commands you ran.
Also, I want to see the output of the mount command with no arguments.

I suspect you did something that corrupted the GPT table, so I want to know exactly what you did, based on the commands you issued.
I basically want to start again from scratch, but want to ensure the device is not faulty before I start.
 