Western Digital 500GB FAT32 can't mount...

Hi, I have a Western Digital FAT32 external USB hard drive. First off, I don't know how to mount it, and I would like it to auto-mount. I have Gnome installed. When I go to My Computer it is listed as My Book - there is a space between the two words. When I click it to mount, I get this error message:
Code:
Unable to mount the volume 'My Book'.

Followed by a link called Details which shows this:
Code:
mount_msdosfs:/dev/msdosfs/My Book:Disk too big, try '-o large' mount option:Invalid argument

That is what it says. I tried using this:
[cmd=]mount -t msdosfs -o large /dev/msdosfs/My Book /mnt/username[/cmd]
and this:
[cmd=]mount -t msdosfs -o large /dev/msdosfs/My%Book /mnt/username[/cmd]

Yes, username is a folder inside my mount folder.

This is what gets spit out when I try those:
Code:
usage: mount [-adflpruvw] [-F fstab] [-o options] [-t ufs | external_type]
       mount [-dfpruvw] special | node
       mount [-dfpruvw] [-o options] [-t ufs | external_type] special node

mount_msdosfs: /dev/msdosfs/My%Book: No such file or directory

Any ideas on how to get this USB hard drive to automount?
 
Auto-mounting doesn't work anymore since the Gnome developers started depending on udev/uevent.

Code:
mount -t msdosfs -o large /dev/msdosfs/My%Book /mnt/username
You need to 'escape' the space.

# mount -t msdosfs -o large /dev/msdosfs/My\ Book /mnt/username
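If it worked, the filesystem should show up in the output of mount(8) (the path assumes the /mnt/username mount point from your command):
# mount | grep msdosfs
# df -h /mnt/username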
 
Put quotes around the argument or escape the space with a backslash:
# mount -t msdosfs -o large "/dev/msdosfs/My Book" /mnt/username
# mount -t msdosfs -o large /dev/msdosfs/My\ Book /mnt/username

Better yet, relabel the drive to change that space to an underscore. emulators/mtools might be able to do that.
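Untested sketch with emulators/mtools, assuming the FAT32 slice shows up as /dev/da0s1 (check dmesg(8)) and the drive is unmounted:
# echo 'drive c: file="/dev/da0s1"' >> ~/.mtoolsrc
# mlabel -s c:
# mlabel c:MY_BOOK
The first mlabel only shows the current label; the second writes the new one. After replugging, the device should then show up as /dev/msdosfs/MY_BOOK, no escaping needed.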
 
wblock@ said:
Put quotes around the argument or escape the space with a backslash:
# mount -t msdosfs -o large "/dev/msdosfs/My Book" /mnt/username
# mount -t msdosfs -o large /dev/msdosfs/My\ Book /mnt/username

Better yet, relabel the drive to change that space to an underscore. emulators/mtools might be able to do that.

Thanks guys, it works. Now I just have to understand how to use the dump tool in FreeBSD.
 
Probably something to do with the way Gnome expects to see an automounted device. The people who use Gnome would know more.

Use umount(8) to unmount it.
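For example, if it is mounted on /mnt/username as above:
# umount /mnt/username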
 
wblock@ said:
Probably something to do with the way Gnome expects to see an automounted device. The people who use Gnome would know more.

Use umount(8) to unmount it.

Thanks man. I unmounted it; I figured it out by trial and error. I am now off to try to figure out how to use the dump tool.
 
Hi, I was finally able to create a dump file, but there are issues.

The problem is that when I type the dump command in the console, it starts to run. It gets to pass III, which is about 42% finished, and then fails with a write error at some node. It then asks whether I would like to restart, yes or no. I type yes and the whole thing happens again. I did this about 10 times. Every time it stopped at a different percentage, yet it was still the same write error.

I don't know what is wrong. I still have the dump file on the external hard drive. Any ideas what I can do to try to figure out what the problem could be?
 
Depends on where the dump file is being written. Possibly it became bigger than FAT32 supports. 4G, I think that is. Pipe the output of dump into gzip(1) then pipe it into split(1).

# dump -C16 -b64 -0uanL -f - / | gzip | split -b 2000M /mnt/username/rootfs.dump.gz.
(untested, I'm not feeling awake enough right now)
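An easy way to check whether that is what you hit: look at the size of the partial dump file already sitting on the external drive.
# ls -lh /mnt/username
If it stopped right around 4G, that's the FAT32 file size limit.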
 
wblock@ said:
Depends on where the dump file is being written. Possibly it became bigger than FAT32 supports. 4G, I think that is. Pipe the output of dump into gzip(1) then pipe it into split(1).

# dump -C16 -b64 -0uanL -f - / | gzip | split -b 2000M /mnt/username/rootfs.dump.gz.
(untested, I'm not feeling awake enough right now)

I ran that command and got this message:
Code:
split: /mnt/username/rootfs.dump.gz.: No such file or directory
 
Aha, that's why I said untested. Missing a dash for stdin:
# dump -C16 -b64 -0uanL -f - / | gzip | split -b 2000M - /mnt/username/rootfs.dump.gz.

But still untested.
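If you want to make sure the pieces are intact afterwards, something like this should test the compressed stream end to end (also untested; the names assume the split prefix above):
# cat /mnt/username/rootfs.dump.gz.* | gzip -t
gzip -t is silent if the data decompresses cleanly and complains otherwise.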
 
wblock@ said:
Aha, that's why I said untested. Missing a dash for stdin:
# dump -C16 -b64 -0uanL -f - / | gzip | split -b 2000M - /mnt/username/rootfs.dump.gz.

But still untested.

Ok, thanks. Can you explain exactly what you're trying to do? Is there any way I can just look up the logs and show you the error messages?
 
What that does is create a compressed backup of the / partition, splitting it into individual files that are no larger than 2000M each because FAT32 does not deal well with large files. The man pages for dump(8), gzip(1), and split(1) explain the options.

If you're running out of space, it's probably because the backup drive is not mounted.
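For the record, restoring would reverse the pipeline: concatenate the pieces in order, decompress, and feed restore(8) from standard input. Untested sketch, assuming the same file names and that you are cd'ed into the freshly created filesystem you want to restore onto:
# cat /mnt/username/rootfs.dump.gz.* | gunzip | restore -rf -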
 
wblock@ said:
What that does is create a compressed backup of the / partition, splitting it into individual files that are no larger than 2000M each because FAT32 does not deal well with large files. The man pages for dump(8), gzip(1), and split(1) explain the options.

If you're running out of space, it's probably because the backup drive is not mounted.

Well, I tried your code. It did something, but I think it was overwriting my root files. At some point the server restarted by itself and then showed that the hard drive had bad sectors, so it went through many bad sectors to fix them. Eventually I got the system back up.
 
FreeBSD 9.0-RELEASE has some journal bugs that interfere with dump. But you haven't said if you're using 9.0-RELEASE.
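A quick way to check which release you are running:
# uname -r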
 
wblock@ said:
FreeBSD 9.0-RELEASE has some journal bugs that interfere with dump. But you haven't said if you're using 9.0-RELEASE.

No, I don't plan on installing 9.0 yet. I want to upgrade eventually, once I am told by others that it's stable enough. I was told that there are too many issues to recommend upgrading to 9.0 right now, so I plan to hold off until I hear that most of the bugs and issues are solved.
 