tmpfs size limit

Hi, I found some posts saying the maximum size of a tmpfs filesystem is calculated from available memory.
I have two boxes (FreeBSD 8.2 amd64) with 8 and 12 GB of RAM and nothing else installed.
I cannot create a tmpfs larger than 1 GB, whether I specify a size or not.
A 1G filesystem is created, even though top shows over 7.5G free.

Am I missing something?
I'm using the stock kernel and the loadable tmpfs module.
Thanks
 
In fstab:

Code:
tmpfs                   /mcache         tmpfs   rw,size=1G      0       0

Any value over 1G, whether I use a G or M suffix, plain bytes, or no suffix at all, results in a 1G filesystem.
There must be a hard-coded limit somewhere...
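For what it's worth, checking with df -k instead of df -h takes the rounding out of the picture, and plain bytes avoid any suffix-parsing question (the mount point here is just my example):
Code:
# 2G in bytes: 2 * 1024 * 1024 * 1024 = 2147483648
mount -t tmpfs -o size=2147483648 tmpfs /mcache
df -k /mcache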
 
I'm going to try it that way.
I'll add that I can create several 1G filesystems with no problems.
 
Sorry, but from reading posts on the internet I gather the rc.conf way creates a /tmp in RAM, whereas I need to create a cache directory at a path of my choosing. I found no examples in /etc/defaults/rc.conf.
 
I suspect I must be missing something.

Code:
mount -t tmpfs -o size=512m tmpfs /var/log/nginx
This creates a 1G filesystem, as does any other size instead of 512m. I just copied examples found on the net. The system is a plain vanilla fresh install. I'm very perplexed.
 
LoZio said:
Sorry, but from reading posts on the internet I gather the rc.conf way creates a /tmp in RAM, whereas I need to create a cache directory at a path of my choosing.

/etc/rc.conf has tmpmfs, a different thing.
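For reference, the mdmfs(8)-based knobs in /etc/defaults/rc.conf look like this (the size here is only an example), and they are unrelated to tmpfs(5):
Code:
# /etc/rc.conf -- md(4)-backed /tmp, not tmpfs
tmpmfs="YES"      # mount a memory filesystem on /tmp at boot
tmpsize="512m"    # size of that memory filesystem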

As for the original problem, tmpfs(5) says size is given in bytes.
Code:
# mount -t tmpfs -o size=536870912 tmpfs /mnt
# df -h
Filesystem           Size    Used   Avail Capacity  Mounted on
...
tmpfs                512M    4.0k    512M     0%    /mnt

Are you looking at a Linux example?
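Most of the examples floating around are for Linux, where a suffix like 512M is accepted; the 8.2 tmpfs(5) page only documents plain bytes:
Code:
# typical Linux example, with a suffix:
mount -t tmpfs -o size=512M tmpfs /mnt
# FreeBSD 8.2, per tmpfs(5), in bytes:
mount -t tmpfs -o size=536870912 tmpfs /mnt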
 
I read in another post (sorry, can't remember where) that tmpfs uses the combined capacity of your RAM and swap to gauge the size. If you have one or more tmpfs filesystems mounted, it adjusts the size on the fly depending on current capacity, available RAM and swap, etc. Here's what mine currently looks like. I have 4G of RAM and 4G of swap, running 8.2-RELEASE i386.

tmpfs fstab entries.
Code:
tmpfs                   /tmp            tmpfs   rw,nosuid       0       0
tmpfs                   /usr/obj        tmpfs   rw,noauto       0       0
tmpfs                   /usr/local/ports/wrkdir tmpfs   rw      0       0

Notice that no size argument is given.

Code:
# df -h
Filesystem            Size    Used   Avail Capacity  Mounted on
...
tmpfs                 5.7G    633M    5.1G    11%    /tmp
tmpfs                 5.1G    4.0K    5.1G     0%    /usr/local/ports/wrkdir
...

I'm not completely sure of the mechanics behind it, but it works fine here.
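If you want to sanity-check the automatic size against RAM plus swap, something like this gives a rough ceiling (swapinfo's last line is either the single device or the Total):
Code:
#!/bin/sh
ram=$(sysctl -n hw.physmem)                              # bytes
swap=$(swapinfo -k | tail -1 | awk '{print $4 * 1024}')  # avail swap, bytes
echo "rough ceiling: $((ram + swap)) bytes"
df -k /tmp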
 
ikreos said:
I read in another post (sorry, can't remember where) that tmpfs uses the combined capacity of your RAM and swap to gauge the size.

Yes, from tmpfs(5):
Code:
     size    Specifies the total file system size in bytes.  If zero (the
             default) or a value larger than SIZE_MAX - PAGE_SIZE is given,
             the available amount of memory (including main memory and swap
             space) will be used.
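So an explicit zero should behave the same as giving no size at all:
Code:
# size=0 (the default) -> sized from available RAM + swap
mount -t tmpfs -o size=0 tmpfs /mnt
df -h /mnt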
 
wblock@ said:
Yes, from tmpfs(5):
Code:
     size    Specifies the total file system size in bytes.  If zero (the
             default) or a value larger than SIZE_MAX - PAGE_SIZE is given,
             the available amount of memory (including main memory and swap
             space) will be used.

Are we looking at the same man page? I don't see that anywhere on it. All I see is this:
Code:
size    maximum size (in bytes) for the file system.

Never mind, that comes from the 9-CURRENT man page. I'm looking at the 8.2 one.
 
I wrote it in #3: it does not matter whether I use bytes or a suffix.
Further tests indicate I can only create a 1G fs, neither smaller nor bigger...
It's the same on two IBM amd64 servers, installed with the "next->next->next" procedure.
If I install a system from the same ISO in a VM with 1G of RAM, I can create a 384M fs using "384m" as the size...
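For completeness, these are the obvious things I'll double-check on one of the affected boxes, to rule out a stale kernel or module:
Code:
uname -a                # exact kernel and version
kldstat | grep tmpfs    # confirm tmpfs.ko is what is actually loaded
df -k /mcache           # capacity in 1K blocks, without -h rounding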
 
This just worked here, 8.2-STABLE as of 22 Sept:
/etc/fstab, tab-delimited
Code:
tmpfs           /mcache         tmpfs   rw,size=536870912       2       2

Code:
# mkdir /mcache
# df -h | grep mcache
tmpfs                 512M    4.0k    512M     0%    /mcache
 
Interesting behaviour here too: for me (7.1-RELEASE) it uses the defaults if I specify a size that is "too small":
Code:
# mount -t tmpfs -o size=1024 tmpfs /mnt
# df -h
Filesystem                 Size    Used   Avail Capacity  Mounted on
...
tmpfs                      7.1G    4.0K    7.1G     0%    /mnt
But it is correct if I specify something "big enough":
Code:
# mount -t tmpfs -o size=1073741824 tmpfs /mnt
# df -h
Filesystem                 Size    Used   Avail Capacity  Mounted on
...
tmpfs                      1.0G    4.0K    1.0G     0%    /mnt
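A guess at why the small value falls back to the default: 1024 bytes is less than one 4096-byte page, so it presumably rounds down to zero pages and hits the "if zero" case quoted above:
Code:
echo $((1024 / 4096))        # 0 pages -> treated as size zero
echo $((1073741824 / 4096))  # 262144 pages -> honoured as 1.0G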
 
I haven't run into size issues:
Code:
dice@williscorto:~>df -h /tmp/
Filesystem    Size    Used   Avail Capacity  Mounted on
tmpfs         3.4G     12k    3.4G     0%    /tmp

But something is amiss, though. On another machine I have 2 GB RAM and 8 GB swap; my /tmp/ should be around 10 GB. At least it was a few -STABLE versions ago.

Code:
dice@molly:~>df -h /tmp/
Filesystem    Size    Used   Avail Capacity  Mounted on
tmpfs         5.2G    224k    5.2G     0%    /tmp
dice@molly:~>swapinfo
Device          1K-blocks     Used    Avail Capacity
/dev/gpt/swap4    2097152    69156  2027996     3%
/dev/gpt/swap5    2097152    67996  2029156     3%
/dev/gpt/swap6    2097152    69904  2027248     3%
/dev/gpt/swap7    2097152    69096  2028056     3%
Total             8388608   276152  8112456     3%
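Just for the arithmetic, the "RAM plus swap" reading would give:
Code:
# 2G RAM + 8G swap, in megabytes
echo $(( (2 + 8) * 1024 ))    # 10240 (MB), vs the 5.2G df reports
So either the sizing changed somewhere along the line, or it only counts memory actually available at mount time; I can't tell which from here.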
 