Server freezes when using Git to update ports tree

I guess I have a different issue than the one that started this thread, but it feels similar, and that git config helped somehow. Wild guesses: it looks like if I deliberately consume memory from userland, I get swapping, and wired memory (including ARC) eventually drops below 1 GB too. But that would mean the 3 GB of wired came out of nowhere, or rather out of something. Wouldn't vmstat -m have shown it? I never ran it at the time. Maybe first we need to find what causes it; then it becomes a question of what the solution is.

I'm trying to understand what that git config does. Since git still completed all its tasks, it must simply have done something more slowly. But what needs time in ZFS? Compression? The CPU seemed pretty idle. Disk I/O? If so, shouldn't it just wait? Normally things wait: compression, network transfer, disk I/O, swapping. You could question why a 30 GB git repo should be used with 4 GB of RAM, but I don't even know why it sometimes fails, or why limiting git helps. Couldn't the system self-limit it, maybe just by waiting?

I guess git itself didn't swap because it probably never needed that much memory. It asked the kernel to do something, however, and that needed a ton of memory; if git was merely configured to spread the work over a longer timeframe, that by itself shouldn't change anything. So maybe people smarter than me can figure it out. The fix is here, but I have no idea what I fixed. All I know is that it chops the work into smaller chunks so as not to choke. I'm not that bad at this, but I can't wrap my head around FS internals that much either: read file, write file, rename file, delete file? I read that git mmaps files, which IIRC puts a file into RAM for fast access. But can you overdo it? Funnily, limits help. I could do more research, and I bet there are people here who would understand it better: what happens when an operation like that goes beyond RAM, even if RAM is only 4 GB? I'm much more familiar with running out of "usual" memory. So yeah, if nothing changes, there's no need to rush to 14 either. I'm still dabbling in a dark room, but the fix is here.
Still curious about a better fix though, if there is any. Surely there are still tiny VMs or embedded hardware that could benefit from ZFS too, and those don't exactly have 1 TB of RAM. Yet they might do something (maybe not git, but something else, even if slower) that would trigger this again. It's unlikely that I can find or fix it, but I'm curious.
 
I read more about what I blindly did:
Code:
[core]
        packedGitWindowSize = 128m
        packedGitLimit = 1g
        preloadIndex = false
[diff]
        renameLimit = 16384
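For reference, the same settings can also be applied from the command line instead of editing the file by hand; this is just the usual `git config --global` spelling of the block above:

```shell
# Same settings via git config --global (writes them to ~/.gitconfig).
git config --global core.packedGitWindowSize 128m
git config --global core.packedGitLimit 1g
git config --global core.preloadIndex false
git config --global diff.renameLimit 16384
git config --global --get core.packedGitLimit   # prints "1g"
```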

Apparently it increased renameLimit from the default of 1000 (why, eh).
preloadIndex is now disabled too; that shouldn't matter, it should only make things slower.
But packedGitWindowSize went from 1g to 128m, and packedGitLimit from 32t to 1g.
Those two do indeed control memory: they limit how much of the pack files git mmaps.
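If I read git-config(1) right, those two bound how much pack data is mapped at once: mappings happen in windows of packedGitWindowSize, and packedGitLimit caps the total across all windows. The arithmetic, as I understand it (a sketch of my reading, not git's actual code):

```shell
# Shell arithmetic sketch: how many pack windows can be mapped at once.
GiB=$((1 << 30)); MiB=$((1 << 20)); TiB=$((1 << 40))
# 64-bit defaults: 1 GiB windows under a 32 TiB cap -> effectively uncapped
echo $((32 * TiB / GiB))          # 32768 windows
# tuned values from this thread: at most 8 windows of 128 MiB = 1 GiB mapped
echo $((GiB / (128 * MiB)))       # 8 windows
```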

Is it a ZFS problem at all, then? I can't comment on this.

Why is the git default that large? Which OS does that work on? And should FreeBSD hand that default to git? Why?

Unsure if you need to test that. My idea is that this should be limited to some fraction of RAM; maybe 95%.

I also realized that kmem size is not the size of memory; it's the size of the kernel's address space. So now I know much more about memory and can even find where it goes.
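For anyone else digging, these are the FreeBSD knobs I mean (a sketch from a 13.x box; sysctl names may differ between releases, so check before trusting them):

```shell
# Kernel address-space budget vs. actual RAM (FreeBSD):
sysctl hw.physmem vm.kmem_size vm.kmem_size_max
# ARC's share of wired memory:
sysctl vfs.zfs.arc_max kstat.zfs.misc.arcstats.size
# Top kernel malloc consumers, largest MemUse first:
vmstat -m | sort -rn -k3 | head
```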

But should it go where it does? That's the question. Isn't the ability to mmap files an optional feature, similar to the usual FS cache? Maybe an error could even be returned.

I don't really know. ZFS does work here; it just has no RAM to work with, so it performs slower.

Though, what happens on UFS? Unless it's not the same issue.

Still, I have no idea whether my suggestion can even be implemented, or whether it is but somehow doesn't apply.

So the mmap problem I can only fix at the source?

It's damn bad if it's this easy to bring a machine down. I tried to find limits too, but somehow couldn't find any yet, and they feel wrong anyway. I just tested with swappable memory: it got killed by some limit somewhere.
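About limits: the per-process address-space cap does exist, it's just unlimited by default. A sketch with the sh built-in ulimit (FreeBSD can also set this per login class in login.conf):

```shell
# Cap the virtual address space at 512 MiB for one command, so an
# oversized allocation fails with ENOMEM instead of eating the box.
( ulimit -v 524288; python3 -c "bytearray(1 << 30)" ) 2>/dev/null \
    || echo "allocation refused"
```

If I read limits(1) right, something like `limits -v 1g git pull` would be the FreeBSD spelling of the same idea, though it only caps what git itself allocates, not what the kernel wires on its behalf.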

So yeah, if it's that, how do I proceed? Add more RAM? But then what? That has a finite size too. I didn't know the kernel has no limits.
 
Additionally, 13.1 is EOL. It's strongly recommended that it be updated to 13.2 now and 13.3 when it's released.
 

So yeah, if it's that, how do I proceed? Add more RAM? But then what? That has a finite size too. I didn't know the kernel has no limits.

You use more RAM than you have. Obviously you should add RAM if you can.

As for what else to try, you got numerous suggestions that you didn't follow yet, starting with testing without all those daemons and the VM.
 
What if I don't want my sshd killed because memory is full? Is there at least some knob? Everything I run there takes memory. I traced it down to, possibly, mmap, but I read that mmap doesn't mean copying a file to memory and locking it there? And somehow ZFS comes into play too. Why can't it return ENOMEM or so? Sure, it doesn't know what to prefer; but surely the kernel could just not hand out the memory, rather than relying on the program to say "OK, I'm not taking it". And could that limit somehow be derived from RAM size? Oh, I don't know. I wouldn't complain if it were swappable memory; it's not. So who knows what I'll do: tune git, since the other side has no limits possible for this? You know that whatever daemons and VMs are there use swappable memory, plus a little kernel memory for the process itself. Oh well.
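On the sshd question, there do seem to be knobs, if I read the manpages right (a sketch; check protect(1) and the vm.overcommit sysctl before trusting this):

```shell
# Exempt sshd from being killed when the kernel runs out of pages:
protect -p "$(pgrep -x sshd | head -1)"
# Or stop the kernel from overcommitting swap-backed memory at all,
# so allocations fail up front instead of triggering kills later:
sysctl vm.overcommit=1
```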
 
I wonder if it contains any changes that would keep my machine from, I think, running out of kernel memory for no reason at all?

cracauer@ somehow desperately wanted to see it swapping, but it's not.

Maybe if I battle with it enough, I could make a test case. Which is fun, because it's just an older amd64 box with some disks, and FreeBSD on top of that, which I didn't change.

I still think it's a bug. Just "using memory" can't be it, since unused RAM is wasted RAM; it's clever to cache things there and then release them.

The question is just how to get it to show. Gitting repos randomly isn't doing it, and never did.

The git manpage says those options limit mmap; what I don't get is why mmap needs limiting.
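On the "mmap doesn't mean copy to memory" part, that matches what I can reproduce in userland: mapping a file only reserves address space, and pages become resident as they're touched. A small sketch (python3 is used here just as a convenient mmap caller; nothing git-specific):

```shell
# A 1 GiB sparse file occupies almost no disk, and mapping it costs
# address space, not resident memory, until pages are actually read.
f=$(mktemp)
truncate -s $((1 << 30)) "$f"    # sparse: no data blocks written
du -h "$f"                       # shows ~0 actually on disk
python3 - "$f" <<'EOF'
import mmap, os, sys
fd = os.open(sys.argv[1], os.O_RDONLY)
mm = mmap.mmap(fd, 1 << 30, prot=mmap.PROT_READ)
print(len(mm))                   # 1 GiB mapped, almost nothing resident
mm.close(); os.close(fd)
EOF
rm "$f"
```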

I'm even worse at ZFS and have no idea if it plays some role here, or how. So yeah, no idea.

vmstat, with whatever options, doesn't appear to show me where it goes: only 80 MB to ZFS. Maybe there's some other way.

I guess it was wrong to just say "test it with *a* git command", since it didn't fail that way.

So, no idea. I wish there were a way to replicate it; then we'd see if it's worth looking into.

I mean, I'm not complaining about the machine swapping on low RAM. This is something else. Who knows. Via a specific git function I can just bring it all down? Who cares if it runs a VM, or anything else, with less than 1 GB swapped out to disk.

Sad if we can't find it.

So let me ask again: should it be possible for anything run as a user to effectively DoS the machine? You could argue that I didn't impose limits; I don't even know what limits would apply here. Funnily, I've had no such issue since 4.6 somehow. I can't even remember low-memory ZFS doing anything. Unsure; I think ZFS panicked early in its development on FreeBSD.

No idea. It's also quite hard to build a test case without knowing anything, so I may have to wait, unless someone comes up with a better idea. A similar result would be a start: within a few seconds it's out of memory and the kernel kills everything, then it stays running. Funnily, the kernel didn't panic, freeze, or crash; so it was able to limit itself. Or perhaps killing processes is what helped.

Which I have no idea how to deal with. Maybe there's some below-failure test I could run that would show whether it's bad, since it's pretty hard to obtain any info when it fails so quickly.

So meh, I have no idea what's wrong.
 
Yes; since that didn't start on init, it made no difference. Should it even? It does use a file as its HDD, but the rest of it should be perfectly swappable memory, or? I mean qemu-system-arm. But yes, back then I did try running git without it.
 