Rclone + Webdav awful performance

Hi guys,

My recent experience with rclone and WebDAV is terrible; I never had such awful performance on Linux, and now I am seeing it on 14.x.
It is practically unusable, and the same applies to the latest version from Git and to the one from the binary repo.

The connection is sluggish, it disconnects, and it freezes whatever file manager I am using (mc included). I need an alternative, because I have to mount my remote drive to work on some projects.

Maybe some of you are using a better setup and can share some tips.

Thanks… 🙏
 
I tried Thunar with fusefs-webdav, but it is still too slow. And now that I am comparing FreeBSD with Devuan and getting the same unresponsiveness, I am starting to think the issue is on the server side...
 
Can you share the details of your setup? For instance, I had an issue where my laptop was on both Wi-Fi and wired at the same time, and everything was sluggish because both interfaces had default routes with the same priority. I changed the priority on the wireless route so that the wired connection is preferred, and that fixed it.
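
If it helps, this is roughly how one could check for duplicate default routes from the command line (I am sketching deletion of the extra route rather than changing its priority); the commands assume FreeBSD, and the gateway address is only an example:
Code:
# Show the IPv4 routing table; more than one "default" line means duplicate default routes
netstat -rn -f inet
# Remove the default route that points at the wireless gateway (example address)
route delete default 10.0.10.254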
 
tOsYZYny

Code:
hostname="gpc"
ifconfig_re0="inet 10.0.10.1 netmask 255.255.255.0"
ifconfig_ue0="DHCP"

sshd_enable="YES"
ntpd_enable="YES"
ntpd_sync_on_start="YES"
powerd_enable="YES"
moused_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"
kld_list="fusefs i915kms vmm"
dbus_enable="YES"
linux_enable="YES"

# Bhyve
vm_enable="YES"
vm_dir="zfs:zroot/vm"
vm_delay="5"
 
Hmm, nothing stands out from that.

Regarding your configuration, could you share a high-level overview of all the hops from the machine where rclone is running to the server? If it is just a typical setup of your machine, then the router, then the Internet, could you run rclone with debug output and also capture a tcpdump, while keeping an eye on htop or dmesg?

Perhaps tcpdump will show a networking issue, or dmesg an issue with a network driver. If only this mount is slow and nothing else is (including a speed test), then it is probably safe to say the issue is specific to this setup.
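
For instance, something along these lines; the remote name, mount point, server hostname and log paths are just placeholders, and re0 is taken from your rc.conf:
Code:
# rclone with debug-level logging (-vv) written to a file
rclone nfsmount drive:/ /mnt/drive --vfs-cache-mode writes -vv --log-file=/tmp/rclone.log
# In a second terminal, capture traffic to the WebDAV server while reproducing the slowdown
tcpdump -i re0 -w /tmp/webdav.pcap host webdav.example.com
# And check kernel messages for driver errors
dmesg -a | tail -n 50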
 
I started it in foreground mode:

Code:
set ALL_PROXY socks5h://localhost:12000 | rclone nfsmount drive:/ /mnt/drive --vfs-cache-mode writes  --no-modtime -v
2026/01/20 09:00:49 INFO  : webdav root '': poll-interval is not supported by this remote
2026/01/20 09:00:49 INFO  : webdav root '': vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2026/01/20 09:00:49 NOTICE: NFS Server running at 127.0.0.1:64804
2026/01/20 09:01:49 INFO  : webdav root '': vfs cache: cleaned: objects 23 (was 23) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2026/01/20 09:02:49 INFO  : webdav root '': vfs cache: cleaned: objects 39 (was 39) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2026/01/20 09:03:49 INFO  : webdav root '': vfs cache: cleaned: objects 39 (was 39) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2026/01/20 09:04:49 INFO  : webdav root '': vfs cache: cleaned: objects 39 (was 39) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2026/01/20 09:05:49 INFO  : webdav root '': vfs cache: cleaned: objects 39 (was 39) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2026/01/20 09:06:49 INFO  : webdav root '': vfs cache: cleaned: objects 39 (was 39) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2026/01/20 09:07:49 INFO  : webdav root '': vfs cache: cleaned: objects 39 (was 39) in use 0, to upload 0, uploading 0, total size 0 (was 0)

Here is when I changed directory the second time:
Code:
2026/01/20 09:02:49 INFO  : webdav root '': vfs cache: cleaned: objects 39 (was 39) in use 0, to upload 0, uploading 0, total size 0 (was 0)

Five minutes later Thunar is still unresponsive:
Code:
2026/01/20 09:07:49 INFO  : webdav root '': vfs cache: cleaned: objects 39 (was 39) in use 0, to upload 0, uploading 0, total size 0 (was 0)

I am getting the same behavior from Debian. I used to work normally on these WebDAV directories; something has changed after my provider updated its infrastructure.
 
Wait, you said you're also experiencing this on Debian? Then it may not be the platform.

Hmm, yeah, you might try adjusting the polling settings, the cache size, etc. to see what happens. There are more variables here, too: what sort of bandwidth are you able to get through your SOCKS proxy? What happens if you disable the poll interval or set a cache max size?

How large is the entire volume? Do you have a few large files, or many small files?
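
For example, something like this could be a starting point; the values are only guesses to experiment with, not recommendations:
Code:
rclone nfsmount drive:/ /mnt/drive \
    --vfs-cache-mode writes \
    --vfs-cache-max-size 1G \
    --dir-cache-time 30s \
    --timeout 30s --contimeout 15s \
    --no-modtime -vv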
 