mount_smbfs: unable to open connection: syserr = Authentication error

I am trying to give one server access to a part of the file storage of another over the network. Both machines are in the same data center but in different networks. The networks are properly routed.

First I tried NFS, but that turned out to be a pain and never truly worked. The only alternative I can think of is SMB. This turns out to be a pain on FreeBSD as well, since the mount_smbfs command appears to be severely limited in what it works with, if it works at all. I absolutely want to avoid FUSE extensions like sshfs because I've experienced unstable behavior with them in the past, including the shutdown of network interfaces when running sshfs on FreeBSD.

I have set up the Samba server correctly, which I verified by running smbclient -U mirror -L localhost on the host and smbclient -U mirror -L 1.2.3.4 on the client. I can log in and browse the respective share via smbclient -U mirror '\\1.2.3.4\pub'.

When I try to mount the share via mount_smbfs -I 1.2.3.4 -U mirror //1.2.3.4/pub /mnt, it first asks for a password and always immediately fails:

Code:
mount_smbfs: unable to open connection: syserr = Authentication error

Here's the (extremely small) smb4.conf of the server:

Code:
[global]
workgroup = WORKGROUP
security = user

[pub]
path = /pub
valid users = mirror
writable  = yes
browsable = yes
read only = no
guest ok = no
public = no
create mask = 0666
directory mask = 0755

Testing this from a Linux machine worked immediately. Any idea how to get this to work? Alternatively, I'm open to better solutions that work.
 
mount_smbfs supports only the obsolete SMB1 (aka CIFS) protocol. You can use one of the FUSE CIFS implementations from the ports, or force smb.conf to SMB1.
 
I have tried setting

Code:
server max protocol = NT1

as well as

Code:
client max protocol = NT1

and it did nothing to solve this, as expected.

smb.conf(5) states for both directives:

Normally this option should not be set as the automatic negotiation phase in the SMB protocol takes care of choosing the appropriate protocol.

So unless this is broken in mount_smbfs, it shouldn't need to be set. I did it anyway and it did not work. Does this mean that for all intents and purposes FreeBSD can act as an SMB server but not as a client? Meaning: mounting any (half-way modern and secure) SMB share is not something you should expect to work on FreeBSD at all?

FUSE filesystems are not an option due to instability.
 
Since this is quickly turning into a dead end, let me rephrase my question: is there any solid and recommended way to permanently share some data between the filesystems of two servers that are not located in the same LAN and that works with FreeBSD, without having to try to get NFS to work via a VPN connection? Some form of network file system, perhaps?
 
You need to modify your smb.conf to limit the max protocol and allow NTLM auth in order to use mount_smbfs. The shares will not be accessible from Windows unless you enable SMB1, which is NOT recommended because of the known exploit. If you only want to share data between two servers across the network, I would suggest using NFS or scp instead of Samba.

Code:
ntlm auth = yes
max protocol = SMB2
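
For context, these settings belong in the [global] section of the server's smb4.conf. A sketch based on the config posted earlier in this thread (workgroup and security values taken from there; comments are mine):

Code:
[global]
workgroup = WORKGROUP
security = user
# mount_smbfs authenticates with NTLM, which modern Samba disables by default
ntlm auth = yes
# cap the negotiated dialect
max protocol = SMB2

After editing the config, restart the Samba service (e.g. service samba_server restart) so the changed settings take effect.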
 
I had the same problem and found the solution here. Thanks VladiBG!
In my case it was sufficient to set ntlm auth = yes. The protocols are set as follows:
Code:
server max protocol = SMB3_11
server min protocol = NT1
client max protocol = SMB3_11
client min protocol = NT1
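
Once the server accepts NTLM, the mount can also be made persistent on the client. A sketch of an /etc/fstab entry plus a matching credentials section in /etc/nsmb.conf, assuming the server address, user, and share from this thread (see mount_smbfs(8) and nsmb.conf(5); names and password are placeholders):

Code:
# /etc/fstab on the client
//mirror@1.2.3.4/pub    /mnt    smbfs    rw,-N,-I=1.2.3.4    0    0

# /etc/nsmb.conf (keep it mode 0600, it contains the password)
[1.2.3.4:MIRROR:PUB]
password=secret

The -N flag tells mount_smbfs not to prompt interactively and to read the password from nsmb.conf instead, which is what makes an unattended mount at boot possible.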
 
Agreed, mount_smbfs(8) won't change until some developer takes the patches from macOS or rewrites the program entirely to include support for the newer SMB3 protocol.
So for now, whoever wants to use mount_smbfs(8) to access a Samba server must enable ntlm auth on that server. The same goes for Windows: as of September 2016, with security update MS16-114, Microsoft disabled SMB1 because of the remote code execution vulnerability.

https://support.microsoft.com/en-us...-disable-smbv1-smbv2-and-smbv3-in-windows-and
 
@herrbischof I suspect it's not FUSE that's unstable, it's sshfs. My attempts at using sshfs failed because the ssh connection would time out (from the remote server's side, so there wasn't much I could do about that), and sshfs was too dumb to auto-reconnect, but it would keep running and hog the local network resources. Oh, and it would unmount instead of giving write errors, so the writes would go to the local filesystem instead. Hilarity ensues if you assume that your backup went to the remote machine and wipe the disk.
 