
NFS server problem

Discussion in 'Newbies Questions & Infos' started by diverman, Mar 4, 2019.

  1. diverman

    diverman Vu+ Newbie

    Messages:
    18
    Hi,
    I have an issue with the NFS server running on a Duo4k (BH 3.0.8.0 (2019-02-22-master)). Mount requests are rejected.

    root@vuduo4k:~# cat /etc/exports
    /media/hdd 192.168.100.0/24(rw,no_root_squash,sync,no_subtree_check)
    root@vuduo4k:~# exportfs -a
    root@vuduo4k:~# showmount -a
    All mount points on vuduo4k:
    root@vuduo4k:~#

    [root@debian ~]# mount -t nfs vuduo4k:/media/hdd/ /media/duo/
    mount.nfs: requested NFS version or transport protocol is not supported
    [root@debian ~]# mount -t nfs4 vuduo4k:/media/hdd/ /media/duo/
    mount.nfs4: requested NFS version or transport protocol is not supported

    The Vu+ Duo4k is running an NFS server:
    [root@debian ~]# nmap vuduo4k

    Starting Nmap 7.40 ( nmap.org ) at 2019-03-04 22:16 CET
    Nmap scan report for vuduo4k (192.168.100.8)
    Host is up (0.0042s latency).
    rDNS record for 192.168.100.8: vuduo4k
    Not shown: 991 closed ports
    PORT STATE SERVICE
    22/tcp open ssh
    80/tcp open http
    111/tcp open rpcbind
    139/tcp open netbios-ssn
    445/tcp open microsoft-ds
    2049/tcp open nfs

    Any ideas what I'm doing wrong?
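    For reference, a quick way to see which NFS versions the server actually advertises is to query its portmapper from the client (vuduo4k is the box's hostname here):

    # ask the server's portmapper about its NFS/mountd registrations
    rpcinfo -p vuduo4k | grep -E 'nfs|mountd'
    # if no "nfs" lines come back, the client cannot negotiate any version,
    # which matches the "requested NFS version or transport protocol is
    # not supported" errors above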
     
  2. Shiro

    Shiro BH-C

    Messages:
    1,635
    Blue -> Blue -> NFS server panel

    The NFS server is already up and running.
    The config is in /usr/bin/nfs_server_script.sh
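    To see exactly what the script does at startup, it can be run through the shell tracer:

    # print every command the start script executes (standard POSIX sh)
    sh -x /usr/bin/nfs_server_script.sh start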
     
    Last edited: Mar 5, 2019
  3. slovaky22

    slovaky22 Vu+ Newbie

    Messages:
    39
    Hi man.
    I have the same problem too, on an Ultimo 4K.
    With Open Black Hole images it is OK.
    Also with Black Hole 3.0.7...
    But with BH 3.0.8 NFS doesn't work.
     
  4. diverman

    diverman Vu+ Newbie

    Messages:
    18
    While debugging the init script, I realized that /etc/exports is not even needed: the script exports /media/hdd on its own (note the -i flag below, which ignores /etc/exports). I got the following output:

    root@vuduo4k:~# /usr/bin/nfs_server_script.sh start
    + test -f /sbin/portmap
    + test -r /etc/default/nfsd
    + test -x ''
    + NFS_MOUNTD=/usr/sbin/rpc.mountd
    + test -x ''
    + NFS_NFSD=/usr/sbin/rpc.nfsd
    + test -x ''
    + NFS_STATD=/usr/sbin/rpc.statd
    + test -x /usr/sbin/rpc.mountd
    + test -x /usr/sbin/rpc.nfsd
    + test '' '!=' ''
    + NFS_SERVERS=8
    + test -n ''
    + NFS_STATEDIR=/var/lib/nfs
    + case "$1" in
    + start_portmap
    + echo 'Starting portmap daemon...'
    Starting portmap daemon...
    + start-stop-daemon --start --quiet --exec /sbin/portmap
    + '[' -f /var/run/portmap.upgrade-state ']'
    + create_directories
    + echo -n 'creating NFS state directory: '
    creating NFS state directory: + mkdir -p /var/lib/nfs
    + cd /var/lib/nfs
    + umask 077
    + mkdir -p sm sm.bak
    + test -w sm/state
    + umask 022
    + for file in xtab etab smtab rmtab
    + test -w xtab
    + for file in xtab etab smtab rmtab
    + test -w etab
    + for file in xtab etab smtab rmtab
    + test -w smtab
    + for file in xtab etab smtab rmtab
    + test -w rmtab
    + echo done
    done
    + start_nfsd 8
    + echo -n 'starting 8 nfsd kernel threads: '
    starting 8 nfsd kernel threads: + start-stop-daemon --start --exec /usr/sbin/rpc.nfsd -- 8
    + echo done
    done
    + start_mountd
    + echo -n 'starting mountd: '
    starting mountd: + start-stop-daemon --start --exec /usr/sbin/rpc.mountd -- '-f /etc/exports '
    + echo done
    done
    + start_statd
    + echo -n 'starting statd: '
    starting statd: + start-stop-daemon --start --exec /usr/sbin/rpc.statd
    /usr/sbin/rpc.statd is already running
    6038
    + echo done
    done
    + /usr/sbin/exportfs -v -i -o rw,no_root_squash,async :/media/hdd
    exporting :/media/hdd
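    One detail worth flagging in that trace: the final exportfs call has an empty client part before the colon (:/media/hdd), as if a variable were unset, so the export ends up with no client restriction. The live table can be checked with:

    # show the current export table with options
    exportfs -v
    # a wildcard export would show up as something like (hypothetical output):
    # /media/hdd    <world>(rw,no_root_squash,async,...)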

    I also checked that a local mount works:
    root@vuduo4k:~# mount -t nfs vuduo4k:/media/hdd/ /media/net/
    root@vuduo4k:~# df -h
    Filesystem Size Used Available Use% Mounted on
    /dev/root 3.5G 658.1M 2.6G 20% /
    devtmpfs 269.0M 4.0K 269.0M 0% /dev
    tmpfs 277.1M 408.0K 276.7M 0% /run
    tmpfs 277.1M 188.0K 276.9M 0% /var/volatile
    /dev/sda1 3.6T 27.4G 3.6T 1% /media/hdd
    vuduo4k:/media/hdd/ 3.6T 27.4G 3.6T 1% /media/net

    However, a remote mount does not work (and no, there isn't any firewall in between; both machines are on the same L2 segment):

    [root@debian ~]# mount -t nfs4 -o tcp vuduo4k:/media/hdd/ /media/duo/
    mount.nfs4: requested NFS version or transport protocol is not supported
    [root@debian ~]# mount -t nfs4 vuduo4k:/media/hdd/ /media/duo/
    mount.nfs4: requested NFS version or transport protocol is not supported
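    The advertised versions can also be probed directly with an RPC ping (rpcinfo ships with most rpcbind/nfs-utils packages):

    # RPC-ping the NFS service, version 3, over TCP and UDP
    rpcinfo -t vuduo4k nfs 3
    rpcinfo -u vuduo4k nfs 3
    # and the same for version 4 over TCP
    rpcinfo -t vuduo4k nfs 4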
     
  5. Thomas67

    Thomas67 Vip

    Messages:
    390
    Go to /etc/nfsmount.conf and open it with a text editor.
    In the conf you can change the protocol; the default is 4, but it supports 2 and 3 as well.

    And if for some reason the file /etc/exports has not been created, you can extract and edit the attached one and upload it with FTP to the /etc folder; change its permissions to 755.
    The exports file is set to share /media/hdd for all devices with IPs from 192.168.0.0 to 192.168.0.255.
    If needed, change the IP address and it should work just fine.
    Just remember to do a full reboot for the NFS server to read the new settings.
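    For reference, an exports line matching that description would look like this (adjust the network to your own LAN):

    # /etc/exports -- share /media/hdd read/write with the whole /24
    /media/hdd 192.168.0.0/24(rw,no_root_squash,sync,no_subtree_check)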
     

    Attached Files:

  6. diverman

    diverman Vu+ Newbie

    Messages:
    18
    If you read my previous posts carefully, you may notice that the presence of the file /etc/exports is not the problem.
     
  7. diverman

    diverman Vu+ Newbie

    Messages:
    18
    There's something seriously misconfigured (it's a fresh installation). rpcinfo shows different results when run with and without a hostname:

    root@vuduo4k:~# rpcinfo -s
    program version(s) netid(s) service owner
    100000 3,4 local,udp6,tcp6 portmapper superuser
    100003 3 udp,tcp nfs superuser
    100227 3 udp,tcp - superuser
    100021 4,3,1 tcp6,udp6,tcp,udp nlockmgr superuser

    root@vuduo4k:~# rpcinfo -s vuduo4k
    program version(s) netid(s) service owner
    100000 2 udp,tcp portmapper unknown
    100024 1 tcp,udp status unknown
    100005 3,2,1 tcp,udp mountd unknown

    Notice that the 'nfs' service is not listed in the second case, and that the portmapper itself reports only version 2 there.
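    One hedged guess at what this means (not confirmed in this thread): the start script launches the legacy /sbin/portmap, while nfsd and nlockmgr register with a separate rpcbind, so a remote query reaches whichever mapper owns port 111 on the LAN and never sees the nfs registration. Two quick checks:

    # are two port mappers running at once?
    pidof portmap rpcbind
    # compare what each answering mapper knows
    rpcinfo -p localhost
    rpcinfo -p 192.168.100.8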
     
  8. Thomas67

    Thomas67 Vip

    Messages:
    390
    First of all, I did read your post; I was answering these lines:
    ------------------------------------------------------------------------------------------------------
    [root@debian ~]# mount -t nfs4 -o tcp vuduo4k:/media/hdd/ /media/duo/
    mount.nfs4: requested NFS version or transport protocol is not supported
    [root@debian ~]# mount -t nfs4 vuduo4k:/media/hdd/ /media/duo/
    mount.nfs4: requested NFS version or transport protocol is not supported
    ----------------------------------------------------------------------------------------------------

    And therefore I suggested you check your protocol settings.

    Can you post your exports file and NFS server conf file so we can have a look at them?
     
  9. diverman

    diverman Vu+ Newbie

    Messages:
    18
    Sure, exports:
    /media/hdd 192.168.100.0/24(rw,no_root_squash,sync,no_subtree_check)

    nfsmount.conf (default from installation):
    #
    # /etc/nfsmount.conf - see nfsmount.conf(5) for details
    #
    # This is an NFS mount configuration file. This file can be broken
    # up into three different sections: Mount, Server and Global
    #
    # [ MountPoint "Mount_point" ]
    # This section defines all the mount options that
    # should be used on a particular mount point. The '<Mount_Point>'
    # string needs to be an exact match of the path in the mount
    # command. Example:
    # [ MountPoint "/export/home" ]
    # background=True
    # Would cause all mounts to /export/home to be done in
    # the background
    #
    # [ Server "Server_Name" ]
    # This section defines all the mount options that
    # should be used on mounts to a particular NFS server.
    # Example:
    # [ Server "nfsserver.foo.com" ]
    # rsize=32k
    # wsize=32k
    # All reads and writes to the 'nfsserver.foo.com' server
    # will be done with 32k (32768 bytes) block sizes.
    #
    [ NFSMount_Global_Options ]
    # This statically named section defines global mount
    # options that can be applied on all NFS mounts.
    #
    # Protocol Version [2,3,4]
    # This defines the default protocol version which will
    # be used to start the negotiation with the server.
    # Defaultvers=4
    #
    # Setting this option makes it mandatory that the server supports
    # the given version. The mount will fail if the given version is
    # not supported by the server.
    # Nfsvers=4
    #
    # Network Protocol [udp,tcp,rdma] (Note: values are case sensitive)
    # This defines the default network protocol which will
    # be used to start the negotiation with the server.
    # Defaultproto=tcp
    #
    # Setting this option makes it mandatory that the server supports the
    # given network protocol. The mount will fail if the given network
    # protocol is not supported by the server.
    # Proto=tcp
    #
    # The number of times a request will be retried before
    # generating a timeout
    # Retrans=2
    #
    # The number of minutes to retry a mount before giving up
    # Retry=2
    #
    # The minimum time (in seconds) file attributes are cached
    # acregmin=30
    #
    # The maximum time (in seconds) file attributes are cached
    # acregmax=60
    #
    # The minimum time (in seconds) directory attributes are cached
    # acdirmin=30
    #
    # The maximum time (in seconds) directory attributes are cached
    # acdirmax=60
    #
    # Enable Access Control Lists
    # Acl=False
    #
    # Enable Attribute Caching
    # Ac=True
    #
    # Do mounts in background (i.e. asynchronously)
    # Background=False
    #
    # Close-To-Open cache coherence
    # Cto=True
    #
    # Do mounts in foreground (i.e. synchronously)
    # Foreground=True
    #
    # How to handle timeouts from servers (Hard is STRONGLY suggested)
    # Hard=True
    # Soft=False
    #
    # Enable File Locking
    # Lock=True
    #
    # Enable READDIRPLUS on NFS version 3 mounts
    # Rdirplus=True
    #
    # Maximum Read Size (in Bytes)
    # Rsize=8k
    #
    # Maximum Write Size (in Bytes)
    # Wsize=8k
    #
    # Maximum Server Block Size (in Bytes)
    # Bsize=8k
    #
    # Ignore unknown mount options
    # Sloppy=False
    #
    # Share Data and Attribute Caches
    # Sharecache=True
    #
    # The amount of time, in tenths of a second, the client
    # will wait for a response from the server before retransmitting
    # the request.
    # Timeo=600
    #
    # Sets all attributes times to the same time (in seconds)
    # actimeo=30
    #
    # Server Mountd port
    # mountport=4001
    #
    # Server Mountd Protocol
    # mountproto=tcp
    #
    # Server Mountd Version
    # mountvers=3
    #
    # Server Mountd Host
    # mounthost=hostname
    #
    # Server Port
    # Port=2049
    #
    # RPCGSS security flavors
    # [none, sys, krb5, krb5i, krb5p ]
    # Sec=sys
    #
    # Allow Signals to interrupt file operations
    # Intr=True
    #
    # Specifies how the kernel manages its cache of directory entries
    # Lookupcache=all|none|pos|positive
    #
    # Turn off caching of the access time
    # noatime=True
     
  10. Thomas67

    Thomas67 Vip

    Messages:
    390
    OK,
    the export settings look fine:
    /media/hdd 192.168.100.0/24(rw,no_root_squash,sync,no_subtree_check)

    I assume that you use the IP span 192.168.100.0-254, so in that case the export is okay.


    In the conf file you can try to uncomment the Defaultvers line below and test with protocol 2 or 3:

    # Protocol Version [2,3,4]
    # This defines the default protocol version which will
    # be used to start the negotiation with the server.
    Defaultvers=4

    But first, in your router, open port management and add port 2049 so it points to the NFS server.

    NFS server version 4 does not support UDP,
    but to test versions 2 and 3, set the port redirect for both TCP and UDP.
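    On the client side, the version can also be pinned explicitly instead of editing the conf (standard mount.nfs options):

    # force NFSv3 over TCP, then over UDP
    mount -t nfs -o vers=3,proto=tcp vuduo4k:/media/hdd /media/duo
    mount -t nfs -o vers=3,proto=udp vuduo4k:/media/hdd /media/duo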

     
  11. diverman

    diverman Vu+ Newbie

    Messages:
    18
    Thanks, but this didn't help. As I said, both machines are on the same LAN segment; there's no firewall/router in between. All ports are accessible directly.

    For some reason, vuduo4k accepts local NFS mounts, but rejects remote mounts.

    locally:
    root@vuduo4k:~# rpcinfo -s
    program version(s) netid(s) service owner
    100000 3,4 local,udp6,tcp6 portmapper superuser
    100003 3 udp,tcp nfs superuser
    100227 3 udp,tcp - superuser
    100021 4,3,1 tcp6,udp6,tcp,udp nlockmgr superuser

    Remotely (nfs missing!):
    [root@debian ~]# rpcinfo -s vuduo4k
    program version(s) netid(s) service owner
    100000 2 tcp portmapper unknown
    100024 1 tcp,udp status unknown
    100005 3,2,1 tcp,udp mountd unknown

    The mount point is visible, but not accessible:
    [root@debian ~]# showmount -e vuduo4k
    Export list for vuduo4k:
    /media/hdd *

    [root@debian ~]# nmap vuduo4k
    Starting Nmap 7.40 ( nmap.org ) at 2019-03-10 14:24 CET
    Nmap scan report for vuduo4k (192.168.100.8)
    Host is up (0.000079s latency).
    rDNS record for 192.168.100.8: vuduo4k
    Not shown: 991 closed ports
    PORT STATE SERVICE
    22/tcp open ssh
    80/tcp open http
    111/tcp open rpcbind
    139/tcp open netbios-ssn
    445/tcp open microsoft-ds
    2049/tcp open nfs <------ OK
    8001/tcp open vcom-tunnel
    8002/tcp open teradataordbms
    8200/tcp open trivnet1

    I also noticed the RPC badcalls statistics on the Vu+. Every mount request increments badcalls by one -> the NFS server basically decodes the RPC request but rejects it.

    root@vuduo4k:~# nfsstat -s
    Server rpc stats:
    calls badcalls badclnt badauth xdrcall
    0 47 47 0 0
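    For what it's worth, badclnt counts RPC-level rejections such as a program/version mismatch, which would fit a v4 request hitting a v3-only nfsd. The counters can be watched live while retrying the mount:

    # print the server RPC counters every 2 seconds
    while sleep 2; do nfsstat -s | head -3; done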
     
  12. Thomas67

    Thomas67 Vip

    Messages:
    390
    Can you mount your remote NFS share on your computer?

    And are your shares a real directory, not just datasets?

    This sounds like an issue with user rights in the directory tree of the remote NFS server share.
    Remember that if it is not set up correctly as a directory share, you can't share datasets
    without a user from the Duo4K being added to the remote server.

    With datasets, users must exist in both environments with rights added.
     
  13. diverman

    diverman Vu+ Newbie

    Messages:
    18
    I don't understand your terminology. What is a "dataset" in your terms?
    This is basically a fresh Blackhole installation; all defaults are kept and no additional users were added. I operate as root on both server and client (uid=0, gid=0).
     
  14. Thomas67

    Thomas67 Vip

    Messages:
    390
    Let's go back to the start and see if I have even understood your setup :)

    You have an NFS share on your Duo4k and a share on a Linux server in the same network?
    And the problem is when you try to mount the remote NFS share from the Linux server?

    Directories are root folders on an HDD, while datasets are subfolders on a partition.
    So if you share the whole HDD, it is the top of the tree, subfolders live inside the directory,
    and they can inherit rights.

    If you have an HDD split into partitions,
    they are no longer just directories; they are now subfolders (datasets) of the directory,
    and therefore by default they don't inherit rights from the default settings in the NFS export.

    You then have to create an account on the remote server, give that user access to all folders you want to share,
    and use this user as the login from your Duo4K.
     
  15. diverman

    diverman Vu+ Newbie

    Messages:
    18
    Hi. I have a Duo4K with BH, then Debian server and Debian workstation. All are on the same LAN (192.168.100.0/24).
    1. Debian server exports /data to Debian workstation -> works fine.
    2. Debian server exports /data to Duo4k -> works fine (mounted as /media/net)
    3. Duo4k exports /media/hdd to workstation -> protocol not supported.
    4. Duo4k exports /media/hdd to server -> protocol not supported.
    5. Duo4k exports /media/hdd to localhost -> mounted as /media/testnfs -> works fine
    vuduo4k:/media/hdd is a 4TB HDD with a single primary partition and an ext4 filesystem.

    root@vuduo4k:~# ls -ld /media/hdd/
    drwxr-xr-x 10 root root 4096 Mar 10 16:54 /media/hdd/

    root@vuduo4k:~# ls -la /media/hdd/
    drwxr-xr-x 10 root root 4096 Mar 10 16:54 .
    drwxr-xr-x 12 root root 4096 Mar 3 12:11 ..
    drwxr-xr-x 6 root root 4096 Mar 8 18:43 .kodi
    ---------- 1 root root 149023744 Mar 3 22:43 BhPersonalBackup.bh7
    drwxr-xr-x 2 root root 4096 Mar 3 00:18 backup
    -rw-r--r-- 1 root root 1432601 Mar 8 19:03 epg.dat
    drwx------ 2 root root 16384 Mar 2 20:02 lost+found
    drwxr-xr-x 2 root root 24576 Mar 10 17:45 movie
    drwxr-xr-x 2 root root 4096 Mar 2 20:46 movie_trash
    drwxr-xr-x 2 root root 4096 Mar 3 13:58 tuner
    drwxr-xr-x 2 root root 4096 Mar 3 00:16 vti-backupsuite
    -r-x------ 1 root root 16384 Mar 3 10:15 vtidb.db
    drwxr-xr-x 3 root root 4096 Mar 3 12:12 vuplus


    I don't think I have anything special or unusual in my setup. Debian->Debian NFS works just fine, Debian->Duo NFS also works fine, but Duo->Debian causes "protocol not supported".

    I try to keep the defaults on the Duo as much as possible, so I think it's a problem in the Duo: NFS doesn't work on the LAN, although it works locally (see point 5).
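    If tcpdump is available on either end, the failing negotiation can be captured while retrying the mount from the workstation:

    # watch portmapper and NFS traffic on the wire
    tcpdump -n -i any port 111 or port 2049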
     
  16. Thomas67

    Thomas67 Vip

    Messages:
    390
    This is strange, so I had to set up a test myself and see what results I got.

    My box is an Uno4kSE, but it should behave the same; the image is the latest 3.0.8 C.

    Default NFS on the Vu+ (only the IP changed in exports), defaults in the conf.

    Opened the mount manager on a Solo2 and added the NFS share.

    And you are right, there is something strange and wrong in the NFS.
    I can't make a mount, so I'll have to dive into this and see what's wrong.
     
  17. diverman

    diverman Vu+ Newbie

    Messages:
    18
    Thanks for the time you've spent on this. It seems the NFS server running on the Duo4K denies mounts from 192.168.100.0/24 but accepts mounts from 127.0.0.1/32. This is really strange.
     
  18. Thomas67

    Thomas67 Vip

    Messages:
    390
    No problem :) Time and help are what we are here for, even if it's just a hobby :)

    These are my exports settings, and I can now mount with the full IP address. Notice the small but needed change; I highlighted it in yellow.
    What does all_squash do? Well, it maps all users to anonymous/root, and yes, it's a security risk if you were to share this
    online or on a public network. But we have only root as a user, without a password, and the sharing is inside a private network. You can change all_squash to root_squash if you want to tighten security.

    /media/hdd 10.146.200.34/24(rw,all_squash,sync,no_subtree_check)

    I also had to add it manually in the mount manager; see the screenshots below.

    (Screenshots attached: settings.jpg, mount.jpg)
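    For anyone copying this, the two squash options side by side (hypothetical 10.146.200.0/24 network):

    # map every uid/gid to the anonymous user (most permissive)
    /media/hdd 10.146.200.0/24(rw,all_squash,sync,no_subtree_check)
    # map only uid 0 to the anonymous user (tighter)
    /media/hdd 10.146.200.0/24(rw,root_squash,sync,no_subtree_check)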
     
  19. slovaky22

    slovaky22 Vu+ Newbie

    Messages:
    39
    Hi guys.
    I still have problems with the NFS mount of my NAS HDD on Black Hole 3.0.8.
    I have tried everything to mount the NAS HDD,
    but with BH 3.0.8 it doesn't work.
    See the photos.
    When I scan with Network Browse I can see my NAS HDD, but
    when I try to connect, the last step doesn't work: I don't see the folders inside the NAS HDD.
    And this is the main problem, I think.
    I did the same thing with OBH, and everything worked perfectly.
    Is it possible to solve this problem?
    Please, I need help with this.
    Thanks.
    The first two photos show that my Ultimo 4K can see the NAS HDD, and the second two photos show that the Ultimo 4K doesn't mount the NAS HDD.

     
  20. Shiro

    Shiro BH-C

    Messages:
    1,635
    Just out of curiosity, why don't you use Samba? Your NAS supports it and it is the easiest way.
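    For comparison, a CIFS mount of the same box would look like this (the nmap scans above show ports 139/445 open; the share name here is a guess, as Vu+ images typically call it Harddisk):

    # mount the Samba share as guest (requires cifs-utils on the client)
    mount -t cifs //vuduo4k/Harddisk /media/duo -o guest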
     
