main.cf (postfix) & 10-mail.conf (dovecot2): Correct settings?

Hello everybody,

I set up a FreeBSD 11.4 system with postfix 3.5.7.1, dovecot 2.3.11.3, dovecot-pigeonhole 0.5.11 and roundcube-php74-1.4.9,1 working with virtual users.

Everything works as expected, with one exception: there are problems with the server-side sieve filters.
The error log shows:

imap(user@domain.tld)<51952><PWnZSdWwA71/AAAB>: Error: stat(/var/mail/vhosts/user@domain.tld/user/.dovecot.sieve/tmp) failed: Not a directory

According to several sources on the web, this sieve error occurs because the home directory is used as the maildir directory.

Since this is a freshly set up system, I still have the chance to configure everything from scratch so that the described problem is no longer caused by using the home directory as the maildir directory.

My question is:

My 10-mail.conf for Dovecot2 currently contains:
Code:
mail_home = /var/mail/vhosts/%d/%n
mail_location = maildir:~

main.cf (postfix):
Code:
home_mailbox = Maildir/
virtual_mailbox_base = /var/mail/vhosts

Can anyone tell me which modifications I have to make in 10-mail.conf and main.cf to fulfill the following recommendation from https://wiki2.dovecot.org/VirtualUsers/Home:
Home vs. mail directory

Home directory shouldn't be the same as mail directory with mbox or Maildir formats
(but with dbox/obox it's fine). It's possible to do that, but you might run into trouble with it sooner or later.
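
For clarity, I understand that separating the two would look something like this in 10-mail.conf (this is an assumption based on the wiki's examples, not something I have tested yet):
Code:
mail_home = /var/mail/vhosts/%d/%n
mail_location = maildir:~/Maildir

i.e. the home directory stays /var/mail/vhosts/%d/%n while the maildir becomes a Maildir/ subdirectory inside it.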


Many greetings
sidney2017
 
Check the configuration of your /usr/local/etc/dovecot/conf.d/90-plugin.conf file

Comment out your home_mailbox in mail.cf
Comment out your mail_home in 10-mail.conf

Then set the following:

/usr/local/etc/dovecot/conf.d/10-mail.conf
mail_location = maildir:/var/mail/vhosts/%d/%n

/usr/local/etc/postfix/main.cf
virtual_mailbox_base = /var/mail/vhosts

/usr/local/etc/dovecot/conf.d/90-plugin.conf
Code:
plugin {
  sieve = /var/mail/vhosts/%d/%n/.dovecot.sieve
  sieve_dir = /var/mail/vhosts/%d/%n/sieve
  sieve_global_dir = /var/mail/vhosts/default.sieve
  mail_home = /var/mail/vhosts/%d/%n
}

Don't forget to create /var/mail/vhosts and grant the appropriate permissions to the users under which your postfix, dovecot and virus scanner are running.
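
For example, a minimal sketch for creating that directory (assuming a dedicated virtual mail user/group, here called vpostfix; adjust the names to your setup):
Code:
# create the virtual mail root and hand it over to the virtual mail user/group
mkdir -p /var/mail/vhosts
chown vpostfix:vpostfix /var/mail/vhosts
chmod 770 /var/mail/vhosts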

Edit:
sieve_dir and sieve_global_dir are now deprecated
 
Hello,


thank you very much for your support. I wonder where on the web one finds hints that changes also have to be made in 90-plugin.conf. Before I posted here, I spent the whole weekend using search engines to find solutions.

When I activate and save the Out of Office Assistant in Roundcube, I get an error message. In the log files this message appears as follows:

/var/log/dovecot.log

Code:
Oct 05 10:09:48 auth: Debug: passwd-file(user@domain.tld,127.0.0.1,<PyGOAeiwSDh/AAAB>): Performing passdb lookup
Oct 05 10:09:48 auth: Debug: passwd-file(user@domain.tld,127.0.0.1,<PyGOAeiwSDh/AAAB>): lookup: user=user@domain.tld file=/usr/local/etc/dovecot/users
Oct 05 10:09:48 auth: Debug: passwd-file(user@domain.tld,127.0.0.1,<PyGOAeiwSDh/AAAB>): Finished passdb lookup
Oct 05 10:09:48 auth: Debug: auth(user@domain.tld,127.0.0.1,<PyGOAeiwSDh/AAAB>): Auth request finished
Oct 05 10:09:48 auth: Debug: client passdb out: OK      1       user=user@domain.tld
Oct 05 10:09:48 auth: Debug: master in: REQUEST 1021313025      68810   1       3a158b54081624c84a3e6c0d3d6a5d1c        session_pid=68811
Oct 05 10:09:48 auth: Debug: passwd-file(user@domain.tld,127.0.0.1,<PyGOAeiwSDh/AAAB>): Performing userdb lookup
Oct 05 10:09:48 auth: Debug: passwd-file(user@domain.tld,127.0.0.1,<PyGOAeiwSDh/AAAB>): lookup: user=user@domain.tld file=/usr/local/etc/dovecot/users
Oct 05 10:09:48 auth: Debug: passwd-file(user@domain.tld,127.0.0.1,<PyGOAeiwSDh/AAAB>): Finished userdb lookup
Oct 05 10:09:48 auth: Debug: master userdb out: USER    1021313025      user@domain.tld              auth_mech=PLAIN
Oct 05 10:09:48 managesieve-login: Info: Login: user=<user@domain.tld>, method=PLAIN, rip=127.0.0.1, lip=127.0.0.1, mpid=68811, secured, session=<PyGOAeiwSDh/AAAB>
Oct 05 10:09:48 managesieve(user@domain.tld)<68811><PyGOAeiwSDh/AAAB>: Debug: Added userdb setting: plugin/=yes
Oct 05 10:09:48 managesieve(user@domain.tld)<68811><PyGOAeiwSDh/AAAB>: Debug: Effective uid=1001, gid=1001, home=
Oct 05 10:09:48 managesieve(user@domain.tld)<68811><PyGOAeiwSDh/AAAB>: Debug: Namespace inbox: type=private, prefix=, sep=, inbox=yes, hidden=no, list=yes, subscriptions=yes location=maildir:/var/mail/vhosts/user@domain.tld/user
Oct 05 10:09:48 managesieve(user@domain.tld)<68811><PyGOAeiwSDh/AAAB>: Debug: maildir++: root=/var/mail/vhosts/user@domain.tld/user, index=, indexpvt=, control=, inbox=/var/mail/vhosts/user@domain.tld/user, alt=
Oct 05 10:09:48 managesieve(user@domain.tld)<68811><PyGOAeiwSDh/AAAB>: Debug: sieve: Pigeonhole version 0.5.11 (d71e0372) initializing
Oct 05 10:09:48 managesieve(user@domain.tld)<68811><PyGOAeiwSDh/AAAB>: Debug: sieve: include: sieve_global is not set; it is currently not possible to include `:global' scripts.
Oct 05 10:09:48 managesieve(user@domain.tld)<68811><PyGOAeiwSDh/AAAB>: Error: sieve: file storage: Sieve storage path `~/sieve' is relative to home directory, but home directory is not available.
Oct 05 10:09:48 managesieve(user@domain.tld)<68811><PyGOAeiwSDh/AAAB>: Fatal: Failed to open Sieve storage.

Roundcube error.log

Code:
[05-Oct-2020 08:09:48 +0000]: <ecdb61c0> PHP Error: BYE "Internal error occurred. Refer to server log for more information. [2020-10-05 10:09:48]" (POST /roundcube/?_task=settings&_action=plugin.managesieve-vacation)
[05-Oct-2020 08:09:48 +0000]: <ecdb61c0> PHP Error: Unable to connect to managesieve on localhost:4190 in /usr/local/www/roundcube/plugins/managesieve/lib/Roundcube/rcube_sieve_engine.php on line 223 (POST /roundcube/?_task=settings&_action=plugin.managesieve-vacation)
[05-Oct-2020 08:09:48 +0000]: <ecdb61c0> PHP Error: Not currently connected (POST /roundcube/?_task=settings&_action=plugin.managesieve-vacation)

This should not have anything to do with permissions on /var/mail/vhosts, because as a test I once set this directory recursively to 777 after the above error message appeared.

It looks as if the setting
sieve_global_path = /var/mail/vhosts/default.sieve
in 90-plugin.conf is not accepted or is overridden somewhere:

dovecot.log
Code:
Sieve storage path `~/sieve' is relative to home directory, but home directory is not available.

Any idea to solve that problem?

Thanks in advance and kind regards
sidney2017

P.S.: I guess you meant main.cf and not mail.cf. This might be important for users with the same problem who read your hints here in the future.
 
Yes, main.cf, not mail.cf; sorry about that.



To other users:
Do not blindly follow any howtos/guides from the internet. The only documentation you need is the official manual of the program you are using.

NOTE:
You should limit access to /var/mail/vhosts to the group of the processes which need read/write access to this directory.

NOTE2:
sieve_global_dir = (< v0.3.1) must be changed to sieve_global, as sieve_global_dir is deprecated.
sieve_dir = ~/sieve (< v0.3.1) must be changed to the sieve setting: https://wiki.dovecot.org/Pigeonhole/Sieve/Configuration#Per-user_Sieve_script_location
for more info: https://wiki.dovecot.org/Pigeonhole/Sieve/Configuration#Deprecated_Settings

sieve_dir = ~/sieve (< v0.3.1)
Directory for personal include scripts for the include extension. The Sieve interpreter only recognizes files that end with a .sieve extension, so the include extension expects a file called name.sieve to exist in the sieve_dir directory for a script called name. When using ManageSieve, this is also the directory where scripts are uploaded. For recent Pigeonhole versions, this location is configured as part of the sieve setting.
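
To illustrate what the quote above refers to, a personal script can pull in scripts from those locations like this (the script names here are only placeholders):
Code:
require ["include"];

# runs the "spam" script from the personal script location (sieve_dir / sieve)
include :personal "spam";

# runs the "company-footer" script from the global location (sieve_global_dir / sieve_global)
include :global "company-footer";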
 
Hi,

thank you very much for your hints mentioned above!

I replaced the settings in /usr/local/etc/dovecot/conf.d/90-plugin.conf as recommended by you:

Code:
plugin {
  -e = /var/mail/vhosts/%d/%n/.dovecot.sieve
  sieve = /var/mail/vhosts/%d/%n/sieve
  sieve_global = /var/mail/vhosts/default.sieve
  mail_home = /var/mail/vhosts/%d/%n
}

I restarted dovecot with service dovecot restart.

/var/mail/vhosts is recursively owned by vpostfix:vpostfix with permissions 775.

The following users have been defined in group vpostfix:
Code:
vpostfix:*:1001:vscan,dovecot,postfix,spamd,clamav
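
(For reference, those group members were added on FreeBSD roughly like this; the command is just an example, adjust to your own users:)
Code:
pw groupmod vpostfix -m vscan,dovecot,postfix,spamd,clamav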

Unfortunately the same errors as described above occur, and in Roundcube activating the out-of-office assistant and saving the message still throws an "unable to connect to server" error.

Code:
 sockstat -l4 | grep -i 4190
root     dovecot    90267 15 tcp4   *:4190                *:*

Do you have another idea what could be the cause of the problem?


Thanks in advance and best regards
sidney2017
 
Remove this -e; it is a mistype of sieve (I fixed it in my first post).
Here is how my config looks:

plugin {
sieve = /mail/%d/%n/.dovecot.sieve
sieve_dir = /mail/%d/%n/sieve
sieve_global_path = /mail/default.sieve
mail_home = /mail/%d/%n
}

I know I also have to update my configs, as those settings are deprecated.

After that, reload your dovecot config and also verify your managesieve_default setting in /usr/local/www/roundcube/plugins/managesieve/config.inc.php:

$config['managesieve_default'] = '/var/mail/vhosts/default.sieve';
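
While you are in that file, it may also be worth confirming the connection settings. The option names below exist in the stock managesieve plugin config; the values are just what I would expect for a local dovecot listener:

// where the plugin connects for ManageSieve; must match dovecot's listener on port 4190
$config['managesieve_host'] = 'localhost';
$config['managesieve_port'] = 4190;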

Note:
I'm using /mail as it's mounted on a different RAID volume.
 
Hi,

my /usr/local/etc/dovecot/conf.d/90-plugin.conf now looks like this:

Code:
plugin {
  sieve = /var/mail/vhosts/%d/%n/.dovecot.sieve
  sieve_dir = /var/mail/vhosts/%d/%n/sieve
  sieve_global = /var/mail/vhosts/default.sieve
  mail_home = /var/mail/vhosts/%d/%n
}

But if you compare it with the settings you recommended first, you will see that the newly set sieve_dir = now contains the path that sieve = had in the former 90-plugin.conf.

Former:
Code:
plugin {
  -e = /var/mail/vhosts/%d/%n/.dovecot.sieve
  sieve = /var/mail/vhosts/%d/%n/sieve
  sieve_global = /var/mail/vhosts/default.sieve
  mail_home = /var/mail/vhosts/%d/%n
}


Additionally I added the following in managesieve/config.inc.php:
$config['managesieve_default'] = '/var/mail/vhosts/default.sieve';
and restarted dovecot.

But the error obviously is very "persistent", because I still get the same error messages when trying to set up a filter in Roundcube.

For example roundcube/logs/errors-log shows:
PHP Error: Unable to connect to managesieve on localhost:4190

Kind regards and thanks for your patience
sidney2017
 
Are you getting the same error?
"sieve: file storage: Sieve storage path `~/sieve' is relative to home directory, but home directory is not available."

Check the actual config with
doveconf -a
and see if your mail_home is set under the plugin section.
 
Hi,

yes, the error message still is:

Code:
Oct 05 16:47:04 managesieve-login: Info: Login: user=<user@domain.tld>, method=PLAIN, rip=127.0.0.1, lip=127.0.0.1, mpid=61522, secured, session=<cVtHju2wkbF/AAAB>
Oct 05 16:47:04 managesieve(user@domain.tld)<61522><cVtHju2wkbF/AAAB>: Debug: Added userdb setting: plugin/=yes
Oct 05 16:47:04 managesieve(user@domain.tld)<61522><cVtHju2wkbF/AAAB>: Debug: Effective uid=1001, gid=1001, home=
Oct 05 16:47:04 managesieve(user@domain.tld)<61522><cVtHju2wkbF/AAAB>: Debug: Namespace inbox: type=private, prefix=, sep=, inbox=yes, hidden=no, list=yes, subscriptions=yes location=maildir:/var/mail/vhosts/domain.tld/user
Oct 05 16:47:04 managesieve(user@domain.tld)<61522><cVtHju2wkbF/AAAB>: Debug: maildir++: root=/var/mail/vhosts/domain.tld/user, index=, indexpvt=, control=, inbox=/var/mail/vhosts/domain.tld/user, alt=
Oct 05 16:47:04 managesieve(user@domain.tld)<61522><cVtHju2wkbF/AAAB>: Debug: sieve: Pigeonhole version 0.5.11 (d71e0372) initializing
Oct 05 16:47:04 managesieve(user@domain.tld)<61522><cVtHju2wkbF/AAAB>: Error: sieve: file storage: Sieve storage path `~/sieve' is relative to home directory, but home directory is not available.
Oct 05 16:47:04 managesieve(user@domain.tld)<61522><cVtHju2wkbF/AAAB>: Fatal: Failed to open Sieve storage.


doveconf -a

...
passdb {
args = scheme=CRYPT username_format=%u /usr/local/etc/dovecot/users
auth_verbose = default
default_fields =
deny = no
driver = passwd-file
master = no
mechanisms =
name =
override_fields =
pass = no
result_failure = continue
result_internalfail = continue
result_success = return-ok
skip = never
username_filter =
}
plugin {
mail_home = /var/mail/vhosts/%d/%n
sieve = file:~/sieve;active=~/.dovecot.sieve
sieve_dir = /var/mail/vhosts/%d/%n/sieve
sieve_global = /var/mail/vhosts/default.sieve
}

pop3_client_workarounds =
...

Could the problem derive from the entry
sieve = file:~/sieve;active=~/.dovecot.sieve
which should be
sieve = /var/mail/vhosts/%d/%n/.dovecot.sieve
as defined in 90-plugin.conf?


EDIT #1:
Obviously sieve = defined in 90-plugin.conf is overridden by
sieve = in 90-sieve.conf.

90-sieve.conf
plugin {
# The location of the user's main Sieve script or script storage. The LDA
# Sieve plugin uses this to find the active script for Sieve filtering at
# delivery. The "include" extension uses this location for retrieving
# :personal" scripts. This is also where the ManageSieve service will store
# the user's scripts, if supported.
#
# Currently only the 'file:' location type supports ManageSieve operation.
# Other location types like 'dict:' and 'ldap:' can currently only
# be used as a read-only script source ().
#
# For the 'file:' type: use the ';active=' parameter to specify where the
# active script symlink is located.
# For other types: use the ';name=' parameter to specify the name of the
# default/active script.
#sieve = file:~/sieve;active=~/.dovecot.sieve

I am just testing it after having commented out sieve = in 90-sieve.conf.

EDIT #2:

Indeed, sieve = in 90-plugin.conf was overridden.
Commenting out sieve = in 90-sieve.conf
removed the error message.

However, the absence filters unfortunately still do not trigger. The sender does not get an autoreply.


Thanks again for your assistance
sidney2017
 
try this:

plugin {
mail_home = /var/mail/vhosts/%d/%n
sieve = file:/var/mail/vhosts/%d/%n/sieve;active=/var/mail/vhosts/%d/%n/.dovecot.sieve
sieve_global = /var/mail/vhosts/default.sieve
}

Also check the user folder: there must be a folder called sieve and a link pointing to the active script in that folder. This is created by dovecot, so it does not need to be created manually.

Do you have an (empty) file /var/mail/vhosts/default.sieve?
 
Now you removed sieve_dir = from 90-plugin.conf, correct?

NO, I cannot find a file named default.sieve under /var/mail/vhosts

In my user folder
/var/mail/vhosts/domain.tld/user
there was a .dovecot.sieve symlink pointing to sieve/roundcube.sieve.

For testing I have deleted /var/mail/vhosts/domain.tld/user/.dovecot.sieve and
/var/mail/vhosts/domain.tld/user/sieve/

Afterwards I restarted dovecot, but neither .dovecot.sieve nor sieve/ is recreated.

doveconf -a | grep -i sieve
managesieve_client_workarounds =
managesieve_implementation_string = Dovecot Pigeonhole
managesieve_logout_format = bytes=%i/%o
managesieve_max_compile_errors = 5
managesieve_max_line_length = 64 k
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date index ihave duplicate mime foreverypart extracttext
sieve = file:/var/mail/vhosts/%d/%n/sieve;active=/var/mail/vhosts/%d/%n/.dovecot.sieve
sieve_global = /var/mail/vhosts/default.sieve
protocols = imap pop3 lmtp sieve
service managesieve-login {
executable = managesieve-login
inet_listener sieve {
protocol = sieve
service managesieve {
executable = managesieve
protocol = sieve
unix_listener login/sieve {
mail_plugins = " sieve"

There is a leading space in " sieve"!
Is this normal?

Actually I wanted to move away from sendmail to postfix and dovecot, because these components are supposedly a bit easier to handle. But that is probably more of a rumor. :rolleyes:

Greetings
sidney2017
 
Meanwhile I reinserted "sieve_dir = ..." in 90-plugin.conf
(I hope that sieve_dir = /var/mail/vhosts/%d/%n/sieve is correct):

Code:
plugin {
mail_home = /var/mail/vhosts/%d/%n
sieve_dir = /var/mail/vhosts/%d/%n/sieve
sieve = file:/var/mail/vhosts/%d/%n/sieve;active=/var/mail/vhosts/%d/%n/.dovecot.sieve
sieve_global = /var/mail/vhosts/default.sieve
}

Now a new ".dovecot.sieve" symlink has been created
=> /var/mail/vhosts/domain.tld/user/sieve/managesieve.sieve

What surprises me: The old symbolic link ".dovecot.sieve" I renamed had pointed to
=> /var/mail/vhosts/domain.tld/user/sieve/roundcube.sieve

So I wonder which is correct: managesieve.sieve or roundcube.sieve?

/var/mail/vhosts/default.sieve still has NOT been created, although
sieve_global = /var/mail/vhosts/default.sieve is defined in 90-plugin.conf.

And doveconf -n | grep -i sieve prints out:
sieve_global = /var/mail/vhosts/default.sieve

I can send and receive emails and set up filters in Roundcube, but those filters unfortunately still are not triggered.

Kind regards
sidney2017
 
You can have different .sieve scripts inside your sieve/ directory. I'm using managesieve.sieve to store the out-of-office script. The symlink created by dovecot points to the active .sieve script.
Do you have the sieve plugin activated in your lda and lmtp protocol configs?

mail_plugins = $mail_plugins sieve
 
Hi,

here is the content of dovecot/conf.d/15-lda.conf:
Code:
protocol lda {
  # Space separated list of plugins to load (default is global mail_plugins).
  mail_plugins = $mail_plugins sieve
}

20-lmtp.conf:
Code:
protocol lmtp {
  # Space separated list of plugins to load (default is global mail_plugins).
  #mail_plugins = $mail_plugins
  mail_plugins = $mail_plugins sieve
}

doveconf -a
# 2.3.11.3 (502c39af9): /usr/local/etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.11 (d71e0372)
...
mail_home =
mail_index_log2_max_age = 2 days
mail_index_log_rotate_max_size = 1 M
mail_index_log_rotate_min_age = 5 mins
mail_index_log_rotate_min_size = 32 k
mail_index_rewrite_max_log_bytes = 128 k
mail_index_rewrite_min_log_bytes = 8 k
mail_location = maildir:/var/mail/vhosts/%d/%n
mail_log_prefix = "%s(%u)<%{pid}><%{session}>: "
mail_max_keyword_length = 50
mail_max_lock_timeout = 0
mail_max_userip_connections = 10
mail_never_cache_fields = imap.envelope
mail_nfs_index = no
mail_nfs_storage = no
mail_plugin_dir = /usr/local/lib/dovecot
mail_plugins =
mail_prefetch_count = 0
mail_privileged_group = vpostfix
mail_save_crlf = no
mail_server_admin =
mail_server_comment =
mail_shared_explicit_inbox = no
mail_sort_max_read_count = 0
mail_temp_dir = /tmp
mail_temp_scan_interval = 1 weeks
mail_uid = 1001
mail_vsize_bg_after_count = 0
mailbox_idle_check_interval = 30 secs
mailbox_list_index = yes
mailbox_list_index_include_inbox = no
mailbox_list_index_very_dirty_syncs = no
maildir_broken_filename_sizes = no
maildir_copy_with_hardlinks = yes
maildir_empty_new = no
maildir_stat_dirs = yes
maildir_very_dirty_syncs = no
managesieve_client_workarounds =
managesieve_implementation_string = Dovecot Pigeonhole
managesieve_logout_format = bytes=%i/%o
managesieve_max_compile_errors = 5
managesieve_max_line_length = 64 k
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date index ihave duplicate mime foreverypart extracttext
master_user_separator =
mbox_dirty_syncs = yes
mbox_dotlock_change_timeout = 2 mins
mbox_lazy_writes = yes
mbox_lock_timeout = 5 mins
mbox_md5 = apop3d
mbox_min_index_size = 0
mbox_read_locks = fcntl
mbox_very_dirty_syncs = no
mbox_write_locks = dotlock fcntl
mdbox_preallocate_space = no
mdbox_rotate_interval = 0
mdbox_rotate_size = 10 M
mmap_disable = no
namespace inbox {
disabled = no
hidden = no
ignore_on_failure = no
inbox = yes
list = yes
location =
mailbox Drafts {
auto = no
autoexpunge = 0
autoexpunge_max_mails = 0
comment =
driver =
special_use = \Drafts
}
mailbox Junk {
auto = no
autoexpunge = 0
autoexpunge_max_mails = 0
comment =
driver =
special_use = \Junk
}
mailbox Sent {
auto = no
autoexpunge = 0
autoexpunge_max_mails = 0
comment =
driver =
special_use = \Sent
}
mailbox "Sent Messages" {
auto = no
autoexpunge = 0
autoexpunge_max_mails = 0
comment =
driver =
special_use = \Sent
}
mailbox Trash {
auto = no
autoexpunge = 0
autoexpunge_max_mails = 0
comment =
driver =
special_use = \Trash
}
order = 0
prefix =
separator =
subscriptions = yes
type = private
}
old_stats_carbon_interval = 30 secs
old_stats_carbon_name =
old_stats_carbon_server =
old_stats_command_min_time = 1 mins
old_stats_domain_min_time = 12 hours
old_stats_ip_min_time = 12 hours
old_stats_memory_limit = 16 M
old_stats_session_min_time = 15 mins
old_stats_user_min_time = 1 hours
passdb {
args = scheme=CRYPT username_format=%u /usr/local/etc/dovecot/users
auth_verbose = default
default_fields =
deny = no
driver = passwd-file
master = no
mechanisms =
name =
override_fields =
pass = no
result_failure = continue
result_internalfail = continue
result_success = return-ok
skip = never
username_filter =
}
plugin {
mail_home = /var/mail/vhosts/%d/%n
sieve = file:/var/mail/vhosts/%d/%n/sieve;active=/var/mail/vhosts/%d/%n/.dovecot.sieve
sieve_dir = /var/mail/vhosts/%d/%n/sieve
sieve_global = /var/mail/vhosts/default.sieve
}
pop3_client_workarounds =
pop3_delete_type = default
pop3_deleted_flag =
pop3_enable_last = no
pop3_fast_size_lookups = no
pop3_lock_session = no
pop3_logout_format = top=%t/%p, retr=%r/%b, del=%d/%m, size=%s
pop3_no_flag_updates = no
pop3_reuse_xuidl = no
pop3_save_uidl = no
pop3_uidl_duplicates = allow
pop3_uidl_format = %08Xu%08Xv
pop3c_features =
pop3c_host =
pop3c_master_user =
pop3c_password =
pop3c_port = 110
pop3c_quick_received_date = no
pop3c_rawlog_dir =
pop3c_ssl = no
pop3c_ssl_verify = yes
pop3c_user = %u
postmaster_address = postmaster@%{if;%d;ne;;%d;%{hostname}}
protocols = imap pop3 lmtp sieve sieve
quota_full_tempfail = no
rawlog_dir =
recipient_delimiter = +
rejection_reason = Your message to <%t> was automatically rejected:%n%r
rejection_subject = Rejected: %s
replication_dsync_parameters = -d -N -l 30 -U
replication_full_sync_interval = 1 days
replication_max_conns = 10
replicator_host = replicator
replicator_port = 0
sendmail_path = /usr/sbin/sendmail
service aggregator {
chroot = .
client_limit = 0
drop_priv_before_exec = no
executable = aggregator
extra_groups =
fifo_listener replication-notify-fifo {
group =
mode = 0600
user =
}
group =
idle_kill = 0
privileged_group =
process_limit = 0
process_min_avail = 0
protocol =
service_count = 0
type =
unix_listener replication-notify {
group =
mode = 0600
user =
}
user = $default_internal_user
vsz_limit = 18446744073709551615 B
}
service anvil {
chroot = empty
client_limit = 0
drop_priv_before_exec = no
executable = anvil
extra_groups =
group =
idle_kill = 4294967295 secs
privileged_group =
process_limit = 1
process_min_avail = 1
protocol =
service_count = 0
type = anvil
unix_listener anvil-auth-penalty {
group =
mode = 0600
user =
}
unix_listener anvil {
group =
mode = 0600
user =
}
user = $default_internal_user
vsz_limit = 18446744073709551615 B
}
service auth-worker {
chroot =
client_limit = 1
drop_priv_before_exec = no
executable = auth -w
extra_groups =
group =
idle_kill = 0
privileged_group =
process_limit = 0
process_min_avail = 0
protocol =
service_count = 1
type =
unix_listener auth-worker {
group =
mode = 0600
user = $default_internal_user
}
user =
vsz_limit = 18446744073709551615 B
}
service auth {
chroot =
client_limit = 0
drop_priv_before_exec = no
executable = auth
extra_groups =
group =
idle_kill = 0
privileged_group =
process_limit = 1
process_min_avail = 0
protocol =
service_count = 0
type =
unix_listener /var/spool/postfix/private/auth {
group = postfix
mode = 0666
user = postfix
}
unix_listener auth-client {
group =
mode = 0600
user = $default_internal_user
}
unix_listener auth-login {
group =
mode = 0600
user = $default_internal_user
}
unix_listener auth-master {
group =
mode = 0600
user =
}
unix_listener auth-userdb {
group = postfix
mode = 0600
user = postfix
}
unix_listener login/login {
group =
mode = 0666
user =
}
unix_listener token-login/tokenlogin {
group =
mode = 0666
user =
}
user = $default_internal_user
vsz_limit = 18446744073709551615 B
}
service config {
chroot =
client_limit = 0
drop_priv_before_exec = no
executable = config
extra_groups =
group =
idle_kill = 4294967295 secs
privileged_group =
process_limit = 0
process_min_avail = 0
protocol =
service_count = 0
type = config
unix_listener config {
group =
mode = 0600
user =
}
user =
vsz_limit = 18446744073709551615 B
}
service dict-async {
chroot =
client_limit = 0
drop_priv_before_exec = no
executable = dict
extra_groups =
group =
idle_kill = 0
privileged_group =
process_limit = 0
process_min_avail = 0
protocol =
service_count = 0
type =
unix_listener dict-async {
group = $default_internal_group
mode = 0660
user =
}
user = $default_internal_user
vsz_limit = 18446744073709551615 B
}
service dict {
chroot =
client_limit = 1
drop_priv_before_exec = no
executable = dict
extra_groups =
group =
idle_kill = 0
privileged_group =
process_limit = 0
process_min_avail = 0
protocol =
service_count = 0
type =
unix_listener dict {
group = $default_internal_group
mode = 0660
user =
}
user = $default_internal_user
vsz_limit = 18446744073709551615 B
}
service director {
chroot = .
client_limit = 0
drop_priv_before_exec = no
executable = director
extra_groups =
fifo_listener login/proxy-notify {
group =
mode = 00
user =
}
group =
idle_kill = 4294967295 secs
inet_listener {
address =
haproxy = no
port = 0
reuse_port = no
ssl = no
}
privileged_group =
process_limit = 1
process_min_avail = 0
protocol =
service_count = 0
type =
unix_listener director-admin {
group =
mode = 0600
user =
}
unix_listener director-userdb {
group =
mode = 0600
user =
}
unix_listener login/director {
group =
mode = 00
user =
}
user = $default_internal_user
vsz_limit = 18446744073709551615 B
}
service dns-client {
chroot =
client_limit = 1
drop_priv_before_exec = no
executable = dns-client
extra_groups =
group =
idle_kill = 0
privileged_group =
process_limit = 0
process_min_avail = 0
protocol =
service_count = 0
type =
unix_listener dns-client {
group =
mode = 0666
user =
}
unix_listener login/dns-client {
group =
mode = 0666
user =
}
user = $default_internal_user
vsz_limit = 18446744073709551615 B
}
service doveadm {
chroot =
client_limit = 1
drop_priv_before_exec = no
executable = doveadm-server
extra_groups = $default_internal_group
group =
idle_kill = 0
privileged_group =
process_limit = 0
process_min_avail = 0
protocol =
service_count = 1
type =
unix_listener doveadm-server {
group =
mode = 0600
user =
}
user =
vsz_limit = 18446744073709551615 B
}
service health-check {
chroot =
client_limit = 1
drop_priv_before_exec = yes
executable = script -p health-check.sh
extra_groups =
group =
idle_kill = 0
privileged_group =
process_limit = 0
process_min_avail = 0
protocol =
service_count = 0
type =
user = $default_internal_user
vsz_limit = 18446744073709551615 B
}
service imap-hibernate {
chroot =
client_limit = 0
drop_priv_before_exec = no
executable = imap-hibernate
extra_groups =
group =
idle_kill = 0
privileged_group =
process_limit = 0
process_min_avail = 0
protocol = imap
service_count = 0
type =
unix_listener imap-hibernate {
group = $default_internal_group
mode = 0660
user =
}
user = $default_internal_user
vsz_limit = 18446744073709551615 B
}
service imap-login {
chroot = login
client_limit = 0
drop_priv_before_exec = no
executable = imap-login
extra_groups =
group =
idle_kill = 0
inet_listener imap {
address =
haproxy = no
port = 143
reuse_port = no
ssl = no
}
inet_listener imaps {
address =
haproxy = no
port = 993
reuse_port = no
ssl = yes
}
privileged_group =
process_limit = 0
process_min_avail = 0
protocol = imap
service_count = 1
type = login
user = $default_login_user
vsz_limit = 18446744073709551615 B
}
service imap-urlauth-login {
chroot = token-login
client_limit = 0
drop_priv_before_exec = no
executable = imap-urlauth-login
extra_groups =
group =
idle_kill = 0
privileged_group =
process_limit = 0
process_min_avail = 0
protocol = imap
service_count = 1
type = login
unix_listener imap-urlauth {
group =
mode = 0666
user =
}
user = $default_login_user
vsz_limit = 18446744073709551615 B
}
service imap-urlauth-worker {
chroot =
client_limit = 1
drop_priv_before_exec = no
executable = imap-urlauth-worker
extra_groups = $default_internal_group
group =
idle_kill = 0
privileged_group =
process_limit = 1024
process_min_avail = 0
protocol = imap
service_count = 1
type =
unix_listener imap-urlauth-worker {
group =
mode = 0600
user = $default_internal_user
}
user =
vsz_limit = 18446744073709551615 B
}
service imap-urlauth {
chroot =
client_limit = 1
drop_priv_before_exec = no
executable = imap-urlauth
extra_groups =
group =
idle_kill = 0
privileged_group =
process_limit = 1024
process_min_avail = 0
protocol = imap
service_count = 1
type =
unix_listener token-login/imap-urlauth {
group =
mode = 0666
user =
}
user = $default_internal_user
vsz_limit = 18446744073709551615 B
}
service imap {
chroot =
client_limit = 1
drop_priv_before_exec = no
executable = imap
extra_groups = $default_internal_group
group =
idle_kill = 0
privileged_group =
process_limit = 1024
process_min_avail = 0
protocol = imap
service_count = 1
type =
unix_listener imap-master {
group =
mode = 0600
user =
}
unix_listener login/imap {
group =
mode = 0666
user =
}
user =
vsz_limit = 18446744073709551615 B
}
service indexer-worker {
chroot =
client_limit = 1
drop_priv_before_exec = no
executable = indexer-worker
extra_groups = $default_internal_group
group =
idle_kill = 0
privileged_group =
process_limit = 10
process_min_avail = 0
protocol =
service_count = 0
type =
unix_listener indexer-worker {
group =
mode = 0600
user = $default_internal_user
}
user =
vsz_limit = 18446744073709551615 B
}
service indexer {
chroot =
client_limit = 0
drop_priv_before_exec = no
executable = indexer
extra_groups =
group =
idle_kill = 0
privileged_group =
process_limit = 1
process_min_avail = 0
protocol =
service_count = 0
type =
unix_listener indexer {
group =
mode = 0666
user =
}
user = $default_internal_user
vsz_limit = 18446744073709551615 B
}
service ipc {
chroot = empty
client_limit = 0
drop_priv_before_exec = no
executable = ipc
extra_groups =
group =
idle_kill = 0
privileged_group =
process_limit = 1
process_min_avail = 0
protocol =
service_count = 0
type =
unix_listener ipc {
group =
mode = 0600
user = $default_internal_user
}
unix_listener login/ipc-proxy {
group =
mode = 0600
user = $default_login_user
}
user = $default_internal_user
vsz_limit = 18446744073709551615 B
}
service lmtp {
chroot =
client_limit = 1
drop_priv_before_exec = no
executable = lmtp
extra_groups = $default_internal_group
group =
idle_kill = 0
privileged_group =
process_limit = 0
process_min_avail = 0
protocol = lmtp
service_count = 0
type =
unix_listener lmtp {
group =
mode = 0666
user =
}
user =
vsz_limit = 18446744073709551615 B
}
service log {
chroot =
client_limit = 0
drop_priv_before_exec = no
executable = log
extra_groups =
group =
idle_kill = 4294967295 secs
privileged_group =
process_limit = 1
process_min_avail = 0
protocol =
service_count = 0
type = log
unix_listener log-errors {
group =
mode = 0600
user =
}
user =
vsz_limit = 18446744073709551615 B
}
service managesieve-login {
chroot = login
client_limit = 0
drop_priv_before_exec = no
executable = managesieve-login
extra_groups =
group =
idle_kill = 0
inet_listener sieve {
address =
haproxy = no
port = 4190
reuse_port = no
ssl = no
}
privileged_group =
process_limit = 0
process_min_avail = 0
protocol = sieve
service_count = 1
type = login
user = $default_login_user
vsz_limit = 18446744073709551615 B
}
service managesieve {
chroot =
client_limit = 1
drop_priv_before_exec = no
executable = managesieve
extra_groups =
group =
idle_kill = 0
privileged_group =
process_limit = 0
process_min_avail = 0
protocol = sieve
service_count = 1
type =
unix_listener login/sieve {
group =
mode = 0666
user =
}
user =
vsz_limit = 18446744073709551615 B
}
service old-stats {
chroot = empty
client_limit = 0
drop_priv_before_exec = no
executable = old-stats
extra_groups =
fifo_listener old-stats-mail {
group =
mode = 0600
user =
}
fifo_listener old-stats-user {
group =
mode = 0600
user =
}
group =
idle_kill = 4294967295 secs
privileged_group =
process_limit = 1
process_min_avail = 0
protocol =
service_count = 0
type =
unix_listener old-stats {
group =
mode = 0600
user =
}
user = $default_internal_user
vsz_limit = 18446744073709551615 B
}
service pop3-login {
chroot = login
client_limit = 0
drop_priv_before_exec = no
executable = pop3-login
extra_groups =
group =
idle_kill = 0
inet_listener pop3 {
address =
haproxy = no
port = 110
reuse_port = no
ssl = no
}
inet_listener pop3s {
address =
haproxy = no
port = 995
reuse_port = no
ssl = yes
}
privileged_group =
process_limit = 0
process_min_avail = 0
protocol = pop3
service_count = 1
type = login
user = $default_login_user
vsz_limit = 18446744073709551615 B
}
service pop3 {
chroot =
client_limit = 1
drop_priv_before_exec = no
executable = pop3
extra_groups = $default_internal_group
group =
idle_kill = 0
privileged_group =
process_limit = 1024
process_min_avail = 0
protocol = pop3
service_count = 1
type =
unix_listener login/pop3 {
group =
mode = 0666
user =
}
user =
vsz_limit = 18446744073709551615 B
}
service replicator {
chroot =
client_limit = 0
drop_priv_before_exec = no
executable = replicator
extra_groups =
group =
idle_kill = 4294967295 secs
privileged_group =
process_limit = 1
process_min_avail = 0
protocol =
service_count = 0
type =
unix_listener replicator-doveadm {
group =
mode = 00
user = $default_internal_user
}
unix_listener replicator {
group =
mode = 0600
user = $default_internal_user
}
user =
vsz_limit = 18446744073709551615 B
}
service stats {
chroot =
client_limit = 0
drop_priv_before_exec = no
executable = stats
extra_groups =
group =
idle_kill = 4294967295 secs
privileged_group =
process_limit = 1
process_min_avail = 0
protocol =
service_count = 0
type =
unix_listener stats-reader {
group =
mode = 0600
user =
}
unix_listener stats-writer {
group = $default_internal_group
mode = 0660
user =
}
user = $default_internal_user
vsz_limit = 18446744073709551615 B
}
service submission-login {
chroot = login
client_limit = 0
drop_priv_before_exec = no
executable = submission-login
extra_groups =
group =
idle_kill = 0
inet_listener submission {
address =
haproxy = no
port = 587
reuse_port = no
ssl = no
}
privileged_group =
process_limit = 0
process_min_avail = 0
protocol = submission
service_count = 1
type = login
user = $default_login_user
vsz_limit = 18446744073709551615 B
}
service submission {
chroot =
client_limit = 1
drop_priv_before_exec = no
executable = submission
extra_groups = $default_internal_group
group =
idle_kill = 0
privileged_group =
process_limit = 1024
process_min_avail = 0
protocol = submission
service_count = 1
type =
unix_listener login/submission {
group =
mode = 0666
user =
}
user =
vsz_limit = 18446744073709551615 B
}
service tcpwrap {
chroot =
client_limit = 1
drop_priv_before_exec = no
executable = tcpwrap
extra_groups =
group =
idle_kill = 0
privileged_group =
process_limit = 0
process_min_avail = 0
protocol =
service_count = 0
type =
user = $default_internal_user
vsz_limit = 18446744073709551615 B
}
shutdown_clients = yes
ssl = yes
ssl_alt_cert =
ssl_alt_key =
ssl_ca =
ssl_cert = </usr/local/etc/letsencrypt/live/mail.domain.tld/fullchain.pem
ssl_cert_username_field = commonName
ssl_cipher_list = ALL:!kRSA:!SRP:!kDHd:!DSS:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK:!RC4:!ADH:!LOW@STRENGTH
ssl_client_ca_dir =
ssl_client_ca_file =
ssl_client_cert =
ssl_client_key =
ssl_client_require_valid_cert = yes
ssl_crypto_device =
ssl_curve_list =
ssl_dh =
ssl_key = # hidden, use -P to show it
ssl_key_password =
ssl_min_protocol = TLSv1
ssl_options =
ssl_prefer_server_ciphers = no
ssl_require_crl = yes
ssl_verify_client_cert = no
state_dir = /var/db/dovecot
stats_http_rawlog_dir =
stats_writer_socket_path = stats-writer
submission_client_workarounds =
submission_host =
submission_logout_format = in=%i out=%o
submission_max_mail_size = 0
submission_max_recipients = 0
submission_relay_command_timeout = 5 mins
submission_relay_connect_timeout = 30 secs
submission_relay_host =
submission_relay_master_user =
submission_relay_max_idle_time = 29 mins
submission_relay_password =
submission_relay_port = 25
submission_relay_rawlog_dir =
submission_relay_ssl = no
submission_relay_ssl_verify = yes
submission_relay_trusted = no
submission_relay_user =
submission_ssl = no
submission_timeout = 30 secs
syslog_facility = mail
userdb {
args = username_format=%u /usr/local/etc/dovecot/users
auth_verbose = default
default_fields =
driver = passwd-file
name =
override_fields =
result_failure = continue
result_internalfail = continue
result_success = return-ok
skip = never
}
valid_chroot_dirs =
verbose_proctitle = yes
verbose_ssl = yes
version_ignore = no
protocol lmtp {
mail_plugins = " sieve"
}
protocol lda {
mail_plugins = " sieve"
}

Best regards
Sidney2017
 
No error message!

But the out-of-office message and/or other rules defined in Roundcube are not triggered. They are, however, saved on the storage.

Kind regards
sidney2017
 
Check if managesieve.sieve is your default script file and if the link .dovecot.sieve is pointing to it.
You can also use sieve-test to test the script.
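
For example (the paths and the message file here are placeholders, adjust to your setup):
Code:
# shows which actions the script would perform for the given raw message
sieve-test /var/mail/vhosts/domain.tld/user/.dovecot.sieve /tmp/testmail.eml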
 
Are you sure that
sieve_global = /var/mail/vhosts/default.sieve is correct?

Shouldn't it be
sieve_default = /var/mail/vhosts/default.sieve instead?

Anyway: even a
sieve_default = /var/mail/vhosts/default.sieve does not create a default.sieve under /var/mail/vhosts.

I removed the old directory contents and started postfix and dovecot again.
The structures have been recreated:

/var/mail/vhosts/domain.tld/user/.dovecot.sieve => /var/mail/vhosts/domain.tld/user/sieve/managesieve.sieve

But still the filters are not triggered! :-(

Kind regards
sidney2017
 
sieve_global_path = (< v0.2)
The deprecated name for the sieve_default setting.

default.sieve must be created by you.
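
For example, a minimal default.sieve could look like this (the content is only an illustration):
Code:
# /var/mail/vhosts/default.sieve - used only when a user has no personal script
require ["fileinto"];

if header :contains "X-Spam-Flag" "YES" {
    fileinto "Junk";
}

Since it is effectively a global script, pre-compile it with sievec /var/mail/vhosts/default.sieve so the sieve interpreter can use it without write access.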
 
sieve_default = (v0.3+)

The location of the default personal sieve script file which gets executed ONLY if user's private Sieve script does not exist, e.g. file:/var/lib/dovecot/default.sieve (check the multiscript section for instructions on running global Sieve scripts before and after the user's personal script). This is usually a global script, so be sure to pre-compile the specified script manually in that case using the sievec command line tool, as explained here. This setting used to be called sieve_global_path, but that name is now deprecated.

Source: https://wiki2.dovecot.org/Pigeonhole/Sieve/Configuration

So I wonder why you used sieve_global above?

Kind Regards
sidney2017
 
It must be sieve_global_path or sieve_default

sieve_global_dir = (< v0.3.1)
Directory for :global include scripts for the include extension. The Sieve interpreter only recognizes files that end with a .sieve extension, so the include extension expects a file called name.sieve to exist in the sieve_global_dir directory for a script called name. For recent Pigeonhole versions, a more generic version of this setting is called sieve_global and allows locations other than file system directories.

This is my plugin configuration, which works:

Code:
plugin {
  expire = Trash
  mail_home = /mail/%d/%n
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size
  quota = maildir:User quota
  quota_exceeded_message = Storage quota for this account has been exceeded, please try again later.
  quota_rule = *:storage=1G
  quota_rule2 = Trash:storage=+30%%
  quota_rule3 = Sent:storage=+30%%
  quota_warning = storage=90%% quota-warning 90 %u
  quota_warning2 = storage=75%% quota-warning 75 %u
  sieve = /mail/%d/%n/.dovecot.sieve
  sieve_dir = /mail/%d/%n/sieve
  sieve_global_path = /mail/default.sieve
}
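
After changing the plugin block, restart dovecot and check what it actually sees, as done earlier in this thread:
Code:
service dovecot restart
doveconf -a | grep -i sieve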
 