ContentTools and ContentCGI

obsigna

For many years I was running a BLog using a modified WordPress, which I kept up to date by way of the SVN vendor-branches method. For various reasons I got tired of WordPress, and I began to search for a substitute. My BLog needs nothing special. Because of the GDPR, I had already turned off the commenting facility of WordPress, and from the beginning I preferred not to use the various WP media tools, but to copy any media directly into the respective directory on the web server. Actually, my BLog is a kind of Web Diary, I want it that way, and I don't need/want all the bells and whistles.

In my search, I stumbled across Anthony Blackshaw's ContentTools. Exploring the demo page, I was immediately amazed by the elegance and simplicity of editing the actual website in a WYSIWYG manner, instead of having to enter a special editor on the admin page of WP to manage my content. So, I wanted the ContentTools for my new BLog.

However, the ContentTools form only the frontend, i.e. the JavaScript programs which run in the user's browser. The backend, i.e. the server-side storage engine, needs to be provided by separate means.

A month ago, I started working on an extensible FastCGI daemon written in C and Objective-C for FreeBSD and macOS. I named it ContentCGI and made it available on GitHub under a BSD license. It is now ready for prime time. My new BLog is up and running, and I have already moved some old articles from the old WP system to the new one. The latest article is exactly about how to employ the ContentTools/ContentCGI combo for a Web Diary style of BLog – see: The new Obsigna BLog authored with the ContentTools backed by the ContentCGI

On GitHub, as well as in said BLog article, you'll find installation instructions for FreeBSD (out of the box, this won't run on Linux; they would need to do a port :-D).

In case you need only a minor subset of what CMS powerhouses like WordPress or Drupal offer, you might want to have a look. If you need a comment area, media management, RSS feeds, etc., then please ignore the ContentCGI.
 
Do you self-host your blog? If so, what would you consider the minimum hardware requirements?
I have a DMZ set up off my pfSense box that I have been thinking of facing the cruel world with.
Would an ARM board cut it?
 
Yes, it is self-hosted on my home server, an Atom board from 2010 with 2 GByte of RAM.
Intel(R) Atom(TM) CPU D510 @ 1.66GHz (1666.73-MHz K8-class CPU)

ContentCGI should build on ARM, and it uses far fewer resources than PHP+MySQL. For my old WordPress BLog, I needed to limit MySQL to MyISAM (turn off InnoDB and the Performance Schema) in order to have the system run smoothly. I never tested the Apache web server on my ARMv7 BBB; perhaps 512 MByte is a little low for it. If Apache does not run on the BBB, then at least the RPi3 should have sufficient resources. So yes, chances are that this would run on ARM.

The top output:
Code:
  PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
29607 root          1  20    0 20164K  3524K CPU3    3   0:00   0.21% top
...
 9090 root          1  20    0   259M 15308K select  0   0:08   0.01% httpd
...
20365 www          27  26    0   310M 25864K piperd  3   0:02   0.00% httpd
20364 www          27  27    0   316M 26316K piperd  0   0:02   0.00% httpd
20366 www          27  27    0   316M 26488K piperd  0   0:02   0.00% httpd
 1790 root          1  20    0 12596K  2452K nanslp  0   0:02   0.00% cron
20381 www          27  20    0   304M 25292K piperd  3   0:02   0.00% httpd
...
25707 www           5  27    0 71336K 18444K accept  1   0:01   0.00% ContentCGI
...
 
Interesting. Personally, I'm going with static web generators like Pelican and Ivy. It was a bit frustrating to have to learn the quirks of Markdown (or rather various implementations of it) and I haven't added much media to my Pelican and Ivy sites yet.
 
ContentTools/ContentCGI do generate static web pages. The visitor sees only static HTML pages, except when a search is requested, in which case ContentCGI looks up the search query in the previously generated Zettair index and generates an HTML page presenting the results. Otherwise, ContentTools/ContentCGI is invoked only by logged-in authors for editing/creating/deleting pages.

Does Pelican or Ivy allow WYSIWYG editing directly on your web page in your browser?
 
No, they do not, AFAIK - you have to write/edit the pages in Markdown (or another markup language; there is plugin support for several) in a text editor, then run Pelican or Ivy to generate the output.
If you want browser editing, you'll have to look elsewhere.
 

I don't need to look elsewhere anymore; I got ContentTools/ContentCGI up and running, which do exactly that: WYSIWYG editing in the browser, producing static web pages.
 
Thanks for the info. I am thinking about a self-hosted site, but have been a long-time WP.com user. The advantage of WP.com is that the free sites will last forever (or at least for a while) - after I fall into the drink and turn into seaweed.
 
ContentCGI does not start on FreeBSD 12.

There is a /var/run/ContentCGI.pid but no /tmp/ContentCGI.sock. The site loads, but https://SITE_URL/edit/ does not. It gets past the authentication, then throws "Service Unavailable".

/var/logs/http-error.log shows:
Code:
[proxy:error] [pid 47321] (2)No such file or directory: AH02454: FCGI: attempt to connect to Unix domain socket /tmp/ContentCGI.sock

I am running http@8088 and https@8443 for Apache. All modules are loaded. Any suggestions?
 
This looks like ContentCGI is not running.

Please verify this using pgrep -fl ContentCGI. Here this results in:
Code:
1112 /usr/local/bin/ContentCGI -c /root/certs -w /usr/local/www/Obsigna/webdocs -u www:www

In case it is not running, launch it manually on the command line using the options -u www:www and -c and -w as appropriate, and in addition -f, which keeps ContentCGI in the foreground, so you can see whether it crashes at some point and read any diagnostic output.

In case it crashes, please compile it again using the debug target of the make command, then run it in the foreground:
Code:
cd /path/to/ContentCGI
make clean debug
./ContentCGI -f -c ... -w ... -u www:www

Finally, please feed the core dump as described here into lldb(1), and please report back.
 
Thanks obsigna. I followed all of the above but no .core was created. When I tried to run it in the foreground, I see this error, where ContentCGI crashes:
Code:
ContentCGI 3273 - - Error creating the non-TLS IPv6 listening socket.

Indeed, it does not run, so there is no need to create a core and dump it. But why is a .pid created? If you could restrict it to IPv4, that would work. There is no manual to guide me in specifying an IP address.
 
I can see that the jail does not have an IPv6 address, and it could not inherit the one from the host. The jail's etc/rc.conf has in it:
ifconfig_vtnet0="DHCP"
ifconfig_vtnet0_ipv6="inet6 accept_rtadv"
 
I have resolved it. I needed to set the jail conf to inherit IPv4 and IPv6. In addition, I allowed socket_af and sysvipc.
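For reference, a minimal jail.conf sketch with these settings might look as follows; the jail name and paths are assumptions, not taken from my actual setup:

```
# /etc/jail.conf -- hypothetical example jail for ContentCGI
contentcgi {
    path = "/usr/local/jails/contentcgi";
    host.hostname = "contentcgi.example.com";

    # let the jail inherit the host's IPv4 and IPv6 addresses
    ip4 = inherit;
    ip6 = inherit;

    # permit all socket address families and SysV IPC inside the jail
    allow.socket_af;
    allow.sysvipc;

    exec.start = "/bin/sh /etc/rc";
    exec.stop  = "/bin/sh /etc/rc.shutdown";
}
```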
Thanks.
 
Yes, a man file is missing; however, all command-line options, together with a brief description, are available via the command ContentCGI -h:
Code:
usage: ContentCGI [-f] [-n] [-l local port] [-a local IPv4] [-b local IPv6] [-s secure port] [-4 IPv4] [-6 IPv6] [-c cert dir] [-r plugins] [-w web root] [-u uid:gid] [-p pid file] [-d unix domain socket] [-h|?]
-f             foreground mode, don't fork off as a daemon.
-n             no console, don't fork off as a daemon - started/managed by launchd(8) or daemon(8).
-l local port  listen on the non-TLS local host/net port number, port 0 means don't listen [default: 4000].
-a local IPv4  bind non-TLS ContentCGI to the given IPv4 address [default: 127.0.0.1].
-b local IPv6  bind non-TLS ContentCGI to the given IPv6 address [default: ::1].
-s secure port listen on the TLS secure remote port number, port 0 means don't listen [default: 5000].
-4 IPv4        bind TLS ContentCGI to the given IPv4 address [default: 0.0.0.0].
-6 IPv6        bind TLS ContentCGI to the given IPv6 address [default: ::].
-c cert dir    the path to the directory holding the certificate chain [default: ~/certdir].
-r plugins     the path to the async responder plugins directory [default: ~/plugins/ContentCGI].
-w web root    the path to the web root directory [default: ~/webroot].
-u uid:gid     switch to another user:group before launching the child.
-p pid file    the path to the pid file [default: /var/run/ContentCGI.pid]
-d unix socket the path to the unix domain socket on which ContentCGI is listening [default: /tmp/ContentCGI.sock].
?|-h           shows these usage instructions.

In case ContentCGI is running on the same machine as the web server, there is no need at all to listen on any network socket; the unix domain socket is completely sufficient and superior. Therefore, some time ago I added the feature to disable the network sockets: for this, add -l 0 and -s 0 to the ContentCGI_flags in /etc/rc.conf. As stated in the usage instructions, this tells ContentCGI not to listen on any network socket.
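For example, such an /etc/rc.conf entry with the network sockets disabled could look like the following; the web-root path is merely an assumption, adjust it to your installation:

```
ContentCGI_enable="YES"
ContentCGI_flags="-l 0 -s 0 -u www:www -w /usr/local/www/Example/webdocs"
```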

Thinking about it, for what would people need a man file, if I am around?

BTW: Today I found a bug in my fork of the Zettair search engine, and I already corrected it on GitHub. So you want to update your working copy from upstream and compile ContentCGI and its plugins once again.

In case you followed the instructions on GitHub, the following command, issued from inside your working copy, would do this:
Code:
./bsdinstall.sh update install clean
 

What are the URLs for Create and Delete in the above image? I keep looking for how to create and delete articles. Would I just drop a random .html file into the articles folder and expect its link to show up under the Search text box? I can edit the various contents in the links at the top left, but I cannot create "blogposts".

Alternatively, you can extend the paragraph containing this sentence:
Code:
It responds to edit, save, create, delete, revive and search requests. The search facility is backed by the Zettair search engine.
You could include how to perform the "CRUD" actions.

Lastly, will renaming the textfiles in the webdocs automatically rename the links at the top left? I am also wanting to add a few more.
 
Let’s assume the base (static) URL of the respective site is https://example.com/; then the URLs for the editing mode would be:

Enter Editing Mode
https://example.com/edit/
In general, the local links are composed in a way that keeps the editing mode when editors navigate within the site. This means we can enter the editing mode and navigate to the articles which we are going to edit. Saving is done by clicking the check mark in the opaque green circle. Editing can be canceled by clicking the cross in the opaque red circle.

Create a new Article
default language = en:
https://example.com/edit/articles/create
other languages, for example German = de:
https://example.com/edit/articles/create?lang=de
Portuguese = pt:
https://example.com/edit/articles/create?lang=pt
The language option is added to the lang attribute of the page's HTML tag. This lets modern browsers do a perfect language-based automatic hyphenation and spell checking of the text. The names of the articles are composed of the UNIX time stamp of creation and the extension .html. Only time-stamped files are screened for the automatic generation of the Index and the TOC pages. The latter appears at the right side on all articles.
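As a quick illustration (not part of ContentCGI itself), such a time-stamped article name can be composed in the shell like this:

```shell
# compose an article file name the way the article names are described:
# the UNIX time stamp of creation plus the extension ".html"
stamp=$(date +%s)          # seconds since the epoch, e.g. 1574097332
article="${stamp}.html"
echo "$article"
```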

Delete an Article
https://example.com/edit/articles/delete?article=1574097332.html
Here the article 1574097332.html would be deleted, and the index and toc files would be regenerated automatically. Removal takes effect immediately, without any question as to whether we are sure about our actions. However, the respective article is not deleted from the file system, but moved into /tmp. I have in my /etc/rc.conf:
Code:
...
clear_tmp_enable="YES"
...

Revival of the Index and TOC
https://example.com/edit/articles/revive
Usually, auto-indexing is initiated in the course of one of the above commands. Auto-indexing involves the removal of stale images as well. Sometimes we want to manipulate the webdocs directory directly, bypassing ContentCGI. Once the changes are done, the revive action updates the index and the TOC to reflect the actual state.
Searching is actually a non-editing command, since it can be called by occasional visitors. How this actually works is described here: https://obsigna.com/articles/1539035906.html

Searching and refreshing the search index
In case you followed the directions on my GitHub page, everything should be in place for providing the search facility to your visitors. ContentCGI does not directly update Zettair’s search index. Instead, it places a token into Zettair’s index directory once something on the site has been changed. The spider script which comes with ContentCGI is called by a cron job (I do it every minute) and starts reindexing only in case it finds the token; otherwise it quits immediately.
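For reference, the cron job could be a one-line /etc/crontab entry along these lines; the script path is an assumption, check your ContentCGI installation for the actual location of the spider script:

```
# run the Zettair spider script every minute; it re-indexes only
# when it finds the token, otherwise it exits immediately
*       *       *       *       *       root    /usr/local/bin/spider.sh
```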
While ContentCGI itself works with UTF-8 throughout, Zettair is quite an old system. The main work was done before UTF-8 became popular. So Zettair works well for languages whose characters may be encoded by one of the ISO-8859-x character sets. I write in English, German and Portuguese, so I use the Western European encoding ISO-8859-1 in the spider script and in the search-delegate plugin of ContentCGI, see:
Adaptation to another ISO encoding should be straightforward. Do a search/replace of ISO-8859-1 to one of the other ISO-8859 encodings in the whole source base of ContentCGI/Zettair, then re-compile everything. I fear Asian languages do not work well with Zettair.
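Such a tree-wide replacement could be sketched with a small portable shell helper like the one below; the function name and the target encoding ISO-8859-15 are merely illustrative:

```shell
# convert_charset DIR TARGET: replace the hard-coded ISO-8859-1
# charset by TARGET in every file under DIR that mentions it
convert_charset()
{
    dir="$1"; target="$2"
    grep -rl 'ISO-8859-1' "$dir" | while read -r f; do
        # avoid sed -i, whose syntax differs between BSD and GNU sed
        tmp="$f.tmp.$$"
        sed "s/ISO-8859-1/$target/g" "$f" > "$tmp" && mv "$tmp" "$f"
    done
}

# usage (path is an assumption):
# convert_charset /path/to/ContentCGI ISO-8859-15
```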

Lastly, will renaming the textfiles in the webdocs automatically rename the links at the top left? I am also wanting to add a few more.

Ideally, you would edit the article template file to your needs:

ContentCGI draws new articles from this template. The content section which can be edited is enclosed by the following HTML comment tags.
Code:
<!--e-->
...
...
<!--E-->

You need to keep these intact, otherwise editing won’t work, because ContentCGI won’t know where to inject the WYSIWYG editor’s JS code. The CSS can be heavily customized without restrictions, and the artwork (icon and logo) as well, of course.
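A stripped-down template honoring these markers might look like the following sketch; everything except the <!--e--> / <!--E--> comment pair is an assumption about your site's layout:

```html
<!DOCTYPE html>
<html lang="en">
<head>
   <meta charset="utf-8">
   <title>New Article</title>
   <link rel="stylesheet" href="/styles.css">
</head>
<body>
   <header><img src="/logo.png" alt="Site logo"></header>
   <main>
<!--e-->
   <h1>Title goes here</h1>
   <p>Editable content goes here.</p>
<!--E-->
   </main>
   <footer>&copy; Author Name</footer>
</body>
</html>
```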
 
Awesome. Could you please add the information to your blog? You could extend the blog post on ContentCGI/ContentTools or create a new blog post linked to the existing one.

Finally, add the information as a README page in your git repo.
 
  1. Could you show me how a token is generated? The Zettair spider only works when I run "touch token" in its directory. You may have a snippet that detects the changes in the webdocs and automatically writes the token. Do share it with me, please.
  2. I can see the siteroot and synchron folders in /var/db/zettair; however, a search on the website throws a 500 server error. Adding "/_search?tag=SEARCHTERM" to the SITE_URL throws the same error. What is missing here is maybe how to process the ARTICLES_FILENAMES.iso.html into search results. Of course, I can run "zet" in /var/db/zettair and search for whatever I want. In another sense, how is the search_delegate.{js,html,so,css} in "~/plugins/.." triangulated with the web files/server and the Zettair doc dir to produce search results like your blog does? There is a whole lot of information missing there. The vhost.conf may need to be extended to respond to the _search location path. There are just several ways to get it done. How have you done it?
  3. Are you aware that visiting SITE_URL/edit/articles/revive in a browser injects spurious code like this -
    Code:
    <img height="307" src="articles/media/1574957035/conference.jpg.png^@ width="675"^@  
    <img height="111" src="articles/media/1574957035/ba.png.png^@ width="675"^@
    - in the index.html? The result is that the images inserted in each page do not show in the index page. In addition, the footer "AUTHOR NAME with copyright symbol/info" takes a different font size from what it should be. In short, each individual page looks OKAY when visited, but the index page is distorted - i.e. varying font sizes for texts, no images, etc. - after running the revive action.
 
Could you should me how a token is generated? The zettair spider only works when I run "touch token" in its directory. You may have a snippet that detects the changes in the webdocs and automatically writes the token. Do share it with me please.
The token is created each time you press the green save button after editing a page.
see: https://github.com/cyclaero/ContentCGI/blob/master/plugins/content-delegate/content-delegate.m#L1724

In case it is not created, check the access permissions. Usually ContentCGI runs as user www, and it needs write permission in Zettair's index directory to be able to place the token.
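The check and the manual token drop can be sketched with a tiny shell helper; the function name and the directory in the usage line are hypothetical examples, not part of ContentCGI:

```shell
# place_token DIR: drop the re-index token into Zettair's index
# directory DIR, provided the current user may write there
place_token()
{
    dir="$1"
    if [ -w "$dir" ]; then
        touch "$dir/token" && echo "token placed in $dir"
    else
        echo "no write permission on $dir - try: chown -R www:www $dir" >&2
        return 1
    fi
}

# usage (path is an assumption):
# place_token /var/db/zettair/example.com
```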

I can see the siteroot and synchron folders in the /var/db/zettair however a search on the website throws a 500 server error. Adding "/_search?tag=SEARCHTERM" to the SITE_URL throws the same error. What is missing here is maybe how to process the ARTICLES_FILENAMES.iso.html into search results. Of course, I can run "zet" in "/var/db/zettair" and search for whatever I want. In another sense, how is the search_delegate.{js, html,so,css} in "~/plugins/.." triangulated with the WEBfiles/server and the zettair doc dir to produce search results like your blog does? There is a whole lot of information missing there. The vhost.conf may need to be extended to respond to _search location path. There are just several ways to get it done. How have you done it?

In the vhost file, there are two location matches, both of which trigger ContentCGI to take over the request. One matches when /edit/ is the first path component, ...

https://github.com/cyclaero/ContentCGI/blob/master/apache-vhost.conf#L41

... and the other one matches when any path component is led by an underscore (_):

https://github.com/cyclaero/ContentCGI/blob/master/apache-vhost.conf#L36

In the case of _search and other _xxxx requests, Apache acts on the leading underscore and merely invokes ContentCGI without parsing the rest of the request. Then ContentCGI takes over and loops through the installed plugins to find one which wants to respond. In case no plugin can handle the request, a "404 - The requested resource was not found." is returned.
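In Apache terms, the two matches could be sketched roughly as follows; this is a hand-written outline with an assumed socket path, the authoritative version is the apache-vhost.conf linked above:

```
# hand underscore-led requests and everything below /edit/ to
# ContentCGI via its unix domain socket (mod_proxy_fcgi)
<LocationMatch "/_[^/]+">
   SetHandler "proxy:unix:/tmp/ContentCGI.sock|fcgi://localhost"
</LocationMatch>

<LocationMatch "^/edit/">
   SetHandler "proxy:unix:/tmp/ContentCGI.sock|fcgi://localhost"
</LocationMatch>
```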

For example, on my site, I left the hello-responder plugin (which is actually a template for creating new plugins) in place:
https://obsigna.com/_hello
>>> The Hello Responder Delegate does work.

On the other hand:
https://obsigna.com/_sayhello
>>> 404 - The requested resource was not found.

Now the 500 response may have different causes. In case it comes as pure text (like the _hello responses above), then the search-responder took the request but was not able to respond, for one of the following reasons:
  1. It could not find or open the Zettair store's index file -- check the access rights
  2. The iconv library failed to initialize
  3. The Zettair library did not respond appropriately to the search
In case the 500 response is HTML formatted, then it comes from Apache, and this would indicate some sort of misconfiguration.

Anyway, now I see a possible cause of the error. In May of this year, I added provisions to the search-plugin so that it is able to respond independently for searches on each site of multi-site installations. Of course, this works only if each site got its own Zettair store. This implies that the Zettair store needed to be moved from /var/db/zettair/ to /var/db/zettair/HOST-NAME. In my case it is now /var/db/zettair/obsigna.com.

See: https://github.com/cyclaero/Content...e5b55d7#diff-5c54d0b4b4f5931060c9f540202d169c

Please run:
Code:
mkdir /tmp/your.site.domain
mv /var/db/zettair/* /tmp/your.site.domain/
mv /tmp/your.site.domain /var/db/zettair/
chown -R www:www /var/db/zettair

Then try again. Sorry, after this change to the search-plugin I forgot to update the instructions; I did that just now.

Are you aware that visiting SITE_URL/edit/articles/revive in a browser injects spurious code like this -
Code:
<img height="307" src="articles/media/1574957035/conference.jpg.png^@ width="675"^@
<img height="111" src="articles/media/1574957035/ba.png.png^@ width="675"^@
- in the index.html? The result is that the images inserted in each page do not show in the index page. In addition, the footer "AUTHOR NAME with copyright symbol/info" takes a different fontsize from what it should be. In short, each individual pages looks OKAY when visited but the index page is distorted - i.e. varying font sizes for texts, no images, etc - after running the revive action.

Well, this should not happen, and it certainly does not happen here. Actually, images in the index are not supported. This is planned for the future; however, for this to work well automatically, some provisions need to be programmed, and I have not come to care for this yet.

Regarding the varying fonts, I would like to blame your CSS until I know more.

PS: A quick look into the source revealed where the image tags are altered in the course of auto-indexing. The culprit lines are in enumerateImageTags() in content-delegate.m. I already corrected and committed it to the GitHub repo.
 
Hi Obsigna,
Is there any new information, given that you updated this post very recently? I could not easily spot the update.

And here is another quick question: how can I run multiple instances on the same host? Running each instance in a separate jail based on the current procedure (i.e. dirs, rc.conf entries, etc.) is very expensive.
 

Are you talking about #19? There was a spelling error, which I corrected.

And here is another quick question: How can I run multiple instances on the same host? Running each instance in a separate jail based on the current procedure (i.e. dirs, rc.conf entries etc) is very expensive.

There is no problem running several ContentCGI daemons side by side. There may be other means; however, I create for each site a separate rc(8) script by cloning and modifying the original script shipped with ContentCGI on GitHub - https://raw.githubusercontent.com/cyclaero/ContentCGI/master/ContentCGI.rc.

Let’s assume you got the first ContentCGI daemon servicing your first.com site using the original rc script, and now you want another ContentCGI daemon to provide its service for the second site, second.com. Then you would create the second rc script as follows:

# sed -e "s|ContentCGI|SecondCGI|g;s|-w /usr/local/www/SecondCGI/webdocs|-l 0 -s 0 -u www:www -d /tmp/SecondCGI.sock -p /var/run/SecondCGI.pid -w /usr/local/www/SecondCGI/webdocs|" /usr/local/etc/rc.d/ContentCGI > /usr/local/etc/rc.d/SecondCGI
# chmod +x /usr/local/etc/rc.d/SecondCGI
Code:
#!/bin/sh

# FreeBSD rc-script for auto-starting/stopping the SecondCGI daemon
#
#  Created by Dr. Rolf Jansen on 2018-05-19.
#  Copyright © 2018-2019 Dr. Rolf Jansen. All rights reserved.
#
#
# PROVIDE: SecondCGI
# REQUIRE: LOGIN
# KEYWORD: shutdown
#
# Add the following lines to /etc/rc.conf to enable the SecondCGI daemon:
#    SecondCGI_enable="YES"
#
# optional:
#    SecondCGI_user="root"
#    SecondCGI_group="wheel"
#
# Don't use spaces in the following path arguments:
#    SecondCGI_flags="-l 0 -s 0 -u www:www -d /tmp/SecondCGI.sock -w /usr/local/www/SecondCGI/webdocs"
#    SecondCGI_pidfile="/var/run/SecondCGI.pid"

. /etc/rc.subr

name=SecondCGI
rcvar=SecondCGI_enable

load_rc_config $name

: ${SecondCGI_enable:="NO"}
: ${SecondCGI_user:="root"}
: ${SecondCGI_group:="wheel"}
: ${SecondCGI_flags="-l 0 -s 0 -u www:www -d /tmp/SecondCGI.sock -w /usr/local/www/SecondCGI/webdocs"}
: ${SecondCGI_pidfile:="/var/run/SecondCGI.pid"}

pidfile="${SecondCGI_pidfile}"
if [ "$pidfile" != "/var/run/SecondCGI.pid" ]; then
   SecondCGI_flags="${SecondCGI_flags} -p $pidfile"
fi

command="/usr/local/bin/SecondCGI"
command_args=""

run_rc_command "$1"

By this, you specify in the rc script itself the other location of the web directory of the second site, as well as another UNIX local domain socket on which the second ContentCGI daemon communicates with Apache. You also deactivate the local and remote network sockets, in order not to interfere with those of the first ContentCGI daemon.

Since /usr/local/bin/SecondCGI doesn’t yet exist on the system, you create a symbolic link:
# ln -s ContentCGI /usr/local/bin/SecondCGI

Now you edit Apache’s virtual host file accordingly.

Then add the following lines to /etc/rc.conf:
SecondCGI_enable="YES"

Finally, start the SecondCGI, aka the second ContentCGI daemon: # service SecondCGI start.
 