Ramifications of removing proc

I'm experimenting with kernel-based code and I am trying to figure out what the consequences are of removing a member from the proc list. I've read somewhere that the kernel may walk off the end of the list, corrupting data and causing other general mayhem in the system, which results in a kernel panic. A Google search turned up lots of information on how to hide a process, but nothing about what the effects on the system are. I have read the book "Designing BSD Rootkits" by Joseph Kong and a few threads on this forum dealing with the subject. Looking through the source code, I did a grep on PROC_LOCK and far too many references came up.
 
I talked to someone in the know about the issue I presented. The way it was explained to me is that if the kernel tries to remove an item from a list and it's not there, the kernel will panic, because when it unlinks the item the first time it invalidates (trashes) the element's next and prev pointers; the second removal then dereferences a bad pointer.
 
Hmm, I believe this is true. If you look at sys/queue.h, removing an element from a doubly linked list adjusts the neighbors' prev and next pointers to the new situation, and (with QUEUE_MACRO_DEBUG enabled) trashes the removed element's own links. Can you elaborate a little more?


Code:
#define LIST_REMOVE(elm, field) do {                                    \
        QMD_SAVELINK(oldnext, (elm)->field.le_next);                    \
        QMD_SAVELINK(oldprev, (elm)->field.le_prev);                    \
        QMD_LIST_CHECK_NEXT(elm, field);                                \
        QMD_LIST_CHECK_PREV(elm, field);                                \
        if (LIST_NEXT((elm), field) != NULL)                            \
                LIST_NEXT((elm), field)->field.le_prev =                \
                    (elm)->field.le_prev;                               \
        *(elm)->field.le_prev = LIST_NEXT((elm), field);                \
        TRASHIT(*oldnext);                                              \
        TRASHIT(*oldprev);                                              \
} while (0)


#define TRASHIT(x)      do {(x) = (void *)-1;} while (0)
#define QMD_SAVELINK(name, link)        void **name = (void *)&(link)
 
I've been making a foray into security programming at the kernel level. Looking at the source code of various rootkits, when they perform process hiding they remove the entry from the allproc list and the pidhashtbl to make a process invisible to the system. Over on packetstorm.org there is a rootkit called turtle2 which employs these techniques. My question deals with the effects on system stability of such manipulation of the kernel data structures, as there is no attempt to restore the items to their respective lists when the process exits or aborts for any reason. I'm thinking that such a kernel panic would be indicative that something is amiss with the system. The result of such an event, in my mind at least, would prompt any competent administrator to look to see why the kernel initiated a panic.
 
Maelstorm said:
I've been doing a foray into security programming at the kernel level. […] My question deals with the effects on system stability with such manipulation of the kernel data structures as there is no attempt to restore the items on their respective lists when the process exits or aborts for any reason. […]

Well, I don't think a panic would be a good idea, but a level of tracing could be enabled. Moreover, it would be interesting to synchronize allproc and zombproc so that the latter cannot accept a process that is not in the former. The problem is that the process itself manipulates the lists, as in kern_exit.c:

Code:
        /*                                                                                 
         * Remove proc from allproc queue and pidhash chain.                               
         * Place onto zombproc.  Unlink from parent's child list.                          
         */
        sx_xlock(&allproc_lock);
        LIST_REMOVE(p, p_list);
        LIST_INSERT_HEAD(&zombproc, p, p_list);
        LIST_REMOVE(p, p_hash);
        sx_xunlock(&allproc_lock);
 
I know exactly the code you are referring to, as I have been over that myself. Instead of synchronizing the lists, it might be interesting to edit the routine to actually LOOK for the process on the allproc list before removing it. In pseudocode:

Code:
acquire exclusive lock on allproc
use the LIST_FOREACH macro to scan the list for p
if p is not found:
        print a warning to the system console
if p is found:
        remove p from allproc
place p on the zombie list (zombproc)
remove p's pid from the pid hash table
release the lock

I have to wonder how difficult it would be to get the run queue from the scheduler, trace each thread back to its respective proc structure, and match those back against the allproc list. This could be run in the kernel as a low-priority thread every 5 seconds to 5 minutes, with a random component. If it finds a proc that is orphaned, it gets placed back on the lists and a message is printed to the console. That way, if someone was manipulating allproc for nefarious purposes, the sysadmin would be notified pretty quickly. The same thing could be done with INET port hiding as well. As part of the module, I am working on a kernel thread that periodically checks the system call table for hooked system calls. If it finds something amiss, it not only reports it to the sysadmin via a console message, it also fixes it, since it knows which function should be called for each system call. Something like this is very kernel-version dependent, as system calls are added and deleted periodically.

As I think about it, I wonder if the kernel guys would be willing to include something like this in the base system. Not only would it enhance system security, but with the active component actually looking for tampering with certain critical areas of the system, there would be no way for a hacker to hide himself on a system because the attempt to do so would expose him in a very prominent way. After all, it is just three low priority kernel threads that sleep most of the time.
 