The previous articles described a number of ways to hide processes, TCP connections, and kernel modules. To sum up, the difference from most Rootkit articles on the Internet is this:
- Most articles on the web hook procfs to hide objects.
- My approach is to remove objects directly from the kernel's data structures, such as linked lists.
My approach is undoubtedly simpler, since hooking procfs requires a lot of code dealing with the VFS API.
However, the monk may run away, but the temple stays: even though the process, TCP connection, or kernel module has been unlinked from its list, the object is still sitting in the slab! We only need to dump the relevant slab objects to find them:
- Dump all slab objects of the kmem_cache named "TCP", and you can find hidden TCP connections.
- Dump all slab objects of the kmem_cache named "task_struct", and you can find hidden processes.
- ...
However, the kernel does not provide a way to dump all slab objects.
As we know, /proc/slabinfo is nowhere near enough: it only shows statistics that the kernel records as slab objects are allocated and freed. So we have to dump all the slabs ourselves.
Come on, let me do it.
The slab layer sits on top of the buddy system and takes its pages from it, so we have to work at the page level.
Annoyingly, slub, the default implementation of kmem_cache, keeps no field recording those pages! Once an object is allocated, it is detached from the kmem_cache's management, and it does not return to the kmem_cache's free list until it is freed.
Fortunately, though, while a kmem_cache holds no references to all of its pages, the pages hold a reverse reference:
- The slab_cache field of struct page records the kmem_cache it belongs to.
So the solution is to scan all pages, then categorize and tally them by their slab_cache field. The code is as follows:
```c
#include <linux/module.h>
#include <linux/kallsyms.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <net/inet_sock.h>

struct list_head *_slab_caches;
struct list_head __slab_caches;

struct kmemcache_stat {
	struct list_head list;
	struct kmem_cache *slab;
	unsigned long cnt;
	void (*print_obj)(struct kmem_cache *, void *);
	void *priv;
};

/* Walk the consecutive objects packed into n consecutive pages */
#define for_each_object(__p, __s, __addr, __objects) \
	for (__p = (__addr); __p < (__addr) + (__objects) * (__s)->size; \
	     __p += (__s)->size)

/* Print callback for task_struct */
void print_task(struct kmem_cache *s, void *p)
{
	struct task_struct *p1 = (struct task_struct *)p;

	printk("##### %s %d \n", p1->comm, p1->pid);
}

/* Print callback for TCP sockets */
void print_tcp(struct kmem_cache *s, void *p)
{
	struct inet_sock *sk = (struct inet_sock *)p;

	printk("##### %08X->%08X %d %d \n",
	       sk->inet_daddr, sk->inet_rcv_saddr,
	       ntohs(sk->inet_dport), sk->inet_num);
}

/* Print callback for mm_struct */
void print_mm(struct kmem_cache *s, void *p)
{
	struct mm_struct *mm = (struct mm_struct *)p;

	printk("##### owner:[%s] %d PGD:%lx \n",
	       mm->owner ? mm->owner->comm : "[null]",
	       mm->owner ? mm->owner->pid : -1,
	       (unsigned long)mm->pgd);
}

/* Print callback for vm_area_struct */
void print_vm(struct kmem_cache *s, void *p)
{
	struct vm_area_struct *vma = (struct vm_area_struct *)p;

	if (vma->vm_mm && vma->vm_mm->owner) {
		printk("##### VMA owner:[%s] %d PGD:%lx \n",
		       vma->vm_mm->owner->comm,
		       vma->vm_mm->owner->pid,
		       (unsigned long)vma->vm_mm->pgd);
	}
}

void print_object(struct kmemcache_stat *ment, void *p)
{
	ment->cnt++;
	if (ment->print_obj)
		ment->print_obj(ment->slab, p);
}

unsigned long show_objects(struct page *page)
{
	struct kmem_cache *s = page->slab_cache;
	struct kmemcache_stat *entry;
	void *p, *addr;
	int found = 0;

	list_for_each_entry(entry, &__slab_caches, list) {
		if (entry->slab == s) {
			found = 1;
			break;
		}
	}
	if (!found)
		return 1;

	addr = page_address(page);
	for_each_object(p, s, addr, page->objects)
		print_object(entry, p);

	/* Skip past the n pages this slab spans */
	return (PAGE_ALIGN((unsigned long)p) - (unsigned long)addr) / PAGE_SIZE;
}

static void slab_scan(void)
{
	int i = 0;

	for_each_online_node(i) {
		unsigned long spfn, epfn, pfn;

		spfn = node_start_pfn(i);
		epfn = node_end_pfn(i);
		for (pfn = spfn; pfn < epfn;) {
			struct page *page = pfn_to_page(pfn);

			if (!PageSlab(page)) {
				pfn++;
				continue;
			}
			pfn += show_objects(page);
		}
	}
}

static int __init dump_slab_obj_init(void)
{
	struct kmem_cache *entry;
	struct kmemcache_stat *ment, *tmp;
	unsigned long total = 0;

	_slab_caches = (struct list_head *)kallsyms_lookup_name("slab_caches");
	if (!_slab_caches)
		return -1;

	INIT_LIST_HEAD(&__slab_caches);
	list_for_each_entry(entry, _slab_caches, list) {
		struct kmemcache_stat *stat;

		stat = kmalloc(sizeof(struct kmemcache_stat), GFP_KERNEL);
		stat->cnt = 0;
		stat->print_obj = NULL;
		INIT_LIST_HEAD(&stat->list);
		list_add(&stat->list, &__slab_caches);
		stat->slab = entry;
		if (!strcmp(entry->name, "task_struct"))
			stat->print_obj = print_task;
		if (!strcmp(entry->name, "TCP"))
			stat->print_obj = print_tcp;
		if (!strcmp(entry->name, "mm_struct"))
			stat->print_obj = print_mm;
		if (!strcmp(entry->name, "vm_area_struct"))
			stat->print_obj = print_vm;
	}

	slab_scan();
	list_for_each_entry_safe(ment, tmp, &__slab_caches, list) {
		printk("[%s] %lu\n", ment->slab->name, ment->cnt);
		total += ment->cnt;
		list_del(&ment->list);
		kfree(ment);
	}
	printk("total objs: %lu\n", total);

	return -1;
}
module_init(dump_slab_obj_init);
MODULE_LICENSE("GPL");
```
The code is very simple, so let me show you some output:
```
##### owner:[sshd] 1841 PGD:ffff88003be3d000
##### owner:[[null]] -1 PGD:ffff88003c110000
##### owner:[bash] 1843 PGD:ffff880000098000
##### owner:[sshd] 1745 PGD:ffff880035aac000
##### owner:[agetty] 664 PGD:ffff88003c321000
##### owner:[dhclient] 1793 PGD:ffff88003c3fd000
##### owner:[[null]] -1 PGD:ffff880035aa2000
##### owner:[[null]] -1 PGD:ffff88003bd06000
...
##### gmain 2260
##### tuned 1248
##### VMA owner:[firewalld] 658 PGD:ffff88003b7f6000
##### VMA owner:[firewalld] 658 PGD:ffff88003b7f6000
##### VMA owner:[firewalld] 658 PGD:ffff88003b7f6000
##### VMA owner:[firewalld] 658 PGD:ffff88003b7f6000
...
##### 0138A8C0->6E38A8C0 62006 22
##### 0138A8C0->6E38A8C0 62011 22
...
total objs: 2367564
```
Yes, you read that correctly. I deliberately showed mm_struct, vm_area_struct, and TCP:
- You hide a task_struct, perhaps not even allocating it from the task_struct kmem_cache, but mm_struct/vm_area_struct sell your task out.
- You hide a TCP connection by removing it from the ehash, but it still sits in the TCP socket slab.
All the data structures in the Linux kernel are woven into one large, interconnected web. Grab any corner of it, and everything else comes out.
In a modern operating system like Linux, every task, whether a kernel thread or an ordinary user process, must have its PGD, and once you find the PGD you can dump that task's entire address space. We also know that the PGD lives in the pgd field of the task's mm_struct, and that the owner field of mm_struct points straight back at the task itself. So if we find an mm_struct in the slab, we can locate the task, no matter where it is hidden!
If you are afraid someone has hooked fork's copy_process, see:
https://blog.csdn.net/dog250/article/details/105939822
In that case, the mm_struct object and the task_struct object themselves are not allocated from the slab, so a good idea is to find any vm_area_struct object, locate the mm_struct through its vm_mm field, and from there reach the task. We know that vm_area_struct allocations happen continuously as a task runs, so it is hard to hook them all; following the trail from vm_area_struct is therefore a great way to dig out hidden tasks.
The water gets in; the water always gets in.
As long as you use the slab, you can be tracked down this way. The price of convenience and efficiency is control by the system. Even though in the previous article I allocated the task_struct from kmalloc's anonymous slab, that is still a slab object.
If you want to hide your Rootkit deeper, you must find a way to break away completely from the management of these kernel data structures. What can you do? Call alloc_pages directly? No, that is not enough! alloc_pages is still under the system's control, and the pages you get are still tracked through lists. What then? It's simple!
Intercept the pages!!
This is what follows.
Zhejiang Wenzhou leather shoes are wet, so you won't get fat when it rains.