I've been thinking about this kernel a bit more and have chosen a few features I will try to implement, although, let's face reality, I most likely won't ever come close to finishing them. A working kernel with all of these features takes many hundreds of thousands of hours of work to complete.
For ease of implementation I will be attempting to create a monolithic kernel. This means that any software running on the operating system communicates directly with the kernel, and it is the kernel's job to communicate with the hardware at a lower level. There are trade-offs here: on the plus side, if designed well it is one of the more responsive/faster kernel designs, but on the downside, if the kernel crashes the entire operating system crashes. Compare this to a microkernel structure, where all software interacts with user-land servers, which communicate with the kernel, which in turn communicates with the hardware. The benefit there is that if a server crashes in user land (say, the networking server module) the kernel can recover and simply relaunch the networking module instead of bringing the entire operating system down. That said, a microkernel is generally slower because of the overhead of software first communicating with the servers, which then relay requests to the kernel.
Here is a simple monolithic kernel structure. As you can see it is fragile, in that if the kernel fails there is likely no recovery from that failure.
http://i.imgur.com/EHBxLP7.png

Now, the task-managing scheme I will try to implement is a time-sharing one. Essentially each process has the right to use the CPU for a predetermined amount of time, and once its time is up, the kernel puts it at the back of a first-in, first-out queue and grabs the next process awaiting its turn on the processor. This is slightly more complex than the simpler alternative, a monotasking scheme where only one process can run at a time, including the kernel. With a time-sharing system you have to save the state of the current process and where it left off, and when its turn comes around again you must find that data and load it back into memory so the process can carry on. This kind of CPU sharing is commonly called round-robin scheduling: it uses a first-in, first-out queue in which, once the current process's time slice is up, it is put at the back of the queue, the process at the front gets its chance, and it in turn goes to the back, in a repeating cycle.