Quote: Originally Posted by ponzi
@Psychlist1972, very nice and many thanks. One thing I have wondered about is the value of deleting unused device drivers. Maybe a few hardware generations ago, I read about how device drivers could share an interrupt: they would be chained, and each one would query its device to see if it was the one that generated the interrupt. Obsolete thinking now?
If the device isn't discovered on the system, its driver won't be loaded, so no allocation will happen.
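If you want to see that on a live system, below is a rough Python sketch (an illustration, not anything from my toolchain) that shells out to pnputil, which ships with Windows 10 1903 and later, and lists each connected device with whichever driver is bound to it. The field labels it parses ("Device Description", "Driver Name") match what pnputil prints on recent builds, but treat that parsing as an assumption to verify on yours.

```python
# Sketch: list the devices Windows actually discovered and which driver
# (if any) is bound to each. Uses pnputil (ships with Windows 10 1903+);
# the exact output labels can vary by Windows version.
import subprocess

def enumerate_connected_devices():
    """Return a list of {field: value} dicts, one per connected device."""
    result = subprocess.run(
        ["pnputil", "/enum-devices", "/connected"],
        capture_output=True, text=True, check=True,
    )
    devices, current = [], {}
    for line in result.stdout.splitlines():
        if not line.strip():  # a blank line separates device records
            if current:
                devices.append(current)
                current = {}
        elif ":" in line:
            key, _, value = line.partition(":")
            current[key.strip()] = value.strip()
    if current:
        devices.append(current)
    return devices

if __name__ == "__main__":
    for dev in enumerate_connected_devices():
        # No "Driver Name" field means no driver is bound -- consistent
        # with "device not discovered/used, no driver, no allocation".
        name = dev.get("Device Description", "<unknown>")
        driver = dev.get("Driver Name", "<none bound>")
        print(f"{name}: {driver}")
```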
The reason I advocate for disabling things in the BIOS instead of just in Windows is twofold.
1. It guarantees they won't be re-enabled after an update. Windows generally treats a disabled device as an error on the system, and helpfully (for most people) assumes that if you have the hardware, you probably want to use it.
2. It gives the system the best chance to allocate resources up front. If devices are enabled and installed, they are in the pool when allocations are decided, and some of those allocations are persistent (stored in the registry, etc.). If those resources aren't available to other devices at the time allocations are sorted out, and those devices get stuck sharing, you end up with a sub-optimal configuration even after you later disable the device in Windows. This becomes less of an issue as we continue to move to more modern devices and connectivity approaches, but it's something I still follow. (A sketch for spotting IRQ sharing on a live system follows below.)
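To make the sharing point concrete, here's a hedged Python sketch that asks WMI, via PowerShell's Get-CimInstance and the Win32_PnPAllocatedResource association class (both real), which devices ended up on the same IRQ line. IRQNumber and DeviceID are the key properties of the two endpoints; the output handling around them is my assumption, so verify it on your system.

```python
# Sketch: flag IRQ lines shared between multiple devices, by grouping the
# WMI Win32_PnPAllocatedResource associations (IRQ resource -> PnP device).
import json
import subprocess
from collections import defaultdict

PS_QUERY = (
    "Get-CimInstance Win32_PnPAllocatedResource | "
    "Where-Object { $_.Antecedent.CimClass.CimClassName -eq 'Win32_IRQResource' } | "
    "ForEach-Object { [pscustomobject]@{ "
    "IRQ = $_.Antecedent.IRQNumber; Device = $_.Dependent.DeviceID } } | "
    "ConvertTo-Json"
)

def shared_irqs():
    """Return {irq_number: [device_ids]} for IRQs with more than one device."""
    out = subprocess.run(
        ["powershell", "-NoProfile", "-Command", PS_QUERY],
        capture_output=True, text=True, check=True,
    ).stdout
    if not out.strip():
        return {}
    rows = json.loads(out)
    if isinstance(rows, dict):  # ConvertTo-Json unwraps single-item lists
        rows = [rows]
    by_irq = defaultdict(list)
    for row in rows:
        by_irq[row["IRQ"]].append(row["Device"])
    return {irq: devs for irq, devs in by_irq.items() if len(devs) > 1}

if __name__ == "__main__":
    for irq, devs in sorted(shared_irqs().items()):
        print(f"IRQ {irq} is shared by {len(devs)} devices:")
        for dev in devs:
            print(f"  {dev}")
```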
As to how interrupts are processed - I really don't know the specifics there. It's going to depend a lot on how the device is connected and what it's trying to do. PCIe, for example, doesn't generally use interrupts in the classic CPU interrupt sense*, but signals them as serial, packet-based messages instead, and the system knows which device generated each one.
* I'm not sure how many PCIe cards today use the classic wire interrupt signals vs. the packet-based "interrupts", so don't take this as gospel; it may turn out that most PCIe devices today still use INTx rather than MSI. That's not what I have come to understand, but this also isn't my area of expertise.
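One way to check what your own machine is doing rather than taking my word for it: a PCI device whose driver opts into MSI gets a MessageSignaledInterruptProperties key (with an MSISupported value) under its entry in HKLM\SYSTEM\CurrentControlSet\Enum\PCI, which is a documented registry location. The Python sketch below just walks that tree read-only; reading an absent key as "driver didn't opt into MSI, so likely INTx" is my interpretation, not gospel either.

```python
# Sketch: report, per PCI device, whether its driver opted into
# message-signaled interrupts (MSI) or is presumably on legacy INTx.
# Read-only registry walk; run elevated if you hit access errors.
import winreg

ENUM_PCI = r"SYSTEM\CurrentControlSet\Enum\PCI"
MSI_SUBKEY = (r"Device Parameters\Interrupt Management"
              r"\MessageSignaledInterruptProperties")

def msi_support_by_device():
    """Yield (device_instance_path, msi_value_or_None) tuples."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, ENUM_PCI) as pci:
        for i in range(winreg.QueryInfoKey(pci)[0]):
            dev = winreg.EnumKey(pci, i)
            with winreg.OpenKey(pci, dev) as dev_key:
                for j in range(winreg.QueryInfoKey(dev_key)[0]):
                    inst = winreg.EnumKey(dev_key, j)
                    try:
                        with winreg.OpenKey(dev_key, f"{inst}\\{MSI_SUBKEY}") as k:
                            value, _ = winreg.QueryValueEx(k, "MSISupported")
                    except OSError:
                        value = None  # key absent: driver didn't opt into MSI
                    yield f"PCI\\{dev}\\{inst}", value

if __name__ == "__main__":
    labels = {None: "INTx (no MSI key)", 0: "MSI disabled", 1: "MSI enabled"}
    for path, msi in msi_support_by_device():
        print(f"{labels.get(msi, str(msi)):>20}  {path}")
```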
Pete