A client of mine has a large IBM Power7 series server that’s divided into logical partitions (LPARs). In the partition I’m interested in, there are several Oracle databases that all see the same 16 vCPUs and 40GB of memory. Each database gets 4GB of that (using the dreaded memory_target/memory_max_target parameters).
The challenge is that whilst the application under performance test load is not CPU bound (as evidenced by the LPAR’s low load average and high idle percent), the one database in question could do with more memory, especially for buffer cache (and the keep cache) to reduce the number of physical reads.
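For a case like this, one way to put a number on “could do with more memory” is the buffer cache advisory. This is a hedged sketch, not taken from the client’s system: it queries v$db_cache_advice (populated when db_cache_advice is enabled) to estimate how physical reads would change at different cache sizes.

```sql
-- Sketch: estimated physical reads at various buffer cache sizes.
-- Assumes db_cache_advice is ON; figures are Oracle's estimates, not measurements.
SELECT size_for_estimate       AS cache_mb,
       estd_physical_read_factor,
       estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
ORDER  BY size_for_estimate;
```

If the advisory shows estd_physical_reads still falling steeply beyond the current size, that’s concrete evidence the extra memory would pay off.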
The LPAR is getting an additional 4 vCPUs and 20GB more memory this weekend… great! However, while all the instances will share the extra CPUs (which, again, aren’t the constraint), none of the instances is getting any more memory to use – not an additional byte for buffer cache or PGA.
It puzzles me that we’re not bumping up the memory allocation for this one database and when pressed, the answer is that they don’t want to make too many changes at once!
The “other” change is increasing the processes parameter from 500 to 1000 so they don’t run into ORA-00020 (“maximum number of processes exceeded”) under load… I could point out that this merely mitigates a known limit rather than enhancing performance, but I don’t want to waste my breath.
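For what it’s worth, the change itself is trivial. A hedged sketch (generic, not the client’s actual commands): check how close the instance is to the limit via v$resource_limit, then raise it — processes is a static parameter, so with an spfile it only takes effect after a restart.

```sql
-- How close are we to ORA-00020?
SELECT resource_name, current_utilization, max_utilization, limit_value
FROM   v$resource_limit
WHERE  resource_name IN ('processes', 'sessions');

-- Static parameter: SCOPE=SPFILE, takes effect on the next instance restart.
ALTER SYSTEM SET processes = 1000 SCOPE = SPFILE;
```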
A bridge too far, perhaps.