Sep 30, 2019

Bug 24590018 on Exadata: SCM0 process on top of CPU consumption.





First of all: sorry for my language, my English is not very rich. 😔


Straight to the point ... 

A few days ago I was working on a database migration from on-prem to a cloud environment (Exadata), and I could see that the SCM0 background process was at the top of the CPU consumption list (which was very weird, because that database was idle, with no active users on it).


TOP example:
top - 13:31:04 up 72 days, 11:01,  4 users,  load average: 13.41, 9.01, 7.45
Tasks: 5896 total,  13 running, 5870 sleeping,   0 stopped,  13 zombie
%Cpu(s): 22.0 us,  2.6 sy,  0.0 ni, 75.1 id,  0.0 wa,  0.0 hi,  0.1 si,  0.1 st
KiB Mem : 74261779+total,  5118632 free, 57913158+used, 15836755+buff/cache
KiB Swap: 16777212 total, 16776784 free,      428 used. 14184574+avail Mem

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 74136 oracle    20   0 8299256  89592  82416 R  98.4  0.0  30840:38 ora_scm0_ctmtst
265577 oracle    20   0   79.7g 865372  85984 R  98.4  0.1   1518:56 oracle
343847 oracle    20   0   80.3g 117968  85712 R  67.7  0.0   8:10.80 oracle
 49052 odhagent  20   0   25.7g 464020  10012 S  28.8  0.1   4959:42 java
 24823 root      20   0 6273548 822004  61116 S  24.1  0.1   1526:05 ohasd.bin
  1887 oracle    20   0 8282336  99408  91092 S  22.6  0.0   0:00.72 oracle_1887_ctm
  1863 oracle    20   0 8413404  99384  91068 S  22.3  0.0   0:00.71 oracle_1863_ctm
393677 oracle    20   0   79.0g  84496  70540 S  22.3  0.0   0:02.01 oracle
396019 oracle    20   0  180680  82388  28944 S  21.9  0.0   0:04.45 rman
  1872 oracle    20   0 8413408  98800  90480 S  21.6  0.0   0:00.69 oracle_1872_ctm
  1877 oracle    20   0 8282332  98908  90572 S  21.6  0.0   0:00.69 oracle_1877_ctm
255797 oracle    -2   0 7749688  63404  59912 S  21.6  0.0   1159:44 ora_vktm_ctmts5
396710 oracle    20   0  180692  82420  28992 S  21.6  0.0   0:04.43 rman
  1867 oracle    20   0 8544480  98284  89968 S  21.3  0.0   0:00.68 oracle_1867_ctm

We can also observe that the process had accumulated approximately 514 hours (around 21 days) of CPU time. Awesome 😳🤨
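Just to double-check that the PID at the top of the list really belonged to the SCM0 background process of this instance, a quick lookup in v$process helps. This is only a minimal sketch, using PID 74136 from the top output above:

-- Run as a DBA user: map the OS PID back to the Oracle background process
select spid, pname, program
from   v$process
where  spid = '74136';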

Although I had started the instance only a few days earlier, the SCM0 background process was already at the top of the list :/ very weird. Well, that behaviour is a known issue, specifically this one:
12.2 RAC DB Background process SCM0 consuming excessive CPU (Doc ID 2373451.1)

It is caused by a bug: Bug 24590018 - RAC PERF: SCM0 PROCESS USING 100% CPU, FG'S USING ~80% SYS CPU POSTING SCM0
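Since the note applies to 12.2, it is worth confirming the version of the affected database first. A quick sketch:

-- Confirm the instance version (the issue is documented for 12.2 RAC databases)
select instance_name, version
from   v$instance;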

In the note, support gives us the solution:

The DLM Statistics Collection and Management slave (SCM0) is responsible for collecting and managing the statistics related to global enqueue service (GES) and global cache service (GCS). This slave exists only if DLM statistics collection is enabled.
The value is currently set to 1, which means DLM statistics collection is enabled. Please go ahead and run the following command to change the value of "_dlm_stats_collect" to 0:

alter system set "_dlm_stats_collect" = 0 scope = spfile sid = '*';

This does require a restart of the instances for the change to take effect. If a restart is not an option, as a workaround you may kill the SCM0 process at OS level; a new process will be respawned soon after:

kill -9 <os pid of SCM0>
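Before (and after) making the change, you can verify the current value of the hidden parameter. The first query below is the usual hidden-parameter lookup against the x$ fixed tables, so it has to be run as SYS; the second one is just a convenient way to grab the OS PID of SCM0 for the kill workaround. Both are sketches of mine, not taken from the note:

-- Run as SYS: show the current value of the hidden parameter
select a.ksppinm  parameter,
       b.ksppstvl current_value,
       b.ksppstdf is_default
from   x$ksppi a, x$ksppcv b
where  a.indx = b.indx
and    a.ksppinm = '_dlm_stats_collect';

-- OS PID of the SCM0 background process, to be used with kill -9 if needed
select spid
from   v$process
where  pname = 'SCM0';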

Disabling DLM statistics collection has no negative impact on performance or anything else on 12.2. However, on 18c or 19c it should be enabled again; for the moment this negative behaviour has not been reported on the latest database versions.
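If you later upgrade to 18c or 19c and want to go back to the default behaviour, resetting the parameter (rather than setting it to 1 explicitly) should be enough. A sketch, again requiring an instance restart to take effect:

alter system reset "_dlm_stats_collect" scope = spfile sid = '*';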