Killing Oracle DB Connections

Had to kill an application user with thousands of connections from different app servers.

The code from the link below did the job with a slight amendment.

https://www.oracledocs.com/generating-query-to-kill-session-from-sql_id/

SELECT 'alter system kill session '
|| ''''
|| SID
|| ','
|| SERIAL#
|| ''''
|| ' immediate ;'
FROM v$session
WHERE sql_id = '&sql_id';

For RAC instances

SELECT 'alter system kill session '
|| ''''
|| SID
|| ','
|| SERIAL#
|| ',@'
|| inst_id
|| ''''
|| ' immediate ;'
FROM gv$session
WHERE username = 'APP_USER'
AND machine IN ('evc01','evc02','evc03');

Simply spooled the output, checked it was what we wanted, changed the .lst file to .sql and used that to kill all the sessions. We hit an ORA-00031 error when killing some of the sessions - Burleson gives more detail.

ORA-00031: session marked for kill.

Cause: The session specified in an ALTER SYSTEM KILL SESSION command cannot be killed immediately (because it is rolling back or blocked on a network operation), but it has been marked for kill. This means it will be killed as soon as possible after its current uninterruptible operation is done.

Action: No action is required for the session to be killed, but further executions of the ALTER SYSTEM KILL SESSION command on this session may cause the session to be killed sooner.
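For reference, a minimal SQL*Plus wrapper for the spool step might look like this - a sketch only, with an illustrative file name:

```sql
-- Sketch only: spool the generated kill statements to a file for review.
set heading off feedback off pages 0 lines 200 trimspool on
spool kill_sessions.lst

select 'alter system kill session '''||sid||','||serial#||''' immediate;'
from v$session
where sql_id = '&sql_id';

spool off
-- review kill_sessions.lst, rename it to kill_sessions.sql, then run:
-- @kill_sessions.sql
```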

Monitoring Golden Gate using OEM

I was asked to look at monitoring Oracle GoldenGate, but unfortunately the environment was already up and running, which was a pity as I would probably have gone down the JAgent route. However, that seems more suited to a new build, and retrofitting it, at least for the build I was working on, was going to be a pain.

My colleague found this post from "dbasolved", which is excellent.

https://www.dbasolved.com/2014/01/yet-another-way-to-monitor-oracle-goldengate-with-oem12c/

It uses this Perl script behind the scenes, which in itself is nothing exciting but does the job.

#!/usr/bin/perl -w
#
# Filter GGSCI "info all" output into pipe-delimited lines.
use strict;
use warnings;

# Static variables
my $gghome = "/oracle/app/product/12.1.2/ggate";

# Program
my @buf = `$gghome/ggsci << EOF
info all
EOF`;

foreach (@buf)
{
    if (/EXTRACT/ || /REPLICAT/)
    {
        s/\s+/\|/g;
        print $_ . "\n";
    }
}
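The same filtering can be sanity-checked in plain shell; the sample lines below are made up and just mimic GGSCI "info all" output:

```shell
# Illustrative only: fabricated "info all" lines; keep EXTRACT/REPLICAT rows
# and squeeze whitespace into pipes, as the Perl script above does.
printf 'MANAGER RUNNING\nEXTRACT RUNNING EXT1 00:00:00 00:00:05\n' \
  | grep -E 'EXTRACT|REPLICAT' \
  | tr -s ' ' '|'
```

This prints EXTRACT|RUNNING|EXT1|00:00:00|00:00:05, the pipe-delimited form the OEM metric extension consumes.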



Relinking Grid Infrastructure


This is the official document to follow when you need to relink Oracle after an O/S kernel upgrade.

How To Relink The Oracle Grid Infrastructure Standalone (Restart) Installation Or Oracle Grid Infrastructure RAC/Cluster Installation (11.2 or 12c). (Doc ID 1536057.1)

Note that roothas.sh (or rootcrs.sh) -unlock changes the ownership of the grid home from root to oracle to allow updates to that directory. You can see this by running "ls -l" before and after running the script.


For RAC follow :


1. Verify that cluster services are running on all nodes:
   $GRID_HOME/bin/crsctl stat res -t

2. Shut down the OEM agent, if available:
   $AGENT_HOME/bin/emctl stop agent

3. Shut down or fail over GoldenGate if required.

4. As root, disable automatic restart for HAS:
   sudo bash
   . /usr/local/bin/oraenv      (+ASM1, or the active ASM instance)
   $ORACLE_HOME/bin/crsctl disable has

5. Shut down HA services to prepare for relinking:
   $ORACLE_HOME/crs/install/rootcrs.sh -prepatch
   For pre-12c installs use:
   perl $ORACLE_HOME/crs/install/rootcrs.pl -unlock

6. Confirm all Oracle processes are down, apply the O/S patches and reboot:
   ps -ef | grep -E 'oracle|has'

7. After the reboot, log in as oracle and verify HAS did not start:
   $GRID_HOME/bin/crsctl stat res -t

8. As oracle (or the owner of $GRID_HOME), relink the executables in GRID_HOME:
   . /usr/local/bin/oraenv      (+ASM1, or the active ASM instance)
   $ORACLE_HOME/bin/relink all

9. Check the log for relink errors:
   egrep 'error|fail' $ORACLE_HOME/install/relink.log

10. Double-check the relinked binaries - files should have a date stamp of the current date/time:
    ls -ltr $ORACLE_HOME/bin

11. As oracle, relink the executables in DB_HOME:
    . /usr/local/bin/oraenv      (nextfare1, or another db name on the server)
    $ORACLE_HOME/bin/relink all

12. Check the log for relink errors:
    egrep 'error|fail' $ORACLE_HOME/install/relink.log

13. Double-check the relinked binaries - files should have a date stamp of the current date/time:
    ls -l $ORACLE_HOME/bin

14. As root, re-enable HAS:
    sudo bash
    . /usr/local/bin/oraenv      (+ASM1, or the active ASM instance)
    $ORACLE_HOME/bin/crsctl enable has

15. As root, run rootadd_rdbms.sh:
    $ORACLE_HOME/rdbms/install/rootadd_rdbms.sh

16. If GRID_HOME is owned by a user other than oracle, e.g. grid, verify that <DB_HOME>/bin/oracle has the correct permissions/uid/gid as below (replace <DB_HOME> with the correct path for the database home). If not, run setasmgidwrap to reset the oracle binary owner/group.

    Verify permissions/uid/gid:
    ls -l <DB_HOME>/bin/oracle
    permissions: -rwsr-s--x
    uid: oracle
    gid: asmadmin

    su - grid
    $ORACLE_HOME/bin/setasmgidwrap o=<DB_HOME>/bin/oracle

17. Restart HA services:
    $ORACLE_HOME/crs/install/rootcrs.sh -postpatch
    For pre-12c installs use:
    perl $ORACLE_HOME/crs/install/rootcrs.pl -patch

18. Verify that HA services have restarted:
    $ORACLE_HOME/bin/crsctl stat res -t

19. As oracle, start the OEM agent, if available:
    $AGENT_HOME/bin/emctl start agent

20. Start GoldenGate if required.

21. Repeat steps 2-19 for each additional node, replacing the ASM instance and DB instance as needed.

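The relink-log scan used above can be sanity-checked against a throwaway file; the log path and contents here are made up:

```shell
# Illustrative only: build a fake relink log and scan it the same way the
# steps above scan $ORACLE_HOME/install/relink.log (contents fabricated).
log=$(mktemp)
printf 'linking target ioracle... ok\nld: error: unable to find library -lxml11\n' > "$log"
grep -Ei 'error|fail' "$log" && echo "relink problems found"
rm -f "$log"
```

A clean log produces no output from the grep, so the "relink problems found" line only appears when there is something to investigate.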







Patching OEM 13c Cloud Control - part 2

This is for the OEM October 2019 Patch Set Update.

You will need an Oracle Metalink account; search for :

"Patch 30203475: EMBP Patch Set Update 13.3.0.0.191015"

See also :

https://www.durham.computer/2019/05/patching-oem-13c-cloud-control.html

Daily / Hourly archive logs


Daily / Hourly archive log generation scripts - very handy.

Daily Archive Log Generation :

select trunc(COMPLETION_TIME,'DD') Day, thread#,
       round(sum(BLOCKS*BLOCK_SIZE)/1024/1024/1024) GB,
       count(*) Archives_Generated
from v$archived_log
group by trunc(COMPLETION_TIME,'DD'), thread#
order by 1;

Hourly Archive Logs :

set pages 1000
alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';

select trunc(COMPLETION_TIME,'HH') Hour, thread#,
       round(sum(BLOCKS*BLOCK_SIZE)/1024/1024/1024) GB,
       count(*) Archives
from v$archived_log
group by trunc(COMPLETION_TIME,'HH'), thread#
order by 1;

** If you are running this in a Data Guard environment you need to run it with applied = 'NO', otherwise you get the log generation for both the primary and the standby, and the figure won't match what the O/S reports.

select trunc(COMPLETION_TIME,'DD') Day, thread#,
       round(sum(BLOCKS*BLOCK_SIZE)/1024/1024/1024) GB,
       count(*) Archives_Generated
from v$archived_log
where applied = 'NO'
group by trunc(COMPLETION_TIME,'DD'), thread#
order by 1;
 

Find the amount of archiving you are using (run from the archive directory)


# cat arc_by_day.sh - run from the directory containing the archive logs; if these are already split into dated directories, just run a du -skh

#!/bin/bash
# Sum archive log sizes per day, based on the month/day columns of ls -la.
array=($(ls -la | grep rman_arc | awk '{print $6 $7}' | uniq))
for ((i = 0; i < ${#array[@]}; i++)); do
    mth=$(echo "${array[$i]}" | cut -c1-3)
    day=$(echo "${array[$i]}" | cut -c4-5)
    chk="$mth $day"
    echo -n "$chk "
    # sum the files for this day (sizes in column 5, converted to GB)
    ls -la | grep "$chk" | grep rman_arc | awk '{sum+=$5} END {print sum/1024/1024/1024}'
done
# sum the total
echo -n "Total "
ls -la | grep rman_arc | awk '{sum+=$5} END {print sum/1024/1024/1024}'


[root@db03 arc]# ./arc_by_day.sh

Jun 14 34.2387
Jun 15 38.9001
Jun 16 109.296
Jun 17 49.5452
Jun 18 46.5221
Jun 19 32.9514
Jun 20 55.9935
Jun 21 13.1455

Total 380.592




[root@db03 arc]# pwd

/u02/backups/rman/db03/arc
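The awk summation the script relies on can be checked in isolation; the two sample lines below are fabricated ls -la output (size in column 5) chosen to sum to exactly 3 GB:

```shell
# Illustrative only: sum column 5 (file size in bytes) and convert to GB,
# exactly as arc_by_day.sh does; the ls-style lines are made up.
printf -- '-rw-r----- 1 oracle dba 1073741824 Jun 14 01:00 rman_arc_1\n-rw-r----- 1 oracle dba 2147483648 Jun 14 02:00 rman_arc_2\n' \
  | awk '{sum+=$5} END {print sum/1024/1024/1024}'
```

One 1 GB file plus one 2 GB file gives 3, matching the per-day figures in the sample run above.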

A similar script to the earlier ones, for ASM :

select substr(FIRST_TIME,1,9), count(FIRST_TIME)
from V$ARCHIVED_LOG
group by substr(FIRST_TIME,1,9)
order by 1;