Feed aggregator

Disable fast_start failover

Michael Dinh - Wed, 2020-01-15 18:54

Data Guard Fast-Start Failover Test

If you recall, the observer was started from ol7-121-dg1, which was the standby at the time and is now the primary after the failover.

[oracle@ol7-121-dg1 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 15 21:21:28 2020

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

OL7-121-DG1:(SYS@cdb1):PHYSICAL STANDBY> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

[oracle@ol7-121-dg1 sql]$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@cdb1_stby
Password:
Connected as SYSDBA.
DGMGRL> show configuration

Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1_stby - Primary database
Warning: ORA-16819: fast-start failover observer not started

cdb1 - (*) Physical standby database
Warning: ORA-16819: fast-start failover observer not started

Fast-Start Failover: ENABLED

Configuration Status:
WARNING (status updated 16 seconds ago)

DGMGRL> start observer
Observer started

21:27:46.27 Wednesday, January 15, 2020
Initiating Fast-Start Failover to database "cdb1"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "cdb1"
21:27:51.99 Wednesday, January 15, 2020

21:34:56.44 Wednesday, January 15, 2020
Initiating reinstatement for database "cdb1_stby"...
Reinstating database "cdb1_stby", please wait...
Reinstatement of database "cdb1_stby" succeeded
21:35:15.40 Wednesday, January 15, 2020
Kill observer process from OS:
[oracle@ol7-121-dg1 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jan 16 00:32:04 2020

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

OL7-121-DG1:(SYS@cdb1):PRIMARY> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@ol7-121-dg1 sql]$

[oracle@ol7-121-dg1 sql]$ ps -ef|grep dgmgrl
oracle 10381 32397 0 00:32 pts/1 00:00:00 grep --color=auto dgmgrl
oracle 31831 30778 0 Jan15 pts/0 00:00:01 dgmgrl

[oracle@ol7-121-dg1 sql]$ kill -9 31831
Output from Observer session:
DGMGRL> start observer
Observer started

21:27:46.27 Wednesday, January 15, 2020
Initiating Fast-Start Failover to database "cdb1"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "cdb1"
21:27:51.99 Wednesday, January 15, 2020

21:34:56.44 Wednesday, January 15, 2020
Initiating reinstatement for database "cdb1_stby"...
Reinstating database "cdb1_stby", please wait...
Reinstatement of database "cdb1_stby" succeeded
21:35:15.40 Wednesday, January 15, 2020

Killed *****
[oracle@ol7-121-dg1 sql]$
Disable fast_start failover:
[oracle@ol7-121-dg1 sql]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration

Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1 - Primary database
Error: ORA-16820: fast-start failover observer is no longer observing this database

cdb1_stby - (*) Physical standby database
Error: ORA-16820: fast-start failover observer is no longer observing this database

Fast-Start Failover: ENABLED

Configuration Status:
ERROR (status updated 31 seconds ago)

DGMGRL> disable fast_start failover
Disabled.

DGMGRL> show configuration

Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1 - Primary database
cdb1_stby - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS (status updated 12 seconds ago)

DGMGRL> exit
[oracle@ol7-121-dg1 sql]$

Data Guard Fast-Start Failover Test – Shutdown Primary Host

Michael Dinh - Wed, 2020-01-15 16:37

Note: the primary database is cdb1_stby because a failover was previously performed.

This also demonstrates why it may not be a good idea to use an stby suffix for the standby database name: after a role change the name no longer matches the role.
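
A quick way to confirm the actual role, regardless of how the database is named, is to query v$database (a generic check, not part of the original output):

select db_unique_name, database_role from v$database;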

Review Data Guard using sqlplus:
OL7-121-DG2:(SYS@cdb1):PRIMARY> show parameter db%name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert                 string
db_name                              string      cdb1
db_unique_name                       string      CDB1_STBY
pdb_file_name_convert                string
OL7-121-DG2:(SYS@cdb1):PRIMARY> show parameter fal

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
fal_client                           string
fal_server                           string      cdb1
OL7-121-DG2:(SYS@cdb1):PRIMARY>

********************************************************************************

OL7-121-DG1:(SYS@cdb1):PHYSICAL STANDBY> show parameter db%name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert                 string
db_name                              string      cdb1
db_unique_name                       string      cdb1
pdb_file_name_convert                string
OL7-121-DG1:(SYS@cdb1):PHYSICAL STANDBY> show parameter fal

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
fal_client                           string
fal_server                           string      cdb1_stby
OL7-121-DG1:(SYS@cdb1):PHYSICAL STANDBY>
Review Data Guard configuration:
DGMGRL> show configuration verbose
Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1_stby - Primary database
cdb1 - Physical standby database

Properties:
FastStartFailoverThreshold = '30'
OperationTimeout = '30'
TraceLevel = 'USER'
FastStartFailoverLagLimit = '30'
CommunicationTimeout = '180'
ObserverReconnect = '0'
FastStartFailoverAutoReinstate = 'TRUE'
FastStartFailoverPmyShutdown = 'TRUE'
BystandersFollowRoleChange = 'ALL'
ObserverOverride = 'FALSE'
ExternalDestination1 = ''
ExternalDestination2 = ''
PrimaryLostWriteAction = 'CONTINUE'

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

********************************************************************************

DGMGRL> show database verbose cdb1_stby

Database - cdb1_stby

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
cdb1

Properties:
DGConnectIdentifier = 'cdb1_stby'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
RedoRoutes = ''
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyLagThreshold = '0'
TransportLagThreshold = '0'
TransportDisconnectedThreshold = '30'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = 'cdb1'
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ol7-121-dg2)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=cdb1_stby_DGMGRL)(INSTANCE_NAME=cdb1)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = '%t_%s_%r.dbf'
TopWaitEvents = '(monitor)'

Database Status:
SUCCESS

********************************************************************************

DGMGRL> show database verbose cdb1

Database - cdb1

Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Average Apply Rate: 2.00 KByte/s
Active Apply Rate: 0 Byte/s
Maximum Apply Rate: 0 Byte/s
Real Time Query: ON
Instance(s):
cdb1

Properties:
DGConnectIdentifier = 'cdb1'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
RedoRoutes = ''
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyLagThreshold = '0'
TransportLagThreshold = '0'
TransportDisconnectedThreshold = '30'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = 'cdb1_stby'
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ol7-121-dg1)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=cdb1_DGMGRL)(INSTANCE_NAME=cdb1)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = '%t_%s_%r.dbf'
TopWaitEvents = '(monitor)'

Database Status:
SUCCESS

DGMGRL>
Validate Data Guard configuration:
DGMGRL> validate database verbose cdb1_stby

Database Role: Primary database

Ready for Switchover: Yes

Capacity Information:
Database Instances Threads
cdb1_stby 1 1

Temporary Tablespace File Information:
cdb1_stby TEMP Files: 1

Flashback Database Status:
cdb1_stby: On

Data file Online Move in Progress:
cdb1_stby: No

Transport-Related Information:
Transport On: Yes

Log Files Cleared:
cdb1_stby Standby Redo Log Files: Cleared

Automatic Diagnostic Repository Errors:
Error cdb1_stby
No logging operation NO
Control file corruptions NO
System data file missing NO
System data file corrupted NO
System data file offline NO
User data file missing NO
User data file corrupted NO
User data file offline NO
Block Corruptions found NO

********************************************************************************

DGMGRL> validate database verbose cdb1

Database Role: Physical standby database
Primary Database: cdb1_stby

Ready for Switchover: Yes
Ready for Failover: Yes (Primary Running)

Capacity Information:
Database Instances Threads
cdb1_stby 1 1
cdb1 1 1

Temporary Tablespace File Information:
cdb1_stby TEMP Files: 3
cdb1 TEMP Files: 3

Flashback Database Status:
cdb1_stby: On
cdb1: On

Data file Online Move in Progress:
cdb1_stby: No
cdb1: No

Standby Apply-Related Information:
Apply State: Running
Apply Lag: 0 seconds (computed 1 second ago)
Apply Delay: 0 minutes

Transport-Related Information:
Transport On: Yes
Gap Status: No Gap
Transport Lag: 0 seconds (computed 1 second ago)
Transport Status: Success

Log Files Cleared:
cdb1_stby Standby Redo Log Files: Cleared
cdb1 Online Redo Log Files: Cleared
cdb1 Standby Redo Log Files: Available

Current Log File Groups Configuration:
Thread # Online Redo Log Groups Standby Redo Log Groups Status
(cdb1_stby) (cdb1)
1 3 4 Sufficient SRLs

Future Log File Groups Configuration:
Thread # Online Redo Log Groups Standby Redo Log Groups Status
(cdb1) (cdb1_stby)
1 3 4 Sufficient SRLs

Current Configuration Log File Sizes:
Thread # Smallest Online Redo Smallest Standby Redo
Log File Size Log File Size
(cdb1_stby) (cdb1)
1 50 MBytes 50 MBytes

Future Configuration Log File Sizes:
Thread # Smallest Online Redo Smallest Standby Redo
Log File Size Log File Size
(cdb1) (cdb1_stby)
1 50 MBytes 50 MBytes

Apply-Related Property Settings:
Property cdb1_stby Value cdb1 Value
DelayMins 0 0
ApplyParallel AUTO AUTO

Transport-Related Property Settings:
Property cdb1_stby Value cdb1 Value
LogXptMode ASYNC ASYNC
RedoRoutes
Dependency
DelayMins 0 0
Binding optional optional
MaxFailure 0 0
MaxConnections 1 1
ReopenSecs 300 300
NetTimeout 30 30
RedoCompression DISABLE DISABLE
LogShipping ON ON

Automatic Diagnostic Repository Errors:
Error cdb1_stby cdb1
No logging operation NO NO
Control file corruptions NO NO
SRL Group Unavailable NO NO
System data file missing NO NO
System data file corrupted NO NO
System data file offline NO NO
User data file missing NO NO
User data file corrupted NO NO
User data file offline NO NO
Block Corruptions found NO NO

DGMGRL>
Validate Data Guard connectivity from all hosts:
[oracle@ol7-121-dg2 sql]$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@cdb1
Password:
Connected as SYSDBA.
DGMGRL> connect sys@cdb1_stby
Password:
Connected as SYSDBA.
DGMGRL> exit
[oracle@ol7-121-dg2 sql]$

********************************************************************************

[oracle@ol7-121-dg1 sql]$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@cdb1
Password:
Connected as SYSDBA.
DGMGRL> connect sys@cdb1_stby
Password:
Connected as SYSDBA.
DGMGRL> exit
[oracle@ol7-121-dg1 sql]$
Start Data Guard observer from standby:

Note: this is not good practice in the real world and is done here for testing purposes only.

[oracle@ol7-121-dg1 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 15 21:21:28 2020

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

OL7-121-DG1:(SYS@cdb1):PHYSICAL STANDBY> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

[oracle@ol7-121-dg1 sql]$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@cdb1_stby
Password:
Connected as SYSDBA.
DGMGRL> show configuration

Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1_stby - Primary database
Warning: ORA-16819: fast-start failover observer not started

cdb1 - (*) Physical standby database
Warning: ORA-16819: fast-start failover observer not started

Fast-Start Failover: ENABLED

Configuration Status:
WARNING (status updated 16 seconds ago)

DGMGRL> start observer
Observer started
DGMGRL>
Shutdown primary host:
[oracle@ol7-121-dg2 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 15 21:26:51 2020

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

OL7-121-DG2:(SYS@cdb1):PRIMARY> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@ol7-121-dg2 sql]$ logout

[vagrant@ol7-121-dg2 ~]$ logout
Connection to 127.0.0.1 closed.

resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$ vagrant halt
==> default: Attempting graceful shutdown of VM...

resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$
Failover succeeded:
DGMGRL> start observer
Observer started

21:27:46.27 Wednesday, January 15, 2020
Initiating Fast-Start Failover to database "cdb1"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "cdb1"
21:27:51.99 Wednesday, January 15, 2020
Review Data Guard configuration:
[oracle@ol7-121-dg1 ~]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration

Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1 - Primary database
Warning: ORA-16829: fast-start failover configuration is lagging

cdb1_stby - (*) Physical standby database (disabled)
ORA-16661: the standby database needs to be reinstated

Fast-Start Failover: ENABLED

Configuration Status:
WARNING (status updated 41 seconds ago)

DGMGRL>
Start primary host:
resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$ vagrant status
Current machine states:

default poweroff (virtualbox)

The VM is powered off. To restart the VM, simply run `vagrant up`

resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$ vagrant up

resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master) 
$ vagrant status
Current machine states:

default running (virtualbox)

The VM is running. To stop this VM, you can run `vagrant halt` to
shut it down forcefully, or you can run `vagrant suspend` to simply
suspend the virtual machine. In either case, to restart it again,
simply run `vagrant up`.

resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$
Start listener:
[oracle@ol7-121-dg2 ~]$ lsnrctl start

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 15-JAN-2020 21:33:11

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Starting /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 12.1.0.2.0 - Production
System parameter file is /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/ol7-121-dg2/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ol7-121-dg2.localdomain)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ol7-121-dg2)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 15-JAN-2020 21:33:12
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/ol7-121-dg2/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ol7-121-dg2.localdomain)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "cdb1_stby_DGMGRL" has 1 instance(s).
Instance "cdb1", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
[oracle@ol7-121-dg2 ~]$
Startup mount database:
[oracle@ol7-121-dg2 sql]$ ps -ef|grep pmon
oracle 17762 17567 0 21:34 pts/0 00:00:00 grep --color=auto pmon

[oracle@ol7-121-dg2 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 15 21:34:09 2020

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SYS@cdb1> startup mount;
ORACLE instance started.

Total System Global Area 1610612736 bytes
Fixed Size 2924928 bytes
Variable Size 520097408 bytes
Database Buffers 1073741824 bytes
Redo Buffers 13848576 bytes
Database mounted.
SYS@cdb1> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@ol7-121-dg2 sql]$
Review Data Guard configuration:
[oracle@ol7-121-dg2 sql]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration
ORA-16795: the standby database needs to be re-created

Configuration details cannot be determined by DGMGRL
DGMGRL>
Review Observer:
DGMGRL> start observer
Observer started

21:27:46.27 Wednesday, January 15, 2020
Initiating Fast-Start Failover to database "cdb1"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "cdb1"
21:27:51.99 Wednesday, January 15, 2020

21:34:56.44 Wednesday, January 15, 2020
Initiating reinstatement for database "cdb1_stby"...
Reinstating database "cdb1_stby", please wait...
Reinstatement of database "cdb1_stby" succeeded
21:35:15.40 Wednesday, January 15, 2020
Review and Validate Data Guard configuration:
DGMGRL> show configuration

Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1 - Primary database
cdb1_stby - (*) Physical standby database
Warning: ORA-16829: fast-start failover configuration is lagging

Fast-Start Failover: ENABLED

Configuration Status:
WARNING (status updated 54 seconds ago)

DGMGRL> show configuration

Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1 - Primary database
cdb1_stby - (*) Physical standby database

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS (status updated 24 seconds ago)

DGMGRL> validate database cdb1

Database Role: Primary database

Ready for Switchover: Yes

DGMGRL> validate database cdb1_stby

Database Role: Physical standby database
Primary Database: cdb1

Ready for Switchover: Yes
Ready for Failover: Yes (Primary Running)

DGMGRL>

Installing Oracle GoldenGate 19c Microservices – Binaries Only

DBASolved - Wed, 2020-01-15 12:43

It has taken me some time to get around to doing more videos; mostly because I don’t like the way that I sound when I hear my voice. Maybe next time, I’ll just do a silent video..lol. Moving forward, I’ll make more of an effort to produce more videos that explain a few of […]

The post Installing Oracle GoldenGate 19c Microservices – Binaries Only appeared first on DBASolved.

Categories: DBA Blogs

Oracle Helps Customers Easily Move to Autonomous Cloud

Oracle Press Releases - Wed, 2020-01-15 07:00
Press Release
Oracle Helps Customers Easily Move to Autonomous Cloud
New Oracle Enterprise Manager automates database migration and simplifies complex hybrid cloud environments

Redwood City, Calif.—Jan 15, 2020

To help customers easily move Oracle Databases to the cloud and simplify management of hybrid cloud environments, Oracle announced significant enhancements to its enterprise management platform, Oracle Enterprise Manager. The new release adds functionality that automates database migration and provides a single dashboard that improves visibility, control, and management for hybrid IT environments. 

Oracle Enterprise Manager’s migration capabilities provide unprecedented flexibility and ease of use to accelerate and simplify the transition to the cloud. Since most large organizations have to move multiple databases to the cloud over an extended period of time, it’s critical to have a cloud migration solution that eliminates the timing and pricing pressures typical with other vendors’ rigid migration solutions.

“As organizations move to the cloud, they are faced with complex, time-consuming, manual, error-prone migration tasks,” said Wim Coekaerts, senior vice president, Software Development, Oracle. “Only Oracle provides Autonomous Cloud services, as well as the tools and migration services to help customers easily move to the cloud. Oracle Enterprise Manager removes the complexity with highly automated, guided migrations and provides a single dashboard for easily managing hybrid cloud environments.”

“IDC’s research shows that well over 90 percent of major enterprises rely on a mix of on-premises IT, dedicated cloud environments and public cloud services, and are seeking efficient ways to onboard, monitor, and manage across these hybrid environments,” said Mary Johnston Turner, IDC Research vice president for Cloud Management. “Enterprise cloud management leaders tell us that analytics is their number one priority, since the scale and complexity of hybrid and multi-cloud operations requires robust automation that is informed by deep performance and optimization intelligence.”

Built for Hybrid Environments

Oracle Enterprise Manager provides enhancements in three key areas to help enterprises more easily manage hybrid database environments, including:

  • Intelligent Analytics: New intelligent analytics provided by the Exadata Warehouse enable users to maximize performance and utilization of Oracle Database and Exadata environments on-premises or in the cloud via improved capacity planning and forecasting. Additionally, the new version improves visibility of the entire hybrid estate through comprehensive monitoring and management for Oracle’s latest technology, including Autonomous Database and Exadata Cloud Service.

  • Comprehensive Lifecycle Automation and Control: Advancements in lifecycle automation and control enable enterprises to easily adopt Autonomous Database and Exadata Cloud Service and improve their security posture.

  • Mobility and Security: New comprehensive security controls include fleet maintenance support for Transparent Data Encryption, improved compliance monitoring, fine-grained control of on-premises fleets, and new security standards for Oracle Database 18c and 19c.  The new functionality also provides access to a new mobile app and new Grafana plug-in for rich visualization of Oracle Enterprise Manager data.

Additionally, Oracle is expanding deployment and access choices for Oracle Enterprise Manager. DBAs can now deploy Oracle Enterprise Manager on Oracle Cloud Infrastructure using Oracle best-practices for high availability, capitalizing on Oracle Enterprise Manager features while enjoying the benefits of a cloud deployment.

In addition, Oracle announced today that Oracle Enterprise Manager has been certified by the Center for Internet Security Benchmarks to compare the configuration status of Oracle Databases against the consensus-based best practice standards contained in the Oracle 12c Benchmark v2.1.0, Level 1- RDBMS. Organizations that use Oracle Enterprise Manager can now ensure that the configurations of their critical assets align with the CIS Benchmarks consensus-based practice standards.

Organizations Benefit from Highly Automated Capabilities, Easy Migration to Cloud

“We depend on Oracle Enterprise Manager to optimize our Oracle Database and Exadata fleet, which provides a mission-critical shared service for all of our most important business functions,” said Jones John, Database Services Manager, Technology and Innovation Division at Link Group. “The latest release of Oracle Enterprise Manager allows us to adopt the newest Exadata X8 environments without delay, and to continue to use Oracle Enterprise Manager’s comprehensive management automation capabilities across our entire hybrid database fleet.”

“Our key public sector and commercial customers and our own experts use Oracle Enterprise Manager every day to manage their Oracle Database and Exadata fleet,” said Erik Benner, Vice President, Transformation at Mythics, an Oracle Cloud partner. “The new Oracle Enterprise Manager functionality to ease migration to Autonomous Database and to apply machine learning analytics to their Oracle Enterprise Manager data is precisely what is needed to help ensure they can continue to operate seamlessly across their entire Database fleet.”

Contact Info
Victoria Brown
Oracle
+1.650.850.2009
victoria.brown@oracle.com
Sara Zick
The Hatch Agency for Oracle
+1.646.209.8726
szick@thehatchagency.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Victoria Brown

  • +1.650.850.2009

Sara Zick

  • +1.646.209.8726

Javascript in ANT

Darwin IT - Wed, 2020-01-15 04:37
Earlier I wrote about an ANT script to scan JCA adapter files in your project home, Subversion working copy, or GitHub local repo.

In my current project we use sensors to kick off message-archiving processes without cluttering the BPEL process. I'm not sure I would do it that way on a new project, but technically the idea is interesting. Unfortunately, we did not build a registry of which BPEL processes make use of it and how. So I thought about how I could easily scan for that, and found that, based on the script to scan JCA files, I could easily scan all the BPEL sensor files. Once you have found the project folders, as in the JCA scan script, you can search for the *_sensor.xml files.

So in a few hours I had a basic script. Now, in a second iteration, I would like to know what sensorActions the sensors trigger. For that I need to interpret the accompanying *_sensorAction.xml file. Therefore, based on the sensor filename found, I need to determine the name of the sensor action file.

The first step is to figure out how to do a substring in ANT. A quick Google search for "ant property substring" turned up a nice Stack Overflow thread with an example of an ANT script definition based on JavaScript:
  <scriptdef name="substring" language="javascript">
<attribute name="text"/>
<attribute name="start"/>
<attribute name="end"/>
<attribute name="property"/>
<![CDATA[
var text = attributes.get("text");
var start = attributes.get("start");
var end = attributes.get("end") || text.length();
project.setProperty(attributes.get("property"), text.substring(start, end));
]]>
</scriptdef>

And that can be called like:
    <substring text="${sensor.file.name}" start="0" end="20" property="sensorAction.file.name"/>
    <echo message="Sensor Action file: ${sensorAction.file.name}"></echo>

The JavaScript substring() function is zero-based, so the first character is indexed by 0.
Not every sensor file name has the same length: each file is named after the BPEL file that it is tied to. So, to get the base name – the part without the "_sensor.xml" postfix – we need to determine the length of the filename. A script that determines that can easily be extracted from the script above:
  <scriptdef name="getlength" language="javascript">
<attribute name="text"/>
<attribute name="property"/>
<![CDATA[
var text = attributes.get("text");
var length = text.length();
project.setProperty(attributes.get("property"), length);
]]>
</scriptdef>

Perfect! Using this I could create the logic in ANT to determine the sensorAction file name. However, I thought it would be easier to determine the filename in JavaScript all the way, using the strength of the proper language at hand:
  <!-- Script to get the sensorAction filename based on the sensor filename.
       1. Cut the extension "_sensor.xml" from the filename.
       2. Add "_sensorAction.xml" to the base filename.
  -->
  <scriptdef name="getsensoractionfilename" language="javascript">
    <attribute name="sensorfilename"/>
    <attribute name="property"/>
    <![CDATA[
      var sensorFilename = attributes.get("sensorfilename");
      var sensorFilenameLength = sensorFilename.length();
      var postfixLength = "_sensor.xml".length();
      var sensorFilenameBaseLength = sensorFilenameLength - postfixLength;
      var sensorActionFilename = sensorFilename.substring(0, sensorFilenameBaseLength) + "_sensorAction.xml";
      project.setProperty(attributes.get("property"), sensorActionFilename);
    ]]>
  </scriptdef>
And then I can get the sensorAction filename as follows:
    <getsensoractionfilename sensorfilename="${sensor.file.name}" property="sensorAction.file.name"/>
    <echo message="Sensor Action file: ${sensorAction.file.name}"></echo>

Superb! I already found ANT a powerful language/tool, but with a few simple JavaScript snippets you can extend it easily.
Notice, by the way, the use of XSLT in the Scan JCA adapters files article. You can read XML files as properties, but to do that conveniently you need to transform a file like sensors.xml in a way that lets you easily reference the properties following the element hierarchy. This is also explained in the Scan JCA adapters files article.
I'll go further with my sensors scan script. Maybe I'll write about it when done.

@DATE, @DATENOW … Date functions in GoldenGate

DBASolved - Tue, 2020-01-14 11:59

Dates are always fun to play with when it comes to the Oracle Database, much less any other relational database. Dates are used for many things in a wide range of applications and schemas. You have birthdays, ship dates, order dates, registration dates, etc…. You get the picture. In the Oracle Database you can […]

The post @DATE, @DATENOW … Date functions in GoldenGate appeared first on DBASolved.

Categories: DBA Blogs

Largest Russian Footwear Retailer Kicks Manual Assortment Planning to the Curb with Oracle

Oracle Press Releases - Tue, 2020-01-14 11:00
Press Release
Largest Russian Footwear Retailer Kicks Manual Assortment Planning to the Curb with Oracle
ZENDEN Group plans to increase business margin by 3.5 percent while reducing cross-store replenishment by 50 percent

NRF 2020 Retail’s Big Show, New York City—Jan 14, 2020

Popular Russian footwear chain ZENDEN Group deployed Oracle Retail to drive inventory productivity and elevate the customer experience with better inventory placement. Today, ZENDEN leads the Russian footwear market on profit per square meter, engaging about 3 million loyalty cardholders, with more than 74.4 million people visiting its stores. In the last five years, ZENDEN has sold 42 million pairs of shoes. With Oracle Retail, ZENDEN Group shifted from a manual planning process to adopt automated demand-driven assortment planning. The new process helps to ensure the right shoes are always in stock for its customers.

“With the rapid growth of the brand, we required a modern solution that could optimize the process of forecasting and planning assortment. For us, those were the most important factors which have influenced our decision. The new Oracle Retail solution provides the common environment for all participants of the process and also allows us to make the top-down and bottom-up plans,” said Vitaly Fomin, project director, ZENDEN Group. “Due to high accuracy and the new forecasting tool which uses machine learning, we can more effectively meet consumer demand in our stores by providing better assortment availability. In the long term, this helps to reduce the volume of clearances and to increase the overall margin of the business.”

“Before the implementation, our team was responsible for shipments into over 400 stores that we managed through a highly manual and time-consuming process with using big and unsynchronized spreadsheets. Now we have a universal solution with a single user interface and seamless automation of our best practices, which gives us the capacity to think more strategically,” said Victoria Rakhimzhanova, planning department director, ZENDEN Group.

With Oracle Retail, the teams have standardized 90 percent of the planning process in one central system. ZENDEN Group implemented the assortment planning solution Oracle Retail Assortment Planning; the demand forecasting solution Oracle Retail Demand Forecasting; and Veltio’s new Product Forecasting on the Oracle Retail Predictive Application Server platform. Veltio, a Gold level member of Oracle PartnerNetwork (OPN), provided implementation services on the project.

“Our recent global consumer research showed that it’s no longer good enough for retailers to simply sell products. Retailers need to get better at acquiring the right customers and placing assortment and inventory where the customer expects to engage with the brand,” said Mike Webster, senior vice president and general manager, Oracle Retail. “By leveraging the machine learning embedded in our Planning and Optimization solutions, ZENDEN gains a single view of the assortment and can now orchestrate the supply chain to continue to drive margin and reduce costs.”

Contact Info
Kaitlin Ambrogio
Oracle Retail PR
+1.781.434.9555
kaitlin.ambrogio@oracle.com
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility, and refine the customer experience. For more information, visit our website www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

About Zenden

Zenden Group is a diversified, multidisciplinary Russian holding company on a federal scale with 22 years of history. The company manages retail chains, modern shopping centers, foreign representative offices and a number of social projects. Due to the active use of the world’s best practices and the latest technologies in footwear production, and its highly qualified management and staff, the holding annually shows high production and financial results, and also strives to become the most dynamic, efficient and innovative company in Russian shoe retail.

About Veltio

Veltio is one of the world leaders in the field of analytics and optimization of retail processes, solving complex business problems using machine learning systems, data mining and decision support based on Oracle Retail Planning and Optimization solutions. For more information, visit our website http://www.veltio.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kaitlin Ambrogio

  • +1.781.434.9555

Who are Dynamic Search Ads most useful for?

VitalSoftTech - Tue, 2020-01-14 09:53

Here is a question for you. Who do you think from among the given options has the most use of Dynamic Search Ads? A website is positioning its ads strategically in the way which their audiences’ eyes move. An international clothing retailer that operates online with dynamically changing inventory every season. A local eatery with […]

The post Who are Dynamic Search Ads most useful for? appeared first on VitalSoftTech.

Categories: DBA Blogs

Drop Column bug

Jonathan Lewis - Tue, 2020-01-14 07:22

When I was a child I could get lost for hours in an encyclopedia because I’d be looking for one topic, and something in it would make me want to read another, and another, and …

The same thing happens with MOS (My Oracle Support) – I search for something and the search result throws up a completely irrelevant item that looks much more interesting, so I follow a hyperlink, which mentions a couple of other notes, and a couple of hours later I can’t remember what I had started looking for.

Today’s note is a side effect of that process. A comment made yesterday about count(*)/count(1) referenced Oracle bug “19450314: UNNECESSARY INVALIDATIONS IN 12C”, and when I searched MOS for more information on this bug I discovered bug 30404639 : TRIGGER DOES NOT WORK CORRECTLY AFTER ALTER TABLE DROP UNUSED COLUMN. The impact of this bug is easy to demonstrate, and the ramifications are as follows:

Exercise extreme care with the “alter table drop column” command in 18c and above.

The problem is easy to work around, but the impact of not knowing about it could be catastrophic if your pre-production testing wasn’t quite good enough. Here’s a little demonstration script – the bug note says the problem appeared in 18.3 but I ran this test against 19.3. The script is a modified version of the SQL in the bug note:


create table t1 (c0 varchar2(30), c1 varchar2(30), c2 varchar2(30), c3 varchar2(30), c4 varchar2(30));
create table t2 (c_log varchar2(30));

create or replace trigger t1_ariu
after insert or update on t1
for each row
begin
        IF :new.c3 is not null then
                insert into t2 values (:new.c3);
        end if;
end;
/

spool drop_col_bug_18c.lst

insert into t1(c3) values ('Inserting c3 - should log'); 
select * from t2;
 
insert into t1(c4) values ('Inserting c4 - should not log'); 
select * from t2;

prompt  ===================================
prompt  Drop some columns in two steps then
prompt  truncate t2 and repeat the test
prompt  ===================================
 
alter table t1 set unused (c1, c2);
alter table t1 drop unused columns;

truncate table t2;

insert into t1(c3) values ('Inserting c3 - should log'); 
select * from t2;
 
insert into t1(c4) values ('Inserting c4 - should not log'); 
select * from t2;
 

The code is very simple. It creates a couple of tables and an "after row" trigger on one of them to copy one column value across to the other table on an insert or update, provided the new column value is not null.

To check that the trigger is (at least in part) behaving, the code does two inserts – one which should copy a value and one which should not – and we see that the copy takes place as expected.

Now comes the critical part. We mark two of the columns in the table as unused, then drop all unused columns, truncate the second table and repeat the inserts.

If you run the test on 12.2.0.1 then you should find that the second run behaves just like the first run. If you’re running 18c or 19c be prepared for the following:


insert into t1(c3) values ('Inserting c3 - should log')
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [insChkBuffering_1], [4], [4], [], [], [], [], [], [], [], [], []

no rows selected

insert into t1(c4) values ('Inserting c4 - should not log')
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [insChkBuffering_1], [4], [4], [], [], [], [], [], [], [], [], []

This is not good – but it gets worse. If your application starts consistently breaking with an ORA-00600 error that’s going to annoy a lot of users for (we hope) a brief interval, but if your application keeps running and corrupting your data that’s a much bigger problem. Re-run the whole script (dropping the two tables first) but change it to mark just one of the two columns as unused, and you’ll get results for the second pass that look like this:


Table truncated.


1 row created.


no rows selected


1 row created.


C_LOG
------------------------------
Inserting c4 - should not log

1 row selected.

The trigger seems to “lose count” of the columns in the table (presumably it’s compiled to refer to something like “column_position = 3” and doesn’t adjust on the “drop column” – the linked bug notes on MOS refer to the problem being associated with the project to increase fine-grained dependencies) so it manages to survive with one column dropped because there’s still a “column 3” which happens now to be the column that used to be “column 4”.

Workaround

There is a simple workaround if you run into this problem after modifying your production system (and before you've corrupted a huge amount of data): recompile the trigger manually immediately after the drop completes with "alter trigger t1_ariu compile".
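
In script form, the workaround sequence (a sketch based on the demonstration script above, using the same table and trigger names) is simply:

alter table t1 set unused (c1, c2);
alter table t1 drop unused columns;

-- recompile the dependent trigger immediately, before any new DML arrives
alter trigger t1_ariu compile;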

Refinement

The problem seems to appear only if the following two conditions are true:

  • you use a two-step approach to dropping a column, viz: set unused then drop. If you simply issue “alter table t1 drop column c1” (with or without a “checkpoint NNN”) then the problem does not appear. It’s a great shame that in the past I’ve given advice that setting columns unused and dropping them later is a better option than doing an immediate drop.
  • you drop columns that appear earlier in the table than the highest-position column mentioned in the trigger. But this isn't something you should gamble on, particularly since the workaround is so easy to implement: the order the columns appear in the table declaration isn't necessarily the internal column ordering, so you might get it wrong (not that I've tried to test for that threat) – and what about selective materialized view logs, where you don't explicitly create the triggers and might forget to cater for them?

I don’t expect anyone to be dropping columns in production systems with any great frequency, and you would expect such a significant operation to be tested quite carefully, but it’s easy to envisage a scenario where the testing might be split into two pieces viz:

  1. test the application on a pre-prod version of the database where a table has been created as a subset of the production data without the column that’s due to be dropped
  2. test how long it takes to execute the actual drop on a (minimal) recovered backup of production, but don’t test the new production code on the resulting table.

Sometimes it’s easy to overlook things that “obviously” won’t need testing, especially when it’s something that has always worked in the past with no special treatment required.

Footnote

If you try running this model on LiveSQL you’ll find that the code stops and the web page reports “Error: Internal Server Error” so you can’t tell that the problem is exactly the same there – but it seems quite likely that it is.

Given how easy it is to bypass the problem I haven't bothered to do any further research on the issue – is it only related to insert and update triggers, do they have to be after-row triggers for the update, and what about before-row delete triggers (with materialized view logs in mind)?

 

A Refresh-able URL

Jim Marion - Mon, 2020-01-13 22:41

I find development to be an iterative process. I often create a shell of a program (module, component, page, etc.), just enough to test, and then run my first test. I then iteratively enhance the program (module, component, page, etc.), verifying and validating each change. For online PeopleSoft pages and components, this often means pressing the browser refresh button a LOT!

But here is the problem: each time I refresh a PeopleSoft component, PeopleSoft reverts to the search page. After selecting a search value, I'm then directed to the first page in the component. What if I'm testing a different page? Is it possible to craft a URL to bypass component search and open a different page within the component? Yes! We can bypass component search by placing level 0 search key field names and values in the URL. Likewise, we can specify the starting page by placing the Page=... parameter in our URL.

But if you don't know the component's search keys and the target page's name, do you have to find them in Application Designer? Fortunately, no. Every PeopleSoft page contains a JavaScript variable named strCurrUrl. This variable contains the current full URL, including search keys and Page parameter. Here is an example of a URL we might use to open the Job Earnings Distribution page of the JOB_DATA component for an employee with the ID KU0001:

http://hr92u030.example.com:8000/psp/ps/EMPLOYEE/HRMS/c/ADMINISTER_WORKFORCE_(GBL).JOB_DATA.GBL?EMPLID=KU0001&EMPL_RCD=0&PAGE=JOB_DATA_ERNDIST

For the last several years, we have been showing customers a simple JavaScript console trick to move the strCurrUrl JavaScript variable into the browser's address bar, allowing us to bypass search and navigation when testing PeopleSoft pages. The real trick to this, though, is that Fluid and Classic behave a little differently, requiring different JavaScript. So today, I want to share a simple JavaScript fragment that you may use across Classic and Fluid. This small snippet is "smart" enough to apply the correct rule on Fluid or Classic. It also has a little logic to detect Classic content in Fluid Activity Guides and Fluid WorkCenters. Please note that using this script with a Fluid Activity Guide/WorkCenter will open Classic content outside of the WorkCenter/Activity Guide frame, but then for testing purposes, that is often desirable.

OK... so... wait, it gets better. Why copy and paste that script into the JavaScript console every time you want to invoke it? Why not just create a browser bookmark or favorite to accomplish this task (also known as a bookmarklet)? Here is the same script in bookmark form. Simply drag this hyperlink into your browser's bookmarks bar and you have a bookmark you may use on almost any PeopleSoft component to get a refresh-able URL.

PS Refresh-able

Are you interested in more tips like this? Jim Marion shares tons of them in his training classes. Register today for one of our upcoming classes!

Oracle Delivers Modern Retail in the Cloud

Oracle Press Releases - Mon, 2020-01-13 11:00
Press Release
Oracle Delivers Modern Retail in the Cloud
Powered by Oracle Cloud Infrastructure, Oracle Retail provides customers the ability to innovate faster and more securely

NRF 2020 Retail’s Big Show, New York City—Jan 13, 2020

An increasingly fierce retail market requires new approaches to engaging customers and satisfying their demands with efficiency and elegance. Retailers need the transparency and flexibility to shift with the needs of their customers and business—whether those interactions are happening online, in-store or in the spaces in-between, such as buying online and picking up in-store (BOPIS).

Everything from inventory to experiences must live in harmony and that requires brands to innovate faster, have intelligence at the tips of their fingers, and use data science to put customers at the center of decision making. Yesterday’s cloud infrastructure can’t meet the demands of today’s customers. Providing retailers the modern platform they need to deliver on their brand promises, Oracle’s core retail operations portfolio is now powered by Oracle's Gen 2 cloud.

Oracle Cloud Infrastructure is a purpose-built, best-in-class platform for running the most important enterprise applications with unmatched performance, security, and cost. Only Oracle’s Gen 2 Cloud runs Oracle Autonomous Database, the industry's first and only self-driving database. Oracle Cloud provides a comprehensive cloud computing portfolio, from application development and business analytics to data management, integration, security, artificial intelligence (AI), and blockchain.

“The retail enterprise is becoming increasingly complex with a slim margin for error when it comes to maintaining customer loyalty,” said Jeff Warren, vice president, Oracle Retail. “It’s more critical than ever that retailers can operate with agility and next-level intelligence on everything from planning to inventory management to customer acquisition. This can only be achieved with a next-level cloud platform, like Oracle Cloud Infrastructure.”

Customers are already seeing the advantages. For example, Gap Inc. partnered with Oracle to deploy Oracle Retail Merchandising Cloud Service and Oracle Retail Integration Cloud Service, powered by Oracle Cloud Infrastructure, to drive operational agility and furnish the Banana Republic business teams with better intelligence.

Oracle Retail applications leverage Oracle Cloud’s highly fault-tolerant regions to protect customers from unexpected outages. Oracle’s cloud applications can tolerate network, power, or hardware outages and continue to provide service without missing a beat. Additionally, the applications are architected with redundancy at all levels. If a particular component fails, another is there to take its place without disruption to the services. Oracle’s focus on continuous service versus eventual recovery help reduce the risk and magnitude of lost business and customers.

Contact Info
Kaitlin Ambrogio
Oracle Retail PR
+1.781.434.9555
kaitlin.ambrogio@oracle.com
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility, and refine the customer experience. For more information, visit our website www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kaitlin Ambrogio

  • +1.781.434.9555

Collections

Jonathan Lewis - Mon, 2020-01-13 08:31

This is a note I drafted in September 2015 and only rediscovered a couple of days ago while searching for something I was sure I’d written about collections and/or table functions. The intention of collections and table functions is that they should behave like tables when you use them in a query – but there are cases where a real table and something cast to a table() aren’t treated the same way by the optimizer – and this 4-year old note (which is still valid in 2020 for 19c) is one of those cases.

 

There was a question – with test case – on Oracle-L recently [ed: now more than 4 years ago] about the behaviour of a query that changed plans as you switched from using a global temporary table to a collection – why was Oracle doing something inefficient with the collection. The answer was: “Bad luck, it’s a limitation in the optimizer”.  (Sub-text: collections are a pain).

The test case was short and simple so I thought I'd post it – with an h/t to Patrick Jolliffe, who presented the problem, and to Timur Akhmadeev and Stefan Koehler, who explained the problems.

Here’s the script (with a little cosmetic editing) to create the necessary objects and data:

rem
rem     Script:         collections.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Sep 2015
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem             12.1.0.2
rem

create or replace type number_table is table of number;
/

create table test_objects as select * from all_objects;
create /* unique */ index test_objects_idx on test_objects(object_id);

exec dbms_stats.gather_table_stats(null, 'test_objects');

create global temporary table gtt_test_objects (object_id number);
insert into gtt_test_objects values (1);


In this example I've created a type which is a simple table of number. In a more general case you might create a simple object type, and then a type that is a table of that object type; you might then create a function that returns a variable of that table type, or a function declared to return the table type "pipelined" which uses the "pipe row" instruction to return one value of the simple object type at a time. Whichever variation you used, you could then use the table() operator to tell Oracle to treat the content of the table type as if it were a relational table. (In recent versions of Oracle the table() operator is redundant.)
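
For illustration only (this is not part of the original test case), a minimal sketch of the pipelined variation, re-using the number_table type created above, might look like this:

create or replace function first_n_numbers(p_limit in number)
return number_table pipelined
is
begin
        -- pipe back the numbers 1 .. p_limit, one row at a time
        for i in 1 .. p_limit loop
                pipe row (i);
        end loop;
        return;
end;
/

select column_value from table(first_n_numbers(5));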

Here’s the first query, which uses the global temporary table in an “IN” subquery, followed by its execution plan – again with a little cosmetic editing and the addition of query block names across the board:


prompt  ==================================
prompt  Query using global temporary table
prompt  ==================================

select  
        /*+ qb_name(main) */ 
        null 
from    (
        select
                /*+ qb_name(inline) */
                distinct object_id 
        from    test_objects
        ) 
where   object_id in (
                select 
                        /*+
                                qb_name(subq)
                                cardinality(gtt_test_objects 1) 
                        */ 
                        gtt_test_objects.object_id
                from
                        gtt_test_objects 
        )
;


-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name             | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                  |      1 |        |      0 |00:00:00.01 |       5 |       |       |          |
|   1 |  VIEW                 | VM_NWVW_1        |      1 |      1 |      0 |00:00:00.01 |       5 |       |       |          |
|   2 |   SORT UNIQUE NOSORT  |                  |      1 |      1 |      0 |00:00:00.01 |       5 |       |       |          |
|   3 |    NESTED LOOPS       |                  |      1 |      1 |      0 |00:00:00.01 |       5 |       |       |          |
|   4 |     SORT UNIQUE       |                  |      1 |      1 |      1 |00:00:00.01 |       3 |  2048 |  2048 | 2048  (0)|
|   5 |      TABLE ACCESS FULL| GTT_TEST_OBJECTS |      1 |      1 |      1 |00:00:00.01 |       3 |       |       |          |
|*  6 |     INDEX RANGE SCAN  | TEST_OBJECTS_IDX |      1 |      1 |      0 |00:00:00.01 |       2 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access("OBJECT_ID"="GTT_TEST_OBJECTS"."OBJECT_ID")

As you can see I’ve set statistics_level to all, and used dbms_xplan.display_cursor() to pull the actual execution plan from memory. This plan tells us that the optimizer unnested the IN subquery to generate a unique set of values and used that unique set to drive a nested loop join into the test_objects table (with an index-only probe). Moreover, before this step, the optimizer used complex view merging and cost-based query transformation to postpone the “distinct” of the original query so that the join could take place before the distinct. The E-rows at operation 5 also tells us that the optimizer “knew” that there was only one row in the GTT – it took note of my cardinality() hint.
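
For anyone who wants to reproduce the plans, the general mechanism (a generic sketch, not necessarily the exact calls I used for this article) is:

alter session set statistics_level = all;

-- run the query, then pull the actual plan of the last statement from memory
select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));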

Now we replace the gtt_test_objects table with the collection – casting it to a table() and giving Oracle the same cardinality() hint – as follows:


select 
        /*+ 
                qb_name(main)
--              no_use_hash_aggregation(@sel$1)
        */ 
        null
from    (
        select  
                /*+ inline */
                distinct object_id 
        from    test_objects
        )
where   object_id in (
                select 
                        /*+ 
                                qb_name(subq)
                                cardinality(gtt_test_objects 1) 
                        */ 
                        column_value object_id
                from
                        table(number_table(1)) gtt_test_objects
        )
;

-------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name             | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                  |      1 |        |      0 |00:00:00.08 |     132 |       |       |          |
|   1 |  MERGE JOIN SEMI                        |                  |      1 |      1 |      0 |00:00:00.08 |     132 |       |       |          |
|   2 |   SORT JOIN                             |                  |      1 |  56762 |      1 |00:00:00.08 |     132 |  1470K|   606K| 1306K (0)|
|   3 |    VIEW                                 |                  |      1 |  56762 |  56762 |00:00:00.03 |     132 |       |       |          |
|   4 |     HASH UNIQUE                         |                  |      1 |  56762 |  56762 |00:00:00.03 |     132 |  4122K|  2749K| 3418K (0)|
|   5 |      INDEX FAST FULL SCAN               | TEST_OBJECTS_IDX |      1 |  56762 |  56762 |00:00:00.01 |     132 |       |       |          |
|*  6 |   SORT UNIQUE                           |                  |      1 |      1 |      0 |00:00:00.01 |       0 |  2048 |  2048 | 2048  (0)|
|   7 |    COLLECTION ITERATOR CONSTRUCTOR FETCH|                  |      1 |      1 |      1 |00:00:00.01 |       0 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access("OBJECT_ID"=VALUE(KOKBF$))
       filter("OBJECT_ID"=VALUE(KOKBF$))

The second plan is completely different. The optimizer has unnested the subquery to produce a join, but instead of using the unique set of values that it generated from the collection to drive a nested loop it’s decided to do a merge semi-join, which has entailed an expensive fast full scan of the test_objects_idx index to acquire all the key values first.

I tried to make the optimizer use the collection to drive a nested loop, adding some carefully targeted hints to force the join order and dictate a nested loop join with pushed predicate: but the optimizer wouldn’t push the “obvious” join predicate and continued to do an index fast full scan and sort of the test_objects_idx. If you’re interested here are the hints and the resulting plan:

/*+
        qb_name(main)
        leading( @sel$8969f1c9 kokbf$0@sel$2 "from$_subquery$_001"@main)
        use_nl( @sel$8969f1c9 "from$_subquery$_001"@main)
        push_pred(@sel$8969f1c9 "from$_subquery$_001"@main)
*/

-------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name             | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                  |      1 |        |      0 |00:00:00.03 |     132 |       |       |          |
|   1 |  NESTED LOOPS                           |                  |      1 |      1 |      0 |00:00:00.03 |     132 |       |       |          |
|   2 |   SORT UNIQUE                           |                  |      1 |      1 |      1 |00:00:00.01 |       0 |  2048 |  2048 | 2048  (0)|
|   3 |    COLLECTION ITERATOR CONSTRUCTOR FETCH|                  |      1 |      1 |      1 |00:00:00.01 |       0 |       |       |          |
|*  4 |   VIEW                                  |                  |      1 |      1 |      0 |00:00:00.03 |     132 |       |       |          |
|   5 |    SORT UNIQUE                          |                  |      1 |  56762 |  56762 |00:00:00.03 |     132 |  2604K|   728K| 2314K (0)|
|   6 |     INDEX FAST FULL SCAN                | TEST_OBJECTS_IDX |      1 |  56762 |  56762 |00:00:00.01 |     132 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("OBJECT_ID"=VALUE(KOKBF$))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 6 (U - Unused (1))
---------------------------------------------------------------------------
0 - SEL$102722C0
- qb_name(subq)

1 - SEL$8969F1C9
- leading( @sel$8969f1c9 kokbf$0@sel$2 "from$_subquery$_001"@main)
- qb_name(main)

1 - SEL$8969F1C9 / from$_subquery$_001@MAIN
U - push_pred(@sel$8969f1c9 "from$_subquery$_001"@main)
- use_nl( @sel$8969f1c9 "from$_subquery$_001"@main)

5 - SEL$1
- inline

In the previous post we had a “NOT IN” subquery against a collection/pipelined table function that couldn’t even be unnested (even in 19c); in this example we have an IN subquery that does unnest but then can’t drive a nested loop efficiently because the optimizer won’t push the collection values into the distinct view, and won’t do complex view merging to avoid having to do that join predicate pushdown. Collections and table() functions just don’t play nicely with the optimizer!

In fact this plan also shows one of those “generic” approaches in the optimizer that allows a human operator to see a special case that could have been further optimized: if the optimizer had used a sort unique rather than a hash unique at operation 4 then the sort join at operation 2 would have been redundant – with an overall reduction in memory and CPU usage that I managed to get in a separate test by adding the hint /*+ no_use_hash_aggregation(@sel$1) */ to the query. (Since operation 6 is also a sort unique the merge join semi could, in principle, have become a merge join with no risk of producing duplicates – but the semi-join code path is probably a little more efficient, anyway, and a balance has to be struck between the risk of introducing complexity for a special case and the potential frequency and scale of the benefit it might produce.)

Conclusion

You can often see collections and table functions behaving very like tables when you use them in the from clause of queries – but there are some restrictions on the transformations that the optimizer can use when your query isn’t using “real” tables.

Footnote

There are many ways that you can play around with this starting model to investigate where the boundaries might be. For example, if I make the index on test_objects unique the plan changes to a simple nested loop driven by the unnested collection (there’s no longer a non-mergeable view in the way). If I eliminate the distinct from the original query the same thing happens (for the same reason). If I force the join order to start with the collection (using the leading() hint) but don’t hint a nested loop Oracle produces (at least in my case) a hash join with a Bloom filter that minimised the memory and CPU requirement.
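
If you want to try the first of those variations the change to the setup is trivial – a sketch of the obvious edit, assuming object_id really is unique in your copy of all_objects:

drop index test_objects_idx;
create unique index test_objects_idx on test_objects(object_id);

-- re-run the collection query: with the unique index in place the inline view
-- is no longer a problem and the unnested collection can drive a nested loop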

I mentioned at the start that Timur Akhmadeev and Stefan Koehler supplied explanations for what was going on behind the scenes. Critically Stefan also referenced one of two posts from the Oracle blog on complex view merging and its restrictions: part 1, part 2.

The related problem that led me to re-discover and complete this note is at this URL (published a couple of days ago).

12.2.0.1 And Later Support (Limited) Extended Stats On Virtual columns / Column Groups of Expressions

Randolf Geist - Mon, 2020-01-13 05:14
I do have a demo as part of my optimizer-related workshops that shows the restriction / limitation of DBMS_STATS not supporting extended statistics on virtual columns / column groups of expressions - for example the combination of both expressions and column groups, like ((TRUNC(COL1)), (TRUNC(COL2))).

Surprisingly, when following a certain sequence of operations, this starts working (to some degree) from 12.2.0.1 on.

The official documentation up to and including 19c still mentions this as a restriction, and since it doesn't work when explicitly referencing virtual columns (see the test case for what exactly I mean by this) I assume this is more like a side effect / unintended feature.

Nevertheless, the optimizer happily picks up this additional information and comes up with improved estimates when having a combination of skew and correlation on expressions, for example.

The following test case shows the change in behaviour from 12.2.0.1 on:

set echo on linesize 200 trimspool on trimout on tab off pagesize 999 timing on

alter session set nls_language = american;

drop table t1;

purge table t1;

-- Initialize the random generator for "reproducible" pseudo-randomness
exec dbms_random.seed(0)

-- ATTR1 and ATTR2 are both skewed and correlated
create table t1
as
select
rownum as id
, trunc(dbms_random.value(1, 1000000000000)) as fk
, case when rownum <= 900000 then 1 else ceil(rownum / 10) end as attr1
, case when rownum <= 900000 then 1 else ceil(rownum / 10) end as attr2
, rpad('x', 100) as filler
from
dual
connect by
level <= 1000000
;

-- Histograms on ATTR1 and ATTR2 for representing skew properly
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2')

-- In addition histograms / extended stats on (ATTR1, ATTR2) for covering both skew and correlation
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (attr1, attr2)')

-- Assuming a query expression TRUNC(ATTR1) = x and TRUNC(ATTR2) = x create a histogram on each expression separately for representing skew properly
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc(attr1)), (trunc(attr2))')

-- In addition histogram / extended stats on (TRUNC(ATTR1), TRUNC(ATTR2)) for covering both skew and correlation on the expressions
-- Officially this isn't allowed and errors out up to and including 12.1.0.2
-- ORA-20001: Error when processing extension - missing right parenthesis
-- Starting with 12.2.0.1 this is successful, but documentation still lists this as restriction, including 19c
-- But: It's only supported when there is a corresponding virtual column already existing for each expression used - so Oracle can map this expression back to a virtual column
-- Remove the previous step and it will error out:
-- ORA-20001: Invalid Extension: Column group can contain only columns seperated by comma
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))')

-- Although it doesn't look like officially supported / possibly just a side effect of some bug fix, the optimizer picks up the information properly
-- Pre 12.2.0.1 don't recognize the correlation and show approx. 800K rows as estimate
-- 12.2.0.1 and later with the column group on the expression in place show approx. 900K rows as estimate which is (almost) correct
explain plan for
select
count(*)
from
t1 a
where
trunc(attr1) = 1
and trunc(attr2) = 1;

select * from table(dbms_xplan.display(format => 'TYPICAL'));

-- But: Explicitly referencing a virtual column doesn't work
-- This will error out:
-- ORA-20001: Error when processing extension - virtual column is referenced in a column expression

exec dbms_stats.drop_extended_stats(null, 't1', '(trunc(attr1))')

exec dbms_stats.drop_extended_stats(null, 't1', '(trunc(attr2))')

-- Remove this and the extended stats on the virtual columns below will work in 12.2.0.1 and later (because the extension still exists from above call)
exec dbms_stats.drop_extended_stats(null, 't1', '((trunc(attr1)), (trunc(attr2)))')


alter table t1 add (trunc_attr1 as (trunc(attr1)));

alter table t1 add (trunc_attr2 as (trunc(attr2)));

-- This works and is supported in all versions supporting virtual columns
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, trunc_attr1, trunc_attr2')

-- This errors out: ORA-20001: Error when processing extension - virtual column is referenced in a column expression
-- even in 12.2.0.1 and later
-- But: Works in 12.2.0.1 and later if the call to
-- dbms_stats.drop_extended_stats(null, 't1', '((trunc(attr1)), (trunc(attr2)))')
-- above is removed, because the extension then already exists (!)
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc_attr1, trunc_attr2)')

-- But: This works again from 12.2.0.1 on, not explicitly referencing the virtual columns but using the expressions covered by the virtual columns instead
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))')

And here is the output I get when using 12.1.0.2:

SQL*Plus: Release 11.2.0.1.0 Production on Fri Jan 10 12:39:00 2020

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options


USERNAME INST_NAME HOST_NAME SID SERIAL# VERSION STARTED SPID OPID CPID SADDR PADDR
-------------------- ------------ ------------------------- ----- -------- ---------- -------- --------------- ----- --------------- ---------------- ----------------
CBO_TEST orcl121 DELLXPS13 368 46472 12.1.0.2.0 20200110 6908 59 15536:4996 00007FFA110E9B88 00007FFA12B6F1E8



SQL>
SQL> alter session set nls_language = american;

Session altered.

Elapsed: 00:00:00.00
SQL>
SQL> drop table t1;

Table dropped.

Elapsed: 00:00:00.02
SQL>
SQL> purge table t1;

Table purged.

Elapsed: 00:00:00.06
SQL>
SQL> -- Initialize the random generator for "reproducible" pseudo-randomness
SQL> exec dbms_random.seed(0)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL>
SQL> -- ATTR1 and ATTR2 are both skewed and correlated
SQL> create table t1
2 as
3 select
4 rownum as id
5 , trunc(dbms_random.value(1, 1000000000000)) as fk
6 , case when rownum <= 900000 then 1 else ceil(rownum / 10) end as attr1
7 , case when rownum <= 900000 then 1 else ceil(rownum / 10) end as attr2
8 , rpad('x', 100) as filler
9 from
10 dual
11 connect by
12 level <= 1000000
13 ;

Table created.

Elapsed: 00:00:13.07
SQL>
SQL> -- Histograms on ATTR1 and ATTR2 for representing skew properly
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2')

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.43
SQL>
SQL> -- In addition histograms / extended stats on (ATTR1, ATTR2) for covering both skew and correlation
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (attr1, attr2)')

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.08
SQL>
SQL> -- Assuming a query expression TRUNC(ATTR1) = x and TRUNC(ATTR2) = x create a histogram on each expression separately for representing skew properly
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc(attr1)), (trunc(attr2))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.51
SQL>
SQL> -- In addition histogram / extended stats on (TRUNC(ATTR1), TRUNC(ATTR2)) for covering both skew and correlation on the expressions
SQL> -- Officially this is isn't allowed and errors out up to and including 12.1.0.2
SQL> -- ORA-20001: Error when processing extension - missing right parenthesis
SQL> -- Starting with 12.2.0.1 this is successful, but documentation still lists this as restriction, including 19c
SQL> -- But: It's only supported when there is a corresponding virtual column already existing for each expression used - so Oracle can map this expression back to a virtual column
SQL> -- Remove the previous step and it will error out:
SQL> -- ORA-20001: Invalid Extension: Column group can contain only columns seperated by comma
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))')
BEGIN dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))'); END;

*
ERROR at line 1:
ORA-20001: Error when processing extension - missing right parenthesis


Elapsed: 00:00:00.04
SQL>

SQL> -- Although it doesn't look like officially supported / possibly just a side effect of some bug fix, the optimizer picks up the information properly
SQL> -- Pre 12.2.0.1 don't recognize the correlation and show approx. 800K rows as estimate
SQL> -- 12.2.0.1 and later with the column group on the expression in place show approx. 900K rows as estimate which is (almost) correct
SQL> explain plan for
2 select
3 count(*)
4 from
5 t1 a
6 where
7 trunc(attr1) = 1
8 and trunc(attr2) = 1;

Explained.

Elapsed: 00:00:00.02
SQL>
SQL> select * from table(dbms_xplan.display(format => 'TYPICAL'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 8 | 4777 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
|* 2 | TABLE ACCESS FULL| T1 | 807K| 6305K| 4777 (1)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(TRUNC("ATTR1")=1 AND TRUNC("ATTR2")=1)

14 rows selected.

Elapsed: 00:00:00.11
SQL>
SQL> -- But: Explicitly referencing a virtual column doesn't work
SQL> -- This will error out:
SQL> -- ORA-20001: Error when processing extension - virtual column is referenced in a column expression
SQL>
SQL> exec dbms_stats.drop_extended_stats(null, 't1', '(trunc(attr1))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.13
SQL>
SQL> exec dbms_stats.drop_extended_stats(null, 't1', '(trunc(attr2))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.09
SQL>
SQL> -- Remove this and the extended stats on the virtual columns below will work in 12.2.0.1 and later (because the extension still exists from above call)
SQL> exec dbms_stats.drop_extended_stats(null, 't1', '((trunc(attr1)), (trunc(attr2)))')
BEGIN dbms_stats.drop_extended_stats(null, 't1', '((trunc(attr1)), (trunc(attr2)))'); END;

*
ERROR at line 1:
ORA-20001: Error when processing extension - missing right parenthesis



Elapsed: 00:00:00.02
SQL>
SQL>
SQL> alter table t1 add (trunc_attr1 as (trunc(attr1)));

Table altered.

Elapsed: 00:00:00.01
SQL>
SQL> alter table t1 add (trunc_attr2 as (trunc(attr2)));

Table altered.

Elapsed: 00:00:00.01
SQL>
SQL> -- This works and is supported in all versions supporting virtual columns
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, trunc_attr1, trunc_attr2')

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.57
SQL>
SQL> -- This errors out: ORA-20001: Error when processing extension - virtual column is referenced in a column expression
SQL> -- even in 12.2.0.1 and later
SQL> -- But: Works in 12.2.0.1 and later if the call to
SQL> -- dbms_stats.drop_extended_stats(null, 't1', '((trunc(attr1)), (trunc(attr2)))')
SQL> -- above is removed, because the extension then already exists (!)
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc_attr1, trunc_attr2)')
BEGIN dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc_attr1, trunc_attr2)'); END;

*
ERROR at line 1:
ORA-20001: Error when processing extension - virtual column is referenced in a column expression
ORA-06512: at "SYS.DBMS_STATS", line 34634
ORA-06512: at line 1


Elapsed: 00:00:00.06
SQL>
SQL> -- But: This works again from 12.2.0.1 on, not explicitly referencing the virtual columns but using the expressions covered by the virtual columns instead
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))')
BEGIN dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))'); END;

*
ERROR at line 1:
ORA-20001: Error when processing extension - missing right parenthesis


Elapsed: 00:00:00.04
SQL>

And that is what I get from 12.2.0.1 on, here using 19.3:

SQL*Plus: Release 11.2.0.1.0 Production on Fri Jan 10 12:35:18 2020

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Verbunden mit:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production


USERNAME INST_NAME HOST_NAME SID SERIAL# VERSION STARTED SPID OPID CPID SADDR PADDR
-------------------- ------------ ------------------------- ----- -------- ---------- -------- --------------- ----- --------------- ---------------- ----------------
CBO_TEST orcl19c DELLXPS13 146 48961 19.0.0.0.0 20200110 5648 53 7260:13644 00007FF91687B3D8 00007FF91656B858



SQL>
SQL> alter session set nls_language = american;

Session altered.

Elapsed: 00:00:00.00
SQL>
SQL> drop table t1;

Table dropped.

Elapsed: 00:00:00.02
SQL>
SQL> purge table t1;

Table purged.

Elapsed: 00:00:00.10
SQL>
SQL> -- Initialize the random generator for "reproducible" pseudo-randomness
SQL> exec dbms_random.seed(0)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL>
SQL> -- ATTR1 and ATTR2 are both skewed and correlated
SQL> create table t1
2 as
3 select
4 rownum as id
5 , trunc(dbms_random.value(1, 1000000000000)) as fk
6 , case when rownum <= 900000 then 1 else ceil(rownum / 10) end as attr1
7 , case when rownum <= 900000 then 1 else ceil(rownum / 10) end as attr2
8 , rpad('x', 100) as filler
9 from
10 dual
11 connect by
12 level <= 1000000
13 ;

Table created.

Elapsed: 00:00:11.89
SQL>
SQL> -- Histograms on ATTR1 and ATTR2 for representing skew properly
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2')

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.68
SQL>
SQL> -- In addition histograms / extended stats on (ATTR1, ATTR2) for covering both skew and correlation
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (attr1, attr2)')

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.75
SQL>
SQL> -- Assuming a query expression TRUNC(ATTR1) = x and TRUNC(ATTR2) = x create a histogram on each expression separately for representing skew properly
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc(attr1)), (trunc(attr2))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:03.43
SQL>
SQL> -- In addition histogram / extended stats on (TRUNC(ATTR1), TRUNC(ATTR2)) for covering both skew and correlation on the expressions
SQL> -- Officially this is isn't allowed and errors out up to and including 12.1.0.2
SQL> -- ORA-20001: Error when processing extension - missing right parenthesis
SQL> -- Starting with 12.2.0.1 this is successful, but documentation still lists this as restriction, including 19c
SQL> -- But: It's only supported when there is a corresponding virtual column already existing for each expression used - so Oracle can map this expression back to a virtual column
SQL> -- Remove the previous step and it will error out:
SQL> -- ORA-20001: Invalid Extension: Column group can contain only columns seperated by comma
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:05.24
SQL>
SQL> -- Although it doesn't look like officially supported / possibly just a side effect of some bug fix, the optimizer picks up the information properly
SQL> -- Pre 12.2.0.1 don't recognize the correlation and show approx. 800K rows as estimate
SQL> -- 12.2.0.1 and later with the column group on the expression in place show approx. 900K rows as estimate which is (almost) correct
SQL> explain plan for
2 select
3 count(*)
4 from
5 t1 a
6 where
7 trunc(attr1) = 1
8 and trunc(attr2) = 1;

Explained.

Elapsed: 00:00:00.01
SQL>
SQL> select * from table(dbms_xplan.display(format => 'TYPICAL'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 8 | 4797 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
|* 2 | TABLE ACCESS FULL| T1 | 893K| 6977K| 4797 (1)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(TRUNC("ATTR1")=1 AND TRUNC("ATTR2")=1)

14 rows selected.

Elapsed: 00:00:00.10
SQL>
SQL> -- But: Explicitly referencing a virtual column doesn't work
SQL> -- This will error out:
SQL> -- ORA-20001: Error when processing extension - virtual column is referenced in a column expression
SQL>
SQL> exec dbms_stats.drop_extended_stats(null, 't1', '(trunc(attr1))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.33
SQL>
SQL> exec dbms_stats.drop_extended_stats(null, 't1', '(trunc(attr2))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.33
SQL>
SQL> -- Remove this and the extended stats on the virtual columns below will work in 12.2.0.1 and later (because the extension still exists from above call)
SQL> exec dbms_stats.drop_extended_stats(null, 't1', '((trunc(attr1)), (trunc(attr2)))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.11
SQL>
SQL>
SQL> alter table t1 add (trunc_attr1 as (trunc(attr1)));

Table altered.

Elapsed: 00:00:00.01
SQL>
SQL> alter table t1 add (trunc_attr2 as (trunc(attr2)));

Table altered.

Elapsed: 00:00:00.01
SQL>
SQL> -- This works and is supported in all versions supporting virtual columns
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, trunc_attr1, trunc_attr2')

PL/SQL procedure successfully completed.

Elapsed: 00:00:03.49
SQL>
SQL> -- This errors out: ORA-20001: Error when processing extension - virtual column is referenced in a column expression
SQL> -- even in 12.2.0.1 and later
SQL> -- But: Works in 12.2.0.1 and later if the call to
SQL> -- dbms_stats.drop_extended_stats(null, 't1', '((trunc(attr1)), (trunc(attr2)))')
SQL> -- above is removed, because the extension then already exists (!)
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc_attr1, trunc_attr2)')
BEGIN dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc_attr1, trunc_attr2)'); END;

*
ERROR at line 1:
ORA-20001: Error when processing extension - virtual column is referenced in a column expression
ORA-06512: at "SYS.DBMS_STATS", line 40751
ORA-06512: at "SYS.DBMS_STATS", line 40035
ORA-06512: at "SYS.DBMS_STATS", line 38912
ORA-06512: at "SYS.DBMS_STATS", line 15726
ORA-06512: at "SYS.DBMS_STATS", line 22064
ORA-06512: at "SYS.DBMS_STATS", line 22162
ORA-06512: at "SYS.DBMS_STATS", line 22232
ORA-06512: at "SYS.DBMS_STATS", line 22864
ORA-06512: at "SYS.DBMS_STATS", line 38313
ORA-06512: at "SYS.DBMS_STATS", line 39738
ORA-06512: at "SYS.DBMS_STATS", line 40183
ORA-06512: at "SYS.DBMS_STATS", line 40732
ORA-06512: at line 1


Elapsed: 00:00:00.09
SQL>
SQL> -- But: This works again from 12.2.0.1 on, not explicitly referencing the virtual columns but using the expressions covered by the virtual columns instead
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))')

PL/SQL procedure successfully completed.


Elapsed: 00:00:05.32
SQL>

So, from 12.2.0.1 it looks like extended statistics on virtual columns are supported to some degree, when following a certain sequence of operations - first creating extended statistics on each of the expressions used, which creates corresponding virtual columns under the covers, and afterwards creating extended statistics using those expressions as part of the column group expression.

Strangely enough, when explicitly creating virtual columns for those expressions, creating a column group that explicitly references those virtual columns doesn't work - but using the expressions covered by the virtual columns instead works, even with those virtual columns created explicitly.
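
As a minimal recap of the sequence that works from 12.2.0.1 onwards (this simply condenses the dbms_stats calls already shown in the test case above):

-- Step 1: extensions / histograms on each expression separately
--         (this creates the underlying virtual columns)
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc(attr1)), (trunc(attr2))')

-- Step 2: only then the column group on the two expressions
--         (errors out with ORA-20001 if Step 1 is skipped)
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))')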

WooCommerce Sales Analysis Report

Nilesh Jethwa - Sun, 2020-01-12 22:04

Advanced Sales Analysis for WooCommerce: an in-depth understanding of your eCommerce data will help you to propel the sales of your WooCommerce store. The Sales Analysis (or Sales Summary) Dashboard is part of the WooCommerce Reporting Pro package. Summary: this is one of the most useful and necessary reports for all WooCommerce store owners. So let … Read more

Hat Tip To: InfoCaptor WooCommerce Plugins

kubectl tree - A kubectl plugin to explore ownership relationships between Kubernetes objects through ownerReferences

Pas Apicella - Sun, 2020-01-12 18:51
A kubectl plugin to explore ownership relationships between Kubernetes objects through ownerReferences on them. To get started and install the plugin, visit this page.

https://github.com/ahmetb/kubectl-tree

Install Steps

Install as follows

1. Create a script as follows

install-krew.sh

(
  set -x; cd "$(mktemp -d)" &&
  curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/download/v0.3.3/krew.{tar.gz,yaml}" &&
  tar zxvf krew.tar.gz &&
  KREW=./krew-"$(uname | tr '[:upper:]' '[:lower:]')_amd64" &&
  "$KREW" install --manifest=krew.yaml --archive=krew.tar.gz &&
  "$KREW" update
)

2. Install as follows

papicella@papicella:~/pivotal/software/krew$ ./install-krew.sh
+++ mktemp -d
++ cd /var/folders/mb/93td1r4s7mz3ptq6cmpdvc6m0000gp/T/tmp.kliHlfYB
++ curl -fsSLO 'https://github.com/kubernetes-sigs/krew/releases/download/v0.3.3/krew.{tar.gz,yaml}'
++ tar zxvf krew.tar.gz
x ./krew-darwin_amd64
x ./krew-linux_amd64
x ./krew-linux_arm
x ./krew-windows_amd64.exe
x ./LICENSE
+++ uname
+++ tr '[:upper:]' '[:lower:]'
++ KREW=./krew-darwin_amd64
++ ./krew-darwin_amd64 install --manifest=krew.yaml --archive=krew.tar.gz
Installing plugin: krew
Installed plugin: krew

...

3. On a Mac add the following to your PATH and source your profile file or start a new shell

export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"

4. Check plugin is installed

$ kubectl plugin list
The following compatible plugins are available:

/Users/papicella/.krew/bin/kubectl-krew
/Users/papicella/.krew/bin/kubectl-tree

5. You can also use this:

$ kubectl tree --help
Show sub-resources of the Kubernetes object

Usage:
  kubectl tree KIND NAME [flags]

Examples:
  kubectl tree deployment my-app
  kubectl tree kservice.v1.serving.knative.dev my-app

6. OK, now it's installed, let's see what information it shows / displays about k8s objects and their relationships on my cluster, which has riff and Knative installed

$ kubectl tree deployment --namespace=knative-serving networking-istio
NAMESPACE        NAME                                       READY  REASON  AGE
knative-serving  Deployment/networking-istio                -              8d
knative-serving  └─ReplicaSet/networking-istio-7fcd97cbf7   -              8d
knative-serving    └─Pod/networking-istio-7fcd97cbf7-z4dc9  True           8d

$ kubectl tree deployment --namespace=riff-system riff-build-controller-manager
NAMESPACE    NAME                                                    READY  REASON  AGE
riff-system  Deployment/riff-build-controller-manager                -              8d
riff-system  └─ReplicaSet/riff-build-controller-manager-5d484d5fc4   -              8d
riff-system    └─Pod/riff-build-controller-manager-5d484d5fc4-7rhbr  True           8d


More Information

GitHub Tree Plugin
https://github.com/ahmetb/kubectl-tree

Categories: Fusion Middleware

Thank You

Jonathan Lewis - Sun, 2020-01-12 03:16

Regular readers will have noticed that for the last few weeks I’ve been adding a footnote (now deleted) to any new blog posts asking for donations to the Nepal Youth Foundation (UK), with an offer to match (up to a limit) any donations made by my readers.  The page I had set up on JustGiving to collect donations and describe the work of the foundation reached my limit last week, so I sent off my matching cheque on 6th Jan. I got a very prompt email telling me the cheque had arrived the following day, and thanking me for the effort; then yesterday a follow-up thank you letter arrived in the morning post.

I thought I’d share this with you so that you could see that your generosity has been appreciated by an organisation that really cares about the change it can make to the lives of the people who most need their help:

My JustGiving page is still live, but if you feel at any time that you’d like to donate to a worthwhile cause the NYF(UK) has a JustGiving page of its own that you could donate to directly, and for further information about what they do, here’s a link to their main (UK) website. For other countries there’s a link to the US site with information about other international offices.

Collection limitation

Jonathan Lewis - Fri, 2020-01-10 11:43

The ODC SQL and PL/SQL forum came up with an example a couple of days ago that highlighted an annoying limitation in the optimizer’s handling of table functions. The requirement was for a piece of SQL that would generate “installments” information from a table of contract agreements and insert into another table any installments that were not yet recorded there.

The mechanism to turn a single row of contract data into a set of installments was an (optionally pipelined) table function that involved some business logic that (presumably) dealt with the timing and size of the installments. The final SQL to create the data that needed to be inserted was reported as follows (though it had clearly been somewhat modified):

select 
        y.*,
        trunc(sysdate,'mm'),
        user
from 
        table_a a
join 
        table(
                my_function(
                        a.loan_acct_nbr, 
                        a.start_dt,
                        a.maturity_dt,
                        a.num_of_terms
                )
        ) y
on 
        a.loan_acct_nbr = y.loan_acct_nbr
where 
        a.yr_mnth_cd = to_char(add_months(sysdate,-1),'yyyymm')       -- last month
and     a.loan_typ   = 'ABC'
and     not exists ( 
                select  1 
                from
                        final_load_table l
                where
                        l.loan_acct_nbr = y.loan_acct_nbr
                and     l.yr_mnth_cd    = y.yr_mnth_cd 
        )
;

In this query, table_a is the set of contracts, final_load_table is the set of installments, and my_function() is the pipelined function returning a table of installments derived from the start date, maturity date, and number of installments for a contract. The query needs a “not exists” subquery to eliminate any installments that are already known to the database. Once this query is operating efficiently it could be used either to drive a PL/SQL loop or (generally the better strategy) to do a simple “insert as select”.
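
For reference, the “insert as select” version would just be the reported query wrapped in an insert – a sketch only, on the assumption that the select list lines up with the column order of final_load_table:

insert into final_load_table
select
        y.*,
        trunc(sysdate,'mm'),
        user
from
        table_a a
join
        table(
                my_function(
                        a.loan_acct_nbr,
                        a.start_dt,
                        a.maturity_dt,
                        a.num_of_terms
                )
        ) y
on
        a.loan_acct_nbr = y.loan_acct_nbr
where
        a.yr_mnth_cd = to_char(add_months(sysdate,-1),'yyyymm')       -- last month
and     a.loan_typ   = 'ABC'
and     not exists (
                select  1
                from
                        final_load_table l
                where
                        l.loan_acct_nbr = y.loan_acct_nbr
                and     l.yr_mnth_cd    = y.yr_mnth_cd
        )
;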

We were told that the function would return around 60 rows per contract; that the starting size of the final_load_table would be about 60M rows, and the size of the result set from the initial join would be about 60M or 70M rows (which suggests about 1M rows acquired from table_a).

The owner of this problem seemed to be saying that the query had returned no data after 4 or 5 hours – which suggests (a) the optimizer chose a bad plan and/or (b) the PL/SQL function is working very inefficiently and/or (c) the join had generated a vast amount of data but the effect of the subquery was to discard all of it.

Clearly we needed to see an execution plan (preferably from SQL Monitor) and be given some idea of how many of the 60M/70M rows predicted for the join would be discarded because they already existed.

The owner did send us an execution plan – which included a very obvious threat and suggested a couple of optimizer errors – but supplied it as a picture rather than as flat text.

You’ll notice, of course, that the tables and columns have changed from the original statement. More significantly, though, there are serious problems with the numbers – the estimated row count for the basic join is only 5,326 rather than 50 Million which, at first sight, is probably why the optimizer has decided that a filter subquery (operation 1) to do an index-only probe (operation 5) is a good way of handling the subquery. Perhaps if the estimates had been a little more accurate (e.g. through the addition of a couple of column groups or, if necessary, by opt_estimate() or cardinality() hints) the subquery would have been unnested and turned into a hash anti-join.

I suggested a quick test of a suitable cardinality() hint – but ran up a little model to check that I’d got the hint right – and found that I had but it wasn’t going to help. So I decided to write up the model (and a possible solution for the owner of the problem) in this blog note.

Here’s the code to create the set of objects I’m going to work with. The naming follows the naming used in the owner’s original statement of the problem:


rem
rem     Script:         table_function_plan.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2020
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table table_a (
        loan_acct_nbr   number,
        yr_mnth_cd      varchar2(6),
        start_dt        date,
        maturity_dt     date    ,
        number_of_terms number,
        constraint ta_pk primary key(loan_acct_nbr, yr_mnth_cd)
)
;

execute dbms_random.seed(0)

insert /*+ append */
into    table_a
with generator as (
        select rownum id
        from    dual
        connect by
                level <= 4000 -- > comment to avoid wordpress issue
)
select
        trunc(dbms_random.value(1e9, 2e9)),
        to_char(sysdate-(365-mod(rownum,365)),'yyyymm'),
        sysdate-(365-mod(rownum,365)),
        sysdate+(1500+mod(rownum,365)),
        60
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4 -- > comment to avoid WordPress format issue
;

create table final_load_table_l(
        loan_acct_nbr   number,
        yr_mnth_cd      varchar2(6),
        v1              varchar2(10),
        padding         varchar2(200),
        constraint lt_pk primary key(loan_acct_nbr, yr_mnth_cd)
)
;

insert /*+ append */ into final_load_table_l
with generator as (
        select rownum id
        from    dual
        connect by
                level <= 4000 -- > comment to avoid wordpress issue
)
select
        trunc(dbms_random.value(1e9, 2e9)),
        to_char(sysdate-(365-mod(rownum,365)),'yyyymm'),
        lpad(rownum,10),
        lpad('x',200,'x')
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4 -- > comment to avoid WordPress format issue
;


begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'table_a',
                method_opt  => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'final_load_table_l',
                method_opt  => 'for all columns size 1'
        );
end;
/

create type my_row_type as object (
        loan_acct_nbr   number,
        yr_mnth_cd      varchar2(6),
        v1              varchar2(10),
        padding         varchar2(200)
);
/

create type my_table_type as table of my_row_type;
/

create  or replace function my_function (
        i_loan_acct_nbr         in      number,
        i_yr_mnth_cd            in      varchar2,
        i_start_dt              in      date,
        i_maturity_dt           in      date,
        i_number_of_terms       in      number
)       return  my_table_type pipelined
as
begin
        for i in 1..i_number_of_terms loop
                pipe row (
                        my_row_type(
                                i_loan_acct_nbr,
                                to_char(i_start_dt+32*i,'yyyymm'),
                                i,
                                lpad('x',200,'x')
                        )
                );
        end loop;
        return;
end;
/

I was planning to create some large tables – hence the option to generate 16M rows from my generator CTEs – but I discovered the critical problem almost as soon as I had some data and code in place, so I didn’t need to go large.

I’ve had to create an object type and table type in order to create a pipelined function that returns the table type by piping rows of the object type. The data I’ve created, and the way the function generates data probably doesn’t bear much resemblance to the real system of course, but I don’t think it needs to be very close to make the crucial point.

Here’s the text of the select statement the OP wants to run, with the execution plan I got from my data set after running the query and pulling the plan from memory:

alter session set statistics_level = all;

select 
        /*+ find this 1 */
        y.* 
from 
        table_a a, 
        table(my_function(
                a.loan_acct_nbr,
                a.yr_mnth_cd,
                a.start_dt,
                a.maturity_dt,
                a.number_of_terms
        )) y
where 
        a.yr_mnth_cd = '202001'
and     not exists (
                select  
                        /*+ unnest */
                        null
                from    final_load_table_l l
                where   l.loan_acct_nbr = y.loan_acct_nbr
                and     l.yr_mnth_cd    = y.yr_mnth_cd
        )
;

--------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name        | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |             |      1 |        |  5059K(100)|  14580 |00:00:00.15 |   16330 |
|*  1 |  FILTER                             |             |      1 |        |            |  14580 |00:00:00.15 |   16330 |
|   2 |   NESTED LOOPS                      |             |      1 |   6283K| 10421   (8)|  14580 |00:00:00.11 |     335 |
|*  3 |    TABLE ACCESS FULL                | TABLE_A     |      1 |    769 |    10  (10)|    243 |00:00:00.01 |     297 |
|   4 |    COLLECTION ITERATOR PICKLER FETCH| MY_FUNCTION |    243 |   8168 |    14   (8)|  14580 |00:00:00.10 |      38 |
|*  5 |   INDEX UNIQUE SCAN                 | LT_PK       |  14580 |      1 |     1   (0)|      0 |00:00:00.02 |   15995 |
--------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( IS NULL)
   3 - filter("A"."YR_MNTH_CD"='202001')
   5 - access("L"."LOAN_ACCT_NBR"=:B1 AND "L"."YR_MNTH_CD"=:B2)


I’ve put in a hint to tell the optimizer to unnest the subquery – and it didn’t. Oracle does not ignore hints (unless they’re illegal, or out of context, or the optimizer never got to them, or you’ve found a bug) so why did Oracle appear to ignore this hint? There’s a really nice feature in 19.3 execution plans – you can request a hint report for a plan, and here’s the hint report for this query (ignore the bits about “find this” being an error):


Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 3 (U - Unused (1), E - Syntax error (2))
---------------------------------------------------------------------------
   0 -  SEL$1
         E -  find
         E -  this

   5 -  SEL$3
         U -  unnest / Invalid correlated predicates


I put in an /*+ unnest */ hint to unnest the subquery, and I’ve been told that the predicates are not valid. The only thing about them that could be invalid is that they come from a pipelined function that has returned an object type. The pipelined function does not behave exactly like a table. But this gives me a clue about forcing the unnest to happen – hide the pipelined function inside a non-mergeable view.


select
        /*+ find this 2 */
        v.*
from    (
        select  /*+ no_merge */
                y.* 
        from 
                table_a a, 
                table(my_function(
                        a.loan_acct_nbr,
                        a.yr_mnth_cd,
                        a.start_dt,
                        a.maturity_dt,
                        a.number_of_terms
                )) y
        where 
                a.yr_mnth_cd = '202001'
        )       v
where   not exists (
                select
                        /*+ unnest */
                        null
                from    final_load_table_l l
                where   l.loan_acct_nbr = v.loan_acct_nbr
                and     l.yr_mnth_cd    = v.yr_mnth_cd
        )
/


------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name        | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |             |      1 |        | 10628 (100)|  14580 |00:00:00.12 |     387 |       |       |          |
|*  1 |  HASH JOIN RIGHT ANTI                |             |      1 |   6283K| 10628  (10)|  14580 |00:00:00.12 |     387 |  1878K|  1878K| 2156K (0)|
|   2 |   INDEX FAST FULL SCAN               | LT_PK       |      1 |  10000 |     6  (17)|  10000 |00:00:00.01 |      91 |       |       |          |
|   3 |   VIEW                               |             |      1 |   6283K| 10371   (8)|  14580 |00:00:00.11 |     296 |       |       |          |
|   4 |    NESTED LOOPS                      |             |      1 |   6283K| 10371   (8)|  14580 |00:00:00.10 |     296 |       |       |          |
|*  5 |     TABLE ACCESS FULL                | TABLE_A     |      1 |    769 |    10  (10)|    243 |00:00:00.01 |     296 |       |       |          |
|   6 |     COLLECTION ITERATOR PICKLER FETCH| MY_FUNCTION |    243 |   8168 |    13   (0)|  14580 |00:00:00.10 |       0 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("L"."LOAN_ACCT_NBR"="V"."LOAN_ACCT_NBR" AND "L"."YR_MNTH_CD"="V"."YR_MNTH_CD")
   5 - filter("A"."YR_MNTH_CD"='202001')

I’ve wrapped the basic join between table_a and the table function in an inline view called (unimaginatively) v, and I’ve added a /*+ no_merge */ hint to that inline view. So the main query becomes a select from a non-mergeable view with a “not exists” subquery applied to a couple of view columns. And Oracle thinks that that’s okay – and my unnest hint encouraged the optimizer to use a hash anti-join.

So here’s an option for the owner of the problem – but with a few outstanding questions: will a rewrite of their query in this form do the same, will the performance of the select be good enough, and will the subsequent “insert as select” keep the same driving plan.

There’s one significant detail to worry about: the build table in this hash (anti-)join is going to be constructed from 50M (load_acct_bfr, yr_mnth_cd) pairs – which means the workarea requirement is likely to be about 1.2GB for an optimal (i.e. in-memory) hash join; otherwise the join may spill to disc and do a lot of I/O – probably as a one-pass hash join.

(Did you notice, by the way, that the word storage appeared at operation 3 in the original plan?  That suggests a nice big Exadata box; however, there’s no storage predicate in the Predicate Information section for that operation and you would have thought that lease_type = ‘US_SSP’ would be pushed to storage, so maybe this is a ZFS Pillar backing a less exotic server.)

Conclusion

Some (if not all) types of correlated subqueries behave badly if the correlation predicates involve table functions. But you may be able to work around the issue by hiding part of the query, including the table function, inside a non-mergeable inline view before applying the subquery to the view.

Footnote

When I realised that the table function was causing a problem unnesting I remembered that I had written about a similar problem a few years ago – after searching for a little while I discovered a draft note that I had started in September 2015 but had not quite finished; so I’ll be finishing it off and publishing it some time in the next few days.

 

 

A decade in data, a decade to come

Rittman Mead Consulting - Fri, 2020-01-10 08:52

First of all I’d like to take the chance to wish everyone in our network, colleagues, customers and friends a very happy new year.

The last decade was an incredible one for the data industry.  I realised before writing this that 2010 was actually when I started at my first data management consultancy, so it’s given me the perfect chance to reflect on what I’ve witnessed in the last 10 years.  

Data Insights out of IT and into the business

One thing that stands out is the relevance of insights through data within business functions.  The reason why we do what we do at Rittman Mead is because we believe that data solutions will drive positive change for us all in the future.  Making it accessible, insightful and interesting to people who are doing great things for great companies is the goal for all of us.

During the last decade we saw the mass market adoption of line of business data discovery tools such as Tableau, Oracle Data Visualization, QlikSense & Microsoft PowerBI.  All wonderfully innovative products that have flown the flag for bringing insights through data into the business.  

Oracle DV Geo Layers

What is Big Data?

Also, what has happened to the Big Data hype, which was huge back in 2010?

Ten years on do we think it lived up to its hype?  Do we even know what it means? On reflection was it as groundbreaking as we might have thought at the time?  This is an argument that could be had but might never end.

Public Cloud Cover

One thing is for sure - the adoption at scale of public Cloud, SaaS, storage and what seems like infinite amounts of processing power has been huge since 2010.  We've seen AWS and Microsoft Azure leading the way, and we see Google and Oracle Cloud Infrastructure catching up.  We know there are pros, cons, knowns and unknowns when moving to Cloud-hosted applications and platforms.  We know that one thing is for sure - come 2030 the Cloud will be bigger whether we like it or not.

Streams of Events

Finally, the emergence of event-driven architectures is as innovative as it is complex. Large growth companies like Uber have built their business model on the ability to process huge numbers of events at low latency in order to operate.  Think demand + supply and Uber’s dynamic pricing model.  Fortunately, open source projects like Kafka and particularly organisations like Confluent, the company behind Kafka, have made it easier to integrate this functionality into businesses throughout the world.

Analysing stream events in realtime using Kafka KSQL

It wouldn’t be fair to say that the innovation in the data industry has always been used ethically in the last 10 years, and this is something that we all have a part to play in going forwards.  The Facebook / Cambridge Analytica scandal certainly made everyone reflect on the importance of using data for positive and ethical outcomes rather than underhand and opaque tactics.

And what's next?

As we embark on a new decade, we at Rittman Mead are excited about what is to come.  Artificial Intelligence, Machine Learning and Augmented Analytics are going to hit the mainstream and will play a part in all of our lives.  But the good news is - there will always be a place in this market for people.  Data is necessary and sophisticated machines are necessary, but it all relies on people to work, whether that be building an algorithm or making a decision based on the information that a particular algorithm has generated.  We will see more automation, and this is a brilliant thing.  Robotic Process Automation (RPA) and autonomous databases such as Oracle’s offering are designed to breed innovation, not replacement.

Come and meet us in February

For us it all starts in February at Oracle OpenWorld Europe.  We will be exhibiting and presenting there, and there will be insightful, thought-provoking and relatable content throughout the two days.  Augmented analytics and autonomous will be at the top of the agenda, and rightly so.  What's more, it's completely free to attend. Registration details can be found here:

We’d love to meet with you if you do decide to come.  To make this easier (as we know there is a lot to cover on days like these) we have set up a calendar where you can book a meeting slot with one of our sales team or technical consultants.  We can talk about:

  • Oracle
  • AI/ML
  • What are we doing?
  • What are you doing?
  • Anything you like in an informal setting

Please feel free to book a time slot using the link below:

https://calendly.com/rittman-mead/oracle-openworld

Finally we wish everyone a very prosperous new decade - who knows where we will be by 2030!

Categories: BI & Warehousing

Home Shopping Retailer Turns Browsers into Buyers with Oracle Cloud

Oracle Press Releases - Fri, 2020-01-10 08:00
Press Release
Home Shopping Retailer Turns Browsers into Buyers with Oracle Cloud

South African online retailer Home Choice remodels its planning processes and modernizes the shopping experience to drive sales and customer satisfaction

Redwood Shores, Calif.—Jan 10, 2020

Home Choice, one of South Africa's largest home shopping retailers and financing providers, has deployed Oracle Retail Cloud in just 12 weeks to deliver the products and offers that turn browsers into buyers. Shoppers look to Home Choice to discover name-brand furniture, electronics, and appliances at competitive prices. Offering a wide range of easy finance options, the retailer engages customers in finding and purchasing housewares via its website, catalog, and call centers. With Oracle Retail, the Home Choice team is shifting from legacy, manual processes to a best-in-class planning and optimization solution that unifies processes and integrates data science into decision making to drive more sales and better customer satisfaction.

“Home Choice is focused on delivering a compelling assortment of products that meets market demand across multiple geographies. The Oracle Retail Cloud Services allow us to adopt a more scientific approach to planning and markdowns quickly,” said Dirk Oberholster, chief information officer, Home Choice. “The flexibility of the solution also enables us to configure and extend capabilities to meet business requirements without touching the base code.”

In Oracle's "Setting the Bar: Global Customer Experience Trends 2019" study, the nearly 16,000 global consumers surveyed are seeking preferential treatment based on their relationship with a brand. Roughly half (48 percent) say that offers or discounts which are better than what anyone else can get, based on their loyalty to that retailer, are 'absolutely essential'. With a single view of inventory, Home Choice will be able to understand how customers interact with the brand to make better buying decisions and deliver relevant offers that compel a passive shopper to purchase.

“Oracle Retail is empowering the Home Choice team to deliver the right products to the right location and channels at a compelling price using science and optimization,” said Mike Webster, senior vice president and general manager, Oracle Retail. “As Home Choice places the customer at the center of the value chain, Oracle will enable them to interact with customers on a more personalized level.”

Home Choice is adopting Oracle Retail Merchandise Financial Planning Cloud Service, Oracle Customer Experience (CX) Commerce, Oracle Retail Assortment Planning Cloud Service, Oracle Retail Science Platform Cloud Service, and Oracle Retail Offer Optimization Cloud Service. Home Choice is centralizing its planning process by adopting the best practices built into Oracle Retail Merchandise Financial Planning to drive inventory productivity and profit. Additionally, the modern architecture of Oracle CX Commerce leverages the power of data to curate engaging experiences that drive online conversion rates and in-store traffic.

In the first phase, the team deployed Oracle Retail Merchandise Financial Planning Cloud Service with support from Cognira, a Gold level member of Oracle PartnerNetwork (OPN). Simultaneously, Oracle Retail Consulting worked to finalize the implementation of Oracle CX Commerce. The company evaluated competing products and selected Oracle based on its deep retail functionality, cloud offerings, and proven ability to support growing businesses.

Contact Info
Kris Reeves
Oracle PR
+1.925.787.6744
kris.reeves@oracle.com
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility, and refine the customer experience. For more information, visit our website www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


My Weblogic on Kubernetes Cheatsheet, part 1.

Darwin IT - Fri, 2020-01-10 07:59
Last week I had the honour to present at the UKOUG TechFest 19, together with my 'partner in crime' (I think I can say that now): Simon Haslam. We combined our sessions into a part 1 and a part 2.

For me this presentation is the result of having done a workshop at the PaaSForum in Mallorca, and then reworking that into a setup where I was able to run the MedRec Weblogic sample application against a managed database under Kubernetes.

Kubernetes Weblogic Operator Tutorial

I already wrote a blog about my workshop at the PaaSForum this year, and Marc Lameriks from Amis did a walkthrough of the workshop. It basically comes down to this tutorial, which you can do as a self-paced tutorial, or check out a Meetup in your neighbourhood. If you're in the Netherlands, we'll be happy to organize one, or if you like I could come over to your place and we could set something up. See also the links at the end of part 2 of our presentations for more info on the tutorial.

I have done the tutorial more or less three times now: once at the PaaSForum, and then I re-did it but deliberately changed the namespace names, domain name, etc., just to see where the dependencies are and, actually, where the pitfalls are. It's based on my method for getting to know an unfamiliar city: deliberately get lost in it. Two years ago we moved to another part of Amersfoort. To get to know my new neighbourhood, I often took a different way home than the one I took when I left. And this is basically what I did with the tutorial too.

The last time I did it was to try to run a more real-life application with an actual database. Therefore I set up a new OKE cluster, this time in a compartment of our own company cloud subscription. The interesting thing there is that you work with a normal customer-like subscription within a compartment. Another form of a deliberate detour. It also let me set up a database and see that configuration overrides, used to change your runtime datasource connection pool, actually work.

Cheatsheet

When doing the tutorial, you'll find that besides all the configuration on the Cloud pages to set up your OKE cluster and configure Oracle Pipelines, you have to enter a lot of commandline commands. Most of them are kubectl commands, some helm, and a bit of OCI commandline interface. Doing it the first time I soon got lost in what they mean and what I was doing with them. Also, most kubectl commands work with namespaces, where your Weblogic has another namespace than the Weblogic Operator. And as is my habit nowadays, I soon put the commands in smart but simple scripts. Those I want to share with you. Maybe not all, but at least enough so you'll get the idea.

I also found the official kubernetes.io kubectl cheat sheet and this one on github. But those are more explanations of the particular commands.

I found it helpful to set up this cheatsheet following the tutorial. I guess this helps in relating the commands to what they're meant for.
Shell vs. Alias

At the UKOUG TechFest, someone pointed out that you could use aliases too. Of course. You could do an alias like
alias k=kubectl

However, you'll still need to extend every command with the proper namespace, pod name, etc.
Therefore, I used the approach of creating an oke_env.sh script that I can include in every script, and a property file to store the credentials that go into secrets. I then call (source) the oke_env.sh script in every other script.
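
Just to illustrate the difference (this is only a sketch of mine, not a tutorial step; the namespace is simply the one defined in my oke_env.sh):

# with an alias you still have to type the namespace every time
alias k=kubectl
k get po -n medrec-weblogic-operator-ns

# with the sourced environment script the namespace comes from a variable
. ./oke_env.sh
kubectl get po -n $K8S_NS
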
Setup Oracle Kubernetes Engine instance on Oracle Cloud Infrastructure

These scripts refer to the first part of the tutorial: 0. Setup Oracle Kubernetes Engine instance on Oracle Cloud Infrastructure.
oke_env.sh

It all starts with my oke_env.sh. Here you'll find all the necessary variables that are used in most of the other scripts. In a next iteration I would probably move OCID_USER, OCID_TENANCY and OCID_CLUSTERID to my credentials properties file, but I only introduced that file later on, during my experiments.

#!/bin/bash
echo Set OKE Environment
export OCID_USER="ocid1.user.oc1..{here goes that long string of characters}"
export OCID_TENANCY="ocid1.tenancy.oc1..{here goes that other long string of characters}"
export OCID_CLUSTERID="ocid1.cluster.oc1.eu-frankfurt-1.{yet another long string of characters}"
export REGION="eu-frankfurt-1" # or your other region
export CLR_ADM_BND=makker-cluster-admin-binding
export K8S_NS="medrec-weblogic-operator-ns"
export K8S_SA="medrec-weblogic-operator-sa"
export HELM_CHARTS_HOME=/u01/content/weblogic-kubernetes-operator
export WL_OPERATOR_NAME="medrec-weblogic-operator"
export WLS_DMN_NS=medrec-domain-ns
export WLS_USER=weblogic
export WLS_DMN_NAME=medrec-domain
export WLS_DMN_CRED=medrec-domain-weblogic-credentials
export OCIR_CRED=ocirsecret
export WLS_DMN_YAML=/u01/content/github/weblogic-operator-medrec-admin/setup/medrec-domain/domain.yaml
export WLS_DMN_UID=medrec-domain
export MR_DB_CRED=mrdbsecret
export ADM_POD=medrec-domain-adminserver
export MR1_POD=medrec-domain-medrec-server1
export MR2_POD=medrec-domain-medrec-server2
export MR3_POD=medrec-domain-medrec-server3
export DMN_HOME=/u01/oracle/user_projects/domains/medrec-domain
export LCL_LOGS_HOME=/u01/content/logs
export ADM_SVR=AdminServer
export MR_SVR1=medrec-server1
export MR_SVR2=medrec-server2
export MR_SVR3=medrec-server3

credentials.properties
This stores the most important credentials, which allows me to keep them out of the scripts. However, as mentioned, I should also move the OCID_USER, OCID_TENANCY and OCID_CLUSTERID variables to this file.
weblogic.user=weblogic
weblogic.password=welcome1
ocir.user=my.email@address.nl
ocir.password=my;difficult!pa$$w0rd
ocir.email=my.email@address.nl
oci.tenancy=ourtenancy
oci.region=fra
db.medrec.username=MEDREC_OWNER
db.medrec.password=MEDREC_PASSWORD
db.medrec.url=jdbc:oracle:thin:@10.11.12.13:1521/pdb1.subsomecode.medrecokeclstr.oraclevcn.com
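
Part 1 doesn't yet show how these properties are read in a script, so here is a minimal sketch of how that could be done (the get_prop helper is my own illustration, not part of the tutorial):

#!/bin/bash
# Minimal sketch: read a value from credentials.properties by key.
# Assumes the file lives next to the script and each key occurs once.
SCRIPTPATH=$(dirname $0)
CRED_PROPS=$SCRIPTPATH/credentials.properties
function get_prop {
  grep "^$1=" $CRED_PROPS | cut -d= -f2-
}
echo Weblogic user: $(get_prop weblogic.user)
WLS_PWD=$(get_prop weblogic.password)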

create_kubeconfig.sh

After having set up the OKE cluster in OCI and configured your OCI CLI, the first actual command you issue is to create a kubeconfig file, using the OCI CLI. Normally this is executed only once per setup, so this script is mainly there to document the command:

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo "Create Kubeconfig -> Copy command from Access Kube Config from cluster"
mkdir -p $HOME/.kube
oci ce cluster create-kubeconfig --cluster-id $OCID_CLUSTERID --file $HOME/.kube/config --region $REGION --token-version 2.0.0

The SCRIPTPATH variable declaration is a trick to be able to refer to other scripts relative to the script's own folder. Then, as you will see in all my subsequent scripts, I source the oke_env.sh script. Doing so I can refer to the particular variables in the oci command. Therefore, as described in the tutorial, you should note down your OCID_CLUSTERID and update it in the oke_env.sh file, as well as the REGION variable.

Note, by the way, that Oracle Kubernetes Engine recently upgraded to only support kubeconfig token version 2.0.0. See also this document.
getnodes.sh

This one is a bit dumb, and could just as easily be replaced by an alias:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s nodes
kubectl get node

Even the call to oke_env.sh doesn't really add anything here, but it is the base for the other scripts, and it makes sense as soon as namespaces come into play.

create_clr_rolebinding.sh

The last part of setting up the OKE cluster is to create a cluster role binding. This is done with:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Create cluster role binding
echo kubectl create clusterrolebinding $CLR_ADM_BND --clusterrole=cluster-admin --user=$OCID_USER
kubectl create clusterrolebinding $CLR_ADM_BND --clusterrole=cluster-admin --user=$OCID_USER
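
If you want to double-check the result, describing the binding should show the cluster-admin role and your OCID user. This is just a convenience of mine, not a tutorial step:

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Describe cluster role binding $CLR_ADM_BND
kubectl describe clusterrolebinding $CLR_ADM_BND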

Install WebLogic Operator

The second part of the tutorial is about setting up your project environment with Github and having Oracle Pipelines build your project's image. This is not particularly related to K8S, so there are no relevant scripts there.
The next part of the tutorial is about installing the operator: 2. Install WebLogic Operator.
create_kubeaccount.sh

Installing the Weblogic Operator is done using Helm. As far as I have understood, Helm is a sort of package manager for Kubernetes. A funny thing in the naming is that, where Kubernetes is Greek for the steering officer on a ship, a helm is the steering device of a ship. Helm makes use of Tiller, the server-side part of Helm. A tiller is the "steering stick" or lever that operates the helm. (To be honest, to me it feels a bit the other way around; I guess I would have named the server side Helm and the client Tiller.)

The first step is to create a Helm cluster-admin role binding, a Kubernetes namespace for the Weblogic Operator, and a service account within that namespace. To do so, the create_kubeaccount.sh script does the following:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Create helm-user-cluster-admin-role
cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: helm-user-cluster-admin-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF
echo Create namespace $K8S_NS
kubectl create namespace $K8S_NS
echo kubectl create serviceaccount -n $K8S_NS $K8S_SA
kubectl create serviceaccount -n $K8S_NS $K8S_SA
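
To verify that the namespace and service account were actually created, a small check like this helps (again my own convenience, not part of the tutorial):

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Service accounts in namespace $K8S_NS
kubectl get serviceaccount -n $K8S_NS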

install_weblogic_operator.sh
Installing the Weblogic operator is done with this script. Notice that you need to execute the helm command within the folder in which you checked out the Weblogic Operator github repository.
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Install Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm install kubernetes/charts/weblogic-operator \
  --name $WL_OPERATOR_NAME \
  --namespace $K8S_NS \
  --set image=oracle/weblogic-kubernetes-operator:2.3.0 \
  --set serviceAccount=$K8S_SA \
  --set "domainNamespaces={}"
cd $SCRIPTPATH

The script will cd to the local Weblogic Operator repository and execute helm. At the beginning of the script the script's own folder is saved as SCRIPTPATH; after running the helm command, it does a cd back to that folder.
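
Note that the install above leaves domainNamespaces={} empty, so at this point the operator doesn't manage any domain namespace yet. Later on, once the domain namespace exists, the operator has to be told about it. As a sketch of what that could look like (check the tutorial for the exact step and values), a helm upgrade on the same chart would do:

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Let $WL_OPERATOR_NAME manage domain namespace $WLS_DMN_NS
cd $HELM_CHARTS_HOME
helm upgrade $WL_OPERATOR_NAME kubernetes/charts/weblogic-operator \
  --reuse-values \
  --set "domainNamespaces={$WLS_DMN_NS}" \
  --wait
cd $SCRIPTPATH
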
delete_weblogic_operator.sh

During my investigations the Weblogic Operator was upgraded. If you take a closer look at the command in the tutorial, you'll notice that the image used there is oracle/weblogic-kubernetes-operator:2.0, but I used oracle/weblogic-kubernetes-operator:2.3.0 in the script above.
I found it useful to be able to delete the operator so that I could re-install it. To delete the Weblogic Operator, run the delete_weblogic_operator.sh script:

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Delete Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm del --purge $WL_OPERATOR_NAME
cd $SCRIPTPATH

Again in this script the helm command is surrounded by a cd to the helm charts folder of the Weblogic Operator local github repository, and back again to the current folder.

getpods.sh

After having installed the Weblogic Operator, you can list the pods in the Kubernetes namespace it runs in, using this script:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s pods for $K8S_NS
kubectl get po -n $K8S_NS
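
If a pod doesn't show up as expected, peeking at the operator logs usually tells you why. A sketch of mine, assuming the chart's default deployment name of weblogic-operator:

#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Last log lines of the Weblogic Operator in $K8S_NS
# deployment name assumed to be the chart default
kubectl logs -n $K8S_NS deployment/weblogic-operator --tail=50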

list_wlop.sh

You can check the Weblogic Operator installation by performing a helm list of the Weblogic Operator charts. I wrapped that into this script:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo List Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm list $WL_OPERATOR_NAME
cd $SCRIPTPATH

Conclusion

If you have followed the workshop, and maybe used my scripts, up to this point you have installed the Weblogic Operator. Let's not make this article too long: call this Part 1, and quickly move on to part 2, to install, configure and monitor the rest of the setup. Maybe at the end I'll move these contents to an easy-to-navigate set of articles.