Feed aggregator

[Troubleshoot Socket:000445] Connection Rejected, Filter Blocked Socket While Accessing EBS R12 WebLogic Console On OCI

Online Apps DBA - Thu, 2020-01-23 03:06

Fixed: WebLogic Console For E-Business (EBS) R12 on Oracle Cloud (OCI) Socket Blocked Did you encounter the “The Server is not able to service this request: blocked Socket, weblogic.security.net.FilterException:” error while accessing the Weblogic console from the Oracle EBS environment created from the Oracle EBS Cloud Manager? If YES, then the blog post at https://k21academy.com/ebscloud36 […]

The post [Troubleshoot Socket:000445] Connection Rejected, Filter Blocked Socket While Accessing EBS R12 WebLogic Console On OCI appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Adding PostgreSQL extensions without being super user?

Yann Neuhaus - Wed, 2020-01-22 09:33

Usually, when you need to install a PostgreSQL extension you do this as a superuser (or at least that is how I do it). The downside of that is, of course, that a superuser must be available whenever a new extension is required, or that all the extensions are installed by default (e.g. in template1). Recently the question popped up internally whether you can install extensions without being a superuser. The answer is: yes and no. It depends on the objects that get created by the extension and it depends on the permissions of the user that wants to install the extension. As always, let's do some demos, as this is the easiest way to understand how things work.

Currently there are no extensions installed in my database except for the standard PL/pgSQL extension:

postgres=# \dx
                 List of installed extensions
  Name   | Version |   Schema   |         Description          
---------+---------+------------+------------------------------
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(1 row)

For the purpose of this post let's create a new user without any special permissions other than login:

postgres=# create user a with login password 'a';
CREATE ROLE
postgres=# \du a
           List of roles
 Role name | Attributes | Member of 
-----------+------------+-----------
 a         |            | {}

This user, even if not granted anything, by default has permissions to create objects in the public schema (you should definitely avoid that, check here):

postgres=# \c postgres a
You are now connected to database "postgres" as user "a".
postgres=> select current_database();
 current_database 
------------------
 postgres
(1 row)
postgres=> create table tab1 ( a int );
CREATE TABLE
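
If you want to close that door, a superuser can revoke the default CREATE privilege on the public schema and grant it back selectively. A minimal sketch, using the role "a" from above (run as a superuser):

-- stop ordinary users from creating objects in the public schema by default
revoke create on schema public from public;
-- grant it back explicitly, only to the roles that really need it
grant create on schema public to a;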

What this user is not able to do is to use “create extension” to install a new extension into the database:

postgres=> create extension lo;
ERROR:  permission denied to create extension "lo"
HINT:  Must be superuser to create this extension.
postgres=> 

Why is that? If we take a look at an extension’s SQL file (here, the one for the ltree extension) the first statement is this:

CREATE FUNCTION ltree_in(cstring)
RETURNS ltree
AS 'MODULE_PATHNAME'
LANGUAGE C STRICT IMMUTABLE PARALLEL SAFE;

“CREATE FUNCTION” does work as we are able to create objects in the public schema. The issue is this:

postgres=> CREATE FUNCTION ltree_in(cstring)
postgres-> RETURNS ltree
postgres-> AS 'MODULE_PATHNAME'
postgres-> LANGUAGE C STRICT IMMUTABLE PARALLEL SAFE;
ERROR:  permission denied for language c
postgres=> 

We do not have access to the language. Let's try to grant the required privilege for using the language:

postgres=# grant USAGE on LANGUAGE c to a;
ERROR:  language "c" is not trusted
DETAIL:  GRANT and REVOKE are not allowed on untrusted languages, because only superusers can use untrusted languages.

That does not work either, as C is untrusted. The same is true for “language internal”, as in intagg--1.1.sql:

postgres=> CREATE FUNCTION int_agg_state (internal, int4)
postgres-> RETURNS internal
postgres-> AS 'array_agg_transfn'
postgres-> PARALLEL SAFE
postgres-> LANGUAGE INTERNAL;
ERROR:  permission denied for language internal

As all the extensions shipped with community PostgreSQL reference either “c” or “internal” in some way, we do not have any chance to get an extension installed as user “a”. Let's do another test and create a new database with user “a” as its owner:

postgres=> \c postgres postgres
You are now connected to database "postgres" as user "postgres".
postgres=# create database dba with owner a;
CREATE DATABASE
postgres=# \c dba a
You are now connected to database "dba" as user "a".
dba=> 

Can we install extensions now?

dba=> create extension lo;
ERROR:  permission denied to create extension "lo"
HINT:  Must be superuser to create this extension.
dba=> 

Still the same error message, so that does not work either. We can, however, install PL/Perl as this is a trusted language:

dba=> create extension plperl;
CREATE EXTENSION
dba=> 

Actually PL/Perl is the only extension that can be installed in this scenario, which can be confirmed by a simple PL/pgSQL code block:

dba=> drop extension plperl;
DROP EXTENSION
dba=> do $$
dba$> declare
dba$>   ext record;
dba$>   exception_text text;
dba$> begin
dba$>   for ext in
dba$>       select name
dba$>         from pg_available_extensions
dba$>        order by name
dba$>   loop
dba$>     begin
dba$>        execute 'create extension '||ext.name;
dba$>        raise notice 'SUCCESS for %', ext.name;
dba$>     exception
dba$>        when others
dba$>          then get stacked diagnostics exception_text = MESSAGE_TEXT;
dba$>          raise notice '% failed with %', ext.name, exception_text;
dba$>     end;
dba$>   end loop;
dba$> end $$;
NOTICE:  adminpack failed with permission denied to create extension "adminpack"
NOTICE:  amcheck failed with permission denied to create extension "amcheck"
NOTICE:  autoinc failed with permission denied to create extension "autoinc"
NOTICE:  bloom failed with permission denied to create extension "bloom"
NOTICE:  btree_gin failed with permission denied to create extension "btree_gin"
NOTICE:  btree_gist failed with permission denied to create extension "btree_gist"
NOTICE:  citext failed with permission denied to create extension "citext"
NOTICE:  cube failed with permission denied to create extension "cube"
NOTICE:  dblink failed with permission denied to create extension "dblink"
NOTICE:  dict_int failed with permission denied to create extension "dict_int"
NOTICE:  dict_xsyn failed with permission denied to create extension "dict_xsyn"
NOTICE:  earthdistance failed with required extension "cube" is not installed
NOTICE:  file_fdw failed with permission denied to create extension "file_fdw"
NOTICE:  fuzzystrmatch failed with permission denied to create extension "fuzzystrmatch"
NOTICE:  hstore failed with permission denied to create extension "hstore"
NOTICE:  hstore_plperl failed with required extension "hstore" is not installed
NOTICE:  hstore_plperlu failed with required extension "hstore" is not installed
NOTICE:  hstore_plpython2u failed with required extension "hstore" is not installed
NOTICE:  hstore_plpython3u failed with required extension "hstore" is not installed
NOTICE:  hstore_plpythonu failed with required extension "hstore" is not installed
NOTICE:  insert_username failed with permission denied to create extension "insert_username"
NOTICE:  intagg failed with permission denied to create extension "intagg"
NOTICE:  intarray failed with permission denied to create extension "intarray"
NOTICE:  isn failed with permission denied to create extension "isn"
NOTICE:  jsonb_plperl failed with required extension "plperl" is not installed
NOTICE:  jsonb_plperlu failed with required extension "plperlu" is not installed
NOTICE:  jsonb_plpython2u failed with required extension "plpython2u" is not installed
NOTICE:  jsonb_plpython3u failed with required extension "plpython3u" is not installed
NOTICE:  jsonb_plpythonu failed with required extension "plpythonu" is not installed
NOTICE:  lo failed with permission denied to create extension "lo"
NOTICE:  ltree failed with permission denied to create extension "ltree"
NOTICE:  ltree_plpython2u failed with required extension "ltree" is not installed
NOTICE:  ltree_plpython3u failed with required extension "ltree" is not installed
NOTICE:  ltree_plpythonu failed with required extension "ltree" is not installed
NOTICE:  moddatetime failed with permission denied to create extension "moddatetime"
NOTICE:  pageinspect failed with permission denied to create extension "pageinspect"
NOTICE:  pg_buffercache failed with permission denied to create extension "pg_buffercache"
NOTICE:  pg_freespacemap failed with permission denied to create extension "pg_freespacemap"
NOTICE:  pg_prewarm failed with permission denied to create extension "pg_prewarm"
NOTICE:  pg_stat_statements failed with permission denied to create extension "pg_stat_statements"
NOTICE:  pg_trgm failed with permission denied to create extension "pg_trgm"
NOTICE:  pg_visibility failed with permission denied to create extension "pg_visibility"
NOTICE:  pgcrypto failed with permission denied to create extension "pgcrypto"
NOTICE:  pgrowlocks failed with permission denied to create extension "pgrowlocks"
NOTICE:  pgstattuple failed with permission denied to create extension "pgstattuple"
NOTICE:  SUCCESS for plperl
NOTICE:  plperlu failed with permission denied to create extension "plperlu"
NOTICE:  plpgsql failed with extension "plpgsql" already exists
NOTICE:  plpython3u failed with permission denied to create extension "plpython3u"
NOTICE:  postgres_fdw failed with permission denied to create extension "postgres_fdw"
NOTICE:  refint failed with permission denied to create extension "refint"
NOTICE:  seg failed with permission denied to create extension "seg"
NOTICE:  sslinfo failed with permission denied to create extension "sslinfo"
NOTICE:  tablefunc failed with permission denied to create extension "tablefunc"
NOTICE:  tcn failed with permission denied to create extension "tcn"
NOTICE:  tsm_system_rows failed with permission denied to create extension "tsm_system_rows"
NOTICE:  tsm_system_time failed with permission denied to create extension "tsm_system_time"
NOTICE:  unaccent failed with permission denied to create extension "unaccent"
NOTICE:  xml2 failed with permission denied to create extension "xml2"
DO

If you want to install an extension that only creates objects you are allowed to create anyway, that would succeed. Probably the best way to handle extensions is to install all the required ones by default and then grant permissions to the users who need them.
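
A minimal sketch of that approach, assuming a superuser installs the pg_trgm extension into a dedicated schema and then hands out access (the schema and extension names are only examples):

-- as a superuser, once per database
create schema if not exists extensions;
create extension if not exists pg_trgm schema extensions;
-- allow the application role to use what the extension provides
grant usage on schema extensions to a;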

The article Adding PostgreSQL extensions without being super user? appeared first on Blog dbi services.

Make Oracle database simple again!

Yann Neuhaus - Wed, 2020-01-22 09:30
Introduction

Let’s have a look at how to make Oracle database as simple as it was before.

Oracle database is a great piece of software: yes, it's quite expensive, but it's still the reference, and most companies can find a configuration that fits their needs and their budget. Another complaint about Oracle is the complexity: nothing is really simple, and you'll need skillful DBAs to deploy, manage, upgrade and troubleshoot your databases. But complexity is sometimes caused by wrong decisions you make without having the knowledge, mainly because some choices add significant complexity compared to others.

The goal

Why do things need to be simple?

Obviously, simplification means:

  • easier troubleshooting
  • more “understandable by the others”
  • reinstallation made possible in case of big trouble
  • avoiding bugs related to the mix of multiple components
  • less work, because you probably have enough work with migration, patching, performance, …
  • more reliability, because having fewer components means having fewer problems
On the hardware side

Rules for simplifying on the hardware side are:

  • Choose the same hardware for all your environments (DEV/TEST/QUAL/PROD/…): same server models, same CPU family, same revision. Make only slight variations in memory amount, number of disks and processor core configuration if needed. Order all the servers at the same time. If a problem is related to hardware, you will be able to test the fix on a less important environment before going to production
  • Don’t use SAN: SAN is very nice, but SAN is not the performance guarantee you expect. Adopt local SSD disks: NVMe type SSDs have amazing speed, they are a true game changer in today’s database performance. Getting rid of the SAN is also getting rid of multipathing, resource sharing, complex troubleshooting, external dependencies and so on
  • Provision very large volumes for data: dealing with space pressure is not the most interesting part of your job, and it's time consuming. You need 4TB of disks? Order 12TB and you'll be ready for every situation, even those not planned. For sure it's more expensive, but adding disks afterwards is not always that simple. It reminds me of a customer case where trying to add a single disk led to a nightmare (production down for several days)
  • Consider ODA appliances (Small or Medium): even if it’s not simplifying everything, at least hardware is all that you need and is dedicated to Oracle database software
  • Think consolidation: Oracle database has strong database isolation, which makes consolidation easy. Consolidating to limit the number of servers you need to manage also simplifies your environment
  • Avoid virtualization: without even talking about licensing, virtualization for sure adds an underlying layer of complexity
On the system side

Some rules are also good to know regarding the system:

  • Go for Red Hat or Oracle Linux: mainly because it's the most common OS for Oracle databases. Releases and patches are always available for Linux first. UNIX and Windows have been decreasing in popularity for Oracle databases over the past 10 years
  • Same OS: please keep your operating systems strictly identical from development to production. If you decide to upgrade the OS, do that first on TEST/DEV and finish with the production servers without waiting months. And never update over the internet, as packages can be different each time you update a system
  • Limit the number of filesystems for your databases: 1 big oradata and 1 big FRA are enough on SSD; you don't need to slice everything as we did before, and slicing always wastes space
On the software side

A lot of things should be done, or not done regarding software:

  • Install the same Oracle version (release + patch) and use the same tree everywhere. Use OFA (/u01/…) or not but be consistent
  • Limit the Oracle versions in use: inform your software vendors that your platform cannot support versions that are too old, especially non-terminal releases like 11.2.0.3. 19c, 12.1.0.2 and possibly 11.2.0.4 are the only recommended versions to deploy
  • Don't use ASM: ext4 is fine and SSDs now give you maximum performance even on a classic filesystem. ASM will always be linked to Grid Infrastructure, creating dependencies between the DB Homes and the Grid stack and making patching much more complex
  • Don't use RAC: most of your applications cannot correctly manage high availability anyway. RAC is much more complex than single-instance databases. Not choosing RAC means getting rid of the interconnect, the cluster repository, Cache Fusion for SGA sharing, SAN or NAS technologies, split brains, SCAN listeners and complex troubleshooting. Replacing RAC with Data Guard or Dbvisit Standby is a simpler way of providing a form of high availability without high complexity
Simplify backup configuration

For sure you will use RMAN, but how to simplify backups with RMAN?

  • Use the same backup script for all the databases
  • Use the same backup frequency for each database because when you need to restore, you’d better have a fresh backup
  • Configure only the retention differently on each database
  • Back up to disk (the most convenient being a big NFS share) and without any specific library (back up your /backup filesystem later with your enterprise backup tool if needed)
  • Provision a filesystem large enough that you never need to delete backups that are still within the retention period

Using the same backup strategy means being able to use the same restore procedure on all databases because you always need a quick restore of a broken database.
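
As an illustration of the "same script, same strategy" idea, a generic daily RMAN backup can be as small as the following sketch (paths and retention are examples, adjust them to your environment):

# retention is the only thing that differs between databases
configure retention policy to recovery window of 14 days;
# one script for every database: full backup plus archivelogs to a big /backup mount
run {
  backup database format '/backup/%d_db_%T_%U.bkp';
  backup archivelog all format '/backup/%d_arc_%T_%U.bkp' delete input;
  delete noprompt obsolete;
}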

Always back up the controlfile and spfile on the standby databases: the resulting backupset has a very small footprint and makes it easier to restore the standby using database backupsets from the primary, without the need for a duplication.
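
A sketch of that standby-side backup (the destination is an example):

# connected with RMAN to the standby instance: a tiny backupset, enough to rebuild the standby quickly
backup current controlfile format '/backup/%d_ctl_%T_%U.bkp';
backup spfile format '/backup/%d_spf_%T_%U.bkp';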

Consider an RMAN catalog only if you have enough databases to justify managing one.

Simplify database management and configuration
  • Create scripts for database configuration and tablespace creation (for example: configure_SID.sql and tablespaces_SID.sql) to be able to reconfigure the same database elsewhere
  • Don't create separate grid and oracle users if you plan to use Grid Infrastructure/ASM: as a DBA you probably manage both the ASM and the database instances. Instead of losing time switching between these 2 users, configure a single oracle user for both
  • Never use graphical tools to create a database, deploy an appliance or configure something: screenshots are far less convenient than plain text commands, which are easily repeatable and scriptable
  • Use OMF: configure only db_create_file_dest and db_recovery_file_dest and Oracle will multiplex the controlfile and the redologs in these areas. OMF also names the datafiles for you: there is no need for manual naming, and who really cares about the names of the datafiles?
  • Don't use multitenant: multitenant is fine, but we have been living with non-CDB databases for years and they work like a charm. You can still use the non-CDB architecture in 19c, so multitenant is not mandatory even on this latest version. Later migration from non-CDB to pluggable database is quite easy, so you can still adopt multitenant later
  • Keep your spfile clean: don't specify unused parameters or parameters that already have the given value as a default. Remove these parameters from the spfile using ALTER SYSTEM RESET parameter SCOPE=spfile SID='*'; (see the sketch just after this list)
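
A minimal sketch of the OMF and spfile-cleanup points above (the paths and the parameter being reset are examples only):

-- OMF: two parameters, and Oracle names and multiplexes the files for you
ALTER SYSTEM SET db_create_file_dest = '/u01/oradata' SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_recovery_file_dest_size = 2T SCOPE=BOTH SID='*';
ALTER SYSTEM SET db_recovery_file_dest = '/u01/fra' SCOPE=BOTH SID='*';

-- spfile cleanup: reset a parameter that only repeats its default value
ALTER SYSTEM RESET optimizer_index_cost_adj SCOPE=SPFILE SID='*';
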
Simplify patching

Patching can also be simplified:

  • Patch once a year, because you need to patch, but you don’t need to spend all your time applying each PSU every 3 months
  • Start with test/dev databases and take the time to test from the application
  • Don't wait too long to patch the other environments: production should be patched a few weeks after the first patched environment
Simplify Oracle*Net configuration

Simplifying also concerns Oracle*Net configuration:

  • Avoid configuring multiple listeners on a system because one is enough for all your databases
  • Put your Oracle*Net configuration files in /etc because you don't want multiple files in multiple homes
  • Keep your Oracle*Net configuration files clean and organized for increased readability
Make your database movable

Last but not least, one of the biggest mistakes is to create a strong dependency between a database and a system. How do you make your database easily movable? By configuring a standby database and using Data Guard or Dbvisit Standby. Moving your database to another server is then done within a few minutes with a single SWITCHOVER operation.
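
With the Data Guard broker, for example, that move is essentially a one-liner (the database names below are made up):

DGMGRL> connect sys@mydb_site1
DGMGRL> switchover to mydb_site2;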

Using standby databases makes your life easier for all of these purposes:

  • you need to move your server to another datacenter
  • a power outage happened on one site
  • you need to update hardware or software
  • you suspect a hardware problem impacting the database

Don't create standbys only for production databases: even development databases are some kind of production for developers. If a database cannot be down for 1 day, it needs a standby database.

The finest configuration is not to dedicate one server to the primaries and another to the standbys, but to dispatch the primaries between 2 identical servers on 2 sites, each database having a preferred server for its primary and its standby on the opposite server.

Conclusion

It's so easy to increase complexity without any good reason. Simplifying is the power of saying NO: no to interesting features and configurations that are not absolutely necessary. All you need from your databases is reliability, safety, availability and performance. Simplicity helps you get there.

The article Make Oracle database simple again! appeared first on Blog dbi services.

Philosophy 23

Jonathan Lewis - Wed, 2020-01-22 08:46

It’s a long time since I wrote a note with the “philosophy” theme, but a recent complaint about Oracle prompted me to suggest the following thought:

“If you write SQL that is technically incorrect it’s not Oracle’s fault if sometimes the SQL completes without an error.”

Consider the following fragment of code:

drop table t1 purge;

create table t1 (n1 number, n2 number);
insert into t1 values (1,1);
commit;

select n1 from t1 where n2 = to_number('x');
select n1 from t1 where n2 = to_number('x') and n1 = 2;

select 
        case 
                when 
                        0 in  (select n1 from t1 where n2 = to_number('x'))
                then
                        1
                else
                        0
        end
from
        dual
;

Clearly the first query must raise an error because ‘x’ can’t be converted to a number (until Oracle supplies a format option to read it as a Roman Numeral).

Clearly the second query must raise an error because it’s just the first query with an extra predicate on the end.

Clearly the third query must raise an error because it’s going to execute a subquery that must raise an error.

Here’s the output from running the code from SQL*Plus (on 12.2.0.1).

Table dropped.


Table created.


1 row created.


Commit complete.

select n1 from t1 where n2 = to_number('x')
                                       *
ERROR at line 1:
ORA-01722: invalid number



no rows selected


CASEWHEN0IN(SELECTN1FROMT1WHEREN2=TO_NUMBER('X'))THEN1ELSE0END
--------------------------------------------------------------
                                                             0

1 row selected.

The first query fails with the expected conversion error: ORA-01722: invalid number. The second and third queries run to completion because SQL is a declarative language, not a procedural language, and the optimizer is allowed to transform your query in any way that it thinks might make it complete more quickly.

Here’s the execution plan – with predicate information – for the second query:

-------------------------------------+-----------------------------------+
| Id  | Operation          | Name    | Rows  | Bytes | Cost  | Time      |
-------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT   |         |       |       |     2 |           |
| 1   |  TABLE ACCESS FULL | T1      |     1 |    26 |     2 |  00:00:01 |
-------------------------------------+-----------------------------------+

Predicate Information:
----------------------
1 - filter(("N1"=2 AND "N2"=TO_NUMBER('x')))

The optimizer has decided that it's more efficient to test for the constant 2 than it is to call the to_number() function, so it evaluated n1 = 2 first for each row and never had to check the second predicate because nothing got past the first.

The explanation for the successful completion of the third query is slightly different, but again it revolves around transforming for efficiency. Oracle will (very often) convert an IN subquery to an EXISTS subquery. In my example the resulting SQL looks like this (taken from the CBO (10053) trace file, with some cosmetic enhancement):


SELECT 
        CASE
                WHEN 
                        EXISTS (SELECT 0 FROM T1 WHERE N1=0 AND N2=TO_NUMBER('x')) 
                THEN 1 
                ELSE 0 
        END  "CASEWHEN0IN(SELECTN1FROMT1WHEREN2=TO_NUMBER('X'))THEN1ELSE0END" 
FROM
        DUAL

Note how the “guaranteed to fail” subquery has an extra predicate added as the IN subquery is transformed into an EXISTS subquery and, just like the previous example, the extra predicate is applied before the to_number() predicate, and there’s no data to match the first predicate so the to_number() function never gets called and never gets a chance to raise an exception.

You could argue, of course, that the optimizer should spot the attempt to generate a constant and evaluate it (if possible) at parse time and raise the error before the runtime engine even sees the query – and it wouldn't be too hard to make a case for that – but it would only take a few seconds of thought to create slightly more complex examples that demonstrate the point I've been making.

Bottom line(s):

1) Use the correct datatypes for your data.

2) Don’t assume that a statement that raises an exception in some circumstances will result in an exception if it is subsequently embedded in a more complex statement. The optimizer is non-procedural and may transform your statement in a way that bypasses your bad design.
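
As a small illustration of that second point, 12.2 (the release used above) also lets you make the conversion itself safe instead of relying on predicate ordering. In the sketch below, v1 stands for a hypothetical varchar2 column that may or may not hold numeric strings:

rem     v1 is a hypothetical varchar2 column; rows holding non-numeric
rem     values simply return NULL instead of raising ORA-01722

select  n1
from    t1
where   to_number(v1 default null on conversion error) = 2
;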

 

Footnote:

I probably shouldn’t say this – it will only encourage someone to do the wrong thing – but for the very simplest examples you can show that the ordered_predicates hint still works:


SQL> select /*+ ordered_predicates */  n1 from t1 where n2 = to_number('x') and n1 = 2;
select /*+ ordered_predicates */  n1 from t1 where n2 = to_number('x') and n1 = 2
                                                                  *
ERROR at line 1:
ORA-01722: invalid number

[AZ-103] Roles And Responsibilities Of A Microsoft Azure Administrator

Online Apps DBA - Wed, 2020-01-22 07:07

AZ-103 | Roles and Responsibilities as a Microsoft Azure Administrator Want to know the Roles and Responsibilities of a Microsoft Azure Administrator or knowledge and role-specific skills he/she should possess for bigger & better job opportunities? Check out our blog at https://k21academy.com/az10312 which covers: ▪ What Is Microsoft Azure Certification? ▪ Why Is Azure Certification […]

The post [AZ-103] Roles And Responsibilities Of A Microsoft Azure Administrator appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Manquehue Hotels Expands Properties, Personalizes Guest Experiences with Oracle

Oracle Press Releases - Wed, 2020-01-22 07:00
Press Release
Manquehue Hotels Expands Properties, Personalizes Guest Experiences with Oracle Leading hospitality technology fuels hotel chain growth, while fostering guest loyalty

Redwood Shores, Calif.—Jan 22, 2020

For 36 years, Manquehue Hotels has been delighting guests with unique properties and top-notch service. When the group looked to expand its footprint in Chile with three new hotels, they turned to Oracle Hospitality to help support this rapid growth. With Oracle, Manquehue was able to efficiently establish internal operations at the new hotels and enhance the overall guest experience for its customers.

“As we continue to grow our hotels and chain, we need a system in place that provides us with standardized, timely and online information to better serve our customers,” said Abner Cayul, Hotel Manquehue corporate general manager. “Oracle Hospitality was the only solution that could help us expand to be one of the most prominent hotel chains in Chile. We look forward to utilizing their offerings to fuel our continued growth.”

Oracle Hospitality OPERA Property features a range of innovations, including deeper customer insights and comprehensive functionality for all areas of hotel management. With it, Hotel Manquehue can create in-depth customer profiles to tailor experiences to each customer’s specific preferences, such as suggesting certain dining venues and luxury items, a personal touch that has helped the group increase customer loyalty. Hotel Manquehue also relies on OPERA for improved internal operations for employees, offering them immediate access to 24/7 support and connection to electronic invoices for easy back-office reporting.

The chain also uses the Oracle MICROS Simphony Cloud Point of Sale system to support a wide range of food and beverage operations within the hotel, including restaurant and room service amenities. Not only does MICROS Simphony streamline employee ordering and delivery, it also provides key data on what items are most popular and when. Conversely, it provides insight into which items are not selling, providing more informed decisions on menu planning to better serve guests while improving inventory management and reducing waste.

With hoteliers increasingly focusing on restaurant operations for revenue growth, it’s imperative for them to tap the advantages of an integrated property management and point-of-sale platform. The combination of OPERA and MICROS Simphony provides hoteliers with a comprehensive view of all their operations – including food and beverage – and insightful analysis of the contributions of each. Such integration also yields one-stop access to 360-degree guest profiles, giving staff detailed information about guests’ preferences and behavior anywhere on property.

“From their favorite dining options to customized rooms, today’s guests want an unprecedented level of personalization when enjoying their visit,” said David Meltzer, Group Vice President, Oracle Hospitality. “Hotel Manquehue understands this and uses the latest technology to better understand and serve the needs and desires of their customers every step of the way. As they expand throughout Chile, Oracle will continue to provide the intelligence and tools both management and front-line employees will need to ensure seamless, unforgettable guest experiences that create loyalty and bring guests back to their properties.”

Watch the Manquehue Hotels Video

In this video, Abner Cayul, corporate general manager of Manquehue Hotels shares how Oracle Hospitality Solutions is helping to support the hotel chain’s strategic growth.

Contact Info
Christine Allen
Oracle
+1.603.743.4534
christine.allen@oracle.com
About Oracle Hospitality

Oracle Hospitality brings over 40 years of experience in providing technology solutions to independent hoteliers, global and regional chains, gaming, and cruise lines. We provide hardware, software, and services that allow our customers to act on rich data insights that deliver personalized guest experiences, maximize profitability and encourage long-term loyalty. Our solutions include platforms for property management, point-of-sale, distribution, reporting and analytics all delivered from the cloud to lower IT cost and maximize business agility. Oracle Hospitality’s OPERA is recognized globally as the leading property management platform and continues to serve as a foundation for industry innovation. Learn more about Oracle’s Hospitality Solutions here.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Christine Allen

  • +1.603.743.4534

Add a deployment to Oracle GoldenGate 19c Microservices

DBASolved - Tue, 2020-01-21 13:33

Once you have an up and running Oracle GoldenGate Microservices environment, there may come a time when you need to add another deployment to the environment.  Adding deployments is easily done using Oracle GoldenGate Configuration Assistant (OGGCA).  In the below video, I show you how to add a deployment to an existing ServiceManager.  Additionally, I […]

The post Add a deployment to Oracle GoldenGate 19c Microservices appeared first on DBASolved.

Categories: DBA Blogs

If you’d like your ads to show on certain sites across the Internet, you can add these websites as:

VitalSoftTech - Tue, 2020-01-21 09:53

There are certain guesses you make about what would be the right course of action while displaying an ad. If you’d like to show your ads on certain sites across the Internet what should you add them as?  The obvious way is to include those websites in your ad. The question of how you should […]

The post If you’d like your ads to show on certain sites across the Internet, you can add these websites as: appeared first on VitalSoftTech.

Categories: DBA Blogs

Index Engineering

Jonathan Lewis - Mon, 2020-01-20 10:53

This is a case study based on a question that appeared on the Oracle Developer Community forum a few days ago.

What I’m aiming to present in this note is the pattern of thinking that you should adopt in cases like this. The final suggestion in this note isn’t necessarily the best answer to the question posed (at the time of writing the OP hadn’t supplied enough information to allow anyone to come up with a best solution), but the point of the exercise is to talk about the journey and (perhaps) remind you of some of the extreme engineering you can do with indexes.

The (massaged) problem statement is as follows:

I have a table of more than 200 million rows that is used for inserts, updates and queries. I have a query on this table and want to know what index I could create to speed up the query.

The supplied definition of the table was not consistent with the names used in the query, so I’ve had to do a little editing, but table, current indexes, and query were as follows:

rem
rem     Script:         extreme_indexing.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2020
rem

create table tbl (
        r_id                    varchar2(30) not null,
        c_id                    number,
        n_id                    varchar2(40),
        created_by              varchar2(30) not null,
        last_modified_by        varchar2(30),
        c_status                char(1),
        a_action                char(1),
        r_creation_dt           timestamp(6),
        cnt                     number(38)
)
;

create        index tbl_1 on tbl(cnt, r_creation_dt, c_id, a_action, last_modified_by);  
create        index tbl_2 on tbl(cnt, c_status, r_creation_dt);  
create bitmap index tbl_3 on tbl(c_status); 

select
        /*+ index(tbl) */
        c_id,
        a_action,
        cnt,
        last_modified_by
from
        tbl
where
        c_status in(
            'N',
            'F'
        )
and     cnt <= 5 -- > comment to avoid wordpress format issue
and     r_creation_dt is not null
group by
        cnt,
        r_creation_dt,
        c_id,
        a_action,
        last_modified_by,
        c_status
order by
        r_creation_dt
fetch 
        first 1000 rows only
;


The first thing to point out is that the bitmap index tbl_i3 is almost certainly a bad idea – bitmaps and transactional activity do not mix. It seems quite likely that the OP in this case had read one of the many Internet notes that makes the “not totally wrong” but very misleading statement “bitmap indexes are good when you have a small number of distinct values”, and applied the principle to a column that looks like a “status” column holding only a few distinct values.

Having got that error out of the way we can start to think about the query.  It’s using the (fairly new) “Fetch first N rows” syntax, which means we may have to find a lot of data and sort it before returning a subset: performance issues can be very deceptive in cases like this because we might want a small result set but have to do a large amount of work to get it.

In this case we’re after the first 1,000 rows – which makes you think that maybe there will be a lot of data satisfying the query. So we have two targets to meet to optimise the query:

  • acquire the data we need as efficiently as possible
  • post-process the data we acquire to derive the 1,000 rows as efficiently as possible

The query is just a single table access – which means we’re either going to do a full tablescan or find a good indexed access path, we don’t have to worry about join strategies.  So the first thing to consider is the volume (and scatter) of data that matches the predicates. If there’s only a “small” amount of data where “c_status in (‘N’,’F’) and cnt <= 5” then an index on – or starting with – (c_status, cnt) may be very helpful. (Note how I’ve specified the column with the equality predicate first – that’s part of a generic strategy for creating multi-column indexes.)

This, though, raises several questions that need to be answered:

  • How small is “small” ? In the context of 200 million rows, 100,000 is small; but if you had to visit 100,000 different blocks in the table and do 100,000 real single block reads from disc that might still be a very bad thing.
  • How many rows have status ‘N’, how many have status ‘F’, how many have cnt <= 5 ? Maybe a really tiny number of rows have cnt<=5 and lots have c_status in (‘N’,’F’) which could make this a case where ignoring the generic column-ordering strategy would be very effective.  Maybe the number of rows satisfying the individual conditions is high but the number satisfying the combination is very low.
  • Is this the ONLY combination of c_status and cnt that is of interest, or (for example) was 5 just a number that was picked as an example? Would different c_status values be of interest? Would some required combinations of c_status and cnt have to use completely different execution paths for the best performance?

I’m going to make some decisions in order to proceed – they may be totally wrong as far as the OP is concerned – so remember that this note is just for discussion purposes. Let’s assume that the common query is always exactly as stated. Perhaps it’s a query that runs every few minutes to clear up some outstanding work with the expectation that new rows matching the query keep appearing while older rows are processed, change status, and disappear from the result set. Let’s also assume that the result set is always “small”, and that it’s small because ‘N’ and ‘F’ are rare (even if the total number of rows with cnt <= 5 is large).
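
As a side note, the original question didn't show the create statement for the index that the next step relies on; a sketch of it (named tbl_i1 here purely to match the execution plan that follows) would simply be:

create index tbl_i1 on tbl(c_status, cnt);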

With these assumptions we could start by creating an index on (c_status, cnt), which gets us to exactly the rows we want from the table with no “throwaway” after visiting the table. Here’s the execution plan if that’s our choice of index (running on 12.2.0.1, and with an index() hint to force the use of the index when necessary):

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |        |      1 |        |   1000 |00:00:00.03 |    1573 |     34 |       |       |          |
|*  1 |  VIEW                           |        |      1 |   1000 |   1000 |00:00:00.03 |    1573 |     34 |       |       |          |
|*  2 |   WINDOW NOSORT STOPKEY         |        |      1 |   6451 |   1000 |00:00:00.03 |    1573 |     34 |   219K|   219K|          |
|   3 |    SORT GROUP BY                |        |      1 |   6451 |   1001 |00:00:00.03 |    1573 |     34 |  1186K|   567K| 1054K (0)|
|   4 |     INLIST ITERATOR             |        |      1 |        |  13142 |00:00:00.02 |    1573 |     34 |       |       |          |
|*  5 |      TABLE ACCESS BY INDEX ROWID| TBL    |      2 |  13743 |  13142 |00:00:00.02 |    1573 |     34 |       |       |          |
|*  6 |       INDEX RANGE SCAN          | TBL_I1 |      2 |  13743 |  13142 |00:00:00.01 |      33 |     34 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=1000)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "R_CREATION_DT")<=1000)
   5 - filter("R_CREATION_DT" IS NOT NULL)
   6 - access((("C_STATUS"='F' OR "C_STATUS"='N')) AND "CNT"<=5)

I’ve enabled rowsource_execution_statistics (alter session set statistics_level = all) and pulled my execution plan from memory. As you can see from the A-rows for the index range scan and table access by index rowid, I’ve identified and acquired exactly the rows from the table that might be relevant (all 13,142 of them), then I’ve done a sort group by of all that data, sorting in a way that means the rows will be produced in exactly the order I need for the windowing function that Oracle will use to select the 1,000 rows I want.

If you’re curious, here (courtesy of dbms_utility.expand_sql_text() but cosmetically enhanced) is the transformed SQL that was actually optimised and executed:

SELECT 
        A1.C_ID C_ID,A1.A_ACTION A_ACTION,A1.CNT CNT,A1.LAST_MODIFIED_BY LAST_MODIFIED_BY 
FROM  (
        SELECT 
                /*+ INDEX (A2) */ 
                A2.C_ID C_ID,
                A2.A_ACTION A_ACTION,
                A2.CNT CNT,
                A2.LAST_MODIFIED_BY LAST_ MODIFIED_BY,
                A2.R_CREATION_DT rowlimit_$_0,
                ROW_NUMBER() OVER ( ORDER BY A2.R_CREATION_DT) rowlimit_$$_rownumber 
        FROM 
                TEST_USER.TBL A2 
        WHERE 
                (A2.C_STATUS='N' OR A2.C_STATUS='F') 
        AND     A2.CNT<=5 
        AND     A2.R_CREATION_DT IS NOT NULL 
        GROUP BY 
                A2.CNT,A2.R_CREATION_DT,A2.C_ID,A2.A_ACTION,A2.LAST_MODIFIED_BY,A2.C_STATUS
        ) A1 
WHERE 
        A1.rowlimit_$$_rownumber<=1000 
ORDER BY 
        A1.rowlimit_$_0

There are three main drawbacks to this choice of index.

  • I’ve acquired all the rows in the table that match the predicate even though I only really needed a subset
  • I’ve done a massive sort
  • I’ve created an index that includes every row in the table

Remember that the OP has a table of 200M rows, and we are assuming (pretending) that only a very small fraction of them match the initial predicates. Creating an index on 200M rows because we’re interested in only a few tens of thousands is wasteful of space and (given we have a “status” column) probably wasteful of processing resources as the status moves through several values. So I’m going to address that issue first. Let’s create a “function-based” index that ignores most of the data, and change the code to take advantage of that index – but since this is 12c, let’s do it by adding a virtual column and indexing that column.


alter table tbl add nf_r_creation_dt invisible 
        generated always as (
                case
                        when c_status in ('N','F') and cnt <= 5
                        then r_creation_dt
                end
        ) virtual
/

create index tbl_i2 on tbl(nf_r_creation_dt)
/

I’ve introduced an invisible virtual column called nf_r_creation_dt (nf_ for status N/F) which uses a CASE expression matching the original predicate to return the r_creation_dt for rows that match and null for all the other (ca. 200M) rows. So when I create an index on the column the only entries in the index are for rows that I might want to see.

I have to edit the SQL to match – which simply means changing every appearance of r_creation_dt to nf_r_creation_dt, and eliminating the original predicate giving the following text and execution plan:


select
        /*+ index(tbl) */
        c_id,
        a_action,
        cnt,
        last_modified_by
from
        tbl
where
        nf_r_creation_dt is not null
group by
        nf_r_creation_dt,
        cnt,
        c_id,
        a_action,
        last_modified_by,
        c_status
order by
        nf_r_creation_dt
fetch 
        first 1000 rows only    -- 1,000 rows in the original
/

---------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |        |      1 |        |   1000 |00:00:00.02 |   13139 |     35 |       |       |          |
|*  1 |  VIEW                          |        |      1 |   1000 |   1000 |00:00:00.02 |   13139 |     35 |       |       |          |
|*  2 |   WINDOW NOSORT STOPKEY        |        |      1 |     48 |   1000 |00:00:00.02 |   13139 |     35 | 73728 | 73728 |          |
|   3 |    SORT GROUP BY               |        |      1 |     48 |   1001 |00:00:00.02 |   13139 |     35 |  1116K|   556K|  991K (0)|
|   4 |     TABLE ACCESS BY INDEX ROWID| TBL    |      1 |   2500 |  13142 |00:00:00.02 |   13139 |     35 |       |       |          |
|*  5 |      INDEX FULL SCAN           | TBL_I2 |      1 |  13142 |  13142 |00:00:00.01 |      36 |     35 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=1000)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "NF_R_CREATION_DT")<=1000)
   5 - filter("NF_R_CREATION_DT" IS NOT NULL)

The plan shows an index full scan on the new index. Since the index holds only those rows that might be interesting this isn’t a threat. However we still have to visit all the matching rows in the table – and that might result in more random I/O than we like. So the next step in enhancing performance is to consider adding all the columns we want to the index. There’s a little problem with that: if we add the columns as they are we will go back to having an index entry for every single row in the table so we need to use the same CASE mechanism to create more virtual columns:

alter table tbl add nf_c_status invisible 
        generated always as (
                case
                        when c_status in ('N','F') and cnt <= 5
                        then c_status
                end
        ) virtual
/

alter table tbl add nf_last_modified_by invisible 
        generated always as (
                case
                        when c_status in ('N','F') and cnt <= 5
                        then last_modified_by
                end
        ) virtual
/

alter table tbl add nf_a_action invisible 
        generated always as (
                case
                        when c_status in ('N','F') and cnt <= 5
                        then a_action
                end
        ) virtual
/

alter table tbl add nf_c_id invisible 
        generated always as (
                case
                        when c_status in ('N','F') and cnt <= 5
                        then c_id
                end
        ) virtual
/

alter table tbl add nf_cnt invisible 
        generated always as (
                case
                        when c_status in ('N','F') and cnt <= 5
                        then cnt
                end
        ) virtual
/

create index tbl_i3 on tbl(
        nf_r_creation_dt,
        nf_cnt,
        nf_c_id,
        nf_a_action,
        nf_last_modified_by,
        nf_c_status
)
;

It looks like a bit of a pain to go through all this rigmarole to get all those columns that are null most of the time but echo the original values when the rows match our original predicate; and then we have to modify the query to match:


select
        /*+ index(tbl) */
        nf_c_id,
        nf_a_action,
        nf_cnt,
        nf_last_modified_by
from
        tbl
where
        nf_r_creation_dt is not null
group by
        nf_r_creation_dt,
        nf_cnt,
        nf_c_id,
        nf_a_action,
        nf_last_modified_by,
        nf_c_status
order by
        nf_r_creation_dt
fetch 
        first 1000 rows only    -- 1,000 rows in the original
/

But the big payoff comes from the execution plan:


----------------------------------------------------------------------------------------------------
| Id  | Operation              | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |        |      1 |        |   1000 |00:00:00.01 |      74 |     12 |
|*  1 |  VIEW                  |        |      1 |   1000 |   1000 |00:00:00.01 |      74 |     12 |
|*  2 |   WINDOW NOSORT STOPKEY|        |      1 |   2500 |   1000 |00:00:00.01 |      74 |     12 |
|   3 |    SORT GROUP BY NOSORT|        |      1 |   2500 |   1001 |00:00:00.01 |      74 |     12 |
|*  4 |     INDEX FULL SCAN    | TBL_I3 |      1 |   2500 |   1003 |00:00:00.01 |      74 |     12 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=1000)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "NF_R_CREATION_DT")<=1000)
   4 - filter("NF_R_CREATION_DT" IS NOT NULL)

Notice how the SORT GROUP BY operation is a NOSORT, and the WINDOW operation is both NOSORT and STOPKEY?

We’ve got the smallest index possible, one that only gets modified as rows move into, or out of, the interesting state; and when we run the query Oracle does a full scan of the index, maintaining “running totals” but stopping as soon as it has aggregated enough results.

tl;dr

For very special cases it’s really amazing what you can (sometimes) do – if you can modify the code – with carefully engineered indexes to minimise the work done by a query AND the work done maintaining the infrastructure needed for that query. Virtual columns are a fantastic aid, especially now that 12c allows them to be invisible.

Video : Schema Only Accounts in Oracle Database 18c Onward

Tim Hall - Mon, 2020-01-20 01:53

Today’s video is a demonstration of schema only accounts, introduced in Oracle Database 18c.

This is based on the following articles.

The star of today’s video is Paul Vallee, of Pythian and Tehama fame.

Cheers

Tim…


PostgreSQL 13: parallel vacuum for indexes

Yann Neuhaus - Mon, 2020-01-20 01:06

Because of its implementation of MVCC, PostgreSQL needs a way to clean up old/dead rows, and this is the responsibility of vacuum. Up to PostgreSQL 12 this is done table per table and index per index. There are a lot of parameters to fine tune autovacuum, but none of them allowed vacuum to run in parallel against a relation. The only option you had to allow autovacuum to do more work in parallel was to increase autovacuum_max_workers so that more relations can be worked on at the same time. Working against multiple indexes of the same table was not possible. Once PostgreSQL 13 is released this will change.
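
For reference, raising that limit is a one-liner, although it only lets autovacuum work on more relations at the same time, never on several indexes of one relation (the value is just an example):

-- the default is 3; the change only takes effect after a restart of the instance
alter system set autovacuum_max_workers = 6;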

When you have a look at the help of vacuum you will notice a new option:

postgres=# \h vacuum
Command:     VACUUM
Description: garbage-collect and optionally analyze a database
Syntax:
VACUUM [ ( option [, ...] ) ] [ table_and_columns [, ...] ]
VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ table_and_columns [, ...] ]

where option can be one of:

    FULL [ boolean ]
    FREEZE [ boolean ]
    VERBOSE [ boolean ]
    ANALYZE [ boolean ]
    DISABLE_PAGE_SKIPPING [ boolean ]
    SKIP_LOCKED [ boolean ]
    INDEX_CLEANUP [ boolean ]
    TRUNCATE [ boolean ]
    PARALLEL integer

and table_and_columns is:

    table_name [ ( column_name [, ...] ) ]

URL: https://www.postgresql.org/docs/devel/sql-vacuum.html

By providing a positive integer to the “PARALLEL” option you tell vacuum how many background workers should be used to vacuum indexes for a given table (0 will disable parallel processing). Let's do a small test setup to demonstrate this:

postgres=# create table t1 as select i as a, i::text as b, now() as c from generate_series(1,3000000) i;
SELECT 3000000
postgres=# create index i1 on t1(a);
CREATE INDEX
postgres=# 
postgres=# create index i2 on t1(b);
CREATE INDEX
postgres=# create index i3 on t1(c);
CREATE INDEX
postgres=# -- the original listing omits the fourth index (i4) that appears in the verbose output below; its exact definition is not shown, so this one is only a guess
postgres=# create index i4 on t1(a,b,c);
CREATE INDEX

One table, four indexes. If we go for parallel 4 we should see four background workers doing the work against the indexes:

postgres=# update t1 set a=5,b='ccc',c=now() where mod(a,5)=0;
UPDATE 600000
postgres=# vacuum (parallel 4) t1;
VACUUM

As the table and the indexes are quite small we have to be quick, but at least 2 parallel workers show up in the process list for the vacuum operation:

postgres 16688 15925 13 07:30 ?        00:01:07 postgres: postgres postgres [local] VACUUM
postgres 19184 15925  0 07:39 ?        00:00:00 postgres: parallel worker for PID 16688   
postgres 19185 15925  0 07:39 ?        00:00:00 postgres: parallel worker for PID 16688   

Nice. Please note that indexes are only considered for parallel vacuum when they meet the min_parallel_index_scan_size criterion. For a FULL vacuum there is no parallel processing either.
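
You can check that threshold, and lower it for testing, like this (512kB is the standard default):

postgres=# show min_parallel_index_scan_size;
 min_parallel_index_scan_size 
------------------------------
 512kB
(1 row)
postgres=# set min_parallel_index_scan_size = '128kB';
SET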

Btw: the current maximum value is 1024:

postgres=# vacuum (parallel -4) t1;
ERROR:  parallel vacuum degree must be between 0 and 1024
LINE 1: vacuum (parallel -4) t1;

You can also see the parallel stuff on the verbose output:

postgres=# vacuum (parallel 4, verbose true) t1;
INFO:  vacuuming "public.t1"
INFO:  launched 2 parallel vacuum workers for index vacuuming (planned: 2)
INFO:  scanned index "i2" to remove 600000 row versions by parallel vacuum worker
DETAIL:  CPU: user: 0.24 s, system: 0.06 s, elapsed: 0.89 s
INFO:  scanned index "i1" to remove 600000 row versions
DETAIL:  CPU: user: 0.17 s, system: 0.10 s, elapsed: 1.83 s
INFO:  scanned index "i3" to remove 600000 row versions by parallel vacuum worker
DETAIL:  CPU: user: 0.16 s, system: 0.14 s, elapsed: 1.69 s
INFO:  scanned index "i4" to remove 600000 row versions by parallel vacuum worker
DETAIL:  CPU: user: 0.25 s, system: 0.09 s, elapsed: 1.17 s
INFO:  "t1": removed 600000 row versions in 20452 pages
DETAIL:  CPU: user: 0.17 s, system: 0.16 s, elapsed: 1.43 s
INFO:  index "i1" now contains 3000000 row versions in 14308 pages
DETAIL:  600000 index row versions were removed.
1852 index pages have been deleted, 640 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  index "i2" now contains 3000000 row versions in 14305 pages
DETAIL:  600000 index row versions were removed.
1851 index pages have been deleted, 640 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  index "i3" now contains 3000000 row versions in 14326 pages
DETAIL:  600000 index row versions were removed.
3941 index pages have been deleted, 1603 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  index "i4" now contains 3000000 row versions in 23391 pages
DETAIL:  600000 index row versions were removed.
5527 index pages have been deleted, 2246 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  "t1": found 600000 removable, 3000000 nonremovable row versions in 21835 out of 22072 pages
DETAIL:  0 dead row versions cannot be removed yet, oldest xmin: 499
There were 132 unused item identifiers.
Skipped 0 pages due to buffer pins, 237 frozen pages.
0 pages are entirely empty.
CPU: user: 0.75 s, system: 0.36 s, elapsed: 5.07 s.
INFO:  vacuuming "pg_toast.pg_toast_16392"
INFO:  index "pg_toast_16392_index" now contains 0 row versions in 1 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  "pg_toast_16392": found 0 removable, 0 nonremovable row versions in 0 out of 0 pages
DETAIL:  0 dead row versions cannot be removed yet, oldest xmin: 499
There were 0 unused item identifiers.
Skipped 0 pages due to buffer pins, 0 frozen pages.
0 pages are entirely empty.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
VACUUM

The article PostgreSQL 13: parallel vacuum for indexes appeared first on Blog dbi services.

Running the (Segment) Space Advisor - on a Partitioned Table

Hemant K Chitale - Sat, 2020-01-18 08:30
Here is a quick demo of running the Segment Space Advisor manually.

I need to start with the ADVISOR privilege

$sqlplus

SQL*Plus: Release 12.2.0.1.0 Production on Sat Jan 18 22:02:10 2020

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Enter user-name: system
Enter password:
Last Successful login time: Sat Jan 18 2020 22:00:32 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> grant advisor to hemant;

Grant succeeded.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production


I can then connect with my account to run the Advisor

$sqlplus

SQL*Plus: Release 12.2.0.1.0 Production on Sat Jan 18 22:02:35 2020

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Enter user-name: hemant
Enter password:
Last Successful login time: Sat Jan 18 2020 21:50:05 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL>
SQL> DECLARE
l_object_id NUMBER;
l_task_name VARCHAR2(32767) := 'Advice on My SALES_DATA Table';

BEGIN
DBMS_ADVISOR.create_task (
advisor_name => 'Segment Advisor',
task_name => l_task_name
);

DBMS_ADVISOR.create_object (
task_name => l_task_name,
object_type => 'TABLE',
attr1 => 'HEMANT',
attr2 => 'SALES_DATA',
attr3 => NULL,
attr4 => NULL,
attr5 => NULL,
object_id => l_object_id
);

DBMS_ADVISOR.set_task_parameter (
task_name => l_task_name,
parameter => 'RECOMMEND_ALL',
value => 'TRUE');

DBMS_ADVISOR.execute_task(task_name => l_task_name);
end;
/

2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29
PL/SQL procedure successfully completed.

SQL>


I can then review the advice:

SQL> set serveroutput on
begin
FOR cur_rec IN (SELECT f.impact,
o.type,
o.attr1,
o.attr2,
o.attr3,
o.attr4,
f.message,
f.more_info
FROM dba_advisor_findings f, dba_advisor_objects o
WHERE f.object_id = o.object_id
AND f.task_name = o.task_name
AND f.task_name = 'Advice on My SALES_DATA Table'
ORDER BY f.impact DESC)
LOOP
DBMS_OUTPUT.put_line('..');
DBMS_OUTPUT.put_line('Type : ' || cur_rec.type);
DBMS_OUTPUT.put_line('Schema : ' || cur_rec.attr1);
DBMS_OUTPUT.put_line('Table Name : ' || cur_rec.attr2);
DBMS_OUTPUT.put_line('Partition Name : ' || cur_rec.attr3);
DBMS_OUTPUT.put_line('Tablespace Name : ' || cur_rec.attr4);
DBMS_OUTPUT.put_line('Message : ' || cur_rec.message);
DBMS_OUTPUT.put_line('More info : ' || cur_rec.more_info);
END LOOP;
end;
/

SQL> 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 ..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2015
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:8192: Reclaimable Space :57344:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2016
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:1016: Reclaimable Space :64520:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2017
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:1016: Reclaimable Space :64520:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2018
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:8192: Reclaimable Space :57344:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2019
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:8192: Reclaimable Space :57344:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_MAXVALUE
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:8192: Reclaimable Space :57344:

PL/SQL procedure successfully completed.

SQL>


Thus, it actually reports for each Partition in the table.
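If you also want the recommended actions (shrink, reorganize, and so on) and not only the findings, a similar query against the recommendation views can be used. This is just a sketch and assumes the same task name as above:

SELECT r.rec_id,
       r.benefit,
       a.command,
       a.attr1
FROM   dba_advisor_recommendations r,
       dba_advisor_actions         a
WHERE  a.task_name = r.task_name
AND    a.rec_id    = r.rec_id
AND    r.task_name = 'Advice on My SALES_DATA Table'
ORDER  BY r.rec_id;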


Note : Script based on script by Tim Hall  (@oraclebase)  at https://oracle-base.com/dba/script?category=10g&file=segment_advisor.sql


Categories: DBA Blogs

Deploying your own PostgreSQL image on Nutanix Era – 2 – Deploying a new PostgreSQL VM

Yann Neuhaus - Sat, 2020-01-18 00:00

In the last post I described how you can create your own PostgreSQL image in Nutanix Era. In Nutanix wording this is a “Software profile”. This profile can now be used to deploy PostgreSQL VMs with just a few clicks or by using the API. In this post we’ll look at how this can be done and whether it is really as easy as Nutanix promises. We’ll be doing it by using the graphical console first and then by using the API. Let’s go.

For deploying new databases we obviously need to go to the database section of Era:

For now we are going to deploy a single instance:

Our new software profile is available and this is what we use. Additionally a public ssh key should be provided for accessing the node:

Provide the details for the new instance:

Time machine allows you to go back in time and creates automatic storage snapshots according to the schedule you define here:

SLAs define retention policies for the snapshots. I will not cover that here.

Kick it off and wait for the task to finish:

A few minutes later it is done:

Our new database is ready:

From a deployment point of view this is super simple: A few clicks and a new PostgreSQL server is ready to use in minutes. The API call to do the same on the command line would be this:

curl -k -X POST \
	https://10.38.11.9/era/v0.8/databases/provision \
	-H 'Content-Type: application/json' \
	-H 'Authorization: Basic YWRtaW46bngyVGVjaDAwNyE=' \
	-d \
	'{"databaseType":"postgres_database","databaseName":"db2","databaseDescription":"db2","clusterId":"ccdab636-2a11-477b-a358-2b8db0382421","softwareProfileId":"aa099964-e17c-4264-b047-00881dc5e2f8","softwareProfileVersionId":"7ae7a44c-d7c5-46bc-bf0f-f9fe7665f537","computeProfileId":"fb1d20bb-38e4-4964-852f-6209990402c6","networkProfileId":"8ed5214c-4e2a-4ce5-a899-a07103bc60cc","dbParameterProfileId":"3679ab5b-7688-41c4-bcf3-9fb4491544ea","newDbServerTimeZone":"Europe/Zurich","timeMachineInfo":{"name":"db2_TM","description":"","slaId":"4d9dcd6d-b6f8-47f0-8015-9e691c1d3cf4","schedule":{"snapshotTimeOfDay":{"hours":1,"minutes":0,"seconds":0},"continuousSchedule":{"enabled":true,"logBackupInterval":30,"snapshotsPerDay":1},"weeklySchedule":{"enabled":true,"dayOfWeek":"FRIDAY"},"monthlySchedule":{"enabled":true,"dayOfMonth":"17"},"quartelySchedule":{"enabled":true,"startMonth":"JANUARY","dayOfMonth":"17"},"yearlySchedule":{"enabled":false,"dayOfMonth":31,"month":"DECEMBER"}},"tags":[],"autoTuneLogDrive":true},"provisionInfo":[{"name":"application_type","value":"postgres_database"},{"name":"nodes","value":"1"},{"name":"listener_port","value":"5432"},{"name":"proxy_read_port","value":"5001"},{"name":"proxy_write_port","value":"5000"},{"name":"database_size","value":10},{"name":"working_dir","value":"/tmp"},{"name":"auto_tune_staging_drive","value":true},{"name":"enable_synchronous_mode","value":false},{"name":"dbserver_name","value":"postgres-2"},{"name":"dbserver_description","value":"postgres-2"},{"name":"ssh_public_key","value":"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDgJb4co+aaRuyAJv8F6fsz2YgO0ZHldX6qS22c813iDeUGWP/1bGhF598uODlgK5nx29sW2WTlcmorZuCgHC2BJfzhL1xsL5Ca94uAEg48MbA+1+Woe49mF6bPDJWLXqC0YUpCDBZI9VHnrij4QhX2r3G87W0WNnNww1POwIJN3xpVq/DnDWye+9ZL3s+eGR8d5f71iKnBo0BkcvY5gliOtM/DuLT7Cs7KkjPgY+S6l5Cztvcj6hGO96zwlOVN3kjB18RH/4q6B7YkhSxgTDMXCdoOf9F6BIhAjZga4lodHGbkMEBW+BJOnYnFTl+jVB/aY1dALECnh4V+rr0Pd8EL daw@ltdaw"},{"name":"db_password","value":"postgres"}]}'

As said, provisioning is easy. Now let’s see how the instance got created. Using the IP address that was assigned, the postgres user and the public ssh key we provided, we can connect to the VM:

ssh postgres@10.38.11.40
The authenticity of host '10.38.11.40 (10.38.11.40)' can't be established.
ECDSA key fingerprint is SHA256:rN4QzdL2y55Lv8oALnW6pBQzBKOInP+X4obiJrqvK8o.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.38.11.40' (ECDSA) to the list of known hosts.
Last login: Fri Jan 17 18:48:26 2020 from 10.38.11.9

The first thing I would try is to connect to PostgreSQL:

-bash-4.2$ psql postgres
psql (11.6 dbi services build)
Type "help" for help.

postgres=# select version();
                                                          version
----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 11.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit
(1 row)

postgres=#

Ok, that works. Before I created the software profile in the last post I had installed our DMK, which gets loaded automatically because the required lines are added to “.bash_profile” (you can see that in the last post because PS1 is not the default). But here I did not get our environment because Era has overwritten “.bash_profile”:

-bash-4.2$ cat .bash_profile
export PATH=/usr/lib64/qt-3.3/bin:/usr/local/bin:/usr/bin:/u01/app/postgres/product/11/db_6/bin:/u01/app/postgres/product/11/db_6/bin
export PGDATA=/pg_data_544474d1_822c_4912_8db4_7c1c7f45df54
export PGDATABASE=postgres
export PGUSER=postgres
export PGPORT=5432
export ANSIBLE_LIBRARY=/opt/era_base/era_engine/drivers/nutanix_era/era_drivers/AnsibleModules/bin

It would be great if the original “.bash_profile” were still there so that custom adjustments are not lost.

Looking at the mountpoints:

-bash-4.2$ df -h
Filesystem                                                                                                                          Size  Used Avail Use% Mounted on
devtmpfs                                                                                                                            7.8G     0  7.8G   0% /dev
tmpfs                                                                                                                               7.8G  8.0K  7.8G   1% /dev/shm
tmpfs                                                                                                                               7.8G  9.7M  7.8G   1% /run
tmpfs                                                                                                                               7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/centos_centos7postgres12-root                                                                                            26G  2.4G   24G   9% /
/dev/sda1                                                                                                                          1014M  149M  866M  15% /boot
/dev/sdb                                                                                                                             27G   74M   26G   1% /u01/app/postgres/product/11/db_6
tmpfs                                                                                                                               1.6G     0  1.6G   0% /run/user/1000
/dev/mapper/VG_PG_DATA_544474d1_822c_4912_8db4_7c1c7f45df54-LV_PG_DATA_544474d1_822c_4912_8db4_7c1c7f45df54                          50G  108M   47G   1% /pg_data_544474d1_822c_4912_8db4_7c1c7f45df54
/dev/mapper/VG_PG_DATA_TBLSPC_544474d1_822c_4912_8db4_7c1c7f45df54-LV_PG_DATA_TBLSPC_544474d1_822c_4912_8db4_7c1c7f45df54           9.8G   44M  9.2G   1% /pg_dbidb1_544474d1_822c_4912_8db4_7c1c7f45df54
/dev/mapper/ntnx_era_agent_vg_1700f784395111eab53f506b8d241a39-ntnx_era_agent_lv_era_software_1700f784395111eab53f506b8d241a39      1.5G  534M  821M  40% /opt/era_base/era_engine
/dev/mapper/ntnx_era_agent_vg_1700f784395111eab53f506b8d241a39-ntnx_era_agent_lv_era_logs_1700f784395111eab53f506b8d241a39          3.9G   18M  3.6G   1% /opt/era_base/logs
/dev/mapper/ntnx_era_agent_vg_1700f784395111eab53f506b8d241a39-ntnx_era_agent_lv_era_config_1700f784395111eab53f506b8d241a39         47M  1.1M   42M   3% /opt/era_base/cfg
/dev/mapper/ntnx_era_agent_vg_5c3a7120261b4354a3b38ee168726298-ntnx_era_agent_lv_db_stagging_logs_5c3a7120261b4354a3b38ee168726298   99G   93M   94G   1% /opt/era_base/db_logs

The PostgreSQL binaries got their own mountpoint. PGDATA is on its own mountpoint and it seems we have a tablespace in “/pg_dbidb1_544474d1_822c_4912_8db4_7c1c7f45df54”:

-bash-4.2$ psql -c "\db" postgres
                            List of tablespaces
     Name      |  Owner   |                    Location
---------------+----------+-------------------------------------------------
 pg_default    | postgres |
 pg_global     | postgres |
 tblspc_dbidb1 | postgres | /pg_dbidb1_544474d1_822c_4912_8db4_7c1c7f45df54
(3 rows)

-bash-4.2$ psql -c "\l+" postgres
                                                                     List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   |  Size   |  Tablespace   |                Description
-----------+----------+----------+-------------+-------------+-----------------------+---------+---------------+--------------------------------------------
 dbidb1    | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 7717 kB | tblspc_dbidb1 |
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 7717 kB | pg_default    | default administrative connection database
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +| 7577 kB | pg_default    | unmodifiable empty database
           |          |          |             |             | postgres=CTc/postgres |         |               |
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +| 7577 kB | pg_default    | default template for new databases
           |          |          |             |             | postgres=CTc/postgres |         |               |
(4 rows)

This confirms that our database “dbidb1” was placed into a separate tablespace. There is another mountpoint which seems to be there for the archived WAL files, and we can confirm that as well:

-bash-4.2$ psql
psql (11.6 dbi services build)
Type "help" for help.

postgres=# show archive_command ;
                      archive_command
-----------------------------------------------------------
  sh /opt/era_base/cfg/postgres/archive_command.sh %p  %f
(1 row)

postgres=# \! cat /opt/era_base/cfg/postgres/archive_command.sh
test ! -f /opt/era_base/db_logs/dbidb1/$2 &&  cp -p $1 /opt/era_base/db_logs/dbidb1//$2
postgres=#

So archiving is enabled and this is what I expected. The costing parameters seem to be the default:

postgres=# select name,setting from pg_settings where name like '%cost%';
             name             | setting
------------------------------+---------
 autovacuum_vacuum_cost_delay | 20
 autovacuum_vacuum_cost_limit | -1
 cpu_index_tuple_cost         | 0.005
 cpu_operator_cost            | 0.0025
 cpu_tuple_cost               | 0.01
 jit_above_cost               | 100000
 jit_inline_above_cost        | 500000
 jit_optimize_above_cost      | 500000
 parallel_setup_cost          | 1000
 parallel_tuple_cost          | 0.1
 random_page_cost             | 4
 seq_page_cost                | 1
 vacuum_cost_delay            | 0
 vacuum_cost_limit            | 200
 vacuum_cost_page_dirty       | 20
 vacuum_cost_page_hit         | 1
 vacuum_cost_page_miss        | 10
(17 rows)

Memory parameters seem to be the default as well:

postgres=# show shared_buffers ;
 shared_buffers
----------------
 128MB
(1 row)

postgres=# show work_mem;
 work_mem
----------
 4MB
(1 row)

postgres=# show maintenance_work_mem ;
 maintenance_work_mem
----------------------
 64MB
(1 row)

postgres=# show effective_cache_size ;
 effective_cache_size
----------------------
 4GB
(1 row)

There are “Parameter profiles” you can use and create but currently there are not that many parameters that can be adjusted:

That’s it for now. Deploying new PostgreSQL instances is really easy. I am currently not sure why there needs to be a tablespace, but maybe that becomes clear in a future post when we look at backup and restore. From a PostgreSQL configuration perspective you will certainly need to do some adjustments once the databases become bigger and the load increases.
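Such adjustments could be applied like this, followed by a restart of the instance for shared_buffers to take effect. This is only a sketch, the values are assumptions and depend on the sizing of the VM:

-- adjust the most common memory parameters (values are examples only)
alter system set shared_buffers = '2GB';
alter system set work_mem = '32MB';
alter system set maintenance_work_mem = '512MB';
alter system set effective_cache_size = '6GB';
-- reload for the parameters that do not require a restart
select pg_reload_conf();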

The post Deploying your own PostgreSQL image on Nutanix Era – 2 – Deploying a new PostgreSQL VM appeared first on Blog dbi services.

RDS Resource in Terraform

Pakistan's First Oracle Blog - Fri, 2020-01-17 21:19
Just marveling again how easy and clean it is to create an RDS resource in Terraform.

# Create the RDS instance
resource "aws_db_instance" "databaseterra" {
  allocated_storage    = "${var.database_size}"
  engine               = "${var.DbEngineType}"
  engine_version       = "${var.DbMajorEngineVersion}"
  instance_class       = "${var.DBInstanceClass}"
  identifier             = "testdbterra"
  username             = "${var.database_user}"
  password             = "${var.database_password}"
  db_subnet_group_name = "${aws_db_subnet_group.rdssubnetgroupterra.id}"
  parameter_group_name = "${aws_db_parameter_group.rdsparametergroupterra.id}"
  multi_az           = "${var.db_multiaz}"
  vpc_security_group_ids = ["${aws_security_group.RdsSecurityGroupterra.id}"]
  publicly_accessible    = "false"
  backup_retention_period = "2"
  license_model           = "license-included"
  apply_immediately = "true"
  tags = {
    Env = "Non-Prod"
  }
}

Just have your vars ready and that's it.
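For completeness, the referenced variables need to be declared somewhere, for example in a variables.tf. This is only a hedged sketch: the names come from the resource above, the defaults are assumptions.

# variables.tf (sketch) - names taken from the resource above, defaults are assumptions
variable "database_size"        { default = "50" }
variable "DbEngineType"         { default = "sqlserver-se" }
variable "DbMajorEngineVersion" { default = "14.00" }
variable "DBInstanceClass"      { default = "db.m5.large" }
variable "database_user"        { default = "dbadmin" }
variable "database_password"    {}   # pass via -var or TF_VAR_database_password
variable "db_multiaz"           { default = "false" }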
Categories: DBA Blogs

Data Guard Fast-Start Failover Test – Shutdown Standby Host

Michael Dinh - Fri, 2020-01-17 15:41

Data Guard Fast-Start Failover Test – Shutdown Primary Host

Review primary host and start observer:
[oracle@ol7-121-dg1 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Jan 17 20:42:54 2020

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

OL7-121-DG1:(SYS@cdb1):PRIMARY> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

[oracle@ol7-121-dg1 sql]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration

Configuration - my_dg_config

  Protection Mode: MaxPerformance
  Members:
  cdb1      - Primary database
    cdb1_stby - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 13 seconds ago)

DGMGRL> enable fast_start failover
Enabled.
DGMGRL> show configuration

Configuration - my_dg_config

  Protection Mode: MaxPerformance
  Members:
  cdb1      - Primary database
    cdb1_stby - (*) Physical standby database

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS   (status updated 12 seconds ago)

DGMGRL> validate database cdb1

  Database Role:    Primary database

  Ready for Switchover:  Yes

DGMGRL> validate database cdb1_stby

  Database Role:     Physical standby database
  Primary Database:  cdb1

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

DGMGRL> show database cdb1

Database - cdb1

  Role:               PRIMARY
  Intended State:     TRANSPORT-ON
  Instance(s):
    cdb1

  Database Error(s):
    ORA-16820: fast-start failover observer is no longer observing this database

Database Status:
ERROR

DGMGRL> show database cdb1_stby

Database - cdb1_stby

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          0 seconds (computed 0 seconds ago)
  Average Apply Rate: 2.00 KByte/s
  Real Time Query:    ON
  Instance(s):
    cdb1

  Database Error(s):
    ORA-16820: fast-start failover observer is no longer observing this database

Database Status:
ERROR

DGMGRL> start observer
[P001 01/17 20:46:01.38] Authentication failed.
DGM-16979: Unable to log on to the primary or standby database as SYSDBA
Failed.
DGMGRL> connect sys@cdb1
Password:
Connected as SYSDBA.
DGMGRL> start observer
Observer started
Restart standby host, listener, and database:
resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$ vagrant status
Current machine states:

default                   poweroff (virtualbox)

The VM is powered off. To restart the VM, simply run `vagrant up`

resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...

====================================================================
resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$ vagrant status
Current machine states:

default                   running (virtualbox)

The VM is running. To stop this VM, you can run `vagrant halt` to
shut it down forcefully, or you can run `vagrant suspend` to simply
suspend the virtual machine. In either case, to restart it again,
simply run `vagrant up`.

====================================================================
resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$ vagrant ssh
Last login: Fri Jan 17 20:11:35 2020 from 10.0.2.2
[vagrant@ol7-121-dg2 ~]$ sudo su - oracle
Last login: Fri Jan 17 20:11:44 UTC 2020 on pts/0
[oracle@ol7-121-dg2 ~]$ . oraenv <<< cdb1
ORACLE_SID = [cdb1] ? The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ol7-121-dg2 ~]$ lsnrctl start

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 17-JAN-2020 20:53:20

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Starting /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 12.1.0.2.0 - Production
System parameter file is /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/ol7-121-dg2/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ol7-121-dg2.localdomain)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ol7-121-dg2)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                17-JAN-2020 20:53:22
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/ol7-121-dg2/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ol7-121-dg2.localdomain)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "cdb1_stby_DGMGRL" has 1 instance(s).
  Instance "cdb1", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
[oracle@ol7-121-dg2 ~]$ cd /sf_working/sql
[oracle@ol7-121-dg2 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Jan 17 20:53:38 2020

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SYS@cdb1> startup mount;
ORACLE instance started.

Total System Global Area 1610612736 bytes
Fixed Size                  2924928 bytes
Variable Size             520097408 bytes
Database Buffers         1073741824 bytes
Redo Buffers               13848576 bytes
Database mounted.
SYS@cdb1> @stby.sql

Session altered.

*** v$database ***

DB          OPEN                   DATABASE           REMOTE     SWITCHOVER      DATAGUARD  PRIMARY_DB
UNIQUE_NAME MODE                   ROLE               ARCHIVE    STATUS          BROKER     UNIQUE_NAME
----------- ---------------------- ------------------ ---------- --------------- ---------- ---------------
cdb1_stby   MOUNTED                PHYSICAL STANDBY   ENABLED    NOT ALLOWED     ENABLED    cdb1

*** gv$archive_dest ***

                                                                                              MOUNT
 THREAD#  DEST_ID DESTINATION               STATUS       TARGET           SCHEDULE PROCESS       ID
-------- -------- ------------------------- ------------ ---------------- -------- ---------- -----
       1        1 USE_DB_RECOVERY_FILE_DEST VALID        LOCAL            ACTIVE   ARCH           0
       1       32 USE_DB_RECOVERY_FILE_DEST VALID        LOCAL            ACTIVE   RFS            0

*** gv$archive_dest_status ***

                               DATABASE        RECOVERY
 INST_ID  DEST_ID STATUS       MODE            MODE                    GAP_STATUS      ERROR
-------- -------- ------------ --------------- ----------------------- --------------- --------------------------------------------------
       1        1 VALID        MOUNTED-STANDBY IDLE                                    NONE
       1       32 VALID        UNKNOWN         IDLE                                    NONE

*** v$thread ***

 THREAD# CURRENT LOG SEQUENCE STATUS
-------- -------------------- ------------
       1                   26 OPEN

*** gv$archived_log ***

 DEST_ID  THREAD# APPLIED    MAX_SEQ MAX_TIME             DELTA_SEQ DETA_MIN
-------- -------- --------- -------- -------------------- --------- --------
       1        1 NO              25 17-JAN-2020 20:53:53         2 41.68333
       1        1 YES             23 17-JAN-2020 20:12:12

*** v$archive_gap ***

no rows selected

*** GAP can also be verified using RMAN from STANDBY ***

RMAN1
------------------------------------------------------------
list archivelog from sequence 24 thread 1;

*** v$dataguard_stats ***

NAME                      VALUE              UNIT
------------------------- ------------------ ------------------------------
transport lag             +00 00:00:00       day(2) to second(0) interval
apply lag                                    day(2) to second(0) interval

*** gv$managed_standby ***

no rows selected

SYS@cdb1> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@ol7-121-dg2 sql]$
Screen output from observer:
DGMGRL> start observer
Observer started
[W000 01/17 20:48:58.27] The primary database has requested a transition to the UNSYNC/LAGGING state.
[W000 01/17 20:48:58.28] Permission granted to the primary database to transition to UNSYNC/LAGGING state.
[W000 01/17 20:50:01.29] The primary database has been in UNSYNC/LAGGING state for 63 seconds.
[W000 01/17 20:51:04.31] The primary database has been in UNSYNC/LAGGING state for 126 seconds.
[W000 01/17 20:52:07.33] The primary database has been in UNSYNC/LAGGING state for 189 seconds.
[W000 01/17 20:53:10.36] The primary database has been in UNSYNC/LAGGING state for 252 seconds.
[W000 01/17 20:54:13.39] The primary database has been in UNSYNC/LAGGING state for 315 seconds.
[W000 01/17 20:54:16.39] The primary database returned to SYNC/NOT LAGGING state.
Validate Data Guard configuration:
[oracle@ol7-121-dg2 sql]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration

Configuration - my_dg_config

  Protection Mode: MaxPerformance
  Members:
  cdb1      - Primary database
    cdb1_stby - (*) Physical standby database

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS   (status updated 10 seconds ago)

DGMGRL> validate database cdb1

  Database Role:    Primary database

  Ready for Switchover:  Yes

DGMGRL> validate database cdb1_stby

  Database Role:     Physical standby database
  Primary Database:  cdb1

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

DGMGRL> exit
[oracle@ol7-121-dg2 sql]$
Open database read only:

This is required because the database is not registered as a cluster resource and therefore is not opened automatically after the host restart.

[oracle@ol7-121-dg2 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Jan 17 21:33:07 2020

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

OL7-121-DG2:(SYS@cdb1):PHYSICAL STANDBY> alter database open read only;

Database altered.

OL7-121-DG2:(SYS@cdb1):PHYSICAL STANDBY> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

[oracle@ol7-121-dg2 sql]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show database cdb1_stby

Database - cdb1_stby

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          0 seconds (computed 0 seconds ago)
  Average Apply Rate: 1.00 KByte/s
  Real Time Query:    ON
  Instance(s):
    cdb1

Database Status:
SUCCESS

DGMGRL> exit
[oracle@ol7-121-dg2 sql]$
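Side note: the manual open above would not be needed if Oracle Restart or Grid Infrastructure were available on the standby host and the database registered with it. Purely as a hypothetical sketch (the Oracle home is the one shown above):

# register the standby once (hypothetical; requires Oracle Restart / Grid Infrastructure)
srvctl add database -db cdb1_stby -dbname cdb1 \
  -oraclehome /u01/app/oracle/product/12.1.0.2/dbhome_1 \
  -role PHYSICAL_STANDBY -startoption "READ ONLY"
# from then on a simple start brings it up read only
srvctl start database -db cdb1_stby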

Dbvisit 9: Adding datafiles and or tempfiles

Yann Neuhaus - Fri, 2020-01-17 11:57

One question I was asking myself is whether the standby_file_management parameter is relevant in a Dbvisit environment with Oracle Standard Edition. I did some tests and I show here what I did.
We suppose that the Dbvisit environment is already set up and that the replication is working fine.

[oracle@dbvisit1 trace]$ /u01/app/dbvisit/standby/dbvctl -d dbstd -i
=============================================================
Dbvisit Standby Database Technology (9.0.08_0_g99a272b) (pid 19567)
dbvctl started on dbvisit1: Fri Jan 17 16:48:16 2020
=============================================================

Dbvisit Standby log gap report for dbstd at 202001171648:
-------------------------------------------------------------
Description       | SCN          | Timestamp
-------------------------------------------------------------
Source              2041731        2020-01-17:16:48:18 +01:00
Destination         2041718        2020-01-17:16:48:01 +01:00

Standby database time lag (DAYS-HH:MI:SS): +00:00:17

Report for Thread 1
-------------------
SOURCE
Current Sequence 8
Last Archived Sequence 7
Last Transferred Sequence 7
Last Transferred Timestamp 2020-01-17 16:48:07

DESTINATION
Recovery Sequence 8

Transfer Log Gap 0
Apply Log Gap 0

=============================================================
dbvctl ended on dbvisit1: Fri Jan 17 16:48:23 2020
=============================================================

[oracle@dbvisit1 trace]$

While the standby_file_management is set to MANUAL on both servers

[oracle@dbvisit1 trace]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jan 17 16:50:50 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

SQL> show parameter standby_file_management;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      MANUAL
SQL>


[oracle@dbvisit2 back_dbvisit]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jan 17 16:51:15 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
MOUNTED

SQL>  show parameter standby_file_management;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      MANUAL
SQL>

Let’s create a tablespace MYTAB on the primary database

SQL> create tablespace mytab datafile '/u01/app/oracle/oradata/DBSTD/mytab01.dbf' size 10M;

Tablespace created.

SQL>

SQL> alter system switch logfile;

System altered.

SQL> select name from v$tablespace;

NAME
------------------------------
SYSAUX
SYSTEM
UNDOTBS1
USERS
TEMP
MYTAB

6 rows selected.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/DBSTD/system01.dbf
/u01/app/oracle/oradata/DBSTD/sysaux01.dbf
/u01/app/oracle/oradata/DBSTD/undotbs01.dbf
/u01/app/oracle/oradata/DBSTD/mytab01.dbf
/u01/app/oracle/oradata/DBSTD/users01.dbf

SQL>

A few moments later we can see that the new datafile has been replicated on the standby:

SQL> select name from v$tablespace;

NAME
------------------------------
SYSAUX
SYSTEM
UNDOTBS1
USERS
TEMP
MYTAB

6 rows selected.

SQL>  select name from v$datafile
  2  ;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/DBSTD/system01.dbf
/u01/app/oracle/oradata/DBSTD/sysaux01.dbf
/u01/app/oracle/oradata/DBSTD/undotbs01.dbf
/u01/app/oracle/oradata/DBSTD/mytab01.dbf
/u01/app/oracle/oradata/DBSTD/users01.dbf

SQL>

Now let’s repeat the tablespace creation while the parameter is set to AUTO on both sides:

SQL> show parameter standby_file_management;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      AUTO
SQL>  create tablespace mytab2 datafile '/u01/app/oracle/oradata/DBSTD/mytab201.dbf' size 10M;

Tablespace created.

SQL>

SQL> alter system switch logfile;

System altered.

SQL>

A few moments later the tablespace MYTAB2 was also replicated on the standby:

SQL> select name from v$tablespace;

NAME
------------------------------
SYSAUX
SYSTEM
UNDOTBS1
USERS
TEMP
MYTAB
MYTAB2

7 rows selected.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/DBSTD/system01.dbf
/u01/app/oracle/oradata/DBSTD/mytab201.dbf
/u01/app/oracle/oradata/DBSTD/sysaux01.dbf
/u01/app/oracle/oradata/DBSTD/undotbs01.dbf
/u01/app/oracle/oradata/DBSTD/mytab01.dbf
/u01/app/oracle/oradata/DBSTD/users01.dbf

6 rows selected.

In the Dbvisit documentation we can find this:
Note 2: This feature is independent of the Oracle parameter STANDBY_FILE_MANAGEMENT. Dbvisit Standby will detect if STANDBY_FILE_MANAGEMENT has added the datafile to the standby database, and if so, Dbvisit Standby will not add the datafile.
Note 3: STANDBY_FILE_MANAGEMENT can only be used in Enterprise Edition and should not be set in Standard Edition.

Dbvisit does not use STANDBY_FILE_MANAGEMENT for datafile replication, so I decided to set this parameter back to its default value, which is MANUAL.
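Resetting it is straightforward; a minimal sketch, to be run on both servers:

SQL> alter system set standby_file_management='MANUAL' scope=both;
SQL> show parameter standby_file_management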

What about adding a tempfile in a Dbvisit environment? On the primary I create a new temporary tablespace:

SQL> show parameter standby_file_management;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      MANUAL
SQL> create temporary tablespace temp2 tempfile '/u01/app/oracle/oradata/DBSTD/temp2_01.dbf' size 10M;

Tablespace created.

SQL> alter system switch logfile;

System altered.

SQL>

We can see on the primary that we now have two tempfiles.

SQL> select name from v$tablespace;

NAME
------------------------------
SYSAUX
SYSTEM
UNDOTBS1
USERS
TEMP
MYTAB
MYTAB2
TEMP2

8 rows selected.

SQL>  select name from v$tempfile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/DBSTD/temp01.dbf
/u01/app/oracle/oradata/DBSTD/temp2_01.dbf

SQL>

On the standby side, the new temporary tablespace was replicated:

SQL> select name from v$tablespace;

NAME
------------------------------
SYSAUX
SYSTEM
UNDOTBS1
USERS
TEMP
MYTAB
MYTAB2
TEMP2

8 rows selected.

But the new tempfile is not listed on the standby

SQL>  select name from v$tempfile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/DBSTD/temp01.dbf

SQL>

In fact this is the expected behavior. In the documentation we can find the following:
If your preference is to have exactly the same number of temp files referenced in the standby control file as your current primary database, then once a new temp file has been added on the primary, you need to recreate a standby control file by running the following command from the primary server:
dbvctl -f create_standby_ctl -d DDC

So let’s recreate the standby control file

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -f create_standby_ctl -d dbstd
=>Replace current standby controfiles on dbvisit2 with new standby control
file?  [No]: yes

>>> Create standby control file... done

>>> Copy standby control file to dbvisit2... done

>>> Recreate standby control file... done

>>> Standby controfile(s) on dbvisit2 recreated. To complete please run dbvctl on the
    primary, then on the standby.
[oracle@dbvisit1 ~]$

Afterwards we can verify that the new tempfile is now visible on the standby side:

SQL> select name from v$tempfile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/DBSTD/temp01.dbf
/u01/app/oracle/oradata/DBSTD/temp2_01.dbf

SQL>

The post Dbvisit 9: Adding datafiles and or tempfiles appeared first on Blog dbi services.

Deploying your own PostgreSQL image on Nutanix Era

Yann Neuhaus - Fri, 2020-01-17 11:03

Some days ago we had a very good training on Nutanix. Nutanix is a hyper-converged infrastructure, which means that everything is software driven and the system can be deployed on many hardware configurations. I will not go into the details of the system itself but rather look at one component/module which is called Era. Era promises to simplify database deployments by providing a clean and simple user interface (and an API) with deployment procedures for PostgreSQL, MS SQL, MySQL, MariaDB and Oracle. There are predefined templates you can use, but in this post I’ll look at how you can use Era to deploy your own PostgreSQL image.

Before you can register a software profile with Era there needs to be a VM up and running which already has PostgreSQL installed. For that I’ll import the latest CentOS 7 ISO with Prism (CentOS 8 is not yet supported).

Importing images is done in the “Images Configuration” section under “Settings” of Prism:


Once you start the upload a new task is generated which can be monitored in the tasks section:

Now that the image is ready we need to deploy a new virtual machine which will use the image as installation source:








As the virtual machine is now defined we need to power it on and then launch the console:


Follow your preferred way of doing the CentOS installation and once it is done you need to power off the virtual machine to remove the ISO. Otherwise you will always land in the installation procedure when the virtual machine is started:


After you have powered on the virtual machine again you should be able to connect with ssh:

The next step is to install PostgreSQL as you prefer to do it. Here is an example for doing it from source code. We will not create a PostgreSQL instance, the binaries are enough. In my case everything was installed here:

 postgres@centos7postgres12:/home/postgres/ [pg121] echo $PGHOME
/u01/app/postgres/product/12/db_1/
postgres@centos7postgres12:/home/postgres/ [pg121] ls $PGHOME
bin  include  lib  share
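The build itself is not shown here, so as a reference this is roughly how such a source installation could look. This is only a sketch; the version and the prefix are assumptions that match the output above:

# install build dependencies (assumption: minimal CentOS 7)
sudo yum install -y gcc make readline-devel zlib-devel bison flex bzip2 wget
# download and extract the sources (12.1 matches the binaries shown above)
wget https://ftp.postgresql.org/pub/source/v12.1/postgresql-12.1.tar.bz2
tar xjf postgresql-12.1.tar.bz2
cd postgresql-12.1
# configure with the same prefix as shown above, then build and install
./configure --prefix=/u01/app/postgres/product/12/db_1/
make -j 2
make install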

Now that we have our PostgreSQL server we need to register it in Era. Before doing that you should download and execute the pre-check script on the new database server:

postgres@centos7postgres12:/home/postgres/ [pg121] sudo ./era_linux_prechecks.sh

----------------------------------------------------------------------------------
   Error: Database type not specified
   Syntax: $ ./era_linux_prechecks.sh -t|--database_type  [-c|--cluster_ip ] [-p|--cluster_port] [-d|--detailed]
   Database type can be: oracle_database, postgres_database, mariadb_database, mysql_database
----------------------------------------------------------------------------------

postgres@centos7postgres12:/home/postgres/ [pg121] sudo ./era_linux_prechecks.sh -t postgres_database
libselinux-python-2.5-14.1.el7.x86_64


--------------------------------------------------------------------
|              Era Pre-requirements Validation Report              |
--------------------------------------------------------------------

     General Checks:
     ---------------
         1] Username           : root
         2] Package manager    : yum
         2] Database type      : postgres_database

     Era Configuration Dependencies:
     -------------------------------
         1] User has sudo access                         : YES
         2] User has sudo with NOPASS access             : YES
         3] Crontab configured for user                  : YES
         4] Secure paths configured in /etc/sudoers file : YES

     Era Software Dependencies:
     --------------------------
          1] GCC                  : N/A
          2] readline             : YES
          3] libselinux-python    : YES
          4] crontab              : YES
          5] lvcreate             : YES
          6] lvscan               : YES
          7] lvdisplay            : YES
          8] vgcreate             : YES
          9] vgscan               : YES
         10] vgdisplay            : YES
         11] pvcreate             : YES
         12] pvscan               : YES
         13] pvdisplay            : YES
         14] zip                  : NO
         15] unzip                : YES
         16] rsync                : NO

     Summary:
     --------
         This machine does not satisfy all of the dependencies required by Era.
         It can not be onboarded to Era unless all of these are satified.

     **WARNING: Cluster API was not provided. Couldn't go ahead with the Prism API connectivity check.
     Please ensure Prism APIs are callable from the host.
====================================================================
1postgres@centos7postgres12:/home/postgres/ [pg121]

In my case only “zip” and “rsync” are missing, which of course is easy to fix:

postgres@centos7postgres12:/home/postgres/ [pg121] sudo yum install -y zip rsync
...
postgres@centos7postgres12:/home/postgres/ [pg121] sudo ./era_linux_prechecks.sh -t postgres_database
libselinux-python-2.5-14.1.el7.x86_64


--------------------------------------------------------------------
|              Era Pre-requirements Validation Report              |
--------------------------------------------------------------------

     General Checks:
     ---------------
         1] Username           : root
         2] Package manager    : yum
         2] Database type      : postgres_database

     Era Configuration Dependencies:
     -------------------------------
         1] User has sudo access                         : YES
         2] User has sudo with NOPASS access             : YES
         3] Crontab configured for user                  : YES
         4] Secure paths configured in /etc/sudoers file : YES

     Era Software Dependencies:
     --------------------------
          1] GCC                  : N/A
          2] readline             : YES
          3] libselinux-python    : YES
          4] crontab              : YES
          5] lvcreate             : YES
          6] lvscan               : YES
          7] lvdisplay            : YES
          8] vgcreate             : YES
          9] vgscan               : YES
         10] vgdisplay            : YES
         11] pvcreate             : YES
         12] pvscan               : YES
         13] pvdisplay            : YES
         14] zip                  : YES
         15] unzip                : YES
         16] rsync                : YES

     Summary:
     --------
         This machine satisfies dependencies required by Era, it can be onboarded.

Looks good and the database server can now be registered:




Era as well has a task list which can be monitored:

… and then it fails because PostgreSQL 12.1 is not supported. That is fine but I would have expected the pre-check script to tell me that. Same procedure again, this time with PostgreSQL 11.6 and that succeeds:

This database server is now the source for a new “Software profile”:




And that’s it: Our new PostgreSQL software profile is ready to use. In the next post we’ll try to deploy a new virtual machine from that profile.

The post Deploying your own PostgreSQL image on Nutanix Era appeared first on Blog dbi services.

Group by Elimination

Jonathan Lewis - Fri, 2020-01-17 06:57

Here’s a bug that was highlighted a couple of days ago on the Oracle Developer Community forum; it may be particularly worth thinking about if you haven’t yet got up to Oracle 12c, as it relates to an optimizer feature that appeared in 12.2 and hasn’t been completely fixed even in the latest release of 19c (currently 19.6).

Oracle introduced “aggregate group by elimination” in 12.2, protected by the hidden parameter “_optimizer_aggr_groupby_elim”. The notes on MOS about the feature tell us that Oracle can eliminate a group by operation from a query block if a unique key from every table in the query block appears in the group by clause. Unfortunately there were a couple of gaps in the implementation in 12.2 that can produce wrong results. Here’s some code to model the problem.

rem
rem     Script:         group_by_elim_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2020
rem

create table ref_clearing_calendar(
        calendar_name   char(17),
        business_date   date,
        update_ts       timestamp (6) default systimestamp,
        constraint pk_ref_clearing_calendar 
                        primary key (business_date)
)
/

insert into ref_clearing_calendar (business_date)
select
        sysdate + 10 * rownum
from 
        all_objects 
where 
        rownum <= 40 -- > comment to avoid wordpress format issue
/

commit;

execute dbms_stats.gather_table_stats(null,'ref_clearing_calendar',cascade=>true)

set autotrace on explain

select
        to_char(business_date,'YYYY') , count(*)
from
        ref_clearing_calendar
group by 
        to_char(business_date,'YYYY')
order by 
        to_char(business_date,'YYYY')
/

set autotrace off

I’ve created a table with a primary key on a date column, and then inserted 40 rows which are spaced every ten days from the current date; this ensures that I will have a few dates in each of two consecutive years (future proofing the example!). Then I’ve aggregated to count the rows per year using the to_char({date column},’YYYY’) conversion option to extract the year from the date. (Side note: the table definition doesn’t follow my normal pattern as the example started life in the ODC thread.)

If you run this query on Oracle 12.2 you will find that it returns 40 (non-unique) rows and displays the following execution plan:


---------------------------------------------------------------------------------------------
| Id  | Operation        | Name                     | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                          |    40 |   320 |     2  (50)| 00:00:01 |
|   1 |  SORT ORDER BY   |                          |    40 |   320 |     2  (50)| 00:00:01 |
|   2 |   INDEX FULL SCAN| PK_REF_CLEARING_CALENDAR |    40 |   320 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

The optimizer has applied “aggregate group by elimination” because it hasn’t detected that the primary key column that appears in the group by clause has been massaged in a way that means the resulting value is no longer unique.

Fortunately this problem with to_char() is fixed in Oracle 18.1 where the query returns two rows using the following execution plan (which I’ve reported from an instance of 19.5):

---------------------------------------------------------------------------------------------
| Id  | Operation        | Name                     | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                          |    40 |   320 |     2  (50)| 00:00:01 |
|   1 |  SORT GROUP BY   |                          |    40 |   320 |     2  (50)| 00:00:01 |
|   2 |   INDEX FULL SCAN| PK_REF_CLEARING_CALENDAR |    40 |   320 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Unfortunately there is still at least one gap in the implementation. Change the to_char(business_date) to extract(year from business_date) at all three points in the query, and even in 19.6 you’re back to the wrong results – inappropriate aggregate group by elimination and 40 rows returned.
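For clarity, this is the variant that still goes wrong; it is a straightforward rewrite of the query above and nothing else in the script changes:

select
        extract(year from business_date) , count(*)
from
        ref_clearing_calendar
group by 
        extract(year from business_date)
order by 
        extract(year from business_date)
/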

There are a couple of workarounds: one is to set the hidden parameter _optimizer_aggr_groupby_elim to false at the system or session level, or through an opt_param() hint at the statement level (possibly injected through an SQL_Patch). The other option is to set a fix_control, again at the system, session, or statement level – but there seems to be little point in using the fix_control approach (which might be a little obscure for the next developer to see in the code) when it seems to do the same as the explicitly named hidden parameter.

select
        /*+ opt_param('_optimizer_aggr_groupby_elim','false') */
        extract(year from business_date) , count(*)
from ...

select
        /*+ opt_param('_fix_control','23210039:0') */
        extract(year from business_date) , count(*)
from ...

One final thought about this “not quite fixed” bug. It’s the type of “oversight” error that gives you the feeling that there may be other special cases that might have been overlooked. The key question would be: are there any other functions (and not necessarily datetime functions) that might be applied (perhaps implicitly) to a primary or unique key that would produce duplicate results from distinct inputs – and if so, has the code that checks the validity of eliminating the aggregate operation been written to notice the threat?

Footnote

The problem with extract() has been raised as a bug on MOS, but it was not public at the time of writing this note.

Update (about 60 seconds after publication)

Re-reading my comment about “other functions” it occurred to me that to_nchar() might, or might not, behave the same way as to_char() in 19c – so I tested it … and got the wrong results in 19c.

Dbvisit Standby 9 : Do you know the new snapshot feature?

Yann Neuhaus - Thu, 2020-01-16 10:23

The Dbvisit snapshot option is a new feature available starting from version 9.0.06. I have tested this option and in this blog I describe the tasks I have done.
The configuration I am using is the following:
dbvisit1 : primary server
dbvisit2 : standby server
orcl : Oracle 19c database
We suppose that the Dbvisit environment is already set up and that the replication is working fine. See the previous blog for setting up Dbvisit Standby.
First, there are two options:
-Snapshot Groups
-Single Snapshots

Snapshot Groups
This option is ideal for companies that are using Oracle Standard Edition (SE, SE1, SE2) and would like to have a “database that is read-only, but also being kept up to date”. It allows you to create two or more read-only snapshots of the standby at a given time interval. Each snapshot is opened in read-only mode after its creation. A new service which points to the latest snapshot is created and is automatically managed by the listener.
The use of the feature is very easy. As it requires managing filesystems (mount and umount), the user which will create the snapshots needs some admin privileges.
In our case we have configured the user with full sudo privileges, but you can find all required privileges in the documentation.

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] sudo grep oracle /etc/sudoers
oracle ALL=(ALL) NOPASSWD:ALL
oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)]

The option can be used from the command line or from the graphical dbvserver console, but it is highly recommended to use the graphical tool.
When the snapshot option is available with your license, you will see the following tab in the dbvserver console:

To create a snapshot group we just have to choose the option NEW SNAPSHOT GROUP.

And then fill in the required information. In this example:
-We will create a service named snap_service
-We will generate a snapshot every 15 minutes (900 seconds)
-There will be a maximum of 2 snapshots. When the limit is reached, the oldest snapshot is deleted. Just note that this deletion only happens once the new snapshot has been successfully created; that’s why we can sometimes see 3 snapshots.
-The snapshots will be prefixed by MySna

After the snapshot group is created without errors, we have to start the snapshot generation by clicking on the green button:

And then we can see the status

At the OS level we can verify that the first snapshot was created and that the corresponding instance is started:

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] ps -ef | grep pmon
oracle    2300     1  0 09:05 ?        00:00:00 ora_pmon_orcl
oracle    6765     1  0 09:57 ?        00:00:00 ora_pmon_MySna001
oracle    7794  1892  0 10:00 pts/0    00:00:00 grep --color=auto pmon
oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)]

We can also verify that the service snap_service is registered in the listener. This service will be automatically pointed to the latest snapshot of the group.

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] lsnrctl status

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 16-JAN-2020 10:01:49

Copyright (c) 1991, 2019, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbvisit2)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                16-JAN-2020 09:05:07
Uptime                    0 days 0 hr. 56 min. 42 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/dbvisit2/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dbvisit2)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
…
…
…
Service "snap_service" has 1 instance(s).
  Instance "MySna001", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)]

To connect to this service, we just have to create an alias like

snapgroup =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbvisit2)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = snap_service)
    )
  )

15 minutes later we can see that a new snapshot was generated

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] ps -ef | grep pmon
oracle    2300     1  0 09:05 ?        00:00:00 ora_pmon_orcl
oracle    6765     1  0 09:57 ?        00:00:00 ora_pmon_MySna001
oracle   11355     1  0 10:11 ?        00:00:00 ora_pmon_MySna002
oracle   11866  1892  0 10:13 pts/0    00:00:00 grep --color=auto pmon
oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)]

Note that we can only open the snapshot in a read only mode

oracle@dbvisit1:/home/oracle/ [orcl (CDB$ROOT)] sqlplus sys/*****@snapgroup as sysdba

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           MOUNTED
         4 PDB2                           MOUNTED
SQL> alter pluggable database all open;
alter pluggable database all open
*
ERROR at line 1:
ORA-65054: Cannot open a pluggable database in the desired mode.


SQL>  alter pluggable database all open read only;

Pluggable database altered.

As you have probably noticed, you can also stop, start, pause and remove snapshots from the GUI.

Single Snapshot
This option allows creating read-only as well as read-write snapshots of the database.
To create a single snapshot, we just have to choose the option NEW SINGLE SNAPSHOT.
In the following example the snapshot will be opened in read-write mode.

At the end of the creation we can see the status

We can verify that a service SingleSn was also created

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] lsnrctl status | grep Singl
  Instance "SingleSn", status READY, has 1 handler(s) for this service...
  Instance "SingleSn", status READY, has 1 handler(s) for this service...
Service "SingleSn" has 1 instance(s).
  Instance "SingleSn", status READY, has 1 handler(s) for this service...
  Instance "SingleSn", status READY, has 1 handler(s) for this service...
  Instance "SingleSn", status READY, has 1 handler(s) for this service...
  Instance "SingleSn", status READY, has 1 handler(s) for this service

And that the instance SingleSn is started

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] ps -ef | grep pmon
oracle    3294     1  0 16:04 ?        00:00:00 ora_pmon_SingleSn
oracle    3966  1748  0 16:08 pts/0    00:00:00 grep --color=auto pmon
oracle   14349     1  0 13:57 ?        00:00:00 ora_pmon_orcl
oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)]

We can also note that, at OS level, a filesystem is mounted for the snapshot, so there must be enough free space in the LVM volume group that hosts the database.

oracle@dbvisit2:/home/oracle/ [MySna004 (CDB$ROOT)] df -h | grep snap
/dev/mapper/ora_data-SingleSn   25G   18G  6.3G  74% /u01/app/dbvisit/standby/snap/orcl/SingleSn
oracle@dbvisit2:/home/oracle/ [MySna004 (CDB$ROOT)]
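
It is therefore worth keeping an eye on the remaining space before creating snapshots. The device path above suggests the logical volumes belong to a volume group named ora_data (an assumption based on that path), so the standard LVM commands give a quick overview:

# size and free space of the volume group hosting the database (assumed name: ora_data)
vgs ora_data
# logical volumes in the group, including the ones backing the snapshots
lvs ora_data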

Using the alias

singlesnap =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbvisit2)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = SingleSn)
    )
  )

We can see that the new snapshot is opened in read-write mode

SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
SINGLESN  READ WRITE

SQL>
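
Since this snapshot is writable, nothing prevents us from proving it with a throw-away object. The sketch below is purely illustrative (PDB1 and the table name are assumptions) and connects through the singlesnap alias defined above:

# illustrative write test against the read-write snapshot
sqlplus -s sys/*****@singlesnap as sysdba <<'EOF'
-- open PDB1 if it is not already open (ignore ORA-65019 if it is)
alter pluggable database pdb1 open;
alter session set container = pdb1;
create table system.snap_write_test (id number);
insert into system.snap_write_test values (1);
commit;
drop table system.snap_write_test purge;
exit
EOF
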
Conclusion

What we will retain is that a snapshot group is a set of snapshots taken at a regular time interval. These snapshots can only be opened in read-only mode.
A single snapshot can be created in read-only or read-write mode. When opened in read-write mode, it can perhaps be compared to an Oracle snapshot standby.
The Dbvisit snapshot option requires a license and is only supported on Linux.

The post Dbvisit Standby 9 : Do you know the new snapshot feature? appeared first on Blog dbi services.

Why Businesses Need Gen 2 Cloud

Oracle Press Releases - Thu, 2020-01-16 09:46

By Barbara Darrow—Jan 16, 2020

As we move into the third decade of cloud computing, it’s important to remember that not all clouds are alike.

Back in, say, 2006, cloud computing services were a revelation to software developers. These rentable storage and compute services provided a handy virtual holding cell that developers could use for designing and prototyping new applications without breaking the bank or requiring scarce cooperation from in-house resources. And that was all well and good for that time.

Fast forward more than a decade and the status quo is a lot more complicated. As businesses of all sizes weigh the use of cloud platforms to run their critical applications, one part of the equation is that these applications cannot fail without serious financial and reputational consequences. Put it this way: If a computer game glitches, your point total may take a hit. If your manufacturing system hiccups, you can lose real money. And/or your job, depending on your role at the company.

For these types of workloads, businesses require highly secure cloud services that can work in tandem with other cloud services, with services that remain on-premise, and are modern enough for emerging competitive and technological needs.

Wanted:  Gen 2 Cloud

This shift in how companies look at cloud services affects tech suppliers and their customers alike, said Edward Screven, Oracle’s chief corporate architect, in an address earlier this month to Oracle customers attending Oracle Cloud Day in New York City.

“Gen 1 clouds are not good enough for many uses,” he said. Oracle built its Gen 2 cloud to support its cloud business applications, and for its business customers. “Your critical apps drove how we built our cloud,” Screven said.

Early clouds were built so that processors were shared by many customers. The advantage was cost efficiency; the downside was that customer code could, in some circumstances, access the very computers used to run the cloud service. In that case, if the customer code were infected by malware, the threat might not be contained to that customer’s workload and could spread.

Oracle’s Gen 2 cloud, by contrast, isolates customer code so it cannot access the cloud’s control computers—or vice versa. A separate processor, which Screven called a perimeter control computer, filters incoming network packets to prevent bad code from entering the customer’s work zone. Isolating the controls of the cloud operations from customer’s tasks helps ensure better security all around.

Gen 2 cloud also utilizes autonomous technology to patch, update, and configure cloud resources without human intervention. In complex environments, it is expensive and time consuming for human administrators to keep up with the latest security threats or configuration wrinkles—provided you can even find them in a tight labor market.

Oracle’s Gen 2 cloud supports online patching. During the Spectre/Meltdown bugs that afflicted Intel processors, online patching averted downtime for Oracle customers by applying 150 million fixes across 1.5 million computer cores. That massive operation took just four hours, Screven said. Perhaps more impressively, online patching, based on Oracle’s Ksplice technology, meant no servers had to be rebooted at all. “Our customers didn’t realize we’d even done it,” Screven said.

Autonomous capabilities give Oracle’s Gen 2 cloud 99.995% availability—which nets out to less than 2.5 minutes of downtime per month.

Bolstering Hybrid

While it is true that many business workloads are moving to outside public cloud infrastructure, it is also true that most businesses will keep running some mission-critical applications on premise.

According to Forrester VP and principal analyst James Staten, a healthy 74% of businesses surveyed describe their IT strategy as hybrid, meaning they use cloud in parallel with non-cloud technologies.

Those companies need a comprehensive way to manage their data and applications across deployment models. 

Oracle’s latest release of Enterprise Manager is intended to help companies migrate workloads to Oracle’s Gen 2 cloud—as well as manage their databases from a single dashboard, whether they are running on premise or in the cloud.

New World Order

Businesses are quickly learning that Gen 1 clouds relying on commodity hardware and software, and a patchwork quilt of disparate services, are not up to the mission-critical task of running their most important applications and safeguarding their data in a secure, scalable, reliable way.

Most likely they will want to avail themselves of a Gen 2 cloud built from its foundations to handle important business applications.
