Friday, January 19, 2018

It’s time to think about upgrading your database

Last year when the end of free extended support for Oracle 11.2.0.4 was looming, many companies were scrambling to plan upgrades before the support bills started rolling in. Then, suddenly, Oracle extended the free support several times, eventually settling on December 2018. Now that the new year is upon us, it’s time to start thinking seriously about upgrading.

Current Oracle database versions are covered under Premier Support, which provides updates, patches, bug fixes, and security alerts. Customers also get 24/7 support and certification with third-party software.

When Oracle releases a new database version, it's not long before the old version moves to Extended Support. Extended Support generally lasts for three years after Premier Support ends and includes many (but not all) of Premier's features; the most notable omission is certification with new third-party software. Extended Support also costs extra, though Oracle often waives the fee for a period to give customers time to migrate off that version. Eventually you'll have to upgrade to the new release or pay extra money to stay on the old one.

The last stage of support is Sustaining Support. At that point the database version is no longer fully supported. You'll still have access to everything created during Premier and Extended Support, as well as to the knowledge bases. What you won't get is new updates, patches, security alerts, or anything else. You're on your own.

And now for what you’ve been waiting for: Details on Oracle’s supported database versions. For those of you on Oracle 11.2.0.3 and earlier, it’s long past time to upgrade. All of those versions are on Sustaining Support (also known as ‘not supported’).

11.2.0.4 owners are in luck – for now. You are on free Extended Support until December 2018. After that you'll have to pay for Extended Support until December 31, 2020, when 11g moves to Sustaining Support.

If you’re on 12.1.0.1, I hate to say this but you’re unsupported as well. Oracle stopped patching that in August 2016.

Version 12.1.0.2 is on Extended Support now with the fees waived, so it has a little more free time left. It transitions to paid Extended Support in July 2019.

At this point you're probably wondering, "What does Oracle have in Premier Support?" If you said 12.2.0.1, you're right. The next version coming up behind it is Oracle 18c (internally versioned 12.2.0.2), but that's not out yet.

Now you know which databases are supported, but which database versions can you use with which JDE tools releases? If you're on the latest and greatest JDE 9.2.2 tools release, you can use 12.2.0.1 or 12.1.0.2. Those of you on JDE TR 9.1.4 through 9.2.1 can have Oracle 12.1.0.2, 12.1.0.1, or 11.2.0.4. When you get down to JDE TR 9.1.3 and earlier, you're stuck with the Oracle 11.x releases.

Here’s a handy grid with all the relationships:

JDE TR    11.2.0.4    12.1.0.1    12.1.0.2    12.2.0.1
9.2.2     No          No          Yes         Yes
9.2.1     Yes         Yes         Yes         No
9.2.0     Yes         Yes         Yes         No
9.1.5     Yes         Yes         Yes         No
9.1.4     Yes         Yes         Yes         No
9.1.3     Yes         No          No          No
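
Not sure exactly which release you're on? A quick query against the database will tell you:

select banner from v$version;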

Now it's entirely possible that Oracle could bump out the Extended Support dates again, as they have done many times in the past. Unfortunately, you can't make plans based on what might happen, so unless you want to roll the dice you need to start planning your upgrade now.

Saturday, July 15, 2017

Skip duplicate keys on inserts and log errors

Sometimes when you do a bulk insert, the statement will fail due to duplicate keys. Unfortunately, you don't get any information about which records failed: when the database encounters the first error, it rolls back the entire statement no matter how many records were inserted successfully before it.

Good news! There is a way to insert all the good records and list all the rows that were duplicates. In our example we got duplicate key errors inserting records into the proddta.f3013 table from the same table in another environment on another server.

First, create a table to log the errors. You put the name of the table being logged in the dml_table_name parameter.

begin
  dbms_errlog.create_error_log (dml_table_name => 'proddta.f3013');
end;
/

That procedure creates a table called proddta.err$_f3013. Next, put the error logging clause at the end of the SQL statement that failed. Look below at the full SQL: it's a standard "insert into ... select *" statement, but we're using a database link called JDENP_PUB to grab data from another server.

The actual meat of the error logging starts with "log errors into" followed by the name of your error log table. Next is the error tag, which in this case is simply ('INSERT'). You can put any text you like in there; it's just a label stored with each rejected row. Finally, the reject limit specifies how many failures to allow before stopping. Setting it to unlimited tells it to process all records no matter how many fail.

insert into proddta.f3013
select * from cvdta.f3013@JDENP_PUB
log errors into proddta.err$_f3013 ('INSERT')
reject limit unlimited;

The statement will finish without any fanfare. At this point you can commit the transaction and view your error log.
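
If you want a quick look at the failures, query the log table directly. The ora_err_* columns below are created automatically by dbms_errlog; the rest of the error table's columns mirror f3013:

select ora_err_number$, ora_err_mesg$, ora_err_tag$
from proddta.err$_f3013;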



All the records that failed will be listed. You can export the results to any file type (CSV, Excel, etc.) and send them back to the client for review.

Friday, December 16, 2016

Microsoft SQL Server Performance: optimize for ad hoc workloads

Every SQL statement needs an execution plan before it can run against your data, and building that plan takes work. Because SQL statements are often reused in quick succession, the server keeps a plan cache that saves execution plans for later. When the same SQL comes in again, the database reuses the cached plan to execute the statement, which saves a lot of time. If a statement is only ever used once, its cached plan simply ages out and disappears.

What if you have a lot of SQL statements coming into your database that are only ever used once? As you can probably guess, your plan cache fills up with execution plans that will never be reused. This can push out plans for statements that actually do repeat and slow your system down. Third-party reporting tools and even JDE apps and UBEs send these one-time-use, "ad hoc" statements quite frequently.

The configuration setting "optimize for ad hoc workloads" can help speed up your system by changing how plans are cached. With this setting enabled, the first time a SQL statement runs it still gets an execution plan, but SQL Server stores only a small "plan stub" in the cache rather than the entire plan. If the statement runs a second time, SQL Server stores the full plan. If it never runs again, the full plan never takes up cache space, which frees up room for SQL that actually repeats itself.
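
Turning it on is quick. Here's a minimal sketch – it's an advanced option, so it has to be exposed first – plus a query you can run later to see how many single-use stubs are piling up:

exec sp_configure 'show advanced options', 1;
reconfigure;
exec sp_configure 'optimize for ad hoc workloads', 1;
reconfigure;

-- Count the single-use plan stubs currently cached
select cacheobjtype, objtype, usecounts, size_in_bytes
from sys.dm_exec_cached_plans
where cacheobjtype = 'Compiled Plan Stub';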


Enabling optimize for ad-hoc workloads is highly recommended for JD Edwards running on SQL Server, especially when using third-party applications that interface with the database. Before enabling it, be sure to take a performance baseline so that you can compare before and after data. It’s unlikely you will find anything negative, but it will be nice to have the data to back up your choice. 

Tuesday, November 15, 2016

Oracle container and pluggable databases are now the default

This may not have come up too much yet, but starting with Oracle version 12.1.0.2 (supported as of TR 9.1.4), Oracle has deprecated the traditional standalone database architecture in favor of Container Databases (CDBs) holding Pluggable Databases (PDBs). This adds a little more complexity to installing and managing the database. Any customer upgrading or migrating to 12c will have to convert their database to a PDB. Don't worry though – it's actually not that hard to do.

What is a CDB/PDB?
A CDB is basically the Oracle system metadata plus the structures every database needs: control files, redo logs, tempfiles, and so on. The PDB holds all the user data (like JDE). This architecture makes the user data independent of the system, although an end user wouldn't be able to tell. All future installations of Oracle database will use this two-part configuration whether you need it or not.

PDB/CDB benefits
Where the PDB/CDB combo really shines is when a company hosts multiple database instances. For example, instead of having two servers to hold production and development data, you could put those two instances on a single server. Each would be a separate Pluggable Database (PDB), both sharing one common Container Database (CDB). A great feature of a PDB is that it can be cloned very quickly, giving you a point-in-time consistent copy of the source database. Keep in mind, though, that running more than one PDB per CDB requires the separately licensed Multitenant option.
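
As a sketch of how simple a clone is (the PDB names are made up, this assumes Oracle-managed datafiles, and on 12.1 the source has to be opened read-only first):

alter pluggable database jdeprod close immediate;
alter pluggable database jdeprod open read only;

create pluggable database jdepy from jdeprod;
alter pluggable database jdepy open;

-- Put the source back into normal read/write service
alter pluggable database jdeprod close immediate;
alter pluggable database jdeprod open;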

Using a PDB
JDE users won't ever connect to the CDB – that's reserved for the DBA. To an end user (or even a junior DBA), the PDB looks like a complete Oracle database – just like old times. In reality, most JDE installs will only ever have one PDB per CDB, which is essentially the same as it always was. However, you could have a PDB for production, one for DV, and another for PY, all running on the same large machine and sharing a single CDB.
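
To make that concrete, here's a quick sketch (the PDB name and connection details are made up):

-- As the DBA, connected to the CDB root, list the PDBs:
select name, open_mode from v$pdbs;

-- Move your session into a specific PDB:
alter session set container = jdeprod;

-- End users skip all of that and connect straight to the PDB's
-- service name, e.g.: sqlplus jde@//dbserver:1521/jdeprod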

Why is there a CDB/PDB?
There are some great reasons to have PDBs, such as fast upgrades, rapid cloning and snapshots, and reduced overhead. The main benefit to this is really for Oracle, because it makes it easier to provide a cloud-style database service. This lets them manage multiple customer databases from a single point without multiplying the common overhead.

Uses in JDE
For a company using JDE software there are definitely some advantages, but in typical JDE practice a multitenant CDB/PDB configuration wouldn't be used. A copy of a PDB has to keep the same table owners as the source, and duplicating a PDB requires a brief period of read-only access, so it can't be used for a JDE-style data refresh of a lower environment. Perhaps future versions will make PDB clones flexible enough for that.


Friday, August 12, 2016

Ten tips for improving database performance on JD Edwards EnterpriseOne

There are probably a hundred tips that I could give you on how to speed up E1, and choosing just ten was kind of difficult. Should I list tips only for MSSQL or Oracle? What about generic tips? I decided to include a variety of tips that I recommend the most often overall. They generally offer the greatest benefit with the least amount of complexity. Now, you're going to look at a few items on this list and say, "Wait a second, this one right here is really difficult!" You'd be right to think that, but here's a secret: It's tough to do any performance tuning on a database that isn't somewhat complex. That's just databases for you.

Before you get into the list, I have some disclaimers. This is a list of what I think are the ten best tips. Other DBAs might pick a different list or even say that what I've got here is a bunch of crazy talk. I also have to caution you that making some of these changes might be beyond your capabilities, budget, or both. Please don't do anything without doing research and determining whether it's actually right for your system. Also, I know you understand every system is different: one installation might see a 50% improvement from a particular tip, while another IT shop wouldn't see any improvement at all.

Finally, here are my ten most recommended tips for improving JDE database performance. I encourage you to spend some time reading as much about each topic as possible. Your database is the only one like it in the world; make sure your solution is the best one for you.


  • General: SSDs
  • General: Remove unused indexes
    • Every index adds extra work to every insert, update, and delete on the table.
    • If an index isn't used very often or not at all, it's stealing performance.
    • You can run a test before dropping them; the sketches after this list also include a query for finding candidates.
      • Oracle
        • Make the index invisible - ALTER INDEX index_name INVISIBLE;
        • The index is still maintained like any other index, but the optimizer does not use it.
        • Useful for making sure nothing gets worse before you actually drop it.
      • MSSQL
        • Disable the index - ALTER INDEX IX_IndexName ON Schema.TableName DISABLE;
        • Index is not maintained and not used by the optimizer.
        • You can measure the performance benefit and be sure it doesn't break anything.
    • Make sure you remove the indexes from the JDE E1 specs as well; otherwise, if you regenerate indexes using OMW, they will come back.
  • General: Don't generate indexes from OMW
    • By default OMW drops and recreates all indexes on a table. You can't change this behavior.
    • Index generation is an OFFLINE operation by default, meaning the table is not usable until ALL the indexes are regenerated.
    • On a large, busy table this will cause your system to grind to a halt.
    • Better idea: Create the indexes directly on the database. If they work for you, then create them inside JDE.
    • DO NOT generate them. Simply promote them using OMW so the specs are saved.
  • MSSQL: Tempdb optimization
  • MSSQL: Adjust max server memory
    • A common setting that installers overlook is the max server memory.
    • By default it is set to its maximum value of 2147483647 MB – roughly 2 petabytes!
    • MSSQL will use all of your physical memory if you don't change this value.
    • This can starve the operating system and other processes of memory and bring your system to a crawl.
    • A good starting rule of thumb is to set max server memory to about 90% of the total RAM, leaving the rest for the OS (see the sketches after this list).
  • MSSQL: Read-Committed Snapshot Isolation (RCSI)
    • Are you seeing a lot of blocking on your database?
    • By default, Microsoft SQL Server uses "pessimistic locking".
    • This means that read operations block writes and vice-versa.
    • RCSI changes it to "optimistic locking".
    • The database is "optimistic" that a writer's in-flight changes won't matter to your read.
    • Instead of waiting for the writer to finish, your query reads the last committed version of the data from the version store in tempdb.
    • Since most reads don't need to see uncommitted changes, implementing RCSI is almost always a huge performance improvement (the enable statement is in the sketches after this list).
    • This requires increasing the amount of Tempdb space available.
  • Oracle: Create data files at their maximum size
    • If you're using a smallfile tablespace (the default), you will have one or more data files that are 32GB or less.
    • Avoid the overhead of file growth operations by creating the files at their full size right away (see the sketches after this list).
    • It also covers you when you make a mistake on file growth settings.
    • When the file has to grow, it causes a performance impact.
  • Oracle: Redo log tuning
    • Your redo logs should switch about four to five times per hour, and you should have enough redo log groups that each one can finish writing to the archive logs before the switches come back around to it.
    • Switching much more frequently than that adds checkpoint and archiving overhead that can slow your system down.
    • Going much longer between switches puts you at greater risk of data loss in the event of a disaster.
    • When you figure out the right size, add new log groups at that size, force log switches until all the old groups are inactive, then drop the old groups (a query for checking switch frequency is in the sketches after this list).
  • Oracle: AIX/Linux - Huge/large pages
    • Opinions differ on the overall performance benefit of this setting; however, it does protect against SGA swapping on memory-constrained systems.
    • The memory page size on these operating systems is quite small out of the box.
    • When dealing with large Oracle SGA (System Global Area), you can improve performance by reducing the number of OS memory pages the database has to deal with.
    • In AIX it's called large pages and on Linux it's huge pages.
    • The size of your pages depends on the size of your SGA, the operating system, and the capability of the hardware.
    • The general advice is to implement 16MB pages on AIX and up to 1GB pages on Linux.
    • Create enough pages at the larger size to accommodate your entire SGA (the Oracle-side parameters are in the sketches after this list).
  • Oracle: Pin the SGA
    • This goes hand-in-hand with huge/large pages.
    • "Pinning" means that the entire SGA is loaded into the memory pages you set aside and is never swapped out to disk.
    • If you don't pin, the SGA can get swapped out to disk under memory exhaustion, and your database will run much more slowly when that happens.
    • Pages that have been swapped out won't move back into RAM even after the memory pressure subsides, so your database will continue to run poorly until you intervene.
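
As promised in the list, here are a few quick sketches of the commands involved. Treat these as starting points rather than finished scripts: the database names, file paths, and sizes are made-up examples you'll need to adapt to your own system.

Finding candidates for unused-index removal on MSSQL (an index with no seeks, scans, or lookups since the last restart is worth a closer look):

select object_name(s.object_id) as table_name, i.name as index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
from sys.dm_db_index_usage_stats s
join sys.indexes i
  on i.object_id = s.object_id and i.index_id = s.index_id
where s.database_id = db_id()
order by s.user_seeks + s.user_scans + s.user_lookups;

Setting max server memory (this example assumes a 64GB server, so 90% is roughly 58000 MB):

exec sp_configure 'show advanced options', 1;
reconfigure;
exec sp_configure 'max server memory (MB)', 58000;
reconfigure;

Enabling RCSI (the database name is hypothetical; run it in a maintenance window, since it needs exclusive access to the database):

alter database JDE_PROD set read_committed_snapshot on with rollback immediate;

Adding an Oracle data file at the full 32GB smallfile limit (tablespace name and path are examples):

alter tablespace proddtat
  add datafile '/u02/oradata/jdeprod/proddtat02.dbf'
  size 32767M autoextend off;

Checking how often your redo logs actually switch (one row per hour with a switch count):

select to_char(first_time, 'YYYY-MM-DD HH24') as hr, count(*) as switches
from v$log_history
group by to_char(first_time, 'YYYY-MM-DD HH24')
order by 1;

Finally, the Oracle side of large pages and SGA pinning comes down to two instance parameters (the OS side – allocating the huge or large pages themselves – varies by platform):

alter system set use_large_pages='ONLY' scope=spfile;  -- Linux only
alter system set lock_sga=true scope=spfile;
-- Both require an instance restart to take effect.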

Thursday, July 21, 2016

Once again, we must talk about indexes

I've given a couple of talks about JDE and indexes and had numerous discussions with developers, DBAs, and others. What I've found is that almost nobody really knows how indexes work, what the relationship between JDE and the database is when it comes to indexes, or that adding an index might not help anything at all.

There are many reasons why someone might think they need an index. A common one is slow performance in a particular app or UBE, where the first thing people usually recommend is adding an index. The trouble I normally find is that nobody truly understands the underlying problem.

Slow performance is not a problem, it's a symptom of something. Do you really need a new index, or do you have bad table and index statistics? Could the application need to be re-written slightly? Maybe there's a NOT IN condition that forces a full table scan? You really have to figure out exactly what's going on before choosing a course of action. I see a lot of people put an index on a database only to scratch their heads when there isn't any improvement. What usually ends up happening is the database is blamed, so the DBA crawls up in the guts of the application only to find that the real issue is data or development related. 

Sometimes there's no performance problem yet at all. A developer could add an index just because it makes their work easier or because they think their app will need it. Inside JDE, when you use Table Design Aid in OMW to create an object that does table I/O, you have to choose an index from a drop-down list containing all the indexes in the JDE specs for that table. A developer will pick the index they think is appropriate for their work and move on. If there isn't one that works for them, they can create a new index that fits their needs.

Now if a developer creates a new index within JDE for whatever reason, it will show up in that list for developers to choose from, and if the table or its indexes are regenerated using OMW, the new index will be created on the database. However, at runtime JDE sends the database no information about which index to use – it only sends a SQL statement. The information in that drop-down box exists solely for the developer's benefit; it has no bearing on which index the database will use to serve the data.

What this means is that the existence of an index is no guarantee it will ever be used. I analyze a lot of databases, and many indexes shipped with JDE are never touched. Some are only used a few times in their lives. Others might be used thousands of times an hour. Regardless of how much they're used, every one of those indexes adds an extra write and/or delete to every single transaction. As a result, unnecessary indexes can actually cause slower performance.

How does the database decide which index to use? In a nutshell, the optimizer estimates the "cost" of the statement based on the existing table and index statistics, then chooses the plan with the lowest cost. It can (and does) get much more complicated than that, but for the purposes of this blog post that definition works.
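
If you want to watch the optimizer make that decision, ask for the plan. A quick Oracle sketch (F4211 and its SDDOCO column are real JDE objects, but the order number is made up):

explain plan for
select * from proddta.f4211 where sddoco = 123456;

select * from table(dbms_xplan.display);

The output shows exactly which index (if any) was chosen and the cost the optimizer calculated for each step.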

There are also some guidelines about which indexes can service a particular statement. For example, adding a new index on columns A and B when there is already an index on A, B, C, and D is a waste of time and decreases performance. The database can already get the best results for an A/B predicate from the ABCD index.

"Go big or go home" isn't a phrase you should utter either. Indexes over five to eight columns don’t improve performance much if at all. The database can also use multiple indexes to fulfill a request, so adding a new index that encompasses what exists in two others might not add a huge benefit.


Indexes are a complicated subject to discuss. Sometimes they can make things go faster, sometimes they don’t seem to do anything, and occasionally they make things worse. From the JDE side there is no control over how the database chooses to execute a SQL statement, and that is largely due to its DB-agnostic design. 

Ideally, developers and users should work together with DBAs to find the best solution to a performance problem. If an index is required, then that's what we'll do – but let's do it because it solves the problem, not because it reacts to a symptom. If you really think you need an index, ask your DBA to add it on the database side temporarily to see how it goes. If it works, put it in the table specs. If not, attack the issue from another angle to see what's really going on.

Tuesday, April 26, 2016

JDE Best Practice - Security

One of the most overlooked best practices when managing JDE E1 is user security. When you set up a new user inside E1 you have to assign them a password. Pretty simple, right? Unfortunately that’s not all there is to security inside E1. As shipped, JDE E1 is essentially wide open for all users and companies must develop a plan to handle all aspects of E1 security. Once the plan is in place it requires regular maintenance and adjustment as users come and go or company needs change.

Any good CNC knows that in a vanilla E1 installation, the JDE user and all the schema accounts (PRODDTA, SY900, CRPCTL, etc.) have well-known default passwords on the database. During an implementation those passwords often stay as shipped to make the process easier, and while the company may have the best intentions to address that after go-live, it doesn't always happen. If you don't change these passwords, anyone with basic knowledge of E1 can access any data on your system.
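
Changing them on the database side is trivial – the catch is that JDE stores these passwords too (in its security tables and INI files), so they have to be updated in both places at once. A minimal Oracle sketch with a placeholder password:

alter user proddta identified by "Str0ng-Unique#1";

Repeat for each schema account, and coordinate with your CNC so the matching JDE-side entries are updated at the same time.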

A little-known tidbit about JDE E1 database security: in versions 9.1 and older, all database users have full permissions on all tables. This means that if you give someone a login directly to the database, they will be able to view, change, or delete any record in any table regardless of the permissions you assign that user. The only way to fix that is by running the Oracle Public Shutdown procedure.
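
Conceptually, public shutdown revokes the blanket grants to PUBLIC and replaces them with grants to controlled roles. A simplified sketch of what it does for each table (the role name here is hypothetical):

revoke all on proddta.f0101 from public;
grant select, insert, update, delete on proddta.f0101 to jde_role;

The delivered procedure does this across every table in every data source, which is why you run it rather than scripting it by hand.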

There’s also operating system security. The physical files that reside on your E1 servers are fully accessible by default. Some of them need to be available, but many do not need to be wide-open. The most important files to protect are your INIs since they contain passwords for your key system users. Prior to tools release 9.1.4.x you had to rely on OS level security to protect those files, but in TR 9.1.4.x and above the passwords are encrypted.

Inside JDE you can control E1 user passwords and related details such as password complexity. Most CNCs already know about that, but one overlooked part of E1 security is within OMW. If a user has access to OMW, they can assign themselves to any project as any type of user – developer, manager, PVC administrator – the sky's the limit! There's no built-in security application for managing OMW users, so you have to use row or application security to keep people from assigning themselves whatever they want.


There are many other questions related to JDE E1 security that you might have and it’s likely that the answers you get will create even more questions. JDE E1 security is a very complex topic and if you require unique or specialized restrictions you’re going to need an expert.