Friday, December 16, 2016

Microsoft SQL Server Performance: optimize for ad-hoc workloads

Every SQL statement needs an execution plan before it can run against your data. The plan describes how to return the data quickly and efficiently. Because the same SQL statements are often issued over and over in quick succession, the server saves execution plans in a Plan Cache to improve performance. When the same SQL comes in again, the database can execute it with the already cached plan, which saves a lot of time. If a statement is only ever used once, its execution plan will simply age out of the cache and disappear.

What if you have a lot of SQL statements coming to your database that are only used once? As you can probably see, your plan cache will fill with execution plans that will never be re-used. This can prevent statements that actually do repeat from getting into the cache and slow your system down. Third-party reporting tools, and even JDE apps and UBEs, can and do send these one-time-use SQL statements quite frequently. These are called “ad-hoc” statements.

The configuration setting “optimize for ad-hoc workloads” can help you speed up your system by changing how plans are cached. With this setting enabled, the first time a SQL statement runs it still compiles an execution plan, but only a small “plan stub” is stored in the cache rather than the entire plan. If the statement is run a second time, SQL Server stores the full plan in the plan cache. If it’s never run again, the full plan is never cached at all, and that frees up space for SQL that actually repeats itself.


Enabling optimize for ad-hoc workloads is highly recommended for JD Edwards running on SQL Server, especially when using third-party applications that interface with the database. Before enabling it, be sure to take a performance baseline so that you can compare before and after data. It’s unlikely you will find anything negative, but it will be nice to have the data to back up your choice. 
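
If you decide to enable it, the change itself is a quick sp_configure call. Here's a minimal sketch (it's an advanced option, so that has to be exposed first; the setting is dynamic, so no restart is needed):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'optimize for ad hoc workloads', 1;
    RECONFIGURE;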

Tuesday, November 15, 2016

Oracle container and pluggable databases are now the default

This may not have come up too much yet, but starting with Oracle version 12.1.0.2 (supported as of TR 9.1.4) Oracle has done away with the traditional standalone (non-CDB) database in favor of Container Databases (CDB) holding Pluggable Databases (PDB). This adds a little more complexity to the install and management of the database, and any customer upgrading or migrating to 12c will have to convert their database to a PDB. Don’t worry though – it’s actually not that hard to do.

What is a CDB/PDB?
A CDB is basically all the Oracle system structures and objects that every database needs: control files, redo logs, tempfiles, and so on. The PDB holds all the user databases (like JDE). This architecture makes the data independent of the system, although an end-user wouldn’t be able to tell. All future installations of Oracle database will use this two-part configuration whether you need it or not.
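
If you're curious what the split looks like from SQL*Plus, a couple of commands illustrate it (the PDB name JDEPDB is just an example):

    -- From the root container, list the pluggable databases:
    SELECT name, open_mode FROM v$pdbs;

    -- Point your session at a specific PDB:
    ALTER SESSION SET CONTAINER = JDEPDB;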

PDB/CDB benefits
Where the PDB/CDB combo really shines is when a company is hosting multiple database instances. For example, instead of having two servers to hold production and development data, you could have both instances on a single server. Each would be a separate Pluggable Database (PDB), with both sharing a single common Container Database (CDB). A great feature of a PDB is that it can be cloned almost instantly, providing you with a point-in-time consistent copy of the source database. Be aware, however, that the multitenant option comes at an additional license cost.
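
As a sketch of how simple a clone is (the PDB names are made up, and this assumes Oracle-Managed Files so the data file names convert automatically; in 12.1 the source must be opened read-only first):

    ALTER PLUGGABLE DATABASE JDEPROD CLOSE IMMEDIATE;
    ALTER PLUGGABLE DATABASE JDEPROD OPEN READ ONLY;
    CREATE PLUGGABLE DATABASE JDETEST FROM JDEPROD;
    ALTER PLUGGABLE DATABASE JDETEST OPEN;

    -- Return the source to normal operation:
    ALTER PLUGGABLE DATABASE JDEPROD CLOSE IMMEDIATE;
    ALTER PLUGGABLE DATABASE JDEPROD OPEN;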

Using a PDB
JDE users wouldn’t ever connect to a CDB – that is reserved for the DBA. To an end user (or even a junior DBA), the PDB would look like a complete Oracle database – just like old times. In reality, most JDE installs are only going to ever have one PDB per CDB which is essentially the same as it always was. However, you could have a PDB for production, one for DV, and another for PY which could all run on the same large machine sharing a single CDB.

Why is there a CDB/PDB?
There are some great reasons to have PDBs, such as fast upgrades, rapid cloning and snapshots, and reduced overhead. The main benefit to this is really for Oracle, because it makes it easier to provide a cloud-style database service. This lets them manage multiple customer databases from a single point without multiplying the common overhead.

Uses in JDE
For a company using JDE software there are definitely some advantages, but in general JDE practice a multitenant CDB/PDB configuration would not typically be used. A copy of a PDB would need to have the same table owners as the source, and the act of duplicating the PDB requires a brief period of read-only access, so it could not be used for a JDE-style data refresh of a lower environment. Perhaps in future versions it will be possible to make PDB clones more flexible.


Friday, August 12, 2016

Ten tips for improving database performance on JD Edwards EnterpriseOne

There are probably a hundred tips that I could give you on how to speed up E1, and choosing just ten was kind of difficult. Should I list tips only for MSSQL or Oracle? What about generic tips? I decided to include a variety of the tips that I recommend most often overall. They generally offer the greatest benefit with the least amount of complexity. Now, you're going to look at a few items on this list and say, "Wait a second, this one right here is really difficult!" You'd be right to think that, but here's a secret: there's no such thing as database performance tuning that isn't somewhat complex. That's just databases for you.

Before you get into the list, I have some disclaimers. This is a list of what I think are the ten best tips. Other DBAs may pick a different list, or even say that what I've got listed here is a bunch of crazy talk. I also have to caution you that making some of the changes in here might be beyond your capabilities, your budget, or both. Please don't do anything without doing research and determining whether it's actually right for your system. And remember that every system is different: on one installation a particular tip could improve performance by 50%, while another IT shop might not see any improvement at all.

Finally, here are my ten most recommended tips for improving JDE database performance. I've included some links for reference, but I encourage you to spend some time reading as much about each topic as possible. Your database is the only one like it in the world, so make sure your solution is the best one for you.


  • General: SSDs
  • General: Remove unused indexes
    • Every index must be maintained, adding extra write work to every insert, update, and delete.
    • If an index is rarely or never used, it's stealing performance.
    • You can run a test before dropping them:
      • Oracle
        • Make the index invisible - ALTER INDEX index_name INVISIBLE;
        • The index is still maintained like any other index, but the optimizer does not use it.
        • Useful for making sure nothing gets worse when it's gone
      • MSSQL
        • Disable the index - ALTER INDEX IX_IndexName ON Schema.TableName DISABLE;
        • The index is not maintained and not used by the optimizer (re-enable it later with ALTER INDEX ... REBUILD).
        • You can measure the performance benefit and be sure it doesn't break anything.
    • Make sure you remove the indexes from the JDE E1 specs as well; otherwise, if you regenerate indexes using OMW, they will come back.
  • General: Don't generate indexes from OMW
    • By default OMW drops and recreates all indexes on a table. You can't change this behavior.
    • Index generation is an OFFLINE operation by default, meaning the table is not usable until ALL the indexes are regenerated.
    • On a large, busy table this will cause your system to grind to a halt.
    • Better idea: Create the indexes directly on the database. If they work for you, then create them inside JDE.
    • DO NOT generate them. Simply promote them using OMW so the specs are saved.
  • MSSQL: Tempdb optimization
  • MSSQL: Adjust max server memory
    • A common setting that installers overlook is the max server memory.
    • By default it is set to its maximum value of 2147483647 MB - 2.15 Petabytes!
    • MSSQL will use all of your physical memory if you don't change this value.
    • This can starve the operating system and other processes of memory and bring your system to a crawl.
    • A good starting rule of thumb is to set max server memory to 90% of the total physical RAM (see the sketch after this list).
    • Reference:
  • MSSQL: Read-Committed Snapshot Isolation (RCSI)
    • Are you seeing a lot of blocking on your database?
    • By default, Microsoft SQL Server uses "pessimistic locking".
    • This means that read operations block writes and vice-versa.
    • RCSI changes it to "optimistic locking".
    • The database is "optimistic" that the results of your read operation will not change.
    • If the data does change, readers simply use the most recently committed version of the row, which SQL Server keeps in tempdb.
    • Since most writes don't conflict with the data you're reading, implementing RCSI is almost always a huge performance improvement (see the sketch after this list).
    • Because those row versions live in tempdb, this requires increasing the amount of tempdb space available.
    • Reference:
  • Oracle: Create data files at their maximum size
    • If you're using a smallfile tablespace (the default), you will have one or more data files that are 32GB or less.
    • Avoid the overhead of file growth operations by creating them at full size right away.
    • It also covers you when you make a mistake on file growth settings.
    • When the file has to grow, it causes a performance impact.
    • Reference
  • Oracle: Redo log tuning
    • Your redo logs should switch about five times per hour, with enough redo log groups that each one can be written out to the archive logs before the log switch cycles back around.
    • Much longer than 15 minutes between switches puts you at greater risk of data loss in the event of a disaster.
    • Switching much more frequently than that burdens your system with constant checkpoints and archiving, which will slow it down.
    • When you figure out the right size you need, add the new log groups to your database and force log switches until all your old logs are inactive. Once you've got that, drop the old redo logs.
    • Reference
  • Oracle: AIX/Linux - Huge/large pages
    • Opinions differ about the overall performance benefit of this setting, but it does provide protection against SGA swapping on memory-constrained systems.
    • The memory page size on these operating systems is quite small out of the box.
    • When dealing with large Oracle SGA (System Global Area), you can improve performance by reducing the number of OS memory pages the database has to deal with.
    • In AIX it's called large pages and on Linux it's huge pages.
    • The right page size depends on the size of your SGA, the operating system, and the capability of the hardware.
    • The general advice is to implement 16MB on AIX and up to 1GB on Linux.
    • Create enough pages at the larger size to accommodate your entire SGA.
    • Reference:
  • Oracle: Pin the SGA
    • This goes hand-in-hand with huge/large pages.
    • "Pinning" means that the entire SGA will be loaded in to the memory pages you set aside and it will never be swapped out to disk.
    • If you don't pin, the SGA can get swapped out to disk due to memory exhaustion. Your database will run much more slowly if that happens.
    • Swapped-out pages won't get moved back into memory even when the memory pressure subsides, so your database will continue to run poorly.
    • Reference:
      • See above links
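
To make a couple of the MSSQL items above concrete, here's a hedged T-SQL sketch of the max server memory and RCSI changes (the database name JDE_PROD is a placeholder, and the memory figure assumes a 64GB server; size yours to your own hardware):

    -- Cap max server memory: 90% of 64GB is roughly 58982 MB.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 58982;
    RECONFIGURE;

    -- Enable RCSI; this needs a moment with no other active sessions in the database.
    ALTER DATABASE JDE_PROD SET READ_COMMITTED_SNAPSHOT ON;

And for the Oracle data file tip, creating a file at its full size up front is a single clause (the tablespace name and path are placeholders):

    ALTER TABLESPACE PRODDTAT ADD DATAFILE '/u01/oradata/proddtat02.dbf' SIZE 32767M AUTOEXTEND OFF;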

Thursday, July 21, 2016

Once again, we must talk about indexes

I've given a couple of talks about JDE and indexes, as well as had numerous discussions with developers, DBAs, and others. What I've found is that almost nobody really knows how indexes work, what the relationship between JDE and the database is regarding indexes, and that adding an index might not help anything at all.

There are many reasons why someone might think they need an index. A common one is slow performance in a particular app or UBE, and the first thing people usually recommend is adding an index. What I normally find, though, is that nobody truly understands the underlying problem.

Slow performance is not a problem, it's a symptom of something. Do you really need a new index, or do you have bad table and index statistics? Could the application need to be re-written slightly? Maybe there's a NOT IN condition that forces a full table scan? You really have to figure out exactly what's going on before choosing a course of action. I see a lot of people put an index on a database only to scratch their heads when there isn't any improvement. What usually ends up happening is the database is blamed, so the DBA crawls up in the guts of the application only to find that the real issue is data or development related. 

Sometimes there's not yet any performance problem at all. A developer could add an index just because it makes their work easier or because they think their app will need it. In JDE, when you're using Table Design Aid in OMW to create an object that performs table I/O, you have to choose an index from a drop-down list containing all the indexes in the JDE specs for that table. A developer will pick the index they think is appropriate for their work and move on. If there isn't one that works for them, they can create a new index that fits their needs.

Now if a developer creates a new index within JDE for whatever reason, it will show up on the list for the developer to choose. If the table or indexes are regenerated using OMW the new index will be created on the database. However, at runtime JDE does not send any information to the database on what index to use – it only sends a SQL statement. That's because the information in that drop-down box is only there for the benefit of the developer – it has no bearing on what index the database will use to serve the data. 

What this means is that the existence of an index is no guarantee that it will ever be accessed. I analyze a lot of databases, and many indexes shipped with JDE are never touched. Some are only used a few times in their lives. Others could be used thousands of times an hour. Regardless of how much they’re used, every one of those indexes adds an additional write for every single insert, update, and delete. As a result, extra unnecessary indexes can actually cause slower performance.

How does the database decide what index to use? In a nutshell, the optimizer estimates the “cost” of the transaction based on the existing table and index statistics, then chooses the plan with the lowest cost. It can (and does) get much more complicated than that, but that description will do for the purposes of this blog post.
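
You can watch the optimizer make that decision yourself. On Oracle, for example, EXPLAIN PLAN shows the chosen plan and its estimated cost (the table and value here are purely illustrative; MSSQL offers the same insight through its estimated execution plan):

    EXPLAIN PLAN FOR
      SELECT * FROM PRODDTA.F0101 WHERE ABAN8 = 12345;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);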

There are guidelines for deciding whether a new index could better serve a particular statement, too. For example, adding a new index on columns A and B when there is already an index on A, B, C, and D is a waste of time and decreases performance: the database can already get the best results for an A-and-B statement from the leading edge of the ABCD index.
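
A quick hypothetical using the F4211 makes the point (the columns were picked purely for illustration):

    -- Suppose this ABCD-style index already exists:
    CREATE INDEX PRODDTA.F4211_X1 ON PRODDTA.F4211 (SDAN8, SDITM, SDMCU, SDDOC);

    -- This statement can already use the leading edge of that index, so a new
    -- index on just (SDAN8, SDITM) adds write overhead for no read benefit:
    SELECT * FROM PRODDTA.F4211 WHERE SDAN8 = 12345 AND SDITM = 60003;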

"Go big or go home" isn't a phrase you should utter either. Indexes over five to eight columns don’t improve performance much if at all. The database can also use multiple indexes to fulfill a request, so adding a new index that encompasses what exists in two others might not add a huge benefit.


Indexes are a complicated subject to discuss. Sometimes they can make things go faster, sometimes they don’t seem to do anything, and occasionally they make things worse. From the JDE side there is no control over how the database chooses to execute a SQL statement, and that is largely due to its DB-agnostic design. 

Ideally, developers and users should work together with DBAs to find the best solution to a performance problem. If an index is required, then that's what we'll do - but let's do it because it's solving a problem, not reacting to a symptom. If you really need an index, ask your DBA to add it on the database side temporarily to see how it goes. If it works, then put it in the table specs. If not, attack it from another angle to see what's really going on.

Tuesday, April 26, 2016

JDE Best Practice - Security

One of the most overlooked best practices when managing JDE E1 is user security. When you set up a new user inside E1 you have to assign them a password. Pretty simple, right? Unfortunately that’s not all there is to security inside E1. As shipped, JDE E1 is essentially wide open for all users and companies must develop a plan to handle all aspects of E1 security. Once the plan is in place it requires regular maintenance and adjustment as users come and go or company needs change.

Any good CNC knows that in a vanilla E1 installation, the JDE user and all the schema accounts (PRODDTA, SY900, CRPCTL, etc.) have very well-known default passwords on the database. Many times during an implementation those passwords stay as shipped to make the process easier. The company may have the best intentions to address the problem after go-live, but that doesn’t always happen. If you don’t change these passwords, anyone with basic knowledge of E1 could access any data on your system.
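
On the database side, rotating those passwords is a one-line change per account (Oracle syntax; the password is obviously a placeholder). Just remember that E1 components that sign on as these accounts must be updated to match, so coordinate the change with your CNC:

    ALTER USER PRODDTA IDENTIFIED BY "UseAStrongPasswordHere";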

A little-known tidbit about JDE E1 database security: in versions 9.1 and older, all database users have full permissions on all tables. This means that if you give someone a login directly to the database, they will be able to view, change, or delete any record in any table regardless of what permissions you give that user. The only way to fix that is by running the Oracle Public Shutdown procedure.

There’s also operating system security. The physical files that reside on your E1 servers are fully accessible by default. Some of them need to be available, but many do not need to be wide-open. The most important files to protect are your INIs since they contain passwords for your key system users. Prior to tools release 9.1.4.x you had to rely on OS level security to protect those files, but in TR 9.1.4.x and above the passwords are encrypted.

Inside JDE you can control the E1 user passwords as well as other details, such as password complexity. Most CNCs already know about that, but one overlooked part of E1 security is within OMW. If a user has access to OMW they will be able to assign themselves to any project as any type of user – developer, manager, PVC administrator – the sky’s the limit! There’s no built-in security management application for OMW users, so you have to use row or application security to keep people from assigning themselves to whatever they want.


There are many other questions related to JDE E1 security that you might have and it’s likely that the answers you get will create even more questions. JDE E1 security is a very complex topic and if you require unique or specialized restrictions you’re going to need an expert.

Monday, April 18, 2016

Generating indexes with OMW is a bad thing

You've analyzed the performance of your application or UBE, and after working with the DBA you've all decided that adding an index to the F0911 is the best way to speed up your slow process. You tell your developer, who goes into Table Design Aid in OMW and adds the new index. They check in the table and the CNC promotes it to Production, after which a package is built and deployed. The only thing left is to generate the new index in Production. The CNC opens OMW, goes to Table Design Aid, clicks the Table Operations tab, and clicks Generate Indexes.

Within a few minutes the calls start coming in: Everything is running slow. Processes are taking forever. Shipments aren't processing as fast as they should, product is being held up, trucks aren't rolling. What happened? This was supposed to speed things up!

What you probably didn't know is that the moment you click the button to generate indexes in OMW, the first thing it does is drop all the existing indexes on that table. That might not be a big deal on a small table with a few indexes, but it's a huge problem on a large table like the F0911 which ships with 25 indexes. Since that table is used by most operations in E1, the loss of all the indexes causes every application and UBE to slow down. That table is typically so large it could take 24 hours to regenerate them all. Even worse, there's really no simple way to roll that back. You just have to let it continue until it's done.

Unfortunately, OMW does not allow you to regenerate individual indexes. It's an all-or-nothing proposition. This also makes testing possible indexes a very difficult task. Faced with this, CNC administrators schedule index generation after hours or on the weekend to reduce the performance hit.

There's a better way to get index changes applied to your system. It's significantly faster, it doesn't kill performance, and you only change what's necessary:

Create the indexes directly on the database.

Yes that's right: You open a SQL tool like SQL Developer and make the index changes directly on the database. In fact, you can create your database indexes without defining them in E1 at all. This can help if you want to test your changes before committing to anything.

You might wonder how E1 will know the indexes are there if you don't create them in OMW first. The truth is that E1 itself doesn't actually know or care what indexes are on the database. Behind the scenes E1 simply issues SQL statements to the database without any regard for indexing. Generally only developers and CNC admins need those specs inside E1 to facilitate their work. The transactions flowing out of E1 to the database don't look at those specs at all. When a transaction arrives, the database decides what index to use (if any) and executes the statement in the way it thinks is best.

That's not to say you shouldn't create indexes within E1 - you certainly should. You're going to need them for ESUs, upgrades, and E1 development, not to mention other CNCs that may not know about your new secret. By creating indexes manually on the database, you not only save huge amounts of time but you can test indexes quite easily. If you create an index and it doesn't do what you thought, you can easily remove it and try another one without having to go back through the development cycle.
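
Here's roughly what that test cycle looks like on Oracle (the index name and columns are only examples, not a recommendation):

    -- Create the candidate index directly on the database:
    CREATE INDEX PRODDTA.F0911_X1 ON PRODDTA.F0911 (GLANI, GLDGJ);

    -- ...run the slow app or UBE and compare timings and plans...

    -- Didn't help? Back it out with no package builds and no OMW:
    DROP INDEX PRODDTA.F0911_X1;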

Remember this new, faster way to get index changes into your system:
1. Determine what index you need.
2. Add it directly to the database.
3. If it works the way you want, add it inside E1.
4. Don't generate indexes with OMW!

Thursday, January 21, 2016

The shocking truth: You can't create a read-only user on your E1 database!

Did you know that by default all users created on Oracle and MSSQL databases have full permission on all JDE tables? It's true. Not only that, this permission takes precedence over any restrictions you might place on a particular DB user, such as making them read-only. This is hard-coded into EnterpriseOne, and Oracle has no plans to change this behavior.

Why is this? JDE is designed to insulate users and administrators from the hardware and database. This is part of the spirit of CNC - Configurable Network Computing. Yes, CNC actually means something other than "Person Who Gets Blamed For Everything". The CNC concept means that with only JDE tools a user can perform select, insert, update, delete, create, and drop operations on any table.

Because you don't have to know what database or operating system is being used, you don't need to have any specific database skills. Without ever leaving EnterpriseOne you can do full environment refreshes, table copies, add new indexes, and much more. The best part is that it is all controlled by JDE security. You never have to touch the database to do your day-to-day job.

As flexible as that is, there are often reasons to access the database directly. On one end are good reasons, like doing faster refreshes. In the middle ground are SQL fixes due to data issues. On the other end are bad reasons, which (in my opinion) include providing read-only access and large exports of raw data. I'll talk about why those are bad in another post.

No matter what the justification is for providing direct access, the shocking truth is that every database user can view and manipulate all JDE tables and data on the system - and there is nothing you can do to change that.

I take that back. There actually is something you can do to change it. It's called Oracle Public Shutdown (OPS), a somewhat complicated procedure that removes these global permissions and allows administrators to have better security control at the database level. Once you've run OPS you can now create read-only users as well as any other levels of access you can think of.
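
Once OPS has been run, creating a read-only user becomes ordinary database administration. A minimal Oracle sketch (the user name, password, and table are placeholders; in practice you'd grant SELECT across all the JDE tables):

    CREATE USER JDE_RO IDENTIFIED BY "UseAStrongPasswordHere";
    GRANT CREATE SESSION TO JDE_RO;
    GRANT SELECT ON PRODDTA.F0101 TO JDE_RO;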

Unfortunately for your administrative workload, OPS is not permanent. Every time a table is created or regenerated in OMW, every user gets full permissions reassigned to that object. Every ESU, ASU, and Update that contains tables will reset those permissions. An upgrade will reset many permissions. What this all means is that you have to come back to your database every so often and re-run OPS.

The next time someone asks for a database user, make sure there is a really good justification for their access. You'd be surprised how often they can do what they want inside E1. Plus, you don't want to give what is essentially DBA authority to just any user. Related tip: Make sure you have good backups.

If your client is concerned about the security of their database - especially if they have SOX or FDA requirements - they may need Oracle Public Shutdown. The GSI Database Team can help you evaluate your client's needs and work to secure your database.

Friday, January 8, 2016

Protecting your data from end-to-end

It’s pretty difficult to have a database without some protection for the data. You need a database user name and password to access the data at the very least. If you don’t want someone to see your data, you just don’t give them the credentials. That seems pretty simple, doesn’t it? Well, it is simple, but having a single user doesn’t allow for any flexibility in determining different levels of access.

Does everyone really need read/write access to every column in every table? Normally that answer is “no”. You’re probably going to need a read-only user, and maybe other users that are prevented from seeing sensitive data. It could get complicated setting up all those additional users and roles.

I’m not like the others

Maybe your installation is different and users don’t access the database directly. Their access is controlled by an application which handles all the security and permissions across the board. That application only needs a single user to connect to the data, and nobody but you knows the password. Well, you and the application administrator. That should be fine, right?

Even in a perfect world, there will be mistakes that can’t be corrected within an application. Will you, as the DBA, be performing all those corrections? What about analyzing the raw data to determine what corrections to make? Will you also be exporting raw data for business analysts to pore over? Chances are you won’t have the time or inclination to do all of that. You’re still going to need some additional database users with varying levels of authority.

Your data is still vulnerable

You’ve got users with specific permissions, you’ve got security in the application, and maybe you even have auditing enabled. The only people who can use the application or access the database are on the corporate network or VPN. You feel pretty safe right about now, don’t you?

What about operating system credentials? Who has access to the root or oracle account? What users have sudo rights? Anyone on the operating system can view or manipulate your data with OS superuser privileges. They don’t even have to access the database directly! Now imagine if someone outside your company – a hacker or corporate spy for example – finds an exploit that gives them access to the server operating system. Even if they can’t find a way to use SQL, they can get your data using OS commands as well.

Data encryption

Data is stored in the database in plain text. Despite all the user security you might have in place, the fact of the matter is that if someone gains access to your data files, they’re going to be able to see the information. Application security may keep people inside the company from seeing certain columns, but that data still sits in the database in plain text. Unless you encrypt it, your data is vulnerable.

The data sitting in a database is considered data at rest. This can be recent information or historical, archived data in a data warehouse. Data that is currently being manipulated or used in any way is data in use. Data moving across a network or other medium is data in transit. Data in transit is usually encrypted by VPN or other active methods. Data in use lives in memory, and that is normally not encrypted. Data at rest is also generally not encrypted, and depending on your database, you might not even have the capability to encrypt it at all.

Oracle Advanced Security

How do you protect your data in all its various states? To encrypt data at rest, you can use Oracle Advanced Security in Oracle 12c. Transparent Data Encryption (TDE) helps prevent attacks that attempt to bypass the database and read sensitive information from data files at the operating system level, from database backups, or from database exports. Oracle Advanced Security Data Redaction complements TDE by reducing the risk of unauthorized data exposure in applications, redacting sensitive data in query results before the data leaves the database.
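
As a rough sketch of what TDE tablespace encryption looks like in 12c (this assumes a keystore/wallet is already configured and open; the names and size are placeholders):

    CREATE TABLESPACE SECURE_DTA
      DATAFILE '/u01/oradata/secure_dta01.dbf' SIZE 1G
      ENCRYPTION USING 'AES256'
      DEFAULT STORAGE (ENCRYPT);

Tables created in that tablespace are encrypted on disk transparently; the application doesn't change at all.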

What about data in use or in transit? For that you have to get serious. The Oracle SuperCluster’s SPARC M7 processor protects, verifies, and encrypts data in use with built-in, dedicated encryption accelerators. Oracle Solaris 11 includes cryptographic APIs which ZFS uses to benefit from AES hardware acceleration on SPARC processors.

Data protection isn’t simple

Protecting your data isn’t as easy as having strong, secure passwords. Even with that your data is still unprotected and exposed. Whether that data is actively being used or if it’s sitting on an old, slow SAN somewhere, it’s vulnerable to a determined attacker. With Oracle 12c and Supercluster you can protect your data from end-to-end with little to no performance compromises.