Wednesday, December 3, 2025

JDE Security History (F9312) – Tracking User Activity, Logins, Additions, and Deletions


In Oracle JD Edwards EnterpriseOne, security auditing is a critical capability for tracking user behavior, compliance, and system integrity. One of the most important tables used for this purpose is F9312 – Security History Table, which captures key security events such as logins, user creation, and deletions.

This blog explains how to use F9312 effectively with practical SQL queries for common audit requirements.

What is F9312?

The F9312 (Security History File) stores historical security event logs generated by JD Edwards EnterpriseOne Security Server.

It helps track:

  • Failed login attempts
  • User creation events
  • User deletion events
  • Other security-related activities

Each record includes:

  • User ID
  • Event Type (SHEVTYP)
  • Event Status (SHEVSTAT)
  • Date (SHUPMJ – Julian date format)
  • Time and additional audit details

1. Failed Login Attempts for a Specific User

To identify failed login attempts for a user:

SELECT *
FROM SY920.F9312
WHERE SHUSER = 'xxxxx'
AND SHEVTYP = '01'
AND SHEVSTAT = '02';

Explanation:

  • SHUSER → User ID
  • SHEVTYP = '01' → Login Event
  • SHEVSTAT = '02' → Failed Login Status

👉 This query helps detect brute-force attempts or incorrect password usage patterns.

2. User Creation Events (From a Given Date)

To track when users were added:

SELECT *
FROM SY920.F9312
WHERE SHEVTYP = '05'
AND SHUPMJ >= 125001;

Explanation:

  • SHEVTYP = '05' → User Creation Event
  • SHUPMJ >= 125001 → Records from 01-Jan-2025 onwards (Julian format)

👉 Useful for onboarding audits and compliance checks.
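Because SHUPMJ filters like 125001 appear in several of these queries, it helps to see how the CYYDDD Julian value is constructed. Below is a small Python sketch (the helper names are illustrative, not part of JDE) that converts between Gregorian dates and JDE Julian integers:

```python
from datetime import date, timedelta

def to_jde_julian(d):
    """Gregorian date -> JDE Julian integer (CYYDDD)."""
    century = d.year // 100 - 19          # 0 for 19xx, 1 for 20xx
    return century * 100000 + (d.year % 100) * 1000 + d.timetuple().tm_yday

def from_jde_julian(j):
    """JDE Julian integer (CYYDDD) -> Gregorian date."""
    century, rest = divmod(j, 100000)
    yy, ddd = divmod(rest, 1000)
    return date(1900 + century * 100 + yy, 1, 1) + timedelta(days=ddd - 1)

print(to_jde_julian(date(2025, 1, 1)))   # 125001
print(from_jde_julian(125001))           # 2025-01-01
```

This confirms why SHUPMJ >= 125001 means "on or after 01-Jan-2025": 1 (century) + 25 (year) + 001 (day of year).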

3. User Deletion Events (From a Given Date)

To track deleted users:

SELECT *
FROM SY920.F9312
WHERE SHEVTYP = '06'
AND SHUPMJ >= 125001;

Explanation:

  • SHEVTYP = '06' → User Deletion Event
  • SHUPMJ >= 125001 → From 01-Jan-2025 onwards

👉 Helps ensure no unauthorized user removals occurred.

4. Common SHEVTYP (Event Types)

Below are some commonly used event type codes in F9312:

The event type codes referenced in this article are:

  • 01 – Login event
  • 05 – User creation event
  • 06 – User deletion event

Why F9312 is Important

Using F9312 effectively helps organizations:

  • Strengthen security monitoring
  • Detect suspicious login behavior
  • Maintain compliance (SOX, audit requirements)
  • Track administrative changes in real time
  • Improve governance in ERP systems

Best Practices

  • Regularly archive F9312 data to avoid performance issues
  • Build dashboards for failed login trends
  • Alert on repeated failed login attempts
  • Combine with user profile tables for deeper analysis
  • Restrict access to security audit tables
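The "alert on repeated failed login attempts" practice can be prototyped outside the database as well. Here is a minimal Python sketch (the function name and threshold are illustrative, not JDE functionality) that flags users whose failed-login rows (SHEVTYP = '01', SHEVSTAT = '02') reach a threshold:

```python
from collections import Counter

def flag_repeated_failures(rows, threshold=5):
    """rows: iterable of (SHUSER, SHEVTYP, SHEVSTAT) tuples fetched from F9312.

    Returns users with at least `threshold` failed logins
    (SHEVTYP = '01', SHEVSTAT = '02'), sorted for stable reporting."""
    failures = Counter(
        user for user, evtyp, evstat in rows
        if evtyp == '01' and evstat == '02'
    )
    return sorted(u for u, n in failures.items() if n >= threshold)

rows = [('JDOE', '01', '02')] * 6 + [('ASMITH', '01', '01')]
print(flag_repeated_failures(rows))   # ['JDOE']
```

The same grouping logic could of course be pushed into SQL with GROUP BY / HAVING if you prefer alerting directly from the database.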

Final Thoughts

The F9312 Security History table in Oracle JD Edwards EnterpriseOne is a powerful but often underutilized tool. With the right queries and monitoring strategy, it can significantly improve your enterprise security posture and audit readiness.


Monday, December 1, 2025

SQL Server: Identify Data Files with Less Than 10% Free Space

 

In database administration, monitoring storage utilization is critical to avoid performance degradation, blocking issues, and unexpected outages. One of the most important checks for a DBA is identifying data files that are running low on free space.

In this blog, we’ll look at a practical approach to identify database files in Microsoft SQL Server that have less than 10% free space using a dynamic query across all databases.

Why Monitoring File Space Matters

When data files run out of space, it can lead to:

  • Transaction failures
  • Slow query performance
  • Blocking and deadlocks
  • Application downtime
  • Emergency auto-growth events (which are expensive)

Proactive monitoring helps DBAs:

  • Plan storage expansion
  • Avoid emergency outages
  • Optimize database performance

Solution Overview

The following script:

  • Scans all databases
  • Collects data file size and free space
  • Stores results in a temporary table
  • Filters files with less than 10% free space

SQL Query to Identify Low-Space Data Files


CREATE TABLE #FileSize
(
    dbName NVARCHAR(128), 
    FileName NVARCHAR(128), 
    type_desc NVARCHAR(128),
    CurrentSizeMB DECIMAL(10,2), 
    FreeSpaceMB DECIMAL(10,2),
    FreeSpacePercentage DECIMAL(10,2)
);

INSERT INTO #FileSize
(dbName, FileName, type_desc, CurrentSizeMB, FreeSpaceMB, FreeSpacePercentage)
EXEC sp_msforeachdb 
'
USE [?];

SELECT 
    DB_NAME() AS DbName,
    name AS FileName,
    type_desc,
    CAST((size * 8.0 / 1024) AS DECIMAL(10, 2)) AS CurrentSizeMB,
    CAST(((size - FILEPROPERTY(name, ''SpaceUsed'')) * 8.0 / 1024) AS DECIMAL(10, 2)) AS FreeSpaceMB,
    CAST((((size - FILEPROPERTY(name, ''SpaceUsed'')) * 100.0) / size) AS DECIMAL(10, 2)) AS FreeSpacePercentage
FROM sys.database_files
WHERE type IN (0,1);
';

SELECT * 
FROM #FileSize
WHERE dbName NOT IN ('distribution', 'master', 'model', 'msdb')
AND FreeSpacePercentage < 10;

DROP TABLE #FileSize;


How This Query Works

1. Temporary Table Creation

A staging table #FileSize is created to store results from all databases.

2. Iterating Through All Databases

We use:

sp_msforeachdb

This undocumented but widely used system stored procedure loops through every database on the instance and executes the query against each one.

3. Collecting File Metrics

From:

sys.database_files

We extract:

  • File name
  • File type (data/log)
  • Total size
  • Used space
  • Free space percentage

4. Filtering Critical Files

We exclude system databases:

  • master
  • model
  • msdb
  • distribution

Then we filter:

FreeSpacePercentage < 10

This highlights files that are running critically low on space.
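The size and FILEPROPERTY values used by the script are counts of 8 KB pages, which is why the query multiplies by 8.0 / 1024 to get megabytes. Here is the same arithmetic as a small Python sketch (the numbers are hypothetical, chosen to match the rounding in the query):

```python
def file_space_metrics(size_pages, space_used_pages):
    """Mirror the T-SQL arithmetic: sys.database_files.size and
    FILEPROPERTY(name, 'SpaceUsed') both report 8 KB pages."""
    current_mb = round(size_pages * 8.0 / 1024, 2)
    free_mb = round((size_pages - space_used_pages) * 8.0 / 1024, 2)
    free_pct = round((size_pages - space_used_pages) * 100.0 / size_pages, 2)
    return current_mb, free_mb, free_pct

# A 20 GB data file with roughly 18.8 GB used crosses the 10% threshold:
print(file_space_metrics(2_621_440, 2_467_840))   # (20480.0, 1200.0, 5.86)
```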

Output Example

dbName     FileName     type_desc   CurrentSizeMB   FreeSpaceMB   FreeSpacePercentage
SalesDB    Sales_Data   ROWS        20480           1200          5.8
HRDB       HR_Data      ROWS        10240           800           7.9

Best Practices

1. Set Alerts

Integrate this query into SQL Agent jobs for proactive alerting.

2. Avoid Frequent Auto-Growth

Instead of relying on auto-growth, plan capacity ahead of time.

3. Separate Data and Log Monitoring

Data files and log files behave differently—monitor both independently.

4. Use Performance Dashboards

Visualize file growth trends using tools like:

  • Power BI
  • Grafana
  • SSRS

Enhancements You Can Add

If you want to extend this script further:

  • Add disk-level free space checks
  • Include auto-growth settings
  • Identify top growing databases
  • Export results to email alerts

Final Thoughts

In Microsoft SQL Server environments, storage issues are one of the most common causes of production incidents. This simple query helps DBAs proactively identify databases running low on space and take corrective action before users are impacted.


Wednesday, October 29, 2025

Understanding ESU and ASU Process in JD Edwards – How Object Installation Decisions Are Made


In Oracle JD Edwards EnterpriseOne, Software Updates play a critical role in keeping environments stable, secure, and up to date. Two key components of this update mechanism are:

  • ESU (Electronic Software Update)
  • ASU (Application Software Update)

A common question for administrators and developers is:

How does JDE decide whether an object should be added, replaced, or skipped during an ESU/ASU installation?

This blog explains the internal logic behind that decision and how the system uses key tables such as F96400 and F9672.

What are ESU and ASU?

ESU (Electronic Software Update)

ESU delivers fixes or enhancements for specific objects in JD Edwards applications.

ASU (Application Software Update)

ASU is a broader update mechanism that may include multiple ESUs and object-level changes.

Both follow similar object comparison logic during installation.

Key Tables Used in Decision Making

JD Edwards uses two important tables during the ESU/ASU process:

1. F96400 (SGESUDM)

  • Stores ESU object metadata
  • Contains update date for the incoming ESU object
  • Field of interest: SGESUDM

2. F9672 (SUSUDATE)

  • Stores software update history
  • Tracks previously applied objects in the environment
  • Field of interest: SUSUDATE

Core Decision Logic

The system determines whether an object should be:

  • Added
  • Replaced
  • Merged (legacy only, removed in 64-bit systems)

Decision Rule:

If SGESUDM > SUSUDATE → Object is REPLACED

In simple terms:

  • If the incoming ESU object is newer → Replace existing object
  • If the dates are equal or older → No replacement or conditional handling
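Expressed as code, the rule is tiny. This is an illustrative Python sketch (the function and the "SKIP" label are mine, not JDE's; the tools handle the equal-or-older case internally):

```python
def esu_action(sgesudm, susudate=None):
    """Compare the incoming ESU object date (F96400.SGESUDM) with the
    latest applied date for the same object (F9672.SUSUDATE).
    Both values are JDE Julian integers; susudate is None when the
    object has no history record in the target path code."""
    if susudate is None:
        return "ADD"        # no prior record: object is new to the path code
    if sgesudm > susudate:
        return "REPLACE"    # incoming ESU object is newer
    return "SKIP"           # equal or older: no replacement

print(esu_action(125100, 124200))   # REPLACE
print(esu_action(125100))           # ADD
print(esu_action(125100, 125100))   # SKIP
```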

Important Notes

1. F9861 is NOT used

The ESU installation process does not check F9861 to decide object installation.


2. MERGE Option Removed in 64-bit

In modern JD Edwards environments:

  • Only ADD and REPLACE are supported
  • MERGE functionality has been removed

SQL to Identify All Objects That Will Be Replaced

Before applying an ESU (example: UN9), you can identify impacted objects using:

SELECT
    T1.SGOBNM,
    T1.SGESUP,
    T1.SGESUDM,
    T2.SUOBNM,
    T2.SUPKGNAME,
    T2.SUSUDATE,
    T2.SUPATHCD
FROM JDESY920.F96400 T1
JOIN JDESY920.F9672 T2
    ON T1.SGOBNM = T2.SUOBNM
WHERE T1.SGESUP = 'UN9'
    AND T2.SUPATHCD = 'PD920'
    AND T1.SGESUDM > (
        SELECT MAX(T3.SUSUDATE)
        FROM JDESY920.F9672 T3
        WHERE T3.SUOBNM = T1.SGOBNM
            AND T3.SUPATHCD = 'PD920'
    );

What this shows:

  • Objects that will be REPLACED
  • New ESU objects not yet applied
  • Impact scope before deployment

Object-Level Comparison Example

Example 1 – Replace Scenario

SELECT SUOBNM, SUPKGNAME, SUSUDATE, SUPATHCD
FROM JDESY920.F9672
WHERE SUPATHCD = 'DV920'
    AND SUOBNM = 'B9600186';

SELECT SGOBNM, SGESUP, SGESUDM
FROM JDESY920.F96400
WHERE SGESUP = 'UN9'
    AND SGOBNM = 'B9600186';

Interpretation:

If:

SGESUDM > SUSUDATE

👉 Object will be REPLACED


Why This Logic Matters

Understanding ESU decision logic helps:

  • Avoid unexpected object overwrites
  • Predict deployment impact
  • Reduce downtime during ESU application
  • Improve change management planning
  • Support audit and compliance requirements

Common Use Case Before Applying ESU (UN9 Example)

Before installing an ESU package such as UN9, administrators should:

  1. Extract ESU package
  2. Install in Planner Environment
  3. Run comparison SQL
  4. Identify:
    • Objects to be replaced
    • New objects to be added
    • No-change objects

Reference Oracle Support Documents

  • E1: ESU Install Process Under the Covers (Doc ID: 2388199.2)
  • How to Force Merge or Re-Merge Objects (Doc ID: 790705.1)
  • Why Objects Are Merged vs Replaced (Doc ID: 660444.1)
  • Net Change Functionality of Software Updates (Doc ID: 651812.1)

Final Thoughts

The ESU/ASU process in Oracle JD Edwards EnterpriseOne is driven by a simple but powerful rule set based on object metadata comparison.

At the core:

SGESUDM (incoming ESU) vs SUSUDATE (existing system)

Understanding this logic allows technical teams to:

  • Predict update impact accurately
  • Avoid production surprises
  • Improve deployment confidence









Monday, August 11, 2025

PowerShell Commands for the JDE PrintQueue


In JD Edwards EnterpriseOne (JDE) environments, file management in directories like PrintQueue, media objects, and upload folders becomes critical for performance, cleanup, and troubleshooting.

Over time, these folders grow significantly, especially in production environments. PowerShell provides a fast and reliable way to analyze, filter, copy, and count JDE-related files.

This blog covers practical PowerShell commands used in real JDE environments for PrintQueue analysis and file operations.

1. Finding JDE PrintQueue Information

The PrintQueue directory in JDE stores batch print outputs, reports, and spool files. Over time, it accumulates large volumes of data.

Typical Path:

C:\JDEdwardsPPack\E920\PrintQueue

You can inspect files using:

Get-ChildItem -Path "C:\JDEdwardsPPack\E920\PrintQueue"

What This Helps With:

  • Identify active and old print jobs
  • Analyze disk usage
  • Support troubleshooting for UBE printing issues

2. Copying PrintQueue Files Based on Age (Retention Strategy)

In large JDE environments, old PrintQueue files need to be archived based on age.

Example: Files older than 1000 days

Get-ChildItem -Path "C:\JDEdwardsPPack\E920\PrintQueue" |
Where-Object {$_.LastWriteTime -lt (Get-Date).AddDays(-1000)} |
Copy-Item -Destination "C:\JDEdwardsPPack\E920\GoLive\PrintQueueArc"

Why This Matters:

  • Prevents PrintQueue folder from growing uncontrollably
  • Improves system performance
  • Helps in archival strategy for compliance

3. Finding a Specific File in JDE Folder

Sometimes troubleshooting requires locating a specific report or PDF generated by JDE.

Example:

Get-ChildItem -Path "X:\JDEdwards\E920\mediaobj\htmlupload" `
-Filter "FILE-10-0-10-141-58881134646742537-1504646610815.pdf"

4. Counting Files in a JDE Folder


To understand storage usage or file volume, you can count files easily.

Example:

Get-ChildItem -Path "D:\JDEdwards\E920\mediaobj\htmlupload" -File |
Measure-Object | Select-Object -ExpandProperty Count

5. Counting Files Between Date Range

For audit or cleanup planning, filtering by date range is extremely useful.

Script:

$Path = "D:\JDEdwards\E920\mediaobj\htmlupload"
$StartDate = [datetime]"2017-01-01"   # explicit cast keeps the date comparison unambiguous
$EndDate   = [datetime]"2023-01-01"

(Get-ChildItem -Path $Path -File -Recurse |
    Where-Object {
        $_.CreationTime -ge $StartDate -and $_.CreationTime -le $EndDate
    }).Count


6. Best Practices for JDE File Management

✔ Always Archive Before Delete

Never delete PrintQueue files directly in production without backup.


Final Thoughts

PowerShell is a powerful tool for managing JD Edwards PrintQueue and file system growth. With simple commands, administrators can:

  • Analyze file usage
  • Archive old print jobs
  • Count media objects
  • Improve system performance
  • Support compliance requirements

Sunday, February 16, 2025

JDE Web Package Build

 

Application 9.2

Tools Release 9.2.5.x

Verify that the Tools Planner ESU, Tools ESU, and Tools ASI are completed.

Three new applications have been delivered, off of menu GH9083, for the web client:

  • P9601W - Package Assembly
  • P9621W - Package Definition
  • P9631W - Package Deployment


Configure Deployment Server INI

[INSTALL]

ClientType=deployment

[JDENET_KERNEL_DEF11]
KrnlName=PACKAGE BUILD KERNEL
dispatchDLLName=jdekrnl.dll
dispatchDLLFunction=_JDEK_DispatchPkgBuildMessage@28
MaxNumberOfProcesses=1
numberOfAutoStartProcesses=1

 

Also, make sure the following matches the content in the jde.ini on the Enterprise Server:

[SECURITY]
SecurityServer=xxxxxxx
User=JDE
Password=xxxxxxxx
DefaultEnvironment=DV920

[JDENET]
serviceNameListen=6017
serviceNameConnect=6017


Start and Stop Deployment Server service - JDE B9 Client Network

Set the service startup type to Automatic.

Use application P9601W for Package Assembly, followed by Package Definition (P9621W) and Package Deployment (P9631W).

Thursday, February 6, 2025

JDE Check-In Process Based on Tool Release

 

Understanding how the JD Edwards EnterpriseOne check-in process works across different Tools Releases is important for CNC administrators and developers. Over time, Oracle changed how specifications and artifacts are stored, managed, and deployed.

This article explains the differences in the check-in process from older releases through Tools 9.2.5.x and higher.


Tools Prior to 9.2.1

In releases prior to Tools 9.2.1, the check-in process was straightforward and relied heavily on the Deployment Server file system.


Specs

Specifications were copied:

  • From the Development Workstation
  • To the Central Objects tables (F987*)

These specification records were stored in the Central Objects database.

Artifacts

Artifacts such as:

  • Source files
  • Include files
  • Java files
  • Resource (.res) files

were copied from the Development Workstation directly to the Deployment Server pathcode folders.

Architecture Overview


Development Workstation
        |
        |---- Specs ----> Central Objects (F987*)
        |
        |---- Artifacts ----> Deployment Server Pathcode Folder




Tools 9.2.1.x to 9.2.4.x

Oracle gradually introduced the Repository tables (F98780R and F98780H) and changed how artifacts were handled.


Tools 9.2.1.x

Specs

Specifications continued to be copied:

  • From Development Workstation
  • To Central Objects (F987*) tables

Artifacts

Artifacts were copied to two locations:

Deployment Server Pathcode Folder

Artifacts copied included:

  • Source
  • Include
  • Java
  • Resource files

Repository Tables

Artifacts were also stored in:

  • F98780R
  • F98780H

This was the beginning of repository-based artifact management.

Architecture Overview

Development Workstation
|
|---- Specs ----> Central Objects (F987*)
|
|---- Artifacts ----> Deployment Server Pathcode Folder
|
|---- Artifacts ----> F98780R / F98780H

Tools 9.2.3.x

Tools 9.2.3 introduced major changes for NER and TER object handling.

Specs

Specifications still copied to:

  • Central Objects (F987*) tables

Artifacts

Deployment Server

The following continued to copy to Deployment Server:

  • Source
  • Include
  • Java
  • Resource files

However:

  • NER artifacts were no longer copied
  • TER artifacts were no longer copied

Repository Tables

Artifacts copied to:

  • F98780R
  • F98780H

But:

  • NER and TER artifacts were not stored in Repository tables
  • BSFN artifacts continued to be stored

Build-Time Generation

NER and TER artifacts (.c and .h) started being generated during package build time instead of being stored directly.

Key Change

This reduced dependency on storing generated NER and TER C source files in the Deployment Server and Repository.

Architecture Overview


Development Workstation
|
|---- Specs ----> Central Objects (F987*)
|
|---- BSFN Artifacts ----> Deployment Server
|
|---- BSFN Artifacts ----> F98780R / F98780H
|
|---- NER/TER .c and .h generated during build


Tools 9.2.4.x

Tools 9.2.4 further modernized artifact handling and reduced dependency on the Deployment Server.

Specs

Specifications continued to be copied to:

  • Central Objects (F987*) tables

Artifacts

Artifacts such as:

  • Source
  • Include
  • Java
  • Resource files

were copied only to:

  • F98780R
  • F98780H

Deployment Server Changes

The following were no longer copied to the Deployment Server:

  • BSFN artifacts
  • NER artifacts
  • TER artifacts

Repository Behavior

BSFN

BSFN artifacts continued to be stored in:

  • F98780R
  • F98780H

NER and TER

NER and TER artifacts were not stored in Repository tables.

Instead:

  • .c
  • .h

files were generated dynamically during build time.

Key Improvement

This release significantly reduced file-system dependency on the Deployment Server and moved EnterpriseOne closer to repository-centric object management.

Architecture Overview

Development Workstation
|
|---- Specs ----> Central Objects (F987*)
|
|---- BSFN Artifacts ----> F98780R / F98780H
|
|---- NER/TER generated during build



Tools 9.2.5.x and Higher

Tools 9.2.5 introduced another major architectural change.

E1Local Database Removed

The E1Local database was removed from the architecture.

This simplified the development and check-in process.

Specs

Specifications are copied:

  • From User Spec Tables (F98xxxUS)
  • To Central Objects Check-in location tables (F987xxx)

Artifacts

Artifacts including:

  • Source
  • Include
  • Java
  • Resource files

are copied directly into:

  • F98780R
  • F98780H

No Deployment Server artifact dependency exists for check-in processing.

Architecture Overview

Development Workstation
|
|---- Specs: User Spec Tables (F98xxxUS) ----> Central Objects (F987*)
|
|---- Artifacts ----> F98780R / F98780H



Final Thoughts

The JD Edwards EnterpriseOne check-in architecture has evolved significantly over time:

  • Older releases depended heavily on Deployment Server file systems.
  • Mid releases introduced Repository tables.
  • Modern releases rely primarily on Repository storage and build-time artifact generation.

Understanding these changes helps CNC administrators troubleshoot:

  • Check-in failures
  • Missing artifacts
  • Package build issues
  • Repository synchronization problems
  • Object promotion inconsistencies

It also helps explain why older troubleshooting methods may not apply to newer Tools Releases.



Wednesday, January 22, 2025

JDE Submitted Job Execution Performance


In JD Edwards EnterpriseOne, submitted jobs (UBE reports and batch processes) should be regularly analyzed to identify:

  • Long-running jobs
  • Performance degradation
  • Resource-heavy versions
  • Runtime inconsistencies
  • Scheduling bottlenecks

One of the best ways to evaluate batch job performance is by analyzing execution time history from the F986114 table.

This article explains how to calculate:

  • Minimum runtime
  • Average runtime
  • Maximum runtime

for submitted jobs in JD Edwards.



Why Analyze Submitted Job Performance?

Monitoring submitted job execution helps identify:

  • UBEs taking longer over time
  • Jobs affected by data growth
  • Batch queue contention
  • SQL or indexing issues
  • Specific versions causing delays
  • Opportunities for scheduling optimization

Examples:

  • Nightly jobs suddenly increasing from 5 minutes to 45 minutes
  • Payroll or invoice jobs slowing after upgrades
  • Reports running inconsistently across environments


Step 1 – Create Execution Time Table

Creating a separate table helps simplify analysis and improves reporting performance.

SQL Server


-- Create table from execution history

SELECT 
    JCJOBNBR,
    JCPID,
    JCVERS,
    JCSTDTIM,
    JCETDTIM,
    DATEDIFF(MINUTE, JCSTDTIM, JCETDTIM) AS EXECUTION_MINUTE
INTO SVM920.F986114_EXECUTION
FROM SVM920.F986114
WHERE JCENHV LIKE '%PD920%'
    AND JCJOBSTS='D'
    AND DATEDIFF(MINUTE, JCSTDTIM, JCETDTIM) > 0
ORDER BY JCPID, JCVERS, EXECUTION_MINUTE DESC;
 

Step 2 – Find Execution Time for Jobs

This query calculates runtime in minutes for each submitted job.

SQL Server Query

SELECT
    JCJOBNBR,
    JCPID,
    JCVERS,
    JCSTDTIM,
    JCETDTIM,
    DATEDIFF(MINUTE, JCSTDTIM, JCETDTIM) AS EXECUTION_MINUTE
FROM SVM920.F986114
WHERE JCENHV LIKE '%PD920%'
    AND JCJOBSTS = 'D'
    AND DATEDIFF(MINUTE, JCSTDTIM, JCETDTIM) > 0
ORDER BY JCPID, JCVERS, EXECUTION_MINUTE DESC;


AS400 / IBM i Query

SELECT
    JCJOBNBR,
    JCPID,
    JCVERS,
    JCSTDTIM,
    JCETDTIM,
    TIMESTAMPDIFF(4, CHAR(JCETDTIM - JCSTDTIM)) AS EXECUTION_MINUTE
FROM SVM920.F986114
WHERE JCENHV LIKE '%PD920%'
    AND JCJOBSTS = 'D'
    AND TIMESTAMPDIFF(4, CHAR(JCETDTIM - JCSTDTIM)) > 0
ORDER BY JCJOBNBR DESC, JCVERS, EXECUTION_MINUTE DESC;

Note: You can add a filter to search for a specific UBE, e.g. JCPID = 'Rxxxxx' AND JCVERS = 'XJDExxx'.

Step 3 – Find Minimum, Average, and Maximum Runtime

After collecting execution data, aggregate the results to identify runtime patterns.


SELECT
    JCPID,
    JCVERS,
    MIN(EXECUTION_MINUTE) AS MINIMUM,
    AVG(EXECUTION_MINUTE) AS AVERAGE,
    MAX(EXECUTION_MINUTE) AS MAXIMUM
FROM SVM920.F986114_EXECUTION
GROUP BY JCPID, JCVERS;

Note: JCJOBNBR is omitted from the SELECT list because it is not part of the GROUP BY; including it would make the query invalid.

Understanding the Results

Minimum Runtime

Shows the fastest execution recorded.

Useful for identifying:

  • Best-case performance
  • Ideal execution window
  • System baseline

Average Runtime

Shows typical execution duration.

Useful for:

  • Capacity planning
  • SLA validation
  • Batch scheduling

Maximum Runtime

Shows worst-case execution.

Helpful for finding:

  • Blocking issues
  • Data spikes
  • Lock contention
  • Resource bottlenecks

Example Use Cases

Identify Slow UBEs

Find reports consistently taking too long.

Example:

UBE        Average Runtime
R42565     2 min
R09801     45 min
R55XXX03   90 min

Detect Performance Degradation

Compare current month vs previous month runtimes.
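One simple way to quantify that comparison is the percent change in average runtime between the two periods. A small Python sketch (illustrative only; in practice the runtimes would come from the F986114_EXECUTION table built above):

```python
from statistics import mean

def runtime_change_pct(previous, current):
    """Percent change in average EXECUTION_MINUTE between two periods.
    Positive values indicate the job has slowed down."""
    prev_avg, curr_avg = mean(previous), mean(current)
    return round((curr_avg - prev_avg) * 100.0 / prev_avg, 1)

# A nightly job that used to average 5 minutes now averages 10:
print(runtime_change_pct([5, 6, 5, 4], [9, 10, 11]))   # 100.0
```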

Batch Queue Optimization

Move heavy jobs into separate queues.

Infrastructure Planning

Determine whether:

  • Additional kernel processes are needed
  • Database tuning is required
  • More CPU or memory is needed

Recommended Enhancements

You can further improve reporting by adding:

Additional Filters

AND JCUSER = 'JDE'
AND JCSTDTIM >= '2025-01-01'


Build Dashboards

Use:

  • Power BI
  • SQL Reporting Services
  • Grafana
  • Excel Pivot Reports

to visualize job execution trends.


Final Thoughts

Submitted job performance analysis is one of the most overlooked areas in JD Edwards administration.

Using the F986114 table, CNC administrators can quickly identify:

  • Slow-running reports
  • Runtime inconsistencies
  • Capacity issues
  • Scheduling bottlenecks
  • Growth-related performance degradation

Regular monitoring of submitted jobs helps maintain stable batch processing and improves overall EnterpriseOne system performance.