APEX Debug 101

Today, I finally figured out how to work with APEX debug. It was not straightforward due to some fine print that we need to follow to get it working. There is a pretty good reference that I used to get started. However, I only managed to make it work after a few hours of tweaking and a single line of essential code!

APEX_DEBUG

APEX has a set of debug utility functions. These help you instrument your PL/SQL code to log your errors and messages. The most important one I find is the ENABLE procedure and its level setting. Without this debug function, you will not be able to see your logging appear.

APEX_DEBUG.ENABLE (
p_level IN T_LOG_LEVEL DEFAULT C_LOG_LEVEL_INFO );

Another key reason why I was not able to see my logging appear was the p_level setting. You need to read the fine print below carefully.

For example, setting p_level to 2 logs any message at level 1 and 2.

Initially, I did not think much of the level, as this is usually not something I worry about with DBMS_OUTPUT. I enabled p_level at 2. Then, I kept wondering why my apex_debug messages were not showing. They only appeared once I set p_level to 4. This means you can only see apex.debug messages at level 4 or above. The level constants are not purely a criticality scale; they also carry a secondary meaning, indicating the type of debug log.

c_log_level_error constant t_log_level := 1; -- critical error
c_log_level_warn constant t_log_level := 2; -- less critical error
c_log_level_info constant t_log_level := 4; -- default level if debugging is enabled (for example, used by apex_application.debug)
c_log_level_app_enter constant t_log_level := 5; -- application: messages when procedures/functions are entered
c_log_level_app_trace constant t_log_level := 6; -- application: other messages within procedures/functions
c_log_level_engine_enter constant t_log_level := 8; -- Application Express engine: messages when procedures/functions are entered
c_log_level_engine_trace constant t_log_level := 9; -- Application Express engine: other messages within procedures/functions
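
Putting it together, here is a minimal sketch of how the levels play out in a PL/SQL block, assuming it runs inside an APEX session (the :P1_ORDER_ID page item is just a placeholder):

apex_debug.enable(p_level => apex_debug.c_log_level_info);  -- level 4

apex_debug.info('Processing order %s', :P1_ORDER_ID);   -- visible at level 4 and above
apex_debug.error('Something went wrong: %s', sqlerrm);  -- visible at any level

apex_debug.disable;  -- remember to switch it off again, especially for production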

APEX has a cool utility in APEX_DEBUG. However, you need to enable at least level 4 to view meaningful logs from your PL/SQL. I also notice that logging at level 9 slows down your application a lot. Do remember to turn off the logs for production. With logs, we are back in business troubleshooting APEX!

APEX Interactive Grid and REST

Oracle APEX Interactive Grid is a cool feature. You can deploy CRUD (Create, Read, Update, Delete) in a matter of minutes. Unfortunately, it does not work well with an out-of-the-box REST data source. We realised this after a week of troubleshooting why our REST endpoints kept failing.

REST Enabled

I look forward to products with REST-enabled capabilities. In the Cloud world, it is common to find data sources with various REST endpoints across different cloud platforms and/or on-premises data sources. Thus, it is surprising to see an Interactive Grid that is not fully REST enabled.

Custom Coding Required

You will need to REST-enable the Interactive Grid with PL/SQL coding. I found a step-by-step blog with an example of how to do so. However, I am still trying to get this to work as I have faced a variety of issues. In summary, the solution is to use custom PL/SQL code to invoke the POST, PUT or DELETE REST methods, along the lines of the sketch below.
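
As a rough illustration only (not the blog's exact solution), an Interactive Grid row-level process could push each submitted row to the endpoint with APEX_WEB_SERVICE; the URL and the :ORDER_ID and :STATUS columns are placeholders:

-- runs once per submitted row; APEX$ROW_STATUS is C for created rows, U for updated
declare
  l_response clob;
begin
  l_response := apex_web_service.make_rest_request(
    p_url         => 'https://example.com/ords/demo/orders/' || :ORDER_ID,
    p_http_method => case when :APEX$ROW_STATUS = 'C' then 'POST' else 'PUT' end,
    p_body        => '{"order_id":"' || :ORDER_ID || '","status":"' || :STATUS || '"}'
  );
  apex_debug.info('REST response: %s', l_response);
end;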

The challenge with low-code products like APEX is understanding the constraints of each feature. You can only learn this through extensive usage for your own use cases. In this case, we realised the Interactive Grid is not fully enabled for REST data sources. I will now wish myself luck getting it working with custom PL/SQL. Fingers crossed!

PUT to REST

The standard REST methods in ORDS (Oracle REST Data Services) are GET, POST, PUT and DELETE. We usually start testing REST with GET because it is the easiest, mapping to a SELECT statement. POST comes next, for INSERT of new records. The more complex PUT is last, for any UPDATE process. DELETE is seldom provided, to preserve data integrity. So, why should you leave PUT to the last? This is because the modification process often requires exception flow and handling.

Handling PUT

You should relate PUT to the modification process. REST does not impose constraints on modifying data unless you add the conditions yourself. There are two approaches to handle PUT. One way is a trigger that allows the user to modify the data; the audit trail should then be handled at the source and/or target application if you want to keep track of the changes. Another way is to set conditions in your PUT service to prevent unauthorised modification.

PUT Response

A key characteristic of PUT is the primary key (PK). You will need the PK to amend the correct data. So, what happens if PUT cannot find the PK? There are two ways to handle the PUT response, sketched after the list below.

  1. You can return a valid response (for example, a 201 status) if the PUT is successful.
  2. You can choose to INSERT instead if the PK cannot be found.
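
A hedged sketch of an ORDS PUT handler illustrating both options; the table, columns and body binds are placeholders, and :status_code is assumed to be the ORDS implicit parameter for the HTTP response status:

begin
  update demo_orders
     set status = :status
   where order_id = :id;

  if sql%rowcount = 0 then
    -- option 2: insert ("upsert") when the PK cannot be found
    insert into demo_orders (order_id, status) values (:id, :status);
    :status_code := 201;  -- created
  else
    :status_code := 200;  -- updated
  end if;
end;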

PUT is the last and most complex REST service that you may want to expose. Handling PUT and its responses needs careful consideration of how you want to manage data modification. In some cases, you may choose to omit the PUT service to preserve data integrity, just like DELETE.

You are not Learning with Dashboard

Three years ago, I embarked on a mission to design and deploy dashboards. The overarching vision was to use the dashboard as a tool to enable business analytics and to learn how to improve business operations. Reflecting on it today, it is interesting to realise a key lesson from my mission. This lesson stems from the misconception that a dashboard will allow users to learn and improve their business.

Users not Ready for Dashboard

One major piece of feedback from users is that they are unable to use the dashboard because of the timeliness of its refresh. Hearing this remark makes one ponder why an organisation should not view a tool like a dashboard as a magic pill to resolve business issues. It was also a reality check for me on how users will find any excuse not to use a tool. After all, it is always easy to blame a tool like a dashboard because it is passive and has no machine learning capabilities to retort. Users are the missing learning component needed to analyse the data and perform corrective actions.

Ugly Truths with Dashboard

If management queries the effectiveness of the dashboard, it shows management's ignorance of and disconnect from operations. The right question is to ask users what they have learnt from the dashboard. What else would you like to learn from the dashboard? Dashboards and data analytics reveal ugly truths about operations. This is why many resist the adoption of dashboards for the review of SOPs (Standard Operating Procedures). The typical excuse you will hear blames the dashboard for data timeliness, incorrect data or data errors. It is not surprising, because the truth leads to change! And change is something that operations or quality teams hate to indulge in!

The key lesson from adopting dashboards is to acknowledge that a dashboard does not automatically mean users will learn from it. Users will choose to ignore the ugly truths from the dashboard to avoid change. You must run at least a monthly review of the dashboard to check what users have learnt and to improve the SOP. The dashboard is not a forum to complain about data timeliness and data errors.

Troubleshooting APEX REST Synchronisation

Oracle APEX REST can be a pain when it is not working as expected. Today was the day we delved deeper into REST to understand why the data source was not synchronised to the target table. It was another day of hair pulling and lots of testing to figure out how synchronisation is supposed to work in APEX.

Synchronisation is One Way

There is a misconception that synchronisation will keep the target table in line with the local table. This was the mindset we had while troubleshooting. However, we finally realised that synchronisation flows to the local table, not the other way around. You should not change the local table and expect it to be synched back to your target table. The synchronisation types also show that it is one way, with three options: Append, Merge and Replace.

Design for Two Way

The synchronisation approach means the local table probably should not be used as a transactional table. This means you need to design your application for a two-way data transfer between your local table and the target table. One method is to direct REST POST calls to your target table and let APEX auto-synch back to your local table, as sketched below. Another method is to keep your transactions separate from your local table. However, we will need to test these approaches in detail.
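
A minimal sketch of the first method, assuming APEX_WEB_SERVICE and a configured REST data source; the URL and :P1_ORDER_ID item are placeholders:

declare
  l_response clob;
begin
  -- writes go straight to the target system through its REST endpoint
  l_response := apex_web_service.make_rest_request(
    p_url         => 'https://example.com/ords/demo/orders/',
    p_http_method => 'POST',
    p_body        => '{"order_id":"' || :P1_ORDER_ID || '","status":"NEW"}'
  );
  -- reads stay on the local table, which the one-way synchronisation
  -- (target to local) refreshes on its configured schedule
end;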

Synchronisation can be straightforward if it is direct table processing. If there is conditional logic and preprocessing, you will hit synchronisation issues. This is because of the one-way synch from the target table to your local table. So, it is back to the drawing board to design a two-way synch.

REST in Peace

Recently, we have been exploring a lot of REST APIs. Although REST is quick and simple, there are challenges to take note of while using it. This is because we are used to field types such as dates. There are also things to take note of while handling the different REST methods like GET or POST.

When REST doesn’t work!

REST services can be challenging to troubleshoot when they are not working. One of my favourite tools is Postman, which allows you to test REST endpoints quickly. The most common errors of REST endpoints encountered in Postman are firewalls, parameters and typos. Half the battle is won if you can get your REST working in Postman.

The Other Part of REST

If your REST is working in Postman but not in your application, then the issue likely lies in your application's configuration for REST. The way to troubleshoot can be broken into the following parts, with a unit-test sketch after the list.

  • Setup of REST endpoints
  • Configure the right parameters
  • Unit test of REST methods
  • Check your logs
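
For the unit-test and log-check steps, here is a minimal sketch (the URL is a placeholder), assuming network ACLs are in place and APEX_WEB_SERVICE is callable from your session:

declare
  l_response clob;
begin
  l_response := apex_web_service.make_rest_request(
    p_url         => 'https://example.com/ords/demo/orders/',
    p_http_method => 'GET'
  );
  -- check the HTTP status and payload before wiring the call into the application
  dbms_output.put_line('HTTP status: ' || apex_web_service.g_status_code);
  dbms_output.put_line(substr(l_response, 1, 500));
end;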

The failure points for REST can be a real test of your patience. You will need to trace each step of the REST methods. This will help you determine the root cause. Most of the time, errors are due to configuration, parameters or typos. Stay strong and hang in there!

Why PM is Outdated

In the world of Agile, there is no PM (Project Manager) role. I have questioned the relevance of the PM in Agile projects. With Agile gaining ground, it is time to conclude that the PM role is outdated. These are the reasons why you can run your Agile project without a PM. To be specific, it is time to put a stop to freeloaders working as PMs on Agile projects.

Why Agile Makes PM Obsolete

In a waterfall project, a PM is required to work on project management, e.g. schedule and scope. There are a lot of trackers to track schedules, gaps or issue logs. In contrast, Agile focuses on self-empowerment and self-organisation. Based on these key principles, there is no need for a PM in this framework. You will not need a PM to chase or track deliverables. As an analogy, waterfall is like a baby needing constant minding from a guardian, whereas Agile places collective accountability on the Agile team.

Adaptability Outpaces PM

A PM mainly functions as an observer of the project. Monitoring and tracking do not provide the adaptability required by Agile. Often, the PM is outpaced by the Agile team, and the additional communication layer of the PM becomes redundant. Users or customers can work in close collaboration with the Agile team. This is a key reason why Agile does not require a PM. If you are running an Agile project, do expect to be frustrated by the placement of a PM.

It is time to realise that you no longer require a PM for Agile projects. We should not continue in self-denial mode, keeping this role for historical reasons. The Agile team is highly independent, able to get things done and adapt quickly with users or customers.

P.S. Self-declared PMs continue to exist during the digital transformation period. It is time to wake up and transform the PM role as well!

OTM Upgrading Risks

OTM (Oracle Transportation Management) upgrades have their own challenges and risks. Thus, a major upgrade or major version increase will typically take more than six months or so. You should have standard risk mitigation action plans. The major risks from upgrades come from both internal and external factors. These are some common ones you can take note of.

Know your Application

Upgrade errors usually come from customisations that have been done to the product. Such errors usually need the product team to analyse the root cause and provide a resolution plan. You should always remove or isolate customisations prior to upgrades as part of your risk mitigation plan. Another risk comes from potential changes to your customised modules. It is advisable to develop configurable settings instead of hardcoding them in your application.

Know your Architecture

The impact of an OTM upgrade on premises is much more difficult to manage than on the Cloud. This is because your infrastructure is likely non-configurable and needs to be changed manually, one piece at a time. The complex setup of your OTM architecture should be reviewed prior to upgrading to lower the risks. This makes you aware of the firewall rules or SSL certs to be deployed when you conduct the upgrade.

The common risks for OTM upgrades are customised features and architecture setup. Customisation impacts patches and upgrade scripts, and often leads to specific architecture requirements and firewall work with your on-premises applications. Although these risks are not new, they are time consuming and impact the upgrade duration. Thus, you should always be proactive in mitigating the risks.

REST and Dates Type

Database field types like numbers or dates do not carry over directly to REST services. This is because REST transmissions are often in text format. Thus, you will need to consider your database design while handling REST. Should you choose a date type or retain the string format for REST?

Selecting the Field Type

The key factor in selecting the field type is how your applications will consume the information. By default, storing data from REST endpoints as strings is fast and simple. Applications often use these tables like staging tables. Thus, your initial design can default the data types to string for easier processing. This also allows you to consume the data without worrying about the data format. However, there are scenarios where your applications will require a specific field type to utilise features like dates and calendars.

Field Type as Data Integrity

The issue with the string type is that you can consume all kinds of junk data. This could be costly if the source system does not enforce any field definition checks. A proper field type helps to determine what kind of data you are expecting, and helps to reject data that does not conform. As this is stricter, you will require mapping checks between the source and target systems of the REST endpoints, as sketched below.
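
A hedged sketch of the stricter approach: accept the value as a string from the REST payload, then enforce the date type when loading it, rejecting values that do not conform. The literal value and format mask are just placeholders:

declare
  l_raw_date varchar2(30) := '2021-07-15';  -- value as received from the REST payload
  l_order_dt date;
begin
  begin
    l_order_dt := to_date(l_raw_date, 'YYYY-MM-DD');
  exception
    when others then
      -- junk data: reject it (or route it to an error table) instead of loading it
      raise_application_error(-20001, 'Invalid date from REST payload: ' || l_raw_date);
  end;
  dbms_output.put_line('Accepted date: ' || to_char(l_order_dt, 'DD-MON-YYYY'));
end;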

Many applications are new to REST and the usage of field types. One such example is dates, where you may debate whether to use a date type or retain it as a string. The selection will depend on your applications and the efficiency of the REST endpoints.

Real Time System is Costly

You will find many instances where users ask for real time in the requirements. Such requirements must always be checked carefully, for many reasons. This is because real-time design is costly. How do you determine the need for real time? These are some standard questions you can ask before you embark on a real-time design.

Mission Critical

A common way to check for real time is the type of system you will be designing. A mission-critical system like a car application needs real-time design because your car requires instant feedback on the road. Another such system is an airline control tower, where you will be managing incoming and outgoing aircraft. Such systems need a real-time infrastructure compared to standard applications. The design is usually highly available, resilient and fault tolerant.

I want Real Time for Free

The challenge comes when your users want real-time data refresh in your system. The correct term that you need to use is “near real time”. This means you can achieve data refresh with a tolerated lag of five minutes or more. Another term is “live streaming”, where you may design a dedicated pipeline for data refresh. Such a design will incur higher cost and should be reserved for your premier customers. In other words, real-time requirements should be segmented properly in your product offerings, with a cost attached.

Real-time requirements will greatly impact your design and cost. Mission-critical systems are real time from backbone to front end. For other applications, real time could be a specific feature. You must always take note that the cost of implementing real time is higher than for standard requirements. Thus, you must factor in this additional cost when you market your products.