You Are Not Learning with Dashboards

Three years ago, I embarked on a mission to design and deploy dashboards. The overarching vision was to use dashboards as a tool to enable business analytics and to learn how to improve business operations. Reflecting on it today, it is interesting to realise a key lesson from that mission. The lesson stems from the misconception that a dashboard will, by itself, allow users to learn and improve their business.

Users Not Ready for Dashboards

One major piece of feedback from users is that they are unable to use the dashboard because of its refresh timeliness. Hearing this remark makes one ponder why organisations should not view a tool like a dashboard as a magic pill to resolve business issues. It was also a reality check for me on the lengths users will go to find any excuse not to use a tool. After all, it is easy to blame a tool like a dashboard because it is passive and has no machine learning capability to retort. Users are the missing learning component needed to analyse the data and take corrective action.

Ugly Truths from Dashboards

If management queries the effectiveness of the dashboard, it shows management's ignorance of, and disconnect from, operations. The right question is to ask users what they have learnt from the dashboard. What else would you like to learn from the dashboard? Dashboards and data analytics reveal ugly truths about operations. This is why many resist adopting dashboards for the review of SOPs (Standard Operating Procedures). The typical excuse you will hear blames the dashboard for data timeliness, incorrect data or data errors. It is not surprising, because the truth leads to change, and change is something operations or quality teams hate to indulge in!

The key lesson from adopting dashboards is to acknowledge that they will not automatically make users learn. Users will choose to ignore the ugly truths from the dashboard to avoid change. You must ensure at least a monthly review of the dashboard to check what users have learnt and to improve the SOPs. The dashboard review is not a forum to complain about data timeliness and data errors.

Troubleshooting APEX REST Synchronisation

Oracle APEX REST can be a pain when it does not work as expected. Today was the day we delved deeper into REST to understand why the data source was not synchronising with the target table. It was another day of hair pulling and lots of testing to figure out how synchronisation works in APEX.

Synchronisation is One Way

There is a misconception that synchronisation keeps the target table in step with the local table. That was our mindset while troubleshooting. However, we finally realised that synchronisation flows to the local table, not vice versa. You should not change the local table and expect the changes to be synced back to your target table. The synchronisation settings also show that it is one way, with three types – Append, Merge and Replace.

Design for Two Way

This synchronisation approach means the local table should not be used as a transactional table. In turn, you need to design your application for a two-way data transfer between your local table and target table. One method is to direct REST POSTs to your target table and let APEX automatically sync back to your local table. Another method is to keep your transactions in a separate table from your local table. However, we will need to test these approaches in detail.
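As a rough sketch of the first method, assuming a hypothetical ORDS-style endpoint for the target table (the URL and payload shape below are illustrative, not taken from the APEX documentation):

```python
import json
import urllib.request

# Hypothetical REST endpoint of the *target* table -- adjust to your setup.
TARGET_URL = "https://example.com/ords/demo/orders/"

def build_payload(order: dict) -> bytes:
    """Serialise a transaction for the POST body."""
    return json.dumps(order).encode("utf-8")

def post_to_target(order: dict) -> int:
    """POST the transaction to the target table, not the local table.

    The APEX REST data source synchronisation will later pull the row
    back into the local table, so the local copy stays read-only.
    """
    req = urllib.request.Request(
        TARGET_URL,
        data=build_payload(order),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

The point of the sketch is the direction of data flow: writes go to the target via REST, while the local table is only ever filled by the sync job.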

Synchronisation can be straightforward if it is direct table processing. If there is conditional logic or preprocessing, you will hit synchronisation issues, because of the one-way sync from the target table to your local table. So, it is back to the drawing board to design a two-way sync.

REST in Peace

Recently, we have been exploring REST APIs a lot. Although REST is quick and simple, there are challenges to take note of while using it. This is because we are used to field types such as dates. There are also things to take note of when handling the different REST methods such as GET or POST.

When REST doesn’t work!

REST services can be challenging to troubleshoot when they are not working. One of my favourite tools is Postman, which allows you to test REST endpoints quickly. The most common errors of REST endpoints encountered in Postman are firewalls, parameters and typos. Half the battle is won if you can get your REST call working in Postman.

The Other Part of REST

If your REST call works in Postman but not in your application, then the issue likely lies in your application's REST configuration. The troubleshooting can be broken into the following parts:

  • Setup of REST endpoints
  • Configure the right parameters
  • Unit test of REST methods
  • Check your logs

The failure points of REST can be a real test of your patience. You will need to trace each step of the REST methods to determine the root cause. Most of the time, errors are due to configuration, parameters or typos. Stay strong and hang in there!
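To make the tracing concrete, here is a minimal sketch that maps common HTTP status codes to the likely culprits mentioned above (the mapping is my own rule of thumb, not an official list):

```python
def diagnose(status: int) -> str:
    """Map an HTTP status code to the most likely REST culprit."""
    if status in (401, 403):
        return "blocked: check credentials or firewall"
    if status == 404:
        return "not found: check the endpoint URL for typos"
    if status == 400:
        return "bad request: check the parameters"
    if 500 <= status < 600:
        return "server error: check the server-side logs"
    if 200 <= status < 300:
        return "ok"
    return "unexpected: trace the request step by step"
```

Running your application's failed call through such a checklist, e.g. `diagnose(404)`, often points straight at a configuration or typo issue.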

Why PM is Outdated

In the world of Agile, there is no PM (Project Manager) role. I have questioned the relevance of the PM in Agile projects. With Agile gaining ground, it is time to conclude that the PM role is outdated. These are the reasons why you can run your Agile project without a PM. To be specific, it is time to put a stop to freeloaders working as PMs on Agile projects.

Why Agile Makes the PM Obsolete

In a waterfall project, a PM is required to handle project management, e.g. schedule and scope. There are a lot of trackers for schedules, gaps and issue logs. In contrast, Agile focuses on self-empowerment and self-organisation. Based on these key principles, there is no need for a PM in this framework. You will not need a PM to chase after or track deliverables. As an analogy, waterfall is like a baby needing constant minding from a guardian, whereas Agile places collective accountability on the Agile team.

Adaptability Outpaces the PM

A PM mainly functions as an observer of the project. Monitoring and tracking do not provide the adaptability required by Agile. Often, the PM is outpaced by the Agile team, and the additional communication layer the PM adds becomes redundant. Users or customers can work in close collaboration with the Agile team. This is a key reason why Agile does not require a PM. If you are running an Agile project, expect to be frustrated by the placement of a PM.

It is time to realise that you no longer require a PM for Agile projects. We should not remain in self-denial, keeping this role for historical reasons. An Agile team is highly independent, able to get things done and adapt quickly with users or customers.

P.S. Self-declared PMs continue to exist during this digital transformation period. It is time to wake up and transform the PM role as well!

OTM Upgrading Risks

Upgrading OTM (Oracle Transportation Management) has its own challenges and risks. A major upgrade or major version increase will typically take more than six months, so you should have standard risk mitigation action plans. The major risks from upgrades come from both internal and external factors. These are some common ones to take note of.

Know your Application

Upgrade errors usually come from customisations made to the product. Such errors usually need the product team to analyse the root cause and provide a resolution plan. As part of your risk mitigation plan, you should always remove or isolate customisations prior to upgrades. Another risk comes from potential changes to your customised modules. It is advisable to develop configurable settings instead of hardcoding them into your application.

Know your Architecture

Upgrading OTM on premise is much more difficult to manage than on the Cloud. This is because your infrastructure is likely non-configurable and needs to be changed manually, one item at a time. The complex setup of your OTM architecture should be reviewed prior to upgrading to lower the risks. This makes you aware of the firewall rules or SSL certificates to be deployed when you conduct the upgrade.

The common risks for OTM upgrades are customised features and architecture setup. Customisation impacts patches and upgrade scripts, and often leads to specific architecture requirements and firewall rules for your on-premise applications. Although these risks are not new, they are time consuming and affect the upgrade duration. Thus, you should always be proactive in mitigating them.

REST and Dates Type

Database field types like numbers or dates are no longer a given for REST services, because REST transmissions are often in text format. Thus, you will need to consider your database design while handling REST. Should you choose a date type or retain the string format for REST?

Selecting the Field Type

The key factor in selecting the field type is how your applications will consume the information. By default, storing data from REST endpoints as strings is fast and simple. Applications often use these tables like staging tables. Thus, your initial design should often default the data types to string for easier processing. This also allows you to consume the data without worrying about the data format. However, there are scenarios where your applications will require a specific field type to utilise features like dates and calendars.

Field Type as Data Integrity

The issue with using a string type is that you can consume all kinds of junk data. This could be costly if the source system does not enforce any field definition checks. A proper field type defines what data you expect, and helps you reduce and reject data that does not conform. As this is stricter, you will require mapping checks between the source and target systems of the REST endpoints.
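A small sketch of such a check, assuming the REST payload carries dates as ISO `YYYY-MM-DD` strings (the format is an assumption; adjust it to your endpoint):

```python
from datetime import date, datetime

def parse_rest_date(value: str) -> date:
    """Convert a REST string into a proper date, rejecting junk.

    A plain string column would accept '2021-13-45' or 'N/A' silently;
    converting to a date type makes bad data fail loudly at the boundary.
    """
    return datetime.strptime(value, "%Y-%m-%d").date()

def is_valid_rest_date(value: str) -> bool:
    """Mapping check: flag rows that should be rejected at the boundary."""
    try:
        parse_rest_date(value)
        return True
    except ValueError:
        return False
```

Validating at the REST boundary keeps the staging-table convenience of strings while still enforcing the field type before the data reaches your application tables.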

Many applications are new to REST and the usage of field types. Dates are one such example, where you may debate whether to use a date type or retain a string. The choice will depend on your applications and the efficiency of the REST endpoints.

Real Time System is Costly

You will find many instances where users ask for real time in their requirements. Such requirements must always be checked carefully, because a real time design is costly. How do you determine the need for real time? These are some standard questions you can ask before you embark on a real time design.

Mission Critical

A common way to check for real time is the type of system you will be designing. A mission critical system like a car application needs a real time design because your car requires instant feedback on the road. Another such system is an airline control tower, where you manage incoming and outgoing aircraft. Such systems need a real time infrastructure compared to standard applications, with a design that is usually highly available, resilient and fault tolerant.

I want Real Time for Free

The challenge comes when your users want real time data refresh in your system. The correct term to use is “near real time”, which means you can achieve data refresh with a tolerated lag of five minutes or more. Another term is “live streaming”, where you may design a dedicated pipeline for data refresh. Such a design incurs higher cost and should be reserved for your premium customers. In other words, real time requirements should be segmented properly in your product offerings, with a cost attached.
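The difference can be captured in a simple scheduling check; the five-minute tolerance below is just the figure mentioned above, not a standard:

```python
NEAR_REAL_TIME_LAG_S = 300.0  # tolerated lag: five minutes

def needs_refresh(last_refresh: float, now: float,
                  tolerance_s: float = NEAR_REAL_TIME_LAG_S) -> bool:
    """'Near real time': refresh only once the lag exceeds the tolerance.

    A true real time system has no such tolerance -- every event must be
    processed immediately, which is what drives the cost up.
    """
    return (now - last_refresh) >= tolerance_s
```

A batch or polling job built around such a check is far cheaper than a dedicated streaming pipeline, which is why pinning down the acceptable lag early matters.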

Real time requirements greatly impact your design and cost. Mission critical systems are real time from backbone to front end. For other applications, real time could be a specific feature. Always take note that the cost of implementing real time is higher than standard requirements, and consider this additional cost when you market your products.

Test Grouping Tips

With the increase in Cloud migration and upgrading activities, testing efforts have grown exponentially. Full testing is ideal but often unrealistic and costly. Thus, you will need to strike a balance in your testing coverage. One method is to group your test cases to reduce the testing duration. However, you need to prepare for this approach.

Preparing for Test Group

The purpose of test grouping is to reduce cost and effort. You can also maximise testing coverage with minimal test cases. However, you will require a deep understanding of your test cases before you can group them properly. There are many approaches to grouping test cases. A common way is to group by application feature. Other methods involve functional or customer groupings. You can also group by user base or location. While there is no right or wrong, the key is to arrive at the most efficient grouping.
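As an illustration of grouping by application feature (the test inventory below is made up):

```python
from collections import defaultdict

def group_tests(cases, key="feature"):
    """Group test cases by a chosen attribute, e.g. feature or customer."""
    groups = defaultdict(list)
    for case in cases:
        groups[case[key]].append(case["name"])
    return dict(groups)

# Hypothetical test inventory.
cases = [
    {"name": "create order", "feature": "orders"},
    {"name": "cancel order", "feature": "orders"},
    {"name": "print invoice", "feature": "billing"},
]
```

Swapping the `key` argument lets you try functional, customer or location groupings on the same inventory and compare which cut gives the best coverage per test run.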

Agile your Test Grouping

There is a misconception that test grouping is fixed once testing is ongoing. This is true for many waterfall projects. On the other hand, you are encouraged to iterate on your test groups if you are running Agile. The testing process will gauge the effectiveness of your test grouping, and it is important to update the grouping to maximise your test coverage. As opposed to the waterfall mindset, this process will lead you to a better testing approach for your next Agile sprint.

Testing is most efficient and effective when you group your test cases properly. Preparation is key to starting your test groups on the right track. You should also be prepared to amend the groups accordingly for your next Agile sprint.

Language Pack vs Translation Service

Many Cloud services have provided language packs and translation service options in their product offerings. I managed to test both in the ODA (Oracle Digital Assistant) Chatbot. Ideally, a translation service alone would simplify maintenance and remove the need for language packs. In reality, languages are complex, and on-the-fly translation often turns into gibberish.

Why Language Packs?

These are the reasons why you still need language packs for your key texts:

  • Accurate translated texts.
  • Avoid ambiguity in translation.
  • Support acronyms and puns.
  • Able to translate industry terms correctly.
  • Faster translation for fixed text.

Why Translation Service?

In a Chatbot, a translation service is a must-have besides language packs. Other types of applications may not need a translation service. Will translation services make language packs obsolete? These are some reasons why you must enable your translation service:

  • Helps to increase language coverage.
  • Prevents unnecessary setup.
  • Aids the Chatbot with NLP (Natural Language Processing).

For the moment, it seems you have to support both language packs and a translation service. Language packs are more accurate, but cannot cater to unknown wordings the way a translation service can.
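Supporting both can be as simple as a lookup with a fallback; the pack contents and the translation stub below are purely illustrative, not the ODA API:

```python
# Hypothetical language pack: fixed, accurate translations for key texts.
LANGUAGE_PACKS = {
    "fr": {"greeting": "Bonjour", "goodbye": "Au revoir"},
}

def machine_translate(text_id: str, lang: str) -> str:
    """Stub standing in for an on-the-fly translation service call."""
    return f"[auto-{lang}] {text_id}"

def localise(text_id: str, lang: str) -> str:
    """Prefer the language pack; fall back to the translation service."""
    pack = LANGUAGE_PACKS.get(lang, {})
    if text_id in pack:
        return pack[text_id]
    return machine_translate(text_id, lang)
```

Key texts, acronyms and industry terms stay accurate via the pack, while unknown wordings and extra languages still get covered by the service.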

End of COP26 Deal

The end of COP26 deal-making saw a clear division of views on sustainability. Targets are set but lack the commitment of rich nations. Overall, this conference is a small step towards saving a dying Earth. What are the good things that came out of COP26?

COP26 Outcomes

The Glasgow Climate Pact was the outcome of COP26. The pact had its ups and downs, and some claimed it is a milder version of what was expected. The key outcomes are:

  • Reduce carbon or achieve net zero carbon.
  • Limit temperature rise to 1.5°C.

A Mild COP26

Resistance remains from heavy-carbon nations. There were talks that zero carbon was the desired goal, but this was rejected by heavy-carbon countries. The interventions from these countries prevented a more aggressive target.

These climate conferences are like a tug of war, with push and pull factors in committing to sustainability. We will see this “wayang” for years to come because coal remains a cheaper energy source. Can Earth be patient enough to wait for full commitment? Only time will tell.