The gold rush for Cloud has begun. You could say many organisations were caught off guard when the COVID-19 pandemic hit the world hard. Cloud providers are now accelerating many existing cloud implementations. However, it is time you knew what you are moving into.
There are many Cloud providers now. Incidentally, you can also view this as the death of on-premise applications, because many vendors have shifted their development towards cloud applications. With so many choices, you need to pick wisely. Do not jump into cloud for the sake of it. The end of support for on-premise versions is usually announced early, so allow at least two to three years to plan and pick your cloud applications for any major move.
Manage your Risk
A neglected area when moving to cloud is risk management. There are many horror stories about expectation gaps in cloud moves. Do apply risk management to your cloud move. You should also be prepared to use multi-cloud options to replace your legacy applications. Another common risk area is containerisation of applications. Thus, you may want to pilot cloud first to explore risk areas and implement mitigation approaches.
The rush into Cloud can be hasty and a headache for many application owners. It is best to plan ahead and understand each cloud provider. Learn to mitigate your risks with a pilot Cloud implementation. This way, your journey into cloud will be a breeze.
In 2022, it is a good time to move your existing legacy solution design to a cloud solution. Most existing migrations focus on infrastructure. Although this lets you take advantage of cloud infrastructure, it does not utilise the cloud's full potential. Thus, it is time to refactor your application solution for cloud. These are some of the steps you should be starting with.
Legacy application designs are usually highly coupled. Examples include J2EE or web-tiered frameworks where the database and application logic reside on a single server. This makes it difficult to migrate to cloud because of dependencies. Decoupling removes and cleans up these dependencies, so you can move easily or change to another cloud design.
REST your Integration
The decoupling step will push you towards the use of REST APIs. It also helps you standardise all your integration points and connectivity. You can move your customisations out of the products and connect via REST API. This removes the coupling within the application. You will also find this step useful if you later decide to move to SaaS (Software as a Service).
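To make the idea concrete, here is a minimal sketch of moving a customisation behind a REST call instead of writing to the product's tables directly. The endpoint path, base URL and field names are purely hypothetical, not any specific product's API; the function only builds the request so you can see the shape of the integration point.

```python
import json
from urllib.parse import urljoin

def build_order_update_request(base_url: str, order_id: str, changes: dict) -> dict:
    """Build a REST request that replaces a direct database write.

    Instead of a customisation updating the product's tables, it calls
    a hypothetical /orders/{id} endpoint over HTTP.
    """
    return {
        "method": "PUT",
        "url": urljoin(base_url, f"orders/{order_id}"),
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(changes),
    }

request = build_order_update_request(
    "https://erp.example.com/api/", "SO-1001", {"status": "SHIPPED"}
)
print(request["url"])  # https://erp.example.com/api/orders/SO-1001
```

Because the customisation now depends only on the REST contract, the backing product can be swapped for a SaaS equivalent without touching the caller.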
There are two important steps to start planning if you intend to change to a cloud design. You must look into how you can decouple your existing applications. Designing around REST will then help you determine how easily you can move to SaaS. These steps will transition your legacy applications into a cloud design.
2021 was a quick year due to the COVID pandemic. There were so many strains of COVID that we got used to them; notable mentions are the Delta and Omicron variants. As we say goodbye to 2021, I look forward to a 2022 with fewer restrictions. It is also a year in which we can expect more “digital transformation”.
2021 was a year of COVID roller coaster because of the impact of COVID variants and vaccines. It was a cat-and-mouse game because vaccines are not fail-safe. The endemic mindset is still far off because of uncertainty. On the digital front, working from home remains the default, and IT demand is being spiked by the need for digitalisation. You could say that 2021 was a crossroads for digital transformation and transforming to Cloud. It will take a year or more for IT to fully align with business and be transformed.
Year 2022 will see many gaps created by digital transformation. As expected, many organisations started digital transformation without a holistic view from business. As in the ERP (Enterprise Resource Planning) era, digital transformation is viewed as an IT implementation. It is deja vu for many as we grapple with the disconnect between business and IT. Thus, I expect to see increased hiring of “digital transformers” to help bridge the gaps.
As I see the last of 2021, the pace of digital transformation is akin to that of COVID: uncertainty and risks are high because of many unknowns. However, it was a great year because it paves the way for 2022. I can see more exciting digitalisation ahead. It is up to organisations to grab the required resources with the mindset to bridge these gaps.
A good part of Cloud is high availability (HA). It should be hard to find downtime if you are using PaaS or SaaS, because HA is usually considered in the architecture design. Thus, this is a great push factor to move to Cloud. Traditionally, on-premise architecture must cater for the required licences for HA. Will you move to Cloud for the sake of HA?
HA will be the Norm
You may notice that HA is the norm for Cloud applications. It is not surprising that you should either upgrade your on-premise setup to HA or move to Cloud. In the past, the cost of HA was much higher and often unaffordable; it was usually done only for large global enterprises. Most of the time, only the production environment was HA because of the high cost of maintenance.
Time to HA on Cloud
If one of your objectives is HA, it is worth considering the cost savings of moving to Cloud. Many Cloud price lists do not explicitly state how much HA contributes to their cost. Thus, you need to compute the HA costs into your existing total cost of ownership before comparing it to the cloud cost. You will see that HA is one of the key justifications for moving to Cloud.
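The comparison above can be sketched as simple arithmetic. All figures below are hypothetical placeholders purely for illustration; the point is that the on-premise side must include the HA licence and the duplicated hardware before the comparison is fair.

```python
def on_prem_tco_with_ha(base_licence: int, ha_licence: int,
                        hardware: int, annual_support: int, years: int) -> int:
    """On-premise TCO: HA adds an extra licence and doubles the hardware."""
    return base_licence + ha_licence + hardware * 2 + annual_support * years

def cloud_tco(monthly_subscription: int, years: int) -> int:
    """Cloud TCO: the subscription already bundles HA into its price."""
    return monthly_subscription * 12 * years

# Hypothetical numbers over a 5-year horizon
on_prem = on_prem_tco_with_ha(100_000, 80_000, 40_000, 30_000, years=5)
cloud = cloud_tco(6_000, years=5)
print(on_prem, cloud)  # 410000 360000
```

Only after the HA licence and the second set of hardware are counted does the on-premise figure exceed the cloud subscription in this illustration.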
Building HA into your architecture used to be a costly ordeal. Now, many applications include HA as part of their product offerings. It is a ripe time for you to move to Cloud if HA is a critical component. The cost savings can be substantial because you can have HA across your development environments as well as production.
2022 will be a year of COVID normalisation. Similarly, the “clouded” space will see the emergence of key players. Organisations are expected to deploy applications on at least three major cloud platforms, aka multicloud. By then, teams are expected to be equipped with cloud skills that are generic across different cloud platforms. The clouded space will continue to grow during 2022 as major software shifts to cloud.
You will soon see digital transformation give way to cloud transformation. The stabilisation of the COVID pandemic will give rise to cloud transformation that works well across geographical locations. The increased usage of cloud creates demand for skills that can adapt and transform applications to cloud-based platforms.
Application time to market will continue to be driven by DevOps and Agile. The redesign of AMS (Application Managed Support) towards DevOps will continue throughout 2022. Organisations will continue to invest in in-house capabilities to obtain the optimum Agile application and team. We can expect a continuous struggle to eliminate traditional project management approaches in favour of Agile methods.
In 2022, it is near impossible to escape the “clouded” space. Cloud transformation is expected to dominate, with increasing use of in-house generic cloud architects. We will continue the battle for a fully Agile approach to align with cloud capabilities.
A major issue with upgrading and migrating to Cloud is backward compatibility. The process is like evolution: an entire species goes extinct unless it evolves. Take an OTM upgrade as an example: the nonsensical troublemaker Glog XML is never backward compatible. This creates a ripple effect on legacy systems, which have to conform to the change. While we lament this issue, what should be done?
Where to Change?
When the product does not support backward compatibility, this raises the question of where you are willing to invest in change. Usually, there are two areas where change can be made. The standard approach is to make the change on your end to conform to the upgraded Glog. But what happens when the change impact is huge and takes a long time? The other approach is to ask the vendor to raise a product change request to support backward compatibility.
You will be in a deadlock when both approaches struggle to meet the required change. Thankfully, with Cloud you can now easily create a transitional application that connects the systems and translates the changes for backward compatibility. This is a common interim solution to keep the change impact to a minimum. The transitional application is temporary in nature and allows migration or upgrade activities to proceed. It also helps you conduct the full change to the new Glog at a later stage.
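A transitional application of this kind is often just a thin translation layer. The sketch below shows the idea with a hypothetical element rename between schema versions; the element names are invented for illustration and are not the actual Glog XML schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical mapping: the new schema renamed elements that
# legacy systems still send under their old names.
RENAMES = {"ShipmentRefnum": "ShipmentReferenceNumber"}

def translate_legacy_xml(legacy_xml: str) -> str:
    """Rewrite legacy element names so the payload conforms to the new schema."""
    root = ET.fromstring(legacy_xml)
    for elem in root.iter():
        if elem.tag in RENAMES:
            elem.tag = RENAMES[elem.tag]
    return ET.tostring(root, encoding="unicode")

old = "<Shipment><ShipmentRefnum>S123</ShipmentRefnum></Shipment>"
new = translate_legacy_xml(old)
print(new)  # <Shipment><ShipmentReferenceNumber>S123</ShipmentReferenceNumber></Shipment>
```

Because the legacy senders keep emitting their old format, only this one adapter has to change when the upgraded schema arrives, which is what keeps the change impact to a minimum.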
In the bid to upgrade for cloud, many applications are not backward compatible with older versions. This creates a dilemma on where to manage the change. You may want to consider building a transitional application to minimise the change impact of backward incompatibility, as with OTM Glog.
Data synchronisation will be standard in Cloud because data resides on different platforms. The ideal synchronisation timing is always real time. Unfortunately, this can be costly and unrealistic, because data changes happen only at particular moments, so a full synchronisation is not efficient. How should we design for real-time synchronisation?
All data goes through a lifecycle from creation to modification and/or view-only. One way to sync your data is to understand user behavioural patterns. This helps you add triggers on user actions. The trigger to sync usually maintains modification consistency across the source and target systems. It is also wise to close off data and mark it as read-only; that way, costly modification handling is not needed for that data.
The most common synchronisation is a scheduled job. This brute-force approach helps ensure data integrity. However, it can be costly if data grows exponentially. It is usually more cost-effective to sync only the changes instead of performing a full data synchronisation. A targeted scheduled synchronisation is the most effective.
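A targeted scheduled sync usually means filtering on a last-modified timestamp. Here is a minimal sketch, assuming the source rows carry a `modified_at` column and the target is keyed by `id` (both assumptions for illustration).

```python
from datetime import datetime

def delta_sync(source_rows: list, target: dict, last_sync: datetime) -> datetime:
    """Copy only rows modified since the last run, instead of a full sync."""
    newest = last_sync
    for row in source_rows:
        if row["modified_at"] > last_sync:
            target[row["id"]] = row               # upsert only the changed row
            newest = max(newest, row["modified_at"])
    return newest                                 # checkpoint for the next run

source = [
    {"id": 1, "modified_at": datetime(2022, 1, 10)},
    {"id": 2, "modified_at": datetime(2022, 1, 20)},
]
target = {}
checkpoint = delta_sync(source, target, last_sync=datetime(2022, 1, 15))
print(len(target))  # 1  (only the row changed after the last sync is copied)
```

The returned checkpoint becomes the `last_sync` of the next scheduled run, so each job touches only the delta rather than the whole table.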
User behavioural patterns help determine the data synchronisation you require in Cloud. A full synchronisation is costly; you should add filters to synchronise efficiently.
It is my third year of having a cloud platform. Year 1 happened due to a major project. Year 2 was exploratory, and I started to use simple features like object storage. Only in the third year did I manage to explore and build an entire cloud architecture. There is no turning back, because the Cloud platform has given me the freedom to be Agile and configure applications quickly. This is a reflection on why I need a Cloud Platform.
The reality of self-service has finally reached the IT community, and that self-service comes in the form of a Cloud Platform. As product owners, we used to need a box to initiate any new product feature. Such a request takes a long time, from procurement to deployment of the application module. All these efforts can now be self-service on a cloud platform. You can spin up a compute instance within minutes. You can even set up an entire eCommerce architecture on a cloud platform.
Another favourite advantage of having a cloud platform is being fully agile. I am no longer constrained by dependencies like network, compute or storage; these are now easily available on a cloud platform. In other words, you can pursue agile innovations with high uncertainty. You can also apply RAD (Rapid Application Development) or DevOps to your changes because cloud is configurable.
The freedom of being agile and the ability to self-serve are the key reasons why I will continue to need a cloud platform. It is also rare now to find organisations that are still fully on-premise. I am excited that the next years will be spent migrating whatever is needed to a cloud platform, one way or another!
The standard REST methods in ORDS (Oracle REST Data Services) are GET, POST, PUT and DELETE. We usually start testing REST with GET because it is the easiest, mapping to a SELECT statement. POST is next, mapping to INSERT for new records. The complex PUT comes last, for any UPDATE process. DELETE is seldom exposed, to preserve data integrity. So, why should you leave PUT to the last? Because the modification process often requires exception flows and handling.
You should relate PUT to the modification process. REST places no constraints on modifying data unless you add the conditions yourself. There are two approaches to handle PUT. One is a trigger that allows the user to modify the data; an audit trail should then be handled at the source and/or target application if you want to keep track of the changes. The other is to set conditions in your PUT service to prevent unauthorised modification.
A key characteristic of PUT is the primary key (PK). You need the PK to amend the correct data. So, what happens if PUT cannot find the PK? There are two ways to handle the PUT response.
You can return a success response (200, or 201 if a new record was created) when PUT succeeds.
You can choose to INSERT (an upsert) if the PK cannot be found.
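The two options above can be sketched as a single handler. This is a minimal in-memory sketch, not ORDS itself: the table is a plain dict and the return values are HTTP-style status codes, with an `allow_insert` flag standing in for the upsert decision.

```python
def handle_put(table: dict, pk: str, payload: dict, allow_insert: bool = True) -> int:
    """Sketch of a PUT handler: update if the PK exists, otherwise
    either insert (upsert) or report not-found."""
    if pk in table:
        table[pk].update(payload)
        return 200            # existing row updated
    if allow_insert:
        table[pk] = dict(payload)
        return 201            # PK not found, new row created (upsert)
    return 404                # PK not found and insert not allowed

table = {"SO-1": {"status": "NEW"}}
print(handle_put(table, "SO-1", {"status": "SHIPPED"}))       # 200
print(handle_put(table, "SO-2", {"status": "NEW"}))           # 201
print(handle_put(table, "SO-3", {}, allow_insert=False))      # 404
```

Whether you pick the upsert branch or the 404 branch is exactly the design decision the PUT service forces you to make up front.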
PUT is the last and most complex REST service that you may want to expose. Handling PUT and its responses needs careful consideration of how you want to manage data modification. In some cases, you may choose to omit the PUT service to preserve data integrity, as with DELETE.
Oracle APEX REST can be a pain when it does not work as expected. Today was the day we delved deeper into REST to find out why the data source was not synchronised to the target table. It was another day of hair-pulling and lots of testing to figure out how synchronisation works in APEX.
Synchronisation is One Way
There is a misconception that synchronisation keeps the target table similar to the local table. That was our mindset while troubleshooting. However, we finally realised that synchronisation runs from the target table to the local table, not vice versa. You should not change the local table and expect it to sync back to your target table. The synchronisation types also show that it is one-way, with three options – Append, Merge and Replace.
Design for Two Way
This synchronisation approach means the local table should not be used as a transactional table. You therefore need to design your application for a two-way data transfer between your local table and the target table. One method is to direct REST POSTs to your target table and let APEX auto-sync back to your local table. Another method is to keep your transactions in tables separate from your local table. However, we will need to test these approaches in detail.
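The first method above amounts to a "write to target, read from local" pattern. Here is a minimal in-memory sketch of that idea; the dicts stand in for the remote system and the APEX local table, and the `sync` method is a stand-in for APEX's one-way synchronisation, not the actual mechanism.

```python
class TwoWayFacade:
    """Sketch: writes go straight to the target (a REST POST in a real
    APEX setup); the local table is a read-only copy refreshed by the
    one-way synchronisation."""

    def __init__(self):
        self.target = {}   # remote system of record
        self.local = {}    # APEX local table (synced copy)

    def write(self, pk, row):
        self.target[pk] = row            # never write the local table directly

    def sync(self):
        self.local = dict(self.target)   # stand-in for the one-way sync

    def read(self, pk):
        return self.local.get(pk)        # reads always hit the local copy

facade = TwoWayFacade()
facade.write("SO-1", {"status": "NEW"})
facade.sync()
print(facade.read("SO-1"))  # {'status': 'NEW'}
```

The discipline the pattern enforces is the one described above: the local table is never mutated directly, so the one-way sync can never be fighting your own writes.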
Synchronisation can be straightforward if it is direct table processing. If there is conditional logic and preprocessing, you will hit synchronisation issues because of the one-way sync from the target table to your local table. So, it is back to the drawing board for a two-way sync design.