Expected Benefits of Process Automation

Navigating the Automation Roadmap Blog Series

Are the expected benefits of process automation too high?

If you follow the RPA market, you will know that advisory firms have guided Robotic Process Automation (RPA) service providers to shift from RPA toward more product-based “Hyperautomation” offerings to accelerate intelligent process automation deployment.  While this shift is expected to emphasize outcomes over the RPA journey itself, I cannot help asking whether either approach is really achieving the desired benefits.  Put another way, are the expected benefits of process automation too high?

Why does the need for automation arise?

Before addressing the desired benefits, please allow me to set the context for why the need for automation arises (generally speaking).  When a company implements an enterprise application, the intent is to support the business transaction.  Inherent to the software is automation that enables efficient and effective processing of the business transaction.  The result is that process teams can focus on higher-value tasks because the application takes over the lower-value ones.

Over time, the application’s ability to stay aligned with the business erodes, so business teams absorb the system deficiencies and compensate with manual work-arounds and stop-gaps.  Before anyone realizes it, the business teams are largely supporting the work-arounds and manual work, leaving minimal time for the higher-value tasks.  As the business continues to evolve, business leaders and IT engage in discussions to amend or add onto the enterprise system.  Enhancements typically carry a 6 to 9 month development cycle and a price tag in the hundreds of thousands of dollars, which becomes difficult to justify compared to the labor expense (especially if the work was moved off-shore).  The result is that manual work becomes the status quo, to be addressed at some point in the future.

This is where automation enters the picture: automating low-value, manual, repeatable tasks is the sweet spot for RPA, and, more importantly, doing so shifts the status quo back toward the business focusing on high-value tasks.

Defining benefits

Every organization will have a different approach to tracking benefits, but savings often include: 1) Reduced Labor Costs, 2) Seasonal Coverage Costs, 3) Overtime Elimination, 4) Higher Value Time Replacement, 5) Improved Quality and Regulatory Compliance, 6) Employee Satisfaction, and 7) Cost Avoidance – Future Salaries.  Some of these benefits can be measured and tracked over time more easily than others, but they are the reasons why processes should (or should not) be automated.  Assuming you can assign annualized dollar amounts to these benefits, the cost savings calculation is simply:  {Business Process Savings}  –  {Development Costs}  –  {Attributed Operating Expenses}  =  Net Cost Savings.  Organizations will adjust this basic equation to suit their needs and ability to track, but it provides a basis for understanding the costs and benefits of automating a business process.
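To make the arithmetic concrete, the calculation can be sketched in a few lines of Python. All dollar figures below are purely hypothetical, and the equation is arranged so that a positive result means the automation pays for itself on an annualized basis:

```python
# Illustrative net-savings calculation for a candidate automation.
# All figures are hypothetical annualized dollar amounts.

def net_cost_savings(development_costs, operating_expenses, process_savings):
    """Net annual savings: business process savings minus the
    development and attributed operating costs of the automation."""
    return process_savings - (development_costs + operating_expenses)

# Hypothetical annualized values for the benefit categories listed above.
benefits = {
    "reduced_labor_costs": 120_000,
    "seasonal_coverage": 15_000,
    "overtime_elimination": 10_000,
    "higher_value_time_replacement": 40_000,
    "quality_and_compliance": 20_000,
}

total_savings = sum(benefits.values())  # 205,000 in this example
savings = net_cost_savings(
    development_costs=80_000,
    operating_expenses=25_000,
    process_savings=total_savings,
)
print(savings)  # 100000
```

Note that category #4, Higher Value Time Replacement, appears as a line item alongside the hard labor savings; leaving it out is exactly how the refocusing benefit discussed below gets lost.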

Clarity on your automation investment returns

Business leadership typically makes automation decisions based on the prospective ROI of an automation, calculated from the as-is state of the process.  Unfortunately, we often see automation decisions made without consideration for the high-value work that was rushed and diminished because the manual, low-value work had overwhelmed the team.  Automating processes often just allows the staff to re-focus on the high-value work that lost attention over time, which is a significant benefit, but not necessarily the return on investment leadership was targeting.

My point is that we might not be placing enough value on restoring order to business processing so that business teams can focus on high-impact priorities while doing their jobs more comprehensively and completely, without sacrifice.  We include #4, “Higher Value Time Replacement,” in our benefits calculation so that business leaders can understand and capture the benefits of refocusing team members on the more important work they are currently distracted from.  Without an understanding of how work teams are refocused, business benefits can be over-estimated if we focus only on the hours returned or the FTE cost savings.  Rather than concluding that RPA does not deliver benefits, maybe we need to be honest with ourselves and a bit more realistic: RPA is not a magic pill that will eliminate jobs overnight, but it can create business agility (and cost savings) if used correctly.

Vigilant’s understanding of value and technology not only uniquely positions us to help restore order to a process, but also to find additional benefits trapped in both low- and high-value manual work.

Thank you for reading.  Our future blog articles will focus on the impact and root causes of bad data perpetuated by ineffective and inefficient business processing, and on how solution architects can make or break your automation program.

Please reach out at automation@vigilant-inc.com for a spirited discussion on maximizing the benefits of RPA and how we have found the ‘secret sauce’ for achieving success with automating Oracle EBS Financials and accounting operations.

We look forward to your feedback.


Joshua Gotlieb

Intelligent Automation Practice Director, Vigilant Technologies

Action Oriented Security Approach | Public & Private Cloud

Action Oriented Security Approach: Design, Monitor and Optimize

Vigilant Technologies incorporates a collection of best practices that provide clear, actionable guidance for security-related decisions. This is designed to help your organization improve its security posture and reduce risk, whether your environment is cloud-only or a hybrid enterprise spanning cloud(s) and on-premises data centers.

The Vigilant Technologies core approach for success is three-fold:

Design for the business, Monitor and Auto-Remediate, & Optimize for Change.

As part of this core service offering, Vigilant Technologies executes upon a set of principles and capabilities that support a variety of consumer cloud platforms including Hybrid-Cloud, Oracle Cloud, AWS, Microsoft Azure, & Microsoft Modern Workplace:

  • Governance, risk, and compliance
  • Security operations
  • Identity and access management
  • Network security and containment
  • Information protection and storage
  • Applications and services

Progress through Action: Vigilant Technologies Security Design Principles

Vigilant Technologies’ core security principles provide actionable steps for improvement in three key areas and support a securely architected system hosted in cloud or on-premises datacenters (or a combination of both).

With careful execution of these principles, your team will dramatically increase the likelihood that its security architecture maintains (1) confidentiality, (2) integrity, and (3) availability.

Each recommendation referenced below includes a summary description of why it is recommended and how each principle maps to one or more of these security concepts:

  • Align Security Priorities to Mission – Security resources are almost always limited, so prioritize efforts and assurances by aligning security strategy and technical controls to the business using classification of data and systems. Security resources should be focused first on people and assets (systems, data, accounts, etc.) with intrinsic business value and those with administrative privileges over business-critical assets.
  • Build a Comprehensive Strategy – A security strategy should consider investments in culture, processes, and security controls across all system components. The strategy should also consider security for the full lifecycle of system components, including the supply chain of software, hardware, and services.
  • Drive Simplicity – Complexity in systems leads to increased human confusion, errors, automation failures, and difficulty recovering from an issue. Favor simple and consistent architectures and implementations.
  • Design for Attackers – Your security design and prioritization should be focused on the way attackers see your environment, which is often not the way IT and application teams see it. Inform your security design and test it with penetration testing to simulate one-time attacks and red teams to simulate long-term persistent attack groups. Design your enterprise segmentation strategy and other security controls to contain attacker lateral movement within your environment. Actively measure and reduce the potential attack surface that attackers target for exploitation of resources within the environment.
  • Leverage Native Controls – Favor native security controls built into cloud services over external controls from third parties. Native security controls are maintained and supported by the service provider, eliminating or reducing the effort required to integrate external security tooling and update those integrations over time.
  • Use Identity as Primary Access Control – Access to resources in cloud architectures is primarily governed by identity-based authentication and authorization. Your access control strategy should rely on identity systems for controlling access rather than on network controls or direct use of cryptographic keys.
  • Accountability – Designate clear ownership of assets and security responsibilities, and ensure actions are traceable for nonrepudiation. You should also ensure entities have been granted the least privilege required (to a manageable level of granularity).
  • Embrace Automation – Automation of tasks decreases the chance of human error that can create risk, so both IT operations and security best practices should be automated as much as possible (while ensuring skilled humans govern and audit the automation).
  • Focus on Information Protection – Intellectual property is frequently one of the biggest repositories of organizational value, and this data should be protected anywhere it goes, including cloud services, mobile devices, workstations, and collaboration platforms (without impeding the collaboration that allows for business value creation). Your security strategy should be built around classifying information and assets to enable security prioritization, leveraging strong access control and encryption technology, and meeting business needs like productivity, usability, and flexibility.
  • Design for Resilience – Your security strategy should assume that controls will fail and design accordingly. Making your security posture more resilient requires several approaches working together:
    • Balanced investment – across core functions spanning the full NIST Cybersecurity Framework lifecycle (identify, protect, detect, respond, and recover) to ensure that attackers who successfully evade preventive controls lose access through detection, response, and recovery capabilities.
    • Ongoing maintenance – of security controls and assurances to ensure that they don’t decay over time with changes to the environment or through neglect.
    • Ongoing vigilance – to ensure that anomalies and potential threats that could pose risks to the organization are addressed in a timely manner.
    • Defense in depth – additional controls in the design to mitigate risk to the organization in the event a primary security control fails. This design should consider how likely the primary control is to fail, the potential organizational risk if it does, and the effectiveness of the additional control (especially in the likely cases that would cause the primary control to fail).
    • Least privilege – a form of defense in depth that limits the damage any one account can do. Accounts should be granted the least privilege required to accomplish their assigned tasks, both by access permissions and by time. This helps mitigate the damage of an external attacker who gains access to the account and/or an internal employee who inadvertently or deliberately (for example, an insider attack) compromises security assurances.
  • Baseline and Benchmark – To ensure your organization considers current thinking from outside sources, evaluate your strategy and configuration against external references (including compliance requirements). This helps validate your approaches, minimize the risk of inadvertent oversight, and reduce the risk of punitive fines from noncompliance.
  • Drive Continuous Improvement – Systems and existing practices should be regularly evaluated and improved to ensure they remain effective against attackers, who continuously improve, and against the continuous digital transformation of the enterprise. This should include processes that proactively integrate learnings from real-world attacks, realistic penetration testing and red team activities, and other sources as available.
  • Assume Zero Trust – When evaluating access requests, all requesting users, devices, and applications should be considered untrusted until their integrity can be sufficiently validated. Access requests should be granted conditionally based on the requestor’s trust level and the target resource’s sensitivity. Reasonable attempts should be made to offer means to increase trust validation (for example, requesting multi-factor authentication) and to remediate known risks (changing a known-leaked password, remediating a malware infection) to support productivity goals.
  • Educate and Incentivize Security – The humans designing and operating the cloud workloads are part of the whole system. It is critical to ensure that these people are educated, informed, and incentivized to support the security assurance goals of the system. This is particularly important for people with accounts granted broad administrative privileges.
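The “Assume Zero Trust” principle above can be sketched as a conditional access check. The trust levels, sensitivity tiers, and thresholds below are hypothetical, a minimal illustration of the idea rather than any vendor’s actual policy engine:

```python
from dataclasses import dataclass

# Hypothetical ordinal scales for illustration only.
TRUST_LEVELS = {"untrusted": 0, "mfa_verified": 1, "compliant_device": 2}
SENSITIVITY = {"public": 0, "internal": 1, "business_critical": 2}

@dataclass
class AccessRequest:
    user_trust: str        # e.g. "untrusted", "mfa_verified"
    resource_class: str    # e.g. "internal", "business_critical"

def evaluate(request: AccessRequest) -> str:
    """Grant only when the requestor's validated trust level meets or
    exceeds the target resource's sensitivity; otherwise offer a way
    to raise trust (step-up MFA) rather than flatly denying."""
    trust = TRUST_LEVELS[request.user_trust]
    needed = SENSITIVITY[request.resource_class]
    if trust >= needed:
        return "grant"
    if trust + 1 >= needed:
        return "challenge_mfa"  # remediation path that increases trust
    return "deny"

print(evaluate(AccessRequest("mfa_verified", "internal")))           # grant
print(evaluate(AccessRequest("untrusted", "business_critical")))     # deny
print(evaluate(AccessRequest("mfa_verified", "business_critical")))  # challenge_mfa
```

The key design choice, conditional access with a remediation path instead of a hard deny, is what lets zero trust coexist with the productivity goals mentioned above.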

Author: Stephen Clark

Principal – Technology Strategist, Vigilant Technologies

RPA Implementation Pitfalls

Navigating the Automation Roadmap Blog Series

Not another “Pitfalls” of RPA

While industry trends are shifting toward ‘Hyperautomation’ to accelerate the automation journey, there are many considerations and reasons to use RPA, but there are also cautionary tales about RPA done wrong, which are not always talked about.  With so many people and service providers posturing about “doing RPA right” and the “pitfalls of RPA”, we wanted to present a series giving our no-nonsense view of RPA lessons learned from hardened industry veterans.

In our hands-on experience, and after correcting the trajectory of many poorly implemented RPA programs, we offer our top 5 list of considerations.  (We had more to share, but our marketing team forced a limit of 5 points.)

1.    Stop listening to the RPA tool vendor hype and get your RPA teams aligned with the transformation strategy.

Business teams looking for automated solutions often work around IT, which can work against achieving the desired business outcomes.  Business leaders cannot shy away from IT simply because it is perceived as some scary function in the back rooms of the organization.  They need to embrace IT as a business enabler and help IT better understand its role in supporting business objectives.  Our experience finds that organizations achieve greater results from transformation when the Executive, Business, and IT functions are all aligned on priorities and objectives.  To be clear, the transformation strategy should set the priorities for the organization, so everyone is aligned on how resources, projects, and obligations are focused to achieve timely results.


2.    Managing RPA benefit expectations

RPA has been the ‘flavor of the month’ because it holds the promise of creating process efficiencies so business teams can focus on higher-value work.  Many executives hear ‘automation’ and believe it will reduce head count.  The reality is that many legacy business processes have evolved over time while the business systems have not kept pace.  As a result, business teams manually maintain lower-value work to support the execution of the business function.  RPA compensates for diminished system capabilities and (re)focuses business teams on higher-value tasks.  Business benefits are still significant, but not necessarily only the reduction in headcount executives might be looking for.


3.    Yes, another consideration about selecting the right process candidate for automation.

There are many articles written about how important selecting the right process is for automation.  For an effective automation strategy, additional consideration must be placed on using building blocks in your automation roadmap, mapping out reusable components that reduce future development complexity and expense.  This might mean you do not start with the biggest ROI or impact, but foundational building blocks are critical for a successful automation program.  For additional information, give us a call and we can work with you to map out a practical automation program that achieves the transformational benefits you are looking for.


4.    Solution architects will make or break the success of your automation program.  

Quality solution architects know how to better leverage technical capabilities of the robot and maximize the value and benefits of the automation. Skilled solution architects will also know how to incorporate available technology to re-imagine and enhance the automated solution.  Lastly, solution architects will ensure automations are cost-effective when considering reusability, available robot capacity, and capabilities of the automation team.  Underestimating the value of qualified and capable solution architects (and having the right strategy) can be the difference between success and failure of your automation program.


5.    Automating bad data practices is adding to your data debt!

There is a good chance you are creating data debt by hardening and accommodating bad data practices with automation.  The typical automation approach is to identify a perceived benefit (e.g., ROI) and then automate the process (if it qualifies).  What often gets overlooked is examining the standardization of data inputs into a process to ensure data best practices are being adhered to upstream.  Automating these upstream processes might not be sexy or produce immediate ROI, but they are part of the building blocks for maximizing the benefits of automation and creating business agility.  Be warned: bad data accommodations increase project scope, create technical (data) debt, and increase the cost of support, since the automation is built on bad data.
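One practical way to avoid hardening bad data into a bot is to validate inputs before automation touches them. The sketch below is a hypothetical example (the field names and rules are invented for illustration), showing the general pattern of quarantining bad records upstream instead of coding workarounds into the automation:

```python
# Hypothetical pre-automation data check: validate upstream inputs
# before a bot processes them, rather than accommodating bad data
# inside the automation itself.

REQUIRED_FIELDS = {"invoice_id", "vendor", "amount"}

def validate_record(record: dict) -> list:
    """Return a list of data-quality issues; an empty list means clean."""
    issues = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    amount = record.get("amount")
    if amount is not None and (not isinstance(amount, (int, float)) or amount <= 0):
        issues.append(f"invalid amount: {amount!r}")
    return issues

records = [
    {"invoice_id": "INV-1", "vendor": "Acme", "amount": 120.50},
    {"invoice_id": "INV-2", "vendor": "Acme", "amount": -5},   # bad amount
    {"vendor": "Globex", "amount": 99.0},                      # missing id
]

# Only clean records reach the bot; the rest go back upstream for fixing.
clean = [r for r in records if not validate_record(r)]
quarantined = [r for r in records if validate_record(r)]
print(len(clean), len(quarantined))  # 1 2
```

The quarantined records become a measurable signal of upstream data debt, rather than silent branches inside the automation.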

In the coming blog articles, we will drill down into each of these 5 topics and expand on important points not addressed here, so keep an eye out over the coming weeks and months.

Thank you for reading.  Please reach out at automation@vigilant-inc.com for a spirited discussion on maximizing the benefits of RPA and how we have found the ‘secret sauce’ for achieving success with automating Oracle EBS Financials and accounting operations.  We look forward to your feedback.

Author: Joshua Gotlieb

Intelligent Automation Practice Director, Vigilant Technologies

Lessons Learned – Critical Infrastructure Disruption

Moonlight Maze

In 1996, in the infancy of the Internet, someone was rummaging through military, research, and university networks primarily in the United States, stealing sensitive information on a massive scale. Victims included the Pentagon, NASA, and the Department of Energy, to name a very limited few. The scale of the theft was literally monumental, as investigators claimed that a printout of the stolen materials would stand three times taller than the Washington Monument.

The Russian government was blamed for the attacks, although there was initially little hard evidence to back up the US accusations besides a Russian IP address that was traced to the breach. Moonlight Maze represents one of the first widely known cyber espionage campaigns in world history. It was even classified as an Advanced Persistent Threat (a very serious designation for stealthy computer network threat actors, typically a nation state or state-sponsored group) after two years of constant assault. Although Moonlight Maze was regarded as an isolated attack for many years, unrelated investigations revealed that the threat actor involved in the attack continued to be active and employ similar methods until as recently as 2016.

The attack began with the threat actors building “back doors” through which they could re-enter the infiltrated systems at will and steal further data; they also left behind tools that rerouted specific network traffic through Russia. The breach was not discovered until June 1998, and an investigative task force was only formed in 1999.

Solar Sunrise

Solar Sunrise was a series of DoD computer network attacks that occurred from 1–26 February 1998. The attack pattern was indicative of preparation for a follow-on attack on the Defense Information Infrastructure (DII). DoD unclassified networked computers were attacked using a well-known operating system vulnerability.

At least eleven attacks followed the same profile on Air Force, Navy, and Marine Corps computers worldwide. Attacks were widespread and appeared to come from sites such as: Israel, the United Arab Emirates (UAE), France, Taiwan, and Germany. The attacks targeted key parts of the defense networks and obtained hundreds of network passwords.

So, who was behind these attacks – Iraq, terrorists, foreign intelligence services, nation states, or bad actors for hire? As it would turn out, the attackers were two teenagers from California and one teenager from Israel.

Even though the Solar Sunrise breach started later and ended before Moonlight Maze, it was discovered before the latter. Moreover, both attacks targeted Solaris and other Unix operating systems. So, from February to June of 1998, thousands of critical servers sat unpatched even though a few teenagers had shown the world how easy it was to break into US military networks.

Eligible Receiver 97

Eligible Receiver 97 was a U.S. Defense Department exercise conducted under what is known as the No-Notice Interoperability Exercise Program. The exercises were held June 9–13, 1997. Eligible Receiver 97 featured mock cyberattacks, hostage seizures, and special operations raids that sought to demonstrate potential national security threats that could be posed through the cyber domain. The joint exercise involved a National Security Agency Red Team which played the role of North Korea, Iran, and Cuba attempting to cause critical civilian infrastructure damage, as well as gain control over the military’s command-and-control capabilities.

The NSA Red Team used threat actor techniques and software that were freely available on the Internet at the time. The Red Team was able to crack networks and do things such as deny services; change and manipulate emails to make them appear to come from a legitimate source; and disrupt communications between the National Command Authority, intelligence agencies, and military commands. Common vulnerabilities were exploited, allowing the Red Team to gain root access to over 36 government networks, where they could change or add user accounts and reformat server hard drives.

So now we go from June 1997 to February 1998 to June 1998 – from Eligible Receiver showing us how vulnerable we were, to teenagers exploiting those vulnerabilities, to the discovery of Moonlight Maze. And all this while, critical servers sat unpatched and defenseless. Was this a skill issue or a will issue?

Aurora Generator

Fast forward to 2007: 30 lines of code blew up a 27-ton generator that could produce enough electricity to power, say, a hospital or a navy ship. Fortunately, this was a controlled exercise and not an actual attack. The goal of the exercise was to kill that very expensive and resilient piece of machinery not with any physical tool or weapon, but with about 140 kilobytes of data, a file smaller than the average cat GIF shared today on Twitter. Like any real digital sabotage, it was performed from miles away, over the internet.

The Aurora Generator exercise proved without a doubt that bad actors who attacked an electric utility could go beyond a temporary disruption of the victim’s operations: They could damage its most critical equipment beyond repair.

Here we are again, from 1997 to 2007, a decade later, and yet vulnerable to bad actors.


Shamoon

Shamoon, also known as W32.DistTrack, is a modular computer virus that was discovered in 2012. The virus was used for cyberwarfare against national oil companies, including Saudi Arabia’s Saudi Aramco and Qatar’s RasGas. A group named “Cutting Sword of Justice” claimed responsibility for the attack on 35,000 Saudi Aramco workstations, which caused the company to spend more than a week restoring its services.

Shamoon was launched in retaliation for Operation Olympic Games, a covert and still unacknowledged campaign of sabotage by means of cyber disruption, directed at Iranian nuclear facilities by the United States and likely Israel.

Started under the administration of George W. Bush in 2006, Olympic Games was accelerated under President Obama, who heeded Bush’s advice to continue cyberattacks on the Iranian nuclear facility at Natanz. Bush believed that the strategy was the only way to prevent an Israeli conventional strike on Iranian nuclear facilities.

There are many more incidents like these that have been perpetrated by nation states. Instead of weapons of mass destruction, we are now dealing with weapons of mass disruption. The recent SolarWinds breach highlights how vulnerable we remain to such attacks, simply because “cyber” war does not fit the glorified definition of a war made tangible by a visible pile of bodies.

How will this play out in the future, especially in volatile political environments like the Middle East? What will be the consequences of such disruptions? What does all this mean for the common citizen? Does my healthcare depend on how secure our computer networks are? Short answer: yes. Do my social security and pension depend on the same? What about daily amenities like fresh running water or electricity? The short answer to all of those is a resounding yes – after all, everything is a computer now, and all of it runs on code. Where there’s code, there are vulnerabilities to be exploited.

Traversing the Oracle License Compliance Maze

We have worked with Oracle products and services for almost a decade now, and as much as we passionately recommend Oracle to everyone we come across, we must admit that understanding and complying with Oracle’s license policies can cause a bit of a headache (and possibly some heartburn).

Recently we have come across multiple clients and prospects who have undergone license reviews by the Oracle LMS audit team, irrespective of the size of their Oracle landscape. In many cases, Oracle users are not aware of the finer points of Oracle licensing guidelines until they are already in an audit.

The gap in understanding these licensing rules and their impact on an organization is where many falter. And this is where leading Oracle partners come into play. We can help you navigate the licensing maze and make sense of it all. For instance, we can advise you on licensing requirements when running the latest version of VMware in your shop.

Partners like us are proficient in such nuances of the Oracle licensing world. Our License Compliance Review service includes the following:

  • Capturing and consolidating information from various agreements and contracts over the years between Oracle and your company
  • Capturing your Oracle footprint – Products, versions, additional options, and so on
  • Reconciling your entitlements with actual usage
  • Educating you about the fine print of Oracle Licensing
  • Offering remediation options, if necessary

Our services include analysis of Oracle database and “Tech” products in use, as well as many applications, including EBS.

Get an Oracle License Assessment

Let us come in and give you the peace of mind you deserve. We sign NDAs at the very outset of the engagement, so your information is strictly confidential and doesn’t leave our consulting team. No information is shared with Oracle without written client approval. Contact us today to explore how we can add value!

Contact us at 248-965-4441 or email us at solutions@vigilant-inc.com