Talent as a Service (TaaS), the new model in Talent Acquisition

Talent as a Service (TaaS)

Advancements in cloud technologies created the opportunity for concepts like software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). These “as a service” concepts are designed to offer cost optimization, speed, agility, and scalability. Talent as a Service (TaaS) is a new model in talent acquisition that disrupts traditional workforce staffing by borrowing concepts from the gig economy and introducing a unique approach to crowdsourcing open roles. With the staffing shortages brought on by COVID-19, traditional staffing models are no longer an option.

What is TaaS?

Talent as a service (TaaS) is a flexible staffing model that gives businesses on-demand access to highly skilled, experienced staff. Vigilant offers TaaS for a specific duration and purpose (e.g., projects) or as a way to leverage talent to support an ongoing role or function (managed service).

Be wary of providers using the TaaS concept to rebrand traditional contract staffing and placement. That model retains the limitations of the staffed contractor: limited experience, lack of ownership, and little regard for building a long-term partnership.

Why Vigilant TaaS?

Skills on Demand

Businesses often face capacity limits or lack the specialized skills needed to complete critical business initiatives. TaaS empowers businesses to deliver key projects with on-demand talent, keeping organizations agile and adaptable to today’s rapidly shifting markets.

Cost Optimization

With Vigilant TaaS, you only pay for the services you use. You onboard our staff for a mutually agreed duration, then release them when the term or project completes. It is that simple. Vigilant TaaS is a great way to control your staffing costs while eliminating overhead such as health insurance and bonuses.

Flexibility and Scalability

Vigilant TaaS enables on-demand access to talent, giving you the flexibility to scale teams up and down to meet your business needs. With Vigilant TaaS, you can adapt to changing priorities without missing out on opportunities. Vigilant TaaS is like having your cake and eating it too.

Speed

The COVID-19 pandemic has completely changed how we work and live. It has created market shifts unlike any in our lifetimes, arriving in rapid-fire succession. Businesses must react and adapt in real time to keep up, and organizations find themselves choosing between short-term needs and long-term strategies. Vigilant TaaS eliminates the “either/or” decision by rapidly deploying talent to meet all of your business needs.

Innovation

Vigilant is a full-stack technology professional services firm. We leverage One Vigilant to put the strength of our 300+ employees behind every project or function we support. This collective knowledge ensures we apply the right solutions and the best technologies to help businesses reach their goals. The strength of Vigilant also gives businesses looking to employ new technologies, such as AI, blockchain, or analytics, access to the talent needed to explore them.

Thank you for reading.

Learn more about Vigilant’s TaaS (Talent-as-a-Service) Offerings

Cloud Adoption Framework

Cloud migration in the Cloud Adoption Framework

Introduction

Any enterprise-scale cloud adoption plan will include workloads that do not warrant significant investment in new business logic. Those workloads can be moved to the cloud through any number of approaches: lift and shift, lift and optimize, or modernize. Each of these approaches is considered a migration.

The steps below help establish the iterative processes to assess, migrate, optimize, secure, and manage those workloads.

To prepare you for this phase of the cloud adoption lifecycle, Vigilant Technologies recommends the following:

The Migrate methodology and these steps build on the following assumptions:

  • The methodology governing migration sprints fits within migration waves or releases, which are defined using the Plan, Ready, and Adopt methodologies. Within each migration sprint, a batch of workloads is migrated to the cloud.
  • Before migrating workloads, at least one landing zone has been identified, configured, and deployed to meet the needs of the near-term cloud adoption plan.
  • Migration is commonly associated with the terms lift and shift or rehost. This methodology and these steps are built on the belief that no datacenter, and few workloads, should be migrated using a pure rehost approach. While many workloads can be rehosted, customers more often choose to modernize specific assets within each workload. During this iterative process, the balance between speed and modernization is a common discussion point.

Migration effort

The effort required to migrate workloads generally falls into three phases for each workload:

  1. Assess workloads
  2. Deploy workloads
  3. Release workloads

In a standard two-week iteration, an experienced migration team from Vigilant Technologies can complete this process for two to five workloads of low-to-medium complexity.
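
As a rough planning sketch (assuming the velocity above; the backlog size is illustrative, not a Vigilant estimate), the timeline arithmetic looks like this:

    # Estimate the sprints needed to migrate a backlog at a given velocity.
    # The 2-5 workloads per two-week sprint velocity comes from the text above;
    # the backlog size is purely illustrative.
    import math

    def sprints_needed(workloads: int, velocity_per_sprint: int) -> int:
        """Number of two-week sprints to clear a backlog at a fixed velocity."""
        return math.ceil(workloads / velocity_per_sprint)

    backlog = 40  # low-to-medium complexity workloads (illustrative)
    print(sprints_needed(backlog, 2))  # conservative: 20 sprints (~40 weeks)
    print(sprints_needed(backlog, 5))  # optimistic: 8 sprints (~16 weeks)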

More complex workloads, such as SAP, may take several two-week iterations to complete all three phases of migration effort for a single workload. Experience and complexity both have a significant impact on timelines and migration velocity.

Migration waves and iterative change management

Migration iterations deliver technical value by migrating assets and workloads. A migration wave is the smallest collection of workloads that delivers tangible and measurable business value. Each iteration should result in a report outlining the technical efforts completed.

Next steps

The steps outlined above can help you develop an approach to improve processes within each migration sprint. Vigilant Technologies’ approach ensures that the common tools and processes needed during your first migration wave deliver a successful result.

Read more about Vigilant’s Azure Cloud Migration Services

Author:
Stephen Clark

Principal – Technology Strategist, Vigilant Technologies

Azure SQL Migration Roadmap

Getting Started with Azure SQL

Introduction

Vigilant Technologies can help your team successfully move your SQL Server workloads to Azure SQL Database Managed Instance and save up to 85 percent with Azure Hybrid Benefit and reserved capacity pricing.

Our team of SME DBAs can ensure that you get a fully managed database service with built-in security and performance monitoring for managing hundreds to thousands of databases at scale. For SQL Server workloads that use SQL Server Analysis Services, SQL Server Reporting Services, and other non-engine capabilities, Vigilant Technologies can ensure a successful shift to SQL Server in an Azure VM to get Extended Security Updates at no extra charge.

The Vigilant Approach…

The Vigilant Technologies SQL Migration Roadmap consists of five stages, each encompassing several important tasks required to complete a successful migration to Azure cloud services.

The purpose of each stage is summarized below; we will look at each stage in more depth in the sections that follow:

  1. Initiate and discover – Understand your database footprint and potential approaches to migration
  2. Assess – Assess the discovered workload requirements and any dependencies
  3. Plan – Plan and describe the workloads to be migrated, the tool to be used for migration and the target platform for the workload
  4. Transform – Transform and optimize any workloads not currently compatible with modern data platforms. Optimize workloads to take advantage of new features
  5. Migrate, validate and remediate – Perform migration, validate successful migration, and remediate applications where required
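
As a lightweight way to keep track of where each workload sits in this roadmap, the five stages can be modeled directly (a sketch; the stage names come from the list above, while the workload entries are illustrative):

    # Minimal tracker for the five-stage migration roadmap described above.
    # Stage names come from the roadmap; workloads and positions are illustrative.
    STAGES = [
        "Initiate and discover",
        "Assess",
        "Plan",
        "Transform",
        "Migrate, validate and remediate",
    ]

    workloads = {"payroll-db": 0, "crm-db": 2}  # workload -> current stage index

    def advance(workload: str) -> str:
        """Move a workload to its next stage and report where it now sits."""
        workloads[workload] = min(workloads[workload] + 1, len(STAGES) - 1)
        return f"{workload} -> {STAGES[workloads[workload]]}"

    print(advance("payroll-db"))  # payroll-db -> Assess
    print(advance("crm-db"))      # crm-db -> Transform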



Other Options with Vigilant Technologies – SQL PaaS

Azure SQL Database is a fully managed service comparable to a traditional on-premises SQL Server deployment, but it greatly enhances performance and robustness: performance levels and storage capacity are easily upgradable, and high availability is standard.

Azure SQL Database delivers predictable performance at multiple service levels and provides dynamic scalability with no downtime, built-in intelligent optimization, global scalability and availability, and advanced security options, all with near-zero administration.

These capabilities allow you to focus on rapid app development and accelerating your time to market, rather than allocating precious time and resources to managing virtual machines and infrastructure. Azure SQL Database currently resides in 38 data centers around the world, with more data centers coming online regularly, enabling you to run your database in a data center near you.
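
Because service levels are adjustable in place, scaling a database up can be a single T-SQL statement. A minimal sketch using pyodbc (the server, database, credentials, and target tier are placeholders; ALTER DATABASE must run outside a transaction, hence autocommit):

    # Scale an Azure SQL Database to a higher service objective in place.
    # Server, database, credentials, and tier below are placeholders.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=yourserver.database.windows.net;"
        "DATABASE=master;UID=youruser;PWD=yourpassword",
        autocommit=True,  # ALTER DATABASE cannot run inside a transaction
    )
    # Move 'appdb' to the S3 service objective; the change happens online,
    # with only a brief connection reset when the switch completes.
    conn.execute("ALTER DATABASE appdb MODIFY (SERVICE_OBJECTIVE = 'S3')")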

Author:
Stephen Clark

Principal – Technology Strategist, Vigilant Technologies

6 Challenges CFOs solve with RPA

6 Challenges CFOs Could Solve with
Robotic Process Automation (RPA)

Introduction

CFOs and finance executives are always looking to accelerate and increase efficiency across finance and accounting functions so accountants can focus more on the company’s risks and opportunities. Streamlining operations creates greater business agility: the ability to immediately identify market turbulence, then respond and navigate effectively to meet customer and shareholder demands.

When systems do not evolve fast enough to meet business needs, finance and accounting functions must accommodate those needs with manual workarounds or additional headcount. These gaps have opened the door for new technology solutions that adapt to business changes without armies of on- or offshore staff.

In this article, we will explore Robotic Process Automation (RPA) and the benefits and opportunities CFOs should consider when transforming finance and accounting functions.

What is RPA?

In simple terms, Robotic Process Automation (RPA) is software (aka a “robot” or “Digital Worker”) that emulates the actions of a human interacting with a computer. RPA is best used to perform manual tasks that are repetitive, easily defined, and high volume.

With RPA, digital workers replicate the mouse and keyboard actions of an employee but can go beyond human interaction to leverage more technological solutions, like running queries, calling APIs and web services, and conducting advanced analytics within the robot. Additional benefits of employing digital workers include 24/7 availability, the ability to rapidly scale to meet demand, and flattening the peak workload of your financial close.

1. Streamlining business processes

According to a report published by the McKinsey Global Institute, 42% of finance activities can be fully automated and an additional 19% can be mostly automated. RPA suits everything from simple tasks, like checking for FX rate changes, to complex processes, like reading bank statements for bank reconciliations. Other examples where RPA is often used include entry of sales orders, cash application, account reconciliations, vendor registration, purchase order creation, invoice registration, journal entry uploads, and report retrieval, assembly, and preparation.
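
To make the simple end of that range concrete, here is a minimal sketch of a digital worker checking for FX rate changes (the rate endpoint and threshold are illustrative assumptions, not part of any actual deployment):

    # Minimal FX-rate watcher: fetch today's rate and flag a material change.
    # The endpoint and threshold below are illustrative placeholders.
    import json
    import urllib.request

    RATE_URL = "https://example.com/api/rates?pair=EURUSD"  # placeholder
    THRESHOLD = 0.005  # flag moves larger than 0.5%

    def fx_rate_moved(previous_rate: float) -> bool:
        """Return True when the rate moved more than THRESHOLD since last run."""
        with urllib.request.urlopen(RATE_URL) as resp:
            current = json.load(resp)["rate"]
        return abs(current - previous_rate) / previous_rate > THRESHOLD

    if fx_rate_moved(previous_rate=1.0850):
        print("FX rate moved materially -- route to an accountant for review")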

Imagine what you can do with over 50% of your finance and accounting processes automated: effective, efficient, standardized, and high quality.

Related readings: Transforming the Finance Function with Automation 

2. Improving productivity and reducing operational costs

The intent of RPA is to automate repetitive, standardized, manual, low-value work. Employing robots streamlines business operations, returns hours to the business, and enables employees to focus on higher-value tasks. Freeing up resources also lets employees work on high-priority projects that are often tabled because staff can only keep up with the volume of manual work.

Poor data quality costs businesses millions of dollars each year because employees accommodate bad data and integrate workarounds into their day-to-day work. Poor data quality trickles through an organization, forcing dependent processes to accommodate the data issues as well. For example, unstructured vendor entry can cause additional lookups in procurement, invoice registration, cash application, and vendor reporting. RPA can standardize data entry and validate data to ensure quality at the point where data is acquired.
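
A sketch of that entry-point validation idea (the field names and rules are illustrative, not a specific client ruleset):

    # Standardize and validate a vendor record before it enters the system,
    # so downstream processes never see the variants. Rules are illustrative.
    import re

    def standardize_vendor(record: dict) -> dict:
        """Normalize a vendor entry; reject it if required fields are invalid."""
        name = " ".join(record["name"].split()).upper()
        name = re.sub(r"[.,]", "", name)                 # drop punctuation variants
        name = re.sub(r"\bINCORPORATED\b", "INC", name)  # canonical suffix
        postal = record["postal_code"].replace(" ", "")
        if not re.fullmatch(r"\d{5}(\d{4})?", postal):
            raise ValueError(f"invalid postal code: {record['postal_code']}")
        return {**record, "name": name, "postal_code": postal}

    print(standardize_vendor(
        {"name": "Acme  Widgets, Incorporated", "postal_code": "48226"}
    ))
    # {'name': 'ACME WIDGETS INC', 'postal_code': '48226'}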

Accurate and standardized data means faster processing which translates to faster report generation, which accelerates Close, Financial Analysis, Analytics, and more.

Processes automated with RPA can also be repurposed to support time-consuming system upgrade projects that tie up your most knowledgeable resources with user acceptance testing.

Related readings: RPA can fix your data quality issues

3. Reducing operational risk (aka Enable your team to excel, not Excel)

Does your organization have to certify an Excel workbook as a system of record? Does your F&A team specialize in complicated macros and formulas? Are you at risk of a key-man dependency because someone on the team built a complex, macro-laden, formula-powered, cross-referencing behemoth of an Excel workbook?

RPA is a great way to eliminate end-user tools (EUT/EUC) and the risk of corrupted Excel files. RPA also ensures that the same steps are always completed the same way, eliminating the risk of an accountant forgetting to refresh a lookup table or retrieve the most up-to-date results. RPA-processed data will be standard, consistent, documented, and auditable.

4. Scale operations to meet growing demands

Processes automated with RPA can return a significant number of hours to your business, enabling team members to refocus on what is important, such as identifying risks, errors, and opportunities. Standardizing processes also enables the automated process to scale in the event of higher transaction volumes or seasonality, which reduces the costs associated with hiring (FTE or temp workers).

Be aware that most RPA practitioners automate accounting tasks based on an individual’s user requirements, which is not scalable and does not extend to comparable functions across finance and accounting teams. Our extensive experience supporting finance and accounting functions helps us identify automation components that can be repurposed across a multitude of processes, which not only speeds up automation deployment but also establishes standards and consistency in processing.

5. Optimizing key performance indicators (KPIs)

A big challenge we often see when automating finance and accounting departments is that most companies have no metrics on where employees spend their time, processing times, or volumes. Implementing RPA enables organizations to drill into the details of how teams use their time. We often find significant inefficiencies in manual business processes, so we use the captured information to reengineer the process for automation and ensure it runs more efficiently and effectively.

RPA also enables tracking of processes for volume counts, average processing times, processing costs, and exceptions. Isolating and analyzing exceptions can further improve process efficiency, but that requires operational metrics to understand how well the processes are being executed.
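
A sketch of what that operational telemetry can look like (the record fields and numbers are illustrative):

    # Summarize run logs from an automated process into the KPIs noted above:
    # volume, average processing time, and exception rate. Data is illustrative.
    from statistics import mean

    runs = [
        {"seconds": 42.0, "exception": False},
        {"seconds": 38.5, "exception": False},
        {"seconds": 95.1, "exception": True},  # routed to a human for review
    ]

    volume = len(runs)
    avg_seconds = mean(r["seconds"] for r in runs)
    exception_rate = sum(r["exception"] for r in runs) / volume

    print(f"volume={volume}, avg={avg_seconds:.1f}s, exceptions={exception_rate:.0%}")
    # volume=3, avg=58.5s, exceptions=33%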

Related readings: Understanding benefits realization with RPA

6. Ensuring compliance requirements are being met

RPA is effective for supporting and ensuring compliance across your organization: the defined actions within an automated process are always executed consistently, with greater accuracy and higher quality, which strengthens compliance across all business processes.

RPA also improves oversight and auditability: a Digital Worker’s defined actions are captured in an audit log during execution for monitoring and auditing, which simplifies operations and enables compliance concerns to be addressed more quickly. The log is also helpful in troubleshooting processing issues.

Conclusion

With CFOs’ responsibilities increasingly focused on company viability, long-term growth strategy, short-term crisis navigation, and managerial decision-making, there is less time for day-to-day functions. Leveraging RPA frees your staff from low-value tasks and creates business agility, transforming and reshaping the finance function from number crunchers into the data-driven, strategic partner your business needs.

Author:

Joshua Gotlieb

Intelligent Automation Practice Director, Vigilant Technologies

Finance Transformation with Automation

Transforming the Finance Function with Automation

With rapid market shifts forcing the need for business agility, CFOs are being pressured to reinvent their finance and accounting teams from number crunchers into THE competitive advantage for their company. Before the digital revolution, finance leaders could only throw more bodies at problem areas; now businesses can employ armies of digital workers to execute business tasks and processes with speed, efficiency, consistency, and quality.

Digital Workers are the “robots” in Robotic Process Automation (RPA). With RPA, digital workers replicate the mouse and keyboard actions of an employee but can go beyond human interaction to leverage more technological solutions, like running queries, calling APIs and web services, and conducting advanced analytics within the robot. Additional benefits of employing digital workers include 24/7 availability, the ability to rapidly scale to meet demand, and flattening the peak workload of your financial close.

What processes and tasks can I automate?

According to a report published by the McKinsey Global Institute, 42% of finance activities can be fully automated and an additional 19% can be mostly automated. RPA suits everything from simple tasks, like checking for FX rate changes, to complex processes, like reading bank statements for bank reconciliations. Other examples where RPA is often used include entry of sales orders, cash application, account reconciliations, vendor registration, purchase order creation, invoice registration, journal entry uploads, and report retrieval, assembly, and preparation.

It is also important to understand that RPA is effectively the entry point for Artificial Intelligence (AI) technologies. RPA can engage more advanced technologies like OCR and machine learning to read a supplier’s invoice documents, extract the information, and register the invoice in your AP system. RPA can also engage AI to support analytic models in forecasting processes.

The following is a simple way to think about where to employ RPA (a toy scoring sketch follows the list):

  • Rule-based
  • Easily described
  • High transaction volumes
  • Low exceptions
  • Stable and well-defined processes
  • Low system change
  • Structured data and readable electronic inputs
  • Low passion (tasks people are not passionate about performing)
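
A toy scoring sketch over these criteria (the equal weighting and the sample answers are illustrative assumptions):

    # Score an automation candidate against the suitability criteria above.
    # Equal weights and the sample answers are illustrative only.
    CRITERIA = [
        "rule_based", "easily_described", "high_volume", "low_exceptions",
        "stable_process", "low_system_change", "structured_inputs", "low_passion",
    ]

    def rpa_fit_score(answers: dict) -> float:
        """Fraction of criteria met; closer to 1.0 means a stronger candidate."""
        return sum(bool(answers.get(c)) for c in CRITERIA) / len(CRITERIA)

    invoice_registration = {c: True for c in CRITERIA} | {"low_exceptions": False}
    print(f"{rpa_fit_score(invoice_registration):.0%}")  # 88%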

Examples where we have used automation in Finance and Accounting:

Financial Control & Reporting

  • Journal Entry Creation and Upload/Entry
  • Account Reconciliation
  • Report Assembly and Preparation

Credit-to-Cash (AR)

  • Enter Sales Orders
  • Credit Checks
  • Customer Follow-ups
  • Retrieve Cash from Bank
  • Cash Application / Allocation

Procure-to-Pay (AP)

  • Vendor Registration
  • Create Purchase Order
  • Invoice Registration
  • 2- and 3-way matching
  • Payment (Batch) Creation
  • Payment Issuance

Financial Planning and Analysis (FP&A)

  • Retrieving reports from internal/external sources
  • Standardizing and cleansing data
  • Consolidating datasets
  • Building standard report outputs and populating PowerPoints

Cash & Treasury Management

  • Generate Daily Cash Positions
  • Cash Forecasting
  • Bank Account Analysis

Payroll

  • Time-sheet coding validation
  • Run payroll
  • Calculating deductions
  • Auditing reported hours against schedule

When over 50% of your finance and accounting processes are automated, finance functions can focus on risk identification, business analysis, forecasting, and analytics to accelerate data-driven decision making. With efficient process automation, you can transform your finance function.

Author:

Joshua Gotlieb

Intelligent Automation Practice Director, Vigilant Technologies

RPA Can Fix Your Data Quality Issues

Robotic Process Automation (RPA)
can fix your data quality issues

According to Gartner, “the average financial impact of poor data quality on organizations is $9.7 million per year.” In 2016, IBM estimated the yearly cost of poor data quality in the US alone to be $3.1 trillion. Anyone who works with data that has completed its processing journey understands the impacts, so why are we not talking more about data quality?

While there are many possible explanations for organizations not addressing data quality, the main ones are that the relationship between data quality and business results is hard to identify (i.e., hard to quantify) and that most business functions are unaware of the impact poor data has on downstream processes.

The reality is that executives, managers, employees, accountants, and others simply accommodate the bad data and integrate workarounds into their day-to-day work. Employees accept bad data because there is little or no incentive to do otherwise: accepting it is easier than tracing where the bad data originates and then working with upstream teams to correct the behaviors that produce it.

These bad-data accommodations cost organizations both employee time and expense, since a large portion of that time goes to low-value data correction and standardization rather than higher-value analysis for insights, risks, and opportunities. Even non-reporting tasks lose efficiency: a simple task like selecting a vendor or supplier can become a series of trial-and-error lookups until the right entry is found, all because of undisciplined data creation practices.

We use RPA to standardize data entry and validate data to ensure data quality. We look for the root cause of data issues and correct bad data practices before applying RPA to a business process, so that we do not harden accommodations for bad data; fixing the root cause means we do not have to automate bad-data workarounds in every downstream process that uses the data. Accurate and standardized data also means faster processing, which translates to faster report generation and accelerates Close, Financial Analysis, Analytics, and more.

For more information about how hardening bad data practices adds data debt to your organization, read our Process Automation Approach blog.

Author:

Joshua Gotlieb

Intelligent Automation Practice Director, Vigilant Technologies

Process Automation Approach

Navigating the Automation Roadmap Blog Series

Your Automation Approach is Adding Data Debt to Your Organization!

Process automation positively impacts businesses in many ways, including the elimination of human processing errors, faster processing, 24×7 work time, and the scaling of capacity to flatten peak workloads.  With so much goodness, what could be bad?  How about the data debt that is piling up with each process automation?

With all the talk of robotic process automation (RPA), intelligent process automation (IPA), hyperautomation, and other fancy terms, at its most basic level a business process is the movement of data. Data moves from Point A to Point B: PDF to system, system to system, system to report, and so on. The ultimate purpose of these data movements is to support data-driven decision making at various management levels. If all goes well, captured data is mined for insights that improve the customer experience.

What is data debt?

Data debt is the cost that comes with choosing the fast solution over the longer, more technically correct solution. Sometimes these decisions are known at the time of origin, but they can also accumulate over time, unknowingly. A relatable example of (inadvertent) data debt is an unstandardized practice for name and address entry. A related example is implementing name-and-address entry standards after the process has matured, but deciding not to clean up and align legacy data with the new standards.

In both situations, unstandardized name and address databases force downstream processes to contend with data variations.  Now, consider the processing impacts of name and address variations across Procurement, Accounts Payable, Accounts Receivable, Accounting, Sales and Marketing, and Reporting.  Human accommodations for these data deviations might not be fully understood or realized, because it is easy enough for an employee to perform a lookup and use historical processing knowledge to make the right selection.  The fact is, these accommodations are process deviations.

Process deviations across downstream processing create process waste and complexity, but they also add to the data debt that goes unrecognized in most organizations.

Fast-forward to the future, when you upgrade your name and address system to a new CRM. The organization is now confronted with the decision to standardize the data or convert it as-is. Standardizing the data increases the cost, time, and risk of the project, whereas converting as-is maintains (or increases) the organization’s data debt and perpetuates bad data practices.

How automation contributes to data debt

Automation backlogs consist of process candidates that are typically isolated functions (or tasks) within a larger process (e.g., invoice registration). Decisions to “green light” a process from the backlog are based on priority, projected return on investment (e.g., hours, FTEs), and whether the gating qualifications have been met. Most qualification guidelines include identifying known process deviations at a higher level (Levels 0 to 3), but it is easy to miss process deviations that result from human accommodations for bad data practices at the desktop level (Level 4).

To the trained and untrained eye alike, a business process might appear standardized and consistent at the higher levels, and even during a desktop walk-through. Automation development teams will draft the process definition document and scope the business process based on the business team’s walk-through. Depending on your development methodology (e.g., agile or waterfall), process variations caused by inconsistent data surface during development reviews or during user acceptance testing (UAT). Accommodating a process variation within the current phase of the automation project might seem insignificant, but the variations add up, expanding the scope of the project and, in many cases, its duration.

Project scope creep and longer project times aside, the deeper issue is that process variations are often solved by hard-coding unstandardized data or by using lookup tables, which means the automation has now enshrined the bad data practices as standards.

Data debt in practice

A good example of these data issues is starting with invoice registration as the automation. The AP invoice registration team receives an invoice and manually keys it into the registration system. Data entry usually entails straight entry plus field lookups to select and “connect” the invoice with the vendor data and required internal coding structures. From the top level, the process looks consistent and repetitive, but it is easy to miss that the processor is performing multiple lookups to identify the correct vendor or business unit entries because of data inconsistencies.

A lack of governance over vendor name and address entry, or over purchase order entry, is commonplace, resulting in inconsistent data being keyed into supporting systems. The problem is that this information is a critical input to many downstream processes, each of which then requires process variations (subtle or pronounced) to support the unstandardized data.
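
A sketch of how such an accommodation gets hardened into an automation (the vendor variants are illustrative): automating the lookup bakes the variants in rather than fixing them at entry.

    # The lookup-table "fix": every unstandardized vendor variant that appears
    # gets mapped to the canonical record. Automating this enshrines the bad
    # practice as a standard -- the table only grows. Variants are illustrative.
    VENDOR_LOOKUP = {
        "ACME WIDGETS": "V-1001",
        "Acme Widgets, Inc.": "V-1001",
        "ACME WIDGETS INCORPORATED": "V-1001",
        # every new variant keyed upstream forces another entry here
    }

    def resolve_vendor(name_on_invoice: str) -> str:
        """Map whatever was keyed upstream to the canonical vendor ID."""
        try:
            return VENDOR_LOOKUP[name_on_invoice]
        except KeyError:
            raise KeyError(f"new variant, table must grow: {name_on_invoice!r}")

    print(resolve_vendor("Acme Widgets, Inc."))  # V-1001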

Eliminate data debt, create business agility, and transform your organization!

If you are thinking, “It is impossible and impractical to map out every process scenario,” you are correct, but you are also not addressing the root cause of the process variations, which is usually inconsistent data entry standards. To dig deeper, we need to examine where the unstandardized data originates and address it at its point of entry into the organization (where possible and practical). From an automation perspective, this means the process with the highest ROI might not be the right place to start; you may need to begin reengineering for automation further upstream to bring “real” transformation to the organization.

We recommend examining all process inputs during the scoping phase of an automation, then asking whether each input has a standardized process at its point of entry. The yes/no answer helps determine whether an investigation into the quality and consistency of the data is required. Based on the outcome of that analysis, you can determine how best to address it for your organization.

Parting thought

Transformation of an organization usually does not come from the shiny object in front of you; it comes from establishing the right practices to guide an aligned organization. It is also important to note that transformation occurs through many available tools (levers to pull), such as system upgrades, reengineering, expansion of system functionality and parameters, or automation. Too often, people want to believe automation is the only answer for transformation, but that belief signals an organization that is not aligned on its transformation initiatives.

In future articles we will address the importance of an “aligned organization” for transformation, but for now I hope this article has been informative and thought-provoking.

Please reach out at automation@vigilant-inc.com. We would appreciate hearing your feedback or having a discussion to learn how we can support your data governance implementation and execution, or your automation initiatives. Thank you for reading.

Author:

Joshua Gotlieb

Intelligent Automation Practice Director, Vigilant Technologies

How Security Changes with Cloud Networking

In cloud computing there are two macro layers to infrastructure:

  • The fundamental resources pooled together to create a cloud. This is the raw, physical and logical compute (processors, memory, etc.), networks, and storage used to build the cloud’s resource pools. For example, this includes the security of the networking hardware and software used to create the network resource pool.
  • The virtual/abstracted infrastructure managed by a cloud user. This is the compute, network, and storage assets that the user consumes from the resource pools. For example, the security of the virtual network, as defined and managed by the cloud user.

All clouds utilize some form of virtual networking to abstract the physical network and create a network resource pool. Typically, the cloud user provisions desired networking resources from this pool, which can then be configured within the limits of the virtualization technique used.

There are two major categories of network virtualization commonly seen in cloud computing today:

  • Virtual Local Area Networks (VLANs): VLANs leverage existing network technology implemented in most network hardware. VLANs are extremely common in enterprise networks, even without cloud computing. They are designed for use in single-tenant networks (enterprise data centers) to separate different business units, functions, etc. (like guest networks). VLANs are not designed for cloud-scale virtualization or security and should not be considered, on their own, an effective security control for isolating networks. They are also never a substitute for physical network segregation.
  • Software-Defined Networking (SDN): A more complete abstraction layer on top of networking hardware, SDNs decouple the network control plane from the data plane (you can read more about SDN principles on Wikipedia). This allows networking to be abstracted from the traditional limitations of a LAN.

Security challenges with cloud networking:

  • The lack of direct management of the underlying physical network changes common network practices for both the cloud user and the provider. The most commonly used network security patterns rely on control of the physical communication paths and insertion of security appliances, which is not possible for cloud customers, since they operate only at a virtual level.
  • Traditional network intrusion detection, in which communications between hosts are mirrored and inspected by a virtual or physical Intrusion Detection System, is generally not supported in cloud environments; customer security tools instead need to rely on an in-line virtual appliance or a software agent installed in instances. Either approach creates a chokepoint or increases processor overhead, so be sure you really need that level of monitoring before implementing it. Some cloud providers offer a level of built-in network monitoring (and you have more options with private cloud platforms), but typically not to the same degree as sniffing a physical network.

On the positive side, software-defined networks enable new types of security controls, often making them an overall gain for network security:

  • Isolation is easier. It becomes possible to build out as many isolated networks as you need without constraints of physical hardware. For example, if you run multiple networks with the same CIDR address blocks, there is no logical way they can directly communicate, due to addressing conflicts. This is an excellent way to segregate applications and services of different security contexts.
  • SDN firewalls (e.g., security groups) can apply to assets based on more flexible criteria than hardware-based firewalls, since they are not limited by physical topology. SDN firewalls are typically policy sets that define ingress and egress rules, which can apply to single assets or groups of assets regardless of network location (within a given virtual network); a sketch of this policy-set model follows the list.
  • Combined with the cloud platform’s orchestration layer, this enables very dynamic and granular combinations and policies with less management overhead than the equivalent traditional hardware or host-based approaches.
  • Default deny is often the starting point, and you are required to open connections from there, which is the opposite of most physical networks.
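
A minimal sketch of the default-deny, policy-set model described above (the rule fields and values are illustrative, not any provider’s actual API):

    # Toy model of an SDN firewall (security group): default deny, with
    # explicit allow rules applied by group membership rather than topology.
    # Field names and values are illustrative, not a provider API.
    RULES = [
        {"group": "web", "direction": "ingress", "port": 443, "source": "0.0.0.0/0"},
        {"group": "database", "direction": "ingress", "port": 5432, "source": "web"},
    ]

    def is_allowed(group: str, direction: str, port: int, source: str) -> bool:
        """Default deny: traffic passes only if an explicit rule matches."""
        return any(
            r["group"] == group and r["direction"] == direction
            and r["port"] == port and r["source"] == source
            for r in RULES
        )

    print(is_allowed("database", "ingress", 5432, "web"))        # True
    print(is_allowed("database", "ingress", 5432, "0.0.0.0/0"))  # False: no rule, denied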

In conclusion, the cloud can be as secure as, or more secure than, a traditional on-premises deployment if configured correctly.

Vigilant simplifies the complexity and reduces the cost of managing and maintaining your IT infrastructure, including servers, network, backup, and storage technologies. Please reach out to infraservices@vigilant-inc.com for a spirited discussion on maximizing the cloud’s benefits for your company.

We look forward to your feedback.

Author:
Amrita Mukherjee

Principal Security & Cloud Architect, Vigilant Technologies