Park Place Technologies, an IT services company founded in 1991, supports more than 58,000 data centers in over 150 countries. The company has created a new technology service category – Discover, Monitor, Support, and Optimize (DMSO) – a fully integrated approach to managing critical IT infrastructure.
Paul Mercina, Director of Product Management at Park Place Technologies, in an interaction with Voice&Data, explains why companies should adopt a DMSO strategy to optimize the performance of data centers and discusses the benefits of third-party maintenance services.
A few excerpts:
Voice&Data (V&D): Organizations, irrespective of their size, have been affected by the COVID-19 pandemic. What IT solutions can help organizations redefine workplaces?
Paul Mercina (Paul): The sudden, overnight, en masse switch to remote digital work is highly likely to accelerate changes in how work is performed and in how we think about working arrangements. Looking at the bigger picture, COVID-19 may prove to be a tipping point for the digital transformation of the workplace. Certainly, it looks near impossible to put that digital genie back in the bottle once the health emergency is over.
The changing use of, and requirements from, our technology brought about by enforced home working during this crisis has the potential to completely redefine the workplace. The foremost point to bear in mind is that IT departments must maintain a strong VPN service and a multi-factor authentication platform. Every employee should have access to a laptop, softphones, monitors, and the necessary software and applications.
Companies must keep very careful track of sales operations, lead generation, client onboarding, and event support. All lines of team and personal communication have to be kept open and accessible, from the C-suite leadership team to every associate.
Additionally, organizations that experienced data center downtime during the COVID-19 pandemic may wish to explore remote, automated support and AI tools for their IT hardware. Automated support tools that monitor all hardware within the stack (storage, server, and networking hardware) from any vendor are readily available. Combined with discovery tools, they enable organizations to know exactly what assets (physical, virtual, cloud, and edge) they have in their IT infrastructure globally, with the peace of mind that their hardware is supported and that, in the event of a fault, it is remediated remotely.
Organizations may also wish to consider remote tools for logging and tracking hardware tickets through a customer portal or mobile app, which some specialist IT vendors now offer.
V&D: What is the DMSO service concept all about, and why should organizations consider a DMSO strategy to make their IT infrastructure more efficient?
Paul: DMSO stands for Discover, Monitor, Support, and Optimize.
Today’s enterprises have their data completely spread out. It lies on-premises, in operation centers and networks across the world, in public and private cloud, and on devices at the edge – creating a very complex structure to manage. As they continue with their digital agenda, businesses must navigate an array of different service providers and establish accountability and support while incurring heavy spends on cost, time, and labor. A traditional system cannot handle such an intricate ecosystem. Therefore, what enterprises need today is more flexibility and an intelligent approach to digital infrastructure maintenance.
Discover – Automation is used to comprehensively map servers (cloud, physical, or virtual), edge devices, desktops, and peripherals, and then list all data center assets, alongside the enterprise’s dependency on them, across OEMs.
Monitor – Hardware and software are used to monitor servers and storage on an ongoing basis.
Support – Proactive, predictive alerts with ticket generation for events in operating systems, server hardware, and network hardware. Remediation includes OS updates and patch management and, for network incidents, identifying the root cause, managing it, and rectifying configuration issues.
Optimize – Capacity management while ensuring uptime, along with CPU utilization monitoring and cloud cost controls.
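At a high level, the Discover, Monitor, and Support stages described above can be pictured as a simple loop: build an asset inventory, attach live readings to it, and raise proactive tickets when a threshold is crossed. The Python sketch below is purely illustrative – the asset fields, readings, and 90% threshold are hypothetical and do not reflect Park Place’s actual tooling:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    kind: str              # "server", "storage", or "network"
    cpu_util: float = 0.0  # last observed CPU utilization, percent

def discover(inventory):
    """Discover: turn raw inventory records into tracked assets."""
    return [Asset(r["name"], r["kind"]) for r in inventory]

def monitor(assets, readings):
    """Monitor: attach the latest utilization reading to each asset."""
    for a in assets:
        a.cpu_util = readings.get(a.name, a.cpu_util)

def support(assets, threshold=90.0):
    """Support: raise proactive tickets for assets past the threshold."""
    return [f"TICKET: {a.name} CPU at {a.cpu_util:.0f}%"
            for a in assets if a.cpu_util >= threshold]

inventory = [{"name": "db01", "kind": "server"},
             {"name": "san01", "kind": "storage"}]
assets = discover(inventory)
monitor(assets, {"db01": 95.0, "san01": 40.0})
tickets = support(assets)
print(tickets)  # only db01 crosses the threshold
```

In a real DMSO deployment, discovery would span OEMs and cloud APIs and the Optimize stage would feed capacity and cost decisions; the point of the sketch is only the flow between the stages.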
V&D: Why does network analytics play a crucial role today?
Paul: Corporate IT teams have a lot of data on their network’s status, which means they need to deal with more alerts on their network management systems than ever before. Mining through reams of data to filter out what requires immediate attention takes time and effort, and it is bound to lead to delayed responses and inefficient network performance.
This is where network analytics comes to the rescue. When machine learning is applied to the network layer, it can identify authentic network errors in real time, and these can then be addressed immediately. With intelligent event suppression, alert overloads can be prevented. This way, network analytics solutions help identify and prevent incidents before they actually affect the business. Otherwise, the network becomes a blind spot and operational impacts become very difficult to track.
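One simple form of the event suppression mentioned above is deduplication: collapse repeated alerts for the same device and error that arrive within a short window, so operators see one actionable event instead of a storm. The sketch below is an assumption about how such a filter might work, not any vendor’s actual algorithm; the device names, error codes, and 60-second window are invented:

```python
def suppress(events, window=60):
    """Collapse repeated (device, error) alerts arriving within
    `window` seconds of the last occurrence of the same pair."""
    last_seen = {}
    kept = []
    for ts, device, error in sorted(events):
        key = (device, error)
        if key not in last_seen or ts - last_seen[key] >= window:
            kept.append((ts, device, error))
        # Updating on every event keeps a continuous storm suppressed
        last_seen[key] = ts
    return kept

events = [
    (0, "sw1", "link-flap"),
    (5, "sw1", "link-flap"),   # repeat within 60 s: suppressed
    (70, "sw1", "link-flap"),  # outside the window: kept
    (10, "rtr2", "bgp-down"),  # different device: kept
]
print(suppress(events))
```

Production analytics platforms go further, using learned baselines rather than fixed windows to decide which alerts are authentic, but the goal is the same: fewer, higher-signal events.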
V&D: What pain points are you seeing from organizations in the current IT landscape and why do these pain points require resolutions through third-party maintenance services?
Paul: Primarily, the pain points are:
- Lack of clear visibility into network performance,
- Outdated toolsets that struggle with scalability and performance,
- The pain of tool consolidation and of adopting the new features and functionality of a new toolset,
- Lack of access to the data needed to make informed, educated decisions, and
- The administration overhead and time required to support emerging technologies such as cloud and SDx.
Automated support processes are making the maintenance and support of hardware much simpler using machine learning and AI. Today’s enterprise CIOs have a long list of responsibilities and are stretched for trained manpower. In such a situation, integrating third-party solutions to manage server, storage, and networking equipment can be critical.
Third-Party Maintenance (TPM) has gained traction and become the go-to solution for enterprises to maintain uptime in data centers, especially in the COVID-induced scenario that has caused companies to tighten their budgets and be more prudent in their spending.
In addition to a hassle-free transfer of maintenance services, TPM lets enterprises shave 30-40% off OEM maintenance costs, allowing the savings to be funneled into more strategic projects.
Apart from the cost play, TPM works as a trusted partner to the enterprise, bringing expert engineering knowledge, ready availability of spare parts, and round-the-clock service.
V&D: What are the latest trends in service delivery and ticket tracking for fault resolution?
Paul: Mobile apps, like customer portals, give customers remote access so they can control and maintain their data centers and IT infrastructure in real time. Important features such as submitting, editing, and viewing incidents, or monitoring escalation processes, are the same in the mobile apps as in the customer portals.
Customer portals also enable customers to track the progress of their tickets in real time. The portal and mobile app provide customers with a single-pane-of-glass view of the health of their whole IT infrastructure (compute, storage, and network), along with real-time updates on the progress of service incidents, no matter where they are working from.
V&D: How can IT infrastructure problems be prevented and remediated quickly when problems arise? (For both office and home workers)
Paul: It is crucial to ensure remote workers can access company networks without issue (via VPN). A dashboard is certainly needed to monitor the number of active VPN sessions per location and to predict when VPN saturation will occur. This allows the company’s IT team to visualize VPN activity over time and how it fluctuates during business hours, and encourages proactivity in preventing saturation.
It is also necessary to correlate the number of VPN sessions with application performance (such as response time) and with outbound/inbound traffic on certain interfaces. If circuit saturation and application degradation correspond with a rising number of VPN sessions, the company’s IT team can start planning to optimize capacity.
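The correlation described above can be as simple as computing a Pearson coefficient between hourly VPN session counts and application response times: a coefficient near 1 suggests degradation tracks VPN load. The sketch below uses invented sample numbers and a hypothetical 0.8 alerting threshold; real dashboards would compute this continuously over live telemetry:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hourly samples: concurrent VPN sessions vs. app response time (ms)
sessions = [120, 250, 400, 520, 610]
resp_ms = [180, 210, 310, 450, 600]

r = pearson(sessions, resp_ms)
if r > 0.8:
    print(f"r={r:.2f}: degradation tracks VPN load, plan capacity")
```

Correlation alone does not prove causation, of course; it is the trigger for a closer look at circuit utilization on the interfaces carrying the VPN traffic.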
It’s also important to ensure on-premises applications are performing optimally. With fewer people on-site, extra attention should be paid to the environmental conditions of server rooms, QA labs, and data centers. A server room hosting many important on-premises applications could be susceptible to a cooling failure. The first warning signs may be users reporting slow or unresponsive applications; the monitoring solution may then start showing temperature alarms on CPUs and chassis, or fan failures. The last thing any business wants is for its users to be the early warning system for a potentially catastrophic and costly failure.
Again, IT teams can be proactive by monitoring key servers and setting thresholds for CPU, chassis, fans, and application response time. This gives the IT team precious time to remediate issues before they turn into revenue-impacting downtime.
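The threshold monitoring just described amounts to comparing each server’s sensor readings against fixed limits and emitting alerts before users notice. A minimal sketch, with entirely hypothetical server names, sensors, and threshold values:

```python
# Hypothetical alert thresholds for one class of server
THRESHOLDS = {
    "cpu_temp_c": 85,     # alert above this CPU temperature
    "chassis_temp_c": 45, # alert above this chassis temperature
    "fan_rpm_min": 1500,  # alert below this fan speed
    "resp_ms": 500,       # alert above this app response time
}

def check(server, reading):
    """Compare one server's readings against the thresholds and
    return human-readable alerts for any breach."""
    alerts = []
    if reading["cpu_temp_c"] > THRESHOLDS["cpu_temp_c"]:
        alerts.append(f"{server}: CPU temperature high")
    if reading["chassis_temp_c"] > THRESHOLDS["chassis_temp_c"]:
        alerts.append(f"{server}: chassis temperature high")
    if reading["fan_rpm"] < THRESHOLDS["fan_rpm_min"]:
        alerts.append(f"{server}: fan failure suspected")
    if reading["resp_ms"] > THRESHOLDS["resp_ms"]:
        alerts.append(f"{server}: application response degraded")
    return alerts

alerts = check("app01", {"cpu_temp_c": 92, "chassis_temp_c": 40,
                         "fan_rpm": 900, "resp_ms": 610})
print(alerts)
```

Notice how the cooling-failure scenario above would surface here: fan speed drops, CPU temperature climbs, and response time degrades, all flagged before users start calling the help desk.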
V&D: What IT and technological trends are you seeing now and what are your predictions for 2021?
Paul: We anticipate an explosion of data with the increased adoption of the Internet of Things (IoT). Organizations are already struggling with massive amounts of data; with the average company managing nearly 10 petabytes of information, we will now see an escalation in device-generated data.
The talent shortfall of skilled data center professionals continues; organizations will have to look for available alternatives to run their data centers.
NFV or Network Function Virtualization will advance as will SD-WAN. Though organizations lack the business models to support edge computing, slowing this technology’s growth, they will prepare their infrastructure for increased automation.
Artificial Intelligence will continue to make the real news. As enterprises and data centers struggle with huge amounts of data and realize the difficulty of managing these data sets in the cloud, they will see the value of high-density, AI-ready on-site installations and start investing more. Simple ML algorithms will also evolve toward powerful deep learning solutions.
(Anusha Ashwin: x-anushaa@cybermedia.co.in)