Friday, 28 July 2017

Let's look a little closer at hyper convergence technology

Architecture without Hyper Convergence
(Decoupling storage performance from capacity and data services)
A new “decoupled” architecture has emerged that addresses the issues discussed in the previous post. Like hyper convergence, it puts storage performance in the server, using high-speed server media such as flash (and RAM). But unlike hyper convergence, it leaves capacity and data services in shared storage arrays.

There are several benefits that come with separating storage performance from capacity:
  • Fast VM performance, with storage intelligence provided by CPU-centric or all-flash shared storage (e.g. UNITY All Flash Array, Nimble Hybrid Array, HP 3PAR All Flash Array using eMLC SSD drives only)
  • No vendor lock-in, as decoupled architectures leverage any third-party server and storage hardware
  • Cost-effective scale-out. Additional storage performance can be added simply by adding more server media. Capacity is handled separately, eliminating expensive over-provisioning.
  • No disruption. Decoupled software is installed inside the hypervisor with no changes to existing VMs, servers or storage.
  • Easy technology adoption. With complete hardware flexibility, you can ride the server technology curve and leverage the latest media technology for fast VM performance (e.g. SSD, PCIe, NVMe, DRAM, etc.).

Once in place, a decoupled storage architecture becomes a strategic platform to better manage future growth. Because performance and capacity are isolated from one another in this structure, they can be tuned independently to precisely meet user requirements. Going back to the cluster example from the previous post, performance and capacity can now be added separately, as needed, to reach the desired service level without over-provisioning.
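As a rough illustration of that independent sizing, here is a minimal sketch. The per-node and per-shelf figures are assumptions for illustration only, not vendor specifications: the performance tier is sized by adding server-side flash nodes until a target IOPS is met, while capacity is sized separately by adding array shelves.

    import math

    # Assumed figures for illustration only -- not vendor specifications.
    IOPS_PER_CACHE_NODE = 100_000   # server-side flash acceleration per host
    TB_PER_ARRAY_SHELF = 50         # usable capacity per shared-array shelf

    def size_decoupled(target_iops, target_tb):
        """Size the performance tier and the capacity tier independently."""
        cache_nodes = math.ceil(target_iops / IOPS_PER_CACHE_NODE)
        shelves = math.ceil(target_tb / TB_PER_ARRAY_SHELF)
        return cache_nodes, shelves

    # A performance-heavy, capacity-light service (e.g. a virtualized database):
    nodes, shelves = size_decoupled(target_iops=400_000, target_tb=60)
    print(f"{nodes} cache nodes for performance, {shelves} shelves for capacity")
    # -> 4 cache nodes and 2 shelves: each dimension grows only as far as needed.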

IT operators want the competitive edge that comes from adopting new technology, while ceaselessly looking to mitigate risk. Often, one has to be prioritized above the other. In the case of hyper convergence, pushing the innovation envelope involves compromising flexibility and accepting institutional changes to fundamental operating procedures in the data center.

Decoupled storage architectures, on the other hand, offer the rare opportunity to take advantage of two major industry trends -- data locality and faster storage media -- to speed virtualized applications in a completely non-intrusive manner. In essence, all the performance benefits of hyper convergence without any of the disruption.

For More Queries Contact:
Prasad Pimple - Head of Department, Enterprise Solution Group.
Email: prasad@netlabindia.com 
Network Techlab (I) Pvt. Ltd.

www.netlabindia.com
 
Disclaimer: All content provided on this blog is for informational purposes only. The owner of this blog makes no representations as to the accuracy or completeness of any information provided in this blog.

Wednesday, 26 July 2017

Is the spotlight now on pioneering hyper convergence technology?


-By Mr. Prasad Pimple, Head of Department. 


The new wave of hyper convergence improves Virtual Machine (VM) performance by using server flash for key storage functions, but it has its drawbacks; separating storage performance from capacity overcomes these issues. Application and database workloads are growing, and more virtual machines in the data center put pressure on back-end shared storage, which creates performance bottlenecks. To handle this IOPS and latency pressure, enterprise solid-state drives are the right answer: they provide additional acceleration and improve application performance by cutting access latency by roughly 10x or more. Mission-critical workloads (e.g. SAP, OLTP databases, Oracle) may demand sub-millisecond latency for data access along with high IOPS. But another performance challenge also exists: network latency. Every transaction going to and from a VM must traverse various checkpoints, including a host bus adapter (HBA) on the server, the LAN, storage controllers, and the storage fabric. To address this, many companies are placing active data on the host instead of on back-end storage to shorten the distance (and time) for each read/write operation.
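As a rough illustration of why shortening that path matters, here is a minimal sketch that sums assumed per-hop latencies for a read served from back-end shared storage versus one served from host-side flash. The figures are illustrative assumptions, not measurements:

    # Assumed per-hop latencies in microseconds -- illustrative, not measured.
    SHARED_STORAGE_PATH = {
        "hypervisor/HBA": 50,
        "network (LAN / storage fabric)": 200,
        "storage controller": 300,
        "back-end media": 500,
    }
    HOST_FLASH_PATH = {
        "hypervisor": 50,
        "local flash device": 100,
    }

    def total_latency_us(path):
        """Sum the per-hop latency for one read operation."""
        return sum(path.values())

    print("Shared-storage read:", total_latency_us(SHARED_STORAGE_PATH), "us")
    print("Host-flash read:    ", total_latency_us(HOST_FLASH_PATH), "us")
    # The host-side path skips the network and controller hops entirely,
    # which is the effect of "placing active data on the host".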
Moving data closer to VMs at the server tier reduces latency. Hyperconvergence puts solid-state storage inside servers. In this respect, it brings incremental performance gains to several applications, like VDI. But architecturally, it introduces various drawbacks, particularly around flexibility, cost, and scale. Perhaps most significantly, it causes substantial disruption to the data center. 
Let's look a little closer at these hyper convergence challenges and how to overcome them.

Hyper Convergence Hangover
As explained, hyper convergence improves VM performance by leveraging server flash for key storage I/O functions. But combining the functions conventionally provided by two discrete systems -- servers and storage -- requires a complete overhaul of the IT environment currently in place. It creates new business processes (e.g. new vendor relationships, deployment models, upgrade cycles, etc.) and introduces new products and technology to the data center, which creates disruption for any non-greenfield deployment. For example, the storage administrator may need to re-implement data services such as snapshots, cloning, and replication, restructure processes for audit/compliance, and undergo training to become familiar with a new user interface and/or tool.
Another major challenge with hyper convergence is scaling. The de facto way to scale a hyper converged environment is simply to add another appliance, which leaves the customer with little choice. This restricts the administrator's ability to allocate resources precisely to meet the desired level of performance without also adding capacity.
This might work for some applications where performance and capacity typically go hand in hand, but it's an inefficient way to support other applications, like virtualized databases, where that is not the case. For instance, consider a service supported by a four-node cluster of hyper converged systems. In order to reach the desired performance threshold, an additional appliance must be added. While this fifth box delivers the desired performance outcome, it forces the end user to also buy unneeded capacity.
This over-provisioning is unfortunate for several reasons: it is an unnecessary hardware investment that can require superfluous software licenses, consume valuable data center real estate, and increase environmental (i.e. power and cooling) load.
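A back-of-the-envelope sketch makes the cost of this coupling visible. The per-appliance figures below are assumptions for illustration, not the specifications of any particular hyper converged product:

    import math

    # Assumed per-appliance specifications (illustrative only).
    IOPS_PER_APPLIANCE = 80_000
    TB_PER_APPLIANCE = 40

    def scale_hyperconverged(target_iops, target_tb):
        """In the coupled model, appliances are added until BOTH targets are met."""
        appliances = max(math.ceil(target_iops / IOPS_PER_APPLIANCE),
                         math.ceil(target_tb / TB_PER_APPLIANCE))
        surplus_tb = appliances * TB_PER_APPLIANCE - target_tb
        return appliances, surplus_tb

    # A performance-bound service: it needs 400k IOPS but only 60 TB of capacity.
    appliances, surplus_tb = scale_hyperconverged(target_iops=400_000, target_tb=60)
    print(f"Appliances required: {appliances}")                 # 5 -- driven by IOPS alone
    print(f"Capacity bought but not needed: {surplus_tb} TB")   # 140 TB of surplus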
Finally, hyper converged systems restrict choice. They are typically delivered by a vendor who requires the use of specific hardware (and accompanying software for data services). Or they are packaged to adhere to precisely defined specifications that preclude customization. In both scenarios, deployment options are limited. Organizations with established dual-vendor sourcing strategies or architects desiring a more flexible tool to design their infrastructure will need to make significant concessions to adopt this rigid model.


More to follow on the same topic.
Stay Tuned.

For more information regarding Hyper Convergence or VM,
Visit our website at www.netlabindia.com
Or Contact Mr. Prasad Pimple at prasad@netlabindia.com

Disclaimer: All content provided on this blog is for informational purposes only. The owner of this blog makes no representations as to the accuracy or completeness of any information provided in this blog. 

Wednesday, 19 July 2017

How to build a remarkable & fully functional Data Center?

Before we look at the basic layout of the components in a Data Center, we should first go through downtime and its consequences:
 
1- Google's 5 minute downtime in 2013 was reported to have cost over 500,000 USD and caused a 40% network traffic drop.
2- Amazon lost USD 1,104.00 per second when it went down due to an unusually strong thunderstorm in Northern Virginia and generators failed to operate properly.
3- Amadeus flight reservation system downtime caused over 400 flight delays in Australia.
4- "hosting.com" downtime affected 1,100 customers because of a human error during preventive maintenance of the UPS systems.

 
Keeping all these issues in mind, we should have a fully functional Data Center. The key components of a Data Center are:

1. Location of Data Center:
The choice of location for the Data Center should take into account the region, consistency with the city's zoning code, land size, easy access for delivery of equipment, elevated ground not prone to flooding, and the existence of basic infrastructure: sanitation, water, telephone and electricity.

Criteria For Site Selection of Data Center:
• Proximity to points of presence on optical fiber networks, enabling connection to two different trunks.
• Availability of power, with the possibility of obtaining two separate utility feeds.
• Scalability, allowing the building area to be increased over time.

2. Architecture of Data Center:
A Data Center is usually divided into three physical security zones, in increasing order of access restriction:
Zone I: Public areas including the Lobby, the area for visitors and administrative areas.
Zone II: Areas of Data Center Operation.
Zone III: Equipment rooms, the heart of the Data Center, housing the servers, the cable shafts, power distribution units (PDUs), batteries and air-conditioning units.

3. Construction of Data Center:
The construction should provide a solid structure and secure facilities that complement and protect the equipment and information residing in the Data Center.

Electricity: The electrical segment consists of the Uninterruptible Power Supply (UPS) system, the emergency power system and the power distribution units (PDUs). The UPS provides power for all data center equipment, including safety equipment such as fire detection and alarm systems. It is made up of UPS units composed of batteries, rectifiers and inverters. These UPSs are redundant and connected in parallel to ensure a continuous supply of power even if a power transformer fails. The battery banks are sized to feed the load for a period of 15 minutes, which is sufficient time to start and connect the diesel generators in the event of a utility power failure.
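As a rough illustration of that 15-minute autonomy rule, here is a minimal sizing sketch. The load and efficiency figures are assumptions for illustration, not design values; real sizing must also account for battery chemistry, ageing and temperature:

    # Assumed figures for illustration only.
    IT_LOAD_KW = 200         # critical load carried by the UPS
    UPS_EFFICIENCY = 0.94    # rectifier/inverter losses
    BRIDGE_TIME_MIN = 15     # autonomy required until the generators take over

    def battery_energy_kwh(load_kw, efficiency, minutes):
        """Usable energy the battery bank must deliver to bridge the generator start."""
        return load_kw / efficiency * (minutes / 60.0)

    required = battery_energy_kwh(IT_LOAD_KW, UPS_EFFICIENCY, BRIDGE_TIME_MIN)
    print(f"Battery bank must deliver about {required:.1f} kWh")
    # ~53.2 kWh for a 200 kW load and 15 minutes of autonomy.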

The emergency power system consists of a diesel generator set that starts up and connects to the electrical system of the Data Center automatically.

The generators are rated to carry all the loads necessary to operate the Data Center and its equipment during a utility power failure. The goal is to support 24x7 operation, taking into account preventive maintenance, the addition of new components and the restoration of service after unplanned outages.

The power distribution units (PDUs) are responsible for conditioning the power that feeds the various devices in the Data Center.

4. Air Conditioning in Data Center:
The air-conditioning segment maintains controlled temperature and humidity within the Data Center. It includes the cooling units and the air-handling and distribution system, and it should be connected to the emergency power generators.

The Cooling System provides heating, cooling, humidification and de-humidification of the building.

The air treatment system must be separated by type of area: office areas, the Data Center equipment room, and the air-conditioning and electrical rooms. The separation is due to differences in the sensible heat, latent heat, temperature and humidity conditions of each area.
The air distribution system for the Data Center equipment room supplies conditioned air through a plenum beneath the raised floor. The raised floor should have a minimum height of 60 cm, adjusted upward depending on the amount of conduit, tubing, cable trays, etc., so that air can circulate throughout the Data Center room. The goal is 24x7 operation.
Adequate cooling is essential for maintaining the performance and safe operation of data center services.

5. Fire Protection System in Data Center:
The Data Center houses essential electronics such as servers, other computers and telecommunications equipment. In addition to meeting the standards of the local fire department, the fire protection system should be designed to avoid damage to this equipment in case of fire.

One of the best solutions for protecting equipment rooms is a combination of a pre-action sprinkler system (dry pipe) above the raised floor and an FM-200 gas suppression system below the raised floor.
The gas suppression system is connected to a high-sensitivity detection system and is the first to be discharged. The gas spreads throughout the area and leaves no residue, so it neither damages sensitive equipment nor requires a costly clean-up.

The pre-action system, when triggered, discharges water only through the sprinkler heads that have been activated by the heat of the fire.

6. Supervision and Control System in Data Center:
The control and supervision system continuously monitors the various segments of the Data Center, tracking items such as:
• Control of loading and parallelism of the generator sets
• Supervision and control of medium-voltage panels
• Integration with the generator system
• Integration with the rectifier system
• Supervision and control of low-voltage panels
The system consists of up-to-date computers capable of withstanding continuous use, running appropriate supervision and control software. These computers are redundant to one another, giving the system high flexibility and performance.

The Data Center also has a closed-circuit television system and access control that governs entry to and exit from the various rooms and physical security zones of the Data Center.

With the data center playing an increasingly central role in business operations, organizations must be able to maintain high standards of integrity and functionality. However, few companies have their own IT professionals to design, plan and maintain a data center. Given this high demand for data center services, especially for conventional data centers, NTIPL provides a skilled expert team to handle our clients' needs for successful data center operations.

For more information visit www.netlabindia.com
You can also tweet to us : @ntipl_netlab

Do leave a like and comment if you have anything to share with us.
 

Monday, 17 July 2017

HOW HAVE DATA CENTERS BECOME THE BRAIN OF THE COMPANY?

Data Centers are a form of value-added service that offers resources for processing and storing data on a large scale for organizations of any size. They give professionals access to a structure of great power, flexibility and high security, fully qualified in terms of hardware and software to process and store information.

There are two main categories of data centers:
1. Data Center at Own Site
   a) Conventional Data Center
   b) Smart Data Center

2. Data Center at Co-location.

A Data Center at one's own site is owned and operated by private corporations, institutions or government agencies, with the primary purpose of storing data resulting from processing operations, procedures and Internet-related applications.

A co-location Data Center is usually owned and operated by a telecommunications service provider, such as a commercial telephony operator or another type of telecom provider. Its main objective is to provide various types of connectivity services, web hosting and equipment space to its users. Services can range from long-distance communications to Internet access, content storage, etc. The client rents rack space and the power and telecommunications infrastructure, while the servers, systems, management, monitoring and technical support are provided by the client. This relationship can be flexible, and it is customary to establish a contract whose terms and conditions clearly define the scope of services on each side.
One aspect that must be considered when contracting a co-location service is the type of access the user will have to the server at the provider's facility, since it defines how the server will be reached when necessary.

If local co-location access is contracted, access is granted on site by employees of the data center service provider. If remote co-location access is chosen, it is provided through remote control software selected by the user; in this case, the application is installed on the remote-access server by the service provider's staff.

Eventually one or more tools may need maintenance, or new applications may need to be installed. In such cases, the user must ask the service provider to arrange whatever is necessary for the operation. When hosting the server, the user signs a declaration attesting to the legality of all software installed on the server.

There are a lot of things to consider when designing a data center, from external space and location to the interior design and the efficiency of the building. Having designed and built hundreds of data centers and computer rooms, the NTIPL team is well positioned to give helpful advice to any organization planning a new data center in the near future. Here are their top considerations for designing a cost-effective and energy-efficient data facility, starting with the risk factors it must withstand.


Risk Factors of Data Centers:
A) Natural Origin
   1- Heat
   2- Cold
   3- Flooding
   4- Earthquake
   5- Lightning
   6- Air Pollution Contamination

B) Human Origin
   1- Equipment Failure
   2- Vandalism
   3- Unintentional Human Errors
   4- Sabotage
   5- Terrorism

C) Data Network Origin
   1- Network Saturation
   2- Viruses
   3- Hackers

WANT TO KNOW MORE ABOUT DATA CENTER SERVICES?
VISIT www.netlabindia.com

MORE TO FOLLOW......
STAY TUNED.....

Friday, 14 July 2017

True Next-Gen Storage: Reduce data footprint and amplify performance

This new storage technology will change IT infrastructure architecture and reduce the data footprint in your Data Center.
I have worked on many IT projects involving storage consolidation as well as server and desktop virtualization. As an architect, my personal experience and observation is that IT is under pressure to optimize the storage architecture and bring down costs at the same time.

Why only storage? Because it is the most critical as well as the most expensive component in the IT infrastructure and, I must say, the heart of every organization. I have always tried to provide best-of-breed solutions and technology so that business-critical applications function smoothly. In short, I aim at providing a WOW experience!!!
I believe every organization's IT platform must be designed in such a way that users and partners experience magical, jet-speed performance. I always evaluate new technologies, understand their architecture in depth and assess the cost-benefit ratio. One of my recent observations is that all storage vendors are focusing on re-architecting their designs to deliver performance with the help of the CPU and memory controller rather than relying on age-old disk-based performance. They are also modifying their storage kernels to utilize CPU and memory to deliver maximum throughput.
Some of the well-known solutions that have adopted this architecture are EMC XtremIO, EMC VNX2, HP 3PAR, Pure Storage, Nimble Storage, etc. While some solutions demand an all-flash approach and some a hybrid one, the interesting fact is that we all need to accept the CPU- and memory-centric architecture, because these are the only components that have witnessed dynamic growth and change. According to IDG, Intel has a history of launching a processor faster than its predecessor every 16 months.

My recommendation is that, while designing data or virtualization solutions for customers, we should start taking these new technology advantages into consideration. They benefit the business by reducing footprint and optimizing performance at a reasonable cost, which in turn helps provide customers a WOW experience!!!
--- Written By Mr. Prasad Pimple
HOD ESG

Wednesday, 12 July 2017

Overcome challenges of modern storage

Organizational data growth is more than yesterday's traditional storage infrastructures can handle. End users expect speedy access to their data -- anywhere, any time, using any device. Technologies such as virtualization put heavy demands on storage in terms of performance, latency, capacity, and data protection.

Traditional storage technologies are unable to cope effectively with these demands, making
primary storage and backup and recovery processes more costly and challenging to manage.
Every customer is dealing with one or more of the following storage challenges:
 
Challenge: Getting both performance and capacity.
The limitations of traditional storage force a choice between performance and capacity. High-performance enterprise disks are expensive. High-capacity drives are less costly, but they are not fast enough by themselves to support most primary storage applications. Hybrid solutions that use a tiering model to blend tiers of storage are unable to move the right data to the right place at the right time, and cannot respond effectively to performance peaks.
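To make that limitation concrete, here is a minimal sketch of the kind of policy such hybrid arrays typically rely on. It uses an assumed, simplified access-count promotion rule, not any vendor's actual algorithm: because promotion only happens after enough accesses accumulate, a sudden performance peak is served from the slow tier before the policy can react.

    from collections import defaultdict

    class NaiveTieringPolicy:
        """Simplified hot/cold tiering: a block is promoted to the SSD tier only
        after it has been read PROMOTE_AFTER times. Illustrative sketch only."""
        PROMOTE_AFTER = 3

        def __init__(self):
            self.access_count = defaultdict(int)
            self.ssd_tier = set()   # block ids currently held on flash

        def read(self, block_id):
            self.access_count[block_id] += 1
            if block_id in self.ssd_tier:
                return "served from SSD tier (fast)"
            if self.access_count[block_id] >= self.PROMOTE_AFTER:
                self.ssd_tier.add(block_id)   # promoted only after the burst began
            return "served from HDD tier (slow)"

    policy = NaiveTieringPolicy()
    # A sudden burst of reads against a previously cold block:
    for i in range(5):
        print(f"read {i + 1}: {policy.read(block_id=42)}")
    # The first reads of the burst land on the slow tier; the policy reacts too late.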

Challenge: Improving business continuity and data availability
We live in an always-on world. Everyone expects data to be available any time, anywhere, on any
device. Gone are the days when a company could shut down systems overnight or over the weekend
for system upgrades or to perform a backup or restore. Despite advances like disk backup and
deduplication, traditional copy-based backup solutions consume a lot of computing resources and
bandwidth. Meeting backup windows continues to be a struggle. Data recovery is slow and painful.
Deduplication alone does not fully address these challenges. Traditional replication-based disaster
recovery (DR) is bandwidth-intensive and expensive. As a result, it is used only for the most critical applications.

Challenge: Simplifying primary, backup, and disaster recovery storage management
Deploying, managing, upgrading, and supporting traditional storage can be an all-consuming effort
that requires specialized training and expertise. In virtualized environments, storage management is
even more complex. At the heart of these challenges is the simple truth that traditional storage
architectures are unable to keep up with today’s storage demands. Modernizing these architectures
to provide the performance, capacity, data protection, and manageability you need demands financial
resources, time, and staff you may not have.

Leveraging Innovation
Every so often a technology comes along that revolutionizes an entire industry. Recent advances in
flash memory promise to transform enterprise storage. While there have been substantial improvements
in disk density, CPU performance, and network bandwidth over the past decade, disk drive access
time—a primary measure of performance—has improved the least. Poor gains in disk drive access
time mean that high RPM drives are less able to keep up with demanding applications.
The standard approach for delivering input/output operations per second (IOPS) has been to deploy as many high-RPM drives as necessary to achieve the required performance. Increasingly, flash solid-state disk (SSD) drives are being used to deliver 5-10x better I/O performance than the fastest disk drive.
Solid-state storage is becoming mainstream. However, adding SSD storage to existing arrays isn't simply a matter of replacing traditional hard disks with solid-state drives. Strategies that optimize performance and reliability for HDDs are wasteful when applied to SSDs, whose random IOPS far exceed what rotating media can deliver.
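A rough drive-count comparison illustrates why this matters. The per-device figures below are assumed, ballpark values for illustration (using the 5-10x factor mentioned above), not benchmark results:

    import math

    # Assumed, ballpark figures for illustration only.
    IOPS_PER_15K_HDD = 180                        # random IOPS of a fast 15K RPM drive
    SSD_SPEEDUP = 10                              # upper end of the "5-10x" figure above
    IOPS_PER_SSD = IOPS_PER_15K_HDD * SSD_SPEEDUP

    def drives_needed(target_iops, iops_per_drive):
        """How many devices must be striped together to reach a target IOPS."""
        return math.ceil(target_iops / iops_per_drive)

    target = 20_000  # IOPS demanded by a busy virtualized workload (assumed)
    print("15K RPM HDDs needed:", drives_needed(target, IOPS_PER_15K_HDD))  # 112
    print("SSDs needed:        ", drives_needed(target, IOPS_PER_SSD))      # 12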

A Fresh Approach to Storage
With very small IT teams, limited budgets, complex applications, and rapidly growing data, today’s IT
managers are having serious difficulty just keeping pace with business demands. Unfortunately, current
traditional storage solutions based on existing architectures are not adequately addressing these challenges.
It’s time for a fresh approach.
We need a new approach to storage that converges storage, backup, and disaster recovery into a single solution. The storage architecture must combine flash memory with low-cost, high-density drives without compromising the performance and latency of the application. Designed this way, storage reduces backup and restore times from days to seconds, and enables enterprises of any size to finally implement an affordable disaster recovery solution.

Tuesday, 11 July 2017

The Route to a SMART DATA CENTER

All data centers, regardless of their size, have operational and business objectives. Until now, balancing data center best practices for capacity, space utilization, availability and efficiency has been difficult without making sacrifices. More than ever, companies are looking for efficient, flexible and scalable data center solutions that will reduce complexity and cost, while enabling rapid application growth and new technologies. Thinking of your Data Center Infrastructure as a set of integrated parts that must function seamlessly together is now the expectation.
 Your data is a critical corporate asset:

As data has become more important and technology more complex, an ever-increasing number of businesses are migrating their information systems to independent data centers and to companies that offer superior technical knowledge, experience and expertise. Regardless of whether your business operates out of a single office or has multiple branches around the world, your data must be safe, secure and accessible to all your employees every hour of every day of the year. Each Smart Solutions offering addresses data center management needs with rapidly deployable solutions that cost-effectively add data center capacity, improve IT control and increase efficiency, balancing the most common data center objectives. These solutions contain the industry's leading power, cooling and management systems to achieve efficiency without compromise in IT environments of all sizes.

What IT and Data Center Managers Want:

Faced with challenges, IT and data center managers increasingly want infrastructure strategies that are alternatives to conventional approaches. These new strategies must include solutions that:
1- Improve energy efficiency, space utilization and IT productivity
2- Offer measurable savings in CAPEX and OPEX
3- Offer location flexibility and compatibility with existing infrastructure
4- Improve the ability to manage and control the IT environment
5- Feature interoperability for fast and easy design and implementation
6- Support greater capacity by improving management of density and availability

These are the advantages of the Smart Solutions family of infrastructure. These solutions provide cost-effective power, precision cooling and management infrastructure to help you achieve your IT objectives regardless of data center size and complexity.

Each Smart Solution offering integrates industry best practices in data center design and operations, including:

1- Hot air and cold air separation
2- Cold air containment
3- High availability and high efficiency UPS
4- High-efficiency precision cooling
5- Space-saving, small footprint
6- Modularity for flexibility and easier expansion
7- Integrated monitoring and control to optimize efficiency in planning and management
8- Unique local service for design audits, configuration support, installation support, maintenance and repair.

Got an IT related Issue? We have the professional Solution!!
