temperature@lert blog

  • Overheating: The Concern Over Stability in Data Centers

One of the major concerns for organizations operating globally is business continuity.

Because firms rely on their information systems to operate, an unexpected system shutdown inevitably impairs or even halts company operations. It is crucial for firms to provide a stable and reliable infrastructure for IT operations and to reduce the possibility of disruptions. Besides emergency backup power generation, a data center also needs to closely monitor its operation rooms in order to ensure the continuous functionality of the hosted computing environment.

    The Uptime Institute in Santa Fe, New Mexico, defined four levels of availability as shown below:


[Image] Uptime Institute tier levels

     
The allowable downtime for each tier over one year (525,600 minutes) is listed below:


    Tier 1 (99.671%) status would allow 1729.224 minutes
    Tier 2 (99.741%) status would allow 1361.304 minutes
    Tier 3 (99.982%) status would allow 94.608 minutes
    Tier 4 (99.995%) status would allow 26.28 minutes 
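For readers who want to check the arithmetic, here is a minimal sketch (in Python) of how these downtime allowances follow from the availability percentages; it simply multiplies the minutes in a year by the unavailable fraction.

```python
# Sketch: derive the allowed annual downtime for each tier from its
# availability percentage (525,600 minutes in a non-leap year).

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

tiers = {
    "Tier 1": 99.671,
    "Tier 2": 99.741,
    "Tier 3": 99.982,
    "Tier 4": 99.995,
}

for tier, availability in tiers.items():
    downtime = MINUTES_PER_YEAR * (1 - availability / 100)
    print(f"{tier} ({availability}%): {downtime:.3f} minutes of downtime allowed per year")
```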



High temperature is one of the major causes of severe malfunction or damage in data centers. Many data centers, including some run by leading firms, have reported losses due to overheating. On March 14th, 2013, Microsoft’s outlook.com service endured a 16-hour outage caused by “a rapid and substantial temperature spike in the data center.” Wikipedia experienced similar trouble on March 24th, 2010: “Due to an overheating problem in our European data center, many of our servers turned off to protect themselves”, as reported by Wikimedia on its tech blog (http://blog.wikimedia.org/2010/03/24/global-outage-cooling-failure-and-dns/). Earlier that year, hot air in the operation room knocked Spotify offline when one of its large air conditioners failed to start properly.


Microsoft’s lengthy 2013 downtime was an unexpected accident triggered by routine firmware updates. It caused a lot of trouble for customers, who could not log into their Outlook and Hotmail accounts for the better part of a day.


On the other hand, according to Domas Mituzas, a performance engineer at Wikipedia, the cost of downtime for the user-managed encyclopedia is so minimal that “the down time used to be [their] most profitable product”, because Wikipedia displays donation-seeking information for additional servers when it is offline.


The losses suffered from such shutdowns vary from firm to firm, and it is necessary for all parties to put safeguard processes and close monitoring in place to minimize the potential damage. Next week we will briefly discuss how to protect your data center from changing environmental conditions.





References:


Tom Warren, “Microsoft blames overheating datacenter for 16-hour Outlook outage”, March 14, 2013.
http://www.theverge.com/2013/3/14/4102720/outlook-outage-overheating-datacenter

Rich Miller, “Wikipedia’s Data Center Overheats”, March 25, 2010.
http://www.datacenterknowledge.com/archives/2010/03/25/downtime-for-wikipedia-as-data-center-overheats/

Nicole Kobie, “Overheating London data centre takes Spotify offline”, February 22, 2010.
http://www.itpro.co.uk/620752/overheating-london-data-centre-takes-spotify-offline


    Written by:

    Ivory Wu, Sharp Semantic Scribe

    Traveling from Beijing to Massachusetts, Ivory recently graduated with a BA from Wellesley College in Sociology and Economics. Scholastic Ivory has also studied at NYU Stern School of Business as well as MIT. She joins Temperature@lert as the Sharp Semantic Scribe, where she creates weekly blog posts and assists with marketing team projects. When Ivory is not working on her posts and her studies, she enjoys cooking and eating sweets, traveling and couch surfing (12 countries and counting), and fencing (She was the Women's Foil Champion in Beijing at 15!). For this active blogger, Ivory's favorite temperature is 72°F because it's the perfect temperature for outdoor jogging.



  • Data Center Monitoring: Raised Temperatures, Riskier Management


In 2008, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) published new environmental guidelines for datacom equipment, raising the recommended high-end temperature from 77°F to 80.6°F.

The guideline chart below shows the changes in more detail:

[Image] Data center guideline chart

According to the 2008 guideline, the recommended operating envelope alone cannot ensure optimum energy efficiency. There are varying degrees of energy efficiency within the recommended zone, depending on the outdoor temperature and the cooling system design. Thus, the guideline suggests, “it is incumbent upon each data center operator to review and determine, with appropriate engineering expertise, the ideal point for their system”.

Patrick Thibodeau, a reporter at Computerworld, interviewed Roger Schmidt, IBM’s chief engineer for data center energy efficiency, about how the new temperature parameters will influence energy savings and data center cooling. When asked “how much heat can servers handle before they run into trouble”, Schmidt replied:

    “The previous guidelines for inlet conditions into server and storage racks was recommended at 68 degrees Fahrenheit to 77 Fahrenheit. This is where the IT industry feels that if you run at those conditions you will have reliable equipment for long periods of time. There is an allowable limit that is much bigger, from 59 degrees Fahrenheit to 89 degrees. That means that IT equipment will operate in that range, but if you run at the extremes of that range for long periods of time you may have some fails. We changed the recommended level -- the allowable levels remained the same -- to 64F to 81F. That means at the inlet of your server rack you can go to 81 degrees -- that's pretty warm. [The standard also sets recommendation on humidity levels as well.]”

He also revealed that 81°F is the point where the power increase is minimal, because “raising it higher than that [the recommended limit] may end up diminishing returns for saving power at the whole data center level.” In fact, according to the GSA, each degree of increase in server inlet temperature can save about 4% to 5% in energy costs.
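As a rough illustration of that GSA figure, the sketch below compounds an assumed 4.5% saving per degree Fahrenheit of setpoint increase; the starting and target temperatures, and the choice to compound rather than add the savings, are assumptions for illustration only.

```python
# Rough illustration of the GSA estimate quoted above: roughly 4-5% energy
# cost savings per 1°F increase in server inlet temperature. Compounding the
# savings per degree is an assumption of this sketch; treat the output as a
# ballpark, not a prediction.

def estimated_savings(current_f, target_f, savings_per_degree=0.045):
    """Fraction of cooling-related energy cost saved, compounded per °F."""
    degrees = target_f - current_f
    remaining = (1 - savings_per_degree) ** degrees
    return 1 - remaining

print(f"72°F -> 77°F: ~{estimated_savings(72, 77):.0%} savings")
print(f"72°F -> 81°F: ~{estimated_savings(72, 81):.0%} savings")
```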

Too much humidity results in condensation, which leads to electrical shorts. According to the GSA, “based on extensive reliability testing of Printed Circuit Board (PCB) laminate materials, it has been shown that conductive anodic filament (CAF) growth is strongly related to relative humidity. As humidity increases, time to failure rapidly decreases. Extended periods of relative humidity exceeding 60% can result in failures, especially given the reduced conductor to conductor spacing common in many designs today.” The upper humidity limit also matters for protecting disk and tape media from corrosion: excessive humidity forms monolayers of water on device surfaces, providing an electrolyte for corrosion. On the other hand, too little humidity leaves the room prone to electrostatic charge.

After the new standards were published, it took time for data centers to update their operating rooms. According to Schmidt, IBM began using the new guidelines internally in 2008, and other data centers would likely step temperatures up about two degrees at a time. Running near the new ASHRAE temperature limits means a higher-risk environment for staff to manage and requires more operational expertise. According to 2013 Uptime Institute survey data, nearly half of all data centers reported that their systems ran at 71°F to 75°F, while 37% reported temperatures of 65°F to 70°F, the next largest segment. The trend toward warmer data centers is further revealed by the fact that 7% of data centers operated at 75°F or above, compared with 3% the year before.



    References:

ASHRAE, “2008 ASHRAE Environmental Guidelines for Datacom Equipment”.
http://tc99.ashraetcs.org/documents/ASHRAE_Extended_Environmental_Envelope_Final_Aug_1_2008.pdf

Patrick Thibodeau, “It's getting warmer in some data centers”, July 15, 2013.
http://www.computerworld.com/s/article/9240803/It_s_getting_warmer_in_some_data_centers

Patrick Thibodeau, “Q&A: The man who helped raise server operating temperatures”, July 6, 2009.
http://www.computerworld.com/s/article/9135139/Q_A_The_man_who_helped_raise_server_operating_temperatures_



    Written by:

    Ivory Wu, Sharp Semantic Scribe



  • Dawn of Solar Data Centers?


Projects by major players can point to the readiness, costs, and benefits of solar power for data centers.


    Water, water everywhere,

    And all the boards did shrink.

    Water, water everywhere,

Nor any drop to drink.

(The Rime of the Ancient Mariner, Samuel Taylor Coleridge)


Data center managers must feel a lot like Coleridge’s Ancient Mariner when they look out the window (assuming their offices have any windows). Like the sailors on Coleridge’s journey, data center professionals are surrounded by free power from the wind, sun, water, the earth’s heat, and biofuel, but none of it is usable as it exists to power the insatiable demands of the equipment inside the vessel. Despite this challenge, there have been several interesting projects involving green energy sources. This piece in the data center energy series explores solar photovoltaics to help determine whether the technology can provide cost-effective, reliable power to data centers.


[Image] Left: Engraving by Gustave Doré for an 1876 edition of the poem, "The Albatross," depicting 17 sailors on the deck of a wooden ship facing an albatross. Right: A statue of the Ancient Mariner, with the albatross around his neck, at Watchet, Somerset in south west England, where the poem was written. (Link to Source - Wikipedia)


Solar-powered data centers have been in the news recently, primarily due to projects by Apple and Google. In an effort to build a green data center, Apple’s 500,000 sq. ft. site in Maiden, North Carolina is powered in part by a nearby 20-acre, 20-megawatt (MW) solar array. The site also has a 10-MW fuel cell array that uses “directed biogas” credits as the energy source. (Link to Apple Source) The remainder of the power needed for the site is purchased from the local utility, with Apple buying renewable energy credits to offset the largely coal- and nuclear-generated Duke Energy electricity. Apple sells the power from the fuel cells to the local utility in the form of Renewable Energy Credits used to pay electric utility bills. Apple expects that the combination of solar photovoltaic panels and biogas fuel cells will allow the Maiden data center to use 100% renewable energy or energy credits by the end of the year. Several lesser-known companies have also implemented solar initiatives, but the news is not as widespread.


[Image] Left: Apple Maiden, NC data center site showing the solar array in green (Link to Source - Apple); Right: Aerial photo of the site with the solar array in the foreground (Link to Source - Apple Insider)


It will be instructive to follow reports from Apple to determine the cost-effectiveness of the company’s green approach. That being said, many if not most companies do not have the luxury of being able to build a 20-acre solar farm next to the data center. And most have neither the cash to invest in such projects nor the corporate cachet of Apple to get such projects approved, so initiatives such as Maiden may be few and far between. Still, there’s a lot of desert land ripe for solar farms in the US Southwest. Telecommunication infrastructure may be one limitation, but California buys a lot of its electrical power from neighboring states, so anything is possible.

What about solar power for sites where the data center is built in more developed areas: is there any hope? Colocation provider Lifeline Data Centers announced that its existing 60,000 sq. ft. Indianapolis, Indiana site will be “largely powered by solar energy”. (Link to Source - Data Center Dynamics) Author Mark Monroe’s piece titled Solar Data Center NOT “Largely Solar Powered” drew on his own solar panel installation and took a look at the numbers behind this claim. Lifeline is planning to install a 4-MW utility-grade solar array on the roof and in the campus parking lot by mid-2014. Monroe takes a rough cut at determining how much of the data center’s power needs will be met by the solar array.

Assuming the site’s PUE equals the Uptime Institute’s average of 1.64 and taking into account the photovoltaic array’s operating characteristics (tilt angle, non-tracking), site factors (sun angle, cloud cover), etc., Monroe calculates that the solar installation will supply 4.7% of the site’s total energy and 12% of its overhead energy. At an industry-leading PUE of 1.1, the installation would provide 7% of the total energy and 77% of the overhead energy. Monroe notes that while these numbers are a step in the right direction, Lifeline’s claim of a data center “largely powered by solar energy” is largely not based on the facts. His piece notes that even Apple’s Maiden site, with 20 acres of panels, only generates about 60% of the total energy needed by the site overhead and IT gear. Lifeline would need to add an extra 6 MW of solar capacity and operate at a PUE of 1.2 to reach Net Zero Overhead.
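The relationship between “percent of total energy” and “percent of overhead energy” falls out of the PUE definition, since overhead (non-IT) energy is (PUE - 1)/PUE of the total. The sketch below reproduces Monroe’s two scenarios from that identity; the solar shares of total energy are his estimates, not ours.

```python
# How the "% of total" and "% of overhead" figures relate through PUE.
# Overhead (non-IT) energy as a fraction of total energy is (PUE - 1) / PUE,
# so a solar array supplying a given share of total energy covers a much
# larger share of the overhead. Solar shares below are Monroe's estimates.

def overhead_share_of_total(pue):
    return (pue - 1) / pue

for pue, solar_share_of_total in [(1.64, 0.047), (1.10, 0.07)]:
    overhead = overhead_share_of_total(pue)
    share_of_overhead = solar_share_of_total / overhead
    print(f"PUE {pue}: solar = {solar_share_of_total:.1%} of total "
          f"= {share_of_overhead:.0%} of overhead energy")
```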

I am curious to see hard cost and performance data, along with the financial incentives (tax considerations, power contracts, etc.), from these and other solar photovoltaic projects for data centers, so the industry can review them to determine whether solar is the right approach for its electrical power needs. Although such disclosure is unlikely due to competitive considerations, it would greatly assist the industry in promoting such green initiatives and help take the spotlight off headlines criticizing the “power hungry monster”.

All efforts to improve industry efficiency and reduce energy consumption are steps in the right direction. Companies like Lifeline Data Centers that don’t have the deep pockets of Apple or Google are taking steps toward the goal of Net Zero Overhead. The challenge for data center operators that initiate green energy or efficiency-based projects will be to promote these efforts without making headline-grabbing claims that are not well supported by the data. As Launcelot Gobbo tells Old Gobbo in Shakespeare’s The Merchant of Venice, “at the length truth will out.” Green-powered and energy-independent are claims that need to be examined carefully to maintain industry credibility and goodwill, or “truth will out.”



  • Does Cogeneration Yield a Suitable RoI in Data Centers?


    What does the data say?

This is the second of two pieces on cogeneration, or CHP. The first explored the topic; this one explores the RoI of a technology proven in other industries as applied to data centers.

As the data center industry continues to consolidate and competition becomes more intense, IT professionals understand the pressure on both capital and operating budgets. They are torn by two competing forces: faster and more reliable vs. low cost and now. IT equipment improvements are continuous and the desire to upgrade always calls. Reliability has become the mantra of hosted application and cloud customers, and although electrical grid failures are not counted as “failures against uptime guarantees” for some, businesses affected by outages feel the pain all the same. And if there are solutions, management pressure to implement them quickly and at low cost is always a factor.

Cogeneration is typically neither fast nor cheap, but it does offer an alternate path to reliability and uptime. As with all major investments that require sizable capital and space, the best time to consider cogeneration is during data center construction. That being said, data centers operating today are not going anywhere soon, so retrofit upgrade paths are also a consideration, especially in areas where electric power from the local utility has become less reliable over time. So when should data center professionals consider cogeneration or CHP? Fortunately, studies available on public websites help provide answers.

[Image] University of Syracuse data center exterior; microturbines in utility area (Link to Source)

One such study covers an installation at the University of Syracuse. Opened in 2009, the 12,000 ft² (1,100 m²) data center, with a peak load of 780 kW, employs cogeneration and other green technologies to squeeze every ounce of energy out of the system. (Link to Source) The site’s 12 natural gas fueled microturbines generate electricity. The microturbines’ hot exhaust is piped to the chiller room, where it is used to generate cooling for the servers and both heat and cooling for an adjacent office building. Technologies such as adsorption chillers that turn heat into cooling, reuse of waste heat in nearby buildings, and rear-door server rack cooling that eliminates the need for server fans complete what IBM calls its greenest data center yet.

[Image] Left: Heat exchanger used in winter months to capture waste microturbine heat for use in nearby buildings; Right: IBM “Cool Blue” server rack heat exchangers employ chilled water piped under the floor.

This is certainly an aggressive project, but can the cost be justified with a reasonable return on investment? Fortunately, data has recently been released to quantify the energy conservation benefits. PUE performance measured during 2012 was presented at an October 2013 conference and shows a steady PUE between 1.25 and 1.30 over the period, a value that compares very favorably with the typical data center PUE of 2.0. (The Uptime Institute’s self-reported average PUE is 1.65, with qualifications; a Digital Realty Trust survey of 300 IT professionals at companies with annual revenues of at least $1 billion and 5,000 employees revealed a PUE of 2.9.) (Link to Sources: Uptime Institute, Digital Realty Trust)
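As a reminder of what those figures mean, PUE is total facility energy divided by IT equipment energy. The sketch below compares the overhead power implied by the reported Syracuse PUE with that of a typical 2.0 facility; treating the 780 kW peak load cited above as the IT load, and using 1.27 as a representative value of the reported 1.25-1.30 range, are assumptions made for illustration.

```python
# PUE (Power Usage Effectiveness) = total facility energy / IT equipment energy.
# A minimal comparison of the overhead power implied by the Syracuse site's
# reported PUE versus a typical PUE of 2.0. Treating the cited 780 kW peak
# load as the IT load is an assumption of this sketch.

IT_LOAD_KW = 780  # peak load cited for the Syracuse data center

for label, pue in [("Syracuse (reported)", 1.27), ("Typical data center", 2.0)]:
    total_kw = IT_LOAD_KW * pue
    overhead_kw = total_kw - IT_LOAD_KW
    print(f"{label}: PUE {pue} -> {total_kw:.0f} kW total, {overhead_kw:.0f} kW overhead")
```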

[Image] IBM/SU Green Data Center 2009 goals (Link to Source); 2012 actual performance (Link to Source)

So how can we calculate the actual RoI and compare it to the projected goals? First, the goals stated in the table on the left show savings of $500,000+ per year. Another presentation, by the microturbine supplier, shows a $300,000 per year goal, which is quite a bit different. So how do we know what the savings are? We don’t, since there is no reference site where an identical data center in an identical location operates without the CHP. We could use the 2.0 average PUE and calculate the energy savings, but that’s not a real answer. We also need to take into account that tax incentives and grants, such as the $5 million for the Syracuse University project, need to be reviewed to determine the cost to non-subsidized projects. Hopefully project managers will provide more information to help data center operators better understand the actual savings as the project matures.

CHP for data centers is presented with an array of benefits, including improved reliability through less dependence on grid power, lower power costs, and a reduced carbon footprint. NetApp installed CHP in its Silicon Valley data center to reduce its reliance on grid power because of frequent rolling brownouts and the uncertainties of power market costs. Its experience is less instructive because the site’s use of direct air cooling reduces its need for cooling; as a result, the CHP system is used only when the utility is strained. It is difficult to find quantitative data for modern installations. While the data seems encouraging, actual energy cost savings are not provided. We will watch the progress of this and other projects over the next several months to see if CHP costs yield an acceptable RoI via reduced energy costs. Stay tuned.


  • Does Cogeneration Have a Role in Data Centers?

    Operators have many options to consider.


An earlier piece in this series, titled Data Centers as Utilities, explored the idea that emergency backup power systems in data centers could be used to supply the utility with peak demand power when the grid is running near capacity and the data center’s emergency generators are not needed. But what about the idea that data centers generate their own power to rely less on the grid? There are several approaches, particularly in the green energy space, that will be explored in future pieces. One that is readily available and may make sense for data centers to consider is called cogeneration, or Combined Heat and Power (CHP for short).

CHP is not new; it has been used in more traditional industries for decades, primarily heavy industries with large energy needs, steel and paper mills for example. Cogeneration for data centers has been in the news for quite some time but has had a relatively low adoption rate. After all, data center operators try to put their capital into IT infrastructure; the utility and facility sides are often viewed as necessary added cost. But with reports that grid capacity and reliability may not be able to address the growth or reliability needs of the industry, operators are taking a fresh look at options such as self-generation. Low natural gas prices are also a factor, since operators may be able to secure fuel for their own operations more cheaply than through electric utilities.

As early as 2007, the US Environmental Protection Agency highlighted the potential of cogeneration in the future of data centers in a piece titled The Role of Distributed Generation and Combined Heat and Power (CHP) Systems in Data Centers. (Link to Source) With advances in the technology, changes in energy costs, and greater emphasis on grid capacity and reliability as it pertains to data centers, cogeneration has received a significant boost with sponsorship from companies such as IBM.

[Image] Table from the US-sponsored report showing various technology applications, all under the CHP or cogeneration name. (Link to Source)

There are several approaches to cogeneration, or CHP. The EPA report shows the application of several technologies that fall under the CHP umbrella. Recent installations include five GE cogeneration units powering a Beijing data center. According to one report, “Powered by five of GE’s 3.34-megawatt (MW) cogeneration units, the 16.7-MW combined cooling and heating power plant (CCHP) will offer a total efficiency of up to 85 percent to minimize the data center’s energy costs.” (Link to Source) The project is sponsored by the China National Petroleum Corporation and represents the trend toward distributed energy production in high-usage industries. eBay’s natural gas powered Salt Lake City data center plans to deploy a geothermal heat recovery system to produce electricity from waste heat. (Link to Source)

[Image] Example of a microturbine or fuel cell CHP layout (Link to Source)

    Data from projects at the University of Syracuse and University of Toledo data centers will be examined in a companion piece to demonstrate the potential RoI for CHP.

[Image] University of Toledo natural gas fired microturbine cogeneration plant. (Link to Source)


  • 5 Tips to Increase Efficiency of a Data Center


With energy consumption on the radar of all businesses, take these five tips as stepping stones in your quest to make your own business more energy efficient. They are meant as rudimentary guidelines only; for official tips, strategies, and documents on the subject, refer to Energy Star's homepage for Data Center Efficiency.


1. Virtualization: Though ‘Cloud Computing’ and ‘SaaS’ are all the hype these days, the benefits of virtualization are still relevant to many businesses. The consolidation of independent, standalone servers onto one physical server is a better use of computing resources. Adding a hypervisor allows you to divide the machine into virtual parts that can have separate uses: you can host a database server, web server, and print server on the same box and let the hypervisor manage the different functions. By putting multiple operating systems on the same box (and reducing your physical machine count) and operating more efficiently, you can reduce energy costs by 10-40% on average (see the rough consolidation estimate after this list).
       
2. Energy Star Advantage: Certain ENERGY STAR certified server models may use up to 30% less energy than a traditional workhorse server. You can find a list of ENERGY STAR certified Enterprise Servers on this page.
       
3. Hot Aisles/Cold Aisles: As you probably know, IT equipment typically takes in cool air at the front of the unit and expels hot air at the back. Since this is a consistent architecture, you can arrange your server racks to maximize cooling efficiency: orient rows so that equipment fronts face each other across cold aisles and exhausts face each other across hot aisles. A worst-case scenario involves an intake (the front of a server) receiving exhaust heat from other servers placed behind it. Streamline your servers to stay organized, and to stay cool. One important note: never move server equipment without shutting down all of the power sources and unplugging all of the cables for safety. The chart below shows the ideal setup for a hot/cold aisle layout. [Image] Hot/cold aisles (photo credit: techmeasures.files.wordpress.com)
4. Management of Air Flow: Related to hot/cold aisles, also implement ‘blanking panels’ to cover open rack space and ensure that air passes through the equipment. Dell defines blanking panels as “a way to cover unused rack space in the front of a rack, resulting in improved airflow to the installed equipment and reducing internal hot-air circulation within the rack”. Within the industry, the use of blanking panels has long been considered a best practice. Be sure to check your racks and make certain that you’ve established an efficient flow dynamic. Check out Dell’s white paper, viewable here, for a more thorough analysis of blanking panels.
       
5. Air-Side Economizer: By taking advantage of the local environment, you may be able to reduce energy consumption by installing an air-side economizer. If your data center is located in a hot climate, periods of rain or cool evenings are useful environmental conditions to take advantage of. The air-side economizer is integrated into the air handling system and requires professional installation. For more information on air-side economizers, please visit:

       http://www.energystar.gov/index.cfm?c=power_mgt.datacenter_efficiency_economizer_airside
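To put a number on the consolidation argument in tip 1, here is a back-of-the-envelope sketch; the server counts, per-server wattage, and electricity price are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope consolidation estimate for the virtualization tip
# above (item 1). All numbers here are illustrative assumptions: 10 lightly
# loaded standalone servers at ~250 W each consolidated onto 4 virtualization
# hosts at ~400 W each.

KWH_PRICE = 0.12          # assumed $/kWh
HOURS_PER_YEAR = 24 * 365

before_w = 10 * 250       # standalone servers
after_w = 4 * 400         # consolidated hosts running the same workloads

def annual_cost(watts):
    return watts / 1000 * HOURS_PER_YEAR * KWH_PRICE

saved = annual_cost(before_w) - annual_cost(after_w)
print(f"Estimated annual energy cost saved: ${saved:,.0f} "
      f"({1 - after_w / before_w:.0%} less server power)")
```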



Download our FREE E-Book for more information on temperature control and other monitoring points for your data center environment, including additional tips on efficiency and the varying temperature conditions in your server room.


  • Consideration of High Temperature Ambient Environments and Free Cooling in Data Center Operation

Directly from the original post: http://www.datacenterpost.com/2013/01/consideration-of-high-temperature.html

     


     David Ruede, VP Marketing at Temperature@lert, says:

    Techies love acronyms, and IT professionals are masters of the jargon. Where else would we find such gems as CRAC, PUE, SaaS, DCIM, VoIP and VPN among the scores if not hundreds of options for the next big idea?

Why do we need these when The Free Dictionary lists 259 phrases for the acronym DC alone? (Link 1) First, we love to speak in shorthand. Time is always too short; things need to be done quickly. Speaking in Acronym makes us insiders, the elite few who can feel the bits and petabytes flowing through the veins and arteries of the interconnected web of the virtual world. And short of a Vulcan Mind Meld, acronyms save time, although one could argue that when they are used in meetings there may be a few who don’t really understand the meaning and, because they don’t want to appear “stupid”, don’t ask.

    Many of these terms started off as marketing terms.  Why would we need CRAC when AC may be sufficient?  And why is PUE debated daily as to its true meaning in professional social media sites?  Every data center operator, supplier and professional looks to set themselves or their companies apart from the competition.  I’ll argue this is a good thing because it makes web searches easier – I don’t have to sort through hundreds of household air conditioners sold in retail outlets to find what I need for a data center, server or telecom room.

Recently a new acronym has been making its way into the jargon. HTA, High Temperature Ambient, has cropped up in several professional periodicals and online marketing pieces. The phrase is used to describe the benefits of reduced energy consumption in data centers and other IT facilities that operate at what many consider higher than “normal” temperatures, 30°C (86°F) for example. Described in earlier pieces as high ambient temperature or high temperature in the ambient, the idea of running data centers at higher temperatures has gained prominence as a way to save electrical energy, a very costly piece of the data center’s operating budget. Often used with terms like “free cooling” or “air side economizers”, the idea is that today’s servers are specified to run at higher temperatures than those of just a few years ago, so operating equipment at higher temperatures has no detrimental effect.

In April 2012, Intel published a study of the potential energy savings in green data center maker Gitong’s modular data centers. The Shanghai study showed an annual cost reduction of almost $33,000, which is significant.

    Figures 1a, 1b: Tables showing before and after HTA results - Source: Intel Link 2

While saving energy is a very desirable goal, data center, server and telecom room operators are well served to understand the underlying assumptions behind “turning up the heat and opening up the doors and windows”. First, all of the equipment in an IT space comes with manuals, and the manuals specify operating conditions. Ensuring that all of the equipment in the ambient space can run at elevated temperatures is highly recommended, particularly since older devices or appliances may be more prone to heat-related performance degradation. ASHRAE’s TC 9.9 2011 Thermal Guidelines for temperature and humidity control are a good reference for where to start when designing or setting up an HVAC system. (Link 3)

Second, while the HVAC systems in IT spaces are generally well designed and provide adequate airflow to the equipment, time has a way of changing things. Profile the temperature of the data center to see whether changes in operation or added equipment have created “hot spots”, with enough resolution to ensure each rack or piece of equipment is operating within specification; this can be done with existing equipment by moving temperature sensors to areas not normally monitored during the temperature mapping process.
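A profiling pass like the one described above ultimately boils down to comparing each monitored point against a limit. The sketch below flags hypothetical rack-inlet readings that exceed ASHRAE's 80.6°F recommended high end cited earlier in this blog; the readings and the chosen limit are placeholders, not data from any real site.

```python
# Sketch of the "profiling" step described above: flag racks whose inlet
# temperature readings exceed a chosen limit. The readings and the 80.6°F
# limit are placeholders for whatever your own sensors and guidelines dictate.

INLET_LIMIT_F = 80.6

readings_f = {            # hypothetical rack-inlet temperatures
    "rack-A1": 74.3,
    "rack-A2": 79.8,
    "rack-B1": 83.1,      # a hot spot
    "rack-B2": 77.5,
}

hot_spots = {rack: t for rack, t in readings_f.items() if t > INLET_LIMIT_F}
for rack, t in sorted(hot_spots.items()):
    print(f"Hot spot: {rack} at {t}°F (limit {INLET_LIMIT_F}°F)")
```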

Third, changes in temperature cause changes in relative humidity. Continuous monitoring of both temperature and relative humidity, before and after raising the temperature, is recommended to ensure these critical parameters stay within the manufacturer’s specifications.
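The humidity effect is easy to quantify: warming air without adding or removing moisture lowers its relative humidity. The sketch below uses the Magnus approximation for saturation vapor pressure; the coefficients and the example conditions (22°C at 45% RH raised to 30°C) are illustrative assumptions, not recommendations.

```python
# Illustration of the third point: if room temperature rises while the
# moisture content of the air stays the same, relative humidity drops.
# Uses the Magnus approximation for saturation vapor pressure.

import math

def saturation_vapor_pressure_hpa(t_celsius):
    """Magnus approximation for saturation vapor pressure, in hPa."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def rh_after_warming(rh_start, t_start_c, t_end_c):
    """New RH when air warms at constant water-vapor content."""
    ratio = saturation_vapor_pressure_hpa(t_start_c) / saturation_vapor_pressure_hpa(t_end_c)
    return rh_start * ratio

print(f"45% RH at 22°C becomes ~{rh_after_warming(45, 22, 30):.0f}% RH at 30°C")
```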

    And if IT professionals decide to employ “free cooling” by figuratively “opening up the doors and windows”, they would be well advised to check ASHRAE’s TC 9.9 Gaseous and Particulate Contamination Guidelines for Data Centers and again their supplier manuals for specification compliance. (Link 4)

    Figure 2: Ambient Air Cooling Unit (Link 5)

Much has been written about free cooling; a June 2012 article is a good example. (ref. Link 5) Cooling may indeed be “free”, and many sites can and do use free cooling combined with HTA to make significant reductions in their energy bills. As with all good ideas, “first, do no harm” is a good motto. IT professionals may be well served to verify and validate the assumptions against best practices as they apply to their sites before making any significant changes in operation.


  • Essential Tech Check List: Building & Retrofitting Your Server Room

Whether you're building a server room, adding on, or moving equipment, there are many considerations to mull over. From the basics to alarm systems, it is important to ensure your server room is efficient and your mission-critical equipment is protected. Previously in our blog, we have addressed the issues surrounding the microclimate present in your server room; however, it is also critical to understand how a server room should be laid out and managed. Use our check list as a guide for promoting security, efficiency, and productivity:

    Our Essential Tech Check List

    (1) Your Basics of Space

• Examine the layout of the space and how many units of space you have to work with.

• The walls (including the ceiling) and doors should isolate the sounds that your equipment is creating.

• Check to see which way the door opens. There should also be no windows or other entry points besides the doors to the room.

• Consider the floor and whether your equipment will need raised flooring. Aim for anti-static floor finishing to prevent an unwanted static charge.

• Make sure there is enough clearance for racks and that they are stable enough to hold your equipment.

• Check for aisle clearance too; make sure you have enough room for exhaust to escape without overheating nearby equipment.

• Think about whether you need ladder racks, cabinets, shelves, patch panels, or rack mounts.

• Take the weight and size of each piece of equipment into consideration when designing the layout.


    (2) Keeping Your Cool

• Check what type of centralized cooling is available, whether under-floor air distribution or an air duct system.

• If there is no centralized system available, get an air conditioner or cooling unit that is able to keep your equipment working productively while minimizing energy consumption and costs.

• If at all possible, fresh air vents are great and save on energy costs and consumption!

• Remove any and all radiators or other heating equipment currently present in the room. You don't need to add heat at all!

• Monitor your cooling system(s) to make sure they are working properly, especially when no one is there.

• Make sure your cooling units are not too close to your electrical equipment; think condensation and flooding. Do not place air conditioning units over your servers.

• Monitor the humidity to prevent static charge and electrical shorts.

• See if a chilled water system is in the budget, or find something within the budget constraints to ensure that the hot air has somewhere to go.

     

    (3) Using Your Power

• Check to make sure that you have enough outlets to support power to all your equipment without overloading them.

• Get backup power, preferably a UPS, to prevent data loss from power blips or outages.

• Don't surpass the maximum electrical intensity per unit of space.

• Consider shutdown capabilities of equipment (SNMP traps, for example).

• Make sure your equipment is grounded.

• Monitor for power outages if you are not using backup power systems.

• Monitor your backup power systems to make sure your mission-critical equipment is not failing due to power loss.

     

    (4) Keeping Secure & Safe

• Have at least one phone present in the room in case of emergencies.

• Check for a preexisting fire alarm system and install one if there isn't one.

• Get a fire suppression system if there is not one there. Take into consideration whether you will have a wet or dry suppression system and the effects that will have on your equipment. (Halon is a great choice!)

• Have reliable contacts to help resolve issues immediately, or form a system of escalation.

• Monitor for flooding, especially if it has happened in the past.

• Secure entrances/exits; this is expensive equipment with critical data, and you don't want just anyone in there messing around!

     

    (5) Other Considerations

• Get the best cabling/wiring available within budget constraints.

• Keep extra cabling/wiring around, because you never know when you may need it.

• Consider color coding wires/cables; it's a little more work now but definitely a time-saver in the future!

• Think about lighting: location and heat produced.

• If there is someone sharing the space, get them some earplugs! It's going to be loud in there with the equipment running.

• Consider networking/phone lines being run in there and how much space you have left after that.

• Plan for future expansion or retrofitting (again).

• Leave the service loops in the ceilings.

• Label outlets.

• Get rid of dust; your equipment hates it!

• Check if you have a rodent/pest problem.

• Cover emergency shutoff switches so they can't be accidentally triggered.

• Try to centralize the room in the building so that you can eliminate having to use more cabling/wiring than you need to.

• Meet OSHA and ASHRAE guidelines as well as local codes.


Is your server room, or a server room you know of, not being monitored for temperature? Are you concerned with energy consumption, the ability to monitor off-hours, and/or preventing mission-critical equipment failure? If you or someone you know is experiencing such issues, we want to hear from YOU!

We will be giving away ONE FREE USB DEVICE per month to the server room with the most need! Valued at $129.99, Temperature@lert USB Edition is a low-cost, high-performance device that monitors the ambient temperature in your server room and alerts you via email when the temperature rises or falls outside your acceptable range.
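For context, the core of any such alert device is a simple threshold check like the sketch below; this is a generic illustration, not Temperature@lert's actual firmware, and the thresholds and sample readings are assumptions.

```python
# Generic illustration of the kind of threshold check a temperature-alerting
# device performs; not Temperature@lert's actual implementation. Thresholds
# and the sample readings are assumptions for illustration.

LOW_F, HIGH_F = 60.0, 80.0

def check_reading(temp_f):
    """Return an alert message if the reading falls outside the acceptable range."""
    if temp_f < LOW_F:
        return f"ALERT: {temp_f}°F is below the {LOW_F}°F limit"
    if temp_f > HIGH_F:
        return f"ALERT: {temp_f}°F is above the {HIGH_F}°F limit"
    return None

for reading in (72.5, 84.2, 58.9):    # sample readings
    alert = check_reading(reading)
    print(alert or f"OK: {reading}°F within range")
```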

    Please send a brief description, pictures, and/or videos to diane@temperaturealert.com for consideration! Our team will select one winner each month based on description and need, because we firmly believe that companies in every industry 



  • How To Increase the Lifespan of Your Server

It's Monday. You grab your coffee, toss in sugar, and begin chugging your beloved caffeine. There's no feeling like walking into your server room or data center to find it swelteringly hot and equipment malfunctioning. Even though you tried to prevent this from occurring by installing air conditioning units and other coolers, loss of equipment and information can still happen.

In fact, for every 18 degrees Fahrenheit that the temperature remains above 68°F, servers lose approximately 50% of their reliability. Servers are an investment, and one must take care to protect such an important asset. Considering the average lifespan of a server is 4-6 years, it is more cost effective to maintain your server by keeping it at proper temperatures.
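Taken at face value, that rule of thumb implies reliability halves for every 18°F of sustained operation above 68°F. The sketch below expresses it that way; the exponential interpretation is ours, for illustration only, not a vendor formula.

```python
# Quick sketch of the rule of thumb above: roughly a 50% reliability loss
# for every 18°F of sustained operation above 68°F. Expressing that as
# 0.5 ** ((temp - 68) / 18) is an interpretation made for this illustration.

def relative_reliability(temp_f, baseline_f=68.0, halving_interval_f=18.0):
    if temp_f <= baseline_f:
        return 1.0
    return 0.5 ** ((temp_f - baseline_f) / halving_interval_f)

for temp in (68, 77, 86, 95):
    print(f"{temp}°F: ~{relative_reliability(temp):.0%} of baseline reliability")
```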

Of course, cooling units are a great way to cool down an already hot server room, but there's not always one person designated to monitor such a room 24/7. By implementing a temperature monitoring solution, you can increase the lifespan of your servers and maintain reliability. Not to mention, you'll avoid a case of the Mondays.

     

Get Your FREE IT/Server Room/Data Center Monitoring Guide Now:



  • What Can You Monitor with Temperature@lert?



When deciding on a Temperature@lert solution, you generally have an application in mind prior to purchase. Of course, we have our standard industries that require our products; however, consumers have thought up many imaginative uses that have opened a new world of monitoring possibilities.

    Here are some of the innovative uses that have been implemented:

    • R/V pet monitoring 
    • HVAC systems
    • Warehouses
    • Wine storage
    • Ovens 
    • BBQ Smokers
    • Cryogenic Freezers
    • Food Trucks
    • Reefer Trucks
    • Kennels
    • Police K9 vehicles
    • Water Tanks 
    • Ponds
    • Farms/Barns
    • Chicken Coops
    • Portable bio-pharmaceutical cooling units
    • Steam Pipes
    • Incubators
    • Boiler rooms
    • Crops
    • Greenhouses
    • Explosives
    • Vacation homes
    • Candy factories
    • Vacant commercial property
    • Crawl spaces
    • Outdoor Cooling Units
    • Saunas
    • Hot tubs

    Of course these applications would not be possible without our smart sensors:

    • Temperature
    • Humidity
    • Flood
    • Expanded Range Temperature
    • Tank Level
    • Pressure
    • Leaf Wetness
    • Soil Moisture
    • Wind Direction
• Wind Speed
    • Rainfall
    • CO2
    • O2
    • Dry Contact
    • Stainless Steel Temperature
    • Wine Bottle Temperature

With our smart sensors, the possibilities for meeting your monitoring needs are endless. If you need a monitoring solution, we're here to help; just send us a quick quote request: Quote Inquiry. Or if you have an interesting way you use your device, we'd love to hear about it; email info@temperaturealert.com.


