temperature@lert blog

  • Wireless Temperature Monitoring System Topology Considerations

    Smart decisions during the evaluation process can help simplify the sensor network layout.

In this ongoing series centered on NYC Hospital Queens’ experience selecting and installing a Wireless Temperature Monitoring (WTM) system to track medications and blood in hospital refrigerators (Link to Article), several factors had to be taken into account when placing WTM devices to support 174 refrigerators, freezers, and other critical areas in a hospital comprising four main buildings, some built in the 1950s.

As was noted in a previous piece, the WTM system chosen at NYC Hospital Queens uses wireless receivers located above the ceiling as communication bridges between the sensor modules and the hospital’s IT network. The author notes, “signal strength dictates the number of receivers needed. Our institution comprises four main buildings, some of which were built in the late 1950s. Thus, the signal strength of the sensors in the oldest building was less than optimal and required the addition of multiple receivers to provide consistent readings. Basement areas also may require multiple receivers.”

    Temperature@lert WIFI Monitoring Device

A WiFi WTM device installed in a server room provides a strong signal, good range, and a fast data rate without the expense of additional equipment (e.g., a repeater/gateway).

Evaluating a WTM device’s signal strength or range in every location to be monitored is paramount before selecting any one technology.  Depending on the wireless technology chosen, each sensor type may require more or fewer receivers to make the connection, resulting in deployments of varying complexity and cost.  NYC Hospital Queens could conceivably have chosen a device that needs no receiver (a.k.a. gateway) but has sufficient signal strength to communicate with the site’s IT network directly.  A standard WiFi device could potentially provide such capability without the added expense of a receiver/gateway device.

    Mesh network showing sensor nodes (red/green) and receivers/gateways (red).  In this case some sensors also act as gateways and can help link remote sensors without the added cost of a dedicated gateway. (Link to Source)

Some wireless technologies are able to overcome interference from building infrastructure, equipment, or furnishings that others cannot.  Other wireless technologies have mesh network capability, meaning the wireless sensors or receiver/gateways can communicate with each other.  Therefore, when one device is not operating properly or experiences signal degradation caused by interference, it can communicate with a neighboring device to maintain network integrity.  And still other WTM designs employ receiver/gateways that contain their own temperature sensor(s) in addition to serving as a gateway, providing an additional pathway to lower the complexity and cost of the system.  Evaluating wireless devices from several vendors, each using a different wireless technology (WiFi, ZigBee, RFID, Bluetooth, proprietary, etc.), can help the user understand how each works in the various locations to be monitored.

But what does one do when these technologies don’t work or are not feasible on a hospital’s IT network?  For example, some IT departments are averse to adding new devices to their internal networks due to security or capacity concerns; continuous temperature monitoring of 174 sensors, as in the case of New York Hospital Queens, can generate a lot of data quickly.  To meet the hospital’s needs, historical data must be maintained, secured, and stored for an extended period for regulatory purposes.  Adding alerting capability to the WTM system, for example sending email, text, or phone call messages when something goes wrong, requires an additional level of IT capacity to send and log those alerts.  Adding an escalation plan for times when issues are not resolved in a timely manner adds another layer of complexity.  Close collaboration with the hospital’s IT resources will be needed to determine what is possible and what is not.

    Temperature@lert How It Works

If IT capacity or network policies make it very difficult if not impossible to add a WTM system, what options exist?  One good option is a cellular gateway that communicates directly with the wireless sensor network and uploads data to cloud-based servers via major carrier cellular networks.  Temperature@lert’s Cellular Edition is one such device.  Each Cellular Edition is equipped with a cellular transmitter/receiver that communicates through national cellular carrier networks to Temperature@lert’s Sensor Cloud web-based storage, reporting, and alerting services.  Each Cellular Edition can link to several Z-Point wireless sensor nodes, allowing up to 45 sensors to be monitored via one Cellular Edition gateway, depending on signal strength and equipment layout.

Understanding how any new wireless network will operate at a site requires study and testing.  Once the locations to be monitored are mapped and the solutions the organization’s IT department supports are determined, those tasked with the WTM decision are ready to make their recommendation.  All of this takes time and energy, so add it to the planning process, and everyone will have a better understanding of the who, what, when, where, and why behind the final selection.  Once the installation starts, it will be good to have that history to remind everyone how they got here.

Temperature@lert’s WiFi, Cellular, and ZPoint product offerings, linked to the company’s Sensor Cloud platform, provide a cost-effective solution for organizations of all sizes. The products and services can help bring a laboratory or medical practice into compliance with minimal training or effort. For information about Temperature@lert visit our website at http://www.temperaturealert.com/ or call us at +1-866-524-3540.

    Written By:

    Dave Ruede, Well-Versed Wordsmith

    Dave Ruede, a dyed in the wool Connecticut Yankee, has been involved with high tech companies for the past three decades. His background in chemistry and experience in a multitude of industries such as industrial chemicals and systems, pulp and paper, semiconductor fabrication, data centers, and test and assembly facilities informs his work daily. Well-versed in sales, marketing, management, and business development, Dave brings real world experience to Temperature@lert. When not crafting new Temperature@lert projects, Dave enjoys spending time with his young granddaughter, who keeps him grounded to the simple joys in life. Such joys for this wordsmith include reading prize winning fiction and non-fiction. Although a Connecticut Yankee, living for a decade in coastal California’s not too hot, not too cold climate epitomizes Dave’s favorite temperature, 75°F.

    Temperature@lert Dave Ruede


  • Overheating: The Concern Over Stability in Data Centers

    One of the major concerns of global organizational operations is business continuity.

Because firms rely on their information systems to operate, an unexpected system shutdown will inevitably impair or even halt company operations. It is crucial for firms to provide a stable and reliable infrastructure for IT operations and reduce the possibility of disruptions. Besides emergency backup power generation, a data center also needs to closely monitor its operation rooms to ensure the continuous functioning of the hosted computer environment.

    The Uptime Institute in Santa Fe, New Mexico, defined four levels of availability as shown below:

Temperature@lert Image Uptime Institute

The downtime tolerance of each tier over one year (525,600 minutes) is listed below:

    Tier 1 (99.671%) status would allow 1729.224 minutes
    Tier 2 (99.741%) status would allow 1361.304 minutes
    Tier 3 (99.982%) status would allow 94.608 minutes
    Tier 4 (99.995%) status would allow 26.28 minutes 
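The allowed minutes follow directly from the availability percentages over a 525,600-minute year. A quick sketch (my own illustration, not Uptime Institute material) reproduces the figures:

```python
# Allowable annual downtime implied by each tier's availability
# percentage, over a 525,600-minute (non-leap) year.
MINUTES_PER_YEAR = 365 * 24 * 60

tiers = {"Tier 1": 99.671, "Tier 2": 99.741, "Tier 3": 99.982, "Tier 4": 99.995}

for name, pct in tiers.items():
    downtime = MINUTES_PER_YEAR * (1 - pct / 100)
    print(f"{name}: {downtime:.3f} minutes of allowable downtime per year")
```

Tier 4’s 99.995% availability, for example, works out to 525,600 × 0.00005 = 26.28 minutes.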

High temperature is one of the major causes of severe malfunction or damage in data centers. Many data centers, including some at leading firms, have reported losses due to overheating. On March 14, 2013, Microsoft’s outlook.com service endured a 16-hour outage caused by “a rapid and substantial temperature spike in the data center.” Wikipedia experienced similar trouble on March 24, 2010. “Due to an overheating problem in our European data center, many of our servers turned off to protect themselves”, Wikimedia reported on its tech blog (http://blog.wikimedia.org/2010/03/24/global-outage-cooling-failure-and-dns/). Earlier that year, hot air in the operation room knocked Spotify offline when one of the big air conditioners didn’t start properly.

Microsoft’s lengthy downtime in 2013 was an unexpected accident during routine firmware updates. It caused a lot of trouble for customers who could not log into their Outlook and Hotmail accounts for a whole calendar day.

On the other hand, according to Domas Mituzas, a performance engineer at Wikipedia, the cost of downtime for the user-managed encyclopedia is so minimal that “the down time used to be [their] most profitable product”: Wikipedia displays donation-seeking information for additional servers when it is offline.

The losses suffered from shutdowns vary from firm to firm, and it is necessary for all parties to install safeguard processes and close monitoring to minimize the potential damage. Next week we will briefly discuss how to protect your data center from changing environmental conditions.

    Temperature@lert ebook


Tom Warren, “Microsoft blames overheating datacenter for 16-hour Outlook outage”, March 14, 2013.

Rich Miller, “Wikipedia’s Data Center Overheats”, March 25, 2010.

Nicole Kobie, “Overheating London data centre takes Spotify offline”, February 22, 2010.

    Written by:

    Ivory Wu, Sharp Semantic Scribe

    Traveling from Beijing to Massachusetts, Ivory recently graduated with a BA from Wellesley College in Sociology and Economics. Scholastic Ivory has also studied at NYU Stern School of Business as well as MIT. She joins Temperature@lert as the Sharp Semantic Scribe, where she creates weekly blog posts and assists with marketing team projects. When Ivory is not working on her posts and her studies, she enjoys cooking and eating sweets, traveling and couch surfing (12 countries and counting), and fencing (She was the Women's Foil Champion in Beijing at 15!). For this active blogger, Ivory's favorite temperature is 72°F because it's the perfect temperature for outdoor jogging.



  • Data Center Monitoring: Raised Temperatures, Riskier Management

    Data Center Temperature Monitoring: Raised Temperatures, Riskier Management

In 2008, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) published new environmental guidelines for datacom equipment, raising the recommended high-end temperature from 77°F to 80.6°F.

The guideline chart below shows the changes in more detail:

    data center guideline chart

According to the 2008 guideline, operating within the recommended envelope does not by itself ensure optimum energy efficiency. There are varying degrees of energy efficiency within the recommended zone, depending on the outdoor temperature and the cooling system design. Thus, the guideline suggests, “it is incumbent upon each data center operator to review and determine, with appropriate engineering expertise, the ideal point for their system.”

    Patrick Thibodeau, reporter at computerworld.com, conducted an interview with Roger Schmidt, the IBM chief engineer for data center energy efficiency, about how the new temperature parameters will influence energy savings and data center cooling. When asked “how much heat can servers handle before they run into trouble”, Schmidt replied:

    “The previous guidelines for inlet conditions into server and storage racks was recommended at 68 degrees Fahrenheit to 77 Fahrenheit. This is where the IT industry feels that if you run at those conditions you will have reliable equipment for long periods of time. There is an allowable limit that is much bigger, from 59 degrees Fahrenheit to 89 degrees. That means that IT equipment will operate in that range, but if you run at the extremes of that range for long periods of time you may have some fails. We changed the recommended level -- the allowable levels remained the same -- to 64F to 81F. That means at the inlet of your server rack you can go to 81 degrees -- that's pretty warm. [The standard also sets recommendation on humidity levels as well.]”

He also revealed that 81°F is a point where the power increase is minimal, because “raising it higher than that [the recommended limit] may end up diminishing returns for saving power at the whole data center level.” In fact, according to the GSA, raising the server inlet temperature can save about 4% to 5% in energy costs per degree of increase.
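To get a feel for what the GSA estimate implies for a larger setpoint change, one can compound the per-degree saving. This back-of-envelope sketch (my own arithmetic; the 4% figure and the 8-degree raise are assumptions, not figures from the interview) is illustrative only:

```python
# Rough illustration: cumulative cooling-energy saving if each degree
# Fahrenheit of inlet-temperature increase saves ~4%, compounded.
def cumulative_saving(degrees_raised, saving_per_degree=0.04):
    return 1 - (1 - saving_per_degree) ** degrees_raised

# Raising the inlet setpoint from 72°F to 80°F (8 degrees):
print(f"{cumulative_saving(8):.1%}")  # roughly 27.9%
```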

Too much humidity will result in condensation, which leads to electrical shorts. According to the GSA, “based on extensive reliability testing of Printed Circuit Board (PCB) laminate materials, it has been shown that conductive anodic filament (CAF) growth is strongly related to relative humidity. As humidity increases, time to failure rapidly decreases. Extended periods of relative humidity exceeding 60% can result in failures, especially given the reduced conductor to conductor spacing common in many designs today.” The upper moisture region is also important in protecting disks and tape from corrosion: excessive humidity forms monolayers of water on device surfaces, providing an electrolyte for corrosion. On the other hand, too little humidity will leave the room electrostatically charged.

After the new standards were published, it took time for data centers to update their operating rooms. According to Schmidt, IBM began using the new guidelines internally in 2008, and other data centers would likely step temperatures up two degrees at a time. Running near the new ASHRAE temperature limits means a higher-risk environment for staff to manage and requires more operational expertise. According to 2013 Uptime Institute survey data, nearly half of all data centers reported that their systems ran at 71°F to 75°F. The next largest segment, 37% of data centers, reported temperatures from 65°F to 70°F. The trend toward warmer data centers is better revealed by the fact that 7% of data centers operated at 75°F or above, compared with 3% the year before.

    Free IT Monitoring Guide


ASHRAE, “2008 ASHRAE Environmental Guidelines for Datacom Equipment”. http://tc99.ashraetcs.org/documents/ASHRAE_Extended_Environmental_Envelope_Final_Aug_1_2008.pdf

Patrick Thibodeau, “It's getting warmer in some data centers”, July 15, 2013. http://www.computerworld.com/s/article/9240803/It_s_getting_warmer_in_some_data_centers

Patrick Thibodeau, “Q&A: The man who helped raise server operating temperatures”, July 6, 2009. http://www.computerworld.com/s/article/9135139/Q_A_The_man_who_helped_raise_server_operating_temperatures_

    Written by:

    Ivory Wu, Sharp Semantic Scribe

    Traveling from Beijing to Massachusetts, Ivory recently graduated with a BA from Wellesley College in Sociology and Economics. Scholastic Ivory has also studied at NYU Stern School of Business as well as MIT. She joins Temperature@lert as the Sharp Semantic Scribe, where she creates weekly blog posts and assists with marketing team projects. When Ivory is not working on her posts and her studies, she enjoys cooking and eating sweets, traveling and couch surfing (12 countries and counting), and fencing (She was the Women's Foil Champion in Beijing at 15!). For this active blogger, Ivory's favorite temperature is 72°F because it's the perfect temperature for outdoor jogging.



  • Dawn of Solar Data Centers?

    Major player projects can point to readiness, costs and benefits of solar power for data centers.

    Water, water everywhere,

    And all the boards did shrink.

    Water, water everywhere,

Nor any drop to drink.

(The Rime of the Ancient Mariner, Samuel Taylor Coleridge)

Data center managers must feel a lot like Coleridge’s Ancient Mariner when they look out the window (assuming their offices have any windows). Like the sailors on Coleridge’s journey, data center professionals are surrounded by free power from the wind, sun, water, the earth’s heat, and biofuel, but none of it is usable as it exists to power the insatiable demands of the equipment inside the vessel. Despite this challenge, there have been several interesting projects regarding green energy sources. This piece in the data center energy series will explore solar photovoltaics to help determine if the technology is suitable to provide cost-effective, reliable power to data centers.

    Temperature@lert Blog: Dawn of Solar Data Centers?
Left: Engraving by Gustave Doré for an 1876 edition of the poem. “The Albatross” depicts 17 sailors on the deck of a wooden ship facing an albatross. Right: A statue of the Ancient Mariner, with the albatross around his neck, at Watchet, Somerset in south west England, where the poem was written. (Link to Source - Wikipedia)

Solar-powered data centers have been in the news recently, primarily due to projects by Apple and Google. In an effort to build a green data center, Apple’s 500,000 sq. ft. site in Maiden, North Carolina is powered in part by a nearby 20-acre, 20-megawatt (MW) solar array. The site also has a 10-MW fuel cell array that uses “directed biogas” credits as the energy source. (Link to Apple Source) The remainder of the power needed for the site is purchased from the local utility, with Apple buying renewable energy credits to offset the largely coal- and nuclear-generated Duke Energy electricity. Apple sells the power from the fuel cells to the local utility in the form of Renewable Energy Credits used to pay electric utility bills. Apple expects that the combination of solar photovoltaic panels and biogas fuel cells will allow the Maiden data center to use 100% renewable energy or energy credits by the end of the year. Several lesser-known companies have also implemented solar initiatives, but the news is not as widespread.

    Temperature@lert Blog: Dawn of Solar Data Centers?
    Left: Apple Maiden, NC data center site shows solar array in green (Link to Source - Apple); Right: Aerial photo of site with solar array in foreground (Link to Source - Apple Insider)

It will be instructive to follow reports from Apple to determine the cost-effectiveness of the company’s green approach. That being said, many if not most companies do not have the luxury of being able to build a 20-acre solar farm next to the data center. And most have neither the cash to invest in such projects nor the corporate cachet of Apple to get them approved, so initiatives such as Maiden may be few and far between. Still, there is a lot of desert land ripe for solar farms in the US Southwest. Telecommunication infrastructure may be one limitation, but California buys much of its electrical power from neighboring states, so anything is possible.

What about solar power for sites where the data center is built in more developed areas? Is there any hope? Colocation provider Lifeline Data Centers announced that their existing 60,000 sq. ft. Indianapolis, Indiana site will be “largely powered by solar energy”. (Link to Source - Data Center Dynamics) In a piece titled Solar Data Center NOT “Largely Solar Powered”, author Mark Monroe drew on his experience with his own solar panel installation and took a look at the numbers behind this claim. Lifeline plans to install a 4-MW utility-grade solar array on the roof and in the campus parking lot by mid-2014. Monroe takes a swag at determining how much of the data center’s power needs will be met by the solar array.

Assuming the site’s PUE is equal to the Uptime Institute’s average of 1.64 and taking into account the photovoltaic array’s operating characteristics (tilt angle, non-tracking), site factors (sun angle, cloud cover), etc., Monroe calculates that the solar installation will supply 4.7% of the site’s total energy and 12% of the overhead energy. At an industry-leading PUE of 1.1, the installation would provide 7% of the total energy and 77% of the overhead energy. Monroe notes that while these numbers are a step in the right direction, Lifeline’s claim of a data center “largely powered by solar energy” is largely not based on the facts. His piece notes that even Apple’s Maiden site, with 20 acres of panels, only generates about 60% of the total energy needed by the site’s overhead and IT gear. Lifeline would need to add an extra 6-MW of solar capacity and operate at a PUE of 1.2 to reach Net Zero Overhead.
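Monroe’s pairs of percentages are consistent with each other once the PUE definition is applied: overhead (non-IT) energy is total energy times (1 − 1/PUE), so the overhead share is the total share scaled by PUE/(PUE − 1). A small sketch of that relationship (my own formulation, not code from Monroe’s piece):

```python
# Given the fraction of TOTAL site energy a solar array supplies and
# the site PUE, compute the fraction of OVERHEAD (non-IT) energy it
# covers. Overhead = total * (1 - 1/PUE), so the overhead share is
# the total share scaled by PUE / (PUE - 1).
def overhead_share(total_share, pue):
    return total_share * pue / (pue - 1)

print(round(overhead_share(0.047, 1.64), 2))  # Uptime average PUE -> 0.12
print(round(overhead_share(0.07, 1.1), 2))    # industry-leading PUE -> 0.77
```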

I am curious to see hard data from these and other solar photovoltaic data center projects, showing costs, performance, and financial incentives (tax considerations, power contracts, etc.) that the industry can review to determine whether solar is the right approach for its electrical power needs. Although such disclosure is unlikely due to competitive considerations, it would greatly assist the industry in promoting green initiatives and help take the spotlight off headlines criticizing the “power hungry monster”.

All efforts to improve industry efficiency and reduce energy consumption are steps in the right direction. Companies like Lifeline Data Centers that don’t have the deep pockets of Apple or Google are taking steps toward the goal of Net Zero Overhead. The challenge for data center operators that initiate green energy or efficiency-based projects will be to resist boasting about these efforts with headline-grabbing claims that may not be well supported by the data. As Launcelot Gobbo tells Old Gobbo in Shakespeare’s The Merchant of Venice, “at the length truth will out.” Claims of green power and energy independence need to be examined carefully to maintain industry credibility and good will, or “truth will out.”

    Temperature@lert FREE IT Monitoring Guide


  • Does Cogeneration Yield a Suitable RoI in Data Centers?

    What does the data say?

This is the second of two pieces on cogeneration, or CHP.  The first explored the topic; this one explores the RoI of a technology, proven in other industries, as applied to data centers.

As the data center industry continues to consolidate and competition becomes more intense, IT professionals understand the pressure on both capital and operating budgets.  They are torn by two competing forces: faster and more reliable versus low cost and now.  IT equipment improvements arrive continuously, and the desire to update always calls.  Reliability has become the mantra of hosted application and cloud customers, and although electrical grid failures are not counted as “failures against uptime guarantees” by some, businesses affected by outages feel the pain all the same.  And if there are solutions, management pressure to implement them quickly and at low cost is always a factor.

Cogeneration is typically neither fast nor cheap, but it does offer an alternate path to reliability and uptime.  As with all major investments that require sizable capital and space, the best time to consider cogeneration is during data center construction.  That being said, data centers operating today are not going anywhere soon, so retrofit upgrade paths are also a consideration, especially in areas where electric power from the local utility has become less reliable over time.  So when should data center professionals consider cogeneration, or CHP?  Fortunately, there are studies available on public websites that help provide answers.

    Temperature@lert: Does Cogeneration Yield a Suitable RoI in Data Centers?

    University of Syracuse data center exterior; Microturbines in utility area (Link to Source)

One such study is an installation at the University of Syracuse.  Opened in 2009, the 12,000 sq. ft. (1,100 m²) data center with a peak load of 780 kW employs cogeneration and other green technologies to squeeze every ounce of energy out of the system. (Link to Source)  The site’s 12 natural gas fueled microturbines generate electricity.  The microturbines’ hot exhaust is piped to the chiller room, where it is used to generate cooling for the servers and both heat and cooling for an adjacent office building.  Technologies such as adsorption chillers that turn heat into cooling, the reuse of waste heat in nearby buildings, and rear-door server rack cooling that eliminates the need for server fans complete what IBM calls its greenest data center yet.

    Temperature@lert: Does Cogeneration Yield a Suitable RoI in Data Centers?

    Left: Heat exchanger used in winter months to capture waste microturbine heat for use in nearby buildings; Right: IBM “Cool Blue” server rack heat exchangers employ chilled water piped under floor.

This is certainly an aggressive project, but can the cost be justified with a reasonable Return on Investment?  Fortunately, data has recently been released to quantify the energy conservation benefits.  PUE performance measured during 2012 was presented at an October 2013 conference and shows a steady PUE between 1.25 and 1.30 over the period, a value that compares very favorably with the typical data center PUE of 2.0.  (The Uptime Institute’s self-reported average PUE is 1.65, with qualifications; a Digital Realty Trust survey of 300 IT professionals at companies with annual revenues of at least $1 billion and 5,000 employees revealed a PUE of 2.9.)  (Link to Sources: Uptime Institute, Digital Realty Trust)
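For reference, PUE is simply total facility energy divided by the energy delivered to IT equipment. Using the site’s reported 780 kW peak IT load and a mid-range 1.27 PUE, a back-of-envelope comparison (the total-draw figures below are my own assumptions, not published numbers) looks like this:

```python
# PUE = total facility power / IT equipment power.
def pue(total_facility_kw, it_kw):
    return total_facility_kw / it_kw

# At a ~1.27 PUE, a 780 kW IT peak load implies roughly
# 780 * 1.27 ≈ 991 kW of total facility draw, versus
# 780 * 2.0 = 1560 kW for a typical PUE-2.0 site.
print(round(pue(991, 780), 2))   # ~1.27
print(round(pue(1560, 780), 2))  # 2.0
```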

    Temperature@lert: Does Cogeneration Yield a Suitable RoI in Data Centers?      

    IBM/SU Green Data Center 2009 Goals (Link to Source); 2012 Actual Performance (Link to Source)

So how can we calculate the actual RoI and compare it to the projected goals?  First, the goals stated in the table on the left show savings of $500,000+ per year.  Another presentation, by the microturbine supplier, shows a goal of $300,000 per year, quite a bit different.  So how do we know what the savings are?  We don’t, since there is no reference site where an identical data center in an identical location operates without CHP.  We can use the 2.0 average PUE and calculate the energy savings, but that is not a real answer.  We also need to take into account that tax incentives and grants, such as the $5 million for the Syracuse University project, need to be reviewed to determine the cost to non-subsidized projects.  Hopefully, project managers will provide more information as the project matures to help data center operators better understand the actual savings.

CHP for data centers is presented with an array of benefits, including improved reliability through less dependence on grid power, lower power costs, and a reduced carbon footprint.  NetApp installed CHP in their Silicon Valley data center to reduce reliance on grid power amid frequent rolling brownouts and uncertain power market costs.  Their experience is less instructive because the site’s use of direct air cooling reduces its need for mechanical cooling; as a result, the CHP system is used only when the utility is strained.  It is difficult to find quantitative data for modern installations.  While the data seems encouraging, actual energy cost savings are not provided.  We will watch the progress of this and other projects over the next several months to see if CHP costs yield an acceptable RoI via reduced energy costs.  Stay tuned.


  • Does Cogeneration Have a Role in Data Centers?

    Operators have many options to consider.

An earlier piece in this series, titled Data Centers as Utilities, explored the idea that emergency backup power systems in data centers could supply the utility with peak-demand power when the grid is running near capacity and the data center’s emergency generators are not needed.  But what about data centers generating their own power to rely less on the grid?  There are several approaches, particularly in the green energy space, that will be explored in future pieces.  One that is readily available and may make sense for data centers to consider is cogeneration, or Combined Heat and Power (CHP for short).

CHP is not new; it has been used in more traditional industries for decades, primarily heavy industries with large energy needs, steel and paper mills for example.  Cogeneration for data centers has been in the news for quite some time but has had a relatively low adoption rate.  After all, data center operators try to put their capital into IT infrastructure; the utility and facility sides are often seen as necessary added costs.  But with reports that grid capacity and reliability may not be able to address the growth or reliability needs of the industry, operators are taking a fresh look at options such as self-generation.  Low natural gas prices are also a factor, since operators may be able to secure fuel for their own operations more cheaply than through electric utilities.

    As early as 2007 the US Environmental Protection Agency highlighted the potential of cogeneration in the future of data centers in a piece titled The Role of Distributed Generation and Combined Heat and Power (CHP) Systems in Data Centers.(Link to Source)  With advances in the technology, changes in energy costs, and greater emphasis on grid capacity and reliability as it pertains to data centers, cogeneration has received a significant boost with sponsorship from companies such as IBM.  

    Temperature@lert Does Cogeneration Have a Role in Data Centers?

US-sponsored report table showing various technology applications, all under the CHP or Cogeneration name. (Link to Source)

There are several approaches to cogeneration, or CHP.  The EPA report shows the application of several technologies that fall under the CHP umbrella.  Recent installations include five gas engine powered cogeneration units in a Beijing data center. According to one report, “Powered by five of GE’s 3.34-megawatt (MW) cogeneration units, the 16.7-MW combined cooling and heating power plant (CCHP) will offer a total efficiency of up to 85 percent to minimize the data center’s energy costs.” (Link to Source) The project is sponsored by the China National Petroleum Corporation and represents the trend toward distributed energy production in high-usage industries.  eBay’s natural gas powered Salt Lake City data center plans to deploy a geothermal heat recovery system to produce electricity from waste heat. (Link to Source)

    Temperature@lert: Does Cogeneration Have a Role in Data Centers?

    Example of Micro Turbine or Fuel Cell CHP layout (Link to Source)

    Data from projects at the University of Syracuse and University of Toledo data centers will be examined in a companion piece to demonstrate the potential RoI for CHP.

    Temperature@lert: Does Cogeneration Have a Role in Data Centers?

    University of Toledo Natural Gas Fired Micro Turbine Cogeneration Plant. (Link to Source)


  • Who exactly is ushering in ASHRAE’s Temperature Guidelines?

    Dave Ruede, VP of Marketing at Temperature@lert, says:

    "Is raising data center temperature like a game of “you blinked first”, only with your job on the line?"

    While no global standard exists for data center temperature recommendations, many refer to the white paper from the ASHRAE Technical Committee (TC 9.9) for Mission Critical Facilities, Technology Spaces, and Electronic Equipment.  As many know, the committee published a 2011 update titled 2011 Thermal Guidelines for Data Processing Environments – Expanded Data Center Classes and Usage Guidance.  (Link to Whitepaper)  With this document, ASHRAE’s TC 9.9 raised the recommended high-end temperature from 25°C (77°F) to 27°C (80.6°F) for Class 1 data centers (the most tightly controlled class).  More importantly, the allowable high end was set at a warm 32°C (89.6°F), perfect for growing succulents like cacti.
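    For readers juggling the two scales, the Fahrenheit figures above follow directly from the Celsius limits; a quick sketch, for illustration:

```python
# Converting ASHRAE TC 9.9's Celsius limits to the Fahrenheit
# figures quoted above.
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

for label, c in [("old recommended high", 25),
                 ("2011 recommended high", 27),
                 ("2011 allowable high", 32)]:
    print(f"{label}: {c}°C = {c_to_f(c):.1f}°F")
```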

    And yet, recent posts on IT professional social media sites have produced questions like, “What gloves are recommended for data centers to help protect from cold temperatures?”  So it appears not everyone is following ASHRAE’s guidelines.  At the same time, many IT professional media discussions are about energy savings. And if I remember living through the 1973 OPEC oil embargo correctly, raising home air conditioning temperatures during the summer and lowering home heating temperatures during the winter saves energy and money.  The U.S. Department of Energy’s website estimates a 1% energy saving for each degree the AC temperature is raised.  Some sites claim 2%, 3% and even 4% savings, but even 1% of a data center’s energy budget is very significant.

    What are data centers really doing?  In a July 15, 2013 piece posted on the Computerworld UK website titled It’s getting warmer in some data centers, author Patrick Thibodeau notes that, “The U.S. General Services Administration, as part of data center consolidation and efficiency efforts, has recommended raising data center temperatures from 72 degrees Fahrenheit (22.2°C) to as high as 80 degrees (26.7°C). Based on industry best practices, the GSA said it can save 4% to 5% in energy costs for every one degree increase in the server inlet temperature.”  (Link to Article)  A 5% energy savings is something that makes IT managers really salivate.
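    To put the GSA’s per-degree figure in perspective, here is a back-of-the-envelope sketch. The $1M annual energy cost and the simple linear (non-compounding) treatment of the 4-5% figure are illustrative assumptions, not GSA methodology:

```python
# Rough savings estimate from raising server inlet temperature,
# using the GSA's cited 4-5% energy-cost savings per degree Fahrenheit.
def cooling_savings(annual_energy_cost, degrees_raised, savings_per_degree=0.045):
    """Estimate annual savings, applying the per-degree fraction linearly
    and capping the total at 100%."""
    fraction = min(degrees_raised * savings_per_degree, 1.0)
    return annual_energy_cost * fraction

# Raising from 72°F to 80°F (8 degrees) at 4.5% per degree on a
# hypothetical $1M annual energy bill:
print(cooling_savings(1_000_000, 8))  # roughly $360,000
```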

    eBay’s newest data center in Phoenix, AZ employs open-air cooling technology to reduce energy used for cooling as a percentage of total site power consumption.  (Link to Image)
    So where is the industry?  The article continues that of the roughly 1,000 data centers included in the 2013 Uptime Institute global survey, almost 50% were operating at between 71°F (21.6°C) and 75°F (23.9°C).  The Uptime Institute noted that the survey did not show much change from the previous year.  Incredibly, 37% of data centers were operating at a frigid 65°F (18.3°C) to 70°F (21.1°C).  Some good news was the fact that data centers operating at less than 65°F (18.3°C) decreased from 15% to 6% of those surveyed.  This is a self-selected survey, so the data has to be looked at somewhat cautiously since some data center personnel may not elect to participate, but the data is sobering.

    So what’s the problem?  Server and other electronic equipment suppliers have participated fully in the TC 9.9 guidelines; they are certain that their equipment will operate within specification at the higher temperatures.  Their warranties reflect this.  And yet, other issues exist.

    One may be the issue of poorly controlled buildings.  Older, poorly insulated facilities with dated, less efficient HVAC equipment may be forced to lower the temperature to withstand elevated summer temperatures, especially if they have significant air leakage.  Indeed, in the Boston area the month of July 2013 has been an average of 4°F (2.2°C) hotter than normal, a load that will tax even newer cooling systems.  Finally, the elevated temperatures may only apply to the newer equipment in any given data center.  Many data centers have a collection of equipment in which some of the newest, state-of-the-art servers share space with vintage electronics that need the cooler temperatures to operate without problems.  And changing out equipment to allow a site to raise the temperature will mean assessing all electronic systems, including building facilities.
    So the industry has a dilemma: save energy and operating cost by raising data center temperatures, which could require building, HVAC, and electronic equipment upgrades, or continue to pay higher operating costs.  The flip side is the price to retrofit buildings, systems, and electronic equipment; a cost that would be paid by “Facilities” or “Operations”, not “IT”.
    Image from Slate.com piece about Google’s data center (Link to Image)

    Data center professionals are no different from those in other industries in that making change is hard, and it can come with risks.  And changes to operating protocols are not made lightly when many data centers base their business strategy on reliability guarantees to their customers.  Who among us is willing to stake their professional reputation and possibly their job on a major undertaking that contains variables that may be out of our control?  So a studied approach is called for.  But in the end, the cost of energy will inevitably increase, and the need to implement more powerful servers will be irresistible. When that time comes, raising temperature limits will be examined closely as part of an overall business strategy.  In the meantime, data center personnel may want to check out a recent Slate website post titled “The Internet Wears Shorts”, wherein the author describes Google technicians who work in summer clothes.  The thrust is that Google has achieved significant energy efficiency, partially by running its data centers at “a balmy 80°F” (26.7°C).

    Author: Dave Ruede is VP Marketing at Boston based Temperature@lert (www.temperaturealert.com), a leading developer and provider of low-cost, high-performance temperature monitoring products.  Professional interests include environmental and energy issues as they relate to data centers, clean rooms, and electronics.  Contact: dave@temperaturealert.com

    Full story

    Comments (0)

  • What's the Best Temperature Sensor?

    "What's the Best Temperature Sensor?"

    The Classic Top 10 List: We've all seen (and heard) top 10 lists, and these lists can be both informative and misleading. The attempt to categorize any product or collection of people into a "top" list can be useful for outsiders or uninformed audiences, but overall, these lists tend to be rash generalizations shaped by one particular perspective.

    "The Best Temperature Sensor" In the world of Temperature Sensors and environmental monitors, many people look for a sacred 'list' of Best Temperature Sensors, or some sort of "top 5" reference point that leads to an informed decision. In truth, many prospective buyers are unfamiliar with the sensor market, and are depending on this type of neutral resource to guide them on a path to some sort of "purchaser enlightenment". Unfortunately, this resource does not exist in the natural internet environment, and users rely on technical forums (SpiceWorks) and raw customer feedback (Amazon Reviews, etc). If you're using these resources, there is one important detail to keep in mind when scanning these scattered sources of information.

    What is that Detail?  Application or industry. This is the most important indicator to consider when browsing for 'best temperature sensors'. Not all sensors are created equal, and we can use one of our own products as a simple example. Our USB device is primarily utilized by IT clients that are looking to monitor the temperature of their server rooms, and for this purpose, the USB edition is our first recommendation for any prospect with this exact need. With that said, the USB edition has little or no value to the commercial refrigeration industry (for instance), wherein a computer is not typically in close proximity to a walk-in fridge or freezer. Even where a computer is nearby, the device remains primarily designed for the IT industry, and suggestions to use the USB product for applications outside of IT are, well, misleading. In the larger picture, there are specialized sensors that are designated for particular industries, and it's very rare to find a product that is infinitely flexible, or can be used in every industry. Take recommendations from your peers in your industry, and don't rely on reviews or outsider suggestions if you aren't sure of their application.

    Or take this example. In restaurants and commercial refrigeration, power outages can lead to serious issues. The loss of cooling (from refrigerators or freezers) can cause temperatures to rise dramatically, and as we've discussed in other blog posts, exposure to these high temperatures can lead to bacterial growth. Put another way, in the case of ice cream, the rising temperatures can lead to thousands of dollars in melted deliciousness. In this particular instance, suggesting any kind of temperature sensor or monitoring solution without a backup battery would be pointless. If the device relies solely on AC power, a power outage would disable the device's ability to take readings. Even if a vendor boasts reliability, dependability, or any other relevant buzzword, the absence of a backup power source is a serious hole. The purchase of this device would be a major oversight, and in fact, would not meet the complete set of needs for a commercial refrigeration client.

     If you're looking for a list of best temperature sensors, remember that the 'best' is in the eye of the beholder, and in this type of situation, the beholder will make suggestions from their industry perspective, which may or may not be relevant to your search. Search with an open ear, and remember that the voice of reason in "temperature sensors" is highly dependent on the specific application. There are companies that specialize in sensors for the IT industry, and while they may be reputable vendors, they may not have the best solution for your needs. Listen to your industry voices!

    Full story

    Comments (143)

  • How much is my Server Room worth?

    How much is your Server Room Worth?

    Redundancy and the value of your in-house data

    Way in the back of your office--beyond the marketing mavens and chipper CEOs--is a room of servers. There might be a few 4U servers in a closet,  a handful of database servers in a larger space, or even an entire room with “racks on racks”. Most businesses will have a dedicated space for server equipment, and no matter the size, the overall value of the information can far outweigh the actual costs of the server hardware. Think of it in these terms; how valuable do you consider your “big data”, and what precautions are you undertaking to protect the information?

    Redundancy is one common method and is typically associated with the concept of Disaster Recovery (DR). In fact, a slew of cloud and hosting providers now tout DRaaS (Disaster Recovery as a Service) as a selling point for their solutions. But for smaller-scale SMBs that utilize an in-house data closet, in-house redundancy can be difficult to produce. In-house redundancy may involve the use of vacant servers and equipment that receive copies of all data transmissions and related information. While this is an important concept to consider for your servers, keep in mind that duplicate purchases (of identical equipment for failover) can be costly. Remember that the costs of data loss/leakage (depending on your business size) can be astronomical. Check out these words about data losses from David M. Smith of Pepperdine’s Graziadio School of Business and Management:

    “The final cost to be accounted for in a data loss episode is the value of the lost data if the data cannot be retrieved. As noted earlier, this outcome occurs in approximately 17 percent of data loss incidents. The value of the lost data varies widely depending on the incident and, most critically, on the amount of data lost. In some cases the data may be re-keyed in a short period of time, a result that would translate to a relatively low cost of the lost data. In other cases, the value of the lost data may take hundreds of man-hours over several weeks to recover or reconstruct. Such prolonged effort could cost a company thousands, even potentially millions, of dollars.[12] Although it is difficult to precisely measure the intrinsic value of data, and the value of different types of data varies, several sources in the computer literature suggest that the value of 100 megabytes of data is valued at approximately $1 million, translating to $10,000 for each MB of lost data.[13] Using this figure, and assuming the average data loss incident results in 2 megabytes of lost data, one can calculate that such a loss would cost $20,000. Factoring in the 17 percent probability that the incident would result in permanent data loss, one can further predict that each such data loss would result in a $3,400 expected cost.”
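    Smith’s expected-cost figure follows from simple arithmetic, sketched below using the numbers from the quote:

```python
# Reproducing Smith's expected-cost arithmetic: $10,000 per lost MB,
# an average incident of 2 MB, and a 17% chance the loss is permanent.
VALUE_PER_MB = 10_000   # dollars per megabyte, from the cited literature
AVG_LOSS_MB = 2         # average megabytes lost per incident
P_PERMANENT = 0.17      # probability the data cannot be recovered

incident_cost = VALUE_PER_MB * AVG_LOSS_MB    # $20,000 if unrecoverable
expected_cost = incident_cost * P_PERMANENT   # roughly $3,400 expected
print(incident_cost, expected_cost)
```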

    And with that said, the cost of redundancy (or rescuing data after a failure or disaster) can be difficult to calculate when you consider the man-hours associated with the recovery. As an aside, big data has come under the spotlight recently as an overused buzzword, and the divide between useful data and, well, just data is a difficult line to draw. Marketers in particular are faced with this problem, and sorting through the mountains of data can be both cumbersome and useful. Some consider "big data" to be overrated, in that finding useful information within piles of useless data can be time intensive and wasteful. Regardless, the value of an endless data pool is difficult to calculate (depending on business size and application), but it is significant, and ultimately the consequences of lost data aren’t to be ignored. Protect your data, use redundancy, and seek out other methods of reliability and sustainability for your priceless server rooms and closets.

    Full story

    Comments (0)

  • Particle Filter Series: What You Can't Remove

    Particle Filter Series: What You Can't Remove

    Particle filters may not be enough to protect IT equipment in all environments.

    In this series, we’ve compared data center particle filters to those found in a home HVAC system, mainly to highlight the similarities for filter consideration. In either application, needless to say, particle filters remove particles.  Some remove more particles or smaller particles, some less.  The more particles a filter removes, the less efficiency-robbing dust there will be on heat exchangers, fans, and electronic circuitry.  Unless we use HEPA or ULPA filters (as is done in semiconductor manufacturing to remove very small particles), some will get through.  What is needed is a balance of cost, efficiency, and pressure drop, along with an understanding of the cleanliness of the local environment and the OEM’s requirements.

    But particle filters cannot and do not remove gases such as oxygen and nitrogen.  Of concern here is the fact that particle filters do not remove corrosive gases, those that can corrode exposed metal in electronic assemblies.  Equipment manufacturers often take precautions to prevent corrosion.  Coatings are sometimes used, as well as materials that specifically inhibit corrosion.  But since the implementation of RoHS lead-free initiatives, the materials used for electronics are now less resistant to the effects of corrosive gases than they were before the regulation updates.  Nobody wants to go back to using lead and other environmentally hazardous materials, so additional precautions may be needed.

    What are the corrosive gases of concern?  The most corrosive are two byproducts of combustion: sulfur dioxide (SO2) from automobile exhausts and home heating systems, and hydrogen sulfide (H2S) from burning coal, pulp mills, landfills, and waste treatment plants.  Needless to say, large urban areas known for poor air quality, especially communities situated "downwind" of coal-fired electrical generators, can have elevated concentrations of these corrosive gases in the air.  Other corrosive agents include chemicals containing chlorine or chlorides.  As seaside residents know, rust is an issue on metallic surfaces.  Facilities near coastal areas, as well as those near water treatment plants or pulp and paper mills, are also areas of concern for chloride-based gases.

    The effects of corrosive gases are everywhere: remember your grandmother’s silverware? What needs to be understood is the rate of corrosion.  To help with guidelines, ASHRAE’s Technical Committee 9.9 has issued a report to help IT professionals with the issue of corrosive gases. (Link to ASHRAE guidelines)

    Table 2 from the ASHRAE report titled 2011 Gaseous and Particulate Contamination Guidelines For Data Centers shows a reference to the International Society of Automation’s standard for electronic materials corrosion, ISA-71.04 (1985).  The table describes copper corrosion activity per month and classifies it into four categories: the greater the amount of corrosive gases, the greater the amount of corrosion per unit time.  Table 4 describes ASHRAE’s latest recommendations for Acceptable Limits of Gaseous Contamination.  The ASHRAE document adds silver corrosion activity to the guidelines.  This is due to the use of silver as a replacement for lead in RoHS compliant electronics, and the relatively high reactivity of silver to sulfur-containing gases.  ISA is considering adding silver to the ISA-71.04 standard. Electronic device manufacturers often reference these documents in their equipment warranty policies.
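    The four-category scheme can be sketched as a simple lookup. The thresholds below (in angstroms of copper corrosion film growth per 30 days) are the commonly cited ISA-71.04 severity bands; verify them against the standard itself before relying on them:

```python
# A sketch of the ISA-71.04 copper reactivity classification referenced
# above. Threshold values are assumptions from commonly cited figures,
# not quoted from the standard.
def isa_severity(copper_angstroms_per_month):
    """Map a monthly copper corrosion rate to an ISA-71.04 severity level."""
    if copper_angstroms_per_month < 300:
        return "G1 (mild)"
    elif copper_angstroms_per_month < 1000:
        return "G2 (moderate)"
    elif copper_angstroms_per_month < 2000:
        return "G3 (harsh)"
    else:
        return "GX (severe)"

# A coupon returning 250 Å/month would fall in the mildest band:
print(isa_severity(250))
```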


    Too Small?

    Particle filters, no matter how efficient they may be, cannot filter out these gaseous “particles”. Gas molecules are simply too small, and therefore cannot be removed with this type of filtration.

    Several companies specialize in determining the level of corrosion on copper and silver strips of metal in data centers and other environments.  Typically they employ metal strips referred to as “coupons” that are placed in the IT space for thirty days and then sent to be analyzed.  The result is a report describing the level of corrosion per month as well as the types of gases present.  In many locations, the levels of corrosive gases in the data center are low enough to be considered safe and within OEM specification.


    Figures 3a, 3b: Corrosion Classification Coupons, also known as Reactivity Monitoring Coupons, from two suppliers for use in Data Centers based on methods described in ASHRAE TC 9.9 and ISA-71.04 guidelines.

    (Link to Image)
    Large urban centers in Asia and South Asia can experience high levels of corrosive gases in the local atmosphere. Keep in mind, urban centers in Europe and North America have also experienced high levels, especially during home heating season.  Figures 4a and 4b show results from the US EPA Acid Rain Program.  The program was founded under the 1990 Clean Air Act, and the figures show dramatic decreases in SO2 (4a) and Sulfate (4b) concentrations. (Link to Source)  Data centers that are immediately downwind of sources will need to monitor corrosive gas levels in the local atmosphere to understand if they are in areas of concern.  Those considering air side economizers will want to place some coupons outside, specifically near the make-up air intake.

    Figure 4a (Above): SO2 levels in Eastern USA in 1989-1991 and 2007-2009

    Figure 4b (Above): Sulfate levels in Eastern USA in 1989-1991 and 2007-2009

    For IT managers in locations with high levels of corrosive gases, what's the best plan of action?  Fear not, there are solutions.  The next and final post in this series will provide some guidance.

    Full story

    Comments (0)
