temperature@lert blog

  • Dawn of Solar Data Centers?


    Projects by major players point to the readiness, costs, and benefits of solar power for data centers.


    Water, water everywhere,

    And all the boards did shrink.

    Water, water everywhere,

    Nor any drop to drink.

    (The Rime of the Ancient Mariner, Samuel Taylor Coleridge)


    Data center managers must feel a lot like Coleridge’s Ancient Mariner when they look out the window (assuming their offices have any windows). Like the sailors on Coleridge’s voyage, data center professionals are surrounded by free power from the wind, sun, water, the earth’s heat, and biofuel, yet none of it is usable as-is to feed the insatiable demands of the equipment inside the vessel. Despite this challenge, there have been several interesting projects built around green energy sources. This piece in the data center energy series explores solar photovoltaic technology to help determine whether it can provide cost-effective, reliable power to data centers.


    Left: Engraving by Gustave Doré for an 1876 edition of the poem, "The Albatross," depicting 17 sailors on the deck of a wooden ship facing an albatross. Right: A statue of the Ancient Mariner, with the albatross around his neck, at Watchet, Somerset in southwest England, where the poem was written. (Link to Source - Wikipedia)


    Solar-powered data centers have been in the news recently, primarily due to projects by Apple and Google. In an effort to build a green data center, Apple’s 500,000 sq. ft. site in Maiden, North Carolina is powered in part by a nearby 20-acre, 20-megawatt (MW) solar array. The site also has a 10-MW fuel cell array that uses “directed biogas” credits as the energy source. (Link to Apple Source) The remainder of the power needed for the site is purchased from the local utility, with Apple buying renewable energy credits to offset the largely coal- and nuclear-generated Duke Energy electricity. Apple sells the power from the fuel cells to the local utility in the form of Renewable Energy Credits used to pay electric utility bills. Apple expects that the combination of solar photovoltaic panels and biogas fuel cells will allow the Maiden data center to run on 100% renewable energy or energy credits by the end of the year. Several lesser-known companies have also implemented solar initiatives, but those projects have received far less coverage.


    Left: Apple Maiden, NC data center site shows solar array in green (Link to Source - Apple); Right: Aerial photo of site with solar array in foreground (Link to Source - Apple Insider)


    It will be instructive to follow reports from Apple to determine the cost-effectiveness of the company’s green approach. That said, many if not most companies do not have the luxury of being able to build a 20-acre solar farm next to the data center. Most have neither the cash to invest in such projects nor the corporate cachet of Apple to get them approved, so initiatives such as Maiden may be few and far between. Still, there’s a lot of desert land ripe for solar farms in the US Southwest. Telecommunication infrastructure may be one limitation, but California buys a lot of its electrical power from neighboring states, so anything is possible.

    What about solar power for sites where the data center is built in more developed areas: is there any hope? Colocation provider Lifeline Data Centers announced that their existing 60,000 sq. ft. Indianapolis, Indiana site will be “largely powered by solar energy”. (Link to Source - Data Center Dynamics) In a piece titled Solar Data Center NOT “Largely Solar Powered”, author Mark Monroe drew on his own solar panel installation and took a look at the numbers behind this claim. Lifeline is planning to install a 4-MW utility-grade solar array on the roof and in the campus parking lot by mid-2014. Monroe makes a rough estimate of how much of the data center’s power needs the solar array will actually cover.

    Assuming the site’s PUE is equal to the Uptime Institute’s average of 1.64 and taking into account the photovoltaic array’s operating characteristics (tilt angle, non-tracking), site factors (sun angle, cloud cover), etc., Monroe calculates that 4.7% of the site’s total energy and 12% of the overhead energy will be available from the solar installation. At an industry-leading PUE of 1.1, the installation would provide 7% of the total energy and 77% of the overhead energy. Monroe notes that while these numbers are a step in the right direction, Lifeline’s claim of a data center “largely powered by solar energy” is largely not supported by the facts. His piece notes that even Apple’s Maiden site, with 20 acres of panels, only generates about 60% of the total energy needed by the site overhead and IT gear. Lifeline would need to add an extra 6 MW of solar capacity and operate at a PUE of 1.2 to reach Net Zero Overhead.
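    For readers who want to reproduce this kind of estimate, the arithmetic is simple: average PV output divided by total facility draw (IT load × PUE) gives the total-energy fraction, and dividing instead by the overhead portion (total minus IT) gives the overhead fraction. Below is a minimal sketch; the 8-MW IT load and ~15% capacity factor are our illustrative assumptions, not figures from Monroe's piece, though they land close to his percentages.

```python
# A minimal sketch of Monroe-style solar-coverage arithmetic.
# The IT load and capacity factor below are illustrative assumptions.

def solar_fractions(it_load_kw, pue, array_kw, capacity_factor):
    """Return (fraction of total energy, fraction of overhead energy)."""
    total_kw = it_load_kw * pue                # average facility draw
    overhead_kw = total_kw - it_load_kw        # cooling, power losses, etc.
    solar_avg_kw = array_kw * capacity_factor  # average PV output
    return solar_avg_kw / total_kw, solar_avg_kw / overhead_kw

# Assumed: 4 MW nameplate array, ~15% capacity factor (fixed-tilt,
# non-tracking, Midwest weather), 8 MW IT load.
for pue in (1.64, 1.1):
    f_total, f_overhead = solar_fractions(8000, pue, 4000, 0.15)
    print(f"PUE {pue}: {f_total:.1%} of total, {f_overhead:.1%} of overhead")
```

    Note how the overhead fraction swings with PUE: the same array covers roughly 12% of overhead at a PUE of 1.64 but about 75% at 1.1, which is why efficiency claims and solar claims have to be read together.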

    I am curious to see hard data from these and other solar photovoltaic projects for data centers: actual costs, performance figures, and financial incentives (tax considerations, power contracts, etc.) that the industry can review to determine whether solar is the right approach for its electrical power needs. Although such disclosure is unlikely due to competitive considerations, it would greatly assist the industry in promoting green initiatives and help take the spotlight off headlines criticizing the “power hungry monster”.

    All efforts to improve industry efficiency and reduce energy consumption are steps in the right direction. Companies like Lifeline Data Centers that don’t have the deep pockets of Apple or Google are taking steps toward the goal of Net Zero Overhead. The challenge for data center operators that initiate green energy or efficiency projects will be to promote these efforts without making headline-grabbing claims that are not well supported by the data. As Launcelot Gobbo tells Old Gobbo in Shakespeare’s The Merchant of Venice, “but at the length truth will out.” “Green powered” and “energy independent” are claims that need to be examined carefully to maintain industry credibility and goodwill, or “truth will out.”



  • Does Cogeneration Yield a Suitable RoI in Data Centers?


    What does the data say?

    This is the second of two pieces on Cogeneration, or CHP.  The first explored the topic; this one examines the RoI of a technology proven in other industries as applied to data centers.

    As the data center industry continues to consolidate and competition becomes more intense, IT professionals understand the pressure on both capital and operating budgets.  They are torn by two competing forces: faster and more reliable vs. low cost and now.  IT equipment improvements arrive continuously, and the desire to upgrade always calls.  Reliability has become the mantra of hosted application and cloud customers, and although electrical grid failures are not counted as “failures against uptime guarantees” by some, businesses affected by outages feel the pain all the same.  And where solutions exist, management pressure to implement them quickly and at low cost is always a factor.

    Cogeneration is typically neither fast nor cheap, but it does offer an alternate path to reliability and uptime.   As with all major investments that require sizable capital and space, the best time to consider cogeneration is during data center construction.  That said, data centers operating today are not going anywhere soon, so retrofit upgrade paths are also a consideration, especially in areas where electric power from the local utility has become less reliable over time.  So when should data center professionals consider cogeneration or CHP?  Fortunately, there are studies available on public websites that help provide answers.

    University of Syracuse data center exterior; Microturbines in utility area (Link to Source)

    One such study is an installation at the University of Syracuse.  Opened in 2009, the 12,000 ft² (1,100 m²) data center with a peak load of 780 kW employs cogeneration and other green technologies to squeeze every ounce of energy out of the system. (Link to Source)  The site’s 12 natural gas fueled microturbines generate electricity.  The microturbines’ hot exhaust is piped to the chiller room, where it is used to generate cooling for the servers and both heat and cooling for an adjacent office building.  Technologies such as adsorption chillers that turn heat into cooling, reuse of waste heat in nearby buildings, and rear-door server rack cooling that eliminates the need for server fans complete what IBM calls its greenest data center yet.

    Left: Heat exchanger used in winter months to capture waste microturbine heat for use in nearby buildings; Right: IBM “Cool Blue” server rack heat exchangers employ chilled water piped under floor.

    This is certainly an aggressive project, but can the cost be justified with a reasonable Return on Investment?  Fortunately, data has recently been released to quantify the energy conservation benefits.  PUE performance measured during 2012 was presented at an October 2013 conference and showed a steady PUE between 1.25 and 1.30 during the period, a value that compares very favorably with the typical data center PUE of 2.0.  (The Uptime Institute’s self-reported average PUE is 1.65, with qualifications; a Digital Realty Trust survey of 300 IT professionals at companies with annual revenues of at least $1 billion and 5,000 employees revealed an average PUE of 2.9.)  (Link to Sources: Uptime Institute, Digital Realty Trust)

    IBM/SU Green Data Center 2009 Goals (Link to Source); 2012 Actual Performance (Link to Source)

    So how can we calculate the actual RoI and compare it to the projected goals?  First, the goals stated in the table on the left show savings of $500,000+ per year.  Another presentation, by the microturbine supplier, shows a goal of $300,000 per year: quite a difference.  So how do we know what the actual savings are?  We don’t, since there is no reference site where an identical data center in an identical location operates without CHP.  We can use the 2.0 average PUE and calculate the energy savings, but that’s not a real answer.  We also need to take into account tax incentives and grants, such as the $5 million for the Syracuse University project, to determine the cost to non-subsidized projects.  Hopefully, project managers will provide more information to help data center operators better understand the actual savings as the project matures.
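    Without a reference site, the only savings estimate available is the one hinted at above: assume a baseline PUE and price the difference. A minimal sketch follows; the utilization, electricity price, and baseline are our assumptions, not published Syracuse project data.

```python
# A rough sketch of PUE-delta savings. All inputs are illustrative
# assumptions, not published SU/IBM project figures.

def annual_savings_usd(it_load_kw, pue_actual, pue_baseline, usd_per_kwh):
    """Energy-cost savings from running at pue_actual instead of pue_baseline."""
    hours_per_year = 8760
    kwh_saved = it_load_kw * (pue_baseline - pue_actual) * hours_per_year
    return kwh_saved * usd_per_kwh

# Assumed: 780 kW peak load at ~60% average utilization, $0.10/kWh,
# measured PUE 1.27 vs. an assumed 2.0 "typical" baseline.
print(f"${annual_savings_usd(780 * 0.6, 1.27, 2.0, 0.10):,.0f} per year")
```

    With these assumptions the figure lands near the microturbine supplier’s $300,000 goal, which mostly illustrates how sensitive any “savings” number is to the baseline you pick.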

    CHP for data centers is presented with an array of benefits, including improved reliability through less dependence on grid power, lower power costs, and a reduced carbon footprint.  NetApp installed CHP in their Silicon Valley data center to reduce their reliance on grid power amid frequent rolling brownouts and uncertain power market costs.  Their experience is less instructive because the site’s use of direct air cooling reduces its need for mechanical cooling; as a result, the CHP system is used only when the utility is strained.  It is difficult to find quantitative data for modern installations.   While the data seems encouraging, actual energy cost savings are not provided.  We will watch the progress at this and other projects over the next several months to see if CHP costs yield an acceptable RoI via reduced energy costs.  Stay tuned.


  • Does Cogeneration Have a Role in Data Centers?

    Operators have many options to consider.


    An earlier piece in this series, titled Data Centers as Utilities, explored the idea that emergency backup power systems in data centers could be used to supply the utility with peak demand power when the grid is running near capacity and the data center’s emergency generators are not needed.  But what about the idea that data centers generate their own power to reduce reliance on the grid?  There are several approaches, particularly in the green energy space, that will be explored in future pieces.  One that is readily available, and may make sense for data centers to consider, is called cogeneration or Combined Heat and Power, CHP for short.

    CHP is not new; it has been used in more traditional industries for decades, primarily heavy industries with large energy needs, steel and paper mills for example.  Cogeneration for data centers has been in the news for quite some time but has had a relatively low adoption rate.  After all, data center operators try to put their capital into IT infrastructure; the utility and facility sides are often looked at as necessary added cost.  But with reports that grid capacity and reliability may not be able to address the growth or reliability needs of the industry, operators are taking a fresh look at options such as self-generation.   Low natural gas prices are also a factor, since operators may be able to secure the fuel for their own operations more cheaply than through electric utilities.

    As early as 2007, the US Environmental Protection Agency highlighted the potential of cogeneration in the future of data centers in a piece titled The Role of Distributed Generation and Combined Heat and Power (CHP) Systems in Data Centers. (Link to Source)  With advances in the technology, changes in energy costs, and greater emphasis on grid capacity and reliability as it pertains to data centers, cogeneration has received a significant boost with sponsorship from companies such as IBM.

    US-sponsored report table showing various technology applications, all under the CHP or cogeneration name. (Link to Source)

    There are several approaches to cogeneration or CHP.  The EPA report shows applications of several technologies that fall under the CHP or cogeneration umbrella.  Recent installations include five gas engine powered cogeneration units in a Beijing data center. According to one report, “Powered by five of GE’s 3.34-megawatt (MW) cogeneration units, the 16.7-MW combined cooling and heating power plant (CCHP) will offer a total efficiency of up to 85 percent to minimize the data center’s energy costs.” (Link to Source) The project is sponsored by the China National Petroleum Corporation and represents the trend toward distributed energy production in high-usage industries.  eBay’s natural gas powered Salt Lake City site plans to deploy a geothermal heat recovery system to produce electricity from waste heat. (Link to Source)
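    The “total efficiency of up to 85 percent” figure is just the CHP energy balance: electrical output plus recovered heat, divided by fuel input. A minimal sketch, with illustrative splits rather than GE’s published unit specifications:

```python
# A minimal sketch of CHP "total efficiency" arithmetic.
# The 40%/45% split below is an illustrative assumption.

def chp_total_efficiency(fuel_in_kw, electric_out_kw, heat_recovered_kw):
    """(Electricity + useful recovered heat) / fuel energy input."""
    return (electric_out_kw + heat_recovered_kw) / fuel_in_kw

fuel_kw = 8350.0               # natural gas energy input
electric_kw = 0.40 * fuel_kw   # assumed electrical efficiency
heat_kw = 0.45 * fuel_kw       # assumed exhaust/jacket heat recovery
print(f"{chp_total_efficiency(fuel_kw, electric_kw, heat_kw):.0%}")  # -> 85%
```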

    Example of Micro Turbine or Fuel Cell CHP layout (Link to Source)

    Data from projects at the University of Syracuse and University of Toledo data centers will be examined in a companion piece to demonstrate the potential RoI for CHP.

    University of Toledo Natural Gas Fired Micro Turbine Cogeneration Plant. (Link to Source)


  • LEED for Existing Buildings: Hotels and Property Management


    If you aren't familiar with the LEED certification for existing buildings, hotels, and commercial properties, here's a snippet from the USGBC website that explains the rationale and story behind the certification.

    "LEED stands for Leadership in Energy and Environmental Design.

    Developed by the U.S. Green Building Council (USGBC), LEED is an internationally recognized certification system that measures how well a building or community performs across these metrics: energy savings, water efficiency, CO2 emissions reduction, improved indoor environmental quality, stewardship of resources and sensitivity to their impacts. LEED provides building owners and operators a concise framework for identifying and implementing practical and measurable green building design, construction, operations and maintenance solutions. 

    LEED applies to virtually all building types -- commercial as well as residential. It works throughout the building lifecycle -- design and construction, operations and maintenance, tenant fitout and significant retrofit."


    Green certification and energy efficiency are 'hot button' issues for many of our hotel owners and property managers. As a general note, all property owners should be aware that LEED certification depends on their implementation of green/efficient practices, technologies, and standards. To look at some of these parameters a bit more closely, we've consolidated a few of the 'existing building' credits into a simple guide (along with their LEED value). The values or 'credits' are the building blocks of LEED certification, and a higher score leads to a higher certification level (e.g., Silver vs. Platinum). All of these suggestions are taken directly from the US Green Building Council website and do not represent organic suggestions from Temperature@lert.
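    As a rough illustration of how credit points roll up into a certification level, the sketch below uses the commonly published LEED thresholds (40/50/60/80 points); treat these as an assumption and confirm against current USGBC rules for your rating system.

```python
# A small sketch mapping a LEED point total to a certification level.
# Thresholds are the commonly published ones; confirm with USGBC.

def leed_level(points: int) -> str:
    if points >= 80:
        return "Platinum"
    if points >= 60:
        return "Gold"
    if points >= 50:
        return "Silver"
    if points >= 40:
        return "Certified"
    return "Not certified"

print(leed_level(55))  # -> Silver
```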

     

    Advanced Energy Metering (2 points)

    Goal: To support energy management and identify opportunities for additional energy savings by tracking building-level and system-level energy use.

    Program the facility’s energy management system to set an alarm whenever energy consumption or peak demand rises more than 5% above the anticipated amount. The anticipated consumption and peak should be determined by analyzing historical facility performance, weather, and operating conditions, and should be reviewed at least monthly, preferably daily. Demand measurements must be taken in time increments no longer than those used for utility billing, or in one-hour increments, whichever is shorter. On at least a monthly basis, report the facility’s utility peak demand and total consumption and compare them with the data for the previous month and the same month of the previous year.
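    In practice the 5% alarm is a one-line comparison once the anticipated values exist. A minimal sketch, with hypothetical meter readings standing in for a real energy management system feed:

```python
# A minimal sketch of the 5% alarm logic described above.
# Meter readings and anticipated values are hypothetical inputs.

def energy_alarms(metered_kwh, anticipated_kwh,
                  metered_peak_kw, anticipated_peak_kw, tolerance=0.05):
    """Return alarm messages for readings more than 5% above anticipated."""
    alarms = []
    if metered_kwh > anticipated_kwh * (1 + tolerance):
        alarms.append(f"Consumption {metered_kwh} kWh is >5% above "
                      f"anticipated {anticipated_kwh} kWh")
    if metered_peak_kw > anticipated_peak_kw * (1 + tolerance):
        alarms.append(f"Peak demand {metered_peak_kw} kW is >5% above "
                      f"anticipated {anticipated_peak_kw} kW")
    return alarms

print(energy_alarms(1280, 1200, 410, 400))  # consumption alarm only
```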

     

    Renewable Energy and Carbon Offsets (up to 5 points)

    Goal: To encourage the reduction of greenhouse gas emissions through the use of local and grid-source renewable energy technologies and carbon mitigation projects.

    Meet at least some of the building’s total energy use directly with renewable energy systems, or engage in a contract to purchase green power, carbon offsets, or Renewable Energy Certificates (RECs). Green power and RECs must be Green-e Energy Certified or the equivalent. RECs can be used only to mitigate the effects of Scope 2, electricity use. Carbon offsets may be used to mitigate Scope 1 or Scope 2 emissions on a metric ton of carbon dioxide–equivalent basis and must be Green-e Climate certified, or the equivalent. For U.S. projects, the offsets must come from greenhouse gas emissions reduction projects within the United States.

     

    Thermal Comfort (1 point)

    Goal: To promote occupants’ productivity, comfort, and well-being by providing quality thermal comfort.

    The monitoring system must meet the following requirements. Continuous monitoring: monitor at least air temperature and humidity in occupied spaces, at sampling intervals of 15 minutes or less. Monitor air speed and radiant temperature in occupied spaces; using handheld meters is permitted. An alarm must indicate conditions that require system adjustment or repair. Specify procedures for adjustments or repairs to be made in response to problems identified. All monitoring devices must be calibrated within the manufacturer’s recommended interval.
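    The continuous-monitoring requirement boils down to a sampling loop with threshold checks. A minimal sketch, with a stub sensor read and an assumed comfort band in place of a real building management system:

```python
# A minimal sketch of the 15-minute comfort-monitoring loop.
# The sensor read is a stub and the comfort band is an assumption.

import time

COMFORT_TEMP_F = (68.0, 76.0)   # assumed comfort band, degF
COMFORT_RH_PCT = (30.0, 60.0)   # assumed comfort band, %RH

def read_sensor():
    """Stub standing in for a real temperature/humidity sensor."""
    return 72.4, 45.0

def monitor(interval_s=15 * 60):
    while True:
        temp_f, rh = read_sensor()
        if not COMFORT_TEMP_F[0] <= temp_f <= COMFORT_TEMP_F[1]:
            print(f"ALARM: temperature {temp_f} degF outside comfort band")
        if not COMFORT_RH_PCT[0] <= rh <= COMFORT_RH_PCT[1]:
            print(f"ALARM: humidity {rh} %RH outside comfort band")
        time.sleep(interval_s)  # 15-minute sampling per the credit
```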

     

    As the third suggestion indicates, a temperature monitoring system can be a small piece of a larger green puzzle. While these suggestions attempt to drive the green standard for LEED certification of existing buildings, they also serve as cornerstone 'concern points' for property managers. A temperature monitoring system is designed to promote occupant comfort, but more importantly, it should serve as an alerting beacon that informs the operator or owner of a potential temperature-related problem.

    For our hotel owners and property managers, rising or falling temperatures can indicate A/C failure or malfunction of other HVAC/R equipment. With a temperature monitoring system, imminent adjustments and repairs to other equipment can be flagged by rising or falling temperatures. The alerting procedures are an important window into the temperature status within the building(s), and are thereby crucial for any owner who cannot be on site at all times. Guests and/or tenants should be the last line of defense when it comes to building management concerns; to that extent, guests should not be the primary whistleblowers for problems related to HVAC/R and temperature control. By installing a monitoring system, an owner can keep management responsibilities out of the minds of guests and, with the proper alerting capability, will benefit tremendously from the invisible layer of supervision it provides. Check out the US Green Building Council LEED website for more information on energy efficiency, property management, and other tactics for achieving LEED certification.


  • Consideration of High Temperature Ambient Environments and Free Cooling in Data Center Operation

    Directly from the original post: http://www.datacenterpost.com/2013/01/consideration-of-high-temperature.html

     


     David Ruede, VP Marketing at Temperature@lert, says:

    Techies love acronyms, and IT professionals are masters of the jargon. Where else would we find such gems as CRAC, PUE, SaaS, DCIM, VoIP and VPN among the scores if not hundreds of options for the next big idea?

    Why do we need these when The Free Dictionary lists 259 phrases for the acronym DC alone? (Link 1)  First, we love to speak in shorthand.  Time is always too short; things need to be done quickly.  Speaking in acronyms makes us insiders, the elite few who can feel the bits and petabytes flowing through the veins and arteries of the interconnected web of the virtual world.  And short of a Vulcan Mind Meld, acronyms save time, although one could argue that when they are used in meetings, there may be a few attendees who don’t really understand the meaning but, not wanting to appear “stupid”, don’t ask.

    Many of these terms started off as marketing terms.  Why would we need CRAC when AC may be sufficient?  And why is PUE debated daily as to its true meaning in professional social media sites?  Every data center operator, supplier and professional looks to set themselves or their companies apart from the competition.  I’ll argue this is a good thing because it makes web searches easier – I don’t have to sort through hundreds of household air conditioners sold in retail outlets to find what I need for a data center, server or telecom room.

    Recently a new acronym has been making its way into the jargon.  HTA, High Temperature Ambient, has cropped up in several professional periodicals and online marketing pieces.  The phrase is used to describe the benefits of reduced energy consumption in data centers and other IT facilities that operate at what many consider higher than “normal” temperatures, say 30°C (86°F) for example.  Described in earlier pieces as high ambient temperature or high temperature in the ambient, the idea of running data centers at higher temperatures has gained prominence as a way to save electrical energy, a very costly piece of the data center’s operating budget.  Often used with terms like “free cooling” or “air side economizers”, the idea is that today’s servers have been specified to run at higher temperatures than those just a few years ago, so operating equipment at higher temperatures has no detrimental effect.

    In April 2012, Intel published a study of the potential energy savings in green data center maker Gitong’s modular data centers.  The Shanghai study showed a cost reduction of almost $33,000 per year, which is significant.

    Figures 1a, 1b: Tables showing before and after HTA results - Source: Intel Link 2

    While saving energy is a very desirable goal, data center, server and telecom room operators are well served to understand the underlying assumptions behind “turning up the heat and opening up the doors and windows”.  First, all of the equipment in an IT space comes with manuals, and the manuals specify operating conditions. Ensuring all of the equipment in the ambient environment is able to run at elevated temperatures is highly recommended, particularly since older devices or appliances may be more prone to heat-related performance degradation.  ASHRAE’s TC 9.9 2011 Thermal Guidelines for temperature and humidity control are a good reference for where to start when designing or setting up an HVAC system. (Link 3)

    Second, while the HVAC systems in IT spaces are generally well designed and provide adequate airflow to the equipment, time has a way of changing things.  Profiling the temperature of the data center with sufficient resolution, to see whether changes in operation or additions of equipment have created “hot spots” and to ensure each rack or piece of equipment is operating within specification, can be done with existing equipment by moving temperature sensors to areas not normally monitored during the temperature mapping process.

    Third, changes in temperature cause changes in relative humidity.  Continuous monitoring of not only temperature but also relative humidity, before and after raising the temperature, is recommended to ensure both of these critical parameters stay within the manufacturer’s specification.
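    The reason is basic psychrometrics: for a fixed amount of moisture in the air, relative humidity falls as temperature rises. The sketch below quantifies this with the Magnus approximation for saturation vapor pressure; the example numbers are illustrative.

```python
# A sketch of how RH shifts when the setpoint is raised, using the
# Magnus approximation. Example temperatures/RH are illustrative.

import math

def sat_vapor_pressure_hpa(temp_c):
    """Magnus approximation for saturation vapor pressure, hPa."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def rh_after_warming(rh_start_pct, t_start_c, t_end_c):
    """RH after warming air at constant moisture content."""
    vapor_hpa = rh_start_pct / 100.0 * sat_vapor_pressure_hpa(t_start_c)
    return 100.0 * vapor_hpa / sat_vapor_pressure_hpa(t_end_c)

# Air at 22 degC / 50% RH warmed to 30 degC drops to roughly 31% RH.
print(f"{rh_after_warming(50.0, 22.0, 30.0):.0f}% RH")
```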

    And if IT professionals decide to employ “free cooling” by figuratively “opening up the doors and windows”, they would be well advised to check ASHRAE’s TC 9.9 Gaseous and Particulate Contamination Guidelines for Data Centers and again their supplier manuals for specification compliance. (Link 4)

    Figure 2: Ambient Air Cooling Unit (Link 5)

    Much has been written about free cooling; a June 2012 article is a good example. (ref. Link 5)  Cooling may indeed be “free”, and many sites can and do combine free cooling with HTA to make significant reductions in their energy bills.  As with all good ideas, “first, do no harm” is a good motto.  IT professionals may be well served to verify and validate the assumptions against best practices as they apply to their sites before any significant changes in operation are made.


  • Essential Tech Check List: Building & Retrofitting Your Server Room

    Whether you're building a server room, adding on, or moving equipment, there are many considerations to mull over. From the basics to alarm systems, it is important to ensure your server room is efficient and to protect your mission critical equipment. Previously in our blog, we have addressed the issues surrounding the microclimate present in your server room; however, it is also critical to understand how a server room should be laid out and managed. Use our checklist as a guide for promoting security, efficiency, and productivity:

    Our Essential Tech Check List

    (1) Your Basics of Space

    • Examine the layout of the space and how many units of space you have to work with.

    • The walls (including the ceiling) and doors should isolate the sounds your equipment creates.

    • Check which way the door opens. There should be no windows or entry points other than the doors to the room.

    • Consider the floor and whether your equipment will need raised flooring. Aim for anti-static floor finishing to prevent unwanted static charge.

    • Make sure there is enough clearance for racks and that they are stable enough to hold your equipment.

    • Check for aisle clearance too; make sure you have enough room for exhaust to escape without overheating nearby equipment.

    • Think about whether you need ladder racks, cabinets, shelves, patch panels, or rack mounts.

    • Take the weight and size of each piece of equipment into consideration when designing the layout.


    (2) Keeping Your Cool

    • Check what type of centralized cooling is available, whether under-floor air distribution or an air duct system.

    • If there is no centralized system available, get an air conditioner or cooling unit that can keep your equipment working productively while minimizing energy consumption and costs.

    • If at all possible, fresh air vents are great and save on energy costs and consumption!

    • Remove any and all radiators or other heating equipment currently present in the room. You don't need to add heat at all!

    • Monitor your cooling system(s) to make sure they are working properly, especially when no one is there.

    • Make sure your cooling units are not too close to your electrical equipment; think condensation and flooding. Do not place air conditioning units over your servers.

    • Monitor the humidity to prevent static charge and electrical shorts.

    • See if a chilled water system is in the budget, or find something within the budget constraints to ensure that the hot air has somewhere to go.

     

    (3) Using Your Power

    • Check that you have enough outlets to support power to all your equipment without overloading them.

    • Get backup power, preferably a UPS, to prevent data loss from power blinks or outages.

    • Don't surpass the maximum electrical intensity per unit of space.

    • Consider shutdown capabilities of equipment (SNMP traps, for example).

    • Make sure your equipment is grounded.

    • Monitor for power outages if you are not using backup power systems.

    • Monitor your backup power systems to make sure your mission critical equipment is not failing due to power loss.

     

    (4) Keeping Secure & Safe

    • Have at least one phone present in the room in case of emergencies.

    • Check for a preexisting fire alarm system, and install one if there isn't one.

    • Get a fire suppression system if there is not one there. Consider whether you will have a wet or dry suppression system and the effects that will have on your equipment. (Halon is a great choice!)

    • Have reliable contacts to help resolve issues immediately, or form a system of escalation.

    • Monitor for flooding, especially if it has happened in the past.

    • Secure entrances/exits; this is expensive equipment with critical data, and you don't want just anyone in there messing around!

     

    (5) Other Considerations

    • Get the best cabling/wiring available within budget constraints.

    • Keep extra cabling/wiring around, because you never know when you may need it.

    • Consider color coding wires/cables; a little more work now but definitely a time-saver in the future!

    • Think about lighting: location & heat produced.

    • If someone is sharing the space, get them some earplugs! It's going to be loud in there with the equipment running.

    • Consider networking/phone lines being run in there and how much space you have left after that.

    • Plan for future expansion or retrofitting (again).

    • Leave the service loops in the ceilings.

    • Label outlets.

    • Get rid of dust; your equipment hates it!

    • Check if you have a rodent/pest problem.

    • Cover emergency shutoff switches so they can't be accidentally triggered.

    • Try to centralize the room in the building so that you can avoid using more cabling/wiring than you need to.

    • Meet OSHA and ASHRAE guidelines as well as local codes.


    Is your server room, or a server room you know of, not being monitored for temperature? Are you concerned with energy consumption, the ability to monitor off-hours, and/or preventing mission critical equipment from failing? If you or someone you know is experiencing such issues, we want to hear from YOU!

    We will be giving away ONE FREE USB DEVICE per month to the server room with the most need! Valued at $129.99, Temperature@lert USB Edition is a low-cost, high-performance device that monitors the ambient temperature in your server room and alerts you via email when the temperature rises or falls outside your acceptable range.

    Please send a brief description, pictures, and/or videos to diane@temperaturealert.com for consideration! Our team will select one winner each month based on description and need, because we firmly believe that companies in every industry 



  • Top 3 Reasons to Monitor Your Server Room / Data Center

    It's 2013: a new year with a smaller budget and, of course, a higher expectation of equipment efficiency. To reach this higher level of efficiency while meeting budget constraints, you essentially need to extend the lifespan of your equipment. Extending the lifespan requires a monitoring system that ensures your equipment is operating in an acceptable range of environmental conditions. Here are our Top 3 Reasons to Monitor Your Server Room / Data Center:

     

    (1) Protect Your Mission Critical equipment from Failure

    The humming of servers is generally a good indicator that equipment is working diligently. However, with the increase in productivity comes an increase in temperature created by your hardworking equipment. Although ASHRAE did increase the temperature envelope to 80.6°F for data centers, many still try to push the envelope in order to promote higher efficiency while lowering energy costs and usage. To achieve this, you would need to use fewer coolers and chillers yet still run equipment at a high rate of productivity, as at Google's data center in Belgium, which has been deemed Google's most efficient data center.

    Innovative approaches to running your servers and other technical equipment at a higher temperature have greatly improved productivity levels while lowering energy costs. However, not every company has the budget for the latest in server room and data center technology. Less technologically advanced servers that try to run at higher productivity in hotter climates can fail, resulting in damaged or melted equipment as well as data loss, not to mention unhappy IT people crammed into that hot room.


    (2) Inability to Physically & Personally Monitor After Hours

    In the IT realm, servers are most certainly mission critical; however, servers are rarely viewed as a life or death matter. Considering how much data and information has been collected and stored, these pieces of equipment surely serve an important purpose to all. After all, technology is the backbone supporting a company's operations nowadays.

    Just like a human cannot function at high efficiency without a healthy spine, it is very difficult for a company to function productively without technology in such a tech-savvy time. But since servers are not often seen as mission critical by those outside the IT realm, there is often no budget for monitoring them. Often overlooked and forgotten, these pieces of mission critical equipment are rarely assigned anyone to watch them after hours once IT staff have left for the day. Leaving them unmonitored can result in not only informational loss but financial loss as well: during 2009, an estimated $50 million to $100 million in losses occurred due to environmental issues going unmonitored!


    (3) Be Green Friendly: Lower Energy & Costs

    With decreased budgets and increased efficiency expectations, along with green and sustainability initiatives to meet, IT staff are forced to make do. This means working in hotter environments in order to run machines at full productivity while not overusing the air conditioning, cooler, chiller or HVAC systems. Even Google's data center in Belgium uses only fresh air to cool the equipment. Despite the risks of high temperature, many must make these choices in order to meet departmental changes.

    By at least monitoring temperature, you can help extend the lifespans of your servers. Given that running them at higher temperatures is a must, making sure your servers are not working in too hot an environment is crucial. At some point, the envelope will be pushed to such an extent that equipment will malfunction and even melt. By judiciously limiting use of cooling & HVAC systems, you can save costs and lower energy consumption while still protecting your mission critical equipment. By using temperature monitoring equipment with SNMP traps, you can even program a shutdown mode for your equipment if the temperature threshold is breached, as in the sketch below.
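    As a concrete illustration of that last point, here is a minimal sketch of threshold-triggered shutdown. The temperature read is a hypothetical stub; in a real deployment the trigger would arrive as an SNMP trap from the monitoring device, and the shutdown command assumes a Linux host with the necessary privileges.

```python
# A minimal sketch of threshold-triggered shutdown. The sensor read is
# a hypothetical stub; a real setup would react to an SNMP trap.

import subprocess
import time

THRESHOLD_F = 90.0  # assumed shutdown threshold, degF

def read_room_temp_f():
    """Stub standing in for a sensor read or a received SNMP trap."""
    return 84.2

def watch(poll_s=60):
    while True:
        temp_f = read_room_temp_f()
        if temp_f >= THRESHOLD_F:
            print(f"{temp_f} degF breached {THRESHOLD_F} degF; shutting down")
            subprocess.run(["shutdown", "-h", "+1"])  # graceful halt (Linux, needs root)
            break
        time.sleep(poll_s)
```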

    By taking the initiative to meet all the new requirements, from budget to sustainability, with temperature monitoring, you will be able to prevent disaster instead of having to clean up melted servers. Learn more from our FREE E-Book on Temperature Monitoring:


  • What Can You Monitor with Temperature@lert?



    When deciding on a Temperature@lert solution, generally you would have something in mind for the application prior to purchase.  Of course we have our standard industries that require the use of our products; however, there are many imaginative ways consumers have thought up that have opened a new world of monitoring possibilities.  

    Here are some of the innovative uses that have been implemented:

    • R/V pet monitoring 
    • HVAC systems
    • Warehouses
    • Wine storage
    • Ovens 
    • BBQ Smokers
    • Cryogenic Freezers
    • Food Trucks
    • Reefer Trucks
    • Kennels
    • Police K9 vehicles
    • Water Tanks 
    • Ponds
    • Farms/Barns
    • Chicken Coops
    • Portable bio-pharmaceutical cooling units
    • Steam Pipes
    • Incubators
    • Boiler rooms
    • Crops
    • Greenhouses
    • Explosives
    • Vacation homes
    • Candy factories
    • Vacant commercial property
    • Crawl spaces
    • Outdoor Cooling Units
    • Saunas
    • Hot tubs

    Of course these applications would not be possible without our smart sensors:

    • Temperature
    • Humidity
    • Flood
    • Expanded Range Temperature
    • Tank Level
    • Pressure
    • Leaf Wetness
    • Soil Moisture
    • Wind Direction
    • Wind Speed
    • Rainfall
    • CO2
    • O2
    • Dry Contact
    • Stainless Steel Temperature
    • Wine Bottle Temperature

    With the implementation of our smart sensors, the possibilities are endless in discovering solutions for your monitoring needs.  If you need a monitoring solution, we're here to help; just send us a quick quote request: Quote Inquiry. Or if you have an interesting way you use your device, we'd love to hear about it: email info@temperaturealert.com.



  • How many temperature sensors do I need?

    What is the temperature in the room you’re in right now?  Take a guess; you’ll be correct within a few degrees.  Now, what is the temperature of the room?  Don’t bother answering that: it’s a trick question, because in fact there is no single “temperature of the room”.  The temperature of the room is a 3D matrix that likely varies by up to 3°C (5.4°F) from one point to another.

    In spaces as different as commercial refrigerators and data centers, temperature differences can be even greater.  Computer modeling demonstrates how, in a data center, server racks can be cool at the bottom and hot near the top.  Commercial refrigerators can have very cold areas near the chilled air outlets.  Whether or not the temperature variations are meaningful depends on what they impact.  Consider the last time you turned your refrigerator down a little and noticed the next morning the milk container in the direct blast from the cooled air outlet was partially frozen.

    Temperature@lert’s White Paper Library has an entry titled “Why isn’t the sensor reading the same as my thermostat?”   (Link to White Paper) The paper shows that, as a sunny second-floor bedroom cycles through a twenty-four hour period, temperature differences between the floor and a point 6 feet above it can be as much as 5°F, and the two are never equal.  MIT’s Building Technology Group explores the design, technology and implementation of environmentally responsive urban housing in China.  Figure 1 shows temperature variations from room to room in a sustainably designed apartment.  This one-plane model shows a 1°C (1.8°F) temperature difference in rooms with heat sources.



    Figure 1: Modeling temperature variations in an environmentally responsive urban home shows average of 24°C and high of 25°C.   Source: MIT Chinahousing Research  (Link to MIT China House)

    To make informed decisions about how many sensors to deploy, consider whether or not the heating and cooling sources are in direct line with sensitive materials.  Enough sensors will be needed to ensure the warmest and coolest locations are within established parameters.  Too many sensors can lead to “sensor data fatigue”: having too much data.  If you’re unsure, experimenting with a few sensors in different locations is a good start, as in the sketch below.  A balance of protecting valuable materials, cost, and variability within the space being monitored will ensure that when problems occur, they are noticed.
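    One way to run that experiment is to log trial sensors in several candidate spots for a few days, then keep the locations that capture the extremes. A minimal sketch, with illustrative readings:

```python
# A minimal sketch of picking permanent sensor spots from trial data.
# The readings below are illustrative.

trial_readings_f = {  # location -> sampled temperatures (degF)
    "top of rack A": [78, 81, 83],
    "bottom of rack A": [68, 69, 70],
    "near cooled-air outlet": [61, 62, 64],
    "room center": [72, 73, 73],
}

hottest = max(trial_readings_f, key=lambda loc: max(trial_readings_f[loc]))
coolest = min(trial_readings_f, key=lambda loc: min(trial_readings_f[loc]))
print(f"Keep permanent sensors at: {hottest!r} and {coolest!r}")
# Sensors at the extremes (plus one near sensitive material) catch
# out-of-range conditions without drowning you in data.
```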

    For questions or additional information, contact Temperature@lert at info@temperaturealert.com.
