temperature@lert blog

  • Does Cogeneration Yield a Suitable RoI in Data Centers?


    What does the data say?

This is the second of two pieces on Cogeneration, or CHP. The first explored the topic; this one explores the RoI of a technology proven in other industries as applied to data centers.

As the data center industry continues to consolidate and competition becomes more intense, IT professionals understand the pressure on both capital and operating budgets.  They are torn by two competing forces: faster and more reliable vs. low cost and available now.  IT equipment improves continuously, and the desire to upgrade always calls.  Reliability has become the mantra of hosted application and cloud customers, and although electrical grid failures are not counted as “failures against uptime guarantees” for some, businesses affected by outages feel the pain all the same.  And if there are solutions, management pressure to implement them quickly and at low cost is always a factor.

Cogeneration is typically neither fast nor cheap, but it does offer an alternate path to reliability and uptime.  As with all major investments that require sizable capital and space, the best time to consider cogeneration is during data center construction.  That being said, data centers operating today are not going anywhere soon, so retrofit upgrade paths are also a consideration, especially in areas where power from the local utility has become less reliable over time.  So when should data center professionals consider cogeneration, or CHP?  Fortunately, there are studies available on public websites that help provide answers.


Syracuse University data center exterior; microturbines in utility area (Link to Source)

One such study is an installation at Syracuse University.  Opened in 2009, the 12,000 ft² (1,100 m²) data center with a peak load of 780 kW employs cogeneration and other green technologies to squeeze every ounce of energy out of the system. (Link to Source)  The site’s 12 natural-gas-fueled microturbines generate electricity.  The microturbines’ hot exhaust is piped to the chiller room, where it is used to generate cooling for the servers and both heat and cooling for an adjacent office building.  Technologies such as adsorption chillers that turn heat into cooling, reuse of waste heat in nearby buildings, and rear-door server rack cooling that eliminates the need for server fans complete what IBM calls its Greenest Data Center yet.


    Left: Heat exchanger used in winter months to capture waste microturbine heat for use in nearby buildings; Right: IBM “Cool Blue” server rack heat exchangers employ chilled water piped under floor.

This is certainly an aggressive project, but can the cost be justified with a reasonable Return on Investment?  Fortunately, data has recently been released to quantify the energy conservation benefits.  PUE performance measured during 2012 was presented at an October 2013 conference and showed a steady PUE between 1.25 and 1.30 during the period, a value that compares very favorably with the typical data center PUE of 2.0.  The Uptime Institute’s self-reported average PUE is 1.65 (with qualifications), while a Digital Realty Trust survey of 300 IT professionals at companies with annual revenues of at least $1 Billion and 5,000 or more employees reported an average PUE of 2.9.  (Link to Sources: Uptime Institute, Digital Realty Trust)


    IBM/SU Green Data Center 2009 Goals (Link to Source); 2012 Actual Performance (Link to Source)

So how can we calculate the actual RoI and compare it to the projected goals?  First, the goals stated in the table on the left show savings of $500,000+ per year.  Another presentation by the microturbine supplier shows a goal of $300,000 per year, quite a bit different.  So how do we know what the savings really is?  We don’t, since there is no reference site where an identical data center in an identical location operates without CHP.  We could use the 2.0 average PUE and calculate the energy savings, but that’s not a real answer.  We also need to take into account tax incentives and grants, such as the $5 Million for the Syracuse University project, to determine the cost to non-subsidized projects.  Hopefully project managers will provide more information to help data center operators better understand the actual savings as the project matures.
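As a rough illustration of the arithmetic involved (not the project's own methodology), here is a short sketch that estimates annual energy cost savings from a PUE improvement; the constant IT load, electricity rate, and baseline PUE are assumptions for illustration only:

```python
# Rough sketch: annual energy-cost savings implied by a PUE improvement.
# All inputs are hypothetical assumptions for illustration only.

HOURS_PER_YEAR = 8760

def annual_energy_cost(it_load_kw, pue, rate_per_kwh):
    """Total facility energy cost per year for a given IT load and PUE."""
    facility_kw = it_load_kw * pue          # PUE = total facility power / IT power
    return facility_kw * HOURS_PER_YEAR * rate_per_kwh

it_load_kw = 780   # peak IT load cited for the Syracuse facility, assumed constant here
rate = 0.10        # assumed electricity rate, $/kWh

baseline = annual_energy_cost(it_load_kw, 2.0, rate)   # "typical" data center PUE
improved = annual_energy_cost(it_load_kw, 1.3, rate)   # measured PUE at the green data center

print(f"Baseline cost:  ${baseline:,.0f}/yr")
print(f"Improved cost:  ${improved:,.0f}/yr")
print(f"Estimated savings: ${baseline - improved:,.0f}/yr")
```

With these assumed inputs the estimate lands in the neighborhood of the stated goals, but the real answer depends on actual load profiles, natural gas costs for the microturbines, and local utility rates.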

CHP for data centers is presented with an array of benefits, including improved reliability through less dependence on grid power, lower power costs, and a reduced carbon footprint.  NetApp installed CHP in its Silicon Valley data center to reduce reliance on grid power because of frequent rolling brownouts and uncertainty in power market costs.  Its experience is less instructive, however, because the site’s use of direct air cooling reduces its need for mechanical cooling; as a result, the CHP system is used only when the utility is strained.  It is difficult to find quantitative data for modern installations.  While the data seems encouraging, actual energy cost savings are not provided.  We will watch the progress at this and other projects over the next several months to see if CHP costs yield an acceptable RoI via reduced energy costs.  Stay tuned.


  • Temperature@lert Named as Finalist in 2013 American Business Awards



    The 11th annual Stevie® Awards will be presented on June 17 in Chicago and September 16 in San Francisco.

Boston, MA – May 9, 2013 – Temperature@lert, a leading provider of real-time, cloud-based environmental monitoring solutions designed to enable businesses to mitigate temperature-related disasters, was named a Finalist today in the New Product or Service of the Year – Software category in The 2013 American Business Awards for its Sensor Cloud service. Temperature@lert will ultimately be a Gold, Silver, or Bronze Stevie® Award winner in the program.

     

Sensor Cloud is a web-based Software-as-a-Service product for monitoring the environmental conditions of server rooms, bio-pharma vaccine storage, and commercial refrigerators while providing regulatory-compliance data logging and alerting for various environmental sensors such as temperature, humidity, water, and more. The fault-tolerant design helps ensure that sensor data is logged and maintained for years, while the website and free iPhone/Android apps enable access to sensor readings and the ability to configure phone call, email, and SMS alerts from anywhere.

     

Temperature@lert’s Cellular Products have previously won several awards, including a Stevie Gold Award for the Solar Cellular Edition in 2012. However, it is Temperature@lert’s Sensor Cloud that serves as the brains of all Cellular Editions, with thousands of devices deployed and running the service. Temperature@lert's WiFi and USB devices can also be connected to Sensor Cloud for a consolidated view of all sensor readings and alert statuses. Temperature@lert’s mission is to create a cost-effective and fault-tolerant system that allows any user to monitor their assets at any moment, anywhere.

     

    The American Business Awards are the nation’s premier business awards program. All organizations operating in the U.S.A. are eligible to submit nominations – public and private, for-profit and non-profit, large and small. 

     

    The American Business Awards will be presented at two awards events: the ABA's traditional banquet on Monday, June 17 – in Chicago for the first time, after 10 years in New York; and the new product & technology awards event on Monday, September 16 in San Francisco.

     

    More than 3,200 nominations from organizations of all sizes and in virtually every industry were submitted this year for consideration in a wide range of categories, including Most Innovative Company of the Year, Management Team of the Year, Best New Product or Service of the Year, Corporate Social Responsibility Program of the Year, and Executive of the Year, among others.  Temperature@lert is nominated in the New Product or Service of the Year – Software category for their Sensor Cloud service.

     

“Temperature@lert’s Sensor Cloud service directly addresses every industry’s monitoring needs, ranging from server rooms, to farms, to medical storage, and even to commercial food transportation operations. We are deeply honored to be recognized as a finalist for our Sensor Cloud service by the American Business Awards,” said Harry Schechter, CEO/President of Temperature@lert. “This honor only further validates the need for remote temperature monitoring, because everyone should be able to easily prevent temperature-related disasters, regardless of the type of industry or size of company. We believe in giving you a solution before you even have a problem.”

     

    Finalists were chosen by more than 140 business professionals nationwide during preliminary judging in April and May.  More than 150 members of nine specialized judging committees will determine Stevie Award placements from among the Finalists during final judging, to take place May 13 - 24.  

     

    Details about The American Business Awards and the list of Finalists in all categories are available at www.StevieAwards.com/ABA.   

     


    About Temperature@lert

    Temperature@lert’s temperature and environmental monitoring solutions provide both real-time and historic views of a location’s temperature and other critical parameters through alerts and cloud-based graphs, data logs and reports. This information allows customers to immediately react to potentially disastrous temperature or other fluctuations in critical environments, as well as provide temperature consistency for regulatory and internal process control requirements. Temperature@lert has more than 40,000 devices installed in over 50 countries around the globe. For more information, please visit www.temperaturealert.com.

     

    About the Stevie Awards

    Stevie Awards are conferred in four programs: The American Business Awards, The International Business Awards, the Stevie Awards for Women in Business, and the Stevie Awards for Sales & Customer Service.  A fifth program, the Asia-Pacific Stevie Awards, will debut this year.  Honoring organizations of all types and sizes and the people behind them, the Stevies recognize outstanding performances in the workplace worldwide.  Learn more about the Stevie Awards at www.StevieAwards.com.

     

    Sponsors and partners of The 2013 American Business Awards include the Business TalkRadio Network, Callidus Software, Citrix Online, Dynamic Research Corporation, Experian, John Hancock Funds, LifeLock, PetRays, and SoftPro.

     

    ###

     

    Contact:

    Diane Deng

    Temperature@lert

    866-524-3540 x506



  • Microsoft Hotmail & Outlook.com Outage: Data Center Safeguards and Temperature Monitoring

    Microsoft made an announcement through their Outlook.com blog about a recent issue specific to users of Outlook.com and Hotmail. After a seemingly exhaustive attempt to migrate customers from Hotmail to the new Outlook suite, Microsoft experienced a minor hiccup as they updated their firmware. There are still many questions left unanswered, even with the frank admission on the Outlook.com blog.

    Here's Microsoft's "recap" of the entire event, quoting directly from their blog.

    "At 13:35 PM PDT on March 12th, 2013 there was a service interruption that affected some people's access to a small part of the SkyDrive service, but primarily Hotmail.com and Outlook.com. Availability was restored over the course of the afternoon and evening, and fully restored by 5:43 AM PDT on March 13th, 2013."

One point of interest must be: why did this outage occur in the afternoon yet was only fully restored the next day? Why was the timeframe stretched so far? The Outlook.com blog goes further into the issue, marking the root cause as a substantial rise in temperature following the firmware update. The resulting "waves" of updates/reboots took many hours to complete as they brought the datacenter back to full strength.

    Microsoft continues with a detailed explanation:

    "This failure resulted in a rapid and substantial temperature spike in the datacenter.  This spike was significant enough before it was mitigated that it caused our safeguards  to come in to place for a large number of servers in this part of the datacenter...Once the safeguards kicked in on these systems, the team was instantly  alerted and they immediately began to get to work to restore access."



    It sounds like an excellent strategy; lock out user access in response to rising temperatures to prevent a melted server or data loss. However, it seems that these safeguards were directly connected to other systems and had very strict responses to the temperature change, and thereby prevented a standard 'failover' to a redundant system. 

Particularly in datacenters and IT, the goal of temperature monitoring and alerting is to provide a direct line of communication between operators and data center temperatures. Temperature monitoring devices are most effective when utilized as an unbiased indicator of temperature change, but for integration purposes, the devices must be set up to send instantaneous alerts without compromising other systems. Holistic integration, or automated systems that have a series of moving parts and streamline processes, is an admirable solution for datacenters of scale, but the fact remains that specific monitors and devices must have a closed loop and limited "next-step" automation. Microsoft and Outlook.com may find that an alternative solution is to separate their datacenter temperature monitoring devices from their automated disaster planning, using the devices as a primary indicator of trouble and enacting safeguards afterward based on the situation. By this method, engineers or system administrators would be notified of the temperature rise instantly and could investigate the problem; active decisions about automation and safeguarding could then be made based on the findings. The instantaneous alerts were clearly helpful to Microsoft, but it seems that the safeguard logic override inflated the problem.
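To make the distinction concrete, here is a minimal, hypothetical sketch of the "alert first, automate cautiously" pattern described above; the thresholds, notification hook, and simulated readings are placeholders, not any vendor's or Microsoft's actual logic:

```python
# Hypothetical sketch: alert operators the moment a threshold is crossed, but
# require several consecutive confirmed readings before any drastic automated
# safeguard is engaged.

ALERT_THRESHOLD_C = 30.0       # assumed "notify humans" level
SAFEGUARD_THRESHOLD_C = 38.0   # assumed emergency level
CONFIRM_READINGS = 3           # confirm the condition before automating a response

def notify_operators(message):
    print("ALERT:", message)                          # placeholder for email/SMS/phone alerting

def trigger_safeguard():
    print("SAFEGUARD: throttling/lockout engaged")    # placeholder for the drastic action

consecutive_hot = 0
simulated_readings_c = [28.5, 31.2, 36.8, 39.1, 39.6, 40.2]  # stand-in for live sensor polls

for temp in simulated_readings_c:
    if temp >= ALERT_THRESHOLD_C:
        notify_operators(f"temperature {temp:.1f} C")  # humans are told first, every time
    if temp >= SAFEGUARD_THRESHOLD_C:
        consecutive_hot += 1
        if consecutive_hot >= CONFIRM_READINGS:
            trigger_safeguard()   # automation acts only on a sustained, confirmed condition
            break
    else:
        consecutive_hot = 0
```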

It seems like an issue of redundancy as well, and many bloggers and commenters have expressed disbelief at the simplicity of Microsoft's datacenters. Some may argue that the Outlook.com servers should have been designed for maximum redundancy, especially as the service is being touted as Outlook's big step into SaaS. This is a hot topic among SaaS veterans and other cloud enthusiasts; redundancy is a complex and vital resource for disaster planning. Continued access to services, business operations, and other assets is the main benefit of a redundant system, apart from the aversion of data loss. Still, even though redundancy was a relevant problem by Microsoft's own admission, the safeguards were the true issue.  Outlook.com was fractured not by a faulty temperature monitoring device, a missed monitoring report, or an unopened email or text alert, but by the very logic of the safeguards' response to the rising temperature. The temperature monitoring devices were obviously effective in marking the temperature change (though we can't truly confirm that the team was "instantly alerted"), but sadly, it was the next step in the monitoring logic that locked out countless users. We do recommend integration of such devices into management systems and for scaled automation, but check the sensitivity of your safeguard and logic systems to prevent an overreaction (and a costly outage). Don't have an "Outlook.com" moment!



  • Temperature@lert's Latest Generation WiFi Temperature Monitoring Device is Released!

    Temperature@lert has officially launched its latest generation WiFi Edition remote temperature sensor, the WIFI330, which can use either wireless or wired internet. Their newest version of the WiFi Edition integrates the successful features of its predecessor along with their latest innovations, and offers customers the latest user-friendly and cost-effective temperature-monitoring device. The latest generation’s features include:

    • Four Sensor/Probe Ports – Significantly Reducing the Price Point

    • Two Ethernet Ports

    • 3X Faster Processing Speed

    • Flood Sensor Capability

    • Firmware Updates Without Loss of User Settings

    • Updates From the Web Interface

    • Optional Sensor Cloud service for Online Viewing and Smartphone Apps

Successful features from the previous generation incorporated into the new WIFI330 include:

    • Combination Temperature/Humidity Option – Adds on Relative Humidity Monitoring for Critical Applications

    • Predrilled Mounting Flange – Facilitates permanent mounting of the unit

    • Power over Ethernet (PoE) support – Enables operation without AC power adapter

    • Continuous monitoring and Email alerts when temperature or (optional) humidity goes above or below user specified levels

    • WiFi and Ethernet connectivity – Operates so long as your network is available

    • Security – No software to load onto your computers or servers

    • Supports SNMP Traps

    • Supports SMTP via SSL/TLS and SMTP Authentication

    • Pre-calibrated Sensors – NIST Certification available (additional cost)

    • User programmable open source Linux operating system for custom reporting, alarms, etc.

    “The successful implementation of previous generation WiFi devices has only proven the importance of continually developing the Temperature@lert WiFi edition in order to meet users’ needs,” says Harry Schechter, Temperature@lert CEO & President. “The newest capabilities only expand upon what has already been successfully implemented, thus being able to offer an even more cost-effective temperature monitoring solution to IT, Commercial Refrigeration, Property/Facility Management, Food Services, Laboratory Research, as well as a number of other industries. Our company motto has always been to avert disaster instead of mopping up and we hope to continue to be the prime choice in monitoring solutions.”


    For more information on Temperature@lert’s WIFI330 Edition: http://www.temperaturealert.com/Wireless-Temperature-Store/Temperature-Alert-WiFi-Sensor.aspx.


  • Consideration of High Temperature Ambient Environments and Free Cooling in Data Center Operation

Directly from the original post: http://www.datacenterpost.com/2013/01/consideration-of-high-temperature.html

     

    Temperature@lert

     David Ruede, VP Marketing at Temperature@lert, says:

    Techies love acronyms, and IT professionals are masters of the jargon. Where else would we find such gems as CRAC, PUE, SaaS, DCIM, VoIP and VPN among the scores if not hundreds of options for the next big idea?

Why do we need these when The Free Dictionary lists 259 phrases for the acronym DC alone? (Link 1)  First, we love to speak in shorthand.  Time is always too short; things need to be done quickly.  Speaking in acronyms makes us insiders, the elite few who can feel the bits and petabytes flowing through the veins and arteries of the interconnected web of the virtual world.  And short of a Vulcan Mind Meld, acronyms save time, although one could argue that when they are used in meetings there may be a few who don’t really understand the meaning and, not wanting to appear “stupid”, don’t ask.

    Many of these terms started off as marketing terms.  Why would we need CRAC when AC may be sufficient?  And why is PUE debated daily as to its true meaning in professional social media sites?  Every data center operator, supplier and professional looks to set themselves or their companies apart from the competition.  I’ll argue this is a good thing because it makes web searches easier – I don’t have to sort through hundreds of household air conditioners sold in retail outlets to find what I need for a data center, server or telecom room.

    Recently a new acronym has been making its way into the jargon.  HTA, High Temperature Ambient, has cropped up in several professional periodicals and online marketing pieces.  The phrase is used to describe the benefits of reduced energy consumption in data centers and other IT facilities that operate at what many consider higher than “normal” temperatures, say 30°C (86°F) for example.  Described in earlier pieces as high ambient temperature or high temperature in the ambient, the idea of running data centers at higher temperatures has gained prominence as a way to save electrical energy, a very costly piece of the data center’s operating budget.  Often used with terms like “free cooling” or “air side economizers”, the idea is that today’s servers have been specified to run at higher temperatures than those just a few years ago, so operating equipment at higher temperatures has no detrimental effect.

In April 2012, Intel published a study of the potential energy savings in green data center maker Gitong’s modular data centers.  The Shanghai study showed an annual cost reduction of almost $33,000, which is significant.

    Figures 1a, 1b: Tables showing before and after HTA results - Source: Intel Link 2

While saving energy is a very desirable goal, data center, server and telecom room operators are well served to understand the underlying assumptions behind “turning up the heat and opening up the doors and windows”.  First, all of the equipment in an IT space comes with manuals, and the manuals specify operating conditions.  Ensuring all of the equipment in the ambient space is able to run at elevated temperatures is highly recommended, particularly since older devices or appliances may be more prone to heat-related performance degradation.  ASHRAE’s TC 9.9 2011 Thermal Guidelines for temperature and humidity control are a good reference for where to start when designing or setting up an HVAC system. (Link 3)

Second, while the HVAC systems in IT spaces are generally well designed and provide adequate airflow to the equipment, time has a way of changing things.  Profiling the temperature of the data center, with sufficient resolution to ensure each rack or piece of equipment is operating within specification, will show whether changes in operation or additions of equipment have created “hot spots”; this can be done with existing equipment by moving temperature sensors to areas not normally monitored during the temperature mapping process.

Third, changes in temperature can cause changes in relative humidity.  Continuous monitoring of not only temperature but also relative humidity, before and after raising the temperature, is recommended to ensure both of these critical parameters stay within the manufacturers’ specifications.
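For example, a simple check of logged readings against an allowable envelope might look like the sketch below; the ranges and readings are illustrative placeholders, to be replaced with the values from your equipment manuals or the ASHRAE guidelines:

```python
# Sketch: flag logged readings that fall outside an allowable envelope.
# The ranges below are illustrative placeholders, not ASHRAE's exact limits;
# use the values from your equipment manuals or the TC 9.9 guidelines.

TEMP_RANGE_C = (18.0, 27.0)   # assumed allowable inlet temperature range
RH_RANGE_PCT = (20.0, 80.0)   # assumed allowable relative-humidity range

readings = [                  # (sensor, temperature C, relative humidity %) - example data
    ("rack-01-inlet", 24.5, 45.0),
    ("rack-07-inlet", 29.2, 38.0),
    ("crac-return",   21.0, 85.0),
]

for sensor, temp_c, rh_pct in readings:
    problems = []
    if not TEMP_RANGE_C[0] <= temp_c <= TEMP_RANGE_C[1]:
        problems.append(f"temperature {temp_c} C outside {TEMP_RANGE_C}")
    if not RH_RANGE_PCT[0] <= rh_pct <= RH_RANGE_PCT[1]:
        problems.append(f"RH {rh_pct}% outside {RH_RANGE_PCT}")
    if problems:
        print(f"{sensor}: " + "; ".join(problems))
```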

    And if IT professionals decide to employ “free cooling” by figuratively “opening up the doors and windows”, they would be well advised to check ASHRAE’s TC 9.9 Gaseous and Particulate Contamination Guidelines for Data Centers and again their supplier manuals for specification compliance. (Link 4)

    Figure 2: Ambient Air Cooling Unit (Link 5)

Much has been written about free cooling; a June 2012 article is a good example. (ref. Link 5)  Cooling may indeed be “free”, and many can and do use free cooling combined with HTA to make significant reductions in their energy bills.  As with all good ideas, “first, do no harm” is a good motto.  IT professionals may be well served to verify and validate the assumptions against best practices as they apply to their sites before any significant changes in operation are made.


  • Essential Tech Check List: Building & Retrofitting Your Server Room

Whether you're building a server room, adding on, or moving equipment, there are many considerations to mull over. From the basics to alarm systems, it is important to ensure your server room is efficient and that your mission critical equipment is protected. Previously in our blog, we have addressed the issues surrounding the microclimate present in your server room; however, it is also critical to understand how a server room should be laid out and managed. Use our check list as a guide for promoting security, efficiency, and productivity:

    Our Essential Tech Check List

    (1) Your Basics of Space

• Examine the layout of the space and how many units of space you have to work with.

• The walls (including ceiling) and doors should isolate the sounds that your equipment is creating.

• Check to see which way the door opens. There should also be no windows or other entry points other than the doors in the room.

• Consider the floor and whether your equipment will need raised flooring. Aim for anti-static floor finishing to prevent an unwanted static charge.

• Make sure there is enough clearance for racks and that they are stable enough to hold your equipment.

• Check for aisle clearance too; make sure you have enough room for exhaust to escape and not overheat nearby equipment.

• Think about whether you need ladder racks, cabinets, shelves, patch panels, or rack mounts.

• Take the weight and size of each piece of equipment into consideration when designing the layout.


    (2) Keeping Your Cool

• Check and see what type of centralized cooling is available, whether under-floor air distribution or an air duct system.

• If there is no centralized system available, get an air conditioner or cooling unit that is able to keep your equipment working productively while minimizing energy consumption and costs.

• If at all possible, fresh air vents are great and save on energy costs and consumption!

• Remove any and all radiators or other heating equipment currently present in the room. You don't need to add heat at all!

• Monitor your cooling system(s) to make sure they are working properly, especially when no one is there.

• Make sure your cooling units are not too close in proximity to your electrical equipment; think condensation and flooding. Do not place air conditioning units over your servers.

• Monitor the humidity to prevent static charge and electrical shorts.

• See if a chilled water system is in the budget, or find something within the budget constraints to ensure that the hot air has somewhere to go.

     

    (3) Using Your Power

• Check to make sure that you have enough outlets to support power to all your equipment without overloading them.

• Get backup power, preferably a UPS, to prevent data loss from power blinks or outages.

• Don't surpass the maximum electrical intensity per unit of space.

• Consider shut-down capabilities of equipment (SNMP traps, for example).

• Make sure your equipment is grounded.

• Monitor for power outages if you are not using backup power systems.

• Monitor your backup power systems to make sure your mission critical equipment is not failing due to power loss.

     

    (4) Keeping Secure & Safe

• Have at least one phone present in the room in case of emergencies.

• Check for a preexisting fire alarm system, and install one if there isn't.

• Get a fire suppression system if there is not one there. Take into consideration whether you will have a wet or dry suppression system and the effects that will have on your equipment. (Halon is a great choice!)

• Have reliable contacts to help resolve issues immediately, or form a system of escalation.

• Monitor for flooding, especially if it has happened in the past.

• Secure entrances/exits; this is expensive equipment with critical data, and you don't want just anyone in there messing around!

     

    (5) Other Considerations

• Get the best cabling/wiring available within budget constraints.

• Keep extra cabling/wiring around, because you never know when you may need it.

• Consider color coding wires/cables; a little more work now, but definitely a time-saver in the future!

• Think about lighting: location & heat produced.

• If there is someone sharing the space, get them some earplugs! It's going to be loud in there with the equipment being used.

• Consider the networking/phone lines being run in there and how much space you have left after that.

• Plan for future expansion or retrofitting (again).

• Leave the service loops in the ceilings.

• Label outlets.

• Get rid of dust; your equipment hates it!

• Check if you have a rodent/pest problem.

• Cover emergency shutoff switches so that they can't be accidentally triggered.

• Try to centralize the room in the building so that you can avoid using more cabling/wiring than you need to.

• Meet OSHA and ASHRAE guidelines as well as local codes.


Is your server room, or a server room you know of, not being monitored for temperature? Are you concerned with energy consumption, the ability to monitor off-hours, and/or preventing mission critical equipment from failing? If you or someone you know is experiencing such issues, we want to hear from YOU!

We will be giving away ONE FREE USB DEVICE per month to the server room with the most need! Valued at $129.99, the Temperature@lert USB Edition is a low-cost, high-performance device that monitors the ambient temperature in your server room and alerts you via email when the temperature rises or falls outside your acceptable range.

    Please send a brief description, pictures, and/or videos to diane@temperaturealert.com for consideration! Our team will select one winner each month based on description and need, because we firmly believe that companies in every industry 



  • Top 3 Reasons to Monitor Your Server Room / Data Center

It's 2013, a new year with a smaller budget and, of course, a higher expectation of better equipment efficiency. In order to achieve this higher level of efficiency while meeting budget constraints, you essentially need to extend the lifespan of your equipment. Extending the lifespan requires a monitoring system that ensures your equipment is operating in an acceptable range of environmental conditions. Here are our Top 3 Reasons to Monitor Your Server Room / Data Center:

     

    (1) Protect Your Mission Critical equipment from Failure

The humming of servers is generally a good indicator that equipment is working diligently. However, with the increase in productivity comes an increase in temperature created by your efficient equipment. Although ASHRAE did increase the recommended temperature envelope to 80.6°F for data centers, many still try to push the envelope further in order to promote higher efficiency while trying to lower energy costs and usage. To achieve this, you would need to use fewer coolers and chillers yet still run equipment at a high rate of productivity, as at Google's Data Center in Belgium, which has been deemed Google's most efficient data center.

Innovative approaches to running your servers and other technical equipment at a higher temperature have greatly improved productivity levels while lowering energy costs. However, not every company has the budget for the latest in server room and data center technology. Less technologically innovative servers that try to run at higher productivity in hotter climates can fail, resulting in damaged or melting equipment as well as data loss, not to mention unhappy IT people crammed into that hot room.


    (2) Inability to Physically & Personally Monitor After Hours

    In the IT realm, servers are most certainly mission critical; however, servers are rarely viewed as a life or death matter. Considering how much data and information has been collected and stored, these pieces of equipment surely serve an important purpose to all. After all, technology is the backbone supporting a company's operations nowadays.

Just as a human cannot function at high efficiency without a healthy spine, it is very difficult for a company to function productively without technology in such a tech-savvy time. But since servers are not often seen as mission critical by those outside the IT realm, there is frequently no budget for monitoring them. Often overlooked and forgotten, there is rarely a person designated to monitor after hours when IT staff have left for the day. This often leaves these pieces of mission critical equipment unmonitored, resulting in not only informational loss but financial loss as well: during 2009, an estimated $50 million to $100 million in losses occurred due to environmental issues going unmonitored!


    (3) Be Green Friendly: Lower Energy & Costs

With decreased budgets and increased efficiency expected, along with green and sustainability initiatives to meet, IT staff are forced to make do. This means working in hotter environments in order to run machines at full productivity levels while not over-using the air conditioning, cooler, chiller, or HVAC systems. Even Google's Data Center in Belgium uses only fresh air to cool off the equipment. Despite the risks of high temperature, many must make these choices in order to meet departmental changes.

By at least monitoring temperature, you can help extend the lifespans of your servers. Given that running them at higher temperatures is a must, making sure your servers are not working in too hot an environment is crucial. At some point, the envelope will be pushed to such an extent that equipment will malfunction and even melt. By efficiently limiting use of cooling & HVAC systems, you can save on costs and lower energy consumption while still protecting your mission critical equipment. By using temperature monitoring equipment with SNMP traps, you can even program a shutdown mode for your equipment if the temperature threshold is breached.
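As a sketch of what that might look like, the hypothetical handler below assumes a Net-SNMP snmptrapd "traphandle" setup that passes trap details on standard input; the OID fragment, threshold, and shutdown command are placeholders to adapt to your own devices and policies:

```python
#!/usr/bin/env python3
# Sketch: a handler script that snmptrapd could invoke via its "traphandle"
# directive. It scans the trap text (passed on stdin) for a temperature value
# and schedules a graceful shutdown above a threshold. The OID prefix and
# threshold are placeholders, not any specific device's actual values.
import subprocess
import sys

TEMP_OID_FRAGMENT = "1.3.6.1.4.1"   # placeholder: your sensor's enterprise OID prefix
SHUTDOWN_ABOVE_C = 40.0             # assumed emergency threshold

def main():
    for line in sys.stdin.read().splitlines():
        if TEMP_OID_FRAGMENT in line:
            try:
                # Last whitespace-separated token is assumed to be the reported value.
                value = float(line.split()[-1])
            except ValueError:
                continue
            if value > SHUTDOWN_ABOVE_C:
                # Schedule a graceful shutdown in 5 minutes (requires privileges).
                subprocess.run(["shutdown", "-h", "+5", "Temperature emergency"], check=False)
            break

if __name__ == "__main__":
    main()
```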

By taking the initiative to meet all the new requirements, from budget to sustainability, through temperature monitoring, you will be able to prevent disaster instead of having to clean up a melted server. Learn more from our FREE E-Book on Temperature Monitoring:


  • It's Hot! It's Cold! Oh No... It's Your Fluctuating Server Room Temperature Again...

We know that every room, especially a server room, has its own microclimate. Even sensors that are inches apart can read different values! Although similar applications might share the same temperature threshold range, every sensor placement location is unique. It sounds strange that there would be such fluctuations in temperature within inches, but this happens because your server room has its own miniature weather pattern!

So how do you figure out the correct temperature range for monitoring your server room? Or where to place your sensor? Just as there are many conditions driving actual outdoor weather patterns, there are many variables for sensor placement and operational range because of the changing indoor microclimate.

Essentially, in order to determine the right thresholds for your server room "environment", you need to acquire adequate baseline knowledge. This process is called "baselining", and it involves monitoring your server room first to establish a history of normal conditions. Temperature is a significant threat to your equipment, and in order to battle it you need to discover and establish your server room's microclimate (i.e. baselining)!
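A minimal sketch of turning a baseline into alert thresholds follows; the readings are made-up illustration data, and in practice you would collect days or weeks of history before trusting the numbers:

```python
# Sketch: derive alert thresholds from a baseline of logged readings.
# The readings list is illustrative; load real history from your monitoring
# system before relying on the thresholds it produces.
import statistics

baseline_temps_c = [22.1, 22.4, 23.0, 22.8, 23.5, 24.1, 23.2, 22.9, 23.7, 23.3]

mean = statistics.mean(baseline_temps_c)
stdev = statistics.pstdev(baseline_temps_c)

# A common rule of thumb: alert when a reading drifts well outside normal variation.
low_alert = mean - 3 * stdev
high_alert = mean + 3 * stdev

print(f"Baseline mean: {mean:.1f} C, stdev: {stdev:.2f} C")
print(f"Suggested alert range: {low_alert:.1f} C to {high_alert:.1f} C")
```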


Baselining is basically achieved by studying the space of your server room while considering the components within it. This can be done to determine the proper ranges for both temperature and humidity. So what spots are the most critical for consideration when it comes to sensor placement?

    1. Hot Spots
At the bare minimum, place at least one sensor in a central location in the room. Note: every room has its own mini weather pattern, and conditions can vary from one part to another based on what the room contains and where vents/returns are located. The simplest rule of thumb is that heat rises; the higher the sensor placement, the warmer the temperature reading.

    2. Cooling Vent Locations
Whether it is an air conditioner, economized cooler, or another chilling device, it will affect the sensor reading depending on the proximity of the sensor to the vent. If you want to monitor whether your cooling unit may be going out at different times, place a sensor in the air duct and you can determine when the cooling unit is off. Placement of a sensor in close proximity to the cooling unit may cause the sensor to pick up cooling unit "cycles", sending you false alerts in the process (a simple smoothing sketch for this appears after the list below).

    3. Exhausts
Besides cooling vents, you also need to consider hot vents from server cabinets or compressors. Placing a sensor near or in between these areas is crucial, as high temperatures can cause damage to hardware. The exhaust-based alerts will draw attention to high temperatures within the servers, allowing you to prevent loss of hardware (and revenue!).

    4. Ancillary Humidification Systems
These systems help control humidity. Too much humidity can cause condensation, which leads to electrical shorts. Not enough humidity can give one quite the mini-electrifying experience, with static electricity at its peak. Place your humidity sensor in a location separate from the ancillary humidification system in order to prevent the sensor from getting shorted and to avoid false humidity readings.
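One way to keep a sensor near a cooling vent from generating nuisance alerts on every compressor cycle (as noted in item 2 above) is to alert on a moving average rather than a single reading; the sketch below is a generic illustration with assumed values, not a description of any particular product's logic:

```python
# Sketch: suppress alerts caused by short cooling-unit cycles by requiring the
# threshold to be exceeded by a moving average, not a single spike.
from collections import deque

WINDOW = 5            # number of recent readings to average (assumed one reading per minute)
HIGH_ALERT_C = 27.0   # assumed high-temperature threshold

recent = deque(maxlen=WINDOW)

def process_reading(temp_c):
    recent.append(temp_c)
    if len(recent) == WINDOW and sum(recent) / WINDOW > HIGH_ALERT_C:
        print(f"ALERT: sustained high temperature, avg {sum(recent)/WINDOW:.1f} C")

# Example: a brief spike from a compressor cycle does not trigger an alert,
# but a sustained rise does.
for t in [24.0, 29.5, 24.2, 24.1, 24.3, 27.5, 28.0, 28.4, 28.9, 29.1]:
    process_reading(t)
```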

By monitoring temperature and humidity, you can have early warning of any disasters looming in your server room. It is always better to prevent a disaster than to mop up after it (speaking of which, flood sensors are great too!). If you need assistance in determining the best practices and routines for your server room, please feel free to shoot me an email: diane@temperaturealert.com.

    Happy Monitoring!


  • How To Increase the Lifespan of Your Server

It's Monday. You grab your coffee, toss in sugar, and begin chugging your beloved caffeine. There's no feeling like walking into your server room or data center to find it swelteringly hot and equipment malfunctioning. Even though you tried to prevent this from occurring by installing air conditioning units and other coolers, loss of equipment and information can still happen.

In fact, for every 18°F (10°C) that the temperature remains above 68°F, servers lose approximately 50% of their reliability. Servers are an investment, and one must take care in protecting such an important asset. Considering that the average lifespan of a server is 4-6 years, it is more cost effective to maintain your server by keeping it at proper temperatures.
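Taken at face value, that rule of thumb implies an exponential penalty, as this small illustration shows (it simply restates the heuristic above and is not a validated reliability model):

```python
# Illustration of the cited rule of thumb: reliability roughly halves for every
# 18 F (10 C) of sustained operation above 68 F. This is the heuristic from the
# text, not a validated reliability model.

def relative_reliability(temp_f):
    excess = max(0.0, temp_f - 68.0)
    return 0.5 ** (excess / 18.0)

for t in (68, 77, 86, 95):
    print(f"{t} F: {relative_reliability(t):.0%} of baseline reliability")
```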

Of course, cooling units are a great way to cool down an already hot server room, but there's not always someone designated to monitor the room 24/7. By implementing a temperature monitoring solution, you can increase the lifespan of your servers and maintain their reliability. Not to mention, you'll avoid a case of the Mondays.

     

Get Your FREE IT/Server Room/Data Center Monitoring Guide Now:



  • New Greener USB Edition - Same Great, Easy to Use Value

Looking for our USB Edition?  If so, you'll see our latest design featured on this website.  Temperature@lert's team of engineers and designers put their heads together and repackaged our flagship USB temperature monitoring and alerting device.  The new design has the same robust electronics and temperature sensor but uses less material in the enclosure and shipping package - we've reduced the product's carbon footprint without sacrificing the value, ease of use, quality and reliability of the industry-leading USB environmental monitoring appliance.

Check out our full announcement and our product page to see what our design wizards have done.  And as we've announced, this, like all Temperature@lert products, is Made in the USA.

    Link to Full Announcement on Temperature@lert Website
