Friday, May 24, 2013

In this slidecast, Gary Tyreman from Univa describes the company's new partnership with Europe's science + computing (s+c).

"Our customers operate technical computing environments where infrastructure software like Univa Grid Engine is a key component. This partnership allows us to support our customers on all levels, giving them more options to use their compute clusters in the most efficient manner," says Gerd-Lothar Leonhart, CEO of s+c. "Additionally, the possibility to integrate Univa Grid Engine with Hadoop systems opens up new opportunities to optimize the usage of Big Data installations."
Wednesday, May 22, 2013
In this slidecast, Rainer Enders from NCP Engineering presents: Next-gen Network Access Technology.
"The NCP Secure Enterprise Solution provides a set of software products that enable complete policy freedom, unlimited scaling, multiple VPN-system setup and control, and total end-to-end security. Practically speaking, one administrator is able to handle 10,000+ secure remote users through all phases."
In this podcast, Nicos Vekiarides from TwinStrata presents: TwinStrata CloudArray 4.5 with DRaaS. The new offering is an on-demand disaster recovery as a service (DRaaS) for VMware users.
"Whether your goals are to increase storage capacity, improve off-site data protection, implement disaster recovery, or all three of the above, TwinStrata CloudArray is the most comprehensive storage solution available today," said Nicos Vekiarides, CEO of TwinStrata. "TwinStrata has made great strides in delivering enterprise-class functionality at a fraction of the cost typically required of storage solutions. What's exciting is that CloudArray 4.5 enables organizations to enjoy a full business continuity plan without the need for backup software or a dedicated disaster site, a once unthinkable proposition."
Sunday, May 12, 2013
In this slidecast, Justin Erickson from Cloudera presents a technical overview of Cloudera Impala, an SQL-on-Hadoop solution that enables users to run real-time queries against data stored in Hadoop clusters.
To avoid latency, Impala circumvents MapReduce and accesses the data directly through a specialized distributed query engine that is very similar to those found in commercial parallel RDBMSs. The result is order-of-magnitude faster performance than Hive, depending on the type of query and configuration.
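To make the comparison concrete, here is a minimal Python sketch of the kind of interactive SQL a client sends to Impala, using the third-party impyla DB-API driver. The host, table, and column names are hypothetical placeholders; 21050 is Impala's default client port. The point of the architecture described above is that the statement executes inside Impala's own distributed engine, so no MapReduce job is launched.

```python
# Sketch: issuing an interactive SQL query to Impala from Python.
# The "sales" table, its columns, and the host name are hypothetical.

QUERY = (
    "SELECT product, SUM(revenue) AS total_revenue "
    "FROM sales "
    "WHERE sale_year = 2013 "
    "GROUP BY product "
    "ORDER BY total_revenue DESC "
    "LIMIT 10"
)

def fetch_top_products(host="impala-host", port=21050):
    """Execute QUERY against an Impala daemon via the impyla driver."""
    from impala.dbapi import connect  # third-party: pip install impyla
    conn = connect(host=host, port=port)
    try:
        cur = conn.cursor()
        cur.execute(QUERY)
        return cur.fetchall()  # rows stream back without a MapReduce round-trip
    finally:
        conn.close()
```

The same statement could be submitted to Hive; the difference is purely in the execution path, which is where the order-of-magnitude latency gap comes from.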
Thursday, May 9, 2013
In this podcast, Ken Claffey from Xyratex describes the company's new ClusterStor 1500 storage system. Designed for scale-out HPC storage solutions, the ClusterStor 1500 delivers HPC performance and efficiency with help from the Lustre file system.
"Departments within larger organizations or medium-sized enterprises today, especially in the commercial, academic and government sectors, represent an underserved market. They need high-performance and scalable storage solutions that are cost-efficient, easy to deploy and manage, and reliable even under heavy workloads," said Ken Claffey, senior vice president of the ClusterStor business at Xyratex. "Growth in this market segment is being driven by the increasing adoption of simulation applications in a wide range of industries, from car and aircraft design to chemical interactions and financial modeling. Traditional enterprise storage systems are simply not designed to meet the performance needs of these applications, so we engineered and built the affordable and modular ClusterStor 1500 to bring the performance power of Lustre to this underserved and growing market in the way that only ClusterStor can."

With the ability to scale performance from 1.25 GB/s to 110 GB/s and raw capacity from 42 TB to 7.3 PB, the ClusterStor 1500 is purpose-built to satisfy the needs of data-intensive, department-level compute clusters, delivering best-in-class scale-out storage for middle-tier high performance computing environments. The solution features scale-out storage building blocks, the Lustre parallel file system, and a comprehensive management platform that eliminates the guesswork usually associated with building and optimizing your own HPC storage solution.
Wednesday, May 8, 2013
In this podcast, Guy Fraker presents: get2kno - Big Data for the Shared Economy. With shared rides, cars, bikes, and even rooms, the issue of trust is huge. The folks at get2kno have developed a "Trust Engine" that uses Big Data to help you decide whom you trust to share your stuff. Amazing!

"As we build out to scale, we'll provide a playground for alliance partners to reward consumers who utilize shared services in positive ways. We will deliver a searchable aggregated view of shared economy providers WITH utilization incentives. By doing both in a single view, using single sign-on, we provide an economic reason to be scored. We believe that by partnering with the Collaborative Consumption community, a market is created where no user asks, 'OK, I got my score; now what?' get2kno is about creating a market, not building a platform."

Learn more at: http://get2kno.com
In this podcast, Scott Gnau from Teradata Labs presents: Teradata Intelligent Memory.
"The introduction of Teradata Intelligent Memory allows our customers to exploit the performance of memory within Teradata Platforms, which extends our leadership position as the best performing data warehouse technology at the most competitive price," said Scott Gnau, president, Teradata Labs. "Teradata Intelligent Memory technology is built into the data warehouse, and customers don't have to buy a separate appliance. Additionally, Teradata enables its customers to buy and configure the exact amount of in-memory capability needed for critical workloads. It is unnecessary and impractical to keep all data in memory, because not all data have the same value to justify being placed in expensive memory."

How does Intelligent Memory work? This animation video does a good job of making this advanced technology look simple.