Tuesday, February 26, 2013

New DDN hScaler is World's First Enterprise Apache Hadoop Appliance

In this podcast, Jeff Denworth from DDN describes the company's new hScaler storage system -- the World's First Enterprise Apache Hadoop Appliance.
"DDN has developed a Hadoop solution that is all about time to value: It simplifies rollout so that enterprises can get up and running more quickly, provides typical DDN performance to accelerate data processing, and reduces the amount of time needed to maintain a Hadoop solution," said Dave Vellante, Chief Research Officer, Wikibon.org. "For enterprises with a deluge of data but a limited IT budget, the DDN hScaler appliance should be on the short list of potential solutions."
Read the Full Story * Download the MP3 * Download the Slides * Subscribe on iTunes

Saturday, February 23, 2013

Excelegrade Digital Testing Platform Wins Startup Riot

In this podcast, Sanjay Parekh from the Startup Riot discusses how the annual event brings entrepreneurs together with VCs and Angel investors. We also get a chance to talk to the most recent winner of the Riot competition, Lauren Miller from Excelegrade.
"Excelegrade is a platform that allows teachers to focus on the art of teaching while we do the science. Teachers currently spend hours on administrative tasks, like test creation, grading, etc., leaving them with less time to focus on instruction. We aim to give teachers their time back without compromising the rigor of instruction, and we do this with 21st century technology. Much like textbooks are going digital, Excelegrade makes K-12 classroom assessments digital by replacing paper-based tests with assessments on tablets, smart phones, and laptops."
Read the Full Story * Download the MP3 * Subscribe on iTunes

Thursday, February 21, 2013

Examining Hadoop as a Big Data Risk in the Enterprise

In this podcast, Brian Christian from Zettaset presents: Examining Hadoop as a Big Data Risk in the Enterprise.
"While the open source framework has enabled Hadoop to logically grow and expand, business and government enterprise organizations face deployment and management challenges with Hadoop. Hadoop's core specifications are still being developed by the Apache community and, thus far, do not adequately address enterprise requirements such as robust security and support for regulatory compliance mandates like HIPAA and SOX."
Read the Full Story * Download the MP3 * Download the Slides * Subscribe on iTunes

Wednesday, February 20, 2013

Infosys BigDataEdge Platform for Insight Management

In this podcast, Vishnu Bhat from Infosys presents an overview of the company's all-new BigDataEdge platform for insight management.
"Enterprises today cannot afford to spend an inordinate amount of time making sense of the data deluge that surrounds them," said Vishnu Bhat, VP of Cloud at Infosys. "Infosys BigDataEdge draws upon our deep research & development capabilities and proven expertise in Big Data and analytics to help clients turn data into revenues faster. This unique platform is already enabling ten global organizations to develop actionable insights in a matter of days and act on them from day one."
Read the Full Story * Download the MP3 * Download the Slides * Subscribe on iTunes * If Dropbox is blocked, download audio from Google Drive.

Tuesday, February 19, 2013

Ethernet Secrets of TCP

In this podcast, Jeff Squyres from Cisco presents: Ethernet Secrets of TCP.
"TCP? Who cares about TCP in HPC? More and more people, actually. With the commoditization of HPC, lots of newbie HPC users are intimidated by special, one-off, traditional HPC types of networks and opt for the simplicity and universality of Ethernet. And it turns out that TCP doesn't suck nearly as much as most (HPC) people think, particularly on modern servers, Ethernet fabrics, and powerful Ethernet NICs. I'll cut to the chase: I surprised myself by being able to get ~10us half-round-trip ping-pong MPI latency over TCP (using NetPIPE)."
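The measurement Squyres describes is a classic ping-pong: one side sends a small message, the other echoes it back, and half the round-trip time approximates one-way latency. The sketch below is a minimal, hypothetical illustration of that idea using plain Python sockets over loopback -- it is not NetPIPE or MPI, and loopback numbers say nothing about a real Ethernet fabric; it only shows the measurement technique (note the TCP_NODELAY setting, which disables Nagle's algorithm so small messages go out immediately).

```python
# Minimal TCP ping-pong latency sketch (loopback only).
# Illustrative: the ~10us figure in the talk came from NetPIPE
# over MPI on real Ethernet hardware, not from this code.
import socket
import threading
import time

def echo_server(srv):
    conn, _ = srv.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:          # client closed the connection
                break
            conn.sendall(data)    # echo the message back

def measure_half_rtt(iters=1000, size=8):
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))    # ephemeral port
    srv.listen(1)
    threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

    cli = socket.create_connection(srv.getsockname())
    # Disable Nagle so each small message is sent immediately.
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    msg = b"x" * size

    start = time.perf_counter()
    for _ in range(iters):
        cli.sendall(msg)
        buf = b""
        while len(buf) < size:    # recv may return partial data
            buf += cli.recv(size)
    elapsed = time.perf_counter() - start
    cli.close()
    srv.close()
    return elapsed / iters / 2 * 1e6   # half round trip, microseconds

if __name__ == "__main__":
    print(f"half-RTT: {measure_half_rtt():.1f} us")
```

A serious benchmark would also pin processes to cores, vary message sizes, and report percentiles rather than a single mean, which is roughly what NetPIPE automates.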
Read the Full Story at the MPI Blog * Download the MP3 * Download the Slides * Subscribe on iTunes * If Dropbox is blocked, download audio from Google Drive.

Sunday, February 17, 2013

AMD's John Gustafson on Three Common Misconceptions about HPC

In this podcast, John Gustafson discusses how AMD is meeting customer challenges for energy efficient computing. He also shares three common misconceptions about HPC. Download the MP3 * Download the Slides * Subscribe on iTunes * If Dropbox is blocked, download audio from Google Drive.

Thursday, February 14, 2013

Garantia Data Goes GA with Cloud-based Redis and Memcached Platform

In this podcast, Ofer Bengal from Garantia Data describes the company's cloud-based Redis and Memcached high performance database platform, which went GA today.
"We are excited to be the first to offer AWS' European users enhanced Redis functionality they never had before," said Ofer Bengal, CEO of Garantia Data. "We have seen great demand in this region for scalable, highly available and fully-automated services for Redis and Memcached. Our Redis Cloud and Memcached Cloud provide exactly the sort of functionality developers look for."
Read the Full Story * Download the MP3 * Subscribe on iTunes * If Dropbox is blocked, download audio from Google Drive.

Wednesday, February 13, 2013

LUG 2013 Coming to San Diego, April 16-18

In this podcast, Norm Morse from OpenSFS looks back on the past year's developments in the Lustre community and provides a preview of the upcoming LUG 2013 conference in San Diego, April 16-18.
LUG is always about a real exchange of ideas. The LUG program committee would like to invite members of the Lustre community to submit presentation abstracts for inclusion in this year's agenda. If you've considered it before but put it off, we want to hear from you. We've made it easy; the first step simply requires a one-page abstract of your proposed talk. We're looking for deep-dives, new information, and controversial topics in all areas of Lustre development, application, or best practices. The deadline to submit presentation abstracts is March 4, 2013.
Registration is now open. The call for presentations is online and there are many sponsorship opportunities. Download the MP3 * Subscribe on iTunes * If Dropbox is blocked, download audio from Google Drive.

Tuesday, February 12, 2013

Nevex Technology Powers Intel Cache Acceleration Software

In this podcast, Andrew Flint and Carolyn Hanley from Intel present: Intel Cache Acceleration Software.
"Intel CAS complements our SSD data center family by providing a total caching solution that delivers even more value and capability for our customers," said Chuck Brown, product line manager for Intel's Non-Volatile Memory Solutions Group. "Intel CAS delivers a multi-level cache across the SSD and DRAM for optimal performance. Compared to short-stroked hard-drive technology, we've seen up to a 50x improvement in I/O throughput for read-intensive workloads by adding Intel CAS with the Intel SSD 910 series."
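The "multi-level cache across the SSD and DRAM" idea can be pictured as a small fast tier in front of a larger, slower tier, with hot data promoted on access and cold data demoted. The sketch below is a hypothetical illustration of that general structure in Python; the class, method names, and LRU policy are all assumptions for the example and do not reflect Intel CAS's actual algorithms.

```python
from collections import OrderedDict

# Hypothetical two-tier read cache: a small DRAM tier in front of a
# larger SSD tier, over a slow backing store (e.g. spinning disk).
# Illustrative only -- not Intel's actual caching algorithm.
class TwoTierCache:
    def __init__(self, dram_slots, ssd_slots, backing_store):
        self.dram = OrderedDict()      # fast tier, LRU order
        self.ssd = OrderedDict()       # larger, slower tier, LRU order
        self.dram_slots = dram_slots
        self.ssd_slots = ssd_slots
        self.backing = backing_store   # dict standing in for the HDD

    def _evict(self, tier, slots):
        while len(tier) > slots:
            tier.popitem(last=False)   # drop least recently used entry

    def read(self, key):
        if key in self.dram:           # DRAM hit: cheapest path
            self.dram.move_to_end(key)
            return self.dram[key]
        if key in self.ssd:            # SSD hit: promote to DRAM
            val = self.ssd.pop(key)
        else:                          # miss: fetch from slow store
            val = self.backing[key]
        self.dram[key] = val
        self._evict(self.dram, self.dram_slots)
        return val

    def demote(self):
        # Push the coldest DRAM entry down into the SSD tier.
        if self.dram:
            key, val = self.dram.popitem(last=False)
            self.ssd[key] = val
            self._evict(self.ssd, self.ssd_slots)
```

For example, with `TwoTierCache(2, 4, store)`, repeated reads of two hot keys stay in the DRAM tier while colder keys fall through to the SSD tier or the backing store.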
Read the Full Story * Download the MP3 * Download the Slides * Subscribe on iTunes * If Dropbox is blocked, download audio from Google Drive.

Ironstone VC Fund Using Data Science to Pick Disruptive Startups

Can Big Data analytics be used to predict which Startup companies will succeed? In this podcast, Thomas Thurston from Growth Science discusses the new Ironstone Venture Capital Fund, which is using Business Model Simulation to choose disruptive Startups.
"The human mind is good at some things but bad at others. So we use data science and technology to help our brains with the things they weren't designed for. This marriage between technology and the brain has allowed us to predict business behavior in ways that weren't possible even a decade ago. It's the future of venture capital," said Thomas Thurston from Growth Science. "This fund is unique. First, instead of mostly using intuition, like most VCs do, we're using powerful, proven data science to identify disruptive companies. That's revolutionary. Second, we're interested in seed- and early-stage companies, which is much needed as our economy rebuilds itself. Finally, unlike a lot of VCs focused on exits and quickly 'flipping' startups, we have a long-term view and really want to partner with people growing strong, disruptive, meaningful businesses to make the world a better place."
Read the Full Story * Download the MP3 * Subscribe on iTunes * If Dropbox is blocked, download audio from Google Drive.

An Overview of GlassHouse Cloud Services

In this podcast, Ken Copas from GlassHouse presents an overview of the company's Cloud services.
"GlassHouse delivers clarity amid the hype, providing vendor-independent, objective IT consulting to drive customer-defined business outcomes and guide clients along the full lifecycle of transforming their IT infrastructure. We are unique in our focus not just on technology but also on our clients' information policies, procedures and organizational design, as those have the most significant impact on IT costs and effectiveness. Our solutions cover end-to-end planning, design and deployment to fully managed services."
Download the MP3 * Subscribe on iTunes * If Dropbox is blocked, download audio from Google Drive.

Wednesday, February 6, 2013

Cycle Computing Runs Amazing 10,600-Instance HPC Cluster on AWS for Big Pharma

In this podcast, Jason Stowe from Cycle Computing describes how the company spun up a 10,600-instance HPC cluster in 2 hours with CycleCloud on Amazon EC2. Built with just one Chef 11 server and with one purpose in mind, this on-the-fly cluster was used to accelerate life science research relating to a cancer target for a Big 10 pharmaceutical company.
"To tackle this problem, we decided to build software to create a CycleCloud utility supercomputer from 10,600 cloud instances, each of which was a multi-core machine! This makes this cluster the largest server-count cloud HPC environment that we know of, or that has been made public, to date (the former utility supercomputing leader was our 6,732-instance cluster for Schrödinger from 2012). If this cluster were a physical environment, analysts said it would occupy 12,000 sq ft of data center space, costing $44 million. Instead, we created this in 2 hours with these 10,600 hosts, used it for 9 more, at a peak cost of $549.72 per hour, and turned it off for a total cost of $4,362. Wow!"
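The quoted figures make an interesting point about elastic pricing: the total cost implies an average hourly rate well below the peak rate, because the cluster ramps up and down rather than running all 10,600 instances for the whole window. A quick check, assuming the cluster existed for roughly 11 hours in total (2 hours to spin up plus 9 of use -- an assumption, since the quote gives only the peak rate and the total):

```python
# Sanity check on the cost figures quoted above.
# Assumes ~11 total hours (2 to build + 9 of use), which is an
# inference from the quote, not a stated fact.
total_cost = 4362.00    # USD, from the quote
peak_rate = 549.72      # USD/hour at peak size, from the quote
hours = 2 + 9

avg_rate = total_cost / hours
print(f"average: ${avg_rate:.2f}/hour vs peak: ${peak_rate:.2f}/hour")
# The average sits well under the peak rate because instances are
# only billed while they exist, and the cluster grows and shrinks.
```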
Read the Full Story * Download the MP3 * Download the Slides (PDF) * Subscribe on iTunes * If Dropbox is blocked, download audio from Google Drive.

Monday, February 4, 2013

Merle Giles on Increasing Emphasis on the Industrial Sector at ISC'13

In this podcast, Merle Giles from NCSA and I discuss the new Two-Day Industry Track on "HPC for Small and Medium Enterprises" at the upcoming ISC'13 conference. As part of the ISC Distinguished Speaker Series, Giles will present on the common needs of engineering and scientific research in regard to HPC.
The goal of the Industry Track is to help attendees from industry, who often have different computing requirements than those at scientific institutions, make informed decisions about acquiring and operating HPC systems. This newly established track will focus on engineering and manufacturing in industry, especially on helping the industry improve product design and time-to-market through the use of HPC. The talks are also aimed at spurring a dialogue between users, technology companies, hardware vendors, software vendors and service providers. Small and Medium Enterprises (SMEs) will be strongly represented.
ISC'13 will take place June 16-20 in Leipzig, Germany, a new city for the show. Download the MP3 * Subscribe on iTunes * If Dropbox is blocked, download audio from Google Drive.

Sunday, February 3, 2013

More than Big Data: Scott Gnau on the Teradata Unified Data Architecture

In this podcast, Scott Gnau from Teradata Labs discusses various aspects of Big Data and how the company's Unified Data Architecture can position the enterprise to succeed.

SGI Infinite Storage & Scality Ring

In this podcast, Floyd Christofferson from SGI describes how the combination of the company's Infinite Storage platform and Scality Ring technology provides a new, unified scale-out storage system. The solution is designed to provide both extreme scale and high performance, allowing customers to manage massive stores of unstructured data.
"Scale-out object-based solutions are designed to address this particular set of problems by minimizing manual intervention for storage expansions, migrations, and recoveries from storage system failure," said Ashish Nadkarni, research director, Storage Systems at IDC. "Such a dispersed, fault-tolerant architecture enables IT organizations to more efficiently absorb data growth in a manner that is predictable for the long term."
Read the Full Story * Download the MP3 * Download the Slides (PDF) * Subscribe on iTunes * If Dropbox is blocked, download audio from Google Drive.