But Wait, There's More
Google Launches Summer of Code to Show Students Open Source Software
Search engine giant Google is a heavy user of open source software technologies, and while it does not make the tweaks to its core open source programs available to the general public--Google has offered some code and APIs--the company knows it has to contribute to the open source community in some way to be perceived as a good citizen.
To that end, Google has launched the Summer of Code program, which will pay students to contribute to one of 40 open source software projects, as well as to programs under way at Google itself. Here's the deal: You sign up at Google's Summer of Code Web page, and if you complete an open source project by the end of the summer, Google will pay you $4,500. If you want to submit your own idea for a project to one of these groups, Google is cool with that, but if you're just out of ideas and short on cash, Google has posted the to-do lists from all 40 open source projects, and you can pick your project from those lists.
Blue Gene Super to Simulate Key Part of the Human Brain
Human beings are very clever, but we are not yet quite smart enough to know how our own brains work. Under a new project being jokingly referred to as "Blue Brain"--the mind shudders at the idea of a human mind in the image of Big Blue, but let that idea go--IBM is working with the Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland to simulate the electrochemical processing in the neocortical complex of the human brain.
If you don't know what the neocortex is, it is the rippled outer layer of the brain that all mammals have, and it is considered the center of intelligence and symbolic processing; you might know it as the cerebrum. Professor Henry Markram at EPFL, according to IBM, has the best understanding of the chemistry of the neocortex, and having spent 10 years building up data on that chemistry, he now wants to simulate a 10,000-neuron column of the neocortex. Doing so on the Lintel cluster that EPFL currently has is not possible, and both IBM Research and EPFL believe that the unique architecture of the Linux-based Blue Gene supercomputer will fit well with the simulation software that EPFL will need to create to simulate this neocortex section. So, to that end, EPFL is acquiring a four-rack Blue Gene system rated at 22.8 teraflops to begin its initial simulations; EPFL has an option to add another two racks if it needs more processing capacity. Each Blue Gene rack lists for about $2.5 million, according to Tilak Agerwala, vice president of systems at IBM Research and the person who is steering the Blue Gene project into its commercialized phase.
The ultimate goal of Markram's research, says Charles Peck, the lead researcher on the IBM side of the joint project with EPFL, is to refine the initial neocortex simulation to the point where it can be accurately described with simpler algorithms than the initial ones IBM and EPFL will create, such that the entire human brain can be simulated at an electrochemical level. Doing so might take as much as 100 to 1,000 petaflops--that's 100,000 to 1 million teraflops--of computing power, says Peck, which is well beyond any technology we expect to have for the next 10 to 15 years. At the high end of that range, such a machine would, by my math, eat up about 165 acres of space using current Blue Gene technology and consume 4.7 gigawatts of power. Your brain does the actual processing in something the size of a melon, with minimal heat, and can be powered for a few hours on a candy bar. Amazing, isn't it? Clearly, any full-brain simulation will only be possible if the algorithms can be refined to accurately describe brain chemistry without taking up as much processing power as they initially will.
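If you want to check my math, here is the back-of-envelope calculation. The article gives the 22.8-teraflop rating across four racks; the square footage and power draw per rack are my own rough assumptions (roughly a rack plus aisle space, and a ballpark Blue Gene rack draw), not figures from IBM.

```python
# Back-of-envelope scaling of the 1,000-petaflop full-brain estimate
# to Blue Gene racks. Per-rack floor space and power are assumptions.
TFLOPS_PER_RACK = 22.8 / 4      # 5.7 teraflops per rack, from the article
TARGET_TFLOPS = 1_000_000       # high end of the estimate: 1,000 petaflops
SQFT_PER_RACK = 41              # assumed: rack footprint plus aisle space
KW_PER_RACK = 27                # assumed: rough Blue Gene rack power draw
SQFT_PER_ACRE = 43_560

racks = TARGET_TFLOPS / TFLOPS_PER_RACK
acres = racks * SQFT_PER_RACK / SQFT_PER_ACRE
gigawatts = racks * KW_PER_RACK / 1_000_000

print(f"{racks:,.0f} racks, {acres:,.0f} acres, {gigawatts:.1f} GW")
# About 175,000 racks, which lands near 165 acres and 4.7 gigawatts
```

With those assumptions, the numbers come out to roughly the 165 acres and 4.7 gigawatts cited above.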
As part of the deal, IBM's Zurich lab is getting to use some of the excess capacity in the Blue Gene machine to help it design carbon nanotubes and other potential future semiconductor technologies. Other researchers at EPFL will also use the machine to study protein folding as it relates to mad cow, Creutzfeldt-Jakob, and related brain diseases. The initial Blue Gene machine at EPFL will have about 10 times the peak processing power of its current Lintel cluster.
SpikeSource Raises Nearly $13 Million in VC Dough for Expansion
Open source software stack provider SpikeSource has just raked in its first round of venture capital funding--specifically, $12 million in cash from Kleiner Perkins Caufield and Byers (where SpikeSource was incubated in 2003 by Murugan Pal, formerly the lead developer of Oracle's application server), Fidelity Ventures, Intel Capital, and Omidyar Network (created by Pierre Omidyar, one of the founders of eBay).
In April, SpikeSource announced its SpikeSource Core stack, a set of seven preconfigured software stacks with over 50 different components spanning six different development languages. RHEL 3, Fedora Core 1, Fedora Core 3, SUSE 9, and SUSE 9.1 are all supported in this initial release. SpikeSource is also building a Windows stack, called WAMP, which is now available as an alpha release.
Black Duck Gets $12 Million in Second-Round Funding
The venture money spigots seem to be opening up a bit, and Black Duck Software said this week that it has secured $12 million in second-round venture capital funding to help fuel its growth. The round was led by Fidelity Ventures; Intel Capital and SAP Ventures, the investment arms of the chip maker and the ERP software maker, also tossed some dough into the Black Duck nest. The initial investors who put up $5 million in July 2004 for Black Duck's first round of funding--Flagship Ventures, General Catalyst Partners, and Red Hat--also lined the nest in this round. In addition to the funding, Black Duck and Intel signed a technology and marketing agreement to optimize Black Duck's protexIP license compliance software for Intel's 64-bit Xeon servers.
SGI NAS Boxes Archive and Monetize NBA Games
The National Basketball Association's NBA Entertainment unit is one of the largest suppliers of sports programming to television stations and Internet sites in the world. And it has some gargantuan data processing requirements just to do what it currently does--providing a play-by-play database of all professional basketball games and a clippings database to show snippets of games--and it will have even greater needs as it gets a little more sophisticated in what game data it archives and how it can monetize that data to make the NBA some money.
This is why the NBA's data center in Secaucus, N.J., has just acquired a 40 TB shared-storage NAS system from Silicon Graphics (which is based on its Altix Linux-Itanium server platform) and has engaged SGI's Professional Services organization to help it improve the way that basketball games are archived. Right now, the NBA has people at every basketball game, logging each and every shot. This information goes into a time-synched database, which can be linked up to video footage from several angles that can later be accessed to pull that footage off the NAS. The NBA has a dozen years of video that it wants to digitize, and it even has parts of basketball seasons on video that go back to the 1940s.
So what good is digitizing all of this data and building a system to consume it? According to Greg Estes, vice president of corporate marketing at SGI, one potential use of the technology (which has not been set in stone or ink yet) would be something like this. Players are currently tracked visually at the games, and they have no desire to wear RFID tags so they can be tracked automatically as they move around the court. Without RFID tags, placing each player on the court and tracking their every move has to be done with visual processing of some sort, and SGI is proposing to use its Linux-based Prism workstations and related visualization systems to do this. Having completely rendered the game digitally, coaches could analyze the physics of players--the shots they make, how fast they move, and so forth. Moreover, you could even go so far as to render past games in virtual reality, giving players, coaches, and fans the chance to step inside a game and see it from the court rather than from the bleachers or the television. This will obviously take a lot of servers and storage, and SGI is clearly hoping not only that the NBA moves in this direction, but that other sports do as well.
Arkeia Backup Products Certified on RHEL 4
Backup software specialist Arkeia last week said its Server Backup for standalone servers and Network Backup for mixed Linux and Unix servers have been certified to run on Red Hat's latest Enterprise Linux 4 version. The company's Arkeia Disaster Recovery add-on product and hot backup plug-ins for popular databases have also been certified on RHEL 4.
The RHEL certification follows a similar cert that Arkeia completed in April for Novell's SUSE Linux Enterprise Server 9. Specifically, Arkeia achieved the Novell YES certification level, which means both Arkeia and Novell guarantee that Network Backup will work on SLES 9 and that both companies have the means to provide full enterprise-level support for the product if customers deploy it in production environments.
Arkeia has 4,000 customers worldwide that have deployed its backup solutions on more than 100,000 networks to date; it supports Linux, Unix, Windows, Mac OS X, and other platforms.
HP Says Itanium Application Count Up 25 Percent Since January
The Itanium software portfolio seems to be growing at a nice clip, and Hewlett-Packard was happy to report at its HP Enterprise Forum in Copenhagen, Denmark, last week that the total number of applications available on Itanium has grown to 4,000, an increase of 25 percent since January. HP told me in December 2004 that the Itanium chip had 2,900 applications ported to it and that HP hoped to hit 4,500 applications by the end of 2005. HP has pulled in 1,100 new applications in less than six months; if that rate can be sustained, Itanium could have more than 5,000 applications by the end of the year--well ahead of HP's projections from six months ago.
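The arithmetic behind that projection is simple enough to check. This sketch just runs the rate-of-growth numbers from the paragraph above; the assumption (mine, not HP's) is that the pace of the past six months holds steady through December.

```python
# Sanity check of the Itanium application-count figures above.
current = 4000                    # applications available now
jan_count = current / 1.25        # 25 percent growth since January -> 3,200
dec_2004 = 2900                   # HP's December 2004 figure
new_apps = current - dec_2004     # 1,100 new applications in under six months

# Assumed: the same half-year pace continues through the end of 2005.
year_end_estimate = current + new_apps

print(jan_count, new_apps, year_end_estimate)   # 3200.0 1100 5100
```

That puts the year-end count around 5,100, past the 5,000 mark and comfortably ahead of HP's own 4,500-application target.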