
Buried treasure

DQI Bureau

IBM customers interested in Deep Computing technology invariably ask one question, according to Rick Lawrence, Manager, Parallel Applications Group, Watson. "What they want to know is whether our algorithms and fast computers can solve a problem that affects their bottom line," he says.




Accordingly, his group has shifted its focus over the past year from developing new algorithms for the SP parallel computer to the larger issue of how to use the technology to solve important problems in customer relationship management and financial modeling.



This kind of Deep Computing problem is known as datamining. It has long been recognized that buried in the terabytes of everyday business data (credit card transactions, grocery purchases, loan applications and stock trades) are nuggets of valuable information. In the past, the challenges of analyzing such huge databases have forced businesses to view their customers through the simplifying lens of averages. Deep Computing can move business beyond averages and toward individualized treatment of customers.



In Basingstoke, England, just outside London, the Safeway UK supermarket chain is experimenting with a Deep Computing system that takes the hassle out of grocery shopping. A palm-sized electronic organizer lets customers compose a new shopping order starting with an automatically generated list of all recent purchases. "You can build your shopping list any place you can take the device," says Marisa Viveros, Manager, Emerging Database Applications. To place an order, the device is connected via modem to a Safeway server that processes the order and then transmits it to the Basingstoke store. A Safeway employee gathers the groceries; the customer just stops by and picks them up.
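The article does not describe the actual software on the organizer, but a minimal sketch of how such a default order might be assembled from a hypothetical local purchase history could look like this:

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical purchase history stored on the organizer:
# (date, product) pairs downloaded during the last sync.
purchase_history = [
    (date(1999, 5, 3), "semi-skimmed milk"),
    (date(1999, 5, 3), "wholemeal bread"),
    (date(1999, 5, 10), "semi-skimmed milk"),
    (date(1999, 5, 10), "cheddar cheese"),
]

def default_shopping_list(history, weeks=6):
    """Build a starting order from everything bought recently."""
    cutoff = max(d for d, _ in history) - timedelta(weeks=weeks)
    recent = [product for d, product in history if d >= cutoff]
    # Most frequently repurchased items go to the top of the list.
    return [product for product, _ in Counter(recent).most_common()]

order = default_shopping_list(purchase_history)
print(order)  # the customer edits this list, then the device dials the server
```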




For this project, the Research team focused on human-computer interaction technology, scalable server infrastructure to support large numbers of customers, and personalized content. "Everything you display on that small screen has to be relevant to the customer," Viveros notes.



The Deep Computing aspect of this system is a feature known as personalized product recommendation. IBM researchers have used a variety of datamining techniques to look for shopping patterns among the spending histories of 20,000 Safeway customers. "Not long ago, all customers received essentially identical marketing messages," says Lawrence. "More recently, market segmentation techniques have directed different messages to different groups based on common interests. Now we've reached the third phase, where we can tailor the message to the individual."



One approach to this third phase is 'collaborative filtering'. "We're using datamining clustering to find groups of people who, in some sense of their shopping behavior, are similar," explains Viveros. "Then we find the most popular products they've bought, and we use that as input to another level of filtering called content-based filtering. Finally, we assign a score to each of these products based on a matching algorithm."
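The article only names the steps: clustering, popularity within a group, a content-style filter, and a score. A minimal sketch of that kind of pipeline, with made-up data and a deliberately simple matching score standing in for the real algorithm, might look like this:

```python
import numpy as np

# Toy purchase matrix: rows are customers, columns are products,
# entries are how often each customer bought each product.
products = ["tea", "coffee", "biscuits", "pasta", "olive oil", "rice"]
purchases = np.array([
    [5, 0, 3, 0, 0, 1],
    [4, 0, 2, 1, 0, 0],
    [0, 6, 0, 4, 3, 0],
    [0, 5, 1, 3, 4, 0],
    [1, 0, 4, 0, 0, 5],
])

def cluster(purchases, k=2, iters=20, seed=0):
    """Plain k-means on purchase vectors: find groups of similar shoppers."""
    rng = np.random.default_rng(seed)
    centers = purchases[rng.choice(len(purchases), k, replace=False)].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(purchases[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = purchases[labels == j].mean(axis=0)
    return labels

def recommend(customer, purchases, labels, top_n=10):
    """Popular products in the customer's cluster, scored and filtered
    down to items the customer has never bought."""
    group = purchases[labels == labels[customer]]
    popularity = group.sum(axis=0)            # cluster-level demand
    never_bought = purchases[customer] == 0   # content-style filter
    # Simple matching score: popularity weighted by how dominant the
    # product is within the cluster (a stand-in, not IBM's algorithm).
    score = popularity * (popularity / popularity.sum())
    candidates = [(products[i], score[i]) for i in range(len(products))
                  if never_bought[i] and popularity[i] > 0]
    candidates.sort(key=lambda pair: -pair[1])
    return candidates[:top_n]

labels = cluster(purchases)
print(recommend(customer=0, purchases=purchases, labels=labels))
```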




The result of all this Deep Computing is a list on the handheld device of 10 products the customer has never purchased but is likely to enjoy. This is meant to counter the possibility that when customers view the store through the display of a palm-sized organizer they will stop making serendipitous discoveries and impulse purchases.



In the United States, a large provider of consumer information also depends on the datamining capabilities of Deep Computing. The company wants to offer improved analysis that will enable financial institutions to achieve better targeting, and hence higher response rates, on the two billion offers for credit cards that are mailed each year. "We were given about a quarter of a terabyte of data, which we loaded into IBM's Universal Database on a 24-node SP at the IBM Teraplex Integration Center, a demonstration facility for data warehousing, in Poughkeepsie, New York," Lawrence says. "And this represents only a small fraction of this company's data."





After being filtered and aggregated, the data is reduced to a form that a parallel computer can handle to discover how people actually use credit. One of the most useful results identified a group of people who are usually rejected for credit but who in fact are good credit risks, a virtually untapped market.
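Neither the data layout nor the models are described in the article. A rough sketch of the filter-and-aggregate step, using hypothetical fields, would reduce raw records to one feature row per applicant before any modelling, and could then flag the segment the article describes:

```python
import pandas as pd

# Hypothetical raw records: one row per credit event, far too granular
# to mine directly at terabyte scale without aggregation.
raw = pd.DataFrame({
    "applicant_id": [1, 1, 1, 2, 2, 3, 3, 3],
    "was_rejected": [1, 1, 1, 0, 0, 1, 0, 0],
    "payment_late": [0, 0, 0, 0, 1, 0, 0, 0],
    "utilisation":  [0.2, 0.3, 0.1, 0.8, 0.9, 0.15, 0.2, 0.1],
})

# Aggregate to one feature row per applicant.
features = raw.groupby("applicant_id").agg(
    rejections=("was_rejected", "sum"),
    late_rate=("payment_late", "mean"),
    avg_utilisation=("utilisation", "mean"),
)

# Flag frequently rejected applicants whose actual behaviour
# (few late payments, low utilisation) looks low-risk.
untapped = features[
    (features["rejections"] >= 2)
    & (features["late_rate"] < 0.1)
    & (features["avg_utilisation"] < 0.4)
]
print(untapped)
```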



Performing a datamining run on such huge databases with a single-processor system used to take upwards of eight hours. "This is a fundamental bottleneck in an analyst's ability to acquire insight into this data," Viveros says. Using powerful SP2 parallel computers, the same analysis can be performed in half an hour. That means Deep Computing can give decision makers plenty of time for what is even more important: deep thinking.
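Eight hours down to half an hour is roughly a 16x speedup. The SP code itself is not shown in the article, but the general pattern of splitting a mining job across processors and combining partial results can be sketched with ordinary multiprocessing (hypothetical work function, toy data):

```python
from multiprocessing import Pool

def mine_partition(rows):
    """Stand-in for the per-node datamining work: here, just count
    how many transactions in the partition exceed a spending threshold."""
    return sum(1 for spend in rows if spend > 100)

def split(data, parts):
    """Divide the dataset into roughly equal partitions, one per worker."""
    size = (len(data) + parts - 1) // parts
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == "__main__":
    data = list(range(200_000))          # toy transaction amounts
    partitions = split(data, parts=16)   # one partition per processor
    with Pool(processes=16) as pool:
        partial = pool.map(mine_partition, partitions)
    # Combine the partial results, much as the parallel system
    # combines what each node has mined.
    print(sum(partial))
```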
