
ITV Interview: Suranga Chandratillake, Co-Founder and CEO of blinkx

Suranga Chandratillake is co-founder and CEO of the video search company Blinkx. He recently spoke to [itvt]’s Tracy Swedlow about the company’s technologies and services, its plans for monetizing its offerings, its plans for targeting the IPTV and set-top box spaces, its strategy of creating partnerships with content providers, the growing importance of user-generated content, his views on the future of video search, and more.

[itvt]: Could you provide us with a little background on what Blinkx’s video search technology does, and explain how it differs from what Truveo’s does? As you may know, we recently interviewed Truveo’s co-founder, Timothy Tuttle.

Chandratillake: We’re a Web-based video search engine. So we allow you to type things into a search box, just as you would into a Google or Yahoo! search box. But rather than getting back Web pages, you get back videoclips—and audioclips as well, I should mention.

We entered this market because, while it is obvious that search is a very important engine or mechanism on the Web, search currently tends to be pretty narrow in its focus. Existing search engines do a very good job of indexing, understanding and finding text-based content—like Web pages and so on. But they do a pretty abysmal job when it comes to video content.

We were very interested in why that was, and in whether we could do a better job of it, and in what we would have to do to make video search effective. So that’s why we originally launched into what we do. One thing we realized very quickly was that the way the traditional search engines index the Web—and Truveo does this too, although in a much more sophisticated way—is that they don’t actually look at the video or audio files themselves. Instead, what they do is look at the text or metadata that lives near them or alongside them. So, when it comes to video, these kinds of search engines basically say, “There’s a Web page that has a story on it, and there’s also a videoclip on the same page. Therefore, we can probably associate that text with that videoclip.” Or, it might be that they notice a caption, or that they notice a tag on the media file itself that says it’s about a certain thing, and so on. So that was the way that the original indexes worked, including the Yahoo! index, which is powered by the Alta Vista technology, and AOL’s original video index, which is powered by Singingfish. They both fit into that category.
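The metadata-only indexing described above can be sketched in a few lines: the spider never opens the media file itself, it simply attaches whatever text surrounds a clip link to that clip. This is an illustrative toy under assumed markup and file extensions, not any engine's actual implementation:

```python
# Sketch of metadata-style video indexing: the indexer never inspects
# the media file; it associates a clip URL with the text that happens
# to surround it on the page. All names here are illustrative.
from html.parser import HTMLParser

class MetadataVideoIndexer(HTMLParser):
    """Collects video URLs and nearby page text from one HTML document."""
    def __init__(self):
        super().__init__()
        self.video_urls = []
        self.page_text = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # A real spider would also handle <embed>, <object>, player widgets, etc.
        if tag == "a" and attrs.get("href", "").endswith((".mpg", ".mov", ".wmv")):
            self.video_urls.append(attrs["href"])

    def handle_data(self, data):
        if data.strip():
            self.page_text.append(data.strip())

def index_page(html):
    parser = MetadataVideoIndexer()
    parser.feed(html)
    # Every clip on the page inherits the same descriptive text,
    # which is exactly the weakness the interview points out.
    description = " ".join(parser.page_text)
    return {url: description for url in parser.video_urls}

page = '<h1>Mars rover landing</h1><a href="clip1.mov">Watch</a><p>Full coverage.</p>'
print(index_page(page))
```

If the page text misdescribes the clip, this index happily records the misdescription, since nothing ever watches or listens to the video.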

Now, what Truveo has done is pretty smart. That was to realize that, as well as straight HTML-style spidering—which is what those other search companies do—you can also do display-oriented spidering. Which means that a search engine, in effect, looks at pages the way a human looks at the page. If you do that, you can often find and get access to more content than you would do on a normal page. We actually do some of that, too. It’s definitely one of the things that’s enabled our video index and Truveo’s video index to be larger than Yahoo!‘s and Google’s video indexes.

But then, when we looked at that methodology further, we found that it still had its faults. At the end of the day, you’re relying on whoever put the content up on a site to, first of all, describe it properly and nicely, and, secondly, to describe it accurately. If someone says one thing, but shows you something else, there’s no way of picking up on that. The computers themselves—the search engines—aren’t really watching or listening to the video content itself. They’re just relying on what they’ve been told about that video content.

Dealing with the video itself is what we’ve really focused on. Blinkx has built technologies around actually ingesting and understanding the content itself—the rich media content itself. We use speech recognition technology to literally listen to and understand word-for-word what each videoclip or audioclip is talking about. We use visual analysis to literally watch the video and split it into different sections and different segments, so we know when the relevant searches appear. We’re also now looking to visual analysis to do yet more sophisticated things, like picking up words that appear on-screen on a piece of video—or even doing some basic facial recognition of famous faces, and so on.

[itvt]: Could you talk a little more about the speech recognition and visual analysis technologies Blinkx has developed?

Chandratillake: The speech recognition technology listens to clips that enter the system—whether they are audio or video—and creates both a phonetic and a textual transcript of the content. Both are used to search within the content: so even if something isn’t mentioned in the metadata, if it’s mentioned by a speaker in the clip, we can pick up on it; and the textual transcript can also be used in the summary you see on the results page. We use a number of visual analysis steps: one of the most effective is the creation of thumbnails from each major scene in a clip. This means that when you do a search, you can see a visual preview on the site, related to your search, against every single hit.
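The value of keeping both a textual and a phonetic index can be shown with a toy example. The crude phonetic key below is a hand-rolled stand-in for a real recognizer's phoneme lattice, used only to show why phonetic matching catches spellings the textual transcript misses:

```python
# Toy sketch of a dual textual/phonetic transcript index. The phoneme
# key here is deliberately crude; a real speech engine produces proper
# phonetic lattices. Names and rules are illustrative assumptions.
def rough_phonemes(word):
    """Collapse a word to a crude phonetic key (fold ph->f, drop vowels, dedupe)."""
    word = word.lower().replace("ph", "f").replace("ck", "k")
    key = word[0]
    for ch in word[1:]:
        if ch in "aeiouyhw":
            continue
        if ch != key[-1]:
            key += ch
    return key

def matches(query, transcript_words):
    """A hit if the query matches any transcript word textually or phonetically."""
    q_key = rough_phonemes(query)
    return any(query.lower() == w.lower() or q_key == rough_phonemes(w)
               for w in transcript_words)

transcript = ["the", "philosophy", "of", "search"]
print(matches("filosofy", transcript))  # phonetic match despite the misspelling
```

The textual index would reject "filosofy" outright; the phonetic key makes the misspelled query land on "philosophy" anyway.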

So, to summarize, where we’ve focused our research and our solution is on seeing how much you can actually understand the video itself, not just its metadata. The metadata solution, at its worst, is quite simplistic—that’s certainly the case with Yahoo!’s and Google’s solutions. Truveo has a more sophisticated version of the metadata approach—but even Truveo’s approach can easily miss stuff. So our focus is on getting access to the content itself. That approach underlies the technology and service that we’ve built, and is basically what our service is all about.

Let me explain the really interesting application of all of this. It’s true that the solutions that the other video search engine companies have pursued make them pretty good at finding content on the Web. When the content is on the Web, it has metadata, and it’s framed by a Web page, and so on—so the metadata approach works reasonably well. But if you look at television content that’s not on the Web, then it gets much, much harder, because TV doesn’t come with as much metadata. If you were to rely on metadata to enable video search on TV, that would be expensive and time-consuming for the content producers, because it would be a “manual” process, if you will: they’d have to make the effort to describe in detail everything in their video.

So our indexing technology is great for TV, because it doesn’t assume, in any way, that the content is living on the Internet, and because we can literally plug it into things like a cable box or a satellite box. It can automatically—without any help from metadata—understand what’s going on in a particular piece of video: it can even watch and listen to an analog signal coming straight from regular cable, terrestrial or satellite TV. However, we’ve actually received a lot of interest in what we’re doing from the still fairly nascent—at this stage—IPTV market.

So we’re talking to various set-top box manufacturers, and we’re also talking to some of the network providers and carriers. IPTV deployments, it seems, are further along in Europe. So a lot of these initial conversations are with European companies. But we are talking to the American players, as well. What they all want to know is this: given that they have raw video content—not video that’s embedded on a Web site—how well can our technology understand what’s going on in their content, and how well can it use that understanding to then help people navigate through that content more effectively?

[itvt]: You’re saying that there is a dearth of metadata?

Chandratillake: Yes. The experience that we’ve had in talking to content owners and creators is that there simply isn’t much of that data out there. At least two of our partners partner with us because they themselves have no metadata on their own TV shows. One of the side-products of our process is that we churn out transcripts for shows. So we send the transcripts of their shows back to those two partners, so that they have a way of knowing what those shows contained.

Now, there are certain production companies and certain channels that do create lots of metadata, and do know what’s going on in their programs scene-by-scene. Where that’s the case, you can base searches on that metadata. But, as more and more people create content—as content gets created faster and in a broader way—it’s very difficult to actually have human beings sitting there, tagging and describing every single thing that goes on. We believe pretty passionately that some type of alternative approach is needed.

[itvt]: You mentioned that your technology can be plugged into the set-top box…

Chandratillake: We can plug our engine into both the set-top box and the headend—for different reasons in each case. When we talk to carriers that want to run IPTV networks, their problem is that they envisage having millions of hours of content, and that they need to have an easy way for their end-users to navigate through that content. These kinds of companies are very interested in having a technology like ours sitting at the headend, that knows about every single piece of content that they have. So that when someone types in a search, we can go and find the right program for them, and the IPTV operator can then start delivering it in whatever way they deliver it.

But we are also talking to some of the set-top box manufacturers, because they’re interested in putting a lightweight version of what we do into their devices. A lot of the newer DVRs have hundreds of hours of storage capacity, which means that end-users need video search technology like ours, simply in order to find what they want from all the programs they have stored on their device.

In both cases—headend and set-top—we wouldn’t really build a full application ourselves. We would basically provide the engine that does the indexing—that does the searching—and it would be wrapped inside an actual application built by the set-top box company or the carrier.
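The engine-versus-application split described above can be sketched as two components: a licensed indexing/search engine, and the carrier's own front end that wraps it. Class and method names here are illustrative assumptions, not a real API:

```python
# Sketch of the engine-vs-application split: one company supplies only
# the indexing/search engine, and the carrier or set-top vendor wraps
# it in their own user-facing application. All names are hypothetical.
class SearchEngine:
    """The licensed component: indexes transcripts, answers queries."""
    def __init__(self):
        self._index = {}

    def ingest(self, clip_id, transcript):
        # Invert the transcript: word -> set of clips containing it.
        for word in transcript.lower().split():
            self._index.setdefault(word, set()).add(clip_id)

    def search(self, term):
        return sorted(self._index.get(term.lower(), set()))

class CarrierGuideApp:
    """The carrier's own front end, wrapping the engine."""
    def __init__(self, engine):
        self.engine = engine

    def on_remote_input(self, text):
        hits = self.engine.search(text)
        return f"{len(hits)} programs match '{text}': {', '.join(hits)}"

engine = SearchEngine()
engine.ingest("ep1", "grand prix qualifying highlights")
engine.ingest("ep2", "cooking highlights of the week")
app = CarrierGuideApp(engine)
print(app.on_remote_input("highlights"))  # → 2 programs match 'highlights': ep1, ep2
```

The same `SearchEngine` could sit at the headend (one big index) or, in a lightweight build, on the box itself indexing only locally stored recordings.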

[itvt]: If you were to build your technology into a set-top box, how large would its footprint be?

Chandratillake: We’ve discussed different footprints with different companies. Some companies offer more complex, thick-client boxes. Therefore, they’re interested in us providing a version of our engine that can do more. Other companies want us to provide a very lightweight solution. If our software were at the headend, it wouldn’t need any download or any footprint whatsoever on the local box. Its set-top footprint could be literally zero bytes. But a more typical solution would basically be a program around a megabyte in size that would sit in the background, and be accessed by the frontend application.

[itvt]: Do you have any actual deployments of your technology in the set-top or the headend yet?

Chandratillake: Everything is still very early-stage. We don’t have anything that’s actually live, at this point.

[itvt]: Which companies are you talking to?

Chandratillake: I’m not at liberty to reveal which specific companies we’re talking to. What I can say is that we’re talking to two traditional telcos who are moving into the IPTV market, using their existing telecommunications network as the backbone for the distribution. We’re also talking to one satellite set-top box manufacturer in Europe which has a fairly large marketshare in Northern Europe. Those are the three conversations that we’re having that are the furthest along. The conversation with the set-top company, obviously, is about a set-top box deployment. The conversations with the telcos are more about a headend deployment.

[itvt]: The two telcos you mentioned—are they in the US or outside the US?

Chandratillake: Both of them are outside the US. We’re talking to some of the US telcos, too—but frankly, they seem slightly behind in their thinking on the idea. We are definitely having all manner of conversations with them. But, certainly at this stage, it seems that it’s relatively early days for IPTV here in the States. The conversations are more about discovery and so on, than about actual implementation. I think we’ll see the first true implementation of this kind of thing outside the US, and then we’ll see it adopted here quickly after that.

[itvt]: In your conversations with IPTV providers and set-top box companies, do you make any recommendations to them as to how to design the application that will surround your technology, in order to ensure that it will be more effective in helping people search?

Chandratillake: We definitely provide this kind of advice. However, most of the people we’re talking to see themselves as being the experts on the actual user experience.

[itvt]: Now your search engine doesn’t just search based on keywords, right?

Chandratillake: No. Our search engine is actually what’s known as a “conceptual” search engine. Once you’ve found content that you’re interested in, it’s very, very good at finding other related content. One thing that we’ve developed which we believe will be very popular is basically a profiling-type system. It uses the sort of things that you normally watch, or something that you’re in the process of watching, as a touchstone for narrowing down what you may want to watch next.

[itvt]: How does that work?

Chandratillake: We use a pattern-matching approach that works using information theory and Bayesian inference. It basically looks at the content you’re consuming, and tries to find things that seem to have a high level of overlap, conceptually, with what the content actually means. That’s an area which people seem to be interested in. It would potentially work well for profiling someone over time, and suggesting what they might want to watch.
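The "conceptual overlap" idea can be illustrated with a much simpler stand-in: bag-of-words cosine similarity between what you are watching and each candidate clip. The real engine uses information theory and Bayesian inference; this toy, with invented clip names, only shows the shape of the matching step:

```python
# Minimal sketch of conceptual-overlap scoring. Cosine similarity over
# word counts is a crude analogue of the Bayesian approach described in
# the interview; clip names and texts are illustrative.
import math
from collections import Counter

def similarity(text_a, text_b):
    """Cosine similarity between two clips' word-frequency vectors."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

watching = "mars rover landing footage from nasa"
candidates = {
    "clip_news": "nasa releases new mars rover landing video",
    "clip_cook": "how to bake sourdough bread at home",
}
# Recommend whatever overlaps most with the clip being watched.
best = max(candidates, key=lambda c: similarity(watching, candidates[c]))
print(best)  # → clip_news
```

Run over a viewing history instead of a single clip, the same scoring becomes a simple profile-based recommender.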

[itvt]: How does your technology attempt to understand what a piece of content—or a segment of that content—means?

Chandratillake: Well, audio and video are fuzzy content types. Even human beings are, at times, pretty bad at guessing what a particular word was or what an image is actually of. Think of the last time you were in a noisy bar or restaurant: the person next to you might have said something and your initial reaction was probably, “What?” or “Pardon?” This is because, given the noisy environment, it’s actually pretty difficult to decipher what’s being said. Moments later, however, your brain probably reversed its position as it figured out what it was that that person had said. The key phrase there is “figured out.” Our brains are really good at picking up on other hints—who you were talking to, what the preceding conversation was, body language, etc.—and at using those hints as a contextual framework to best-guess what was actually said: i.e., your brain works along the lines of, “Well, it was Susan who told me, we were discussing topic X and, therefore, she probably said Y.”

That’s exactly what the technology we use does. It listens from a pure physics point of view, converting the audio stream into phonemes, but then it applies the context of what the clip was about in general, in order to figure out which of many potential interpretations was probably—or, more accurately, probabilistically—the correct one. Another example: say the phrase, “recognize speech” out loud. Now say, “wreck a nice beach.” Out of context, or over a fuzzy phone line, they sound very, very similar. In reality, context usually means you never mix the two.
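The "recognize speech" versus "wreck a nice beach" disambiguation can be sketched by re-scoring acoustically plausible candidates against the clip's context. The word-overlap score below is a hypothetical stand-in for a real language model:

```python
# Sketch of context-based disambiguation: the acoustics are ambiguous,
# so each candidate transcript is re-scored against what the clip is
# about overall. Word overlap here stands in for a real language model.
def pick_transcript(candidates, context_words):
    """Choose the candidate sharing the most words with the clip's context."""
    context = {w.lower() for w in context_words}
    def score(candidate):
        return sum(1 for w in candidate.lower().split() if w in context)
    return max(candidates, key=score)

context = ["voice", "recognition", "software", "speech", "accuracy"]
heard = ["recognize speech", "wreck a nice beach"]
print(pick_transcript(heard, context))  # context favors "recognize speech"
```

Given a clip about beach holidays instead, the same scorer would flip toward the other reading, which is the probabilistic behavior the interview describes.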

You were asking earlier about our profiling system. Now, one interesting thing about it that I should mention is that it could be accessed from more than one device: at least one of the companies we’ve talked to is very interested in our ability to allow multiple types of inputs into our system. You see, most people don’t like typing in search terms on a cell phone and most don’t have a keyboard connected to their TV. The solution to this kind of thing can be a multi-device system: as a user, you could specify searches or interests through your computer—say on a Web site—allowing the system to create a profile of what you’re interested in and thereby to start to piece together a “channel,” if you will, that contains content that should be relevant to you and your interests. Once defined, that channel could be viewed through different devices: you could select it on your mobile device or flick over to it as you would a regular channel on your TV at home. You wouldn’t need to have a keyboard in front of you to enjoy extremely personalized programming.

[itvt]: I understand that Blinkx has made a number of content partnerships. Could you talk a little about those? Why does a search engine company make content partnerships?

Chandratillake: The content partnerships are designed to let us allow our users at to search through our partner-companies’ content offerings more effectively. If you go to and type in a search, as well as the content that we’ve spidered from the Web ourselves, you can access the content archives and the content systems of the various partners we’ve signed deals with. Obviously, different partners give us different amounts of content—different degrees of access to their content databases. Some give us a lot of access, others a lot less. While we believe we have the best video-spidering technology available, many sites simply do not publish all of the live links to video they have. Many will take links off a page once they get older. In this case, the video is still there but invisible to the Web and, therefore, to Web-spidering. In such cases, having a content deal means we often have deeper and broader access to content bases. We are entirely agnostic regarding where content lives and how it gets into our system—in fact, many of our users commented on how they’d like a home for personal content they had, so we opened up a submission system for them to use. The great thing about being agnostic on this is that we’ve been able to build a very large index without worrying too much about exactly how it all appears in the system.

We are also talking to a number of partners about powering content search for them on their own sites. In general, though, we aren’t really interested in the internal or enterprise market. There are already a number of players in that market, and they do a reasonable job of that sort of thing. We’re more focused on helping companies either build their own business-to-consumer sites or distribute their content through our site.

[itvt]: What is typically the business model for your content partnerships?

Chandratillake: We don’t pay for any of the content. Basically, it’s a non-financial agreement. Our partners benefit because they get some distribution, and they get the ability to search their own content in detail. We benefit because we get to increase the size of our content base, both from a breadth and depth point of view. That way, we improve the experience for our users.

[itvt]: In general—and over the long term—what is your business model?

Chandratillake: Today we don’t generate any revenue. But, long-term, we have two basic business models. One is a consumer-oriented model: we believe we can turn into a destination site. We can then very easily monetize it through various advertising-based approaches. One of those approaches is quite simply carrying advertising alongside the searches. Another is offering advertising in the TV content itself. Some of our smaller partners who don’t have advertising today would like to be able to do that kind of thing.  Our other basic business model, in addition to advertising, is licensing our technology to companies in the IPTV and set-top box spaces, so that it allows them to deliver the services they want to deliver.

[itvt]: You mentioned that one of the ways you want to monetize what you do is by offering advertising in the TV content itself. Could you explain how that will work?

Chandratillake: When we talk to people who want advertising inserted into their content, what we envision happening is that we will host their content, or at least copy content from their systems onto ours. We actually haven’t announced any deals of this kind yet, but we have just signed a deal with a news provider who wants us to do exactly this.

What will happen there is we’ll capture and encode in various different formats the content that they broadcast. We do all of that on our site. We’ll then play that content through a special player, which includes all of their branding. They’ll actually be able to link to that player from their own Web site. So if you were to go to their Web site, you’d be able to click on video links and launch the player.

Then, whenever we play their content—their newsclips—we’ll also insert an ad, either before or after the content. The revenue that’s generated from playing that ad will be shared between ourselves and the content provider.
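The revenue-share arithmetic is straightforward. The 50/50 split and CPM pricing below are hypothetical figures, since the interview does not state actual terms:

```python
# Toy revenue-share calculation for the pre/post-roll model described
# above. The 50/50 split and CPM pricing are assumptions for
# illustration, not disclosed deal terms.
def split_ad_revenue(plays, cpm, provider_share=0.5):
    """Return (provider_revenue, host_revenue) for `plays` ad impressions.

    cpm is the advertiser's price per thousand impressions.
    """
    total = plays / 1000 * cpm
    provider = round(total * provider_share, 2)
    return provider, round(total - provider, 2)

print(split_ad_revenue(plays=250_000, cpm=8.00))  # → (1000.0, 1000.0)
```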

[itvt]: Would it be fair to say that what you just described will be a transitional method of doing this? Or do you see yourselves involved in encoding, storage, and hosting for the foreseeable future?

Chandratillake: It’s probably a transitional thing. I think the encoding, storage and hosting process is essentially a bit of a commodity business, and that there are already a number of providers who can do it. I also don’t see why it wouldn’t be something that soon enough will be very easy for medium-to-large media companies to do themselves. On the other hand, tying that all together with search and proper distribution and advertising—that’s where it gets more complex, and that’s where I think we’ll focus.

[itvt]: Could you talk a little bit about where you see video search going over the coming years?

Chandratillake: We have an experimental subsite, which you can access from the main site. It has a very limited amount of content today. It only uses content that has been submitted to us from content-creator partners who’ve agreed to us hosting a copy of their content on our site. So it’s mainly amateur and videoblog-style content. However, it’s representative of where we think video search will eventually go.

If you go to the subsite, you start in the same way you would with the main site: you can type in a search, and we bring back content that matches it. However, as well as bringing back results, it actually splices together all the pieces of content that are relevant to that search of yours. We start playing it back to you in a linear manner. So it’s a bit like watching TV, except that rather than clicking onto a particular channel, you have, in effect, created a channel that fits your current interest.

You can then save that search or channel. Every time you go back to the site, it’s still there, and, if you click on it, it’ll then go and find whatever the latest is on the search topic you entered—very much as if you had created your own personalized TV channel. The technology takes care of all the details of going and finding which shows or which bits of shows you need to watch if you’re interested in that topic, and then splices it all together.
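The saved-search channel can be sketched as a stored query that is re-run on demand, with the hits spliced into one linear playlist. The in-memory index and field names below are illustrative only:

```python
# Minimal sketch of a "saved search as a channel": a stored query is
# re-run against the current index and the matching clips are spliced
# into a linear playlist. Index contents and fields are illustrative.
CLIP_INDEX = [
    {"title": "Shuttle launch replay", "transcript": "the shuttle launch went smoothly"},
    {"title": "Cooking with basil",    "transcript": "add basil at the end"},
    {"title": "Launch pad tour",       "transcript": "a tour of the launch pad"},
]

def build_channel(saved_query):
    """Re-run a saved query and splice the hits into a linear playlist."""
    q = saved_query.lower()
    return [clip["title"] for clip in CLIP_INDEX if q in clip["transcript"]]

# Every visit replays the saved search against whatever is newest in the index.
print(build_channel("launch"))  # → ['Shuttle launch replay', 'Launch pad tour']
```

Because the query, not the result list, is what gets saved, the "channel" automatically refreshes as new clips enter the index.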

You can see the potential of that kind of model. Admittedly, today it’s only on the Web and allows searches mainly of user-generated content. But it absolutely applies to the TV or any other living-room device, and it also applies to traditional media, not just to user-generated media.

It would also be very easy to insert small amounts of very relevant advertising into that stream, so that—just like with TV today, where you watch ads in between shows—you would see an ad in between, say, every two or three clips the system had found for you. Longer-term, I think that’s the model that’s going to work.

I think—perhaps because companies like Google and Yahoo! have attracted so much excitement and interest over the past few years—that people sometimes almost think of search as an end in itself. Rather, it should always be seen as a tool that helps us do something else that we want to do: if I have to catch a bus later to go somewhere, I might use a search service to find the timetable. But it’s not the search that I care about; what I care about is finding the timetable and getting to the place I’m going to.

Now with video content, traditionally we haven’t had to use something as mechanical as search to find what we want: we’ve been able to do it by changing channels, looking in a TV Guide magazine, or whatever. But the only reason it was so easy was because the number of choices we had was very small: as the amount of available content grows exponentially, the importance of video search increases. Nevertheless, it’s important to remember that it shouldn’t be an end in itself, and that it should instead be part of a bigger experience. That experience shouldn’t be about me having to start every evening of TV watching by having to enter data into a search box on my TV set. Instead, the system should know what I’ve watched recently and know what I regularly watch, and suggest other programs I might care about. Now, obviously search is hugely important to this kind of experience—it’s key to the whole thing. But I think that, increasingly, it will be something that we don’t really think of as being key, because it’s going to be something that takes place in the background. Of course, when we’re looking for something very specific, we’ll be able to override the system and conduct the searches ourselves—manually, if you will. But, most of the time, the system will work on its own, and the end-user will be confident that it’s doing a good job.

[itvt]: What sorts of trends are you seeing in terms of consumer usage of your service and in terms of broadband video content in general?

Chandratillake: It’s hard to be statistically significant about all of this, because we are a fairly young experiment. We’ve been live for about a year, now. However, in that period, we’ve seen steady and strong growth across every type of content—every type of search. Frankly, I think people are only beginning to realize this kind of thing exists.

I can talk about some of the trends we see that are more distinct. There’s definitely a lot of interest in news content. That seems to be one of the biggest—if not the single-biggest—areas of interest. It fits the on-demand aspect of what we do very, very well. We also get a lot of interest in other timely things, such as sports events, new shows that are coming out, or whatever. We always get huge spikes around those kinds of things.

The other really interesting trend we’re seeing is on the content-production side: it’s the huge growth of user-generated content. When we started the company, we spidered the Web; then we launched our site and started going around talking to content owners. Within about five or six months—certainly by June or July of 2005—it became obvious that the fastest-growing category of content by far was user-generated content. At the time, it was mainly podcasts. But—pretty predictably—it’s now turning into videoblogs and other kinds of video content.

Now this is really interesting, because, while a lot of it is pretty amateur and probably not appealing to most people, some of it is surprisingly good. It’s difficult to predict what’s going to happen with this kind of thing—whether there’s going to be a market for that kind of niche content, and how it will live and how it will metastasize itself, and so on. Anyway, those are the big trends we’re seeing. Certainly, one thing that’s common between these trends that we’re seeing is that everything’s growing very, very fast.

[itvt]: How do end-users upload content to your service?

Chandratillake: On our Web site, there’s a link saying, “Submit content.” If you click on it, you can either tell us about content you have that’s already hosted elsewhere, or, if you don’t have anywhere to host it, you can host it on our system. We do this basically just to assist people in getting their content out there. There are lots of people who seem to want to create stuff, but don’t really have anywhere to put it. Adding this into our system is a fairly minor thing.

[itvt]: What, if any, is the financial relationship between Blinkx and the end-users who upload content to your site?

Chandratillake: Where we have people uploading their content to our system, today it’s all free. No-one generates any revenue from it. You can use our system as much as you want, but you can’t charge for it right now. We don’t charge for it or put ads on it, either.

What we will introduce in the very short term is basically a program by which people who upload their content to our system will be able to offer to have ads appear in their content. If ads do appear in their content, they will share in the revenue that generates. So, if you are a videoblogger and you create something once a week and you think you have quite a few viewers and you’d like to monetize that, but you aren’t big enough to actually go and find the advertiser yourself, then you’d sign onto our system and we’d not only take care of all of the hosting and distribution and so on of your content, but we’d also find ads for your content. All you’d have to do is create your content, and any revenue we generate from ads in your content would be shared with you, the content creator.

[itvt]: So you don’t have any plans to charge some sort of monthly fee for hosting people’s content?

Chandratillake: No. We really think it’s going to be monetizable beyond that. The cost of doing that sort of thing at scale is such a commodity that we think we can support it through advertising.

[itvt]: Are you also interested in providing video search services to the wireless mobile industry?

Chandratillake: Yes. We’ve discussed that with some of the carriers. In general, the mobile phone carriers seem more interested in having a smaller, more focused content base. So search is less of an issue, because their customers have fewer content options. But really, to the extent that they are interested, it’s a very similar situation to IPTV: they are interested in us providing some kind of search application that would be hosted centrally, and then some way for their customers to search from the individual clients. That would be pretty straightforward to implement. All the mobile phones that would need that kind of functionality have built-in browsers and all of our technology is accessible over HTTP. So it’s a straightforward process for us to integrate into that kind of system.
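Because the engine is reachable over HTTP, a thin mobile client only needs to build a query URL and render the response. The host, path, and parameter names below are hypothetical, not a documented API:

```python
# Sketch of the thin-client side of a centrally hosted search service.
# The endpoint host, path, and parameter names are hypothetical
# assumptions, not a real, documented API.
from urllib.parse import urlencode

def build_search_url(query, max_results=10):
    """Compose the search request a browser-equipped handset would send."""
    params = urlencode({"q": query, "limit": max_results})
    return f"http://search.example.net/video?{params}"

print(build_search_url("world cup highlights"))
# → http://search.example.net/video?q=world+cup+highlights&limit=10
```

All the heavy lifting (indexing, transcripts, ranking) stays at the headend; the handset just follows links in the returned page.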

[itvt]: You recently launched a service called To Go. Could you talk about that service and what it does?

Chandratillake: To Go is a service that allows users to search video content on our site and then transfer the results to their iPod or other portable video player. Users can also save their search as a channel, which is then updated automatically and placed on their device.

[itvt]: How is Blinkx funded, and what is your exit strategy? Do you envision yourselves being bought out or expanding as a standalone company?

Chandratillake: We’re still funded by angels. We’re about a year-and-a-half old. We raised some angel funding prior to launch, and we’ve taken some more money from that group of angel investors since then. It’s been relatively small amounts of funding, but it’s worked very well for us, so far. We haven’t really spent any money on marketing at this stage, so…

[itvt]: But don’t you have a huge billboard on Highway 101 North in San Francisco?

Chandratillake: Yes, we do. It worked very well for us, in terms of setting up our team, because it helped let people in Silicon Valley know what we were doing. It actually helped hugely. But, apart from that, we haven’t really done any marketing.

Exit strategy-wise, we aren’t really looking for a quick exit. Of course, we talk to lots of people about that sort of thing. Everyone has their price, as it were. But we actually think that so far, video search has been implemented pretty badly on the Web, and that it’s also very much needed in the IPTV and in the cable environment. And we believe that there aren’t many companies that can really conquer the problem the way that we can. So, while we’ve generated a certain amount of value and presence so far, we think that it’s still early days. We can still build a lot more into this, and that’s what we’ll be doing in the short-term at least.

[itvt]: What do you feel are the implications for Blinkx of AOL’s acquisition of Truveo?

Chandratillake: Really none that are significant. Regardless of relatively simplistic “video search” pigeon-holing, I believe Truveo and blinkx are trying to do fundamentally different things. The deal demonstrates the interest in and the importance of that broad “video search” category, but we’re still very focused on building what we believe will be a whole new way of splicing the precision and user-driven nature of the Internet with the compelling visual experience of television.

[itvt]: What kinds of announcements should we expect to hear from Blinkx over the coming months?

Chandratillake: Our content team has some really interesting partners coming on board, so watch out for those, and watch out for blinkx appearing in places and on devices where you hadn’t thought it would.