Archive

Ideas

You beat Google by coming up with a method for organising the world’s information that is better than the way Google does it.

They find things for you by using an equation to estimate whether a specific page is a good source of information on a subject. They look at the words on the page, the tags in the header and on images, and take special account of the sites elsewhere on the internet that link to the page in question. To that they add a lot of technical knowledge about how to examine millions of new and changed pages an hour.

How could that search be improved? There are two areas that could do with upgrading: a better way of judging whether a page delivers the correct information, and a simple way of understanding why a person is searching for something. I’ll leave my suggestion for solving the ‘why’ question to a later post; what better ways can we find to analyse web content so that the correct page can be served up for every query?

It’s time for the Web 3.0 buzzphrase section. With the semantic web, you use humans to judge the authority of content: they supply the meanings to the internet. For the first few weeks, the people who take up the challenge will tag and comment away on web content. They might use tools to review tags they’ve already assigned to content on Flickr, YouTube and blogs. Once reviewed, an individual could claim authorship of the tags and comments. They could also register agreement with other human-generated labelling.

You might ask: What if different people use different criteria to assign meaning to internet content? The Web 3.0 way will be to let the social media collective judge. Members of the community will trust the tags of the people they agree with. After a while, top 50 charts will appear where you’ll see ‘search stars’ rising up the charts as they become well-known. The chart will become a self-fulfilling prophecy as new search users will first turn to the stars near the top of the chart because ‘they probably know what they’re talking about.’

A list of the top contributors to the Final Cut Pro support forum at Apple
It is similar to the kudos given to Twitterers who have tens of thousands of followers. If you read forums, you might give more attention to comments written by users who have posted a great deal. On Apple’s Final Cut Pro support forum, Shane Ross has posted over 17,000 times. He and the others on the list don’t work for Apple. They want to help others, learn from fellow Final Cut users, and perhaps gain some reflected glory in the community for their work here. Maybe Shane has even picked up consulting work through his free contributions.

So, how are these search stars going to label, tag and comment on web content? Have a look at yesterday’s post, where I describe an extension to the link tag in HTML 5. If that could be set up as a bidirectional link, where you would be able to see all the overlays on the internet for a specific piece of content, then Google might start worrying…

Matt Davis suggested…

An open source subtitle plugin that allows in-sync tweet-style text on ANY non-text media.

Of course I can’t just link to this idea, I’m supposed to add value…

Commentary on the quality of the books available in a local library in the 60s by Orton and Halliwell
Back in the sixties, the writers Joe Orton and Kenneth Halliwell first became known for the prank of defacing books from their local public library.

In the 1970s audiences started partici… pating during midnight screenings of The Rocky Horror Picture Show.

I first heard about Hypertext back in 1986 from Peter Brown. He pointed out that every time academics quote text from somewhere else, a link should appear that will take you to the document from which the quote comes.

The silhouette of the MST3K commentary team
Not long after that, Mystery Science Theater 3000 started in the US. It was a show featuring the silhouettes of people making ad-libbed funny comments over a series of terrible B-movies. This was followed by more shows featuring ‘unauthorised’ commentary on content, such as The Chart Show (to a small extent) in the UK, and Beavis and Butt-Head and Pop-Up Video in the US.

Videodiscs and latterly DVDs popularized commentary tracks and alternative subtitles. These days you can download fan-made commentaries and alternate subtitle tracks (used by those pirating movies into other languages).

Due to the academic uses hypertext was initially put to, I thought it was mainly used to comment on other people’s work to make attribution clearer. That use has fallen by the wayside. Maybe it’s time to revive the idea.

Wouldn’t it be interesting if people could upload commentary designed to be overlaid on top of other content – including video and audio? Instead of linking to a page, video or podcast, the content would appear as a new background for the current page. You would then use a layer on top to comment on, or add to, the content below. If a video or podcast played, the player would pass timecode information to the layer above so that comments could be displayed at specific times.
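As a rough sketch of how that timecode hand-off might work (the class and method names here are invented for illustration, not part of any HTML 5 proposal), the overlay layer could hold comments keyed by start and end times, and the player could push the current playback position to it:

```python
# Sketch: an overlay layer that receives timecode updates from a media
# player and decides which comments should be visible. All names here
# are hypothetical.

class CommentOverlay:
    def __init__(self):
        # Each comment is stored as (start_seconds, end_seconds, text).
        self.comments = []

    def add_comment(self, start, end, text):
        self.comments.append((start, end, text))

    def on_timecode(self, seconds):
        """Called by the player as playback advances; returns the
        comments that should be displayed at this moment."""
        return [text for start, end, text in self.comments
                if start <= seconds < end]

overlay = CommentOverlay()
overlay.add_comment(5.0, 12.0, "This shot was filmed in one take")
overlay.add_comment(10.0, 15.0, "Listen for the mistake in the score")

# At 11 seconds both comments overlap, so both are shown.
print(overlay.on_timecode(11.0))
```

The only contract between player and overlay is the stream of timecode updates, which is what makes this workable across video, audio and podcasts alike.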

This example shows a pop-up comment overlaid on top of a video on a YouTube page:
Showing how a page could use another page as a background

You could choose how your overlaid comment would look, and how you’d show which page element is being commented on:
A picture showing the darkened background

As well as text commentaries, you could also add picture, audio and video overlays to any content on any page.

This is just the beginning – a way of creating mashups using HTML 5.X…

Given that mobile phones have been irritating to use for years, I had a couple of ideas that might make them more appealing. They are based on two aspects of SMS texting that I liked: texts aren’t conversations and texts are cheaper.

An advantage of texting is that you don’t have to get into a conversation with someone. Texts are like telegrams: you send someone a piece of information you think they should receive. No conversation necessary. Given that I couldn’t be bothered to put in the time to get quick enough at texting, I liked the idea of being able to leave someone a message – even if their phone wasn’t going to voicemail. How about calling someone, but pressing a special digit on your phone that stops their phone ringing and puts you straight through to their voicemail? That way you can deliver the message without having a discussion: one of the advantages of texting. Twitter grew out of the idea of sharing one short message with groups of people without having a series of individual conversations.

The other main advantage of texting is price: it’s very cheap to send people texts. My other idea was to have a speech-to-text system that would convert my spoken message into a text to be sent. The more you used your phone, the more accurate the speech to text would get, especially as phone calls have very specific structures and vocabularies.

What if a mobile phone had an audio interface to Twitter? That means you could join in the conversation while you are on the move, either walking or driving (using a hands-free kit). Speech to text would convert your thoughts into Tweets; if you paused, it could give you a character count update. You could use simple voice commands to edit. Summarising software could suggest alternative ways of saying the same thing in 140 characters.

The other side of Twitter could also work as audio only. Imagine if each Twitter profile could also hold a phoneme database that audio-based Twitter software could use to simulate the voice of the person that tweeted.

In the coming years more services will be audio only, so maybe it’s best to start with the simplest, such as Twitter.

13 January followup: A related service has been launched. Jott works by calling a special telephone number – the processing isn’t done on the phone, even though they do have a BlackBerry application (there is an iPhone application too, but it is temporarily unavailable).

Either Microsoft is terrible at creating videos, or they have a good sense of an ironically bad video. Check out this submission. I think they know what they’re doing:

Play Microsoft Songsmith demo video

Their newly announced product, available from Microsoft Research, automatically generates accompanying music for any words you sing into your computer. You can choose the key and musical style, then go back and change the chord progression if needed. $40 gets you a downloaded application that might be fun.

Songsmith gives me another idea.

smFrontczak: Imagine an application you tell a story to, which adds sound effects, ambience and even music to your speech, turning your story into a higher production-value podcast or radio play. This would happen using voice recognition to understand the story, in conjunction with a large sound effects library. smFrontczak could also enhance radio plays: characters could speak selected stage directions, which would then be edited out of the final version.

George: The cathedral's got a mosaic...
Connor: Hurry, it's almost noon!
The children leg it across the bustling
market square and burst into
the murky cathedral.
Mark: The sun! The sun!
The cathedral clock begins to strike noon
(continues over the following)
George: The beam's pointing right at...
...the Blue Knight's shield!

If actors (or a talented individual using different voices) read this script out, smFrontczak could interpret it by fading out the busy market square to the left, fading in the cathedral from the left, and changing the ambience applied to the voices to make them sound as if they are in an echoing hall. Then church bell strikes could commence and continue (at reduced volume) during the scene.
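A minimal sketch of the kind of interpretation smFrontczak might perform, with crude keyword matching standing in for real speech and language understanding (every rule name and event field below is invented for illustration):

```python
# Sketch: turning stage directions into audio events, as an imaginary
# smFrontczak might. A real system would use speech recognition and
# language understanding rather than string matching.

# Simple keyword -> audio-event rules, all hypothetical.
RULES = [
    ("market square", {"ambience": "busy_market", "action": "fade_out"}),
    ("cathedral", {"ambience": "large_hall_reverb", "action": "fade_in"}),
    ("clock begins to strike", {"effect": "church_bells", "action": "start"}),
]

def interpret(direction):
    """Return the audio events triggered by one line of stage direction."""
    events = []
    for keyword, event in RULES:
        if keyword in direction.lower():
            events.append(event)
    return events

# One matching rule fires for this direction: fade out the market ambience.
print(interpret("The children leg it across the bustling market square"))
```

The point of the sketch is the shape of the pipeline: script text in, a stream of timed mixing decisions out, which an audio engine could then render.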

When films are in preproduction, teams are brought in for previsualisation: storyboarding and animation software (sometimes in 3D) is used to plan scenes and guide many departments. Perhaps smFrontczak could be used to support the sale of a script in the first place – a tool to turn actors’ readings into dynamic radio plays…

This is the next step on the way to the day when someone will invent a real Holophoner.

The Holophoner is an imaginary device from Futurama, the animated series set 991 years in the future, made by some of the people behind The Simpsons. It is a musical instrument that uses holographic technology to create 3D operas to accompany the music.

I hope it’ll be a few decades until a real Holophoner appears. In a way, the technology and media industries are paving the way for the day when an individual will be able to compose and perform a complete sensory experience and share it with an audience.

What will audiences need imagination for then…?

This is an example of what I’ve been talking about. A twitter thought leads to a blog post… or two.

When I woke, my guest was watching TV. Part of the show was an interview with a French person, and his voice was dubbed. As I know a little French and my friend knows it fluently, it was a pity that we couldn’t hear what he had to say, with subtitles to read if needed.

It seems that dubbing foreign speech has become much more common than subtitling in the last 10 years. This is true even of the most highbrow TV news programmes. In 1995 they would have subtitled non-English speech; now they hardly ever do.

There are two explanations: either TV producers and news editors think audiences are put off by subtitles, or subtitling technology hasn’t kept up with the world of simpler post production in the way dubbing has.

I’d like to assume the latter for the moment. What is it about subtitling that makes it more difficult to organise than dubbing? It may be that it isn’t too difficult to get a simultaneous translator to translate and speak at the same time, whereas producing well-written and well-timed subtitles is hard.

For live TV, there is an interesting solution. Subtitle describers are employed to repeat what people are saying, and what sounds can be heard, into a speech recognition package, which produces subtitles for those who turn them on using their remote controls. All non-satellite TV channels subtitle 97% of their shows, and this is how they provide the service.

This shows that editing software should not treat subtitling as an effect laid on top of video at some point, or only implemented when making a DVD. Maybe it is time that script, music and sound effect information was associated directly with audio clips, so that a scratch subtitle track could be generated automatically. Professional summarisers and designers would then clean it up before the production is delivered online, on DVD or via broadcast.
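As a sketch of what generating that scratch track could look like, here is how script text already attached to clips might be turned into a SubRip (.srt) file. The clip metadata format – a list of (start, end, text) entries – is an assumption for illustration:

```python
# Sketch: generating a scratch subtitle track in SubRip (.srt) format
# from script text associated with audio clips. The clip metadata
# format here is hypothetical.

def to_timestamp(seconds):
    """Format seconds as an SRT timestamp, e.g. 75.5 -> 00:01:15,500."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def scratch_srt(clips):
    """clips: list of (start_seconds, end_seconds, script_text) tuples."""
    entries = []
    for i, (start, end, text) in enumerate(clips, 1):
        entries.append(
            f"{i}\n{to_timestamp(start)} --> {to_timestamp(end)}\n{text}\n")
    return "\n".join(entries)

print(scratch_srt([(0.0, 2.5, "The sun! The sun!"),
                   (2.5, 6.0, "The beam's pointing right at...")]))
```

A track like this would be rough – timed to whole clips rather than lines of dialogue – which is exactly why summarisers and designers would still clean it up afterwards.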

A thought leads to a blog post… or two

So the original thought was: “With people speaking foreign languages, over the last twenty years the technology of subtitling has fallen behind dubbing. A pity.” That’s what I posted to Twitter 11 hours ago, before going out and having a great day in London. I didn’t think about it again until I got back a short time ago and saw that Matt Davis had written a blog post partially inspired by my tweet.

Matt Davis' Twitter profile picture
His idea is much bigger than mine – maybe leading to a whole new medium for a social media platform to share and discuss. That’s why you should check it out (and follow Matt on Twitter, or subscribe to his feed, if you like).

I then decided to write a post about Twitter and blogs, which meant turning my initial thought into (almost) an idea.

This is how Twitter and blogs can work.

Most people would never know it, but for the last few hours there’s been a big debate on the future of Twitter’s search function. Not a big deal, but it strikes at the heart of how different people use the same social media platforms in different ways.

The story starts with a blog post by Loïc Le Meur: ‘Twitter: We Need Search By Authority’

We need filtering and search by authority. We’re not equal on Twitter, as we’re not equal on blogs and on the web. I am not saying someone who has more followers than yourself matters more, but what he says has a tendency to spread much faster. Comments about your brand or yourself coming from @techcrunch with 36000 followers are not equal than someone with 100 followers.

This was followed by responses from some people you may or may not have heard of…

Bob Warfield:

This is a seriously good way to make Twitter search Fail big time. No better way to amplify the Echo Chamber. Is that all Twitter is? The Follower haves talking while the Follower have-nots listen? Have nots are to be seen and not heard? “Let’s move the riff raff aside, this is our conversation,” seems to be the message.

Robert Scoble:

Bob Warfield has it all right: Loic Le Meur’s call for authority-based Twitter searches is all wrong.
What is Loic’s idea? To let you do Twitter searches with results ranked according to number of followers.
You’d think I’d be all over that idea, right? After all I have a lot more followers than Loic or Arrington has.
But you’d be wrong. Ranking by # of followers is a stupid idea. Dave Winer agrees. Mike Arrington, on the other hand, plays the wrong side of the field by backing Loic’s dumb idea.

Michael Arrington:

For the record, I agree with Loic. Being able to filter search results, if you choose, by the number of followers a user has makes sense. Without it, you have no way of knowing which voices are louder and making a bigger impact. It’s a way to make sense of a query when thousands or tens of thousands of results are returned.

It looks like some of those that care about the future of Twitter think that this idea will relegate Twitter to an online version of The National Enquirer (or the Weekly World News).

Different Twitters for different folks

For some, Twitter is a network for sharing status: ‘I’m off to the pub for a while,’ ‘Great weather up here in Hertfordshire!’ Others use it for personal branding or PR: ‘Why does interactive TV assume a single viewer? Why not prepare for a remote per person?’ – @alex4d, ‘My Interview of the Year: http://tinyurl.com/7wac9q Thanks @timoreilly!’ – @Scobleizer. Those are two of the reasons for wanting people to follow you – to keep them updated on what’s going on in your life, or to influence, inspire or impress a wider network.

Twitter is also used to follow others for different reasons at different points in the day, depending on mood and status (‘Just mooching around on the computer to fill time’ – ‘Researching the use of social media platforms in theatre’).

The fact that something as simple as putting your thoughts online can be used in so many different ways has made Twitter very popular. As the number of users rises, these conflicting uses might cause problems. That is why there is this kind of debate about something as simple as search – it might restrict Twitter, or direct it in directions some don’t want it to go.

A Twitterer with fewer followers weighs in with a point

Twitter search is almost at the stage internet search was at when Digital introduced AltaVista:

The AltaVista home page in 1996

AltaVista became the main page used for search because its host computers could index the internet more quickly than anyone else’s. It was the most up-to-date search. The order in which results were delivered was based on how frequently the searched-for word appeared on a page.

Eventually Google came along and worked out a method for producing the right result quickly. Their PageRank algorithm used various statistics to calculate the ‘authority’ of the organisation that created the page on which the search text is found. As the years have gone by, the art of SEO – Search Engine Optimisation – has been about site designers using web content to establish the authority of the websites they manage.
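The core of the PageRank idea can be sketched in a few lines: a page’s authority depends on the authority of the pages linking to it, computed by repeatedly redistributing rank along links. This toy version leaves out everything Google adds in practice:

```python
# Sketch: the idea behind PageRank reduced to a toy power iteration.
# Pages gain authority when authoritative pages link to them.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal authority
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                     # dangling page: share evenly
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:                                # split rank across out-links
                for target in outgoing:
                    new[target] += damping * rank[page] / len(outgoing)
        rank = new
    return rank

# c is linked to by both a and b, so it ends up with the most authority.
ranks = pagerank({"a": ["c"], "b": ["c"], "c": ["a"]})
print(max(ranks, key=ranks.get))  # -> c
```

The key property is that a link from a high-authority page is worth more than one from an obscure page – which is exactly the property a Twitter equivalent would want to replicate.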

I suggest that Twitter’s search function, or even its home feed filtering system, could use a similar approach: show me Twitterers with ‘authority’ – but this authority need not depend only on the number of followers, because who knows why those people follow that person. The number of people followed could be important. What about the number of direct messages, or messages responded to, or retweets, or the number of links posted that no one else has posted but that turn out to be very popular? You could also take into account frequency of posting, the amount of dialogue bouncing between two people, or even the frequency of updates to the page linked to in their profile.
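A ‘roll your own’ Twitterrank could be as simple as a weighted sum over whichever signals you care about. The signal names and weights below are invented for illustration; a real ranking would need real data and tuning:

```python
# Sketch: a custom 'Twitterrank' as a weighted sum of per-user signals.
# The signals and weights here are invented for illustration.

def twitterrank(user, weights):
    """Score a user (a dict of raw signals) with a custom weight set."""
    return sum(weight * user.get(signal, 0)
               for signal, weight in weights.items())

# One person's criteria: value conversation over raw follower count.
conversational = {"followers": 0.001, "replies": 1.0, "retweeted": 0.5}

users = [
    {"name": "broadcaster", "followers": 36000, "replies": 2, "retweeted": 40},
    {"name": "chatter", "followers": 300, "replies": 80, "retweeted": 10},
]

# Under these weights the low-follower, high-dialogue user ranks first.
ranked = sorted(users, key=lambda u: twitterrank(u, conversational),
                reverse=True)
print([u["name"] for u in ranked])
```

Swapping in a different weight dictionary gives a different ranking from the same data, which is the whole point of letting each searcher roll their own algorithm.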

Some see the battle between the search engines and the SEO community as an endless arms race, where Google and others use ‘security by obscurity’ to hide the methods they use to rank search results. This battle may move to Twitter search (once Twitter starts mattering). However, a new front could be avoided if Twitter searchers could ‘roll their own’ Twitterrank algorithms.

Do you want to follow me?

What considerations do you weigh up when deciding whether to follow someone who has followed you? These are the considerations you might want included in your Twitterrank method. I look at the subject and frequency of their recent tweets, combined with a look at the page they link to in their profile: is it updated regularly with content I’m interested in? I consider my Twitter feed a series of thoughts, some of which coalesce into ideas expressed on my blog. If a follower seems to be using Twitter and their site in the same way as me, I’m more likely to follow them. Sometimes it would be useful for Twitter to be able to rank search results, or filter the main feed, using these criteria. However, depending on how I happen to be using Twitter, I might want to use different search or filter ranking techniques.

If other people could get useful results with a specific Twitterrank algorithm of mine, it would be useful if they could use it too. They could take a copy as it is, or subscribe to it if I feel the ranking method will need updating over time.

I guess Google defines a successful search ranking as one where the user doesn’t click through to the second page of results. Searching and filtering in Twitter is a little more complex: it depends on why the person is searching or filtering. Are they removing the clutter of thousands of tweets, or refining their feed to focus on a specific debate? Only by trying different ranking systems will we discover which models are useful. We could then have different systems for different people. That would make life more interesting for the ‘Twitter Search Optimisation’ community.

A single method handed down from on high seems very Google and old-fashioned. I think a roll-your-own twitterrank system seems much more ‘2009.’ What do you think?

On the BBC iPlayer, as well as watching TV from recent days or weeks, you can also listen to the output of national and local radio stations. Most music shows can only be heard for seven days, and the podcast versions cannot include any commercial music. For example, I can listen to the Adam and Joe show on BBC 6 Music in full (three hours long, in a format relatively difficult to keep on your computer) or the podcast highlights on iTunes (MP3).

Imagine if audio (and video) broadcasts and podcasts were combinations of the broadcaster’s playlist and local playlists. If music cannot be licensed for more than seven days, the podcast-playing application could insert music from the playlists on the listener’s device. If tags stating the title and artist were added at the times when music is played, the track could play from the local device if present. If not, similar music could play instead. In Apple’s iTunes 8, the Genius system is designed to create playlists of similar music; that system could find replacements in a listener’s library to follow the mood of the show.
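A sketch of how the substitution might work, assuming each show segment carries title and artist tags. The segment and library formats below are hypothetical, and a real system would need a Genius-style similarity fallback rather than simply reverting to the broadcast audio:

```python
# Sketch: replacing a broadcast's music segments with tracks from the
# listener's own library. The tag format and library lookup here are
# hypothetical.

def build_playlist(show_segments, local_library):
    """show_segments: dicts with 'kind' ('speech' or 'music') and, for
    music, 'title' and 'artist' tags. Returns (source, item) pairs."""
    playlist = []
    for seg in show_segments:
        if seg["kind"] == "speech":
            # Speech always comes from the broadcast.
            playlist.append(("broadcast", seg["description"]))
        else:
            key = (seg["title"], seg["artist"])
            if key in local_library:
                # The exact track is on the device: play it locally.
                playlist.append(("local", local_library[key]))
            else:
                # A Genius-style system could pick similar local music;
                # this sketch just falls back to the broadcast audio.
                playlist.append(("broadcast", seg["title"]))
    return playlist

library = {("Some Song", "Some Artist"): "some-song.mp3"}
segments = [
    {"kind": "speech", "description": "DJ chat"},
    {"kind": "music", "title": "Some Song", "artist": "Some Artist"},
]
print(build_playlist(segments, library))
```

The licensing point falls out naturally: only the segments the listener already owns are swapped in, so the podcast file itself never needs to contain commercial music.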

If you were listening to a combined radio broadcast/local playlist ‘live,’ there could be a user-interface control showing how much of the music comes from the radio station playlist and how much is local:

A mockup of music choice preferences

This could be the way future radio stations work: each listener could configure shows the way they want. They could choose how much control they have over the music, and whether they hear news, weather or traffic reports. Different shows might have different settings depending on the music choices or the kind of things the DJs say between the tracks.

Listen to the most recent Adam and Joe radio show using this RealPlayer location. Listen to the highlights podcast via iTunes.

It would be interesting to imagine a similar system for visual content.

What if my visual feed was similar to my audio feed – the way music is played on radio. What if media organisations had playlists that I subscribed to?

Maybe the visual channel I tune into will be made up of four-to-five-minute vignettes. Longer than traditional previews, they’d be excerpts from dramas, comedy shows and documentaries: entertaining and stimulating on their own, but with the option to wait for the next ‘track’ to come along, or to choose to see the rest of the play, film, documentary, documentary series or comedy show. Like singles on a radio station, I would expect high-, medium- and low-rotation pieces. They could be designed to be re-watched.

Movies, TV shows and documentaries usually include ‘set-pieces.’ These are the bits you talk about afterwards without reference to the plot, the sections excerpted in the better review shows: ‘Remember the bit when they were trapped in the trash compactor and the monster with the one eye attacked them?!’ ‘What about that bit when he had to stab her in the heart with the adrenaline needle! Wow.’ ‘I didn’t cry when she told that story about how her owner forgot about her when she grew up and left her to be sold by the side of the road… it just got a little dusty in the room!’ These are the set-pieces that could be included in a visual station feed. Each could have a simple intro to explain the stakes for those who hadn’t seen the source film or show. If you registered that you had seen the source, the big moments from the second half could also be included in the playlist.

Not everyone will want to pay the full £6 for a two-hour film, or £18 for a complete 24-part series. They might want to pay a little less for a set-piece or two – just as people today pick the best tracks from an album rather than buying the whole work.

Imagine if short films and animations could be included in the mix. What would a TV channel be then – a filter to stop you being overwhelmed by the massive flow of content out there? What about shared experience?

We’ll see.

Some radio stations are different from others. They can be divided into two groups: entertaining and stimulating. On the entertaining stations, I like the vast majority of the tracks I hear. On the stimulating stations, things are less certain: the DJs care about the music more than the musicians do. They are people who are still DJs (rather than the industry term ‘presenters’), who know that who they are matters less than the music they choose to play.

I’m not always in the mood for the stimulating choice. Sometimes I even want to have music on that I can ignore at some level. But sometimes I want to hear stuff that I might not like. Then there is a better chance that I will hear something else – one track later – that I would never have heard before. If I followed my demographic and listened to a radio station that played music from my youth, I’d find that entertaining. Just not very stimulating.

That’s radio. How does that translate to the visual medium…? And to go in another direction: how can I integrate my own media with what’s broadcast from elsewhere?

I don’t know if work for freelance web designers is starting to dry up, but here’s a new sideline: take an existing Flash-based website and use the same text, pictures and video to make a version for mobile phone users.

Many people believe that Apple will not allow Adobe’s Flash on the iPhone. The PR reason is that the programming language behind Flash could cause security and technical problems: an incompetent or malicious programmer could make iPhones go wrong. As phones have become able to run more and more third-party applications, they have become less reliable.

The ‘Steve Jobs’ reason might be that Adobe have never spent enough time making Flash work well on non-Windows-based computers. People complain that it is a waste of processor and battery power when a Flash movie plays on a mobile phone. On Macs, simple Flash operations take over 70% of CPU resources.

If the iPhone becomes more popular, it might be a good idea for those sites that are centred around Flash to have an alternative non-Flash version available for those browsing on the run.

A few weeks ago I was at a social event organised by Stellar Network. After chatting with various people for a while, I thought it wouldn’t be too pretentious to get my iPhone out. We had been talking about post-production toys such as the new Red camera system, so I hoped that browsing the web in public would be OK. It was in response to a conversation about web design. One person there was Melissa Byers. As a cinematographer with foresight, she grabbed a great domain: camerawoman.com:

Melissa Byers' site

I tried visiting her website, but my iPhone couldn’t browse her Flash-based site.

I got an email from Melissa this afternoon. I remembered our conversation and visited her site for the first time today.

That’s why it might be a good idea to have an alternative version of your Flash site for people like me with more money than sense – iPhone buyers!

You could even learn how to turn Flash sites into iPhone web applications. Before July 2008, that was the only way to add functionality to the iPhone. Find out about iPhone WebApps at http://www.apple.com/webapps/