For the last thirty years people have been trying to come up with clever ways to make TV interactive. In the early 80s, we had Teletext services. We later had phone votes. These days digital TV users know that they can get more content – such as games, documentaries and commentary tracks – by ‘pressing the red button’, whichever method they are using to watch TV.

On the other hand, more devices can be modified to act as remote controls for TVs. Eventually all phones will be able to interact with nearby TVs. They’ll start by being able to switch channels and record to a DVR. Soon TVs (and computers) will accept text and multi-touch input from phones and remotes.

Maybe it is time for those designing the future of TV to take into account the essential nature of watching content on TV. What makes it different from going to the movies? Or watching DVDs and downloaded movies on computers and phones? The fact that you watch TV with one or more people that you usually know well. Phones and computers are usually used by one person at a time (unless the computer is being used as a TV replacement). When you are at the cinema, you may be with hundreds of people, but you know no-one but those you came with and you don’t spend time during the movie interacting with anyone (unless your primary reason isn’t to watch the film…).

Given that before the invention of the remote, anyone who walked over to the TV had control, maybe it’s time to plan for TV broadcasts where each person watching can control and interact with TV content. Instead of using children as proxy remotes, as I once was, the person who usually holds the remote (still typically the man) should be encouraged to share with others.

The future could be made of every individual consuming media on their own terms – on their own. But it’s the interaction between those watching TV together that makes it special. If TV improves and changes those interactions, it will keep groups of people together for a long time to come.

Before the iPhone 3G phone came out in July, people created web applications and websites designed for the iPhone. It’s still possible.

In fact, if you have a blog, you can use VenueM.com to generate a version of your blog designed to be easier to read on an iPhone or Android-powered phone.

All you need is your RSS feed (mine was ‘https://alex4d.wordpress.com/feed/’) and a couple of icons. The first will be displayed on the phone when people visit the iPhone/Android version of your blog:

hopey-100
Use a version of your blog logo that is 100 pixels square.

The second is set up so that if someone adds the phone version of your blog to their list of applications (using the ‘+’ button in Safari for example), a 57 by 57 pixel version of your blog logo will appear as the icon:
hopey-57

If your main blog host doesn’t allow advertising, you have the option to add advertising to your mobile blog if you sign up for a pro account (for $3.95 a month).

…after a while the conversation ran a little dry, so I asked them what they did for a living.

The first one said “I’m a problem solver.” I understood, but didn’t know what kind of problems they fixed. “I work with a group of people who wait for calls for help of various kinds. We’re ready to sort things out. Say, for example, your cat is stuck in a tree. We can get it down for you. If you’re locked out, we’ve got specialist equipment to get you into your home. We also rescue people from fires.” I was surprised by that last sentence. “So you’re a firefighter then?” “Sure, but we do a whole lot more than that!”

I turned to the second person. They described themselves as an ‘event manager.’ I’d heard of events, but I thought that term applied to any effect that has a cause. Series of events are how humans experience time. ‘Event’ seemed to be a very general term. I got some clarification: “We do parties, product launches and press briefings. We organise conferences for thousands of people all over the world. If an organisation wants to start or maintain a community, they come to us.” I was surprised by the last two sentences. “So you’re a conference organiser then?” “Sure, but we do a whole lot more than that!”

I worked a while for a company that organised conferences. After a while I learned that it was part of the ‘event industry.’ It seems odd to me that this industry doesn’t describe itself in terms of what it spends most of its time doing: organising conferences. This seems to be a symptom of worrying that the definition isn’t interesting enough for people outside the business. “If I say that I organise or work on conferences, they’d think it a bit sad and limiting. I’ll say that I’m in the ‘Events Industry’ – that’ll sound better. It implies variety; it’s less embarrassing.”

It isn’t a good sign when a group of people don’t define their work in terms of what they spend most of their time doing. Maybe they think that conferences are a waste of time. Attendee recall of the content of most presentations is almost non-existent. Gimmicks and office politics rule the roost. Few dare measure whether people’s actions change after a conference. They suspect that the only real effect their conferences have comes from getting people from different parts of the world to have a drink together. The rest is window-dressing.

It’s odd, because most people outside ‘events’ see conferences as exotic, worthwhile, informative, a sign that the organisation cares, a break from the norm and a way of marking special times in their lives (“Remember that time in 99 when we were in Florida the week that Star Wars Episode I came out? You did that alternative title sequence and opening bit… That was cool!” – a quote from an attendee at a tech conference).

With the costs of gathering large groups of people in a single place becoming prohibitive, event managers are going to have to come up with a new name for themselves. One that doesn’t require a list-like explanation every time they use it. Maybe it’s time that someone redefined the conference in terms of what it is supposed to do.

If it’s not a word that has become tired from over-use, maybe community should be in there somewhere. What do you think?

You beat Google by coming up with a method for organising the world’s information that is better than the way Google does it.

They find things for you by using an equation to guess whether a specific page is a good source of information on a subject. They look at the words on the page, the tags in the header and on images, and take special account of the sites elsewhere on the internet that link to the page in question. To this they add a lot of technical knowledge on how to examine millions of new and changed pages an hour.
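To make the two signals concrete, here is a deliberately crude sketch of that kind of scoring: it combines how often the query terms appear on a page with how many other pages link to it. This is my own toy illustration, not Google’s actual algorithm, and the function name and weighting are invented for the example:

```python
import math

# Toy illustration (not Google's real ranking): combine an on-page
# signal (query terms appearing in the text) with an off-page signal
# (how many other pages link here).

def score_page(page_text, inbound_links, query):
    """Return a crude relevance score for one page.

    page_text     -- the visible words on the page
    inbound_links -- number of external pages linking to it
    query         -- the search phrase
    """
    words = page_text.lower().split()
    if not words:
        return 0.0
    # On-page signal: fraction of words matching the query terms.
    terms = query.lower().split()
    matches = sum(1 for w in words if w in terms)
    on_page = matches / len(words)
    # Off-page signal: inbound links, dampened with a log so one
    # hugely linked page doesn't drown out everything else.
    authority = math.log1p(inbound_links)
    return on_page * (1.0 + authority)
```

A page with no matching words scores zero however many links it has, while two pages with the same text are separated by their inbound links – a rough stand-in for the “sites elsewhere on the internet” signal described above.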

How could that search be improved? There are two areas that could do with being upgraded: a better way of judging whether a page delivers the correct information, and a simple way to understand why a person is searching for something. I’ll leave my suggestion of how to solve the why question for a later post; what better ways can we find to analyze web content so that the correct page can be served up to every query?

It’s time for the Web 3.0 buzzphrase section. With the semantic web, you use humans to judge the authority of content. They supply the meanings to the internet. For the first few weeks those people who take up the challenge will tag and comment away on web content. They might use tools to review tags they’ve already assigned to content on Flickr, YouTube and blogs. Once reviewed, an individual could claim authorship of the tags and comments. They could also register agreement with other human-generated labeling.

You might ask: What if different people use different criteria to assign meaning to internet content? The Web 3.0 way will be to let the social media collective judge. Members of the community will trust the tags of the people they agree with. After a while top 50 charts will appear where you’ll see ‘search stars’ rising up the charts as they become well-known. The chart will become a self-fulfilling prophecy as new search users will first turn to the stars near the top of the chart because ‘they probably know what they’re talking about.’

A list of the top contributors to the Final Cut Pro support forum at Apple
It is similar to the kudos given to Twitterers who have tens of thousands of followers. If you read forums, you might give more attention to comments written by users who have posted a great deal. On Apple’s Final Cut Pro support forum, Shane Ross has posted over 17,000 times. He and the others on the list don’t work for Apple. They want to help others, learn from fellow Final Cut users, and maybe some of them gain reflected glory in the community for their work there. Maybe Shane has got work as a consultant because of his free contributions there.

So, how are these search stars going to label, tag and comment on web content? Have a look at yesterday’s post, where I describe an extension to the tag in HTML 5. If that could be set up as a bidirectional link, where you would be able to see all the overlays on the internet for a specific piece of content, then Google might start worrying…

Matt Davis suggested…

An open source subtitle plugin that allows in-sync tweet-style text on ANY non-text media.

Of course I can’t just link to this idea, I’m supposed to add value…

Commentary on the quality of the books available in a local library in the 60s by Orton and Halliwell
Back in the sixties, writers Joe Orton and Kenneth Halliwell first became known for the prank of defacing books from their local public library.

In the 1970s audiences started participating during midnight screenings of The Rocky Horror Picture Show.

I first heard about Hypertext back in 1986 from Peter Brown. He pointed out that every time academics quote text from somewhere else, a link should appear that will take you to the document from which the quote comes.

The silhouette of the MST3K commentary team
Not long after that, Mystery Science Theater 3000 started in the US. It was a show featuring silhouettes of people making ad-libbed funny comments over a series of terrible B-movies. This was followed by more shows featuring ‘unauthorised’ commentary on content, such as The Chart Show (to a small extent) in the UK, and Beavis and Butthead and Pop-Up Video in the US.

Videodiscs and latterly DVDs popularized commentary tracks and alternative subtitles. These days you can download fan-made commentaries and alternate subtitle tracks (used by those translating pirated movies into other languages).

Due to the academic uses hypertext was initially put to, I thought it was mainly used to comment on other people’s work to make attribution clearer. That use has fallen by the wayside. Maybe it’s time to revive the idea.

Wouldn’t it be interesting if people could upload commentary that is designed to be overlaid on top of other content – including video and audio? Instead of linking to a page, video or podcast, the content would appear as a new background for the current page. You would then use a layer on top to comment or add to the content below. If a video or podcast played, the player would pass timecode information to the layer above so that comments could be displayed at specific times.
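The timecode hand-off could be as simple as the player reporting its current position and the overlay layer looking up which comments should be visible at that moment. A minimal sketch of that lookup – the `(start, end, text)` comment format and the sample track are my own invention for illustration, not any existing standard:

```python
# Hypothetical sketch: each overlay comment has a start and end time in
# seconds; the player passes its current timecode and the overlay shows
# whichever comments are active at that moment.

def active_comments(comments, current_time):
    """Return the comment texts that should be on screen now.

    comments     -- list of (start, end, text) tuples
    current_time -- player position in seconds
    """
    return [text for start, end, text in comments
            if start <= current_time < end]

# Invented example track: two comments, overlapping between 3s and 4s.
track = [
    (0.0, 4.0, "Watch the background extra on the left..."),
    (3.0, 8.0, "This shot took a lot of takes."),
]
```

Called on every timecode update from the player, this returns zero, one or several comments, so overlapping commentary layers fall out naturally.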

This example shows a pop-up comment overlaid on top of a video on a YouTube page:
Showing how a page could use another page as a background

You could choose how your overlaid comment would look, and how you’d show which page element is being commented on:
A picture showing the darkened background

As well as text commentaries, you could also add picture, audio and video overlays to any content on any page.

This is just the beginning – a way of creating mashups using HTML 5.X…

Mr. Gaskin writes in to the blog:

…wouldn’t it be great if we were able to *rotate* the mask-shape filter. That way I could selectively isolate diagonals or keyframe the mask-shape on a changing shape in the shot.

Having a look at this idea points up the problem: you can’t use FXScript to modify the values used in the controls. I’ve come up with a plugin that gives this sort of control, but there’s a limit to the user interface: you can set your eight points up where you want them to be, but if you want to rotate that matte – using keyframes, for example – the new positions of the points after rotation cannot be written back into the controls of the filter.

That means if you want to keyframe the rotation, scale and position of your matte to follow a specific feature in your clip and the shape of the feature changes, the points that you manipulate on screen won’t match the edge of your matte:
The matte cannot line up with the control points

In this case, the matte has been scaled and repositioned so that the edges don’t line up with the control points. The control points on screen define the shape of the matte; other controls describe the location, size and rotation of the matte. You can change the View Mode to ‘Wireframe’ to get better control of the shape by seeing how moving a point changes the line that defines the edge of the matte:
It is easier to edit in Wireframe View Mode

Here are the controls:
Controls for Alex4D 8-point Matte plugin

Download Alex4D 8-Point Matte
Download: Alex4D 8-Point Matte.

Copy the ‘Alex4D 8-Point Matte v1.fcfcc’ file into one of two places on your computer:

Your Startup HD/Library/Application Support/Final Cut Pro System Support/Plugins
or
Your Startup HD/Users/your name/Library/Preferences/Final Cut Pro User Data/Plugins/

(Your Startup HD/Users/your name/Library/Application Support/Final Cut Express Support/Plugins for Final Cut Express users)

Restart Final Cut, and you’ll see the filter in the ‘Matte’ section of ‘Video Filters’.

Visit my Final Cut home for more plugins and tips
finalcuthomethumbnail

Given that mobile phones have been irritating to use for years, I had a couple of ideas that might make them more appealing. They are based on two aspects of SMS texting that I liked: texts aren’t conversations and texts are cheaper.

An advantage of texting is that you don’t have to get into a conversation with someone. Texts are like telegrams: you send someone a piece of information you think they should receive. No conversation necessary. Given that I couldn’t be bothered to put the time in to get quick enough at texting, I liked the idea of being able to leave someone a message – even if their phone wasn’t going to voicemail. How about calling someone, but pressing a special digit on your phone which causes their phone not to ring and puts you straight through to their voicemail? That means you can deliver the message without having a discussion: one of the advantages of texting. Twitter grew out of the idea of sharing one short message with groups of people without having a series of individual conversations.

The other main advantage of texting is price: it’s very cheap to send people texts. My other idea was to have a speech-to-text system that would convert my spoken message into a text to be sent. The more you use your phone, the more accurate the speech-to-text would get, especially as phone calls have very specific structures and vocabularies.

What if a mobile phone had an audio interface to Twitter? That means you could join in the conversation while you are on the move, either walking or driving (using a hands-free kit). Speech to text would convert your thoughts into Tweets; if you paused, it could give you a character count update. You could use simple voice commands to edit. Summarising software could suggest alternative ways of saying the same thing in 140 characters.

The other side of Twitter could also work as audio only. Imagine if each Twitter profile could also hold a phoneme database that audio-based Twitter software could use to simulate the voice of the person that tweeted.

In the coming years more services will be audio only, so maybe it’s best to start with the simplest, such as Twitter.

13 January followup: A service has been launched that relates to this. Jott works by calling a special telephone number – the processing isn’t done on the phone, even though they do have a BlackBerry application (they have an iPhone application, but it is temporarily unavailable).

Either Microsoft is terrible at creating videos, or they have a good sense of how to make an ironically bad one. Check out this submission. I think they know what they’re doing:

Play Microsoft Songsmith demo video

Their newly announced product, available from Microsoft Research, automatically generates accompanying music for any words you sing into your computer. You can choose key and musical style. You can then go back and change the chord progression if needed. $40 gets you a downloaded application that might be fun.

Songsmith gives me another idea.

smFrontczak: Imagine an application that you tell a story to; it adds sound effects, ambience and even music to your speech, turning your story into a higher production-value podcast or radio play. This would happen using voice recognition to understand the story, in conjunction with a large sound effects library. smFrontczak could also enhance radio plays: actors could speak selected stage directions, which would be edited out of the final version.

George: The cathedral's got a mosaic...
Connor: Hurry, it's almost noon!
The children leg it across the bustling
market square and burst into
the murky cathedral.
Mark: The sun! The sun!
The cathedral clock begins to strike noon
(continues over the following)
George: The beam's pointing right at...
...the Blue Knight's shield!

If actors (or a talented individual using different voices) read this script out, the smFrontczak could interpret it by fading out the busy market square to the left, fading in the cathedral from the left and changing the ambience applied to the voices to make them sound as if they are in an echoing hall. Then the church bell strikes could commence and continue (at reduced volume) during the scene.

When films are in preproduction, teams are brought in for previsualisation. Storyboarding and animation software (sometimes in 3D) are used to plan scenes to guide many departments. Perhaps the smFrontczak could be used to support the sale of a script in the first place – a tool to turn actors’ readings into dynamic radio plays…

This is the next step on the way to the day when someone will invent a real Holophoner.

The Holophoner is an imaginary device from Futurama, the animated series set 991 years in the future, made by some of the people behind The Simpsons. It is a musical instrument that uses holographic technology to create 3D operas to accompany the music.

I hope it’ll be a few decades until a real Holophoner appears. In a way, the technology and media industries are paving the way for the day when an individual will be able to compose and perform a complete sensory experience and share it with an audience.

What will audiences need imagination for then…?

This is an example of what I’ve been talking about. A twitter thought leads to a blog post… or two.

When I woke, my guest was watching TV. Part of the show was an interview with a French person, and his voice was dubbed. As I know a little French and my friend is fluent, it was a pity that we couldn’t hear what he had to say while reading subtitles if needed.

It seems that dubbing foreign speech has become much more common than subtitling in the last 10 years. This is true of even the most highbrow TV news programmes. In 1995 they would have subtitled non-English speech. Now they hardly ever do.

There are two explanations: that TV producers and news editors think audiences are put off by subtitles, or that subtitling technology hasn’t kept up with the world of simpler post production – compared with dubbing.

I’d like to assume the latter for the moment. What is it about subtitling that makes it more difficult to organise than dubbing? It is that it isn’t too difficult to get a simultaneous translator to translate and speak at the same time, whereas producing well-written and well-timed subtitles is hard.

For live TV, there is an interesting solution. Subtitle describers are employed to repeat what people are saying, and what sounds can be heard, into a speech recognition package, which produces subtitles for those who turn them on using their remote controls. All non-satellite TV channels have subtitles on 97% of their shows; this is how they provide the service.

This points up that editing software should not treat subtitling as an effect that is laid on top of video at some point, or only implemented when making a DVD. Maybe it is time that script, music and sound effect information was associated directly with audio clips so that a scratch subtitle track could be generated automatically. Then professional summarisers and designers would clean it up before the production is delivered online, on DVD or broadcast.
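If script text really were attached to clips with in and out times, generating that scratch track would be mechanical. Here is a small sketch that writes entries in the widely supported SubRip (.srt) subtitle format; the function names and the cue data structure are my own, assumed for illustration:

```python
# Sketch: turn (start, end, text) clip annotations into SubRip (.srt)
# subtitle entries, a plain-text format most players and DVD/online
# delivery tools accept.

def srt_timestamp(seconds):
    """Format seconds as the SRT 'HH:MM:SS,mmm' timestamp."""
    total_ms = int(round(seconds * 1000))
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def make_srt(cues):
    """cues -- list of (start_seconds, end_seconds, text) tuples,
    in playback order. Returns the full .srt file contents."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, start=1):
        blocks.append(
            f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"
        )
    return "\n".join(blocks)
```

Each clip’s script text becomes a numbered cue, so the summarisers and designers mentioned above would start from correctly timed placeholder subtitles rather than a blank timeline.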

A thought leads to a blog post… or two

So the original thought was “With people speaking foreign languages, over the last twenty years the technology of subtitling has fallen behind dubbing. A pity.” – which is what I posted to Twitter 11 hours ago, before going out and having a great day in London. I didn’t think about it until I got back a short time ago and saw that Matt Davis had written a blog post partially inspired by my tweet.

Matt Davis' Twitter profile picture
His idea is much bigger than mine – maybe leading to a whole new medium for a social media platform to share and discuss. That’s why you should check it out (also follow Matt on Twitter if you like, or blog his feed).

I then decided to write a post about Twitter and blogs, which meant turning my initial thought into (almost) an idea.

This is how Twitter and blogs can work.

Following a request over at The LA Final Cut Pro User Group forum, I’m working on a transition plugin that gives you more control over the response curve as one clip fades into another.

Part of that job is finding a way to give feedback on screen showing the response curve. Here’s what it looks like at the moment:
Screenshot of FXBuilder showing the result of a plugin I'm writing

I’m also working on a transition plugin that will produce the ‘Scooby Doo going back in time’ effect amongst others: