Technology Futures

Commentaries on Directions That Will Impact the Future of Technology

Archive for the month “May, 2012”

WHDI vs WiHD? VHS vs Betamax again?

Wireless is one of today's hot buzzwords.  My last post was about connecting the rear speakers in a surround system wirelessly (sort of).  This post is about sending audio/video signals wirelessly, specifically the HD-quality signals normally associated with the HDMI (High Definition Multimedia Interface) wired connections prevalent in today's TVs, DVD players, and cable and satellite systems.

The powers that be have been working on this issue since 2007 and have come up with approaches to the problem that are supported by some of the biggest names in the electronics business.  These solutions cost $1000 or so until recently.  Now the price is down to around $200, and promises to be much less in the not-too-distant future.  In fact, solutions have been implemented at the chip level and will be embedded into equipment including TVs, receivers, computers, tablets, etc. where the incremental cost will hardly be noticed.

Unfortunately, there are two competing technologies, reminiscent of the VHS-Betamax confrontation of decades ago, with one big difference: the same companies are supporting both technologies, hedging their bets as it were.  The two technologies are called WHDI (Wireless Home Digital Interface) and WiHD (Wireless High Definition).  Neither of them has been adopted by any recognized international standards body (such as the IEEE); each is promoted by its own industry "Consortium."  To further confuse the consumer, there is something called Intel Wireless Display, or WiDi!

In any event, the objective is to eliminate the cable between an HDMI source device (cable receiver, DVD player, computer, etc.) and an HDMI display device (TV, tablet, etc.).  This capability is especially useful when the TV is mounted in a location remote from the source equipment.

If you buy a kit to add to an existing equipment setup, you will get a transmitter that plugs into the source equipment’s HDMI-out jack and a receiver that plugs into the TV’s HDMI-in jack.  One or both devices aren’t needed if the technology is built-in to the equipment, assuming both the sending and receiving devices are using the same technology.  Your next question (obviously) is – Which technology should I choose?

That is a tough one to answer, just like the VHS-Betamax query of yore.  WHDI was created by an Israeli semiconductor company called Amimon.  The technology uses the 5 to 6 GHz carrier band, the same used by some wifi networks, cordless telephones and other consumer devices.  WiHD uses a 60 GHz carrier frequency, a band currently unoccupied by almost anything else one is likely to encounter in a home.  Its adherents claim that nothing will interfere with its signals.

However, the laws of physics dictate that the higher the frequency, the shorter the range.  Early adopters report that WiHD works well for line-of-sight up to 30-odd feet, while WHDI devotees claim that the signal will go through walls and accommodate distances up to 100 feet.  Thus it would seem that the WHDI solution is more flexible.  Indeed, it is the technology used in the heavily advertised AT&T U-Verse receivers touted to be able to wirelessly broadcast a signal to up to four TVs in a single home.
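The range gap between the two bands follows from the standard free-space path loss formula, FSPL(dB) = 20 log10(d) + 20 log10(f) + 92.45, with d in km and f in GHz.  A quick sketch (the 10 m distance is just an illustrative figure, not from either spec):

```python
import math

def fspl_db(distance_m, freq_ghz):
    """Free-space path loss in dB (distance in meters, frequency in GHz)."""
    d_km = distance_m / 1000.0
    return 20 * math.log10(d_km) + 20 * math.log10(freq_ghz) + 92.45

# Same illustrative 10 m line-of-sight path at each carrier frequency
loss_whdi = fspl_db(10, 5.0)    # WHDI band, ~5 GHz
loss_wihd = fspl_db(10, 60.0)   # WiHD band, 60 GHz

# 60 GHz pays a fixed 20*log10(60/5) penalty at any given distance
print(round(loss_wihd - loss_whdi, 1))  # → 21.6
```

All else being equal, 60 GHz loses roughly 21.6 dB more than 5 GHz over the same path, and that is before counting its much poorer penetration of walls.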

However, at this stage of the game, it may be dangerous to jump to conclusions.  If you peruse the many online forums dealing with the subject, you will find that user experiences with either technology are all over the map, ranging from excellent to awful.  Besides the range and line-of-sight issues, other concerns include latency and whether or not video compression is used.  The watchword is: "Don't buy an HDMI wireless system unless you can return it."

Back to WiDi.  This is nothing more than WHDI circuitry embedded in some Intel laptop chips.  The idea is that anything you can see on your laptop can be sent wirelessly to any device supporting the WHDI standard.  It is currently being promoted by Dell (as one example) under the banner “Connected Home.”

A note of caution:  Many of the wireless HDMI kits on the market come in packaging that does not state which technology is used, nor does their advertising.  Therefore, if you want one that supports WiHD, you might well end up buying one that supports WHDI.  As usual, do your homework before laying out your money.  I also recommend that you peruse the forums on this subject and read the product reviews by users and independent outlets like ZDNet and CNET.



Wireless Surround Sound: Kudos and Caveats

Speaker wire costs about $0.15/ft for 16 gauge, $0.20/ft for 14 gauge and $0.25/ft for 12 gauge.  These gauges will handle runs of 20′, 35′ and 100′, respectively, into an 8-ohm load, and half those distances into a 4-ohm load.  So, if you have two 8-ohm surround speakers located 30 feet from your receiver, the wire will cost about $12.00.  Further, with this conventional wired setup, there will not be any discernible delay between the sound coming out of the front and rear speakers; they will be in sync.  By the way, speaker wire carries very low frequencies (less than 20 kHz), and spending a lot for brand-name cable is a waste of money.
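The $12.00 figure works out as two 30-foot runs of 14-gauge wire at $0.20/ft.  A small sketch of that gauge-selection arithmetic, using only the prices and run lengths quoted above:

```python
# Per-foot prices and rated one-way run lengths (8-ohm load) from the text
wire = {
    16: {"price": 0.15, "max_run_8ohm": 20},
    14: {"price": 0.20, "max_run_8ohm": 35},
    12: {"price": 0.25, "max_run_8ohm": 100},
}

def cheapest_gauge(run_ft, load_ohms=8):
    """Pick the cheapest gauge whose rated run covers the distance.
    A 4-ohm load halves the usable run, per the rule of thumb above."""
    for gauge in sorted(wire, reverse=True):  # 16 is cheapest, 12 dearest
        max_run = wire[gauge]["max_run_8ohm"] * (load_ohms / 8)
        if run_ft <= max_run:
            return gauge
    return None

# Two 8-ohm surrounds, 30 ft each: 14 gauge (16 gauge tops out at 20 ft)
g = cheapest_gauge(30)
cost = round(2 * 30 * wire[g]["price"], 2)
print(g, cost)  # → 14 12.0
```

Switch the same speakers to a 4-ohm load and the function steps up to 12 gauge, since 14 gauge is then only rated for 17.5 feet.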

Problem:  You are not able to run wires through the ceilings or walls, and your significant other won't allow you to lay the wires on the floor where grandpa can trip on them.  A solution:  Use a wireless link to connect the rear speakers to your surround-sound receiver.  However, you need to consider the cost.  A wireless connection can add $100 to $500 to the cost of your 5.1 system, and twice that if you have a 7.1 system.  If your speakers cost $1000 apiece, the incremental cost of wireless may seem reasonable.  If your speakers cost $100 each, it might seem exorbitant.

Be advised, I am not writing here about inexpensive Bluetooth speaker connections designed for iPods and the like.  I'm talking about achieving CD-quality sound in a quality home entertainment system with true surround sound.

The technology used in all the commercially available wireless systems for home use relies on either the 2.4 or 5.8 GHz transmission band, the same bands used by other household appliances including microwave ovens, cordless telephones and wifi network routers.  Thus, these systems may be subject to electrical interference resulting in static and signal dropouts.  The better systems claim to have solved that problem by providing multiple transmission channels; the idea is that the system will find a channel that is not being used by any other appliance.  In general, the success of this scheme is a function of price: the more expensive the system, the better job it does of avoiding collisions and the clearer the sound will be.

Another problem is that the signal to the rear speakers will be delayed by some 15-20 milliseconds.  Most people won't notice, but sensitive audiophiles might find that annoying.  The best way to handle it is to use a receiver that lets the user adjust the delay time for each speaker.  Fortunately, many moderately priced receivers today have this capability.
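Receivers with per-speaker delay adjustment usually express the setting as a speaker "distance."  A minimal sketch of the conversion, using the 15-20 ms figure above and the speed of sound (on receivers that work this way, adding that much extra "distance" to the rear-speaker setting re-syncs the system):

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound at room temperature

def delay_to_distance_m(delay_ms):
    """Distance offset equivalent to a given delay, in meters."""
    return SPEED_OF_SOUND_M_S * delay_ms / 1000.0

# A 15-20 ms wireless-link delay is equivalent to moving the rear
# speakers several meters farther from the listener than they are.
lo = delay_to_distance_m(15)
hi = delay_to_distance_m(20)
print(f"{lo:.1f}-{hi:.1f} m")  # → 5.1-6.9 m
```

In other words, the wireless link makes the rears behave as if they sat roughly 17 to 22 feet farther away, which is well within the adjustment range of most receivers' speaker-distance menus.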

One company, Avnera (Portland, OR), has designed a chipset for the purpose it calls AudioMagic.  By reducing the component content to a couple of chips, the cost of wireless is drastically reduced, and we can expect to see the prices of these systems drop considerably.  The Avnera chips are used in the Rocketfish system sold by Best Buy, one of the cheapest on the market.

The following table summarizes most of the available wireless speaker links on the North American market.  Data comes from the manufacturers' websites and may or may not be accurate.  Prices are average online prices as of May 2012.  Careful shopping will likely turn up lower prices than those shown.  All of the models listed can be found by website search.  I could not figure out how to include the links in the table; a WordPress issue, I suspect.

Notice that Samsung and Sony offer systems wherein the transmitter is contained on a card that plugs into their receivers.  The cards are much less expensive than separate transmitters.  I believe that chipsets like Avnera's AudioMagic will soon be built into receivers, reducing the cost of the transmitter function to very little.  The wireless receiver units, however, will still be relatively expensive because of the amplifiers they must contain.  Some speakers can be purchased with built-in amplifiers, but they tend to be aimed at the low end of the market.  If you want to choose your own rear speakers, you will need to buy separate wireless receivers.

Read Anywhere Low Power Displays: Not Far Off?

In the 1990s, I was introduced to a French scientist who, working alone, had developed a process for making displays using a reflective coating technology.  He was related to the CEO of a client company that hired my firm to evaluate the money-making potential of the technology.  My colleagues and I were blown away by the possibilities inherent in the technology.  Not only did the displays look terrific, they cost almost nothing to produce and could be laid down on almost any substrate, even paper!  To make a long story short, the technology never saw the light of day.  The scientist, a Holocaust survivor, was terrified that someone was going to steal his ideas and was very difficult to deal with.  Eventually, the company got tired of waiting for him to cooperate.

According to those who claim to know a lot about display technology, there are dozens, if not hundreds, of new display technologies put forward every year.  Very few ever see the light of day.  However, in the past few years, some new technologies with big money behind them have emerged.  All of them purport to provide high-quality viewing with low power consumption.

You may recall the project called One Laptop Per Child.  Its aim was to produce a laptop computer for $100, cheap enough that every kid in every third-world country could get one.  The program has achieved some success; about 50 countries have adopted the device.  The utility of these laptops is, of course, a function of many parameters, but one of the key ones is the display, which has to be easy to read outdoors and use so little power that the laptop can be recharged by sunlight, given that electrical outlets are rare in much of rural Africa.

The technology that made it possible was developed by Mary Lou Jepsen at MIT's Media Lab.  She has since gone on to found a company called Pixel Qi (pronounced "chee") that is developing next-generation display technology with an emphasis on low power consumption.  The company just announced a new screen architecture that it claims matches the resolution, color saturation, contrast ratio and viewing angles of the Apple Retina display, but draws three to 100 times less power and is readable in bright sunlight.

Qualcomm, the big mobile-phone company, acquired a startup called Pixtronix a couple of years ago that developed a display technology dubbed Mirasol.  Again, high-quality viewing with low power consumption.  Depending on whose article you read, Qualcomm is investing $1 to $2 billion in a plant in Taiwan to produce Mirasol displays.  Samsung, which claims to be the world's largest consumer electronics company, acquired a company called Liquavista, a spinout from Royal Philips Electronics.  Again, low power with the ability to read in any lighting condition.

Not to be completely left in the dust, the big-time LCD panel manufacturers like Sharp are working hard on new technologies.  One of these is known as IGZO (indium gallium zinc oxide), which offers several advantages, including lower power and better resolution than conventional LCD technology.  It was thought by many that Apple would use the technology in its iPad 3, but that didn't happen.

As you might expect, all of the foregoing are looking to replace conventional liquid crystal displays in hand-held and laptop devices.  A San Jose, CA company, Prysm, however, is specializing in low-power large format displays, based on its proprietary laser phosphor display technology.  Imagine an entire wall displaying high-quality video imagery using 75% less power than competing technologies, and you have the idea.

There isn’t enough room in this post to go into the various technologies that these companies are employing, nor the many other display technologies under development at places like the University of Cincinnati.  Follow some of the links in this post, and you’ll at least see what the manufacturers are willing to tell you.

Without picking winners and losers, I am convinced that low-power, high-resolution displays that can be read in any light will be hitting the market big-time in the next few years.  The impact on battery life and manufacturing cost will be truly significant.  Think about a tablet or laptop you can read outdoors that charges its batteries without a cord.  To quote Joe E. Brown in Some Like It Hot, "Zowie."

Olfactory Recognition and Synthesis: Does the Nose Know?

A digital dreamer's goal is to duplicate the five human senses: touch, sight, hearing, smell and taste, ergo creating a robot that can do, as the song says, "anything you can do, I can do better."  (Unfortunately, sex hasn't figured into the dream yet.)  Advances have been made in taste (there are only four basic tastes), touch, hearing (especially voice) and sight.  Wall-E and Artoo aren't far off the reality mark.

The toughest sense to conquer appears to be that of smell.  Most of you are too young to remember "Smell-O-Vision," a 1960s attempt by Mike Todd's son (Mike Todd was a famous movie producer, creator of Todd-AO and husband to Liz Taylor) to add smells to motion-picture viewing.  Approximately 30 odors were synthesized and pumped into the theater at appropriate times.  The attempt was unsuccessful and quickly abandoned by the movie industry.

The applications for synthetically duplicating the olfactory sense, however, are very important and, to my way of thinking, justify a much larger research investment than is presently being made.  Drug interdiction, security, disease detection, contaminant detection, gas-leak detection and crime prevention are but a few of the very important applications of the sense of smell.  The best we have right now for these applications are dogs and vultures (for gas leaks)!

There are bazillions of molecules that can contribute to odors and more bazillions of permutations and combinations of those molecules.  It would be nice if we could make up a table of these molecules and simply look them up, but that approach simply isn’t in the cards.  The numbers are just too big.
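To see why a lookup table is hopeless, consider a deliberately toy model (the numbers below are illustrative only, not real olfactory chemistry): even a few hundred odorant molecules, mixed only a handful at a time and ignoring concentration entirely, yield tens of billions of distinct mixtures:

```python
from math import comb

# Hypothetical inventory: 400 distinguishable odorant molecules,
# combined in mixtures of 1 to 5 components (order doesn't matter)
n_molecules = 400
mixtures = sum(comb(n_molecules, k) for k in range(1, 6))
print(f"{mixtures:,}")  # roughly 84 billion distinct mixtures
```

Allow larger mixtures, or treat concentration ratios as part of the "odor," and the count dwarfs anything a table could hold, which is why researchers reach for pattern-matching approaches like neural networks instead.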

A researcher in this field is Dr. Paul Rhodes.  This Stanford Visiting Scholar formed a company called Evolved Machines that is attempting to use neural network technology to identify odors.  Unfortunately, I don’t think he has gotten very far – his last press release was issued two years ago!

Given the enormous market potential of applications based on smell, it is a wonder that would-be entrepreneurs are not all over this one.  The IEEE publishes a Sensors Journal that carries scholarly articles on the subject from time to time, but the apparent lack of serious research is astonishing to this observer.  Do a Google search and be amazed at the paucity of informative hits on the subject.

So this subject remains an open question.  Will it have a future?  Maybe, even probably, but it looks like it will be a very long way off.
