2014
04.24

What separates a path-blazing patent from an “everyday” one? Put another way, what makes a patent “strategic” for a company, enabling it to gain significant competitive advantage over its competition?

Strategic patenting is a discipline in the larger practice of IP (intellectual property) management. A recent article on strategic patenting was reposted on http://www.ipstrategy.com/ on April 18 and provides a very good explanation, with examples of patents that put the owners of IP way ahead of their competitors. You should not be surprised to see the name of Amazon among the companies described as leaders in strategic patenting. We should thank the author, Jackie Hutter, for this excellent contribution to our understanding of patent valuation.

Amazon Technologies, Inc. was granted a number of patents that focused on aspects of speech recognition (see my last Patent Report) that include phrase recognition with the objective of prompting response phrases to the listener, and customized speech generation based upon understanding a person’s behavioral patterns related to the specific context of the dialog. For example, one would respond differently to a person in a stressful situation compared with a pleasant one. Speech patterns vary from context to context. Having a device know the difference can lead to better dialog.

What makes this interesting to me is the sustained efforts that Amazon, Google and other companies are making around improvements in speech recognition. Just how far have we come in enabling devices to pass the Turing Test? As recently as April 21, we read that a Google algorithm successfully passed the test, but did it really? There is controversy around the interpretation of what conditions satisfy the rules of the test, but the improvements coming weekly to the area of speech recognition are shortening the time until an unambiguous “pass” by a device will be recognized. Once this happens, the fundamental issue for personal information security will be the “hack” of your bank account by a device that is a behavioral and aural mimic of you.

Honeywell International Inc. was granted Patent 8,705,808 (“Combined face and iris recognition system”), which covers a broad range of security-related use cases.

When I read through the listing of prior patent citations that provide the groundwork for the present patent, I found it striking that the first citation was for a patent granted in 1987 for an early iris recognition system. It is worth looking through because the earliest patent citation in that patent is from July 25, 1916, number 1,192,349 for a “Shadow Pupillometer.”

Following the technology bread crumb trail backward in time reminds me of Isaac Newton’s famous statement: “If I have seen further it is by standing on the shoulders of giants.” Like WiTricity’s recent citation of Tesla’s wireless energy transmission patent, creating a connection to devices separated by more than 100 years, so much of what we today introduce as technological innovation has deep roots in history.

Like Amazon with its grant for enhanced facial recognition in video, Honeywell received Patent 8,706,663 (“Detection of People in Real World Videos and Images.”) Security devices and systems are a large part of Honeywell’s business, and the advancements described in this and the face and iris recognition patent discussed above certainly help improve its product lines.

Here’s an interesting one from MIT. Reissued Patent RE44,856 (“Tactile Sensor Using Elastomeric Imaging”) addresses the need to improve tactile sensors for an application such as a robot finger pad. There are three critical properties desired in a tactile sensor. As described in the Background section of the patent, “It should have high resolution (be able to make fine spatial discriminations), have high sensitivity (be able to detect small variations in pressure), and be compliant (able to elastically deform in response to pressure).” Let’s remember that MIT is home to quite a few groups investigating robotics for different applications, so a patent like this granted to MIT should not come as a surprise.

Ever want to know what information is actually carried on your credit card? Then check out Patent 8,701,989 (“Methods and Systems for Displaying Loyalty Program Information on a Payment Card”) granted to MasterCard International Incorporated. The schematic of the card and the explanation of what each part of the card represents is an excellent visual aid. You’d be surprised by “what’s in your wallet,” to borrow a phrase from Capital One.

2014
04.22

Security is foremost on the minds of anyone who is involved in the world of connected devices, M2M, or the IoT (Internet of Things) these days, and with good reason. Data breaches and cyber threats are plaguing just about every industry. For instance, Heartbleed is significant and requires quick action from numerous organizations, especially any firm running a vulnerable version of OpenSSL. Without question, every company needs to be prepared for these types of unannounced vulnerabilities as they pop up. It’s no secret Heartbleed found its way into Web servers, but it also created havoc on routers, networking equipment, and a host of enterprise technology.

Heartbleed really opened all of our eyes to just how vulnerable enterprise systems and gadgets can be to cyber attacks. With that said, it’s almost impossible to keep up with cyber trends, because as these attacks increase we are seeing the bad guys show off the innovative art behind these designer breaches as much as the science of the crime.

So, the real question is how do you keep up with all the cybercrime? As I see it, it’s virtually impossible.

In talking with Bryan Sartin, director of Verizon’s RISK team, I wasn’t surprised to hear him acknowledge the cybersecurity landscape is just getting trickier and trickier. From his perspective, cybercrime is growing and so are the vulnerabilities for each and every enterprise. Sartin is a huge proponent of companies establishing sound strategic security initiatives that can limit the effects of something like a Heartbleed. His comments stem from Verizon’s security report, which it released today. The report’s goal is to help enterprises assess what they are doing right now in the area of information security.

The seventh annual Verizon 2014 Data Breach Investigations Report documents more than 1,300 confirmed data breaches and 63,000 reported security incidents, drawing on a 10-year range of study.

The report highlights nine threat patterns that Verizon says are responsible for a good portion (almost 92%) of the security incidents analyzed. These threat patterns include miscellaneous errors, which can be as simple as sending an email to the wrong person; “crimeware,” which the carrier defines as malware aimed at gaining control of systems; insider misuse; physical theft and loss; Web app attacks; POS (point-of-sale) intrusions; and payment card skimmers, among others.

So if the report’s ultimate message is clear—no organization is immune from a data breach—then, as an M2M industry, we need to find better ways to help enterprise companies. As more devices and gadgets communicate with one another through a growing web of apps, the risk of cybercrime only increases, unless the M2M/IoT industry takes the necessary precautions to minimize attacks.

There is good news in all this. The M2M industry is proof positive that when data is put in the hands of the right decisionmakers it can change the fate of a business. The data-breach report does a nice job of showing that information. Now it’s up to enterprises to put the right safety measures in place to at least minimize the impact of a data breach.

In a world where cybercrime is sometimes nothing more than sport to the bad guys, you need to be more vigilant than ever if you really want to protect your assets.

Want to tweet about this article? Use hashtags #Verizon #security #cybercrime #M2M #IoT #Heartbleed

2014
04.17

News coverage of unmanned aerial vehicles, also called drones, for use in non-military applications is increasing as the FAA moves closer to issuing rules for their use in the United States. It has been reported that the first round of rules will probably be limited to use by emergency responders. Other countries such as Australia are well ahead of the U.S. in the development and deployment of drones for civilian use cases. Concern has been expressed that drones will pose a threat to people on the ground because of factors such as loss of control causing impacts to buildings and other structures, and mid-air collisions causing falling debris. Clearly, this is a valid concern. One company that is addressing this problem is L-3 Unmanned Systems, which was granted Patent 8,700,306 (“Autonomous Collision Avoidance System for Unmanned Aerial Vehicles.”) This system does not require human control, making the detection, tracking, and avoidance of aerial hazards an autonomous function of the drone. As you can imagine, this capability will require that highly connected, sensor-laden devices be incorporated into the drone, itself a highly “connected device.” In the background section of the patent, L-3 references the need for such a system to support the forthcoming use of civilian drones in U.S. national airspace.

Speech recognition continues to be an active area for patent grants, and in follow-up to my report in March, three major companies received grants this week. What is interesting is how non-traditional technology companies are challenging the traditional leaders for speech recognition market share. Amazon and Google are emerging as speech-recognition research and development companies, driven by their desire to incorporate speech into devices such as Google Glass, autonomously driven cars, and consumer-engagement applications on smartphones. Traditional players in speech recognition engines (the software) have included Nuance, Voxware, and Vocollect, among others. Significantly, manufacturers of the devices on which voice-application software runs have moved to integrate voice software companies, most notably Honeywell International’s acquisition of Vocollect. Motorola Solutions, now being acquired by Zebra Technologies in a deal valued at $3.5 billion, has its own voice application for distribution and logistics, a market in which it has a considerable hardware footprint. All of this suggests a blurring of the line between the physical device and the speech recognition engine and applications that sit on the device. Voice-directed applications are becoming ubiquitous in both the B2C and B2B spaces. Google Glass is a good example of this.

Amazon was granted Patent 8,700,392 (“Speech Inclusive Device Interfaces”), which has the distinction of including Jeff Bezos’ name among the individuals associated with the patent. You do not see this often, and it signals an important benchmark that the patented technology achieves for the company. One of the benefits cited for the technology in the patent is the reduction of training time for people in jobs where voice direction can be used. There is a surge of interest in the industrial space, particularly logistics, for voice-directed applications. Let’s remember that Amazon is a major owner and operator of distribution centers, for which it was also granted Patent 8,700,502 this week (“System and Method of Fulfilling an Order,”) which directly relates to distribution center operations. One must be impressed by the synergy Amazon demonstrates when connecting technology to its real-world processes.

Google was granted Patent 8,700,393 (“Multi-stage Speaker Adaptation”), which focuses on one of the two basic forms of speech recognition, “dependent” and “independent.” Google’s patent improves upon the dependent form. A speaker-dependent system is “one to one,” meaning that the voice of the speaker is mapped and therefore is a unique profile. This form is used in industrial environments where workers (“users”) must be securely identified for specific work processes and are independently tracked for user productivity measurements. The speaker-independent form has mostly been used in consumer applications, where such a degree of user control is not required.
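To make the “one to one” idea concrete, here is a minimal enrollment-and-verification sketch in Python. It is purely illustrative and not Google’s patented multi-stage adaptation: the three-dimensional “voice features” are made-up numbers standing in for the acoustic features (such as MFCCs) a real engine would extract from audio.

```python
import math

def enroll(samples):
    """Build a speaker profile: the mean of several feature vectors
    captured while the known user reads enrollment phrases."""
    dims = len(samples[0])
    return [sum(vec[i] for vec in samples) / len(samples) for i in range(dims)]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(profile, candidate, threshold=1.0):
    """Speaker-dependent check: accept only if the candidate's features
    fall close enough to this one user's stored profile."""
    return distance(profile, candidate) <= threshold

# Hypothetical 3-dimensional "voice features" for illustration only.
alice_samples = [[1.0, 2.0, 0.5], [1.2, 1.9, 0.6], [0.9, 2.1, 0.4]]
profile = enroll(alice_samples)

print(verify(profile, [1.1, 2.0, 0.5]))   # True: close to the enrolled profile
print(verify(profile, [4.0, 0.2, 3.0]))   # False: a different "voice"
```

A speaker-independent engine, by contrast, would skip the per-user profile entirely and match against models trained across many speakers.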

Honeywell, which owns Vocollect, was granted Patent 8,700,405 (“Audio System and Method for Coordinating Tasks,”) and the illustration in the patent clearly identifies this as a commercially oriented application, tied to devices worn by the user.

Coming on the heels of the April 1 awards, Visa U.S.A. Inc. was granted Patent 8,700,513 (“Authentication of a Transit Verification Value”), which further defines the art of using contactless payment technology for transit system access. As noted in my previous report, the ability to combine secure contactless payments with rapid-access, low-security transit access in one form factor (credit card or smartphone) opens up a significant new market opportunity for payment facilitators like Visa and MasterCard.

Want to tweet about this article? Use hashtags #Drones #security #Google #Amazon #IoT #M2M #Vocollect #visa #retail #Zebra #Honeywell #nuance #Motorola #Voxware #FAA

2014
04.15

Women of M2M Shine

When we selected this year’s Women of M2M, we went beyond just typecasting the traditional power elite that I suspect most people might have anticipated. Rather, the women chosen this year epitomize what M2M has evolved into today. M2M has grown into this massive machine, so to speak, that is driving all the technology innovation that is sparking the growth of devices and their connections to one another. Much like our industry, these women are inspired, passionate, and very persistent.

I was already pretty wowed by our selection after doing months of research on each of the women chosen, talking to their colleagues and business associates, and conducting numerous personal interviews. But nothing beats the infectious energy that fills a room the moment you meet these women. And that’s exactly what happened when 20 of the 42 Women of M2M, and even a few of last year’s alums, showed up at an awards dinner held just outside Chicago, thanks to supporters Synchronoss, Ford, and Aeris.

It takes a lot to impress me, but this was truly an awe-inspiring evening. Meeting these women face-to-face only proved their resumes lived up to everything I had expected and more. I am certain the five men in attendance were trying to figure out if they should chime in on the conversations or just bask in the glow of the success of their colleagues and/or significant others.

It was a night these women would remember for years to come. They had a chance to relax and to be recognized for years of outstanding achievement. As a result, they mingled with other women who are just as determined to find ways to move the needle upward from the 25% of the technology workforce that women currently represent. “Inspiration for me is to make a difference,” says Nancy Gioia of Ford. Her thoughts were echoed by most of the women, who want to make a difference in the lives of younger women and other co-workers.

The entire group is very committed to building strong social networks for business. While they admit their male counterparts are known to be consummate relationship builders, these women know they need to step it up and develop stronger contacts to get promoted within their own organizations, develop new relationships, and nurture the ones that already exist.

While many had a renewed feeling and a zest for working with each other, there were still a couple of women who were reluctant to admit they could run with the big dogs. Humble and unassuming, they needed a little encouragement to understand that their achievements were more than well-deserved. So if you haven’t already, please send one of the 42 blue-chip women a congratulatory note. I’m certain they deserve it and you will feel better for sending it.

Want to tweet about this article? Use hashtags #M2M, #women, #females, #WoM2M

2014
04.10

Robots that become like us in thought and capabilities have long held a prominent position in science fiction, going back to its humble beginnings, including a stunning “first view” in Fritz Lang’s 1927 film classic Metropolis. Much has been accomplished since then to make the dream a reality, including robots with empathic skills functioning as companions for elderly and ill people. The dream has been to make an autonomously functioning device with near-human capabilities that can stand in for or augment human activity. Humans and robots working together on the production line in factories has been a reality for many years.

There’s a different line of investigation, however, heating up in terms of patent grants, that seeks to make our smart devices interact with us in a manner similar to how humans interact with each other, using non-verbal communication such as gestures. Here we are seeing a clear intent not to transform a smart device into a “traditional” robot, as defined above. Smart devices, used as tools to enable us to do a variety of tasks, remain in the form and use case for which they were originally made. A smartphone is still a phone with applications. The “human–machine interaction” research is intended to allow a device to work with a human who may not be able to hold it, tap a screen, or be within the minimum range for verbal interaction. Seeing a specific gesture from a distance can activate the device, and other gestures could launch an application.

This is an area of formal research. For example, Carnegie Mellon’s Human-Computer Interaction Institute is one of a number of prominent centers involved with bringing our devices and us into a better functional alignment. Perusing their current research initiatives makes for interesting reading.

With this in mind, an intriguing grant to Amazon Technologies, Inc., this week is Patent 8,693,726 (“User Identification by Gesture Recognition.”) Reading through the Background, it becomes clear that the objective is a less resource-intensive means to identify a user to a device while maintaining a high level of password-like security. An example of a resource-intensive form of user identification described in the patent is facial recognition. The use of specific gestures by the user, captured in the device’s memory, is proposed. Specific gestures, such as tracing a letter in the air or waving your hand in a certain direction, are compared with the stored gesture in the device’s memory to validate the user’s access to the device. The patent covers motion in three dimensions as well as time (the fourth dimension), allowing for a sequence of gestures to be used to heighten access security. The use-case implications are interesting, because a gesture recognized from a distance can facilitate further use of the device by voice interaction, keeping the user “hands free” for tasks where holding the device and using touch to interact with it may prove impractical or dangerous.
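To illustrate the general idea of matching a traced gesture against a stored template, here is a simplified 2-D sketch in Python. This is not Amazon’s patented method; it only shows how a trace can be normalized for position and size and compared point-by-point, leaving out the patent’s third spatial dimension and timing.

```python
def normalize(path):
    """Scale and translate a gesture trace into a unit bounding box so the
    same shape matches regardless of where or how large it was drawn."""
    xs = [p[0] for p in path]
    ys = [p[1] for p in path]
    w = (max(xs) - min(xs)) or 1.0
    h = (max(ys) - min(ys)) or 1.0
    return [((x - min(xs)) / w, (y - min(ys)) / h) for x, y in path]

def gesture_match(stored, observed, tolerance=0.15):
    """Compare an observed trace against the enrolled template.
    Both traces must have the same number of sample points here; a real
    system would resample and handle 3-D motion plus timing."""
    a, b = normalize(stored), normalize(observed)
    if len(a) != len(b):
        return False
    err = sum(abs(p[0] - q[0]) + abs(p[1] - q[1]) for p, q in zip(a, b)) / len(a)
    return err <= tolerance

# Enrolled "L" shape vs. a larger, shifted "L" drawn later.
template = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]
attempt  = [(5, 5), (5, 7), (5, 9), (7, 9), (9, 9)]
print(gesture_match(template, attempt))   # True: same shape, different size and place
```

Extending the template to sequences of such gestures is what would give the password-like strength the patent describes.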

Amazon was also granted Patent 8,694,350 (“Automatically Generating Task Recommendations for Human Task Performers.”) The patent covers the required elements of an electronic marketplace, complete with a Task Recommendation Generator, for human performance tasks. The “Background” section in the document provides a fascinating step-through of the logic derived from software program task generation techniques, ultimately applied to human tasks. There is a recognition that certain tasks benefit from human capabilities such as contextual and cultural awareness. The objective: “By enabling large numbers of unaffiliated or otherwise unrelated task requesters and task performers to interact via the intermediary electronic marketplace in this manner, free-market mechanisms mediated by the Internet or other public computer networks can be used to programmatically harness the collective intelligence of an ensemble of unrelated human task performers.”

One can ask: To what end?

2014
04.04

About two years ago I sat down with an engineer at Ford and he just wowed me. The reason: he was telling me about all these pretty awesome predictive solutions Ford was working on for the not-so-distant future that would be available in our cars. More importantly, he was explaining what he called the “predictive” nature of our car. He wasn’t talking about my car tweeting to me, or letting me know via my in-car infotainment system that one of Susie’s Facebook friends was “unfriended” by Tommy. Rather, he was sharing what I envisioned to be a truly connected car.

It’s taken me several editorials and many blogs to figure out where I believe the automakers might have gone astray with all this driver-distraction discussion. For years, OEM (original-equipment manufacturer) engineers would spend hours with me proudly sharing their views of the projects they were cooking up for their companies. They would eagerly paint a picture of the future. After all, they were the masterminds behind the high-tech safety features taking full advantage of radar, sensing, and even GPS (global-positioning system) solutions. With their engineering know-how they saw a world where intelligent automobiles would talk to each other, sense their surroundings, and report back to the transportation infrastructure, almost entirely eliminating accidents, unless you intended to cause one.

This new car world would interpret traffic signals and road signs, all simply by using Wi-Fi and GPS. Cars would send out signals indicating their exact location and destination while essentially forming a train moving at the same speed and direction with all the other vehicles on the road. Via processing algorithms connected to the network, cars would communicate and in time be alerted to hazards on the road, with the ability to take preventative actions for safety and accident avoidance, such as warning drivers of road hazards, upcoming heavy and/or stopped traffic, or even an icy road. Traffic lights, signs, and other in-road infrastructure, together with heads-up displays, would tell motorists of difficult road conditions and help them maneuver through low-visibility conditions. All of this would be connected to the Internet at almost blazing speeds thanks to 4G/LTE, which handles a host of apps and devices within the vehicle.
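The hazard-alert idea is easy to sketch. The following toy Python example, my own simplification rather than any automaker’s system, reduces the road to one dimension: given a broadcast hazard location and the car’s own position and speed, it decides whether the driver should be warned in time to react.

```python
def seconds_to_hazard(car_pos_m, speed_mps, hazard_pos_m):
    """How long until this car reaches a hazard reported ahead of it
    on the same stretch of road (1-D simplification)."""
    gap = hazard_pos_m - car_pos_m
    if gap <= 0 or speed_mps <= 0:
        return None  # hazard behind us, or the car is stopped
    return gap / speed_mps

def should_warn(car_pos_m, speed_mps, hazard_pos_m, warn_window_s=10.0):
    """Warn the driver if the hazard will be reached within the window."""
    t = seconds_to_hazard(car_pos_m, speed_mps, hazard_pos_m)
    return t is not None and t <= warn_window_s

# A broadcast reports stopped traffic 200 m ahead; we're doing 25 m/s (~56 mph).
print(should_warn(car_pos_m=0, speed_mps=25.0, hazard_pos_m=200.0))  # True: 8 s out
print(should_warn(car_pos_m=0, speed_mps=25.0, hazard_pos_m=500.0))  # False: 20 s out
```

A real vehicle-to-vehicle system would, of course, work in two dimensions with map matching, message authentication, and far more cautious thresholds.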

But then somewhere along the way, something went awry. What was once about driver safety, which had always been a top priority for the engineer who had been sitting in the driver’s seat all along, was taken over by none other than marketers and bean counters who saw dollar signs driven by connected services. These folks recognized data meant services, and services meant they could “cash in” on consumers. Consumers would be none the wiser because they would be getting all these entertainment/infotainment options in the cabin of the vehicle and they would be very pleased. What these wizards of Wall Street failed to recognize is that all of this infotainment was just compounding the already bigger problem of driver distraction. What’s more, automakers were influenced by the carriers, rather than letting the engineers sit behind the wheel. Had the automakers remained steadfast, they might have realized that by adding more infotainment into the dashboard they were driving head-on right into traffic.

Some of the carriers stand to gain a lot of money from these services, while the car companies are getting blamed for creating too much distraction in the cabin, and consumers are clearly saying they don’t want it. Connected World’s Quick Poll this week confirms it: already almost 900 people have revealed they do not want social media in their dash, saying it leads to greater driver distraction.

Perhaps the point here is we should think less about entertaining us while we drive and focus on connected-car technologies that provide onboard radar and sensor systems that automatically respond to the environment. These are the things such as lane-departure sensing, warning systems that alert us of another car in our blind spot, and technology that protects the car’s occupants in the event of a collision. All of this onboard technology is syncing up with portable devices—smartphones, tablets, and other entertainment gadgets—that drivers and passengers carry into the vehicles.

So I say it’s time the car companies go back to talking with their engineers. Maybe Ford had it right when its engineers were focusing on using data for predictive analytics. When the Ford engineer explained predictive health to me—I’m not talking about the car’s health—he was referencing working with health providers to predict when a diabetic needs insulin, or the ability to determine if a driver is about to have a seizure. Just how awesome is that? Again, we had this discussion in late 2012.

I want to hear more about the cool stuff that Ford sees as the car of the future. I want to see more automakers getting me excited about how they hope to change our lives by connecting us in ways that are truly awesome, not just plain silly.

Want to tweet about this article? Use hashtags #distracteddriving, #distraction, #carriers, #automotive, #M2M, #data, #connectedcars, #invehicle, #voicecommands, #handsfree, #automakers, #Ford

2014
04.03

2G Sunset, Hello 4G. Really?

Let me say this at the outset: Orson Welles has nothing on the carriers and MVNOs as it pertains to the 2G, 3G, and 4G/LTE connectivity discussions of late. This is reminiscent of when millions of Americans tuned into a popular radio program featuring Orson Welles doing his now-infamous adaptation of the H.G. Wells science fiction novel “The War of the Worlds,” about a Martian invasion of the Earth. But think about it for a moment. Everyone is foreshadowing what will happen when AT&T shuts down its 2G GSM network support by Jan. 1, 2017. But it feels like the industry is just trying to stimulate some excitement in the M2M space around this very subject.

It’s like the industry can’t help itself. It’s as if the vendor community figures it can spur customers to act by the mere thought of something happening. We can just feel it, and we don’t actually know what or who or when it will happen next, but something is happening. We just can’t say enough when it comes to connectivity. Talk on the street is AT&T has already stopped adding and certifying new applications. So that means the shutdown has begun. I’m certain many smaller M2M firms have a lot of questions.

And foremost on your mind is: who do you trust? The carriers have truly been a very aggressive group that hasn’t been afraid to get a little ugly during conferences, during interviews, you name it. So this begs the question: who can you really trust? The fact remains that on one end of the spectrum there is a large number of deployed GSM/GPRS devices that will be impacted. However, for years these companies have been telling us the carriers abandoning 2G connectivity are not to be trusted. But now some of these same vendors are teaming up, and it seems they are “frenemies.” So what are we supposed to tell you now? What advice are we to give you?

Do you trust their pitch? Is this business in today’s day and age? Many of you that need to make a switch are not at large corporations, or you would have just moved to 4G/LTE and been done with it. Rather, many are holding on for as long as you can until the right price and the right partner come along. So back to my original question: who do you trust? How do you make the transition? Is there a right or wrong answer? Have you thought about what your position should be? Have you begun your transition strategy?

To help, we are going to put Aeris Communications CTO Syed Zaaem Hosain to the test. Let’s see if he can withstand the rigors of answering some of my tough questions, and perhaps some of yours, about the sunset of 2G and migrating to 4G. Aeris is one of those companies that touts that it has the answers. Let’s see if that’s true. So if you have some questions you’d like to have answered, send them to me and I’ll put them to Syed during our Webcast May 7. Might as well join the invasion.

2014
04.03

Reading through Tuesday’s patent grants confirmed what I already knew: I am getting old! I’m a throwback to the latter half of the 20th Century, during which I came of age and got hooked on science fiction and technology. To me, the word “blob” really means “The Blob,” the 1958 film starring a young Steve McQueen that has attained status as a cult classic, complete with its annual Blobfest at the Colonial Theatre (where it was filmed), located in Phoenixville, Pa.

So when I came across Patent 8,688,666 (“Multi-blob Consistency for Atomic Data Transactions”) granted to Amazon Technologies, Inc., I was stopped in my tracks. Here was a hybrid description fit for horror and Sci-Fi fans alike! Alas, it had nothing to do with my Blob. It did, however, have everything to do with cloud computing.

Those wild and crazy information technology types come up with all sorts of new ways to classify and manipulate data, and with a nod to Wikipedia, I learned that “a blob (alternately known as a binary large object…) is a collection of binary data stored as a single entity in a database management system. Blobs are typically images, audio or other multimedia objects, though sometimes binary executable code is stored as a blob. Database support for blobs is not universal.”

The importance of the new patent is that “a blob storage system may provide data storage capability [that] is inherently unlimited and scalable, as addition[al] data storage servers may be added to the cloud.” This is an improvement over “traditional internally coded database software, such as database systems based on the relational database management system (RDBMS) model… once the design of such traditional database software is implemented, the configuration of the database software cannot be easily changed. As a result, traditional database software may be inadequate to store certain types of data, large chunks of data, or large quantities of persistent data.”
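For readers who want to see a blob at its most basic, Python’s built-in sqlite3 module can store one in a BLOB column. This illustrates only the concept of opaque binary data stored as a single value; it is not Amazon’s multi-blob atomic transaction system.

```python
import sqlite3

# A blob is just opaque binary data stored as a single value; SQLite's
# BLOB column type is a convenient way to see one in action.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE objects (name TEXT PRIMARY KEY, data BLOB)")

payload = bytes([0x89, 0x50, 0x4E, 0x47])  # e.g., the first bytes of a PNG image
conn.execute("INSERT INTO objects VALUES (?, ?)", ("thumbnail", payload))
conn.commit()

(stored,) = conn.execute(
    "SELECT data FROM objects WHERE name = ?", ("thumbnail",)
).fetchone()
print(stored == payload)  # True: the binary round-trips unchanged
conn.close()
```

The database never interprets the bytes; keeping many such blobs consistent across servers during one atomic transaction is the harder problem the patent addresses.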

So why is Amazon concerned about all of this? It happens to be the largest cloud-hosting services provider, earning about $3.5 billion in revenue from this business segment last year. It is about to go head-to-head with Google, which is trying to unseat Amazon as the top provider.

Speaking of Google, it was granted 50 patents this week, adding to the 8,863 it has received since February of 1988. What is significant is that almost half of the total (4,078) was granted in the past three years.

Visa U.S.A. Inc. was granted Patent 8,668,554 (“Bank Issued Contactless Payment Card Used in Transit Fare Collection”). What is interesting here is the evolution of one of the two NFC (Near-Field Communication) standards, specifically ISO 14443, which at long last merges the separate functions of transit access and contactless payments requiring secure transaction processing. The methods covered in the patent work through the issues around secure payment processing requirements, which are slower than the speed at which a person expects to move through a train turnstile, where speed trumps secure processing.
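One common way to reconcile turnstile speed with security, shown here purely as an illustrative sketch and not as Visa’s patented method, is for the gate to verify a card-supplied MAC offline against a shared key, deferring the slower online fare settlement until later. The key and card ID below are hypothetical.

```python
import hashlib
import hmac

ISSUER_KEY = b"demo-issuer-key"  # hypothetical key shared with transit gates

def card_verification_value(card_id: str, counter: int) -> bytes:
    """What the card presents at the gate: a MAC over its ID and a
    tap counter, computable with no network round trip."""
    msg = f"{card_id}:{counter}".encode()
    return hmac.new(ISSUER_KEY, msg, hashlib.sha256).digest()[:8]

def gate_accepts(card_id: str, counter: int, presented: bytes) -> bool:
    """Offline check at the turnstile; fare settlement happens later online."""
    expected = card_verification_value(card_id, counter)
    return hmac.compare_digest(expected, presented)

tap = card_verification_value("card-123", counter=7)
print(gate_accepts("card-123", 7, tap))   # True: genuine tap
print(gate_accepts("card-123", 8, tap))   # False: a replayed value fails
```

The tap counter is what keeps a sniffed value from being replayed; the real payment authorization, with its heavier cryptography and network latency, never blocks the turnstile.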

This patent represents a milestone in the long and winding road of the use of contactless technology in the United States. To set the table for you, consider that since 2001, Japan has successfully merged the original function of contactless, getting 60 people a minute through a train-station turnstile in Tokyo, with merchant payments. Sony, a founding member of NFC, deploys the second of the two NFC standards, ISO 18092, called FeliCa. In the United States, 14443 has been the dominant standard and, in the case of public transportation operators, the sole contactless standard since 2009. The American Public Transportation Assn. (APTA) controls which technologies and standards are used in member systems. Contactless transit cards using 14443 have been deployed in U.S. systems, but for the sole purpose of getting you through a turnstile.

Visa’s patent provides a step forward for the U.S. to catch up with Japan. The seamless transactional ecosystem the Japanese enjoy, with a single format serving multiple functions, is something we can hope to see here in the future.

2014
03.31

We’ve been talking about steps we can take to prevent distracted driving for far too long. It seems to me the talk isn’t changing driver behavior. Despite all the campaigns and good intentions, the numbers continue to reveal an epidemic in this country, and it’s not going to get better unless we take serious action to keep us all safe on the road.

Many of us are thinking it but don’t want to say it, so I will: Most of the vendors that claim to be committed to preventing distracted driving are really more concerned about making money. I know this is a harsh statement, but the numbers don’t lie, and we now have the data to prove it.

Here’s the harsh reality. According to the NHTSA (National Highway Traffic Safety Administration), 660,000 drivers in the U.S. are using cellphones or other devices at any given moment during daylight hours. In 2012, 3,328 people were killed on the road and 421,000 were injured in crashes involving a distracted driver. What’s more, a study by the AAA Foundation for Traffic Safety found that more than 95% of drivers oppose texting or emailing while driving. Yet more than two-thirds admit to talking on a cellphone while driving, more than a third openly admit to reading texts or emails behind the wheel, and of those, a fourth confess to sending a text or email while driving. To make matters worse, distracted driving is linked to more than 1 million accidents a year in North America alone, and those accidents result in serious injury, death, and an economic impact of almost $40 billion a year.

These are horrific numbers. And yet we haven’t been able to put a dent in this rising epidemic. While carriers and automakers acknowledge the seriousness of the problem, they continue to brag about all the social media (i.e., Facebook, Twitter feeds, photos) and a host of other items in the dashboard or cockpit of a vehicle that add to driver distraction. Even as these companies run campaigns encouraging drivers to keep their eyes on the road and hands on the wheel, they tout these feeds and photos in the dashboard as essential to enriching the driving experience. It seems to me they are talking out of both sides of their mouths. Smarter heads need to prevail and stop the madness. That is why I have proposed seven key action items to steer carriers and automakers away from “what’s cool” and toward what will keep our roads safe.

In honor of the fourth annual national Distracted Driving Awareness Month, I think it’s time we took greater steps to prevent distraction and save lives, no matter who is behind the wheel, day or night. That means forcing everyone to play a part in preventing distraction. We have the technology to eradicate accidents and prevent unnecessary deaths due to distracted driving.

With the types of cellphones, hands-free devices, and technologically advanced vehicles now available, we have the power to end distracted-driving incidents right now. To date, 42 states ban texting for all drivers, but that addresses the symptom, not the solution. The conversation needs to be about education and about steering toward real solutions. We’ve been saying we need to get motorists to stop texting, but it’s more important to explain why texting is a recipe for disaster. It’s essential to encourage everyone to keep their eyes on the road, and drivers with in-vehicle technology need to master their voice commands. Most people can barely navigate their own smartphones, let alone comfortably use in-car voice technology. If we collectively focused on taking advantage of what we already have available, we would see the number of accidents plummet.

It’s imperative we begin the education early. I am recommending a ban on the use of handheld cellphones while operating a vehicle; on the flip side, drivers should have to demonstrate proficiency with hands-free or voice navigation systems during licensure testing, much as they must pass a vision or road test. There’s no doubt hands-free calling systems vary widely from one car to the next.

Some of these variables include whether the car has a flat-panel display, call buttons on the steering wheel, and so on. Many of today’s hands-free systems simply allow phone pairing (with easy-to-hear phone conversations through the car’s speakers) and offer intuitive call buttons on the steering wheel (which let you answer and make calls without ever taking your hands off the wheel). Others let you import contacts and other information from your phone, displayed on the car’s flat-panel or touchscreen, so a driver can often place a call with a voice command, simply by saying something like “phone Tom.”

As for older cars, drivers would be required to demonstrate effective use of clip-on Bluetooth devices that work wirelessly with a cellphone after a one-time pairing procedure.

Until now, we have only put a Band-Aid on a major wound. We have conditioned ourselves to using cellphones anytime, anyplace. Instead of gradually trying to change the problem, let’s all come together to really educate, use technology, and even regulate to change our behavior once and for all.

Want to tweet about this article? Use hashtags #distracteddriving #distraction #carriers #automotive #M2M #data #connectedcars #invehicle #voicecommands #handsfree #automakers

2014
03.27

Awards to well-known corporate names—and a few new ones—for improved speech recognition, autonomous vehicle control, landing a space vehicle in the ocean, and improving Website analysis through behavioral portraits featured prominently these past two weeks.

Google received 103 awards, among them three for controlling autonomously driven vehicles and three for improvements in speech recognition. The former will help the company’s self-driving car initiative; the latter offer potential improvements to Google Glass.

The three patents associated with autonomously driven vehicles are 8,676,430 (“Controlling a Vehicle Having Inadequate Map Data”), 8,676,427 (“Controlling Autonomous Vehicle Using Audio Data”), and 8,676,431 (“User Interface for Displaying Object-Based Indications in an Autonomous Driving System”). The one I found most interesting addressed inadequate map data. We are all dependent upon GPS directions as we drive, whether from our smartphones or from dedicated devices such as those made by Garmin. These devices are in turn dependent upon updates that account for new roads, streets, directional changes, and the myriad other changes that occur daily in the U.S. Those updates depend upon the map-generating companies receiving the information from local sources as changes occur, which is problematic.

Human drivers compensate for unexpected routing disruptions using visual cues and logic. We do this quickly and even if we err, we can determine how best to overcome the issue independently of the GPS.

So imagine you are a passenger in an autonomously driven car, and there is an inaccuracy in the map data. What will happen? This is where the process described in the patent comes into play. The patent stipulates that the autonomous car has sensors that help it detect obstacles, road conditions, and other driving inputs. When a map-based error occurs, the control system employs the data from the sensors to determine the corrective action. The hierarchy is map data first, then sensor data, in controlling the course of the car.
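That fallback hierarchy can be sketched in a few lines: trust the map while the sensors agree with it, and hand control to sensor data when the map is missing or contradicted. The field names and decision rules below are purely illustrative, not taken from the patent.

```python
def plan_action(map_segment, sensor_view):
    """Toy sketch of a map-first, sensor-fallback control hierarchy.
    map_segment may be None when map data is unavailable."""
    # normal case: map data exists and matches what the sensors see,
    # so the map continues to drive the route
    if map_segment and map_segment["road_ahead"] == sensor_view["road_ahead"]:
        return f"follow map: {map_segment['heading']}"
    # map is missing or contradicted (e.g., an unmapped closure):
    # sensor data takes over until the map is trustworthy again
    if sensor_view["obstacle"]:
        return "slow and reroute using sensors"
    return "continue cautiously on sensor guidance"

plan_action({"road_ahead": "clear", "heading": "north"},
            {"road_ahead": "clear", "obstacle": False})
# -> "follow map: north"
```

The point of the hierarchy is graceful degradation: the car never stops just because the map is wrong; it demotes the map and drives on what it can actually perceive.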

Google’s three patents for improvements in speech recognition are 8,682,659 (“Geotagged Environmental Audio for Enhanced Speech Recognition Accuracy”), 8,682,661 (“Robust Speech Recognition”), and 8,682,663 (“Performing Speech Recognition Over a Network…”). All three have potential application to Google Glass, which incorporates speech commands to control applications. Improved speech recognition is very important to the commercial and industrial applications intended for Google Glass, because commercial and industrial workplaces tend to be noisy environments. As an example, voice-directed work applications in warehouses and distribution centers, which may include freezers, conveying systems, and other high-decibel machinery, depend upon ruggedized devices and advanced noise-canceling headsets to achieve consistent speech recognition. Anywhere there is heavy background noise, improving speech recognition is essential if wearable devices are to gain a foothold and displace the specialized devices presently in use.

Related to this is an award Google received for improving augmented reality, for which Google Glass is designed. Patent 8,681,178 (“Showing Uncertainty in an Augmented Reality Application”) provides a means to alert the user that a specific part of his view, say an area circled in red, carries a degree of uncertainty as to what the system thinks is there, such as which merchants are in a building.

In my most recent blog, MODEX 2014, I commented on Amazon’s efforts to recruit knowledge workers for its distribution center operations. Amazon received three patents for automating warehouse operations: 8,682,751 (“Product Dimension Learning Estimator”), 8,682,473 (“Sort Bin Assignment”), and 8,682,474 (“System and Method for Managing Reassignment of Units Among Shipments in a Materials Handling Facility”). While Amazon seeks to reduce the low-skill labor component in its distribution centers through the continuous introduction of robotics and sensor-enhanced machinery, it is aggressively recruiting the high-skill workforce that will be required to keep the automated warehouses up and running.

Here’s a company I’m sure you have never heard of before: 7 Billion People, Inc. (Austin, Texas). It was awarded Patent 8,682,741 (“Behavioral Portraits in Web Site Analysis”). The patent describes a method “for determining a website user behavioral portrait based on navigation on the Website and dynamically reconfiguring Web pages based on those portraits. In accordance with the method, data relating to the progress of a user through a Website is recorded, and an ongoing behavioral portrait of the user is built based on the data. The portrait is then used to dynamically reconfigure Web content.” This is intended to benefit eCommerce merchants that want to refine offerings to more closely match a user’s interests, based on that user’s behavior on the site. Here’s the thing: The company was started in 2006 and shows investor funding through 2012, but then news about the company dries up, and its Website is not functioning. Is this an example of an interesting patent with nowhere to go?
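The quoted method boils down to three steps: record navigation events, accumulate them into an ongoing portrait, and use the portrait to reconfigure page content. Here is a minimal sketch of that loop; the category names and reconfiguration rule are invented for illustration and are not from the patent.

```python
from collections import Counter

class BehavioralPortrait:
    """Ongoing portrait built from a user's progress through a site.
    Illustrative sketch of the patented method's general shape."""

    def __init__(self):
        self.views = Counter()

    def record(self, page_category):
        # each page view updates the ongoing portrait
        self.views[page_category] += 1

    def dominant_interest(self):
        # the portrait's strongest signal, or None for a new visitor
        return self.views.most_common(1)[0][0] if self.views else None

def reconfigure(portrait):
    # dynamic reconfiguration: promote content matching the portrait
    interest = portrait.dominant_interest()
    return {"hero_banner": interest or "generic", "recommended": interest}

p = BehavioralPortrait()
for page in ["electronics", "electronics", "books"]:
    p.record(page)
reconfigure(p)
# -> {"hero_banner": "electronics", "recommended": "electronics"}
```

A real deployment would of course use far richer signals (dwell time, click paths, purchase history) than raw page-view counts, but the record-portrait-reconfigure loop is the core of the claim.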

Finally, Blue Origin, LLC (take a guess at what “blue origin” refers to) was awarded Patent 8,678,321 (“Sea Landing of Space Launch Vehicles”). The patent covers a more cost-effective way to land a spacecraft in the water in a manner that allows for its reuse. Civilian space initiatives are shaping up and have a future, and the creative thinking going into reducing the costs of launching, recovering, and reusing spacecraft is to be applauded. Here’s what continues to puzzle me: Why do Americans land spacecraft in the ocean, while the Russians bring them back to a terrestrial landing?