MBA 733, Management Decision-making

For our final assignment in MBA 733, the class was asked to write essays about business decisions they regretted. That was a tough one for me. I think it's too soon to say any one of my business decisions was right or wrong, even now. That is one reason, my pastor says, that we must wait until the end of time to have our works judged. We must see how all our decisions played out. Even though it is too soon to judge, there is one decision I think may top the list.

Turn-by-turn navigation devices like a Garmin or OnStar only provide the user interface. They must buy the base map from a provider. The only base map on the market carrying an International Organization for Standardization [ISO] 9000 certification, which certified products for use in motor vehicles, was the one put out by the company I worked for.

During the company's early years, our competitors were the likes of TeleAtlas and TomTom, strictly small-timers. TeleAtlas was a Dutch company with pretty good maps in the EU. They were trying to build their US coverage with TIGER data. TIGER data, put out by the US Census Bureau for free, was worth about as much as one paid for it. We called it crap map. In TIGER data, it wasn't unusual to find someone had written obscene messages into the road network in remote places like the Rocky Mountains. Which is to say, we didn't have much competition where quality was concerned for a long time.

We enjoyed a market share of over 85% across Europe and the US. As the only map in the industry with an ISO 9000 certification, it was the only map legal to be included as standard equipment in motor vehicles. If an automaker wanted to have in-dash navigation capabilities in their luxury line, they had to buy from us.

I started working for them in the summer of 2006, while they were still ascendant, as a Geographic Technician [GeoTech]. Not long after that, TeleAtlas publicly announced they had 3 million miles of road network in the US, complete with names and addresses. It shook my new employer to the core. We had more miles of road network, closer to 4 million in total at the time, but fewer of them were named and addressed. We were suddenly behind, and there was no quick fix to the problem.

At the time, most digital mapping was done in our production facility here in Fargo. But to meet the challenge of TeleAtlas, we would need a lot more production capacity. That was when we turned to map production houses overseas, first in India and later in León. I spent most of my time doing quality assurance of GeoTechs overseas as they frantically went to work adding address ranges to rural roads across the American countryside. This was called the great Rural Addressing project, and it ushered in the age of the outsourced production house.

I quickly rose to Senior GeoTech, and by early 2008, Project Lead. My undergraduate degree in Architecture uniquely qualified me for managing my first project, 3D Landmarks. These were computer-generated 3D representations of landmarks, meant to sit on top of their locations within the road network in the end-user display. I was responsible for quality control. We answered the critical question: did the model look like the building? That was pretty much the only question we could answer.

We were the first turn-by-turn navigation map to offer 3D landmarks incorporated into the visual display of the road network. For the initial release, we delivered over 3000 landmarks across several countries. In Europe, this meant cathedrals and bridges. In the US, it meant sports stadiums. With that delivery, we won the market, a position we held for several years.

Later, in 2014, the company was in another tight spot. (I was always assigned projects that were putting the company in tight spots, it seems.) Historically, road information was gathered by a couple hundred Collection Vehicles distributed across the globe. A Collection Vehicle was a standard sedan tricked out with a fisheye camera on top, a satellite connection taking a GPS trace of its every move, and a trunk full of computer memory. These vehicles collected about two million road miles of information per year, which was already well over Production's capacity to transcribe into the map.

Soon, all of that data collection equipment would become standard, in miniaturized form, in every car sold to private customers by the three or four major German carmakers. In a few years, just about every new vehicle on the road would be a data collection vehicle, providing a volume of source material orders of magnitude over what had been collected conventionally.

The company I worked for had to drastically increase its capacity to transcribe the source material in order to handle this increased supply, if it wanted to stay on top of its quality game. Unfortunately, it didn't have the money to simply increase the size of its production facilities. It needed to figure out how to enable the teams to transcribe faster. Much faster. The current global average was one kilometer an hour. In the coming years, the teams would need to be processing at ten, or even a hundred, kilometers an hour at the least.
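
To put those figures in perspective, here is a rough back-of-the-envelope sketch in Python. It uses only the approximate numbers mentioned above; the productive hours per operator and the hundredfold increase in incoming data are illustrative assumptions, not company figures.

    # Rough throughput sketch. The collected mileage and the 1 km/h coding speed
    # come from the figures above; hours per operator and the 100x data multiplier
    # are assumptions for illustration only.
    MILES_TO_KM = 1.609344

    collected_miles_per_year = 2_000_000   # road miles gathered by the Collection Vehicle fleet
    coding_speed_kmh = 1.0                 # global average transcription speed at the time
    hours_per_operator_year = 1_800        # assumed productive hours per GeoTech per year

    collected_km = collected_miles_per_year * MILES_TO_KM
    operators_needed_now = collected_km / (coding_speed_kmh * hours_per_operator_year)
    print(f"Operators needed just to keep pace today: ~{operators_needed_now:,.0f}")

    # If consumer vehicles multiply incoming data by, say, 100x, coding speed is
    # the only lever left.
    for speedup_kmh in (10, 100):
        operators_needed = (collected_km * 100) / (speedup_kmh * hours_per_operator_year)
        print(f"At {speedup_kmh} km/h: ~{operators_needed:,.0f} operators for 100x the data")

Even with generous assumptions, the arithmetic makes the point: without at least a tenfold jump in coding speed, no amount of hiring in the production houses could keep up.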

In late 2014, the newly reorganized R&D group based in Chicago had conceived of, and promised to build, a dozen lightweight web applications in the new year that would be easy to learn, simple to use, and quick to operate. Once launched to the High-Volume Production Centers [HVPC], these web apps would revolutionize how we did map maintenance, increasing coding speeds to six, eight, and eventually ten kilometers an hour, maybe even more. The most important attributes, like speed limits and lane category, would get their own apps. Other apps would cover other map features: one app for geometry, one for Points of Interest [POI], and one for all kinds of directional signs. The other six apps would be more clearly defined in time.

I was assigned as Project Coordinator over the launch team. My role was to manage the launch team's productionization of the new tools. Over the course of 2015, we would provide requirements to R&D, test functionality of the new apps, and oversee the deployment of these apps into our HVPCs in Mumbai and León. We were to make sure there was a production process, that the web apps produced a quality product, and that they were indeed faster than the legacy process.

Like Elon Musk, R&D over-promised and under-delivered. They missed the 2015 first- and second-quarter delivery dates for the first six apps, pushing the launch back to the end of December, a full six months late. We weren't to see functional versions of the first couple of web apps until the opening of the third quarter, leaving only six months for all our launch activities. Unfortunately, launch activities could not be compressed or worked in tandem, only worked in sequence. We would have to work fast, and all of the testing would need to go smoothly to meet the year-end deadline.

The testing did not go well. Despite almost daily technical support from R&D, we spent nearly all of Q3 simply getting the only two web apps to work in the test environment. Testing teams, launch coordinators, and software developers all worked round the clock getting the network up and running, the firewalls down, the permissions granted, and – most importantly – the tools working. We were barely able to produce a test product before the end of the year, much less put a viable tool into live production.

Nevertheless, at the start of Q4, only a month before the targeted full-scale launch, the test results were finally coming in. It was a disaster. Quality was below American Society for Quality [ASQ] standards, productivity was no better than the legacy process, and nothing functioned long without having to call somebody to fix something. I informed R&D that we could not proceed with the launch as planned. They didn't like it one bit. They were sure the lousy test results were due to the testing environment and not the web applications.

Nothing I said made it any better. I explained the nature of the ISO 9000 Quality Certification; I explained how the conditions of the certification required that we engage in continuous improvement of both the product and the production process. Unfortunately, I said, the test results showed that use of these web applications provided no improvement. Worse, the apps were introducing errors into the map that were not immediately apparent to the operator. I explained that our ASQ certification meant that we had to prove the quality of our product was above 98%. Sadly, the test scores for the web apps were well below that number. Finally, I explained that adding up only the time the tools actually worked still gave us a coding speed of less than one kilometer an hour. (It was much worse if we added all the downtime when the apps wouldn't work.)

This wasn’t enough to appease the development arm of the company. I went on to explain the character of our customers. They were very demanding. We charged a premium for our map and the car companies went to great lengths to ensure they were getting their money’s worth. They would meticulously compare the most recent release with the previous release, looking for what kind of updating we’d done, but more importantly, they were checking to see if any of the road network had been degraded from one release to the next. Discovery of degradation would result in the customer’s demand for a correction and a very costly reship. According to everything my launch team could tell me, I said, we couldn’t guarantee the web apps wouldn’t degrade the map, degradation that would be difficult to detect and costly to correct.

Development wasn’t satisfied. They took the position that the web apps were designed for the live environment, would function fine in real operation, and that we didn’t need testing anyway. Success in the core map would be evidence of continuous improvement. Nobody could argue with success. Additionally, they said, vast improvements in speed wouldn’t be immediately realized, especially with only two apps running anyway. Development wanted to launch the web apps into the live production environment immediately, but they weren’t responsible for launch. That was my team.

As Launch Coordinator, I was a major contributor to the launch decision. I considered the main question: should we put the tools into production in their current state, risking map degradation and a reship order? Or should we return the tools to R&D with a list of repairs and improvements, and try to launch again in the new year?

My experience was that launching too early usually resulted in three very bad things: unhappy customers, thousands of hours of rework, and an occasional reshipment of the product that could be very costly. Back in the 3D Landmark project, for example, we shipped over three thousand landmark models before we tested how they would fit into the map, or how they would display on a customer’s screen. At that time, we didn’t really have the technology to view the models together with the map in an end user’s device; open questions about scale and orientation couldn’t be answered. As the Project Manager, I recommended we meet the delivery date anyway.

The first customer to try to use the product found that every single model was wrong. It was actually a statistical anomaly. Many were hundreds of times too large; some were so small that they didn't even show up on the screen. All of them were out of alignment with their footprints in the map. Within a few weeks, the customer called for an immediate reship. In an unusual move, the customer also provided the viewer they used to evaluate the models. The software could also edit both scale and orientation, which suddenly made a fix feasible. We went to work with this editor right away, fixing models with glorious abandon.

Nevertheless, the rework was a massive undertaking. We had to set aside the next delivery of models to manage the whole thing. We spent thousands of hours fixing scale and orientation on all 3,000 models. The effort required inflating the team with dozens of GeoTechs borrowed from other projects. All of their other coding was put on hold to give us time and people to rework the landmark models. We eventually fixed everything, which allowed us to be the first to market with the product. Everyone was happy, and I received a bonus. My interest when considering the FPM launch was to avoid that kind of fiasco again.

However, I had learned the wrong lesson working on 3D Landmarks. I learned that poor quality was bad, that rework was humiliating, and that a reship was almost incalculably expensive. Despite these things, however, we won the 3D Landmark race. Winning was everything. I didn't get it at the time.

When the FPM test results came in, which was around Thanksgiving of 2015, I reported to my managers that the web apps had failed the software testing and would need fixing before they could be launched. Since pretty much the whole North American arm of the company took the last two weeks of December off for Christmas, R&D had less than two weeks to fix anything on the apps before everyone left the office for the holiday. The shortcomings of the web apps were numerous, and everyone knew they didn't have enough time to fix even the critical bugs before the holidays began. The December 2015 launch would be cancelled. Another launch would have to be planned later in the new year, likely not before the end of March.

In a separate report, R&D contested my recommendation. Their position was that the very fact the tools functioned long enough to achieve these results proved they would work. It was at this point they introduced the concept of the Minimum Viable Product [MVP]. An MVP was a product with the very least amount of functionality that could still be said to function. R&D held that their web apps were MVPs (the other meaning of that acronym wasn't lost on anybody) that needed to be launched into the live production environment in order to gain enough information to begin improving them, with a goal of increased functionality some time later in 2016. Only in this way could the 2015 launch date, already pushed back by six months, be met.

The managers accepted my recommendation and the December 2015 launch was cancelled; the tools were sent back to R&D for further development. It was three more months before they were able to resolve functionality issues, and another three months before the tools would finally be launched.

R&D was not happy with the delay. Over the course of the next year, 2016, they used their considerable clout in the organization to change how launch decisions were made. The developers worked to bring launch authority into their own department, saying that a six-month delay was unacceptable and wouldn’t have happened with them holding the reins.

Production hailed the decision as a success: quality was maintained, the risk of a reship was completely avoided, and the managers were able to achieve their year-end goals. Unfortunately, the sense of accomplishment did not outlive the disappointment of a failed launch. Market forces were in play such that a delay of half a year in major improvements might as well have been forever.

By the end of 2016, R&D had successfully moved launch authority into their own department. We were required to launch a new crop of web apps that didn't pass software testing either. Business as Usual was severely inhibited and production speeds dropped. By mid-2017, upper management decided to close the launch coordination facility here in Fargo completely. Everyone here who was part of the launch failure of December 2015 was out of a job within another nine months.

Looking back on the decision, I can see how it was the wrong one to make. I overvalued map quality. Specifically, I put the integrity of the map ahead of speed-to-market. I also didn’t realize that successfully avoiding risk sometimes somehow made the risk look … less risky. We avoided the potential pain of a reship, but, since nobody had to write the check, nobody really cared. It was nothing compared to losing the map update race.

Our competitors, with a head start on their own high-speed web applications, were closing the quality gap. Their maps were publicly derided for their poor quality. In fact, a couple of years earlier, some poor driver in France was directed by her map to drive down a staircase in downtown Paris. It made the national news cycle. Around the same time, another map directed a driver in Seattle onto an unsecured airport runway. These were very public, very embarrassing failures, but the rivals were learning. The failures galvanized both companies to raise the quality of their maps. They never tried for Automotive Grade and would never even think of trying to enter the in-vehicle navigation market. They didn’t need to. The map was now in the phone, and the phone was everywhere, even the car.

Ironically, around the same time I was making the FPM launch recommendation based on quality, our competitors were narrowing the quality gap. Soon, the company I worked for would no longer be able to compete on quality. My launch recommendation was based on what would eventually come to be known as a Burning Platform: something that supports the whole operation but, due to market forces, is no longer relevant. The high-quality, Automotive Grade map was our burning platform.

More precisely, I made my launch decision based on the three classical critical success factors: cost, quality, and speed. I should have traded those success factors for the more important one: first-to-market. First-to-market was the success factor that made 3D Landmarks, one of my first projects, a success despite the quality failures. Sure, the rework cost a lot of money, but that money came out of a different fiscal quarter. The bean counters could still claim the project came in under budget while the company won the competitive marketspace and paid the cost-of-quality out of another bucket.

If I had come out in full support of R&D that cold winter of 2015, I wouldn't have had the more influential arm of the company trying to undermine our authority over the course of the next twelve months. I could have helped reposition the launch team as the center of technological development, seeing as quality was no longer exclusively ours to claim.

The company continued to struggle without a competitive edge. They also continued to lose managers and developers to production houses in Hyderabad and Indonesia. I am no longer employed as a Project Manager. I’m now at NDSU, working on my MBA. History will tell if I have chosen rightly.

Raymond Scot Sorrells
